Joint source-channel coding

Implicit in Shannon's source-channel coding theorem is the fact that reliable transmission of a source through a channel can be accomplished by using separate source and channel codes. This means that a concatenation of a (channel-independent) source code followed by a (source-independent) channel code achieves vanishing error probability as the block length goes to infinity, as long as the source entropy is smaller than the channel capacity.

However, in the non-asymptotic regime an optimally designed joint source-channel code can perform strictly better than a separate code. This improvement (i.e., a reduction in error probability) has been quantified in terms of error exponents and in terms of source and channel dispersion. The error exponent of the joint design is at most twice that of the separate design, and the dispersion gain depends on the target error probability: for vanishing values of the latter, the dispersion of the joint design is at best half that of the separate design. This potential gain justifies the interest in good finite-length joint source-channel codes.
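For concreteness, the second-order (dispersion) approximation commonly cited in the finite-blocklength literature for transmitting k source symbols over n channel uses can be sketched, under standard regularity assumptions, as:

```latex
% second-order (dispersion) approximation for joint source-channel coding
nC - kH(S) \approx \sqrt{\,nV + kV_S\,}\; Q^{-1}(\epsilon)
```

where C and V denote the channel capacity and dispersion, H(S) and V_S the source entropy and varentropy, ε the target error probability, and Q^{-1} the inverse Gaussian tail function.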

Non-asymptotic performance bounds

The reliable transmission of messages in the finite block-length regime can be accurately characterized via upper and lower bounds on the average error probability of an optimal code. Random-coding techniques are commonly used to obtain upper bounds on the average error probability, since they demonstrate the existence of good codes. Conversely, computing lower bounds that hold for all codes is generally difficult, since it requires an optimization over all possible codebooks and decoding rules.

The article [1] presents tight achievability bounds for the average error probability in joint source-channel coding. The bounds are derived using random-coding techniques for maximum a posteriori and threshold decoding. These bounds improve upon previous results and can be used to establish the direct part of the joint source-channel coding theorem.
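As a toy illustration of random coding with MAP decoding (a minimal sketch, not the bounding technique of [1]), the following estimates by Monte Carlo the error probability of one randomly drawn joint code for a biased binary source sent over a binary symmetric channel; all parameter values are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 8, 12            # source bits per message, channel uses (arbitrary)
p, delta = 0.11, 0.05   # source bit bias, BSC crossover probability (arbitrary)

M = 2 ** k
# all 2^k source messages as rows of bits, and their (log-)probabilities
msgs = (np.arange(M)[:, None] >> np.arange(k)) & 1
log_prior = (msgs * np.log(p) + (1 - msgs) * np.log(1 - p)).sum(axis=1)
prior = np.exp(log_prior)
prior /= prior.sum()

# a single randomly drawn joint code: one codeword per source message
codebook = rng.integers(0, 2, size=(M, n))

def map_decode(y):
    """MAP decoding: maximize log P_S(s) + log W(y | x_s) over all messages."""
    dist = (codebook != y).sum(axis=1)  # Hamming distance to each codeword
    log_lik = dist * np.log(delta) + (n - dist) * np.log(1 - delta)
    return int(np.argmax(log_prior + log_lik))

trials, errors = 5000, 0
for _ in range(trials):
    s = rng.choice(M, p=prior)                       # draw a source message
    y = (codebook[s] + (rng.random(n) < delta)) % 2  # send it through the BSC
    errors += map_decode(y) != s

p_err = errors / trials
print(f"estimated error probability: {p_err:.3f}")
```

Averaging such estimates over many independently drawn codebooks would estimate the random-coding average itself, which is what achievability bounds of this kind control.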

Analogously, the research in [2] establishes lower bounds on the average error probability in joint source-channel coding. In particular, this work defines a sequence of independent binary hypothesis tests, one for each source message, and then applies the Neyman-Pearson lemma to lower-bound the error probability of each test. While this method initially requires a computationally intensive optimization over all possible codebooks and decoding rules, it is shown to recover several previous results, such as information-spectrum bounds and Csiszár's sphere-packing exponent for joint source-channel coding. These results are also related to [3], where we prove that an instance of these hypothesis-testing bounds coincides with the actual error probability of the best joint source-channel code.
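The computational core of such converses is the Neyman-Pearson function β_α(P, Q): the smallest type-II error over all tests whose probability of success under P is at least α. For i.i.d. Bernoulli distributions it can be evaluated exactly, since the optimal likelihood-ratio test depends only on the Hamming weight; a small self-contained sketch (parameter values are arbitrary):

```python
from math import comb, log

def np_beta(n, p, q, alpha):
    """Smallest Q^n-probability (type-II error) over all tests that accept
    i.i.d. Bernoulli(p)^n samples with probability at least alpha.
    By the Neyman-Pearson lemma the optimal test thresholds the likelihood
    ratio, which for i.i.d. binary data depends only on the Hamming weight."""
    wP = [comb(n, w) * p**w * (1 - p)**(n - w) for w in range(n + 1)]
    wQ = [comb(n, w) * q**w * (1 - q)**(n - w) for w in range(n + 1)]
    llr = lambda w: w * log(p / q) + (n - w) * log((1 - p) / (1 - q))
    accP = accQ = 0.0
    # accept weights in order of decreasing likelihood ratio until alpha is met
    for w in sorted(range(n + 1), key=llr, reverse=True):
        if accP + wP[w] >= alpha:
            frac = (alpha - accP) / wP[w]   # randomize on the boundary weight
            return accQ + frac * wQ[w]
        accP += wP[w]
        accQ += wQ[w]
    return accQ

# example: P = Bernoulli(0.11), Q = Bernoulli(0.5); beta decays exponentially in n
beta50 = np_beta(50, 0.11, 0.5, 0.99)
beta100 = np_beta(100, 0.11, 0.5, 0.99)
print(beta50, beta100)
```

The exponential decay of β with the block length is what connects these quantities to sphere-packing-type exponents.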

Source-channel error exponent

The error exponent of joint source-channel coding can be significantly higher than that of a concatenation of source and channel codes. Csiszár introduced a construction in which codewords are randomly selected from a set of sequences whose composition depends on the source message, and demonstrated that the resulting exponent coincides with the sphere-packing upper bound within a certain rate region.
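For transmission of one source symbol per channel use, Csiszár's bounds on the joint source-channel exponent E_J are usually stated as follows (a sketch; see [4] for the precise conditions):

```latex
\min_{R}\,\bigl[e(R) + E_{\mathrm{r}}(R)\bigr] \;\le\; E_J \;\le\;
\min_{R}\,\bigl[e(R) + E_{\mathrm{sp}}(R)\bigr],
\qquad
e(R) = \min_{Q:\,H(Q)\ge R} D(Q\,\|\,P_S),
```

where e(R) is the source reliability function and E_r, E_sp are the random-coding and sphere-packing channel exponents; the two bounds coincide when the minimizing rate lies above the critical rate of the channel.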

In [4], we investigate an alternative random-coding construction that attains this error exponent. In particular, we define a code ensemble where codewords associated with different source messages are generated according to distinct product distributions. We establish a new random-coding bound for this ensemble and demonstrate that its exponent attains the sphere-packing exponent in the cases where the latter is known to be tight. Remarkably, we show that it suffices to consider either one or two different distributions within the optimal ensemble. Compared to previous results, the proof techniques employed in [4] are elementary and can be easily generalized to continuous channels. More details can be found in [4], [5] and [6].
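The exponent in question can be evaluated numerically for simple models. The sketch below computes the random-coding lower bound min_R [e(R) + E_r(R)] for a binary memoryless source over a binary symmetric channel, using Gallager's E_0 function with a uniform input distribution (a single product distribution, so it illustrates the quantity being optimized rather than the construction of [4]); parameter values are arbitrary:

```python
import numpy as np

delta, q = 0.02, 0.1   # BSC crossover, source bit probability (arbitrary values)

def h(x):
    """Binary entropy in nats."""
    return (-x * np.log(x) - (1 - x) * np.log(1 - x)) if 0 < x < 1 else 0.0

def d(a, b):
    """Binary KL divergence D(a || b) in nats."""
    return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))

def E0(rho):
    """Gallager's E0 for a BSC(delta) with the uniform input distribution."""
    s = 0.5 * (delta ** (1 / (1 + rho)) + (1 - delta) ** (1 / (1 + rho)))
    return -np.log(2 * s ** (1 + rho))

def Er(R):
    """Random-coding channel exponent: max over rho in [0, 1] of E0 - rho*R."""
    return max(E0(rho) - rho * R for rho in np.linspace(0, 1, 201))

def e_src(R):
    """Source reliability function: min D(Q || P_S) over H(Q) >= R (binary)."""
    if R <= h(q):
        return 0.0
    lo, hi = q, 0.5  # h is increasing on [q, 1/2]; bisect for h(alpha) = R
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h(mid) < R else (lo, mid)
    return d((lo + hi) / 2, q)

Rs = np.linspace(h(q), np.log(2), 400)
EJ = min(e_src(R) + Er(R) for R in Rs)
print(f"random-coding lower bound on the JSCC exponent: {EJ:.4f} nats")
```

Since the source entropy h(0.1) is below the capacity of the BSC(0.02), the resulting exponent is strictly positive.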

Multi-class source-channel coding

Previous research has explored a variety of joint source-channel coding schemes. Some adapt existing channel-coding techniques to exploit source information at the decoder side, for example by modifying the Viterbi decoding algorithm, or via punctured turbo codes and source and channel LDPC codes. Other approaches exploit the source statistics at both the encoder and the decoder, such as matching source bits to a non-systematic LDPC code, trellis-structured descriptions of Huffman codes, and arithmetic and Lempel-Ziv source coding.

In [7], building on the results in [4], we propose a novel almost-lossless source-channel coding scheme that partitions the source messages into classes and encodes each class with a class-dependent code. Since maximum a posteriori (MAP) decoding is difficult to implement in practical systems, we no longer rely on it. Instead, [7] proposes an unequal error protection (UEP) scheme in which more probable messages receive increased protection against channel errors through the use of low-rate channel codes, while less probable messages are assigned to classes with less protection. Although, for a fixed number of classes, this construction does not attain the best performance of joint source-channel coding, it can be implemented with reduced complexity using existing source and channel codes. The scheme is shown to improve on the error exponent of separate coding and, as the number of classes increases, to approach the error exponent of joint source-channel coding. More details can be found in [7] and [8].
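A minimal sketch of the partitioning idea, not the scheme of [7]: for an i.i.d. biased binary source, messages can be grouped into classes by Hamming weight, so that the most probable class (low-weight messages) is small and would receive the lowest-rate, best-protected channel code. All parameter values and thresholds are hypothetical:

```python
from math import comb, log2

k, p = 64, 0.04   # source bits per block, bit probability (arbitrary values)

# probability of each Hamming weight for an i.i.d. Bernoulli(p) message
pw = [comb(k, w) * p**w * (1 - p)**(k - w) for w in range(k + 1)]

# partition messages into classes by weight (hypothetical thresholds);
# low-weight messages are the most probable ones
thresholds = [2, 6, k]
classes, lo = [], 0
for t in thresholds:
    prob = sum(pw[lo:t + 1])
    size = sum(comb(k, w) for w in range(lo, t + 1))
    classes.append((prob, size))
    lo = t + 1

# each class carries about log2(size) information bits (plus a class index);
# the most probable class would get the lowest-rate, best-protected code
for j, (prob, size) in enumerate(classes):
    print(f"class {j}: prob={prob:.3f}, messages={size}, info bits={log2(size):.1f}")
```

The trade-off is visible in the output: a large fraction of the probability mass sits in a class with few messages, which therefore needs few information bits and can afford heavy channel protection.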

References

[1] Adria Tauste, Gonzalo Vazquez-Vilar, Albert Guillén i Fàbregas, and Alfonso Martinez, “Random-coding joint source-channel bounds,” 2011 IEEE International Symposium on Information Theory (ISIT 2011), Saint Petersburg, Russia, Jul. 31 – Aug. 5, 2011.

[2] Adria Tauste, Gonzalo Vazquez-Vilar, Albert Guillén i Fàbregas, Tobias Koch, and Alfonso Martinez, “Converse bounds for finite-length joint source-channel coding,” 50th Annual Allerton Conference on Communication, Control and Computing (Allerton 2012), Allerton, IL, USA, Oct. 1–5, 2012.

[3] Gonzalo Vazquez-Vilar, Adria Tauste, Albert Guillén i Fàbregas, and Alfonso Martinez, “The meta-converse bound is tight,” 2013 IEEE International Symposium on Information Theory (ISIT 2013), Istanbul, Turkey, Jul. 7–12, 2013.

[4] Adria Tauste, Gonzalo Vazquez-Vilar, Albert Guillén i Fàbregas, Tobias Koch, and Alfonso Martinez, “A derivation of the source-channel error exponent using non-identical product distributions,” IEEE Transactions on Information Theory, vol. 60, no. 6, pp. 3209–3217, Jun. 2014.

[5] Adria Tauste, Gonzalo Vazquez-Vilar, Albert Guillén i Fàbregas, Tobias Koch, and Alfonso Martinez, “Achieving Csiszár’s exponent for joint source-channel coding with product distributions,” 2012 IEEE International Symposium on Information Theory (ISIT 2012), Boston, USA, Jul. 1–6, 2012.

[6] Adria Tauste, Gonzalo Vazquez-Vilar, Albert Guillén i Fàbregas, Tobias Koch, and Alfonso Martinez, “Random coding bounds that attain the joint source-channel exponent,” 46th Annual Conference on Information Sciences and Systems (CISS 2012), Princeton, USA, Mar. 21–23, 2012.

[7] Irina E. Bocharova, Albert Guillén i Fàbregas, Boris D. Kudryashov, Alfonso Martinez, Adria Tauste Campo, and Gonzalo Vazquez-Vilar, “Multi-class source-channel coding,” IEEE Transactions on Information Theory, vol. 62, no. 9, pp. 5093–5104, Sep. 2016.

[8] Irina E. Bocharova, Albert Guillén i Fàbregas, Boris D. Kudryashov, Alfonso Martinez, Adria Tauste Campo, and Gonzalo Vazquez-Vilar, “Source-channel coding with multiple classes,” 2014 IEEE International Symposium on Information Theory (ISIT 2014), Honolulu, HI, USA, Jun. 29 – Jul. 4, 2014.