A valid codeword r should satisfy the parity check conditions given by r·Hᵀ = 0. Hence, each column of Hᵀ (each row of H) defines a check. There are a total of M checks.
The bits of r and the checks form a Bayesian network. Each bit of r participates in t checks, and hence every bit is the parent of t checks in the Bayesian network. Since each check verifies tr bits, each check is the child of tr bits in the Bayesian network.
The decoding is done by the method of Probability Propagation, also known as the Sum-Product Algorithm; [3] gives an excellent tutorial on this topic.
More notation (this notation is the same as the notation used by the authors in [2], [3]):

N(m) ≡ {n : H_mn = 1} --- the set of bits participating in check m, i.e., the columns which have 1's in row m of H belong to this set. For each m, there are tr elements in this set.

M(n) ≡ {m : H_mn = 1} --- the set of checks that bit n of r participates in, i.e., the rows which have 1's in column n of H belong to this set. For each n, there are t elements in this set.

N(m)\n --- the set of (tr−1) bits participating in check m apart from bit n.

q^x_mn --- the probability that bit n of r has the value x, given the information obtained via the t−1 checks other than check m.

r^x_mn --- the probability of check m being satisfied if bit n of r is considered fixed at the value x ∈ {0, 1} and the other tr−1 bits have a separable distribution given by {q^x_mn' : n' ∈ N(m)\n}.
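The index sets N(m) and M(n) can be read directly off the parity check matrix. The sketch below is in Python rather than the report's MATLAB, and builds both sets for a small hypothetical H with row weight tr = 3 and column weight t = 2:

```python
# Hypothetical 4x6 parity check matrix (not from the report's code),
# with row weight tr = 3 and column weight t = 2.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

# N(m): bits participating in check m (columns with 1's in row m of H)
N = {m: [n for n in range(len(H[0])) if H[m][n] == 1] for m in range(len(H))}

# M(n): checks that bit n participates in (rows with 1's in column n of H)
M = {n: [m for m in range(len(H)) if H[m][n] == 1] for n in range(len(H[0]))}

print(N[0])  # bits in check 0: [0, 1, 3]
print(M[1])  # checks containing bit 1: [0, 1]
```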
DECODING ALGORITHM
Step 1. Initialization
Initialize q^0_mn and q^1_mn to the likelihoods of x_n, f_n^0 and f_n^1. For the AWGN channel,

f_n^1 = 1 / (1 + exp(−2·a·y_n/σ²))   and   f_n^0 = 1 − f_n^1,

where the input to the Gaussian channel is ±a, σ² = N0/2 is the variance of the additive noise, and y_n is the soft output of the Gaussian channel.
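The initialization step can be sketched as follows (a Python illustration of the likelihood formula above; the function and variable names are hypothetical):

```python
import math

def awgn_likelihoods(y_n, a, N0):
    """Channel likelihoods f_n^1 and f_n^0 for BPSK (+a for 1, -a for 0)
    over an AWGN channel with noise variance sigma^2 = N0/2."""
    sigma2 = N0 / 2.0
    f1 = 1.0 / (1.0 + math.exp(-2.0 * a * y_n / sigma2))
    return f1, 1.0 - f1

# Hypothetical soft output y_n = 0.8 with a = 1, N0 = 1:
f1, f0 = awgn_likelihoods(0.8, 1.0, 1.0)
# q^0_mn and q^1_mn are initialized to f0 and f1 for every m in M(n).
```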
Step 2. Horizontal Step
For each row/check m and every n ∈ N(m), the two probabilities r^0_mn and r^1_mn are computed.
For example, say we are looking at a check m where the 2nd, 3rd and 5th columns are 1 (here tr = 3), i.e., the check is r2 ⊕ r3 ⊕ r5 = 0. Suppose we want to calculate r^0_m2:

r^0_m2 = Prob{r2 ⊕ r3 ⊕ r5 = 0 | r2 = 0, r3 = 0, r5 = 0} × q^0_m3 × q^0_m5
       + Prob{r2 ⊕ r3 ⊕ r5 = 0 | r2 = 0, r3 = 0, r5 = 1} × q^0_m3 × q^1_m5
       + Prob{r2 ⊕ r3 ⊕ r5 = 0 | r2 = 0, r3 = 1, r5 = 0} × q^1_m3 × q^0_m5
       + Prob{r2 ⊕ r3 ⊕ r5 = 0 | r2 = 0, r3 = 1, r5 = 1} × q^1_m3 × q^1_m5

Similarly, r^1_m2 can be calculated (conditioning on r2 = 1 instead).
Therefore, generalizing the above equations, we have

r^x_mn = Σ over all assignments {x_n' : n' ∈ N(m)\n} of
         Prob{check m is satisfied | x_n = x, {x_n'}} × ∏ {q^(x_n')_mn' : n' ∈ N(m)\n}

The conditional probabilities in the above equation are 0 or 1, depending on whether check m is satisfied by the corresponding values of x_n and x_n'.
Using the above equation to calculate r^0_mn and r^1_mn will require the enumeration of all possible combinations of x_n', which may be computationally intensive. The authors in [2] suggest a couple of different methods to compute the above probabilities, and one of the efficient schemes they suggest is as follows:

compute dq_mn = q^0_mn − q^1_mn,

then compute dr_mn = r^0_mn − r^1_mn = ∏ {dq_mn' : n' ∈ N(m)\n},

and then we get r^0_mn = (1 + dr_mn)/2 and r^1_mn = (1 − dr_mn)/2.
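The efficient scheme above can be sketched in Python (hypothetical names; q0 and q1 hold the incoming messages q^0_mn' and q^1_mn' for the bits n' in N(m)):

```python
def horizontal_step(q0, q1, Nm):
    """Compute r^0_mn and r^1_mn for every bit n in the check's set Nm
    using dr_mn = product of dq_mn' over n' in Nm apart from n."""
    r0, r1 = {}, {}
    for n in Nm:
        dr = 1.0
        for n2 in Nm:
            if n2 != n:
                dr *= q0[n2] - q1[n2]   # dq_mn' = q^0_mn' - q^1_mn'
        r0[n] = (1.0 + dr) / 2.0
        r1[n] = (1.0 - dr) / 2.0
    return r0, r1

# Check on bits {2, 3, 5} (tr = 3) with hypothetical incoming messages:
q0 = {2: 0.9, 3: 0.8, 5: 0.6}
q1 = {n: 1.0 - q0[n] for n in q0}
r0, r1 = horizontal_step(q0, q1, [2, 3, 5])
# r0[2] = (1 + 0.6*0.2)/2 = 0.56, matching the direct enumeration
# Prob{r3 xor r5 = 0} = 0.8*0.6 + 0.2*0.4 = 0.56.
```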
Step 3. Vertical Step
In this step we update the values of q^0_mn and q^1_mn using the r's obtained in Step 2:

q^0_mn = α_mn · f_n^0 · ∏ {r^0_m'n : m' ∈ M(n)\m}
q^1_mn = α_mn · f_n^1 · ∏ {r^1_m'n : m' ∈ M(n)\m}

where α_mn is chosen such that q^0_mn + q^1_mn = 1.
The pseudoposterior probabilities are then obtained:

q^0_n = α_n · f_n^0 · ∏ {r^0_mn : m ∈ M(n)}
q^1_n = α_n · f_n^1 · ∏ {r^1_mn : m ∈ M(n)}

where α_n is again chosen such that q^0_n + q^1_n = 1.
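The vertical step and the pseudoposteriors can be sketched together in Python (hypothetical names; r0 and r1 hold r^0_mn and r^1_mn for the checks m in M(n)):

```python
def vertical_step(f0_n, f1_n, r0, r1, Mn):
    """Update q^0_mn, q^1_mn for a single bit n and compute its
    pseudoposterior probabilities q^0_n, q^1_n."""
    q0, q1 = {}, {}
    for m in Mn:
        u0, u1 = f0_n, f1_n
        for m2 in Mn:
            if m2 != m:                  # product over M(n)\m
                u0 *= r0[m2]
                u1 *= r1[m2]
        alpha = 1.0 / (u0 + u1)          # alpha_mn normalizes q0 + q1 = 1
        q0[m], q1[m] = alpha * u0, alpha * u1
    p0, p1 = f0_n, f1_n
    for m in Mn:                         # product over all of M(n)
        p0 *= r0[m]
        p1 *= r1[m]
    alpha = 1.0 / (p0 + p1)              # alpha_n
    return q0, q1, alpha * p0, alpha * p1

# Hypothetical bit with t = 2 checks:
q0, q1, p0_n, p1_n = vertical_step(0.3, 0.7,
                                   {0: 0.6, 1: 0.5},
                                   {0: 0.4, 1: 0.5}, [0, 1])
```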
Step 4. Tentative Decoding
Set r_n = 1 if q^1_n > 0.5, and r_n = 0 otherwise.
If r·Hᵀ = 0, stop; else go to Step 2.
In the software implementation, the algorithm iterates at most 1000 times.
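Putting Steps 1 through 4 together, the whole loop might look like the following Python sketch (the report's actual implementation is the MATLAB function decode_ldpc; all names here are illustrative, and the toy H is hypothetical):

```python
import math

def decode_ldpc_sketch(y, N0, a, H, max_iter=1000):
    """Sum-product decoding as in Steps 1-4 above; returns the decoded
    codeword as a list of 0/1 bits."""
    Mchk, Nbit = len(H), len(H[0])
    Nm = [[n for n in range(Nbit) if H[m][n]] for m in range(Mchk)]
    Mn = [[m for m in range(Mchk) if H[m][n]] for n in range(Nbit)]

    # Step 1: channel likelihoods and initialization of the q messages.
    sigma2 = N0 / 2.0
    f1 = [1.0 / (1.0 + math.exp(-2.0 * a * yn / sigma2)) for yn in y]
    f0 = [1.0 - p for p in f1]
    q0 = {(m, n): f0[n] for m in range(Mchk) for n in Nm[m]}
    q1 = {(m, n): f1[n] for m in range(Mchk) for n in Nm[m]}

    rhat = [1 if p > 0.5 else 0 for p in f1]
    for _ in range(max_iter):
        # Step 2: horizontal step via dr_mn = product of dq_mn'.
        r0, r1 = {}, {}
        for m in range(Mchk):
            for n in Nm[m]:
                dr = 1.0
                for n2 in Nm[m]:
                    if n2 != n:
                        dr *= q0[(m, n2)] - q1[(m, n2)]
                r0[(m, n)] = (1.0 + dr) / 2.0
                r1[(m, n)] = (1.0 - dr) / 2.0

        # Step 3: vertical step and pseudoposteriors.
        for n in range(Nbit):
            for m in Mn[n]:
                u0, u1 = f0[n], f1[n]
                for m2 in Mn[n]:
                    if m2 != m:
                        u0 *= r0[(m2, n)]
                        u1 *= r1[(m2, n)]
                q0[(m, n)] = u0 / (u0 + u1)
                q1[(m, n)] = u1 / (u0 + u1)
            p0, p1 = f0[n], f1[n]
            for m in Mn[n]:
                p0 *= r0[(m, n)]
                p1 *= r1[(m, n)]
            # Step 4 (tentative decoding) for bit n:
            rhat[n] = 1 if p1 > p0 else 0

        # Stop when all parity checks are satisfied (r . H^T = 0).
        if all(sum(rhat[n] for n in Nm[m]) % 2 == 0 for m in range(Mchk)):
            break
    return rhat

# Toy 4x6 parity check matrix (hypothetical) and a noisy all-zeros codeword:
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]
rhat = decode_ldpc_sketch([0.2, -1.0, -1.1, -0.9, -1.2, -1.0], 1.0, 1.0, H)
# the weak positive sample on bit 0 is corrected by the parity checks
```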
The function decode_ldpc(rx_waveform,No,amp,h) does the decoding.
The function takes 4 parameters:
(i) rx_waveform: the soft output of the Gaussian channel
(ii) No: twice the power spectral density of the AWGN (the PSD is No/2)
(iii) amp: the amplitude of the BPSK waveform (amp representing the voltage for 1, and −amp representing the voltage for 0)
(iv) h: the parity check matrix to be used for decoding.
The function returns the decoded codeword and not the message bits. The function can be used along with the modulation and channel transmission functions as shown below:
Function Usage Example:
% assume the codeword c, the parity check matrix h,
% and No are already loaded into the MATLAB workspace
tx_waveform = bpsk(c, 1);                   % amp = 1
rx_waveform = awgn(tx_waveform, No);
vhat = decode_ldpc(rx_waveform, No, 1, h);  % since amp = 1
Once the decoding is done (the termination condition is satisfied or the maximum number of iterations has been performed), the message bits need to be extracted. At the last step of encoding the message blocks, the reordering done to get a non-singular A was undone. If this reordering is done again, the message bits are located at the end of the decoded codeword and can be extracted easily. The rearranged_cols array (output by the rearrange_cols() function), which holds the reordering information, is used to do the reordering. The function extract_mesg(vhat, rearranged_cols) works in this manner.
Function Usage Example:
% assume the rearranged_cols array output by the rearrange_cols()
% function is already loaded into the MATLAB workspace
uhat = extract_mesg(vhat, rearranged_cols);