In 1993, two French professors, Claude Berrou and Alain Glavieux, together with their Thai doctoral student Punya Thitimajshima, proposed an entirely new coding scheme: the Turbo code. It cleverly constructs a long code with pseudo-random properties by concatenating two simple component codes in parallel through a pseudo-random interleaver, and it achieves pseudo-random decoding by iterating several times between two soft-input/soft-output (SISO) decoders.
Turbo codes keep computational complexity manageable through iterative decoding. By using an interleaver and de-interleaver in the encoder and decoder, they effectively realize the idea of random-like coding, building a long code from an efficient combination of short codes and achieving performance close to the Shannon limit.
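For context, "close to the Shannon limit" can be made concrete: for a spectral efficiency of eta bit/s/Hz, reliable transmission requires Eb/N0 >= (2^eta - 1)/eta. A minimal Python sketch, assuming rate-1/2 signaling at one bit per symbol (eta = 0.5):

```python
import math

def shannon_min_ebn0_db(eta):
    """Minimum Eb/N0 in dB for reliable transmission at spectral efficiency eta."""
    return 10 * math.log10((2 ** eta - 1) / eta)

# Rate-1/2 coding at one bit per symbol gives eta = 0.5 bit/s/Hz
print(round(shannon_min_ebn0_db(0.5), 2))  # about -0.82 dB: the limit Turbo codes approach
```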
Turbo coding emerged to address shortcomings in channel coding. Traditional channel codes, such as error-correcting codes (e.g., Hamming codes) and erasure/burst-oriented codes (e.g., Reed-Solomon codes), are built on algebraic coding theory and can effectively detect and correct isolated bit errors. In real channels, however, errors often do not occur in isolation but in bursts, which degrades the efficiency of traditional error-correcting codes in practice.
A Turbo codeword is a block of encoded information recovered through repeated iterative computation. Because Turbo codes can effectively detect and correct clustered errors, they are more efficient in practice and are widely applied in wireless communications, video transmission, data storage, and other fields.
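To illustrate why clustered errors motivate the pseudo-random interleaver at the heart of Turbo codes, here is a small Python sketch (frame length and burst positions are made up for illustration): a pseudo-random permutation maps a burst of consecutive errors to positions spread across the frame, which the component decoders handle far better.

```python
import random

random.seed(42)  # arbitrary seed, only for a repeatable illustration
L = 32
perm = list(range(L))
random.shuffle(perm)  # pseudo-random interleaver: position i -> perm[i]

burst = [10, 11, 12, 13]             # four consecutive channel errors
scattered = sorted(perm[i] for i in burst)
print(scattered)                     # typically spread across the whole frame
```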
Strong error-correcting performance: Turbo codes approach the Shannon limit, with a significant error-rate improvement over traditional codes such as convolutional codes and block codes.
Strong adaptability: Turbo codes can adapt their encoding and decoding parameters to changing channel quality, further improving error-correcting performance.
Iterative decoding: the Turbo decoder uses an iterative algorithm, running multiple rounds of iteration to improve error correction.
Wide deployment: Turbo codes are used in many wireless communication standards (such as 3GPP and WiMAX) and in satellite communication systems.
System block diagram:
A typical Turbo encoder, shown in Figure 1, consists of two identical RSC encoders (recursive convolutional encoders with feedback) in parallel with an interleaver. The encoded parity bits pass through a puncturing matrix; puncturing produces different code rates.
Encoding structure:
The information sequence enters RSC1 directly, and also enters RSC2 after passing through the interleaver, which is a one-to-one mapping from input positions to output positions. To raise the code rate, besides choosing higher-rate component codes, one can puncture some parity bits from the two parity sequences before multiplexing them with the information sequence into the output. What distinguishes a recursive systematic convolutional (RSC) encoder from an ordinary convolutional encoder is its feedback structure.
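As a concrete illustration, the default [7,5] component code (generator g = [1 1 1; 1 0 1], used in the simulation below) can be sketched in Python: the feedback bit is a_k = d_k XOR s1 XOR s2 from the feedback taps, the parity bit is a_k XOR s2 from the feedforward taps, and the systematic output is the input bit itself. This is only a sketch of the rsc_encode/encode_bit pair, without trellis termination:

```python
def rsc_encode_75(bits):
    """Rate-1/2 [7,5] RSC encoder; returns (systematic, parity) bit lists.
    Feedback taps g1 = [1 1 1], feedforward taps g2 = [1 0 1]."""
    s1 = s2 = 0
    sys_out, par_out = [], []
    for d in bits:
        a = d ^ s1 ^ s2      # feedback bit a_k
        p = a ^ s2           # parity bit from the feedforward taps
        sys_out.append(d)    # systematic: the info bit passes through unchanged
        par_out.append(p)
        s1, s2 = a, s1       # shift-register update
    return sys_out, par_out

print(rsc_encode_75([1, 0, 1, 1]))  # ([1, 0, 1, 1], [1, 1, 0, 0])
```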
Turbo decoding algorithms fall into two broad families. The first is the family of soft-output algorithms based on maximum a posteriori (MAP) probability, all derived from the standard MAP algorithm: taking logarithms of the standard MAP algorithm gives the Log-MAP algorithm, and simplifying the branch metrics of Log-MAP gives the Max-Log-MAP algorithm.
Good error performance: Log-MAP is a near-optimal Turbo decoding algorithm with better error performance than the alternatives, especially at high signal-to-noise ratio (SNR).
High computational complexity: Log-MAP requires exponential and logarithmic computations. Since Log-MAP decoding is run many times during iterative Turbo decoding, the overall decoding complexity is high.
Possible premature convergence: with few iterations, Log-MAP may converge prematurely, so the final decoding performance falls short of the theoretical optimum.
(For reference only.)
The second family consists of soft-output algorithms based on the Viterbi algorithm. These modify the Viterbi decoding algorithm for convolutional codes to satisfy the SISO property, so that soft information can be exchanged between the two component decoders; the resulting algorithm is the soft-output Viterbi algorithm (SOVA).
Overall, the MAP-family algorithms outperform SOVA but are more complex to implement.
This simulation mainly uses the Log-MAP decoding algorithm. The decoder is a pseudo-random decoder with a feedback structure: the two component codes are decoded alternately without interfering with each other, and they exchange information through soft decisions on the systematic information bits, enabling recursive iterative decoding. If puncturing was applied during encoding, zeros must be inserted when separating the two parity sequences so that all three sequences have the same length; before entering the decoder, the three sequences are also weighted by the channel reliability L_c.
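The channel-reliability weighting can be sanity-checked: for BPSK (+1/-1) over AWGN with noise variance sigma^2 = 1/(2*rate*Eb/N0), the channel LLR of a received sample y is 2*a*y/sigma^2 = 4*a*rate*(Eb/N0)*y, i.e. L_c*y, matching `L_c = 4*a*en*rate` in the code below. A quick numeric sketch (values chosen arbitrarily):

```python
def channel_llr(y, ebn0_linear, rate, a=1.0):
    """Channel LLR of a received BPSK sample: 2*a*y/sigma^2,
    with sigma^2 = 1/(2*rate*EbN0)."""
    sigma2 = 1.0 / (2 * rate * ebn0_linear)
    return 2 * a * y / sigma2

ebn0, rate, y = 2.0, 0.5, 0.8
L_c = 4 * 1.0 * ebn0 * rate        # reliability value, as in the MATLAB code
print(channel_llr(y, ebn0, rate))  # 3.2
print(L_c * y)                     # 3.2 -- the identical weighting
```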
The MAP algorithm first processes the received information; the exchange of extrinsic information between the two component decoders then forms an iterative loop. Thanks to the extrinsic information, the bit error rate at a given SNR decreases as the number of iterations grows. At the same time, however, the correlation between the extrinsic information and the received sequence also grows with each decoding pass, so the correction capability that the extrinsic information provides weakens, and after a certain number of iterations the decoding performance stops improving. The Log-MAP algorithm used in this simulation simply expresses all likelihoods of the standard MAP algorithm as log-likelihoods, turning multiplications into additions and speeding up computation.
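The multiplication-to-addition step rests on the Jacobian logarithm ln(e^x + e^y) = max(x, y) + ln(1 + e^(-|x-y|)): keeping the correction term gives Log-MAP, dropping it gives Max-Log-MAP. A small sketch of both:

```python
import math

def max_star(x, y):
    """Exact Jacobian logarithm: ln(e^x + e^y) as a max plus a correction term."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def max_log(x, y):
    """Max-Log-MAP approximation: the correction term is dropped."""
    return max(x, y)

x, y = 2.0, 1.5
exact = math.log(math.exp(x) + math.exp(y))
print(abs(max_star(x, y) - exact) < 1e-12)   # True: max* reproduces the exact value
print(round(exact - max_log(x, y), 4))       # 0.4741: the dropped correction term
```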
For a detailed walkthrough of the code, see these articles:
Turbo编译码Matlab仿真解读 – WuYufei_matlab
Turbo编译码Matlab仿真解读 (2) – WuYufei_matlab
Many other articles explain the encoding and decoding principles; readers can look them up on their own.
Example:
The main program is listed below. The complete code has been packaged and uploaded [click here to download]. If you already have the original code it should also work (the individual functions appear to be unmodified, as far as I recall); just replace the main program.
% turbo_sys_demo.m
% The BCJR algorithm uses the simplification ln(e^x+e^y) = max(x,y) + ln(1+e^(-abs(x-y))): a max plus a table lookup
% Using only the approximation ln(e^x+e^y) = max(x,y) gives the Max-Log-MAP algorithm
clc;clear
diary turbo_logmap.txt % log the command-window output to a text file (on)
% Choose the decoding algorithm
dec_alg = input(' Choose decoding algorithm (0: Log-MAP, 1: SOVA), default 0: ');
% Log-MAP expresses all likelihoods of the standard MAP algorithm as
% log-likelihoods, so multiplications become additions; with the full
% correction term its performance essentially matches MAP at lower cost
if isempty(dec_alg)
dec_alg = 0; % default 0: Log-MAP
end
% Frame length
L_total = input(' Enter frame length (= info length + tail bits, default 1024): ');
if isempty(L_total)
L_total = 1024; % default 1024
end
% Turbo code generator polynomials
g = input(' Enter generator matrix (default: g = [1 1 1; 1 0 1]): ');
if isempty(g)
g = [ 1 1 1;1 0 1 ]; % i.e. the [7,5] code
end
%g = [1 1 0 1; 1 1 1 1];
%g = [1 1 1 1 1; 1 0 0 0 1];
[n,K] = size(g);
m = K - 1; % register order m = number of columns of g minus 1
nstates = 2^m;
puncture = input(' Puncturing (0: punctured, 1: unpunctured), default 0: ');
if isempty(puncture)
puncture = 0; % 0 = punctured (rate 1/2), 1 = unpunctured (rate 1/3)
end
% Code rate
rate = 1/(2+puncture); % 1/2 when punctured, 1/3 when unpunctured
% AWGN channel
a = 1;
% Number of iterations
niter = input(' Number of decoding iterations per frame, default 3: ');
if isempty(niter)
niter = 3;
end
% Number of frame errors at which the program terminates
ferrlim = input(' Number of frame errors at which to terminate, default 15: ');
if isempty(ferrlim)
ferrlim = 15; % default 15
end
EbN0db = input(' Eb/N0 values in dB, default [0.5,1.0,1.5,2.0,2.5,3.0]: ');
if isempty(EbN0db)
EbN0db = [0.5,1.0,1.5,2.0,2.5,3.0]; % default
end
fprintf('\n\n----------------------------------------------------\n');
if dec_alg == 0
fprintf(' === Log-MAP decoder === \n');
else
fprintf(' === SOVA decoder === \n');
end
fprintf(' Frame length = %6d\n',L_total);
fprintf(' Turbo code generator polynomials: \n');
for i = 1:n
for j = 1:K
fprintf( '%6d', g(i,j));
end
fprintf('\n');
end
if puncture==0
fprintf(' Punctured, code rate = 1/2 \n');
else
fprintf(' Unpunctured, code rate = 1/3 \n');
end
fprintf(' Number of iterations = %6d\n', niter);
fprintf(' Frame errors to terminate = %6d\n', ferrlim);
fprintf(' Eb / N0 (dB) = ');
for i = 1:length(EbN0db)
fprintf('%10.2f',EbN0db(i));
end
fprintf('\n----------------------------------------------------\n\n');
% fprintf('+ + + + Please be patient. Wait a while to get the result. + + + +\n');
for nEN = 1:length(EbN0db)
en = 10^(EbN0db(nEN)/10); % convert Eb/N0 from dB to a linear ratio
L_c = 4*a*en*rate; % reliability value of the channel
sigma = 1/sqrt(2*rate*en); % standard deviation of the AWGN noise
% Initialization: zero the error counters
errs(nEN,1:niter) = zeros(1,niter);
nferr(nEN,1:niter) = zeros(1,niter);
nframe = 0; % clear counter of transmitted frames
while nferr(nEN, niter)<ferrlim
nframe = nframe + 1;
x = round(rand(1, L_total-m)); % info. bits
[temp, alpha] = sort(rand(1,L_total)); % random interleaver mapping
% sort generates the random interleaver used by the Turbo encoder; an LTE
% simulation would instead use the interleaver defined by the standard
en_output = encoderm( x, g, alpha, puncture ) ; % encoder output (+1/-1)
r = en_output+sigma*randn(1,L_total*(2+puncture)); % received bits
yk = demultiplex(r,alpha,puncture); % demultiplex to get input for decoder 1 and 2
% Scale the received bits
rec_s = 0.5*L_c*yk;
% Initialize extrinsic information
L_e(1:L_total) = zeros(1,L_total);
for iter = 1:niter
% Decoder one
L_a(alpha) = L_e; % a priori info.
if dec_alg == 0
L_all = logmapo(rec_s(1,:), g, L_a, 1); % complete info.
else
L_all = sova0(rec_s(1,:), g, L_a, 1); % complete info.
end
L_e = L_all - 2*rec_s(1,1:2:2*L_total) - L_a; % extrinsic info.
% Decoder two
L_a = L_e(alpha); % a priori info.
if dec_alg == 0
L_all = logmapo(rec_s(2,:), g, L_a, 2); % complete info.
else
L_all = sova0(rec_s(2,:), g, L_a, 2); % complete info.
end
L_e = L_all - 2*rec_s(2,1:2:2*L_total) - L_a; % extrinsic info.
% Estimate the info. bits
xhat(alpha) = (sign(L_all)+1)/2;
% number of bit errors for the current iteration
err(iter) = length(find(xhat(1:L_total-m)~=x));
% count a frame error for the current iteration if any bit is wrong
if err(iter)>0
nferr(nEN,iter) = nferr(nEN,iter)+1;
end
end %iter
% accumulate the total bit errors for each iteration count
errs(nEN,1:niter) = errs(nEN,1:niter) + err(1:niter);
if rem(nframe,3)==0 || nferr(nEN, niter)==ferrlim
% Bit error rate
ber(nEN,1:niter) = errs(nEN,1:niter)/nframe/(L_total-m);
% Frame error rate
fer(nEN,1:niter) = nferr(nEN,1:niter)/nframe;
% Display the results
fprintf('************** Eb/N0 = %5.2f db **************\n', EbN0db(nEN));
fprintf('Frame length = %d, rate 1/%d. \n', L_total, 2+puncture);
fprintf('%d frames transmitted, %d frames in error.\n', nframe, nferr(nEN, niter)); % nframe: frames sent; nferr: frame-error count
fprintf('Bit error rate (from iteration 1 to %d):\n', niter);
for i=1:niter
fprintf('%8.4e ', ber(nEN,i));
end
fprintf('\n');
fprintf('Frame error rate (from iteration 1 to %d):\n', niter);
for i=1:niter
fprintf('%8.4e ', fer(nEN,i));
end
fprintf('\n');
fprintf('***********************************************\n\n');
save turbo_sys_demo EbN0db ber fer
% ber: bit error rate; fer: frame error rate
end %if
end
end
% Plot the results for different iteration counts for comparison
% The plotting code below is only a reference
figure(1)% plot BER
for i=1:niter
semilogy(EbN0db,ber(:,i));hold on
title('Log-MAP, frame length 1024, varying iteration count');
xlabel('E_b/N_0[dB]');ylabel('BER');
end
legend('1 iteration','2 iterations','3 iterations');
figure(2)% plot FER
for i=1:niter
semilogy(EbN0db,fer(:,i));hold on
title('Log-MAP, frame length 1024, varying iteration count');
xlabel('E_b/N_0[dB]');ylabel('FER');
end
legend('1 iteration','2 iterations','3 iterations');
diary off % stop logging
5.1 Log-MAP decoding, SNR = [0.5,1.0,1.5,2.0,2.5,3.0,3.5], punctured, rate = 1/2
(More iterations mean longer simulation runs, so only 3 iterations are set here. If your machine is fast enough, increase the number of iterations and widen the SNR range; in theory you should see the error floor appear. Simulations of PCCC over the AWGN channel show that once the BER has dropped to a certain level with increasing SNR, its decline slows or even stops; this is commonly called the error floor.)
The number of iterations is a key parameter for decoding accuracy: in theory, more iterations give more accurate decoding, and we can verify this by varying the iteration count and comparing the transmitted and received data. Different algorithms also produce different decoding results, so the choice of algorithm is another major factor in decoding accuracy.
Analysis:
5.2 Log-MAP decoding, SNR = [0.5,1.0,1.5,2.0,2.5,3.0,3.5], unpunctured, rate = 1/3
Analysis:
5.3 As an exercise, compare the SOVA and Log-MAP decoding algorithms at SNR = 1 dB with 5 iterations, both punctured, rate = 1/2.
Conclusions:
1. A larger interleaver improves decoding performance but also lengthens decoding time.
2. Lowering the code rate improves decoding performance considerably, but a lower rate means lower transmission efficiency, so choosing a rate is a trade-off between transmission efficiency and transmission quality.
Drawbacks:
High computational complexity: both Turbo encoding and decoding are computationally expensive, especially with many iterations, which puts a heavy load on the system.
Unstable reliability: the error-correcting performance of Turbo codes can fluctuate considerably across channel conditions, and it also depends on the number of iterations, the code length, and other factors.
Sensitivity to channel conditions: Turbo code performance depends heavily on the channel; on a poor channel, even a Turbo code may not guarantee highly reliable transmission.
Limited scalability: the code rate and code length of Turbo codes are constrained, which makes them unsuitable for some special scenarios.
Note that the original folder contains a turbo_logmap.txt file, produced by the diary logging utility, which corresponds to:
diary turbo_logmap.txt % log the command-window output to a text file (on)
diary off;
These two lines can also be commented out.
Appendix: implementation of each function:
function subr = demultiplex(r, alpha, puncture)
% At the receiver, serial-to-parallel demultiplexing recovers the code word for each
% encoder
% alpha: interleaver mapping
% puncture = 0: use puncturing to increase rate to 1/2;
% puncture = 1; unpunctured, rate 1/3;
% Frame size, which includes info. bits and tail bits
L_total = length(r)/(2+puncture);
% Extract the parity bits for both decoders
if puncture == 1 % unpunctured
for i = 1:L_total
x_sys(i) = r(3*(i-1)+1);
for j = 1:2
subr(j,2*i) = r(3*(i-1)+1+j);
end
end
else % punctured
for i = 1:L_total
x_sys(i) = r(2*(i-1)+1);
for j = 1:2
subr(j,2*i) = 0;
end
if rem(i,2)>0
subr(1,2*i) = r(2*i);
else
subr(2,2*i) = r(2*i);
end
end
end
% Extract the systematic bits for both decoders
for j = 1:L_total
% For decoder one
subr(1,2*(j-1)+1) = x_sys(j);
% For decoder two: interleave the systematic bits
subr(2,2*(j-1)+1) = x_sys(alpha(j));
end
end
function [output, state] = encode_bit(g, input, state)% encode_bit.m
% Copyright 1996 Matthew C. Valenti
% MPRG lab, Virginia Tech
% for academic use only
% This function takes as an input a single bit to be encoded,
% as well as the coeficients of the generator polynomials and
% the current state vector.
% It returns as output n encoded data bits, where 1/n is the
% code rate.
% the rate is 1/n
% k is the constraint length
% m is the amount of memory
[n,k] = size(g);
m = k-1;
% determine the next output bit
for i=1:n
output(i) = g(i,1)*input;
for j = 2:k
output(i) = xor(output(i),g(i,j)*state(j-1));
end;
end
% output contains the n coded bits produced for this input bit
% standard convolutional shift-register update
state = [input, state(1:m-1)];
end
function en_output = encoderm( x, g, alpha, puncture )% encoderm.m
% the interleaver mapping is given by the 'alpha' array
% puncture = 1, unpunctured, produces a rate 1/3 output of fixed length
% puncture = 0, punctured, produces a rate 1/2 output
% multiplexer chooses odd check bits from RSC1
% and even check bits from RSC2
% determine the constraint length (K), memory (m)
% and number of information bits plus tail bits.
[n,K] = size(g);
m = K - 1; % register order = number of columns of g minus 1
L_info = length(x);
L_total = L_info + m; % total coded length = info length + m (tail bits drive the register state back to zero)
% generate the codeword corresponding to the 1st RSC coder
% end = 1, perfectly terminated;
input = x;
output1 = rsc_encode(g,input,1);
% make a matrix with first row corresponding to the info sequence,
% second row corresponding to RSC #1's check bits,
% third row corresponding to RSC #2's check bits.
y(1,:) = output1(1:2:2*L_total); % first row: systematic (information) bits
y(2,:) = output1(2:2:2*L_total); % second row: check bits from RSC1
% interleave input to second encoder
for i = 1:L_total
input1(1,i) = y(1,alpha(i));
end
output2 = rsc_encode(g, input1(1,1:L_total), -1 );
y(3,:) = output2(2:2:2*L_total);
% paralell to serial multiplex to get output vector
% puncture = 0: rate increase from 1/3 to 1/2;
% puncture = 1; unpunctured, rate = 1/3;
if puncture > 0 % unpunctured
for i = 1:L_total
for j = 1:3
en_output(1,3*(i-1)+j) = y(j,i);
end
end
else % punctured into rate 1/2
for i=1:L_total
en_output(1,n*(i-1)+1) = y(1,i);
if rem(i,2)
% check bit from RSC1
en_output(1,n*i) = y(2,i);
else
% check bit from RSC2
en_output(1,n*i) = y(3,i);
end
end
end
% antipodal modulation: +1/-1
% polarity-mapped output
en_output = 2 * en_output - ones(size(en_output));
end
function int_state = int_state( state )
% converts a row vector of m bits into a integer (base 10)
[dummy, m] = size( state );
for i = 1:m
vect(i) = 2^(m-i);
end
int_state = state*vect';
end
function bin_state = bin_state( int_state, m )
% converts a vector of integers into a matrix; the i-th row is the m-bit binary
% form of the i-th integer
for j = 1:length( int_state )
for i = m:-1:1
state(j,m-i+1) = fix( int_state(j)/ (2^(i-1)) );
int_state(j) = int_state(j) - state(j,m-i+1)*2^(i-1);
end
end
bin_state = state;
end
function L_all = logmapo(rec_s,g,L_a,ind_dec)% logmapo.m
% rec_s: scaled received bits
% L_a: a priori L-values
% ind_dec: index of the decoder
% This function is the core of the Turbo decoder
% Log_MAP algorithm using straightforward method to compute branch metrics
% no approximation is used.
% Can be simplified to Max-Log-MAP by using approximation ln(e^x+e^y) = max(x,y).
% Input: rec_s: scaled received bits.
% rec_s = 0.5 * L_c * yk = ( 2 * a * rate * Eb/N0 ) * yk
% g: code generator for the component RSC code, in binary matrix form.
% L_a: a priori info. for the current decoder,
% scrambled version of extrinsic info. of the previous decoder.
% ind_dec: index of decoder. Either 1 or 2.
% Encoder 1 is assumed to be terminated, while encoder 2 is open.
%
% Output: L_all: log-likelihood ratio of the symbols. Complete information.
% Total number of bits: info. + tail
L_total = length(rec_s)/2;
[n,K] = size(g);
m = K - 1;
nstates = 2^m; % number of states in the trellis
% Set up the trellis
[next_out, next_state, last_out, last_state] = trellis(g);
Infty = 1e10;
% Initialization of Alpha
Alpha(1,1) = 0;
Alpha(1,2:nstates) = -Infty*ones(1,nstates-1);
% Initialization of Beta
if ind_dec==1
Beta(L_total,1) = 0;
Beta(L_total,2:nstates) = -Infty*ones(1,nstates-1);
elseif ind_dec==2
Beta(L_total,1:nstates) = zeros(1,nstates);
else
fprintf('ind_dec is limited to 1 and 2!\n');
end
% Trace forward, compute Alpha
for k = 2:L_total+1
for state2 = 1:nstates
gamma = -Infty*ones(1,nstates);
gamma(last_state(state2,1)) = (-rec_s(2*k-3)+rec_s(2*k-2)*last_out(state2,2))....
-log(1+exp(L_a(k-1)));
gamma(last_state(state2,2)) = (rec_s(2*k-3)+rec_s(2*k-2)*last_out(state2,4))....
+L_a(k-1)-log(1+exp(L_a(k-1)));
if(sum(exp(gamma+Alpha(k-1,:)))<1e-300)
Alpha(k,state2)=-Infty;
else
Alpha(k,state2) = log( sum( exp( gamma+Alpha(k-1,:) ) ) );
end
end
tempmax(k) = max(Alpha(k,:));
Alpha(k,:) = Alpha(k,:) - tempmax(k);
end
% Trace backward, compute Beta
for k = L_total-1:-1:1
for state1 = 1:nstates
gamma = -Infty*ones(1,nstates);
gamma(next_state(state1,1)) = (-rec_s(2*k+1)+rec_s(2*k+2)*next_out(state1,2))....
-log(1+exp(L_a(k+1)));
gamma(next_state(state1,2)) = (rec_s(2*k+1)+rec_s(2*k+2)*next_out(state1,4))....
+L_a(k+1)-log(1+exp(L_a(k+1)));
if(sum(exp(gamma+Beta(k+1,:)))<1e-300)
Beta(k,state1)=-Infty;
else
Beta(k,state1) = log(sum(exp(gamma+Beta(k+1,:))));
end
end
Beta(k,:) = Beta(k,:) - tempmax(k+1);
end
% Compute the soft output, log-likelihood ratio of symbols in the frame
for k = 1:L_total
for state2 = 1:nstates
gamma0 = (-rec_s(2*k-1)+rec_s(2*k)*last_out(state2,2))....
-log(1+exp(L_a(k)));
gamma1 = (rec_s(2*k-1)+rec_s(2*k)*last_out(state2,4))...
+L_a(k)-log(1+exp(L_a(k)));
temp0(state2) = exp(gamma0 + Alpha(k,last_state(state2,1)) + Beta(k,state2));
temp1(state2) = exp(gamma1 + Alpha(k,last_state(state2,2)) + Beta(k,state2));
end
L_all(k) = log(sum(temp1)) - log(sum(temp0));
end
end
function y = rsc_encode(g, x, terminated)% rsc_encode.m
% encodes a block of data x (0/1)with a recursive systematic
% convolutional code with generator vectors in g, and
% returns the output in y (0/1).
% if terminated>0, the trellis is perfectly terminated
% if terminated<0, it is left unterminated;
% determine the constraint length (K), memory (m), and rate (1/n)
% and number of information bits.
[n,K] = size(g);
m = K - 1;
if terminated>0
L_info = length(x);
L_total = L_info + m;
else
L_total = length(x);
L_info = L_total - m;
end
% initialize the state vector
state = zeros(1,m);
% generate the information bit d_k
for i = 1:L_total
if terminated<0 | (terminated>0 & i<=L_info)
d_k = x(1,i);
elseif terminated>0 & i>L_info
% terminate the trellis
d_k = rem( g(1,2:K)*state', 2 );
end
a_k = rem( g(1,:)*[d_k state]', 2 );
[output_bits, state] = encode_bit(g, a_k, state);
% since systematic, first output is input bit
output_bits(1,1) = d_k;
y(n*(i-1)+1:n*i) = output_bits;
end
end
function L_all = sova0(rec_s, g, L_a, ind_dec)% sova0.m (named to match the call in the main program)
% This function implements the Soft Output Viterbi Algorithm in trace-back mode
% Input:
% rec_s: scaled received bits. rec_s(k) = 0.5 * L_c(k) * y(k)
% L_c = 4 * a * Es/No, reliability value of the channel
% y: received bits
% g: encoder generator matrix in binary form, g(1,:) for feedback, g(2,:) for feedforward
% L_a: a priori information about the info. bits. Extrinsic info. from the previous
% component decoder
% ind_dec: index of the component decoder.
% =1: component decoder 1; The trellis is terminated to all zero state
% =2: component decoder 2; The trellis is not perfectly terminated.
% Output:
% L_all: log ( P(x=1|y) ) / ( P(x=-1|y) )
%
% Copyright: Yufei Wu, Nov. 1998
% MPRG lab, Virginia Tech
% for academic use only
% Frame size, info. + tail bits
L_total = length(L_a);
[n,K] = size(g);
m = K - 1;
nstates = 2^m;
Infty = 1e10;
% SOVA window size. Make decision after 'delta' delay. Decide bit k when received bits
% for bit (k+delta) are processed. Trace back from (k+delta) to k.
delta = 30;
% Set up the trellis defined by g.
[next_out, next_state, last_out, last_state] = trellis(g);
% Initialize path metrics to -Infty
for t=1:L_total+1
for state=1:nstates
path_metric(state,t) = -Infty;
end
end
% Trace forward to compute all the path metrics
path_metric(1,1) = 0;
for t=1:L_total
y = rec_s(2*t-1:2*t);
for state=1:nstates
sym0 = last_out(state,1:2);
sym1 = last_out(state,3:4);
state0 = last_state(state,1);
state1 = last_state(state,2);
Mk0 = y*sym0' - L_a(t)/2 + path_metric(state0,t);
Mk1 = y*sym1' + L_a(t)/2 + path_metric(state1,t);
if Mk0>Mk1
path_metric(state,t+1)=Mk0;
Mdiff(state,t+1) = Mk0 - Mk1;
prev_bit(state, t+1) = 0;
else
path_metric(state,t+1)=Mk1;
Mdiff(state,t+1) = Mk1 - Mk0;
prev_bit(state,t+1) = 1;
end
end
end
% For decoder 1, trace back from all zero state,
% for decoder two, trace back from the most likely state
if ind_dec == 1
mlstate(L_total+1) = 1;
else
mlstate(L_total+1) = find( path_metric(:,L_total+1)==max(path_metric(:,L_total+1)) );
end
% Trace back to get the estimated bits, and the most likely path
for t=L_total:-1:1
est(t) = prev_bit(mlstate(t+1),t+1);
mlstate(t) = last_state(mlstate(t+1), est(t)+1);
end
% Find the minimum delta that corresponds to a competition path with a different info. bit estimate.
% Give the soft output
for t=1:L_total
llr = Infty;
for i=0:delta
if t+i<L_total+1
bit = 1-est(t+i);
temp_state = last_state(mlstate(t+i+1), bit+1);
for j=i-1:-1:0
bit = prev_bit(temp_state,t+j+1);
temp_state = last_state(temp_state, bit+1);
end
if bit~=est(t)
llr = min( llr,Mdiff(mlstate(t+i+1), t+i+1) );
end
end
end
L_all(t) = (2*est(t) - 1) * llr;
end
end
function [next_out, next_state, last_out, last_state] = trellis(g)% trellis.m
% next_out: outputs on the outgoing branches
% next_state: next states
% last_out: outputs on the incoming branches
% last_state: previous states
% set up the trellis given code generator g
% g given in binary matrix form. e.g. g = [ 1 1 1; 1 0 1 ];
% next_out(i,1:2): trellis next_out (systematic bit; parity bit) when input = 0, state = i; next_out(i,j) = -1 or 1
% next_out(i,3:4): trellis next_out (systematic bit; parity bit) when input = 1, state = i;
% next_state(i,1): next state when input = 0, state = i; next_state(i,j) = 1,...,2^m
% next_state(i,2): next state when input = 1, state = i;
% last_out(i,1:2): trellis last_out (systematic bit; parity bit) when input = 0, state = i; last_out(i,j) = -1 or 1
% last_out(i,3:4): trellis last_out (systematic bit; parity bit) when input = 1, state = i;
% last_state(i,1): previous state that comes to state i when info. bit = 0;
% last_state(i,2): previous state that comes to state i when info. bit = 1;
[n,K] = size(g); % n=2,K=3
m = K - 1; % m=2
max_state = 2^m; % max_state=4
% set up next_out and next_state matrices for systematic code
for state=1:max_state
state_vector = bin_state( state-1, m );% convert the integer state into its m-bit vector form
% when the input bit is 0
d_k = 0;
a_k = rem( g(1,:)*[0 state_vector]', 2 );
[out_0, state_0] = encode_bit(g, a_k, state_vector);
out_0(1) = 0;
% when the input bit is 1
d_k = 1;
a_k = rem( g(1,:)*[1 state_vector]', 2 );
[out_1, state_1] = encode_bit(g, a_k, state_vector);
out_1(1) = 1;
next_out(state,:) = 2*[out_0 out_1]-1;
next_state(state,:) = [(int_state(state_0)+1) (int_state(state_1)+1)];
end
% find out which two previous states can come to present state
last_state = zeros(max_state,2);
for bit=0:1
for state=1:max_state
last_state(next_state(state,bit+1), bit+1)=state;
last_out(next_state(state, bit+1), bit*2+1:bit*2+2) ...
= next_out(state, bit*2+1:bit*2+2);
end
end