In this lecture we are going to conclude our discussion on the introduction. I will first describe the difference between block codes and convolutional codes, then we will talk about some very simple decoding strategies, and finally we will explain what we mean by forward error correction, automatic repeat request, and hybrid ARQ.

So, as I said, error-correcting codes can be broadly classified into two classes: block codes and convolutional codes. We will describe what is meant by a block code and what is meant by a convolutional code, and we will bring out the differences and the similarities between the two. Then we will talk about various decoding strategies, and finally about what we mean by forward error correction, automatic repeat request, and hybrid ARQ.
So we will start with block codes. As the name suggests, in block codes we take a block of k bits and map it to an n-bit codeword. Our information sequence is parsed into blocks of k bits, and each block of k bits is mapped to a block of n bits. We denote the information sequence by u; this is a k-bit sequence u_0, u_1, ..., u_{k-1}, and the encoder maps these k bits into an n-bit sequence denoted by v.

Now, in a block code the encoder is memoryless. What do we mean by that? When we encode a block of k bits, the output depends only on that current block of k bits; it does not depend on the previous blocks of data. That is one property of block codes which makes them different from convolutional codes: block codes are memoryless.

As we mentioned in the previous lectures, the code rate is defined as the ratio of the number of information bits to the number of coded bits, and it is typically denoted by R: with k information bits and n coded bits, R = k/n. So n - k is the number of redundant bits that we are adding to the information bits; these are also known as parity bits.

Without loss of generality we will consider binary codes in this set of lectures, so the information sequence consists of zeros and ones, and similarly the code sequence consists of zeros and ones. Since we are considering blocks of k bits and binary codewords, the number of codewords is 2^k. So a binary (n, k) block code consists of 2^k codewords, each of length n. The codewords need not be binary; mostly the same theory applies to non-binary codewords as well, but we will restrict our discussion to binary codewords.
So let us consider an example of a linear block code. In this example the number of information bits is k = 3 and the number of coded bits is n = 6, so the code rate, the ratio of the number of information bits to the number of coded bits, is 3/6 = 1/2. Since k is 3, there are 2^3 = 8 messages, from 0 0 0 to 1 1 1, and corresponding to these 8 messages there are 8 codewords. The message 0 0 0 is mapped to the all-zero sequence, 1 0 0 is mapped to 0 1 1 1 0 0, and likewise the other messages have been mapped.

So let us look at how the parity bits of these codewords were found. Denote the message bits by u_0, u_1, u_2 and the codeword bits by v_0, v_1, v_2, v_3, v_4, v_5. Looking at the columns of the codeword table, the last three codeword bits simply copy the message bits: v_5 = u_2, v_4 = u_1, and v_3 = u_0.

Now look at the remaining columns. Here the addition is over the binary field, that is, modulo-2 addition: 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, and 1 + 1 = 0. Comparing the columns of the table, we find that v_2 = u_0 + u_1, v_1 = u_0 + u_2, and v_0 = u_1 + u_2. You can verify this with the table: for the message 1 0 0 we get v_0 = 0 + 0 = 0, v_1 = 1 + 0 = 1, v_2 = 1 + 0 = 1, and v_3 v_4 v_5 = 1 0 0, giving the codeword 0 1 1 1 0 0. So this is how we have mapped our information bits into our coded bits.
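This mapping can be sketched in a few lines of Python (the function name and the list-based bit representation are my own illustrative choices; `^` is XOR, i.e. modulo-2 addition):

```python
# Sketch of the (6, 3) linear block encoder from the example.
# Bit ordering follows the lecture: v0 = u1 + u2, v1 = u0 + u2,
# v2 = u0 + u1, v3 = u0, v4 = u1, v5 = u2 (addition modulo 2, i.e. XOR).

def encode_63(u):
    """Map a 3-bit message [u0, u1, u2] to a 6-bit codeword [v0, ..., v5]."""
    u0, u1, u2 = u
    return [u1 ^ u2, u0 ^ u2, u0 ^ u1, u0, u1, u2]

# The message 1 0 0 maps to 0 1 1 1 0 0, matching the table in the lecture.
print(encode_63([1, 0, 0]))  # [0, 1, 1, 1, 0, 0]
```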
Ok, so again to recap: in block codes we partition the information sequence into blocks of k bits and map each block of k bits into a block of n bits, and this mapping is memoryless. In other words, how we map a block of k bits does not depend on how we have mapped the previous blocks of k bits.

So let us now contrast this with convolutional codes and see how they differ from block codes. In block codes we parse the information into blocks of data and handle them block by block, whereas a convolutional code can process the information sequence in a continuous fashion. The second difference is that the encoding in a convolutional code is with memory. In other words, the current output depends not only on the current input but also on past inputs and outputs. So if we have an (n, k) convolutional code, where k is the number of information bits and n is the number of coded bits, we have another parameter, called the memory order, which signifies how much past information is used to generate the current output. So we define a convolutional code not only by the parameters n and k but also by a third parameter that denotes the memory of the encoder. Another subtle difference: for convolutional codes the values of k and n are typically much smaller than the values of k and n for block codes.
So let us take an example of a convolutional code. Here we have one input and two outputs. The input at time l is denoted u_l, and the outputs are denoted v_l^(1) and v_l^(2). Note that each output depends not only on the current input u_l but also on the past values u_{l-1} and u_{l-2}, so this is an example of memory order 2: the current output depends not only on the current input but also on the past two values of the input. This is therefore an example of a (2, 1, 2) convolutional code: n is 2 because there are two outputs, k is 1 because there is one input, and the memory order is 2 because the output depends on the past two values of the information sequence. You can see here that the first output is v_l^(1) = u_l + u_{l-2}; in other words, it depends on the current input and on the input two steps in the past. Similarly, the second output is v_l^(2) = u_l + u_{l-1} + u_{l-2}; it depends on the current input, the previous input, and u_{l-2}. So you can see the difference: in a convolutional code the output depends not only on the current input but also on past inputs and outputs. That is one of the major differences between convolutional codes and block codes.
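As a sketch, the (2, 1, 2) encoder just described can be written as a small shift-register simulation in Python (the function name and tuple output format are my own choices; the two memory elements are assumed to start at zero):

```python
# Sketch of the (2, 1, 2) convolutional encoder from the example:
# v_l(1) = u_l + u_{l-2},  v_l(2) = u_l + u_{l-1} + u_{l-2}  (modulo 2).

def conv_encode(u):
    """Encode a bit sequence; memory elements are initialized to 0."""
    s1, s2 = 0, 0              # s1 holds u_{l-1}, s2 holds u_{l-2}
    out = []
    for ul in u:
        v1 = ul ^ s2           # current input + input two steps back
        v2 = ul ^ s1 ^ s2      # current + previous + two steps back
        out.append((v1, v2))
        s1, s2 = ul, s1        # shift the register
    return out

print(conv_encode([1, 0, 1]))  # [(1, 1), (0, 1), (0, 0)]
```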
Now let us move to the topic of what sort of decoding strategy we should employ when we want to decode a code. As I said, the decoder takes as input the demodulated sequence r and has to produce an estimate of the information sequence that was sent. Now, estimating the information sequence is equivalent to estimating the code sequence, because there is a one-to-one mapping between codewords and information sequences. So equivalently, the problem the decoder has to solve is to estimate the code sequence given the received sequence r. A decoding rule is precisely this: given a received sequence r, we try to estimate what the transmitted code sequence was, that is, we produce v hat (or u hat) from the received sequence r. So we have to decide what rule or logic to use: when we get a received sequence r, how do we assign it to a particular codeword? One policy we can use is to minimize the probability of error.
Now, when does an error occur? When the decoded sequence is not the same as the transmitted one. So the probability of error is the probability that the estimated sequence, which I denote by v hat, is not the same as v. This can be written as the probability of error given a received sequence r, multiplied by the probability of that received sequence r, summed over all possible received sequences:

P(error) = sum over r of P(v hat != v | r) P(r).

So if I want to minimize the probability of error, my decoding rule should minimize this sum. There are two terms in it: P(r) and P(v hat != v | r). Whatever v hat I choose does not change P(r); the choice of decoding rule does not affect P(r). So in other words, to minimize the probability of error I should choose v hat in such a way that, for each received sequence r, the term P(v hat != v | r) is minimized. Now, minimizing the probability that v hat is not the same as v given r is equivalent to maximizing the probability that v hat is equal to v given r, so that is what we have to maximize. Using Bayes' rule we can write

P(v | r) = P(r | v) P(v) / P(r),

and this has to be maximized over v. Again, the choice of v does not change P(r), so maximizing P(v | r) is the same as maximizing the numerator P(r | v) P(v), because the denominator does not depend on the choice of v. So if you want to minimize the probability of error, you want to maximize P(r | v) P(v).
The MAP decoder, the maximum a posteriori probability decoder, is the one which does exactly that: it chooses the v hat for which P(r | v) P(v) is maximized. Now what happens if all codewords are equally likely? Then P(v) is the same for every codeword, and in that case maximizing P(v | r) is the same as maximizing P(r | v). That is what we are saying: if all codewords are equally likely, then maximizing P(v | r) is the same as maximizing the likelihood function P(r | v). So the maximum likelihood decoder is the one which chooses v hat such that P(r | v) is maximized. Now, if our channel is a discrete memoryless channel, the probability of the received sequence r given the transmitted sequence v can be written as the product of the individual symbol probabilities, P(r | v) = product over i of P(r_i | v_i). If that happens, we can further simplify the maximization criterion. Since log x is a monotonically increasing function of x, maximizing this probability is equivalent to maximizing log P(r | v), and then the product becomes a summation: log P(r | v) = sum over i of log P(r_i | v_i). This is much easier to compute.
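As a sketch of this idea, here is a generic ML decoder for a discrete memoryless channel that maximizes the sum of per-symbol log-likelihoods; the function names, the callable channel model, and the candidate codeword list are illustrative assumptions, not anything fixed by the lecture:

```python
import math

# Sketch: ML decoding over a discrete memoryless channel by maximizing
# sum_i log P(r_i | v_i) over the candidate codewords.

def ml_decode(r, codewords, p_given):
    """Return the codeword v maximizing the log-likelihood of r."""
    def loglik(v):
        return sum(math.log(p_given(ri, vi)) for ri, vi in zip(r, v))
    return max(codewords, key=loglik)

# Illustration with a binary symmetric channel, crossover probability 0.1.
p = 0.1
bsc = lambda ri, vi: 1 - p if ri == vi else p
codewords = [(0, 0, 0), (1, 1, 1)]            # a trivial repetition code
print(ml_decode((1, 0, 1), codewords, bsc))   # (1, 1, 1)
```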
So let us take an example: we are interested in finding the maximum likelihood decoding rule for a binary symmetric channel. Recall what a binary symmetric channel is: there are two inputs, 0 and 1, and two outputs, 0 and 1. With probability 1 - p a bit is received correctly, and there is a crossover probability p of the bit being flipped. So the question I am asking is: if I have a codeword of length n which is transmitted over a binary symmetric channel with crossover probability p, what should my maximum likelihood decoding rule be? How do I solve it? As we just saw on the previous slide, for a maximum likelihood decoder we have to maximize P(r | v), which is equivalent to maximizing log P(r | v). So let us try to compute log P(r | v).

Before I calculate P(r | v), let me introduce another term, which is called the Hamming distance. The Hamming distance between two codewords, or two n-tuples, r and v, is defined as the number of positions in which r and v differ. For example, let us say r is 1 1 1 0 1 1 and v is 0 1 1 1 0 1. They differ in the first position, the fourth position, and the fifth position, so r and v differ in three locations, and the Hamming distance between r and v is 3 in this case.
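The definition translates directly into code (a minimal sketch; the function name is my own choice):

```python
def hamming_distance(r, v):
    """Number of positions in which the n-tuples r and v differ."""
    assert len(r) == len(v)
    return sum(ri != vi for ri, vi in zip(r, v))

# The example from the lecture: r and v differ in three positions.
print(hamming_distance([1, 1, 1, 0, 1, 1], [0, 1, 1, 1, 0, 1]))  # 3
```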
Ok. Now, when we send an n-bit codeword over a binary symmetric channel, what happens? Some of the bits get flipped, each with the crossover probability p. The Hamming distance d(r, v) specifies the positions where r and v are not the same; when r and v are not the same, those are the locations where an error has occurred. So the number of positions that got flipped as a result of sending this codeword over the binary symmetric channel is d(r, v), and the remaining n - d(r, v) bits did not get changed.

So what is the probability that those d(r, v) bits got flipped? That is p raised to the power d(r, v). And what is the probability that the other n - d(r, v) bits were received correctly? That is (1 - p) raised to the power n - d(r, v). So we can write

P(r | v) = p^{d(r,v)} (1 - p)^{n - d(r,v)}.

If we take the log on both sides, this becomes (n - d(r,v)) log(1 - p) + d(r,v) log p. Now, taking the terms containing d(r, v) out, what we get is

log P(r | v) = d(r,v) log [p / (1 - p)] + n log(1 - p).

To maximize this probability we have to choose our v hat such that this expression is maximized. Now look closely at both of these terms. The second term, n log(1 - p), does not depend on the selection of v: it depends on n, which is the codeword length, and on the crossover probability p. So whatever v we choose, it does not change that term. In other words, to maximize the expression we have to maximize the first term. Now look at that first term closely.
Typically the crossover probability p will be smaller than one half. If that happens, what happens to the ratio p / (1 - p)? It will be some number between 0 and 1. And what happens to the log of a number between 0 and 1? It is a negative quantity. So what we get is that to maximize the expression we have to maximize d(r, v) times a negative coefficient; in other words, a maximum likelihood decoder will choose a v such that minus d(r, v) is maximized, which means we should choose the codeword v in such a way that d(r, v) is minimized, because only when d(r, v) is minimized is minus d(r, v) maximized. That is what we are seeing here: log [p / (1 - p)] is less than zero, so the first term is a negative quantity; when you want to maximize it, d(r, v) should be as small as possible, and the second term does not depend on the selection of v. So this gives us the maximum likelihood decoding rule for the binary symmetric channel, and what is it? We should choose a v such that d(r, v) is minimized. In other words, we should choose the codeword v such that the Hamming distance between the codeword v and the received sequence is minimum. And that makes sense. That is our maximum likelihood decoding rule.
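Putting the pieces together, minimum-distance decoding can be sketched as follows, using the 8 codewords of the (6, 3) example code from earlier in the lecture (the names are illustrative; ties are broken by whichever closest codeword is found first):

```python
def hamming(r, v):
    return sum(ri != vi for ri, vi in zip(r, v))

def min_distance_decode(r, codewords):
    """ML decoding on a BSC: pick the codeword closest to r."""
    return min(codewords, key=lambda v: hamming(r, v))

def encode_63(u):
    u0, u1, u2 = u
    return (u1 ^ u2, u0 ^ u2, u0 ^ u1, u0, u1, u2)

# All 2^3 = 8 codewords of the (6, 3) example code.
codebook = [encode_63(((b >> 2) & 1, (b >> 1) & 1, b & 1)) for b in range(8)]

# Flip the last bit of the codeword 0 1 1 1 0 0 and decode:
print(min_distance_decode((0, 1, 1, 1, 0, 1), codebook))  # (0, 1, 1, 1, 0, 0)
```

Since this code has minimum distance 3, any single bit flip is corrected back to the transmitted codeword.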
So finally I am going to conclude this lecture with definitions of a few error-control strategies. The first one I am going to describe is what is known as FEC, forward error correction. Systems where there is no feedback from the receiver to the transmitter, where transmission happens only in one direction, from transmitter to receiver, we call one-way systems. In those systems the error-correcting codes that are used are known as FEC codes. So when you hear the term FEC code, it means an error-correcting code used for one-way transmission from transmitter to receiver.

Now, in some cases we have a mechanism for feedback from the receiver back to the transmitter. Systems where such feedback exists we call two-way systems, and for these systems the error-control strategy that is used is what is known as automatic repeat request, or ARQ. How does this work? You send your information bits together with a few parity bits used for error detection. At the receiver, using those parity bits, the receiver tries to judge whether there is any error in the received packet. If it finds that there are errors, it sends a negative acknowledgement, and the transmitter retransmits the same packet. That is your automatic repeat request scheme. The idea in ARQ is that you are sending an essentially uncoded packet with some bits for error detection; you are not really sending any bits for error correction. So this is typically useful if the links are very good: you just send packets with some bits for error detection, and occasionally, when a packet is not received correctly, you ask for retransmission.

A strategy that combines both forward error correction and ARQ is known as a hybrid ARQ system.
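The ARQ feedback loop just described can be sketched as a toy stop-and-wait simulation (a minimal sketch under strong assumptions: a single even-parity bit for error detection, which misses an even number of flips, and a random bit-flip channel; all names are my own, not a real protocol implementation):

```python
import random

def add_parity(bits):
    return bits + [sum(bits) % 2]            # append an even-parity bit

def parity_ok(packet):
    return sum(packet) % 2 == 0              # error *detection* only

def send_with_arq(bits, flip_prob=0.2, max_tries=10):
    """Retransmit the same packet until the parity check passes."""
    packet = add_parity(bits)
    for attempt in range(1, max_tries + 1):
        received = [b ^ (random.random() < flip_prob) for b in packet]
        if parity_ok(received):              # "ACK": deliver the data bits
            return received[:-1], attempt
    raise RuntimeError("giving up after repeated NAKs")

random.seed(0)
data, tries = send_with_arq([1, 0, 1, 1])
print(data, "delivered after", tries, "attempt(s)")
```

A real ARQ scheme would use a stronger error-detecting code such as a CRC; the single parity bit here only illustrates the detect-and-retransmit loop.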
In such a system you send coded packets from the transmitter to the receiver. Now, if these coded packets are not received correctly by the receiver, the receiver will send a negative acknowledgement, and then you will either resend the same packet or send some additional parity bits, and using those additional parity bits the receiver will try again to decode the original packet. So hybrid ARQ is a combination of a forward error correction scheme and an automatic repeat request scheme. Typically in a communication system you will see a combination of both forward error correction schemes and hybrid ARQ schemes used.

So with this I am going to conclude this lecture. Thank you.