1
00:00:00,000 --> 00:00:02,360
The following content is
provided under a Creative

2
00:00:02,360 --> 00:00:03,630
Commons license.

3
00:00:03,630 --> 00:00:06,600
Your support will help MIT
OpenCourseWare continue to

4
00:00:06,600 --> 00:00:09,970
offer high quality educational
resources for free.

5
00:00:09,970 --> 00:00:12,870
To make a donation or to view
additional materials from

6
00:00:12,870 --> 00:00:15,280
hundreds of MIT courses, visit
MIT OpenCourseWare at

7
00:00:15,280 --> 00:00:16,530
ocw.mit.edu.

8
00:00:21,070 --> 00:00:23,380
PROFESSOR: Let's get
started then.

9
00:00:23,380 --> 00:00:26,260
We went through Rayleigh
fading very, very

10
00:00:26,260 --> 00:00:29,200
quickly last time.

11
00:00:29,200 --> 00:00:34,110
I want to spend a little more
time on it today since it's

12
00:00:34,110 --> 00:00:40,950
one of the sort of classical
models of wireless channels.

13
00:00:40,950 --> 00:00:43,400
And it's good to understand
how it works.

14
00:00:43,400 --> 00:00:46,070
And it's good to also understand
what all the

15
00:00:46,070 --> 00:00:51,260
assumptions that are made when
one assumes Rayleigh fading,

16
00:00:51,260 --> 00:00:54,220
because there are really
quite a few of them.

17
00:00:54,220 --> 00:00:58,560
OK, so what we're doing is we're
assuming flat fading.

18
00:00:58,560 --> 00:01:00,960
In other words when we talk
about flat fading, we're

19
00:01:00,960 --> 00:01:06,830
talking about fading where if
you generate a discrete model

20
00:01:06,830 --> 00:01:10,500
for the channel, that discrete
model is just going to have

21
00:01:10,500 --> 00:01:12,280
one path in it.

22
00:01:12,280 --> 00:01:16,460
In other words, the output is
going to look like a faded

23
00:01:16,460 --> 00:01:18,290
version of the input.

24
00:01:18,290 --> 00:01:21,700
It'll be shifted in phase
because of the unknown phase

25
00:01:21,700 --> 00:01:22,760
in the channel.

26
00:01:22,760 --> 00:01:26,770
It'll be attenuated by
some random amount.

27
00:01:26,770 --> 00:01:29,590
But if you look at the waveform,
it'll look like the

28
00:01:29,590 --> 00:01:32,860
waveform that you transmitted
except for the noise.

29
00:01:36,260 --> 00:01:39,070
And that's what really is
represented by this one tap

30
00:01:39,070 --> 00:01:41,260
model that we've been
looking at.

31
00:01:41,260 --> 00:01:45,980
In general we've said that you
can model a pretty arbitrary

32
00:01:45,980 --> 00:01:49,590
channel for purposes of
somewhat narrow band

33
00:01:49,590 --> 00:01:56,070
communication by using a
sequence of taps where usually

34
00:01:56,070 --> 00:01:59,510
for want of something better to
do, we model those taps as

35
00:01:59,510 --> 00:02:03,800
being Gaussian random variables,
complex Gaussian

36
00:02:03,800 --> 00:02:13,630
random variables with zero mean,
and which are

37
00:02:13,630 --> 00:02:15,340
circularly symmetric.

38
00:02:15,340 --> 00:02:19,350
And we assume, for no very
good reason, that the taps are

39
00:02:19,350 --> 00:02:21,100
independent of each other.

40
00:02:21,100 --> 00:02:23,370
I mean we have to make some
assumptions or we can't start

41
00:02:23,370 --> 00:02:27,490
to make any progress on trying
to analyze these channels.

42
00:02:27,490 --> 00:02:30,830
But we should realize that all
of these assumptions are

43
00:02:30,830 --> 00:02:33,940
subject to a certain
amount of question.

44
00:02:33,940 --> 00:02:34,300
OK.

45
00:02:34,300 --> 00:02:42,760
When we assume a single tap
model, and these tap models

46
00:02:42,760 --> 00:02:46,630
are always given with the number
of the tap given first,

47
00:02:46,630 --> 00:02:48,340
and the time given second.

48
00:02:48,340 --> 00:02:50,760
So what we're assuming here
is the only tap is

49
00:02:50,760 --> 00:02:52,580
the tap at time 0.

50
00:02:52,580 --> 00:02:56,270
And it's at time 0 because we're
assuming the receiver

51
00:02:56,270 --> 00:02:59,080
timing is locked to transmitter
timing.

52
00:02:59,080 --> 00:03:01,160
And we're just going to get
rid of the zero because

53
00:03:01,160 --> 00:03:03,740
there's only one tap, and
call this G sub m.
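In symbols (my notation, not copied from the slides), the general discrete tapped model and the flat-fading, single-tap special case being assumed here are roughly

\[
v_m \;=\; \sum_{k} \hat g_{k,m}\, u_{m-k} \;+\; z_m
\qquad\longrightarrow\qquad
v_m \;=\; g_m\, u_m \;+\; z_m ,
\]

where the tap index is written first and the time second, \(g_m\) is the single complex, circularly symmetric Gaussian tap, and \(z_m\) is the additive complex Gaussian noise.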

54
00:03:03,740 --> 00:03:08,210
We're also going to pretty much
assume that G sub m stays

55
00:03:08,210 --> 00:03:10,270
relatively constant
for a relatively

56
00:03:10,270 --> 00:03:11,560
long amount of time.

57
00:03:14,440 --> 00:03:17,680
Except as far as this analysis
of Rayleigh fading goes, we

58
00:03:17,680 --> 00:03:19,890
don't have to assume that.

59
00:03:19,890 --> 00:03:23,860
Because in fact, when we're
assuming Rayleigh fading, the

60
00:03:23,860 --> 00:03:28,180
analysis that we're going to
follow, the receiver doesn't

61
00:03:28,180 --> 00:03:30,960
know anything about the channel
at all, except that

62
00:03:30,960 --> 00:03:32,980
it's a single tap model.

63
00:03:32,980 --> 00:03:35,760
And therefore what the receiver
does is it goes

64
00:03:35,760 --> 00:03:39,700
through maximum likelihood
detection assuming that that

65
00:03:39,700 --> 00:03:45,010
single tap is just a complex
Gaussian random variable.

66
00:03:45,010 --> 00:03:48,030
OK when you have a complex
Gaussian random variable as

67
00:03:48,030 --> 00:03:52,060
you've seen in the problem sets
and we've noted a number

68
00:03:52,060 --> 00:03:57,720
of times, the energy in that
complex Gaussian random

69
00:03:57,720 --> 00:04:00,630
variable is exponential.

70
00:04:00,630 --> 00:04:08,620
And the magnitude is just a
square root of the magnitude

71
00:04:08,620 --> 00:04:10,310
squared, namely the energy.

72
00:04:10,310 --> 00:04:13,460
And that has a Rayleigh
distribution which looks like

73
00:04:13,460 --> 00:04:20,750
this, namely the probability
density of how much

74
00:04:20,750 --> 00:04:22,210
response you get.

75
00:04:22,210 --> 00:04:23,980
We'll base this law here.

76
00:04:23,980 --> 00:04:28,810
And the phase of course, is
equally likely to be anything.

77
00:04:28,810 --> 00:04:31,990
Namely the phase is uniform
and random.

78
00:04:31,990 --> 00:04:34,610
This density looks like this.
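A minimal numerical sketch of the point being made here (my own illustration, assuming the tap is normalized so its mean-square value is 1, as the lecture does later; the variable names are hypothetical):

```python
import numpy as np

# Circularly symmetric complex Gaussian tap g with E[|g|^2] = 1:
# real and imaginary parts independent N(0, 1/2).
rng = np.random.default_rng(0)
n = 200_000
g = rng.normal(0, np.sqrt(0.5), n) + 1j * rng.normal(0, np.sqrt(0.5), n)

energy = np.abs(g) ** 2      # exponential, mean 1
magnitude = np.abs(g)        # Rayleigh
phase = np.angle(g)          # uniform on (-pi, pi]

print(energy.mean())                        # ~1.0
print((energy < 0.1).mean())                # ~0.095: a fade of 10 dB or worse
print(magnitude.mean(), np.sqrt(np.pi) / 2) # Rayleigh mean, ~0.886
```

The second printed number is the point of the figure: even with unit average gain, the channel is 10 dB or more below average nearly 10% of the time.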

79
00:04:34,610 --> 00:04:37,910
I wanted to draw this so it
would emphasize the fact that

80
00:04:37,910 --> 00:04:41,290
the magnitude is in fact,
always nonnegative.

81
00:04:41,290 --> 00:04:43,830
But also to emphasize the fact
there's a whole lot of

82
00:04:43,830 --> 00:04:46,670
probability down here
where there's

83
00:04:46,670 --> 00:04:48,990
very, very little channel.

84
00:04:48,990 --> 00:04:52,680
And this is in fact what gives
rise to the fact that if you

85
00:04:52,680 --> 00:04:58,150
try to communicate over Rayleigh
fading, and you don't

86
00:04:58,150 --> 00:05:00,860
make any use of diversity--
and we'll talk later about

87
00:05:00,860 --> 00:05:02,290
diversity--

88
00:05:02,290 --> 00:05:05,660
in fact you can't communicate
very well at all.

89
00:05:05,660 --> 00:05:09,270
And that's because of this very
bad region here where the

90
00:05:09,270 --> 00:05:11,230
channel is very badly faded.

91
00:05:11,230 --> 00:05:16,210
You send a bit on this channel
which is very badly faded, and

92
00:05:16,210 --> 00:05:18,010
there's nothing much the
receiver can do.

93
00:05:18,010 --> 00:05:22,670
And that's the thing we
want to try to get a

94
00:05:22,670 --> 00:05:24,000
real feeling for.

95
00:05:24,000 --> 00:05:26,970
OK so the output of the channel
when you put in an

96
00:05:26,970 --> 00:05:31,230
input U sub m, and we'll think
of this as being a binary

97
00:05:31,230 --> 00:05:33,470
digit for this time being.

98
00:05:33,470 --> 00:05:37,830
So the output is going to be
the input times this tap

99
00:05:37,830 --> 00:05:40,500
variable, which is this complex
Gaussian random

100
00:05:40,500 --> 00:05:44,800
variable plus a noise random
variable, which we're also

101
00:05:44,800 --> 00:05:49,330
assuming is complex Gaussian
and circularly symmetric.

102
00:05:49,330 --> 00:05:53,020
OK so what we have, if you're
going to make two hypotheses

103
00:05:53,020 --> 00:05:56,780
about two possible values of
U sub m, look at what this

104
00:05:56,780 --> 00:05:58,790
random phase does here.

105
00:05:58,790 --> 00:06:06,660
No matter what U sub m you
transmit in one epoch of time,

106
00:06:06,660 --> 00:06:10,130
the channel is going to rotate
this around by a completely

107
00:06:10,130 --> 00:06:11,420
random phase.

108
00:06:11,420 --> 00:06:14,020
It's going to add a noise to
it which has a completely

109
00:06:14,020 --> 00:06:15,330
random phase.

110
00:06:15,330 --> 00:06:17,240
And the output is going
to come out.

111
00:06:17,240 --> 00:06:21,530
And the output has a completely
random phase.

112
00:06:21,530 --> 00:06:25,610
Namely the phase of the output
cannot possibly tell you

113
00:06:25,610 --> 00:06:28,280
anything about what
input you're

114
00:06:28,280 --> 00:06:29,980
putting into the channel.

115
00:06:29,980 --> 00:06:34,530
OK so in other words, in this
model that we're using, the

116
00:06:34,530 --> 00:06:37,990
phase is completely useless.

117
00:06:37,990 --> 00:06:45,270
And if we want to talk about
anything connected to using

118
00:06:45,270 --> 00:06:49,630
likelihoods, the only thing we
can use is the magnitude of

119
00:06:49,630 --> 00:06:50,730
the output.

120
00:06:50,730 --> 00:06:52,110
OK.

121
00:06:52,110 --> 00:06:54,930
Now, why don't we just analyze
it in terms of the magnitude

122
00:06:54,930 --> 00:06:56,060
of the output?

123
00:06:56,060 --> 00:07:00,360
Well when you analyze these
problems, Gaussian things are

124
00:07:00,360 --> 00:07:05,680
usually much easier to analyze
than things like this.

125
00:07:05,680 --> 00:07:07,590
Not always, I mean we
have to get used to

126
00:07:07,590 --> 00:07:09,080
analyzing all of them.

127
00:07:09,080 --> 00:07:12,400
But this particular problem of
Rayleigh fading is really

128
00:07:12,400 --> 00:07:14,310
easier to analyze in
terms of these

129
00:07:14,310 --> 00:07:16,220
Gaussian random variables.

130
00:07:16,220 --> 00:07:19,510
But it's easier to understand
in terms of recognizing that

131
00:07:19,510 --> 00:07:23,320
the only thing you can make any
use is these magnitudes.

132
00:07:28,190 --> 00:07:32,040
OK if we only use one complex
degree of freedom in a signal,

133
00:07:32,040 --> 00:07:37,540
namely if we try to send some
signal and we only use one

134
00:07:37,540 --> 00:07:41,730
input to the channel, then
we only get one output.

135
00:07:41,730 --> 00:07:45,800
Namely we sent U sub
0, we get V sub 0.

136
00:07:45,800 --> 00:07:49,330
And we try to decide from
V sub 0 what was sent.

137
00:07:49,330 --> 00:07:52,850
We're really in a very bad
pickle at that point.

138
00:07:52,850 --> 00:07:56,050
Because the only thing that
makes any sense, since we can

139
00:07:56,050 --> 00:08:01,090
only use the magnitudes, is to
send a zero magnitude or

140
00:08:01,090 --> 00:08:03,530
a positive magnitude.

141
00:08:03,530 --> 00:08:05,550
Magnitudes are positive
anyway.

142
00:08:05,550 --> 00:08:08,170
So if you're going to send
binary signals, this is your

143
00:08:08,170 --> 00:08:11,520
only choice-- if it
makes any sense--

144
00:08:11,520 --> 00:08:13,760
if you make this larger
than zero you're

145
00:08:13,760 --> 00:08:15,980
just wasting energy.

146
00:08:15,980 --> 00:08:18,060
So you only have this choice.

147
00:08:18,060 --> 00:08:20,700
And you can choose the amplitude
a that you're using.

148
00:08:20,700 --> 00:08:23,020
But that's the only thing
that you can do.

149
00:08:23,020 --> 00:08:28,340
OK, this is a very nasty thing
to analyze for one thing.

150
00:08:28,340 --> 00:08:29,820
It gives you a very large error

151
00:08:29,820 --> 00:08:31,780
probability for another thing.

152
00:08:31,780 --> 00:08:34,330
Nobody uses it for
another thing.

153
00:08:34,330 --> 00:08:38,080
And therefore almost all systems
of trying to transmit

154
00:08:38,080 --> 00:08:42,160
in this kind of Rayleigh fading,
always use at least

155
00:08:42,160 --> 00:08:44,520
two sample values.

156
00:08:44,520 --> 00:08:47,180
In other words, instead of
just putting one complex

157
00:08:47,180 --> 00:08:49,970
degree of freedom into the
channel, you're going to put

158
00:08:49,970 --> 00:08:52,930
two complex degrees of freedom
into the channel.

159
00:08:52,930 --> 00:08:55,860
And the thing that we're going
to analyze, because it's the

160
00:08:55,860 --> 00:08:58,700
easiest thing to do in this
discrete time model we've

161
00:08:58,700 --> 00:09:03,930
developed, is to think of
modeling hypothesis 0 as

162
00:09:03,930 --> 00:09:08,320
sending two symbols U sub 0 and
U sub 1. We'll make U sub 0

163
00:09:08,320 --> 00:09:12,380
equal to a, and U sub
1 equal to 0.

164
00:09:12,380 --> 00:09:15,530
And the alternative case, if
we're going to try to send

165
00:09:15,530 --> 00:09:18,300
input 1, this is binary
transmission.

166
00:09:18,300 --> 00:09:21,660
You can talk about more than
binary transmission, but

167
00:09:21,660 --> 00:09:24,400
binary is awful enough.

168
00:09:24,400 --> 00:09:27,680
You get U sub 0 and U sub
1 is equal to 0 and a.

169
00:09:27,680 --> 00:09:31,480
So what you're going to
be doing here in this

170
00:09:31,480 --> 00:09:35,030
pulse-position modulation, is
choosing one of these two

171
00:09:35,030 --> 00:09:38,040
different epochs to
put the data in.

172
00:09:38,040 --> 00:09:41,000
So in one case, you put all your
energy in the first one.

173
00:09:41,000 --> 00:09:44,690
In the other case, you put all
your energy in the second one.

174
00:09:44,690 --> 00:09:47,340
Mathematically, this is
completely equivalent to

175
00:09:47,340 --> 00:09:50,190
frequency-shift keying, that's
completely equivalent to

176
00:09:50,190 --> 00:09:52,320
phase-shift keying.

177
00:09:52,320 --> 00:09:54,740
And if we had a little
more time, we

178
00:09:54,740 --> 00:09:55,890
could talk about that.

179
00:09:55,890 --> 00:09:58,750
And I'll probably put an
appendix in which talks about

180
00:09:58,750 --> 00:10:00,020
those two systems.

181
00:10:00,020 --> 00:10:02,310
But in fact, it's completely
the same thing.

182
00:10:02,310 --> 00:10:04,810
It's just that you're using
different complex degrees of

183
00:10:04,810 --> 00:10:06,910
freedom than we're using here.

184
00:10:06,910 --> 00:10:10,500
So we're really analyzing
FSK and PSK.

185
00:10:10,500 --> 00:10:13,160
And that's where people usually
come up with these

186
00:10:13,160 --> 00:10:17,240
analyses of Rayleigh fading.

187
00:10:17,240 --> 00:10:25,000
OK when we have input 0, what
we receive then is that V sub 0 is

188
00:10:25,000 --> 00:10:29,470
going to be the input a, times
the magnitude of the channel

189
00:10:29,470 --> 00:10:32,060
at time 0, plus a
noise variable.

190
00:10:32,060 --> 00:10:35,290
The noise is complex
Gaussian, remember.

191
00:10:35,290 --> 00:10:39,110
The second output is just going
to be the noise variable.

192
00:10:39,110 --> 00:10:42,120
Alternatively, if we're sending
the second symbol,

193
00:10:42,120 --> 00:10:44,740
which means we put our energy
into the second degree of

194
00:10:44,740 --> 00:10:48,440
freedom, it means that what
we're going to get is that V sub 0

195
00:10:48,440 --> 00:10:50,430
is just going to
be the noise.

196
00:10:50,430 --> 00:10:53,840
And the second output
is going to be the

197
00:10:53,840 --> 00:10:56,020
signal plus the noise.

198
00:10:56,020 --> 00:11:00,740
And remember, both this variable
and this variable are

199
00:11:00,740 --> 00:11:02,790
both complex Gaussian.

200
00:11:02,790 --> 00:11:04,390
The phase doesn't
mean anything.

201
00:11:04,390 --> 00:11:10,080
So what we can use is simply
the magnitude.

202
00:11:10,080 --> 00:11:16,250
OK, so when we have hypothesis
equal to 0, what comes out is

203
00:11:16,250 --> 00:11:20,710
going to be, V sub 0 is going
to be a complex

204
00:11:20,710 --> 00:11:22,120
Gaussian random variable.

205
00:11:22,120 --> 00:11:24,920
Let me introduce a new piece
of notation now.

206
00:11:24,920 --> 00:11:28,870
Because it gets to be a real
mess to constantly talk about

207
00:11:28,870 --> 00:11:32,640
a Gaussian complex random
variable, and talk about its

208
00:11:32,640 --> 00:11:36,310
real part and imaginary part as
being independent Gaussian.

209
00:11:36,310 --> 00:11:41,230
So I'll just call this
normal complex.

210
00:11:41,230 --> 00:11:44,550
And this first thing is the
mean, which has a real and

211
00:11:44,550 --> 00:11:47,510
imaginary part, but it's
zero in most of the

212
00:11:47,510 --> 00:11:48,980
things we deal with.

213
00:11:48,980 --> 00:11:53,850
And the second one is the mean
square value of this random

214
00:11:53,850 --> 00:11:55,660
variable V sub zero.

215
00:11:55,660 --> 00:12:01,640
So this quantity here is now
twice the variance of the real

216
00:12:01,640 --> 00:12:05,635
part of V sub 0, and twice
the variance of the imaginary

217
00:12:05,635 --> 00:12:09,910
part of V sub 0.

218
00:12:09,910 --> 00:12:14,170
We scaled the noise in
a peculiar way here.

219
00:12:14,170 --> 00:12:19,040
And I apologize for all
of the mess that

220
00:12:19,040 --> 00:12:21,510
occurs when we do this.

221
00:12:21,510 --> 00:12:26,100
Because sometimes we think of
the noise as having variance N

222
00:12:26,100 --> 00:12:29,720
sub 0 over 2 in each real and
imaginary degree of freedom.

223
00:12:29,720 --> 00:12:33,310
And therefore N sub 0 in a
complex degree of freedom.

224
00:12:33,310 --> 00:12:37,560
And sometimes we think of it as
having variance N sub 0 W.

225
00:12:37,560 --> 00:12:39,930
Where does that difference
come from?

226
00:12:39,930 --> 00:12:44,060
It's this infernal problem of
the sampling theorem being so

227
00:12:44,060 --> 00:12:47,810
critical in most of the models
that we talk about.

228
00:12:47,810 --> 00:12:51,920
OK because when you use the
sampling theorem, the sin x

229
00:12:51,920 --> 00:12:56,080
over x waveforms that we use are
not orthonormal, they're

230
00:12:56,080 --> 00:12:57,700
orthogonal.

231
00:12:57,700 --> 00:13:03,390
And this factor of W appears
exactly because of that.

232
00:13:03,390 --> 00:13:08,240
They appear because the
magnitude of the signal is a,

233
00:13:08,240 --> 00:13:10,690
and the energy and
the power in the

234
00:13:10,690 --> 00:13:13,020
signal is then a-squared.

235
00:13:13,020 --> 00:13:13,910
OK.

236
00:13:13,910 --> 00:13:17,050
In this case the power in the
signal is not quite a-squared

237
00:13:17,050 --> 00:13:21,950
because we only send energy in
one or the other of alternate

238
00:13:21,950 --> 00:13:23,150
degrees of freedom.

239
00:13:23,150 --> 00:13:31,620
So therefore, if we look at a
time one second, we get W

240
00:13:31,620 --> 00:13:33,810
complex degrees of
freedom to use.

241
00:13:33,810 --> 00:13:41,990
We only send energy in half of
those so that the actual power

242
00:13:41,990 --> 00:13:47,290
that we're sending is a-squared
divided by 2.

243
00:13:47,290 --> 00:13:47,730
OK.

244
00:13:47,730 --> 00:13:50,810
Because of that, when we
normalize the noise the same

245
00:13:50,810 --> 00:13:55,090
way the signal is normalized,
we get this

246
00:13:55,090 --> 00:13:57,600
variance W N sub 0.

247
00:13:57,600 --> 00:14:00,910
If you're confused by that,
everyone is confused by it.

248
00:14:00,910 --> 00:14:04,220
Everyone I know, when they go
through calculations like

249
00:14:04,220 --> 00:14:07,410
this, they always start out
with some arbitrary fudge

250
00:14:07,410 --> 00:14:08,840
factor like this.

251
00:14:08,840 --> 00:14:12,120
And after they get all done,
they think it through or more

252
00:14:12,120 --> 00:14:14,785
likely they look it up in a book
to see what somebody else

253
00:14:14,785 --> 00:14:16,390
has gotten.

254
00:14:16,390 --> 00:14:19,100
And then they sweat about it a
little bit, and they finally

255
00:14:19,100 --> 00:14:21,430
decide what it ought to be.

256
00:14:21,430 --> 00:14:23,190
And that's just the way it is.

257
00:14:23,190 --> 00:14:27,710
It's the problem of having both
the sampling theorem and

258
00:14:27,710 --> 00:14:29,800
orthonormal waveforms
sitting around.

259
00:14:29,800 --> 00:14:34,400
It's also the problem of
multiplying the power by 2 as

260
00:14:34,400 --> 00:14:36,110
soon as we go to passband.

261
00:14:36,110 --> 00:14:40,370
Because both of those things
together generate all of this

262
00:14:40,370 --> 00:14:41,090
difficulty.

263
00:14:41,090 --> 00:14:43,710
But anyway, this is
the way it is.

264
00:14:43,710 --> 00:14:47,820
And the important thing for us
is that what we can have now

265
00:14:47,820 --> 00:14:49,950
is under these two hypotheses.

266
00:14:49,950 --> 00:14:53,300
We just have two Gaussian random
variables, complex

267
00:14:53,300 --> 00:14:55,030
Gaussian random variables.

268
00:14:55,030 --> 00:14:59,690
And in one case, the larger mean
square value is in one.

269
00:14:59,690 --> 00:15:01,500
And in the other case the
larger mean square

270
00:15:01,500 --> 00:15:02,910
value is in the other.

271
00:15:09,900 --> 00:15:10,380
OK.

272
00:15:10,380 --> 00:15:12,470
So just reviewing that.

273
00:15:12,470 --> 00:15:17,450
If H is equal to zero, V sub 0
and V sub 1 are these complex

274
00:15:17,450 --> 00:15:19,250
Gaussian random variables.

275
00:15:19,250 --> 00:15:23,590
If H is equal to one, then we
have this set of Gaussian

276
00:15:23,590 --> 00:15:25,190
random variables.

277
00:15:25,190 --> 00:15:29,560
The probability density of V
sub 0 and V sub 1, and now

278
00:15:29,560 --> 00:15:34,460
it's more convenient to use the
real and imaginary parts

279
00:15:34,460 --> 00:15:36,230
for the Gaussian density.

280
00:15:36,230 --> 00:15:40,260
Anytime you're working problems
of this type, try

281
00:15:40,260 --> 00:15:45,300
both densities using real and
imaginary parts, and using

282
00:15:45,300 --> 00:15:49,300
magnitude and phase, and see
which one is easier.

283
00:15:49,300 --> 00:15:52,400
Here it turns out that the
easiest thing is just to use

284
00:15:52,400 --> 00:15:56,230
the ordinary conventional
density over real and

285
00:15:56,230 --> 00:15:58,060
imaginary parts.

286
00:15:58,060 --> 00:16:01,450
And what we wind up with is
this Gaussian density.

287
00:16:04,130 --> 00:16:10,190
On V sub 0 the density is V sub
0 squared divided by the

288
00:16:10,190 --> 00:16:13,440
variance a-squared
plus W N sub 0.

289
00:16:13,440 --> 00:16:18,760
And on V sub 1, it's this
Gaussian density V sub 1

290
00:16:18,760 --> 00:16:21,250
squared divided by W N sub 0.

291
00:16:21,250 --> 00:16:23,920
Just because here we
have this variance.

292
00:16:23,920 --> 00:16:26,330
Here we have this variance.

293
00:16:26,330 --> 00:16:29,910
OK, on the alternative
hypothesis when H is equal to

294
00:16:29,910 --> 00:16:33,880
one, you have the same thing
but the denominators are

295
00:16:33,880 --> 00:16:35,010
switched around.

296
00:16:35,010 --> 00:16:38,250
When you take the likelihood
ratio, you want to take the

297
00:16:38,250 --> 00:16:41,660
ratio of this, to the
ratio of this.

298
00:16:41,660 --> 00:16:44,420
If you look at it and you take
the logarithm of that, you're

299
00:16:44,420 --> 00:16:46,650
taking the ratio of
this to this.

300
00:16:46,650 --> 00:16:49,140
Incidentally the coefficient
here, you could write it out

301
00:16:49,140 --> 00:16:50,320
if you want to.

302
00:16:50,320 --> 00:16:53,630
It's 1 over the square root of
blah, times 1 over the square

303
00:16:53,630 --> 00:16:54,700
root of blah.

304
00:16:54,700 --> 00:16:58,350
But if you recognize that the
coefficient here has to be the

305
00:16:58,350 --> 00:17:00,550
same as the coefficient
here, you don't have

306
00:17:00,550 --> 00:17:01,850
to worry about it.

307
00:17:01,850 --> 00:17:05,320
So when you take the log
likelihood ratio, you get this

308
00:17:05,320 --> 00:17:06,700
divided by this.

309
00:17:06,700 --> 00:17:08,790
You have the same form
in both cases.

310
00:17:08,790 --> 00:17:13,050
In one case, you have this term
minus this term and this

311
00:17:13,050 --> 00:17:14,840
term minus this term.

312
00:17:14,840 --> 00:17:24,340
And the other case well, for
V sub 0, you have this term

313
00:17:24,340 --> 00:17:25,480
minus this term.

314
00:17:25,480 --> 00:17:28,830
And for V sub 1 you have this
term minus this term.

315
00:17:28,830 --> 00:17:31,440
Because of the symmetry between
the two, this just

316
00:17:31,440 --> 00:17:34,230
comes out to V sub 0 squared
minus V sub 1

317
00:17:34,230 --> 00:17:36,010
squared times a-squared.

318
00:17:36,010 --> 00:17:39,040
And when you do the algebra, the
denominator is a-squared

319
00:17:39,040 --> 00:17:43,330
plus W N sub 0 times
W N sub 0.
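Written out (my transcription of the algebra just described, using the scaling above where each complex noise sample has variance W N sub 0), the log likelihood ratio is

\[
\mathrm{LLR}(v_0, v_1)
\;=\; \ln \frac{f(v_0, v_1 \mid H=0)}{f(v_0, v_1 \mid H=1)}
\;=\; \frac{\bigl(|v_0|^2 - |v_1|^2\bigr)\, a^2}{\bigl(a^2 + W N_0\bigr)\, W N_0},
\]

so only the sign of \(|v_0|^2 - |v_1|^2\) matters for a maximum likelihood decision.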

320
00:17:43,330 --> 00:17:43,850
OK.

321
00:17:43,850 --> 00:17:48,600
What do we do for making a
maximum likelihood decision?

322
00:17:48,600 --> 00:17:53,150
Maximum likelihood is MAP when
the threshold is equal to 1,

323
00:17:53,150 --> 00:17:55,530
which is when the logarithm
of the correct

324
00:17:55,530 --> 00:17:57,510
threshold is equal to 0.

325
00:17:57,510 --> 00:18:02,040
Which says that you take this
quantity, and if it's

326
00:18:02,040 --> 00:18:07,550
nonnegative, you choose
H equals zero.

327
00:18:07,550 --> 00:18:10,490
And if it's negative, you
choose H equals one.

328
00:18:10,490 --> 00:18:15,200
Which says you compare V sub 0
squared and V sub 1 squared.

329
00:18:15,200 --> 00:18:18,700
And whichever one is larger,
that's the one you choose.
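As a one-line sketch in code (hypothetical names; this is just the comparison rule, not anything from the course software):

```python
def ml_decide(v0: complex, v1: complex) -> int:
    # Phase carries no information on this channel, so the ML rule reduces to
    # comparing the received energies in the two pulse positions.
    return 0 if abs(v0) ** 2 >= abs(v1) ** 2 else 1
```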

330
00:18:18,700 --> 00:18:22,660
And if you go back and look at
the problem, it's pretty

331
00:18:22,660 --> 00:18:25,650
obvious that that's what
you want to do anyway.

332
00:18:25,650 --> 00:18:27,770
I mean you'd be very, very
surprised when you're

333
00:18:27,770 --> 00:18:31,280
comparing two Gaussian random
variables where one of them

334
00:18:31,280 --> 00:18:33,410
has a larger variance
than the other.

335
00:18:33,410 --> 00:18:35,490
And on the other hypothesis,
the other one

336
00:18:35,490 --> 00:18:38,650
has the larger variance.

337
00:18:38,650 --> 00:18:41,950
If you came up with any rule
other than to take the

338
00:18:41,950 --> 00:18:46,360
magnitude squares and to then
compare those two magnitude

339
00:18:46,360 --> 00:18:49,410
squares, you would go back and
look at the problem again

340
00:18:49,410 --> 00:18:52,810
realizing you must have
done something wrong.

341
00:18:52,810 --> 00:18:57,960
But anyway when you deal with
problems like this, I advise

342
00:18:57,960 --> 00:19:01,430
you to take log likelihood
ratio anyway.

343
00:19:01,430 --> 00:19:04,800
Because every once in a while
find something which comes

344
00:19:04,800 --> 00:19:07,430
out in a somewhat
peculiar way.

345
00:19:07,430 --> 00:19:10,370
But anyway, here there's
nothing peculiar.

346
00:19:10,370 --> 00:19:12,950
So what we have to do
now is to find the

347
00:19:12,950 --> 00:19:14,680
probability of error.

348
00:19:14,680 --> 00:19:16,480
Now what's the probability
of error?

349
00:19:24,990 --> 00:19:34,250
OK if we actually transmit zero,
then V sub 0 squared is

350
00:19:34,250 --> 00:19:36,160
exponential.

351
00:19:36,160 --> 00:19:42,600
It's exponential
with this mean.

352
00:19:42,600 --> 00:19:45,830
Namely this is the mean
of V sub 0 squared.

353
00:19:45,830 --> 00:19:48,710
And V sub 1 squared is exponential
with this mean.

354
00:19:48,710 --> 00:19:52,340
In other words, this is
a big exponential.

355
00:19:52,340 --> 00:19:54,310
And this is a little
exponential.

356
00:19:54,310 --> 00:19:57,370
The two of them have probability
densities that

357
00:19:57,370 --> 00:19:58,620
look like this.

358
00:20:01,170 --> 00:20:02,610
This is not going to work.

359
00:20:07,550 --> 00:20:11,620
The big one has a probability
density that looks like this.

360
00:20:14,710 --> 00:20:16,680
And the little one--

361
00:20:16,680 --> 00:20:19,220
this is big--

362
00:20:19,220 --> 00:20:21,490
and the little one has
a probability density

363
00:20:21,490 --> 00:20:23,140
that looks like this.

364
00:20:23,140 --> 00:20:28,190
And what you want to do is to
subtract a random variable

365
00:20:28,190 --> 00:20:30,250
with this density from a random

366
00:20:30,250 --> 00:20:33,140
variable with this density.

367
00:20:33,140 --> 00:20:37,630
So you're convolving two
exponential densities with

368
00:20:37,630 --> 00:20:39,300
each other.

369
00:20:39,300 --> 00:20:42,860
And unfortunately, you're taking
the differences of two.

370
00:20:42,860 --> 00:20:47,040
So you're convolving the
negatives of this with this.

371
00:20:47,040 --> 00:20:48,660
And then you have to integrate
the thing.

372
00:20:48,660 --> 00:20:53,290
And it's just something
you have to do.

373
00:20:53,290 --> 00:21:01,080
And the answer is, the
probability of error is then 2

374
00:21:01,080 --> 00:21:05,570
plus a-squared over W N
sub 0, all to the minus 1.
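A Monte Carlo sanity check of that expression (my own sketch; it assumes the scaling used above, E[|G|^2] = 1 and complex noise variance W N sub 0 per sample, and reuses the energy-comparison rule):

```python
import numpy as np

rng = np.random.default_rng(1)

def ppm_rayleigh_pe(a, WN0, trials=500_000):
    # One-tap Rayleigh channel, binary PPM, hypothesis H = 0 transmitted.
    g  = rng.normal(0, np.sqrt(0.5), trials) + 1j * rng.normal(0, np.sqrt(0.5), trials)
    z0 = rng.normal(0, np.sqrt(WN0 / 2), trials) + 1j * rng.normal(0, np.sqrt(WN0 / 2), trials)
    z1 = rng.normal(0, np.sqrt(WN0 / 2), trials) + 1j * rng.normal(0, np.sqrt(WN0 / 2), trials)
    v0, v1 = a * g + z0, z1
    return np.mean(np.abs(v1) ** 2 > np.abs(v0) ** 2)   # ML energy comparison

a, WN0 = 4.0, 1.0
print(ppm_rayleigh_pe(a, WN0))    # ~0.056 by simulation
print(1.0 / (2 + a**2 / WN0))     # 1/18, the closed-form value
```

Doubling a only roughly quarters this number, which is the slow, inverse-SNR behavior being complained about here.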

375
00:21:05,570 --> 00:21:08,250
OK that is really
an awful result.

376
00:21:08,250 --> 00:21:12,090
Because that says that if you
increase the energy that

377
00:21:12,090 --> 00:21:15,910
you're using, the probability of
error goes down very, very,

378
00:21:15,910 --> 00:21:17,390
very slowly.

379
00:21:17,390 --> 00:21:20,320
And if you look at this picture
you think about it a

380
00:21:20,320 --> 00:21:23,490
little bit, it should be clear
that that's the only thing

381
00:21:23,490 --> 00:21:26,050
that can happen.

382
00:21:26,050 --> 00:21:26,420
OK.

383
00:21:26,420 --> 00:21:30,660
Because if you increase
a-squared a little bit, it's

384
00:21:30,660 --> 00:21:32,970
not going to save
you much here.

385
00:21:32,970 --> 00:21:35,670
Because when you have a bigger
a-squared, it's just going to

386
00:21:35,670 --> 00:21:38,570
move down the value
of g bar that's

387
00:21:38,570 --> 00:21:39,960
going to give you trouble.

388
00:21:39,960 --> 00:21:44,900
Namely when you double a, the
value of the magnitude of g that

389
00:21:44,900 --> 00:21:48,130
gives you trouble just goes
down by a factor of two.

390
00:21:48,130 --> 00:21:52,850
When that goes down by a factor
of two, this bad part of

391
00:21:52,850 --> 00:21:57,330
the curve just goes down
in a quadratic way.

392
00:21:57,330 --> 00:22:01,400
Well that's what this
is telling us.

393
00:22:01,400 --> 00:22:03,730
OK.

394
00:22:03,730 --> 00:22:08,010
I mean the thing that we see
is quadratic in a.

395
00:22:08,010 --> 00:22:10,590
So we're sort of assured that
we're doing the right thing.

396
00:22:10,590 --> 00:22:13,750
And we're sort of also assured
that the reason why this

397
00:22:13,750 --> 00:22:17,370
result is so awful, is just that
sometimes the fading is

398
00:22:17,370 --> 00:22:21,640
so bad there's nothing
you can do about it.

399
00:22:21,640 --> 00:22:23,790
OK now the signal power
as we said before is

400
00:22:23,790 --> 00:22:25,150
a-squared over 2.

401
00:22:25,150 --> 00:22:27,040
Since half the inputs
are zero.

402
00:22:27,040 --> 00:22:29,710
So we can put twice as
much energy into the

403
00:22:29,710 --> 00:22:31,440
ones that are non-zero.

404
00:22:31,440 --> 00:22:35,520
And therefore when you put this
in terms of the average

405
00:22:35,520 --> 00:22:39,240
signal energy that you're
sending, what we get is E sub

406
00:22:39,240 --> 00:22:41,070
b over N sub 0.
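A quick check of the bookkeeping (my reading of the normalization; each bit occupies two complex degrees of freedom, i.e. 2/W seconds, and the power is a-squared over 2 as stated above):

\[
P = \frac{a^2}{2}, \qquad T_b = \frac{2}{W}, \qquad
E_b = P\,T_b = \frac{a^2}{W}
\quad\Longrightarrow\quad
\Pr(e) = \Bigl(2 + \frac{a^2}{W N_0}\Bigr)^{-1} = \Bigl(2 + \frac{E_b}{N_0}\Bigr)^{-1}.
\]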

407
00:22:41,070 --> 00:22:42,020
OK.

408
00:22:42,020 --> 00:22:46,550
So that again says exactly the
same thing that this does.

409
00:22:46,550 --> 00:22:51,060
It's worthwhile keeping both
of these notions around,

410
00:22:51,060 --> 00:22:55,460
because we have done something
kind of peculiar here.

411
00:22:55,460 --> 00:22:57,590
I should mention it for you.

412
00:22:57,590 --> 00:23:02,580
As soon as you're looking at a
fading channel, the power that

413
00:23:02,580 --> 00:23:05,380
you're talking about becomes
a little peculiar.

414
00:23:05,380 --> 00:23:07,030
Because remember when
we were looking

415
00:23:07,030 --> 00:23:09,020
at white noise channels?

416
00:23:09,020 --> 00:23:12,600
What we were looking at is the
power at the receiver, the

417
00:23:12,600 --> 00:23:16,650
signal power as received
at the receiver.

418
00:23:16,650 --> 00:23:19,890
Now at this point, we still want
to talk about E sub b.

419
00:23:19,890 --> 00:23:23,320
We still want to isolate this
problem from the attenuation

420
00:23:23,320 --> 00:23:26,590
that occurs just because of
distance and things like that.

421
00:23:26,590 --> 00:23:31,680
Because of that, when we model
g, this original model that we

422
00:23:31,680 --> 00:23:39,220
use here was a model in
which the magnitude

423
00:23:39,220 --> 00:23:41,840
of g had mean one.

424
00:23:41,840 --> 00:23:44,500
And we made it have mean
one so that the energy

425
00:23:44,500 --> 00:23:45,560
would come out right.

426
00:23:45,560 --> 00:23:48,470
Which is another reason why you
get confused with these E

427
00:23:48,470 --> 00:23:50,940
sub b over N sub 0 terms.

428
00:23:50,940 --> 00:23:57,720
OK so anyway, that's
the answer.

429
00:23:57,720 --> 00:24:03,760
And E sub b is in terms of the
received energy using the

430
00:24:03,760 --> 00:24:06,360
average value of fading.

431
00:24:06,360 --> 00:24:09,800
OK we next want to look at
non-coherent detection.

432
00:24:09,800 --> 00:24:12,270
Non-coherent detection
is another thing that

433
00:24:12,270 --> 00:24:15,370
communication engineers
use all the time, talk

434
00:24:15,370 --> 00:24:16,760
about all the time.

435
00:24:16,760 --> 00:24:20,670
And you have to understand what
the difference is between

436
00:24:20,670 --> 00:24:24,060
coherent transmission and
incoherent transmission.

437
00:24:24,060 --> 00:24:27,850
The general idea is that when
you're doing incoherent

438
00:24:27,850 --> 00:24:30,590
detection, you're assuming that
you don't know what the

439
00:24:30,590 --> 00:24:32,590
phase of the channel is.

440
00:24:32,590 --> 00:24:35,960
And somehow you want to do your
detection without knowing

441
00:24:35,960 --> 00:24:37,520
that phase.

442
00:24:37,520 --> 00:24:39,870
The difference between Rayleigh
fading on this kind

443
00:24:39,870 --> 00:24:43,990
of channel and incoherent
detection, is that with

444
00:24:43,990 --> 00:24:48,140
incoherent detection the
receiver is assumed to know

445
00:24:48,140 --> 00:24:52,200
what the magnitude of the
channel is, but not the phase.

446
00:24:52,200 --> 00:24:55,640
It's harder to measure the phase
of the channel than it

447
00:24:55,640 --> 00:24:57,060
is to measure
the magnitude.

448
00:24:57,060 --> 00:25:01,010
Because the phase changes
very, very fast.

449
00:25:01,010 --> 00:25:03,810
If you look at these equations
we have for what the response

450
00:25:03,810 --> 00:25:08,610
of the channel is, you see the
phase changing many, many

451
00:25:08,610 --> 00:25:13,240
times during the time where the
amplitude of the fading

452
00:25:13,240 --> 00:25:15,560
changes by just a little bit.

453
00:25:15,560 --> 00:25:20,100
So a very common assumption that
people make when trying

454
00:25:20,100 --> 00:25:23,910
to do detection is that
it's incoherent.

455
00:25:23,910 --> 00:25:27,250
Partly, people get used to
analyzing incoherent

456
00:25:27,250 --> 00:25:29,480
communication.

457
00:25:29,480 --> 00:25:31,450
And I've seen this
so many times.

458
00:25:31,450 --> 00:25:34,740
And they insist on building
communication systems using

459
00:25:34,740 --> 00:25:36,360
incoherent detection.

460
00:25:36,360 --> 00:25:38,300
They will swear up and down
there's no way you

461
00:25:38,300 --> 00:25:39,740
can measure the phase.

462
00:25:39,740 --> 00:25:42,840
And what they're really saying
is that's the only kind of

463
00:25:42,840 --> 00:25:45,250
communication they understand.

464
00:25:45,250 --> 00:25:48,090
And because that's the only
thing they understand, they

465
00:25:48,090 --> 00:25:51,260
become very, very upset if
anyone suggests that you ought

466
00:25:51,260 --> 00:25:53,050
to try to measure the phase.

467
00:25:53,050 --> 00:25:56,830
But that's a tale
for another day.

468
00:26:00,310 --> 00:26:04,050
OK so now we want to look at the
case where we're assuming

469
00:26:04,050 --> 00:26:07,560
that we know the magnitude
of the channel.

470
00:26:07,560 --> 00:26:11,570
It's just some quantity that
we'll call g tilde.

471
00:26:11,570 --> 00:26:16,390
We're assuming that the same
magnitude occurs both on U sub

472
00:26:16,390 --> 00:26:17,810
0 and U sub 1.

473
00:26:17,810 --> 00:26:20,430
We're going to use the same
transmission system that we

474
00:26:20,430 --> 00:26:24,190
used before, namely
pulse-position modulation.

475
00:26:24,190 --> 00:26:28,340
We'll either put our energy in
U sub 0 or we'll put our

476
00:26:28,340 --> 00:26:29,750
energy in U sub 1.

477
00:26:29,750 --> 00:26:31,830
We'll try to detect
what's going on.

478
00:26:31,830 --> 00:26:34,770
But we just give the detector
this little extra amount of

479
00:26:34,770 --> 00:26:37,920
ability of knowing what
the channel is.

480
00:26:37,920 --> 00:26:42,050
I'm going to talk more later
about how you can use this

481
00:26:42,050 --> 00:26:45,190
knowledge of what the channel
is, and how you can measure

482
00:26:45,190 --> 00:26:46,050
what the channel is.

483
00:26:46,050 --> 00:26:48,710
But for now we just assume
that we know it.

484
00:26:48,710 --> 00:26:51,450
So the phase is random
and independent

485
00:26:51,450 --> 00:26:53,050
of everything else.

486
00:26:53,050 --> 00:27:00,400
So under hypothesis H equals
zero, we have the output of

487
00:27:00,400 --> 00:27:01,060
the channel.

488
00:27:01,060 --> 00:27:07,690
At time 0 it's whatever input
level we put in, a, times what

489
00:27:07,690 --> 00:27:12,190
the channel does to us, times
e to this random phase.

490
00:27:12,190 --> 00:27:13,760
And V sub 1 it's just--

491
00:27:19,650 --> 00:27:23,320
plus Z sub 0.

492
00:27:23,320 --> 00:27:28,050
And in the other case we have
V sub 1 equals Z sub 1.

493
00:27:28,050 --> 00:27:32,355
And under the other hypothesis
V sub 1 is this input with a

494
00:27:32,355 --> 00:27:37,820
random phase but a known
magnitude and again, a

495
00:27:37,820 --> 00:27:39,040
Gaussian random variable.

496
00:27:39,040 --> 00:27:42,250
Phases are independent
of the hypothesis.

497
00:27:42,250 --> 00:27:44,000
The phases are independent
of the

498
00:27:44,000 --> 00:27:45,660
magnitudes which are known.

499
00:27:45,660 --> 00:27:48,470
The phases are independent of
everything and therefore, we

500
00:27:48,470 --> 00:27:51,130
just want to forget
about them.

501
00:27:51,130 --> 00:27:54,350
So the question is, how do we
make a maximum likelihood

502
00:27:54,350 --> 00:27:57,510
decision on this problem?

503
00:27:57,510 --> 00:27:59,080
Well you look at the problem.

504
00:27:59,080 --> 00:28:03,510
And for the same reason as
before you say, it's obvious

505
00:28:03,510 --> 00:28:06,070
how to make a maximum likelihood
decision just from

506
00:28:06,070 --> 00:28:09,450
all the symmetry
that you have.

507
00:28:09,450 --> 00:28:12,770
If the magnitude of V sub 0 is
bigger than the magnitude of V

508
00:28:12,770 --> 00:28:16,000
sub 1, V sub 0 corresponds to
this little bit of extra

509
00:28:16,000 --> 00:28:17,960
energy that you have.

510
00:28:17,960 --> 00:28:22,010
So if V sub 0, the magnitude
of V sub 0 is positive, is

511
00:28:22,010 --> 00:28:24,790
bigger than the magnitude
of V sub 1, you want to

512
00:28:24,790 --> 00:28:26,960
choose H equals 0.

513
00:28:26,960 --> 00:28:30,740
And alternatively you'll want
to choose H equals 1.

514
00:28:30,740 --> 00:28:33,250
It's obvious right?

515
00:28:33,250 --> 00:28:37,040
I've tried for years to find
a way to prove that.

516
00:28:37,040 --> 00:28:40,280
And the only way I can prove
it is by going into Bessel

517
00:28:40,280 --> 00:28:43,340
functions which is the way that
everybody else proves it.

518
00:28:43,340 --> 00:28:46,040
And this seems like absolute
foolishness to me.

519
00:28:46,040 --> 00:28:48,930
And if any of you can find a
way to do this, I would be

520
00:28:48,930 --> 00:28:51,000
delighted to hear it.

521
00:28:51,000 --> 00:28:55,120
I will be in great admiration
of you.

522
00:28:55,120 --> 00:28:57,020
Because I'm sure there has
to be an easy way to

523
00:28:57,020 --> 00:28:58,430
look at this problem.

524
00:28:58,430 --> 00:29:00,860
And I just can't find it.

525
00:29:00,860 --> 00:29:03,440
OK anyway, we're not going to
worry about all these Bessel

526
00:29:03,440 --> 00:29:08,670
functions, because that's just
arithmetic in a sense.

527
00:29:08,670 --> 00:29:12,040
So we're just going to say well
it can be proven using

528
00:29:12,040 --> 00:29:13,380
all of this machinery.

529
00:29:13,380 --> 00:29:16,610
So what we really want to find
is what is the probability of

530
00:29:16,610 --> 00:29:19,420
error when we make
that decision.

531
00:29:19,420 --> 00:29:23,180
And when we make that decision,
namely what we're

532
00:29:23,180 --> 00:29:26,740
looking for is the probability
of this magnitude then, is

533
00:29:26,740 --> 00:29:34,370
bigger than this magnitude
when H equals one is the

534
00:29:34,370 --> 00:29:35,420
correct hypothesis.

535
00:29:35,420 --> 00:29:38,440
Because that's the probability
of error then.

536
00:29:38,440 --> 00:29:40,130
So you have these two
different terms.

537
00:29:40,130 --> 00:29:43,230
You just go through all of the
junk that's in the appendix to

538
00:29:43,230 --> 00:29:46,080
the notes we passed
out last time.

539
00:29:46,080 --> 00:29:48,080
If you want to go through that,
I think it's great.

540
00:29:48,080 --> 00:29:50,510
It's an interesting analysis.

541
00:29:50,510 --> 00:29:52,510
Certainly not going
to do it now.

542
00:29:52,510 --> 00:29:55,490
When you get done doing that you
find out the probability

543
00:29:55,490 --> 00:30:00,540
of error is exactly one half
times e to the minus a-squared

544
00:30:00,540 --> 00:30:04,250
times this known magnitude
of the channel.

545
00:30:04,250 --> 00:30:08,670
I mean, a-squared and g tilde
have to appear together here.

546
00:30:08,670 --> 00:30:11,860
OK because what's coming out of
the channel, the magnitude

547
00:30:11,860 --> 00:30:15,370
of what's coming out of the
channel without noise is just

548
00:30:15,370 --> 00:30:17,510
a times g tilde.

549
00:30:17,510 --> 00:30:19,810
They both come together
everywhere.

550
00:30:19,810 --> 00:30:24,560
And therefore, they have to come
together anytime you're

551
00:30:24,560 --> 00:30:27,860
talking about optimal detection,
probability of

552
00:30:27,860 --> 00:30:29,700
error, or anything else.

553
00:30:29,700 --> 00:30:31,030
So these two appear together.

554
00:30:31,030 --> 00:30:34,350
We have the same noise term down
here as we had before.

555
00:30:34,350 --> 00:30:37,650
Because again we're using a
sampling theorem analysis and

556
00:30:37,650 --> 00:30:42,510
the noise in each of these
random variables is W N sub 0.

557
00:30:42,510 --> 00:30:44,630
OK so that's a little surprising
that that's what

558
00:30:44,630 --> 00:30:47,270
the noise is.

559
00:30:47,270 --> 00:30:52,130
If you knew the phase also, if
the detector knew both the

560
00:30:52,130 --> 00:30:55,200
magnitude and the phase of the
channel, it would be the

561
00:30:55,200 --> 00:30:57,520
conventional Gaussian
problem that we've

562
00:30:57,520 --> 00:31:00,000
analyzed many times before.

563
00:31:00,000 --> 00:31:04,770
And the solution would be that
probability of error is equal

564
00:31:04,770 --> 00:31:08,420
to Q of a-squared times
g tilde squared

565
00:31:08,420 --> 00:31:11,750
divided by W N sub 0.

566
00:31:11,750 --> 00:31:13,890
Now if you remember the
estimates we've come up with

567
00:31:13,890 --> 00:31:17,350
and the bounds we've come up
with on the Q function, the

568
00:31:17,350 --> 00:31:21,940
simplest bound that we came
up with was this.

569
00:31:21,940 --> 00:31:25,490
Namely you take this thing, you
take one half of it, which

570
00:31:25,490 --> 00:31:28,070
is the Gaussian density
with the coefficient.

571
00:31:28,070 --> 00:31:29,970
You multiply it by one half.

572
00:31:29,970 --> 00:31:33,740
So this is the simplest estimate
we can get of this.

573
00:31:33,740 --> 00:31:37,280
On the other hand when this
quantity is large, a much

574
00:31:37,280 --> 00:31:40,930
better estimate of this is to
have that estimate which has a

575
00:31:40,930 --> 00:31:47,030
1 over the square root of pi
times W N sub 0 over a-squared

576
00:31:47,030 --> 00:31:48,560
g tilde squared in it.

577
00:31:48,560 --> 00:31:50,600
So we have that term extra.
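To see the "only a little extra energy" point numerically, here is a small sketch (the two expressions below are my reading of the slide's scaling: coherent Pr(e) = Q(sqrt(x)) and noncoherent Pr(e) = (1/2) e^(-x/2), with x = a-squared g tilde squared over W N sub 0):

```python
import math

def q(x):
    # Gaussian tail function Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

for x in [4.0, 9.0, 16.0, 25.0]:            # received SNR  a^2 * gtilde^2 / (W N0)
    pe_coh = q(math.sqrt(x))                 # coherent detection
    pe_noncoh = 0.5 * math.exp(-x / 2)       # noncoherent detection
    # SNR the noncoherent detector would need to match the coherent one:
    # solve (1/2) exp(-x_needed / 2) = pe_coh.
    x_needed = -2.0 * math.log(2.0 * pe_coh)
    penalty_db = 10 * math.log10(x_needed / x)
    print(x, pe_coh, pe_noncoh, round(penalty_db, 2))
```

The printed dB penalty shrinks as the SNR grows (roughly 1.9 dB at x = 4 down to about 0.6 dB at x = 25), which is the sense in which incoherent detection is "virtually as good" at high signal-to-noise ratio.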

578
00:31:50,600 --> 00:31:53,520
Which says that when this
is large, whenever we're

579
00:31:53,520 --> 00:31:57,430
communicating at all reasonably,
this probability

580
00:31:57,430 --> 00:32:03,230
of error is much smaller than
this probability of error.

581
00:32:03,230 --> 00:32:06,345
However you talk to any
communication engineer, and

582
00:32:06,345 --> 00:32:09,090
they'll say when you have a
good signal-to-noise ratio,

583
00:32:09,090 --> 00:32:11,130
incoherent detection
is virtually as

584
00:32:11,130 --> 00:32:14,300
good as coherent detection.

585
00:32:14,300 --> 00:32:15,550
And why did they say that?

586
00:32:18,640 --> 00:32:22,920
Well it's because the
probability of error goes down

587
00:32:22,920 --> 00:32:24,860
so quickly with energy here.

588
00:32:24,860 --> 00:32:27,330
It's going down as a square
of an exponent.

589
00:32:27,330 --> 00:32:31,590
Well it's going down as an
exponent in the energy.

590
00:32:31,590 --> 00:32:37,160
The question you want to ask is
how much extra energy do I

591
00:32:37,160 --> 00:32:40,120
have to use?

592
00:32:40,120 --> 00:32:43,070
If I'm using coherent detection,
how much more

593
00:32:43,070 --> 00:32:47,520
energy does an incoherent
detector need at the input in

594
00:32:47,520 --> 00:32:49,930
order to get the same results?

595
00:32:49,930 --> 00:32:52,300
And then you see the question
is very different.

596
00:32:52,300 --> 00:32:55,580
Because if I increase this
quantity just a little bit,

597
00:32:55,580 --> 00:33:00,230
this probability of error
goes down like a bat.

598
00:33:00,230 --> 00:33:01,590
OK.

599
00:33:01,590 --> 00:33:05,100
So what happens then, when you
compare these two terms is

600
00:33:05,100 --> 00:33:09,220
that as the signal to noise
ratio gets larger and larger,

601
00:33:09,220 --> 00:33:12,100
the amount of extra energy you
need to make incoherent

602
00:33:12,100 --> 00:33:16,470
detection work as well as
coherent detection goes down

603
00:33:16,470 --> 00:33:17,720
with 1 over a-squared.

604
00:33:21,250 --> 00:33:26,280
Which says that these
communication engineers who

605
00:33:26,280 --> 00:33:30,770
swear that they like incoherent
detection in fact,

606
00:33:30,770 --> 00:33:33,500
have something on their side.

607
00:33:33,500 --> 00:33:35,380
Because they don't
have to assume so

608
00:33:35,380 --> 00:33:36,830
much about the channel.

609
00:33:36,830 --> 00:33:39,620
They have something which
is more robust.

610
00:33:39,620 --> 00:33:43,560
And in fact what's turning out
here, is that even though this

611
00:33:43,560 --> 00:33:47,140
error probability is a little
bigger than this error

612
00:33:47,140 --> 00:33:50,640
probability, there's only a
very negligible amount of

613
00:33:50,640 --> 00:33:54,270
extra dB required to make
the two the same.

614
00:33:54,270 --> 00:33:57,320
So it only costs a little bit of
extra energy to be able to

615
00:33:57,320 --> 00:34:01,710
use incoherent detection instead
of coherent detection.

616
00:34:01,710 --> 00:34:04,640
OK so this is very
strange now.

617
00:34:04,640 --> 00:34:07,810
We have a nice error probability
which is almost as

618
00:34:07,810 --> 00:34:11,140
good as the Gaussian
error probability

619
00:34:11,140 --> 00:34:13,890
using incoherent detection.

620
00:34:13,890 --> 00:34:18,110
This is assuming that the
channel, that the receiver

621
00:34:18,110 --> 00:34:22,440
knows what g tilde is.

622
00:34:22,440 --> 00:34:25,450
But now we go back and think
about this, and look at our

623
00:34:25,450 --> 00:34:29,340
detection rule, which is the
optimal detection rule.

624
00:34:29,340 --> 00:34:32,950
And the optimal detection rule
is no matter what g tilde

625
00:34:32,950 --> 00:34:36,700
happens to be, we compare the
magnitude of V sub 0 with the

626
00:34:36,700 --> 00:34:39,640
magnitude of V sub 1.

627
00:34:39,640 --> 00:34:42,020
In other words, we have analyzed
this assuming that we

628
00:34:42,020 --> 00:34:43,350
know what g tilde is.

629
00:34:43,350 --> 00:34:45,660
We know what the gain
of the channel is.

630
00:34:45,660 --> 00:34:47,900
But the receiver doesn't pay
any attention to it.

631
00:34:50,420 --> 00:34:56,590
OK so now we have this very
peculiar situation where

632
00:34:56,590 --> 00:35:01,910
incoherent detection with a
known value channel is almost

633
00:35:01,910 --> 00:35:06,150
as good as coherent
detection is.

634
00:35:06,150 --> 00:35:10,180
But at the same time Rayleigh
fading gives this awful error

635
00:35:10,180 --> 00:35:12,800
probability.

636
00:35:12,800 --> 00:35:16,580
So now you have the final part
of the argument take this

637
00:35:16,580 --> 00:35:22,270
probability of error, multiply
it by the probability density

638
00:35:22,270 --> 00:35:27,690
of g tilde squared, integrate
it to find out what the

639
00:35:27,690 --> 00:35:31,090
average error probability is
when we average over the

640
00:35:31,090 --> 00:35:32,900
channel fading.

641
00:35:32,900 --> 00:35:34,340
And guess what answer you get?

642
00:35:37,830 --> 00:35:39,690
Well you ought to be able to
guess it if you've looked at

643
00:35:39,690 --> 00:35:41,050
the homework already.

644
00:35:41,050 --> 00:35:43,190
Because in the homework you
actually go through this

645
00:35:43,190 --> 00:35:44,260
integration.

646
00:35:44,260 --> 00:35:47,170
And bingo you get the Rayleigh
fading result.
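For reference, one way that averaging can go (a sketch only, under the same assumed scaling, with u = g tilde squared taken exponential with mean 1; the homework works it out carefully):

\[
\overline{\Pr(e)}
= \int_0^{\infty} \tfrac12\, e^{-a^2 u/(2 W N_0)}\; e^{-u}\, du
= \frac{1}{2}\cdot\frac{1}{1 + a^2/(2 W N_0)}
= \Bigl(2 + \frac{a^2}{W N_0}\Bigr)^{-1}.
\]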

647
00:35:47,170 --> 00:35:50,510
Which says that the problem with
Rayleigh fading is not

648
00:35:50,510 --> 00:35:52,960
any lack of knowledge
about the channel.

649
00:35:52,960 --> 00:35:57,580
Knowing what the channel is
would not give you, even

650
00:35:57,580 --> 00:36:00,010
knowing what the phase is of the
channel would not give you

651
00:36:00,010 --> 00:36:01,020
a lot of extra help.

652
00:36:01,020 --> 00:36:03,790
The only help in knowing what
the phase is, is to get this

653
00:36:03,790 --> 00:36:05,680
result instead of this result.

654
00:36:05,680 --> 00:36:07,870
And even that won't
help you much.

655
00:36:07,870 --> 00:36:13,070
The problem is anytime you're
dealing with Rayleigh fading,

656
00:36:13,070 --> 00:36:17,170
the channel has faded so badly
a large fraction of the time,

657
00:36:17,170 --> 00:36:19,110
that you can't get
an acceptable

658
00:36:19,110 --> 00:36:22,030
probability of error.

659
00:36:22,030 --> 00:36:23,730
OK so now we have to
stop and think.

660
00:36:23,730 --> 00:36:25,080
What do you do about this?

661
00:36:34,190 --> 00:36:37,910
Well you have two general kinds
of techniques to use at

662
00:36:37,910 --> 00:36:39,270
this point.

663
00:36:39,270 --> 00:36:42,330
OK and one of them is to
try to measure the

664
00:36:42,330 --> 00:36:45,490
channel at the receiver.

665
00:36:45,490 --> 00:36:48,320
You take the measurement of the
channel at the receiver.

666
00:36:48,320 --> 00:36:50,530
You send it to the
transmitter.

667
00:36:50,530 --> 00:36:54,810
And the transmitter then does
something to compensate for

668
00:36:54,810 --> 00:36:56,510
the amount of fading.

669
00:36:56,510 --> 00:37:00,210
One thing that the transmitter
can do is anytime the channel

670
00:37:00,210 --> 00:37:03,080
is badly faded, it increases
the amount of

671
00:37:03,080 --> 00:37:05,620
power that it's sending.

672
00:37:05,620 --> 00:37:09,220
That's what typical
voice systems do.

673
00:37:09,220 --> 00:37:11,700
And the other thing that you can
do is change the rate at

674
00:37:11,700 --> 00:37:13,830
which you're transmitting.

675
00:37:13,830 --> 00:37:16,700
You can do all sorts of things
with the transmitter if you

676
00:37:16,700 --> 00:37:18,180
know what the channel is.

677
00:37:18,180 --> 00:37:21,300
You can respond to it
in various ways.

678
00:37:21,300 --> 00:37:24,310
And all these different
communication systems have

679
00:37:24,310 --> 00:37:28,250
various ways of dealing
with that.

680
00:37:28,250 --> 00:37:31,780
And we'll talk a little about
that on Wednesday when we talk

681
00:37:31,780 --> 00:37:33,030
about CDMA.

682
00:37:35,450 --> 00:37:38,270
The other thing you can do
about it is use something

683
00:37:38,270 --> 00:37:40,050
called diversity.

684
00:37:40,050 --> 00:37:46,270
And the idea of diversity is
that instead of sending this

685
00:37:46,270 --> 00:37:49,580
one bit, trying to use as few
degrees of freedom as

686
00:37:49,580 --> 00:37:54,610
possible, you try to send your
bits using as many degrees of

687
00:37:54,610 --> 00:37:56,320
freedom as possible.

688
00:37:56,320 --> 00:37:59,560
If you can use a large number of
degrees of freedom, and if

689
00:37:59,560 --> 00:38:02,280
the fading is independent on
these different degrees of

690
00:38:02,280 --> 00:38:05,790
freedom, then in fact
you gain something.

691
00:38:05,790 --> 00:38:09,490
Because instead of having one
random variable which can

692
00:38:09,490 --> 00:38:14,350
totally cripple you, you have
lots of random variables.

693
00:38:14,350 --> 00:38:16,670
And if any one of them is
good, you get through.

694
00:38:16,670 --> 00:38:20,960
So you get a benefit
out of diversity.
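A rough way to see the benefit (my own sketch, not the lecture's numbers): with L independent Rayleigh-faded branches, the chance that every branch is deeply faded falls off like the L-th power of the single-branch fade probability.

```python
import math

# Probability that all L independent branches have |g|^2 below a threshold t,
# when each |g|^2 is exponential with mean 1: (1 - exp(-t))^L ~ t^L for small t.
t = 0.1                                  # a fade of 10 dB or worse
for L in [1, 2, 4]:
    print(L, (1 - math.exp(-t)) ** L)    # ~0.095, ~0.0091, ~0.000082
```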

695
00:38:20,960 --> 00:38:23,510
OK so that's our next topic.

696
00:38:27,460 --> 00:38:29,860
Namely, how do you measure
the channel?

697
00:38:29,860 --> 00:38:33,000
Because if you're going to use
diversity it's a help to know

698
00:38:33,000 --> 00:38:34,010
the channel.

699
00:38:34,010 --> 00:38:37,080
If you're going to use coding,
coding is just another way to

700
00:38:37,080 --> 00:38:39,290
get diversity.

701
00:38:39,290 --> 00:38:42,180
Again your coding will work
better if you know what the

702
00:38:42,180 --> 00:38:43,950
channel is.

703
00:38:43,950 --> 00:38:48,590
So somehow we would like to be
able to measure the channel

704
00:38:48,590 --> 00:38:51,480
and send it back to the
transmitter if we want to

705
00:38:51,480 --> 00:38:55,830
alter the power, the rate of
the transmitter, and to let

706
00:38:55,830 --> 00:39:01,780
the receiver use it if the
receiver is going to.

707
00:39:01,780 --> 00:39:06,430
Well we have seen that when
you use one bit on just a

708
00:39:06,430 --> 00:39:10,060
couple of degrees of freedom,
knowing what the channel is

709
00:39:10,060 --> 00:39:11,780
does not give you much help.

710
00:39:11,780 --> 00:39:15,890
If you use coding or if you use
one bit and spread it over

711
00:39:15,890 --> 00:39:18,920
a large number of degrees of
freedom, then knowing what the

712
00:39:18,920 --> 00:39:21,740
channel is gives you
a great deal.

713
00:39:21,740 --> 00:39:25,240
This is one of the basic
confusions that everyone has

714
00:39:25,240 --> 00:39:27,170
when they deal with
Rayleigh fading.

715
00:39:27,170 --> 00:39:29,400
Because when you look at a
Rayleigh faded channel, the

716
00:39:29,400 --> 00:39:32,580
first thing you analyze is this
incredibly small number

717
00:39:32,580 --> 00:39:34,160
of degrees of freedom.

718
00:39:34,160 --> 00:39:35,770
And you say wow, that's awful.

719
00:39:35,770 --> 00:39:39,880
There's no way to
deal with that.

720
00:39:39,880 --> 00:39:42,500
And then you start looking
for something.

721
00:39:42,500 --> 00:39:45,510
And you say, well diversity
helps me.

722
00:39:45,510 --> 00:39:48,140
But in general, this is the
general scheme of things that

723
00:39:48,140 --> 00:39:50,110
we're going to use.

724
00:39:50,110 --> 00:39:51,020
OK.

725
00:39:51,020 --> 00:39:54,600
So as we said channel
measurement helps if diversity

726
00:39:54,600 --> 00:39:55,820
is available.

727
00:39:55,820 --> 00:39:58,940
Why does that help when
diversity is available?

728
00:39:58,940 --> 00:40:02,790
OK, think of sending
this one bit.

729
00:40:02,790 --> 00:40:06,000
You get one reception, and then

730
00:40:06,000 --> 00:40:07,970
you get another reception.

731
00:40:07,970 --> 00:40:11,200
And on this reception, you
get one amount of fading.

732
00:40:11,200 --> 00:40:15,470
On this reception you get
another amount of fading.

733
00:40:15,470 --> 00:40:19,350
If I don't know how much fading
there is it doesn't

734
00:40:19,350 --> 00:40:21,010
help me an awful lot.

735
00:40:21,010 --> 00:40:22,260
It helps me some.

736
00:40:22,260 --> 00:40:25,300
But if I know that this channel
is faded badly and

737
00:40:25,300 --> 00:40:28,320
this channel is not faded, then
I'm going to use what

738
00:40:28,320 --> 00:40:32,310
comes out here instead of
what comes out here.

739
00:40:32,310 --> 00:40:35,990
And then my detector is going
to work much, much better.
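
A small simulation of that comparison, only a sketch under assumed values (two Rayleigh branches, average SNR of 5, numpy): both receivers are assumed to know the phase of the branch they use, but only the first also knows the strengths and so picks the less-faded reception.

import numpy as np

rng = np.random.default_rng(1)
n, snr = 200_000, 5.0                        # number of bits and average per-branch SNR (assumed)
bits = rng.choice([-1.0, 1.0], size=n)

g = (rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))) / np.sqrt(2)
z = (rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))) / np.sqrt(2 * snr)
v = g * bits[:, None] + z                    # the same bit received over two independent fades

rows = np.arange(n)
best = np.argmax(np.abs(g), axis=1)          # knows the strengths: use the good reception
rand = rng.integers(0, 2, size=n)            # does not know them: pick a reception blindly
err_known = np.mean(np.sign(np.real(np.conj(g[rows, best]) * v[rows, best])) != bits)
err_blind = np.mean(np.sign(np.real(np.conj(g[rows, rand]) * v[rows, rand])) != bits)
print("strengths known:", err_known, " strengths unknown:", err_blind)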

740
00:40:35,990 --> 00:40:40,100
When you look at diversity
results, always ask yourself a

741
00:40:40,100 --> 00:40:42,360
couple of questions.

742
00:40:42,360 --> 00:40:45,620
Is the detector using knowledge
of what the strength

743
00:40:45,620 --> 00:40:50,420
of the channel is on these
two diversity outputs?

744
00:40:50,420 --> 00:40:53,360
Is the transmitter using its
knowledge of what those

745
00:40:53,360 --> 00:40:54,430
channels are?

746
00:40:54,430 --> 00:40:57,640
You get very different results
for diversity depending on the

747
00:40:57,640 --> 00:41:02,670
answers to both of
those questions.

748
00:41:02,670 --> 00:41:08,660
OK, so if you have a multi-tap
model for a channel--

749
00:41:08,660 --> 00:41:13,000
OK remember the multi-tap models
that we came up with.

750
00:41:13,000 --> 00:41:18,970
We were looking at transmission
using multipath.

751
00:41:18,970 --> 00:41:22,470
And we had multipath in
different ranges of delay.

752
00:41:22,470 --> 00:41:29,950
We came up with a model which
gave us multiple taps for a

753
00:41:29,950 --> 00:41:32,440
discrete model of the channel.

754
00:41:32,440 --> 00:41:35,560
You get a large number of taps
if you're using broadband

755
00:41:35,560 --> 00:41:36,430
communication.

756
00:41:36,430 --> 00:41:40,210
Because using broadband
communication 1 over W becomes

757
00:41:40,210 --> 00:41:41,250
very small.

758
00:41:41,250 --> 00:41:45,200
And therefore these ranges of
delay become very small.

759
00:41:45,200 --> 00:41:48,030
And if you're using very narrow
band communication,

760
00:41:48,030 --> 00:41:50,180
that's when you have
the flat fading.

761
00:41:50,180 --> 00:41:54,080
Namely, flat fading is not fading that
is flat everywhere; it's fading which is

762
00:41:54,080 --> 00:41:56,740
flat over the bandwidth
that you're using.

763
00:41:56,740 --> 00:41:59,370
So if you use a broader
bandwidth and you have

764
00:41:59,370 --> 00:42:02,580
multiple taps, then these taps
are going to be independent of

765
00:42:02,580 --> 00:42:03,580
each other.

766
00:42:03,580 --> 00:42:06,170
And you automatically
have diversity.
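
As a rough back-of-the-envelope check of why a wider bandwidth gives several taps (the delay spread and bandwidths below are invented for the example, not from the lecture):

# number of taps is roughly the delay spread divided by 1/W
delay_spread = 2e-6                  # seconds of multipath spread (assumed)
for W in (200e3, 2e6, 20e6):         # signal bandwidths in Hz (assumed)
    print(f"W = {W/1e6:g} MHz -> about {max(1, round(delay_spread * W))} taps")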

767
00:42:06,170 --> 00:42:08,520
So the question is how
do you use that.

768
00:42:08,520 --> 00:42:10,300
Well if you're going to use
it, you better be able to

769
00:42:10,300 --> 00:42:12,270
measure it.

770
00:42:12,270 --> 00:42:16,310
OK so now we're going to try to
figure out how to do that

771
00:42:16,310 --> 00:42:17,560
measurement.

772
00:42:20,170 --> 00:42:24,600
And the first thing to do is
to assume the simplest

773
00:42:24,600 --> 00:42:26,180
possible thing.

774
00:42:26,180 --> 00:42:29,610
I mean, suppose you know how
many taps the channel has.

775
00:42:29,610 --> 00:42:32,850
Suppose it has k sub
0 channel taps.

776
00:42:32,850 --> 00:42:35,400
So the channel looks like
this, G sub 0, G sub

777
00:42:35,400 --> 00:42:38,120
1, and G sub 2.

778
00:42:38,120 --> 00:42:42,170
You're transmitting a
sequence of inputs.

779
00:42:42,170 --> 00:42:45,960
OK remember all of this stuff
came from trying to model a

780
00:42:45,960 --> 00:42:49,430
channel in terms of discrete
inputs, where you're sending

781
00:42:49,430 --> 00:42:52,810
one input every 1 over
W seconds.

782
00:42:52,810 --> 00:42:55,270
So you put in a sequence
of inputs.

783
00:42:55,270 --> 00:42:58,510
You have these three different
channel taps here.

784
00:42:58,510 --> 00:43:05,920
And what comes out when you put
in a single bit here or a

785
00:43:05,920 --> 00:43:08,120
single symbol?

786
00:43:08,120 --> 00:43:11,110
You get something out from
this tap right away.

787
00:43:11,110 --> 00:43:15,430
You get something out here
one time unit later.

788
00:43:15,430 --> 00:43:19,720
You get something out here,
one epoch still later.

789
00:43:19,720 --> 00:43:22,870
So all of these outputs
get added up.

790
00:43:22,870 --> 00:43:32,290
And therefore the output here
at time m, is the input at

791
00:43:32,290 --> 00:43:39,160
time m times this tap, plus the
input at time m minus 1

792
00:43:39,160 --> 00:43:42,800
times this tap, plus the
input at time m minus

793
00:43:42,800 --> 00:43:44,740
2 times this tap.

794
00:43:44,740 --> 00:43:48,370
Because it takes these inputs
that long to go through there.

795
00:43:48,370 --> 00:43:52,360
All this is is just digital
convolution, OK.

796
00:43:52,360 --> 00:43:54,370
I'm just drawing it out in
the figure so you see

797
00:43:54,370 --> 00:43:55,090
what's going on.

798
00:43:55,090 --> 00:43:57,990
Because otherwise you tend to
think everything happens at

799
00:43:57,990 --> 00:43:59,560
one instant of time.

800
00:43:59,560 --> 00:44:03,080
Then we're adding this
white Gaussian noise.

801
00:44:03,080 --> 00:44:05,230
When we're talking about
digital systems, white

802
00:44:05,230 --> 00:44:08,600
Gaussian noise just means that
each of these random variables

803
00:44:08,600 --> 00:44:11,050
is independent of every
other random variable.

804
00:44:11,050 --> 00:44:12,760
They all have the
same variance.

805
00:44:12,760 --> 00:44:14,900
And the real parts and
imaginary parts

806
00:44:14,900 --> 00:44:15,960
have the same variance.

807
00:44:15,960 --> 00:44:17,310
And they're independent
of each other.

808
00:44:17,310 --> 00:44:20,660
Namely these are all normal
random variables.
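
A minimal sketch of this discrete model (tap values, sizes, and the noise variance below are assumptions for illustration): the output is the digital convolution of the inputs with the taps, plus circularly symmetric complex Gaussian noise.

import numpy as np

rng = np.random.default_rng(2)
k0 = 3                                                   # k sub 0 channel taps
g = (rng.standard_normal(k0) + 1j * rng.standard_normal(k0)) / np.sqrt(2)
u = rng.choice([-1.0, 1.0], size=20)                     # one input every 1/W seconds

sigma2 = 0.01                                            # noise variance per complex sample (assumed)
out_len = len(u) + k0 - 1
z = np.sqrt(sigma2 / 2) * (rng.standard_normal(out_len) + 1j * rng.standard_normal(out_len))

v = np.convolve(u, g) + z        # output at time m: g[0]u[m] + g[1]u[m-1] + g[2]u[m-2] + noise
print(v[:5])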

809
00:44:20,660 --> 00:44:27,270
Since we're sending a, or minus
a, or something with

810
00:44:27,270 --> 00:44:30,960
magnitude a, we want to divide
by a out here, if we want to

811
00:44:30,960 --> 00:44:34,940
figure out anything about
what these taps are.

812
00:44:34,940 --> 00:44:39,550
OK so suppose that what we send
now is a bunch of zeros,

813
00:44:39,550 --> 00:44:43,240
followed by a single input,
followed by a bunch of zeros.

814
00:44:43,240 --> 00:44:44,730
What comes out?

815
00:44:44,730 --> 00:44:48,320
Well the thing that comes out is
at the point that this big

816
00:44:48,320 --> 00:44:57,630
input comes in, we get a times
G sub 0 out at the time

817
00:44:57,630 --> 00:44:58,960
that you put in a.

818
00:44:58,960 --> 00:45:01,430
I mean we're leaving out
propagation delay here.

819
00:45:01,430 --> 00:45:04,900
We get a times G sub
1, the next epoch.

820
00:45:04,900 --> 00:45:08,370
Then we get a times G sub
2, the next epoch.

821
00:45:08,370 --> 00:45:11,770
And by that time the input is
completely out of the filter.

822
00:45:11,770 --> 00:45:14,270
And we get zeros after that.

823
00:45:14,270 --> 00:45:19,410
So if you put in a bunch of
zeros and then a single a, you

824
00:45:19,410 --> 00:45:21,550
get a nice measurement
of the channel.

825
00:45:21,550 --> 00:45:25,050
There's Gaussian noise added
to each of these inputs.

826
00:45:25,050 --> 00:45:28,710
But in fact you do
get a reading of

827
00:45:28,710 --> 00:45:29,960
each channel output.

828
00:45:29,960 --> 00:45:34,980
When you divide these by the a
here, then you get something

829
00:45:34,980 --> 00:45:42,410
which is a measurement of the
appropriate tap G plus

830
00:45:42,410 --> 00:45:44,690
Gaussian noise on it.
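
In code, that single-impulse probe looks roughly like this (the tap values, the amplitude a, and the noise level are assumed for the sketch): divide the outputs that appear while the probe is in the filter by a, and each one is the corresponding tap plus scaled noise.

import numpy as np

rng = np.random.default_rng(3)
g = np.array([0.7 - 0.2j, 0.4 + 0.5j, -0.1 + 0.3j])     # "true" taps, assumed for the example
a, sigma2 = 2.0, 0.01

u = np.zeros(11)
u[5] = a                                                 # zeros, then a single a, then zeros
z = np.sqrt(sigma2 / 2) * (rng.standard_normal(13) + 1j * rng.standard_normal(13))
v = np.convolve(u, g) + z                                # 11 + 3 - 1 = 13 outputs

g_hat = v[5:8] / a                                       # the outputs at the probe's delay, normalized
print(np.round(g_hat, 3), "vs true", np.round(g, 3))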

831
00:45:44,690 --> 00:45:47,930
OK now you try to make an
estimation from this.

832
00:45:47,930 --> 00:45:50,040
And the trouble is we don't
want to say much about

833
00:45:50,040 --> 00:45:52,020
estimation theory.

834
00:45:52,020 --> 00:45:55,430
But in fact the notes give you
a very brief introduction

835
00:45:55,430 --> 00:45:57,690
into estimation.

836
00:45:57,690 --> 00:46:00,080
There are two well-known
kinds of estimation.

837
00:46:00,080 --> 00:46:03,550
One of them is maximum
likelihood estimation.

838
00:46:03,550 --> 00:46:07,530
And the other one is minimum
mean square error estimation.

839
00:46:07,530 --> 00:46:11,510
Maximum likelihood estimation is
in fact exactly the same as

840
00:46:11,510 --> 00:46:13,630
maximum likelihood detection.

841
00:46:13,630 --> 00:46:16,000
Namely you look at the
likelihoods which is the

842
00:46:16,000 --> 00:46:21,910
probabilities of the outputs
given the inputs.

843
00:46:21,910 --> 00:46:23,630
And what's the input
in this problem?

844
00:46:27,430 --> 00:46:30,140
The input is these channel
variables.

845
00:46:30,140 --> 00:46:33,840
Because that's the thing we're
trying to measure in this

846
00:46:33,840 --> 00:46:34,860
measurement problem.

847
00:46:34,860 --> 00:46:37,310
We assume that the probing
signal is known.

848
00:46:37,310 --> 00:46:40,035
It's just a bunch of zeros,
followed by a, followed by a

849
00:46:40,035 --> 00:46:41,100
bunch of zeros.

850
00:46:41,100 --> 00:46:42,220
So we know that.

851
00:46:42,220 --> 00:46:45,070
We're trying to estimate
these things.

852
00:46:45,070 --> 00:46:48,020
So these are the variables that
we're trying to estimate.

853
00:46:48,020 --> 00:46:50,880
So we try to find the
probability density of the

854
00:46:50,880 --> 00:46:54,810
output conditional on the
knowledge of G sub 0.

855
00:46:54,810 --> 00:46:58,560
Which is just the Gaussian
density shifted to

856
00:46:58,560 --> 00:47:00,380
a times G sub 0.

857
00:47:04,130 --> 00:47:11,580
You then look at the maximum
likelihood estimate of G. So

858
00:47:11,580 --> 00:47:17,120
you're looking at the value
you can put in to maximize

859
00:47:17,120 --> 00:47:25,560
this likelihood, which comes out
here as a times G sub 2.

860
00:47:25,560 --> 00:47:29,020
And then at this appropriate
time, you're looking at G sub

861
00:47:29,020 --> 00:47:32,690
2 here, plus a noise
random variable.

862
00:47:32,690 --> 00:47:36,160
And since the noise is zero
mean, this quantity here is in

863
00:47:36,160 --> 00:47:39,570
fact the best estimate in
terms of the maximum

864
00:47:39,570 --> 00:47:41,260
likelihood that you can get.

865
00:47:41,260 --> 00:47:44,395
If you assume that this is a
Gaussian random variable and

866
00:47:44,395 --> 00:47:47,520
this is a Gaussian random
variable, you can solve a

867
00:47:47,520 --> 00:47:50,310
minimum mean square error
estimation problem.

868
00:47:50,310 --> 00:47:55,210
It's much like the MAP problem
except these random variables

869
00:47:55,210 --> 00:47:57,770
are all continuous here.

870
00:47:57,770 --> 00:48:01,250
But it's a little different from
the MAP problem in the

871
00:48:01,250 --> 00:48:05,400
sense that you can't have
equally likely inputs where

872
00:48:05,400 --> 00:48:07,570
you have a continuous
random variable.

873
00:48:07,570 --> 00:48:09,700
If you try to make them all
equally probable,

874
00:48:09,700 --> 00:48:11,990
the only possible density
value you can have is zero.

875
00:48:11,990 --> 00:48:13,840
Because it has to
extend forever.

876
00:48:13,840 --> 00:48:19,040
So anyway, maximum likelihood
estimation just normalizes

877
00:48:19,040 --> 00:48:22,820
what you get so that in the
absence of Gaussian noise, you

878
00:48:22,820 --> 00:48:25,120
would get the variable
you're looking for.

879
00:48:25,120 --> 00:48:26,720
And then ignore the
Gaussian noise,

880
00:48:26,720 --> 00:48:28,470
and that's your estimate.
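
For a single tap the two estimates are easy to write down; the sketch below assumes the observation v = a·g + z with an assumed prior tap variance and noise variance, which the lecture does not spell out. The maximum likelihood estimate just normalizes by a; the linear minimum mean square error estimate also shrinks toward the zero mean by a factor that depends on the noise.

import numpy as np

a, var_g, n0 = 2.0, 1.0, 0.5     # probe amplitude, prior tap variance, noise variance (all assumed)
rng = np.random.default_rng(4)
g = np.sqrt(var_g / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
z = np.sqrt(n0 / 2) * (rng.standard_normal() + 1j * rng.standard_normal())
v = a * g + z

g_ml = v / a                                             # normalize and ignore the noise
g_mmse = (a * var_g / (a**2 * var_g + n0)) * v           # shrink toward the prior mean of zero
print("true", g, " ML", g_ml, " MMSE", g_mmse)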

881
00:48:28,470 --> 00:48:29,050
OK.

882
00:48:29,050 --> 00:48:32,630
If you want to do this and you
want to use the strategy, it

883
00:48:32,630 --> 00:48:34,860
looks like a very
nice strategy.

884
00:48:34,860 --> 00:48:38,350
But what's the problem in it?

885
00:48:38,350 --> 00:48:41,940
If this sequence is somewhat
longer, you need a whole lot

886
00:48:41,940 --> 00:48:45,200
of zeros in between each
probing signal.

887
00:48:45,200 --> 00:48:48,590
And what that means is you're
going to be using your energy

888
00:48:48,590 --> 00:48:52,550
and clumping it all up
into the small number

889
00:48:52,550 --> 00:48:53,920
of degrees of freedom.

890
00:48:53,920 --> 00:48:57,590
Which means you're going to be
sending a lot of energy at one

891
00:48:57,590 --> 00:48:59,990
instant of time and then nothing
for a long period of

892
00:48:59,990 --> 00:49:03,100
time, then a very big signal for
a while, then nothing for a

893
00:49:03,100 --> 00:49:06,630
long period of time,
and so forth.

894
00:49:06,630 --> 00:49:11,010
If you do that, the FCC
is really going

895
00:49:11,010 --> 00:49:12,610
to be down on you.

896
00:49:12,610 --> 00:49:15,970
Because you're not supposed to
send too much energy in any

897
00:49:15,970 --> 00:49:19,570
small amount of time or any
small amount of frequency.

898
00:49:19,570 --> 00:49:21,950
So you're supposed to spread
things out a little bit.

899
00:49:21,950 --> 00:49:24,470
You say OK, that doesn't
work too well.

900
00:49:24,470 --> 00:49:25,960
What am I going to do?

901
00:49:25,960 --> 00:49:30,100
How can I choose a sequence of
inputs so they have relatively

902
00:49:30,100 --> 00:49:34,850
constant amplitude, but at the
same time so that when I go

903
00:49:34,850 --> 00:49:38,700
through this kind of filter, I
can sort out what's coming

904
00:49:38,700 --> 00:49:41,290
from here, and what's coming
from here, and

905
00:49:41,290 --> 00:49:43,480
what's coming from here.

906
00:49:43,480 --> 00:49:46,020
Well it turns out that the
answer to that question is to

907
00:49:46,020 --> 00:49:48,880
use a pseudonoise sequence.

908
00:49:48,880 --> 00:49:52,540
And the next thing I want to
do is to give you some idea

909
00:49:52,540 --> 00:49:55,530
about why these pseudonoise
sequences work.

910
00:49:58,310 --> 00:50:00,410
OK so we'll think in terms
of vectors now.

911
00:50:04,520 --> 00:50:09,240
OK so we have a vector input, u
sub 1, u sub 2, up to u sub

912
00:50:09,240 --> 00:50:11,210
n, a vector of length n.

913
00:50:11,210 --> 00:50:15,670
So we're putting these
discrete signals in

914
00:50:15,670 --> 00:50:17,610
one after the other.

915
00:50:17,610 --> 00:50:19,960
We're passing them through
this, which is a digital

916
00:50:19,960 --> 00:50:21,140
filter now.

917
00:50:21,140 --> 00:50:26,350
So what comes out here V prime
is just a convolution of u and

918
00:50:26,350 --> 00:50:30,370
G. We then add the
noise to it.

919
00:50:30,370 --> 00:50:32,670
I claim that what we ought
to do is use the matched

920
00:50:32,670 --> 00:50:35,140
filter here to u.

921
00:50:35,140 --> 00:50:39,950
And if I use a matched filter
to u here, that matched

922
00:50:39,950 --> 00:50:43,830
filter, if I'm using a
pseudonoise sequence, is going

923
00:50:43,830 --> 00:50:48,820
to bingo give me the filter that
I started out with, plus

924
00:50:48,820 --> 00:50:49,910
some noise.

925
00:50:49,910 --> 00:50:51,780
OK why is that?

926
00:50:51,780 --> 00:50:55,480
The property that pseudonoise
sequences have, if I choose

927
00:50:55,480 --> 00:51:01,070
each of the inputs to have the
magnitude of a, and think of

928
00:51:01,070 --> 00:51:04,150
it as being real plus or minus
a, which is what people

929
00:51:04,150 --> 00:51:05,620
usually do.

930
00:51:05,620 --> 00:51:09,090
If you look at the correlation
of this sequence, namely the

931
00:51:09,090 --> 00:51:14,560
correlation of u sub m with the
complex conjugate of u sub

932
00:51:14,560 --> 00:51:20,540
m spaced a little bit, PN
sequences have the property

933
00:51:20,540 --> 00:51:24,720
that this correlation function
looks like an impulse.

934
00:51:24,720 --> 00:51:25,610
OK.

935
00:51:25,610 --> 00:51:29,290
Now how you find sequences that
have that property is

936
00:51:29,290 --> 00:51:31,060
another question.

937
00:51:31,060 --> 00:51:32,630
But in fact they do exist.

938
00:51:32,630 --> 00:51:34,020
There are lots of them.

939
00:51:34,020 --> 00:51:35,570
They're easy to find.

940
00:51:38,730 --> 00:51:41,910
And they have this very
nice property.
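
One standard family is the maximal-length shift-register (m-) sequence; the short sketch below uses one common primitive polynomial (x^5 + x^2 + 1, an assumed but standard choice) and checks that the periodic autocorrelation of the resulting plus-or-minus-one sequence is an impulse, up to the small constant off-peak value of minus 1.

import numpy as np

# m-sequence of period 2^5 - 1 = 31 from the recurrence s[n] = s[n-3] XOR s[n-5]
s = [1, 0, 0, 0, 0]
while len(s) < 31:
    s.append(s[-3] ^ s[-5])
u = np.array([1.0 if b else -1.0 for b in s])            # map bits to +/- 1 (amplitude a = 1)

corr = np.array([np.sum(u * np.roll(u, k)) for k in range(31)])
print(corr[0], corr[1:].min(), corr[1:].max())           # peak 31, every off-peak value -1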

941
00:51:41,910 --> 00:51:46,550
Another way to say this is
that the vector u has

942
00:51:46,550 --> 00:51:49,260
to be orthogonal to
all of its shifts.

943
00:51:49,260 --> 00:51:51,800
That's exactly what
this is saying.

944
00:51:51,800 --> 00:51:55,850
And another way of saying it
is that u, if you pass it

945
00:51:55,850 --> 00:51:58,440
through the matched filter to
u-- now remember what a

946
00:51:58,440 --> 00:52:02,440
matched filter is on
an analog waveform.

947
00:52:02,440 --> 00:52:06,850
You take a waveform, you switch
it around in time.

948
00:52:06,850 --> 00:52:09,200
You take the complex conjugate
of it, and

949
00:52:09,200 --> 00:52:10,900
that's the matched filter.

950
00:52:10,900 --> 00:52:14,200
And when you convolve u with
this matched filter, what it's

951
00:52:14,200 --> 00:52:18,490
doing is just exactly the same
operation of correlation.

952
00:52:18,490 --> 00:52:22,110
OK in other words, convolution
with one of the sequences

953
00:52:22,110 --> 00:52:26,380
turned around in time, is
the same as correlation.

954
00:52:26,380 --> 00:52:29,460
And most of you have
seen that I'm sure.
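
If you have not seen it, the identity is a one-line check in numpy (the arrays below are arbitrary): convolving with the flipped, conjugated sequence gives the same numbers as correlating with the sequence itself.

import numpy as np

rng = np.random.default_rng(5)
u = rng.standard_normal(8) + 1j * rng.standard_normal(8)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)

u_tilde = np.conj(u[::-1])                               # matched filter: flip in time and conjugate
by_convolution = np.convolve(x, u_tilde)
by_correlation = np.correlate(x, u, mode="full")         # numpy's correlate conjugates its second argument
print(np.allclose(by_convolution, by_correlation))       # True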

955
00:52:29,460 --> 00:52:34,070
So that if we take this matched
filter where u tilde

956
00:52:34,070 --> 00:52:38,180
sub j is equal to the complex
conjugate of u at time minus

957
00:52:38,180 --> 00:52:43,940
j, then I pass u through the
filter G. Forget about the

958
00:52:43,940 --> 00:52:45,480
noise for the time being.

959
00:52:45,480 --> 00:52:49,900
I then pass it through the
matched filter u tilde.

960
00:52:49,900 --> 00:52:53,870
What I'm going to get out, I
claim, is G. And I'll show you

961
00:52:53,870 --> 00:52:56,270
why that is in just a minute.

962
00:52:56,270 --> 00:52:59,120
Let me caution you
about something.

963
00:52:59,120 --> 00:53:01,970
Because you can get very
confused with this picture.

964
00:53:01,970 --> 00:53:07,180
Because as soon as I take this
input, u 1 up to u sub m,

965
00:53:07,180 --> 00:53:11,610
this matched filter is going
start responding at time u

966
00:53:11,610 --> 00:53:12,740
sub minus m.

967
00:53:12,740 --> 00:53:14,650
And it's going to finish
responding at

968
00:53:14,650 --> 00:53:16,750
time u sub minus 1.

969
00:53:16,750 --> 00:53:19,580
So it responds before
it's hit.

970
00:53:19,580 --> 00:53:23,310
Which again is this business of
thinking of timing at the

971
00:53:23,310 --> 00:53:27,030
receiver being very much delayed
from timing at the

972
00:53:27,030 --> 00:53:29,020
transmitter.

973
00:53:29,020 --> 00:53:30,920
Which is a trick that
we've always played.

974
00:53:30,920 --> 00:53:33,710
Which is why we don't have to
think of filters as being

975
00:53:33,710 --> 00:53:34,880
realizable.

976
00:53:34,880 --> 00:53:37,820
Still in this example, this
becomes confusing.

977
00:53:37,820 --> 00:53:39,070
And I'll show you
why in a minute.

978
00:53:44,280 --> 00:53:49,100
OK so I'm going to assume that
I picked a good PN sequence.

979
00:53:49,100 --> 00:53:53,150
So when I convolve it with its
matched filter I essentially

980
00:53:53,150 --> 00:53:58,170
get an impulse function, namely
a discrete impulse.

981
00:53:58,170 --> 00:54:02,130
Which is the same as saying that
u is orthogonal to all of

982
00:54:02,130 --> 00:54:03,370
its shifts.

983
00:54:03,370 --> 00:54:04,900
And that's exactly what
you want to do.

984
00:54:04,900 --> 00:54:06,320
You want to think of
turning it around

985
00:54:06,320 --> 00:54:07,490
and passing it through.

986
00:54:07,490 --> 00:54:10,660
And that's exactly what
this is doing.

987
00:54:10,660 --> 00:54:16,880
OK so we have the output of
this filter, which is u

988
00:54:16,880 --> 00:54:21,660
convolved with G. We're then
convolving that with this

989
00:54:21,660 --> 00:54:24,170
matched filter u tilde.

990
00:54:24,170 --> 00:54:29,860
And now we use the nice property
of convolution, which

991
00:54:29,860 --> 00:54:32,630
you probably don't think
of very often.

992
00:54:32,630 --> 00:54:38,860
But the nice property that
convolution has, is that it's

993
00:54:38,860 --> 00:54:41,670
both associative and
commutative.

994
00:54:41,670 --> 00:54:42,180
OK.

995
00:54:42,180 --> 00:54:47,010
And therefore when we look at
V prime convolved with u tilde, it's

996
00:54:47,010 --> 00:54:51,510
the convolution of u with G--
that's what the prime is-- all

997
00:54:51,510 --> 00:54:54,450
convolved with the
matched filter to u.

998
00:54:54,450 --> 00:54:57,600
Because of the associativity and
the commutativity, you can

999
00:54:57,600 --> 00:55:00,760
reverse these two things so
you're taking the convolution

1000
00:55:00,760 --> 00:55:03,080
of u with its matched filter.

1001
00:55:03,080 --> 00:55:05,870
When you take the convolution of
u with its matched filter,

1002
00:55:05,870 --> 00:55:07,900
you get a delta function.

1003
00:55:07,900 --> 00:55:11,350
And you take a delta function
and pass it through G. And

1004
00:55:11,350 --> 00:55:15,570
what comes out is G weighted
by a-squared

1005
00:55:15,570 --> 00:55:19,240
n, which is just the energy
of what we're putting in.

1006
00:55:19,240 --> 00:55:23,050
OK so that says that if we can
find pseudonoise sequences,

1007
00:55:23,050 --> 00:55:23,940
all of this works.

1008
00:55:23,940 --> 00:55:26,400
And it works just dandy.
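
Putting the pieces together as a sketch (the taps, the noise level, and the length-31 m-sequence from before are all assumptions): send the plus-or-minus-a probe through the taps, add the noise, convolve with the probe's matched filter, and divide by a-squared n. Reading off the burst that starts at the last probe digit gives approximately the taps; with a finite probe the recovery is only approximate, because the aperiodic correlation sidelobes are small but not zero.

import numpy as np

rng = np.random.default_rng(6)
s = [1, 0, 0, 0, 0]                                      # the same m-sequence probe as before
while len(s) < 31:
    s.append(s[-3] ^ s[-5])
a = 1.0
u = a * np.array([1.0 if b else -1.0 for b in s])
n = len(u)

g = np.array([0.8 - 0.1j, 0.3 + 0.4j, -0.2 + 0.2j])      # unknown taps (assumed for the demo)
v = np.convolve(u, g)
v = v + 0.1 * (rng.standard_normal(len(v)) + 1j * rng.standard_normal(len(v)))

u_tilde = np.conj(u[::-1])                               # matched filter to the probe
r = np.convolve(v, u_tilde) / (a**2 * n)                 # (u * g * u~) / (a^2 n) is roughly g, shifted
g_hat = r[n - 1 : n - 1 + len(g)]                        # the burst aligned with the last probe digit
print(np.round(g_hat, 2), "vs", g)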

1009
00:55:26,400 --> 00:55:29,340
If you put noise in,
what happens there?

1010
00:55:29,340 --> 00:55:31,860
Well let's analyze the
noise separately.

1011
00:55:31,860 --> 00:55:35,890
The noise is going through
this matched filter.

1012
00:55:35,890 --> 00:55:41,030
Well if u is a pseudonoise
sequence, if it has this nice

1013
00:55:41,030 --> 00:55:45,300
correlation property and you
flip it around in time, it's

1014
00:55:45,300 --> 00:55:48,190
going to have the same nice
correlation property.

1015
00:55:48,190 --> 00:55:57,020
So that in fact u tilde is
going to have the same

1016
00:55:57,020 --> 00:56:01,240
property that it's orthogonal
to all of its time shifts.

1017
00:56:01,240 --> 00:56:05,490
If you now look at what happens
when you take Z and

1018
00:56:05,490 --> 00:56:10,290
send it through this filter,
and you find the covariance

1019
00:56:10,290 --> 00:56:14,270
matrix for Z passed through
this filter, what that

1020
00:56:14,270 --> 00:56:18,760
independence gives you is that the
covariance matrix is just

1021
00:56:18,760 --> 00:56:22,510
diagonal, with all the terms
the same.

1022
00:56:22,510 --> 00:56:27,770
Which says that all of these
terms in this vector here are

1023
00:56:27,770 --> 00:56:29,770
all white Gaussian
noise variables.

1024
00:56:29,770 --> 00:56:34,210
So what comes out is the filter
plus white noise.

1025
00:56:34,210 --> 00:56:36,710
Which is the same thing that
happened when we put in a

1026
00:56:36,710 --> 00:56:40,320
single input with zeros
on both sides.

1027
00:56:40,320 --> 00:56:42,500
OK.

1028
00:56:42,500 --> 00:56:47,910
So using a PN sequence works
in exactly the same way as

1029
00:56:47,910 --> 00:56:51,380
this very special pseudonoise
sequence, which just has one

1030
00:56:51,380 --> 00:56:52,200
input in it.

1031
00:56:52,200 --> 00:56:54,960
Which happens to be a
pseudonoise sequence

1032
00:56:54,960 --> 00:56:57,460
in this term also.

1033
00:56:57,460 --> 00:57:00,300
OK so the output then, is
going to be a maximum

1034
00:57:00,300 --> 00:57:04,000
likelihood estimate of G. OK,
this is the way that people

1035
00:57:04,000 --> 00:57:06,420
typically measure channels.

1036
00:57:06,420 --> 00:57:08,970
They use pseudonoise inputs.

1037
00:57:08,970 --> 00:57:11,890
And we look at the output
that

1038
00:57:11,890 --> 00:57:14,360
comes out.

1039
00:57:14,360 --> 00:57:18,510
When we put in a finite duration
pseudonoise sequence,

1040
00:57:18,510 --> 00:57:22,860
what we're going to look for
is the output at the exact

1041
00:57:22,860 --> 00:57:26,580
instant that the last digit
of the input goes in.

1042
00:57:26,580 --> 00:57:30,060
And the output then is G sub
0, followed by G sub 1,

1043
00:57:30,060 --> 00:57:33,600
followed by G sub 2,
and then silence.

1044
00:57:33,600 --> 00:57:36,580
So you see nothing coming out
until this big burst of

1045
00:57:36,580 --> 00:57:44,760
energy, which is all
digits of G.

1046
00:57:44,760 --> 00:57:47,970
OK so now we want to put all
of this together into

1047
00:57:47,970 --> 00:57:50,070
something called a
rake receiver.

1048
00:57:50,070 --> 00:57:52,820
I wish I could spend more time
on the rake receiver because

1049
00:57:52,820 --> 00:57:55,110
it's a really neat thing.

1050
00:57:55,110 --> 00:57:59,630
It was developed in the 50s
about the same time that

1051
00:57:59,630 --> 00:58:01,570
information theory was
getting developed.

1052
00:58:01,570 --> 00:58:06,770
But it was developed by people
who were trying to do radar.

1053
00:58:06,770 --> 00:58:09,110
And at the same time trying to
do a little communication

1054
00:58:09,110 --> 00:58:11,230
along with the radar.

1055
00:58:11,230 --> 00:58:14,170
And this was one of the things
they came up with.

1056
00:58:14,170 --> 00:58:19,000
So they wanted to measure the
channel and make decisions in

1057
00:58:19,000 --> 00:58:21,870
transmitting data both
at the same time.

1058
00:58:21,870 --> 00:58:26,340
And the trick here is about the
same as the trick we use

1059
00:58:26,340 --> 00:58:30,700
in trying to measure carrier
frequency, and make decisions

1060
00:58:30,700 --> 00:58:31,930
at the same time.

1061
00:58:31,930 --> 00:58:34,480
Namely you use the decisions
you make

1062
00:58:34,480 --> 00:58:36,000
to measure the frequency.

1063
00:58:36,000 --> 00:58:38,190
You use the frequency
that you've measured

1064
00:58:38,190 --> 00:58:40,670
to make future decisions.

1065
00:58:40,670 --> 00:58:43,510
And here, we're going to do
exactly the same thing.

1066
00:58:43,510 --> 00:58:44,770
We make decisions.

1067
00:58:44,770 --> 00:58:48,680
We use those decisions as a way
of measuring the channel.

1068
00:58:48,680 --> 00:58:51,830
We then use the measurements of
the channel to create this

1069
00:58:51,830 --> 00:58:55,210
matched filter G tilde.

1070
00:58:55,210 --> 00:58:58,940
And that's what we're going to
use to make the decisions.

1071
00:58:58,940 --> 00:59:03,860
OK if you have two different
inputs, I mean here we'll just

1072
00:59:03,860 --> 00:59:05,270
look at binary inputs.

1073
00:59:05,270 --> 00:59:09,370
You take u sub 0 and u sub 1,
and you look at what happens

1074
00:59:09,370 --> 00:59:11,070
when you have those
two inputs.

1075
00:59:11,070 --> 00:59:14,830
This is just a vector white
Gaussian noise problem that we

1076
00:59:14,830 --> 00:59:17,570
looked at in quite a bit of
detail when we were studying

1077
00:59:17,570 --> 00:59:19,580
decision theory.

1078
00:59:19,580 --> 00:59:24,460
What we want to do is to look
at, I mean if these two

1079
00:59:24,460 --> 00:59:28,540
signals are not antipodal to
each other you want to look at

1080
00:59:28,540 --> 00:59:30,650
the mean of them.

1081
00:59:30,650 --> 00:59:34,065
And you'll want to look at u sub
0 minus that mean, and u

1082
00:59:34,065 --> 00:59:37,280
sub 1 minus that mean as
two antipodal signals.

1083
00:59:37,280 --> 00:59:41,160
When you go through all of
that, you find that the

1084
00:59:41,160 --> 00:59:45,020
maximum likelihood decision is
to compare the real part of the

1085
00:59:45,020 --> 00:59:53,490
inner product of the output v
with u sub 0

1086
00:59:53,490 --> 00:59:58,026
convolved with g, and the real
part of the inner product of v with u sub

1087
00:59:58,026 --> 00:59:59,510
1 convolved with G.

1088
00:59:59,510 --> 01:00:02,520
OK in other words, what's
happening here is that as far

1089
01:00:02,520 --> 01:00:07,340
as anybody is concerned, we're
not using u sub 0 and u sub 1

1090
01:00:07,340 --> 01:00:09,330
in making this decision.

1091
01:00:09,330 --> 01:00:11,420
We know what the channel is.

1092
01:00:11,420 --> 01:00:15,425
And therefore what exists right
before the white noise

1093
01:00:15,425 --> 01:00:20,485
is added, is these two signals u
sub 0 convolved with g and u

1094
01:00:20,485 --> 01:00:22,170
sub 1 convolved with g.

1095
01:00:22,170 --> 01:00:24,480
So we're doing binary
detection on

1096
01:00:24,480 --> 01:00:26,750
those two known signals.

1097
01:00:26,750 --> 01:00:29,610
And we're using the output to
try to make the best choice

1098
01:00:29,610 --> 01:00:30,500
between them.

1099
01:00:30,500 --> 01:00:32,670
So this is the thing
that we do.
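
A sketch of that binary decision with the channel known (the signatures, taps, and noise level are assumed; the energy-difference terms are kept because the two noiseless outputs need not have exactly equal energy, though for long PN signatures they nearly do, which is why the lecture simply compares the two real inner products).

import numpy as np

rng = np.random.default_rng(8)
g = np.array([0.8 - 0.1j, 0.3 + 0.4j, -0.2 + 0.2j])      # channel assumed known at the receiver
u0 = np.array([1.0, 1, 1, -1, -1, 1, -1])                # two short PN-like signatures (assumed)
u1 = np.array([1.0, -1, -1, 1, -1, 1, 1])

s0, s1 = np.convolve(u0, g), np.convolve(u1, g)          # the two possible noiseless received vectors
sent = 0
v = s0 + 0.3 * (rng.standard_normal(len(s0)) + 1j * rng.standard_normal(len(s0)))

m0 = np.real(np.vdot(s0, v)) - 0.5 * np.sum(np.abs(s0) ** 2)
m1 = np.real(np.vdot(s1, v)) - 0.5 * np.sum(np.abs(s1) ** 2)
print("decide", 0 if m0 > m1 else 1, "(sent", sent, ")")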

1100
01:00:32,670 --> 01:00:36,300
So we want to use a filter
matched to u sub 0

1101
01:00:36,300 --> 01:00:38,040
convolved with g.

1102
01:00:38,040 --> 01:00:41,220
Now how do we build a filter
matched to a convolution of

1103
01:00:41,220 --> 01:00:43,140
two things?

1104
01:00:43,140 --> 01:00:45,880
Well we convolve
u sub 0 with g.

1105
01:00:45,880 --> 01:00:48,080
And then we turn the
thing around.

1106
01:00:48,080 --> 01:00:50,340
And then we see that after
turning it around what we've

1107
01:00:50,340 --> 01:00:53,890
gotten is the turned around
version of u convolved with

1108
01:00:53,890 --> 01:00:56,460
the turned around
version of g.

1109
01:00:56,460 --> 01:00:57,770
I mean write it down
and you'll see that

1110
01:00:57,770 --> 01:00:59,850
that's what you have.

1111
01:00:59,850 --> 01:01:08,120
So what you wind up with is
the following figure.

1112
01:01:08,120 --> 01:01:11,860
You either send u sub 0
or you send u sub 1.

1113
01:01:11,860 --> 01:01:14,450
This is a way to send
one binary digit.

1114
01:01:14,450 --> 01:01:20,060
We're sending it by using these
long PN sequences now.

1115
01:01:20,060 --> 01:01:28,190
If u sub 0 goes through g, we
get a V prime out, which is

1116
01:01:28,190 --> 01:01:30,880
the output before
noise is added.

1117
01:01:30,880 --> 01:01:34,980
We then add noise, so we get V.

1118
01:01:34,980 --> 01:01:38,610
And then we process it to try
to detect whether

1119
01:01:38,610 --> 01:01:40,530
this or this is true.

1120
01:01:40,530 --> 01:01:48,000
We take this output V. We
convolve it with u sub 1

1121
01:01:48,000 --> 01:01:52,060
convolved with g, and with
u sub 0 convolved with g.

1122
01:01:52,060 --> 01:01:55,380
Now you'll all say I'm
wasting stuff here.

1123
01:01:55,380 --> 01:01:59,490
Because I could just put the g
over here and then follow it

1124
01:01:59,490 --> 01:02:02,850
with u sub 1 or u sub 0.

1125
01:02:02,850 --> 01:02:04,090
Be patient for a little bit.

1126
01:02:04,090 --> 01:02:05,390
I want to put both of them in.

1127
01:02:05,390 --> 01:02:07,280
And I want to put them
in this order.

1128
01:02:07,280 --> 01:02:10,580
And then I make a
decision here.

1129
01:02:10,580 --> 01:02:14,520
OK well here comes the clincher
to the argument.

1130
01:02:14,520 --> 01:02:17,500
Look at what happens
right there.

1131
01:02:17,500 --> 01:02:24,120
If I forget about this and I
forget about this, what I get

1132
01:02:24,120 --> 01:02:26,440
here is u sub 0 coming in.

1133
01:02:26,440 --> 01:02:28,900
It's going through
the filter g.

1134
01:02:28,900 --> 01:02:31,510
It has white noise
added to it.

1135
01:02:31,510 --> 01:02:35,520
It goes through the matched
filter to u sub 0.

1136
01:02:35,520 --> 01:02:38,770
And what comes out is
a measurement of g.

1137
01:02:38,770 --> 01:02:39,650
That's what we showed before.

1138
01:02:39,650 --> 01:02:42,740
When we were trying to measure
g, that was the way we did it.

1139
01:02:42,740 --> 01:02:44,970
We started out with
a PN sequence, go

1140
01:02:44,970 --> 01:02:47,240
through g, add noise--

1141
01:02:47,240 --> 01:02:49,970
we can't avoid the noise-- go
through the matched filter.

1142
01:02:49,970 --> 01:02:53,260
That is a measurement of
g at that point there.

1143
01:02:53,260 --> 01:02:57,540
And if we send u sub 1, that's
a measurement of g at that

1144
01:02:57,540 --> 01:02:59,050
point there.

1145
01:02:59,050 --> 01:03:06,790
So finally we have the rake
receiver which does both of

1146
01:03:06,790 --> 01:03:08,830
these things at once.

1147
01:03:08,830 --> 01:03:11,670
You either send u sub
0 or u sub 1.

1148
01:03:11,670 --> 01:03:12,810
You go through this filter.

1149
01:03:12,810 --> 01:03:14,510
You add white noise.

1150
01:03:17,240 --> 01:03:20,390
As far as making a decision is
concerned, you do what we

1151
01:03:20,390 --> 01:03:22,310
talked about before.

1152
01:03:22,310 --> 01:03:25,070
You compare this output
with this

1153
01:03:25,070 --> 01:03:27,360
output to make a decision.

1154
01:03:27,360 --> 01:03:31,270
After you make a decision you
go forward in time, because

1155
01:03:31,270 --> 01:03:34,500
we've done everything backwards
in time here.

1156
01:03:34,500 --> 01:03:39,290
And you take what is going to
come out of here, which hasn't

1157
01:03:39,290 --> 01:03:40,460
come out yet.

1158
01:03:40,460 --> 01:03:44,550
And you use that to make
a new estimate of g.

1159
01:03:44,550 --> 01:03:48,180
You use that estimate of g
turned around in time, to

1160
01:03:48,180 --> 01:03:54,260
alter your estimate of the
matched filter to g.
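
A heavily simplified loop in that decision-directed spirit, not the lecture's exact circuit (the signatures, step size, exponential smoothing, and the crude initial guess are all assumptions): each decision is made with the current channel estimate, and the decided signature is then correlated against the received vector to refresh that estimate for the next decision.

import numpy as np

rng = np.random.default_rng(9)
g_true = np.array([0.8 - 0.1j, 0.3 + 0.4j, -0.2 + 0.2j])
u = {0: np.array([1.0, 1, 1, -1, -1, 1, -1]),            # two PN-like signatures (assumed)
     1: np.array([1.0, -1, -1, 1, -1, 1, 1])}
n = 7
g_hat = np.array([1.0, 0, 0], dtype=complex)             # crude starting estimate of the channel
alpha = 0.3                                              # smoothing for the decision-directed update

for _ in range(200):
    bit = int(rng.integers(0, 2))
    v = np.convolve(u[bit], g_true)
    v = v + 0.2 * (rng.standard_normal(len(v)) + 1j * rng.standard_normal(len(v)))

    # decide using filters built from the current channel estimate
    cand = [np.convolve(u[i], g_hat) for i in (0, 1)]
    metrics = [np.real(np.vdot(c, v)) - 0.5 * np.sum(np.abs(c) ** 2) for c in cand]
    decision = int(np.argmax(metrics))

    # use the decision to re-measure the channel and fold it into the estimate
    burst = np.convolve(v, np.conj(u[decision][::-1]))[n - 1 : n - 1 + 3] / n
    g_hat = (1 - alpha) * g_hat + alpha * burst

print(np.round(g_hat, 2), "vs", np.round(g_true, 2))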

1161
01:03:54,260 --> 01:03:57,090
And if you read the notes, the
notes explain what's going on

1162
01:03:57,090 --> 01:03:59,230
as far as the timing in
here, a little bit

1163
01:03:59,230 --> 01:04:00,940
better than I can here.

1164
01:04:00,940 --> 01:04:02,970
But in fact, this is the kind
of circuit that people

1165
01:04:02,970 --> 01:04:07,060
actually use to both measure
channels, and to send data at

1166
01:04:07,060 --> 01:04:08,140
the same time.

1167
01:04:08,140 --> 01:04:11,400
I want to stop here because
we're supposed to

1168
01:04:11,400 --> 01:04:13,300
evaluate the class.