1
00:00:18,119 --> 00:00:25,119
Good afternoon. This is Doctor Rudra Pradhan
here. Welcome to NPTEL project on Econometric
2
00:00:25,589 --> 00:00:29,480
modelling. Today, we will continue the topic
Bivariate Econometric modelling. So, in the
3
00:00:29,480 --> 00:00:36,480
last class, we discussed in detail
the structure of bivariate econometric modelling.
4
00:00:37,670 --> 00:00:42,019
So, the starting point of bivariate econometric
modelling is that, we must have two variables
5
00:00:42,019 --> 00:00:49,019
in the system. Let me briefly highlight
what our last discussion covered.
6
00:00:49,079 --> 00:00:56,079
So, for two variables X and Y, the bivariate
model is represented as Y equal to alpha
7
00:00:56,949 --> 00:01:03,949
plus beta X plus U. So, this is the basic
format of bivariate econometric modelling.
8
00:01:06,170 --> 00:01:12,409
So, there are three ways we can represent
this particular structure. And that
9
00:01:12,409 --> 00:01:19,409
is with respect to various data types. So,
we have three different data setups. One is
10
00:01:20,780 --> 00:01:27,780
cross sectional analysis, second, time series
analysis, then panel data analysis. So, panel
11
00:01:28,170 --> 00:01:32,739
data is the combination of cross sectional
analysis and time series analysis.
12
00:01:32,739 --> 00:01:38,640
So now, briefly, if we go through the three
different structures with respect to this
13
00:01:38,640 --> 00:01:43,660
bivariate econometric modelling, then obviously,
the three way representation is like this.
14
00:01:43,660 --> 00:01:50,660
Y i equal to alpha plus beta X i plus U i.
This is cross sectional modelling, then similarly,
15
00:01:52,950 --> 00:01:59,950
Y t equal to alpha plus beta X t plus U t.
This is time series modelling and Y i t equal
16
00:02:01,840 --> 00:02:08,840
to alpha plus beta X i t plus U i t is panel
data modelling.
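In symbols, the three representations read:

```latex
Y_i    = \alpha + \beta X_i    + U_i    \quad \text{(cross-sectional model, } i \text{ indexes units)}
Y_t    = \alpha + \beta X_t    + U_t    \quad \text{(time series model, } t \text{ indexes periods)}
Y_{it} = \alpha + \beta X_{it} + U_{it} \quad \text{(panel data model, unit } i \text{ in period } t\text{)}
```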
17
00:02:10,679 --> 00:02:17,170
So now, we are not in a position to discuss
all these things simultaneously. So, we start
18
00:02:17,170 --> 00:02:24,000
with a basic framework of bivariate modelling,
that too, cross sectional analysis only. Now,
19
00:02:24,000 --> 00:02:28,090
for cross sectional analysis, either we can
represent the simple models like Y equal to
20
00:02:28,090 --> 00:02:34,760
alpha plus beta X plus U or you can write
Y i equal to alpha plus beta X i plus U i.
21
00:02:34,760 --> 00:02:40,530
Now, have a look here. The entire structure
of bivariate econometric modelling is represented
22
00:02:40,530 --> 00:02:47,530
here. So, this particular structure is divided
into three parts. One is the intercept, that is
23
00:02:48,180 --> 00:02:55,180
what we call alpha. This is the intercept,
this is the slope and this is the residual or
24
00:02:57,980 --> 00:03:04,980
error term.
So now, here the idea is that, so, we have
25
00:03:06,150 --> 00:03:13,150
Y equal to Y 1, Y 2 up to Y n. So, X equal
to X 1, X 2 up to X n. And U equal to U 1,
26
00:03:21,590 --> 00:03:28,590
U 2 up to U n. Now, we have discussed in
detail the concepts of this particular bivariate
27
00:03:32,790 --> 00:03:38,279
econometric modelling in the last class. Now,
here we are assuming that there are n number
28
00:03:38,279 --> 00:03:43,480
of observations and one of the interesting
points of this bivariate econometric modelling
29
00:03:43,480 --> 00:03:50,480
is that both the variables must have same
number of observations. If there is any short
30
00:03:50,620 --> 00:03:55,739
comings, then obviously, bivariate modelling
cannot be fitted.
31
00:03:55,739 --> 00:04:01,659
So, we are assuming that there are n number
of observations for the Y variable, the
32
00:04:01,659 --> 00:04:07,999
X variable and the corresponding U variable.
Here, Y is the dependent variable, X is the independent
33
00:04:07,999 --> 00:04:14,999
variable and U is the error term, which
is usually not observed; the variables
34
00:04:15,180 --> 00:04:21,250
which are not captured in the system will be
represented in the form of U. Now, we are
35
00:04:21,250 --> 00:04:27,620
we are assuming that there will be an estimated
model. So, Y hat equal to alpha
36
00:04:27,620 --> 00:04:34,620
hat plus beta hat X. So, let us
assume that this is the estimated model.
37
00:04:36,729 --> 00:04:43,729
So, let us put it in another
way.
38
00:04:46,620 --> 00:04:52,300
So, our starting point is Y equal to alpha
plus beta X plus U. So, this is the true regression
39
00:04:52,300 --> 00:04:59,300
line. So, let us assume
that Y hat equal to alpha hat plus beta hat
40
00:05:02,740 --> 00:05:09,740
X. Now, obviously, Y equal to Y hat plus e.
That implies e equal to Y minus Y hat. So,
41
00:05:14,259 --> 00:05:19,340
what is this particular structure? Now, let
us see here. So, this is the entire setup
42
00:05:19,340 --> 00:05:24,729
here. This is the X series and this is the Y series
and this particular component is called as
43
00:05:24,729 --> 00:05:31,610
alpha. So, our movement of Y hat is like this.
So, here Y hat equal to alpha hat plus beta
44
00:05:31,610 --> 00:05:35,090
hat X.
So now, there are certain original points
45
00:05:35,090 --> 00:05:42,090
here. This is the estimated line. So, the
original points are like this. So, the difference
46
00:05:43,220 --> 00:05:50,220
will be like this here. So, we have the difference
like this. Now, this is e 1, this is e 2,
47
00:05:50,449 --> 00:05:57,449
this is e 3, this is e 4, this is e 5, like
this. Now, this e is represented as the error
48
00:05:58,789 --> 00:06:05,789
term. So, that means, when we fit a line,
then obviously, that is different from the
49
00:06:07,110 --> 00:06:12,860
true points. So, that true point and the estimated
line, so, it will give
50
00:06:12,860 --> 00:06:19,220
the signal of error terms.
So now, if we further elaborate this particular
51
00:06:19,220 --> 00:06:26,220
equation, then obviously, e equal to Y minus
Y hat. Y hat is alpha hat plus
52
00:06:28,939 --> 00:06:35,939
beta hat X. So, e equal to Y minus alpha hat
53
00:06:36,879 --> 00:06:43,879
minus beta hat X. So, here we have
two specific objectives.
54
00:06:44,990 --> 00:06:50,189
What are these
objectives?
55
00:06:50,189 --> 00:06:56,530
The first objective is to get the alpha hat,
that is, what is the actual value of alpha hat
56
00:06:56,530 --> 00:07:03,530
and what is beta hat, and the second objective
is to find out the error component. So, we
57
00:07:04,250 --> 00:07:11,250
have now, when we get the estimated equation,
through which we have to get the error
58
00:07:11,840 --> 00:07:17,680
component, then our objective is very simple.
We like to know, what is the exact value of
59
00:07:17,680 --> 00:07:22,419
alpha hat and what is the exact value of beta
hat? And through the help of alpha hat and
60
00:07:22,419 --> 00:07:27,979
beta hat, we get to know the error component
or we have to evaluate the error term through
61
00:07:27,979 --> 00:07:33,340
the help of alpha hat and beta hat.
So now, how do you go for that? So, since
62
00:07:33,340 --> 00:07:40,340
error is a residual term which is, you
know, not contributing to the dependent variable
63
00:07:40,360 --> 00:07:46,860
exactly. So, our objective must be to minimize
that error components. So, that means, we
64
00:07:46,860 --> 00:07:52,990
must represent a model where every variable
should be identified; means most of the variables
65
00:07:52,990 --> 00:07:59,789
should explain the dependent variable. If
that percentage is very low, then the model
66
00:07:59,789 --> 00:08:06,430
accuracy will be, you can say, very low.
So, we have to prepare ourselves or we have
67
00:08:06,430 --> 00:08:11,879
to fit the model in such a way that most of
the relevant variables must be included in
68
00:08:11,879 --> 00:08:18,879
the system. So, accordingly, we have to design
our structure or you can say systems. Now,
69
00:08:19,189 --> 00:08:26,189
the entire structure is nothing but, e equal
to Y minus alpha hat minus beta hat X and
70
00:08:26,819 --> 00:08:31,860
our objective is to get the alpha hat and
to get the beta hat. And with the help of
71
00:08:31,860 --> 00:08:36,349
alpha hat and beta hat we have to observe
the e components. Let us see how we have to
72
00:08:36,349 --> 00:08:37,150
observe this one.
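As a rough sketch of this step, the residuals e can be computed once alpha hat and beta hat are in hand; the data and the coefficient values below are illustrative placeholders, not numbers from the lecture:

```python
# Sketch: residuals e = Y - Y_hat for a fitted line.
# Data and coefficient values are illustrative placeholders.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 7.8, 10.1]

alpha_hat = 0.1  # assumed intercept estimate
beta_hat = 2.0   # assumed slope estimate

# Fitted values: Y_hat_i = alpha_hat + beta_hat * X_i
Y_hat = [alpha_hat + beta_hat * x for x in X]

# Residuals: e_i = Y_i - Y_hat_i, one per observation
e = [y - yh for y, yh in zip(Y, Y_hat)]
```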
73
00:08:37,150 --> 00:08:44,150
So now, e equal to Y minus Y hat. So, this
is nothing but, Y minus alpha hat minus beta
74
00:08:45,630 --> 00:08:52,630
hat X. So, here to get alpha
hat and beta hat, we have to minimize
75
00:08:53,510 --> 00:09:00,510
the error terms. So, we have to
minimize the error term.
76
00:09:05,860 --> 00:09:11,389
Once we minimize the error term,
then obviously, we will get the best value
77
00:09:11,389 --> 00:09:15,500
of alpha hat and best value of beta hat. So,
how do you go for that?
78
00:09:15,500 --> 00:09:21,670
So now, there are several methods
through which we have to minimize the errors. So,
79
00:09:21,670 --> 00:09:27,180
there are some methods like, you know,
the ordinary least square method,
80
00:09:27,180 --> 00:09:31,649
generalized least square method, weighted
least square method,
81
00:09:31,649 --> 00:09:38,540
maximum likelihood estimators. Like this,
so many methods are there, where we can minimize
82
00:09:38,540 --> 00:09:45,459
the error sum. So now, it is not possible
to go through each method simultaneously. So, we
83
00:09:45,459 --> 00:09:50,820
will take a particular method, then through
which we have to minimize the error sum.
84
00:09:50,820 --> 00:09:56,529
So now, the easiest method for this is called
the ordinary least square method, popularly
85
00:09:56,529 --> 00:10:01,170
known as the OLS technique. So, what
is this OLS technique all about? The
86
00:10:01,170 --> 00:10:05,920
OLS technique's objective is to minimize
the error sum of squares. Now, our objective
87
00:10:05,920 --> 00:10:11,380
or agenda is to calculate what is error sum.
Now, e is nothing but the error, which
88
00:10:11,380 --> 00:10:14,810
is equal to Y minus alpha hat minus beta hat
X.
89
00:10:14,810 --> 00:10:20,149
So now, we have to calculate what is the error
sum square? So, that means, sum of the error
90
00:10:20,149 --> 00:10:27,100
sum of squares. So, that means, summation e
i square, i equal to 1 to n. So, this
91
00:10:27,100 --> 00:10:33,940
is the error sum squares. So, which is equal
to summation Y minus alpha hat minus beta
92
00:10:33,940 --> 00:10:40,940
hat X. And of course, there is i also. Now,
i equal to 1 to n. So, this is also squares.
93
00:10:41,800 --> 00:10:48,269
So, error sum square is nothing but the difference
between the actual Y and the expected Y.
94
00:10:48,269 --> 00:10:53,480
So, the difference will give you the error,
such that, if you square it and then
95
00:10:53,480 --> 00:10:58,649
apply the sum, then obviously,
you will get the error sum of squares.
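Written out, the error sum of squares described here is:

```latex
\sum_{i=1}^{n} e_i^{2}
\;=\;
\sum_{i=1}^{n} \left( Y_i - \hat{\alpha} - \hat{\beta} X_i \right)^{2}
```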
96
00:10:58,649 --> 00:11:04,639
So now, through this, we have
to minimize the error component. Now,
97
00:11:04,639 --> 00:11:10,699
let us see. This is the starting
procedure of this particular system. Now,
98
00:11:10,699 --> 00:11:17,399
our objective is here to get the alpha hat
and beta hat. That is why, we have to minimize
99
00:11:17,399 --> 00:11:24,139
the error sum squares. Now, since, we like
to get the value of alpha hat and beta hat,
100
00:11:24,139 --> 00:11:29,860
so, we have to minimize the error sum square
with respect to alpha hat and beta hat.
101
00:11:29,860 --> 00:11:34,589
So now, there are two systems here. So,
how do you minimize the system? So, there
102
00:11:34,589 --> 00:11:40,980
are, you know, this is typically an optimization
technique. So, we have two different structures
103
00:11:40,980 --> 00:11:45,949
of optimization. One is called as minimization
technique and another is called as a maximization
104
00:11:45,949 --> 00:11:52,949
technique. Now, here, we are in the process
of minimization. So, there are two standard
105
00:11:52,970 --> 00:11:59,970
rules to minimize the sum squares.
So now, here, the first step is to take
106
00:12:00,779 --> 00:12:07,779
d summation e square by d alpha hat is equal
to 0 and d summation e square by d beta hat
107
00:12:08,720 --> 00:12:15,720
is equal to 0. Now, let us call it f 1, this
is called as f 1 and this is called as f 2.
108
00:12:16,959 --> 00:12:22,100
Now, this is otherwise
known as first order necessary conditions.
109
00:12:22,100 --> 00:12:29,100
So, the second order sufficient condition is that
f 1 1 into f 2 2 minus f 1 2 into f 2 1 must be
110
00:12:32,720 --> 00:12:39,720
greater than 0. And f 1 1 must be greater than 0 and
f 2 2 must be greater than 0. So, that means,
111
00:12:40,149 --> 00:12:46,690
what is f 1 1? So, f 1 1 is nothing but
d square summation e square by
112
00:12:46,690 --> 00:12:52,970
d alpha hat squares. So, f 2 2 is nothing
but d square summation e square by d beta hat
113
00:12:52,970 --> 00:12:58,329
squares. So, like this f 1 2 is nothing but
d square summation e square by d alpha hat
114
00:12:58,329 --> 00:13:03,899
and d beta hat. So now, we are not going
to discuss in detail this particular mathematical
115
00:13:03,899 --> 00:13:08,740
setup. So, what we have to do? We can get
the answer through only the first order necessary
116
00:13:08,740 --> 00:13:14,350
conditions. Now, what we have to do? We have
to just minimize the sum square.
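The two conditions just mentioned can be written compactly; the Hessian-determinant form of the second order condition is the standard statement of what the lecture sketches:

```latex
% First order necessary conditions
f_1 = \frac{\partial \sum e^2}{\partial \hat{\alpha}} = 0,
\qquad
f_2 = \frac{\partial \sum e^2}{\partial \hat{\beta}} = 0
% Second order sufficient conditions for a minimum
f_{11} > 0, \qquad f_{22} > 0, \qquad f_{11}\,f_{22} - f_{12}\,f_{21} > 0
```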
117
00:13:14,350 --> 00:13:20,610
So, what is d summation e square by
d alpha hat d summation e square by d alpha
118
00:13:20,610 --> 00:13:27,610
hat is nothing but 2 into summation Y minus
alpha hat minus beta hat X, into the derivative with
119
00:13:29,170 --> 00:13:34,649
respect to alpha hat, which is, of course,
minus 1. Now, this must be equal
120
00:13:34,649 --> 00:13:40,459
to 0. Now, if we simplify, that implies
summation Y equal to
121
00:13:40,459 --> 00:13:47,380
n alpha hat plus beta hat summation X. Let
us assume that this is equation number one.
122
00:13:47,380 --> 00:13:53,130
So now, similarly, we have to calculate d
summation e square by d beta hat. So, d summation
123
00:13:53,130 --> 00:14:00,130
e square by d beta hat is nothing but 2 summation
Y minus alpha hat minus beta hat X into with
124
00:14:02,139 --> 00:14:08,139
respect to beta hat. So, obviously, minus
X is the extra term which has to be multiplied
125
00:14:08,139 --> 00:14:15,120
in the system. Now, this should be exactly equal
to 0. Now, if we simplify again, then,
126
00:14:15,120 --> 00:14:22,120
obviously, this implies summation X Y equal to alpha
127
00:14:22,949 --> 00:14:29,949
hat summation X plus beta hat summation X
square. So, since it is equal to 0, obviously,
128
00:14:31,190 --> 00:14:37,589
if you structure it properly, then we will get
summation X Y equal to alpha hat summation
129
00:14:37,589 --> 00:14:44,589
X plus beta hat summation X square. Now, let
us call it equation number two.
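Collecting the two results, the pair of normal equations is:

```latex
\sum Y  \;=\; n\hat{\alpha} + \hat{\beta} \sum X \qquad (1)
\sum XY \;=\; \hat{\alpha} \sum X + \hat{\beta} \sum X^{2} \qquad (2)
```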
130
00:14:45,009 --> 00:14:52,009
Now, if you club these two equations, that
means, the system will
131
00:14:52,519 --> 00:14:59,519
be now summation Y equal to n alpha hat plus
beta hat summation X and summation X Y is
132
00:15:02,819 --> 00:15:09,660
equal to alpha hat summation X plus beta hat
summation X square. So, what is our objective
133
00:15:09,660 --> 00:15:16,660
here? Our objective is here to get alpha hat
and to get beta hat. Forget about the second
134
00:15:18,509 --> 00:15:23,290
objective of error component. So, in the mean
times, we have derived these two equations,
135
00:15:23,290 --> 00:15:29,850
just to know, what is the exact value of alpha
hat and what is the exact value of beta hat.
136
00:15:29,850 --> 00:15:36,850
Now, we have, you know, two items to get and
we have two equations. So, the system is a determined
137
00:15:38,959 --> 00:15:43,670
one. So, that means, the system is a unique
one. So, it can be operated.
138
00:15:43,670 --> 00:15:50,670
So, what would I like to do here? I will put
this concept into matrix format. So, this
139
00:15:51,839 --> 00:15:58,839
is nothing but Y equal to simply X beta, simply
called as a X beta. Now, what is X beta here?
140
00:16:03,769 --> 00:16:10,769
So, X beta is nothing but, you put
it here like this: where Y equal to summation
141
00:16:22,839 --> 00:16:29,839
Y, summation X Y. Then, X equal
to n, summation X, then summation X, summation
142
00:16:36,050 --> 00:16:43,050
X square, then beta equal
to alpha hat and beta hat. So, that means,
143
00:16:47,810 --> 00:16:54,259
the whole system will be represented as Y
equal to X beta. Now, let us assume that this
144
00:16:54,259 --> 00:17:00,100
is equation number three.
So now, let us multiply
145
00:17:00,100 --> 00:17:07,100
X inverse on both the
sides, that is, multiplying X inverse
146
00:17:13,730 --> 00:17:20,730
on both the sides. So, what happens? Now,
X inverse Y is equal to X inverse X into beta.
147
00:17:24,390 --> 00:17:31,390
So, X inverse Y equal to X inverse X beta.
Now, what is X inverse X? X inverse X, by matrix
148
00:17:34,190 --> 00:17:41,190
algebra, is nothing
but the unit matrix. So, as a result, the value
149
00:17:41,590 --> 00:17:48,590
of the matrix is exactly equal to one. So, that
implies beta equal to X inverse Y, beta equal
150
00:17:50,150 --> 00:17:56,340
to X inverse Y. Now, the question is, what
is X inverse Y? So now, we know, what is X.
151
00:17:56,340 --> 00:18:03,340
So, X is n, summation X, summation X, summation
X square. So, we have to find out the X inverse.
152
00:18:03,590 --> 00:18:10,590
So, X inverse equal to adjoint of X divided
by mod X. Now, if we
153
00:18:16,460 --> 00:18:23,460
put it in a different structure, then X inverse
is represented as summation X
154
00:18:26,510 --> 00:18:33,510
square, minus summation X, then minus summation
X, then n. So, this is the X inverse, divided
155
00:18:37,070 --> 00:18:44,070
by mod X, which is nothing but n summation
X square minus summation X whole square.
156
00:18:44,940 --> 00:18:50,510
So, this is the entire value of X inverse.
So, there is a rule how to get the X inverse.
157
00:18:50,510 --> 00:18:57,400
So, I am not going into detail about this explanation.
So, you have to work it out yourself. So, the X inverse
158
00:18:57,400 --> 00:19:03,520
means, if X is available and it is in
square format, then obviously,
159
00:19:03,520 --> 00:19:09,090
we are able to manage to get the X inverse.
Now, the system is two into two. So, it is
160
00:19:09,090 --> 00:19:14,440
square matrix of order two into two. So, it
is not difficult to get the, you can say inverse
161
00:19:14,440 --> 00:19:21,440
matrix. So, X inverse is this much.
So now, we like to know X inverse Y. Now,
162
00:19:21,610 --> 00:19:27,760
X inverse Y is nothing but, so,
again, we have to go for matrix multiplication.
163
00:19:27,760 --> 00:19:34,760
Summation X square, minus summation X, minus
summation X, n, divided by n summation X square
164
00:19:36,210 --> 00:19:43,210
minus sum X whole square, into Y. What
is Y? Y equal to sum Y and sum
165
00:19:48,360 --> 00:19:55,360
X Y.
So now, this is the X inverse Y. Now, beta
166
00:19:56,650 --> 00:20:03,650
equal to X inverse Y. So, what is beta? Beta
is nothing but alpha hat and beta hat. Now,
167
00:20:05,590 --> 00:20:12,590
if we simplify this particular
equation by matrix multiplication, then we
168
00:20:13,030 --> 00:20:20,030
will get alpha hat equal
to, like
169
00:20:20,440 --> 00:20:27,440
this. So, alpha hat equal to summation Y
into summation X square, then minus summation
170
00:20:36,260 --> 00:20:43,260
X into summation X Y divide by n summation
X square minus sum X whole square. This is
171
00:20:45,559 --> 00:20:50,429
the alpha hat component.
172
00:20:50,429 --> 00:20:57,429
Similarly, we will get beta hat. Beta hat
equal to n summation X Y
173
00:21:00,750 --> 00:21:07,750
minus sum X into sum Y, divided
174
00:21:17,750 --> 00:21:24,750
by n summation X square minus sum X whole
square. So, this is the beta hat component.
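The matrix route just described, beta vector equal to X inverse Y with the two-by-two adjoint-over-determinant inverse, can be sketched in code; the small data set here is an illustrative placeholder:

```python
# Sketch of the matrix solution: [alpha_hat, beta_hat] = X^{-1} Y, where
# X = [[n, Sx], [Sx, Sxx]] and Y = [Sy, Sxy]; the 2x2 inverse is
# adjoint(X) / det(X). Placeholder data for illustration.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.1, 5.9, 8.2, 9.8]
n = len(x)

Sx = sum(x)
Sy = sum(y)
Sxy = sum(xi * yi for xi, yi in zip(x, y))
Sxx = sum(xi * xi for xi in x)

# Determinant: |X| = n*Sxx - Sx^2
det = n * Sxx - Sx ** 2

# Multiply adjoint(X) = [[Sxx, -Sx], [-Sx, n]] into [Sy, Sxy], divide by |X|
alpha_hat = (Sxx * Sy - Sx * Sxy) / det
beta_hat = (n * Sxy - Sx * Sy) / det
```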
175
00:21:25,350 --> 00:21:32,350
So, if we simplify further,
then this particular item can be represented
176
00:21:33,840 --> 00:21:40,840
as you can say summation X Y by summation
X square. So, this X represent where X equal
177
00:21:42,080 --> 00:21:49,080
to X minus X bar and Y equal to Y minus Y
bar. I will explain how it
178
00:21:49,470 --> 00:21:56,470
is transformed into this particular format.
So, there is a trick to solve this particular
179
00:21:56,669 --> 00:22:03,610
problems. Now, since our objective is to
get alpha hat and beta hat, so now, you are
180
00:22:03,610 --> 00:22:08,669
in a position to know the value of alpha hat
and to know the value of beta hat.
181
00:22:08,669 --> 00:22:15,669
So, this is our starting point of bivariate
econometric modelling. The moment you get
182
00:22:16,440 --> 00:22:23,440
alpha hat and beta hat, then the game plan
will be completely different now. Now, the
183
00:22:23,669 --> 00:22:29,510
idea is, the basic idea for this particular
bivariate econometric modelling is that we
184
00:22:29,510 --> 00:22:36,510
have to fit a best line, otherwise it is called
as a best fitted line. So, how do we get best
185
00:22:36,549 --> 00:22:40,840
fitted line? Best fitted line depends upon
the value of alpha hat and beta hat.
186
00:22:40,840 --> 00:22:47,840
So now, the alpha hat and beta hat may
not be a constant or
187
00:22:49,470 --> 00:22:55,799
may not be unique. They can be different with
respect to different setup or different structures
188
00:22:55,799 --> 00:23:00,909
because the moment we will get a particular
estimated equation Y hat equal to alpha hat
189
00:23:00,909 --> 00:23:07,909
plus beta hat X, then obviously, that model
has to be, you can say, identified properly.
190
00:23:09,650 --> 00:23:16,650
So, that is what we call the reliability
of the model. So, the detailed testing structure
191
00:23:16,890 --> 00:23:23,870
we have discussed long back, in my first one
or two lectures. Now, when we
192
00:23:23,870 --> 00:23:29,679
have an estimated model, we have to first go
through the reliability part, or that is nothing
193
00:23:29,679 --> 00:23:34,740
but diagnostic check.
Now, once you have that and if the model is
194
00:23:34,740 --> 00:23:40,240
free from this particular diagnostic check
or it is reliable one, then you can use that
195
00:23:40,240 --> 00:23:46,220
model or you can say that this model is perfectly
okay or best fitted model. If not, then you
196
00:23:46,220 --> 00:23:52,029
have to modify by various ways, either you
can redesign the model or redesign the system,
197
00:23:52,029 --> 00:23:59,029
redesign the data setup or redesign the technique.
So, by this way, you will get a particular
198
00:24:00,000 --> 00:24:06,860
model. At the end, you will know which one is the best
model for this particular analysis.
199
00:24:06,860 --> 00:24:13,529
So now, once you have alpha hat and beta hat,
so, your estimated equation will be Y hat
200
00:24:13,529 --> 00:24:20,529
equal to alpha hat plus beta hat X. So, alpha
hat is followed by this one and beta hat is
201
00:24:22,130 --> 00:24:29,130
followed by this one. Now, there is actually
a trick here. So, you know, particularly from an
202
00:24:30,490 --> 00:24:35,970
exam point of view, it is very difficult to
go for you know, so much derivation or analysis,
203
00:24:35,970 --> 00:24:39,720
there is a trick to get the solution very
quickly.
204
00:24:39,720 --> 00:24:45,100
So, what is our starting point here? Our starting
point is here. Summation Y equal to n alpha
205
00:24:45,100 --> 00:24:52,100
hat plus beta hat summation X and summation
X Y is equal to alpha hat summation X plus
206
00:24:53,500 --> 00:24:59,330
beta hat summation X square. So, this is how,
we have started our journey. So now, once again,
207
00:24:59,330 --> 00:25:06,330
the system is summation Y equal to
208
00:25:08,340 --> 00:25:13,880
n alpha hat plus beta hat summation X,
209
00:25:13,880 --> 00:25:19,779
and summation X Y equal to alpha hat
210
00:25:19,779 --> 00:25:25,450
summation X plus beta hat summation X square.
So, what do you have to do? Now, let us take
211
00:25:25,450 --> 00:25:31,169
the first equation here. So, summation Y equal
to n alpha hat plus beta hat summation X.
212
00:25:31,169 --> 00:25:37,480
Now, what will I do? I will divide by n on both
the sides. Summation Y by n equal to n alpha hat
213
00:25:37,480 --> 00:25:44,480
by n plus beta hat summation X by n. So, summation
Y by n is nothing but Y bar. This is what,
214
00:25:46,020 --> 00:25:53,020
we have already discussed in detail in the
univariate data structure.
215
00:25:54,649 --> 00:26:01,649
So now, Y bar is equal to, as n and n cancel, this
is nothing but alpha hat plus beta hat X bar.
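Dividing the first normal equation through by n gives the shortcut being described:

```latex
\frac{\sum Y}{n} \;=\; \hat{\alpha} + \hat{\beta}\,\frac{\sum X}{n}
\;\;\Longrightarrow\;\;
\bar{Y} = \hat{\alpha} + \hat{\beta}\bar{X}
\;\;\Longrightarrow\;\;
\hat{\alpha} = \bar{Y} - \hat{\beta}\bar{X}
```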
216
00:26:06,640 --> 00:26:12,679
Summation X by n is nothing but X bar. So,
it will be X bar. Now, our objective is here
217
00:26:12,679 --> 00:26:18,220
to get the alpha hat and beta hat. Now, alpha
hat is the only single element here. So, obviously,
218
00:26:18,220 --> 00:26:25,220
alpha hat equal to Y bar minus beta hat X
bar. So, technically, there is
219
00:26:25,230 --> 00:26:31,470
no point to derive the alpha hat, or to
run behind this alpha hat
220
00:26:31,470 --> 00:26:38,149
value. We will get it automatically because we
know the Y information and we know the X information
221
00:26:38,149 --> 00:26:43,250
by the help of the Y information and X
information, we can get to know what is Y
222
00:26:43,250 --> 00:26:48,299
bar and what is X bar.
So, it is not a difficult task. So, what is
223
00:26:48,299 --> 00:26:53,120
the difficult task here? So, here the unknown
factor is beta hat. So, once you will get
224
00:26:53,120 --> 00:26:58,360
the beta hat, other things will remain
available with you. So, as a result, so, you
225
00:26:58,360 --> 00:27:03,830
have to calculate beta hat first rather than
alpha hat. So, once you will get beta hat,
226
00:27:03,830 --> 00:27:08,059
with the help of beta hat you will be able to
get the alpha hat. So, what is beta hat here?
227
00:27:08,059 --> 00:27:12,679
So, beta hat equal to the formula we have
already mentioned. So, beta hat equal to n
228
00:27:12,679 --> 00:27:19,679
summation X Y minus sum X into sum Y, by n
summation X square minus
229
00:27:22,590 --> 00:27:27,620
sum X whole square.
So, once you will get beta hat, then through
230
00:27:27,620 --> 00:27:34,620
which alpha hat can be
observed. Now, so, what we
231
00:27:35,460 --> 00:27:42,460
have to do here? So, we like to take a case
here. So, we like to know, what
232
00:27:43,700 --> 00:27:48,500
is this entire structure? How do we get this
alpha hat and beta hat? So, before we go to
233
00:27:48,500 --> 00:27:53,360
particular example, so, let me highlight here
this particular issue. So, this is otherwise
234
00:27:53,360 --> 00:28:00,360
called as the covariance of X Y by sigma X sigma
X, or the variance of X; and this is the covariance
235
00:28:02,600 --> 00:28:06,350
of X Y.
So, covariance of X Y is nothing but simply
236
00:28:06,350 --> 00:28:13,350
a summation X minus X bar into Y minus Y bar
divide by n and variance of X is nothing but,
237
00:28:15,059 --> 00:28:22,059
summation X minus X bar into X minus X bar
divide by n. n n cancels, so obviously, this
238
00:28:23,200 --> 00:28:30,059
is nothing but the variance of X and this is nothing
but the covariance of X Y. So, sigma X means
239
00:28:30,059 --> 00:28:37,059
it is a square root. So, obviously, this
is okay. Now, alpha hat is this much
240
00:28:38,429 --> 00:28:43,510
and beta hat is this much. So, that means,
the other way you have to represent the beta
241
00:28:43,510 --> 00:28:50,510
is nothing but summation X Y by,
you can say, summation X square. So,
242
00:28:51,190 --> 00:28:57,559
X is X minus X bar, Y is Y minus Y bar, X
square is nothing but this particular item.
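The equivalence claimed here, beta hat as the covariance of X and Y over the variance of X matching the raw-sum formula, can be checked numerically; the data below are placeholders for illustration:

```python
# Check: beta_hat from the raw-sum formula equals Cov(X, Y) / Var(X).
# Placeholder data for illustration.
x = [2.0, 4.0, 6.0, 8.0]
y = [1.0, 4.0, 5.0, 9.0]
n = len(x)

xbar = sum(x) / n
ybar = sum(y) / n

# Covariance form (the 1/n factors cancel in the ratio)
cov_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / n
var_x = sum((xi - xbar) ** 2 for xi in x) / n
beta_cov = cov_xy / var_x

# Raw-sum form: (n*Sxy - Sx*Sy) / (n*Sxx - Sx^2)
Sx, Sy = sum(x), sum(y)
Sxy = sum(xi * yi for xi, yi in zip(x, y))
Sxx = sum(xi * xi for xi in x)
beta_sum = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)
```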
243
00:28:57,559 --> 00:29:03,450
X minus X bar, and summation X Y is this much.
So, if we simplify, then you will get
244
00:29:03,450 --> 00:29:09,029
this particular equation. So now, we have
alpha hat and we have beta hat. So now, we
245
00:29:09,029 --> 00:29:15,409
will see how practically it can be evaluated.
So, take an example here.
246
00:29:15,409 --> 00:29:22,409
So, we take here the X series, then
the Y series. These are sample points. So,
247
00:29:24,789 --> 00:29:31,789
1, 2, 3, 4, 5, like this. So, here, so, this
sample points are 51, 60, then 65, then 71,
248
00:29:45,539 --> 00:29:52,539
then 39, then 32, then 81, then 76, then 66.
Then the Y series is nothing but 187,
249
00:29:58,929 --> 00:30:05,929
then 210, then 137, then 136, then 241, then
262, then 110, 143, then 152. So, that means,
250
00:30:09,360 --> 00:30:16,360
1, 2, 3, 4, 5, this is 6, this is 7, 8, 9.
So, there are 9 sample points. So, this sample
251
00:30:20,580 --> 00:30:27,250
size is 9. These are the sample points
and these are the X series and these are the
252
00:30:27,250 --> 00:30:34,250
Y series. Since X has nine sample points
and Y has nine sample points, that means, the system
253
00:30:37,190 --> 00:30:44,190
is okay now. So, the model can be estimated.
Now, what is the idea behind this model?
254
00:30:44,309 --> 00:30:51,309
So, we will assume that Y and
X are related in a linear
255
00:30:51,940 --> 00:30:55,440
way.
So, our assumption is that Y equal to alpha plus
256
00:30:55,440 --> 00:31:02,440
beta X and if we will add the error term,
then obviously, this is plus U. Now, we are
257
00:31:03,890 --> 00:31:09,210
assuming that the estimated model is equal
to Y hat equal to alpha hat plus beta hat
258
00:31:09,210 --> 00:31:16,210
X, where alpha hat equal to Y bar
minus beta hat X bar and beta hat is equal
259
00:31:19,850 --> 00:31:26,850
to n summation X Y minus sum X into sum Y
divide by n summation X square minus sum X
260
00:31:29,200 --> 00:31:35,419
whole square.
So, now you see here since we have a X and
261
00:31:35,419 --> 00:31:42,419
Y series, so, what is the essential point
here? For X and Y you see here. This is nothing
262
00:31:42,789 --> 00:31:49,789
but, we first need mu X and mu Y,
and another is sigma X X,
263
00:31:54,690 --> 00:32:01,690
sigma X Y, sigma Y X and sigma Y Y. So, this
is the mean of X, this is the mean of Y and
264
00:32:07,580 --> 00:32:13,669
this particular matrix is called as a variance
covariance matrix.
265
00:32:13,669 --> 00:32:20,669
So, now, within the given setup, you will be
able to get all these items separately. Now,
266
00:32:21,159 --> 00:32:26,100
to solve these particular equations, what
is the essential requirement?
267
00:32:26,100 --> 00:32:33,100
So, the essential
requirement is that we like to know first,
268
00:32:33,720 --> 00:32:40,720
what is summation X, then summation Y, then
summation X Y, then summation X square, then
269
00:32:40,909 --> 00:32:47,909
summation Y square. These are the requirements
we would like to know. So, what is summation X,
270
00:32:48,159 --> 00:32:54,720
what is summation Y, what is summation X Y,
what is summation X square, what is summation
271
00:32:54,720 --> 00:32:58,490
Y square, and finally, what is the sample
size?
272
00:32:58,490 --> 00:33:05,490
So now, in fact, I have already calculated
these particular items. So, this is nothing
273
00:33:05,539 --> 00:33:12,539
but X series. So, sum X equal to 541. I am
just filling here. Summation Y equal to 1578
274
00:33:13,769 --> 00:33:20,769
then summation X Y is equal to 88291,
then summation X square is equal to 34705,
275
00:33:22,450 --> 00:33:29,450
and summation Y square is
nothing but 298712. So, this summation X square
is nothing but 34705 and n is here 9,
276
00:33:49,309 --> 00:33:56,309
n is 9 here. So now, we like to know alpha
hat. Alpha hat equal to Y bar minus beta hat
277
00:34:00,750 --> 00:34:07,750
X bar. Let us start first with beta. So, beta hat
equal to n summation X Y, and n
278
00:34:09,159 --> 00:34:16,159
is 9 here. So, 9 into 88291 minus summation
X into summation Y. So, 541 multiplied by
279
00:34:19,720 --> 00:34:25,790
1578, which is summation X into summation Y, divided
by n summation X square. What is summation
280
00:34:25,790 --> 00:34:32,790
X square? It is 9 into 34705 minus sum X whole
square. Sum X is 541. So, 541
281
00:34:37,370 --> 00:34:43,070
whole square. So, this is what the beta value
is all about.
282
00:34:43,070 --> 00:34:50,070
So now, if we simplify this particular
equation, then you will get
283
00:34:50,250 --> 00:34:57,250
this particular equation like
this. So, beta hat is equal to minus 3.004; you
284
00:34:59,110 --> 00:35:06,110
will get minus 3.004, where all these information
are available. So, if you simplify this particular
285
00:35:09,110 --> 00:35:16,110
equation, you will get beta hat equal
to minus 3.004. So now, alpha hat equal to Y bar
286
00:35:17,730 --> 00:35:24,730
minus beta hat into X bar, with beta hat equal to minus 3.004.
Now, if we simplify further, then it is nothing
287
00:35:28,140 --> 00:35:35,140
but 355.93. So, that means, your final equation
is Y hat equal to 355.93
288
00:35:45,220 --> 00:35:52,220
minus 3.004 into X. So, this
is what we call the estimated model. So,
289
00:35:56,780 --> 00:36:00,840
this is what we call the estimated model.
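The arithmetic above can be checked with a short script. This is only a sketch using the summary sums quoted in the lecture; the variable names are my own:

```python
# OLS estimates for the bivariate model Y = alpha + beta*X + U,
# computed from the summary statistics quoted in the lecture.
n = 9
sum_x, sum_y = 541, 1578
sum_xy, sum_x2 = 88291, 34705

# beta_hat = (n*sum(XY) - sum(X)*sum(Y)) / (n*sum(X^2) - sum(X)^2)
beta_hat = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)

# alpha_hat = Y_bar - beta_hat * X_bar
alpha_hat = sum_y / n - beta_hat * (sum_x / n)

print(round(beta_hat, 3))   # -3.004
print(round(alpha_hat, 2))  # 355.93
```

The numerator 9 × 88291 − 541 × 1578 is negative, which is why the slope comes out as minus 3.004 and the fitted line is Y hat = 355.93 − 3.004 X.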
290
00:36:00,840 --> 00:36:07,840
So, now what do you have to do? So, let us
just summarize what we have done
291
00:36:08,970 --> 00:36:15,970
till now. Now, the starting point is, we
have Y equal to alpha plus beta
292
00:36:16,820 --> 00:36:23,820
X plus U. This is the original format, where U
is the error term, beta is the slope, alpha is the intercept,
293
00:36:32,710 --> 00:36:39,710
Y is the dependent variable, X is the
independent variable, and this is the
294
00:36:42,900 --> 00:36:49,900
explained part and U is the unexplained
part. So, by this process, we are assuming
295
00:36:50,880 --> 00:36:57,880
that Y hat equal to alpha hat plus beta hat
X, where alpha hat equal to Y bar
296
00:36:59,350 --> 00:37:06,350
minus beta hat X bar and beta hat equal to
summation x y divided by summation x square, where small x
297
00:37:08,840 --> 00:37:15,840
is equal to X minus X bar, small y
is equal to Y minus Y bar, and x y is
298
00:37:19,950 --> 00:37:26,950
nothing but, X minus X bar into Y minus Y
bar. And X square is nothing but, X minus
299
00:37:27,180 --> 00:37:34,180
X bar into Y minus Y bar. So, this is what,
we have received the final equation. So, that
300
00:37:36,310 --> 00:37:42,740
is called as a line of the best fitted.
So now, you see here. So, the original structure
301
00:37:42,740 --> 00:37:49,740
is we start with the Y and X only. By the
way, we will get U component here or you can
302
00:37:50,910 --> 00:37:57,910
say error. So, this is the sample format:
1, 2, 3 up to 9. So, for every item, you
303
00:38:00,960 --> 00:38:07,960
must have some observations.
So now, how do you set up this particular
304
00:38:08,330 --> 00:38:15,310
series? So, you have Y and X, means our original
starting point is with respect to Y information
305
00:38:15,310 --> 00:38:22,310
and X information. So, we are assuming that
Y and X have a relationship. And by the way,
306
00:38:22,820 --> 00:38:29,820
Y is the dependent variable and X is the independent
variable. Now, we have to fit in such a way
307
00:38:30,410 --> 00:38:37,090
so that we will get the best fitted line,
or what is called the best fitted equation.
308
00:38:37,090 --> 00:38:44,090
So now, to get the best fitted
equation, we have to apply some technique.
309
00:38:44,590 --> 00:38:49,880
So, here we are using the ordinary
least square method, so that we will get
310
00:38:49,880 --> 00:38:56,790
the best fitted line. Now, so that, we will
assume that it is nothing but Y hat. So, Y
311
00:38:56,790 --> 00:39:03,790
hat. So, Y hat equal to alpha hat plus beta
hat X. Now, you will get U here. So, this
312
00:39:08,100 --> 00:39:14,370
is the Y hat structure, because Y hat equal to
alpha hat plus beta hat X. Alpha hat is here,
313
00:39:14,370 --> 00:39:21,370
beta hat is here. So, put this value here,
then X is there. So, for every sample X value
314
00:39:22,620 --> 00:39:26,450
is there. So, for every sample, put X value
here. So, you will get the Y hat value. Put
315
00:39:26,450 --> 00:39:31,790
the X 2 value here, then obviously, we will get
Y 2 hat. Similarly, up to Y 9 hat, you will
316
00:39:31,790 --> 00:39:38,790
get it. So now, how do you get U? U is nothing
but, Y minus Y hat. So, it will be called
317
00:39:39,070 --> 00:39:46,070
e 1, e 2, e 3 up to e 9. So, these are
all called error terms. Now, we have to
318
00:39:47,500 --> 00:39:53,110
see what is the contribution of the error and
what is the contribution of X to
319
00:39:53,110 --> 00:39:57,820
Y? This is our basic agenda behind this particular
topic.
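The residual bookkeeping described above can be sketched as follows, again on a hypothetical 9-point sample rather than the lecture's series:

```python
# Fitted values Y_i hat and residuals e_i = Y_i - Y_i hat for a
# bivariate OLS fit (hypothetical data, not the lecture's series).
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9]
ys = [3, 5, 4, 8, 7, 9, 12, 11, 14]
n = len(xs)

sum_x, sum_y = sum(xs), sum(ys)
sum_xy = sum(x * y for x, y in zip(xs, ys))
sum_x2 = sum(x * x for x in xs)

beta_hat = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
alpha_hat = sum_y / n - beta_hat * sum_x / n

# one fitted value and one residual per sample point
y_hat = [alpha_hat + beta_hat * x for x in xs]
resid = [y - yh for y, yh in zip(ys, y_hat)]

# With an intercept, the OLS residuals sum to (numerically) zero:
# the positive and negative errors cancel out.
print(abs(sum(resid)) < 1e-9)  # True
```

This is exactly the e 1, e 2, up to e 9 series described in the lecture, and the final check anticipates the assumption, discussed below, that the error term has mean zero.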
320
00:39:57,820 --> 00:40:04,820
So now, there is a certain problem here.
So, what is this problem? Now, when we are
321
00:40:06,410 --> 00:40:13,280
fitting a model, Y equal to alpha plus beta
X plus the error term, you will get Y hat
322
00:40:13,280 --> 00:40:19,350
equal to alpha hat plus beta hat X. For this
particular transformation, we have applied the
323
00:40:19,350 --> 00:40:25,830
OLS technique. Of course, there
are several techniques we can use
324
00:40:25,830 --> 00:40:32,060
to get this Y hat equal to alpha hat plus
beta hat X, but the OLS technique is the
325
00:40:32,060 --> 00:40:39,060
very standard technique, very easy to understand
and simple to apply. So, that is how we
326
00:40:39,670 --> 00:40:44,410
have to start with the OLS technique.
So, when we go deep in this particular econometric
327
00:40:44,410 --> 00:40:50,350
modelling, then we can apply maximum likelihood
estimation techniques or generalized least
328
00:40:50,350 --> 00:40:56,000
square methods and weighted least square methods.
So, some of the problems under this econometric
329
00:40:56,000 --> 00:41:01,370
modelling can be solved with these particular
methods. At that time, the OLS technique may
330
00:41:01,370 --> 00:41:07,440
not be appropriate to get the best
fitted line. So, there is a way,
331
00:41:07,440 --> 00:41:12,740
how, when or at what times you apply the
GLS technique or WLS technique or maximum
332
00:41:12,740 --> 00:41:17,930
likelihood technique. So, here we start with
first the basic level, then we have to go
333
00:41:17,930 --> 00:41:24,930
into a more complex scenario.
So now, here, when we apply the OLS
334
00:41:27,220 --> 00:41:34,100
technique, then the entire equation will transform
into Y hat equal to alpha hat plus beta
335
00:41:34,100 --> 00:41:41,100
hat X. The OLS technique, of course,
is the standard technique and easy to understand,
336
00:41:41,230 --> 00:41:48,230
easy to apply, but it has certain limitations.
There are certain limitations with
337
00:41:48,910 --> 00:41:55,700
respect to its assumptions. So, we have certain
assumptions before applying the OLS technique
338
00:41:55,700 --> 00:42:02,700
or to get this estimated line. And these
assumptions, at a later point of time, become
339
00:42:03,340 --> 00:42:08,080
a problem for this particular econometric
modelling, and each problem has to be investigated
340
00:42:08,080 --> 00:42:13,560
separately. So, we will discuss in detail what
the exact assumptions are and how these problems
341
00:42:13,560 --> 00:42:20,510
can be, you can say, generated in this particular
system. So, these problems are very complex
342
00:42:20,510 --> 00:42:27,510
and very interesting also.
So now, the idea is here.
343
00:42:28,280 --> 00:42:32,130
What are these assumptions related
to the OLS technique? Because, the OLS
344
00:42:32,130 --> 00:42:39,060
technique without these assumptions cannot
be applied and you cannot
345
00:42:39,060 --> 00:42:45,620
get the best fitted model. So, that is what
we call Y hat equal to alpha hat plus
346
00:42:45,620 --> 00:42:51,030
beta hat X. Yes, it means, theoretically
we are just writing Y hat equal to alpha hat
347
00:42:51,030 --> 00:42:56,540
plus beta hat X. But, to get alpha hat and beta
hat is not so easy. There are lots of complex
348
00:42:56,540 --> 00:43:01,570
processes or complex structures through which
we have obtained the alpha hat and beta hat. Just
349
00:43:01,570 --> 00:43:07,950
now, you have derived the entire structures
with respect to this particular alpha hat
350
00:43:07,950 --> 00:43:13,440
value and beta hat value.
So now, the way we are applying this OLS,
351
00:43:13,440 --> 00:43:19,320
so, we have to go with certain assumptions
because without such assumptions, it is very
352
00:43:19,320 --> 00:43:25,750
difficult to minimize this error sum of squares,
that too with the help of the OLS technique.
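The minimization just mentioned can be illustrated with a small sketch: at the least squares estimates, perturbing either coefficient should only increase the sum of squared errors. The data below are hypothetical, not the lecture's series:

```python
# OLS minimizes the error sum of squares: any perturbation of
# (alpha_hat, beta_hat) gives a larger SSE (hypothetical data).
xs = [1, 2, 3, 4, 5, 6, 7, 8, 9]
ys = [2, 4, 5, 4, 6, 8, 7, 9, 10]
n = len(xs)

sum_x, sum_y = sum(xs), sum(ys)
sum_xy = sum(x * y for x, y in zip(xs, ys))
sum_x2 = sum(x * x for x in xs)
beta_hat = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
alpha_hat = sum_y / n - beta_hat * sum_x / n

def sse(a, b):
    # sum of squared errors for the candidate line Y = a + b*X
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

best = sse(alpha_hat, beta_hat)
print(best < sse(alpha_hat + 0.1, beta_hat))  # True
print(best < sse(alpha_hat, beta_hat + 0.1))  # True
```

The sum of squared errors is a convex function of the two parameters, so the OLS pair is its unique minimum; that is the sense in which it delivers the line of best fit.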
353
00:43:25,750 --> 00:43:32,750
So, these assumptions are actually divided
into three parts: one part is related to the error
354
00:43:33,760 --> 00:43:40,760
term, another part is related to independent
variables and third part is related to dependent
355
00:43:42,490 --> 00:43:49,490
or other items in a particular system. There
are certain other items, meaning those items
356
00:43:50,070 --> 00:43:55,110
related to statistics only, not some other
things. Now, we will see what these
357
00:43:55,110 --> 00:44:02,110
assumptions are under this particular setup.
So, the first assumption is that the model
358
00:44:02,660 --> 00:44:09,660
must be linear in parameters; that is,
the model parameters are linear
359
00:44:14,120 --> 00:44:21,120
in nature. So, every time, we are using Y
equal to alpha plus beta X plus U. So, that
360
00:44:22,740 --> 00:44:29,740
means, this model is linear with respect
to both the variables and the parameters. So,
361
00:44:32,030 --> 00:44:39,030
in more complex problems, this variable
can be, you can say, non-linear and the
362
00:44:39,480 --> 00:44:45,100
parameter can be non-linear. But, as far as the
OLS technique is concerned, we have to
363
00:44:45,100 --> 00:44:52,100
assume that all parameters should be linear
in nature, but the variables may be
364
00:44:52,360 --> 00:44:59,060
non-linear. So, that means, we can apply,
let us say, a quadratic equation,
365
00:44:59,060 --> 00:45:06,060
cubic equation or logarithmic equation, it
can be possible; that means, Y can be log
366
00:45:07,360 --> 00:45:13,760
Y, Y can be Y square, X can be log X, X can
be X square. Or simply, we can put Y equal
367
00:45:13,760 --> 00:45:20,760
to alpha
plus beta X square plus, you
368
00:45:23,380 --> 00:45:30,070
can say, gamma X. We can also fit it like
this and we will get the value of, you
369
00:45:30,070 --> 00:45:35,660
can say alpha beta and gamma. It is not a
difficult task but, the standard assumption
370
00:45:35,660 --> 00:45:42,660
is that whatever parameters we are using
in this particular setup, all parameters must
371
00:45:43,820 --> 00:45:49,800
be linear in nature. And for the bivariate model,
obviously, there are only two parameters in
372
00:45:49,800 --> 00:45:56,800
the system. One is related to the supporting component;
that is the intercept. And another is the slope;
373
00:45:57,290 --> 00:46:02,000
that indicates the weight of the independent
variable towards the dependent variable.
374
00:46:02,000 --> 00:46:07,960
So, this is the first assumption behind
this particular technique. So, the model must
375
00:46:07,960 --> 00:46:14,960
be, means the model parameters must be linear.
Second, X should be non-stochastic, X followed
376
00:46:15,750 --> 00:46:22,750
by a non-stochastic process. So, that means, in fact,
as I discussed last class, it should not be random
377
00:46:25,680 --> 00:46:32,070
in nature. So, that means, there is some kind
of probability may be involved in this particular
378
00:46:32,070 --> 00:46:39,070
process because, we are hoping that this is
the expected relationship and expected equations
379
00:46:40,100 --> 00:46:45,230
or you can say; since
we are using the term expectation, that
380
00:46:45,230 --> 00:46:51,100
means, it is for future only. Because, the
whole idea behind this particular estimated
381
00:46:51,100 --> 00:46:56,800
model is to go for forecasting. So, what should
be in the future? This is the original structure
382
00:46:56,800 --> 00:47:02,020
within the original setup, we have to build
a model first through which we can predict or forecast
383
00:47:02,020 --> 00:47:08,450
the future one. So, that is how, we are doing
all these jobs.
384
00:47:08,450 --> 00:47:13,560
So now, that is how we have to
assume that the variables are very much non-stochastic
385
00:47:13,560 --> 00:47:20,560
in nature. Otherwise, it is very difficult
to observe it or you can say, plan it. Now,
386
00:47:20,630 --> 00:47:27,630
This is the second assumption
behind this particular OLS technique.
387
00:47:28,780 --> 00:47:35,780
Third assumption: the mean of the error term should
be equal to 0; the mean of the error term should be
388
00:47:40,770 --> 00:47:47,770
equal to 0. So, that
means, E of U is equal to 0. So, this is
389
00:47:54,670 --> 00:47:59,650
mean of error terms should be equal to 0;
that means, you see, when we are considering
390
00:47:59,650 --> 00:48:03,370
mean, then, obviously, some items should be
above and some items should be below. This
391
00:48:03,370 --> 00:48:10,370
is what we have learned from the standard
univariate data setup. So, the mean is the, you
392
00:48:11,810 --> 00:48:18,810
can say, average; usually we consider it divides
the data into two equal parts, some 50 percent
393
00:48:20,790 --> 00:48:27,250
above, 50 percent below. If that is the setup,
then obviously, the entire system is
394
00:48:27,250 --> 00:48:30,270
balanced.
So now, mean of the error term should be equal
395
00:48:30,270 --> 00:48:37,270
to 0. Now, when we get Y hat, then obviously,
to get the error component e, we
396
00:48:37,770 --> 00:48:44,430
have to take the difference of Y and Y hat. Now,
this difference is called the error term.
397
00:48:44,430 --> 00:48:51,430
Now, we have a series of items through which
we will get Y 1, Y 1 hat, then Y 2, Y 2
398
00:48:53,750 --> 00:49:00,750
hats, like this. So, since Y 1, Y 2 up to
Y n, say Y hat 1, Y hat 2, like this up to
399
00:49:01,910 --> 00:49:06,150
Y hat n.
So now, for every item, there is an error
400
00:49:06,150 --> 00:49:12,700
component, like e 1 for the first item; for
the second item, you must have e 2. Like
401
00:49:12,700 --> 00:49:19,700
this, it will continue up to the nth item. Now,
since we are discussing the average,
402
00:49:20,720 --> 00:49:24,470
then obviously, sometimes the difference may
be positive, sometimes the difference may
403
00:49:24,470 --> 00:49:31,470
be negative. But, at the end, when we will
go for summation, the plus items and minus
404
00:49:32,070 --> 00:49:38,550
items should be equal, so the sum is 0. If that is the case,
then your system is perfectly okay, otherwise
405
00:49:38,550 --> 00:49:41,530
the system has some kind of error.
406
00:49:41,530 --> 00:49:48,530
So, this is the third assumption behind this OLS
technique. Then, the fourth assumption is that the
407
00:49:50,130 --> 00:49:57,130
variance of the error term should be constant;
the variance of the error term should be constant.
408
00:50:03,740 --> 00:50:10,740
What is that? So, that means, what is variance?
Variance here; now, we are discussing
409
00:50:12,170 --> 00:50:19,170
here, U is the error
term. So, we are calling it U i. So now, we
410
00:50:21,880 --> 00:50:28,440
will take another error term U j. Now, so,
there are two variables. In fact, now what
411
00:50:28,440 --> 00:50:35,440
is variance? So, we start with covariance.
Covariance equal to E of U i into U
412
00:50:36,120 --> 00:50:43,120
j. So, this is what is called the covariance
of U i and U j. Now, this covariance of U i U
413
00:50:43,340 --> 00:50:50,340
j can be equal to the variance of U provided,
means, that i equal to j. So, that means, when
414
00:50:53,020 --> 00:51:00,020
we say the variance of the error terms should
be constant or you can say unique, then obviously,
415
00:51:03,420 --> 00:51:10,420
the covariance of U i and U j should be equal
to the variance of U, for i equal to j. And this particular
416
00:51:10,630 --> 00:51:16,050
setup is called homoscedasticity. This
particular setup is called homoscedasticity;
417
00:51:16,050 --> 00:51:23,050
that means, when there is an error variance,
that error variance should be
418
00:51:23,440 --> 00:51:28,440
equal like this.
So now, U is the error term.
419
00:51:28,440 --> 00:51:33,930
So, through one U, you can create several
U’s like this. So, let us say in a more
420
00:51:33,930 --> 00:51:40,930
generalized format U 1, U 2 up to U n. So,
this side U 1, U 2 up to U n. Now, we have
421
00:51:42,620 --> 00:51:49,620
the variance covariance matrix. So, this is
U 1 1, this is U 1 2 and this is U 1 n. So,
422
00:51:50,680 --> 00:51:57,680
this is U 2 1, this is U 2 2, then this is
U 2 n. So, this is U n 1, U n 2, this is U
423
00:51:57,940 --> 00:52:04,110
n n; that means, the complex structure is
divided into two parts. These are the
424
00:52:04,110 --> 00:52:09,850
diagonal elements and these are the off
diagonal elements.
425
00:52:09,850 --> 00:52:13,980
So now, when we say that the variances
of the error terms are equal, that means, these
426
00:52:13,980 --> 00:52:18,230
are all variances and these are all covariances.
Now, these variances should be exactly similar.
427
00:52:18,230 --> 00:52:25,230
If this is the case, then this particular
setup is called the homoscedasticity principle,
428
00:52:25,550 --> 00:52:32,550
and the OLS technique assumes that the error
variances are equal; that means, there is homoscedasticity.
429
00:52:32,620 --> 00:52:39,620
If the situation is reversed, that means,
if the error variances are not equal, they vary
430
00:52:40,080 --> 00:52:47,080
with respect to sample points, either in the
cross sectional setup or in the time series setup,
431
00:52:48,030 --> 00:52:53,570
then obviously, it is in a different format.
So, that particular format is called the
432
00:52:53,570 --> 00:52:59,690
heteroscedasticity problem.
So, we have two different cases together. One
433
00:52:59,690 --> 00:53:05,330
is called homoscedasticity and another is
called heteroscedasticity. So, homoscedasticity
434
00:53:05,330 --> 00:53:10,810
is very consistent with the OLS technique.
So, that means, one of the standard
435
00:53:10,810 --> 00:53:15,800
assumptions of OLS is that the error
variance should be equal. So, that is what
436
00:53:15,800 --> 00:53:20,440
we call homoscedasticity. If that is not the
case, then it is called heteroscedasticity.
437
00:53:20,440 --> 00:53:26,620
So now, when there is a heteroscedasticity problem
with the application of OLS, that means, the
438
00:53:26,620 --> 00:53:32,260
model cannot be treated as the best fitted model.
So, in that context, we have to redesign this
439
00:53:32,260 --> 00:53:39,260
setup again, so that the heteroscedasticity
problem can be removed; then we will get the
440
00:53:39,770 --> 00:53:43,980
homoscedasticity structure. So, then the model
can be used for forecasting.
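The error variance-covariance matrix discussed above can be pictured directly: under homoscedasticity it has one constant variance down the diagonal and zero covariances off the diagonal. A small illustrative sketch, with invented numbers:

```python
# Variance-covariance matrix of the errors under homoscedasticity:
# a constant variance sigma^2 on the diagonal and zero covariances
# off the diagonal, i.e. sigma^2 times the identity matrix.
# The numbers are illustrative only.
n = 4
sigma2 = 2.5

homo = [[sigma2 if i == j else 0.0 for j in range(n)] for i in range(n)]

# Heteroscedasticity: the diagonal (the variances) is no longer constant
hetero = [[(1.0 + i) if i == j else 0.0 for j in range(n)] for i in range(n)]

diag_homo = [homo[i][i] for i in range(n)]
diag_hetero = [hetero[i][i] for i in range(n)]
print(len(set(diag_homo)) == 1)    # True: all error variances equal
print(len(set(diag_hetero)) == 1)  # False: variances differ by sample point
```

The diagonal elements are the variances and the off-diagonal elements are the covariances; OLS, as stated in the lecture, assumes the first matrix, and a varying diagonal signals the heteroscedasticity problem.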
441
00:53:43,980 --> 00:53:50,500
With this, we can close this subject today.
So, next class, we will start with some
442
00:53:50,500 --> 00:53:54,890
assumptions of this particular bivariate modelling
with the application of the OLS technique.
443
00:53:54,890 --> 00:54:01,890
Thank you very much. Have a nice day.