Econometric Models & Methods (ECMT3110)
The final exam this year consists of 13 multiple-choice questions worth 2 points each, followed by 3 extended-response questions worth 8 points each.
You can only view one question at a time, and once you have proceeded to a subsequent question, you cannot go back to view earlier questions.
The geometry of linear regression
Recall that in the linear regression model $y=X \beta+u$, the OLS estimator of $\beta$ satisfies
$$
X^{\top}(y-X \hat{\beta})=0
$$
This is an equality between two $k \times 1$ vectors. Writing out the equality element by element, we obtain
$$
x_{i}^{\top}(y-X \hat{\beta})=0, \quad i=1, \ldots, k
$$
where $x_{i}$ denotes the $i^{\text {th }}$ column of $X$. Denoting the $n \times 1$ vector of fitted OLS residuals by $\hat{u}=y-X \hat{\beta}$, we thus have
$$
\left\langle x_{i}, \hat{u}\right\rangle=0, \quad i=1, \ldots, k .
$$
We have shown an important fact: the vector of fitted OLS residuals is orthogonal to every column of $X$. Moreover, if we consider any vector $z$ that lies in the column space of $X$, so that $z=b_{1} x_{1}+\cdots+b_{k} x_{k}$ for some $k \times 1$ vector $b$ with $i^{\text {th }}$ element $b_{i}$, then
$$
\langle z, \hat{u}\rangle=\sum_{i=1}^{k} b_{i}\left\langle x_{i}, \hat{u}\right\rangle=0
$$
Thus we see that $\hat{u} \in \mathcal{S}^{\perp}(X)$. Clearly we also have $X \hat{\beta} \in \mathcal{S}(X)$. The fitted regression equation
$$
y=X \hat{\beta}+\hat{u}
$$
thus decomposes the vector $y$ into the sum of $X \hat{\beta}$, a vector in the column space of $X$, and $\hat{u}$, a vector in the orthogonal complement to the column space of $X$.
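This orthogonality is easy to check numerically. The following minimal Python sketch (simulated data, with variable names chosen purely for illustration) fits OLS via the normal equations and confirms that $\left\langle x_{i}, \hat{u}\right\rangle=0$ for every column $x_i$, up to floating-point error.

```python
import numpy as np

# Minimal sketch: simulate a regression, fit OLS, and verify that the
# residual vector is orthogonal to every column of X.
rng = np.random.default_rng(0)
n, k = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -0.5])
y = X @ beta + rng.normal(size=n)

# OLS via the normal equations: X'(y - X beta_hat) = 0
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
u_hat = y - X @ beta_hat              # fitted residuals

print(X.T @ u_hat)                    # all entries ~0 (up to rounding)
```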
Statistical properties of the OLS coefficients
We say that the OLS estimator $\hat{\beta}$ is unbiased if $\mathrm{E}(\hat{\beta})=\beta$. This means that $\hat{\beta}$ correctly estimates $\beta$ “on average”. We say that $\hat{\beta}$ is conditionally unbiased if $\mathrm{E}(\hat{\beta} \mid X)=\beta$. That conditional unbiasedness implies unbiasedness is immediate from the Law of Iterated Expectations. We will show that for $\hat{\beta}$ to be conditionally unbiased it is sufficient to assume that
$$
\mathrm{E}(u \mid X)=0
$$
Observe that
$$
\begin{aligned}
\hat{\beta} &=\left(X^{\top} X\right)^{-1} X^{\top} y \\
&=\left(X^{\top} X\right)^{-1} X^{\top} X \beta+\left(X^{\top} X\right)^{-1} X^{\top} u \\
&=\beta+\left(X^{\top} X\right)^{-1} X^{\top} u
\end{aligned}
$$
If we take the expected value of both sides of the last equality, conditional on $X$, we obtain
$$
\mathrm{E}(\hat{\beta} \mid X)=\beta+\left(X^{\top} X\right)^{-1} X^{\top} \mathrm{E}(u \mid X)=\beta
$$
since we have assumed that $\mathrm{E}(u \mid X)=0$. This shows that $\hat{\beta}$ is conditionally unbiased, hence unbiased.
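Unbiasedness is also easy to see in simulation. The sketch below (an illustration under the assumption that $u$ is drawn independently of $X$, so that $\mathrm{E}(u \mid X)=0$ holds by construction) averages $\hat{\beta}$ over many replications; the average should be close to the true $\beta$.

```python
import numpy as np

# Monte Carlo sketch of unbiasedness: when E(u|X) = 0, the average of
# beta_hat across many simulated samples should be close to beta.
rng = np.random.default_rng(1)
n, reps = 50, 10_000
beta = np.array([1.0, 0.5])

estimates = np.empty((reps, 2))
for r in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    u = rng.normal(size=n)            # independent of X, so E(u|X) = 0
    y = X @ beta + u
    estimates[r] = np.linalg.solve(X.T @ X, X.T @ y)

print(estimates.mean(axis=0))         # approximately [1.0, 0.5]
```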
The condition $\mathrm{E}(u \mid X)=0$ is called an exogeneity condition, and when it holds $X$ is said to be exogenous. There are different kinds of exogeneity conditions used to study the behavior of the OLS coefficients, some stronger than others. A common weak exogeneity condition is that
$$
\mathrm{E}\left(u_{t}\right)=0 \text { and } \operatorname{Cov}\left(x_{s i}, u_{t}\right)=0 \text { for every } s, t=1, \ldots, n \text { and } i=1, \ldots, k
$$
Confidence intervals for linear regression
A confidence interval for an element $\beta_{i}$ of our parameter vector $\beta$ is an interval, constructed from data, that is claimed to contain $\beta_{i}$ with some prespecified probability, say $1-\alpha$. This probability is called the coverage rate of the confidence interval. Since the confidence interval is constructed from data it should be considered random, whereas the parameter value $\beta_{i}$ is nonrandom.
Consider testing the null hypothesis that $\beta_{i}=b$, for some arbitrary real number $b$. We can do this using the $t$-statistic
$$
t_{\beta_{i}}(b)=\frac{\hat{\beta}_{i}-b}{\text { s.e. }\left(\hat{\beta}_{i}\right)}.
$$
We reject the null $\beta_{i}=b$ if $\left|t_{\beta_{i}}(b)\right|>c_{\alpha}$, where $c_{\alpha}$ is the $(1-\alpha / 2)$-quantile of the $t(n-k)$ distribution. That is, we do not reject the null if
$$
-c_{\alpha} \leq t_{\beta_{i}}(b) \leq c_{\alpha}
$$
which happens if and only if
$$
\hat{\beta}_{i}-c_{\alpha} \text { s.e. }\left(\hat{\beta}_{i}\right) \leq b \leq \hat{\beta}_{i}+c_{\alpha} \text { s.e. }\left(\hat{\beta}_{i}\right)
$$
What is the probability that the inequalities in the last display are satisfied? We saw in Section 4.2 that when condition (4.1) is satisfied, and the null hypothesis $\beta_{i}=b$ is true, $t_{\beta_{i}}(b)$ has the $t(n-k)$ distribution. Thus the inequalities in the two preceding displays are satisfied with probability $1-\alpha$ when $b=\beta_{i}$. That is,
$$
\operatorname{Pr}\left(\hat{\beta}_{i}-c_{\alpha} \text { s.e. }\left(\hat{\beta}_{i}\right) \leq \beta_{i} \leq \hat{\beta}_{i}+c_{\alpha} \text { s.e. }\left(\hat{\beta}_{i}\right)\right)=1-\alpha
$$
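The interval $\left[\hat{\beta}_{i}-c_{\alpha} \text { s.e. }\left(\hat{\beta}_{i}\right),\; \hat{\beta}_{i}+c_{\alpha} \text { s.e. }\left(\hat{\beta}_{i}\right)\right]$ is therefore a $1-\alpha$ confidence interval for $\beta_{i}$. As a sketch (assuming homoskedastic errors, so that the usual OLS standard errors $\text{s.e.}(\hat{\beta}_i)=\sqrt{s^{2}\left[(X^{\top} X)^{-1}\right]_{ii}}$ apply), the interval can be computed as follows; SciPy's `stats.t.ppf` supplies the $(1-\alpha / 2)$-quantile $c_{\alpha}$ of the $t(n-k)$ distribution.

```python
import numpy as np
from scipy import stats

# Sketch: 95% t-based confidence intervals for the OLS coefficients,
# assuming homoskedastic errors (Var(u|X) = sigma^2 I).
rng = np.random.default_rng(2)
n, k, alpha = 100, 2, 0.05
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
u_hat = y - X @ beta_hat
s2 = (u_hat @ u_hat) / (n - k)                 # error variance estimate
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))

c = stats.t.ppf(1 - alpha / 2, df=n - k)       # (1 - alpha/2)-quantile of t(n-k)
print(np.column_stack([beta_hat - c * se, beta_hat + c * se]))
```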
Linear regression with instrumental variables
Instrumental variables can sometimes provide a solution to the problem of endogenous regressors. Suppose that our observed variables $y$ and $X$ satisfy the linear regression equation $y=X \beta+u$, where we assume that $\mathrm{E}(u)=0$. Suppose that we also observe an $n \times \ell$ matrix of “instruments” denoted $W$, where $\ell \geq k$. In a typical application, we might partition $X$ as $\left[\begin{array}{ll}X_{1} & X_{2}\end{array}\right]$, where the $k_{1}$ columns of $X_{1}$ are presumed to be exogenous, but the $k_{2}$ columns of $X_{2}$ are suspected to be endogenous. To each column of $X_{2}$ we assign one or more exogenous instruments $Z_{i}$, each an $n \times 1$ vector, giving a total of $\ell_{2} \geq k_{2}$ instruments assigned to $X_{2}$. Our complete matrix of instruments $W$ is then given by
$$
W=\left[\begin{array}{llll}
X_{1} & Z_{1} & \cdots & Z_{\ell_{2}}
\end{array}\right]
$$
In other words, we allow each column of the exogenous regressor matrix $X_{1}$ to be its own instrument, and we find at least one new exogenous variable to be an instrument for each column of the potentially endogenous regressor matrix $X_{2}$.
Let’s be very clear about the assumptions we will use to analyze the instrumental variables regression model. We extend the random sampling conditions (3.5) and (3.6) to include the instruments $W$ in the obvious way:
The rows of $\left[\begin{array}{lll}X & W & u\end{array}\right]$ have identical joint distributions.
The rows of $\left[\begin{array}{lll}X & W & u\end{array}\right]$ are independent of one another.
We replace the rank condition (3.7) with the following two rank conditions.
The $\ell \times k$ matrix $\mathrm{E}\left(W_{t}^{\top} X_{t}\right)$ has full column rank $k$.
The $\ell \times \ell$ matrix $\mathrm{E}\left(W_{t}^{\top} W_{t}\right)$ has full rank $\ell$.
Finally, and crucially, rather than impose an exogeneity condition on the regressors $X$, we instead impose an exogeneity condition on the instruments $W$:
$$
\mathrm{E}\left(W_{t}^{\top} u_{t}\right)=0
$$
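Although the excerpt above stops at the assumptions, a standard estimator built on them is the generalized IV (two-stage least squares) estimator $\hat{\beta}_{\mathrm{IV}}=\left(X^{\top} P_{W} X\right)^{-1} X^{\top} P_{W} y$, where $P_{W}=W\left(W^{\top} W\right)^{-1} W^{\top}$ projects onto the column space of $W$. The sketch below (a just-identified example with an invented data-generating process, so all names and coefficients are illustrative assumptions) shows OLS biased by an endogenous regressor while IV recovers the true coefficients.

```python
import numpy as np

# Sketch of 2SLS: beta_iv = (X' P_W X)^{-1} X' P_W y, P_W = W (W'W)^{-1} W'.
# Illustrative DGP: x2 is endogenous (correlated with u through v),
# while z is a relevant, exogenous instrument for it.
rng = np.random.default_rng(3)
n = 500
z = rng.normal(size=n)
v = rng.normal(size=n)
u = 0.8 * v + rng.normal(size=n)         # Cov(x2, u) > 0, so OLS is biased
x2 = 0.5 * z + v                         # endogenous regressor
X = np.column_stack([np.ones(n), x2])    # X = [X1 X2], X1 = constant
W = np.column_stack([np.ones(n), z])     # W = [X1 Z]: X1 instruments itself
y = X @ np.array([1.0, 0.5]) + u

P_W = W @ np.linalg.solve(W.T @ W, W.T)  # projection onto column space of W
print(np.linalg.solve(X.T @ X, X.T @ y))              # OLS: slope on x2 biased
print(np.linalg.solve(X.T @ P_W @ X, X.T @ P_W @ y))  # IV: close to [1.0, 0.5]
```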
Econometric Models and Methods – ECMT3110
This unit provides a rigorous treatment of linear regression analysis and related methods, including estimation by instrumental variables. It is designed for students who have taken an introductory course on linear regression and have had prior exposure to matrix algebra and relevant numerical software. Finite sample and asymptotic properties of linear regression are developed and discussed. Numerical software is used to implement and illustrate tools and concepts.
Classes
1x2hr lecture/week, 1x1hr tutorial/week
Assessment
1x1000wd equivalent written assignment (20%), 1x1.5hr mid-semester exam (30%), 1x2hr final exam (50%). Please refer to the unit of study outline for individual sessions at https://www.sydney.edu.au/units
Pre-requisites
ECMT2110 or ECMT2010 or ECMT2160
Prohibitions
ECMT3010