Problem 1.

Let $X_{1}, X_{2}, \cdots, X_{n}$ be a random sample from $\operatorname{Normal}\left(\mu, \sigma^{2}\right)$ distribution. Suppose we wish to test the hypothesis that the largest order statistic $X_{n: n}$ is an outlier, and that we wish to use the test statistic
$$T=\frac{X_{n: n}-X_{n-1: n}}{S},$$
where $S$ is the sample standard deviation.
(a) Under the null hypothesis that $X_{n: n}$ is not an outlier, show that the statistic $T$ is ancillary;
(b) Then, using Basu’s theorem, derive an expression for the mean and variance of $T$, under $H_{0}$;
(c) Using the tables of means, variances and covariances of order statistics (see, for example, H.L. Harter and N. Balakrishnan, CRC Handbook of Tables for the Use of Order Statistics in Estimation, CRC Press, Florida, 1996), compute the mean and variance of $T$ for sample size $n=10$, using the expressions derived in Part (b);
(d) Using 1,000 Monte Carlo simulation runs, simulate the values of mean and variance of $T$ under $H_{0}$, and compare them with those determined in Part (c) and comment;
(e) Explain what the critical region will be for the test based on $T$ for testing whether $X_{n: n}$ is a large outlier;
(f) Then, determine the upper $5 \%$ critical value for $T$, for sample size $n=10$, through Monte Carlo simulations (use 1,000 simulation runs);
(g) Assuming that the logarithms of the numbers of trees in orchards are normally distributed, Singh et al. (1982) presented the following observed values of the logarithms of the numbers of trees:
$$1.7918,2.3026,2.7726,3.2581,3.5264,3.8067,3.9703,4.0943,4.2905,4.5747$$
Then, using the statistic $T$, test whether the largest observation $4.5747$ is an outlier or not, at the $5 \%$ level of significance.
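Parts (d), (f), and (g) lend themselves to a quick numerical prototype. The sketch below (Python/NumPy assumed; the seed is arbitrary, and simulating from the standard normal is legitimate precisely because $T$ is ancillary under $H_{0}$) estimates the mean, variance, and upper $5\%$ critical value of $T$ for $n=10$, and then applies the test to the Singh et al. (1982) data.

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed
n, runs = 10, 1000

# Under H0, T is ancillary, so standard normal samples suffice.
samples = np.sort(rng.standard_normal((runs, n)), axis=1)
s = samples.std(axis=1, ddof=1)               # sample standard deviation S
t = (samples[:, -1] - samples[:, -2]) / s     # T = (X_{n:n} - X_{n-1:n}) / S

print("simulated E[T]  :", t.mean())
print("simulated Var[T]:", t.var(ddof=1))
crit = np.quantile(t, 0.95)                   # upper 5% critical value
print("upper 5% point  :", crit)

# Singh et al. (1982): logarithms of numbers of trees in orchards
x = np.array([1.7918, 2.3026, 2.7726, 3.2581, 3.5264,
              3.8067, 3.9703, 4.0943, 4.2905, 4.5747])
x_sorted = np.sort(x)
t_obs = (x_sorted[-1] - x_sorted[-2]) / x.std(ddof=1)
print("observed T      :", t_obs)
print("declare outlier :", t_obs > crit)
```

Since a large outlier inflates the gap $X_{n:n}-X_{n-1:n}$, the critical region is the upper tail of $T$, which is why the $0.95$ quantile of the simulated values is used as the critical value.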

Problem 2.

Suppose $\mathbf{C}=\left(c_{i j}\right)_{i, j=1}^{k}$ is a symmetric non-singular product-decomposable matrix with $c_{i j}=a_{i} b_{j}$ for $i \leq j$. Then, show that $\mathbf{C}^{-1}$ is a symmetric tri-diagonal matrix with (for $i \leq j$)
$$c^{i j}=\left\{\begin{array}{cll} \frac{a_{2}}{a_{1}\left(a_{2} b_{1}-a_{1} b_{2}\right)} & \text { for } & i=j=1 \\ \frac{a_{i+1} b_{i-1}-a_{i-1} b_{i+1}}{\left(a_{i} b_{i-1}-a_{i-1} b_{i}\right)\left(a_{i+1} b_{i}-a_{i} b_{i+1}\right)} & \text { for } & 2 \leq i=j \leq k-1 \\ \frac{b_{k-1}}{b_{k}\left(a_{k} b_{k-1}-a_{k-1} b_{k}\right)} & \text { for } & i=j=k \\ -\frac{1}{a_{i+1} b_{i}-a_{i} b_{i+1}} & \text { for } & j=i+1 \text { and } 1 \leq i \leq k-1 \\ 0 & \text { for } & j>i+1 \end{array}\right.$$
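The stated inverse can be sanity-checked numerically for a small $k$ (a check, not a proof). In the sketch below the choice $a_i = i$, $b_j = k+1-j$ is an arbitrary illustration; up to scale, it is the covariance structure of uniform order statistics.

```python
import numpy as np

k = 6
a = np.arange(1, k + 1, dtype=float)              # a_1, ..., a_k
b = (k + 1) - np.arange(1, k + 1, dtype=float)    # b_1, ..., b_k

# Symmetric product-decomposable C: c_{ij} = a_i b_j for i <= j
i_idx, j_idx = np.indices((k, k))
C = np.where(i_idx <= j_idx, a[i_idx] * b[j_idx], a[j_idx] * b[i_idx])

Cinv = np.linalg.inv(C)

# Closed-form tridiagonal entries from the problem statement (0-based)
G = np.zeros((k, k))
G[0, 0] = a[1] / (a[0] * (a[1] * b[0] - a[0] * b[1]))
for i in range(1, k - 1):
    G[i, i] = (a[i + 1] * b[i - 1] - a[i - 1] * b[i + 1]) / (
        (a[i] * b[i - 1] - a[i - 1] * b[i])
        * (a[i + 1] * b[i] - a[i] * b[i + 1]))
G[k - 1, k - 1] = b[k - 2] / (b[k - 1] * (a[k - 1] * b[k - 2] - a[k - 2] * b[k - 1]))
for i in range(k - 1):
    G[i, i + 1] = G[i + 1, i] = -1.0 / (a[i + 1] * b[i] - a[i] * b[i + 1])

print(np.allclose(Cinv, G))   # numerical inverse matches the closed form
```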

Problem 3.

Let $X_{1: n}<X_{2: n}<\cdots<X_{n: n}$ be the order statistics obtained from a random sample of size $n$ from Uniform $(0, \theta)$ distribution. Then:
(a) Using the results of the last exercise, derive the Generalized Least Squares Estimator (Best Linear Unbiased Estimator) of $\theta$ and its variance;
(b) Make some comments about how it compares with the Uniformly Minimum Variance Unbiased Estimator derived from Lehmann-Scheffé theorem.
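As a numerical companion to part (a) (a sketch assuming NumPy; a check, not a substitute for the derivation), the GLS/BLUE weights can be computed directly from the means $\alpha_i = i/(n+1)$ and the covariances of Uniform $(0,1)$ order statistics, $\sigma_{ij} = i(n+1-j)/\{(n+1)^2(n+2)\}$ for $i \leq j$.

```python
import numpy as np

n = 10
idx = np.arange(1, n + 1, dtype=float)

# Means and covariances of Uniform(0, 1) order statistics (theta = 1)
alpha = idx / (n + 1)                              # E[X_{i:n}] = i*theta/(n+1)
i_idx, j_idx = np.indices((n, n))
lo = np.minimum(i_idx, j_idx) + 1                  # min(i, j), 1-based
hi = np.maximum(i_idx, j_idx) + 1                  # max(i, j), 1-based
Sigma = lo * (n + 1 - hi) / ((n + 1) ** 2 * (n + 2))

# GLS (BLUE): theta_hat = w'X with w = Sigma^{-1} alpha / (alpha' Sigma^{-1} alpha)
Sinv = np.linalg.inv(Sigma)
w = Sinv @ alpha / (alpha @ Sinv @ alpha)
var = 1.0 / (alpha @ Sinv @ alpha)                 # Var(theta_hat) / theta^2

print(np.round(w, 6))   # all weight falls on X_{n:n}: w_n = (n+1)/n
print(var)              # equals 1 / (n * (n + 2))
```

The computation recovers the closed form $\hat{\theta} = \frac{n+1}{n} X_{n: n}$ with $\operatorname{Var}(\hat{\theta}) = \theta^{2} /\{n(n+2)\}$, which is also the UMVUE obtained from the complete sufficient statistic $X_{n: n}$ via Lehmann-Scheffé, the comparison asked for in part (b).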

Problem 4.

Consider a location family of distributions with density function $f_{X}(x ; \mu)=f(x-\mu)$, for $x, \mu \in \mathbf{R}$. Let $X_{1: n}<X_{2: n}<\cdots<X_{n: n}$ be the order statistics obtained from a random sample of size $n$ from this location family of distributions.
(a) Then, by adopting the Lagrangian multiplier method, derive an expression for the Best Linear Unbiased Estimator of $\mu$;
(b) Obtain an expression for the variance of that estimator.

Problem 5.

Let $X_{1: n}<X_{2: n}<\cdots<X_{n: n}$ be the order statistics obtained from a random sample of size $n$ from Uniform $(\theta, \theta+1)$ distribution. By using the results of the last exercise, derive the Best Linear Unbiased Estimator of $\theta$ and its variance.
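The Lagrangian construction of Problem 4 can be checked numerically for this model (a sketch assuming NumPy). Here $E[X_{i: n}] = \theta + c_i$ with $c_i = i/(n+1)$, and the BLUE $\hat{\theta} = \mathbf{b}' \mathbf{X}$ minimizes $\mathbf{b}' \boldsymbol{\Sigma} \mathbf{b}$ subject to $\mathbf{b}' \mathbf{1} = 1$ and $\mathbf{b}' \mathbf{c} = 0$.

```python
import numpy as np

n = 10
idx = np.arange(1, n + 1, dtype=float)

# Uniform(theta, theta+1): E[X_{i:n}] = theta + c_i, c_i = i/(n+1);
# Sigma is the covariance matrix of Uniform(0, 1) order statistics.
c = idx / (n + 1)
i_idx, j_idx = np.indices((n, n))
lo = np.minimum(i_idx, j_idx) + 1
hi = np.maximum(i_idx, j_idx) + 1
Sigma = lo * (n + 1 - hi) / ((n + 1) ** 2 * (n + 2))

# Lagrangian solution: b = Sigma^{-1} (lam*1 + mu*c), with (lam, mu)
# solving the 2x2 system imposed by the constraints b'1 = 1, b'c = 0.
Sinv = np.linalg.inv(Sigma)
ones = np.ones(n)
A = np.array([[ones @ Sinv @ ones, ones @ Sinv @ c],
              [c @ Sinv @ ones,    c @ Sinv @ c]])
lam, mu = np.linalg.solve(A, [1.0, 0.0])
b = Sinv @ (lam * ones + mu * c)

print(np.round(b, 6))     # b_1 = n/(n-1), b_n = -1/(n-1), rest zero
print(b @ Sigma @ b)      # equals n / ((n^2 - 1)(n + 2))
```

The weights recover the closed form $\hat{\theta} = \{n X_{1: n} - X_{n: n}\}/(n-1)$ with $\operatorname{Var}(\hat{\theta}) = n /\{(n^{2}-1)(n+2)\}$.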

