Linear Algebra Homework Help | Convex Optimization | Least-Squares Regression — UpriviateTA: we work in the areas we know best! UpriviateTA's experts are well versed in the classic papers, standard models, and complete reading lists closely tied to linear algebra: matrix theory and matrix inequalities such as the Weyl inequality and the min-max principle, the polynomial method and rank tricks, computations involving the singular value decomposition, linear least squares (LLS), the Jordan canonical form, and the QR decomposition, optimization problems built on advanced LLS, and even linear programming methods (a key ingredient in the solution of the sphere-packing problem in dimensions 8 and 24). As many masters of the subject have observed, most techniques used in linear algebra assignments and papers grew out of a handful of central problems: geometric invariant theory (GIT), studied since Klein's time, and optimization problems studied since Newton's time (for example, the cycloid). We highly recommend our linear algebra service: since 2017 we have completed nearly a thousand assignments of this kind. Below is a worked solution to a typical convex optimization assignment.
Problem 1. Solution set of a quadratic inequality. Let $C \subseteq \mathbf{R}^{n}$ be the solution set of a quadratic inequality,
$$
C=\left\{x \in \mathbf{R}^{n} \mid x^{T} A x+b^{T} x+c \leq 0\right\}
$$
with $A \in \mathbf{S}^{n}, b \in \mathbf{R}^{n},$ and $c \in \mathbf{R}$.
(a) Show that $C$ is convex if $A \succeq 0$.
(b) Show that the intersection of $C$ and the hyperplane defined by $g^{T} x+h=0$ (where $g \neq 0$) is convex if $A+\lambda g g^{T} \succeq 0$ for some $\lambda \in \mathbf{R}$.
Are the converses of these statements true?
Proof . A set is convex if and only if its intersection with an arbitrary line $\{\hat{x}+t v \mid t \in \mathbf{R}\}$ is convex.
(a) We have
$$
(\hat{x}+t v)^{T} A(\hat{x}+t v)+b^{T}(\hat{x}+t v)+c=\alpha t^{2}+\beta t+\gamma
$$
where
$$
\alpha=v^{T} A v, \quad \beta=b^{T} v+2 \hat{x}^{T} A v, \quad \gamma=c+b^{T} \hat{x}+\hat{x}^{T} A \hat{x}
$$
The intersection of $C$ with the line defined by $\hat{x}$ and $v$ is the set
$$
\left\{\hat{x}+t v \mid \alpha t^{2}+\beta t+\gamma \leq 0\right\}
$$
which is convex if $\alpha \geq 0$. This holds for every line (i.e., for all $v$) if $v^{T} A v \geq 0$ for all $v$, i.e., if $A \succeq 0$. The converse does not hold; for example, take $A=-1$, $b=0$, $c=-1$. Then $A \nsucceq 0$, but $C=\mathbf{R}$ is convex.
(b) Let $H=\left\{x \mid g^{T} x+h=0\right\}$. We define $\alpha, \beta,$ and $\gamma$ as in the solution of part (a), and, in addition,
$$
\delta=g^{T} v, \quad \epsilon=g^{T} \hat{x}+h
$$
Without loss of generality we can assume that $\hat{x} \in H,$ i.e., $\epsilon=0 .$ The intersection of $C \cap H$ with the line defined by $\hat{x}$ and $v$ is
$$
\left\{\hat{x}+t v \mid \alpha t^{2}+\beta t+\gamma \leq 0, \delta t=0\right\}
$$
If $\delta=g^{T} v \neq 0$, the intersection is either the singleton $\{\hat{x}\}$ (if $\gamma \leq 0$) or empty. In either case it is a convex set. If $\delta=g^{T} v=0$, the set reduces to
$$
\left\{\hat{x}+t v \mid \alpha t^{2}+\beta t+\gamma \leq 0\right\}
$$
which is convex if $\alpha \geq 0$. Therefore $C \cap H$ is convex if
$$
g^{T} v=0 \Longrightarrow v^{T} A v \geq 0
$$
This is true if there exists $\lambda$ such that $A+\lambda g g^{T} \succeq 0 ;$ then
$$
v^{T} A v=v^{T}\left(A+\lambda g g^{T}\right) v \geq 0
$$
for all $v$ satisfying $g^{T} v=0$. Again, the converse is not true.
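The sufficient condition derived above is easy to test numerically. Below is a minimal sketch (assuming NumPy and SciPy are available; the function name `intersection_is_convex` is ours) that checks $v^{T} A v \geq 0$ on the hyperplane $g^{T} v=0$ by restricting $A$ to a basis of the nullspace of $g^{T}$:

```python
import numpy as np
from scipy.linalg import null_space

def intersection_is_convex(A, g, tol=1e-10):
    """Sufficient condition from Problem 1(b): C ∩ H is convex if
    v^T A v >= 0 whenever g^T v = 0, i.e. if the restriction of A
    to the nullspace of g^T is positive semidefinite."""
    F = null_space(g.reshape(1, -1))   # columns span {v : g^T v = 0}
    M = F.T @ A @ F                    # A restricted to that subspace
    return np.linalg.eigvalsh(M).min() >= -tol

# Example: A is indefinite, yet PSD on the hyperplane g^T v = 0,
# so the intersection with the hyperplane is convex.
A = np.array([[1.0, 0.0], [0.0, -1.0]])
g = np.array([0.0, 1.0])               # nullspace is span{(1, 0)}
print(intersection_is_convex(A, g))    # True: v^T A v = v_1^2 >= 0
```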
Problem 2. Hyperbolic sets. Show that the hyperbolic set $\left\{x \in \mathbf{R}_{+}^{2} \mid x_{1} x_{2} \geq 1\right\}$ is convex. As a generalization, show that $\left\{x \in \mathbf{R}_{+}^{n} \mid \prod_{i=1}^{n} x_{i} \geq 1\right\}$ is convex. Hint. If $a, b \geq 0$ and $0 \leq \theta \leq 1$, then $a^{\theta} b^{1-\theta} \leq \theta a+(1-\theta) b$.
Proof . (a) We prove the first part without using the hint. Consider a convex combination $z=\theta x+(1-\theta) y$ of two points $x=\left(x_{1}, x_{2}\right)$ and $y=\left(y_{1}, y_{2}\right)$ in the set. If $x \succeq y$, then $z=\theta x+(1-\theta) y \succeq y$ and obviously $z_{1} z_{2} \geq y_{1} y_{2} \geq 1$. The same argument applies if $y \succeq x$. Otherwise, suppose $x \nsucceq y$ and $y \nsucceq x$, i.e., $\left(y_{1}-x_{1}\right)\left(y_{2}-x_{2}\right)<0$. Then
$$
\begin{array}{l}
\left(\theta x_{1}+(1-\theta) y_{1}\right)\left(\theta x_{2}+(1-\theta) y_{2}\right) \\
\quad=\theta^{2} x_{1} x_{2}+(1-\theta)^{2} y_{1} y_{2}+\theta(1-\theta) x_{1} y_{2}+\theta(1-\theta) x_{2} y_{1} \\
\quad=\theta x_{1} x_{2}+(1-\theta) y_{1} y_{2}-\theta(1-\theta)\left(y_{1}-x_{1}\right)\left(y_{2}-x_{2}\right) \\
\quad \geq 1
\end{array}
$$
(b) Assume that $\prod_{i} x_{i} \geq 1$ and $\prod_{i} y_{i} \geq 1 .$ Using the inequality in the hint, we have
$$
\prod_{i}\left(\theta x_{i}+(1-\theta) y_{i}\right) \geq \prod_{i} x_{i}^{\theta} y_{i}^{1-\theta}=\left(\prod_{i} x_{i}\right)^{\theta}\left(\prod_{i} y_{i}\right)^{1-\theta} \geq 1 .
$$
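As a quick sanity check (not a proof), one can sample random points of the set and verify that convex combinations stay inside it. A minimal sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte-Carlo check of Problem 2: if x, y > 0 with prod(x) >= 1 and
# prod(y) >= 1, then every convex combination also has product >= 1.
n = 5
for _ in range(10000):
    x = rng.uniform(0.1, 3.0, n)
    y = rng.uniform(0.1, 3.0, n)
    if np.prod(x) < 1 or np.prod(y) < 1:
        continue                       # keep only points inside the set
    theta = rng.uniform()
    assert np.prod(theta * x + (1 - theta) * y) >= 1 - 1e-12
print("no counterexample found")
```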
Problem 3. Expanded and restricted sets. Let $S \subseteq \mathbf{R}^{n}$, and let $\|\cdot\|$ be a norm on $\mathbf{R}^{n}$.
(a) For $a \geq 0$ we define $S_{a}$ as $\{x \mid \operatorname{dist}(x, S) \leq a\},$ where $\operatorname{dist}(x, S)=\inf _{y \in S}\|x-y\|$.
We refer to $S_{a}$ as $S$ expanded or extended by $a$. Show that if $S$ is convex, then $S_{a}$ is convex.
(b) For $a \geq 0$ we define $S_{-a}=\{x \mid B(x, a) \subseteq S\},$ where $B(x, a)$ is the ball (in the norm $\|\cdot\|),$ centered at $x,$ with radius $a$. We refer to $S_{-a}$ as $S$ shrunk or restricted by $a$ since $S_{-a}$ consists of all points that are at least a distance $a$ from $\mathbf{R}^{n} \backslash S .$ Show that if $S$ is convex, then $S_{-a}$ is convex.
Proof . (a) Consider two points $x_{1}, x_{2} \in S_{a}$. For $0 \leq \theta \leq 1$,
$$
\begin{aligned}
\operatorname{dist}\left(\theta x_{1}+(1-\theta) x_{2}, S\right) &=\inf _{y \in S}\left\|\theta x_{1}+(1-\theta) x_{2}-y\right\| \\
&=\inf _{y_{1}, y_{2} \in S}\left\|\theta x_{1}+(1-\theta) x_{2}-\theta y_{1}-(1-\theta) y_{2}\right\| \\
&=\inf _{y_{1}, y_{2} \in S}\left\|\theta\left(x_{1}-y_{1}\right)+(1-\theta)\left(x_{2}-y_{2}\right)\right\| \\
& \leq \inf _{y_{1}, y_{2} \in S}\left(\theta\left\|x_{1}-y_{1}\right\|+(1-\theta)\left\|x_{2}-y_{2}\right\|\right) \\
&=\theta \inf _{y_{1} \in S}\left\|x_{1}-y_{1}\right\|+(1-\theta) \inf _{y_{2} \in S}\left\|x_{2}-y_{2}\right\| \\
& \leq a,
\end{aligned}
$$
where the second equality uses the convexity of $S$: $\theta y_{1}+(1-\theta) y_{2} \in S$ for all $y_{1}, y_{2} \in S$, and every $y \in S$ is obtained by taking $y_{1}=y_{2}=y$, so both infima range over the same set. Hence $\theta x_{1}+(1-\theta) x_{2} \in S_{a}$, proving convexity.
(b) Consider two points $x_{1}, x_{2} \in S_{-a},$ so for all $u$ with $\|u\| \leq a$,
$$
x_{1}+u \in S, \quad x_{2}+u \in S
$$
For $0 \leq \theta \leq 1$ and $\|u\| \leq a$,
$$
\theta x_{1}+(1-\theta) x_{2}+u=\theta\left(x_{1}+u\right)+(1-\theta)\left(x_{2}+u\right) \in S
$$
because $S$ is convex. We conclude that $\theta x_{1}+(1-\theta) x_{2} \in S_{-a}$.
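For a concrete instance where both $S_{a}$ and $S_{-a}$ can be written down in closed form, take $S=[-1,1]^{n}$ with the Euclidean norm: $\operatorname{dist}(x, S)=\|x-\operatorname{clip}(x)\|_{2}$, since projection onto a box is componentwise clipping, and $B(x, a) \subseteq S$ reduces to $\left|x_{i}\right| \leq 1-a$ for all $i$. A minimal sketch, assuming NumPy, that spot-checks convexity of both sets:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a, tol = 3, 0.5, 1e-9

def dist_to_box(x):
    # Euclidean distance to S = [-1, 1]^n; projection = clipping.
    return np.linalg.norm(x - np.clip(x, -1.0, 1.0))

def in_S_a(x):        # expanded set S_a
    return dist_to_box(x) <= a + tol

def in_S_minus_a(x):  # restricted set S_{-a}
    return np.all(np.abs(x) <= 1 - a + tol)

for member in (in_S_a, in_S_minus_a):
    for _ in range(10000):
        x1, x2 = rng.uniform(-2, 2, n), rng.uniform(-2, 2, n)
        if member(x1) and member(x2):
            theta = rng.uniform()
            assert member(theta * x1 + (1 - theta) * x2)
print("no counterexample found")
```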
Problem 4. Invertible linear-fractional functions. Let $f: \mathbf{R}^{n} \rightarrow \mathbf{R}^{n}$ be the linear-fractional function
$$
f(x)=(A x+b) /\left(c^{T} x+d\right), \quad \operatorname{dom} f=\left\{x \mid c^{T} x+d>0\right\}
$$
Suppose the matrix
$$
Q=\left[\begin{array}{ll}
A & b \\
c^{T} & d
\end{array}\right]
$$
is nonsingular. Show that $f$ is invertible and that $f^{-1}$ is a linear-fractional mapping. Give an explicit expression for $f^{-1}$ and its domain in terms of $A, b, c,$ and $d$. Hint. It may be easier to express $f^{-1}$ in terms of $Q$.
Proof . This follows from remark 2.2 on page 41. Let $\mathcal{P}(z)=(z, 1)$ denote the lifting to homogeneous coordinates, so that $f(x)=\mathcal{P}^{-1}(Q \mathcal{P}(x))$ on $\operatorname{dom} f$. The inverse of $f$ is given by
$$
f^{-1}(x)=\mathcal{P}^{-1}\left(Q^{-1} \mathcal{P}(x)\right),
$$
so $f^{-1}$ is the projective transformation associated with $Q^{-1}$. Explicitly, partitioning $Q^{-1}$ in the same way as $Q$,
$$
Q^{-1}=\left[\begin{array}{cc}
\tilde{A} & \tilde{b} \\
\tilde{c}^{T} & \tilde{d}
\end{array}\right],
$$
we obtain
$$
f^{-1}(x)=(\tilde{A} x+\tilde{b}) /\left(\tilde{c}^{T} x+\tilde{d}\right), \quad \operatorname{dom} f^{-1}=\left\{x \mid \tilde{c}^{T} x+\tilde{d}>0\right\}.
$$
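The relation $f^{-1}(f(x))=x$ is easy to confirm numerically. A minimal sketch, assuming NumPy (the helper `projective` is ours): it builds the projective transformations associated with $Q$ and $Q^{-1}$ and checks that they compose to the identity on $\operatorname{dom} f$:

```python
import numpy as np

def projective(Q):
    """Projective transformation associated with Q: lift x to (x, 1),
    apply Q, then divide by the last homogeneous coordinate."""
    n = Q.shape[0] - 1
    def f(x):
        z = Q @ np.append(x, 1.0)
        return z[:n] / z[n]
    return f

A = np.array([[2.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
c = np.array([1.0, 1.0])
d = 3.0
Q = np.block([[A, b[:, None]], [c[None, :], np.array([[d]])]])
assert abs(np.linalg.det(Q)) > 1e-12   # Q must be nonsingular

f = projective(Q)
f_inv = projective(np.linalg.inv(Q))

x = np.array([0.5, -0.25])             # satisfies c^T x + d > 0
print(np.allclose(f_inv(f(x)), x))     # True
```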
Problem 5. Strictly positive solution of linear equations. Suppose $A \in \mathbf{R}^{m \times n}, b \in \mathbf{R}^{m},$ with $b \in \mathcal{R}(A) .$ Show that there exists an $x$ satisfying
$$
x \succ 0, \quad A x=b
$$
if and only if there exists no $\lambda$ with
$$
A^{T} \lambda \succeq 0, \quad A^{T} \lambda \neq 0, \quad b^{T} \lambda \leq 0
$$
Hint. First prove the following fact from linear algebra: $c^{T} x=d$ for all $x$ satisfying $A x=b$ if and only if there is a vector $\lambda$ such that $c=A^{T} \lambda, d=b^{T} \lambda$.
Proof . We first prove the result in the hint. Suppose that there exists a $\lambda$ such that $c=A^{T} \lambda$ and $d=b^{T} \lambda$. It is clear that if $A x=b$ then
$$
c^{T} x=\lambda^{T} A x=\lambda^{T} b=d
$$
Conversely, suppose $A x=b$ implies $c^{T} x=d,$ and that $\operatorname{rank} A=r .$ Let $F \in \mathbf{R}^{n \times(n-r)}$ be a matrix with $\mathcal{R}(F)=\mathcal{N}(A),$ and let $x_{0}$ be a solution of $A x=b .$ Then $A x=b$ if and only if $x=F y+x_{0}$ for some $y,$ and $c^{T} x=d$ for all $x=F y+x_{0}$ implies
$$
c^{T} F y+c^{T} x_{0}=d
$$
for all $y$. This is only possible if $F^{T} c=0$, i.e., $c \in \mathcal{N}\left(F^{T}\right)=\mathcal{R}\left(A^{T}\right)$, i.e., there exists a $\lambda$ such that $c=A^{T} \lambda$. The condition $c^{T} F y+c^{T} x_{0}=d$ then reduces to $c^{T} x_{0}=d$, i.e., $\lambda^{T} A x_{0}=\lambda^{T} b=d$. In conclusion, if $c^{T} x=d$ for all $x$ with $A x=b$, then there exists a $\lambda$ such that $c=A^{T} \lambda$ and $d=b^{T} \lambda$.
To prove the main result, we use a standard separating hyperplane argument, applied to the sets $C=\mathbf{R}_{++}^{n}$ and $D=\{x \mid A x=b\}$. Suppose no strictly positive $x$ satisfies $A x=b$, i.e., $C$ and $D$ are disjoint. Then there exist $c \neq 0$ and $d$ such that $c^{T} x \geq d$ for all $x \in C$ and $c^{T} x \leq d$ for all $x \in D$. The first condition means that $c \succeq 0$ and $d \leq 0$ (since $\inf _{x \succ 0} c^{T} x=0$). Since $c^{T} x \leq d$ on $D$, which is an affine set, $c^{T} x$ must be constant on $D$. (If $c^{T} x$ were not constant on $D$, it would take on all values.) We can relabel $d$ to be this constant value, so we have $c^{T} x=d$ on $D$. Now using the hint, there is some $\lambda$ such that $c=A^{T} \lambda$ and $d=b^{T} \lambda$. This $\lambda$ satisfies $A^{T} \lambda=c \succeq 0$, $A^{T} \lambda=c \neq 0$, and $b^{T} \lambda=d \leq 0$, as required. Conversely, if both $x$ and $\lambda$ existed, we would have the contradiction $0 \geq b^{T} \lambda=x^{T} A^{T} \lambda>0$, where the last inequality holds because $x \succ 0$ and $A^{T} \lambda \succeq 0$ with $A^{T} \lambda \neq 0$.
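The theorem can also be explored numerically: searching for a strictly positive solution is a linear program (maximize the smallest entry of $x$ subject to $A x=b$). A minimal sketch, assuming NumPy and SciPy (the function name, and the cap $t \leq 1$ added only to keep the LP bounded, are ours):

```python
import numpy as np
from scipy.optimize import linprog

def strictly_positive_solution(A, b):
    """Maximize t subject to Ax = b and x_i >= t (variables z = (x, t)).
    If the optimal t is positive, x is a strictly positive solution;
    otherwise, by the theorem above, a certificate λ with
    A^T λ >= 0, A^T λ != 0, b^T λ <= 0 must exist."""
    m, n = A.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                     # minimize -t
    A_eq = np.hstack([A, np.zeros((m, 1))])          # Ax = b
    A_ub = np.hstack([-np.eye(n), np.ones((n, 1))])  # t - x_i <= 0
    b_ub = np.zeros(n)
    bounds = [(None, None)] * n + [(None, 1.0)]      # cap t for boundedness
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=bounds, method="highs")
    if res.status == 0 and res.x[-1] > 1e-9:
        return res.x[:-1]
    return None

A = np.array([[1.0, 1.0, -1.0]])
b = np.array([1.0])
print(strictly_positive_solution(A, b))  # e.g. [1. 1. 1.]
```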
More worked linear algebra examples can be found here.
E-mail: [email protected] WeChat: shuxuejun
uprivate™ is a professional service company for Chinese students studying abroad, focused on providing reliable homework help in North America, Australia, and the UK, and specializing in mathematics, statistics, finance, economics, computer science, and physics.