The covariance matrix of the pair $(X, Y)$ is
$$\Sigma = \begin{pmatrix} \text{Cov}(X, X) & \text{Cov}(X, Y) \\ \text{Cov}(Y, X) & \text{Cov}(Y, Y) \end{pmatrix}$$
Here, $\text{Cov}(X, X)$ is the variance of $X$, $\text{Cov}(Y, Y)$ is the variance of $Y$, $\text{Cov}(X, Y)$ is the covariance between $X$ and $Y$, and $\text{Cov}(Y, X)$ is the covariance between $Y$ and $X$.
Symmetry of Covariance
To show that the covariance matrix is symmetric, we need to prove that $\text{Cov}(X, Y) = \text{Cov}(Y, X)$.
Definition of Covariance
The covariance between two random variables $X$ and $Y$ is defined as:
$$\text{Cov}(X, Y) = E\big[(X - E[X])(Y - E[Y])\big]$$
The product $(X - E[X])(Y - E[Y])$ is unchanged when its two factors are swapped, so taking expectations gives $\text{Cov}(X, Y) = E\big[(Y - E[Y])(X - E[X])\big] = \text{Cov}(Y, X)$.
Since $\text{Cov}(X, Y) = \text{Cov}(Y, X)$, the off-diagonal elements of the covariance matrix are equal, making the covariance matrix symmetric.
Symmetric Covariance Matrix
Therefore, the covariance matrix $\Sigma$ is symmetric:
$$\Sigma = \begin{pmatrix} \text{Cov}(X, X) & \text{Cov}(X, Y) \\ \text{Cov}(Y, X) & \text{Cov}(Y, Y) \end{pmatrix} = \Sigma^{\prime}$$
The statement "The covariance matrix of random vectors XX and YY is symmetric" is true, as demonstrated by the equality “Cov”(X,Y)=”Cov”(Y,X)\text{Cov}(X, Y) = \text{Cov}(Y, X).
(b) If $X$ is a $p$-variate normal random vector, then every linear combination $c^{\prime} X$, where $c_{p \times 1}$ is a scalar vector, is also a $p$-variate normal vector.
Answer:
The statement is false. Let’s analyze and justify this.
Understanding the Statement
The statement says:
"If XX is a pp-variate normal random vector, then every linear combination c^(‘)Xc^{\prime} X, where cc is a p xx1p \times 1 scalar vector, is also a pp-variate normal vector."
Breaking Down the Components
$p$-variate Normal Random Vector: $X$ is a random vector with a multivariate normal distribution in $p$-dimensional space.
Linear Combination $c^{\prime}X$: $c^{\prime}X$ represents a linear combination of the elements of $X$, where $c$ is a vector of coefficients.
Correct Interpretation
Let’s rephrase the correct version of the statement for clarity:
If $X$ is a $p$-variate normal random vector, then every linear combination $c^{\prime} X$, where $c$ is a $p \times 1$ vector, is also normally distributed (not necessarily a $p$-variate normal vector, but a univariate normal distribution).
Proof of Correct Interpretation
If $X$ is a $p$-variate normal vector, then:
$$X \sim N_p(\mu, \Sigma)$$
where $\mu$ is the mean vector and $\Sigma$ is the covariance matrix.
Now consider the linear combination $Y = c^{\prime}X$. Its mean is
$$E(Y) = E(c^{\prime}X) = c^{\prime}\mu$$
and its variance is
$$\text{Var}(Y) = \text{Var}(c^{\prime}X) = c^{\prime}\Sigma c$$
Distribution of $Y$
Since $X$ is multivariate normal, any linear combination of its components is also normally distributed. Therefore, $Y = c^{\prime}X$ is normally distributed with mean $c^{\prime}\mu$ and variance $c^{\prime}\Sigma c$:
$$Y \sim N(c^{\prime}\mu, c^{\prime}\Sigma c)$$
Conclusion
While $Y = c^{\prime}X$ is normally distributed, it is not a $p$-variate normal vector; it is a univariate normal random variable. Hence, the original statement is false.
Correct Statement
The correct statement should be:
"If XX is a pp-variate normal random vector, then every linear combination c^(‘)Xc^{\prime} X, where cc is a p xx1p \times 1 vector, is also normally distributed (a univariate normal random variable)."
This highlights the distinction between being normally distributed in the univariate sense and being a $p$-variate normal vector.
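A short simulation makes the distinction concrete (a minimal sketch; the values of $p$, $\mu$, $\Sigma$, and $c$ below are arbitrary illustrations). Each draw of $c^{\prime}X$ is a single scalar, and its sample mean and variance match $c^{\prime}\mu$ and $c^{\prime}\Sigma c$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice of p = 3, mean vector, covariance matrix, and c.
mu = np.array([1.0, -2.0, 0.5])
sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])
c = np.array([1.0, 2.0, -1.0])

# Draw samples of the p-variate normal X and form the scalar Y = c'X.
x = rng.multivariate_normal(mu, sigma, size=100_000)
y = x @ c

# Y is univariate: its sample moments approach c'mu and c'Sigma c.
print(y.mean(), c @ mu)        # both approx -3.5
print(y.var(), c @ sigma @ c)  # both approx 8.3
```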
(c) The trace of matrix $\begin{pmatrix} 3 & -2 \\ -2 & 6 \end{pmatrix}$ is 9.
Answer:
The trace of a matrix is defined as the sum of the elements on the main diagonal (the diagonal that runs from the top left to the bottom right) of the matrix.
Here the matrix is $A = \begin{pmatrix} 3 & -2 \\ -2 & 6 \end{pmatrix}$, and the elements on its main diagonal are 3 and 6. Therefore, the trace of $A$ is:
$$\text{tr}(A) = 3 + 6 = 9$$
Conclusion
The statement "The trace of the matrix ([3,-2],[-2,6])\begin{pmatrix} 3 & -2 \\ -2 & 6 \end{pmatrix} is 9" is true.
Justification
The trace of the matrix was calculated correctly by summing the diagonal elements:
$$3 + 6 = 9$$
Therefore, the statement is justified as true.
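The calculation is easy to verify with NumPy:

```python
import numpy as np

# The matrix from part (c).
a = np.array([[3, -2],
              [-2, 6]])

# np.trace sums the main-diagonal entries: 3 + 6.
print(np.trace(a))  # 9
```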
(d) If a matrix is positive definite then its inverse is also positive definite.
Answer:
The statement "If a matrix is positive definite then its inverse is also positive definite" is true. Here’s a detailed justification:
Positive Definite Matrix
A matrix $A$ is positive definite if it is symmetric and, for any non-zero vector $x$,
$$x^T A x > 0.$$
Inverse of a Positive Definite Matrix
To show that the inverse $A^{-1}$ of a positive definite matrix $A$ is also positive definite, we need to show that for any non-zero vector $y$,
$$y^T A^{-1} y > 0.$$
Proof
Let $A$ be a positive definite matrix. Its eigenvalues are all positive, so $A$ is invertible. For any non-zero vector $y$, set $x = A^{-1} y$; since $A^{-1}$ is invertible, $x$ is also non-zero. Because $A$ is positive definite, we have:
$$x^T A x > 0.$$
Using the substitution $x = A^{-1} y$, we get:
$$(A^{-1} y)^T A (A^{-1} y) > 0.$$
Simplifying the left-hand side using $A A^{-1} = I$:
$$y^T (A^{-1})^T A A^{-1} y = y^T (A^{-1})^T y.$$
Since $A$ is symmetric (as all positive definite matrices are, by definition), $(A^{-1})^T = (A^T)^{-1} = A^{-1}$. So the expression becomes:
$$y^T A^{-1} y > 0.$$
This shows that for any non-zero vector $y$, $y^T A^{-1} y > 0$, implying that $A^{-1}$ is positive definite.
Conclusion
The statement "If a matrix is positive definite then its inverse is also positive definite" is true. The proof is based on the property that if AA is positive definite, then x^(T)Ax > 0x^T A x > 0 for any non-zero vector xx, and by substituting x=A^(-1)yx = A^{-1} y, we showed that y^(T)A^(-1)y > 0y^T A^{-1} y > 0 for any non-zero vector yy, thereby proving that A^(-1)A^{-1} is also positive definite.
(e) If $X \sim N_2\left(\begin{pmatrix} 2 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right)$ and $Y \sim N_2\left(\begin{pmatrix} -1 \\ 3 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right)$, then ...
Answer:
$X$ is a bivariate normal random vector with mean vector $\mu_X = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$ and covariance matrix $\Sigma_X = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.
$Y$ is a bivariate normal random vector with mean vector $\mu_Y = \begin{pmatrix} -1 \\ 3 \end{pmatrix}$ and covariance matrix $\Sigma_Y = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$.
Sum of Two Independent Multivariate Normals
If $X$ and $Y$ are independent, then the sum $Z = X + Y$ is also a bivariate normal random vector. The mean vector and covariance matrix of $Z$ can be determined as follows:
Mean Vector of $Z$
The mean vector of $Z = X + Y$ is the sum of the mean vectors of $X$ and $Y$:
$$\mu_Z = \mu_X + \mu_Y = \begin{pmatrix} 2 \\ 1 \end{pmatrix} + \begin{pmatrix} -1 \\ 3 \end{pmatrix} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}$$
Covariance Matrix of $Z$
Since $X$ and $Y$ are independent, the covariance matrix of $Z$ is the sum of their covariance matrices:
$$\Sigma_Z = \Sigma_X + \Sigma_Y = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$$
Conclusion
The statement is false because the correct mean vector is $\begin{pmatrix} 1 \\ 4 \end{pmatrix}$ and the correct covariance matrix is $\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$.
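The result can also be checked by simulation (a minimal sketch; the sample size is an arbitrary choice): the empirical mean and covariance of $Z = X + Y$ approach $\begin{pmatrix} 1 \\ 4 \end{pmatrix}$ and $\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$:

```python
import numpy as np

rng = np.random.default_rng(0)

mu_x = np.array([2.0, 1.0])
mu_y = np.array([-1.0, 3.0])
identity = np.eye(2)

# Draw independent samples of X and Y, then form Z = X + Y.
x = rng.multivariate_normal(mu_x, identity, size=200_000)
y = rng.multivariate_normal(mu_y, identity, size=200_000)
z = x + y

# Empirical moments of Z: mean approx (1, 4), covariance approx 2*I.
print(z.mean(axis=0))           # approx [1. 4.]
print(np.cov(z, rowvar=False))  # approx [[2. 0.], [0. 2.]]
```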