
### Question:-01

1. State whether the following statements are True or False. Justify your answer with a short proof or a counterexample:
a) If $P$ is a transition matrix of a Markov Chain, then all the rows of $\lim_{n \to \infty} P^n$ are identical.
The statement “If $P$ is a transition matrix of a Markov Chain, then all the rows of $\lim_{n \to \infty} P^n$ are identical” is generally not true for all Markov Chains. It is true under certain conditions, such as if the Markov Chain is irreducible, aperiodic, and positive recurrent (i.e., it is ergodic).

### Counterexample:

Consider a simple Markov Chain with two states $A$ and $B$ and the following transition matrix:
$$P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
In this case, the Markov Chain is not irreducible (it consists of two disconnected states). The limit $\lim_{n \to \infty} P^n$ exists and is:
$$\lim_{n \to \infty} P^n = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
Here, the rows are not identical, contradicting the statement.

### Conditions for the Statement to be True:

For an ergodic Markov Chain, the statement is true. In such a case, the Markov Chain has a unique stationary distribution $\pi$, and:
$$\lim_{n \to \infty} P^n = \begin{pmatrix} \pi \\ \pi \\ \vdots \\ \pi \end{pmatrix}$$
Here, all rows are identical and equal to the stationary distribution $\pi$.
So, the statement is not universally true for all Markov Chains but holds under specific conditions.
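A quick numerical sketch (illustrative, not part of the proof): compare the identity matrix from the counterexample with a hypothetical ergodic two-state chain by raising both to a large power with NumPy.

```python
import numpy as np

# Reducible chain from the counterexample: states never communicate.
P_reducible = np.eye(2)

# A hypothetical ergodic chain (illustrative transition probabilities).
P_ergodic = np.array([[0.9, 0.1],
                      [0.4, 0.6]])

# Approximate lim P^n by a large matrix power.
lim_red = np.linalg.matrix_power(P_reducible, 1000)
lim_erg = np.linalg.matrix_power(P_ergodic, 1000)

print(lim_red)   # rows stay (1, 0) and (0, 1): not identical
print(lim_erg)   # both rows converge to the stationary distribution
```

For the ergodic chain, solving $\pi P = \pi$ gives $\pi = (0.8, 0.2)$, and both rows of the large power approach this vector.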

b) In a variance-covariance matrix all elements are always positive.
The statement “In a variance-covariance matrix all elements are always positive” is false.

### Counterexample:

Consider a simple dataset with two variables $X$ and $Y$, where $X = [1, 2, 3]$ and $Y = [3, 2, 1]$.
The variance-covariance matrix for this dataset would be:
$$\begin{pmatrix} \operatorname{Var}(X) & \operatorname{Cov}(X, Y) \\ \operatorname{Cov}(Y, X) & \operatorname{Var}(Y) \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$$
Here, the covariance between $X$ and $Y$ is $-1$, which is not positive. Therefore, the statement is false.

1. The diagonal elements of a variance-covariance matrix, which represent variances, are always non-negative because variance cannot be negative.
2. Off-diagonal elements, which represent covariances, can be negative, zero, or positive, depending on the relationship between the variables involved.
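The counterexample can be checked numerically; with its default sample normalization (`ddof=1`), `np.cov` reproduces the matrix above exactly for this data.

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0])
Y = np.array([3.0, 2.0, 1.0])

# Sample variance-covariance matrix of the two variables.
S = np.cov(X, Y)
print(S)   # [[ 1. -1.]
           #  [-1.  1.]]
```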

c) If $X_1, X_2, X_3$ are iid from $N_2(\mu, \Sigma)$, then $\frac{X_1 + X_2 + X_3}{3}$ follows $N_2\left(\mu, \frac{1}{3}\Sigma\right)$.
The statement “If $X_1, X_2, X_3$ are iid from $N_2(\mu, \Sigma)$, then $\frac{X_1 + X_2 + X_3}{3}$ follows $N_2\left(\mu, \frac{1}{3}\Sigma\right)$” is true.

### Justification:

1. Mean: The mean of $\frac{X_1 + X_2 + X_3}{3}$ is $\frac{\mu + \mu + \mu}{3} = \mu$.
2. Covariance Matrix: Because $X_1, X_2, X_3$ are independent, the covariance matrix of $X_1 + X_2 + X_3$ is $\Sigma + \Sigma + \Sigma = 3\Sigma$. Therefore, the covariance matrix of $\frac{X_1 + X_2 + X_3}{3}$ is $\frac{1}{3^2}(3\Sigma) = \frac{1}{3}\Sigma$.
Moreover, any linear combination of iid multivariate normal vectors is itself multivariate normal. Since the distribution is normal and both the mean and the covariance matrix match $N_2\left(\mu, \frac{1}{3}\Sigma\right)$, the statement is true.
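A Monte Carlo sanity check of this result, using an illustrative $\mu$ and $\Sigma$ (these particular values are not from the assignment): draw many replicates of $(X_1 + X_2 + X_3)/3$ and compare the empirical mean and covariance with $\mu$ and $\Sigma/3$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, 2.0])                 # illustrative mean vector
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])            # illustrative covariance matrix

# Each replicate holds three iid N_2(mu, Sigma) draws; average them.
n_rep = 200_000
X = rng.multivariate_normal(mu, Sigma, size=(n_rep, 3))
means = X.mean(axis=1)                    # shape (n_rep, 2)

print(means.mean(axis=0))                 # close to mu
print(np.cov(means.T))                    # close to Sigma / 3
```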

d) The partial correlation coefficients and multiple correlation coefficients lie between -1 and 1.
The statement “The partial correlation coefficients and multiple correlation coefficients lie between -1 and 1” is true.

### Justification:

1. Partial Correlation Coefficients: The partial correlation coefficient measures the strength and direction of the linear relationship between two variables while controlling for the effect of one or more other variables. It is computed as the ordinary correlation between the residuals obtained from linearly regressing each of the two variables on the control variables. Since it is itself an ordinary correlation coefficient, the Cauchy–Schwarz inequality constrains it to lie between -1 and 1, inclusive.
2. Multiple Correlation Coefficients: The multiple correlation coefficient $R$ is defined as the non-negative square root of the coefficient of determination $R^2$, which is the proportion of the variance in the dependent variable that is predictable from the independent variables in a multiple regression model. Since $R^2$ lies between 0 and 1, $R$ lies between 0 and 1; by definition it is never negative.
Therefore, the partial correlation coefficients lie in $[-1, 1]$ and the multiple correlation coefficients lie in $[0, 1] \subset [-1, 1]$, making the statement true.
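The residual-based definition of the partial correlation can be sketched numerically (synthetic data, purely illustrative): regress each variable on the control variable, correlate the residuals, and observe that the result, being an ordinary correlation, necessarily lies in $[-1, 1]$.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000
z = rng.normal(size=n)            # control variable
x = 2 * z + rng.normal(size=n)    # x and y are both driven by z
y = -3 * z + rng.normal(size=n)

# Residuals from regressing x and y on z (with an intercept).
Z = np.column_stack([np.ones(n), z])
rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]

# Partial correlation of x and y controlling for z: an ordinary
# correlation of residuals, so it is guaranteed to lie in [-1, 1].
r_partial = np.corrcoef(rx, ry)[0, 1]
print(r_partial)
```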

e) For a renewal function $M_t$, $\lim_{t \to 0} \frac{M_t}{t} = \frac{1}{\mu}$.
The statement “For a renewal function $M_t$, $\lim \frac{M_t}{t} = \frac{1}{\mu}$” is true when the limit is taken as $t \to \infty$ (the $t \to 0$ in the statement appears to be a typo for $t \to \infty$): this is the elementary renewal theorem.

### Justification:

A renewal function $M_t$ is defined as the expected number of renewals (or arrivals, or events) that have occurred by time $t$. Mathematically, it is defined as:
$$M_t = \mathbb{E}[N(t)]$$
where $N(t)$ is the number of renewals by time $t$.
The mean inter-arrival time (or mean time between renewals) is denoted by $\mu$ and is defined as:
$$\mu = \mathbb{E}[X]$$
where $X$ is the random variable representing the time between renewals.
Under the assumption that $\mu < \infty$, the elementary renewal theorem states:
$$\lim_{t \to \infty} \frac{M_t}{t} = \frac{1}{\mu}$$
This result relates the long-run renewal rate $M_t / t$ to the mean inter-arrival time $\mu$; finite variance of the inter-arrival distribution is not required for this form of the theorem.
So, the statement is true with the limit taken as $t \to \infty$ and under the condition $\mu < \infty$.
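A simulation sketch of this long-run behaviour, using exponential inter-arrival times as an illustrative choice (any positive distribution with finite mean would do): estimate $M_t = \mathbb{E}[N(t)]$ over many paths and compare $M_t/t$ with $1/\mu$.

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 2.0            # mean inter-arrival time (illustrative)
t = 1_000.0
n_paths = 2_000

# Draw enough inter-arrival times per path to safely pass time t,
# then count the renewals that occur at or before t.
n_draws = int(3 * t / mu)
gaps = rng.exponential(mu, size=(n_paths, n_draws))
arrivals = gaps.cumsum(axis=1)
counts = (arrivals <= t).sum(axis=1)

M_t = counts.mean()        # Monte Carlo estimate of E[N(t)]
print(M_t / t)             # close to 1/mu = 0.5 for large t
```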


### Question:-02

1. a) Let $(X, Y)$ have the joint p.d.f. given by:
$$f(x, y) = \begin{cases} 1, & \text{if } |y| < x;\; 0 < x < 1 \\ 0, & \text{otherwise} \end{cases}$$
i) Find the marginal p.d.f.’s of $X$ and $Y$.
ii) Test the independence of $X$ and $Y$.
iii) Find the conditional distribution of $X$ given $Y = y$.
iv) Compute $E(X \mid Y = y)$ and $E(Y \mid X = x)$.

### i) Marginal p.d.f.’s of $X$ and $Y$

1. Marginal p.d.f. of $X$:
$$f_X(x) = \int_{-x}^{x} 1 \, dy = 2x \quad \text{for } 0 < x < 1$$
2. Marginal p.d.f. of $Y$:
$$f_Y(y) = \int_{|y|}^{1} 1 \, dx = 1 - |y| \quad \text{for } -1 < y < 1$$

### ii) Test for Independence

Two random variables $X$ and $Y$ are independent if and only if $f(x, y) = f_X(x) \times f_Y(y)$.
Here, $f(x, y) = 1$ for $|y| < x;\; 0 < x < 1$.
Also, $f_X(x) = 2x$ and $f_Y(y) = 1 - |y|$.
Clearly, $f(x, y) \neq f_X(x) \times f_Y(y)$ on this region.
Therefore, $X$ and $Y$ are not independent.

### iii) Conditional Distribution of $X$ given $Y = y$

The conditional p.d.f. $f_{X|Y}(x|y)$ is given by:
$$f_{X|Y}(x|y) = \frac{f(x, y)}{f_Y(y)} = \frac{1}{1 - |y|} \quad \text{for } |y| < x < 1$$

### iv) Compute $E(X \mid Y = y)$ and $E(Y \mid X = x)$

1. $E(X \mid Y = y)$:
$$E(X \mid Y = y) = \int_{|y|}^{1} x \cdot \frac{1}{1 - |y|} \, dx = \frac{1 - |y|^2}{2(1 - |y|)} = \frac{1 + |y|}{2} \quad \text{for } -1 < y < 1$$
2. $E(Y \mid X = x)$:
$$E(Y \mid X = x) = \int_{-x}^{x} y \cdot \frac{1}{2x} \, dy = 0 \quad \text{for } 0 < x < 1,$$
since the integrand is an odd function of $y$ over a symmetric interval.
Thus, we have found the conditional distributions and expected values for $X$ and $Y$ given the joint p.d.f. $f(x, y)$.
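A Monte Carlo sanity check of these expectations: sample uniformly from the triangle $|y| < x < 1$ (where $f = 1$) by rejection, then compare empirical conditional means with $E(Y \mid X = x) = 0$ and $E(X \mid Y = y) = (1 + |y|)/2$, the value the integral $\int_{|y|}^{1} x/(1-|y|)\,dx$ evaluates to.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_500_000

# Rejection sampling: propose uniformly on (0,1) x (-1,1), keep |y| < x.
x = rng.uniform(0.0, 1.0, size=n)
y = rng.uniform(-1.0, 1.0, size=n)
keep = np.abs(y) < x
x, y = x[keep], y[keep]

# E(Y | X = x) = 0 by symmetry, so the overall mean of y is near 0.
print(y.mean())

# E(X | Y = y) = (1 + |y|)/2; check near |y| = 0.3, expecting about 0.65.
sel = np.abs(np.abs(y) - 0.3) < 0.01
print(x[sel].mean())
```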

b) Let the joint probability mass function of two discrete random variables $X$ and $Y$ be given as:

| $Y \backslash X$ | 2 | 3 | 4 | 5 |
| :---: | :---: | :---: | :---: | :---: |
| 0 | 0 | 0.03 | 0 | 0 |
| 1 | 0.34 | 0.30 | 0.16 | 0 |
| 2 | 0 | 0 | 0.03 | 0.14 |
i) Find the marginal distributions of $X$ and $Y$.
ii) Find the conditional distribution of $X$ given $Y = 1$.
iii) Test the independence of the variables $X$ and $Y$.
iv) Find $V[Y \mid X = x]$.

### Introduction

In this problem, we are given the joint probability mass function (pmf) of two discrete random variables $X$ and $Y$. We are tasked with:
1. Finding the marginal distributions of $X$ and $Y$.
2. Finding the conditional distribution of $X$ given $Y = 1$.
3. Testing the independence of $X$ and $Y$.
4. Finding the conditional variance $V[Y \mid X = x]$.
Let’s proceed to solve each part step-by-step.

### Part i: Marginal Distribution of $X$ and $Y$

#### Marginal Distribution of $X$

The marginal distribution of $X$ is found by summing the joint probabilities over all values of $Y$ (down each column of the table) for each value of $X$:
$$P(X = x) = \sum_{y} P(X = x, Y = y)$$
Let’s substitute the values and calculate.
For $X = 2$:
$$P(X = 2) = P(X = 2, Y = 0) + P(X = 2, Y = 1) + P(X = 2, Y = 2) = 0 + 0.34 + 0$$
For $X = 3$:
$$P(X = 3) = P(X = 3, Y = 0) + P(X = 3, Y = 1) + P(X = 3, Y = 2) = 0.03 + 0.30 + 0$$
For $X = 4$:
$$P(X = 4) = P(X = 4, Y = 0) + P(X = 4, Y = 1) + P(X = 4, Y = 2) = 0 + 0.16 + 0.03$$
For $X = 5$:
$$P(X = 5) = P(X = 5, Y = 0) + P(X = 5, Y = 1) + P(X = 5, Y = 2) = 0 + 0 + 0.14$$
After calculating, we get:
• $P(X = 2) = 0.34$
• $P(X = 3) = 0.33$
• $P(X = 4) = 0.19$
• $P(X = 5) = 0.14$

#### Marginal Distribution of $Y$

The marginal distribution of $Y$ is found by summing the joint probabilities over all values of $X$ (across each row of the table) for each value of $Y$:
$$P(Y = y) = \sum_{x} P(X = x, Y = y)$$
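These marginal sums can be verified with a short script; the array below transcribes the joint table, with rows indexing $Y = 0, 1, 2$ and columns indexing $X = 2, 3, 4, 5$.

```python
import numpy as np

# Joint pmf: rows are Y = 0, 1, 2; columns are X = 2, 3, 4, 5.
joint = np.array([
    [0.00, 0.03, 0.00, 0.00],   # Y = 0
    [0.34, 0.30, 0.16, 0.00],   # Y = 1
    [0.00, 0.00, 0.03, 0.14],   # Y = 2
])

p_x = joint.sum(axis=0)   # marginal of X: sum down each column
p_y = joint.sum(axis=1)   # marginal of Y: sum across each row

print(p_x)           # marginal of X: 0.34, 0.33, 0.19, 0.14
print(p_y)           # marginal of Y: 0.03, 0.80, 0.17
print(joint.sum())   # all probabilities sum to 1
```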