MMT-008 Solved Assignment 2023

Question:-01

  1. State whether the following statements are True or False. Justify your answer with a short proof or a counter example:
    a) If $P$ is a transition matrix of a Markov Chain, then all the rows of $\lim_{n \rightarrow \infty} P^n$ are identical.
Answer:
The statement “If $P$ is a transition matrix of a Markov Chain, then all the rows of $\lim_{n \rightarrow \infty} P^n$ are identical” is not true for all Markov Chains. It holds under certain conditions, for example when the Markov Chain is irreducible, aperiodic, and positive recurrent (i.e., ergodic).

Counterexample:

Consider a simple Markov Chain with two states $A$ and $B$ and the following transition matrix:
$$P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
In this case, the Markov Chain is not irreducible (it consists of two disconnected absorbing states). The limit $\lim_{n \rightarrow \infty} P^n$ exists and is:
$$\lim_{n \rightarrow \infty} P^n = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
Here, the rows are not identical, contradicting the statement.

Conditions for the Statement to be True:

For an ergodic Markov Chain, the statement is true. In such a case, the Markov Chain has a unique stationary distribution $\pi$, and:
$$\lim_{n \rightarrow \infty} P^n = \begin{pmatrix} \pi \\ \pi \\ \vdots \\ \pi \end{pmatrix}$$
Here, all rows are identical and equal to the stationary distribution $\pi$.
So, the statement is not universally true for all Markov Chains but holds under specific conditions.
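As a quick numerical check (a sketch using NumPy; the ergodic matrix below is an illustrative choice, not part of the assignment), we can raise both matrices to a large power and compare rows:

```python
import numpy as np

def limit_power(P, n=1000):
    """Approximate lim P^n with a large matrix power."""
    return np.linalg.matrix_power(P, n)

# Reducible counterexample: two absorbing states, rows never coincide.
P_reducible = np.eye(2)

# An ergodic (irreducible, aperiodic) chain: rows converge to the
# stationary distribution pi = (0.8, 0.2).
P_ergodic = np.array([[0.9, 0.1],
                      [0.4, 0.6]])

print(limit_power(P_reducible))  # stays the identity: rows differ
print(limit_power(P_ergodic))    # both rows approach [0.8, 0.2]
```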

b) In a variance-covariance matrix all elements are always positive.
Answer:
The statement “In a variance-covariance matrix all elements are always positive” is false.

Counterexample:

Consider a simple dataset with two variables $X$ and $Y$, where $X = [1, 2, 3]$ and $Y = [3, 2, 1]$.
The variance-covariance matrix for this dataset would be:
$$\begin{pmatrix} \text{Var}(X) & \text{Cov}(X, Y) \\ \text{Cov}(Y, X) & \text{Var}(Y) \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$$
Here, the covariance between $X$ and $Y$ is $-1$, which is not positive. Therefore, the statement is false.

Additional Notes:

  1. The diagonal elements of a variance-covariance matrix, which represent variances, are always non-negative because variance cannot be negative.
  2. Off-diagonal elements, which represent covariances, can be negative, zero, or positive, depending on the relationship between the variables involved.
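The counterexample can be verified with NumPy's `np.cov`, which returns the sample variance-covariance matrix (rows are treated as variables, `ddof=1` by default):

```python
import numpy as np

X = np.array([1, 2, 3])
Y = np.array([3, 2, 1])

S = np.cov(X, Y)  # 2x2 sample variance-covariance matrix
print(S)          # diagonal: Var(X), Var(Y); off-diagonal: Cov(X, Y) = -1
```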

c) If $X_1, X_2, X_3$ are iid from $N_2(\mu, \Sigma)$, then $\frac{X_1+X_2+X_3}{3}$ follows $N_2\left(\mu, \frac{1}{3}\Sigma\right)$.
Answer:
The statement “If $X_1, X_2, X_3$ are iid from $N_2(\mu, \Sigma)$, then $\frac{X_1+X_2+X_3}{3}$ follows $N_2\left(\mu, \frac{1}{3}\Sigma\right)$” is true.

Justification:

  1. Mean: The mean of $\frac{X_1+X_2+X_3}{3}$ is $\frac{\mu + \mu + \mu}{3} = \mu$, by linearity of expectation.
  2. Covariance Matrix: Because $X_1, X_2, X_3$ are independent, the covariance matrix of $X_1+X_2+X_3$ is $\Sigma + \Sigma + \Sigma = 3\Sigma$. Therefore, the covariance matrix of $\frac{X_1+X_2+X_3}{3}$ is $\frac{1}{3^2}(3\Sigma) = \frac{1}{3}\Sigma$.
Moreover, any linear combination of independent multivariate normal vectors is itself multivariate normal, so the average is normally distributed. Since both the mean and the covariance matrix match the given distribution $N_2\left(\mu, \frac{1}{3}\Sigma\right)$, the statement is true.
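A Monte Carlo sketch (with an illustrative $\mu$ and $\Sigma$ of my choosing) confirms the mean and covariance of the three-sample average:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# Draw 100,000 independent triples from N_2(mu, Sigma) and average each triple.
samples = rng.multivariate_normal(mu, Sigma, size=(100_000, 3))
means = samples.mean(axis=1)  # (X1 + X2 + X3) / 3 for each triple

print(means.mean(axis=0))  # close to mu
print(np.cov(means.T))     # close to Sigma / 3
```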

d) The partial correlation coefficients and multiple correlation coefficients lie between -1 and 1.
Answer:
The statement “The partial correlation coefficients and multiple correlation coefficients lie between -1 and 1” is true.

Justification:

  1. Partial Correlation Coefficients: The partial correlation coefficient measures the strength and direction of the relationship between two variables while controlling for the effect of one or more other variables. It is computed as the ordinary correlation between the residuals obtained by linearly regressing each of the two variables on the control variables. Since it is itself a correlation coefficient, the Cauchy–Schwarz inequality constrains it to lie between -1 and 1, inclusive.
  2. Multiple Correlation Coefficients: The multiple correlation coefficient $R$ is defined as the non-negative square root of the coefficient of determination $R^2$, which is the proportion of the variance in the dependent variable that is predictable from the independent variables in a multiple regression model. Since $R^2$ lies between 0 and 1, $R$ lies between 0 and 1, and hence trivially within $[-1, 1]$.
Therefore, both the partial correlation coefficients and multiple correlation coefficients are bounded between -1 and 1, making the statement true.
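The residual-based definition of the partial correlation can be sketched in NumPy (the data below are simulated purely for illustration); because the result is an ordinary correlation of residuals, it automatically lands in $[-1, 1]$:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z."""
    Z = np.column_stack([np.ones_like(z), z])          # design matrix with intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residuals of x on z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residuals of y on z
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
z = rng.normal(size=500)
x = 2.0 * z + rng.normal(size=500)   # x and y are related only through z
y = -3.0 * z + rng.normal(size=500)

print(np.corrcoef(x, y)[0, 1])       # strongly negative raw correlation
print(partial_corr(x, y, z))         # near 0 once z is controlled for
```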

e) For a renewal function $M_t$, $\lim_{t \rightarrow \infty} \frac{M_t}{t} = \frac{1}{\mu}$.
Answer:
The statement “For a renewal function $M_t$, $\lim \frac{M_t}{t} = \frac{1}{\mu}$” is true under the elementary renewal theorem, where the limit is taken as $t \rightarrow \infty$; it requires only that $\mu < \infty$.

Justification:

A renewal function $M_t$ is defined as the expected number of renewals (arrivals, or events) that have occurred by time $t$:
$$M_t = \mathbb{E}[N(t)]$$
where $N(t)$ is the number of renewals by time $t$.
The mean inter-arrival time (mean time between renewals) is denoted by $\mu$:
$$\mu = \mathbb{E}[X]$$
where $X$ is the random variable representing the time between successive renewals.
Under the assumption that $0 < \mu < \infty$, the elementary renewal theorem states:
$$\lim_{t \rightarrow \infty} \frac{M_t}{t} = \frac{1}{\mu}$$
This result relates the long-run renewal rate to the mean inter-arrival time $\mu$; notably, it does not require the inter-arrival distribution to have a finite variance.
So, the statement is true (with the limit taken as $t \rightarrow \infty$) provided $\mu < \infty$.
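The elementary renewal theorem can be illustrated by simulation (a sketch with exponential inter-arrival times of mean $\mu = 2$, an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, t_max, n_paths = 2.0, 500.0, 1000

# Estimate M(t) = E[N(t)] by averaging renewal counts over many sample paths.
counts = []
for _ in range(n_paths):
    total, n = 0.0, 0
    while total < t_max:
        total += rng.exponential(mu)  # inter-arrival time with mean mu
        n += 1
    counts.append(n - 1)              # renewals completed strictly before t_max

M_t = np.mean(counts)
print(M_t / t_max)                    # approaches 1/mu = 0.5 for large t
```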


Question:-02

  1. a) Let $(X, Y)$ have the joint p.d.f. given by:
$$f(x, y) = \begin{cases} 1, & \text{if } |y| < x;\ 0 < x < 1 \\ 0, & \text{otherwise} \end{cases}$$
i) Find the marginal p.d.f.’s of $X$ and $Y$.
ii) Test the independence of $X$ and $Y$.
iii) Find the conditional distribution of $X$ given $Y = y$.
iv) Compute $E(X \mid Y = y)$ and $E(Y \mid X = x)$.
Answer:

i) Marginal p.d.f.’s of $X$ and $Y$

  1. Marginal p.d.f. of $X$:
$$f_X(x) = \int_{-x}^{x} 1 \, dy = 2x \quad \text{for } 0 < x < 1$$
  2. Marginal p.d.f. of $Y$:
$$f_Y(y) = \int_{|y|}^{1} 1 \, dx = 1 - |y| \quad \text{for } -1 < y < 1$$

ii) Test for Independence

Two random variables $X$ and $Y$ are independent if and only if $f(x, y) = f_X(x) \, f_Y(y)$ for all $(x, y)$.
Here, $f(x, y) = 1$ for $|y| < x;\ 0 < x < 1$, while $f_X(x) = 2x$ and $f_Y(y) = 1 - |y|$.
Clearly, $f(x, y) \neq f_X(x) \, f_Y(y)$ on this region.
Therefore, $X$ and $Y$ are not independent.

iii) Conditional Distribution of $X$ given $Y = y$

The conditional p.d.f. $f_{X|Y}(x|y)$ is given by:
$$f_{X|Y}(x|y) = \frac{f(x, y)}{f_Y(y)} = \frac{1}{1 - |y|} \quad \text{for } |y| < x < 1$$

iv) Compute $E(X \mid Y = y)$ and $E(Y \mid X = x)$

  1. $E(X \mid Y = y)$:
$$E(X \mid Y = y) = \int_{|y|}^{1} x \cdot \frac{1}{1 - |y|} \, dx$$
  2. $E(Y \mid X = x)$:
$$E(Y \mid X = x) = \int_{-x}^{x} y \cdot \frac{1}{2x} \, dy$$
Evaluating these integrals:
  1. $E(X \mid Y = y)$:
$$E(X \mid Y = y) = \frac{1}{1 - |y|} \cdot \frac{1 - |y|^2}{2} = \frac{1 + |y|}{2} \quad \text{for } -1 < y < 1$$
  2. $E(Y \mid X = x)$:
$$E(Y \mid X = x) = 0 \quad \text{for } 0 < x < 1,$$
since the integrand is odd in $y$ over the symmetric interval $(-x, x)$.
Thus, we have found the marginal and conditional distributions and the conditional expectations for $X$ and $Y$ given the joint p.d.f. $f(x, y)$.
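The marginals and conditional expectations above can be checked symbolically with SymPy (treating the $y \ge 0$ branch; the $y < 0$ case follows by the symmetry of the region in $y$):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.symbols('y', nonnegative=True)

fX = sp.integrate(1, (y, -x, x))                    # marginal of X: 2x
fY = sp.integrate(1, (x, y, 1))                     # marginal of Y (y >= 0): 1 - y
EX_given_y = sp.integrate(x / (1 - y), (x, y, 1))   # E[X | Y = y] for y >= 0
EY_given_x = sp.integrate(y / (2 * x), (y, -x, x))  # E[Y | X = x]

print(fX)                       # marginal density of X
print(fY)                       # marginal density of Y on [0, 1)
print(sp.simplify(EX_given_y))  # simplifies to (1 + y)/2
print(EY_given_x)               # 0 by symmetry
```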

b) Let the joint probability distribution of two discrete random variables $X$ and $Y$ be given as:

|       | $X=2$ | $X=3$ | $X=4$ | $X=5$ |
| :---: | :---: | :---: | :---: | :---: |
| $Y=0$ | 0     | 0.03  | 0     | 0     |
| $Y=1$ | 0.34  | 0.30  | 0.16  | 0     |
| $Y=2$ | 0     | 0     | 0.03  | 0.14  |
i) Find the marginal distribution of $X$ and $Y$.
ii) Find the conditional distribution of $X$ given $Y = 1$.
iii) Test the independence of variables $X$ and $Y$.
iv) Find $V[Y \mid X = x]$.
Answer:

Introduction

In this problem, we are given the joint probability distribution of two discrete random variables $X$ and $Y$. We are tasked with:
  1. Finding the marginal distribution of $X$ and $Y$.
  2. Finding the conditional distribution of $X$ given $Y = 1$.
  3. Testing the independence of $X$ and $Y$.
  4. Finding the conditional variance $V[Y \mid X = x]$.
Let’s proceed to solve each part step-by-step.

Part i: Marginal Distribution of $X$ and $Y$

Marginal Distribution of $X$

The marginal distribution of $X$ is found by summing the joint probabilities over all values of $Y$ (i.e., down each column of the table) for each value of $X$:
$$P(X=x) = \sum_{y} P(X=x, Y=y)$$
Let’s substitute the values and calculate.
For $X = 2$:
$$P(X=2) = P(X=2, Y=0) + P(X=2, Y=1) + P(X=2, Y=2) = 0 + 0.34 + 0 = 0.34$$
For $X = 3$:
$$P(X=3) = P(X=3, Y=0) + P(X=3, Y=1) + P(X=3, Y=2) = 0.03 + 0.30 + 0 = 0.33$$
For $X = 4$:
$$P(X=4) = P(X=4, Y=0) + P(X=4, Y=1) + P(X=4, Y=2) = 0 + 0.16 + 0.03 = 0.19$$
For $X = 5$:
$$P(X=5) = P(X=5, Y=0) + P(X=5, Y=1) + P(X=5, Y=2) = 0 + 0 + 0.14 = 0.14$$
After calculating, we get:
  • $P(X=2) = 0.34$
  • $P(X=3) = 0.33$
  • $P(X=4) = 0.19$
  • $P(X=5) = 0.14$
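Both marginals can also be read off programmatically by storing the joint table as a NumPy array (rows indexed by $Y = 0, 1, 2$, columns by $X = 2, 3, 4, 5$):

```python
import numpy as np

joint = np.array([[0.00, 0.03, 0.00, 0.00],   # Y = 0
                  [0.34, 0.30, 0.16, 0.00],   # Y = 1
                  [0.00, 0.00, 0.03, 0.14]])  # Y = 2

pX = joint.sum(axis=0)  # sum down columns: marginal of X over 2, 3, 4, 5
pY = joint.sum(axis=1)  # sum along rows: marginal of Y over 0, 1, 2

print(pX)  # approximately [0.34, 0.33, 0.19, 0.14]
print(pY)  # approximately [0.03, 0.80, 0.17]
```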

Marginal Distribution of $Y$

The marginal distribution of $Y$ is found by summing the joint probabilities over all values of $X$ (i.e., along each row of the table) for each value of $Y$:
$$P(Y=y) = \sum_{x} P(X=x, Y=y)$$