BPCC-108 Solved Assignment
- Explain factorial designs with the help of suitable examples. Describe the types of interactional effect with the help of suitable diagrams.
- Compute One Way ANOVA (Parametric statistics) for the following data:
Scores obtained on Emotional Intelligence Scale
Group 1 | 2 | 5 | 6 | 3 | 6 | 2 | 1 | 5 | 6 | 4
Group 2 | 5 | 4 | 6 | 3 | 4 | 5 | 6 | 4 | 3 | 4
Group 3 | 5 | 6 | 3 | 2 | 6 | 7 | 3 | 2 | 3 | 4
Assignment Two
Answer the following questions in about 100 words each (wherever applicable). Each question carries 5 marks.
- Explain the procedure for testing hypothesis.
- Explain Kruskal Wallis ANOVA test with a focus on its assumptions. Compare between Kruskal Wallis ANOVA test and one way ANOVA (parametric statistics)
- Compute Chi-square for the following data:
Responses
Managers | Agree | Undecided | Disagree
---|---|---|---
Junior Managers | 4 | 5 | 6
Senior Managers | 5 | 5 | 5
- Compute independent t-test for the following data:
Scores obtained on Achievement Motivation Scale
Group 1 | 3 | 3 | 2 | 4 | 6 | 7 | 5 | 4 | 2 | 3
Group 2 | 3 | 4 | 5 | 1 | 2 | 4 | 5 | 3 | 4 | 4
- Explain the concept of parametric statistics and differentiate it from nonparametric statistics.
- Explain Statistical Package for Social Sciences (SPSS) with a focus on its data entry and coding.
Expert Answer:
Question:-1
Explain factorial designs with the help of suitable examples. Describe the types of interactional effect with the help of suitable diagrams.
Answer:
Factorial Designs
A factorial design is a type of experimental design that involves manipulating two or more independent variables (factors) simultaneously to study their effect on a dependent variable. It allows researchers to study not only the individual effects of each factor but also the interaction effects—how the factors work together to influence the outcome.
In factorial designs, each factor has two or more levels, and all possible combinations of the levels of the factors are included in the study. The most basic factorial design is a 2×2 factorial design, where there are two factors, and each factor has two levels (e.g., high and low, present and absent).
Example 1: Studying the Effect of Study Time and Type of Study Material on Exam Performance
Imagine a study that examines two factors:
- Factor 1: Study Time (2 levels: 1 hour vs. 3 hours)
- Factor 2: Type of Study Material (2 levels: Notes vs. Videos)
This is a 2×2 factorial design, with 4 possible combinations:
- 1 hour of study with notes
- 1 hour of study with videos
- 3 hours of study with notes
- 3 hours of study with videos
Researchers would then measure the performance of students on an exam (the dependent variable) based on these combinations.
Main Effects
The main effect of a factor is the overall effect of that factor, ignoring the other factors.
- Main Effect of Study Time: The average difference in exam performance between students who studied for 1 hour vs. 3 hours, across both types of study material.
- Main Effect of Study Material: The average difference in exam performance between students who used notes vs. videos, across both study times.
Interaction Effects
An interaction effect occurs when the effect of one factor depends on the level of another factor. In other words, the impact of one variable on the dependent variable changes based on the presence or absence of another variable.
Types of Interaction Effects
- No Interaction: The effect of study time on performance is the same regardless of the study material (e.g., study time always improves performance by the same amount, whether students use notes or videos).
  - Diagram: Two parallel lines show the relationship between study time and performance for notes and for videos.
- Synergistic Interaction (Positive Interaction): The combined effect of the two factors is greater than the sum of their individual effects. For example, studying for 3 hours using videos might lead to a significantly higher performance boost than expected from either study time or study material alone.
  - Diagram: The lines representing the two study materials are not parallel and diverge, indicating an enhanced effect when study time and study material interact.
- Antagonistic Interaction (Negative Interaction): The effect of one factor is reduced by the presence of another factor. For example, studying for 3 hours with notes might actually result in lower performance than studying for just 1 hour with videos, suggesting that more study time with notes could lead to fatigue or confusion.
  - Diagram: The lines cross, showing that the effect of one factor (study time) is reversed depending on the other factor (study material).
Types of Factorial Designs
- 2×2 Factorial Design: Two factors, each with two levels.
  - Example: Study Time (1 hour, 3 hours) and Study Material (Notes, Videos).
- 3×3 Factorial Design: Two factors, each with three levels.
  - Example: Study Time (1 hour, 3 hours, 5 hours) and Study Material (Notes, Videos, Flashcards).
- 2×3 Factorial Design: One factor with two levels and another with three levels.
  - Example: Study Time (1 hour, 3 hours) and Study Material (Notes, Videos, Flashcards).
Conclusion
Factorial designs provide a powerful way to study the combined effects of multiple factors on a dependent variable. By examining interaction effects, researchers can gain insights that go beyond the simple main effects of each factor individually. Interaction plots, like the diagrams above, visually demonstrate whether and how factors interact.
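For readers who want to draw an interaction plot themselves, the following is a minimal Python sketch using matplotlib. The cell means are purely hypothetical (they are not data from this assignment) and only illustrate how parallel versus diverging lines would appear for the study-time × study-material example.

```python
# Minimal sketch of an interaction plot for a 2x2 factorial design.
# The cell means below are assumed values for illustration only.
import matplotlib.pyplot as plt

study_time = ["1 hour", "3 hours"]   # levels of Factor 1
notes_means = [55, 65]                # mean exam score with Notes (hypothetical)
videos_means = [60, 85]               # mean exam score with Videos (hypothetical)

plt.plot(study_time, notes_means, marker="o", label="Notes")
plt.plot(study_time, videos_means, marker="o", label="Videos")
plt.xlabel("Study Time")
plt.ylabel("Mean Exam Score")
plt.title("Interaction plot: non-parallel lines suggest an interaction")
plt.legend()
plt.show()
```

If the two lines were parallel, the plot would indicate no interaction; diverging lines suggest a synergistic pattern, and crossing lines an antagonistic one.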
Question:-2
Compute One Way ANOVA (Parametric statistics) for the following data:
Scores obtained on Emotional Intelligence Scale
Group 1 | 2 | 5 | 6 | 3 | 6 | 2 | 1 | 5 | 6 | 4
Group 2 | 5 | 4 | 6 | 3 | 4 | 5 | 6 | 4 | 3 | 4
Group 3 | 5 | 6 | 3 | 2 | 6 | 7 | 3 | 2 | 3 | 4
Answer:
Step 1: Organize the Data
We have three groups with the following scores on the Emotional Intelligence Scale:
- Group 1: 2, 5, 6, 3, 6, 2, 1, 5, 6, 4
- Group 2: 5, 4, 6, 3, 4, 5, 6, 4, 3, 4
- Group 3: 5, 6, 3, 2, 6, 7, 3, 2, 3, 4
Step 2: Define Hypotheses
- Null Hypothesis (H₀): The means of all groups are equal.
- Alternative Hypothesis (H₁): At least one group mean is different from the others.
Step 3: Calculate Group Means
We first calculate the mean of each group.
- Mean of Group 1 = $\frac{2 + 5 + 6 + 3 + 6 + 2 + 1 + 5 + 6 + 4}{10} = 4.0$
- Mean of Group 2 = $\frac{5 + 4 + 6 + 3 + 4 + 5 + 6 + 4 + 3 + 4}{10} = 4.4$
- Mean of Group 3 = $\frac{5 + 6 + 3 + 2 + 6 + 7 + 3 + 2 + 3 + 4}{10} = 4.1$
Step 4: Calculate the Overall Mean
The overall (grand) mean is the mean of all 30 scores combined: $\bar{X} = \frac{40 + 44 + 41}{30} = \frac{125}{30} \approx 4.17$. (Because the groups are of equal size, this equals the average of the three group means.)
Step 5: Compute the Sum of Squares
a. Between-Groups Sum of Squares (SSB)
$SSB = n \sum_{j} (\bar{X}_j - \bar{X})^2$, where $n = 10$ (number of scores in each group), and the group means are compared to the overall mean:
$SSB = 10\left[(4.0 - 4.17)^2 + (4.4 - 4.17)^2 + (4.1 - 4.17)^2\right] \approx 0.87$
b. Within-Groups Sum of Squares (SSW)
SSW measures the variability of scores within each group around that group's mean: $SSW = \sum_{j}\sum_{i} (X_{ij} - \bar{X}_j)^2$.
For Group 1: $\sum (X - 4.0)^2 = 32.0$
For Group 2: $\sum (X - 4.4)^2 = 10.4$
For Group 3: $\sum (X - 4.1)^2 = 28.9$
$SSW = 32.0 + 10.4 + 28.9 = 71.3$
Step 6: Compute the Degrees of Freedom
- Degrees of Freedom Between Groups (dfB): $k - 1 = 3 - 1 = 2$, where $k$ is the number of groups.
- Degrees of Freedom Within Groups (dfW): $N - k = 30 - 3 = 27$, where $N$ is the total number of observations.
Step 7: Compute the Mean Squares
- Mean Square Between (MSB): $MSB = \frac{SSB}{df_B} = \frac{0.87}{2} \approx 0.43$
- Mean Square Within (MSW): $MSW = \frac{SSW}{df_W} = \frac{71.3}{27} \approx 2.64$
Step 8: Compute the F-Statistic
The F-statistic is the ratio of MSB to MSW: $F = \frac{MSB}{MSW} = \frac{0.43}{2.64} \approx 0.16$
Step 9: Find the p-Value
Using statistical software or an F-distribution table, we find the p-value for the F-statistic. In this case, the p-value = 0.849.
Step 10: Conclusion
- Since the p-value (0.849) is much larger than the significance level (usually 0.05), we fail to reject the null hypothesis.
- Conclusion: There is no statistically significant difference in the emotional intelligence scores between the three groups.
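The hand calculation above can be cross-checked with software. Below is a minimal Python sketch, assuming SciPy is available; it is a verification aid rather than part of the prescribed answer.

```python
# Minimal sketch: verify the one-way ANOVA result for the three groups.
from scipy import stats

group1 = [2, 5, 6, 3, 6, 2, 1, 5, 6, 4]
group2 = [5, 4, 6, 3, 4, 5, 6, 4, 3, 4]
group3 = [5, 6, 3, 2, 6, 7, 3, 2, 3, 4]

# f_oneway returns the F-statistic and p-value for a one-way ANOVA.
f_stat, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # expected: F ≈ 0.16, p ≈ 0.849
```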
Question:-3
Explain the procedure for testing hypothesis.
Answer:
The procedure for testing a hypothesis typically involves several steps, forming the foundation of statistical inference in research. Here’s a breakdown of the standard hypothesis testing process:
1. State the Hypotheses
- Null Hypothesis (H₀): This is the default assumption that there is no effect or no difference. It represents the status quo or the claim to be tested. For example, "There is no significant difference between the means of two groups."
- Alternative Hypothesis (H₁ or Hₐ): This is what you want to prove, suggesting that there is an effect, a difference, or a relationship. For example, "There is a significant difference between the means of two groups."
2. Set the Significance Level (α)
- This is the threshold for deciding whether to reject the null hypothesis, usually set at 0.05 or 5%. A significance level of 0.05 means there is a 5% risk of concluding that an effect exists when there is no actual effect (false positive).
3. Choose the Appropriate Test
- The test depends on the type of data and the hypothesis. Common tests include:
- t-tests: Compare means between two groups (independent or paired).
- ANOVA: Compare means among more than two groups.
- Chi-square test: For categorical data to test relationships.
- Regression analysis: For testing relationships between variables.
- Other tests include z-tests, non-parametric tests, etc., depending on the data distribution and sample size.
4. Calculate the Test Statistic
- Perform the test based on your data. Each test will generate a test statistic (e.g., t-value, z-value, F-value) that measures how far the sample data deviates from the null hypothesis.
5. Find the p-value
- The p-value indicates the probability of obtaining the observed results if the null hypothesis were true. A small p-value (usually less than 0.05) suggests that the observed data is unlikely under the null hypothesis, leading to its rejection.
6. Make a Decision
- Compare the p-value to the significance level (α):
- If p-value ≤ α, reject the null hypothesis. This means there is enough evidence to support the alternative hypothesis.
- If p-value > α, fail to reject the null hypothesis. This means there isn’t enough evidence to support the alternative hypothesis.
7. Draw Conclusions
- Based on the decision, you either accept the evidence supporting the alternative hypothesis or conclude that the data does not provide sufficient evidence to reject the null hypothesis.
8. Report the Results
- Clearly report the hypothesis, test results, p-value, confidence intervals, and your conclusion. Also, discuss the implications of these results in the context of the research.
This procedure ensures that your results are statistically sound and the likelihood of errors is minimized when interpreting the data.
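To make the procedure concrete, here is a minimal Python sketch that walks through the same steps with a one-sample t-test; the sample values and the hypothesized mean (mu0 = 50) are purely illustrative and assume SciPy is installed.

```python
# Minimal sketch of the hypothesis-testing procedure using a one-sample t-test.
from scipy import stats

sample = [52, 48, 51, 55, 49, 53, 50, 54]  # illustrative data (assumed values)
mu0 = 50                                   # Step 1: H0 says the population mean is 50
alpha = 0.05                               # Step 2: significance level

# Steps 3-5: choose the test, compute the test statistic and p-value.
t_stat, p_value = stats.ttest_1samp(sample, mu0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Step 6: decision rule.
if p_value <= alpha:
    print("Reject H0: evidence of a difference from the hypothesized mean.")
else:
    print("Fail to reject H0: insufficient evidence of a difference.")
```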
Question:-4
Explain Kruskal Wallis ANOVA test with a focus on its assumptions. Compare between Kruskal Wallis ANOVA test and one way ANOVA (parametric statistics).
Answer:
The Kruskal-Wallis ANOVA is a non-parametric statistical test used to compare more than two groups when the assumption of normality (required by parametric tests like one-way ANOVA) is violated. It’s based on ranks rather than the actual data values, which makes it suitable for ordinal data or when the data do not meet the assumptions of a normal distribution or equal variances.
Key Points of Kruskal-Wallis ANOVA:
- Purpose: It tests whether there are statistically significant differences between the medians of three or more independent groups.
- Test Statistic: The test is based on the rank sums of the data rather than the actual values.
- Hypotheses:
  - Null hypothesis ($H_0$): The population medians of all groups are equal.
  - Alternative hypothesis ($H_1$): At least one group has a different median.
Assumptions of Kruskal-Wallis ANOVA:
- Independent Observations: The groups are independent of each other, and the observations within each group are also independent.
- Ordinal or Continuous Data: The test can handle data that is ordinal or continuous, but it doesn’t require the data to follow a normal distribution.
- Similar Distribution Shapes: The distributions of the groups should have the same shape. The Kruskal-Wallis test only compares medians, so if the shape of the distributions differs, the test might lead to incorrect conclusions.
Kruskal-Wallis ANOVA Test Process:
- Rank all data points together, regardless of their group.
- Sum the ranks for each group.
- Use these ranks to calculate the Kruskal-Wallis test statistic ($H$).
- Compare the test statistic to a chi-squared distribution to determine significance.
Kruskal-Wallis ANOVA vs. One-Way ANOVA:
Characteristic | Kruskal-Wallis ANOVA | One-Way ANOVA |
---|---|---|
Parametric vs Non-Parametric | Non-parametric, does not assume normality. | Parametric, assumes normality and equal variances (homogeneity). |
Data Type | Ordinal or continuous data; ranks are used. | Continuous data (interval or ratio); actual values are used. |
Assumptions about Distributions | Does not assume normal distribution; requires similar distribution shapes. | Assumes data is normally distributed within groups. |
Variances | Less sensitive to unequal variances between groups. | Sensitive to unequal variances (homoscedasticity required). |
Measurement | Compares medians. | Compares means. |
Power of the Test | Generally less powerful than ANOVA when assumptions of ANOVA are met. | More powerful if the assumptions of normality and equal variances hold. |
Data Transformation | Not necessary since it operates on ranks. | Often requires transformation if assumptions are violated. |
When to Use Each:
- Kruskal-Wallis ANOVA is preferred when:
  - The data is not normally distributed.
  - You have ordinal data.
  - The variances between groups are unequal.
- One-Way ANOVA is preferred when:
  - The data is normally distributed.
  - The groups have similar variances.
  - The data is continuous and measured on an interval or ratio scale.
In summary, Kruskal-Wallis ANOVA provides a robust alternative when the assumptions of one-way ANOVA are violated, particularly when dealing with non-normal or ordinal data.
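As an illustration of the comparison, the following is a minimal Python sketch (assuming SciPy is available) that runs both tests on the emotional-intelligence data from Question 2; it is an example, not part of the required answer.

```python
# Minimal sketch: Kruskal-Wallis (rank-based) vs one-way ANOVA (parametric) on the same data.
from scipy import stats

group1 = [2, 5, 6, 3, 6, 2, 1, 5, 6, 4]
group2 = [5, 4, 6, 3, 4, 5, 6, 4, 3, 4]
group3 = [5, 6, 3, 2, 6, 7, 3, 2, 3, 4]

h_stat, p_kw = stats.kruskal(group1, group2, group3)      # non-parametric, uses ranks
f_stat, p_anova = stats.f_oneway(group1, group2, group3)  # parametric, uses raw scores
print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {p_kw:.3f}")
print(f"One-way ANOVA:  F = {f_stat:.3f}, p = {p_anova:.3f}")
```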
Question:-5
Compute Chi-square for the following data:
Responses
Managers | Agree | Undecided | Disagree
---|---|---|---
Junior Managers | 4 | 5 | 6
Senior Managers | 5 | 5 | 5
Answer:
Chi-Square Test of Independence – Detailed Steps
We are performing a Chi-square test of independence for the data provided, which compares the responses from Junior and Senior Managers regarding their agreement, disagreement, or neutrality.
Step 1: Data Observation
The observed data is:
Managers | Agree | Undecided | Disagree
---|---|---|---
Junior Managers | 4 | 5 | 6
Senior Managers | 5 | 5 | 5
We will test whether there is an association between the type of manager (Junior vs. Senior) and their responses (Agree, Undecided, Disagree).
Step 2: Hypotheses
- Null Hypothesis (H₀): There is no association between the type of manager and their response (i.e., the responses are independent of the manager level).
- Alternative Hypothesis (H₁): There is an association between the type of manager and their response (i.e., the responses are not independent).
Step 3: Calculate Row and Column Totals
First, we calculate the row and column totals:
Managers | Agree | Undecided | Disagree | Row Total
---|---|---|---|---
Junior Managers | 4 | 5 | 6 | 15
Senior Managers | 5 | 5 | 5 | 15
Column Total | 9 | 10 | 11 | 30
Step 4: Calculate Expected Values
To calculate the expected frequencies, use the formula:
$E = \frac{\text{Row Total} \times \text{Column Total}}{\text{Grand Total}}$
- For Junior Managers (Agree): $E = \frac{15 \times 9}{30} = 4.5$
- For Junior Managers (Undecided): $E = \frac{15 \times 10}{30} = 5.0$
- For Junior Managers (Disagree): $E = \frac{15 \times 11}{30} = 5.5$
- For Senior Managers (Agree): $E = \frac{15 \times 9}{30} = 4.5$
- For Senior Managers (Undecided): $E = \frac{15 \times 10}{30} = 5.0$
- For Senior Managers (Disagree): $E = \frac{15 \times 11}{30} = 5.5$
The expected frequency table is:
Managers | Agree | Undecided | Disagree
---|---|---|---
Junior Managers | 4.5 | 5.0 | 5.5
Senior Managers | 4.5 | 5.0 | 5.5
Step 5: Calculate the Chi-Square Statistic
The Chi-square statistic is calculated using the formula:
$\chi^2 = \sum \frac{(O - E)^2}{E}$
Where:
- $O$ is the observed frequency.
- $E$ is the expected frequency.
For each cell:
- Junior Managers (Agree): $\frac{(4 - 4.5)^2}{4.5} = \frac{0.25}{4.5} \approx 0.0556$
- Junior Managers (Undecided): $\frac{(5 - 5.0)^2}{5.0} = 0$
- Junior Managers (Disagree): $\frac{(6 - 5.5)^2}{5.5} = \frac{0.25}{5.5} \approx 0.0455$
- Senior Managers (Agree): $\frac{(5 - 4.5)^2}{4.5} = \frac{0.25}{4.5} \approx 0.0556$
- Senior Managers (Undecided): $\frac{(5 - 5.0)^2}{5.0} = 0$
- Senior Managers (Disagree): $\frac{(5 - 5.5)^2}{5.5} = \frac{0.25}{5.5} \approx 0.0455$
Now, summing all these values gives the Chi-square statistic:
$\chi^2 = 0.0556 + 0 + 0.0455 + 0.0556 + 0 + 0.0455 \approx 0.202$
Step 6: Degrees of Freedom
The degrees of freedom (df) for a Chi-square test are calculated as $df = (r - 1)(c - 1)$, where $r$ is the number of rows and $c$ the number of columns.
Here, there are 2 rows and 3 columns, so $df = (2 - 1)(3 - 1) = 2$.
Step 7: P-value and Conclusion
Using the Chi-square statistic ($\chi^2 = 0.202$) and 2 degrees of freedom, we look up the p-value (or use a statistical calculator). The p-value is approximately 0.904.
- If the p-value is less than the significance level (typically 0.05), we reject the null hypothesis.
- Since the p-value (0.904) is much larger than 0.05, we fail to reject the null hypothesis.
Conclusion:
There is no significant association between the type of manager and their response. The responses are independent of the manager level.
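The result can be checked with software. Below is a minimal Python sketch, assuming SciPy is available; it is a verification aid only.

```python
# Minimal sketch: verify the chi-square test of independence for the 2x3 table.
from scipy.stats import chi2_contingency

observed = [[4, 5, 6],   # Junior Managers: Agree, Undecided, Disagree
            [5, 5, 5]]   # Senior Managers: Agree, Undecided, Disagree

# correction=False disables Yates' continuity correction, which in any case
# applies only to 2x2 tables (df = 1).
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.3f}")  # expected: 0.202, 2, 0.904
print("Expected frequencies:", expected)
```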
Question:-6
Compute independent t-test for the following data:
Scores obtained on Achievement Motivation Scale
Group 1 | 3 | 3 | 2 | 4 | 6 | 7 | 5 | 4 | 2 | 3
Group 2 | 3 | 4 | 5 | 1 | 2 | 4 | 5 | 3 | 4 | 4
Answer:
To perform an independent t-test, we follow these steps:
1. State the Hypotheses:
- Null Hypothesis ($H_0$): There is no significant difference between the means of Group 1 and Group 2.
- Alternative Hypothesis ($H_1$): There is a significant difference between the means of Group 1 and Group 2.
2. Data Overview:
The two sets of data represent scores obtained on the Achievement Motivation Scale.
- Group 1: 3, 3, 2, 4, 6, 7, 5, 4, 2, 3
- Group 2: 3, 4, 5, 1, 2, 4, 5, 3, 4, 4
3. Assumptions:
Before performing a t-test, we assume:
- Independence of samples: The two groups are independent.
- Normality: The data in each group is approximately normally distributed.
- Equal Variances: The variances of the two groups are equal (which we assume here for the independent t-test).
4. Formula for the Independent t-Test:
The t-statistic is calculated as:
$t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}$
Where:
- $\bar{X}_1$ and $\bar{X}_2$ are the means of Group 1 and Group 2.
- $s_1^2$ and $s_2^2$ are the sample variances of Group 1 and Group 2.
- $n_1$ and $n_2$ are the sample sizes for Group 1 and Group 2.
5. Step-by-Step Calculation:
a) Calculate the Means:
$\bar{X}_1 = \frac{3+3+2+4+6+7+5+4+2+3}{10} = 3.9$, $\bar{X}_2 = \frac{3+4+5+1+2+4+5+3+4+4}{10} = 3.5$
b) Calculate the Variances (using $n - 1$ in the denominator):
For Group 1: $s_1^2 = \frac{\sum (X - 3.9)^2}{9} = \frac{24.9}{9} \approx 2.77$
For Group 2: $s_2^2 = \frac{\sum (X - 3.5)^2}{9} = \frac{14.5}{9} \approx 1.61$
c) Calculate the Standard Error:
$SE = \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}} = \sqrt{\frac{2.77}{10} + \frac{1.61}{10}} = \sqrt{0.438} \approx 0.66$
d) Calculate the t-statistic:
$t = \frac{3.9 - 3.5}{0.66} \approx 0.60$
6. Degrees of Freedom (df):
The degrees of freedom for an independent t-test are calculated as $df = n_1 + n_2 - 2 = 10 + 10 - 2 = 18$.
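If software is available, the hand calculation can be cross-checked. Below is a minimal Python sketch, assuming SciPy is installed; it is a verification aid, not part of the prescribed answer.

```python
# Minimal sketch: verify the independent t-test for the two groups.
from scipy import stats

group1 = [3, 3, 2, 4, 6, 7, 5, 4, 2, 3]
group2 = [3, 4, 5, 1, 2, 4, 5, 3, 4, 4]

# equal_var=True gives the pooled-variance (Student's) t-test assumed in the answer.
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # expected: t ≈ 0.60 with df = 18
```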