1. What is the difference between descriptive and inferential statistics? Give an example of each.
- Descriptive statistics summarize the characteristics of a data set, using measures such as the mean, median, mode, standard deviation, and range. Inferential statistics use sample data to draw conclusions or predictions about a population or a parameter, through tools such as confidence intervals, hypothesis tests, and regression. For example, descriptive statistics can tell us the average height of students in a class, while inferential statistics can tell us how likely it is that the average height of students in the whole school falls within a certain range (see the first sketch after these questions).

2. What are the assumptions of linear regression? How can you check them?
- Linear regression assumes that the relationship between the dependent variable and the independent variables is linear, that the residuals are normally distributed, that the residuals have constant variance (homoscedasticity), that there is no multicollinearity among the independent variables, and that the residuals are not autocorrelated. These assumptions can be checked with scatterplots, histograms, Q-Q plots, residual-versus-fitted plots, the variance inflation factor (VIF), and the Durbin-Watson test (see the second sketch below).

3. What is the difference between parametric and nonparametric tests? Give an example of each.
- Parametric tests assume that the data follow a particular distribution, such as the normal, binomial, or Poisson. Nonparametric tests make no such distributional assumption. Parametric tests are usually more powerful and precise when their assumptions hold, but they require more stringent conditions; nonparametric tests are more robust and flexible, but may lose some information or efficiency. For example, the t-test and ANOVA are parametric tests that compare group means assuming normality and homogeneity of variance, while the Mann-Whitney U test and the Kruskal-Wallis test are rank-based nonparametric alternatives that compare groups (often summarized by their medians) without assuming normality (see the third sketch below).

4. What is the difference between Type I and Type II errors? How can you control them?
- A Type I error is rejecting a true null hypothesis (false positive); a Type II error is failing to reject a false null hypothesis (false negative). The significance level (alpha) is the maximum allowable Type I error rate, and the power (1 - beta) is the probability of correctly rejecting a false null hypothesis, so beta is the Type II error rate we want to keep small. To control Type I error, we can lower the alpha level or apply multiple-testing corrections such as the Bonferroni or Holm methods. To control Type II error, we can increase the sample size, reduce measurement variability, design the study around a larger minimum detectable effect, or accept a larger alpha level (which trades off against Type I error). A sketch of both adjustments follows the questions.
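
First sketch: a minimal illustration of descriptive versus inferential statistics in Python with NumPy and SciPy. The class size, heights, and confidence level are hypothetical, chosen only to mirror the height example above.

```python
import numpy as np
from scipy import stats

# Hypothetical sample: heights (cm) of 30 students in one class
rng = np.random.default_rng(42)
heights = rng.normal(loc=170, scale=8, size=30)

# Descriptive statistics: summarize this sample only
print(f"mean = {heights.mean():.1f} cm, median = {np.median(heights):.1f} cm, "
      f"std = {heights.std(ddof=1):.1f} cm, range = {np.ptp(heights):.1f} cm")

# Inferential statistics: a 95% confidence interval for the mean height of the
# whole school, treating the class as a random sample from that population
ci = stats.t.interval(0.95, df=len(heights) - 1,
                      loc=heights.mean(), scale=stats.sem(heights))
print(f"95% CI for the population mean: ({ci[0]:.1f}, {ci[1]:.1f}) cm")
```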
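Second sketch: one way to check the regression assumptions listed in question 2, using statsmodels and SciPy on simulated data. The predictors, coefficients, and thresholds are assumptions made for illustration; in practice you would also look at the corresponding plots.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

# Simulated data: two mildly correlated predictors and a roughly linear response
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 2 + 1.5 * x1 - 0.8 * x2 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([x1, x2]))
model = sm.OLS(y, X).fit()
resid = model.resid

# Normality of residuals (also worth inspecting a histogram or Q-Q plot)
print("Shapiro-Wilk p-value:", stats.shapiro(resid).pvalue)

# Homoscedasticity: Breusch-Pagan test (or plot residuals vs. fitted values)
_, bp_pvalue, _, _ = het_breuschpagan(resid, X)
print("Breusch-Pagan p-value:", bp_pvalue)

# Multicollinearity: VIF per predictor (values above roughly 5-10 are a concern)
for i, name in enumerate(["const", "x1", "x2"]):
    print(name, "VIF =", round(variance_inflation_factor(X, i), 2))

# Autocorrelation of residuals (values near 2 suggest little autocorrelation)
print("Durbin-Watson:", durbin_watson(resid))
```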
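Third sketch: the parametric tests from question 3 next to their nonparametric counterparts, using SciPy. The group means, spreads, and sample sizes are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical scores for two groups
rng = np.random.default_rng(1)
group_a = rng.normal(loc=50, scale=10, size=40)
group_b = rng.normal(loc=55, scale=10, size=40)

# Parametric: independent-samples t-test (assumes normality; equal_var=False
# applies Welch's correction, relaxing the equal-variance assumption)
t_stat, t_p = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t-test:       t = {t_stat:.2f}, p = {t_p:.4f}")

# Nonparametric: Mann-Whitney U test (rank-based, no normality assumption)
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"Mann-Whitney: U = {u_stat:.1f}, p = {u_p:.4f}")

# With three or more groups: ANOVA (parametric) vs. Kruskal-Wallis (nonparametric)
group_c = rng.normal(loc=52, scale=10, size=40)
print("ANOVA p =", stats.f_oneway(group_a, group_b, group_c).pvalue)
print("Kruskal-Wallis p =", stats.kruskal(group_a, group_b, group_c).pvalue)
```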
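Fourth sketch: controlling the two error types from question 4 with statsmodels. The p-values, effect size, and power target are hypothetical; Bonferroni and Holm corrections limit the family-wise Type I error rate, while the power calculation picks a sample size that keeps the Type II error rate (beta) at 20%.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests
from statsmodels.stats.power import TTestIndPower

# Controlling Type I error across multiple tests: Bonferroni and Holm corrections
pvalues = np.array([0.001, 0.012, 0.030, 0.045, 0.200])
reject_bonf, _, _, _ = multipletests(pvalues, alpha=0.05, method="bonferroni")
reject_holm, _, _, _ = multipletests(pvalues, alpha=0.05, method="holm")
print("Bonferroni rejections:", reject_bonf)
print("Holm rejections:      ", reject_holm)

# Controlling Type II error: choose a sample size that gives the desired power
# (here 80%) for the smallest effect size we care about (Cohen's d = 0.5)
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required n per group for 80% power: {np.ceil(n_per_group):.0f}")
```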
