# A summary of statistical tests¶

At this point, we've learned quite a few statistical tests. Here's an outline with a few more details below:

• $z$-test
  • for means (sections 4.1-4.3)
  • for proportions (sections 6.1-6.2)
• $t$-test
  • One sample mean (section 5.1)
  • One sample proportion (section 6.1)
• Comparing data sets
  • Paired data (section 5.2)
  • Difference of two means (section 5.3)
• $\chi^2$-test
  • Homogeneity (section 6.3)
  • Independence (section 6.4)
• Linear regression (Chapter 7)

### Commonalities¶

All of the tests have a few things in common.

• They all involve some well-formulated hypothesis - a null hypothesis $H_0$ vs an alternative hypothesis $H_A$.
• Of course, they all involve data; the general question is - do the data support the alternative to the point where we should reject the null hypothesis?
• The precise formulation of the general question involves a $p$-value, which is
  • the probability of observing data at least as favorable to the alternative hypothesis as our current data set, if the null hypothesis is true.
• The smaller the $p$-value, the less viable the null hypothesis is.

### Differences¶

Perhaps the most obvious difference centers on the type of data being considered: numerical vs categorical.

There are other differences too, though.

• How many data sets are under consideration?
• How large are the data sets?
• What is the relationship between the variables?

Understanding these helps you know which to apply in a certain situation.

## The tests¶

### Hypotheses for means¶

This is the simplest, first situation that we dealt with. We are measuring the mean of numerical data. In the simplest case, we have one data sample - just a list of numbers.

#### The hypothesis test¶

The question is - does that data support the hypothesis that the mean of the population from which it was drawn is some particular number? If our data has sample mean $\bar{x}$ and we suspect the population mean is $\mu_0$, then our two-sided hypothesis can be written

• $H_0$: $\mu=\mu_0$
• $H_A$: $\mu\neq\mu_0$

A one-sided hypothesis can be written with a greater-than or less-than, rather than a not-equal.

#### Conditions to check¶

• Random sample of numeric data
  • Need less than 10% of population for independence
• Large enough
  • Typically, at least 30

#### The $z$-score¶

The $z$-score for our mean is $$Z = \frac{\bar{x} - \mu_0}{\sigma/\sqrt{n}}.$$ We then compare this against the standard normal distribution or a $t$-distribution (depending on the sample size) to compute the $p$-value. There are a couple of examples in our notes on the $t$-test.
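As a concrete sketch, here is how this computation might be carried out in Python with scipy — the sample data and the suspected mean $\mu_0 = 50$ are hypothetical, and the sample standard deviation stands in for the usually unknown $\sigma$:

```python
import numpy as np
from scipy import stats

# Hypothetical sample: 35 measurements, with suspected population mean mu_0 = 50.
rng = np.random.default_rng(1)
sample = rng.normal(loc=52, scale=6, size=35)
mu_0 = 50

# Compute the test statistic by hand, using the sample standard deviation
# in place of the (usually unknown) population sigma.
x_bar = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))
z = (x_bar - mu_0) / se

# Two-sided p-value from the t-distribution with n-1 degrees of freedom;
# scipy's one-sample t-test performs the same computation.
p_manual = 2 * stats.t.sf(abs(z), df=len(sample) - 1)
t_stat, p_scipy = stats.ttest_1samp(sample, mu_0)

print(z, p_manual, p_scipy)
```

The manual statistic and `ttest_1samp` agree exactly, since both use $n-1$ degrees of freedom.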

### Hypotheses for a single proportion¶

These are very much like our tests for means. We are dealing now with proportions of categorical data. We often think of this in terms of a random variable $X$ that is binomially distributed; thus, we need to know the mean and standard deviation of the binomial distribution after dividing through by $n$:

\begin{align} \mu &= p &\sigma^2 &= p(1-p)/n &\sigma &= \sqrt{p(1-p)/n} \end{align}

Our hypothesis can be written

\begin{align} H_0 : p=p_0 \\ H_A : p \neq p_0 \end{align}

Ultimately, we compute the $p$-value using either a normal distribution (if the sample size is large) or a $t$-distribution (if the sample size is small). There are a couple of examples in our intro notes on Hypothesis Testing.
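For instance, a one-proportion test might be run in Python as follows — the counts are made up, and note that the standard error here uses $p_0$, since the $p$-value is computed assuming $H_0$ is true:

```python
import math
from scipy import stats

# Hypothetical data: 54 successes in 120 trials, testing H0: p = 0.5.
successes, n, p_0 = 54, 120, 0.5
p_hat = successes / n

# Standard error uses p_0, since probabilities are computed assuming H0 is true.
se = math.sqrt(p_0 * (1 - p_0) / n)
z = (p_hat - p_0) / se

# Two-sided p-value from the standard normal distribution.
p_value = 2 * stats.norm.sf(abs(z))
print(z, p_value)
```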

### Tests for two sample means¶

We use these tests when we have two numerical data sets that are independent of one another and we want to compute the difference between their means.

If the sets have sizes $n_1$ and $n_2$, we analyze the difference of the two means using a $t$-test with

• Mean $\bar{x}_1 - \bar{x}_2$,
• Standard error $$\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}},$$
• and we use the minimum of $n_1-1$ and $n_2-1$ as the degrees of freedom.

Our hypothesis test again looks like

$$\begin{array}{ll} H_0: & \mu_1 = \mu_2 \\ H_A: & \mu_1 \neq \mu_2 \end{array}$$

There are some examples of this in our notes on Relating Data Sets.
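Such a two-sample test might look like this in Python with scipy (the samples are hypothetical); scipy's Welch test uses the standard error above, though it computes a slightly less conservative degrees of freedom than the minimum rule:

```python
import numpy as np
from scipy import stats

# Hypothetical independent samples of different sizes.
rng = np.random.default_rng(2)
a = rng.normal(loc=10.0, scale=2.0, size=40)
b = rng.normal(loc=11.0, scale=2.5, size=35)

# Welch's t-test: does not assume equal variances, matching the
# standard error formula sqrt(s1^2/n1 + s2^2/n2).
t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)

# The same statistic computed by hand.
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
t_manual = (a.mean() - b.mean()) / se

print(t_stat, p_value)
```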

### Tests for two sample proportions¶

We use these tests when we have two categorical data sets that are independent of one another and we want to compute the difference between their proportions.

This is very similar to the difference of the two means but we now use

$$\hat{p} = \hat{p}_1 - \hat{p}_2$$

and $$SE = \sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}.$$

We again use the minimum of $n_1-1$ and $n_2-1$ as the degrees of freedom.

Our hypothesis test again looks like

$$\begin{array}{ll} H_0: & p_1 = p_2 \\ H_A: & p_1 \neq p_2 \\ \end{array}$$

There are again some examples of this in our notes on Relating Data Sets.
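A sketch of this computation in Python — the counts are hypothetical, and this uses the unpooled standard error from the formula above (some texts pool the proportions under $H_0$ instead):

```python
import math
from scipy import stats

# Hypothetical data: 48/200 successes in group 1, 33/180 in group 2.
x1, n1 = 48, 200
x2, n2 = 33, 180
p1, p2 = x1 / n1, x2 / n2

# Standard error from the formula above (unpooled).
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z = (p1 - p2) / se

# Two-sided p-value from the standard normal distribution.
p_value = 2 * stats.norm.sf(abs(z))
print(z, p_value)
```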

### Tests for paired data¶

We use this when we have two data sets that are paired in a natural way; that is, each data point in one set corresponds to a particular data point in the other set.

Such a data set can be translated to a single data set by simply subtracting the data sets pair-wise.

Our hypothesis test looks like

$$\begin{array}{ll} H_0: & \mu_1 = \mu_2 \\ H_A: & \mu_1 \neq \mu_2 \\ \end{array}$$

There are again some examples of this in our notes on Relating Data Sets.
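In Python, subtracting pair-wise and running a one-sample test against zero gives the same result as scipy's built-in paired test (the before/after numbers here are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (e.g., before/after on the same subjects).
before = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.3, 13.5, 12.0])
after = np.array([11.8, 11.1, 12.6, 12.9, 11.5, 12.0, 13.1, 11.7])

# Subtract pair-wise and run a one-sample t-test against 0 ...
diffs = before - after
t_one, p_one = stats.ttest_1samp(diffs, 0.0)

# ... which is exactly what the paired t-test does.
t_rel, p_rel = stats.ttest_rel(before, after)
print(t_rel, p_rel)
```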

## The $\chi^2$-test¶

The chi-square test is a tool for assessing a model of categorical data. There are two situations:

### Homogeneity¶

In this situation, we have two data sets, call them

• $O_1$, $O_2$, ..., $O_k$, which represent observations in $k$ categories and
• $E_1$, $E_2$, ..., $E_k$, which represent expected counts in $k$ categories.

Our hypothesis test looks like

• $H_0$: The observations are representative of the expected counts
• $H_A$: The observations are not representative of the expected counts

We then compute the $\chi^2$ statistic $$\chi^2 = \frac{(O_1 - E_1)^2}{E_1} + \frac{(O_2 - E_2)^2}{E_2} + \cdots + \frac{(O_k - E_k)^2}{E_k}$$ and use the $\chi^2$ distribution with $k-1$ degrees of freedom.

More often, though, we can compute the $\chi^2$ statistic and resulting $p$-value on a computer.
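For instance, with scipy — the die-roll counts here are hypothetical, with equal expected counts under $H_0$:

```python
from scipy import stats

# Hypothetical die-roll counts: 120 rolls, expecting 20 per face under H0.
observed = [25, 18, 22, 15, 19, 21]
expected = [20] * 6

# scipy uses the chi^2 distribution with k-1 degrees of freedom by default.
chi2, p_value = stats.chisquare(observed, expected)

# The same statistic computed directly from the formula above.
chi2_manual = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi2, p_value)
```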

### Independence¶

In this situation, we have data on two categorical variables in a contingency table. The question is whether the variables are related or not. The hypothesis test looks like

• $H_0$: The variables are not related
• $H_A$: The variables are related.

The only tool that we have to compute the $\chi^2$ statistic and resulting $p$-value is the computer.
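For example, with a hypothetical contingency table in Python:

```python
from scipy import stats

# Hypothetical 2x3 contingency table: rows are groups, columns are responses.
table = [[30, 45, 25],
         [20, 50, 30]]

# chi2_contingency computes expected counts from the row and column totals,
# the chi^2 statistic, and the p-value with (rows-1)*(cols-1) degrees of freedom.
chi2, p_value, df, expected = stats.chi2_contingency(table)
print(chi2, p_value, df)
```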

## Linear regression¶

Linear regression is a topic that spans much more than just hypothesis testing. There is an important hypothesis test that arises from linear regression, though.

In linear regression, we have two data samples $(x_1,\ldots,x_k)$ and $(y_1,\ldots,y_k)$. The question is - are they related?

The hypothesis statement looks like:

• $H_0$: The data are not related
• $H_A$: The data are related

This can be stated in terms of the slope of the regression line

• $H_0$: $m=0$
• $H_A$: $m\neq0$

In practice, we run the hypothesis test on a computer and determine the results of the test from the $p$-value that we find there.
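For instance, scipy's `linregress` reports both the fitted slope and the $p$-value for the test $H_0: m=0$; the data here are simulated with a true nonzero slope, so the test should reject:

```python
import numpy as np
from scipy import stats

# Hypothetical data with a roughly linear relationship (true slope 2).
rng = np.random.default_rng(3)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)

# result.slope is the fitted m; result.pvalue tests H0: m = 0.
result = stats.linregress(x, y)
print(result.slope, result.pvalue)
```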