Numerical data

Last week, we learned some data basics with a focus on data tables and the types of data (numeric and categorical) they contain. We also discussed generating data with studies and experiments. Today, we'll take a closer look at numeric data - looking not just at the pictures but also digging a bit deeper into the quantitative parameters that describe the data.

This is all based on section 1.2 of our text.

CDC Data

Let's start with a specific, real-world data set obtained from the Centers for Disease Control and Prevention, which publishes loads of data. This particular set comes from the Behavioral Risk Factor Surveillance System.

This is an ongoing process in which over 400,000 US adults are interviewed every year. The resulting data file has over 2,000 variables, ranging from simple descriptors like age and weight, through basic behaviors like activity level and whether the subject smokes, to what kind of medical care the subject receives.

I've got a subset of this data on my website listing just 8 variables for a random sample of 20,000 individuals: https://www.marksmath.org/data/cdc.csv

Viewing the data table

Here's the CDC sample rendered as a data table:

Most of the variables (i.e., the column names) are self-explanatory. My favorite is smoke100, a Boolean flag indicating whether or not the individual has smoked 100 or more cigarettes over their lifetime. You should be able to classify the rest as numerical or categorical.
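If you'd like to poke at a table like this programmatically, here's a minimal Python sketch using the standard library's csv module. The rows below are made up for illustration, and the column names are assumptions in the spirit of the CDC sample, not the exact header of cdc.csv (which you'd normally read straight from the URL above):

```python
import csv
import io

# A few made-up rows mimicking the CDC sample; the real file has
# 20,000 rows. Column names here are illustrative assumptions.
raw = """smoke100,height,weight,age,gender
0,70,175,31,m
1,64,125,45,f
0,60,105,27,f
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# height, weight, and age are numerical; smoke100 and gender are categorical.
heights = [int(r["height"]) for r in rows]
smokers = [r["smoke100"] == "1" for r in rows]

print(len(rows), heights, sum(smokers))
```

Note that csv gives us every cell as a string, so classifying a column as numerical amounts to deciding which columns to convert with int or float.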

Box plot and five-number summary

A box plot is a picture of the data tied to the so-called five-number summary, which we'll go over in more detail shortly.
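The five numbers behind a box plot are the minimum, first quartile, median, third quartile, and maximum. Here's a quick sketch computing them for a small made-up data set with Python's statistics module. The "inclusive" quartile convention is an assumption on my part; as we'll see below, different conventions interpolate quartiles slightly differently:

```python
import statistics

data = [1, 2, 4, 5, 5, 6, 7, 9, 10]

# Quartiles via the "inclusive" convention; other conventions
# interpolate differently on small data sets.
q1, q2, q3 = statistics.quantiles(data, n=4, method='inclusive')

five_number = (min(data), q1, q2, q3, max(data))
print(five_number)  # (1, 4.0, 5.0, 7.0, 10)
```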

Mean, standard deviation and histogram

A histogram is a picture of the data tied to the mean and standard deviation.
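To see the connection concretely, here's a sketch that computes the mean and standard deviation of some made-up height-like data and draws a crude text histogram. The width-3 binning is an arbitrary choice for illustration:

```python
import statistics
from collections import Counter

data = [61, 64, 64, 66, 67, 67, 67, 68, 70, 70, 71, 74]

print(statistics.mean(data))   # the center of the histogram
print(statistics.stdev(data))  # the sample standard deviation

# A crude text histogram: group the data into width-3 bins.
bins = Counter(3 * (x // 3) for x in data)
for left in sorted(bins):
    print(f"{left:2d}-{left + 2:2d} {'*' * bins[left]}")
```

The tallest row of stars sits near the mean, and the standard deviation controls how quickly the rows shrink as you move away from it.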

The effect of standard deviation

You can use the slider below to see how the graph changes when you change either the mean or the standard deviation. It's particularly hard to see the effect of the standard deviation in a single image.

Other distributions

It's worth mentioning that there are other types of distributions that can arise.

Here's an example of a bimodal histogram.

Skewed distributions

And here's a skewed histogram. Specifically, it's skewed left, since the longer tail of the data stretches out to the left.

Scatter plots for 2D data

Sometimes, we need to visualize the relationship between two variables. One great way to do that is with a scatter plot. For example, here's the relationship between height and weight in the CDC data.

Definitions

At this point we've met several parameters that describe numerical data, including

  • the mean,
  • the median,
  • percentiles, and
  • the standard deviation.

Let's take a look at how these quantities are actually defined.

Before we go through these, it's worth pointing out that the mean and standard deviation are the most important to understand thoroughly.
It's worth understanding percentiles from a conceptual standpoint, but we will rarely compute them directly. We will compute mean and standard deviation.

The mean

The mean is a measure of where the data is centered. It is computed by simply averaging the numbers.

For example, our data might be: $$2,8,2,4,7.$$ The mean of the data is then: $$\frac{2+8+2+4+7}{5} = \frac{23}{5} = 4.6.$$
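That little computation translates directly into code:

```python
data = [2, 8, 2, 4, 7]

# The mean is the total divided by the number of observations.
mean = sum(data) / len(data)
print(mean)  # 4.6
```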

The median

Like the mean, the median is a measure of where the data is centered.

Roughly speaking, it represents the middle value. The way it is computed depends on how many numbers are in your list.

If the number of terms in your data is odd, then the median is simply the middle entry.

For example, if the data is $$1,3,4,8,9,$$ then the median is $4$.

If the number of terms in your data is even, then the median is simply the average of the middle two entries.

For example, if the data is $$1,3,8,9,$$ then the median is $(3+8)/2 = 5.5$.
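Python's statistics module handles both cases for us, so we can check the two examples above:

```python
import statistics

# Odd number of terms: the middle entry.
print(statistics.median([1, 3, 4, 8, 9]))  # 4

# Even number of terms: the average of the middle two.
print(statistics.median([1, 3, 8, 9]))     # 5.5
```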

Percentiles (also called quantiles)

  • The median is a special case of a percentile - 50% of the population lies below the median and 50% lies above.
  • Similarly, 25% of the population lies below the first quartile and 75% lies above.
  • Also, 75% of the population lies below the third quartile and 25% lies above.
  • The second quartile is just another name for the median.
  • The inter-quartile range is the difference between the third and first quartile.
  • One reasonable definition of an outlier is a data point that lies more than 3 inter-quartile ranges from the median.

Example

Suppose our data is $$4, 5, 9, 7, 6, 10, 2, 1, 5.$$ To find percentiles, it helps to sort the data: $$1,2,4,5,5,6,7,9,10.$$

  • The median is definitely 5,
  • the $25^{\text{th}}$ percentile might be 4,
  • the $75^{\text{th}}$ percentile could be 7,
  • and the inter-quartile range would be 3.

There are differing conventions for how to interpolate when the desired percentile doesn't fall exactly on a data point, but the differences diminish as the sample size grows.
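We can see two of those conventions side by side using the statistics module on the example data. The "inclusive" convention reproduces the quartiles suggested above, while the "exclusive" convention interpolates between data points:

```python
import statistics

data = sorted([4, 5, 9, 7, 6, 10, 2, 1, 5])

# Two common conventions; both agree on the median
# but give different first and third quartiles here.
print(statistics.quantiles(data, n=4, method='inclusive'))  # [4.0, 5.0, 7.0]
print(statistics.quantiles(data, n=4, method='exclusive'))  # [3.0, 5.0, 8.0]
```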

Variance and standard deviation

  • The interquartile range forms a measure of the spread of a population or sample related to the median of that population or sample.
  • The standard deviation forms a measure of the spread of a population or sample related to the mean of the population or sample.
  • Roughly, the standard deviation measures how far the individuals deviate from the mean on average.
  • The variance is defined to be the square of the standard deviation. Thus, if the standard deviation is $s$, then the variance is $s^2$.

Definitions

  • If we have a sample of $n$ observations $$x_1,x_2,x_3, \ldots, x_n,$$ then the sample variance is defined by $$s^2 = \frac{(x_1 - \bar{x})^2 + (x_2-\bar{x})^2 +\cdots+(x_n-\bar{x})^2}{n-1}.$$
  • If $s^2$ is the variance, then $s$ is the standard deviation.

Example

Suppose our sample is $$1,2,3,4.$$ Then, the mean is $2.5$ and the variance is $$s^2=\frac{(-3/2)^2 + (-1/2)^2 + (1/2)^2 + (3/2)^2}{3} = \frac{5}{3}.$$ The standard deviation is $$s = \sqrt{5/3} \approx 1.290994.$$
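The statistics module uses the same $n-1$ denominator, so it confirms the computation:

```python
import statistics

data = [1, 2, 3, 4]

s2 = statistics.variance(data)  # sample variance, n - 1 denominator
s = statistics.stdev(data)      # its square root

print(s2)  # 5/3
print(s)   # about 1.290994
```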

Sample variance vs population variance

  • You might see the definition $$s^2 = \frac{(x_1 - \bar{x})^2 + (x_2-\bar{x})^2 +\cdots+(x_n-\bar{x})^2}{n}.$$
  • The difference in the definition is the $n$ in the denominator, rather than $n-1$.
  • The difference arises because
    • The definition with the $n$ in the denominator is applied to populations and
    • The definition with the $n-1$ in the denominator is applied to samples.
  • To make things clear, we will sometimes refer to sample variance vs population variance.

More often than not, we will be computing sample variance and the corresponding standard deviation.
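The statistics module keeps the two definitions separate, which makes the distinction easy to check on the earlier example:

```python
import statistics

data = [1, 2, 3, 4]

# Sample variance: n - 1 denominator.
print(statistics.variance(data))   # 5/3
# Population variance: n denominator.
print(statistics.pvariance(data))  # 5/4 = 1.25
```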