Three considerations for sample size

Sample size considerations

Detecting a change or difference is often the aim of an experiment or set of measurements. We want to learn which vendor, process, or design provides a better result.

When we use a sample to estimate a statistic for a population, we take the risk that the sample provides values that are not representative of the population. For example, if we use a professional basketball team to sample men's height, we may conclude that men in the general population are taller than they actually are.

Even when, unlike the basketball team example, we take care to draw a random and hopefully representative sample, chance alone means the sample may still not provide accurate results.

Confidence and Significance

Statisticians use terms like ‘confidence’ and ‘significance’ with specific meanings related to sampling risks. These related terms describe how convincing a result is. Low significance or confidence implies that the results based on the sample data are not convincing.

Confidence is the idea of being certain that the estimate based on the sample correctly represents the population. It is often used in relation to an interval or bound within which the true and unknown population value is expected to reside.

Significance is the idea that the results are not due to random chance alone. It is the notion that there is convincing evidence, based on the sample data, that there really is a difference. We commonly use this when accepting the alternative hypothesis at a specified level of statistical significance.

Selecting a meaningful sample size

The risk of using a sample to draw conclusions about a population is only one of three considerations when determining the sample size for an experiment. The sampling risk, the population’s variance, and the precision or amount of change we wish to detect all enter the sample size calculation.

The less risk we want to take that the sample misrepresents the population, the more samples are required. If you want no sampling risk at all, measure every unit in the population using a method with very little measurement error.

The higher the underlying population’s variation or spread, the more samples we will need to reach the same result than we would for a population with less variation. Because samples far from the mean are more likely to be drawn (if we are attempting to estimate a population’s mean, for example), it takes more samples to get an accurate estimate than if the population has a very tight variance.

The more precise, or smaller, the difference we want to detect, the more samples are required. Given the same risk and variance, detecting a 1 millimeter height difference between two groups requires many more samples than detecting a 1 meter difference.

The following sample size formula, for estimating the mean of a normal distribution, shows the relationship of these three elements.

n=\frac{Z^{2}\sigma^{2}}{E^{2}}
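
As a minimal sketch of this relationship (assuming the critical value Z, the standard deviation σ, and the precision E, each described below, are already known, and rounding up to a whole number of samples):

```python
import math

def sample_size(z, sigma, e):
    """Samples needed to estimate a normal distribution mean.

    z     -- standard normal critical value for the chosen sampling risk
    sigma -- population standard deviation (or an estimate of it)
    e     -- smallest difference, or precision, we want to detect
    """
    # n = Z^2 * sigma^2 / E^2, rounded up to the next whole sample
    return math.ceil(z**2 * sigma**2 / e**2)
```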

Z represents the Type I risk, α; it is the standard normal value corresponding to the 1 − α confidence level. Other versions of sample size formulas include both Type I and Type II risks and the appropriate distribution for the specific measure. As the desired risk goes down, the Z value from the normal table goes up, and thus the sample size increases.
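
For instance, treating the risk as a two-sided Type I risk α and reading Z from the standard normal distribution (a sketch assuming SciPy is available), lowering α raises Z and therefore the sample size:

```python
from scipy.stats import norm

# Two-sided critical values: as the allowed Type I risk alpha shrinks,
# Z grows, and the sample size grows with Z squared.
for alpha in (0.10, 0.05, 0.01):
    z = norm.ppf(1 - alpha / 2)
    print(f"alpha = {alpha:.2f}  ->  Z = {z:.3f}")
# alpha = 0.10 -> Z = 1.645; alpha = 0.05 -> Z = 1.960; alpha = 0.01 -> Z = 2.576
```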

The risk level is a business decision based on how much risk the decision maker is willing to take. It often becomes a trade-off between the cost of the samples and the experiment versus the possibility that the sample does not represent the population and we make the wrong decision.

σ is the standard deviation of the population (we often use an estimate of the variation from the sample and then use the Student’s t table instead of the normal table). The variance is the standard deviation squared. As the variance increases, the sample size increases.

The population variance is what it is. By working to reduce process or measurement variation, we may both improve the product or process and reduce the number of samples needed.
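
Since n is proportional to σ² when the risk and precision are held fixed, the ratio of two sample sizes follows the squared ratio of the standard deviations:

\frac{n_2}{n_1}=\frac{\sigma_2^{2}}{\sigma_1^{2}}

so, for example, doubling the standard deviation quadruples the required sample size.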

E is the difference of interest. Others use delta, Δ, to represent this term, as it is the amount of separation or precision we wish to detect that drives the sample size. The smaller this value, the more samples we will need.

E for this normal distribution mean estimate is the difference of means, μ1 − μ2, where the difference is the amount that is worth knowing in order to make a decision.

This becomes a trade-off between the cost of the samples and experiment and the amount of difference that is important. Asking at what amount of difference we would make a change is a good way to set this value.
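
As a hypothetical illustration of that trade-off (made-up values: Z = 1.96 for a 5% two-sided risk and σ = 2 units), halving the difference we need to detect roughly quadruples the number of samples:

```python
import math

z, sigma = 1.96, 2.0  # assumed risk level and spread
for e in (1.0, 0.5):  # tightening the difference of interest
    n = math.ceil(z**2 * sigma**2 / e**2)
    print(f"E = {e}  ->  n = {n}")
# E = 1.0 -> n = 16; E = 0.5 -> n = 62
```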

Summary

Most sample size formulas contain these three elements. Two of them, risk and precision, are business or technical decisions. The variance is part of the underlying data and is not an option to increase or decrease easily.

The sample size formula is useful for the discussion around risk and decision points prior to conducting the experiment. It is one way to design and conduct measurements that provide value. If our results are to be useful, considering all three elements that make up a sample size formula becomes important.
