Expectation and Moment Generating Functions

In statistics and reliability engineering we use distributions to describe time-to-failure patterns. The four functions commonly used in reliability engineering are

  • The probability density function
  • The cumulative distribution function
  • The reliability function
  • The hazard function

We often use terms like mean, variance, skewness, and kurtosis to describe distributions (along with shape, scale, and location). The mean is defined as the first moment of a distribution, which can also be obtained from the moment generating function. First, though, let's back up to the concept of center of gravity (cog) from mechanics.

Mean

Any shape has a point about which it will balance. That point is not necessarily the middle of the shape, which would imply equal area or mass on either side; it is the point where the moments on each side are equal. From our engineering studies, the cog of a shape is

\displaystyle \text{cog}=\frac{\int_{-\infty }^{\infty }{xf\left( x \right)dx}}{\int_{-\infty }^{\infty }{f\left( x \right)dx}}

where f(x) is a function describing the area of the shape as a function of x.

Recall from statistics that the total area under a probability density function, which describes the relative likelihood of occurrence at any point x, equals one. Thus the denominator drops out of the equation.
The numerator, based on x f(x), is the first moment about the origin. The definition of the mean of a distribution and the cog are the same, so the expected value (mean, or cog) of the distribution f(x) is

\displaystyle E\left( x \right)=\int_{-\infty }^{\infty }{xf\left( x \right)dx}

For the population mean we use the Greek letter μ, so the expected value of the distribution is

\displaystyle \mu =\int_{-\infty }^{\infty }{xf\left( x \right)dx.}
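As a quick check of this definition, here is a minimal sketch that evaluates the integral numerically. The exponential pdf and the failure rate lam = 0.01 are assumptions for illustration only; the known mean of that distribution is 1/lam = 100.

```python
# A minimal sketch: evaluate E(X) = integral of x*f(x) dx numerically.
# The exponential pdf and lam = 0.01 are hypothetical, for illustration.
import numpy as np
from scipy.integrate import quad

lam = 0.01  # hypothetical failure rate (failures per hour)

def pdf(x):
    """Exponential probability density function f(x) for x >= 0."""
    return lam * np.exp(-lam * x)

mean, _ = quad(lambda x: x * pdf(x), 0, np.inf)  # E(X) = integral of x f(x) dx
print(mean)  # ~100.0, matching the known exponential mean 1/lam
```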

In practical terms the mean for a sample is

\displaystyle \bar{x}=\frac{\sum\limits_{i=1}^{n}{{{X}_{i}}}}{n}
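In code, the sample mean is a direct translation of this formula. A minimal sketch, using failure times simulated from the same hypothetical exponential distribution:

```python
# A minimal sketch of the sample mean on simulated failure times.
import numpy as np

rng = np.random.default_rng(1)
times = rng.exponential(scale=100, size=1_000)  # simulated failure times

x_bar = times.sum() / len(times)  # the formula above
print(x_bar, times.mean())        # identical; both approach 100 as n grows
```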

Variance

The second moment provides information on the dispersion of the mass about a reference point. In the case of probability distributions, taking the second moment about the mean allows us to define the dispersion about the mean. We square the differences so that deviations on either side of the mean do not cancel.

\displaystyle Var\left( X \right)=E\left[ {{\left( X-\mu \right)}^{2}} \right]

Expanding the square gives Var(X) = E(X²) − μ². Using the Greek letter σ, the variance for the population is

\displaystyle {{\sigma }^{2}}=\int_{-\infty }^{\infty }{{{x}^{2}}f\left( x \right)dx}-{{\mu }^{2}}
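A minimal sketch of σ² = E(X²) − μ², again assuming the hypothetical exponential pdf from above (its known variance is 1/lam² = 10,000):

```python
# A minimal sketch: variance as the second moment about the origin
# minus the squared mean, for the assumed exponential pdf.
import numpy as np
from scipy.integrate import quad

lam = 0.01  # hypothetical failure rate
pdf = lambda x: lam * np.exp(-lam * x)

mu, _ = quad(lambda x: x * pdf(x), 0, np.inf)      # first moment, E(X)
ex2, _ = quad(lambda x: x**2 * pdf(x), 0, np.inf)  # second moment about the origin
print(ex2 - mu**2)  # ~10000.0, matching the known variance 1/lam**2
```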

In practical terms variance for a sample is

\displaystyle {{s}^{2}}=\frac{\sum\limits_{i=1}^{n}{{{\left( {{X}_{i}}-\bar{X} \right)}^{2}}}}{n-1}
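In code, the (n − 1) divisor corresponds to NumPy's ddof=1 option. A minimal sketch on simulated data:

```python
# A minimal sketch of the sample variance with the (n - 1) divisor.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=100, size=1_000)

s2 = ((x - x.mean()) ** 2).sum() / (len(x) - 1)  # the formula above
print(s2, x.var(ddof=1))                         # ddof=1 matches (n - 1)
```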

Skewness

The normal distribution is symmetrical about the mean. Many distributions, however, are asymmetrical, or not balanced about the mean; we call this asymmetry skewness. The third moment about the mean provides a measure of the asymmetry of the distribution.
The visual characteristic of skewness is a long tail. If the long tail is on the right, the skewness is positive; if the long tail is on the left, the skewness is negative.

\displaystyle Skew\left( X \right)=E\left[ {{\left( \frac{X-\mu }{\sigma } \right)}^{3}} \right]

This is known as Pearson’s moment coefficient of skewness and is the third standardized moment, hence the use of μ and σ.

\displaystyle \gamma =\frac{\int\limits_{-\infty }^{\infty }{{{x}^{3}}f\left( x \right)dx}-3\mu {{\sigma }^{2}}-{{\mu }^{3}}}{{{\sigma }^{3}}}

In practical terms skewness for a sample is

\displaystyle c=\frac{\sum\limits_{i=1}^{n}{{{\left( {{X}_{i}}-\bar{X} \right)}^{3}}}}{\left( n-1 \right){{s}^{3}}}
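A minimal sketch of this sample skewness, applied to simulated right-skewed data (the exponential distribution has skewness 2; library routines may use a slightly different divisor, so expect small numerical differences):

```python
# A minimal sketch of the sample skewness formula above.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=100, size=10_000)  # right-skewed data

s = x.std(ddof=1)
c = ((x - x.mean()) ** 3).sum() / ((len(x) - 1) * s**3)
print(c)  # positive and near 2: long right tail
```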

Kurtosis

The fourth moment describes the peakedness, or bunchiness, of the distribution. Again, Pearson suggests using the standardized moment.

\displaystyle Kurt\left( X \right)=\frac{E\left[ {{\left( X-\mu \right)}^{4}} \right]}{{{\left( E\left[ {{\left( X-\mu \right)}^{2}} \right] \right)}^{2}}}

In practical terms kurtosis for a sample is

\displaystyle k=\frac{\sum\limits_{i=1}^{n}{{{\left( {{X}_{i}}-\bar{X} \right)}^{4}}}}{\left( n-1 \right){{s}^{4}}}
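A minimal sketch of this sample kurtosis, applied to simulated normal data (the normal distribution's kurtosis is 3, which motivates the excess-kurtosis comparison below):

```python
# A minimal sketch of the sample kurtosis formula above.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=1.0, size=10_000)

s = x.std(ddof=1)
k = ((x - x.mean()) ** 4).sum() / ((len(x) - 1) * s**4)
print(k)      # ~3 for normal data
print(k - 3)  # excess kurtosis, ~0 for normal data
```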

A positive excess kurtosis (kurtosis above the normal distribution's value of 3) means the distribution is peaked near the mean with heavy tails. A negative excess kurtosis means the distribution is flatter, with light tails.
