## Thursday, May 29, 2014

### Single-Factor ANOVA Test Power With G*Power Utility

This is one of the following sixteen articles on Single-Factor ANOVA in Excel

Overview of Single-Factor ANOVA

Single-Factor ANOVA in 5 Steps in Excel 2010 and Excel 2013

Shapiro-Wilk Normality Test in Excel For Each Single-Factor ANOVA Sample Group

Kruskal-Wallis Test Alternative For Single Factor ANOVA in 7 Steps in Excel 2010 and Excel 2013

Levene’s and Brown-Forsythe Tests in Excel For Single-Factor ANOVA Sample Group Variance Comparison

Single-Factor ANOVA - All Excel Calculations

Overview of Post-Hoc Testing For Single-Factor ANOVA

Tukey-Kramer Post-Hoc Test in Excel For Single-Factor ANOVA

Games-Howell Post-Hoc Test in Excel For Single-Factor ANOVA

Overview of Effect Size For Single-Factor ANOVA

ANOVA Effect Size Calculation Eta Squared in Excel 2010 and Excel 2013

ANOVA Effect Size Calculation Psi – RMSSE – in Excel 2010 and Excel 2013

ANOVA Effect Size Calculation Omega Squared in Excel 2010 and Excel 2013

Power of Single-Factor ANOVA Test Using Free Utility G*Power

Welch’s ANOVA Test in 8 Steps in Excel Substitute For Single-Factor ANOVA When Sample Variances Are Not Similar

Brown-Forsythe F-Test in 4 Steps in Excel Substitute For Single-Factor ANOVA When Sample Variances Are Not Similar

# Power of Single-Factor ANOVA Test Using Free Utility G*Power

The accuracy of a statistical test depends heavily on the sample size: the larger the sample size, the more reliable the test’s results will be. This accuracy is quantified as the Power of the test. A statistical test’s Power is the probability that the test will detect an effect of a given size at a given level of significance (alpha). The relationships are as follows:

α (“alpha”) = Level of Significance = 1 – Level of Confidence

α = probability of a type 1 error (a false positive)

α = probability of detecting an effect where there is none

Β (“beta”) = probability of a type 2 error (a false negative)

Β = probability of not detecting a real effect

1 - Β = probability of detecting a real effect

Power = 1 - Β

More precisely, Power is the probability of detecting a real effect of a given size at a given Level of Significance (alpha) for a given total sample size and number of groups.

The term Power can be described as the accuracy of a statistical test. The Power of a statistical test is related to alpha, sample size, and effect size in the following ways:

1) The larger the sample size, the larger is a test’s Power because a larger sample size increases a statistical test’s accuracy.

2) The larger alpha is, the larger is a test’s Power because a larger alpha reduces the amount of confidence needed to validate a statistical test’s result. Alpha = 1 – Level of Confidence. The lower the Level of Confidence needed, the more likely a statistical test will detect an effect.

3) The larger the specified effect size, the larger is a test’s Power because a larger effect size is more likely to be detected by a statistical test.
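These three relationships can be checked numerically. The sketch below uses Python’s statsmodels library as a stand-in for G*Power (an assumption of this example; both tools implement the same noncentral-F Power calculation for fixed-effects ANOVA):

```python
from statsmodels.stats.power import FTestAnovaPower

anova_power = FTestAnovaPower()

# Baseline: medium effect (f = 0.25), 60 total observations,
# alpha = 0.05, three groups
baseline = anova_power.power(effect_size=0.25, nobs=60,
                             alpha=0.05, k_groups=3)

# 1) Larger sample size -> larger Power
bigger_n = anova_power.power(effect_size=0.25, nobs=120,
                             alpha=0.05, k_groups=3)

# 2) Larger alpha -> larger Power
bigger_alpha = anova_power.power(effect_size=0.25, nobs=60,
                                 alpha=0.10, k_groups=3)

# 3) Larger effect size -> larger Power
bigger_f = anova_power.power(effect_size=0.40, nobs=60,
                             alpha=0.05, k_groups=3)

print(baseline, bigger_n, bigger_alpha, bigger_f)
```

Each of the three modified scenarios produces a higher Power than the baseline, matching the three rules above.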

If any three of the four related factors (Power, alpha, sample size, and effect size) are known, the fourth factor can be calculated. These calculations can be very tedious. Fortunately, a number of free utilities are available online that can calculate a test’s Power or the sample size needed to achieve a specified Power. One very convenient and easy-to-use downloadable Power calculator called G*Power is available at the following link at the time of this writing:

http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/
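As an alternative to a point-and-click utility, the same solve-for-the-fourth-factor calculation can be sketched with the statsmodels library in Python (an assumed substitute, not the article’s tool); leaving one argument unspecified tells `solve_power` which factor to compute:

```python
# Sketch: with any three of Power, alpha, effect size, and sample size known,
# solve for the fourth using statsmodels.
from statsmodels.stats.power import FTestAnovaPower

anova_power = FTestAnovaPower()

# Solve for the total sample size (nobs is left unspecified)
total_n = anova_power.solve_power(effect_size=0.4, alpha=0.05,
                                  power=0.8, k_groups=3)
print(f"Total sample size for 80 percent Power: {total_n:.1f}")

# Solve for Power instead, given a fixed total sample size of 63
achieved_power = anova_power.solve_power(effect_size=0.4, alpha=0.05,
                                         nobs=63, k_groups=3)
print(f"Power at a total N of 63: {achieved_power:.3f}")
```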

## Power calculations are generally used in two ways:

### 1) A priori

- Calculation of the minimum sample size needed to achieve a specified Power to detect an effect of a given size at a given alpha. This is the most common use of Power analysis and is normally conducted a priori (before the test is conducted) when designing the test. A Power level of 80 percent for a given alpha and effect size is a common target. Sample size is increased until the desired Power level can be achieved. Since Power equals 1 – β, the β implied by the targeted Power level represents the highest acceptable probability of a type 2 error (a false negative – failing to detect a real effect). Calculation of the sample size necessary to achieve a specified Power requires three input variables:

a) Power level – This is often set at 0.8, meaning that the test has an 80 percent chance of detecting an effect of a given size.

b) Effect size - Effect sizes are specified by the variable f. Effect size f is calculated from a different measure of effect size called η² (eta squared). η² = SSBetween_Groups / SSTotal. These two terms are part of the ANOVA calculations found in the single-factor ANOVA output.
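As a sketch (using made-up sample data, not the chapter’s), η² can be computed directly from the between-groups and total sums of squares:

```python
import numpy as np

# Three hypothetical sample groups (illustrative data only)
groups = [np.array([51.0, 49.0, 55.0, 47.0]),
          np.array([58.0, 60.0, 54.0, 61.0]),
          np.array([50.0, 53.0, 48.0, 52.0])]

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()

# SS_Between_Groups: each group's size times the squared deviation
# of its mean from the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)

# SS_Total: squared deviation of every observation from the grand mean
ss_total = ((all_obs - grand_mean) ** 2).sum()

eta_squared = ss_between / ss_total
print(f"eta squared = {eta_squared:.3f}")
```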

The relationship between effect size f and effect size η² is as follows:

f = √( η² / (1 − η²) )
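Using the conversion f = √( η² / (1 − η²) ), Cohen’s conventional η² benchmarks can be turned into effect size f values in a few lines of Python (an illustrative sketch, not part of the original article):

```python
import math

def cohens_f(eta_squared):
    """Convert eta squared to Cohen's effect size f."""
    return math.sqrt(eta_squared / (1.0 - eta_squared))

# Cohen's small / medium / large benchmarks for eta squared
for label, eta2 in [("small", 0.01), ("medium", 0.06), ("large", 0.14)]:
    print(f"{label}: eta squared = {eta2} -> f = {cohens_f(eta2):.2f}")
```

The three conversions reproduce the benchmark f values of 0.1, 0.25, and 0.4 listed below.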

Jacob Cohen in his landmark 1988 book Statistical Power Analysis for the Behavioral Sciences proposed that effect sizes could be generalized as follows:

η² = 0.01 for a small effect. A small effect is one that is not easily observable.

η² = 0.06 for a medium effect. A medium effect is more easily detected than a small effect but less easily detected than a large effect.

η² = 0.14 for a large effect. A large effect is one that is readily detected.

The above values of η² produce the following values of effect size f:

f = 0.1 for a small effect.

f = 0.25 for a medium effect.

f = 0.4 for a large effect.

c) Alpha – This is commonly set at 0.05.

## Calculating Power With the Free Utility G*Power

### 1) A Priori

- An a priori Power calculation is normally used to determine the total ANOVA sample size necessary to achieve a specified Power level for detecting an effect of a specified size at a given alpha.

The single-factor ANOVA example used in this chapter has three groups. The G*Power utility could be used a priori in this way:

Calculate the total sample size needed given the following parameters:

Power level = 0.8 (80 percent chance of detecting the effect)

Effect size f = 0.4 (a large effect)

Number of Groups = 3

Alpha = 0.05

With these inputs entered into the G*Power dialogue box, G*Power calculates that a total sample size of 66 would be needed to attain a Power of 0.818 (81.8 percent) for detecting a large effect of size f = 0.4. The example used in this chapter has a total of 63 data observations. That is nearly large enough to provide an 80 percent chance of detecting a large effect (f = 0.4) at alpha = 0.05.

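The same a priori figure can be cross-checked in Python with the statsmodels library (an alternative calculator, assumed here; G*Power itself is the tool used in this article):

```python
from statsmodels.stats.power import FTestAnovaPower

# Achieved Power for the a priori example: effect size f = 0.4, three groups,
# alpha = 0.05, and the total sample size of 66 recommended by G*Power
achieved_power = FTestAnovaPower().power(effect_size=0.4, nobs=66,
                                         alpha=0.05, k_groups=3)
print(f"Power at a total N of 66: {achieved_power:.3f}")
```

The result should land very close to G*Power’s reported 0.818.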

### 2) Post hoc

- Calculation of a test’s Power to detect an effect of a given size at a given alpha for a given sample size. This is usually conducted post hoc (after a test has been performed). If a test’s Power is deemed unacceptably low, a failure to detect an effect is usually considered inconclusive rather than evidence that no effect exists.

A post hoc Power calculation is normally used to determine the current Power level of an ANOVA test for detecting an effect of a specified size at a given alpha, given the total sample size.

The single-factor ANOVA example used in this chapter has three groups. The G*Power utility could be used post hoc in this way:

Calculate the test’s Power given the following parameters:

Effect size f = 0.25 (a medium effect)

Number of Groups = 3

Total sample size = 63

Alpha = 0.05

With these inputs entered into the G*Power dialogue box, G*Power calculates that this single-factor ANOVA test achieves a Power level of 0.391 (a 39.1 percent chance) of detecting a medium effect (effect size f = 0.25) with three groups and 63 total data observations.

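This post hoc figure can likewise be cross-checked with statsmodels (an assumed alternative to G*Power):

```python
from statsmodels.stats.power import FTestAnovaPower

# Post hoc Power: medium effect f = 0.25, total N = 63,
# three groups, alpha = 0.05
posthoc_power = FTestAnovaPower().power(effect_size=0.25, nobs=63,
                                        alpha=0.05, k_groups=3)
print(f"Post hoc Power: {posthoc_power:.3f}")
```

The result should land very close to G*Power’s reported 0.391, confirming that this test is badly underpowered for detecting a medium effect.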
