This is one of the following four articles on z-Tests in Excel:
Overview of Hypothesis Tests Using the Normal Distribution in Excel 2010 and Excel 2013
One-Sample z-Test in 4 Steps in Excel 2010 and Excel 2013
2-Sample Unpooled z-Test in 4 Steps in Excel 2010 and Excel 2013
Overview of the Paired (Two-Dependent-Sample) z-Test in 4 Steps in Excel 2010 and Excel 2013
Paired (Two-Dependent-Sample) z-Test in 4 Steps in Excel
This hypothesis test determines whether the mean of a sample of differences between pairs of data (x_bardiff) is equal to (two-tailed test) or else greater than or less than (one-tailed test) a constant. Before-and-after fitness levels of individuals undergoing a training program would be an example of paired data. The sample evaluated would be the group of differences between the before-and-after scores of the individuals. This is called the difference sample.
The t-test is nearly always used instead of a z-Test to perform a two-dependent-sample (paired) hypothesis test of mean. The z-Test requires the population standard deviation of the differences between the pairs be known. The sample standard deviation of the difference sample is readily available but the population standard deviation of the differences is usually not known. The t-test requires only the sample standard deviation of the sample of paired differences be known.
x_bardiff = difference sample mean
Null Hypothesis H0: x_bardiff = Constant
Example of a Paired, 1-Tailed (Left-Tail) z-Test in Excel
This problem is very similar to the problem solved in the t-test section for a paired, one-tailed t-test. Similar problems were used in each of these sections to show the similarities and also contrast the differences between the paired z-Test and t-test as easily as possible.
A new clerical program was introduced to a large company with the hope that clerical errors would be reduced. 5,000 clerical workers in the company underwent the training program. 40 clerical employees who underwent the training were randomly selected. The average number of clerical errors that each of these 40 employees made per month for the six months prior to the training and also for the six months following the training was recorded. Each of the 40 employees had a similar degree of clerical experience within the company and performed nearly the same volume and type of clerical work in the before and after months. The standard deviation of the after-before differences for all 5,000 employees who underwent the training is known to be 6.4.
Based upon the results of the 40 sampled clerical employees, determine with 95 percent certainty whether the average number of monthly clerical mistakes was reduced for all 5,000 clerical employees who underwent the training.
It is the difference that we are concerned with. A hypothesis test will be performed on the sample of differences. The distributed variable will be designated x_bardiff and will represent the average difference between the After and Before measurements.
x_bardiff was calculated by subtracting the Before measurement from the After measurement. This is the intuitive way to determine whether a reduction in errors occurred.
This problem illustrates why the t-test is nearly always used instead of a z-Test to perform a two-dependent-sample (paired) hypothesis test of mean. The z-Test requires the population standard deviation of the differences between the pairs be known. This is rarely ever the case, but will be given for this problem so that a paired z-Test can be used. The t-test requires only the sample standard deviation of the sample of paired differences be known.
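The After-minus-Before differencing described above can be sketched in a few lines of Python. The Before and After numbers below are hypothetical, illustrative values only, not the problem's actual data:

```python
# Hypothetical Before/After monthly error counts for four employees.
# These numbers are illustrative only -- they are not the problem's data.
before = [12, 15, 9, 11]
after = [10, 13, 10, 8]

# The difference sample: After minus Before for each paired observation
diffs = [a - b for a, b in zip(after, before)]

# The difference sample mean, x_bardiff
x_bar_diff = sum(diffs) / len(diffs)

print(diffs)       # [-2, -2, 1, -3]
print(x_bar_diff)  # -1.5
```

A negative difference sample mean indicates that errors went down, on average, from Before to After.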
Summary of Problem Information
x_bardiff = sample mean =AVERAGE() = -2.14
σdiff = population standard deviation = 6.4
n = sample size = number of pairs = COUNT() = 40
SEdiff = Standard Error = σdiff / SQRT(n) = 6.4 / SQRT(40) = 1.01
Note that this calculation of the Standard Error, SEdiff, using the population standard deviation, σdiff, is the true Standard Error. If the sample standard error, sdiff, were used in place of σdiff, the Standard Error calculated would be an estimate of the true Standard Error. The z-Test requires the population standard deviation of the paired differences but the t-test uses the sample standard deviation as an estimate of the population standard deviation of the paired differences.
Level of Certainty = 0.95
Alpha = 1 - Level of Certainty = 1 - 0.95 = 0.05
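These summary values can be checked outside of Excel. The following Python sketch reproduces the Standard Error and Alpha calculations from the given σdiff and n:

```python
from math import sqrt

sigma_diff = 6.4    # population standard deviation of the paired differences
n = 40              # number of After-Before pairs in the difference sample
x_bar_diff = -2.14  # difference sample mean (Excel's AVERAGE() of the diffs)

# Standard Error of the difference sample mean: SEdiff = sigma_diff / sqrt(n)
se_diff = sigma_diff / sqrt(n)

alpha = 1 - 0.95    # Level of Certainty = 0.95

print(round(se_diff, 2))  # 1.01
```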
The Excel data analysis tool Descriptive Statistics is not employed when a z-Test is used. Descriptive Statistics should only be used if a t-Test will be performed. The Standard Deviation and Standard Error calculated by Descriptive Statistics are based upon the sample standard deviation. The z-Test uses the population standard deviation instead of the sample standard deviation used by the t-Test.
As with all Hypothesis Tests of Mean, we must satisfactorily answer these two questions and then proceed to the four-step method of solving the hypothesis test that follows.
The Initial Two Questions That Must be Answered Satisfactorily
What Type of Test Should Be Done?
Have All of the Required Assumptions For This Test Been Met?
The Four-Step Method For Solving All Hypothesis Tests of Mean
Step 1 – Create the Null Hypothesis and the Alternative Hypothesis
Step 2 – Map the Normal or t-Distribution Curve Based on the Null Hypothesis
Step 3 – Map the Regions of Acceptance and Rejection
Step 4 – Perform the Critical Value Test, the p Value Test, or the Critical t Value Test
The Initial Two Questions That Need To Be Answered Before Performing the Four-Step Hypothesis Test of Mean Are As Follows:
Question 1) What Type of Test Should Be Done?
a) Hypothesis Test of Mean or Proportion?
This is a test of mean because each individual observation (each sampled difference) within the sample can have a wide range of values. Data points for tests of proportion are binary: they can take only one of two possible values.
b) One-Sample or Two-Sample Test?
This is a two-sample hypothesis test because the data exists in two groups of measurements. One sample group contains Before measurements and the other sample group contains After measurements.
c) Independent (Unpaired) Test or Dependent (Paired) Test?
This is a paired (dependent) hypothesis test because each Before observation has a related After observation made on the same person.
d) One-Tailed or Two-Tailed Test?
The problem asks to determine whether there has been a reduction in clerical mistakes from Before to After. This is a directional inequality, making this hypothesis test a one-tailed test. This one-tailed test will be in the left tail because the Alternative Hypothesis, which will be created shortly, will use the “less than” operator.
If the problem asked whether Before and After were simply different, the inequality would be non-directional and the resulting hypothesis test would be a two-tailed test. A two-tailed test is more stringent than a one-tailed test.
e) t-Test or z-Test?
A z-Test can be performed if the Test Statistic’s distribution can be approximated by the normal distribution under the Null Hypothesis. The Test Statistic’s distribution can be approximated by the normal distribution only if the difference sample size is large (n > 30) and the population standard deviation, σ, is known. A t-Test must be used in all other cases.
Sample size, n, equals 40 and population standard deviation, σ, equals 6.4 so both conditions are met for the z-Test.
It should be noted that a paired t-Test can always be used in place of a paired z-Test. All z-Tests can be replaced by their equivalent t-Tests. As a result, some major commercial statistical software packages including the well-known SPSS provide only t-Tests and no direct z-Tests.
This hypothesis test is therefore a two-sample, paired (dependent), one-tailed z-Test of mean.
Question 2) Test Requirements Met?
a) Test Statistic Distributed According to Normal Distribution
A z-Test can be performed if the distribution of the Test Statistic can be approximated under the Null Hypothesis by the normal distribution. The Test Statistic is derived from the mean of the difference sample and therefore has the same distribution that the difference sample mean would have if multiple similar samples were taken from the same population of differences between data sample pairs.
The difference sample size indicates how to determine the distribution of the difference sample mean and therefore the distribution of the Test Statistic. As per the Central Limit Theorem, as the difference sample size increases, the distribution of the difference sample means converges to the normal distribution.
In actuality, the distribution of the sample mean converges toward the t-Distribution as sample size increases. The t-Distribution, in turn, converges to the standard normal distribution as sample size increases, and nearly exactly resembles the standard normal distribution when sample size exceeds 30. The sample mean’s distribution can therefore be approximated by the normal distribution. The Test Statistic’s distribution can likewise be approximated by the normal distribution because the Test Statistic is derived from the sample mean.
As per the Central Limit Theorem, the Test Statistic’s distribution can be approximated by the normal distribution when the difference sample size is large regardless of the distribution of population from which the sample was drawn. There is also no need to verify the normality of the difference sample, as would be the case with a t-Test when population distribution is not known.
We can now proceed to complete the four-step method for solving all Hypothesis Tests of Mean. These four steps are as follows:
Step 1 – Create the Null and Alternative Hypotheses
The Null Hypothesis is always an equality and states that the items being compared are the same. In this case, the Null Hypothesis would state that there is no difference between the Before and After data. We will use the variable x_bardiff to represent the mean difference between the Before and After measurements. The Null Hypothesis is as follows:
H0: x_bardiff = Constant = 0
The Alternative Hypothesis is always an inequality and states that the two items being compared are different. This hypothesis test is trying to determine whether there has been a reduction in clerical errors, i.e., the After measurements are, on average, smaller than the Before measurements. The Alternative Hypothesis is as follows:
H1: x_bardiff < Constant
H1: x_bardiff < 0
The Alternative Hypothesis is directional (“greater than” or “less than” instead of “not equal,” which is non-directional) and the hypothesis test is therefore a one-tailed test. The “less than” operator indicates that this is a one-tailed test with the Region of Rejection (the alpha region) entirely contained in the left tail. A “greater than” operator would indicate a one-tailed test focused on the right tail.
It should also be noted that a two-tailed test is more rigorous (requires a greater difference between the two entities being compared before the test shows that there is a difference) than a one-tailed test.
It is important to note that the Null and Alternative Hypotheses refer to the means of the population of paired differences from which the difference samples were taken. A population of paired differences would be the differences of data pairs in a population of data pairs.
A paired z-Test determines whether to reject or fail to reject the Null Hypothesis, which states that the population of paired differences from which the difference sample was taken has a mean equal to the Constant. The Constant in this case is equal to 0. This means that the Null Hypothesis states that the average difference between data pairs of the entire population from which the sample of data pairs was drawn is zero.
Parameters necessary to map the distributed variable, x_bardiff, to the normal distribution are the following:
x_bardiff = sample mean =AVERAGE() = -2.14
σdiff = population standard deviation = 6.4
n = sample size = number of pairs = COUNT() = 40
SEdiff = Standard Error = σdiff / SQRT(n) = 6.4 / SQRT(40) = 1.01
These parameters are used to map the distributed variable, x_bardiff, to the Excel-generated normal distribution curve as follows:
Step 2 – Map Distributed Variable to Normal Distribution Curve
A z-Test can be performed if the difference sample mean and the Test Statistic (the z Value) are distributed according to the normal distribution. Because the difference sample size is large (n > 30), the Central Limit Theorem guarantees that the difference sample mean and the closely-related Test Statistic are distributed approximately according to the normal distribution, with no normality testing required.
The variable x_bardiff is distributed according to the normal distribution. Mapping this distributed variable to a normal distribution curve is shown on the Excel-generated graph below.
This non-standardized normal distribution curve has its mean set equal to the Constant taken from the Null Hypothesis, which is:
H0: x_bardiff = Constant = 0
This non-standardized normal distribution curve is constructed from the following parameters:
Curve Mean = Constant = 0 (This x_bardiff is hypothesized by the Null Hypothesis to be the curve’s mean.)
Standard Errordiff = 1.01
Distributed Variable = x_bardiff
Step 3 – Map the Regions of Acceptance and Rejection
The goal of a hypothesis test is to determine whether to accept or reject the Null Hypothesis at a given level of certainty. If the two things being compared are far enough apart from each other, the Null Hypothesis (which states that the two things are not different) can be rejected. In this case we are trying to show graphically how different x_bardiff (-2.14) is from the hypothesized mean of 0.
The non-standardized normal distribution curve that maps the distribution of the variable x_bardiff can be divided up into two types of regions: the Region of Acceptance and the Region of Rejection. A boundary between a Region of Acceptance and a Region of Rejection is called a Critical Value.
If x_bardiff’s value of -2.14 falls in the Region of Acceptance, we must accept the Null Hypothesis. If x_bardiff’s value of -2.14 falls in the Region of Rejection, we can reject the Null Hypothesis.
The total size of the Region of Rejection is equal to Alpha. In this case Alpha, α, is equal to 0.05. This means that the Region of Rejection will take up 5 percent of the total area under this normal distribution curve. Because this is a one-tailed test in the left tail, that 5 percent is entirely contained in the outer left tail.
The boundary between the Region of Acceptance and the Region of Rejection is called a Critical Value. The location of this Critical Value needs to be calculated as follows.
Calculate the Critical Value
One-tailed, Left tail Critical Value = Mean + (Number of Standard Errors from Mean to Region of Rejection) * SEdiff
Note that the Mean = the Constant from the Null Hypothesis, which is 0.
One-tailed, Left tail Critical Value = Mean + NORM.S.INV(α) * SEdiff
One-tailed, Left tail Critical Value = 0 + NORM.S.INV(0.05) * 1.01
One-tailed, Left tail Critical Value = 0 + (-1.6449) * 1.01
One-tailed, Left tail Critical Value = -1.66
The Region of Rejection is therefore everything that is to the left of -1.66.
It should be noted that the Mean in the Critical Values formula refers to the mean of the mapped normal distribution curve. The mean of this curve is the Constant from the Null Hypothesis. The Mean in the Critical values formula is therefore the hypothesized mean from the Null Hypothesis. The Null Hypothesis is as follows:
H0: x_bardiff = Constant = 0
The hypothesized mean is the hypothesized x_bardiff which is 0.
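The Critical Value calculation above can be reproduced in Python, where `statistics.NormalDist().inv_cdf` plays the role of Excel's NORM.S.INV:

```python
from math import sqrt
from statistics import NormalDist

alpha = 0.05
se_diff = 6.4 / sqrt(40)   # Standard Error, SEdiff = 1.01

# NORM.S.INV(alpha) in Excel is the inverse CDF of the standard normal
z_alpha = NormalDist().inv_cdf(alpha)    # about -1.6449

# Critical Value = hypothesized mean (the Constant, 0) + z_alpha * SEdiff
critical_value = 0 + z_alpha * se_diff

print(round(critical_value, 2))  # -1.66
```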
The distribution curve with the blue 95-percent Region of Acceptance and the yellow 5-percent Region of Rejection entirely contained in the left tail is shown on the Excel-generated graph below.
Step 4 – Determine Whether to Reject Null Hypothesis
The object of a hypothesis test is to determine whether to accept or reject the Null Hypothesis. There are three equivalent tests that determine whether to accept or reject the Null Hypothesis. Only one of these tests needs to be performed because all three provide equivalent information. The three tests are as follows:
1) Compare x_bardiff With the Critical Value
Reject the Null Hypothesis if the sample mean, x_bardiff = -2.14, falls into the Region of Rejection.
Equivalently, reject the Null Hypothesis if the sample mean, x_bardiff, is farther from the curve’s mean of 0 than the Critical Value.
The Critical Value has been calculated to be -1.66 on the left. x_bardiff (-2.14) is farther from the curve mean (0) than the left Critical Value (-1.66). The Null Hypothesis is therefore rejected.
2) Compare z Score with Critical z Value
The z Score is also known as the Test Statistic in a z-Test and is the number of Standard Errors that x_bardiff is from the mean (mean = Constant = 0).
The Critical z Value is the number of Standard Errors that the Critical Value is from the mean.
If the z Score is further from the standardized mean of zero than the Critical z Value, the Null Hypothesis can be rejected.
z Score (also called the Test Statistic) = (x_bardiff – 0) / SEdiff
Z Score (Test Statistic) = (-2.14 – 0) / 1.01
Z Score (Test Statistic) = -2.11
This indicates that x_bardiff is 2.11 standard errors to the left of the mean (mean = Constant = 0).
Critical z Value = NORM.S.INV(α)
If α = 0.05, the Critical z Value for a one-tailed hypothesis test in the left tail is calculated as follows:
Critical z Value = NORM.S.INV(0.05) = -1.6449
This means that the Region of Rejection for a one-tailed hypothesis test in the left tail begins 1.6449 standard errors to the left of the standardized mean of zero.
The z Score (-2.11) is farther from the standardized mean of zero than the Critical z Value (-1.6449) so the Null Hypothesis is rejected.
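This z Score vs. Critical z Value comparison can be sketched in Python as follows:

```python
from math import sqrt
from statistics import NormalDist

x_bar_diff = -2.14
se_diff = 6.4 / sqrt(40)   # Standard Error, SEdiff

# z Score (the Test Statistic): number of Standard Errors that
# x_bardiff is from the hypothesized mean of 0
z_score = (x_bar_diff - 0) / se_diff

# Critical z Value for a one-tailed test in the left tail at alpha = 0.05
critical_z = NormalDist().inv_cdf(0.05)

print(round(z_score, 2))     # -2.11
print(z_score < critical_z)  # True -> reject the Null Hypothesis
```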
3) Compare p Value to Alpha.
The p Value is the percent of the curve that is beyond x_bardiff (-2.14). If the p Value is smaller than Alpha, the Null Hypothesis is rejected. The p Value in this case is calculated by the following Excel formula:
p Value =MIN(NORM.S.DIST(z Score,TRUE),1-NORM.S.DIST(z Score,TRUE))
p Value =MIN(NORM.S.DIST(-2.11,TRUE),1-NORM.S.DIST(-2.11,TRUE))
p Value = 0.0174
The p Value (0.0174) is smaller than Alpha (0.05) and we therefore reject the Null Hypothesis. The following Excel-generated graph shows that the red p Value (the curve area beyond x_bardiff) is smaller than the yellow Region of Rejection in the left tail.
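The same p Value calculation can be sketched in Python, with `NormalDist().cdf` standing in for Excel's NORM.S.DIST:

```python
from statistics import NormalDist

z_score = -2.11  # the Test Statistic calculated in the z Score test above

# p Value: the area of the standard normal curve beyond the z Score,
# mirroring =MIN(NORM.S.DIST(z,TRUE), 1-NORM.S.DIST(z,TRUE)) in Excel
cdf = NormalDist().cdf(z_score)
p_value = min(cdf, 1 - cdf)

print(round(p_value, 4))  # 0.0174
print(p_value < 0.05)     # True -> reject the Null Hypothesis
```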
Excel does not provide any formulas or tools in the Data Analysis ToolPak add-in that directly perform the paired z-Test. The easy work-around is to perform a one-sample z-Test on the difference data sample. This formula is as follows:
p Value =MIN(Z.TEST(array,Constant,σdiff),1-Z.TEST(array,Constant,σdiff))
It should be noted that when the sample mean is greater than the Constant (the z Score is positive and the test is in the right tail), the p Value Excel formula simplifies to
p Value = Z.TEST(array,Constant,σdiff)
When the sample mean is less than the Constant (the z Score is negative and the test is in the left tail, as in this problem), the p Value Excel formula simplifies to
p Value = 1 - Z.TEST(array,Constant,σdiff)
The Constant is taken from the Null Hypothesis and is equal to 0.
The Null Hypothesis is as follows:
H0: x_bardiff = Constant = 0
Applying the Excel one-sample z-Test formula to the sample of difference data gives the same p Value (0.0174) for this paired z-Test.
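For readers who want to verify the work-around outside Excel, the Python sketch below mimics Excel's documented Z.TEST formula, which returns the one-tailed probability that the sample mean is greater than the given Constant. The difference sample shown is hypothetical, illustrative data, not the problem's data:

```python
from math import sqrt
from statistics import NormalDist

def z_test(array, x, sigma):
    """Rough equivalent of Excel's Z.TEST(array, x, sigma): the one-tailed
    probability that the sample mean is greater than x."""
    n = len(array)
    z = (sum(array) / n - x) / (sigma / sqrt(n))
    return 1 - NormalDist().cdf(z)

# Hypothetical After-Before difference sample (illustrative only)
diffs = [-3, -1, -4, 0, -2, -3, 1, -2]

zt = z_test(diffs, 0, 6.4)   # Constant = 0, population sigma_diff = 6.4
p_value = min(zt, 1 - zt)    # =MIN(Z.TEST(...), 1-Z.TEST(...))

# Because this sample mean is below the Constant (a left-tailed result),
# the MIN picks 1 - Z.TEST(...), matching the rule described above
print(p_value == 1 - zt)     # True
```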