This is one of the following six articles on 2-Independent-Sample Unpooled t-Tests in Excel

2-Independent-Sample Unpooled t-Test in 4 Steps in Excel 2010 and Excel 2013

Variance Tests: Levene’s Test, Brown-Forsythe Test, and F-Test in Excel For 2-Sample Unpooled t-Test

Excel Normality Tests Kolmogorov-Smirnov, Anderson-Darling, and Shapiro-Wilk For 2-Sample Unpooled t-Test

2-Sample Unpooled t-Test Excel Calculations, Formulas, and Tools

Effect Size for a 2-Independent-Sample Unpooled t-Test in Excel 2010 and Excel 2013

Test Power of a 2-Independent Sample Unpooled t-Test With G-Power Utility

# Two-Independent-Sample Unpooled t-Test in 4 Steps in Excel

This hypothesis test evaluates two independent samples to determine whether the difference between the two sample means (x_bar_{1} and x_bar_{2}) is equal to (two-tailed test) or else greater than or less than (one-tailed test) a constant. This is an unpooled test because a single pooled standard deviation *cannot* replace both sample standard deviations: they are too different.

x_bar_{1} - x_bar_{2} = Observed difference between the sample means

*(Click On Image To See a Larger Version)*

Unpooled t-Tests are performed if the variances of the two sample groups are not similar. A rule of thumb is as follows: a Pooled t-Test should only be performed if the standard deviation of one sample, s_{1}, is no more than twice as large as the standard deviation of the other sample, s_{2}. An unpooled t-Test should be performed if that condition is not met.

Null Hypothesis H_{0}: x_bar_{1} - x_bar_{2} = Constant

_{2}The Null Hypothesis is rejected if any of the following equivalent conditions are shown to exist:

1) The observed x_bar_{1} - x_bar_{2} is beyond the Critical Value.

2) The t Value (the Test Statistic) is farther from zero than the Critical t Value.

3) The p Value is smaller than α for a one-tailed test or α/2 for a two-tailed test.

## Example of 2-Independent-Sample, 2-Tailed, Unpooled t-Test in Excel

This problem is very similar to the problem solved in the z-Test section for a two-independent-sample, two-tailed z-Test. Similar problems were used in each of these sections to show the similarities and also to contrast the differences between the two-independent-sample z-Test and t-Test as easily as possible.

Two shifts on a production line are being compared to determine if there is a difference in the average daily number of units produced by each shift. The two shifts operate eight hours per day under nearly identical conditions that remain fairly constant from day to day. A sample of the total number of units produced by each shift on a random selection of days is taken. Determine with a 95 percent Level of Confidence if there is a difference between the average daily number of units produced by the two shifts.

The sampled data are as follows:

*(Click On Image To See a Larger Version)*

Running the Excel data analysis tool Descriptive Statistics separately on each sample group produces the following output:

*(Click On Image To See a Larger Version)*

Note that when performing two-sample t-Tests in Excel, always designate Sample 1 (Variable 1) to be the sample with the larger mean.

The results of the Unpooled t-Test will be more intuitive if the sample group with the larger mean is designated as the first sample and the sample group with the smaller mean is designated as the second sample.

Another reason for designating the sample group with the larger mean as the first sample is to obtain the correct result from the Excel data analysis tool **t-Test: Two-Sample Assuming Unequal Variances**. The test statistic (T Stat in the Excel output) and the Critical t value (t Critical two-tail in the Excel output) will have the same sign (as they always should) only if the sample group with the larger mean is designated the first sample.

### Summary of Problem Information

__Sample Group 1 – Shift A (Variable 1)__

x_bar_{1} = sample_{1} mean = AVERAGE() = 46.55

µ_{1} (Greek letter “mu”) = population mean from which Sample 1 was drawn = Not Known

s_{1} = sample_{1} standard deviation =STDEV.S() = 24.78

Var_{1} = sample_{1} variance =VAR() = 613.84

σ_{1} (Greek letter “sigma”) = population standard deviation from which Sample 1 was drawn = Not Known

n_{1} = sample_{1} size = COUNT() = 20

__Sample Group 2 – Shift B (Variable 2)__

x_bar_{2} = sample_{2} mean = AVERAGE() = 42.24

µ_{2} (Greek letter “mu”) = population mean from which Sample 2 was drawn = Not Known

s_{2} = sample_{2} standard deviation =STDEV.S() = 11.80

Var_{2} = sample_{2} variance =VAR() = 139.32

σ_{2} (Greek letter “sigma”) = population standard deviation from which Sample 2 was drawn = Not Known

n_{2} = sample_{2} size = COUNT() = 17

x_bar_{1} - x_bar_{2} = 46.55 – 42.24 = 4.31

Level of Certainty = 0.95

Alpha = 1 - Level of Certainty = 1 – 0.95 = 0.05
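The entire unpooled (Welch) t-Test can be cross-checked outside Excel from the summary statistics above. This is a minimal Python sketch, assuming the scipy library is available; `ttest_ind_from_stats` accepts means, standard deviations, and sample sizes directly, and `equal_var=False` selects the unpooled test:

```python
# Cross-check of the unpooled (Welch) t-Test from the summary statistics above.
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(
    mean1=46.55, std1=24.78, nobs1=20,   # Shift A (Variable 1)
    mean2=42.24, std2=11.80, nobs2=17,   # Shift B (Variable 2)
    equal_var=False)                     # False = unpooled (Welch) t-Test

print(round(result.statistic, 2))        # t Value, approximately 0.69
print(result.pvalue)                     # two-tailed p value, roughly 0.5
```

The two-tailed p value returned here is twice the one-tailed p value calculated later in this article.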

As with all Hypothesis Tests of Mean, we must satisfactorily answer these two questions and then proceed to the four-step method of solving the hypothesis test that follows.

### The Initial Two Questions That Must be Answered Satisfactorily

What Type of Test Should Be Done?

Have All of the Required Assumptions For This Test Been Met?

### The Four-Step Method For Solving All Hypothesis Tests of Mean

Step 1 – Create the Null Hypothesis and the Alternate Hypothesis

Step 2 – Map the Normal or t Distribution Curve Based on the Null Hypothesis

Step 3 – Map the Regions of Acceptance and Rejection

Step 4 – Perform the Critical Value Test, the p Value Test, or the Critical t Value Test

The initial two questions that need to be answered before performing the Four-Step Hypothesis Test of Mean are as follows:

### Question 1) What Type of Test Should Be Done?

**a) Hypothesis Test of Mean or Proportion?**

This is a Hypothesis Test of Mean because each individual observation (each sampled shift’s output) within each of the two sample groups can have a wide range of values. Data points for Hypothesis Tests of Proportion are binary: they can take only one of two possible values.

**b) One-Sample or Two-Sample Test?**

This is a two-sample hypothesis test because two independent samples are being compared with each other. The two sample groups are the daily units produced by Shift A and the daily units produced by Shift B.

**c) Independent (Unpaired) Test or Dependent (Paired) Test?**

It is an unpaired test because data observations in each sample group are completely unrelated to data observations in the other sample group. The designation of “paired” or “unpaired” applies only for two-sample hypothesis tests.

**d) One-Tailed or Two-Tailed Test?**

The problem asks to determine whether there is simply a difference in the average number of daily units produced by Shift A and by Shift B. This is a non-directional inequality, making this hypothesis test a two-tailed test. If the problem asked to determine whether Shift A’s production is __greater than__ or __less than__ Shift B’s, the inequality would be directional and the resulting hypothesis test would be a one-tailed test. A two-tailed test is more stringent than a one-tailed test.

**e) t-Test or z-Test?**

A two-independent-sample hypothesis test of mean must be performed as a t-Test if the sample size is small (n_{1} + n_{2} < 40). In this case the sample size is small because n_{1} + n_{2} = 37, so this Hypothesis Test of Mean must be performed as a t-Test. A t-Test uses the t Distribution, not the normal distribution as a z-Test does.

**f) Pooled or Unpooled t-Test?**

Pooled t-Tests are performed if the variances of both sample groups are similar. A rule of thumb is as follows: a Pooled t-Test should be performed only if the standard deviation of one sample is no more than twice as large as the standard deviation of the other sample. That condition is not met here because s_{1} is more than twice s_{2} (2 × 11.80 = 23.60), as the following show:

s_{1} = sample_{1} standard deviation = 24.78

and

s_{2} = sample_{2} standard deviation = 11.80
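The rule of thumb above reduces to a one-line check. A minimal Python sketch (the 2× threshold is this article's rule of thumb, not a library constant):

```python
# Rule-of-thumb check: a Pooled t-Test is only appropriate when the
# larger sample standard deviation is at most twice the smaller one.
s1 = 24.78   # Shift A sample standard deviation
s2 = 11.80   # Shift B sample standard deviation

ratio = max(s1, s2) / min(s1, s2)   # approximately 2.1
pooled_ok = ratio <= 2              # False, so use the unpooled t-Test

print(round(ratio, 2), pooled_ok)
```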

### Comparing Sample Variances

The following sample variance comparison tests will be performed on this problem’s sample data in a blog article shortly after this one:

**1) F Test**

**2) Levene’s Test**

**3) Brown-Forsythe Test**

Each of the above tests can be considered relatively equivalent to the others. The results of these tests, which will be calculated in Excel and shown in a blog article that follows this, indicate that the variances of the data samples are not similar enough to perform a pooled t-Test.

We believe that the variances of both sample groups are dissimilar enough to force using an Unpooled test for this two-independent sample hypothesis test.

__This hypothesis test is a two-independent-sample, two-tailed, unpooled t-Test of mean.__

### Question 2) Test Requirements Met?

**a) Normal Distribution of Both Sample Means**

A t-Test can be performed if the distribution of the Test Statistic (the t value) can be approximated under the Null Hypothesis by the t Distribution. The t Value for this test is calculated as follows:

*(Click On Image To See a Larger Version)*

To perform a hypothesis test that is based on the normal distribution or t distribution, both sample means must be normally distributed. In other words, if we took multiple samples just like either one of the two mentioned here, the means of those samples would have to be normally distributed in order to be able to perform a hypothesis test that is based upon the normal or t distributions.

For example, 30 independent, random samples of the daily production from each of the two shifts could be evaluated just like the single sample of units produced from 15+ production days from each of the two shifts as mentioned here. If the means of all of the 30 samples from one shift and, separately, the means of the other 30 samples from the other shift are normally distributed, a hypothesis test based on the normal or t distribution can be performed on the two independent samples taken.

The means of the samples would be normally distributed if any of the following are true:

__1) Sample Size of Both Samples Greater Than 30__

The Central Limit Theorem states that the means of similar-sized, random, independent samples will be normally distributed if the sample size is large (n > 30) no matter how the underlying population from which the samples came is distributed. In reality, the distribution of sample means converges toward normality when n is as small as 5, as long as the underlying population is not too skewed.

__2) Both Populations Are Normally Distributed__

If this is the case, the means of similar sized, random, independent samples will also be normally distributed. It is quite often the case that the distribution of the underlying population is not known and should not be assumed.

__3) Both Samples Are Normally Distributed__

If the sample is normally distributed, the means of other similar-sized, independent, random samples will also be normally distributed. Normality testing must be performed on the sample to determine whether the sample is normally distributed.

In this case the sample size for both samples is small: n_{1} and n_{2} are both less than 30. The normal distribution of both sample means must therefore be tested and confirmed. Normality testing on each of the samples has to be performed to confirm the normal distribution of the means of both samples.

**b) Significantly Different Sample Variances**

The two-independent-sample, unpooled t-Test is used when the two independent samples have significantly different variances. Samples that have similar variances are said to be **homoscedastic**. Samples that have significantly different variances are said to be **heteroscedastic**. The samples in this example have significantly different variances.

**Variance comparison tests will be performed on this problem’s data in a blog article that follows this one.**

**c) Independence of Samples**

This type of a hypothesis test requires both samples be totally independent of each other. In this case they are completely independent.

### Evaluating the Normality of the Sample Data

The following five normality tests will be performed on the sample data here:

An Excel histogram of the sample data will be created.

A normal probability plot of the sample data will be created in Excel.

The Kolmogorov-Smirnov test for normality of the sample data will be performed in Excel.

The Anderson-Darling test for normality of the sample data will be performed in Excel.

The Shapiro-Wilk test for normality of the sample data will be performed in Excel.

**These normality tests will be performed on this problem’s data in a blog article that follows this one.**

We now proceed to complete the four-step method for solving all Hypothesis Tests of Mean. These four steps are as follows:

__Step 1 – Create the Null Hypothesis and the Alternate Hypothesis__

__Step 2 – Map the Normal or t Distribution Curve Based on the Null Hypothesis__

__Step 3 – Map the Regions of Acceptance and Rejection__

__Step 4 – Determine Whether to Accept or Reject the Null Hypothesis By Performing the Critical Value Test, the p Value Test, or the Critical t Value Test__

Proceeding through the four steps is done as follows:

### Step 1 – Create the Null and Alternate Hypotheses

The Null Hypothesis is always an equality and states that the items being compared are the same. In this case, the Null Hypothesis would state that the average daily outputs of both sample groups are the same. We will use the variable x_bar_{1}-x_bar_{2} to represent the difference between the means of the two groups. If the mean outputs for both groups are the same, then the difference between the two means, x_bar_{1}-x_bar_{2}, would equal zero. The Null Hypothesis is as follows:

H_{0}: x_bar_{1}-x_bar_{2} = Constant = 0

The Alternate Hypothesis is always an inequality and states that the two items being compared are different. This hypothesis test is trying to determine whether the first mean (x_bar_{1}) is different from the second mean (x_bar_{2}). The Alternate Hypothesis is as follows:

H_{1}: x_bar_{1}-x_bar_{2} ≠ Constant, which is 0

H_{1}: x_bar_{1}-x_bar_{2} ≠ 0

The Alternative Hypothesis is non-directional (“not equal” instead of “greater than” or “less than”) and the hypothesis test is therefore a two-tailed test. It should be noted that a two-tailed test is more rigorous (it requires a greater difference between the two entities being compared before the test shows that there is a difference) than a one-tailed test.

The following formulas are used by the Two-Independent Sample, Unpooled t-Test:

**Unpooled Degrees of Freedom**

* (Click On Image To See a Larger Version)*

df = [ {(Var_{1}/n_{1}) + (Var_{2}/n_{2})}^2 ] / [ {(Var_{1}/n_{1})^2 / (n_{1} - 1)} + {(Var_{2}/n_{2})^2 / (n_{2} - 1)} ]

df = [ {(613.84/20) + (139.32/17)}^2 ] / [ {(613.84/20)^2 / (20 - 1)} + {(139.32/17)^2 / (17 - 1)} ]

df = 28

**Unpooled Sample Standard Error**

*(Click On Image To See a Larger Version)*

SE = SQRT[ (Var_{1}/n_{1}) + (Var_{2}/n_{2}) ]

SE = SQRT[ (613.84/20) + (139.32/17) ]

SE = 6.236

Note that this calculation of the Standard Error using the sample variance, s^{2}, is an estimate of the true Standard Error which would be calculated using the population variance, σ^{2}, of the populations from which the samples were drawn.
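The two formulas above can be verified with a short, standard-library-only Python sketch using this problem's numbers:

```python
import math

var1, n1 = 613.84, 20   # Shift A sample variance and sample size
var2, n2 = 139.32, 17   # Shift B sample variance and sample size

# Unpooled (Welch-Satterthwaite) Degrees of Freedom
df = ((var1/n1 + var2/n2) ** 2) / (
    (var1/n1) ** 2 / (n1 - 1) + (var2/n2) ** 2 / (n2 - 1))

# Unpooled Standard Error of the difference between the sample means
se = math.sqrt(var1/n1 + var2/n2)

print(int(df))        # 28 (df comes out near 28.1 and is used as 28)
print(round(se, 3))   # 6.236
```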

These parameters are used to map the distributed variable, x_bar_{1}-x_bar_{2}, to the t Distribution curve as follows:

### Step 2 – Map the Distributed Variable on a t-Distribution Curve

A t-Test can be performed if the sample means and the Test Statistic (the t Value) are distributed according to the t Distribution. If the samples have passed a normality test, the sample means and the closely-related Test Statistic are distributed according to the t Distribution.

The t Distribution always has a mean of zero and a standard error equal to one. The t Distribution varies only in its shape. The shape of a specific t Distribution curve is determined by only one parameter: its degrees of freedom. For a one-sample test this equals n – 1; for this unpooled two-sample test it is given by the Unpooled Degrees of Freedom formula shown above, which produced df = 28.

The means of similar, random samples taken from a normal population are distributed according to the t Distribution. This means that the distribution of a large number of means of such samples taken from a normal population will have the same shape as a t Distribution with the appropriate degrees of freedom.

The sample means and the Test Statistic are both distributed according to the t Distribution if each sample or population is shown to be normally distributed. This step will map the distributed variable, x_bar_{1}-x_bar_{2}, to a t Distribution curve with 28 degrees of freedom.

The t Distribution is usually presented in its finalized form with standardized values of a mean that equals zero and a standard error that equals one. The horizontal axis is given in units of Standard Errors and the distributed variable is the t Value (the Test Statistic) as follows in this Excel-generated graph:

*(Click On Image To See a Larger Version)*

A non-standardized t Distribution curve would simply have its horizontal axis given in units of the measure used to take the samples. The distributed variable would be the difference between the sample means, x_bar_{1}-x_bar_{2}.

The variable x_bar_{1}-x_bar_{2} is distributed according to the t Distribution. Mapping this distributed variable to a t Distribution curve is shown as follows in this Excel-generated graph:

*(Click On Image To See a Larger Version)*

This non-standardized t Distribution curve is constructed from the following parameters:

Mean = 0, which is the constant taken from the Null Hypothesis

Standard Error = 6.236

Degrees of Freedom = 28

Distributed Variable = x_bar_{1}-x_bar_{2}

### Step 3 – Map the Regions of Acceptance and Rejection

The goal of a hypothesis test is to determine whether to reject or fail to reject the Null Hypothesis at a given level of certainty. If the two things being compared are far enough apart from each other, the Null Hypothesis (which states that the two things are not different) can be rejected. In this case we are trying to show graphically how different x_bar_{1} is from x_bar_{2} by showing how different x_bar_{1}-x_bar_{2} (4.31) is from zero.

The non-standardized t Distribution curve can be divided up into two types of regions: the Region of Acceptance and the Region of Rejection. A boundary between a Region of Acceptance and a Region of Rejection is called a Critical Value.

If the difference between the sample means, x_bar_{1}-x_bar_{2} (4.31), falls into a Region of Rejection, the Null Hypothesis is rejected. If the difference between the sample means, x_bar_{1}-x_bar_{2} (4.31), falls into a Region of Acceptance, the Null Hypothesis is not rejected.

The total size of the Region of Rejection is equal to Alpha. In this case Alpha, α, is equal to 0.05. This means that the Region of Rejection will take up 5 percent of the total area under this t distribution curve.

The location of this 5 percent Alpha (Region of Rejection) depends on the Alternative Hypothesis. The operator in the Alternative Hypothesis indicates whether the hypothesis test is two-tailed or one-tailed and, if one-tailed, which outer tail holds the Region of Rejection. The Alternative Hypothesis is as follows:

H_{1}: x_bar_{1}-x_bar_{2} ≠ 0

A “not equal” operator indicates that this will be a two-tailed test. This means that the Region of Rejection is split between both outer tails.

The boundaries between Regions of Acceptance and Regions of Rejection are called Critical Values. The locations of these Critical Values need to be calculated.

__Calculate the Critical Values__

**Two-Tailed Critical Values**

Critical Values = Mean ± (Number of Standard Errors from Mean to Region of Rejection) * SE

Critical Values = Mean ± T.INV(1-α/2,df) * SE

Critical Values = 0 ± T.INV(0.975, 28) * 6.236

Critical Values = 0 ± 12.77

Critical Values = -12.77 and +12.77

The Region of Rejection therefore includes everything that is to the right of +12.77 and to the left of -12.77.

The distribution curve with the blue Region of Acceptance and the yellow Regions of Rejection is shown as follows in this Excel-generated graph:

*(Click On Image To See a Larger Version)*

If this were a one-tailed test, the Critical Values would be determined as follows:

**One-Tailed Critical Value**

The Region of Rejection would be in the right tail because Sample 1 has the higher mean.

Critical Value = Mean + (Number of Standard Errors from Mean to Region of Rejection) * SE

Critical Value = Mean + T.INV(1-α,df) * SE

Critical Value = 0 + T.INV(0.95, 28) * 6.236

Critical Value = 0 + 10.61

Critical Value = +10.61

The Critical Values for the two-tailed test are farther from the mean than the Critical Value for one-tailed test. This means that a two-tailed test is more stringent than a one-tailed test.
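Both sets of Critical Values can be cross-checked outside Excel. This is a minimal Python sketch, assuming the scipy library is available; `scipy.stats.t.ppf` plays the same role as Excel's T.INV (both return the t value below which the given fraction of the t Distribution lies):

```python
from scipy.stats import t

se, df, alpha = 6.236, 28, 0.05          # Standard Error, degrees of freedom, Alpha

two_tailed_cv = t.ppf(1 - alpha/2, df) * se   # approximately 12.77
one_tailed_cv = t.ppf(1 - alpha, df) * se     # approximately 10.61

print(round(two_tailed_cv, 2), round(one_tailed_cv, 2))
```

As the article states, the two-tailed Critical Values land farther from the mean than the one-tailed Critical Value.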

### Step 4 – Determine Whether to Reject Null Hypothesis

The object of a hypothesis test is to determine whether to reject or fail to reject the Null Hypothesis. There are three equivalent tests that determine this. Only one of these tests needs to be performed because all three provide equivalent information. The three tests are as follows:

**1) Compare the Sample Mean Difference, x_bar_{1}-x_bar_{2}, With the Critical Values**

Reject the Null Hypothesis if the sample mean, x_bar_{1}-x_bar_{2} = 4.31, falls into the Region of Rejection. Fail to Reject the Null Hypothesis if the sample mean, x_bar_{1}-x_bar_{2} = 4.31, falls into the Region of Acceptance.

Equivalently, reject the Null Hypothesis if the sample mean difference, x_bar_{1}-x_bar_{2}, is farther from the curve’s mean of 0 than the Critical Value. Fail to reject the Null Hypothesis if the sample mean difference, x_bar_{1}-x_bar_{2}, is closer to the curve’s mean of 0 than the Critical Value.

The Critical Values have been calculated to be -12.77 on the left and +12.77 on the right. x_bar_{1}-x_bar_{2} (4.31) is closer to the curve’s mean (0) than the right Critical Value (+12.77). The Null Hypothesis is therefore not rejected.

**2) Compare t Value With Critical t Value**

The t Value corresponds to the standardized value of the sample mean difference, x_bar_{1}-x_bar_{2} = 4.31. The t Value is the number of Standard Errors that x_bar_{1}-x_bar_{2} is from the curve’s mean of 0.

The Critical t Value is the number of Standard Errors that the Critical Value is from the curve’s mean.

Reject the Null Hypothesis if the t Value is farther from the standardized mean of zero than the Critical t Value.

Equivalently, fail to reject the Null Hypothesis if the t Value is closer to the standardized mean of zero than the Critical t Value.

The t Value, the Test Statistic in a t-Test, is the number of Standard Errors that x_bar_{1}-x_bar_{2} is from the mean. The Critical t Value is the number of Standard Errors that the Critical Value is from the mean. If the t Value is farther from zero than the Critical t Value, the Null Hypothesis can be rejected.

*(Click On Image To See a Larger Version)*

t Value (test statistic) = (x_bar_{1 }- x_bar_{2} - 0) / SE

t Value (test statistic) = (4.31)/6.236 = 0.69

Two-Tailed Critical t Values = ± T.INV(1-α/2,df)

Two-Tailed Critical t Values = ±T.INV(1- 0.05/2, 28) = ±2.048

Right Critical t Value = +2.048

This indicates that the right boundary of the Region of Acceptance is 2.048 Standard Errors to the right of the mean (mean = Constant = 0).

The t Value (0.69) is much closer to the mean (mean = Constant = 0) than the Critical t Value (+2.048) so the Null Hypothesis is accepted.
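This comparison of the t Value against the Critical t Value can be sketched in Python, assuming the scipy library is available (`t.ppf` again standing in for Excel's T.INV):

```python
from scipy.stats import t

se, df, alpha = 6.236, 28, 0.05
mean_diff = 4.31                       # x_bar1 - x_bar2

t_value = mean_diff / se               # approximately 0.69
critical_t = t.ppf(1 - alpha/2, df)    # approximately 2.048

# 0.69 is much closer to zero than 2.048, so the Null Hypothesis stands
print(round(t_value, 2), round(critical_t, 3))
```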

**3) Compare the p Value With Alpha**

The p Value is the percent of the curve that is beyond x_bar_{1}-x_bar_{2} (4.31). If the p Value is larger than Alpha/2 (since this is a two-tailed test), the Null Hypothesis is accepted. The p Value in this case is calculated by the following Excel formula:

p Value = T.DIST.RT(ABS(t Value), df) = T.DIST.RT(ABS(0.69), 28) = 0.247

The p Value (0.247) is much larger than Alpha/2 (0.025 – because this is one tail of a two-tailed test) and we therefore accept the Null Hypothesis. A graph below shows that the red p Value (the curve area beyond x_bar_{1}-x_bar_{2}) is much larger than the yellow Alpha, which is the 2.5 percent Region of Rejection in the outer right tail. This is shown in the following Excel-generated graph of this non-standardized t Distribution curve:

*(Click On Image To See a Larger Version)*
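The p Value calculation above has a direct Python equivalent, assuming the scipy library is available: `scipy.stats.t.sf` is the right-tail area of the t Distribution, which is what Excel's T.DIST.RT returns.

```python
from scipy.stats import t

df, alpha = 28, 0.05
t_value = 4.31 / 6.236              # approximately 0.69

# T.DIST.RT(ABS(t), df) in Excel = right-tail area = scipy's t.sf
p_value = t.sf(abs(t_value), df)    # close to the article's 0.247

print(round(p_value, 2))            # 0.25
print(p_value > alpha/2)            # True: fail to reject the Null Hypothesis
```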

It should be noted that if this t-Test were a one-tailed test, which is less stringent than a two-tailed test, the Null Hypothesis would still be accepted because:

1) The p Value (0.247) is still much larger than Alpha (0.05)

2) x_bar_{1}-x_bar_{2} (4.31) is still in the Region of Acceptance, which would now have its outer right boundary at 10.61 (mean + T.INV(**1-α**,df)*SE)

3) The t Value (0.69) would still be smaller than the Critical t Value, which would now be 1.70 (T.INV(**1-α**,df))


- Nonparametric Testing in Excel
- Mann-Whitney U Test in 12 Steps in Excel
- Wilcoxon Signed-Rank Test in 8 Steps in Excel
- Sign Test in Excel
- Friedman Test in 3 Steps in Excel
- Scheirer-Ray-Hare Test in Excel
- Welch's ANOVA Test in 8 Steps in Excel
- Brown-Forsythe F-Test in 4 Steps in Excel
- Levene's Test and Brown-Forsythe Variance Tests in Excel
- Chi-Square Independence Test in 7 Steps in Excel
- Chi-Square Goodness-of-Fit Tests in Excel
- Chi-Square Population Variance Test in Excel

- Post Hoc Testing in Excel
- Creating Interactive Graphs of Statistical Distributions in Excel
- Interactive Statistical Distribution Graph in Excel 2010 and Excel 2013
- Interactive Graph of the Normal Distribution in Excel 2010 and Excel 2013
- Interactive Graph of the Chi-Square Distribution in Excel 2010 and Excel 2013
- Interactive Graph of the t-Distribution in Excel 2010 and Excel 2013
- Interactive Graph of the t-Distribution’s PDF in Excel 2010 and Excel 2013
- Interactive Graph of the t-Distribution’s CDF in Excel 2010 and Excel 2013
- Interactive Graph of the Binomial Distribution in Excel 2010 and Excel 2013
- Interactive Graph of the Exponential Distribution in Excel 2010 and Excel 2013
- Interactive Graph of the Beta Distribution in Excel 2010 and Excel 2013
- Interactive Graph of the Gamma Distribution in Excel 2010 and Excel 2013
- Interactive Graph of the Poisson Distribution in Excel 2010 and Excel 2013

- Solving Problems With Other Distributions in Excel
- Solving Uniform Distribution Problems in Excel 2010 and Excel 2013
- Solving Multinomial Distribution Problems in Excel 2010 and Excel 2013
- Solving Exponential Distribution Problems in Excel 2010 and Excel 2013
- Solving Beta Distribution Problems in Excel 2010 and Excel 2013
- Solving Gamma Distribution Problems in Excel 2010 and Excel 2013
- Solving Poisson Distribution Problems in Excel 2010 and Excel 2013

- Optimization With Excel Solver
- Maximizing Lead Generation With Excel Solver
- Minimizing Cutting Stock Waste With Excel Solver
- Optimal Investment Selection With Excel Solver
- Minimizing the Total Cost of Shipping From Multiple Points To Multiple Points With Excel Solver
- Knapsack Loading Problem in Excel Solver – Optimizing the Loading of a Limited Compartment
- Optimizing a Bond Portfolio With Excel Solver
- Travelling Salesman Problem in Excel Solver – Finding the Shortest Path To Reach All Customers

- Analyzing Data With Pivot Tables
- SEO Functions in Excel
- Time Series Analysis in Excel
- VLOOKUP
