This is one of the following seven articles on Multiple Linear Regression in Excel:
Basics of Multiple Regression in Excel 2010 and Excel 2013
Complete Multiple Linear Regression Example in 6 Steps in Excel 2010 and Excel 2013
Multiple Linear Regression’s Required Residual Assumptions
Normality Testing of Residuals in Excel 2010 and Excel 2013
Evaluating the Excel Output of Multiple Regression
Estimating the Prediction Interval of Multiple Regression in Excel
Regression - How To Do Conjoint Analysis Using Dummy Variable Regression in Excel
Multiple Linear Regression’s Required Residual Assumptions
Linear regression has several required assumptions regarding the residuals. These assumptions are as follows:
1) Outliers have been removed.
2) The residuals must be independent of each other. They must not be correlated with each other.
3) The residuals should have a mean of approximately 0.
4) The residuals must have similar variances throughout all residual values.
5) The residuals must be normally-distributed.
6) The residuals must not be highly correlated with any of the independent (X) variables.
7) There must be enough data points to conduct normality testing of residuals.
Here is how to evaluate each of these assumptions in Excel.
Locating and Removing Outliers
In many cases a data point is considered to be an outlier if its residual value is more than three standard deviations from the mean of the residuals. Checking the checkbox next to Standardized Residuals in the regression dialogue box calculates the standardized value of each residual, which is the number of standard deviations that the residual is from the residual mean. Below once again is the Excel regression output showing the residuals and their distance in standard deviations from the residual mean.
Following are the standardized residuals of the current data set. None are larger in absolute value than 1.755 standard deviations from the residual mean.
(Click Image To See a Larger Version)
(Click Image To See a Larger Version)
The Excel output above shows that none of the residuals are more than 1.755 standard deviations from the residual mean. On that basis, no data points are considered outliers as a result of having excessively large residuals.
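As a check, a standardized residual can also be calculated manually by dividing each Residual by the standard deviation of all of the Residuals. Assuming, for illustration, that the Residuals are in cells DV11:DV30 (the same hypothetical range used for the Durbin-Watson calculation later in this article), the standardized value of the first Residual could be calculated with a formula similar to the following:
=DV11/STDEV.S(DV$11:DV$30)
Note that Excel’s regression tool scales its Standard Residuals slightly differently, so the values produced by this formula may not match Excel’s output exactly.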
Any outliers that have been removed should be documented and evaluated. Outliers more than three standard deviations from the mean are to be expected occasionally for normally-distributed data. If an outlier appears to have been generated by the normal process and not to be an aberration of the process, then perhaps it should not be removed. One item to check is whether a data entry error occurred when the data set was input. Another item to check is whether a measurement error occurred when that data point’s parameters were recorded.
If a data point is removed, the regression analysis has to be performed again on the new data set that does not include that data point.
Determining Whether Residuals Are Independent
This is the most important residual assumption that must be confirmed. If the residuals are not found to be independent, the regression is not considered to be valid.
If the residuals are independent of each other, a graph of the residuals will show no patterns. The residuals should be graphed across all values of the dependent variable. The Excel regression output produced individual graphs of the residuals across all values of each independent variable, but not across all values of the dependent variable. This graph is not part of Excel’s regression output and needs to be generated separately.
An Excel X-Y scatterplot graph of the Residuals plotted against all values of the dependent variable is shown as follows:
(Click Image To See a Larger Version)
Residuals that are not independent of each other will show patterns in a Residual graph. No patterns among the Residuals are evident in this Residual graph, so the required regression assumption of Residual independence is validated.
It is important to note that an upward or downward linear trend appearing in the Residuals probably indicates that an independent (X) variable is missing. The first indication that this might be occurring is a Residual mean that does not equal approximately zero.
Calculating Durbin-Watson Statistic To Determine If Autocorrelation Exists
An important part of evaluating whether the residuals are independent is to calculate the degree of autocorrelation that exists within the residuals. If the residuals are shown to have a high degree of correlation with each other, the residuals are not independent and the regression is not considered valid.
Autocorrelation often occurs with time-series or any other type of longitudinal data. Autocorrelation is evident when data values are influenced by the time interval between them. An example might be a graph of a person’s income. A person’s level of income in one year is likely influenced by that person’s income level in the previous year.
The degree of autocorrelation existing within a variable is measured by the Durbin-Watson statistic, d. The Durbin-Watson statistic can take values from 0 to 4. A value near 2 indicates that little to no autocorrelation exists among the residuals. This, along with no apparent patterns in the Residuals, would confirm the independence of the Residuals.
Values near 0 indicate perfect positive autocorrelation: successive values are similar to each other and appear to follow each other. Values near 4 indicate perfect negative autocorrelation: successive values are opposite of each other in an alternating pattern.
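In standard notation, with the Residuals in chronological order denoted e1, e2, …, en, the Durbin-Watson statistic is defined as follows:
d = [ (e2 – e1)² + (e3 – e2)² + … + (en – en-1)² ] / [ e1² + e2² + … + en² ]
The numerator is the sum of the squared differences between successive Residuals and the denominator is the sum of the squared Residuals, which is exactly what the SUMXMY2 and SUMSQ formulas below calculate.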
The data used in this example are not time-series data, but the Durbin-Watson statistic of the Residuals will be calculated in Excel to show how it is done. Before the Durbin-Watson statistic is calculated, the data should be sorted chronologically. The Durbin-Watson statistic for the Residuals would be calculated in Excel as follows:
(Click Image To See a Larger Version)
SUMXMY2(x_array,y_array) calculates the sum of the squares of the differences (x – y) between corresponding values in the two arrays.
SUMSQ(array) squares the values in the array and then sums those squares.
If the Residuals are in cells DV11:DV30, then the Excel formula to calculate the Durbin-Watson statistic for those Residuals is the following:
=SUMXMY2(DV12:DV30,DV11:DV29)/SUMSQ(DV11:DV30)
The Durbin-Watson statistic of 2.07 calculated here indicates that the Residuals have very little autocorrelation. The Residuals can be considered independent of each other because of the value of the Durbin-Watson statistic and the lack of apparent patterns in the scatterplot of the Residuals.
Determining if Residual Mean Equals Zero
The mean of the residuals is shown to be zero as follows:
(Click Image To See a Larger Version)
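Assuming, as in the Durbin-Watson calculation above, that the Residuals are in cells DV11:DV30, the Residual mean can be verified with a simple formula such as the following:
=AVERAGE(DV11:DV30)
When the regression model includes an intercept term, this mean should equal zero except for rounding error.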
Determining If Residual Variance Is Constant
If the Residuals have similar variances across all residual values, the Residuals are said to be homoscedastic. The property of having similar variance across all sample values or across different sample groups is known as homoscedasticity.
If the Residuals do not have similar variances across all residual values, the Residuals are said to be heteroscedastic. The property of having different variances across sample values or across different sample groups is known as heteroscedasticity. Linear regression requires that the Residuals be homoscedastic, i.e., have similar variances across all residual values.
The variance of the Residuals is the degree of spread among the Residual values. This can be observed on the Residual scatterplot graph. If the variance of the residuals changes as residual values increase, the spread between the values will visibly change on the Residual scatterplot graph. If Residual variance increases, the Residual values will appear to fan out along the graph. If Residual variance decreases, the Residual values will do the opposite; they will appear to clump together along the graph.
Here is the Residual graph again:
(Click Image To See a Larger Version)
The Residuals’ spread appears to be fairly consistent across all Residual values. This indicates that the Residuals are homoscedastic, i.e., have similar variance across all Residual values. There appears to be no fanning in or fanning out.
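In addition to this visual inspection, a rough numeric check (not part of Excel’s regression output) is to compare the variances of the Residuals in the two halves of the data. Assuming, for illustration, that the Residuals in cells DV11:DV30 have been sorted by predicted Y value, the ratio of the two half-sample variances could be calculated as follows:
=VAR.S(DV21:DV30)/VAR.S(DV11:DV20)
A ratio reasonably close to 1 is consistent with homoscedastic Residuals; a ratio far from 1 suggests that the Residual variance is changing.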
Slightly unequal variance in the Residuals is not usually a reason to discard an otherwise good model. One way to reduce unequal variance in the residuals is to reduce the interval between data points. Shorter intervals will have closer variances.
If the number of data points is too small, the residual spread will sometimes produce a cigar-shaped pattern.
Determining if Residuals Are Normally-Distributed
An important assumption of linear regression is that the Residuals be normally-distributed, so normality testing must be performed on the Residuals. The five normality tests that will be performed in the next blog article are as follows:
1) An Excel histogram of the Residuals will be created.
2) A normal probability plot of the Residuals will be created in Excel.
3) The Kolmogorov-Smirnov test for normality of Residuals will be performed in Excel.
4) The Anderson-Darling test for normality of Residuals will be performed in Excel.
5) The Shapiro-Wilk test for normality of Residuals will be performed in Excel.
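Before those formal tests are performed, a quick preliminary indication of normality (not one of the five tests above) can be obtained from the skewness and kurtosis of the Residuals. Assuming, as before, that the Residuals are in cells DV11:DV30, the following formulas could be used:
=SKEW(DV11:DV30)
=KURT(DV11:DV30)
Values of both statistics near zero are consistent with normally-distributed Residuals; large values in either direction suggest that the formal normality tests deserve close attention.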
Determining If Any Input Variables Are Too Highly Correlated With Residuals
To determine whether the Residuals have significant correlation with any other variables, an Excel correlation matrix can be created. An Excel correlation matrix will simultaneously calculate correlations between all variables. The Excel correlation matrix for all variables in this regression is shown as follows:
(Click Image To See a Larger Version)
The correlation matrix shows all of the correlations between each of the variables to be low. Correlation values range from -1 to +1, and values near zero indicate very low correlation. This correlation matrix was created by inserting the following information into the Excel correlation data analysis tool dialogue box:
(Click Image To See a Larger Version)
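An individual correlation can also be checked directly with the CORREL function. Assuming, for illustration, that the values of one independent variable are in cells DT11:DT30 and the Residuals are in cells DV11:DV30 (hypothetical ranges), the correlation between the two would be calculated as follows:
=CORREL(DT11:DT30,DV11:DV30)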
Determining If There Are Enough Data Points
Violations of important assumptions such as normality of the Residuals are difficult to detect if too few data points exist. 20 data points is sufficient; 10 data points is probably on the borderline of being too few. All of the normality tests become significantly more powerful (accurate) as the sample size increases from 15 to 20 data points. Normality of data is very difficult to assess accurately when only 10 data points are present.
All required regression assumptions concerning the Residuals have been met. The next step is to evaluate the remainder of the Excel regression output.