Tuesday, May 27, 2014

Interpret Excel Output of Multiple Regression

This is one of the following seven articles on Multiple Linear Regression in Excel:

Basics of Multiple Regression in Excel 2010 and Excel 2013

Complete Multiple Linear Regression Example in 6 Steps in Excel 2010 and Excel 2013

Multiple Linear Regression’s Required Residual Assumptions

Normality Testing of Residuals in Excel 2010 and Excel 2013

Evaluating the Excel Output of Multiple Regression

Estimating the Prediction Interval of Multiple Regression in Excel

Regression - How To Do Conjoint Analysis Using Dummy Variable Regression in Excel

Step 7 – Evaluate the Excel Regression Output

The Excel regression output that will now be evaluated is as follows:

The most important individual parts of the Excel regression output are interpreted as follows:

Regression Equation

The regression equation is shown to be the following:

Yi = b0 + b1 * X1i + b2 * X2i

Annual Power Consumption (kW) = 37,123,164 + 10.234 (Number of Production Machines X 1,000) + 3.573 (New Employees Added in Last 5 Years X 1,000)

Note that the scaling of the independent variables in Step 2 ensures that the calculated coefficients in the regression equation are of reasonable size (roughly between 1 and 10).

For example, if a company had 10,000 production machines and added 500 new employees in the last 5 years, the company’s annual power consumption would be predicted as follows:

Annual Power Consumption (kW) = 37,123,164 + 10.234 (Number of Production Machines X 1,000) + 3.573 (New Employees Added in Last 5 Years X 1,000)

Annual Power Consumption (kW) = 37,123,164 + 10.234 (10,000 X 1,000) + 3.573 (500 X 1,000)

Annual Power Consumption = 49,143,690 kW

It is very important to note that a regression equation should never be extrapolated outside the range of the original data set used to create the regression equation. The inputs for a regression prediction should not be outside of the following ranges of the original data set:

Number of machines: 442 to 28,345

New employees added in last 5 years: -1,460 to 7,030

A simple example to illustrate why a regression line should never be extrapolated is as follows: Imagine that the height of a child was recorded every six months from ages one to seventeen. Most people stop growing in height at approximately age seventeen. If a regression line was created from that data and then extrapolated to predict that person’s height at age 50, the regression equation might predict that the person would be fifteen feet tall. Conditions are often very different outside the range of the original data set.

Extrapolation of a regression equation beyond the range of the original input data is one of the most common statistical mistakes made.
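The prediction step and the extrapolation guard described above can be sketched outside of Excel as follows. This is an illustrative sketch, not part of the Excel workbook: the function name and error handling are my own, while the intercept, coefficients, scaling, and input ranges are the ones quoted above.

```python
# Illustrative sketch: applying the fitted regression equation while refusing
# to extrapolate outside the ranges of the original data set.

# Intercept and coefficients as reported in the Excel regression output above.
INTERCEPT = 37_123_164
COEF_MACHINES = 10.234    # per (Number of Production Machines x 1,000)
COEF_EMPLOYEES = 3.573    # per (New Employees Added in Last 5 Years x 1,000)

# Ranges of the original data set, from the article.
MACHINES_RANGE = (442, 28_345)
EMPLOYEES_RANGE = (-1_460, 7_030)

def predict_power_kw(machines, new_employees):
    """Predict annual power consumption (kW); refuse to extrapolate."""
    if not (MACHINES_RANGE[0] <= machines <= MACHINES_RANGE[1]):
        raise ValueError("machines outside the range of the original data")
    if not (EMPLOYEES_RANGE[0] <= new_employees <= EMPLOYEES_RANGE[1]):
        raise ValueError("new employees outside the range of the original data")
    return (INTERCEPT
            + COEF_MACHINES * (machines * 1_000)
            + COEF_EMPLOYEES * (new_employees * 1_000))
```

A call such as `predict_power_kw(50_000, 500)` raises an error rather than silently extrapolating, which is exactly the discipline the child-height example argues for.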

R Square – The Equation’s Overall Predictive Power

R Square tells how closely the Regression Equation approximates the data. It is the percentage of the variance of the output variable that is explained by the input variables; the remainder of the output’s variance is unexplained. We would like to see a value of at least 0.6 or 0.7. R Square here is a relatively high 0.963, indicating that 96.3 percent of the total variance in the output variable (annual power consumption) is explained by the variance of the input variables (number of production machines and number of new employees added in the last five years).

Adjusted R Square is quoted more often than R Square because it is more conservative: it increases only when a newly added independent variable improves the equation’s predictive ability. When adding independent variables to the regression equation, add them one at a time and check whether Adjusted R Square goes up with each addition. The value of Adjusted R Square here is 0.959.
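The two statistics above follow directly from the sums of squares. A minimal sketch of both definitions, using made-up observed and predicted values rather than the article’s power-consumption data:

```python
# Sketch: R Square and Adjusted R Square computed from observed values (y)
# and predicted values (y_hat), matching the standard definitions Excel uses.

def r_square(y, y_hat):
    """1 minus the ratio of unexplained to total variation."""
    mean_y = sum(y) / len(y)
    ss_total = sum((yi - mean_y) ** 2 for yi in y)               # total variation
    ss_resid = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))   # unexplained
    return 1 - ss_resid / ss_total

def adjusted_r_square(y, y_hat, k):
    """k = number of independent variables; penalizes added variables."""
    n = len(y)
    r2 = r_square(y, y_hat)
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)
```

Because of the (n − 1)/(n − k − 1) penalty, Adjusted R Square is never larger than R Square, which is why 0.959 sits just below the 0.963 reported here.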

Significance of F – Overall p Value and Validity Measure

The Significance of F is the overall p Value of the regression. If this p Value is nearly zero, there is almost no chance that the Regression Equation’s apparent fit is random. A very small Significance of F is therefore very strong evidence of the validity of the overall Regression Equation.

To be more specific, this p Value (Significance of F) indicates whether to reject the overall Null Hypothesis of this regression analysis. The overall Null Hypothesis states that all coefficients of the independent variables equal zero. In other words, for this multiple regression equation:

Y = b0 + b1X1 + b2X2 + … + bkXk

The Null Hypothesis for multiple regression states that the coefficients b1, b2, … , bk all equal zero. The Y intercept, b0, is not included in this Null Hypothesis.

For this simple regression equation:

Y = b0 + b1X

The Null Hypothesis for simple regression states that the Coefficient b1 equals zero. The Y intercept, b0, is not included in this Null Hypothesis. Coefficient b1 is the slope of the regression line in simple regression.

In this case, the p Value (Significance of F) is extremely low (6.726657E-13), so we have very strong evidence that this is a valid regression equation. There is almost no probability that the relationship shown to exist between the dependent and independent variables (the nonzero values of coefficients b1, b2, … , bk) was obtained merely by chance.

This low p Value (or corresponding high F Value) indicates that there is enough evidence to reject the Null Hypothesis of this regression analysis.

The 95 percent Level of Confidence is usually required to reject the Null Hypothesis. This translates to a 5 percent Level of Significance. The Null Hypothesis is rejected if the p Value (Significance of F) is less than 0.05. If the Null Hypothesis is rejected, the regression output’s finding that the coefficients b1, b2, … , bk do not all equal zero is deemed to be statistically significant.
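The F Value behind the Significance of F can be derived directly from R Square. A sketch of that relationship, assuming only the standard F-statistic formula for multiple regression (the sample sizes used below are hypothetical, not taken from the article’s data set):

```python
# Sketch: the F statistic that Excel's "Significance of F" p Value is based
# on. F compares explained variance per independent variable to unexplained
# variance per remaining degree of freedom; the Significance of F is then the
# right-tail probability of the F distribution with (k, n - k - 1) degrees of
# freedom.

def f_statistic(r2, n, k):
    """r2 = R Square, n = number of observations, k = independent variables."""
    explained_per_df = r2 / k                      # explained, per predictor
    unexplained_per_df = (1 - r2) / (n - k - 1)    # residual, per df
    return explained_per_df / unexplained_per_df
```

With the R Square of 0.963 reported here and k = 2, any plausible sample size produces a very large F, which is why the corresponding Significance of F is nearly zero.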

p Value of Intercept and Coefficients – Measure of Their Validity

The lower the p Value of the Y-Intercept or of a coefficient, the more likely that term is valid. The Intercept’s low p Value of 0.0003 indicates that there is only a 0.03 percent chance that this calculated value of the Intercept is a random occurrence.

The extremely low p value for the coefficient for the Number_of_Production_Machines indicates that there is almost no chance that this calculated value of this coefficient is a random occurrence.

The p Value for the Number_of_New_Employees_Added is relatively large. This coefficient cannot be considered statistically significant (reliable) at a 95 percent certainty level. A 95 percent certainty level is the equivalent of a Level of Significance (Alpha) equal to 0.05. The coefficient for the Number_of_New_Employees_Added would be considered statistically significant at a 0.05 Level of Significance if its p Value were less than 0.05. This is not the case because this p Value is shown to be 0.2432.

The coefficient for Number_of_Production_Machines can therefore be considered reliable, but the coefficient for Number_of_New_Employees_Added cannot.
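The decision rule applied to each term above can be stated in a few lines. This sketch uses the p Values quoted in this article; the exact p Value of the machines coefficient is not quoted (only described as extremely low), so a near-zero placeholder is assumed:

```python
# Sketch: the per-term significance test applied to the Excel output's
# p Values. ALPHA is the 5 percent Level of Significance (95 percent
# confidence) used throughout the article.

ALPHA = 0.05

p_values = {
    "Intercept": 0.0003,
    "Number_of_Production_Machines": 1e-10,   # placeholder: exact value not quoted, described as near zero
    "Number_of_New_Employees_Added": 0.2432,
}

# A term is statistically significant when its p Value falls below ALPHA.
significant = {term: p < ALPHA for term, p in p_values.items()}
```

Running this reproduces the article’s conclusion: the Intercept and the machines coefficient are significant, while the new-employees coefficient is not.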

The following blog article shows how to estimate the Prediction Interval for Multiple Linear Regression.
