
# Binary Logistic Regression

## Overview

Binary logistic regression is a predictive technique that is applied when the dependent variable (y) is dichotomous (binary), i.e., there are only two possible outcomes. Binary logistic regression calculates the probability of the event designated as the positive event occurring.

Logistic regression is widely used in many fields. Engineers often use logistic regression to predict the probability of a system or part failing. Marketers use logistic regression to calculate the probability of a prospective customer making a purchase or a subscriber cancelling a subscription. Bankers might use logistic regression to calculate the probability of a homeowner defaulting on a mortgage. Doctors use logistic regression to calculate the probability of a patient surviving trauma or serious disease.

Binary logistic regression is sometimes called __Dummy Dependent Variable Regression__ because its binary dependent variable resembles a dummy variable. Dummy variables are binary variables that must be substituted when categorical independent variables are used as inputs to multiple linear regression, which requires that independent variables be continuous or binary. Categorical independent variables must therefore be converted to binary dummy variables before they can serve as inputs for multiple linear regression. Another chapter in this book covers this type of dummy-variable regression in detail.

## The Goal of Binary Logistic Regression

The goal of binary logistic regression analysis is to create an equation, P(X), that most accurately calculates the probability of the occurrence of binary event X given the inputs X1, X2, …, Xk.

Variable Y describes the observed occurrence of event X. Y takes the value of 1 when event X actually occurred and the value of 0 when event X did not occur for a given set of inputs X1, X2, …, Xk.

P(X) should calculate a probability close to 1 as often as possible for any given set of inputs for which event X occurred (Y = 1). P(X) should also calculate a probability close to 0 as often as possible for any given set of inputs for which event X did not occur (Y = 0).

## Allowed Variable Types For Binary Logistic Regression

The __dependent variable__ of binary logistic regression is a categorical variable with two possible outcomes.

The __independent variables__ (the inputs, a.k.a. the predictor variables) can be any of the four variable types: nominal, ordinal, interval, and ratio.

__Nominal__ variables are categorical and are simply arbitrary labels whose order doesn’t matter.

__Ordinal__ variables are categorical variables whose order has meaning but whose distance between units is usually not measurable.

__Interval__ variables have a measurable distance between units and a zero point that is arbitrarily chosen. Fahrenheit and Celsius temperatures are interval data.

__Ratio__ variables have a measurable distance between units and a zero point that indicates that none of the variable is present. Absolute temperature is an example of ratio data.

## Logistic Regression Calculates the Probability of an Event Occurring

Logistic regression calculates the probability of the positive event (the event whose observed occurrence is designated by Y = 1) occurring for a given set of inputs X1, X2, …, Xk.

Binary logistic regression therefore calculates the following conditional probability:

Pr(Y=1 | X1, X2, …, Xk)

This is the probability that the actual observed output, Y, equals 1 given the inputs X1, X2, …, Xk.

## The Difference Between Linear Regression and Logistic Regression

Linear regression requires that the dependent variable (y) be continuous. The dependent variable of binary logistic regression is binary and is therefore not continuous. Logistic regression is a method for calculating a continuous probability for a discontinuous event. A brief description of how that continuous probability is created follows.

### The Relationship Between Probability and Odds

Event X is the event whose actual occurrence is designated by Y = 1. The probability of event X occurring is given as P(X). The odds of event X occurring are given as O(X). The “X” is somewhat of a strange variable name in P(X), O(X), and Event X because it is not related to the logistic regression inputs X1, X2, …, Xk.

The relationship between the probability of event X occurring and the odds of event X occurring is given as follows:

O(X) = P(X) / (1 – P(X))

For example, if the probability of event X occurring is 75 percent, the odds of event X occurring are 3-to-1: O(X) = 0.75 / (1 – 0.75) = 3.

The odds, O(X), of discontinuous, binary event X occurring can be expressed as a continuous variable by taking the natural log of the odds. A complicated derivation proving this will not be shown here.
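As a quick numeric check, here is a minimal Python sketch of the probability-odds relationship and the log of the odds (the function name is our own, not from the article):

```python
import math

def odds(p):
    """O(X) = P(X) / (1 - P(X)): convert a probability to odds."""
    return p / (1.0 - p)

# 75 percent probability corresponds to 3-to-1 odds:
print(odds(0.75))            # 3.0

# The natural log of the odds (the Logit) is a continuous quantity:
print(math.log(odds(0.75)))  # ln(3) ≈ 1.0986
```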

### The Logit – The Natural Log of the Odds

The natural log of the odds is called the Logit, L (pronounced *LOH-jit*), and is calculated as follows:

Given the following k inputs, X1, X2, …, Xk, and the following k + 1 constants, b0, b1, b2, …, bk, the Logit equals the following:

ln[O(X)] = Logit = L = b0 + b1X1 + b2X2 + … + bkXk

Since ln[O(X)] = Logit = L,

O(X) therefore equals e^L:

O(X) = e^L = e^(b0 + b1X1 + b2X2 + … + bkXk)

Since O(X) = P(X) / (1 – P(X)), simple algebra can be applied to define P(X) as follows:

P(X) = O(X) / (1 + O(X))

or

P(X) = e^L / (1 + e^L)

With algebraic manipulation, this can also be expressed as the following for occasions when this formula is simpler to work with:

P(X) = 1 / (1 + e^-L)

Keep in mind that P(X) is the conditional probability Pr(Y=1 | X1, X2, …, Xk).
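A short Python sketch can confirm that the two algebraic forms of P(X) agree (the helper `p_of_x` is a hypothetical illustration, not part of the article's Excel workbook):

```python
import math

def p_of_x(b, x):
    """P(X) = 1 / (1 + e^(-L)) where L = b0 + b1*x1 + ... + bk*xk."""
    L = b[0] + sum(bi * xi for bi, xi in zip(b[1:], x))
    return 1.0 / (1.0 + math.exp(-L))

# The two algebraic forms agree: e^L/(1 + e^L) == 1/(1 + e^-L)
L = 1.5
print(math.exp(L) / (1.0 + math.exp(L)))  # ≈ 0.8176
print(1.0 / (1.0 + math.exp(-L)))         # ≈ 0.8176

# With all coefficients zero, L = 0 and P(X) = 0.5:
print(p_of_x([0.0, 0.0], [2.0]))          # 0.5
```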

_{k}

### Showing How Closely The Predicted Value Matches The Actual Value

P(X) is the estimated probability of Event X occurring. Variable Y records the actual occurrence of Event X. The goal of binary logistic regression analysis is to create an equation P(X) that most closely matches Y for each set of inputs X1, X2, …, Xk.

P(X) should calculate a probability close to 1 as often as possible for any given set of inputs for which event X occurred (Y = 1). P(X) should also calculate a probability close to 0 as often as possible for any given set of inputs for which event X did not occur (Y = 0).

The conditional probability Pr(Yi=yi | X1i, X2i, …, Xki) is the probability that the predicted dependent variable yi equals the actual observed value Yi given the values of the independent variable inputs X1i, X2i, …, Xki.

The conditional probability Pr(Yi=yi | X1i, X2i, …, Xki) will be abbreviated Pr(Y=y|X) from here forward for convenience.

The conditional probability Pr(Y=y|X) is calculated by the following formula:

Pr(Y=y|X) = P(X)^Y * [1-P(X)]^(1-Y)

Pr(Y=y|X) can take values between 0 and 1 just like any other probability.

Pr(Y=y|X) = P(X)^Y * [1-P(X)]^(1-Y) is maximized (approaches 1) when P(X) matches Y.

In other words, Pr(Y=y|X) is maximized (approaches 1) when either of the following occurs:

1) Y = 1 and P(X) approaches 1

2) Y = 0 and P(X) approaches 0

To demonstrate this, here are several scenarios. In the first two scenarios Y and P(X) are nearly the same and Pr(Y=y|X) is maximized (approaches 1):

Y = 1 and P(X) = 0.995:

Pr(Y=y|X) = P(X)^Y * [1-P(X)]^(1-Y)

Pr(Y=y|X) = 0.995^1 * [1-0.995]^(1-1) = 0.995

Y = 0 and P(X) = 0.005:

Pr(Y=y|X) = P(X)^Y * [1-P(X)]^(1-Y)

Pr(Y=y|X) = 0.005^0 * [1-0.005]^(1-0) = 0.995

In the third scenario Y and P(X) are very different and Pr(Y=y|X) is not maximized (does not approach 1):

Y = 0 and P(X) = 0.45:

Pr(Y=y|X) = P(X)^Y * [1-P(X)]^(1-Y)

Pr(Y=y|X) = 0.45^0 * [1-0.45]^(1-0) = 0.55
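The three scenarios can be verified with a few lines of Python (the helper name `pr_match` is ours, chosen for illustration):

```python
def pr_match(p, y):
    """Pr(Y=y|X) = P(X)^Y * [1 - P(X)]^(1 - Y) for one record."""
    return (p ** y) * ((1.0 - p) ** (1 - y))

print(pr_match(0.995, 1))  # 0.995 -- Y = 1 and P(X) near 1
print(pr_match(0.005, 0))  # 0.995 -- Y = 0 and P(X) near 0
print(pr_match(0.45, 0))   # 0.55  -- Y = 0 but P(X) far from 0
```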

### LE – The Likelihood Estimation

As explained, the following equation is maximized (approaches 1) when P(X) matches Y:

Pr(Y=y|X) = P(X)^Y * [1-P(X)]^(1-Y)

If that conditional probability were calculated for each data record (each set of inputs and the associated output, Y), the __product__ of all of these conditional probabilities is called the Likelihood Estimation, LE. The Likelihood Estimation is given by the following formula:

Likelihood Estimation = LE = ∏ Pr(Yi=yi | Xi)

LE = ∏ P(Xi)^Yi * [1-P(Xi)]^(1-Yi)

In simple language, the LE is equal to the product of all P(X)^Y * [1-P(X)]^(1-Y) terms calculated for each of the data records.
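The LE as a running product can be sketched over a few invented records (the probabilities and outcomes below are illustrative, not from the article's data):

```python
def likelihood_estimation(probs, ys):
    """LE = product over all records of P(Xi)^Yi * [1 - P(Xi)]^(1 - Yi)."""
    le = 1.0
    for p, y in zip(probs, ys):
        le *= (p ** y) * ((1.0 - p) ** (1 - y))
    return le

# Three records: predicted probabilities and actual outcomes
probs = [0.9, 0.2, 0.8]
ys = [1, 0, 1]
print(likelihood_estimation(probs, ys))  # 0.9 * 0.8 * 0.8 ≈ 0.576
```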

### MLE – The Maximum Likelihood Estimation

The goal of binary logistic regression analysis is to create an equation P(X) that most accurately calculates the probability of the occurrence of binary event X given the inputs X1, X2, …, Xk:

P(X) = e^L / (1 + e^L)

Logit = L = b0 + b1X1 + b2X2 + … + bkXk

The highest possible value of the Likelihood Estimation, LE, is called the Maximum Likelihood Estimation, the MLE. The specific P(X) equation that maximizes the Likelihood Estimation, LE, to produce the Maximum Likelihood Estimation, the MLE, is the most accurate predictive equation.

The goal is therefore to determine the values of the constants b0, b1, b2, …, bk that create an equation P(X) that maximizes the LE to create the MLE.

### LL – The Log-Likelihood Function

The Likelihood Estimation, LE, has been given by the following formula:

LE = ∏ P(Xi)^Yi * [1-P(Xi)]^(1-Yi)

Taking the natural logarithm, ln(), of both sides of that equation creates LL, the Log-Likelihood Function. The formula for the Log-Likelihood Function is as follows:

ln[LE] = LL = ln[∏ P(Xi)^Yi * [1-P(Xi)]^(1-Yi)]

LL = ∑ Yi*ln[P(Xi)] + (1 – Yi)*ln[1-P(Xi)]

This is due to the following property of logarithms:

ln(a^b * c^d) = b*ln(a) + d*ln(c)
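A quick Python check confirms that the sum form of LL equals the natural log of the product form of LE (toy values again, invented for illustration):

```python
import math

def log_likelihood(probs, ys):
    """LL = sum over records of Yi*ln[P(Xi)] + (1 - Yi)*ln[1 - P(Xi)]."""
    return sum(y * math.log(p) + (1 - y) * math.log(1.0 - p)
               for p, y in zip(probs, ys))

probs = [0.9, 0.2, 0.8]
ys = [1, 0, 1]

# Product form of the likelihood:
le = 1.0
for p, y in zip(probs, ys):
    le *= (p ** y) * ((1.0 - p) ** (1 - y))

print(log_likelihood(probs, ys))  # ≈ -0.5516
print(math.log(le))               # same value: LL = ln(LE)
```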

### MLL – The Maximum Log-Likelihood Function

It is often more convenient to work with the logarithm of a number than with the number itself. That is the case here. Each LE term, P(Xi)^Yi * [1-P(Xi)]^(1-Yi), is between zero and one. The MLE is equal to the maximum possible ∏ P(Xi)^Yi * [1-P(Xi)]^(1-Yi). The product of a large number of such terms, e.g., 1,000 of them, each between zero and one, would produce an unwieldy small number.

A better solution is to maximize the natural log of the LE. Maximizing the log of the LE involves calculating the __sum__ of terms and not the product. Maximizing the sum of small terms is much more convenient than maximizing the product of small terms.

The Log-Likelihood Function, LL, is given as follows:

LL = ∑ Yi*ln[P(Xi)] + (1 – Yi)*ln[1-P(Xi)]

The Maximum Log-Likelihood Function, MLL, is the maximum possible value of LL.

The LE is maximized when its natural log, the LL, is maximized, since the logarithm is a monotonically increasing function. Two variables are monotonic if they either always move in the same direction or always move in opposite directions. Two variables are monotonically increasing if one variable always increases when the other increases. Variables X and ln(X) are monotonically increasing because ln(X) always increases when X increases. The maximum value of X will therefore produce the maximum value of ln(X) and vice versa.

The parameters that produce the MLE (the Maximum Likelihood Estimation) also produce the MLL (the Maximum Log-Likelihood Function). In other words, the values of the constants b0, b1, b2, …, bk that create an equation P(X) that maximizes the LE to create the MLE are the same constants that maximize the LL to produce the MLL.

## Using the Excel Solver To Calculate the MLL and the Optimal P(X)

The coefficients b0, b1, b2, …, bk that produce the MLL are the same coefficients that produce the most accurate predictive equation P(X). The ultimate goal of binary logistic regression is to produce the most accurate predictive equation P(X). The Excel Solver is a quick and easy way to calculate the values of the coefficients b0, b1, b2, …, bk that produce the MLL, the Maximum Log-Likelihood Function.

Working step-by-step through the example in the following blog article will provide clarity to what has just been covered in this article.
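The Solver setup itself is left to the follow-on article; as a language-neutral illustration of the same optimization, here is a hedged Python sketch that maximizes LL by simple gradient ascent on invented toy data (all names and values here are our own, not from the workbook):

```python
import math

# Invented toy data: one input X1 per record and the observed outcome Y.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [0, 0, 1, 0, 1, 1]

def log_likelihood(b0, b1):
    """LL = sum over records of Yi*ln[P(Xi)] + (1 - Yi)*ln[1 - P(Xi)]."""
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        ll += y * math.log(p) + (1 - y) * math.log(1.0 - p)
    return ll

# Gradient ascent on LL -- the same job the Excel Solver performs numerically.
b0, b1 = 0.0, 0.0
rate = 0.01
for _ in range(20000):
    g0 = g1 = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += y - p          # d(LL)/d(b0)
        g1 += (y - p) * x    # d(LL)/d(b1)
    b0 += rate * g0
    b1 += rate * g1

print(b0, b1)                  # fitted coefficients
print(log_likelihood(b0, b1))  # higher (closer to 0) than LL at b0 = b1 = 0
```

The fitted b1 is positive because the outcome becomes more likely as X1 grows in this toy data; the Solver's GRG Nonlinear engine reaches the same kind of maximum by a different numeric route.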

