
Statistical Inference 2

MLE, Information, Transformations and Multiple Parameters

Maximum Likelihood Estimation (MLE)

MLE provides the parameter value(s) under which the observed sample is the most likely among all possible samples.

θ̂_MLE is the maximum likelihood estimate if L(θ̂_MLE) ≥ L(θ) for all values of θ in the parameter space.

Note that MLE can refer to:

  • Maximum likelihood estimation
  • Maximum likelihood estimator
  • Maximum likelihood estimate

Example 1

Pregnancy success from artificial insemination

Let n = number of couples

y = total pregnancy attempts by all couples

θ = probability of individual success

Consider yᵢ = number of attempts required for couple i. For example, with a success probability of 0.15:

p(2) = (0.85)(0.15)

p(yᵢ) = (0.85)^(yᵢ − 1) (0.15)

In general:

p(yᵢ) = (1 − θ)^(yᵢ − 1) (θ)
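As a quick sanity check of the pmf above (a minimal sketch; the function name is an invention here, and θ = 0.15 is taken from the worked example):

```python
# Geometric pmf: couple i needs y_i attempts, i.e. y_i - 1
# failures (prob 1 - theta each) followed by one success (prob theta).
def p(y_i, theta):
    return (1 - theta) ** (y_i - 1) * theta

print(p(2, 0.15))  # matches (0.85)(0.15)
```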

Example 1

Sketch L(θ) and ℓ(θ) if 20 successfully pregnant couples took a total of 100 attempts
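The sketch itself isn't reproduced in the transcript; as a stand-in, here is a minimal numeric version, assuming the geometric likelihood above, which for 20 couples and 100 total attempts gives L(θ) = θ²⁰(1 − θ)⁸⁰:

```python
import math

n, y = 20, 100  # 20 couples, 100 total attempts

def log_lik(theta):
    # l(theta) = n*log(theta) + (y - n)*log(1 - theta)
    return n * math.log(theta) + (y - n) * math.log(1 - theta)

# Evaluate on a grid in place of a hand sketch; L(theta) = exp(l(theta))
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=log_lik)
print(best)  # the peak sits at theta = n / y = 0.2
```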

Score equation

To find the MLE, we differentiate the log-likelihood function and set the derivative equal to 0.

This is called the score equation
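For the pregnancy example (geometric data, n couples, y total attempts), the score equation can be written out explicitly; this derivation is standard, and is an added step not shown in the transcript:

```latex
\ell(\theta) = n\log\theta + (y - n)\log(1 - \theta)

U(\theta) = \frac{d\ell}{d\theta}
          = \frac{n}{\theta} - \frac{y - n}{1 - \theta} = 0
\quad\Longrightarrow\quad
\hat{\theta} = \frac{n}{y}
```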

Information

Score and Information

Our confidence in the MLE is quantified by the "pointedness" (curvature) of the log-likelihood at its peak.

This is called the observed information: I(θ) = −ℓ″(θ).

Taking the expected value gives us the expected information.

Its inverse gives us the (large-sample) variance of the estimator!

Pregnancy success example

Find the MLE where n=20, y=100 and calculate a 95% confidence interval

Example

Var(θ̂) ≈ I(θ̂)⁻¹

95% CI: θ̂ ± 1.96 √(I(θ̂)⁻¹)
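Numerically, for n = 20 and y = 100 (a sketch; the observed-information formula comes from differentiating the geometric log-likelihood twice, a step the transcript's images omit):

```python
import math

n, y = 20, 100
theta_hat = n / y  # solves the score equation: 0.2

# Observed information I(theta) = -l''(theta)
#                               = n/theta^2 + (y - n)/(1 - theta)^2
info = n / theta_hat**2 + (y - n) / (1 - theta_hat) ** 2  # 625
se = math.sqrt(1 / info)  # 0.04
ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)
print(theta_hat, se, ci)  # approx 0.2, 0.04, (0.1216, 0.2784)
```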

Properties of MLE

Explicit vs Implicit MLE

  • If it is possible to solve the score equation to get an expression for the MLE, the MLE is explicit.
  • If numerical algorithms (i.e., software) are required to compute the MLE, the MLE is implicit.

Biasedness and Consistency

  • The MLE is not necessarily an unbiased estimator.
  • The MLE is consistent.
  • The asymptotic distribution is normal (good for CIs!)
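The consistency claim can be illustrated by simulation (an illustrative sketch, not from the slides; θ = 0.2 echoes the pregnancy example):

```python
import random

random.seed(1)
true_theta = 0.2

def draw_attempts(theta):
    # One simulated couple: count attempts until the first success
    attempts = 1
    while random.random() >= theta:
        attempts += 1
    return attempts

estimates = {}
for n in (10, 100, 10000):
    total = sum(draw_attempts(true_theta) for _ in range(n))
    estimates[n] = n / total  # MLE: couples / total attempts
print(estimates)  # the estimates settle near 0.2 as n grows
```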

Properties of MLE


Asymptotic efficiency

  • The MLE is asymptotically efficient (i.e., it has the smallest possible asymptotic variance of all consistent estimators).
  • The MLE is optimal in all but a handful of unusual situations.

Invariance

Parameter transformations

Consider the log-odds transformation of population prevalence θ: ψ = log(θ / (1 − θ)).

If the MLE for θ is θ̂:

Parameter transformations

Then, by invariance, the MLE for the odds θ / (1 − θ) is θ̂ / (1 − θ̂).

If the std err for θ̂ (in large samples) is se(θ̂),

then, by the delta method, the std err for the odds (in large samples) is se(θ̂) / (1 − θ̂)².
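As a worked illustration (using θ̂ = n/y = 0.2 and se(θ̂) = 0.04, the values implied by the earlier pregnancy example; the derivative step is the standard delta method, not spelled out in the transcript):

```python
theta_hat = 0.2  # MLE from the pregnancy example (20/100)
se_theta = 0.04  # its large-sample standard error

# Invariance: the MLE of g(theta) is g(theta_hat)
odds_hat = theta_hat / (1 - theta_hat)  # 0.25

# Delta method: se(g(theta_hat)) ~ |g'(theta_hat)| * se(theta_hat),
# with d/dtheta [theta / (1 - theta)] = 1 / (1 - theta)^2
se_odds = se_theta / (1 - theta_hat) ** 2  # 0.0625
print(odds_hat, se_odds)
```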

Multiple parameters

MLE distribution:

θ̂ ∼ N(θ, I(θ)⁻¹) when n is large

Multiple parameters

θ̂ solves the score equations U(θ̂) = 0,

where U(θ) = ∂ℓ(θ)/∂θ is the vector of partial derivatives of the log-likelihood.

Example 2

Systolic blood pressure in pregnancy

A sample of 5 pregnant women have their SBP taken; SBP is assumed to be normally distributed.

Sample: {135,123,120,102,110}

Find the maximum likelihood estimate for μ and σ:

Example 2

μ̂_MLE = 118

σ̂_MLE = 11.30

Sample: {135,123,120,102,110}

Deriving MLE for a Normal Distribution

First find the log-likelihood ℓ(μ, σ).

Now find ∂ℓ/∂μ and ∂ℓ/∂σ, and set them equal to 0.

Calculations I
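The slide's calculations aren't reproduced in the transcript; the standard derivation, consistent with the estimates quoted earlier, runs as follows:

```latex
\ell(\mu, \sigma)
  = -n\log\sigma - \tfrac{n}{2}\log(2\pi)
    - \frac{1}{2\sigma^{2}} \sum_{i=1}^{n} (y_i - \mu)^{2}

\frac{\partial \ell}{\partial \mu}
  = \frac{1}{\sigma^{2}} \sum_{i=1}^{n} (y_i - \mu) = 0
  \;\Longrightarrow\; \hat{\mu} = \bar{y}

\frac{\partial \ell}{\partial \sigma}
  = -\frac{n}{\sigma} + \frac{1}{\sigma^{3}} \sum_{i=1}^{n} (y_i - \mu)^{2} = 0
  \;\Longrightarrow\; \hat{\sigma}^{2} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{\mu})^{2}
```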

Deriving MLE for a Normal Distribution

Sample: {135,123,120,102,110}

Calculations II
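The numeric step can be sketched as follows (reproducing the quoted estimates; note the MLE of σ divides by n, not n − 1):

```python
import math

sample = [135, 123, 120, 102, 110]
n = len(sample)

mu_hat = sum(sample) / n  # sample mean: 590 / 5 = 118.0

# MLE of sigma: divide the sum of squared deviations by n (not n - 1)
sigma_hat = math.sqrt(sum((y - mu_hat) ** 2 for y in sample) / n)
print(mu_hat, round(sigma_hat, 2))  # 118.0 11.3
```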

Other estimation methods

  • Ordinary least squares (OLS)
  • Method of moments

MLE advantages: it uses the full distribution of Y and is (asymptotically) the most efficient estimator possible.

Other estimation methods

MLE disadvantages: less convenient to calculate in certain circumstances (i.e., when the MLE is implicit), and it requires distributional assumptions.
