For those wishing to follow along with the R-based demo in class, click here for the companion R-script for this lecture.

Optimization

We can’t maximize a likelihood function without an optimization algorithm.

We can’t optimize a sampling or monitoring regime, as in the power analysis problem, without an optimization algorithm.

Clearly, we need optimization algorithms!! In addition, they provide an excellent example of just how essential computer algorithms are for modern data analysis.

You may not have built your own optimization algorithm before, but you’ve probably taken advantage of optimization algorithms operating behind the scenes. For example, if you have fitted a GLMM or a non-linear regression in R, you have exploited numerical optimization routines!

We will discuss optimization in the context of maximum likelihood estimation, and then we will discuss optimization and parameter estimation in a Bayesian context.

Let’s start with the most conceptually simple of all optimization algorithms – brute force!

NOTE: you won’t need to build your own optimization routines for this class- the code in this lecture is for demonstration purposes only!

Brute Force!

Just like we did for the two-dimensional likelihood surface, we could evaluate the likelihood at tiny intervals across a broad range of parameter space. Then we can identify the parameter set associated with the maximum likelihood across all evaluated parameter sets (and the range of plausible parameter estimates!).

Positives

  • Simple!! (conceptually very straightforward)
  • Sees the whole surface, so false peaks can’t fool us (guaranteed to find the global MLE, at least to within the grid resolution!)
  • Undeterred by discontinuities in the likelihood surface

Negatives

  • Speed: even slower and less efficient than a typical ecologist is willing to accept! Practically impossible for complex multi-dimensional problems (the “curse of dimensionality”; see the quick illustration below)
  • Resolution: we can only get the answer to within plus or minus the interval size.
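To get a rough feel for the curse of dimensionality, consider this back-of-the-envelope calculation (the grid size here is hypothetical, just for illustration):

# curse of dimensionality: a grid with 1000 increments per parameter
# requires 1000^d likelihood evaluations for d free parameters

1000^2     # two parameters: a million evaluations- tedious but feasible
1000^6     # six parameters: 10^18 evaluations- hopeless!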

Example dataset: Myxomatosis titer in rabbits

Let’s use Bolker’s myxomatosis example dataset (an example we’ll return to frequently!) to illustrate:

# Explore Bolker's myxomatosis example   -------------------------

library(emdbook)    # this is the package provided to support the textbook!
library(ggplot2)
library(ggthemes)

MyxDat <- MyxoTiter_sum         # load Bolker's example data
MyxDat$grade <- as.factor(MyxDat$grade)

ggplot(MyxDat,aes(day,titer)) + 
  geom_point(aes(col=grade))  +
  facet_wrap(vars(grade), scales = "free") +
  theme_clean()

Myx <- subset(MyxDat,grade==1)    # subset: select most virulent
head(Myx)
##   grade day titer
## 1     1   2 5.207
## 2     1   2 5.734
## 3     1   2 6.613
## 4     1   3 5.997
## 5     1   3 6.612
## 6     1   3 6.810

For this example, we’re modeling the distribution of measured titers (virus loads) for Australian rabbits. Bolker chose to use a Gamma distribution. Here is the empirical distribution:

hist(Myx$titer,freq=FALSE)    # distribution of virus loads

We need to estimate the gamma ‘rate’ and ‘shape’ parameters that best fit this empirical distribution. Here is one example of a Gamma fit to this distribution:

# Overlay a gamma distribution on the histogram -------------------

hist(Myx$titer,freq=FALSE)     # note the "freq=FALSE", which displays densities of observations, and therefore makes histograms comparable with probability density functions
curve(dgamma(x,shape=40,rate=6),add=T,col="red")

This is clearly not a great fit, but perhaps it is an okay starting point (optimization algorithms don’t require perfect starting points; they just need to be in the ballpark)…

Let’s build a likelihood function for this problem!

# Build gamma likelihood function  ---------------------

GammaLikelihoodFunction <- function(params){           # only one argument (params)- the data are hard-coded here (this is often the case with simple likelihood functions)
  -sum(dgamma(Myx$titer,shape=params['shape'],rate=params['rate'],log=T))     # use params and data to compute the negative log-likelihood 
}

params <- c(shape=40,rate=6) 
GammaLikelihoodFunction(params)    # test the function!
## [1] 38.74585

Now let’s optimize using ‘optim()’ like we did before, to find the MLE!

NOTE: “optim()” will throw some warnings here because it will try to find the data likelihood for certain impossible parameter combinations!

# Optimize using R's built-in "optim()" function: find the maximum likelihood estimate

MLE <- optim(params,GammaLikelihoodFunction)  

MLE$par
##     shape      rate 
## 49.608753  7.164569
MLE$value
## [1] 37.66714

We can ignore the warnings!
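If the warnings bother you, one option (a hedged sketch, not something we rely on in this lecture; the object name below is arbitrary) is to use a bounded optimization method so that “optim()” never proposes impossible (negative) parameter values:

# bound both parameters away from zero so dgamma() never sees a negative shape or rate

MLE_bounded <- optim(params,GammaLikelihoodFunction,method="L-BFGS-B",lower=c(0.001,0.001))
MLE_bounded$par    # should closely match the unconstrained answer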

Let’s visualize the fit of the MLE in this case…

# visualize the maximum likelihood fit

hist(Myx$titer,freq=FALSE)
curve(dgamma(x,shape=MLE$par["shape"],rate=MLE$par["rate"]),add=T,col="red")

Looks pretty good…

But as dangerous ecological statisticians we aren’t satisfied with using a “black box” like the “optim()” function; we need to understand what is going on behind the scenes. Let’s write our own optimizer!

We start with the conceptually simple, often computationally impossible, brute force method…

# BRUTE FORCE ALTERNATIVE    ------------------------------

# define 2-D parameter space!

shapevec <- seq(10,100,by=0.1)        # divide parameter space into tiny increments
ratevec <- seq(0.5,30,by=0.05)

# define the likelihood surface across this grid within parameter space


surface2D <- matrix(nrow=length(shapevec),ncol=length(ratevec))   # initialize storage variable

newparams <- params
for(i in 1:length(shapevec)){
  newparams['shape'] <- shapevec[i]
  for(j in 1:length(ratevec)){
    newparams['rate'] <- ratevec[j]
    surface2D[i,j] <- -1*GammaLikelihoodFunction(newparams)   # store the log-likelihood for every point in 2-D parameter space
  }
}

# Visualize the likelihood surface

image(x=shapevec,y=ratevec,z=surface2D,zlim=c(-250,-35),col=topo.colors(12))
contour(x=shapevec,y=ratevec,z=surface2D,levels=c(-30,-40,-80,-150),add=T)

Now what is the maximum likelihood estimate?

# Find the MLE (brute force)  ------------------------

ndx <- which(surface2D==max(surface2D),arr.ind=T)  # index of the max likelihood grid cell
shapevec[ndx[,1]]     
## [1] 49.5
ratevec[ndx[,2]]
## [1] 7.15
MLE$par  # compare with the answer from "optim()"
##     shape      rate 
## 49.608753  7.164569

Q: How would we compute the profile likelihood confidence intervals for the shape and rate parameters?
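Here is one possible approach, sketched using the brute-force grid we just computed (a hedged sketch- the profile could also be computed by re-optimizing over the other parameter at each value of the focal parameter): for each value of the focal parameter, take the best log-likelihood across all values of the other parameter, and keep the values that fall within qchisq(0.95,1)/2 (about 1.92) log-likelihood units of the maximum.

# sketch: approximate 95% profile likelihood CI for 'shape' from the brute-force grid

profile_shape <- apply(surface2D,1,max)           # best log-likelihood for each value of 'shape'
cutoff <- max(profile_shape) - qchisq(0.95,1)/2   # drop of ~1.92 log-likelihood units
range(shapevec[profile_shape>=cutoff])            # approximate 95% profile CI for 'shape'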

Derivative-based methods!

If we assume that the likelihood surface is smooth (differentiable) and has only one minimum, we can use very efficient derivative-based optimization algorithms.

In general, derivative-based methods look for the point in parameter space where the first derivative (slope) of the likelihood function is zero (a ‘root’ of its derivative). That is, at a peak or a valley bottom.

Let’s imagine we are interested in estimating the shape parameter, given a known rate parameter for a gamma distribution. To use derivative-based methods, let’s first build a function that estimates the slope of the likelihood function at any arbitrary point in parameter space:

# Derivative-based optimization methods   ------------------

# function for estimating the slope of the likelihood surface at any point in parameter space....

## NOTE: even here I'm using a coarse, brute force method for estimating the first and second derivative of the likelihood function

params <- MLE$par
SlopeFunc <- function(shape_guess,tiny=0.001){      
  params['shape'] <- shape_guess
  high <- GammaLikelihoodFunction(params+c(tiny,0))
  low <- GammaLikelihoodFunction(params-c(tiny,0))
  slope <- (high-low)/(tiny*2)
  return(slope)
}

SlopeFunc(shape_guess=30)    #try it!
## [1] -13.75925

Now let’s visualize this!

# Visualize the slope of the likelihood function at different points in parameter space

shapevec <- seq(10,100,by=0.1)   

# define the likelihood surface

surface1D <- numeric(length(shapevec))   # initialize storage variable

newparams <- params
for(i in 1:length(shapevec)){
  newparams['shape'] <- shapevec[i]
  surface1D[i] <- GammaLikelihoodFunction(newparams) 
}

plot(surface1D~shapevec,type="l")
point <- GammaLikelihoodFunction(c(shape=30,MLE$par['rate']))
slope <- SlopeFunc(shape_guess=30)
lines(c(20,40),c(point-slope*10,point+slope*10),col="red")

We also need a function to compute the second derivative, or the curvature (rate of change in the slope)…

# function for estimating the curvature of the likelihood function at any point in parameter space

params <- MLE$par
CurvatureFunc <- function(shape_guess,tiny=0.001){
  params['shape'] <- shape_guess
  high <- SlopeFunc(shape_guess+tiny)
  low <- SlopeFunc(shape_guess-tiny)
  curvature <- (high-low)/(tiny*2)   # how much the slope is changing in this region of the function
  return(curvature)
}

CurvatureFunc(shape_guess=30)   # try it!
## [1] 0.9151666

Okay, now we can implement a derivative-based optimization algorithm!

Essentially, we are trying to find the point where the derivative of the likelihood function is zero (i.e., the root of the derivative function!).
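As an aside: for a one-parameter problem like this one, we could even hand the job to a generic root-finder. Here is a hedged sketch using base R’s “uniroot()” function on our “SlopeFunc()” (it just needs an interval over which the slope changes sign):

# aside: find where the slope of the (negative) log-likelihood crosses zero

uniroot(SlopeFunc,interval=c(10,100))$root    # should land very near the MLE for 'shape'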

The simplest derivative-based optimization algorithm is the Newton-Raphson algorithm. Here is the pseudocode:

  • pick a guess for a parameter value
  • compute the first derivative of the likelihood function for that guess
  • compute the slope of the first derivative (curvature, or second derivative) of the likelihood function for that guess
  • Extrapolate linearly to find where the first derivative should hit zero, assuming the rate of change in the slope is constant: new guess = old guess - (first derivative)/(second derivative)
  • repeat until the first derivative of the likelihood function is close enough to zero (within a specified tolerance), using the new value from the previous step as your initial guess.

Let’s first visualize the shape of the first derivative of the likelihood function

# First- visualize the gradient of the likelihood function

firstderiv <- numeric(length(shapevec))   # initialize storage variable
for(i in 1:length(shapevec)){
  firstderiv[i] <- SlopeFunc(shapevec[i]) 
}

plot(firstderiv~shapevec,type="l")
abline(h=0,col="red")

Let’s use the Newton method to find the root of the derivative of the likelihood function. First we pick a starting value; say we pick 80.

First compute the derivatives:

# Now we can perform a simple, derivative-based optimization!

### Pick "80" as the starting value

firstderiv <- SlopeFunc(80)           # evaluate the first and second derivatives
secondderiv <- CurvatureFunc(80)
firstderiv
## [1] 13.00653
secondderiv
## [1] 0.3396182

Now let’s use this linear function to extrapolate to where the first derivative is equal to zero:

# Use this info to estimate the root

oldguess <- 80
newguess <- oldguess - firstderiv/secondderiv   # estimate the root (where first deriv is zero)
newguess
## [1] 41.70248

Our new guess for the shape parameter is around 41.7. Let’s plug that (approximately) back in and do it again!

# Repeat this process

oldguess <- 41.31
newguess <- oldguess - SlopeFunc(oldguess)/CurvatureFunc(oldguess) 
newguess
## [1] 48.86382

Okay, we’re already getting close to our MLE of around 49.6. Let’s do it again:

# again...

oldguess<-newguess
newguess <- oldguess - SlopeFunc(oldguess)/CurvatureFunc(oldguess)
newguess
## [1] 49.60238

And again!

# again...

oldguess<-newguess
newguess <- oldguess - SlopeFunc(oldguess)/CurvatureFunc(oldguess)
newguess
## [1] 49.60804

And again!!!

# again...

oldguess<-newguess
newguess <- oldguess - SlopeFunc(oldguess)/CurvatureFunc(oldguess)
newguess
## [1] 49.60804

Wow, in just a few iterations we already basically found the true root. Let’s find the root for real, using an algorithm…

# Implement the Newton Method as a function!  ------------------

NewtonMethod <- function(firstguess,tolerance=0.0000001){
  deriv <- SlopeFunc(firstguess)         # first derivative at the initial guess
  oldguess <- firstguess
  counter <- 0
  while(abs(deriv)>tolerance){           # keep going until the slope is (nearly) zero
    deriv <- SlopeFunc(oldguess)
    newguess <- oldguess - deriv/CurvatureFunc(oldguess)   # Newton-Raphson update
    oldguess<-newguess
    counter=counter+1
  }
  mle <- list()
  mle$estimate <- newguess               # parameter value where the slope is ~zero
  mle$likelihood <- GammaLikelihoodFunction(c(shape=newguess,MLE$par['rate']))   # negative log-likelihood at the estimate
  mle$iterations <- counter
  return(mle)
}


newMLE <- NewtonMethod(firstguess=80)
newMLE
## $estimate
## [1] 49.60804
## 
## $likelihood
## [1] 37.66714
## 
## $iterations
## [1] 6

In just 6 iterations we drove the first derivative to within 0.0000001 of zero, essentially recovering the maximum likelihood estimate! How many computations did we have to perform for the brute force method? (over 500,000 likelihood evaluations- one for each of the 901 x 591 grid cells!)

Hopefully this illustrates the power of optimization algorithms!!

Note that this method and other derivative-based methods also work in multiple dimensions! The only constraint is that the likelihood function must be smooth (differentiable).
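For example (a hedged sketch, not part of the original demo; the object name is arbitrary): base R’s “nlm()” function uses a Newton-type algorithm and happily handles the full two-parameter problem. The little wrapper function just re-attaches the parameter names that “GammaLikelihoodFunction()” expects (and, like “optim()”, it may grumble with warnings if it briefly wanders into impossible parameter values):

# Newton-type optimization in two dimensions using nlm()

nlmfit <- nlm(function(p) GammaLikelihoodFunction(c(shape=p[1],rate=p[2])), p=c(40,6))
nlmfit$estimate     # should closely match the optim() answer
nlmfit$iterations   # far fewer steps than brute force!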

Derivative-free optimization methods

Derivative-free methods make no assumption about smoothness. In some ways, they represent a middle ground between the brute force method and the elegant but finicky derivative-based methods, balancing simplicity against generality.

Derivative-free methods require only a likelihood function that returns real numbers; they have no additional requirements.

Derivative-free method 1: simplex method

This is the default optimization method for “optim()”! That means that R used this method for optimizing the fuel economy example from the previous lecture!

Definition: Simplex

A simplex is the multi-dimensional analog of the triangle. In a two dimensional space, the triangle is the simplest shape possible that encloses an area. It has just one more vertex than there are dimensions! In n dimensions, a simplex is defined by n+1 vertices.

Pseudocode for Nelder-Mead simplex algorithm

Set up an initial simplex in parameter space, essentially representing three initial guesses about the parameter values. NOTE: when you use the Nelder-Mead algorithm in “optim()” you only specify one initial value for each free parameter. “optim()”’s internal algorithm turns that initial guess into a simplex prior to starting the Nelder-Mead algorithm.

Continue the following steps until your answer is good enough:

  • Start by identifying the worst vertex (the one with the lowest likelihood)
  • REFLECT IT: Take the worst vertex and reflect it across the centroid of the remaining vertices. This is your ‘proposal vertex’. If the likelihood at the reflected point is better than the second-worst vertex (but not better than the best vertex), replace the old worst vertex with the proposal vertex.
  • EXPAND IT: If the likelihood is highest for the proposal vertex (out of all the vertices), increase the length of the jump! If this increased jump improves the likelihood even more, replace the old vertex with this new extended-jump vertex. If the expansion is not as good as the reflection, use the reflected vertex instead.
  • CONTRACT IT: If the original reflected vertex was bad (lower likelihood than the original) then try a point closer to the original vertex along the reflection line. If this point is better than the original vertex, replace the old vertex with this new contracted vertex.
  • If all reflections, expansions and contractions were worse than the original vertex, then contract (shrink) the simplex toward the highest-likelihood vertex.

Q: What does the simplex look like for a one-dimensional optimization problem?

Q: Is this method likely to be good at avoiding false peaks in the likelihood surface?

Example: Simplex method

Step 1: Set up an initial simplex in parameter space

# SIMPLEX OPTIMIZATION METHOD!   -----------------------

# set up an "initial" simplex

firstguess <- c(shape=70,rate=5)   # "user" first guess 

simplex <- list()
 
           # set up the initial simplex based on the first guess...
simplex[['vertex1']] <- firstguess + c(3,1)
simplex[['vertex2']] <- firstguess + c(-3,-1)
simplex[['vertex3']] <- firstguess + c(3,-1)

simplex
## $vertex1
## shape  rate 
##    73     6 
## 
## $vertex2
## shape  rate 
##    67     4 
## 
## $vertex3
## shape  rate 
##    73     4

Let’s plot the simplex…

    ## first let's make a function to plot the simplex on a 2-D likelihood surface...

addSimplex <- function(simplex,col="red"){
  temp <- as.data.frame(simplex)    # easier to work with data frame here
  points(x=temp[1,c(1,2,3,1)], y=temp[2,c(1,2,3,1)],type="b",lwd=2,col=col)
}

image(x=shapevec,y=ratevec,z=surface2D,zlim=c(-300,-30),col=topo.colors(12))
contour(x=shapevec,y=ratevec,z=surface2D,levels=c(-30,-40,-80,-120),add=T)
addSimplex(simplex)

Now let’s evaluate the log likelihood at each vertex

# Evaluate log-likelihood at each vertex of the simplex

SimplexLik <- function(simplex){
  newvec <- -1*unlist(lapply(simplex,GammaLikelihoodFunction))   # note use of lapply() instead of a for loop...
  return(newvec)
}

SimplexLik(simplex)
##   vertex1   vertex2   vertex3 
## -300.5682 -575.2062 -725.8400

Now let’s develop functions to assist our moves through parameter space, according to the rules defined above…

# Helper Functions

## this function reflects the worst vertex across the centroid of the remaining vertices (and also computes expanded and contracted alternatives)

# values <- SimplexLik(simplex)
# oldsimplex=simplex[order(values,decreasing = T)]   # note: must be sorted with worst vertex last
ReflectIt <- function(oldsimplex){
  
  # vertnames <- names(oldsimplex)
  n=length(oldsimplex[[1]])
  centroid <- apply(t(as.data.frame(oldsimplex[1:n])),2,mean)
  
  reflected <- centroid + (centroid - oldsimplex[[n+1]])
  expanded <- centroid + 2*(centroid - oldsimplex[[n+1]])
  contracted <- centroid + 0.5*(centroid - oldsimplex[[n+1]])
  
  alternates <- list()
  alternates$reflected <- oldsimplex
  alternates$expanded <- oldsimplex 
  alternates$contracted <- oldsimplex 
  alternates$reflected[[n+1]] <- reflected
  alternates$expanded[[n+1]] <- expanded
  alternates$contracted[[n+1]] <- contracted
  return(alternates)
}
# ReflectIt(oldsimplex)


ShrinkIt <- function(oldsimplex){
  n <- length(oldsimplex[[1]])
  X.vert <- t(as.data.frame(oldsimplex[(1:(n+1))]))
  temp <- sweep(0.5*sweep(X.vert, 2, oldsimplex[[1]], FUN = "-"), 2, X.vert[1, ], FUN="+")
  temp2 <- as.data.frame(t(temp))
  lapply(temp2,function(t) c(shape=t[1],rate=t[2])  )
}


MoveTheSimplex <- function(oldsimplex){     # (incomplete) nelder-mead algorithm
  newsimplex <- oldsimplex  # 
           # Start by sorting the simplex (worst vertex last)
  VertexLik <- SimplexLik(newsimplex)
  newsimplex <- newsimplex[order(VertexLik,decreasing=T)]
  liks <- VertexLik[order(VertexLik,decreasing=T)]
  worstLik <- liks[3]
  secondworstLik <- liks[2]
  bestLik <- liks[1]
  
  candidates <- ReflectIt(oldsimplex=newsimplex)      # reflect across the remaining edge
  CandidateLik <- sapply(candidates,SimplexLik)                          # re-evaluate likelihood at the vertices...
  CandidateLik <- apply(CandidateLik,c(1,2), function(t) ifelse(is.nan(t),-99999,t))
  bestCandidate <- names(which.max(CandidateLik[3,]))
  bestCandidateLik <- CandidateLik[3,bestCandidate]
  
  if((CandidateLik[3,"reflected"]<=bestLik)&(CandidateLik[3,"reflected"]>secondworstLik)){
    newsimplex <- candidates[["reflected"]]
  }else if (CandidateLik[3,"reflected"]>bestLik){
    if(CandidateLik[3,"expanded"]>CandidateLik[3,"reflected"]){
      newsimplex <- candidates[["expanded"]]
    }else{
      newsimplex <- candidates[["reflected"]]
    }
  }else{
    if(CandidateLik[3,"contracted"]>worstLik){
      newsimplex <- candidates[["contracted"]]
    }else{
      newsimplex <- ShrinkIt(newsimplex)
    }
  }

  return(newsimplex)
}

# image(x=shapevec,y=scalevec,z=surface2D,zlim=c(-1000,-30),col=topo.colors(12))
# contour(x=shapevec,y=scalevec,z=surface2D,levels=c(-30,-40,-80,-500),add=T)
# addSimplex(oldsimplex,col="red")
# addSimplex(candidates$reflected,col="green")
# addSimplex(candidates$half,col="green")

# Visualize the simplex  ---------------------

oldsimplex <- simplex
newsimplex <- MoveTheSimplex(oldsimplex)
image(x=shapevec,y=ratevec,z=surface2D,zlim=c(-500,-30),col=topo.colors(12))
contour(x=shapevec,y=ratevec,z=surface2D,levels=c(-30,-40,-80,-125),add=T)
addSimplex(oldsimplex,col="red")
addSimplex(newsimplex,col="green")

Let’s try another few moves

# Make another move  -------------

oldsimplex <- newsimplex
newsimplex <- MoveTheSimplex(oldsimplex)

image(x=shapevec,y=ratevec,z=surface2D,zlim=c(-500,-30),col=topo.colors(12))
contour(x=shapevec,y=ratevec,z=surface2D,levels=c(-30,-40,-80,-125),add=T)
addSimplex(oldsimplex,col="red")
addSimplex(newsimplex,col="green")

And again!

# Make another move  ----------------------

oldsimplex <- newsimplex
newsimplex <- MoveTheSimplex(oldsimplex)

image(x=shapevec,y=ratevec,z=surface2D,zlim=c(-500,-30),col=topo.colors(12))
contour(x=shapevec,y=ratevec,z=surface2D,levels=c(-30,-40,-80,-125),add=T)
addSimplex(oldsimplex,col="red")
addSimplex(newsimplex,col="green")

Again:

# Make another move  ----------------

oldsimplex <- newsimplex
newsimplex <- MoveTheSimplex(oldsimplex)

image(x=shapevec,y=ratevec,z=surface2D,zlim=c(-500,-30),col=topo.colors(12))
contour(x=shapevec,y=ratevec,z=surface2D,levels=c(-30,-40,-80,-125),add=T)
addSimplex(oldsimplex,col="red")
addSimplex(newsimplex,col="green")

And another few times:

# Make another few moves  ----------------------

par(mfrow=c(2,2))

for(i in 1:4){
  oldsimplex <- newsimplex
  newsimplex <- MoveTheSimplex(oldsimplex)
  
  image(x=shapevec,y=ratevec,z=surface2D,zlim=c(-500,-30),col=topo.colors(12))
  contour(x=shapevec,y=ratevec,z=surface2D,levels=c(-30,-40,-80,-125),add=T)
  addSimplex(oldsimplex,col="red")
  addSimplex(newsimplex,col="green")
}

Now we can build a function and use the algorithm for optimizing!

# Build a simplex optimization function!  -----------------

SimplexMethod <- function(firstguess,tolerance=0.00001){
  initsimplex <- list()                  # build an initial simplex around the first guess
  initsimplex[['vertex1']] <- firstguess + c(5,0.5)
  initsimplex[['vertex2']] <- firstguess + c(-5,-0.5)
  initsimplex[['vertex3']] <- firstguess + c(5,-0.5)
  VertexLik <- SimplexLik(initsimplex)
  counter <- 0
  oldsimplex <- initsimplex
       # keep moving until the vertices have (nearly) the same likelihood, or we hit 250 moves
  while((counter<250)&(any(abs(diff(VertexLik))>tolerance))){
    newsimplex <- MoveTheSimplex(oldsimplex)
    VertexLik <- SimplexLik(newsimplex)
    bestlik <- VertexLik[which.max(VertexLik)]
    oldsimplex <- newsimplex
    counter <- counter+1
  }
  mle <- list()
  mle$estimate <- newsimplex[[1]]        # take a vertex of the final (tiny) simplex as the estimate
  mle$likelihood <- bestlik
  mle$iterations <- counter
  return(mle)
}

SimplexMethod(firstguess = c(shape=39,rate=4))
## $estimate
##     shape      rate 
## 49.656948  7.172449 
## 
## $likelihood
##   vertex3 
## -37.66714 
## 
## $iterations
## [1] 41

I like to call this the “amoeba” method of optimization!

In general, the simplex-based methods are less efficient than the derivative-based methods at finding the MLE- especially as you near the MLE (see the quick comparison below).
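A quick (hedged) way to see this with our gamma example is to compare the number of likelihood evaluations that “optim()” reports for its simplex default (Nelder-Mead) versus a derivative-based method (BFGS); expect the same harmless warnings as before:

# compare likelihood evaluations: simplex (Nelder-Mead) vs. derivative-based (BFGS)

optim(c(shape=40,rate=6),GammaLikelihoodFunction,method="Nelder-Mead")$counts
optim(c(shape=40,rate=6),GammaLikelihoodFunction,method="BFGS")$counts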

Derivative-free method 2: simulated annealing (SA).

Simulated annealing is one of my favorite optimization techniques. I think it serves as a good metaphor for problem-solving in general. When solving a problem, the first step is to think big and consider solutions we might otherwise miss. Then we settle (focus) on a general solution, learn more about how that solution applies to our problem, and ultimately get it done!

The temperature analogy is fun too! We start out “hot”- unfocused, frenzied, bouncing around - and we end up “cold” - crystal clear and focused on a solution!

SA: A “global” optimization solution

Simulated annealing is called a “global” optimization method because it can deal with false peaks and other strange features that can arise in optimization problems (e.g., maximizing a likelihood). The price is reduced efficiency!

Pseudocode for the Metropolis simulated annealing routine

Pick an initial starting point and evaluate the likelihood.

Continue the following steps until your answer is good enough:

  • Pick a new point at random near your old point and compute the (log) likelihood
  • If the new value is better, accept it and start again
  • If the new value is worse, then
    - Pick a random number between zero and 1
    - Accept the new (worse) value anyway if the random number is less than exp(change in log likelihood/k). Otherwise, go back to the previous value
  • Periodically (e.g. every 100 iterations) lower the value of k to make it harder to accept bad moves. Eventually, the algorithm will “settle down” on a particular point in parameter space.

A simulated annealing method is available in the “optim” function in R (method = “SANN”)
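For example (a hedged sketch; the object name is arbitrary): with method = “SANN”, “optim()” runs a fixed number of iterations (the “maxit” control, 10000 by default) rather than checking a convergence tolerance, and the result will vary a bit from run to run:

# simulated annealing via optim(); maxit sets the (fixed) number of iterations

SANNfit <- optim(c(shape=80,rate=7),GammaLikelihoodFunction,method="SANN",control=list(maxit=10000))
SANNfit$par    # should land near the other estimates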

Example: Simulated annealing!

Let’s use the same familiar myxomatosis example!

# Simulated annealing!  -----------------------

startingvals <- c(shape=80,rate=7)
startinglik <- -GammaLikelihoodFunction(startingvals)
startinglik
## [1] -270.5487
k = 100   # set the "temperature"
 
     # function for making new guesses
newGuess <- function(oldguess=startingvals){
  maxshapejump <- 5
  maxratejump <- 0.75
  jump <- c(runif(1,-maxshapejump,maxshapejump),runif(1,-maxratejump,maxratejump))
  newguess <- oldguess + jump
  return(newguess)
}
  # set a new "guess" near to the original guess

newGuess(oldguess=startingvals)     # each time is different- this is the first optimization procedure with randomness built in
##     shape      rate 
## 76.416983  7.329611
newGuess(oldguess=startingvals)
##     shape      rate 
## 80.837027  7.247608
newGuess(oldguess=startingvals)
##     shape      rate 
## 76.552051  7.611385

Now let’s evaluate the difference in likelihood between the old and the new guess…

# evaluate the difference in likelihood between the new proposal and the old point

LikDif <- function(oldguess,newguess){
  oldLik <- -GammaLikelihoodFunction(oldguess)
  newLik <- -GammaLikelihoodFunction(newguess)
  return(newLik-oldLik)
}

newguess <- newGuess(oldguess=startingvals)
loglikdif <- LikDif(oldguess=startingvals,newguess)
loglikdif
## [1] -103.0489

Now let’s look at the Metropolis routine:

# run and visualize a Metropolis simulated annealing routine -------------

k <- 100
oldguess <- startingvals
counter <- 0
guesses <- matrix(0,nrow=100,ncol=2)
colnames(guesses) <- names(startingvals)
while(counter<100){
  newguess <- newGuess(oldguess)
  loglikdif <- LikDif(oldguess,newguess)
  if(loglikdif>0){ 
    oldguess <- newguess
  }else{
    rand=runif(1)
    if(rand <= exp(loglikdif/k)){
      oldguess <- newguess   # accept even if worse!
    }
  }
  counter <- counter + 1
  guesses[counter,] <- oldguess
}

# visualize!

image(x=shapevec,y=ratevec,z=surface2D,zlim=c(-500,-30),col=topo.colors(12))
contour(x=shapevec,y=ratevec,z=surface2D,levels=c(-30,-40,-80,-135),add=T)
lines(guesses,col="red")

Clearly this is the most inefficient, brute-force method we have seen so far (aside from the actual brute force method). And also quite clearly, in the context of this class, the best and most fun (and dangerous?!).

NOTE: simulated annealing is still way more efficient than the brute force method we saw earlier, especially with multiple dimensions!

Let’s run it for longer, and with a smaller value of k…

# Run it for longer!

k <- 10
oldguess <- startingvals
counter <- 0
guesses <- matrix(0,nrow=1000,ncol=2)
colnames(guesses) <- names(startingvals)
while(counter<1000){
  newguess <- newGuess(oldguess)
  while(any(newguess<0)) newguess <- newGuess(oldguess)
  loglikdif <- LikDif(oldguess,newguess)
  if(loglikdif>0){ 
    oldguess <- newguess
  }else{
    rand=runif(1)
    if(rand <= exp(loglikdif/k)){
      oldguess <- newguess   # accept even if worse!
    }
  }
  counter <- counter + 1
  guesses[counter,] <- oldguess
}

# visualize!

image(x=shapevec,y=ratevec,z=surface2D,zlim=c(-500,-30),col=topo.colors(12))
contour(x=shapevec,y=ratevec,z=surface2D,levels=c(-30,-40,-80,-135),add=T)
lines(guesses,col="red")

This looks better! The search algorithm is finding the high-likelihood parts of parameter space pretty well!

Now let’s “cool” the temperature over time and let the algorithm settle down on a likelihood peak:

# cool the "temperature" over time and let the algorithm settle down

k <- 100
oldguess <- startingvals
counter <- 0
guesses <- matrix(0,nrow=10000,ncol=2)
colnames(guesses) <- names(startingvals)
MLE <- list(vals=startingvals,lik=-GammaLikelihoodFunction(startingvals),step=0)
while(counter<10000){
  newguess <- newGuess(oldguess)
  while(any(newguess<0)) newguess <- newGuess(oldguess)
  loglikdif <- LikDif(oldguess,newguess)
  if(loglikdif>0){ 
    oldguess <- newguess
  }else{
    rand=runif(1)
    if(rand <= exp(loglikdif/k)){
      oldguess <- newguess   # accept even if worse!
    }
  }
  counter <- counter + 1
  if(counter%%100==0) k <- k*0.8
  guesses[counter,] <- oldguess
  thislik <- -GammaLikelihoodFunction(oldguess)
  if(thislik>MLE$lik) MLE <- list(vals=oldguess,lik=-GammaLikelihoodFunction(oldguess),step=counter)
}

# visualize!

image(x=shapevec,y=ratevec,z=surface2D,zlim=c(-500,-30),col=topo.colors(12))
contour(x=shapevec,y=ratevec,z=surface2D,levels=c(-30,-40,-80,-135),add=T)
lines(guesses,col="red")
points(MLE$vals[1],MLE$vals[2],col="green",pch=20,cex=3)

MLE
## $vals
##     shape      rate 
## 49.914729  7.206274 
## 
## $lik
## [1] -37.66748
## 
## $step
## [1] 5777
optim(params,GammaLikelihoodFunction)$par
##     shape      rate 
## 49.608753  7.164569

As you can see, the simulated annealing method did pretty well. However, we needed thousands of iterations to accomplish what other methods accomplish in just a few. In return, we can feel better that we have explored parameter space more thoroughly and avoided the potential problem of false peaks (although there’s no guarantee that simulated annealing will find the true MLE).

Other methods

As you can see, there are many ways to optimize- and the optimal optimization routine is not always obvious!

You can probably use some creative thinking and imagine your own optimization algorithm… For example, some have suggested combining the simplex method with the simulated annealing method! Optimization is an art!!

What about the confidence interval??

As you can see in the previous examples, most of the optimization techniques we have looked at do not explore parameter space enough to discern the shape of the likelihood surface around the maximum likelihood estimate. Therefore, we do not have the information we need to compute the confidence intervals around our parameter estimates. And what good is a point estimate without a corresponding estimate of uncertainty??

There are several techniques that are widely used to estimate and describe parameter uncertainty:

  1. Brute force (expose the entire likelihood surface!) [okay, this one isn’t actually used very often]
  2. Profile likelihood (the most accurate way!)
  3. Evaluate the curvature at the MLE and use it to estimate sampling error (somewhat inexact, but efficient; it generally performs well and is the default for many MLE routines! A sketch follows below.)
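Here is a hedged sketch of option 3 for our gamma example (the object names are arbitrary): ask “optim()” to return the Hessian (the matrix of second derivatives, i.e. the curvature) of the negative log-likelihood at the MLE, invert it to approximate the variance-covariance matrix of the estimates, and build Wald-type confidence intervals from the resulting standard errors.

# curvature-based (Wald) confidence intervals, sketched with optim()

MLE_h <- optim(c(shape=40,rate=6),GammaLikelihoodFunction,hessian=TRUE)
vcov_approx <- solve(MLE_h$hessian)       # inverse curvature ~ variance-covariance matrix
SEs <- sqrt(diag(vcov_approx))            # approximate standard errors
rbind(lower=MLE_h$par-1.96*SEs, upper=MLE_h$par+1.96*SEs)   # rough 95% confidence bounds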

–go to next lecture–