1 The Beta regression model

We will assume that the tropical tuna percentage (TTP) in longliner catches is a function of the sea surface temperature (SST). Here \(y=\)TTP is a proportion between 0 and 1, while \(x=\)SST is a real, positive number.

1.1 The beta distribution

Usually we say that \(y \sim Beta(\alpha,\beta)\) when \[ p(y|\alpha,\beta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)} y^{\alpha-1} (1-y)^{\beta-1}, \qquad 0<y<1 \ \ \ \alpha,\beta>0, \] where \(\Gamma(\cdot)\) is the gamma function \[ \Gamma(z) = \int_0^\infty t^{z-1}e^{-t}dt, \] with \(\Gamma(z)=(z-1)!\) when \(z\) is a positive integer.

It is easy to verify that

  • \(E(y|\alpha,\beta) = \frac{\alpha}{\alpha+\beta}\)

  • \(V(y|\alpha,\beta) = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}\)

  • Mode\((y|\alpha,\beta) = \frac{\alpha-1}{\alpha+\beta-2}, \qquad \alpha,\beta>1\)
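As a quick sanity check, the mean and variance above can be verified by Monte Carlo; the values of \(\alpha\) and \(\beta\) below are arbitrary illustrative choices, not taken from the tuna application.

```r
# Monte Carlo check of the Beta moments (illustrative sketch)
set.seed(1)
alpha <- 2; beta <- 5
draws <- rbeta(1e6, alpha, beta)

mean.closed <- alpha/(alpha+beta)                          # 2/7
var.closed  <- alpha*beta/((alpha+beta)^2*(alpha+beta+1))  # 10/392

c(mc.mean = mean(draws), closed.mean = mean.closed)
c(mc.var  = var(draws),  closed.var  = var.closed)
```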

1.2 The reparametrized Beta distribution

For modeling proportions that depend on covariates, it is suggested to use the following parametrization of the Beta distribution: \[ y|\mu,\phi \sim Beta(\mu,\phi), \] where \(\mu=\alpha/(\alpha+\beta)\) and \(\phi=\alpha+\beta\). The inverse transformation is given by \(\alpha=\mu \phi\) and \(\beta=(1-\mu)\phi\). This transformation leads to

  • \(E(y|\mu,\phi) = \mu\)

  • \(V(y|\mu,\phi) = \frac{\mu(1-\mu)}{1+\phi}\)

  • Mode\((y|\mu,\phi) = \frac{\mu\phi-1}{\phi-2}, \qquad 1/\phi < \mu < 1-1/\phi\) (i.e. \(\alpha,\beta>1\), which implies \(\phi>2\))
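The two parametrizations are equivalent, and moving between them is a one-line computation; the values of \(\mu\) and \(\phi\) below are arbitrary illustrative choices.

```r
# Sketch: round trip between the (mu,phi) and (alpha,beta) parametrizations
mu  <- 0.3; phi <- 10            # illustrative values
alpha <- mu*phi                  # 3
beta  <- (1-mu)*phi              # 7
# recover (mu,phi) from (alpha,beta)
c(mu = alpha/(alpha+beta), phi = alpha+beta)
# mean and variance under the new parametrization
c(mean = mu, var = mu*(1-mu)/(1+phi))
```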

1.3 Beta regression

We can now link the response \(y\) to the covariate \(x\) through \(\mu\), i.e.

\[ y_i|\mu_i,\phi \sim Beta(\mu_i,\phi) \ \ \ \mbox{and} \ \ \ \log\left(\frac{\mu_i}{1-\mu_i}\right)= \theta_0 + \theta_1 x_i, \] for \(\theta_0,\theta_1 \in \mathbb{R}\).
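One way to internalize this model is to simulate from it: draw \(\mu_i\) through the inverse-logit link and then sample \(y_i\) from the reparametrized Beta. The values of \(\theta_0\), \(\theta_1\), \(\phi\) and the SST range below are illustrative choices only, not estimates from the tuna data.

```r
# Sketch: simulating one dataset from the Beta regression model above
set.seed(42)
n      <- 50
x.sim  <- runif(n, 15, 30)                      # fake SST values
theta0 <- -8; theta1 <- 0.25; phi <- 20         # illustrative parameters
mu.sim <- 1/(1+exp(-(theta0 + theta1*x.sim)))   # inverse-logit link
y.sim  <- rbeta(n, mu.sim*phi, (1-mu.sim)*phi)  # reparametrized Beta draws
summary(y.sim)
```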

Here are two key references to understand the Beta regression model:

  1. Ferrari SLP, Cribari-Neto F (2004). Beta Regression for Modelling Rates and Proportions. Journal of Applied Statistics, 31(7), 799-815.

  2. Cribari-Neto F, Zeileis A (2010). Beta Regression in R. Journal of Statistical Software, 34(2), 1-24. https://www.jstatsoft.org/article/view/v034i02

#install.packages("rstan")
#install.packages("rstanarm")
#install.packages("statmod")
library(rstan)
library(rstanarm)

2 The dataset

atlantic = read.table("https://hedibert.org/wp-content/uploads/2026/02/atlantic.txt",header=TRUE)
atla     = data.frame(atlantic)
ind      = atla[,1]==1981        # keep only year 1981
x        = atla[ind,6]           # sea surface temperature (SST)
y        = atla[ind,4]/100       # tropical tuna percentage as a proportion
y[y==0]  = 0.000255362           # replace exact zeros: the Beta support is the open interval (0,1)

par(mfrow=c(1,1))
plot(x,y,xlab="Sea surface temperature",ylab="Tropical tuna percentage",ylim=c(0,0.4))

3 Beta regression: Bayesian approach

3.1 Running Stan

data    = data.frame(y=y,x=x)
fit1    = stan_betareg(y ~ x, link = "logit",data=data)
draws   = as.matrix(fit1)                             # 4000 posterior draws: intercept, slope, phi
xx      = seq(min(x),max(x),length=100)               # SST grid for fitted curves
fitted1 = 1/(1+exp(-(fit1$coef[1]+fit1$coef[2]*xx)))  # inverse-logit fit at the posterior means

param = c("intercept","slope","precision")
par(mfrow=c(2,3))
for (i in 1:3) ts.plot(draws[,i],xlab="Iterations",ylab="",main=param[i])
for (i in 1:3) hist(draws[,i],prob=TRUE,main="",xlab="")

3.2 Posterior inference

fits   = matrix(0,4000,100)    # posterior draws of mu(x) on the grid
draws1 = matrix(0,4000,100)    # posterior predictive draws of y on the grid
for (i in 1:100){
  fits[,i]   = 1/(1+exp(-(draws[,1]+draws[,2]*xx[i])))
  draws1[,i] = rbeta(4000,fits[,i]*draws[,3],(1-fits[,i])*draws[,3])
}
qfits  = t(apply(fits,2,quantile,c(0.05,0.5,0.95)))    # 90% credible band for the mean mu(x)
qfits1 = t(apply(draws1,2,quantile,c(0.05,0.5,0.95)))  # 90% posterior predictive band for y

par(mfrow=c(1,1))
plot(x,y,xlab="Sea surface temperature",ylab="Tropical tuna percentage",
     ylim=c(0,1))
lines(xx,fitted1,col=2,lwd=2)
lines(xx,qfits[,1],col=2,lwd=2,lty=2)
lines(xx,qfits[,3],col=2,lwd=2,lty=2)
lines(xx,qfits1[,1],col=3,lwd=2,lty=2)
lines(xx,qfits1[,3],col=3,lwd=2,lty=2)

4 Extending the model

Here we allow both \(\mu_i\) and \(\phi_i\) to be functions of \(x_i\):

\[ \log\left(\frac{\mu_i}{1-\mu_i}\right)= \theta_0 + \theta_1 x_i \qquad\qquad \mbox{and} \qquad\qquad \log \phi_i = \gamma_0 + \gamma_1 x_i. \]
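Letting \(\phi\) depend on \(x\) makes the variance \(\mu_i(1-\mu_i)/(1+\phi_i)\) change with the covariate, i.e. the model becomes heteroskedastic. A minimal sketch of that implication, with illustrative parameter values (not estimates from the tuna data):

```r
# Sketch: under the extended model the variance mu(x)(1-mu(x))/(1+phi(x))
# varies with x (all parameter values below are illustrative assumptions)
theta0 <- -8; theta1 <- 0.25
gamma0 <- 1;  gamma1 <- 0.1
x.grid <- c(18, 24, 30)
mu.g   <- 1/(1+exp(-(theta0 + theta1*x.grid)))   # logit link for the mean
phi.g  <- exp(gamma0 + gamma1*x.grid)            # log link for the precision
data.frame(x = x.grid, mu = mu.g, phi = phi.g, var = mu.g*(1-mu.g)/(1+phi.g))
```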

fit2    = stan_betareg(y ~ x | x, link = "logit", link.phi="log", data=data)
draws2  = as.matrix(fit2)                             # draws: theta0, theta1, gamma0, gamma1
fitted2 = 1/(1+exp(-(fit2$coef[1]+fit2$coef[2]*xx)))  # inverse-logit fit at the posterior means

param = c("intercept","slope","int.precision","slope.precision")
par(mfrow=c(2,4))
for (i in 1:4) ts.plot(draws2[,i],xlab="Iterations",ylab="",main=param[i])
for (i in 1:4) hist(draws2[,i],prob=TRUE,main="",xlab="")

4.1 Comparison

draws21 = matrix(0,4000,100)    # posterior predictive draws under model 2
for (i in 1:100){
  mu.fit      = 1/(1+exp(-(draws2[,1]+draws2[,2]*xx[i])))
  phi.fit     = exp(draws2[,3]+draws2[,4]*xx[i])
  draws21[,i] = rbeta(4000,mu.fit*phi.fit,(1-mu.fit)*phi.fit)
}
qfits2 = t(apply(draws21,2,quantile,c(0.05,0.5,0.95)))  # 90% posterior predictive band

par(mfrow=c(1,1))
plot(x,y,xlab="Sea surface temperature",ylab="Tropical tuna percentage",ylim=c(0,1))
lines(xx,fitted1,col=2,lwd=2)
lines(xx,qfits1[,1],col=2,lwd=2,lty=2)
lines(xx,qfits1[,3],col=2,lwd=2,lty=2)

lines(xx,fitted2,col=4,lwd=2)
lines(xx,qfits2[,1],col=4,lwd=2,lty=2)
lines(xx,qfits2[,3],col=4,lwd=2,lty=2)

legend("topleft",legend=c("Model 1: mu(x)","Model 2: mu(x),phi(x)"),col=c(2,4),lty=1,lwd=2,bty="n")
