---
title: "Threshold AR model"
subtitle: "Gibbs and Random-Walk-Metropolis steps"
author: "Hedibert Freitas Lopes"
date: "2/13/2023"
output:
  html_document:
    toc: true
    toc_depth: 2
    code_download: yes
    number_sections: true
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```


http://cran.nexr.com/web/packages/BAYSTAR/index.html

# Model
In this exercise, we illustrate the implementation of both Gibbs and random-walk Metropolis steps in an MCMC algorithm to draw from the posterior distribution of the parameters of a threshold autoregressive model of order one, or simply a TAR(1) model:
$$
y_t = \phi_1 y_{t-1}1_{\{y_{t-1}\leq\alpha\}}+\phi_2 y_{t-1}1_{\{y_{t-1}>\alpha\}}+\varepsilon_t,
$$
where $\{\varepsilon_t\}$ is Gaussian white noise with variance $\sigma^2$.  We will keep $\sigma^2=1$ for simplicity and focus on the full conditional distributions of $\phi=(\phi_1,\phi_2)' \in \Re^2$ and $\alpha \in \Re$.  

```{r fig.width=10, fig.height=5}
set.seed(1245)
n     = 500
alpha = 2.0
phi1  = 0.0
phi2  = 0.95
sig   = 1.0
error = rnorm(n,0,sig)

y = rep(10,n)
for (t in 2:n){
  if (y[t-1]<=alpha){
    y[t] = phi1*y[t-1]+error[t]  
  }else{
    y[t] = phi2*y[t-1]+error[t]  
  }
}

par(mfrow=c(1,2))
ts.plot(y)
plot(y[1:(n-1)],y[2:n],xlab=expression(y[t-1]),ylab=expression(y[t]))
abline(v=alpha,col=2)

alpha.true = alpha
phi1.true = phi1
phi2.true = phi2
```

# Posterior inference

## $\phi|\mbox{data},\alpha$
If we rename the lagged components, the model above becomes a linear regression:
$$
y_t = \phi_1 x_{t1} +\phi_2 x_{t2} +\varepsilon_t,
$$
where $x_{t1}=y_{t-1}1_{\{y_{t-1}\leq\alpha\}}$ and $x_{t2}=y_{t-1}1_{\{y_{t-1}>\alpha\}}$.  Notice that $\alpha$ is hidden inside both $x_{t1}$ and $x_{t2}$, so conditionally on $\alpha$ the vector $\phi$ can be updated as in a Gaussian linear regression.  

For simplicity, let us assume a noninformative prior for $\phi$, i.e. $p(\phi) \propto 1$.  Then it is easy to show that 
$$
p(\phi|\mbox{data},\alpha) \propto \exp\left\{-\frac12 (Y-X\phi)'(Y-X\phi)\right\}
\propto \exp\left\{-\frac12 \left[\phi'X'X\phi-2\phi'X'Y\right] \right\},
$$
for $Y=(y_2,\ldots,y_n)'$, $X=(x_2,\ldots,x_n)'$ and $x_t=(x_{t1},x_{t2})$.  The kernel of the above posterior is Gaussian with mean vector and covariance matrix given by
$$
{\hat \phi}(\alpha) = (X'X)^{-1}X'Y \ \ \ \mbox{and} \ \ \ \Sigma(\alpha)=(X'X)^{-1},
$$
respectively. In summary, conditionally on $\alpha$ the full posterior distribution of $\phi$ is
$$
\phi|\mbox{data},\alpha \sim N({\hat \phi}(\alpha),\Sigma(\alpha)).
$$
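
These expressions follow by completing the square in $\phi$ in the kernel above:
$$
\phi'X'X\phi-2\phi'X'Y = (\phi-{\hat \phi})'X'X(\phi-{\hat \phi})-{\hat \phi}'X'X{\hat \phi},
$$
with ${\hat \phi}=(X'X)^{-1}X'Y$; the last term does not involve $\phi$ and is absorbed into the normalizing constant.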

## $\alpha|\mbox{data},\phi$
The full conditional distribution of $\alpha$ does not belong to any nice and easy family of distributions:
$$
p(\alpha|\mbox{data},\phi) \propto \exp\left\{-\frac12 (Y-X(\alpha)\phi)'(Y-X(\alpha)\phi)\right\},
$$
where $\phi$ is fixed here, so all changes due to $\alpha$ come from $X(\alpha)$.

The oldest, simplest and, consequently, most popular MCMC algorithm is the random-walk Metropolis algorithm. First we sample a candidate $\alpha^*$ from $N(\alpha^{\mbox{old}},\tau^2)$.  Here $\alpha^{\mbox{old}}$ is the current position of the Markov chain and $\tau$ is a tuning parameter that controls the acceptance rate of the random-walk draws.  The candidate draw, $\alpha^*$, is accepted with probability
$$
\min\left\{1,\frac{p(\alpha^*|\mbox{data},\phi) }{p(\alpha^{\mbox{old}}|\mbox{data},\phi) }\right\}.
$$
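
As a generic illustration, one random-walk Metropolis update on the log scale can be sketched as follows (the helper `rw_step` and the standard-normal target are ours, not part of the script below):

```r
set.seed(1)
# One random-walk Metropolis update on the log scale (generic sketch)
# logtarget: log posterior up to a constant; x: current state; tau: proposal sd
rw_step = function(logtarget, x, tau){
  xstar = rnorm(1, x, tau)      # candidate from N(x, tau^2)
  if (log(runif(1)) < logtarget(xstar) - logtarget(x)) xstar else x
}

# Sanity check: target a standard normal
draws = numeric(5000)
x = 0
for (i in 1:5000){
  x = rw_step(function(z) -z^2/2, x, tau = 1)
  draws[i] = x
}
c(mean(draws), sd(draws))   # should be near 0 and 1
```

Working on the log scale avoids numerical underflow in the ratio of densities.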

## MCMC scheme

0. Start at $(\phi^{(0)},\alpha^{(0)})$

1. Set $i=1$

   1.1 Draw $\phi^{(i)}$ from $N({\hat \phi}(\alpha^{(i-1)}),\Sigma(\alpha^{(i-1)}))$

   1.2 Draw $\alpha^*$ from $N(\alpha^{(i-1)},\tau^2)$

   1.3 Make $\alpha^{(i)}=\alpha^*$ with probability 
   $$
   \min\left\{1,\frac{p(\alpha^*|\mbox{data},\phi^{(i)}) }{p(\alpha^{(i-1)}|\mbox{data},\phi^{(i)}) }\right\}.
   $$
2. Set $i=i+1$ and go back to 1 until $i=N+1$

The sequence of draws $\{(\phi^{(i)},\alpha^{(i)})\}_{i=1}^N$ converges to draws from the posterior distribution $p(\phi,\alpha| \mbox{data})$.

# Running the MCMC scheme

```{r fig.width=10, fig.height=7}
logpostalpha = function(alpha,phi){
  # Rebuild the regressor matrix X(alpha) and evaluate the log posterior
  # of alpha up to a constant (sigma=1, so dnorm's default sd applies)
  X = cbind(y[1:(n-1)]*ifelse(y[1:(n-1)]<=alpha,1,0),y[1:(n-1)]*ifelse(y[1:(n-1)]>alpha,1,0))
  sum(dnorm(Y-X%*%phi,log=TRUE))
}

Y = y[2:n]

alpha = 0

tau   = 0.2

ndraws = 20000
burnin = 100000
niter = ndraws+burnin

phis = matrix(0,niter,2)
alphas = rep(0,niter)
for (iter in 1:niter){
  # Gibbs step: draw phi from N(phi.hat(alpha), Sigma(alpha))
  X = cbind(y[1:(n-1)]*ifelse(y[1:(n-1)]<=alpha,1,0),y[1:(n-1)]*ifelse(y[1:(n-1)]>alpha,1,0))
  v.phi = solve(t(X)%*%X)
  phi.hat = v.phi%*%t(X)%*%Y
  phi = phi.hat + t(chol(v.phi))%*%rnorm(2)
  
  # Random-walk Metropolis step for alpha
  alphastar = rnorm(1,alpha,tau)
  logaccept = min(0,logpostalpha(alphastar,phi)-logpostalpha(alpha,phi))
  if (log(runif(1))<logaccept){
    alpha = alphastar
  }
  phis[iter,] = phi
  alphas[iter] = alpha
}
```
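
Since every draw is stored, the realized Metropolis acceptance rate can be recovered from the stored chain by counting moves. A minimal, self-contained sketch (the helper name `acc_rate` and the toy chain are ours):

```r
set.seed(1)
# Fraction of iterations on which a random-walk chain actually moved
# (hypothetical helper, not part of the original script)
acc_rate = function(draws) mean(diff(draws) != 0)

# Toy chain that moves on roughly half of its steps
moves = rbinom(999, 1, 0.5) * rnorm(999, 0, 0.1)
toy   = cumsum(c(0, moves))
acc_rate(toy)
```

Applied to the `alphas` vector above, `acc_rate(alphas)` gives the realized acceptance rate, which for a one-dimensional random-walk sampler is often tuned by adjusting $\tau$ toward roughly 20-50%.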


## Monitoring the Markov chains

```{r fig.width=10, fig.height=7}
par(mfrow=c(3,3))
ind = seq(1,niter,by=100)
plot(ind,phis[ind,1],xlab="Iterations",ylab="",main=expression(phi[1]),type="l")
abline(h=phi1.true,col=2,lwd=2)
abline(v=burnin,col=3,lwd=2)
plot(ind,phis[ind,2],xlab="Iterations",ylab="",main=expression(phi[2]),type="l")
abline(h=phi2.true,col=2,lwd=2)
abline(v=burnin,col=3,lwd=2)
plot(ind,alphas[ind],xlab="Iterations",ylab="",main=expression(alpha),type="l")
abline(h=alpha.true,col=2,lwd=2)
abline(v=burnin,col=3,lwd=2)

phis = phis[(burnin+1):niter,]
alphas = alphas[(burnin+1):niter]

ts.plot(phis[,1],xlab="Iterations",ylab="",main=expression(phi[1]))
abline(h=phi1.true,col=2,lwd=2)
ts.plot(phis[,2],xlab="Iterations",ylab="",main=expression(phi[2]))
abline(h=phi2.true,col=2,lwd=2)
ts.plot(alphas,xlab="Iterations",ylab="",main=expression(alpha))
abline(h=alpha.true,col=2,lwd=2)

acf(phis[,1],main="")
acf(phis[,2],main="")
acf(alphas,main="")
```
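
The autocorrelation plots above can be summarized by an approximate effective sample size. Here is a rough sketch based on truncating the sum at positive sample autocorrelations (the helper `ess` is ours; the `coda` package's `effectiveSize()` is the standard tool):

```r
set.seed(2)
# Approximate effective sample size:
# n / (1 + 2 * sum of positive sample autocorrelations)
ess = function(x, max.lag = 100){
  rho = acf(x, lag.max = max.lag, plot = FALSE)$acf[-1]
  length(x) / (1 + 2 * sum(rho[rho > 0]))
}

# Sanity check on an AR(1) series with strong autocorrelation
z = arima.sim(model = list(ar = 0.9), n = 5000)
ess(z)   # far smaller than 5000
```

Highly autocorrelated chains, like the $\alpha$ chain here, carry far fewer effectively independent draws than their nominal length.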

## Marginal posteriors

```{r fig.width=10, fig.height=5}
par(mfrow=c(1,3))
hist(phis[,1],prob=TRUE,main=expression(phi[1]),xlab="")
abline(v=phi1.true,col=2,lwd=2)
hist(phis[,2],prob=TRUE,main=expression(phi[2]),xlab="")
abline(v=phi2.true,col=2,lwd=2)
hist(alphas,prob=TRUE,main=expression(alpha),xlab="")
abline(v=alpha.true,col=2,lwd=2)
```



