---
title: "Posterior inference for the variance in an IID Normal problem"
author: "Hedibert Freitas Lopes"
date: "5/4/2018"
output:
#  word_document: default
#  pdf_document: default
  html_document: default
---

```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```

## Observed data (simulated, of course!)

```{r}
set.seed(1234)
n    = 10
sig2 = 0.25
x    = rnorm(n,0,sqrt(sig2))
```

## Statistical model

We assume that, conditional on $\sigma^2$, the observations $x_1,\ldots,x_n$ are iid Gaussian with mean zero and variance $\sigma^2$. We are interested in cases where $\sigma^2>0$. In this case, it is easy to see that the MLE of $\sigma^2$ is
$$
{\hat \sigma}_{MLE}^2 = \frac{1}{n}\sum_{i=1}^n x_i^2,
$$
for ${\hat \sigma}_{MLE}^2>0$.

## $Pr(\sigma^2>0.25|x_1,\ldots,x_n)$

Since the prior restricts $\sigma^2$ to $(0,u)$, the posterior is an inverse-gamma distribution with parameters $a$ and $b$ truncated to $(0,u)$, so posterior probabilities are renormalized by `pinvgamma(u,a,b)`:

```{r}
library(invgamma)  # pinvgamma and qinvgamma: inverse-gamma CDF and quantile function
(pinvgamma(u,a,b)-pinvgamma(0.25,a,b))/pinvgamma(u,a,b)
```

## Posterior quantiles (via inverse CDF transformation)

```{r}
alpha = c(0.975,0.5,0.025)
ci = qinvgamma((1-alpha)*pinvgamma(u,a,b),a,b)
ci
sig2.median = ci[2]

# Estimates: MLE & posterior mean, mode and median
c(sig2.mle,sig2.mle.mean,sig2.mode,sig2.median)
```

## Your turn

You can now repeat the above exercise by changing the sample size, $n$, the prior upper bound, $u$, and the true value of $\sigma^2$.
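The chunks above rely on the prior upper bound $u$, the posterior parameters $a$ and $b$, and the point estimates `sig2.mle`, `sig2.mean` and `sig2.mode`. As a runnable starting point, here is a hedged, self-contained sketch of one way those quantities could be set up. The uniform prior on $(0,u)$, the value $u=1$, and the resulting choices $a=n/2-1$ and $b=\sum_i x_i^2/2$ are assumptions for illustration, not necessarily the original specification: a flat prior on $(0,u)$ makes the posterior proportional to the likelihood, which is an inverse-gamma $IG(a,b)$ kernel truncated to $(0,u)$.

```r
# Hedged reconstruction of the setup the later chunks use. Assumptions
# (not necessarily the original choices): uniform prior for sigma^2 on
# (0,u) with u = 1, so the posterior is IG(a,b) truncated to (0,u).
library(invgamma)   # pinvgamma/qinvgamma: inverse-gamma CDF and quantile

set.seed(1234)
n    = 10
sig2 = 0.25
x    = rnorm(n,0,sqrt(sig2))

u = 1                 # assumed prior upper bound for sigma^2
a = n/2 - 1           # IG shape implied by the N(0,sigma^2) likelihood
b = sum(x^2)/2        # IG rate

# Point estimates
sig2.mle  = mean(x^2)   # MLE: (1/n) * sum of x_i^2
# Mean of IG(a,b) truncated to (0,u): sigma^2 times the IG(a,b) density
# is proportional to an IG(a-1,b) density (valid for a > 1).
sig2.mean = (b/(a-1))*pinvgamma(u,a-1,b)/pinvgamma(u,a,b)
# Mode of IG(a,b) is b/(a+1); truncation leaves it unchanged when b/(a+1) < u.
sig2.mode = min(b/(a+1),u)

c(sig2.mle,sig2.mean,sig2.mode)
```

With these definitions in place, the probability and quantile chunks above run as written.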
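The quantile chunk uses the inverse-CDF transformation: if $V\sim U(0,F(u))$, where $F$ is the $IG(a,b)$ CDF, then $F^{-1}(V)$ follows the posterior truncated to $(0,u)$. The same trick yields Monte Carlo draws, which give a quick sanity check on the closed-form probability and quantiles. The values of $u$, $a$ and $b$ below are illustrative assumptions, not necessarily the original ones:

```r
# Monte Carlo check of the inverse-CDF computations, under assumed values
# of u, a and b (illustrative only, not necessarily the originals).
library(invgamma)

set.seed(1234)
n    = 10
sig2 = 0.25
x    = rnorm(n,0,sqrt(sig2))

u = 1            # assumed prior upper bound for sigma^2
a = n/2 - 1      # assumed IG shape
b = sum(x^2)/2   # assumed IG rate

# Draws from IG(a,b) truncated to (0,u): if V ~ U(0,F(u)) with F the
# IG(a,b) CDF, then F^{-1}(V) follows the truncated posterior.
M     = 100000
v     = runif(M,0,pinvgamma(u,a,b))
draws = qinvgamma(v,a,b)

# Exact (CDF-based) versus Monte Carlo posterior probability of sigma^2 > 0.25
prob.exact = (pinvgamma(u,a,b)-pinvgamma(0.25,a,b))/pinvgamma(u,a,b)
prob.mc    = mean(draws>0.25)
c(prob.exact,prob.mc)

# Exact versus Monte Carlo posterior quantiles
alpha = c(0.975,0.5,0.025)
rbind(exact = qinvgamma((1-alpha)*pinvgamma(u,a,b),a,b),
      mc    = quantile(draws,1-alpha))
```

The two rows of the final table should agree to roughly Monte Carlo accuracy, confirming that multiplying the uniform draws by `pinvgamma(u,a,b)` correctly handles the truncation.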