# MA 590  Homework 10

## Problems
1 (20 points) This problem highlights the difference between the maximum a posteriori (MAP) and conditional mean (CM) Bayesian point estimators. Consider a random variable $X : S \to \mathbb{R}$, and assume the posterior density is given by

$$
\pi_{\mathrm{post}}(x) = \frac{\alpha}{\sigma_0}\,\varphi\!\left(\frac{x}{\sigma_0}\right) + \frac{1-\alpha}{\sigma_1}\,\varphi\!\left(\frac{x-1}{\sigma_1}\right),
$$

where $0 < \alpha < 1$, $\sigma_0, \sigma_1 > 0$, and $\varphi(x)$ is the standard Gaussian density

$$
\varphi(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2},
$$

as in Kaipio and Somersalo (2005), Ch. 3, Example 1.
(a) Using their respective definitions, show that

$$
x_{\mathrm{MAP}} =
\begin{cases}
0, & \text{if } \dfrac{\alpha}{\sigma_0} > \dfrac{1-\alpha}{\sigma_1}, \\[6pt]
1, & \text{if } \dfrac{\alpha}{\sigma_0} < \dfrac{1-\alpha}{\sigma_1},
\end{cases}
$$

and

$$
x_{\mathrm{CM}} = 1 - \alpha.
$$
Hint: If you run into complicated integration when computing $x_{\mathrm{CM}}$, stop and think about which useful definitions (or densities, actually) could help!
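A sketch of how the hint resolves the mean integral (using the mixture form of $\pi_{\mathrm{post}}$ above, not the full argument): each term $\frac{1}{\sigma_0}\varphi(x/\sigma_0)$ and $\frac{1}{\sigma_1}\varphi((x-1)/\sigma_1)$ is itself a normalized Gaussian density, namely $\mathcal{N}(0,\sigma_0^2)$ and $\mathcal{N}(1,\sigma_1^2)$, so the integral splits into a weighted sum of the component means:

$$
x_{\mathrm{CM}} = \int_{-\infty}^{\infty} x\,\pi_{\mathrm{post}}(x)\,dx
= \alpha \cdot 0 + (1-\alpha)\cdot 1 = 1-\alpha.
$$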
(b) Make figures in MATLAB that show where $x_{\mathrm{MAP}}$ and $x_{\mathrm{CM}}$ lie with respect to the posterior $\pi_{\mathrm{post}}(x)$ for the following cases:

- $\alpha = 0.5$, $\sigma_0 = 0.08$, $\sigma_1 = 0.04$
- $\alpha = 0.01$, $\sigma_0 = 0.001$, $\sigma_1 = 0.1$

Does the MAP estimate or the CM estimate give a better representation of the underlying posterior density? Explain your findings for each case.
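As a quick numerical check before plotting (the assignment asks for MATLAB; this Python sketch just evaluates the two estimators, with `post`, `x_map`, and `x_cm` as illustrative names), one can compare the peak heights $\alpha/\sigma_0$ and $(1-\alpha)/\sigma_1$ of the two mixture components:

```python
import numpy as np

def post(x, alpha, s0, s1):
    """Posterior mixture from Problem 1: alpha*N(0, s0^2) + (1-alpha)*N(1, s1^2)."""
    phi = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
    return alpha / s0 * phi(x / s0) + (1 - alpha) / s1 * phi((x - 1) / s1)

for alpha, s0, s1 in [(0.5, 0.08, 0.04), (0.01, 0.001, 0.1)]:
    # The global mode sits under whichever component has the taller peak at its mean
    x_map = 0.0 if alpha / s0 > (1 - alpha) / s1 else 1.0
    x_cm = 1 - alpha  # mixture mean: alpha*0 + (1-alpha)*1
    print(f"alpha={alpha}: x_MAP={x_map}, x_CM={x_cm}")
```

Note that in the second case the MAP estimate sits at 0 even though that component carries only 1% of the posterior mass, which is the tension the figures should make visible.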
If the conditional covariance

$$
\sigma^2 = \int_{-\infty}^{\infty} (x - x_{\mathrm{CM}})^2\, \pi_{\mathrm{post}}(x)\, dx
$$

is given by

$$
\sigma^2 = \alpha\sigma_0^2 + (1-\alpha)\left(\sigma_1^2 + 1\right) - (1-\alpha)^2,
$$

how does this spread estimator change in each of the above cases? Comment on your results.
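A hedged Python sketch (variable names are illustrative) that cross-checks the closed-form $\sigma^2$ against a direct Riemann-sum evaluation of the integral definition for both parameter cases:

```python
import numpy as np

def post(x, alpha, s0, s1):
    """Posterior mixture from Problem 1."""
    phi = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
    return alpha / s0 * phi(x / s0) + (1 - alpha) / s1 * phi((x - 1) / s1)

x = np.linspace(-5.0, 6.0, 400001)   # fine grid covering both modes
dx = x[1] - x[0]
for alpha, s0, s1 in [(0.5, 0.08, 0.04), (0.01, 0.001, 0.1)]:
    x_cm = 1 - alpha
    var_closed = alpha * s0**2 + (1 - alpha) * (s1**2 + 1) - (1 - alpha)**2
    # Riemann-sum approximation of the integral definition of sigma^2
    var_numeric = np.sum((x - x_cm)**2 * post(x, alpha, s0, s1)) * dx
    print(f"alpha={alpha}: closed form {var_closed:.4f}, numeric {var_numeric:.4f}")
```

With these parameters the closed form gives roughly 0.254 in the first case and 0.0198 in the second, which is worth weighing against how well each point estimator represented the posterior in part (b).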
2 (10 points) Given the random variables $X : S \to \mathbb{R}^n$ and $Y, E : S \to \mathbb{R}^m$ and the forward operator $G : \mathbb{R}^n \to \mathbb{R}^m$, consider the multiplicative noise model

$$
Y = E \,.\!*\, G(X),
$$

where $.*$ denotes component-wise multiplication. One way to construct a likelihood function corresponding to this noise model is to transform it into an additive noise model in natural-log space, assuming that $E$ is log-normally distributed, i.e., that the natural log of the noise follows a normal distribution $\mathcal{N}(0, \sigma^2 I_m)$ with variance $\sigma^2$.

Write the likelihood function for $\log Y$ conditioned on $X = x$ when $\sigma^2$ is both known and unknown.
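A sketch of the additive-log reformulation to set up the problem (the treatment of unknown $\sigma^2$ depends on the prior chosen, which the problem leaves open): taking natural logs component-wise gives $\log Y = \log G(X) + \log E$ with $\log E \sim \mathcal{N}(0, \sigma^2 I_m)$, so for known $\sigma^2$ the model is additive Gaussian noise in log space,

$$
\pi(\log y \mid x) \propto (\sigma^2)^{-m/2} \exp\!\left(-\frac{1}{2\sigma^2}\,\big\|\log y - \log G(x)\big\|_2^2\right),
$$

while for unknown $\sigma^2$ it is carried as an additional parameter, $\pi(\log y \mid x, \sigma^2)$, of the same Gaussian form.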
Note: For any of the above problems for which you use MATLAB to help you solve, you must