Research School of Computer Science Assignment 4 Theory Questions

COMP3670: Introduction to Machine Learning


Note: For the purposes of this assignment, if X is a random variable we let $p_X$ denote the probability density function (pdf) of X, $F_X$ denote its cumulative distribution function, and P denote probabilities. These are related as follows:

$$P(X \le x) = F_X(x) = \int_{-\infty}^{x} p_X(z)\,dz$$

$$P(a \le X \le b) = F_X(b) - F_X(a) = \int_{a}^{b} p_X(z)\,dz$$

Often, we will simply write $p_X$ as p, where it's clear which random variable the distribution refers to.
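As a quick numerical illustration of these identities (a minimal sketch, assuming numpy and scipy are available; the standard normal is just an example choice of X), the following compares $F_X(b) - F_X(a)$ with direct integration of the pdf:

```python
from scipy import stats
from scipy.integrate import quad

X = stats.norm()   # example: X standard normal
a, b = -1.0, 2.0

# P(a <= X <= b) via the cdf: F_X(b) - F_X(a)
via_cdf = X.cdf(b) - X.cdf(a)

# The same probability by numerically integrating the pdf
via_pdf, _err = quad(X.pdf, a, b)

print(via_cdf, via_pdf)  # both approximately 0.8186
```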

You should show your derivations, but you may use a computer algebra system (CAS) to assist with integration or differentiation. We are not assessing your ability to integrate/differentiate here.[1]
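Equivalently, if you prefer an offline CAS, sympy can check the integral from footnote [1] (a minimal sketch):

```python
import sympy as sp

x = sp.symbols('x')

# The same integral as the Wolfram Alpha command in footnote [1]
print(sp.integrate(x**2 * (x**3 + 2*x), (x, 0, 1)))  # prints 2/3
```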

Question 1: Continuous Bayesian Inference (5+5+2+4+4+6+6+5 = 37 credits)

Let X be a random variable representing the outcome of a biased coin, with possible outcomes $\mathcal{X} = \{0, 1\}$, $x \in \mathcal{X}$. The bias of the coin is itself controlled by a random variable Θ, with outcomes[2] $\theta \in \boldsymbol{\theta}$, where

$$\boldsymbol{\theta} = \{\theta \in \mathbb{R} : 0 \le \theta \le 1\}$$

The two random variables are related by the following conditional probability distribution of X given Θ:

$$p(X = 1 \mid \Theta = \theta) = \theta$$
$$p(X = 0 \mid \Theta = \theta) = 1 - \theta$$

We can use $p(X = 1 \mid \theta)$ as a shorthand for $p(X = 1 \mid \Theta = \theta)$.

We wish to learn what θ is, based on experiments by flipping the coin.

We flip the coin a number of times.[3] After each coin flip, we update the probability distribution for θ to reflect our new belief of the distribution on θ, based on the evidence.

Suppose we flip the coin n times, and obtain the sequence of coin flips[4] $x_{1:n}$.

a) Compute the new pdf for θ after having observed n consecutive ones (that is, $x_{1:n}$ is a sequence where $\forall i.\, x_i = 1$), for an arbitrary prior pdf p(θ). Simplify your answer as much as possible.

b) Compute the new pdf for θ after having observed n consecutive zeros (that is, $x_{1:n}$ is a sequence where $\forall i.\, x_i = 0$), for an arbitrary prior pdf p(θ). Simplify your answer as much as possible.

c) Compute $p(\theta \mid x_{1:n} = 1^n)$ for the uniform prior p(θ) = 1.

d) Compute the expected value $\mu_n$ of θ after observing n consecutive ones, with a uniform prior p(θ) = 1. Provide intuition explaining the behaviour of $\mu_n$ as n → ∞.

[1] For example, asserting that $\int_0^1 x^2(x^3 + 2x)\,dx = 2/3$ with no working out is adequate, as you could just plug the integral into Wolfram Alpha using the command Integrate[x^2(x^3 + 2x),{x,0,1}].

[2] For example, a value of θ = 1 represents a coin with 1 on both sides, a value of θ = 0 represents a coin with 0 on both sides, and θ = 1/2 represents a fair, unbiased coin.

[3] The coin flips are independent and identically distributed (i.i.d.).

[4] We write $x_{1:n}$ as shorthand for the sequence $x_1 x_2 \ldots x_n$.

e) Compute the variance $\sigma_n^2$ of the distribution of θ after observing n consecutive ones, with a uniform prior p(θ) = 1. Provide intuition explaining the behaviour of $\sigma_n^2$ as n → ∞.

f) Compute the maximum a posteriori (MAP) estimate $\theta_n^{\mathrm{MAP}}$ of the distribution on θ after observing n consecutive ones, with a uniform prior p(θ) = 1. Provide intuition explaining how $\theta_n^{\mathrm{MAP}}$ varies with n.

g) Given that we have observed n consecutive ones, which do you think is the better guess of the true value of θ: $\mu_n$ or $\theta_n^{\mathrm{MAP}}$? Justify your answer. (Assume p(θ) = 1.)

h) Plot the probability distributions $p(\theta \mid x_{1:n} = 1^n)$ over the interval 0 ≤ θ ≤ 1 for n ∈ {0, 1, 2, 3, 4} to compare them. Assume p(θ) = 1.
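A minimal matplotlib scaffold for part (h) might look like the sketch below; `posterior` is a hypothetical placeholder that you would replace with the pdf you derive in part (c):

```python
import numpy as np
import matplotlib.pyplot as plt

def posterior(theta, n):
    """Placeholder: return p(theta | x_{1:n} = 1^n) under the uniform
    prior, i.e. the expression derived in part (c)."""
    raise NotImplementedError

theta = np.linspace(0.0, 1.0, 500)
for n in range(5):  # n in {0, 1, 2, 3, 4}
    plt.plot(theta, posterior(theta, n), label=f"n = {n}")

plt.xlabel(r"$\theta$")
plt.ylabel(r"$p(\theta \mid x_{1:n} = 1^n)$")
plt.legend()
plt.show()
```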

Question 2: Bayesian Inference on Imperfect Information (4+5+8+4+4 = 25 credits)

We have a Bayesian agent running on a computer, trying to learn what the parameter θ could be in the coin flip problem, based on observations through a noisy camera. The noisy camera takes a photo of each coin flip and reports back whether the result was a 0 or a 1. Unfortunately, the camera is not perfect, and sometimes reports the wrong value.[5] The probability that the camera makes mistakes is controlled by two parameters α and β, which control the likelihood of correctly reporting a zero and a one, respectively. Letting X denote the true outcome of the coin, and $\hat{X}$ the value the camera reports back, we can draw the relationship between X and $\hat{X}$ as shown below.

[Diagram: X = 0 occurs with probability 1 − θ and X = 1 with probability θ; the camera maps X = 0 to $\hat{X} = 0$ with probability α and to $\hat{X} = 1$ with probability 1 − α, and maps X = 1 to $\hat{X} = 1$ with probability β and to $\hat{X} = 0$ with probability 1 − β.]

So, we have

$$p(\hat{X} = 0 \mid X = 0) = \alpha$$
$$p(\hat{X} = 0 \mid X = 1) = 1 - \beta$$
$$p(\hat{X} = 1 \mid X = 1) = \beta$$
$$p(\hat{X} = 1 \mid X = 0) = 1 - \alpha$$

We would now like to investigate what posterior distributions are obtained, as a function of the

parameters α and β.

a) Briefly comment on how the camera behaves for α = β = 1, for α = β = 1/2, and for α = β = 0. For each of these cases, how would you expect the agent to update its prior to a posterior on θ, given an observation of $\hat{X}$? (No equations required.) You shouldn't need any assumptions about p(θ) for this question.

b) Compute $p(\hat{X} = x \mid \theta)$ for all $x \in \{0, 1\}$.

[5] The errors made by the camera are i.i.d., in that past camera outputs do not affect future camera outputs.


c) The coin is flipped, and the camera reports seeing a one (i.e. $\hat{X} = 1$). Given an arbitrary prior p(θ), compute the posterior $p(\theta \mid \hat{X} = 1)$. What does $p(\theta \mid \hat{X} = 1)$ simplify to when α = β = 1? When α = β = 1/2? When α = β = 0? Explain your observations.

d) Compute $p(\theta \mid \hat{X} = 1)$ for the uniform prior p(θ) = 1. Simplify it under the assumption that β = α.

e) Let β = α. Plot $p(\theta \mid \hat{X} = 1)$ as a function of θ, for all α ∈ {0, 1/4, 2/4, 3/4, 1} on the same graph to compare them. Comment on how the shape of the distribution changes with α. Explain your observations. (Assume p(θ) = 1.)

Question 3: Relating Random Variables (10+7+5+16 = 38 credits)

A casino offers a new game. Let $X \sim p_X$ be a random variable on (0, 1] with pdf $p_X$. Let Y be a random variable on [1, ∞) such that Y = 1/X. A random number c is sampled from Y, and the player guesses a number m ∈ [1, ∞). If the player's guess m is lower than c, then the player wins m − 1 dollars from the casino (which means higher guesses pay out more money). But if the player guesses too high (m ≥ c), they go bust and have to pay the casino 1 dollar.

a) Show that the probability density function $p_Y$ for Y is given by

$$p_Y(y) = \frac{1}{y^2}\, p_X\!\left(\frac{1}{y}\right)$$
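As a numerical sanity check of this identity (a sketch, assuming numpy and matplotlib; the uniform choice of $p_X$ is just an example), one can compare a histogram of Y = 1/X against $y^{-2}\,p_X(1/y)$:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Example: X uniform on (0, 1], so p_X(x) = 1 and the claimed identity
# gives p_Y(y) = 1 / y^2 on [1, inf).
x = 1.0 - rng.uniform(0.0, 1.0, size=200_000)  # maps [0, 1) onto (0, 1]
y = 1.0 / x

bins = np.linspace(1.0, 10.0, 91)  # uniform bin width 0.1
w = bins[1] - bins[0]
# Explicit weights give a correct density estimate even though samples
# with y > 10 fall outside the plotted range.
plt.hist(y, bins=bins, weights=np.full(y.shape, 1.0 / (len(y) * w)),
         alpha=0.5, label="empirical density of Y = 1/X")

ys = np.linspace(1.0, 10.0, 400)
plt.plot(ys, 1.0 / ys**2, label=r"$p_Y(y) = y^{-2}\,p_X(1/y)$")
plt.legend()
plt.show()
```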

b) Hence, or otherwise, compute the expected profit for the player under this game. Your answer will be in terms of m and $p_X$, and should be as simplified as possible.

c) Suppose the casino chooses a uniform distribution over (0, 1] for X, that is,

$$p_X(x) = \begin{cases} 1 & 0 < x \le 1 \\ 0 & \text{otherwise} \end{cases}$$

What strategy should the player use to maximise their expected profit?
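To sanity-check whatever strategy you derive, a quick Monte Carlo sketch (assuming numpy) can estimate the expected profit for a few candidate guesses m under this uniform $p_X$:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_profit(m, n_trials=1_000_000):
    """Estimate the player's expected profit for guess m by simulation."""
    x = 1.0 - rng.uniform(0.0, 1.0, size=n_trials)  # X ~ Uniform(0, 1]
    c = 1.0 / x                                     # sample from Y = 1/X
    # Win m - 1 dollars when the guess is lower than c, else lose 1 dollar.
    profit = np.where(m < c, m - 1.0, -1.0)
    return profit.mean()

for m in [1.5, 2.0, 5.0, 20.0]:
    print(f"m = {m:5.1f}: estimated expected profit = {estimated_profit(m):+.4f}")
```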

d) Find a pdf $p_X : (0, 1] \to \mathbb{R}$ such that for any B > 0, there exists a corresponding player guess m such that the expected profit for the player is at least B. (That is, prove that the expected profit for $p_X$, as a function of m, is unbounded.)

Make sure that your choice for $p_X$ is a valid pdf, i.e. it should satisfy

$$\int_0^1 p_X(x)\,dx = 1 \quad \text{and} \quad p_X(x) \ge 0$$

You should also briefly mention how you came up with your choice for $p_X$.

Hint: We want X to be extremely biased towards small values, so that Y is likely to be large, and

the player can choose higher values of m without going bust.

