CS 281 Assignment #0, v 1.0

Variance and Covariance
Problem 1
Let X and Y be two independent random variables.
(a) Show that the independence of X and Y implies that their covariance is zero.
(b) Zero covariance does not imply independence between two random variables. Give an example
of this.
(c) For a scalar constant a, show the following two properties:
E(X + aY) = E(X) + aE(Y)
var(X + aY) = var(X) + a^2 var(Y)
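Before writing the proof, it can be reassuring to sanity-check these identities by simulation. Below is a minimal sketch (illustrative only; the particular distributions, constant, and sample size are arbitrary choices) comparing both sides empirically:

import numpy as np

rng = np.random.default_rng(0)
a = 3.0
x = rng.normal(1.0, 2.0, size=1_000_000)     # X ~ N(1, 4)
y = rng.exponential(5.0, size=1_000_000)     # Y independent of X

# Both comparisons should agree up to Monte Carlo noise.
print(np.mean(x + a * y), np.mean(x) + a * np.mean(y))
print(np.var(x + a * y), np.var(x) + a ** 2 * np.var(y))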

Densities
Problem 2
Answer the following questions:
(a) Can a probability density function (pdf) ever take values greater than 1?
(b) Let X be a univariate normally distributed random variable with mean 0 and variance 1/100.
What is the pdf of X?
(c) What is the value of this pdf at 0?
(d) What is the probability that X = 0?
(e) Explain the discrepancy.
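As a quick numerical companion to this problem (a hedged sketch, not a substitute for the reasoning), scipy can evaluate both the density and the probability of a small interval; note that scipy's scale parameter is the standard deviation, not the variance:

from scipy.stats import norm

X = norm(loc=0.0, scale=0.1)             # variance 1/100, so standard deviation 1/10
print(X.pdf(0.0))                        # density at 0 (can exceed 1)
print(X.cdf(1e-6) - X.cdf(-1e-6))        # probability of a tiny interval around 0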

Conditioning and Bayes’ rule
Problem 3
Let µ ∈ R^m and Σ, Σ′ ∈ R^{m×m}. Let X be an m-dimensional random vector with X ∼ N(µ, Σ), and let Y be an m-dimensional random vector such that Y | X ∼ N(X, Σ′). Derive the distribution and parameters for each of the following.
(a) The unconditional distribution of Y .
(b) The joint distribution for the pair (X, Y ).
Hints:
• You may use without proof (but they are good advanced exercises) the closure properties of
multivariate normal distributions. Why is it helpful to know when a distribution is normal?
• Review Eve’s and Adam’s Laws, the linearity properties of expectation and variance, and the Law of Total Covariance.
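If you want to check a derived answer empirically, the sketch below (illustrative only; the particular µ, Σ, and Σ′ are arbitrary) samples the model and estimates the unconditional mean and covariance of Y for comparison against your closed-form result:

import numpy as np

rng = np.random.default_rng(0)
m, n_samples = 3, 200_000
mu = np.array([1.0, 2.0, 3.0])                  # arbitrary mean
Sigma = np.eye(m) + 0.5                         # arbitrary SPD covariance
Sigma_p = 2.0 * np.eye(m)                       # arbitrary SPD covariance Σ′ for Y | X

X = rng.multivariate_normal(mu, Sigma, size=n_samples)
# Y | X ~ N(X, Σ′) is the same as Y = X + noise with noise ~ N(0, Σ′), independent of X.
Y = X + rng.multivariate_normal(np.zeros(m), Sigma_p, size=n_samples)

print(Y.mean(axis=0))    # compare to your derived E[Y]
print(np.cov(Y.T))       # compare to your derived cov(Y)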

I can Ei-gen
Problem 4
Let X ∈ R^{n×m}.
(a) What is the relationship between the n eigenvalues of XX^T and the m eigenvalues of X^T X?
(b) Suppose X is square (i.e., n = m) and symmetric. What does this tell you about the eigenvalues
of X? What are the eigenvalues of X + I, where I is the identity matrix?
(c) Suppose X is square, symmetric, and invertible. What are the eigenvalues of X^{-1}?
Hints:
• Make use of singular value decomposition and the properties of orthogonal matrices. Show your
work.
• Review and make use of (but do not derive) the spectral theorem.
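Numerical experiments are no substitute for the proofs, but a throwaway sketch like this one (arbitrary shapes and seed) can help you form a conjecture for part (a):

import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
X = rng.normal(size=(n, m))

# Both Gram matrices are symmetric, so eigvalsh applies; compare the sorted spectra.
print(np.sort(np.linalg.eigvalsh(X @ X.T)))   # n eigenvalues
print(np.sort(np.linalg.eigvalsh(X.T @ X)))   # m eigenvalues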

Vector Calculus
Problem 5
Let x, y ∈ R^m and A ∈ R^{m×m}. Please derive from elementary scalar calculus the following useful properties. Write your final answers in vector notation.
(a) What is the gradient with respect to x of x^T y?
(b) What is the gradient with respect to x of x^T x?
(c) What is the gradient with respect to x of x^T Ax?
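Once you have candidate answers, you can verify them numerically in the spirit of Problem 6 below. This generic central-difference checker (a sketch; the names and step size are my own choices) assumes nothing about what the correct gradients are:

import numpy as np

def numeric_grad(f, x, eps=1e-6):
    # Central finite differences, one coordinate at a time.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
m = 4
x, y = rng.normal(size=m), rng.normal(size=m)
A = rng.normal(size=(m, m))

print(numeric_grad(lambda v: v @ y, x))       # compare with your answer to (a)
print(numeric_grad(lambda v: v @ v, x))       # compare with your answer to (b)
print(numeric_grad(lambda v: v @ A @ v, x))   # compare with your answer to (c)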

Gradient Check
Problem 6
Often after finishing an analytic derivation of a gradient, you will need to implement it in code. However, there may be mistakes – either in the derivation or in the implementation. This is particularly the
case for gradients of multivariate functions.
One way to check your work is to numerically estimate the gradient and check it on a variety of inputs.
For this problem we consider the simplest case of a univariate function and its derivative. For example,
consider a function f(x) : R → R:
df/dx = lim_{ε→0} [f(x + ε) − f(x − ε)] / (2ε)
A common check is to evaluate the right-hand side for a small value of ε, and check that the result is similar to your analytic result.
In this problem, you will implement the analytic and numerical derivatives of the function
f(x) = cos(x) + x^2 + e^x.
1. Implement f in Python (feel free to use whatever numpy or scipy functions you need):
import numpy as np

def f(x):
    return np.cos(x) + x ** 2 + np.exp(x)
2. Analytically derive the derivative of that function, and implement it in Python:
def gradf(x):
    return -np.sin(x) + 2 * x + np.exp(x)
3. Now, implement a gradient check (the numerical approximation to the derivative), and show by plotting that the numerical approximation approaches the analytic derivative as ε → 0 for a few
values of x:
def grad_check(x, epsilon):
    return (f(x + epsilon) - f(x - epsilon)) / (2 * epsilon)
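One possible shape for the plot in step 3 (a sketch; the test points and epsilon grid are arbitrary choices, and it reuses f, gradf, and grad_check from above):

import matplotlib.pyplot as plt
import numpy as np

epsilons = np.logspace(-1, -8, 50)
for x in [-1.0, 0.0, 2.0]:                    # a few arbitrary test points
    errors = [abs(grad_check(x, eps) - gradf(x)) for eps in epsilons]
    plt.loglog(epsilons, errors, label=f"x = {x}")
plt.xlabel("epsilon")
plt.ylabel("|numerical - analytic|")
plt.legend()
plt.show()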
