## Description

Homework 8

1. Implement LogitBoost using 1D linear regressors as weak learners. At each boosting iteration, choose the weak learner that obtains the largest reduction in the loss function on the training set D = {(x_i, y_i), i = 1, …, N}, with y_i ∈ {0, 1}:

$$L = \sum_{i=1}^{N} \ln\left(1 + \exp[-\tilde{y}_i h(x_i)]\right) \qquad (1)$$

where ỹ_i = 2y_i − 1 takes values ±1 and h(x) = h_1(x) + … + h_k(x) is the boosted classifier. Please note that the LogitBoost algorithm from the slides uses y_i ∈ {0, 1}, while the loss uses ỹ_i ∈ {−1, 1}.
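The boosting loop described above can be sketched with Newton-style working responses for the logistic loss. This is only a minimal illustration under stated assumptions, not the required implementation: the synthetic data, the function names (`fit_1d`, `logitboost_train`), and the clipping constants are all choices made here for the sketch.

```python
import numpy as np

def logistic_loss(F, y_tilde):
    # L = sum_i ln(1 + exp(-y~_i h(x_i))), computed stably via logaddexp
    return float(np.sum(np.logaddexp(0.0, -y_tilde * F)))

def fit_1d(x, z, w):
    # weighted least-squares fit z ~ a*x + b (closed form)
    W = w.sum()
    xm, zm = (w * x).sum() / W, (w * z).sum() / W
    var = (w * (x - xm) ** 2).sum()
    a = 0.0 if var < 1e-12 else (w * (x - xm) * (z - zm)).sum() / var
    return a, zm - a * xm

def logitboost_train(X, y, k):
    # y in {0,1}; boosted score h(x) accumulated in F; weak learner: a*x_j + b
    N, d = X.shape
    y_tilde = 2 * y - 1
    F = np.zeros(N)
    learners = []
    for _ in range(k):
        p = 1.0 / (1.0 + np.exp(-F))           # current P(y=1|x)
        w = np.clip(p * (1 - p), 1e-8, None)   # Newton weights
        z = np.clip((y - p) / w, -4.0, 4.0)    # working response, clipped
        # pick the single feature whose 1D fit most reduces the loss
        best = None
        for j in range(d):
            a, b = fit_1d(X[:, j], z, w)
            loss = logistic_loss(F + a * X[:, j] + b, y_tilde)
            if best is None or loss < best[0]:
                best = (loss, j, a, b)
        _, j, a, b = best
        F += a * X[:, j] + b
        learners.append((j, a, b))
    return learners

def predict(learners, X):
    F = np.zeros(X.shape[0])
    for j, a, b in learners:
        F += a * X[:, j] + b
    return (F >= 0).astype(int)

# tiny synthetic check (hypothetical data: label determined by feature 0)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)
learners = logitboost_train(X, y, 10)
train_acc = float((predict(learners, X) == y).mean())
```

On the actual datasets, the inner loop over features is the expensive step; vectorizing `fit_1d` over all columns at once is a natural optimization.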

   a) Using the Gisette data, train a LogitBoost classifier on the training set with k ∈ {10, 30, 100, 300, 500} boosting iterations. Plot the training loss vs. iteration number for k = 500. Report in a table the misclassification errors on the training and test sets for the models obtained for each of these k. Plot the misclassification errors on the training and test sets vs. k. (5 points)

   b) Repeat point a) on the Dexter dataset. (2 points)

   c) Repeat point a) on the Madelon dataset. (2 points)
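Since the boosted classifier is additive, a single run with k = 500 yields the models for every smaller k by truncating the sum h_1 + … + h_k, so the table and plots in a)–c) do not require separate training runs. A sketch of that bookkeeping follows; the array layout and function names here are assumptions for illustration, not part of the assignment.

```python
import numpy as np

def misclass_error(F, y):
    # error of the sign of the boosted score against labels y in {0,1}
    return float(((F >= 0).astype(int) != y).mean())

def errors_at_checkpoints(per_iter_scores, y, ks):
    # per_iter_scores: (k_max, N) array, row t holds h_{t+1}(x_i);
    # cumulative sums give the boosted score after each iteration,
    # so one k_max run evaluates every checkpoint k at once
    F_cum = np.cumsum(per_iter_scores, axis=0)
    return {k: misclass_error(F_cum[k - 1], y) for k in ks}

# toy illustration: the first iteration contributes nothing,
# later iterations each push every point toward its label
y = np.array([0, 1, 1, 0])
steps = np.vstack([np.zeros(4)] + [(2 * y - 1) * 0.5] * 4)
errs = errors_at_checkpoints(steps, y, [1, 2, 5])
# errs == {1: 0.5, 2: 0.0, 5: 0.0}
```

The same cumulative scores also give the training-loss-vs-iteration curve asked for in a) without re-evaluating the model from scratch at each k.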