Assignment #6

ECE449, Intelligent Systems Engineering

Points: 10

Submit in the assignment box in Donadeo ICE.

Note: Show your work! Marks are allocated for technique and not just the answer.

1. [5 points] Consider the following training set

x1 = [0, 0]^T, t1 = 0,   x2 = [0, 1]^T, t2 = 1,   x3 = [1, 0]^T, t3 = 1,   x4 = [1, 1]^T, t4 = 1

a) Plot the training samples in the feature space.

b) Apply the perceptron learning rule to the training samples one at a time to obtain weights w1, w2, and bias w0 that separate the training samples. Use w = [w0, w1, w2] = [0, 0, 0] as initial values (consider bias input x0 = 1, and learning rate 1). Write the expression for the resulting decision boundary and draw it in the graph. [Hint: You can use Excel / OO Calc to implement the learning rule for the perceptron, such as the spreadsheet of InClass_09 posted on eClass.]
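The spreadsheet computation suggested in the hint can equally be sketched in code. The following Python sketch is an illustration only (function names are my own); it applies the perceptron rule with bias input x0 = 1, learning rate 1, and initial weights [0, 0, 0], as specified above, leaving the resulting weights for you to interpret.

```python
def step(v):
    """Threshold activation: output 1 when the weighted sum is >= 0, else 0."""
    return 1 if v >= 0 else 0

def train_perceptron(samples, epochs=3, eta=1):
    """One-at-a-time perceptron learning: w_i <- w_i + eta * (t - y) * x_i."""
    w = [0, 0, 0]                                  # [w0 (bias), w1, w2]
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = step(w[0]*1 + w[1]*x1 + w[2]*x2)   # actual output (x0 = 1)
            e = t - y                              # error
            w[0] += eta * e * 1
            w[1] += eta * e * x1
            w[2] += eta * e * x2
    return w

# Training set of Problem 1, in the same order as the table below
or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_perceptron(or_samples)
print(w)
```

Each row of the table corresponds to one pass through the inner loop, so you can cross-check the spreadsheet against this sketch row by row.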

Epoch | Inputs  | Desired  | Initial weights | Actual   | Error | Updated weights
      | x1  x2  | output t | w0   w1   w2    | output y |       | w0   w1   w2
------+---------+----------+-----------------+----------+-------+----------------
  1   | 0   0   |    0     |  0    0    0    |          |       |
      | 0   1   |    1     |                 |          |       |
      | 1   0   |    1     |                 |          |       |
      | 1   1   |    1     |                 |          |       |
  2   | 0   0   |    0     |                 |          |       |
      | 0   1   |    1     |                 |          |       |
      | 1   0   |    1     |                 |          |       |
      | 1   1   |    1     |                 |          |       |
  3   | 0   0   |    0     |                 |          |       |
      | 0   1   |    1     |                 |          |       |
      | 1   0   |    1     |                 |          |       |
      | 1   1   |    1     |                 |          |       |


2. [5 points] Consider the following training set

x1 = [0, 0]^T, t1 = 0,   x2 = [0, 1]^T, t2 = 1,   x3 = [1, 0]^T, t3 = 1,   x4 = [1, 1]^T, t4 = 0

which describes the exclusive OR (XOR) problem.

a) Establish a mathematical (not graphical) proof that this problem is not linearly separable. [Hint: Start with the assumption that these patterns are linearly separable, write down the equations/inequalities corresponding to this assumption, and examine them for conflict; the first such inequality is provided below as an example.]

Suppose that the problem is linearly separable. The decision boundary can be represented as:

Σ_{i=0}^{2} x_i·w_i = 0,   or (expanded)   x0·w0 + x1·w1 + x2·w2 = 0

This assumption means that either

a)  x0·w0 + x1·w1 + x2·w2 < 0  for (x1, x2) = (0, 1) ∧ (x1, x2) = (1, 0), and
    x0·w0 + x1·w1 + x2·w2 ≥ 0  for (x1, x2) = (0, 0) ∧ (x1, x2) = (1, 1),

or

b)  x0·w0 + x1·w1 + x2·w2 > 0  for (x1, x2) = (0, 1) ∧ (x1, x2) = (1, 0), and
    x0·w0 + x1·w1 + x2·w2 ≤ 0  for (x1, x2) = (0, 0) ∧ (x1, x2) = (1, 1)

must be satisfied. Following one of the cases and substituting the values of (x1, x2) for the variables, one obtains

(1) x0·w0 + w2 < 0
(2)
(3)
(4)
…
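As a numeric companion to the algebraic argument, non-separability can also be checked by brute force. The sketch below is an illustration only and does not replace the proof requested in part a): it tries every integer weight triple in a small range, under the convention that the output is 1 when the weighted sum is ≥ 0 (with x0 = 1), and finds none that classifies all four XOR patterns correctly.

```python
from itertools import product

# The XOR training set: (x1, x2) -> t
xor_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def separates(w0, w1, w2):
    """True if the linear threshold unit classifies every XOR pattern correctly."""
    return all((1 if w0 + w1*x1 + w2*x2 >= 0 else 0) == t
               for (x1, x2), t in xor_samples)

# Exhaustive search over integer weights in [-5, 5]
found = any(separates(*w) for w in product(range(-5, 6), repeat=3))
print(found)
```

A search over a finite grid is of course only suggestive; the inequalities above are what establish the conflict for all real-valued weights.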

b) Apply the perceptron learning rule following the same procedure as in Problem 1. Describe your observation.

Epoch | Inputs  | Desired  | Initial weights | Actual   | Error | Updated weights
      | x1  x2  | output t | w0   w1   w2    | output y |       | w0   w1   w2
------+---------+----------+-----------------+----------+-------+----------------
  1   | 0   0   |    0     |  0    0    0    |          |       |
      | 0   1   |    1     |                 |          |       |
      | 1   0   |    1     |                 |          |       |
      | 1   1   |    0     |                 |          |       |
  2   | 0   0   |    0     |                 |          |       |
      | 0   1   |    1     |                 |          |       |
      | 1   0   |    1     |                 |          |       |
      | 1   1   |    0     |                 |          |       |
  3   | 0   0   |    0     |                 |          |       |
      | 0   1   |    1     |                 |          |       |
      | 1   0   |    1     |                 |          |       |
      | 1   1   |    0     |                 |          |       |
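The observation asked for in part b) can also be previewed in code. This minimal sketch (assuming the same conventions as Problem 1: bias input x0 = 1, learning rate 1, zero initial weights) runs the rule on the XOR set for many epochs and checks whether any epoch ends with zero total error; with XOR it never does, so the weights keep oscillating instead of converging.

```python
def step(v):
    """Threshold activation: output 1 when the weighted sum is >= 0, else 0."""
    return 1 if v >= 0 else 0

xor_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w = [0, 0, 0]          # [w0 (bias), w1, w2]
converged = False
for epoch in range(20):
    total_error = 0
    for (x1, x2), t in xor_samples:
        y = step(w[0] + w[1]*x1 + w[2]*x2)
        e = t - y
        total_error += abs(e)
        w[0] += e            # perceptron rule with eta = 1, x0 = 1
        w[1] += e * x1
        w[2] += e * x2
    if total_error == 0:     # an error-free epoch would mean convergence
        converged = True
        break
print(converged)
```

Extending the loop beyond 20 epochs does not change the outcome: the weight vector enters a cycle, which is the behaviour your table from Problem 2b) should reveal.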