Solved: Perceptron Simulation Homework (Learning and Memory PSYCH)

The perceptron is a linear classifier devised by Frank Rosenblatt. It consists of an input vector, an output, a weight vector, an activation function, and a set of rules that define its steps of operation. The simplest example is binary classification, where the weight vector is adjusted until the classification problem is solved.

Perceptron learning is a form of supervised learning: it requires a set of input-output pairs (the training set) to learn a mapping function. The size and quality of the training set determine the final performance of the perceptron.
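For concreteness, the training set for the AND example worked through below can be written as four input-output pairs in the bipolar (-1/+1) coding used in that example. A minimal sketch in Python (the variable name `training_set` is just an illustrative choice):

```python
# AND truth table in bipolar (-1/+1) coding, with x0 = 1 serving as the bias input.
# Each entry is ((x0, x1, x2), T), where T is the teaching output.
training_set = [
    ((1, -1, -1), -1),
    ((1,  1, -1), -1),
    ((1, -1,  1), -1),
    ((1,  1,  1),  1),
]
```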

The perceptron described above can be translated into the following operational steps, which can be implemented in any programming language or even in a simple spreadsheet program.

Step 1: Prepare your training set.
Step 2: Set each initial weight to a small nonzero value between 0 and 1.
Step 3: Compute the weighted sum of the inputs; in other words, calculate w0x0 + w1x1 + w2x2 + … + wnxn. Here w0x0 is the bias term (with w0 = θ and x0 always equal to 1).
Step 4: Calculate the output by applying the activation function f(W·X): output Y = 1 if W·X > 0, otherwise Y = -1.

Step 5: Compare the calculated output Y with the teaching output T and adjust each weight according to the following formula:
wi(new) = wi(old) + α(T - Y)xi
Here α is the learning rate and takes a value between 0 and 1 (the larger the value, the faster the learning).

Repeat Steps 3-5 until Y matches the teaching output T for every pattern in the training set, as in the code sketch below.
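Steps 3-5 can be collected into a short training loop. The sketch below is one possible Python implementation, assuming the bipolar (-1/+1) coding shown earlier; the function name `train_perceptron` and the `max_epochs` safeguard are illustrative choices, not part of the original recipe.

```python
def train_perceptron(training_set, weights, alpha=0.1, max_epochs=100):
    """Repeat Steps 3-5 until the output Y matches T for every training pattern."""
    weights = list(weights)
    for _ in range(max_epochs):
        all_correct = True
        for x, t in training_set:
            # Step 3: weighted sum w0*x0 + w1*x1 + ... + wn*xn (w0*x0 is the bias term).
            s = sum(w * xi for w, xi in zip(weights, x))
            # Step 4: threshold activation, Y = 1 if W.X > 0, otherwise -1.
            y = 1 if s > 0 else -1
            # Step 5: w_i(new) = w_i(old) + alpha * (T - Y) * x_i.
            if y != t:
                all_correct = False
                weights = [w + alpha * (t - y) * xi for w, xi in zip(weights, x)]
        if all_correct:
            break
    return weights
```

Starting from the initial weights (0.5, 0.1, 0.1) used in the worked example below, this loop should settle on weights near (-0.1, 0.3, 0.3), matching the final rows of the table.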

A computation example: simulating the logical AND operation (α = 0.1; θ = 0.5)

| X0 | X1 | X2 | Output (T) | W0 | W1 | W2 | X·W | Y | α(T-Y)X0 | α(T-Y)X1 | α(T-Y)X2 | α |
|----|----|----|------------|------|-----|-----|------|----|------|------|------|-----|
| 1 | -1 | -1 | -1 | 0.5 | 0.1 | 0.1 | 0.3 | 1 | -0.2 | 0.2 | 0.2 | 0.1 |
| 1 | 1 | -1 | -1 | 0.3 | 0.3 | 0.3 | 0.3 | 1 | -0.2 | -0.2 | 0.2 | 0.1 |
| 1 | -1 | 1 | -1 | 0.1 | 0.1 | 0.5 | 0.5 | 1 | -0.2 | 0.2 | -0.2 | 0.1 |
| 1 | 1 | 1 | 1 | -0.1 | 0.3 | 0.3 | 0.5 | 1 | 0 | 0 | 0 | 0.1 |
| 1 | -1 | -1 | -1 | -0.1 | 0.3 | 0.3 | -0.7 | -1 | 0 | 0 | 0 | 0.1 |
| 1 | 1 | -1 | -1 | -0.1 | 0.3 | 0.3 | -0.1 | -1 | 0 | 0 | 0 | 0.1 |
| 1 | -1 | 1 | -1 | -0.1 | 0.3 | 0.3 | -0.1 | -1 | 0 | 0 | 0 | 0.1 |
| 1 | 1 | 1 | 1 | -0.1 | 0.3 | 0.3 | 0.5 | 1 | 0 | 0 | 0 | 0.1 |
| 1 | -1 | -1 | -1 | -0.1 | 0.3 | 0.3 | -0.7 | -1 | 0 | 0 | 0 | 0.1 |
| 1 | 1 | -1 | -1 | -0.1 | 0.3 | 0.3 | -0.1 | -1 | 0 | 0 | 0 | 0.1 |
| 1 | -1 | 1 | -1 | -0.1 | 0.3 | 0.3 | -0.1 | -1 | 0 | 0 | 0 | 0.1 |
| 1 | 1 | 1 | 1 | -0.1 | 0.3 | 0.3 | 0.5 | 1 | 0 | 0 | 0 | 0.1 |
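As a check, the rows of the table can be reproduced with a short script. This is a sketch under the same assumptions as the code above (Python, bipolar coding), starting from the initial weights w0 = θ = 0.5 and w1 = w2 = 0.1 shown in the first row:

```python
training_set = [
    ((1, -1, -1), -1),
    ((1,  1, -1), -1),
    ((1, -1,  1), -1),
    ((1,  1,  1),  1),
]
weights = [0.5, 0.1, 0.1]   # w0 = theta = 0.5; w1 and w2 start at 0.1
alpha = 0.1

for epoch in range(3):      # three passes over the training set, as in the table
    for x, t in training_set:
        s = sum(w * xi for w, xi in zip(weights, x))        # X·W
        y = 1 if s > 0 else -1                              # Y
        deltas = [alpha * (t - y) * xi for xi in x]         # α(T-Y)Xi
        print(x, t, [round(w, 1) for w in weights], round(s, 1), y,
              [round(d, 1) for d in deltas])
        weights = [w + d for w, d in zip(weights, deltas)]
```

Each printed line corresponds to one row of the table; from the second pass onward no weight changes occur, so the weights (-0.1, 0.3, 0.3) correctly implement AND.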
