Author: Michael Marsalli
Additional Credits:
Funding: This module was supported by National Science Foundation Grants #9981217 and #0127561.


We begin by choosing starting weights for our MCP neuron. Keep in mind that these weights will probably change as we proceed. For simplicity we choose w0 = 0, w1 = 0, and w2 = 0. (Remember w0 = -T, so w0 = 0 is the same as T = 0.) We could start with any choice of weights, but with some initial weights the process will take much longer to achieve the desired result.
Now suppose we want our MCP neuron to have the following table, where we use D for the desired output.

    x1    x2    D
    1     1     1
    1     0     1
    0     1     0
    0     0     0

    Table 8

Of course, we could just choose the appropriate weights for the MCP neuron to produce the above table. But we want to see how it is possible for the MCP neuron to start with weights w0 = 0, w1 = 0, and w2 = 0, and step by step to proceed to Table 8 by modifying its weights according to a rule. In a sense, the MCP neuron will "learn" to produce the outputs in Table 8. In order to see what is happening during the process, we will keep track of the weights in a list (w0, w1, w2). Let's start the process.
First we give the input (1,1) to the MCP neuron. Then w0*x0 + w1*x1 + w2*x2 = 0*1 + 0*1 + 0*1 = 0 ≥ 0, so A = 1. Now D = 1 for this input, as you can see in Table 8. Because A = D, we do nothing. The weights are still (0,0,0).
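These threshold computations are easy to automate. Here is a minimal Python sketch of the output computation (the function name mcp_output and the convention of listing x0 = 1 explicitly alongside the inputs are illustrative choices, not the module's notation):

```python
def mcp_output(weights, inputs):
    """Output of an MCP neuron: 1 if the weighted sum is >= 0, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= 0 else 0

# With all weights 0, the weighted sum is 0 for every input, so the neuron fires:
print(mcp_output([0, 0, 0], [1, 1, 1]))  # input (1,1), with x0 = 1 prepended -> 1
```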
Next we give the input (1,0) to the MCP neuron. Then w0*x0 + w1*x1 + w2*x2 = 0*1 + 0*1 + 0*0 = 0 ≥ 0, so again A = 1. Also D = 1 for this input. So again we do nothing. The weights are still (0,0,0).
Next we give the input (0,1) to the MCP neuron. Then w0*x0 + w1*x1 + w2*x2 = 0*1 + 0*0 + 0*1 = 0 ≥ 0, so again A = 1. But D = 0 for this input. Because D < A, we must adjust the weights by the rule wi → wi - xi. So we now have weights (0 - 1, 0 - 0, 0 - 1) = (-1, 0, -1), because x0 = 1, x1 = 0, and x2 = 1. (Remember x0 is always 1.) Because our weights have been modified, we have a new MCP neuron. We must now compute the actual value A for this MCP neuron using the weights (-1, 0, -1).
Now we give the input (0,0) to the MCP neuron with weights (-1, 0, -1). Then A = 0, because (-1)*1 + 0*0 + (-1)*0 = -1 < 0. The desired output for (0,0) is also 0, as you can see in Table 8. So A = D, and we do nothing. The weights remain (-1, 0, -1).
At this point we are not done. We must return to the input (1,1), because the weights have changed since we last gave that input to the neuron. When we give (1,1) to the MCP neuron with weights (-1, 0, -1), we get A = 0, because (-1)*1 + 0*1 + (-1)*1 = -2 < 0. But D = 1 for this input. Because D > A, we must adjust the weights by the rule wi → wi + xi. So now we have weights (-1 + 1, 0 + 1, -1 + 1) = (0, 1, 0).
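Both cases of the learning rule can be collected into a single update step. The following Python sketch (the names train_step and mcp_output are illustrative) applies wi + xi when D > A, wi - xi when D < A, and leaves the weights alone when A = D:

```python
def mcp_output(weights, inputs):
    """Output of an MCP neuron: 1 if the weighted sum is >= 0, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= 0 else 0

def train_step(weights, inputs, desired):
    """Apply the learning rule for one input pattern; return the new weights."""
    actual = mcp_output(weights, inputs)
    if desired > actual:                       # failed to fire: add the inputs
        return [w + x for w, x in zip(weights, inputs)]
    if desired < actual:                       # fired when it shouldn't: subtract
        return [w - x for w, x in zip(weights, inputs)]
    return weights                             # A = D: no change

# The step above: weights (-1, 0, -1), input (1,1) i.e. (x0,x1,x2) = (1,1,1), D = 1
print(train_step([-1, 0, -1], [1, 1, 1], 1))  # -> [0, 1, 0]
```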
It appears we will have to pass through all the inputs again. In order to simplify the exposition, we'll just list the input, the weights, the actual output using the weights, the desired output, and the weights after modification. Here's how the previous computations for the input (1,1) would look.

    Input    Weights        A    D    New weights
    (1,1)    (-1, 0, -1)    0    1    (0, 1, 0)

Exercise. Complete the following table which summarizes all the steps we have performed so far.

    Input    Weights        A    D    New weights
    (1,1)
    (1,0)
    (0,1)
    (0,0)
    (1,1)

And so, continuing with (1,0), we would have the following. We will stop when A = D for all four inputs.

    Input    Weights        A    D    New weights
    (1,0)    (0, 1, 0)      1    1    (0, 1, 0)
    (0,1)    (0, 1, 0)      1    0    (-1, 1, -1)
    (0,0)    (-1, 1, -1)    0    0    (-1, 1, -1)
    (1,1)    (-1, 1, -1)    0    1    (0, 2, 0)
    (1,0)    (0, 2, 0)      1    1    (0, 2, 0)
    (0,1)    (0, 2, 0)      1    0    (-1, 2, -1)
    (0,0)    (-1, 2, -1)    0    0    (-1, 2, -1)
    (1,1)    (-1, 2, -1)    1    1    (-1, 2, -1)
    (1,0)    (-1, 2, -1)    1    1    (-1, 2, -1)
    (0,1)    (-1, 2, -1)    0    0    (-1, 2, -1)
    (0,0)    (-1, 2, -1)    0    0    (-1, 2, -1)

Notice that on our last pass through the inputs, A = D for all four inputs. This means we have found a list of weights that produce the desired output. So if we use w0 = -1, w1 = 2, and w2 = -1, we obtain an MCP neuron with the same output as the logic function in Table 8. Our procedure started with all weights 0, and arrived at an appropriate MCP neuron. In effect, the MCP neuron "learned" to produce the desired output. Perhaps we can use the learning rule to find an MCP neuron that has the same output as the XOR function.
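The whole procedure can be collected into a short program: keep making passes through the four inputs, applying the learning rule at each step, and stop after a full pass in which no weight changes. A Python sketch of this (the names and the list representation of Table 8 are illustrative choices) reproduces the weights found above:

```python
def mcp_output(weights, inputs):
    """Output of an MCP neuron: 1 if the weighted sum is >= 0, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= 0 else 0

# Desired outputs from Table 8; each entry is ((x1, x2), D).
table = [((1, 1), 1), ((1, 0), 1), ((0, 1), 0), ((0, 0), 0)]

weights = [0, 0, 0]
done = False
while not done:
    done = True                      # stays True only if a full pass changes nothing
    for (x1, x2), desired in table:
        x = [1, x1, x2]              # x0 = 1 is always prepended
        actual = mcp_output(weights, x)
        if desired > actual:         # add the inputs to the weights
            weights = [w + xi for w, xi in zip(weights, x)]
            done = False
        elif desired < actual:       # subtract the inputs from the weights
            weights = [w - xi for w, xi in zip(weights, x)]
            done = False

print(weights)  # -> [-1, 2, -1]
```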
Exercise. Make a table like the one above, but using the XOR function as the desired output. Begin with the weights w0 = 0, w1 = 0, and w2 = 0, and make only two complete passes through the inputs.
As you may have noticed, the procedure for producing an MCP neuron with a desired output is rather tedious. Also, you may wonder if the procedure always stops. Is it possible that we can continue to apply the learning rule without ever getting the actual output to match the desired output for all inputs? In order to explore these questions more expeditiously, we are going to use a computer program that implements the above procedure. Then we can carry out many passes through the inputs with relative ease. The program will free us from the computational drudgery so that we can concentrate on using the procedure to find MCP neurons with desired outputs. In particular, we'll use the program to try to find an MCP neuron that reproduces the XOR function.
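As a preview of what such a program might look like, here is a Python sketch of the procedure applied to the XOR table, with a cap on the number of passes so that it always halts (the cap, the names, and the list encoding of the XOR table are illustrative choices; the module's own program may differ):

```python
def mcp_output(weights, inputs):
    """Output of an MCP neuron: 1 if the weighted sum is >= 0, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= 0 else 0

# XOR as the desired output: fires exactly when the two inputs differ.
xor_table = [((1, 1), 0), ((1, 0), 1), ((0, 1), 1), ((0, 0), 0)]

weights = [0, 0, 0]
converged = False
for _ in range(1000):                # cap the number of passes through the inputs
    changed = False
    for (x1, x2), desired in xor_table:
        x = [1, x1, x2]              # x0 = 1 is always prepended
        actual = mcp_output(weights, x)
        if desired != actual:        # apply the learning rule in either direction
            sign = 1 if desired > actual else -1
            weights = [w + sign * xi for w, xi in zip(weights, x)]
            changed = True
    if not changed:                  # a clean pass: A = D for all four inputs
        converged = True
        break

print(converged)  # -> False
```

Running this shows that even after many passes the procedure never completes a clean pass on the XOR table, which is exactly the question the program lets us explore.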

