PYTHON PERCEPTRON TRAINING TO IMPLEMENT XOR GATE
A perceptron is a supervised machine learning algorithm for binary classification, and it is the basic unit of an artificial neural network: a single node that takes a vector (a row of data) as input and predicts the target class. Here, we are building a perceptron model to implement the XOR gate. However, the four XOR points form the corners of a square with the two classes on opposite diagonals, so they cannot be separated by a linear classifier; formally, a linear threshold unit would need w0 < 0, w1 + w0 >= 0, w2 + w0 >= 0, and w1 + w2 + w0 < 0 all at once, which is impossible. As a result, training runs for an infinite number of epochs (iterations) and never stops recalculating the error and wi values.
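To make the prediction step concrete, here is a minimal sketch of a single perceptron node (the function and variable names here are illustrative, not part of the listing below):

def predict(x1, x2, w1, w2, w0): #one perceptron node
    yin = w1*x1 + w2*x2 + w0 #weighted sum of inputs plus bias
    return 1 if yin >= 0 else 0 #step activation, same rule as fund() below

#For example, w1=1, w2=1, w0=-1.5 realises the (linearly separable) AND gate:
#predict(1,1, 1,1,-1.5) -> 1 and predict(0,1, 1,1,-1.5) -> 0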
CODE:
lst=[[0,0,1,1],[0,1,0,1],[1,1,1,1],[0,0,0,0],[],[],[],[],[],[],0,0,0]
#list is in the order of x1, x2, x0(bias), yin, Y, target, error, delta w1, delta w2, delta w0, w1, w2, w0
#initially w1=w2=w0=0
def fund(x): #activation function (step function)
    if(x<0):
        return 0
    else:
        return 1
fw1=0 #final w1
fw2=0 #final w2
fw0=0 #final w0 (bias weight)
fyin=[] #final yin values
lst[5]=[0,1,1,0] #XOR targets
while(1): #runs forever for XOR, since the targets are not linearly separable
    lst[3]=[(lst[10]*i)+(lst[11]*j)+(lst[12]*k) for i,j,k in zip(lst[0],lst[1],lst[2])] #yin = w1*x1 + w2*x2 + w0*x0
    lst[4]=[fund(p) for p in lst[3]] #finding Y using activation function
    lst[6]=[(0.5)*(t-o) for o,t in zip(lst[4],lst[5])] #error = learning rate 0.5 * (target - Y)
    lst[7]=[(i*j) for i,j in zip(lst[0],lst[6])] #delta w1 = x1*error
    lst[8]=[(k*l) for k,l in zip(lst[1],lst[6])] #delta w2 = x2*error
    lst[9]=[(m*n) for m,n in zip(lst[2],lst[6])] #delta w0 = x0*error
    print(lst)
    if(lst[4]==lst[5]): #if target equals output then store the final wi values and break
        fw1=lst[10]
        fw2=lst[11]
        fw0=lst[12]
        fyin=lst[3]
        break
    else: #if not then update wi and continue iterating
        lst[10]=lst[10]+sum(lst[7]) #using wi := wi + delta wi formula to update the wi values
        lst[11]=lst[11]+sum(lst[8])
        lst[12]=lst[12]+sum(lst[9])
print("\n\nFinal\nw1={}, w2={}, w3={}, Yin(O)={}".format(fw1,fw2,fw0,fyin))
#print the final wi values
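For reference, the same batch update can be written more compactly. Below is a minimal NumPy sketch of the loop above (assuming NumPy is available, with an epoch cap added so the script terminates):

import numpy as np
X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]]) #columns: x1, x2, x0(bias)
t = np.array([0,1,1,0]) #XOR targets
w = np.zeros(3) #w1, w2, w0
for epoch in range(20): #capped, since XOR never converges
    y = (X @ w >= 0).astype(int) #yin followed by step activation
    if np.array_equal(y, t):
        break
    w += 0.5 * (t - y) @ X #batch update: wi := wi + sum(xi*error)
print("weights after", epoch + 1, "epochs:", w)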
Output:
Here, we can see that the epochs never end for the XOR gate: the outputs never match the targets because the XOR points are not linearly separable, so the loop keeps printing new error and wi values indefinitely.
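For contrast, the same update rule does converge on a linearly separable gate. Below is a minimal sketch using the AND targets [0,0,0,1] (the variable names are illustrative):

x1, x2, x0 = [0,0,1,1], [0,1,0,1], [1,1,1,1]
target = [0,0,0,1] #AND gate, which is linearly separable
w1 = w2 = w0 = 0.0
for epoch in range(100): #a small cap is enough here
    yin = [w1*a + w2*b + w0*c for a,b,c in zip(x1,x2,x0)]
    y = [1 if v >= 0 else 0 for v in yin] #step activation
    if y == target:
        break
    err = [0.5*(t - o) for t,o in zip(target, y)] #learning rate 0.5
    w1 += sum(a*e for a,e in zip(x1, err))
    w2 += sum(b*e for b,e in zip(x2, err))
    w0 += sum(c*e for c,e in zip(x0, err))
print("converged at epoch {}: w1={}, w2={}, w0={}".format(epoch, w1, w2, w0))

With the same step activation and learning rate, this loop stops after a handful of epochs, which shows that the endless training above is a property of XOR, not of the update rule.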