# PYTHON ADALINE TRAINING TO IMPLEMENT OR GATE

In the previous article, we saw that the perceptron is a linear classifier and cannot learn a non-linear decision boundary. To overcome this, we have ADALINE, which stands for ADAptive LInear NEuron. ADALINE is also used for binary classification, and it uses gradient descent to reduce the error and form an accurate decision boundary. In this method, the same weights are kept throughout an epoch, because the error is calculated between the target and yin, i.e. the net input before it passes through the activation function, whereas in the perceptron the error is calculated between the target and the final output. The error therefore tells us not just whether we were wrong, but by how much, which makes ADALINE more powerful than the perceptron. Here, we build an ADALINE model to implement the OR gate, using bipolar inputs.
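Before looking at the full program, the ADALINE delta rule for a single sample can be sketched in a few lines. This is a minimal illustration, assuming the same learning rate of 0.2 used later, with the bias handled as an extra input fixed at 1:

```python
eta = 0.2
w = [0.0, 0.0, 0.0]   # [w1, w2, w0], where w0 is the bias weight
x = [-1, -1, 1]       # [x1, x2, x0], with x0 = 1 as the bias input
target = -1           # OR(-1, -1) in bipolar form

yin = sum(wi * xi for wi, xi in zip(w, x))  # net input, BEFORE activation
error = target - yin                        # ADALINE error uses yin, not the thresholded output
w = [wi + eta * error * xi for wi, xi in zip(w, x)]  # delta rule: wi := wi + eta*(t - yin)*xi
print(w)
```

Note that the update direction comes from `(target - yin)`, a real-valued quantity, so the weights move in proportion to how far off the net input was, not just whether the thresholded output was wrong.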

### CODE:

```python
lst = [[-1,-1,1,1], [-1,1,-1,1], [1,1,1,1], [0,0,0,0], [], [], [], [], [], [], 0, 0, 0]
# list is in the order of x1, x2, x0 (bias input), yin, Y, target, error,
# delta w1, delta w2, delta w0, w1, w2, w0

def fund(x):  # activation function
    if x < 0:
        return -1
    else:
        return 1

fw1 = 0
fw2 = 0
fw0 = 0
fyin = []  # final yin values
sse = []
epoch = []
print("OR GATE\n")
c = 0
lst[5] = [-1, 1, 1, 1]  # OR output (bipolar)
while 1:
    lst[3] = [(lst[10]*i) + (lst[11]*j) + (lst[12]*k)
              for i, j, k in zip(lst[0], lst[1], lst[2])]        # yin
    lst[6] = [0.2*(t - o) for t, o in zip(lst[5], lst[3])]       # error scaled by learning rate 0.2
    sse_cal = sum([i**2 for i in lst[6]])                        # SSE
    sse.append(sse_cal)
    epoch.append(c)
    lst[7] = [(i*j) for i, j in zip(lst[0], lst[6])]             # delta w1
    lst[8] = [(k*l) for k, l in zip(lst[1], lst[6])]             # delta w2
    lst[9] = [(m*n) for m, n in zip(lst[2], lst[6])]             # delta w0
    lst[4] = [fund(p) for p in lst[3]]                           # Y

    print("Epoch {} = {}\n".format(c, lst))
    if lst[4] == lst[5]:  # if target equals output, store the final w values and break
        fw1 = lst[10] + sum(lst[7])  # using wi := wi + delta wi to update the wi values
        fw2 = lst[11] + sum(lst[8])
        fw0 = lst[12] + sum(lst[9])
        fyin = [(lst[10]*i) + (lst[11]*j) + (lst[12]*k)
                for i, j, k in zip(lst[0], lst[1], lst[2])]
        break
    else:  # if not, update the wi values and continue iterating
        lst[10] = lst[10] + sum(lst[7])
        lst[11] = lst[11] + sum(lst[8])
        lst[12] = lst[12] + sum(lst[9])
    c += 1

print("\n\nFinal\nw1={}, w2={}, w0={}, Yin(O)={}".format(fw1, fw2, fw0, fyin))
# print the final wi values
```
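Once training stops, the learned weights can be sanity-checked by recomputing the net input for each pattern and applying the activation function. A minimal check, using w1 = w2 = w0 = 0.4 as example weight values (one possible intermediate result of this training, used here only for illustration):

```python
def fund(x):  # same bipolar step activation as in the program above
    return -1 if x < 0 else 1

w1, w2, w0 = 0.4, 0.4, 0.4     # example weights; actual trained values depend on the run
x1 = [-1, -1, 1, 1]
x2 = [-1, 1, -1, 1]
outputs = [fund(w1*a + w2*b + w0) for a, b in zip(x1, x2)]
print(outputs)  # should match the bipolar OR targets [-1, 1, 1, 1]
```

The `sse` and `epoch` lists collected during training are not printed, but they can be plotted against each other to visualize how the sum of squared errors falls as the epochs progress.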