Using the delta rule in Keras

I'm trying to implement a linear single-layer perceptron (i.e. no hidden layers, all inputs connected to all outputs, linear activation function) and train it one data point at a time with the delta rule, but I don't get the results I expect. I'm using mean squared error as my loss function, whose derivative should yield weight updates of simply learning_rate * error (* 2), but somehow the results look very different from my manual calculation. What am I missing?
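For clarity, this is the update I expect from differentiating the squared error of a linear unit: `delta_rule_step` is a hypothetical helper of mine, and the `factor` parameter stands in for the optional factor of 2 that the derivative produces.

```python
import numpy as np

def delta_rule_step(weights, x, y, lr=0.01, factor=1.0):
    """One delta-rule update for a linear layer (weights: inputs x outputs, no bias)."""
    out = x @ weights            # linear activation
    error = y - out
    # d/dW of (y - xW)^2 is -2 * outer(x, error); 'factor' covers the optional 2
    return weights + lr * factor * np.outer(x, error)

w = np.zeros((3, 2))
w = delta_rule_step(w, np.array([1.0, 0.0, 1.0]), np.array([1.0, 0.0]))
# first step moves w[0,0] and w[2,0] by lr * error = 0.01 each
```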
import numpy as np
from keras.models import Sequential
from keras.optimizers import SGD
from keras.layers import Dense
features = np.array([[1,0,1],[0,1,1]])
features = np.tile(features, (500,1))
labels = np.array([[1,0],[0,1]])
labels = np.tile(labels, (500,1))
network = Sequential()
network.add(Dense(2, input_dim = 3, init = "zero", activation = "linear"))
network.compile(loss = "mse", optimizer = SGD(lr = 0.01))
network.fit(features, labels, nb_epoch = 1, batch_size = 1, shuffle = False)
network.get_weights()
# [[ 0.59687883, -0.39686254],
#  [-0.39689422,  0.59687883],
#  [ 0.19998412,  0.20001581]]
# manually
weights = np.array([[0.0,0.0],[0.0,0.0],[0.0,0.0]])
for i in range(500):
    # first sample: x = [1,0,1], target = [1,0]
    summed_out1 = weights[0,0] + weights[2,0]
    summed_out2 = weights[0,1] + weights[2,1]
    change_out1 = 0.01 * (1.0 - summed_out1)
    change_out2 = 0.01 * (0.0 - summed_out2)
    weights[0,0] += change_out1
    weights[2,0] += change_out1
    weights[0,1] += change_out2
    weights[2,1] += change_out2
    # second sample: x = [0,1,1], target = [0,1]
    summed_out1 = weights[1,0] + weights[2,0]
    summed_out2 = weights[1,1] + weights[2,1]
    change_out1 = 0.01 * (0.0 - summed_out1)
    change_out2 = 0.01 * (1.0 - summed_out2)
    weights[1,0] += change_out1
    weights[2,0] += change_out1
    weights[1,1] += change_out2
    weights[2,1] += change_out2
weights
# [[ 0.66346388, -0.33011442],
#  [-0.33014677,  0.66346388],
#  [ 0.33331711,  0.33334946]]