
I am trying to forecast stock market data with a recurrent neural network in TensorFlow. There are 5 features and more than 5000 rows in the data file; the label is the adjusted close. Training fails with: InvalidArgumentError: logits and labels must be same size: logits_size=[128,1] labels_size=[1,128]

This is sentdex's RNN code, adapted for my input file:

import tensorflow as tf
import numpy as np
from preprocess import create_feature_sets_and_labels
from tensorflow.python.ops import rnn, rnn_cell

train_x, train_y, test_x, test_y = create_feature_sets_and_labels()

hm_epochs = 10
n_classes = 1
batch_size = 128
chunk_size = 5   # 5 features per row
n_chunks = 1
rnn_size = 128

x = tf.placeholder('float', [None, n_chunks, chunk_size])
y = tf.placeholder('float')


def recurrent_neural_network(x):
    layer = {'weights': tf.Variable(tf.random_normal([rnn_size, n_classes])),
             'biases': tf.Variable(tf.random_normal([n_classes]))}

    # Turn [batch, n_chunks, chunk_size] into a list of n_chunks tensors
    # of shape [batch, chunk_size], as rnn.rnn expects.
    x = tf.transpose(x, [1, 0, 2])
    x = tf.reshape(x, [-1, chunk_size])
    x = tf.split(0, n_chunks, x)

    lstm_cell = rnn_cell.BasicLSTMCell(rnn_size)
    outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)

    # Project the last LSTM output down to n_classes values.
    output = tf.add(tf.matmul(outputs[-1], layer['weights']), layer['biases'])
    return output


def train_neural_network(x):
    prediction = recurrent_neural_network(x)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y))
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())

        for epoch in range(hm_epochs):
            epoch_loss = 0
            i = 0
            while i < len(train_x):
                start = i
                end = i + batch_size
                batch_x = np.array(train_x[start:end])
                batch_y = np.array(train_y[start:end])
                batch_x = batch_x.reshape((batch_size, n_chunks, chunk_size))

                _, c = sess.run([optimizer, cost],
                                feed_dict={x: batch_x, y: batch_y})
                epoch_loss += c
                i += batch_size
            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: test_x, y: test_y}))

train_neural_network(x)

The traceback shows this:

Traceback (most recent call last):
  File "rnn.py", line 70, in <module>
    train_neural_network(x)
  File "rnn.py", line 60, in train_neural_network
    y: batch_y})
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 717, in run
    run_metadata_ptr)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 915, in _run
    feed_dict_string, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 965, in _do_run
    target_list, options, run_metadata)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 985, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: logits and labels must be same size: logits_size=[128,1] labels_size=[1,128]
    [[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_1, Reshape_2)]]

Caused by op u'SoftmaxCrossEntropyWithLogits', defined at:
  File "rnn.py", line 70, in <module>
    train_neural_network(x)
  File "rnn.py", line 42, in train_neural_network
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 676, in softmax_cross_entropy_with_logits
    precise_logits, labels, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 1744, in _softmax_cross_entropy_with_logits
    features=features, labels=labels, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 749, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2380, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1298, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[128,1] labels_size=[1,128]
    [[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_1, Reshape_2)]]

I don't know what the logits size or labels size should be, so I can't wrap my head around this error. Please help!
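For context: the logits are the raw outputs of recurrent_neural_network, and the two sizes in the message are the runtime shapes of prediction and y. One quick way to see where such shapes come from is to print what TensorFlow infers while the graph is built, for example:

# Diagnostic sketch: inspect the statically inferred shapes at graph-construction time.
prediction = recurrent_neural_network(x)
print('logits shape:', prediction.get_shape())   # (?, 1) - one value per example
print('labels shape:', y.get_shape())            # <unknown> - y was declared without a shape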


Did you ever solve this? I'm running into the same problem. – zerogravty

Answer


Change the last line of def recurrent_neural_network(x) to:

output = tf.transpose(tf.add(tf.matmul(outputs[-1], layer['weights']), layer['biases']))
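This transposes the logits from [128, 1] to [1, 128], so they match the labels shape reported in the error message.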

The error is in this line:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y)) 

There are two problems:

  1. The shape error, which can be fixed by reshaping y to [128]:

    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
        logits=prediction, labels=tf.reshape(y, [batch_size]))) 
    
  2. The code uses a softmax cross-entropy loss with a single output class. The softmax of a single value is always 1, so every prediction will be 1.0 and the model will not learn anything. Consider changing the model to predict between two or more classes, or treat the task as a regression (see the sketch after this list).
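If the goal is to predict the adjusted close itself, the second option amounts to swapping the softmax loss for a regression loss. A minimal sketch, assuming y is fed the target prices as a flat array of batch_size values:

# Regression sketch, using the same TF 0.x API as the question:
# score the raw network output against the target price directly.
prediction = recurrent_neural_network(x)                 # shape [batch_size, 1]
targets = tf.reshape(y, [-1, 1])                         # align labels with the logits
cost = tf.reduce_mean(tf.square(prediction - targets))   # mean squared error
optimizer = tf.train.AdamOptimizer().minimize(cost)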
