I am trying to use dynamic_decode in TensorFlow for an attention model. The original version is provided by https://github.com/tensorflow/nmt#decoder, but I get: AttributeError: 'Tensor' object has no attribute 'attention'.
import tensorflow as tf
from tensorflow.python.layers.core import Dense

learning_rate = 0.001
n_hidden = 128
total_epoch = 10000
num_units = 128
n_class = n_input = 47
num_steps = 8
embedding_size = 30

mode = tf.placeholder(tf.bool)
embed_enc = tf.placeholder(tf.float32, shape=[None, num_steps, 300])
embed_dec = tf.placeholder(tf.float32, shape=[None, num_steps, 300])
targets = tf.placeholder(tf.int32, shape=[None, num_steps])
enc_seqlen = tf.placeholder(tf.int32, shape=[None])
dec_seqlen = tf.placeholder(tf.int32, shape=[None])
decoder_weights = tf.placeholder(tf.float32, shape=[None, num_steps])

with tf.variable_scope('encode'):
    enc_cell = tf.contrib.rnn.BasicRNNCell(n_hidden)
    enc_cell = tf.contrib.rnn.DropoutWrapper(enc_cell, output_keep_prob=0.5)
    outputs, enc_states = tf.nn.dynamic_rnn(enc_cell, embed_enc,
                                            sequence_length=enc_seqlen,
                                            dtype=tf.float32, time_major=True)

attention_states = tf.transpose(outputs, [1, 0, 2])

# Create an attention mechanism
attention_mechanism = tf.contrib.seq2seq.LuongAttention(
    num_units, attention_states,
    memory_sequence_length=enc_seqlen)

decoder_cell = tf.contrib.rnn.BasicLSTMCell(num_units)
decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    decoder_cell, attention_mechanism,
    attention_layer_size=num_units)

helper = tf.contrib.seq2seq.TrainingHelper(
    embed_dec, dec_seqlen, time_major=True)

# Decoder
projection_layer = Dense(47, use_bias=False)
decoder = tf.contrib.seq2seq.BasicDecoder(
    decoder_cell, helper, enc_states,
    output_layer=projection_layer)

# Dynamic decoding
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
But I get an error when I run tf.contrib.seq2seq.dynamic_decode(decoder), and the error is shown below:
Traceback (most recent call last):
File "<ipython-input-19-0708495dbbfb>", line 27, in <module>
outputs, _ = tf.contrib.seq2seq.dynamic_decode(decoder)
File "D:\Anaconda3\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\decoder.py", line 286, in dynamic_decode
swap_memory=swap_memory)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2775, in while_loop
result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2604, in BuildLoop
pred, body, original_loop_vars, loop_vars, shape_invariants)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py", line 2554, in _BuildLoop
body_result = body(*packed_vars_for_body)
File "D:\Anaconda3\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\decoder.py", line 234, in body
decoder_finished) = decoder.step(time, inputs, state)
File "D:\Anaconda3\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\basic_decoder.py", line 139, in step
cell_outputs, cell_state = self._cell(inputs, state)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py", line 180, in __call__
return super(RNNCell, self).__call__(inputs, state)
File "D:\Anaconda3\lib\site-packages\tensorflow\python\layers\base.py", line 450, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "D:\Anaconda3\lib\site-packages\tensorflow\contrib\seq2seq\python\ops\attention_wrapper.py", line 1143, in call
cell_inputs = self._cell_input_fn(inputs, state.attention)
AttributeError: 'Tensor' object has no attribute 'attention'
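My reading of the traceback (an assumption, not confirmed against the TF source): AttentionWrapper expects its state argument to be an AttentionWrapperState namedtuple carrying an `attention` field, while `enc_states` returned by the plain dynamic_rnn encoder is a bare tensor, so `state.attention` fails inside `_cell_input_fn`. A minimal pure-Python sketch of the mismatch, with a stand-in namedtuple:

```python
from collections import namedtuple

# Simplified stand-in for tf.contrib.seq2seq.AttentionWrapperState:
# a structured state that carries the previous attention vector.
AttentionWrapperState = namedtuple("AttentionWrapperState",
                                   ["cell_state", "attention", "time"])

def cell_input_fn(inputs, attention):
    # The default cell_input_fn concatenates the input with the attention.
    return inputs + attention

def wrapper_step(inputs, state):
    # AttentionWrapper.call does roughly this; a bare tensor passed as
    # `state` has no `.attention` field, hence the AttributeError.
    return cell_input_fn(inputs, state.attention)

good_state = AttentionWrapperState(cell_state=[0.0], attention=[0.5], time=0)
print(wrapper_step([1.0], good_state))   # structured state: works

try:
    wrapper_step([1.0], [0.0])           # bare tensor-like state, as with enc_states
except AttributeError as err:
    print(err)                           # ... object has no attribute 'attention'
```

So the third positional argument of BasicDecoder (the initial state) has to be an AttentionWrapperState, not the raw encoder state.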
I tried installing the latest TensorFlow, 1.2.1, but it did not work. Thank you for your help.
UPDATE:
The problem is the initial_state of BasicDecoder: when I change
decoder = tf.contrib.seq2seq.BasicDecoder(
    decoder_cell, helper, enc_states,
    output_layer=projection_layer)
to:
decoder = tf.contrib.seq2seq.BasicDecoder(
    decoder_cell, helper,
    decoder_cell.zero_state(dtype=tf.float32, batch_size=batch_size),
    output_layer=projection_layer)
then it works. I have no idea whether this is a correct solution, because the initial_state seems to be set to zero. Thank you for your help.
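If throwing away the encoder state is the worry, one pattern I have seen (an assumption on my part, not tested against this exact graph) is to start from the wrapper's zero state and clone it with the encoder's final state, i.e. decoder_cell.zero_state(batch_size, tf.float32).clone(cell_state=enc_states). Since AttentionWrapperState behaves like a namedtuple, the idea can be sketched in plain Python:

```python
from collections import namedtuple

# Stand-in for tf.contrib.seq2seq.AttentionWrapperState (namedtuple-like).
AttentionWrapperState = namedtuple("AttentionWrapperState",
                                   ["cell_state", "attention", "time"])

def zero_state():
    # Analogue of decoder_cell.zero_state(batch_size, tf.float32):
    # everything, including the carried-over cell state, starts at zero.
    return AttentionWrapperState(cell_state=0.0, attention=0.0, time=0)

enc_states_value = 42.0  # stand-in for the encoder's final state

# Analogue of zero_state(...).clone(cell_state=enc_states): keep the zeroed
# attention/time fields but carry over the encoder state.
initial_state = zero_state()._replace(cell_state=enc_states_value)

print(initial_state.cell_state)  # 42.0 -- encoder state preserved
print(initial_state.attention)   # 0.0  -- attention starts at zero
```

Note that in this graph the encoder is a BasicRNNCell while the decoder is a BasicLSTMCell, so the state structures would also have to match for such a clone to type-check.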
I tried this, but it showed the same error. Can you look at my update? Is this the right way? – Shichen