python - How to learn two sequences simultaneously through LSTM in Tensorflow/TFLearn?


I am learning LSTM-based seq2seq models on the TensorFlow platform. I can train a model on the given simple seq2seq examples.

However, in cases where I have to learn two sequences at once from a given sequence (e.g., learning the previous sequence and the next sequence from the current sequence simultaneously), how can I do that? I.e., how do I compute a combined error over both sequences and backpropagate the same error to both sequences?

Here's a snippet of the LSTM code I'm using (mostly taken from the PTB example: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/rnn/ptb/ptb_word_lm.py#L132):

    # Project the LSTM outputs onto the vocabulary.
    output = tf.reshape(tf.concat(1, outputs), [-1, size])
    softmax_w = tf.get_variable("softmax_w", [size, word_vocab_size])
    softmax_b = tf.get_variable("softmax_b", [word_vocab_size])
    logits = tf.matmul(output, softmax_w) + softmax_b
    # Per-example cross-entropy, averaged over the batch.
    loss = tf.nn.seq2seq.sequence_loss_by_example(
        [logits],
        [tf.reshape(self._targets, [-1])],
        [weights])
    self._cost = cost = tf.reduce_sum(loss) / batch_size
    self._final_state = state
    # Clip gradients by global norm and apply plain SGD.
    self._lr = tf.Variable(0.0, trainable=False)
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),
                                      config.max_grad_norm)
    optimizer = tf.train.GradientDescentOptimizer(self.lr)
    self._train_op = optimizer.apply_gradients(zip(grads, tvars))

It seems to me that you want to have a single encoder and multiple decoders (e.g., 2, for 2 output sequences), right? There is one2many in seq2seq for exactly this use-case; see the sketch below.
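A minimal sketch of how that could be wired up, assuming the legacy tf.nn.seq2seq.one2many_rnn_seq2seq signature from the TF 0.x seq2seq module (treat the exact argument names, and the sizes below, as assumptions to adapt to your data):

    import tensorflow as tf

    # Hypothetical sizes -- adjust to your vocabulary and sequence lengths.
    vocab_size = 10000
    embedding_size = 128
    seq_len = 20

    cell = tf.nn.rnn_cell.BasicLSTMCell(256)

    # One shared input sequence, two target sequences ("prev" and "next").
    encoder_inputs = [tf.placeholder(tf.int32, [None]) for _ in range(seq_len)]
    decoder_inputs = {
        "prev": [tf.placeholder(tf.int32, [None]) for _ in range(seq_len)],
        "next": [tf.placeholder(tf.int32, [None]) for _ in range(seq_len)],
    }

    # One encoder feeds one decoder per dict entry; the outputs come back
    # keyed the same way ("prev" / "next").
    outputs_dict, state_dict = tf.nn.seq2seq.one2many_rnn_seq2seq(
        encoder_inputs, decoder_inputs, cell,
        num_encoder_symbols=vocab_size,
        num_decoder_symbols_dict={"prev": vocab_size, "next": vocab_size},
        embedding_size=embedding_size)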

As for the loss, I think you can just add the losses for both sequences. Or do you want to weight them somehow? I think it's a good idea to add them, and then compute gradients and everything else as if the added losses were the loss.
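Concretely, sticking with the PTB-style code from the question, combining the two losses could look like this (logits_prev/logits_next, targets_prev/targets_next, and the weight tensors are hypothetical names for the per-decoder tensors, each built the same way as in your snippet):

    # Per-decoder losses, each built exactly like the single-sequence case.
    loss_prev = tf.nn.seq2seq.sequence_loss_by_example(
        [logits_prev], [tf.reshape(targets_prev, [-1])], [weights_prev])
    loss_next = tf.nn.seq2seq.sequence_loss_by_example(
        [logits_next], [tf.reshape(targets_next, [-1])], [weights_next])

    # Sum the two costs (or weight them, e.g. alpha * prev + (1 - alpha) * next).
    cost = (tf.reduce_sum(loss_prev) + tf.reduce_sum(loss_next)) / batch_size

    # Backpropagate the combined cost through the shared encoder
    # and both decoders in one step.
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars),
                                      config.max_grad_norm)
    optimizer = tf.train.GradientDescentOptimizer(lr)
    train_op = optimizer.apply_gradients(zip(grads, tvars))

Since both decoders share the encoder's variables, minimizing the summed cost automatically sends gradient contributions from both sequences back into the shared encoder.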

