I have been learning TensorFlow, and I'm trying to put together a network in the following format:
```python
import tensorflow as tf

n_inputs = 16
n_hidden = 64 * 2
n_outputs = 3

x = tf.placeholder(tf.float32, shape=[1, n_inputs])
w = tf.Variable(tf.truncated_normal([n_inputs, n_hidden]))
b = tf.Variable(tf.zeros([n_hidden]))
hidden_layer = tf.nn.relu(tf.matmul(x, w) + b)
w2 = tf.Variable(tf.truncated_normal([n_hidden, n_outputs]))
b2 = tf.Variable(tf.zeros([n_outputs]))
logits = tf.matmul(hidden_layer, w2) + b2
```
I have 1784 sets of training data. Is it valid to train on the same data repeatedly? I would guess this results in over-fitting the training data if it is repeated too many times.
I am training it like this:
print "training" in range(100): errs = [] xt, yt in zip(train[:n_dat-50], test[:n_dat-50]): _, err = sess.run([train_step, cost], feed_dict={x: [xt], y_: [yt]}) errs.append(err) print "error: %.5f" % np.mean(errs)
I'm looking at using L2 regularisation and dropout to improve classification. Any other tips for improving training with low amounts of data would be helpful.
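In case it helps later readers, here is a minimal sketch of what L2 regularisation and dropout could look like on top of the graph above (TF 1.x API; the `keep_prob` placeholder, the `beta` value, and the assumed shape of `y_` are illustrative choices, not from the original code):

```python
# Assumes x, w, b, w2, b2 and n_outputs are defined as in the snippet above.
keep_prob = tf.placeholder(tf.float32)  # feed e.g. 0.5 when training, 1.0 when evaluating
y_ = tf.placeholder(tf.float32, shape=[1, n_outputs])  # assumed one-hot labels

hidden_layer = tf.nn.relu(tf.matmul(x, w) + b)
hidden_dropped = tf.nn.dropout(hidden_layer, keep_prob)  # dropout on the hidden layer
logits = tf.matmul(hidden_dropped, w2) + b2

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
beta = 0.01  # illustrative L2 strength; tune it on held-out data
cost = cross_entropy + beta * (tf.nn.l2_loss(w) + tf.nn.l2_loss(w2))
```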
You might consider adding noise. Add some random noise to the inputs (perhaps noise with the same mean and variance as the data, though it depends). This prevents overfitting and gives you "more" training data (make sure you put some data aside in order to validate generalised training success).
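A rough sketch of what that could look like with the training loop from the question (the `noise_std` value is an illustrative assumption; `sess`, `train_step`, `train`, `test`, `x`, and `y_` are the objects from the question's code):

```python
import numpy as np

noise_std = 0.1  # illustrative; pick something in line with the feature scale

for epoch in range(100):
    for xt, yt in zip(train[:n_dat-50], test[:n_dat-50]):
        # jitter each input with zero-mean Gaussian noise every time it is seen,
        # so the network never sees exactly the same example twice
        xt_noisy = np.asarray(xt) + np.random.normal(0.0, noise_std, size=np.shape(xt))
        sess.run(train_step, feed_dict={x: [xt_noisy], y_: [yt]})
```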
Also, it is sometimes possible to create artificial data sets that follow the same logic, which can be used for pretraining.