Loss not converging in polynomial regression in TensorFlow


    import numpy as np
    import tensorflow as tf

    # input data
    x_input = np.linspace(0, 10, 1000)
    y_input = x_input + np.power(x_input, 2)

    # model parameters
    w = tf.Variable(tf.random_normal([2, 1]), name='weight')
    # bias
    b = tf.Variable(tf.random_normal([1]), name='bias')

    # placeholders
    # x = tf.placeholder(tf.float32, shape=(None, 2))
    x = tf.placeholder(tf.float32, shape=[None, 2])
    y = tf.placeholder(tf.float32)
    x_modified = np.zeros([1000, 2])
    x_modified[:, 0] = x_input
    x_modified[:, 1] = np.power(x_input, 2)

    # model
    # x_new = tf.constant([x_input, np.power(x_input, 2)])
    y_pred = tf.add(tf.matmul(x, w), b)

    # loss
    loss = tf.reduce_mean(tf.square(y_pred - y))
    # training algorithm
    optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
    # initializing variables
    init = tf.initialize_all_variables()

    # starting session
    sess = tf.Session()
    sess.run(init)

    epoch = 100

    for step in xrange(epoch):
        _, c = sess.run([optimizer, loss], feed_dict={x: x_modified, y: y_input})
        if step % 50 == 0:
            print c

    print "model parameters:"
    print sess.run(w)
    print "bias: %f" % sess.run(b)

I'm trying to implement polynomial regression (quadratic) in TensorFlow. The loss isn't converging. Please help me out with this. Similar logic is working for linear regression, though!

First, there is a problem with the shapes of y_pred and y:

  • y has an unknown shape, and it is fed an array of shape (1000,)
  • y_pred has shape (1000, 1)
  • y - y_pred will therefore have shape (1000, 1000), due to broadcasting

This small piece of code proves the point:

    a = tf.zeros([1000])       # shape (1000,)
    b = tf.zeros([1000, 1])    # shape (1000, 1)
    print (a - b).get_shape()  # prints (1000, 1000)

You should use consistent shapes:

    y_input = y_input.reshape((1000, 1))
    y = tf.placeholder(tf.float32, shape=[None, 1])
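If it helps to sanity-check this outside of TensorFlow, NumPy broadcasts the same way, so a plain NumPy sketch of the two cases looks like this:

```python
import numpy as np

y = np.zeros((1000,))          # 1-D target, as fed originally
y_pred = np.zeros((1000, 1))   # model output, a column vector

# broadcasting turns the elementwise difference into a full matrix
assert (y_pred - y).shape == (1000, 1000)

# after reshaping the target to a column, the shapes line up
y = y.reshape((1000, 1))
assert (y_pred - y).shape == (1000, 1)
```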

Anyway, the loss is exploding because you have high input values (between 0 and 100; you should normalize them), and therefore a high loss (around 2000 at the beginning of training).
The gradient is high, the parameters explode, and the loss gets infinite.
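As a sketch of the normalization step (plain NumPy; the variable names are just illustrative): standardizing each feature column to zero mean and unit variance keeps the two columns, which otherwise span roughly [0, 10] and [0, 100], on the same scale.

```python
import numpy as np

# build the quadratic feature matrix as in the question
x_input = np.linspace(0, 10, 1000)
x_modified = np.stack([x_input, x_input ** 2], axis=1)  # columns peak at 10 and 100

# standardize each column: zero mean, unit variance
x_norm = (x_modified - x_modified.mean(axis=0)) / x_modified.std(axis=0)
```

The same mean and standard deviation must be reused at prediction time, otherwise the learned weights apply to a different scale than the inputs.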

The quickest fix is to lower the learning rate (1e-5 converges for me, albeit very slowly at the end). You can make it higher after the loss converges to around 1.
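This is not the original TensorFlow graph, but the same gradient-descent update written in plain NumPy shows that a rate of 1e-5 stays stable on these unnormalized features, while needing many more than 100 steps to get the loss down (a sketch; variable names are illustrative):

```python
import numpy as np

x_input = np.linspace(0, 10, 1000)
y_input = (x_input + x_input ** 2).reshape(1000, 1)

X = np.stack([x_input, x_input ** 2], axis=1)  # (1000, 2), unnormalized
w = np.zeros((2, 1))
b = 0.0
lr = 1e-5  # small enough not to diverge on these feature magnitudes

for step in range(10000):
    residual = X @ w + b - y_input            # (1000, 1), shapes consistent
    w -= lr * 2 * X.T @ residual / len(X)     # gradient of mean squared error w.r.t. w
    b -= lr * 2 * residual.mean()             # gradient w.r.t. b

loss = np.mean((X @ w + b - y_input) ** 2)
```

With the 0.01 rate from the question, the same loop overflows within a few steps, which is the explosion described above.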

