Linear Regression 2 and Placeholders (TF 1.X version)
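The examples below use the TensorFlow 1.x graph/session API. If only TensorFlow 2.x is installed, the same code can usually still be run through the v1 compatibility module (a minimal setup sketch, an assumption about your environment rather than something covered in the original lecture):

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()  # restore TF 1.x graph/session behavior
# ...after this, the 1.x code below runs unchanged.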

 

import tensorflow as tf

# X and Y data
x_train = [1, 2, 3]
y_train = [1, 2, 3]

W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Our hypothesis XW+b
hypothesis = x_train * W + b

# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - y_train))

# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)
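# Each run of 'train' computes the gradients of cost w.r.t. W and b and
# applies one update step: W <- W - learning_rate * d(cost)/dW (same for b).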

# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())

# Fit the line
for step in range(2001):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(cost), sess.run(W), sess.run(b))

 

'''
0 2.82329 [ 2.12867713] [-0.85235667]
20 0.190351 [ 1.53392804] [-1.05059612]
40 0.151357 [ 1.45725465] [-1.02391243]
...
1920 1.77484e-05 [ 1.00489295] [-0.01112291]
1940 1.61197e-05 [ 1.00466311] [-0.01060018]
1960 1.46397e-05 [ 1.004444] [-0.01010205]
1980 1.32962e-05 [ 1.00423515] [-0.00962736]
2000 1.20761e-05 [ 1.00403607] [-0.00917497]
'''
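The cost printed above is the mean squared error between the hypothesis XW + b and y_train. As a quick sanity check, the final number can be reproduced by hand with NumPy (a minimal sketch using the step-2000 values printed above):

import numpy as np

x_train = np.array([1, 2, 3], dtype=np.float32)
y_train = np.array([1, 2, 3], dtype=np.float32)
W, b = 1.00403607, -0.00917497                    # values printed at step 2000

hypothesis = x_train * W + b                      # XW + b
cost = np.mean(np.square(hypothesis - y_train))   # mean squared error
print(cost)  # ~1.2076e-05, matching the step-2000 cost above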

 

 

Placeholders (TF 1.X version)
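A placeholder is a graph node that receives its value through feed_dict each time the session runs; shape=[None] means it accepts a 1-D input of any length. A minimal toy sketch of the idea (hypothetical example, not from the lecture):

import tensorflow as tf

a = tf.placeholder(tf.float32, shape=[None])  # 1-D input of any length
double = a * 2
with tf.Session() as sess:
    print(sess.run(double, feed_dict={a: [1, 2]}))      # [2. 4.]
    print(sess.run(double, feed_dict={a: [1, 2, 3]}))   # [2. 4. 6.]

The full linear-regression version then looks like this: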

import tensorflow as tf
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

X = tf.placeholder(tf.float32, shape=[None])
Y = tf.placeholder(tf.float32, shape=[None])  # shape=[None]: a 1-D input of any length is allowed

# Our hypothesis XW+b
hypothesis = X * W + b
# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))
# Minimize
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)

# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())

# Fit the line
for step in range(2001):
    cost_val, W_val, b_val, _ = sess.run([cost, W, b, train],
                                         feed_dict={X: [1, 2, 3], Y: [1, 2, 3]})
    if step % 20 == 0:
        print(step, cost_val, W_val, b_val)

 

'''
...
1980 1.32962e-05 [ 1.00423515] [-0.00962736]
2000 1.20761e-05 [ 1.00403607] [-0.00917497]
'''

# Testing our model
print(sess.run(hypothesis, feed_dict={X: [5]}))
print(sess.run(hypothesis, feed_dict={X: [2.5]}))
print(sess.run(hypothesis, feed_dict={X: [1.5, 3.5]}))

'''
[ 5.0110054]
[ 2.50091505]
[ 1.49687922  3.50495124]
'''
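Since the learned parameters are W ≈ 1.004 and b ≈ -0.009, the model has essentially recovered y = x, so the predictions track the inputs: for example, 1.00403607 × 5 + (-0.00917497) ≈ 5.011, which matches the first line of output above.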

Source: https://www.youtube.com/channel/UCML9R2ol-l0Ab9OXoNnr7Lw

 
