Following along with the TensorFlow examples (lab 3)


Linear Regression: H(x) = Wx + b, continued...


1. Plotting how cost changes with W

# Load TensorFlow

import tensorflow as tf


# Load Matplotlib

import matplotlib

# Set the Matplotlib backend

matplotlib.use('TkAgg')

import matplotlib.pyplot as plt


X = [1, 2, 3]

Y = [1, 2, 3]


W = tf.placeholder(tf.float32)


# H(x) = Wx

hypothesis = X * W


# Cost: 1/m * ∑(H(x) - y)^2

cost = tf.reduce_mean(tf.square(hypothesis - Y))


# Create a Session

sess = tf.Session()


# Initialize global variables

sess.run(tf.global_variables_initializer())

W_val = []

cost_val = []


# Sweep W (the slope) from -3 to 5 in steps of 0.1 and record the cost at each value

for i in range(-30, 50):

    feed_W = i * 0.1

    curr_cost, curr_W = sess.run([cost, W], feed_dict={W: feed_W})

    W_val.append(curr_W)

    cost_val.append(curr_cost)


# Plot with matplotlib

plt.plot(W_val, cost_val)

plt.show()

- Result: a convex, bowl-shaped cost curve whose minimum sits at W = 1 (where cost = 0)
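The same sweep can be checked without TensorFlow at all. A minimal plain-Python sketch of the cost function over the same W range (variable names here are illustrative):

```python
# Pure-Python check of the cost curve, using the same data as above
X = [1, 2, 3]
Y = [1, 2, 3]

def cost(W):
    # 1/m * sum((W*x - y)^2)
    return sum((W * x - y) ** 2 for x, y in zip(X, Y)) / len(X)

# Same sweep as the TensorFlow loop: W from -3.0 to 4.9 in steps of 0.1
W_vals = [i * 0.1 for i in range(-30, 50)]
cost_vals = [cost(w) for w in W_vals]

# The minimum of the sweep sits at W = 1.0, where the cost is 0
best_W = W_vals[cost_vals.index(min(cost_vals))]
print(best_W)
```

This confirms the shape of the plot: the cost is a parabola in W with its vertex at (1, 0).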


2. Implementing GradientDescentOptimizer by hand

  Before


  # Gradient Descent Magic

 optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)

 train = optimizer.minimize(cost)


  After

# Gradient Descent implemented by hand

# How much of each update to apply (the learning rate)
learning_rate = 0.1

# Derivative of the cost with respect to W (the slope of the cost curve);
# the constant factor 2 is dropped here since it only rescales the learning rate
gradient = tf.reduce_mean((W * X - Y) * X)

# Apply the update to W (the slope of the linear function)
descent = W - learning_rate * gradient
train = W.assign(descent)
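The update rule above can be sketched in plain Python to see that it actually converges (no TensorFlow; the loop length of 21 steps is an arbitrary choice for illustration):

```python
# Manual gradient descent with the same update rule:
# W := W - learning_rate * mean((W*x - y)*x)
X = [1, 2, 3]
Y = [1, 2, 3]
learning_rate = 0.1

W = 5.0  # arbitrary starting point
for step in range(21):
    gradient = sum((W * x - y) * x for x, y in zip(X, Y)) / len(X)
    W = W - learning_rate * gradient

print(W)  # converges toward 1.0
```

Each step shrinks the error (W - 1) by a constant factor, so W approaches 1 geometrically.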




3. Training with a specified initial value of W

 Before


# Start W from a random initial value

 W = tf.Variable(tf.random_normal([1]), name="weight") 


 After


# Run with a specified initial value of W


 # W = tf.Variable(5.0)

 W = tf.Variable(-3.0)
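Whichever of the two suggested starting points is used, gradient descent on this convex cost lands at the same minimum. A plain-Python sketch (the helper name `descend` is illustrative):

```python
# The same manual update converges from either suggested start
X = [1, 2, 3]
Y = [1, 2, 3]
learning_rate = 0.1

def descend(W, steps=30):
    for _ in range(steps):
        gradient = sum((W * x - y) * x for x, y in zip(X, Y)) / len(X)
        W -= learning_rate * gradient
    return W

W_from_5 = descend(5.0)
W_from_minus3 = descend(-3.0)
print(W_from_5, W_from_minus3)  # both approach 1.0
```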




4. Modifying the gradient values

 Before


  # Gradient Descent Magic

 optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)

 train = optimizer.minimize(cost)


 After

 # Compute the gradient values and apply them manually


 gvs = optimizer.compute_gradients(cost)

 # TODO: modify gvs

 train = optimizer.apply_gradients(gvs)




# Compute the gradient values and apply them by hand (compute_gradients, apply_gradients)

# Load TensorFlow
import tensorflow as tf

x_data = [1, 2, 3]
y_data = [1, 2, 3]

# W = tf.Variable(5.0)
W = tf.Variable(-3.0)
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

# H(x) = Wx
hypothesis = X * W

# Cost: 1/m * ∑(H(x) - y)^2
cost = tf.reduce_mean(tf.square(hypothesis - Y))
gradient = tf.reduce_mean(((W * X - Y) * X) * 2)

# Gradient Descent Magic
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)

# Compute the gradient values and apply them manually
# train = optimizer.minimize(cost)
gvs = optimizer.compute_gradients(cost)
# TODO: modify gvs
train = optimizer.apply_gradients(gvs)

# Create a Session
sess = tf.Session()

# Initialize global variables
sess.run(tf.global_variables_initializer())

for step in range(10):
    print(step, sess.run([gradient, W, gvs], feed_dict={X: x_data, Y: y_data}))
    sess.run(train, feed_dict={X: x_data, Y: y_data})

'''
Result
0 [-37.333332, -3.0, [(-37.333336, -3.0)]]
1 [-2.4888866, 0.73333359, [(-2.4888866, 0.73333359)]]
2 [-0.1659257, 0.98222226, [(-0.16592571, 0.98222226)]]
3 [-0.011061668, 0.99881482, [(-0.011061668, 0.99881482)]]
4 [-0.00073742867, 0.99992096, [(-0.00073742867, 0.99992096)]]
5 [-4.9630802e-05, 0.9999947, [(-4.9630802e-05, 0.9999947)]]
6 [-3.0994415e-06, 0.99999964, [(-3.0994415e-06, 0.99999964)]]
7 [-6.7551929e-07, 0.99999994, [(-6.7551935e-07, 0.99999994)]]
8 [0.0, 1.0, [(0.0, 1.0)]]
9 [0.0, 1.0, [(0.0, 1.0)]]
'''
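The `TODO: modify gvs` step is where the computed (gradient, variable) pairs can be edited before they are applied; a common modification is gradient clipping. Below is a plain-Python sketch of the compute/apply split with that modification filled in (no TensorFlow; the helper names `compute_gradients` and `apply_gradients` mimic the optimizer methods but are illustrative):

```python
# Plain-Python sketch of the compute_gradients / apply_gradients split,
# with the "TODO" filled in by clipping each gradient to [-1, 1]
X = [1, 2, 3]
Y = [1, 2, 3]
learning_rate = 0.1
W = -3.0

def compute_gradients(W):
    # Returns (gradient, variable) pairs, like optimizer.compute_gradients(cost);
    # this is the full derivative, including the factor of 2
    grad = sum((W * x - y) * x * 2 for x, y in zip(X, Y)) / len(X)
    return [(grad, W)]

def apply_gradients(gvs):
    # W := W - learning_rate * grad, like optimizer.apply_gradients(gvs)
    grad, var = gvs[0]
    return var - learning_rate * grad

for step in range(100):
    gvs = compute_gradients(W)
    # "TODO: modify gvs" -- clip each gradient into [-1, 1] before applying
    gvs = [(max(-1.0, min(1.0, g)), v) for g, v in gvs]
    W = apply_gradients(gvs)

print(W)  # still converges to 1.0, just with bounded step sizes early on
```

Clipping bounds the early updates (compare the large first gradient, -37.33, in the result above), while leaving the late, small gradients untouched.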



# All of the source code here is on GitHub -> https://github.com/ndukwon/learning_TensorFlow



Reference

- https://www.youtube.com/watch?v=Y0EF9VqRuEA&feature=youtu.be
