

How to minimize cost

Gradient descent algorithm

• Minimize cost function

• Gradient descent is used in many minimization problems

• For a given cost function, cost(W, b), it will find the W and b that minimize the cost

• It can be applied to more general functions: cost(w1, w2, …)

How does it work?

• Start with initial guesses

- Start at (0, 0), or any other value

- Keep changing W and b a little bit to try to reduce cost(W, b)

• Each time you change the parameters, you follow the gradient that reduces cost(W, b) as much as possible

• Repeat

• Do so until you converge to a local minimum
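The update loop described above can be sketched in plain Python; the toy dataset, the hypothesis W*x + b, and the learning rate below are illustrative assumptions, not from the lecture.

```python
# Minimal gradient-descent sketch for cost(W, b) on a toy
# linear-regression problem (data, hypothesis W*x + b, and
# learning rate are assumed for illustration).
x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]  # generated by y = 2x, so the minimum is W=2, b=0

def cost(W, b):
    # Mean squared error of the hypothesis W*x + b over the data
    return sum((W * x + b - y) ** 2 for x, y in zip(x_data, y_data)) / len(x_data)

def gradient(W, b):
    # Partial derivatives of cost with respect to W and b
    n = len(x_data)
    dW = sum(2 * (W * x + b - y) * x for x, y in zip(x_data, y_data)) / n
    db = sum(2 * (W * x + b - y) for x, y in zip(x_data, y_data)) / n
    return dW, db

W, b = 0.0, 0.0           # start with an initial guess (0, 0)
learning_rate = 0.01
for step in range(5000):  # keep changing W and b a little bit
    dW, db = gradient(W, b)
    W -= learning_rate * dW
    b -= learning_rate * db

print(W, b, cost(W, b))   # W approaches 2, b approaches 0
```

Each iteration moves (W, b) a small step opposite the gradient, which is the direction of steepest decrease of the cost.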

• Has an interesting property

- Where you start can determine which minimum you end up in
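The starting-point dependence is easy to see on a non-convex one-dimensional function with two local minima; the function f(x) = x⁴ − 3x² + x and the starting points below are assumptions chosen for illustration.

```python
# Gradient descent on a non-convex function with two local minima;
# different starting points converge to different minima.
# The function and start points are assumed for illustration.
def f(x):
    return x**4 - 3 * x**2 + x

def df(x):
    # Derivative of f
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    # Repeat small steps opposite the gradient until (near) convergence
    for _ in range(steps):
        x -= lr * df(x)
    return x

left = descend(-2.0)   # starts left of the central hump
right = descend(2.0)   # starts right of the central hump
print(left, right)     # two different local minima (≈ -1.30 and ≈ 1.13)
```

Both runs use the same algorithm and learning rate; only the initial guess differs, yet they converge to different minima, which is why this property matters in practice.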

Source - https://www.youtube.com/channel/UCML9R2ol-l0Ab9OXoNnr7Lw
