Wednesday, April 1, 2020

[tensorflow certification] 2-5. How to prevent overfitting when predicting stock prices




Hello. I'm DVM, and I talk about the TensorFlow certification.
Today I will explain how to prevent overfitting, using a stock example.
If you haven't seen the previous lecture,
I recommend that you watch it first.

First. What is overfitting?
Wikipedia says:
In statistics, overfitting is the production of an analysis that corresponds too closely or exactly to a particular set of data, and the model may therefore fail to fit additional data or predict future observations reliably.

Second. How to prevent overfitting.
I recommend three methods; a short combined sketch follows this list.
A. Regularization.
Regularization keeps the weights from growing too large:
if we add a weight penalty to the cost, we can limit the weights.

B. Train/test split.
We divide the data into training data and test data.
The test data is never used for learning; it should only be used for evaluation.

C. Normalization.
If the data are spread over very different ranges, normalization gathers them into a common range (here, [0, 1]).
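Here is a minimal combined sketch of the three methods. It is only an illustration: the arrays x and y are made-up placeholder data, and the built-in regularizers.l1 stands in for the custom penalty we will write by hand in the practice section.

import numpy as np
from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from sklearn.preprocessing import MinMaxScaler

x = np.random.rand(100, 4)    # hypothetical features
y = np.random.rand(100)       # hypothetical target

# B. Split: keep the test data out of training.
split = int(len(x) * 0.7)
x_train, x_test = x[:split], x[split:]
y_train, y_test = y[:split], y[split:]

# C. Normalize: rescale every column to [0, 1].
scaler = MinMaxScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)        # reuse the training statistics only

# A. Regularize: add an L1 weight penalty to the cost.
model = Sequential()
model.add(Dense(units=1, input_dim=4, kernel_regularizer=regularizers.l1(0.01)))
model.compile(loss='mse', optimizer='sgd')
model.fit(x_train, y_train, epochs=10, verbose=0)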


Third. What is online learning?
Online learning means continuously updating an existing model with new data.
If you use this method, you don't have to train on the existing data all over again.
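As a minimal sketch (the same idea appears in the practice section below), assuming a Keras model named model that has already been trained on normalized data:

# Online learning: update the trained model with one new observation
new_x = [[0.21, 0.31, 0.30, 0.12]]   # hypothetical new input row (normalized)
new_y = [0.30]                       # its target value
model.fit(new_x, new_y, epochs=1)    # one small update, no full retraining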


Fourth. Practice.
Let's predict the next value of the stock.



--------------------------------------------------
stock_prediction

Let's import the libraries for the stock example.

In [4]:
import numpy as np #for numpy calculation
from tensorflow.keras.models import Sequential #for sequence model
from tensorflow.keras.layers import Dense #for model 
from tensorflow.keras import backend as K # for regularization
from sklearn.preprocessing import MinMaxScaler # for normalization
import matplotlib.pyplot as plt # for drawing graph

Let's load the stock data

In [6]:
xy = np.loadtxt('GOOG.csv',delimiter=',')
print(xy)
[[1.18409998e+03 1.19666003e+03 1.18200000e+03 1.19443005e+03
  1.19443005e+03 1.25250000e+06]
 [1.19531995e+03 1.20134998e+03 1.18570996e+03 1.20048999e+03
  1.20048999e+03 8.27900000e+05]
 [1.20747998e+03 1.21630005e+03 1.20050000e+03 1.20592004e+03
  1.20592004e+03 1.01780000e+06]
 ...
 [1.12646997e+03 1.14890002e+03 1.08601001e+03 1.10248999e+03
  1.10248999e+03 4.08150000e+06]
 [1.11180005e+03 1.16996997e+03 1.09353003e+03 1.16175000e+03
  1.16175000e+03 3.57170000e+06]
 [1.12567004e+03 1.15067004e+03 1.10591003e+03 1.11070996e+03
  1.11070996e+03 3.20720000e+06]]

Let's normalize the data

In [7]:
scaler = MinMaxScaler()
print(scaler)
MinMaxScaler(copy=True, feature_range=(0, 1))
In [8]:
normalize_xy = scaler.fit_transform(xy)
print(normalize_xy)
[[0.29284272 0.30781083 0.33171082 0.32255451 0.32255451 0.15445004]
 [0.31611246 0.31748848 0.33901585 0.33491013 0.33491013 0.08198652]
 [0.34133186 0.3483378  0.3681379  0.34598148 0.34598148 0.11439543]
 ...
 [0.17332053 0.20925855 0.14270355 0.13509769 0.13509769 0.63725574]
 [0.14289573 0.25273617 0.1575107  0.25592307 0.25592307 0.55025173]
 [0.17166152 0.21291097 0.18188732 0.15185741 0.15185741 0.48804506]]
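For reference, MinMaxScaler rescales each column independently with (x - min) / (max - min). A quick way to check this by hand on the first column:

# Recompute the min-max scaling of the first column manually and compare
manual = (xy[:, 0] - xy[:, 0].min()) / (xy[:, 0].max() - xy[:, 0].min())
print(np.allclose(manual, normalize_xy[:, 0]))   # expected: True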
In [9]:
plt.plot(xy)
Out[9]: [figure: plot of the raw, unnormalized data]
In [10]:
plt.plot(normalize_xy)
Out[10]: [figure: plot of the normalized data]

Data information

Columns: 1 - open, 2 - high, 3 - low, 4 - close, 5 - adjusted close (same values as close in this file), 6 - volume. We use columns 1, 2, 3, and 6 as inputs and column 4 (close) as the target.

In [13]:
x_data = normalize_xy[:,[0,1,2,5]]
print(x_data)
[[0.29284272 0.30781083 0.33171082 0.15445004]
 [0.31611246 0.31748848 0.33901585 0.08198652]
 [0.34133186 0.3483378  0.3681379  0.11439543]
 ...
 [0.17332053 0.20925855 0.14270355 0.63725574]
 [0.14289573 0.25273617 0.1575107  0.55025173]
 [0.17166152 0.21291097 0.18188732 0.48804506]]
In [14]:
y_data = normalize_xy[:,3]
print(y_data)
[0.32255451 0.33491013 0.34598148 0.36449463 0.34848929 0.34174041
 0.32830411 0.33831519 0.34333081 0.37034627 0.37693188 0.38922652
 0.40800473 0.40806596 0.43349101 0.46552234 0.44808962 0.46327935
 0.48107918 0.51247807 0.31042289 0.26882923 0.2576765  0.30414316
 0.31227837 0.28110347 0.26513895 0.25720759 0.26106115 0.19532695
 0.17169589 0.26093869 0.29105332 0.25704457 0.20923216 0.23121159
 0.2348613  0.21314694 0.19826285 0.19964941 0.1635811  0.16661905
 0.13742207 0.         0.03429448 0.01221301 0.01653547 0.0607798
 0.09001759 0.08663295 0.08318732 0.10712401 0.10015088 0.11472908
 0.13736085 0.1347714  0.1533052  0.17463204 0.16166465 0.10218978
 0.08883512 0.0811076  0.09109827 0.125841   0.1529585  0.17402027
 0.19442971 0.16335685 0.18064671 0.21255558 0.22016064 0.22156762
 0.23265913 0.23926515 0.22452393 0.22448311 0.19139176 0.20764175
 0.22423845 0.20711187 0.19551038 0.43669223 0.4142643  0.38516913
 0.36792009 0.35228162 0.32165727 0.23669611 0.27264197 0.28087922
 0.34369792 0.30946467 0.28234717 0.32834493 0.26110196 0.26715744
 0.28823963 0.33075069 0.29861757 0.31607069 0.31256384 0.23459623
 0.27048087 0.26833992 0.27482374 0.31933289 0.3096481  0.26946141
 0.29600796 0.35711381 0.34396299 0.34290272 0.3461445  0.37503584
 0.4037435  0.41457019 0.39772884 0.39334514 0.39999199 0.41283692
 0.39493555 0.403295   0.37216092 0.42876087 0.41830129 0.38506708
 0.37265024 0.34430944 0.28626195 0.30909756 0.35226121 0.34956997
 0.31174823 0.33862107 0.35158846 0.36541203 0.36885791 0.4216043
 0.42288882 0.44211553 0.42666074 0.42800649 0.42117621 0.4544714
 0.45826373 0.46670481 0.51741231 0.46158715 0.4588755  0.45646948
 0.48425973 0.52020559 0.52155134 0.52108243 0.55586598 0.56098364
 0.5361497  0.53535475 0.53372353 0.56116707 0.60889785 0.58000651
 0.56932268 0.54402008 0.5405538  0.52829998 0.55144147 0.56542856
 0.56428665 0.54791421 0.51724928 0.52817777 0.57968047 0.59515567
 0.62062154 0.62661604 0.62885878 0.62959276 0.640297   0.63532194
 0.66252108 0.65018562 0.64508837 0.6520615  0.63891043 0.63738126
 0.62661604 0.66095109 0.64360001 0.6114873  0.61328154 0.67516218
 0.66148122 0.72988625 0.72811241 0.75049952 0.78212292 0.8023081
 0.82167767 0.80465289 0.82161645 0.84710273 0.90559897 0.91377499
 0.91693514 0.91836252 0.87770667 0.81081041 0.8488564  0.86123243
 0.85554381 0.81148316 0.91691473 0.8376626  0.8400278  0.89711706
 0.90323377 0.9632796  0.96350385 0.9828326  0.97547219 0.98786863
 0.98568712 1.         0.98258794 0.91522253 0.78571141 0.71814215
 0.72778637 0.57468501 0.61799127 0.7194879  0.62219153 0.71420721
 0.57662211 0.53455955 0.36563653 0.49781849 0.36533065 0.16042095
 0.37413859 0.09807116 0.17039122 0.12349646 0.16119575 0.07358392
 0.04157325 0.20028135 0.13509769 0.25592307 0.15185741]
In [15]:
plt.plot(x_data)
Out[15]: [figure: plot of the input features x_data]
In [16]:
plt.plot(y_data)
Out[16]: [figure: plot of the target y_data]

Split the data into training and test sets at a 7:3 ratio

In [17]:
train_size = int((len(x_data)*0.7))
test_size = len(x_data)-train_size
In [18]:
x_train, x_test = x_data[0:train_size], x_data[train_size:len(x_data)]
In [19]:
y_train, y_test = y_data[0:train_size], y_data[train_size:len(y_data)]

Let's train using regularization

In [20]:
model = Sequential()
In [22]:
def reg(weight_matrix):
    # custom L1 penalty: 0.01 * sum of absolute weights, added to the loss
    return 0.01 * K.sum(K.abs(weight_matrix))

model.add(Dense(input_dim=4, kernel_regularizer=reg, units=1))  # one linear output unit
model.compile(loss='mse', optimizer='sgd', metrics=['mse'])
model.fit(x_train, y_train, epochs=100)
Train on 175 samples
Epoch 1/100
175/175 [==============================] - 1s 4ms/sample - loss: 1.2115 - mse: 1.1854
Epoch 2/100
175/175 [==============================] - 0s 148us/sample - loss: 0.8971 - mse: 0.8725
Epoch 3/100
175/175 [==============================] - 0s 126us/sample - loss: 0.6700 - mse: 0.6467
Epoch 4/100
175/175 [==============================] - 0s 154us/sample - loss: 0.5057 - mse: 0.4835
Epoch 5/100
175/175 [==============================] - 0s 160us/sample - loss: 0.3874 - mse: 0.3662
Epoch 6/100
175/175 [==============================] - 0s 205us/sample - loss: 0.3049 - mse: 0.2846
Epoch 7/100
175/175 [==============================] - 0s 188us/sample - loss: 0.2428 - mse: 0.2233
Epoch 8/100
175/175 [==============================] - 0s 177us/sample - loss: 0.1981 - mse: 0.1792
Epoch 9/100
175/175 [==============================] - 0s 166us/sample - loss: 0.1671 - mse: 0.1488
Epoch 10/100
175/175 [==============================] - 0s 160us/sample - loss: 0.1441 - mse: 0.1262
Epoch 11/100
175/175 [==============================] - 0s 194us/sample - loss: 0.1266 - mse: 0.1092
Epoch 12/100
175/175 [==============================] - 0s 177us/sample - loss: 0.1144 - mse: 0.0973
Epoch 13/100
175/175 [==============================] - 0s 166us/sample - loss: 0.1049 - mse: 0.0882
Epoch 14/100
175/175 [==============================] - 0s 131us/sample - loss: 0.0981 - mse: 0.0817
Epoch 15/100
175/175 [==============================] - 0s 200us/sample - loss: 0.0924 - mse: 0.0763
Epoch 16/100
175/175 [==============================] - 0s 160us/sample - loss: 0.0886 - mse: 0.0727
Epoch 17/100
175/175 [==============================] - 0s 103us/sample - loss: 0.0858 - mse: 0.0701
Epoch 18/100
175/175 [==============================] - 0s 160us/sample - loss: 0.0835 - mse: 0.0680
Epoch 19/100
175/175 [==============================] - 0s 166us/sample - loss: 0.0815 - mse: 0.0663
Epoch 20/100
175/175 [==============================] - 0s 177us/sample - loss: 0.0798 - mse: 0.0647
Epoch 21/100
175/175 [==============================] - 0s 326us/sample - loss: 0.0786 - mse: 0.0637
Epoch 22/100
175/175 [==============================] - 0s 109us/sample - loss: 0.0775 - mse: 0.0627
Epoch 23/100
175/175 [==============================] - 0s 211us/sample - loss: 0.0763 - mse: 0.0617
Epoch 24/100
175/175 [==============================] - 0s 120us/sample - loss: 0.0753 - mse: 0.0609
Epoch 25/100
175/175 [==============================] - 0s 109us/sample - loss: 0.0744 - mse: 0.0601
Epoch 26/100
175/175 [==============================] - 0s 114us/sample - loss: 0.0736 - mse: 0.0594
Epoch 27/100
175/175 [==============================] - 0s 200us/sample - loss: 0.0727 - mse: 0.0586
Epoch 28/100
175/175 [==============================] - 0s 251us/sample - loss: 0.0719 - mse: 0.0580
Epoch 29/100
175/175 [==============================] - 0s 171us/sample - loss: 0.0711 - mse: 0.0573
Epoch 30/100
175/175 [==============================] - 0s 126us/sample - loss: 0.0704 - mse: 0.0567
Epoch 31/100
175/175 [==============================] - 0s 126us/sample - loss: 0.0697 - mse: 0.0561
Epoch 32/100
175/175 [==============================] - 0s 137us/sample - loss: 0.0690 - mse: 0.0555
Epoch 33/100
175/175 [==============================] - 0s 97us/sample - loss: 0.0683 - mse: 0.0549
Epoch 34/100
175/175 [==============================] - 0s 154us/sample - loss: 0.0677 - mse: 0.0544
Epoch 35/100
175/175 [==============================] - 0s 143us/sample - loss: 0.0671 - mse: 0.0538
Epoch 36/100
175/175 [==============================] - 0s 280us/sample - loss: 0.0666 - mse: 0.0533
Epoch 37/100
175/175 [==============================] - 0s 114us/sample - loss: 0.0660 - mse: 0.0528
Epoch 38/100
175/175 [==============================] - 0s 206us/sample - loss: 0.0654 - mse: 0.0523
Epoch 39/100
175/175 [==============================] - 0s 126us/sample - loss: 0.0649 - mse: 0.0518
Epoch 40/100
175/175 [==============================] - 0s 366us/sample - loss: 0.0643 - mse: 0.0513
Epoch 41/100
175/175 [==============================] - 0s 126us/sample - loss: 0.0638 - mse: 0.0508
Epoch 42/100
175/175 [==============================] - 0s 228us/sample - loss: 0.0632 - mse: 0.0503
Epoch 43/100
175/175 [==============================] - 0s 126us/sample - loss: 0.0627 - mse: 0.0498
Epoch 44/100
175/175 [==============================] - 0s 268us/sample - loss: 0.0622 - mse: 0.0494
Epoch 45/100
175/175 [==============================] - 0s 103us/sample - loss: 0.0617 - mse: 0.0489
Epoch 46/100
175/175 [==============================] - 0s 120us/sample - loss: 0.0612 - mse: 0.0485
Epoch 47/100
175/175 [==============================] - 0s 137us/sample - loss: 0.0607 - mse: 0.0480
Epoch 48/100
175/175 [==============================] - 0s 154us/sample - loss: 0.0602 - mse: 0.0475
Epoch 49/100
175/175 [==============================] - 0s 126us/sample - loss: 0.0596 - mse: 0.0471
Epoch 50/100
175/175 [==============================] - 0s 120us/sample - loss: 0.0591 - mse: 0.0466
Epoch 51/100
175/175 [==============================] - 0s 228us/sample - loss: 0.0587 - mse: 0.0462
Epoch 52/100
175/175 [==============================] - 0s 109us/sample - loss: 0.0581 - mse: 0.0457
Epoch 53/100
175/175 [==============================] - 0s 97us/sample - loss: 0.0577 - mse: 0.0453
Epoch 54/100
175/175 [==============================] - 0s 320us/sample - loss: 0.0572 - mse: 0.0448
Epoch 55/100
175/175 [==============================] - 0s 143us/sample - loss: 0.0567 - mse: 0.0444
Epoch 56/100
175/175 [==============================] - 0s 171us/sample - loss: 0.0562 - mse: 0.0440
Epoch 57/100
175/175 [==============================] - 0s 114us/sample - loss: 0.0557 - mse: 0.0435
Epoch 58/100
175/175 [==============================] - 0s 137us/sample - loss: 0.0553 - mse: 0.0431
Epoch 59/100
175/175 [==============================] - 0s 91us/sample - loss: 0.0548 - mse: 0.0427
Epoch 60/100
175/175 [==============================] - 0s 131us/sample - loss: 0.0543 - mse: 0.0423
Epoch 61/100
175/175 [==============================] - 0s 188us/sample - loss: 0.0539 - mse: 0.0419
Epoch 62/100
175/175 [==============================] - 0s 200us/sample - loss: 0.0535 - mse: 0.0416
Epoch 63/100
175/175 [==============================] - 0s 91us/sample - loss: 0.0530 - mse: 0.0411
Epoch 64/100
175/175 [==============================] - 0s 120us/sample - loss: 0.0526 - mse: 0.0407
Epoch 65/100
175/175 [==============================] - 0s 131us/sample - loss: 0.0522 - mse: 0.0403
Epoch 66/100
175/175 [==============================] - 0s 154us/sample - loss: 0.0517 - mse: 0.0399
Epoch 67/100
175/175 [==============================] - 0s 86us/sample - loss: 0.0513 - mse: 0.0396
Epoch 68/100
175/175 [==============================] - 0s 103us/sample - loss: 0.0509 - mse: 0.0392
Epoch 69/100
175/175 [==============================] - 0s 171us/sample - loss: 0.0505 - mse: 0.0388
Epoch 70/100
175/175 [==============================] - 0s 177us/sample - loss: 0.0501 - mse: 0.0385
Epoch 71/100
175/175 [==============================] - 0s 217us/sample - loss: 0.0496 - mse: 0.0381
Epoch 72/100
175/175 [==============================] - 0s 194us/sample - loss: 0.0492 - mse: 0.0377
Epoch 73/100
175/175 [==============================] - 0s 217us/sample - loss: 0.0488 - mse: 0.0374
Epoch 74/100
175/175 [==============================] - 0s 120us/sample - loss: 0.0484 - mse: 0.0370
Epoch 75/100
175/175 [==============================] - 0s 148us/sample - loss: 0.0481 - mse: 0.0367
Epoch 76/100
175/175 [==============================] - 0s 143us/sample - loss: 0.0476 - mse: 0.0363
Epoch 77/100
175/175 [==============================] - 0s 177us/sample - loss: 0.0472 - mse: 0.0360
Epoch 78/100
175/175 [==============================] - 0s 103us/sample - loss: 0.0468 - mse: 0.0356
Epoch 79/100
175/175 [==============================] - 0s 91us/sample - loss: 0.0464 - mse: 0.0353
Epoch 80/100
175/175 [==============================] - 0s 109us/sample - loss: 0.0461 - mse: 0.0349
Epoch 81/100
175/175 [==============================] - 0s 194us/sample - loss: 0.0457 - mse: 0.0346
Epoch 82/100
175/175 [==============================] - 0s 120us/sample - loss: 0.0453 - mse: 0.0343
Epoch 83/100
175/175 [==============================] - 0s 160us/sample - loss: 0.0450 - mse: 0.0340
Epoch 84/100
175/175 [==============================] - 0s 143us/sample - loss: 0.0446 - mse: 0.0336
Epoch 85/100
175/175 [==============================] - 0s 223us/sample - loss: 0.0442 - mse: 0.0333
Epoch 86/100
175/175 [==============================] - 0s 286us/sample - loss: 0.0439 - mse: 0.0330
Epoch 87/100
175/175 [==============================] - 0s 109us/sample - loss: 0.0435 - mse: 0.0327
Epoch 88/100
175/175 [==============================] - 0s 143us/sample - loss: 0.0432 - mse: 0.0324
Epoch 89/100
175/175 [==============================] - 0s 154us/sample - loss: 0.0429 - mse: 0.0321
Epoch 90/100
175/175 [==============================] - 0s 366us/sample - loss: 0.0425 - mse: 0.0318
Epoch 91/100
175/175 [==============================] - 0s 160us/sample - loss: 0.0422 - mse: 0.0315
Epoch 92/100
175/175 [==============================] - 0s 114us/sample - loss: 0.0418 - mse: 0.0312
Epoch 93/100
175/175 [==============================] - 0s 183us/sample - loss: 0.0415 - mse: 0.0310
Epoch 94/100
175/175 [==============================] - 0s 268us/sample - loss: 0.0412 - mse: 0.0307
Epoch 95/100
175/175 [==============================] - 0s 200us/sample - loss: 0.0408 - mse: 0.0304
Epoch 96/100
175/175 [==============================] - 0s 126us/sample - loss: 0.0405 - mse: 0.0301
Epoch 97/100
175/175 [==============================] - 0s 171us/sample - loss: 0.0402 - mse: 0.0298
Epoch 98/100
175/175 [==============================] - 0s 194us/sample - loss: 0.0399 - mse: 0.0295
Epoch 99/100
175/175 [==============================] - 0s 166us/sample - loss: 0.0395 - mse: 0.0292
Epoch 100/100
175/175 [==============================] - 0s 228us/sample - loss: 0.0392 - mse: 0.0290
Out[22]:
<tensorflow.python.keras.callbacks.History at 0x245dcbf3e48>

Let's check the results

In [23]:
results = model.evaluate(x_test,y_test,verbose = 1)
76/76 [==============================] - 0s 2ms/sample - loss: 0.2600 - mse: 0.2497
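The reported mse can also be recomputed by hand from the model's predictions; note that loss is slightly higher than mse because it additionally includes the 0.01 * sum(|w|) regularization penalty. A quick sanity check:

# Recompute the test mse from the predictions to confirm model.evaluate
preds = model.predict(x_test)
manual_mse = np.mean((y_test - preds.flatten()) ** 2)
print(manual_mse)   # should be close to the mse reported above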

Let's compare the test data with the predictions

In [25]:
predictions = model.predict(x_test)
In [26]:
plt.plot(y_test,'.')
plt.plot(predictions,'--')
plt.show()
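If you want the predictions back on the original price scale instead of [0, 1], one option is to invert the scaling with the data_min_ / data_max_ statistics that MinMaxScaler stores per column (a sketch; column 3 is the close price):

# Map normalized values back to the original close-price scale
close_min = scaler.data_min_[3]
close_max = scaler.data_max_[3]
plt.plot(y_test * (close_max - close_min) + close_min, '.')        # actual close
plt.plot(predictions * (close_max - close_min) + close_min, '--')  # predicted close
plt.show()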

Let's do online learning

In [28]:
new_x = [[0.2,0.3,0.33,0.13]]
new_y = [0.32]
model.fit(new_x,new_y,epochs=1)
Train on 1 samples
1/1 [==============================] - 0s 10ms/sample - loss: 0.0122 - mse: 0.0020
Out[28]:
<tensorflow.python.keras.callbacks.History at 0x245de405b48>

Full code

In [ ]:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras import backend as K
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt

# load and normalize the data
xy = np.loadtxt('GOOG.csv', delimiter=',')
scaler = MinMaxScaler()
normalize_xy = scaler.fit_transform(xy)
x_data = normalize_xy[:, [0, 1, 2, 5]]   # open, high, low, volume
y_data = normalize_xy[:, 3]              # close

# split into training and test sets (7:3)
train_size = int(len(x_data) * 0.7)
test_size = len(x_data) - train_size
x_train, x_test = x_data[:train_size], x_data[train_size:]
y_train, y_test = y_data[:train_size], y_data[train_size:]

# model with an L1 weight penalty (regularization)
def reg(weight_matrix):
    return 0.01 * K.sum(K.abs(weight_matrix))
model = Sequential()
model.add(Dense(input_dim=4, kernel_regularizer=reg, units=1))
model.compile(loss='mse', optimizer='sgd', metrics=['mse'])
model.fit(x_train, y_train, epochs=100)

# evaluate on the test data and plot the predictions
results = model.evaluate(x_test, y_test, verbose=1)
predictions = model.predict(x_test)
plt.plot(y_test, '.')
plt.plot(predictions, '--')
plt.show()

# online learning: update the model with one new sample
new_x = [[0.2, 0.3, 0.33, 0.13]]
new_y = [0.32]
model.fit(new_x, new_y, epochs=1)

Fifth. Quiz.
Let's try to answer each question in about 5 seconds.




First. What is overfitting?
The production of an analysis that corresponds too closely or exactly to a particular set of data.





Second. What is the regularization equation?
Add a weight penalty to the cost: cost = loss + 0.01 * sum(|w|) (the L1 penalty used in the code above).




Third. What is normalization?
If the data are spread over different ranges, normalization gathers them into a common range.





Fourth. What is online learning?
Online learning means continuously updating an existing model with new data.
