This repository has been archived by the owner on Nov 8, 2021. It is now read-only.
-
@rmsouza01 ^ FYI, we've made some progress with the initial time-series prediction.
-
@mklasby: The curves look nice. I will look into the code carefully over the coming days. Can you remind me how many days ahead you are trying to predict the stock price?
-
Discussion updated to log the history of our results.
-
Some progress to report. A few months ago, we developed models that used dataset-wide normalization (i.e., min/max scaling the entire training set at once). These models performed reasonably well on near-term predictions (about one year from the training period), but could not predict accurately many years into the future or during unusually high-volatility events.
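For contrast, the dataset-wide approach can be sketched as follows (a minimal NumPy sketch; the function name and sample prices are illustrative, not from our codebase):

```python
import numpy as np

def minmax_normalize(series):
    """Scale an entire series to [0, 1] using its global min and max.

    Any future price outside the training range falls outside [0, 1],
    which is one reason this scheme degrades far from the training period.
    """
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo)

# Illustrative prices only
prices = np.array([10.0, 12.0, 11.0, 15.0, 14.0])
scaled = minmax_normalize(prices)  # min maps to 0.0, max maps to 1.0
```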
The first major insight was to use dynamic windowed normalization: we normalize each window with respect to its own lag period. We have been using 100-day lag windows in this case. As you can see below, we can follow the evolving mean much more accurately. However, this process excessively smooths the predictions, so we end up missing the stock's volatility.
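A minimal sketch of the windowed scheme (illustrative names; assumes each lag window has a nonzero price range, so there is no divide-by-zero guard):

```python
import numpy as np

def windowed_normalize(series, lag=100):
    """Min/max-normalize each sliding window relative to its own lag period.

    Returns an array of shape (len(series) - lag + 1, lag), where every
    row is one window scaled to [0, 1] using only that window's range.
    """
    windows = []
    for start in range(len(series) - lag + 1):
        w = series[start:start + lag]
        lo, hi = w.min(), w.max()
        windows.append((w - lo) / (hi - lo))
    return np.stack(windows)

# Tiny example with lag=3 so the shapes are easy to check
series = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
norm = windowed_normalize(series, lag=3)  # shape (3, 3)
```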
The final piece was realizing that Keras' LSTM layer has some default parameters that are not well suited to very long data series. Given that the stock data we are using consists of very long sequences, it is critical that you DO NOT ALLOW Keras to reset the state of the network between batches. Using online learning (batch size = 1) with stateful=True greatly improves model performance. See below for more info:
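A minimal tf.keras sketch of that setup (layer sizes and the 100-step window are illustrative): the input's batch_shape pins the batch size to 1, and stateful=True tells Keras to carry the hidden state across batches instead of resetting it.

```python
import numpy as np
from tensorflow import keras

timesteps, features = 100, 1  # matches the 100-day lag window above

model = keras.Sequential([
    # batch_shape fixes the batch size to 1, which stateful LSTMs require
    keras.Input(batch_shape=(1, timesteps, features)),
    # stateful=True: hidden state persists between consecutive batches
    keras.layers.LSTM(32, stateful=True),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# The state is now only cleared when you ask for it, e.g. between epochs:
# model.reset_states()  # tf.keras API
```

Note that with stateful=True you take over responsibility for state management: batches must arrive in sequence order, and you reset the state yourself at epoch or series boundaries.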