title
Cryptocurrency-predicting RNN Model - Deep Learning w/ Python, TensorFlow and Keras p.11
description
Welcome to the next tutorial covering deep learning with Python, TensorFlow, and Keras. We've been working on a recurrent neural network that predicts cryptocurrency price movements, focusing mainly on the pre-processing we had to do. In this tutorial, we're going to finish up by building our model and training it.
Text tutorials and sample code: https://pythonprogramming.net/crypto-rnn-model-deep-learning-python-tensorflow-keras/
Discord: https://discord.gg/sentdex
Support the content: https://pythonprogramming.net/support-donate/
Twitter: https://twitter.com/sentdex
Facebook: https://www.facebook.com/pythonprogramming.net/
Twitch: https://www.twitch.tv/sentdex
G+: https://plus.google.com/+sentdex
detail
{'title': 'Cryptocurrency-predicting RNN Model - Deep Learning w/ Python, TensorFlow and Keras p.11', 'heatmap': [{'end': 805.992, 'start': 775.908, 'weight': 1}, {'end': 933.34, 'start': 899.643, 'weight': 0.846}], 'summary': 'Using python, tensorflow, and keras, the video demonstrates creating an rnn model to predict cryptocurrency price movements, achieving 57.28% accuracy after six epochs, with bitcoin cash showing the highest validation accuracy of over 60%.', 'chapters': [{'end': 80.693, 'segs': [{'end': 27.627, 'src': 'embed', 'start': 1.775, 'weight': 0, 'content': [{'end': 7.982, 'text': 'What is going on everybody and welcome to yet another deep learning with Python, TensorFlow, and Keras tutorial.', 'start': 1.775, 'duration': 6.207}, {'end': 8.803, 'text': 'In this tutorial.', 'start': 8.141, 'duration': 0.662}, {'end': 10.224, 'text': "we're going to continue with the last tutorial,", 'start': 8.803, 'duration': 1.421}, {'end': 20.655, 'text': 'where we are attempting to predict future price movements of a certain cryptocurrency based on the sequence and the historical prices and volume of that cryptocurrency,', 'start': 10.224, 'duration': 10.431}, {'end': 22.878, 'text': 'as well as other major cryptocurrencies.', 'start': 20.655, 'duration': 2.223}, {'end': 25.546, 'text': "And we're trying to do this with a recurrent neural network.", 'start': 23.905, 'duration': 1.641}, {'end': 27.627, 'text': "So let's continue.", 'start': 25.826, 'duration': 1.801}], 'summary': 'Tutorial on using recurrent neural network to predict cryptocurrency price movements.', 'duration': 25.852, 'max_score': 1.775, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY1775.jpg'}, {'end': 80.693, 'src': 'embed', 'start': 41.296, 'weight': 2, 'content': [{'end': 45.877, 'text': "Then we're gonna go with a batch size and we're gonna go with 64 to start.", 'start': 41.296, 'duration': 4.581}, {'end': 48.278, 'text': 'we can tinker with that later if we wanted.', 'start': 45.877, 'duration': 2.401}, {'end': 56.06, 'text': "and then finally, we're gonna go with a name and we're gonna make this an F string, and what you want is a name that is Descriptive of the model,", 'start': 48.278, 'duration': 7.782}, {'end': 61.362, 'text': "because generally you're gonna tinker a little bit with the model, tweak it a little bit here and there, Rerun it again,", 'start': 56.06, 'duration': 5.302}, {'end': 62.842, 'text': "and then you're gonna do the same thing again.", 'start': 61.362, 'duration': 1.48}, {'end': 68.245, 'text': "Hopefully you don't have as itchy of eyes as I do, and Anyway so you want to have a unique name,", 'start': 62.862, 'duration': 5.383}, {'end': 71.527, 'text': 'both for the model that you save as well as in TensorBoard.', 'start': 68.245, 'duration': 3.282}, {'end': 77.371, 'text': "so later you can compare the results of a bunch of different models and you don't have to be like I don't know what model that was.", 'start': 71.527, 'duration': 5.844}, {'end': 80.693, 'text': "Or worse, you'd overwrite the other models.", 'start': 78.412, 'duration': 2.281}], 'summary': 'Model parameters set: batch size 64, unique descriptive name for tracking in tensorboard.', 'duration': 39.397, 'max_score': 41.296, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY41296.jpg'}], 'start': 1.775, 'title': 'Using rnn for cryptocurrency price prediction', 'summary': 'Focuses on using a recurrent neural 
network to predict future price movements of a cryptocurrency based on historical data, using python, tensorflow, and keras with a batch size of 64, and emphasizes creating a unique name for models.', 'chapters': [{'end': 80.693, 'start': 1.775, 'title': 'Predicting cryptocurrency price movements with rnn', 'summary': 'Focuses on using a recurrent neural network to predict future price movements of a cryptocurrency based on historical data, using python, tensorflow, and keras, with a batch size of 64 and emphasis on creating a unique name for models.', 'duration': 78.918, 'highlights': ['The tutorial focuses on predicting future price movements of a cryptocurrency using a recurrent neural network. prediction of future price movements, recurrent neural network, cryptocurrency', 'The tutorial utilizes Python, TensorFlow, and Keras for the prediction process. Python, TensorFlow, Keras', 'A batch size of 64 is initially used for the model training, with the possibility of adjusting it later. batch size: 64, model training', 'Emphasis is placed on creating a unique and descriptive name for the model to facilitate comparison between different models and avoid overwriting. unique model name, comparison of models, avoiding overwriting']}], 'duration': 78.918, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY1775.jpg', 'highlights': ['Focuses on predicting future price movements of a cryptocurrency using a recurrent neural network.', 'Utilizes Python, TensorFlow, and Keras for the prediction process.', 'Initial batch size of 64 used for model training, with the possibility of adjusting it later.', 'Emphasis on creating a unique and descriptive name for the model to facilitate comparison and avoid overwriting.']}, {'end': 344.658, 'segs': [{'end': 189.023, 'src': 'embed', 'start': 80.733, 'weight': 0, 'content': [{'end': 83.395, 'text': "So anyways, yeah, so let's come up with a good name.", 'start': 80.733, 'duration': 2.662}, {'end': 94.442, 'text': "So I'm going to go with sequence length dash sequence dash future period predict dash predict dash.", 'start': 83.835, 'duration': 10.607}, {'end': 103.891, 'text': "dash and then we'll just throw in an int time.time here, and that should be good enough.", 'start': 94.742, 'duration': 9.149}, {'end': 107.433, 'text': "that'll give us a nice unique model name.", 'start': 103.891, 'duration': 3.542}, {'end': 111.954, 'text': "so now we're going to do is import all the tensorflow stuff that we need.", 'start': 107.433, 'duration': 4.521}, {'end': 123.558, 'text': 'so import tensorflow as tf, as tf, stop it sublime from tensorflow.keros.models.', 'start': 111.954, 'duration': 11.604}, {'end': 130.085, 'text': "we're going to import, import sequential.", 'start': 123.558, 'duration': 6.527}, {'end': 132.568, 'text': 'there we go.', 'start': 130.085, 'duration': 2.483}, {'end': 140.998, 'text': "uh oh, you're such a hmm sequence, dude, oh this.", 'start': 132.568, 'duration': 8.43}, {'end': 147.333, 'text': 'this might be a hard one to get through from tensorflow, tensorflow.keros.layers.', 'start': 140.998, 'duration': 6.335}, {'end': 160.199, 'text': "We're going to import dense, dropout, lstm, probably also kudnn, lstm, and then batch normalization.", 'start': 148.154, 'duration': 12.045}, {'end': 175.648, 'text': "All of these we've seen, except for I don't remember if we've done batch normalization.", 'start': 171.924, 'duration': 3.724}, {'end': 177.49, 'text': "I apologize if I've already 
covered it.", 'start': 175.688, 'duration': 1.802}, {'end': 180.334, 'text': "Basically, it's just normalization, but between the layers.", 'start': 177.51, 'duration': 2.824}, {'end': 186.782, 'text': 'So for the same reason you want to normalize your input data, batch normalization can be useful from layer to layer because you..', 'start': 180.414, 'duration': 6.368}, {'end': 189.023, 'text': 'Because really you can think of each layer.', 'start': 187.102, 'duration': 1.921}], 'summary': 'Developing a model using tensorflow for sequence prediction with various imports and considerations.', 'duration': 108.29, 'max_score': 80.733, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY80733.jpg'}, {'end': 246.281, 'src': 'embed', 'start': 218.159, 'weight': 5, 'content': [{'end': 225.283, 'text': "Why can't I type right now? Anyway, TensorBoard and Model Checkpoint.", 'start': 218.159, 'duration': 7.124}, {'end': 229.424, 'text': 'I think B in TensorBoard probably needs to be capitalized.', 'start': 225.383, 'duration': 4.041}, {'end': 230.045, 'text': 'It does.', 'start': 229.584, 'duration': 0.461}, {'end': 234.953, 'text': "Okay, TensorBoard, we've really already seen this callback.", 'start': 232.091, 'duration': 2.862}, {'end': 243.198, 'text': 'Model Checkpoint is a fancy-dancy little callback where basically you can set various parameters as to when you want to save certain checkpoints.', 'start': 235.133, 'duration': 8.065}, {'end': 246.281, 'text': 'So I like to use Validation Accuracy.', 'start': 243.279, 'duration': 3.002}], 'summary': 'Tensorboard and model checkpoint for setting checkpoints, like validation accuracy.', 'duration': 28.122, 'max_score': 218.159, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY218159.jpg'}, {'end': 321.862, 'src': 'embed', 'start': 293.788, 'weight': 2, 'content': [{'end': 297.472, 'text': "we've seen that one model.add and we're going to add.", 'start': 293.788, 'duration': 3.684}, {'end': 299.674, 'text': "i'm going to use kudian nlstm.", 'start': 297.472, 'duration': 2.202}, {'end': 304.158, 'text': "if you're on cpu version of tensorflow, use a regular lstm.", 'start': 299.674, 'duration': 4.484}, {'end': 309.864, 'text': "it's going to be 128 nodes in this layer and input shape.", 'start': 304.158, 'duration': 5.706}, {'end': 312.226, 'text': 'man something bangs on this desk and it makes me mad.', 'start': 309.864, 'duration': 2.362}, {'end': 315.098, 'text': 'What is that noise?', 'start': 314.317, 'duration': 0.781}, {'end': 316.719, 'text': 'I can never find what that noise is.', 'start': 315.198, 'duration': 1.521}, {'end': 318.02, 'text': 'It drives me nuts.', 'start': 317.159, 'duration': 0.861}, {'end': 321.862, 'text': 'And then sometimes it, like, stops happening for, like, months, and then it comes back.', 'start': 318.7, 'duration': 3.162}], 'summary': 'Using kudian nlstm with 128 nodes for model.add in tensorflow', 'duration': 28.074, 'max_score': 293.788, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY293788.jpg'}], 'start': 80.733, 'title': 'Creating and training models with tensorflow', 'summary': 'Covers creating a unique model name using sequence length and time, importing various tensorflow components, applying batch normalization in neural network layers, implementing tensorboard and model checkpoint for model training and validation, and emphasizing data 
normalization and checkpoint-based model saving.', 'chapters': [{'end': 160.199, 'start': 80.733, 'title': 'Creating unique model name and importing tensorflow components', 'summary': 'Discusses creating a unique model name by incorporating sequence length and time, and importing various components from tensorflow, including sequential, dense, dropout, lstm, kudnn, lstm, and batch normalization.', 'duration': 79.466, 'highlights': ["Creating a unique model name by incorporating sequence length and time, using 'int time.time' to ensure uniqueness", 'Importing various components from TensorFlow, including sequential, dense, dropout, lstm, kudnn, lstm, and batch normalization']}, {'end': 344.658, 'start': 171.924, 'title': 'Batch normalization and model training', 'summary': 'Introduces the concept of batch normalization, its application in neural network layers, and the implementation of tensorboard and model checkpoint for model training and validation, emphasizing the importance of normalizing data between layers and the use of callbacks to save model checkpoints based on validation accuracy and loss.', 'duration': 172.734, 'highlights': ['The chapter explains batch normalization as a method of normalizing data between neural network layers to ensure consistent input data, improving training performance.', 'It introduces the implementation of TensorBoard and Model Checkpoint for model training, highlighting the use of callbacks to save checkpoints based on validation accuracy and loss, ensuring the preservation of the best-performing models.', 'The author emphasizes the importance of normalizing data between layers, akin to normalizing input data, to improve training performance, demonstrating the practical application of batch normalization.', 'The chapter provides insights into the implementation of Kudian NLSTM with 128 nodes and the input shape for model training, featuring the use of specialized layers for neural network architecture.']}], 'duration': 263.925, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY80733.jpg', 'highlights': ['Importing various components from TensorFlow, including sequential, dense, dropout, lstm, kudnn, lstm, and batch normalization', 'The chapter explains batch normalization as a method of normalizing data between neural network layers to ensure consistent input data, improving training performance.', 'The chapter provides insights into the implementation of Kudian NLSTM with 128 nodes and the input shape for model training, featuring the use of specialized layers for neural network architecture.', "Creating a unique model name by incorporating sequence length and time, using 'int time.time' to ensure uniqueness", 'The author emphasizes the importance of normalizing data between layers, akin to normalizing input data, to improve training performance, demonstrating the practical application of batch normalization.', 'It introduces the implementation of TensorBoard and Model Checkpoint for model training, highlighting the use of callbacks to save checkpoints based on validation accuracy and loss, ensuring the preservation of the best-performing models.']}, {'end': 595.373, 'segs': [{'end': 595.373, 'src': 'embed', 'start': 430.891, 'weight': 0, 'content': [{'end': 440.999, 'text': "Dense 32 activation will be rectified linear, which reminds me, if you can't use CUDIAN NLSTM, make sure you throw in some activations.", 'start': 430.891, 'duration': 10.108}, {'end': 447.625, 'text': "I would do either TNH, 
because that's what CUDIAN NLSTM is using, or you could just use rectified linear.", 'start': 441.179, 'duration': 6.446}, {'end': 452.792, 'text': "So we've got that dense layer.", 'start': 450.871, 'duration': 1.921}, {'end': 454.572, 'text': 'Cool Model.add.', 'start': 452.952, 'duration': 1.62}, {'end': 456.693, 'text': "We'll throw in a dropout.", 'start': 454.592, 'duration': 2.101}, {'end': 461.155, 'text': '0.2 I really wonder if this was like a typo or what.', 'start': 457.914, 'duration': 3.241}, {'end': 463.536, 'text': 'Everybody else is 0.2, except for this layer.', 'start': 461.235, 'duration': 2.301}, {'end': 467.197, 'text': 'So anyway.', 'start': 466.357, 'duration': 0.84}, {'end': 470.358, 'text': 'And then finally, we need the final.', 'start': 468.057, 'duration': 2.301}, {'end': 473.126, 'text': 'dense layer.', 'start': 472.526, 'duration': 0.6}, {'end': 477.79, 'text': 'This is a binary choice, so it should be only two options there.', 'start': 474.087, 'duration': 3.703}, {'end': 481.333, 'text': "Activation, because it's the output layer, softmax.", 'start': 478.35, 'duration': 2.983}, {'end': 488.158, 'text': "Okay Now that we've got that, we're ready to specify the optimizer.", 'start': 482.714, 'duration': 5.444}, {'end': 495.423, 'text': "We'll go with tf.keros.optimizers.atom with a learning rate of 0.001, 1e-3, and a dk of 1e-6.", 'start': 488.198, 'duration': 7.225}, {'end': 506.739, 'text': "And then we'll do the model.compile.", 'start': 503.598, 'duration': 3.141}, {'end': 516.922, 'text': "And we'll go with loss is sparse cat categorical cross entropy.", 'start': 508.279, 'duration': 8.643}, {'end': 519.823, 'text': 'You could also go with binary cross entropy.', 'start': 517.642, 'duration': 2.181}, {'end': 525.104, 'text': "Next, what we're going to do is optimizer is the optimizer that we defined.", 'start': 520.903, 'duration': 4.201}, {'end': 530.465, 'text': 'And then metrics, we will go with accuracy.', 'start': 525.644, 'duration': 4.821}, {'end': 534.262, 'text': 'OK Cool.', 'start': 531.966, 'duration': 2.296}, {'end': 541.53, 'text': 'The next thing is we need to define our callbacks, and I guess we can fix this, make Sublime happy.', 'start': 534.663, 'duration': 6.867}, {'end': 544.352, 'text': 'So we have two callbacks that we want to do.', 'start': 542.911, 'duration': 1.441}, {'end': 556.384, 'text': "One is tensorboard, which we've really already seen, and so that's just going to be a tensorboard object, logder equals logs.", 'start': 545.133, 'duration': 11.251}, {'end': 560.396, 'text': "And then we'll do some formatting here.", 'start': 557.835, 'duration': 2.561}, {'end': 564.397, 'text': "And in my text-based version, I haven't made this an F-string.", 'start': 560.656, 'duration': 3.741}, {'end': 566.477, 'text': "I'm trying to move completely to F-strings.", 'start': 564.437, 'duration': 2.04}, {'end': 568.738, 'text': 'They are better.', 'start': 567.697, 'duration': 1.041}, {'end': 571.838, 'text': "It's just I'm so used to doing the format.", 'start': 568.858, 'duration': 2.98}, {'end': 577.66, 'text': "But it's so much nicer to use F-strings, to be honest, to build it out and then not have to remember what order things are in.", 'start': 571.858, 'duration': 5.802}, {'end': 582.803, 'text': "And then later, the worst thing is when you build it for the first time, it's really no big deal.", 'start': 578.28, 'duration': 4.523}, {'end': 590.409, 'text': "But then, if you want to go in and move things around or add a bunch more, 
it's kind of challenging with string formatting,", 'start': 582.843, 'duration': 7.566}, {'end': 595.373, 'text': "whereas with the F strings it's just a breeze.", 'start': 590.409, 'duration': 4.964}], 'summary': 'Constructing a deep learning model with specific layers and parameters, including a dense layer, dropout, and softmax activation, using tensorflow with specified optimizer and callbacks.', 'duration': 164.482, 'max_score': 430.891, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY430891.jpg'}], 'start': 344.658, 'title': 'Neural network model configuration and tensorboard callback usage', 'summary': 'Outlines configuring a neural network model for high accuracy in training, emphasizing dropout, batch normalization, tensorboard callback usage for logging, and the benefits of f-strings over traditional string formatting in python.', 'chapters': [{'end': 541.53, 'start': 344.658, 'title': 'Neural network model configuration', 'summary': 'Outlines the configuration of a neural network model with specific layers, parameters, and optimization settings, aiming for high accuracy in training, with a focus on dropout and batch normalization.', 'duration': 196.872, 'highlights': ['The model consists of various layers such as dropout, batch normalization, and dense layers with specific parameters, aiming for high accuracy in training.', 'The optimizer is specified as tf.keros.optimizers.atom with a learning rate of 0.001 and a dk of 1e-6, emphasizing the importance of optimization in the training process.', 'The chapter emphasizes the significance of specifying activations like TNH or rectified linear for the dense layers, to ensure effective utilization of resources like CUDIAN NLSTM.', 'The use of callbacks is mentioned, indicating a comprehensive approach to model training and fine-tuning for optimal performance.']}, {'end': 595.373, 'start': 542.911, 'title': 'Tensorboard callback and f-strings usage', 'summary': 'Discusses the usage of tensorboard callback for logging and the benefits of using f-strings over traditional string formatting in python, highlighting the ease and flexibility of f-strings for building and modifying strings.', 'duration': 52.462, 'highlights': ['The advantages of using F-strings over traditional string formatting for building and modifying strings, providing a more flexible and easier approach to manage strings in Python.', 'The implementation of tensorboard callback for logging in the context of the discussed codebase, demonstrating the practical application of tensorboard in logging data during the model training process.']}], 'duration': 250.715, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY344658.jpg', 'highlights': ['The model consists of various layers such as dropout, batch normalization, and dense layers with specific parameters, aiming for high accuracy in training.', 'The optimizer is specified as tf.keros.optimizers.atom with a learning rate of 0.001 and a dk of 1e-6, emphasizing the importance of optimization in the training process.', 'The use of callbacks is mentioned, indicating a comprehensive approach to model training and fine-tuning for optimal performance.', 'The advantages of using F-strings over traditional string formatting for building and modifying strings, providing a more flexible and easier approach to manage strings in Python.', 'The implementation of tensorboard callback for logging in the context of the discussed 
codebase, demonstrating the practical application of tensorboard in logging data during the model training process.', 'The chapter emphasizes the significance of specifying activations like TNH or rectified linear for the dense layers, to ensure effective utilization of resources like CUDIAN NLSTM.']}, {'end': 843.355, 'segs': [{'end': 624.828, 'src': 'embed', 'start': 596.073, 'weight': 1, 'content': [{'end': 597.595, 'text': "OK, so that's our TensorBoard object.", 'start': 596.073, 'duration': 1.522}, {'end': 601.297, 'text': "The next thing we're ready to do is do the checkpoint object.", 'start': 598.355, 'duration': 2.942}, {'end': 603.399, 'text': 'So here, file path.', 'start': 601.397, 'duration': 2.002}, {'end': 606.401, 'text': 'And to be quite honest with you, this is not my code.', 'start': 603.839, 'duration': 2.562}, {'end': 612.521, 'text': 'I basically searched for an example for the checkpoint thing.', 'start': 607.938, 'duration': 4.583}, {'end': 615.082, 'text': 'So this might be from Keras documentation.', 'start': 612.541, 'duration': 2.541}, {'end': 616.443, 'text': "I don't even remember where I found this.", 'start': 615.102, 'duration': 1.341}, {'end': 622.807, 'text': "But basically, we'll do RNN final.", 'start': 619.164, 'duration': 3.643}, {'end': 624.828, 'text': "Obviously, it didn't have this exact name.", 'start': 623.307, 'duration': 1.521}], 'summary': 'Using tensorboard and checkpoint object in rnn training', 'duration': 28.755, 'max_score': 596.073, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY596073.jpg'}, {'end': 768.903, 'src': 'embed', 'start': 688.459, 'weight': 0, 'content': [{'end': 691.12, 'text': "But then it's like these other things in the format.", 'start': 688.459, 'duration': 2.661}, {'end': 693.101, 'text': "Anyway, I don't get it.", 'start': 692.041, 'duration': 1.06}, {'end': 695.443, 'text': "I don't know how it works, but it works.", 'start': 693.341, 'duration': 2.102}, {'end': 702.366, 'text': "Anyways, so now we're going to do a history equals model.fit.", 'start': 696.223, 'duration': 6.143}, {'end': 708.569, 'text': "We're going to fit train x, train y.", 'start': 703.887, 'duration': 4.682}, {'end': 714.072, 'text': 'The batch size is just batch size.', 'start': 708.569, 'duration': 5.503}, {'end': 721.691, 'text': 'Epochs is going to equal epochs capitals.', 'start': 716.889, 'duration': 4.802}, {'end': 722.812, 'text': "Let's fix this.", 'start': 721.691, 'duration': 1.121}, {'end': 728.034, 'text': 'they are parameters, therefore no spaces.', 'start': 722.812, 'duration': 5.222}, {'end': 746.384, 'text': 'After epochs we have the validation valid validation, underscore data, and then in here we have validation X, validation Y and And then, finally,', 'start': 728.034, 'duration': 18.35}, {'end': 748.346, 'text': 'we have our callbacks that we want to run.', 'start': 746.384, 'duration': 1.962}, {'end': 753.011, 'text': "So that's a tensor board and a checkpoint.", 'start': 748.566, 'duration': 4.445}, {'end': 768.903, 'text': "Okay, and then once that's done, I do run an evaluate in the text-based version, but really every epoch should be running validation data anyways.", 'start': 756.232, 'duration': 12.671}], 'summary': 'Training model with history equals model.fit, batch size, epochs, validation data, and callbacks.', 'duration': 80.444, 'max_score': 688.459, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY688459.jpg'}, {'end': 805.992, 'src': 'heatmap', 'start': 775.908, 'weight': 1, 'content': [{'end': 778.03, 'text': 'because the model checkpoint is going to be saved.', 'start': 775.908, 'duration': 2.122}, {'end': 781.353, 'text': "So I guess we're done at this point.", 'start': 779.431, 'duration': 1.922}, {'end': 784.093, 'text': 'Okay Save that.', 'start': 782.853, 'duration': 1.24}, {'end': 786.994, 'text': "And now let's go ahead and run this thing.", 'start': 784.653, 'duration': 2.341}, {'end': 790.275, 'text': 'Time dash three, six crypto.', 'start': 787.014, 'duration': 3.261}, {'end': 800.657, 'text': "Let's see if we have any errors faster, please.", 'start': 790.295, 'duration': 10.362}, {'end': 802.778, 'text': "I'm impatient.", 'start': 800.677, 'duration': 2.101}, {'end': 805.992, 'text': 'Nothing yet.', 'start': 805.492, 'duration': 0.5}], 'summary': 'Model checkpoint saved, running with no errors yet.', 'duration': 30.084, 'max_score': 775.908, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY775908.jpg'}, {'end': 843.355, 'src': 'embed', 'start': 807.372, 'weight': 3, 'content': [{'end': 808.433, 'text': 'Okay, it starts training.', 'start': 807.372, 'duration': 1.061}, {'end': 812.294, 'text': 'Already at 51-ish accuracy.', 'start': 809.213, 'duration': 3.081}, {'end': 815.855, 'text': 'The epochs are not too, too slow.', 'start': 813.634, 'duration': 2.221}, {'end': 819.676, 'text': "I think I'll probably pause it or just chop this out.", 'start': 815.955, 'duration': 3.721}, {'end': 825.437, 'text': 'But we can also watch it as it trains with a tensor..', 'start': 820.376, 'duration': 5.061}, {'end': 832.959, 'text': 'No What have you done? It went to save it in that..', 'start': 825.437, 'duration': 7.522}, {'end': 836.691, 'text': "Oh.. 
Probably because we don't have a models directory.", 'start': 832.959, 'duration': 3.732}, {'end': 839.933, 'text': "Let's just throw one in real quick.", 'start': 838.192, 'duration': 1.741}, {'end': 843.355, 'text': 'Models Dang it.', 'start': 840.473, 'duration': 2.882}], 'summary': 'Training started, reached 51% accuracy, encountering issues with model saving.', 'duration': 35.983, 'max_score': 807.372, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY807372.jpg'}], 'start': 596.073, 'title': 'Training model and validation', 'summary': 'Covers training a model with validation data, using callbacks like tensorboard and checkpoints, and encountering errors, achieving a model accuracy of around 51%.', 'chapters': [{'end': 728.034, 'start': 596.073, 'title': 'Tensorboard object and model fitting', 'summary': "Covers the usage of tensorboard object and model fitting in python, including the creation of a checkpoint object and the process of model fitting using the 'model.fit' function with specified parameters.", 'duration': 131.961, 'highlights': ['The chapter covers the usage of the TensorBoard object and creating a checkpoint object for model saving.', 'The speaker mentions searching for an example for the checkpoint code, indicating the use of external resources for code implementation.', "The process of model fitting is described, including the parameters 'train x', 'train y', batch size, and epochs."]}, {'end': 843.355, 'start': 728.034, 'title': 'Training model and validation', 'summary': 'Discusses the process of training a model with validation data, including the use of callbacks like tensor board and checkpoints, and encountering errors while running the training process with a model accuracy of around 51%.', 'duration': 115.321, 'highlights': ['The training process involves using validation data and running callbacks like tensor board and checkpoints.', 'Encountered an error while running the training process with a model accuracy of around 51%.', 'The need to create a models directory to save the model during the training process.']}], 'duration': 247.282, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY596073.jpg', 'highlights': ['The training process involves using validation data and running callbacks like tensor board and checkpoints.', 'The chapter covers the usage of the TensorBoard object and creating a checkpoint object for model saving.', "The process of model fitting is described, including the parameters 'train x', 'train y', batch size, and epochs.", 'Encountered an error while running the training process with a model accuracy of around 51%.', 'The need to create a models directory to save the model during the training process.', 'The speaker mentions searching for an example for the checkpoint code, indicating the use of external resources for code implementation.']}, {'end': 1385.539, 'segs': [{'end': 872.804, 'src': 'embed', 'start': 845.476, 'weight': 3, 'content': [{'end': 849.758, 'text': "Well, here's a perfect example why you want to throw in time to your name.", 'start': 845.476, 'duration': 4.282}, {'end': 855.601, 'text': "Because you're going to fix the bug, you're going to rerun it, and then you're going to be like, dang it, I didn't change the name.", 'start': 849.798, 'duration': 5.803}, {'end': 857.883, 'text': "And then it's going to overwrite it in the logs.", 'start': 855.961, 'duration': 1.922}, {'end': 859.543, 'text': "And now it 
doesn't matter.", 'start': 858.063, 'duration': 1.48}, {'end': 862.085, 'text': "Now there's two of them by timestamp.", 'start': 860.404, 'duration': 1.681}, {'end': 869.882, 'text': "Although.. Actually, why? Oh, no, it's not the same.", 'start': 863.466, 'duration': 6.416}, {'end': 872.804, 'text': "Dude, I can't see at all, clearly.", 'start': 870.903, 'duration': 1.901}], 'summary': 'Importance of adding timestamps to avoid overwriting logs and duplicates.', 'duration': 27.328, 'max_score': 845.476, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY845476.jpg'}, {'end': 933.34, 'src': 'heatmap', 'start': 899.643, 'weight': 0.846, 'content': [{'end': 905.174, 'text': 'So why is this dash dash logder with no underscore? Uh, and then this here.', 'start': 899.643, 'duration': 5.531}, {'end': 915.041, 'text': 'is oh shoot, yeah, yeah, this here has an underscore drives me nuts because it really always confuses me every time.', 'start': 907.378, 'duration': 7.663}, {'end': 919.543, 'text': 'okay, so then we can go to our browser, go to our pc.', 'start': 915.041, 'duration': 4.502}, {'end': 921.044, 'text': "you can always go to 127.001, colon 6006, if you don't know.", 'start': 919.543, 'duration': 1.501}, {'end': 922.905, 'text': "but anyways, hpc for me is my pc's name.", 'start': 921.044, 'duration': 1.861}, {'end': 933.34, 'text': "Okay, so the blue line is what's.", 'start': 930.999, 'duration': 2.341}], 'summary': 'Discussion about logder with no underscore, pc address, and hpc confusion.', 'duration': 33.697, 'max_score': 899.643, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY899643.jpg'}, {'end': 1086.396, 'src': 'embed', 'start': 1048.018, 'weight': 1, 'content': [{'end': 1050.942, 'text': 'And then you can even see validation accuracy is kind of doing the same thing.', 'start': 1048.018, 'duration': 2.924}, {'end': 1052.964, 'text': 'After about six epochs, it starts falling.', 'start': 1051.022, 'duration': 1.942}, {'end': 1061.274, 'text': 'But we had about as high as 57.28% accuracy, which is pretty darn good.', 'start': 1053.204, 'duration': 8.07}, {'end': 1065.399, 'text': 'So the other thing we probably should have in our..', 'start': 1061.814, 'duration': 3.585}, {'end': 1070.873, 'text': "our name here is the thing that we're predicting.", 'start': 1067.412, 'duration': 3.461}, {'end': 1074.634, 'text': 'So I would also throw in ratio to predict.', 'start': 1071.413, 'duration': 3.221}, {'end': 1086.396, 'text': "Um, so now what we could do is like, let's change this to Ethereum and do Ethereum USD, and then come over here and rerun that.", 'start': 1075.694, 'duration': 10.702}], 'summary': "Validation accuracy drops after 6 epochs, reaching 57.28% at peak. 
considering adding 'ratio to predict' in prediction model.", 'duration': 38.378, 'max_score': 1048.018, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY1048018.jpg'}, {'end': 1169.01, 'src': 'embed', 'start': 1146.345, 'weight': 0, 'content': [{'end': 1155.896, 'text': "Okay, Bitcoin Cash is done and actually it's the best performing one of all of them in terms of validation accuracy and loss in both.", 'start': 1146.345, 'duration': 9.551}, {'end': 1158.138, 'text': 'It has the least loss and the highest accuracies.', 'start': 1156.036, 'duration': 2.102}, {'end': 1159.56, 'text': 'Pretty cool.', 'start': 1159.159, 'duration': 0.401}, {'end': 1165.246, 'text': 'Finally, we are going to run Bitcoin and see how that does.', 'start': 1160.261, 'duration': 4.985}, {'end': 1166.668, 'text': "I'm interested to see Bitcoin now.", 'start': 1165.266, 'duration': 1.402}, {'end': 1169.01, 'text': "Again, I actually don't know the results.", 'start': 1167.128, 'duration': 1.882}], 'summary': 'Bitcoin cash outperformed others in validation accuracy and loss, with the least loss and highest accuracies.', 'duration': 22.665, 'max_score': 1146.345, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY1146345.jpg'}, {'end': 1257.182, 'src': 'embed', 'start': 1234.111, 'weight': 2, 'content': [{'end': 1242.433, 'text': 'I mean, I kind of draw a threshold around 60% for like a pretty impressive classifier when it comes to predicting finance.', 'start': 1234.111, 'duration': 8.322}, {'end': 1247.374, 'text': 'And in this case, this is predicting just simply an upward or downward movement.', 'start': 1243.593, 'duration': 3.781}, {'end': 1250.36, 'text': 'So I think you could probably tweak this model.', 'start': 1247.514, 'duration': 2.846}, {'end': 1252.526, 'text': "I mean, I didn't spend too long on the model.", 'start': 1250.4, 'duration': 2.126}, {'end': 1257.182, 'text': 'All I did was I just wanted to find a model that started to learn From here.', 'start': 1252.626, 'duration': 4.556}], 'summary': 'Threshold for impressive finance classifier is around 60%.', 'duration': 23.071, 'max_score': 1234.111, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY1234111.jpg'}], 'start': 845.476, 'title': 'Tensor board analysis and cryptocurrency model results', 'summary': 'Discusses debugging a bug in the code, identifying tensor board logs, and analyzing model performance with 57.28% accuracy after six epochs. 
additionally, it explores the performance of machine learning models on ethereum, litecoin, bitcoin cash, and bitcoin, with bitcoin cash showing the best validation accuracy and potential for further improvement in predicting market movements, achieving over 60% accuracy.', 'chapters': [{'end': 1074.634, 'start': 845.476, 'title': 'Tensor board analysis and bug fixing', 'summary': 'Discusses debugging a bug in the code, identifying tensor board logs, and analyzing model performance with 57.28% accuracy after six epochs.', 'duration': 229.158, 'highlights': ['The model achieved a validation accuracy of 57.28% after six epochs, indicating a strong performance.', 'The chapter emphasizes the importance of adding time to the name to avoid overwriting logs, which can lead to confusion and errors during debugging.', "The speaker discusses the confusion caused by naming inconsistencies, such as 'logder' versus 'logger', highlighting the need for consistent naming conventions in the code.", "The speaker mentions the need for including the predicted variable in the name, providing insight into potential improvements in the model's predictive capabilities."]}, {'end': 1385.539, 'start': 1075.694, 'title': 'Cryptocurrency model results', 'summary': 'Discusses running machine learning models on ethereum, litecoin, bitcoin cash, and bitcoin, with bitcoin cash performing the best in terms of validation accuracy and loss, and the validation accuracy reaching over 60% in predicting upward or downward movements, with potential for further improvement.', 'duration': 309.845, 'highlights': ['Bitcoin Cash performed the best among Ethereum, Litecoin, and Bitcoin in terms of validation accuracy and loss, with the least loss and the highest accuracies.', "The validation accuracy reached over 60% in predicting upward or downward movements, indicating a promising potential for the model's predictive capabilities.", 'The chapter also mentions the intention to explore audio-related applications in future projects and invites feedback and suggestions from the audience.']}], 'duration': 540.063, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yWkpRdpOiPY/pics/yWkpRdpOiPY845476.jpg', 'highlights': ['Bitcoin Cash showed the best validation accuracy and potential for further improvement in predicting market movements, achieving over 60% accuracy.', 'The model achieved a validation accuracy of 57.28% after six epochs, indicating a strong performance.', "The validation accuracy reached over 60% in predicting upward or downward movements, indicating a promising potential for the model's predictive capabilities.", 'The chapter emphasizes the importance of adding time to the name to avoid overwriting logs, which can lead to confusion and errors during debugging.', "The speaker discusses the confusion caused by naming inconsistencies, such as 'logder' versus 'logger', highlighting the need for consistent naming conventions in the code."]}], 'highlights': ['The model achieved a validation accuracy of 57.28% after six epochs, indicating a strong performance.', 'Bitcoin Cash showed the best validation accuracy and potential for further improvement in predicting market movements, achieving over 60% accuracy.', 'The chapter emphasizes the importance of adding time to the name to avoid overwriting logs, which can lead to confusion and errors during debugging.', 'The chapter provides insights into the implementation of Kudian NLSTM with 128 nodes and the input shape for model training, featuring the use of 
specialized layers for neural network architecture.', 'The chapter introduces the implementation of TensorBoard and Model Checkpoint for model training, highlighting the use of callbacks to save checkpoints based on validation accuracy and loss, ensuring the preservation of the best-performing models.']}
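code sketches

The walkthrough above begins by fixing a batch size of 64 and building a unique, descriptive run name from the sequence length, the future period to predict, and int(time.time()), so TensorBoard logs and saved models never overwrite each other; later in the video the predicted ratio is added to the name as well. A minimal sketch of that step, assuming the SEQ_LEN, FUTURE_PERIOD_PREDICT, and RATIO_TO_PREDICT constants from the preprocessing part (the values below are placeholders, and EPOCHS is never stated on camera):

import time

# Hyperparameters; only BATCH_SIZE = 64 is stated explicitly in this part.
SEQ_LEN = 60                  # minutes of history per sequence (assumed from the preprocessing part)
FUTURE_PERIOD_PREDICT = 3     # minutes ahead being classified (assumed from the preprocessing part)
RATIO_TO_PREDICT = "LTC-USD"  # target pair; the video later swaps in Ethereum, Bitcoin Cash, and Bitcoin
EPOCHS = 10                   # assumed; in the video validation accuracy peaks around epoch 6
BATCH_SIZE = 64

# Unique run name so TensorBoard runs and model checkpoints never collide.
NAME = f"{SEQ_LEN}-SEQ-{FUTURE_PERIOD_PREDICT}-PRED-{RATIO_TO_PREDICT}-{int(time.time())}"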
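The model described in the transcript is a stack of recurrent layers, each followed by dropout and batch normalization (normalizing activations between layers for the same reason the input data is normalized), then a Dense layer of 32 units with rectified-linear activation, a final two-unit softmax output for the up/down choice, and Adam with a learning rate of 0.001 and decay of 1e-6, compiled against sparse categorical cross-entropy. A hedged sketch continuing from the constants above: the video uses CuDNNLSTM on a GPU build of TF 1.x, whereas a plain LSTM with its default tanh activation is the closest current equivalent, and the three-layer depth is an assumption, since only the first recurrent layer is typed out on camera.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, LSTM, BatchNormalization

model = Sequential()

# First recurrent layer: 128 units, fed sequences shaped (SEQ_LEN, n_features).
# train_x is assumed to come from the preprocessing in the previous part.
model.add(LSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())  # normalize activations between layers

model.add(LSTM(128, return_sequences=True))
model.add(Dropout(0.2))          # the video notes one dropout layer ended up different from the rest
model.add(BatchNormalization())

model.add(LSTM(128))             # last recurrent layer returns a single vector, not a sequence
model.add(Dropout(0.2))
model.add(BatchNormalization())

model.add(Dense(32, activation="relu"))
model.add(Dropout(0.2))
model.add(Dense(2, activation="softmax"))  # binary target: price up (1) or not (0)

# Adam with learning rate 1e-3; the video also passes decay=1e-6 (an older Keras argument).
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)

model.compile(
    loss="sparse_categorical_crossentropy",  # integer labels; binary cross-entropy would also work
    optimizer=opt,
    metrics=["accuracy"],
)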
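Training then runs with two callbacks: TensorBoard, logging each run under the unique name, and ModelCheckpoint, saving a model every time validation accuracy improves (the video errors out until a models directory exists). A sketch of that final step, again assuming train_x, train_y, validation_x, and validation_y from the previous part's preprocessing; older Keras logs the monitored metric as val_acc, while current versions call it val_accuracy.

import os
from tensorflow.keras.callbacks import TensorBoard, ModelCheckpoint

tensorboard = TensorBoard(log_dir=f"logs/{NAME}")  # one log directory per run

os.makedirs("models", exist_ok=True)  # checkpointing fails if this directory is missing

# Epoch number and validation accuracy are filled into the filename by Keras at save time.
filepath = "RNN_Final-{epoch:02d}-{val_accuracy:.3f}"
checkpoint = ModelCheckpoint(
    "models/{}.keras".format(filepath),  # '.keras' is the native format in current Keras
    monitor="val_accuracy",
    verbose=1,
    save_best_only=True,
    mode="max",
)

history = model.fit(
    train_x, train_y,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    validation_data=(validation_x, validation_y),
    callbacks=[tensorboard, checkpoint],
)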