title
Recurrent Neural Network (RNN) Tutorial | RNN LSTM Tutorial | Deep Learning Tutorial | Simplilearn

description
🔥Artificial Intelligence Engineer Program (Discount Coupon: YTBE15): https://www.simplilearn.com/artificial-intelligence-masters-program-training-course?utm_campaign=MachineLearning-lWkFhVq9-nc&utm_medium=DescriptionFirstFold&utm_source=youtube 🔥Professional Certificate Program In AI And Machine Learning: https://www.simplilearn.com/pgp-ai-machine-learning-certification-training-course?utm_campaign=MachineLearning-lWkFhVq9-nc&utm_medium=DescriptionFirstFold&utm_source=youtube This Recurrent Neural Network tutorial will help you understand what a neural network is, which neural networks are popular, why we need recurrent neural networks, what a recurrent neural network is, how an RNN works, and what the vanishing and exploding gradient problems are; it also introduces LSTM, and you will see a use case implementation of LSTM (long short-term memory). Neural networks used in deep learning consist of different layers connected to each other and are modeled on the structure and functions of the human brain. They learn from huge volumes of data and use complex algorithms to train a neural net. The recurrent neural network works on the principle of saving the output of a layer and feeding it back to the input in order to predict the layer's output. Now let's dive into this video and understand what an RNN is and how it actually works. The topics below are explained in this recurrent neural networks tutorial: 1. What is a neural network? 2. Popular neural networks 3. Why recurrent neural networks? 4. What is a recurrent neural network? 5. How does an RNN work? 6. Vanishing and exploding gradient problem 7. Long short-term memory (LSTM) 8.
Use case implementation of LSTM 🔥Free Machine Learning Course: https://www.simplilearn.com/learn-machine-learning-basics-skillup?utm_campaign=MachineLearning&utm_medium=Description&utm_source=youtube To learn more about Deep Learning, subscribe to our YouTube channel: https://www.youtube.com/user/Simplilearn?sub_confirmation=1 You can also go through the slides here: https://goo.gl/wsjuLv Watch more videos on Deep Learning: https://www.youtube.com/watch?v=FbxTVRfQFuI&list=PLEiEAq2VkUUIYQ-mMRAGilfOKyWKpHSip #DeepLearning #Datasciencecourse #DataScience #SimplilearnMachineLearning #DeepLearningCourse Simplilearn's Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed to conduct machine learning & deep neural network research. With our deep learning course, you'll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks and traverse layers of data abstraction to understand the power of data, preparing you for your new role as a deep learning scientist. According to payscale.com, the median salary for engineers with deep learning skills tops $120,000 per year. You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn's Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to: 1. Understand the concepts of TensorFlow, its main functions, operations and the execution pipeline 2. Implement deep learning algorithms, understand neural networks and traverse the layers of data abstraction, which will empower you to understand data like never before 3.
Master and comprehend advanced topics such as convolutional neural networks, recurrent neural networks, training deep networks and high-level interfaces 4. Build deep learning models in TensorFlow and interpret the results 5. Understand the language and fundamental concepts of artificial neural networks 6. Troubleshoot and improve deep learning models There is booming demand for skilled deep learning engineers across a wide range of industries, making this deep learning course with TensorFlow training well-suited for professionals at the intermediate to advanced level of experience. We recommend this Deep Learning online course particularly for the following professionals: 1. Software engineers 2. Data scientists 3. Data analysts 4. Statisticians with an interest in deep learning Learn more at: https://www.simplilearn.com/deep-learning-course-with-tensorflow-training?utm_campaign=Recurrent-Neural-Network-Tutorial-lWkFhVq9-nc&utm_medium=Tutorials&utm_source=youtube For more information about Simplilearn's courses, visit: - Facebook: https://www.facebook.com/Simplilearn - LinkedIn: https://www.linkedin.com/company/simp... - Website: https://www.simplilearn.com 🔥🔥 Interested in Attending Live Classes? Call Us: IN - 18002127688 / US - +18445327688
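The recurrence described above ("saving the output of a layer and feeding it back to the input") can be sketched in a few lines of plain NumPy. This is a minimal illustration, not the tutorial's code: the dimensions, the tanh activation, and the random initialization are all assumptions made here for the sake of a runnable example. In the video's notation, the state update is h_t = f_c(h_{t-1}, x_t); the labels A, B, C from its diagram correspond to the output, input, and recurrent weight matrices (the output weight A is omitted below).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the video).
input_size, hidden_size, seq_len = 3, 4, 5

W_x = rng.normal(scale=0.5, size=(hidden_size, input_size))   # input -> hidden ("B")
W_h = rng.normal(scale=0.5, size=(hidden_size, hidden_size))  # hidden -> hidden, the loop ("C")
b = np.zeros(hidden_size)

def rnn_step(h_prev, x_t):
    """One time step: h_t = f_c(h_{t-1}, x_t), here with a tanh activation."""
    return np.tanh(W_h @ h_prev + W_x @ x_t + b)

h = np.zeros(hidden_size)               # initial state
xs = rng.normal(size=(seq_len, input_size))
for x_t in xs:                          # the state is fed back at every step
    h = rnn_step(h, x_t)

print(h.shape)  # (4,)
```

Note that because the same W_h multiplies the state at every step, repeated application shrinks or amplifies gradients during training, which is exactly the vanishing/exploding gradient problem the tutorial discusses.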
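The tutorial's three LSTM steps (forget irrelevant parts of the previous state, selectively update the cell state, limit the output) can likewise be sketched directly from the gate equations it presents, e.g. f_t = sigmoid(W_f · [h_{t-1}, x_t] + b_f). Again a hedged sketch: the sizes and random weights are illustrative assumptions, and a real model (such as the stock-price use case in Keras) would learn these weights by training.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(h_prev, c_prev, x_t, W, b):
    """One LSTM step. W maps the concatenated [h_{t-1}, x_t] to the
    pre-activations of the four interacting layers (f, i, o gates + candidate g)."""
    z = W @ np.concatenate([h_prev, x_t]) + b
    f, i, o, g = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)  # gates squash to (0, 1): let values through or not
    g = np.tanh(g)                                # candidate values weighted between -1 and 1
    c_t = f * c_prev + i * g      # step 1-2: forget old info, add relevant new info
    h_t = o * np.tanh(c_t)        # step 3: limit what is emitted as output
    return h_t, c_t

hidden, inputs = 4, 3             # illustrative sizes (assumptions)
rng = np.random.default_rng(1)
W = rng.normal(scale=0.3, size=(4 * hidden, hidden + inputs))
b = np.zeros(4 * hidden)

h = c = np.zeros(hidden)
for x_t in rng.normal(size=(6, inputs)):  # e.g. six time steps of scaled features
    h, c = lstm_step(h, c, x_t, W, b)
print(h.shape)  # (4,)
```

The additive cell-state update c_t = f * c_prev + i * g is what lets gradients flow over long spans, which is why LSTMs are listed among the remedies for the vanishing gradient problem.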

detail
{'title': 'Recurrent Neural Network (RNN) Tutorial | RNN LSTM Tutorial | Deep Learning Tutorial | Simplilearn', 'heatmap': [{'end': 607.931, 'start': 569.707, 'weight': 1}, {'end': 676.638, 'start': 637.175, 'weight': 0.886}, {'end': 748.646, 'start': 711.843, 'weight': 0.849}], 'summary': "This tutorial covers the introduction and applications of recurrent neural network (rnn) in google's autocomplete feature, its usage in natural language processing for sentiment analysis and machine translation, the role and functionality of lstms in predicting words and processing sequential data, implementing lstm for stock price prediction, and rnn model building and training with specific details on initialization and dropout regularization.", 'chapters': [{'end': 139.069, 'segs': [{'end': 51.8, 'src': 'embed', 'start': 22.788, 'weight': 0, 'content': [{'end': 27.232, 'text': "It's important to know the framework we're in and what we're going to be looking at specifically.", 'start': 22.788, 'duration': 4.444}, {'end': 35.364, 'text': "Then we'll touch on why a recurrent neural network, what is a recurrent neural network, and how does an RNN work.", 'start': 27.472, 'duration': 7.892}, {'end': 40.772, 'text': 'One of the big things about RNNs is what they call the vanishing and exploding gradient problem.', 'start': 36.045, 'duration': 4.727}, {'end': 47.558, 'text': "so we'll look at that and then We're going to be using a use case study that's going to be in Keras on TensorFlow.", 'start': 40.772, 'duration': 6.786}, {'end': 51.8, 'text': 'Keras is a Python module for doing neural networks in deep learning.', 'start': 47.718, 'duration': 4.082}], 'summary': 'Introduction to rnns, addressing vanishing and exploding gradient problem, and use case study in keras on tensorflow.', 'duration': 29.012, 'max_score': 22.788, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc22788.jpg'}, {'end': 98.493, 'src': 'embed', 'start': 
65.967, 'weight': 1, 'content': [{'end': 66.867, 'text': "And we'll get into that.", 'start': 65.967, 'duration': 0.9}, {'end': 68.628, 'text': 'The use case is always my favorite part.', 'start': 67.007, 'duration': 1.621}, {'end': 75.853, 'text': "Before we dive into any of this, We're going to take a look at what is an RNN or an introduction to the RNN.", 'start': 68.908, 'duration': 6.945}, {'end': 83.36, 'text': "Do you know how Google's autocomplete feature predicts the rest of the words a user is typing? I love that autocomplete feature as I'm typing away.", 'start': 75.994, 'duration': 7.366}, {'end': 84.822, 'text': 'It saves me a lot of time.', 'start': 83.38, 'duration': 1.442}, {'end': 89.186, 'text': "I can just kind of hit the enter key and it autofills everything and I don't have to type as much.", 'start': 84.962, 'duration': 4.224}, {'end': 94.751, 'text': "Well, first there's a collection of large volumes of most frequently occurring consecutive words.", 'start': 89.486, 'duration': 5.265}, {'end': 98.493, 'text': 'This is fed into a recurrent neural network,', 'start': 95.551, 'duration': 2.942}], 'summary': "An introduction to rnn and its application in google's autocomplete feature.", 'duration': 32.526, 'max_score': 65.967, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc65967.jpg'}], 'start': 3.171, 'title': 'Rnn tutorial: introduction and applications', 'summary': "Introduces recurrent neural network (rnn), explains its working principles, and highlights its application in google's autocomplete feature, which saves time by predicting the next word in a sentence, based on frequently occurring consecutive words.", 'chapters': [{'end': 139.069, 'start': 3.171, 'title': 'Rnn tutorial: introduction and applications', 'summary': "Introduces recurrent neural network (rnn), explains its working principles, and highlights its application in google's autocomplete feature, which saves time by 
predicting the next word in a sentence, based on frequently occurring consecutive words.", 'duration': 135.898, 'highlights': ['Recurrent Neural Network (RNN) and its working principles are explained. The chapter covers the fundamentals of RNN, including its framework, working principles, and the vanishing and exploding gradient problem.', "Application of RNN in Google's autocomplete feature is discussed. The transcript explains how RNN is used to predict the next word in a sentence based on frequently occurring consecutive words, showcasing its application in Google's autocomplete feature."]}], 'duration': 135.898, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc3171.jpg', 'highlights': ['The chapter covers the fundamentals of RNN, including its framework, working principles, and the vanishing and exploding gradient problem.', "The transcript explains how RNN is used to predict the next word in a sentence based on frequently occurring consecutive words, showcasing its application in Google's autocomplete feature."]}, {'end': 992.249, 'segs': [{'end': 166.527, 'src': 'embed', 'start': 139.41, 'weight': 5, 'content': [{'end': 145.771, 'text': "So before we dive into the RNN and getting into the depths, let's go ahead and talk about what is a neural network.", 'start': 139.41, 'duration': 6.361}, {'end': 154.024, 'text': 'Neural networks used in deep learning, consist of different layers connected to each other and work on the structure and functions of a human brain.', 'start': 146.191, 'duration': 7.833}, {'end': 159.565, 'text': "You're going to see that thread, human and human brain and human thinking, throughout deep learning.", 'start': 154.264, 'duration': 5.301}, {'end': 166.527, 'text': 'The only way we can evaluate an artificial intelligence or anything like that is to compare it to human function.', 'start': 159.665, 'duration': 6.862}], 'summary': 'Neural networks in deep learning mimic human brain 
functions.', 'duration': 27.117, 'max_score': 139.41, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc139410.jpg'}, {'end': 336.373, 'src': 'embed', 'start': 308.544, 'weight': 3, 'content': [{'end': 313.91, 'text': 'Decisions are based on current input, no memory about the past, no future scope.', 'start': 308.544, 'duration': 5.366}, {'end': 317.634, 'text': 'Why recurrent neural network? Issues in feedforward neural network.', 'start': 313.93, 'duration': 3.704}, {'end': 322.7, 'text': "So one of the biggest issues is because it doesn't have a scope of memory or time.", 'start': 318.055, 'duration': 4.645}, {'end': 326.485, 'text': "a feedforward neural network doesn't know how to handle sequential data.", 'start': 322.7, 'duration': 3.785}, {'end': 329.308, 'text': 'It only considers only the current input.', 'start': 326.925, 'duration': 2.383}, {'end': 336.373, 'text': "So if you have a series of things, and because three points back affects what's happening now and what's your output affects what's happening.", 'start': 329.728, 'duration': 6.645}], 'summary': 'Feedforward neural networks lack memory and sequential data handling, prompting the need for recurrent neural networks.', 'duration': 27.829, 'max_score': 308.544, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc308544.jpg'}, {'end': 408.015, 'src': 'embed', 'start': 383.806, 'weight': 1, 'content': [{'end': 389.988, 'text': "And if we're going to look at general drawings and solutions, we should also look at applications of the RNN.", 'start': 383.806, 'duration': 6.182}, {'end': 391.169, 'text': 'Image captioning.', 'start': 390.268, 'duration': 0.901}, {'end': 395.41, 'text': 'RNN is used to caption an image by analyzing the activities present in it.', 'start': 391.549, 'duration': 3.861}, {'end': 397.371, 'text': 'A dog catching a ball in midair.', 'start': 395.69, 'duration': 1.681}, 
{'end': 398.572, 'text': "That's very tough.", 'start': 397.871, 'duration': 0.701}, {'end': 406.234, 'text': "I mean we have a lot of stuff that analyzes images of a dog and the image of a ball, but it's able to add one more feature in there,", 'start': 398.632, 'duration': 7.602}, {'end': 408.015, 'text': "that's actually catching the ball in midair.", 'start': 406.234, 'duration': 1.781}], 'summary': 'Rnn is applied for image captioning, analyzing activities in images, such as a dog catching a ball in midair.', 'duration': 24.209, 'max_score': 383.806, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc383806.jpg'}, {'end': 470.528, 'src': 'embed', 'start': 443.618, 'weight': 0, 'content': [{'end': 448.399, 'text': "In here, we'll give you a little jump on that, so that's exciting, but don't expect to get rich off of it immediately.", 'start': 443.618, 'duration': 4.781}, {'end': 452.112, 'text': 'Another application of the RNN is natural language processing.', 'start': 448.789, 'duration': 3.323}, {'end': 457.036, 'text': 'Text mining and sentiment analysis can be carried out using RNN for natural language processing.', 'start': 452.212, 'duration': 4.824}, {'end': 459.939, 'text': 'And you can see right here the term natural language.', 'start': 457.236, 'duration': 2.703}, {'end': 466.484, 'text': 'processing, when you stream those three words together, is very different than if I said processing language naturally.', 'start': 459.939, 'duration': 6.545}, {'end': 470.528, 'text': "So the time series is very important when we're analyzing sentiments.", 'start': 466.764, 'duration': 3.764}], 'summary': 'Rnn can be used for natural language processing and sentiment analysis, providing a jumpstart without immediate wealth.', 'duration': 26.91, 'max_score': 443.618, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc443618.jpg'}, {'end': 607.931, 'src': 
'heatmap', 'start': 569.707, 'weight': 1, 'content': [{'end': 574.67, 'text': 'And usually we have a propagation forward neural network with the input layers, the hidden layers, the output layer.', 'start': 569.707, 'duration': 4.963}, {'end': 577.811, 'text': 'With the recurrent neural network, we turn that on its side.', 'start': 574.83, 'duration': 2.981}, {'end': 585.714, 'text': 'So here it is, and now our X comes up from the bottom into the hidden layers, into Y, and they usually draw it very simplified X to H,', 'start': 578.111, 'duration': 7.603}, {'end': 590.496, 'text': 'with C as a loop, A to Y, where A, B and C are the parameters.', 'start': 585.714, 'duration': 4.782}, {'end': 593.057, 'text': "A lot of times you'll see this kind of drawing in here.", 'start': 590.896, 'duration': 2.161}, {'end': 596.66, 'text': 'Digging closer and closer into the H and how it works.', 'start': 593.477, 'duration': 3.183}, {'end': 601.645, 'text': "Going from left to right, you'll see that the C goes in and then the X goes in.", 'start': 596.941, 'duration': 4.704}, {'end': 604.708, 'text': 'So the X is going upward bound and C is going to the right.', 'start': 601.925, 'duration': 2.783}, {'end': 607.931, 'text': 'A is going out and C is also going out.', 'start': 605.188, 'duration': 2.743}], 'summary': 'Comparison between feedforward and recurrent neural networks explained with visual diagrams.', 'duration': 38.224, 'max_score': 569.707, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc569707.jpg'}, {'end': 676.638, 'src': 'heatmap', 'start': 637.175, 'weight': 0.886, 'content': [{'end': 642.779, 'text': "So it's the last output of h, combined with the new input of x, where ht is the new state.", 'start': 637.175, 'duration': 5.604}, {'end': 644.8, 'text': 'fc is a function with the parameter c.', 'start': 642.779, 'duration': 2.021}, {'end': 646.761, 'text': "that's a common way of denoting it.", 'start': 
644.8, 'duration': 1.961}, {'end': 653.685, 'text': 'ht minus 1 is the old state coming out and then x of t is an input vector at time of step t.', 'start': 646.761, 'duration': 6.924}, {'end': 656.987, 'text': 'Well, we need to cover types of recurrent neural networks.', 'start': 653.685, 'duration': 3.302}, {'end': 661.91, 'text': 'And so the first one is the most common one, which is a one-to-one single output.', 'start': 657.367, 'duration': 4.543}, {'end': 669.189, 'text': 'One-to-one neural network is usually known as a vanilla neural network used for regular machine learning problems.', 'start': 662.782, 'duration': 6.407}, {'end': 673.474, 'text': 'Why? Because vanilla is usually considered kind of just a real basic flavor.', 'start': 669.63, 'duration': 3.844}, {'end': 676.638, 'text': "But because it's very basic, a lot of times they'll call it the vanilla neural network.", 'start': 673.574, 'duration': 3.064}], 'summary': 'Types of recurrent neural networks include one-to-one single output, often known as a vanilla neural network.', 'duration': 39.463, 'max_score': 637.175, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc637175.jpg'}, {'end': 764.519, 'src': 'heatmap', 'start': 711.843, 'weight': 2, 'content': [{'end': 716.928, 'text': "So positive sentiment where rain might be a negative sentiment if you're just adding up the words in there.", 'start': 711.843, 'duration': 5.085}, {'end': 721.356, 'text': "And then the course, if you're going to do a one-to-one, many-to-one, one-to-many,", 'start': 717.355, 'duration': 4.001}, {'end': 726.058, 'text': "there's many-to-many networks takes in a sequence of inputs and generates a sequence of outputs.", 'start': 721.356, 'duration': 4.702}, {'end': 728.358, 'text': 'Example, machine translation.', 'start': 726.378, 'duration': 1.98}, {'end': 732.779, 'text': 'So we have a lengthy sentence coming in in English and then going out in all the different 
languages.', 'start': 728.418, 'duration': 4.361}, {'end': 734.56, 'text': 'You know, just a wonderful tool.', 'start': 732.799, 'duration': 1.761}, {'end': 736.941, 'text': 'Very complicated set of computations.', 'start': 734.64, 'duration': 2.301}, {'end': 741.202, 'text': "You know, if you're a translator, you realize just how difficult it is to translate into different languages.", 'start': 736.961, 'duration': 4.241}, {'end': 748.646, 'text': "One of the biggest things you need to understand when we're working with this neural network is what's called the vanishing gradient problem.", 'start': 741.92, 'duration': 6.726}, {'end': 755.393, 'text': 'While training an RNN, your slope can be either too small or very large, and this makes training difficult.', 'start': 748.887, 'duration': 6.506}, {'end': 759.737, 'text': 'When the slope is too small, the problem is known as vanishing gradient.', 'start': 755.513, 'duration': 4.224}, {'end': 764.519, 'text': "And you'll see here, they have a nice image, loss of information through time.", 'start': 760.017, 'duration': 4.502}], 'summary': 'Neural networks can handle sequences of inputs and outputs, such as in machine translation, but face challenges like the vanishing gradient problem.', 'duration': 52.676, 'max_score': 711.843, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc711843.jpg'}], 'start': 139.41, 'title': 'Recurrent neural networks in nlp', 'summary': 'Covers the basics of neural networks, the limitations of feedforward neural network, and the advantages of recurrent neural network, including its usage in handling sequential data, image captioning, and time series prediction. 
it also delves into the applications of recurrent neural networks (rnn) in natural language processing, encompassing sentiment analysis, machine translation, and addressing the challenges of vanishing and exploding gradient problems.', 'chapters': [{'end': 443.358, 'start': 139.41, 'title': 'Understanding neural networks and rnn', 'summary': 'Discusses the basics of neural networks, the limitations of feedforward neural network, and the applications and advantages of recurrent neural network, including its usage in handling sequential data and applications in image captioning and time series prediction.', 'duration': 303.948, 'highlights': ['Applications of RNN Recurrent Neural Network (RNN) is utilized for image captioning, analyzing activities present in images, and time series prediction for stock prices, particularly helpful in handling sequential data and analyzing complex stock market data.', 'Limitations of Feedforward Neural Network The feedforward neural network lacks the capability to handle sequential data and consider the impact of previous inputs, requiring the implementation of recurrent neural network to address these limitations.', 'Basics of Neural Networks Neural networks consist of different layers connected to each other, mimicking human brain functions, and are used to compare artificial intelligence to human function, learning from large volumes of data and complex algorithms to train a neural net.']}, {'end': 992.249, 'start': 443.618, 'title': 'Recurrent neural networks in nlp', 'summary': 'Discusses the applications of recurrent neural networks (rnn) in natural language processing, including sentiment analysis, machine translation, and the challenges of vanishing and exploding gradient problems.', 'duration': 548.631, 'highlights': ['Applications of RNN in natural language processing RNN can be used for text mining, sentiment analysis, and machine translation in natural language processing, allowing analysis of sentiments based on word 
order and translation of input into different languages as output.', 'Challenges of vanishing and exploding gradient problems The vanishing gradient problem can lead to loss of information through time, while the exploding gradient problem can cause long tracking time, poor performance, and bad accuracy in RNN, requiring solutions like identity initialization, truncating back propagation, gradient clipping, weight initialization, choosing the right activation function, and using long short-term memory networks (LSTMs).', 'Types of recurrent neural networks Different types of RNN include one-to-one, one-to-many, many-to-one, and many-to-many networks, allowing applications like image captioning, sentiment analysis, and machine translation, with each type handling a specific sequence of inputs and outputs.']}], 'duration': 852.839, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc139410.jpg', 'highlights': ['Applications of RNN in natural language processing RNN can be used for text mining, sentiment analysis, and machine translation in natural language processing, allowing analysis of sentiments based on word order and translation of input into different languages as output.', 'Applications of RNN Recurrent Neural Network (RNN) is utilized for image captioning, analyzing activities present in images, and time series prediction for stock prices, particularly helpful in handling sequential data and analyzing complex stock market data.', 'Challenges of vanishing and exploding gradient problems The vanishing gradient problem can lead to loss of information through time, while the exploding gradient problem can cause long tracking time, poor performance, and bad accuracy in RNN, requiring solutions like identity initialization, truncating back propagation, gradient clipping, weight initialization, choosing the right activation function, and using long short-term memory networks (LSTMs).', 'Limitations of Feedforward 
Neural Network The feedforward neural network lacks the capability to handle sequential data and consider the impact of previous inputs, requiring the implementation of recurrent neural network to address these limitations.', 'Types of recurrent neural networks Different types of RNN include one-to-one, one-to-many, many-to-one, and many-to-many networks, allowing applications like image captioning, sentiment analysis, and machine translation, with each type handling a specific sequence of inputs and outputs.', 'Basics of Neural Networks Neural networks consist of different layers connected to each other, mimicking human brain functions, and are used to compare artificial intelligence to human function, learning from large volumes of data and complex algorithms to train a neural net.']}, {'end': 1571.733, 'segs': [{'end': 1067.613, 'src': 'embed', 'start': 1040.097, 'weight': 0, 'content': [{'end': 1046.182, 'text': 'Instead of having a single neural network layer, there are four interacting layers communicating in a very special way.', 'start': 1040.097, 'duration': 6.085}, {'end': 1052.805, 'text': 'LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies,', 'start': 1046.742, 'duration': 6.063}, {'end': 1056.107, 'text': 'remembering information for long periods of time as their default behavior.', 'start': 1052.805, 'duration': 3.302}, {'end': 1061.47, 'text': 'LSTMs also have a chain-like structure, but the repeating module has a different structure.', 'start': 1056.487, 'duration': 4.983}, {'end': 1067.613, 'text': 'Instead of having a single neural network layer, there are four interacting layers communicating in a very special way.', 'start': 1061.73, 'duration': 5.883}], 'summary': 'LSTMs have four interacting layers, capable of learning long-term dependencies and remembering information for long periods of time.', 'duration': 27.516, 'max_score': 1040.097, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc1040097.jpg'}, {'end': 1154.089, 'src': 'embed', 'start': 1124.524, 'weight': 2, 'content': [{'end': 1129.708, 'text': "But it's a very powerful tool to help us address the problem of complicated, long,", 'start': 1124.524, 'duration': 5.184}, {'end': 1133.07, 'text': 'sequential information coming in like we were just looking at in the sentence.', 'start': 1129.708, 'duration': 3.362}, {'end': 1140.897, 'text': "And when we're looking at our long, short-term memory network, there's three steps of processing in the LSTMs that we look at.", 'start': 1133.47, 'duration': 7.427}, {'end': 1144.7, 'text': 'The first one is we want to forget irrelevant parts of the previous state.', 'start': 1141.017, 'duration': 3.683}, {'end': 1154.089, 'text': "A lot of times like is as in a, unless we're trying to look at whether it's a plural noun or not, they don't really play a huge part in the language,", 'start': 1145.101, 'duration': 8.988}], 'summary': 'LSTMs aid in processing sequential data, involving three steps and addressing irrelevant parts.', 'duration': 29.565, 'max_score': 1124.524, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc1124524.jpg'}, {'end': 1204.442, 'src': 'embed', 'start': 1167.123, 'weight': 3, 'content': [{'end': 1169.626, 'text': "So whatever is coming out, we want to limit what's going out too.", 'start': 1167.123, 'duration': 2.503}, {'end': 1171.869, 'text': "Let's dig a little deeper into this.", 'start': 1170.367, 'duration': 1.502}, {'end': 1174.492, 'text': "Let's just see what this really looks like.", 'start': 1172.069, 'duration': 2.423}, {'end': 1177.896, 'text': 'So step one decides how much of the past it should remember.', 'start': 1174.592, 'duration': 3.304}, {'end': 1185.244, 'text': 'First step in the LSTM is to decide which information to be omitted from the cell in that particular 
time step.', 'start': 1178.156, 'duration': 7.088}, {'end': 1187.507, 'text': 'It is decided by the sigmoid function.', 'start': 1185.545, 'duration': 1.962}, {'end': 1190.55, 'text': 'It looks at the previous state, h of t minus 1.', 'start': 1187.787, 'duration': 2.763}, {'end': 1193.773, 'text': 'and the current input, x of t, and computes the function.', 'start': 1190.55, 'duration': 3.223}, {'end': 1204.442, 'text': 'So you can see over here we have a function of t equals the sigmoid function of the weight of f the h at t minus 1, and then x of t plus.', 'start': 1193.953, 'duration': 10.489}], 'summary': 'Limit outgoing information, lstm uses sigmoid function to decide what to omit, based on previous state and current input.', 'duration': 37.319, 'max_score': 1167.123, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc1167123.jpg'}, {'end': 1558.951, 'src': 'embed', 'start': 1529.605, 'weight': 4, 'content': [{'end': 1533.028, 'text': "Let's predict the prices of stocks using the LSTM network.", 'start': 1529.605, 'duration': 3.423}, {'end': 1536.112, 'text': 'based on the stock price data between 2012 and 2016.', 'start': 1533.388, 'duration': 2.724}, {'end': 1539.977, 'text': "We're going to try to predict the stock prices of 2017.", 'start': 1536.112, 'duration': 3.865}, {'end': 1543.02, 'text': 'This will be a narrow set of data.', 'start': 1539.977, 'duration': 3.043}, {'end': 1544.883, 'text': "We're not going to do the whole stock market.", 'start': 1543.14, 'duration': 1.743}, {'end': 1551.029, 'text': 'It turns out that the New York Stock Exchange generates roughly 3 terabytes of data Per day.', 'start': 1545.083, 'duration': 5.946}, {'end': 1554.59, 'text': "That's all the different trades up and down of all the different stocks going on.", 'start': 1551.189, 'duration': 3.401}, {'end': 1558.951, 'text': 'And each individual one second to second or nanosecond to nanosecond.', 'start': 1554.73, 
'duration': 4.221}], 'summary': 'Using lstm to predict stock prices for 2017 based on 2012-2016 data.', 'duration': 29.346, 'max_score': 1529.605, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc1529605.jpg'}], 'start': 992.589, 'title': 'Lstms in neural networks', 'summary': 'Explains the role and functionality of lstms in predicting words and processing sequential data, emphasizing their ability to learn long-term dependencies, unique structure, and case study on predicting stock prices using lstm network based on a narrow set of data from 2012-2016.', 'chapters': [{'end': 1105.651, 'start': 992.589, 'title': 'Lstms and long-term dependencies', 'summary': 'Explains the role of lstms in predicting words based on context, highlighting their ability to learn long-term dependencies and the unique structure with four interacting layers.', 'duration': 113.062, 'highlights': ['LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, with the default behavior of remembering information for long periods of time.', 'The repeating module of LSTMs has a chain-like structure with four interacting layers communicating in a special way.', 'The chapter emphasizes the role of LSTMs in predicting words based on context and their ability to handle information gaps, with a detailed explanation of the structure and interactions of the four layers.']}, {'end': 1280.059, 'start': 1105.651, 'title': 'Understanding lstm in neural networks', 'summary': 'Discusses the functionality of long short-term memory (lstm) networks, highlighting the process of forgetting irrelevant information, updating cell state values, and limiting the output in sequential data processing, illustrating the power and complexity of lstm networks.', 'duration': 174.408, 'highlights': ['The process of forgetting irrelevant information, updating cell state values, and limiting output in LSTM networks is crucial for 
addressing the complexity of sequential data processing, demonstrating the power and resource implications of LSTM networks.', 'The forget gate in LSTM networks decides which information is to be omitted from the cell at a particular time step, using a sigmoid function to compare the previous state and the current input, thereby determining the relevance of information for sequential data processing.', 'In sequential data processing, the LSTM network efficiently weeds out irrelevant information and only passes on relevant information, showcasing its ability to handle and process long, sequential data by focusing on the current context.', 'The complexity and memory resources required for LSTM networks increase with the size of sequential data, emphasizing the trade-offs in utilizing LSTM for addressing complicated, long, sequential information.']}, {'end': 1571.733, 'start': 1280.439, 'title': 'Understanding LSTM in neural networks', 'summary': 'Explains the working of LSTM in neural networks, emphasizing the significance of the sigmoid and tanh functions in determining the importance and weightage of information, and presents a case study on predicting stock prices using an LSTM network based on a narrow set of data from 2012-2016, highlighting the vast volume of data generated by the New York Stock Exchange.', 'duration': 291.294, 'highlights': ['The sigmoid and tanh functions play a crucial role in determining the importance and weightage of information in the LSTM network. The sigmoid function decides which values to let through, 0 or 1, while the tanh function gives weightage to the values, deciding their level of importance, ranging from -1 to 1.', "The case study on predicting stock prices using an LSTM network is based on a narrow set of data from 2012-2016, and it's revealed that the New York Stock Exchange generates approximately 3 terabytes of data per day. 
The case study involves predicting stock prices using LSTM network based on data from 2012-2016 to forecast the stock prices of 2017. It is highlighted that the New York Stock Exchange generates approximately 3 terabytes of data per day."]}], 'duration': 579.144, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc992589.jpg', 'highlights': ['LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies, with the default behavior of remembering information for long periods of time.', 'The repeating module of LSTMs has a chain-like structure with four interacting layers communicating in a special way.', 'The process of forgetting irrelevant information, updating cell state values, and limiting output in LSTM networks is crucial for addressing the complexity of sequential data processing, demonstrating the power and resource implications of LSTM networks.', 'The forget gate in LSTM networks decides which information to be omitted from the cell at a particular time step, using a sigmoid function to compare the previous state and the current input, thereby determining the relevance of information for sequential data processing.', "The case study on predicting stock prices using LSTM network is based on a narrow set of data from 2012-2016, and it's revealed that the New York Stock Exchange generates approximately 3 terabytes of data per day."]}, {'end': 2468.302, 'segs': [{'end': 1599.846, 'src': 'embed', 'start': 1571.833, 'weight': 0, 'content': [{'end': 1575.374, 'text': "A very valid use for machine learning in today's markets.", 'start': 1571.833, 'duration': 3.541}, {'end': 1579.095, 'text': 'Use case implementation of LSTM.', 'start': 1576.154, 'duration': 2.941}, {'end': 1580.373, 'text': "Let's dive in.", 'start': 1579.653, 'duration': 0.72}, {'end': 1582.115, 'text': "We're going to import our libraries.", 'start': 1580.454, 'duration': 1.661}, {'end': 1586.157, 'text': 
"We're going to import the training set and get the scaling going.", 'start': 1582.335, 'duration': 3.822}, {'end': 1591.541, 'text': 'Now if you watch any of our other tutorials, a lot of these pieces should start to look very familiar.', 'start': 1586.898, 'duration': 4.643}, {'end': 1592.922, 'text': "It's a very similar setup.", 'start': 1591.741, 'duration': 1.181}, {'end': 1594.483, 'text': "Let's take a look at that.", 'start': 1593.142, 'duration': 1.341}, {'end': 1599.846, 'text': "and just a reminder, we're going to be using Anaconda and the Jupyter Notebook.", 'start': 1594.483, 'duration': 5.363}], 'summary': 'Use machine learning for market analysis with an LSTM implementation in Anaconda and the Jupyter Notebook.', 'duration': 28.013, 'max_score': 1571.833, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc1571833.jpg'}, {'end': 1786.092, 'src': 'embed', 'start': 1756.931, 'weight': 2, 'content': [{'end': 1761.915, 'text': "In fact, depending on your system, and since we're using Keras, I put an overlap here,", 'start': 1756.931, 'duration': 4.984}, {'end': 1767.22, 'text': "but you'll find that almost maybe even half of the code we do is all about the data prep.", 'start': 1761.915, 'duration': 5.305}, {'end': 1771.749, 'text': 'And the reason I overlap this with Keras, let me just put that down,', 'start': 1767.728, 'duration': 4.021}, {'end': 1776.21, 'text': "because that's what we're working in, is because Keras has its own preset stuff.", 'start': 1771.749, 'duration': 4.461}, {'end': 1778.391, 'text': "So it's already pre-built in, which is really nice.", 'start': 1776.23, 'duration': 2.161}, {'end': 1781.711, 'text': "So there's a couple of steps a lot of times that are in the Keras setup.", 'start': 1778.531, 'duration': 3.18}, {'end': 1786.092, 'text': "We'll take a look at that to see what comes up in our code as we go through and look at stock.", 'start': 1782.191, 'duration': 3.901}], 
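The import-the-training-set step described here can be sketched without pandas. The rows below are made-up values laid out in the same columns (Date, Open, High, Low, Close, Volume) as the Google stock price training CSV the video loads; only the column layout is taken from the video:

```python
import csv
import io

# Made-up rows in the same column layout as the training CSV the video loads
# (Date, Open, High, Low, Close, Volume); the values are illustrative only.
raw = io.StringIO(
    "Date,Open,High,Low,Close,Volume\n"
    "1/3/2012,325.25,332.83,324.97,663.59,7380500\n"
    "1/4/2012,331.27,333.87,329.08,666.45,5749400\n"
    "1/5/2012,329.83,330.75,326.89,657.21,6590300\n"
)

# The tutorial keeps only the 'Open' column as the training signal.
training_set = [float(row["Open"]) for row in csv.DictReader(raw)]
print(training_set)  # [325.25, 331.27, 329.83]
```

In the video's pandas version the same step is a `read_csv` followed by slicing out the Open column; the point is simply that data prep starts by reducing the file to one numeric series.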
'summary': 'Data preparation is a significant part of Keras usage, with almost half of the code dedicated to it.', 'duration': 29.161, 'max_score': 1756.931, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc1756931.jpg'}, {'end': 1936.506, 'src': 'embed', 'start': 1895.605, 'weight': 3, 'content': [{'end': 1899.327, 'text': 'We have a date, open, high, low, close, volume.', 'start': 1895.605, 'duration': 3.722}, {'end': 1902.189, 'text': 'This is the standard stuff that we import into our stock.', 'start': 1899.447, 'duration': 2.742}, {'end': 1904.971, 'text': 'One of the most basic sets of information you can look at in stock.', 'start': 1902.269, 'duration': 2.702}, {'end': 1906.271, 'text': "It's all free to download.", 'start': 1905.171, 'duration': 1.1}, {'end': 1908.873, 'text': 'In this case, we downloaded it from Google.', 'start': 1906.652, 'duration': 2.221}, {'end': 1910.613, 'text': "That's why we call it the Google stock price.", 'start': 1908.953, 'duration': 1.66}, {'end': 1912.934, 'text': 'And it specifically is Google.', 'start': 1911.053, 'duration': 1.881}, {'end': 1918.937, 'text': 'These are the Google stock values from, as you can see here, we started off at 1-3-2012.', 'start': 1913.134, 'duration': 5.803}, {'end': 1926.98, 'text': 'So when we look at this first setup up here, we have dataset_train equals pd.read_csv.', 'start': 1919.957, 'duration': 7.023}, {'end': 1931.502, 'text': 'And if you noticed on the original frame, let me just go back there.', 'start': 1927.28, 'duration': 4.222}, {'end': 1933.944, 'text': 'they had it set to home.', 'start': 1932.362, 'duration': 1.582}, {'end': 1936.506, 'text': 'Ubuntu downloads Google stock price train.', 'start': 1933.944, 'duration': 2.562}], 'summary': 'The transcript discusses the process of importing and analyzing Google stock price data, which is freely downloadable and begins from 1-3-2012.', 'duration': 40.901, 'max_score': 
1895.605, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc1895605.jpg'}, {'end': 2056.063, 'src': 'embed', 'start': 2022.125, 'weight': 5, 'content': [{'end': 2024.486, 'text': 'We want to do what they call feature scaling.', 'start': 2022.125, 'duration': 2.361}, {'end': 2032.792, 'text': "And in here, from the sklearn (scikit-learn) preprocessing module, we're going to import MinMaxScaler.", 'start': 2024.927, 'duration': 7.865}, {'end': 2038.496, 'text': 'And when you look at this, you got to remember that there are biases in our data, and we want to get rid of those.', 'start': 2033.212, 'duration': 5.284}, {'end': 2042.719, 'text': "So if you have something that's like a really high value, let's just draw a quick graph.", 'start': 2038.636, 'duration': 4.083}, {'end': 2046.342, 'text': 'And I have something here like the, maybe the stock has a value.', 'start': 2043.54, 'duration': 2.802}, {'end': 2047.923, 'text': 'One stock has a value of a hundred.', 'start': 2046.462, 'duration': 1.461}, {'end': 2056.063, 'text': 'another stock has a value of five, and you start to get a bias between different stocks.', 'start': 2048.656, 'duration': 7.407}], 'summary': "Implement feature scaling using sklearn's MinMaxScaler to remove biases in data.", 'duration': 33.938, 'max_score': 2022.125, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc2022125.jpg'}, {'end': 2419.026, 'src': 'embed', 'start': 2393.717, 'weight': 6, 'content': [{'end': 2398.54, 'text': 'This is where np.reshape comes in, using the existing shapes to form it.', 'start': 2393.717, 'duration': 4.823}, {'end': 2400.762, 'text': "We'll go ahead and run this piece of code.", 'start': 2398.86, 'duration': 1.902}, {'end': 2402.223, 'text': "Again, there's no real output.", 'start': 2400.782, 'duration': 1.441}, {'end': 2406.428, 'text': "And then we'll import our different Keras modules that we need.", 'start': 2402.742, 
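The min-max scaling being described maps every value linearly into the range 0 to 1, which removes the scale bias between a 100-valued and a 5-valued stock. A small sketch of the arithmetic behind sklearn's MinMaxScaler; the helper function and sample prices are illustrative, not the video's code:

```python
import numpy as np

def min_max_scale(values, feature_range=(0.0, 1.0)):
    # Linear rescale: the minimum maps to 0 and the maximum to 1, mirroring
    # the arithmetic of sklearn's MinMaxScaler(feature_range=(0, 1)).
    values = np.asarray(values, dtype=float)
    lo, hi = feature_range
    return lo + (values - values.min()) * (hi - lo) / (values.max() - values.min())

# Two "stocks" on very different scales, as in the 100-versus-5 example.
prices = np.array([100.0, 5.0, 52.5, 25.0])
scaled = min_max_scale(prices)
print(scaled)  # maximum maps to 1.0, minimum to 0.0, midpoint to 0.5
```

After scaling, every series lives on the same 0-to-1 range, so no single high-priced stock dominates the loss during training.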
'duration': 3.686}, {'end': 2409.572, 'text': "So from Keras models, we're going to import the sequential model.", 'start': 2406.528, 'duration': 3.044}, {'end': 2410.834, 'text': "We're dealing with sequential data.", 'start': 2409.592, 'duration': 1.242}, {'end': 2412.596, 'text': 'We have our dense layers.', 'start': 2410.894, 'duration': 1.702}, {'end': 2414.419, 'text': "We have actually three layers we're going to bring in.", 'start': 2412.817, 'duration': 1.602}, {'end': 2419.026, 'text': "Our dense, our LSTM, which is what we're focusing on, and our dropout.", 'start': 2414.619, 'duration': 4.407}], 'summary': 'Using Keras modules to import the Sequential model with Dense, LSTM, and Dropout layers.', 'duration': 25.309, 'max_score': 2393.717, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc2393717.jpg'}, {'end': 2459.996, 'src': 'embed', 'start': 2432.436, 'weight': 7, 'content': [{'end': 2435.538, 'text': "And if you read it closer, it's not actually an error, it's a warning.", 'start': 2432.436, 'duration': 3.102}, {'end': 2436.799, 'text': 'What does this warning mean?', 'start': 2435.618, 'duration': 1.181}, {'end': 2442.423, 'text': "These things come up all the time when you're working with such cutting-edge modules that are completely being updated all the time.", 'start': 2436.879, 'duration': 5.544}, {'end': 2444.344, 'text': "We're not going to worry too much about the warning.", 'start': 2442.523, 'duration': 1.821}, {'end': 2450.969, 'text': "All it's saying is that the h5py module, which Keras depends on, is going to be updated at some point.", 'start': 2444.444, 'duration': 6.525}, {'end': 2459.996, 'text': "And if you're running new stuff on Keras and you start updating your Keras system, you better make sure that your h5py is updated too.", 'start': 2451.37, 'duration': 8.626}], 'summary': 'Warning about h5py module update in Keras, emphasizing the necessity to update for new Keras 
systems.', 'duration': 27.56, 'max_score': 2432.436, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc2432436.jpg'}], 'start': 1571.833, 'title': 'LSTM for stock price prediction and Google stock price analysis', 'summary': 'Discusses implementing LSTM for stock price prediction using Python, Anaconda, and Jupyter Notebook, emphasizing data pre-processing and evaluation, along with analyzing Google stock prices, including importing, scaling, preprocessing, and modeling data with Keras, LSTM, and dropout layers.', 'chapters': [{'end': 1875.41, 'start': 1571.833, 'title': 'Implementing LSTM for stock price prediction', 'summary': 'Discusses the implementation of LSTM for stock price prediction using Python, Anaconda, and Jupyter Notebook, emphasizing the importance of data pre-processing and evaluation, along with the use of Keras under TensorFlow for model building.', 'duration': 303.577, 'highlights': ['The chapter discusses the implementation of LSTM for stock price prediction using Python, Anaconda, and Jupyter Notebook. The tutorial focuses on implementing LSTM for stock price prediction using Python, Anaconda, and Jupyter Notebook.', 'Emphasizes the importance of data pre-processing and evaluation in the implementation of LSTM. The chapter highlights the significance of data pre-processing and evaluation in the LSTM implementation, indicating that almost half of the code is dedicated to data pre-processing, with evaluation being the next significant step.', "Utilizes Keras under TensorFlow for model building. 
The tutorial utilizes Keras under TensorFlow for model building, highlighting the advantages of Keras' preset features and robustness compared to other packages."]}, {'end': 2468.302, 'start': 1875.45, 'title': 'Analyzing Google stock price with Python', 'summary': 'Demonstrates the process of analyzing Google stock prices using Python, including importing, scaling, preprocessing, and modeling data with Keras, LSTM, and dropout layers.', 'duration': 592.852, 'highlights': ['The chapter explains the process of analyzing Google stock prices using Python, 
covering importing, scaling, and preprocessing the data, and modeling it with Keras, LSTM, and dropout layers.', 'The data includes stock values from Google, covering open, high, low, close, and volume, starting from 1-3-2012.', 'The process involves feature scaling to address biases in the data, using min-max scaling to bring values between 0 and 1.', 'The code involves reshaping the data into a suitable format for modeling and imports various Keras modules, including the Sequential model, Dense layers, LSTM, and Dropout.', 'The chapter also addresses a warning related to the h5py module, emphasizing the importance of updating it when working with the latest Keras systems.']}], 'duration': 896.469, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc1571833.jpg', 'highlights': ['The tutorial focuses on implementing LSTM for stock price prediction using Python, Anaconda, and Jupyter Notebook.', 'The chapter highlights the significance of data pre-processing and evaluation in the LSTM implementation, indicating that almost half of the code is dedicated to data pre-processing, with evaluation being the next significant step.', "The tutorial utilizes Keras under TensorFlow for model building, highlighting the advantages of Keras' preset features and robustness compared to other packages.", 'The chapter explains the process of analyzing Google stock prices using Python, covering importing, scaling, and preprocessing the data, and modeling it with Keras, LSTM, and dropout layers.', 'The data includes stock values from Google, covering open, high, low, close, and volume, starting from 1-3-2012.', 'The process involves feature scaling to address biases in the data, using min-max scaling to bring values between 0 and 1.', 'The code involves reshaping the data into a suitable format for modeling and imports various Keras modules, including the Sequential model, Dense layers, LSTM, and Dropout.', 'The chapter also addresses a warning related to the h5py module, emphasizing the importance of updating it when working with the latest Keras systems.']}, {'end': 2906.126, 'segs': [{'end': 2572.199, 'src': 'embed', 'start': 2547.143, 'weight': 2, 'content': [{'end': 2552.587, 'text': "The units value is a positive integer, and it's the dimensionality of the output space.", 'start': 2547.143, 'duration': 5.444}, {'end': 2554.688, 'text': "This is what's going out into the next layer.", 'start': 2552.707, 'duration': 1.981}, {'end': 2558.471, 'text': 'So we might have 60 coming in, but we have 50 going out.', 'start': 2554.908, 'duration': 3.563}, {'end': 2562.734, 'text': 'We have return_sequences set to true because it is sequence data.', 'start': 2558.851, 'duration': 3.883}, {'end': 2564.555, 'text': "And then you have to tell it what shape it's in.", 'start': 2562.934, 'duration': 1.621}, {'end': 2569.077, 'text': 'Well, we already know the shape by just going in here and looking at X train shape.', 'start': 2564.795, 'duration': 4.282}, {'end': 2572.199, 'text': 'So input_shape equals (X_train.shape[1], 1).', 'start': 2569.237, 'duration': 2.962}], 'summary': 'The output space has 50 dimensions, with input shape (X_train.shape[1], 1).', 'duration': 25.056, 'max_score': 2547.143, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc2547143.jpg'}, {'end': 2608.374, 'src': 'embed', 
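The shapes being discussed (60 timesteps in, 50 units out, input_shape taken from X_train) come from windowing the scaled series and reshaping it to the 3-D (samples, timesteps, features) layout Keras LSTM layers expect. A sketch with a made-up series and a window of 3 instead of the video's 60:

```python
import numpy as np

def make_windows(series, window):
    # Each X row is `window` past values; y is the value that follows,
    # as in the tutorial's 60-step look-back loop.
    X, y = [], []
    for i in range(window, len(series)):
        X.append(series[i - window:i])
        y.append(series[i])
    return np.array(X), np.array(y)

series = np.linspace(0.0, 1.0, 10)  # stand-in for the scaled training column
X_train, y_train = make_windows(series, window=3)

# Keras LSTM layers expect 3-D input: (samples, timesteps, features).
X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))
print(X_train.shape)  # (7, 3, 1)
print(y_train.shape)  # (7,)
```

The first LSTM layer in the video then receives this array, which is why its input_shape can be read straight off X_train's dimensions.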
'start': 2584.545, 'weight': 3, 'content': [{'end': 2591.208, 'text': 'Now, understanding the dropout layer is kind of exciting because one of the things that happens is we can overtrain our network.', 'start': 2584.545, 'duration': 6.663}, {'end': 2599.191, 'text': "That means that our neural network will memorize such specific data that it has trouble predicting anything that's not in that specific realm.", 'start': 2591.588, 'duration': 7.603}, {'end': 2608.374, 'text': "To fix that, each time we run through the training mode, we're going to take 0.2 or 20% of our neurons and just turn them off.", 'start': 2599.331, 'duration': 9.043}], 'summary': 'Dropout layer prevents overtraining by deactivating 20% of neurons during each training run.', 'duration': 23.829, 'max_score': 2584.545, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc2584545.jpg'}, {'end': 2642.674, 'src': 'embed', 'start': 2618.658, 'weight': 4, 'content': [{'end': 2624.842, 'text': 'And finally, you see a big difference as we go from the first layer to the second, third, and fourth.', 'start': 2618.658, 'duration': 6.184}, {'end': 2630.046, 'text': "The first thing is we don't have to input the shape, because the shape is already known; the output units here is 50.", 'start': 2625.082, 'duration': 4.964}, {'end': 2634.689, 'text': 'The next layer automatically knows this layer is putting out 50.', 'start': 2630.166, 'duration': 4.523}, {'end': 2639.532, 'text': "And because it's the next layer, it automatically sets that and says, oh, 50 is coming out from our last layer.", 'start': 2634.689, 'duration': 4.843}, {'end': 2642.674, 'text': "It comes out, goes into the regressor, and of course you have our dropout.", 'start': 2639.712, 'duration': 2.962}], 'summary': 'Later layers automatically pick up the 50 output units from the previous layer, eliminating the need for inputting the shape.', 'duration': 24.016, 'max_score': 2618.658, 'thumbnail': 
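The 20% switch-off described here is inverted dropout: during training a random mask zeroes a fraction of the activations and rescales the survivors so the expected value is unchanged, which is how Keras's Dropout(0.2) layer behaves at training time. A NumPy sketch; the layer size and seed are arbitrary illustration values:

```python
import numpy as np

def dropout_forward(activations, rate=0.2, seed=42):
    # Inverted dropout: zero out roughly `rate` of the units at random and
    # scale the survivors by 1/(1 - rate) so the expected activation is unchanged.
    rng = np.random.default_rng(seed)
    keep_mask = rng.random(activations.shape) >= rate
    return activations * keep_mask / (1.0 - rate), keep_mask

acts = np.ones(1000)  # a layer's activations, all 1.0 for easy checking
dropped, keep_mask = dropout_forward(acts, rate=0.2)
print(round(1.0 - keep_mask.mean(), 2))  # roughly 0.2 of the neurons are off
```

Because a different random 20% is silenced on every pass, no single neuron can memorize a specific training pattern on its own, which is what counters the overtraining the video describes.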
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc2618658.jpg'}, {'end': 2781.683, 'src': 'embed', 'start': 2755.846, 'weight': 1, 'content': [{'end': 2761.89, 'text': "And if you've been looking at any of our other tutorials on neural networks, you'll see we're going to use the Adam optimizer.", 'start': 2755.846, 'duration': 6.044}, {'end': 2764.232, 'text': 'Adam is optimized for big data.', 'start': 2762.19, 'duration': 2.042}, {'end': 2769.975, 'text': "There's a couple of other optimizers out there beyond the scope of this tutorial, but certainly Adam will work pretty well for this.", 'start': 2764.272, 'duration': 5.703}, {'end': 2772.797, 'text': 'And loss equals mean squared error.', 'start': 2770.235, 'duration': 2.562}, {'end': 2775.799, 'text': "So when we're training it, this is what we want to base the loss on.", 'start': 2773.037, 'duration': 2.762}, {'end': 2781.683, 'text': "How bad is our error? Well, we're going to use the mean squared error for our error and the Adam optimizer for its gradient calculations.", 'start': 2775.859, 'duration': 5.824}], 'summary': 'Using the Adam optimizer for big data, with mean squared error.', 'duration': 25.837, 'max_score': 2755.846, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc2755846.jpg'}, {'end': 2896.056, 'src': 'embed', 'start': 2870.971, 'weight': 0, 'content': [{'end': 2876.237, 'text': "So let's go ahead and run this and this will actually take a little bit on my computer, because it's an older laptop,", 'start': 2870.971, 'duration': 5.266}, {'end': 2877.738, 'text': 'and give it a second to kick in there.', 'start': 2876.237, 'duration': 1.501}, {'end': 2878.399, 'text': 'There we go.', 'start': 2877.938, 'duration': 0.461}, {'end': 2879.68, 'text': 'All right, so we have the first epoch.', 'start': 2878.579, 'duration': 1.101}, {'end': 2885.887, 'text': "So this is going to tell me it's running the first run through 
all the data and as it's going through, it's batching them in 32 pieces.", 'start': 2879.981, 'duration': 5.906}, {'end': 2886.948, 'text': 'so 32 lines each time.', 'start': 2885.887, 'duration': 1.061}, {'end': 2890.571, 'text': "And there's 1198.", 'start': 2889.07, 'duration': 1.501}, {'end': 2892.533, 'text': "I think I said 1199 earlier, but it's 1198.", 'start': 2890.571, 'duration': 1.962}, {'end': 2893.654, 'text': 'I was off by one.', 'start': 2892.533, 'duration': 1.121}, {'end': 2896.056, 'text': 'And each one of these is 13 seconds.', 'start': 2893.814, 'duration': 2.242}], 'summary': 'Running epochs on an older laptop, processing 1198 samples in batches of 32, with each pass taking about 13 seconds.', 'duration': 25.085, 'max_score': 2870.971, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc2870971.jpg'}], 'start': 2468.562, 'title': 'RNN model building and training', 'summary': 'Discusses RNN model initialization, dropout regularization, building a sequential model with Keras, and compiling an RNN regressor with the Adam optimizer, mean squared error, and 32-line batching, taking 20-30 minutes to run on an older laptop at 0.9 gigahertz.', 'chapters': [{'end': 2618.658, 'start': 2468.562, 'title': 'RNN layer initialization and dropout regularization', 'summary': 'Discusses initializing the RNN with LSTM layers and dropout regularization, emphasizing the use of dropout to prevent overtraining by randomly deactivating 20% of neurons during each training cycle.', 'duration': 150.096, 'highlights': ['The dropout layer randomly deactivates 20% of neurons during training cycles to prevent overtraining and memorization of specific data.', 'The LSTM layer has 50 units for dimensionality of the output space, and it is used for sequence data with 
each training cycle.']}, {'end': 2736.566, 'start': 2618.658, 'title': 'Building a sequential model with Keras', 'summary': "Discusses the process of building a sequential model using Keras, outlining the automatic shape inference, the use of dense layers for output, and the impact of Keras's approach on model flexibility and output.", 'duration': 117.908, 'highlights': ["Keras's automatic shape inference in building sequential models eliminates the need to manually input the shape, making the process more efficient and user-friendly.", 'The use of dense layers in Keras condenses the output into a single result, streamlining the sequence into the final output.', "Keras's approach, although involving more steps in building the model, provides greater flexibility and impact on the output of the models."]}, {'end': 2906.126, 'start': 2736.566, 'title': 'Running and compiling the RNN model', 'summary': 'Covers compiling and fitting a full model with an RNN regressor using the Adam optimizer for big data, mean squared error for the loss, and batching with 32 lines, taking approximately 20 to 30 minutes to run on an older laptop at 0.9 gigahertz.', 'duration': 169.56, 'highlights': ['The chapter emphasizes the use of the Adam optimizer for big data in training the RNN model.', 'It highlights the utilization of mean squared error during training.', 'The chapter explains the batching process with 32 lines and the estimated runtime of 20 to 30 minutes on an older laptop at 0.9 gigahertz.']}], 'duration': 437.564, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc2468562.jpg', 'highlights': ['The chapter explains the batching process with 32 lines and the estimated runtime of 20 to 30 minutes on an older laptop at 0.9 gigahertz.', 'The chapter emphasizes the use of the Adam optimizer for big data in training the RNN model.', 'The LSTM layer has 50 units for dimensionality of the output space, and it is used for sequence data with 
return_sequences set to true.', 'The dropout layer randomly deactivates 20% of neurons during training cycles to prevent overtraining and memorization of specific data.', "Keras's automatic shape inference in building sequential models eliminates the need to manually input the shape, making the process more efficient and user-friendly."]}, {'end': 3558.227, 'segs': [{'end': 3279.335, 'src': 'embed', 'start': 3248.573, 'weight': 1, 'content': [{'end': 3250.555, 'text': "And you'll see it runs much quicker than the training.", 'start': 3248.573, 'duration': 1.982}, {'end': 3252.376, 'text': "That's what's so wonderful about these neural networks.", 'start': 3250.575, 'duration': 1.801}, {'end': 3259.281, 'text': 'Once you put them together, it takes just a second to run the same neural network that took us, what, a half hour to train? Add and plot the data.', 'start': 3252.456, 'duration': 6.825}, {'end': 3265.725, 'text': "We're going to plot what we think it's going to be, and we're going to plot it against the real data, what the Google stock actually did.", 'start': 3259.381, 'duration': 6.344}, {'end': 3267.827, 'text': "So let's go ahead and take a look at that in code.", 'start': 3266.006, 'duration': 1.821}, {'end': 3269.688, 'text': "And let's pull this code up.", 'start': 3268.187, 'duration': 1.501}, {'end': 3271.009, 'text': 'So we have our plt.', 'start': 3269.728, 'duration': 1.281}, {'end': 3275.372, 'text': "That's our, if you remember from the very beginning, let me just go back up to the top.", 'start': 3271.109, 'duration': 4.263}, {'end': 3279.335, 'text': 'We have our matplotlib.pyplot as plt.', 'start': 3275.672, 'duration': 3.663}], 'summary': 'Neural networks run faster, taking just a second to run what took 30 minutes to train.', 'duration': 30.762, 'max_score': 3248.573, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc3248573.jpg'}, {'end': 3334.85, 'src': 'embed', 'start': 
3304.965, 'weight': 2, 'content': [{'end': 3307.388, 'text': "So we're going to plot the real stock price.", 'start': 3304.965, 'duration': 2.423}, {'end': 3308.79, 'text': "That's what it actually is.", 'start': 3307.628, 'duration': 1.162}, {'end': 3310.732, 'text': "And we're going to give that color red.", 'start': 3308.81, 'duration': 1.922}, {'end': 3311.813, 'text': "So it's going to be a bright red.", 'start': 3310.752, 'duration': 1.061}, {'end': 3314.396, 'text': "We're going to label it real Google stock price.", 'start': 3311.833, 'duration': 2.563}, {'end': 3316.799, 'text': "And then we're going to do our predicted stock.", 'start': 3314.736, 'duration': 2.063}, {'end': 3319.921, 'text': "And we're going to do it in blue, and it's going to be labeled predicted.", 'start': 3317.159, 'duration': 2.762}, {'end': 3325.744, 'text': "And we'll give it a title, because it's always nice to give a title to your graph, especially if you're going to present this to somebody you know,", 'start': 3320.181, 'duration': 5.563}, {'end': 3327.545, 'text': 'to your shareholders in the office.', 'start': 3325.744, 'duration': 1.801}, {'end': 3331.468, 'text': "And the X label is going to be time, because it's a time series.", 'start': 3327.926, 'duration': 3.542}, {'end': 3334.85, 'text': "And we didn't actually put the actual date and times on here, but that's fine.", 'start': 3331.488, 'duration': 3.362}], 'summary': 'Plot real stock price in red and predicted stock in blue with labels and title.', 'duration': 29.885, 'max_score': 3304.965, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc3304965.jpg'}, {'end': 3411.771, 'src': 'embed', 'start': 3387.737, 'weight': 3, 'content': [{'end': 3394.719, 'text': "We have the Google price and the Google price has this little up, jump and then down, and you'll see that the actual Google,", 'start': 3387.737, 'duration': 6.982}, {'end': 3400.001, 'text': "instead of a turn 
down here, just didn't go up as high and didn't go down.", 'start': 3394.719, 'duration': 5.282}, {'end': 3406.306, 'text': 'So our prediction has the same pattern, but the overall value is pretty far off as far as stock.', 'start': 3400.041, 'duration': 6.265}, {'end': 3408.088, 'text': "But then again, we're only looking at one column.", 'start': 3406.426, 'duration': 1.662}, {'end': 3409.529, 'text': "We're only looking at the open price.", 'start': 3408.148, 'duration': 1.381}, {'end': 3411.771, 'text': "We're not looking at how much volume was traded.", 'start': 3409.629, 'duration': 2.142}], 'summary': 'Google stock price prediction is off, based on open price but not considering trading volume.', 'duration': 24.034, 'max_score': 3387.737, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc3387737.jpg'}, {'end': 3478.25, 'src': 'embed', 'start': 3449.992, 'weight': 0, 'content': [{'end': 3453.815, 'text': "So this bend here does not quite match up with that bend there, but it's pretty darn close.", 'start': 3449.992, 'duration': 3.823}, {'end': 3458.098, 'text': "We have the basic shape of it and the prediction isn't too far off.", 'start': 3454.015, 'duration': 4.083}, {'end': 3465.403, 'text': 'And you can imagine that as we add more data in and look at different aspects in the specific domain of stock,', 'start': 3458.358, 'duration': 7.045}, {'end': 3468.585, 'text': 'we should be able to get a better representation each time we drill in deeper.', 'start': 3465.403, 'duration': 3.182}, {'end': 3472.128, 'text': 'Of course, this took a half hour for my computer to train.', 'start': 3468.745, 'duration': 3.383}, {'end': 3478.25, 'text': 'So you can imagine that if I was running it across all those different variables, it might take a little bit longer to train the data.', 'start': 3472.388, 'duration': 5.862}], 'summary': 'Model prediction is close, but training took 30 minutes.', 
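The visual comparison between the predicted and real curves can be made quantitative with the same mean squared error the model is compiled with. A sketch using made-up scaled prices; the numbers are illustrative, not the video's:

```python
import numpy as np

def mean_squared_error(y_true, y_pred):
    # The loss the model is compiled with: the average squared difference.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true - y_pred) ** 2))

# Made-up scaled prices: the prediction tracks the real curve's shape
# but sits a little below it, like the open-price plot described here.
real      = [0.50, 0.55, 0.60, 0.58, 0.62]
predicted = [0.48, 0.52, 0.57, 0.56, 0.59]
print(round(mean_squared_error(real, predicted), 4))  # 0.0007
```

A small MSE with a consistent offset is exactly the situation described: the shape matches while the absolute level is off, which is why the quoted training loss (0.0014 on scaled data) can coexist with a visibly shifted curve.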
'duration': 28.258, 'max_score': 3449.992, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc3449992.jpg'}, {'end': 3532.925, 'src': 'embed', 'start': 3504.064, 'weight': 4, 'content': [{'end': 3506.747, 'text': 'We discussed what is a recurrent neural network.', 'start': 3504.064, 'duration': 2.683}, {'end': 3513.272, 'text': "We went into one of the big problems with the RNN, and that's the exploding gradient problem.", 'start': 3507.087, 'duration': 6.185}, {'end': 3518.316, 'text': "We also discussed the long short-term memory networks, which is part of what we're working on.", 'start': 3513.472, 'duration': 4.844}, {'end': 3521.038, 'text': "That's the LSTM, which is part of the RNN setup.", 'start': 3518.336, 'duration': 2.702}, {'end': 3525.1, 'text': 'And finally, we went through and we predicted the Google stock to see how it did.', 'start': 3521.298, 'duration': 3.802}, {'end': 3526.981, 'text': 'So I want to thank you for joining us today.', 'start': 3525.12, 'duration': 1.861}, {'end': 3529.863, 'text': 'Again, my name is Richard Kirshner, one of the Simplilearn team.', 'start': 3527.142, 'duration': 2.721}, {'end': 3532.925, 'text': "That's www.simplilearn.com.", 'start': 3529.923, 'duration': 3.002}], 'summary': 'Discussed RNN, highlighted the exploding gradient problem, covered LSTM networks, and predicted Google stock performance.', 'duration': 28.861, 'max_score': 3504.064, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc3504064.jpg'}], 'start': 2906.146, 'title': 'Stock price prediction using neural networks', 'summary': 'Delves into training a neural network model to predict stock prices, achieving a loss of 0.0014, visualizing and comparing predicted and real stock prices, discussing challenges and potential of using recurrent neural networks for stock prediction, including the use of long short-term memory networks, and its applications 
across different domains.', 'chapters': [{'end': 3304.785, 'start': 2906.146, 'title': 'Neural network stock price prediction', 'summary': 'Details the process of training a neural network model to predict stock prices, achieving a loss of 0.0014, visualizing the results, and plotting the predicted stock prices against the real data, with prediction running much quicker than the training process.', 'duration': 398.639, 'highlights': ['The model achieved a loss of 0.0014, showing that the prediction was close to the actual stock prices.', 'Making predictions and visualizing them ran much quicker than the initial training, showcasing the efficiency of the trained neural network.', 'The chapter details the process of training a neural network model to predict stock prices and visualizing the results.']}, {'end': 3558.227, 'start': 3304.965, 'title': 'Predicted vs real google stock price', 'summary': 'Discusses plotting the real and predicted stock prices of Google, compares their patterns, and highlights the challenges and potential of using recurrent neural networks (RNN) for stock prediction, including the exploding gradient problem and the use of long short-term memory networks (LSTM). 
It also mentions the time taken for training and its applications across different domains.', 'duration': 253.262, 'highlights': ['The chapter details the process of plotting the real Google stock price in red and the predicted stock price in blue, with labels and a title for the graph, emphasizing the importance of visualization for presentation purposes.', 'It discusses the limitations of the current prediction, noting that it only considers the open price of the stock, and acknowledges the potential for improvement by analyzing additional data and aspects within the stock domain.', 'The transcript addresses the cost of training, highlighting that it took a half hour for the computer to train, and mentions the potential time implications of training across more variables.', 'It introduces the recurrent neural network (RNN) and its setup, discusses the exploding gradient problem associated with RNN, and delves into the concept of long short-term memory networks (LSTM) as part of the RNN setup.', 'The chapter concludes by mentioning the prediction of Google stock using the RNN setup and emphasizes the potential applications of RNN in various domains, while also encouraging viewers to explore more courses and resources on the topic.']}], 'duration': 652.081, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/lWkFhVq9-nc/pics/lWkFhVq9-nc2906146.jpg', 'highlights': ['The model achieved a loss of 0.0014, showing that the prediction was close to the actual stock prices.', 'Making predictions with the trained neural network model ran much quicker than the initial training, showcasing the efficiency of the neural network.', 'The chapter details the process of plotting the real Google stock price in red and the predicted stock price in blue, with labels and a title for the graph, emphasizing the importance of visualization for presentation purposes.', 'It discusses the limitations of the current prediction, noting 
that it only considers the open price of the stock and acknowledges the potential for improvement by analyzing additional data and aspects within the stock domain.', 'It introduces the recurrent neural network (RNN) and its setup, discusses the exploding gradient problem associated with RNN, and delves into the concept of long short-term memory networks (LSTM) as part of the RNN setup.']}], 'highlights': ['The LSTM layer has 50 units for the dimensionality of the output space, and it is used for sequence data with return_sequences set to True.', 'The chapter covers the fundamentals of RNN, including its framework, working principles, and the vanishing and exploding gradient problem.', 'Making predictions with the trained neural network model ran much quicker than the initial training, showcasing the efficiency of the neural network.', 'The model achieved a loss of 0.0014, showing that the prediction was close to the actual stock prices.', 'The chapter explains the batching process, with a batch size of 32, and the estimated runtime of 20 to 30 minutes on an older laptop at 0.9 gigahertz.', 'The chapter details the process of plotting the real Google stock price in red and the predicted stock price in blue, with labels and a title for the graph, emphasizing the importance of visualization for presentation purposes.', "The transcript explains how RNN is used to predict the next word in a sentence based on frequently occurring consecutive words, showcasing its application in Google's autocomplete feature.", 'The chapter emphasizes the use of the Adam optimizer for big data in training the RNN model.', 'The chapter focuses on implementing LSTM for stock price prediction using Python, Anaconda, and Jupyter notebook.', 'The chapter highlights the significance of data pre-processing and evaluation in the LSTM implementation, indicating that almost half of the code is dedicated to data pre-processing, with evaluation being the next significant step.']}
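The highlights above outline the tutorial's pipeline: min-max scale the open-price column, window it into input sequences, then train a Keras LSTM (50 units, return_sequences set to True) with the Adam optimizer and a batch size of 32. Since the highlights note that almost half of the code is data pre-processing, here is a minimal sketch of that half in plain Python. The 60-step lookback window and the helper names are illustrative assumptions, not the video's exact code:

```python
def min_max_scale(values):
    """Scale a list of prices into [0, 1], as is typically done before feeding an LSTM."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def make_windows(series, lookback=60):
    """Build (input window, next value) pairs: each sample is `lookback`
    consecutive scaled prices, and the target is the price that follows them."""
    X, y = [], []
    for i in range(lookback, len(series)):
        X.append(series[i - lookback:i])
        y.append(series[i])
    return X, y

prices = [float(p) for p in range(100, 200)]  # stand-in for the Open column of the stock CSV
scaled = min_max_scale(prices)
X, y = make_windows(scaled, lookback=60)
# 100 prices with a 60-step lookback yield 40 (window, target) training samples
```

In the full tutorial these windows would then be reshaped and fed into a Keras LSTM model compiled with the Adam optimizer and trained with a batch size of 32; that model-building step requires TensorFlow/Keras and is left out of this sketch.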