title
Stock Price Prediction And Forecasting Using Stacked LSTM- Deep Learning
description
A Machine Learning Model for Stock Market Prediction. Stock market prediction is the act of trying to determine the future value of a company's stock or other financial instrument traded on a financial exchange.
References: Jason Brownlee's Machine Learning Mastery blog
https://machinelearningmastery.com/time-series-prediction-with-deep-learning-in-python-with-keras/
GitHub link: https://github.com/krishnaik06/Stock-MArket-Forecasting
⭐ Kite is a free AI-powered coding assistant that will help you code faster and smarter. The Kite plugin integrates with all the top editors and IDEs to give you smart completions and documentation while you’re typing. I've been using Kite for a few months and I love it! https://www.kite.com/get-kite/?utm_medium=referral&utm_source=youtube&utm_campaign=krishnaik&utm_content=description-only
Please join as a member of my channel to get additional benefits like Data Science materials, live streams for members, and more
https://www.youtube.com/channel/UCNU_lfiiWBdtULKOw6X0Dig/join
Please do subscribe to my other channel too
https://www.youtube.com/channel/UCjWY5hREA6FFYrthD0rZNIw
Connect with me here:
Twitter: https://twitter.com/Krishnaik06
Facebook: https://www.facebook.com/krishnaik06
Instagram: https://www.instagram.com/krishnaik06
detail
{'title': 'Stock Price Prediction And Forecasting Using Stacked LSTM- Deep Learning', 'heatmap': [{'end': 197.937, 'start': 152.082, 'weight': 0.755}, {'end': 397.798, 'start': 325.805, 'weight': 0.741}, {'end': 1056.105, 'start': 1030.502, 'weight': 0.82}, {'end': 1383.11, 'start': 1271.063, 'weight': 0.717}, {'end': 1585.782, 'start': 1550.341, 'weight': 0.843}, {'end': 1886.481, 'start': 1838.587, 'weight': 0.886}], 'summary': 'Covers stock market prediction and forecasting using stacked lstm, demonstrating data collection, preprocessing, model creation, and predicting future 30 days with 71.6% accuracy, utilizing tingo api for 50 requests per day and achieving effective model performance and evaluation.', 'chapters': [{'end': 204.558, 'segs': [{'end': 59.484, 'src': 'embed', 'start': 34.346, 'weight': 2, 'content': [{'end': 43.209, 'text': "Now, one thing I really want to request you all, guys, is that please don't use this model for your personal investments.", 'start': 34.346, 'duration': 8.863}, {'end': 47.551, 'text': "You know, just don't think that this model will give you any kind of profit.", 'start': 43.269, 'duration': 4.282}, {'end': 51.377, 'text': 'because understand stock market here.', 'start': 48.554, 'duration': 2.823}, {'end': 55.28, 'text': 'I just tried it to do it by using stacked LSTM itself.', 'start': 51.377, 'duration': 3.903}, {'end': 58.303, 'text': 'I just wanted to show you how we can actually do it.', 'start': 55.28, 'duration': 3.023}, {'end': 59.484, 'text': "but just don't do.", 'start': 58.303, 'duration': 1.181}], 'summary': 'Caution against using the model for personal investments, as it was created for demonstration purposes only.', 'duration': 25.138, 'max_score': 34.346, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE34346.jpg'}, {'end': 128.896, 'src': 'embed', 'start': 104.192, 'weight': 0, 'content': [{'end': 111.924, 'text': "Probably for this, we need to have some kind of data, right? 
And I'll just show you all the steps that we will be performing.", 'start': 104.192, 'duration': 7.732}, {'end': 114.125, 'text': "first of all, we'll collect the stock data.", 'start': 112.404, 'duration': 1.721}, {'end': 118.889, 'text': "in this particular example, i'm taking the apple data, say apple company stock prices.", 'start': 114.125, 'duration': 4.764}, {'end': 122.932, 'text': 'you know, from 2015 till today or till 25th.', 'start': 118.889, 'duration': 4.043}, {'end': 125.274, 'text': "i'll try to take that particular data.", 'start': 122.932, 'duration': 2.342}, {'end': 128.896, 'text': "then we'll pre-process the data, that is, divide that particular data into train test.", 'start': 125.274, 'duration': 3.622}], 'summary': 'Process stock data of apple from 2015 to 25th for training and testing.', 'duration': 24.704, 'max_score': 104.192, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE104192.jpg'}, {'end': 197.937, 'src': 'heatmap', 'start': 152.082, 'weight': 0.755, 'content': [{'end': 153.923, 'text': 'So here is your data collection strategy, guys.', 'start': 152.082, 'duration': 1.841}, {'end': 158.885, 'text': "Here, I'm going to use a wonderful library which is called as Pandas underscore data reader.", 'start': 154.223, 'duration': 4.662}, {'end': 163.226, 'text': 'If you really want to see what in this particular documentation you can go.', 'start': 159.385, 'duration': 3.841}, {'end': 169.669, 'text': 'regarding Pandas data reader, it provides a lot of libraries where you can actually access this kind of data.', 'start': 163.226, 'duration': 6.443}, {'end': 171.81, 'text': "So currently I've selected Tingo.", 'start': 169.969, 'duration': 1.841}, {'end': 175.772, 'text': "OK, in Tingo, you can see over here, it's pretty much simple.", 'start': 172.29, 'duration': 3.482}, {'end': 177.652, 'text': 'Just go into the Tingo site.', 'start': 176.212, 'duration': 1.44}, {'end': 181.574, 'text': 'OK, just log in, log in inside this or sign up.', 'start': 178.072, 'duration': 3.502}, {'end': 183.734, 'text': 'And then you click on the API part.', 'start': 182.054, 'duration': 1.68}, {'end': 187.615, 'text': "In API, you'll be able to find the key.", 'start': 184.494, 'duration': 3.121}, {'end': 193.816, 'text': "If you go down in this particular section, in the authentication section, you'll be able to find your own API key.", 'start': 187.875, 'duration': 5.941}, {'end': 197.937, 'text': "I will not share it with you guys because I'm going to use that for my personal purpose.", 'start': 194.356, 'duration': 3.581}], 'summary': 'Using pandas data reader for data collection, accessing tingo data, and obtaining an api key for authentication.', 'duration': 45.855, 'max_score': 152.082, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE152082.jpg'}, {'end': 187.615, 'src': 'embed', 'start': 159.385, 'weight': 1, 'content': [{'end': 163.226, 'text': 'If you really want to see what in this particular documentation you can go.', 'start': 159.385, 'duration': 3.841}, {'end': 169.669, 'text': 'regarding Pandas data reader, it provides a lot of libraries where you can actually access this kind of data.', 'start': 163.226, 'duration': 6.443}, {'end': 171.81, 'text': "So currently I've selected Tingo.", 'start': 169.969, 'duration': 1.841}, {'end': 175.772, 'text': "OK, in Tingo, you can see over here, it's pretty much simple.", 'start': 172.29, 'duration': 3.482}, {'end': 
177.652, 'text': 'Just go into the Tingo site.', 'start': 176.212, 'duration': 1.44}, {'end': 181.574, 'text': 'OK, just log in, log in inside this or sign up.', 'start': 178.072, 'duration': 3.502}, {'end': 183.734, 'text': 'And then you click on the API part.', 'start': 182.054, 'duration': 1.68}, {'end': 187.615, 'text': "In API, you'll be able to find the key.", 'start': 184.494, 'duration': 3.121}], 'summary': 'Pandas data reader offers access to various libraries, with tingo providing a simple api for key retrieval.', 'duration': 28.23, 'max_score': 159.385, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE159385.jpg'}], 'start': 14.431, 'title': 'Stock market prediction using stacked lstm', 'summary': 'Discusses stock market prediction and forecasting using stacked lstm, emphasizing not to use the model for personal investments, and explaining the process of collecting apple stock data from 2015 till date, preprocessing the data, creating a model, and predicting the future 30 days using pandas data reader and tingo api, which allows 50 requests per day.', 'chapters': [{'end': 103.571, 'start': 14.431, 'title': 'Stock market prediction using stacked lstm', 'summary': 'Discusses stock market prediction and forecasting using stacked lstm, emphasizing not to use the model for personal investments and explaining the process of data collection.', 'duration': 89.14, 'highlights': ['The video discusses stock market prediction and forecasting using stacked LSTM, emphasizing not to use the model for personal investments.', 'The presenter requests viewers not to use the model for personal investments due to the unpredictable nature of the stock market.', 'The chapter emphasizes the importance of not using the model for personal investments to avoid potential losses.', 'The first step discussed is the data collection process for stock market prediction and forecasting using stacked LSTM.']}, {'end': 204.558, 'start': 104.192, 'title': 'Stock data collection and prediction', 'summary': 'Discusses the process of collecting apple stock data from 2015 till date, preprocessing the data, creating a stacked lstm model, and predicting the future 30 days using pandas data reader and tingo api, which allows 50 requests per day.', 'duration': 100.366, 'highlights': ['The process involves collecting Apple stock data from 2015 till date, preprocessing the data, creating a stacked LSTM model, and predicting the future 30 days. Apple stock data from 2015 till date', 'Pandas data reader and Tingo API allow 50 requests per day for accessing stock data. 50 requests per day', 'The Tingo API provides access to a variety of data libraries for accessing stock data. Access to various data libraries']}], 'duration': 190.127, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE14431.jpg', 'highlights': ['The process involves collecting Apple stock data from 2015 till date, preprocessing the data, creating a stacked LSTM model, and predicting the future 30 days. Apple stock data from 2015 till date', 'Pandas data reader and Tingo API allow 50 requests per day for accessing stock data. 
50 requests per day', 'The video discusses stock market prediction and forecasting using stacked LSTM, emphasizing not to use the model for personal investments.', 'The presenter requests viewers not to use the model for personal investments due to the unpredictable nature of the stock market.']}, {'end': 404.908, 'segs': [{'end': 235.245, 'src': 'embed', 'start': 204.638, 'weight': 0, 'content': [{'end': 208.419, 'text': 'So it is a wonderful way that you can actually collect any amount of data that you like.', 'start': 204.638, 'duration': 3.781}, {'end': 214.394, 'text': "now i'm going to import pandas data reader as pdr.", 'start': 209.269, 'duration': 5.125}, {'end': 220.159, 'text': 'then, guys, and this pandas underscore data reader will be already present in your anaconda environment.', 'start': 214.394, 'duration': 5.765}, {'end': 223.923, 'text': 'then you have a method which is called as get underscore data underscore.', 'start': 220.159, 'duration': 3.764}, {'end': 226.665, 'text': 'dingo, just give the stock price.', 'start': 223.923, 'duration': 2.742}, {'end': 228.567, 'text': 'uh, you know this particular name.', 'start': 226.665, 'duration': 1.902}, {'end': 230.269, 'text': 'that is a apl for google.', 'start': 228.567, 'duration': 1.702}, {'end': 235.245, 'text': 'you can write gog And another parameter is basically your API underscore key.', 'start': 230.269, 'duration': 4.976}], 'summary': 'Using pandas data reader, collect any amount of data and get stock prices with an api key.', 'duration': 30.607, 'max_score': 204.638, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE204638.jpg'}, {'end': 291.859, 'src': 'embed', 'start': 251.078, 'weight': 1, 'content': [{'end': 254.982, 'text': 'csv As I told you that we just have 50 requests that we can go ahead and hit it.', 'start': 251.078, 'duration': 3.904}, {'end': 262.658, 'text': 'But after this, you can again re-upload your dataset from this apple.csv file which you have actually saved.', 'start': 255.512, 'duration': 7.146}, {'end': 269.063, 'text': "So below this particular cell, what I'm going to do is that I'm just going to write df.head and show you how the dataset looks like.", 'start': 263.098, 'duration': 5.965}, {'end': 275.689, 'text': "If you go and see the data.head, df.head, you'll be able to see a lot of features like date, close, high.", 'start': 269.083, 'duration': 6.606}, {'end': 282.273, 'text': 'low, open, volume, adjusted close, adjusted close, adjusted high, adjusted low, and many of them.', 'start': 276.349, 'duration': 5.924}, {'end': 284.594, 'text': 'You can take one of them, either close or high.', 'start': 282.333, 'duration': 2.261}, {'end': 286.876, 'text': "In this particular example, I've just taken high.", 'start': 285.075, 'duration': 1.801}, {'end': 291.859, 'text': 'If you want to see from what date it is, we are having the date from 2015.', 'start': 287.356, 'duration': 4.503}], 'summary': 'Dataset with 50 requests from apple.csv file, with features like date, close, high, low, open, volume, adjusted close, adjusted high, adjusted low. 
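For the data-collection step just described, here is a minimal sketch. Note that the service is spelled Tiingo and the pandas-datareader call is `get_data_tiingo` (the transcript's "get underscore data underscore dingo" is a mis-transcription), the tickers are AAPL for Apple and GOOG for Google, and `YOUR_API_KEY` is a placeholder for the key from the authentication section of Tiingo's API page:

```python
# Minimal sketch of the collection step, assuming a Tiingo account.
import pandas_datareader as pdr
import pandas as pd

key = "YOUR_API_KEY"                           # placeholder -- never commit a real key
df = pdr.get_data_tiingo("AAPL", api_key=key)  # Apple daily prices since 2015
df.to_csv("apple.csv")                         # cache locally: the free tier is rate-limited

df = pd.read_csv("apple.csv")
print(df.head())  # date, close, high, low, open, volume, adjusted columns, ...
```

Caching to CSV matters here because, as noted above, the free Tiingo tier only allows a limited number of requests per day (50 in the video), so you re-load from apple.csv instead of re-hitting the API.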
data available from 2015.', 'duration': 40.781, 'max_score': 251.078, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE251078.jpg'}, {'end': 411.716, 'src': 'heatmap', 'start': 325.805, 'weight': 2, 'content': [{'end': 330.247, 'text': 'you can also be able to do for other by using the same strategy and method.', 'start': 325.805, 'duration': 4.442}, {'end': 334.678, 'text': "Okay, so let's go ahead and try to quickly show you how we can actually do this.", 'start': 330.717, 'duration': 3.961}, {'end': 337.639, 'text': "Till then, I'll just delete this cell because we don't require it.", 'start': 334.958, 'duration': 2.681}, {'end': 346.281, 'text': "Okay, now what I'm going to do is that first of all, I just want to pick up this close column, right? So for this, I'm writing df.resetIndex.", 'start': 338.239, 'duration': 8.042}, {'end': 350.722, 'text': 'If I do resetIndex, that basically means all this particular value will be going inside this.', 'start': 346.461, 'duration': 4.261}, {'end': 353.208, 'text': "And then I'm just going to take the close column.", 'start': 351.226, 'duration': 1.982}, {'end': 357.531, 'text': 'Okay So by this, I will be able to get all my close values in DF1.', 'start': 353.508, 'duration': 4.023}, {'end': 361.574, 'text': 'So if I go and see the shape, it is having 1, 2, 5, 8 records.', 'start': 357.871, 'duration': 3.703}, {'end': 367.059, 'text': "Okay So if you want to really see the DF1, I'm having these values like this.", 'start': 362.055, 'duration': 5.004}, {'end': 371.141, 'text': 'You can see that it is increasing as we go ahead till 1, 2, 5, 7.', 'start': 367.119, 'duration': 4.022}, {'end': 379.128, 'text': 'Right If you really want to plot this particular data frame also, you can use matplotlib.pyplot as PLT and just use PLT.plot DF1.', 'start': 371.142, 'duration': 7.986}, {'end': 389.155, 'text': 'So here you can see that this is how your plotted stock price looks like from 2015 till 2020.', 'start': 380.472, 'duration': 8.683}, {'end': 392.176, 'text': 'So this is how your stock price is having the movement.', 'start': 389.155, 'duration': 3.021}, {'end': 394.137, 'text': 'It has both up and down movement.', 'start': 392.556, 'duration': 1.581}, {'end': 397.798, 'text': "So let's see how we can actually solve this particular problem.", 'start': 394.637, 'duration': 3.161}, {'end': 404.908, 'text': 'now, one thing that you need to note is, guys, lstm are very, very much sensitive to the scale of the data.', 'start': 398.88, 'duration': 6.028}, {'end': 411.716, 'text': 'now, in this particular data, this value is in some kind of scale, so we should always try to transform this particular value.', 'start': 404.908, 'duration': 6.808}], 'summary': 'Using df.resetindex, transformed data for 1258 records, plotted stock price from 2015 to 2020, and highlighted the sensitivity of lstm to data scale.', 'duration': 85.911, 'max_score': 325.805, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE325805.jpg'}], 'start': 204.638, 'title': 'Stock price data collection and prediction', 'summary': 'Explains collecting stock price data using pandas data reader, specifying the stock name and api key, and visualizing stock price data from 2015 to 2020 for apple, resulting in a dataframe with 1258 records and plotting the stock price movement.', 'chapters': [{'end': 269.063, 'start': 204.638, 'title': 'Collecting stock price data using pandas data 
reader', 'summary': 'Explains how to use pandas data reader to collect stock price data, including mentioning the method for retrieving data, specifying the stock name and api key, saving the data in a csv file, and displaying the dataset using df.head.', 'duration': 64.425, 'highlights': ['Using pandas data reader to collect stock price data by specifying the stock name and API key, and saving the data in a CSV file.', "Mentioning the method 'get_data_dingo' to retrieve stock price data.", 'Demonstrating the display of the dataset using df.head to show the first few rows of the data.']}, {'end': 404.908, 'start': 269.083, 'title': 'Stock price prediction from 2015-2020', 'summary': "Demonstrates extracting and visualizing stock price data from 2015 to 2020 for apple, focusing on the 'high' column, resulting in a dataframe with 1258 records, and plotting the stock price movement using matplotlib.pyplot.", 'duration': 135.825, 'highlights': ["The chapter demonstrates extracting and visualizing stock price data from 2015 to 2020 for Apple, focusing on the 'high' column, resulting in a dataframe with 1258 records. The data is extracted and the 'high' column is focused on, resulting in a dataframe with 1258 records. This provides a quantifiable measure of the dataset's size.", 'Plotting the stock price movement using matplotlib.pyplot. The stock price movement from 2015 to 2020 is visualized using matplotlib.pyplot, offering a clear representation of the data for analysis and prediction.']}], 'duration': 200.27, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE204638.jpg', 'highlights': ['Using pandas data reader to collect stock price data by specifying the stock name and API key, and saving the data in a CSV file.', "The chapter demonstrates extracting and visualizing stock price data from 2015 to 2020 for Apple, focusing on the 'high' column, resulting in a dataframe with 1258 records.", 'Plotting the stock price movement using matplotlib.pyplot. 
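A short sketch of the column-extraction and plotting step just described, assuming the apple.csv cache from the previous snippet:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("apple.csv")
df1 = df.reset_index()["close"]  # keep just the closing price as a Series
print(df1.shape)                 # (1258,) in the video's run, 2015 through 2020

plt.plot(df1)                    # the stock's up-and-down movement over the period
plt.show()
```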
The stock price movement from 2015 to 2020 is visualized using matplotlib.pyplot, offering a clear representation of the data for analysis and prediction.', "Mentioning the method 'get_data_dingo' to retrieve stock price data.", 'Demonstrating the display of the dataset using df.head to show the first few rows of the data.']}, {'end': 886.814, 'segs': [{'end': 428.733, 'src': 'embed', 'start': 404.908, 'weight': 0, 'content': [{'end': 411.716, 'text': 'now, in this particular data, this value is in some kind of scale, so we should always try to transform this particular value.', 'start': 404.908, 'duration': 6.808}, {'end': 417.95, 'text': "in this particular example, we are going to take min max scalar, where we'll be transforming our values between 0 to 1..", 'start': 411.716, 'duration': 6.234}, {'end': 423.372, 'text': "So for this, I'm going to import NumPy and I'm going to say from a scaleon.preprocessing import minmaxscaler.", 'start': 417.95, 'duration': 5.422}, {'end': 428.733, 'text': "In the minmaxscaler, I'm going to take a feature range between 0, 1.", 'start': 423.912, 'duration': 4.821}], 'summary': 'Transform data values to a 0-1 scale using min-max scaler.', 'duration': 23.825, 'max_score': 404.908, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE404908.jpg'}, {'end': 642.563, 'src': 'embed', 'start': 610.466, 'weight': 1, 'content': [{'end': 613.427, 'text': 'okay, so this will actually be my test data set.', 'start': 610.466, 'duration': 2.961}, {'end': 621.192, 'text': "now i'll tell you why we are doing like this, guys, because after doing this division, okay, we will try to do the data pre-processing,", 'start': 613.427, 'duration': 7.765}, {'end': 626.535, 'text': 'but always understand that whenever you have any kind of sequence of data or a time series data,', 'start': 621.192, 'duration': 5.343}, {'end': 629.917, 'text': 'the next data is always dependent on your previous data.', 'start': 627.035, 'duration': 2.882}, {'end': 633.358, 'text': 'okay, so you should always try to do this division in this particular way.', 'start': 629.917, 'duration': 3.441}, {'end': 635.5, 'text': "so let's go and do this particular step.", 'start': 633.358, 'duration': 2.142}, {'end': 642.563, 'text': "so over here you can see that I have taken an example after I'm telling that the splitting the data set into train and test split.", 'start': 635.5, 'duration': 7.063}], 'summary': 'Data set split for time series analysis to ensure data independence.', 'duration': 32.097, 'max_score': 610.466, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE610466.jpg'}, {'end': 865.165, 'src': 'embed', 'start': 833.942, 'weight': 3, 'content': [{'end': 839.071, 'text': 'the second value will basically be 130, 130.', 'start': 833.942, 'duration': 5.129}, {'end': 843.514, 'text': 'the third value will actually be, or the feature 3 will be 125.', 'start': 839.071, 'duration': 4.443}, {'end': 849.801, 'text': 'now, when I have this, then my output should actually be 140.', 'start': 843.514, 'duration': 6.287}, {'end': 851.842, 'text': 'Now, what we are doing in this data pre-processing, guys,', 'start': 849.801, 'duration': 2.041}, {'end': 856.463, 'text': 'we are just converting this into independent and dependent feature based on these timestamps.', 'start': 851.842, 'duration': 4.621}, {'end': 865.165, 'text': 'This basically indicates to calculate this, I need to consider the previous 
three days import, whatever input over here is there, previous three days.', 'start': 856.983, 'duration': 8.182}], 'summary': "Data pre-processing involves converting features and calculating outputs based on timestamps and previous three days' input.", 'duration': 31.223, 'max_score': 833.942, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE833942.jpg'}], 'start': 404.908, 'title': 'Scaling and pre-processing data', 'summary': 'Demonstrates scaling data with min-max scaler to transform values between 0 and 1, and discusses time series data pre-processing, emphasizing the importance of data division and feature calculation.', 'chapters': [{'end': 496.425, 'start': 404.908, 'title': 'Scaling data using min-max scaler', 'summary': 'Demonstrates the process of scaling data using min-max scaler, transforming values between 0 and 1, and converting a data array into values between 0 and 1.', 'duration': 91.517, 'highlights': ['The process involves using Min-Max Scaler to transform values between 0 and 1, or any specified range, illustrated through the example of transforming a data array into values between 0 and 1.', 'The step of fitting and transforming the data array is crucial in order to provide the input for Min-Max Scaler, resulting in the conversion of the array values between 0 and 1.', 'The initial data array, df1, contains values such as 132 and 131, which get converted into an array with values between 0 and 1 after the transformation process.']}, {'end': 886.814, 'start': 496.425, 'title': 'Time series data pre-processing', 'summary': 'Discusses the importance of time series data division for training and testing, emphasizing the dependence of the next data on the previous data, and then delves into the detailed process of data pre-processing and feature calculation.', 'duration': 390.389, 'highlights': ['Explaining the division of training and test data based on date for time series data, showcasing the significance of data dependence on previous values. The chapter emphasizes the need to divide time series data for training and testing based on date, highlighting the dependence of the next data on previous values for accurate modeling.', "Demonstrating the process of data pre-processing for time series data, specifically showcasing the calculation of independent and dependent features based on timestamps. The detailed process of data pre-processing involves calculating independent and dependent features based on timestamps, exemplifying the representation of previous days' data and its correlation with the next day's output.", "Illustrating the calculation of features for time series data, exemplifying the determination of previous day's values and their correlation with the next day's output. 
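The scaling and chronological split described above, as a minimal sketch. The import is from sklearn.preprocessing (the transcript's "scaleon.preprocessing" is a mis-transcription), and the 65/35 ratio is inferred from the record counts quoted later (817 training rows and 441 test rows out of 1258):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# LSTMs are sensitive to input scale, so squash everything into [0, 1].
scaler = MinMaxScaler(feature_range=(0, 1))
df1 = scaler.fit_transform(np.array(df1).reshape(-1, 1))  # shape (1258, 1)

# Chronological split: for time series the test window must come AFTER the
# training window, because each value depends on the values before it.
training_size = int(len(df1) * 0.65)  # 817 train / 441 test on 1258 rows
train_data, test_data = df1[:training_size], df1[training_size:]
```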
The chapter provides a clear example of calculating features for time series data, showcasing the determination of previous day's values and their correlation with the next day's output, emphasizing the importance of feature calculation in time series data pre-processing."]}], 'duration': 481.906, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE404908.jpg', 'highlights': ['The process involves using Min-Max Scaler to transform values between 0 and 1, or any specified range, illustrated through the example of transforming a data array into values between 0 and 1.', 'Explaining the division of training and test data based on date for time series data, showcasing the significance of data dependence on previous values.', 'The step of fitting and transforming the data array is crucial in order to provide the input for Min-Max Scaler, resulting in the conversion of the array values between 0 and 1.', 'Demonstrating the process of data pre-processing for time series data, specifically showcasing the calculation of independent and dependent features based on timestamps.']}, {'end': 1451.499, 'segs': [{'end': 997.553, 'src': 'embed', 'start': 969.291, 'weight': 5, 'content': [{'end': 971.112, 'text': "okay, now, similarly, what i'm going to do?", 'start': 969.291, 'duration': 1.821}, {'end': 973.252, 'text': "i'm going to write 160 over here.", 'start': 971.112, 'duration': 2.14}, {'end': 974.413, 'text': 'you will be able to see it.', 'start': 973.252, 'duration': 1.161}, {'end': 977.194, 'text': "i'll be writing 160.", 'start': 974.413, 'duration': 2.781}, {'end': 986.618, 'text': 'first my data pre-processing will have 160 over here, then 190, then 154, you know, and then my output will be 160 right.', 'start': 977.194, 'duration': 9.424}, {'end': 989.946, 'text': "similarly, Similarly, you can see that I've taken the first four elements.", 'start': 986.618, 'duration': 3.328}, {'end': 995.451, 'text': 'Now in the next element, again, what will happen? 
If I see over here, it will just go over here.', 'start': 990.026, 'duration': 5.425}, {'end': 997.553, 'text': 'It will take the next three elements, 190, 154, and 160.', 'start': 995.471, 'duration': 2.082}], 'summary': 'Data pre-processing involves 160, 190, and 154, with an output of 160.', 'duration': 28.262, 'max_score': 969.291, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE969291.jpg'}, {'end': 1070.013, 'src': 'heatmap', 'start': 1030.502, 'weight': 6, 'content': [{'end': 1038.148, 'text': 'here we actually take the data set, then under timestamp, in the timestamp, by default, uh, whatever values are giving, like 100,', 'start': 1030.502, 'duration': 7.646}, {'end': 1040.33, 'text': 'it will be taking 100 timestamp like this here.', 'start': 1038.148, 'duration': 2.182}, {'end': 1042.512, 'text': 'in this case it is three timestamp, right.', 'start': 1040.33, 'duration': 2.182}, {'end': 1046.296, 'text': 'so here it will be taking a hundred timestamp and remember, the more better.', 'start': 1042.512, 'duration': 3.784}, {'end': 1047.558, 'text': 'you have a timestamp value.', 'start': 1046.296, 'duration': 1.262}, {'end': 1052.562, 'text': 'this is again a hyperparameter tuning technique which we will be selecting those timestamp values.', 'start': 1047.558, 'duration': 5.004}, {'end': 1056.105, 'text': 'so inside this particular method, that is, create data set, what we are doing?', 'start': 1052.562, 'duration': 3.543}, {'end': 1057.707, 'text': 'first of all, we give our training data.', 'start': 1056.105, 'duration': 1.602}, {'end': 1064.37, 'text': 'similarly, how we got we divided the training and test data on the top right, like how we how we explained over here.', 'start': 1058.127, 'duration': 6.243}, {'end': 1070.013, 'text': 'now, for this train data, i am saying that you know, take the timestamp value as 100.', 'start': 1064.37, 'duration': 5.643}], 'summary': 'Data set timestamp value set to 100 for hyperparameter tuning in creating training data.', 'duration': 39.511, 'max_score': 1030.502, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1030502.jpg'}, {'end': 1263.727, 'src': 'embed', 'start': 1222.216, 'weight': 1, 'content': [{'end': 1227.258, 'text': 'In my Y-Train, I just have one value, and I have 716 records, so pretty much amazing.', 'start': 1222.216, 'duration': 5.042}, {'end': 1233.72, 'text': "Similarly, for X-Test, you can see I'm having 340 records and 100 features, which are same as compared to the X-Train.", 'start': 1227.278, 'duration': 6.442}, {'end': 1236.421, 'text': 'Now, we have got it in an amazing way.', 'start': 1234.26, 'duration': 2.161}, {'end': 1238.361, 'text': 'Our dataset pre-processing is done.', 'start': 1236.661, 'duration': 1.7}, {'end': 1241.603, 'text': 'We have completed basically this particular second step.', 'start': 1238.441, 'duration': 3.162}, {'end': 1244.964, 'text': 'Now, the next step is basically creating a stack LHTML model.', 'start': 1241.963, 'duration': 3.001}, {'end': 1249.836, 'text': 'before, uh, implementing any lstm.', 'start': 1246.934, 'duration': 2.902}, {'end': 1253.559, 'text': 'guys, we need to always reshape our x train.', 'start': 1249.836, 'duration': 3.723}, {'end': 1256.421, 'text': 'you know, reshape our x train into a three dimension.', 'start': 1253.559, 'duration': 2.862}, {'end': 1257.662, 'text': 'now, what are dimensions are there?', 'start': 1256.421, 'duration': 1.241}, 
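The windowing ("timestamp") logic described above, sketched out. With `time_step = 100` on the 817/441 split it reproduces the shapes quoted in the transcript, (716, 100) for X_train and (340, 100) for X_test:

```python
import numpy as np

def create_dataset(dataset, time_step=1):
    """Turn a (n, 1) series into samples of `time_step` inputs and 1 output."""
    X, y = [], []
    for i in range(len(dataset) - time_step - 1):
        X.append(dataset[i:i + time_step, 0])  # the previous `time_step` days
        y.append(dataset[i + time_step, 0])    # the day that follows them
    return np.array(X), np.array(y)

time_step = 100
X_train, y_train = create_dataset(train_data, time_step)  # (716, 100), (716,)
X_test, y_test = create_dataset(test_data, time_step)     # (340, 100), (340,)
```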
{'end': 1259.684, 'text': 'understand, external next test both.', 'start': 1257.662, 'duration': 2.022}, {'end': 1263.727, 'text': 'so you know that these are number of records, these are number of timestamps.', 'start': 1259.684, 'duration': 4.043}], 'summary': 'Pre-processed 716 records in y-train, 340 in x-test, preparing for lstm model.', 'duration': 41.511, 'max_score': 1222.216, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1222216.jpg'}, {'end': 1383.11, 'src': 'heatmap', 'start': 1271.063, 'weight': 0.717, 'content': [{'end': 1276.826, 'text': "that is basically we need to add one, okay, one over here, so you can see that i've written x underscore train dot.", 'start': 1271.063, 'duration': 5.763}, {'end': 1278.887, 'text': 'reshape x train dot, shape of zero.', 'start': 1276.826, 'duration': 2.061}, {'end': 1281.048, 'text': 'shape of zero is nothing but 716.', 'start': 1278.887, 'duration': 2.161}, {'end': 1283.689, 'text': 'then x underscore train of shape of one.', 'start': 1281.048, 'duration': 2.641}, {'end': 1286.071, 'text': 'it is basically nothing, but the timestamp.', 'start': 1283.689, 'duration': 2.382}, {'end': 1289.973, 'text': 'then this should actually be comma one, so that it gets converted into a three dimension.', 'start': 1286.071, 'duration': 3.902}, {'end': 1294.455, 'text': 'the reason why we are doing this is that because we will be giving this two as the input to our lsdn.', 'start': 1289.973, 'duration': 4.482}, {'end': 1300.092, 'text': "If you have already seen my LSTM videos in my deep learning playlist, guys, you'll be able to understand.", 'start': 1295.485, 'duration': 4.607}, {'end': 1302.956, 'text': 'So this is done for the extrins and extras.', 'start': 1300.653, 'duration': 2.303}, {'end': 1304.298, 'text': 'We have to also do it for that.', 'start': 1302.996, 'duration': 1.302}, {'end': 1311.648, 'text': "Then I'm going to import I'm going to start creating my stack LSTM model, right? Stack LSTM.", 'start': 1304.798, 'duration': 6.85}, {'end': 1314.431, 'text': 'So I have sequential dense LSTM.', 'start': 1312.009, 'duration': 2.422}, {'end': 1315.712, 'text': "I'm adding this.", 'start': 1314.971, 'duration': 0.741}, {'end': 1318.594, 'text': "Okay These are my libraries that I'll require.", 'start': 1316.432, 'duration': 2.162}, {'end': 1321.136, 'text': 'LSTM model is pretty much simple guys.', 'start': 1319.374, 'duration': 1.762}, {'end': 1322.777, 'text': 'We will be using a sequential model.', 'start': 1321.176, 'duration': 1.601}, {'end': 1325.179, 'text': "First LSTM layer I'm adding over here.", 'start': 1323.358, 'duration': 1.821}, {'end': 1327.24, 'text': "I'm giving the hidden layer 50.", 'start': 1325.199, 'duration': 2.041}, {'end': 1329.102, 'text': "I'm saying a determinative sequence is equal to true.", 'start': 1327.24, 'duration': 1.862}, {'end': 1334.346, 'text': 'The first input shape, as I told you, should be my, this two values.', 'start': 1329.622, 'duration': 4.724}, {'end': 1338.566, 'text': 'Now I know that X train of shape is nothing but, 100, 1.', 'start': 1334.866, 'duration': 3.7}, {'end': 1340.107, 'text': "So I've given 100, 1 over here.", 'start': 1338.566, 'duration': 1.541}, {'end': 1343.228, 'text': "Then in my next layer, I'd added LSTM.", 'start': 1340.607, 'duration': 2.621}, {'end': 1349.99, 'text': "Again, this is a stacked LSTM model, one LSTM after the other, right? 
And finally, I've added one more.", 'start': 1344.848, 'duration': 5.142}, {'end': 1353.931, 'text': "After that, you can see that I've added one final output.", 'start': 1351.27, 'duration': 2.661}, {'end': 1360.345, 'text': "And then I'm compiling with the help of mean squared error and the optimizer is Adam.", 'start': 1354.844, 'duration': 5.501}, {'end': 1364.946, 'text': 'So once I execute this, if I go and see my summary, this is my summary.', 'start': 1360.785, 'duration': 4.161}, {'end': 1369.807, 'text': "After that, seeing the summary, what I can do is that, guys, I'll just remove this.", 'start': 1365.786, 'duration': 4.021}, {'end': 1370.747, 'text': 'This is not required.', 'start': 1369.887, 'duration': 0.86}, {'end': 1377.169, 'text': "I have to just fit on my X train, Y train, and the validation data that I'm taking is X test, Y test.", 'start': 1370.927, 'duration': 6.242}, {'end': 1379.749, 'text': "Epochs, I'm considering it as 100.", 'start': 1377.709, 'duration': 2.04}, {'end': 1381.37, 'text': 'Bash size as 64.', 'start': 1379.749, 'duration': 1.621}, {'end': 1383.11, 'text': 'Let me just execute it in front of you.', 'start': 1381.37, 'duration': 1.74}], 'summary': 'Creating a stack lstm model with sequential dense lstm layers, using input shape of 100,1 and compiling with mean squared error and optimizer adam.', 'duration': 112.047, 'max_score': 1271.063, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1271063.jpg'}, {'end': 1329.102, 'src': 'embed', 'start': 1304.798, 'weight': 0, 'content': [{'end': 1311.648, 'text': "Then I'm going to import I'm going to start creating my stack LSTM model, right? Stack LSTM.", 'start': 1304.798, 'duration': 6.85}, {'end': 1314.431, 'text': 'So I have sequential dense LSTM.', 'start': 1312.009, 'duration': 2.422}, {'end': 1315.712, 'text': "I'm adding this.", 'start': 1314.971, 'duration': 0.741}, {'end': 1318.594, 'text': "Okay These are my libraries that I'll require.", 'start': 1316.432, 'duration': 2.162}, {'end': 1321.136, 'text': 'LSTM model is pretty much simple guys.', 'start': 1319.374, 'duration': 1.762}, {'end': 1322.777, 'text': 'We will be using a sequential model.', 'start': 1321.176, 'duration': 1.601}, {'end': 1325.179, 'text': "First LSTM layer I'm adding over here.", 'start': 1323.358, 'duration': 1.821}, {'end': 1327.24, 'text': "I'm giving the hidden layer 50.", 'start': 1325.199, 'duration': 2.041}, {'end': 1329.102, 'text': "I'm saying a determinative sequence is equal to true.", 'start': 1327.24, 'duration': 1.862}], 'summary': 'Creating a stack lstm model with sequential dense lstm and a hidden layer of 50.', 'duration': 24.304, 'max_score': 1304.798, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1304798.jpg'}], 'start': 886.814, 'title': 'Lstm and predictive model training', 'summary': 'Covers lstm model training and testing, including data preparation and examples, and predictive model creation with pre-processing, reshaping, training stack lstm model, achieving 71.6% accuracy with 100 epochs and a batch size of 64.', 'chapters': [{'end': 1012.331, 'start': 886.814, 'title': 'Lstm model training and testing', 'summary': 'Discusses the process of preparing data for training and testing an lstm model, including creating independent and dependent features, splitting the data, and providing examples of data processing.', 'duration': 125.517, 'highlights': ['The chapter discusses the process of 
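A sketch of the reshape and the stacked-LSTM build as quoted above. `return_sequences=True` is what the transcript's "determinative sequence" garbles, and "Bash size" is batch size; the layer widths, loss, optimizer, epochs and batch size all follow the video:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

# LSTM layers expect 3-D input: (samples, timesteps, features).
X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], 1)
X_test = X_test.reshape(X_test.shape[0], X_test.shape[1], 1)

model = Sequential([
    LSTM(50, return_sequences=True, input_shape=(100, 1)),  # first stacked layer
    LSTM(50, return_sequences=True),                        # second stacked layer
    LSTM(50),                                               # last LSTM returns a vector
    Dense(1),                                               # single-value regression head
])
model.compile(loss="mean_squared_error", optimizer="adam")
model.summary()

model.fit(X_train, y_train, validation_data=(X_test, y_test),
          epochs=100, batch_size=64, verbose=1)
```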
preparing data for training and testing an LSTM model It covers the steps involved in preparing the data for training and testing an LSTM model.', 'Creating independent and dependent features The speaker discusses the creation of independent and dependent features for the LSTM model.', 'Providing examples of data processing Examples of data processing are provided, including specific elements and their corresponding outputs.', 'Splitting the data The process of splitting the data for training and testing is explained, with examples of specific elements being utilized.']}, {'end': 1451.499, 'start': 1012.331, 'title': 'Predictive model training and evaluation', 'summary': 'Outlines the process of creating a predictive model by pre-processing the dataset, reshaping the input data, and training a stack lstm model with a mean squared error as the loss function and adam as the optimizer, achieving an accuracy of 71.6% with 100 epochs and a batch size of 64.', 'duration': 439.168, 'highlights': ['The LSTM model achieved an accuracy of 71.6% with 100 epochs and a batch size of 64. The stack LSTM model achieved an accuracy of 71.6% after training with 100 epochs and a batch size of 64, indicating the effectiveness of the predictive model.', 'The process involved pre-processing the dataset, reshaping the input data, and training a stack LSTM model with a mean squared error as the loss function and Adam as the optimizer. The process involved pre-processing the dataset, reshaping the input data, and training a stack LSTM model with a mean squared error as the loss function and Adam as the optimizer, showcasing the comprehensive approach to model training.', 'The timestamp value was set to 100 for training data, resulting in 716 records with 100 features for X-train and 716 records with 1 value for Y-train. 
The timestamp value was set to 100 for training data, resulting in 716 records with 100 features for X-train and 716 records with 1 value for Y-train, indicating the specific configuration used for the training dataset.']}], 'duration': 564.685, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE886814.jpg', 'highlights': ['The stack LSTM model achieved an accuracy of 71.6% after training with 100 epochs and a batch size of 64, indicating the effectiveness of the predictive model.', 'The process involved pre-processing the dataset, reshaping the input data, and training a stack LSTM model with a mean squared error as the loss function and Adam as the optimizer, showcasing the comprehensive approach to model training.', 'The timestamp value was set to 100 for training data, resulting in 716 records with 100 features for X-train and 716 records with 1 value for Y-train, indicating the specific configuration used for the training dataset.', 'The chapter discusses the process of preparing data for training and testing an LSTM model It covers the steps involved in preparing the data for training and testing an LSTM model.', 'Creating independent and dependent features The speaker discusses the creation of independent and dependent features for the LSTM model.', 'Providing examples of data processing Examples of data processing are provided, including specific elements and their corresponding outputs.', 'Splitting the data The process of splitting the data for training and testing is explained, with examples of specific elements being utilized.']}, {'end': 1714.62, 'segs': [{'end': 1481.766, 'src': 'embed', 'start': 1452.56, 'weight': 0, 'content': [{'end': 1456.124, 'text': 'So you can see how many epochs is done, 16.', 'start': 1452.56, 'duration': 3.564}, {'end': 1458.247, 'text': 'Probably it may take another one minute.', 'start': 1456.124, 'duration': 2.123}, {'end': 1459.428, 'text': "Let's see till then.", 'start': 1458.727, 'duration': 0.701}, {'end': 1462.273, 'text': 'We will try to explore the next line of code.', 'start': 1460.331, 'duration': 1.942}, {'end': 1470.078, 'text': 'Okay Now, after doing that, guys, we will try to do the prediction for the X-train data because we need to find out the performance matrix.', 'start': 1462.573, 'duration': 7.505}, {'end': 1472.46, 'text': "So, for X-train also, I'll be doing the predict.", 'start': 1470.498, 'duration': 1.962}, {'end': 1474.381, 'text': "For X-test also, I'll be doing the predict.", 'start': 1472.6, 'duration': 1.781}, {'end': 1475.822, 'text': "I'll get those values over here.", 'start': 1474.481, 'duration': 1.341}, {'end': 1481.766, 'text': "Then, I'm also going to do the scalar inverse transform because you understand that we have already scaled that particular.", 'start': 1476.523, 'duration': 5.243}], 'summary': 'Training completed in 16 epochs, prediction on x-train and x-test data performed with scalar inverse transform.', 'duration': 29.206, 'max_score': 1452.56, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1452560.jpg'}, {'end': 1558.404, 'src': 'embed', 'start': 1530.315, 'weight': 1, 'content': [{'end': 1533.736, 'text': 'train predict is basically my output for the train data set.', 'start': 1530.315, 'duration': 3.421}, {'end': 1541.898, 'text': 'so this whatever RMSE is there, it is basically for the training data set and obviously it will be good enough when compared to the test,', 'start': 
1533.736, 'duration': 8.162}, {'end': 1544.779, 'text': 'because we have already trained the data on the training data set.', 'start': 1541.898, 'duration': 2.881}, {'end': 1550.341, 'text': 'so probably we can actually get a good output considering this particular value.', 'start': 1544.779, 'duration': 5.562}, {'end': 1558.404, 'text': 'okay now, similarly, for the test data, I can again use mat.square root mean squared error, ytest and test underscore predict.', 'start': 1550.341, 'duration': 8.063}], 'summary': 'Rmse for trained data will be good, test data needs evaluation too.', 'duration': 28.089, 'max_score': 1530.315, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1530315.jpg'}, {'end': 1592.791, 'src': 'heatmap', 'start': 1550.341, 'weight': 2, 'content': [{'end': 1558.404, 'text': 'okay now, similarly, for the test data, I can again use mat.square root mean squared error, ytest and test underscore predict.', 'start': 1550.341, 'duration': 8.063}, {'end': 1563.565, 'text': 'So here you can see that there will be definitely a difference, but the difference is very, very less, which is pretty much amazing.', 'start': 1558.464, 'duration': 5.101}, {'end': 1567.467, 'text': 'That basically means that our LSTM model has done a great work.', 'start': 1563.605, 'duration': 3.862}, {'end': 1574.369, 'text': "It has done actually a good work, okay? Then you can see that I'm getting a value of 235.", 'start': 1567.567, 'duration': 6.802}, {'end': 1577.53, 'text': 'Now, this is how we will be predicting in this case.', 'start': 1574.369, 'duration': 3.161}, {'end': 1582.792, 'text': 'The test data predicted output that you see, that is basically my green color output over here.', 'start': 1578.17, 'duration': 4.622}, {'end': 1585.782, 'text': 'The blue color output is my complete data set guys.', 'start': 1583.399, 'duration': 2.383}, {'end': 1592.791, 'text': 'For the training data set that you see that how the prediction has gone is basically this orange color.', 'start': 1587.604, 'duration': 5.187}], 'summary': 'Lstm model predicts with 235 value, showing great performance.', 'duration': 25.224, 'max_score': 1550.341, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1550341.jpg'}, {'end': 1660.857, 'src': 'embed', 'start': 1636.212, 'weight': 3, 'content': [{'end': 1643.257, 'text': 'uh, if i consider my loss my loss had actually started from 0.02 then it is going lowering, lowering.', 'start': 1636.212, 'duration': 7.045}, {'end': 1645.719, 'text': "okay, it's lowering in a better way.", 'start': 1643.257, 'duration': 2.462}, {'end': 1648.901, 'text': 'you can also see the validation loss is also decreasing a lot.', 'start': 1645.719, 'duration': 3.182}, {'end': 1651.932, 'text': 'so i think we are going in the right way.', 'start': 1649.451, 'duration': 2.481}, {'end': 1660.857, 'text': 'we are going in the proper way because our validation loss and our loss is basically decreasing and our main aim should be that we need to minimize the loss.', 'start': 1651.932, 'duration': 8.925}], 'summary': 'Loss is decreasing from 0.02, validation loss also decreasing a lot, indicating progress.', 'duration': 24.645, 'max_score': 1636.212, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1636212.jpg'}, {'end': 1694.637, 'src': 'embed', 'start': 1670.242, 'weight': 4, 'content': [{'end': 1676.882, 'text': "first of all, i'm taking 
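The evaluation step, sketched below. One caveat worth flagging: in the video the predictions are inverse-transformed back to price units while y_train/y_test stay in the scaled [0, 1] space, which is why the quoted RMSE of ~235 is so large in absolute terms; for a unit-consistent RMSE both sides should go through `scaler.inverse_transform`:

```python
import math
from sklearn.metrics import mean_squared_error

# Predictions come out in the scaled space; map them back to price units.
train_predict = scaler.inverse_transform(model.predict(X_train))
test_predict = scaler.inverse_transform(model.predict(X_test))

# As computed in the video (targets still scaled, hence the large value ~235):
print(math.sqrt(mean_squared_error(y_train, train_predict)))
print(math.sqrt(mean_squared_error(y_test, test_predict)))

# Unit-consistent alternative: inverse-transform the targets too.
y_test_inv = scaler.inverse_transform(y_test.reshape(-1, 1))
print(math.sqrt(mean_squared_error(y_test_inv, test_predict)))
```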
nan values everywhere based on the size of the data frame, one Over here.", 'start': 1670.242, 'duration': 6.64}, {'end': 1678.624, 'text': 'my time step is 100..', 'start': 1676.882, 'duration': 1.742}, {'end': 1686.57, 'text': "Okay So considering this 100, what I'm going to do is that whatever train predict and test predict I've got, I'm just plotting it over here.", 'start': 1678.624, 'duration': 7.946}, {'end': 1688.172, 'text': "Okay I'm just plotting it.", 'start': 1687.051, 'duration': 1.121}, {'end': 1692.655, 'text': 'This all data that you see till here is basically my train predict data.', 'start': 1688.252, 'duration': 4.403}, {'end': 1694.637, 'text': 'And this is my test predict.', 'start': 1693.076, 'duration': 1.561}], 'summary': 'Nan values filled based on data frame size, time step at 100, plotting train and test predict data.', 'duration': 24.395, 'max_score': 1670.242, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1670242.jpg'}], 'start': 1452.56, 'title': 'Model performance, evaluation, and training', 'summary': 'Details the evaluation of an lstm model with 16 epochs, including predicting and calculating the root mean squared error for both training and test data, demonstrates the effectiveness of the model. it also demonstrates the process of predicting test data with a value of 235, visually comparing the predicted output with the complete data set and training data set. additionally, it discusses the training process of a neural network, highlighting the decreasing trend in loss and validation loss, aiming to minimize loss, and the plotting process for train predict and test predict data with a time step of 100.', 'chapters': [{'end': 1567.467, 'start': 1452.56, 'title': 'Lstm model performance evaluation', 'summary': "Details the process of evaluating the performance of an lstm model with 16 epochs, including predicting, scaling, and calculating the root mean squared error for both training and test data, demonstrating the model's effectiveness.", 'duration': 114.907, 'highlights': ['The LSTM model is trained with 16 epochs, indicating the depth of the training process.', "The process includes predicting and scaling the X-train data to calculate the root mean squared error, ensuring the accuracy of the model's performance metrics.", "The root mean squared error (RMSE) is calculated for both the training and test data, demonstrating the model's efficiency in providing accurate predictions for both datasets."]}, {'end': 1614.956, 'start': 1567.567, 'title': 'Predictive model evaluation', 'summary': 'Demonstrates the process of predicting test data with a value of 235 and visually comparing the predicted output with the complete data set and training data set.', 'duration': 47.389, 'highlights': ['The test data predicted output has a value of 235, indicating the success of the predictive model.', "The visual comparison between the predicted output (green color) and the complete data set (blue color) helps in evaluating the model's performance.", "The orange color represents the prediction for the training data set, providing insights into the model's accuracy during training."]}, {'end': 1714.62, 'start': 1615.377, 'title': 'Neural network training and validation', 'summary': 'Discusses the training process of a neural network, highlighting the decreasing trend in loss and validation loss, which started at 0.02 and lowered, with the aim of minimizing loss, along with the plotting process for train predict and 
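The NaN-padding plot trick described above, sketched out; the offsets assume the 100-step look-back used throughout, so the train and test predictions land in the right place on the x-axis against the full series:

```python
import numpy as np
import matplotlib.pyplot as plt

look_back = 100

# Pre-fill full-length arrays with NaN, then drop each prediction block
# into its own slice so everything lines up against the original series.
train_plot = np.empty_like(df1)
train_plot[:] = np.nan
train_plot[look_back:len(train_predict) + look_back, :] = train_predict

test_plot = np.empty_like(df1)
test_plot[:] = np.nan
test_plot[len(train_predict) + (look_back * 2) + 1:len(df1) - 1, :] = test_predict

plt.plot(scaler.inverse_transform(df1))  # full series (blue in the video)
plt.plot(train_plot)                     # train predictions (orange)
plt.plot(test_plot)                      # test predictions (green)
plt.show()
```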
test predict data with a time step of 100.', 'duration': 99.243, 'highlights': ['The loss value not varying significantly after 63 epochs, starting from 0.02 and decreasing, along with the validation loss also decreasing.', 'The importance of minimizing loss as the main aim of the training process.', 'The process of plotting train predict and test predict data with a time step of 100 and the necessity of scalar dot inverse transform for proper visualization.']}], 'duration': 262.06, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1452560.jpg', 'highlights': ['The LSTM model is trained with 16 epochs, indicating the depth of the training process.', "The root mean squared error (RMSE) is calculated for both the training and test data, demonstrating the model's efficiency in providing accurate predictions for both datasets.", 'The test data predicted output has a value of 235, indicating the success of the predictive model.', 'The loss value not varying significantly after 63 epochs, starting from 0.02 and decreasing, along with the validation loss also decreasing.', 'The process of plotting train predict and test predict data with a time step of 100 and the necessity of scalar dot inverse transform for proper visualization.']}, {'end': 2190.651, 'segs': [{'end': 1886.481, 'src': 'heatmap', 'start': 1831.124, 'weight': 4, 'content': [{'end': 1838.267, 'text': "then only I'll be able to predict my 23rd May data or output, you know, by giving it to my LSTM order.", 'start': 1831.124, 'duration': 7.143}, {'end': 1847.49, 'text': "So the same thing that I am doing over here, over here, you can see that what I am doing, I'm taking from 341, 341 to 441 data.", 'start': 1838.587, 'duration': 8.903}, {'end': 1848.951, 'text': 'That is my previous 100 days.', 'start': 1847.55, 'duration': 1.401}, {'end': 1851.612, 'text': "And I'm reshaping that into one comma minus one.", 'start': 1849.351, 'duration': 2.261}, {'end': 1855.153, 'text': "So once I do it, you'll be able to see that I'm having 100 data over here.", 'start': 1851.652, 'duration': 3.501}, {'end': 1857.354, 'text': 'Okay So this is also the same step.', 'start': 1855.613, 'duration': 1.741}, {'end': 1858.714, 'text': "I'm just going to remove this.", 'start': 1857.574, 'duration': 1.14}, {'end': 1859.875, 'text': "I'm going to remove this.", 'start': 1858.894, 'duration': 0.981}, {'end': 1864.891, 'text': "Okay Now, after this, what I'm going to do is that I'm going to convert that into a list.", 'start': 1860.795, 'duration': 4.096}, {'end': 1866.212, 'text': 'Okay List.', 'start': 1865.552, 'duration': 0.66}, {'end': 1869.514, 'text': "I'm converting it and I'm actually taking all the values from there.", 'start': 1866.292, 'duration': 3.222}, {'end': 1874.678, 'text': "So once I execute this, let's see my temp underscore input.", 'start': 1869.935, 'duration': 4.743}, {'end': 1879.481, 'text': 'Temp underscore input is basically all my values.', 'start': 1875.699, 'duration': 3.782}, {'end': 1881.843, 'text': 'All my values you can see over here.', 'start': 1880.222, 'duration': 1.621}, {'end': 1886.481, 'text': 'that is basically all my data from this test underscore data.', 'start': 1882.836, 'duration': 3.645}], 'summary': 'Predict 23rd may output using lstm model on 100 days data.', 'duration': 24.029, 'max_score': 1831.124, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1831124.jpg'}, {'end': 1959.371, 'src': 'embed', 
'start': 1926.489, 'weight': 1, 'content': [{'end': 1931.213, 'text': 'you can see that i told you that the first step is always doing the reshape right.', 'start': 1926.489, 'duration': 4.724}, {'end': 1933.154, 'text': 'so here also we have done that reshape.', 'start': 1931.213, 'duration': 1.941}, {'end': 1936.737, 'text': 'so for the new data also, we always have to do this reshape.', 'start': 1933.154, 'duration': 3.583}, {'end': 1938.198, 'text': 'so this reshape.', 'start': 1936.737, 'duration': 1.461}, {'end': 1940.14, 'text': 'you can see that we have done over here.', 'start': 1938.198, 'duration': 1.942}, {'end': 1943.526, 'text': 'then we are doing the prediction, Getting the y hat value.', 'start': 1940.14, 'duration': 3.386}, {'end': 1948.988, 'text': "I'm adding this y hat value inside my final output Okay over here.", 'start': 1944.066, 'duration': 4.922}, {'end': 1959.371, 'text': "You can see that I'm adding it and then I'm also adding it in my Previous input, in my previous input, which input this particular input.", 'start': 1949.028, 'duration': 10.343}], 'summary': 'Reshape data before prediction, appending y hat to final and previous input.', 'duration': 32.882, 'max_score': 1926.489, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1926489.jpg'}, {'end': 2050.342, 'src': 'embed', 'start': 2019.458, 'weight': 0, 'content': [{'end': 2022.781, 'text': 'so you know that for my test data I have taken the previous hundred days.', 'start': 2019.458, 'duration': 3.323}, {'end': 2028.837, 'text': 'so I have kept hundred indexes inside this day, underscore new.', 'start': 2022.781, 'duration': 6.056}, {'end': 2035.739, 'text': 'in the predicted underscore day i have taken 101 to 131, because 30 days future i need to predict right.', 'start': 2028.837, 'duration': 6.902}, {'end': 2037.84, 'text': 'so this will actually come in my x-axis.', 'start': 2035.739, 'duration': 2.101}, {'end': 2040.36, 'text': "i'm going to use math, dot, lib.", 'start': 2037.84, 'duration': 2.52}, {'end': 2043.661, 'text': "i'll come to this particular thing, guys, just a second.", 'start': 2040.36, 'duration': 3.301}, {'end': 2050.342, 'text': "so i'm saying plt dot, plot, scalar dot, inverse, transform df1 of 1, 1, 5, 8.", 'start': 2043.661, 'duration': 6.681}], 'summary': 'Using 100 days of test data, predicting 30 days into the future with x-axis and math dot lib, plotting scalar inverse transform.', 'duration': 30.884, 'max_score': 2019.458, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE2019458.jpg'}, {'end': 2116.048, 'src': 'embed', 'start': 2086.289, 'weight': 2, 'content': [{'end': 2093.335, 'text': 'so when i do this you will be able to see we are getting a wonderful graph which looks like this okay,', 'start': 2086.289, 'duration': 7.046}, {'end': 2100.3, 'text': 'this is the new 30 days output that you can see and i think the lstm has done wonderful work.', 'start': 2093.335, 'duration': 6.965}, {'end': 2109.242, 'text': 'okay, but still, if you want to see the complete output, i would suggest is that just try to do in this particular way combine,', 'start': 2100.3, 'duration': 8.942}, {'end': 2116.048, 'text': 'combine your df1 and your list output inside df3 and just try to see how the prediction comes.', 'start': 2109.242, 'duration': 6.806}], 'summary': 'Lstm model produces impressive 30-day output graph.', 'duration': 29.759, 'max_score': 2086.289, 'thumbnail': 
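A sketch of the rolling 30-day forecast described above: seed the window with the last 100 test observations (indexes 341 to 440 of the 441-row test set), predict one day, append that prediction to the window, and slide forward. This condenses the bookkeeping of the loop shown in the video while keeping the same idea:

```python
import numpy as np

# Seed the window with the last 100 observations of the 441-row test set.
x_input = test_data[341:].reshape(1, -1)
temp_input = list(x_input[0])

lst_output = []
n_steps = 100
for _ in range(30):  # forecast 30 days ahead, one day at a time
    x = np.array(temp_input[-n_steps:]).reshape(1, n_steps, 1)
    yhat = model.predict(x, verbose=0)
    temp_input.append(float(yhat[0][0]))  # feed the prediction back into the window
    lst_output.append(float(yhat[0][0]))
```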
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE2086289.jpg'}, {'end': 2179.4, 'src': 'embed', 'start': 2153.231, 'weight': 3, 'content': [{'end': 2158.935, 'text': 'One more thing that you can do is that try to predict for the next 100 days and try to see that how it looks like.', 'start': 2153.231, 'duration': 5.704}, {'end': 2161.597, 'text': 'you know how the data, how the output looks like.', 'start': 2158.935, 'duration': 2.662}, {'end': 2165.1, 'text': 'Video guys, I hope you were able to understand all this particular thing.', 'start': 2161.617, 'duration': 3.483}, {'end': 2171.104, 'text': 'So yes, this was all about this particular video and we can still improve the security by using bidirectional LSTM.', 'start': 2165.66, 'duration': 5.444}, {'end': 2175.087, 'text': "So yes, and that I'll be showing you in my upcoming videos.", 'start': 2171.644, 'duration': 3.443}, {'end': 2176.888, 'text': 'So yes, this was all about this particular video.', 'start': 2175.147, 'duration': 1.741}, {'end': 2179.4, 'text': "Please do subscribe to the channel if you're not already subscribed.", 'start': 2177.398, 'duration': 2.002}], 'summary': 'Predict for next 100 days, improve security using bidirectional lstm, subscribe to the channel.', 'duration': 26.169, 'max_score': 2153.231, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE2153231.jpg'}], 'start': 1715.02, 'title': 'Lstm model training and stock prediction', 'summary': "Details the process of training an lstm model for stock prediction using a dataset of 441 records and emphasizes the need for 100 days' data to predict the next 30 days' output. it also discusses the implementation of the model, achieving a 30-day output and showcasing potential improvements by adjusting the time stamp and predicting for the next 100 days.", 'chapters': [{'end': 1906.094, 'start': 1715.02, 'title': 'Lstm model training and future prediction', 'summary': "Details the process of training an lstm model and making future predictions based on a dataset of 441 records, emphasizing the need for 100 days' data to predict the next 30 days' output.", 'duration': 191.074, 'highlights': ['The chapter details the process of training an LSTM model The speaker discusses training an LSTM model on a laptop and successfully exploring various aspects, indicating efficient model training.', "emphasizing the need for 100 days' data to predict the next 30 days' output The speaker emphasizes the need for 100 days' data to predict the next 30 days' output, highlighting the specific requirement for accurate future predictions.", 'the dataset consists of 441 records The speaker mentions that the dataset consists of 441 records, providing context on the size of the dataset being utilized for the LSTM model.']}, {'end': 2190.651, 'start': 1906.094, 'title': 'Implementing lstm model for stock prediction', 'summary': 'Discusses the implementation of an lstm model for stock prediction, showcasing the process of reshaping input data, making predictions, and plotting the results, achieving a 30-day output with an overview of how to improve the model by adjusting the time stamp and predicting for the next 100 days.', 'duration': 284.557, 'highlights': ['The process of reshaping input data and making predictions, including adding the predicted value to the final output and previous input, and shifting the data to accommodate new predictions.', 'The method of plotting the predicted data 
against the real data, showcasing the effectiveness of the LSTM model in generating a smooth and accurate 30-day output.', 'Suggestions for improving the model by adjusting the time stamp and predicting for the next 100 days.']}], 'duration': 475.631, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/H6du_pfuznE/pics/H6du_pfuznE1715020.jpg', 'highlights': ["The speaker emphasizes the need for 100 days' data to predict the next 30 days' output, highlighting the specific requirement for accurate future predictions.", 'The process of reshaping input data and making predictions, including adding the predicted value to the final output and previous input, and shifting the data to accommodate new predictions.', 'The method of plotting the predicted data against the real data, showcasing the effectiveness of the LSTM model in generating a smooth and accurate 30-day output.', 'The chapter details the process of training an LSTM model on a laptop and successfully exploring various aspects, indicating efficient model training.', 'The dataset consists of 441 records, providing context on the size of the dataset being utilized for the LSTM model.', 'Suggestions for improving the model by adjusting the time stamp and predicting for the next 100 days.']}], 'highlights': ['The stack LSTM model achieved an accuracy of 71.6% after training with 100 epochs and a batch size of 64, indicating the effectiveness of the predictive model.', "The root mean squared error (RMSE) is calculated for both the training and test data, demonstrating the model's efficiency in providing accurate predictions for both datasets.", 'The test data predicted output has a value of 235, indicating the success of the predictive model.', 'The process of reshaping input data and making predictions, including adding the predicted value to the final output and previous input, and shifting the data to accommodate new predictions.', 'The process of plotting the predicted data against the real data, showcasing the effectiveness of the LSTM model in generating a smooth and accurate 30-day output.', 'The video discusses stock market prediction and forecasting using stacked LSTM, emphasizing not to use the model for personal investments.', 'The presenter requests viewers not to use the model for personal investments due to the unpredictable nature of the stock market.', 'The process involves collecting Apple stock data from 2015 till date, preprocessing the data, creating a stacked LSTM model, and predicting the future 30 days. Apple stock data from 2015 till date', 'Pandas data reader and Tingo API allow 50 requests per day for accessing stock data. 50 requests per day', 'The process involved pre-processing the dataset, reshaping the input data, and training a stack LSTM model with a mean squared error as the loss function and Adam as the optimizer, showcasing the comprehensive approach to model training.']}
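Finally, the closing visualisation described in the last chapter, sketched under the same assumptions: df1 holds the 1258 scaled rows, so the last 100 actual days start at index 1158; `lst_output` holds the 30 scaled predictions, and `df3` is the combined series the video suggests plotting (names follow the transcript where it spells them out; `day_pred` stands in for its "predicted day" range):

```python
import numpy as np
import matplotlib.pyplot as plt

day_new = np.arange(1, 101)    # the 100 known days the forecast was seeded on
day_pred = np.arange(101, 131) # the 30 predicted days that follow

plt.plot(day_new, scaler.inverse_transform(df1[1158:]))
plt.plot(day_pred, scaler.inverse_transform(np.array(lst_output).reshape(-1, 1)))
plt.show()

# The "combine df1 and the new output inside df3" view suggested at the end:
df3 = df1.tolist()
df3.extend(np.array(lst_output).reshape(-1, 1).tolist())
plt.plot(scaler.inverse_transform(np.array(df3)))
plt.show()
```

From here the suggested experiments are exactly the ones named above: retune the timestamp value, extend the loop to predict the next 100 days, or swap in a bidirectional LSTM.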