title
Time Series Analysis in Python | Time Series Forecasting Project [Complete] | Python Data Science

description
In this Python data science project tutorial, I walk through a time series project from scratch. The tutorial covers the essential pieces of a time series project in Python: manipulating a dataset, working with a Series, ACF, PACF, autoregression, moving averages, and differencing. I first show how to create a baseline model and measure its error rate with scikit-learn's mean squared error, and then how to build an ARIMA (autoregressive integrated moving average) model, one of the most advanced and most widely used statistical models for time series forecasting.
Dataset link - https://tinyurl.com/yd65vnf3
Want to support me for creation of new free videos - https://www.instamojo.com/abhishek_agarrwal/support-required/
My other projects -
Data Science Project Tutorial for Beginners - https://youtu.be/z3xfNAZtbvw
Tableau Data Science Project 2 - Tableau Project for Practice Data Analysis and Prediction - https://youtu.be/d8U20PwKapo
Python Complete Tutorial for Beginners [Full Course] 2019 - https://youtu.be/BMWaxI2e1U0
Python Complete Tutorial for Beginners [Full Course] 2019 - Part 2 - https://youtu.be/lcjE-FXrkJY
Python Text Analytics for Beginners - Part 1 - Creating and Manipulating Strings in Python - https://youtu.be/FfbM8sGZqAI
My website - http://www.datantools.com
Connect with me on
Facebook Page - https://www.facebook.com/datantools
Twitter - https://twitter.com/Abhishe30886934
LinkedIn - https://www.linkedin.com/in/abhishek-agarwal-9549876/
⭐My Favorite Python Books
- Python Crash Course: https://amzn.to/2J0AqbI
- Automate the Boring Stuff with Python: https://amzn.to/2VQuPd7
- A Smarter Way to Learn Python: https://amzn.to/35JBOcs
- Machine Learning for Absolute Beginners: https://amzn.to/35IKteV
- Hands-on Machine Learning with scikit-learn and TensorFlow: https://amzn.to/31kU9cg
Python official page - https://www.python.org/
Python documentation for each version - https://www.python.org/doc/versions/
Python Community - https://www.python.org/community/
Download Python - https://www.python.org/downloads/
Python Success Stories - https://www.python.org/success-stories/
Python News - https://www.python.org/blogs/
Python Events - https://www.python.org/events/
Python String Documentation - https://docs.python.org/3.4/library/string.html

detail
Overall summary: Covers time series modeling, data analysis, female birth statistics, and the ARIMA model. Key topics include data manipulation in Python, birth statistics in California, time series analysis techniques, and error reduction in advanced models, culminating in an ARIMA model with specific parameters to experiment with and improve.

Chapter 1 (0:00-4:00): Time Series Modeling in Python
Summary: Introduces time series data: data points collected over a period of time at some frequency, such as daily measurements, population figures the government collects every 5 or 10 years, quarterly GDP numbers, and company results. Forecasting future values from such data calls for the time series modeling approach.
Highlights:
- The collection frequency (daily, quarterly, or whatever interval the collector sets) is what makes a dataset a time series, and it matters for any data science project that forecasts future values.
- The basic libraries are pandas for reading the data and matplotlib for visualization, with the %matplotlib magic command used in the Jupyter Notebook to display plots.
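The import block the chapter refers to, as it would sit at the top of the notebook (a sketch; the transcript's "imports, pandas, sppd" reads as the garbled spoken form of "import pandas as pd"):

    import pandas as pd                # data reading and manipulation
    import matplotlib.pyplot as plt   # plotting
    %matplotlib inline                 # Jupyter magic: render plots in the notebook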
Chapter 2 (4:02-13:48): Data Analysis in Python
Summary: Reads the birth data, the daily total female births in California for 1959 (linked in the description), into a pandas object and uses the date index to slice, dice, and operate on the data. read_csv accepts many parameters, each documented with examples: header, a separator (comma by default, but any other separator can be specified), and index_col=0 to make the first column the index.
Highlights:
- A single-column DataFrame can be converted to a Series by passing squeeze=True to read_csv, which changes the format of the object entirely.
- The raw file yields 366 observations; the last real observations sit around 55, 52, 48 and the first around 30 to 44, but the tail also contains the stray value 1959 (the year, read in as a data row), which is clearly an outlier.
- describe() on the raw data shows a mean of 47, a standard deviation of 100, and a maximum of 1959, confirming the outlier; this is why you always inspect both the first and the last observations.
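A sketch of the reading and inspection steps just described. The file name is a placeholder for the dataset link in the description, and squeeze=True inside read_csv reflects the pandas version used in the video; in pandas 2.0 and later that keyword is gone, so the DataFrame is squeezed afterwards instead:

    import pandas as pd

    # index_col=0 makes the date column the index; header=0 takes names from row 0.
    birth_df = pd.read_csv('daily-total-female-births-CA.csv', header=0, index_col=0)
    f_birth = birth_df.squeeze('columns')   # single-column DataFrame -> Series

    print(f_birth.size)        # 366 rows in the raw file
    print(f_birth.head())      # first few observations, roughly 30 to 44
    print(f_birth.tail())      # the stray 1959 row shows up here
    print(f_birth.describe())  # mean ~47, std ~100, max 1959 before cleaning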
Chapter 3 (13:48-21:19): Female Birth Statistics and Time Series Concepts
Summary: With the outlier removed, the statistics become meaningful: the minimum female birth count on a given day is 23, the maximum is 73, one standard deviation is 7, and the mean is 41 births per day in the state of California. Companies use exactly these figures (minimum, maximum, and above all the average and standard deviation, which show the typical value and how much it can vary) to plan marketing and other business decisions.
Highlights:
- A stationary series, one whose mean and standard deviation stay constant, is important when working on a time series problem; it is reached by differencing, the difference between the current term and the previous term, whose order is called D.
- Smoothing the series with moving averages makes the trend clearer and reduces noise, a technique widely used in stock market analysis.
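A short sketch of the smoothing and differencing ideas, continuing from the f_birth series above (the 30-day window is an illustrative choice, not taken from the video):

    import matplotlib.pyplot as plt

    # Moving-average smoothing: each point becomes the mean of a trailing window.
    birth_smooth = f_birth.rolling(window=30).mean()
    f_birth.plot(label='daily births')
    birth_smooth.plot(label='30-day moving average')
    plt.legend()
    plt.show()

    # First-order differencing (D=1): current term minus previous term,
    # the transformation that pushes a series toward stationarity.
    birth_diff = f_birth.diff(periods=1)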
Chapter 4 (21:19-25:24): Time Series Models and the Baseline Model
Summary: Surveys the model families you meet as your understanding of time series grows: the baseline model, the moving average model, the autoregressive model, the exponential model, ARIMA, and seasonal ARIMA (SARIMA). What a baseline model is turns up regularly as an interview question: it is deliberately naive, assuming that recent history is the best reflection of the future, so whatever the previous value was gets used as the next value. If the birth count on 11th November is around 41, the baseline forecast for 12th November is also 41.
Highlights:
- Building the baseline is simple: create a second column that takes the previous value from the existing series, exactly like dragging a column down one cell in Excel; the pandas sketch below shows the same shift.
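A minimal illustration of that one-step shift (the five values and column names are made up for the demo):

    import pandas as pd

    s = pd.Series([35, 32, 30, 31, 44], name='births')
    naive = pd.DataFrame({'actual_birth': s, 'forecast_birth': s.shift(1)})
    print(naive)
    # The forecast column is the actual column moved down one row;
    # the first forecast is NaN because there is no previous value.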
Chapter 5 (25:24-41:26): Building the Baseline Model and Measuring Its Error
Summary: Implements the baseline. pd.concat takes two series, the original values and a copy shifted one place with the shift method, and combines them along axis=1 into one DataFrame with actual_birth and forecast_birth columns; pd.DataFrame first turns the plain value arrays into DataFrames, since a NumPy array has no shift method. (Exercise from the video: f_birth is already part of a DataFrame, so the shift could have been applied to it directly.) The error metric is mean_squared_error from sklearn.metrics, with numpy imported to take its square root, because squaring inflates the scale (a mean value of 10 squares to 100). The first forecast is not a number, the missing value the shift creates, so that row must be removed before the error can be computed.
Highlights:
- Restricting the series to observations 0 to 364 drops the stray 1959 row, and the mean squared error falls from roughly 10,000 to 84, i.e. an RMSE of about 9.17 births per day for the naive model.
- ARIMA (autoregressive integrated moving average) has three components: autoregressive (order p), integrated (the order of differencing, d), and moving average (order q).
- d=1 takes the first-order difference, d=2 the second; in practice d rarely goes beyond 2, so 0, 1, or 2 is the range to experiment with.
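A sketch of the baseline construction and error calculation, following the transcript's variable names and assuming f_birth from the reading step:

    import numpy as np
    import pandas as pd
    from sklearn.metrics import mean_squared_error

    # Keep observations 0 to 364 to drop the stray 1959 row at the tail.
    series_value = pd.DataFrame(f_birth.values[0:365])

    birth_df = pd.concat([series_value, series_value.shift(1)], axis=1)
    birth_df.columns = ['actual_birth', 'forecast_birth']
    birth_df = birth_df[1:]          # first forecast is NaN because of the shift

    birth_error = mean_squared_error(birth_df.actual_birth, birth_df.forecast_birth)
    print(birth_error)               # ~84 once the outlier row is gone
    print(np.sqrt(birth_error))      # RMSE ~9.17 births/day: the baseline to beat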
Chapter 6 (41:26-58:25): ACF, PACF, and the ARIMA Model
Summary: Shows how to pick the ARIMA orders. The autoregressive order p and the moving average order q are read off the partial autocorrelation (PACF) and autocorrelation (ACF) curves, which check how well the current value is correlated with previous values; once the correlation disappears after the first or second lag, that lag is used as the order. Generally 0 or 1 is enough, sometimes 2; going to 3 is rare in practice, so 0, 1, or 2 is what you experiment with.
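A sketch of the ACF/PACF inspection with statsmodels, again on the outlier-free series (the lag count of 20 is an illustrative choice):

    import matplotlib.pyplot as plt
    from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

    f_birth_clean = f_birth[0:365]      # outlier-free series from the earlier step
    plot_acf(f_birth_clean, lags=20)    # autocorrelation: suggests the MA order q
    plot_pacf(f_birth_clean, lags=20)   # partial autocorrelation: suggests the AR order p
    plt.show()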
Summary (continued): The series shows very little evidence of trend or seasonality, so it is treated as stationary and the model starts at p=2, d=0, with q varied over 2, 3, and 4. The fit summary reports the Akaike Information Criterion (AIC), a topic in itself, but the rule of thumb is that when you experiment with orders such as (1,1,1), (1,1,3), (2,1,3), (2,1,1), or (2,1,2), the order with the lower AIC is preferred. Forecasting 35 values gives predictions close to the actual values in some cases and noticeably off in others, with an RMSE of 6.86 births per day against the naive model's 9.177, an improvement of about 2.3 births per day.
Highlights:
- Further gains may come from normalizing or standardizing the data or from other data transformations (covered in later videos); they may or may not bring the error down.
- Good and bad results both teach you something; the more you experiment, the more you learn, and that, rather than just running an algorithm once, is what makes a real data scientist.
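A sketch of the fit-and-compare loop. It uses the current statsmodels.tsa.arima.model.ARIMA class (the video may use the older statsmodels ARIMA API), and the final order (2, 0, 3) is one illustrative pick from the q values tried above:

    import numpy as np
    from sklearn.metrics import mean_squared_error
    from statsmodels.tsa.arima.model import ARIMA

    # Hold out the last 35 points to match the 35-value forecast in the video.
    train, test = f_birth_clean[:-35], f_birth_clean[-35:]

    for q in (2, 3, 4):                           # experiment over the MA order
        fit = ARIMA(train, order=(2, 0, q)).fit()
        print((2, 0, q), 'AIC:', fit.aic)         # lower AIC is the better fit

    birth_model = ARIMA(train, order=(2, 0, 3)).fit()    # illustrative final order
    forecast = birth_model.forecast(steps=35)
    print(np.sqrt(mean_squared_error(test, forecast)))   # ~6.86 vs the naive 9.177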
Key takeaways:
- Removing the stray 1959 row cut the baseline mean squared error from roughly 10,000 to 84.
- The ARIMA model forecasts with an RMSE of 6.86 births per day, beating the naive baseline's 9.177, and invites further experimentation and improvement.
- ACF and PACF plots reveal the correlation structure and suggest the orders p and q; 0, 1, or 2 are the typical values to try, and lower AIC picks between candidate (p, d, q) orders.
- Always inspect the first and last observations of a dataset; descriptive statistics such as the mean and standard deviation feed directly into business planning.
- The whole workflow rests on pandas and matplotlib for reading and visualizing the data, plus the baseline assumption that the most recent value is the best reflection of the future.