title
TensorFlow 2.0 Complete Course - Python Neural Networks for Beginners Tutorial
description
Learn how to use TensorFlow 2.0 in this full tutorial course for beginners. This course is designed for Python programmers looking to enhance their knowledge and skills in machine learning and artificial intelligence.
Throughout the 8 modules in this course, you will learn fundamental concepts and methods in ML & AI, such as core learning algorithms, deep learning with neural networks, computer vision with convolutional neural networks, natural language processing with recurrent neural networks, and reinforcement learning.
Each of these modules includes in-depth explanations and a variety of coding examples. After completing this course, you will have a thorough knowledge of the core techniques in machine learning and AI, along with the skills necessary to apply these techniques to your own datasets and unique problems.
⭐️ Google Colaboratory Notebooks ⭐️
📕 Module 2: Introduction to TensorFlow - https://colab.research.google.com/drive/1F_EWVKa8rbMXi3_fG0w7AtcscFq7Hi7B#forceEdit=true&sandboxMode=true
📗 Module 3: Core Learning Algorithms - https://colab.research.google.com/drive/15Cyy2H7nT40sGR7TBN5wBvgTd57mVKay#forceEdit=true&sandboxMode=true
📘 Module 4: Neural Networks with TensorFlow - https://colab.research.google.com/drive/1m2cg3D1x3j5vrFc-Cu0gMvc48gWyCOuG#forceEdit=true&sandboxMode=true
📙 Module 5: Deep Computer Vision - https://colab.research.google.com/drive/1ZZXnCjFEOkp_KdNcNabd14yok0BAIuwS#forceEdit=true&sandboxMode=true
📔 Module 6: Natural Language Processing with RNNs - https://colab.research.google.com/drive/1ysEKrw_LE2jMndo1snrZUh5w87LQsCxk#forceEdit=true&sandboxMode=true
📒 Module 7: Reinforcement Learning - https://colab.research.google.com/drive/1IlrlS3bB8t1Gd5Pogol4MIwUxlAjhWOQ#forceEdit=true&sandboxMode=true
⭐️ Course Contents ⭐️
⌨️ (00:03:25) Module 1: Machine Learning Fundamentals
⌨️ (00:30:08) Module 2: Introduction to TensorFlow
⌨️ (01:00:00) Module 3: Core Learning Algorithms
⌨️ (02:45:39) Module 4: Neural Networks with TensorFlow
⌨️ (03:43:10) Module 5: Deep Computer Vision - Convolutional Neural Networks
⌨️ (04:40:44) Module 6: Natural Language Processing with RNNs
⌨️ (06:08:00) Module 7: Reinforcement Learning with Q-Learning
⌨️ (06:48:24) Module 8: Conclusion and Next Steps
⭐️ About the Author ⭐️
The author of this course is Tim Ruscica, otherwise known as “Tech With Tim” from his educational programming YouTube channel. Tim is passionate about teaching and loves sharing the world of machine learning and artificial intelligence. Learn more about Tim from the links below:
🔗 YouTube: https://www.youtube.com/channel/UC4JX40jDee_tINbkjycV4Sg
🔗 LinkedIn: https://www.linkedin.com/in/tim-ruscica/
--
Learn to code for free and get a developer job: https://www.freecodecamp.org
Read hundreds of articles on programming: https://freecodecamp.org/news
⭐️ Transcript Summary ⭐️

This TensorFlow 2.0 course for beginners covers AI, machine learning, and neural networks, achieving 76% accuracy in model training, 80% accuracy in classifying iris flowers, and 67% accuracy in CNN training. It also addresses text data encoding challenges, LSTM sentiment analysis with 88% validation accuracy, and reinforcement learning implementation for AI training.

⌨️ (00:00:00 - 00:03:22) Course Introduction
The course is aimed at beginners in machine learning and artificial intelligence (or those with a little understanding who want to get better) who have a basic fundamental knowledge of programming and Python. The instructor, Tim Ruscica ("Tech With Tim"), teaches programming on his YouTube channel and has published series with freeCodeCamp. The course starts by breaking down what machine learning and artificial intelligence are and the differences between AI, neural networks, and simple machine learning, then gives a general introduction to TensorFlow: a module developed and maintained by Google that can be used within Python for scientific computing, machine learning, and artificial intelligence applications. From there it covers the core learning algorithms that build a strong foundation (easy to understand and implement, yet extremely powerful), then neural networks, convolutional neural networks for image recognition and detection, recurrent neural networks for natural language processing, and reinforcement learning. All coding is done in Google Colaboratory, a collaborative coding environment that runs an IPython notebook in the cloud on a Google machine, so there are no packages to install and no environment to set up; links to all the notebooks used are in the description.
⌨️ (00:03:27 - 00:39:57) Machine Learning Fundamentals and Introduction to TensorFlow
The first section distinguishes artificial intelligence, neural networks, and machine learning, since all three recur throughout the course. (A quick disclaimer: the drawn explanations use Windows Ink, which comes with Windows by default, and a drawing tablet.) AI originally meant a program simulating intelligent behavior through predefined rules written explicitly by programmers; today it has evolved into a much more complex field that includes machine learning and deep learning. Picture AI as a large circle with machine learning as a smaller circle inside it: rather than the programmer supplying the rules, a machine learning model is given input data and the expected output, and it figures out the rules itself so that it can handle new data. That is why machine learning requires a lot of data and a ton of examples to train a good model.
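The "machine learning generates the rules" idea can be sketched in a few lines. The linear pricing rule below is made up purely for illustration (it is not from the course): a least-squares fit is shown only input/output pairs and recovers the rule's coefficients on its own.

```python
import numpy as np

# Classical programming: the programmer writes the rule explicitly.
def price_rule(rooms):
    return 50_000 * rooms + 20_000   # hand-written (hypothetical) rule

# Machine learning: supply inputs and expected outputs, and let the
# algorithm derive the rule. Here a least-squares fit recovers the
# same linear rule purely from examples.
rooms = np.array([1, 2, 3, 4, 5], dtype=float)        # input data
prices = np.array([price_rule(r) for r in rooms])     # expected output

slope, intercept = np.polyfit(rooms, prices, deg=1)   # "learn" the rule
print(round(slope), round(intercept))
```

Real models learn far richer rules than a straight line, but the workflow is the same: examples in, rules out.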
Neural networks (deep learning) use a layered representation of data: multiple layers of information, whereas standard machine learning uses only one or two layers, and artificial intelligence in general need not have a predefined set of layers at all. One common misconception gets cleared up here: despite the neuron metaphor, neural networks are not actually modeled after the brain. Data is the key ingredient for all of this; with the exception of a few specific techniques discussed later, AI and machine learning models need tons of data and tons of examples, because the model is trying to come up with rules for a dataset.
Reinforcement learning has three parts: an agent moving around an environment, the environment itself, and a reward. The programmer's job is to design the reward so that maximizing it gets the agent to the objective in the best possible way; the agent simply maximizes that reward, starting out by randomly exploring the environment (since it does not yet know the reward at any position) and learning where to go from there.
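The Q-learning covered in Module 7 can be previewed with a tiny tabular sketch. Everything here is invented for illustration (a hypothetical 5-cell corridor, hand-picked constants), not the course's environment: the agent starts in cell 0 and is rewarded only for reaching cell 4.

```python
import random

N_STATES, GOAL = 5, 4            # corridor cells; reward lives at cell 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Move left (0) or right (1); return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]

for _ in range(200):                         # episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice([0, 1])
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, action)
        # The core Q-learning update rule.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

# After training, "right" should dominate in every non-goal state.
print([("right" if q[1] > q[0] else "left") for q in Q[:GOAL]])
```

The agent is never told the rules of the corridor; it discovers a policy purely by chasing reward, which is exactly the agent/environment/reward loop described above.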
TensorFlow itself is an open-source machine learning library, one of the largest and best-known in the world, maintained and supported by Google. It lets you create machine learning models and neural networks without a very complex math background: it can do image classification, data clustering, regression, reinforcement learning, natural language processing, and pretty much anything you can imagine in machine learning, by providing a library of tools that spare you the complicated math operations. The course notebooks run in Google Colaboratory, essentially a free Jupyter notebook in the cloud; the .ipynb extension stands for IPython notebook, and a notebook can hold both code and notes.
The module then digs into the importance of data. Features are the input information and labels are the output to predict, illustrated with a student-grades dataset. In supervised learning, the model is given both the input data and the correct output and figures out the rules connecting them; in unsupervised learning there are no labels, and the model finds structure on its own, for example by clustering data points based on similarity; reinforcement learning trains an agent through rewards, which makes it well suited to training AIs to play games and explore environments without needing extensive data. Module 2 then begins the general introduction to TensorFlow proper: understanding tensors, shapes, and data representation, and how TensorFlow works on a lower level.
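As a taste of the Module 2 material, here is a minimal sketch of tensors, ranks, and shapes, assuming TensorFlow 2.x is installed; the values are arbitrary.

```python
import tensorflow as tf

# Rank = number of dimensions; shape = size along each dimension.
scalar = tf.constant(324)                      # rank 0: a single value
vector = tf.constant([1.0, 2.0, 3.0])          # rank 1: a list of values
matrix = tf.constant([[1, 2, 3], [4, 5, 6]])   # rank 2: rows and columns

print(int(tf.rank(matrix)))   # 2
print(matrix.shape)           # (2, 3): 2 rows, 3 columns
```

Shapes matter constantly in the later modules, since every model expects its input data in a particular shape.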
TensorFlow operates with graphs and sessions: it builds a graph of partial computations and executes those computations through sessions, so code can be written and run without doing the underlying math operations by hand.

⌨️ (00:39:57 - 01:16:22) Colaboratory Basics and Tensors
In Colaboratory, the Runtime tab is worth knowing: "Restart runtime" clears all output and starts fresh. And because notebooks run individual code blocks, a minor change in one block means re-running only that block, not everything before or after it.
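The graph idea maps onto TensorFlow 2.x via tf.function, which traces ordinary Python code into a computation graph (eager execution being the 2.x default). A minimal sketch:

```python
import tensorflow as tf

@tf.function
def affine(x):
    # These operations become nodes in a traced computation graph.
    return 3.0 * x + 1.0

# The first call traces the Python function into a graph, then runs it;
# later calls with the same input signature reuse the traced graph.
result = affine(tf.constant(2.0))
print(float(result))   # 7.0
```

This is the modern counterpart to the explicit graph-and-session workflow described in the transcript.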
'content': [{'end': 2400.705, 'text': 'I just want to show you guys some basic components of Collaboratory.', 'start': 2397.863, 'duration': 2.842}, {'end': 2405.748, 'text': 'Now, some other things that are important to understand is this runtime tab, which you might see me use.', 'start': 2401.205, 'duration': 4.543}, {'end': 2411.67, 'text': "So restart runtime essentially clears all of your output, and just restarts whatever's happened.", 'start': 2406.568, 'duration': 5.102}, {'end': 2416.311, 'text': 'Because the great thing with collaboratory is, since I can run specific code blocks,', 'start': 2412.05, 'duration': 4.261}, {'end': 2420.453, 'text': "I don't need to execute the entire thing of code every time I want to run something.", 'start': 2416.311, 'duration': 4.142}, {'end': 2424.554, 'text': "If I've just made a minor change in one code block, I can just run that code.", 'start': 2420.833, 'duration': 3.721}, {'end': 2430.296, 'text': "sorry, I can just run that code block, I don't need to run everything before it or even everything after it, right.", 'start': 2425.274, 'duration': 5.022}], 'summary': "Introduction to collaboratory's basic components and runtime tab for efficient code execution.", 'duration': 32.433, 'max_score': 2397.863, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk2397863.jpg'}, {'end': 3335.575, 'src': 'embed', 'start': 3308.559, 'weight': 1, 'content': [{'end': 3313.702, 'text': "it's equal to the number of elements in this tensor that will reshape it for us and give us that new shaped data.", 'start': 3308.559, 'duration': 5.143}, {'end': 3315.403, 'text': 'this is very useful.', 'start': 3314.582, 'duration': 0.821}, {'end': 3318.484, 'text': "We'll use this actually a lot as we go through TensorFlow.", 'start': 3315.803, 'duration': 2.681}, {'end': 3320.886, 'text': "So make sure you're kind of familiar with how that works.", 'start': 3318.504, 'duration': 2.382}, 
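The reshaping walkthrough above boils down to a couple of lines; this is a minimal sketch assuming TensorFlow 2.x, with variable names of my own rather than the notebook's:

```python
import tensorflow as tf

# A tensor of ones with shape (2, 3): 2 * 3 = 6 elements in total.
t1 = tf.ones([2, 3])

# Any new shape is valid as long as the element count stays the same.
t2 = tf.reshape(t1, [3, 2])

# -1 tells TensorFlow to infer that dimension from the element count.
t3 = tf.reshape(t1, [6])
t4 = tf.reshape(t1, [-1])  # same result: a flat tensor of 6 elements

print(t2.shape, t3.shape, t4.shape)
```

Reshaping to a shape whose element count differs (say `[4, 2]`) raises an error, which is why the transcript stresses that the product of the dimensions must equal the number of elements.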
{'end': 3324.388, 'text': "Alright, so now we're moving on to types of tensors.", 'start': 3321.406, 'duration': 2.982}, {'end': 3328.39, 'text': 'So there are a bunch of different types of tensors that we can use.', 'start': 3325.329, 'duration': 3.061}, {'end': 3331.052, 'text': "So far, the only one we've looked at is variable.", 'start': 3328.57, 'duration': 2.482}, {'end': 3335.575, 'text': "So we've created tf.Variable tensors and kind of just hard coded our own tensors.", 'start': 3331.172, 'duration': 4.403}], 'summary': 'Using tf.Variable, we create tensors in TensorFlow.', 'duration': 27.016, 'max_score': 3308.559, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk3308559.jpg'}, {'end': 3373.276, 'src': 'embed', 'start': 3347.361, 'weight': 4, 'content': [{'end': 3354.225, 'text': "Now we're not going to really talk about these two that much, although constant and variable are important to understand the difference between.", 'start': 3347.361, 'duration': 6.864}, {'end': 3361.769, 'text': 'So we can read this as with the exception of variable, all of these tensors are immutable, meaning their value may not change during execution.', 'start': 3355.045, 'duration': 6.724}, {'end': 3369.133, 'text': "So essentially all of these, when we create a tensor, mean we have some constant value, which means that whatever we've defined here,", 'start': 3362.289, 'duration': 6.844}, {'end': 3370.374, 'text': "it's not going to change.", 'start': 3369.133, 'duration': 1.241}, {'end': 3373.276, 'text': 'whereas the variable tensor could change.', 'start': 3370.894, 'duration': 2.382}], 'summary': 'Immutable tensors have constant values, while variable tensors can change.', 'duration': 25.915, 'max_score': 3347.361, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk3347361.jpg'}, {'end': 3631.83, 'src': 'embed', 'start': 3602.938, 'weight': 5, 'content':
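The constant-versus-variable distinction described here can be shown in a short sketch, assuming TensorFlow 2.x (the names are mine, not the notebook's):

```python
import tensorflow as tf

# tf.constant is immutable: its value may not change during execution.
c = tf.constant([1, 2, 3])

# tf.Variable is the exception: its value can be updated in place.
v = tf.Variable([1, 2, 3])
v.assign([4, 5, 6])          # allowed on a Variable

print(c.numpy(), v.numpy())
```

A plain tensor like `c` has no `assign` method at all, which is exactly the immutability the transcript describes.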
[{'end': 3605.319, 'text': 'So welcome to module three of this course.', 'start': 3602.938, 'duration': 2.381}, {'end': 3610.7, 'text': "Now, what we're going to be doing in this module is learning the core machine learning algorithms that come with TensorFlow.", 'start': 3605.359, 'duration': 5.341}, {'end': 3613.281, 'text': 'Now, these algorithms are not specific to TensorFlow,', 'start': 3611.14, 'duration': 2.141}, {'end': 3617.201, 'text': "but they are used within there and we'll use some tools from TensorFlow to kind of implement them.", 'start': 3613.281, 'duration': 3.92}, {'end': 3623.323, 'text': 'But essentially, these are the building blocks before moving on to things like neural networks and more advanced machine learning techniques.', 'start': 3617.522, 'duration': 5.801}, {'end': 3629.388, 'text': "You really need to understand how these work because they're kind of used in a lot of different techniques and combined together.", 'start': 3623.703, 'duration': 5.685}, {'end': 3631.83, 'text': "And what I'm about to show you is actually very powerful.", 'start': 3629.648, 'duration': 2.182}], 'summary': 'Module three covers core machine learning algorithms in tensorflow.', 'duration': 28.892, 'max_score': 3602.938, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk3602938.jpg'}, {'end': 3794.828, 'src': 'embed', 'start': 3765.006, 'weight': 8, 'content': [{'end': 3767.408, 'text': "So let's go ahead and get started with linear regression.", 'start': 3765.006, 'duration': 2.402}, {'end': 3772.293, 'text': "So what is linear regression? 
It's one of the basic forms of machine learning.", 'start': 3768.129, 'duration': 4.164}, {'end': 3776.917, 'text': 'And essentially what we try to do is have a linear correspondence between data points.', 'start': 3772.413, 'duration': 4.504}, {'end': 3779.379, 'text': "So I'm just going to scroll down here to a good example.", 'start': 3777.357, 'duration': 2.022}, {'end': 3782.702, 'text': "So what I've done is use matplotlib just to plot a little graph here.", 'start': 3779.659, 'duration': 3.043}, {'end': 3784.203, 'text': 'So we can see this one right here.', 'start': 3782.882, 'duration': 1.321}, {'end': 3786.884, 'text': 'And essentially, this is kind of our data set.', 'start': 3784.723, 'duration': 2.161}, {'end': 3788.085, 'text': "This is what we'll call our data set.", 'start': 3786.904, 'duration': 1.181}, {'end': 3794.828, 'text': 'What we want to do is use linear regression to come up with a model that can give us some good predictions for our data points.', 'start': 3788.205, 'duration': 6.623}], 'summary': 'Introduction to linear regression in machine learning.', 'duration': 29.822, 'max_score': 3765.006, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk3765006.jpg'}, {'end': 3850.795, 'src': 'embed', 'start': 3822.336, 'weight': 2, 'content': [{'end': 3826.697, 'text': 'It pretty much I mean, it is the perfect line of best fit for this data set.', 'start': 3822.336, 'duration': 4.361}, {'end': 3831.218, 'text': 'And using this line, we can actually predict future values in our data set.', 'start': 3827.017, 'duration': 4.201}, {'end': 3836.883, 'text': 'essentially linear regression is used when you have data points that correlate in kind of a linear fashion.', 'start': 3831.778, 'duration': 5.105}, {'end': 3842.047, 'text': "Now, this is a very basic example, because we're doing this in two dimensions with x and y.", 'start': 3837.263, 'duration': 4.784}, {'end': 3847.732, 'text': "But
oftentimes, what you'll have is you'll have data points that have, you know, eight or nine kind of input values.", 'start': 3842.047, 'duration': 5.685}, {'end': 3850.795, 'text': 'So that gives us, you know, a nine dimensional kind of data set.', 'start': 3847.812, 'duration': 2.983}], 'summary': 'Linear regression predicts future values based on correlated data points, with potential for higher dimensions.', 'duration': 28.459, 'max_score': 3822.336, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk3822336.jpg'}, {'end': 3911.713, 'src': 'embed', 'start': 3887.223, 'weight': 7, 'content': [{'end': 3892.784, 'text': 'line of best fit refers to a line, through a scatterplot of data points, that best expresses the relationship between those points.', 'start': 3887.223, 'duration': 5.561}, {'end': 3895.045, 'text': "So exactly what I've kind of been trying to explain.", 'start': 3893.164, 'duration': 1.881}, {'end': 3901.928, 'text': 'when we have data that correlates linearly, and I always butcher that word, what we can do is draw a line through it.', 'start': 3895.785, 'duration': 6.143}, {'end': 3904.73, 'text': 'And then we can use that line to predict new data points.', 'start': 3902.028, 'duration': 2.702}, {'end': 3911.713, 'text': "Because if that line is good, it's a good line of best fit for the data set, then hopefully we would assume that we can just, you know,", 'start': 3904.85, 'duration': 6.863}], 'summary': 'The line of best fit helps predict new data points in a linearly correlated dataset.', 'duration': 24.49, 'max_score': 3887.223, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk3887223.jpg'}, {'end': 4230.024, 'src': 'embed', 'start': 4202.127, 'weight': 6, 'content': [{'end': 4206.93, 'text': "So actually, most times, what's going to end up happening is you're going to have, you know, like eight or nine input variables.", 'start': 
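The line-of-best-fit idea above can be sketched numerically. This example uses NumPy's least-squares `polyfit` rather than the course notebook's code, and the data points are made up:

```python
import numpy as np

# Made-up data that correlates roughly linearly (about y = 2x + 1).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Fit y = m*x + b (a degree-1 polynomial) by least squares.
m, b = np.polyfit(x, y, 1)

def predict(new_x):
    """Use the fitted line to predict a value for a new data point."""
    return m * new_x + b

print(m, b, predict(5.0))
```

The same least-squares idea carries over to eight or nine input values; with more dimensions the "line" becomes a hyperplane, but the fit still minimizes the squared distance to the data points.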
4202.127, 'duration': 4.803}, {'end': 4209.752, 'text': "And then you're going to have one output variable that you're predicting.", 'start': 4207.63, 'duration': 2.122}, {'end': 4215.556, 'text': 'Now, so long as our data points are correlated linearly in three dimensions, we can still do this.', 'start': 4210.412, 'duration': 5.144}, {'end': 4220.299, 'text': "So I'm going to attempt to show you this actually in three dimensions, just to hopefully clear some things up,", 'start': 4215.956, 'duration': 4.343}, {'end': 4225.022, 'text': 'because it is important to kind of get a grasp and perspective of the different dimensions.', 'start': 4220.299, 'duration': 4.723}, {'end': 4230.024, 'text': "So let's say we have a bunch of data points that are kind of like this.", 'start': 4226.042, 'duration': 3.982}], 'summary': 'Data analysis involves predicting one output variable using eight or nine input variables, assuming linear correlation in three dimensions.', 'duration': 27.897, 'max_score': 4202.127, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk4202127.jpg'}], 'start': 2397.863, 'title': 'Google collaboratory, tensorflow, and linear regression basics', 'summary': 'Introduces google collaboratory components, tensorflow basics, tensor reshaping and evaluation, and linear regression basics. 
it covers topics such as runtime functionality, tensor usage and manipulation, reshaping in tensorflow, and the application of linear regression in machine learning, including its implementation with various tools and datasets.', 'chapters': [{'end': 2434.238, 'start': 2397.863, 'title': 'Google collaboratory components', 'summary': 'Introduces the basic components of collaboratory and explains the functionality of the runtime tab, including the option to restart runtime which clears all output and restarts the code, enabling selective code execution without the need to rerun the entire code.', 'duration': 36.375, 'highlights': ['The runtime tab in Collaboratory allows users to selectively execute specific code blocks without rerunning the entire code, improving efficiency and saving time.', 'The option to restart runtime in Collaboratory clears all output and restarts the code, providing a way to start afresh when needed.']}, {'end': 3291.271, 'start': 2434.578, 'title': 'Google collaboratory and tensorflow basics', 'summary': 'Explains the usage of google collaboratory and how to import tensorflow 2.0, it defines tensors, their importance in tensorflow, their data types, shapes, and ranks. 
it also discusses reshaping and manipulating tensors.', 'duration': 856.693, 'highlights': ['Google Collaboratory and TensorFlow Basics', 'Importing TensorFlow 2.0 in Google Collaboratory', 'Definition and Importance of Tensors in TensorFlow', 'Data Types and Shapes of Tensors', 'Reshaping and Manipulating Tensors']}, {'end': 3764.926, 'start': 3291.371, 'title': 'Tensor reshaping and evaluating in tensorflow', 'summary': 'Discusses how to reshape tensors in tensorflow, detailing the process and its applications, before delving into the types of tensors and the core machine learning algorithms that come with tensorflow.', 'duration': 473.555, 'highlights': ["Tensor reshaping allows for changing the shape of a tensor, with the reshape method enabling the transformation to a new valid shape, and the number of elements in the tensor being equal to the product of the shape's dimensions.", 'Different types of tensors, including constant, placeholder, sparse tensor, and variable, are discussed, with the emphasis on the immutability of constant and placeholder tensors, and the potential for change in variable tensors.', 'The core machine learning algorithms in TensorFlow, including linear regression, classification, clustering, and hidden Markov models, are introduced as essential building blocks before moving on to more advanced techniques.']}, {'end': 3958.01, 'start': 3765.006, 'title': 'Linear regression basics', 'summary': 'Introduces linear regression as a basic form of machine learning used to establish a linear correspondence between data points for making predictions, exemplified through a 2d graph and elaborated to show its applicability in higher dimensions.', 'duration': 193.004, 'highlights': ['Linear regression is used to establish a linear correspondence between data points for making predictions.', 'Application of linear regression in higher dimensions is explained, with a reference to a scenario involving multiple input values and the prediction of an 
output value.', "Explanation of 'line of best fit' and its role in predicting new data points is provided."]}, {'end': 4582.967, 'start': 3958.03, 'title': 'Linear regression basics', 'summary': 'Explains the basics of linear regression, including the definition of a line in two dimensions, calculation of slope, equation of a line, and examples of linear regression. It also discusses the use of scikit-learn, TensorFlow, NumPy, pandas, and matplotlib for implementing linear regression with the Titanic dataset.', 'duration': 624.937, 'highlights': ['The chapter explains the basics of linear regression, including the definition of a line in two dimensions, calculation of slope, equation of a line, and examples of linear regression.', 'The use of scikit-learn, TensorFlow, NumPy, pandas, and matplotlib for implementing linear regression with the Titanic dataset.']}], 'duration': 2185.104, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk2397863.jpg', 'highlights': ['The runtime tab in Colaboratory allows selective execution, improving efficiency.', 'Tensor reshaping enables transformation to a new valid shape.', 'Linear regression establishes a linear correspondence for predictions.', 'The option to restart runtime in Colaboratory clears all output.', 'Different types of tensors are discussed, emphasizing immutability and potential for change.', 'Introduction of core machine learning algorithms in TensorFlow.', 'Application of linear regression in higher dimensions is explained.', "Explanation of 'line of best fit' and its role in predicting new data points.", 'The use of scikit-learn, TensorFlow, NumPy, pandas, and matplotlib for implementing linear regression.']}, {'end': 5699.857, 'segs': [{'end': 4626.912, 'src': 'embed', 'start': 4599.381, 'weight': 4, 'content': [{'end': 4602.605, 'text': "So we're going to call this our label, right, or our output information.", 'start': 4599.381, 'duration': 3.224}, {'end':
4609.732, 'text': 'So here, a zero stands for the fact that someone did not survive, and one stands for the fact that someone did survive.', 'start': 4603.706, 'duration': 6.026}, {'end': 4615.408, 'text': 'Now just thinking about it on your own for a second and looking at some of the categories we have up here,', 'start': 4610.546, 'duration': 4.862}, {'end': 4619.389, 'text': 'can you think about why linear regression would be a good algorithm for something like this?', 'start': 4615.408, 'duration': 3.981}, {'end': 4626.912, 'text': "Well, for example, if someone is a female, we can kind of assume that they're going to have a higher chance of surviving on the Titanic.", 'start': 4620.209, 'duration': 6.703}], 'summary': 'Using linear regression to predict survival based on gender and other factors.', 'duration': 27.531, 'max_score': 4599.381, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk4599381.jpg'}, {'end': 4807.962, 'src': 'embed', 'start': 4783.782, 'weight': 2, 'content': [{'end': 4789.747, 'text': "So what I've done here, if I've loaded in my data set, and notice that I've loaded a training data set in a testing data set.", 'start': 4783.782, 'duration': 5.965}, {'end': 4791.968, 'text': "Now we'll talk about this more later.", 'start': 4790.347, 'duration': 1.621}, {'end': 4792.869, 'text': 'This is important.', 'start': 4792.169, 'duration': 0.7}, {'end': 4797.753, 'text': 'I have two different data sets, one to train the model with, and one to test the model with.', 'start': 4792.989, 'duration': 4.764}, {'end': 4798.594, 'text': 'Now kind of.', 'start': 4798.174, 'duration': 0.42}, {'end': 4803.598, 'text': "the basic reason we would do this is because when we test our model for accuracy to see how well it's doing,", 'start': 4798.594, 'duration': 5.004}, {'end': 4806.601, 'text': "it doesn't make sense to test it on data it's already seen.", 'start': 4803.598, 'duration': 3.003}, {'end': 
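The setup described here (a training frame, a testing frame, and the survived column popped off as the 0/1 label) looks roughly like the sketch below. The real notebook loads two Titanic CSVs with `pd.read_csv` from a Google storage URL, so the tiny inline frame is just a stand-in:

```python
import pandas as pd

# Stand-in for the Titanic training CSV (columns mirror the real one).
df_train = pd.DataFrame({
    'survived': [0, 1, 1, 0],   # 0 = did not survive, 1 = survived
    'sex':      ['male', 'female', 'female', 'male'],
    'age':      [22.0, 38.0, 26.0, 35.0],
    'class':    ['Third', 'First', 'Third', 'Third'],
})

# pop() removes the label column and returns it, leaving only features.
y_train = df_train.pop('survived')

print(df_train.head())   # first rows of the feature columns
print(y_train.head())    # the 0/1 labels we will train against
```

The testing frame gets the same treatment; keeping it separate is what lets the model be evaluated on data it has never seen.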
4807.962, 'text': 'it needs to see fresh data.', 'start': 4806.601, 'duration': 1.361}], 'summary': 'Loaded training and testing data sets for model evaluation.', 'duration': 24.18, 'max_score': 4783.782, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk4783782.jpg'}, {'end': 4919.538, 'src': 'embed', 'start': 4890.247, 'weight': 3, 'content': [{'end': 4891.028, 'text': "So let's do that.", 'start': 4890.247, 'duration': 0.781}, {'end': 4892.209, 'text': 'And there we go.', 'start': 4891.708, 'duration': 0.501}, {'end': 4894.852, 'text': 'So this is what our data frame head looks like.', 'start': 4892.229, 'duration': 2.623}, {'end': 4902.9, 'text': 'Now head, what that does is show us the first five entries in our data set, as well as show us a lot of the different columns that are in it.', 'start': 4895.312, 'duration': 7.588}, {'end': 4908.886, 'text': "Now, since we have more than you know, we have a few different columns, it's not showing us all of them, it's just giving us the dot dot dot.", 'start': 4903.24, 'duration': 5.646}, {'end': 4911.448, 'text': 'But we can see this is what the data frame looks like.', 'start': 4909.246, 'duration': 2.202}, {'end': 4913.511, 'text': 'And this is kind of the representation internally.', 'start': 4911.488, 'duration': 2.023}, {'end': 4919.538, 'text': 'So we have entry zero survived zero survived one, we have male, female, all that.', 'start': 4914.091, 'duration': 5.447}], 'summary': 'Data frame head shows first 5 entries with multiple columns.', 'duration': 29.291, 'max_score': 4890.247, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk4890247.jpg'}, {'end': 5183.059, 'src': 'embed', 'start': 5155.022, 'weight': 0, 'content': [{'end': 5157.264, 'text': 'So we know kind of some of the attributes of the data set.', 'start': 5155.022, 'duration': 2.242}, {'end': 5160.065, 'text': 'Now we want to describe the 
data set sometimes.', 'start': 5158.064, 'duration': 2.001}, {'end': 5163.047, 'text': 'What describe does is just give us some overall information.', 'start': 5160.526, 'duration': 2.521}, {'end': 5164.508, 'text': "So let's have a look at it here.", 'start': 5163.387, 'duration': 1.121}, {'end': 5167.07, 'text': 'We can see that we have 627 entries.', 'start': 5164.888, 'duration': 2.182}, {'end': 5169.771, 'text': 'The mean of age is 29.', 'start': 5167.51, 'duration': 2.261}, {'end': 5172.333, 'text': 'The standard deviation is, you know, 12 point whatever.', 'start': 5169.771, 'duration': 2.562}, {'end': 5176.475, 'text': 'And then we get the same information about all of these other different attributes.', 'start': 5172.893, 'duration': 3.582}, {'end': 5180.258, 'text': 'So for example, it gives us, you know, the mean fare, the minimum fare, and just some statistics.', 'start': 5176.495, 'duration': 3.763}, {'end': 5183.059, 'text': "If you guys understand this great, if you don't, doesn't really matter.", 'start': 5180.338, 'duration': 2.721}], 'summary': 'The data set contains 627 entries with a mean age of 29 and various other statistics.', 'duration': 28.037, 'max_score': 5155.022, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk5155022.jpg'}, {'end': 5339.393, 'src': 'embed', 'start': 5314.185, 'weight': 1, 'content': [{'end': 5320.387, 'text': 'So we can see that males have about a 20% survival rate, whereas females are all the way up to about 78%.', 'start': 5314.185, 'duration': 6.202}, {'end': 5321.747, 'text': "So that's important to understand.", 'start': 5320.387, 'duration': 1.36}, {'end': 5330.97, 'text': "that kind of confirms that what we were looking at before in the data set when we were exploring it and you don't need to do this every time that you're looking at a data set but it is good to kind of get some intuition about it.", 'start': 5321.747, 'duration': 9.223}, {'end': 5332.491, 
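The exploration steps just described (describe() for summary statistics, then a grouped survival rate by sex) can be sketched like this; the mini frame is illustrative, not the real 627-row data set:

```python
import pandas as pd

# Toy stand-in for the Titanic training frame.
df = pd.DataFrame({
    'sex':      ['male', 'male', 'male', 'female', 'female'],
    'age':      [22, 35, 54, 38, 26],
    'survived': [0, 0, 1, 1, 1],
})

# describe() summarizes numeric columns: count, mean, std, min, max, ...
print(df.describe())

# Survival rate per sex: the mean of a 0/1 label is the survival fraction.
rates = df.groupby('sex')['survived'].mean()
print(rates)
```

On the real data set this grouped mean is what shows roughly 78% survival for females versus about 20% for males.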
'text': "So this is what we've learned so far.", 'start': 5331.411, 'duration': 1.08}, {'end': 5334.411, 'text': 'Majority of passengers are in their twenties or thirties.', 'start': 5332.631, 'duration': 1.78}, {'end': 5336.292, 'text': 'Then majority passengers are male.', 'start': 5334.812, 'duration': 1.48}, {'end': 5339.393, 'text': "They're in third class and females have a much higher chance of survival.", 'start': 5336.452, 'duration': 2.941}], 'summary': 'In the titanic dataset, females have a 78% survival rate, while males have a 20% survival rate, confirming the importance of gender and class in survival.', 'duration': 25.208, 'max_score': 5314.185, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk5314185.jpg'}], 'start': 4583.167, 'title': 'Titanic dataset analysis', 'summary': 'Discusses using linear regression for survival prediction, pandas data frame operations, and exploring titanic dataset features including 627 entries, 9 attributes, and visualizations revealing passenger demographics and survival rates.', 'chapters': [{'end': 4813.307, 'start': 4583.167, 'title': 'Linear regression for titanic survival prediction', 'summary': 'Discusses the attributes and labels of the titanic data set, explaining the rationale for using linear regression to predict survival based on factors like gender, age, class, deck, and companionship, while also highlighting the importance of splitting the data into training and testing sets.', 'duration': 230.14, 'highlights': ['Explaining the rationale for using linear regression to predict survival based on factors like gender, age, class, deck, and companionship', 'Importance of splitting the data into training and testing sets', 'Explanation of the attributes and labels in the Titanic data set']}, {'end': 5137.138, 'start': 4814.116, 'title': 'Pandas data frame operations', 'summary': 'Covers how to load data into a pandas data frame, remove specific columns, and 
reference specific rows and columns to understand the structure of the data and prepare it for classification, with examples showing the use of head, pop, loc, and accessing specific columns.', 'duration': 323.022, 'highlights': ['Using head() to display the first five entries in the data set and view the different columns, allowing for a structured representation of the data.', "Removing the 'survived' column from the data frame using pop() to separate the data for classification and input information, with 626 or 627 entries in the y train variable showing whether someone survived or not.", 'Using loc[] to reference specific rows in the data frame and the corresponding indexes in the training and testing data frames, allowing for precise data access and manipulation.', "Accessing a specific column, such as 'age', by using DF_train['age'] to retrieve all the different age values and understand the data frame's functionality for future use."]}, {'end': 5699.857, 'start': 5137.648, 'title': 'Exploring titanic dataset features', 'summary': 'Covers exploring the titanic dataset, including data attributes, data description, shape, and creating feature columns for categorical and numeric data, with 627 entries and 9 attributes, as well as visualizing age distribution, gender distribution, class distribution, and survival rate, revealing that the majority of passengers are in their twenties or thirties, mostly male, in third class, and females have a much higher chance of survival.', 'duration': 562.209, 'highlights': ['The data set has 627 entries and 9 attributes, with a mean age of 29 and a standard deviation of 12.', 'The age distribution shows that the majority of passengers are in their twenties or thirties, with some outliers in the 70s and 80s.', 'Females have a much higher chance of survival, with a survival rate of about 78%, compared to males who have a survival rate of about 20%.', 'The testing data set has 264 entries, which will be used to evaluate the 
model created using the training data set.', 'Categorical data is transformed into numeric values for encoding, such as representing females as 0 and males as 1, and classes as 0, 1, and 2.']}], 'duration': 1116.69, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk4583167.jpg', 'highlights': ['The data set has 627 entries and 9 attributes, with a mean age of 29 and a standard deviation of 12.', 'Females have a much higher chance of survival, with a survival rate of about 78%, compared to males who have a survival rate of about 20%.', 'Importance of splitting the data into training and testing sets', 'Using head() to display the first five entries in the data set and view the different columns, allowing for a structured representation of the data.', 'Explaining the rationale for using linear regression to predict survival based on factors like gender, age, class, deck, and companionship']}, {'end': 6891.528, 'segs': [{'end': 5792.286, 'src': 'embed', 'start': 5763.92, 'weight': 0, 'content': [{'end': 5768.663, 'text': 'So encoded it manually, TensorFlow just can do this for us now in TensorFlow 2.0.', 'start': 5763.92, 'duration': 4.743}, {'end': 5769.664, 'text': "So we'll just use that tool.", 'start': 5768.663, 'duration': 1.001}, {'end': 5772.246, 'text': "Okay, so that's what we did with these feature columns.", 'start': 5770.244, 'duration': 2.002}, {'end': 5774.847, 'text': 'Now for the numeric columns, a little bit different.', 'start': 5772.626, 'duration': 2.221}, {'end': 5775.788, 'text': "it's actually easier.", 'start': 5774.847, 'duration': 0.941}, {'end': 5779.731, 'text': 'all we need to do is give the feature name and whatever the data type is, and create a column with that.', 'start': 5775.788, 'duration': 3.943}, {'end': 5785.498, 'text': "So notice, we don't we can omit this unique value, because we know when it's numeric, that you know, there could be an infinite amount of values.", 
'start': 5780.131, 'duration': 5.367}, {'end': 5788.542, 'text': "And then I've just printed out the feature columns, you can see what this looks like.", 'start': 5785.878, 'duration': 2.664}, {'end': 5792.286, 'text': 'So vocabulary list categorical column gives us the number of siblings.', 'start': 5788.562, 'duration': 3.724}], 'summary': 'In tensorflow 2.0, feature encoding is simplified. numeric columns require less effort. categorical columns use vocabulary lists.', 'duration': 28.366, 'max_score': 5763.92, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk5763920.jpg'}, {'end': 5925.914, 'src': 'embed', 'start': 5891.448, 'weight': 4, 'content': [{'end': 5892.568, 'text': "So I'm not going to go too far into that.", 'start': 5891.448, 'duration': 1.12}, {'end': 5897.151, 'text': "Now that we understand we kind of load it in batches right, so we don't load it entirely all at once,", 'start': 5893.149, 'duration': 4.002}, {'end': 5900.373, 'text': 'we just load a specific set of kind of elements as we go.', 'start': 5897.151, 'duration': 3.222}, {'end': 5903.596, 'text': 'what we have is called epochs.', 'start': 5901.874, 'duration': 1.722}, {'end': 5909.801, 'text': 'Now, what are epochs? 
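The notebook builds these columns with TensorFlow's feature-column helpers; the underlying encoding (each category in a fixed vocabulary maps to an integer, e.g. female → 0, male → 1) can be sketched with pandas alone, using column and vocabulary names of my own:

```python
import pandas as pd

df = pd.DataFrame({'sex':   ['male', 'female', 'female', 'male'],
                   'class': ['Third', 'First', 'Second', 'Third']})

# Fix a vocabulary per categorical feature; its order decides the codes.
vocab = {'sex':   ['female', 'male'],
         'class': ['First', 'Second', 'Third']}

encoded = {col: pd.Categorical(df[col], categories=vocab[col]).codes
           for col in vocab}

print(list(encoded['sex']))    # female -> 0, male -> 1
print(list(encoded['class']))  # First -> 0, Second -> 1, Third -> 2
```

Numeric columns need no vocabulary, which is why the transcript notes they are easier: the value is passed through as-is.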
Well, epochs are essentially how many times the model is going to see the same data.', 'start': 5904.256, 'duration': 5.545}, {'end': 5914.785, 'text': 'So what might be the case right when we pass the data to our model the first time?', 'start': 5910.361, 'duration': 4.424}, {'end': 5915.465, 'text': "it's pretty bad.", 'start': 5914.785, 'duration': 0.68}, {'end': 5918.708, 'text': "like it looks at, the model creates our line of best fit, but it's not great.", 'start': 5915.465, 'duration': 3.243}, {'end': 5919.889, 'text': "it's not working perfectly.", 'start': 5918.708, 'duration': 1.181}, {'end': 5925.914, 'text': "So we need to use something called an epoch, which means we're just going to feed the model, feed the data again, but in a different order.", 'start': 5920.249, 'duration': 5.665}], 'summary': 'Training data loaded in batches, using epochs to improve model performance.', 'duration': 34.466, 'max_score': 5891.448, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk5891448.jpg'}, {'end': 6030.205, 'src': 'embed', 'start': 6000.875, 'weight': 3, 'content': [{'end': 6003.458, 'text': "We need to do this, but it's necessary.", 'start': 6000.875, 'duration': 2.583}, {'end': 6004.499, 'text': 'So, essentially,', 'start': 6003.938, 'duration': 0.561}, {'end': 6011.908, 'text': 'what an input function is is the way that we define how our data is going to be broke into epochs and into batches to feed to our model.', 'start': 6004.499, 'duration': 7.409}, {'end': 6017.373, 'text': "Now these you probably aren't ever going to really need to code like from scratch by yourself.", 'start': 6012.489, 'duration': 4.884}, {'end': 6021.917, 'text': "But this is the one I've just stolen from the TensorFlow website pretty much like everything else that's in the series.", 'start': 6017.854, 'duration': 4.063}, {'end': 6030.205, 'text': 'And what this does is it takes our data and encodes it in a tf.data.dataset 
object.', 'start': 6022.518, 'duration': 7.687}], 'summary': 'An input function defines data epochs and batches for the model. it encodes data into a tf.data.dataset object.', 'duration': 29.33, 'max_score': 6000.875, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk6000875.jpg'}, {'end': 6425.9, 'src': 'embed', 'start': 6398.101, 'weight': 2, 'content': [{'end': 6400.682, 'text': "So let's actually run this and see how this works.", 'start': 6398.101, 'duration': 2.581}, {'end': 6401.543, 'text': 'This will take a second.', 'start': 6400.703, 'duration': 0.84}, {'end': 6402.784, 'text': "So I'll be back once this is done.", 'start': 6401.563, 'duration': 1.221}, {'end': 6407.246, 'text': "Okay, so we're back and we've got a 73.8% accuracy.", 'start': 6403.044, 'duration': 4.202}, {'end': 6413.589, 'text': "So essentially, what we've done right is we've trained the model, you might have seen a bunch of output while you were doing this on your screen.", 'start': 6407.646, 'duration': 5.943}, {'end': 6417.472, 'text': 'And then we printed out the accuracy after evaluating the model.', 'start': 6414.449, 'duration': 3.023}, {'end': 6419.354, 'text': "And this accuracy isn't very good.", 'start': 6417.973, 'duration': 1.381}, {'end': 6421.016, 'text': 'But for our first shot, this is okay.', 'start': 6419.494, 'duration': 1.522}, {'end': 6422.877, 'text': "And we're going to talk about how to improve this in a second.", 'start': 6421.076, 'duration': 1.801}, {'end': 6425.9, 'text': "Okay, so we've evaluated the data set.", 'start': 6424.018, 'duration': 1.882}], 'summary': 'Model achieved 73.8% accuracy, aiming for improvement', 'duration': 27.799, 'max_score': 6398.101, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk6398101.jpg'}, {'end': 6586.391, 'src': 'embed', 'start': 6554.536, 'weight': 5, 'content': [{'end': 6556.738, 'text': 'what we can do is use 
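An input function along the lines described, wrapping the data in a tf.data.Dataset and handling shuffling, epochs, and batches, can be sketched as below. This is a simplified version of the one the course borrows from the TensorFlow docs, with made-up toy data and TensorFlow 2.x assumed:

```python
import tensorflow as tf

# Toy features and 0/1 labels standing in for the Titanic frames.
features = {'age': [22.0, 38.0, 26.0, 35.0, 28.0, 19.0]}
labels = [0, 1, 1, 0, 1, 0]

def make_input_fn(features, labels, num_epochs=2, batch_size=4, shuffle=True):
    def input_fn():
        # Encode the data in a tf.data.Dataset object.
        ds = tf.data.Dataset.from_tensor_slices((dict(features), labels))
        if shuffle:
            ds = ds.shuffle(buffer_size=len(labels))
        # Repeat once per epoch, then split into batches of batch_size.
        return ds.repeat(num_epochs).batch(batch_size)
    return input_fn

ds = make_input_fn(features, labels, shuffle=False)()
num_batches = len(list(ds))   # 6 examples x 2 epochs / batches of 4 -> 3
print(num_batches)
```

Shuffling before repeating is what makes each epoch present the same data in a different order, which is the point of feeding the model multiple epochs.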
a method called dot predict.', 'start': 6554.536, 'duration': 2.202}, {'end': 6565.461, 'text': "So what I'm going to do is, I'm going to say I guess results like this equals, and in this case we're going to do the model name,", 'start': 6557.578, 'duration': 7.883}, {'end': 6569.243, 'text': 'which is linear_est.predict.', 'start': 6565.461, 'duration': 3.782}, {'end': 6574.745, 'text': "And then inside here, what we're going to pass is that input function we use for the evaluation.", 'start': 6569.763, 'duration': 4.982}, {'end': 6582.069, 'text': 'So just like you know, we need to pass an input function to actually train the model, we also need to pass an input function to make a prediction.', 'start': 6575.265, 'duration': 6.804}, {'end': 6586.391, 'text': 'Now this input function could be a little bit different, we can modify this a bit if we wanted to.', 'start': 6582.509, 'duration': 3.882}], 'summary': 'Using the .predict method to make predictions with the linear_est model.', 'duration': 31.855, 'max_score': 6554.536, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk6554536.jpg'}, {'end': 6869.649, 'src': 'embed', 'start': 6841.465, 'weight': 1, 'content': [{'end': 6848.93, 'text': "So you can see that that's, you know, represented in the fact that we only have about a 76% accuracy, because this model is not perfect.", 'start': 6841.465, 'duration': 7.465}, {'end': 6851.591, 'text': 'And in this instance, it was pretty bad at saying they have a 32% chance of surviving.', 'start': 6848.97, 'duration': 2.621}, {'end': 6854.073, 'text': 'but they actually did survive.', 'start': 6853.112, 'duration': 0.961}, {'end': 6859.018, 'text': 'So maybe that should be higher, right? 
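The `.predict` call described here returns an iterable of per-example dicts, and the transcript's 32% and 14% survival chances come from reading the `'probabilities'` array out of each one. A minimal stand-in for that access pattern (the dicts below are hand-made to match the transcript's numbers, not real estimator output, which also carries keys like `'logits'` and `'class_ids'`):

```python
# Stand-in for what list(linear_est.predict(eval_input_fn)) would yield:
# one dict per passenger, with a probability for each class.
predictions = [
    {'probabilities': [0.68, 0.32]},   # [P(did not survive), P(survived)]
    {'probabilities': [0.86, 0.14]},
]

# Index 1 is the "survived" class, which is the number the video inspects.
survival_chances = [p['probabilities'][1] for p in predictions]
print(survival_chances)   # [0.32, 0.14] -- the 32% and 14% from the transcript
```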
So we could change this number and go for four.', 'start': 6854.393, 'duration': 4.625}, {'end': 6862.081, 'text': "I'm just messing around and showing you guys, you know, how we use this.", 'start': 6859.739, 'duration': 2.342}, {'end': 6863.923, 'text': 'So in this one, you know, same thing.', 'start': 6862.482, 'duration': 1.441}, {'end': 6869.649, 'text': 'This person survived, although what is it? They only were given a 14% chance of survival.', 'start': 6864.103, 'duration': 5.546}], 'summary': 'Model accuracy is 76%, predicting low survival rates, but actual survival is higher.', 'duration': 28.184, 'max_score': 6841.465, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk6841465.jpg'}], 'start': 5700.057, 'title': 'Tensorflow model training', 'summary': 'Covers creating feature columns for linear regression, training a tensorflow linear classifier model, achieving 73.8% accuracy initially and 76% after refinement, and making predictions with a 76% accuracy.', 'chapters': [{'end': 6072.933, 'start': 5700.057, 'title': 'Creating model features and training process', 'summary': 'Covers creating feature columns for linear regression, including categorical and numeric columns, and explains the training process involving batching, epochs, and input functions for machine learning models.', 'duration': 372.876, 'highlights': ['Creating feature columns for linear regression involves adding TensorFlow feature columns, such as categorical columns with a vocabulary list and numeric columns, to create a column with the feature name and associated vocabulary, enabling the creation of a model.', 'The training process for machine learning models involves loading data in batches, where a batch size of 32 is used to increase speed, and epochs, which represent how many times the model sees the same data to pick up on patterns and prevent overfitting.', 'The input function is crucial for breaking data into epochs and 
batches, with the example code demonstrating the process of encoding a pandas data frame into a tf.data.dataset object for the model to utilize.']}, {'end': 6554.536, 'start': 6072.933, 'title': 'Tensorflow model training', 'summary': "Introduces the process of creating an input function, training a tensorflow linear classifier model, and evaluating its accuracy, achieving 73.8% initially and 76% after refining the model. it also delves into accessing statistical values from the model's evaluation results.", 'duration': 481.603, 'highlights': ['Creating an input function to process training and evaluation data sets, resulting in a 73.8% accuracy initially and 76% after refinement.', 'Training a TensorFlow linear classifier model using the input function and evaluating its accuracy, which was initially 73.8% and later improved to 76%.', "Accessing and interpreting statistical values from the model's evaluation results, demonstrating the impact of shuffling and epochs on accuracy."]}, {'end': 6891.528, 'start': 6554.536, 'title': 'Making predictions and evaluating model', 'summary': 'Covers making predictions using the dot predict method and analyzing the results, including accessing the probabilities of survival and non-survival, with the model achieving a 76% accuracy.', 'duration': 336.992, 'highlights': ['The dot predict method is used to make predictions based on the model, with the results being analyzed to access the probabilities of survival and non-survival.', 'The model achieves a 76% accuracy in its predictions, as observed through the analysis of the probabilities of survival and non-survival.', 'The process involves accessing and analyzing the probabilities of survival and non-survival, with specific examples and comparisons provided for different individuals.']}], 'duration': 1191.471, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk5700057.jpg', 'highlights': ['Creating feature columns for linear 
regression involves adding TensorFlow feature columns, such as categorical columns with a vocabulary list and numeric columns, to create a column with the feature name and associated vocabulary, enabling the creation of a model.', 'The model achieves a 76% accuracy in its predictions, as observed through the analysis of the probabilities of survival and non-survival.', 'Training a TensorFlow linear classifier model using the input function and evaluating its accuracy, which was initially 73.8% and later improved to 76%.', 'The input function is crucial for breaking data into epochs and batches, with the example code demonstrating the process of encoding a pandas data frame into a tf.data.dataset object for the model to utilize.', 'The training process for machine learning models involves loading data in batches, where a batch size of 32 is used to increase speed, and epochs, which represent how many times the model sees the same data to pick up on patterns and prevent overfitting.', 'The dot predict method is used to make predictions based on the model, with the results being analyzed to access the probabilities of survival and non-survival.']}, {'end': 8666.214, 'segs': [{'end': 6959.114, 'src': 'embed', 'start': 6930.24, 'weight': 4, 'content': [{'end': 6933.142, 'text': "I think it's the iris flower data set or something like that.", 'start': 6930.24, 'duration': 2.902}, {'end': 6937.364, 'text': "And we're going to use some different properties of flowers to predict what species of flower it is.", 'start': 6933.162, 'duration': 4.202}, {'end': 6939.965, 'text': "So that's the difference between classification and regression.", 'start': 6937.764, 'duration': 2.201}, {'end': 6944.147, 'text': "Now I'm not going to talk about the specific algorithm we're going to use here for classification,", 'start': 6940.425, 'duration': 3.722}, {'end': 6946.928, 'text': "because there's just so many different ones you can use.", 'start': 6944.147, 'duration': 2.781}, {'end': 
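The feature-column idea in the highlight above (categorical columns backed by a vocabulary list, numeric columns passed through unchanged) can be sketched without TensorFlow. This is only an analogy for what the real `tf.feature_column` helpers do internally; the column names follow the course's Titanic example:

```python
def categorical_column(name, vocabulary):
    """Sketch of a categorical column with a vocabulary list: each raw
    string is mapped to a fixed integer id taken from the vocabulary."""
    index = {value: i for i, value in enumerate(vocabulary)}
    return lambda value: index[value]

def numeric_column(name):
    """Sketch of a numeric column: values simply pass through as floats."""
    return lambda value: float(value)

# Column names as in the Titanic example from the course
sex = categorical_column('sex', ['male', 'female'])
age = numeric_column('age')
print(sex('female'), age('22'))   # 1 22.0
```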
6954.032, 'text': "But yeah, I mean, if you really care about how they work on a lower mathematical level, I'm not going to be explaining that,", 'start': 6948.309, 'duration': 5.723}, {'end': 6959.114, 'text': "because it doesn't make sense to explain it for one algorithm when there's like hundreds and they all work a little bit differently.", 'start': 6954.032, 'duration': 5.082}], 'summary': 'Using flower properties to predict species; difference between classification and regression explained.', 'duration': 28.874, 'max_score': 6930.24, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk6930240.jpg'}, {'end': 7008.373, 'src': 'embed', 'start': 6981.707, 'weight': 5, 'content': [{'end': 6985.469, 'text': "So the data set we're using is that Iris flowers data set, like I talked about,", 'start': 6981.707, 'duration': 3.762}, {'end': 6988.471, 'text': 'and this specific data set separates flowers into three different species.', 'start': 6985.469, 'duration': 3.002}, {'end': 6990.092, 'text': 'So we have these different species.', 'start': 6988.491, 'duration': 1.601}, {'end': 6991.713, 'text': 'this is the information we have.', 'start': 6990.652, 'duration': 1.061}, {'end': 6996.939, 'text': "So sepal length, sepal width, petal length, petal width, we're going to use that information, obviously, to make the predictions.", 'start': 6991.753, 'duration': 5.186}, {'end': 7004.091, 'text': "So given this information, you know, in our final model, can it tell us which one of these flowers it's most likely to be? Okay.", 'start': 6997.299, 'duration': 6.792}, {'end': 7008.373, 'text': "So what we're going to do now is define the CSV column names and the species.", 'start': 7004.591, 'duration': 3.782}], 'summary': 'Using iris flowers dataset with 3 species to make predictions.', 'duration': 26.666, 'max_score': 6981.707, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk6981707.jpg'}, {'end': 7268.027, 'src': 'embed', 'start': 7236.087, 'weight': 7, 'content': [{'end': 7237.868, 'text': "From now on, don't worry about it too much.", 'start': 7236.087, 'duration': 1.781}, {'end': 7244.15, 'text': "you can pretty much just copy the input functions you've created before and modify them very slightly if you're going to be doing your own models.", 'start': 7237.868, 'duration': 6.282}, {'end': 7249.392, 'text': 'But by the end of this, you should have a good idea of how these input functions work, we will have seen like four or five different ones.', 'start': 7244.41, 'duration': 4.982}, {'end': 7253.873, 'text': "And then you know, we can kind of mess with them and tweak them as we go on, but don't focus on it too much.", 'start': 7249.912, 'duration': 3.961}, {'end': 7259.305, 'text': "Okay, so input function, this is our input function, I'm not really going to go into much more detail with that.", 'start': 7254.403, 'duration': 4.902}, {'end': 7261.265, 'text': 'And now our feature columns.', 'start': 7260.045, 'duration': 1.22}, {'end': 7263.966, 'text': 'So this is again pretty straightforward for the feature columns.', 'start': 7261.705, 'duration': 2.261}, {'end': 7268.027, 'text': "all we need to do for this is, since they're all numeric feature columns, is, rather than having two for loops,", 'start': 7263.966, 'duration': 4.061}], 'summary': 'Focus on understanding and modifying input functions for different models, while not getting too caught up in the details. 
also, simplifying the process for numeric feature columns.', 'duration': 31.94, 'max_score': 7236.087, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk7236087.jpg'}, {'end': 7518.048, 'src': 'embed', 'start': 7489.151, 'weight': 6, 'content': [{'end': 7494.515, 'text': "And let's just dig through this, because this is a bit more of a complicated piece of code than we usually used to work with.", 'start': 7489.151, 'duration': 5.364}, {'end': 7497.337, 'text': "I'm also going to remove these comments just to clean things up in here.", 'start': 7494.555, 'duration': 2.782}, {'end': 7501.401, 'text': "So we've defined the classifier, which is a deep neural network classifier.", 'start': 7497.978, 'duration': 3.423}, {'end': 7505.384, 'text': 'we have our feature columns, hidden units, classes now to train the classifier.', 'start': 7501.401, 'duration': 3.983}, {'end': 7507.426, 'text': 'So we have this input function here.', 'start': 7505.924, 'duration': 1.502}, {'end': 7510.827, 'text': 'this input function is different than the one we created previously.', 'start': 7508.286, 'duration': 2.541}, {'end': 7518.048, 'text': 'Remember when the one we had previously was like make input, whatever function I will continue typing in inside it to find another function.', 'start': 7510.847, 'duration': 7.201}], 'summary': 'Working with a complex deep neural network classifier and a new input function.', 'duration': 28.897, 'max_score': 7489.151, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk7489151.jpg'}, {'end': 7713.262, 'src': 'embed', 'start': 7687.452, 'weight': 0, 'content': [{'end': 7693.073, 'text': "we're just going to say we'll go through the data set until we've hit 5000 numbers, like 5000 things that have been looked at.", 'start': 7687.452, 'duration': 5.621}, {'end': 7695.054, 'text': "So that's what this does with that train.", 'start': 7693.573, 
'duration': 1.481}, {'end': 7702.137, 'text': "Let's run this and just look at the training output from our model, it gives us some like, things here, we can kind of see how this is working.", 'start': 7695.714, 'duration': 6.423}, {'end': 7710.281, 'text': 'Notice that if I can stop here for a second, it tells us the current step, it tells us the loss, the lowest, the lower this number, the better.', 'start': 7702.897, 'duration': 7.384}, {'end': 7713.262, 'text': 'And then it tells us global steps per second.', 'start': 7711.121, 'duration': 2.141}], 'summary': 'Analyzing 5000 data points, lower loss number is better. model running at global steps per second.', 'duration': 25.81, 'max_score': 7687.452, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk7687452.jpg'}, {'end': 7878.323, 'src': 'embed', 'start': 7849.077, 'weight': 2, 'content': [{'end': 7854.019, 'text': "And now we'll print this and notice this happens much, much faster, we get a test accuracy of 80%.", 'start': 7849.077, 'duration': 4.942}, {'end': 7860.262, 'text': "So if I were to retrain the model, chances are this accuracy would change again, because of the order in which we're seeing different flowers.", 'start': 7854.019, 'duration': 6.243}, {'end': 7863.704, 'text': "But this is pretty decent considering we don't have that much test data.", 'start': 7860.603, 'duration': 3.101}, {'end': 7868.706, 'text': "And we don't really know what we're doing, right? We're kind of just messing around and experimenting for right now.", 'start': 7864.704, 'duration': 4.002}, {'end': 7870.487, 'text': 'So to get 80% is pretty good.', 'start': 7868.746, 'duration': 1.741}, {'end': 7873.729, 'text': 'Okay, so actually, what am I doing, we need to go back now and do predictions.', 'start': 7870.967, 'duration': 2.762}, {'end': 7878.323, 'text': "So how am I going to predict this for specific flowers? 
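The training log described above reports the current step and the loss, where lower is better. A toy gradient-descent loop (a stand-in for the estimator's internals, not the actual TensorFlow code) shows why the loss should fall as the steps accumulate:

```python
# Minimal gradient-descent loop illustrating the step/loss log described above:
# each "step" applies one update, and the reported loss trends downward.
def train(xs, ys, steps=5000, lr=0.01):
    w = 0.0                                 # single weight; model: y ~ w * x
    for step in range(1, steps + 1):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
        if step % 1000 == 0:                # periodic log, like the estimator's
            loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
            print(f'step {step}: loss = {loss:.6f}')
    return w

w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 3))   # converges toward 2.0, the true slope
```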
So let's go back to our core learning algorithms.", 'start': 7873.769, 'duration': 4.554}], 'summary': 'Model achieved 80% test accuracy, despite limited data and experimental approach.', 'duration': 29.246, 'max_score': 7849.077, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk7849077.jpg'}, {'end': 8101.61, 'src': 'embed', 'start': 8075.973, 'weight': 1, 'content': [{'end': 8080.636, 'text': "And it says that's an 83 or 86.3% chance that that is the prediction.", 'start': 8075.973, 'duration': 4.663}, {'end': 8083.178, 'text': 'So yeah, that is how that works.', 'start': 8081.397, 'duration': 1.781}, {'end': 8084.239, 'text': "So that's what this does.", 'start': 8083.218, 'duration': 1.021}, {'end': 8088.041, 'text': 'I wanted to give a little script, I wrote most of this, I mean, I stole some of this from TensorFlow.', 'start': 8084.339, 'duration': 3.702}, {'end': 8091.403, 'text': 'But just to show you how we can actually predict on one value.', 'start': 8089.022, 'duration': 2.381}, {'end': 8097.207, 'text': "So let's look at these prediction dictionary, because I just want to show you what one of them actually is.", 'start': 8092.244, 'duration': 4.963}, {'end': 8101.61, 'text': "So I'm going to say print pred underscore dict.", 'start': 8097.648, 'duration': 3.962}], 'summary': 'The prediction has an 86.3% chance of accuracy, demonstrating the predictive capability of the model.', 'duration': 25.637, 'max_score': 8075.973, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk8075973.jpg'}, {'end': 8266.115, 'src': 'embed', 'start': 8237.472, 'weight': 3, 'content': [{'end': 8241.217, 'text': 'Now clustering only works for a very specific set of problems.', 'start': 8237.472, 'duration': 3.745}, {'end': 8247.465, 'text': "And you use clustering when you have a bunch of input information or features, but you don't have any labels or output 
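The prediction printout discussed around here boils down to one step: take the `'probabilities'` array from a prediction dict, pick the largest entry, and report the matching species with its percentage (the 86.3% case). A sketch of that final step, with a hand-made probabilities vector standing in for `pred_dict['probabilities']`:

```python
SPECIES = ['Setosa', 'Versicolor', 'Virginica']

def report(probabilities):
    """Pick the most likely class and show its probability as a percentage,
    mimicking the course's prediction printout."""
    class_id = max(range(len(probabilities)), key=probabilities.__getitem__)
    return f'Prediction is "{SPECIES[class_id]}" ({100 * probabilities[class_id]:.1f}%)'

print(report([0.863, 0.102, 0.035]))   # Prediction is "Setosa" (86.3%)
```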
information.", 'start': 8241.537, 'duration': 5.928}, {'end': 8261.972, 'text': 'Essentially what clustering does is finds clusters of like data points and tells you the location of those clusters.', 'start': 8254.065, 'duration': 7.907}, {'end': 8266.115, 'text': 'So you give a bunch of training data, you can pick how many clusters you want to find.', 'start': 8261.992, 'duration': 4.123}], 'summary': 'Clustering is used for unlabeled data to find clusters of like data points and determine their locations.', 'duration': 28.643, 'max_score': 8237.472, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk8237472.jpg'}], 'start': 6891.988, 'title': 'Building tensorflow model for iris flowers', 'summary': 'Introduces classification in machine learning, distinguishes it from regression, and explains the concept of predicting classes and probabilities using the example of iris flower dataset. it also covers the process of building a deep neural network model for classifying iris flowers using tensorflow, training a neural network model with 5000 steps, evaluating its test accuracy of 80%, and making predictions for specific flowers, as well as discussing predictive modeling with an 86.3% prediction accuracy and demonstrating the process of k-means clustering for unsupervised learning.', 'chapters': [{'end': 6963.777, 'start': 6891.988, 'title': 'Introduction to classification in machine learning', 'summary': 'Introduces classification in machine learning, distinguishing it from regression, and explains the concept of predicting classes and probabilities using the example of iris flower dataset, without diving into specific algorithms due to the multitude of options available.', 'duration': 71.789, 'highlights': ['Classification involves differentiating between data points and separating them into classes, predicting the probability of a data point being within different classes.', 'The example of using the iris flower 
dataset to predict the species of flowers illustrates the difference between classification and regression.', "The chapter doesn't delve into specific classification algorithms due to the plethora of options available."]}, {'end': 7669.348, 'start': 6964.577, 'title': 'Building tensorflow model for iris flowers', 'summary': 'Covers the process of building a deep neural network model for classifying iris flowers using tensorflow, with a brief explanation of the data set, defining csv column names and species, loading and examining the data set, creating an input function, generating feature columns, and defining and training the deep neural network classifier.', 'duration': 704.771, 'highlights': ['The data set used is the Iris flowers data set, which separates flowers into three different species.', 'Defining the CSV column names and species is a key step in preparing the data set for model training.', 'The process involves loading and examining the data set, which includes separating columns for the species into training and test datasets.', 'Creating an input function is a crucial step in preparing the data for training the model.', 'Generating feature columns involves looping through the keys in the training data set and appending numeric columns to the feature list.', 'Defining and training the deep neural network classifier involves specifying the feature columns, hidden units, and number of classes, and using a lambda function to create an input function for training the model.']}, {'end': 8075.873, 'start': 7669.988, 'title': 'Training and evaluating a neural network model', 'summary': 'Covers the process of training a neural network model with 5000 steps, evaluating its test accuracy of 80%, and making predictions for specific flowers using tensorflow.', 'duration': 405.885, 'highlights': ['The model is trained with 5000 steps, defining a set amount of steps to go through the dataset.', 'The test accuracy of the model is evaluated at 80% which is considered 
decent with the current amount of test data.', 'A script is used to make predictions for specific flowers, allowing users to input numeric values for features and obtaining the predicted class of the flower.']}, {'end': 8666.214, 'start': 8075.973, 'title': 'Predictive model and clustering analysis', 'summary': 'Discusses predictive modeling with an 86.3% prediction accuracy and demonstrates the process of k-means clustering for unsupervised learning.', 'duration': 590.241, 'highlights': ['The chapter discusses the accuracy of the predictive model, which yields an 86.3% chance of prediction.', 'K-means clustering is introduced as the first unsupervised learning algorithm with the ability to find clusters of like data points and determine their location.', 'The process of K-means clustering is explained, including the steps of randomly placing centroids, assigning data points to centroids by distance, and repeating the process until convergence.']}], 'duration': 1774.226, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk6891988.jpg', 'highlights': ['The model is trained with 5000 steps, defining a set amount of steps to go through the dataset.', 'The chapter discusses the accuracy of the predictive model, which yields an 86.3% chance of prediction.', 'The test accuracy of the model is evaluated at 80% which is considered decent with the current amount of test data.', 'K-means clustering is introduced as the first unsupervised learning algorithm with the ability to find clusters of like data points and determine their location.', 'The example of using the iris flower dataset to predict the species of flowers illustrates the difference between classification and regression.', 'The data set used is the Iris flowers data set, which separates flowers into three different species.', 'Defining and training the deep neural network classifier involves specifying the feature columns, hidden units, and number of classes, 
and using a lambda function to create an input function for training the model.', 'Creating an input function is a crucial step in preparing the data for training the model.']}, {'end': 9830.186, 'segs': [{'end': 8734.737, 'src': 'embed', 'start': 8695.641, 'weight': 0, 'content': [{'end': 8701.844, 'text': "Now hidden Markov models are way different than what we've seen so far, we've been using kind of algorithms that rely on data.", 'start': 8695.641, 'duration': 6.203}, {'end': 8703.225, 'text': 'So, like k means clustering.', 'start': 8701.924, 'duration': 1.301}, {'end': 8708.008, 'text': 'we gave a lot of data and we know, clustered all those data points, found those centroids.', 'start': 8703.225, 'duration': 4.783}, {'end': 8710.669, 'text': 'use those centroids to find where new data points should be.', 'start': 8708.008, 'duration': 2.661}, {'end': 8713.331, 'text': 'Same thing with linear regression and classification.', 'start': 8711.109, 'duration': 2.222}, {'end': 8717.153, 'text': 'Whereas hidden Markov models, we actually deal with probability distributions.', 'start': 8713.871, 'duration': 3.282}, {'end': 8724.595, 'text': "Now the example we're going to go into here, and it's kind of I have to do a lot of examples for this, because it's a very abstract concept,", 'start': 8717.973, 'duration': 6.622}, {'end': 8726.015, 'text': 'is a basic weather model.', 'start': 8724.595, 'duration': 1.42}, {'end': 8734.737, 'text': 'So what we actually want to do is predict the weather on any given day, given the probability of different events occurring.', 'start': 8726.375, 'duration': 8.362}], 'summary': 'Hidden markov models work with probability distributions, as demonstrated by a basic weather model for predicting weather.', 'duration': 39.096, 'max_score': 8695.641, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk8695641.jpg'}, {'end': 9366.589, 'src': 'embed', 'start': 9339.45, 'weight': 4, 
'content': [{'end': 9344.013, 'text': 'the temperature is normally distributed, with mean and standard deviation zero and five on a cold day,', 'start': 9339.45, 'duration': 4.563}, {'end': 9346.515, 'text': 'and mean and standard deviation 15 and 10 on a hot day.', 'start': 9344.013, 'duration': 2.502}, {'end': 9353.5, 'text': 'Now what that means standard deviation is essentially I mean, we can read this thing here is that on a hot day, the average temperature is 15.', 'start': 9346.955, 'duration': 6.545}, {'end': 9356.462, 'text': "That's mean and ranges from five to 25.", 'start': 9353.5, 'duration': 2.962}, {'end': 9361.646, 'text': 'Because the standard deviation is 10 of that, which just means 10 on each side, kind of the min max value.', 'start': 9356.462, 'duration': 5.184}, {'end': 9363.527, 'text': "Again, I'm not in statistics.", 'start': 9362.026, 'duration': 1.501}, {'end': 9366.589, 'text': "So please don't quote me on any definitions of standard deviation.", 'start': 9363.727, 'duration': 2.862}], 'summary': 'Temperature distribution: mean 0, 15; sd 5, 10 on cold, hot days respectively.', 'duration': 27.139, 'max_score': 9339.45, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk9339450.jpg'}, {'end': 9560.389, 'src': 'embed', 'start': 9533.587, 'weight': 1, 'content': [{'end': 9537.85, 'text': 'Now what is steps? 
Well, steps is how many days we want to predict for.', 'start': 9533.587, 'duration': 4.263}, {'end': 9544.075, 'text': "So the number of steps is how many times we're going to step through this probability cycle, and run the model essentially.", 'start': 9538.191, 'duration': 5.884}, {'end': 9549.143, 'text': 'Now remember, what we want to do is we want to predict the average temperature on each day, right?', 'start': 9544.741, 'duration': 4.402}, {'end': 9552.745, 'text': "Like, that's what the goal of our example is is to predict the average temperature.", 'start': 9549.163, 'duration': 3.582}, {'end': 9559.309, 'text': "So given this information, using these observations and using these transitions, what we'll do is predict that.", 'start': 9553.085, 'duration': 6.224}, {'end': 9560.389, 'text': "So I'm going to run this model.", 'start': 9559.329, 'duration': 1.06}], 'summary': 'Predict average temperature for multiple days using model.', 'duration': 26.802, 'max_score': 9533.587, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk9533587.jpg'}], 'start': 8666.214, 'title': 'Hidden markov models in weather prediction', 'summary': 'Introduces k-means clustering and hidden markov models, emphasizing their application in weather prediction. 
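The "steps" idea described here is a simple recursion: the state distribution [P(cold), P(hot)] is pushed through the transition matrix once per step, and each day's expected temperature is the probability-weighted mean. The 80% cold first day, the 0.3 cold-to-hot transition, and the means of 0 and 15 all appear in the transcript; the 0.2 hot-to-cold transition is assumed from the course notebook. A plain-Python sketch of what `model.mean()` computes:

```python
def expected_temperatures(initial, transition, means, steps):
    """Expected temperature per day for a 2-state hidden Markov model.

    initial    -- [P(cold), P(hot)] on day one
    transition -- rows are [P(-> cold), P(-> hot)] for cold and hot days
    means      -- mean temperature on a cold day and on a hot day
    """
    state, temps = list(initial), []
    for _ in range(steps):
        temps.append(state[0] * means[0] + state[1] * means[1])
        state = [state[0] * transition[0][0] + state[1] * transition[1][0],
                 state[0] * transition[0][1] + state[1] * transition[1][1]]
    return temps

temps = expected_temperatures(
    initial=[0.8, 0.2],
    transition=[[0.7, 0.3],   # cold day -> [cold, hot]
                [0.2, 0.8]],  # hot day  -> [cold, hot] (assumed)
    means=[0.0, 15.0],
    steps=7,
)
print([round(t, 2) for t in temps])   # day one is 3.0 = 0.2 * 15
```

Changing the transition probabilities changes every later day's expectation, which is exactly the effect the transcript demonstrates by tweaking the cold-to-hot probability.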
it highlights the minimal data requirement for training the model and covers the creation of a hidden markov model for weather prediction, including modeling it in tensorflow, with code debugging.', 'chapters': [{'end': 8818.222, 'start': 8666.214, 'title': 'Clustering and hidden markov models', 'summary': 'Introduces clustering, specifically k-means clustering and its reliance on data, followed by an explanation of hidden markov models, which use probability distributions to predict future events, demonstrated through a basic weather model.', 'duration': 152.008, 'highlights': ['Hidden Markov models use probability distributions to predict future events, demonstrated through a basic weather model.', 'Introduction to clustering, specifically K-means clustering and its reliance on data.']}, {'end': 9100.694, 'start': 8818.783, 'title': 'Hidden markov models in weather prediction', 'summary': 'Discusses the concept of hidden markov models in weather prediction, explaining the states, observations, and transitions involved, and emphasizes the minimal data requirement for training the model.', 'duration': 281.911, 'highlights': ['The concept of Hidden Markov Models in weather prediction is explained, emphasizing the states, observations, and transitions involved.', 'Emphasizing the minimal data requirement for training the model, it is stated that constant values for probability and transition distributions and observation distributions are sufficient.']}, {'end': 9370.192, 'start': 9101.175, 'title': 'Hidden markov model for weather prediction', 'summary': "Discusses the creation of a hidden markov model for weather prediction, with probabilities and distributions defined for hot and cold days, and the model's purpose in predicting future events based on past occurrences.", 'duration': 269.017, 'highlights': ['The model defines an 80% chance of the first day in the sequence being cold, and a 30% chance of a cold day being followed by a hot day.', 'The temperature is 
normally distributed, with mean and standard deviation of 15 and 10 on a hot day, and mean and standard deviation of 0 and 5 on a cold day.', 'The purpose of the model is to predict future events based on past occurrences, such as using the model to predict the weather for the next week.']}, {'end': 9830.186, 'start': 9371.112, 'title': 'Hidden markov model in tensorflow', 'summary': 'Covers modeling a hidden markov model in tensorflow, with initial, transition, and observation distributions, and then demonstrates model creation and prediction of average temperatures based on different probabilities, with code debugging included.', 'duration': 459.074, 'highlights': ['The model creation involves defining initial distribution, transition distribution, observation distribution, and steps to predict average temperatures for a certain number of days.', 'There is a debugging process to address an error related to the compatibility between TensorFlow and TensorFlow Probability versions, with detailed instructions on resolving the issue by updating TensorFlow Probability.', 'Changing the transition probabilities affects the predicted temperatures, as demonstrated by altering the probability of a cold day being followed by a hot day, resulting in different temperature predictions.']}], 'duration': 1163.972, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk8666214.jpg', 'highlights': ['Hidden Markov models use probability distributions to predict future events, demonstrated through a basic weather model.', 'The model creation involves defining initial distribution, transition distribution, observation distribution, and steps to predict average temperatures for a certain number of days.', 'The concept of Hidden Markov Models in weather prediction is explained, emphasizing the states, observations, and transitions involved.', 'Introduction to clustering, specifically K-means clustering and its reliance on data.', 'The 
temperature is normally distributed, with mean and standard deviation of 15 and 10 on a hot day, and mean and standard deviation of 0 and 5 on a cold day.']}, {'end': 11325.808, 'segs': [{'end': 10052.074, 'src': 'embed', 'start': 10023.631, 'weight': 0, 'content': [{'end': 10025.253, 'text': 'then we get some meaningful output.', 'start': 10023.631, 'duration': 1.622}, {'end': 10026.894, 'text': "this is what we're looking at.", 'start': 10026.073, 'duration': 0.821}, {'end': 10032.479, 'text': "So if we're just looking at a neural network from kind of the outside, we think of it as this magical black box.", 'start': 10027.354, 'duration': 5.125}, {'end': 10033.32, 'text': 'we give some input.', 'start': 10032.479, 'duration': 0.841}, {'end': 10034.16, 'text': 'it gives us some output.', 'start': 10033.32, 'duration': 0.84}, {'end': 10039.485, 'text': "And I mean, we could call this black box just some function, right? Where it's a function of the input maps it to some output.", 'start': 10034.36, 'duration': 5.125}, {'end': 10041.807, 'text': "And that's exactly what a neural network does.", 'start': 10039.825, 'duration': 1.982}, {'end': 10046.671, 'text': 'It takes input and maps that input to some output, just like any other function, right?', 'start': 10042.067, 'duration': 4.604}, {'end': 10050.373, 'text': 'Just like if you had a straight line like this.', 'start': 10046.872, 'duration': 3.501}, {'end': 10051.814, 'text': 'this is a function.', 'start': 10050.373, 'duration': 1.441}, {'end': 10052.074, 'text': 'you know.', 'start': 10051.814, 'duration': 0.26}], 'summary': 'Neural networks function as a mapping from input to output, similar to a mathematical function.', 'duration': 28.443, 'max_score': 10023.631, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk10023631.jpg'}, {'end': 10388.263, 'src': 'heatmap', 'start': 10138.755, 'weight': 1, 'content': [{'end': 10143.257, 'text': "if you're 
To make a prediction from an entire image, every single pixel is needed, which for a 28x28 image is 28 times 28 = 784 pixels. So the input layer would need 784 input neurons, one pixel passed to each neuron. That might seem like a big number, but computers deal with far larger ones, so it really is not that many. If there is only one piece of input information, literally just one number, then a single input neuron is enough; four pieces of information need four input neurons.
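The flattening step described above can be sketched in a few lines: a 28x28 image becomes 784 values, one per input neuron. The all-zero image here is a stand-in for real grayscale data.

```python
# A 28x28 image flattened so each pixel feeds one input neuron.
# The zeros are placeholders; a real image would hold grayscale values.
image = [[0] * 28 for _ in range(28)]            # one 28x28 "image"
flattened = [pixel for row in image for pixel in row]
print(len(flattened))                            # 28 * 28 = 784 input neurons
```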
This can get a little more complicated, but the basis to understand is: whatever the pieces of input are, one input neuron is needed for each piece of information, unless that information is reshaped into a different form.

Skipping ahead to the output layer: it has as many neurons as the output pieces we want, where each neuron is again just a node in the layer. For a classification over two classes of images there are a few ways to design it. One option is a single output neuron whose value lies between zero and one, inclusive: a value closer to zero means class 0, a value closer to one means class 1. The labels in the training data set would then be the values 0 and 1, since each example belongs to one correct class, and the output neuron's value is guaranteed to stay between zero and one by something covered a little later. That is one approach: look at the single value and determine the predicted class from it. It does not always work well, though. For classification it often makes more sense to have as many output neurons as there are classes to predict. With five classes, for example, there would be five output neurons, each holding a value between zero and one, with all the values summing to one. Values between zero and one that sum to one look like a probability distribution, and that is exactly what they are: the network predicts how strongly it believes the input belongs to each class, for example 0.9 (90%) for one class, then 0.001, 0.05, and 0.003 for others, adding up to one. For a regression task, a single output neuron simply predicts a value, whatever we define that value to mean. The example continues with one output neuron, and with something in between the input and output layers.
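The "values between zero and one that sum to one" behavior described above is what a softmax function produces; the section alludes to it without naming it, so the name is an assumption here. A minimal sketch:

```python
import math

# Softmax: maps raw scores to values in (0, 1) that sum to 1, turning an
# output layer into a probability distribution over classes.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, -1.0, 0.5, 0.0, -3.0])   # five output neurons
print(probs)        # largest score gets the largest probability
print(sum(probs))   # sums to 1
```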
The connections between layers carry weights, and these weights are the trainable parameters that the neural network actually tweaks and changes as it trains to get the best possible result. The hidden layer is connected to the output layer in the same way: another densely connected layer, because every neuron from the previous layer is connected to every neuron from the next layer.
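What each neuron in a densely connected layer computes can be sketched as a weighted sum of the previous layer's outputs plus a bias. The weight and bias values below are made up purely for illustration.

```python
# One neuron in a densely connected layer: weighted sum plus a bias.
# Weights and bias here are made-up illustration values, not trained ones.
def neuron_output(inputs, weights, bias):
    return sum(x * w for x, w in zip(inputs, weights)) + bias

inputs = [0.5, 0.2, 0.9]      # outputs from the previous layer
weights = [0.4, -0.6, 0.1]    # one weight per incoming connection
bias = 0.05                   # trainable offset
print(neuron_output(inputs, weights, bias))
```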
Each neuron's value comes from a weighted sum of the previous layer's outputs plus a bias, and looking at the final neuron's value determines the output of the neural network. That is essentially how the weighted sums, the weights, and the biases work. One key feature is still missing from this picture, though, and it also matters for the training process: the activation function. Some examples of activation functions follow.
The first is the rectified linear unit (ReLU): it takes any value less than zero and makes it zero, while positive values pass through unchanged (if it is 10, it stays 10), effectively eliminating negative numbers. The second is tanh: the more positive a value is, the closer its output is to one, and the more negative, the closer to negative one, which can be useful for a neural network. The last is sigmoid, which squishes values between zero and one; a lot of people call it the "squishifier" function, because all it does is push extremely negative numbers close to zero and extremely positive numbers close to one.
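The three activation functions described above, written out as plain Python one-liners:

```python
import math

# The three activation functions from the lesson.
def relu(x):     # negatives become 0, positives pass through unchanged
    return max(0.0, x)

def tanh(x):     # squishes values into (-1, 1)
    return math.tanh(x)

def sigmoid(x):  # the "squishifier": squishes values into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

for x in (-10.0, -1.0, 0.0, 1.0, 10.0):
    print(x, relu(x), round(tanh(x), 4), round(sigmoid(x), 4))
```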
Section summary: hidden Markov models have limited but notable utility; their predictions become less reliable the further into the future they reach, since everything depends on fixed probabilities. The neural network material that follows covers the math behind neural networks, gradient descent, backpropagation, and a worked example of classifying articles of clothing.

That is an example of what a neural network's function might look like. With higher-dimensional math there are many more dimensions, much more space to explore when creating different parameters, biases, and activation functions. Applying activation functions spreads the network into higher dimensions, which makes things more complicated but lets it extract more information from the data points. Now, essentially, what
the neural network is trying to do is optimize this loss function. A gradient is the direction to move in to minimize the loss function; this is where the advanced math happens, and why that aspect is only skimmed here. An algorithm called backpropagation then steps backwards through the network and updates the weights and biases according to the gradient that was calculated. That is pretty much how training works: the network starts off really horrible, having no idea what is going on, and, unless it is overfitting, keeps improving as more data is fed into it.
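Gradient descent in miniature: a single trainable parameter and a simple loss whose gradient is known in closed form. This is only a sketch of the idea, not the course's TensorFlow training loop.

```python
# Tiny sketch of gradient descent: follow the negative gradient of a loss
# function downhill. The "network" is a single weight w and the loss is
# (w - 3)^2, whose gradient is 2 * (w - 3); the minimum sits at w = 3.
def gradient_descent(w, learning_rate=0.1, steps=100):
    for _ in range(steps):
        gradient = 2 * (w - 3)         # direction of steepest increase
        w -= learning_rate * gradient  # step the opposite way
    return w

print(gradient_descent(w=0.0))         # converges close to 3
```

Backpropagation is what computes this gradient for every weight and bias in a real network, layer by layer from the output backwards.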
The dataset for the first neural network is the Fashion MNIST dataset. It contains 60,000 images for training and 10,000 images for validating and testing, 70,000 images in total, and it is essentially pixel data of clothing articles. The dataset is built into Keras as a beginner dataset, so it can be loaded directly from there.
Each image is 28x28 pixels, and the images span 10 different clothing classes. The grayscale pixel values range between 0 and 255, with 0 representing black and 255 representing white. The training labels are an array of values from zero to nine, representing the 10 classes of clothing such as t-shirt, trouser, and pullover.

A lot of times data arrives in random forms, or with missing information, so typically it needs to be preprocessed. Here that means squishing all the pixel values between zero and one.
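The "squish between zero and one" preprocessing step amounts to dividing each grayscale value by 255. A sketch with a made-up 2x3 image:

```python
# Preprocessing sketch: scale grayscale pixel values (0-255) into [0, 1]
# by dividing by 255, as is done for the Fashion MNIST images.
def normalize(image):
    return [[pixel / 255.0 for pixel in row] for row in image]

image = [[0, 128, 255],
         [64, 32, 200]]       # made-up grayscale values
print(normalize(image))
```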
In general it is a good idea to get all of a neural network's input values into a small range, roughly between negative one and one: the goal is to make the numbers fed to the network as small as possible. The optimizer, the loss, the metrics, and the activation functions can all be changed, and the same goes for the number of neurons in each layer; these are called hyperparameters. Hyperparameter tuning is the process of changing all of these values and looking at how models perform as the hyperparameters change. It is not covered in depth here, but the term comes up often.
on a much faster computer, you'll probably be faster than this.", 'start': 12801.118, 'duration': 2.882}, {'end': 12805.882, 'text': 'But this is why I like Google Colaboratory.', 'start': 12804.321, 'duration': 1.561}, {'end': 12810.246, 'text': "Because you know, this isn't using any of my computer's resources to train, it's using this.", 'start': 12805.922, 'duration': 4.324}, {'end': 12813.629, 'text': 'And we can see, like the RAM and the disk.', 'start': 12810.947, 'duration': 2.682}, {'end': 12817.072, 'text': 'How do I look at this? In this network?', 'start': 12814.289, 'duration': 2.783}, {'end': 12818.653, 'text': 'Oh, is it gonna?', 'start': 12818.053, 'duration': 0.6}, {'end': 12819.354, 'text': 'let me look at this now?', 'start': 12818.653, 'duration': 0.701}, {'end': 12822.516, 'text': "Okay, I don't know why it's not letting me click this, but usually you can have a look at it.", 'start': 12819.694, 'duration': 2.822}, {'end': 12825.738, 'text': "And now we've trained and we've fit the model.", 'start': 12823.016, 'duration': 2.722}, {'end': 12829.24, 'text': 'So we can see that we have an accuracy of 91%.', 'start': 12825.758, 'duration': 3.482}], 'summary': 'Google Colaboratory trains the model to 91% accuracy without using any local computer resources.', 'duration': 30.104, 'max_score': 12799.136, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk12799136.jpg'}, {'end': 12905.08, 'src': 'embed', 'start': 12879.872, 'weight': 1, 'content': [{'end': 12886.418, 'text': 'Now you will notice when I run this, that the accuracy will likely be lower on this than it was on our model.', 'start': 12879.872, 'duration': 6.546}, {'end': 12890.542, 'text': 'So actually, the accuracy we had from this was about 91.', 'start': 12886.498, 'duration': 4.044}, {'end': 12891.903, 'text': "And now we're only getting 88.5.", 'start': 12890.542, 'duration': 1.361}, {'end': 12895.426, 'text': 'So this is an example of something we 
call overfitting.', 'start': 12891.903, 'duration': 3.523}, {'end': 12901.776, 'text': 'our model seemed like it was doing really well on the testing data or sorry, the training data.', 'start': 12896.227, 'duration': 5.549}, {'end': 12905.08, 'text': "But that's because it was seeing that data so often right.", 'start': 12902.176, 'duration': 2.904}], 'summary': 'Model accuracy dropped from 91 to 88.5, indicating overfitting.', 'duration': 25.208, 'max_score': 12879.872, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk12879872.jpg'}, {'end': 13041.669, 'src': 'embed', 'start': 13015.118, 'weight': 3, 'content': [{'end': 13019.38, 'text': "You think we're going to be better? Do you think we're going to be worse? It's only seen the training data one time.", 'start': 13015.118, 'duration': 4.262}, {'end': 13020.82, 'text': "Let's run this.", 'start': 13020.14, 'duration': 0.68}, {'end': 13024.421, 'text': "And let's see 89.34.", 'start': 13021.6, 'duration': 2.821}, {'end': 13027.482, 'text': 'So in this situation, less epochs was actually better.', 'start': 13024.421, 'duration': 3.061}, {'end': 13029.203, 'text': "So that's something to consider.", 'start': 13028.082, 'duration': 1.121}, {'end': 13033.664, 'text': 'You know, a lot of people I see just go like 100 epochs and just think their model is going to be great.', 'start': 13029.763, 'duration': 3.901}, {'end': 13035.725, 'text': "That's actually not good to do.", 'start': 13034.004, 'duration': 1.721}, {'end': 13041.669, 'text': "a lot of the times you're gonna have a worse model, because what's going to end up happening is it's going to be seeing the same information,", 'start': 13036.445, 'duration': 5.224}], 'summary': 'Less epochs led to better performance, achieving 89.34 accuracy.', 'duration': 26.551, 'max_score': 13015.118, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk13015118.jpg'}], 
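The pattern described in these segments — training accuracy climbing while test accuracy stalls or drops, and fewer epochs sometimes beating more — can be checked mechanically from a training history. A minimal plain-Python sketch; the accuracy numbers are made up to echo the 91% / 88.5% discussion, and `best_epoch` is a hypothetical helper, not something from the course notebooks:

```python
def best_epoch(val_accuracies):
    """Return the 1-based epoch whose validation accuracy is highest.

    Training past this point usually means the model is starting to
    memorize the training set (overfitting) rather than generalizing.
    """
    best = max(range(len(val_accuracies)), key=lambda i: val_accuracies[i])
    return best + 1


# Illustrative history: training accuracy keeps rising with each epoch,
# but validation accuracy peaks early -- the widening gap afterwards is
# the overfitting signal described in the video.
train_acc = [0.85, 0.88, 0.90, 0.91, 0.92]
val_acc = [0.86, 0.89, 0.8934, 0.888, 0.885]

print(best_epoch(val_acc))                  # stop here, not at epoch 5
print(train_acc[-1] - val_acc[-1] > 0.02)   # True: gap is a red flag
```

Keras exposes the same idea directly via `tf.keras.callbacks.EarlyStopping`, which monitors validation accuracy during `fit` and stops once it stops improving.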
'start': 12260.499, 'title': 'Neural network training and optimization', 'summary': 'Covers the importance of data preprocessing, hyperparameter tuning, overfitting, and making predictions in neural networks, emphasizing the significance of scaling input values, adjusting hyperparameters, and the impact of these processes on model performance, such as an accuracy drop from 91% on training data to 88.5% on test data. It also describes a network comprising 784, 128, and 10 neurons processing 60,000 images, and provides insights into the upcoming module on convolutional neural networks for deep computer vision.', 'chapters': [{'end': 12673.903, 'start': 12260.499, 'title': 'Neural network data preprocessing', 'summary': 'Covers the importance of data preprocessing in neural networks, emphasizing the need to scale input values between 0 and 1 for optimal performance, and the process of creating and compiling a neural network model using keras and tensorflow.', 'duration': 413.404, 'highlights': ['Data preprocessing is crucial for neural networks, aiming to scale input values between 0 and 1 to optimize performance.', 'Creating a neural network model involves defining layers, such as input, hidden, and output layers, and selecting activation functions like rectified linear unit and softmax.', "Compiling the neural network model involves selecting optimizer, loss function, and metrics, such as the 'Adam' optimizer, 'sparse categorical cross entropy' loss, and 'accuracy' metric."]}, {'end': 12819.354, 'start': 12673.903, 'title': 'Neural network training process', 'summary': 'Discusses hyperparameter tuning, model compilation, and training with an emphasis on the significance of adjusting hyperparameters and the time required for training, involving a network comprising 784, 128, and 10 neurons and processing 60,000 images.', 'duration': 145.451, 'highlights': ['The training process involves adjusting hyperparameters such as optimizer, loss, metrics, activation function, and number of neurons 
in each layer.', 'Training the model involves fitting it to the training data with 10 epochs, which may take a few minutes due to the large number of weights and biases in the neural network.', "The significance of using Google Collaboratory for training is highlighted as it doesn't utilize the user's computer resources and can efficiently handle the computational demands of training."]}, {'end': 13055.321, 'start': 12819.694, 'title': 'Model overfitting and hyperparameter tuning', 'summary': "Discusses the concept of overfitting in machine learning, where a model performs well on training data but poorly on testing data, with an example showing the accuracy dropping from 91% to 88.5%. it also explores the impact of hyperparameter tuning on model performance, demonstrating that changing the number of epochs can significantly affect the model's accuracy.", 'duration': 235.627, 'highlights': ["The model's accuracy dropped from 91% on training data to 88.5% on testing data, indicating overfitting.", 'Changing the number of epochs from 10 to 8 resulted in a different accuracy, demonstrating the impact of hyperparameter tuning.', 'Training the model with only one epoch resulted in an accuracy of 89.34%, suggesting that fewer epochs can sometimes lead to better performance.', "Overfitting occurs when a model memorizes the training data and performs poorly on new data, highlighting the importance of generalizing the model's accuracy to new datasets."]}, {'end': 13490.616, 'start': 13055.421, 'title': 'Making predictions with neural networks', 'summary': 'Discusses making predictions using a neural network model, demonstrating the process of predicting on test images and verifying the predictions, as well as providing insights into the upcoming module on convolutional neural networks for deep computer vision.', 'duration': 435.195, 'highlights': ['The chapter discusses making predictions using a neural network model', 'Demonstrating the process of predicting on test 
images', 'Verifying the predictions and showcasing a script for making predictions', 'Insights into the upcoming module on convolutional neural networks for deep computer vision']}], 'duration': 1230.117, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk12260499.jpg', 'highlights': ['Data preprocessing is crucial for neural networks, aiming to scale input values between 0 and 1', "The model's accuracy dropped from 91% on training data to 88.5% on testing data, indicating overfitting", 'The training process involves adjusting hyper parameters such as optimizer, loss, metrics, activation function', 'Training the model with only one epoch resulted in an accuracy of 89.34%, suggesting that fewer epochs can sometimes lead to better performance', "The significance of using Google Collaboratory for training is highlighted as it doesn't utilize the user's computer resources and can efficiently handle the computational demands of training"]}, {'end': 14927.179, 'segs': [{'end': 13552.049, 'src': 'embed', 'start': 13523.996, 'weight': 0, 'content': [{'end': 13526.317, 'text': 'Well, with an image, we actually have three dimensions.', 'start': 13523.996, 'duration': 2.321}, {'end': 13530.878, 'text': 'And what makes up those dimensions? 
Well, we have a height, and we have a width.', 'start': 13526.477, 'duration': 4.401}, {'end': 13533.039, 'text': 'And then we have something called color channels.', 'start': 13531.238, 'duration': 1.801}, {'end': 13535.08, 'text': "Now it's very important to understand this,", 'start': 13533.579, 'duration': 1.501}, {'end': 13542.824, 'text': "because we're going to see this a lot as we get into convolutional networks that the same image is really represented by three specific layers.", 'start': 13535.08, 'duration': 7.744}, {'end': 13549.688, 'text': 'right, we have the first layer, which tells us all of the red values of the pixels, the second layer, which tells us all the green values,', 'start': 13542.824, 'duration': 6.864}, {'end': 13552.049, 'text': 'and the third layer, which tells us all the blue values.', 'start': 13549.688, 'duration': 2.361}], 'summary': 'Image has 3 dimensions: height, width, and color channels (rgb).', 'duration': 28.053, 'max_score': 13523.996, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk13523996.jpg'}, {'end': 13625.514, 'src': 'embed', 'start': 13598.246, 'weight': 4, 'content': [{'end': 13603.071, 'text': "Okay, so now we're gonna talk about a convolutional neural network and the difference between that in a dense neural network.", 'start': 13598.246, 'duration': 4.825}, {'end': 13604.733, 'text': 'So in our previous examples.', 'start': 13603.452, 'duration': 1.281}, {'end': 13610.92, 'text': 'when we use the dense neural network to do some kind of image classification like that fashion and this data set,', 'start': 13604.733, 'duration': 6.187}, {'end': 13618.688, 'text': 'what it essentially did was look at the entire image at once and determined, based on finding features in specific areas of the image,', 'start': 13610.92, 'duration': 7.768}, {'end': 13619.829, 'text': 'what that image was right.', 'start': 13618.688, 'duration': 1.141}, {'end': 13625.514, 'text': 
'maybe it found an edge here, a line here, maybe it found a shape, maybe it found a horizontal diagonal line.', 'start': 13620.47, 'duration': 5.044}], 'summary': 'Comparison of convolutional and dense neural networks for image classification.', 'duration': 27.268, 'max_score': 13598.246, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk13598246.jpg'}, {'end': 13784.226, 'src': 'embed', 'start': 13753.535, 'weight': 1, 'content': [{'end': 13754.656, 'text': 'I hope that makes sense.', 'start': 13753.535, 'duration': 1.121}, {'end': 13758.657, 'text': 'The main thing to remember is that dense neural networks work on a global scale,', 'start': 13755.056, 'duration': 3.601}, {'end': 13763.378, 'text': 'meaning they learn global patterns which are specific and are found in specific areas.', 'start': 13758.657, 'duration': 4.721}, {'end': 13769.78, 'text': 'Whereas convolutional neural networks or convolutional layers will find patterns that exist anywhere in the image,', 'start': 13763.899, 'duration': 5.881}, {'end': 13774.002, 'text': 'because they know what the pattern looks like, not that it just exists in a specific area.', 'start': 13769.78, 'duration': 4.222}, {'end': 13776.604, 'text': 'Alright, so how they work, right.', 'start': 13774.884, 'duration': 1.72}, {'end': 13781.005, 'text': "So let's see what a neural network regular neural network looks at this dog image.", 'start': 13776.644, 'duration': 4.361}, {'end': 13781.825, 'text': 'this is a good example.', 'start': 13781.005, 'duration': 0.82}, {'end': 13784.226, 'text': 'I should have been using this before.', 'start': 13781.825, 'duration': 2.401}], 'summary': 'Dense neural networks learn specific global patterns, while convolutional networks find patterns anywhere in the image.', 'duration': 30.691, 'max_score': 13753.535, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk13753535.jpg'}, 
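The claim in these segments — a convolutional filter responds to a pattern wherever it appears, while a dense layer ties patterns to fixed positions — can be seen in a few lines of NumPy. This is a sketch of the sliding dot-product the video describes, not TensorFlow's actual implementation:

```python
import numpy as np


def correlate2d(image, kernel):
    """Slide `kernel` across `image` (no padding, stride 1), recording the
    dot product at each position -- the 'output feature map'."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out


# A 3x3 vertical-edge filter: responds where bright pixels sit left
# of dark pixels.
edge = np.array([[1., 0., -1.],
                 [1., 0., -1.],
                 [1., 0., -1.]])

# Two 6x6 images containing the same bright-to-dark edge, but at
# different columns.
img_a = np.zeros((6, 6)); img_a[:, :2] = 1.0   # edge near the left
img_b = np.zeros((6, 6)); img_b[:, :4] = 1.0   # edge shifted right

# The filter fires with the same peak response in both feature maps:
# same pattern, different location, still detected.
print(correlate2d(img_a, edge).max(), correlate2d(img_b, edge).max())  # 3.0 3.0
```

A dense layer given these two flattened images would see almost entirely different inputs; the sliding filter sees the same local pattern in both.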
{'end': 13869.167, 'src': 'embed', 'start': 13842.483, 'weight': 3, 'content': [{'end': 13847.166, 'text': "So that's kind of the point of the convolutional neural network and the convolutional layer.", 'start': 13842.483, 'duration': 4.683}, {'end': 13850.769, 'text': 'And what the convolutional layer does is look at our image and, essentially,', 'start': 13847.547, 'duration': 3.222}, {'end': 13858.654, 'text': "feedback to us what we call an output feature map that tells us about the presence of specific features, or what we're going to call filters,", 'start': 13850.769, 'duration': 7.885}, {'end': 13859.115, 'text': 'in our image.', 'start': 13858.654, 'duration': 0.461}, {'end': 13862.098, 'text': 'So that is kind of the way that works.', 'start': 13859.995, 'duration': 2.103}, {'end': 13869.167, 'text': 'Now, essentially, the thing we have to remember is that our dense neural networks output just a bunch of numeric values,', 'start': 13862.919, 'duration': 6.248}], 'summary': 'Convolutional neural network extracts features from images.', 'duration': 26.684, 'max_score': 13842.483, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk13842483.jpg'}, {'end': 13975.396, 'src': 'embed', 'start': 13952.497, 'weight': 2, 'content': [{'end': 13960.638, 'text': 'But hopefully this makes a little bit of sense that the convolutional layer returns a feature map that quantifies the presence of a filter at a specific location.', 'start': 13952.497, 'duration': 8.141}, {'end': 13965.103, 'text': 'And this filter, the advantage of it is that we slide it across the entire image.', 'start': 13961.198, 'duration': 3.905}, {'end': 13971.612, 'text': 'So if this filter or this feature is presence anywhere in the image, we will know about it, rather than in our dense network,', 'start': 13965.484, 'duration': 6.128}, {'end': 13975.396, 'text': 'where it had to learn that pattern in a specific global location.', 'start': 
13971.612, 'duration': 3.784}], 'summary': 'Convolutional layer quantifies filter presence in images, providing advantage of detecting features anywhere in the image.', 'duration': 22.899, 'max_score': 13952.497, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk13952497.jpg'}], 'start': 13490.876, 'title': 'Convolutional neural networks', 'summary': 'Delves into understanding image data and convolutional neural networks, including explanations of the three dimensions of image data, the concept of convolutional neural networks compared to dense neural networks, how convolutional layers output feature maps, properties and functionality of convolutional layers, basics and operations of cnn, and the use of keras to create a cnn, with emphasis on understanding the code rather than memorizing it. it also covers the use of the cifar image dataset, containing 60,000 images of 10 classes, as an example.', 'chapters': [{'end': 13610.92, 'start': 13490.876, 'title': 'Understanding image data and convolutional neural networks', 'summary': 'Explains the three dimensions of image data, consisting of height, width, and color channels, and introduces the concept of convolutional neural networks as compared to dense neural networks for image classification.', 'duration': 120.044, 'highlights': ['The three dimensions of image data are height, width, and color channels, with each channel representing the red, green, and blue values of the pixels, essential for understanding convolutional neural networks.', 'Convolutional neural networks are introduced as a method for image classification, contrasting with dense neural networks used in previous examples.']}, {'end': 14006.294, 'start': 13610.92, 'title': 'Understanding convolutional neural networks', 'summary': 'Explains how dense neural networks learn global patterns in specific areas of an image, while convolutional neural networks learn local patterns that can exist anywhere 
in the image, and how convolutional layers output feature maps to quantify the presence of filters at different locations.', 'duration': 395.374, 'highlights': ['Convolutional neural networks learn local patterns that can exist anywhere in the image, unlike dense neural networks which learn global patterns in specific areas.', 'Convolutional layers output feature maps that quantify the presence of filters at different locations, allowing them to slide across the entire image to detect the presence of features.', 'Dense neural networks learn global patterns in specific areas of an image, making them unable to learn local patterns and apply them to different areas of the image.']}, {'end': 14364.848, 'start': 14007.154, 'title': 'Convolutional neural networks', 'summary': 'Explains the properties and functionality of convolutional layers in neural networks, including the concept of filters, the number of filters typically used, and the process of generating feature maps using dot product comparison for each filter.', 'duration': 357.694, 'highlights': ['Convolutional layer properties', 'Number of filters in convolutional layers', 'Functionality of filters in convolutional layers', 'Generation of feature maps using dot product comparison']}, {'end': 14927.179, 'start': 14364.868, 'title': 'Convolutional neural networks: basics and operations', 'summary': 'Explains the basics of convolutional neural networks, including the process of applying filters, generating output feature maps, and the importance of operations like padding, stride, and pooling, which helps in reducing dimensionality and extracting features. it also covers the use of keras to create a cnn and emphasizes the significance of understanding the code rather than memorizing it. 
the cifar image dataset, containing 60,000 images of 10 classes, is used as an example.', 'duration': 562.311, 'highlights': ['The process of applying filters and generating output feature maps in a convolutional neural network is explained, with emphasis on the need for multiple layers and the computational complexity involved, such as expanding the depth of the output feature map and the associated computations.', 'The concept and impact of padding in maintaining the dimensions of the output feature map are discussed, highlighting the addition of extra rows and columns and the rationale behind it, particularly for identifying features at the edges of images.', 'The explanation of pooling operations, including min, max, and average pooling, is provided, emphasizing their role in reducing dimensionality and simplifying the output feature map by sampling specific values and creating a new feature map with reduced size.', "The use of Keras to create a convolutional neural network for the CIFAR image dataset is introduced, highlighting the dataset's characteristics, such as containing 60,000 images of 32x32 resolution and 10 different object classes."]}], 'duration': 1436.303, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk13490876.jpg', 'highlights': ['The three dimensions of image data are height, width, and color channels, essential for understanding convolutional neural networks.', 'Convolutional neural networks learn local patterns that can exist anywhere in the image, unlike dense neural networks.', 'Convolutional layers output feature maps that quantify the presence of filters at different locations.', 'The process of applying filters and generating output feature maps in a convolutional neural network is explained.', 'The use of Keras to create a convolutional neural network for the CIFAR image dataset is introduced.']}, {'end': 16105.539, 'segs': [{'end': 14957.854, 'src': 'embed', 'start': 14927.239, 
'weight': 0, 'content': [{'end': 14932.524, 'text': "And I don't I look up the documentation, I copy and paste what I need, I alter them, I write a little bit of my own code.", 'start': 14927.239, 'duration': 5.285}, {'end': 14934.426, 'text': "But that's kind of what you're gonna end up doing.", 'start': 14932.904, 'duration': 1.522}, {'end': 14935.246, 'text': "So that's what I'm doing here.", 'start': 14934.446, 'duration': 0.8}, {'end': 14937.709, 'text': 'So this is the image data set.', 'start': 14936.087, 'duration': 1.622}, {'end': 14941.895, 'text': 'We have truck, horse, ship, airplane, you know, just some everyday regular objects.', 'start': 14938.33, 'duration': 3.565}, {'end': 14945.7, 'text': 'There is 60, 000 images, as we said, and 6, 000 images of each class.', 'start': 14942.035, 'duration': 3.665}, {'end': 14949.145, 'text': "So we don't have too many images of just one specific class.", 'start': 14946.501, 'duration': 2.644}, {'end': 14951.828, 'text': "So we'll start by importing our modules.", 'start': 14949.926, 'duration': 1.902}, {'end': 14957.854, 'text': "So TensorFlow, we're going to import TensorFlow dot Keras, we're going to use the data set built into Keras for this.", 'start': 14952.569, 'duration': 5.285}], 'summary': 'The dataset contains 60,000 images, 6,000 for each of its 10 classes of everyday objects such as truck, horse, ship, and airplane. 
tensorflow and tensorflow.keras will be used for this project.', 'duration': 30.615, 'max_score': 14927.239, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk14927239.jpg'}, {'end': 15008.431, 'src': 'embed', 'start': 14982.916, 'weight': 6, 'content': [{'end': 14990.103, 'text': "So this is different from what we've used before, where some of our objects have actually been like in NumPy arrays, where we can look at them better.", 'start': 14982.916, 'duration': 7.187}, {'end': 14991.244, 'text': 'this is not going to be in that.', 'start': 14990.103, 'duration': 1.141}, {'end': 14992.646, 'text': 'So just something to keep in mind here.', 'start': 14991.264, 'duration': 1.382}, {'end': 15000.008, 'text': "we're going to normalize this data into train images and test images by just dividing both of them by 255.", 'start': 14993.206, 'duration': 6.802}, {'end': 15003.609, 'text': "Now again, we're doing that because we want to make sure that our values are between zero and one,", 'start': 15000.008, 'duration': 3.601}, {'end': 15008.431, 'text': "because that's just a lot better to work with in our neural networks rather than large integer values.", 'start': 15003.609, 'duration': 4.822}], 'summary': 'Data will be normalized into train and test images by dividing both by 255 to ensure values are between 0 and 1, improving neural network performance.', 'duration': 25.515, 'max_score': 14982.916, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk14982916.jpg'}, {'end': 15077.734, 'src': 'embed', 'start': 15051.509, 'weight': 5, 'content': [{'end': 15055.312, 'text': "So, essentially, we've already talked about how a convolutional neural network works.", 'start': 15051.509, 'duration': 3.803}, {'end': 15058.035, 'text': "we haven't talked about the architecture and how we actually make one.", 'start': 15055.312, 'duration': 2.723}, {'end': 15065.241, 'text': 
'Essentially, what we do is we stack a bunch of convolutional layers and max pooling min pooling our average pooling layers together.', 'start': 15058.635, 'duration': 6.606}, {'end': 15067.163, 'text': 'in something like this, right.', 'start': 15065.982, 'duration': 1.181}, {'end': 15073.61, 'text': 'So, after each convolutional layer we have a max pooling layer, some kind of pooling layer, typically to reduce the dimensionality.', 'start': 15067.283, 'duration': 6.327}, {'end': 15077.734, 'text': "although you don't need that, you could just go straight into three convolutional layers.", 'start': 15073.61, 'duration': 4.124}], 'summary': 'Convolutional neural network architecture involves stacking convolutional and pooling layers.', 'duration': 26.225, 'max_score': 15051.509, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk15051509.jpg'}, {'end': 15386.368, 'src': 'embed', 'start': 15348.956, 'weight': 3, 'content': [{'end': 15356.038, 'text': "And you can see I've trained this previously, if you train it on 10 epochs, but I'm just going to train up to four, we get our 67 68%.", 'start': 15348.956, 'duration': 7.082}, {'end': 15356.66, 'text': 'And that should be fine.', 'start': 15356.039, 'duration': 0.621}, {'end': 15359.404, 'text': "So we'll be back once this is trained, then we'll talk about how some of this works.", 'start': 15356.68, 'duration': 2.724}, {'end': 15367.456, 'text': 'Okay, so the model is finally finished training, we did about four epochs, you can see we got an accuracy about 67% on the evaluation data.', 'start': 15359.965, 'duration': 7.491}, {'end': 15369.058, 'text': 'To quickly go over this stuff.', 'start': 15367.957, 'duration': 1.101}, {'end': 15371.842, 'text': "optimizers, Adam, we've talked about that before.", 'start': 15369.859, 'duration': 1.983}, {'end': 15375.644, 'text': 'loss function is sparse categorical cross entropy.', 'start': 15372.843, 'duration': 2.801}, 
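The "conv layer, then pooling layer" stacking described here works because pooling shrinks each feature map before the next layer sees it. A small NumPy sketch of 2x2 max pooling with stride 2 (the common setup the video mentions); this is illustrative, not Keras's implementation:

```python
import numpy as np


def max_pool_2x2(fmap):
    """2x2 max pooling, stride 2: keep the largest value in each 2x2
    block, halving the height and width of the feature map."""
    h, w = fmap.shape
    assert h % 2 == 0 and w % 2 == 0, "sketch assumes even dimensions"
    # Group into (row-pair, 2, col-pair, 2) blocks, then take each
    # block's maximum.
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))


fmap = np.array([[1., 3., 2., 0.],
                 [4., 2., 1., 1.],
                 [0., 0., 5., 6.],
                 [1., 2., 7., 8.]])

pooled = max_pool_2x2(fmap)
print(pooled.shape)  # (2, 2) -- a 4x4 map reduced to a quarter the size
print(pooled)        # [[4. 2.] [2. 8.]]
```

Min and average pooling, also mentioned in this section, are the same sampling scheme with `min` or `mean` in place of `max`.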
{'end': 15380.386, 'text': 'That one I mean, you can read this if you want computes the cross entropy loss between the labels and predictions.', 'start': 15376.404, 'duration': 3.982}, {'end': 15382.547, 'text': "And I'm not going to go into that.", 'start': 15381.266, 'duration': 1.281}, {'end': 15386.368, 'text': 'But these kind of things are things that you can look up if you really understand why they work.', 'start': 15382.587, 'duration': 3.781}], 'summary': 'After training for four epochs, the model achieved an accuracy of 67% on the evaluation data, using adam optimizer and sparse categorical cross entropy loss function.', 'duration': 37.412, 'max_score': 15348.956, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk15348956.jpg'}, {'end': 15569.006, 'src': 'embed', 'start': 15521.489, 'weight': 1, 'content': [{'end': 15523.41, 'text': 'And this was just to get you familiar with the idea.', 'start': 15521.489, 'duration': 1.921}, {'end': 15525.151, 'text': 'So data augmentation.', 'start': 15524.071, 'duration': 1.08}, {'end': 15527.913, 'text': 'So this is basically the idea.', 'start': 15525.491, 'duration': 2.422}, {'end': 15536.838, 'text': 'if you have one image, we can turn that image into several different images and train and pass all those images to our, our model.', 'start': 15527.913, 'duration': 8.925}, {'end': 15545.464, 'text': 'So, essentially, if we can rotate the image, if we can flip it, if we can stretch it, compress it, you know, shift it, zoom it, whatever it is,', 'start': 15537.538, 'duration': 7.926}, {'end': 15546.785, 'text': 'and pass that to our model.', 'start': 15545.464, 'duration': 1.321}, {'end': 15553.651, 'text': "it should be better at generalizing, because we'll see the same image but modified and augmented multiple times,", 'start': 15546.785, 'duration': 6.866}, {'end': 15562.058, 'text': 'which means that we can turn a data set, say, of 10, 000 images into 40, 000 
images by doing four augmentations on every single image.', 'start': 15553.651, 'duration': 8.407}, {'end': 15564.901, 'text': 'Now, obviously, you still want a lot of unique images.', 'start': 15562.539, 'duration': 2.362}, {'end': 15569.006, 'text': 'But this technique can help a lot and is used quite a bit,', 'start': 15565.142, 'duration': 3.864}], 'summary': 'Data augmentation creates multiple images for training, improving model generalization and potentially increasing a dataset from 10,000 to 40,000 images.', 'duration': 47.517, 'max_score': 15521.489, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk15521489.jpg'}, {'end': 15806.67, 'src': 'embed', 'start': 15772.812, 'weight': 2, 'content': [{'end': 15776.734, 'text': 'So as kind of the base of our models that we have a really good starting point.', 'start': 15772.812, 'duration': 3.922}, {'end': 15784.538, 'text': "And all we need to do is what's called fine tune the last few layers of that network, so that they work a little bit better for our purposes.", 'start': 15776.794, 'duration': 7.744}, {'end': 15786.139, 'text': "So what we're going to do?", 'start': 15785.398, 'duration': 0.741}, {'end': 15790.1, 'text': "essentially say Alright, we have this model that Google's trained.", 'start': 15786.139, 'duration': 3.961}, {'end': 15792.001, 'text': "they've trained it on 1.4 million images.", 'start': 15790.1, 'duration': 1.901}, {'end': 15796.524, 'text': "it's capable of classifying, let's say, 1000 different classes, which is actually the example we'll look at later.", 'start': 15792.001, 'duration': 4.523}, {'end': 15806.67, 'text': "So obviously the beginning of that model is what's picking up on the smaller edges and you know kind of the very general things that appear in all of our images.", 'start': 15797.364, 'duration': 9.306}], 'summary': 'Fine-tuning model trained on 1.4m images to classify 1000 classes.', 'duration': 33.858, 
'max_score': 15772.812, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk15772812.jpg'}], 'start': 14927.239, 'title': 'Cnn architecture, training, and optimization', 'summary': 'Covers loading and normalizing a cifar-10 image dataset with 60,000 images, explaining the architecture of a convolutional neural network (cnn). it also discusses training a cnn achieving an accuracy of about 67%, optimizing models with adam optimizer and sparse categorical cross entropy loss function, and utilizing data augmentation and pre-trained models to improve accuracy above 90%.', 'chapters': [{'end': 15103.773, 'start': 14927.239, 'title': 'Cnn architecture and image dataset loading', 'summary': 'Discusses loading a cifar-10 image dataset containing 60,000 images of everyday objects, with 6,000 images for each of the 10 classes, and explains the process of normalizing the data and the architecture of a convolutional neural network (cnn).', 'duration': 176.534, 'highlights': ['Loading CIFAR-10 image dataset with 60,000 images of 10 classes, each containing 6,000 images.', 'Normalization of data by dividing train and test images by 255 to ensure values are between 0 and 1 for better neural network performance.', 'Description of the CNN architecture involving stacking convolutional layers, pooling layers, and activation functions.']}, {'end': 15367.456, 'start': 15103.793, 'title': 'Convolutional neural network summary', 'summary': 'Covers the breakdown of layers in a convolutional neural network, including the input shape, output shape, and the process of adding dense layers, with an accuracy of about 67% obtained in training.', 'duration': 263.663, 'highlights': ['The model achieved an accuracy of about 67% in training.', 'Explanation of the breakdown of layers in a convolutional neural network, including input and output shapes.', 'Description of the process of adding dense layers to the convolutional neural network.']}, 
{'end': 15642.336, 'start': 15367.957, 'title': 'Optimizing and evaluating models', 'summary': 'Covers using adam optimizer and sparse categorical cross entropy loss function to train and evaluate a model achieving an accuracy of 67.35% on a small dataset of about 60,000 images, and discusses the challenge of creating a good convolutional neural network with small data sets and the techniques of data augmentation and using pre-trained models to address this limitation.', 'duration': 274.379, 'highlights': ["Using data augmentation to enhance model generalization by transforming one image into several different images, allowing for the training of augmented images to improve the model's ability to generalize, effectively increasing a dataset from 10,000 to 40,000 images by performing four augmentations on each image.", 'Discussion on the challenge of creating a reliable convolutional neural network with a small dataset of about 60,000 images, and the need to employ techniques such as data augmentation and using pre-trained models to address this limitation.', 'Overview of using Adam optimizer and sparse categorical cross entropy loss function to train and evaluate a model, achieving an accuracy of 67.35% on the small dataset.']}, {'end': 16105.539, 'start': 15642.876, 'title': 'Data augmentation and pre-trained models', 'summary': 'Discusses data augmentation for image data, creating augmented images with random transformations, and utilizing pre-trained models to fine-tune convolutional neural networks for classifying images, aiming for an accuracy above 90%.', 'duration': 462.663, 'highlights': ['Using data augmentation, the script generates augmented images with random transformations, such as shifts and rotations, to increase the dataset size, aiming for a better generalization.', "Utilizing pre-trained models from Google's convolutional neural networks, the script fine-tunes the last few layers to classify images with a starting point of a model trained on 1.4 
million images and capable of classifying 1000 different classes.", 'Images are loaded from TensorFlow datasets, and a function is created to display and reshape the images for consistency and preprocessing.']}], 'duration': 1178.3, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk14927239.jpg', 'highlights': ['Loading CIFAR-10 image dataset with 60,000 images of 10 classes, each containing 6,000 images.', 'Using data augmentation to enhance model generalization by transforming one image into several different images, effectively increasing a dataset from 10,000 to 40,000 images.', "Utilizing pre-trained models from Google's convolutional neural networks, fine-tuning the last few layers to classify images with a starting point of a model trained on 1.4 million images.", 'Overview of using Adam optimizer and sparse categorical cross entropy loss function to train and evaluate a model, achieving an accuracy of 67.35% on the small dataset.', 'The model achieved an accuracy of about 67% in training.', 'Description of the CNN architecture involving stacking convolutional layers, pooling layers, and activation functions.', 'Normalization of data by dividing train and test images by 255 to ensure values are between 0 and 1 for better neural network performance.', 'Using data augmentation, the script generates augmented images with random transformations, such as shifts and rotations, to increase the dataset size.']}, {'end': 17056.147, 'segs': [{'end': 16333.062, 'src': 'embed', 'start': 16302.937, 'weight': 2, 'content': [{'end': 16304.018, 'text': 'why are we going to touch this now?', 'start': 16302.937, 'duration': 1.081}, {'end': 16306.761, 'text': "And if we were going to touch this, what's the point of even using this base?", 'start': 16304.058, 'duration': 2.703}, {'end': 16308.823, 'text': "right?. 
We don't want to train this, we want to leave it the same.", 'start': 16306.761, 'duration': 2.062}, {'end': 16310.724, 'text': "So to do that, we're just going to freeze it.", 'start': 16309.423, 'duration': 1.301}, {'end': 16318.05, 'text': 'Now freezing is a pretty, I mean, it just essentially means turning the trainable attribute of a layer off or of the model off.', 'start': 16310.884, 'duration': 7.166}, {'end': 16320.852, 'text': 'So what we do is we just say base model, dot trainable equals false,', 'start': 16318.47, 'duration': 2.382}, {'end': 16325.036, 'text': 'which essentially means that we are no longer going to be training any aspect of that.', 'start': 16320.852, 'duration': 4.184}, {'end': 16328.519, 'text': "I want to say model, although we'll just call it the base layer for now, or the base model.", 'start': 16325.036, 'duration': 3.483}, {'end': 16333.062, 'text': 'So now, if we look at the summary, we can see when we scroll down to the bottom,', 'start': 16329.139, 'duration': 3.923}], 'summary': 'Freezing the base model to prevent further training.', 'duration': 30.125, 'max_score': 16302.937, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk16302937.jpg'}, {'end': 16384.654, 'src': 'embed', 'start': 16355.658, 'weight': 3, 'content': [{'end': 16362.124, 'text': "So what we're going to do is add a global average layer which, essentially, is going to take the entire average of every single,", 'start': 16355.658, 'duration': 6.466}, {'end': 16364.126, 'text': 'so of 1280 different layers, that are five by five.', 'start': 16362.124, 'duration': 2.002}, {'end': 16370.229, 'text': 'and put that into a one D tensor, which is kind of flattening that for us.', 'start': 16367.208, 'duration': 3.021}, {'end': 16372.57, 'text': 'So we do that global average pooling.', 'start': 16370.829, 'duration': 1.741}, {'end': 16377.512, 'text': "And then we're just going to add the prediction layer, which 
essentially is going to just be one dense node.', 'start': 16373.03, 'duration': 4.482}, {'end': 16384.654, 'text': "And since we're only classifying two different classes, right, dogs and cats, we only need one, then we're going to add all these models together.", 'start': 16377.612, 'duration': 7.042}], 'summary': 'Adding global average and prediction layers to classify two classes.', 'duration': 28.996, 'max_score': 16355.658, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk16355658.jpg'}, {'end': 16427.095, 'src': 'embed', 'start': 16391.797, 'weight': 1, 'content': [{'end': 16392.277, 'text': "So let's do this.", 'start': 16391.797, 'duration': 0.48}, {'end': 16398.953, 'text': 'global average layer, prediction layer model, give that a second to kind of run there.', 'start': 16393.17, 'duration': 5.783}, {'end': 16405.235, 'text': 'Now when we look at the summary, we can see we have MobileNet V2, which is actually a model, but that is our base layer.', 'start': 16399.373, 'duration': 5.862}, {'end': 16413.505, 'text': "And that's fine, because the output shape is that, then global average pooling which again, just flattens that out and does the average for us.", 'start': 16405.255, 'duration': 8.25}, {'end': 16419.028, 'text': 'And then finally, our dense layer, which is going to simply have one neuron, which is going to be our output.', 'start': 16414.065, 'duration': 4.963}, {'end': 16427.095, 'text': 'Now notice that we have 2.259 million parameters in total, and only 1281 of them are trainable.', 'start': 16419.569, 'duration': 7.526}], 'summary': 'Model has 2.259m total parameters, with 1281 trainable.', 'duration': 35.298, 'max_score': 16391.797, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk16391796.jpg'}, {'end': 16606.123, 'src': 'embed', 'start': 16579.966, 'weight': 0, 'content': [{'end': 16587.69, 'text': 'But when you do end up 
training this, you end up getting an accuracy of close to 92 or 93%, which is pretty good,', 'start': 16579.966, 'duration': 7.724}, {'end': 16596.116, 'text': 'considering the fact that all we did was use an original layer, like base layer, that classified up to 1000 different images, so very general,', 'start': 16587.69, 'duration': 8.426}, {'end': 16600.519, 'text': 'and applied that just to cats and dogs by adding our dense layer classifier on top.', 'start': 16596.116, 'duration': 4.403}, {'end': 16606.123, 'text': "So you can see this was kind of the accuracy I had from training this previously, I don't want to train again, because it takes so long.", 'start': 16601.159, 'duration': 4.964}], 'summary': 'Achieved 92-93% accuracy using a base layer for classifying cats and dogs.', 'duration': 26.157, 'max_score': 16579.966, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk16579966.jpg'}, {'end': 16695.762, 'src': 'embed', 'start': 16668.1, 'weight': 4, 'content': [{'end': 16673.244, 'text': "So that's kind of the thing there, I was getting an OS error just because I hadn't saved this previously.", 'start': 16668.1, 'duration': 5.144}, {'end': 16677.088, 'text': "But that's how you save and load models, which I think is important when you're doing very large models.", 'start': 16673.325, 'duration': 3.763}, {'end': 16682.373, 'text': "So, when you fit this, feel free to change the epochs to be something lower, if you'd like, again right?", 'start': 16677.629, 'duration': 4.744}, {'end': 16685.314, 'text': 'this takes a long time to actually end up running.', 'start': 16682.373, 'duration': 2.941}, {'end': 16692.481, 'text': "But you can see that the accuracy increases pretty well exponentially, from when we didn't even have that classifier on it.", 'start': 16686.116, 'duration': 6.365}, {'end': 16695.762, 'text': 'Now, the last thing that I want to talk about is object detection.', 'start': 
16693.121, 'duration': 2.641}], 'summary': 'Demonstrates saving and loading models for large datasets, with exponential accuracy increase, and touches on object detection.', 'duration': 27.662, 'max_score': 16668.1, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk16668099.jpg'}, {'end': 16867.064, 'src': 'embed', 'start': 16839.076, 'weight': 7, 'content': [{'end': 16842.159, 'text': "Now we're on to recurrent neural networks, which is actually gonna be pretty interesting.", 'start': 16839.076, 'duration': 3.083}, {'end': 16843.62, 'text': "So I'll see you in that module.", 'start': 16842.199, 'duration': 1.421}, {'end': 16847.249, 'text': 'Hello everyone.', 'start': 16846.749, 'duration': 0.5}, {'end': 16853.394, 'text': 'And welcome to the next module in this course, which is covering natural language processing with recurrent neural networks.', 'start': 16847.329, 'duration': 6.065}, {'end': 16856.456, 'text': "Now, what we're going to be doing in this module.", 'start': 16854.114, 'duration': 2.342}, {'end': 16862.02, 'text': "here is, first of all, first off discussing what natural language processing is, which I guess I'll start with here.", 'start': 16856.456, 'duration': 5.564}, {'end': 16867.064, 'text': "essentially, for those of you that don't know, natural language processing, or NLP for short,", 'start': 16862.02, 'duration': 5.044}], 'summary': 'Introduction to nlp with recurrent neural networks in this module.', 'duration': 27.988, 'max_score': 16839.076, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk16839076.jpg'}, {'end': 16938.56, 'src': 'embed', 'start': 16912.907, 'weight': 6, 'content': [{'end': 16918.788, 'text': 'that is probably going to be classified under natural language processing in terms of doing some kind of machine learning stuff with it.', 'start': 16912.907, 'duration': 5.881}, {'end': 16924.771, 'text': 'Now, we are 
going to be talking about a different kind of neural network in this series called recurrent neural networks.', 'start': 16919.608, 'duration': 5.163}, {'end': 16929.154, 'text': 'Now, these are very good at classifying and understanding textual data.', 'start': 16925.152, 'duration': 4.002}, {'end': 16930.495, 'text': "And that's why we'll be using them.", 'start': 16929.174, 'duration': 1.321}, {'end': 16932.476, 'text': 'But they are fairly complex.', 'start': 16930.815, 'duration': 1.661}, {'end': 16934.198, 'text': "And there's a lot of stuff that goes into them.", 'start': 16932.496, 'duration': 1.702}, {'end': 16938.56, 'text': 'Now, in the interest of time and just not knowing a lot of your math background,', 'start': 16934.598, 'duration': 3.962}], 'summary': 'Discussion on using recurrent neural networks for natural language processing.', 'duration': 25.653, 'max_score': 16912.907, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk16912907.jpg'}], 'start': 16105.619, 'title': 'Implementing mobilenet v2 and tensorflow object detection', 'summary': 'Covers the implementation of mobilenet v2 for image classification, freezing the base model, adding a classifier, and introducing tensorflow object detection. 
it includes details on achieving 92-93% accuracy after training and highlights the upcoming module on natural language processing with recurrent neural networks.', 'chapters': [{'end': 16302.937, 'start': 16105.619, 'title': 'Mobilenet v2 for image classification', 'summary': 'Discusses the implementation of mobilenet v2, a pre-trained model from google, with an input shape of 160x160x3, excluding the classifier for 1000 classes, and the need to freeze the base to prevent retraining 2.257 million weights and biases.', 'duration': 197.318, 'highlights': ['The base model used for image classification is MobileNet V2, with an input shape of 160x160x3, and the exclusion of the classifier for 1000 classes, aimed at retraining it for specific categories like dogs and cats.', 'Freezing the base is crucial to prevent retraining 2.257 million weights and biases, as they are already well-defined and optimized for the problem of classifying 1000 classes.']}, {'end': 16685.314, 'start': 16302.937, 'title': 'Freezing base model and adding classifier', 'summary': "Discusses freezing a base model to prevent training, adding a global average layer and a prediction layer on top, and evaluating the model's accuracy, achieving 92-93% after training.", 'duration': 382.377, 'highlights': ['The base model is frozen by setting its trainable attribute to false, resulting in trainable parameters reducing to zero from 2.257 million.', 'A global average layer is added to take the average of the 1280 different layers, followed by the addition of a prediction layer with a single dense node for classifying two classes, resulting in a final model with 2.259 million parameters, with only 1281 of them being trainable.', "The model's accuracy is evaluated before training, resulting in an accuracy of 56% with random weights and biases, and after training, the model achieves an accuracy of close to 92-93%.", "The process of saving and loading the model is explained, enabling the user to save the 
model using 'model.save' and load the model using 'model.load', which is essential for avoiding retraining large models."]}, {'end': 17056.147, 'start': 16686.116, 'title': 'Introduction to tensorflow object detection', 'summary': 'Introduces object detection using tensorflow, highlighting the accuracy increase and confidence scores provided by the api, and mentions the upcoming module on natural language processing with recurrent neural networks.', 'duration': 370.031, 'highlights': ['The accuracy of object detection increases exponentially after using TensorFlow classifier, providing confidence scores for the detected objects.', 'The module on natural language processing with recurrent neural networks is introduced, covering the understanding and classification of textual data.', 'Recurrent neural networks are powerful for classifying and understanding textual data, and will be used for tasks such as sentiment analysis and text generation.', 'The chapter briefly mentions the use of facial recognition module in Python, emphasizing its use of convolutional neural networks for facial detection and recognition.']}], 'duration': 950.528, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk16105619.jpg', 'highlights': ['The model achieves an accuracy of close to 92-93% after training', 'The base model used for image classification is MobileNet V2 with an input shape of 160x160x3', 'Freezing the base model prevents retraining 2.257 million weights and biases', 'A global average layer is added to take the average of the 1280 different layers', 'The process of saving and loading the model is explained for avoiding retraining large models', 'The accuracy of object detection increases exponentially after using TensorFlow classifier', 'Recurrent neural networks will be used for tasks such as sentiment analysis and text generation', 'The module on natural language processing with recurrent neural networks is introduced']}, 
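The transfer-learning steps summarized above — a MobileNet V2 base with 160x160x3 input and no 1000-class top, frozen via the trainable attribute, then global average pooling and a single dense node, then saving/loading — can be sketched as follows. One assumption for brevity: `weights=None` skips downloading the pretrained `'imagenet'` weights the course actually loads, so the 1281-trainable-parameter structure is identical but the frozen weights here are random:

```python
import tensorflow as tf

IMG_SHAPE = (160, 160, 3)

# Base model: MobileNet V2 without its 1000-class classifier (include_top=False).
# The course passes weights='imagenet'; weights=None here only avoids the download.
base_model = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SHAPE, include_top=False, weights=None)

# Freeze the base so its ~2.257M weights and biases are not retrained.
base_model.trainable = False

# Average each of the 1280 five-by-five feature maps into a 1D tensor,
# then classify with a single dense node (dogs vs. cats needs only one output).
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
prediction_layer = tf.keras.layers.Dense(1)

model = tf.keras.Sequential([base_model, global_average_layer, prediction_layer])
_ = model(tf.zeros((1, *IMG_SHAPE)))  # build the model once

# Only the dense layer's 1280 weights + 1 bias = 1281 parameters remain trainable.
trainable = sum(int(tf.size(w)) for w in model.trainable_weights)

# Saving and loading, so the large model need not be retrained every session.
model.save('dogs_vs_cats.h5')
reloaded = tf.keras.models.load_model('dogs_vs_cats.h5')
```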
{'end': 17852.822, 'segs': [{'end': 17219.954, 'src': 'embed', 'start': 17192.503, 'weight': 2, 'content': [{'end': 17198.625, 'text': "So you can imagine that in very large data sets, we're going to have, you know, 10s of 1000s, of hundreds of 1000s,", 'start': 17192.503, 'duration': 6.122}, {'end': 17203.547, 'text': "sometimes even maybe millions of different words, and they're all going to be encoded by different integers.", 'start': 17198.625, 'duration': 4.922}, {'end': 17215.072, 'text': "Now, the reason we call this bag of words is because what we're actually going to do when we look at a sentence is we're only going to keep track of the words that are present and the frequency of those words.", 'start': 17204.248, 'duration': 10.824}, {'end': 17219.954, 'text': "And in fact, what we'll do well is we'll create what we call a bag.", 'start': 17215.612, 'duration': 4.342}], 'summary': "Large data sets have tens of thousands of words encoded as integers, forming a 'bag of words' to track word frequency.", 'duration': 27.451, 'max_score': 17192.503, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk17192503.jpg'}, {'end': 17446.32, 'src': 'embed', 'start': 17417.485, 'weight': 1, 'content': [{'end': 17424.47, 'text': "And that's one of the reasons why bag of words is not very good to use, because we lose the context of the words within the sentence.", 'start': 17417.485, 'duration': 6.985}, {'end': 17427.411, 'text': 'we just pick up the frequency and the fact that these words exist.', 'start': 17424.47, 'duration': 2.941}, {'end': 17432.715, 'text': "So that's the first technique that's called bag of words, I've actually written a little function here that does this for us.", 'start': 17427.752, 'duration': 4.963}, {'end': 17436.836, 'text': 'this is not really the exact way that we would write a bag of words function.', 'start': 17433.095, 'duration': 3.741}, {'end': 17446.32, 'text': 'But you kind of get 
the idea that when I have a text, this is a test to see if this test will work is test a I just did a bunch of random stuff.', 'start': 17436.916, 'duration': 9.404}], 'summary': 'Bag of words loses context, picks up word frequency, not exact but gives idea.', 'duration': 28.835, 'max_score': 17417.485, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk17417485.jpg'}, {'end': 17598.76, 'src': 'embed', 'start': 17560.21, 'weight': 3, 'content': [{'end': 17566.355, 'text': "And to you good point if you made that point, but I'm going to discuss where this falls apart as well, and why we're not going to use this method.", 'start': 17560.21, 'duration': 6.145}, {'end': 17573.22, 'text': "So, although this does solve the problem I talked about previously, where we're going to kind of lose out on the context of a word,", 'start': 17566.775, 'duration': 6.445}, {'end': 17574.701, 'text': "there's still a lot of issues with this.", 'start': 17573.22, 'duration': 1.481}, {'end': 17578.003, 'text': "And they come especially when you're dealing with very large vocabularies.", 'start': 17574.761, 'duration': 3.242}, {'end': 17581.466, 'text': "Now let's take an example where we actually have a vocabulary of say 100, 000 words.", 'start': 17578.544, 'duration': 2.922}, {'end': 17588.432, 'text': "And we know that that means we're going to have to have 100, 000 unique mappings from words to integers.", 'start': 17582.967, 'duration': 5.465}, {'end': 17591.214, 'text': "So let's say our mappings are something like this.", 'start': 17589.152, 'duration': 2.062}, {'end': 17598.76, 'text': 'One maps to the string happy, the word happy, right to maps to sad.', 'start': 17592.335, 'duration': 6.425}], 'summary': 'Challenges arise with large vocabularies, e.g. 
100,000 unique mappings needed, impacting context preservation.', 'duration': 38.55, 'max_score': 17560.21, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk17560210.jpg'}, {'end': 17754.959, 'src': 'embed', 'start': 17731.465, 'weight': 0, 'content': [{'end': 17738.114, 'text': 'Now, what word embeddings does is essentially try to find a way to represent words that are similar using very similar numbers.', 'start': 17731.465, 'duration': 6.649}, {'end': 17739.535, 'text': 'And in fact,', 'start': 17738.835, 'duration': 0.7}, {'end': 17750.298, 'text': "what a word embedding is actually going to do I'll talk about this more in detail as we go on is classify or translate every single one of our words into a vector.", 'start': 17739.535, 'duration': 10.763}, {'end': 17754.959, 'text': 'And that vector is going to have some, you know, n amount of dimensions.', 'start': 17750.778, 'duration': 4.181}], 'summary': 'Word embeddings represent similar words using similar numbers in a vector with n dimensions.', 'duration': 23.494, 'max_score': 17731.465, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk17731465.jpg'}], 'start': 17057.068, 'title': 'Encoding text data and word embeddings for neural networks', 'summary': 'Discusses challenges of encoding text data into numeric format for neural networks, including limitations of bag of words technique, issues with large vocabularies, and the introduction of word embeddings to address arbitrary mappings and capturing word similarity in a multi-dimensional space.', 'chapters': [{'end': 17578.003, 'start': 17057.068, 'title': 'Encoding text data for neural networks', 'summary': 'Explains the challenges of encoding textual data into numeric data for neural networks, discussing the bag of words technique and the limitations of preserving word ordering, and the issues encountered with large vocabularies.', 'duration': 
520.935, 'highlights': ['The bag of words technique encodes text into integers based on the frequency of words, but loses the context of the words within the sentence, making it unsuitable for preserving word order and context.', 'Encoding each word with an integer and preserving its original position in the string solves the issue of losing word order, but presents challenges with large vocabularies.']}, {'end': 17852.822, 'start': 17578.544, 'title': 'Word embeddings and vector representation', 'summary': 'Discusses the limitations of mapping words to integers for sentiment analysis and introduces word embeddings, which represents words as vectors to capture their similarity in a multi-dimensional space, aiming to address the issue of arbitrary mappings and large vocabularies.', 'duration': 274.278, 'highlights': ['Word embeddings aims to represent words as vectors in a multi-dimensional space, such as 3D, and capture their similarity by the direction and angle between the vectors.', 'The limitations of mapping words to integers for sentiment analysis are highlighted, showcasing the challenge of arbitrary mappings and large vocabularies in determining word similarity.', 'The issue of using arbitrary mappings and the lack of a systematic way to group words based on their meaning and similarity is discussed, along with the challenge of determining the importance of numbers chosen to represent each word.']}], 'duration': 795.754, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk17057068.jpg', 'highlights': ['Word embeddings capture word similarity in multi-dimensional space', 'Bag of words technique loses context and word order', 'Challenges with large vocabularies in word encoding', 'Limitations of arbitrary mappings for word similarity', 'Preserving word order presents challenges with large vocabularies']}, {'end': 20220.289, 'segs': [{'end': 17895.742, 'src': 'embed', 'start': 17871.358, 'weight': 4, 
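The bag-of-words encoding described in this chapter — keep only which words are present and how often, discarding word order — can be written as a small helper similar in spirit to the function shown in the video (this version is a reconstruction, not the course's exact code):

```python
def bag_of_words(text):
    """Encode text as a 'bag': a vocab mapping word -> integer,
    and a bag mapping that integer -> frequency. Word order is lost."""
    words = text.lower().split()
    vocab = {}   # each unique word gets its own integer
    bag = {}     # how many times each encoded word appears
    for word in words:
        if word not in vocab:
            vocab[word] = len(vocab)  # new words take the next free integer
        idx = vocab[word]
        bag[idx] = bag.get(idx, 0) + 1
    return bag, vocab

# The example sentence used in the video:
bag, vocab = bag_of_words("this is a test to see if this test will work is test a")
```

Because only presence and frequency survive, "I am happy, are you sad" and "I am sad, are you happy" produce the same bag — the loss of context the chapter warns about.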
'content': [{'end': 17878.967, 'text': "you know, not always, but this is what it's trying to do is essentially pick some representation in a vector form for each word.", 'start': 17871.358, 'duration': 7.609}, {'end': 17884.414, 'text': "And then these vectors, we hope if they're similar words are going to be pointing in a very similar direction.", 'start': 17879.207, 'duration': 5.207}, {'end': 17888.559, 'text': "And that's kind of the best explanation of a word embeddings layer I can give you.", 'start': 17884.834, 'duration': 3.725}, {'end': 17890.6, 'text': 'Now, how do we do this, though?', 'start': 17889.019, 'duration': 1.581}, {'end': 17895.742, 'text': 'How do we actually, you know, go from word to vector and have that be meaningful?', 'start': 17890.76, 'duration': 4.982}], 'summary': 'Creating word embeddings involves representing words as vectors to capture similarity and direction for meaningful analysis.', 'duration': 24.384, 'max_score': 17871.358, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk17871358.jpg'}, {'end': 18806.933, 'src': 'embed', 'start': 18781.927, 'weight': 1, 'content': [{'end': 18788.995, 'text': "So this data set is straight from Keras, and it contains 25, 000 reviews, which are already pre processed and labeled.", 'start': 18781.927, 'duration': 7.068}, {'end': 18793.88, 'text': 'Now what that means for us is that every single word is actually already encoded by an integer.', 'start': 18789.415, 'duration': 4.465}, {'end': 18802.329, 'text': "And in fact they've done kind of a clever encoding system, where what they've done is said if a word is encoded by, say, integer zero,", 'start': 18794.261, 'duration': 8.068}, {'end': 18806.933, 'text': 'that represents how common that word is in the entire data set.', 'start': 18802.329, 'duration': 4.604}], 'summary': "25,000 pre-processed and labeled reviews from Keras, with words encoded by integers.", 'duration': 
25.006, 'max_score': 18781.927, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk18781927.jpg'}, {'end': 19115.296, 'src': 'embed', 'start': 19092.475, 'weight': 3, 'content': [{'end': 19100.362, 'text': "which means that when we pass them to the LSTM layer, we need to tell the LSTM layer it's going to have 32 dimensions for every single word,", 'start': 19092.475, 'duration': 7.887}, {'end': 19101.263, 'text': "which is what we're doing.", 'start': 19100.362, 'duration': 0.901}, {'end': 19109.911, 'text': 'And this will implement that long, short term memory process we talked about before and output the final output to tf dot, keras dot, layers dot,', 'start': 19101.644, 'duration': 8.267}, {'end': 19115.296, 'text': "dense, which will tell us you know, that's what this is, right?", 'start': 19109.911, 'duration': 5.385}], 'summary': 'Passing 32 dimensions to lstm layer for implementing long short term memory process and outputting to tf.keras.layers.dense.', 'duration': 22.821, 'max_score': 19092.475, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk19092475.jpg'}, {'end': 19319.064, 'src': 'embed', 'start': 19289.04, 'weight': 0, 'content': [{'end': 19294.966, 'text': "but we'll do the evaluation on our test data and test labels to get a more accurate kind of result here.", 'start': 19289.04, 'duration': 5.926}, {'end': 19301.393, 'text': "And that tells us we have an accuracy of about 85.5%, which you know isn't great, but it's decent,", 'start': 19295.767, 'duration': 5.626}, {'end': 19305.118, 'text': "considering that we didn't really write that much code to get to the point that we're at right now.", 'start': 19301.393, 'duration': 3.725}, {'end': 19308.759, 'text': "Okay, so that's what we're getting the models been trained.", 'start': 19305.718, 'duration': 3.041}, {'end': 19310.58, 'text': "Again, it's not too complicated.", 'start': 19309.019, 
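The sentiment model described above — an embedding layer producing a 32-dimensional vector per word, an LSTM told to expect those 32 dimensions, and a dense sigmoid output — can be sketched as follows; the vocabulary size of 88,584 and the 250-word padded length are the figures from the chapter, while the compile settings are common choices for this setup rather than a transcription of the course's exact cell:

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 88584   # unique words in the pre-encoded review dataset
MAX_LEN = 250        # every review is padded/truncated to 250 words

model = tf.keras.Sequential([
    # Learn a meaningful 32-dimensional vector for each integer-encoded word.
    tf.keras.layers.Embedding(VOCAB_SIZE, 32),
    # The LSTM walks the sequence of word vectors, carrying long-term context.
    tf.keras.layers.LSTM(32),
    # Sigmoid squashes the output to 0-1: probability the review is positive.
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['acc'])

# A dummy padded review: zeros pad the integer sequence out to MAX_LEN.
dummy_review = np.zeros((1, MAX_LEN), dtype='int32')
probability = model(dummy_review).numpy()[0, 0]
```

Training with a 20% validation split, as in the chapter, is what produced the roughly 85.5% test accuracy quoted above.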
'duration': 1.561}, {'end': 19312.061, 'text': "And now we're on to making predictions.", 'start': 19310.62, 'duration': 1.441}, {'end': 19319.064, 'text': "So the idea is that now we've trained our model, and we want to actually use it to make a prediction on some kind of movie review.", 'start': 19312.581, 'duration': 6.483}], 'summary': 'Model evaluation resulted in 85.5% accuracy. Simple code. Now on to making predictions.', 'duration': 30.024, 'max_score': 19289.04, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk19289040.jpg'}, {'end': 19827.208, 'src': 'embed', 'start': 19801.972, 'weight': 2, 'content': [{'end': 19808.494, 'text': "So now we're on to our last and final example, which is going to be creating a recurrent neural network play generator.", 'start': 19801.972, 'duration': 6.522}, {'end': 19813.798, 'text': "Now this is going to be the first kind of neural network we've done that's actually going to be creating something for us.", 'start': 19809.034, 'duration': 4.764}, {'end': 19820.543, 'text': "But essentially, what we're going to do is make a model that's capable of predicting the next character in a sequence.", 'start': 19813.878, 'duration': 6.665}, {'end': 19823.345, 'text': "So we're going to give it some sequence as an input.", 'start': 19820.963, 'duration': 2.382}, {'end': 19827.208, 'text': "And what it's going to do is just simply predict the most likely next character.", 'start': 19823.425, 'duration': 3.783}], 'summary': 'Creating a recurrent neural network play generator to predict the next character in a sequence.', 'duration': 25.236, 'max_score': 19801.972, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk19801972.jpg'}], 'start': 17853.682, 'title': 'Recurrent neural networks and applications', 'summary': 'Explores word embeddings, recurrent neural networks, and LSTM, covering sentiment analysis with LSTM, model training, and a play generator, achieving 88% validation accuracy and 85.5% prediction accuracy on movie reviews, using a dataset of 25,000 pre-processed and labeled reviews with a vocabulary size of 88,584 unique words, and training a model on sequences of text from the play Romeo and Juliet to generate an entire play.', 'chapters': [{'end': 18247.102, 'start': 17853.682, 'title': 'Word embeddings and recurrent neural networks', 'summary': 'Explains how word embeddings work by representing words as vectors, and how recurrent neural networks process textual data one word at a time by maintaining an internal memory, which helps in understanding the meaning of words based on the context of the words seen before.', 'duration': 393.42, 'highlights': ['Word embeddings represent words as vectors and are trained to point similar words in a similar direction.', 'Recurrent neural networks process textual data one word at a time, maintaining an internal memory to understand the context of words seen before.', 'The recurrent neural network contains an internal loop and processes data at different time steps.']}, {'end': 18733.509, 'start': 18247.102, 'title': 'Recurrent neural networks and LSTM', 'summary': "Explains the concept of recurrent neural networks, which process input as a sequence, and introduces the long short-term memory (LSTM) layer that improves the model's ability to remember and access previous states, making it more useful for longer sequences.", 'duration': 486.407, 'highlights': ['The chapter introduces the concept of recurrent neural networks and explains how they process input as a sequence, allowing the model to build an understanding based on previously seen inputs.', 'The explanation of the long short-term memory (LSTM) layer highlights its function in keeping track of and accessing previous states, making it more suitable for processing longer sequences.', "The issue with simple RNN layers in processing longer sequences is discussed, emphasizing the difficulty in remembering the beginning of a long sequence as it becomes increasingly insignificant in the model's outputs."]}, {'end': 19190.058, 'start': 18733.949, 'title': 'Sentiment analysis with LSTM', 'summary': 'Covers performing sentiment analysis on movie reviews using an LSTM model, with a data set containing 25,000 pre-processed and labeled reviews, a vocabulary size of 88,584 unique words, and the process of padding sequences to a length of 250 words.', 'duration': 456.109, 'highlights': ['The data set contains 25,000 reviews, pre-processed and labeled, with a vocabulary size of 88,584 unique words.', 'The process involves padding sequences to a length of 250 words to ensure uniformity for feeding into the neural network.', 'The model includes an embedding layer to create meaningful representations for the pre-processed integers, and an LSTM layer with 32 dimensions for each word, followed by a dense layer with a sigmoid activation function for sentiment prediction.']}, {'end': 19801.912, 'start': 19190.878, 'title': 'Model training and prediction', 'summary': 'Covers training a model with 88% validation accuracy, predicting movie reviews with 85.5% accuracy, and encoding/decoding text for model input.', 'duration': 611.034, 'highlights': ['The model achieves 88% validation accuracy and overfits to 97-98% after training with a 20% validation split.', 'Predicting movie reviews results in an accuracy of 85.5%.', 'Text encoding function is created to preprocess input for model prediction, utilizing a lookup table and padding sequences.']}, {'end': 20220.289, 'start': 19801.972, 'title': 'Recurrent neural network play generator', 'summary': 'Discusses creating a recurrent neural network play generator by training a model on sequences of text from the play Romeo and Juliet, and using it to predict and generate an entire play, with approximately 1.1 million characters in the text.', 'duration': 418.317, 'highlights': ['The chapter discusses creating a
recurrent neural network play generator by training a model on sequences of texts from the play Romeo and Juliet.', 'The text dataset for Romeo and Juliet contains approximately 1.1 million characters.', 'The model is capable of predicting and generating an entire play by predicting the most likely next character for a given sequence and feeding the output as input to the model.']}], 'duration': 2366.607, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk17853682.jpg', 'highlights': ['Model achieves 88% validation accuracy and 85.5% prediction accuracy on movie reviews', 'Dataset: 25,000 pre-processed and labeled reviews with a vocabulary size of 88,584 unique words', 'Recurrent neural network play generator trained on sequences from Romeo and Juliet with 1.1M characters', 'LSTM layer with 32 dimensions for each word, followed by a dense layer with a sigmoid activation function', 'Word embeddings trained to point similar words in a similar direction']}, {'end': 21270.731, 'segs': [{'end': 20338.964, 'src': 'embed', 'start': 20313.396, 'weight': 1, 'content': [{'end': 20320.198, 'text': "So now we're gonna do is define a sequence length of 100, we're going to say the amount of examples per epoch is going to be the length of the text,", 'start': 20313.396, 'duration': 6.802}, {'end': 20322.319, 'text': 'divided by the sequence length plus one.', 'start': 20320.198, 'duration': 2.121}, {'end': 20329.141, 'text': "The reason we're doing this is because for every training example, we need to create a sequence input that's 100 characters long.", 'start': 20322.739, 'duration': 6.402}, {'end': 20332.782, 'text': "And we need to create a sequence output that's 100 characters long,", 'start': 20329.661, 'duration': 3.121}, {'end': 20338.964, 'text': 'which means that we need to have 101 characters that we use for every training example right?', 'start': 20332.782, 'duration': 6.182}], 'summary': 'Defining a sequence 
length of 100 to create 101-character training examples.', 'duration': 25.568, 'max_score': 20313.396, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk20313396.jpg'}, {'end': 20431.877, 'src': 'embed', 'start': 20409.323, 'weight': 7, 'content': [{'end': 20418.028, 'text': "So taking this the sequences of 101 length and converting them into the input and target text, and I'll show you how they work in a second.", 'start': 20409.323, 'duration': 8.705}, {'end': 20423.271, 'text': 'we can do this convert the sequences to that by just mapping them to this function.', 'start': 20418.028, 'duration': 5.243}, {'end': 20424.892, 'text': "So that's what this function does.", 'start': 20423.672, 'duration': 1.22}, {'end': 20431.877, 'text': 'So if we say sequences dot map, and we put this function here, that means every single sequence will have this operation applied to it.', 'start': 20425.273, 'duration': 6.604}], 'summary': 'Sequences of 101 length are converted to input and target text using a mapping function.', 'duration': 22.554, 'max_score': 20409.323, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk20409323.jpg'}, {'end': 20512.994, 'src': 'embed', 'start': 20485.827, 'weight': 4, 'content': [{'end': 20492.456, 'text': 'The vocabulary size is the length of the vocabulary which, if you remember all the way back up to the top of the code, was the set.', 'start': 20485.827, 'duration': 6.629}, {'end': 20497.06, 'text': 'are the sorted set of the text which essentially told us how many unique characters are in there?', 'start': 20493.016, 'duration': 4.044}, {'end': 20501.845, 'text': 'The embedding dimension is 256, the RNN units is 1024.', 'start': 20497.561, 'duration': 4.284}, {'end': 20504.206, 'text': 'And the buffer size is 10, 000.', 'start': 20501.845, 'duration': 2.361}, {'end': 20507.19, 'text': "What we're going to do now is create a data set 
that's shuffled.", 'start': 20504.207, 'duration': 2.983}, {'end': 20512.994, 'text': "So we're going to switch around all these sequences, so they don't get shown in the proper order, which we actually don't want.", 'start': 20507.21, 'duration': 5.784}], 'summary': 'Vocabulary size is the length of the sorted character set, with a 256 embedding dimension, 1024 RNN units, and a buffer size of 10,000 for creating a shuffled data set.', 'duration': 27.167, 'max_score': 20485.827, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk20485827.jpg'}, {'end': 20588.33, 'src': 'embed', 'start': 20562.294, 'weight': 2, 'content': [{'end': 20567.617, 'text': "So we've kind of set these parameters up here, remember what those are, we've batched and we've shuffled the data set.", 'start': 20562.294, 'duration': 5.323}, {'end': 20569.158, 'text': "And again, that's how this works.", 'start': 20567.657, 'duration': 1.501}, {'end': 20572.56, 'text': 'You can print it out if you want to see what a batch actually looks like.', 'start': 20569.198, 'duration': 3.362}, {'end': 20579.544, 'text': "But essentially, it's just 64 entries of those sequences, right?
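The shuffle-and-batch step described here (buffer size 10,000, batch size 64, remainder dropped) is done in the notebook with TensorFlow's tf.data API; the plain-Python helper below is a hypothetical stand-in, not the course's code, sketching the same logic without the TensorFlow dependency.

```python
# Notebook equivalent (roughly): data = sequences.shuffle(10000).batch(64, drop_remainder=True)
import random

def shuffle_and_batch(examples, batch_size=64, seed=0):
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)   # break the original ordering
    # Keep only full batches so every batch has exactly batch_size entries.
    return [shuffled[i:i + batch_size]
            for i in range(0, len(shuffled) - batch_size + 1, batch_size)]

batches = shuffle_and_batch(list(range(200)), batch_size=64)  # 3 full batches
```

Dropping the remainder matters because the model is built with a fixed batch dimension of 64.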
So 64 different training examples is what a batch of that is.", 'start': 20572.6, 'duration': 6.944}, {'end': 20588.33, 'text': "Alright, so now we go down here, we're going to say build model, we're actually making a function that is going to return to us a built model.", 'start': 20580.504, 'duration': 7.826}], 'summary': 'Data set is batched and shuffled, each batch contains 64 training examples.', 'duration': 26.036, 'max_score': 20562.294, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk20562294.jpg'}, {'end': 20767.077, 'src': 'embed', 'start': 20740.621, 'weight': 0, 'content': [{'end': 20745.864, 'text': 'Finally, we have a dense layer, which is going to contain the amount of vocabulary size nodes.', 'start': 20740.621, 'duration': 5.243}, {'end': 20754.908, 'text': "The reason we're doing this is because we want the final layer to have the amount of nodes in it equal to the amount of characters in the vocabulary.", 'start': 20746.344, 'duration': 8.564}, {'end': 20761.512, 'text': 'This way, every single one of those nodes can represent a probability distribution that that character comes next.', 'start': 20755.389, 'duration': 6.123}, {'end': 20767.077, 'text': 'So all of those node values summed together should give us the value of one.', 'start': 20761.892, 'duration': 5.185}], 'summary': 'The dense layer contains vocabulary-size nodes to represent a probability distribution for the next character in the text.', 'duration': 26.456, 'max_score': 20740.621, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk20740621.jpg'}, {'end': 20836.61, 'src': 'embed', 'start': 20805.045, 'weight': 3, 'content': [{'end': 20808.929, 'text': 'And then this is going to be just the output dimension.', 'start': 20805.045, 'duration': 3.884}, {'end': 20813.294, 'text': 'or sorry, this is the amount of values in the vector right?', 'start': 20808.929, 'duration': 4.365}, {'end': 20818.239, 'text': "So we're going to start with 256, we'll just do 1024 units in the LSTM.", 'start': 20813.314, 'duration': 4.925}, {'end': 20822.283, 'text': 'And then 65 stands for the amount of nodes, because that is the length of the vocabulary.', 'start': 20818.259, 'duration': 4.024}, {'end': 20828.841, 'text': "Alright, so combined, that's how many trainable parameters we get, you can see each of them for each layer.", 'start': 20823.075, 'duration': 5.766}, {'end': 20830.864, 'text': "And now it's time to move on to the next section.", 'start': 20829.302, 'duration': 1.562}, {'end': 20836.61, 'text': "Okay, so now we're moving on to the next step of the tutorial, which is creating a loss function to compile our model with.", 'start': 20831.504, 'duration': 5.106}], 'summary': 'Configuring model parameters: 256 output dimensions, 1024 LSTM units, 65 nodes in vocabulary.', 'duration': 31.565, 'max_score': 20805.045, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk20805045.jpg'}, {'end': 21147.182, 'src': 'embed', 'start': 21124.126, 'weight': 8, 'content': [{'end': 21143.139, 'text': "And that's why we need to actually make our own loss function to be able to determine how, you know, good our model's performing when it outputs something ridiculous that looks like this, because there is no built-in loss function in TensorFlow that can look at a three-dimensional nested array of probabilities over,", 'start': 21124.126, 'duration': 19.013}, {'end': 21147.182, 'text': 'you know, the vocabulary size, and tell us how different the two things are.', 'start': 21143.139, 'duration': 4.043}], 'summary': 'Creating a custom loss function is necessary to evaluate model performance on complex data in TensorFlow.', 'duration': 23.056, 'max_score': 21124.126, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk21124126.jpg'}], 'start': 20220.289,
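The 101-character example construction described above (input = the first 100 characters, target = the same text shifted one character to the right) can be sketched in plain Python. `split_input_target` mirrors the helper of that name in TensorFlow's text-generation tutorial; `make_examples` is a hypothetical stand-in for the tf.data batching of the character stream.

```python
def split_input_target(chunk):
    """For 'hello', the input is 'hell' and the target is 'ello'."""
    return chunk[:-1], chunk[1:]

def make_examples(text, seq_length=100):
    # Cut the text into (seq_length + 1)-character chunks, then split each
    # chunk into an input sequence and a one-character-shifted target.
    pairs = []
    for i in range(0, len(text) - seq_length, seq_length + 1):
        chunk = text[i:i + seq_length + 1]
        if len(chunk) == seq_length + 1:
            pairs.append(split_input_target(chunk))
    return pairs

pairs = make_examples("First Citizen: Before we proceed any further", seq_length=10)
```

This is why the amount of examples per epoch is the text length divided by seq_length + 1: each training pair consumes 101 characters of the source.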
'title': 'Text sequence training examples, building an RNN model, and the RNN dense layer', 'summary': 'Explains creating training examples and building an RNN model, with parameters like a sequence length of 100, 1.1 million characters, batch size=64, embedding dimension=256, RNN units=1024, buffer size=10,000, and a dense layer implementation with a vocabulary size of 65.', 'chapters': [{'end': 20465.349, 'start': 20220.289, 'title': 'Text sequence training examples', 'summary': 'Explains how to create training examples by splitting the text into sequences of a defined length, and then mapping them to input and target text for training, utilizing a sequence length of 100 and a total of 1.1 million characters in the dataset.', 'duration': 245.06, 'highlights': ['The chapter explains creating training examples by splitting the text into sequences of a defined length.', 'It outlines the process of mapping the sequences to input and target text for training.', 'Utilizing a sequence length of 100 and a total of 1.1 million characters in the dataset.']}, {'end': 20739.78, 'start': 20465.829, 'title': 'Building an RNN model for training', 'summary': 'Explains the process of creating training batches, defining model parameters, and building a model function for training, including key parameters such as batch size=64, embedding dimension=256, RNN units=1024, and buffer size=10,000.', 'duration': 273.951, 'highlights': ['The batch size equals 64.', 'The embedding dimension is 256, the RNN units is 1024, and the buffer size is 10,000.', 'The model function is created with specific parameters, including the batch size and LSTM layer with 1024 units.']}, {'end': 21270.731, 'start': 20740.621, 'title': 'RNN dense layer & loss function', 'summary': 'Discusses the implementation of a dense layer in a recurrent neural network with a vocabulary size of 65, and the creation of a loss function to handle the three-dimensional nested array of probabilities, with emphasis on the need for sampling in
determining predicted characters.', 'duration': 530.11, 'highlights': ['The dense layer has 65 nodes representing the vocabulary size, enabling it to produce a probability distribution for predicting the next character.', 'The need for a custom loss function is emphasized due to the challenge of handling a three-dimensional nested array of probabilities, requiring the use of sampling to determine predicted characters.', 'Sampling is used to select characters based on the probability distribution, avoiding the risk of getting stuck in an infinite loop when accepting the character with the highest probability.']}], 'duration': 1050.442, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk20220289.jpg', 'highlights': ['The dense layer has 65 nodes for a vocabulary size of 65.', 'Utilizing a sequence length of 100 and 1.1 million characters in the dataset.', 'The batch size is 64.', 'The embedding dimension is 256 and the RNN units is 1024.', 'The buffer size is 10,000.', 'The model function is created with specific parameters, including the batch size and LSTM layer with 1024 units.', 'The chapter explains creating training examples by splitting the text into sequences of a defined length.', 'The process of mapping the sequences to input and target text for training is outlined.', 'A custom loss function is emphasized due to the challenge of handling a three-dimensional nested array of probabilities.']}, {'end': 24727.517, 'segs': [{'end': 21573.082, 'src': 'embed', 'start': 21543.248, 'weight': 0, 'content': [{'end': 21545.868, 'text': "So now let's talk about how I actually generated that output.", 'start': 21543.248, 'duration': 2.62}, {'end': 21552.209, 'text': 'So we rebuilt the model to accept a batch size of one, which means that I can pass it a sequence of any length.', 'start': 21546.008, 'duration': 6.201}, {'end': 21558.513, 'text': "And in fact, what I start by doing is passing the sequence that I've 
typed in here, which was Romeo.", 'start': 21552.31, 'duration': 6.203}, {'end': 21562.396, 'text': 'then what that does is we run this function, generate text.', 'start': 21558.513, 'duration': 3.883}, {'end': 21564.097, 'text': "I just stole this from TensorFlow's website.", 'start': 21562.396, 'duration': 1.701}, {'end': 21565.638, 'text': "like I've stolen almost all of this code.", 'start': 21564.097, 'duration': 1.541}, {'end': 21569.941, 'text': 'And then we say the number of characters to generate is 800.', 'start': 21566.118, 'duration': 3.823}, {'end': 21573.082, 'text': 'the input evaluation, which is now like.', 'start': 21569.941, 'duration': 3.141}], 'summary': 'Rebuilt the model to accept a batch size of one, generating 800 characters.', 'duration': 29.834, 'max_score': 21543.248, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk21543248.jpg'}, {'end': 21944.185, 'src': 'embed', 'start': 21901.312, 'weight': 2, 'content': [{'end': 21904.134, 'text': "we're going to actually build the model, starting with a batch size of 64.", 'start': 21901.312, 'duration': 2.822}, {'end': 21909.317, 'text': "We're going to create our loss function, compile the model,", 'start': 21904.134, 'duration': 5.183}, {'end': 21918.623, 'text': 'set our checkpoints for saving and then train the model, making sure that we set checkpoint callback as the checkpoint callback for the model,', 'start': 21909.317, 'duration': 9.306}, {'end': 21923.406, 'text': "which means it's going to save, every epoch, the weights that the model had computed at that epoch.", 'start': 21918.623, 'duration': 4.783}, {'end': 21927.089, 'text': "So after we do that, then our model's trained.", 'start': 21923.967, 'duration': 3.122}, {'end': 21931.153, 'text': "So we've trained the model, you can see I trained this on 50 epochs for the B movie script.", 'start': 21927.169, 'duration': 3.984}, {'end': 21934.936, 'text': "And then what we're going to do is build the model now with a batch size of one.", 'start': 21931.593, 'duration': 3.343}, {'end': 21939.12, 'text': 'So we can pass one example to it and get a prediction.', 'start': 21935.837, 'duration': 3.283}, {'end': 21944.185, 'text': "we're going to load the most recent weights into our model from the checkpoint directory that we defined above.", 'start': 21939.12, 'duration': 5.065}], 'summary': 'Built the model with batch size 64, trained on 50 epochs for the B movie script', 'duration': 42.873, 'max_score': 21901.312, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk21901312.jpg'}, {'end': 23180.054, 'src': 'embed', 'start': 23152.11, 'weight': 1, 'content': [{'end': 23155.233, 'text': "But now I'm just going to talk about this formula for how we actually update Q values.", 'start': 23152.11, 'duration': 3.123}, {'end': 23167.384, 'text': "So, obviously what's going to end up happening in our Q learning is we're going to have an agent that's going to be in the learning stage exploring the environment and having all these actions and all these rewards and all these observations happening.", 'start': 23155.854, 'duration': 11.53}, {'end': 23175.19, 'text': "And it's going to be moving around the environment by following one of these two kinds of principles: randomly picking a valid action or using the current Q table to find the best action.", 'start': 23167.764, 'duration': 7.426}, {'end': 23180.054, 'text': 'when it gets into a new state and it, you know, moves from state to state.', 'start': 23175.991, 'duration': 4.063}], 'summary': 'Discussing the formula for updating Q values in Q learning.', 'duration': 27.944, 'max_score': 23152.11, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk23152110.jpg'}, {'end': 23591.463, 'src': 'embed', 'start': 23567.325, 'weight': 5, 'content': [{'end': 23573.289, 'text': "So you'll see how this works in a second, but
essentially there's a ton of graphical environments that have very easy interfaces to use.", 'start': 23567.325, 'duration': 5.964}, {'end': 23579.734, 'text': "So like moving characters around them that you're allowed to experiment with completely for free as a programmer, to try, to you know,", 'start': 23573.329, 'duration': 6.405}, {'end': 23581.675, 'text': 'make some cool reinforcement learning models.', 'start': 23579.734, 'duration': 1.941}, {'end': 23582.816, 'text': "That's what OpenAI Gym is.", 'start': 23581.715, 'duration': 1.101}, {'end': 23583.576, 'text': 'And you can look at it.', 'start': 23582.836, 'duration': 0.74}, {'end': 23585.678, 'text': "I mean, we'll click on it here actually to see what it is.", 'start': 23583.736, 'duration': 1.942}, {'end': 23588.661, 'text': "you can see Gym, there's all these different Atari environments.", 'start': 23586.158, 'duration': 2.503}, {'end': 23591.463, 'text': "And it's just a way to kind of train reinforcement learning models.", 'start': 23588.701, 'duration': 2.762}], 'summary': 'OpenAI Gym provides diverse Atari environments for training reinforcement learning models.', 'duration': 24.138, 'max_score': 23567.325, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk23567325.jpg'}, {'end': 23895.004, 'src': 'embed', 'start': 23867.543, 'weight': 4, 'content': [{'end': 23874.685, 'text': 'So something I guess I forgot to mention is when we initialize the Q table, we just initialize all blank values or zero values, because obviously,', 'start': 23867.543, 'duration': 7.142}, {'end': 23879.588, 'text': "at the beginning of our learning, our model or our agent doesn't know anything about the environment yet.", 'start': 23874.685, 'duration': 4.903}, {'end': 23885.775, 'text': "So we just leave those all blank, which means we're going to more likely be taking random actions at the beginning of our training,", 'start': 23879.628, 'duration': 6.147},
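The update rule discussed in this section is the standard Q-learning formula, Q[s][a] += lr * (reward + gamma * max(Q[s']) - Q[s][a]). Below is a minimal plain-Python sketch of the pieces described (a zero-initialized Q table, epsilon-greedy action choice, and the update); the function names are mine, not the course's, and the lr/gamma defaults are illustrative.

```python
import random

def init_q_table(n_states, n_actions):
    # All zeros at the start: the agent knows nothing about the environment yet.
    return [[0.0] * n_actions for _ in range(n_states)]

def choose_action(q_table, state, epsilon):
    # With probability epsilon take a random action (explore),
    # otherwise take the best-known action from the Q table (exploit).
    if random.random() < epsilon:
        return random.randrange(len(q_table[state]))
    row = q_table[state]
    return row.index(max(row))

def update_q(q_table, state, action, reward, new_state, lr=0.81, gamma=0.96):
    # Nudge Q[state][action] toward reward + discounted best value of the
    # new state; lr controls the step size, gamma discounts future reward.
    best_next = max(q_table[new_state])
    q_table[state][action] += lr * (reward + gamma * best_next
                                    - q_table[state][action])
```

With lr=0.5 and gamma=0.9, a first update from a zero table with reward 1 moves the entry to 0.5, halfway to the target of 1.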
{'end': 23887.797, 'text': 'trying to explore the environment space more.', 'start': 23885.775, 'duration': 2.022}, {'end': 23895.004, 'text': 'And then as we get further on and learn more about the environment, those actions will likely be more calculated based on the Q table values.', 'start': 23888.217, 'duration': 6.787}], 'summary': 'During initialization, blank or zero values are used for the Q table, leading to random actions in early training, and more calculated actions as learning progresses.', 'duration': 27.461, 'max_score': 23867.543, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk23867543.jpg'}], 'start': 21271.231, 'title': 'Neural network training, text generation, and reinforcement learning', 'summary': 'Covers topics such as reshaping arrays, loss function computation, model compilation, and text generation. It also demonstrates training a recurrent neural network on different scripts and explains reinforcement learning, Q learning, and the implementation of Q learning to train an AI in various environments.', 'chapters': [{'end': 21717.248, 'start': 21271.231, 'title': 'Neural network training and text generation', 'summary': 'Covers reshaping the array, loss function computation, model compilation, checkpoint setup, and model training.
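The temperature setting mentioned in this section's text-generation walkthrough scales the logits before sampling: low temperature sharpens the softmax (more predictable text), high temperature flattens it (more surprising text). TensorFlow's tutorial does this with tf.random.categorical; the helper below is an illustrative plain-Python equivalent, with an injectable rng so it can be tested deterministically.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random.random):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                        # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]      # softmax over the vocabulary
    r = rng()                              # draw from the categorical dist.
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

Sampling (rather than always taking the argmax) is also what keeps the generator from getting stuck repeating the single most likely character.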
It then explains the process of rebuilding the model, loading weights, and generating text using the model, with the option to adjust temperature for different text predictability.', 'duration': 446.017, 'highlights': ['Rebuilding the model to accept a batch size of one allows for passing a sequence of any length, and generating text output based on the given input, using a function to pre-process the text and an option to adjust temperature for predictable or surprising text results.', "Training the model for more epochs improves the quality of the generated output, as demonstrated with the example of generating text from the input 'Romeo'.", 'Explaining the process of computing the loss function that compares labels and probability distributions to reduce the loss and improve the performance of the algorithm in the neural network.', 'Compiling the model with the Adam optimizer and the loss function, and setting up checkpoints for training, which can be adjusted using a GPU hardware accelerator for improved performance.', 'Reshaping the array and converting integers to numbers to visualize the actual characters, and showing the predicted characters based on the time steps during the process.']}, {'end': 22024.183, 'start': 21717.248, 'title': 'Generating sequences with a recurrent neural network', 'summary': 'Demonstrates how to train a recurrent neural network on a B movie script and highlights the differences in performance compared to training on Romeo and Juliet, with an emphasis on the training process and model evaluation.', 'duration': 306.935, 'highlights': ["The model was trained on 50 epochs for the B movie script, indicating the training duration and providing insight into the model's learning process.", "A recommendation to increase the amount of epochs to improve the model's performance, emphasizing the significance of low loss and the potential impact of a higher number of epochs.", "Comparison of model performance between the B movie script and Romeo and Juliet, highlighting the differences in text quality and format, which impacts the model's performance."]}, {'end': 23529.709, 'start': 22024.183, 'title': 'Reinforcement learning with Q learning', 'summary': 'Explains reinforcement learning and Q learning, introducing the concept of an agent exploring an environment, the key terms of state, action, and reward, and the Q learning algorithm for updating Q values to optimize rewards, while emphasizing the importance of the learning rate and discount factor. It also highlights the need for a balance between using the current Q table to find the best action and randomly picking a valid action for effective exploration.', 'duration': 1505.526, 'highlights': ['The Q learning algorithm involves updating Q values using a formula that incorporates the learning rate, the discount factor, the maximum reward in the new state, and the difference between the current and updated values.', 'The chapter emphasizes the need for a balance between using the current Q table to find the best action and randomly picking a valid action to effectively explore the environment, thus avoiding local minima.', 'Introduction of key terms such as state, action, and reward in the context of reinforcement learning, with a focus on the importance of defining an environment for an agent to navigate.']}, {'end': 23830.348, 'start': 23529.729, 'title': 'OpenAI Gym for reinforcement learning', 'summary': 'Introduces OpenAI Gym, a tool for training reinforcement learning models, with graphical environments offering easy interfaces, and the ability to perform actions, observe states, and receive rewards in the environment.', 'duration': 300.619, 'highlights': ['OpenAI Gym provides graphical environments with easy interfaces for training reinforcement learning models.', 'The observation space and action space in OpenAI Gym define the states and available actions in the environment.', "The 'frozen lake' example in OpenAI Gym involves navigating to the goal while avoiding falling into holes."]}, {'end': 24136.868, 'start': 23831.317, 'title': 'Implementing Q learning for Frozen Lake', 'summary': 'Presents the implementation of Q learning to train an AI to navigate the Frozen Lake environment, including initializing the Q table with zero values, defining constants such as gamma, the learning rate, max steps, and episodes, and explaining the process of picking an action based on the epsilon value and updating Q values.', 'duration': 305.551, 'highlights': ['Initializing the Q table with zero values to represent the environment state and actions, allowing the agent to explore the environment by taking random actions at the beginning of training.', 'Defining constants such as gamma, learning rate, max steps, and episodes, which play crucial roles in the training process, with the number of episodes determining how many times the agent explores the environment.', 'Explaining the concept of epsilon, which determines the percentage chance of taking a random action versus using the Q table to find the best action, and the gradual reduction of epsilon during training to promote optimal route exploration.']}, {'end': 24727.517, 'start': 24136.868, 'title': 'Reinforcement learning with TensorFlow', 'summary': "The chapter demonstrates the implementation of the Q-learning algorithm to train an agent to navigate an environment with a 0.2888 average reward over 1500 episodes, and suggests further learning resources on TensorFlow's website for advanced topics and tutorials.", 'duration': 590.649, 'highlights': ['The chapter demonstrates the implementation of the Q-learning algorithm to train an agent to navigate an environment with a 0.2888 average reward over 1500 episodes.', "The chapter suggests further learning resources on TensorFlow's website for advanced topics and tutorials.", 'The chapter emphasizes the significance of specializing in a specific area of machine learning and AI to delve deeper into the subject matter.']}], 'duration': 3456.286,
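The episode loop summarized here (many episodes, epsilon-greedy actions, Q updates, epsilon decaying so the agent explores early and exploits later) can be sketched end to end. The 4-state "walk right to the goal" environment below is a toy stand-in for gym's FrozenLake, and all constants are illustrative rather than the exact values from the video.

```python
import random

def train(episodes=500, lr=0.5, gamma=0.9, epsilon=0.9, decay=0.995, seed=0):
    random.seed(seed)
    n_states, n_actions, goal = 4, 2, 3          # actions: 0 = left, 1 = right
    q = [[0.0] * n_actions for _ in range(n_states)]
    rewards = []
    for _ in range(episodes):
        state, total = 0, 0.0
        for _ in range(20):                      # max steps per episode
            if random.random() < epsilon:
                action = random.randrange(n_actions)        # explore
            else:
                action = q[state].index(max(q[state]))      # exploit
            new_state = max(0, min(goal, state + (1 if action == 1 else -1)))
            reward = 1.0 if new_state == goal else 0.0
            q[state][action] += lr * (reward + gamma * max(q[new_state])
                                      - q[state][action])
            state, total = new_state, total + reward
            if state == goal:
                break
        rewards.append(total)
        epsilon = max(0.01, epsilon * decay)     # decay toward exploitation
    return q, sum(rewards) / episodes

q_table, avg_reward = train()
```

On a slippery FrozenLake the same loop yields a much lower average reward (the 0.2888 quoted here), since transitions there are stochastic.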
'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tPYj3fFJGjk/pics/tPYj3fFJGjk21271231.jpg', 'highlights': ["Training the model for more epochs improves the quality of the generated output, as demonstrated with the example of generating text from the input 'Romeo'.", 'The Q learning algorithm involves updating Q values using a formula that incorporates the learning rate, the discount factor, the maximum reward in the new state, and the difference between the current and updated values.', 'Compiling the model with the Adam optimizer and the loss function, and setting up checkpoints for training, which can be adjusted using a GPU hardware accelerator for improved performance.', "The model was trained on 50 epochs for the B movie script, indicating the training duration and providing insight into the model's learning process.", 'Initializing the Q table with zero values to represent the environment state and actions, allowing the agent to explore the environment by taking random actions at the beginning of training.', 'OpenAI Gym provides graphical environments with easy interfaces for training reinforcement learning models.']}], 'highlights': ['The model achieves an accuracy of close to 92-93% after training', 'The course covers core learning algorithms for machine learning, including neural networks and convolutional neural networks', 'Creating feature columns for linear regression involves adding TensorFlow feature columns, achieving a 76% accuracy in its predictions', 'The importance of understanding AI, neural networks, and machine learning is emphasized for the course', 'The importance of data in machine learning and AI is emphasized, explaining features and labels', 'The course introduces TensorFlow, a module developed and maintained by Google, and its applications in machine learning', 'The runtime tab in Collaboratory allows selective execution, improving efficiency', 'The data set has 627 entries and 9 attributes, with a mean age of
29 and a standard deviation of 12', 'The course is aimed at beginners in machine learning and artificial intelligence, requiring fundamental programming and Python knowledge', 'The model is trained with 5000 steps, defining a set amount of steps to go through the dataset']}
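As a side note on the sentiment-analysis preprocessing recapped in these highlights: padding every encoded review to 250 integers is what gives the batches a uniform shape. Keras does this with keras.preprocessing.sequence.pad_sequences; assuming its default behavior (pad on the left, truncate from the front), the logic is roughly:

```python
def pad_sequence(seq, maxlen=250, value=0):
    # Too long: keep only the last maxlen tokens (front-truncation).
    if len(seq) >= maxlen:
        return seq[len(seq) - maxlen:]
    # Too short: left-pad with the fill value so the length is exactly maxlen.
    return [value] * (maxlen - len(seq)) + seq

padded = pad_sequence([5, 25, 100], maxlen=6)
```

Left-padding keeps the real tokens at the end of the sequence, closest to where the LSTM finishes reading.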