title
Learn TensorFlow and Deep Learning fundamentals with Python (code-first introduction) Part 1/2

description
Ready to learn the fundamentals of TensorFlow and deep learning with Python? Well, you’ve come to the right place. After this two-part code-first introduction, you’ll have written hundreds of lines of TensorFlow code and have hands-on experience with two important problems in machine learning: regression (predicting a number) and classification (predicting whether something is one thing or another). Open a Google Colab window (if you’re not sure what this is, you’ll find out soon) and get ready to code along.

Sign up for the full course - https://dbourke.link/ZTMTFcourse
Get all of the code/materials on GitHub - https://www.github.com/mrdbourke/tensorflow-deep-learning/
Ask a question - https://github.com/mrdbourke/tensorflow-deep-learning/discussions
See part 2 - https://youtu.be/ZUKz4125WNI
TensorFlow Python documentation - https://www.tensorflow.org/api_docs/python/tf

Connect elsewhere:
Web - https://www.mrdbourke.com
Livestreams on Twitch - https://www.twitch.tv/mrdbourke
Get email updates on my work - https://www.mrdbourke.com/newsletter

Timestamps:
0:00 - Intro/hello/how to approach this video
1:50 - MODULE 0 START (TensorFlow/deep learning fundamentals)
1:53 - [Keynote] 1. What is deep learning?
6:31 - [Keynote] 2. Why use deep learning?
16:10 - [Keynote] 3. What are neural networks?
26:33 - [Keynote] 4. What is deep learning actually used for?
35:10 - [Keynote] 5. What is and why use TensorFlow?
43:05 - [Keynote] 6. What is a tensor?
46:40 - [Keynote] 7. What we're going to cover
51:12 - [Keynote] 8. How to approach this course
56:45 - 9. Creating our first tensors with TensorFlow
1:15:32 - 10. Creating tensors with tf.Variable
1:22:40 - 11. Creating random tensors
1:32:20 - 12. Shuffling the order of tensors
1:42:00 - 13. Creating tensors from NumPy arrays
1:53:57 - 14. Getting information from our tensors
2:05:52 - 15. Indexing and expanding tensors
2:18:27 - 16. Manipulating tensors with basic operations
2:24:00 - 17. Matrix multiplication part 1
2:35:55 - 18. Matrix multiplication part 2
2:49:25 - 19. Matrix multiplication part 3
2:59:27 - 20. Changing the datatype of tensors
3:06:24 - 21. Aggregating tensors
3:16:14 - 22. Tensor troubleshooting
3:22:27 - 23. Finding the positional min and max of a tensor
3:31:56 - 24. Squeezing a tensor
3:34:57 - 25. One-hot encoding tensors
3:40:44 - 26. Trying out more tensor math operations
3:45:31 - 27. Using TensorFlow with NumPy
3:51:14 - MODULE 1 START (neural network regression)
3:51:25 - [Keynote] 28. Intro to neural network regression with TensorFlow
3:58:57 - [Keynote] 29. Inputs and outputs of a regression model
4:07:55 - [Keynote] 30. Architecture of a neural network regression model
4:15:51 - 31. Creating sample regression data
4:28:39 - 32. Steps in modelling with TensorFlow
4:48:53 - 33. Steps in improving a model part 1
4:54:56 - 34. Steps in improving a model part 2
5:04:22 - 35. Steps in improving a model part 3
5:16:55 - 36. Evaluating a model part 1 ("visualize, visualize, visualize")
5:24:20 - 37. Evaluating a model part 2 (the 3 datasets)
5:35:22 - 38. Evaluating a model part 3 (model summary)
5:52:39 - 39. Evaluating a model part 4 (visualizing layers)
5:59:56 - 40. Evaluating a model part 5 (visualizing predictions)
6:09:11 - 41. Evaluating a model part 6 (regression evaluation metrics)
6:17:19 - 42. Evaluating a regression model part 7 (MAE)
6:23:10 - 43. Evaluating a regression model part 8 (MSE)
6:26:29 - 44. Modelling experiments part 1 (start with a simple model)
6:40:19 - 45. Modelling experiments part 2 (increasing complexity)
6:51:49 - 46. Comparing and tracking experiments
7:02:08 - 47. Saving a model
7:11:32 - 48. Loading a saved model
7:21:49 - 49. Saving and downloading files from Google Colab
7:28:07 - 50. Putting together what we've learned 1 (preparing a dataset)
7:41:38 - 51. Putting together what we've learned 2 (building a regression model)
7:55:01 - 52. Putting together what we've learned 3 (improving our regression model)
8:10:45 - [Code] 53. Preprocessing data 1 (concepts)
8:20:21 - [Code] 54. Preprocessing data 2 (normalizing data)
8:31:17 - [Code] 55. Preprocessing data 3 (fitting a model on normalized data)
8:38:57 - MODULE 2 START (neural network classification)
8:39:07 - [Keynote] 56. Introduction to neural network classification with TensorFlow
8:47:31 - [Keynote] 57. Classification inputs and outputs
8:54:08 - [Keynote] 58. Classification input and output tensor shapes
9:00:31 - [Keynote] 59. Typical architecture of a classification model
9:10:08 - 60. Creating and viewing classification data to model
9:21:39 - 61. Checking the input and output shapes of our classification data
9:26:17 - 62. Building a not very good classification model
9:38:28 - 63. Trying to improve our not very good classification model
9:47:42 - 64. Creating a function to visualize our model's not so good predictions
10:02:50 - 65. Making our poor classification model work for a regression dataset

#tensorflow #deeplearning #machinelearning
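As a taste of the first coding sections above (creating tensors, tf.Variable, random tensors, and matrix multiplication), here is a minimal sketch of the kind of code you'll write along with the video. The specific values (a 2x2 matrix, seed 42) are illustrative choices, not taken from the course; it assumes TensorFlow 2.x, as preinstalled in Google Colab.

```python
# A minimal sketch of the first tensor operations covered in Module 0:
# tf.constant, tf.Variable, random tensors, and matrix multiplication.
import tensorflow as tf

# A constant (immutable) tensor vs. a Variable (mutable) tensor
scalar = tf.constant(7)                      # rank-0 tensor (a scalar)
matrix = tf.constant([[1., 2.], [3., 4.]])   # rank-2 tensor (a matrix)
variable = tf.Variable([10, 7])              # values can be updated in place
variable[0].assign(3)                        # note: .assign(), not Python `=`

# Random tensors: set the global seed so results are reproducible
tf.random.set_seed(42)
random_tensor = tf.random.uniform(shape=(2, 2))

# Matrix multiplication: inner dimensions must match ((2,2) @ (2,2) -> (2,2))
product = tf.matmul(matrix, matrix)

print(scalar.ndim)        # 0
print(variable.numpy())   # [3 7]
print(product.numpy())    # [[ 7. 10.]
                          #  [15. 22.]]
```

The same `tf.matmul` call fails if the inner dimensions disagree (e.g. (2, 3) @ (2, 3)), which is exactly the kind of shape error the "Tensor troubleshooting" section walks through.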

detail
{'title': 'Learn TensorFlow and Deep Learning fundamentals with Python (code-first introduction) Part 1/2', 'heatmap': [{'end': 3694.824, 'start': 3317.145, 'weight': 0.769}, {'end': 16989.45, 'start': 16247.453, 'weight': 0.812}, {'end': 20685.74, 'start': 20310.318, 'weight': 0.704}, {'end': 27329.591, 'start': 26954.507, 'weight': 0.882}, {'end': 29175.187, 'start': 28800.954, 'weight': 0.754}, {'end': 30653.951, 'start': 30271.775, 'weight': 0.894}, {'end': 33609.277, 'start': 33230.384, 'weight': 0.754}, {'end': 36903.942, 'start': 36554.23, 'weight': 1}], 'summary': 'Series provides a 14-hour code-first introduction to tensorflow and deep learning, covering topics such as machine learning applications, tensor operations, neural network regression basics, model architecture, troubleshooting keras model, regression metrics, model management, classification model basics, and strategies for improving model performance.', 'chapters': [{'end': 72.751, 'segs': [{'end': 72.751, 'src': 'embed', 'start': 20.606, 'weight': 0, 'content': [{'end': 24.109, 'text': "first of all, if you're here to learn TensorFlow and deep learning, you've come to the right place.", 'start': 20.606, 'duration': 3.503}, {'end': 26.771, 'text': 'By the end of this two-part video series,', 'start': 24.669, 'duration': 2.102}, {'end': 34.978, 'text': "you'll have written hundreds of lines of TensorFlow code if you follow along with the coding videos and you'll have a code-first introduction to some of the main concepts in deep learning.", 'start': 26.771, 'duration': 8.207}, {'end': 41.816, 'text': 'Now, again, how I would recommend going through these videos is to kind of have the YouTube window on one side.', 'start': 36.212, 'duration': 5.604}, {'end': 43.977, 'text': "Once we get to the code part, you'll see it when it comes up.", 'start': 41.896, 'duration': 2.081}, {'end': 46.479, 'text': 'If you want to skip this intro, go to the timestamp below.', 'start': 44.238, 'duration': 2.241}, 
{'end': 53.304, 'text': 'And then have on the side, on the other side of your screen, is Google Colab or a Jupyter Notebook.', 'start': 46.499, 'duration': 6.805}, {'end': 54.405, 'text': "That's where we're going to code.", 'start': 53.324, 'duration': 1.081}, {'end': 57.527, 'text': "If you want all of the course materials, they're available on GitHub.", 'start': 55.105, 'duration': 2.422}, {'end': 60.441, 'text': "There'll be links below for everything, by the way.", 'start': 58.8, 'duration': 1.641}, {'end': 66.446, 'text': 'And if you need to ask a question, go to the Discussions tab on GitHub or leave a comment below.', 'start': 61.502, 'duration': 4.944}, {'end': 72.751, 'text': "If you do have a question about anything to do with the video, whether it's on the GitHub discussions or YouTube comment,", 'start': 66.926, 'duration': 5.825}], 'summary': 'Learn tensorflow and deep learning, write hundreds of lines of code for a code-first introduction to main concepts.', 'duration': 52.145, 'max_score': 20.606, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs820606.jpg'}], 'start': 0.109, 'title': 'Introduction to tensorflow and deep learning', 'summary': 'Provides an introduction to tensorflow and deep learning through a two-part video series lasting 14 hours. 
it emphasizes a code-first approach, resulting in the creation of extensive tensorflow code and a comprehensive understanding of key concepts in deep learning.', 'chapters': [{'end': 72.751, 'start': 0.109, 'title': 'Introduction to tensorflow and deep learning', 'summary': 'Provides an introduction to tensorflow and deep learning, offering a two-part video series with a total duration of 14 hours and a code-first approach, resulting in the creation of hundreds of lines of tensorflow code and a comprehensive understanding of key concepts in deep learning.', 'duration': 72.642, 'highlights': ['The chapter offers a two-part video series with a total duration of 14 hours, providing a code-first introduction to key concepts in deep learning and the creation of hundreds of lines of TensorFlow code.', 'The course materials are available on GitHub, and viewers can access additional resources and ask questions through the GitHub Discussions tab or YouTube comments section.', 'Viewers are recommended to have the YouTube window on one side and Google Colab or a Jupyter Notebook on the other side for an optimal learning experience.']}], 'duration': 72.642, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs8109.jpg', 'highlights': ['The chapter offers a two-part video series with a total duration of 14 hours, providing a code-first introduction to key concepts in deep learning and the creation of hundreds of lines of TensorFlow code.', 'The course materials are available on GitHub, and viewers can access additional resources and ask questions through the GitHub Discussions tab or YouTube comments section.', 'Viewers are recommended to have the YouTube window on one side and Google Colab or a Jupyter Notebook on the other side for an optimal learning experience.']}, {'end': 1889.005, 'segs': [{'end': 125.608, 'src': 'embed', 'start': 94.604, 'weight': 2, 'content': [{'end': 100.185, 'text': "So, if you like what you're going 
through in this video or in part two, and you want to sign up to the full version,", 'start': 94.604, 'duration': 5.581}, {'end': 106.547, 'text': 'which covers a lot more materials in the order of 20 plus more hours of TensorFlow code and other specific parts of deep learning,', 'start': 100.185, 'duration': 6.362}, {'end': 108.867, 'text': "there'll be a link to sign up in the description below.", 'start': 106.547, 'duration': 2.32}, {'end': 111.008, 'text': 'This is for real this time.', 'start': 110.188, 'duration': 0.82}, {'end': 114.109, 'text': 'Enjoy All right, all right, all right.', 'start': 111.968, 'duration': 2.141}, {'end': 125.608, 'text': "Are you ready? I hope you are because we're about to learn deep learning with, wait for it, TensorFlow.", 'start': 114.409, 'duration': 11.199}], 'summary': 'Enroll in the full version for 20+ hours of tensorflow code and deep learning materials.', 'duration': 31.004, 'max_score': 94.604, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs894604.jpg'}, {'end': 310.544, 'src': 'embed', 'start': 283.794, 'weight': 0, 'content': [{'end': 288.416, 'text': 'I mean, if you wanna do your own research, what deep learning is, go ahead and do that.', 'start': 283.794, 'duration': 4.622}, {'end': 293.959, 'text': "But again, with this course, we're gonna be focused on getting hands-on writing deep learning code.", 'start': 288.656, 'duration': 5.303}, {'end': 303.338, 'text': "So if we come here, what's the difference between traditional programming versus machine learning or deep learning programming.", 'start': 295.04, 'duration': 8.298}, {'end': 308.042, 'text': 'So with traditional programming, you might start with some inputs.', 'start': 304.118, 'duration': 3.924}, {'end': 310.544, 'text': 'You might code up a bunch of rules.', 'start': 308.963, 'duration': 1.581}], 'summary': 'Course focuses on hands-on deep learning coding, not theory or research.', 'duration': 
26.75, 'max_score': 283.794, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs8283794.jpg'}, {'end': 427.789, 'src': 'embed', 'start': 396.573, 'weight': 1, 'content': [{'end': 401.557, 'text': 'The next question to answer is why would we want to use machine learning or deep learning?', 'start': 396.573, 'duration': 4.984}, {'end': 404.917, 'text': 'So the good reason is why not?', 'start': 402.796, 'duration': 2.121}, {'end': 407.859, 'text': 'I mean you might have seen what machine learning is capable of.', 'start': 404.997, 'duration': 2.862}, {'end': 413.502, 'text': "You might have heard of the power of deep learning, the power of artificial intelligence, and just all the problems we've got in the world.", 'start': 407.879, 'duration': 5.623}, {'end': 417.263, 'text': 'Why not we just use it? Maybe.', 'start': 413.582, 'duration': 3.681}, {'end': 427.789, 'text': "But a better reason is for a complex problem, such as maybe we're trying to teach a self-driving car to drive.", 'start': 418.964, 'duration': 8.825}], 'summary': 'Use machine learning for complex problems like teaching a self-driving car to drive.', 'duration': 31.216, 'max_score': 396.573, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs8396573.jpg'}, {'end': 600.366, 'src': 'embed', 'start': 574.807, 'weight': 4, 'content': [{'end': 579.452, 'text': 'So yeah, if you think about using machine learning but you think you can code up a simple rule-based system,', 'start': 574.807, 'duration': 4.645}, {'end': 582.856, 'text': 'you should probably do the rule-based system rather than machine learning.', 'start': 579.452, 'duration': 3.404}, {'end': 589.94, 'text': 'So what is deep learning good for? 
Problems with long lists of rules.', 'start': 584.437, 'duration': 5.503}, {'end': 593.622, 'text': 'So when the traditional approach fails, machine learning slash deep learning.', 'start': 590.5, 'duration': 3.122}, {'end': 600.366, 'text': 'Again, whenever you see throughout this course machine learning slash deep learning, you can kind of think of them as the same thing.', 'start': 594.062, 'duration': 6.304}], 'summary': 'Consider using rule-based system over machine learning for simple tasks. deep learning is suitable for problems with long lists of rules.', 'duration': 25.559, 'max_score': 574.807, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs8574807.jpg'}, {'end': 889.899, 'src': 'embed', 'start': 865.962, 'weight': 5, 'content': [{'end': 874.048, 'text': 'So on the machine learning side of things, you might have the random forest, the naive bays, nearest neighbor, support vector machine, and many more.', 'start': 865.962, 'duration': 8.086}, {'end': 881.233, 'text': 'And since the advent of deep learning, these algorithms here are often referred to as shallow algorithms.', 'start': 874.748, 'duration': 6.485}, {'end': 884.235, 'text': 'Now, what that means for now is not too important.', 'start': 881.714, 'duration': 2.521}, {'end': 889.899, 'text': "I just want you to start getting familiar with some of the terms you're going to hear in the machine learning and deep learning world.", 'start': 884.576, 'duration': 5.323}], 'summary': 'Introduction to machine learning algorithms including random forest, naive bayes, nearest neighbor, and support vector machine.', 'duration': 23.937, 'max_score': 865.962, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs8865962.jpg'}, {'end': 1086.476, 'src': 'embed', 'start': 1059.841, 'weight': 6, 'content': [{'end': 1063.403, 'text': 'And so before data gets used with the neural network, it needs to be turned 
into numbers.', 'start': 1059.841, 'duration': 3.562}, {'end': 1067.085, 'text': 'Okay, remember our one sentence definition of machine learning?', 'start': 1063.583, 'duration': 3.502}, {'end': 1073.768, 'text': 'Machine learning algorithms are about turning data into numbers and finding patterns in those numbers.', 'start': 1067.905, 'duration': 5.863}, {'end': 1075.349, 'text': 'So this is our numbers here.', 'start': 1074.348, 'duration': 1.001}, {'end': 1083.454, 'text': 'Now we might feed these numbers here that represent our data into a neural network.', 'start': 1077.591, 'duration': 5.863}, {'end': 1086.476, 'text': 'If we come here, this is a simple neural network.', 'start': 1083.474, 'duration': 3.002}], 'summary': 'Machine learning: turning data into numbers to find patterns.', 'duration': 26.635, 'max_score': 1059.841, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs81059841.jpg'}, {'end': 1450.45, 'src': 'embed', 'start': 1419.357, 'weight': 7, 'content': [{'end': 1422.66, 'text': 'and these are often all referring to very similar things.', 'start': 1419.357, 'duration': 3.303}, {'end': 1434.169, 'text': "Now, when it comes to neural networks, we've seen the anatomy, but there's also a few different types of learning.", 'start': 1424.982, 'duration': 9.187}, {'end': 1443.596, 'text': 'The first one is supervised learning, semi-supervised learning, and then unsupervised learning, and transfer learning.', 'start': 1435.83, 'duration': 7.766}, {'end': 1450.45, 'text': 'whoa, and so supervised learning often involves having data and labels.', 'start': 1444.525, 'duration': 5.925}], 'summary': 'Neural networks involve supervised, semi-supervised, unsupervised, and transfer learning.', 'duration': 31.093, 'max_score': 1419.357, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs81419357.jpg'}, {'end': 1629.376, 'src': 'embed', 'start': 1595.114, 
'weight': 3, 'content': [{'end': 1597.955, 'text': 'Now, what is deep learning actually used for??', 'start': 1595.114, 'duration': 2.841}, {'end': 1604.996, 'text': "Well, we'll return back to this comment we saw before, because this is actually a beautiful comment.", 'start': 1599.615, 'duration': 5.381}, {'end': 1606.697, 'text': 'Thank you very much, Yashui.', 'start': 1605.517, 'duration': 1.18}, {'end': 1609, 'text': "Hopefully I'm saying that right.", 'start': 1607.938, 'duration': 1.062}, {'end': 1617.11, 'text': 'I think you can use ML for literally anything as long as you can convert it into numbers and program it to find patterns.', 'start': 1609.921, 'duration': 7.189}, {'end': 1621.612, 'text': 'literally it could be anything, any input or output.', 'start': 1618.01, 'duration': 3.602}, {'end': 1629.376, 'text': 'again, lots of emphasis on input or output from the universe, and so I want you to keep this sentence in your mind,', 'start': 1621.612, 'duration': 7.764}], 'summary': 'Deep learning can be used for any input or output, as long as it can be converted into numbers and programmed to find patterns.', 'duration': 34.262, 'max_score': 1595.114, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs81595114.jpg'}, {'end': 1678.945, 'src': 'embed', 'start': 1652.908, 'weight': 8, 'content': [{'end': 1660.371, 'text': 'you start to look at almost everything through the lens of turning something into numbers and finding patterns in those numbers.', 'start': 1652.908, 'duration': 7.463}, {'end': 1668.355, 'text': "Now, let's have a look at some common deep learning use cases that you've probably experienced in your day to day life.", 'start': 1661.332, 'duration': 7.023}, {'end': 1678.945, 'text': 'So the first one is recommendation, translation, speech recognition, computer vision, natural language processing.', 'start': 1669.838, 'duration': 9.107}], 'summary': 'Analyzing data for patterns; deep 
learning use cases: recommendation, translation, speech recognition, computer vision, natural language processing.', 'duration': 26.037, 'max_score': 1652.908, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs81652908.jpg'}], 'start': 72.751, 'title': 'Deep learning and machine learning applications', 'summary': "Covers deep learning with tensorflow, machine learning applications and limitations, deep learning applications and considerations, common algorithms in machine learning and deep learning, neural networks' data representation, and types of learning in neural networks. it also emphasizes the potential applications of machine learning in solving complex problems and discusses the challenges and considerations in using deep learning.", 'chapters': [{'end': 467.293, 'start': 72.751, 'title': 'Deep learning with tensorflow', 'summary': 'Introduces deep learning and tensorflow, explaining the difference between traditional programming and machine learning, and the potential applications of machine learning in solving complex problems, with an invitation to a full course on tensorflow and deep learning available on zerotomastery.io.', 'duration': 394.542, 'highlights': ['The chapter introduces deep learning and TensorFlow, explaining the difference between traditional programming and machine learning. Provides a clear introduction to deep learning and TensorFlow, distinguishing it from traditional programming and machine learning.', 'The potential applications of machine learning in solving complex problems are discussed, using the example of teaching a self-driving car to drive. Illustrates the potential of machine learning in solving complex problems, such as teaching a self-driving car to drive.', 'An invitation to a full course on TensorFlow and deep learning available on zerotomastery.io is extended, offering 20 plus more hours of TensorFlow code and other specific parts of deep learning. 
Promotes the full course on TensorFlow and deep learning at zerotomastery.io, offering extended content beyond the current video.']}, {'end': 667.451, 'start': 468.155, 'title': 'Machine learning applications and limitations', 'summary': 'Discusses the versatility of machine learning for converting inputs into numbers and finding patterns, while highlighting its limitations in complex software products and the benefits of deep learning in handling problems with long lists of rules, continually changing environments, and large collections of data.', 'duration': 199.296, 'highlights': ['The versatility of machine learning for converting inputs into numbers and finding patterns Machine learning can be used for literally anything as long as inputs can be converted into numbers and programmed to find patterns.', "Limitations of machine learning in complex software products It is recommended to build a simple rule-based system or a complex software product that doesn't require machine learning instead of using machine learning.", 'Benefits of deep learning in handling problems with long lists of rules, continually changing environments, and large collections of data Deep learning is beneficial for problems with long lists of rules, continually changing environments, and large collections of data, as it can adapt to new scenarios and discover insights within large data collections.']}, {'end': 848.392, 'start': 667.451, 'title': 'Deep learning applications and considerations', 'summary': 'Highlights the challenges and considerations in using deep learning, including the interpretability of patterns, predictability of outputs, need for large data, and the suitability of structured and unstructured data for deep learning.', 'duration': 180.941, 'highlights': ['Deep learning models produce patterns that are typically uninterpretable by humans, making interpretability a challenge. 
N/A', 'The unpredictability of outputs in deep learning models may result in errors, making it potentially unsuitable when errors are unacceptable. N/A', 'Deep learning models require a large amount of data to produce great results, posing a challenge when data availability is limited. N/A', 'Deep learning typically performs better on unstructured data, such as natural language text, images, and sound waves, while traditional machine learning algorithms perform best on structured data found in spreadsheets. N/A']}, {'end': 1019.288, 'start': 848.712, 'title': 'Common algorithms in machine learning and deep learning', 'summary': 'Introduces common machine learning and deep learning algorithms including random forest, naive bays, nearest neighbor, support vector machine, neural networks, fully connected neural networks, convolutional neural networks, recurrent neural networks, and the transformer architecture. the focus is on building these neural networks using tensorflow and understanding their applications for structured and unstructured data.', 'duration': 170.576, 'highlights': ['The chapter covers common machine learning algorithms such as random forest, naive bays, nearest neighbor, and support vector machine, as well as deep learning algorithms including neural networks, fully connected neural networks, convolutional neural networks, recurrent neural networks, and the transformer architecture.', 'The course will focus on building neural networks using TensorFlow, which are considered the foundational type of neural networks that the deep learning field has been built upon.', 'Neural network architectures are typically better performing on unstructured data, while machine learning algorithms are typically better performing on structured data.', 'The chapter emphasizes the importance of getting familiar with these algorithms in order to understand the terminology and concepts in the machine learning and deep learning world.', 'The next video will provide an 
overview of what neural networks are, including a definition and explanation of their composition as a network of artificial neurons or nodes.']}, {'end': 1419.357, 'start': 1022.156, 'title': 'Neural networks: data to representation', 'summary': 'Discusses the process of turning data into numerical encoding, feeding it into a neural network to learn representations, and converting the outputs into human understandable form, with an emphasis on the anatomy of a neural network.', 'duration': 397.201, 'highlights': ['The process of turning data into numerical encoding is a crucial step before feeding it into a neural network, which then learns representations or patterns in the data, and eventually outputs a learned representation or prediction probabilities.', 'The anatomy of a neural network involves input layers, hidden layers for learning patterns in the data, and an output layer for producing learned representations or prediction probabilities.', "The neural network's task involves recognizing images, discovering meaning from text, or transcribing sound waves into text, where the appropriate neural network must be chosen for each specific problem."]}, {'end': 1889.005, 'start': 1419.357, 'title': 'Types of learning in neural networks', 'summary': 'Explains the concepts of supervised learning, semi-supervised learning, unsupervised learning, and transfer learning in neural networks, emphasizing the importance of converting data into numbers and finding patterns, with examples of common deep learning use cases like recommendation, translation, speech recognition, computer vision, and natural language processing.', 'duration': 469.648, 'highlights': ['The chapter explains the concepts of supervised learning, semi-supervised learning, unsupervised learning, and transfer learning in neural networks. 
It details the different types of learning in neural networks, including supervised learning, semi-supervised learning, unsupervised learning, and transfer learning.', 'Emphasizes the importance of converting data into numbers and finding patterns. It emphasizes the significance of converting data into numerical format and finding patterns within the data, which is essential for machine learning and deep learning.', 'Provides examples of common deep learning use cases like recommendation, translation, speech recognition, computer vision, and natural language processing. It provides examples of common deep learning applications such as recommendation systems, translation, speech recognition, computer vision, and natural language processing, illustrating the practical applications of deep learning in everyday life.']}], 'duration': 1816.254, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs872751.jpg', 'highlights': ['The chapter introduces deep learning and TensorFlow, explaining the difference between traditional programming and machine learning.', 'The potential applications of machine learning in solving complex problems are discussed, using the example of teaching a self-driving car to drive.', 'An invitation to a full course on TensorFlow and deep learning available on zerotomastery.io is extended, offering 20 plus more hours of TensorFlow code and other specific parts of deep learning.', 'Machine learning can be used for literally anything as long as inputs can be converted into numbers and programmed to find patterns.', 'Benefits of deep learning in handling problems with long lists of rules, continually changing environments, and large collections of data.', 'The chapter covers common machine learning algorithms such as random forest, naive bays, nearest neighbor, and support vector machine, as well as deep learning algorithms including neural networks, fully connected neural networks, convolutional 
neural networks, recurrent neural networks, and the transformer architecture.', 'The process of turning data into numerical encoding is a crucial step before feeding it into a neural network, which then learns representations or patterns in the data, and eventually outputs a learned representation or prediction probabilities.', 'The chapter explains the concepts of supervised learning, semi-supervised learning, unsupervised learning, and transfer learning in neural networks.', 'Provides examples of common deep learning use cases like recommendation, translation, speech recognition, computer vision, and natural language processing.']}, {'end': 4328.959, 'segs': [{'end': 2020.491, 'src': 'embed', 'start': 1993.743, 'weight': 0, 'content': [{'end': 1997.405, 'text': 'There we go, automatic speech recognition, image recognition, natural language processing.', 'start': 1993.743, 'duration': 3.662}, {'end': 1999.687, 'text': "But there's one I'm specifically looking for.", 'start': 1997.806, 'duration': 1.881}, {'end': 2001.829, 'text': 'Now this is DeepMind.', 'start': 2000.348, 'duration': 1.481}, {'end': 2005.131, 'text': 'Deep stands for deep learning.', 'start': 2003.47, 'duration': 1.661}, {'end': 2007.954, 'text': 'Now this is a deep learning research company.', 'start': 2005.572, 'duration': 2.382}, {'end': 2013.278, 'text': 'Boom, AlphaFold, a solution to a 50-year-old grand challenge in biology.', 'start': 2008.494, 'duration': 4.784}, {'end': 2020.491, 'text': "We're not gonna go too much into this, but this is possibly one of the biggest breakthroughs in AI powered by deep learning.", 'start': 2013.746, 'duration': 6.745}], 'summary': "Deepmind's alphafold solves a 50-year-old biology challenge, a significant ai breakthrough.", 'duration': 26.748, 'max_score': 1993.743, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs81993743.jpg'}, {'end': 2564.08, 'src': 'embed', 'start': 2536.21, 'weight': 1, 
'content': [{'end': 2539.116, 'text': "particularly using Google's own TensorFlow software.", 'start': 2536.21, 'duration': 2.906}, {'end': 2540.198, 'text': 'Ah, okay.', 'start': 2539.536, 'duration': 0.662}, {'end': 2541.961, 'text': "So this is what we're gonna get familiar with.", 'start': 2540.618, 'duration': 1.343}, {'end': 2552.074, 'text': 'And so, If you think of a TPU, if graphics processing units are fast at crunching numbers, well, a TPU is probably even faster.', 'start': 2542.502, 'duration': 9.572}, {'end': 2556.336, 'text': "But we're going to see throughout this course how we can get access to these chips.", 'start': 2552.674, 'duration': 3.662}, {'end': 2564.08, 'text': "If you don't have, I mean, unless you're Google, you probably don't have this in your computer right now or in your bedroom or something like that.", 'start': 2556.396, 'duration': 7.684}], 'summary': "Introduction to google's tensorflow software and tpus for faster processing.", 'duration': 27.87, 'max_score': 2536.21, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs82536210.jpg'}, {'end': 2658.036, 'src': 'embed', 'start': 2629.224, 'weight': 2, 'content': [{'end': 2632.689, 'text': 'And then our neural network outputs some kind of representation outputs.', 'start': 2629.224, 'duration': 3.465}, {'end': 2635.714, 'text': 'And then we convert those into something that we can understand.', 'start': 2633.17, 'duration': 2.544}, {'end': 2642.864, 'text': 'Now, the secret here is, is that these are tensors.', 'start': 2636.855, 'duration': 6.009}, {'end': 2654.152, 'text': 'Whoa And so the most basic definition I can think of for a tensor is some way or some numerical way to represent information.', 'start': 2643.621, 'duration': 10.531}, {'end': 2658.036, 'text': "Now what that information is, I mean, that's totally up to you.", 'start': 2654.713, 'duration': 3.323}], 'summary': 'Neural network outputs representation as 
tensors, enabling numerical information representation.', 'duration': 28.812, 'max_score': 2629.224, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs82629224.jpg'}, {'end': 3117.639, 'src': 'embed', 'start': 3091.746, 'weight': 3, 'content': [{'end': 3096.112, 'text': "So we've talked about a lot of different concepts, deep learning, neural networks, TensorFlow, tensors.", 'start': 3091.746, 'duration': 4.366}, {'end': 3102.18, 'text': "How should you approach trying to learn these things? Well, here's some guidelines.", 'start': 3096.812, 'duration': 5.368}, {'end': 3105.797, 'text': 'write code lots of it.', 'start': 3103.396, 'duration': 2.401}, {'end': 3106.837, 'text': 'follow along.', 'start': 3105.797, 'duration': 1.04}, {'end': 3113.018, 'text': "so you're gonna see me writing a lot of code and I'm 100 percent going to make a lot of mistakes.", 'start': 3106.837, 'duration': 6.181}, {'end': 3117.639, 'text': 'so follow along, if you can, and make the mistakes with me.', 'start': 3113.018, 'duration': 4.621}], 'summary': 'Guidelines for learning: write lots of code, make mistakes, follow along.', 'duration': 25.893, 'max_score': 3091.746, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs83091746.jpg'}, {'end': 3694.824, 'src': 'heatmap', 'start': 3317.145, 'weight': 0.769, 'content': [{'end': 3325.373, 'text': 'if you want to learn something, i find, aside from doing it yourself or replicating it yourself, the next best way, or possibly even better,', 'start': 3317.145, 'duration': 8.228}, {'end': 3327.014, 'text': 'is to teach someone else.', 'start': 3325.373, 'duration': 1.641}, {'end': 3332.879, 'text': "so if there's a concept that you've learned in this course and you want to really nail it, you want to get better at it.", 'start': 3327.014, 'duration': 5.865}, {'end': 3335.962, 'text': 'figure out how you can explain that to someone else.', 
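The "numerical way to represent information" idea above can be sketched in a few lines of TensorFlow. This is a minimal illustration (the labels and `depth` value are placeholders, not from the transcript), assuming TensorFlow 2.x:

```python
import tensorflow as tf

# Encode categorical information numerically (e.g. 0 = one class, 1 = another),
# then turn that numerical encoding into a tensor a network could consume.
labels = [0, 1, 1]
label_tensor = tf.constant(labels)

# A one-hot encoding is one common numerical representation of categories.
one_hot = tf.one_hot(label_tensor, depth=2)
print(one_hot)
```

The network never sees "ramen" or "spaghetti" as words; it sees tensors like this.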
'start': 3332.879, 'duration': 3.083}, {'end': 3343.207, 'text': "so maybe you write an article sharing how you've learned how to turn data into tensors, write a deep learning model with tensorflow,", 'start': 3335.962, 'duration': 7.245}, {'end': 3352.194, 'text': 'figure out patterns in those tensors and then turn those representation outputs from that neural network into some sort of human understandable output.', 'start': 3343.207, 'duration': 8.987}, {'end': 3353.836, 'text': 'maybe you want to share that with others.', 'start': 3352.194, 'duration': 1.642}, {'end': 3356.518, 'text': "that's going to be a great way to really cement your knowledge.", 'start': 3353.836, 'duration': 2.682}, {'end': 3360.301, 'text': 'Avoid the following things.', 'start': 3358.86, 'duration': 1.441}, {'end': 3362.062, 'text': 'Overthinking the process.', 'start': 3360.821, 'duration': 1.241}, {'end': 3367.245, 'text': "So, I can't stress enough, this comes back to our number one motto.", 'start': 3363.123, 'duration': 4.122}, {'end': 3368.826, 'text': 'If in doubt, run the code.', 'start': 3367.446, 'duration': 1.38}, {'end': 3376.371, 'text': "Again, we're going to be learning so many different concepts, you're probably going to be overwhelmed at different points, but don't worry.", 'start': 3369.567, 'duration': 6.804}, {'end': 3387.078, 'text': "Everyone who's learned anything has gone through the trouble of basically creating new patterns in their brain to understand the new concept that they're learning.", 'start': 3377.092, 'duration': 9.986}, {'end': 3391.491, 'text': "So, If you're overthinking the process, you're going to hold yourself back.", 'start': 3387.598, 'duration': 3.893}, {'end': 3395.473, 'text': "And avoid the I can't learn it mentality.", 'start': 3392.451, 'duration': 3.022}, {'end': 3397.094, 'text': "That's bullshit.", 'start': 3396.454, 'duration': 0.64}, {'end': 3398.635, 'text': 'You can learn it.', 'start': 3398.034, 'duration': 0.601}, {'end': 
3401.116, 'text': 'All right, enough talking.', 'start': 3399.816, 'duration': 1.3}, {'end': 3403.758, 'text': 'I love that fire.', 'start': 3402.877, 'duration': 0.881}, {'end': 3404.578, 'text': "Let's do it again.", 'start': 3403.938, 'duration': 0.64}, {'end': 3407.24, 'text': "Let's code.", 'start': 3406.66, 'duration': 0.58}, {'end': 3409.691, 'text': "We've got an overview of deep learning.", 'start': 3408.39, 'duration': 1.301}, {'end': 3412.413, 'text': "We've got an overview of TensorFlow and tensors.", 'start': 3410.131, 'duration': 2.282}, {'end': 3414.074, 'text': "It's time to get hands on.", 'start': 3413.174, 'duration': 0.9}, {'end': 3415.295, 'text': 'This is very exciting.', 'start': 3414.094, 'duration': 1.201}, {'end': 3416.937, 'text': "So I'm going to open up my web browser.", 'start': 3415.536, 'duration': 1.401}, {'end': 3422.901, 'text': "This is the tool we're going to be using throughout basically the entire course, is Google Colab.", 'start': 3417.557, 'duration': 5.344}, {'end': 3427.765, 'text': 'So if we come here to colab.research.google.com.', 'start': 3423.642, 'duration': 4.123}, {'end': 3431.928, 'text': "If you're unfamiliar with Google Colab, check out the Colab overview.", 'start': 3428.386, 'duration': 3.542}, {'end': 3437.553, 'text': 'Or if you just go to colab.research.google.com, you can go to the examples tab.', 'start': 3432.489, 'duration': 5.064}, {'end': 3443.437, 'text': 'here, and you can open up a whole bunch of different tutorials to go through and learn about Google Colab.', 'start': 3438.134, 'duration': 5.303}, {'end': 3449.079, 'text': "If you're just starting out, I'd check out Overview of Collaboratory Features or the Markdown Guide.", 'start': 3444.157, 'duration': 4.922}, {'end': 3457.263, 'text': "But as I said, if you want another overview, check out the overview video because we're going to be using Colab a whole bunch throughout this course.", 'start': 3449.58, 'duration': 7.683}, {'end': 3459.184, 
'text': "So let's get started.", 'start': 3458.284, 'duration': 0.9}, {'end': 3462.346, 'text': "I'm going to open up a new notebook here because we're going to.", 'start': 3459.204, 'duration': 3.142}, {'end': 3466.708, 'text': "I don't know if you can hear that, but I'm rubbing my hands together because I'm so excited.", 'start': 3462.346, 'duration': 4.362}, {'end': 3470.64, 'text': "we're going to get hands on with TensorFlow.", 'start': 3467.639, 'duration': 3.001}, {'end': 3473.701, 'text': 'So some of the most fundamental functions of TensorFlow.', 'start': 3471.16, 'duration': 2.541}, {'end': 3476.281, 'text': "And let's give our notebook a title here.", 'start': 3474.461, 'duration': 1.82}, {'end': 3482.583, 'text': "So let's go 00 TensorFlow Fundamentals.", 'start': 3476.301, 'duration': 6.282}, {'end': 3489.945, 'text': "The reason I'm doing 00 at the start is because we're gonna by the end of this course have probably about 10 or so of these notebooks.", 'start': 3483.283, 'duration': 6.662}, {'end': 3493.566, 'text': 'So the 00 at the front just lets us know what order they come in.', 'start': 3490.425, 'duration': 3.141}, {'end': 3495.924, 'text': 'So TensorFlow fundamentals.', 'start': 3494.383, 'duration': 1.541}, {'end': 3510.412, 'text': "And now let's put in here in this notebook, we're going to cover some of the most fundamental concepts of tensors using TensorFlow.", 'start': 3496.424, 'duration': 13.988}, {'end': 3520.637, 'text': 'Beautiful And we can put a little hashtag at the front and then what I did there was I did command mm and turned it into a markdown cell.', 'start': 3511.272, 'duration': 9.365}, {'end': 3523.961, 'text': 'And then if I press shift and enter, we get another code cell here.', 'start': 3520.657, 'duration': 3.304}, {'end': 3529.787, 'text': "Beautiful And so to enable us to write code, we're going to have to connect here.", 'start': 3524.581, 'duration': 5.206}, {'end': 3536.094, 'text': "So just press the connect tab 
there and let's get a little outline of what we're actually going to do.", 'start': 3530.368, 'duration': 5.726}, {'end': 3542.65, 'text': 'So more specifically, This is what I do with most of my notebooks.', 'start': 3536.834, 'duration': 5.816}, {'end': 3548.515, 'text': "Whenever I come to something before writing code, you're going to hear me say write code as much as possible.", 'start': 3543.231, 'duration': 5.284}, {'end': 3553.358, 'text': "But I just like to give myself a little bit of an outline so I know the direction of where I'm heading.", 'start': 3548.915, 'duration': 4.443}, {'end': 3561.284, 'text': "So we're going to cover, what do we have? Introduction to tensors.", 'start': 3554.139, 'duration': 7.145}, {'end': 3565.608, 'text': 'We might as well get some information from tensors.', 'start': 3563.586, 'duration': 2.022}, {'end': 3567.989, 'text': "If none of this makes sense, don't worry.", 'start': 3566.328, 'duration': 1.661}, {'end': 3569.911, 'text': "We're going to code it up by hand.", 'start': 3568.11, 'duration': 1.801}, {'end': 3572.696, 'text': 'manipulating tensors.', 'start': 3571.215, 'duration': 1.481}, {'end': 3580.058, 'text': "manipulating tensors, so changing the information that's stored within tensors, and then we're going to go tensors and numpy.", 'start': 3572.696, 'duration': 7.362}, {'end': 3591.822, 'text': "if you've ever used numpy, tensorflow you'll find has very, very similar features to numpy using @tf.function,", 'start': 3580.058, 'duration': 11.764}, {'end': 3599.043, 'text': 'which is a way to speed up your regular Python functions in TensorFlow.', 'start': 3591.822, 'duration': 7.221}, {'end': 3609.51, 'text': 'because, remember, the whole premise of using TensorFlow is so that we can use GPUs with TensorFlow or TPUs to do faster numerical computing.', 'start': 3599.043, 'duration': 10.467}, {'end': 3610.831, 'text': "That's what we're after here.", 'start': 3609.731, 'duration': 1.1}, {'end': 3616.035,
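The outline above mentions `@tf.function`, a way to speed up regular Python functions by compiling them into TensorFlow graphs. A minimal sketch (the function and values are illustrative), assuming TensorFlow 2.x:

```python
import tensorflow as tf

# Decorating a Python function with @tf.function traces it into a
# TensorFlow graph, which can run faster, especially on GPUs/TPUs.
@tf.function
def square_sum(x, y):
    return tf.reduce_sum(x ** 2 + y ** 2)

a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])
print(square_sum(a, b))  # 1+4+9+16+25+36 = 91.0
```

The decorated function is called exactly like the plain Python version; the graph compilation happens behind the scenes.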
'text': "And at the end, we're gonna have a few exercises to try for yourself.", 'start': 3611.652, 'duration': 4.383}, {'end': 3621.418, 'text': "Alrighty, let's just jump straight in.", 'start': 3618.376, 'duration': 3.042}, {'end': 3623.44, 'text': "I'm gonna put another little heading here.", 'start': 3622.179, 'duration': 1.261}, {'end': 3630.311, 'text': "Introduction to Tensors, and then I'm gonna press Command-MM to turn that into Markdown.", 'start': 3624.73, 'duration': 5.581}, {'end': 3633.512, 'text': "Now for another cell here, I'm gonna press Command-MB.", 'start': 3630.732, 'duration': 2.78}, {'end': 3637.653, 'text': "Oh, I didn't press Escape while I had this cell highlighted.", 'start': 3634.292, 'duration': 3.361}, {'end': 3641.514, 'text': 'So Escape, Command-MB will give me a new cell.', 'start': 3638.153, 'duration': 3.361}, {'end': 3648.096, 'text': "Now I'm saying Command, however, if you're on Windows, it's probably Control, because I'm on a Mac, it's Command for me.", 'start': 3641.914, 'duration': 6.182}, {'end': 3654.455, 'text': 'So the first thing to do is to use TensorFlow is that we have to import TensorFlow.', 'start': 3649.192, 'duration': 5.263}, {'end': 3661.679, 'text': "Now I want you to try and follow along as much as you can with these videos, right? 
When I'm writing code, I want you to be writing code by my side.", 'start': 3654.595, 'duration': 7.084}, {'end': 3663.64, 'text': "It's like we're partner coders here.", 'start': 3662.259, 'duration': 1.381}, {'end': 3667.442, 'text': "And if you can't keep up because I'm writing a little bit too fast, that's all right.", 'start': 3664.4, 'duration': 3.042}, {'end': 3669.943, 'text': "I've had a lot of practice writing TensorFlow code.", 'start': 3667.982, 'duration': 1.961}, {'end': 3672.684, 'text': "And so again, I've spelled TensorFlow wrong.", 'start': 3670.543, 'duration': 2.141}, {'end': 3675.306, 'text': 'Maybe you catch my errors before I do.', 'start': 3673.625, 'duration': 1.681}, {'end': 3680.178, 'text': "If you need to slow down the video or watch something again, that's perfectly fine.", 'start': 3676.536, 'duration': 3.642}, {'end': 3685.08, 'text': "I'm probably going to need to slow down my code so I don't write as many typos.", 'start': 3681.238, 'duration': 3.842}, {'end': 3687.161, 'text': "So this is how we're going to import TensorFlow.", 'start': 3685.6, 'duration': 1.561}, {'end': 3691.503, 'text': 'TensorFlow becomes the alias tf in Python.', 'start': 3688.021, 'duration': 3.482}, {'end': 3694.824, 'text': 'tf is basically universal.', 'start': 3692.263, 'duration': 2.561}], 'summary': "Teaching others and hands-on practice are effective for learning, avoid overthinking and 'i can't learn' mentality. 
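The import step the transcript walks through, with `tf` as the near-universal alias, looks like this; a minimal sketch assuming TensorFlow 2.x is installed:

```python
# The conventional import: TensorFlow is almost always aliased to `tf`.
import tensorflow as tf

# Printing the version is a quick sanity check that the import worked.
print(tf.__version__)
```

In Google Colab, TensorFlow comes pre-installed, so this cell should run without any setup.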
getting started with tensorflow fundamentals in google colab.", 'duration': 377.679, 'max_score': 3317.145, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs83317145.jpg'}, {'end': 3407.24, 'src': 'embed', 'start': 3377.092, 'weight': 4, 'content': [{'end': 3387.078, 'text': "Everyone who's learned anything has gone through the trouble of basically creating new patterns in their brain to understand the new concept that they're learning.", 'start': 3377.092, 'duration': 9.986}, {'end': 3391.491, 'text': "So, If you're overthinking the process, you're going to hold yourself back.", 'start': 3387.598, 'duration': 3.893}, {'end': 3395.473, 'text': "And avoid the I can't learn it mentality.", 'start': 3392.451, 'duration': 3.022}, {'end': 3397.094, 'text': "That's bullshit.", 'start': 3396.454, 'duration': 0.64}, {'end': 3398.635, 'text': 'You can learn it.', 'start': 3398.034, 'duration': 0.601}, {'end': 3401.116, 'text': 'All right, enough talking.', 'start': 3399.816, 'duration': 1.3}, {'end': 3403.758, 'text': 'I love that fire.', 'start': 3402.877, 'duration': 0.881}, {'end': 3404.578, 'text': "Let's do it again.", 'start': 3403.938, 'duration': 0.64}, {'end': 3407.24, 'text': "Let's code.", 'start': 3406.66, 'duration': 0.58}], 'summary': "Overthinking learning can hold you back. avoid 'i can't learn it' mentality. 
embrace coding with confidence.", 'duration': 30.148, 'max_score': 3377.092, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs83377092.jpg'}], 'start': 1889.065, 'title': 'Deep learning and tensorflow applications', 'summary': "Covers using deep learning for spam detection, tensorflow's significance in machine learning, gpu/tpu acceleration for deep learning, and an introduction to google colab as the primary tool for deep learning, encompassing fundamental concepts of tensors in tensorflow.", 'chapters': [{'end': 1944.59, 'start': 1889.065, 'title': 'Deep learning for spam detection', 'summary': 'Discusses using sequence-to-sequence deep learning algorithms for classifying emails as spam or not spam, and the concept of classification and regression in this context.', 'duration': 55.525, 'highlights': ["The premise here is if sequence-to-sequence doesn't make much sense, just think about it like this. You have a sequence of words and you want to translate or transform that into another sequence.", "You might have a sequence of sound waves and you're trying to convert them into another sequence of words.", 'Classification is for determining if an email is spam or not, while regression is used for predicting a number.']}, {'end': 2433.399, 'start': 1945.413, 'title': 'Deep learning and tensorflow', 'summary': "Discusses the applications of deep learning, including alphafold's breakthrough in biology, and the significance of tensorflow as an end-to-end machine learning platform, enabling fast deep learning code writing and model deployment with powerful experimentation for research.", 'duration': 487.986, 'highlights': ["AlphaFold's breakthrough in biology is one of the biggest breakthroughs in AI powered by deep learning. 
AlphaFold's solution to a 50-year-old grand challenge in biology is a significant breakthrough in AI powered by deep learning.", "TensorFlow is an end-to-end machine learning platform enabling fast deep learning code writing and model deployment with powerful experimentation for research, such as rebuilding research like DeepMind's using TensorFlow-powered deep learning models.", 'Deep learning can be used for a range of different problems, including protein folding, and TensorFlow allows access to pre-built deep learning models and resources like TensorFlow Hub for transfer learning.']}, {'end': 2978.172, 'start': 2433.399, 'title': 'Understanding tensorflow and tensors', 'summary': 'Discusses the significance of gpus/tpus in accelerating numerical calculations for deep learning, explains the concept of tensors in tensorflow, and outlines the topics to be covered in the course, including tensorflow basics, model building, evaluation, and deployment.', 'duration': 544.773, 'highlights': ['GPUs/TPUs are crucial for accelerating numerical calculations for deep learning, with TPUs being even faster than GPUs.', 'Tensors are the fundamental data representation in TensorFlow, serving as numerical encodings for inputs and outputs in neural networks. 
Tensors serve as the fundamental data representation in TensorFlow, encoding inputs and outputs in neural networks.', 'The course will cover topics including TensorFlow basics, pre-processing data into tensors, building and using deep learning models, model fitting, evaluation, saving/loading models, and making predictions on custom data. The course will cover TensorFlow basics, pre-processing data into tensors, building and using deep learning models, model fitting, evaluation, saving/loading models, and making predictions on custom data.']}, {'end': 3407.24, 'start': 2978.432, 'title': 'Deep learning: experiment, visualize, and share', 'summary': 'Emphasizes the importance of experimentation, visualization, and sharing in deep learning, with a focus on learning by writing code, running experiments, and visualizing data, and encourages sharing knowledge and avoiding overthinking.', 'duration': 428.808, 'highlights': ['The chapter emphasizes the importance of experimentation, visualization, and sharing in deep learning, with a focus on learning by writing code, running experiments, and visualizing data. Importance of experimentation, visualization, and sharing in deep learning, focus on learning by writing code, running experiments, and visualizing data.', 'Encourages sharing knowledge and avoiding overthinking in the learning process. Encouragement to share knowledge and avoid overthinking in the learning process.', 'Emphasizes the importance of asking questions and doing exercises to enhance learning. 
Importance of asking questions and doing exercises to enhance learning.']}, {'end': 4328.959, 'start': 3408.39, 'title': 'Deep learning overview and tensorflow fundamentals', 'summary': 'Provides an overview of deep learning and tensorflow, introduces google colab as the primary tool, and covers fundamental concepts of tensors and their creation in tensorflow, including the number of dimensions and data type specification.', 'duration': 920.569, 'highlights': ['The chapter introduces Google Colab as the primary tool for the course, providing an overview and guidance on getting started with it, along with a tutorial on its features and usage. N/A', 'The chapter covers fundamental concepts of tensors, including creating tensors using TensorFlow, understanding number of dimensions (ndim), and exploring data type specification with tf.constant. N/A', 'The chapter demonstrates the creation of different types of tensors, such as scalar, vector, and matrix, and explains their shapes, data types, and number of dimensions. 
N/A']}], 'duration': 2439.894, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs81889065.jpg', 'highlights': ["AlphaFold's solution to a 50-year-old grand challenge in biology is a significant breakthrough in AI powered by deep learning.", 'TPUs and GPUs are used to accelerate numerical calculations for deep learning, with TPUs being faster than GPUs.', 'Tensors serve as the fundamental data representation in TensorFlow, encoding inputs and outputs in neural networks.', 'Importance of experimentation, visualization, and sharing in deep learning, focus on learning by writing code, running experiments, and visualizing data.', 'Encouragement to share knowledge and avoid overthinking in the learning process.']}, {'end': 5814.083, 'segs': [{'end': 4360.369, 'src': 'embed', 'start': 4328.959, 'weight': 3, 'content': [{'end': 4330.6, 'text': 'and then this one has to have two.', 'start': 4328.959, 'duration': 1.641}, {'end': 4336.505, 'text': 'now you see how tedious it is creating tensors from like with your hands.', 'start': 4330.6, 'duration': 5.905}, {'end': 4343.199, 'text': "I mean this is why it's so helpful for TensorFlow to create tensors for us, as we'll see in future modules.", 'start': 4337.355, 'duration': 5.844}, {'end': 4348.162, 'text': "Hopefully, I've got all these little square brackets and commas in the right place.", 'start': 4343.279, 'duration': 4.883}, {'end': 4353.106, 'text': "So if we look at our tensor, shift and enter, what's it going to output? Ooh.", 'start': 4348.543, 'duration': 4.563}, {'end': 4360.369, 'text': 'Hello, we have how many more elements in the shape? 
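The scalar/vector/matrix/tensor distinction described above can be checked with the `ndim` attribute; the values below are illustrative. A minimal sketch, assuming TensorFlow 2.x:

```python
import tensorflow as tf

# Scalar: a single number (0 dimensions).
scalar = tf.constant(7)
# Vector: a number with direction (1 dimension).
vector = tf.constant([10, 10])
# Matrix: a 2-dimensional array of numbers.
matrix = tf.constant([[10, 7], [7, 10]])
# Tensor: an n-dimensional array (here 3 dimensions).
tensor = tf.constant([[[1, 2, 3], [4, 5, 6]],
                      [[7, 8, 9], [10, 11, 12]]])

print(scalar.ndim, vector.ndim, matrix.ndim, tensor.ndim)  # 0 1 2 3
```

TensorFlow calls all of these tensors; scalar, vector, and matrix are just the names for the low-dimensional special cases.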
We have an extra one.', 'start': 4354.164, 'duration': 6.205}], 'summary': 'Creating tensors manually is tedious, tensorflow helps, outputting extra elements in the shape.', 'duration': 31.41, 'max_score': 4328.959, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs84328959.jpg'}, {'end': 4941.247, 'src': 'embed', 'start': 4915.889, 'weight': 4, 'content': [{'end': 4921.933, 'text': 'we might want some tensors for their values to be changed, whereas we might want some tensors for their values not to be changed.', 'start': 4915.889, 'duration': 6.044}, {'end': 4926.176, 'text': "So there's a variable tensor and there's a constant tensor.", 'start': 4922.474, 'duration': 3.702}, {'end': 4930.539, 'text': "Now again, we're gonna reiterate the fact that a lot of the times,", 'start': 4927.057, 'duration': 3.482}, {'end': 4934.342, 'text': "you won't have to make the decision between using a variable tensor or a constant tensor.", 'start': 4930.539, 'duration': 3.803}, {'end': 4941.247, 'text': 'The decision will be made for you behind the scenes when TensorFlow creates tensors for your neural networks.', 'start': 4934.922, 'duration': 6.325}], 'summary': 'In tensorflow, the decision between variable and constant tensors is often made automatically for neural networks.', 'duration': 25.358, 'max_score': 4915.889, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs84915889.jpg'}, {'end': 5232.643, 'src': 'embed', 'start': 5206.99, 'weight': 0, 'content': [{'end': 5211.479, 'text': 'TensorFlow.random.uniform Wonderful.', 'start': 5206.99, 'duration': 4.489}, {'end': 5216.08, 'text': 'So we could go through that, or we could start to write the code.', 'start': 5212.519, 'duration': 3.561}, {'end': 5222.981, 'text': "Seriously, when I don't know something, that is the type of search that I will put into Google and look up.", 'start': 5217.48, 'duration': 5.501}, 
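The variable-versus-constant distinction above can be demonstrated with the `assign` attribute; the values are illustrative. A minimal sketch, assuming TensorFlow 2.x:

```python
import tensorflow as tf

changeable = tf.Variable([10, 7])    # values can be updated in place
unchangeable = tf.constant([10, 7])  # values are fixed after creation

# tf.Variable supports .assign(); indexing then assigning updates one element.
changeable[0].assign(77)
print(changeable.numpy())  # [77  7]

# A tf.constant has no .assign(), so the same call raises an error.
try:
    unchangeable[0].assign(77)
except AttributeError as err:
    print("constants can't be assigned:", err)
```

As the transcript notes, you rarely choose between the two yourself; TensorFlow picks the right kind when it builds tensors for a neural network.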
{'end': 5229.982, 'text': "As much as I'm trying to teach you TensorFlow itself, I'm trying to teach you how to search for answers to solve your own problems.", 'start': 5223.361, 'duration': 6.621}, {'end': 5232.643, 'text': 'Because at the end of the day, I can only show you so much.', 'start': 5230.543, 'duration': 2.1}], 'summary': 'Teaching tensorflow and problem-solving through search.', 'duration': 25.653, 'max_score': 5206.99, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs85206990.jpg'}, {'end': 5597.997, 'src': 'embed', 'start': 5541.186, 'weight': 1, 'content': [{'end': 5549.653, 'text': 'In the last video we checked out how we can create some random tensors and we tied that back to the concept of when a neural network starts to learn.', 'start': 5541.186, 'duration': 8.467}, {'end': 5553.576, 'text': 'if it wants to learn patterns in some sort of data set,', 'start': 5549.653, 'duration': 3.923}, {'end': 5562.102, 'text': 'it starts off with random patterns and then slowly adjusts them as it continually learns on more and more examples.', 'start': 5553.576, 'duration': 8.526}, {'end': 5575.627, 'text': "So if we come back here, In this video, let's see how we might shuffle the order of, what should we call it? Elements in a tensor.", 'start': 5562.823, 'duration': 12.804}, {'end': 5576.768, 'text': 'All right.', 'start': 5576.408, 'duration': 0.36}, {'end': 5587.058, 'text': "Hmm Why would you want to shuffle the order of elements in a tensor? 
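The search above lands on `tf.random.uniform`, which creates a tensor of random values; a minimal sketch (the shape and seed are illustrative), assuming TensorFlow 2.x:

```python
import tensorflow as tf

# tf.random.uniform draws values from a uniform distribution,
# [0, 1) by default; `shape` controls the tensor's dimensions.
random_tensor = tf.random.uniform(shape=(3, 2), seed=42)
print(random_tensor)

# Every value falls inside the default [0, 1) interval.
print(tf.reduce_min(random_tensor), tf.reduce_max(random_tensor))
```

This is the kind of tensor a neural network starts from when it initializes its weights with random patterns.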
Let's go back to our example here.", 'start': 5577.449, 'duration': 9.609}, {'end': 5596.876, 'text': "So let's say we were working on a food image classification problem, and we had 15, 000 images of ramen and spaghetti.", 'start': 5587.97, 'duration': 8.906}, {'end': 5597.997, 'text': "Let's keep it nice and simple.", 'start': 5596.896, 'duration': 1.101}], 'summary': 'Neural networks learn by adjusting random patterns; shuffling tensor elements can aid in data processing.', 'duration': 56.811, 'max_score': 5541.186, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs85541186.jpg'}], 'start': 4328.959, 'title': 'Tensor creation and initialization', 'summary': 'Delves into creating and initializing tensors in tensorflow, covering the process, dimensions, consistent terminology, and the practical application of random tensors in deep learning, including shuffling techniques to prevent order affecting learning.', 'chapters': [{'end': 4494.766, 'start': 4328.959, 'title': 'Tensor creation and definitions', 'summary': 'Discusses the creation of tensors in tensorflow, including the process, number of dimensions, and consistent terminology, and provides definitions for scalar, vector, matrix, and tensor.', 'duration': 165.807, 'highlights': ['The chapter discusses the creation of tensors in TensorFlow, emphasizing the tedious process of creating tensors manually and the convenience of using TensorFlow for tensor creation.', "It explores the number of elements and dimensions in the tensor, demonstrating the process of checking the number of dimensions using 'tensor.ndim'.", 'The chapter emphasizes the consistent reference to tensors in TensorFlow, regardless of their dimensionality, such as three-dimensional tensors or two-dimensional tensors referred to as matrices.', 'It provides definitions for scalar, vector, matrix, and tensor, distinguishing them based on their dimensional properties.']}, {'end': 4934.342, 
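The ramen/spaghetti scenario above is exactly what `tf.random.shuffle` is for: shuffling a tensor along its first dimension so a model doesn't see all of one class first. A minimal sketch with illustrative values, assuming TensorFlow 2.x:

```python
import tensorflow as tf

# Each row stands in for one example (e.g. one image's data).
not_shuffled = tf.constant([[10, 7], [3, 4], [2, 5]])

# Shuffles along the first dimension; rows move, row contents don't.
shuffled = tf.random.shuffle(not_shuffled, seed=42)
print(shuffled)
```

The shuffled tensor contains the same rows as the original, just in a different order, so no information is lost.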
'start': 4494.766, 'title': 'Creating tensors with tensorflow', 'summary': 'Introduces the creation of tensors using tf.constant and tf.variable in tensorflow, demonstrating the difference between changeable and unchangeable tensors and the application of the assign attribute.', 'duration': 439.576, 'highlights': ['The chapter introduces the creation of tensors using tf.constant and tf.variable in TensorFlow, demonstrating the difference between changeable and unchangeable tensors and the application of the assign attribute.', 'A tf.variable allows for the creation of changeable tensors, demonstrated by modifying elements using the assign attribute.', 'The distinction between tf.constant and tf.variable lies in the ability to change values, catering to different requirements in neural network code.']}, {'end': 5330.654, 'start': 4934.922, 'title': 'Tensor creation and initialization', 'summary': 'Explores the creation of random tensors in tensorflow and their significance in initializing neural network weights, with a focus on the practical application and usage of random tensors in deep learning.', 'duration': 395.732, 'highlights': ['Neural networks use random tensors to initialize their weights, which are essential for learning patterns in data. Neural networks utilize random tensors to initialize their weights, crucial for learning patterns in data, such as images of food being classified into categories like ramen or spaghetti.', 'Random tensors are continually adjusted by the neural network to align better with desired labeled outputs, improving the representation outputs over time. Random tensors are iteratively adjusted by the neural network to better align with desired labeled outputs, enhancing the representation outputs as the network learns from more examples.', "The chapter emphasizes the practical application of creating random tensors in TensorFlow and encourages independent problem-solving through effective search methods. 
The chapter underscores the practical application of creating random tensors in TensorFlow and advocates for independent problem-solving using effective search methods, such as utilizing TensorFlow's random uniform function and learning about random seed for stability."]}, {'end': 5814.083, 'start': 5331.274, 'title': 'Creating and shuffling random tensors with tensorflow', 'summary': 'Explores creating random tensors with tensorflow, including setting seeds for pseudo-random numbers and understanding normal distribution, and then delves into shuffling tensors to prevent order affecting learning, with a practical example of shuffling image data for neural network training.', 'duration': 482.809, 'highlights': ['Creating Random Tensors with TensorFlow The chapter explores creating random tensors with TensorFlow, setting seeds for pseudo-random numbers, understanding normal distribution, and comparing random tensors for equality, providing insights into the concept and practical application of random tensors and seeds.', 'Shuffling Tensors for Improved Learning The chapter delves into shuffling tensors to prevent order affecting learning, with a practical example of shuffling image data for neural network training, highlighting the importance of preventing inherent order from impacting learning outcomes.']}], 'duration': 1485.124, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs84328959.jpg', 'highlights': ['The chapter emphasizes the practical application of creating random tensors in TensorFlow and encourages independent problem-solving through effective search methods.', 'Neural networks use random tensors to initialize their weights, which are essential for learning patterns in data.', 'The chapter delves into shuffling tensors to prevent order affecting learning, with a practical example of shuffling image data for neural network training.', 'The chapter discusses the creation of tensors in TensorFlow, 
emphasizing the tedious process of creating tensors manually and the convenience of using TensorFlow for tensor creation.', 'The distinction between tf.constant and tf.variable lies in the ability to change values, catering to different requirements in neural network code.']}, {'end': 6952.79, 'segs': [{'end': 6159.156, 'src': 'embed', 'start': 6128.908, 'weight': 0, 'content': [{'end': 6134.793, 'text': 'but we ran into some problems using the global random seed and the operation level random seed.', 'start': 6128.908, 'duration': 5.885}, {'end': 6143.711, 'text': 'but the main intuition behind shuffling the order of tensors is that If we were trying to build a food image classification neural network and we had 15,', 'start': 6134.793, 'duration': 8.918}, {'end': 6152.269, 'text': '000 images of food, 10, 000 images of ramen, 5, 000 images of spaghetti and our neural network, all it saw was the ramen images first.', 'start': 6143.711, 'duration': 8.558}, {'end': 6159.156, 'text': "the patterns that it learns may be too aligned with what's in a ramen image rather than a spaghetti image.", 'start': 6152.269, 'duration': 6.887}], 'summary': 'Challenges with random seed usage; shuffling tensor order aids neural network learning, e.g., food image classification.', 'duration': 30.248, 'max_score': 6128.908, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs86128908.jpg'}, {'end': 6332.005, 'src': 'embed', 'start': 6311.926, 'weight': 1, 'content': [{'end': 6322.557, 'text': "So again now this might not make a lot of sense of why you'd want reproducible tensors, but as you start to run more deep learning experiments,", 'start': 6311.926, 'duration': 10.631}, {'end': 6327.422, 'text': "you'll often find is that because a neural network initializes itself with random patterns,", 'start': 6322.557, 'duration': 4.865}, {'end': 6332.005, 'text': 'you could get different results every single time you run this experiment.', 
'start': 6328.462, 'duration': 3.543}], 'summary': 'Reproducible tensors are important to ensure consistent results in deep learning experiments.', 'duration': 20.079, 'max_score': 6311.926, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs86311926.jpg'}, {'end': 6651.205, 'src': 'embed', 'start': 6623.703, 'weight': 2, 'content': [{'end': 6630.268, 'text': 'numpy a, this one here directly to tf.constant.', 'start': 6623.703, 'duration': 6.565}, {'end': 6634.631, 'text': 'I mean, what do you think is going to happen if we do this?', 'start': 6630.268, 'duration': 4.363}, {'end': 6637.073, 'text': 'oh wonderful, there we go.', 'start': 6634.631, 'duration': 2.442}, {'end': 6640.095, 'text': "we've now just converted our numpy array.", 'start': 6637.073, 'duration': 3.022}, {'end': 6645.218, 'text': "so if we have a look here, the output here is an array that's of type numpy.", 'start': 6640.095, 'duration': 5.123}, {'end': 6648.361, 'text': "then if we go here, we've just passed it to tf.constant.", 'start': 6645.218, 'duration': 3.143}, {'end': 6649.942, 'text': "now it's into the form of a tensor.", 'start': 6648.361, 'duration': 1.581}, {'end': 6651.205, 'text': 'How beautiful is that?', 'start': 6650.404, 'duration': 0.801}], 'summary': 'Converted numpy array with tf.constant, now a tensor.', 'duration': 27.502, 'max_score': 6623.703, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs86623703.jpg'}, {'end': 6819.753, 'src': 'embed', 'start': 6791.339, 'weight': 3, 'content': [{'end': 6797.664, 'text': "So we've seen how we can turn NumPy arrays into tensors, and we've seen how to create tensors with all ones and all zeros.", 'start': 6791.339, 'duration': 6.325}, {'end': 6801.767, 'text': "It's probably time that we get a little bit more information from our tensors.", 'start': 6798.365, 'duration': 3.402}, {'end': 6804.309, 'text': "So let's make that in 
our next video.", 'start': 6802.428, 'duration': 1.881}, {'end': 6807.812, 'text': 'So getting information from tensors.', 'start': 6804.469, 'duration': 3.343}, {'end': 6810.747, 'text': 'All right.', 'start': 6810.086, 'duration': 0.661}, {'end': 6813.889, 'text': 'so have a play around create some NumPy arrays,', 'start': 6810.747, 'duration': 3.142}, {'end': 6819.753, 'text': 'turn them into TensorFlow tensors and then try to adjust their shape so they fit into a different size.', 'start': 6813.889, 'duration': 5.864}], 'summary': 'Turning numpy arrays into tensors, creating tensors with all ones and zeros, and getting information from tensors will be covered in the next video.', 'duration': 28.414, 'max_score': 6791.339, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs86791339.jpg'}], 'start': 5814.524, 'title': 'Tensorflow tensor shuffling', 'summary': 'Covers shuffling tensors in tensorflow, including methods for shuffling tensors, the use of random seeds, and the importance of shuffling images in neural network training to prevent bias. it also delves into tensorflow documentation, random seed rules, tensor operations, conversions, and the advantages of tensorflow tensors for gpu computing.', 'chapters': [{'end': 6177.287, 'start': 5814.524, 'title': 'Shuffling tensors in tensorflow', 'summary': 'Explores shuffling tensors in tensorflow, highlighting how to shuffle tensors along its first dimension and the use of random seeds to create reproducible shuffled orders. 
it also emphasizes the importance of shuffling images in neural network training to prevent bias towards specific categories, with a practical homework assignment for learners.', 'duration': 362.763, 'highlights': ['The chapter explores shuffling tensors in TensorFlow, highlighting how to shuffle tensors along its first dimension and the use of random seeds to create reproducible shuffled orders.', 'It emphasizes the importance of shuffling images in neural network training to prevent bias towards specific categories, with a practical homework assignment for learners.', 'The transcript includes practical exercises for learners to read through TensorFlow documentation on random seed generation, practice writing five random tensors, and shuffle them, ensuring reproducible shuffled orders using tf.random.set_seed and a combination of seed.', 'It discusses the potential bias in neural network training if the order of images is not shuffled, using the example of a food image classification neural network with 15,000 images of food, 10,000 images of ramen, and 5,000 images of spaghetti.']}, {'end': 6359.142, 'start': 6177.307, 'title': 'Tensorflow documentation and random seed rules', 'summary': 'Discusses the importance of understanding tensorflow documentation and provides insights into the rules for setting global and operation level random seeds to achieve reproducible randomness in tensor shuffling and deep learning experiments.', 'duration': 181.835, 'highlights': ['Understanding TensorFlow documentation is important for grasping concepts, despite needing multiple readings, approximately 100 times, to fully comprehend it. It may take numerous readings, possibly around 100 times, to fully understand the TensorFlow documentation, emphasizing the importance of practice in grasping the concepts.', 'Setting both global and operation level random seeds is crucial for achieving reproducible randomness in tensor shuffling. 
The importance of using both the global and operation level random seeds to ensure reproducible randomness in tensor shuffling is emphasized, as demonstrated through the example of setting random seeds to maintain the same order for shuffled tensors.', 'Reproducible randomness is essential in deep learning experiments to obtain consistent results when initializing neural networks with random patterns. The significance of reproducible randomness in deep learning experiments is highlighted, particularly in the context of obtaining consistent results when initializing neural networks with random patterns, emphasizing the need for reproducibility in deep learning experiments.']}, {'end': 6952.79, 'start': 6360.262, 'title': 'Tensor operations and conversions', 'summary': "Explores different ways to create tensors, including using numpy ones and zeros, converting numpy arrays into tensorflow tensors, and adjusting tensor shapes, with the main difference between numpy arrays and tensorflow tensors being the latter's ability to run on a gpu for faster numerical computing.", 'duration': 592.528, 'highlights': ['The chapter begins by discussing NumPy ones and NumPy zeros, which can create tensors filled with ones or zeros of a given shape and type, with similar operations available in TensorFlow. NumPy ones and NumPy zeros are introduced as methods to create tensors filled with ones or zeros, showcasing the similarity of operations available in TensorFlow for numerical computing.', 'The process of converting NumPy arrays into TensorFlow tensors is demonstrated, with the ability to use tf.constant to achieve this conversion. 
The process of converting NumPy arrays into TensorFlow tensors is demonstrated using tf.constant, emphasizing the seamless conversion between the two formats.', 'The importance of tensor shape adjustment is highlighted, with an explanation of the requirement for the total elements in the new shape to match the original tensor, demonstrated through a reshape example. The significance of tensor shape adjustment is emphasized, outlining the requirement for the total elements in the new shape to match the original tensor, demonstrated through a reshape example.', 'The chapter concludes by hinting at exploring methods to obtain more information from tensors in the next video, encouraging practical exploration of creating and manipulating NumPy arrays and TensorFlow tensors. The chapter concludes by hinting at exploring methods to obtain more information from tensors in the next video and encourages practical exploration of creating and manipulating NumPy arrays and TensorFlow tensors.']}], 'duration': 1138.266, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs85814524.jpg', 'highlights': ['The importance of shuffling images in neural network training to prevent bias towards specific categories, with a practical homework assignment for learners.', 'The significance of reproducible randomness in deep learning experiments is highlighted, particularly in the context of obtaining consistent results when initializing neural networks with random patterns, emphasizing the need for reproducibility in deep learning experiments.', 'The process of converting NumPy arrays into TensorFlow tensors is demonstrated using tf.constant, emphasizing the seamless conversion between the two formats.', 'The chapter concludes by hinting at exploring methods to obtain more information from tensors in the next video and encourages practical exploration of creating and manipulating NumPy arrays and TensorFlow tensors.']}, {'end': 8113.517, 
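The reshape rule mentioned above (the new shape must contain the same total number of elements as the original tensor) can be checked with a short sketch. NumPy is used so the example stands alone; in TensorFlow the array would simply be passed to tf.constant first:

```python
import numpy as np

# np.ones / np.zeros mirror tf.ones / tf.zeros: a filled array of a given shape.
ones = np.ones(shape=(3, 2))
zeros = np.zeros(shape=(3, 2))
assert ones.shape == zeros.shape == (3, 2)

# A 24-element array can be reshaped to any shape whose dimensions
# multiply out to 24 elements...
a = np.arange(1, 25)             # 24 elements
reshaped = a.reshape(2, 3, 4)    # 2 * 3 * 4 == 24, so this works
assert reshaped.shape == (2, 3, 4)

# ...but not to a shape with a different total element count.
reshape_error = None
try:
    a.reshape(2, 5)              # 2 * 5 == 10 != 24
except ValueError as err:
    reshape_error = err
assert reshape_error is not None
```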
'segs': [{'end': 7122.187, 'src': 'embed', 'start': 7067.898, 'weight': 0, 'content': [{'end': 7077.484, 'text': 'When dealing with tensors, you probably want to be aware of the following attributes.', 'start': 7067.898, 'duration': 9.586}, {'end': 7081.647, 'text': 'Boom, shape, axis, rank, size.', 'start': 7078.044, 'duration': 3.603}, {'end': 7083.428, 'text': "I said that out of order, but that doesn't really matter.", 'start': 7081.867, 'duration': 1.561}, {'end': 7089.541, 'text': "So probably the most important one here will be shape, but we'll see that again in practice.", 'start': 7084.338, 'duration': 5.203}, {'end': 7093.684, 'text': "Let's create a tensor, create a rank four tensor.", 'start': 7089.681, 'duration': 4.003}, {'end': 7100.148, 'text': 'Now, if I say rank four tensor, we come back here, what does that mean? Rank, the number of tensor dimensions.', 'start': 7094.544, 'duration': 5.604}, {'end': 7104.911, 'text': 'Hmm, so what might that look like? We want four dimensions.', 'start': 7101.028, 'duration': 3.883}, {'end': 7109.214, 'text': 'We come in here, we want rank four tensor.', 'start': 7106.032, 'duration': 3.182}, {'end': 7112.641, 'text': 'equals tf.zeros.', 'start': 7110.299, 'duration': 2.342}, {'end': 7116.403, 'text': 'What does tf.zeros do? This is a little test from before, we just covered this one.', 'start': 7112.961, 'duration': 3.442}, {'end': 7118.144, 'text': 'Two, three, four, five.', 'start': 7117.063, 'duration': 1.081}, {'end': 7122.187, 'text': 'So remember, this is probably the shape parameter here.', 'start': 7119.205, 'duration': 2.982}], 'summary': 'Introduction to tensor attributes: shape, rank, size. 
creating a rank four tensor with shape 2x3x4x5.', 'duration': 54.289, 'max_score': 7067.898, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs87067898.jpg'}, {'end': 7639.447, 'src': 'embed', 'start': 7611.908, 'weight': 3, 'content': [{'end': 7615.41, 'text': "But if not, that's okay because we're just gonna get hands-on as we do.", 'start': 7611.908, 'duration': 3.502}, {'end': 7616.55, 'text': "That's the theme of this course.", 'start': 7615.55, 'duration': 1}, {'end': 7622.314, 'text': 'So get the first two elements of each dimension in our tensor.', 'start': 7617.171, 'duration': 5.143}, {'end': 7627.442, 'text': 'So, if you have had experience with indexing Python lists,', 'start': 7623.26, 'duration': 4.182}, {'end': 7633.605, 'text': 'how do you think you might get the first two elements of each dimension of our rank four tensor?', 'start': 7627.442, 'duration': 6.163}, {'end': 7639.447, 'text': "So remember, we've got one, two, three, four dimensions here and we want the first two elements.", 'start': 7634.785, 'duration': 4.662}], 'summary': 'Course theme: hands-on learning, accessing first 2 elements of each dimension in rank 4 tensor.', 'duration': 27.539, 'max_score': 7611.908, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs87611908.jpg'}, {'end': 7779.076, 'src': 'embed', 'start': 7749.952, 'weight': 4, 'content': [{'end': 7755.594, 'text': 'Get the first element from each dimension, from each index except for the final one.', 'start': 7749.952, 'duration': 5.642}, {'end': 7760.382, 'text': "Okay, how might we do that? 
Actually, let's try it with our some list.", 'start': 7756.154, 'duration': 4.228}, {'end': 7763.445, 'text': 'If we wanted the first element, one.', 'start': 7761.543, 'duration': 1.902}, {'end': 7770.37, 'text': 'Beautiful. Now, we want the first element from every dimension except for the final one.', 'start': 7764.365, 'duration': 6.005}, {'end': 7771.39, 'text': "So let's try that.", 'start': 7770.67, 'duration': 0.72}, {'end': 7773.092, 'text': 'Rank four tensor.', 'start': 7771.871, 'duration': 1.221}, {'end': 7779.076, 'text': 'And we want first element from each dimension except for the last one.', 'start': 7774.032, 'duration': 5.044}], 'summary': 'Retrieve the first element from each dimension, except the final one, from a list and a tensor.', 'duration': 29.124, 'max_score': 7749.952, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs87749952.jpg'}, {'end': 8030.343, 'src': 'embed', 'start': 7998.028, 'weight': 5, 'content': [{'end': 8005.133, 'text': "And now this is helpful for later on when we're creating neural networks and we need to alter the size of our tensors so that their shapes line up.", 'start': 7998.028, 'duration': 7.105}, {'end': 8011.979, 'text': "So might not seem like it's very helpful now, but I just want to sort of plant the seed so that when we come across it in future videos,", 'start': 8005.494, 'duration': 6.485}, {'end': 8012.68, 'text': "it's not like whoa.", 'start': 8011.979, 'duration': 0.701}, {'end': 8014.201, 'text': "Daniel, we haven't covered this method before.", 'start': 8012.68, 'duration': 1.521}, {'end': 8020.206, 'text': "So let's go in here, add in extra dimension to our rank two tensor.", 'start': 8014.721, 'duration': 5.485}, {'end': 8030.343, 'text': 'So we wanna turn our rank two tensor into a rank three tensor, but keeping the exact same information that is stored in our rank two tensor.', 'start': 8021.297, 'duration': 9.046}], 'summary': 'Adding an 
extra dimension to a tensor in neural network creation.', 'duration': 32.315, 'max_score': 7998.028, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs87998028.jpg'}], 'start': 6952.81, 'title': 'Tensor attributes, operations, indexing, and manipulation', 'summary': 'Covers important tensor attributes such as shape, axis, rank, and size, along with practical demonstrations and explanations. it also delves into indexing tensors and manipulation, aiming to provide a comprehensive understanding for future use in neural networks.', 'chapters': [{'end': 7183.741, 'start': 6952.81, 'title': 'Tensor attributes and operations', 'summary': 'Covers the important tensor attributes such as shape, axis, rank, and size, with examples and explanations of their use, and the creation of a rank four tensor using tf.zeros.', 'duration': 230.931, 'highlights': ['Explaining the important tensor attributes including shape, axis, rank, and size, with examples and explanations of their use. Importance of understanding tensor attributes, examples of accessing tensor attributes using code, and explanation of their meanings.', 'Creating a rank four tensor using tf.zeros and visualizing its dimensions and elements. 
Demonstration of creating a rank four tensor using tf.zeros, explanation of the dimensions, and visualization of the tensor elements.']}, {'end': 7722.106, 'start': 7184.693, 'title': 'Tensor attributes and indexing', 'summary': 'Covers the understanding of tensor attributes such as shape, rank, and size, including practical demonstrations, and also delves into the concept of indexing tensors, akin to python lists, with a focus on retrieving the first two elements of each dimension in a rank four tensor.', 'duration': 537.413, 'highlights': ['The chapter covers the understanding of tensor attributes such as shape, rank, and size, including practical demonstrations.', 'The concept of indexing tensors, akin to Python lists, is explained, with a focus on retrieving the first two elements of each dimension in a rank four tensor.', 'The shape, rank, axis or dimension, and size of tensors are discussed, providing insights into the practical aspects of working with tensors.', 'Practical demonstrations and examples are provided to illustrate the process of indexing tensors and retrieving specific elements based on dimensions.']}, {'end': 8113.517, 'start': 7722.787, 'title': 'Tensor manipulation and reshaping', 'summary': 'Covers tensor manipulation and reshaping, including extracting elements from dimensions, changing tensor shapes, and adding extra dimensions, aiming to help in understanding and manipulating tensor structures for future use in neural networks.', 'duration': 390.73, 'highlights': ['We can extract the first element from each dimension of a tensor except for the final one by using indexing, helping in understanding tensor structures and accessing specific elements. ', 'Changing or adding extra dimensions to tensors is essential for altering tensor sizes in neural networks to ensure shapes align, demonstrating the importance of tensor manipulation in preparing for neural network applications. 
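The attributes and indexing patterns covered in these chapters condense into one sketch. NumPy mirrors the TensorFlow calls here (tf.zeros(shape=[2, 3, 4, 5]) and the same slice syntax); the variable name is illustrative:

```python
import numpy as np

# Mirror of tf.zeros(shape=[2, 3, 4, 5]): a rank-4 tensor of zeros.
rank_4 = np.zeros((2, 3, 4, 5))

assert rank_4.ndim == 4              # rank: number of dimensions (axes)
assert rank_4.shape == (2, 3, 4, 5)  # length of each dimension
assert rank_4.size == 2 * 3 * 4 * 5  # total number of elements: 120

# First two elements of each dimension, using list-style slicing:
assert rank_4[:2, :2, :2, :2].shape == (2, 2, 2, 2)

# First element of every dimension except the final one:
assert rank_4[:1, :1, :1, :].shape == (1, 1, 1, 5)
```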
', 'The concept of tf.newaxis can be used to add an extra dimension to a tensor, maintaining the same information while altering the shape, providing a method for expanding tensor dimensions without altering its numerical values. ']}], 'duration': 1160.707, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs86952810.jpg', 'highlights': ['Explaining the important tensor attributes including shape, axis, rank, and size, with examples and explanations of their use.', 'The chapter covers the understanding of tensor attributes such as shape, rank, and size, including practical demonstrations.', 'Creating a rank four tensor using tf.zeros and visualizing its dimensions and elements.', 'The concept of indexing tensors, akin to Python lists, is explained, with a focus on retrieving the first two elements of each dimension in a rank four tensor.', 'We can extract the first element from each dimension of a tensor except for the final one by using indexing, helping in understanding tensor structures and accessing specific elements.', 'Changing or adding extra dimensions to tensors is essential for altering tensor sizes in neural networks to ensure shapes align, demonstrating the importance of tensor manipulation in preparing for neural network applications.']}, {'end': 9265.112, 'segs': [{'end': 8180.394, 'src': 'embed', 'start': 8143.984, 'weight': 2, 'content': [{'end': 8153.147, 'text': 'So there is another alternative to tf.newaxis.', 'start': 8143.984, 'duration': 9.163}, {'end': 8159.857, 'text': 'You might also see tf.expand_dims. So that stands for expand dimensions.', 'start': 8153.207, 'duration': 6.65}, {'end': 8162.479, 'text': "And then we're gonna pass it rank two tensor.", 'start': 8160.397, 'duration': 2.082}, {'end': 8165.982, 'text': 'And then we want to expand it on the final axis.', 'start': 8163.179, 'duration': 2.803}, {'end': 8173.228, 'text': 'So negative one means expand the 
final axis.', 'start': 8167.143, 'duration': 6.085}, {'end': 8175.65, 'text': 'There we go.', 'start': 8175.23, 'duration': 0.42}, {'end': 8178.733, 'text': 'We get the exact same output as this notation here.', 'start': 8175.69, 'duration': 3.043}, {'end': 8180.394, 'text': "It's just slightly different.", 'start': 8179.213, 'duration': 1.181}], 'summary': 'Using tf.expand_dims to expand dimensions of a rank two tensor.', 'duration': 36.41, 'max_score': 8143.984, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs88143984.jpg'}, {'end': 8674.588, 'src': 'embed', 'start': 8642.677, 'weight': 0, 'content': [{'end': 8648.781, 'text': 'So we left off in the last video figuring out how we can manipulate our tensors with the basic operations.', 'start': 8642.677, 'duration': 6.104}, {'end': 8651.082, 'text': "And so hopefully you've tried out a few of these for yourself.", 'start': 8648.841, 'duration': 2.241}, {'end': 8655.485, 'text': "But now we're going to go on to matrix multiplication.", 'start': 8651.903, 'duration': 3.582}, {'end': 8666.392, 'text': 'So in machine learning, matrix multiplication is one of the most common tensor operations.', 'start': 8655.845, 'duration': 10.547}, {'end': 8674.588, 'text': "So the ones we've been through already, these basic operations, are often referred to as element-wise operations.", 'start': 8667.873, 'duration': 6.715}], 'summary': 'In machine learning, matrix multiplication is one of the most common tensor operations.', 'duration': 31.911, 'max_score': 8642.677, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs88642677.jpg'}], 'start': 8114.338, 'title': 'Tensor operations and matrix multiplication in tensorflow', 'summary': 'Covers tensor manipulation, accessing tensors, and basic tensor operations, emphasizing the importance of practice and the use of tensorflow functions. 
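The two ways of adding a trailing dimension discussed above — tf.newaxis and tf.expand_dims(..., axis=-1) — have direct NumPy counterparts, sketched here with the lesson's [[10, 7], [3, 4]] tensor:

```python
import numpy as np

rank_2 = np.array([[10, 7], [3, 4]])     # shape (2, 2)

# Counterpart of rank_2_tensor[..., tf.newaxis]: add a trailing axis.
with_newaxis = rank_2[..., np.newaxis]
assert with_newaxis.shape == (2, 2, 1)

# Counterpart of tf.expand_dims(rank_2_tensor, axis=-1); -1 means the final axis.
expanded = np.expand_dims(rank_2, axis=-1)
assert expanded.shape == (2, 2, 1)
assert (with_newaxis == expanded).all()  # same values, same new shape
```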
it also explores matrix multiplication using the tf.matmul function and highlights an example of matrix multiplication with tensors of differing shapes.', 'chapters': [{'end': 8331.597, 'start': 8114.338, 'title': 'Tensor manipulation and operation', 'summary': 'Covers alternatives to tf.newaxis, tf.expand_dims for manipulating and accessing tensors and emphasizes the importance of practice in expanding dimensions and getting attributes from tensors before moving on to tensor operations.', 'duration': 217.259, 'highlights': ['The chapter covers alternatives to tf.newaxis, tf.expand_dims for manipulating and accessing tensors. It explains the usage of tf.newaxis and tf.expand_dims for manipulating and accessing tensors.', 'Emphasizes the importance of practice in expanding dimensions and getting attributes from tensors before moving on to tensor operations. It emphasizes the importance of practicing expanding dimensions and getting attributes from tensors before moving on to tensor operations.', 'Pattern discovery within tensors often involves manipulating tensors, and building models in TensorFlow utilizes basic operations for pattern discovery. 
It discusses how pattern discovery within tensors involves manipulating tensors and how building models in TensorFlow utilizes basic operations for pattern discovery.']}, {'end': 8902.181, 'start': 8331.837, 'title': 'Tensor operations and basic tensor manipulation', 'summary': 'Covers basic tensor operations including addition, multiplication, and division, with examples and explanations of element-wise operations and matrix multiplication, emphasizing the use of tensorflow functions for improved performance and the upcoming exploration of matrix multiplication in the next video.', 'duration': 570.344, 'highlights': ['The chapter covers basic tensor operations including addition, multiplication, and division, with examples and explanations of element-wise operations and matrix multiplication.', 'Tensor addition and multiplication are demonstrated with specific examples showing the results of the operations, such as tensor plus 10 resulting in 10+10=20, 7+10=17, 3+10=13, and 4+10=14.', 'The importance of not changing the original tensor when manipulating tensors is highlighted, with the demonstration of tensor plus 10 only affecting the output without changing the original tensor.', 'The use of TensorFlow functions for tensor operations is emphasized, with the explanation that using TensorFlow functions can lead to improved performance, especially on a GPU, and the demonstration of tf.multiply and tf.math.multiply as examples of TensorFlow functions for operations.', 'The upcoming exploration of matrix multiplication in the next video is mentioned, indicating a shift from element-wise operations to a more complex operation in the subsequent lesson.']}, {'end': 9265.112, 'start': 8902.181, 'title': 'Tensorflow matrix multiplication', 'summary': 'Demonstrates how to perform matrix multiplication in tensorflow using the tf.matmul function, explores the difference between element-wise multiplication and matrix multiplication, and highlights an example of matrix 
multiplication with tensors of differing shapes.', 'duration': 362.931, 'highlights': ['The chapter demonstrates how to perform matrix multiplication in TensorFlow using the tf.matmul function, explores the difference between element-wise multiplication and matrix multiplication, and highlights an example of matrix multiplication with tensors of differing shapes.', 'The tf.matmul function in TensorFlow multiplies matrix A by matrix B, producing A times B, AB.', 'The example of matrix multiplication with tensors of differing shapes showcases the concept of matrix size compatibility, illustrating the rule for matrix multiplication.', 'The chapter also mentions the use of external resources for further exploration, indicated by a book emoji.']}], 'duration': 1150.774, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs88114338.jpg', 'highlights': ['The chapter demonstrates how to perform matrix multiplication in TensorFlow using the tf.matmul function, explores the difference between element-wise multiplication and matrix multiplication, and highlights an example of matrix multiplication with tensors of differing shapes.', 'The chapter covers basic tensor operations including addition, multiplication, and division, with examples and explanations of element-wise operations and matrix multiplication.', 'The chapter covers alternatives to tf.newaxis, tf.expand_dims for manipulating and accessing tensors. 
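The element-wise versus matrix multiplication distinction from these chapters, sketched with NumPy (the @ operator plays the role of tf.matmul; the values follow the lesson's [[10, 7], [3, 4]] tensor):

```python
import numpy as np

tensor = np.array([[10, 7], [3, 4]])

# Element-wise operations apply to every element and leave the original alone.
assert (tensor + 10).tolist() == [[20, 17], [13, 14]]
assert (tensor * 10).tolist() == [[100, 70], [30, 40]]
assert tensor.tolist() == [[10, 7], [3, 4]]   # original unchanged

# Matrix multiplication (the @ operator, i.e. tf.matmul) is row-by-column
# dot products, not element-wise:
matmul = tensor @ tensor
assert matmul.tolist() == [[121, 98], [42, 37]]
assert (tensor * tensor).tolist() == [[100, 49], [9, 16]]   # different result
```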
It explains the usage of tf.newaxis and tf.expand_dims for manipulating and accessing tensors.']}, {'end': 11265.858, 'segs': [{'end': 9321.853, 'src': 'embed', 'start': 9286.733, 'weight': 1, 'content': [{'end': 9296.595, 'text': "our tensors or matrices need to fulfill if we're going to matrix multiply them.", 'start': 9286.733, 'duration': 9.862}, {'end': 9304.457, 'text': 'Now, rule one, the inner dimensions must match.', 'start': 9297.655, 'duration': 6.802}, {'end': 9313.559, 'text': 'And rule two, the resulting matrix has the shape of the outer dimensions.', 'start': 9306.077, 'duration': 7.482}, {'end': 9317.91, 'text': "So, knowing these two rules, I'm gonna set you a challenge: before the next video,", 'start': 9314.608, 'duration': 3.302}, {'end': 9321.853, 'text': "we're gonna go through these two rules and see how we can fix our problem here.", 'start': 9317.91, 'duration': 3.943}], 'summary': 'To matrix multiply, inner dimensions must match; resulting matrix has shape of outer dimensions.', 'duration': 35.12, 'max_score': 9286.733, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs89286733.jpg'}, {'end': 10101.358, 'src': 'embed', 'start': 10076.408, 'weight': 3, 'content': [{'end': 10084.652, 'text': 'reshaping your data in the form of tensors to both prepare it to be used with various operations such as feeding into the model and,', 'start': 10076.408, 'duration': 8.244}, {'end': 10090.796, 'text': 'once you get it out of the model, to be able to deduce patterns from it and convert it into something human understandable.', 'start': 10084.652, 'duration': 6.144}, {'end': 10101.358, 'text': "now again, the numbers that we're dealing with are just basically toy numbers, to see an example of how matrix multiplication is actually used.", 'start': 10091.616, 'duration': 9.742}], 'summary': 'Data reshaped into tensors for model operations and pattern deduction.', 'duration': 24.95, 'max_score': 
10076.408, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs810076408.jpg'}, {'end': 10440.781, 'src': 'embed', 'start': 10415.218, 'weight': 4, 'content': [{'end': 10422.205, 'text': "So if we can get as much hands-on experience as we can, it might not mean too much to us now because we're just working with toy data.", 'start': 10415.218, 'duration': 6.987}, {'end': 10427.75, 'text': 'It means later on down the track, when we have to reshape and manipulate our tensors and matrices,', 'start': 10422.946, 'duration': 4.804}, {'end': 10430.033, 'text': "is that we've got all of this practice under our belt.", 'start': 10427.75, 'duration': 2.283}, {'end': 10437.7, 'text': 'Wonderful So we see we get different results from transpose and reshape.', 'start': 10432.478, 'duration': 5.222}, {'end': 10440.781, 'text': 'Now to really demonstrate.', 'start': 10438.66, 'duration': 2.121}], 'summary': 'Hands-on experience with toy data prepares us for tensor and matrix manipulation later on.', 'duration': 25.563, 'max_score': 10415.218, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs810415218.jpg'}, {'end': 10668.401, 'src': 'embed', 'start': 10642.945, 'weight': 5, 'content': [{'end': 10648.789, 'text': "So if you write neural network code, it's gonna do a lot of matrix multiplication behind the scenes to figure out different patterns and numbers.", 'start': 10642.945, 'duration': 5.844}, {'end': 10650.671, 'text': 'What sort of patterns are they?', 'start': 10649.53, 'duration': 1.141}, {'end': 10656.815, 'text': "Well, remember, here's a simple example of the kind of patterns you can work out with matrix multiplication.", 'start': 10650.931, 'duration': 5.884}, {'end': 10660.177, 'text': "but again it'll be different depending what problem you're working on.", 'start': 10656.815, 'duration': 3.362}, {'end': 10668.401, 'text': "but generally, whenever you're 
performing a matrix multiplication on two different tensors and the shapes of the matrices or tensors don't line up,", 'start': 10660.177, 'duration': 8.224}], 'summary': 'Neural network code uses matrix multiplication to find patterns and numbers, but results vary based on the problem and tensor shapes.', 'duration': 25.456, 'max_score': 10642.945, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs810642945.jpg'}, {'end': 10792.993, 'src': 'embed', 'start': 10769.416, 'weight': 2, 'content': [{'end': 10776.781, 'text': "so, as we said in a previous video, the default data type of most tensors will be int32, depending on how they've been created.", 'start': 10769.416, 'duration': 7.365}, {'end': 10781.164, 'text': "however, sometimes you'll want to change the data type of your tensor,", 'start': 10776.781, 'duration': 4.383}, {'end': 10790.732, 'text': "so let's see how we'll do that so we can create a new tensor with the default data type, which is float32.", 'start': 10781.164, 'duration': 9.568}, {'end': 10792.993, 'text': "yeah, let's start off with float.", 'start': 10790.732, 'duration': 2.261}], 'summary': 'Changing tensor data type to float32 for default type.', 'duration': 23.577, 'max_score': 10769.416, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs810769416.jpg'}, {'end': 10944.797, 'src': 'embed', 'start': 10913.133, 'weight': 0, 'content': [{'end': 10917.155, 'text': 'By keeping certain parts of the model in 32 bit types for numeric stability,', 'start': 10913.133, 'duration': 4.022}, {'end': 10922.656, 'text': 'the model will have a lower step time and train equally as well in terms of evaluation metrics such as accuracy.', 'start': 10917.155, 'duration': 5.501}, {'end': 10928.878, 'text': 'Using this guide, this API can improve performance by more than three times on modern GPUs and 60% on TPUs.', 'start': 10923.376, 'duration': 5.502}, 
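The shape-mismatch fix described here — transposing or reshaping one operand so the inner dimensions line up — can be sketched as follows. Note that the two fixes rearrange the underlying values differently, which is the lesson's point (NumPy stands in for tf.transpose / tf.reshape / tf.matmul; X and Y are illustrative):

```python
import numpy as np

X = np.array([[1, 2], [3, 4], [5, 6]])     # shape (3, 2)
Y = np.array([[7, 8], [9, 10], [11, 12]])  # shape (3, 2)

# X @ Y fails: the inner dimensions (2 and 3) don't match.
# Transposing or reshaping Y both produce a (2, 3) operand,
# but they rearrange the underlying values differently.
assert Y.T.tolist() == [[7, 9, 11], [8, 10, 12]]              # axes flipped
assert Y.reshape(2, 3).tolist() == [[7, 8, 9], [10, 11, 12]]  # element order kept

# Rule one satisfied: (3, 2) @ (2, 3); rule two: the result takes the
# shape of the outer dimensions, (3, 3).
assert (X @ Y.T).shape == (3, 3)
```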
{'end': 10935.23, 'text': 'Today, most models use the float32 dtype, which takes 32 bits of memory.', 'start': 10930.987, 'duration': 4.243}, {'end': 10935.83, 'text': 'There we go.', 'start': 10935.29, 'duration': 0.54}, {'end': 10944.797, 'text': 'However, there are two lower-precision dtypes, float16 and bfloat16, each of which takes 16 bits of memory instead.', 'start': 10935.95, 'duration': 8.847}], 'summary': 'Using 32 bit types for stability improves performance by 3x on gpus and 60% on tpus.', 'duration': 31.664, 'max_score': 10913.133, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs810913133.jpg'}], 'start': 9265.153, 'title': 'Matrix manipulation and tensor operations', 'summary': 'introduces matrix multiplication rules and challenges, discusses reshaping tensors and tensor operations, covers matrix manipulation in neural networks, and explores changing tensor data types in tensorflow, with potential improvements in performance by more than three times on modern gpus and 60% on tpus.', 'chapters': [{'end': 9584.539, 'start': 9265.153, 'title': 'Matrix multiplication rules and challenges', 'summary': 'introduces the rules of matrix multiplication, stating that the inner dimensions must match and the resulting matrix has the same shape of the outer dimensions, presenting a challenge to fix an error in matrix multiplication and highlighting the significance of understanding these rules for successful matrix multiplication.', 'duration': 319.386, 'highlights': ['The chapter introduces the rules of matrix multiplication, stating that the inner dimensions must match and the resulting matrix has the same shape of the outer dimensions. 
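The two matrix multiplication rules stated above (inner dimensions must match; the result takes the shape of the outer dimensions) can be sketched directly — these tensors are made up for illustration:

```python
import tensorflow as tf

X = tf.constant([[1, 2], [3, 4], [5, 6]])     # shape (3, 2)
Y = tf.constant([[7, 8], [9, 10], [11, 12]])  # shape (3, 2)

# tf.matmul(X, Y) would fail: the inner dimensions are 2 and 3.
# Transposing Y gives (3, 2) @ (2, 3): inner dims match (2 == 2),
# and the result has the outer dimensions, (3, 3).
result = tf.matmul(X, tf.transpose(Y))
print(result.shape)  # (3, 3)
```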
The chapter emphasizes the two fundamental rules of matrix multiplication: the inner dimensions must match and the resulting matrix has the same shape of the outer dimensions.', 'Presenting a challenge to fix an error in matrix multiplication and highlighting the significance of understanding these rules for successful matrix multiplication. A challenge is presented to fix an error in matrix multiplication, emphasizing the importance of understanding the rules for successful matrix multiplication.', 'Matrix multiplication is also known as the dot product, and the operations can be on matrices or tensors of arbitrary size. The chapter explains that matrix multiplication is also referred to as the dot product, and the operations can be performed on matrices or tensors of arbitrary size.']}, {'end': 10127.504, 'start': 9585.419, 'title': 'Matrix reshaping and tensor operations', 'summary': 'Discusses reshaping tensors using tf.reshape, tf.transpose, and their impact on matrix multiplication, highlighting the importance of matching inner dimensions and the resulting output differences, with a reminder of the significance of data manipulation in machine learning and neural networks.', 'duration': 542.085, 'highlights': ['The chapter discusses reshaping tensors using tf.reshape, tf.transpose, and their impact on matrix multiplication, highlighting the importance of matching inner dimensions and the resulting output differences. reshaping tensors, tf.reshape, tf.transpose, impact on matrix multiplication, matching inner dimensions, resulting output differences', 'Reshaping data in the form of tensors is crucial for various operations in machine learning and neural networks, including feeding into the model and deducing patterns from the output. 
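The point above about reshape and transpose producing different outputs can be seen directly — both give the same shape but arrange the values differently (values are illustrative):

```python
import tensorflow as tf

X = tf.constant([[1, 2], [3, 4], [5, 6]])  # shape (3, 2)

# Both results have shape (2, 3), but the values differ:
# reshape re-fills the new shape in row order, transpose flips the axes.
reshaped = tf.reshape(X, shape=(2, 3))
transposed = tf.transpose(X)
print(reshaped.numpy())    # [[1 2 3] [4 5 6]]
print(transposed.numpy())  # [[1 3 5] [2 4 6]]
```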
reshaping data, tensors, machine learning, neural networks, data manipulation, feeding into the model, deducing patterns', 'An example is provided to illustrate the practical application of matrix multiplication, emphasizing the importance of the discussed concepts in real-life scenarios. practical application, matrix multiplication, real-life scenarios, importance of concepts']}, {'end': 10536.646, 'start': 10127.504, 'title': 'Neural networks: matrix manipulation and transposing', 'summary': 'Covers the concept of matrix manipulation including transposing, reshaping, and matrix multiplication in neural networks. it emphasizes the importance of hands-on practice and understanding different results obtained from transpose and reshape.', 'duration': 409.142, 'highlights': ['The chapter emphasizes the importance of hands-on practice and understanding different results obtained from transpose and reshape. The chapter stresses the significance of hands-on experience with matrix manipulation and understanding the different results obtained from transpose and reshape.', 'Matrix multiplication, transposing, and reshaping in neural networks are covered, showing practical examples and colorful illustrations. Practical examples and colorful illustrations are used to demonstrate matrix multiplication, transposing, and reshaping in neural networks.', 'The importance of investigating and understanding errors in neural network code, such as misshaped tensors and silent errors, is highlighted. 
The chapter highlights the importance of investigating and understanding errors in neural network code, including misshaped tensors and silent errors.']}, {'end': 10743.764, 'start': 10537.106, 'title': 'Tensor transpose and reshape', 'summary': "covers the concepts of tensor transpose and reshape, emphasizing the significance of using transpose for matrix multiplication when tensor axes don't align, while mentioning the behind-the-scenes operations in neural network code.", 'duration': 206.658, 'highlights': ["Tensor transpose and reshape are important in visualizing and manipulating data, with transpose being necessary for matrix multiplication when tensor axes don't align. Importance of using transpose for matrix multiplication", 'Neural network code often performs matrix multiplication behind the scenes, requiring proper alignment of tensor shapes for different patterns and numbers. Use of matrix multiplication in neural network code', 'Suggestion to experiment with different tensors using operations like tf.matmul and tf.tensordot to understand the impact of transpose and reshape in generating different outputs. Encouragement to experiment with different tensors']}, {'end': 11265.858, 'start': 10744.584, 'title': 'Changing tensor data types in tensorflow', 'summary': 'discusses changing tensor data types in tensorflow, including default data type, changing from int32 to float32, reduced precision, and the use of 16 and 32 bit floating point types in models during training, with potential improvements in performance by more than three times on modern gpus and 60% on tpus.', 'duration': 521.274, 'highlights': ['
The default data type of most tensors in TensorFlow is int32, but can vary depending on the data inside the tensor.', 'Changing the data type of a tensor from int32 to float32 can be achieved using tf.cast, demonstrating reduced precision and the storage of tensors in memory.', 'The use of mixed precision, involving both 16 and 32 bit floating point types in a model during training, can lead to significant performance improvements, with potential performance improvements by more than three times on modern GPUs and 60% on TPUs.']}], 'duration': 2000.705, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs89265153.jpg', 'highlights': ['The use of mixed precision, involving both 16 and 32 bit floating point types in a model during training, can lead to significant performance improvements, with potential performance improvements by more than three times on modern GPUs and 60% on TPUs.', 'The chapter introduces the rules of matrix multiplication, stating that the inner dimensions must match and the resulting matrix has the same shape of the outer dimensions.', 'The default data type of most tensors in TensorFlow is int32, but can vary depending on the data inside the tensor.', 'Reshaping data in the form of tensors is crucial for various operations in machine learning and neural networks, including feeding into the model and deducing patterns from the output.', 'The chapter emphasizes the importance of hands-on practice and understanding different results obtained from transpose and reshape.', 'Neural network code 
often performs matrix multiplication behind the scenes, requiring proper alignment of tensor shapes for different patterns and numbers.']}, {'end': 12284.857, 'segs': [{'end': 11525.488, 'src': 'embed', 'start': 11493.77, 'weight': 0, 'content': [{'end': 11497.152, 'text': 'The first one I wanna go through is get the minimum.', 'start': 11493.77, 'duration': 3.382}, {'end': 11500.655, 'text': 'And then we might wanna get the maximum of a certain tensor.', 'start': 11497.652, 'duration': 3.003}, {'end': 11506.336, 'text': 'We also might want to get the mean of a tensor.', 'start': 11501.633, 'duration': 4.703}, {'end': 11511.82, 'text': 'And we also might want to get the sum of a tensor.', 'start': 11508.037, 'duration': 3.783}, {'end': 11517.363, 'text': "Now, I've shown you a few different ways of how you can explore different methods in TensorFlow.", 'start': 11512.02, 'duration': 5.343}, {'end': 11525.488, 'text': 'So if you wanted to start with a tensor and find the minimum, the maximum, the mean or the sum, how much you research.', 'start': 11517.964, 'duration': 7.524}], 'summary': 'Exploring methods in tensorflow for minimum, maximum, mean, and sum of a tensor.', 'duration': 31.718, 'max_score': 11493.77, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs811493770.jpg'}, {'end': 11626.123, 'src': 'embed', 'start': 11604.231, 'weight': 4, 'content': [{'end': 11612.357, 'text': 'And then how do we check the shape of our tensors? And then how do we check how many number of dimensions our tensor has? 
Beautiful.', 'start': 11604.231, 'duration': 8.126}, {'end': 11614.818, 'text': 'So we get size is 50, beautiful.', 'start': 11612.817, 'duration': 2.001}, {'end': 11620.321, 'text': 'The shape is also 50, and the number of dimensions is 1.', 'start': 11615.679, 'duration': 4.642}, {'end': 11620.741, 'text': 'All right.', 'start': 11620.321, 'duration': 0.42}, {'end': 11626.123, 'text': "Now, how about we start with finding the minimum? So we're going to go find the minimum.", 'start': 11621.601, 'duration': 4.522}], 'summary': 'Tensors have size and shape of 50 with 1 dimension.', 'duration': 21.892, 'max_score': 11604.231, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs811604231.jpg'}, {'end': 11696.59, 'src': 'embed', 'start': 11659.451, 'weight': 2, 'content': [{'end': 11667.189, 'text': "So knowing that this is the way to find the minimum, how do you think we might find the maximum? Let's find it anyway.", 'start': 11659.451, 'duration': 7.738}, {'end': 11668.83, 'text': 'Find the maximum.', 'start': 11668.01, 'duration': 0.82}, {'end': 11671.792, 'text': 'TF reduce max.', 'start': 11669.931, 'duration': 1.861}, {'end': 11676.514, 'text': "Wonderful So there's 97.", 'start': 11674.673, 'duration': 1.841}, {'end': 11681.356, 'text': 'So yeah, that makes sense because we have values between zero and 100.', 'start': 11676.514, 'duration': 4.842}, {'end': 11687.199, 'text': 'So our lowest value so far is zero and the highest value is 97.', 'start': 11681.356, 'duration': 5.843}, {'end': 11688.239, 'text': 'And find the mean.', 'start': 11687.199, 'duration': 1.04}, {'end': 11692.101, 'text': "Let's go TF reduce mean of E.", 'start': 11689.5, 'duration': 2.601}, {'end': 11696.59, 'text': 'and wonderful.', 'start': 11694.768, 'duration': 1.822}], 'summary': 'Using tf reduce max, the maximum value found is 97 out of values between 0 and 100. 
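The walk-through above (checking size, shape, and number of dimensions, then finding the minimum, maximum, mean, and sum of a random tensor of 50 values between 0 and 100) can be sketched as:

```python
import numpy as np
import tensorflow as tf

# 50 random values between 0 and 100, as in the video.
E = tf.constant(np.random.randint(0, 100, size=50))

print(tf.size(E).numpy(), E.shape, E.ndim)  # 50 (50,) 1
print(tf.reduce_min(E).numpy())
print(tf.reduce_max(E).numpy())
print(tf.reduce_mean(E).numpy())  # an integer mean here, since E holds ints
print(tf.reduce_sum(E).numpy())
```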
tf.reduce_mean of E yields the mean value.', 'duration': 37.139, 'max_score': 11659.451, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs811659451.jpg'}, {'end': 11767.827, 'src': 'embed', 'start': 11730.24, 'weight': 3, 'content': [{'end': 11752.71, 'text': "exercise is to, with what we've just learned, find the variance and standard deviation of our e tensor using TensorFlow methods.", 'start': 11730.24, 'duration': 22.47}, {'end': 11759.057, 'text': "So, if you're not sure what variance and standard deviation are, this challenge has two sides to it.", 'start': 11753.611, 'duration': 5.446}, {'end': 11767.827, 'text': 'First, you have to find the code to find the variance of our E tensor, and then you have to look up what variance and standard deviation are.', 'start': 11759.458, 'duration': 8.369}], 'summary': 'Find variance and standard deviation of e tensor using tensorflow methods.', 'duration': 37.587, 'max_score': 11730.24, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs811730240.jpg'}, {'end': 12212.896, 'src': 'embed', 'start': 12185.523, 'weight': 1, 'content': [{'end': 12194.027, 'text': 'because with these methods, you can get rid of the math, but with the variance and standard deviation, you require the math.', 'start': 12185.523, 'duration': 8.504}, {'end': 12196.508, 'text': "And so we've both learned something here.", 'start': 12194.587, 'duration': 1.921}, {'end': 12205.212, 'text': 'So I wanna just show you here, if we wanna go, find the variance of our E tensor.', 'start': 12196.528, 'duration': 8.684}, {'end': 12212.896, 'text': "we go tf.math.reduce_variance E, just exactly like we've done here in the documentation.", 'start': 12205.212, 'duration': 7.684}], 'summary': 'Methods eliminate math, variance requires math; using tf.math.reduce_variance E as shown in documentation', 'duration': 27.373, 'max_score': 12185.523, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs812185523.jpg'}], 'start': 11266.681, 'title': 'Tensorflow tensor operations', 'summary': 'Covers various tensor operations in tensorflow, including finding minimum, maximum, mean, and sum, creating random tensors, and checking tensor size, shape, and dimensions. it also discusses methods to find variance, standard deviation, and positional maximum and minimum in tensorflow.', 'chapters': [{'end': 11493.049, 'start': 11266.681, 'title': 'Aggregating tensors in deep learning', 'summary': 'Covers the concept of aggregating tensors in deep learning, including understanding the definition of aggregation, conceptualizing aggregation, and practical examples such as getting absolute values of tensors.', 'duration': 226.368, 'highlights': ['The chapter covers the concept of aggregating tensors in deep learning. It provides an overview of the concept of aggregation in deep learning and its relevance to understanding tensors.', 'Practical examples include getting absolute values of tensors. The transcript illustrates the process of getting absolute values of tensors and explains its significance in aggregation.', 'Understanding the definition of aggregation and conceptualizing aggregation. 
The chapter delves into understanding the definition of aggregation and encourages conceptualizing individual definitions to comprehend the concept more effectively.']}, {'end': 11658.851, 'start': 11493.77, 'title': 'Tensorflow tensor operations', 'summary': 'Explores various tensor operations in tensorflow, including finding the minimum, maximum, mean, sum, creating random tensors, and checking tensor size, shape, and dimensions.', 'duration': 165.081, 'highlights': ['The chapter covers various tensor operations in TensorFlow, such as finding the minimum, maximum, mean, and sum.', 'Demonstrates creating a random tensor with values between 0 and 100, of size 50, using NumPy random array.', 'Explains how to check the size, shape, and number of dimensions of a tensor.', 'Provides a step-by-step guide on finding the minimum of a tensor using the reduce_min method in TensorFlow.']}, {'end': 12113.994, 'start': 11659.451, 'title': 'Tensorflow methods and challenges', 'summary': 'Discusses the use of tensorflow methods to find maximum, mean, sum, variance, and standard deviation of a tensor, and issues a challenge to find the variance and standard deviation of the tensor using tensorflow methods.', 'duration': 454.543, 'highlights': ['The highest value of the tensor is 97, with the lowest value being 0. The highest and lowest values of the tensor are 97 and 0 respectively.', 'The average of the tensor is approximately 48. The average value of the tensor is approximately 48.', 'The challenge is issued to find the variance and standard deviation of the tensor using TensorFlow methods. 
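The challenge described above (find the variance and standard deviation of the E tensor) can be solved with `tf.math.reduce_variance` and `tf.math.reduce_std`; both expect a float tensor, so an integer tensor needs a cast first. A small worked sketch with made-up values:

```python
import tensorflow as tf

E = tf.constant([10, 7, 3, 4, 6])

# Both ops require a float dtype, so cast the int32 tensor first.
E_float = tf.cast(E, dtype=tf.float32)
variance = tf.math.reduce_variance(E_float)  # mean is 6 -> variance 6.0
std = tf.math.reduce_std(E_float)            # sqrt(6) ~ 2.449
print(variance.numpy(), std.numpy())
```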
A challenge is issued to find the variance and standard deviation of the tensor using TensorFlow methods.']}, {'end': 12284.857, 'start': 12113.994, 'title': 'Finding positional maximum and minimum in tensorflow', 'summary': 'discusses finding the positional maximum and minimum of a tensor in tensorflow, emphasizing the use of tf.math.reduce_variance and the exploration of tensorflow documentation to solve problems.', 'duration': 170.863, 'highlights': ['The chapter details the process of finding the variance of a tensor using tf.math.reduce_variance, and the importance of changing the data type to float32 for certain operations.', 'The speaker encourages exploring the TensorFlow documentation for different methods and highlights the importance of problem-solving and research in finding the best approaches for operations.', 'The chapter emphasizes the need for mathematical operations such as variance and standard deviation, showcasing the value of learning through exploration and experimentation.']}], 'duration': 1018.176, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs811266681.jpg', 'highlights': ['The chapter covers various tensor operations in TensorFlow, such as finding the minimum, maximum, mean, and sum.', 'The chapter details the process of finding the variance of a tensor using tf.math.reduce_variance, and the importance of changing the data type to float32 for certain operations.', 'The highest and lowest values of the tensor are 97 and 0 respectively.', 'A challenge is issued to find the variance and standard deviation of the tensor using TensorFlow methods.', 'Explains how to check the size, shape, and number of dimensions of a tensor.']}, {'end': 14010.478, 'segs': [{'end': 12312.196, 'src': 'embed', 'start': 12285.497, 'weight': 0, 'content': [{'end': 12289.739, 'text': "But anyway, let's get into finding the positional maximum and minimum.", 'start': 12285.497, 'duration': 4.242}, {'end': 
12292.499, 'text': 'Now, when might this be helpful?', 'start': 12290.459, 'duration': 2.04}, {'end': 12300.242, 'text': "Well, you're going to see this a lot when your neural network outputs prediction probabilities, which we haven't seen yet.", 'start': 12293.06, 'duration': 7.182}, {'end': 12312.196, 'text': 'But if we go here so remember our little example where we got our images we input it into some numerical encoding, goes through our neural network,', 'start': 12300.982, 'duration': 11.214}], 'summary': 'Identifying positional max and min for neural network predictions.', 'duration': 26.699, 'max_score': 12285.497, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs812285497.jpg'}, {'end': 13254.677, 'src': 'embed', 'start': 13229.326, 'weight': 1, 'content': [{'end': 13235.134, 'text': 'So yeah, go through the example there, create your own list of indices and then one-hot, encode it.', 'start': 13229.326, 'duration': 5.808}, {'end': 13240.522, 'text': 'practice around, see what happens if you change the depth parameter and then change the on value and the off value.', 'start': 13235.134, 'duration': 5.388}, {'end': 13242.485, 'text': "And I'll see you in the next video.", 'start': 13241.183, 'duration': 1.302}, {'end': 13250.615, 'text': 'Did you have some fun creating one hot encoded tensors with different on and off values? 
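The two topics in the segment above — the positional maximum/minimum and one-hot encoding with custom on/off values — in a minimal sketch (the example values are illustrative, not from the video):

```python
import tensorflow as tf

# Positional max/min: argmax returns the index of the largest value,
# e.g. picking the most likely class from prediction probabilities.
probs = tf.constant([0.1, 0.7, 0.2])
print(tf.argmax(probs).numpy())  # 1
print(tf.argmin(probs).numpy())  # 0

# One-hot encoding a list of indices: 1 at the index position, 0 elsewhere.
some_list = [0, 1, 2, 3]
encoded = tf.one_hot(some_list, depth=4)
print(encoded.numpy())

# Custom on/off values replace the default 1s and 0s.
print(tf.one_hot(some_list, depth=4, on_value="yo", off_value="bye"))
```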
I hope you did.', 'start': 13245.333, 'duration': 5.282}, {'end': 13252.476, 'text': "And I hope you're loving deep learning.", 'start': 13250.735, 'duration': 1.741}, {'end': 13254.677, 'text': "We actually haven't covered too much deep learning yet.", 'start': 13252.636, 'duration': 2.041}], 'summary': 'Practice one-hot encoding, experiment with depth parameter, and explore deep learning.', 'duration': 25.351, 'max_score': 13229.326, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs813229326.jpg'}, {'end': 13303.819, 'src': 'embed', 'start': 13273.012, 'weight': 3, 'content': [{'end': 13274.072, 'text': 'that you may come across.', 'start': 13273.012, 'duration': 1.06}, {'end': 13278.874, 'text': "There's a lot here as you see, tf.math, overview, we got a whole bunch here.", 'start': 13274.792, 'duration': 4.082}, {'end': 13281.074, 'text': "But let's practice with a few more.", 'start': 13279.254, 'duration': 1.82}, {'end': 13282.435, 'text': 'We can never get enough practice.', 'start': 13281.174, 'duration': 1.261}, {'end': 13289.837, 'text': 'We might go a few common mathematical operations, a squaring, log, and square root.', 'start': 13283.015, 'duration': 6.822}, {'end': 13291.417, 'text': "So let's see how we'll do that.", 'start': 13290.297, 'duration': 1.12}, {'end': 13297.799, 'text': "First things first, let's create a new tensor.", 'start': 13293.658, 'duration': 4.141}, {'end': 13301.199, 'text': "So this time we might go H, I believe we're up to.", 'start': 13298.598, 'duration': 2.601}, {'end': 13303.819, 'text': "So TF, I'm gonna show you a new way to create a tensor.", 'start': 13301.259, 'duration': 2.56}], 'summary': 'Introduction to creating tensors and performing basic mathematical operations in tensorflow.', 'duration': 30.807, 'max_score': 13273.012, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs813273012.jpg'}, {'end': 
13895.524, 'src': 'embed', 'start': 13863.504, 'weight': 2, 'content': [{'end': 13868.387, 'text': "And chances are, when you're doing machine learning and deep learning, you're going to run into NumPy somewhere.", 'start': 13863.504, 'duration': 4.883}, {'end': 13874.072, 'text': 'So keep in mind, TensorFlow works beautifully with NumPy arrays and vice versa.', 'start': 13868.948, 'duration': 5.124}, {'end': 13892.301, 'text': 'welcome to neural network regression with tensorflow.', 'start': 13884.915, 'duration': 7.386}, {'end': 13895.524, 'text': "now we've seen some of the basics of tensorflow in the previous section.", 'start': 13892.301, 'duration': 3.223}], 'summary': 'Tensorflow works well with numpy arrays in neural network regression.', 'duration': 32.02, 'max_score': 13863.504, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs813863504.jpg'}], 'start': 12285.497, 'title': 'Tensor operations and interoperability', 'summary': 'Covers finding positional maximum and minimum in neural network predictions, tensor transformation, and mathematical operations using tensorflow, as well as interoperability between tensorflow and numpy, with practical examples and data type considerations.', 'chapters': [{'end': 12330.944, 'start': 12285.497, 'title': 'Finding positional maximum and minimum', 'summary': 'Discusses the significance of finding positional maximum and minimum in the context of neural network prediction probabilities, with an example of outputting prediction probabilities 983 with 0.04 as the highest number.', 'duration': 45.447, 'highlights': ['Neural network outputs prediction probabilities are often represented as positional maximum and minimum.', 'Example of outputting prediction probabilities of 983 with 0.04 as the highest number.']}, {'end': 13091.402, 'start': 12330.944, 'title': 'Finding positional minimum and maximum in tensors', 'summary': 'Explains how to find the positional minimum and 
maximum values in tensors, using tensorflow to create a random tensor and then demonstrating the argmax and argmin methods to find the maximum and minimum values, along with discussing the concepts of squeezing a tensor and one hot encoding.', 'duration': 760.458, 'highlights': ['The chapter explains how to find the positional minimum and maximum values in tensors. It demonstrates the process of finding the positional minimum and maximum values in a tensor using TensorFlow, providing a clear explanation of the concept.', 'Using TensorFlow to create a random tensor and then demonstrating the argmax and argmin methods to find the maximum and minimum values. The transcript includes a demonstration of creating a random tensor using TensorFlow and then utilizing the argmax and argmin methods to find the maximum and minimum values within the tensor.', 'Discussing the concepts of squeezing a tensor and one hot encoding. The chapter covers the concepts of squeezing a tensor to remove single dimensions and one hot encoding as a method of numerical encoding for passing data to neural networks, providing a practical example of one hot encoding using TensorFlow.']}, {'end': 13493.552, 'start': 13091.642, 'title': 'Tensor transformation and mathematical operations', 'summary': 'Explores the transformation of tensors into one-hot encoded versions and demonstrates mathematical operations like squaring, square root, and logarithm using tensorflow, emphasizing the practical application of these operations and the need to handle data types properly.', 'duration': 401.91, 'highlights': ['Demonstration of one-hot encoding of tensors The chapter explains the process of transforming tensors into one-hot encoded versions, providing an example with quantifiable data and emphasizing the practical application of this technique.', 'Explanation of custom values for one-hot encoding The transcript showcases the customization of on and off values for one-hot encoding, highlighting the practical 
demonstration of using different values and their impact on the encoded tensor output.', 'Practical application of mathematical operations in TensorFlow The chapter provides practical examples of performing mathematical operations like squaring, square root, and logarithm using TensorFlow, emphasizing the need to handle data types properly to avoid errors.']}, {'end': 14010.478, 'start': 13493.772, 'title': 'Tensorflow and numpy interoperability', 'summary': 'Discusses the interoperability between tensorflow and numpy, including how to convert tensors to numpy arrays and vice versa, the default data types of each array, and the importance of being aware of different data type issues when computing with different tensors.', 'duration': 516.706, 'highlights': ['The chapter emphasizes the need for practice in working with tensors and understanding their data types and functions in TensorFlow, particularly in tf.math, and encourages the audience to experiment with at least three functions.', 'It explains the interoperability between TensorFlow and NumPy, highlighting the seamless conversion of tensors to NumPy arrays using tf.constant and vice versa, and the ability to use NumPy functionality with tensor types.', 'The chapter discusses the default data types of tensors created from NumPy arrays and Python lists, highlighting the default type as float64 for tensors from NumPy arrays and float32 for tensors from Python lists or directly through TensorFlow, emphasizing the potential issues with different data types when converting NumPy arrays into tensors.', 'It provides guidance on seeking help when encountering challenges, including tips such as following along with the code, using docstrings in Google Colab, searching for solutions on Stack Overflow and TensorFlow documentation, and emphasizing the importance of asking questions in the Discord chat.']}], 'duration': 1724.981, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs812285497.jpg', 'highlights': ['Neural network outputs prediction probabilities are often represented as positional maximum and minimum.', 'Demonstration of one-hot encoding of tensors with practical examples and quantifiable data.', 'Explaining the interoperability between TensorFlow and NumPy, highlighting seamless conversion and potential data type issues.', 'Practical examples of performing mathematical operations like squaring, square root, and logarithm using TensorFlow.']}, {'end': 15549.525, 'segs': [{'end': 14055.249, 'src': 'embed', 'start': 14011.199, 'weight': 0, 'content': [{'end': 14019.203, 'text': "Now. with that being said, if we're going to do neural network regression with TensorFlow, you might be thinking what is a regression problem?", 'start': 14011.199, 'duration': 8.004}, {'end': 14022.845, 'text': "So let's have a look at a couple of examples of what regression problems are.", 'start': 14019.343, 'duration': 3.502}, {'end': 14024.166, 'text': 'Here we go.', 'start': 14023.685, 'duration': 0.481}, {'end': 14026.187, 'text': 'Some example regression problems.', 'start': 14024.746, 'duration': 1.441}, {'end': 14031.856, 'text': "Say we're trying to predict the house or the sale price of a house we're interested in.", 'start': 14027.415, 'duration': 4.441}, {'end': 14034.577, 'text': 'So how much will this house sell for?', 'start': 14032.556, 'duration': 2.021}, {'end': 14039.458, 'text': "If we've got a house down the street and we want to try and predict how much it's going to sell for,", 'start': 14034.877, 'duration': 4.581}, {'end': 14042.359, 'text': 'we might ask ourselves how much will this house sell for?', 'start': 14039.458, 'duration': 2.901}, {'end': 14043.44, 'text': "We just said that, didn't we?", 'start': 14042.519, 'duration': 0.921}, {'end': 14047.961, 'text': 'Another regression type problem is how many people will buy this app??', 
'start': 14044.02, 'duration': 3.941}, {'end': 14055.249, 'text': 'Or how much will my health insurance be? Or how much should I save each week for fuel?', 'start': 14048.801, 'duration': 6.448}], 'summary': 'Introduction to regression problems in neural network using tensorflow.', 'duration': 44.05, 'max_score': 14011.199, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs814011199.jpg'}, {'end': 14191.267, 'src': 'embed', 'start': 14167.915, 'weight': 2, 'content': [{'end': 14177.1, 'text': 'regression analysis is a set of statistical processes for estimating the relationship between a dependent variable, often called the outcome variable,', 'start': 14167.915, 'duration': 9.185}, {'end': 14182.642, 'text': 'or one or more independent variables, often called predictors, covariates or features.', 'start': 14177.1, 'duration': 5.542}, {'end': 14186.304, 'text': 'okay, so we want to predict the relationship.', 'start': 14182.642, 'duration': 3.662}, {'end': 14189.846, 'text': "so say this line that's the relationship there.", 'start': 14186.304, 'duration': 3.542}, {'end': 14191.267, 'text': 'so the dependent variable.', 'start': 14189.846, 'duration': 1.421}], 'summary': 'Regression analysis estimates relationship between dependent and independent variables.', 'duration': 23.352, 'max_score': 14167.915, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs814167915.jpg'}, {'end': 15153.42, 'src': 'embed', 'start': 15125.796, 'weight': 3, 'content': [{'end': 15130.02, 'text': "We haven't written any of this, so don't worry if it feels foreign to you.", 'start': 15125.796, 'duration': 4.224}, {'end': 15135.424, 'text': "We're going to write a lot of this going forward, but I just want to relate the architecture of regression model,", 'start': 15130.38, 'duration': 5.044}, {'end': 15137.286, 'text': "see what we're going to start working towards.", 'start': 
15135.424, 'duration': 1.862}, {'end': 15143.271, 'text': "By the end of this module that we're covering, this section, you're going to be able to write these yourself.", 'start': 15137.826, 'duration': 5.445}, {'end': 15145.113, 'text': "So let's have a look.", 'start': 15144.432, 'duration': 0.681}, {'end': 15146.274, 'text': 'This is the input layer.', 'start': 15145.173, 'duration': 1.101}, {'end': 15148.676, 'text': 'So the input layer is in the blue.', 'start': 15147.214, 'duration': 1.462}, {'end': 15153.42, 'text': "So we see the shape is defined here as three because we're working with our housing price prediction.", 'start': 15148.716, 'duration': 4.704}], 'summary': 'Introduction to regression model architecture with housing price prediction', 'duration': 27.624, 'max_score': 15125.796, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs815125796.jpg'}], 'start': 14011.199, 'title': 'Neural network regression basics and model architecture', 'summary': 'Introduces the concept of regression problems in neural network regression with tensorflow, covering examples such as predicting house sale prices, app downloads, health insurance costs, and fuel expenses. 
it also discusses the architecture of a neural network regression model, emphasizing input and output shapes, hidden layers, hyperparameters, activation functions, loss functions, and optimizers, and provides guidance on writing code for regression models.', 'chapters': [{'end': 14079.195, 'start': 14011.199, 'title': 'Neural network regression basics', 'summary': 'Explains the concept of regression problems in neural network regression with tensorflow, using examples such as predicting house sale prices, app downloads, health insurance costs, and fuel expenses, all involving predicting a numeric outcome.', 'duration': 67.996, 'highlights': ['Regression problems involve predicting a number, such as the sale price of a house, app downloads, health insurance costs, and weekly fuel expenses.', 'Examples of regression problems include predicting house sale prices, app downloads, health insurance costs, and fuel expenses.', 'Regression problems in neural network regression with TensorFlow involve predicting numeric outcomes, such as house sale prices and app downloads.']}, {'end': 14604.435, 'start': 14079.195, 'title': 'Neural network regression basics', 'summary': 'Introduces the concept of neural network regression, explaining the process of predicting numeric values using examples such as object detection and house price prediction, and outlines the key topics to be covered, including the architecture of a neural network regression model, input and output shapes, custom data creation, modeling steps, evaluation methods, and model saving and loading.', 'duration': 525.24, 'highlights': ['The chapter introduces the concept of neural network regression, explaining the process of predicting numeric values using examples such as object detection and house price prediction. 
The transcript provides examples of using neural network regression to predict coordinates for object detection and house prices, demonstrating the practical application of the concept.', 'Outlines the key topics to be covered, including the architecture of a neural network regression model, input and output shapes, custom data creation, modeling steps, evaluation methods, and model saving and loading. The chapter outlines the key topics to be covered, encompassing the architecture of a neural network regression model, input and output shapes, custom data creation, modeling steps, evaluation methods, and model saving and loading, providing a comprehensive overview of the upcoming content.', 'Describes the process of encoding input features for a machine learning model using numerical encoding and one-hot encoding. The transcript explains the process of encoding input features for a machine learning model using numerical encoding and one-hot encoding, demonstrating a practical approach to preparing data for machine learning algorithms.']}, {'end': 15065.65, 'start': 14604.956, 'title': 'Regression model architecture & neural network anatomy', 'summary': 'Discusses the concept of supervised learning and regression analysis, emphasizing the relationship estimation between dependent and independent variables, along with the architecture of a neural network regression model in tensorflow, with key points like input and output shapes, hidden layers, and hyperparameters.', 'duration': 460.694, 'highlights': ['Supervised Learning and Regression Analysis The discussion emphasizes the concept of supervised learning and regression analysis, highlighting the relationship estimation between dependent and independent variables in a regression problem.', 'Architecture of a Neural Network Regression Model The chapter explains the architecture of a neural network regression model in TensorFlow, covering input and output shapes, hidden layers, and hyperparameters such as neurons per 
hidden layer and the loss function.', 'Customization of Hidden Layers and Neurons It is mentioned that the number of hidden layers and neurons per hidden layer in a neural network regression model can be customized, allowing for problem-specific adjustments and the potential for creating deep models.']}, {'end': 15549.525, 'start': 15067.475, 'title': 'Neural network regression with tensorflow', 'summary': 'Covers the architecture of a regression model, including the output layer shape, activation functions, loss functions, and optimizers, with specific emphasis on a housing price prediction problem, using tensorflow. it also introduces the process of writing code for neural network regression models and provides guidance on accessing additional resources for further learning.', 'duration': 482.05, 'highlights': ['The chapter covers the architecture of a regression model, including the output layer shape, activation functions, loss functions, and optimizers, with specific emphasis on a housing price prediction problem, using TensorFlow. Architecture of regression model, output layer shape, activation functions, loss functions, optimizers, housing price prediction problem, TensorFlow.', 'The default loss function for a regression model is usually mean squared error or mean absolute error slash Huber loss, which is a combination of mean absolute and mean squared error. Default loss functions: mean squared error, mean absolute error, Huber loss.', 'The optimizer for improving neural network predictions is usually stochastic gradient descent or the adam optimizer, which is a default value. 
Optimizers: stochastic gradient descent, adam optimizer.', 'The chapter also introduces the process of writing code for neural network regression models and provides guidance on accessing additional resources for further learning Process of writing code, accessing additional resources for further learning.']}], 'duration': 1538.326, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs814011199.jpg', 'highlights': ['Regression problems involve predicting a number, such as the sale price of a house, app downloads, health insurance costs, and weekly fuel expenses.', 'The chapter introduces the concept of neural network regression, explaining the process of predicting numeric values using examples such as object detection and house price prediction.', 'Supervised Learning and Regression Analysis The discussion emphasizes the concept of supervised learning and regression analysis, highlighting the relationship estimation between dependent and independent variables in a regression problem.', 'The chapter covers the architecture of a regression model, including the output layer shape, activation functions, loss functions, and optimizers, with specific emphasis on a housing price prediction problem, using TensorFlow.']}, {'end': 16793.433, 'segs': [{'end': 15602.641, 'src': 'embed', 'start': 15575.909, 'weight': 11, 'content': [{'end': 15581.812, 'text': 'So we have the blue dots could be our data points and our regression model is this red line through the middle.', 'start': 15575.909, 'duration': 5.903}, {'end': 15588.755, 'text': "So that's the relationship that we're trying to learn, right? Between a dependent variable and one or more independent variables.", 'start': 15581.852, 'duration': 6.903}, {'end': 15591.296, 'text': "So let's create some data that looks like this.", 'start': 15588.775, 'duration': 2.521}, {'end': 15602.641, 'text': "How might we do that? 
Let's try import numpy as np, and we'll also import matplotlib.pyplot as plt.", 'start': 15593.058, 'duration': 9.583}], 'summary': 'Learning the relationship between variables using regression model.', 'duration': 26.732, 'max_score': 15575.909, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs815575909.jpg'}, {'end': 15692.234, 'src': 'embed', 'start': 15666.127, 'weight': 12, 'content': [{'end': 15678.77, 'text': "Now, if you were trying to figure out the relationship between X and Y, what might you do? There we go, we've got X and Y.", 'start': 15666.127, 'duration': 12.643}, {'end': 15684.672, 'text': 'Now, if we wanna visualize it, remember one of our other mottos is visualize, visualize, visualize.', 'start': 15678.77, 'duration': 5.902}, {'end': 15686.632, 'text': "We're gonna see that a lot throughout the course.", 'start': 15684.952, 'duration': 1.68}, {'end': 15689.593, 'text': 'Boom, plt.scatter.', 'start': 15687.993, 'duration': 1.6}, {'end': 15692.234, 'text': "Okay, we've got a very simple line here.", 'start': 15689.913, 'duration': 2.321}], 'summary': 'Analyzing the relationship between x and y through visualization and plotting scatter plot.', 'duration': 26.107, 'max_score': 15666.127, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs815666127.jpg'}, {'end': 15759.965, 'src': 'embed', 'start': 15717.532, 'weight': 7, 'content': [{'end': 15722.566, 'text': 'So x is negative 7, where y is 3.', 'start': 15717.532, 'duration': 5.034}, {'end': 15727.746, 'text': 'And then x is negative 4, where y is 6.', 'start': 15722.566, 'duration': 5.18}, {'end': 15732.647, 'text': 'And negative 1, 9, 2, and 12.', 'start': 15727.746, 'duration': 4.901}, {'end': 15742.989, 'text': "Are you sensing a relationship here? How might we manipulate x to get y?
Well, I'll tell you the rule I just figured out.", 'start': 15732.647, 'duration': 10.342}, {'end': 15745.83, 'text': 'y equals x plus 10.', 'start': 15743.389, 'duration': 2.441}, {'end': 15751.141, 'text': 'Does this work? x plus 10.', 'start': 15745.83, 'duration': 5.311}, {'end': 15752.542, 'text': 'Do we get Y? Beautiful.', 'start': 15751.141, 'duration': 1.401}, {'end': 15759.085, 'text': 'So if we want to go Y equals X plus 10.', 'start': 15753.402, 'duration': 5.683}, {'end': 15759.965, 'text': 'True, true, true, true, true.', 'start': 15759.085, 'duration': 0.88}], 'summary': 'The relationship between x and y is y = x + 10, with multiple examples supporting it.', 'duration': 42.433, 'max_score': 15717.532, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs815717532.jpg'}, {'end': 15873.214, 'src': 'embed', 'start': 15836.652, 'weight': 8, 'content': [{'end': 15849.044, 'text': 'So if we had three input variables for one output variable here, what might be the shapes of our input and output variables for this problem?', 'start': 15836.652, 'duration': 12.392}, {'end': 15861.165, 'text': "Let's create a demo tensor for our housing price prediction problem.", 'start': 15852.318, 'duration': 8.847}, {'end': 15867.95, 'text': 'This is a very, very important point for all the neural networks that you build.', 'start': 15862.466, 'duration': 5.484}, {'end': 15869.631, 'text': 'So I want you to pay attention to this one.', 'start': 15867.97, 'duration': 1.661}, {'end': 15873.214, 'text': "So the house info equals, let's create a tensor.", 'start': 15870.232, 'duration': 2.982}], 'summary': 'Discussing input and output variable shapes for a housing price prediction problem and emphasizing the importance for neural networks.', 'duration': 36.562, 'max_score': 15836.652, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs815836652.jpg'}, {'end': 16354.769, 
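The (x, y) pairs read out above, the y = x + 10 rule, and the three-feature housing demo tensor can be sketched as follows; the housing feature and price values are made up for illustration, only the shapes (3 inputs, 1 output) come from the lesson.

```python
import numpy as np

# The four (x, y) pairs read out in the lesson.
X = np.array([-7.0, -4.0, -1.0, 2.0])
y = np.array([3.0, 6.0, 9.0, 12.0])

# The rule figured out in the lesson: y = x + 10.
print(np.array_equal(y, X + 10))  # True: the relationship holds for every pair

# (The course then visualizes the data with plt.scatter(X, y).)

# Housing price prediction demo tensor: three input features -> one output.
house_info = np.array([4, 2, 2])    # e.g. bedrooms, bathrooms, garages (illustrative)
house_price = np.array([939700])    # the target we want to predict (illustrative)
print(house_info.shape, house_price.shape)  # (3,) (1,)
```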
'src': 'embed', 'start': 16239.951, 'weight': 4, 'content': [{'end': 16242.251, 'text': "So if you don't know the answer to that, that's perfectly fine.", 'start': 16239.951, 'duration': 2.3}, {'end': 16247.433, 'text': "But let's start with steps in modeling with TensorFlow.", 'start': 16242.291, 'duration': 5.142}, {'end': 16249.573, 'text': "That's what we're going to cover here.", 'start': 16247.453, 'duration': 2.12}, {'end': 16258.976, 'text': 'So the first one is number one, is creating a model.', 'start': 16252.214, 'duration': 6.762}, {'end': 16264.238, 'text': "In here, you're going to define the input and output layers.", 'start': 16259.296, 'duration': 4.942}, {'end': 16269.185, 'text': 'as well as the hidden layers of a neural network.', 'start': 16265.743, 'duration': 3.442}, {'end': 16273.026, 'text': "And if you're using deep learning, that is, of a deep learning model.", 'start': 16269.545, 'duration': 3.481}, {'end': 16277.728, 'text': 'Wonderful Number two, we have to compile a model.', 'start': 16273.506, 'duration': 4.222}, {'end': 16281.33, 'text': 'We need to define the loss function.', 'start': 16279.049, 'duration': 2.281}, {'end': 16291.835, 'text': 'In other words, the function which tells our model how wrong it is.', 'start': 16283.011, 'duration': 8.824}, {'end': 16294.596, 'text': 'And the optimizer.', 'start': 16293.135, 'duration': 1.461}, {'end': 16306.99, 'text': "The optimizer is tells our model how to improve the patterns it's learning and evaluation metrics.", 'start': 16295.489, 'duration': 11.501}, {'end': 16317.913, 'text': 'So what we can use to interpret the performance of our model.', 'start': 16308.911, 'duration': 9.002}, {'end': 16324.695, 'text': 'Beautiful And then finally, three is fitting a model.', 'start': 16318.813, 'duration': 5.882}, {'end': 16337.618, 'text': 'So this is letting the model try to find patterns between X and Y, or features and labels.', 'start': 16325.315, 'duration': 12.303}, {'end': 16341.401, 
'text': 'Beautiful So we have three steps here.', 'start': 16339.36, 'duration': 2.041}, {'end': 16346.484, 'text': "Now if we come into, we've got a beautiful diagram here, which is steps in modeling with TensorFlow.", 'start': 16341.421, 'duration': 5.063}, {'end': 16349.506, 'text': 'Whoa, look at that.', 'start': 16347.745, 'duration': 1.761}, {'end': 16351.287, 'text': 'A beautiful colorful diagram.', 'start': 16349.786, 'duration': 1.501}, {'end': 16354.769, 'text': 'Step one is we have to get our data ready.', 'start': 16351.928, 'duration': 2.841}], 'summary': 'Covering steps in modeling with tensorflow: creating, compiling, and fitting a model.', 'duration': 114.818, 'max_score': 16239.951, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs816239951.jpg'}, {'end': 16600.898, 'src': 'embed', 'start': 16541.062, 'weight': 2, 'content': [{'end': 16548.527, 'text': 'So model.compile equals loss equals tf.keras.losses.mae.', 'start': 16541.062, 'duration': 7.465}, {'end': 16557.453, 'text': 'So in our case, we have loss, mae is short for mean absolute error.', 'start': 16549.848, 'duration': 7.605}, {'end': 16558.714, 'text': "So let's have a look.", 'start': 16558.094, 'duration': 0.62}, {'end': 16562.415, 'text': "Remember, if you don't know something, mean absolute error.", 'start': 16559.633, 'duration': 2.782}, {'end': 16573.442, 'text': 'What is this? So in statistics, mean absolute error is a measure of errors between paired observations expressing the same phenomenon.', 'start': 16563.235, 'duration': 10.207}, {'end': 16584.469, 'text': 'Hmm What about, do we have images? 
Comparison of two observations where x1 equals 2, mean absolute error.', 'start': 16574.623, 'duration': 9.846}, {'end': 16586.61, 'text': "This is what you're going to come across.", 'start': 16585.63, 'duration': 0.98}, {'end': 16588.932, 'text': "You're going to come across a whole bunch of different explanations.", 'start': 16586.63, 'duration': 2.302}, {'end': 16594.115, 'text': 'But what the best thing to do is to just check them out and see if something catches your eye.', 'start': 16589.753, 'duration': 4.362}, {'end': 16600.898, 'text': 'So examples of Y versus X include comparison of predicted versus observed.', 'start': 16595.035, 'duration': 5.863}], 'summary': 'Using tf.keras.losses.mae for mean absolute error in model compilation.', 'duration': 59.836, 'max_score': 16541.062, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs816541062.jpg'}, {'end': 16723.201, 'src': 'embed', 'start': 16693.167, 'weight': 1, 'content': [{'end': 16702.053, 'text': "Wonderful. 
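A plain-NumPy sketch of the regression losses in play here; the formulas are the standard ones, and Keras ships ready-made versions (tf.keras.losses.mae computes exactly this mean absolute error, with MeanSquaredError and Huber as the other common regression defaults mentioned earlier).

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error: on average, how wrong are the predictions?
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    # Mean squared error: squares each error, punishing large misses more.
    return np.mean(np.square(y_true - y_pred))

def huber(y_true, y_pred, delta=1.0):
    # Huber loss: quadratic (MSE-like) for small errors, linear (MAE-like)
    # for large ones, i.e. the combination described in the transcript.
    err = np.abs(y_true - y_pred)
    return np.mean(np.where(err <= delta,
                            0.5 * np.square(err),
                            delta * (err - 0.5 * delta)))
```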
Now again, if you're not sure what SGD is, go and search 'what is stochastic gradient descent?'", 'start': 16693.167, 'duration': 8.886}, {'end': 16708.514, 'text': "So I'll let you go through those in your own time of what stochastic gradient descent is.", 'start': 16703.887, 'duration': 4.627}, {'end': 16714.842, 'text': 'But just what you need to know for now is that an optimizer tells our neural network how it should improve.', 'start': 16708.914, 'duration': 5.928}, {'end': 16718.366, 'text': "And then we'll go here and we want.", 'start': 16716.524, 'duration': 1.842}, {'end': 16723.201, 'text': "for metrics, we're going to use mae as well.", 'start': 16719.358, 'duration': 3.843}], 'summary': 'SGD is an optimizer for neural networks; also using mae as metrics.', 'duration': 30.034, 'max_score': 16693.167, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs816693167.jpg'}, {'end': 16772.818, 'src': 'embed', 'start': 16747.216, 'weight': 0, 'content': [{'end': 16761.377, 'text': "and now we're going to fit the model, which is model.fit, and we're going to fit it on x and y for five epochs.", 'start': 16747.216, 'duration': 14.161}, {'end': 16763.854, 'text': 'so this is what the fit function takes.', 'start': 16761.377, 'duration': 2.477}, {'end': 16772.818, 'text': 'So we create the model, we compile the model, and we fit the model, aka telling our model, look at x and y and try and figure out the patterns.', 'start': 16764.294, 'duration': 8.524}], 'summary': 'Fitting model for 5 epochs to identify patterns in x and y data.', 'duration': 25.602, 'max_score': 16747.216, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs816747216.jpg'}], 'start': 15550.085, 'title': 'Basics of regression data, neural network shapes, and tensorflow modeling', 'summary': 'Covers creating regression data with numpy and matplotlib, discussing neural network input and output shapes with a
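Putting the three steps together, a minimal sketch of the create, compile, and fit workflow described here (single Dense layer, MAE loss, SGD optimizer, five epochs, as in the transcript; the extra X values simply extend the y = x + 10 data):

```python
import numpy as np
import tensorflow as tf

# Data following the y = x + 10 rule, reshaped to (n_samples, 1).
X = np.array([-7.0, -4.0, -1.0, 2.0, 5.0, 8.0, 11.0, 14.0]).reshape(-1, 1)
y = X + 10

# 1. Create a model: define the input, hidden and output layers
#    (here just one Dense unit: one number in, one number out).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])

# 2. Compile: loss (how wrong the model is), optimizer (how it improves),
#    and metrics (how we interpret performance).
model.compile(loss="mae",
              optimizer=tf.keras.optimizers.SGD(),
              metrics=["mae"])

# 3. Fit: let the model try to find patterns between X and y.
history = model.fit(X, y, epochs=5, verbose=0)
```

With only five epochs and default settings this model is deliberately weak (the video's first attempt predicts about 12.7 for an input of 17); more epochs, extra layers, and a tuned learning rate are the improvements covered next.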
housing price prediction example, and focusing on modeling with tensorflow basics, including mean absolute error and stochastic gradient descent for model optimization.', 'chapters': [{'end': 15759.965, 'start': 15550.085, 'title': 'Creating regression data', 'summary': 'Introduces creating regression data using numpy arrays and matplotlib, emphasizing the visualization of the relationship between independent and dependent variables and discussing the pattern and relationship between x and y.', 'duration': 209.88, 'highlights': ["The chapter emphasizes the importance of visualizing the relationship between independent and dependent variables, with a motto of 'visualize, visualize, visualize.'", 'The chapter demonstrates creating regression data using numpy arrays and matplotlib, specifically showcasing the pattern and relationship between x and y.', 'The chapter discusses the rule that y equals x plus 10 and showcases its application to manipulate x to get y, providing specific examples such as x=-7, y=3 and x=-4, y=6.']}, {'end': 16067.681, 'start': 15760.045, 'title': 'Neural network input and output shapes', 'summary': 'Discusses the input and output shapes for a neural network, using a housing price prediction problem as an example, and explains how the input and output shapes are determined based on the number of input features and output labels.', 'duration': 307.636, 'highlights': ['Explaining the input and output shapes for a neural network using a housing price prediction problem as an example. Clarifies the process of determining input and output shapes based on the number of input features and output labels.', 'Defining the input and output shapes for the housing prediction problem as three input variables and one output variable. Illustrates the specific input and output shapes for the housing prediction problem.', 'Demonstrating the creation of tensors for the housing price prediction problem. 
Emphasizes the importance of creating tensors for neural networks.', 'Discussing the input and output shapes for a sample problem and relating it to the housing price prediction demo. Relates the input and output shapes for the sample problem to the housing price prediction demo.', 'Explaining the concept of scalars and rank 0 tensors in relation to the input and output shapes. Provides an explanation of scalars as a special kind of tensor and its relation to input and output shapes.']}, {'end': 16539.722, 'start': 16068.661, 'title': 'Modeling with tensorflow basics', 'summary': 'Focuses on creating, compiling, and fitting a model in tensorflow, using a sequential api with one layer to predict one number, and the steps involved in modeling with tensorflow.', 'duration': 471.061, 'highlights': ['The first step in modeling with TensorFlow is creating a model, defining input and output layers as well as hidden layers, with the example focusing on building a model to take one input number and predict one output number.', 'The second step involves compiling the model by defining the loss function, optimizer, and evaluation metrics, essential for telling the model how wrong it is, how to improve, and interpreting its performance.', 'The process of fitting the model involves letting it find patterns between input and output, or features and labels, emphasizing the importance of preparing the data in tensors before creating or picking a pre-trained model to suit the problem at hand.']}, {'end': 16793.433, 'start': 16541.062, 'title': 'Understanding mean absolute error and stochastic gradient descent', 'summary': 'Explains the concept of mean absolute error and its significance in model training, alongside the usage of stochastic gradient descent as an optimizer, emphasizing the importance of optimizing our neural network and utilizing the mean absolute error as a metric for model evaluation.', 'duration': 252.371, 'highlights': ['Mean absolute error measures errors 
between paired observations expressing the same phenomenon. Mean absolute error is a statistical measure of errors between paired observations, providing a quantitative understanding of the discrepancies between predicted and observed values.', "The function tf.keras.losses.mae computes the mean absolute error between labels and predictions. The function tf.keras.losses.mae calculates the mean absolute error between the true labels and the predicted values, offering a direct insight into the accuracy of the model's predictions.", "Stochastic Gradient Descent (SGD) is utilized as an optimizer to improve the neural network. Stochastic Gradient Descent is employed as an optimizer to enhance the performance of the neural network by iteratively adjusting the model's parameters to minimize the loss function and improve prediction accuracy.", 'The fit function is used to train the model on x and y for five epochs, allowing the model to identify patterns in the data. The fit function is applied to train the model using the input data x and the corresponding output y for five epochs, enabling the model to iteratively learn and recognize underlying patterns within the dataset.']}], 'duration': 1243.348, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs815550085.jpg', 'highlights': ['The fit function is used to train the model on x and y for five epochs, allowing the model to identify patterns in the data.', 'Stochastic Gradient Descent (SGD) is utilized as an optimizer to improve the neural network.', 'The function tf.keras.losses.mae computes the mean absolute error between labels and predictions.', 'Mean absolute error measures errors between paired observations expressing the same phenomenon.', 'The process of fitting the model involves letting it find patterns between input and output, or features and labels, emphasizing the importance of preparing the data in tensors before creating or picking a pre-trained model to 
suit the problem at hand.', 'The second step involves compiling the model by defining the loss function, optimizer, and evaluation metrics, essential for telling the model how wrong it is, how to improve, and interpreting its performance.', 'The first step in modeling with TensorFlow is creating a model, defining input and output layers as well as hidden layers, with the example focusing on building a model to take one input number and predict one output number.', 'The chapter discusses the rule that y equals x plus 10 and showcases its application to manipulate x to get y, providing specific examples such as x=-7, y=3 and x=-4, y=6.', 'Explaining the input and output shapes for a neural network using a housing price prediction problem as an example. Clarifies the process of determining input and output shapes based on the number of input features and output labels.', 'Demonstrating the creation of tensors for the housing price prediction problem. Emphasizes the importance of creating tensors for neural networks.', 'Defining the input and output shapes for the housing prediction problem as three input variables and one output variable. Illustrates the specific input and output shapes for the housing prediction problem.', 'The chapter demonstrates creating regression data using numpy arrays and matplotlib, specifically showcasing the pattern and relationship between x and y.', "The chapter emphasizes the importance of visualizing the relationship between independent and dependent variables, with a motto of 'visualize, visualize, visualize.'"]}, {'end': 19947.804, 'segs': [{'end': 18550.832, 'src': 'embed', 'start': 18495.863, 'weight': 1, 'content': [{'end': 18500.784, 'text': 'Beautiful Oh, would you look at that? Right from the start.', 'start': 18495.863, 'duration': 4.921}, {'end': 18509.684, 'text': "Remember our first model, after five epochs, had an error of about 11? 
Well, this one's hitting about 10.", 'start': 18501.638, 'duration': 8.046}, {'end': 18516.769, 'text': "But then after 10 epochs, it's already just above, oh, it's already below our other model without a hidden layer.", 'start': 18509.684, 'duration': 7.085}, {'end': 18525.115, 'text': 'So this one finished off with a loss of just around about seven, and an MAE of around about seven as well.', 'start': 18517.75, 'duration': 7.365}, {'end': 18528.258, 'text': 'Remember, mean absolute error is about on average.', 'start': 18525.656, 'duration': 2.602}, {'end': 18529.859, 'text': "how wrong are our model's predictions?", 'start': 18528.258, 'duration': 1.601}, {'end': 18533.545, 'text': 'So if we come down here, what did we finish up with here?', 'start': 18530.984, 'duration': 2.561}, {'end': 18538.167, 'text': 'Oh, my goodness, how cool is that?', 'start': 18535.966, 'duration': 2.201}, {'end': 18546.99, 'text': "Our next model by just tweaking one little thing, by just adding an extra hidden layer here, that's all the change that we made.", 'start': 18539.067, 'duration': 7.923}, {'end': 18550.832, 'text': "we've basically cut out MAE and our loss in half.", 'start': 18546.99, 'duration': 3.842}], 'summary': "After 10 epochs, the model's error reduced from 11 to 7, and adding a hidden layer halved the loss and mae.", 'duration': 54.969, 'max_score': 18495.863, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs818495863.jpg'}, {'end': 18929.716, 'src': 'embed', 'start': 18886.191, 'weight': 0, 'content': [{'end': 18886.571, 'text': 'Look at that.', 'start': 18886.191, 'duration': 0.38}, {'end': 18890.134, 'text': 'Our loss and MAE are barely even one.', 'start': 18887.412, 'duration': 2.722}, {'end': 18892.556, 'text': 'So theoretically, this model should be really good.', 'start': 18890.214, 'duration': 2.342}, {'end': 18897.78, 'text': "Oh, 26.2, that's our best model so far.", 'start': 18895.779, 'duration': 
2.001}, {'end': 18903.443, 'text': 'Because remember, predicting on 17, the ideal value should be 27.', 'start': 18897.8, 'duration': 5.643}, {'end': 18910.667, 'text': 'So in our case, adjusting the learning rate of our optimizer has resulted in the best change so far.', 'start': 18903.443, 'duration': 7.224}, {'end': 18912.932, 'text': "So that's an important point.", 'start': 18911.872, 'duration': 1.06}, {'end': 18915.393, 'text': 'I want you to keep these things in your mind as we go through.', 'start': 18913.132, 'duration': 2.261}, {'end': 18919.914, 'text': "And again, if you've only just experienced this for the first time, I'll give you a little hint.", 'start': 18915.413, 'duration': 4.501}, {'end': 18926.615, 'text': 'The learning rate is potentially the most important hyperparameter you can change on all of your neural networks.', 'start': 18920.394, 'duration': 6.221}, {'end': 18928.495, 'text': 'So just keep that in mind going forward.', 'start': 18926.675, 'duration': 1.82}, {'end': 18929.716, 'text': "Don't worry too much about it now.", 'start': 18928.535, 'duration': 1.181}], 'summary': "Model's best prediction is 26.2 against an ideal value of 27, indicating successful optimization by adjusting the learning rate.", 'duration': 43.525, 'max_score': 18886.191, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs818886191.jpg'}, {'end': 19434.861, 'src': 'embed', 'start': 19410.975, 'weight': 4, 'content': [{'end': 19423.545, 'text': 'So before we get into visualizing, further into visualizing the data, the model itself, the training of a model, and the predictions of a model, oh.', 'start': 19410.975, 'duration': 12.57}, {'end': 19430.976, 'text': "and even further into evaluating our model, let's take a look at the concept of the three sets.", 'start': 19424.548, 'duration': 6.428}, {'end': 19434.861, 'text': "Now, if you're familiar with machine learning, you may already know what the three sets are.", 'start': 19431.056, 'duration':
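Since the transcript flags the learning rate as potentially the most important hyperparameter, here is a toy, pure-NumPy illustration (a hand-rolled one-weight linear model trained by full-batch gradient descent, not the course's Keras model) of how much the learning rate alone changes the outcome on the same data:

```python
import numpy as np

def train_linear(lr, epochs=500):
    """Fit y ~ w*x + b to the y = x + 10 data with gradient descent on MSE."""
    X = np.array([-7.0, -4.0, -1.0, 2.0, 5.0, 8.0, 11.0, 14.0])
    y = X + 10.0
    w, b = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        err = w * X + b - y
        # Gradients of mean squared error with respect to w and b,
        # each scaled by the learning rate before the update.
        w -= lr * (2.0 / n) * np.sum(err * X)
        b -= lr * (2.0 / n) * np.sum(err)
    return float(np.mean(np.abs(w * X + b - y)))  # final MAE

# Same model, same data, same number of epochs; only the learning rate differs.
# A tiny learning rate barely moves, while too large a rate would overshoot
# and diverge entirely.
print(train_linear(lr=0.01), train_linear(lr=0.0001))
```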
3.805}], 'summary': 'Introduction to the concept of the three sets in machine learning.', 'duration': 23.886, 'max_score': 19410.975, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs819410975.jpg'}], 'start': 16794.034, 'title': 'Improving machine learning models', 'summary': 'Explores tensorflow sequential api, improving model performance by altering hyperparameters, such as increasing epochs from 5 to 100, resulting in a reduction of mean absolute error from around 11 to about 7, and adding an extra hidden layer with 100 hidden units, cutting the mean absolute error and loss in half. it emphasizes the importance of the learning rate as the most crucial hyperparameter in neural networks and discusses the concept of three sets in machine learning, emphasizing the importance of training, validation, and test sets.', 'chapters': [{'end': 17142.086, 'start': 16794.034, 'title': 'Tensorflow sequential api', 'summary': 'Explores the tensorflow sequential api, demonstrating how to create a sequential model and addressing warnings related to changing data types from float64 to float32.', 'duration': 348.052, 'highlights': ['The chapter explores the TensorFlow sequential API and demonstrates how to create a sequential model. The transcript discusses the usage of the sequential API in TensorFlow and provides an example of creating a sequential model using tf.keras.Sequential.', 'Addressing warnings related to changing data types from float64 to float32. The transcript discusses warnings related to data type conversion from float64 to float32, and suggests methods like changing data type using tf.float32 or tf.cast to address the issue.']}, {'end': 17933.923, 'start': 17142.607, 'title': 'Improving a machine learning model', 'summary': "Discusses the process of fitting a model, evaluating the model's performance, and improving the model by altering the steps involved in creating, compiling, and fitting it. 
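The three sets are usually split along these lines; the analogies (training set as course materials, validation set as practice exam, test set as final exam) follow the course, while the 80/10/10 ratios below are a common convention rather than a fixed rule.

```python
import numpy as np

# 100 samples following the familiar y = x + 10 pattern.
X = np.arange(100, dtype=np.float64)
y = X + 10

# Training set: what the model learns from (like the course materials).
# Validation set: used to tune and compare models (like a practice exam).
# Test set: held back for the final evaluation (like the final exam).
n = len(X)
X_train, y_train = X[:int(0.8 * n)], y[:int(0.8 * n)]
X_val, y_val = X[int(0.8 * n):int(0.9 * n)], y[int(0.8 * n):int(0.9 * n)]
X_test, y_test = X[int(0.9 * n):], y[int(0.9 * n):]
print(len(X_train), len(X_val), len(X_test))  # 80 10 10
```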
it demonstrates the use of tensorflow to create, compile, and fit the model, while highlighting the need for improvements in the model's performance.", 'duration': 791.316, 'highlights': ["The model's initial average error in predicting Y from X is 11.5, which slowly improves to 10.9, indicating the model's learning progress.", 'The model predicts a Y value of 12.7 for X=17, reflecting an average error of 11 points, as indicated by the loss and MAE values.', 'Steps in improving the model involve altering the creation, compilation, and fitting methods, including adding more layers, increasing hidden units, changing activation functions, modifying the optimization function and learning rate, and fitting the model for more epochs.']}, {'end': 18912.932, 'start': 17933.923, 'title': 'Improving model performance', 'summary': 'demonstrates how to improve model performance by altering hyperparameters, such as increasing epochs from 5 to 100, resulting in a reduction of mean absolute error from around 11 to about 7, and then adding an extra hidden layer with 100 hidden units, cutting the mean absolute error and loss in half, with the best change coming from adjusting the learning rate, after which the model achieves a mean absolute error of about 1, predicting 26.2 for an input of 17 (ideal value 27).', 'duration': 979.009, 'highlights': ['Altering the number of epochs from 5 to 100 resulted in a decrease in mean absolute error from around 11 to about 7.', 'Adding an extra hidden layer with 100 hidden units resulted in cutting the mean absolute error and loss in half.', 'Adjusting the learning rate of the optimizer resulted in the best change, with the model achieving a mean absolute error of about 1, predicting 26.2 where the ideal value is 27.']}, {'end': 19320.37, 'start': 18913.132, 'title': 'Neural network hyperparameters and model evaluation', 'summary': 'emphasizes the importance of the learning rate as the most crucial hyperparameter in neural networks, and details the iterative process of building, fitting, evaluating, and tweaking models, highlighting the significance of visualization in model evaluation.', 'duration': 407.238, 'highlights': ["The learning rate is potentially the most important hyperparameter you can change on all of your neural networks. The learning rate is highlighted as the most crucial hyperparameter in neural networks, impacting the model's performance significantly.", "In practice, you'll probably have a lot more samples when you're building your neural networks.
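The two improvements discussed in this chapter (training for more epochs and adding an extra hidden layer with 100 hidden units) can be sketched like this. The dataset and hyperparameters are assumptions in the spirit of the course, not the exact notebook code.

```python
import numpy as np
import tensorflow as tf

# Toy regression data: y = X + 10.
X = np.arange(-7.0, 15.0, 3.0).reshape(-1, 1)
y = X.squeeze() + 10.0

tf.random.set_seed(42)
# Two of the improvement levers from the transcript:
#   * an extra hidden layer with 100 hidden units
#   * fitting for 100 epochs instead of 5
model = tf.keras.Sequential([
    tf.keras.layers.Dense(100),
    tf.keras.layers.Dense(1),
])
model.compile(loss="mae", optimizer="sgd", metrics=["mae"])
history = model.fit(X, y, epochs=100, verbose=0)
```

Each lever changes a different part of the workflow: the extra layer alters the model's creation step, while the epoch count alters the fitting step.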
Emphasizes the common scenario of working with larger datasets when building neural networks, suggesting the need for a larger dataset for effective model training.", 'A typical workflow when building neural networks involves building a model, fitting it, evaluating it, and then tweaking the model iteratively. Details the iterative process of building, fitting, evaluating, and tweaking models, outlining the typical workflow for building neural networks.', 'Visualize, visualize, visualize - the most important step when evaluating models, involving visualizing the data, model, training, and predictions. Underlines the significance of visualization in model evaluation, emphasizing the visualization of data, model, training, and predictions as crucial steps in model evaluation.']}, {'end': 19947.804, 'start': 19320.37, 'title': 'Machine learning three sets concept', 'summary': 'Discusses the concept of three sets in machine learning, emphasizing the importance of training, validation, and test sets, and the ideal state of generalization, with a focus on visualizing and splitting the data into training and test sets, and the distribution of samples.', 'duration': 627.434, 'highlights': ["The concept of three sets in machine learning, including the training set, validation set, and test set, is crucial for generalization and model evaluation. It explains the importance of having three sets in machine learning, comprising the training set for learning, the validation set for tuning the model, and the test set for evaluating the model's performance on unseen data.", 'Emphasizing the importance of visualizing the data and the need to split it into training and test sets. 
It highlights the significance of visualizing the data and the process of splitting it into training and test sets, with a focus on understanding the distribution of samples and the impact on model performance.", 'The distribution of samples into training and test sets, with a recommended split of 80% for training and 20% for testing. It discusses the recommended split of data into training and test sets, emphasizing an 80-20 split for training and testing respectively, and the consideration of the sample size in deep learning.']}], 'duration': 3153.77, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs816794034.jpg', 'highlights': ["The learning rate is highlighted as the most crucial hyperparameter in neural networks, impacting the model's performance significantly.", 'The addition of an extra hidden layer with 100 hidden units led to a significant improvement in model performance, cutting the mean absolute error and loss in half.', 'By adjusting the learning rate of the optimizer, the model achieved a significant improvement, with a mean absolute error of about 1, predicting 26.2 where the ideal value is 27, indicating the best change in performance.', "The model's performance improved as the number of epochs was increased, leading to a reduction in mean absolute error from around 11 to about 7.", "It explains the importance of having three sets in machine learning, comprising the training set for learning, the validation set for tuning the model, and the test set for evaluating the model's performance on unseen data."]}, {'end': 21320.217, 'segs': [{'end': 19975.95, 'src': 'embed', 'start': 19947.804, 'weight': 9, 'content': [{'end': 19952.245, 'text': "kind of like a comment, but just in a text format that's easier to understand.", 'start': 19947.804, 'duration': 4.441}, {'end': 19955.026, 'text': 'So that was a little bit of an aside.', 'start': 19952.345, 'duration': 2.681}, {'end': 19955.966, 'text': "Let's write some code.",
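The 80/20 train/test split described above can be sketched with NumPy. The dataset (50 samples of y = X + 10) follows the course's example; splitting by slicing assumes the sample order carries no information, otherwise the data should be shuffled first.

```python
import numpy as np

# The course's toy dataset: 50 samples of the relationship y = X + 10.
X = np.arange(-100, 100, 4)
y = X + 10

# 80% of the samples for training, the remaining 20% held out for testing.
split = int(0.8 * len(X))  # 40 of the 50 samples
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]
```

The test set stays untouched during training so it can measure how well the model generalizes to unseen data; a validation set, when used, is carved out of the training portion the same way.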
'start': 19955.066, 'duration': 0.9}, {'end': 19964.689, 'text': "So we want to set up a matplotlib figure because this time we're going to be plotting two samples of data, our training and test.", 'start': 19955.986, 'duration': 8.703}, {'end': 19967.51, 'text': 'So (10, 7) is my favorite plot size.', 'start': 19964.729, 'duration': 2.781}, {'end': 19973.052, 'text': "So plot training data in, what's a good color? Blue.", 'start': 19967.85, 'duration': 5.202}, {'end': 19975.95, 'text': 'plt.scatter.', 'start': 19973.972, 'duration': 1.978}], 'summary': 'Setting up a matplotlib figure to plot training and test data, with a figure size of (10, 7) and using blue for the training data.', 'duration': 28.146, 'max_score': 19947.804, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs819947804.jpg'}, {'end': 20170.188, 'src': 'embed', 'start': 20137.383, 'weight': 0, 'content': [{'end': 20144.284, 'text': 'we had a go at visualizing our data, more specifically, comparing our training data to the testing data,', 'start': 20137.383, 'duration': 6.901}, {'end': 20152.346, 'text': 'and we want our model to learn on the training data and we want our model to be able to predict the testing data.', 'start': 20144.284, 'duration': 8.062}, {'end': 20156.127, 'text': "in other words, given X, what's the Y value?", 'start': 20152.346, 'duration': 3.781}, {'end': 20157.987, 'text': "so let's have a look at.", 'start': 20156.127, 'duration': 1.86}, {'end': 20166.547, 'text': "let's have a look at how to build a neural network for our data.", 'start': 20157.987, 'duration': 8.56}, {'end': 20170.188, 'text': "now we've actually already done this.", 'start': 20166.547, 'duration': 3.641}], 'summary': 'Visualized and compared training and testing data, then built a neural network for prediction.', 'duration': 32.805, 'max_score': 20137.383, 'thumbnail':
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs820137383.jpg'}, {'end': 20685.74, 'src': 'heatmap', 'start': 20290.441, 'weight': 8, 'content': [{'end': 20296.506, 'text': 'Now, because what I want us to have a look at is visualizing the model.', 'start': 20290.441, 'duration': 6.065}, {'end': 20307.355, 'text': "So we can get an idea of what our model looks like before we've even run it by running model.summary.", 'start': 20298.128, 'duration': 9.227}, {'end': 20313.581, 'text': 'Oh, what happened here? A value error.', 'start': 20310.318, 'duration': 3.263}, {'end': 20315.522, 'text': 'This model has not yet been built.', 'start': 20313.981, 'duration': 1.541}, {'end': 20317.835, 'text': 'Why is that??', 'start': 20317.435, 'duration': 0.4}, {'end': 20328.745, 'text': 'Build the model first by calling build or calling fit with some data, or specify an input shape argument in the first layers for an automatic build.', 'start': 20318.036, 'duration': 10.709}, {'end': 20332.109, 'text': 'Now, you might see why I commented out this line.', 'start': 20328.826, 'duration': 3.283}, {'end': 20334.511, 'text': 'I did this on purpose because I wanted you to see this error.', 'start': 20332.129, 'duration': 2.382}, {'end': 20337.694, 'text': 'So build the model first by calling dot build.', 'start': 20335.572, 'duration': 2.122}, {'end': 20343.199, 'text': 'So we could go model dot build and we see what that might do.', 'start': 20338.034, 'duration': 5.165}, {'end': 20346.81, 'text': 'So builds the model based on input shapes received.', 'start': 20344.349, 'duration': 2.461}, {'end': 20348.351, 'text': 'We could define the input shape there.', 'start': 20346.83, 'duration': 1.521}, {'end': 20350.292, 'text': "So that's one option, that's the first one.", 'start': 20348.571, 'duration': 1.721}, {'end': 20351.472, 'text': 'You could try that out for yourself.', 'start': 20350.332, 'duration': 1.14}, {'end': 20359.216, 'text': 'Or 
we could specify an input shape argument in the first layers for an automatic build.', 'start': 20352.553, 'duration': 6.663}, {'end': 20373.682, 'text': "So with that in mind, let's see how we might, let's create a model which builds automatically by defining the input shape argument.", 'start': 20360.596, 'duration': 13.086}, {'end': 20376.853, 'text': 'in the first layer.', 'start': 20375.532, 'duration': 1.321}, {'end': 20382.757, 'text': "let's see what that might look like, because this is something you'll do quite often in practice is defining the input shape.", 'start': 20376.853, 'duration': 5.904}, {'end': 20386.999, 'text': 'usually your neural networks can determine the input shape.', 'start': 20382.757, 'duration': 4.242}, {'end': 20390.001, 'text': 'so this is what i mean by input shape.', 'start': 20386.999, 'duration': 3.002}, {'end': 20392.883, 'text': 'is this parameter here, input shape?', 'start': 20390.001, 'duration': 2.882}, {'end': 20397.345, 'text': 'usually they can figure it out, the input shape on their own.', 'start': 20392.883, 'duration': 4.462}, {'end': 20401.768, 'text': "however, sometimes you'll need to manually define it, depending on what problem you're working on.", 'start': 20397.345, 'duration': 4.423}, {'end': 20402.869, 'text': "so let's see how we might do that.", 'start': 20401.768, 'duration': 1.101}, {'end': 20405.32, 'text': "So we'll set the random seed.", 'start': 20403.678, 'duration': 1.642}, {'end': 20412.006, 'text': 'So tf random set seed for as much reproducibility as we can.', 'start': 20405.7, 'duration': 6.306}, {'end': 20414.849, 'text': "So we'll create a model.", 'start': 20412.747, 'duration': 2.102}, {'end': 20416.171, 'text': 'This is just the same as above.', 'start': 20414.969, 'duration': 1.202}, {'end': 20422.257, 'text': "And we're gonna go model equals tf keras sequential.", 'start': 20417.892, 'duration': 4.365}, {'end': 20426.501, 'text': 'Remember a sequential model just runs from top to 
bottom.', 'start': 20423.678, 'duration': 2.823}, {'end': 20431.338, 'text': "We're going to go tf.keras.layers.Dense(1).", 'start': 20428.097, 'duration': 3.241}, {'end': 20432.879, 'text': "Now here's what we're going to do.", 'start': 20431.918, 'duration': 0.961}, {'end': 20436.04, 'text': 'The input shape argument, we need to define that.', 'start': 20433.539, 'duration': 2.501}, {'end': 20443.603, 'text': "So how might we find the input shape? Remember, what are we trying to do? We're trying to predict y based on x.", 'start': 20437.1, 'duration': 6.503}, {'end': 20450.726, 'text': "So what is the shape of the data that we're passing our model? x.shape.", 'start': 20443.603, 'duration': 7.123}, {'end': 20456.308, 'text': '50, but we want just one sample of x.shape.', 'start': 20452.787, 'duration': 3.521}, {'end': 20460.829, 'text': "it's a scalar value.", 'start': 20458.568, 'duration': 2.261}, {'end': 20463.951, 'text': 'so what if we do, what is X[0] and y[0]?', 'start': 20460.829, 'duration': 3.122}, {'end': 20464.671, 'text': 'what do they look like?', 'start': 20463.951, 'duration': 0.72}, {'end': 20469.034, 'text': "again, they're just one number.", 'start': 20464.671, 'duration': 4.363}, {'end': 20477.098, 'text': "so in our case, the input shape will be one, because we're passing in one number to predict one number.", 'start': 20469.034, 'duration': 8.064}, {'end': 20477.658, 'text': 'so there we go.', 'start': 20477.098, 'duration': 0.56}, {'end': 20479.98, 'text': "we've specified the input shape argument.", 'start': 20477.658, 'duration': 2.322}, {'end': 20484.386, 'text': "now, Again, the shape might be different depending on the input tensor you're passing in.", 'start': 20479.98, 'duration': 4.406}, {'end': 20486.227, 'text': 'You might have three different variables.', 'start': 20484.926, 'duration': 1.301}, {'end': 20488.248, 'text': 'So you could pass the input shape as three.', 'start': 20486.767, 'duration': 1.481}, {'end': 20491.95, 'text': 'But
for our case, we have one input for one output.', 'start': 20488.428, 'duration': 3.522}, {'end': 20493.09, 'text': "That's what we're after.", 'start': 20492.45, 'duration': 0.64}, {'end': 20497.352, 'text': 'Now, we might go number two is compile the model.', 'start': 20493.991, 'duration': 3.361}, {'end': 20511.092, 'text': 'Model.compile loss equals tf.keras losses mae optimizer equals tf.keras optimizers dot.', 'start': 20498.793, 'duration': 12.299}, {'end': 20518.137, 'text': 'SGD stochastic gradient descent metrics equals mean absolute error.', 'start': 20511.092, 'duration': 7.045}, {'end': 20522.821, 'text': "So if we run this, just the same model we've created before.", 'start': 20518.838, 'duration': 3.983}, {'end': 20524.362, 'text': 'This is also the same as above.', 'start': 20522.981, 'duration': 1.381}, {'end': 20528.525, 'text': "Now let's check out our model dot summary.", 'start': 20525.262, 'duration': 3.263}, {'end': 20533.728, 'text': "Whoa Okay, we've got a few things going on here.", 'start': 20530.526, 'duration': 3.202}, {'end': 20539.713, 'text': 'So calling dot summary on our model shows us the layers that it contains.', 'start': 20534.129, 'duration': 5.584}, {'end': 20545.491, 'text': 'the output shape and the number of parameters of each layer.', 'start': 20540.629, 'duration': 4.862}, {'end': 20553.234, 'text': 'So the output shape here is remember, we want one input for one output.', 'start': 20546.652, 'duration': 6.582}, {'end': 20555.875, 'text': 'So the output of one, that makes sense.', 'start': 20553.815, 'duration': 2.06}, {'end': 20558.957, 'text': 'The layer here is a type dense.', 'start': 20556.636, 'duration': 2.321}, {'end': 20561.418, 'text': 'So another word for dense is fully connected.', 'start': 20559.357, 'duration': 2.061}, {'end': 20568.881, 'text': 'So if we go here fully connected layer images,', 'start': 20561.938, 'duration': 6.943}, {'end': 20578.352, 'text': 'What this means in a fully connected layer is 
that all of the neurons here connected to all of the neurons in the next layer.', 'start': 20570.168, 'duration': 8.184}, {'end': 20580.053, 'text': "So that's what fully connected means.", 'start': 20578.732, 'duration': 1.321}, {'end': 20586.056, 'text': 'And in TensorFlow, a fully connected layer when you see that is the same as a dense layer.', 'start': 20580.153, 'duration': 5.903}, {'end': 20590.498, 'text': 'So dense is just another word for dense connections if you see all those connections there.', 'start': 20586.596, 'duration': 3.902}, {'end': 20594.64, 'text': "And then we've got parameter numbers here.", 'start': 20590.518, 'duration': 4.122}, {'end': 20598.58, 'text': "now there's a few different things here.", 'start': 20595.498, 'duration': 3.082}, {'end': 20601.462, 'text': "we've got total params, trainable params, non-trainable params.", 'start': 20598.58, 'duration': 2.882}, {'end': 20603.303, 'text': "so let's define what each of these are.", 'start': 20601.462, 'duration': 1.841}, {'end': 20610.948, 'text': 'so the total params is total number, as you might have guessed, of parameters in the model.', 'start': 20603.303, 'duration': 7.645}, {'end': 20613.91, 'text': 'these are the patterns that the model is going to learn.', 'start': 20610.948, 'duration': 2.962}, {'end': 20617.933, 'text': 'so remember when we had a look at our overview of a neural network.', 'start': 20613.91, 'duration': 4.023}, {'end': 20621.295, 'text': 'it creates tensors of different values.', 'start': 20617.933, 'duration': 3.362}, {'end': 20622.096, 'text': 'so patterns,', 'start': 20621.295, 'duration': 0.801}, {'end': 20635.532, 'text': "The total number of parameters are how many different patterns our model is going to try and learn within the relationship between where's our X and Y data?", 'start': 20623.228, 'duration': 12.304}, {'end': 20639.253, 'text': 'Here the relationship between our X and Y data.', 'start': 20635.792, 'duration': 3.461}, {'end': 
20657.549, 'text': 'So we come down, and the trainable parameters, these are the parameters, the patterns, the model can update as it trains.', 'start': 20640.653, 'duration': 16.896}, {'end': 20665.854, 'text': 'So in our case, the total number of parameters here, two, is equal to the trainable parameters.', 'start': 20657.829, 'duration': 8.025}, {'end': 20670.297, 'text': 'That means all the parameters in the model are trainable, so they can be updated.', 'start': 20666.315, 'duration': 3.982}, {'end': 20678.242, 'text': 'You might be wondering when is ever total params different to trainable params and different to non-trainable params?', 'start': 20671.338, 'duration': 6.904}, {'end': 20685.74, 'text': 'Well, When we, later on in the course, when we import a model that has already learned patterns in data,', 'start': 20679.023, 'duration': 6.717}], 'summary': 'Visualizing and building a model using tensorflow and keras to predict output based on input, with a focus on defining input shape and understanding model parameters.', 'duration': 27.394, 'max_score': 20290.441, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs820290441.jpg'}, {'end': 20555.875, 'src': 'embed', 'start': 20486.767, 'weight': 1, 'content': [{'end': 20488.248, 'text': 'So you could pass the input shape as three.', 'start': 20486.767, 'duration': 1.481}, {'end': 20491.95, 'text': 'But for our case, we have one input for one output.', 'start': 20488.428, 'duration': 3.522}, {'end': 20493.09, 'text': "That's what we're after.", 'start': 20492.45, 'duration': 0.64}, {'end': 20497.352, 'text': 'Now, we might go number two is compile the model.', 'start': 20493.991, 'duration': 3.361}, {'end': 20511.092, 'text': 'Model.compile loss equals tf.keras losses mae optimizer equals tf.keras optimizers dot.', 'start': 20498.793, 'duration': 12.299}, {'end': 20518.137, 'text': 'SGD stochastic gradient descent metrics equals mean absolute error.', 
'start': 20511.092, 'duration': 7.045}, {'end': 20522.821, 'text': "So if we run this, just the same model we've created before.", 'start': 20518.838, 'duration': 3.983}, {'end': 20524.362, 'text': 'This is also the same as above.', 'start': 20522.981, 'duration': 1.381}, {'end': 20528.525, 'text': "Now let's check out our model dot summary.", 'start': 20525.262, 'duration': 3.263}, {'end': 20533.728, 'text': "Whoa Okay, we've got a few things going on here.", 'start': 20530.526, 'duration': 3.202}, {'end': 20539.713, 'text': 'So calling dot summary on our model shows us the layers that it contains.', 'start': 20534.129, 'duration': 5.584}, {'end': 20545.491, 'text': 'the output shape and the number of parameters of each layer.', 'start': 20540.629, 'duration': 4.862}, {'end': 20553.234, 'text': 'So the output shape here is remember, we want one input for one output.', 'start': 20546.652, 'duration': 6.582}, {'end': 20555.875, 'text': 'So the output of one, that makes sense.', 'start': 20553.815, 'duration': 2.06}], 'summary': 'Model with one input and one output compiled using sgd optimizer and mean absolute error metrics.', 'duration': 69.108, 'max_score': 20486.767, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs820486767.jpg'}, {'end': 20670.297, 'src': 'embed', 'start': 20640.653, 'weight': 7, 'content': [{'end': 20657.549, 'text': 'So we come down, and the trainable parameters, these are the parameters, the patterns, the model can update as it trains.', 'start': 20640.653, 'duration': 16.896}, {'end': 20665.854, 'text': 'So in our case, the total number of parameters here, two, is equal to the trainable parameters.', 'start': 20657.829, 'duration': 8.025}, {'end': 20670.297, 'text': 'That means all the parameters in the model are trainable, so they can be updated.', 'start': 20666.315, 'duration': 3.982}], 'summary': 'Model has 2 trainable parameters for updating during training.', 'duration': 29.644, 
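The "build the model so summary() works" idea from the segments above can be sketched as follows. The course passes input_shape=[1] to the first Dense layer; the sketch below uses the equivalent, more version-robust tf.keras.Input spelling, and the layer name is illustrative.

```python
import tensorflow as tf

tf.random.set_seed(42)

# Declaring the input shape up front lets Keras build the model immediately:
# one number in, one number out.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1, name="output_layer"),
])

model.compile(loss="mae",
              optimizer=tf.keras.optimizers.SGD(),
              metrics=["mae"])

# Because the input shape is known, summary() works before fit() is called.
model.summary()

# One weight + one bias = 2 parameters, all of them trainable.
total_params = model.count_params()
```

With no frozen layers, total params equals trainable params; non-trainable params only appear later, e.g. with transfer learning.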
'max_score': 20640.653, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs820640653.jpg'}, {'end': 21024.928, 'src': 'embed', 'start': 20990.655, 'weight': 3, 'content': [{'end': 21010.687, 'text': 'exercise. so try playing around with the number of hidden units in the dense layer and then see how that affects the number of parameters total and trainable.', 'start': 20990.655, 'duration': 20.032}, {'end': 21013.868, 'text': 'by calling model.summary.', 'start': 21010.687, 'duration': 3.181}, {'end': 21014.749, 'text': "I'll give you a demo.", 'start': 21013.868, 'duration': 0.881}, {'end': 21024.928, 'text': 'So if we were to change this from one to three, shift and enter and then hit model summary, look what changes three.', 'start': 21016.405, 'duration': 8.523}], 'summary': 'Experiment with different hidden units to observe impact on total and trainable parameters by calling model.summary.', 'duration': 34.273, 'max_score': 20990.655, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs820990655.jpg'}, {'end': 21144.814, 'src': 'embed', 'start': 21098.405, 'weight': 4, 'content': [{'end': 21101.525, 'text': 'Note that the progress bar is not particularly useful when logged to a file.', 'start': 21098.405, 'duration': 3.12}, {'end': 21105.266, 'text': 'So verbose equals 2 is recommended when not running interactively.', 'start': 21101.585, 'duration': 3.681}, {'end': 21108.586, 'text': 'So verbose equals 0.', 'start': 21106.086, 'duration': 2.5}, {'end': 21111.447, 'text': "Oh, what happened there? We didn't even get any outputs.", 'start': 21108.586, 'duration': 2.861}, {'end': 21114.456, 'text': 'because we set verbose to zero.', 'start': 21113.035, 'duration': 1.421}, {'end': 21116.836, 'text': 'If we set it to one, we can watch our model train.', 'start': 21114.636, 'duration': 2.2}, {'end': 21120.858, 'text': "Look at that! 
We'll set it back to zero.", 'start': 21117.997, 'duration': 2.861}, {'end': 21130.161, 'text': "Now remember, because we're running this continually, every time we call fit, it's gonna fit for an extra 100 epochs.", 'start': 21124.119, 'duration': 6.042}, {'end': 21133.722, 'text': "So we've actually just fit our model for 200 total epochs.", 'start': 21130.641, 'duration': 3.081}, {'end': 21138.624, 'text': "Now if we call that again, that's 300 total epochs, because we've called 100 three times.", 'start': 21133.762, 'duration': 4.862}, {'end': 21144.814, 'text': "To reset that, we'd have to go up here, re-instantiate our model, get the summary.", 'start': 21139.104, 'duration': 5.71}], 'summary': 'Setting verbose to 1 allows watching model training, leading to 300 total epochs after 3 calls to fit.', 'duration': 46.409, 'max_score': 21098.405, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs821098405.jpg'}, {'end': 21203.571, 'src': 'embed', 'start': 21173.461, 'weight': 5, 'content': [{'end': 21181.548, 'text': 'Try playing around with the number of hidden units in the dense layer and see how that affects the number of parameters total and trainable by calling model.summary.', 'start': 21173.461, 'duration': 8.087}, {'end': 21183.189, 'text': "So let's try that out.", 'start': 21182.148, 'duration': 1.041}, {'end': 21188.313, 'text': "If we call, or we'll just get a summary of our model.", 'start': 21183.249, 'duration': 5.064}, {'end': 21190.235, 'text': 'So model.summary.', 'start': 21189.074, 'duration': 1.161}, {'end': 21203.571, 'text': "Okay, and notice this number here continually increases because we've created a total of 14 sequential models so far, at least in this Colab instance.", 'start': 21194.429, 'duration': 9.142}], 'summary': 'Experiment with hidden units to observe impact on total parameters, which increased to 14 in this colab instance.', 'duration': 30.11, 'max_score': 21173.461, 
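The verbose settings and the cumulative-epochs behaviour described above can be sketched like this (toy data assumed):

```python
import numpy as np
import tensorflow as tf

# Toy data: y = X + 10.
X = np.arange(-7.0, 15.0, 3.0).reshape(-1, 1)
y = X.squeeze() + 10.0

tf.random.set_seed(42)
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(loss="mae", optimizer="sgd", metrics=["mae"])

# verbose=0 is silent, verbose=1 shows a live progress bar, and verbose=2
# prints one line per epoch (recommended when output is logged to a file).
h1 = model.fit(X, y, epochs=100, verbose=0)

# Calling fit() again does NOT restart training: the learned weights carry
# over, so after this second call the model has trained for 200 total epochs.
h2 = model.fit(X, y, epochs=100, verbose=0)
```

To truly start over, re-instantiate the model (re-run the cell that creates it) before fitting again.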
'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs821173461.jpg'}], 'start': 19947.804, 'title': 'Building a neural network model', 'summary': 'Involves setting up a matplotlib figure to plot training and test data, building a neural network to learn the relationship between x and y, defining input shape, compiling the model with specified loss and optimizer, setting the number of epochs for model training, adjusting verbosity levels, and understanding the impact of changing the number of hidden units on model parameters.', 'chapters': [{'end': 20426.501, 'start': 19947.804, 'title': 'Building a model for data visualization', 'summary': 'Involves setting up a matplotlib figure to plot training and test data, building a neural network to learn the relationship between x and y, and visualizing the model using model.summary and defining the input shape.', 'duration': 478.697, 'highlights': ['Setting up a matplotlib figure to plot training and test data The speaker sets a matplotlib figure to plot training and test data, using a plot size of 10.7 and different colors for differentiation.', 'Building a neural network to learn the relationship between X and Y The chapter discusses building a neural network with one hidden unit and fitting it on the training data for 100 epochs to learn the patterns for predicting the test data.', 'Visualizing the model using model.summary and defining the input shape The speaker attempts to visualize the model using model.summary, encounters an error, and explains the need to build the model first by defining the input shape in the first layer.']}, {'end': 21072.013, 'start': 20428.097, 'title': 'Neural network model basics', 'summary': 'Covers defining input shape for a neural network model, compiling the model with specified loss and optimizer, and understanding the trainable and non-trainable parameters through model summary, with a practical exercise to adjust hidden units and 
observe parameter changes.', 'duration': 643.916, 'highlights': ["The input shape for the model is defined as '1' since we are passing one number to predict one number, and the model is compiled with mean absolute error (MAE) loss and stochastic gradient descent (SGD) optimizer.", 'Calling model.summary provides insights into the layers, output shape, and number of parameters of each layer, where the total parameters signify the number of patterns the model will learn and the trainable parameters are the ones the model can update during training.', 'The concept of trainable and non-trainable parameters is explained, where trainable parameters are patterns the model can update during training, while non-trainable parameters remain fixed, typical in transfer learning scenarios.', 'An exercise is provided to adjust the number of hidden units in the dense layer and observe the impact on total and trainable parameters by calling model.summary, encouraging understanding of learnable patterns in the data and external learning resources are recommended for in-depth understanding of trainable parameters.']}, {'end': 21320.217, 'start': 21072.713, 'title': 'Setting epochs, verbose, and model summary', 'summary': 'Covers setting the number of epochs for model training, adjusting verbosity levels, and analyzing the impact of changing the number of hidden units on model parameters using model.summary, with a recommendation to set verbose to 2 for non-interactive runs and a demonstration of the plot model function.', 'duration': 247.504, 'highlights': ['Setting verbosity to 2 is recommended for non-interactive runs Verbose equals 2 is recommended when not running interactively.', 'Running fit multiple times adds epochs cumulatively Every time fit is called, it adds 100 epochs to the total.', 'Analyzing the impact of changing hidden units on model parameters Changing the number of hidden units in the dense layer affects the total and trainable parameters.']}], 'duration': 
1372.413, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs819947804.jpg', 'highlights': ['Building a neural network with one hidden unit and fitting it on the training data for 100 epochs to learn the patterns for predicting the test data.', "The input shape for the model is defined as '1' since we are passing one number to predict one number, and the model is compiled with mean absolute error (MAE) loss and stochastic gradient descent (SGD) optimizer.", 'Calling model.summary provides insights into the layers, output shape, and number of parameters of each layer, where the total parameters signify the number of patterns the model will learn and the trainable parameters are the ones the model can update during training.', 'An exercise is provided to adjust the number of hidden units in the dense layer and observe the impact on total and trainable parameters by calling model.summary, encouraging understanding of learnable patterns in the data and external learning resources are recommended for in-depth understanding of trainable parameters.', 'Running fit multiple times adds epochs cumulatively Every time fit is called, it adds 100 epochs to the total.', 'Analyzing the impact of changing hidden units on model parameters Changing the number of hidden units in the dense layer affects the total and trainable parameters.', 'Setting verbosity to 2 is recommended for non-interactive runs Verbose equals 2 is recommended when not running interactively.', 'The concept of trainable and non-trainable parameters is explained, where trainable parameters are patterns the model can update during training, while non-trainable parameters remain fixed, typical in transfer learning scenarios.', 'Visualizing the model using model.summary and defining the input shape The speaker attempts to visualize the model using model.summary, encounters an error, and explains the need to build the model first by defining the input shape in the 
first layer.', 'Setting up a matplotlib figure to plot training and test data The speaker sets up a matplotlib figure to plot training and test data, using a figure size of (10, 7) and different colors for differentiation.']}, {'end': 22302.142, 'segs': [{'end': 21445.335, 'src': 'embed', 'start': 21406.591, 'weight': 2, 'content': [{'end': 21411.614, 'text': 'when we start to create more complex models with more hidden layers.', 'start': 21406.591, 'duration': 5.023}, {'end': 21414.757, 'text': "So we see here we've defined the input shape as one.", 'start': 21412.155, 'duration': 2.602}, {'end': 21424.788, 'text': "So that's why we have an input shape as one and our output shape is 10, because we have 10 hidden units in our dense layer.", 'start': 21415.885, 'duration': 8.903}, {'end': 21445.335, 'text': "Now, how might this change if we created tf.keras.layers.dense one, and I'm gonna name it output layer, and this can be name equals, input layer,
That's what we want.', 'start': 21760.425, 'duration': 3.781}, {'end': 21769.148, 'text': 'So note, this is just a Python concept in general too.', 'start': 21764.766, 'duration': 4.382}, {'end': 21785.229, 'text': "If you feel like you're going to reuse some kind of functionality in the future, it's a good idea to turn it into a function.", 'start': 21769.988, 'duration': 15.241}], 'summary': 'Creating a plotting function in python and emphasizing the importance of turning reusable functionality into a function.', 'duration': 38.528, 'max_score': 21746.701, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs821746701.jpg'}, {'end': 22214.629, 'src': 'embed', 'start': 22187.897, 'weight': 1, 'content': [{'end': 22199.165, 'text': "And so, since we're working on a regression problem, two of the main metrics you'll see, now, again, there are plenty, you can look these up.", 'start': 22187.897, 'duration': 11.268}, {'end': 22204.468, 'text': "However, two of the main ones you're gonna run into are MAE, which is mean absolute error.", 'start': 22199.605, 'duration': 4.863}, {'end': 22214.629, 'text': "We've been using this one so far, which is basically saying on average how wrong is each of my model's predictions?", 'start': 22204.748, 'duration': 9.881}], 'summary': 'Two of the main metrics for regression problems are MAE (mean absolute error) and MSE (mean squared error).', 'duration': 26.732, 'max_score': 22187.897, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs822187897.jpg'}], 'start': 21320.277, 'title': 'Troubleshooting keras model and visualizing performance', 'summary': 'covers troubleshooting a keras model error, visualizing the model architecture, defining input and output shapes for a neural network, and creating a plotting function in python to visualize model performance, integrating concepts like scatter plotting and evaluation metrics like mae and mse.', 'chapters': [{'end': 
21374.538, 'start': 21320.277, 'title': 'Troubleshooting keras model and visualization', 'summary': 'The chapter discusses troubleshooting a Keras model error and visualizing the model architecture using tf.keras.utils.plot_model, including setting parameters like to_file and show_shapes.', 'duration': 54.261, 'highlights': ['The chapter discusses troubleshooting a Keras model error and visualizing the model architecture using tf.keras.utils.plot_model, including setting parameters like to_file and show_shapes.', 'The error is due to a missing positional argument, but can be checked and resolved. The process involves converting the Keras model to dot format and saving it to a file.', 'Parameters like to_file and show_shapes can be set to true to save the model as an image and display the shapes of the model, respectively.']}, {'end': 21723.047, 'start': 21375.178, 'title': 'Neural network input and output shapes', 'summary': 'Explains the process of defining input and output shapes for a neural network using a simple model with an input shape of one and an output shape of 10, emphasizing the importance of naming layers and visualizing model predictions.', 'duration': 347.869, 'highlights': ['The model has an input shape of one and an output shape of 10, with 10 hidden units in the dense layer. The input shape of one and the output shape of 10 indicate the configuration of the model, with 10 hidden units in the dense layer.', 'Emphasizes the importance of naming layers for clarity and ease of understanding, especially in complex models with multiple layers. Naming layers, such as input and output layers, is crucial for clarity and understanding, especially in complex models with multiple layers.', 'Stresses the significance of visualizing model predictions and encourages creating different models with varied configurations and visualizing them. 
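The model described in these highlights (an input shape of one, 10 hidden units, a single-unit output layer, compiled with MAE loss and SGD) might be sketched as follows; the layer and model names are illustrative, mirroring the naming convention mentioned in the transcript:

```python
import tensorflow as tf

# A sketch of the model discussed above: one Dense layer with 10 hidden
# units (input shape of 1) feeding a single-unit output layer.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, input_shape=[1], name="input_layer"),
    tf.keras.layers.Dense(1, name="output_layer"),
], name="model_1")

model.compile(loss=tf.keras.losses.mae,
              optimizer=tf.keras.optimizers.SGD(),
              metrics=["mae"])

# Total parameters: (1*10 + 10) + (10*1 + 1) = 31, all trainable
model.summary()
```

Changing the `10` in the first layer changes the total and trainable parameter counts reported by `model.summary()`, which is the exercise suggested above.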
The importance of visualizing model predictions is highlighted, with encouragement to create different models with varied configurations and visualize them for better understanding.']}, {'end': 22302.142, 'start': 21723.887, 'title': 'Visualizing model performance with plotting function', 'summary': "Covers the creation of a plotting function in python to visualize the model's performance, including concepts like function creation, scatter plotting, and evaluation metrics like mae and mse, preparing for the next video on evaluating model performance.", 'duration': 578.255, 'highlights': ["The chapter covers the creation of a plotting function in Python to visualize the model's performance, emphasizing the concept of turning reusable functionality into a function.", "The process involves creating a plotting function 'plot predictions' that takes training data, training labels, test data, test labels, and predictions as inputs, and then visualizes them using scatter plots in blue, green, and red colors, representing training data, test data, and model predictions, respectively.", 'The chapter introduces the concept of evaluation metrics for model performance, specifically highlighting Mean Absolute Error (MAE) and Mean Square Error (MSE) as key metrics for regression problems, and provides a brief overview of the mathematical notation and interpretation of MAE.', "The chapter encourages the audience to play around with improving the model's performance by adjusting parameters like adding layers, changing the optimizer, or extending the training duration, and hints at covering the implementation of evaluation metrics with TensorFlow in the next video."]}], 'duration': 981.865, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs821320277.jpg', 'highlights': ["The chapter covers the creation of a plotting function in Python to visualize the model's performance, emphasizing the concept of turning reusable functionality 
into a function.", 'The chapter introduces the concept of evaluation metrics for model performance, specifically highlighting Mean Absolute Error (MAE) and Mean Square Error (MSE) as key metrics for regression problems, and provides a brief overview of the mathematical notation and interpretation of MAE.', 'The model has an input shape of one and an output shape of 10, with 10 hidden units in the dense layer. The input shape of one and the output shape of 10 indicate the configuration of the model, with 10 hidden units in the dense layer.']}, {'end': 23189.616, 'segs': [{'end': 22385.471, 'src': 'embed', 'start': 22343.613, 'weight': 3, 'content': [{'end': 22352.977, 'text': "And when you see this in combination so this sigma notation here with n in combination with the divide by n sign here we'll see another form of it in a second.", 'start': 22343.613, 'duration': 9.364}, {'end': 22358.146, 'text': 'This is kind of like a fancy way of writing average.', 'start': 22353.925, 'duration': 4.221}, {'end': 22360.267, 'text': 'And this is the absolute error.', 'start': 22358.867, 'duration': 1.4}, {'end': 22362.488, 'text': 'And so we can do this in TensorFlow code.', 'start': 22361.007, 'duration': 1.481}, {'end': 22363.568, 'text': "We're gonna see this in a second.", 'start': 22362.508, 'duration': 1.06}, {'end': 22368.35, 'text': 'And when to use this metric? 
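The reusable 'plot predictions' helper described in these highlights might look like the sketch below; the argument names are assumptions based on the transcript, and the colors follow the blue/green/red convention it mentions:

```python
import matplotlib.pyplot as plt

def plot_predictions(train_data, train_labels,
                     test_data, test_labels, predictions):
    """Plot training data, test data and a model's predictions.

    A sketch of the reusable helper described above; argument names
    are assumptions based on the transcript.
    """
    plt.figure(figsize=(10, 7))
    # Training data in blue, test data in green, predictions in red
    plt.scatter(train_data, train_labels, c="b", label="Training data")
    plt.scatter(test_data, test_labels, c="g", label="Testing data")
    plt.scatter(test_data, predictions, c="r", label="Predictions")
    plt.legend()
```

Wrapping this in a function is exactly the Python tidbit the transcript notes: functionality you expect to reuse is worth turning into a function.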
Well, this is a great starter metric for any regression problem.', 'start': 22364.328, 'duration': 4.022}, {'end': 22373.671, 'text': "It's very easy to understand, because it's just basically saying I like to think of it as on average.", 'start': 22368.57, 'duration': 5.101}, {'end': 22376.072, 'text': "how wrong are our model's predictions?", 'start': 22373.671, 'duration': 2.401}, {'end': 22378.653, 'text': 'So then we go to the mean square error.', 'start': 22376.972, 'duration': 1.681}, {'end': 22385.471, 'text': 'So again, this is a very similar way, this little setup here of writing the exact same thing as you see here.', 'start': 22379.827, 'duration': 5.644}], 'summary': 'Sigma notation and divide by n represent average and absolute error, useful for regression problems.', 'duration': 41.858, 'max_score': 22343.613, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs822343613.jpg'}, {'end': 22513.871, 'src': 'embed', 'start': 22462.003, 'weight': 0, 'content': [{'end': 22465.644, 'text': "We won't dig too far into that, but again, we can write this in TensorFlow code.", 'start': 22462.003, 'duration': 3.641}, {'end': 22474.107, 'text': "But Huber basically takes the combination of MSE and MAE, and it's less sensitive to outliers than MSE.", 'start': 22466.544, 'duration': 7.563}, {'end': 22475.928, 'text': 'So mean squared error here.', 'start': 22474.507, 'duration': 1.421}, {'end': 22480.63, 'text': 'So with that being said, these are some common regression evaluation metrics.', 'start': 22476.868, 'duration': 3.762}, {'end': 22481.93, 'text': 'Take note of this slide.', 'start': 22481.11, 'duration': 0.82}, {'end': 22482.931, 'text': "You'll see these in practice.", 'start': 22481.95, 'duration': 0.981}, {'end': 22483.691, 'text': 'There are many more.', 'start': 22482.951, 'duration': 0.74}, {'end': 22486.672, 'text': "However, let's get hands-on and start writing them with code.", 'start': 
22484.091, 'duration': 2.581}, {'end': 22492.219, 'text': 'come down here all right.', 'start': 22489.298, 'duration': 2.921}, {'end': 22494, 'text': 'so how might we start?', 'start': 22492.219, 'duration': 1.781}, {'end': 22504.545, 'text': 'so, if we wanted to evaluate our model using evaluation metrics, one quick way that we can do that is evaluate the model on the test set.', 'start': 22494, 'duration': 10.545}, {'end': 22505.886, 'text': "we've got our train model here.", 'start': 22504.545, 'duration': 1.341}, {'end': 22513.871, 'text': 'we can go model.evaluate x test, y test, because if we pass the test data set here.', 'start': 22505.886, 'duration': 7.985}], 'summary': 'Huber combines mse and mae, less sensitive to outliers. common regression evaluation metrics. evaluating model on test set using code.', 'duration': 51.868, 'max_score': 22462.003, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs822462003.jpg'}, {'end': 22691.271, 'src': 'embed', 'start': 22657.833, 'weight': 1, 'content': [{'end': 22659.714, 'text': "We'll start exploring the TensorFlow library.", 'start': 22657.833, 'duration': 1.881}, {'end': 22664.876, 'text': 'So tf.metrics.mean absolute error.', 'start': 22659.734, 'duration': 5.142}, {'end': 22665.597, 'text': 'There we go.', 'start': 22665.076, 'duration': 0.521}, {'end': 22667.838, 'text': 'Oh, here we go.', 'start': 22665.617, 'duration': 2.221}, {'end': 22671.139, 'text': 'Computes the mean absolute error between labels and predictions.', 'start': 22668.678, 'duration': 2.461}, {'end': 22672.1, 'text': "Beautiful, that's what we want.", 'start': 22671.179, 'duration': 0.921}, {'end': 22676.108, 'text': "So that's the function that it actually implements.", 'start': 22673.007, 'duration': 3.101}, {'end': 22680.208, 'text': 'So loss equals mean absolute y true minus y pred.', 'start': 22676.368, 'duration': 3.84}, {'end': 22691.271, 'text': 'Beautiful Now how might we use 
this? Okay, loss equals tf.keras.losses.mean absolute error y true minus y pred.', 'start': 22680.469, 'duration': 10.802}], 'summary': 'Introduction to tensorflow library and mean absolute error computation.', 'duration': 33.438, 'max_score': 22657.833, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs822657833.jpg'}], 'start': 22302.802, 'title': 'Regression metrics in tensorflow', 'summary': 'Explains the calculation of absolute error and mean square error in regression problems using tensorflow, emphasizing their importance as starter metrics. it also covers the use of mse and mae as common regression evaluation metrics, providing examples and highlighting their significance when larger errors are more significant than smaller errors.', 'chapters': [{'end': 22416.668, 'start': 22302.802, 'title': 'Understanding regression metrics', 'summary': 'Explains the calculation of absolute error and mean square error in regression problems using tensorflow, providing insights into the formulas and their application, emphasizing their importance as starter metrics.', 'duration': 113.866, 'highlights': ["Absolute error is a great starter metric for any regression problem, helping to understand how wrong the model's predictions are on average. Useful for understanding the average error in model predictions.", 'Mean square error is a similar way of expressing the absolute error, utilizing a different mathematical notation. Comparison of mean square error to absolute error in regression problems.', 'Explanation of sigma notation and its combination with divide by n as a way of expressing average in mathematical terms. 
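As a sketch of the functional metrics mentioned here, with toy numbers (not values from the video):

```python
import tensorflow as tf

y_true = tf.constant([3.0, -0.5, 2.0, 7.0])  # toy ground-truth values
y_pred = tf.constant([2.5, 0.0, 2.0, 8.0])   # toy predictions

# mean(|y_true - y_pred|) and mean((y_true - y_pred)^2)
mae = tf.keras.losses.mean_absolute_error(y_true, y_pred)
mse = tf.keras.losses.mean_squared_error(y_true, y_pred)

# Note: model.predict() returns predictions with shape (n, 1); squeeze
# them (tf.squeeze) before comparing against an (n,)-shaped y_test, or
# broadcasting will silently produce misleading metric values.
```

Here `mae` works out to 0.5 and `mse` to 0.375, matching the "on average, how wrong are our model's predictions?" reading of MAE.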
Insight into the mathematical representation of average using sigma notation.']}, {'end': 23189.616, 'start': 22417.488, 'title': 'Regression evaluation metrics in tensorflow', 'summary': 'Covers the use of mean squared error (mse) and mean absolute error (mae) as common regression evaluation metrics in tensorflow, providing examples and explanations on how to calculate them using tensorflow code and highlighting the significance of these metrics when larger errors are more significant than smaller errors.', 'duration': 772.128, 'highlights': ['The chapter explains the use of mean squared error (MSE) and mean absolute error (MAE) as common regression evaluation metrics in TensorFlow, highlighting their significance when larger errors are more significant than smaller errors.', 'It provides examples and explanations on how to calculate mean squared error (MSE) and mean absolute error (MAE) using TensorFlow code, demonstrating the practical implementation of these evaluation metrics.', 'The chapter emphasizes the importance of evaluating models using test or validation datasets, providing a practical example of evaluating a model on a test set using TensorFlow code and explaining the significance of training on the training dataset and evaluating on the test or validation dataset.', 'It demonstrates the use of TensorFlow functions such as tf.metrics.mean_squared_error and tf.metrics.mean_absolute_error to calculate mean squared error (MSE) and mean absolute error (MAE) respectively, and explains the significance of these metrics in model evaluation.']}], 'duration': 886.814, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs822302802.jpg', 'highlights': ['The chapter emphasizes the importance of evaluating models using test or validation datasets, providing a practical example of evaluating a model on a test set using TensorFlow code and explaining the significance of training on the training dataset and 
evaluating on the test or validation dataset.', 'It demonstrates the use of TensorFlow functions such as tf.metrics.mean_squared_error and tf.metrics.mean_absolute_error to calculate mean squared error (MSE) and mean absolute error (MAE) respectively, and explains the significance of these metrics in model evaluation.', 'The chapter explains the use of mean squared error (MSE) and mean absolute error (MAE) as common regression evaluation metrics in TensorFlow, highlighting their significance when larger errors are more significant than smaller errors.', "Absolute error is a great starter metric for any regression problem, helping to understand how wrong the model's predictions are on average. Useful for understanding the average error in model predictions.", 'Mean square error is a similar way of expressing the absolute error, utilizing a different mathematical notation. Comparison of mean square error to absolute error in regression problems.', 'Explanation of sigma notation and its combination with divide by n as a way of expressing average in mathematical terms. 
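Written out, the sigma notation described above is just a fancy way of writing an average of the (absolute or squared) errors over the n test examples:

```latex
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|,
\qquad
\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2
```

Because MSE squares each error term, larger errors are penalized more heavily than smaller ones, which is why it is preferred when larger errors are more significant.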
Insight into the mathematical representation of average using sigma notation.']}, {'end': 25671.413, 'segs': [{'end': 23269.765, 'src': 'embed', 'start': 23241.57, 'weight': 5, 'content': [{'end': 23257.996, 'text': 'Usually, start by building a model fit it, evaluate it, tweak it, fit it, evaluate it, tweak it again, fit it again and then evaluate it again,', 'start': 23241.57, 'duration': 16.426}, {'end': 23258.756, 'text': 'so on and so on.', 'start': 23257.996, 'duration': 0.76}, {'end': 23269.765, 'text': "So, if we come back to our keynote, if the machine learning explorer's motto is visualize, visualize, visualize.", 'start': 23260.322, 'duration': 9.443}], 'summary': 'Iterative model building and evaluation with emphasis on visualization in machine learning.', 'duration': 28.195, 'max_score': 23241.57, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs823241570.jpg'}, {'end': 23347.752, 'src': 'embed', 'start': 23292.928, 'weight': 0, 'content': [{'end': 23294.549, 'text': "And so that's what we're gonna do.", 'start': 23292.928, 'duration': 1.621}, {'end': 23299.551, 'text': "We're gonna try, run a few, a series of experiments to see if we can improve our model.", 'start': 23294.589, 'duration': 4.962}, {'end': 23301.392, 'text': "Following this workflow, we've already built a model.", 'start': 23299.631, 'duration': 1.761}, {'end': 23302.613, 'text': "We've already fit it.", 'start': 23301.913, 'duration': 0.7}, {'end': 23304.114, 'text': "We've already evaluated it.", 'start': 23302.973, 'duration': 1.141}, {'end': 23305.435, 'text': "Now it's time to tweak it a little.", 'start': 23304.174, 'duration': 1.261}, {'end': 23310.947, 'text': "We'll refit it, we'll evaluate it again, tweak it, fit it, evaluate it.", 'start': 23306.084, 'duration': 4.863}, {'end': 23313.388, 'text': "So let's see what this might look like in practice.", 'start': 23311.547, 'duration': 1.841}, {'end': 23321.992, 'text': "Or 
if we remember back, what are some ways that we can improve our model? The top three are probably, you'll see, get more data.", 'start': 23314.748, 'duration': 7.244}, {'end': 23327.395, 'text': 'So get more examples for your model to train on.', 'start': 23322.773, 'duration': 4.622}, {'end': 23341.029, 'text': 'In other words, more opportunities to learn patterns or relationships between features and labels.', 'start': 23327.415, 'duration': 13.614}, {'end': 23346.212, 'text': 'Number two would be make your model larger.', 'start': 23342.67, 'duration': 3.542}, {'end': 23347.752, 'text': "We've seen this briefly before.", 'start': 23346.372, 'duration': 1.38}], 'summary': 'Conducting a series of experiments to improve the model by tweaking, refitting, and evaluating it, while considering getting more data and making the model larger.', 'duration': 54.824, 'max_score': 23292.928, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs823292928.jpg'}, {'end': 24662.246, 'src': 'embed', 'start': 24634.591, 'weight': 2, 'content': [{'end': 24640.447, 'text': "then we'll have a look at MAE 3 and we'll also have a look at MSE 3.", 'start': 24634.591, 'duration': 5.856}, {'end': 24646.652, 'text': 'Whoa, now this is substantially higher than our other models that we ran.', 'start': 24640.447, 'duration': 6.205}, {'end': 24648.874, 'text': 'I think model two was actually the best one.', 'start': 24646.993, 'duration': 1.881}, {'end': 24658.162, 'text': "Now what we might do is, rather than just scroll back and forth comparing our model's metrics, like we could look at that one yep,", 'start': 24649.875, 'duration': 8.287}, {'end': 24659.103, 'text': 'model two is the best.', 'start': 24658.162, 'duration': 0.941}, {'end': 24662.246, 'text': "Let's put them into a more structured manner.", 'start': 24659.183, 'duration': 3.063}], 'summary': 'Model 2 has the best performance with substantially lower mae and mse compared to 
other models.', 'duration': 27.655, 'max_score': 24634.591, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs824634591.jpg'}, {'end': 24970.988, 'src': 'embed', 'start': 24944.869, 'weight': 1, 'content': [{'end': 24949.933, 'text': 'So one with 10 hidden neurons and the output layer with one hidden neuron.', 'start': 24944.869, 'duration': 5.064}, {'end': 24957.56, 'text': 'And it was fit for, we can come back up here, we can see it up here, model two, 100 epochs.', 'start': 24950.754, 'duration': 6.806}, {'end': 24958.601, 'text': 'All right.', 'start': 24958.301, 'duration': 0.3}, {'end': 24962.544, 'text': 'So, hmm, what can we do with this?', 'start': 24959.582, 'duration': 2.962}, {'end': 24970.988, 'text': "You might be thinking Comparing models is very tedious, and it definitely can be because we've only compared three models here.", 'start': 24963.405, 'duration': 7.583}], 'summary': 'Comparison of three models with 10 hidden neurons and 1 output layer, trained for 100 epochs.', 'duration': 26.119, 'max_score': 24944.869, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs824944869.jpg'}, {'end': 25233.327, 'src': 'embed', 'start': 25205.99, 'weight': 4, 'content': [{'end': 25210.033, 'text': 'So one of my favorite tools built straight in TensorFlow is TensorBoard.', 'start': 25205.99, 'duration': 4.043}, {'end': 25214.315, 'text': 'So this is a component of the TensorFlow library.', 'start': 25210.693, 'duration': 3.622}, {'end': 25220.999, 'text': "And again, I'm just introducing the names of these things now, but later on, we're gonna get hands-on.", 'start': 25214.335, 'duration': 6.664}, {'end': 25230.765, 'text': 'So this is, TensorBoard is a component of the TensorFlow library to help track modeling experiments, a very important part of machine learning.', 'start': 25221.259, 'duration': 9.506}, {'end': 25233.327, 'text': "And we're going to, 
we'll see this one later.", 'start': 25230.965, 'duration': 2.362}], 'summary': 'Tensorboard is a component of tensorflow to track modeling experiments.', 'duration': 27.337, 'max_score': 25205.99, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs825205990.jpg'}], 'start': 23189.616, 'title': 'Improving model performance through experiments', 'summary': 'Covers improving model performance through experiments, including building, fitting, evaluating, and tweaking the model, and discusses the design of three modeling experiments. it also explains the process of building and evaluating a model using tensorflow, creating and training models with different configurations of dense layers, evaluating and comparing models, and introducing tensorboard for tracking experiments and model saving in tensorflow.', 'chapters': [{'end': 23592.386, 'start': 23189.616, 'title': 'Improving model performance with experiments', 'summary': 'Discusses the process of improving model performance through experiments, including the steps of building, fitting, evaluating, and tweaking the model, and the strategies of getting more data, using a larger model, and training for longer. it also outlines the design of three modeling experiments to test different parameters and encourages the audience to create their own experiments for practice.', 'duration': 402.77, 'highlights': ['The process of improving model performance through experiments involves building, fitting, evaluating, and tweaking the model. The chapter discusses the workflow of building a model, fitting it, evaluating it, and tweaking it, as well as the iterative process of fitting, evaluating, and tweaking again to improve model performance.', "Strategies to improve model performance include getting more data, using a larger model, and training for longer. 
The top three strategies to improve model performance are identified as getting more data, using a more complex (larger) model, and training for longer, with the explanation of how each strategy contributes to improving the model's performance.", 'Designing three modeling experiments to test different parameters is outlined, with a focus on changing one parameter for each experiment to observe the impact on model performance. The chapter outlines the design of three modeling experiments to test different parameters, emphasizing the approach of starting with a baseline model and then changing one parameter for each subsequent experiment to observe the impact on model performance.', 'Encouragement to create additional modeling experiments for practice is provided, fostering an interactive and engaging learning approach. The audience is encouraged to create their own modeling experiments for additional practice, demonstrating a focus on interactive and engaging learning to reinforce the understanding of the concepts discussed.']}, {'end': 23924.01, 'start': 23593.647, 'title': 'Building and evaluating a model', 'summary': "Explains the process of building a model using tensorflow, setting up an experiment with 100 epochs, making predictions, visualizing the results, and evaluating the model's performance using mean absolute error and mean squared error.", 'duration': 330.363, 'highlights': ["Setting up an experiment with 100 epochs The chapter sets up an experiment to fit the model with 100 epochs, aiming to evaluate the model's performance over multiple iterations.", "Making predictions and visualizing the results The process involves making predictions on the test data and visualizing the disparities between the predictions and the ground truth labels, indicating the model's performance.", "Evaluating the model's performance using mean absolute error and mean squared error The chapter demonstrates the evaluation of the model's performance using mean absolute error 
and mean squared error, providing quantifiable measures of the model's accuracy."]}, {'end': 24603.458, 'start': 23924.85, 'title': 'Modeling experiment: two dense layers', 'summary': 'Discusses creating and training two models with different configurations of dense layers, highlighting the process of building, compiling, fitting, and evaluating the models, with model two achieving a mean squared error of 608 and model three showing signs of overfitting with a mean squared error of 1570.', 'duration': 678.608, 'highlights': ['Model two achieved a mean squared error of 608 after being trained for 100 epochs, with the red dots in the predictions being closer to the green dots, demonstrating an improvement over the initial model.', 'Model three, trained for 500 epochs, displayed signs of overfitting with a mean squared error of 1570, showcasing the importance of optimizing hyperparameters to prevent overfitting and achieve better generalization to unseen data.', 'The process of creating, compiling, fitting, and evaluating the models was emphasized, providing insights into the iterative nature of model building and optimization in machine learning.']}, {'end': 24922.846, 'start': 24605.498, 'title': 'Model evaluation and comparison', 'summary': 'Discusses evaluating model 3 using mae and mse metrics, finding model 2 to be the best, and planning to compare the results of multiple experiments using pandas data frame in the next video.', 'duration': 317.348, 'highlights': ["The chapter emphasizes the comparison of model 3's MAE and MSE metrics with other models, revealing model 2 as the best performing model.", "The speaker recommends starting with small experiments, gradually increasing complexity, and adhering to the motto of 'Experiment, experiment, experiment' to optimize machine learning processes.", 'The approach to compare the results of multiple experiments is introduced, suggesting the use of pandas data frame to structure and compare model results.']}, {'end': 
25204.907, 'start': 24924.241, 'title': 'Comparing machine learning models', 'summary': 'Discusses comparing the performance of different machine learning models, highlighting the best performing model and emphasizing the importance of tracking and minimizing the time between experiments in machine learning modeling.', 'duration': 280.666, 'highlights': ['Model two with 100 epochs and 10 hidden neurons in one layer, and 1 hidden neuron in the output layer, performed the best among the compared models. Model two with 100 epochs and 10 hidden neurons in one layer, and 1 hidden neuron in the output layer, was identified as the best performer among the compared models, showcasing the significance of its architecture in achieving superior results.', "Emphasizing the importance of minimizing the time between experiments and the value of running multiple experiments to identify what works and what doesn't. The chapter stresses the importance of minimizing the time between experiments and conducting numerous trials to discern effective strategies, underscoring the iterative nature of machine learning and the value of discovering unsuccessful approaches.", 'Advocating for tracking and organizing experiment results using available tools to alleviate the tedium of running numerous experiments. 
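Putting the experiment results "into a more structured manner" with a pandas DataFrame, as suggested above, might look like this; the MAE/MSE numbers are placeholders that illustrate the pattern (model two performing best), not values from the video:

```python
import pandas as pd

# Placeholder metrics illustrating the comparison pattern; substitute
# the MAE/MSE values from your own runs of model_1, model_2, model_3.
model_results = [["model_1", 18.7, 353.6],
                 ["model_2", 3.2, 20.0],
                 ["model_3", 68.7, 4804.5]]

all_results = pd.DataFrame(model_results,
                           columns=["model", "mae", "mse"])
print(all_results.sort_values("mae"))
```

This avoids scrolling back and forth between cells to compare metrics, though for many experiments a dedicated tracker (TensorBoard, Weights & Biases) scales better.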
The discussion advocates for the use of tools to track and organize experiment results, acknowledging the potential tedium of conducting numerous experiments and highlighting the availability of resources to assist in this process.']}, {'end': 25671.413, 'start': 25205.99, 'title': 'Tensorboard and model saving in tensorflow', 'summary': 'Introduces tensorboard, a component of tensorflow for tracking modeling experiments, and explores the process of saving a model in tensorflow, with emphasis on the saved model format and the hdf5 format.', 'duration': 465.423, 'highlights': ['The chapter introduces TensorBoard, a component of TensorFlow for tracking modeling experiments TensorBoard is a component of the TensorFlow library used to track modeling experiments, a crucial part of machine learning.', 'The process of saving a model in TensorFlow is explored, focusing on the saved model format and the HDF5 format The chapter delves into the process of saving a model in TensorFlow, highlighting the saved model format as the default option and the HDF5 format as an alternative.', 'Introduction of Weights and Biases, an external tool for tracking machine learning experiments Weights and Biases is introduced as a tool for tracking machine learning experiments, which can be integrated with TensorBoard.']}], 'duration': 2481.797, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs823189616.jpg', 'highlights': ['Strategies to improve model performance include getting more data, using a larger model, and training for longer.', 'Model two with 100 epochs and 10 hidden neurons in one layer, and 1 hidden neuron in the output layer, performed the best among the compared models.', "The chapter emphasizes the comparison of model 3's MAE and MSE metrics with other models, revealing model 2 as the best performing model.", 'The process of improving model performance through experiments involves building, fitting, evaluating, and 
tweaking the model.', 'The chapter introduces TensorBoard, a component of TensorFlow for tracking modeling experiments.', 'The process of creating, compiling, fitting, and evaluating the models was emphasized, providing insights into the iterative nature of model building and optimization in machine learning.']}, {'end': 27604.933, 'segs': [{'end': 25777.758, 'src': 'embed', 'start': 25748.572, 'weight': 0, 'content': [{'end': 25755.451, 'text': 'So something like HDF5, so something that you can pass around to many other different programming applications.', 'start': 25748.572, 'duration': 6.879}, {'end': 25760.432, 'text': 'And TensorFlow allows us to save our models directly to .', 'start': 25756.731, 'duration': 3.701}, {'end': 25766.094, 'text': 'h5 by adding the h5 extension onto the end of our file path.', 'start': 25760.432, 'duration': 5.662}, {'end': 25767.075, 'text': "So let's have a look at that.", 'start': 25766.254, 'duration': 0.821}, {'end': 25771.656, 'text': "We're just gonna run the exact same code up here except one difference.", 'start': 25768.095, 'duration': 3.561}, {'end': 25777.758, 'text': 'Save model using the HDF5 format.', 'start': 25773.276, 'duration': 4.482}], 'summary': 'Tensorflow enables saving models in hdf5 format for interoperability.', 'duration': 29.186, 'max_score': 25748.572, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs825748572.jpg'}, {'end': 25872.72, 'src': 'embed', 'start': 25849.073, 'weight': 1, 'content': [{'end': 25858.856, 'text': 'I mentioned before that a way that we can check to see if our models have saved correctly is by loading them in and testing them out again.', 'start': 25849.073, 'duration': 9.783}, {'end': 25861.257, 'text': "so much like we've evaluated our model 2.", 'start': 25858.856, 'duration': 2.401}, {'end': 25866.838, 'text': "I'll just close this to get these evaluation metrics here.", 'start': 25861.257, 'duration': 5.581}, {'end': 
25872.72, 'text': "if we load our model back in, if the documentation is correct saying that it's saved all of its weights and optimizes state,", 'start': 25866.838, 'duration': 5.882}], 'summary': 'Testing model saves by loading and evaluating for accuracy.', 'duration': 23.647, 'max_score': 25849.073, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs825849073.jpg'}, {'end': 26214.633, 'src': 'embed', 'start': 26178.928, 'weight': 2, 'content': [{'end': 26197.356, 'text': 'And then we want to compare these two to mae ytrue equals ytest and ypred equals loaded saved model format preds.', 'start': 26178.928, 'duration': 18.428}, {'end': 26200.578, 'text': 'So they should have around about the same error.', 'start': 26198.817, 'duration': 1.761}, {'end': 26205.069, 'text': 'How does this go? True, ah, beautiful.', 'start': 26200.918, 'duration': 4.151}, {'end': 26207.45, 'text': "So I'm not sure why their predictions are different.", 'start': 26205.549, 'duration': 1.901}, {'end': 26211.551, 'text': "You know why it might be? 
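Editor's note: the transcript here saves a trained model with `model.save(...)` (adding the `.h5` extension switches to the HDF5 format) and verifies the save by loading it back with `tf.keras.models.load_model` and re-evaluating. A minimal sketch of that round trip, using a tiny stand-in model and synthetic data rather than the course's `model_2`:

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

# Tiny stand-in model and data (not the course's model_2), just to show the round trip.
X = np.linspace(0, 1, 100, dtype=np.float32).reshape(-1, 1)
y = X * 2 + 1

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(10),
    tf.keras.layers.Dense(1),
])
model.compile(loss="mae", optimizer="sgd", metrics=["mae"])
model.fit(X, y, epochs=5, verbose=0)

# Adding the .h5 extension to the file path saves in the HDF5 format.
save_path = os.path.join(tempfile.mkdtemp(), "best_model.h5")
model.save(save_path)

# Load it back in and check the predictions survived the round trip --
# this is what re-evaluating the loaded model is really verifying.
loaded = tf.keras.models.load_model(save_path)
original_preds = model.predict(X, verbose=0)
loaded_preds = loaded.predict(X, verbose=0)
print(np.allclose(original_preds, loaded_preds))  # expect True
```

The weights are stored exactly, so the loaded model should reproduce the original predictions.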
It's because of how a computer stores numbers.", 'start': 26207.89, 'duration': 3.661}, {'end': 26214.633, 'text': 'So if we go model two preds, have a look at this.', 'start': 26212.312, 'duration': 2.321}], 'summary': 'Comparing ytrue and ypred for model evaluation and addressing potential differences due to computer number storage.', 'duration': 35.705, 'max_score': 26178.928, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs826178928.jpg'}, {'end': 26398.277, 'src': 'embed', 'start': 26372.376, 'weight': 3, 'content': [{'end': 26377.198, 'text': "Now we're gonna check it to make sure its predictions are the same as model2.", 'start': 26372.376, 'duration': 4.822}, {'end': 26387.962, 'text': 'So if we go here, check to see if loaded.h5 model predictions match model2.', 'start': 26377.238, 'duration': 10.724}, {'end': 26394.215, 'text': "So we can go model2, we've already got that variable, but we'll just have some practice.", 'start': 26389.923, 'duration': 4.292}, {'end': 26398.277, 'text': 'writing predict on X tests,', 'start': 26394.215, 'duration': 4.062}], 'summary': 'Comparing loaded.h5 model predictions with model2 using x tests.', 'duration': 25.901, 'max_score': 26372.376, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs826372376.jpg'}, {'end': 26865.967, 'src': 'embed', 'start': 26837.929, 'weight': 4, 'content': [{'end': 26842.473, 'text': "h5 and a little spoiler alert for what's to come, but I'm not going to talk about that for now.", 'start': 26837.929, 'duration': 4.544}, {'end': 26844.955, 'text': "we're going to see that in upcoming videos.", 'start': 26842.473, 'duration': 2.482}, {'end': 26846.676, 'text': "so there's three ways there.", 'start': 26844.955, 'duration': 1.721}, {'end': 26852.42, 'text': 'if you wanted to download a model or any other file from Google Colab, you can right click and click download.', 'start': 26846.676, 
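Editor's note: the comparison described here (mean absolute error of the original model's predictions versus the reloaded model's) can be sketched with NumPy alone. The array values below are invented for illustration; the point is that two sets of predictions can differ in the last few decimal places purely because of floating-point storage, while their MAE against the test labels still matches to within tolerance:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: the average size of the prediction errors."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

y_test = np.array([10.0, 20.0, 30.0])
model_2_preds = np.array([11.0, 19.0, 31.0])
# A reloaded model's outputs can differ by a hair purely from float32
# rounding ("how a computer stores numbers"), not because weights changed.
loaded_preds = model_2_preds + 1e-6

print(mae(y_test, model_2_preds))  # 1.0
# The two error values agree to within float tolerance:
print(np.isclose(mae(y_test, model_2_preds), mae(y_test, loaded_preds)))  # True
```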
'duration': 5.744}, {'end': 26859.085, 'text': 'you can download it with code or, if you wanted to save it to your Google Drive and access it later,', 'start': 26852.42, 'duration': 6.665}, {'end': 26865.967, 'text': 'you can use the copy method to save it across there in your your target folder Alrighty.', 'start': 26859.085, 'duration': 6.882}], 'summary': 'Google colab offers three ways to download files, including right-clicking to download or saving to google drive.', 'duration': 28.038, 'max_score': 26837.929, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs826837929.jpg'}, {'end': 27329.591, 'src': 'heatmap', 'start': 26954.507, 'weight': 0.882, 'content': [{'end': 26961.392, 'text': 'find different data sets see example notebooks and also learn more about data science and machine learning in general.', 'start': 26954.507, 'duration': 6.885}, {'end': 26967.785, 'text': "you're going to become very familiar with Kaggle over your data science and machine learning explorations.", 'start': 26962.363, 'duration': 5.422}, {'end': 26976.048, 'text': "So we're not going to go through it for now, but just know that if we want example data sets, Kaggle is probably one of the best places to find them.", 'start': 26968.425, 'duration': 7.623}, {'end': 26983.39, 'text': 'Now, if we have a look at this, so medical cost personal data sets, insurance forecast by using linear regression.', 'start': 26977.148, 'duration': 6.242}, {'end': 26984.771, 'text': "Let's have a read of what's going on.", 'start': 26983.41, 'duration': 1.361}, {'end': 26986.907, 'text': 'So context.', 'start': 26986.226, 'duration': 0.681}, {'end': 26988.49, 'text': 'So machine learning with R.', 'start': 26987.208, 'duration': 1.282}, {'end': 26994.899, 'text': 'So R is another programming language that can be used for numerical computing, just like Python, by Brett Lance.', 'start': 26988.49, 'duration': 6.409}, {'end': 26998.665, 'text': "It's 
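Editor's note: of the three Colab download options described here (right-click → Download, `files.download` from `google.colab`, and copying into a mounted Google Drive folder), the Drive option is ultimately just a file copy. Since `google.colab` and `/content/drive/MyDrive/...` only exist inside Colab, this sketch mirrors the copy step with the standard library's `shutil` and made-up temporary paths:

```python
import shutil
import tempfile
from pathlib import Path

# Stand-ins for a saved model file and a mounted Drive target folder.
workdir = Path(tempfile.mkdtemp())
model_file = workdir / "best_model.h5"
model_file.write_bytes(b"fake model weights")  # placeholder contents

target_folder = workdir / "MyDrive" / "models"  # hypothetical Drive path
target_folder.mkdir(parents=True)

# Equivalent of running `!cp best_model.h5 /content/drive/MyDrive/models/` in Colab.
copied = shutil.copy(model_file, target_folder)
print(Path(copied).exists())  # True
```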
a book that provides an introduction to machine learning using R.", 'start': 26995.5, 'duration': 3.165}, {'end': 27008.972, 'text': "OK?. So the long story short here is that this is a data set that's available online and we've got a bunch of columns here age, sex, BMI,", 'start': 26998.665, 'duration': 10.307}, {'end': 27011.654, 'text': 'children smoker region charges.', 'start': 27008.972, 'duration': 2.682}, {'end': 27025.827, 'text': "Now what we're trying to do here is use these columns so age through to region to predict what someone's individual medical costs billed by health insurance will be.", 'start': 27012.355, 'duration': 13.472}, {'end': 27029.491, 'text': "So we're using these features here to predict a number.", 'start': 27026.568, 'duration': 2.923}, {'end': 27032.594, 'text': "So it's a regression problem.", 'start': 27030.351, 'duration': 2.243}, {'end': 27038.72, 'text': "I believe this example has used linear regression, but we're going to build a neural network regression model.", 'start': 27033.154, 'duration': 5.566}, {'end': 27042.564, 'text': 'So the dataset is publicly available on GitHub here.', 'start': 27039.861, 'duration': 2.703}, {'end': 27046.461, 'text': "Wonderful So there's a fair few data sets here.", 'start': 27043.224, 'duration': 3.237}, {'end': 27050.745, 'text': "We're looking specifically at insurance.csv.", 'start': 27046.762, 'duration': 3.983}, {'end': 27057.311, 'text': "Now I'll show you a little trick that we can use to import this straight from GitHub to our Google Colab notebook.", 'start': 27051.386, 'duration': 5.925}, {'end': 27062.755, 'text': 'If we go raw, there we go, raw.github user content.', 'start': 27057.371, 'duration': 5.384}, {'end': 27066.659, 'text': "Now, if you can't find this link, I'll put in the resources section for you.", 'start': 27063.116, 'duration': 3.543}, {'end': 27071.854, 'text': "We're gonna copy this link, come back to our notebook, I'm going to get out of this.", 'start': 
27066.679, 'duration': 5.175}, {'end': 27078.897, 'text': "And the first thing we're going to do is actually import the required libraries for this larger example.", 'start': 27073.014, 'duration': 5.883}, {'end': 27083.1, 'text': "We've already imported these, but it's good practice just if we were starting from scratch.", 'start': 27079.458, 'duration': 3.642}, {'end': 27087.582, 'text': 'Import TensorFlow as tf, import pandas as pd.', 'start': 27083.98, 'duration': 3.602}, {'end': 27092.225, 'text': "Wonderful And we're going to also need matplotlib in case we want to do some plotting.", 'start': 27088.063, 'duration': 4.162}, {'end': 27095.127, 'text': 'Those libraries are pretty standard.', 'start': 27093.866, 'duration': 1.261}, {'end': 27102.933, 'text': "So now, because we've got the insurance data set copied to our clipboard, read in the insurance data set.", 'start': 27095.487, 'duration': 7.446}, {'end': 27103.973, 'text': "Let's see what it looks like.", 'start': 27102.953, 'duration': 1.02}, {'end': 27109.195, 'text': "We'll call it insurance equals PD dot read CSV.", 'start': 27104.554, 'duration': 4.641}, {'end': 27118.557, 'text': 'And then the beautiful thing is we can just paste a link in here and the read CSV function will read it directly from here, all of these values.', 'start': 27110.495, 'duration': 8.062}, {'end': 27123.338, 'text': "So there's a columns, age, sex, BMI, children, smoker, region, charges.", 'start': 27119.637, 'duration': 3.701}, {'end': 27128.188, 'text': "we can import directly, let's have a look, insurance.", 'start': 27125.404, 'duration': 2.784}, {'end': 27133.055, 'text': 'All right, so 1, 338 rows times seven columns.', 'start': 27131.233, 'duration': 1.822}, {'end': 27133.817, 'text': "Yes, now we're talking.", 'start': 27133.075, 'duration': 0.742}, {'end': 27141.862, 'text': "This is a little bit more complex than the problem we've been working on so far.", 'start': 27138.7, 'duration': 3.162}, {'end': 27147.905, 
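Editor's note: the trick shown here is that `pd.read_csv` accepts the raw.githubusercontent.com URL just as readily as a local path. To keep this sketch self-contained (no network call), it reads the same column layout from an in-memory string instead; the three rows are invented for illustration:

```python
import io

import pandas as pd

# Same columns as the insurance dataset; the rows are made up.
csv_data = """age,sex,bmi,children,smoker,region,charges
19,female,27.9,0,yes,southwest,16884.924
18,male,33.77,1,no,southeast,1725.5523
28,male,33.0,3,no,southeast,4449.462
"""

# With the real dataset you would pass the raw GitHub URL here instead of StringIO.
insurance = pd.read_csv(io.StringIO(csv_data))
print(insurance.shape)             # (3, 7)
print(insurance["age"].dtype)      # int64  -> already numeric
print(insurance["smoker"].dtype)   # object -> needs encoding before modelling
```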
'text': 'So what do we have here? We have age, sex, BMI, children, smoker, region, and then charges.', 'start': 27142.762, 'duration': 5.143}, {'end': 27148.665, 'text': 'This is an amount.', 'start': 27147.925, 'duration': 0.74}, {'end': 27153.008, 'text': "So this is how much someone's medical bills were.", 'start': 27149.146, 'duration': 3.862}, {'end': 27161.292, 'text': 'medical insurance was based on their age, sex, BMI, number of children, are they a smoker, yes or no?', 'start': 27153.008, 'duration': 8.284}, {'end': 27163.113, 'text': 'and whereabouts do they live?', 'start': 27161.292, 'duration': 1.821}, {'end': 27165.194, 'text': 'southwest, northwest, et cetera?', 'start': 27163.113, 'duration': 2.081}, {'end': 27165.735, 'text': "I'm not sure.", 'start': 27165.214, 'duration': 0.521}, {'end': 27168.096, 'text': 'Is it from a city? Anyway.', 'start': 27166.275, 'duration': 1.821}, {'end': 27171.373, 'text': "Doesn't matter, you can do some research.", 'start': 27169.653, 'duration': 1.72}, {'end': 27174.014, 'text': "I'll put the links to where you can find this dataset.", 'start': 27171.553, 'duration': 2.461}, {'end': 27182.656, 'text': "Essentially, what we're focused on here is writing TensorFlow code to take in these features, learn the relationships between them, or more so,", 'start': 27174.654, 'duration': 8.002}, {'end': 27187.677, 'text': 'the relationships between these features and this target variable here charges.', 'start': 27182.656, 'duration': 5.021}, {'end': 27191.117, 'text': 'So what is a regression problem?', 'start': 27188.257, 'duration': 2.86}, {'end': 27193.698, 'text': 'Relating it back to our Wikipedia definition.', 'start': 27191.557, 'duration': 2.141}, {'end': 27202.735, 'text': 'In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable,', 'start': 27194.813, 'duration': 7.922}, {'end': 27209.256, 'text': 'often called the outcome variable, 
and one or more independent variables, often called predictors, covariates or features.', 'start': 27202.735, 'duration': 6.521}, {'end': 27217.518, 'text': "So in our case, what is our dependent variable? Our dependent variable is the charges, because this is what we're trying to predict.", 'start': 27209.296, 'duration': 8.222}, {'end': 27223.032, 'text': 'And what are our independent variables? In other words, known as, I like the term features.', 'start': 27218.408, 'duration': 4.624}, {'end': 27224.633, 'text': "So that's what you'll hear me use a lot.", 'start': 27223.372, 'duration': 1.261}, {'end': 27229.036, 'text': 'Our independent variables are these columns here.', 'start': 27225.494, 'duration': 3.542}, {'end': 27232.819, 'text': 'Age, sex, BMI, children, smoker, region.', 'start': 27229.757, 'duration': 3.062}, {'end': 27235.781, 'text': 'So what do we have to do?', 'start': 27233.6, 'duration': 2.181}, {'end': 27242.406, 'text': 'What is our first step in getting our data ready to pass into our machine?', 'start': 27236.622, 'duration': 5.784}, {'end': 27243.407, 'text': 'or neural network models?', 'start': 27242.406, 'duration': 1.001}, {'end': 27246.53, 'text': 'Can we just start building a model? 
Model.', 'start': 27244.288, 'duration': 2.242}, {'end': 27249.413, 'text': 'equals TF Keras sequential.', 'start': 27247.452, 'duration': 1.961}, {'end': 27256.358, 'text': 'I mean we could, we could keep that going, but what do our machine learning models like??', 'start': 27249.914, 'duration': 6.444}, {'end': 27259.219, 'text': 'Can we pass in this column sex??', 'start': 27256.618, 'duration': 2.601}, {'end': 27263.722, 'text': "If it reads female or male, what's it going to do??", 'start': 27259.94, 'duration': 3.782}, {'end': 27269.926, 'text': 'So if I go insurance, what type is this column??', 'start': 27263.742, 'duration': 6.184}, {'end': 27274.389, 'text': 'Oh, no, insurance.', 'start': 27270.246, 'duration': 4.143}, {'end': 27275.469, 'text': 'data type object.', 'start': 27274.389, 'duration': 1.08}, {'end': 27283.329, 'text': 'Hmm What about smoker? Data type object.', 'start': 27276.11, 'duration': 7.219}, {'end': 27292.256, 'text': "What's the difference between smoker and insurance age? 
Int 64.", 'start': 27283.569, 'duration': 8.687}, {'end': 27301.863, 'text': "Ah So we have some columns here that are numerical and some columns that aren't numerical.", 'start': 27292.256, 'duration': 9.607}, {'end': 27310.638, 'text': 'Do you remember what we have to do to non-numerical columns before we can pass them to a deep neural network or a machine learning model.', 'start': 27302.963, 'duration': 7.675}, {'end': 27312.779, 'text': 'We have to turn them into numbers right?', 'start': 27311.258, 'duration': 1.521}, {'end': 27321.766, 'text': "If we come back to our regression inputs and outputs, say we're trying to predict the price of homes, the sale price of homes,", 'start': 27312.799, 'duration': 8.967}, {'end': 27329.591, 'text': 'based on the features here bedrooms, bathrooms, garages before we can pass it to our machine learning algorithms,', 'start': 27321.766, 'duration': 7.825}], 'summary': 'Explore data science and machine learning with kaggle, r, and tensorflow to build regression models for predicting medical insurance costs.', 'duration': 375.084, 'max_score': 26954.507, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs826954507.jpg'}, {'end': 27202.735, 'src': 'embed', 'start': 27174.654, 'weight': 5, 'content': [{'end': 27182.656, 'text': "Essentially, what we're focused on here is writing TensorFlow code to take in these features, learn the relationships between them, or more so,", 'start': 27174.654, 'duration': 8.002}, {'end': 27187.677, 'text': 'the relationships between these features and this target variable here charges.', 'start': 27182.656, 'duration': 5.021}, {'end': 27191.117, 'text': 'So what is a regression problem?', 'start': 27188.257, 'duration': 2.86}, {'end': 27193.698, 'text': 'Relating it back to our Wikipedia definition.', 'start': 27191.557, 'duration': 2.141}, {'end': 27202.735, 'text': 'In statistical modeling, regression analysis is a set of statistical 
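Editor's note: the point the transcript is building to is that `age` is already `int64` but object columns like `sex` and `smoker` have to become numbers before they can be passed to a neural network. A minimal `pd.get_dummies` sketch on an invented mini-slice of the insurance columns:

```python
import pandas as pd

# Invented mini-slice of the insurance data: two numeric columns, two object columns.
insurance = pd.DataFrame({
    "age": [19, 18, 28],
    "sex": ["female", "male", "male"],
    "smoker": ["yes", "no", "no"],
    "charges": [16884.92, 1725.55, 4449.46],
})

# get_dummies one-hot encodes every non-numeric column and leaves numeric ones as-is.
insurance_one_hot = pd.get_dummies(insurance)
print(list(insurance_one_hot.columns))
# numeric columns survive; 'sex' and 'smoker' expand into
# sex_female, sex_male, smoker_no, smoker_yes indicator columns
```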
processes for estimating the relationships between a dependent variable,', 'start': 27194.813, 'duration': 7.922}], 'summary': 'Writing tensorflow code to learn relationships between features and target variable charges.', 'duration': 28.081, 'max_score': 27174.654, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs827174654.jpg'}], 'start': 25672.394, 'title': 'Tensorflow model management and analysis', 'summary': 'Covers saving tensorflow models in .pb and .h5 formats, loading and evaluating saved models, model comparison and error analysis, checking model loading and predictions, and downloading files from google colab. it also discusses regression analysis with tensorflow, highlighting the practical implementation of each method.', 'chapters': [{'end': 25849.073, 'start': 25672.394, 'title': 'Tensorflow model saving formats', 'summary': 'Explains the process of saving tensorflow models in both .pb (protobuf) and .h5 (hdf5) formats, highlighting the differences and use cases for each, and the benefits of using hdf5 for storing and passing large models to different programming applications.', 'duration': 176.679, 'highlights': ['TensorFlow allows saving models in .pb (protobuf) and .h5 (HDF5) formats, with .pb being the default format for majority use cases. The chapter emphasizes that TensorFlow allows saving models in .pb (protobuf) and .h5 (HDF5) formats, with .pb being the default format for majority use cases.', 'HDF5 provides a basic save format using the HDF5 standard, designed to store and organize large amounts of data. HDF5 provides a basic save format using the HDF5 standard, which is designed to store and organize large amounts of data.', 'The chapter suggests using .pb format if staying within the TensorFlow environment, and .h5 format for using models outside of pure TensorFlow code. 
The chapter suggests using .pb format if staying within the TensorFlow environment, and .h5 format for using models outside of pure TensorFlow code.']}, {'end': 26015.453, 'start': 25849.073, 'title': 'Loading and evaluating saved models', 'summary': 'Discusses the process of loading a saved model in tensorflow, including evaluating the model to ensure it has been saved correctly, and demonstrates the usage of tf.keras.models.loadmodel to load both saved model format and hdf5 format models.', 'duration': 166.38, 'highlights': ['The chapter discusses the process of loading a saved model in TensorFlow The chapter focuses on the process of loading a saved model in TensorFlow to ensure it has been saved correctly.', 'Demonstrates the usage of tf.keras.models.loadmodel to load both saved model format and HDF5 format models The transcript demonstrates the usage of tf.keras.models.loadmodel to load both saved model format and HDF5 format models, providing a comprehensive understanding of the loading process.', 'Process of evaluating the model to ensure it has been saved correctly The transcript emphasizes the importance of evaluating the loaded model to ensure that it has been saved correctly, ensuring the reliability of the saved model.']}, {'end': 26290.12, 'start': 26016.514, 'title': 'Model comparison and error analysis', 'summary': "Explores comparing two models' predictions and error analysis, finding that the models have similar mean absolute error but different predictions due to a simple typo in the variable names.", 'duration': 273.606, 'highlights': ["The chapter explores comparing two models' predictions and error analysis, finding that the models have similar mean absolute error but different predictions due to a simple typo in the variable names.", 'The models are confirmed to have the same architecture with two dense layers having 10 and 1 hidden units, and further validation is conducted by comparing their predictions and mean absolute error.', 'The 
chapter discusses the discrepancy in predictions between the two models, attributing it to a simple typo in the variable names, showcasing the process of troubleshooting and debugging in real-time coding scenarios.', 'The author emphasizes the experiential nature of the content, aiming to simulate a collaborative coding environment and address challenges faced in programming, such as variable name similarities leading to errors.']}, {'end': 26476.768, 'start': 26290.22, 'title': 'Checking model loading and predictions', 'summary': 'Details the process of loading a model using the .h5 format and verifying if its predictions match the original model, emphasizing the importance of testing and the potential for mistakes during the learning process.', 'duration': 186.548, 'highlights': ['The process of loading a model using the .h5 format and verifying its predictions is detailed, showcasing the practical application of the learned concepts.', 'Emphasizing the potential for mistakes during the learning process, the instructor highlights the importance of testing and learning from errors to the audience.', 'The instructor demonstrates the comparison of predictions between the loaded .h5 model and the original model, reinforcing the significance of thorough testing and verification of model outputs.']}, {'end': 27174.014, 'start': 26476.768, 'title': 'Downloading files from google colab', 'summary': 'Discusses three ways to download files from google colab including right-clicking and using code, as well as saving to google drive, with a focus on the process of downloading and saving a trained model in hdf5 format, highlighting the practical implementation of each method.', 'duration': 697.246, 'highlights': ['The chapter discusses three ways to download files from Google Colab including right-clicking and using code, as well as saving to Google Drive, with a focus on the process of downloading and saving a trained model in HDF5 format, highlighting the practical 
implementation of each method.', 'The first method to download files from Google Colab involves right-clicking on the file in the files tab and clicking download, providing a simple and easy way to obtain the desired file, where the download time depends on the file size.', 'The second method entails using code to download a file from Google Colab by importing from google.colab import files and using files.download, allowing for a programmatic approach to file retrieval, with the ability to download files directly to the local machine.', 'The third method involves saving the file to Google Drive by mounting the drive and using the copy method, providing a convenient way to store files in the cloud for later access and allowing for easy organization and management of files within Google Drive.']}, {'end': 27604.933, 'start': 27174.654, 'title': 'Regression analysis with tensorflow', 'summary': 'Discusses using tensorflow to perform regression analysis, preparing non-numerical columns for machine learning models by employing one hot encoding, and using pandas get dummies function to convert categorical variables into numerical variables.', 'duration': 430.279, 'highlights': ["The chapter discusses using TensorFlow to perform regression analysis The speaker focuses on writing TensorFlow code to learn the relationships between the features and the target variable 'charges', emphasizing the importance of regression analysis in estimating the relationships between dependent and independent variables.", 'Preparing non-numerical columns for machine learning models by employing one hot encoding The speaker explains the need to convert non-numerical columns into numerical ones using one hot encoding, providing a detailed example and discussing the use of pandas get dummies function for this purpose.', 'Using pandas get dummies function to convert categorical variables into numerical variables The speaker demonstrates the use of pandas get dummies function to convert 
categorical variables into numerical variables, providing a step-by-step example and highlighting the practical application of this approach in data preparation for machine learning models.']}], 'duration': 1932.539, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs825672394.jpg', 'highlights': ['TensorFlow allows saving models in .pb (protobuf) and .h5 (HDF5) formats, with .pb being the default format for majority use cases.', 'The chapter emphasizes the importance of evaluating the loaded model to ensure that it has been saved correctly, ensuring the reliability of the saved model.', "The chapter explores comparing two models' predictions and error analysis, finding that the models have similar mean absolute error but different predictions due to a simple typo in the variable names.", 'The instructor demonstrates the comparison of predictions between the loaded .h5 model and the original model, reinforcing the significance of thorough testing and verification of model outputs.', 'The chapter discusses three ways to download files from Google Colab including right-clicking and using code, as well as saving to Google Drive, with a focus on the process of downloading and saving a trained model in HDF5 format, highlighting the practical implementation of each method.', "The speaker focuses on writing TensorFlow code to learn the relationships between the features and the target variable 'charges', emphasizing the importance of regression analysis in estimating the relationships between dependent and independent variables."]}, {'end': 31255.389, 'segs': [{'end': 27885.209, 'src': 'embed', 'start': 27860.202, 'weight': 3, 'content': [{'end': 27865.849, 'text': 'Remember, creating a training and test set, probably one of the most important things in machine learning.', 'start': 27860.202, 'duration': 5.647}, {'end': 27869.253, 'text': "That's why this function is so beautiful.", 'start': 27866.73, 'duration': 
2.523}, {'end': 27873.498, 'text': 'So split arrays or matrices into random train and test subsets.', 'start': 27869.814, 'duration': 3.684}, {'end': 27875.4, 'text': "Wonderful, that's exactly what we want.", 'start': 27873.939, 'duration': 1.461}, {'end': 27877.363, 'text': "Oh no, we don't need to copy that.", 'start': 27876.041, 'duration': 1.322}, {'end': 27883.367, 'text': 'we can just import sklearn.modelSelection trainTestSplit.', 'start': 27878.243, 'duration': 5.124}, {'end': 27885.209, 'text': "Let's see how we can use this.", 'start': 27884.168, 'duration': 1.041}], 'summary': 'Creating a training and test set is crucial in machine learning. sklearn.modelselection traintestsplit function helps split arrays or matrices into random train and test subsets.', 'duration': 25.007, 'max_score': 27860.202, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs827860202.jpg'}, {'end': 28402.468, 'src': 'embed', 'start': 28368.196, 'weight': 4, 'content': [{'end': 28373.898, 'text': 'So now what might we want to do? What is this error telling us actually? Mean absolute error.', 'start': 28368.196, 'duration': 5.702}, {'end': 28379.94, 'text': 'It means that on average, our model is wrong by about 7, 000.', 'start': 28374.959, 'duration': 4.981}, {'end': 28386.324, 'text': "Now is that number large? Is that significant compared to the other values in our data set? Let's have a look at our training variables.", 'start': 28379.941, 'duration': 6.383}, {'end': 28388.232, 'text': 'Y train.', 'start': 28387.771, 'duration': 0.461}, {'end': 28402.468, 'text': "Whoa So if our model is off by 7, 000, what's the median, the middle number of our target variables? 
Wow.", 'start': 28389.273, 'duration': 13.195}], 'summary': "Mean absolute error indicates model's average 7,000 wrong prediction; contextually significant.", 'duration': 34.272, 'max_score': 28368.196, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs828368196.jpg'}, {'end': 29175.187, 'src': 'heatmap', 'start': 28800.954, 'weight': 0.754, 'content': [{'end': 28806.704, 'text': 'Our first experiment, we put down here that we wanted to add an extra layer with more hidden units, yes.', 'start': 28800.954, 'duration': 5.75}, {'end': 28812.126, 'text': 'But it looks like our model may be too complex for our data set,', 'start': 28807.324, 'duration': 4.802}, {'end': 28818.81, 'text': "so that it's not even it's so large that our data set is not large enough to teach it anything.", 'start': 28812.126, 'duration': 6.684}, {'end': 28824.093, 'text': "So what we might try to change is we haven't looked at anything in number two.", 'start': 28819.69, 'duration': 4.403}, {'end': 28834.542, 'text': 'Now, can we alter the learning rate? 
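Editor's note: the sanity check described here (is an MAE of ~7,000 large?) compares the error against the scale of the target values, e.g. the median of `y_train`. A small NumPy sketch with invented charges:

```python
import numpy as np

# Invented charges standing in for y_train.
y_train = np.array([1725.0, 4449.0, 9386.0, 16884.0, 37701.0])
mae_value = 7000.0  # example error from an evaluated model

median_charge = np.median(y_train)
print(median_charge)              # 9386.0
print(mae_value / median_charge)  # ~0.75 -> off by roughly 75% of a typical charge
```

An error that is most of the median target value signals plenty of room for improvement, which motivates the modelling experiments that follow.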
So the learning rate is 0.01, how about we try old faithful Adam.', 'start': 28825.333, 'duration': 9.209}, {'end': 28837.482, 'text': "So there's a few different optimizers here.", 'start': 28835.842, 'duration': 1.64}, {'end': 28839.643, 'text': 'SGD and Adam are probably the most popular.', 'start': 28837.783, 'duration': 1.86}, {'end': 28842.484, 'text': "If SGD doesn't work, try Adam.", 'start': 28840.283, 'duration': 2.201}, {'end': 28844.685, 'text': "Let's see what happens.", 'start': 28843.944, 'duration': 0.741}, {'end': 28848.246, 'text': 'Oh, look at that.', 'start': 28846.085, 'duration': 2.161}, {'end': 28853.807, 'text': "Yes Oh, that's so far so good.", 'start': 28848.806, 'duration': 5.001}, {'end': 28856.008, 'text': "That looks like it's doing better than the previous model.", 'start': 28853.847, 'duration': 2.161}, {'end': 28859.385, 'text': 'All right, evaluate the larger model.', 'start': 28857.762, 'duration': 1.623}, {'end': 28860.868, 'text': 'Oh, look at that.', 'start': 28859.626, 'duration': 1.242}, {'end': 28865.737, 'text': "Now, we've just, if we go insurance model, where's the first one? .", 'start': 28861.008, 'duration': 4.729}, {'end': 28868.102, 'text': 'evaluate xtest.', 'start': 28866.619, 'duration': 1.483}, {'end': 28870.324, 'text': 'Y test.', 'start': 28869.904, 'duration': 0.42}, {'end': 28872.906, 'text': 'What were the metrics from this one? 
MAE.', 'start': 28870.365, 'duration': 2.541}, {'end': 28874.067, 'text': 'Holy goodness.', 'start': 28873.127, 'duration': 0.94}, {'end': 28875.709, 'text': "We've just decreased it by 2000.", 'start': 28874.127, 'duration': 1.582}, {'end': 28883.335, 'text': 'So about 30 odd percent around about 30 odd percent decrease in error rate by tweaking two little things.', 'start': 28875.709, 'duration': 7.626}, {'end': 28886.577, 'text': 'We added an extra layer and we changed the optimizers.', 'start': 28883.375, 'duration': 3.202}, {'end': 28894.823, 'text': "Again, this might not always work, but that's just one of the one of the levers that you can turn on your models to improve them,", 'start': 28886.757, 'duration': 8.066}, {'end': 28896.144, 'text': 'or at least try to improve them.', 'start': 28894.823, 'duration': 1.321}, {'end': 28898.046, 'text': 'We put that up here on purpose.', 'start': 28896.725, 'duration': 1.321}, {'end': 28899.896, 'text': 'to try and prove our model.', 'start': 28898.735, 'duration': 1.161}, {'end': 28907.682, 'text': "So I'm going to put here, we actually modified this experiment, add an extra layer and use the Adam optimizer.", 'start': 28900.296, 'duration': 7.386}, {'end': 28917.711, 'text': "So this one, we're going to train for longer, same as above, but train for longer, maybe 200 epochs, 500, probably too many.", 'start': 28908.243, 'duration': 9.468}, {'end': 28921.674, 'text': 'Well, who knows if in down run the code experiment.', 'start': 28918.531, 'duration': 3.143}, {'end': 28925.103, 'text': "Okay So that's insurance model two.", 'start': 28922.515, 'duration': 2.588}, {'end': 28928.505, 'text': "Let's run insurance model three, see if we can improve on this number here.", 'start': 28925.143, 'duration': 3.362}, {'end': 28932.288, 'text': 'How do we create this? 
So, set the random seed: tf.random.set_seed(42). Beautiful. Now, number one is create the model, same as above: insurance_model_3 = tf.keras.Sequential([...]). The first layer we had there was a dense layer with 100 hidden units, then tf.keras.layers.Dense(10), and then tf.keras.layers.Dense(1). Got to make this insurance company happy, you know.

What's step number two? Compile the model. Beautiful: insurance_model_3.compile() with loss=tf.keras.losses.mae, and the optimizer is — remember which optimizer we're using now? We're using Adam. Optimizers... come on Daniel, you can remember that: tf.keras.optimizers.Adam(). And then the metric is MAE. Wonderful.

Now, number three is fit the model: insurance_model_3.fit(X_train, y_train, epochs=200). We might actually assign this, history = insurance_model_3.fit(...) — you'll see what this means in a second. So, are you ready? Let's run this: three, two, one... All right, we're past 100 epochs. Oh yes, the MAE is going down. Beautiful.

So now let's evaluate our third model with .evaluate(). Do these results on the training data set translate to the test data set? Because that's what we're really concerned about. Evaluate — that is what's up! We've decreased our MAE to about three and a half thousand. And if we look back at our first insurance model, insurance_model.evaluate(X_test, y_test), we just halved our error rate in about five minutes. How cool is that?

Now, there are a few more levers we could try, but there's also a little thing I want to show you. We saved history = insurance_model_3.fit(...). What is this history variable? Well, it's something we're also going to get very familiar with as we go. So: plot history, also known as a loss curve or a training curve. Here's how we can do it: history has a history attribute saved to it, so pd.DataFrame(history.history) gives us a pandas DataFrame, and we call .plot() on that, setting the y label to "loss" and the x label to "epochs", then shift-enter... Look at that. One of the most beautiful sights you will ever see in machine learning and deep learning is a loss curve decreasing. So why is this so beautiful? Because if we look at where this history variable originates from, we instantiated it when we started to fit our model to the training data set.
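The plotting step above can be sketched without a live training run — history.history is just a dict mapping metric names to per-epoch value lists, so a plain dict stands in for it here (the loss values are made up for illustration):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display

# Stand-in for history.history from model.fit(); real runs record one
# value per epoch for each compiled loss/metric.
history_dict = {"loss": [8600.0, 7100.0, 5000.0, 4100.0, 3500.0],
                "mae":  [8600.0, 7100.0, 5000.0, 4100.0, 3500.0]}

df = pd.DataFrame(history_dict)
ax = df.plot(ylabel="loss", xlabel="epochs")  # the loss curve / training curve
```

With a real History object, the only change is `df = pd.DataFrame(history.history)`.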
Normalization is a technique often applied as part of data preparation for machine learning. The goal of normalization is to change the values of numeric columns in the data set to a common scale, without distorting differences in the ranges of values.

All right, so let's have a look at our data set: X_train, or maybe just X as a whole. What does X look like again? We see here we've got age, BMI, children and so on. As we pass our data through the column transformer, it's going to get min-max scaled on the numeric columns and one-hot encoded on the categorical columns.

So now let's create our X and y values, because remember, we just re-imported our data frame fresh. X is going to be insurance.drop("charges", axis=1) — dropping the target on the column axis — and y will be what we're trying to predict: charges, so y = insurance["charges"]. Beautiful, no typos there. And now, what do we want to do next? We've got our X and y.
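The min-max scaling mentioned above (what MinMaxScaler does under the hood) is a one-line formula; here's a quick hand-rolled sketch with NumPy, using made-up ages rather than the course's data:

```python
import numpy as np

# Min-max scaling: x' = (x - min) / (max - min).
# Maps a numeric column onto [0, 1] without distorting the relative
# differences between values.
ages = np.array([19.0, 27.0, 45.0, 64.0])
scaled = (ages - ages.min()) / (ages.max() - ages.min())
# The smallest age becomes 0.0, the largest 1.0, the rest land in between.
```

This is exactly the "common scale without distorting differences" property from the definition: the ordering and relative spacing of the values are preserved.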
But what do we want our model to learn on? Remember our three data sets? We want to train our model on some training data and evaluate it on data it hasn't seen — in other words, test data. The beautiful thing here is that we can use scikit-learn: from sklearn.model_selection import train_test_split.

So now let's build our train and test sets: X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42). The test size of 0.2 gives us 20% of the data as a test set, and setting the random state to 42 means the split happens exactly the same as before — otherwise it'll be random and we'll get different results.

Now we're going to fit the column transformer to our training data. The important thing here is that whenever you have some sort of column transformer, you want to fit it to your training data and then use that fitted column transformer to transform your test data. Why? Because the test data is data the model has never seen before — it's basically data from the future. If we transformed our training data set with information from the test data set, it would be like taking knowledge from the future and altering the data that we have now.

So let's go ct.fit(X_train). Now we want to transform the training and test data with what the transformer learned from the training data — the min-max scaling and the one-hot encoding: X_train_normal = ct.transform(X_train), and then X_test_normal = ct.transform(X_test). Beautiful.

So we've been through a fair few steps here, but when we break it down: we created a column transformer with the MinMaxScaler and the OneHotEncoder; we turned our data into features and labels; we split our data into training and test sets; we fit the column transformer to the training data only; and then we transformed our training and test data with normalization and one-hot encoding. Let's see what — oh, we got a typo here. Beautiful, that runs without errors. Now that we've normalized our data and one-hot encoded it, let's check out what it looks like. What does our data look like now?
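The steps just recapped can be sketched end to end. This is a minimal sketch, not the notebook's exact code: the tiny DataFrame stands in for the real insurance CSV, and the explicit category lists passed to OneHotEncoder are an assumption added here so the encoded width stays stable on any split:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

# Made-up stand-in for the insurance data; column names match the course.
insurance = pd.DataFrame({
    "age": [19, 27, 45, 64, 33, 51],
    "sex": ["female", "male", "male", "female", "male", "female"],
    "bmi": [27.9, 33.1, 22.7, 29.4, 25.0, 31.2],
    "children": [0, 1, 3, 2, 0, 1],
    "smoker": ["yes", "no", "no", "yes", "no", "no"],
    "region": ["southwest", "southeast", "northwest", "northeast",
               "southwest", "northeast"],
    "charges": [16884.9, 4449.5, 8606.2, 28101.3, 3756.6, 9377.9],
})

# Min-max scale the numeric columns, one-hot encode the categorical ones.
ct = ColumnTransformer([
    ("scale", MinMaxScaler(), ["age", "bmi", "children"]),
    ("onehot", OneHotEncoder(categories=[
        ["female", "male"], ["no", "yes"],
        ["northeast", "northwest", "southeast", "southwest"],
    ]), ["sex", "smoker", "region"]),
])

# Features and labels, then an 80/20 reproducible split.
X = insurance.drop("charges", axis=1)
y = insurance["charges"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

ct.fit(X_train)                         # learn min/max + categories from train only
X_train_normal = ct.transform(X_train)  # then transform BOTH sets with it
X_test_normal = ct.transform(X_test)
```

Fitting on the training set only is the "no data from the future" rule in code: the min/max values and category lists come exclusively from data the model is allowed to see.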
So if we go X_train.loc[] and look at the first row — oh, that's what it originally was. So then let's look at the first row of X_train_normal instead. Alrighty. Here's what we started with: age is 19, sex is female, BMI 27.9, then children, smoker, region. Now we've got a value here — I'm guessing that's the age, this must be the BMI, this must be the children — and all of the other values are 1 or 0. Beautiful. Is that the same for another sample? And how about another? Wonderful. What does the whole thing look like? Ah, it's all in numerical format.

Now, what does that mean? Well, if you guessed that we can pass this to our neural network as all-encoded data, you'd be 100% correct. So let's check the shapes of our data now. How have the shapes changed? X_train.shape versus X_train_normal.shape — ah, okay. Our X_train had six different columns. Now, since we've normalized it as well as one-hot encoded it, we've actually added some extra columns: count them up and there are eleven. Alrighty, so it looks like our data is ready to build and pass to a neural network model.

So what we might do is write a text cell — beautiful, because it is beautiful: our data has been normalized and one-hot encoded. Now let's build a neural network model on it and see how it goes. The challenge for you is to now build a neural network model to fit on our normalized data.

And here are insurance model two's results once it's trained on that data. Look at that — what a reduction. Just by normalizing our data, we've gone from 5,000 MAE to 3,500 MAE. That's incredible — a reduction of around 30%. So now I hope you're starting to see the benefits of all of the different hyperparameters we can tune with our models.
Section summary — building and improving neural network models (insurance data):

- Combining feature columns (age, BMI, children, sex, smoker status, region) to predict charges, with a challenge to create the X and y values (features and labels) and build a neural network.
- Creating training and test sets with scikit-learn's train_test_split (an 80/20 split), then building, compiling (mean absolute error loss, SGD optimizer) and fitting a neural network, bringing the training MAE from around 8,600 down to about 7,100; evaluating on the test set gives roughly 7,000 MAE, leaving room for improvement.
- Model improvement experiments: adding an extra layer with more hidden units and switching the optimizer to Adam cut the error rate by about 30%, and monitoring the loss curve gives valuable insight into training progress and duration — machine learning is experimental and iterative.
- The TensorFlow EarlyStopping callback, which stops training once a monitored metric (such as the loss) stops improving for a set number of epochs, plus the importance of normalization and standardization in pre-processing: changing numeric columns to a common scale without distorting differences in their ranges.
- Data preprocessing with pandas and scikit-learn: creating a column transformer with a MinMaxScaler and a OneHotEncoder, splitting the data into training and test sets, fitting the transformer on the training data only, and then transforming both sets.
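The early-stopping behaviour summarized above is a built-in Keras callback. A minimal sketch — the monitor and patience values here are illustrative choices, not the course's exact settings:

```python
import tensorflow as tf

# Stop training once the monitored metric stops improving for `patience`
# consecutive epochs; restore_best_weights rolls the model back to the
# best-performing epoch rather than keeping the final weights.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor="loss",            # e.g. watch the training loss (or "val_loss")
    patience=5,                # tolerate 5 epochs with no improvement
    restore_best_weights=True,
)

# Passed to fit via the callbacks argument, e.g.:
# model.fit(X_train, y_train, epochs=500, callbacks=[early_stopping])
```

This is what makes "500 epochs is probably too many" a non-issue: you can set a generous epoch budget and let the callback cut training short when the loss plateaus.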
- Neural network regression with TensorFlow on the pre-processed data: normalizing and one-hot encoding the features cut the mean absolute error by about 30%, from 5,000 MAE to 3,500 MAE — a demonstration that tuning the data, not just the model, improves performance.

Now, every notebook and concept we go through has a ground-truth notebook. So for this one, 02 Neural Network Classification in TensorFlow, this is the information we're going to go through in the videos — we're going to be writing all of this code out. However, the notebooks in the GitHub repository have a lot more text around the code that we're writing.
writing.", 'start': 31290.433, 'duration': 7.346}, {'end': 31305.085, 'text': 'so if you want a more text based explanation to go along with the videos and along with the code, so say in the video we just write this code.', 'start': 31297.779, 'duration': 7.306}], 'summary': 'The notebook 02 neural network classification in tensorflow provides text-based explanations to complement the video tutorials.', 'duration': 31.057, 'max_score': 31274.028, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs831274028.jpg'}, {'end': 31575.468, 'src': 'embed', 'start': 31553.055, 'weight': 1, 'content': [{'end': 31561.32, 'text': "So, now that we've had a look at some of the most common classification problems, you're gonna come up against binary classification,", 'start': 31553.055, 'duration': 8.265}, {'end': 31565.382, 'text': 'multi-class classification and multi-label classification.', 'start': 31561.32, 'duration': 4.062}, {'end': 31571.466, 'text': "let's have a look at the things we're going to cover or specifically, the things we're going to write code for.", 'start': 31565.382, 'duration': 6.084}, {'end': 31575.468, 'text': "So here's what we're going to cover, broadly.", 'start': 31572.907, 'duration': 2.561}], 'summary': 'Covering binary, multi-class, and multi-label classification problems in code.', 'duration': 22.413, 'max_score': 31553.055, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs831553055.jpg'}, {'end': 31851.056, 'src': 'embed', 'start': 31825.832, 'weight': 2, 'content': [{'end': 31836.135, 'text': "So what might we do before we pass these inputs directly to our machine learning algorithm? 
Well, we're going to turn them into a tensor.", 'start': 31825.832, 'duration': 10.303}, {'end': 31839.576, 'text': 'In other words, a numerical encoding.', 'start': 31837.176, 'duration': 2.4}, {'end': 31843.197, 'text': "Now here, we've got normalized pixel values.", 'start': 31840.376, 'duration': 2.821}, {'end': 31851.056, 'text': "So that means we've changed them to be whatever they were in terms of red, green and blue, to be some value between 0 and 1.", 'start': 31843.637, 'duration': 7.419}], 'summary': 'Data inputs are transformed into tensors for machine learning, with normalized pixel values ranging from 0 to 1.', 'duration': 25.224, 'max_score': 31825.832, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs831825832.jpg'}, {'end': 32512.272, 'src': 'embed', 'start': 32488.705, 'weight': 3, 'content': [{'end': 32496.13, 'text': "So now, speaking of those, let's have a look at what the architecture of a classification model might look like.", 'start': 32488.705, 'duration': 7.425}, {'end': 32498.271, 'text': "We've got a little spoiler alert here.", 'start': 32496.67, 'duration': 1.601}, {'end': 32500.053, 'text': "Here's some TensorFlow code.", 'start': 32498.872, 'duration': 1.181}, {'end': 32503.435, 'text': "We've been very familiar with this in the regression section.", 'start': 32500.073, 'duration': 3.362}, {'end': 32506.368, 'text': 'However, we might notice a few different things.', 'start': 32504.127, 'duration': 2.241}, {'end': 32508.329, 'text': "Oh, we've got input layer.", 'start': 32506.588, 'duration': 1.741}, {'end': 32509.27, 'text': 'All right.', 'start': 32508.87, 'duration': 0.4}, {'end': 32511.051, 'text': "We've got activation.", 'start': 32510.03, 'duration': 1.021}, {'end': 32512.272, 'text': "We haven't seen much of that.", 'start': 32511.091, 'duration': 1.181}], 'summary': 'Overview of classification model architecture using tensorflow with activation functions.', 'duration': 
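The pixel-scaling step described above is just a division by the maximum channel value. A minimal sketch with NumPy — the "image" here is a made-up 2x2 RGB array, not data from the course:

```python
import numpy as np

# A made-up 2x2 RGB "image" with 8-bit channel values in [0, 255].
image = np.array([[[255, 0, 0], [0, 128, 255]],
                  [[34, 177, 76], [255, 255, 255]]], dtype=np.uint8)

# Normalize: rescale red/green/blue values to floats between 0 and 1 --
# the numerical encoding (tensor) a neural network expects as input.
image_normalized = image.astype(np.float32) / 255.0
```

The shape (height, width, colour channels) is unchanged; only the scale of the values moves from 0–255 to 0–1.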
23.567, 'max_score': 32488.705, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs832488705.jpg'}], 'start': 31255.589, 'title': 'Classification model basics', 'summary': 'Covers tensorflow deep learning github, neural network classification basics, classification inputs and outputs, and classification model architecture. it includes materials, classification types, input and output processes, and model architecture with emphasis on practical learning and adaptability to different problems.', 'chapters': [{'end': 31318.339, 'start': 31255.589, 'title': 'Tensorflow deep learning github', 'summary': 'Introduces the tensorflow deep learning github, which contains materials related to the course, including ground truth notebooks and code explanations, offering additional text-based annotations to complement the videos.', 'duration': 62.75, 'highlights': ['GitHub repository contains materials related to the course, including ground truth notebooks and code explanations.', 'Provides additional text-based annotations to complement the videos and code explanations.', 'Mr. D. 
Burke, the instructor, mentions that the GitHub repository is still a work in progress but will have more content by the time the viewers go through the video.']}, {'end': 31639.616, 'start': 31319.28, 'title': 'Neural network classification basics', 'summary': 'Covers binary classification, multi-class classification, and multi-label classification, along with the architecture, input shapes, output shapes, custom data creation, modeling steps, evaluation methods, and model saving and loading, emphasizing a hands-on, experimental approach to learning.', 'duration': 320.336, 'highlights': ['The chapter covers binary classification, multi-class classification, and multi-label classification The chapter discusses the three most common classification problems - binary classification, multi-class classification, and multi-label classification, providing clear examples and explanations.', 'Emphasizes a hands-on, experimental approach to learning The chapter encourages a cooking-like, experimental approach to coding, where the focus is on trying different things, improvising, and being flexible rather than strictly adhering to a fixed recipe.', 'Covers the architecture, input shapes, output shapes, custom data creation, modeling steps, evaluation methods, and model saving and loading The chapter details the key components and steps involved in neural network classification, including architecture, input and output shapes, creating custom data, modeling steps, evaluation methods, and saving and loading trained models.']}, {'end': 32531.743, 'start': 31639.616, 'title': 'Classification inputs and outputs', 'summary': 'Introduces the inputs and outputs of a classification model, demonstrating the process of numerically encoding image inputs into tensors and highlighting the importance of understanding the input and output shapes for machine learning algorithms, with a focus on multi-class classification.', 'duration': 892.127, 'highlights': ['The process of numerically 
encoding image inputs into tensors is demonstrated, where the width, height, and color channels of the images are represented as a tensor with a shape of batch size, width, height, and color channels. Tensor shape representation: batch size, width, height, color channels', 'The importance of understanding input and output shapes for machine learning algorithms is emphasized, as the input and output shapes vary depending on the problem being worked on, with specific considerations for different types of classification such as image classification and text classification. Variability of input and output shapes based on different classification problems', 'The concept of multi-class classification is explained, where the output shape of the tensor corresponds to the number of potential classes, and the process of defining input-output shapes for machine learning or deep learning algorithms is highlighted as a key aspect of classification problems. Explanation of multi-class classification and its impact on tensor output shape']}, {'end': 32982.68, 'start': 32531.763, 'title': 'Classification model architecture', 'summary': 'Introduces a typical architecture for a classification model, covering input layer shape, hidden layers, output layer shape, activation functions, loss functions, and optimizers, emphasizing the adaptability of the architecture to different classification problems and highlighting the usage of adam optimizer for its forgiving nature with hyperparameters.', 'duration': 450.917, 'highlights': ['The architecture of a classification model is adaptable and varies depending on the specific problem, but typically includes the input layer shape, hidden layers, output layer shape, activation functions, loss functions, and optimizers, with a recommendation for using the Adam optimizer for its forgiving nature with hyperparameters.', 'The input layer shape is problem-specific and can vary, such as for image classification where the shape might represent 
width, height, and number of color channels, and for multi-class classification, it is similar to binary classification.', 'The number of hidden layers in the architecture can vary and may include over 100 layers, with the option to have one or multiple hidden layers for both binary and multi-class classification.', 'The neurons per hidden layer are problem-specific, generally ranging from 10 to 100, and for the discussed case, 100 neurons are set for the hidden layer in multi-class classification and binary classification.', 'For binary classification, the output layer shape is one, while for multi-class classification, it is one per class, indicating the number of classes the model aims to predict.', 'The activation functions for both hidden and output layers are typically relu and softmax, respectively, and the usage of Adam optimizer is emphasized for its forgiving nature with hyperparameters, as recommended by Andrej Karpathy, the senior director of AI at Tesla.']}], 'duration': 1727.091, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs831255589.jpg', 'highlights': ['GitHub repository contains materials related to the course, including ground truth notebooks and code explanations.', 'The chapter covers binary classification, multi-class classification, and multi-label classification, providing clear examples and explanations.', 'The process of numerically encoding image inputs into tensors is demonstrated, where the width, height, and color channels of the images are represented as a tensor with a shape of batch size, width, height, and color channels.', 'The architecture of a classification model is adaptable and varies depending on the specific problem, but typically includes the input layer shape, hidden layers, output layer shape, activation functions, loss functions, and optimizers, with a recommendation for using the Adam optimizer for its forgiving nature with hyperparameters.']}, {'end': 
34307.904, 'segs': [{'end': 33116.927, 'src': 'embed', 'start': 33079.548, 'weight': 1, 'content': [{'end': 33090.655, 'text': 'A classification problem is where you try to classify something as one thing or another.', 'start': 33079.548, 'duration': 11.107}, {'end': 33093.696, 'text': "And there's a few types of classification.", 'start': 33091.894, 'duration': 1.802}, {'end': 33106.523, 'text': "You've got binary classification, multi-class classification and multi-label classification,", 'start': 33093.735, 'duration': 12.788}, {'end': 33113.165, 'text': "and we've got here a few types of classification problems.", 'start': 33106.523, 'duration': 6.642}, {'end': 33115.146, 'text': "now we'll turn this into markdown.", 'start': 33113.165, 'duration': 1.981}, {'end': 33116.927, 'text': "i'm going to press command mm.", 'start': 33115.146, 'duration': 1.781}], 'summary': 'Classification involves binary, multi-class, and multi-label classification, with various types of classification problems.', 'duration': 37.379, 'max_score': 33079.548, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs833079548.jpg'}, {'end': 33391.29, 'src': 'embed', 'start': 33360.932, 'weight': 4, 'content': [{'end': 33363.453, 'text': 'Visualize, visualize, visualize.', 'start': 33360.932, 'duration': 2.521}, {'end': 33367.354, 'text': "What's the first thing we should visualize? 
The data.", 'start': 33364.473, 'duration': 2.881}, {'end': 33370.315, 'text': 'We could visualize the model, the training, predictions.', 'start': 33368.053, 'duration': 2.262}, {'end': 33374.617, 'text': "And it's a good idea to visualize these as often as possible.", 'start': 33371.116, 'duration': 3.501}, {'end': 33378.197, 'text': "So let's see how we might visualize our data.", 'start': 33374.677, 'duration': 3.52}, {'end': 33379.718, 'text': "Let's write ourselves a little note.", 'start': 33378.559, 'duration': 1.159}, {'end': 33384.68, 'text': 'Our data is a little hard to understand.', 'start': 33380.799, 'duration': 3.881}, {'end': 33391.29, 'text': "right now let's visualize it.", 'start': 33385.748, 'duration': 5.542}], 'summary': 'Visualize data, model, training, predictions often to understand the data better.', 'duration': 30.358, 'max_score': 33360.932, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs833360932.jpg'}, {'end': 33613.781, 'src': 'heatmap', 'start': 33230.384, 'weight': 3, 'content': [{'end': 33242.75, 'text': 'Beautiful So if we really wanted to figure out what this is, is how to make example classification data with sklearn.', 'start': 33230.384, 'duration': 12.366}, {'end': 33250.061, 'text': 'Here we go, an introduction to machine learning with scikit-learn, examples.', 'start': 33246.519, 'duration': 3.542}, {'end': 33251.983, 'text': "So we've got a few options there.", 'start': 33250.922, 'duration': 1.061}, {'end': 33256.625, 'text': "I've just chosen, that's basically what I did when I found out about this make circles function.", 'start': 33252.343, 'duration': 4.282}, {'end': 33260.008, 'text': "And there's probably a way to do that in TensorFlow.", 'start': 33257.747, 'duration': 2.261}, {'end': 33266.993, 'text': "I'm not quite sure if it's as quick as this one, but if you wanna give it a try, once you see what this data looks like, feel free.", 'start': 33260.369, 
'duration': 6.624}, {'end': 33273.538, 'text': "So now let's check out the features, which is under the X namespace.", 'start': 33267.674, 'duration': 5.864}, {'end': 33276.032, 'text': 'usually a capital X.', 'start': 33274.65, 'duration': 1.382}, {'end': 33286.259, 'text': "okay, so we have an array here and let's check out the labels, check the labels, all right ones and zeros.", 'start': 33276.032, 'duration': 10.227}, {'end': 33288.821, 'text': "so we've got two options here.", 'start': 33286.259, 'duration': 2.562}, {'end': 33291.682, 'text': "I'm guessing one of these samples here.", 'start': 33288.821, 'duration': 2.861}, {'end': 33301.991, 'text': 'so this is a sample has this label number one, and another sample further along here say this one has this label zero.', 'start': 33291.682, 'duration': 10.309}, {'end': 33303.591, 'text': 'we might just make this a bit smaller.', 'start': 33301.991, 'duration': 1.6}, {'end': 33307.332, 'text': "I'll check the labels.", 'start': 33306.232, 'duration': 1.1}, {'end': 33308.072, 'text': "that's what we want there.", 'start': 33307.332, 'duration': 0.74}, {'end': 33316.014, 'text': "So, if we have two label options and we're working on a classification problem, which one of these three are we working on??", 'start': 33308.813, 'duration': 7.201}, {'end': 33318.135, 'text': 'Is it binary??', 'start': 33317.434, 'duration': 0.701}, {'end': 33321.976, 'text': 'Is it multi-class? 
Or is it multi-label??', 'start': 33319.175, 'duration': 2.801}, {'end': 33327.557, 'text': "Well, it's not multi-label, because each one of these has one label.", 'start': 33323.496, 'duration': 4.061}, {'end': 33329.638, 'text': 'So each sample has one label.', 'start': 33328.377, 'duration': 1.261}, {'end': 33334.72, 'text': "And it's not multi-class because there's only one or zero.", 'start': 33330.678, 'duration': 4.042}, {'end': 33338.402, 'text': 'So this is a binary classification problem.', 'start': 33335.339, 'duration': 3.063}, {'end': 33342.504, 'text': "But if we're looking at our data like this, I mean, this is, I could look at this.", 'start': 33338.422, 'duration': 4.082}, {'end': 33345.386, 'text': "I've used this before, so I kind of know what this means.", 'start': 33343.064, 'duration': 2.322}, {'end': 33349.487, 'text': "But if you're looking at this for the first time, it doesn't really, it might not make sense.", 'start': 33345.446, 'duration': 4.041}, {'end': 33355.551, 'text': "So what's our data explorer's motto? Come back to the keynote.", 'start': 33350.188, 'duration': 5.363}, {'end': 33360.171, 'text': 'the machine learning or even the machine learning explorers motto.', 'start': 33357.129, 'duration': 3.042}, {'end': 33363.453, 'text': 'Visualize, visualize, visualize.', 'start': 33360.932, 'duration': 2.521}, {'end': 33367.354, 'text': "What's the first thing we should visualize? 
The data.", 'start': 33364.473, 'duration': 2.881}, {'end': 33370.315, 'text': 'We could visualize the model, the training, predictions.', 'start': 33368.053, 'duration': 2.262}, {'end': 33374.617, 'text': "And it's a good idea to visualize these as often as possible.", 'start': 33371.116, 'duration': 3.501}, {'end': 33378.197, 'text': "So let's see how we might visualize our data.", 'start': 33374.677, 'duration': 3.52}, {'end': 33379.718, 'text': "Let's write ourselves a little note.", 'start': 33378.559, 'duration': 1.159}, {'end': 33384.68, 'text': 'Our data is a little hard to understand.', 'start': 33380.799, 'duration': 3.881}, {'end': 33391.29, 'text': "right now let's visualize it.", 'start': 33385.748, 'duration': 5.542}, {'end': 33402.074, 'text': "so to get it into a structured format, I'm going to import pandas as pd and then I'm going to turn it into a data frame called circles, pd.dataframe,", 'start': 33391.29, 'duration': 10.784}, {'end': 33421.014, 'text': "and I'm going to label it x0 for this element here or this sample here, and that can be x, all of the items in the 0th axis, and then x1 can be x.", 'start': 33402.074, 'duration': 18.94}, {'end': 33427.436, 'text': 'we want all of the items in the first axis.', 'start': 33421.014, 'duration': 6.422}, {'end': 33434.218, 'text': 'there we go, and then we want the label label column to be y.', 'start': 33427.436, 'duration': 6.782}, {'end': 33444.232, 'text': "there we go and let's look at our circles data frame typo there of course circles, beautiful, okay.", 'start': 33434.218, 'duration': 10.014}, {'end': 33447.574, 'text': 'so now this is a little bit easier to understand as it is.', 'start': 33444.232, 'duration': 3.342}, {'end': 33450.737, 'text': "we've got x, zero, x, one.", 'start': 33447.574, 'duration': 3.163}, {'end': 33461.485, 'text': 'so we have two features per label, and so this coordinate zero point, seven, five, four, two, two, four, six and zero, two, three, one, four, 
eight,', 'start': 33450.737, 'duration': 10.748}, {'end': 33464.508, 'text': 'one has the label of one and so on and so on.', 'start': 33461.485, 'duration': 3.023}, {'end': 33469.012, 'text': 'times a thousand, because we set n samples to a thousand.', 'start': 33464.508, 'duration': 4.504}, {'end': 33471.521, 'text': 'now this is still a little hard to understand.', 'start': 33469.012, 'duration': 2.509}, {'end': 33474.763, 'text': 'I wonder if we can visualize this with a plot.', 'start': 33471.521, 'duration': 3.242}, {'end': 33478.945, 'text': "let's try that out, visualize with a plot.", 'start': 33474.763, 'duration': 4.182}, {'end': 33479.866, 'text': 'so how might we do that?', 'start': 33478.945, 'duration': 0.921}, {'end': 33482.788, 'text': 'import matplotlib.', 'start': 33480.446, 'duration': 2.342}, {'end': 33490.012, 'text': 'I like to visualize as much as possible before I start writing neural network code plt.scatter.', 'start': 33482.788, 'duration': 7.224}, {'end': 33493.134, 'text': 'we want a scatterplot.', 'start': 33490.012, 'duration': 3.122}, {'end': 33505.692, 'text': 'scatterplot is a very good plot and we want to just plot this and the color can equal y and the cmap can equal.', 'start': 33493.134, 'duration': 12.558}, {'end': 33507.493, 'text': 'this is just the colour layout.', 'start': 33505.692, 'duration': 1.801}, {'end': 33512.655, 'text': 'I want it just to be red, yellow, blue.', 'start': 33507.493, 'duration': 5.162}, {'end': 33514.356, 'text': 'I think that should do.', 'start': 33512.655, 'duration': 1.701}, {'end': 33517.118, 'text': "let's have a look.", 'start': 33514.356, 'duration': 2.762}, {'end': 33519.379, 'text': 'oh, look at that now.', 'start': 33517.118, 'duration': 2.261}, {'end': 33527.223, 'text': "so, seeing this, if we read the doc string of makeCircles, has it done what it says it's going to do?", 'start': 33519.379, 'duration': 7.844}, {'end': 33531.553, 'text': 'make a large circle containing a smaller 
circle in 2D.', 'start': 33528.631, 'duration': 2.922}, {'end': 33533.315, 'text': "I think it's done that.", 'start': 33532.614, 'duration': 0.701}, {'end': 33537.358, 'text': 'We have a large circle and a smaller circle.', 'start': 33534.456, 'duration': 2.902}, {'end': 33540.14, 'text': 'Now from this plot.', 'start': 33537.938, 'duration': 2.202}, {'end': 33542.841, 'text': "can you tell what type of model we're going to build??", 'start': 33540.14, 'duration': 2.701}, {'end': 33547.064, 'text': "It's okay if you can't, but I just guess, like what would be?", 'start': 33543.661, 'duration': 3.403}, {'end': 33548.125, 'text': 'what are we trying to do here??', 'start': 33547.064, 'duration': 1.061}, {'end': 33549.786, 'text': 'What would we try to do??', 'start': 33548.785, 'duration': 1.001}, {'end': 33555.37, 'text': "I'm giving you a little hint here by running my pointer in between the two circles.", 'start': 33549.845, 'duration': 5.525}, {'end': 33560.792, 'text': 'How about we build one to classify red or blue dots?', 'start': 33556.308, 'duration': 4.484}, {'end': 33568.918, 'text': 'So, in other words, we want our model to potentially draw a line right through the middle of these two.', 'start': 33562.233, 'duration': 6.685}, {'end': 33577.425, 'text': 'So if we were trying to predict on another 100 rows and we had values like this, would they be a zero or a one??', 'start': 33569.919, 'duration': 7.506}, {'end': 33579.066, 'text': 'Would they be red or blue??', 'start': 33577.625, 'duration': 1.441}, {'end': 33583.228, 'text': 'Now I want you to think about,', 'start': 33581.147, 'duration': 2.081}, {'end': 33592.911, 'text': "before we go ahead this is just a conceptual thing what is the difference between the data we're looking at here and the data we've looked at in our regression notebook?", 'start': 33583.228, 'duration': 9.683}, {'end': 33599.874, 'text': 'So if we start a new tab, what is a regression problem??', 'start': 33593.571, 'duration': 
6.303}, {'end': 33609.277, 'text': "And then, if we went to images, what's the difference between this data here and this data here??", 'start': 33600.654, 'duration': 8.623}, {'end': 33613.781, 'text': 'So have a think about that.', 'start': 33612.841, 'duration': 0.94}], 'summary': 'Introduction to machine learning with scikit-learn, exploring binary classification data and visualization', 'duration': 32.634, 'max_score': 33230.384, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs833230384.jpg'}, {'end': 33805.479, 'src': 'embed', 'start': 33779.801, 'weight': 0, 'content': [{'end': 33786.143, 'text': 'And if we want to view the first example of features and labels.', 'start': 33779.801, 'duration': 6.342}, {'end': 33789.105, 'text': "So again, we're just becoming one with the data here.", 'start': 33786.883, 'duration': 2.222}, {'end': 33792.268, 'text': "We're really just familiarizing ourselves with what we're trying to do.", 'start': 33789.145, 'duration': 3.123}, {'end': 33795.711, 'text': "We've already seen a few of these things, but just for completeness, we're putting this here.", 'start': 33792.288, 'duration': 3.423}, {'end': 33797.372, 'text': "So, okay, we've got two.", 'start': 33795.731, 'duration': 1.641}, {'end': 33798.753, 'text': "Here's what we're trying to do.", 'start': 33797.953, 'duration': 0.8}, {'end': 33805.479, 'text': "We're trying to take this point, feed it to our neural network and generate an output, something like this.", 'start': 33798.773, 'duration': 6.706}], 'summary': 'Familiarizing with data, using neural network to generate output.', 'duration': 25.678, 'max_score': 33779.801, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs833779801.jpg'}, {'end': 34119.308, 'src': 'embed', 'start': 34092.637, 'weight': 2, 'content': [{'end': 34097.399, 'text': "the typical architecture of a classification model, where are we up 
to? Well, we're compiling a model.", 'start': 34092.637, 'duration': 4.762}, {'end': 34101.461, 'text': 'We have to define the loss function and the optimizer.', 'start': 34098.399, 'duration': 3.062}, {'end': 34105.482, 'text': "So, if we look back at what we're working on, what are we working with?", 'start': 34102.481, 'duration': 3.001}, {'end': 34109.124, 'text': 'Are we working with binary classification or multi-class classification??', 'start': 34105.522, 'duration': 3.602}, {'end': 34111.705, 'text': 'Come back to our problem.', 'start': 34110.925, 'duration': 0.78}, {'end': 34115.667, 'text': 'What does it look like? Red or blue dots.', 'start': 34112.105, 'duration': 3.562}, {'end': 34119.308, 'text': 'So it is binary classification.', 'start': 34116.507, 'duration': 2.801}], 'summary': 'Compiling a binary classification model with defined loss function and optimizer.', 'duration': 26.671, 'max_score': 34092.637, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs834092637.jpg'}], 'start': 32983.101, 'title': 'Machine learning fundamentals', 'summary': 'Covers neural network classification with tensorflow, visualizing data for machine learning, comparing regression and image data, and neural network modeling with tensorflow, emphasizing hands-on exploration and practical implementation.', 'chapters': [{'end': 33338.402, 'start': 32983.101, 'title': 'Neural network classification with tensorflow', 'summary': 'Introduces the concept of classification problems, demonstrates the creation of a toy data set for a binary classification problem, and explains the use of make_circles function from scikit-learn to generate the data, preparing for hands-on coding of classification code using tensorflow.', 'duration': 355.301, 'highlights': ['The chapter introduces the concept of classification problems The chapter explains the definition of a classification problem as the act of classifying something as one thing 
or another, mentioning the types of classification problems such as binary, multi-class, and multi-label.', 'Demonstrates the creation of a toy data set for a binary classification problem The transcript demonstrates the creation of a toy data set using the make_circles function from scikit-learn, generating a thousand examples for a binary classification problem before moving on to actual problems.', 'Explains the use of make_circles function from scikit-learn to generate the data The transcript provides a step-by-step explanation of importing make_circles from scikit-learn and utilizing it to create a toy data set for visualization and classification, demonstrating the process of generating features and labels for the data set.']}, {'end': 33579.066, 'start': 33338.422, 'title': 'Visualizing data for machine learning', 'summary': 'Emphasizes the importance of visualizing data in machine learning, demonstrating the process through the use of pandas and matplotlib to create scatter plots for better understanding and model building.', 'duration': 240.644, 'highlights': ["The chapter stresses the importance of visualizing data in machine learning, advocating for the motto 'Visualize, visualize, visualize' and emphasizing the need to visualize the data, model, training, and predictions as often as possible. None", 'The process of visualizing data is demonstrated through the use of pandas to import and structure the data into a dataframe, making it easier to understand and work with. None', 'The generation of a scatter plot using matplotlib is showcased as a visualization tool to understand the dataset and potentially determine the type of model to be built for classifying red or blue dots. 
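The data-creation and visualisation workflow summarised above (make_circles from scikit-learn, a pandas DataFrame, then a matplotlib scatter plot) can be sketched as follows. The `noise` and `random_state` values and the saved filename are illustrative assumptions, not confirmed notebook values:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless-safe backend (assumption: no display available)
import matplotlib.pyplot as plt
from sklearn.datasets import make_circles

# Create 1000 samples: a large circle containing a smaller circle in 2D.
# noise and random_state are illustrative choices.
X, y = make_circles(n_samples=1000, noise=0.03, random_state=42)

# Put features and labels into a DataFrame to make them easier to inspect.
circles = pd.DataFrame({"X0": X[:, 0], "X1": X[:, 1], "label": y})
print(circles.head())

# Visualize with a scatter plot, colouring each point by its label.
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.RdYlBu)
plt.savefig("circles.png")
```

Each sample has exactly one label drawn from {0, 1}, which is why the transcript identifies this as binary classification.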
None']}, {'end': 33723.497, 'start': 33581.147, 'title': 'Comparing regression and image data', 'summary': 'The chapter discusses the differences between regression data and image data, and encourages hands-on exploration of tensorflow playground to understand the impact of different parameters on neural networks.', 'duration': 142.35, 'highlights': ['The chapter prompts the audience to compare the differences between regression data and image data in preparation for the next video.', 'The audience is encouraged to spend 10 minutes experimenting with TensorFlow Playground to observe the effects of altering hyperparameters on neural networks.', 'The chapter emphasizes the importance of hands-on exploration by providing an exercise with the hammer and spanner emoji.']}, {'end': 34307.904, 'start': 33724.057, 'title': 'Neural network modeling with tensorflow', 'summary': 'Covers inspecting input and output shapes, building a neural network for classification, and implementing the architecture of a classification model in tensorflow, with a focus on creating and compiling the model and fitting it to the data.', 'duration': 583.847, 'highlights': ['The input and output shapes of the data are inspected, revealing a thousand samples of X and Y, with X having a shape of two and Y being scalar, followed by a check of the number of samples using the len() function.', 'The steps in modeling with TensorFlow are outlined, including creating or importing a model, compiling the model by defining the loss function and optimizer, and fitting the model to the data for a specified number of epochs. 
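The three modelling steps just listed (create a model, compile it by defining the loss function and optimizer, fit it for a number of epochs) can be sketched as below for the red-vs-blue circles problem. The sigmoid output activation, layer size, and epoch count are illustrative choices, not necessarily the notebook's exact values at this point:

```python
import tensorflow as tf
from sklearn.datasets import make_circles

# Toy binary classification data (noise/random_state are illustrative).
X, y = make_circles(n_samples=1000, noise=0.03, random_state=42)

# 1. Create a model using the Sequential API, with one hidden layer.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(100, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # one output for binary classification
])

# 2. Compile the model: define the loss function, optimizer, and metrics.
model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

# 3. Fit the model to the data for a number of epochs.
history = model.fit(X, y, epochs=5, verbose=0)
```

With this data, accuracy near 50% means the model is doing no better than guessing, which is the situation the transcript describes next.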
', 'The process of building a neural network for classification is discussed, focusing on creating a model with a single hidden layer using the sequential API and compiling it by defining the loss function, optimizer, and metrics such as accuracy.']}], 'duration': 1324.803, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs832983101.jpg', 'highlights': ['The chapter emphasizes hands-on exploration and practical implementation.', 'The chapter introduces the concept of classification problems and explains the types of classification problems such as binary, multi-class, and multi-label.', 'The process of building a neural network for classification is discussed, focusing on creating a model with a single hidden layer using the sequential API and compiling it by defining the loss function, optimizer, and metrics such as accuracy.', 'The chapter prompts the audience to compare the differences between regression data and image data in preparation for the next video.', "The chapter stresses the importance of visualizing data in machine learning, advocating for the motto 'Visualize, visualize, visualize' and emphasizing the need to visualize the data, model, training, and predictions as often as possible."]}, {'end': 36906.725, 'segs': [{'end': 34478.596, 'src': 'embed', 'start': 34438.89, 'weight': 2, 'content': [{'end': 34442.191, 'text': "that's 50, only 50% accuracy, and we trained for 200 epochs.", 'start': 34438.89, 'duration': 3.301}, {'end': 34450.79, 'text': 'What 
is happening??', 'start': 34449.77, 'duration': 1.02}, {'end': 34453.911, 'text': 'Okay, I know how we can really step things up.', 'start': 34451.43, 'duration': 2.481}, {'end': 34458.892, 'text': "What if we added another layer and trained for longer? Yeah, that's a great idea.", 'start': 34454.571, 'duration': 4.321}, {'end': 34461.812, 'text': "Our model is performing as if it's guessing right now.", 'start': 34459.732, 'duration': 2.08}, {'end': 34462.573, 'text': "Let's write that down.", 'start': 34461.853, 'duration': 0.72}, {'end': 34478.596, 'text': "Since we're working on a binary classification problem and our model is getting around 50% accuracy, it's performing as if it's guessing.", 'start': 34462.873, 'duration': 15.723}], 'summary': 'Model accuracy is at 50% after 200 epochs. plan to add layer and train longer to improve performance.', 'duration': 39.706, 'max_score': 34438.89, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs834438890.jpg'}, {'end': 34594.958, 'src': 'embed', 'start': 34566.347, 'weight': 4, 'content': [{'end': 34570.768, 'text': 'Alright, model2.fit x, y.', 'start': 34566.347, 'duration': 4.421}, {'end': 34577.869, 'text': 'By the way, if you already guessed what we did wrong up here, we fit on the same data that we evaluated on.', 'start': 34570.768, 'duration': 7.101}, {'end': 34583.97, 'text': 'What should we ideally do? We should fit on training data and evaluate on testing data.', 'start': 34577.929, 'duration': 6.041}, {'end': 34590.852, 'text': "But, because we're working with a toy problem, we're allowed to fudge what we're doing a little bit here.", 'start': 34584.671, 'duration': 6.181}, {'end': 34594.958, 'text': 'And now epochs Hmm, what should we do? Maybe 100.', 'start': 34591.592, 'duration': 3.366}], 'summary': 'Model2.fit x, y. fit on training data, evaluate on testing data. toy problem allows some flexibility. 
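The advice above (fit on training data, evaluate on testing data, rather than evaluating on the data the model was fit on) might look like this; the 80/20 split and random_state are illustrative assumptions:

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split

# Toy binary classification data (noise/random_state are illustrative).
X, y = make_circles(n_samples=1000, noise=0.03, random_state=42)

# Hold out 20% of the samples for evaluation; fit only on the training split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

print(len(X_train), len(X_test))  # 800 200
```

The transcript's point is that the toy problem fudges this, but real workflows should always evaluate on held-out data.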
Set epochs to 100.', 'duration': 28.611, 'max_score': 34566.347, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs834566347.jpg'}, {'end': 34747.595, 'src': 'embed', 'start': 34716.193, 'weight': 1, 'content': [{'end': 34724.295, 'text': "However, we've seen that despite adding an extra layer, model two is still performing very poorly.", 'start': 34716.193, 'duration': 8.102}, {'end': 34728.596, 'text': "I mean, it's getting 50% accuracy on our binary classification problem.", 'start': 34724.775, 'duration': 3.821}, {'end': 34743.232, 'text': "And since we've got an even amount of samples for each class, So if we look here, we've got 500 samples of number one and 500 samples of zero.", 'start': 34729.196, 'duration': 14.036}, {'end': 34747.595, 'text': 'Now, or 500 samples of a blue circle and 500 samples of a red circle.', 'start': 34743.893, 'duration': 3.702}], 'summary': 'Model two is performing poorly with 50% accuracy on a binary classification problem despite an even distribution of 500 samples for each class.', 'duration': 31.402, 'max_score': 34716.193, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs834716193.jpg'}, {'end': 34961.928, 'src': 'embed', 'start': 34930.277, 'weight': 3, 'content': [{'end': 34932.478, 'text': "Increase the number of hidden units, we haven't tried that.", 'start': 34930.277, 'duration': 2.201}, {'end': 34935.38, 'text': 'Change the activation functions.', 'start': 34933.139, 'duration': 2.241}, {'end': 34939.457, 'text': "We definitely haven't tried that, at least for the hidden layers.", 'start': 34936.636, 'duration': 2.821}, {'end': 34941.859, 'text': 'Change the optimization function.', 'start': 34940.198, 'duration': 1.661}, {'end': 34946.941, 'text': 'We could use SGD or Adam or one of the other optimizers in the optimizers package.', 'start': 34942.039, 'duration': 4.902}, {'end': 34948.982, 'text': 'Change the 
learning rate.', 'start': 34948.061, 'duration': 0.921}, {'end': 34949.822, 'text': "We haven't tried that.", 'start': 34949.022, 'duration': 0.8}, {'end': 34952.183, 'text': "Okay, so we've got a fair few things here that we could try.", 'start': 34949.962, 'duration': 2.221}, {'end': 34954.805, 'text': "Let's start implementing some of these.", 'start': 34953.244, 'duration': 1.561}, {'end': 34961.928, 'text': "And remember, because these are hyperparameters or because these are changeable, they're called hyperparameters.", 'start': 34954.885, 'duration': 7.043}], 'summary': 'Experiment with increasing hidden units, activation functions, optimization, and learning rate to improve model performance.', 'duration': 31.651, 'max_score': 34930.277, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs834930277.jpg'}, {'end': 35478.016, 'src': 'embed', 'start': 35383.045, 'weight': 5, 'content': [{'end': 35391.431, 'text': "I'm going to call it plot decision boundary, because the decision boundaries is basically where's our model?", 'start': 35383.045, 'duration': 8.386}, {'end': 35400.898, 'text': 'deciding where the bounds are between red and blue dots, and this function is going to this function will.', 'start': 35391.431, 'duration': 9.467}, {'end': 35402.659, 'text': "that's a better option.", 'start': 35400.898, 'duration': 1.761}, {'end': 35412.651, 'text': 'so we need to take in a trained model, features x and labels y.', 'start': 35402.659, 'duration': 9.992}, {'end': 35418.815, 'text': "so those are the the parameters, and then it'll create, might turn this into markdown.", 'start': 35412.651, 'duration': 6.164}, {'end': 35422.157, 'text': 'actually create a mesh grid.', 'start': 35418.815, 'duration': 3.342}, {'end': 35425.119, 'text': "if you're not sure what a mesh grid is, it's concept in numpy.", 'start': 35422.157, 'duration': 2.962}, {'end': 35427.2, 'text': 'so numpy mesh grid.', 'start': 35425.119, 
'duration': 2.081}, {'end': 35430.002, 'text': "that's what you can do for anything that you're not sure.", 'start': 35427.2, 'duration': 2.802}, {'end': 35434.605, 'text': 'just search it up and have a little play around and figure out what it is.', 'start': 35430.002, 'duration': 4.603}, {'end': 35436.186, 'text': 'numpy mesh grid.', 'start': 35434.605, 'duration': 1.581}, {'end': 35439.088, 'text': 'you could read the documentation there or we could just practice coding it.', 'start': 35436.186, 'duration': 2.902}, {'end': 35444.55, 'text': 'Create a mesh grid of the different X values.', 'start': 35440.148, 'duration': 4.402}, {'end': 35451.473, 'text': "And then we're going to make predictions across the mesh grid.", 'start': 35446.191, 'duration': 5.282}, {'end': 35467.44, 'text': "And then finally, we're going to plot the predictions as well as a line between the different zones where each unique class falls.", 'start': 35452.833, 'duration': 14.607}, {'end': 35471.174, 'text': 'Now, right now, these steps are all just in English.', 'start': 35468.373, 'duration': 2.801}, {'end': 35472.214, 'text': 'They may not make sense.', 'start': 35471.294, 'duration': 0.92}, {'end': 35474.255, 'text': "So let's start to write code for it.", 'start': 35472.674, 'duration': 1.581}, {'end': 35478.016, 'text': "We'll go here, import numpy as np.", 'start': 35475.055, 'duration': 2.961}], 'summary': 'Creating a plot decision boundary by importing numpy and writing code for mesh grid and predictions.', 'duration': 94.971, 'max_score': 35383.045, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs835383045.jpg'}, {'end': 36068.878, 'src': 'embed', 'start': 36041.7, 'weight': 9, 'content': [{'end': 36048.57, 'text': 'so if we come here, One of the resources I used was CS231N Neural Networks Case Study.', 'start': 36041.7, 'duration': 6.87}, {'end': 36054.292, 'text': 'Now, this is a phenomenal course, Convolutional 
Neural Networks for Visual Recognition.', 'start': 36049.23, 'duration': 5.062}, {'end': 36061.315, 'text': "We haven't actually worked with Convolutional Neural Networks yet, so if you go through this, it might be a bit full on,", 'start': 36054.853, 'duration': 6.462}, {'end': 36068.878, 'text': "but I'd highly recommend this, as this is going to be a part of the extra curriculum for this section and the Convolutional Neural Networks section.", 'start': 36061.315, 'duration': 7.563}], 'summary': 'Recommend cs231n neural networks case study for learning convolutional neural networks.', 'duration': 27.178, 'max_score': 36041.7, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs836041700.jpg'}, {'end': 36186.731, 'src': 'embed', 'start': 36161.293, 'weight': 8, 'content': [{'end': 36166.999, 'text': 'Because our neural network is plotting a straight line here, or predicting a straight line, I wonder if we can adapt it to a regression problem.', 'start': 36161.293, 'duration': 5.706}, {'end': 36168.821, 'text': "Let's check that in the next video.", 'start': 36167.84, 'duration': 0.981}, {'end': 36176.547, 'text': "In the last video, we created a function, plotDecisionBoundary to visually inspect our model's predictions,", 'start': 36170.285, 'duration': 6.262}, {'end': 36186.731, 'text': "and we found that it's performing so poorly because it's predicting that the decision boundary is a straight line, whereas our data is circular.", 'start': 36176.547, 'duration': 10.184}], 'summary': 'Adapting neural network to regression for circular data. 
model predicts poorly with straight line decision boundary.', 'duration': 25.438, 'max_score': 36161.293, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs836161293.jpg'}, {'end': 36903.942, 'src': 'heatmap', 'start': 36554.23, 'weight': 1, 'content': [{'end': 36562.098, 'text': "Instead of being binary cross entropy, we're going to change it to mean absolute error so that the loss function is regression specific.", 'start': 36554.23, 'duration': 7.868}, {'end': 36565.697, 'text': "So give that a try, and otherwise I'm going to start writing it now.", 'start': 36563.217, 'duration': 2.48}, {'end': 36570.138, 'text': "So we've got to set up the random seed so we get reproducible results.", 'start': 36566.278, 'duration': 3.86}, {'end': 36573.519, 'text': 'Set seed 42.', 'start': 36571.578, 'duration': 1.941}, {'end': 36578.18, 'text': "Now, step one, we're going to create the model.", 'start': 36573.519, 'duration': 4.661}, {'end': 36579.84, 'text': "We'll just recreate model three.", 'start': 36578.52, 'duration': 1.32}, {'end': 36581.28, 'text': "It's all right, we can override it.", 'start': 36579.86, 'duration': 1.42}, {'end': 36583.74, 'text': 'TF Keras and plus.', 'start': 36581.94, 'duration': 1.8}, {'end': 36588.521, 'text': "What's the harm in having some more practice writing model code? 
TF Keras.", 'start': 36584.561, 'duration': 3.96}, {'end': 36595.716, 'text': 'By the end of this course, I want you to have created over 100 models, maybe even more.', 'start': 36589.101, 'duration': 6.615}, {'end': 36597.557, 'text': "who knows, I'm not actually sure, I haven't counted.", 'start': 36595.716, 'duration': 1.841}, {'end': 36607.281, 'text': "maybe someone out there could count how many models we actually build together and then let me know, because that'd be pretty cool.", 'start': 36597.557, 'duration': 9.724}, {'end': 36608.261, 'text': 'you know what I should have done.', 'start': 36607.281, 'duration': 0.98}, {'end': 36613.603, 'text': 'I should have increased this number, so we just knew the whole time what number model we were up to.', 'start': 36608.261, 'duration': 5.342}, {'end': 36623.846, 'text': 'too late now, and so compile the model. Now, this time with a regression-specific loss function.', 'start': 36613.603, 'duration': 10.243}, {'end': 36633.015, 'text': 'Beautiful, model3.compile, loss equals tf.keras, losses, M-A-E, M-A-E, beautiful.', 'start': 36624.726, 'duration': 8.289}, {'end': 36635.898, 'text': 'And then the optimizer can be tf.keras.', 'start': 36633.356, 'duration': 2.542}, {'end': 36637.2, 'text': "We're gonna keep the optimizer the same.", 'start': 36635.918, 'duration': 1.282}, {'end': 36638.901, 'text': "We'll keep it as Adam, optimizers.Adam.", 'start': 36637.22, 'duration': 1.681}, {'end': 36641.694, 'text': 'Remember, Adam is safe.', 'start': 36640.834, 'duration': 0.86}, {'end': 36646.596, 'text': "Metrics equals, we need to adjust the metrics as well because it's a regression problem.", 'start': 36642.575, 'duration': 4.021}, {'end': 36647.657, 'text': "We'll keep MAE there.", 'start': 36646.616, 'duration': 1.041}, {'end': 36651.318, 'text': 'And finally, three is fit the model.', 'start': 36648.617, 'duration': 2.701}, {'end': 36654.359, 'text': "So we have model three, we've recreated it.", 'start': 36651.838, 
'duration': 2.521}, {'end': 36662.082, 'text': 'Fit on X reg train and Y reg train, which is our regression dataset.', 'start': 36655.32, 'duration': 6.762}, {'end': 36664.643, 'text': "And we'll set it up for 100 epochs.", 'start': 36663.043, 'duration': 1.6}, {'end': 36666.324, 'text': "Now you're ready to run.", 'start': 36665.564, 'duration': 0.76}, {'end': 36668.025, 'text': 'Three, two, one.', 'start': 36666.884, 'duration': 1.141}, {'end': 36673.364, 'text': 'Oh, there we go.', 'start': 36672.263, 'duration': 1.101}, {'end': 36678.99, 'text': 'Now, what did it start with? Do we have, oh, so the MAE is close to 250.', 'start': 36673.385, 'duration': 5.605}, {'end': 36685.055, 'text': 'And by the end, we go right down to 37.', 'start': 36678.99, 'duration': 6.065}, {'end': 36685.696, 'text': "That's beautiful.", 'start': 36685.055, 'duration': 0.641}, {'end': 36688.539, 'text': 'So we see a reduction in loss and a reduction in MAE.', 'start': 36686.056, 'duration': 2.483}, {'end': 36692.663, 'text': "But to make sure, seems like our model's learning something from these training metrics.", 'start': 36688.579, 'duration': 4.084}, {'end': 36694.625, 'text': "Let's plot them.", 'start': 36693.163, 'duration': 1.462}, {'end': 36699.307, 'text': "So just like we've plotted our circular data, we'll plot our regression data.", 'start': 36695.706, 'duration': 3.601}, {'end': 36711.929, 'text': 'So make predictions with our trained model, yregpreds equals model3.predict x, make it on the test data set.', 'start': 36699.907, 'duration': 12.022}, {'end': 36716.19, 'text': "Oh, sorry, it's going to be, oh yeah, predict xregtest.", 'start': 36712.969, 'duration': 3.221}, {'end': 36716.89, 'text': "Yeah, that's what we want.", 'start': 36716.25, 'duration': 0.64}, {'end': 36725.212, 'text': "And now let's plot the model's predictions against our regression data.", 'start': 36717.71, 'duration': 7.502}, {'end': 36728.579, 'text': 'So we can create a plot here.', 'start': 
36727.099, 'duration': 1.48}, {'end': 36736.021, 'text': "We're not gonna write as big and intricate a function as we did for our classification data, because this is, regression data is just a straight line.", 'start': 36728.599, 'duration': 7.422}, {'end': 36741.402, 'text': "And we're gonna do a scatter plot, and on this scatter plot is going to be the training data.", 'start': 36736.821, 'duration': 4.581}, {'end': 36747.144, 'text': 'So xreg train, yreg train, and the color of the training can be blue.', 'start': 36742.483, 'duration': 4.661}, {'end': 36751.705, 'text': "We'll label it training data so we know that it is the training data.", 'start': 36747.804, 'duration': 3.901}, {'end': 36757.185, 'text': 'create another scatter plot, this time for the test data set.', 'start': 36753.083, 'duration': 4.102}, {'end': 36774.172, 'text': 'so test features and test labels, and we will color this with green and give it the label equals test data and plot scatter x, reg test.', 'start': 36757.185, 'duration': 16.987}, {'end': 36777.414, 'text': "and now let's plot our reg predictions.", 'start': 36774.172, 'duration': 3.242}, {'end': 36780.255, 'text': "and oh, we're going to have to.", 'start': 36777.414, 'duration': 2.841}, {'end': 36782.336, 'text': 'no, i think that should be okay, we might have to.', 'start': 36780.255, 'duration': 2.081}, {'end': 36784.836, 'text': 'I wonder what dimension they are.', 'start': 36783.595, 'duration': 1.241}, {'end': 36786.817, 'text': "Well, we'll check.", 'start': 36786.216, 'duration': 0.601}, {'end': 36787.797, 'text': 'If in doubt, run the code.', 'start': 36786.877, 'duration': 0.92}, {'end': 36790.839, 'text': 'Wait for the error to pop up for us, and then we can see.', 'start': 36787.817, 'duration': 3.022}, {'end': 36794.101, 'text': 'And then plot legend.', 'start': 36790.859, 'duration': 3.242}, {'end': 36795.441, 'text': "So that's what we've done.", 'start': 36794.821, 'duration': 0.62}, {'end': 36797.302, 'text': 
"We've trained a regression model.", 'start': 36796.282, 'duration': 1.02}, {'end': 36800.324, 'text': 'We adapted model three to be suited for our regression data.', 'start': 36797.562, 'duration': 2.762}, {'end': 36803.265, 'text': "We've made some predictions on the test data set.", 'start': 36800.864, 'duration': 2.401}, {'end': 36806.407, 'text': "And now we're just plotting it, just like we did in the regression section.", 'start': 36803.766, 'duration': 2.641}, {'end': 36808.708, 'text': 'Training data, test data, predictions.', 'start': 36806.967, 'duration': 1.741}, {'end': 36809.849, 'text': "Let's go.", 'start': 36809.569, 'duration': 0.28}, {'end': 36814.631, 'text': 'Oh, we want this to be fig size.', 'start': 36812.251, 'duration': 2.38}, {'end': 36822.553, 'text': "Oh, would you look at that? Okay, so the predictions aren't perfect.", 'start': 36818.332, 'duration': 4.221}, {'end': 36829.254, 'text': 'I mean, if they were, the red line would line up perfectly with the green line, but they definitely look better than complete guessing.', 'start': 36823.333, 'duration': 5.921}, {'end': 36833.295, 'text': 'I mean, imagine if the predictions were all over the shop, like red dots everywhere.', 'start': 36829.314, 'duration': 3.981}, {'end': 36836.235, 'text': 'That would be basically guessing for this regression problem.', 'start': 36833.695, 'duration': 2.54}, {'end': 36841.316, 'text': 'Now, this means that our model must be learning something.', 'start': 36837.195, 'duration': 4.121}, {'end': 36847.565, 'text': "However, it's still missing something for our classification problem.", 'start': 36842.642, 'duration': 4.923}, {'end': 36853.729, 'text': "What's the difference here? What is the main difference between our data sets? 
I'll give you a hint.", 'start': 36848.045, 'duration': 5.684}, {'end': 36857.951, 'text': 'This one, this regression problem is a straight line.', 'start': 36855.23, 'duration': 2.721}, {'end': 36865.076, 'text': 'Whereas if we come back, we discussed this before, our classification data is not a straight line.', 'start': 36858.772, 'duration': 6.304}, {'end': 36871.867, 'text': "It's nonlinear, but the decision boundary our model's trying to plot is linear, straight line.", 'start': 36865.885, 'duration': 5.982}, {'end': 36878.669, 'text': "So, hmm, that might be the missing piece, the thing that we haven't introduced to our models yet.", 'start': 36872.567, 'duration': 6.102}, {'end': 36882.13, 'text': "We haven't introduced nonlinearity.", 'start': 36879.469, 'duration': 2.661}, {'end': 36884.511, 'text': "And if you haven't heard of that before, that's okay,", 'start': 36882.71, 'duration': 1.801}, {'end': 36889.512, 'text': "because we're going to discuss it in the next video and probably maybe a couple of videos after that.", 'start': 36884.511, 'duration': 5.001}, {'end': 36891.053, 'text': "So let's write that down.", 'start': 36890.273, 'duration': 0.78}, {'end': 36892.913, 'text': 'The missing piece.', 'start': 36892.073, 'duration': 0.84}, {'end': 36894.554, 'text': "It's like we're on a treasure hunt here.", 'start': 36893.193, 'duration': 1.361}, {'end': 36897.036, 'text': 'non linearity.', 'start': 36895.555, 'duration': 1.481}, {'end': 36903.942, 'text': "Alrighty, get excited because we're going to learn one of the most important concepts in neural networks.", 'start': 36899.158, 'duration': 4.784}], 'summary': "Using mean absolute error as the loss function, we trained a regression-specific model, achieving a reduction in mae and loss, and plotted the model's predictions against the regression data.", 'duration': 349.712, 'max_score': 36554.23, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs836554230.jpg'}, {'end': 36699.307, 'src': 'embed', 'start': 36673.385, 'weight': 0, 'content': [{'end': 36678.99, 'text': 'Now, what did it start with? Do we have, oh, so the MAE is close to 250.', 'start': 36673.385, 'duration': 5.605}, {'end': 36685.055, 'text': 'And by the end, we go right down to 37.', 'start': 36678.99, 'duration': 6.065}, {'end': 36685.696, 'text': "That's beautiful.", 'start': 36685.055, 'duration': 0.641}, {'end': 36688.539, 'text': 'So we see a reduction in loss and a reduction in MAE.', 'start': 36686.056, 'duration': 2.483}, {'end': 36692.663, 'text': "But to make sure, seems like our model's learning something from these training metrics.", 'start': 36688.579, 'duration': 4.084}, {'end': 36694.625, 'text': "Let's plot them.", 'start': 36693.163, 'duration': 1.462}, {'end': 36699.307, 'text': "So just like we've plotted our circular data, we'll plot our regression data.", 'start': 36695.706, 'duration': 3.601}], 'summary': 'Mae decreased from 250 to 37, indicating improved model performance.', 'duration': 25.922, 'max_score': 36673.385, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs836673385.jpg'}], 'start': 34309.684, 'title': 'Improving model performance', 'summary': 'Discusses strategies to improve model performance, including experimenting with hyperparameters, visualizing model predictions, and creating decision boundaries. model accuracy started at 48% and was improved to 50% after 200 epochs. 
additionally, the chapter covers regression model training, achieving a decrease in mean absolute error from around 250 to 37.', 'chapters': [{'end': 34565.427, 'start': 34309.684, 'title': 'Improving binary classification model', 'summary': 'Discusses training a binary classification model, starting with 48% accuracy, attempting to improve it by training for longer and adding an extra layer, but still achieving only 50% accuracy after 200 epochs.', 'duration': 255.743, 'highlights': ['The model starts with 48% accuracy on 1000 samples, indicating poor performance in binary classification. The initial 48% accuracy on 1000 samples highlights the poor performance of the model in binary classification.', 'Training the model for 200 epochs only yields 50% accuracy, showing limited improvement despite increased training time. Despite training the model for 200 epochs, the accuracy only improves to 50%, indicating limited enhancement in performance.', 'Adding an extra layer to the model does not significantly improve its performance, as it still achieves only 50% accuracy. Despite adding an extra layer to the model, the accuracy remains at 50%, indicating no significant improvement in performance.']}, {'end': 35047.271, 'start': 34566.347, 'title': 'Improving classification model with hyperparameters', 'summary': "Discusses the issues faced with model two, which is still performing poorly with 50% accuracy, and explores the need to experiment with various hyperparameters to improve the model's performance, including adding layers, changing optimization functions, increasing hidden units, and modifying learning rates.", 'duration': 480.924, 'highlights': ["Model two is still performing very poorly with 50% accuracy on our binary classification problem. 
The model's poor performance is highlighted with a 50% accuracy rate on the binary classification problem, indicating the need for improvement.", "Experimenting with hyperparameters, such as adding layers, changing optimization functions, increasing hidden units, and modifying learning rates, is essential to improving the model's performance. The chapter emphasizes the importance of experimenting with various hyperparameters, including adding layers, changing optimization functions, increasing hidden units, and modifying learning rates, to enhance the model's performance.", 'The need to fit the model on training data and evaluate on testing data is highlighted, as the current approach of fitting and evaluating on the same data is not ideal. The discussion emphasizes the importance of fitting the model on training data and evaluating on testing data rather than fitting and evaluating on the same data, highlighting the need for a more appropriate approach.']}, {'end': 35418.815, 'start': 35047.271, 'title': 'Visualizing model performance', 'summary': "Discusses modifying a neural network by adding layers and hidden units, changing the optimizer to improve the model's performance, and planning to visualize the model's predictions due to its poor accuracy of 50%. 
the highlight includes the addition of layers and hidden units to the neural network, changing the optimizer to adam, and the plan to visualize the model's predictions.", 'duration': 371.544, 'highlights': ['The addition of layers and hidden units to the neural network, such as adding 10 hidden units to the middle layer and an extra layer with 100 hidden units, to improve model performance.', "Changing the optimizer from SGD to Adam to potentially enhance the model's accuracy.", "The plan to visualize the model's predictions to understand the pattern the model is trying to figure out between the two types of circles."]}, {'end': 35674.503, 'start': 35418.815, 'title': 'Creating mesh grid and plotting decision boundary', 'summary': 'Discusses creating a mesh grid using numpy and plotting decision boundary by making predictions across the grid, with steps to import numpy, defining axis boundaries, creating mesh grid, and visualizing the mesh grid values.', 'duration': 255.688, 'highlights': ['Creating a mesh grid using numpy The chapter discusses creating a mesh grid using numpy to visualize the mesh grid values, returning 100 evenly spaced numbers between x min and x max, and creating a mesh grid out of the generated values.', 'Plotting decision boundary by making predictions across the grid The process involves plotting decision boundary by making predictions across the mesh grid, and then plotting the predictions as well as a line between the different zones where each unique class falls.', 'Steps to import numpy, defining axis boundaries, and creating mesh grid The steps involve importing numpy, defining the axis boundaries for the plot, creating a mesh grid with specified interval, and visualizing the mesh grid values.']}, {'end': 36613.603, 'start': 35674.503, 'title': 'Creating and plotting decision boundaries', 'summary': 'Covers creating and plotting decision boundaries for binary and multi-class classification problems, using numpy and tensorflow, including 
adapting a model for regression, inspired by resources such as cs231n neural networks case study and made with ml, along with recommendations for further study.', 'duration': 939.1, 'highlights': ['The chapter covers creating and plotting decision boundaries for binary and multi-class classification problems. The transcript discusses the process of creating decision boundaries for binary and multi-class classification problems, including making predictions using a trained model and checking for multi-class classification.', 'Adapting a model for regression, inspired by resources such as CS231N Neural Networks Case Study and Made with ML. The transcript describes the process of adapting a model for regression, influenced by resources such as CS231N Neural Networks Case Study and Made with ML, and provides insights into the decision-making process involved in adapting the model.', 'Recommendations for further study, including resources for Convolutional Neural Networks and machine learning. 
The transcript recommends further study in Convolutional Neural Networks and machine learning, suggesting resources such as CS231N Neural Networks Case Study and Made with ML for deeper exploration.']}, {'end': 36906.725, 'start': 36613.603, 'title': 'Regression model training', 'summary': "Demonstrates training a regression model, achieving a decrease in mean absolute error from around 250 to 37, and visualizing the model's predictions against the regression data.", 'duration': 293.122, 'highlights': ["The Mean Absolute Error (MAE) decreased from around 250 to 37, indicating significant improvement in the model's performance.", 'The model was trained for 100 epochs, showcasing its ability to learn from the regression dataset.', "The visualization of the model's predictions against the regression data demonstrated that the predictions were better than random guessing, although not perfect."]}], 'duration': 2597.041, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tpCFfeUEGs8/pics/tpCFfeUEGs834309684.jpg', 'highlights': ["The Mean Absolute Error (MAE) decreased from around 250 to 37, indicating significant improvement in the model's performance.", 'The model starts with 48% accuracy on 1000 samples, indicating poor performance in binary classification.', 'Despite training the model for 200 epochs, the accuracy only improves to 50%, indicating limited enhancement in performance.', "The chapter emphasizes the importance of experimenting with various hyperparameters, including adding layers, changing optimization functions, increasing hidden units, and modifying learning rates, to enhance the model's performance.", 'The discussion emphasizes the importance of fitting the model on training data and evaluating on testing data rather than fitting and evaluating on the same data, highlighting the need for a more appropriate approach.', 'The chapter discusses creating a mesh grid using numpy to visualize the mesh grid values, returning 100 
evenly spaced numbers between x min and x max, and creating a mesh grid out of the generated values.', 'The process involves plotting decision boundary by making predictions across the mesh grid, and then plotting the predictions as well as a line between the different zones where each unique class falls.', 'The transcript discusses the process of creating decision boundaries for binary and multi-class classification problems, including making predictions using a trained model and checking for multi-class classification.', 'The transcript describes the process of adapting a model for regression, influenced by resources such as CS231N Neural Networks Case Study and Made with ML, and provides insights into the decision-making process involved in adapting the model.', 'The transcript recommends further study in Convolutional Neural Networks and machine learning, suggesting resources such as CS231N Neural Networks Case Study and Made with ML for deeper exploration.']}], 'highlights': ['The chapter offers a two-part video series with a total duration of 14 hours, providing a code-first introduction to key concepts in deep learning and the creation of hundreds of lines of TensorFlow code.', 'The potential applications of machine learning in solving complex problems are discussed, using the example of teaching a self-driving car to drive.', "AlphaFold's solution to a 50-year-old grand challenge in biology is a significant breakthrough in AI powered by deep learning.", 'The chapter emphasizes the practical application of creating random tensors in TensorFlow and encourages independent problem-solving through effective search methods.', 'The importance of shuffling images in neural network training to prevent bias towards specific categories, with a practical homework assignment for learners.', 'Explaining the important tensor attributes including shape, axis, rank, and size, with examples and explanations of their use.', 'The use of mixed precision, involving both 16 and 
32 bit floating point types in a model during training, can lead to significant performance improvements, with potential performance improvements by more than three times on modern GPUs and 60% on TPUs.', 'The chapter covers various tensor operations in TensorFlow, such as finding the minimum, maximum, mean, and sum.', 'Neural network outputs prediction probabilities are often represented as positional maximum and minimum.', 'Regression problems involve predicting a number, such as the sale price of a house, app downloads, health insurance costs, and weekly fuel expenses.', 'The fit function is used to train the model on x and y for five epochs, allowing the model to identify patterns in the data.', "The learning rate is highlighted as the most crucial hyperparameter in neural networks, impacting the model's performance significantly.", 'TensorFlow allows saving models in .pb (protobuf) and .h5 (HDF5) formats, with .pb being the default format for majority use cases.', 'The normalization of data led to a 30% reduction in mean absolute error', 'GitHub repository contains materials related to the course, including ground truth notebooks and code explanations.', 'The chapter covers binary classification, multi-class classification, and multi-label classification, providing clear examples and explanations.', "The Mean Absolute Error (MAE) decreased from around 250 to 37, indicating significant improvement in the model's performance."]}
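The transcript leaves the `plot_decision_boundary` steps "all just in English" at that point in the video: take a trained model, features X and labels y, create a mesh grid with numpy, make predictions across the grid, then plot the zones for each class. A minimal NumPy sketch of the grid-and-predict part, assuming a generic `model_predict` callable in place of a trained Keras model — the function name and the `steps`/`margin` defaults here are illustrative, not the course's exact code:

```python
import numpy as np

def make_prediction_grid(model_predict, X, steps=100, margin=0.1):
    """Build a mesh grid spanning the feature space of X and evaluate a
    model over every grid point.

    model_predict: any callable taking an (n, 2) feature array and
    returning one prediction per row (stands in for model.predict).
    Returns xx, yy and the predictions reshaped to the grid, ready for
    something like plt.contourf(xx, yy, preds).
    """
    # Define the axis boundaries of the plot, padded with a small margin
    x_min, x_max = X[:, 0].min() - margin, X[:, 0].max() + margin
    y_min, y_max = X[:, 1].min() - margin, X[:, 1].max() + margin

    # Create a mesh grid of evenly spaced values (np.meshgrid returns
    # coordinate matrices: xx varies along columns, yy along rows)
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, steps),
                         np.linspace(y_min, y_max, steps))

    # Stack the grid coordinates into a (steps*steps, 2) feature matrix
    x_in = np.c_[xx.ravel(), yy.ravel()]

    # Make predictions across the mesh grid and reshape back to the grid
    preds = np.asarray(model_predict(x_in)).reshape(xx.shape)
    return xx, yy, preds
```

The plotting itself (a filled contour over the grid, with the red and blue dots scattered on top) then follows as described in the video.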
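The loss-function swap the transcript describes (binary cross-entropy out, mean absolute error in for the regression problem) boils down to a one-line formula: MAE is the average absolute difference between labels and predictions. A tiny NumPy equivalent of what `tf.keras.losses.MAE` computes, averaged over a batch (the helper name is mine):

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    # Mean absolute error: the average absolute difference between the
    # true targets and the predictions, averaged over the whole batch.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.abs(y_true - y_pred).mean())
```

So a model whose training MAE falls from around 250 to around 37, as in the transcript, is on average about 37 units away from the true regression targets by the end of training.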
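On the "missing piece" (non-linearity): the transcript's observation — a straight-line decision boundary can do no better than roughly 50% on the circular data, even though that data is perfectly separable — can be checked in a few lines of NumPy. The ring radii, weight vector, and 0.75 threshold below are illustrative stand-ins for the course's make_circles dataset, not its actual values:

```python
import numpy as np

# Hypothetical stand-in for the circular dataset: two concentric rings
# of 500 points each (outer ring = class 0, inner ring = class 1).
rng = np.random.default_rng(42)
n = 500
theta = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
radius = np.concatenate([np.full(n, 1.0), np.full(n, 0.5)])
X = np.c_[radius * np.cos(theta), radius * np.sin(theta)]
y = np.concatenate([np.zeros(n), np.ones(n)])

# A linear decision boundary: classify by which side of the line
# w . x = 0 a point falls on. By symmetry, any straight line cuts each
# ring roughly in half, so accuracy hovers around 50% -- exactly what
# the models without non-linear activations achieved.
w = np.array([1.0, 1.0])
linear_acc = ((X @ w > 0).astype(float) == y).mean()

# A non-linear (circular) boundary at radius 0.75 separates the rings
# perfectly: the missing piece is letting the model bend its boundary.
radial_acc = ((np.hypot(X[:, 0], X[:, 1]) < 0.75).astype(float) == y).mean()

print(f"linear boundary accuracy:   {linear_acc:.2f}")  # around 0.5
print(f"circular boundary accuracy: {radial_acc:.2f}")
```

This is why adding non-linear activation functions to the hidden layers is the next step the course takes: they let the network draw curved decision boundaries instead of straight lines.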