title
Lesson 1: Deep Learning 2019 - Image classification

description
Note: please view this using the video player at http://course.fast.ai, instead of viewing on YouTube directly, to ensure you have the latest information. If you have a question, first search http://forums.fast.ai to see whether it already has an answer, and post there if not. The key outcome of lesson 1 is that we'll have trained an image classifier that can recognize pet breeds with state-of-the-art accuracy. The key to this success is the use of *transfer learning*, which will be a key platform for much of this course. We'll also see how to analyze the model to understand its failure modes. In this case, we'll see that the places where the model makes mistakes are the same areas where even breed experts can make mistakes. We'll discuss the overall approach of the course, which is somewhat unusual in being *top-down* rather than *bottom-up*. Rather than starting with theory and only getting to practical applications later, we start with practical applications and then gradually dig deeper and deeper into them, learning the theory as needed. This approach takes more work for teachers to develop, but it has been shown to help students a lot, for example in education research at Harvard (https://www.gse.harvard.edu/news/uk/09/01/education-bat-seven-principles-educators) by David Perkins. We also discuss how to set the most important *hyper-parameter* when training neural networks, the *learning rate*, using Leslie Smith's fantastic *learning rate finder* method. Finally, we'll look at the important but rarely discussed topic of *labeling*, and learn about some of the features that fastai provides to make it easy to add labels to your images. 
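As a taste of the labeling topic mentioned above: in this lesson, each pet image is labeled by extracting the breed from its filename with a regular expression. A minimal sketch of that idea in plain Python (the exact pattern and the helper `label_from_filename` are illustrative assumptions, not fastai's API):

```python
import re

# Files in the Oxford-IIIT Pet dataset are named like
# 'german_shorthaired_105.jpg': breed name, underscore, index.
# This pattern (an illustrative assumption) captures the breed part.
PAT = re.compile(r'^(.+)_\d+\.jpg$')

def label_from_filename(fname: str) -> str:
    """Return the breed label encoded in a pet image filename."""
    match = PAT.match(fname)
    if match is None:
        raise ValueError(f'unexpected filename: {fname}')
    return match.group(1)

print(label_from_filename('german_shorthaired_105.jpg'))  # german_shorthaired
```

fastai's factory methods for labeling a dataset from filenames wrap this same idea, so it is worth being comfortable with the regular expression itself.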
Note that to follow along with the lessons, you'll need to connect to a cloud GPU provider that has the fastai library installed (recommended; this should take only 5 minutes or so and cost under $0.50/hour), or set up a computer with a suitable GPU yourself (which can take days to get working if you're not familiar with the process, so we don't recommend it). You'll also need to be familiar with the basics of the *Jupyter Notebook* environment we use for running deep learning experiments. Up-to-date tutorials and recommendations for both are available from the course website (http://course.fast.ai).
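One habit from the lessons worth adopting right away: dataset locations are handled with Python's `pathlib.Path` objects rather than strings, because the `/` operator builds sub-paths that work identically on Windows, Linux, and Mac. A small illustration (the directory names here are hypothetical):

```python
from pathlib import Path

# The '/' operator joins path components portably on any OS.
data_root = Path('data') / 'oxford-iiit-pet'  # hypothetical location
images = data_root / 'images'

print(images.as_posix())  # data/oxford-iiit-pet/images
print(images.name)        # images
```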

detail
{'title': 'Lesson 1: Deep Learning 2019 - Image classification', 'heatmap': [{'end': 4209.665, 'start': 4085.672, 'weight': 0.803}, {'end': 4569.539, 'start': 4507.214, 'weight': 0.778}, {'end': 5122.322, 'start': 4927.253, 'weight': 0.848}, {'end': 5656.252, 'start': 5583.931, 'weight': 0.73}], 'summary': 'Tutorial series on deep learning covers practical usage of jupyter notebooks, python coding, fast.ai library, and achieving significant accuracy improvements, such as 94% compared to 59% in 2012, while challenging common misconceptions about deep learning.', 'chapters': [{'end': 126.968, 'segs': [{'end': 31.108, 'src': 'embed', 'start': 0.444, 'weight': 0, 'content': [{'end': 8.329, 'text': 'Okay, so Welcome, practical deep learning for coders.', 'start': 0.444, 'duration': 7.885}, {'end': 9.97, 'text': 'lesson one.', 'start': 8.329, 'duration': 1.641}, {'end': 17.195, 'text': "it's kind of lesson two, because There's a lesson zero, and lesson zero is why do you need a GPU and how do you get it set up?", 'start': 9.97, 'duration': 7.225}, {'end': 23.539, 'text': "So if you haven't got a GPU running yet, then go back and do that.", 'start': 17.495, 'duration': 6.044}, {'end': 31.108, 'text': "make sure that you can access a Jupyter notebook, and Then you're ready to start the real lesson one.", 'start': 23.539, 'duration': 7.569}], 'summary': 'Lesson zero covers the setup of a gpu for practical deep learning.', 'duration': 30.664, 'max_score': 0.444, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI444.jpg'}, {'end': 126.968, 'src': 'embed', 'start': 57.003, 'weight': 1, 'content': [{'end': 63.817, 'text': "and Let's make that a bit bigger.", 'start': 57.003, 'duration': 6.814}, {'end': 67.877, 'text': "And hopefully you've learnt these four keyboard shortcuts.", 'start': 63.817, 'duration': 4.06}, {'end': 75.439, 'text': 'So the basic idea is that your Jupyter notebook Has prose in it.', 'start': 67.877, 
'duration': 7.562}, {'end': 78.139, 'text': 'It can have pictures in it.', 'start': 76.179, 'duration': 1.96}, {'end': 87.581, 'text': 'It can have Charts in it And, most importantly, it can have code in it.', 'start': 78.759, 'duration': 8.822}, {'end': 89.461, 'text': 'so the code is in Python.', 'start': 87.581, 'duration': 1.88}, {'end': 93.67, 'text': 'and How many people have used Python before?', 'start': 89.461, 'duration': 4.209}, {'end': 95.451, 'text': 'Nearly all of you.', 'start': 94.731, 'duration': 0.72}, {'end': 95.991, 'text': "That's great.", 'start': 95.551, 'duration': 0.44}, {'end': 101.332, 'text': "If you haven't used Python, that's totally okay.", 'start': 97.051, 'duration': 4.281}, {'end': 103.412, 'text': "It's a pretty easy language to pick up.", 'start': 101.772, 'duration': 1.64}, {'end': 111.214, 'text': "But if you haven't used Python, this will feel a little bit more intimidating, because the code that you're seeing will be unfamiliar to you.", 'start': 103.792, 'duration': 7.422}, {'end': 112.334, 'text': 'Yes, Rachel?', 'start': 111.854, 'duration': 0.48}, {'end': 121.663, 'text': 'trying to keep the most secret.', 'start': 120.362, 'duration': 1.301}, {'end': 122.704, 'text': 'yeah, yeah, okay.', 'start': 121.663, 'duration': 1.041}, {'end': 124.546, 'text': "well, now that we're here, i'll edit this bit out.", 'start': 122.704, 'duration': 1.842}, {'end': 126.968, 'text': 'so, um, as i say, there are, uh,', 'start': 124.546, 'duration': 2.422}], 'summary': 'Jupyter notebooks can have images, charts, and python code; suitable for beginners.', 'duration': 69.965, 'max_score': 57.003, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI57003.jpg'}], 'start': 0.444, 'title': 'Practical deep learning for coders', 'summary': 'Introduces the significance of having a gpu for running jupyter notebooks, basic jupyter notebook usage, and the use of python for coding.', 'chapters': 
[{'end': 126.968, 'start': 0.444, 'title': 'Practical deep learning for coders', 'summary': 'Introduces practical deep learning for coders, covering the importance of having a gpu for running jupyter notebooks, basic jupyter notebook usage, and the use of python for coding.', 'duration': 126.524, 'highlights': ['The importance of having a GPU for running Jupyter notebooks is emphasized, with the recommendation to ensure access to a Jupyter notebook before starting the real lesson one.', 'Basic Jupyter notebook usage, including adding one and one together, and using keyboard shortcuts, is demonstrated to aid learners in understanding its functionality.', 'The use of Python for coding in Jupyter notebook is highlighted, with the majority of learners having prior experience with Python, making it a suitable language for the course.', "The reassurance that Python is easy to pick up is mentioned, making it accessible to those who haven't used Python before."]}], 'duration': 126.524, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI444.jpg', 'highlights': ['The importance of having a GPU for running Jupyter notebooks is emphasized, with the recommendation to ensure access to a Jupyter notebook before starting the real lesson one.', 'The use of Python for coding in Jupyter notebook is highlighted, with the majority of learners having prior experience with Python, making it a suitable language for the course.', 'Basic Jupyter notebook usage, including adding one and one together, and using keyboard shortcuts, is demonstrated to aid learners in understanding its functionality.', "The reassurance that Python is easy to pick up is mentioned, making it accessible to those who haven't used Python before."]}, {'end': 619.958, 'segs': [{'end': 186.446, 'src': 'embed', 'start': 150.548, 'weight': 0, 'content': [{'end': 156.47, 'text': 'You can go back after this and make sure that you can get this running using the 
information in course.', 'start': 150.548, 'duration': 5.922}, {'end': 166.895, 'text': 'v3.faster.ai. Okay, okay, Okay.', 'start': 156.47, 'duration': 10.425}, {'end': 175.737, 'text': 'so a Jupyter notebook is really interesting device for a data scientist,', 'start': 166.895, 'duration': 8.842}, {'end': 186.446, 'text': 'because it kind of lets you run interactive experiments and it lets us give you not just a static piece of information,', 'start': 175.737, 'duration': 10.709}], 'summary': 'Jupyter notebook allows data scientists to run interactive experiments and access dynamic information.', 'duration': 35.898, 'max_score': 150.548, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI150548.jpg'}, {'end': 256.942, 'src': 'embed', 'start': 208.911, 'weight': 2, 'content': [{'end': 218.216, 'text': "First of all, it works pretty well, just to watch a lesson and to end, Don't try and follow along,", 'start': 208.911, 'duration': 9.305}, {'end': 220.977, 'text': "because it's not really designed to go at a speed where you can follow along.", 'start': 218.216, 'duration': 2.761}, {'end': 228.595, 'text': "it's designed to be something where you, just Taking the information, you get a general sense of all of the pieces, how it all fits together right?", 'start': 220.977, 'duration': 7.618}, {'end': 237.62, 'text': 'And then you can go back and go through it more slowly, pausing on in the video And trying things out,', 'start': 229.216, 'duration': 8.404}, {'end': 245.444, 'text': "making sure that you can do the things that I'm doing And that you can try and extend them to do it things in your own way.", 'start': 237.62, 'duration': 7.824}, {'end': 251.947, 'text': "okay, so don't worry if things are zipping along Faster than you can do them.", 'start': 245.444, 'duration': 6.503}, {'end': 256.942, 'text': "That's normal and Also, don't try and stop and understand everything the first time.", 'start': 252.067, 
'duration': 4.875}], 'summary': 'Video lessons are designed for general understanding, not for following along. encourages pausing and trying things out at your own pace.', 'duration': 48.031, 'max_score': 208.911, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI208911.jpg'}, {'end': 322.47, 'src': 'embed', 'start': 285.045, 'weight': 4, 'content': [{'end': 288.348, 'text': "And we don't just mean do, we mean do at a very high level.", 'start': 285.045, 'duration': 3.303}, {'end': 291.592, 'text': 'We mean world-class, practitioner-level deep learning.', 'start': 288.689, 'duration': 2.903}, {'end': 305.874, 'text': 'Your main place to be looking for things is coursev3.fast.ai, where you can find out how to get a GPU, other information,', 'start': 295.205, 'duration': 10.669}, {'end': 309.377, 'text': 'and you can also access our forums.', 'start': 305.874, 'duration': 3.503}, {'end': 322.47, 'text': "You can also access our forums, and on our forums you'll find things like how do you build a Deep learning box yourself,", 'start': 311.759, 'duration': 10.711}], 'summary': 'Access world-class deep learning resources at coursev3.fast.ai.', 'duration': 37.425, 'max_score': 285.045, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI285045.jpg'}, {'end': 390.388, 'src': 'embed', 'start': 367.078, 'weight': 1, 'content': [{'end': 376.18, 'text': "So I think that's a good practical, like, can you actually train a predictive model that predicts things? 
pretty important aspect of data science.", 'start': 367.078, 'duration': 9.102}, {'end': 382.424, 'text': 'I then founded a company called Analytic, which was the first kind of medical deep learning company.', 'start': 377.261, 'duration': 5.163}, {'end': 390.388, 'text': "Nowadays, I'm on the faculty at University of San Francisco and also co-founder with Rachel of Fast.ai.", 'start': 384.365, 'duration': 6.023}], 'summary': 'Founded first medical deep learning company and co-founded fast.ai', 'duration': 23.31, 'max_score': 367.078, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI367078.jpg'}, {'end': 518.049, 'src': 'embed', 'start': 486.117, 'weight': 3, 'content': [{'end': 487.938, 'text': "So there's lots of different ways you can do this,", 'start': 486.117, 'duration': 1.821}, {'end': 494.52, 'text': 'but if you follow along with this kind of 10 hours a week or so approach for the seven weeks by the end,', 'start': 487.938, 'duration': 6.582}, {'end': 500.063, 'text': 'You will be able to build an image classification Model on pictures that you choose.', 'start': 494.52, 'duration': 5.543}, {'end': 502.023, 'text': 'that will work at a world-class level.', 'start': 500.063, 'duration': 1.96}, {'end': 508.646, 'text': "You'll be able to classify text again using whatever data sets you're interested in.", 'start': 502.523, 'duration': 6.123}, {'end': 514.285, 'text': "you'll be able to make predictions of kind of commercial applications like sales.", 'start': 508.646, 'duration': 5.639}, {'end': 518.049, 'text': "you'll be able to build recommendation systems such as the one used by Netflix.", 'start': 514.285, 'duration': 3.764}], 'summary': 'Spend 10 hours a week for 7 weeks to build world-class image classification model, classify text, make predictions for commercial applications, and build recommendation systems.', 'duration': 31.932, 'max_score': 486.117, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI486117.jpg'}, {'end': 558.83, 'src': 'embed', 'start': 533.249, 'weight': 5, 'content': [{'end': 540.141, 'text': 'the prerequisite here is literally one year of coding and high school math,', 'start': 533.249, 'duration': 6.892}, {'end': 543.987, 'text': 'but we have thousands of students now who have done this and shown it to be true.', 'start': 540.141, 'duration': 3.846}, {'end': 552.166, 'text': 'You will probably hear a lot of naysayers less now than a couple of years ago than we started,', 'start': 546.363, 'duration': 5.803}, {'end': 558.83, 'text': "but a lot of naysayers telling you that you can't do it, or that you shouldn't be doing it, or that deep learning's got all these problems.", 'start': 552.166, 'duration': 6.664}], 'summary': "Thousands of students with one year of coding and high school math have succeeded, despite naysayers' doubts.", 'duration': 25.581, 'max_score': 533.249, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI533249.jpg'}], 'start': 126.968, 'title': 'Interactive data science and practical deep learning', 'summary': "Discusses using jupyter notebooks for interactive data science experiments, providing guidance and emphasizing the potential for achieving world-class deep learning proficiency. 
it also introduces the instructor's background and experience in machine learning, outlines the course structure, and emphasizes that students with one year of coding experience and high school math can achieve world-class results in various applications, challenging common misconceptions about deep learning.", 'chapters': [{'end': 322.47, 'start': 126.968, 'title': 'Interactive data science with jupyter notebooks', 'summary': 'Discusses the use of jupyter notebooks for interactive data science experiments, providing guidance on how to effectively utilize the material and emphasizing the potential for achieving world-class deep learning proficiency, directing learners to coursev3.fast.ai.', 'duration': 195.502, 'highlights': ['Jupyter notebooks enable interactive data science experiments, allowing users to not just access static information but also interactively experiment with it, based on three years of experience with course learners.', 'The material is designed for learners to watch the lesson first to gain a general understanding and then revisit it to follow along more slowly, pausing the video to try out the concepts and extend them.', 'The chapter emphasizes the potential for learners to achieve world-class, practitioner-level deep learning proficiency regardless of their background, directing them to coursev3.fast.ai for further resources and access to forums.', "Encouragement for learners not to worry if the pace is too fast or if they don't understand everything the first time, as the lessons become faster and more difficult over time."]}, {'end': 619.958, 'start': 322.47, 'title': 'Practical deep learning for coders', 'summary': "Introduces the instructor's background and experience in machine learning, outlines the course structure, and emphasizes that students with one year of coding experience and high school math can achieve world-class results in image classification, text classification, commercial predictions, and recommendation systems, 
challenging common misconceptions about deep learning.', 'duration': 297.488, 'highlights': ['The instructor has over 25 years of experience in machine learning, including being the number one ranked contestant in Kaggle competitions globally, and co-founding Fast.ai and Enlitic, a medical deep learning company.', 'The course consists of seven two-hour lessons, with an additional eight to ten hours of homework per week, totaling approximately 70 to 80 hours of work, and promises students the ability to build world-class models in image classification, text classification, commercial predictions, and recommendation systems.', 'Despite common misconceptions, the course only requires one year of coding experience and high school math, and the instructor challenges claims about deep learning, emphasizing its interpretability, minimal data requirement for practical applications, and wide range of applications beyond vision, all achievable without needing a PhD or expensive hardware.']}], 'duration': 492.99, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI126968.jpg', 'highlights': ['Jupyter notebooks enable interactive data science experiments, allowing users to not just access static information but also interactively experiment with it, based on three years of experience with course learners.', 'The instructor has over 25 years of experience in machine learning, including being the number one ranked contestant in Kaggle competitions globally, and co-founding Fast.ai and Enlitic, a medical deep learning company.', 'The material is designed for learners to watch the lesson first to gain a general understanding and then revisit it to follow along more slowly, pausing the video to try out the concepts and extend them.', 'The course consists of seven two-hour lessons, with an additional eight to ten hours of homework per week, totaling approximately 70 to 80 hours of work, and promises students the ability to build 
world-class models in image classification, text classification, commercial predictions, and recommendation systems.', 'The chapter emphasizes the potential for learners to achieve world-class, practitioner-level deep learning proficiency regardless of their background, directing them to coursev3.fast.ai for further resources and access to forums.', 'Despite common misconceptions, the course only requires one year of coding experience and high school math, and the instructor challenges claims about deep learning, emphasizing its interpretability, minimal data requirement for practical applications, and wide range of applications beyond vision, all achievable without needing a PhD or expensive hardware.', "Encouragement for learners not to worry if the pace is too fast or if they don't understand everything the first time, as the lessons become faster and more difficult over time."]}, {'end': 1234.156, 'segs': [{'end': 657.959, 'src': 'embed', 'start': 635.741, 'weight': 0, 'content': [{'end': 647.654, 'text': "which is he downloaded 30 images of people playing cricket and people playing baseball and ran the code you'll see today and built a nearly perfect classifier of which is which", 'start': 635.741, 'duration': 11.913}, {'end': 656.258, 'text': "So it's kind of stuff that you can build with some fun hobby examples like this, or you can try stuff, as we'll see in the workplace,", 'start': 648.594, 'duration': 7.664}, {'end': 657.959, 'text': 'that could be of direct commercial value.', 'start': 656.258, 'duration': 1.701}], 'summary': 'Downloaded 30 images of cricket and baseball, built nearly perfect classifier.', 'duration': 22.218, 'max_score': 635.741, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI635741.jpg'}, {'end': 740.349, 'src': 'embed', 'start': 717.829, 'weight': 1, 'content': [{'end': 728.114, 'text': "Unfortunately, it's still a situation where people who are good practitioners Have a 
really good feel for how to work with the code and how to work with the data,", 'start': 717.829, 'duration': 10.285}, {'end': 729.915, 'text': 'and you can only get that through experience.', 'start': 728.114, 'duration': 1.801}, {'end': 740.349, 'text': 'And so the best way to get that feel of how to get good models is to create lots of models, do lots of coding and study them carefully.', 'start': 730.842, 'duration': 9.507}], 'summary': 'Experience is crucial for creating good models, involving creating lots of models and coding.', 'duration': 22.52, 'max_score': 717.829, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI717829.jpg'}, {'end': 802.828, 'src': 'embed', 'start': 772.084, 'weight': 2, 'content': [{'end': 777.905, 'text': "everybody will know that you're not a real deep learning practitioner, because real deep learning practitioners know the keyboard shortcuts,", 'start': 772.084, 'duration': 5.821}, {'end': 780.205, 'text': 'and The keyboard shortcut is shift enter.', 'start': 777.905, 'duration': 2.3}, {'end': 787.046, 'text': "given how often you have to run a cell, Don't be Going all the way up here finding it clicking it.", 'start': 780.205, 'duration': 6.841}, {'end': 787.727, 'text': 'just shift, enter.', 'start': 787.046, 'duration': 0.681}, {'end': 789.287, 'text': 'Okay, so type type type shift enter.', 'start': 787.767, 'duration': 1.52}, {'end': 796.243, 'text': 'type type shift enter Up and down to move around to pick something to run, shift enter to run it.', 'start': 789.287, 'duration': 6.956}, {'end': 802.828, 'text': "So we're going to go through this quickly and then later on we're going to go back over it more carefully.", 'start': 796.243, 'duration': 6.585}], 'summary': 'Real deep learning practitioners use keyboard shortcuts, like shift enter, to run cells efficiently.', 'duration': 30.744, 'max_score': 772.084, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI772084.jpg'}, {'end': 966.69, 'src': 'embed', 'start': 935.41, 'weight': 3, 'content': [{'end': 940.251, 'text': 'natural language text, tabular data, and collaborative filtering.', 'start': 935.41, 'duration': 4.841}, {'end': 943.732, 'text': "And we're going to see lots of examples of all of those during the seven weeks.", 'start': 940.731, 'duration': 3.001}, {'end': 945.132, 'text': "So we're going to be doing some computer vision.", 'start': 943.752, 'duration': 1.38}, {'end': 954.034, 'text': "At this point, if you are a Python software engineer, you are probably feeling sick because you've seen me.", 'start': 946.452, 'duration': 7.582}, {'end': 958.295, 'text': "go import star, which is something that you've all been told to never, ever do.", 'start': 954.034, 'duration': 4.261}, {'end': 966.69, 'text': "And there's very good reasons to not use import star in standard production code with most libraries.", 'start': 959.527, 'duration': 7.163}], 'summary': 'Seven-week course covers nlp, tabular data, collaborative filtering, and computer vision with examples and warnings about using import star.', 'duration': 31.28, 'max_score': 935.41, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI935410.jpg'}, {'end': 1137.062, 'src': 'embed', 'start': 1106.748, 'weight': 4, 'content': [{'end': 1110.892, 'text': "So we're going to be starting with an academic dataset called the PET dataset.", 'start': 1106.748, 'duration': 4.144}, {'end': 1117.069, 'text': "The other kind of data set we'll be using during the course is data sets from the Kaggle competitions platform.", 'start': 1111.545, 'duration': 5.524}, {'end': 1124.834, 'text': 'Both academic data sets and Kaggle data sets are interesting for us, particularly because they provide strong baselines.', 'start': 1117.689, 'duration': 7.145}, {'end': 1127.916, 'text': "That 
is to say, you want to know if you're doing a good job.", 'start': 1125.234, 'duration': 2.682}, {'end': 1132.199, 'text': 'So, with Kaggle data sets that come from a competition,', 'start': 1128.576, 'duration': 3.623}, {'end': 1137.062, 'text': 'you can actually submit your results to Kaggle and see how well would you have gone in that competition.', 'start': 1132.199, 'duration': 4.863}], 'summary': 'Using pet and kaggle datasets for strong baselines and competition performance evaluation.', 'duration': 30.314, 'max_score': 1106.748, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI1106748.jpg'}, {'end': 1212.658, 'src': 'embed', 'start': 1191.359, 'weight': 5, 'content': [{'end': 1200.127, 'text': "The pet data set is going to ask us to distinguish between 37 different categories of dog breed and cat breed, So that's really hard.", 'start': 1191.359, 'duration': 8.768}, {'end': 1209.435, 'text': "in fact, Every course until this one, we've used a different data set, Which is one where you just have to decide is something a dog or is it a cat??", 'start': 1200.127, 'duration': 9.308}, {'end': 1212.658, 'text': "so you've got a 50-50 chance right away, right?", 'start': 1210.096, 'duration': 2.562}], 'summary': 'Task is to distinguish 37 dog and cat breed categories, previous dataset had 50-50 chance.', 'duration': 21.299, 'max_score': 1191.359, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI1191359.jpg'}], 'start': 619.958, 'title': 'Practical deep learning applications and fast.ai library', 'summary': 'Covers practical applications of deep learning, with a focus on building a nearly perfect image classifier for cricket and baseball. 
it also introduces the fast.ai library and jupyter notebook, emphasizing keyboard shortcuts and desired model performance benchmarks.', 'chapters': [{'end': 740.349, 'start': 619.958, 'title': 'Practical deep learning applications', 'summary': 'Discusses the practical application of deep learning, where a student successfully built a nearly perfect classifier for images of people playing cricket and baseball and emphasizes the importance of hands-on coding and experience in developing good models.', 'duration': 120.391, 'highlights': ['A student downloaded 30 images of people playing cricket and baseball and built a nearly perfect classifier with the code taught, demonstrating the practical application of deep learning.', 'The chapter emphasizes the importance of hands-on coding and experience in developing good models, highlighting the practical approach of learning to build useful things rather than focusing solely on theory.', 'It is noted that the approach to learning deep learning is different from traditional academic courses, where the emphasis is on getting hands dirty with coding and building useful things rather than starting with extensive theory.']}, {'end': 1234.156, 'start': 740.75, 'title': 'Introduction to fast.ai library and jupyter notebook', 'summary': 'Introduces the usage of jupyter notebook and the fast.ai library, emphasizing the importance of keyboard shortcuts and showcasing the modularity and flexibility of the library. it also discusses the types of datasets used in the course and the desired performance benchmarks for models.', 'duration': 493.406, 'highlights': ['Jupyter Notebook provides a great way to study and run code efficiently using keyboard shortcuts like shift enter. 
Jupyter Notebook offers efficient code execution using keyboard shortcuts like shift enter, enhancing productivity and demonstrating expertise in deep learning practices.', 'The Fast.ai library, built on top of PyTorch, enables a wide range of capabilities for deep learning tasks and supports various applications such as natural language, text, tabular data, and collaborative filtering. The Fast.ai library, leveraging PyTorch, supports diverse applications including natural language processing, text analysis, tabular data processing, and collaborative filtering, contributing to the modularity and flexibility of the library.', 'Academic datasets and Kaggle competition datasets are utilized, providing challenging data with strong baselines for model evaluation and performance benchmarking. The usage of academic datasets and Kaggle competition datasets offers challenging data with strong performance baselines, enabling model evaluation and benchmarking for achieving top-tier performance.', 'Transition to more challenging dataset involving 37 categories of dog and cat breeds reflects the rapid advancements in deep learning, necessitating more complex tasks to achieve higher accuracy levels. 
The transition to a more complex dataset with 37 categories of dog and cat breeds signifies the rapid progress in deep learning, requiring more challenging tasks to achieve higher accuracy levels compared to previous datasets.']}], 'duration': 614.198, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI619958.jpg', 'highlights': ['A student built a nearly perfect classifier with 30 images, demonstrating practical application of deep learning.', 'Emphasizes hands-on coding and experience in developing good models over theory.', 'Jupyter Notebook offers efficient code execution using keyboard shortcuts like shift enter.', 'Fast.ai library supports diverse applications including natural language processing, text analysis, tabular data processing, and collaborative filtering.', 'Usage of academic and Kaggle competition datasets offers challenging data with strong performance baselines.', 'Transition to a more complex dataset with 37 categories of dog and cat breeds signifies rapid progress in deep learning.']}, {'end': 2793.385, 'segs': [{'end': 1297.243, 'src': 'embed', 'start': 1270.662, 'weight': 5, 'content': [{'end': 1279.446, 'text': "We're going to be using this function called untar data, which will download it automatically and will untar it automatically.", 'start': 1270.662, 'duration': 8.784}, {'end': 1284.168, 'text': 'AWS has been kind enough to give us lots of space and bandwidth for these datasets.', 'start': 1279.446, 'duration': 4.722}, {'end': 1287.35, 'text': "So they'll download super quickly for you.", 'start': 1284.208, 'duration': 3.142}, {'end': 1291.772, 'text': 'And so the first question, then, would be how do I know what untar data is?', 'start': 1287.35, 'duration': 4.422}, {'end': 1297.243, 'text': 'So you can just type help and you will find out.', 'start': 1293.941, 'duration': 3.302}], 'summary': 'Aws provides ample space and bandwidth for datasets, enabling quick downloads and 
automatic untarring using the function untar data.', 'duration': 26.581, 'max_score': 1270.662, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI1270662.jpg'}, {'end': 1507.796, 'src': 'embed', 'start': 1479.906, 'weight': 8, 'content': [{'end': 1482.407, 'text': 'Path objects are much better to use than strings.', 'start': 1479.906, 'duration': 2.501}, {'end': 1486.613, 'text': "Let's you basically create sub paths like this.", 'start': 1483.232, 'duration': 3.381}, {'end': 1492.334, 'text': "doesn't matter if you're on Windows, Linux, Mac, It's always going to work exactly the same way.", 'start': 1486.613, 'duration': 5.721}, {'end': 1497.995, 'text': "So here's a path to the images in that data set.", 'start': 1492.334, 'duration': 5.661}, {'end': 1503.456, 'text': "All right, so if you're starting with a brand new data set, try to do some deep learning on it.", 'start': 1497.995, 'duration': 5.461}, {'end': 1503.896, 'text': 'what do you do??', 'start': 1503.456, 'duration': 0.44}, {'end': 1507.796, 'text': "Well, the first thing you would want to do is probably see what's in there.", 'start': 1504.556, 'duration': 3.24}], 'summary': 'Use path objects for cross-platform compatibility. 
Labeling from file names: the label we want (the breed) is embedded in each file name, so we create a regular expression that extracts the label from that text. For those of you not familiar with regular expressions: they are a super useful tool, and it's worth spending some time figuring out how and why this particular expression extracts the label. With that in hand, a factory method lets us basically say: I've got this path containing images, here is the list of file names, and here is how to turn a file name into a label.
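In this dataset the breed name is everything between the last '/' and a trailing '_<number>.jpg'. A pattern of that shape, in plain Python (the exact expression in the lesson notebook may differ slightly, so treat this as a sketch):

```python
import re

# Capture everything after the last '/' and before '_<digits>.jpg'.
pat = re.compile(r'/([^/]+)_\d+.jpg$')

def label_from_fname(fname: str) -> str:
    """Extract the breed label from a pets-style file name."""
    m = pat.search(fname)
    if m is None:
        raise ValueError(f"no label found in {fname!r}")
    return m.group(1)

print(label_from_fname("images/german_shorthaired_102.jpg"))  # german_shorthaired
```

The greedy `[^/]+` grabs as much as it can, then backtracks just enough to leave `_<digits>.jpg` for the rest of the pattern, which is exactly why the trailing index number is excluded from the label.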
Normalizing the images (something we'll learn much more about later in the course): in short, pixel values start out ranging from 0 to 255 in each of the three channels, red, green, and blue. Some channels might tend to be really bright and some not bright at all, and some might vary a lot while others vary very little. It helps to train a deep learning model if each channel instead has a mean of zero and a standard deviation of one, which is what normalization does.
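The per-channel normalization described above can be computed directly. The mean and standard deviation values below are the widely used ImageNet channel statistics, which is an assumption here (they match common practice when working with ImageNet-pretrained models):

```python
# Standard ImageNet channel statistics (R, G, B), on the [0, 1] scale.
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Map raw 0-255 pixel values to roughly zero-mean, unit-std values."""
    scaled = [v / 255.0 for v in rgb]  # first rescale to [0, 1]
    return [(s - m) / d
            for s, m, d in zip(scaled, IMAGENET_MEAN, IMAGENET_STD)]
```

A mid-grey pixel lands near zero after normalization, while a fully bright or fully dark pixel ends up more than two standard deviations from the mean.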
Choosing an architecture: that's about as much as you need to know to be a pretty good practitioner for now. There are two variants of one architecture that work pretty well, ResNet-34 and ResNet-50; start with the smaller one and see if it's good enough. That is all the information we need to create a convolutional neural network learner, plus one other thing we'll give it: a list of metrics.

Using pre-trained weights: we can download weights from a model that has already been trained, so that we don't start with a model that knows nothing about anything. Instead, we start with a model that knows how to recognize the thousand categories of things in ImageNet. Probably not all 37 of these pet breeds are in ImageNet, but there are certainly some kinds of dog and some kinds of cat, so this pre-trained model already knows quite a bit about what pets look like, and it certainly knows a lot about what animals and photos look like in general.
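The metric mentioned above, the error rate, is simply the fraction of examples the model gets wrong (it is reported on the validation set). A minimal sketch:

```python
def error_rate(preds, targets):
    """Fraction of predictions that don't match their targets."""
    assert len(preds) == len(targets)
    wrong = sum(p != t for p, t in zip(preds, targets))
    return wrong / len(targets)

print(error_rate(["cat", "dog", "cat"], ["cat", "dog", "dog"]))  # one wrong of three
```

A 6% error rate, as we'll see shortly, means 94 out of every 100 validation images were classified correctly.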
This technique is called transfer learning: taking a model that already knows how to do something pretty well and making it do your thing really well. We take a pre-trained model and then fit it so that, instead of predicting the 1,000 categories of ImageNet with the ImageNet data, it predicts the 37 categories of pets using your pet data. It turns out that by doing this you can train models in a small fraction of the time, with a small fraction of the data, of regular model training, potentially thousands of times less.

Because the model starts out so capable, we have to make sure it doesn't just memorize these particular photos; we have to make sure we don't overfit. The way we do that is with a validation set: a set of images that your model does not get to look at during training. Metrics such as the error rate are printed out automatically using the validation set, so they always reflect performance on unseen images.
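Holding out a validation set is straightforward: set aside a random slice of the data before training and never train on it. A sketch of the idea (not fastai's implementation; the function name and default split are illustrative):

```python
import random

def train_valid_split(items, valid_pct=0.2, seed=42):
    """Hold out a random valid_pct of items as a validation set
    the model never trains on."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = items[:]        # copy, so the caller's list is untouched
    rng.shuffle(shuffled)
    n_valid = int(len(shuffled) * valid_pct)
    return shuffled[n_valid:], shuffled[:n_valid]
```

Metrics computed on the held-out slice tell you whether the model generalizes, rather than whether it memorized the training photos.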
How do we fit the model? The paper describing the best approach came out only a few months ago, less than a year, and it turned out to be dramatically better, both more accurate and faster, than any previous approach. I don't want to teach you how to do 2017 deep learning; in 2018 the best way to fit models is to use something called one cycle. We'll learn all about it, but for now just know that you should type learn.fit_one_cycle.
To recap this section of the lesson:
- Fine-grained classification means distinguishing between very similar categories, such as specific breeds of pet, and untar_data downloads and extracts the dataset automatically (hosted by AWS, so downloads are fast).
- Path objects handle file paths more conveniently than strings and work identically across platforms; regular expressions extract labels from file names; normalization standardizes pixel values so each channel has a mean of zero and a standard deviation of one.
- A convolutional neural network learner is built from the data plus an architecture such as ResNet-34 or ResNet-50, together with a list of metrics.
- Transfer learning lets you train in a small fraction of the time, with a small fraction of the data, of training from scratch, by starting from a model pre-trained on ImageNet's thousand categories, while a validation set of unseen images guards against overfitting.
- One-cycle training reached 94% accuracy in one minute and 56 seconds with just three lines of code, against a 2012 state of the art of 59% accuracy.
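The one-cycle policy mentioned above varies the learning rate up and then back down over the course of training. The following is a deliberately simplified sketch of such a schedule; the real policy as used in practice also anneals momentum and shapes the curve differently, details we'll get to later in the course:

```python
def one_cycle_lr(step, total_steps, max_lr=3e-3, div=25.0):
    """Simplified one-cycle learning-rate schedule: ramp linearly from
    max_lr/div up to max_lr over the first half of training, then back
    down over the second half. A sketch of the idea only."""
    min_lr = max_lr / div
    half = total_steps / 2
    if step <= half:
        frac = step / half                # 0 -> 1 while warming up
        return min_lr + frac * (max_lr - min_lr)
    frac = (step - half) / half           # 0 -> 1 while cooling down
    return max_lr - frac * (max_lr - min_lr)
```

The low starting rate lets training stabilize, the high middle rate speeds things up and acts as a regularizer, and the final descent lets the weights settle.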
A six percent error rate certainly sounds pretty impressive for something that can recognize different dog breeds and cat breeds, but we don't really know why it works yet. We will, and that's okay.

In terms of getting the most out of this course: we very regularly hear the same basic feedback after the course is finished (this is literally copied and pasted from the forum): "I should have really run the code. I should have spent the majority of my time on the actual code in the notebooks, running it, seeing what goes in and seeing what comes out." So your most important skill to practice is understanding what goes into your model and what comes out, and we've already seen one example of looking at what goes in: data.show_batch.
The fastai library is pretty new, but it's already getting an extraordinary amount of traction: all of the major cloud providers either support it or are about to, and a lot of researchers are starting to use it. It's making a lot of things a lot easier, and it's also making new things possible. Really understanding the fastai software is something that will take you a long way, and the best way to really understand it is by using the fastai documentation, which we'll learn more about shortly.

How does it compare? There's really only one other major piece of software like fastai.
If you look, for example, at last year's course exercise, dogs versus cats, fastai gets you much more accurate results than that alternative (Keras): less than half the error on a validation set, training time less than half as long, and about a sixth of the lines of code.

Our work was recently featured in Wired, describing a new breakthrough in natural language processing that people are calling the "ImageNet moment": we set a new state-of-the-art result in text classification, which OpenAI then built on top of, with more compute, more data, and different tasks, to take it even further.
It really shows that somebody with no computer science or math background at all can now become one of the world's top deep learning researchers, doing very valuable work. Another example from our most recent course is Christine Payne, who is now at OpenAI; you can find her post and actually listen to her music samples.

This is the level you can get to with the content you'll cover over these seven weeks and with this software: right at the cutting edge, in areas you might find surprising. For example, I helped a team of some of our students and collaborators break the world record for training on the ImageNet dataset mentioned earlier.
To recap this section: with just three or four lines of code we trained a model to a 6% error rate on breed recognition, surpassing the 2012 state of the art, and the most important habit for learning is to actually run the code and study what goes in and what comes out. The fastai library is gaining traction with major cloud providers and researchers, compares favorably with other frameworks such as Keras on accuracy, training time, and lines of code, and has contributed to breakthroughs in natural language processing, from state-of-the-art text classification to models for semantic code search; its documentation is the best way to learn it in depth.

The course has also had a transformative impact on its students. Sarah Hooker, a former economics student with no background in coding or math, became a Google Brain researcher and is setting up Google Brain's first deep learning research center in Africa. Melissa Fabros, an English literature PhD, used skills from the course to help a micro-lending organization win a $1 million AI challenge, addressing bias in computer vision software and helping visually impaired users. An alumnus working as a software engineer at Splunk developed an algorithm after the course that proved highly effective at identifying fraud. Students collaborating through the fast.ai forum have achieved state-of-the-art NLP results across multiple languages, and a team of students and collaborators broke the world record for the fastest training on ImageNet.
So if people on the forum are talking about, say, crop yield analysis and you're a farmer who thinks you have something to add, please mention it, even if you're not sure it's exactly relevant. Just get involved; remember, everybody else on the forum started out intimidated too. We all start out not knowing things, so just get out there and try it.

If you're wondering which architecture to pick: ResNet. Just Google "ResNet".
ResNet is good enough. There are other architectures; the main reason you might want a different one is edge computing, if you want to create a model that's going to sit on somebody's mobile phone. Having said that, even there, most of the time I reckon the best way to get a model onto somebody's phone is to run it on your server and have the mobile app talk to it.

If you've ever done anything like linear regression or logistic regression, you'll be familiar with coefficients: we've basically found a set of coefficients and parameters that work pretty well, and it took us a minute and 56 seconds to find them. So if we want to do some more playing around and come back later, we should save those weights, so we can save that minute and 56 seconds: just call learn.save and give it a name. It puts the weights in a models subdirectory in the same place the data came from, so weights for different models or data bunches from different datasets are all kept separate; don't worry about it.
a model subdirectory in the same place.", 'start': 4107.908, 'duration': 4.324}, {'end': 4113.332, 'text': 'the data came from.', 'start': 4112.232, 'duration': 1.1}, {'end': 4119.096, 'text': "So if you say different models or different data bunches from different data sets, they'll all be kept separate.", 'start': 4113.332, 'duration': 5.764}, {'end': 4123.08, 'text': "So don't worry about it, All right.", 'start': 4119.256, 'duration': 3.824}, {'end': 4127.943, 'text': 'so we talked about how the most important things are, how to learn what goes into your model, what comes out.', 'start': 4123.08, 'duration': 4.863}, {'end': 4130.404, 'text': "We've seen one way of seeing what goes in now.", 'start': 4127.943, 'duration': 2.461}, {'end': 4131.886, 'text': "Let's see what comes out.", 'start': 4130.444, 'duration': 1.442}, {'end': 4134.207, 'text': 'Yes, This is the other thing you need to get really good at.', 'start': 4132.307, 'duration': 1.9}, {'end': 4137.39, 'text': 'So to see what comes out.', 'start': 4135.649, 'duration': 1.741}, {'end': 4144.756, 'text': "we can use this class called classification interpretation, and We're going to use this factory method from learner.", 'start': 4137.39, 'duration': 7.366}, {'end': 4146.617, 'text': 'So we pass in a learn object.', 'start': 4145.156, 'duration': 1.461}, {'end': 4152.761, 'text': "So remember, a learn object knows two things what's your data and What is your model?", 'start': 4146.636, 'duration': 6.125}, {'end': 4160.087, 'text': "It's now not just an architecture, but it's actually a trained model inside there, And that's all the information we need to interpret that model.", 'start': 4153.221, 'duration': 6.866}, {'end': 4164.871, 'text': "So it's this pass in the learner and we now have a classification interpretation object.", 'start': 4160.147, 'duration': 4.724}, {'end': 4174.037, 'text': 'So one of the things we can do, and perhaps the most useful things to do, is called plot top losses.', 
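As a sketch of where that earlier learn.save('stage-1') call puts the weights, assuming the fastai v1 convention of a models subdirectory next to the data (the dataset path below is just an example):

```python
from pathlib import PurePosixPath

def saved_model_path(data_path, name):
    """Mimic where learn.save(name) puts the weights: a 'models'
    subdirectory under the path the data came from."""
    return PurePosixPath(data_path) / "models" / f"{name}.pth"

# e.g. after learn.save('stage-1') on the pets dataset:
print(saved_model_path("data/oxford-iiit-pet", "stage-1"))
# → data/oxford-iiit-pet/models/stage-1.pth
```

Because the weights land under each dataset's own path, models trained on different data bunches never collide.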
We're going to be learning a lot about this idea of loss functions shortly, but in short, a loss function is something that tells you how good your prediction was. Specifically, if you predicted one class of cat with great confidence (you said, "I am very, very sure that this is a Birman") but you were actually wrong, then that's going to have a high loss, because you were very confident about the wrong answer. That's what it basically means to have a high loss.
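A minimal sketch of that idea using cross-entropy, the loss function used for classification here; the probabilities are made up for illustration:

```python
import math

def cross_entropy(p_correct):
    """Cross-entropy loss given the probability assigned to the true class."""
    return -math.log(p_correct)

confident_right = cross_entropy(0.99)  # very sure, and correct: tiny loss
unsure          = cross_entropy(0.50)  # hedged guess: moderate loss
confident_wrong = cross_entropy(0.01)  # "very sure it's a Birman", but it isn't

# Being confidently wrong is punished far more than being unsure.
assert confident_wrong > unsure > confident_right
```

This is exactly why plot_top_losses is so useful: the top losses are the images the model was most confidently wrong about.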
When you click "show in docs", it pops up the documentation for that method or class or function. It starts out by showing the same information about what parameters it takes, along with the doc string, but then it tells you more. In this case, it tells me that the title of each image shows the prediction, the actual, the loss, and the probability that was predicted.

The other thing I'll mention is that if you're a somewhat experienced Python programmer, you'll find the source code of fastai really easy to read. We try to write everything in a small amount of code, much less than half a screen, generally four or five lines. If you click "source", you can jump straight to the source code. Here is plot_top_losses, and this is also a great way to find out how to use the fastai library, because you can see how every line of code in it uses the library itself. And this is my favorite named function in fastai, I am very proud of it: you can call most_confused, and most_confused will simply grab, out of the confusion matrix, the particular combinations of predicted and actual classes that it got wrong the most often.
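Those doc and source links rest on standard Python introspection; a rough sketch of the same idea using only the standard library (json.dumps is just a convenient target here, not part of fastai):

```python
import inspect
import json

# What "show in docs" surfaces: the parameters and the doc string.
print(inspect.signature(json.dumps))
print(inspect.getdoc(json.dumps).splitlines()[0])

# What the "source" link jumps to: the actual lines of code.
src = inspect.getsource(json.dumps)
print(src.splitlines()[0])
```

Reading a function's real source this way is often the quickest route to understanding how a library expects to be used.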
So in this case, the Staffordshire Bull Terrier was what it should have predicted, and instead it predicted an American Pit Bull Terrier, and so forth. It should have predicted a Siamese and actually predicted a Birman.
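What most_confused does can be sketched in plain Python; the (actual, predicted) pairs below are invented for illustration:

```python
from collections import Counter

def most_confused(actuals, preds):
    """Count the off-diagonal (actual, predicted) pairs of a confusion
    matrix, i.e. the mistakes, most frequent first."""
    wrong = Counter((a, p) for a, p in zip(actuals, preds) if a != p)
    return [(a, p, n) for (a, p), n in wrong.most_common()]

actuals = ["staffordshire_bull_terrier"] * 3 + ["siamese"] * 2 + ["beagle"]
preds   = ["american_pit_bull_terrier"] * 3 + ["birman"] * 2 + ["beagle"]
print(most_confused(actuals, preds))
# → [('staffordshire_bull_terrier', 'american_pit_bull_terrier', 3), ('siamese', 'birman', 2)]
```

Correct predictions (the beagle) never appear; only the confusions do, sorted by how often they happened.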
So what have we done so far? We added a few extra layers to the end of the pre-trained model, and we only trained those; we basically left most of the model exactly as it was. That's really fast, and if we're trying to build a model of something similar to what the original model was pre-trained on, in this case similar to the ImageNet data, that works pretty well. But what we really want to do is actually go back and train the whole model. This is why we pretty much always use this two-stage process.
So by default, when we call fit_one_cycle on a ConvLearner, it'll just fine-tune those few extra layers added to the end, and it'll run very fast; it'll basically never overfit. But to really get it good, you have to call unfreeze. unfreeze is the thing that says: please train the whole model. Then I can call fit_one_cycle again, and uh-oh, the error got much worse. Why?

They created a paper showing how you can visualize the layers of a convolutional neural network. We'll learn mathematically what the layers are shortly, but the basic idea is that your red, green and blue pixel values, numbers from 0 to 255, go into a simple computation, the first layer, and something comes out of that. The result of that then goes into a second layer.
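The freeze/unfreeze idea can be sketched without any deep learning library; the layer list and flags below are a toy stand-in for what fastai does to the model's parameter groups (in PyTorch the flag is literally called requires_grad):

```python
# Toy model: the pre-trained body is frozen, the new head is trainable.
body = [{"name": f"layer{i}", "requires_grad": False} for i in range(34)]
head = [{"name": "head", "requires_grad": True}]  # the few layers we added
model = body + head

def unfreeze(model):
    """Mark every layer as trainable, like learn.unfreeze()."""
    for layer in model:
        layer["requires_grad"] = True

print([l["name"] for l in model if l["requires_grad"]])
# → ['head']  (stage one: only the new layers get updated)

unfreeze(model)
print(sum(l["requires_grad"] for l in model))
# → 35  (stage two: the whole model gets updated)
```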
That goes to a third layer, and so forth. There can be up to a thousand layers in a neural network; ResNet-34 has 34 layers.

When we first fine-tuned that pre-trained model, we kept all of those layers, and we just trained a few more layers on top of all the sophisticated features that had already been created. Now, with unfreeze, we're going back and saying: let's change all of these. We'll start with them where they are.

And this is why our attempt to fine-tune the model didn't work: by default, it trains all the layers at the same speed. It'll update the layers representing diagonal lines and gradients just as much as it tries to update the layers that represent the exact specifics of what an eyeball looks like.
So we have to change that. To change it, we first need to go back to where we were before; we just broke the model, and it's much worse than it started out. If we go learn.load, that brings back the model we saved earlier; remember, we saved it as 'stage-1'. So let's load that back up, and our model is now back to where it was before we broke it.

Now let's run the learning rate finder. We'll learn about exactly what it is next week, but for now, just know that this is the thing that figures out the fastest rate I can train this neural network at without it going off the rails and blowing up. We can call learn.lr_find, and then learn.recorder.plot to plot the result. What this shows is a parameter we're going to learn all about, called the learning rate, which basically says how quickly we're updating the parameters in the model.
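The finder works by training one mini-batch at each of a series of geometrically increasing learning rates and recording the loss at each; a sketch of that schedule (the 1e-7 to 10 range mirrors fastai's defaults, and no real training happens here):

```python
def lr_schedule(start=1e-7, end=10.0, steps=100):
    """Geometrically increasing learning rates, as the LR finder uses."""
    ratio = (end / start) ** (1 / (steps - 1))
    return [start * ratio ** i for i in range(steps)]

lrs = lr_schedule()
# During lr_find, one mini-batch is trained at each of these rates and the
# loss is recorded; you then pick a rate well before the loss blows up.
print(f"{lrs[0]:.0e} {lrs[-1]:.1f}")  # → 1e-07 10.0
```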
You can see what happens as I increase the learning rate: one plot shows the learning rate itself, and the other shows the resulting loss. Once the learning rate gets past 10 to the negative 4 (1e-4), my loss gets worse. And in fact, if I press Shift-Tab here, I can check that the learning rate defaults to 0.003, which is right up where the loss gets worse. So you can see why our loss got worse: we're trying to fine-tune things now, so we can't use such a high learning rate.

Based on the learning rate finder, I tried to pick something well before it started getting worse, so I decided to pick 1e-6. But there's no point training all the layers at that rate, because we know the later layers worked just fine before, when we were training much more quickly at the default of 0.003. So what we can actually do is pass a range of learning rates to learn.fit_one_cycle. We do it with a Python keyword you may have come across before, called slice, which can take a start value and a stop value. What this says is: train the very first layers at a learning rate of 1e-6, the very last layers at a rate of 1e-4, and distribute all the other layers across learning rates between those two values, equally spread. We're going to see that in a lot more detail.

So we've gone down from a 6.1 percent error to 5.7 percent. That's about a 10 percent relative improvement, with another 58 seconds of training. I would say that for most people, most of the time, these two stages are enough to get a pretty much world-class model. You won't win a Kaggle competition with it, particularly because a lot of fast.ai alumni are now competing on Kaggle.
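How slice(1e-6, 1e-4) could be spread across layer groups can be sketched like this (fastai actually distributes the rates across its layer groups internally; the even geometric spread below is an illustration of the idea, not the library's exact code):

```python
def spread_lrs(start, stop, n_groups):
    """Spread learning rates geometrically from the earliest layer group
    (start) to the last one (stop), like slice(start, stop)."""
    if n_groups == 1:
        return [stop]
    ratio = (stop / start) ** (1 / (n_groups - 1))
    return [start * ratio ** i for i in range(n_groups)]

print(spread_lrs(1e-6, 1e-4, 3))
# → roughly [1e-06, 1e-05, 1e-04]: earliest layers slowest, last layers fastest
```

The early layers, which detect generic gradients and edges, barely move, while the later, more dataset-specific layers are allowed to change a hundred times faster.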
Depending on your GPU, it's very likely that when you try to run this you'll get an out-of-memory error, and that's because it's trying to do too many parameter updates at once for the amount of RAM you have. That's easily fixed: the ImageDataBunch constructor has a parameter at the end, bs, for batch size, and this basically says how many images you train at one time. If you run out of memory, just make it smaller.
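A smaller batch size doesn't mean less data, only more (smaller) steps per epoch; a quick sketch of that arithmetic (the dataset size below is just an example figure):

```python
import math

def batches_per_epoch(n_images, bs):
    """How many forward/backward passes one epoch takes at batch size bs."""
    return math.ceil(n_images / bs)

n = 5912                 # example training-set size
for bs in (64, 32, 16):  # e.g. halve bs each time you hit an OOM error
    print(bs, batches_per_epoch(n, bs))
```

Each image is still seen once per epoch; only the peak GPU memory per step shrinks.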
So model interpretation works both ways. What I want you to do this week is to run this notebook and make sure you can get through it. But then what I really want you to do is get your own image data set. Francisco, who I mentioned earlier (he started the language model thread and is now helping to TA the course), is putting together a guide that will show you how to download data from Google Images so you can create your own data set to play with.
The MNIST sample basically looks like this: I can go path.ls, and you can see it's got a training set and a validation set already. The people who put together this data set have already decided what they want you to use as a validation set. If you go (path/'train').ls, you'll see there's a folder called 3 and a folder called 7, and as you can see, it creates the labels just by using the folder names; we can train that to 99.55 percent accuracy. Another possibility, and for this MNIST sample I've got both, is that it might come with a CSV file.
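Labeling from folder names, as that first approach does, can be sketched with pathlib (the file paths here are invented examples in the MNIST sample's layout):

```python
from pathlib import PurePosixPath

def label_from_folder(path):
    """The label of an image is simply the name of its parent folder."""
    return PurePosixPath(path).parent.name

print(label_from_folder("mnist_sample/train/3/7463.png"))  # → 3
print(label_from_folder("mnist_sample/train/7/9102.png"))  # → 7
```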
That would look something like this for each file name', 'start': 5601.265, 'duration': 4.837}, {'end': 5607.524, 'text': "What's its label?", 'start': 5606.502, 'duration': 1.022}, {'end': 5613.172, 'text': "now? this case the labels aren't three or seven, They're zero or one, which is basically is it a seven or not?", 'start': 5607.524, 'duration': 5.648}, {'end': 5614.675, 'text': "So that's another possibility.", 'start': 5613.613, 'duration': 1.062}, {'end': 5623.439, 'text': "So if this is how your labels are, you can use from csv, And if it's called labels.csv, you don't even have to pass in a file name.", 'start': 5615.396, 'duration': 8.043}, {'end': 5628.42, 'text': "if it's called anything else, Then you can call pass in the csv labels file name.", 'start': 5623.439, 'duration': 4.981}, {'end': 5631.642, 'text': "Okay, so that's how you can use a csv Again, there it is.", 'start': 5628.781, 'duration': 2.861}, {'end': 5632.782, 'text': 'This is now.', 'start': 5632.382, 'duration': 0.4}, {'end': 5633.742, 'text': 'is it a 7 or not??', 'start': 5632.782, 'duration': 0.96}, {'end': 5640.164, 'text': 'Um, Another possibility, and then you can call data.classes to see what it found.', 'start': 5635.903, 'duration': 4.261}, {'end': 5644.366, 'text': "another possibility is, as we've seen is, you've got paths that look like this.", 'start': 5640.164, 'duration': 4.202}, {'end': 5647.405, 'text': 'so in this case this is the same thing.', 'start': 5645.803, 'duration': 1.602}, {'end': 5648.826, 'text': 'these are the folders.', 'start': 5647.405, 'duration': 1.421}, {'end': 5656.252, 'text': "right, i could actually grab the um, the label, by using a regular expression, and so here's the regular expression.", 'start': 5648.826, 'duration': 7.426}], 'summary': 'Using folder names as labels achieves 99.55% accuracy in training. 
CSV files provide another labeling possibility.', 'duration': 72.321, 'max_score': 5583.931, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI5583931.jpg'}, {'end': 5791.147, 'src': 'embed', 'start': 5753.672, 'weight': 2, 'content': [{'end': 5761.114, 'text': "So the documentation for fast.ai doesn't just tell you what to do, but step by step how to do it.", 'start': 5753.672, 'duration': 7.442}, {'end': 5763.955, 'text': 'And here is perhaps the coolest bit.', 'start': 5762.175, 'duration': 1.78}, {'end': 5781.526, 'text': 'If you go to the fastai_docs repo and click on docs_src, it turns out that all of our documentation is actually just Jupyter notebooks.', 'start': 5764.675, 'duration': 16.851}, {'end': 5788.727, 'text': 'So in this case I was looking at vision.data.', 'start': 5782.366, 'duration': 6.361}, {'end': 5791.147, 'text': 'So here is the vision.data notebook.', 'start': 5788.727, 'duration': 2.42}], 'summary': 'Fast.ai documentation provides step-by-step instructions and is based on Jupyter notebooks.', 'duration': 37.475, 'max_score': 5753.672, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI5753672.jpg'}, {'end': 5888.991, 'src': 'embed', 'start': 5838.885, 'weight': 3, 'content': [{'end': 5843.49, 'text': 'And so you can actually try every single function in your browser.', 'start': 5838.885, 'duration': 4.605}, {'end': 5847.133, 'text': 'Try seeing what goes in and try seeing what comes out.', 'start': 5843.49, 'duration': 3.643}, {'end': 5854.06, 'text': "So there's a question: will the library use multi-GPU in parallel by default?", 'start': 5847.133, 'duration': 6.927}, {'end': 5860.664, 'text': 'The library will use multiple CPUs by default, but just one GPU by default.', 'start': 5855.982, 'duration': 4.682}, {'end': 5864.125, 'text': "We probably won't be looking at multi-GPU until part two.", 'start': 
5860.764, 'duration': 3.361}, {'end': 5869.807, 'text': "It's easy to do, and you'll find it on the forum, but most people won't be needing to use that now.", 'start': 5864.545, 'duration': 5.262}, {'end': 5878.81, 'text': 'And the second question is whether the library can use 3D data such as MRI or CAT scans.', 'start': 5871.087, 'duration': 7.723}, {'end': 5883.992, 'text': 'Yes, it can, and there is actually a forum thread about that already.', 'start': 5879.73, 'duration': 4.262}, {'end': 5888.991, 'text': "That's not as developed as 2D yet, but maybe by the time the MOOC is out it will be.", 'start': 5885.188, 'duration': 3.803}], 'summary': 'Library uses multiple CPUs by default, but just one GPU; it can handle 3D data like MRI or CAT scans.', 'duration': 50.106, 'max_score': 5838.885, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI5838885.jpg'}, {'end': 5960.863, 'src': 'embed', 'start': 5935.365, 'weight': 5, 'content': [{'end': 5946.232, 'text': 'He then took the exact code that we saw with an earlier version of the software, trained a CNN in exactly the way we saw, and used that to train his fraud model.', 'start': 5935.365, 'duration': 10.867}, {'end': 5956.993, 'text': 'So he basically took something which is not obviously a picture, and he turned it into a picture and got these fantastically good results for a piece of fraud analysis software.', 'start': 5946.452, 'duration': 10.541}, {'end': 5960.863, 'text': 'So it pays to think creatively.', 'start': 5957.293, 'duration': 3.57}], 'summary': 'Using creative thinking, a fast.ai alum trained a CNN on non-image data, achieving fantastic results for fraud analysis software.', 'duration': 25.498, 'max_score': 5935.365, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI5935365.jpg'}], 'start': 5432.978, 'title': 'Model interpretation and fast.ai documentation', 'summary': "Discusses model 
interpretation, creating custom data sets, and experimenting with fast.ai's documentation, providing detailed code examples and addressing common questions about multi-GPU usage and 3D data compatibility.", 'chapters': [{'end': 5686.523, 'start': 5432.978, 'title': 'Model interpretation and creating custom data sets', 'summary': 'Discusses model interpretation, running a notebook, creating custom image data sets, and various ways to label data, including using folders, CSV files, regular expressions, and custom functions.', 'duration': 253.545, 'highlights': ['The MNIST sample data set is used to demonstrate different ways of creating data sets, with one approach achieving 99.55% accuracy in training.', 'Different labeling methods are shown, including using folders, CSV files, regular expressions, and custom functions to extract labels from file names or paths.', 'Francisco is creating a guide to help download data from Google Images to create custom data sets for model training.', 'Model interpretation is highlighted as a two-way process, beneficial for non-experts to gain domain knowledge.', 'The main difference in assessing pet breeds lies in the kennel club guidelines, with some breeds being hard to identify.']}, {'end': 6006.617, 'start': 5687.024, 'title': 'Fast.ai documentation and experimentation', 'summary': "Highlights the importance of experimenting with fast.ai's documentation, which provides detailed code examples and step-by-step instructions, and encourages users to try out different functions and data sets, while also addressing common questions about multi-GPU usage and 3D data compatibility.", 'duration': 319.593, 'highlights': ['The fast.ai documentation provides detailed code examples and step-by-step instructions for every function, allowing users to experiment with actual working examples and datasets. 
detailed code examples, step-by-step instructions', 'The library defaults to using multiple CPUs but only one GPU, and multi-GPU usage may be explored in part two, addressing common questions about multi-GPU usage. default CPU and GPU usage, multi-GPU exploration in part two', 'The library can handle 3D data such as MRI or CAT scans, with ongoing development and a forum thread dedicated to this functionality. handling of 3D data, ongoing development and forum thread', "An alum used fast.ai's techniques to create anti-fraud software by converting mouse movements into images and training a CNN, showcasing the potential for creative applications. application in anti-fraud software, conversion of mouse movements into images", "The chapter encourages users to experiment with fast.ai's documentation, try different functions and data sets, and share their experiences on the forum. encouragement for experimentation and sharing on the forum"]}], 'duration': 573.639, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/XfoYk_Z5AkI/pics/XfoYk_Z5AkI5432978.jpg', 'highlights': ['The MNIST sample data set achieves 99.55% accuracy in training.', 'Model interpretation is highlighted as a two-way process, beneficial for non-experts to gain domain knowledge.', 'The fast.ai documentation provides detailed code examples and step-by-step instructions for every function.', 'The library defaults to using multiple CPUs but only one GPU, and multi-GPU usage may be explored in part two.', 'The library can handle 3D data such as MRI or CAT scans, with ongoing development and a forum thread dedicated to this functionality.', "An alum used fast.ai's techniques to create anti-fraud software by converting mouse movements into images and training a CNN, showcasing the potential for creative applications."]}], 'highlights': ['The course consists of seven two-hour lessons, with an additional eight to ten hours of homework per week, totaling approximately 70 to 80 hours of 
work, and promises students the ability to build world-class models in image classification, text classification, commercial predictions, and recommendation systems.', 'The chapter illustrates that one cycle learning achieves a 94% accuracy rate, a significant improvement over the 59% accuracy rate achieved by the previous approach in 2012.', 'Transfer learning enables training models efficiently with significantly less time and data, potentially thousands of times less, compared to regular model training.', 'The fast AI library has contributed to breakthroughs in natural language processing, including achieving state-of-the-art results in text classification.', 'The fast AI library is gaining significant traction, with major cloud providers supporting it, and researchers utilizing it, showcasing its potential and impact on the field of deep learning.', 'ResNet architecture is recommended for image classification, citing its top rankings in benchmarks like Stanford Dawn Bench and ImageNet.', 'Using learning rate finder to determine optimal learning rate led to 10% relative improvement with 58s extra training', 'The MNIST sample data set achieves 99.55% accuracy in training.']}
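The transcript above describes three ways of labeling image data: from folder names (as in the MNIST sample's train/3 and train/7 folders), from a labels CSV, and from a regular expression over the file path. A minimal standard-library sketch of those three strategies (not fastai's actual implementation; the file names and CSV contents here are made up for illustration):

```python
import csv
import io
import re
from pathlib import PurePosixPath

# Hypothetical file paths, mimicking the MNIST sample layout
# described in the lesson (train/3/... and train/7/...).
paths = [
    "train/3/img_001.png",
    "train/7/img_002.png",
]

# Strategy 1: label = parent folder name.
folder_labels = {p: PurePosixPath(p).parent.name for p in paths}

# Strategy 2: label from a CSV mapping file name -> label
# (0/1 here meaning "is it a 7 or not", as in the lesson).
labels_csv = "name,label\ntrain/3/img_001.png,0\ntrain/7/img_002.png,1\n"
csv_labels = {row["name"]: row["label"]
              for row in csv.DictReader(io.StringIO(labels_csv))}

# Strategy 3: label extracted from the path with a regular expression.
pat = re.compile(r"/(\d)/")  # capture the digit folder in the path
re_labels = {p: pat.search(p).group(1) for p in paths}
```

In fastai v1, the library version used in this lesson, these strategies correspond to the `ImageDataBunch.from_folder`, `ImageDataBunch.from_csv`, and `ImageDataBunch.from_name_re` factory methods, after which `data.classes` shows the labels that were found.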
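The anti-fraud highlight hinges on turning something that is "not obviously a picture" (mouse movements) into an image a CNN can consume. A toy, purely hypothetical sketch of that rasterization idea (not the alum's actual code):

```python
def movements_to_image(points, size=8):
    """Rasterize (x, y) pointer positions, each in [0, 1), onto a
    size x size grid, counting visits per cell. The resulting grid
    could be fed to an image classifier like any other picture."""
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        grid[int(y * size)][int(x * size)] += 1
    return grid

# A hypothetical mouse track: mostly hovering near the top-left,
# then jumping across the screen.
track = [(0.1, 0.1), (0.12, 0.11), (0.5, 0.5), (0.9, 0.9)]
img = movements_to_image(track)
```

The real system would presumably encode more signal (speed, timing, click events, perhaps as color channels), but the point of the anecdote stands: once behavioral data is rendered as an image, the exact transfer-learning recipe from this lesson applies unchanged.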