title
ml5.js: Image Classification with MobileNet

description
In this video, I use the "pre-trained" MobileNet model to classify the content of an image. #machinelearning #mobilenet #imageclassification #ml5 #p5js 💻Course: https://thecodingtrain.com/Courses/ml5-beginners-guide/1.1-ml5-image-classification.html 🎥Previous Video: https://youtu.be/jmznx0Q1fP0 🎥Next Video: https://youtu.be/D9BoBSkLvFo 🎥Full Playlist: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6YPSwT06y_AEYTqIwbeam3y Links discussed in this course: 🔗 ml5.js: https://ml5js.org 🔗 Image-Net: image-net.org/index 🔗 MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications: https://arxiv.org/abs/1704.04861 🎥 p5.js Workflow: https://youtu.be/HZ4D3wDRaec 🎥 ES6 Promises: https://youtu.be/QO4NXhWo_NM 🚂Website: https://thecodingtrain.com/ 💡Github: https://github.com/CodingTrain 💖Membership: https://youtube.com/thecodingtrain/join 🛒Store: https://www.designbyhumans.com/shop/codingtrain/ 📚Books: https://www.amazon.com/shop/thecodingtrain 🖋️Twitter: https://twitter.com/thecodingtrain Video editing by Mathieu Blanchette. 🎥Coding Challenges: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6ZiZxtDDRCi6uhfTH4FilpH 🎥Intro to Programming using p5.js: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6Zy51Q-x9tMWIv9cueOFTFA 📄 Code of Conduct: https://github.com/CodingTrain/Code-of-Conduct 🌐Help us caption and translate: http://www.youtube.com/timedtext_cs_panel?c=UCvjgXvBlbQiydffZU7m1_aw&tab=2 🚩Suggest Topics: https://github.com/CodingTrain/Rainbow-Topics 👾Share your contribution: https://thecodingtrain.com/Guides/community-contribution-guide.html 🔗 p5.js: https://p5js.org 🔗 Processing: https://processing.org

detail
{'title': 'ml5.js: Image Classification with MobileNet', 'heatmap': [{'end': 1114.166, 'start': 1092.022, 'weight': 1}], 'summary': 'The tutorial covers integrating mobilenet model into javascript using ml5 library, emphasizing supervised learning, public domain images, setting up ml5 in javascript, creating an image classifier, and using ml5.js for image classification, achieving 75% identification probability for oyster catcher.', 'chapters': [{'end': 213.933, 'segs': [{'end': 95.013, 'src': 'embed', 'start': 56.565, 'weight': 1, 'content': [{'end': 62.991, 'text': "But let's first actually, if you just even want to just play around with this model to start with, you can do this right here on the ml5 home page.", 'start': 56.565, 'duration': 6.426}, {'end': 65.873, 'text': 'So right here, you can see this image of a macaw.', 'start': 63.431, 'duration': 2.442}, {'end': 66.514, 'text': 'And guess what?', 'start': 66.013, 'duration': 0.501}, {'end': 74.241, 'text': 'The MobileNet model labeled this image as a macaw with a confidence of 98.79%.', 'start': 66.834, 'duration': 7.407}, {'end': 77.263, 'text': 'Oh, this must be the best, smartest machine learning model ever.', 'start': 74.241, 'duration': 3.022}, {'end': 78.243, 'text': "It's like magic.", 'start': 77.543, 'duration': 0.7}, {'end': 79.264, 'text': 'It just knows everything.', 'start': 78.283, 'duration': 0.981}, {'end': 82.846, 'text': 'And in fact, I can grab this toucan, and I can drag it in here.', 'start': 79.524, 'duration': 3.322}, {'end': 83.226, 'text': 'And look at this.', 'start': 82.866, 'duration': 0.36}, {'end': 87.148, 'text': "It's a toucan with a confidence of 99.99%.", 'start': 83.826, 'duration': 3.322}, {'end': 89.289, 'text': 'This is so smart.', 'start': 87.148, 'duration': 2.141}, {'end': 89.87, 'text': 'So smart.', 'start': 89.45, 'duration': 0.42}, {'end': 91.471, 'text': "I can't believe the MobileNet model is amazing.", 'start': 89.91, 'duration': 1.561}, {'end': 93.112, 'text': "Now let's get this puffin in here.", 'start': 91.671, 'duration': 1.441}, {'end': 94.373, 'text': "It's got to know what a puffin is.", 'start': 93.152, 'duration': 1.221}, {'end': 95.013, 'text': 'Look at this.', 'start': 94.593, 'duration': 0.42}], 'summary': 'Mobilenet model accurately identifies macaw, toucan, and puffin with high confidence.', 'duration': 38.448, 'max_score': 56.565, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY56565.jpg'}, {'end': 170.037, 'src': 'embed', 'start': 120.307, 'weight': 0, 'content': [{'end': 126.209, 'text': "And one of the things that's really important if you're working with something like a pre-trained model is actually to know what's in that set.", 'start': 120.307, 'duration': 5.902}, {'end': 129.35, 'text': "Now, it's not the most easy thing to find.", 'start': 126.609, 'duration': 2.741}, {'end': 132.291, 'text': 'But right here, on the ML5 GitHub,', 'start': 130.169, 'duration': 2.122}, {'end': 140.654, 'text': "there's actually this page of JavaScript code that shows all the things that the MobileNet model happens to know about.", 'start': 132.291, 'duration': 8.363}, {'end': 143.014, 'text': 'In fact, there are 1,000 classes.', 'start': 140.714, 'duration': 2.3}, {'end': 145.335, 'text': 'And you could see, like, this is crazy.', 'start': 143.534, 'duration': 1.801}, {'end': 149.356, 'text': 'It knows about a beagle and a bloodhound and an English foxhound.', 'start': 145.395, 'duration': 3.961}, {'end': 153.297, 'text': 
"It's, like, trained to know obscure dog breeds.", 'start': 149.636, 'duration': 3.661}, {'end': 157.538, 'text': "But if I look for puffin, it's nowhere to be found.", 'start': 153.957, 'duration': 3.581}, {'end': 160.338, 'text': 'Oyster catcher, however, is in there.', 'start': 158.498, 'duration': 1.84}, {'end': 166.715, 'text': 'And if I were to look, like, what does an oyster catcher look like? It kind of makes sense.', 'start': 160.638, 'duration': 6.077}, {'end': 170.037, 'text': "That's the thing it knows about that's closest to Puffin.", 'start': 167.155, 'duration': 2.882}], 'summary': 'Mobilenet model has 1,000 classes, including rare dog breeds, but lacks puffin and resembles oyster catcher.', 'duration': 49.73, 'max_score': 120.307, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY120307.jpg'}], 'start': 0.845, 'title': 'Ml5 and mobilenet model integration', 'summary': 'Introduces a tutorial for image classification using the ml5 library and the integration of the mobilenet model into javascript through the ml5 library. it highlights the ease of use, utilization of pre-trained model, image recognition capabilities, and limitations of pre-trained models with 1,000 classes.', 'chapters': [{'end': 41.132, 'start': 0.845, 'title': 'Ml5 image classification tutorial', 'summary': 'Introduces a code tutorial for image classification using the ml5 library, focusing on the ease of use and the utilization of a pre-trained model, which sets it apart from traditional machine learning tutorials.', 'duration': 40.287, 'highlights': ['The tutorial focuses on creating an image classification code example using the ML5 library, emphasizing its simplicity and applicability as a hello world introduction to machine learning.', 'It highlights the use of a pre-trained model, distinguishing it from conventional machine learning tutorials that often begin with linear regression and training processes.']}, {'end': 213.933, 'start': 41.612, 'title': 'Ml5 and mobilenet model', 'summary': 'Discusses the integration of an open-sourced machine learning model, mobilenet, into javascript through the ml5 library, showcasing its image recognition capabilities with specific examples and highlighting the limitations of pre-trained models with 1,000 classes.', 'duration': 172.321, 'highlights': ['The MobileNet model accurately recognized a macaw with a confidence of 98.79% and a toucan with a confidence of 99.99%, demonstrating its image recognition capabilities.', 'The chapter emphasizes the limitations of pre-trained models, citing the example of the model labeling a puffin as an oyster catcher due to the fixed number of classes it has been trained on, with a specific mention of the 1,000 classes available in the MobileNet model.', 'The discussion also touches upon the challenge of finding specific objects within the 1,000 classes of the MobileNet model and the potential need for retraining the model with custom data to achieve more accurate results.']}], 'duration': 213.088, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY845.jpg', 'highlights': ['The MobileNet model accurately recognized a macaw with a confidence of 98.79% and a toucan with a confidence of 99.99%, demonstrating its image recognition capabilities.', 'The tutorial focuses on creating an image classification code example using the ML5 library, emphasizing its simplicity and applicability as a hello world introduction to machine 
learning.', 'It highlights the use of a pre-trained model, distinguishing it from conventional machine learning tutorials that often begin with linear regression and training processes.', 'The chapter emphasizes the limitations of pre-trained models, citing the example of the model labeling a puffin as an oyster catcher due to the fixed number of classes it has been trained on, with a specific mention of the 1,000 classes available in the MobileNet model.', 'The discussion also touches upon the challenge of finding specific objects within the 1,000 classes of the MobileNet model and the potential need for retraining the model with custom data to achieve more accurate results.']}, {'end': 576.52, 'segs': [{'end': 294.425, 'src': 'embed', 'start': 258.546, 'weight': 0, 'content': [{'end': 271.001, 'text': 'And that output is a list of labels cat dog, puffin, et cetera and confidence scores 98%.', 'start': 258.546, 'duration': 12.455}, {'end': 273.501, 'text': 'Well, do the scores all add up to 100%?', 'start': 271.001, 'duration': 2.5}, {'end': 275.402, 'text': 'I think they should.', 'start': 273.501, 'duration': 1.901}, {'end': 279.763, 'text': '90%, 6%, 2%, et cetera.', 'start': 275.422, 'duration': 4.341}, {'end': 283.443, 'text': "So this is the stage that we're doing.", 'start': 281.563, 'duration': 1.88}, {'end': 294.425, 'text': 'But how did we get to the point where we could do this ourselves? So somebody had to do this with a process known as supervised learning.', 'start': 283.783, 'duration': 10.642}], 'summary': 'Supervised learning process produces list of labels with 98% confidence scores.', 'duration': 35.879, 'max_score': 258.546, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY258546.jpg'}, {'end': 386.195, 'src': 'embed', 'start': 356.506, 'weight': 2, 'content': [{'end': 361.028, 'text': 'And the process of how that works involves looking at the error.', 'start': 356.506, 'duration': 4.522}, {'end': 364.448, 'text': "the model tries to make a guess it doesn't get it wrong.", 'start': 361.028, 'duration': 3.42}, {'end': 365.549, 'text': 'since it knows the right answer.', 'start': 364.448, 'duration': 1.101}, {'end': 368.229, 'text': 'it can change its settings and try to get the right answer the next time.', 'start': 365.549, 'duration': 2.68}, {'end': 369.41, 'text': 'This is the process.', 'start': 368.569, 'duration': 0.841}, {'end': 376.552, 'text': "Once that finishes, There's usually the model is tested with some other images that weren't used in the training set.", 'start': 369.89, 'duration': 6.662}, {'end': 378.933, 'text': 'And then a paper is written about it.', 'start': 376.852, 'duration': 2.081}, {'end': 379.653, 'text': "It's published.", 'start': 378.993, 'duration': 0.66}, {'end': 386.195, 'text': "Or it might be something that a company sort of owns and keeps closed down and doesn't provide access to,", 'start': 379.793, 'duration': 6.402}], 'summary': 'Model accuracy is improved iteratively; then tested with new images before being published or kept proprietary.', 'duration': 29.689, 'max_score': 356.506, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY356506.jpg'}, {'end': 508.971, 'src': 'embed', 'start': 480.43, 'weight': 1, 'content': [{'end': 482.59, 'text': "I'm going to grab this image.", 'start': 480.43, 'duration': 2.16}, {'end': 484.531, 'text': "I'm going to do Save Images.", 'start': 483.111, 'duration': 1.42}, {'end': 486.831, 
'text': "I'm going to save it to the desktop as Penguin.", 'start': 484.551, 'duration': 2.28}, {'end': 488.152, 'text': "I'm going to go back to ML5.", 'start': 486.851, 'duration': 1.301}, {'end': 493.596, 'text': "And then I'm going to go here and say, look, 100%.", 'start': 489.412, 'duration': 4.184}, {'end': 498.381, 'text': "You know why it's 100%? I am almost sure that that image is in.", 'start': 493.596, 'duration': 4.785}, {'end': 499.602, 'text': "I mean, I'm not sure about this.", 'start': 498.401, 'duration': 1.201}, {'end': 500.302, 'text': 'But go and look.', 'start': 499.662, 'duration': 0.64}, {'end': 501.684, 'text': "That's probably in ImageNet.", 'start': 500.443, 'duration': 1.241}, {'end': 508.971, 'text': "So a lot of these things that come up in the first search result that are in the public domain for Wikipedia, they're actually in that image database.", 'start': 502.585, 'duration': 6.386}], 'summary': "Saving image as penguin on desktop, confident it's in imagenet.", 'duration': 28.541, 'max_score': 480.43, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY480430.jpg'}], 'start': 213.973, 'title': 'Utilizing pre-trained models and public domain images in machine learning', 'summary': 'Highlights the process of supervised learning, the significance of labeled datasets, and the utilization of public domain images, emphasizing the access to large and diverse datasets like imagenet, which contains almost 15 million images. it also encourages critical analysis of models like mobilenet for effective image classification.', 'chapters': [{'end': 456.742, 'start': 213.973, 'title': 'Understanding pre-trained models in machine learning', 'summary': 'Discusses the process of supervised learning, the role of labeled datasets in training pre-trained models, and the significance of a large and diverse training dataset such as imagenet, which contains almost 15 million images, for effective machine learning.', 'duration': 242.769, 'highlights': ['The significance of a large and diverse training dataset such as ImageNet, which contains almost 15 million images, for effective machine learning. ImageNet, a database of almost 15 million images, is crucial for a machine learning system to perform well in the supervised learning process.', 'The process of supervised learning and the role of labeled datasets in training pre-trained models. Supervised learning involves a labeled dataset that helps train pre-trained models by providing examples of inputs and outputs, allowing the model to learn and improve its accuracy.', 'The need for a large data set for effective supervised learning. 
Effective supervised learning requires a large dataset to enable a machine learning system to perform well, as having a small dataset limits its capabilities.']}, {'end': 576.52, 'start': 457.662, 'title': 'Utilizing public domain images for image classification', 'summary': 'Discusses utilizing public domain images for image classification, highlighting the importance of accessing images from the public domain, the effectiveness of using images from databases like imagenet, and encouraging the audience to explore and critically analyze the mobilenet model.', 'duration': 118.858, 'highlights': ['The importance of accessing public domain images for image classification to enhance model accuracy and performance.', 'The effectiveness of utilizing images from databases such as ImageNet to improve image classification confidence and accuracy.', 'The encouragement to explore and critically analyze the MobileNet model, including accessing relevant resources such as a paper and GitHub repository for in-depth understanding.']}], 'duration': 362.547, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY213973.jpg', 'highlights': ['The significance of a large and diverse training dataset such as ImageNet, which contains almost 15 million images, for effective machine learning.', 'The process of supervised learning and the role of labeled datasets in training pre-trained models.', 'The importance of accessing public domain images for image classification to enhance model accuracy and performance.', 'The encouragement to explore and critically analyze the MobileNet model, including accessing relevant resources such as a paper and GitHub repository for in-depth understanding.']}, {'end': 732.522, 'segs': [{'end': 626.074, 'src': 'embed', 'start': 599.032, 'weight': 2, 'content': [{'end': 604.775, 'text': 'because I also happen to be running a local server using a node server package.', 'start': 599.032, 'duration': 5.743}, {'end': 610.621, 'text': "Now, there are so many ways that you can run a web page that's run in JavaScript.", 'start': 605.196, 'duration': 5.425}, {'end': 618.728, 'text': "And you could use CodePen or open processing, or something new that's going to be coming out soon, which is a p5 editor that you could use online.", 'start': 610.661, 'duration': 8.067}, {'end': 620.77, 'text': "that when it comes out, I'll link to it in the video description.", 'start': 618.728, 'duration': 2.042}, {'end': 626.074, 'text': "But for right now, I'm going to use my workflow of having my own text editor on my computer.", 'start': 621.25, 'duration': 4.824}], 'summary': 'Running a local server for javascript web pages, options include codepen, open processing, and upcoming p5 editor.', 'duration': 27.042, 'max_score': 599.032, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY599032.jpg'}, {'end': 673.279, 'src': 'embed', 'start': 644.699, 'weight': 0, 'content': [{'end': 649.42, 'text': 'I have a video where I kind of go through my entire workflow in more detailed steps.', 'start': 644.699, 'duration': 4.721}, {'end': 651.481, 'text': "I will also link to that in this video's description.", 'start': 649.54, 'duration': 1.941}, {'end': 655.654, 'text': "Now that that's settled, though, I can actually start to use ML5.", 'start': 652.751, 'duration': 2.903}, {'end': 658.778, 'text': "So how do I use ML5? 
So right here, I'm on the ML5 home page.", 'start': 655.694, 'duration': 3.084}, {'end': 661.721, 'text': 'The first place I should go is just click on this big Get Started button.', 'start': 658.818, 'duration': 2.903}, {'end': 663.122, 'text': "So I'm going to click on that.", 'start': 661.741, 'duration': 1.381}, {'end': 664.944, 'text': 'And I totally clicked around to the wrong place.', 'start': 663.142, 'duration': 1.802}, {'end': 668.188, 'text': "Let's try that again.", 'start': 667.567, 'duration': 0.621}, {'end': 669.95, 'text': "I'm going to click on the Get Started button.", 'start': 668.208, 'duration': 1.742}, {'end': 673.279, 'text': "And I'm going to go right down here, and I'm going to look at this.", 'start': 671.138, 'duration': 2.141}], 'summary': 'The speaker discusses using ml5, guiding viewers through the process, but struggling with navigation at first.', 'duration': 28.58, 'max_score': 644.699, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY644699.jpg'}, {'end': 740.337, 'src': 'embed', 'start': 710.051, 'weight': 1, 'content': [{'end': 716.253, 'text': 'All right, I will not make my joke about your brain being inside of the gelatinous thawing machine thing in this video.', 'start': 710.051, 'duration': 6.202}, {'end': 717.734, 'text': 'I made it too many times.', 'start': 716.673, 'duration': 1.061}, {'end': 719.995, 'text': "It's not even funny.", 'start': 718.754, 'duration': 1.241}, {'end': 721.936, 'text': "OK So I'm going to put that in here.", 'start': 720.135, 'duration': 1.801}, {'end': 725.858, 'text': "I'm just going to go back to my web page, and I'm going to refresh.", 'start': 721.956, 'duration': 3.902}, {'end': 726.699, 'text': "It's still working.", 'start': 725.978, 'duration': 0.721}, {'end': 727.639, 'text': 'OK, good.', 'start': 727.279, 'duration': 0.36}, {'end': 732.522, 'text': 'So now we have the ml5 library imported, and we can start calling ml5 functions.', 'start': 728.199, 'duration': 4.323}, {'end': 740.337, 'text': 'So one thing I might actually do is just go back to the ML5 website oops, sorry which is here and click on examples,', 'start': 732.895, 'duration': 7.442}], 'summary': 'Transcript: demonstrating ml5 library usage in a video.', 'duration': 30.286, 'max_score': 710.051, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY710051.jpg'}], 'start': 576.76, 'title': 'Setting up javascript and ml5', 'summary': 'Covers setting up a javascript file using the p5 library, including creating a canvas and using the ml5 library, with a focus on importing the ml5 javascript library and referencing it in a script tag.', 'chapters': [{'end': 732.522, 'start': 576.76, 'title': 'Setting up javascript and ml5', 'summary': 'Covers setting up a javascript file using the p5 library, including creating a canvas and using the ml5 library, with a focus on importing the ml5 javascript library and referencing it in a script tag.', 'duration': 155.762, 'highlights': ['Using the p5 library to create a canvas and color the background with the color 0,, which is black The p5 library supports a setup function which executes when the web page loads, creating a canvas and coloring the background with the color 0, which is black.', 'Running a local server using a node server package to view the results of the JavaScript code The speaker is able to see the results of the JavaScript code by running a local server using a node server package.', 
'Importing the ml5 library in the HTML file and referencing it in a script tag The speaker prefers referencing the ml5 library in a script tag in the HTML file and mentions the current version of the ml5 library as 0.1.1.']}], 'duration': 155.762, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY576760.jpg', 'highlights': ['Using the p5 library to create a canvas and color the background with the color 0, which is black', 'Importing the ml5 library in the HTML file and referencing it in a script tag', 'Running a local server using a node server package to view the results of the JavaScript code']}, {'end': 897.847, 'segs': [{'end': 821.021, 'src': 'embed', 'start': 792.714, 'weight': 0, 'content': [{'end': 795.816, 'text': "So I'm going to make a variable, and I'm going to call it classifier.", 'start': 792.714, 'duration': 3.102}, {'end': 797.236, 'text': 'And actually, you know what?', 'start': 796.656, 'duration': 0.58}, {'end': 803.858, 'text': "I'm going to call it mobilenet, because I want to remind myself that this is not magic, that this is using a very specific,", 'start': 797.256, 'duration': 6.602}, {'end': 805.138, 'text': 'pre-trained model called mobilenet.', 'start': 803.858, 'duration': 1.28}, {'end': 806.498, 'text': "So I'm going to call my variable mobilenet.", 'start': 805.158, 'duration': 1.34}, {'end': 812.94, 'text': "Then I'm going to say mobilenet equals ml5.imageClassifier.", 'start': 806.978, 'duration': 5.962}, {'end': 816.52, 'text': 'So this is a function that generates an image classification object.', 'start': 813.36, 'duration': 3.16}, {'end': 818.581, 'text': "It's going to be stored in that variable mobilenet.", 'start': 816.781, 'duration': 1.8}, {'end': 821.021, 'text': 'And now it needs some arguments.', 'start': 818.941, 'duration': 2.08}], 'summary': "Creating a variable 'mobilenet' as an image classifier using ml5 with pre-trained model.", 'duration': 28.307, 'max_score': 792.714, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY792714.jpg'}, {'end': 879.74, 'src': 'embed', 'start': 834.254, 'weight': 1, 'content': [{'end': 841.699, 'text': "So I'm telling ml5 that I want to make an image classifier, and the first argument I'm giving it is a string with the name of the model.", 'start': 834.254, 'duration': 7.445}, {'end': 850.664, 'text': 'Now, in theory, as ml5 supports additional pre-trained model, I might get to type something in here, like a unicorn classifier.', 'start': 842.039, 'duration': 8.625}, {'end': 853.005, 'text': 'Maybe it classifies all different kinds of unicorns.', 'start': 850.704, 'duration': 2.301}, {'end': 856.742, 'text': 'MobileNet, and then I need something else really important.', 'start': 854.42, 'duration': 2.322}, {'end': 857.802, 'text': 'I need a callback.', 'start': 856.822, 'duration': 0.98}, {'end': 861.065, 'text': 'Deep breath, deep breath, deep breath, deep breath.', 'start': 859.423, 'duration': 1.642}, {'end': 863.506, 'text': 'Okay, so wait, I gotta stop for a second.', 'start': 861.225, 'duration': 2.281}, {'end': 872.297, 'text': "The ml5 library supports callbacks, which is what I'm going to use in these video tutorials, and something called promises.", 'start': 866.295, 'duration': 6.002}, {'end': 877.359, 'text': "If you don't know what a JavaScript promise is, I will refer you to my playlist about JavaScript promises.", 'start': 872.517, 'duration': 4.842}, {'end': 879.74, 'text': 'And you 
might look at some of the documentation.', 'start': 877.879, 'duration': 1.861}], 'summary': 'Using ml5, creating an image classifier with mobilenet and implementing callbacks for javascript promises.', 'duration': 45.486, 'max_score': 834.254, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY834254.jpg'}], 'start': 732.895, 'title': 'Using ml5 for image classification', 'summary': "Demonstrates how to create an image classifier using the pre-trained model 'mobilenet' and the implementation of callbacks in the javascript programming language.", 'chapters': [{'end': 897.847, 'start': 732.895, 'title': 'Using ml5 for image classification', 'summary': "Discusses using the ml5 library for image classification, demonstrating how to create an image classifier using the pre-trained model 'mobilenet' and the implementation of callbacks in the javascript programming language.", 'duration': 164.952, 'highlights': ["The chapter discusses using the ML5 library for image classification The transcript is centered around using the ML5 library for image classification, demonstrating the process of creating an image classifier using the 'MobileNet' pre-trained model.", "Demonstrating how to create an image classifier using the pre-trained model 'MobileNet' The transcript explains the process of creating an image classifier by defining a variable 'mobilenet' and using the function 'ml5.imageClassifier' with the pre-trained model 'MobileNet'.", 'The implementation of callbacks in the JavaScript programming language The chapter introduces the use of callbacks in the JavaScript programming language for handling asynchronous events, providing an overview of their functionality and relationship to promises.']}], 'duration': 164.952, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY732895.jpg', 'highlights': ['The chapter discusses using the ML5 library for image classification.', "Demonstrating how to create an image classifier using the pre-trained model 'MobileNet'.", 'The implementation of callbacks in the JavaScript programming language.']}, {'end': 1204.614, 'segs': [{'end': 946.411, 'src': 'embed', 'start': 897.887, 'weight': 0, 'content': [{'end': 901.109, 'text': 'I could also put an anonymous function in there called model ready.', 'start': 897.887, 'duration': 3.222}, {'end': 907.892, 'text': "And I'm just going to put that function up here in the global space because that's going to be very simple.", 'start': 902.069, 'duration': 5.823}, {'end': 914.746, 'text': "And I'm just going to say console.log Model is ready, exclamation point.", 'start': 908.493, 'duration': 6.253}, {'end': 919.148, 'text': 'So the idea here is I am now creating an image classifier with a MobileNet model.', 'start': 914.986, 'duration': 4.162}, {'end': 921.789, 'text': "It's going to take some time for it to load that model.", 'start': 919.388, 'duration': 2.401}, {'end': 922.749, 'text': 'This is not a small thing.', 'start': 921.829, 'duration': 0.92}, {'end': 927.311, 'text': "Now, it's called MobileNet because it's actually a tiny model that can even run on mobile phones.", 'start': 922.809, 'duration': 4.502}, {'end': 930.332, 'text': 'So this code would work well even in a mobile browser.', 'start': 927.531, 'duration': 2.801}, {'end': 935.715, 'text': "But even the tiniest model is something that's got some size to it.", 'start': 931.368, 'duration': 4.347}, {'end': 937.378, 'text': "It's going to take a 
little while to load.", 'start': 935.735, 'duration': 1.643}, {'end': 939.14, 'text': "So let's go see how long.", 'start': 937.698, 'duration': 1.442}, {'end': 941.123, 'text': 'Go over here, into here.', 'start': 939.641, 'duration': 1.482}, {'end': 941.985, 'text': "I'm going to hit Refresh.", 'start': 941.143, 'duration': 0.842}, {'end': 946.411, 'text': 'Model is ready.', 'start': 945.791, 'duration': 0.62}], 'summary': 'Creating image classifier with mobilenet model, suitable for mobile browsers, takes time to load.', 'duration': 48.524, 'max_score': 897.887, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY897887.jpg'}, {'end': 985.235, 'src': 'embed', 'start': 960.495, 'weight': 3, 'content': [{'end': 967.978, 'text': 'this is going to be different, but a lot of the pre-trained models that you might make use of in ML5 are actually loaded from the cloud,', 'start': 960.495, 'duration': 7.483}, {'end': 971.699, 'text': 'meaning some underground bunker of servers where the model file is stored.', 'start': 967.978, 'duration': 3.721}, {'end': 977.486, 'text': "I believe it's coming from a Google server.", 'start': 974.382, 'duration': 3.104}, {'end': 980.95, 'text': "So if you're not connected to the internet, this example won't even run.", 'start': 977.906, 'duration': 3.044}, {'end': 985.235, 'text': 'At some point it would be nice to support ways of running mobile net model offline.', 'start': 980.97, 'duration': 4.265}], 'summary': 'Ml5 pre-trained models are loaded from the cloud, specifically from google servers, making internet connection necessary for running the example. future support for offline use is desired.', 'duration': 24.74, 'max_score': 960.495, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY960495.jpg'}, {'end': 1120.513, 'src': 'heatmap', 'start': 1092.022, 'weight': 1, 'content': [{'end': 1097.387, 'text': "A lot of this stuff is unnecessary to the example I'm building, but just to demonstrate the idea.", 'start': 1092.022, 'duration': 5.365}, {'end': 1100.51, 'text': "And then now I'm going to draw the image into the canvas.", 'start': 1097.407, 'duration': 3.103}, {'end': 1108.94, 'text': 'but that image is really like a big size so I could resize it or I could just like force it to be the size of the canvas and there we go.', 'start': 1102.293, 'duration': 6.647}, {'end': 1114.166, 'text': 'So I now have a P5 canvas displaying my Puffin image and the model is ready.', 'start': 1109.321, 'duration': 4.845}, {'end': 1120.513, 'text': 'So now, once the model is ready, what can I do? I can classify the image.', 'start': 1115.031, 'duration': 5.482}], 'summary': 'Demonstrating resizing and displaying an image on a p5 canvas.', 'duration': 28.491, 'max_score': 1092.022, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY1092022.jpg'}], 'start': 897.887, 'title': 'Creating image classifier with mobilenet model and using ml5 and p5 to work with images', 'summary': 'Demonstrates creating an image classifier using a mobilenet model, with a model loading time of a few seconds and a dependency on an internet connection. 
it also discusses using the p5 and ml5 libraries to work with images, including classifying image content and handling asynchronous operations with callbacks in javascript.', 'chapters': [{'end': 1000.918, 'start': 897.887, 'title': 'Creating image classifier with mobilenet model', 'summary': 'Demonstrates creating an image classifier using a mobilenet model, emphasizing the model loading time, which took a few seconds, and the dependency on an internet connection for accessing pre-trained models.', 'duration': 103.031, 'highlights': ["The MobileNet model is used to create an image classifier, and it takes some time to load. Emphasizes the usage of the MobileNet model and the time it takes to load, highlighting the model's capability to run on mobile phones and the loading time.", 'The pre-trained models used in ML5 are loaded from the cloud, requiring an internet connection for access. Highlights the dependency on an internet connection for accessing pre-trained models in ML5, mentioning the storage of model files in a Google server and the requirement for online connectivity.', "The example of using MobileNet model in ML5 requires an internet connection and does not support offline usage. Emphasizes the necessity of an internet connection for running the example, stating the lack of support for offline usage in the ML5 library's MobileNet model."]}, {'end': 1204.614, 'start': 1001.559, 'title': 'Using ml5 and p5 to work with images', 'summary': 'Discusses using the p5 and ml5 libraries to work with images, including creating and displaying images, utilizing the mobilenet model to classify the content of an image, and handling asynchronous operations with callbacks in javascript.', 'duration': 203.055, 'highlights': ['Utilizing P5 and ML5 libraries to create and display images, such as the Puffin image The chapter explains the process of using the P5 and ML5 libraries to create and display images, with a specific example of working with the Puffin image.', 'Using the MobileNet model to classify the content of the Puffin image The chapter demonstrates the usage of the MobileNet model to classify the content of the Puffin image, showcasing the predict function and the asynchronous nature of the operation.', 'Handling asynchronous operations with callbacks in JavaScript The chapter emphasizes the concept of handling asynchronous operations in JavaScript using callbacks, including the necessity of handling errors with error-first callbacks.']}], 'duration': 306.727, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY897887.jpg', 'highlights': ['The MobileNet model is used to create an image classifier, and it takes some time to load.', 'The pre-trained models used in ML5 are loaded from the cloud, requiring an internet connection for access.', 'The example of using MobileNet model in ML5 requires an internet connection and does not support offline usage.', 'Utilizing P5 and ML5 libraries to create and display images, such as the Puffin image.', 'Using the MobileNet model to classify the content of the Puffin image.', 'Handling asynchronous operations with callbacks in JavaScript.']}, {'end': 1449.341, 'segs': [{'end': 1289.524, 'src': 'embed', 'start': 1259.584, 'weight': 0, 'content': [{'end': 1267.951, 'text': "Now, why 5%, 11%? Why? Why is this different than what I got with the oyster catcher in the home page of ML5.js? 
I don't know the answer to that.", 'start': 1259.584, 'duration': 8.367}, {'end': 1270.77, 'text': "I don't have to think about that.", 'start': 1269.909, 'duration': 0.861}, {'end': 1272.531, 'text': "It's a different probability.", 'start': 1271.73, 'duration': 0.801}, {'end': 1274.432, 'text': "But nonetheless, I'm getting a pretty similar result.", 'start': 1272.551, 'duration': 1.881}, {'end': 1278.596, 'text': 'It might have to do with versions of something and that the homepage is running a different version of something.', 'start': 1274.452, 'duration': 4.144}, {'end': 1280.677, 'text': 'But we can see this is the idea.', 'start': 1279.616, 'duration': 1.061}, {'end': 1289.524, 'text': 'Now, what I could do is I could go and I could say, all right, let me get the label is results.', 'start': 1280.777, 'duration': 8.747}], 'summary': 'Comparing probabilities of 5% and 11% from different versions, achieving similar results.', 'duration': 29.94, 'max_score': 1259.584, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY1259584.jpg'}, {'end': 1422.839, 'src': 'embed', 'start': 1394.275, 'weight': 1, 'content': [{'end': 1395.838, 'text': 'So this is the basics.', 'start': 1394.275, 'duration': 1.563}, {'end': 1399.665, 'text': 'Now you see, you could actually just go and use this in a project right now.', 'start': 1396.539, 'duration': 3.126}, {'end': 1402.166, 'text': 'I have a couple ideas for you.', 'start': 1401.005, 'duration': 1.161}, {'end': 1405.128, 'text': 'Number one is make a little interface, try different images.', 'start': 1402.226, 'duration': 2.902}, {'end': 1409.11, 'text': 'Can you make one where you can actually drag and drop your own image, like it does on the ML5 homepage?', 'start': 1405.188, 'duration': 3.922}, {'end': 1413.673, 'text': "Could you make something where you draw on the canvas and it's trying to classify what you're drawing?", 'start': 1409.851, 'duration': 3.822}, {'end': 1418.817, 'text': 'Another thing you could try is can you get it to classify what the webcam is seeing?', 'start': 1414.214, 'duration': 4.603}, {'end': 1419.437, 'text': 'And guess what?', 'start': 1418.917, 'duration': 0.52}, {'end': 1420.918, 'text': "That's what I'm going to do in the next video.", 'start': 1419.657, 'duration': 1.261}, {'end': 1422.839, 'text': "So there's so many more things you could do with this.", 'start': 1421.158, 'duration': 1.681}], 'summary': 'Explore various project ideas including interface design, image manipulation, drawing classification, and webcam classification.', 'duration': 28.564, 'max_score': 1394.275, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY1394275.jpg'}, {'end': 1449.341, 'src': 'embed', 'start': 1446.659, 'weight': 2, 'content': [{'end': 1448.42, 'text': "And I'll answer those also at the beginning of the next video.", 'start': 1446.659, 'duration': 1.761}, {'end': 1449.341, 'text': 'OK? See you soon.', 'start': 1448.48, 'duration': 0.861}], 'summary': 'Next video will address remaining questions. stay tuned!', 'duration': 2.682, 'max_score': 1446.659, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY1446659.jpg'}], 'start': 1204.914, 'title': 'Using ml5.js for image classification', 'summary': 'Demonstrates using ml5.js to classify images with a pre-trained mobilenet model, achieving a 75% probability of identifying an image as an oyster catcher. 
it also explores error handling and displaying results on a canvas.', 'chapters': [{'end': 1449.341, 'start': 1204.914, 'title': 'Image classification with ml5.js', 'summary': 'Demonstrates using ml5.js to classify images using a pre-trained mobilenet model, achieving a 75% probability of identifying an image as an oyster catcher, and explores error handling and displaying classification results on a canvas.', 'duration': 244.427, 'highlights': ['Using ML5.js to classify images with a pre-trained MobileNet model The demonstration showcases the use of ML5.js to classify images using a pre-trained MobileNet model, achieving a 75% probability of identifying an image as an oyster catcher.', 'Exploring error handling and displaying classification results on a canvas The chapter delves into error handling by logging errors and displaying classification results on a canvas, providing insights into displaying labels and probabilities for the classified images.', 'Future possibilities of using the ML5.js model The chapter suggests potential projects including creating an interface for trying different images, implementing drag and drop functionality, classifying drawings on a canvas, and using the live webcam feed for real-time image classification.']}], 'duration': 244.427, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/yNkAuWz5lnY/pics/yNkAuWz5lnY1204914.jpg', 'highlights': ['Using ML5.js to classify images with a pre-trained MobileNet model achieving a 75% probability of identifying an image as an oyster catcher.', 'Exploring error handling and displaying classification results on a canvas providing insights into displaying labels and probabilities for the classified images.', 'Future possibilities of using the ML5.js model including creating an interface for trying different images, implementing drag and drop functionality, classifying drawings on a canvas, and using the live webcam feed for real-time image classification.']}], 'highlights': ['The MobileNet model accurately recognized a macaw with a confidence of 98.79% and a toucan with a confidence of 99.99%, demonstrating its image recognition capabilities.', 'The tutorial focuses on creating an image classification code example using the ML5 library, emphasizing its simplicity and applicability as a hello world introduction to machine learning.', 'The significance of a large and diverse training dataset such as ImageNet, which contains almost 15 million images, for effective machine learning.', 'Using ML5.js to classify images with a pre-trained MobileNet model achieving a 75% probability of identifying an image as an oyster catcher.', 'The process of supervised learning and the role of labeled datasets in training pre-trained models.', 'The chapter discusses using the ML5 library for image classification.', 'The implementation of callbacks in the JavaScript programming language.', 'The encouragement to explore and critically analyze the MobileNet model, including accessing relevant resources such as a paper and GitHub repository for in-depth understanding.']}
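
code sketches

The 'Understanding pre-trained models in machine learning' chapter describes supervised learning in the abstract: a labeled dataset, a model that makes a guess, compares it with the known right answer, and adjusts its settings before trying again. Below is a purely illustrative sketch of that loop with a single adjustable weight in plain JavaScript; the dataset, the threshold, and the update rule are invented for the example and are not how MobileNet itself was trained.

```javascript
// Toy illustration of the supervised-learning loop described in the video:
// guess, compare with the known label, nudge the settings, repeat.
// Everything here is made up for the sketch; real models like MobileNet have
// millions of parameters and are trained on ImageNet-scale labeled data.

// Tiny labeled dataset: one numeric feature per example with a known label.
const dataset = [
  { feature: 0.9, label: 1 }, // e.g. "dog"
  { feature: 0.8, label: 1 },
  { feature: 0.2, label: 0 }, // e.g. "cat"
  { feature: 0.1, label: 0 },
];

let weight = 0; // the model's only "setting"
let bias = 0;
const learningRate = 0.1;

for (let epoch = 0; epoch < 100; epoch++) {
  for (const { feature, label } of dataset) {
    const guess = weight * feature + bias > 0.5 ? 1 : 0; // make a guess
    const error = label - guess; // the right answer is known, so the error can be measured
    weight += learningRate * error * feature; // change the settings for next time
    bias += learningRate * error;
  }
}

console.log({ weight, bias }); // settings that now separate the two labels
```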
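The 'Setting up javascript and ml5' and 'Creating image classifier with mobilenet model' chapters import the ml5 library with a script tag (the video mentions version 0.1.1), serve the page from a local server, and construct the classifier with a model-loaded callback. A minimal sketch of that step follows; it assumes index.html already loads p5.js and ml5.js and references a sketch.js file, which are workflow assumptions rather than details quoted from the transcript.

```javascript
// sketch.js — assumes index.html loads p5.js and ml5.js (~0.1.1) via <script> tags
// and that the page is served from a local web server (e.g. a node server package).
let mobilenet; // named after the pre-trained model as a reminder that this is not magic

function setup() {
  createCanvas(640, 480);
  background(0); // color 0 is black
  // First argument: which pre-trained model to load ('MobileNet').
  // Second argument: a callback that fires once the model has loaded; the model is
  // fetched over the network, so it takes a moment and needs an internet connection.
  mobilenet = ml5.imageClassifier('MobileNet', modelReady);
}

function modelReady() {
  console.log('Model is ready!');
}
```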
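The 'Using ml5 and p5 to work with images' and 'Image classification with ml5.js' chapters then load an image, draw it into the canvas, classify it once the model is ready, handle the error-first callback, and display the top label with its probability. A sketch of that whole flow follows, with several assumptions flagged: the path images/puffin.jpg is hypothetical, predict() is the method name in the ml5 0.1.x releases the video uses (later releases renamed it classify()), and the result property names (className/probability in 0.1.x, label/confidence later) should be checked against the console output for your version.

```javascript
// End-to-end sketch assembled from the chapters above. Assumes the same index.html
// setup as the previous sketch and an internet connection to fetch the model.
let mobilenet;
let puffin;
let label = 'model loading...';
let confidence = '';

function preload() {
  puffin = loadImage('images/puffin.jpg'); // hypothetical path to any local image
}

function setup() {
  createCanvas(640, 480);
  image(puffin, 0, 0, width, height); // force the image to fill the canvas
  mobilenet = ml5.imageClassifier('MobileNet', modelReady);
}

function modelReady() {
  console.log('Model is ready!');
  // Classification is asynchronous; the results arrive in the callback below.
  // predict() is the ml5 0.1.x name; newer releases call this classify().
  mobilenet.predict(puffin, gotResults);
}

function gotResults(error, results) {
  // Error-first callback: check the error argument before touching the results.
  if (error) {
    console.error(error);
    return;
  }
  console.log(results); // an array of guesses, most confident first
  // Property names depend on the ml5 version; 0.1.x uses className/probability.
  label = results[0].className;
  confidence = results[0].probability.toFixed(2);
}

function draw() {
  // Repaint a caption strip at the bottom of the canvas with the current best guess.
  fill(0);
  rect(0, height - 40, width, 40);
  fill(255);
  textSize(24);
  text(label + ' ' + confidence, 10, height - 12);
}
```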
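The description links an ES6 Promises video, and the 'Using ml5 for image classification' chapter notes that ml5 supports both error-first callbacks and promises. As a hedged variant of the sketch above, the callback could be replaced with the promise form, assuming predict() returns a Promise when called without a callback (worth confirming against the ml5 documentation for the version in use).

```javascript
// Promise-style variant of modelReady()/gotResults() from the previous sketch,
// assuming predict() returns a Promise when no callback is supplied.
function modelReady() {
  mobilenet
    .predict(puffin)
    .then((results) => {
      console.log(results);
      label = results[0].className; // 0.1.x property names, as noted above
      confidence = results[0].probability.toFixed(2);
    })
    .catch((err) => console.error(err));
}
```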