title
YOLOv8: How to Train for Object Detection on a Custom Dataset

description
YOLOv8 is the latest installment of the highly influential YOLO (You Only Look Once) architecture. YOLOv8 was developed by Ultralytics, a team known for its work on YOLOv3 and YOLOv5. Following the trend set by YOLOv6 and YOLOv7, it offers not only object detection but also instance segmentation and image classification. The model itself is written in PyTorch and runs on both CPU and GPU. As with YOLOv5, a number of export formats are available, such as TF.js and CoreML. In this video, I'll take you through a step-by-step tutorial on Google Colab and show you how to train your own YOLOv8 object detection model.

Chapters:
0:00 Introduction
0:51 Overview
3:09 Setting up the Python environment
5:36 New API: CLI vs. Python SDK
8:51 Prepare the YOLOv8 object detection dataset
12:29 Train YOLOv8 model on custom dataset
13:54 YOLOv8 model evaluation
16:47 YOLOv8 model inference on images and videos
18:44 YOLOv8 model deployment and inference via hosted API
19:58 Conclusion

Resources:
🌏 Roboflow: https://roboflow.com
🌌 Roboflow Universe: https://universe.roboflow.com
📝 How to Train YOLOv8 Object Detection on a Custom Dataset Blogpost: https://blog.roboflow.com/how-to-train-yolov8-on-a-custom-dataset
📓 How to Train YOLOv8 Object Detection on a Custom Dataset Notebook: https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov8-object-detection-on-custom-dataset.ipynb
⭐ YOLOv8 repository: https://github.com/ultralytics/ultralytics
📄 YOLOv8 docs: https://v8docs.ultralytics.com
📓 Learn more about YOLOv8 and other Computer Vision models with Roboflow Notebooks: https://github.com/roboflow/notebooks
🎬 Automatically Label Computer Vision Data: https://youtu.be/eSfGTZarFNQ
🆕 What's New in YOLOv8 Architecture: https://blog.roboflow.com/whats-new-in-yolov8/
💯 RF100 Dataset: https://blog.roboflow.com/roboflow-100

Stay up to date with the projects I'm working on at https://github.com/roboflow and https://github.com/SkalskiP! ⭐

detail
Overall summary:
YOLOv8 is introduced as the latest state-of-the-art object detection algorithm, claiming faster fine-tuning than its predecessors on the Roboflow 100 dataset. The video emphasizes the significant engineering changes behind the release, including the new CLI and Python SDK, and highlights the Ultralytics team's open-source contributions, with roughly 45,000 stars on GitHub across their repositories. The tutorial covers installation, the two new APIs, building a football player detection dataset, training on that dataset with adjusted hyperparameters (about an hour of training), evaluating the results, reaching an inference speed of roughly 80 FPS with a model trained for about 30 minutes, and finally deploying the model and using it through a hosted API.

Chapter 1 (0:00-1:39): YOLOv8, a new SOTA object detection model
Summary: YOLOv8, the latest installment of the highly influential You Only Look Once algorithm, has just been released, and the team behind it claims a new state of the art for real-time object detection (pending confirmation on Papers with Code). Internal benchmarks against YOLOv5 and YOLOv7 on the Roboflow 100 dataset suggest the model fine-tunes much faster than its predecessors.
Highlights:
- The tutorial shows how to train, validate, predict with, and deploy the model, using the Football Player Detection dataset available on Roboflow Universe; chapters in the video description make it easy to jump straight to the training section.
- Instance segmentation and classification videos are planned for the near future.
Chapter 2 (1:39-4:16): The impact of YOLOv8
Summary: Discusses the transition to YOLOv8, highlighting the significant engineering changes, the creation of a CLI and an SDK, and the potential to address past issues, while emphasizing the Ultralytics team's contributions to open source: the creators of YOLOv8 also stand behind YOLOv3 and YOLOv5, with a collective 45,000 stars on GitHub.
Highlights:
- YOLOv8 represents a major engineering jump from Darknet to PyTorch, eliminating the need to edit train.py or fork the repository, and introducing a CLI and SDK instead.
- Potential improvements include resolving past pain points, such as the lack of a paper and of an official pip package, that have frustrated the data science and engineering communities.
- In the Colab notebook, the runtime is confirmed to be GPU-accelerated by running the nvidia-smi command; initialization takes a moment, but it shows access to a Tesla T4 GPU (a minimal check is sketched below).
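
As a rough illustration of that GPU check (not the exact notebook cells from the video), the following sketch assumes a Colab-style runtime with the NVIDIA driver and PyTorch already installed:

```python
# Minimal sketch of the GPU check, assuming a Colab-style runtime with the
# NVIDIA driver and PyTorch preinstalled (in a notebook you would simply
# run `!nvidia-smi` in a cell).
import subprocess

import torch

# Print the nvidia-smi report: driver version, CUDA version, and the GPU
# attached to the runtime (a Tesla T4 in the video).
print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)

# Programmatic confirmation that PyTorch can actually see the GPU.
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No GPU detected; training would fall back to the CPU.")
```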
Chapter 3 (4:16-9:31): Implementing and using YOLOv8
Summary: Covers the implementation of YOLOv8, including creating helper variables, the installation options, and a comparison of the two new APIs. It also demonstrates object detection with YOLOv8, with one person, one car, and one dog detected in the example image, and discusses the dataset required for training.
Setup highlights:
- A helper variable called HOME is created to make it easier to manage paths to datasets and other images.
- There are two ways to install the YOLOv8 package: pip install ultralytics (this is the first YOLO iteration with an official pip package), or cloning the repository and installing its dependencies, which offers more flexibility (a setup sketch follows below).
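
A minimal sketch of that setup, assuming the notebook-style workflow described above; the install commands are shown as comments because in Colab they run as shell cells:

```python
# Environment setup sketch. In the notebook the install is a shell cell:
#   pip install ultralytics                                  # method 1: official pip package
#   git clone https://github.com/ultralytics/ultralytics     # method 2: clone and install
#                                                            #   dependencies per the repo README
import os

# Helper variable used throughout the tutorial to build paths to datasets,
# images, and training runs.
HOME = os.getcwd()
print("Working directory:", HOME)

# Quick sanity check that the installation worked (prints version, GPU, etc.).
import ultralytics

ultralytics.checks()
```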
API and dataset highlights:
- Two new ways of interacting with the codebase are introduced: a CLI and a Python SDK. The CLI packs what used to be separate scripts into a single command built around the concepts of task (detection, segmentation, or classification) and mode (for example, predict), while the SDK offers a more Pythonic way of working: import YOLO from ultralytics and call methods on the model object.
- Running prediction through the CLI takes a few seconds, prints what was detected, and saves the results into the runs directory (runs/detect/predict), from which the result image can be loaded and displayed. The SDK run loads the model into memory and produces the same detections, so the two outputs can be compared directly: one person, one car, one dog in both cases (see the prediction sketch below).
- YOLOv8 supports exactly the same dataset format as YOLOv5, so an existing YOLOv5 dataset can be reused for retraining; otherwise a new dataset can be created with Roboflow (create an account or sign in at roboflow.com and start a new project).
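
A rough sketch of the two prediction paths compared above; the weights file, source image, and confidence threshold are placeholders rather than the exact values used in the video:

```python
# Prediction with the Python SDK. The equivalent CLI call (run as a shell
# command) is built from a task and a mode, e.g.:
#   yolo task=detect mode=predict model=yolov8n.pt source=dog.jpeg conf=0.25
from ultralytics import YOLO

# Load a pretrained detection checkpoint (downloaded automatically on first use).
model = YOLO("yolov8n.pt")

# Run prediction on a sample image; save=True writes the annotated image
# under runs/detect/predict, mirroring the CLI behaviour described above.
results = model.predict(source="dog.jpeg", conf=0.25, save=True)

# Inspect what was detected (e.g. one person, one car, one dog).
for result in results:
    for box in result.boxes:
        print(model.names[int(box.cls)], float(box.conf))
```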
Chapter 4 (9:31-14:13): Football player detection process and YOLOv8 dataset training
Summary: Outlines the process of creating a football player detection project on Roboflow, including uploading images, annotating them, and generating a dataset version with augmentations, emphasizing that prediction quality can be improved iteratively. It then covers downloading the dataset, training with adjusted hyperparameters, and obtaining training results in roughly an hour.
Dataset creation highlights:
- The project is created by selecting object detection as the task, naming it "Football Players Detection", keeping the default license, and making it a public project; images (and annotations, if available) are then dragged and dropped into the upload screen.
- A labeling job is created and assigned, and annotation consists of drawing bounding boxes around the objects the model should detect. The work is tedious, but it is enough to annotate part of the dataset, train, and improve prediction quality iteratively.
- Annotations can be approved or rejected before generating the dataset version; a light augmentation (a horizontal flip) is added, and after a few minutes the dataset is ready.
Download highlights:
- A feature added specifically for the YOLOv8 launch generates a code snippet that downloads the freshly created dataset in YOLOv8 format; it can be copied straight into the notebook, and the download starts immediately (a sketch of such a snippet follows).
- Keep in mind that the generated snippet contains your API key in plain text, so be careful not to commit it accidentally to a public repository.
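
The generated snippet typically looks like the sketch below; the workspace name, project name, version number, and API key are placeholders, not the values from the video:

```python
# Download the dataset in YOLOv8 format using the Roboflow Python package
# (installed with `pip install roboflow`). All identifiers below are
# placeholders; Roboflow generates the real values for your project.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")  # keep this key out of public repositories
project = rf.workspace("your-workspace").project("football-players-detection")
dataset = project.version(1).download("yolov8")

# The download location contains the images plus the data.yaml file that is
# passed to YOLOv8 during training.
print(dataset.location)
```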
Training highlights:
- Training is launched from the notebook after adjusting a single hyperparameter: the number of epochs is reduced from 100 to around 25. The dataset is fairly large, so training takes about an hour to complete (a minimal training call is sketched below).
- Once training has completed, the results are reviewed by looking at the charts YOLOv8 saves into the runs directory.
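
A minimal training sketch for that step; epochs=25 matches the adjustment described above, while the checkpoint, image size, and data.yaml path are placeholders or assumptions rather than the exact settings shown in the video:

```python
# Train YOLOv8 on the downloaded dataset. The checkpoint (yolov8s.pt) and
# image size are assumptions; epochs=25 matches the adjustment in the video.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")  # small pretrained detection checkpoint
model.train(
    data="path/to/data.yaml",  # the data.yaml produced by the Roboflow download
    epochs=25,
    imgsz=640,
)
# Weights and training charts (results.png, confusion_matrix.png, etc.) are
# written under runs/detect/train/.
```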
Chapter 5 (14:13-20:30): YOLOv8 object detection: evaluation, inference, and deployment
Summary: Analyzes the results of the YOLOv8 training run through the confusion matrix, the key tracked metrics, validation, and inference, revealing promising accuracy with room for further training. It then demonstrates an inference speed of around 80 FPS with a model trained for about 30 minutes, and shows how easy it is to deploy the model and use it through the hosted API.
Evaluation highlights:
- The confusion matrix shows how the model handles each class: goalkeepers are detected and classified correctly 74% of the time, while the ball is detected only 26% of the time, pointing to a clear area for improvement.
- All classes apart from the ball are detected correctly most of the time, a result the author is more than satisfied with given only 30 to 40 minutes of training.
- Among the charts of key metrics tracked by YOLOv8, the box loss and class loss for the training and validation sets are the most interesting; the curves are still steep, which suggests the model is converging and that training for more epochs could further improve the results.
- Running the validation script against the test dataset yields the true mean average precision (mAP) metrics, giving a more reliable picture of the model's performance (sketched below).
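
A sketch of that validation step; the weights path is the default location YOLOv8 writes to, and passing the dataset yaml and test split explicitly is an assumption about how the run is configured:

```python
# Validate the best checkpoint produced by training. The weights path is the
# default runs/detect/train/weights/best.pt location; adjust it if your run
# directory differs.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")
metrics = model.val(data="path/to/data.yaml", split="test")

# Mean average precision across all classes: mAP@0.5:0.95 and mAP@0.5.
print(metrics.box.map, metrics.box.map50)
```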
Inference highlights:
- Inference on test images shows referees and goalkeepers being detected reliably, while the ball remains challenging, most likely because of occlusions.
- Inference on a single video frame takes around 12 to 13 milliseconds, which means roughly 80 FPS is well within reach with a model trained for only about 30 minutes (see the sketch below).
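
An inference sketch matching that step; the video path and confidence threshold are placeholders:

```python
# Run the trained model on a video clip; annotated frames are saved under
# runs/detect/predict. The source path and confidence threshold are placeholders.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")
model.predict(source="football_clip.mp4", conf=0.25, save=True)
```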
Deployment highlights:
- Deploying the freshly trained model takes a single line of code: calling deploy on the project version with the model type and the path to the train directory uploads the weights to the server (a sketch follows).
- Once the upload is verified in the Roboflow account, the model can be used for inference through the hosted API, with code snippets available in multiple languages.
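
A sketch of that deployment call using the Roboflow Python package; the workspace, project, version, API key, and the exact argument names are assumptions based on the description above (model type plus the path to the training run directory):

```python
# Upload the trained weights to Roboflow so the model can be served through
# the hosted API. Identifiers and argument names below are placeholders /
# assumptions based on the video's description.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("football-players-detection")
project.version(1).deploy(model_type="yolov8", model_path="runs/detect/train/")
```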
The video closes by noting that this is part one of a YOLOv8 series: upcoming videos will cover instance segmentation, classification, and comparisons with previous object detection models.