title
OpenCV Python Tutorial - Find Lanes for Self-Driving Cars (Computer Vision Basics Tutorial)

description
Simulate Self-Driving Cars with Computer Vision & Deep Learning - Full Course on sale for $10! (normally $200): https://www.udemy.com/applied-deep-learningtm-the-complete-self-driving-car-course/?couponCode=YOUTUBE09 Rayan Slim's channel: https://www.youtube.com/channel/UCY-XVeC8oCIm9tfX7qqt0Xw Road Image Link: https://github.com/rslim087a/road-image (for Computer Vision tutorial 1) Road Video Link: https://github.com/rslim087a/road-video (for last Computer Vision tutorial) This video was done in collaboration with Rayan Slim and ProgrammingKnowledge. Computer Vision helps the computer see the world as we do. Learn & Master Computer Vision techniques in this fun and exciting video with top instructor Rayan Slim. You'll go from beginner to Computer Vision competent and your instructor will complete each task with you step by step on screen. By the end of the tutorial, you will be able to build a lane-detection algorithm fuelled entirely by Computer Vision. Feel the real power of Python and programming! The course offers you a unique approach of learning how to code by solving real world problems. #ProgrammingKnowledge #ComputerVision #OpenCV ★★★Top Online Courses From ProgrammingKnowledge ★★★ Python Programming Course ➡️ http://bit.ly/2vsuMaS ⚫️ http://bit.ly/2GOaeQB Java Programming Course ➡️ http://bit.ly/2GEfQMf ⚫️ http://bit.ly/2Vvjy4a Bash Shell Scripting Course ➡️ http://bit.ly/2DBVF0C ⚫️ http://bit.ly/2UM06vF Linux Command Line Tutorials ➡️ http://bit.ly/2IXuil0 ⚫️ http://bit.ly/2IXukt8 C Programming Course ➡️ http://bit.ly/2GQCiD1 ⚫️ http://bit.ly/2ZGN6ej C++ Programming Course ➡️ http://bit.ly/2V4oEVJ ⚫️ http://bit.ly/2XMvqMs PHP Programming Course ➡️ http://bit.ly/2XP71WH ⚫️ http://bit.ly/2vs3od6 Android Development Course ➡️ http://bit.ly/2UHih5H ⚫️ http://bit.ly/2IMhVci C# Programming Course ➡️ http://bit.ly/2Vr7HEl ⚫️ http://bit.ly/2W6RXTU JavaFx Programming Course ➡️ http://bit.ly/2XMvZWA ⚫️ http://bit.ly/2V2CoAi NodeJs Programming Course ➡️ http://bit.ly/2GPg7gA ⚫️ http://bit.ly/2GQYTQ2 Jenkins Course For Developers and DevOps ➡️ http://bit.ly/2Wd4l4W ⚫️ http://bit.ly/2J1B1ug Scala Programming Tutorial Course ➡️ http://bit.ly/2PysyA4 ⚫️ http://bit.ly/2PCaVj2 Bootstrap Responsive Web Design Tutorial ➡️ http://bit.ly/2DFQ2yC ⚫️ http://bit.ly/2VoJWwH MongoDB Tutorial Course ➡️ http://bit.ly/2LaCJfP ⚫️ http://bit.ly/2WaI7Ap QT C++ GUI Tutorial For Beginners ➡️ http://bit.ly/2vwqHSZ ★★★ Online Courses to learn ★★★ Get 2 FREE Months of Unlimited Classes from skillshare - https://skillshare.eqcm.net/r1KEj Data Science - http://bit.ly/2lD9h5L | http://bit.ly/2lI8wIl Machine Learning - http://bit.ly/2WGGQpb | http://bit.ly/2GghLXX Artificial Intelligence - http://bit.ly/2lYqaYx | http://bit.ly/2NmaPya MERN Stack E-Degree Program - http://bit.ly/2kx2NFe | http://bit.ly/2lWj4no DevOps E-degree - http://bit.ly/2k1PwUQ | http://bit.ly/2k8Ypfy Data Analytics with R - http://bit.ly/2lBKqz8 | http://bit.ly/2lAjos3 AWS Certification Training - http://bit.ly/2kmLtTu | http://bit.ly/2lAkQL1 Projects in Java - http://bit.ly/2kzn25d | http://bit.ly/2lBMffs Machine Learning With TensorFlow - http://bit.ly/2m1z3AF | http://bit.ly/2lBMhnA Angular 8 - Complete Essential Guide - http://bit.ly/2lYvYRP Kotlin Android Development Masterclass - http://bit.ly/2GcblsI Learn iOS Programming Building Advance Projects - http://bit.ly/2kyX7ue ★★★ Follow ★★★ My Website - http://www.codebind.com DISCLAIMER: This video and description contains affiliate links, which means that if you click on one of the product 
links, I’ll receive a small commission. This helps support the channel and allows us to continue to make videos like this. Thank you for the support!

detail
{'title': 'OpenCV Python Tutorial - Find Lanes for Self-Driving Cars (Computer Vision Basics Tutorial)', 'heatmap': [{'end': 677.485, 'start': 615.576, 'weight': 0.704}, {'end': 1507.044, 'start': 1450.573, 'weight': 0.778}, {'end': 1816.516, 'start': 1758.941, 'weight': 0.816}, {'end': 2435.645, 'start': 2378.374, 'weight': 0.72}, {'end': 3681.701, 'start': 3623.961, 'weight': 0.77}, {'end': 3999.205, 'start': 3936.41, 'weight': 0.73}, {'end': 4718.777, 'start': 4612.97, 'weight': 1}], 'summary': 'The tutorial delves into using computer vision techniques with opencv and python to detect lane lines for self-driving cars, covering installation of anaconda distribution and atom text editor, grayscale conversion, edge detection, hough transform for lane identification, binary number representation, array reshaping, and video processing with opencv.', 'chapters': [{'end': 362.377, 'segs': [{'end': 31.767, 'src': 'embed', 'start': 0.229, 'weight': 0, 'content': [{'end': 0.99, 'text': 'Hello everyone.', 'start': 0.229, 'duration': 0.761}, {'end': 6.833, 'text': 'in this tutorial, I will teach you to use effective computer vision techniques with OpenCV and Python,', 'start': 0.99, 'duration': 5.843}, {'end': 10.275, 'text': 'ultimately to detect lane lines for a simulated self-driving car.', 'start': 6.833, 'duration': 3.442}, {'end': 16.719, 'text': 'This video was done in collaboration with the Programming Knowledge YouTube channel and by the time you finish this tutorial,', 'start': 10.656, 'duration': 6.063}, {'end': 22.803, 'text': "if you're interested in more self-driving car content, feel free to check out the link in the description below, but without further ado,", 'start': 16.719, 'duration': 6.084}, {'end': 23.984, 'text': "let's start this tutorial!", 'start': 22.803, 'duration': 1.181}, {'end': 26.784, 'text': 'Welcome to your first lesson.', 'start': 25.523, 'duration': 1.261}, {'end': 31.767, 'text': "This lesson is quite simple, as all we're going to be doing is installing the Anaconda distribution.", 'start': 27.064, 'duration': 4.703}], 'summary': 'Learn to use computer vision with opencv and python to detect lane lines for a self-driving car.', 'duration': 31.538, 'max_score': 0.229, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy4229.jpg'}, {'end': 70.248, 'src': 'embed', 'start': 40.99, 'weight': 1, 'content': [{'end': 49.434, 'text': "The Anaconda distribution conveniently installs Python, the Jupyter Notebook app, which we're going to use quite often throughout this course,", 'start': 40.99, 'duration': 8.444}, {'end': 52.235, 'text': 'and over 150 other scientific packages.', 'start': 49.434, 'duration': 2.801}, {'end': 57.778, 'text': "Since we're installing Python for Mac, make sure to navigate to the Mac section.", 'start': 53.276, 'duration': 4.502}, {'end': 70.248, 'text': "And we're going to install Python 3, not Python 2.", 'start': 65.626, 'duration': 4.622}], 'summary': 'Anaconda distribution includes python, jupyter notebook, and 150+ scientific packages. 
install python 3 for mac.', 'duration': 29.258, 'max_score': 40.99, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy440990.jpg'}, {'end': 211.409, 'src': 'embed', 'start': 159.784, 'weight': 4, 'content': [{'end': 166.369, 'text': 'Alternatively, you could have just accessed your terminal by pressing on F4 and performing the search here.', 'start': 159.784, 'duration': 6.585}, {'end': 175.136, 'text': 'But regardless, to ensure a successful installation, write the command Python 3 double dash version.', 'start': 166.809, 'duration': 8.327}, {'end': 185.432, 'text': 'and I get a version of 3.6.4.', 'start': 181.607, 'duration': 3.825}, {'end': 187.576, 'text': 'Make sure you also get a Python 3 version.', 'start': 185.433, 'duration': 2.143}, {'end': 188.938, 'text': 'That is all.', 'start': 188.437, 'duration': 0.501}, {'end': 190.64, 'text': 'Hope you were able to follow along.', 'start': 189.178, 'duration': 1.462}, {'end': 195.267, 'text': 'If you have any issues with the installation, feel free to ask me in the Q&A section.', 'start': 190.86, 'duration': 4.407}, {'end': 197.079, 'text': 'Welcome back.', 'start': 196.479, 'duration': 0.6}, {'end': 200.422, 'text': "We'll be making use of the Atom text editor in the computer vision section.", 'start': 197.28, 'duration': 3.142}, {'end': 207.286, 'text': 'You can feel free to use any text editor you want, like Sublime or Vim, in which case feel free to skip this lesson.', 'start': 200.882, 'duration': 6.404}, {'end': 211.409, 'text': "Otherwise, if you don't have a text editor installed, let's get to it.", 'start': 207.927, 'duration': 3.482}], 'summary': 'Ensure successful python 3 installation, version 3.6.4. use atom text editor for computer vision section.', 'duration': 51.625, 'max_score': 159.784, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy4159784.jpg'}, {'end': 269.406, 'src': 'embed', 'start': 241.335, 'weight': 3, 'content': [{'end': 248.241, 'text': "if you just simply click on the appropriate link and then wait for it to finish setting up the download, whether you're on Mac or windows.", 'start': 241.335, 'duration': 6.906}, {'end': 255.262, 'text': 'All right, once your installation is complete, it should be inside of your Downloads folder.', 'start': 251.201, 'duration': 4.061}, {'end': 258.702, 'text': "Let us just open up Atom and see what it's like.", 'start': 255.362, 'duration': 3.34}, {'end': 262.344, 'text': "It's verifying this new application.", 'start': 260.663, 'duration': 1.681}, {'end': 265.184, 'text': "We're going to open it.", 'start': 264.084, 'duration': 1.1}, {'end': 267.065, 'text': 'All right.', 'start': 266.605, 'duration': 0.46}, {'end': 269.406, 'text': "We're going to close the following.", 'start': 268.165, 'duration': 1.241}], 'summary': 'Demonstrating software installation and usage on mac and windows.', 'duration': 28.071, 'max_score': 241.335, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy4241335.jpg'}], 'start': 0.229, 'title': 'Using computer vision with opencv and python and atom text editor installation', 'summary': 'Demonstrates the use of effective computer vision techniques with opencv and python to detect lane lines for a simulated self-driving car, starting with installing the anaconda distribution which includes python, jupyter notebook, and over 150 scientific packages. 
it also covers the installation process of atom text editor, including downloading, installation, and configuration, with a focus on terminal commands and editor settings, aiming to ensure a successful python 3 installation and editor setup.', 'chapters': [{'end': 97.866, 'start': 0.229, 'title': 'Using computer vision with opencv and python', 'summary': 'Demonstrates the use of effective computer vision techniques with opencv and python to detect lane lines for a simulated self-driving car, starting with installing the anaconda distribution which includes python, jupyter notebook, and over 150 scientific packages.', 'duration': 97.637, 'highlights': ['The chapter demonstrates the use of effective computer vision techniques with OpenCV and Python to detect lane lines for a simulated self-driving car. This tutorial teaches the use of computer vision techniques with OpenCV and Python for detecting lane lines.', 'The Anaconda distribution includes Python, the Jupyter Notebook app, and over 150 scientific packages. Anaconda distribution installs Python, Jupyter Notebook app, and over 150 other scientific packages.', 'The tutorial focuses on installing the Anaconda distribution, emphasizing the installation of Python 3 for compatibility and future improvements. The tutorial emphasizes the installation of the Anaconda distribution, specifically Python 3, for compatibility with future Python improvements.']}, {'end': 362.377, 'start': 101.208, 'title': 'Atom text editor installation', 'summary': 'Covers the installation process of atom text editor, including downloading, installation, and configuration, with a focus on terminal commands and editor settings, aiming to ensure a successful python 3 installation and editor setup.', 'duration': 261.169, 'highlights': ['The installation process of Atom text editor, including downloading, installation, and configuration The transcript provides a detailed guide on the installation process of Atom text editor, covering the steps from downloading the application from atom.io or tech spot to its installation and initial setup, such as accessing the terminal, verifying the installation, and configuring the editor settings.', "Terminal commands for successful installation and Python 3 version check The chapter emphasizes the importance of closing any open terminal windows, opening a new terminal window, and executing the command 'Python 3 double dash version' to verify the Python 3 version, ensuring a successful installation.", 'Editor settings configuration for Atom text editor The transcript provides detailed instructions for configuring the Atom text editor settings, including modifying font family, font size, enabling cursor and indentation indicators, setting tab length, and enabling autosave, aimed at optimizing the editor for Pythonic code and convenient coding experience.']}], 'duration': 362.148, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy4229.jpg', 'highlights': ['The chapter demonstrates the use of effective computer vision techniques with OpenCV and Python to detect lane lines for a simulated self-driving car.', 'The Anaconda distribution includes Python, the Jupyter Notebook app, and over 150 scientific packages.', 'The tutorial emphasizes the installation of the Anaconda distribution, specifically Python 3, for compatibility with future Python improvements.', 'The installation process of Atom text editor, including downloading, installation, and configuration, is covered in detail.', 
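The installation chapter summarized above has the viewer confirm the install by running `python3 --version` in a terminal. A roughly equivalent check from inside Python is sketched below; it assumes the Anaconda Python 3 interpreter is active and that the opencv-python package has already been installed (the transcript has not installed OpenCV at this point, so the import check is only a convenience).

```python
# Quick sanity check of the environment described above.
# Assumes an Anaconda Python 3 interpreter and, optionally, the opencv-python
# package (e.g. installed with `pip install opencv-python`).
import sys

print(sys.version)  # should report a Python 3.x version, e.g. 3.6.4

try:
    import cv2
    print("OpenCV version:", cv2.__version__)
except ImportError:
    print("OpenCV is not installed yet - install it before the computer vision lessons.")
```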
'The chapter emphasizes the importance of verifying the Python 3 version using terminal commands for successful installation.', 'The transcript provides detailed instructions for configuring the Atom text editor settings, aimed at optimizing the editor for Pythonic code and convenient coding experience.']}, {'end': 1480.501, 'segs': [{'end': 391.126, 'src': 'embed', 'start': 365.896, 'weight': 0, 'content': [{'end': 371.443, 'text': 'The purpose of this section is to build a program that can identify lane lines in a picture or a video.', 'start': 365.896, 'duration': 5.547}, {'end': 376.189, 'text': 'When you and I drive a car, we can see where the lane lines are using our eyes.', 'start': 371.903, 'duration': 4.286}, {'end': 378.271, 'text': "A car doesn't have any eyes.", 'start': 376.989, 'duration': 1.282}, {'end': 384.919, 'text': "And that's where computer vision comes in, which, through complex algorithms, helps the computer see the world as we do.", 'start': 378.912, 'duration': 6.007}, {'end': 391.126, 'text': "In our case, we'll be using it to see the road and identify lane lines in a series of camera images.", 'start': 385.76, 'duration': 5.366}], 'summary': 'Developing a program for identifying lane lines using computer vision in car camera images.', 'duration': 25.23, 'max_score': 365.896, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy4365896.jpg'}, {'end': 460.881, 'src': 'embed', 'start': 433.775, 'weight': 1, 'content': [{'end': 438.078, 'text': 'Inside of finding lanes, make a new Python file called lanes.py.', 'start': 433.775, 'duration': 4.303}, {'end': 446.366, 'text': "Inside of the lanes file, we're going to start by writing a program that can identify lanes in a JPEG image.", 'start': 441.201, 'duration': 5.165}, {'end': 449.229, 'text': 'I posted the image on my GitHub.', 'start': 447.588, 'duration': 1.641}, {'end': 452.272, 'text': 'To access it, make sure to go to the following link.', 'start': 449.449, 'duration': 2.823}, {'end': 455.455, 'text': 'The link is also available in the description below.', 'start': 452.753, 'duration': 2.702}, {'end': 460.881, 'text': 'And so once you get to this GitHub page, click on testimage.jpeg.', 'start': 455.475, 'duration': 5.406}], 'summary': 'Create lanes.py to identify lanes in a jpeg image from the github link.', 'duration': 27.106, 'max_score': 433.775, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy4433775.jpg'}, {'end': 677.485, 'src': 'heatmap', 'start': 615.576, 'weight': 0.704, 'content': [{'end': 622.703, 'text': 'What this will do is it will display our window, our result window, infinitely until we press anything in our keyboard.', 'start': 615.576, 'duration': 7.127}, {'end': 631.669, 'text': 'If we rerun the code python lanes, the image is displayed and notice our window name results.', 'start': 623.644, 'duration': 8.025}, {'end': 633.67, 'text': "we'll keep this lesson short and stop here.", 'start': 631.669, 'duration': 2.001}, {'end': 637.492, 'text': 'you learn how to load and display images using the opencv library.', 'start': 633.67, 'duration': 3.822}, {'end': 640.995, 'text': "in the next lesson we'll start discussing canny edge detection,", 'start': 637.492, 'duration': 3.503}, {'end': 648.059, 'text': "a technique that we'll use to write a program that can detect edges in an image and thereby single out the lane lines.", 'start': 640.995, 'duration': 7.064}, {'end': 650.999, 'text': 
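The segment above loads the road photo into lanes.py and shows it in a window named 'result' until a key is pressed. A minimal sketch of that first script, assuming the image was saved under the file name used on the linked GitHub page (testimage.jpeg):

```python
import cv2

# Read the road photo downloaded from the GitHub link in the description.
# cv2.imread returns a NumPy array of pixel intensities (BGR channel order).
image = cv2.imread('testimage.jpeg')

# Display the array in a window named 'result' and block until a key is pressed.
cv2.imshow('result', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```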
'welcome to lesson number two.', 'start': 649.557, 'duration': 1.442}, {'end': 658.166, 'text': 'the goal of the next few videos will be to make use of an edge detection algorithm, the canny edge detection technique.', 'start': 650.999, 'duration': 7.167}, {'end': 662.971, 'text': 'the goal of edge detection is to identify the boundaries of objects within images.', 'start': 658.166, 'duration': 4.805}, {'end': 670.639, 'text': "in essence, we'll be using edge detection to try and find regions in an image where there is a sharp change in intensity, a sharp change in color.", 'start': 662.971, 'duration': 7.668}, {'end': 677.485, 'text': "Before diving into this, it's important to recognize that an image can be read as a matrix, an array of pixels.", 'start': 671.46, 'duration': 6.025}], 'summary': 'Learned to load and display images using opencv. next, exploring canny edge detection to identify boundaries of objects in images.', 'duration': 61.909, 'max_score': 615.576, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy4615576.jpg'}, {'end': 682.009, 'src': 'embed', 'start': 649.557, 'weight': 2, 'content': [{'end': 650.999, 'text': 'welcome to lesson number two.', 'start': 649.557, 'duration': 1.442}, {'end': 658.166, 'text': 'the goal of the next few videos will be to make use of an edge detection algorithm, the canny edge detection technique.', 'start': 650.999, 'duration': 7.167}, {'end': 662.971, 'text': 'the goal of edge detection is to identify the boundaries of objects within images.', 'start': 658.166, 'duration': 4.805}, {'end': 670.639, 'text': "in essence, we'll be using edge detection to try and find regions in an image where there is a sharp change in intensity, a sharp change in color.", 'start': 662.971, 'duration': 7.668}, {'end': 677.485, 'text': "Before diving into this, it's important to recognize that an image can be read as a matrix, an array of pixels.", 'start': 671.46, 'duration': 6.025}, {'end': 682.009, 'text': 'A pixel contains the light intensity at some location in the image.', 'start': 678.326, 'duration': 3.683}], 'summary': 'Using canny edge detection to identify image boundaries and sharp intensity changes.', 'duration': 32.452, 'max_score': 649.557, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy4649557.jpg'}, {'end': 973.007, 'src': 'embed', 'start': 924.327, 'weight': 3, 'content': [{'end': 929.109, 'text': 'In the last lesson we applied step number one which was to convert our image to grayscale.', 'start': 924.327, 'duration': 4.782}, {'end': 932.83, 'text': 'Step two is to now reduce noise and smoothen our image.', 'start': 929.529, 'duration': 3.301}, {'end': 941.673, 'text': "When detecting edges, while it's important to accurately catch as many edges in the image as possible, we must filter out any image noise.", 'start': 933.871, 'duration': 7.802}, {'end': 946.735, 'text': 'Image noise can create false edges and ultimately affect edge detection.', 'start': 942.273, 'duration': 4.462}, {'end': 951.717, 'text': "That's why it's imperative to filter it out, and thus smoothen the image.", 'start': 947.535, 'duration': 4.182}, {'end': 956.859, 'text': 'Filtering out image noise and smoothening will be done with a Gaussian filter.', 'start': 952.697, 'duration': 4.162}, {'end': 964.643, 'text': 'To understand the concept of a Gaussian filter, recall that an image is stored as a collection of discrete pixels.', 'start': 957.78, 'duration': 
6.863}, {'end': 973.007, 'text': 'Each of the pixels for a grayscale image is represented by a single number that describes the brightness of the pixel.', 'start': 965.904, 'duration': 7.103}], 'summary': 'Lesson: apply grayscale, reduce noise, smoothen image using gaussian filter', 'duration': 48.68, 'max_score': 924.327, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy4924327.jpg'}, {'end': 1044.404, 'src': 'embed', 'start': 1016.649, 'weight': 6, 'content': [{'end': 1022.673, 'text': "what we're doing is applying a Gaussian blur on a grayscale image with a 5 by 5 kernel.", 'start': 1016.649, 'duration': 6.024}, {'end': 1026.615, 'text': 'The size of the kernel is dependent on specific situations.', 'start': 1023.373, 'duration': 3.242}, {'end': 1030.217, 'text': 'A 5 by 5 kernel is a good size for most cases.', 'start': 1027.295, 'duration': 2.922}, {'end': 1035.961, 'text': 'But ultimately, what that will do is return a new image that we simply called blur.', 'start': 1031.058, 'duration': 4.903}, {'end': 1044.404, 'text': 'Applying the gaussian blur by convolving our image with a kernel of gaussian values reduces noise in our image.', 'start': 1036.82, 'duration': 7.584}], 'summary': 'Applying a 5x5 gaussian blur on a grayscale image reduces noise.', 'duration': 27.755, 'max_score': 1016.649, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy41016649.jpg'}, {'end': 1256.34, 'src': 'embed', 'start': 1228.67, 'weight': 8, 'content': [{'end': 1238.195, 'text': 'It computes the gradient in all directions of our blurred image and is then going to trace our strongest gradients as a series of white pixels.', 'start': 1228.67, 'duration': 9.525}, {'end': 1243.438, 'text': 'But notice these two arguments, low threshold and high threshold.', 'start': 1239.256, 'duration': 4.182}, {'end': 1250.759, 'text': 'Well, this actually allows us to isolate the adjacent pixels that follow the strongest gradients.', 'start': 1244.258, 'duration': 6.501}, {'end': 1256.34, 'text': 'If the gradient is larger than the upper threshold, then it is accepted as an edge pixel.', 'start': 1251.379, 'duration': 4.961}], 'summary': 'Gradient computation isolates strong edge pixels with high threshold.', 'duration': 27.67, 'max_score': 1228.67, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy41228670.jpg'}, {'end': 1317.54, 'src': 'embed', 'start': 1280.187, 'weight': 9, 'content': [{'end': 1286.391, 'text': 'now that we know what goes on under the hood, we can call this function inside of our project by writing.', 'start': 1280.187, 'duration': 6.204}, {'end': 1291.633, 'text': 'canny is equal to cv2 dot.', 'start': 1286.391, 'duration': 5.242}, {'end': 1301.537, 'text': "canny we'll apply the Canny method on the blurred image with low and high thresholds of 50 and 150..", 'start': 1291.633, 'duration': 9.904}, {'end': 1307.323, 'text': "And now we'll show the image gradient instead of the blurred image, canny.", 'start': 1301.537, 'duration': 5.786}, {'end': 1317.54, 'text': "If we go ahead and run the code Python lanes.py, and there's the gradient image,", 'start': 1308.765, 'duration': 8.775}], 'summary': 'Applying canny method with thresholds of 50 and 150 to the blurred image produces the gradient image.', 'duration': 37.353, 'max_score': 1280.187, 'thumbnail':
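The segments above walk through the three preprocessing steps: grayscale conversion, a 5 by 5 Gaussian blur, and cv2.Canny with low/high thresholds of 50 and 150. A sketch of how those steps might be grouped in lanes.py; the variable names mirror the transcript, and the BGR-to-gray flag is an assumption based on cv2.imread returning BGR images.

```python
import cv2

def canny_edge(image):
    # Step 1: collapse the three colour channels into one brightness channel.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Step 2: smooth with a 5x5 Gaussian kernel so image noise does not
    # produce false edges.
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Step 3: trace the strongest intensity gradients as white pixels.
    # Gradients above 150 are kept, below 50 rejected, and values in between
    # survive only if they connect to a strong edge.
    return cv2.Canny(blur, 50, 150)

image = cv2.imread('testimage.jpeg')
canny = canny_edge(image)
cv2.imshow('result', canny)
cv2.waitKey(0)
```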
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy41280187.jpg'}], 'start': 365.896, 'title': 'Lane line identification and image processing', 'summary': 'Introduces computer vision for identifying lane lines, grayscale conversion, gaussian blur, image gradients, and the canny method for edge detection in python.', 'chapters': [{'end': 799.532, 'start': 365.896, 'title': 'Lane line identification', 'summary': 'Introduces the concept of computer vision for identifying lane lines in images, setting up the initial stages of the project, displaying an image, and discussing the use of edge detection to identify boundaries in images.', 'duration': 433.636, 'highlights': ['The chapter introduces computer vision for identifying lane lines in images Computer vision is used to help the computer see the world and identify lane lines, which is essential in driving systems. This sets the foundation for the subsequent lessons.', 'Setting up the initial stages of the project and displaying an image The process involves creating a new project folder, opening it with a code editor, writing a program to identify lanes in a JPEG image, downloading and saving the test image, and using OpenCV to display the image.', 'Discussion of the use of edge detection to identify boundaries in images The chapter introduces the concept of using edge detection techniques, such as the Canny edge detection algorithm, to identify boundaries of objects within images and find regions with sharp changes in intensity, which is crucial for detecting lane lines.']}, {'end': 1125.917, 'start': 801.034, 'title': 'Image processing: grayscale conversion and gaussian blur', 'summary': 'Covers the implementation of grayscale conversion and gaussian blur on an image for edge detection, emphasizing the importance of reducing noise and smoothening the image, and introducing the concept of gaussian filter. step one involves converting the image to grayscale and step two involves reducing noise and smoothening the image with a gaussian blur.', 'duration': 324.883, 'highlights': ['Step two involves reducing noise and smoothening the image with a Gaussian blur The chapter emphasizes the importance of reducing noise and smoothening the image with a Gaussian blur for edge detection.', 'Step one involves converting the image to grayscale The chapter discusses the process of converting the image to grayscale for edge detection.', 'The importance of accurately catching as many edges as possible and filtering out any image noise is highlighted The chapter stresses the importance of accurately catching as many edges as possible and filtering out any image noise to improve edge detection.', 'Application of a 5 by 5 Gaussian blur kernel to reduce noise in the grayscale image The chapter explains the application of a 5 by 5 Gaussian blur kernel to effectively reduce noise in the grayscale image.', 'Introduction to the concept of Gaussian filter for reducing noise and smoothening the image The chapter introduces the concept of Gaussian filter for reducing noise and smoothening the image to improve edge detection.']}, {'end': 1480.501, 'start': 1126.558, 'title': 'Image gradients and canny method', 'summary': 'Explains the concept of image gradients, the canny method, and its implementation using python. 
it outlines the process of computing gradients, setting thresholds, and detecting edges in an image.', 'duration': 353.943, 'highlights': ['The Canny function computes gradients in all directions of the image, tracing the strongest gradients as a series of white pixels, and isolating adjacent pixels that follow the strongest gradients based on low and high thresholds. The Canny function computes gradients in all directions of the image, tracing the strongest gradients as a series of white pixels, and isolating adjacent pixels that follow the strongest gradients based on low and high thresholds.', 'The Canny method is applied to the blurred image with low and high thresholds of 50 and 150, producing a gradient image that outlines the edges corresponding to the most rapid changes in intensity. The Canny method is applied to the blurred image with low and high thresholds of 50 and 150, producing a gradient image that outlines the edges corresponding to the most rapid changes in intensity.', "The chapter emphasizes the importance of understanding the Canny method's workings before proceeding further, highlighting the significance of grasping the underlying process. The chapter emphasizes the importance of understanding the Canny method's workings before proceeding further, highlighting the significance of grasping the underlying process."]}], 'duration': 1114.605, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy4365896.jpg', 'highlights': ['The chapter introduces computer vision for identifying lane lines in images', 'The process involves creating a new project folder, opening it with a code editor, writing a program to identify lanes in a JPEG image, downloading and saving the test image, and using OpenCV to display the image', 'The chapter introduces the concept of using edge detection techniques, such as the Canny edge detection algorithm, to identify boundaries of objects within images and find regions with sharp changes in intensity, which is crucial for detecting lane lines', 'The chapter emphasizes the importance of reducing noise and smoothening the image with a Gaussian blur for edge detection', 'The chapter discusses the process of converting the image to grayscale for edge detection', 'The chapter stresses the importance of accurately catching as many edges as possible and filtering out any image noise to improve edge detection', 'The chapter explains the application of a 5 by 5 Gaussian blur kernel to effectively reduce noise in the grayscale image', 'The chapter introduces the concept of Gaussian filter for reducing noise and smoothening the image to improve edge detection', 'The Canny function computes gradients in all directions of the image, tracing the strongest gradients as a series of white pixels, and isolating adjacent pixels that follow the strongest gradients based on low and high thresholds', 'The Canny method is applied to the blurred image with low and high thresholds of 50 and 150, producing a gradient image that outlines the edges corresponding to the most rapid changes in intensity', "The chapter emphasizes the importance of understanding the Canny method's workings before proceeding further, highlighting the significance of grasping the underlying process"]}, {'end': 1857.932, 'segs': [{'end': 1746.855, 'src': 'embed', 'start': 1711.092, 'weight': 1, 'content': [{'end': 1719.879, 'text': "What we have to do now is fill this mask, this black image, with our polygon using OpenCV's fillPoly function.", 'start': 
1711.092, 'duration': 8.787}, {'end': 1725.063, 'text': 'That is cv2.fillPoly.', 'start': 1720.74, 'duration': 4.323}, {'end': 1730.427, 'text': 'We will fill our mask with our triangle.', 'start': 1726.764, 'duration': 3.663}, {'end': 1736.832, 'text': "The third argument specifies that the color of our polygon, which we're going to have be completely white.", 'start': 1731.468, 'duration': 5.364}, {'end': 1746.855, 'text': "So what we're going to do is take a triangle whose boundaries we defined over here and apply it on the mask,", 'start': 1739.031, 'duration': 7.824}], 'summary': "Using opencv's fillpoly function to fill a mask with a white triangle.", 'duration': 35.763, 'max_score': 1711.092, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy41711092.jpg'}, {'end': 1816.516, 'src': 'heatmap', 'start': 1758.941, 'weight': 0.816, 'content': [{'end': 1765.545, 'text': "And instead of showing the canny image, we'll be showing the return value of our function.", 'start': 1758.941, 'duration': 6.604}, {'end': 1767.946, 'text': 'Region of interest.', 'start': 1766.425, 'duration': 1.521}, {'end': 1772.675, 'text': "And the image that we're going to pass in is simply going to be the canny image.", 'start': 1769.574, 'duration': 3.101}, {'end': 1774.536, 'text': "Let's run the code.", 'start': 1773.676, 'duration': 0.86}, {'end': 1781.038, 'text': "Let's run python lanes.py and it would throw an exception.", 'start': 1775.536, 'duration': 5.502}, {'end': 1790.062, 'text': "And that's because fill poly, the fill poly function fills an area bounded by several polygons, not just one.", 'start': 1781.659, 'duration': 8.403}, {'end': 1794.886, 'text': "Even though you and I both know that we're dealing with only a single polygon.", 'start': 1790.922, 'duration': 3.964}, {'end': 1802.553, 'text': "we'll rename this variable from triangle to polygons for consistency and we'll set it equal to an array of polygons.", 'start': 1794.886, 'duration': 7.667}, {'end': 1806.457, 'text': 'In our case, an array of simply one polygon.', 'start': 1803.274, 'duration': 3.183}, {'end': 1809.56, 'text': 'Change this from triangle to polygons.', 'start': 1807.118, 'duration': 2.442}, {'end': 1812.343, 'text': 'If we rerun this code.', 'start': 1811.242, 'duration': 1.101}, {'end': 1816.516, 'text': 'there is our mask.', 'start': 1815.655, 'duration': 0.861}], 'summary': 'Code review: function returns wrong value, fixed by modifying variable names and input data.', 'duration': 57.575, 'max_score': 1758.941, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy41758941.jpg'}, {'end': 1869.793, 'src': 'embed', 'start': 1832.571, 'weight': 0, 'content': [{'end': 1836.354, 'text': 'Previously, we created a mask with the same dimensions as our road image.', 'start': 1832.571, 'duration': 3.783}, {'end': 1846.162, 'text': 'We then identified a region of interest in our road image with very specific vertices along the x and y axis that we then used to fill our mask.', 'start': 1836.774, 'duration': 9.388}, {'end': 1853.949, 'text': "The image on the right, why is it important?
Well, we're going to use it to only show a specific portion of the image.", 'start': 1847.143, 'duration': 6.806}, {'end': 1857.932, 'text': 'Everything else we want to mask.', 'start': 1856.651, 'duration': 1.281}, {'end': 1869.793, 'text': "So, to understand how we're going to use this image to mask our canny image, to only show the region of interest traced by the triangular polygon,", 'start': 1860.751, 'duration': 9.042}], 'summary': 'Created mask with specific vertices to show region of interest in road image.', 'duration': 37.222, 'max_score': 1832.571, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy41832571.jpg'}], 'start': 1481.262, 'title': 'Opencv function and masking', 'summary': 'Explains how to create a mask with specific vertices to identify a region of interest in an image using opencv functions and arrays, aiming to show only a specific portion of the image.', 'chapters': [{'end': 1857.932, 'start': 1481.262, 'title': 'Opencv function and masking', 'summary': 'Explains how to create a mask with specific vertices to identify a region of interest in an image, utilizing opencv functions and arrays, aiming to show only a specific portion of the image.', 'duration': 376.67, 'highlights': ['Creating a mask with specific vertices to identify a region of interest in the image Identified region of interest with vertices along the x and y axis', "Using OpenCV's fillPoly function to fill the mask with a specified polygon Utilized OpenCV's fillPoly function to fill the mask with a polygon", 'Showing only a specific portion of the image using the created mask The mask was created to show only a specific portion of the image']}], 'duration': 376.67, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy41481262.jpg', 'highlights': ['Creating a mask with specific vertices to identify a region of interest in the image', "Utilized OpenCV's fillPoly function to fill the mask with a polygon", 'The mask was created to show only a specific portion of the image']}, {'end': 2406.554, 'segs': [{'end': 1912.221, 'src': 'embed', 'start': 1885.458, 'weight': 1, 'content': [{'end': 1889.241, 'text': 'Commonly, when one thinks of binary representations, they think of zeros and ones.', 'start': 1885.458, 'duration': 3.783}, {'end': 1898.409, 'text': 'Well, more specifically, binary numbers are expressed in the base-two numeral system, which uses only two symbols, typically zeros and ones.', 'start': 1889.902, 'duration': 8.507}, {'end': 1907.517, 'text': 'What does that mean? For example, the number 23, its binary representation is 10111.', 'start': 1899.27, 'duration': 8.247}, {'end': 1912.221, 'text': "How did I obtain that number? Well, let's imagine eight placeholders, eight boxes.", 'start': 1907.517, 'duration': 4.704}], 'summary': 'Binary numbers use base-2 system with 0s and 1s, like 23 represented as 10111.', 'duration': 26.763, 'max_score': 1885.458, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy41885458.jpg'}, {'end': 2053.963, 'src': 'embed', 'start': 2001.336, 'weight': 0, 'content': [{'end': 2002.436, 'text': 'Does 1 go into 1?
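The fillPoly discussion above (including the array-of-polygons fix) can be condensed into one small helper. A sketch, intended to be called on the single-channel Canny edge image so the mask has the same shape; the triangle vertices are image-specific placeholders rather than the transcript's exact coordinates.

```python
import cv2
import numpy as np

def region_of_interest(image):
    """Return a black mask with a white triangle over the assumed lane area."""
    height = image.shape[0]
    # One triangular polygon roughly covering the lane area; the vertices are
    # placeholders and need tuning for the actual road photo.
    polygons = np.array([
        [(200, height), (1100, height), (550, 250)]
    ], dtype=np.int32)
    # Black mask with the same dimensions as the input image.
    mask = np.zeros_like(image)
    # fillPoly expects an array of polygons, hence the extra nesting above;
    # 255 paints the enclosed region completely white.
    cv2.fillPoly(mask, polygons, 255)
    return mask
```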
Indeed.', 'start': 2001.336, 'duration': 1.1}, {'end': 2006.616, 'text': 'And there is the binary representation of 23.', 'start': 2004.455, 'duration': 2.161}, {'end': 2012.96, 'text': "We can cut off the zeros in the beginning, and it's just as we said earlier, 10111.", 'start': 2006.616, 'duration': 6.344}, {'end': 2019.344, 'text': 'Alright. so why did I just randomly start talking about binary numbers??', 'start': 2012.96, 'duration': 6.384}, {'end': 2021.645, 'text': 'Well, the image on the right.', 'start': 2020.224, 'duration': 1.421}, {'end': 2024.127, 'text': 'I went ahead and printed out its pixel representation.', 'start': 2021.645, 'duration': 2.482}, {'end': 2027.589, 'text': 'I resized the array simply because it was too large.', 'start': 2025.187, 'duration': 2.402}, {'end': 2029.325, 'text': 'But never mind that.', 'start': 2028.424, 'duration': 0.901}, {'end': 2039.676, 'text': 'Notice how the triangular polygon translates to pixel intensities of 255, and the black surrounding region translates to pixel intensities of 0.', 'start': 2030.006, 'duration': 9.67}, {'end': 2042.6, 'text': "What's the binary representation of 0?", 'start': 2039.676, 'duration': 2.924}, {'end': 2051.86, 'text': 'Well, none of these numbers go into 0, so we leave 0 for each placeholder, leaving us with a binary representation of 0000..', 'start': 2042.6, 'duration': 9.26}, {'end': 2053.963, 'text': 'What about 255?', 'start': 2051.86, 'duration': 2.103}], 'summary': 'Binary representation of 23 is 10111. image pixels: 255 intensity, 0 intensity.', 'duration': 52.627, 'max_score': 2001.336, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy42001336.jpg'}, {'end': 2138.665, 'src': 'embed', 'start': 2107.129, 'weight': 4, 'content': [{'end': 2112.173, 'text': 'then the binary representation of every pixel intensity in that region would be all ones.', 'start': 2107.129, 'duration': 5.044}, {'end': 2114.375, 'text': 'Why is this important?', 'start': 2113.214, 'duration': 1.161}, {'end': 2124.584, 'text': "Well, we're going to apply this mask onto our canny image to ultimately only show the region of interest, the region traced by the polygonal contour.", 'start': 2115.156, 'duration': 9.428}, {'end': 2130.822, 'text': 'We do this by applying the bitwise AND operation between the two images.', 'start': 2126.28, 'duration': 4.542}, {'end': 2138.665, 'text': 'The bitwise AND operation occurs elementwise between the two images, between the two arrays of pixels.', 'start': 2131.702, 'duration': 6.963}], 'summary': 'Applying bitwise and operation to show region of interest in canny image.', 'duration': 31.536, 'max_score': 2107.129, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy42107129.jpg'}, {'end': 2360.477, 'src': 'embed', 'start': 2319.915, 'weight': 5, 'content': [{'end': 2324.696, 'text': "meaning that taking its bitwise AND with the ones didn't have an effect.", 'start': 2319.915, 'duration': 4.781}, {'end': 2332.919, 'text': 'And so in our image, taking the bitwise AND of these two regions would also have zero effect,', 'start': 2325.597, 'duration': 7.322}, {'end': 2342.521, 'text': "which means we've successfully masked our canny image to ultimately only show the region of interest, the region traced by the polygonal contour.", 'start': 2332.919, 'duration': 9.602}, {'end': 2360.477, 'text': "We can implement this by setting masked image is equal to cv2.bitwise_and, and we'll compute
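The arithmetic above (23 as 10111, mask pixels that are either 0 or 255) is easy to reproduce directly in Python, which also shows why ANDing against the mask either keeps or clears a pixel:

```python
# Binary views of the numbers discussed above.
print(format(23, '08b'))   # 00010111 -> 10111 once the leading zeros are dropped
print(format(0, '08b'))    # 00000000  (a black mask pixel)
print(format(255, '08b'))  # 11111111  (a white mask pixel)

# Bitwise AND against 255 leaves a value untouched; against 0 it clears it.
print(201 & 255)           # 201
print(201 & 0)             # 0
```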
the bitwise AND of both the canny and mask arrays.", 'start': 2346.022, 'duration': 14.455}], 'summary': 'Masking with bitwise AND successfully isolates region of interest.', 'duration': 40.562, 'max_score': 2319.915, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy42319915.jpg'}], 'start': 1860.751, 'title': 'Binary number representation and masking in image processing', 'summary': 'Explains binary number representation by converting decimal 23 to binary (10111) and covers binary masking in image processing, isolating regions using bitwise and operations and achieving successful masking of the traced region.', 'chapters': [{'end': 2001.075, 'start': 1860.751, 'title': 'Binary number representation', 'summary': 'Explains the binary number representation and demonstrates how to convert the decimal number 23 into its binary representation, using a base-2 numeral system, with a detailed explanation of each step and the resulting binary representation 10111.', 'duration': 140.324, 'highlights': ['The binary representation of the decimal number 23, using a base-2 numeral system, is explained in detail, with a step-by-step demonstration of converting the number into its binary form, resulting in 10111.', 'The concept of binary numbers and the base-2 numeral system is introduced, providing a fundamental understanding of how binary representations work and their relevance to digital data processing.', 'The explanation includes a detailed breakdown of the binary conversion process, showcasing the significance of each numerical place corresponding to an increasing power of 2 and the utilization of zeros and ones to represent the binary number.']}, {'end': 2406.554, 'start': 2001.336, 'title': 'Binary masking in image processing', 'summary': 'Covers the process of binary masking in image processing, using binary representations of pixel intensities and bitwise and operations to isolate the region of interest, ultimately achieving successful masking of the canny image to only display the traced region.', 'duration': 405.218, 'highlights': ['Binary representations of pixel intensities in the image are determined, with 0 translating to 0000 and 255 translating to 11111111. The binary representations of pixel intensities are discussed, with 0 resulting in 0000 and 255 resulting in 11111111.', 'The significance of the binary representations lies in the application of a mask to isolate the region of interest, achieved through bitwise AND operations between the images. The importance of binary representations is highlighted in their role in applying a mask to isolate the region of interest through bitwise AND operations.', 'By utilizing bitwise AND operations, the chapter demonstrates how the Canny image is successfully masked to display only the traced region, showcasing the practical implementation of binary masking.
The successful isolation of the region of interest in the Canny image through bitwise AND operations is showcased as a practical application of binary masking.']}], 'duration': 545.803, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy41860751.jpg', 'highlights': ['The binary representation of the decimal number 23, resulting in 10111, is explained in detail.', 'The concept of binary numbers and the base-2 numeral system is introduced, providing a fundamental understanding of binary representations.', 'The explanation includes a detailed breakdown of the binary conversion process, showcasing the significance of each numerical place and the utilization of zeros and ones.', 'Binary representations of pixel intensities in the image are determined, with 0 translating to 0000 and 255 translating to 11111111.', 'The significance of binary representations lies in the application of a mask to isolate the region of interest through bitwise AND operations.', 'The chapter demonstrates how the Canny image is successfully masked to display only the traced region, showcasing the practical implementation of binary masking.']}, {'end': 2727.384, 'segs': [{'end': 2495.786, 'src': 'embed', 'start': 2407.254, 'weight': 0, 'content': [{'end': 2416.378, 'text': 'The final step of lane detection will be to use the Hough transform technique to detect straight lines in our region of interest and thus identify the lane lines.', 'start': 2407.254, 'duration': 9.124}, {'end': 2421.72, 'text': "So far we've identified the edges in our image and isolated the region of interest.", 'start': 2417.558, 'duration': 4.162}, {'end': 2427.962, 'text': "Now we'll make use of a technique that will detect straight lines in the image and thus identify the lane lines.", 'start': 2422.4, 'duration': 5.562}, {'end': 2430.323, 'text': 'This technique is known as Hough Transform.', 'start': 2428.422, 'duration': 1.901}, {'end': 2435.645, 'text': "We'll start by drawing a 2D coordinate space of x and y and inside of it a straight line.", 'start': 2430.643, 'duration': 5.002}, {'end': 2442.587, 'text': 'We know that a straight line is represented by the equation y is equal to mx plus b.', 'start': 2436.665, 'duration': 5.922}, {'end': 2444.828, 'text': 'Nothing new so far, just simple math.', 'start': 2442.587, 'duration': 2.241}, {'end': 2449.787, 'text': 'For our straight line, has two parameters, m and b.', 'start': 2444.848, 'duration': 4.939}, {'end': 2460.788, 'text': "We're currently plotting it as a function of X and Y, but we can also represent this line in parametric space, which we will call Hough space,", 'start': 2450.882, 'duration': 9.906}, {'end': 2462.388, 'text': 'as B versus M.', 'start': 2460.788, 'duration': 1.6}, {'end': 2467.871, 'text': 'We know, the Y intercept of this line is two and the slope of the line is simply rise over.', 'start': 2462.388, 'duration': 5.483}, {'end': 2472.574, 'text': 'run the change in Y over the change in X, which evaluates to three.', 'start': 2467.871, 'duration': 4.703}, {'end': 2479.238, 'text': 'Given the Y intercept and slope, this entire line can be plotted as a single point in Hough space.', 'start': 2473.334, 'duration': 5.904}, {'end': 2487.703, 'text': 'Now imagine that instead of a line, we had a single dot located at the coordinates 12 and 2.', 'start': 2480.7, 'duration': 7.003}, {'end': 2495.786, 'text': 'There are many possible lines that can pass through this dot, each line with different values for m and b.', 
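Applying the mask element-wise, as described above, is a single OpenCV call. This sketch assumes canny and region_of_interest come from the earlier sketches, so the mask is single-channel and matches the shape of the edge image.

```python
import cv2

# canny comes from the earlier canny_edge sketch; region_of_interest builds
# the white-triangle mask with the same single-channel shape as canny.
mask = region_of_interest(canny)

# Wherever a mask byte is 11111111 the edge pixel survives;
# wherever it is 00000000 the result is forced to black.
masked_image = cv2.bitwise_and(canny, mask)
cv2.imshow('result', masked_image)
cv2.waitKey(0)
```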
'start': 2487.703, 'duration': 8.083}], 'summary': 'Using hough transform to detect lane lines in region of interest.', 'duration': 88.532, 'max_score': 2407.254, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy42407254.jpg'}, {'end': 2630.212, 'src': 'embed', 'start': 2607.128, 'weight': 5, 'content': [{'end': 2616.335, 'text': 'Each point in that line denotes different values for m and b, which once again correspond to different lines that can pass through this point.', 'start': 2607.128, 'duration': 9.207}, {'end': 2623.648, 'text': 'But notice that there is another intersection at the same point,', 'start': 2619.246, 'duration': 4.402}, {'end': 2630.212, 'text': 'which means that the line with the following slope and y-intercept 4 and 4 crosses all three of our dots.', 'start': 2623.648, 'duration': 6.564}], 'summary': 'Multiple lines intersect at one point, with slope 4 and y-intercept 4.', 'duration': 23.084, 'max_score': 2607.128, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy42607128.jpg'}, {'end': 2733.797, 'src': 'embed', 'start': 2704.258, 'weight': 1, 'content': [{'end': 2710.84, 'text': "For every point of intersection, we're going to cast a vote inside of the bin that it belongs to.", 'start': 2704.258, 'duration': 6.582}, {'end': 2715.721, 'text': "The bin with the maximum number of votes, that's going to be your line.", 'start': 2711.52, 'duration': 4.201}, {'end': 2720.062, 'text': 'Whatever m and b value that this bin belongs to.', 'start': 2716.341, 'duration': 3.721}, {'end': 2727.384, 'text': "that's the line that we're going to draw, since it was voted as the line of best fit in describing our data.", 'start': 2720.062, 'duration': 7.322}, {'end': 2733.797, 'text': "Now that we know the theory of how we're going to identify lines in our gradient image,", 'start': 2728.675, 'duration': 5.122}], 'summary': 'Using voting method to identify line of best fit in gradient image.', 'duration': 29.539, 'max_score': 2704.258, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy42704258.jpg'}], 'start': 2407.254, 'title': 'Hough transform for lane detection', 'summary': 'Covers the usage of hough transform for lane identification, including plotting lines in hough space, determining intersections, and detecting lines in gradient images by using a grid-based approach. 
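The grid-based voting described above can be illustrated with a toy NumPy accumulator: every point votes for all (slope, intercept) bins consistent with it, and the bin with the most votes wins. The three points below are made up to lie on y = 4x + 4, matching the slope and intercept quoted in the transcript; OpenCV's own accumulator works in rho-theta space, covered further down.

```python
import numpy as np

# Toy illustration of grid-based Hough voting in (slope, intercept) space.
# Three points chosen to lie on y = 4x + 4.
points = [(0, 4), (1, 8), (2, 12)]

# Candidate slopes and intercepts: the "bins" of the accumulator grid.
slopes = np.arange(-10, 11)
intercepts = np.arange(-20, 21)
accumulator = np.zeros((len(slopes), len(intercepts)), dtype=int)

for x, y in points:
    for i, m in enumerate(slopes):
        b = y - m * x                          # intercept of the line through (x, y) with slope m
        j = int(np.argmin(np.abs(intercepts - b)))
        if abs(intercepts[j] - b) < 0.5:       # vote only if b lands inside a bin
            accumulator[i, j] += 1

# The bin with the maximum number of votes describes the line of best fit.
i, j = np.unravel_index(np.argmax(accumulator), accumulator.shape)
print('line of best fit: y = {}x + {}'.format(slopes[i], intercepts[j]))   # y = 4x + 4
```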
it also demonstrates a specific example of a line with a slope and y-intercept of 4 crossing multiple points.', 'chapters': [{'end': 2576.253, 'start': 2407.254, 'title': 'Lane detection with hough transform', 'summary': 'Explains the concept of hough transform and its usage to detect straight lines in an image for lane identification, with emphasis on plotting lines in hough space using parametric representation.', 'duration': 168.999, 'highlights': ['Hough Transform technique used to detect straight lines in region of interest for lane identification The chapter discusses the use of Hough transform technique to detect straight lines in the region of interest for lane identification, providing a systematic approach to identifying lane lines.', 'Parametric representation of lines in Hough space explained using slope and intercept The parametric representation of lines in Hough space is explained using the slope-intercept form, highlighting the process of plotting lines in Hough space based on slope and intercept values.', 'Illustration of representing points in x and y space as lines in Hough space The concept of representing points in x and y space as lines in Hough space is illustrated, demonstrating the relationship between points in image space and lines in Hough space.']}, {'end': 2630.212, 'start': 2577.133, 'title': 'Intersection in hough space', 'summary': 'Explains how to determine the intersection in hough space to find the line consistent with crossing points, with a specific example showing a line with a slope and y-intercept of 4 crossing multiple points.', 'duration': 53.079, 'highlights': ['The line with the following slope and y-intercept 4 and 4 crosses all three of our dots.', 'Each point in that line denotes different values for m and b, corresponding to different lines that can pass through this point.', 'The point of intersection represents the m and b values of a line consistent with crossing both of our points, with a slope and y-intercept of 4.']}, {'end': 2727.384, 'start': 2631.638, 'title': 'Detecting lines in gradient images', 'summary': 'Explains how to detect lines in a gradient image by using a hough space divided into a grid, where each bin corresponds to the slope and y-intercept values of a candidate line, and the line with the maximum votes in a bin is considered the line of best fit.', 'duration': 95.746, 'highlights': ['The Hough space is divided into a grid, with each bin corresponding to the slope and y-intercept value of a candidate line, allowing for the identification of lines in the gradient image.', 'For every point of intersection, a vote is cast inside the corresponding bin, and the line with the maximum number of votes is considered the line of best fit, providing a quantitative method for identifying lines in the image.']}], 'duration': 320.13, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy42407254.jpg', 'highlights': ['The Hough space is divided into a grid, with each bin corresponding to the slope and y-intercept value of a candidate line, allowing for the identification of lines in the gradient image.', 'For every point of intersection, a vote is cast inside the corresponding bin, and the line with the maximum number of votes is considered the line of best fit, providing a quantitative method for identifying lines in the image.', 'Parametric representation of lines in Hough space explained using slope and intercept The parametric representation of lines in Hough space is explained 
using the slope-intercept form, highlighting the process of plotting lines in Hough space based on slope and intercept values.', 'The concept of representing points in x and y space as lines in Hough space is illustrated, demonstrating the relationship between points in image space and lines in Hough space.', 'Hough Transform technique used to detect straight lines in region of interest for lane identification The chapter discusses the use of Hough transform technique to detect straight lines in the region of interest for lane identification, providing a systematic approach to identifying lane lines.', 'The line with the following slope and y-intercept 4 and 4 crosses all three of our dots.', 'Each point in that line denotes different values for m and b, corresponding to different lines that can pass through this point.', 'The point of intersection represents the m and b values of a line consistent with crossing both of our points, with a slope and y-intercept of 4.']}, {'end': 3070.614, 'segs': [{'end': 2773.612, 'src': 'embed', 'start': 2746.641, 'weight': 1, 'content': [{'end': 2755.404, 'text': 'Obviously, if you try to compute the slope of a vertical line, the change in x is zero, which ultimately will always evaluate to a slope of infinity.', 'start': 2746.641, 'duration': 8.763}, {'end': 2758.785, 'text': 'which is not something that we can represent in Hough space.', 'start': 2756.184, 'duration': 2.601}, {'end': 2762.227, 'text': 'Infinity is not really something we can work with anyway.', 'start': 2759.706, 'duration': 2.521}, {'end': 2773.612, 'text': "We need a more robust representation of lines so that we don't encounter any numeric problems, because clearly this form y is equal to mx,", 'start': 2762.667, 'duration': 10.945}], 'summary': 'Slope of a vertical line is always infinity, not representable in hough space.', 'duration': 26.971, 'max_score': 2746.641, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy42746641.jpg'}, {'end': 2817.175, 'src': 'embed', 'start': 2788.009, 'weight': 3, 'content': [{'end': 2794.117, 'text': 'such that our line equation can be written as rho is equal to x cos theta plus y sin theta.', 'start': 2788.009, 'duration': 6.108}, {'end': 2800.183, 'text': "If you're really interested in how this equation is derived, feel free to ask me in the Q&A.", 'start': 2795.098, 'duration': 5.085}, {'end': 2806.308, 'text': 'but the main idea is that this is still the equation of a line, but in polar coordinates.', 'start': 2800.183, 'duration': 6.125}, {'end': 2817.175, 'text': 'If I am to draw some line in Cartesian space, the variable rho is the perpendicular distance from the origin to that line,', 'start': 2807.089, 'duration': 10.086}], 'summary': 'Line equation in polar coordinates with rho=x*cos(theta)+y*sin(theta)', 'duration': 29.166, 'max_score': 2788.009, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy42788009.jpg'}, {'end': 2965.557, 'src': 'embed', 'start': 2940.908, 'weight': 0, 'content': [{'end': 2947.05, 'text': 'Because imagine instead of one point, we had 10 points, which in turn results in 10 sinusoidal curves.', 'start': 2940.908, 'duration': 6.142}, {'end': 2953.212, 'text': 'As previously noted, if the curves of different points intersect in Hough space,', 'start': 2948.09, 'duration': 5.122}, {'end': 2958.774, 'text': 'then these points belong to the same line characterized by some rho and theta value.', 'start': 2953.212, 
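The polar form above, rho = x cos(theta) + y sin(theta), turns every point into a sinusoidal curve in Hough space. A small numerical check, with two made-up points on the same line, that the two curves meet at a single (rho, theta) pair:

```python
import numpy as np

def hough_curve(x, y, thetas):
    # rho as a function of theta for a single point: rho = x*cos(theta) + y*sin(theta)
    return x * np.cos(thetas) + y * np.sin(thetas)

thetas = np.linspace(0, np.pi, 1801)   # candidate angles, 0 to 180 degrees

# Two made-up points on the line y = -2x + 7; unlike the slope/intercept form,
# this polar parameterisation would also handle a perfectly vertical line.
curve_a = hough_curve(1, 5, thetas)
curve_b = hough_curve(3, 1, thetas)

# Where the two sinusoids coincide is the (rho, theta) of the shared line.
idx = int(np.argmin(np.abs(curve_a - curve_b)))
print('theta ~ {:.3f} rad, rho ~ {:.3f}'.format(thetas[idx], curve_a[idx]))
```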
'duration': 5.562}, {'end': 2965.557, 'text': 'So just like before, a line can be detected by finding the number of intersections between curves.', 'start': 2959.394, 'duration': 6.163}], 'summary': '10 points result in 10 sinusoidal curves, intersecting curves in hough space indicate same line, detected by finding intersections.', 'duration': 24.649, 'max_score': 2940.908, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy42940908.jpg'}], 'start': 2728.675, 'title': 'Line detection', 'summary': 'Discusses the challenge of identifying vertical lines in gradient image, limitations of representing infinity in hough space, and introduces hough transform for line detection using rho and theta to characterize and detect lines in image space by voting in the hough space.', 'chapters': [{'end': 2762.227, 'start': 2728.675, 'title': 'Identifying lines in gradient image', 'summary': 'Explains the challenge of identifying vertical lines in the gradient image and the limitation of representing infinity in hough space.', 'duration': 33.552, 'highlights': ['The challenge of identifying vertical lines in the gradient image due to the computation of slope resulting in infinity.', 'Limitation of representing infinity in Hough space as it is not a workable value.']}, {'end': 3070.614, 'start': 2762.667, 'title': 'Hough transform for line detection', 'summary': 'Introduces the concept of representing lines in polar coordinates, using rho and theta, to detect lines in image space, where the line is characterized by the intersection of sinusoidal curves and the line of best fit is determined by voting in the hough space.', 'duration': 307.947, 'highlights': ['The line equation is expressed in polar coordinates as rho = x cos(theta) + y sin(theta), enabling the representation of lines that cannot be expressed by the Cartesian coordinate system.', 'In Hough space, the intersection of sinusoidal curves representing different values for rho and theta of lines passing through a point indicates that the points belong to the same line, with the line of best fit determined by the bin with the maximum number of votes.', 'The concept of finding the line that best fits the data and defining the edge points in the gradient image is achieved through the Hough Transform, which utilizes the parameters rho and theta to represent and detect lines in image space.']}], 'duration': 341.939, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy42728675.jpg', 'highlights': ['The Hough Transform utilizes the parameters rho and theta to represent and detect lines in image space.', 'The challenge of identifying vertical lines in the gradient image due to the computation of slope resulting in infinity.', 'Limitation of representing infinity in Hough space as it is not a workable value.', 'The line equation is expressed in polar coordinates as rho = x cos(theta) + y sin(theta), enabling the representation of lines that cannot be expressed by the Cartesian coordinate system.', 'In Hough space, the intersection of sinusoidal curves representing different values for rho and theta of lines passing through a point indicates that the points belong to the same line, with the line of best fit determined by the bin with the maximum number of votes.']}, {'end': 3519.55, 'segs': [{'end': 3174.683, 'src': 'embed', 'start': 3147.383, 'weight': 0, 'content': [{'end': 3152.668, 'text': 'The second and third arguments are really important as they 
specify the size of the bins.', 'start': 3147.383, 'duration': 5.285}, {'end': 3159.835, 'text': 'Rho is the distance resolution of the accumulator in pixels and theta is the angle resolution of the accumulator in radians.', 'start': 3153.449, 'duration': 6.386}, {'end': 3164.359, 'text': 'The larger the bins, the less precision in which lines are going to be detected.', 'start': 3160.396, 'duration': 3.963}, {'end': 3168.501, 'text': 'For example, imagine every bin in our array was so large.', 'start': 3165.02, 'duration': 3.481}, {'end': 3174.683, 'text': 'this is way too coarse, in the sense that too many intersections are going to occur inside of a single bin.', 'start': 3168.501, 'duration': 6.182}], 'summary': 'Specifying the size of bins is crucial for detecting lines accurately.', 'duration': 27.3, 'max_score': 3147.383, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy43147383.jpg'}, {'end': 3241.894, 'src': 'embed', 'start': 3215.562, 'weight': 1, 'content': [{'end': 3221.889, 'text': "To demonstrate the effect of this early on, here's a sneak peek of the end result when we finally detect our lines.", 'start': 3215.562, 'duration': 6.327}, {'end': 3230.198, 'text': "In this picture, the bin resolution was a rho of 20 pixels and 5 degrees, whereas here it's 2 pixels with a single degree precision.", 'start': 3222.39, 'duration': 7.808}, {'end': 3233.623, 'text': 'Clearly, this one is much more precise in its output.', 'start': 3231.019, 'duration': 2.604}, {'end': 3235.445, 'text': "So that's it for resolution.", 'start': 3234.123, 'duration': 1.322}, {'end': 3237.408, 'text': 'The fourth argument is very simple.', 'start': 3235.665, 'duration': 1.743}, {'end': 3238.489, 'text': "It's the threshold.", 'start': 3237.488, 'duration': 1.001}, {'end': 3241.894, 'text': 'To find and display the lines from a series of dots.', 'start': 3238.509, 'duration': 3.385}], 'summary': 'Comparison of line detection resolutions: 20 pixels/5 degrees vs. 2 pixels/1 degree, with the latter being much more precise.', 'duration': 26.332, 'max_score': 3215.562, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy43215562.jpg'}, {'end': 3392.407, 'src': 'embed', 'start': 3371.015, 'weight': 2, 'content': [{'end': 3380.32, 'text': 'This indicates the maximum distance in pixels between segmented lines which we will allow to be connected into a single line instead of them being broken up.', 'start': 3371.015, 'duration': 9.305}, {'end': 3382.041, 'text': "That's all guys.", 'start': 3381.241, 'duration': 0.8}, {'end': 3387.124, 'text': 'We just set up an algorithm that can detect lines in our cropped gradient image.', 'start': 3382.121, 'duration': 5.003}, {'end': 3392.407, 'text': 'Now comes the fun part, which is to actually display these lines into our real image.', 'start': 3387.944, 'duration': 4.463}], 'summary': 'Algorithm detects lines in cropped gradient image, allowing display in real image.', 'duration': 21.392, 'max_score': 3371.015, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy43371015.jpg'}], 'start': 3076.209, 'title': 'Hough transform for line detection', 'summary': 'Focuses on implementing hough transform to detect lines in an image, emphasizing resolution parameters like rho and theta, and the threshold for minimum intersections, with a demonstration of different resolutions. 
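
As context for the parameters above: each edge pixel (x, y) votes for the (rho, theta) bins of every line that could pass through it, where rho = x cos(theta) + y sin(theta), and the second and third arguments of the Hough call set the size of those bins. A minimal sketch of the call as it might appear in this project is shown below; the 2 pixel / 1 degree resolution comes from the comparison above, while the threshold of 100 votes and the minimum line length of 40 pixels are illustrative values rather than figures quoted in this section, and cropped_image is an assumed name for the masked gradient image.

    import cv2
    import numpy as np

    # cropped_image: the masked gradient (Canny) image described earlier (assumed name).
    lines = cv2.HoughLinesP(
        cropped_image,
        2,                 # rho: distance resolution of the accumulator bins, in pixels
        np.pi / 180,       # theta: angle resolution of the bins, in radians (1 degree)
        100,               # threshold: minimum number of votes a bin needs (illustrative)
        np.array([]),      # placeholder output array
        minLineLength=40,  # illustrative: discard segments shorter than 40 px
        maxLineGap=5       # maximum gap in pixels between segments that may be joined
    )

Finer rho and theta values mean smaller bins and therefore more precise lines, which is exactly the 20 px / 5 degree versus 2 px / 1 degree contrast described above.
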
it also covers setting up an algorithm to detect and display lines in the cropped gradient image.', 'chapters': [{'end': 3519.55, 'start': 3076.209, 'title': 'Detecting lines with hough transform', 'summary': 'Discusses implementing hough transform to detect lines in an image, emphasizing the importance of resolution parameters such as rho and theta, as well as the threshold for minimum number of intersections, with a demonstration of the effect of different resolutions. additionally, it covers setting up an algorithm to detect and display lines in the cropped gradient image.', 'duration': 443.341, 'highlights': ['The Hough accumulator array resolution parameters rho and theta, as well as the threshold for minimum number of intersections, are crucial in detecting lines with precision. The chapter emphasizes the significance of resolution parameters rho and theta, and the threshold for minimum number of intersections, in detecting lines with precision.', 'The demonstration of the effect of different resolutions shows the impact on the precision of detecting lines, with a comparison between two different bin resolutions. A comparison between two different bin resolutions demonstrates the impact on the precision of detecting lines, highlighting the importance of resolution parameters.', 'Setting up an algorithm to detect and display lines in the cropped gradient image is discussed, with details on the implementation of the function and the usage of a placeholder array. The chapter covers the setup of an algorithm to detect and display lines in the cropped gradient image, including details on the implementation of the function and the usage of a placeholder array.']}], 'duration': 443.341, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy43076209.jpg', 'highlights': ['The Hough accumulator array resolution parameters rho and theta, as well as the threshold for minimum number of intersections, are crucial in detecting lines with precision.', 'The demonstration of the effect of different resolutions shows the impact on the precision of detecting lines, with a comparison between two different bin resolutions.', 'Setting up an algorithm to detect and display lines in the cropped gradient image is discussed, with details on the implementation of the function and the usage of a placeholder array.']}, {'end': 3947.594, 'segs': [{'end': 3579.554, 'src': 'embed', 'start': 3546.988, 'weight': 0, 'content': [{'end': 3552.573, 'text': 'Thanks to OpenCV, we can write CV2 dot line.', 'start': 3546.988, 'duration': 5.585}, {'end': 3556.737, 'text': 'This function draws a line segment connecting two points.', 'start': 3553.214, 'duration': 3.523}, {'end': 3560.22, 'text': "We'll draw our lines on the line image.", 'start': 3557.377, 'duration': 2.843}, {'end': 3563.988, 'text': 'the black image we just created.', 'start': 3562.307, 'duration': 1.681}, {'end': 3571.07, 'text': 'The second and third arguments specify in which coordinates of the image space that we want to draw the lines.', 'start': 3564.668, 'duration': 6.402}, {'end': 3579.554, 'text': 'So the second argument will be the first point of the line segment, which is simply X1, Y1.', 'start': 3571.711, 'duration': 7.843}], 'summary': 'Opencv enables drawing line segments connecting two points on an image.', 'duration': 32.566, 'max_score': 3546.988, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy43546988.jpg'}, {'end': 3681.701, 'src': 
'heatmap', 'start': 3623.961, 'weight': 0.77, 'content': [{'end': 3631.625, 'text': 'and that is all, all of the lines we detected in the gradient image in our cropped image.', 'start': 3623.961, 'duration': 7.664}, {'end': 3638.908, 'text': 'we just drew them onto a black image which has the same dimensions as our road image.', 'start': 3631.625, 'duration': 7.283}, {'end': 3651.745, 'text': "now let's just go ahead and return the line image, return line image And over here we're going to show the line image instead of cropped image.", 'start': 3638.908, 'duration': 12.837}, {'end': 3657.927, 'text': "And back to our terminal, we'll rerun our code, python lanes.py.", 'start': 3652.505, 'duration': 5.422}, {'end': 3669.593, 'text': 'And as expected, it shows the lines that we detected using the Hough transform and it displayed them on a black image.', 'start': 3657.947, 'duration': 11.646}, {'end': 3675.997, 'text': 'The final step is to blend this image to our original color image.', 'start': 3670.413, 'duration': 5.584}, {'end': 3681.701, 'text': 'That way the lines show up on the lanes instead of some black screen.', 'start': 3676.598, 'duration': 5.103}], 'summary': 'Detected lines in the image using hough transform and blended with original image.', 'duration': 57.74, 'max_score': 3623.961, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy43623961.jpg'}, {'end': 3921.984, 'src': 'embed', 'start': 3882.628, 'weight': 1, 'content': [{'end': 3888.834, 'text': 'In the last lesson, we detected lines from a series of points in the gradient image using the Hough transform detection algorithm.', 'start': 3882.628, 'duration': 6.206}, {'end': 3898.183, 'text': 'We then took these lines and placed them on a blank image, which we then merged with our color image, ultimately displaying the lines onto our lanes.', 'start': 3889.495, 'duration': 8.688}, {'end': 3902.587, 'text': "What we'll do now is further optimize how these lines are displayed.", 'start': 3899.044, 'duration': 3.543}, {'end': 3910.294, 'text': "It's important to first recognize that the lines currently displayed correspond to bins which exceeded the voting threshold.", 'start': 3903.388, 'duration': 6.906}, {'end': 3913.737, 'text': 'They were voted as the lines which best described our data.', 'start': 3910.694, 'duration': 3.043}, {'end': 3916.64, 'text': "What we'll do now is, instead of having multiple lines,", 'start': 3914.398, 'duration': 2.242}, {'end': 3921.984, 'text': 'we can average out their slope and y-intercept into a single line that traces out both of our lanes.', 'start': 3916.64, 'duration': 5.344}], 'summary': 'Optimizing lane display by averaging multiple lines into a single line.', 'duration': 39.356, 'max_score': 3882.628, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy43882628.jpg'}], 'start': 3519.55, 'title': 'Reshaping arrays and drawing lines with opencv, and detecting and displaying lane lines', 'summary': 'Discusses reshaping arrays into a one-dimensional array with four elements and drawing lines onto a blank image using opencv, specifying coordinates and colors. 
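
As a rough sketch of the drawing and blending steps just described (the function name display_lines, the 10 px thickness, and the variable names are assumptions consistent with the transcript, not quotations from it), the detected segments can be drawn in blue on a black image of the same dimensions as the road image and then blended into the colour frame:

    import cv2
    import numpy as np

    def display_lines(image, lines):
        line_image = np.zeros_like(image)      # black image with the same dimensions as the road image
        if lines is not None:
            for line in lines:
                x1, y1, x2, y2 = line.reshape(4)   # unpack the segment's two endpoints
                # BGR (255, 0, 0) draws in blue; the 10 px thickness is an illustrative choice.
                cv2.line(line_image, (x1, y1), (x2, y2), (255, 0, 0), 10)
        return line_image

    line_image = display_lines(lane_image, lines)
    # Blend the two images; the 0.8 and 1 weights are the ones mentioned in the highlights below.
    combo_image = cv2.addWeighted(lane_image, 0.8, line_image, 1, 1)
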
it also covers detecting and displaying lane lines, including using the hough transform technique to identify lines in a gradient image, drawing them onto a black image with the same dimensions as the original road image, blending the line image with the original color image to display the lines on the lanes, and further optimizing the line display by averaging out the slope and y-intercept of multiple lines into a single line.', 'chapters': [{'end': 3614.495, 'start': 3519.55, 'title': 'Reshaping arrays and drawing lines with opencv', 'summary': 'Discusses reshaping arrays into a one-dimensional array with four elements and drawing lines onto a blank image using opencv, specifying coordinates and colors.', 'duration': 94.945, 'highlights': ['We can simply unpack the array elements into four different variables, X1, Y1, X2, Y2, instead of setting line is equal to line dot reshape four.', 'Thanks to OpenCV, we can use CV2 dot line to draw line segments connecting two points on the blank image, specifying coordinates and color.', 'The second and third arguments of CV2 dot line specify the coordinates of the image space for drawing lines, while the next argument determines the color as BGR 255, 0, 0 for blue.', 'The function draws a line segment connecting two points, and the specified BGR color of 255, 0, and 0 will result in a blue color.']}, {'end': 3947.594, 'start': 3615.496, 'title': 'Detecting and displaying lane lines', 'summary': 'Covers the process of detecting and displaying lane lines, including using the hough transform technique to identify lines in a gradient image, drawing them onto a black image with the same dimensions as the original road image, blending the line image with the original color image to display the lines on the lanes, and further optimizing the line display by averaging out the slope and y-intercept of multiple lines into a single line.', 'duration': 332.098, 'highlights': ['Using the Hough transform technique to identify lines in a gradient image The chapter covers the process of using the Hough transform technique to identify lines in a gradient image, which is a key step in detecting lane lines.', 'Blending the line image with the original color image to display the lines on the lanes The process involves blending the line image with the original color image, using the add weighted function with a weight of 0.8 for the lane image and 1 for the line image to define the lines more clearly.', 'Further optimizing the line display by averaging out the slope and y-intercept of multiple lines into a single line The optimization includes averaging out the slope and y-intercept of multiple lines into a single line that traces out both lanes, improving the representation of the lane lines.']}], 'duration': 428.044, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy43519550.jpg', 'highlights': ['Using CV2 dot line to draw line segments connecting two points on the blank image', 'Detecting lane lines using the Hough transform technique', 'Blending line image with original color image to display lines on lanes', 'Averaging out slope and y-intercept of multiple lines into a single line']}, {'end': 4742.37, 'segs': [{'end': 4011.228, 'src': 'embed', 'start': 3978.449, 'weight': 3, 'content': [{'end': 3987.574, 'text': "that will be our function name and we'll pass into it our colored lane image as well as the lines that we detected.", 'start': 3978.449, 'duration': 9.125}, {'end': 3999.205, 'text': "And now we'll 
simply define the function right on top, def average slope intercept with argument image and lines.", 'start': 3989.025, 'duration': 10.18}, {'end': 4004.382, 'text': "and what we'll do first is we'll declare two empty lists.", 'start': 4000.959, 'duration': 3.423}, {'end': 4008.005, 'text': 'left fit is equal to an empty list.', 'start': 4004.382, 'duration': 3.623}, {'end': 4011.228, 'text': 'right fit is also equal to an empty list.', 'start': 4008.005, 'duration': 3.223}], 'summary': "Defining a function 'average slope intercept' with image and lines as arguments, initializing empty lists for left and right fit.", 'duration': 32.779, 'max_score': 3978.449, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy43978449.jpg'}, {'end': 4080.656, 'src': 'embed', 'start': 4051.726, 'weight': 4, 'content': [{'end': 4058.168, 'text': "when you're given the points of a line, it's very easy to compute the slope by calculating the change in y over the change in x,", 'start': 4051.726, 'duration': 6.442}, {'end': 4061.99, 'text': 'subbing that into our equation to then determine the y-intercept.', 'start': 4058.168, 'duration': 3.822}, {'end': 4071.053, 'text': 'well, to determine these parameters in code, what we can do is set parameters is equal to numpy.polyfit.', 'start': 4061.99, 'duration': 9.063}, {'end': 4080.656, 'text': 'What polyfit will do for us is it will fit a first degree polynomial which would simply be a linear function of y is equal to mx plus b.', 'start': 4071.993, 'duration': 8.663}], 'summary': 'Given line points, calculate slope and y-intercept using numpy.polyfit.', 'duration': 28.93, 'max_score': 4051.726, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy44051726.jpg'}, {'end': 4394.372, 'src': 'embed', 'start': 4342.816, 'weight': 0, 'content': [{'end': 4353.711, 'text': "we'll go ahead and print left fit average, print right fit average and we'll go ahead and just label them for clarity.", 'start': 4342.816, 'duration': 10.895}, {'end': 4358.299, 'text': 'left and right back to our terminal.', 'start': 4353.711, 'duration': 4.588}, {'end': 4363.513, 'text': 'We get back to arrays.', 'start': 4361.912, 'duration': 1.601}, {'end': 4369.676, 'text': 'This array represents the average slope and y-intercept of a single line through the left side,', 'start': 4364.113, 'duration': 5.563}, {'end': 4374.538, 'text': 'and this array the average slope and y-intercept of a single line through the right side.', 'start': 4369.676, 'duration': 4.862}, {'end': 4376.699, 'text': "We're not out of the woods yet.", 'start': 4374.558, 'duration': 2.141}, {'end': 4382.662, 'text': "We have the slopes and y-intercepts of the lines that we'll eventually draw,", 'start': 4376.939, 'duration': 5.723}, {'end': 4389.626, 'text': "but we can't actually draw them unless we also specify their coordinates to actually specify where we want our lines to be placed.", 'start': 4382.662, 'duration': 6.964}, {'end': 4394.372, 'text': 'the x1, y1, x2, y2 for each line.', 'start': 4390.186, 'duration': 4.186}], 'summary': 'Data analysis revealed left and right fit averages, with slopes and y-intercepts for line placement.', 'duration': 51.556, 'max_score': 4342.816, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy44342816.jpg'}, {'end': 4718.777, 'src': 'heatmap', 'start': 4612.97, 'weight': 1, 'content': [{'end': 4617.473, 'text': 'So both of our 
lines are going to have the same vertical coordinates.', 'start': 4612.97, 'duration': 4.503}, {'end': 4620.856, 'text': "They're both going to start at the very bottom.", 'start': 4617.954, 'duration': 2.902}, {'end': 4630.402, 'text': "But they're going to start at the bottom and go upwards three fifths of the way up until the coordinate 424.", 'start': 4621.876, 'duration': 8.526}, {'end': 4636.687, 'text': 'But their horizontal coordinates are obviously dependent on their slope and y-intercept, which we calculated right here.', 'start': 4630.402, 'duration': 6.285}, {'end': 4638.308, 'text': 'We finally have our lines.', 'start': 4637.087, 'duration': 1.221}, {'end': 4640.309, 'text': 'We can return them as an array.', 'start': 4638.488, 'duration': 1.821}, {'end': 4649.05, 'text': 'Return numpy dot array left line and right line.', 'start': 4640.329, 'duration': 8.721}, {'end': 4660.375, 'text': "And now comes the fun part, which is instead of our line image being populated by the Hough-detected lines, we're going to pass in the average lines.", 'start': 4649.89, 'duration': 10.485}, {'end': 4667.838, 'text': 'If we show the line image instead, run the code.', 'start': 4662.796, 'duration': 5.042}, {'end': 4671.807, 'text': 'This looks a lot smoother.', 'start': 4669.605, 'duration': 2.202}, {'end': 4677.711, 'text': 'Instead of many lines, they were all averaged out into a single line on each side.', 'start': 4672.387, 'duration': 5.324}, {'end': 4684.896, 'text': "Back to our code, obviously we're still blending the line image with the color image.", 'start': 4678.592, 'duration': 6.304}, {'end': 4688.559, 'text': "So let's show that instead, combo image.", 'start': 4685.937, 'duration': 2.622}, {'end': 4691.862, 'text': 'Back to our terminal.', 'start': 4690.902, 'duration': 0.96}, {'end': 4693.703, 'text': "We'll rerun this.", 'start': 4692.883, 'duration': 0.82}, {'end': 4698.564, 'text': 'And it displays our two lines on our two lanes.', 'start': 4694.843, 'duration': 3.721}, {'end': 4703.626, 'text': 'We took the average of our lines and displayed one line on each side instead.', 'start': 4699.245, 'duration': 4.381}, {'end': 4706.407, 'text': 'This looks a lot smoother than earlier.', 'start': 4704.386, 'duration': 2.021}, {'end': 4718.777, 'text': 'One more thing before we end this lesson is that previously we were passing in a three dimensional array, the Hough lines, into our display lines function.', 'start': 4707.447, 'duration': 11.33}], 'summary': 'Lines with same vertical coordinates start at the bottom and go upwards three fifths of the way up until the coordinate 424. 
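
A minimal sketch of that coordinate computation (the helper name make_coordinates and the exact three-fifths fraction are assumptions based on this passage): both lines share the same vertical extent, from the bottom row of the image up to roughly three fifths of its height, and the horizontal coordinates are recovered from y = mx + b by solving for x.

    import numpy as np

    def make_coordinates(image, line_parameters):
        slope, intercept = line_parameters
        y1 = image.shape[0]                  # bottom of the image
        y2 = int(y1 * (3 / 5))               # roughly three fifths of the way up the frame
        x1 = int((y1 - intercept) / slope)   # from y = m*x + b  =>  x = (y - b) / m
        x2 = int((y2 - intercept) / slope)
        return np.array([x1, y1, x2, y2])

With the averaged parameters for each side, this gives one pair of endpoints per lane, and the two results can be returned together as numpy.array([left_line, right_line]) and passed to the display function in place of the raw Hough-detected segments.
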
the average lines produce a smoother image.', 'duration': 105.807, 'max_score': 4612.97, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy44612970.jpg'}, {'end': 4698.564, 'src': 'embed', 'start': 4662.796, 'weight': 2, 'content': [{'end': 4667.838, 'text': 'If we show the line image instead, run the code.', 'start': 4662.796, 'duration': 5.042}, {'end': 4671.807, 'text': 'This looks a lot smoother.', 'start': 4669.605, 'duration': 2.202}, {'end': 4677.711, 'text': 'Instead of many lines, they were all averaged out into a single line on each side.', 'start': 4672.387, 'duration': 5.324}, {'end': 4684.896, 'text': "Back to our code, obviously we're still blending the line image with the color image.", 'start': 4678.592, 'duration': 6.304}, {'end': 4688.559, 'text': "So let's show that instead, combo image.", 'start': 4685.937, 'duration': 2.622}, {'end': 4691.862, 'text': 'Back to our terminal.', 'start': 4690.902, 'duration': 0.96}, {'end': 4693.703, 'text': "We'll rerun this.", 'start': 4692.883, 'duration': 0.82}, {'end': 4698.564, 'text': 'And it displays our two lines on our two lanes.', 'start': 4694.843, 'duration': 3.721}], 'summary': 'Code smooths multiple lines into two averaged lines.', 'duration': 35.768, 'max_score': 4662.796, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy44662796.jpg'}], 'start': 3947.875, 'title': 'Function declaration and line initialization', 'summary': "Covers the declaration of a function 'average slope intercept' and initializing empty lists for storing coordinates, emphasizing consistency and variable naming. it also involves reshaping lines into one-dimensional arrays with four elements. additionally, it explains the process of calculating the slope and y-intercept of lines, averaging their values, and specifying their coordinates to draw smoother lines on an image.", 'chapters': [{'end': 4051.726, 'start': 3947.875, 'title': 'Function declaration and list initialization', 'summary': "Covers the declaration of a function 'average slope intercept' and the initialization of empty lists 'left fit' and 'right fit' for storing coordinates, as well as the importance of consistency and variable naming. 
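
The averaging routine outlined in this chapter can be sketched as follows; the structure and names are assumptions consistent with the description (numpy.polyfit fits a first-degree polynomial to each segment's endpoints, the sign of the slope separates left from right, and each side is averaged into a single line), and error handling for frames where one side has no detections is omitted.

    import numpy as np

    def average_slope_intercept(image, lines):
        left_fit = []    # (slope, intercept) pairs for segments on the left
        right_fit = []   # (slope, intercept) pairs for segments on the right
        for line in lines:
            x1, y1, x2, y2 = line.reshape(4)
            # Fit y = m*x + b through the segment's two endpoints.
            slope, intercept = np.polyfit((x1, x2), (y1, y2), 1)
            if slope < 0:
                left_fit.append((slope, intercept))    # lines on the left have a negative slope
            else:
                right_fit.append((slope, intercept))   # lines on the right have a positive slope
        left_fit_average = np.average(left_fit, axis=0)
        right_fit_average = np.average(right_fit, axis=0)
        left_line = make_coordinates(image, left_fit_average)
        right_line = make_coordinates(image, right_fit_average)
        return np.array([left_line, right_line])

This reuses the make_coordinates helper sketched earlier to turn each averaged (slope, intercept) pair into drawable x1, y1, x2, y2 coordinates.
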
it also involves reshaping lines into one-dimensional arrays with four elements.", 'duration': 103.851, 'highlights': ["The chapter covers the declaration of a function 'average slope intercept' and the initialization of empty lists 'left fit' and 'right fit' for storing coordinates.", 'It emphasizes the importance of consistency in coding to avoid bugs and the avoidance of reusing variable names.', 'The process involves reshaping each line into a one-dimensional array with four elements line.reshape and then unpacking the elements of the array into four variables.']}, {'end': 4742.37, 'start': 4051.726, 'title': 'Calculating slope and y-intercept', 'summary': 'Explains the process of calculating the slope and y-intercept of lines, determining their direction, averaging their values, and specifying their coordinates to draw smoother lines on an image.', 'duration': 690.644, 'highlights': ['Using numpy.polyfit to determine slope and y-intercept The polyfit function fits a first degree polynomial to x and y points, providing coefficients for slope and y-intercept.', 'Determining the direction of lines based on slope Lines on the left have a negative slope, while lines on the right have a positive slope, allowing for easy categorization based on slope value.', 'Averaging slope and y-intercept for smoother lines The process involves averaging out all slope and y-intercept values for both sides, resulting in single average values for left and right lines.', 'Specifying coordinates to draw lines on the image A function is defined to denote x and y coordinates of the lines based on slope and y-intercept, ensuring the lines start at the bottom and extend upwards with specified coordinates.', 'Displaying smoother lines on the image Using the averaged lines to populate the line image, resulting in a smoother display of a single line on each side instead of multiple lines.']}], 'duration': 794.495, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy43947875.jpg', 'highlights': ['The process involves averaging out all slope and y-intercept values for both sides, resulting in single average values for left and right lines.', 'A function is defined to denote x and y coordinates of the lines based on slope and y-intercept, ensuring the lines start at the bottom and extend upwards with specified coordinates.', 'Using the averaged lines to populate the line image, resulting in a smoother display of a single line on each side instead of multiple lines.', "The chapter covers the declaration of a function 'average slope intercept' and the initialization of empty lists 'left fit' and 'right fit' for storing coordinates.", 'Using numpy.polyfit to determine slope and y-intercept The polyfit function fits a first degree polynomial to x and y points, providing coefficients for slope and y-intercept.']}, {'end': 5180.087, 'segs': [{'end': 4788.785, 'src': 'embed', 'start': 4743.09, 'weight': 0, 'content': [{'end': 4750.652, 'text': 'And better yet, we can simply unpack each line into four variables over here and delete that.', 'start': 4743.09, 'duration': 7.562}, {'end': 4751.952, 'text': 'And that is all.', 'start': 4751.252, 'duration': 0.7}, {'end': 4754.853, 'text': "Let's rerun it to make sure we didn't make any mistakes.", 'start': 4752.232, 'duration': 2.621}, {'end': 4758.884, 'text': 'and everything still works out accordingly.', 'start': 4756.84, 'duration': 2.044}, {'end': 4766.881, 'text': "In the next lesson, we'll use the code that we currently 
have and take it up a notch by identifying lane lines in a video.", 'start': 4759.646, 'duration': 7.235}, {'end': 4769.9, 'text': 'Welcome to your last lesson of this section.', 'start': 4768.24, 'duration': 1.66}, {'end': 4775.742, 'text': 'In the last lesson, we finally finished our line detection algorithm and identified lane lines in our image.', 'start': 4770.301, 'duration': 5.441}, {'end': 4780.503, 'text': "What we'll do now is use that same algorithm to identify lines in a video.", 'start': 4776.482, 'duration': 4.021}, {'end': 4788.785, 'text': "This is the video and we'll use the algorithm we currently have to detect lane lines in every single frame.", 'start': 4781.203, 'duration': 7.582}], 'summary': 'Completed line detection algorithm to identify lane lines in image and video.', 'duration': 45.695, 'max_score': 4743.09, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy44743090.jpg'}, {'end': 4876.93, 'src': 'embed', 'start': 4838.254, 'weight': 2, 'content': [{'end': 4840.636, 'text': "but we'll go with this for now, just to keep things quick.", 'start': 4838.254, 'duration': 2.382}, {'end': 4850.225, 'text': 'Regardless, to capture this video in our workspace, we need to create a video capture object by setting a variable name.', 'start': 4841.457, 'duration': 8.768}, {'end': 4859.335, 'text': "cap is equal to cv2.VideoCapture and we'll capture the video test2.mp4 and while cap.isOpened,", 'start': 4850.225, 'duration': 9.11}, {'end': 4876.93, 'text': 'This returns true if video capturing has been initialized.', 'start': 4873.208, 'duration': 3.722}], 'summary': "To capture video in our workspace, create a video capture object named 'cap' equal to cv2.VideoCapture, capturing the video test2.mp4. 
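
A minimal sketch of the capture loop described here: process_frame is a hypothetical stand-in for the lane-detection pipeline built earlier, and the choice of the q key to quit is an assumption rather than something stated in this section.

    import cv2

    cap = cv2.VideoCapture("test2.mp4")
    while cap.isOpened():                  # True once video capturing has been initialized
        ret, frame = cap.read()            # ret becomes False when the video runs out of frames
        if not ret:
            break
        # process_frame (assumed name) stands in for the existing pipeline:
        # gradient image, region of interest, HoughLinesP, averaged lines, addWeighted blend.
        combo_image = process_frame(frame)
        cv2.imshow("result", combo_image)
        # waitKey(1) pauses one millisecond between frames and returns an integer key code;
        # comparing it against a chosen key lets us break out before the video finishes.
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
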
cap.isopened returns true if video capturing is initialized.", 'duration': 38.676, 'max_score': 4838.254, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy44838254.jpg'}, {'end': 5048.769, 'src': 'embed', 'start': 5022.832, 'weight': 3, 'content': [{'end': 5028.996, 'text': 'We need a way to actually break out of this for loop and not just wait until the video is complete for it to dismiss.', 'start': 5022.832, 'duration': 6.164}, {'end': 5030.737, 'text': "So we'll go back here.", 'start': 5029.656, 'duration': 1.081}, {'end': 5035.4, 'text': 'Upon pressing a keyboard key, we want the video to close.', 'start': 5031.497, 'duration': 3.903}, {'end': 5041.084, 'text': "So we'll put this inside of an if statement such that we're still invoking the wait key function.", 'start': 5035.86, 'duration': 5.224}, {'end': 5048.769, 'text': 'We mentioned that it waits one millisecond in between frames, but what it also does is it returns a 32 bit integer value,', 'start': 5041.144, 'duration': 7.625}], 'summary': 'Enhance the for loop to close video upon keyboard key press.', 'duration': 25.937, 'max_score': 5022.832, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy45022832.jpg'}], 'start': 4743.09, 'title': 'Video processing with opencv', 'summary': 'Demonstrates the process of processing a video using opencv, including capturing the video, detecting lines in each frame, and implementing a method to break out of the video loop, achieving the goal of identifying relevant lines in the video frames.', 'chapters': [{'end': 4788.785, 'start': 4743.09, 'title': 'Lane detection in video', 'summary': 'Covers the finalization of the line detection algorithm and its application to identify lane lines in a video, marking the end of this section.', 'duration': 45.695, 'highlights': ['We will use the code to identify lane lines in a video, extending the application of the algorithm (quantifiable data: video application).', 'The chapter finalizes the line detection algorithm and its application to identify lane lines in an image (quantifiable data: successful completion of algorithm).', 'The lesson involves unpacking each line into four variables and ensuring everything works correctly, indicating the completion of the coding process.']}, {'end': 5180.087, 'start': 4790.097, 'title': 'Video processing with opencv', 'summary': 'Demonstrates the process of processing a video using opencv, including capturing the video, detecting lines in each frame, and implementing a method to break out of the video loop, achieving the goal of identifying relevant lines in the video frames.', 'duration': 389.99, 'highlights': ["We created a video capture object and processed the video 'test2.mp4', achieving the goal of capturing the video for further processing.", 'We applied the algorithm for detecting lines to each frame of the video, demonstrating the process of identifying relevant lines in the video frames.', 'We implemented a method to break out of the video loop by comparing keyboard input to close the video, ensuring the functionality of breaking out of the loop to close the video file.']}], 'duration': 436.997, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/eLTLtUVuuy4/pics/eLTLtUVuuy44743090.jpg', 'highlights': ['The chapter finalizes the line detection algorithm and its application to identify lane lines in an image (quantifiable data: successful completion of algorithm).', 'We 
applied the algorithm for detecting lines to each frame of the video, demonstrating the process of identifying relevant lines in the video frames.', "We created a video capture object and processed the video 'test2.mp4', achieving the goal of capturing the video for further processing.", 'We implemented a method to break out of the video loop by comparing keyboard input to close the video, ensuring the functionality of breaking out of the loop to close the video file.', 'The lesson involves unpacking each line into four variables and ensuring everything works correctly, indicating the completion of the coding process.', 'We will use the code to identify lane lines in a video, extending the application of the algorithm (quantifiable data: video application).']}], 'highlights': ['The tutorial delves into using computer vision techniques with OpenCV and Python to detect lane lines for a simulated self-driving car.', 'The Anaconda distribution includes Python, the Jupyter Notebook app, and over 150 scientific packages.', 'The tutorial emphasizes the installation of the Anaconda distribution, specifically Python 3, for compatibility with future Python improvements.', 'The installation process of Atom text editor, including downloading, installation, and configuration, is covered in detail.', 'The chapter introduces computer vision for identifying lane lines in images.', 'The process involves creating a new project folder, opening it with a code editor, writing a program to identify lanes in a JPEG image, downloading and saving the test image, and using OpenCV to display the image.', 'The chapter introduces the concept of using edge detection techniques, such as the Canny edge detection algorithm, to identify boundaries of objects within images and find regions with sharp changes in intensity, which is crucial for detecting lane lines.', 'The chapter emphasizes the importance of reducing noise and smoothening the image with a Gaussian blur for edge detection.', 'Creating a mask with specific vertices to identify a region of interest in the image.', 'The binary representation of the decimal number 23, resulting in 10111, is explained in detail.', 'The concept of binary numbers and the base-2 numeral system is introduced, providing a fundamental understanding of binary representations.', 'The Hough space is divided into a grid, with each bin corresponding to the slope and y-intercept value of a candidate line, allowing for the identification of lines in the gradient image.', 'The Hough Transform utilizes the parameters rho and theta to represent and detect lines in image space.', 'The Hough accumulator array resolution parameters rho and theta, as well as the threshold for minimum number of intersections, are crucial in detecting lines with precision.', 'Using CV2 dot line to draw line segments connecting two points on the blank image.', 'Detecting lane lines using the Hough transform technique.', 'The process involves averaging out all slope and y-intercept values for both sides, resulting in single average values for left and right lines.', 'The chapter finalizes the line detection algorithm and its application to identify lane lines in an image (quantifiable data: successful completion of algorithm).', 'We applied the algorithm for detecting lines to each frame of the video, demonstrating the process of identifying relevant lines in the video frames.', "We created a video capture object and processed the video 'test2.mp4', achieving the goal of capturing the video for further processing.", 'We 
implemented a method to break out of the video loop by comparing keyboard input to close the video, ensuring the functionality of breaking out of the loop to close the video file.', 'The lesson involves unpacking each line into four variables and ensuring everything works correctly, indicating the completion of the coding process.', 'We will use the code to identify lane lines in a video, extending the application of the algorithm (quantifiable data: video application).']}
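
Since the closing highlights recap the whole pipeline (Gaussian blur, Canny edges, a region-of-interest mask, the Hough transform, averaged lines, and blending), here is one hedged end-to-end sketch of what the per-frame helper assumed above could look like. The Canny thresholds, blur kernel size, and polygon vertices are illustrative placeholders rather than values quoted in this summary, and the sketch reuses the display_lines and average_slope_intercept helpers sketched earlier.

    import cv2
    import numpy as np

    def canny(image):
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # video frames arrive in BGR order
        blur = cv2.GaussianBlur(gray, (5, 5), 0)         # reduce noise before edge detection
        return cv2.Canny(blur, 50, 150)                  # illustrative low/high thresholds

    def region_of_interest(image):
        height = image.shape[0]
        # Triangular mask roughly covering the lane area; the vertices are placeholders.
        polygons = np.array([[(200, height), (1100, height), (550, 250)]], dtype=np.int32)
        mask = np.zeros_like(image)
        cv2.fillPoly(mask, polygons, 255)
        return cv2.bitwise_and(image, mask)              # keep only the masked region

    def process_frame(frame):
        edges = canny(frame)
        cropped = region_of_interest(edges)
        lines = cv2.HoughLinesP(cropped, 2, np.pi / 180, 100, np.array([]),
                                minLineLength=40, maxLineGap=5)
        if lines is None:
            return frame                                 # nothing detected in this frame
        averaged_lines = average_slope_intercept(frame, lines)
        line_image = display_lines(frame, averaged_lines)
        return cv2.addWeighted(frame, 0.8, line_image, 1, 1)
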