title
Convolutional Neural Network Tutorial (CNN) | How CNN Works | Deep Learning Tutorial | Simplilearn

description
🔥Artificial Intelligence Engineer Program (Discount Coupon: YTBE15): https://www.simplilearn.com/artificial-intelligence-masters-program-training-course?utm_campaign=AI-Jy9-aGMB_TE&utm_medium=DescriptionFirstFold&utm_source=youtube 🔥Professional Certificate Program In AI And Machine Learning: https://www.simplilearn.com/pgp-ai-machine-learning-certification-training-course?utm_campaign=AI-Jy9-aGMB_TE&utm_medium=DescriptionFirstFold&utm_source=youtube This convolutional neural network (CNN) tutorial will help you understand what a convolutional neural network is, how a CNN recognizes images, and what the layers in a convolutional neural network are; at the end, you will see a use case implementation using CNN. A CNN is also known as a ConvNet. Convolutional networks can also perform optical character recognition to digitize text and make natural-language processing possible on analog and hand-written documents. CNN can also be applied to sound when it is represented visually as a spectrogram. Now, let's dive deep into this video to understand what a CNN is and how it actually works. Start learning today's most in-demand skills for FREE. Visit us at https://www.simplilearn.com/skillup-free-online-courses?utm_campaign=AI&utm_medium=Description&utm_source=youtube Choose from over 300 in-demand skills and get access to 1000+ hours of video content for FREE in various technologies like Data Science, Cybersecurity, Project Management & Leadership, Digital Marketing, and much more. 🔥 Enroll for the FREE Artificial Intelligence Course & Get your Completion Certificate: https://www.simplilearn.com/learn-ai-basics-skillup?utm_campaign=AI&utm_medium=Description&utm_source=youtube The following topics are explained in this CNN tutorial (Convolutional Neural Network Tutorial): 1. Introduction to CNN 2. What is a convolutional neural network? 3. How a CNN recognizes images 4. Layers in a convolutional neural network 5. 
Use case implementation using CNN To learn more about Deep Learning, subscribe to our YouTube channel: https://www.youtube.com/user/Simplilearn?sub_confirmation=1 You can also go through the slides here: https://goo.gl/ZNcp9n Watch more videos on Deep Learning: https://www.youtube.com/watch?v=FbxTVRfQFuI&list=PLEiEAq2VkUUIYQ-mMRAGilfOKyWKpHSip #DeepLearning #Datasciencecourse #DataScience #SimplilearnMachineLearning #DeepLearningCourse We've partnered with Purdue University and collaborated with IBM to offer you the unique Post Graduate Program in AI and Machine Learning. Learn more about it here - https://www.simplilearn.com/ai-and-machine-learning-post-graduate-certificate-program-purdue?utm_campaign=Convolutional-Neural-Network-Tutorial-CNN-Tutorial-Jy9-aGMB_TE&utm_medium=Tutorials&utm_source=youtube Simplilearn's Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed to conduct machine learning & deep neural network research. With our deep learning course, you'll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks, and traverse layers of data abstraction to understand the power of data, preparing you for your new role as a deep learning scientist. You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn's Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to: 1. Understand the concepts of TensorFlow, its main functions, operations and the execution pipeline 2. 
Implement deep learning algorithms, understand neural networks and traverse the layers of data abstraction which will empower you to understand data like never before 3. Master and comprehend advanced topics such as convolutional neural networks, recurrent neural networks, training deep networks and high-level interfaces 4. Build deep learning models in TensorFlow and interpret the results 5. Understand the language and fundamental concepts of artificial neural networks Learn more at: https://www.simplilearn.com/deep-learning-course-with-tensorflow-training?utm_campaign=Convolutional-Neural-Network-Tutorial-CNN-Tutorial-Jy9-aGMB_TE&utm_medium=Tutorials&utm_source=youtube For more information about Simplilearn's courses, visit: - Facebook: https://www.facebook.com/Simplilearn - LinkedIn: https://www.linkedin.com/company/simp... - Website: https://www.simplilearn.com Get the Android app: http://bit.ly/1WlVo4u Get the iOS app: http://apple.co/1HIO5J0 🔥🔥 Interested in Attending Live Classes? Call Us: IN - 18002127688 / US - +18445327688
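The layer pipeline the tutorial walks through (convolution → ReLU → max pooling → flattening) can be sketched in a few lines of plain Python. This is a minimal illustrative sketch: the 6x6 binary "image" and the 3x3 vertical-edge filter below are assumed example values, not the exact matrices used in the video.

```python
# Minimal pure-Python sketch of the four CNN stages the tutorial describes:
# convolution, ReLU, max pooling, and flattening. Values are illustrative.

def convolve2d(image, kernel):
    """Slide the filter over the image; each output cell is the sum of
    element-wise products between the filter and the patch under it."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + r][j + c] * kernel[r][c]
                 for r in range(kh) for c in range(kw))
             for j in range(ow)] for i in range(oh)]

def relu(fmap):
    """Rectified linear unit: set every negative value to zero."""
    return [[max(0, v) for v in row] for row in fmap]

def max_pool(fmap, size=2, stride=2):
    """Downsample: keep only the maximum of each size-by-size window."""
    oh = (len(fmap) - size) // stride + 1
    ow = (len(fmap[0]) - size) // stride + 1
    return [[max(fmap[i * stride + r][j * stride + c]
                 for r in range(size) for c in range(size))
             for j in range(ow)] for i in range(oh)]

def flatten(fmap):
    """Unroll the 2D pooled map into one long vector for the dense layer."""
    return [v for row in fmap for v in row]

# A 6x6 binary image (1 = foreground pixel, 0 = background) and a 3x3
# filter that responds to vertical edges -- both hypothetical examples.
image = [[1, 1, 1, 0, 0, 0],
         [0, 1, 1, 1, 0, 0],
         [0, 0, 1, 1, 1, 0],
         [0, 0, 0, 1, 1, 1],
         [0, 0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0, 0]]
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]

feature_map = convolve2d(image, kernel)  # 4x4 map of filter responses
rectified = relu(feature_map)            # negatives zeroed out
pooled = max_pool(rectified)             # 2x2 after 2x2, stride-2 pooling
vector = flatten(pooled)                 # input for the fully connected layer
print(vector)                            # [0, 2, 0, 2]
```

Max pooling with a 2x2 filter and a stride of 2 halves each spatial dimension, matching the downsampling step described in the video; a real network would learn the filter weights rather than fix them by hand.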

detail
{'title': 'Convolutional Neural Network Tutorial (CNN) | How CNN Works | Deep Learning Tutorial | Simplilearn', 'heatmap': [{'end': 849.626, 'start': 688.273, 'weight': 1}], 'summary': 'This tutorial on convolutional neural networks (cnn) covers the basics of cnn, including image recognition, matrix filters for convolution operations, and the functions of hidden layers such as convolution, relu, pooling layers. it also delves into python script writing for image data visualization, structuring a python project, and loading and setting up images for machine learning, providing insights into cnn architecture and implementation.', 'chapters': [{'end': 263.752, 'segs': [{'end': 97.096, 'src': 'embed', 'start': 58.513, 'weight': 0, 'content': [{'end': 62.915, 'text': 'So we have our input layer accepts the pixels of the image as input in the form of arrays.', 'start': 58.513, 'duration': 4.402}, {'end': 68.236, 'text': "And you can see up here where they've actually labeled each block of the bird in different arrays.", 'start': 63.315, 'duration': 4.921}, {'end': 72.798, 'text': "So we'll dive into deep as to how that looks like and how those matrices are set up.", 'start': 68.677, 'duration': 4.121}, {'end': 78.564, 'text': 'Your hidden layer carry out feature extraction by performing certain calculations and manipulation.', 'start': 73.339, 'duration': 5.225}, {'end': 85.892, 'text': "So this is the part that kind of reorganizes that picture multiple ways until we get some data that's easy to read for the neural network.", 'start': 78.644, 'duration': 7.248}, {'end': 92.439, 'text': 'This layer uses a matrix filter and performs convolution operation to detect patterns in the image.', 'start': 86.433, 'duration': 6.006}, {'end': 97.096, 'text': 'And if you remember that convolution means to coil or to twist,', 'start': 93.055, 'duration': 4.041}], 'summary': 'Neural network layers process image pixels, extract features, and detect patterns using matrix filter and 
convolution operation.', 'duration': 38.583, 'max_score': 58.513, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE58513.jpg'}, {'end': 151.526, 'src': 'embed', 'start': 121.178, 'weight': 1, 'content': [{'end': 124.819, 'text': 'And just like the term says, pooling is pooling information together.', 'start': 121.178, 'duration': 3.641}, {'end': 127.019, 'text': "And we'll look into that a lot closer here.", 'start': 125.179, 'duration': 1.84}, {'end': 132.521, 'text': "So if it's a little confusing now, we'll dig in deep and try to get you squared away with that.", 'start': 127.259, 'duration': 5.262}, {'end': 137.282, 'text': 'And then finally, there is a fully connected layer that identifies the object in the image.', 'start': 132.821, 'duration': 4.461}, {'end': 141.983, 'text': 'So we have these different layers coming through in the hidden layers, and they come into the final area.', 'start': 137.802, 'duration': 4.181}, {'end': 147.144, 'text': "And that's where we have the one node or one neural network entity that lights up that says it's a bird.", 'start': 142.063, 'duration': 5.081}, {'end': 151.526, 'text': "What's in it for you? 
We're going to cover an introduction to the CNN.', 'start': 147.865, 'duration': 3.661}], 'summary': 'Introduction to cnn including pooling and fully connected layers, with a focus on identifying objects in images.', 'duration': 30.348, 'max_score': 121.178, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE121178.jpg'}, {'end': 202.413, 'src': 'embed', 'start': 174.778, 'weight': 4, 'content': [{'end': 179.744, 'text': 'built the first convolutional neural network called LeNet in 1988.', 'start': 174.778, 'duration': 4.966}, {'end': 183.568, 'text': 'So these have been around for a while and have had a chance to mature over the years.', 'start': 179.744, 'duration': 3.824}, {'end': 187.512, 'text': 'It was used for character recognition tasks like reading zip code digits.', 'start': 183.988, 'duration': 3.524}, {'end': 190.616, 'text': 'Imagine processing mail and automating that process.', 'start': 187.973, 'duration': 2.643}, {'end': 199.471, 'text': 'CNN is a feed-forward neural network that is generally used to analyze visual images by producing data with a grid-like topology.', 'start': 191.207, 'duration': 8.264}, {'end': 202.413, 'text': 'A CNN is also known as a ConvNet.', 'start': 199.751, 'duration': 2.662}], 'summary': 'LeNet, the first cnn, developed in 1988 for character recognition, matured over the years and used for automating mail processing.', 'duration': 27.635, 'max_score': 174.778, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE174778.jpg'}], 'start': 3.655, 'title': 'Convolutional neural networks and neural network operations', 'summary': 'Covers the basics of convolutional neural networks, including their role in image recognition and processing, and the reorganization of data for neural networks using matrix filters for convolution operations, pattern detection, and the functions of hidden layers such as convolution, ReLU, and 
pooling layers. it also includes an introduction to cnn and yann lecun, the pioneer of cnn, and how cnn recognizes and processes images, particularly in character recognition tasks like reading zip code digits.', 'chapters': [{'end': 78.564, 'start': 3.655, 'title': 'Convolutional neural network', 'summary': 'Explains the basics of convolutional neural network, emphasizing its role in image recognition and how it processes image pixels to identify objects, serving as a fundamental building block for image recognition.', 'duration': 74.909, 'highlights': ['The Convolutional Neural Network (CNN) is a central building block for image recognition, using pixels of an image as input and performing feature extraction through hidden layers.', 'The input layer of CNN accepts the pixels of the image as input in the form of arrays, and the hidden layers carry out feature extraction by performing calculations and manipulation.', 'CNN is fundamental for image recognition and plays a crucial role in identifying objects within images.']}, {'end': 137.282, 'start': 78.644, 'title': 'Neural network layers and operations', 'summary': 'Explains the process of reorganizing data for a neural network, including the use of matrix filters for convolution operations, detection of patterns in images, and the functions of hidden layers such as convolution, ReLU, and pooling layers.', 'duration': 58.638, 'highlights': ['The fully connected layer identifies the object in the image, providing a comprehensive understanding of the final output.', 'The pooling layer uses multiple filters to detect edges, corners, eyes, feathers, beak, etc., enabling the recognition of various features in an image.', "The convolution layer utilizes matrix filters and performs convolution operations to detect patterns in the image, contributing significantly to the neural network's ability to recognize complex patterns.", "The ReLU layer, or the rectified linear unit, is a crucial hidden layer that involves the 
activation function used in the neural network, impacting the network's ability to learn and make predictions."]}, {'end': 263.752, 'start': 137.802, 'title': 'Introduction to cnn and yann lecun', 'summary': 'Covers an introduction to cnn, including a brief history of yann lecun, the pioneer of convolutional neural networks, and how cnn recognizes and processes images, with emphasis on its use in character recognition tasks like reading zip code digits.', 'duration': 125.95, 'highlights': ['Yann LeCun is the pioneer of convolutional neural network, and he built the first CNN called LeNet in 1988, which was used for character recognition tasks like reading zip code digits. Yann LeCun is highlighted as the pioneer of CNN and the first developer of LeNet, emphasizing its use in character recognition tasks.', 'CNN is generally used to analyze visual images by producing data with a grid-like topology and is known as a ConvNet. The fundamental use of CNN in analyzing visual images with a grid-like topology is highlighted.', "CNN is designed to process images and capture multiple images' features by utilizing different layers, some of which are now used in other neural network frameworks. 
The design and functionality of CNN in processing images and capturing their features, including the transferability of its layers to other neural network frameworks, are highlighted."]}], 'duration': 260.097, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE3655.jpg', 'highlights': ["The convolution layer utilizes matrix filters and performs convolution operations to detect patterns in the image, contributing significantly to the neural network's ability to recognize complex patterns.", 'The pooling layer uses multiple filters to detect edges, corners, eyes, feathers, beak, etc., enabling the recognition of various features in an image.', 'The input layer of CNN accepts the pixels of the image as input in the form of arrays, and the hidden layers carry out feature extraction by performing calculations and manipulation.', 'The fully connected layer identifies the object in the image, providing a comprehensive understanding of the final output.', 'Yann LeCun is the pioneer of convolutional neural network, and he built the first CNN called LeNet in 1988, which was used for character recognition tasks like reading zip code digits.']}, {'end': 508.959, 'segs': [{'end': 322.65, 'src': 'embed', 'start': 264.213, 'weight': 0, 'content': [{'end': 269.777, 'text': 'What separates the CNN, or the convolutional neural network, from other neural networks..', 'start': 264.213, 'duration': 5.564}, {'end': 275.139, 'text': 'is a convolutional operation, forms a basis of any convolutional neural network.', 'start': 270.297, 'duration': 4.842}, {'end': 279.661, 'text': 'In a CNN, every image is represented in the form of arrays of pixel values.', 'start': 275.399, 'duration': 4.262}, {'end': 282.883, 'text': 'So here we have a real image of the digit 8.', 'start': 279.922, 'duration': 2.961}, {'end': 287.925, 'text': 'That then gets put onto its pixel values, represented in the form of an array.', 'start': 282.883, 'duration': 
5.042}, {'end': 290.086, 'text': 'In this case, you have a two-dimensional array.', 'start': 288.225, 'duration': 1.861}, {'end': 298.37, 'text': 'And then you can see in the final N form, we transform the digit 8 into its representational form of pixels of zeros and ones.', 'start': 290.266, 'duration': 8.104}, {'end': 303.294, 'text': "where the 1's represent, in this case, the black part of the 8, and the 0's represent the white background.", 'start': 298.59, 'duration': 4.704}, {'end': 311.18, 'text': "To understand the convolution neural network, or how that convolutional operation works, we're going to take a sidestep and look at matrices.", 'start': 303.514, 'duration': 7.666}, {'end': 316.805, 'text': "In this case, we're going to simplify it, and we're going to take two matrices, A and B, of one dimension.", 'start': 311.481, 'duration': 5.324}, {'end': 322.65, 'text': 'Now, kind of separate this from your thinking as we learn that you want to focus just on the matrix aspect of this.', 'start': 316.885, 'duration': 5.765}], 'summary': 'Cnn uses convolutional operation on arrays of pixel values to process images.', 'duration': 58.437, 'max_score': 264.213, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE264213.jpg'}, {'end': 397.066, 'src': 'embed', 'start': 368.169, 'weight': 2, 'content': [{'end': 370.852, 'text': 'multiplied just by the first three elements of the first array.', 'start': 368.169, 'duration': 2.683}, {'end': 372.693, 'text': "Now that's going to be a little confusing.", 'start': 371.132, 'duration': 1.561}, {'end': 376.575, 'text': 'Remember, a computer gets to repeat these processes hundreds of times.', 'start': 372.793, 'duration': 3.782}, {'end': 379.477, 'text': "So we're not going to just forget those other numbers later on.", 'start': 376.775, 'duration': 2.702}, {'end': 381.058, 'text': "We'll bring those back in.", 'start': 379.557, 'duration': 1.501}, {'end': 382.839, 
'text': 'And then we have the sum of the product.', 'start': 381.418, 'duration': 1.421}, {'end': 386.161, 'text': 'In this case, 5 plus 6 plus 6 equals 17.', 'start': 383.139, 'duration': 3.022}, {'end': 392.324, 'text': 'So in our A times B, our very first digit in that matrix of A times B is 17.', 'start': 386.161, 'duration': 6.163}, {'end': 394.746, 'text': "And if you remember, I said we're not going to forget the other digits.", 'start': 392.324, 'duration': 2.422}, {'end': 397.066, 'text': 'So we now have 3, 2, 5.', 'start': 395.106, 'duration': 1.96}], 'summary': 'The sum of the product of the first three elements in the first array is 17, with additional digits 3, 2, 5.', 'duration': 28.897, 'max_score': 368.169, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE368169.jpg'}, {'end': 519.203, 'src': 'embed', 'start': 490.547, 'weight': 4, 'content': [{'end': 492.489, 'text': 'so we click the forward slash button.', 'start': 490.547, 'duration': 1.942}, {'end': 494.571, 'text': 'that flips very basic.', 'start': 492.489, 'duration': 2.082}, {'end': 496.272, 'text': 'we have four pixels going in.', 'start': 494.571, 'duration': 1.701}, {'end': 497.774, 'text': "can't get any more basic than that.", 'start': 496.272, 'duration': 1.502}, {'end': 499.995, 'text': 'Here we have a little bit more complicated picture.', 'start': 498.214, 'duration': 1.781}, {'end': 502.416, 'text': 'We take a real image of a smiley face.', 'start': 500.035, 'duration': 2.381}, {'end': 505.878, 'text': 'Then we represent that in the form of black and white pixels.', 'start': 503.236, 'duration': 2.642}, {'end': 508.959, 'text': "So if this was an image in the computer, it's black and white.", 'start': 506.118, 'duration': 2.841}, {'end': 513.321, 'text': 'And like we saw before, we convert this into the zeros and ones.', 'start': 509.219, 'duration': 4.102}, {'end': 519.203, 'text': 'So where the other one would have just been 
a matrix of just four dots, now we have a significantly larger image coming in.', 'start': 513.461, 'duration': 5.742}], 'summary': 'Demonstrates pixel representation with 4 and larger images.', 'duration': 28.656, 'max_score': 490.547, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE490547.jpg'}], 'start': 264.213, 'title': 'Cnn and convolutional networks', 'summary': 'Focuses on the representation of images in a convolutional neural network, emphasizing the transformation of an image into its pixel values and the convolutional operation using matrices. it provides insights into the basics of convolutional networks and image processing.', 'chapters': [{'end': 303.294, 'start': 264.213, 'title': 'Cnn and image representation', 'summary': 'Explains the representation of images using arrays of pixel values in a convolutional neural network, with a focus on transforming an image of the digit 8 into its representational form of pixels of zeros and ones.', 'duration': 39.081, 'highlights': ['In a CNN, every image is represented in the form of arrays of pixel values, such as the real image of the digit 8 being transformed into its representational form of pixels of zeros and ones.', "The transformation of the digit 8 into its representational form of pixels of zeros and ones involves the use of a two-dimensional array, where the 1's represent the black part of the 8 and the 0's represent the white background.", 'The convolutional operation forms the basis of any convolutional neural network, setting it apart from other neural networks.']}, {'end': 508.959, 'start': 303.514, 'title': 'Understanding convolutional networks', 'summary': 'Explains the convolutional operation using matrices, demonstrating how the arrays are multiplied element-wise to obtain individual products, summed up to produce the final result, and how this process is used to compare matrices, providing insights into the basics of convolutional 
networks and image processing.', 'duration': 205.445, 'highlights': ['The chapter explains the process of multiplying arrays element-wise and summing up the products to obtain the final result, providing a detailed example with quantifiable data. The explanation of multiplying arrays element-wise and summing up the products to obtain the final result, with the specific example of 5+6+6=17 as the result for the first set of products.', 'The demonstration of comparing matrices by multiplying different parts and bringing the value down into one matrix, A times B, to help the computer see different aspects, providing insights into the basics of convolutional networks. The process of comparing matrices by multiplying different parts and bringing the value down into one matrix, A times B, to help the computer see different aspects, providing insights into the basics of convolutional networks.', 'The explanation of representing images in the form of black and white pixels and the basic understanding of image processing, using simple examples of backslash and forward slash images as well as a smiley face image, demonstrating the foundational concepts of image processing. 
The explanation of representing images in the form of black and white pixels and the basic understanding of image processing, using simple examples of backslash and forward slash images as well as a smiley face image, demonstrating the foundational concepts of image processing.']}], 'duration': 244.746, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE264213.jpg', 'highlights': ['The convolutional operation forms the basis of any convolutional neural network, setting it apart from other neural networks.', "The transformation of the digit 8 into its representational form of pixels of zeros and ones involves the use of a two-dimensional array, where the 1's represent the black part of the 8 and the 0's represent the white background.", 'The explanation of multiplying arrays element-wise and summing up the products to obtain the final result, providing a detailed example with quantifiable data.', 'The demonstration of comparing matrices by multiplying different parts and bringing the value down into one matrix, A times B, to help the computer see different aspects, providing insights into the basics of convolutional networks.', 'The explanation of representing images in the form of black and white pixels and the basic understanding of image processing, using simple examples of backslash and forward slash images as well as a smiley face image, demonstrating the foundational concepts of image processing.']}, {'end': 1344.641, 'segs': [{'end': 621.707, 'src': 'embed', 'start': 591.204, 'weight': 3, 'content': [{'end': 593.085, 'text': 'So we have started to look at matrices.', 'start': 591.204, 'duration': 1.881}, {'end': 596.808, 'text': "We've started to look at the convolutional layer and where it fits in and everything.", 'start': 593.246, 'duration': 3.562}, {'end': 598.249, 'text': "We've taken a look at images.", 'start': 596.928, 'duration': 1.321}, {'end': 603.833, 'text': "So we're going to focus more on 
the convolution layer since this is a convolutional neural network.", 'start': 598.729, 'duration': 5.104}, {'end': 608.677, 'text': 'A convolution layer has a number of filters and perform convolution operation.', 'start': 604.153, 'duration': 4.524}, {'end': 611.779, 'text': 'Every image is considered as a matrix of pixel values.', 'start': 608.957, 'duration': 2.822}, {'end': 616.543, 'text': 'Consider the following 5x5 image whose pixel values are only 0 and 1.', 'start': 612.119, 'duration': 4.424}, {'end': 621.707, 'text': "Now, obviously, when we're dealing with color, there's all kinds of things that come in on color processing,", 'start': 616.543, 'duration': 5.164}], 'summary': 'Studying matrices, convolution layer in neural network, using filters for image processing.', 'duration': 30.503, 'max_score': 591.204, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE591204.jpg'}, {'end': 849.626, 'src': 'heatmap', 'start': 688.273, 'weight': 1, 'content': [{'end': 694.337, 'text': "We're going to take that and sliding the filter matrix over the image and computing the dot product to detect patterns.", 'start': 688.273, 'duration': 6.064}, {'end': 695.658, 'text': "So we're just going to slide this over.", 'start': 694.577, 'duration': 1.081}, {'end': 699.761, 'text': "We're going to predict the first one and slide it over one notch, predict the second one,", 'start': 695.678, 'duration': 4.083}, {'end': 704.185, 'text': 'and so on and so on all the way through until we have a new matrix.', 'start': 699.761, 'duration': 4.424}, {'end': 711.512, 'text': "And this matrix, which is the same size as a filter, has reduced the image and whatever filter, whatever that's filtering out,", 'start': 704.586, 'duration': 6.926}, {'end': 716.537, 'text': 'is going to be looking at just those features reduced down to a smaller matrix.', 'start': 711.512, 'duration': 5.025}, {'end': 723.543, 'text': 'So once the feature 
maps are extracted, the next step is to move them to the ReLU layer.', 'start': 716.837, 'duration': 6.706}, {'end': 729.488, 'text': 'So the ReLU layer, the next step, first is going to perform an element-wise operation.', 'start': 723.883, 'duration': 5.605}, {'end': 735.854, 'text': "So each of those maps coming in, if there's negative pixels, so it sets all the negative pixels to zero.", 'start': 729.708, 'duration': 6.146}, {'end': 744.262, 'text': 'And you can see this nice graph where it just zeroes out the negatives and then you have a value that goes from zero up to whatever value is coming out of the matrix.', 'start': 736.254, 'duration': 8.008}, {'end': 747.304, 'text': 'This introduces non-linearity to the network.', 'start': 744.802, 'duration': 2.502}, {'end': 750.987, 'text': 'So up until now, we say linearity.', 'start': 747.924, 'duration': 3.063}, {'end': 754.109, 'text': "We're talking about the fact that the feature has a value.", 'start': 751.247, 'duration': 2.862}, {'end': 755.33, 'text': "So it's a linear feature.", 'start': 754.289, 'duration': 1.041}, {'end': 759.473, 'text': "This feature came up and has, let's say the feature is the edge of the beak.", 'start': 755.37, 'duration': 4.103}, {'end': 767.546, 'text': "or the backslash that we saw, it'll look at that and say, okay, this feature has a value from negative 10 to 10 in this case.", 'start': 760.782, 'duration': 6.764}, {'end': 770.747, 'text': "If it was 1, it'd say, yeah, this might be a beak.", 'start': 768.026, 'duration': 2.721}, {'end': 771.407, 'text': 'It might not.', 'start': 770.787, 'duration': 0.62}, {'end': 772.948, 'text': 'It might be an edge right there.', 'start': 771.427, 'duration': 1.521}, {'end': 775.73, 'text': "A minus 5 means, no, we're not even going to look at it.", 'start': 773.188, 'duration': 2.542}, {'end': 776.49, 'text': "It's a 0.", 'start': 775.75, 'duration': 0.74}, {'end': 782.073, 'text': 'And so we end up with an output, and the output 
takes all these features, all these filtered features.', 'start': 776.49, 'duration': 5.583}, {'end': 784.214, 'text': "Remember, we're not just running one filter on this.", 'start': 782.333, 'duration': 1.881}, {'end': 786.495, 'text': "We're running a number of filters on this image.", 'start': 784.254, 'duration': 2.241}, {'end': 794.602, 'text': 'And so we end up with a rectified feature map that is looking at just the features coming through and how they weigh in from our filters.', 'start': 786.835, 'duration': 7.767}, {'end': 798.226, 'text': 'So here we have an input of what looks like a toucan bird.', 'start': 795.043, 'duration': 3.183}, {'end': 800.588, 'text': 'Very exotic looking.', 'start': 799.567, 'duration': 1.021}, {'end': 806.394, 'text': 'Real image is scanned in multiple convolution and the ReLU layers for locating features.', 'start': 800.788, 'duration': 5.606}, {'end': 809.937, 'text': "And you can see up here it's turned into a black and white image.", 'start': 806.754, 'duration': 3.183}, {'end': 815.019, 'text': "And in this case, we're looking in the upper right-hand corner for a feature, and that box scans over.", 'start': 810.475, 'duration': 4.544}, {'end': 818.042, 'text': "A lot of times, it doesn't scan one pixel at a time.", 'start': 815.3, 'duration': 2.742}, {'end': 822.827, 'text': 'A lot of times, it will skip by two or three or four pixels to speed up the process.', 'start': 818.122, 'duration': 4.705}, {'end': 828.452, 'text': "That's one of the ways you can compensate if you don't have enough resources on your computation for large images.", 'start': 822.907, 'duration': 5.545}, {'end': 832.115, 'text': "And it's not just one filter that slowly goes across the image.", 'start': 828.792, 'duration': 3.323}, {'end': 834.958, 'text': 'You have multiple filters that have been programmed in there.', 'start': 832.576, 'duration': 2.382}, {'end': 842.305, 'text': "So you're looking at a lot of different filters going over the 
different aspects of the image and just sliding across there and forming a new matrix.", 'start': 835.118, 'duration': 7.187}, {'end': 849.626, 'text': "One more aspect to note about the ReLU layer is we're not just having one ReLU coming in.", 'start': 843.081, 'duration': 6.545}], 'summary': 'Convolution and relu layers extract features from image, introducing non-linearity to the network.', 'duration': 161.353, 'max_score': 688.273, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE688273.jpg'}, {'end': 856.75, 'src': 'embed', 'start': 828.792, 'weight': 1, 'content': [{'end': 832.115, 'text': "And it's not just one filter that slowly goes across the image.", 'start': 828.792, 'duration': 3.323}, {'end': 834.958, 'text': 'You have multiple filters that have been programmed in there.', 'start': 832.576, 'duration': 2.382}, {'end': 842.305, 'text': "So you're looking at a lot of different filters going over the different aspects of the image and just sliding across there and forming a new matrix.", 'start': 835.118, 'duration': 7.187}, {'end': 849.626, 'text': "One more aspect to note about the ReLU layer is we're not just having one ReLU coming in.", 'start': 843.081, 'duration': 6.545}, {'end': 856.75, 'text': "So not only do we have multiple features going through, but we're generating multiple ReLU layers for locating the features.", 'start': 850.286, 'duration': 6.464}], 'summary': 'Multiple filters and relu layers in image processing.', 'duration': 27.958, 'max_score': 828.792, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE828792.jpg'}, {'end': 897.258, 'src': 'embed', 'start': 871.442, 'weight': 2, 'content': [{'end': 876.446, 'text': 'Pooling is a downsampling operation that reduces the dimensionality of the feature map.', 'start': 871.442, 'duration': 5.004}, {'end': 877.747, 'text': "That's all we're trying to do.", 'start': 876.726, 
'duration': 1.021}, {'end': 882.171, 'text': "We're trying to take a huge amount of information and reduce it down to a single answer.", 'start': 877.767, 'duration': 4.404}, {'end': 884.593, 'text': 'This is a specific kind of bird.', 'start': 882.291, 'duration': 2.302}, {'end': 885.554, 'text': 'This is an iris.', 'start': 884.653, 'duration': 0.901}, {'end': 886.495, 'text': 'This is a rose.', 'start': 885.614, 'duration': 0.881}, {'end': 888.537, 'text': 'So you have a rectified feature map.', 'start': 886.955, 'duration': 1.582}, {'end': 891.997, 'text': 'You see here we have a rectified feature map coming in.', 'start': 889.136, 'duration': 2.861}, {'end': 897.258, 'text': 'We set the max pooling with a 2x2 filter and a stride of 2.', 'start': 892.737, 'duration': 4.521}], 'summary': 'Pooling reduces feature map dimensionality to a single answer.', 'duration': 25.816, 'max_score': 871.442, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE871442.jpg'}, {'end': 1095.728, 'src': 'embed', 'start': 1066.868, 'weight': 4, 'content': [{'end': 1071.009, 'text': "So right now we're looking at 2D image dimensions coming into the pooling layer.", 'start': 1066.868, 'duration': 4.141}, {'end': 1075.673, 'text': 'So the next step is we want to reduce those dimensions or flatten them.', 'start': 1071.129, 'duration': 4.544}, {'end': 1076.533, 'text': 'So flattening.', 'start': 1075.773, 'duration': 0.76}, {'end': 1085.16, 'text': 'Flattening is a process of converting all of the resultant two-dimensional arrays from the pooled feature map into a single long continuous linear vector.', 'start': 1076.693, 'duration': 8.467}, {'end': 1087.922, 'text': 'So over here you see where we have a pooled feature map.', 'start': 1085.44, 'duration': 2.482}, {'end': 1089.163, 'text': "Maybe that's the bird wing.", 'start': 1087.962, 'duration': 1.201}, {'end': 1091.544, 'text': 'And it has values 6, 8, 4, 7.', 'start': 1089.403,
'duration': 2.141}, {'end': 1095.728, 'text': 'And we want to just flatten this out and turn it into 6, 8, 4, 7 or a single linear vector.', 'start': 1091.545, 'duration': 4.183}], 'summary': 'Flattening converts 2d arrays into a single linear vector.', 'duration': 28.86, 'max_score': 1066.868, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE1066868.jpg'}, {'end': 1326.172, 'src': 'embed', 'start': 1304.213, 'weight': 0, 'content': [{'end': 1312.74, 'text': "we'll be using the CIFAR-10 dataset from Canadian Institute for Advanced Research for classifying images across 10 categories.", 'start': 1304.213, 'duration': 8.527}, {'end': 1317.064, 'text': "Unfortunately, they don't let me know whether it's going to be a toucan or some other kind of bird.", 'start': 1313.241, 'duration': 3.823}, {'end': 1324.931, 'text': 'But we do get to find out whether it can categorize between a ship, a frog, deer, bird, airplane, automobile, cat, dog, horse, truck.', 'start': 1317.124, 'duration': 7.807}, {'end': 1326.172, 'text': "So that's a lot of fun.", 'start': 1325.331, 'duration': 0.841}], 'summary': 'Using cifar-10 dataset to classify images across 10 categories.', 'duration': 21.959, 'max_score': 1304.213, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE1304213.jpg'}], 'start': 509.219, 'title': 'Convolutional neural networks', 'summary': 'Discusses key layers in cnn, including convolution, relu, pooling, and fully connected layers, and their roles in processing image data. 
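Max pooling with a 2x2 filter and a stride of 2, followed by flattening, can be sketched in NumPy; the input values here are made up, arranged so the pooled map comes out as the 6, 8, 4, 7 example mentioned above:

```python
import numpy as np

def max_pool(fmap, size=2, stride=2):
    """Downsample by keeping only the maximum value in each window."""
    oh = (fmap.shape[0] - size) // stride + 1
    ow = (fmap.shape[1] - size) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = fmap[i * stride:i * stride + size,
                             j * stride:j * stride + size].max()
    return out

rectified = np.array([[6, 2, 0, 8],
                      [1, 3, 5, 2],
                      [4, 0, 7, 1],
                      [2, 1, 3, 0]])
pooled = max_pool(rectified)   # 4x4 rectified map -> 2x2 pooled map
flat = pooled.flatten()        # 2D pooled map -> single long linear vector
print(flat)                    # [6. 8. 4. 7.]
```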
it also covers the fundamentals of cnn, emphasizing convolution, relu layer, pooling, and flattening, and their role in classifying images, and highlights the use of cnn in classifying images across 10 categories using the cifar-10 dataset.', 'chapters': [{'end': 590.724, 'start': 509.219, 'title': 'Convolutional neural networks', 'summary': 'Discusses the key layers in convolutional neural networks, including the convolution layer, relu layer, pooling layer, and fully connected layer, and their roles in processing image data.', 'duration': 81.505, 'highlights': ['The convolution layer is the central aspect of processing images in the convolutional neural network. It plays a crucial role in processing image data in the network.', 'The ReLU layer is activated by the math behind it, and it determines what makes the neurons fire. Explains the activation process and its significance in processing data.', 'The pooling layer, also known as the reduce layer, is where the data is pooled together and reduced. Describes the function of pooling layer in consolidating data.', "The fully connected layer is where the network's output is generated. Explains the role of the fully connected layer in producing the network's output."]}, {'end': 1344.641, 'start': 591.204, 'title': 'Understanding convolutional neural networks', 'summary': 'Discusses the fundamentals of convolutional neural networks, emphasizing the process of convolution, relu layer, pooling, and flattening, and their role in classifying images, while highlighting the use of cnn in classifying images across 10 categories using the cifar-10 dataset.', 'duration': 753.437, 'highlights': ['The process of convolution involves sliding a filter matrix over the image, computing the dot product to detect patterns, and generating multiple feature maps, which are then passed through the ReLU layer to introduce non-linearity to the network. 
Convolution involves sliding a filter matrix over the image to detect patterns, generating multiple feature maps, and introducing non-linearity through the ReLU layer.', 'Pooling is a downsampling operation that reduces the dimensionality of the feature map, where the max pooling with a 2x2 filter and a stride of 2 results in a 2x2 pooled feature map, effectively filtering and reducing data for manageability. Pooling operation reduces dimensionality, and max pooling with a 2x2 filter and a stride of 2 results in a 2x2 pooled feature map.', 'Flattening involves converting two-dimensional arrays from pooled feature maps into a single long continuous linear vector, which serves as the input to the fully connected layer for image classification. Flattening converts pooled feature maps into a single linear vector, which is used as input for image classification in the fully connected layer.', 'The use of CNN in classifying images across 10 categories using the CIFAR-10 dataset demonstrates the practical application and relevance of CNN in image recognition and classification tasks. 
The practical application of CNN in classifying images using the CIFAR-10 dataset demonstrates its relevance in image recognition and classification tasks.']}], 'duration': 835.422, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE509219.jpg', 'highlights': ['The use of CNN in classifying images across 10 categories using the CIFAR-10 dataset demonstrates the practical application and relevance of CNN in image recognition and classification tasks.', 'The process of convolution involves sliding a filter matrix over the image, computing the dot product to detect patterns, and generating multiple feature maps, which are then passed through the ReLU layer to introduce non-linearity to the network.', 'Pooling is a downsampling operation that reduces the dimensionality of the feature map, where the max pooling with a 2x2 filter and a stride of 2 results in a 2x2 pooled feature map, effectively filtering and reducing data for manageability.', 'The convolution layer is the central aspect of processing images in the convolutional neural network. 
It plays a crucial role in processing image data in the network.', 'Flattening involves converting two-dimensional arrays from pooled feature maps into a single long continuous linear vector, which serves as the input to the fully connected layer for image classification.']}, {'end': 1785.616, 'segs': [{'end': 1418.646, 'src': 'embed', 'start': 1386.147, 'weight': 2, 'content': [{'end': 1387.888, 'text': "There's many ways to display the images.", 'start': 1386.147, 'duration': 1.741}, {'end': 1391.929, 'text': "There's other ways to drill into it, but matplotlib is really good for this.", 'start': 1387.908, 'duration': 4.021}, {'end': 1398.691, 'text': "And we'll also look at our first reshape setup, or shaping the data, so you can have a little glimpse into what that means.", 'start': 1392.269, 'duration': 6.422}, {'end': 1401.133, 'text': "So we're going to start by importing our matplotlib.", 'start': 1398.871, 'duration': 2.262}, {'end': 1408.058, 'text': 'And of course, since I am doing Jupyter Notebook, I need to do the %matplotlib inline command so it shows up on my page.', 'start': 1401.273, 'duration': 6.785}, {'end': 1409.039, 'text': 'So here we go.', 'start': 1408.418, 'duration': 0.621}, {'end': 1412.982, 'text': "We're going to import matplotlib.pyplot as plt.", 'start': 1409.059, 'duration': 3.923}, {'end': 1418.646, 'text': 'And if you remember matplotlib, the pyplot is like a canvas that we paint stuff onto.', 'start': 1413.262, 'duration': 5.384}], 'summary': 'Introduction to matplotlib for image display and data reshaping in jupyter notebook.', 'duration': 32.499, 'max_score': 1386.147, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE1386147.jpg'}, {'end': 1513.916, 'src': 'embed', 'start': 1489.426, 'weight': 0, 'content': [{'end': 1495.868, 'text': "And we're going to break it up into 10,000 pieces, and those 10,000 pieces then are broken into three pieces each,", 'start': 1489.426,
'duration': 6.442}, {'end': 1498.588, 'text': 'and those three pieces then are 32 by 32.', 'start': 1495.868, 'duration': 2.72}, {'end': 1505.35, 'text': 'You can look at this like an old-fashioned projector, where they have the red screen or the red projector, the blue projector and the green projector,', 'start': 1498.588, 'duration': 6.762}, {'end': 1507.951, 'text': 'and they add them all together and each one of those is a 32 by 32 grid.', 'start': 1505.35, 'duration': 2.601}, {'end': 1512.955, 'text': "So that's probably how this was originally formatted, was in that kind of idea.", 'start': 1509.611, 'duration': 3.344}, {'end': 1513.916, 'text': 'Things have changed.', 'start': 1513.115, 'duration': 0.801}], 'summary': 'Data divided into 10,000 pieces, then into three, each 32 by 32.', 'duration': 24.49, 'max_score': 1489.426, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE1489426.jpg'}, {'end': 1581.719, 'src': 'embed', 'start': 1552.768, 'weight': 3, 'content': [{'end': 1555.429, 'text': "you can come in here and you'll see a lot of these.", 'start': 1552.768, 'duration': 2.661}, {'end': 1562.572, 'text': "they'll try to do this with a float, or float64, which you've got to remember, though, is that a float uses a lot of memory.", 'start': 1555.429, 'duration': 7.143}, {'end': 1572.236, 'text': "so once you switch this into something that's not uint8, which only goes up to 255, you are just going to increase the amount of RAM.", 'start': 1562.572, 'duration': 9.664}, {'end': 1573.337, 'text': "let's just put that in here.", 'start': 1572.236, 'duration': 1.101}, {'end': 1576.338, 'text': "it's going to go way up, the amount of RAM that it loads.", 'start': 1573.337, 'duration': 3.001}, {'end': 1578.158, 'text': 'So you want to go ahead and use this.', 'start': 1576.758, 'duration': 1.4}, {'end': 1581.719, 'text': 'You can try the other ones and see what happens if you have a lot of RAM on your computer.',
'start': 1578.338, 'duration': 3.381}], 'summary': 'Using integer 8 instead of float 64 reduces memory usage and ram load significantly.', 'duration': 28.951, 'max_score': 1552.768, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE1552768.jpg'}, {'end': 1743.637, 'src': 'embed', 'start': 1718.233, 'weight': 1, 'content': [{'end': 1723.056, 'text': "And then we get into the fun part where we're actually going to start creating our model, our actual neural network model.", 'start': 1718.233, 'duration': 4.823}, {'end': 1726.117, 'text': "So let's start by creating our one hot encoder.", 'start': 1723.356, 'duration': 2.761}, {'end': 1727.698, 'text': "We're going to create our own here.", 'start': 1726.137, 'duration': 1.561}, {'end': 1729.639, 'text': "And it's going to be turn and out.", 'start': 1728.198, 'duration': 1.441}, {'end': 1733.254, 'text': "And we'll have our vector coming in and our values equal 10.", 'start': 1729.979, 'duration': 3.275}, {'end': 1737.435, 'text': 'What this means is that we have the 10 values, the 10 possible labels.', 'start': 1733.254, 'duration': 4.181}, {'end': 1743.637, 'text': "And remember, we don't look at the labels as a number because a car isn't one more than a horse.", 'start': 1737.775, 'duration': 5.862}], 'summary': 'Creating a neural network model with 10 possible labels for classification.', 'duration': 25.404, 'max_score': 1718.233, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE1718233.jpg'}], 'start': 1345.041, 'title': 'Python script, matplotlib, data reshaping, and image processing', 'summary': 'Covers the process of writing a python script for image data visualization using matplotlib, including reshaping binary data into 10,000 images of 32x32 pixels and transposing the data type to integer 8, enabling visualization and interpretation of images, followed by one hot encoder creation for labels and 
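The reshape the narrator walks through — 10,000 rows, each split into three 32x32 color planes, kept as a small integer type to save RAM — can be sketched like this; the random array is a stand-in for the real unpickled CIFAR-10 batch:

```python
import numpy as np

# Stand-in for one unpickled CIFAR-10 batch: 10,000 rows of 3,072 bytes each
# (32*32 red values, then 32*32 green, then 32*32 blue, per image).
raw = np.random.randint(0, 256, size=(10000, 3072), dtype=np.uint8)

# Reshape into (images, channels, height, width), then move channels last
# so each image is a (32, 32, 3) array that matplotlib's imshow can display.
images = raw.reshape(10000, 3, 32, 32).transpose(0, 2, 3, 1)

print(images.shape)   # (10000, 32, 32, 3)
print(images.dtype)   # uint8 -- float64 would take 8x the RAM
print(images.nbytes)  # 30720000 bytes as uint8
```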
cifar helper class for image setup.', 'chapters': [{'end': 1424.371, 'start': 1345.041, 'title': 'Python script and matplotlib for data visualization', 'summary': 'Covers the process of writing a python script to display image data using matplotlib, including importing the matplotlib library, displaying image using matplotlib, and reshaping the data.', 'duration': 79.33, 'highlights': ['The process starts by importing the Matplotlib library and using the matplot inline command for displaying images in Jupyter Notebook.', "The data set is structured as a dictionary with keys including batch one label, data, and file names, providing insights into the dataset's organization.", 'Matplotlib is highlighted as an effective tool for displaying images, and the chapter emphasizes the importance of understanding reshaping data for effective data visualization.']}, {'end': 1785.616, 'start': 1424.671, 'title': 'Data reshaping and image processing', 'summary': 'Discusses reshaping binary data into 10,000 images of 32x32 pixels, and the process of transposing and converting the data type to integer 8, enabling visualization and interpretation of images, followed by the creation of a one hot encoder for labels and cifar helper class for image setup.', 'duration': 360.945, 'highlights': ['The process involves reshaping binary data into 10,000 images of 32x32 pixels, enabling the visualization and interpretation of images. The data is reshaped into 10,000 images of 32x32 pixels to facilitate visual interpretation.', 'The data type is converted to integer 8 to conserve memory and enable efficient processing, preventing excessive RAM usage. Converting the data type to integer 8 conserves memory and prevents excessive RAM usage, ensuring efficient processing.', 'The creation of a one hot encoder facilitates the conversion of labels into a numpy array of zeros and ones, enabling efficient representation of 10 possible labels. 
The one hot encoder converts labels into a numpy array of zeros and ones, efficiently representing 10 possible labels.']}], 'duration': 440.575, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE1345041.jpg', 'highlights': ['The process involves reshaping binary data into 10,000 images of 32x32 pixels, enabling the visualization and interpretation of images.', 'The creation of a one hot encoder facilitates the conversion of labels into a numpy array of zeros and ones, enabling efficient representation of 10 possible labels.', 'Matplotlib is highlighted as an effective tool for displaying images, and the chapter emphasizes the importance of understanding reshaping data for effective data visualization.', 'The data type is converted to integer 8 to conserve memory and enable efficient processing, preventing excessive RAM usage.']}, {'end': 2194.551, 'segs': [{'end': 1816.612, 'src': 'embed', 'start': 1785.656, 'weight': 0, 'content': [{'end': 1788.319, 'text': "We have a few of these helper functions we're going to build.", 'start': 1785.656, 'duration': 2.663}, {'end': 1795.688, 'text': "And when you're working with a very complicated Python project, dividing it up into separate definitions and classes is very important.", 'start': 1788.58, 'duration': 7.108}, {'end': 1799.312, 'text': 'otherwise it just becomes really ungainly to work with.', 'start': 1795.988, 'duration': 3.324}, {'end': 1804.358, 'text': "so let's go ahead and put in our next helper, which is a class, and this is a lot in this class.", 'start': 1799.312, 'duration': 5.046}, {'end': 1808.603, 'text': "so we'll break it down here and let's just start to put a space right in there.", 'start': 1804.358, 'duration': 4.245}, {'end': 1809.223, 'text': 'there we go.', 'start': 1808.603, 'duration': 0.62}, {'end': 1812.227, 'text': "now there's a little bit more readable at a second space.", 'start': 1809.223, 'duration': 3.004}, {'end': 1816.612, 
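The one-hot encoder described above — 10 possible labels, no label treated as numerically larger than another — might look like this in NumPy; the function name and example labels are illustrative, not taken from the video:

```python
import numpy as np

def one_hot_encode(vec, vals=10):
    """Turn class indices (0-9) into rows of zeros with a single one,
    so a 'car' label is never treated as one more than a 'horse' label."""
    n = len(vec)
    out = np.zeros((n, vals))
    out[range(n), vec] = 1
    return out

labels = [3, 0, 9]            # hypothetical class indices, e.g. cat, airplane, truck
encoded = one_hot_encode(labels)
print(encoded.shape)          # (3, 10)
print(encoded[0])             # a 1.0 in position 3, zeros elsewhere
```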
'text': "so we're gonna create our class, the CifarHelper, and we'll start by initializing it", 'start': 1812.227, 'duration': 4.385}], 'summary': 'Building helper functions and classes in python project for better organization and readability.', 'duration': 30.956, 'max_score': 1785.656, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE1785656.jpg'}, {'end': 1869.676, 'src': 'embed', 'start': 1825.116, 'weight': 1, 'content': [{'end': 1827.057, 'text': "We'll come back to that in the lower part.", 'start': 1825.116, 'duration': 1.941}, {'end': 1829.539, 'text': 'We want to initialize our training batches.', 'start': 1827.237, 'duration': 2.302}, {'end': 1832.5, 'text': 'So when we went through this, there was like a meta batch.', 'start': 1829.839, 'duration': 2.661}, {'end': 1839.162, 'text': "We don't need the meta batch, but we do need the data batch 1, 2, 3, 4, 5.", 'start': 1832.68, 'duration': 6.482}, {'end': 1841.045, 'text': 'And we do not want the testing batch in here.', 'start': 1839.164, 'duration': 1.881}, {'end': 1843.486, 'text': 'This is just the self all trained batches.', 'start': 1841.185, 'duration': 2.301}, {'end': 1847.088, 'text': "So we're going to make an array of all those different images.", 'start': 1843.846, 'duration': 3.242}, {'end': 1851.85, 'text': 'And then, of course, we left the test batch out, so we have our self.testbatch.', 'start': 1847.568, 'duration': 4.282}, {'end': 1859.033, 'text': "We're going to initialize the training images and the training labels, and also the test images and the test labels.", 'start': 1852.49, 'duration': 6.543}, {'end': 1862.735, 'text': 'So this is just to initialize these variables in here.', 'start': 1859.513, 'duration': 3.222}, {'end': 1867.513, 'text': 'Then we create another definition down here, and this is going to set up the images.', 'start': 1863.166, 'duration': 4.347}, {'end': 1869.676, 'text': "Let's just take a look and
see what's going on in there.", 'start': 1867.733, 'duration': 1.943}], 'summary': 'Initializing training and test batches for images and labels.', 'duration': 44.56, 'max_score': 1825.116, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE1825116.jpg'}, {'end': 1916.893, 'src': 'embed', 'start': 1887.79, 'weight': 3, 'content': [{'end': 1894.114, 'text': "We're going to set up these self-training images at this point, and that's going to go to a numpy array vstack.", 'start': 1887.79, 'duration': 6.324}, {'end': 1901.28, 'text': "and in there we're going to load up in this case the data for d and self all trained batches again.", 'start': 1894.594, 'duration': 6.686}, {'end': 1902.901, 'text': 'that points right up to here.', 'start': 1901.28, 'duration': 1.621}, {'end': 1906.845, 'text': "so we're going to go through each one of these files or each one of these data sets.", 'start': 1902.901, 'duration': 3.944}, {'end': 1907.945, 'text': "they're not a file anymore.", 'start': 1906.845, 'duration': 1.1}, {'end': 1909.066, 'text': "we've brought them in.", 'start': 1907.945, 'duration': 1.121}, {'end': 1916.893, 'text': 'data batch one points to the actual data and so our self-training images is going to stack them all into a numpy array,', 'start': 1909.066, 'duration': 7.827}], 'summary': 'Setting up self-training images using numpy array vstack', 'duration': 29.103, 'max_score': 1887.79, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE1887790.jpg'}, {'end': 2101.167, 'src': 'embed', 'start': 2072.744, 'weight': 5, 'content': [{'end': 2078.911, 'text': "This used to throw me for a little loop: when I'm working with TensorFlow or Keras or a lot of these, we have our data coming in.", 'start': 2072.744, 'duration': 6.167}, {'end': 2081.594, 'text': 'If you remember, we had like 10,000 photos.', 'start': 2078.971, 'duration': 2.623}, {'end': 2083.496,
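Stacking the training batches with numpy's vstack, as described, can be sketched as follows; the dictionaries are random stand-ins for the real unpickled data_batch_1 through data_batch_5 files:

```python
import numpy as np

# Hypothetical stand-ins for the five CIFAR-10 training batches; the real ones
# come from unpickling the batch files, each with 10,000 rows of 3,072 bytes.
all_train_batches = [
    {b'data': np.random.randint(0, 256, (10000, 3072), dtype=np.uint8),
     b'labels': np.random.randint(0, 10, 10000)}
    for _ in range(5)
]

# vstack concatenates the per-batch arrays row-wise into one big array.
training_images = np.vstack([d[b'data'] for d in all_train_batches])
training_labels = np.hstack([d[b'labels'] for d in all_train_batches])

print(training_images.shape)  # (50000, 3072)
print(training_labels.shape)  # (50000,)
```

The test batch is kept out of this stack and handled separately, as the narrator notes.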
'text': 'Let me just put 10,000 down here.', 'start': 2081.694, 'duration': 1.802}, {'end': 2089.36, 'text': "We don't want to run all 10,000 at once, so we want to break this up into batch sizes.", 'start': 2084.516, 'duration': 4.844}, {'end': 2095.944, 'text': 'And you also remember that we had the number of photos, in this case length of test or whatever number is in there.', 'start': 2089.76, 'duration': 6.184}, {'end': 2101.167, 'text': 'We also have 32 by 32 by 3.', 'start': 2096.304, 'duration': 4.863}], 'summary': 'Using batch sizes to process 10,000 photos in tensorflow/keras.', 'duration': 28.423, 'max_score': 2072.744, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE2072744.jpg'}, {'end': 2154.042, 'src': 'embed', 'start': 2132.689, 'weight': 4, 'content': [{'end': 2141.534, 'text': "And then we're going to reshape that into, and this is important to let the data know, that we're looking at 100 by 32 by 32 by 3.", 'start': 2132.689, 'duration': 8.845}, {'end': 2144.256, 'text': "Now, we've already formatted it to the 32 by 32 by 3.", 'start': 2141.534, 'duration': 2.722}, {'end': 2150.2, 'text': 'This just sets everything up correctly so that X has the data in there in the correct order and the correct shape.', 'start': 2144.256, 'duration': 5.944}, {'end': 2154.042, 'text': 'And then the Y, just like the X, is our labels.', 'start': 2150.6, 'duration': 3.442}], 'summary': 'Data is reshaped to 100 by 32 by 32 by 3, ensuring correct order and shape for x and y.', 'duration': 21.353, 'max_score': 2132.689, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE2132689.jpg'}], 'start': 1785.656, 'title': 'Python project structuring and image data processing', 'summary': 'Discusses structuring a Python project with a CifarHelper class, including batch setup.
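The batching step described here — pulling 100 images at a time and shaping each batch as 100 by 32 by 32 by 3 — might be implemented along these lines; the class name and the zero-filled arrays are hypothetical stand-ins for the helper the video builds:

```python
import numpy as np

class BatchFeeder:
    """Minimal sketch of a next_batch helper for mini-batch training."""
    def __init__(self, images, labels):
        self.images = images   # shape (N, 32, 32, 3)
        self.labels = labels   # one-hot labels, shape (N, 10)
        self.i = 0             # position of the next batch

    def next_batch(self, batch_size=100):
        # Reshape makes the (100, 32, 32, 3) layout explicit for the model.
        x = self.images[self.i:self.i + batch_size].reshape(batch_size, 32, 32, 3)
        y = self.labels[self.i:self.i + batch_size]
        self.i = (self.i + batch_size) % len(self.images)  # wrap around
        return x, y

feeder = BatchFeeder(np.zeros((10000, 32, 32, 3), dtype=np.uint8),
                     np.zeros((10000, 10)))
x, y = feeder.next_batch(100)
print(x.shape, y.shape)  # (100, 32, 32, 3) (100, 10)
```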
it also covers optimizing data processing and image analysis in a machine learning model by setting up self-training images, stacking them into arrays, reshaping, transposing, and processing in batches of 100.', 'chapters': [{'end': 1887.37, 'start': 1785.656, 'title': 'Building the CifarHelper class', 'summary': 'Discusses the importance of dividing a complicated Python project into separate definitions and classes, and it explains the initialization and setup of training and test batches in the CifarHelper class.', 'duration': 101.714, 'highlights': ['The importance of dividing a complicated Python project into separate definitions and classes is emphasized for better manageability and readability.', 'The initialization of training batches involves creating an array of different images, excluding the testing batch.', 'The setup of training and test images, as well as their respective labels, is explained for initializing the variables in the CifarHelper class.']}, {'end': 2194.551, 'start': 1887.79, 'title': 'Self-training image setup', 'summary': 'Discusses setting up self-training images, stacking them into numpy arrays, reshaping and transposing the images, and dividing the data into batches of 100 for processing, aiming to optimize data processing and image analysis in a machine learning model.', 'duration': 306.761, 'highlights': ['The chapter discusses setting up self-training images and stacking them into numpy arrays for data processing.', 'It explains the process of reshaping and transposing the images to ensure correct formatting and data optimization.', 'The transcript also details the strategy of dividing the data into batches of 100 for efficient processing and analysis in a machine learning model.']}], 'duration': 408.895, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE1785656.jpg', 'highlights': ['The importance of dividing a complicated Python project into separate definitions and classes
is emphasized for better manageability and readability.', 'The initialization of training batches involves creating an array of different images, excluding the testing batch.', 'The setup of training and test images, as well as their respective labels, is explained for initializing the variables in the CifarHelper class.', 'The chapter discusses setting up self-training images and stacking them into numpy arrays for data processing.', 'It explains the process of reshaping and transposing the images to ensure correct formatting and data optimization.', 'The transcript also details the strategy of dividing the data into batches of 100 for efficient processing and analysis in a machine learning model.']}, {'end': 2490.481, 'segs': [{'end': 2238.002, 'src': 'embed', 'start': 2213.716, 'weight': 0, 'content': [{'end': 2219.718, 'text': 'Now we could have just put all the setup images under the init, but by breaking this up into two parts it makes it much more readable.', 'start': 2213.716, 'duration': 6.002}, {'end': 2224.359, 'text': "And also if you're doing other work there's reasons to do that as far as the setup.", 'start': 2219.978, 'duration': 4.381}, {'end': 2225.739, 'text': "Let's go ahead and run that.", 'start': 2224.699, 'duration': 1.04}, {'end': 2230.9, 'text': 'And you can see where it says setting up training images and labels, setting up test images.', 'start': 2226.279, 'duration': 4.621}, {'end': 2234.581, 'text': "And that's one of the reasons we broke it up is so that if you're testing this out,", 'start': 2231.14, 'duration': 3.441}, {'end': 2238.002, 'text': "you can actually have print statements in there telling you what's going on, which is really nice.", 'start': 2234.581, 'duration': 3.421}], 'summary': 'Breaking setup into two parts improves readability and allows for better testing and tracking.', 'duration': 24.286, 'max_score': 2213.716, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE2213716.jpg'}, {'end': 2280.968, 'src': 'embed', 'start': 2248.386, 'weight': 2, 'content': [{'end': 2253.148, 'text': "batch equals ch.next_batch(100), because we're going to use the 100 size.", 'start': 2248.386, 'duration': 4.762}, {'end': 2254.549, 'text': "But we'll come back to that.", 'start': 2253.588, 'duration': 0.961}, {'end': 2255.149, 'text': "We're going to use that.", 'start': 2254.569, 'duration': 0.58}, {'end': 2259.491, 'text': "Just remember that that's part of our code we're going to be using in a minute from the definition we just made.", 'start': 2255.189, 'duration': 4.302}, {'end': 2261.492, 'text': "So now we're ready to create our model.", 'start': 2259.731, 'duration': 1.761}, {'end': 2265.033, 'text': 'First thing we want to do is we want to import our TensorFlow as tf.', 'start': 2261.572, 'duration': 3.461}, {'end': 2266.934, 'text': "I'll just go ahead and run that so it's loaded up.", 'start': 2265.213, 'duration': 1.721}, {'end': 2270.557, 'text': "And you can see we've got a warning here.", 'start': 2267.434, 'duration': 3.123}, {'end': 2272.059, 'text': "that's because they're making some changes.", 'start': 2270.557, 'duration': 1.502}, {'end': 2280.968, 'text': "it's always growing, and they're going to be deprecating one of the values, where float64 as a type is treated as an np.float64.", 'start': 2272.059, 'duration': 8.909}
for this.", 'start': 2364.834, 'duration': 1.881}, {'end': 2369.798, 'text': 'This placeholder is for what we call dropout.', 'start': 2366.835, 'duration': 2.963}, {'end': 2378.584, 'text': "If you remember from our theory before, we drop out so many nodes it's looking at or the different values going through, which helps decrease bias.", 'start': 2369.998, 'duration': 8.586}, {'end': 2382.047, 'text': 'So we need to go ahead and put a placeholder for that also.', 'start': 2378.905, 'duration': 3.142}, {'end': 2383.008, 'text': "And we'll run this.", 'start': 2382.227, 'duration': 0.781}, {'end': 2384.729, 'text': "So it's all loaded up in there.", 'start': 2383.408, 'duration': 1.321}, {'end': 2386.431, 'text': 'So we have our three different placeholders.', 'start': 2384.789, 'duration': 1.642}, {'end': 2390.814, 'text': "And since we're in TensorFlow, when you use Keras, it does some of this automatically.", 'start': 2386.771, 'duration': 4.043}, {'end': 2392.235, 'text': "But we're in TensorFlow direct.", 'start': 2390.854, 'duration': 1.381}, {'end': 2393.736, 'text': 'Keras sits on TensorFlow.', 'start': 2392.475, 'duration': 1.261}, {'end': 2396.359, 'text': "we're gonna go ahead and create some more helper functions.", 'start': 2393.736, 'duration': 2.623}, {'end': 2400.584, 'text': "we're gonna create something to help us initialize the weights, initialize our bias.", 'start': 2396.359, 'duration': 4.225}, {'end': 2404.167, 'text': 'if you remember that each layer has to have a bias going in.', 'start': 2400.584, 'duration': 3.583}, {'end': 2408.811, 'text': "we're gonna go ahead and work on our convolutional 2D and our max pool.", 'start': 2404.167, 'duration': 4.644}, {'end': 2413.453, 'text': 'so we have our pooling layer, our convolutional layer and then our normal fully connected layer.', 'start': 2408.811, 'duration': 4.642}, {'end': 2417.394, 'text': "So we're going to go ahead and put those all into definitions, and let's see what that looks like in code.",
'start': 2413.473, 'duration': 3.921}, {'end': 2422.935, 'text': 'And you can also grab some of these helper functions from the MNIST, the NIST setup.', 'start': 2417.594, 'duration': 5.341}], 'summary': 'Creating helper functions for tensorflow in code', 'duration': 59.242, 'max_score': 2363.693, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE2363693.jpg'}], 'start': 2194.791, 'title': 'Loading and setting up images for machine learning and tensorflow placeholder and helper functions', 'summary': 'Covers the process of loading and setting up images for machine learning, emphasizing progress tracking and tensorflow usage, and discusses setting up tensorflow placeholders, helper functions, and implementing layers for a neural network.', 'chapters': [{'end': 2265.033, 'start': 2194.791, 'title': 'Loading and setting up images for machine learning', 'summary': 'Discusses the process of loading and setting up images for machine learning, emphasizing the benefits of breaking up the setup process and the importance of print statements for tracking progress, also highlighting the usage of tensorflow for creating the model.', 'duration': 70.242, 'highlights': ['The chapter emphasizes the benefits of breaking up the setup process, making it more readable and providing reasons for doing so.', 'Print statements are highlighted as an important tool for tracking progress, with the example of setting up training and test images and labels.', 'The usage of TensorFlow is mentioned for creating the model, emphasizing the need to import it as tf.']}, {'end': 2490.481, 'start': 2265.213, 'title': 'Tensorflow placeholder and helper functions', 'summary': 'Discusses setting up tensorflow placeholders for input data and dropout probability, creating helper functions to initialize weights and bias, and implementing convolutional 2d and max pooling layers for a neural network in tensorflow.', 'duration': 225.268, 'highlights': 
['Setting up TensorFlow placeholders for input data and dropout probability The chapter discusses setting up x and y placeholders for input data and a dropout probability placeholder in TensorFlow.', 'Creating helper functions to initialize weights and bias The chapter covers the creation of helper functions to initialize weights and bias for a neural network in TensorFlow.', 'Implementing convolutional 2D and max pooling layers The chapter explains the implementation of convolutional 2D and max pooling layers for a neural network in TensorFlow.']}], 'duration': 295.69, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE2194791.jpg', 'highlights': ['The chapter emphasizes the benefits of breaking up the setup process, making it more readable and providing reasons for doing so.', 'Print statements are highlighted as an important tool for tracking progress, with the example of setting up training and test images and labels.', 'The usage of TensorFlow is mentioned for creating the model, emphasizing the need to import it as tf.', 'Setting up TensorFlow placeholders for input data and dropout probability The chapter discusses setting up x and y placeholders for input data and a dropout probability placeholder in TensorFlow.', 'Creating helper functions to initialize weights and bias The chapter covers the creation of helper functions to initialize weights and bias for a neural network in TensorFlow.', 'Implementing convolutional 2D and max pooling layers The chapter explains the implementation of convolutional 2D and max pooling layers for a neural network in TensorFlow.']}, {'end': 3088.807, 'segs': [{'end': 2520.183, 'src': 'embed', 'start': 2490.481, 'weight': 0, 'content': [{'end': 2494.625, 'text': 'A lot of times the bias is just put in as 1, and then you have your weights to add on to that.', 'start': 2490.481, 'duration': 4.144}, {'end': 2497.328, 'text': "But we're going to set this as 0.1.", 'start': 
2495.086, 'duration': 2.242}, {'end': 2501.252, 'text': 'So we want to return a convolutional 2D, in this case a neural network.', 'start': 2497.328, 'duration': 3.924}, {'end': 2503.354, 'text': 'This would be a layer on here.', 'start': 2501.452, 'duration': 1.902}, {'end': 2504.615, 'text': "What's going on with the 2D..", 'start': 2503.594, 'duration': 1.021}, {'end': 2510.419, 'text': "We're taking our data coming in, we're going to filter it.", 'start': 2506.617, 'duration': 3.802}, {'end': 2515.501, 'text': "Strides. if you remember correctly, strides came from here's our image.", 'start': 2510.799, 'duration': 4.702}, {'end': 2520.183, 'text': 'and then we only look at this picture here and then maybe we have a stride of one.', 'start': 2515.501, 'duration': 4.682}], 'summary': 'Using a bias of 0.1, the neural network applies 2d convolution to filter incoming data with a stride of one.', 'duration': 29.702, 'max_score': 2490.481, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE2490481.jpg'}, {'end': 2642.051, 'src': 'embed', 'start': 2613.738, 'weight': 3, 'content': [{'end': 2616.459, 'text': 'We create our convolutional layer with all the filters.', 'start': 2613.738, 'duration': 2.721}, {'end': 2617.34, 'text': 'Remember the filters.', 'start': 2616.519, 'duration': 0.821}, {'end': 2623.502, 'text': "go, you know the filters coming in here, and it looks at these four boxes and then, if it's a step, let's say step two,", 'start': 2617.34, 'duration': 6.162}, {'end': 2626.823, 'text': 'it then goes to these four boxes and then the next step, and so on.', 'start': 2623.502, 'duration': 3.321}, {'end': 2631.205, 'text': 'So we have our convolutional layer that we generate or convolutional layers.', 'start': 2627.284, 'duration': 3.921}, {'end': 2634.087, 'text': 'They use the relu function.', 'start': 2631.425, 'duration': 2.662}, {'end': 2635.747, 'text': "There's other functions out there.", 'start': 
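The helpers the video builds here (weight initialization, the constant 0.1 bias, and a strided 2D convolution) can be sketched in pure Python to show the mechanics; this is an illustrative stand-in for tf.truncated_normal and tf.nn.conv2d, not the video's exact code:

```python
import random

def init_weights(rows, cols, stddev=0.1, seed=42):
    """Small random-normal starting weights (stand-in for a
    truncated-normal TensorFlow initializer)."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, stddev) for _ in range(cols)]
            for _ in range(rows)]

def init_bias(n, value=0.1):
    """One constant bias per output unit; the video sets 0.1."""
    return [value] * n

def conv2d(image, kernel, stride=1):
    """Slide `kernel` over `image` (valid padding), taking the dot
    product at each position and moving `stride` pixels per step."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(0, len(image[0]) - kw + 1, stride)]
            for i in range(0, len(image) - kh + 1, stride)]

image = [[1, 0, 1, 0],
         [0, 1, 0, 1],
         [1, 0, 1, 0],
         [0, 1, 0, 1]]
kernel = [[1, 0],
          [0, 1]]
fmap = conv2d(image, kernel, stride=1)          # 3x3 feature map
fmap_strided = conv2d(image, kernel, stride=2)  # stride 2 -> 2x2 map
```

Note how a larger stride makes the filter skip positions, shrinking the output feature map, exactly the sliding-window behavior described in the transcript.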
2634.467, 'duration': 1.28}, {'end': 2639.769, 'text': 'For this though, the relu is the most, the one that works the best.', 'start': 2635.867, 'duration': 3.902}, {'end': 2640.95, 'text': 'At least so far.', 'start': 2640.229, 'duration': 0.721}, {'end': 2642.051, 'text': "I'm sure that will change.", 'start': 2640.99, 'duration': 1.061}], 'summary': 'Creating convolutional layers with filters, using relu function for best performance.', 'duration': 28.313, 'max_score': 2613.738, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE2613738.jpg'}, {'end': 2813.389, 'src': 'embed', 'start': 2785.993, 'weight': 1, 'content': [{'end': 2793.638, 'text': 'So, if you remember, you have a filter and you have your image and the filter slowly steps over and filters out this image,', 'start': 2785.993, 'duration': 7.645}, {'end': 2794.759, 'text': 'depending on what your step is.', 'start': 2793.638, 'duration': 1.121}, {'end': 2798.742, 'text': 'For this particular setup 4x4 is just fine.', 'start': 2795.1, 'duration': 3.642}, {'end': 2801.905, 'text': "That should work pretty good for what we're doing and for the size of the image.", 'start': 2798.782, 'duration': 3.123}, {'end': 2807.067, 'text': 'And then, of course, at the end, once you have your convolutional layer set up, you also need to pool it.', 'start': 2802.305, 'duration': 4.762}, {'end': 2813.389, 'text': "And you'll see that the pooling is automatically set up so that it would see the different shape based on what's coming in.", 'start': 2807.307, 'duration': 6.082}], 'summary': 'A 4x4 filter is suitable for the image, followed by automatic pooling.', 'duration': 27.396, 'max_score': 2785.993, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE2785993.jpg'}, {'end': 2917.01, 'src': 'embed', 'start': 2891.054, 'weight': 2, 'content': [{'end': 2895.236, 'text': "And then we're going to set it up as a 
single layer that's 4,096 in size.", 'start': 2891.054, 'duration': 4.182}, {'end': 2896.797, 'text': "That's what that means there.", 'start': 2895.577, 'duration': 1.22}, {'end': 2898.178, 'text': "We'll go ahead and run this.", 'start': 2896.917, 'duration': 1.261}, {'end': 2901.761, 'text': "So we've now created this variable, the convo_2_flat.", 'start': 2898.539, 'duration': 3.222}, {'end': 2903.982, 'text': 'And then we have our first full layer.', 'start': 2902.001, 'duration': 1.981}, {'end': 2908.065, 'text': 'This is the final neural network where we have the flat layer going in.', 'start': 2904.082, 'duration': 3.983}, {'end': 2914.089, 'text': "And we're going to again use the ReLU for our setup on there in a neural network for evaluation.", 'start': 2908.385, 'duration': 5.704}, {'end': 2917.01, 'text': "And you'll notice that we're going to create our first full layer.", 'start': 2914.469, 'duration': 2.541}], 'summary': 'Creating a neural network with a single 4,096-sized layer and relu activation for evaluation.', 'duration': 25.956, 'max_score': 2891.054, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE2891054.jpg'}], 'start': 2490.481, 'title': 'Cnn basics and architecture', 'summary': 'Covers the fundamentals of convolutional neural networks, including bias setting, data filtering, strides, reformatting data dimensions, weight and bias initialization, tf.nn.relu usage, and pooling.
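The convo_2_flat reshape and the first fully connected layer can be sketched in pure Python (illustrative names, not the video's exact code; for instance, pooling a 32x32 input twice with 64 filters would leave 8*8*64 = 4,096 values, matching the 4,096-wide flat layer mentioned above):

```python
def flatten(feature_maps):
    """Concatenate a list of 2D feature maps into one flat vector,
    like the convo_2_flat reshape."""
    return [v for fmap in feature_maps for row in fmap for v in row]

def dense(x, weights, biases):
    """Fully connected layer: each output unit is a dot product of
    the whole input vector with its weight column, plus a bias."""
    return [sum(xi * wi for xi, wi in zip(x, w)) + b
            for w, b in zip(weights, biases)]

maps = [[[1, 2], [3, 4]],
        [[5, 6], [7, 8]]]   # two tiny 2x2 feature maps
flat = flatten(maps)        # 8 values in one vector
out = dense(flat, [[1] * 8, [0] * 8], [0.1, 0.1])
```

Flattening discards the spatial layout on purpose: by this point the convolution and pooling layers have already extracted location-aware features, and the fully connected layer only has to combine them.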
it also elaborates on constructing cnn architecture, encompassing convolutional layer creation, pooling, flattening, and defining fully connected layers, with a focus on filter size and layer dimensions.', 'chapters': [{'end': 2613.478, 'start': 2490.481, 'title': 'Convolutional neural network basics', 'summary': 'Discusses the basics of convolutional neural networks, including setting bias, filtering data, strides, reformatting data dimensions, initializing weights and biases, using tf.nn.relu with the convolutional 2d, and the concept of pooling after running data through the convolutional layer.', 'duration': 122.997, 'highlights': ['The chapter discusses the basics of convolutional neural networks, including setting bias, filtering data, strides, reformatting data dimensions, initializing weights and biases, using tf.nn.relu with the convolutional 2D, and the concept of pooling after running data through the convolutional layer. This highlight provides a comprehensive overview of the key points discussed in the chapter, including the various components and processes involved in convolutional neural networks.', 'The convolutional layer steps through and creates all those filters we saw. This highlights the role of the convolutional layer in creating filters, which is a crucial aspect of convolutional neural networks.', "We're taking our data coming in, we're going to filter it. This highlights the initial step of filtering the incoming data, which is fundamental in the processing of convolutional neural networks.", 'After each time we run it through the convolutional layer, we want to pool the data.
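Pooling after each convolutional layer, as described above, can be sketched in pure Python: apply the ReLU, then keep only the maximum of each 2x2 window, halving each spatial dimension (an illustrative stand-in for tf.nn.relu and tf.nn.max_pool, not the video's exact code):

```python
def relu(fmap):
    """Elementwise max(0, x), the non-linearity used after each conv."""
    return [[max(0.0, v) for v in row] for row in fmap]

def max_pool_2x2(fmap):
    """2x2 max pooling with stride 2: the strongest response in each
    window survives, shrinking the data for the next layer."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

fmap = [[-1.0,  2.0,  0.5, -3.0],
        [ 4.0, -2.0,  1.0,  0.0],
        [ 0.0,  1.0, -1.0,  2.0],
        [-5.0,  3.0,  6.0, -4.0]]
pooled = max_pool_2x2(relu(fmap))   # 4x4 -> 2x2
```

Each pooled value is the strongest match the filter found in its window, which is the "find the matches, then take the maximum value" behavior the recap describes.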
This highlights the concept of pooling the data after running it through the convolutional layer, which is an important process in convolutional neural networks.']}, {'end': 3088.807, 'start': 2613.738, 'title': 'Convolutional neural network architecture', 'summary': 'Explains the process of creating a convolutional neural network (cnn) architecture, including the steps for creating convolutional layers, pooling, flattening, and defining fully connected layers, with emphasis on key parameters such as filter size and layer dimensions.', 'duration': 475.069, 'highlights': ['Creating Convolutional Layers and Pooling The process of creating convolutional layers and pooling involves setting up filter sizes (e.g., 4x4), utilizing the ReLU function, and configuring max pooling to reduce data size for efficient processing.', 'Defining Fully Connected Layers The definition of fully connected layers involves flattening the data into a single array, setting up the input size, initializing weights and biases, and using the ReLU function for evaluation, with specific layer dimensions (e.g., 1024) and dropout probability.', "Optimizing Model Fit and Resource Utilization The chapter emphasizes the impact of parameter choices, such as filter sizes and layer dimensions, on the model's fit and resource utilization, highlighting the need for careful selection to avoid overfitting or underutilization."]}], 'duration': 598.326, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE2490481.jpg', 'highlights': ['The chapter discusses the basics of convolutional neural networks, including setting bias, filtering data, strides, reformatting data dimensions, initializing weights and biases, using TFNN relu with the convolutional 2D, and the concept of pooling after running data through the convolutional layer.', 'Creating Convolutional Layers and Pooling involves setting up filter sizes (e.g., 4x4), utilizing the ReLU function, and 
configuring max pooling to reduce data size for efficient processing.', 'Defining Fully Connected Layers involves flattening the data into a single array, setting up the input size, initializing weights and biases, and using the ReLU function for evaluation, with specific layer dimensions (e.g., 1024) and dropout probability.', 'The convolutional layer steps through and creates all those filters we saw.']}, {'end': 3833.632, 'segs': [{'end': 3121.086, 'src': 'embed', 'start': 3088.807, 'weight': 3, 'content': [{'end': 3092.852, 'text': 'You can use any number there, but those would be the ideal numbers when you look at this data.', 'start': 3088.807, 'duration': 4.045}, {'end': 3099.054, 'text': 'So the next step in all of this is we need to also create a way of tracking how good our model is.', 'start': 3093.092, 'duration': 5.962}, {'end': 3101.175, 'text': "And we're going to call this a loss function.", 'start': 3099.454, 'duration': 1.721}, {'end': 3104.876, 'text': "And so we're going to create a cross entropy loss function.", 'start': 3101.415, 'duration': 3.461}, {'end': 3108.978, 'text': "And so before we discuss exactly what that is, let's take a look and see what we're feeding it.", 'start': 3105.016, 'duration': 3.962}, {'end': 3114.322, 'text': "we're gonna feed it our labels and we have our true labels and our prediction labels.", 'start': 3109.698, 'duration': 4.624}, {'end': 3121.086, 'text': "So, coming in here is where the two different variables we're sending in, or the two different probability distributions,", 'start': 3114.842, 'duration': 6.244}], 'summary': 'Create cross entropy loss function to track model performance and analyze input data.', 'duration': 32.279, 'max_score': 3088.807, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE3088807.jpg'}, {'end': 3193.874, 'src': 'embed', 'start': 3168.601, 'weight': 4, 'content': [{'end': 3173.925, 'text': "So when you know what the loss is 
and we're training it, you feed that back into the back propagation setup.", 'start': 3168.601, 'duration': 5.324}, {'end': 3175.946, 'text': 'And so we want to go ahead and optimize that.', 'start': 3174.205, 'duration': 1.741}, {'end': 3176.987, 'text': "Here's our optimizer.", 'start': 3176.006, 'duration': 0.981}, {'end': 3179.988, 'text': "We're going to create the optimizer using an Adam optimizer.", 'start': 3177.067, 'duration': 2.921}, {'end': 3182.57, 'text': "Remember, there's a lot of different ways of optimizing the data.", 'start': 3180.148, 'duration': 2.422}, {'end': 3184.231, 'text': "Adam's the most popular one used.", 'start': 3182.75, 'duration': 1.481}, {'end': 3188.612, 'text': 'So our optimizer is going to equal the tf.train.AdamOptimizer.', 'start': 3184.911, 'duration': 3.701}, {'end': 3192.853, 'text': "If you don't remember what the learning rate is, let me just pop this back into here.", 'start': 3188.792, 'duration': 4.061}, {'end': 3193.874, 'text': "Here's our learning rate.", 'start': 3192.993, 'duration': 0.881}], 'summary': 'Training involves optimizing loss using tf.train.AdamOptimizer with learning rate.', 'duration': 25.273, 'max_score': 3168.601, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE3168601.jpg'}, {'end': 3396.184, 'src': 'embed', 'start': 3367.918, 'weight': 2, 'content': [{'end': 3374.684, 'text': "The next thing we're going to do is we're going to go for i in range of 500, batch equals ch dot next batch.", 'start': 3367.918, 'duration': 6.766}, {'end': 3379.649, 'text': 'So if you remember correctly, this is loading up 100 pictures at a time.', 'start': 3374.884, 'duration': 4.765}, {'end': 3383.773, 'text': 'And this is going to loop through that 500 times.', 'start': 3380.209, 'duration': 3.564}, {'end': 3386.455, 'text': 'So we are literally doing, what is that, 500 times 100 is 50,000.', 'start': 3384.173, 'duration': 2.282}, {'end': 3389.178, 'text': "So that's
50,000 pictures we're going to process right there.", 'start': 3386.455, 'duration': 2.723}, {'end': 3396.184, 'text': "In the first process, we're going to do a session run.", 'start': 3393.962, 'duration': 2.222}], 'summary': 'Processing 50,000 pictures in batches of 100, looping through 500 times.', 'duration': 28.266, 'max_score': 3367.918, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE3367918.jpg'}, {'end': 3585.571, 'src': 'embed', 'start': 3556.888, 'weight': 5, 'content': [{'end': 3559.65, 'text': 'So if you have an accuracy of 1, that is phenomenal.', 'start': 3556.888, 'duration': 2.762}, {'end': 3562.172, 'text': "In fact, that's pretty much unheard of.", 'start': 3560.03, 'duration': 2.142}, {'end': 3563.553, 'text': 'And the same thing with loss.', 'start': 3562.252, 'duration': 1.301}, {'end': 3566.615, 'text': "If you have a loss of 0, that's also unheard of.", 'start': 3563.613, 'duration': 3.002}, {'end': 3570.078, 'text': 'The zero is actually on this axis right here as we go in there.', 'start': 3566.955, 'duration': 3.123}, {'end': 3576.563, 'text': "So how do we interpret that? 
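The training loop described above (for i in range(500), 100 pictures per batch) processes 500 x 100 = 50,000 images. A minimal sketch, with a hypothetical next_batch helper standing in for the one used in the video:

```python
def next_batch(images, batch_size, step):
    """Return the next batch_size images, wrapping around the dataset,
    like an MNIST-style next_batch helper (illustrative, not the
    video's exact function)."""
    start = (step * batch_size) % len(images)
    return images[start:start + batch_size]

images = list(range(1000))   # stand-in for 1,000 training images
batch_size, steps = 100, 500
seen = 0
for i in range(steps):
    batch = next_batch(images, batch_size, i)
    seen += len(batch)       # a real loop would run the train step here
```

With wraparound, 50,000 presentations over a 1,000-image set means each image is revisited about 50 times; the loop sees the whole dataset repeatedly rather than 50,000 distinct pictures.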
Because if I was looking at this and I go, oh, 0.51, that's 51%.", 'start': 3570.258, 'duration': 6.305}, {'end': 3577.304, 'text': "You're doing 50-50.", 'start': 3576.563, 'duration': 0.741}, {'end': 3580.106, 'text': 'No, this is not percentage.', 'start': 3578.065, 'duration': 2.041}, {'end': 3581.808, 'text': 'Let me just put that in there.', 'start': 3580.667, 'duration': 1.141}, {'end': 3583.369, 'text': 'It is not percentage.', 'start': 3582.008, 'duration': 1.361}, {'end': 3585.571, 'text': 'This is logarithmic.', 'start': 3583.889, 'duration': 1.682}], 'summary': 'Interpreting accuracy and loss values, where the loss plot is logarithmic; the ideal is an accuracy of 1 and a loss of 0.', 'duration': 28.683, 'max_score': 3556.888, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE3556888.jpg'}, {'end': 3759.089, 'src': 'embed', 'start': 3729.732, 'weight': 0, 'content': [{'end': 3736.558, 'text': 'And we discussed how a CNN recognizes images and very basics as we converted the image into a pixel map of zeros and ones.', 'start': 3729.732, 'duration': 6.826}, {'end': 3740.101, 'text': 'Then we dived into layers in a convolutional neural network.', 'start': 3736.958, 'duration': 3.143}, {'end': 3748.184, 'text': 'And we had our convolutional layer, our fully connected layer, our ReLU layer, and our pooling layer.', 'start': 3740.621, 'duration': 7.563}, {'end': 3749.845, 'text': 'And we looked at pooling layer.', 'start': 3748.544, 'duration': 1.301}, {'end': 3759.089, 'text': 'It reduces the data down to a smaller amount: first it finds the matches, and then it finds the maximum value in that match.', 'start': 3750.205, 'duration': 8.884}], 'summary': 'Cnn processes images using layers like convolutional, fully connected, relu, and pooling to reduce data', 'duration': 29.357, 'max_score': 3729.732, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE3729732.jpg'},
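The cross-entropy loss discussed earlier is why a loss of 0.51 is not "51%": cross-entropy is -log of the probability assigned to the true class, so a loss of about 0.51 means the model gave the correct class roughly e^-0.51, about 0.60, probability. A pure-Python sketch of the loss (illustrative, not the video's TensorFlow call):

```python
import math

def cross_entropy(true_one_hot, predicted_probs, eps=1e-12):
    """Mean cross-entropy between one-hot labels and predicted
    probability distributions: -sum(y * log(p)) per example,
    averaged over the batch."""
    losses = [-sum(y * math.log(max(p, eps))
                   for y, p in zip(ys, ps))
              for ys, ps in zip(true_one_hot, predicted_probs)]
    return sum(losses) / len(losses)

y_true = [[0, 1], [1, 0]]
y_pred = [[0.1, 0.9], [0.8, 0.2]]
loss = cross_entropy(y_true, y_pred)   # (-ln 0.9 - ln 0.8) / 2
prob_of_truth = math.exp(-0.51)        # ~0.60, not 51%
```

This also shows why a loss of exactly 0 is "unheard of": it requires assigning probability 1.0 to every true label, and the logarithm punishes confident mistakes far more than mild ones.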
{'end': 3812.718, 'src': 'embed', 'start': 3786.623, 'weight': 1, 'content': [{'end': 3793.728, 'text': "we max pooled those and then we took all of those pooled values at the end, so they've been reduced to smaller mappings.", 'start': 3786.623, 'duration': 7.105}, {'end': 3797.01, 'text': "we've reduced that and then we fed that into the fully connected layer.", 'start': 3793.728, 'duration': 3.282}, {'end': 3804.954, 'text': 'And then finally, we went into the use case implementation using CNN, and we walked through a full demo on the coding on there.', 'start': 3797.29, 'duration': 7.664}, {'end': 3807.115, 'text': 'With that, I want to thank you for joining us today.', 'start': 3805.114, 'duration': 2.001}, {'end': 3808.236, 'text': 'So thank you.', 'start': 3807.655, 'duration': 0.581}, {'end': 3812.718, 'text': 'For more information, visit www.simplilearn.com.', 'start': 3808.556, 'duration': 4.162}], 'summary': 'Reduced mappings and implemented cnn for use case demo on coding.', 'duration': 26.095, 'max_score': 3786.623, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE3786623.jpg'}], 'start': 3088.807, 'title': 'Implementing cross entropy loss function and cnn implementation', 'summary': 'Discusses implementation of a cross entropy loss function with an Adam optimizer having a learning rate of 0.001 and training the model over 500 batches, along with cnn implementation processing 50,000 pictures, accuracy analysis, and detailed insights into cnn layers and use case implementation.', 'chapters': [{'end': 3367.658, 'start': 3088.807, 'title': 'Implementing cross entropy loss function for model optimization', 'summary': 'Discusses the implementation of a cross entropy loss function for model optimization, including tracking model performance and using an Adam optimizer with a learning rate of 0.001, and training the model over 500 batches.', 'duration': 278.851, 'highlights': ['The chapter discusses the
implementation of a cross entropy loss function for model optimization This is the central focus of the transcript, detailing the process of implementing a cross entropy loss function for model optimization.', 'Using an Adam optimizer with a learning rate of 0.001 Provides specific details about the optimizer and its learning rate, crucial for understanding the model optimization process.', 'Training the model over 500 batches Highlights the specific training approach, indicating the data batch size and the training iteration.']}, {'end': 3833.632, 'start': 3367.918, 'title': 'Cnn implementation and accuracy analysis', 'summary': 'Covers the implementation of a convolutional neural network (cnn) with 50,000 pictures processed, accuracy interpretation, and detailed insights into cnn layers and use case implementation.', 'duration': 465.714, 'highlights': ['The chapter covers the implementation of a convolutional neural network (CNN) with 50,000 pictures processed.', 'Loss interpretation requires understanding that the loss is plotted on a logarithmic scale and is not a percentage.', 'Detailed insights into CNN layers include the understanding of convolutional layer, fully connected layer, ReLU layer, and pooling layer.', 'The use case implementation using CNN is demonstrated through a full coding demo.']}], 'duration': 744.825, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/Jy9-aGMB_TE/pics/Jy9-aGMB_TE3088807.jpg', 'highlights': ['Detailed insights into CNN layers including convolutional, fully connected, ReLU, and pooling layers.', 'Use case implementation using CNN demonstrated through a full coding demo.', 'Training the model over 500 batches, indicating the data batch size and training iteration.', 'Implementation of a cross entropy loss function for model optimization, detailing the process.', 'Adam optimizer with a learning rate of 0.001, providing specific details crucial for model optimization.', 'Loss interpretation requires understanding that the loss is plotted on a logarithmic scale and is not a percentage.', 'Implementation of a convolutional neural network (CNN) with 50,000 pictures processed.']}], 'highlights': ["The convolution layer utilizes matrix filters and performs convolution operations to detect patterns in the image, contributing significantly to the neural network's ability to recognize complex patterns.", 'The pooling layer uses multiple filters to detect edges, corners, eyes, feathers, beak, etc., enabling the recognition of various features in an image.', 'The input layer of CNN accepts the pixels of the image as input in the form of arrays, and the hidden layers carry out feature extraction by performing calculations and manipulation.', 'The fully connected layer identifies the object in the image, providing a comprehensive understanding of the final output.', 'The process of convolution involves sliding a filter matrix over the image, computing the dot product to detect patterns, and generating multiple feature maps, which are then passed through the ReLU layer to introduce non-linearity to the network.', 'The importance of dividing a complicated Python project into separate definitions and classes is emphasized for better manageability and readability.', 'The initialization of training batches involves creating an array of different images, excluding the testing batch.', 'The chapter discusses setting up self-training images and stacking them into numpy arrays for data processing.', 'The chapter emphasizes the benefits of breaking up the setup process, making it more readable and providing reasons for doing so.', 'Creating Convolutional Layers and Pooling involves setting up filter sizes (e.g., 4x4), utilizing the ReLU function, and configuring max pooling to reduce data size for efficient processing.', 'Detailed insights into CNN layers including convolutional, fully connected, ReLU, and pooling layers.', 'Use case implementation
using CNN demonstrated through a full coding demo.', 'Training the model over 500 batches, indicating the data batch size and training iteration.', 'Implementation of a cross entropy loss function for model optimization, detailing the process.', 'Adam optimizer with a learning rate of 0.001, providing specific details crucial for model optimization.', 'Loss interpretation requires understanding that the loss is plotted on a logarithmic scale and is not a percentage.', 'Implementation of a convolutional neural network (CNN) with 50,000 pictures processed.']}
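The summaries above repeatedly mention the Adam optimizer with a learning rate of 0.001. One Adam update for a single scalar weight can be sketched in pure Python, using the standard published update rule (illustrative, not TensorFlow's internal implementation):

```python
import math

def adam_step(w, grad, m, v, t, lr=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient
    (m) and squared gradient (v), bias-corrected by step count t."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)   # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = 1.0, 0.0, 0.0
w, m, v = adam_step(w, grad=2.0, m=m, v=v, t=1)
```

On the very first step the update size is approximately the learning rate regardless of the gradient's magnitude, which is part of why Adam is forgiving to tune and the most popular choice, as the transcript notes.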