title
Cancer Detection Using Deep Learning | Deep Learning Projects | Deep Learning Training | Edureka

description
🔥Edureka Deep Learning With TensorFlow (Use Code: YOUTUBE20): https://www.edureka.co/ai-deep-learning-with-tensorflow
This Edureka video on Cancer Detection Using Deep Learning will help you understand how to develop models using Convolutional Neural Networks. We also discuss improving model accuracy using pretrained models. Below are the topics covered in the Cancer Detection Using Deep Learning video:
00:00:00 Introduction
00:00:52 Introduction to Deep Learning
00:02:57 Deep Learning General Intuition
00:05:43 Image Processing Using DL
00:16:54 Brain Tumor Detection Using Custom Model
01:07:43 Transfer Learning
01:11:21 CNN Architectures
🔹Check our complete Deep Learning With TensorFlow playlist here: https://goo.gl/cck4hE
🔹Check our complete Deep Learning With TensorFlow Blog Series: http://bit.ly/2sqmP4s
#edureka #edurekadeeplearning #deeplearningwithtensorflow #cancerdetectionusingdeeplearning #convolutionneuralnetworks #deeplearningpretrainedmodels #deeplearningtutorial #edurekatraining
About the Course: Why Learn Deep Learning With TensorFlow? TensorFlow is one of the best libraries for implementing Deep Learning. It is a software library for numerical computation of mathematical expressions using data flow graphs: nodes in the graph represent mathematical operations, while the edges represent the multidimensional data arrays (tensors) that flow between them. It was created by Google, is tailored for Machine Learning, and is widely used to develop Deep Learning solutions.

detail
{'title': 'Cancer Detection Using Deep Learning | Deep Learning Projects | Deep Learning Training | Edureka', 'heatmap': [{'end': 1095.96, 'start': 1031.127, 'weight': 1}, {'end': 3371.059, 'start': 3314.451, 'weight': 0.705}], 'summary': 'Covers deep learning for brain tumor detection, including cnn fundamentals, implementation in google colab, managing image data, model training, image processing, data augmentation, and improving model accuracy through transfer learning and hyperparameter tuning, resulting in 83% accuracy and successful brain tumor image prediction.', 'chapters': [{'end': 1021.446, 'segs': [{'end': 134.88, 'src': 'embed', 'start': 106.216, 'weight': 0, 'content': [{'end': 111.08, 'text': 'it prevents overfitting and it also acquires more dimensions and therefore giving us more accuracy.', 'start': 106.216, 'duration': 4.864}, {'end': 112.921, 'text': 'So this is why we need deep learning.', 'start': 111.42, 'duration': 1.501}, {'end': 118.585, 'text': 'So now that we have discussed why we need deep learning now, you might be wondering what exactly is deep learning.', 'start': 113.322, 'duration': 5.263}, {'end': 124.69, 'text': 'Well, you see deep learning is basically a subset of machine learning algorithms, which are inspired from human brains.', 'start': 119.026, 'duration': 5.664}, {'end': 129.774, 'text': 'And as I mentioned earlier deep learning works well when you have provide a huge amount of data.', 'start': 125.27, 'duration': 4.504}, {'end': 134.88, 'text': 'So how does this work in contrast to machine learning algorithm? Well, you see machine learning algorithms.', 'start': 130.274, 'duration': 4.606}], 'summary': 'Deep learning prevents overfitting, acquires more dimensions, and provides more accuracy, making it essential for handling large datasets.', 'duration': 28.664, 'max_score': 106.216, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k106216.jpg'}, {'end': 163.961, 'src': 'embed', 'start': 139.184, 'weight': 1, 'content': [{'end': 145.151, 'text': 'But whereas in deep learning we use something called as tensors and tensors are basically small matrices inside a big Matrix.', 'start': 139.184, 'duration': 5.967}, {'end': 148.295, 'text': 'So you can consider them as a Matrix nested inside a Matrix.', 'start': 145.531, 'duration': 2.764}, {'end': 156.578, 'text': 'And what happens here how deep learning performs better when we give a different images is because looks into different features of that particular image.', 'start': 148.795, 'duration': 7.783}, {'end': 159.359, 'text': 'For example, if I have to take process an image of a cat.', 'start': 157.118, 'duration': 2.241}, {'end': 163.961, 'text': 'So when I provide multiple images of a cat with a different different dimensions,', 'start': 159.739, 'duration': 4.222}], 'summary': 'Deep learning uses tensors, nested matrices, to analyze diverse image features for better performance.', 'duration': 24.777, 'max_score': 139.184, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k139184.jpg'}, {'end': 342.938, 'src': 'embed', 'start': 306.937, 'weight': 3, 'content': [{'end': 310.919, 'text': 'So what is the advantage of multi-layer perceptron with respect to a single layer perceptron?', 'start': 306.937, 'duration': 3.982}, {'end': 318.584, 'text': 'Well, if I have to give you a brief example, a single layer perceptron or a single perceptron is basically a logistic regression problem.', 'start': 
311.3, 'duration': 7.284}, {'end': 324.887, 'text': 'I can just perform a single class and I cannot even take a look into multiple dimensions, but will multiple layer perceptrons.', 'start': 318.964, 'duration': 5.923}, {'end': 335.954, 'text': "I'll have multiple densely up and with every layer I can extract multiple features from whatever input I have been provided with and therefore you know I can get a better accuracy on whatever results I'm getting.", 'start': 325.228, 'duration': 10.726}, {'end': 338.255, 'text': 'So this is what is a multi-layer perceptron.', 'start': 336.434, 'duration': 1.821}, {'end': 342.938, 'text': 'So, now that we know what is deep learning? What are the basic fundamentals of deep learning?', 'start': 338.776, 'duration': 4.162}], 'summary': 'Multi-layer perceptrons extract multiple features, providing better accuracy in results compared to single layer perceptrons.', 'duration': 36.001, 'max_score': 306.937, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k306937.jpg'}, {'end': 502.234, 'src': 'embed', 'start': 474.204, 'weight': 2, 'content': [{'end': 476.825, 'text': 'This is exactly why we had CNN algorithm.', 'start': 474.204, 'duration': 2.621}, {'end': 481.447, 'text': 'So what is the CNN stands for CNN basically stands for convolution neural network.', 'start': 477.145, 'duration': 4.302}, {'end': 484.048, 'text': 'So what exactly is convolution neural network??', 'start': 481.927, 'duration': 2.121}, {'end': 490.91, 'text': 'Well, you see, convolution neural network is a class of deep learning algorithms which are majorly applied for computer vision applications.', 'start': 484.488, 'duration': 6.422}, {'end': 493.031, 'text': 'So what exactly is happening over here?', 'start': 491.35, 'duration': 1.681}, {'end': 494.711, 'text': 'Well, you see the image.', 'start': 493.451, 'duration': 1.26}, {'end': 502.234, 'text': 'instead of being flattened directly, we extract multiple features, and every time I extract this feature, I would pass this through another filter,', 'start': 494.711, 'duration': 7.523}], 'summary': 'Cnn is used for computer vision, extracting multiple features from images.', 'duration': 28.03, 'max_score': 474.204, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k474204.jpg'}, {'end': 665.2, 'src': 'embed', 'start': 640.941, 'weight': 5, 'content': [{'end': 647.043, 'text': 'Well, I would say, convolution filter is the most important stuff, because it is responsible for extracting features from our image.', 'start': 640.941, 'duration': 6.102}, {'end': 650.584, 'text': 'So what do I mean by this feature? 
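The segment above contrasts a single perceptron (essentially logistic regression on one output unit) with a multi-layer perceptron that stacks dense layers to extract more features before classifying. A minimal Keras sketch of that contrast is below; the layer widths and the 100-feature input are arbitrary choices for illustration, not values taken from the video.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Single "perceptron": one sigmoid unit, which behaves like logistic regression.
single = Sequential([Dense(1, activation="sigmoid", input_shape=(100,))])

# Multi-layer perceptron: stacked dense layers extract intermediate features
# before the final sigmoid classifier, which is where the extra accuracy comes from.
mlp = Sequential([
    Dense(64, activation="relu", input_shape=(100,)),
    Dense(32, activation="relu"),
    Dense(1, activation="sigmoid"),
])
```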
So let me quickly show you that.', 'start': 647.383, 'duration': 3.201}, {'end': 653.487, 'text': "Now, let's say that I have an image of a cat.", 'start': 651.164, 'duration': 2.323}, {'end': 661.035, 'text': 'So I have an image of a cat here and then these are its ear and then you can say face and nose.', 'start': 653.787, 'duration': 7.248}, {'end': 663.158, 'text': 'over here are some whiskers.', 'start': 661.035, 'duration': 2.123}, {'end': 665.2, 'text': 'whiskers are pretty important features of a cat.', 'start': 663.158, 'duration': 2.042}], 'summary': 'Convolution filter extracts important features from images, such as cat whiskers.', 'duration': 24.259, 'max_score': 640.941, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k640941.jpg'}, {'end': 790.854, 'src': 'embed', 'start': 768.562, 'weight': 6, 'content': [{'end': 776.467, 'text': 'We usually take it as 3 cross 3 or 5 cross 5 filter and then I pass that through an image and this would extract the features of my image.', 'start': 768.562, 'duration': 7.905}, {'end': 779.549, 'text': 'So moving on to the next component that is pooling layer.', 'start': 777.067, 'duration': 2.482}, {'end': 782.911, 'text': 'So now you have passed our image through a multiple convolution filter.', 'start': 779.929, 'duration': 2.982}, {'end': 785.272, 'text': 'These are obviously increase the dimension of our image.', 'start': 783.171, 'duration': 2.101}, {'end': 790.854, 'text': "And now what you're going to do is in order to reduce the dimensions like in order to reduce the complexity.", 'start': 785.752, 'duration': 5.102}], 'summary': 'Using 3x3 or 5x5 filters, image features are extracted through convolution, followed by dimension reduction.', 'duration': 22.292, 'max_score': 768.562, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k768562.jpg'}, {'end': 906.738, 'src': 'embed', 'start': 878.656, 'weight': 7, 'content': [{'end': 885.982, 'text': 'So, as you might have observed earlier, whenever I try to pass my image or convolution filter, the image size decreases sometimes.', 'start': 878.656, 'duration': 7.326}, {'end': 889.405, 'text': 'what happens is we might have a particular feature which is present at the corner.', 'start': 885.982, 'duration': 3.423}, {'end': 895.509, 'text': 'In order to prevent our image, or convolution filter, from literally ignoring that, I would add a padding layer.', 'start': 889.785, 'duration': 5.724}, {'end': 900.433, 'text': "padding layer is basically a zeros which I'm adding across or around my image.", 'start': 895.509, 'duration': 4.924}, {'end': 906.738, 'text': 'with this, what happens is, if the convolution filter passes through it, the image size would remain the same and all the features,', 'start': 900.433, 'duration': 6.305}], 'summary': 'Adding a padding layer prevents image size reduction and ensures feature retention during convolution filtering.', 'duration': 28.082, 'max_score': 878.656, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k878656.jpg'}, {'end': 945.516, 'src': 'embed', 'start': 914.524, 'weight': 8, 'content': [{'end': 915.845, 'text': 'There is no learning of features.', 'start': 914.524, 'duration': 1.321}, {'end': 919.166, 'text': 'So the moving on to the next component we have flattening operation.', 'start': 916.445, 'duration': 2.721}, {'end': 925.127, 'text': 'So if you remember basically at the end of 
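The passage here walks through the mechanical pieces of a CNN: sliding a small (e.g. 3x3) convolution filter over the image to pick out features, zero-padding so features at the borders are not lost, and pooling to shrink the resulting feature map. A toy NumPy sketch of those three operations on a single-channel array is given below; the filter values and the 6x6 image are made up purely for illustration.

```python
import numpy as np

img = np.arange(36, dtype=float).reshape(6, 6)   # toy 6x6 single-channel "image"
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)     # 3x3 vertical-edge filter

padded = np.pad(img, 1)                          # zero padding keeps the output at 6x6

# Slide the 3x3 filter over the padded image, summing element-wise products
# at every position to build the feature map.
feature_map = np.zeros_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        feature_map[i, j] = np.sum(padded[i:i+3, j:j+3] * kernel)

# 2x2 max pooling halves each spatial dimension (6x6 -> 3x3).
pooled = feature_map.reshape(3, 2, 3, 2).max(axis=(1, 3))
print(feature_map.shape, pooled.shape)           # (6, 6) (3, 3)
```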
the day, we are going to pass a convolution neural network to a dense layer.', 'start': 919.666, 'duration': 5.461}, {'end': 932.889, 'text': 'So these are basically the features of a cat like if I have to take an example of an image this features cannot just be directly processed.', 'start': 925.527, 'duration': 7.362}, {'end': 936.45, 'text': 'I obviously have to convert them into a single array of numbers.', 'start': 933.249, 'duration': 3.201}, {'end': 945.516, 'text': 'and this array is fed to the inputs of our dense layer, or I would say multi-layer perceptron, and then whatever is the output of that,', 'start': 937.21, 'duration': 8.306}], 'summary': 'The transcript discusses the flattening operation in a neural network for processing image features.', 'duration': 30.992, 'max_score': 914.524, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k914524.jpg'}], 'start': 7.059, 'title': 'Brain tumor detection with cnn', 'summary': 'Covers deep learning for brain tumor detection, emphasizing advantages over traditional methods, the use of tensors, and the importance of large data. it also explores perceptron, multi-layer perceptron, and cnn fundamentals, highlighting their advantages and limitations. additionally, it explains the core concepts of cnn, including convolution filters, pooling layers, and feature extraction.', 'chapters': [{'end': 195.629, 'start': 7.059, 'title': 'Detecting brain tumor with deep learning', 'summary': 'Provides an overview of deep learning, highlighting its advantages over traditional machine learning algorithms and its application in detecting brain tumors, emphasizing the use of tensors to process images and the significance of providing large amounts of data for model accuracy.', 'duration': 188.57, 'highlights': ['Deep learning works better with more data, preventing overfitting and acquiring more dimensions, leading to increased accuracy, in contrast to the limitations of traditional machine learning algorithms. Deep learning acquires more dimensions and prevents overfitting with more data, resulting in increased accuracy, unlike traditional machine learning algorithms.', 'Deep learning uses tensors, which are small matrices inside a big Matrix, to process images and analyze different features, allowing for a more comprehensive understanding compared to traditional machine learning algorithms. Deep learning utilizes tensors to process images, enabling a comprehensive analysis of various features, unlike traditional machine learning algorithms.', 'The chapter begins by explaining the need for deep learning over traditional machine learning algorithms and its application in detecting brain tumors using a deep learning model. The chapter introduces the need for deep learning and its application in detecting brain tumors with a deep learning model.']}, {'end': 494.711, 'start': 195.629, 'title': 'Perceptron and deep learning fundamentals', 'summary': 'Introduces the concept of perceptron, single layer neural network, multi-layer perceptron, and convolution neural network, highlighting the advantages of multi-layer perceptron over single layer perceptron and the limitations of using single layer perceptron and multi-layer perceptron for image processing.', 'duration': 299.082, 'highlights': ['Convolution neural network is a class of deep learning algorithms majorly applied for computer vision applications. 
CNN is a key concept for image processing in deep learning, specifically designed for computer vision applications.', 'Advantages of multi-layer perceptron over single layer perceptron include the ability to extract multiple features from input data, leading to better accuracy in results. Multi-layer perceptron offers the advantage of extracting multiple features from input data, resulting in improved accuracy in the obtained results.', 'Limitations of using single layer perceptron and multi-layer perceptron for image processing include overfitting, curse of dimensionality, and variance in object position due to the flattening of images into a single vector of pixel values. Using single layer perceptron and multi-layer perceptron for image processing can lead to overfitting, curse of dimensionality, and variance in object position due to the flattening of images into a single vector of pixel values.', 'Perceptron is a single neuron, and when multiple perceptrons are combined, they form a dense layer, and further combination with multiple layers results in a multi-layer perceptron. The combination of multiple perceptrons forms a dense layer, and further combination with multiple layers results in a multi-layer perceptron.']}, {'end': 1021.446, 'start': 494.711, 'title': 'Convolutional neural network basics', 'summary': 'Explains the core concept of cnn, including the use of convolution filters, pooling layers, padding layer, and flattening operation to extract features from images, reduce dimensionality, and process images for deep learning, with emphasis on the process of template matching and feature extraction.', 'duration': 526.735, 'highlights': ['Extraction of Features Using Convolution Filters The process involves using multiple convolution filters to extract various features from images, such as ears, borders, whiskers, and other important characteristics of objects, which helps in reducing dimensionality and focusing on minute details.', 'Pooling Layer for Dimensionality Reduction The pooling layer is utilized to reduce the complexity and dimensions of images by extracting maximum or average values, contributing to the elimination of unnecessary features and focusing on the most significant ones for further processing.', 'Padding Layer for Capturing All Image Features The addition of a padding layer ensures that all features of an image are captured by maintaining the image size, preventing the convolution filter from ignoring certain features present at the corners or edges of the image.', 'Flattening Operation for Input to Dense Layer The flattening operation converts the extracted image features into a single array of numbers, enabling their processing in a dense layer or multi-layer perceptron to obtain the output in a probability distribution manner.']}], 'duration': 1014.387, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k7059.jpg', 'highlights': ['Deep learning acquires more dimensions and prevents overfitting with more data, resulting in increased accuracy, unlike traditional machine learning algorithms.', 'Deep learning utilizes tensors to process images, enabling a comprehensive analysis of various features, unlike traditional machine learning algorithms.', 'CNN is a key concept for image processing in deep learning, specifically designed for computer vision applications.', 'Multi-layer perceptron offers the advantage of extracting multiple features from input data, resulting in improved accuracy in the obtained 
results.', 'Using single layer perceptron and multi-layer perceptron for image processing can lead to overfitting, curse of dimensionality, and variance in object position due to the flattening of images into a single vector of pixel values.', 'The process involves using multiple convolution filters to extract various features from images, such as ears, borders, whiskers, and other important characteristics of objects, which helps in reducing dimensionality and focusing on minute details.', 'The pooling layer is utilized to reduce the complexity and dimensions of images by extracting maximum or average values, contributing to the elimination of unnecessary features and focusing on the most significant ones for further processing.', 'The addition of a padding layer ensures that all features of an image are captured by maintaining the image size, preventing the convolution filter from ignoring certain features present at the corners or edges of the image.', 'The flattening operation converts the extracted image features into a single array of numbers, enabling their processing in a dense layer or multi-layer perceptron to obtain the output in a probability distribution manner.']}, {'end': 1484.139, 'segs': [{'end': 1100.702, 'src': 'heatmap', 'start': 1022.006, 'weight': 0, 'content': [{'end': 1031.127, 'text': "So now what we're going to do is we are going to jump to the code editor and see how we can implement convolution neural network to detect if a person is suffering from tumor or not,", 'start': 1022.006, 'duration': 9.121}, {'end': 1035.529, 'text': 'and the data set that we are going to use here would be of a brain and it would be an MRI images.', 'start': 1031.127, 'duration': 4.402}, {'end': 1041.569, 'text': "So let me know quickly jump to my code editor and the code editor that I'm going to use for today is going to be Google collab.", 'start': 1035.929, 'duration': 5.64}, {'end': 1044.07, 'text': 'So here we are at Google collab.', 'start': 1042.05, 'duration': 2.02}, {'end': 1045.011, 'text': 'Let me zoom in a bit.', 'start': 1044.09, 'duration': 0.921}, {'end': 1051.916, 'text': "So, first off, what I'm going to do here is let me see if my the run type is a GPU or not.", 'start': 1045.851, 'duration': 6.065}, {'end': 1056.619, 'text': "as of now, I'll keep it for none, because you know it would just be an unnecessary only during their training time.", 'start': 1051.916, 'duration': 4.703}, {'end': 1058.781, 'text': 'I would just change this back to GPU.', 'start': 1056.639, 'duration': 2.142}, {'end': 1061.283, 'text': 'If you want to change the GPU can just click on over here.', 'start': 1059.041, 'duration': 2.242}, {'end': 1067.246, 'text': "So the first thing that I'm going to do is import warning and before that let me connect this and for the data set.", 'start': 1061.943, 'duration': 5.303}, {'end': 1071.469, 'text': 'I have uploaded that to my Dropbox and in order for me to get the data.', 'start': 1067.326, 'duration': 4.143}, {'end': 1075.311, 'text': "All I'm going to do here is just copy the code and paste it here.", 'start': 1071.629, 'duration': 3.682}, {'end': 1078.894, 'text': 'This Linux command will just import the data and let me execute this.', 'start': 1075.592, 'duration': 3.302}, {'end': 1085.538, 'text': 'So as you can see here that it downloads this particular data and now all I need to do is unzip this up.', 'start': 1079.634, 'duration': 5.904}, {'end': 1095.96, 'text': 'and to unzip let me zoom out here a bit and to unzip all I would do is unzip 
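In this part of the walkthrough the MRI dataset is pulled into Google Colab from a hosted archive and unzipped (the video uses a Dropbox link and shell commands). A rough plain-Python equivalent is sketched below; the URL and archive name are placeholders, since the actual dataset link is not reproduced here.

```python
import urllib.request
import zipfile

# Placeholder URL -- substitute the actual dataset archive location.
DATA_URL = "https://example.com/brain_mri_dataset.zip"
ARCHIVE = "brain_mri_dataset.zip"

urllib.request.urlretrieve(DATA_URL, ARCHIVE)   # download the zip archive
with zipfile.ZipFile(ARCHIVE) as zf:
    zf.extractall(".")                          # unpack the images into the working directory
```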
followed by the path and let me execute this as well.', 'start': 1086.338, 'duration': 9.622}, {'end': 1100.702, 'text': 'So, as you can see here now, these are the images that are contained over here and now.', 'start': 1096.4, 'duration': 4.302}], 'summary': 'Implementing convolutional neural network to detect brain tumor using mri images on google colab.', 'duration': 78.696, 'max_score': 1022.006, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k1022006.jpg'}, {'end': 1176.228, 'src': 'embed', 'start': 1142.786, 'weight': 1, 'content': [{'end': 1146.808, 'text': 'So now first off we are going to work this on a three stages first off.', 'start': 1142.786, 'duration': 4.022}, {'end': 1148.869, 'text': "We'll collect the data which you have already done now.", 'start': 1146.828, 'duration': 2.041}, {'end': 1151.951, 'text': "We'll pre-process it and then we're going to train it to a model.", 'start': 1148.889, 'duration': 3.062}, {'end': 1156.934, 'text': 'and finally, the fourth step would be to test our data, like to see how well our model is working.', 'start': 1151.951, 'duration': 4.983}, {'end': 1161.336, 'text': "and here we are going to train our model on our own, and you'll get to know what I'm speaking in a while.", 'start': 1156.934, 'duration': 4.402}, {'end': 1166.519, 'text': 'So first thing first, I will just import some of the libraries that are commonly used everywhere.', 'start': 1161.616, 'duration': 4.903}, {'end': 1176.228, 'text': "So it would be import numpy as NP, Then I have import matplotlib as plt and then, as I'm dealing here with files,", 'start': 1167.039, 'duration': 9.189}], 'summary': 'The process involves data collection, preprocessing, model training, and testing, followed by the use of common libraries like numpy and matplotlib.', 'duration': 33.442, 'max_score': 1142.786, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k1142786.jpg'}, {'end': 1401.947, 'src': 'embed', 'start': 1372.502, 'weight': 2, 'content': [{'end': 1374.823, 'text': 'So now I want to know how many images I have for that.', 'start': 1372.502, 'duration': 2.321}, {'end': 1381.506, 'text': "I'll just use a simple function called as length and as that is an array I can iterate it over here and let me execute this up.", 'start': 1374.863, 'duration': 6.643}, {'end': 1384.671, 'text': 'And now, once again, if I do items, and you can see,', 'start': 1382.048, 'duration': 2.623}, {'end': 1392.638, 'text': 'the number of images I have in brain tumor is 2513 and whereas for healthy is going to be 2087 images.', 'start': 1384.671, 'duration': 7.967}, {'end': 1394.32, 'text': 'and now, if you want to see what does this list?', 'start': 1392.638, 'duration': 1.682}, {'end': 1401.947, 'text': 'dir does so let me give you an illustration here OS dot, list dir, and let me just give any folder over here.', 'start': 1394.32, 'duration': 7.627}], 'summary': 'Analyzing 2513 brain tumor images and 2087 healthy images using python functions.', 'duration': 29.445, 'max_score': 1372.502, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k1372502.jpg'}, {'end': 1460.236, 'src': 'embed', 'start': 1437.287, 'weight': 3, 'content': [{'end': 1445.03, 'text': 'So the split is going to be in such a manner that 70% of my data would go for the training data set and then 15% of each would go for validation and testing.', 'start': 1437.287, 'duration': 
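The transcript above counts how many images sit in each class folder (2513 brain-tumor scans versus 2087 healthy scans) by calling os.listdir on each directory. A small sketch of that bookkeeping follows; the root and class folder names are assumptions based on the description, not verified paths.

```python
import os

ROOT = "brain_tumor_dataset"            # assumed dataset root after unzipping
counts = {}
for cls in os.listdir(ROOT):            # e.g. ["brain_tumor", "healthy"]
    cls_dir = os.path.join(ROOT, cls)
    if os.path.isdir(cls_dir):
        counts[cls] = len(os.listdir(cls_dir))

print(counts)                           # e.g. {"brain_tumor": 2513, "healthy": 2087}
```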
7.743}, {'end': 1447.531, 'text': "So let me quickly show you what I'm doing here.", 'start': 1445.47, 'duration': 2.061}, {'end': 1448.932, 'text': 'Let me write it as a comment.', 'start': 1447.691, 'duration': 1.241}, {'end': 1452.193, 'text': 'So let me minimize this up and expand a bit.', 'start': 1449.452, 'duration': 2.741}, {'end': 1460.236, 'text': 'So we will split the data such that so we have 70% of the data for training,', 'start': 1452.873, 'duration': 7.363}], 'summary': 'Data split: 70% for training, 15% each for validation and testing', 'duration': 22.949, 'max_score': 1437.287, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k1437287.jpg'}], 'start': 1022.006, 'title': 'Implementing cnn for brain tumor detection', 'summary': 'Covers the implementation of a cnn in google colab to detect brain tumors from mri images, including importing and unzipping the dataset, preprocessing, training, and testing. it also involves data analysis, image counting, creating a dictionary for image count, using os dot list dir to get folder contents, and splitting the data into 70% for training, 15% for validation, and 15% for testing.', 'chapters': [{'end': 1217.464, 'start': 1022.006, 'title': 'Implementing cnn for brain tumor detection', 'summary': 'Explains the implementation of a convolutional neural network using google colab to detect brain tumors from mri images, including importing and unzipping the dataset, preprocessing, training, and testing the model.', 'duration': 195.458, 'highlights': ['The chapter covers the implementation of a convolutional neural network (CNN) using Google colab to detect brain tumors from MRI images.', 'The process includes importing and unzipping the dataset, preprocessing, training, and testing the model.', 'The speaker demonstrates the use of libraries such as numpy, matplotlib, OS, math, and shutil for various tasks related to file handling and computations.']}, {'end': 1484.139, 'start': 1218.125, 'title': 'Data analysis and image splitting', 'summary': 'Discusses counting the number of images in different classes, creating a dictionary to store the count, using os dot list dir to get folder contents, and splitting the data into 70% for training, 15% for validation, and 15% for testing.', 'duration': 266.014, 'highlights': ['Counting the number of images in different classes and creating a dictionary to store the count The speaker counts 2513 images for brain tumor and 2087 images for a healthy class, creating a dictionary to store these counts.', 'Using OS dot list dir to get folder contents and illustrating its functionality The speaker demonstrates the use of OS dot list dir to obtain the list of folders, and explains its functionality using an example.', 'Splitting the data into 70% for training, 15% for validation, and 15% for testing The chapter outlines the split of the data into training, validation, and testing sets, allocating 70%, 15%, and 15% respectively.']}], 'duration': 462.133, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k1022006.jpg', 'highlights': ['The chapter covers the implementation of a convolutional neural network (CNN) using Google colab to detect brain tumors from MRI images.', 'The process includes importing and unzipping the dataset, preprocessing, training, and testing the model.', 'The speaker counts 2513 images for brain tumor and 2087 images for a healthy class, creating a dictionary to store these 
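Here the images are shuffled and divided 70/15/15 into train, validation, and test folders, with NumPy's random selection picking files and shutil moving them into per-class subfolders. The helper below is a hedged reconstruction of that idea rather than the exact function from the video; the folder layout and the use of shutil.copy are assumptions.

```python
import os
import shutil
import numpy as np

def split_class(src_dir, dest_root, cls, train_frac=0.70, val_frac=0.15):
    """Randomly distribute one class's images into train/val/test folders."""
    files = np.array(os.listdir(src_dir))
    np.random.shuffle(files)

    n = len(files)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    splits = {
        "train": files[:n_train],
        "val":   files[n_train:n_train + n_val],
        "test":  files[n_train + n_val:],
    }
    for split, names in splits.items():
        out_dir = os.path.join(dest_root, split, cls)
        os.makedirs(out_dir, exist_ok=True)      # avoid re-creating existing folders
        for name in names:
            shutil.copy(os.path.join(src_dir, name), out_dir)

# Example usage (paths are placeholders):
# split_class("brain_tumor_dataset/brain_tumor", "data", "brain_tumor")
# split_class("brain_tumor_dataset/healthy", "data", "healthy")
```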
counts.', 'Splitting the data into 70% for training, 15% for validation, and 15% for testing.']}, {'end': 2814.922, 'segs': [{'end': 1732.638, 'src': 'embed', 'start': 1706.199, 'weight': 1, 'content': [{'end': 1711.102, 'text': 'So within this 2000 images or whatever the images are present in the respective class.', 'start': 1706.199, 'duration': 4.903}, {'end': 1716.205, 'text': "I'll randomly pick the images just so that I have more randomness and my model can be generalized.", 'start': 1711.462, 'duration': 4.743}, {'end': 1721.669, 'text': 'I obviously have to distribute my data equally between training data set, testing data set and validation.', 'start': 1716.706, 'duration': 4.963}, {'end': 1725.632, 'text': 'And when I mean equally, I mean to say between 70, 15 and 15 percent.', 'start': 1722.049, 'duration': 3.583}, {'end': 1726.733, 'text': "So this is what I'm doing.", 'start': 1725.912, 'duration': 0.821}, {'end': 1728.194, 'text': 'What I was a total size.', 'start': 1727.033, 'duration': 1.161}, {'end': 1732.638, 'text': 'I just take the 70 percent of it and minus it by 2, just in case if the value is missing.', 'start': 1728.395, 'duration': 4.243}], 'summary': 'Randomly selecting 70% for training, 15% for testing, and 15% for validation from 2000 images per class.', 'duration': 26.439, 'max_score': 1706.199, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k1706199.jpg'}, {'end': 2298.335, 'src': 'embed', 'start': 2268.69, 'weight': 0, 'content': [{'end': 2274.715, 'text': 'So let me give a comment here CNN model if you remember again CNN basically stands for convolution neural network.', 'start': 2268.69, 'duration': 6.025}, {'end': 2279.119, 'text': 'This is one of the most popular algorithm that is being used for computer vision.', 'start': 2275.456, 'duration': 3.663}, {'end': 2283.563, 'text': 'So first off I give a variable to my model should be model is equal to sequential.', 'start': 2279.459, 'duration': 4.104}, {'end': 2286.845, 'text': 'So this is something which is mandatory and within the sequential model.', 'start': 2284.023, 'duration': 2.822}, {'end': 2287.846, 'text': 'I have to add layers.', 'start': 2286.866, 'duration': 0.98}, {'end': 2291.429, 'text': 'There are two ways either you can create an array inside this and add layer.', 'start': 2288.307, 'duration': 3.122}, {'end': 2298.335, 'text': 'But what I would prefer to do is just give us indent and have model dot add and inside this add function.', 'start': 2291.99, 'duration': 6.345}], 'summary': 'Cnn model is popular for computer vision, using sequential model with mandatory layers.', 'duration': 29.645, 'max_score': 2268.69, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k2268690.jpg'}, {'end': 2626.903, 'src': 'embed', 'start': 2597.092, 'weight': 3, 'content': [{'end': 2601.999, 'text': "And now finally, we'll try to compile this model dot compile here.", 'start': 2597.092, 'duration': 4.907}, {'end': 2604.142, 'text': "We're going to say what kind of optimizer I'm going to use.", 'start': 2602.039, 'duration': 2.103}, {'end': 2609.148, 'text': "So by default, we have RMS prop that's totally fine, but I would prefer to go with Adam.", 'start': 2604.542, 'duration': 4.606}, {'end': 2614.354, 'text': "Obviously if you have to specify the loss and the type of laws that I'm going to use here.", 'start': 2609.689, 'duration': 4.665}, {'end': 2618.277, 'text': 'It will be binary cross 
entropy because we are doing binary classification.', 'start': 2614.454, 'duration': 3.823}, {'end': 2620.498, 'text': 'What is binary classification?', 'start': 2618.717, 'duration': 1.781}, {'end': 2626.903, 'text': "as the name states, binary, which means to so to class classification, whether it's the image is a cancerous or not,", 'start': 2620.498, 'duration': 6.405}], 'summary': 'Compiling model using adam optimizer and binary cross entropy for binary classification.', 'duration': 29.811, 'max_score': 2597.092, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k2597092.jpg'}, {'end': 2668.082, 'src': 'embed', 'start': 2639.933, 'weight': 2, 'content': [{'end': 2643.338, 'text': 'We can say binary cross entropy binary categorical cross entropy.', 'start': 2639.933, 'duration': 3.405}, {'end': 2648.304, 'text': 'You have to choose the one with this one and not the function and now we have done the optimizer.', 'start': 2643.678, 'duration': 4.626}, {'end': 2650.267, 'text': 'We have done the loss and now we need Matrix.', 'start': 2648.324, 'duration': 1.943}, {'end': 2654.632, 'text': "So it's going to be a Matrix and will provide I want to do it on accuracy.", 'start': 2650.867, 'duration': 3.765}, {'end': 2656.574, 'text': 'and this is done.', 'start': 2655.593, 'duration': 0.981}, {'end': 2658.796, 'text': 'So now our model is ready.', 'start': 2656.894, 'duration': 1.902}, {'end': 2659.916, 'text': 'and now for our model.', 'start': 2658.796, 'duration': 1.12}, {'end': 2665.72, 'text': 'now we have to input the data so that we can train it and then finally test our data for inputting the data.', 'start': 2659.916, 'duration': 5.804}, {'end': 2668.082, 'text': "I'm not going to use anything that I'm going to develop.", 'start': 2665.921, 'duration': 2.161}], 'summary': 'Implemented binary cross entropy for optimizer, achieved accuracy, and prepared model for training and testing.', 'duration': 28.149, 'max_score': 2639.933, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k2639933.jpg'}], 'start': 1484.539, 'title': 'Managing image data for model training', 'summary': 'Explains creating folder structure for organizing mri images, random selection for training, splitting data into training, testing, and validation sets, and building convolutional neural network model with 56 lakhs parameters and training a binary classification model using keras.', 'chapters': [{'end': 1602.963, 'start': 1484.539, 'title': 'Creating folders and managing images', 'summary': 'Explains how to create a folder structure for organizing mri images into healthy and unhealthy categories and how to randomly select images for training to ensure model generalization.', 'duration': 118.424, 'highlights': ['Explaining the process of creating a folder structure for healthy and unhealthy MRI images The speaker details the process of creating a folder structure to organize healthy and unhealthy MRI images, ensuring a systematic approach to data management.', "Implementing a method to randomly select images for training The speaker discusses the use of the numpy function 'NP.random.choice' to randomly select images for training, which is essential for model generalization.", 'Using conditional statements to avoid repeated folder creation The speaker demonstrates the use of conditional statements and OS functions to avoid repeated creation of the same folder, ensuring efficient folder management.']}, {'end': 
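This stretch of the walkthrough assembles the custom CNN as a Keras Sequential model (stacked Conv2D layers with an increasing number of filters, pooling, dropout, a flatten step, and dense layers ending in a single sigmoid unit) and compiles it with the Adam optimizer and binary cross-entropy. The sketch below follows that outline; the exact filter counts, the assumed 224x224 input size, and the dropout rate are illustrative rather than a line-for-line copy of the video's model.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    Conv2D(16, (3, 3), activation="relu", input_shape=(224, 224, 3)),
    Conv2D(36, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    Conv2D(128, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Dropout(0.25),                      # ~25% dropout, as mentioned in the walkthrough
    Flatten(),
    Dense(64, activation="relu"),
    Dropout(0.25),
    Dense(1, activation="sigmoid"),     # binary output: tumor vs healthy
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()                         # prints layer shapes and the trainable parameter count
```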
1974.154, 'start': 1603.283, 'title': 'Data folder creation and management', 'summary': 'Covers the creation of a function for splitting data into training, testing, and validation sets, where 70% of images are randomly selected, with adjustments made to ensure that the data is equally distributed between the sets.', 'duration': 370.871, 'highlights': ['Creation of a function for splitting data into training, testing, and validation sets The chapter covers the creation of a function for splitting data into training, testing, and validation sets, ensuring the data is equally distributed between them.', 'Random selection of 70% of images for training set 70% of images are randomly selected for the training set to ensure model generalization.', 'Adjustments made to ensure equal distribution of data between the sets Mathematical adjustments are made to ensure equal distribution of data between the training, testing, and validation sets.']}, {'end': 2149.091, 'start': 1974.154, 'title': 'Data splitting for model building', 'summary': 'Discusses splitting data into train, validation, and test folders, allocating percentages for each, encountering errors, and successfully transferring images, resulting in 17 and 16 images in the test and train folders respectively.', 'duration': 174.937, 'highlights': ['Successfully transferred images from one folder to the other, resulting in 17 and 16 images in the test and train folders respectively. The number of images in the test and train folders decreased to 17 and 16 respectively, indicating successful image transfer.', 'Encountered errors while splitting data and adjusting percentages, leading to errors in processing the healthy images. Encountered errors while adjusting percentages and processing healthy images, necessitating a rerun of the process.', 'Discussed the process of splitting data into train, validation, and test folders, allocating percentages for each. Discussed the process of splitting data into train, validation, and test folders, and allocating specific percentages for each set.']}, {'end': 2595.592, 'start': 2149.331, 'title': 'Building convolutional neural network model', 'summary': 'Covers building a convolutional neural network (cnn) model using keras for image processing, with details on layer types, parameter values, and reasons for the choices, and the total number of parameters to be trained being 56 lakhs.', 'duration': 446.261, 'highlights': ['Explaining the process of building a CNN model with sequential layers, specifying input shape, adding convolution layers with varying number of filters, kernel size, activation function (relu), and explaining the reason for increasing filter values, and using flatten and dense layers with specific parameters. number of filters, kernel size, total number of parameters to be trained as 56 lakhs', 'Importing necessary libraries from Keras, such as layers, sequential model, and preprocessing for image data, and demonstrating the use of dropout and normalization techniques like batch normalization and global average pooling. use of dropout rate at 25%, demonstrating normalization techniques', 'Execution of the model compilation and displaying the summary of the model, showcasing the input size, information on padding, and the total number of parameters to be trained being 56 lakhs. 
total number of parameters to be trained as 56 lakhs']}, {'end': 2814.922, 'start': 2597.092, 'title': 'Building and training a binary classification model', 'summary': 'Discusses building and training a binary classification model using keras, specifying adam optimizer, binary cross entropy loss, and accuracy matrix, and employing image data generator with data augmentation techniques for pre-processing the images.', 'duration': 217.83, 'highlights': ['The chapter discusses building and training a binary classification model using Keras, specifying Adam optimizer, binary cross entropy loss, and accuracy matrix, and employing image data generator with data augmentation techniques for pre-processing the images. The chapter covers specifying Adam optimizer, binary cross entropy loss, and accuracy matrix for building a binary classification model, along with using image data generator with data augmentation techniques for pre-processing the images.', 'Using Adam optimizer for building the model instead of the default RMS prop. The preference for using Adam optimizer over RMS prop is highlighted for building the model.', 'Explaining binary classification and specifying binary cross entropy as the loss function for the model. The explanation of binary classification and the specification of binary cross entropy as the loss function for the model is discussed.', 'Utilizing image data generator with data augmentation techniques for pre-processing the images. The use of image data generator with data augmentation techniques for pre-processing the images is emphasized.', 'Discussing the use of data augmentation techniques such as zoom range, shear range, rescale, and horizontal flip to increase the dimension and normalize the data. The discussion of data augmentation techniques including zoom range, shear range, rescale, and horizontal flip for increasing the dimension and normalizing the data is detailed.']}], 'duration': 1330.383, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k1484539.jpg', 'highlights': ['Explaining the process of building a CNN model with sequential layers, specifying input shape, adding convolution layers with varying number of filters, kernel size, activation function (relu), and explaining the reason for increasing filter values, and using flatten and dense layers with specific parameters. number of filters, kernel size, total number of parameters to be trained as 56 lakhs', 'Creation of a function for splitting data into training, testing, and validation sets The chapter covers the creation of a function for splitting data into training, testing, and validation sets, ensuring the data is equally distributed between them.', 'The chapter discusses building and training a binary classification model using Keras, specifying Adam optimizer, binary cross entropy loss, and accuracy matrix, and employing image data generator with data augmentation techniques for pre-processing the images. The chapter covers specifying Adam optimizer, binary cross entropy loss, and accuracy matrix for building a binary classification model, along with using image data generator with data augmentation techniques for pre-processing the images.', 'Using Adam optimizer for building the model instead of the default RMS prop. 
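The pre-processing described here relies on Keras's ImageDataGenerator: augmentation (zoom, shear, horizontal flip) plus 1/255 rescaling for the training set, rescaling only for validation and test, and flow_from_directory inferring the binary labels from the folder names. A hedged sketch along those lines follows; the directory paths, target size, and batch size are assumed values.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_aug = ImageDataGenerator(rescale=1.0 / 255,
                               zoom_range=0.2,
                               shear_range=0.2,
                               horizontal_flip=True)
plain = ImageDataGenerator(rescale=1.0 / 255)    # no augmentation for val/test

train_data = train_aug.flow_from_directory("data/train",
                                           target_size=(224, 224),
                                           batch_size=32,
                                           class_mode="binary")
val_data = plain.flow_from_directory("data/val",
                                     target_size=(224, 224),
                                     batch_size=32,
                                     class_mode="binary")
test_data = plain.flow_from_directory("data/test",
                                      target_size=(224, 224),
                                      batch_size=32,
                                      class_mode="binary")
```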
The preference for using Adam optimizer over RMS prop is highlighted for building the model.', 'Random selection of 70% of images for training set 70% of images are randomly selected for the training set to ensure model generalization.']}, {'end': 3267.681, 'segs': [{'end': 2969.973, 'src': 'embed', 'start': 2944.076, 'weight': 2, 'content': [{'end': 2949.5, 'text': 'because when I try to do the validation, I obviously want the data to be as natural as possible.', 'start': 2944.076, 'duration': 5.424}, {'end': 2956.084, 'text': "This is just so that I don't have to deal with model which is works only on our training data set, because when I launch this in a production,", 'start': 2949.9, 'duration': 6.184}, {'end': 2958.045, 'text': "I don't know what type of data I'll be having.", 'start': 2956.084, 'duration': 1.961}, {'end': 2960.347, 'text': 'So to have a more generalized model.', 'start': 2958.466, 'duration': 1.881}, {'end': 2965.75, 'text': 'I would remove all of this, and this is pretty important because, based on this validation, know it calculates the loss.', 'start': 2960.347, 'duration': 5.403}, {'end': 2967.832, 'text': 'So this kind of important step.', 'start': 2966.171, 'duration': 1.661}, {'end': 2969.973, 'text': 'So now everything remains the same.', 'start': 2968.432, 'duration': 1.541}], 'summary': 'Validation ensures a more generalized model for diverse production data.', 'duration': 25.897, 'max_score': 2944.076, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k2944076.jpg'}, {'end': 3028.26, 'src': 'embed', 'start': 2997.205, 'weight': 0, 'content': [{'end': 3000.226, 'text': "So with this data generator, I don't have to explicitly label my data.", 'start': 2997.205, 'duration': 3.021}, {'end': 3001.547, 'text': 'Everything is done by the function.', 'start': 3000.306, 'duration': 1.241}, {'end': 3004.388, 'text': 'So and on top of that, you know, everything is in one place.', 'start': 3001.807, 'duration': 2.581}, {'end': 3011.05, 'text': "So I don't have to worry about regularization or if I've missed on something and finally, let's get our validation data as well.", 'start': 3004.688, 'duration': 6.362}, {'end': 3015.231, 'text': 'So even this has the same number of images as the path as the test path.', 'start': 3011.43, 'duration': 3.801}, {'end': 3022.394, 'text': 'So now that we are done with our data pre-processing and we have also created a model the next stage over here is to train a model.', 'start': 3015.552, 'duration': 6.842}, {'end': 3028.26, 'text': "But before that what we'll do here is we will add some stuff that is called as early stopping,", 'start': 3022.794, 'duration': 5.466}], 'summary': 'Data generator automates labeling, simplifies process, and ensures completeness; validation data aligns with test data in terms of image count; early stopping to be implemented before model training.', 'duration': 31.055, 'max_score': 2997.205, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k2997205.jpg'}, {'end': 3058.571, 'src': 'embed', 'start': 3031.923, 'weight': 3, 'content': [{'end': 3039.111, 'text': "obviously sometimes my results can come up early and I don't want my model to waste the resource by training all the other parameters.", 'start': 3031.923, 'duration': 7.188}, {'end': 3042.414, 'text': 'So for this we have something called as early stopping and then model checkpoint.', 'start': 3039.491, 'duration': 2.923}, {'end': 3053.588, 
'text': "So for this I'll do from Keras dot callbacks import model checkpoint and then I can also have early stopping and now for early stopping.", 'start': 3043.502, 'duration': 10.086}, {'end': 3058.571, 'text': "Let's say yes is early stopping and now I'll create an object of this for that.", 'start': 3054.249, 'duration': 4.322}], 'summary': 'Implementing early stopping and model checkpoint in keras for resource optimization.', 'duration': 26.648, 'max_score': 3031.923, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k3031923.jpg'}, {'end': 3138.344, 'src': 'embed', 'start': 3098.234, 'weight': 1, 'content': [{'end': 3102.757, 'text': 'So this is a stopping and similarly, let me do it for model checkpoint.', 'start': 3098.234, 'duration': 4.523}, {'end': 3106.099, 'text': 'Everything remains the same year just we have to do a couple of changes.', 'start': 3102.937, 'duration': 3.162}, {'end': 3111.263, 'text': "So we'll give MC as model checkpoint and then we'll create an object of it.", 'start': 3106.66, 'duration': 4.603}, {'end': 3115.946, 'text': 'Obviously it has to monitor validation accuracy minimum Delta is totally fine.', 'start': 3111.683, 'duration': 4.263}, {'end': 3119.708, 'text': "We'll have to just remove couple of things that is patience verbose.", 'start': 3116.366, 'duration': 3.342}, {'end': 3123.51, 'text': 'and minimum Delta also would be removing and apart from this.', 'start': 3120.729, 'duration': 2.781}, {'end': 3125.831, 'text': "I'll just provide couple of stuff that is file path.", 'start': 3123.73, 'duration': 2.101}, {'end': 3128.072, 'text': 'This is where my best model is going to be saved.', 'start': 3126.151, 'duration': 1.921}, {'end': 3130.873, 'text': "So I'll say best model dot h5.", 'start': 3128.392, 'duration': 2.481}, {'end': 3135.915, 'text': "It's important that you give the extension because if you don't save it as an extension, it would just be a random file.", 'start': 3130.933, 'duration': 4.982}, {'end': 3138.344, 'text': 'And then we have verbose.', 'start': 3136.684, 'duration': 1.66}], 'summary': 'Configuring model checkpoint with mc, validation accuracy, and best model save location.', 'duration': 40.11, 'max_score': 3098.234, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k3098234.jpg'}, {'end': 3224.084, 'src': 'embed', 'start': 3179.225, 'weight': 5, 'content': [{'end': 3184.366, 'text': 'So that is like the validation data and the validation accuracy with respect to the training and validation.', 'start': 3179.225, 'duration': 5.141}, {'end': 3188.567, 'text': 'So it will be model dot fit transform instead of fit.', 'start': 3184.767, 'duration': 3.8}, {'end': 3190.188, 'text': 'I would just use fit generator.', 'start': 3188.748, 'duration': 1.44}, {'end': 3194.909, 'text': 'So fit generator because we have defined a generator give a generator name.', 'start': 3190.748, 'duration': 4.161}, {'end': 3201.971, 'text': 'So this will be generator is going to be trained data and apart from that we have couple more stuff.', 'start': 3195.543, 'duration': 6.428}, {'end': 3203.714, 'text': 'We can have steps per Epoch.', 'start': 3202.012, 'duration': 1.702}, {'end': 3207.939, 'text': 'This is usually given as a then we have number of Epochs.', 'start': 3204.415, 'duration': 3.524}, {'end': 3216.074, 'text': "I'm mentioning 30 here, but usually we give it as 300 400, but provided that we have huge amount of data as well.", 
'start': 3209.061, 'duration': 7.013}, {'end': 3220.7, 'text': 'but no matter what data you give, we have early stopping if the model things that you know,', 'start': 3216.074, 'duration': 4.626}, {'end': 3224.084, 'text': 'it has trained enough or it has the highest accuracy within that time frame.', 'start': 3220.7, 'duration': 3.384}], 'summary': 'Using fit generator with specified parameters for training data and early stopping for model accuracy.', 'duration': 44.859, 'max_score': 3179.225, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k3179225.jpg'}], 'start': 2814.942, 'title': 'Image data processing and model training', 'summary': 'Discusses image data processing for a model, including image data generators, train/test/validation data creation, and early stopping and model checkpoint implementation. it also covers configuring early stopping and model checkpoint for training a model, mentioning validation accuracy, best model saving, and model training parameters.', 'chapters': [{'end': 3097.494, 'start': 2814.942, 'title': 'Image data processing and model training', 'summary': 'Discusses the process of image data processing for a model, including the use of image data generators, creating train, test, and validation data, and implementing early stopping and model checkpoint for training the model.', 'duration': 282.552, 'highlights': ['The process of creating train, test, and validation data using image data generators is discussed, with 3,009 images for training and 679 images for testing. Creation of train, test, and validation data using image data generators, with specific numbers of images mentioned for training and testing.', 'The use of early stopping and model checkpoint for optimizing model training, with parameters including minimum Delta and patience explained. Explanation of early stopping and model checkpoint, with parameters like minimum Delta and patience described for optimizing model training.', 'The importance of removing certain parameters for validation data to create a more generalized model is emphasized. 
Importance of removing certain parameters for validation data to ensure a more generalized model is highlighted.']}, {'end': 3267.681, 'start': 3098.234, 'title': 'Training model with early stopping and model checkpoint', 'summary': 'Covers configuring early stopping and model checkpoint for training a model, with mentions of validation accuracy, best model saving, and model training parameters such as steps per epoch and number of epochs.', 'duration': 169.447, 'highlights': ['Configuring early stopping and model checkpoint The process involves setting up early stopping and model checkpoint by defining parameters such as monitoring validation accuracy and creating an object for model checkpoint.', "Defining best model saving and file path The best model is saved as 'best_model.h5' to a specified file path, ensuring the appropriate file extension is provided for saving the model.", 'Setting training parameters like steps per Epoch and number of Epochs Parameters such as steps per Epoch and number of Epochs (e.g., 30, 300, or 400) are mentioned, with the consideration of a large amount of data and the use of early stopping if the model achieves the highest accuracy within a certain time frame.', "Training the model using fit generator and validation data The process of training the model involves using fit generator with a specified generator (e.g., 'trained_data') and providing validation data and steps, along with the use of callbacks as an array."]}], 'duration': 452.739, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k2814942.jpg', 'highlights': ['Creation of train, test, and validation data using image data generators, with specific numbers of images mentioned for training and testing.', 'Explanation of early stopping and model checkpoint, with parameters like minimum Delta and patience described for optimizing model training.', 'Importance of removing certain parameters for validation data to ensure a more generalized model is highlighted.', 'Configuring early stopping and model checkpoint by defining parameters such as monitoring validation accuracy and creating an object for model checkpoint.', "The best model is saved as 'best_model.h5' to a specified file path, ensuring the appropriate file extension is provided for saving the model.", 'Parameters such as steps per Epoch and number of Epochs (e.g., 30, 300, or 400) are mentioned, with the consideration of a large amount of data and the use of early stopping if the model achieves the highest accuracy within a certain time frame.', "The process of training the model involves using fit generator with a specified generator (e.g., 'trained_data') and providing validation data and steps, along with the use of callbacks as an array."]}, {'end': 3899.288, 'segs': [{'end': 3296.04, 'src': 'embed', 'start': 3268.261, 'weight': 4, 'content': [{'end': 3274.583, 'text': 'So model dot fit generator basically, you know, it reduces our effort of manually adding up and all of these data.', 'start': 3268.261, 'duration': 6.322}, {'end': 3277.945, 'text': 'So first off it will take the training data from this generator.', 'start': 3275.243, 'duration': 2.702}, {'end': 3280.007, 'text': 'It takes the steps per Epoch.', 'start': 3278.486, 'duration': 1.521}, {'end': 3283.53, 'text': 'Then this is a number of Epochs that is number of times and the verbose over here.', 'start': 3280.047, 'duration': 3.483}, {'end': 3287.513, 'text': "Basically, I'm using verbose to show like whatever the execution 
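Training then wires the generators and callbacks together. The video calls fit_generator with steps_per_epoch, 30 epochs, validation data, and the two callbacks; in current Keras releases model.fit accepts generators directly, so the sketch below uses fit. The step counts are illustrative.

```python
history = model.fit(train_data,
                    steps_per_epoch=8,          # batches drawn per epoch (illustrative)
                    epochs=30,                  # early stopping may end training sooner
                    validation_data=val_data,
                    validation_steps=16,
                    callbacks=[es, mc],
                    verbose=1)
```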
is happening.", 'start': 3283.59, 'duration': 3.923}, {'end': 3288.614, 'text': 'I want that to be displayed.', 'start': 3287.573, 'duration': 1.041}, {'end': 3291.416, 'text': "So that's what is verbose validation data.", 'start': 3288.974, 'duration': 2.442}, {'end': 3296.04, 'text': 'Basically as I mentioned earlier, we are not trying to perform any pre-processing in our data.', 'start': 3291.456, 'duration': 4.584}], 'summary': 'Using model.fit_generator reduces manual effort, takes training data, steps per epoch, number of epochs, and displays execution progress.', 'duration': 27.779, 'max_score': 3268.261, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k3268261.jpg'}, {'end': 3374.941, 'src': 'heatmap', 'start': 3306.074, 'weight': 5, 'content': [{'end': 3307.477, 'text': 'So this is called as data augmentation.', 'start': 3306.074, 'duration': 1.403}, {'end': 3313.09, 'text': 'But now when it comes to test and validation, we are not trying to perform any data augmentation.', 'start': 3308.547, 'duration': 4.543}, {'end': 3313.991, 'text': 'It is just a rescaling.', 'start': 3313.13, 'duration': 0.861}, {'end': 3316.072, 'text': 'So this is what is basically happening here.', 'start': 3314.451, 'duration': 1.621}, {'end': 3321.296, 'text': "The another advantage of doing this kind of function is that I'm trying to import new image.", 'start': 3316.492, 'duration': 4.804}, {'end': 3325.759, 'text': "So let's say I have downloaded this image from Google and I want to see how my model is working.", 'start': 3321.516, 'duration': 4.243}, {'end': 3328.38, 'text': 'I just have to pass my image through this function.', 'start': 3326.179, 'duration': 2.201}, {'end': 3332.883, 'text': 'I would get whatever pre-processed images that and then I can directly pass it to my model.', 'start': 3328.48, 'duration': 4.403}, {'end': 3334.024, 'text': 'So this is what is happening.', 'start': 3333.183, 'duration': 0.841}, {'end': 3338.167, 'text': 'And this is what is validation steps and finally callbacks as I mentioned earlier.', 'start': 3334.484, 'duration': 3.683}, {'end': 3343.57, 'text': "So now that we are model is ready and only thing that is left is to train when I'm training.", 'start': 3338.667, 'duration': 4.903}, {'end': 3346.292, 'text': "I'll just change my runtime type to GPU.", 'start': 3343.871, 'duration': 2.421}, {'end': 3351.736, 'text': 'This is because if I have it in my CPU, it would take a pretty long time to execute it.', 'start': 3346.773, 'duration': 4.963}, {'end': 3354.018, 'text': 'and let me give your run all.', 'start': 3351.736, 'duration': 2.282}, {'end': 3358.981, 'text': 'let me factory reset and then run all so that whatever was there earlier would get cleared up.', 'start': 3354.018, 'duration': 4.963}, {'end': 3365.615, 'text': 'So, as you can see here, our model has started training and then it has also ended.', 'start': 3361.232, 'duration': 4.383}, {'end': 3371.059, 'text': "and you know, as the values didn't change here much, the model just terminated the execution,", 'start': 3365.615, 'duration': 5.444}, {'end': 3374.941, 'text': "but we are having this like the maximum accuracy that I'm getting is 0.65..", 'start': 3371.059, 'duration': 3.882}], 'summary': 'Data augmentation used for training, rescaling for validation, model trained with gpu for faster execution, achieving maximum accuracy of 0.65.', 'duration': 68.867, 'max_score': 3306.074, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k3306074.jpg'}, {'end': 3465.433, 'src': 'embed', 'start': 3399.344, 'weight': 0, 'content': [{'end': 3405.133, 'text': "We are getting here as 67 and then we have 73% let's see what happens to our model accuracy.", 'start': 3399.344, 'duration': 5.789}, {'end': 3412.987, 'text': 'So, as you can see here, our model has been successfully trained, and it trained for like belly box,', 'start': 3407.403, 'duration': 5.584}, {'end': 3416.73, 'text': 'and now the accuracy over here finally does come down to 80%.', 'start': 3412.987, 'duration': 3.743}, {'end': 3422.915, 'text': 'but you know the best accuracy was in this equal, and therefore our model has saved that, like early stopping,', 'start': 3416.73, 'duration': 6.185}, {'end': 3428.219, 'text': 'and model checkpoint has saved that 82%, and this would be saved as a dot h5 file.', 'start': 3422.915, 'duration': 5.304}, {'end': 3429.6, 'text': 'If I show you over here.', 'start': 3428.719, 'duration': 0.881}, {'end': 3438.183, 'text': 'And now, as you can see, now our accuracy is kind of increasing, and earlier we were getting our accuracy of somewhere around 67 and now, finally,', 'start': 3430.18, 'duration': 8.003}, {'end': 3438.984, 'text': 'we have 82%.', 'start': 3438.183, 'duration': 0.801}, {'end': 3443.105, 'text': "or you might be wondering how can I say if my model it's overfitting or not?", 'start': 3438.984, 'duration': 4.121}, {'end': 3445.927, 'text': 'Well, you can look into couple of parameters.', 'start': 3443.466, 'duration': 2.461}, {'end': 3449.648, 'text': 'when you look into this loss, this loss difference between them is not too high.', 'start': 3445.927, 'duration': 3.721}, {'end': 3455.931, 'text': 'It is one thing to look at and the second thing that you can look at is the difference between accuracy and validation accuracy.', 'start': 3450.108, 'duration': 5.823}, {'end': 3460.812, 'text': 'So, as long as the difference between validation accuracy and accuracy is not more than 10,', 'start': 3456.571, 'duration': 4.241}, {'end': 3465.433, 'text': "I would generally consider that the model is not overfitting and it's really working fine.", 'start': 3460.812, 'duration': 4.621}], 'summary': 'Model accuracy increased from 67% to 82%, avoiding overfitting by maintaining a 10% difference between accuracy and validation accuracy.', 'duration': 66.089, 'max_score': 3399.344, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k3399344.jpg'}, {'end': 3813.759, 'src': 'embed', 'start': 3787.302, 'weight': 3, 'content': [{'end': 3790.884, 'text': "And now finally once we have loaded an image, let's do it right away.", 'start': 3787.302, 'duration': 3.582}, {'end': 3791.705, 'text': "What's that in that?", 'start': 3791.064, 'duration': 0.641}, {'end': 3800.29, 'text': "So we have done some pre-processing on test train and Val one data that we haven't touched is from this brain tumor to our original data set.", 'start': 3792.065, 'duration': 8.225}, {'end': 3801.751, 'text': "Let's now do one thing.", 'start': 3800.771, 'duration': 0.98}, {'end': 3804.333, 'text': "Let's now see how well our model predicts.", 'start': 3802.091, 'duration': 2.242}, {'end': 3806.354, 'text': 'I know that this image is of a cancer.', 'start': 3804.913, 'duration': 1.441}, {'end': 3812.718, 'text': "So I'll just copy this and paste it here thing is I know that this image belongs to cancer human and not 
the model.", 'start': 3806.694, 'duration': 6.024}, {'end': 3813.759, 'text': "So let's load.", 'start': 3813.078, 'duration': 0.681}], 'summary': 'Pre-processed brain tumor image for model prediction.', 'duration': 26.457, 'max_score': 3787.302, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k3787302.jpg'}], 'start': 3268.261, 'title': 'Model training and evaluation', 'summary': 'Discusses the use of a generator for data processing and augmentation, focusing on training and validation data, steps per epoch, and data augmentation, resulting in 83% model accuracy and cancer case prediction.', 'chapters': [{'end': 3338.167, 'start': 3268.261, 'title': 'Using generator for model fitting', 'summary': 'Discusses the use of a generator to automate data processing and augmentation, with a focus on training and validation data, steps per epoch, and data augmentation, aiming to reduce manual effort and improve model performance.', 'duration': 69.906, 'highlights': ['The generator automates the process of adding and processing training data, reducing manual effort and improving efficiency.', 'Data augmentation is applied to the training data, but not to the test and validation data, where only rescaling occurs, ensuring consistency and accuracy.', 'Using the generator allows for easy import of new images for model evaluation, simplifying the process and improving flexibility.']}, {'end': 3899.288, 'start': 3338.667, 'title': 'Model training and evaluation', 'summary': 'Demonstrates model training and evaluation using hyperparameter tuning, early stopping, and graphical interpretation, resulting in a model accuracy of 83% and prediction of a given image as a cancer case.', 'duration': 560.621, 'highlights': ['The model accuracy is 83% The model achieved an accuracy of 83% after training and evaluation.', 'Hyperparameter tuning improved the accuracy from 67% to 82% By adjusting the hyperparameters, the accuracy of the model improved from 67% to 82%.', 'Graphical interpretation showed validation accuracy outperforming accuracy, indicating a non-overfitting model The graphical representation demonstrated that the model was not overfitting, with validation accuracy outperforming the accuracy.', 'Demonstrated the process of predicting a given image as a cancer case The transcript outlined the process of using the trained model to predict whether a given image corresponds to a cancer case.']}], 'duration': 631.027, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k3268261.jpg', 'highlights': ['The model accuracy is 83% after training and evaluation.', 'Hyperparameter tuning improved the accuracy from 67% to 82%.', 'Graphical interpretation showed validation accuracy outperforming accuracy, indicating a non-overfitting model.', 'Demonstrated the process of predicting a given image as a cancer case.', 'The generator automates the process of adding and processing training data, reducing manual effort and improving efficiency.', 'Using the generator allows for easy import of new images for model evaluation, simplifying the process and improving flexibility.', 'Data augmentation is applied to the training data, but not to the test and validation data, where only rescaling occurs, ensuring consistency and accuracy.']}, {'end': 4402.129, 'segs': [{'end': 4059.335, 'src': 'embed', 'start': 4028.522, 'weight': 5, 'content': [{'end': 4034.984, 'text': "one thing that I'm kind of worried about is if I have 
to deploy this on a real-time scenario or when I'm trying to deploy this, let's say,", 'start': 4028.522, 'duration': 6.462}, {'end': 4036.344, 'text': 'for an hospital application.', 'start': 4034.984, 'duration': 1.36}, {'end': 4043.267, 'text': 'My model accuracy is just 83% so that means that there are 20% chances that my model can predict something wrong.', 'start': 4036.744, 'duration': 6.523}, {'end': 4047.069, 'text': "So how do I fix this up? Well, let's see how we can do that.", 'start': 4043.807, 'duration': 3.262}, {'end': 4048.53, 'text': 'There are a couple of ways that we can do.', 'start': 4047.089, 'duration': 1.441}, {'end': 4051.911, 'text': "So let's quickly move back to our PPT and see what happens.", 'start': 4049.03, 'duration': 2.881}, {'end': 4059.335, 'text': "So as we saw when we were trying to perform this practical session our model accuracy is coming down to 82, which is good, but it's not great.", 'start': 4052.552, 'duration': 6.783}], 'summary': 'Model accuracy is at 83%, with a 20% chance of wrong predictions, aiming to improve for real-time deployment.', 'duration': 30.813, 'max_score': 4028.522, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k4028522.jpg'}, {'end': 4118.971, 'src': 'embed', 'start': 4089.72, 'weight': 1, 'content': [{'end': 4094.264, 'text': 'hyperparameter tuning is something like we can use grid search CV or random search CV,', 'start': 4089.72, 'duration': 4.544}, {'end': 4101.189, 'text': 'wherein we can change the or tune the different parameters that we have and then finally, we can just change the values.', 'start': 4094.264, 'duration': 6.925}, {'end': 4105.471, 'text': 'and the best way to do hyperparameter tuning for deep learning models is by using Keras tuner.', 'start': 4101.189, 'duration': 4.282}, {'end': 4109.087, 'text': 'The other or more efficient way is by using transfer learning.', 'start': 4106.006, 'duration': 3.081}, {'end': 4111.349, 'text': "So let's see what exactly is transfer learning.", 'start': 4109.508, 'duration': 1.841}, {'end': 4114.17, 'text': 'So transfer learning, as the name states.', 'start': 4111.809, 'duration': 2.361}, {'end': 4118.971, 'text': "you're basically trying to transfer the knowledge that was learned from one model to the other.", 'start': 4114.17, 'duration': 4.801}], 'summary': 'Hyperparameter tuning options include grid search cv, random search cv, keras tuner; transfer learning is an efficient alternative.', 'duration': 29.251, 'max_score': 4089.72, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k4089720.jpg'}, {'end': 4176.147, 'src': 'embed', 'start': 4145.768, 'weight': 2, 'content': [{'end': 4149.652, 'text': 'finally, we came up with something called as transfer learning and some of the state-of-the-art models.', 'start': 4145.768, 'duration': 3.884}, {'end': 4152.715, 'text': 'You can say something like ResNet or Google in it.', 'start': 4150.072, 'duration': 2.643}, {'end': 4158.358, 'text': 'These are trained on images like 1 billion images, and then they have up to thousand classes,', 'start': 4153.075, 'duration': 5.283}, {'end': 4163.921, 'text': 'and for us to develop such thing to take huge amount of resources and very large amount of time.', 'start': 4158.358, 'duration': 5.563}, {'end': 4169.404, 'text': "This is something which we as a normal people can't afford to so this is where transfer learning comes into picture.", 'start': 4164.241, 'duration': 
5.163}, {'end': 4176.147, 'text': "So what happens over here is let's say Institute tries to develop a research project and they try to learn on it.", 'start': 4169.904, 'duration': 6.243}], 'summary': 'Transfer learning with state-of-the-art models like resnet and google, trained on 1 billion images with up to thousand classes, saves time and resources for developing research projects.', 'duration': 30.379, 'max_score': 4145.768, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k4145768.jpg'}, {'end': 4315.357, 'src': 'embed', 'start': 4284.081, 'weight': 4, 'content': [{'end': 4289.884, 'text': 'You see lean it is one of the first CNN model which was developed to give out the best accuracy outside.', 'start': 4284.081, 'duration': 5.803}, {'end': 4293.446, 'text': 'So this model is pretty simple compared to any other ones here.', 'start': 4290.304, 'duration': 3.142}, {'end': 4297.909, 'text': 'We have an input whose size of 32 cross 32, and this has a single channel.', 'start': 4293.486, 'duration': 4.423}, {'end': 4305.714, 'text': 'single channel means the image is usually black and white and then I pass this through a convolution layer, as this was developed in 1980s or so.', 'start': 4297.909, 'duration': 7.805}, {'end': 4307.615, 'text': 'you can see, we are using 5 cross, 5 Matrix.', 'start': 4305.714, 'duration': 1.901}, {'end': 4315.357, 'text': 'but in the current generation would just be using 3 cross, 3 Matrix, and this creates a feature map and then, when I perform sampling,', 'start': 4308.095, 'duration': 7.262}], 'summary': 'The lenet cnn model, developed in the 1980s, uses 5x5 matrix for feature extraction and has an input size of 32x32 with a single channel.', 'duration': 31.276, 'max_score': 4284.081, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k4284081.jpg'}, {'end': 4380.293, 'src': 'embed', 'start': 4343.366, 'weight': 0, 'content': [{'end': 4347.97, 'text': 'What I mean by mobile friendly is that in order to train or in order to use this deep learning models.', 'start': 4343.366, 'duration': 4.604}, {'end': 4354.395, 'text': 'It requires huge amount of compute power, like GPUs and CPUs, and we all know mobile cannot afford that.', 'start': 4348.29, 'duration': 6.105}, {'end': 4359.88, 'text': "so in order to keep it light and simple, and so that we don't have face any latency when we are running a program,", 'start': 4354.395, 'duration': 5.485}, {'end': 4361.501, 'text': 'they came up with mobile net architecture.', 'start': 4359.88, 'duration': 1.621}, {'end': 4368.746, 'text': 'And now moving ahead we are going to implement mobile net architecture into our application and see how we can enhance the accuracy of a model.', 'start': 4361.921, 'duration': 6.825}, {'end': 4372.828, 'text': 'So let me know quickly move to my code editor and show you how we can work through this.', 'start': 4369.026, 'duration': 3.802}, {'end': 4376.911, 'text': "So coming back to our Google collab, we don't have to do much.", 'start': 4373.509, 'duration': 3.402}, {'end': 4380.293, 'text': 'So here if you remember we are trying to do the same thing.', 'start': 4377.371, 'duration': 2.922}], 'summary': 'Implementing mobile net architecture to enhance model accuracy and reduce latency for mobile devices.', 'duration': 36.927, 'max_score': 4343.366, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k4343366.jpg'}], 
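The segments above introduce transfer learning, LeNet, and MobileNet, and announce that a pre-trained MobileNet base will be plugged into the brain-tumor classifier in Google Colab to lift accuracy. Below is a minimal sketch of what such a transfer-learning setup can look like in Keras. It assumes TensorFlow 2.x, a 224x224 input, a binary tumor-vs-healthy task, and illustrative folder and variable names (e.g. 'brain_tumor/train', train_data); the video's own notebook may differ in these details.

```python
# Transfer-learning sketch: frozen MobileNet base + small custom head (assumptions noted above).
import tensorflow as tf
from tensorflow.keras.applications.mobilenet import MobileNet, preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers, Model

IMG_SIZE = (224, 224)  # MobileNet's default input resolution

# Augment only the training images; validation gets the same preprocessing and nothing else.
train_gen = ImageDataGenerator(preprocessing_function=preprocess_input,
                               zoom_range=0.2, shear_range=0.2, horizontal_flip=True)
val_gen = ImageDataGenerator(preprocessing_function=preprocess_input)

train_data = train_gen.flow_from_directory('brain_tumor/train', target_size=IMG_SIZE,
                                           batch_size=32, class_mode='binary')
val_data = val_gen.flow_from_directory('brain_tumor/val', target_size=IMG_SIZE,
                                       batch_size=32, class_mode='binary')

# Pre-trained MobileNet without its ImageNet classifier head; keep its weights frozen.
base = MobileNet(input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base.trainable = False

# Functional-API head: pool the feature maps and add a single sigmoid unit for binary output.
x = layers.GlobalAveragePooling2D()(base.output)
out = layers.Dense(1, activation='sigmoid')(x)
model = Model(inputs=base.input, outputs=out)

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Early stopping and checkpointing, mirroring the callbacks described earlier in the session.
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', min_delta=0.01, patience=3),
    tf.keras.callbacks.ModelCheckpoint('best_model.h5', monitor='val_accuracy',
                                       save_best_only=True),
]

# steps_per_epoch can be omitted in TF2; the generator length is used automatically.
model.fit(train_data, epochs=30, validation_data=val_data, callbacks=callbacks, verbose=1)
```

Freezing the base ('non-trainable layers') is what keeps the ImageNet knowledge intact while only the small sigmoid head is learned, which is why accuracy improves so much faster than training a custom CNN from scratch.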
'start': 3899.308, 'title': 'Improving brain tumor detection model', 'summary': 'The chapter discusses improving a model that detects brain tumors, focusing on the challenges of real-time deployment and on methods such as transfer learning and hyperparameter tuning. It also explains transfer learning and pre-trained CNN architectures, particularly LeNet and MobileNet, emphasizing their structures and their potential for enhancing model accuracy.', 'chapters': [{'end': 4234.187, 'start': 3899.308, 'title': 'Tumor detection model improvement', 'summary': 'The chapter discusses the process of using a model to detect brain tumors, with a focus on the challenges of deploying the model in a real-time scenario and on methods to improve model accuracy, such as transfer learning and hyperparameter tuning.', 'duration': 334.879, 'highlights': ['The model accuracy is 83%, leaving a roughly 20% chance of incorrect predictions when deployed in a real-time scenario, which poses a risk in a healthcare application.', 'The methods to improve model accuracy include transfer learning and hyperparameter tuning, with a focus on using Keras Tuner for deep learning models and on the potential of training models on a public open-source network.', 'The chapter also explains the concept of transfer learning, where knowledge learned by one model is transferred to another, particularly by leveraging pre-trained models such as ResNet or GoogLeNet, trained on the order of a billion images with up to a thousand classes.']}, {'end': 4402.129, 'start': 4234.187, 'title': 'Transfer learning and CNN pre-trained architectures', 'summary': 'Explains transfer learning and pre-trained CNN architectures, focusing on LeNet and MobileNet, highlighting their structures and mobile-friendly features, and indicating their potential for enhancing model accuracy.', 'duration': 167.942, 'highlights': ['LeNet is one of the earliest CNN models, with a simple structure: a single-channel 32x32 input and 5x5 convolution kernels, capable of detecting 10 classes.', 'MobileNet is notable for being mobile-friendly, enabling efficient deep learning models without requiring extensive compute power such as GPUs and CPUs.', '
The chapter emphasizes implementing MobileNet architecture to enhance model accuracy and demonstrates the process in Google Colab.']}], 'duration': 502.821, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k3899308.jpg', 'highlights': ['The chapter emphasizes implementing MobileNet architecture to enhance model accuracy and demonstrates the process in Google Colab.', 'The methods to improve model accuracy include transfer learning and hyperparameter tuning, with a focus on using Keras tuner for deep learning models and the potential of training models on a public open-source network.', 'The chapter also explains the concept of transfer learning, where knowledge learned from one model is transferred to another, particularly leveraging pre-trained models like ResNet or GoogleNet, trained on 1 billion images and thousand classes.', 'MobileNet architecture is notable for being mobile-friendly, enabling efficient deep learning models without extensive compute power like GPUs and CPUs.', 'LeNet architecture has a simple structure with a single-channel input of size 32x32, utilizing a 5x5 matrix for convolution and capable of detecting 10 classes.', 'The model accuracy is 83%, leaving a 20% chance of incorrect predictions when deployed in a real-time scenario, posing a risk in a health field application.']}, {'end': 5420.602, 'segs': [{'end': 4516.267, 'src': 'embed', 'start': 4488.375, 'weight': 4, 'content': [{'end': 4495.781, 'text': "And now once we're done with the layers, let's also take model from Keras not models import last time.", 'start': 4488.375, 'duration': 7.406}, {'end': 4497.382, 'text': 'We had used something called a sequential.', 'start': 4495.801, 'duration': 1.581}, {'end': 4500.985, 'text': "So this time you won't be using sequential you'll be using functional API.", 'start': 4497.882, 'duration': 3.103}, {'end': 4505.048, 'text': "So I'll give you the difference between functional API and the sequential model.", 'start': 4501.425, 'duration': 3.623}, {'end': 4506.975, 'text': "Apart from that, we'll also take load mode.", 'start': 4505.578, 'duration': 1.397}, {'end': 4509.806, 'text': 'and now we also have to pre-process the image.', 'start': 4507.705, 'duration': 2.101}, {'end': 4516.267, 'text': 'So what is this pre-processing basically means is now as you can see here, we have to pre-process the image over here.', 'start': 4510.186, 'duration': 6.081}], 'summary': 'Using functional api instead of sequential for model, and discussing pre-processing images.', 'duration': 27.892, 'max_score': 4488.375, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k4488375.jpg'}, {'end': 4563.764, 'src': 'embed', 'start': 4530.771, 'weight': 0, 'content': [{'end': 4533.231, 'text': "But before that let's kind of import all the libraries.", 'start': 4530.771, 'duration': 2.46}, {'end': 4536.532, 'text': 'So from Keras dot models.', 'start': 4533.791, 'duration': 2.741}, {'end': 4538.806, 'text': "It's going to be applications.", 'start': 4537.745, 'duration': 1.061}, {'end': 4544.029, 'text': 'So from application garage or application package I can import whichever model I want.', 'start': 4539.706, 'duration': 4.323}, {'end': 4553.536, 'text': 'So as you can see I have dense net I have efficient net I have inception v2 and then I have mobile net and various other architectures.', 'start': 4544.369, 'duration': 9.167}, {'end': 4558.239, 'text': "I have here exception net which 
is 16 and the one that I'm going to use over here is mobile net.", 'start': 4553.596, 'duration': 4.643}, {'end': 4563.764, 'text': "So for this particular one, I'm going to import mobile net and then preprocess input.", 'start': 4558.739, 'duration': 5.025}], 'summary': 'Import various models from keras, including dense net, efficient net, inception v2, and mobile net.', 'duration': 32.993, 'max_score': 4530.771, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k4530771.jpg'}, {'end': 4655.553, 'src': 'embed', 'start': 4631.632, 'weight': 1, 'content': [{'end': 4638.238, 'text': "Once again, nothing changes is there that the way the data representation is there only that's the thing that cannot change.", 'start': 4631.632, 'duration': 6.606}, {'end': 4641.281, 'text': "So let's now import our mobile net model here.", 'start': 4638.679, 'duration': 2.602}, {'end': 4643.963, 'text': "And finally, let's now get started by creating our model.", 'start': 4641.741, 'duration': 2.222}, {'end': 4647.507, 'text': 'So to create a model first off we are going to create this functional model.', 'start': 4644.404, 'duration': 3.103}, {'end': 4649.989, 'text': 'I would say base underscore model.', 'start': 4647.707, 'duration': 2.282}, {'end': 4655.553, 'text': 'So this would be equal to mobile net And now we have to obviously provide the input shape here.', 'start': 4650.509, 'duration': 5.044}], 'summary': 'Import mobile net model and create functional model with input shape.', 'duration': 23.921, 'max_score': 4631.632, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k4631632.jpg'}, {'end': 5377.128, 'src': 'embed', 'start': 5346.686, 'weight': 3, 'content': [{'end': 5350.547, 'text': 'which means there is only 2% chance that my model would predict something wrong.', 'start': 5346.686, 'duration': 3.861}, {'end': 5352.447, 'text': "So let's do some more trial here.", 'start': 5350.927, 'duration': 1.52}, {'end': 5354.648, 'text': 'Let me add this to couple more values.', 'start': 5352.947, 'duration': 1.701}, {'end': 5358.994, 'text': "One more image and let's see whether this predicts it to be cancer or not.", 'start': 5355.569, 'duration': 3.425}, {'end': 5362.199, 'text': 'We all know this of a brain tumor and yes, it did predict correct.', 'start': 5359.475, 'duration': 2.724}, {'end': 5368.808, 'text': "Let's take one more image from a healthy side and let me copy the path and then let me add it over here.", 'start': 5362.759, 'duration': 6.049}, {'end': 5377.128, 'text': 'So as you can see MRI images of a healthy brain and this thing would hold true even if I take out input from an external Google source.', 'start': 5369.864, 'duration': 7.264}], 'summary': 'Model has a 2% error rate; successfully predicts brain tumor on mri images.', 'duration': 30.442, 'max_score': 5346.686, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k5346686.jpg'}, {'end': 5405.599, 'src': 'embed', 'start': 5380.63, 'weight': 2, 'content': [{'end': 5386.253, 'text': "and you can see there's a huge amount of difference between creating our own model and having a custom pre-trained model.", 'start': 5380.63, 'duration': 5.623}, {'end': 5394.217, 'text': 'So you can see here our training accuracy has increased to 97% All right guys with this we come to the end of a session.', 'start': 5386.593, 'duration': 7.624}, {'end': 5396.318, 'text': 'I hope you enjoyed 
and learn something new.', 'start': 5394.677, 'duration': 1.641}, {'end': 5399.678, 'text': 'If you have any further queries please do mention them in a comment box below.', 'start': 5396.718, 'duration': 2.96}, {'end': 5402.539, 'text': 'Until next time goodbye and take care.', 'start': 5400.238, 'duration': 2.301}, {'end': 5405.599, 'text': 'I hope you have enjoyed listening to this video.', 'start': 5403.459, 'duration': 2.14}], 'summary': 'Custom pre-trained model increased training accuracy to 97%.', 'duration': 24.969, 'max_score': 5380.63, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k5380630.jpg'}], 'start': 4402.269, 'title': 'Image data pre-processing and model training', 'summary': 'Covers pre-processing input data with image data generator, utilizing the functional api instead of the sequential model, and implementing mobilenet architecture for image classification. it also discusses creating a custom model using mobilenet, model compilation, training with callbacks, and using a pre-trained model for image classification, achieving a significant accuracy increase through hyperparameter tuning and successful brain tumor image prediction.', 'chapters': [{'end': 4631.532, 'start': 4402.269, 'title': 'Pre-processing and model training', 'summary': 'Covers the pre-processing of input data using image data generator, importing a pre-trained model for model training, and utilizing the functional api instead of the sequential model from keras, with a focus on using the mobile net architecture for image classification.', 'duration': 229.263, 'highlights': ['Importing image data generator for pre-processing input data The process involves importing the image data generator from Keras and using it to pre-process the input data, including creating the folders for train, validation, and test data.', 'Explaining the class indices for the train data The trainer explains the class indices for the train data, where 0 represents a brain tumor image and 1 represents a healthy image, providing clarity on the classification.', 'Utilizing a pre-trained model for training The decision to use a pre-trained model for training is made, with a specific focus on the mobile net architecture, highlighting the efficiency and effectiveness of the chosen approach.', 'Switching from sequential to functional API for model building The shift from using the sequential model to the functional API for model building is mentioned, emphasizing the difference between the two approaches and the rationale behind the change.', 'Importing and utilizing the preprocess input function The process of importing the preprocess input function and its usage to standardize the input data for the mobile net architecture is explained, highlighting the importance of this step in the pre-processing phase.']}, {'end': 4946.214, 'start': 4631.632, 'title': 'Creating a custom model with mobilenet', 'summary': 'Covers importing and creating a model using mobilenet, clubbing the base model with a custom model, and compiling and training the model with callbacks for early stopping and model checkpoints.', 'duration': 314.582, 'highlights': ['Importing and creating a model using MobileNet The process involves importing the MobileNet model, setting input shape, including top as false, and ensuring non-trainable layers, followed by clubbing the base model with a custom model.', 'Compiling and training the model with callbacks The model is compiled with a default optimizer and binary 
cross-entropy loss, and trained using fit generator, with callbacks for early stopping and model checkpoints for monitoring validation accuracy and saving the best model.']}, {'end': 5420.602, 'start': 4946.73, 'title': 'Using pre-trained model for image classification', 'summary': 'Discusses the implementation of a pre-trained model for image classification, demonstrating an increase in model accuracy from 56% to 97% through hyperparameter tuning and evaluation, as well as the successful prediction of brain tumor images with a 97% accuracy.', 'duration': 473.872, 'highlights': ['The model accuracy increased from 56% to 97% through hyperparameter tuning and evaluation. The model accuracy started at 56% and increased to 97% through hyperparameter tuning and evaluation.', 'Successful prediction of brain tumor images with 97% accuracy. The model successfully predicted brain tumor images with a 97% accuracy.', 'Demonstrated comparison between creating a custom model and using a pre-trained model, showing a significant difference in training accuracy. A comparison was demonstrated between creating a custom model and using a pre-trained model, showcasing a significant difference in training accuracy, which increased to 97% when using the pre-trained model.']}], 'duration': 1018.333, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/7MceDfpnP8k/pics/7MceDfpnP8k4402269.jpg', 'highlights': ['Utilizing a pre-trained model for training, focusing on the mobile net architecture', 'Importing and creating a model using MobileNet with non-trainable layers', 'Model accuracy increased from 56% to 97% through hyperparameter tuning', 'Successful prediction of brain tumor images with 97% accuracy', 'Switching from sequential to functional API for model building']}], 'highlights': ['Model accuracy increased from 56% to 97% through hyperparameter tuning', 'The model accuracy is 83% after training and evaluation', 'Hyperparameter tuning improved the accuracy from 67% to 82%', 'The chapter emphasizes implementing MobileNet architecture to enhance model accuracy and demonstrates the process in Google Colab', 'The process involves using multiple convolution filters to extract various features from images, such as ears, borders, whiskers, and other important characteristics of objects, which helps in reducing dimensionality and focusing on minute details', 'The addition of a padding layer ensures that all features of an image are captured by maintaining the image size, preventing the convolution filter from ignoring certain features present at the corners or edges of the image', 'The chapter covers the implementation of a convolutional neural network (CNN) using Google colab to detect brain tumors from MRI images', 'Using single layer perceptron and multi-layer perceptron for image processing can lead to overfitting, curse of dimensionality, and variance in object position due to the flattening of images into a single vector of pixel values', 'The chapter discusses building and training a binary classification model using Keras, specifying Adam optimizer, binary cross entropy loss, and accuracy matrix, and employing image data generator with data augmentation techniques for pre-processing the images', 'The process includes importing and unzipping the dataset, preprocessing, training, and testing the model']}
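The closing segments load the checkpointed model and test it on individual MRI images pulled from the original dataset (and from Google). Below is a minimal sketch of that inference step, assuming the MobileNet-based model saved as 'best_model.h5' above and an illustrative file name 'sample_mri.jpg'; the 0 = brain tumor, 1 = healthy mapping follows the class indices read from the training generator in the walkthrough.

```python
# Inference sketch: load the saved best model and classify one new MRI scan.
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.mobilenet import preprocess_input

model = load_model('best_model.h5')

# Resize to the training resolution and apply the same MobileNet preprocessing as during training.
img = image.load_img('sample_mri.jpg', target_size=(224, 224))  # illustrative path
arr = image.img_to_array(img)
arr = preprocess_input(np.expand_dims(arr, axis=0))  # add the batch dimension

prob = float(model.predict(arr)[0][0])
# Sigmoid output near 1 maps to class index 1 ('healthy'); near 0 maps to 'brain tumor'.
label = 'healthy' if prob >= 0.5 else 'brain tumor'
print(f'predicted class: {label} (score={prob:.2f})')
```

The 0.5 threshold is the usual default for a sigmoid output; in a clinical setting it could be tuned on the validation set to trade false negatives against false positives.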