title
Neural Network For Handwritten Digits Classification | Deep Learning Tutorial 7 (Tensorflow2.0)

description
In this video we will build our first neural network in tensorflow and python for handwritten digits classification. We will first build a very simple neural network with only input and output layer. After that we will add a hidden layer and check how the performance of our model changes. 🔖 Hashtags 🔖 #handwrittendigitrecognition #tensorflowtutorial #handwritingrecognition #mnisttensorflowtutorial Do you want to learn technology from me? Check https://codebasics.io/?utm_source=description&utm_medium=yt&utm_campaign=description&utm_id=description for my affordable video courses. Github link for code in this tutorial: https://github.com/codebasics/deep-learning-keras-tf-tutorial/blob/master/1_digits_recognition/digits_recognition_neural_network.ipynb Next video: https://www.youtube.com/watch?v=icZItWxw7AI&list=PLeo1K3hjS3uu7CxAacxVndI4bE_o3BDtO&index=8 Previous video: https://www.youtube.com/watch?v=z-ZR_8BZ1wQ&list=PLeo1K3hjS3uu7CxAacxVndI4bE_o3BDtO&index=6 Deep learning playlist: https://www.youtube.com/playlist?list=PLeo1K3hjS3uu7CxAacxVndI4bE_o3BDtO Prerequisites for this series:    1: Python tutorials (first 16 videos): https://www.youtube.com/playlist?list=PLeo1K3hjS3uv5U-Lmlnucd7gqF-3ehIh0     2: Pandas tutorials(first 8 videos): https://www.youtube.com/playlist?list=PLeo1K3hjS3uuASpe-1LjfG5f14Bnozjwy 3: Machine learning playlist (first 16 videos): https://www.youtube.com/playlist?list=PLeo1K3hjS3uvCeTYTeyfe0-rN5r8zn9rw   🌎 My Website For Video Courses: https://codebasics.io/?utm_source=description&utm_medium=yt&utm_campaign=description&utm_id=description Need help building software or data analytics and AI solutions? My company https://www.atliq.com/ can help. Click on the Contact button on that website. #️⃣ Social Media #️⃣ 🔗 Discord: https://discord.gg/r42Kbuk 📸 Dhaval's Personal Instagram: https://www.instagram.com/dhavalsays/ 📸 Codebasics Instagram: https://www.instagram.com/codebasicshub/ 🔊 Facebook: https://www.facebook.com/codebasicshub 📱 Twitter: https://twitter.com/codebasicshub 📝 Linkedin (Personal): https://www.linkedin.com/in/dhavalsays/ 📝 Linkedin (Codebasics): https://www.linkedin.com/company/codebasics/ 🔗 Patreon: https://www.patreon.com/codebasics?fan_landing=true
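A minimal sketch of the simple input-and-output-layer model described above, assuming TensorFlow 2.x and the MNIST digits dataset bundled with Keras; the sigmoid output activation and the epoch count here are assumptions rather than the notebook's exact settings (see the GitHub link above for the actual code):

from tensorflow import keras

# Load the handwritten digits dataset: 60,000 training and 10,000 test images, each 28x28 pixels.
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()

# Scale pixel values from the 0-255 range to 0-1, then flatten each image into a 784-element vector.
X_train_flat = (X_train / 255.0).reshape(len(X_train), 784)
X_test_flat = (X_test / 255.0).reshape(len(X_test), 784)

# One dense layer: 784 inputs mapped directly onto 10 output neurons (digits 0-9).
model = keras.Sequential([
    keras.layers.Dense(10, input_shape=(784,), activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train_flat, y_train, epochs=5)
model.evaluate(X_test_flat, y_test)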

detail
{'title': 'Neural Network For Handwritten Digits Classification | Deep Learning Tutorial 7 (Tensorflow2.0)', 'heatmap': [{'end': 595.508, 'start': 547.488, 'weight': 0.893}, {'end': 902.408, 'start': 836.02, 'weight': 0.928}, {'end': 1766.629, 'start': 1734.535, 'weight': 0.897}], 'summary': 'The tutorial covers TensorFlow and Python code to recognize handwritten digits using deep learning, including a comparison with an insurance dataset. It discusses the use of dense neural network layers to classify 60,000 training images and 10,000 test images, building a simple neural network whose accuracy rises from 0.47 to about 92% once the pixel values are scaled, evaluating the model with a confusion matrix, and adding a hidden layer that lifts test accuracy from 92% to 97.15%.', 'chapters': [{'end': 221.665, 'segs': [{'end': 73.747, 'src': 'embed', 'start': 24.889, 'weight': 0, 'content': [{'end': 37.642, 'text': "We'll discuss that insurance data set and I will show you how deep neural network will look like or how a neural network with hidden layers will look like for the insurance data set.", 'start': 24.889, 'duration': 12.753}, {'end': 44.51, 'text': 'Then we will cover some theory on the hand written digits data set and then we will jump into coding.', 'start': 38.243, 'duration': 6.267}, {'end': 49.513, 'text': 'I will quickly refresh the concept that we went over in what is neuron video.', 'start': 44.85, 'duration': 4.663}, {'end': 57.858, 'text': 'Here I have an insurance data set where the person age and whether the person has an insurance based on the age is given.', 'start': 49.913, 'duration': 7.945}, {'end': 66.383, 'text': 'And we are doing binary classification basically based on the age we are trying to decide whether person will buy an insurance or not.', 'start': 58.298, 'duration': 8.085}, {'end': 73.747, 'text': 'And for this case we saw that we can have a single neuron or a logistic regression,', 'start': 67.163, 'duration': 6.584}], 'summary': 'Discussing deep neural network for insurance data set and binary classification based on age for insurance purchase.', 'duration': 48.858, 'max_score': 24.889, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k24889.jpg'}, {'end': 180.601, 'src': 'embed', 'start': 149.171, 'weight': 3, 'content': [{'end': 150.592, 'text': 'It will have even more features.', 'start': 149.171, 'duration': 1.421}, {'end': 153.813, 'text': "So let's say you have age, education, income and savings.", 'start': 151.232, 'duration': 2.581}, {'end': 155.353, 'text': 'In this case,', 'start': 154.633, 'duration': 0.72}, {'end': 166.556, 'text': 'you can have a hidden layer in the neural network where what you can do is use age and education to determine the awareness factor of a person.', 'start': 155.353, 'duration': 11.203}, {'end': 171.458, 'text': 'Awareness meaning how much aware a person is to buy the insurance.', 'start': 166.896, 'duration': 4.562}, {'end': 180.601, 'text': "Usually, if the person's age is young, people think that they will not fall ill and they don't buy the insurance.", 'start': 172.218, 'duration': 8.383}], 'summary': 'Neural network can use age and education to determine insurance awareness factor.', 'duration': 31.43, 'max_score': 149.171, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k149171.jpg'}], 'start': 0.169, 'title': 'Tensorflow and neural network 
applications', 'summary': 'Covers writing code in tensorflow and python to recognize hand-written digits using deep learning, including a comparison with a network for an insurance data set. it also explains the use of a neural network for binary classification based on age to determine whether a person will buy insurance, with a focus on the impact of age, education, income, and savings on the decision-making process.', 'chapters': [{'end': 49.513, 'start': 0.169, 'title': 'Tensorflow handwritten digits recognition', 'summary': 'Covers writing code in tensorflow and python to recognize hand-written digits using deep learning, including a comparison with a network for an insurance data set.', 'duration': 49.344, 'highlights': ['We will write code in TensorFlow and Python to recognize hand-written digits using deep learning, following a previous video on a neural network with one neuron.', 'A comparison will be made between a simple neural network for an insurance data set and a deep neural network with hidden layers for the same dataset.', 'The chapter will also provide a theoretical background on the hand-written digits data set before diving into coding.']}, {'end': 221.665, 'start': 49.913, 'title': 'Neural network for insurance classification', 'summary': 'Explains the use of a neural network for binary classification based on age to determine whether a person will buy insurance, with a focus on the impact of age, education, income, and savings on the decision-making process.', 'duration': 171.752, 'highlights': ['The neural network uses a single neuron or logistic regression to determine whether a person will buy insurance based on their age, with a threshold of 0.5 for decision-making.', 'As the complexity of the dataset increases with additional features like education, income, and savings, a hidden layer in the neural network is introduced to consider factors such as awareness and affordability in determining the likelihood of a person buying insurance.', 'The awareness factor, influenced by age and education, impacts the decision to buy insurance, as younger individuals may be less aware of the need for insurance, while lower education levels could lead to lower awareness.', 'The affordability factor, determined by income and savings, also plays a crucial role in the decision-making process, as individuals with limited income or savings may not be able to afford insurance despite being aware of its importance.']}], 'duration': 221.496, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k169.jpg', 'highlights': ['A comparison will be made between a simple neural network for an insurance data set and a deep neural network with hidden layers for the same dataset.', 'The neural network uses a single neuron or logistic regression to determine whether a person will buy insurance based on their age, with a threshold of 0.5 for decision-making.', 'The chapter will also provide a theoretical background on the hand-written digits data set before diving into coding.', 'As the complexity of the dataset increases with additional features like education, income, and savings, a hidden layer in the neural network is introduced to consider factors such as awareness and affordability in determining the likelihood of a person buying insurance.']}, {'end': 495.238, 'segs': [{'end': 251.108, 'src': 'embed', 'start': 222.866, 'weight': 0, 'content': [{'end': 225.889, 'text': 'these are the weights that I have shown here.', 'start': 222.866, 
'duration': 3.023}, {'end': 234.658, 'text': 'actual neural network will look something like this where every neuron is connected with every other neuron in the hidden layer.', 'start': 225.889, 'duration': 8.769}, {'end': 244.048, 'text': 'this is the reason it is called a dense layer of network dense, meaning every neuron is connected with every other neuron in the other layer.', 'start': 234.658, 'duration': 9.39}, {'end': 248.887, 'text': "now let's move on to our digits.", 'start': 245.405, 'duration': 3.482}, {'end': 251.108, 'text': 'uh, recognition or data set.', 'start': 248.887, 'duration': 2.221}], 'summary': 'Neural network with dense layer connects every neuron in hidden layer for digit recognition.', 'duration': 28.242, 'max_score': 222.866, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k222866.jpg'}, {'end': 304.187, 'src': 'embed', 'start': 275.192, 'weight': 1, 'content': [{'end': 282.435, 'text': 'where we can feed an image into the input layer like this and the output layer will have 10 neurons,', 'start': 275.192, 'duration': 7.243}, {'end': 289.136, 'text': 'because we are classifying an image into 10 different, into 10 different classes.', 'start': 282.435, 'duration': 6.701}, {'end': 296.681, 'text': 'basically, we have 0 to 9, which is total 10, and we want to classify an image as one of these classes.', 'start': 289.136, 'duration': 7.545}, {'end': 304.187, 'text': "so if you feed let's say eat this image into your neural network, the output will have this kind of score.", 'start': 296.681, 'duration': 7.506}], 'summary': 'Neural network classifies images into 10 different classes.', 'duration': 28.995, 'max_score': 275.192, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k275192.jpg'}, {'end': 423.254, 'src': 'embed', 'start': 390.281, 'weight': 4, 'content': [{'end': 399.564, 'text': 'So you can actually have those pixels as a two-dimensional array, something like this, where the darker region will have 0.', 'start': 390.281, 'duration': 9.283}, {'end': 401.965, 'text': 'This region is very bright, so it has 240.', 'start': 399.564, 'duration': 2.401}, {'end': 404.346, 'text': 'The maximum value will be 255.', 'start': 401.965, 'duration': 2.381}, {'end': 409.928, 'text': 'And you get a two-dimensional array for representing this image.', 'start': 404.346, 'duration': 5.582}, {'end': 423.254, 'text': 'And then you can supply that two dimensional array and you can basically convert that into one dimensional array so you can flatten the array.', 'start': 410.848, 'duration': 12.406}], 'summary': 'Image pixels can be represented as a 2d array, with values ranging from 0 to 255. 
this array can be flattened.', 'duration': 32.973, 'max_score': 390.281, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k390281.jpg'}, {'end': 495.238, 'src': 'embed', 'start': 462.757, 'weight': 2, 'content': [{'end': 471.024, 'text': 'And these values now can be propagated into this neural network, which has 10 output neurons.', 'start': 462.757, 'duration': 8.267}, {'end': 474.347, 'text': "So this neural network doesn't have any hidden layer.", 'start': 471.165, 'duration': 3.182}, {'end': 476.61, 'text': 'It has just input and the output.', 'start': 474.488, 'duration': 2.122}, {'end': 478.23, 'text': 'layer into it.', 'start': 477.23, 'duration': 1}, {'end': 480.992, 'text': 'now the problem that we are going to solve.', 'start': 478.23, 'duration': 2.762}, {'end': 488.515, 'text': 'we will actually have 28 by 28 grid, okay, and we will flatten it,', 'start': 480.992, 'duration': 7.523}, {'end': 495.238, 'text': 'so it will become 784 neurons in our first layer and then you have the output layer.', 'start': 488.515, 'duration': 6.723}], 'summary': 'Neural network has 10 output neurons, with 784 neurons in the first layer.', 'duration': 32.481, 'max_score': 462.757, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k462757.jpg'}], 'start': 222.866, 'title': 'Neural network for image recognition', 'summary': 'Discusses the use of dense neural network layers to classify handwritten digits into 10 classes with scores ranging from 0% to 100% and explains the process of converting an image into input neurons.', 'chapters': [{'end': 337.111, 'start': 222.866, 'title': 'Neural network for handwritten digits recognition', 'summary': 'Discusses the concept of dense neural network layers and the application of a simple neural network for classifying handwritten digits into 10 different classes with scores ranging from 0% to 100% based on similarity.', 'duration': 114.245, 'highlights': ['The neural network consists of a dense layer where every neuron is connected with every other neuron in the hidden layer.', 'The application involves classifying handwritten digits into 10 different classes with similarity scores, indicating the likelihood of an image belonging to a particular class.', 'The output layer of the neural network has 10 neurons to classify images into 10 different classes.']}, {'end': 495.238, 'start': 337.111, 'title': 'Image input neurons in neural networks', 'summary': 'Explains how to convert an image into input neurons for a neural network, illustrating the process of flattening a 2d array to a 1d array and the resulting number of neurons in the input layer.', 'duration': 158.127, 'highlights': ['The process of converting a 28x28 grid image into 784 input neurons for the first layer of a neural network is explained, demonstrating the flattening of a 2D array to a 1D array.', 'The concept of representing an image as a two-dimensional array with pixels ranging from 0 to 255 is described, where 255 represents white and 0 represents black, providing a clear understanding of image representation in neural networks.', 'The discussion of propagating values from the input layer to the 10 output neurons in the neural network, highlighting the absence of hidden layers in the network architecture and the direct mapping of input to output.']}], 'duration': 272.372, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k222866.jpg', 
'highlights': ['The neural network consists of a dense layer where every neuron is connected with every other neuron in the hidden layer.', 'The output layer of the neural network has 10 neurons to classify images into 10 different classes.', 'The process of converting a 28x28 grid image into 784 input neurons for the first layer of a neural network is explained, demonstrating the flattening of a 2D array to a 1D array.', 'The application involves classifying handwritten digits into 10 different classes with similarity scores, indicating the likelihood of an image belonging to a particular class.', 'The concept of representing an image as a two-dimensional array with pixels ranging from 0 to 255 is described, where 255 represents white and 0 represents black, providing a clear understanding of image representation in neural networks.']}, {'end': 816.54, 'segs': [{'end': 595.508, 'src': 'heatmap', 'start': 495.238, 'weight': 0, 'content': [{'end': 501.48, 'text': "when we are writing code in python and tensorflow, we'll first try to solve it using a simple neural network like this,", 'start': 495.238, 'duration': 6.242}, {'end': 505.582, 'text': "and then we will add a hidden layer and we'll see how the accuracy improves.", 'start': 501.48, 'duration': 4.102}, {'end': 515.727, 'text': 'I ran a Jupyter notebook command and that will launch a notebook like this, where I have imported some necessary modules.', 'start': 507.461, 'duration': 8.266}, {'end': 519.19, 'text': 'We already installed TensorFlow, so TensorFlow I already imported.', 'start': 516.008, 'duration': 3.182}, {'end': 524.494, 'text': 'And from TensorFlow, I also imported Kera, which will give me some convenient API to use.', 'start': 519.711, 'duration': 4.783}, {'end': 530.479, 'text': "We are going to use handwritten digits dataset from Kera's library.", 'start': 525.075, 'duration': 5.404}, {'end': 534.883, 'text': 'And that is something you can load using this particular line here.', 'start': 531.3, 'duration': 3.583}, {'end': 543.606, 'text': 'And what this will do is it will load the train and test digits data set into these variables.', 'start': 535.623, 'duration': 7.983}, {'end': 547.468, 'text': "So let's see how many samples we have.", 'start': 544.107, 'duration': 3.361}, {'end': 551.79, 'text': 'So we have in X train 60, 000 digits images.', 'start': 547.488, 'duration': 4.302}, {'end': 556.652, 'text': 'Similarly, in X test, we have.', 'start': 552.37, 'duration': 4.282}, {'end': 558.686, 'text': '10, 000 images.', 'start': 557.905, 'duration': 0.781}, {'end': 561.508, 'text': 'So this is a pretty good like a big data set.', 'start': 558.726, 'duration': 2.782}, {'end': 569.254, 'text': 'Now, if you look at each individual sample, that sample is a 28 by 28 pixel image.', 'start': 561.808, 'duration': 7.446}, {'end': 575.459, 'text': 'And the weights represented in numbers is a simple two dimensional array like this.', 'start': 569.834, 'duration': 5.625}, {'end': 584.904, 'text': 'So just a two dimensional array, zero means those black points and you know, 253 means 255 is white.', 'start': 577.642, 'duration': 7.262}, {'end': 588.966, 'text': 'So these are between zero and 255 values.', 'start': 585.565, 'duration': 3.401}, {'end': 595.508, 'text': 'If you want to see how it looks really, you can use matplotlib library.', 'start': 589.026, 'duration': 6.482}], 'summary': 'Using tensorflow in python, we analyze a dataset with 60,000 training images and 10,000 test images to improve accuracy by adding a hidden 
layer.', 'duration': 80.221, 'max_score': 495.238, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k495238.jpg'}, {'end': 640.447, 'src': 'embed', 'start': 603.632, 'weight': 5, 'content': [{'end': 612.583, 'text': 'there is a function called let me just remove this, these lines, and if you press B it will create a new cell.', 'start': 603.632, 'duration': 8.951}, {'end': 616.648, 'text': 'and here I am plotting the first training image.', 'start': 612.583, 'duration': 4.065}, {'end': 618.21, 'text': 'so this is obviously five.', 'start': 616.648, 'duration': 1.562}, {'end': 624.932, 'text': 'can look at couple of other images, see the second, one is zero.', 'start': 619.967, 'duration': 4.965}, {'end': 626.213, 'text': 'third, one is four.', 'start': 624.932, 'duration': 1.281}, {'end': 631.318, 'text': 'but you can see these are like clearly handwritten digits.', 'start': 626.213, 'duration': 5.105}, {'end': 640.447, 'text': "and if you look at y train, so y train, let's say y train in.", 'start': 631.318, 'duration': 9.129}], 'summary': 'Creates a new cell and plots the first few training images (5, 0, 4), which are clearly handwritten digits.', 'duration': 36.815, 'max_score': 603.632, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k603632.jpg'}, {'end': 715.626, 'src': 'embed', 'start': 673.879, 'weight': 4, 'content': [{'end': 690.63, 'text': 'because we saw in the presentation that we want to convert this 28 by 28 image into the single dimensional array that will have 784 elements.', 'start': 673.879, 'duration': 16.751}, {'end': 698.516, 'text': 'So how do you flatten it in pandas? There is a function called reshape.', 'start': 690.851, 'duration': 7.665}, {'end': 705.24, 'text': 'So when you have X train, you can call a reshape function on it.', 'start': 699.136, 'duration': 6.104}, {'end': 712.965, 'text': 'in the reshape you can say okay, length of X train.', 'start': 706.761, 'duration': 6.204}, {'end': 714.326, 'text': 'that is your first dimension.', 'start': 712.965, 'duration': 1.361}, {'end': 715.626, 'text': 'so it has two dimension.', 'start': 714.326, 'duration': 1.3}], 'summary': 'Convert the 28x28 image into a 1D array with 784 elements using the reshape function.', 'duration': 41.747, 'max_score': 673.879, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k673879.jpg'}], 'start': 495.238, 'title': 'Building and plotting neural network with tensorflow', 'summary': 'Covers the creation of a neural network in python and tensorflow using a dataset of 60,000 training images and 10,000 test images, each being a 28x28 pixel image. 
it also demonstrates the flattening process and plotting of the first training image using matplotlib library, with the dataset containing handwritten digits from 0 to 9.', 'chapters': [{'end': 575.459, 'start': 495.238, 'title': 'Building neural network with tensorflow', 'summary': 'Covers building a neural network in python and tensorflow, utilizing a handwritten digits dataset from keras, comprising 60,000 training images and 10,000 test images, with each sample being a 28x28 pixel image.', 'duration': 80.221, 'highlights': ['The chapter covers building a neural network in Python and TensorFlow', 'Utilizing a handwritten digits dataset from Keras', 'Comprising 60,000 training images and 10,000 test images', 'Each sample being a 28x28 pixel image']}, {'end': 816.54, 'start': 577.642, 'title': 'Flattening and plotting image data', 'summary': 'Demonstrates the process of flattening a 28x28 image dataset into a 784-dimensional array, and plotting the first training image using matplotlib library, with the dataset containing 60,000 training images and 10,000 test images of handwritten digits from 0 to 9.', 'duration': 238.898, 'highlights': ['The dataset contains 60,000 training images and 10,000 test images of handwritten digits from 0 to 9.', 'The process involves flattening a 28x28 image dataset into a 784-dimensional array using the reshape function in pandas.', 'The first training image is plotted using the matplotlib library.']}], 'duration': 321.302, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k495238.jpg', 'highlights': ['Building a neural network in Python and TensorFlow', 'Utilizing a handwritten digits dataset from Keras', 'Comprising 60,000 training images and 10,000 test images', 'Each sample being a 28x28 pixel image', 'Flattening a 28x28 image dataset into a 784-dimensional array', 'Plotting the first training image using the matplotlib library']}, {'end': 1363.93, 'segs': [{'end': 935.941, 'src': 'heatmap', 'start': 836.02, 'weight': 0, 'content': [{'end': 837.021, 'text': 'It has just two layers.', 'start': 836.02, 'duration': 1.001}, {'end': 838.984, 'text': 'Input layer with 784 elements.', 'start': 837.262, 'duration': 1.722}, {'end': 840.145, 'text': 'Output layer with 10 elements.', 'start': 839.004, 'duration': 1.141}, {'end': 851.45, 'text': 'And the way you create this in TensorFlow and Keras is you use Keras dot.', 'start': 844.443, 'duration': 7.007}, {'end': 856.514, 'text': 'OK, so you use Keras dot sequential.', 'start': 851.59, 'duration': 4.924}, {'end': 862.219, 'text': 'So sequential means I am having a stack of layers in my neural network.', 'start': 857.155, 'duration': 5.064}, {'end': 866.003, 'text': 'And since it is a stack, it will accept.', 'start': 863.721, 'duration': 2.282}, {'end': 869.814, 'text': 'every layer as one element.', 'start': 867.731, 'duration': 2.083}, {'end': 872.497, 'text': 'So the first element here is input.', 'start': 869.834, 'duration': 2.663}, {'end': 879.946, 'text': 'But Keras has this API where you can say Keras.layers.dense.', 'start': 873.318, 'duration': 6.628}, {'end': 884.391, 'text': 'Dense means all the neurons here.', 'start': 880.346, 'duration': 4.045}, {'end': 889.177, 'text': 'In one layer I connected with every other neuron in the second layer.', 'start': 885.794, 'duration': 3.383}, {'end': 890.898, 'text': 'That is why it is called dense.', 'start': 889.617, 'duration': 1.281}, {'end': 894.081, 'text': "Okay So I'm creating a dense layer here.", 
'start': 891.459, 'duration': 2.622}, {'end': 898.104, 'text': 'And See the input shape.', 'start': 895.542, 'duration': 2.562}, {'end': 902.408, 'text': 'Input shape is what? 784.', 'start': 901.287, 'duration': 1.121}, {'end': 908.132, 'text': 'See input is x1, x2, x3 to 784.', 'start': 902.408, 'duration': 5.724}, {'end': 909.133, 'text': "So that's what I have.", 'start': 908.132, 'duration': 1.001}, {'end': 909.374, 'text': '784 here.', 'start': 909.153, 'duration': 0.221}, {'end': 909.754, 'text': 'And The.', 'start': 909.594, 'duration': 0.16}, {'end': 919.594, 'text': 'output shape is 10.', 'start': 916.653, 'duration': 2.941}, {'end': 924.476, 'text': 'so this way you are defining both input and output layer.', 'start': 919.594, 'duration': 4.882}, {'end': 928.938, 'text': 'basically, this is output which has 10 neurons.', 'start': 924.476, 'duration': 4.462}, {'end': 935.001, 'text': 'so 10 neurons are this 0 to 9 and the input is 784 neurons.', 'start': 928.938, 'duration': 6.063}, {'end': 935.941, 'text': "so that's what you're saying.", 'start': 935.001, 'duration': 0.94}], 'summary': 'Neural network has 784 input elements and 10 output elements.', 'duration': 119.401, 'max_score': 836.02, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k836020.jpg'}, {'end': 1026.026, 'src': 'embed', 'start': 992.279, 'weight': 5, 'content': [{'end': 998.001, 'text': 'But optimizers allow you to allow you to train efficiently.', 'start': 992.279, 'duration': 5.722}, {'end': 1009.106, 'text': 'Basically, when the backward propagation and the training is going on, optimizer will allow you to reach to global optima in efficient way.', 'start': 998.041, 'duration': 11.065}, {'end': 1015.717, 'text': 'the second parameter is loss.', 'start': 1010.772, 'duration': 4.945}, {'end': 1026.026, 'text': 'so loss function if you follow my machine learning tutorials especially, let me just show you so if you go to YouTube, by the way,', 'start': 1015.717, 'duration': 10.309}], 'summary': 'Optimizers allow efficient training to reach global optima.', 'duration': 33.747, 'max_score': 992.279, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k992279.jpg'}, {'end': 1099.082, 'src': 'embed', 'start': 1067.229, 'weight': 6, 'content': [{'end': 1069.07, 'text': 'We have like 10 classes in our output.', 'start': 1067.229, 'duration': 1.841}, {'end': 1077.472, 'text': 'And sparse means our output variable, which is y train, is actually an integer number.', 'start': 1069.63, 'duration': 7.842}, {'end': 1084.614, 'text': 'If it is one hot encoded array, you would probably use categorical cross entropy.', 'start': 1078.252, 'duration': 6.362}, {'end': 1093.197, 'text': 'And you can look at the documentation and find different type of loss, see? In tensorflow, there are different types of loss.', 'start': 1085.354, 'duration': 7.843}, {'end': 1099.082, 'text': 'The one that we are using here is this sparse categorical cross entropy.', 'start': 1093.498, 'duration': 5.584}], 'summary': 'The output has 10 classes, and the loss function used is sparse categorical cross entropy.', 'duration': 31.853, 'max_score': 1067.229, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k1067229.jpg'}, {'end': 1325.854, 'src': 'embed', 'start': 1262.483, 'weight': 3, 'content': [{'end': 1272.765, 'text': 'now. 
here in our case, actually, the accuracy came out to be really very low.', 'start': 1262.483, 'duration': 10.282}, {'end': 1281.066, 'text': "so one reason that i'm suspecting that the accuracy here is very low, like 0.99, would be uh, high accuracy.", 'start': 1272.765, 'duration': 8.301}, {'end': 1284.867, 'text': 'basically 1.0 will be like perfect, but we got 47.', 'start': 1281.066, 'duration': 3.801}, {'end': 1290.888, 'text': 'so one thing i am suspecting here is that Our values are not scaled.', 'start': 1284.867, 'duration': 6.021}, {'end': 1293.849, 'text': 'Often in machine learning, you need to scale the values.', 'start': 1291.188, 'duration': 2.661}, {'end': 1297.91, 'text': 'OK, so what I will do is I will try to scale them.', 'start': 1294.229, 'duration': 3.681}, {'end': 1302.951, 'text': 'Now, how do you scale them? The value each individual value is in range zero to 255.', 'start': 1297.95, 'duration': 5.001}, {'end': 1310.492, 'text': 'So if I divide this whole array by 255, it will be scaled from zero to one.', 'start': 1302.951, 'duration': 7.541}, {'end': 1312.393, 'text': 'So let me do that here.', 'start': 1311.053, 'duration': 1.34}, {'end': 1318.034, 'text': 'So after I load my data set.', 'start': 1314.513, 'duration': 3.521}, {'end': 1325.854, 'text': "Or just before doing this, I'm going to now divide it by 255.", 'start': 1319.128, 'duration': 6.726}], 'summary': 'Low accuracy (0.47) suspected due to unscaled values; scaling to the 0-1 range by dividing by 255.', 'duration': 63.371, 'max_score': 1262.483, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k1262483.jpg'}], 'start': 816.54, 'title': 'Neural network development', 'summary': 'Explains the creation of a simple neural network with input and output layers of 784 and 10 elements, training parameters including the use of adam optimizer and sparse categorical cross entropy, and the process of building an accurate neural network resulting in an initial accuracy of 0.47 improving to about 0.92 after scaling input values.', 'chapters': [{'end': 964.76, 'start': 816.54, 'title': 'Creating simple neural network', 'summary': 'Explains the creation of a simple neural network with an input layer of 784 elements and an output layer with 10 elements using tensorflow and keras, including the usage of keras dot sequential and the definition of input and output layers.', 'duration': 148.22, 'highlights': ['Creation of a simple neural network with an input layer of 784 elements and an output layer with 10 elements', 'Usage of Keras dot sequential for defining a stack of layers in the neural network', 'Definition of input and output layers using Keras.layers.dense with specified input and output shapes']}, {'end': 1149.024, 'start': 964.76, 'title': 'Neural network training parameters', 'summary': 'Explains the essential parameters for training a neural network, such as optimizer and loss function, including the rationale behind using adam optimizer and sparse categorical cross entropy, and the impact of these parameters on training efficiency and model optimization.', 'duration': 184.264, 'highlights': ['The optimizer, Adam, allows for efficient training and reaching the global optima during backward propagation and training.', 'The chosen loss function, sparse categorical cross entropy, is suitable for categorical output classes, and it accommodates integer-numbered output variables.', 'The tutorial references various machine learning tutorials on YouTube to explain the 
choice of loss function and encourages further exploration of different types of loss in TensorFlow.']}, {'end': 1363.93, 'start': 1149.404, 'title': 'Building accurate neural network', 'summary': 'Explains the process of building a neural network with a goal to increase accuracy, with an initial low accuracy of 0.47, and the subsequent improvement to about 0.92 after scaling the input values from 0 to 255 to 0 to 1.', 'duration': 214.526, 'highlights': ['The initial accuracy of the neural network was 0.47, which later improved to about 0.92 after scaling the input values from 0 to 255 to 0 to 1.', 'The process of scaling the input values from 0 to 255 to 0 to 1 significantly improved the accuracy of the neural network.', "The chapter emphasizes the importance of scaling values in machine learning and demonstrates the process of scaling input values from 0 to 255 to 0 to 1 to enhance the neural network's accuracy."]}], 'duration': 547.39, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k816540.jpg', 'highlights': ['Creation of a simple neural network with an input layer of 784 elements and an output layer with 10 elements', 'Usage of Keras dot sequential for defining a stack of layers in the neural network', 'Definition of input and output layers using Keras.layers.dense with specified input and output shapes', 'The process of scaling the input values from 0 to 255 to 0 to 1 significantly improved the accuracy of the neural network', 'The initial accuracy of the neural network was 0.47, which later improved to about 0.92 after scaling the input values from 0 to 255 to 0 to 1', 'The optimizer, Adam, allows for efficient training and reaching the global optima during backward propagation and training', 'The chosen loss function, sparse categorical cross entropy, is suitable for categorical output classes, and it accommodates integer-numbered output variables', "The chapter emphasizes the importance of scaling values in machine learning and demonstrates the process of scaling input values from 0 to 255 to 0 to 1 to enhance the neural network's accuracy", 'The tutorial references various machine learning tutorials on YouTube to explain the choice of loss function and encourages further exploration of different types of loss in TensorFlow']}, {'end': 1740.699, 'segs': [{'end': 1413.916, 'src': 'embed', 'start': 1363.95, 'weight': 0, 'content': [{'end': 1369.572, 'text': 'A technique that improves on the accuracy of machine learning model.', 'start': 1363.95, 'duration': 5.622}, {'end': 1376.254, 'text': 'we saw in in our in this particular machine learning tutorial as well, that when you do scaling, it tends to improve the accuracy,', 'start': 1369.572, 'duration': 6.682}, {'end': 1377.815, 'text': 'and we are already seeing that here, see.', 'start': 1376.254, 'duration': 1.561}, {'end': 1384.746, 'text': 'in third or fourth iteration, we got 92% accuracy.', 'start': 1379.562, 'duration': 5.184}, {'end': 1393.614, 'text': 'So now we have 92% accuracy, which means our model is trained in a way that 92% of time, it will make accurate prediction.', 'start': 1385.267, 'duration': 8.347}, {'end': 1404.363, 'text': "Okay, Let's try to evaluate the accuracy on a test dataset, because when it is running training,", 'start': 1394.755, 'duration': 9.608}, {'end': 1407.866, 'text': 'it is actually evaluating accuracy on a training dataset.', 'start': 1404.363, 'duration': 3.503}, {'end': 1413.916, 'text': 'before deploying model to a production, we always evaluate 
the accuracy on a test dataset.', 'start': 1409.415, 'duration': 4.501}], 'summary': 'Improved machine learning model achieved 92% accuracy after third or fourth iteration.', 'duration': 49.966, 'max_score': 1363.95, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k1363950.jpg'}, {'end': 1520.373, 'src': 'embed', 'start': 1486.949, 'weight': 2, 'content': [{'end': 1497.016, 'text': "OK, looks like I need to predict it for all the values and I'll just store it in Y predicted, which will be an array.", 'start': 1486.949, 'duration': 10.067}, {'end': 1501.24, 'text': 'And the first image should be seven.', 'start': 1498.157, 'duration': 3.083}, {'end': 1503.842, 'text': "OK, so let's print seven.", 'start': 1501.62, 'duration': 2.222}, {'end': 1514.529, 'text': 'OK, so the prediction is coming out to be this and it has 10 values because what it is now doing is it is printing those 10 scores like this.', 'start': 1504.122, 'duration': 10.407}, {'end': 1517.611, 'text': '0.29, 0.07, and so on.', 'start': 1515.93, 'duration': 1.681}, {'end': 1520.373, 'text': 'So these 10 scores are printed in this array.', 'start': 1518.071, 'duration': 2.302}], 'summary': 'Predicted 10 scores for the first image, with highest being 0.29.', 'duration': 33.424, 'max_score': 1486.949, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k1486949.jpg'}, {'end': 1740.699, 'src': 'embed', 'start': 1718.864, 'weight': 3, 'content': [{'end': 1730.552, 'text': "Okay Now, what is this exactly? Well, let's print this confusion matrix in some visually appealing way so that we can visualize it better.", 'start': 1718.864, 'duration': 11.688}, {'end': 1732.734, 'text': "And I'm going to use seaborn library.", 'start': 1731.093, 'duration': 1.641}, {'end': 1733.974, 'text': 'You can just look at this code.', 'start': 1732.774, 'duration': 1.2}, {'end': 1739.518, 'text': "Basically, instead of this visualization, I'm having some fancy colorful visualization.", 'start': 1734.535, 'duration': 4.983}, {'end': 1740.699, 'text': "That's all this is.", 'start': 1739.878, 'duration': 0.821}], 'summary': 'Using seaborn library for visually appealing confusion matrix visualization.', 'duration': 21.835, 'max_score': 1718.864, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k1718864.jpg'}], 'start': 1363.95, 'title': 'Improving model accuracy and image prediction', 'summary': 'Discusses improving machine learning model accuracy to 92% through scaling, and using numpy to predict a handwritten digit resulting in a prediction of 7. 
it also covers building a confusion matrix in tensorflow to evaluate prediction performance.', 'chapters': [{'end': 1449.307, 'start': 1363.95, 'title': 'Improving machine learning model accuracy', 'summary': 'Discusses improving the accuracy of a machine learning model through scaling, achieving a 92% accuracy after several iterations and evaluating the accuracy on a test dataset, which also yielded a 92.68% accuracy, indicating a successful model for deployment and sample prediction.', 'duration': 85.357, 'highlights': ['The model achieved 92.68% accuracy when evaluated on a test dataset, indicating a successful model for deployment and prediction.', 'After several iterations, the model achieved 92% accuracy, demonstrating the improvement in accuracy through scaling.', 'The technique of scaling has been shown to improve the accuracy of the machine learning model, as evidenced by the 92% accuracy achieved.', 'Before deploying the model to production, it is essential to evaluate the accuracy on a test dataset, ensuring its reliability for real-world applications.']}, {'end': 1554.1, 'start': 1449.547, 'title': 'Image prediction with numpy', 'summary': 'Illustrates the process of using numpy to predict the content of an image, with the example of a handwritten digit recognition, resulting in a prediction of the digit being 7 and the method involving the use of np.argmax to find the maximum value.', 'duration': 104.553, 'highlights': ['The prediction of the first image is 7, and the process involves flattening the input and using NumPy to store and print the 10 scores, with the maximum score determining the prediction.', "The use of NumPy's np.argmax function to find the maximum value and print its index, resulting in the prediction of the handwritten digit being 7."]}, {'end': 1740.699, 'start': 1554.56, 'title': 'Building confusion matrix in tensorflow', 'summary': 'Covers building a confusion matrix in tensorflow to evaluate the prediction performance, demonstrating how to convert predicted values to class labels and visualize the confusion matrix using seaborn library.', 'duration': 186.139, 'highlights': ['The chapter demonstrates how to convert predicted values to class labels by using list comprehension and np.argmax, resulting in accurate prediction matching with the truth data.', 'The process of building a confusion matrix in TensorFlow is explained, and the importance of visualizing the confusion matrix using Seaborn library is highlighted for better understanding of prediction performance.', "The use of TensorFlow's math module for confusion matrix function and the process of storing the confusion matrix into a variable called cm are explained."]}], 'duration': 376.749, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k1363950.jpg', 'highlights': ['The model achieved 92.68% accuracy on the test dataset, demonstrating successful deployment and prediction.', "The technique of scaling improved the model's accuracy to 92% through several iterations.", 'The prediction of the handwritten digit using NumPy resulted in a prediction of 7.', 'The process of building a confusion matrix in TensorFlow is explained, emphasizing the importance of visualizing it using Seaborn library.', 'Before deploying the model, it is essential to evaluate its accuracy on a test dataset for real-world applications.']}, {'end': 2198.306, 'segs': [{'end': 1909.05, 'src': 'embed', 'start': 1781.758, 'weight': 2, 'content': [{'end': 1783.319, 'text': 'For 
example look at 40 number here.', 'start': 1781.758, 'duration': 1.561}, {'end': 1791.701, 'text': 'So 40 times it was number 2 but our model said it is actually 8.', 'start': 1784.26, 'duration': 7.441}, {'end': 1794.403, 'text': 'So you can see all the errors.', 'start': 1791.701, 'duration': 2.702}, {'end': 1798.366, 'text': 'You can see how those errors are distributed using confusion metrics.', 'start': 1794.423, 'duration': 3.943}, {'end': 1802.588, 'text': "It's a very good tool to evaluate the performance of your model.", 'start': 1798.406, 'duration': 4.182}, {'end': 1814.777, 'text': "Now I'm going to copy paste the same model here and then I will add a hidden layer into this.", 'start': 1804.21, 'duration': 10.567}, {'end': 1823.246, 'text': 'So as you add hidden layer, it generally tends to improve the performance.', 'start': 1816.545, 'duration': 6.701}, {'end': 1827.627, 'text': 'So what I will do here is this.', 'start': 1824.147, 'duration': 3.48}, {'end': 1840.91, 'text': "See, my last layer doesn't need input shape because whatever first layer it is connected, it knows how to figure out the input shape from that.", 'start': 1830.348, 'duration': 10.562}, {'end': 1846.704, 'text': 'okay?. But my first layer input neurons are 784..', 'start': 1840.91, 'duration': 5.794}, {'end': 1850.787, 'text': 'And here I need to specify number of neurons in my hidden layer.', 'start': 1846.704, 'duration': 4.083}, {'end': 1857.81, 'text': 'Now, how exactly you can specify how many neurons you want? Well, it is short of trial and error.', 'start': 1851.687, 'duration': 6.123}, {'end': 1860.652, 'text': 'There is no like fixed thumb rule.', 'start': 1859.031, 'duration': 1.621}, {'end': 1866.775, 'text': "There are some guidelines, but I'm just going to start with some value which is less than input shape.", 'start': 1860.672, 'duration': 6.103}, {'end': 1870.359, 'text': 'So I will start with 100.', 'start': 1867.815, 'duration': 2.544}, {'end': 1875.101, 'text': 'In the activation function, I will use an activation function called ReLU.', 'start': 1870.359, 'duration': 4.742}, {'end': 1879.423, 'text': 'We will look into later videos what ReLU is.', 'start': 1876.422, 'duration': 3.001}, {'end': 1881.224, 'text': 'You guys already know about sigmoid.', 'start': 1879.584, 'duration': 1.64}, {'end': 1887.828, 'text': 'There are other activation functions such as 10H, Leaky, ReLU and so on.', 'start': 1881.745, 'duration': 6.083}, {'end': 1889.508, 'text': 'We will look into those in details.', 'start': 1888.148, 'duration': 1.36}, {'end': 1891.589, 'text': 'Again, do not worry too much about this.', 'start': 1889.608, 'duration': 1.981}, {'end': 1901.337, 'text': 'As I said, my goal of this tutorial is to build a working model and then we will, uh, look into details, like you know,', 'start': 1892.45, 'duration': 8.887}, {'end': 1902.98, 'text': 'pilling the layers of the onion.', 'start': 1901.337, 'duration': 1.643}, {'end': 1909.05, 'text': "we'll pill all those layers in later videos.", 'start': 1902.98, 'duration': 6.07}], 'summary': 'Adding a hidden layer with 100 neurons tends to improve model performance.', 'duration': 127.292, 'max_score': 1781.758, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k1781758.jpg'}, {'end': 1963.95, 'src': 'embed', 'start': 1930.365, 'weight': 0, 'content': [{'end': 1940.371, 'text': "here i'm having just one hidden layer, but you can have more hidden layers as well and you can figure out how 
the performance is going to look like.", 'start': 1930.365, 'duration': 10.006}, {'end': 1947.136, 'text': 'And once the model is trained, of course, we are going to evaluate the performance on test set.', 'start': 1940.931, 'duration': 6.205}, {'end': 1951.68, 'text': 'And test set shows 97 percent.', 'start': 1949.238, 'duration': 2.442}, {'end': 1957.545, 'text': 'This means 97.15 percent is my accuracy.', 'start': 1952.501, 'duration': 5.044}, {'end': 1963.95, 'text': 'When I did not have hidden layer, my accuracy was 92 percent.', 'start': 1958.306, 'duration': 5.644}], 'summary': 'Model with one hidden layer achieves 97.15% accuracy, compared to 92% without hidden layer.', 'duration': 33.585, 'max_score': 1930.365, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k1930365.jpg'}, {'end': 2090.715, 'src': 'embed', 'start': 2051.813, 'weight': 1, 'content': [{'end': 2058.879, 'text': "and here let's say I don't want to create flatten array, I just want to supply X train, Y train.", 'start': 2051.813, 'duration': 7.066}, {'end': 2061.199, 'text': 'how do I do that?', 'start': 2058.879, 'duration': 2.32}, {'end': 2064.362, 'text': 'think about that for a second.', 'start': 2061.199, 'duration': 3.163}, {'end': 2072.946, 'text': 'there is a way called Keras.layers.flatten.', 'start': 2064.362, 'duration': 8.584}, {'end': 2081.37, 'text': 'Yes In that you can say my input shape is what? 28 by 28.', 'start': 2073.826, 'duration': 7.544}, {'end': 2084.612, 'text': "Because that's one image dimension.", 'start': 2081.37, 'duration': 3.242}, {'end': 2086.733, 'text': '28 by 28.', 'start': 2084.632, 'duration': 2.101}, {'end': 2090.715, 'text': "In the second one you don't need to specify input shape because it can figure it out on its own.", 'start': 2086.733, 'duration': 3.982}], 'summary': 'Using keras.layers.flatten, input shape for one dimension is 28 by 28.', 'duration': 38.902, 'max_score': 2051.813, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k2051813.jpg'}, {'end': 2179.015, 'src': 'embed', 'start': 2155.206, 'weight': 6, 'content': [{'end': 2164.911, 'text': 'So I want you to try different losses, different values of metric, different values of optimizer, also different values of epoch.', 'start': 2155.206, 'duration': 9.705}, {'end': 2170.49, 'text': 'And tell me if you can get more accuracy than 97%.', 'start': 2164.931, 'duration': 5.559}, {'end': 2173.212, 'text': 'You can play with some more hidden layers as well.', 'start': 2170.49, 'duration': 2.722}, {'end': 2179.015, 'text': 'Also try to play with different activation functions and tell me if you can get more accuracies.', 'start': 2173.552, 'duration': 5.463}], 'summary': 'Experiment with various parameters to achieve over 97% accuracy, including losses, metrics, optimizers, epochs, hidden layers, and activation functions.', 'duration': 23.809, 'max_score': 2155.206, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k2155206.jpg'}], 'start': 1742.74, 'title': 'Evaluating and improving neural network performance', 'summary': 'Covers evaluating a neural network model using confusion metrics, adding a hidden layer to improve performance, and specifying the number of neurons and activation function, also discussing the impact of adding hidden layers on neural network training time, leading to a 5% increase in accuracy from 92% to 97.15%, and introducing the use of 
keras.layers.flatten to simplify the input process, followed by an exercise to experiment with different optimizers, loss functions, epochs, and hidden layers to achieve higher accuracy.', 'chapters': [{'end': 1909.05, 'start': 1742.74, 'title': 'Neural network model evaluation', 'summary': 'Discusses evaluating a neural network model using confusion metrics, adding a hidden layer to improve performance, and the process of specifying the number of neurons and activation function, with a starting value of 100 neurons and relu activation function.', 'duration': 166.31, 'highlights': ['Adding a hidden layer tends to improve performance.', 'Trial and error method is used to specify the number of neurons for the hidden layer.', 'The ReLU activation function is used for the hidden layer.', "Confusion metrics are used to evaluate the model's performance."]}, {'end': 2198.306, 'start': 1909.05, 'title': 'Neural network performance improvement', 'summary': 'Discusses the impact of adding hidden layers on neural network training time, leading to a 5% increase in accuracy from 92% to 97.15%, and introduces the use of keras.layers.flatten to simplify the input process, followed by an exercise to experiment with different optimizers, loss functions, epochs, and hidden layers to achieve higher accuracy.', 'duration': 289.256, 'highlights': ['The addition of hidden layers in the neural network led to a 5% increase in accuracy from 92% to 97.15%.', "Introduction of Keras.layers.flatten simplifies the input process by eliminating the need to create a flatten array, enhancing the model's simplicity and efficiency.", 'Encourages the audience to experiment with different optimizers, loss functions, epochs, hidden layers, and activation functions to achieve higher accuracies than 97%.']}], 'duration': 455.566, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/iqQgED9vV7k/pics/iqQgED9vV7k1742740.jpg', 'highlights': ['The addition of hidden layers in the neural network led to a 5% increase in accuracy from 92% to 97.15%.', "Introduction of Keras.layers.flatten simplifies the input process by eliminating the need to create a flatten array, enhancing the model's simplicity and efficiency.", 'Adding a hidden layer tends to improve performance.', 'Trial and error method is used to specify the number of neurons for the hidden layer.', 'The ReLU activation function is used for the hidden layer.', "Confusion metrics are used to evaluate the model's performance.", 'Encourages the audience to experiment with different optimizers, loss functions, epochs, hidden layers, and activation functions to achieve higher accuracies than 97%.']}], 'highlights': ['The addition of hidden layers in the neural network led to a 5% increase in accuracy from 92% to 97.15%.', 'The model achieved 92.68% accuracy on the test dataset, demonstrating successful deployment and prediction.', 'The process of scaling the input values from 0 to 255 to 0 to 1 significantly improved the accuracy of the neural network.', 'The initial accuracy of the neural network was 0.47, which later improved to 0.99 after scaling the input values from 0 to 255 to 0 to 1.', "The technique of scaling improved the model's accuracy to 92% through several iterations.", 'The concept of representing an image as a two-dimensional array with pixels ranging from 0 to 255 is described, where 255 represents white and 0 represents black, providing a clear understanding of image representation in neural networks.', 'The process of converting a 28x28 grid 
image into 784 input neurons for the first layer of a neural network is explained, demonstrating the flattening of a 2D array to a 1D array.', 'The neural network consists of a dense layer where every neuron is connected with every other neuron in the hidden layer.', 'The output layer of the neural network has 10 neurons to classify images into 10 different classes.', 'The tutorial covers TensorFlow and Python code to recognize handwritten digits using deep learning, including a comparison with an insurance dataset.']}
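
code sketch
The later chapters described above add a hidden layer, replace the manual reshape with a Flatten layer, and inspect errors with a confusion matrix. Below is a hedged sketch of that improved variant, assuming TensorFlow 2.x, NumPy, seaborn and matplotlib; the 100-neuron ReLU hidden layer follows the transcript, while the output activation, epoch count and plot styling are assumptions rather than the notebook's exact settings:

import numpy as np
import tensorflow as tf
from tensorflow import keras
import seaborn as sn
import matplotlib.pyplot as plt

# Load and scale the data: pixel values go from the 0-255 range to the 0-1 range.
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0

# Flatten inside the model instead of reshaping the arrays by hand, then add
# one hidden layer of 100 ReLU neurons before the 10-neuron output layer.
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(100, activation='relu'),
    keras.layers.Dense(10, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5)
model.evaluate(X_test, y_test)  # roughly 97% test accuracy in the video, versus about 92% without the hidden layer

# Each prediction is a vector of 10 scores; np.argmax picks the most likely digit.
y_predicted = model.predict(X_test)
y_predicted_labels = [np.argmax(scores) for scores in y_predicted]

# Confusion matrix: rows are true digits, columns are predicted digits.
cm = tf.math.confusion_matrix(labels=y_test, predictions=y_predicted_labels)
plt.figure(figsize=(10, 7))
sn.heatmap(cm, annot=True, fmt='d')
plt.xlabel('Predicted')
plt.ylabel('Truth')
plt.show()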