title
Convolutional Neural Networks with TensorFlow - Deep Learning with Neural Networks 13

description
In this tutorial, we cover how to create a Convolutional Neural Network (CNN) model within TensorFlow, using our multilayer perceptron model: https://pythonprogramming.net/tensorflow-neural-network-session-machine-learning-tutorial/ Deep MNIST for experts: https://www.tensorflow.org/versions/r0.10/tutorials/mnist/pros/index.html https://pythonprogramming.net https://twitter.com/sentdex https://www.facebook.com/pythonprogramming.net/ https://plus.google.com/+sentdex

detail
{'title': 'Convolutional Neural Networks with TensorFlow - Deep Learning with Neural Networks 13', 'heatmap': [{'end': 272.441, 'start': 255.794, 'weight': 0.715}, {'end': 1133.122, 'start': 1116.228, 'weight': 1}], 'summary': 'Covers building a convolutional neural network with tensorflow, modifying neural network models, explaining convolution functions, achieving 97.5% accuracy after 10 epochs, and discussing convnet training, dropout application, and their impact on performance.', 'chapters': [{'end': 53.681, 'segs': [{'end': 53.681, 'src': 'embed', 'start': 2.904, 'weight': 0, 'content': [{'end': 9.766, 'text': 'The convolutional neural network with TensorFlow is what we are going to be doing in this tutorial.', 'start': 2.904, 'duration': 6.862}, {'end': 18.568, 'text': 'So to start, we are going to grab that multilayer perceptron code, that basic deep neural network code that we worked on in the beginning.', 'start': 10.566, 'duration': 8.002}, {'end': 21.989, 'text': 'You can go to pythonprogramming.net and search for how the network will run.', 'start': 18.628, 'duration': 3.361}, {'end': 23.389, 'text': "It'll be your first result.", 'start': 22.029, 'duration': 1.36}, {'end': 32.98, 'text': "Scrolling down to the bottom, we're just going to take that code and we're going to copy it. Move this aside, go into tftuts.", 'start': 24.19, 'duration': 8.79}, {'end': 34.962, 'text': 'Oops, opened it twice.', 'start': 34.081, 'duration': 0.881}, {'end': 36.463, 'text': 'Close one.', 'start': 35.983, 'duration': 0.48}, {'end': 41.808, 'text': "And I'm going to New Documents, and we'll call it cnnExample.py.", 'start': 37.184, 'duration': 4.624}, {'end': 46.532, 'text': "I actually don't want it in this text editor.", 'start': 43.089, 'duration': 3.443}, {'end': 49.134, 'text': 'Open in Sublime, paste.', 'start': 47.032, 'duration': 2.102}, {'end': 51.139, 'text': "Here's our code.", 'start': 50.459, 'duration': 0.68}, {'end': 53.681, 'text': "So we're going to make the changes that we need to make.", 'start': 51.139, 'duration': 2.542}], 'summary': 'Creating a convolutional neural network with tensorflow for tutorial.', 'duration': 50.777, 'max_score': 2.904, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk2904.jpg'}], 'start': 2.904, 'title': 'Convolutional neural network with tensorflow', 'summary': 'Covers building a convolutional neural network with tensorflow and provides instructions on modifying a multilayer perceptron code and making necessary changes.', 'chapters': [{'end': 53.681, 'start': 2.904, 'title': 'Convolutional neural network with tensorflow', 'summary': 'Covers building a convolutional neural network with tensorflow, by modifying a multilayer perceptron code, and provides instructions on where to find the code and how to make the necessary changes.', 'duration': 50.777, 'highlights': ['The tutorial focuses on building a convolutional neural network with TensorFlow, by modifying a multilayer perceptron code.', 'Instructions are provided on where to find the code for the multilayer perceptron and how to make the necessary changes.', 'The chapter emphasizes the process of grabbing the multilayer perceptron code from pythonprogramming.net, and making the required modifications.', "It provides guidance on saving the modified code in a new document named 'cnnExample.py' and using Sublime for editing."]}], 'duration': 50.777, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk2904.jpg', 'highlights': ['The tutorial focuses on building a convolutional neural network with TensorFlow, by modifying a multilayer perceptron code.', 'Instructions are provided on where to find the code for the multilayer perceptron and how to make the necessary changes.', 'The chapter emphasizes the process of grabbing the multilayer perceptron code from pythonprogramming.net, and making the required modifications.', "It provides guidance on saving the modified code in a new document named 'cnnExample.py' and using Sublime for editing."]}, {'end': 229.313, 'segs': [{'end': 118.567, 'src': 'embed', 'start': 73.63, 'weight': 0, 'content': [{'end': 80.452, 'text': "We're going to change this to 128 for the same reasons that we did it in the recurrent neural network tutorial.", 'start': 73.63, 'duration': 6.822}, {'end': 84.253, 'text': "And now we're going to mess with this neural network model function.", 'start': 81.232, 'duration': 3.021}, {'end': 89.654, 'text': "And I'm going to just go ahead and delete everything except for two of the dictionaries, because we're going to use dictionaries.", 'start': 84.673, 'duration': 4.981}, {'end': 104.155, 'text': 'Also, just to note, I am going to try to use the same-ish variable names as you could find in the Deep MNIST for Experts tutorial.', 'start': 91.495, 'duration': 12.66}, {'end': 106.057, 'text': "It's not the exact same code.", 'start': 104.496, 'duration': 1.561}, {'end': 110.2, 'text': "They're just scripting this code, basically.", 'start': 106.717, 'duration': 3.483}, {'end': 111.922, 'text': 'There is no model function, as far as I know.', 'start': 110.24, 'duration': 1.682}, {'end': 116.305, 'text': "And they're actually running the iterations a little differently.", 'start': 112.762, 'duration': 3.543}, {'end': 118.567, 'text': 'So our code is actually a little different,', 'start': 117.126, 'duration': 1.441}], 'summary': 'Modifying neural network model with 128 units, using dictionaries, and referencing deep mnist tutorial.', 'duration': 44.937, 'max_score': 73.63, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk73630.jpg'}, {'end': 206.775, 'src': 'embed', 'start': 158.419, 'weight': 2, 'content': [{'end': 164.225, 'text': "And again, it is going to take data, but we'll replace that with X.", 'start': 158.419, 'duration': 5.806}, {'end': 165.786, 'text': 'Again, same reason we were doing it before.', 'start': 164.225, 'duration': 1.561}, {'end': 168.148, 'text': "It's just easier, I think, if we just keep everything as X.", 'start': 165.846, 'duration': 2.302}, {'end': 180.053, 'text': "Now, uh, we are going to, instead of having like layer dictionaries, we're just going to have basically a weights dictionary and a biases dictionary.", 'start': 169.705, 'duration': 10.348}, {'end': 181.814, 'text': 'I think this is a better way to go about doing it.', 'start': 180.073, 'duration': 1.741}, {'end': 190.82, 'text': "And, um, we'll start with the weights part, and this will be w underscore conv one.", 'start': 182.754, 'duration': 8.066}, {'end': 194.143, 'text': 'So this is actually following lowercase.', 'start': 191.381, 'duration': 2.762}, {'end': 195.984, 'text': "Let's see if we're going to do it just right.", 'start': 194.163, 'duration': 1.821}, {'end': 200.187, 'text': 'This is following the same kind of names for variables as this tutorial.', 'start': 196.383, 'duration': 3.804}, 
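
For reference, here is a minimal sketch of the starting point being described, assuming the TF 1.x-era API this series uses (tf.placeholder and the bundled MNIST loader); the variable names mirror the tutorial, but the exact snippet is a reconstruction, not the author's verbatim code:

```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# MNIST with one-hot labels, as in the earlier multilayer perceptron tutorial
mnist = input_data.read_data_sets('/tmp/data/', one_hot=True)

n_classes = 10    # digits 0 through 9
batch_size = 128  # changed to 128, as in the recurrent-net tutorial

# each 28x28 image arrives flattened to 784 values
x = tf.placeholder('float', [None, 784])
y = tf.placeholder('float')
```
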
{'end': 204.312, 'text': "So if you did w underscore conv1, you'll find that, oh yes, look at that.", 'start': 200.227, 'duration': 4.085}, {'end': 206.775, 'text': "So you probably should be able to see what's about to come.", 'start': 204.372, 'duration': 2.403}], 'summary': 'Replacing data with x for simplicity, transitioning to weights and biases dictionaries for better organization.', 'duration': 48.356, 'max_score': 158.419, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk158419.jpg'}], 'start': 53.681, 'title': 'Modifying neural network model', 'summary': 'Delves into the process of altering the neural network model function, eliminating redundant variables, adjusting specific parameters to 128, and utilizing dictionaries for weights and biases. it also suggests comparing with the deep mnist for experts tutorial to comprehend varied tensorflow methodologies.', 'chapters': [{'end': 229.313, 'start': 53.681, 'title': 'Changing neural network model function', 'summary': 'Discusses the process of modifying the neural network model function, removing unused variables, changing some parameters to 128, and using dictionaries for weights and biases, while recommending the comparison with the deep mnist for experts tutorial for understanding different approaches to tensorflow.', 'duration': 175.632, 'highlights': ['The process involves getting rid of unused variables, changing certain parameters to 128, and using dictionaries for weights and biases, while maintaining similar variable names to the Deep MNIST for Experts tutorial.', 'The recommendation is to compare this code with the Deep MNIST for Experts tutorial code for understanding different TensorFlow approaches and nuances in running iterations.', 'The chapter suggests using dictionaries for weights and biases instead of layer dictionaries for a better approach.', "The code encourages the use of 'X' instead of 'data' and emphasizes lowercase variable names for consistency with the tutorial."]}], 'duration': 175.632, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk53681.jpg', 'highlights': ['The process involves getting rid of unused variables, changing certain parameters to 128, and using dictionaries for weights and biases, while maintaining similar variable names to the Deep MNIST for Experts tutorial.', 'The recommendation is to compare this code with the Deep MNIST for Experts tutorial code for understanding different TensorFlow approaches and nuances in running iterations.', 'The chapter suggests using dictionaries for weights and biases instead of layer dictionaries for a better approach.', "The code encourages the use of 'X' instead of 'data' and emphasizes lowercase variable names for consistency with the tutorial."]}, {'end': 655.35, 'segs': [{'end': 254.353, 'src': 'embed', 'start': 229.373, 'weight': 5, 'content': [{'end': 236.478, 'text': "Again, if you're confused about how to build your own neural network models, it's good to see as many examples of people doing it as possible.", 'start': 229.373, 'duration': 7.105}, {'end': 245.648, 'text': "So anyways, moving along, we've got the weights for the first convolutional layer, basically the first convolution.", 'start': 237.443, 'duration': 8.205}, {'end': 248.69, 'text': "That's, as usual, going to be a TF variable.", 'start': 246.148, 'duration': 2.542}, {'end': 250.251, 'text': "It's a TF random normal.", 'start': 249.09, 'duration': 1.161}, {'end': 254.353, 
'text': 'It is not anymore going to be 784 by the number of nodes in HL1.', 'start': 250.471, 'duration': 3.882}], 'summary': 'Learning from examples is helpful for building neural network models.', 'duration': 24.98, 'max_score': 229.373, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk229373.jpg'}, {'end': 290.898, 'src': 'heatmap', 'start': 255.794, 'weight': 0.715, 'content': [{'end': 259.016, 'text': "And instead, it'll be 5, 5, 1, 32.", 'start': 255.794, 'duration': 3.222}, {'end': 262.217, 'text': 'Those are magical black box parameters.', 'start': 259.016, 'duration': 3.201}, {'end': 266.018, 'text': 'Now this will be a 5 by 5 convolution.', 'start': 262.217, 'duration': 3.801}, {'end': 272.441, 'text': "It's going to take one input and it's going to produce 32 features or 32 outputs.", 'start': 266.018, 'duration': 6.423}, {'end': 279.363, 'text': "Now I am going to do... oops. Really, I can't get away with that.", 'start': 272.441, 'duration': 6.922}, {'end': 290.898, 'text': "We'll delete that and I'm just going to copy, come down here, and oh, my goodness, really, all right, and let's see if I can get away with this.", 'start': 279.363, 'duration': 11.535}], 'summary': 'The model will produce 32 features through a 5 by 5 convolution.', 'duration': 35.104, 'max_score': 255.794, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk255794.jpg'}, {'end': 371.301, 'src': 'embed', 'start': 303.807, 'weight': 0, 'content': [{'end': 305.729, 'text': "so we'll just, whoops, we'll just call this out.", 'start': 303.807, 'duration': 1.922}, {'end': 308.272, 'text': 'So the fully connected.', 'start': 307.352, 'duration': 0.92}, {'end': 311.293, 'text': 'I think the tutorial refers to that as the densely connected layer.', 'start': 308.272, 'duration': 3.021}, {'end': 313.533, 'text': 'I think fully connected is the proper term.', 'start': 311.593, 'duration': 1.94}, {'end': 320.275, 'text': "Densely just means it's not sparse, it's dense, but does dense necessarily mean all of them are connected? 
I don't think so.", 'start': 313.593, 'duration': 6.682}, {'end': 322.356, 'text': 'So fully connected I think is the proper term.', 'start': 320.675, 'duration': 1.681}, {'end': 326.597, 'text': 'Next, you have a five by five, again, convolution.', 'start': 322.996, 'duration': 3.601}, {'end': 334.318, 'text': "Instead of one input, it's gonna be 32 inputs, and the output we're gonna say is 64.", 'start': 327.297, 'duration': 7.021}, {'end': 336.919, 'text': "You can tinker with these if you'd like, but that's what we're gonna go with.", 'start': 334.318, 'duration': 2.601}, {'end': 343.119, 'text': 'the fully connected layer is fully connected.', 'start': 338.456, 'duration': 4.663}, {'end': 344.94, 'text': "it's not a convolution anymore.", 'start': 343.119, 'duration': 1.821}, {'end': 350.704, 'text': "so we're gonna say uh, the image at this point we're gonna say is a seven by seven image.", 'start': 344.94, 'duration': 5.764}, {'end': 353.886, 'text': 'remember, we started with 28 by 28 images.', 'start': 350.704, 'duration': 3.182}, {'end': 355.808, 'text': "when you do convolution, it's gonna.", 'start': 353.886, 'duration': 1.922}, {'end': 360.511, 'text': "you're gonna significantly compress those images down to be feature maps.", 'start': 355.808, 'duration': 4.703}, {'end': 360.831, 'text': "they're not.", 'start': 360.511, 'duration': 0.32}, {'end': 362.452, 'text': "it's not even an image anymore.", 'start': 360.831, 'duration': 1.621}, {'end': 369.46, 'text': 'and if you saw a picture of a of what ends up happening, i wonder if i can pull one up really quick.', 'start': 362.452, 'duration': 7.008}, {'end': 371.301, 'text': "I'll see if I can should have had it prepared.", 'start': 369.48, 'duration': 1.821}], 'summary': 'Tutorial on neural network layers and convolution with 32 inputs and 64 outputs.', 'duration': 67.494, 'max_score': 303.807, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk303807.jpg'}, {'end': 466.613, 'src': 'embed', 'start': 431.334, 'weight': 2, 'content': [{'end': 437.697, 'text': 'So, if you recall, our typical are before our fully connected layers were what 784, right?', 'start': 431.334, 'duration': 6.363}, {'end': 438.377, 'text': 'That was the.', 'start': 437.757, 'duration': 0.62}, {'end': 443.9, 'text': "the input was 784 initially, but now it's a seven by seven times 64 features.", 'start': 438.377, 'duration': 5.523}, {'end': 446.281, 'text': "And then we're going to just say, okay, we want 1, 024 nodes there.", 'start': 443.96, 'duration': 2.321}, {'end': 450.443, 'text': 'Then the output is going to be 1, 024 and then number of classes.', 'start': 446.321, 'duration': 4.122}, {'end': 461.709, 'text': 'So it can be any 0 through 9.', 'start': 458.567, 'duration': 3.142}, {'end': 466.613, 'text': 'Now we get to the biases.', 'start': 461.709, 'duration': 4.904}], 'summary': 'Input size changed from 784 to 7x7x64, with 1024 nodes and 10 classes.', 'duration': 35.279, 'max_score': 431.334, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk431334.jpg'}, {'end': 578.327, 'src': 'embed', 'start': 551.713, 'weight': 4, 'content': [{'end': 558.915, 'text': 'we use that nice example and Do it in numpy and just print out at each step of the way to kind of get an idea,', 'start': 551.713, 'duration': 7.202}, {'end': 562.136, 'text': 'because these are kind of like abstract Manipulations.', 'start': 558.915, 'duration': 3.221}, {'end': 563.477, 
'text': "this one's fairly simple, this one.", 'start': 562.136, 'duration': 1.341}, {'end': 570.539, 'text': 'We are simply reshaping a 784 pixel image to a flat one a 28 by 28 image.', 'start': 563.517, 'duration': 7.022}, {'end': 572.72, 'text': "So that one's pretty simple.", 'start': 570.879, 'duration': 1.841}, {'end': 578.327, 'text': "I don't actually I I don't think in this one we're going to be doing anything besides reshaping.", 'start': 572.76, 'duration': 5.567}], 'summary': 'Using numpy, reshaping a 784 pixel image to a 28x28 image.', 'duration': 26.614, 'max_score': 551.713, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk551713.jpg'}], 'start': 229.373, 'title': 'Neural network model and convolution process', 'summary': 'Provides detailed information on building a neural network model, including convolutional and fully connected layers with specific parameters and dimensions, as well as outlining the convolution process, layer dimensions, and image manipulation.', 'chapters': [{'end': 399.458, 'start': 229.373, 'title': 'Neural network model building', 'summary': 'Discusses building a neural network model with detailed information on the convolutional layers, fully connected layers, and the transformation of input images, with specific parameters and dimensions mentioned.', 'duration': 170.085, 'highlights': ['The first convolutional layer will have 32 outputs and a 5x5 convolution, producing feature maps from 28x28 input images, transforming them into 7x7 feature maps.', "The fully connected layer will have 64 outputs, and the tutorial refers to it as the densely connected layer, although 'fully connected' is considered the proper term.", 'The process of convolution significantly compresses the input images into feature maps, leading to a transformation from 28x28 images to 7x7 feature maps.']}, {'end': 655.35, 'start': 399.498, 'title': 'Neural network convolution process', 'summary': 'Outlines the process of convolution in a neural network, including the dimensions and number of nodes in each layer, as well as the reshaping and manipulation of pixel images.', 'duration': 255.852, 'highlights': ['The number of nodes in the fully connected layers has changed from the typical 784 to 7 by 7 times 64, resulting in 1024 nodes in the layer. The number of nodes in the fully connected layers has changed from the typical 784 to 7 by 7 times 64, resulting in 1024 nodes in the layer.', 'The reshaping process involves converting a 784 pixel image to a flat 28 by 28 image. The reshaping process involves converting a 784 pixel image to a flat 28 by 28 image.', 'The chapter discusses the process of convolution and the use of functions from TensorFlow for the first convolution layer. 
'duration': 425.977, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk229373.jpg', 'highlights': ['The first convolutional layer will have 32 outputs and a 5x5 convolution, producing feature maps from 28x28 input images, transforming them into 7x7 feature maps.', "The second convolution will have 32 inputs and 64 outputs, and the tutorial refers to the fully connected layer as the densely connected layer, although 'fully connected' is considered the proper term.", 'The number of inputs feeding the fully connected layer has changed from the typical 784 to 7 by 7 times 64, with 1024 nodes in the layer itself.', 'The process of convolution significantly compresses the input images into feature maps, leading to a transformation from 28x28 images to 7x7 feature maps.', 'The reshaping process involves converting a flat 784 pixel image to a 28 by 28 image.', 'The chapter discusses the process of convolution and the use of functions from TensorFlow for the first convolution layer.']}, {'end': 1194.289, 'segs': [{'end': 792.319, 'src': 'embed', 'start': 726.901, 'weight': 1, 'content': [{'end': 743.048, 'text': 'is going to be 1, 2, 2, 1, and then strides will be 1, 2, 2, 1 again, and then padding again will be the same.', 'start': 726.901, 'duration': 16.147}, {'end': 746.727, 'text': 'So convolution.', 'start': 744.585, 'duration': 2.142}, {'end': 755.514, 'text': 'Recall that, uh, when we did convolution, um, we were going to...', 'start': 746.727, 'duration': 8.787}, {'end': 759.938, 'text': 'we took that image and then we, kind of like, slid a moving window over the pixels.', 'start': 755.514, 'duration': 4.424}, {'end': 764.002, 'text': "And we just didn't want to skip any pixels right at each time.", 'start': 759.958, 'duration': 4.044}, {'end': 773.042, 'text': "And then with pooling, uh, we were doing basically the same thing, only with convolution you're trying to extract features.", 'start': 764.482, 'duration': 8.56}, {'end': 775.324, 'text': "With pooling, you're just trying to simplify things.", 'start': 773.042, 'duration': 2.282}, {'end': 780.008, 'text': 'So in this case, like with strides 1, 1, 1, 1, this is just simply taking...', 'start': 775.324, 'duration': 4.684}, {'end': 789.897, 'text': "um, it's going to take your size and each time it moves, it's just going to move.", 'start': 780.008, 'duration': 9.889}, {'end': 792.319, 'text': "It's just going to take basically one pixel at a time.", 'start': 789.897, 'duration': 2.422}], 'summary': 'Explains convolution and pooling in neural networks, using strides 1, 2, 2, 1 and emphasizing feature extraction and simplification.', 'duration': 65.418, 'max_score': 726.901, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk726901.jpg'}, 
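
Putting the ksize, strides, and padding values just discussed into code, here is a sketch of the two helper functions (tf.nn.conv2d and tf.nn.max_pool are the TF 1.x names): conv2d moves one pixel at a time so no pixels are skipped, while maxpool2d takes 2x2 windows two pixels at a time, halving each spatial dimension (28 -> 14 -> 7 over the two pooling steps):

```python
def conv2d(x, W):
    # slide the 5x5 window one pixel at a time; SAME padding keeps the size
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def maxpool2d(x):
    # take the max over 2x2 windows, moving 2 pixels at a time
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                          padding='SAME')
```
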
layer.", 'start': 989.858, 'duration': 2.522}, {'end': 996.442, 'text': "So FC, that's going to be tf.reshape.", 'start': 992.6, 'duration': 3.842}, {'end': 999.504, 'text': "We're going to reshape that con2.", 'start': 997.523, 'duration': 1.981}, {'end': 1005.328, 'text': 'And the shape will be a negative 1 and then 7 by 7 by 64.', 'start': 1000.765, 'duration': 4.563}, {'end': 1007.249, 'text': "That's what we did up here.", 'start': 1005.328, 'duration': 1.921}, {'end': 1016.675, 'text': "So we're reshaping that.", 'start': 1014.19, 'duration': 2.485}], 'summary': 'Reshaping conv2 to -1 and 7x7x64', 'duration': 39.783, 'max_score': 976.892, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk976892.jpg'}, {'end': 1133.122, 'src': 'heatmap', 'start': 1094.195, 'weight': 0, 'content': [{'end': 1097.296, 'text': 'Now, the output, again, is just going to be tf.matmul.', 'start': 1094.195, 'duration': 3.101}, {'end': 1102.418, 'text': 'And this time, the input is fc, that fully connected layer.', 'start': 1098.096, 'duration': 4.322}, {'end': 1105.5, 'text': 'The weights will be the weights for the out layer.', 'start': 1102.839, 'duration': 2.661}, {'end': 1112.545, 'text': "And then we're going to just do plus the biases for the outlayer.", 'start': 1107.461, 'duration': 5.084}, {'end': 1117.469, 'text': "And that's it.", 'start': 1116.228, 'duration': 1.241}, {'end': 1126.797, 'text': 'So then we take convolutional neural network, copy, paste, and we should be finished.', 'start': 1117.829, 'duration': 8.968}, {'end': 1133.122, 'text': "Let's see what happens.", 'start': 1131.801, 'duration': 1.321}], 'summary': 'Using tf.matmul for fully connected layer with weights and biases, then integrating into cnn for final output.', 'duration': 38.927, 'max_score': 1094.195, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk1094195.jpg'}], 'start': 655.39, 'title': 'Convolutional neural networks', 'summary': 'Explains the basics of defining convolutional functions in a neural network, including the use of strides, padding, and pooling, with emphasis on 2d operations and their impact on image processing. 
'chapters': [{'end': 890.008, 'start': 655.39, 'title': 'Convolutional neural networks basics', 'summary': 'Explains the basics of defining convolutional functions in a neural network, including the use of strides, padding, and pooling, with emphasis on 2d operations and their impact on image processing.', 'duration': 234.618, 'highlights': ['The function defines conv2d and maxpool2d for 2D convolution and pooling, using strides and padding to manipulate data and extract features.', 'The explanation of strides and padding in the context of convolution and pooling, with emphasis on their impact on image processing.', 'Clarification on the purpose of padding in the context of image size and window movement.']}, {'end': 1194.289, 'start': 890.128, 'title': 'Building convolutional neural network', 'summary': 'Explains the process of building a convolutional neural network using tensorflow, including the steps of convolution, pooling, fully connected layer, and output layer, achieving an accuracy of 97.5% after 10 epochs.', 'duration': 304.161, 'highlights': ['The output layer of the convolutional neural network achieved an accuracy of 97.5% after 10 epochs.', 'The process involves steps of convolution, pooling, fully connected layer, and output layer using TensorFlow.', 'The conv2 output is reshaped to -1 by 7 times 7 times 64 to feed the fully connected layer.', 'The convolutional neural network is built by applying convolution and pooling to the input data.']}], 'duration': 538.899, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk655390.jpg', 'highlights': ['The output layer of the convolutional neural network achieved an accuracy of 97.5% after 10 epochs.', 'The function defines conv2d and maxpool2d for 2D convolution and pooling, using strides and padding to manipulate data and extract features.', 'The process involves steps of convolution, pooling, fully connected layer, and output layer using TensorFlow.', 'The conv2 output is reshaped to -1 by 7 times 7 times 64 to feed the fully connected layer.', 'The explanation of strides and padding in the context of convolution and pooling, with emphasis on their impact on image processing.']}, {'end': 1504.871, 'segs': [{'end': 1224.723, 'src': 'embed', 'start': 1194.289, 'weight': 0, 'content': [{'end': 1200.234, 'text': "Actually, I believe that's worse than the, uh, recurrent net that we did.", 'start': 1194.289, 'duration': 5.945}, {'end': 1205.156, 'text': "But it's because the size of the data is not ideal.", 'start': 1201.535, 'duration': 3.621}, {'end': 1211.878, 'text': "So the recurrent net actually can seemingly get by with smaller data sets, at least from what I've found.", 'start': 1205.156, 'duration': 6.722}, {'end': 1215.78, 'text': 'The ConvNet will be superior with a much larger data set.', 'start': 1211.878, 'duration': 3.902}, {'end': 1224.723, 'text': 'The other thing that I do want to just show is, especially with a ConvNet, dropout; this idea of dropout is pretty important.', 'start': 1215.78, 'duration': 8.943}], 'summary': 'Convnet outperforms recurrent net with larger data sets, dropout is important', 'duration': 30.434, 'max_score': 1194.289, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk1194289.jpg'}, 
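
Assembled, the model function built over this chapter comes out roughly as below. The ReLU activations are an assumption on my part (the transcript excerpts don't spell them out), but the conv1 -> conv2 -> reshape -> fully connected -> output flow is exactly what is described:

```python
def convolutional_neural_network(x):
    # flat 784 input -> 4-D image tensor
    x = tf.reshape(x, shape=[-1, 28, 28, 1])

    # first convolution + pooling: 28x28x1 -> 14x14x32
    conv1 = tf.nn.relu(conv2d(x, weights['W_conv1']) + biases['b_conv1'])
    conv1 = maxpool2d(conv1)

    # second convolution + pooling: 14x14x32 -> 7x7x64
    conv2 = tf.nn.relu(conv2d(conv1, weights['W_conv2']) + biases['b_conv2'])
    conv2 = maxpool2d(conv2)

    # flatten the feature maps for the fully connected layer
    fc = tf.reshape(conv2, [-1, 7 * 7 * 64])
    fc = tf.nn.relu(tf.matmul(fc, weights['W_fc']) + biases['b_fc'])

    # output: one score per class
    output = tf.matmul(fc, weights['out']) + biases['out']
    return output
```
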
'text': "so again, even with dropout, you can look at the example here we're going to add ours just to the end anyways, but um yeah.", 'start': 1258.803, 'duration': 8.146}, {'end': 1272.232, 'text': "so after our you know x and y variable definitions, let's go ahead and add a keep rate.", 'start': 1266.949, 'duration': 5.283}, {'end': 1286.237, 'text': "The keep rate will do 0.8, and then keep underscore prob will say is tf.placeholder and it's going to be tf.float32..", 'start': 1273.53, 'duration': 12.707}, {'end': 1293.781, 'text': "Okay, now what I'm going to change is down here.", 'start': 1288.999, 'duration': 4.782}], 'summary': 'Adding a keep rate of 0.8 to the x and y variable definitions.', 'duration': 34.978, 'max_score': 1258.803, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk1258803.jpg'}, {'end': 1453.268, 'src': 'embed', 'start': 1425.817, 'weight': 1, 'content': [{'end': 1430.14, 'text': "But again, in a much larger data set, dropout is going to help because they're going to.", 'start': 1425.817, 'duration': 4.323}, {'end': 1431.38, 'text': 'And in fact, we even did worse.', 'start': 1430.24, 'duration': 1.14}, {'end': 1433.001, 'text': "Let's run it one more time.", 'start': 1431.42, 'duration': 1.581}, {'end': 1437.564, 'text': 'Dropout is kind of like like not everything is exactly the same.', 'start': 1434.182, 'duration': 3.382}, {'end': 1441.546, 'text': 'And sometimes like maybe one neuron is given way too much weight or something like that.', 'start': 1437.604, 'duration': 3.942}, {'end': 1447.587, 'text': 'So these things can happen, so dropout can kind of help fight those kind of local Maximo kind of things.', 'start': 1442.466, 'duration': 5.121}, {'end': 1453.268, 'text': "But anyway, once we're done with this, we'll be done with this tutorial.", 'start': 1449.127, 'duration': 4.141}], 'summary': 'Dropout helps in a larger data set to avoid local maximo, improving performance.', 'duration': 27.451, 'max_score': 1425.817, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk1425817.jpg'}, {'end': 1490.631, 'src': 'embed', 'start': 1465.53, 'weight': 4, 'content': [{'end': 1471.191, 'text': 'But in all reality, on image data, generally the convolutional neural network, it still did worse.', 'start': 1465.53, 'duration': 5.661}, {'end': 1472.618, 'text': 'Under 97.', 'start': 1472.378, 'duration': 0.24}, {'end': 1476.961, 'text': 'The convolutional neural network is the champion.', 'start': 1472.618, 'duration': 4.343}, {'end': 1478.823, 'text': "You're just going to need a bigger data set.", 'start': 1477.241, 'duration': 1.582}, {'end': 1484.146, 'text': "And in this case, I think we're using 50, 000 images and 10, 000 tests, if I recall right.", 'start': 1478.923, 'duration': 5.223}, {'end': 1485.107, 'text': 'Something like that.', 'start': 1484.547, 'duration': 0.56}, {'end': 1486.228, 'text': "It's not a huge number.", 'start': 1485.187, 'duration': 1.041}, {'end': 1490.631, 'text': "And in reality, you'd want a very large number to work with.", 'start': 1486.548, 'duration': 4.083}], 'summary': 'The convolutional neural network performed under 97% on a dataset of 50,000 images and 10,000 tests, indicating the need for a larger dataset.', 'duration': 25.101, 'max_score': 1465.53, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk1465530.jpg'}], 'start': 1194.289, 'title': 'Convnet training and 
dropout application', 'summary': 'Discusses the significance of data size in training convnet, the use of dropout to enhance performance, and the application of dropout in a tensorflow model with an 80% keep rate, resulting in increased error for a small dataset but potentially beneficial for larger datasets.', 'chapters': [{'end': 1321.297, 'start': 1194.289, 'title': 'Neural network training with convnet', 'summary': 'Discusses the importance of data size in training convnet and the use of dropout to improve performance, with convnet being superior with larger datasets and dropout being a popular technique to mimic missing neurons.', 'duration': 127.008, 'highlights': ['The ConvNet is superior with a much larger data set, compared to the recurrent net which can seemingly get by with smaller data sets.', 'The idea of dropout is important, as it mimics missing neurons and is popular with the ConvNet.', 'Adding dropout with a keep rate of 0.8 and keep_prob as tf.placeholder tf.float32 is recommended for neural networks.']}, {'end': 1504.871, 'start': 1321.737, 'title': 'Applying dropout in tensorflow', 'summary': 'Discusses the application of dropout in a tensorflow model, with an 80% keep rate, resulting in increased error for a small dataset, but potentially beneficial for larger datasets, such as image data.', 'duration': 183.134, 'highlights': ['Dropout with 80% keep rate resulted in increased error for a small dataset. The model showed significantly more error than the original model.', 'Dropout may be beneficial for larger datasets, such as image data. In a larger dataset, dropout can help combat local maximum issues and potentially improve model performance.', 'Convolutional neural network outperformed recurrent net for image data with 97% accuracy. The convolutional neural network achieved over 97% accuracy, indicating its superiority for image data.']}], 'duration': 310.582, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/mynJtLhhcXk/pics/mynJtLhhcXk1194289.jpg', 'highlights': ['ConvNet is superior with a much larger data set, compared to the recurrent net.', 'Dropout with 80% keep rate resulted in increased error for a small dataset.', 'Adding dropout with a keep rate of 0.8 and keep_prob as tf.placeholder tf.float32 is recommended for neural networks.', 'Dropout may be beneficial for larger datasets, such as image data.', 'Convolutional neural network outperformed recurrent net for image data with 97% accuracy.']}], 'highlights': ['The output layer of the convolutional neural network achieved an accuracy of 97.5% after 10 epochs.', 'The tutorial focuses on building a convolutional neural network with TensorFlow, by modifying a multilayer perceptron code.', 'The first convolutional layer will have 32 outputs and a 5x5 convolution, producing feature maps from 28x28 input images, transforming them into 7x7 feature maps.', 'ConvNet is superior with a much larger data set, compared to the recurrent net.', 'The process involves steps of convolution, pooling, fully connected layer, and output layer using TensorFlow.']}
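
Finally, a sketch of the dropout change described in the last chapter, assuming tf.nn.dropout from the TF 1.x API. The keep rate of 0.8 means roughly 20% of the fully connected layer's activations are zeroed on each training step; defining keep_prob as a placeholder, as the transcript does, would also let you feed 1.0 at test time to disable dropout:

```python
keep_rate = 0.8
keep_prob = tf.placeholder(tf.float32)

# inside the model function, right after the fully connected layer:
fc = tf.nn.dropout(fc, keep_rate)  # or tf.nn.dropout(fc, keep_prob)
```
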