title
Deep Learning with Python, TensorFlow, and Keras tutorial
description
An updated deep learning introduction using Python, TensorFlow, and Keras.
Text-tutorial and notes: https://pythonprogramming.net/introduction-deep-learning-python-tensorflow-keras/
TensorFlow Docs: https://www.tensorflow.org/api_docs/python/
Keras Docs: https://keras.io/layers/about-keras-layers/
Discord: https://discord.gg/sentdex
detail
{'title': 'Deep Learning with Python, TensorFlow, and Keras tutorial', 'heatmap': [{'end': 384.318, 'start': 340.888, 'weight': 0.769}, {'end': 497.362, 'start': 480.424, 'weight': 0.839}, {'end': 581.765, 'start': 554.256, 'weight': 0.846}, {'end': 1098.563, 'start': 1072.783, 'weight': 1}, {'end': 1125.67, 'start': 1104.227, 'weight': 0.705}], 'summary': 'Learn about the updates and simplicity of deep learning in python with tensorflow and keras, understand neural networks, activation functions, building and normalizing a neural network model, training a model with 97% accuracy, and get insights into keras, tensorflow, and ml updates with upcoming tutorial plans and challenges discussed.', 'chapters': [{'end': 64.632, 'segs': [{'end': 64.632, 'src': 'embed', 'start': 1.42, 'weight': 0, 'content': [{'end': 10.528, 'text': 'What is going on everybody and welcome to a much needed update to the deep learning in Python with TensorFlow as well as now Keras tutorial.', 'start': 1.42, 'duration': 9.108}, {'end': 18.816, 'text': "It's been a bit over two years since I did just a basic deep learning video in Python and since then a lot has changed.", 'start': 10.589, 'duration': 8.227}, {'end': 25.886, 'text': "It's now much simpler and To both like get into it, but then also just to work with deep learning models.", 'start': 18.876, 'duration': 7.01}, {'end': 32.631, 'text': 'So if you want to get into the more nitty-gritty details in the lower level tensorflow code, You can still check out the older video.', 'start': 25.886, 'duration': 6.745}, {'end': 37.335, 'text': "But if you're just trying to get started with deep learning and that's not necessary anymore,", 'start': 32.631, 'duration': 4.704}, {'end': 44.7, 'text': "because we have these nice high-level api's like keras that sit on top of tensorflow and Make it super, super simple.", 'start': 37.335, 'duration': 7.365}, {'end': 46.265, 'text': 'So Anybody can follow along.', 'start': 44.76, 'duration': 1.505}, 
{'end': 48.887, 'text': "If you don't know anything about deep learning, that's totally fine.", 'start': 46.305, 'duration': 2.582}, {'end': 52.188, 'text': "We're going to do a quick run-through of neural networks.", 'start': 48.907, 'duration': 3.281}, {'end': 57.03, 'text': "Also, you're going to want Python 3.6, at least as of the release of this video.", 'start': 52.368, 'duration': 4.662}, {'end': 60.812, 'text': 'Hopefully, very, very soon, TensorFlow will be supported on 3.7.', 'start': 57.41, 'duration': 3.402}, {'end': 63.593, 'text': 'In later versions of Python, it just happens to be the case.', 'start': 60.812, 'duration': 2.781}, {'end': 64.632, 'text': "Right now, it isn't.", 'start': 63.733, 'duration': 0.899}], 'summary': 'Update on deep learning in python with tensorflow and keras, now simpler and more accessible, suitable for beginners and python 3.6 users.', 'duration': 63.212, 'max_score': 1.42, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k1420.jpg'}], 'start': 1.42, 'title': 'Deep learning in python with tensorflow & keras update', 'summary': 'Covers the updates and simplicity of deep learning in python with tensorflow and keras, emphasizing the transition to high-level apis like keras, making it easier for beginners, and the requirement of python 3.6 for tensorflow support.', 'chapters': [{'end': 64.632, 'start': 1.42, 'title': 'Deep learning in python: tensorflow & keras update', 'summary': 'Discusses the updates and simplicity of deep learning in python with tensorflow and keras, highlighting the transition to high-level apis like keras, making it easier for beginners, and the requirement of python 3.6 for tensorflow support.', 'duration': 63.212, 'highlights': ['The transition to high-level APIs like Keras makes it super simple for anybody to get started with deep learning.', 'The chapter emphasizes the need for Python 3.6 for TensorFlow support, with hopeful future support for 
3.7.', 'The update discusses the changes and simplification in deep learning with TensorFlow and Keras over the past two years.']}], 'duration': 63.212, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k1420.jpg', 'highlights': ['The transition to high-level APIs like Keras makes it super simple for anybody to get started with deep learning.', 'The update discusses the changes and simplification in deep learning with TensorFlow and Keras over the past two years.', 'The chapter emphasizes the need for Python 3.6 for TensorFlow support, with hopeful future support for 3.7.']}, {'end': 497.362, 'segs': [{'end': 117.48, 'src': 'embed', 'start': 84.882, 'weight': 2, 'content': [{'end': 89.948, 'text': "So I'm gonna show you guys what I think is just the kind of bare essential to understanding what's going on.", 'start': 84.882, 'duration': 5.066}, {'end': 96.773, 'text': "So a neural network is going to consist of the following things like what's the goal of any machine learning model?", 'start': 90.649, 'duration': 6.124}, {'end': 107.076, 'text': "well, you've got some inputs, let's say x1, x2, x3, and you're just trying to map those inputs to some sort of output.", 'start': 96.773, 'duration': 10.303}, {'end': 117.48, 'text': "let's say that output is determining whether something is a dog or that something is a cat, so the output is going to be two neurons in this case.", 'start': 107.076, 'duration': 10.404}], 'summary': 'Neural network maps inputs to outputs, e.g. 
classifying dogs or cats.', 'duration': 32.598, 'max_score': 84.882, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k84882.jpg'}, {'end': 189.176, 'src': 'embed', 'start': 162.537, 'weight': 0, 'content': [{'end': 169.142, 'text': 'The problem is, if you did this, the relationship between X1 and dog or cat and all the other ones?', 'start': 162.537, 'duration': 6.605}, {'end': 172.344, 'text': 'those relationships would only be linear relationships.', 'start': 169.142, 'duration': 3.202}, {'end': 179.028, 'text': "So if we're looking to map nonlinear relationships, which is probably going to be the case in a complex question, you need to have two or more.", 'start': 172.824, 'duration': 6.204}, {'end': 181.75, 'text': 'One hidden layer means you just have a neural network.', 'start': 179.549, 'duration': 2.201}, {'end': 185.873, 'text': 'Two or more hidden layers means you have a quote-unquote deep neural network.', 'start': 182.19, 'duration': 3.683}, {'end': 189.176, 'text': "So we'll add one more layer and then we're going to fully connect that one too.", 'start': 185.953, 'duration': 3.223}], 'summary': 'To capture nonlinear relationships, use two or more hidden layers in a neural network.', 'duration': 26.639, 'max_score': 162.537, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k162537.jpg'}, {'end': 264.368, 'src': 'embed', 'start': 225.734, 'weight': 1, 'content': [{'end': 227.895, 'text': 'It might be data coming from another neuron.', 'start': 225.734, 'duration': 2.161}, {'end': 233.196, 'text': "But regardless, that data is going to come in, and we're just going to get the sum of that data.", 'start': 228.035, 'duration': 5.161}, {'end': 235.837, 'text': "So it's going to come in and be summed all together.", 'start': 233.597, 'duration': 2.24}, {'end': 237.998, 'text': 'But remember, we also have those weights.', 'start': 235.977, 
'duration': 2.021}, {'end': 245.5, 'text': 'Each of the inputs has a unique weight that gets multiplied against the input data, and then we sum it together.', 'start': 238.418, 'duration': 7.082}, {'end': 250.681, 'text': 'Finally, and this is kind of where the artificial neural network comes into play, we have an activation function.', 'start': 245.62, 'duration': 5.061}, {'end': 256.344, 'text': 'And this activation function is kind of meant to simulate a neuron actually firing or not.', 'start': 250.701, 'duration': 5.643}, {'end': 259.325, 'text': 'so you can think of the activation function like on a graph.', 'start': 256.344, 'duration': 2.981}, {'end': 264.368, 'text': 'you know you got your x and your y, and then a really basic activation function would be like a stepper function.', 'start': 259.325, 'duration': 5.043}], 'summary': 'Data inputs are summed using unique weights, then passed through an activation function in artificial neural networks.', 'duration': 38.634, 'max_score': 225.734, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k225734.jpg'}, {'end': 340.888, 'src': 'embed', 'start': 314.314, 'weight': 3, 'content': [{'end': 320.259, 'text': 'well, this output layer is almost certain to have just a softmax activation function.', 'start': 314.314, 'duration': 5.945}, {'end': 326.303, 'text': "and what it's going to say is maybe dog is a 0.79 and cat is a 0.21.", 'start': 320.259, 'duration': 6.044}, {'end': 333.369, 'text': "these two values are going to add up to a perfect 1.0, but we're going to go with whatever the largest value is.", 'start': 326.303, 'duration': 7.066}, {'end': 340.888, 'text': "so in this case the neural network is, you could say, 79% confident it's a dog, 21% confident it's a cat.", 'start': 333.369, 'duration': 7.519}], 'summary': 'Neural network uses softmax activation, predicts 79% dog, 21% cat.', 'duration': 26.574, 'max_score': 314.314, 'thumbnail':
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k314314.jpg'}, {'end': 384.318, 'src': 'heatmap', 'start': 340.888, 'weight': 0.769, 'content': [{'end': 346.274, 'text': "we're going to say we're going to take the arg max basically, and we're going to say we think it's a dog all right.", 'start': 340.888, 'duration': 5.386}, {'end': 350.779, 'text': "now that we're all experts on the concepts of neural networks, let's go ahead and build one.", 'start': 346.274, 'duration': 4.505}, {'end': 352.321, 'text': "you're going to need tensorflow.", 'start': 350.779, 'duration': 1.542}, {'end': 355.424, 'text': 'so do a pip install --upgrade tensorflow.', 'start': 352.321, 'duration': 3.103}, {'end': 357.506, 'text': 'you should be on tensorflow version 1.10 or greater.', 'start': 355.424, 'duration': 2.082}, {'end': 370.569, 'text': 'So one thing you can do is import TensorFlow and then actually TensorFlow as tf, and then tf.__version__ will give you your current version.', 'start': 359.454, 'duration': 11.115}, {'end': 374.713, 'text': 'So mine is 1.10.', 'start': 371.15, 'duration': 3.563}, {'end': 375.854, 'text': "Now, let's go ahead and get started.", 'start': 374.713, 'duration': 1.141}, {'end': 377.915, 'text': "So the first thing we're going to do is import a data set.", 'start': 375.874, 'duration': 2.041}, {'end': 384.318, 'text': "We're going to use MNIST, kind of the hello world example of data sets with machine learning.", 'start': 377.955, 'duration': 6.363}], 'summary': 'Introduction to building neural networks using tensorflow, focusing on version 1.10 and utilizing the mnist dataset.', 'duration': 43.43, 'max_score': 340.888, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k340888.jpg'}, {'end': 384.318, 'src': 'embed', 'start': 359.454, 'weight': 4, 'content': [{'end': 370.569, 'text': 'So one thing you can do is import TensorFlow and then actually
TensorFlow as tf, and then tf.__version__ will give you your current version.', 'start': 359.454, 'duration': 11.115}, {'end': 374.713, 'text': 'So mine is 1.10.', 'start': 371.15, 'duration': 3.563}, {'end': 375.854, 'text': "Now, let's go ahead and get started.", 'start': 374.713, 'duration': 1.141}, {'end': 377.915, 'text': "So the first thing we're going to do is import a data set.", 'start': 375.874, 'duration': 2.041}, {'end': 384.318, 'text': "We're going to use MNIST, kind of the hello world example of data sets with machine learning.", 'start': 377.955, 'duration': 6.363}], 'summary': 'Import tensorflow, version 1.10, and start by importing mnist dataset.', 'duration': 24.864, 'max_score': 359.454, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k359454.jpg'}, {'end': 509.253, 'src': 'heatmap', 'start': 480.424, 'weight': 0.839, 'content': [{'end': 482.366, 'text': "So you can see what we're talking about here.", 'start': 480.424, 'duration': 1.942}, {'end': 483.908, 'text': 'So this is just going to be an array.', 'start': 482.386, 'duration': 1.522}, {'end': 488.272, 'text': "It'll be a multidimensional array, which is all a tensor is, by the way.", 'start': 483.928, 'duration': 4.344}, {'end': 497.362, 'text': "So here's your tensor, right?
Okay, so that's the actual data that we're gonna attempt to pass through our neural network.", 'start': 489.334, 'duration': 8.028}, {'end': 505.75, 'text': "And just to show you, if we were to actually graph it and then do a plt.show, it's going to be the number and you can just excuse the color.", 'start': 497.723, 'duration': 8.027}, {'end': 507.091, 'text': "it's definitely black and white.", 'start': 505.75, 'duration': 1.341}, {'end': 509.253, 'text': "it's a single color, it's a binary.", 'start': 507.091, 'duration': 2.162}], 'summary': 'Discussion on representing data as a multidimensional array or tensor for neural network processing.', 'duration': 28.829, 'max_score': 480.424, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k480424.jpg'}], 'start': 64.673, 'title': 'Neural networks and activation functions', 'summary': 'Explores the basic structure of neural networks, emphasizing the need for multiple hidden layers and the weighted summation process. 
it also discusses the concept of activation functions, focusing on the use of the softmax activation function and its application in classifying images using tensorflow and the mnist dataset.', 'chapters': [{'end': 245.5, 'start': 64.673, 'title': 'Understanding neural networks', 'summary': 'Explores the basic structure of a neural network, including the input, hidden, and output layers, emphasizing the need for multiple hidden layers to map nonlinear relationships and the weighted summation process at the neuron level.', 'duration': 180.827, 'highlights': ["The need for multiple hidden layers in a neural network To map nonlinear relationships, it's essential to have two or more hidden layers, with one hidden layer resulting in a neural network and two or more hidden layers resulting in a deep neural network.", "The process of weighted summation at the neuron level Inputs, whether from the input layer or other neurons, are multiplied by unique weights and summed together, illustrating the crucial role of weights in the neural network's functionality.", "The structure and function of a neural network Provides an overview of a neural network's components, such as input, hidden, and output layers, and their roles in mapping inputs to outputs, with a focus on the basic architecture and functionality."]}, {'end': 497.362, 'start': 245.62, 'title': 'Neural network activation functions', 'summary': 'Discusses the concept of activation functions in neural networks, focusing on the use of the softmax activation function and its application in determining the confidence level of classifying images.
it also provides a brief introduction to building a neural network using tensorflow and importing the mnist dataset for image classification.', 'duration': 251.742, 'highlights': ['Explanation of the softmax activation function and its application in determining confidence level in classifying images ', 'Introduction to building a neural network using TensorFlow and importing the MNIST dataset for image classification ', 'Overview of the concept of activation functions in neural networks ']}], 'duration': 432.689, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k64673.jpg', 'highlights': ["The need for multiple hidden layers in a neural network To map nonlinear relationships, it's essential to have two or more hidden layers, with one hidden layer resulting in a neural network and two or more hidden layers resulting in a deep neural network.", "The process of weighted summation at the neuron level Inputs, whether from the input layer or other neurons, are multiplied by unique weights and summed together, illustrating the crucial role of weights in the neural network's functionality.", "The structure and function of a neural network Provides an overview of a neural network's components, such as input, hidden, and output layers, and their roles in mapping inputs to outputs, with a focus on the basic architecture and functionality.", 'Explanation of the softmax activation function and its application in determining confidence level in classifying images', 'Introduction to building a neural network using TensorFlow and importing the MNIST dataset for image classification', 'Overview of the concept of activation functions in neural networks']}, {'end': 873.527, 'segs': [{'end': 554.256, 'src': 'embed', 'start': 519.44, 'weight': 1, 'content': [{'end': 522.903, 'text': 'so anyways, back to our actual code up here.', 'start': 519.44, 'duration': 3.463}, {'end': 528.861, 'text': 'Once we have the data, one thing we
want to do is normalize that data.', 'start': 524.459, 'duration': 4.402}, {'end': 534.863, 'text': "So again, if I print it out, you can see it's data that seems to vary from zero to, looks like we have as high as 253.", 'start': 528.921, 'duration': 5.942}, {'end': 537.424, 'text': "It's zero to 255 for pixel data.", 'start': 534.863, 'duration': 2.561}, {'end': 542.106, 'text': 'So what we want to do is scale this data or normalize it.', 'start': 538.244, 'duration': 3.862}, {'end': 544.987, 'text': "But really what we're doing in this normalization is scaling it.", 'start': 542.146, 'duration': 2.841}, {'end': 554.256, 'text': "So we're going to just redefine xtrain and xtest, but it's going to be tf.keras.utils.normalize,", 'start': 545.587, 'duration': 8.669}], 'summary': 'Normalize pixel data from 0 to 255 to scale and redefine xtrain and xtest.', 'duration': 34.816, 'max_score': 519.44, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k519440.jpg'}, {'end': 581.765, 'src': 'heatmap', 'start': 554.256, 'weight': 0.846, 'content': [{'end': 566.129, 'text': "and we're going to pass xtrain and it'll be on axis 1 and then we're going to copy paste and we're gonna do the exact same thing for X test and all this does.", 'start': 554.256, 'duration': 11.873}, {'end': 569.773, 'text': "let's just run that and then we'll run this again and you can see how the five has changed.", 'start': 566.129, 'duration': 3.644}, {'end': 578.702, 'text': 'a little bit looks like I got a little lighter, and then we come down here and we can see the values here are now scaled between zero and one,', 'start': 569.773, 'duration': 8.929}, {'end': 581.765, 'text': 'and that just makes it easier for a network to learn.', 'start': 578.702, 'duration': 3.063}], 'summary': 'Scaling x train and x test on axis 1, values now scaled between 0 and 1', 'duration': 27.509, 'max_score': 554.256, 'thumbnail':
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k554256.jpg'}, {'end': 766.06, 'src': 'embed', 'start': 740.384, 'weight': 0, 'content': [{'end': 746.571, 'text': "Your output layer will always, if it's in the case of classification anyways, it'll have your number of classifications.", 'start': 740.384, 'duration': 6.187}, {'end': 749.273, 'text': "In our case, that's 10.", 'start': 746.711, 'duration': 2.562}, {'end': 755.516, 'text': "And the activation function, we don't want it to be ReLU because we actually, this is like a probability distribution.", 'start': 749.273, 'duration': 6.243}, {'end': 759.137, 'text': 'So we want to use softmax for a probability distribution.', 'start': 755.936, 'duration': 3.201}, {'end': 761.918, 'text': 'So that is our entire model.', 'start': 759.857, 'duration': 2.061}, {'end': 766.06, 'text': "We're done with defining the architecture, I guess, of our model.", 'start': 762.018, 'duration': 4.042}], 'summary': 'Output layer has 10 classifications and uses softmax for probability distribution.', 'duration': 25.676, 'max_score': 740.384, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k740384.jpg'}, {'end': 873.527, 'src': 'embed', 'start': 852.459, 'weight': 2, 'content': [{'end': 861.861, 'text': "probably the most popular one is a categorical cross entropy or some version of that, in that in this case we're going to use sparse.", 'start': 852.459, 'duration': 9.402}, {'end': 865.142, 'text': 'you could also use binary, like in the case of cats versus dogs.', 'start': 861.861, 'duration': 3.281}, {'end': 868.383, 'text': 'you would probably use binary in that case, but you could.', 'start': 865.142, 'duration': 3.241}, {'end': 873.527, 'text': 'you could just kind of blanket categorical cross entropy, everything Anyways.', 'start': 868.383, 'duration': 5.144}], 'summary': 'Popular loss function: categorical cross entropy, used for 
classification tasks.', 'duration': 21.068, 'max_score': 852.459, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k852459.jpg'}], 'start': 497.723, 'title': 'Building and normalizing a neural network model', 'summary': 'Explains the process of normalizing data, flattening input layers, defining hidden and output layers, and setting parameters for training a neural network model using tensorflow and keras.', 'chapters': [{'end': 873.527, 'start': 497.723, 'title': 'Building and normalizing a neural network model', 'summary': 'Explains the process of normalizing data, flattening input layers, defining hidden and output layers, and setting parameters for training a neural network model using tensorflow and keras.', 'duration': 375.804, 'highlights': ['The data is normalized to scale the pixel values between 0 and 1 for easier learning by the network, with values ranging from 0 to 255. The pixel data is normalized using tf.keras.utils.normalize to scale the values between 0 and 1, making it easier for the network to learn.', 'The model architecture includes flattened input layer, two hidden layers with 128 neurons each and ReLU activation function, and an output layer with 10 neurons and softmax activation function for classification. The model architecture involves a flattened input layer, two hidden layers with 128 neurons and ReLU activation, and an output layer with 10 neurons and softmax activation for classification.', 'The Adam optimizer and sparse categorical cross-entropy loss metric are set as default choices for optimizing and calculating loss in the model training.
The Adam optimizer and sparse categorical cross-entropy loss metric are chosen as default options for optimizing and calculating loss during model training.']}], 'duration': 375.804, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k497723.jpg', 'highlights': ['The model architecture involves a flattened input layer, two hidden layers with 128 neurons and ReLU activation, and an output layer with 10 neurons and softmax activation for classification.', 'The data is normalized to scale the pixel values between 0 and 1 for easier learning by the network, with values ranging from 0 to 255.', 'The Adam optimizer and sparse categorical cross-entropy loss metric are chosen as default options for optimizing and calculating loss during model training.']}, {'end': 1023.788, 'segs': [{'end': 959.388, 'src': 'embed', 'start': 926.306, 'weight': 0, 'content': [{'end': 928.127, 'text': 'Awesome Okay, so we did pretty good.', 'start': 926.306, 'duration': 1.821}, {'end': 933.811, 'text': 'We got a 97% accuracy after only three epochs, which is pretty darn good.', 'start': 928.167, 'duration': 5.644}, {'end': 939.035, 'text': 'So once we have this, we can, this was in sample.', 'start': 935.312, 'duration': 3.723}, {'end': 941.536, 'text': 'So this is always going to really excite you.', 'start': 939.095, 'duration': 2.441}, {'end': 946.219, 'text': "But what's really important to remember is neural networks are great at fitting.", 'start': 941.816, 'duration': 4.403}, {'end': 948.321, 'text': 'The question is did they overfit?', 'start': 946.6, 'duration': 1.721}, {'end': 959.388, 'text': 'So the idea or the hope is that your model actually generalized right? to what makes an 8 and what makes a 4,', 'start': 948.381, 'duration': 11.007}], 'summary': 'Achieved 97% accuracy in 3 epochs.
focus on preventing overfitting and ensuring model generalization.', 'duration': 33.082, 'max_score': 926.306, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k926306.jpg'}, {'end': 1023.788, 'src': 'embed', 'start': 976.88, 'weight': 1, 'content': [{'end': 979.802, 'text': 'And that is just model.evaluate xtest, ytest.', 'start': 976.88, 'duration': 2.922}, {'end': 992.312, 'text': "then we'll go ahead and just print val loss and val accuracy.", 'start': 985.37, 'duration': 6.942}, {'end': 1000.695, 'text': 'and we can see here the loss is almost 0.11 and the accuracy is at 96.5, so a little less than the one that we ended on,', 'start': 992.312, 'duration': 8.383}, {'end': 1003.576, 'text': 'and the loss is quite a bit higher relatively.', 'start': 1000.695, 'duration': 2.881}, {'end': 1011.619, 'text': 'but um, you should expect that you should expect your out of sample accuracy to be slightly lower and your loss to be slightly higher.', 'start': 1003.576, 'duration': 8.043}, {'end': 1017.043, 'text': "what you definitely don't want to see is either too close or too much of a delta.", 'start': 1012.139, 'duration': 4.904}, {'end': 1023.788, 'text': "if there's a huge delta, chances are you probably already have overfit and you'd want to like kind of dial it back a little bit.", 'start': 1017.043, 'duration': 6.745}], 'summary': 'Model evaluation shows 96.5% accuracy with 0.11 loss, indicating potential overfitting.', 'duration': 46.908, 'max_score': 976.88, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k976880.jpg'}], 'start': 873.527, 'title': 'Neural network model training and evaluation', 'summary': 'Covers training a neural network model with 97% accuracy after three epochs, evaluating the validation loss and accuracy, and addressing the risk of overfitting.', 'chapters': [{'end': 1023.788, 'start': 873.527, 'title': 'Neural network model 
training and evaluation', 'summary': 'Covers training a neural network model with 97% accuracy after three epochs, evaluating the validation loss and accuracy, and addressing the risk of overfitting.', 'duration': 150.261, 'highlights': ['The model achieved a 97% accuracy after only three epochs, demonstrating efficient training and high performance.', 'The validation loss and accuracy were calculated as 0.11 and 96.5% respectively, indicating slightly lower out-of-sample accuracy and higher loss compared to the in-sample results.', 'Emphasizing the importance of preventing overfitting, the text highlights the need to avoid a significant difference between in-sample and out-of-sample performance to maintain model generalization.']}], 'duration': 150.261, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k873527.jpg', 'highlights': ['The model achieved a 97% accuracy after only three epochs, demonstrating efficient training and high performance.', 'The validation loss and accuracy were calculated as 0.11 and 96.5% respectively, indicating slightly lower out-of-sample accuracy and higher loss compared to the in-sample results.', 'Emphasizing the importance of preventing overfitting, the text highlights the need to avoid a significant difference between in-sample and out-of-sample performance to maintain model generalization.']}, {'end': 1232.61, 'segs': [{'end': 1072.743, 'src': 'embed', 'start': 1023.788, 'weight': 0, 'content': [{'end': 1030.873, 'text': "so that's basically everything as far as the basics of keras and all that.", 'start': 1023.788, 'duration': 7.085}, {'end': 1037.799, 'text': "the only other things that i wouldn't mind covering here is like if you want to save a model and load a model, it's just model.save,", 'start': 1030.873, 'duration': 6.926}, {'end': 1043.561, 'text': 'And we can save this as epic_num_reader.model.', 'start': 1038.618, 'duration': 4.943}, {'end': 1049.025, 'text':
"And then if you want to reload that model, we'll save it as we'll call it new_model.", 'start': 1044.662, 'duration': 4.363}, {'end': 1053.487, 'text': "That's going to be equal to tf.keras.models.load_model.", 'start': 1049.245, 'duration': 4.242}, {'end': 1055.488, 'text': "And it's this exact model name.", 'start': 1054.008, 'duration': 1.48}, {'end': 1058.17, 'text': 'Whoops There we go.', 'start': 1057.089, 'duration': 1.081}, {'end': 1060.648, 'text': "So that's our new model.", 'start': 1059.366, 'duration': 1.282}, {'end': 1067.056, 'text': 'And then finally, if we wanted to make a prediction, we could say predictions equals new_model.predict.', 'start': 1060.708, 'duration': 6.348}, {'end': 1070.32, 'text': 'And keep in mind, predict always takes a list.', 'start': 1067.556, 'duration': 2.764}, {'end': 1072.743, 'text': "This'll get you a few times for sure.", 'start': 1070.981, 'duration': 1.762}], 'summary': 'Basics of keras, model saving/loading, and making predictions in tensorflow.', 'duration': 48.955, 'max_score': 1023.788, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k1023788.jpg'}, {'end': 1099.663, 'src': 'heatmap', 'start': 1072.783, 'weight': 1, 'content': [{'end': 1076.688, 'text': "But anyways, that'll take a list and we'll do X_test.", 'start': 1072.783, 'duration': 3.905}, {'end': 1081.958, 'text': "And then if we just print predictions, it's probably not going to look too friendly.", 'start': 1078.157, 'duration': 3.801}, {'end': 1083.659, 'text': "It's a little messy.", 'start': 1082.978, 'duration': 0.681}, {'end': 1089.88, 'text': "So what's going on here? These are all one hot arrays and these are our probability distributions.", 'start': 1084.119, 'duration': 5.761}, {'end': 1093.641, 'text': "So what do we do with these?
So I'm going to use NumPy.", 'start': 1090.6, 'duration': 3.041}, {'end': 1096.782, 'text': "You can also use tf.argmax, but it's an abstract.", 'start': 1093.781, 'duration': 3.001}, {'end': 1098.563, 'text': "It's a tensor and we have to pull it down.", 'start': 1096.902, 'duration': 1.661}, {'end': 1099.663, 'text': 'We need a session and all that.', 'start': 1098.603, 'duration': 1.06}], 'summary': 'Using numpy to manipulate one hot arrays and probability distributions in a tensorflow session.', 'duration': 26.88, 'max_score': 1072.783, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k1072783.jpg'}, {'end': 1127.794, 'src': 'heatmap', 'start': 1104.227, 'weight': 0.705, 'content': [{'end': 1111.634, 'text': 'import numpy as np and then print, for example, np.argmax, argmax.', 'start': 1104.227, 'duration': 7.407}, {'end': 1116.458, 'text': "let's do predictions and let's just do the 0th prediction.", 'start': 1111.634, 'duration': 4.824}, {'end': 1119.481, 'text': "Okay, it says it's a 7, so the prediction.", 'start': 1116.618, 'duration': 2.863}, {'end': 1125.67, 'text': "for x test zero, like the zeroth index is, it's a seven.", 'start': 1120.001, 'duration': 5.669}, {'end': 1127.794, 'text': 'so gee, if only we had a way to draw it.', 'start': 1125.67, 'duration': 2.124}], 'summary': 'Using numpy, made predictions where 0th index is 7.', 'duration': 23.567, 'max_score': 1104.227, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k1104227.jpg'}, {'end': 1222.89, 'src': 'embed', 'start': 1125.67, 'weight': 3, 'content': [{'end': 1127.794, 'text': 'so gee, if only we had a way to draw it.', 'start': 1125.67, 'duration': 2.124}, {'end': 1128.895, 'text': 'okay, we can definitely do this.', 'start': 1127.794, 'duration': 1.101}, {'end': 1138.521, 'text': "so we can do plt.imshow and we're going to do x test zeroth and then plt.show.", 'start': 1128.895,
'duration': 9.626}, {'end': 1140.162, 'text': 'Oh, look at that.', 'start': 1138.541, 'duration': 1.621}, {'end': 1141.163, 'text': "It's a seven.", 'start': 1140.622, 'duration': 0.541}, {'end': 1153.689, 'text': "Okay. So I think that's basically all the things I would want to show you guys as far as like just a quick start with deep learning and Python and Keras and TensorFlow.", 'start': 1141.843, 'duration': 11.846}, {'end': 1155.87, 'text': 'This just barely scratches the surface.', 'start': 1154.089, 'duration': 1.781}, {'end': 1157.871, 'text': "There's so many things for us to do.", 'start': 1155.89, 'duration': 1.981}, {'end': 1164.075, 'text': 'I definitely plan to have at least one more follow-up video covering things like loading in outside datasets,', 'start': 1157.891, 'duration': 6.184}, {'end': 1172.819, 'text': "Definitely some TensorBoard, reading the model, understanding what's going on and also what's going wrong, because that's eventually, you know,", 'start': 1164.815, 'duration': 8.004}, {'end': 1177.241, 'text': "it's really fun when we're doing tutorials and problems are like already solved and we know the answer.", 'start': 1172.819, 'duration': 4.422}, {'end': 1179.183, 'text': "It's very exciting.", 'start': 1178.262, 'duration': 0.921}, {'end': 1184.765, 'text': 'But in reality, a lot of times you have to dig to find the model that works with your data.', 'start': 1179.243, 'duration': 5.522}, {'end': 1191.527, 'text': "So anyways, that's definitely something we have to cover, or at least that you're going to have to learn somehow or other.", 'start': 1185.145, 'duration': 6.382}, {'end': 1193.408, 'text': 'Anyway, that is all for now.', 'start': 1191.787, 'duration': 1.621}, {'end': 1196.369, 'text': "If you've got questions, comments, concerns, whatever, feel free to leave them below.", 'start': 1193.448, 'duration': 2.921}, {'end': 1199.99, 'text': 'Definitely check out reddit.com slash r slash machine learning.', 'start': 1196.469, 
'duration': 3.521}, {'end': 1202.472, 'text': 'Learn Machine Learning subreddit.', 'start': 1200.59, 'duration': 1.882}, {'end': 1205.694, 'text': "You can come join our Discord if you've got questions.", 'start': 1203.312, 'duration': 2.382}, {'end': 1209.217, 'text': "That's just discord.gg slash sentdex.", 'start': 1205.714, 'duration': 3.503}, {'end': 1209.917, 'text': "We'll get you there.", 'start': 1209.277, 'duration': 0.64}, {'end': 1219.204, 'text': 'Also, a special thanks to my most recent channel members, Daniel, Jeffrey, KB, Abhijit, Eichner, NewcastleGeek, Fuba44, Jason, and 8counts.', 'start': 1210.618, 'duration': 8.586}, {'end': 1222.89, 'text': 'Thank you guys so much for your support.', 'start': 1221.246, 'duration': 1.644}], 'summary': 'Quick start with deep learning in python and keras, plan for follow-up video covering loading outside datasets and TensorBoard, reading the model.', 'duration': 97.22, 'max_score': 1125.67, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k1125670.jpg'}], 'start': 1023.788, 'title': 'Keras, tensorflow, and ml update', 'summary': 'Covers basics of keras and tensorflow, including saving/loading models and making predictions. 
it also discusses upcoming tutorial plans, challenges in finding the right model, and encourages engagement through reddit and discord.', 'chapters': [{'end': 1155.87, 'start': 1023.788, 'title': 'Basics of keras and tensorflow', 'summary': "Covers saving and loading models, making predictions, and visualizing predictions using keras and tensorflow, including examples of saving a model as 'epic num reader.model' and making a prediction on the x_test data.", 'duration': 132.082, 'highlights': ["Saving a model can be done using model.save('epic num reader.model'), and reloading the model using tf.keras.models.load_model('epic num reader.model').", 'Making predictions is achieved through new_model.predict(X_test), and visualizing predictions can be done by using plt.imshow(X_test[0]) and plt.show() to display the predicted image.', 'The tutorial provides a quick start with deep learning, Keras, and TensorFlow, covering the basics and scratching the surface of these concepts.']}, {'end': 1232.61, 'start': 1155.89, 'title': 'Machine learning update and call for engagement', 'summary': 'Discusses upcoming tutorial plans, including covering topics like loading outside datasets and using TensorBoard, highlighting challenges in finding the right model, and concludes with a call for engagement through reddit and discord.', 'duration': 76.72, 'highlights': ['The chapter discusses upcoming tutorial plans, including covering topics like loading outside datasets and using TensorBoard.', "Challenges in finding the right model are highlighted, emphasizing the need to understand what's going on and what's going wrong.", 'A call for engagement is extended through Reddit and Discord, thanking recent channel members for their support.']}], 'duration': 208.822, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/wQ8BIBpya2k/pics/wQ8BIBpya2k1023788.jpg', 'highlights': ["Saving a model can be done using model.save('epic num reader.model')", "Reloading the model 
using tf.keras.models.load_model('epic num reader.model')", 'Making predictions is achieved through new_model.predict(X_test)', 'Visualizing predictions can be done by using plt.imshow(X_test[0]) and plt.show()', 'The tutorial provides a quick start with deep learning, Keras, and TensorFlow', 'The chapter discusses upcoming tutorial plans, including loading outside datasets and using TensorBoard', 'Challenges in finding the right model are highlighted', 'A call for engagement is extended through Reddit and Discord, thanking recent channel members for their support', "Emphasizing the need to understand what's going on and what's going wrong"]}], 'highlights': ['The model achieved a 97% accuracy after only three epochs, demonstrating efficient training and high performance.', 'The update discusses the changes and simplification in deep learning with TensorFlow and Keras over the past two years.', 'The transition to high-level APIs like Keras makes it super simple for anybody to get started with deep learning.', "The need for multiple hidden layers in a neural network: To map nonlinear relationships, it's essential to have two or more hidden layers, with one hidden layer resulting in a neural network and two or more hidden layers resulting in a deep neural network.", "The process of weighted summation at the neuron level: Inputs, whether from the input layer or other neurons, are multiplied by unique weights and summed together, illustrating the crucial role of weights in the neural network's functionality.", "The structure and function of a neural network: Provides an overview of a neural network's components, such as input, hidden, and output layers, and their roles in mapping inputs to outputs, with a focus on the basic architecture and functionality.", 'The model architecture involves a flattened input layer, two hidden layers with 128 neurons and ReLU activation, and an output layer with 10 neurons and softmax activation for classification.', 'The data is 
normalized to scale the raw pixel values, which range from 0 to 255, down to between 0 and 1 for easier learning by the network.', 'The Adam optimizer and sparse categorical cross-entropy loss metric are chosen as default options for optimizing and calculating loss during model training.', "Saving a model can be done using model.save('epic num reader.model')", "Reloading the model using tf.keras.models.load_model('epic num reader.model')", 'Making predictions is achieved through new_model.predict(X_test)', 'Visualizing predictions can be done by using plt.imshow(X_test[0]) and plt.show()', 'The tutorial provides a quick start with deep learning, Keras, and TensorFlow', 'The chapter discusses upcoming tutorial plans, including loading outside datasets and using TensorBoard', 'Challenges in finding the right model are highlighted', 'A call for engagement is extended through Reddit and Discord, thanking recent channel members for their support', "Emphasizing the need to understand what's going on and what's going wrong"]}
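Taken together, the highlights above amount to one short pipeline. Below is a minimal sketch under those assumptions (flatten the 28x28 input, two Dense(128, relu) hidden layers, a Dense(10, softmax) output, Adam plus sparse categorical cross-entropy); random input stands in for the normalized MNIST data so the snippet runs without a download, and the save/load round-trip is shown as comments using a hypothetical 'epic_num_reader.keras' filename rather than the video's 'epic num reader.model':

```python
import numpy as np
import tensorflow as tf

# The architecture from the highlights: flatten the 28x28 input, two
# Dense(128, relu) hidden layers, and a 10-neuron softmax output
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Adam optimizer and sparse categorical cross-entropy, as named above
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# In the video the model is trained on normalized MNIST data (pixel
# values scaled from 0-255 down to 0-1) with model.fit(x_train, y_train,
# epochs=3), then saved and reloaded, e.g.:
#   model.save('epic_num_reader.keras')
#   new_model = tf.keras.models.load_model('epic_num_reader.keras')

# predict() takes a batch; random data stands in just to show the shapes
x_batch = np.random.rand(2, 28, 28).astype('float32')
predictions = model.predict(x_batch)   # one softmax row per sample
print(predictions.shape)               # (2, 10)
print(np.argmax(predictions, axis=1))  # most likely digit per sample
```

The untrained network's outputs here are meaningless noise; after the three epochs of training described above, the same pipeline reaches roughly 97% accuracy on MNIST.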