title
Running our Network - Deep Learning with Neural Networks and TensorFlow

description
Welcome to part four of Deep Learning with Neural Networks and TensorFlow, and part 46 of the Machine Learning tutorial series. In this tutorial, we're going to write the code for what happens during the Session in TensorFlow. In the previous tutorial, we built the model for our Artificial Neural Network and set up the computation graph with TensorFlow. Now we need to actually set up the training process, which is what will be run in the TensorFlow Session. https://pythonprogramming.net https://twitter.com/sentdex https://www.facebook.com/pythonprogramming.net/ https://plus.google.com/+sentdex
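For reference, here is a minimal sketch of the Session code this part walks through, written against the TensorFlow 0.x-era API used in the video (tf.initialize_all_variables and the old positional softmax_cross_entropy_with_logits signature). The condensed neural_network_model below is a one-hidden-layer stand-in for the three-hidden-layer model built in the previous part, not the video's exact code; names like hm_epochs, epoch_x, and epoch_y follow the tutorial's conventions.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('/tmp/data/', one_hot=True)

batch_size = 100
x = tf.placeholder('float', [None, 784])  # 28x28 MNIST images, flattened
y = tf.placeholder('float')               # one-hot labels

def neural_network_model(data):
    # Condensed stand-in for the previous part's three-hidden-layer model.
    hidden = {'weights': tf.Variable(tf.random_normal([784, 500])),
              'biases': tf.Variable(tf.random_normal([500]))}
    output = {'weights': tf.Variable(tf.random_normal([500, 10])),
              'biases': tf.Variable(tf.random_normal([10]))}
    l1 = tf.nn.relu(tf.add(tf.matmul(data, hidden['weights']), hidden['biases']))
    return tf.add(tf.matmul(l1, output['weights']), output['biases'])

def train_neural_network(x):
    prediction = neural_network_model(x)
    # Cross entropy with logits compares the prediction to the one-hot label.
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(prediction, y))
    # AdamOptimizer's learning_rate parameter defaults to 0.001.
    optimizer = tf.train.AdamOptimizer().minimize(cost)

    hm_epochs = 10  # one epoch = feed forward + backprop over the full dataset
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        for epoch in range(hm_epochs):
            epoch_loss = 0
            for _ in range(int(mnist.train.num_examples / batch_size)):
                epoch_x, epoch_y = mnist.train.next_batch(batch_size)
                _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: epoch_y})
                epoch_loss += c
            print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

        # Evaluate on the test set: argmax over one-hot vectors, then mean correctness.
        correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
        print('Accuracy:', accuracy.eval({x: mnist.test.images, y: mnist.test.labels}))

train_neural_network(x)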

detail
{'title': 'Running our Network - Deep Learning with Neural Networks and TensorFlow', 'heatmap': [{'end': 957.407, 'start': 936.815, 'weight': 0.709}, {'end': 1152.068, 'start': 1100.997, 'weight': 0.807}], 'summary': 'Covers topics such as building a computation graph for a TensorFlow model, training a neural network with input data, using cross entropy as the cost function, minimizing cost with the Adam optimizer and its default learning rate of 0.001, initializing variables, running a session, optimizing weights and biases, tracking epoch loss, debugging neural networks and softmax functions in Python, and achieving 94.85% accuracy on the MNIST dataset.', 'chapters': [{'end': 270.304, 'segs': [{'end': 72.422, 'src': 'embed', 'start': 51.269, 'weight': 0, 'content': [{'end': 63.859, 'text': "This is going to take x, which is just your input data, and we're going to say the prediction is equal to the neural network model of x.", 'start': 51.269, 'duration': 12.59}, {'end': 64.94, 'text': "so what's happening here?", 'start': 63.859, 'duration': 1.081}, {'end': 72.422, 'text': "you're taking input data, you're passing it through your neural network model, which shoves it all through the layers and returns an output.", 'start': 64.94, 'duration': 7.482}], 'summary': 'Using input data x to predict output using the neural network model.', 'duration': 21.153, 'max_score': 51.269, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs51269.jpg'}, {'end': 130.582, 'src': 'embed', 'start': 102.845, 'weight': 1, 'content': [{'end': 107.988, 'text': 'with logits against the prediction and y.', 'start': 102.845, 'duration': 5.143}, {'end': 115.012, 'text': "So, in this case, what we're doing is we're using cross entropy with logits as our cost function.", 'start': 107.988, 'duration': 7.024}, {'end': 125.239, 'text': "And that's going to calculate, basically, the difference between the prediction that we got and the known label that we have.", 'start': 115.633, 'duration': 9.606}, {'end': 128.16, 'text': 'And both of these are in that one hot format.', 'start': 125.999, 'duration': 2.161}, {'end': 130.582, 'text': "That's why we basically, you know, set one hot equals true.", 'start': 128.199, 'duration': 2.383}], 'summary': 'Using cross entropy with logits to calculate the difference between prediction and label in one hot format.', 'duration': 27.737, 'max_score': 102.845, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs102845.jpg'}, {'end': 228.426, 'src': 'embed', 'start': 175.436, 'weight': 2, 'content': [{'end': 177.257, 'text': "So anyway, we're going to have an optimizer.", 'start': 175.436, 'duration': 1.821}, {'end': 182.383, 'text': 'And the optimizer is going to be tf.train.AdamOptimizer.', 'start': 178.158, 'duration': 4.225}, {'end': 189.531, 'text': 'And this is synonymous with stochastic gradient descent, AdaGrad, and so on.', 'start': 182.744, 'duration': 6.787}, {'end': 195.558, 'text': 'And this would be, so using the Adam optimizer, what do we want to do? Well, we want to minimize.', 'start': 190.352, 'duration': 5.206}, {'end': 197.4, 'text': 'What do we want to minimize? 
Cost.', 'start': 195.859, 'duration': 1.541}, {'end': 200.497, 'text': 'done. optionally.', 'start': 198.356, 'duration': 2.141}, {'end': 210.703, 'text': 'the Adam optimizer does have a parameter, that is learning rate, but the learning rate defaults to 0.001, which is a fine enough learning rate,', 'start': 200.497, 'duration': 10.206}, {'end': 212.284, 'text': 'so I see no reason to modify that.', 'start': 210.703, 'duration': 1.581}, {'end': 220.343, 'text': "Okay so we've got those values and we have our cost function defined.", 'start': 214.801, 'duration': 5.542}, {'end': 224.924, 'text': 'But again, is anything actually happening in this cost function? I mean, just look at it.', 'start': 220.423, 'duration': 4.501}, {'end': 228.426, 'text': 'Nothing is happening in the cost function.', 'start': 226.005, 'duration': 2.421}], 'summary': 'Using tf.train.AdamOptimizer to minimize cost with the default learning rate of 0.001.', 'duration': 52.99, 'max_score': 175.436, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs175436.jpg'}, {'end': 270.304, 'src': 'embed', 'start': 248.112, 'weight': 4, 'content': [{'end': 258.214, 'text': "if you are following along on a really really slow computer, probably do like a really low number there or don't do this at all.", 'start': 248.112, 'duration': 10.102}, {'end': 262.077, 'text': "I don't know what you're doing with neural networks on a really slow computer, but it'll run.", 'start': 258.214, 'duration': 3.863}, {'end': 264.257, 'text': 'it just might take a while.', 'start': 262.077, 'duration': 2.18}, {'end': 268.463, 'text': 'so yeah, anyway, how many epochs? 10.', 'start': 264.257, 'duration': 4.206}, {'end': 270.304, 'text': "Like, if you're on a netbook, I wonder how slow this would be.", 'start': 268.463, 'duration': 1.841}], 'summary': 'Neural networks may run slowly on slow computers. 
Consider a low number of epochs, e.g., 10 on a netbook.', 'duration': 22.192, 'max_score': 248.112, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs248112.jpg'}], 'start': 1.84, 'title': 'Neural network model training, cost function, and optimization', 'summary': 'Covers building a computation graph for a TensorFlow model, training a neural network using input data, explaining cross entropy as the cost function, and minimizing cost using the Adam optimizer with a learning rate of 0.001 and starting with 10 epochs.', 'chapters': [{'end': 72.422, 'start': 1.84, 'title': 'Neural network model training', 'summary': 'Covers the building of a computation graph for a TensorFlow model and a neural network model, and the creation of a function to train the neural network model using input data.', 'duration': 70.582, 'highlights': ['The chapter covers building the computation graph for a TensorFlow model and a neural network model, which are synonymous in their structure and functionality.', 'The creation of a function to train the neural network model using input data is explained, with the prediction being equal to neural_network_model(x).']}, {'end': 152.549, 'start': 72.422, 'title': 'Neural network cost function', 'summary': 'Explains the use of cross entropy with logits as the cost function for a neural network, which calculates the difference between the prediction and the known label, particularly in the one hot format.', 'duration': 80.127, 'highlights': ['The cost function is equal to tf.reduce_mean of tf.nn.softmax_cross_entropy_with_logits on the prediction and y, used for calculating the difference between the prediction and the known label in the one hot format.', "The output layer is of the one hot shape, and if not using one hot, the output can be of any shape, always matching the shape of the testing sets' labels."]}, {'end': 270.304, 'start': 152.689, 'title': 'Neural network optimization', 'summary': 'Discusses the process of minimizing cost in a neural network using the Adam optimizer, with a default learning rate of 0.001, and the consideration of the number of epochs, suggesting to start with 10.', 'duration': 117.615, 'highlights': ['The Adam optimizer is used to minimize the cost in the neural network, synonymous with stochastic gradient descent and AdaGrad. The Adam optimizer is utilized for minimizing the cost in the neural network, functioning similarly to stochastic gradient descent and AdaGrad.', 'The default learning rate of the Adam optimizer is 0.001, considered to be a suitable value for the optimization process. The default learning rate of the Adam optimizer is set at 0.001, which is deemed appropriate for the optimization process.', 'The suggestion to begin with 10 epochs for the optimization process, with the consideration of slower computers when choosing the number of epochs. 
The recommendation is to initiate the optimization process with 10 epochs, with a caution for slower computers when determining the number of epochs to use.'}]}], 'duration': 268.464, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs1840.jpg', 'highlights': ['The creation of a function to train the neural network model using input data is explained, with the prediction being equal to neural_network_model(x).', 'The cost function is equal to tf.reduce_mean of tf.nn.softmax_cross_entropy_with_logits on the prediction and y, used for calculating the difference between the prediction and the known label in the one hot format.', 'The Adam optimizer is used to minimize the cost in the neural network, synonymous with stochastic gradient descent and AdaGrad.', 'The default learning rate of the Adam optimizer is 0.001, considered to be a suitable value for the optimization process.', 'The suggestion to begin with 10 epochs for the optimization process, with the consideration of slower computers when choosing the number of epochs.']}, {'end': 445.19, 'segs': [{'end': 323.997, 'src': 'embed', 'start': 270.785, 'weight': 1, 'content': [{'end': 277.35, 'text': 'Anyway, with tf.Session() as sess, what do we want to do?', 'start': 270.785, 'duration': 6.565}, {'end': 286.417, 'text': "Well, first we need to initialize our variables, so we'll do sess.run, tf.initialize_all_variables.", 'start': 277.39, 'duration': 9.027}, {'end': 293.442, 'text': 'I feel like they need to just have, like, a tf.init or something, rather than, like, init vars or something.', 'start': 287.117, 'duration': 6.325}, {'end': 295.784, 'text': "Anyway, that's a long thing to type out, I'm just saying.", 'start': 293.562, 'duration': 2.222}, {'end': 307.051, 'text': 'Okay, so that initializes our variables, and now the session has actually started running, right? 
The session has begun.', 'start': 297.685, 'duration': 9.366}, {'end': 310.374, 'text': "We're done defining our computation graph at this point.", 'start': 307.512, 'duration': 2.862}, {'end': 320.541, 'text': "So we're going to run through these epochs, and again, the epochs are just cycles, right, of feed forward plus your back prop, right?", 'start': 311.254, 'duration': 9.287}, {'end': 323.997, 'text': "So you feed forward data and then back prop, fixing all the weights.", 'start': 320.681, 'duration': 3.316}], 'summary': 'The session initializes variables and runs the computation graph with feed forward and backpropagation cycles.', 'duration': 53.212, 'max_score': 270.785, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs270785.jpg'}, {'end': 403.369, 'src': 'embed', 'start': 379.052, 'weight': 0, 'content': [{'end': 386.457, 'text': "we're dividing by our batch size, and that tells us how many times we need to cycle based on that dynamic batch size that we set way up here.", 'start': 379.052, 'duration': 7.405}, {'end': 390.4, 'text': "So we do that, and we're just going to iterate through.", 'start': 388.238, 'duration': 2.162}, {'end': 403.369, 'text': "As we iterate through, we're going to say x and y, so data labels equals mnist.train.next_batch(batch_size).", 'start': 390.78, 'duration': 12.589}], 'summary': 'Iterating through the data labels using a dynamic batch size.', 'duration': 24.317, 'max_score': 379.052, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs379052.jpg'}], 'start': 270.785, 'title': 'TensorFlow session and computation graph', 'summary': 'Covers initializing variables, running a session, defining the computation graph, iterating through data with a dynamic batch size, and utilizing pre-built functions in TensorFlow.', 'chapters': [{'end': 445.19, 'start': 270.785, 'title': 'TensorFlow session and computation graph', 'summary': 'Discusses initializing variables, running a session, defining the computation graph, iterating through data using a dynamic batch size, and utilizing pre-built functions in TensorFlow.', 'duration': 174.405, 'highlights': ['The session is started by initializing variables using sess.run(tf.initialize_all_variables()). This step marks the beginning of the session and the initialization of variables.', 'The process involves running through epochs, which consist of cycles of feed forward and backpropagation. Epochs involve cycling through feed forward and backpropagation, crucial for data processing and weight adjustments.', 'The iteration involves chunking through the dataset using mnist.train.next_batch(batch_size). 
The process involves chunking through the dataset using the next_batch method, making data iteration more manageable.'}]}], 'duration': 174.405, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs270785.jpg', 'highlights': ['The iteration involves chunking through the dataset using mnist.train.next_batch(batch_size).', 'The process involves running through epochs, which consist of cycles of feed forward and backpropagation.', 'The session is started by initializing variables using sess.run(tf.initialize_all_variables()).']}, {'end': 789.279, 'segs': [{'end': 570.105, 'src': 'embed', 'start': 478.739, 'weight': 0, 'content': [{'end': 480.721, 'text': "sorry, my microphone's in the way of my keyboard.", 'start': 478.739, 'duration': 1.982}, {'end': 489.849, 'text': "feed_dict equals, and we're going to say x, x, y, y. done. so we run through the data and we're running that.", 'start': 480.721, 'duration': 9.128}, {'end': 495.674, 'text': "we're optimizing the cost with x's and y's and we're passing the x's and y's.", 'start': 489.849, 'duration': 5.825}, {'end': 498.117, 'text': "right, we see x's and y's.", 'start': 495.674, 'duration': 2.443}, {'end': 499.838, 'text': 'how are we optimizing that cost?', 'start': 498.117, 'duration': 1.721}, {'end': 500.239, 'text': 'you ask.', 'start': 499.838, 'duration': 0.401}, {'end': 506.722, 'text': 'right, we are doing that by modifying the weights, and yes, the weights are.', 'start': 501.619, 'duration': 5.103}, {'end': 513.025, 'text': 'or by modifying these weights, and yes, those weights are being passed through these layers.', 'start': 506.722, 'duration': 6.303}, {'end': 520.467, 'text': 'but somehow, magically, TensorFlow knows that it gets to modify those weights and biases.', 'start': 513.025, 'duration': 7.442}, {'end': 526.672, 'text': "so incredible, It's a little high level.", 'start': 520.467, 'duration': 6.205}, {'end': 531.734, 'text': 'I mean, that was kind of like one of the most confusing things for me as I was coming into TensorFlow.', 'start': 526.672, 'duration': 5.062}, {'end': 536.175, 'text': 'But anyway, epoch loss plus equals whatever C is, okay.', 'start': 531.734, 'duration': 4.441}, {'end': 538.876, 'text': "and then for each epoch we're obviously resetting the epoch loss.", 'start': 536.175, 'duration': 2.701}, {'end': 544.158, 'text': 'But we kind of want to track that each time, and then after that for loop, what we can do is we can just print.', 'start': 538.876, 'duration': 5.282}, {'end': 547.679, 'text': 'we can do some beautiful formatting here.', 'start': 544.158, 'duration': 3.521}, {'end': 555.569, 'text': 'epoch, epoch completed out of what was it?', 'start': 547.679, 'duration': 7.89}, {'end': 556.59, 'text': 'How many epochs?', 'start': 555.849, 'duration': 0.741}, {'end': 558.452, 'text': 'hm_epochs.', 'start': 557.891, 'duration': 0.561}, {'end': 561.075, 'text': 'And then we can say loss.', 'start': 559.173, 'duration': 1.902}, {'end': 564.759, 'text': 'Loss No.', 'start': 564.098, 'duration': 0.661}, {'end': 565.48, 'text': 'Epoch loss.', 'start': 565.019, 'duration': 0.461}, {'end': 568.783, 'text': 'Perfect Done.', 'start': 566.841, 'duration': 1.942}, {'end': 570.105, 'text': 'Done Done.', 'start': 569.805, 'duration': 0.3}], 'summary': 'Training process using TensorFlow: optimizing cost, modifying weights and biases, tracking epoch loss.', 'duration': 91.366, 'max_score': 478.739, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs478739.jpg'}, {'end': 670.319, 'src': 'embed', 'start': 635.203, 'weight': 2, 'content': [{'end': 637.286, 'text': 'so all this is doing.', 'start': 635.203, 'duration': 2.083}, {'end': 646.756, 'text': "tf.argmax is going to return the index of the maximum value in these arrays, and we're hoping that those index values are the same.", 'start': 637.286, 'duration': 9.47}, {'end': 648.698, 'text': "right. they're both one hots and all that.", 'start': 646.756, 'duration': 1.942}, {'end': 655.224, 'text': "so we're expecting that this is going to tell us whether or not these are basically identical.", 'start': 648.698, 'duration': 6.526}, {'end': 656.426, 'text': "so that's that now.", 'start': 655.224, 'duration': 1.202}, {'end': 670.319, 'text': 'um, we can compute accuracy as being equal to tf.reduce_mean of tf.cast, which just changes the variable to a type.', 'start': 658.395, 'duration': 11.924}], 'summary': 'Using tf.argmax to check if arrays are identical, then computing accuracy.', 'duration': 35.116, 'max_score': 635.203, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs635203.jpg'}, {'end': 761.26, 'src': 'embed', 'start': 736.121, 'weight': 1, 'content': [{'end': 744.086, 'text': "we're dealing all with training and train here, we're dealing with the training data and then, once we have optimized those weights,", 'start': 736.121, 'duration': 7.965}, {'end': 747.675, 'text': 'we come down here and run them through our model.', 'start': 744.086, 'duration': 3.589}, {'end': 756.278, 'text': "And we're saying that to know what's correct or not, we compare the prediction to the actual label.", 'start': 748.535, 'duration': 7.743}, {'end': 761.26, 'text': 'Accuracy is just whatever the float is of that correctness.', 'start': 757.359, 'duration': 3.901}], 'summary': 'Training data is optimized to improve model accuracy.', 'duration': 25.139, 'max_score': 736.121, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs736121.jpg'}], 'start': 445.19, 'title': 'TensorFlow optimization and tracking loss', 'summary': 'Covers the process of running the optimizer and cost function in TensorFlow to modify weights and biases for optimization, along with tracking epoch loss and computing accuracy during training, focusing on evaluating model performance on test data.', 'chapters': [{'end': 536.175, 'start': 445.19, 'title': 'TensorFlow optimization process', 'summary': 'Discusses the process of running the optimizer and cost function in TensorFlow, modifying weights and biases to optimize the cost, using feed_dict to pass data through the layers.', 'duration': 90.985, 'highlights': ["The process involves running the optimizer and the cost function, with feed_dict passing the x's and y's data.", 'Modifying the weights and biases is part of optimizing the cost function in TensorFlow.', 'The chapter also mentions the confusion and complexity encountered when first learning about TensorFlow.']}, {'end': 789.279, 'start': 536.175, 'title': 'Tracking epoch loss and computing accuracy', 'summary': 'Discusses tracking epoch loss during training and computing accuracy using TensorFlow, emphasizing the importance of printing progress and evaluating model performance on test data.', 'duration': 253.104, 'highlights': ['We can track epoch loss during training and print progress by displaying the completed 
epoch count out of total epochs and the corresponding loss.', 'Computing accuracy involves comparing model predictions to actual labels and evaluating the correctness using TensorFlow, followed by printing the accuracy of the model on test data.', "Using tf.argmax to compare index values of the maximum values in arrays and computing accuracy as the mean of correctness provide insights into the model's performance.", "It's beneficial to print progress during training to understand the duration and usefulness of the process, as well as evaluating model accuracy on test data to assess performance."]}], 'duration': 344.089, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs445190.jpg', 'highlights': ['Modifying the weights and biases is part of optimizing the cost function in TensorFlow.', 'Computing accuracy involves comparing model predictions to actual labels and evaluating the correctness using TensorFlow, followed by printing the accuracy of the model on test data.', "Using tf.argmax to compare index values of the maximum values in arrays and computing accuracy as the mean of correctness provide insights into the model's performance.", 'We can track epoch loss during training and print progress by displaying the completed epoch count out of total epochs and the corresponding loss.', "The process involves running the optimizer and the cost function, with feed_dict passing the x's and y's data."]}, {'end': 1197.594, 'segs': [{'end': 883.13, 'src': 'embed', 'start': 789.279, 'weight': 0, 'content': [{'end': 790.88, 'text': 'And I have some white space.', 'start': 789.279, 'duration': 1.601}, {'end': 792.001, 'text': "I mean, it's decent.", 'start': 790.96, 'duration': 1.041}, {'end': 794.962, 'text': 'But this is 66 lines of code.', 'start': 792.461, 'duration': 2.501}, {'end': 796.923, 'text': "That's incredible.", 'start': 796.083, 'duration': 0.84}, {'end': 799.785, 'text': "And by the way, yeah, we're done.", 'start': 798.364, 'duration': 1.421}, {'end': 801.146, 'text': 'So now what we do.', 'start': 799.965, 'duration': 1.181}, {'end': 805.869, 'text': "Everyone's like, run it! We're just going to call train_neural_network.", 'start': 801.146, 'duration': 4.723}, {'end': 808.65, 'text': "Right, we pass x, that's it.", 'start': 807.509, 'duration': 1.141}, {'end': 814.133, 'text': "Done So now let's test this bad boy.", 'start': 809.65, 'duration': 4.483}, {'end': 823.879, 'text': "So we come over here, and that's just from before, we'll just up, yep, Python 3, deepnet.py.", 'start': 814.593, 'duration': 9.286}, {'end': 825.46, 'text': "Please don't give me an error.", 'start': 824.419, 'duration': 1.041}, {'end': 830.403, 'text': 'God, I knew we were gonna do this.', 'start': 828.242, 'duration': 2.161}, {'end': 834.826, 'text': "Okay, so layer one, we're missing one required positional argument.", 'start': 830.543, 'duration': 4.283}, {'end': 838.039, 'text': 'Add, matmul?', 'start': 836.656, 'duration': 1.383}, {'end': 849.861, 'text': 'I mean, what am I missing here? All right, so I paused it while I was trying to figure this out, and I figured it out.', 'start': 839.401, 'duration': 10.46}, {'end': 852.824, 'text': 'So I should have paid a little bit more attention.', 'start': 850.542, 'duration': 2.282}, {'end': 858.049, 'text': 'But anyway, add, missing one required positional argument, a y.', 'start': 853.024, 'duration': 5.025}, {'end': 862.833, 'text': 'So when you add, you have an x and a y, right? 
You add and then x, y.', 'start': 858.049, 'duration': 4.784}, {'end': 869.519, 'text': 'And instead what I had done, which I probably actually could have gotten away with after we showed the whole multiplication and all that.', 'start': 862.833, 'duration': 6.686}, {'end': 875.224, 'text': "But I'm doing tf.add this plus this, right?", 'start': 871.741, 'duration': 3.483}, {'end': 879.387, 'text': "Which, you know, kind of made sense if it wasn't within a tf.add.", 'start': 875.664, 'duration': 3.723}, {'end': 883.13, 'text': 'So actually, rather than this plus, you have a comma.', 'start': 879.407, 'duration': 3.723}], 'summary': 'The transcript involves debugging code, calling train_neural_network, and testing a deepnet.py file.', 'duration': 93.851, 'max_score': 789.279, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs789279.jpg'}, {'end': 1152.068, 'src': 'heatmap', 'start': 936.815, 'weight': 1, 'content': [{'end': 940.177, 'text': 'So up here.', 'start': 936.815, 'duration': 3.362}, {'end': 957.407, 'text': "I don't know about y'all, but I honestly already see a y.", 'start': 940.197, 'duration': 17.21}, {'end': 961.632, 'text': 'being defined.', 'start': 959.349, 'duration': 2.283}, {'end': 976.608, 'text': "but I wonder maybe, if what's happening is here, that it's getting angry, so why don't we say epoch x, epoch y?", 'start': 961.632, 'duration': 14.976}, {'end': 985.687, 'text': 'I wonder if this is screwing that up.', 'start': 983.343, 'duration': 2.344}, {'end': 989.152, 'text': "Because again, it doesn't work like Python works.", 'start': 986.608, 'duration': 2.544}, {'end': 994.479, 'text': "So you can't necessarily use bad code that you write in Python.", 'start': 989.672, 'duration': 4.807}, {'end': 998.966, 'text': "So epoch x, and then let's try this to be epoch y.", 'start': 995.401, 'duration': 3.565}, {'end': 1006.258, 'text': "Sorry for the errors, but I'll probably end up leaving them in because I think they're fairly useful.", 'start': 1001.072, 'duration': 5.186}, {'end': 1008.66, 'text': "Because it's easy to make stupid mistakes like this.", 'start': 1006.278, 'duration': 2.382}, {'end': 1009.962, 'text': 'The first one was a stupid mistake.', 'start': 1008.72, 'duration': 1.242}, {'end': 1013.325, 'text': "I'm guessing this will fix this one, but we'll see.", 'start': 1010.002, 'duration': 3.323}, {'end': 1016.128, 'text': 'I should stop surmising and just find out.', 'start': 1013.345, 'duration': 2.783}, {'end': 1018.831, 'text': 'Okay Okay.', 'start': 1018.551, 'duration': 0.28}, {'end': 1045.236, 'text': "Okay, so here we have an invalid argument, but it's not telling me where, which would be really useful.", 'start': 1033.865, 'duration': 11.371}, {'end': 1047.377, 'text': 'So clearly we got an empty.', 'start': 1045.236, 'duration': 2.141}, {'end': 1053.945, 'text': "something was empty that it didn't want to be empty.", 'start': 1047.377, 'duration': 6.568}, {'end': 1055.366, 'text': "ah, That's it.", 'start': 1053.945, 'duration': 1.421}, {'end': 1069.806, 'text': 'Okay, so, as painful as that is, or was, recall that these all basically need to be arrays here.', 'start': 1062.02, 'duration': 7.786}, {'end': 1079.973, 'text': 'So here, all these biases, here, here, here, here, and this one I actually did correctly, interestingly enough.', 'start': 1069.926, 'duration': 10.047}, {'end': 1088.095, 'text': 'Anyway, that one was a little harder to get to, but anyway, yeah, you get that first 
traceback, and then another exception occurs,', 'start': 1080.573, 'duration': 7.522}, {'end': 1089.635, 'text': "and that's where I got to read here.", 'start': 1088.095, 'duration': 1.54}, {'end': 1090.475, 'text': 'I was like reading this.', 'start': 1089.635, 'duration': 0.84}, {'end': 1094.136, 'text': "I was like what's wrong with this line?", 'start': 1090.475, 'duration': 3.661}, {'end': 1094.956, 'text': "But that's it.", 'start': 1094.136, 'duration': 0.82}, {'end': 1099.177, 'text': 'this needed to be, you know, square brackets.', 'start': 1094.956, 'duration': 4.221}, {'end': 1100.977, 'text': "Okay, so let's hit that again.", 'start': 1099.177, 'duration': 1.8}, {'end': 1109.319, 'text': "Oops What just happened? Okay Save that again, and let's try again.", 'start': 1100.997, 'duration': 8.322}, {'end': 1116.572, 'text': 'Oh Ah, yes, we left this commented out.', 'start': 1109.339, 'duration': 7.233}, {'end': 1119.332, 'text': 'At least we know we made it to that point.', 'start': 1117.692, 'duration': 1.64}, {'end': 1121.853, 'text': 'I shall try again.', 'start': 1121.113, 'duration': 0.74}, {'end': 1126.935, 'text': 'This is getting brutal.', 'start': 1125.474, 'duration': 1.461}, {'end': 1141.379, 'text': "Clearly, I'm just having a bad day. for epoch in range of how many epochs.", 'start': 1135.517, 'duration': 5.862}, {'end': 1141.939, 'text': 'All right.', 'start': 1141.659, 'duration': 0.28}, {'end': 1145.085, 'text': 'This is going to be the death of me.', 'start': 1144.005, 'duration': 1.08}, {'end': 1146.146, 'text': "Please don't give me an error.", 'start': 1145.165, 'duration': 0.981}, {'end': 1149.487, 'text': "Why don't I just jump out my window? Luckily, I'm on the first story.", 'start': 1146.886, 'duration': 2.601}, {'end': 1152.068, 'text': 'Here we go.', 'start': 1151.648, 'duration': 0.42}], 'summary': 'Struggling with coding errors, aiming for epoch x and y, facing challenges with arrays and exceptions.', 'duration': 190.12, 'max_score': 936.815, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs936815.jpg'}], 'start': 789.279, 'title': 'Debugging neural networks and softmax functions in Python', 'summary': "Covers debugging a neural network with 66 lines of code, resolving missing positional arguments, and discussing debugging the softmax function in Python. 
It highlights adapting code to Python's working style and addressing challenges for a massive improvement in the process.", 'chapters': [{'end': 917.269, 'start': 789.279, 'title': 'Debugging a neural network', 'summary': 'Follows the process of debugging a neural network, involving 66 lines of code and resolving missing positional arguments, with an insight into the usage of tf.add and alternative methods for addition in the context of a neural network.', 'duration': 127.99, 'highlights': ['The chapter explores the debugging process of a neural network, involving 66 lines of code, and the resolution of missing positional arguments, ultimately leading to successful testing.', 'The narrator encounters an error message regarding missing positional arguments while testing the neural network implementation.', 'The narrator discovers the necessity of using a comma for the add operation in the neural network implementation.']}, {'end': 1009.962, 'start': 917.269, 'title': 'Debugging softmax function in Python', 'summary': "Discusses debugging the softmax function in Python, highlighting the issue of referencing 'y' before its assignment and the need to adapt code to Python's working style.", 'duration': 92.693, 'highlights': ["Referencing 'y' before assignment in the softmax function.", "Adapting code to Python's working style is necessary for proper functionality.", 'The importance of recognizing and rectifying coding mistakes to ensure effective program execution.']}, {'end': 1197.594, 'start': 1010.002, 'title': 'Debugging challenges and improvement', 'summary': 'Describes the challenges faced while debugging and the need for improvement, including identifying empty arguments and correcting array-related errors, with a focus on achieving a massive improvement in the process.', 'duration': 187.592, 'highlights': ['Realization of the need for improvement and identification of empty arguments', 'Challenges faced in correcting array-related errors', 'Expressing frustration and humor in the debugging process']}], 'duration': 408.315, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs789279.jpg', 'highlights': ['The chapter explores the debugging process of a neural network, involving 66 lines of code, and the resolution of missing positional arguments, ultimately leading to successful testing.', "Adapting code to Python's working style is necessary for proper functionality.", 'The narrator encounters an error message regarding missing positional arguments while testing the neural network implementation.', 'The importance of recognizing and rectifying coding mistakes to ensure effective program execution.', "Referencing 'y' before assignment in the softmax function.", 'The narrator discovers the necessity of using a comma for the add operation in the neural network implementation.', 'Realization of the need for improvement and identification of empty arguments', 'Challenges faced in correcting array-related errors', 'Expressing frustration and humor in the debugging process']}, {'end': 1514.137, 'segs': [{'end': 1247.79, 'src': 'embed', 'start': 1218.023, 'weight': 2, 'content': [{'end': 1226.128, 'text': "uh, so we got almost 95 percent, and on the MNIST dataset that's actually considered like laughable.", 'start': 1218.023, 'duration': 8.105}, {'end': 1234.593, 'text': "but um, that's only considered laughable because this isn't quite the best model to be using on the MNIST dataset.", 'start': 1226.128, 'duration': 8.465}, {'end': 1244.98, 
'text': 'um, mostly because, like, consider again that all we did was feed through raw pixel values just one by one.', 'start': 1234.593, 'duration': 10.387}, {'end': 1247.79, 'text': "we didn't help it generalize.", 'start': 1244.98, 'duration': 2.81}], 'summary': 'Achieved almost 95% accuracy on the MNIST dataset, considered laughable for this model.', 'duration': 29.767, 'max_score': 1218.023, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs1218023.jpg'}, {'end': 1351.75, 'src': 'embed', 'start': 1284.975, 'weight': 0, 'content': [{'end': 1295.922, 'text': "And while 95% isn't like industry standard for that exact specific task, the point is we threw a pretty challenging task at the machine,", 'start': 1284.975, 'duration': 10.947}, {'end': 1302.246, 'text': 'just gave it the most basic form of data possible and it still came up with a pretty darn good model.', 'start': 1295.922, 'duration': 6.324}, {'end': 1308.05, 'text': "Like, I think that's just amazing, right? The accuracy could be better with a better model, and.", 'start': 1302.666, 'duration': 5.384}, {'end': 1316.312, 'text': "a convolutional neural network, maybe, and some more work, but honestly that's a really good score for the effort that we put into it,", 'start': 1308.59, 'duration': 7.722}, {'end': 1324.094, 'text': 'and I think that all that does is, while not being the greatest score, it does kind of show you the incredible value of a neural network,', 'start': 1316.312, 'duration': 7.782}, {'end': 1331.296, 'text': "or a deep neural network anyway, at just coming up with models, like that's pretty cool.", 'start': 1324.094, 'duration': 7.202}, {'end': 1333.498, 'text': "anyway. I think that's cool.", 'start': 1331.296, 'duration': 2.202}, {'end': 1340.943, 'text': "so not without errors, but I'll probably leave them in there, just simply because it seems like people don't really mind when I leave the errors in.", 'start': 1333.498, 'duration': 7.445}, {'end': 1341.963, 'text': 'and I think it actually helps.', 'start': 1340.943, 'duration': 1.02}, {'end': 1349.749, 'text': "because you know, working in this code is like confusing as it is, and I've probably done too much of this today.", 'start': 1341.963, 'duration': 7.786}, {'end': 1351.75, 'text': 'but working in the code is just confusing.', 'start': 1349.749, 'duration': 2.001}], 'summary': 'Machine achieved 95% accuracy with basic data, showcasing the value of neural networks.', 'duration': 66.775, 'max_score': 1284.975, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs1284975.jpg'}, {'end': 1394.983, 'src': 'embed', 'start': 1369.557, 'weight': 4, 'content': [{'end': 1374.519, 'text': "But at the end of the day, the biggest thing that you need to look for is the stuff like the line number, like I've seen line 728.", 'start': 1369.557, 'duration': 4.962}, {'end': 1377.599, 'text': "I'm like, I didn't write 728 lines.", 'start': 1374.519, 'duration': 3.08}, {'end': 1380.28, 'text': "So I'm just like scrolling through looking for my code.", 'start': 1377.619, 'duration': 2.661}, {'end': 1385.281, 'text': 'right and I was still able to find my code and figure out what the problem was because,', 'start': 1380.64, 'duration': 4.641}, {'end': 1389.842, 'text': "like some of this stuff is talking about vector shapes and all this, and clearly it's getting an empty shape somewhere,", 'start': 1385.281, 'duration': 4.561}, {'end': 1394.983, 'text': 'and 
that probably would have been useful-ish.', 'start': 1389.842, 'duration': 5.141}], 'summary': 'Despite a large number of lines, the narrator was able to find and fix the code issue.', 'duration': 25.426, 'max_score': 1369.557, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs1369557.jpg'}, {'end': 1514.137, 'src': 'embed', 'start': 1501.826, 'weight': 5, 'content': [{'end': 1508.872, 'text': "And then as time goes on, I'll probably add some more with convolutional neural networks and some of the other models as well.", 'start': 1501.826, 'duration': 7.046}, {'end': 1512.335, 'text': 'So if you have questions, comments, concerns, whatever, leave them below.', 'start': 1509.032, 'duration': 3.303}, {'end': 1514.137, 'text': 'Otherwise, thanks for watching and until next time.', 'start': 1512.535, 'duration': 1.602}], 'summary': 'Plans to add more models like convolutional neural networks in the future.', 'duration': 12.311, 'max_score': 1501.826, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs1501826.jpg'}], 'start': 1198.214, 'title': 'Neural network accuracy and error detection', 'summary': 'Covers training a model with 94.85% accuracy on the MNIST dataset, the potential for improvement with a convolutional neural network, and the process of encountering and dealing with errors in coding, emphasizing the importance of understanding specific errors to effectively debug.', 'chapters': [{'end': 1331.296, 'start': 1198.214, 'title': 'Neural network accuracy and model training', 'summary': 'Discusses training a model with 94.85% accuracy on the MNIST dataset, highlighting the limitations of a basic approach and the potential for improvement with a convolutional neural network.', 'duration': 133.082, 'highlights': ['The accuracy of the trained model is 94.85%, which is considered quite high for a basic approach.', 'On the MNIST dataset, 95% accuracy is considered laughable due to the limitations of the basic model used.', 'A convolutional neural network would provide a more logical approach and potentially improve accuracy on the MNIST dataset.']}, {'end': 1404.025, 'start': 1331.296, 'title': 'Finding errors in code', 'summary': 'Discusses the process of encountering and dealing with errors in coding, emphasizing the importance of locating and understanding specific errors to effectively debug, amidst the common occurrence of mistakes and confusion when working with code.', 'duration': 72.729, 'highlights': ["Encourages leaving the errors in the videos, as people don't seem to mind and it can be helpful amidst the confusing nature of coding.", 'Emphasizes the commonality of making mistakes and encountering errors while working with code.', 'Stresses the importance of locating and understanding specific errors in the code to effectively debug, illustrated through a personal experience of finding a specific line of code amidst confusion.']}, {'end': 1514.137, 'start': 1404.805, 'title': 'Neural network tutorial summary', 'summary': 'Discusses the importance of simplicity in neural network design, achieving 95% accuracy with 10 epochs, the next step of working with custom data, and inviting suggestions for future projects.', 'duration': 109.332, 'highlights': ['Achieving 95% accuracy with only 10 epochs, emphasizing the effectiveness of simplicity in neural network design', 'Next step involves working with custom data and acquiring suggestions for future projects', 'Plans to explore other neural network models like 
convolutional neural networks in future tutorials', 'Inviting audience to share suggestions and requests for future projects']}], 'duration': 315.923, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/PwAGxqrXSCs/pics/PwAGxqrXSCs1198214.jpg', 'highlights': ['A convolutional neural network would provide a more logical approach and potentially improve accuracy on the MNIST dataset.', 'The accuracy of the trained model is 94.85%, which is considered quite high for a basic approach.', 'On the MNIST dataset, 95% accuracy is considered laughable due to the limitations of the basic model used.', 'Achieving 95% accuracy with only 10 epochs, emphasizing the effectiveness of simplicity in neural network design.', 'Stresses the importance of locating and understanding specific errors in the code to effectively debug, illustrated through a personal experience of finding a specific line of code amidst confusion.', 'Plans to explore other neural network models like convolutional neural networks in future tutorials.', "Encourages leaving the errors in the videos, as people don't seem to mind and it can be helpful amidst the confusing nature of coding."]}], 'highlights': ['The accuracy of the trained model is 94.85%, which is considered quite high for a basic approach.', 'The suggestion to begin with 10 epochs for the optimization process, with the consideration of slower computers when choosing the number of epochs.', 'The process involves running through epochs, which consist of cycles of feed forward and backpropagation.', 'The creation of a function to train the neural network model using input data is explained, with the prediction being equal to neural_network_model(x).', 'The cost function is equal to tf.reduce_mean of tf.nn.softmax_cross_entropy_with_logits on the prediction and y, used for calculating the difference between the prediction and the known label in the one hot format.', 'The Adam optimizer is used to minimize the cost in the neural network, synonymous with stochastic gradient descent and AdaGrad.', 'The default learning rate of the Adam optimizer is 0.001, considered to be a suitable value for the optimization process.', 'The iteration involves chunking through the dataset using mnist.train.next_batch(batch_size).', 'Modifying the weights and biases is part of optimizing the cost function in TensorFlow.', 'Computing accuracy involves comparing model predictions to actual labels and evaluating the correctness using TensorFlow, followed by printing the accuracy of the model on test data.', "Using tf.argmax to compare index values of the maximum values in arrays and computing accuracy as the mean of correctness provide insights into the model's performance.", 'We can track epoch loss during training and print progress by displaying the completed epoch count out of total epochs and the corresponding loss.', "The process involves running the optimizer and the cost function, with feed_dict passing the x's and y's data.", 'The chapter explores the debugging process of a neural network, involving 66 lines of code, and the resolution of missing positional arguments, ultimately leading to successful testing.', "Adapting code to Python's working style is necessary for proper functionality.", 'The narrator encounters an error message regarding missing positional arguments while testing the neural network implementation.', 'The importance of recognizing and rectifying coding mistakes to ensure effective program execution.', "Referencing 'y' before assignment in the softmax 
function.", 'The narrator discovers the necessity of using a comma for the add operation in the neural network implementation.', 'Realization of the need for improvement and identification of empty arguments', 'Challenges faced in correcting array-related errors', 'Expressing frustration and humor in the debugging process', 'A convolutional neural network would provide a more logical approach and potentially improve accuracy on the mnist dataset.', 'The mnist dataset considers 95% accuracy laughable due to the limitations of the basic model used.', 'Achieving 95% accuracy with only 10 epics, emphasizing the effectiveness of simplicity in neural network design.', 'Stresses the importance of locating and understanding specific errors in the code to effectively debug, illustrated through a personal experience of finding a specific line of code amidst confusion.', 'Plans to explore other neural network models like convolutional neural networks in future tutorials.', "Encourages leaving errors in code as it seems people don't mind and it can be helpful amidst the confusing nature of coding."]}