title
10.12: Neural Networks: Feedforward Algorithm Part 1 - The Nature of Code

description
In this video, I tackle a fundamental algorithm for neural networks: Feedforward. I discuss how the algorithm works in a Multi-layered Perceptron and connect the algorithm with the matrix math from previous videos.
Next Part: https://youtu.be/HuZbYEn8AvY
This video is part of Chapter 10 of The Nature of Code (http://natureofcode.com/book/chapter-10-neural-networks/)
This video is also part of session 4 of my Spring 2017 ITP "Intelligence and Learning" course (https://github.com/shiffman/NOC-S17-2-Intelligence-Learning/tree/master/week4-neural-networks)
Support this channel on Patreon: https://patreon.com/codingtrain
To buy Coding Train merchandise: https://www.designbyhumans.com/shop/codingtrain/
To donate to the Processing Foundation: https://processingfoundation.org/
Send me your questions and coding challenges!: https://github.com/CodingTrain/Rainbow-Topics
Contact: Twitter: https://twitter.com/shiffman
The Coding Train website: http://thecodingtrain.com/
Links discussed in this video:
The Coding Train Amazon Shop: https://www.amazon.com/shop/thecodingtrain
Sigmoid Function on Wikipedia: https://en.wikipedia.org/wiki/Sigmoid_function
Videos mentioned in this video:
3Blue1Brown Neural Networks playlist: https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
Source Code for all the Video Lessons: https://github.com/CodingTrain/Rainbow-Code
p5.js: https://p5js.org/
Processing: https://processing.org
The Nature of Code playlist: https://www.youtube.com/user/shiffman/playlists?view_as=subscriber&shelf_id=6&view=50&sort=dd
For More Coding Challenges: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6ZiZxtDDRCi6uhfTH4FilpH
For More Intelligence and Learning: https://www.youtube.com/playlist?list=PLRqwX-V7Uu6YJ3XfHhT2Mm4Y5I99nrIKX
📄 Code of Conduct: https://github.com/CodingTrain/Code-of-Conduct

detail
{'title': '10.12: Neural Networks: Feedforward Algorithm Part 1 - The Nature of Code', 'heatmap': [{'end': 1020.652, 'start': 961.101, 'weight': 0.737}, {'end': 1149.487, 'start': 1080.486, 'weight': 0.963}, {'end': 1315.143, 'start': 1258.386, 'weight': 0.741}], 'summary': 'Series on neural networks covers the feed-forward algorithm, including 2-layer network basics with 3 inputs, 2 hidden nodes, and 1 output node, and explains the weighted sum, biases, activation functions, and matrix math for implementation.', 'chapters': [{'end': 373.432, 'segs': [{'end': 78.126, 'src': 'embed', 'start': 13.9, 'weight': 0, 'content': [{'end': 18.165, 'text': "Building this little matrix library that's going to allow us to do some math stuff that we're going to need.", 'start': 13.9, 'duration': 4.265}, {'end': 25.431, 'text': "when we implement the code for this particular video, where I'm going to describe the feed-forward algorithm of neural network.", 'start': 18.806, 'duration': 6.625}, {'end': 32.777, 'text': "Now, I want to give thanks to two sources that I've used primarily in the studying and preparation for this video.", 'start': 25.591, 'duration': 7.186}, {'end': 36.139, 'text': 'Number one is Make Your Own Neural Network by Tariq Rashid.', 'start': 33.077, 'duration': 3.062}, {'end': 41.103, 'text': "You'll find a link to this book at the Coding Train Amazon shop in this video's description.", 'start': 36.36, 'duration': 4.743}, {'end': 47.248, 'text': 'And also, I want to thank and reference the 3Blue1Brown channel, which has a playlist.', 'start': 41.564, 'duration': 5.684}, {'end': 54.215, 'text': "I forget what it's called, but there's a video called What is a Neural Network? At the time of this recording, there's about four videos.", 'start': 48.789, 'duration': 5.426}, {'end': 55.556, 'text': 'Those are amazing.', 'start': 54.555, 'duration': 1.001}, {'end': 58.259, 'text': "They're animated, they're thoughtful, they're careful.", 'start': 55.936, 'duration': 2.323}, {'end': 68.379, 'text': "They're really for understanding and having an intuition for how a neural network works and for seeing all the pieces of an algorithm.", 'start': 59.019, 'duration': 9.36}, {'end': 69.48, 'text': 'I highly recommend those.', 'start': 68.379, 'duration': 1.101}, {'end': 78.126, 'text': "What I'm really attempting to do in my videos is sort of figure out a way to implement a lot of the stuff that's in this book and those videos in code.", 'start': 69.78, 'duration': 8.346}], 'summary': 'Building matrix library for neural network math, inspired by make your own neural network book and 3blue1brown channel.', 'duration': 64.226, 'max_score': 13.9, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I13900.jpg'}, {'end': 122.955, 'src': 'embed', 'start': 91.255, 'weight': 5, 'content': [{'end': 98.028, 'text': 'Okay, so where I last left off before I started working on matrix math stuff, I had built this simple example of a perceptron.', 'start': 91.255, 'duration': 6.773}, {'end': 104.765, 'text': 'And a perceptron, the idea of a perceptron as a single neuron, that receives inputs.', 'start': 98.429, 'duration': 6.336}, {'end': 108.907, 'text': 'So we might have inputs, something like x1 and x2.', 'start': 105.085, 'duration': 3.822}, {'end': 113.63, 'text': 'And those two values go into this perceptron.', 'start': 109.527, 'duration': 4.103}, {'end': 122.955, 'text': 'They are processed and then some output is generated, often referred to as y.', 
'start': 114.11, 'duration': 8.845}], 'summary': 'Transcript discusses building a simple perceptron model with inputs x1 and x2, yielding an output y.', 'duration': 31.7, 'max_score': 91.255, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I91255.jpg'}, {'end': 177.608, 'src': 'embed', 'start': 148.755, 'weight': 6, 'content': [{'end': 150.476, 'text': 'or we could also think of that as zero or one.', 'start': 148.755, 'duration': 1.721}, {'end': 163.042, 'text': 'And we want this perceptron system, this system, to output a one for y only if both the inputs are true.', 'start': 152.378, 'duration': 10.664}, {'end': 168.544, 'text': 'So we could say if both the inputs are true, then we want to get a one.', 'start': 163.703, 'duration': 4.841}, {'end': 171.926, 'text': 'If the inputs are true and false, we should get a zero.', 'start': 169.425, 'duration': 2.501}, {'end': 173.946, 'text': "If they're false and true, we should also get a zero.", 'start': 172.226, 'duration': 1.72}, {'end': 176.207, 'text': "If they're false and false, we should also get a zero.", 'start': 174.046, 'duration': 2.161}, {'end': 177.608, 'text': 'So this is the kind of system.', 'start': 176.407, 'duration': 1.201}], 'summary': 'Perceptron system outputs 1 only if both inputs are true, else 0.', 'duration': 28.853, 'max_score': 148.755, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I148755.jpg'}, {'end': 260.12, 'src': 'embed', 'start': 230.311, 'weight': 3, 'content': [{'end': 233.213, 'text': 'So this, for exclusive OR, will actually output a zero.', 'start': 230.311, 'duration': 2.902}, {'end': 239.114, 'text': 'And if you, I talk about this more in a previous video, this is not a linearly separable problem.', 'start': 233.693, 'duration': 5.421}, {'end': 245.696, 'text': "We can't graph the solution space and draw a line right in the middle and say all the answers for true are on one side,", 'start': 239.375, 'duration': 6.321}, {'end': 246.957, 'text': 'all the answers for false are on the other.', 'start': 245.696, 'duration': 1.261}, {'end': 251.858, 'text': 'So this is where this idea of a multi-layered perceptron comes in.', 'start': 247.417, 'duration': 4.441}, {'end': 260.12, 'text': 'Most problems, as I get further and further through this playlist of doing more and more things, cannot be solved with a single neuron,', 'start': 252.618, 'duration': 7.502}], 'summary': 'Multi-layered perceptron is needed for non-linearly separable problems.', 'duration': 29.809, 'max_score': 230.311, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I230311.jpg'}, {'end': 312.023, 'src': 'embed', 'start': 278.091, 'weight': 4, 'content': [{'end': 281.774, 'text': "now we have what's called a multi-layered perceptron.", 'start': 278.091, 'duration': 3.683}, {'end': 284.076, 'text': 'We have this output layer.', 'start': 282.094, 'duration': 1.982}, {'end': 290.221, 'text': 'This you could consider an input layer, although the input functions differently than these other two.', 'start': 285.037, 'duration': 5.184}, {'end': 291.943, 'text': 'This is a layer, this is a layer.', 'start': 290.782, 'duration': 1.161}, {'end': 292.804, 'text': "And here's the thing.", 'start': 292.183, 'duration': 0.621}, {'end': 294.725, 'text': 'This is called a hidden layer.', 'start': 293.284, 'duration': 1.441}, {'end': 298.889, 'text': "The reason why this, so I'm going to 
say hidden is right here.', 'start': 295.006, 'duration': 3.883}, {'end': 300.891, 'text': 'This is input.', 'start': 300.01, 'duration': 0.881}, {'end': 304.218, 'text': 'right here, and this is output.', 'start': 302.256, 'duration': 1.962}, {'end': 312.023, 'text': 'Now, as I get further and further through many videos that I hope to make and examples I hope to make,', 'start': 306.519, 'duration': 5.504}], 'summary': 'The transcript introduces a multi-layered perceptron with an input, hidden, and output layer.', 'duration': 33.932, 'max_score': 278.091, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I278091.jpg'}], 'start': 0.563, 'title': 'Neural network algorithms', 'summary': "Covers the preparation for the feed-forward algorithm, acknowledges sources, and states the intention to implement the concepts in code. It discusses the perceptron solving the logical 'and' problem, and explains how a multi-layered perceptron solves non-linearly separable problems like XOR.", 'chapters': [{'end': 90.775, 'start': 0.563, 'title': 'Neural network algorithm overview', 'summary': 'Covers the preparation for implementing the feed-forward algorithm of a neural network, giving thanks to the sources Make Your Own Neural Network by Tariq Rashid and the 3Blue1Brown channel, while expressing the intention to implement the concepts from these sources into code.', 'duration': 90.212, 'highlights': ['The chapter discusses the preparation for implementing the feed-forward algorithm of a neural network, with a focus on building a matrix library to facilitate mathematical operations necessary for the implementation.', 'The chapter acknowledges and references two primary sources, Make Your Own Neural Network by Tariq Rashid and the 3Blue1Brown channel, which are aimed at understanding the functioning and components of a neural network.', 'The chapter emphasizes the intention to implement the concepts from the aforementioned sources into code, aiming to translate the content from the book and videos into practical code for the neural network algorithm.', "The chapter mentions the author's intention to start working on the code in the next video and expresses the desire to talk through the algorithm in their own words, as it applies to their current progress in the playlist."]}, {'end': 200.997, 'start': 91.255, 'title': 'Perceptron and logical AND', 'summary': "Discusses the concept of a perceptron as a single neuron receiving inputs, aiming to solve the logical 'and' problem with a simple scenario of two inputs and a Boolean output.", 'duration': 109.742, 'highlights': ['The chapter discusses the concept of a perceptron as a single neuron receiving inputs. It explains the idea of a perceptron receiving inputs and generating an output, setting the stage for further discussion.', "Aiming to solve the logical 'and' problem with a simple scenario of two inputs and a Boolean output, the chapter presents the goal of the perceptron system to output a one only if both inputs are true, providing clear criteria for the system's functionality."]}, {'end': 373.432, 'start': 201.357, 'title': 'Understanding multi-layered perceptron', 'summary': 'Explains the concept of XOR and the need for a multi-layered perceptron to solve non-linearly separable problems like XOR, with the addition of a hidden layer and an output layer in the neural network.', 'duration': 172.075, 'highlights': ['The chapter explains the concept of XOR and the need for a multi-layered perceptron to solve non-linearly separable problems like 
XOR.', 'The addition of a hidden layer and an output layer in the neural network to address complex problems.']}], 'duration': 372.869, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I563.jpg', 'highlights': ['The chapter emphasizes the intention to implement the concepts from the sources into practical code for the neural network algorithm.', 'The chapter discusses the preparation for implementing the feed-forward algorithm of a neural network, with a focus on building a matrix library for mathematical operations.', 'The chapter acknowledges and references two primary sources, Make Your Own Neural Network by Tariq Rashid and the 3Blue1Brown channel, aimed at understanding the functioning and components of a neural network.', 'The chapter explains the concept of XOR and the need for a multi-layered perceptron to solve non-linearly separable problems like XOR.', 'Introduction of a hidden layer and an output layer in the neural network to address complex problems.', 'The chapter discusses the concept of a perceptron as a single neuron receiving inputs and generating an output, setting the stage for further discussion.', "Aiming to solve the logical 'and' problem with a simple scenario of two inputs and a Boolean output, the chapter presents the goal of the perceptron system to output a one only if both inputs are true, providing clear criteria for the system's functionality."]}, {'end': 843.738, 'segs': [{'end': 450.668, 'src': 'embed', 'start': 395.317, 'weight': 2, 'content': [{'end': 402.911, 'text': "So I'm going to add that, and that will also go into this hidden neuron and this hidden neuron.", 'start': 395.317, 'duration': 7.594}, {'end': 410.874, 'text': 'So now you can see we have three inputs, two hidden nodes, and one output node.', 'start': 403.271, 'duration': 7.603}, {'end': 413.135, 'text': 'And all of this is totally flexible.', 'start': 411.255, 'duration': 1.88}, {'end': 422.98, 'text': "If you've ever looked at that classic Hello World machine learning problem, where you have this data set of handwritten images,", 'start': 414.756, 'duration': 8.224}, {'end': 424.261, 'text': 'often the inputs are 784 for 784 pixels.', 'start': 422.98, 'duration': 1.281}, {'end': 433.604, 'text': "And the outputs, they're 10 outputs, because you have a probability for whether it's a 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9.", 'start': 427.242, 'duration': 6.362}, {'end': 440.625, 'text': 'So the design of this, how many inputs, how many outputs, how many hidden nodes, how many hidden layers, this is all food for thought.', 'start': 433.604, 'duration': 7.021}, {'end': 444.206, 'text': 'But I just want to look at this very basic 2.', 'start': 440.986, 'duration': 3.22}, {'end': 445.047, 'text': "Maybe it's a 3, I think.", 'start': 444.206, 'duration': 0.841}, {'end': 445.627, 'text': "It's a 2.", 'start': 445.167, 'duration': 0.46}, {'end': 446.067, 'text': "It's a 2.", 'start': 445.627, 'duration': 0.44}, {'end': 447.387, 'text': '2 OK.', 'start': 446.067, 'duration': 1.32}, {'end': 448.567, 'text': 'OK OK.', 'start': 447.467, 'duration': 1.1}, {'end': 449.328, 'text': "It's a 2.", 'start': 449.128, 'duration': 0.2}, {'end': 450.668, 'text': '2-layer network.', 'start': 449.328, 'duration': 1.34}], 'summary': 'Neural network with 3 inputs, 2 hidden nodes, and 
1 output node. Flexible design for various inputs and outputs.', 'duration': 55.351, 'max_score': 395.317, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I395317.jpg'}, {'end': 827.852, 'src': 'embed', 'start': 802.585, 'weight': 0, 'content': [{'end': 808.967, 'text': "Because the way that the feedforward algorithm works, and mind you, there's a lot more to the feedforward algorithm that I need to talk about.", 'start': 802.585, 'duration': 6.382}, {'end': 812.188, 'text': "I haven't talked about the bias yet or the activation yet.", 'start': 808.987, 'duration': 3.201}, {'end': 812.648, 'text': "There's more.", 'start': 812.208, 'duration': 0.44}, {'end': 817.769, 'text': 'But I just want to get now to this primary understanding of the inputs come in.', 'start': 812.688, 'duration': 5.081}, {'end': 822.991, 'text': 'We take a weighted sum of all the connections between the inputs and the next layer.', 'start': 818.129, 'duration': 4.862}, {'end': 827.852, 'text': 'And that can be done in a single operation.', 'start': 824.191, 'duration': 3.661}], 'summary': 'The feedforward algorithm involves taking a weighted sum of inputs in a single operation.', 'duration': 25.267, 'max_score': 802.585, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I802585.jpg'}], 'start': 375.091, 'title': 'Neural network fundamentals', 'summary': 'Covers the feed-forward algorithm basics, focusing on a 2-layer network with 3 inputs, 2 hidden nodes, and 1 output node, and discusses the concept of weighted sum in neural networks, emphasizing its relevance and usage in deep learning implementations.', 'chapters': [{'end': 450.668, 'start': 375.091, 'title': 'Feed-forward algorithm basics', 'summary': 'Discusses the feed-forward algorithm, focusing on the flow of data in a 2-layer network with 3 inputs, 2 hidden nodes, and 1 output node, while highlighting the flexibility of input and output configurations in machine learning.', 'duration': 75.577, 'highlights': ['Explaining the structure of a 2-layer network with 3 inputs, 2 hidden nodes, and 1 output node, emphasizing its flexibility in machine learning configurations.', 'Highlighting the classic Hello World machine learning problem with 784 input pixels and 10 outputs for probabilities of digits 0-9.', 'Emphasizing the flexibility in designing the network, including the number of inputs, outputs, hidden nodes, and hidden layers.']}, {'end': 843.738, 'start': 453.487, 'title': 'Neural network weighted sum', 'summary': 'Explains the concept of weighted sum in neural networks, where the connections between inputs and hidden nodes are represented by a weight matrix and the feedforward algorithm utilizes matrix math for efficient computation, demonstrating its relevance and usage in deep learning implementations.', 'duration': 390.251, 'highlights': ['The connections between inputs and hidden nodes are represented by a weight matrix, which can be expressed as a matrix of weights with rows representing inputs and columns representing hidden nodes. 
', 'The feedforward algorithm in neural networks utilizes matrix math for efficiently computing the weighted sum of connections between inputs and hidden nodes, highlighting the relevance and usage of matrix math in deep learning implementations.', 'The weighted sum of connections between inputs and hidden nodes is a fundamental operation in the feedforward algorithm of neural networks, enabling efficient computation in a single operation by storing weights and inputs in matrices.']}], 'duration': 468.647, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I375091.jpg', 'highlights': ['The feedforward algorithm in neural networks utilizes matrix math for efficiently computing the weighted sum of connections between inputs and hidden nodes, highlighting the relevance and usage of matrix math in deep learning implementations.', 'The weighted sum of connections between inputs and hidden nodes is a fundamental operation in the feedforward algorithm of neural networks, enabling efficient computation in a single operation by storing weights and inputs in matrices.', 'Explaining the structure of a 2-layer network with 3 inputs, 2 hidden nodes, and 1 output node, emphasizing its flexibility in machine learning configurations.', 'Highlighting the classic Hello World machine learning problem with 784 input pixels and 10 outputs for probabilities of digits 0-9.']}, {'end': 1042.375, 'segs': [{'end': 931.692, 'src': 'embed', 'start': 867.689, 'weight': 0, 'content': [{'end': 874.853, 'text': 'The inputs come in, the weighted sums get added all together inside these hidden nodes.', 'start': 867.689, 'duration': 7.164}, {'end': 877.514, 'text': "Now there are two big components that I've missed.", 'start': 874.893, 'duration': 2.621}, {'end': 878.494, 'text': 'Let me just write this over here.', 'start': 877.554, 'duration': 0.94}, {'end': 879.975, 'text': 'One is bias.', 'start': 879.094, 'duration': 0.881}, {'end': 883.95, 'text': 'And 2 is an activation function.', 'start': 881.167, 'duration': 2.783}, {'end': 886.834, 'text': "I'm kind of not sure which order to talk about these things.", 'start': 884.571, 'duration': 2.263}, {'end': 892.681, 'text': 'Before I get to the bias and the activation, what I want to do is rewrite this as a smaller formula.', 'start': 887.575, 'duration': 5.106}, {'end': 897.266, 'text': 'So I want to consider the weight matrix, for example, just as the capital letter W.', 'start': 892.721, 'duration': 4.545}, {'end': 901.691, 'text': 'And I can think of it as a matrix of i rows and j columns.', 'start': 898.047, 'duration': 3.644}, {'end': 907.517, 'text': "i and j are kind of terrible because they look so similar, but the weight matrix of i rows and j columns.", 'start': 902.432, 'duration': 5.085}, {'end': 911.461, 'text': "Now, we're taking the matrix product, which 
I'm just going to use a dot here.", 'start': 907.777, 'duration': 3.684}, {'end': 912.562, 'text': "I think that's going to be fine.", 'start': 911.541, 'duration': 1.021}, {'end': 915.986, 'text': 'The matrix product between this and the inputs.', 'start': 912.882, 'duration': 3.104}, {'end': 920.07, 'text': "Now, the inputs is a matrix, but it's one column, but i rows.", 'start': 916.426, 'duration': 3.644}, {'end': 925.245, 'text': "And the point of the reason why I'm doing this is because I'm trying to get the outputs.", 'start': 921.601, 'duration': 3.644}, {'end': 928.869, 'text': 'What I want is, I want to know what number to send out of here.', 'start': 925.545, 'duration': 3.324}, {'end': 930.33, 'text': "The data's flowing in.", 'start': 929.229, 'duration': 1.101}, {'end': 931.692, 'text': 'We get this weighted sum.', 'start': 930.37, 'duration': 1.322}], 'summary': 'Neural network components include inputs, weighted sums, bias, activation function, and matrix operations.', 'duration': 64.003, 'max_score': 867.689, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I867689.jpg'}, {'end': 1020.652, 'src': 'heatmap', 'start': 956.374, 'weight': 4, 'content': [{'end': 958.817, 'text': 'So whatever comes out of the hidden goes in here into the output.', 'start': 956.374, 'duration': 2.443}, {'end': 965.965, 'text': 'Hidden i equals the matrix product between the weight matrix and the inputs.', 'start': 961.101, 'duration': 4.864}, {'end': 966.926, 'text': "But there's more.", 'start': 966.165, 'duration': 0.761}, {'end': 971.81, 'text': "So what are the things that I'm forgetting? And I can actually fold these two things in right in here.", 'start': 967.706, 'duration': 4.104}, {'end': 973.971, 'text': "The things that I'm forgetting are, one, the bias.", 'start': 972.09, 'duration': 1.881}, {'end': 978.515, 'text': 'In the previous videos, where I went through the perceptron, for example, remember,', 'start': 974.472, 'duration': 4.043}, {'end': 982.418, 'text': 'I was trying to find this line and what points would be above it or below it.', 'start': 978.515, 'duration': 3.903}, {'end': 986.141, 'text': "And I've got to really deal with the problem that all the inputs could be 0.", 'start': 982.659, 'duration': 3.482}, {'end': 991.514, 'text': 'If all the inputs are 0, then the weighted sum is always going to be zero.', 'start': 986.141, 'duration': 5.373}, {'end': 993.276, 'text': "That can't be right.", 'start': 992.355, 'duration': 0.921}, {'end': 994.437, 'text': 'So we need this bias.', 'start': 993.476, 'duration': 0.961}, {'end': 999.021, 'text': 'We sometimes need to make it easier or harder for it to, so to speak, fire.', 'start': 994.457, 'duration': 4.564}, {'end': 1002.684, 'text': 'We want to bias the output in a given direction.', 'start': 999.281, 'duration': 3.403}, {'end': 1009.951, 'text': 'So one thing I would write here is I would also say plus that bias.', 'start': 1003.245, 'duration': 6.706}, {'end': 1013.194, 'text': 'And that is a single column vector as well.', 'start': 1010.852, 'duration': 2.342}, {'end': 1020.652, 'text': "So really, if I'm down here again, and boy am I running out of space, but that's okay.", 'start': 1013.674, 'duration': 6.978}], 'summary': 'Explains the importance of bias in matrix product for better model output.', 'duration': 38.063, 'max_score': 956.374, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I956374.jpg'}], 
'start': 844.818, 'title': 'Neural network operations', 'summary': 'Explains the feed forward algorithm for a neural network, focusing on the matrix product, weighted sums, bias, and activation function. it also explores the concept of hidden layers and biases, discussing the flow of numbers through the hidden layer, the introduction of biases to prevent all inputs from being zero, and the matrix product between weight matrix and inputs.', 'chapters': [{'end': 931.692, 'start': 844.818, 'title': 'Feed forward algorithm - matrix product', 'summary': 'Explains the feed forward algorithm for a neural network, focusing on the matrix product, weighted sums, bias, and activation function.', 'duration': 86.874, 'highlights': ['The inputs come in, the weighted sums get added all together inside these hidden nodes. Describes the process of adding weighted sums in hidden nodes.', "There are two big components that I've missed. One is bias. And 2 is an activation function. Identifies the two important components, bias and activation function, in the feed forward algorithm.", "The weight matrix of i rows and j columns. Now, we're taking the matrix product, which I'm just going to use a dot here. Explains the weight matrix and the matrix product using the dot symbol.", "The data's flowing in. We get this weighted sum. Emphasizes the process of obtaining the weighted sum from the flowing data."]}, {'end': 1042.375, 'start': 932.072, 'title': 'Neural network hidden layer and bias', 'summary': 'Explores the concept of hidden layers and biases in a neural network, discussing the flow of numbers through the hidden layer, the introduction of biases to prevent all inputs from being zero, and the matrix product between weight matrix and inputs.', 'duration': 110.303, 'highlights': ['The outputs of the hidden layer are the inputs to the output, involving a matrix product between the weight matrix and the inputs.', 'The introduction of biases is essential to prevent all inputs from resulting in a weighted sum of zero, thus biasing the output in a given direction.', 'The need for biases arises from the problem that all inputs could be 0, resulting in a weighted sum of zero, and the bias is represented as a single column vector.']}], 'duration': 197.557, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I844818.jpg', 'highlights': ['Describes the process of adding weighted sums in hidden nodes.', 'Identifies the two important components, bias and activation function, in the feed forward algorithm.', 'Explains the weight matrix and the matrix product using the dot symbol.', 'Emphasizes the process of obtaining the weighted sum from the flowing data.', 'The outputs of the hidden layer are the inputs to the output, involving a matrix product between the weight matrix and the inputs.', 'The introduction of biases is essential to prevent all inputs from resulting in a weighted sum of zero, thus biasing the output in a given direction.', 'The need for biases arises from the problem that all inputs could be 0, resulting in a weighted sum of zero, and the bias is represented as a single column vector.']}, {'end': 1236.898, 'segs': [{'end': 1079.563, 'src': 'embed', 'start': 1042.555, 'weight': 0, 'content': [{'end': 1053.521, 'text': "The whole point of this learning system that we're going to create is to figure out how to tune all the values of all these weights and biases so that the outputs match up with what we think they should be.", 'start': 1042.555, 
'duration': 10.966}, {'end': 1058.805, 'text': 'The system needs to somehow adapt and learn and tune all those values to perform some sort of task.', 'start': 1053.842, 'duration': 4.963}, {'end': 1062.768, 'text': 'And in a sense, this is really just one big function.', 'start': 1060.105, 'duration': 2.663}, {'end': 1065.57, 'text': "That's why a neural network is something called a universal function approximator.", 'start': 1062.788, 'duration': 2.782}, {'end': 1069.254, 'text': "It's just a function that receives inputs and generates an output.", 'start': 1065.75, 'duration': 3.504}, {'end': 1070.855, 'text': "That's a function.", 'start': 1070.174, 'duration': 0.681}, {'end': 1075.239, 'text': 'And we have to, in theory, like if we have enough of these hidden layers and hidden nodes,', 'start': 1070.995, 'duration': 4.244}, {'end': 1078.281, 'text': "there's no inputs we couldn't match with some given set of outputs.", 'start': 1075.239, 'duration': 3.042}, {'end': 1079.563, 'text': "So anyway, we're going to get to all that.", 'start': 1078.301, 'duration': 1.262}], 'summary': 'Neural network tunes weights and biases to match outputs, functioning as a universal approximator.', 'duration': 37.008, 'max_score': 1042.555, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1042555.jpg'}, {'end': 1168.81, 'src': 'heatmap', 'start': 1080.486, 'weight': 4, 'content': [{'end': 1082.247, 'text': "But I'm back here, so I need to add the bias in.", 'start': 1080.486, 'duration': 1.761}, {'end': 1083.388, 'text': "And then there's something else.", 'start': 1082.307, 'duration': 1.081}, {'end': 1087.631, 'text': 'You might remember from the simple perceptron example that we had this activation function.', 'start': 1083.808, 'duration': 3.823}, {'end': 1094.796, 'text': 'Whatever this weighted sum plus the bias would be, if it was a positive number, we would turn that number into plus one.', 'start': 1087.891, 'duration': 6.905}, {'end': 1098.099, 'text': 'If it was a negative number, we would turn that number into negative one.', 'start': 1095.137, 'duration': 2.962}, {'end': 1102.88, 'text': "And this is something that's very typical of neural network based systems.", 'start': 1098.839, 'duration': 4.041}, {'end': 1107.602, 'text': 'Whatever these weighted sums come in as, we want to like squash them into some known range.', 'start': 1103.16, 'duration': 4.442}, {'end': 1111.823, 'text': 'And there are a variety of different mathematical functions that can do this.', 'start': 1107.982, 'duration': 3.841}, {'end': 1117.865, 'text': 'And while this is not typically the sort of latest and greatest and most cutting edge activation function,', 'start': 1112.323, 'duration': 5.542}, {'end': 1125.727, 'text': 'the function that we will find in a lot of textbooks and early implementations of neural networks is something called a sigmoid.', 'start': 1117.865, 'duration': 7.862}, {'end': 1130.786, 'text': 'And sigmoid is a function that actually looks like this.', 'start': 1127.284, 'duration': 3.502}, {'end': 1139.371, 'text': 'f of x equals 1 divided by 1 plus e to the negative x.', 'start': 1132.367, 'duration': 7.004}, {'end': 1140.951, 'text': "I'm not 100% sure I got that right.", 'start': 1139.371, 'duration': 1.58}, {'end': 1143.733, 'text': "Let's go look at the Wikipedia page for the sigmoid function.", 'start': 1141.292, 'duration': 2.441}, {'end': 1149.487, 'text': 'OK, so here we can see I did get the correct formula for the sigmoid 
function.', 'start': 1145.106, 'duration': 4.381}, {'end': 1153.927, 'text': 'What is this number e? e is called the natural number.', 'start': 1150.027, 'duration': 3.9}, {'end': 1156.108, 'text': "It's the base for the natural logarithm.", 'start': 1154.007, 'duration': 2.101}, {'end': 1158.308, 'text': "It's like 2.71 something or other.", 'start': 1156.168, 'duration': 2.14}, {'end': 1161.469, 'text': "It's one of these magic numbers, Euler's number.", 'start': 1158.328, 'duration': 3.141}, {'end': 1162.789, 'text': 'You can read up about it.', 'start': 1161.509, 'duration': 1.28}, {'end': 1168.81, 'text': 'And to be honest, the sigmoid function is barely used anymore in modern deep learning research.', 'start': 1162.809, 'duration': 6.001}], 'summary': 'Neural network activation functions include sigmoid, which is rarely used in modern deep learning research.', 'duration': 50.945, 'max_score': 1080.486, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1080486.jpg'}, {'end': 1218.675, 'src': 'embed', 'start': 1189.178, 'weight': 3, 'content': [{'end': 1190.259, 'text': 'It squashes it.', 'start': 1189.178, 'duration': 1.081}, {'end': 1193.46, 'text': 'Higher numbers are going to be much closer to 1.', 'start': 1190.959, 'duration': 2.501}, {'end': 1195.862, 'text': 'Lower numbers are going to be much closer to 0.', 'start': 1193.46, 'duration': 2.402}, {'end': 1199.603, 'text': 'And the bias can push things closer to 1 or closer to 0.', 'start': 1195.862, 'duration': 3.741}, {'end': 1205.266, 'text': 'This squashing of it, it works really well because we can get a true or false, 0 or 1.', 'start': 1199.603, 'duration': 5.663}, {'end': 1208.268, 'text': 'We can get a probability value between 0 and 1 when we get to the output.', 'start': 1205.266, 'duration': 3.002}, {'end': 1209.548, 'text': 'Lots of possibilities.', 'start': 1208.628, 'duration': 0.92}, {'end': 1212.55, 'text': 'So this is the sigmoid function.', 'start': 1209.929, 'duration': 2.621}, {'end': 1215.071, 'text': "So I'm over here, by the way.", 'start': 1213.55, 'duration': 1.521}, {'end': 1216.332, 'text': 'I need to correct a couple of things.', 'start': 1215.091, 'duration': 1.241}, {'end': 1218.034, 'text': "Where's that eraser?", 'start': 1217.433, 'duration': 0.601}, {'end': 1218.675, 'text': 'Number one is.', 'start': 1218.054, 'duration': 0.621}], 'summary': 'Sigmoid function squashes numbers to 0 or 1, useful for probabilities.', 'duration': 29.497, 'max_score': 1189.178, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1189178.jpg'}], 'start': 1042.555, 'title': 'Neural network learning system and activation functions', 'summary': 'Outlines the purpose of a learning system to tune the values of weights and biases in a neural network and discusses the sigmoid function, a common activation function, which squashes input into a range between 0 and 1.', 'chapters': [{'end': 1079.563, 'start': 1042.555, 'title': 'Neural network learning system', 'summary': 'Outlines the purpose of a learning system to tune the values of weights and biases in a neural network to match the desired outputs and introduces the concept of a neural network as a universal function approximator.', 'duration': 37.008, 'highlights': ['The purpose of the learning system is to tune the values of weights and biases in the neural network to match the desired outputs.', 'A neural network is referred to as a universal function 
approximator, as it is capable of receiving inputs and generating outputs, making it essentially a function.', 'The system needs to adapt and learn to tune all the values of weights and biases to perform a task.']}, {'end': 1236.898, 'start': 1080.486, 'title': 'Neural network activation functions', 'summary': 'Discusses the sigmoid function, a common activation function in neural networks, which squashes input into a range between 0 and 1, making it suitable for scenarios requiring binary outputs or probability values.', 'duration': 156.412, 'highlights': ['The sigmoid function is a common activation function in neural networks, squashing input into a range between 0 and 1, making it suitable for binary outputs or probability values.', "Euler's number, denoted as 'e', is the base for the natural logarithm and is used in the formula for the sigmoid function.", 'The sigmoid function is barely used in modern deep learning research, and there are other more cutting-edge activation functions available.']}], 'duration': 194.343, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1042555.jpg', 'highlights': ['A neural network is a universal function approximator, capable of receiving inputs and generating outputs.', 'The system needs to adapt and learn to tune all the values of weights and biases to perform a task.', 'The purpose of the learning system is to tune the values of weights and biases in the neural network to match the desired outputs.', 'The sigmoid function squashes input into a range between 0 and 1, making it suitable for binary outputs or probability values.', "Euler's number, denoted as 'e', is the base for the natural logarithm and is used in the formula for the sigmoid function.", 'The sigmoid function is barely used in modern deep learning research, and there are other more cutting-edge activation functions available.']}, {'end': 1436.893, 'segs': [{'end': 1315.143, 'src': 'heatmap', 'start': 1258.386, 'weight': 0.741, 'content': [{'end': 1262.687, 'text': 'The exact notation aside the point of the feed forward.', 'start': 1258.386, 'duration': 4.301}, {'end': 1273.531, 'text': 'the truth of the feed forward algorithm is the inputs come in, you take a weighted sum, you add in the bias, you take that weighted sum, add the bias,', 'start': 1262.687, 'duration': 10.844}, {'end': 1279.412, 'text': 'pass it through the activation function and that result feeds forward towards the network and goes straight to the output.', 'start': 1273.531, 'duration': 5.881}, {'end': 1280.633, 'text': 'And guess what the output does?', 'start': 1279.432, 'duration': 1.201}, {'end': 1291.945, 'text': 'The output takes, whoops, the weights of all the connections between the hidden and the output.', 'start': 1281.273, 'duration': 10.672}, {'end': 1299.417, 'text': 'So the matrix, product of the hidden with those weights plus its own biases.', 'start': 1293.675, 'duration': 5.742}, {'end': 1300.438, 'text': 'these are different biases.', 'start': 1299.417, 'duration': 1.021}, {'end': 1303.079, 'text': 'these are the biases for this 
particular.', 'start': 1300.438, 'duration': 2.641}, {'end': 1306.92, 'text': 'the output node and then passes that through the sigmoid function as well.', 'start': 1303.079, 'duration': 3.841}, {'end': 1310.522, 'text': 'So this just happens with every single layer.', 'start': 1307.26, 'duration': 3.262}, {'end': 1312.122, 'text': 'Here come the inputs.', 'start': 1310.862, 'duration': 1.26}, {'end': 1315.143, 'text': 'weighted sum passes the activation function.', 'start': 1312.122, 'duration': 3.021}], 'summary': 'Feedforward algorithm processes inputs through weighted sum, bias, and activation function for each layer.', 'duration': 56.757, 'max_score': 1258.386, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1258386.jpg'}, {'end': 1312.122, 'src': 'embed', 'start': 1262.687, 'weight': 1, 'content': [{'end': 1273.531, 'text': 'the truth of the feed forward algorithm is the inputs come in, you take a weighted sum, you add in the bias, you take that weighted sum, add the bias,', 'start': 1262.687, 'duration': 10.844}, {'end': 1279.412, 'text': 'pass it through the activation function and that result feeds forward towards the network and goes straight to the output.', 'start': 1273.531, 'duration': 5.881}, {'end': 1280.633, 'text': 'And guess what the output does?', 'start': 1279.432, 'duration': 1.201}, {'end': 1291.945, 'text': 'The output takes, whoops, the weights of all the connections between the hidden and the output.', 'start': 1281.273, 'duration': 10.672}, {'end': 1299.417, 'text': 'So the matrix, product of the hidden with those weights plus its own biases.', 'start': 1293.675, 'duration': 5.742}, {'end': 1300.438, 'text': 'these are different biases.', 'start': 1299.417, 'duration': 1.021}, {'end': 1303.079, 'text': 'these are the biases for this particular.', 'start': 1300.438, 'duration': 2.641}, {'end': 1306.92, 'text': 'the output node and then passes that through the sigmoid function as well.', 'start': 1303.079, 'duration': 3.841}, {'end': 1310.522, 'text': 'So this just happens with every single layer.', 'start': 1307.26, 'duration': 3.262}, {'end': 1312.122, 'text': 'Here come the inputs.', 'start': 1310.862, 'duration': 1.26}], 'summary': 'The feed forward algorithm processes inputs by taking weighted sums and biases, passing through activation functions, and propagating the results through the network to the output.', 'duration': 49.435, 'max_score': 1262.687, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1262687.jpg'}, {'end': 1359.253, 'src': 'embed', 'start': 1333.7, 'weight': 3, 'content': [{'end': 1342.228, 'text': 'So a nice way to draw the bias into the diagram is to think of it as another input at each layer.', 'start': 1333.7, 'duration': 8.528}, {'end': 1352.348, 'text': 'So for example, if there were always an input that had a value of one, that connected like this, now we have bias one and bias two.', 'start': 1342.649, 'duration': 9.699}, {'end': 1357.632, 'text': "So these are just, these are like weights that are getting weighted with an arbitrary input of one that's always coming in.", 'start': 1352.649, 'duration': 4.983}, {'end': 1359.253, 'text': 'And the same thing with here.', 'start': 1358.013, 'duration': 1.24}], 'summary': 'Bias can be represented as an additional input at each layer, such as a value of one, for more accurate weight calculations.', 'duration': 25.553, 'max_score': 1333.7, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1333700.jpg'}, {'end': 1429.491, 'src': 'embed', 'start': 1400.672, 'weight': 0, 'content': [{'end': 1402.453, 'text': 'I can perform a matrix product.', 'start': 1400.672, 'duration': 1.781}, {'end': 1406.876, 'text': 'I can apply a function like the sigmoid function to every element of a matrix.', 'start': 1402.894, 'duration': 3.982}, {'end': 1408.297, 'text': 'So I can do all of this.', 'start': 1407.076, 'duration': 1.221}, {'end': 1412.6, 'text': 'I can start to write the code for a neural network library.', 'start': 1408.397, 'duration': 4.203}, {'end': 1417.423, 'text': 'Okay, we have now arrived at what is sort of the end of this video.', 'start': 1413.421, 'duration': 4.002}, {'end': 1421.646, 'text': "I'm going to pause and check and see what all the wonderful,", 'start': 1417.523, 'duration': 4.123}, {'end': 1429.491, 'text': 'nice people who are generous enough to follow this along live have corrected me and see if I need to come back and offer any corrections or answer any questions.', 'start': 1421.646, 'duration': 7.845}], 'summary': 'Can perform matrix product, apply sigmoid function, and write neural network library code.', 'duration': 28.819, 'max_score': 1400.672, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1400672.jpg'}], 'start': 1237.618, 'title': 'Neural network operations', 'summary': 'Covers feed forward algorithm, neural network bias, and matrix math for neural network implementation, emphasizing the process of taking inputs, computing weighted sums, adding biases, and conducting matrix operations for library development.', 'chapters': [{'end': 1333.479, 'start': 1237.618, 'title': 'Feed forward algorithm in neural networks', 'summary': 'Explains the feed forward algorithm in neural networks, highlighting the process of taking inputs, computing weighted sums, adding biases, and passing through activation functions to generate outputs, with emphasis on matrix operations and the role of biases.', 'duration': 95.861, 'highlights': ['The feed forward algorithm involves taking inputs, computing a weighted sum, adding biases, passing the result through an activation function, and feeding it forward to the network, ultimately reaching the output node.', 'The output node computes the weighted sum of connections between the hidden and output layers, adds its own biases, and passes the result through a sigmoid function, repeating this process for every layer.', 'The matrix W represents the weights of connections between layers, and the algorithm involves iterating over rows and columns to compute the weighted sum and pass the result through an activation function.']}, {'end': 1378.252, 'start': 1333.7, 'title': 'Neural network bias and inputs', 'summary': 'Explains the role of bias in neural networks, illustrating how it functions as an additional input at each layer, with a constant input of one affecting the weights and connections, ultimately impacting the output.', 'duration': 44.552, 'highlights': ['The bias in a neural network serves as an additional input at each layer, with a constant input of one affecting the weights and connections, ultimately impacting the output.', "The bias is similar to weights and gets weighted with an arbitrary input of one that's always coming in, affecting the overall computation of the neural network.", 'At each layer, there is a constant input of one, which serves as the bias, 
impacting the weights and connections within the neural network.']}, {'end': 1436.893, 'start': 1378.432, 'title': 'Matrix math for neural network', 'summary': 'Introduces matrix operations for neural network implementation, including element-wise addition, matrix product, and applying functions to matrix elements, paving the way for a neural network library development.', 'duration': 58.461, 'highlights': ['The upcoming video will feature the development of a matrix library for performing matrix operations for neural network implementation.', 'The matrix library will include functions for element-wise addition, matrix product, and applying a function like the sigmoid function to every element of a matrix.', 'The chapter concludes by acknowledging the need for corrections and feedback from the audience regarding the content discussed.']}], 'duration': 199.275, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1237618.jpg', 'highlights': ['The matrix library will include functions for element-wise addition, matrix product, and applying a function like the sigmoid function to every element of a matrix.', 'The feed forward algorithm involves taking inputs, computing a weighted sum, adding biases, passing the result through an activation function, and feeding it forward to the network, ultimately reaching the output node.', 'The output node computes the weighted sum of connections between the hidden and output layers, adds its own biases, and passes the result through a sigmoid function, repeating this process for every layer.', 'The bias in a neural network serves as an additional input at each layer, with a constant input of one affecting the weights and connections, ultimately impacting the output.', 'The upcoming video will feature the development of a matrix library for performing matrix operations for neural network implementation.']}, {'end': 1650.038, 'segs': [{'end': 1467.165, 'src': 'embed', 'start': 1437.533, 'weight': 0, 'content': [{'end': 1443.834, 'text': "Number one is it's very important to realize that these two weight matrices and these two biases are not the same.", 'start': 1437.533, 'duration': 6.301}, {'end': 1444.654, 'text': 'I wrote them out.', 'start': 1444.154, 'duration': 0.5}, {'end': 1448.295, 'text': 'I mean, we have the hidden outputs and the outputs output.', 'start': 1444.674, 'duration': 3.621}, {'end': 1448.975, 'text': "That's what this is.", 'start': 1448.335, 'duration': 0.64}, {'end': 1449.675, 'text': "That's what this is.", 'start': 1449.015, 'duration': 0.66}, {'end': 1454.876, 'text': 'So the hidden outputs are actually the weight matrix between the inputs and the hidden.', 'start': 1449.935, 'duration': 4.941}, {'end': 1459.137, 'text': 'So I could write that as like superscript up here, W-I-H in a way.', 'start': 1455.156, 'duration': 3.981}, {'end': 1467.165, 'text': "This is the hidden's outputs is the weight matrix between the input and the hidden with the matrix product of those inputs.", 'start': 1459.537, 'duration': 7.628}], 'summary': 'Weight matrices and biases are distinct, with hidden outputs involving input-to-hidden weight matrix.', 'duration': 29.632, 'max_score': 1437.533, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1437533.jpg'}, {'end': 1528.146, 'src': 'embed', 'start': 1484.112, 'weight': 1, 'content': [{'end': 1485.894, 'text': 'The other thing is these are not the same biases.', 'start': 1484.112, 'duration': 
1.782}, {'end': 1494.921, 'text': "This is the bias that's connected with each hidden neuron, and so I could say BH there.", 'start': 1486.355, 'duration': 8.566}, {'end': 1498.881, 'text': "and then this is the bias right that's connected with.", 'start': 1494.921, 'duration': 3.96}, {'end': 1506.183, 'text': "it's just one at one bias, but it's connected with this output neuron, and so I could put an O here so that would be more clear about that.", 'start': 1498.881, 'duration': 7.302}, {'end': 1515.424, 'text': 'another point of clarification is there is a way to write this without having this having the bias as part of the weight matrix itself,', 'start': 1506.183, 'duration': 9.241}, {'end': 1524.565, 'text': "because there's no reason why I couldn't just consider the bias, like I said, as an extra input that always comes in with a value of 1..", 'start': 1515.424, 'duration': 9.141}, {'end': 1528.146, 'text': "And then what I would need to do where's my eraser is?", 'start': 1524.565, 'duration': 3.581}], 'summary': 'Discussing biases in neural networks and their connection to hidden and output neurons.', 'duration': 44.034, 'max_score': 1484.112, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1484112.jpg'}, {'end': 1650.038, 'src': 'embed', 'start': 1619.009, 'weight': 3, 'content': [{'end': 1623.254, 'text': "if there's a really like one that sums it up all perfectly, I'll pin it right to the top.", 'start': 1619.009, 'duration': 4.245}, {'end': 1627.698, 'text': 'But the point of this was for me to kind of like get through the basics of this.', 'start': 1623.254, 'duration': 4.444}, {'end': 1637.748, 'text': "I am now going to, in the next video, actually implement this in code and Once I've done that, we'll be ready to then look at the learning algorithm,", 'start': 1627.698, 'duration': 10.05}, {'end': 1640.49, 'text': 'the training algorithm, this thing called back propagation.', 'start': 1637.748, 'duration': 2.742}, {'end': 1641.831, 'text': 'implement that in code.', 'start': 1640.49, 'duration': 1.341}, {'end': 1643.853, 'text': 'Then the neural network will be complete.', 'start': 1642.332, 'duration': 1.521}, {'end': 1645.994, 'text': 'We can actually use it to solve something, I hope.', 'start': 1644.053, 'duration': 1.941}, {'end': 1648.216, 'text': 'So see you in a future video.', 'start': 1646.715, 'duration': 1.501}, {'end': 1648.877, 'text': 'Thank you.', 'start': 1648.456, 'duration': 0.421}, {'end': 1650.038, 'text': 'Acting.', 'start': 1649.717, 'duration': 0.321}], 'summary': 'Upcoming video will implement neural network in code, including back propagation.', 'duration': 31.029, 'max_score': 1619.009, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1619009.jpg'}], 'start': 1437.533, 'title': 'Neural network weight matrices, biases, and introduction', 'summary': 'Explains the distinction between weight matrices and biases in a neural network, highlighting differences between layers, and addressing the importance of separate biases for each neuron. it also discusses incorporating bias in a neural network weight matrix, with an exercise for viewers to replicate the calculations. 
additionally, it introduces the basics of neural networks and outlines the plan to implement the learning algorithm in code in the next video.', 'chapters': [{'end': 1506.183, 'start': 1437.533, 'title': 'Neural network weight matrices and biases', 'summary': 'Explains the distinction between weight matrices and biases in a neural network, highlighting the differences between the hidden and output layers, and addressing the importance of separate biases for each neuron.', 'duration': 68.65, 'highlights': ['The weight matrix between the inputs and the hidden layer is distinct from the weight matrix between the hidden and output layers. This highlights the separation between weight matrices for different layers, emphasizing the distinct roles of the matrices within the neural network.', 'Separate biases are utilized for the hidden neurons and the output neuron. This emphasizes the necessity of distinct biases for different neurons, reinforcing the importance of individual biases for each layer of the neural network.']}, {'end': 1619.009, 'start': 1506.183, 'title': 'Neural network bias and weight matrix', 'summary': 'Discusses the concept of incorporating bias in a neural network weight matrix and suggests an exercise for viewers to replicate the calculations, with an acknowledgment of notation concerns and an invitation for feedback.', 'duration': 112.826, 'highlights': ['The weight matrix can be modified to include the bias as an extra input, represented by a column of 1s, which allows for simpler calculations and easier representation of the bias values.', 'The video encourages viewers to replicate the calculations and suggests providing a visual aid for the matrix formula as an exercise.', 'Acknowledgment of unconventional notation and an invitation for feedback and comments from viewers is mentioned, showing a willingness to address concerns and improve the content.']}, {'end': 1650.038, 'start': 1619.009, 'title': 'Introduction to neural networks', 'summary': 'Introduces the basics of neural networks and outlines the plan to implement the learning algorithm in code in the next video, aiming to complete the neural network and use it for problem-solving.', 'duration': 31.029, 'highlights': ['The next video will implement the learning algorithm in code, completing the neural network (Relevance: 5)', 'The chapter aims to use the neural network to solve a problem (Relevance: 4)', 'The video covers the basics of neural networks (Relevance: 3)']}], 'duration': 212.505, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/qWK7yW8oS0I/pics/qWK7yW8oS0I1437533.jpg', 'highlights': ['The weight matrix between the inputs and the hidden layer is distinct from the weight matrix between the hidden and output layers, emphasizing the separation between weight matrices for different layers.', 'Separate biases are utilized for the hidden neurons and the output neuron, reinforcing the importance of individual biases for each layer of the neural network.', 'The weight matrix can be modified to include the bias as an extra input, represented by a column of 1s, allowing for simpler calculations and easier representation of the bias values.', 'The next video will implement the learning algorithm in code, completing the neural network.', 'The chapter aims to use the neural network to solve a problem.', 'The video covers the basics of neural networks.']}], 'highlights': ['The feedforward algorithm in neural networks utilizes matrix math for efficiently computing the weighted sum of 
connections between inputs and hidden nodes, highlighting the relevance and usage of matrix math in deep learning implementations.', 'Explaining the structure of a 2-layer network with 3 inputs, 2 hidden nodes, and 1 output node, emphasizing its flexibility in machine learning configurations.', 'The chapter emphasizes the intention to implement the concepts from the sources into practical code for the neural network algorithm.', 'The chapter discusses the preparation for implementing the feed-forward algorithm of a neural network, with a focus on building a matrix library for mathematical operations.', 'The chapter acknowledges and references two primary sources, Make Your Own Neural Network by Tariq Rashid and the 3Blue1Brown channel, aimed at understanding the functioning and components of a neural network.', 'The chapter explains the concept of XOR and the need for a multi-layered perceptron to solve non-linearly separable problems like XOR.', 'Introduction of a hidden layer and an output layer in the neural network to address complex problems.', 'Describes the process of adding weighted sums in hidden nodes.', 'Identifies the two important components, bias and activation function, in the feed forward algorithm.', 'Explains the weight matrix and the matrix product using the dot symbol.', 'Emphasizes the process of obtaining the weighted sum from the flowing data.', 'The outputs of the hidden layer are the inputs to the output, involving a matrix product between the weight matrix and the inputs.', 'The introduction of biases is essential to prevent all inputs from resulting in a weighted sum of zero, thus biasing the output in a given direction.', 'The need for biases arises from the problem that all inputs could be 0, resulting in a weighted sum of zero, and the bias is represented as a single column vector.', 'A neural network is a universal function approximator, capable of receiving inputs and generating outputs.', 'The system needs to adapt and learn to tune all the values of weights and biases to perform a task.', 'The purpose of the learning system is to tune the values of weights and biases in the neural network to match the desired outputs.', 'The sigmoid function squashes input into a range between 0 and 1, making it suitable for binary outputs or probability values.', "Euler's number, denoted as 'e', is the base for the natural logarithm and is used in the formula for the sigmoid function.", 'The sigmoid function is barely used in modern deep learning research, and there are other more cutting-edge activation functions available.', 'The matrix library will include functions for element-wise addition, matrix product, and applying a function like the sigmoid function to every element of a matrix.', 'The feed forward algorithm involves taking inputs, computing a weighted sum, adding biases, passing the result through an activation function, and feeding it forward to the network, ultimately reaching the output node.', 'The output node computes the weighted sum of connections between the hidden and output layers, adds its own biases, and passes the result through a sigmoid function, repeating this process for every layer.', 'The bias in a neural network serves as an additional input at each layer, with a constant input of one affecting the weights and connections, ultimately impacting the output.', 'The upcoming video will feature the development of a matrix library for performing matrix operations for neural network implementation.', 'The weight matrix between the inputs and the 
hidden layer is distinct from the weight matrix between the hidden and output layers, emphasizing the separation between weight matrices for different layers.', 'Separate biases are utilized for the hidden neurons and the output neuron, reinforcing the importance of individual biases for each layer of the neural network.', 'The weight matrix can be modified to include the bias as an extra input, represented by a column of 1s, allowing for simpler calculations and easier representation of the bias values.', 'The next video will implement the learning algorithm in code, completing the neural network.', 'The chapter aims to use the neural network to solve a problem.', 'The video covers the basics of neural networks.']}
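
The sketches below put the main steps summarized above into runnable form. They are illustrative TypeScript sketches, not the video's p5.js source code, and every name, weight, and example value in them is a stand-in chosen for illustration. First, the single perceptron from the perceptron-and-logical-AND discussion: a weighted sum of the inputs plus a bias, passed through a step activation. With hand-picked (not learned) weights it reproduces the AND truth table; note the video's earlier perceptron used a +1/-1 step, while this sketch outputs 0 or 1 to match the true/false framing of the AND example.

```typescript
// A minimal sketch (not the video's code) of a single perceptron computing
// logical AND. The weights and bias are hand-picked, not learned: with
// w1 = w2 = 1 and bias = -1.5, the weighted sum is positive only when
// both inputs are 1.
function perceptronAND(x1: number, x2: number): number {
  const w1 = 1.0;
  const w2 = 1.0;
  const bias = -1.5;
  const sum = w1 * x1 + w2 * x2 + bias; // weighted sum plus bias
  return sum > 0 ? 1 : 0;               // step activation
}

// Truth table from the video: only (1, 1) yields 1.
for (const [a, b] of [[0, 0], [0, 1], [1, 0], [1, 1]]) {
  console.log(`${a} AND ${b} = ${perceptronAND(a, b)}`);
}
```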
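
Next, the weighted sum as a single matrix operation, per the "Neural network weighted sum" chapter: for the 3-input, 2-hidden-node network, all the hidden sums come from one product of a 2x3 weight matrix with the 3x1 input column. The helper name matVec and the weight values here are assumptions for illustration, not the video's Matrix class.

```typescript
// A sketch of the weighted-sum step for the 3-input, 2-hidden-node network:
// the hidden values are the matrix product of a weight matrix with the
// input column.
type Matrix = number[][];

function matVec(W: Matrix, x: number[]): number[] {
  // Each output element j is the dot product of row j of W with x.
  return W.map(row => row.reduce((sum, w, i) => sum + w * x[i], 0));
}

const weightsIH: Matrix = [
  [0.1, 0.4, -0.2], // weights into hidden node 1
  [0.3, -0.5, 0.8], // weights into hidden node 2
];
const inputs = [1, 0.5, -1];

console.log(matVec(weightsIH, inputs)); // [ 0.5, -0.75 ]
```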
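
The activation function named in the video is the sigmoid, f(x) = 1 / (1 + e^(-x)), the formula checked against the Wikipedia page during the recording. A minimal version showing how it squashes any real number into the range (0, 1):

```typescript
// The sigmoid activation from the video: f(x) = 1 / (1 + e^-x).
function sigmoid(x: number): number {
  return 1 / (1 + Math.exp(-x));
}

console.log(sigmoid(0));  // 0.5
console.log(sigmoid(5));  // ~0.993 (large positive inputs approach 1)
console.log(sigmoid(-5)); // ~0.007 (very negative inputs approach 0)
```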
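
Putting the pieces together, a sketch of the full feedforward pass for the 2-layer (3-2-1) network: each layer computes the matrix product with its own weight matrix, adds its own bias vector (the correction segment stresses that the input-to-hidden and hidden-to-output weights and biases are separate), and passes the result through the sigmoid. All numbers are arbitrary placeholders; a real network would initialize them randomly and tune them during training.

```typescript
// A sketch of the full feedforward pass for the 3-2-1 network:
// hidden = sigmoid(W_ih · inputs + bias_h)
// output = sigmoid(W_ho · hidden + bias_o)
type Matrix = number[][];

const sigmoid = (x: number): number => 1 / (1 + Math.exp(-x));

function layer(W: Matrix, bias: number[], inputs: number[]): number[] {
  // weighted sum (matrix product) plus bias, then activation, per node
  return W.map((row, j) =>
    sigmoid(row.reduce((sum, w, i) => sum + w * inputs[i], 0) + bias[j])
  );
}

// Separate weights and biases for each layer (placeholder values).
const weightsIH: Matrix = [[0.2, -0.4, 0.7], [0.5, 0.1, -0.3]]; // 2x3
const biasH = [0.1, -0.1];
const weightsHO: Matrix = [[0.6, -0.8]]; // 1x2
const biasO = [0.05];

const inputs = [1, 0, 1];
const hidden = layer(weightsIH, biasH, inputs); // feeds forward...
const output = layer(weightsHO, biasO, hidden); // ...to the output
console.log(hidden, output);
```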
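
The video's plan is to express all of this with a small matrix library supporting element-wise addition, a matrix product, and mapping a function over every element. Below is a sketch of those three operations; the API (function names, argument order) is a guess of mine, and the video's actual Matrix class may differ.

```typescript
// A sketch of the three matrix operations the video builds toward.
type Matrix = number[][];

// Element-wise addition (same dimensions assumed).
function add(a: Matrix, b: Matrix): Matrix {
  return a.map((row, i) => row.map((v, j) => v + b[i][j]));
}

// Matrix product: (n x m) · (m x p) -> (n x p).
function multiply(a: Matrix, b: Matrix): Matrix {
  return a.map(row =>
    b[0].map((_, j) => row.reduce((sum, v, k) => sum + v * b[k][j], 0))
  );
}

// Apply a function to every element (e.g. the sigmoid).
function map(m: Matrix, fn: (v: number) => number): Matrix {
  return m.map(row => row.map(fn));
}

// One feedforward step then reads: map(add(multiply(W, X), B), sigmoid)
```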
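
Finally, the correction segment notes that the bias can instead be folded into the weight matrix itself by treating it as one extra input that always comes in with a value of 1. A quick check, with made-up numbers, that the plain W·x + b and the augmented formulation agree:

```typescript
// Bias folded into the weight matrix: append the bias as one more column
// of W and append a constant 1 to the input vector.
type Matrix = number[][];

function matVec(W: Matrix, x: number[]): number[] {
  return W.map(row => row.reduce((sum, w, i) => sum + w * x[i], 0));
}

const W: Matrix = [[0.1, 0.4], [0.3, -0.5]];
const bias = [0.2, -0.6];
const x = [1, 0.5];

// Plain version: W·x + b
const plain = matVec(W, x).map((v, j) => v + bias[j]);

// Augmented version: bias becomes a column of W, input gets a constant 1.
const Waug: Matrix = W.map((row, j) => [...row, bias[j]]);
const xAug = [...x, 1];
const augmented = matVec(Waug, xAug);

console.log(plain, augmented); // both [ 0.5, -0.55 ]
```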