title
Neural Networks from Scratch - P.3 The Dot Product

description
Neural Networks from Scratch book: https://nnfs.io
NNFSiX Github: https://github.com/Sentdex/NNfSiX
Playlist for this series: https://www.youtube.com/playlist?list=PLQVvvaa0QuDcjD5BAw2DxE6OF2tius3V3
Neural Networks IN Scratch (the programming language): https://youtu.be/eJ1HdTZAcn4
Python 3 basics: https://pythonprogramming.net/introduction-learn-python-3-tutorials/
Intermediate Python (w/ OOP): https://pythonprogramming.net/introduction-intermediate-python-tutorial/
Mug link for fellow mug aficionados: https://amzn.to/3cKEokU
Channel membership: https://www.youtube.com/channel/UCfzlCWGWYyIQ0aLC5w48gBQ/join
Discord: https://discord.gg/sentdex
Support the content: https://pythonprogramming.net/support-donate/
Twitter: https://twitter.com/sentdex
Instagram: https://instagram.com/sentdex
Facebook: https://www.facebook.com/pythonprogramming.net/
Twitch: https://www.twitch.tv/sentdex

#nnfs #python #neuralnetworks

detail
Summary: The video transitions the raw-Python list code from the previous parts toward vectors and matrices, elaborates on the difference between a weight and a bias, introduces NumPy's dot product, and finishes by computing a full layer of three neurons as a single np.dot call (the vector dot product performed three times) plus a bias vector.

Chapter 1: Transitioning to vectors and matrices (0:05-5:09)

The video opens with thanks to the nearly 2,000 people who have contributed comments, questions, suggestions, and edits to the Neural Networks from Scratch book draft, and points viewers to the NNFSiX GitHub repository, where implementations of the series in other programming languages can be contributed, shared, and referenced while following along.
The first coding task is simplifying the three-neuron, four-input layer code. Python's zip combines two lists element-wise into a list of pairs: the zeroth element of zip(weights, biases) is the zeroth weight vector together with the zeroth bias, and so on. Iterating over that, and over zip(inputs, neuron_weights) inside it, each neuron's output accumulates input times weight, the bias is added at the end, and the result is appended to the layer's outputs. Running it produces exactly the output expected from the earlier hard-coded version: a cleaner, more dynamic way of doing inputs times the weights plus the bias for a layer in plain Python.
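A minimal sketch of that loop is shown below; the input, weight, and bias values follow the ones used earlier in the series and are otherwise illustrative:

```python
inputs = [1.0, 2.0, 3.0, 2.5]
weights = [[0.2, 0.8, -0.5, 1.0],
           [0.5, -0.91, 0.26, -0.5],
           [-0.26, -0.27, 0.17, 0.87]]
biases = [2.0, 3.0, 0.5]

layer_outputs = []  # outputs of the current layer
for neuron_weights, neuron_bias in zip(weights, biases):  # one pass per neuron
    neuron_output = 0.0
    for n_input, weight in zip(inputs, neuron_weights):   # pair each input with its weight
        neuron_output += n_input * weight
    neuron_output += neuron_bias  # the bias is added once, at the end
    layer_outputs.append(neuron_output)

print(layer_outputs)  # ~[4.8, 1.21, 2.385]
```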
Chapter 2: Weights versus bias (5:09-8:46)

Why have both? Take some value, say 0.5, a weight of 0.7, and, for comparison, a bias of the same 0.7. The weight multiplies the value, changing its magnitude and, if the weight is negative, its sign. The bias is added to the value, offsetting it, so even a negative input can produce a positive output. Both are tools the optimizer tunes to fit the data, and with millions of these little knobs to tweak, the overall outputs of the network can be shaped. Activation functions, covered properly later in the series, then determine what a neuron passes on to the next layer or to the final output of the network.
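A tiny sketch of that difference, reusing the 0.5 and 0.7 from the video; the negated input is added here only to show the offsetting effect:

```python
value = 0.5
weight = 0.7
bias = 0.7

print(value * weight)   # 0.35: the weight scales the magnitude of the input
print(-value * weight)  # -0.35: a positive weight leaves a negative value negative
print(-value + bias)    # ~0.2: the bias offsets the value, pushing a negative input positive
```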
With that in mind, the video transitions to more formal mathematics: NumPy, vectors, matrices, and dot products, starting with shape. Shape-related errors are among the most common mistakes in every deep learning framework, so understanding shape is the first step to understanding how the rest of the system works.

Chapter 3: Lists, arrays, shape, and tensors (8:47-13:07)

A plain 1D list of four elements has shape (4,): it is a vector. A list containing two lists of four elements each is a 2D array, or matrix, of shape (2, 4). Nest three of those lists of lists inside a list (a "LOLOL", a list of lists of lists) and you have a 3D array of shape (3, 2, 4): three elements at the first dimension, two at the second, four at the third. For an array to have a shape at all it must be homologous, with every list at a given dimension holding the same number of elements. Finally, a tensor, put extremely simply, is an object that can be represented as an array, which is all the term (familiar from TensorFlow) needs to mean here.
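A quick way to check those shapes, with made-up element values:

```python
import numpy as np

l = [1, 5, 6, 2]                        # 1D array: a vector
lol = [[1, 5, 6, 2],
       [3, 2, 1, 3]]                    # 2D array: a matrix, two lists of four
lolol = [[[1, 5, 6, 2], [3, 2, 1, 3]],
         [[5, 2, 1, 2], [6, 4, 8, 4]],
         [[2, 8, 5, 3], [1, 1, 9, 4]]]  # 3D array: three lists of two lists of four

print(np.array(l).shape)      # (4,)
print(np.array(lol).shape)    # (2, 4)
print(np.array(lolol).shape)  # (3, 2, 4)
```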
"I'm doing this to drive home a point about how dot product and matrix product Really works,", 'start': 772.6, 'duration': 6.643}], 'summary': 'The dot product of two vectors results in a scalar value. implementing dot product with a single neuron and a full layer of neurons to understand its functionality.', 'duration': 26.842, 'max_score': 752.401, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tMrbN67U9d4/pics/tMrbN67U9d4752401.jpg'}], 'start': 526.817, 'title': 'Python lists, numpy arrays, and matrix operations', 'summary': "Provides an explanation of one-dimensional and two-dimensional lists in python, highlighting a 1d array's shape of 4 and a 2d array containing two lists with four elements. it also emphasizes the concepts of arrays, matrices, and tensors, while introducing the dot product for multiplying vectors and matrices.", 'chapters': [{'end': 572.405, 'start': 526.817, 'title': 'Python lists and numpy arrays', 'summary': 'Explains the concept of one-dimensional lists and two-dimensional lists in python, highlighting that a 1d array has a shape of 4 and a 2d array contains two lists with each list having four elements.', 'duration': 45.588, 'highlights': ['The concept of one-dimensional lists and two-dimensional lists in Python is explained, highlighting that a 1D array has a shape of 4.', 'In Python, a list of lists is referred to as a two-dimensional array, with the first list containing two lists, and each list having four elements.']}, {'end': 787.407, 'start': 573.246, 'title': 'Understanding arrays, matrices, and tensors', 'summary': 'Explains the concept of arrays, matrices, and tensors, emphasizing the importance of homologous shape and introducing the dot product for multiplying vectors and matrices.', 'duration': 214.161, 'highlights': ['The chapter explains the concept of arrays, matrices, and tensors.', 'Introduction to the dot product for multiplying vectors and matrices.', 'Emphasizing the importance of homologous shape in arrays.']}], 'duration': 260.59, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tMrbN67U9d4/pics/tMrbN67U9d4526817.jpg', 'highlights': ['A 1D array has a shape of 4', 'A list of lists in Python is a 2D array with two lists, each containing four elements', 'Explanation of arrays, matrices, and tensors', 'Introduction to the dot product for multiplying vectors and matrices', 'Emphasis on the importance of homologous shape in arrays']}, {'end': 1062.548, 'segs': [{'end': 829.128, 'src': 'embed', 'start': 804.976, 'weight': 3, 'content': [{'end': 812.94, 'text': "But for now I am going to reduce weights to just being one set of weights and then biases will actually be two and now it's just a bias.", 'start': 804.976, 'duration': 7.964}, {'end': 816.722, 'text': 'And then we are going to do the dot product.', 'start': 814.4, 'duration': 2.322}, {'end': 819.323, 'text': "So we're going to use import numpy as np.", 'start': 816.762, 'duration': 2.561}, {'end': 823.545, 'text': "If you don't have numpy for whatever reason, you can pip install numpy.", 'start': 819.383, 'duration': 4.162}, {'end': 829.128, 'text': 'You might have to do pip 3 install numpy or python 3.7-m pip install whatever.', 'start': 823.565, 'duration': 5.563}], 'summary': 'Reducing weights to one set, with biases being two, using numpy for dot product.', 'duration': 24.152, 'max_score': 804.976, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/tMrbN67U9d4/pics/tMrbN67U9d4804976.jpg'}, {'end': 
Chapter 4: A single neuron with np.dot (13:07-17:43)

Weights are reduced to one weight vector and biases to a single bias. After import numpy as np (pip install numpy if it is missing; on some setups that is pip3 install numpy or python -m pip install numpy), the neuron becomes np.dot(weights, inputs) + bias. The dot product multiplies each element at the same index of the two vectors and adds it all together: on its own it returns 2.8, and adding the bias of 2.0 gives the final answer of 4.8, the same output as the raw-Python version. With two vectors it does not matter which argument comes first, but it soon will.
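The single-neuron version, using the same illustrative values as above; the 2.8 plus 2.0 equals 4.8 arithmetic matches the walkthrough in the video:

```python
import numpy as np

inputs = [1.0, 2.0, 3.0, 2.5]
weights = [0.2, 0.8, -0.5, 1.0]
bias = 2.0

# np.dot(weights, inputs) = 0.2*1.0 + 0.8*2.0 + -0.5*3.0 + 1.0*2.5 = 2.8
output = np.dot(weights, inputs) + bias
print(output)  # 4.8
```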
Moving to the full layer, inputs is still a vector, but weights is now a matrix, a matrix containing the three weight vectors, and the order of the arguments starts to matter: the first argument passed to np.dot determines how the return value is indexed. The model here is three neurons, and what should come back is one output per neuron, so weights is passed first; the only difference from the single-neuron case is that there are multiple sets of weights.

Chapter 5: The full layer, batches, and where this is going (17:43-23:59)

Because weights is a matrix of three vectors, np.dot(weights, inputs) performs the vector dot product three times: the zeroth weight vector with the inputs, then the first, then the second. Adding the bias vector to that result element-wise gives the layer's three final output values. That is the whole conversion into NumPy and mathematical terms; the dot product of weights and inputs, plus the biases, is the form these calculations will take from here on.
Without a good understanding of these atomic calculations, the shape errors that deep learning libraries throw become confusing and frustrating, which is why the video breaks the operation down into individual dot products and visualizes each step.

Two things to carry forward. First, batches: up to this point the input data has been a single sample, a 1D array or vector; soon a batch of samples will be passed at once, making the input data a 2D array or matrix. Second, weights versus bias once more: they are just two different tools for approximating something, and weights times the inputs plus the bias is essentially the equation for a line, y = mx + b, with x the input and y the output. Adjusting the weight changes the slope of that line, while adjusting the bias offsets the whole line, a completely different visual impact on the same equation.
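A small sketch of that analogy; the neuron function here is a hypothetical helper, purely for illustration:

```python
def neuron(x, weight, bias):
    # y = m*x + b, with the weight as the slope m and the bias as the intercept b
    return weight * x + bias

print(neuron(1.0, 0.7, 0.0))  # 0.7
print(neuron(1.0, 1.4, 0.0))  # 1.4: doubling the weight doubles the slope
print(neuron(1.0, 0.7, 0.5))  # 1.2: same slope, line shifted up by the bias
```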
Once activation functions such as the rectified linear unit come into play (covered in an upcoming part), the bias largely helps determine whether a neuron fires at all and, if so, to what degree, so adjusting a weight and adjusting a bias have visibly different effects. For now the values are simply made up and carry no meaning; soon the series moves to a data set with a little more meaning. Viewers who do not yet understand shape, vectors, matrices, or the dot product are strongly advised not to leave the video, and to ask questions in the comments.

The video closes with pointers: the Neural Networks from Scratch book at https://nnfs.io; the animations, made by Daniel, who has also published a short, absolutely absurd video programming a fully working neural network in Scratch, the programming language (linked in the description); and a note that the animations are built with the Manim package, which is why they resemble 3Blue1Brown's.