title
A Friendly Introduction to Generative Adversarial Networks (GANs)

description
Code: http://www.github.com/luisguiserrano/gans

What is the simplest pair of GANs one can build? In this video (with code included) we build a pair of one-layer GANs that generate simple 2x2 images (faces).

Grokking Machine Learning book: https://www.manning.com/books/grokking-machine-learning (40% discount promo code: serranoyt)

GANs from Scratch 1: A deep introduction, with code in PyTorch and TensorFlow: https://medium.com/ai-society/gans-from-scratch-1-a-deep-introduction-with-code-in-pytorch-and-tensorflow-cb03cdcdba0f

detail
Summary: Covers the use of generative adversarial networks (GANs) for face generation: building one-layer neural networks for facial recognition, training with the log loss, backpropagation, and coding a pair of GANs that generate faces, concluding with a promotion for the author's machine learning book and a special discount for viewers.

Chapter 1: Understanding GANs for face generation

Hello, I'm Luis Serrano, and this video is about Generative Adversarial Networks, or GANs for short. Developed by Ian Goodfellow and other researchers in Montreal, GANs are a great advance in machine learning and have numerous applications. Perhaps one of the fanciest applications of GANs is face generation. If you go to thispersondoesnotexist.com, you'll see it in action: the images you see there are of people who don't exist. They have been fully generated by a neural network.

GANs consist of a pair of neural networks that compete with each other. One is called the generator and the other the discriminator, and they behave a lot like a counterfeiter and a cop. The counterfeiter is constantly trying to make fake paintings, and the cop is constantly trying to catch him. The counterfeiter is the generator and the cop is the discriminator. As he keeps getting caught by the cop, the counterfeiter improves his paintings until one day he learns to paint a perfect one. In the same spirit, the discriminator is trained to identify which images are real and which come from the generator, while the generator is trained to fool the discriminator into classifying its images as real.

In this video we'll build a very simple pair of GANs, so simple that we can code them straight in Python without any deep learning packages.

Chapter 2: Creating neural networks for facial recognition

The goal of our networks is to distinguish faces from noisy images, or non-faces. Let's attach some numbers: we'll use a scale where a white pixel has value zero and a black pixel has value one. You may have seen this in the opposite way, but we'll do it like this for clarity. This way we can attach a value to every pixel on the two-by-two screens. We then apply the sigmoid, the function that sends high numbers to values close to one and low numbers to values close to zero, and for the first face we get sigmoid(1) = 0.73.
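The discriminator just described scores a 2x2 image by adding the top-left and bottom-right pixels, subtracting the other two, and squashing the result with a sigmoid. A minimal sketch in plain Python, assuming unit weights (the video only draws its weights qualitatively as thick and thin edges, so the pixel values and bias here are illustrative):

```python
import math

def sigmoid(x):
    # sends high numbers close to 1 and low numbers close to 0
    return 1 / (1 + math.exp(-x))

def discriminate(pixels, bias=0.0):
    """Score a 2x2 image given as [top_left, top_right, bottom_left,
    bottom_right], where white = 0 and black = 1: add the diagonal
    corners, subtract the off-diagonal ones, then apply the sigmoid."""
    tl, tr, bl, br = pixels
    score = tl + br - tr - bl + bias
    return sigmoid(score)
```

With these unit weights, a diagonal image such as [1, 0, 0, 1] scores well above 0.5 and its opposite well below; the video's example score of 1 maps to sigmoid(1) ≈ 0.73.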
Our setting is "slanted land", a world of limited technology: the best computer screens display black-and-white images at a resolution of two pixels by two pixels, everything, including people's faces, is tilted at a 45-degree angle, and neural networks are limited to simple one-layer networks. The discriminator computes a score by summing the values of the top-left and bottom-right corners and subtracting the values of the other two corners, with a cutoff threshold of 1 for classifying an image as a face.

The discriminator network thus assigns this image a probability of 73% of being a face. Since this is greater than 50%, we conclude that the discriminator thinks the image is a face, which is correct. Notice that in this neural network the thick edges are positive weights and the thin ones are negative, a convention we'll use throughout the video. Now, if we enter the second image, which is not a face, the same calculation gives a score of -0.5. Sigmoid(-0.5) is 0.37, lower than 50%, so we conclude that the discriminator thinks this image is not a face. The discriminator is correct again.

Now, in the same way, let's build a generator. We start by picking an input z, a random number between zero and one; in this case, say it is 0.7. (In general, the input would be a vector that comes from some fixed distribution.) We then build a one-layer network with some large weights and some small ones: the large ones, drawn as thick edges, feed the top-left and bottom-right corners, and the small ones, drawn as thin edges, feed the top-right and bottom-left corners. By the way we built it, this network always generates large values for the top-left and bottom-right corners and small values for the other two, no matter what value of z between zero and one we input. Therefore this network always generates a face, which means it's a good generator.

Of course, we built these networks by eyeballing the weights, but that's not how it's normally done. In general, we have to train the networks to find the best possible weights. For this, let me tell you a little about error functions. An error, or cost, function is a way to tell the network how it's doing so that it can improve: if the error is large, the network is not doing well and needs to reduce the error. The error function we'll use to train these GANs is the log loss, which is based on the natural logarithm, the logarithm base e.
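The one-layer generator described above can be sketched the same way. The weight and bias values below are assumptions chosen so that large values always land on the top-left and bottom-right pixels for any z in (0, 1); the video specifies its weights only qualitatively as thick and thin edges:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def generate(z, weights=(4.0, -4.0, -4.0, 4.0), biases=(1.0, -1.0, -1.0, 1.0)):
    """One-layer generator: a single z in (0, 1) produces a 2x2 image
    [top_left, top_right, bottom_left, bottom_right]. Large positive
    weights feed the diagonal corners and large negative weights feed
    the others, so every z yields a face-like pattern."""
    return [sigmoid(w * z + b) for w, b in zip(weights, biases)]
```

For example, generate(0.7) gives bright top-left and bottom-right pixels and dark top-right and bottom-left ones, and the same holds for any z between zero and one.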
Chapter 3: Using the log loss to train neural networks

Why the logarithm? Logarithms appear a lot in error functions, for deeper reasons we won't cover here. Consider first the case where the label is one: a prediction close to one should give a small error, and a prediction close to zero a large error. The negative logarithm of the prediction behaves exactly this way, so when the label is one, -log(prediction) is a good error function. Now let's go to the other extreme, where the label is zero. In this case a prediction of 0.1 would be good, because it's close to the label, so the error should be small; a prediction of 0.9, on the other hand, is terrible, so the error should be large. Here the right error function is -log(1 - prediction): from its graph, when the prediction is close to 0 there's a low error, and when the prediction is close to 1 there is a high error. These are the two error functions we'll use for training the generator and the discriminator, depending on whether we want the prediction to be zero or one.

Chapter 4: Backpropagation and gradient descent

Now that we have our error functions, we get to the meat of the training process, which is backpropagation. Briefly, we train a neural network by taking a data point, performing a forward pass to calculate the prediction, and then calculating the error using the log loss we just defined.
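The two log-loss error functions just described can be written directly: -log(prediction) when the desired label is one, and -log(1 - prediction) when it is zero.

```python
import math

def loss_want_one(prediction):
    # error when the desired label is 1: small near 1, large near 0
    return -math.log(prediction)

def loss_want_zero(prediction):
    # error when the desired label is 0: small near 0, large near 1
    return -math.log(1 - prediction)
```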
Then we take the derivative of the error with respect to all the weights using the chain rule. This tells us how much to update each weight in order to best decrease the error.

What are the generator's wildest dreams? To generate an image so good, so real, that the discriminator classifies it as real. The generator therefore wants the whole combined network, the generator connected to the discriminator, to output a one, so its error function is the negative logarithm of the prediction. In other words, if g is the output of the generator and d is the output of the discriminator, then the error function for the generator is -log(d) and the error function for the discriminator is -log(1 - d). The derivatives of these two are what update the weights of both neural networks to improve that particular prediction. Remarkably, the two networks simply improve together, as one, to produce their different outputs, which is fascinating.

So what we do now is repeat this process many times. We pick a random value for z, apply the generator to produce a fake image, apply the discriminator to that image, and use backpropagation to update the weights of both the generator and the discriminator. Then we take a real image, plug it into the discriminator, and update its weights again using backpropagation. After many rounds, both networks are trained: the generator produces realistic-looking two-by-two faces, pictures that look uncannily like the residents of slanted land, and the discriminator classifies them accurately. Thus, we have built a network that generates faces.

Chapter 5: The code

Now, as promised, here's the code for you to sing along to the video. It's in the repo called GANs under my GitHub. First we have the faces, which we hard-code, and the random noisy images that we generate, which are not necessarily faces. Then we carefully develop the derivatives for the discriminator, based on faces and based on noisy images, both with respect to the weights and with respect to the biases; these are all coded in the discriminator class. We also work out the derivatives corresponding to the error function for the generator, again with respect to the weights and the bias. Plotting the errors during training, notice that the generator's error goes down and stabilizes, but since the generator ends up fooling the discriminator, the discriminator's error doesn't do so well and actually goes up at the end.
finally, we ask our generator to generate some random images, and here they are.', 'start': 1165.383, 'duration': 3.804}, {'end': 1174.331, 'text': 'Notice that they all look like faces in slanted land, which is what we wanted from the beginning.', 'start': 1169.647, 'duration': 4.684}, {'end': 1181.775, 'text': 'Therefore, we have successfully created a pair of GANs that generate faces in slanted land.', 'start': 1175.711, 'duration': 6.064}, {'end': 1184.817, 'text': 'Now, time for some acknowledgements.', 'start': 1183.276, 'duration': 1.541}, {'end': 1187.959, 'text': 'This video would not be the same if not for the help of my friends.', 'start': 1185.318, 'duration': 2.641}, {'end': 1195.264, 'text': 'so a big thanks to Diego, Sahil and Alejandro, who helped me in various ways either encouraged me to learn GANs more seriously,', 'start': 1187.959, 'duration': 7.305}], 'summary': 'Pair of gans successfully generates faces in slanted land.', 'duration': 40.75, 'max_score': 1154.514, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8L11aMN5KY8/pics/8L11aMN5KY81154514.jpg'}, {'end': 1238.541, 'src': 'embed', 'start': 1211.853, 'weight': 2, 'content': [{'end': 1215.434, 'text': "I'd like to remind you that I have a machine learning book called Grokking Machine Learning,", 'start': 1211.853, 'duration': 3.581}, {'end': 1220.837, 'text': 'in which I explain the concepts of machine learning in a down-to-earth way, with real examples for everybody to understand.', 'start': 1215.434, 'duration': 5.403}, {'end': 1226.58, 'text': 'In the description, you can find the link to the book and a very special 40% discount for the viewers of this channel.', 'start': 1220.857, 'duration': 5.723}, {'end': 1233.336, 'text': 'And as usual, if you enjoyed this video, please subscribe to my channel for more content or hit like or share amongst your friends.', 'start': 1227.691, 'duration': 5.645}, {'end': 1234.978, 'text': 'And feel free to write a comment.', 
'start': 1233.957, 'duration': 1.021}, {'end': 1238.541, 'text': 'I really enjoy reading your comments, especially those with suggestions for future topics.', 'start': 1235.278, 'duration': 3.263}], 'summary': "Book 'Grokking Machine Learning' offered at 40% discount to viewers, with real examples and down-to-earth explanations.", 'duration': 26.688, 'max_score': 1211.853, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8L11aMN5KY8/pics/8L11aMN5KY81211853.jpg'}, {'end': 1252.915, 'src': 'heatmap', 'start': 1245.808, 'weight': 4, 'content': [{'end': 1248.431, 'text': 'can be found at this link, serrano.academy.', 'start': 1245.808, 'duration': 2.623}, {'end': 1249.512, 'text': 'So check it out.', 'start': 1248.891, 'duration': 0.621}, {'end': 1252.915, 'text': 'Thank you very much for your attention and see you in the next video.', 'start': 1250.272, 'duration': 2.643}], 'summary': 'Visit serrano.academy for more information. thank you and see you in the next video.', 'duration': 7.107, 'max_score': 1245.808, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8L11aMN5KY8/pics/8L11aMN5KY81245808.jpg'}], 'start': 1091.614, 'title': 'Creating gans to generate faces', 'summary': "Discusses the process of creating a pair of gans that successfully generate faces in slanted land, using a network that generates faces, a discriminator class, error function plots, and acknowledgements to diego, sahil, and alejandro. the video concludes with a promotion for the author's machine learning book and a special discount for viewers.", 'chapters': [{'end': 1252.915, 'start': 1091.614, 'title': 'Creating gans to generate faces', 'summary': "Discusses the process of creating a pair of gans that successfully generate faces in slanted land, using a network that generates faces, a discriminator class, error function plots, and acknowledgements to diego, sahil, and alejandro. 
the video concludes with a promotion for the author's machine learning book and a special discount for viewers.", 'duration': 161.301, 'highlights': ['The chapter discusses the process of creating a pair of GANs that successfully generate faces in slanted land, using a network that generates faces, a discriminator class, error function plots, and acknowledgements to Diego, Sahil, and Alejandro.', 'The generator error function goes down and stabilizes, but the discriminator function goes up at the end.', "Acknowledgements are given to Diego, Sahil, and Alejandro for their help and inspiration, with a promotion for the author's machine learning book and a special discount for viewers.", 'The author provides the code for GANs on their GitHub repository called GANs, and encourages viewers to subscribe to their channel, like, share, and leave comments with suggestions for future topics.', "The author promotes their machine learning book 'Grokking Machine Learning,' offering a special 40% discount for viewers and providing information about where to find their content.", "The author also mentions the availability of their information, videos, writings, etc., at serrano.academy and expresses gratitude for the audience's attention, concluding with an invitation for the next video."]}], 'duration': 161.301, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8L11aMN5KY8/pics/8L11aMN5KY81091614.jpg'}], 'highlights': ['GANs are a great advance in machine learning with numerous applications, focusing on face generation.', 'The explanation of GANs as a pair of neural networks is likened to a counterfeiter and a cop.', 'The discriminator network identifies faces by summing the values of the top left and bottom right corners and subtracting the values of the other two corners, with a cutoff threshold of 1 for face classification.', 'The generator neural network is designed to consistently generate images with large values for the top left and bottom right corners and small values for the top right and bottom left corners, ensuring it always produces a face.', 'The error function used to train GANs is the log loss, emphasizing its significance in training neural networks.', 'Back propagation involves forward pass, prediction calculation, error calculation using log loss, and weight updating using gradient descent, leading to successful training of the generator and discriminator.', 'The generator error function goes down and stabilizes, but the discriminator function goes up at the end.', "Acknowledgements are given to Diego, Sahil, and Alejandro for their help and inspiration, with a promotion for the author's machine learning book and a special discount for viewers."]}
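
The highlights above describe a hand-designed discriminator rule for the 2x2 "slanted land" images: sum the top-left and bottom-right pixels, subtract the other two corners, and call the image a face when the score clears the cutoff of 1. A minimal sketch of that rule in plain Python (the example images and function names here are illustrative, not taken from the video's repo):

```python
def discriminator_score(image):
    # image is a 2x2 grid: [[top_left, top_right], [bottom_left, bottom_right]].
    # Slanted-land faces have a bright diagonal, so reward the diagonal corners
    # and penalize the anti-diagonal ones.
    return image[0][0] + image[1][1] - image[0][1] - image[1][0]

def is_face(image, cutoff=1.0):
    # Classify as a face when the score exceeds the cutoff of 1.
    return discriminator_score(image) > cutoff

face = [[1.0, 0.1], [0.1, 1.0]]    # strong diagonal: score 1.8
noise = [[0.5, 0.5], [0.5, 0.5]]   # flat image: score 0.0
print(is_face(face), is_face(noise))  # True False
```

The linear score plus a threshold is exactly a one-layer network with weights (+1, -1, -1, +1) and bias -1, which is why a single layer suffices for this toy problem.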
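
The highlights also outline the training procedure: negative-logarithm (log loss) error functions, with the generator pushing the discriminator's output toward 1 on its samples and the discriminator pushing its output toward 0 on generated images, all trained by gradient descent through backpropagation. The sketch below is one plausible version of that one-layer setup with hand-derived gradients; the class names, learning rate, and hard-coded faces are assumptions for illustration, not the actual code from github.com/luisguiserrano/gans:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Discriminator:
    def __init__(self):
        self.w = rng.normal(size=4)  # one weight per pixel of the flattened 2x2 image
        self.b = rng.normal()

    def forward(self, x):
        return sigmoid(np.dot(self.w, x) + self.b)

    def update_from_face(self, x, lr):
        # Error -ln(D(x)): gradient step pushes the prediction toward 1 on real faces.
        p = self.forward(x)
        self.w -= lr * (-(1 - p) * x)
        self.b -= lr * (-(1 - p))

    def update_from_noise(self, x, lr):
        # Error -ln(1 - D(x)): gradient step pushes the prediction toward 0 on fakes.
        p = self.forward(x)
        self.w -= lr * (p * x)
        self.b -= lr * p

class Generator:
    def __init__(self):
        self.w = rng.normal(size=4)
        self.b = rng.normal(size=4)

    def forward(self, z):
        # A scalar noise input z produces four pixel values in (0, 1).
        return sigmoid(z * self.w + self.b)

    def update(self, z, disc, lr):
        # Error -ln(D(G(z))): push the discriminator's output on the sample toward 1.
        g = self.forward(z)
        p = disc.forward(g)
        dE_dg = -(1 - p) * disc.w   # chain rule back through the discriminator
        dg_dw = g * (1 - g) * z     # sigmoid derivative, elementwise
        dg_db = g * (1 - g)
        self.w -= lr * dE_dg * dg_dw
        self.b -= lr * dE_dg * dg_db

# Hard-coded flattened 2x2 faces: large diagonal, small anti-diagonal.
faces = [np.array([1.0, 0.1, 0.1, 1.0]), np.array([0.9, 0.2, 0.2, 0.8])]

D, G = Discriminator(), Generator()
for _ in range(1000):
    for face in faces:
        D.update_from_face(face, lr=0.01)
    z = rng.uniform()
    D.update_from_noise(G.forward(z), lr=0.01)
    G.update(z, D, lr=0.01)

print(np.round(G.forward(rng.uniform()), 2))  # a generated 2x2 image, flattened
```

As the video notes, the generator's error tends to fall and stabilize while the discriminator's error on generated images rises once it can no longer tell the fakes apart.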