title
Deep Learning with Neural Networks and TensorFlow Introduction

description
Welcome to a new section in our Machine Learning Tutorial series: Deep Learning with Neural Networks and TensorFlow. The artificial neural network is a biologically inspired approach to machine learning, intended to mimic your brain (a biological neural network). The artificial neural network, which I will from here on simply call a neural network, is not a new concept. The idea has been around since the 1940s and has had a few ups and downs, most notably when compared against the Support Vector Machine (SVM). For example, neural networks were popular up until the mid-1990s, when it was shown that the SVM, using the "Kernel Trick" (a technique new to the public, though it was conceived long before it was actually put to use), was capable of working with non-linearly separable datasets. With this, the Support Vector Machine catapulted to the front again, leaving neural networks behind, with little of interest happening until about 2011, when deep neural networks began to take hold and outperform the Support Vector Machine, thanks to new techniques, the availability of huge datasets, and much more powerful computers. https://pythonprogramming.net https://twitter.com/sentdex https://www.facebook.com/pythonprogramming.net/ https://plus.google.com/+sentdex
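The video later contrasts the hard step ("threshold") function with the smooth, S-shaped sigmoid that modern networks use instead. As a minimal sketch to make that difference concrete (the function names here are mine, not from the video):

```python
import math

def step(x):
    # Hard threshold: the neuron fires (1) above zero, otherwise stays at 0.
    return 1.0 if x > 0 else 0.0

def sigmoid(x):
    # Smooth alternative: a graded value between 0 and 1 rather than a hard jump.
    return 1.0 / (1.0 + math.exp(-x))

for x in (-2.0, 0.0, 2.0):
    print(x, step(x), sigmoid(x))
```

The step function can only say "fired" or "didn't fire", while the sigmoid provides the scale-like output the video describes, which is why it (and functions like it) replaced the step function as the activation of choice.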

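The video also draws a network as an input layer, two hidden layers (having two or more hidden layers is what makes it "deep"), and an output layer, with a unique weight on every connection. A tiny pure-Python forward pass under those assumptions might look like the sketch below; the weights are random and untrained, bias terms are omitted for brevity, and all names (`dense`, `rand_layer`, `forward`) are mine:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dense(inputs, weights):
    # Each output neuron sums every input times that connection's unique weight,
    # then passes the sum through the activation function.
    return [sigmoid(sum(w * i for w, i in zip(row, inputs))) for row in weights]

def rand_layer(n_in, n_out):
    # One unique weight per connection: n_in * n_out weights in total.
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

random.seed(0)
# input (3 values) -> hidden layer 1 (4 neurons) -> hidden layer 2 (4 neurons) -> output (1 neuron)
w1, w2, w3 = rand_layer(3, 4), rand_layer(4, 4), rand_layer(4, 1)

def forward(x):
    return dense(dense(dense(x, w1), w2), w3)

print(forward([0.5, -0.2, 0.9]))  # a single value between 0 and 1
```

Counting the connections (3x4 + 4x4 + 4x1 = 32 weights even in this toy network) illustrates the point made in the video: every line in the diagram is a variable, which is why training real networks is such a demanding optimization problem.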
detail
{'title': 'Deep Learning with Neural Networks and TensorFlow Introduction', 'heatmap': [{'end': 148.992, 'start': 135.839, 'weight': 0.727}, {'end': 587.439, 'start': 554.656, 'weight': 0.877}, {'end': 1219.309, 'start': 1204.304, 'weight': 0.763}], 'summary': 'Provides an in-depth exploration of topics including deep learning with neural networks, neural network fundamentals, optimization, and free data sources for machine learning, with examples of sample sizes ranging from 60,000 to 500 million, and the introduction of tensorflow for neural networks.', 'chapters': [{'end': 346.352, 'segs': [{'end': 65.406, 'src': 'embed', 'start': 23.401, 'weight': 0, 'content': [{'end': 31.203, 'text': 'And really this is fairly recent so the neural network has been around since the 1940s but basically it was pretty much worthless until very recently.', 'start': 23.401, 'duration': 7.802}, {'end': 32.564, 'text': "There's a few things.", 'start': 31.883, 'duration': 0.681}, {'end': 38.51, 'text': 'In the 1940s, it was more of like a concept and there was really no way anybody was going to do anything with it.', 'start': 34.005, 'duration': 4.505}, {'end': 47.979, 'text': "And then in the early 1970s, I think 1974, Paul Werbos came up with a way to kind of neutralize the threshold, which we'll talk about later.", 'start': 39.33, 'duration': 8.649}, {'end': 57.203, 'text': 'um, but so that kind of helped a little bit, but pretty much the neural network was worthless until around 2011 or 2012, where,', 'start': 48.599, 'duration': 8.604}, {'end': 65.406, 'text': "with deep learning and just massive data sets, we've come up with neural networks that are doing some pretty incredible things,", 'start': 57.203, 'duration': 8.203}], 'summary': 'Neural networks were worthless until 2011-2012, when deep learning and massive datasets enabled significant advancements.', 'duration': 42.005, 'max_score': 23.401, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY23401.jpg'}, {'end': 133.457, 'src': 'embed', 'start': 106.63, 'weight': 2, 'content': [{'end': 111.391, 'text': "All right, so now we're going to run through the theory of a neural network and how it works.", 'start': 106.63, 'duration': 4.761}, {'end': 115.852, 'text': 'So obviously a neural network is a network of, you guessed it, neurons.', 'start': 111.431, 'duration': 4.421}, {'end': 124.095, 'text': "So how does a neuron look, a basic neuron? Well, to start, you've got these little things called dendrites.", 'start': 116.413, 'duration': 7.682}, {'end': 130.916, 'text': "And then those connect, and then you'll have the nucleus, and then it goes down the axon to the axon terminal.", 'start': 125.415, 'duration': 5.501}, {'end': 133.457, 'text': 'Then it has some more little squiggly things.', 'start': 131.817, 'duration': 1.64}], 'summary': 'Overview of neural network theory and components, including dendrites, nucleus, and axon terminal.', 'duration': 26.827, 'max_score': 106.63, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY106630.jpg'}, {'end': 160.342, 'src': 'heatmap', 'start': 135.839, 'weight': 0.727, 'content': [{'end': 142.205, 'text': 'So you have your dendrites, which are these.', 'start': 135.839, 'duration': 6.366}, {'end': 145.188, 'text': 'Your nucleus is this little green thing.', 'start': 143.386, 'duration': 1.802}, {'end': 146.79, 'text': "I'm not going to write out nucleus.", 'start': 145.208, 'duration': 1.582}, {'end': 148.992, 'text': 'And then you have your axon.', 'start': 147.25, 'duration': 1.742}, {'end': 153.378, 'text': "And then you've got the axon terminal.", 'start': 150.916, 'duration': 2.462}, {'end': 158.161, 'text': 'Okay But of course, a neural network, again, has a network of neurons.', 'start': 154.078, 'duration': 4.083}, {'end': 160.342, 'text': "One neuron won't cut 
it, but two neurons will.", 'start': 158.301, 'duration': 2.041}], 'summary': 'Neural network comprises multiple neurons for effective functioning.', 'duration': 24.503, 'max_score': 135.839, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY135839.jpg'}], 'start': 2.495, 'title': 'Deep learning with neural networks', 'summary': 'Delves into the history, structure, and theory of neural networks, highlighting recent advancements and applications in deep learning, with a focus on biological and artificial neural network models.', 'chapters': [{'end': 346.352, 'start': 2.495, 'title': 'Deep learning with neural networks', 'summary': 'Discusses the history of neural networks, the structure of a neuron, and the theory of how neurons work, emphasizing the recent advancements in neural networks and their application in deep learning, with a focus on the biological and artificial neural network models.', 'duration': 343.857, 'highlights': ['Neural networks have seen significant advancements since 2011 or 2012, leveraging deep learning and massive datasets to achieve remarkable results. Since 2011 or 2012, neural networks have made significant advancements, leveraging deep learning and massive datasets to achieve remarkable results.', 'The history of neural networks, from its concept in the 1940s to its recent advancements, is discussed. The history of neural networks, from its concept in the 1940s to its recent advancements, is discussed.', 'The structure and theory of neurons in both biological and artificial neural networks are explained, focusing on inputs, processing, and communication between neurons. 
The chapter explains the structure and theory of neurons in both biological and artificial neural networks, focusing on inputs, processing, and communication between neurons.']}], 'duration': 343.857, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY2495.jpg', 'highlights': ['Neural networks have seen significant advancements since 2011 or 2012, leveraging deep learning and massive datasets to achieve remarkable results.', 'The history of neural networks, from its concept in the 1940s to its recent advancements, is discussed.', 'The structure and theory of neurons in both biological and artificial neural networks are explained, focusing on inputs, processing, and communication between neurons.']}, {'end': 657.759, 'segs': [{'end': 373.616, 'src': 'embed', 'start': 347.433, 'weight': 2, 'content': [{'end': 352.114, 'text': "Now a neuron, based on the input, either fires or it doesn't fire.", 'start': 347.433, 'duration': 4.681}, {'end': 354.135, 'text': 'So how do we determine whether or not it fires?', 'start': 352.374, 'duration': 1.761}, {'end': 359.593, 'text': 'It gets passed through a threshold function, right.', 'start': 355.669, 'duration': 3.924}, {'end': 364.158, 'text': "so that's this, and a lot of times you'll see this depicted like this well, hold on.", 'start': 359.593, 'duration': 4.565}, {'end': 367.773, 'text': "That's a little better.", 'start': 367.213, 'duration': 0.56}, {'end': 373.616, 'text': "So sometimes you'll hear this referred to as a step function because it literally looks like a step.", 'start': 368.534, 'duration': 5.082}], 'summary': "Neurons either fire or don't based on input, determined by threshold function.", 'duration': 26.183, 'max_score': 347.433, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY347433.jpg'}, {'end': 461.587, 'src': 'embed', 'start': 436.845, 'weight': 0, 'content': [{'end': 445.172, 'text': 
'people are not using a step function as the threshold simply because a zero or a one is not ideal.', 'start': 436.845, 'duration': 8.327}, {'end': 448.475, 'text': "You'd rather have something more along like a scale or something like that.", 'start': 445.452, 'duration': 3.023}, {'end': 452.279, 'text': "So in general, people aren't actually using a step function.", 'start': 448.896, 'duration': 3.383}, {'end': 456.382, 'text': 'They use a sigmoid function, which looks more like this.', 'start': 453.16, 'duration': 3.222}, {'end': 461.587, 'text': "Okay, something like that, and it's called sigmoid just simply because it looks kind of like, it has an S shape.", 'start': 457.243, 'duration': 4.344}], 'summary': 'People prefer using a sigmoid function over a step function due to its more flexible nature and s-shaped appearance.', 'duration': 24.742, 'max_score': 436.845, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY436845.jpg'}, {'end': 587.439, 'src': 'heatmap', 'start': 554.656, 'weight': 0.877, 'content': [{'end': 556.837, 'text': "And in fact, actually we can do, we'll do this.", 'start': 554.656, 'duration': 2.181}, {'end': 558.298, 'text': "It doesn't really matter.", 'start': 557.778, 'duration': 0.52}, {'end': 559.139, 'text': "We'll talk about that in a second.", 'start': 558.378, 'duration': 0.761}, {'end': 566.043, 'text': "So then all of your input goes through all of your neurons in the, in the, in the most basic model, let's say.", 'start': 559.719, 'duration': 6.324}, {'end': 567.764, 'text': 'So all of these are connected.', 'start': 566.123, 'duration': 1.641}, {'end': 587.439, 'text': 'okay, and each of those connections has a unique weight associated to it, and then again from here, each of these are connected to each of these,', 'start': 575.757, 'duration': 11.682}], 'summary': 'Neural networks process input through connected neurons with unique weights.', 'duration': 32.783, 
'max_score': 554.656, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY554656.jpg'}, {'end': 632.372, 'src': 'embed', 'start': 587.439, 'weight': 1, 'content': [{'end': 601.102, 'text': 'again with another unique weight, and then these are connected to these, again unique weights.', 'start': 587.439, 'duration': 13.663}, {'end': 603.183, 'text': "okay, so that's your neural network.", 'start': 601.102, 'duration': 2.081}, {'end': 608.494, 'text': 'now over here, you have your input And this is your input layer.', 'start': 603.183, 'duration': 5.311}, {'end': 617.941, 'text': 'And then here, this is your hidden layer 1.', 'start': 609.054, 'duration': 8.887}, {'end': 619.662, 'text': "That's this area.", 'start': 617.941, 'duration': 1.721}, {'end': 622.204, 'text': "That's your hidden layer 1.", 'start': 619.802, 'duration': 2.402}, {'end': 624.286, 'text': 'And then here, we have a secondary hidden layer.', 'start': 622.204, 'duration': 2.082}, {'end': 631.231, 'text': "So that's hidden layer 2.", 'start': 624.306, 'duration': 6.925}, {'end': 632.372, 'text': 'And then this is our output.', 'start': 631.231, 'duration': 1.141}], 'summary': 'Neural network with input layer, 2 hidden layers, and output.', 'duration': 44.933, 'max_score': 587.439, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY587439.jpg'}], 'start': 347.433, 'title': 'Neural network fundamentals', 'summary': 'Covers neuron firing, threshold function, and neural network structure, including the use of sigmoid function as an activation function and the explanation of network structure with unique weights.', 'chapters': [{'end': 436.845, 'start': 347.433, 'title': 'Neuron firing and threshold function', 'summary': 'Discusses the process of neuron firing and the threshold function in artificial neural networks, including the depiction of a step function and the transformation of input values 
in the network.', 'duration': 89.412, 'highlights': ['Neurons fire based on input passing through a threshold function, where passing a certain threshold results in a return value of 1, otherwise 0.', 'In an artificial neural network, the firing of a neuron corresponds to either a zero or a one, and this value becomes an input for another connected neuron.', 'The discussion outlines the process of neuron firing and the threshold function in the context of artificial neural networks, providing insights into the transformation of input values within the network.']}, {'end': 657.759, 'start': 436.845, 'title': 'Neural network structure and activation functions', 'summary': 'Discusses the use of sigmoid function as an activation function in neural networks, and explains the structure of a neural network, including the input, hidden layers, and output, with unique weights associated to connections.', 'duration': 220.914, 'highlights': ['The sigmoid function is used as an activation function in neural networks, providing a scale-like threshold instead of a binary one. The chapter emphasizes the preference for using a sigmoid function over a step function in neural networks, as it provides a more scale-like threshold instead of binary, enabling a more flexible and continuous model.', 'Explanation of the structure of a neural network, including input, hidden layers, and output, with unique weights associated to connections. The detailed explanation of the structure of a neural network is provided, covering the input layer, hidden layers, and output layer, along with the unique weights associated with connections between neurons.', "Description of the input layer, hidden layers, and output layer in a neural network, with an example of theoretical outputs. 
The chapter includes a description of the input layer, hidden layers, and output layer in a neural network, and provides an example of theoretical outputs, such as 'one zero one,' and explains their potential meaning."]}], 'duration': 310.326, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY347433.jpg', 'highlights': ['The sigmoid function is used as an activation function in neural networks, providing a scale-like threshold instead of a binary one.', 'The detailed explanation of the structure of a neural network is provided, covering the input layer, hidden layers, and output layer, along with the unique weights associated with connections between neurons.', 'Neurons fire based on input passing through a threshold function, where passing a certain threshold results in a return value of 1, otherwise 0.']}, {'end': 972.711, 'segs': [{'end': 687.497, 'src': 'embed', 'start': 657.759, 'weight': 1, 'content': [{'end': 659.241, 'text': 'so i figure might as well show you that.', 'start': 657.759, 'duration': 1.482}, {'end': 664.004, 'text': 'So this is a deep neural network.', 'start': 660.141, 'duration': 3.863}, {'end': 665.645, 'text': "We've just modeled a deep neural network.", 'start': 664.184, 'duration': 1.461}, {'end': 670.128, 'text': "Why is it a deep neural network? 
Well, it's because we have a second hidden layer.", 'start': 665.745, 'duration': 4.383}, {'end': 674.791, 'text': "If you just have one hidden layer, that's a regular neural network.", 'start': 670.729, 'duration': 4.062}, {'end': 681.075, 'text': "If you have two or more hidden layers, Well, hot diggity, you've got yourself a deep neural network.", 'start': 674.811, 'duration': 6.264}, {'end': 681.935, 'text': "It's as simple as that.", 'start': 681.115, 'duration': 0.82}, {'end': 687.497, 'text': "And that's why I'm really not going to split up doing like a neural network and deep neural networks, because they're the same thing.", 'start': 682.275, 'duration': 5.222}], 'summary': 'Explained deep neural networks with two hidden layers.', 'duration': 29.738, 'max_score': 657.759, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY657759.jpg'}, {'end': 779.946, 'src': 'embed', 'start': 753.187, 'weight': 0, 'content': [{'end': 760.952, 'text': 'Most of the big commercial neural networks that are doing crazy things might have 500 million samples or something like that.', 'start': 753.187, 'duration': 7.765}, {'end': 766.997, 'text': "After about 500 million, it appears that with diminishing returns, you're not really seeing much better results.", 'start': 761.013, 'duration': 5.984}, {'end': 770.219, 'text': "But that's about what it takes right now.", 'start': 767.077, 'duration': 3.142}, {'end': 774.362, 'text': "so recall with like, let's say, the support vector machine.", 'start': 770.959, 'duration': 3.403}, {'end': 779.946, 'text': 'if you followed along that tutorial, we had what was called a convex optimization problems.', 'start': 774.362, 'duration': 5.584}], 'summary': 'Big neural networks use 500 million samples, support vector machines utilize convex optimization problems.', 'duration': 26.759, 'max_score': 753.187, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY753187.jpg'}, {'end': 844.913, 'src': 'embed', 'start': 819.342, 'weight': 2, 'content': [{'end': 828.029, 'text': "Whereas here, we have, even in this really simple neural network, I'm not about to count all the lines, but we have a lot of connections there.", 'start': 819.342, 'duration': 8.687}, {'end': 834.307, 'text': "Like each of those lines is a unique weight, okay? That's a lot of unique weights.", 'start': 828.249, 'duration': 6.058}, {'end': 835.427, 'text': "That's a lot of variables.", 'start': 834.447, 'duration': 0.98}, {'end': 840.45, 'text': "And that's a really challenging optimization problem, both in the mathematics,", 'start': 835.928, 'duration': 4.522}, {'end': 844.913, 'text': 'but also mainly in the computation required for a machine to be able to solve that.', 'start': 840.45, 'duration': 4.463}], 'summary': 'A simple neural network has many connections and unique weights, posing a challenging optimization problem.', 'duration': 25.571, 'max_score': 819.342, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY819342.jpg'}, {'end': 907.561, 'src': 'embed', 'start': 883.476, 'weight': 3, 'content': [{'end': 889.482, 'text': 'Like for simple classification tasks, the neural network is actually, I think, you know, close in performance to other algorithms.', 'start': 883.476, 'duration': 6.006}, {'end': 893.346, 'text': 'So, for example, you know the support vector machine.', 'start': 889.623, 'duration': 3.723}, {'end': 894.768, 'text': 'state of the art might be like?', 'start': 893.346, 'duration': 1.422}, {'end': 895.268, 'text': "I don't know.", 'start': 894.768, 'duration': 0.5}, {'end': 902.096, 'text': "let's just say an example 97% accurate on this figurative sample set.", 'start': 895.268, 'duration': 6.828}, {'end': 907.561, 'text': 'And then a neural network might achieve 98 or 99%.', 
'start': 902.876, 'duration': 4.685}], 'summary': 'Neural networks outperform svm with 98-99% accuracy.', 'duration': 24.085, 'max_score': 883.476, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY883476.jpg'}], 'start': 657.759, 'title': 'Neural networks and optimization', 'summary': 'Delves into deep neural networks, emphasizing the impact of hidden layers, and the importance of extensive input data, with examples of sample sizes ranging from 60,000 to 500 million. additionally, it discusses the challenging optimization problems in neural networks and their achievement of close to 99% accuracy in classification tasks.', 'chapters': [{'end': 774.362, 'start': 657.759, 'title': 'Deep neural networks and input data', 'summary': 'Discusses the concept of deep neural networks, emphasizing that the number of hidden layers determines whether it is a regular or deep neural network. it also highlights the significance of a large amount of input data in achieving better results, with examples of sample sizes ranging from 60,000 to 500 million in commercial neural networks.', 'duration': 116.603, 'highlights': ['The required amount of input data is a significant factor in the effectiveness of neural networks, with commercial networks often requiring around 500 million samples for optimal performance.', 'The distinction between regular neural networks and deep neural networks lies in the number of hidden layers, with two or more layers constituting a deep neural network.', 'The concept of deep neural networks is explained, where the presence of two or more hidden layers distinguishes it from a regular neural network.', 'In the example used, the number of samples for training was 60,000, which is considered relatively small compared to commercial networks that may have 500 million samples.']}, {'end': 972.711, 'start': 774.362, 'title': 'Challenges of neural network optimization', 'summary': 'Discusses the challenging 
optimization problems in neural networks due to the large number of variables and the need for huge datasets, leading to a perfect storm that has allowed neural networks to shine today, achieving close to 99% accuracy in classification tasks and excelling in modeling aspects.', 'duration': 198.349, 'highlights': ['Neural networks face challenging optimization problems due to the large number of variables and the need for huge datasets, leading to a perfect storm that has allowed neural networks to shine today. The optimization problem in neural networks is complex due to the large number of unique weights and connections, requiring significant computation and huge datasets for processing and optimizing.', 'Neural networks can achieve close to 99% accuracy in simple classification tasks, outperforming other algorithms like support vector machines. Neural networks can achieve 98-99% accuracy in simple classification tasks, surpassing the state-of-the-art accuracy of other algorithms like support vector machines, leading to a fierce competition for marginal percentage improvements.', 'Neural networks excel in modeling aspects, although the understanding of how the modeling works is not fully comprehended due to the large number of variables involved. 
Neural networks excel in modeling aspects, although the understanding of how the modeling works is not fully comprehended due to the large number of variables involved, making it challenging to demystify the answers through analysis.']}], 'duration': 314.952, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY657759.jpg', 'highlights': ['Commercial networks often require around 500 million samples for optimal performance.', 'Deep neural networks are distinguished by the presence of two or more hidden layers.', 'Neural networks face challenging optimization problems due to the large number of variables and the need for huge datasets.', 'Neural networks can achieve close to 99% accuracy in simple classification tasks, surpassing the state-of-the-art accuracy of other algorithms like support vector machines.']}, {'end': 1169.766, 'segs': [{'end': 1000.584, 'src': 'embed', 'start': 972.711, 'weight': 1, 'content': [{'end': 976.433, 'text': 'and in order to get that answer, you kind of have to employ a little bit of logic,', 'start': 972.711, 'duration': 3.722}, {'end': 982.995, 'text': 'which is something that machine learning algorithms up till very recently have not been able to do.', 'start': 977.393, 'duration': 5.602}, {'end': 985.455, 'text': 'They can do good at classification tasks and stuff like that,', 'start': 983.055, 'duration': 2.4}, {'end': 989.776, 'text': "but they're not very good at like modeling logic or figuring out how to model that logic.", 'start': 985.455, 'duration': 4.321}, {'end': 991.717, 'text': 'And up till very recently.', 'start': 989.816, 'duration': 1.901}, {'end': 995.178, 'text': 'if you wanted to make an algorithm to answer that question,', 'start': 991.717, 'duration': 3.461}, {'end': 1000.584, 'text': "you would have had to model that logic somehow and you'd have to know a lot about linguistics and stuff like that.", 'start': 996.161, 'duration': 4.423}], 'summary': 'Machine 
learning algorithms struggle with modeling logic and linguistic understanding.', 'duration': 27.873, 'max_score': 972.711, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY972711.jpg'}, {'end': 1036.709, 'src': 'embed', 'start': 1011.552, 'weight': 0, 'content': [{'end': 1019.96, 'text': "You do that a few million times, or let's say 400 million times, And it turns out the neural network can actually just figure that out on its own.", 'start': 1011.552, 'duration': 8.408}, {'end': 1023.824, 'text': 'It figures out its own model, how to figure out how to model the logic and all that.', 'start': 1020, 'duration': 3.824}, {'end': 1025.085, 'text': 'It does that on its own.', 'start': 1023.844, 'duration': 1.241}, {'end': 1029.829, 'text': 'That is what is so impressive about the neural network.', 'start': 1026.727, 'duration': 3.102}, {'end': 1036.709, 'text': "And even in the example that we're about to do in the coming tutorials, I think is impressive.", 'start': 1030.451, 'duration': 6.258}], 'summary': 'Neural network can figure out model on its own, impressive in coming tutorials.', 'duration': 25.157, 'max_score': 1011.552, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY1011552.jpg'}, {'end': 1111.738, 'src': 'embed', 'start': 1052.218, 'weight': 4, 'content': [{'end': 1056.02, 'text': 'So you have a few options that are free at least.', 'start': 1052.218, 'duration': 3.802}, {'end': 1059.322, 'text': 'So one option is like image data is something like ImageNet.', 'start': 1056.24, 'duration': 3.082}, {'end': 1061.003, 'text': 'You can just Google ImageNet.', 'start': 1059.782, 'duration': 1.221}, {'end': 1064.926, 'text': 'And it works a lot like WordNet if you follow the NLTK series I have.', 'start': 1061.423, 'duration': 3.503}, {'end': 1066.667, 'text': 'It works very similar to that.', 'start': 1065.626, 'duration': 1.041}, {'end': 1071.33, 
'text': "If you're not familiar, you can kind of poke around NLTK or even ImageNet, and you'll figure it out.", 'start': 1067.347, 'duration': 3.983}, {'end': 1073.652, 'text': "If you just kind of click around, you'll see how everything is organized.", 'start': 1071.35, 'duration': 2.302}, {'end': 1076.294, 'text': 'But anyway, so for image data, you could use ImageNet.', 'start': 1074.132, 'duration': 2.162}, {'end': 1081.858, 'text': 'Next for like text data, your first stop probably should be the Wikipedia data dump.', 'start': 1077.334, 'duration': 4.524}, {'end': 1086.002, 'text': 'So you can get the entire dump of Wikipedia, which is actually pretty cool.', 'start': 1081.878, 'duration': 4.124}, {'end': 1088.825, 'text': 'And again, neural network can begin to model things for you.', 'start': 1086.042, 'duration': 2.783}, {'end': 1092.108, 'text': 'So you can do a lot of really interesting things with Wikipedia.', 'start': 1088.925, 'duration': 3.183}, {'end': 1094.229, 'text': 'Other things are like chat.', 'start': 1093.168, 'duration': 1.061}, {'end': 1099.874, 'text': 'You can usually get like chat logs or you could crawl Reddit and stuff like that for input output, stuff like that.', 'start': 1094.289, 'duration': 5.585}, {'end': 1103.936, 'text': "For speech, I really don't know many off the top of my head.", 'start': 1101.175, 'duration': 2.761}, {'end': 1107.997, 'text': "I've never heard of like a huge dump for speech.", 'start': 1103.996, 'duration': 4.001}, {'end': 1111.738, 'text': 'But there is a website called Tatoeba.', 'start': 1108.457, 'duration': 3.281}], 'summary': 'Free data sources: imagenet for images, wikipedia for text, chat logs or reddit for chat data, and tatoeba for speech.', 'duration': 59.52, 'max_score': 1052.218, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY1052218.jpg'}, {'end': 1174.256, 'src': 'embed', 'start': 1148.478, 'weight': 3, 'content': [{'end': 1154.245,
'text': "But if you you know if you work for somebody who's got a huge server or you have a lot of money and you can buy the AWS stuff,", 'start': 1148.478, 'duration': 5.767}, {'end': 1155.807, 'text': 'Common Crawl might be a great.', 'start': 1154.245, 'duration': 1.562}, {'end': 1157.188, 'text': "And basically what Common Crawl is, is it's.", 'start': 1155.807, 'duration': 1.381}, {'end': 1160.393, 'text': "It's like every website.", 'start': 1159.391, 'duration': 1.002}, {'end': 1161.695, 'text': "It's like parsed.", 'start': 1161.074, 'duration': 0.621}, {'end': 1164.459, 'text': "Okay, so that's just incredible.", 'start': 1162.255, 'duration': 2.204}, {'end': 1166.902, 'text': "Apparently it's petabytes in size.", 'start': 1164.799, 'duration': 2.103}, {'end': 1167.503, 'text': 'So there you go.', 'start': 1166.982, 'duration': 0.521}, {'end': 1169.766, 'text': 'You need a hard drive that can even hold petabytes.', 'start': 1167.523, 'duration': 2.243}, {'end': 1170.307, 'text': 'So good luck.', 'start': 1169.826, 'duration': 0.481}, {'end': 1172.43, 'text': 'Okay, so a few hard drives.', 'start': 1170.828, 'duration': 1.602}, {'end': 1174.256, 'text': 'All right.', 'start': 1173.956, 'duration': 0.3}], 'summary': 'Common crawl contains petabytes of data from parsed websites, making it suitable for those with access to substantial server resources.', 'duration': 25.778, 'max_score': 1148.478, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY1148478.jpg'}], 'start': 972.711, 'title': 'Neural networks and free data sources for machine learning', 'summary': 'Delves into how neural networks can learn logic independently and explores various free data sources for machine learning such as imagenet, wikipedia data dump, tatoba for speech, and the petabyte-sized common crawl.', 'chapters': [{'end': 1052.118, 'start': 972.711, 'title': 'Neural networks and logic', 'summary': 'Discusses how neural networks can 
learn logic and model it on their own, without the need for explicit programming or linguistic knowledge, exemplified by training it with millions of examples.', 'duration': 79.407, 'highlights': ['Neural networks can now model logic on their own, without explicit programming or linguistic knowledge, demonstrated by training it with millions of examples.', 'Previously, machine learning algorithms were not adept at modeling logic, but with neural networks, they can figure out their own model of logic through training with a large number of examples.', 'Training the neural network with millions of examples allows it to figure out its own model of logic, exemplified by training it with 400 million examples.']}, {'end': 1169.766, 'start': 1052.218, 'title': 'Free data sources for machine learning', 'summary': 'Discusses free data sources for machine learning, including imagenet, wikipedia data dump, chat logs, tatoeba for speech, and common crawl, which is petabytes in size and requires a hard drive that can hold petabytes.', 'duration': 117.548, 'highlights': ['Common Crawl is a petabyte-sized dataset, making it one of the largest free datasets available. Common Crawl is petabytes in size, making it a vast and valuable free dataset for machine learning.', 'Wikipedia data dump is recommended as a first stop for text data, offering a comprehensive dataset for neural network modeling. The Wikipedia data dump is suggested as a primary resource for text data, offering a comprehensive dataset for neural network modeling.', 'ImageNet is a free option for image data, similar to WordNet for NLTK series, offering organized and useful data for machine learning. ImageNet is a free option for image data, resembling WordNet for NLTK series and providing organized and useful data for machine learning.', 'Tatoeba is mentioned as a website for speech data, although the speaker is not familiar with many free options for speech datasets. 
Tatoba is mentioned as a potential resource for speech data, although the speaker is not aware of many free options for speech datasets.']}], 'duration': 197.055, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY972711.jpg', 'highlights': ['Neural networks can now model logic on their own, without explicit programming or linguistic knowledge, demonstrated by training it with millions of examples.', 'Previously, machine learning algorithms were not adept at modeling logic, but with neural networks, they can figure out their own model of logic through training with a large number of examples.', 'Training the neural network with millions of examples allows it to figure out its own model of logic, exemplified by training it with 400 million examples.', 'Common Crawl is a petabyte-sized dataset, making it one of the largest free datasets available.', 'Wikipedia data dump is recommended as a first stop for text data, offering a comprehensive dataset for neural network modeling.', 'ImageNet is a free option for image data, similar to WordNet for NLTK series, offering organized and useful data for machine learning.', 'Tatoba is mentioned as a website for speech data, although the speaker is not familiar with many free options for speech datasets.']}, {'end': 1352.34, 'segs': [{'end': 1247.093, 'src': 'heatmap', 'start': 1204.304, 'weight': 0, 'content': [{'end': 1207.546, 'text': 'So you can kind of ask around on like the machine learning subreddit or something like that.', 'start': 1204.304, 'duration': 3.242}, {'end': 1210.808, 'text': 'And other people might be aware of these datasets.', 'start': 1208.126, 'duration': 2.682}, {'end': 1219.309, 'text': 'So anyway, should be relatively obvious now why companies like Facebook and Google are so big into AI and neural networks, especially right?', 'start': 1212.02, 'duration': 7.289}, {'end': 1227.779, 'text': 'They actually possess the required volumes of data to do 
some very interesting, albeit creepy sometimes, things, okay?', 'start': 1219.349, 'duration': 8.43}, {'end': 1232.163, 'text': 'So, now that we have that out of the way, how are we going to be working on neural networks?', 'start': 1228.52, 'duration': 3.643}, {'end': 1236.025, 'text': "Well, we're going to be using TensorFlow, which is a relatively new package from Google.", 'start': 1232.223, 'duration': 3.802}, {'end': 1239.347, 'text': "It's still actually in beta at the time of me filming this.", 'start': 1236.045, 'duration': 3.302}, {'end': 1247.093, 'text': 'There are other packages for machine learning, like Theano and Torch, and they all pretty much work in the exact same ways,', 'start': 1240.268, 'duration': 6.825}], 'summary': 'Large companies like facebook and google are heavily investing in ai and neural networks due to their access to vast volumes of data, with tensorflow being a key tool for working on neural networks.', 'duration': 77.267, 'max_score': 1204.304, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY1204304.jpg'}, {'end': 1300.793, 'src': 'embed', 'start': 1270.03, 'weight': 3, 'content': [{'end': 1271.932, 'text': 'So we really just need to pick one framework.', 'start': 1270.03, 'duration': 1.902}, {'end': 1279.679, 'text': "I'm choosing TensorFlow just because TensorFlow has some interesting future capabilities or current capabilities that I might use in the future,", 'start': 1272.092, 'duration': 7.587}, {'end': 1281.1, 'text': 'like distributed computing and stuff like that.', 'start': 1279.679, 'duration': 1.421}, {'end': 1286.243, 'text': "So anyways, in the next tutorial, we're going to install TensorFlow.", 'start': 1281.34, 'duration': 4.903}, {'end': 1288.885, 'text': "That's all we're going to do is install TensorFlow.", 'start': 1286.283, 'duration': 2.602}, {'end': 1293.749, 'text': 'If you already have TensorFlow installed, you can skip the next tutorial.',
'start': 1289.286, 'duration': 4.463}, {'end': 1300.793, 'text': "And for that reason, I'm just going to put this one and the next one out at the same time because hopefully some people won't need it.", 'start': 1293.769, 'duration': 7.024}], 'summary': 'Choosing tensorflow for future capabilities, will install in next tutorial.', 'duration': 30.763, 'max_score': 1270.03, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY1270030.jpg'}], 'start': 1169.826, 'title': 'Leveraging data sets for machine learning and introduction to tensorflow', 'summary': 'Discusses the importance of data sets in machine learning and highlights significant data volumes possessed by companies like facebook and google for ai and neural network development. it also introduces using tensorflow for neural networks, highlighting its beta status, compatibility with other packages, and future capabilities like distributed computing.', 'chapters': [{'end': 1227.779, 'start': 1169.826, 'title': 'Leveraging data sets for machine learning', 'summary': 'Discusses the importance of data sets in machine learning, suggests seeking data sets from sources like the machine learning subreddit, and highlights the significant data volumes possessed by companies like facebook and google for ai and neural network development.', 'duration': 57.953, 'highlights': ['Companies like Facebook and Google possess significant volumes of data for AI and neural network development, enabling them to perform interesting yet sometimes unsettling tasks.', 'Seeking data sets from sources like the machine learning subreddit is recommended as Google may not always find relevant datasets.', 'The main point of failure for machine learning often revolves around data sets, with the example of needing 100 million examples for tasks like handwriting.']}, {'end': 1352.34, 'start': 1228.52, 'title': 'Introduction to tensorflow', 'summary': 'Introduces using tensorflow for neural 
networks, highlighting its beta status, compatibility with other packages, and future capabilities like distributed computing. the next tutorial will focus on installing tensorflow on ubuntu.', 'duration': 123.82, 'highlights': ['The chapter introduces using TensorFlow for neural networks, highlighting its beta status, compatibility with other packages, and future capabilities like distributed computing.', 'The next tutorial will focus on installing TensorFlow on Ubuntu to ensure compatibility for following tutorials.', 'The chapter emphasizes the importance of picking one framework, choosing TensorFlow for its future capabilities like distributed computing.', 'The tutorial suggests that if viewers have TensorFlow installed, they can skip the next tutorial, which focuses on installing TensorFlow on Ubuntu.', 'The tutorial mentions using a virtual machine on Windows to install TensorFlow, while also noting that it works on Ubuntu and can be installed on other systems like Mac, or on Windows via Docker.']}], 'duration': 182.514, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/oYbVFhK_olY/pics/oYbVFhK_olY1169826.jpg', 'highlights': ['Companies like Facebook and Google possess significant volumes of data for AI and neural network development, enabling them to perform interesting yet sometimes unsettling tasks.', 'The main point of failure for machine learning often revolves around data sets, with the example of needing 100 million examples for tasks like handwriting.', 'The chapter introduces using TensorFlow for neural networks, highlighting its beta status, compatibility with other packages, and future capabilities like distributed computing.', 'The chapter emphasizes the importance of picking one framework, choosing TensorFlow for its future capabilities like distributed computing.', 'Seeking data sets from sources like the machine learning subreddit is recommended as Google may not always find relevant datasets.']}], 'highlights':
['Neural networks can now model logic on their own, without explicit programming or linguistic knowledge, demonstrated by training with millions of examples.', 'Commercial networks often require around 500 million samples for optimal performance.', 'The main point of failure for machine learning often revolves around data sets, with the example of needing 100 million examples for tasks like handwriting.', 'Deep neural networks are distinguished by the presence of two or more hidden layers.', 'The history of neural networks, from their conception in the 1940s to their recent advancements, is discussed.', 'The chapter introduces using TensorFlow for neural networks, highlighting its beta status, compatibility with other packages, and future capabilities like distributed computing.']}