title
Yann LeCun: Deep Learning, ConvNets, and Self-Supervised Learning | Lex Fridman Podcast #36
detail
{'title': 'Yann LeCun: Deep Learning, ConvNets, and Self-Supervised Learning | Lex Fridman Podcast #36', 'heatmap': [{'end': 1643.508, 'start': 1593.778, 'weight': 1}, {'end': 4154.103, 'start': 4055.136, 'weight': 0.837}], 'summary': "Yann lecun's pivotal role in deep learning as the founding father of convolutional neural networks and his contributions to optical character recognition and the mnist dataset are explored. the ethical implications of ai objective functions, neural networks, reasoning, and memory in ai, transforming knowledge representation, causal inference, neural network history and revival, convolutional net patent and commercialization, ai and agi benchmarks and challenges, challenges in ai learning, and challenges and components of intelligent autonomous systems in ai are also discussed, providing a comprehensive overview of key topics in the field.", 'chapters': [{'end': 36.762, 'segs': [{'end': 36.762, 'src': 'embed', 'start': 0.089, 'weight': 0, 'content': [{'end': 2.431, 'text': 'The following is a conversation with Yann LeCun.', 'start': 0.089, 'duration': 2.342}, {'end': 9.175, 'text': "He's considered to be one of the fathers of deep learning, which, if you've been hiding under a rock,", 'start': 3.171, 'duration': 6.004}, {'end': 15.379, 'text': 'is the recent revolution in AI that has captivated the world with the possibility of what machines can learn from data.', 'start': 9.175, 'duration': 6.204}, {'end': 18.602, 'text': "He's a professor at New York University,", 'start': 16.239, 'duration': 2.363}, {'end': 25.61, 'text': 'a Vice President and Chief AI Scientist at Facebook and co-recipient of the Turing Award for his work on deep learning.', 'start': 18.602, 'duration': 7.008}, {'end': 30.194, 'text': "He's probably best known as the founding father of convolutional neural networks.", 'start': 26.33, 'duration': 3.864}, {'end': 36.762, 'text': 'In particular, their application to optical character recognition and the famed MNIST dataset.', 'start': 30.795, 'duration': 5.967}], 'summary': 'Yann lecun, a pioneer in deep learning, is a professor at nyu, vp and chief ai scientist at facebook, and co-recipient of the turing award for his work on convolutional neural networks and the mnist dataset.', 'duration': 36.673, 'max_score': 0.089, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo2489.jpg'}], 'start': 0.089, 'title': "Yann lecun's role in deep learning", 'summary': "Delves into yann lecun's pivotal role as the founding father of convolutional neural networks, his contribution to optical character recognition, and the creation of the mnist dataset.", 'chapters': [{'end': 36.762, 'start': 0.089, 'title': 'Yann lecun: father of deep learning', 'summary': "Explores yann lecun's pivotal role in the deep learning revolution, as the founding father of convolutional neural networks, and his contribution to optical character recognition and the mnist dataset.", 'duration': 36.673, 'highlights': ['Yann LeCun is considered one of the fathers of deep learning, a recent revolution in AI.', 'He is a professor at New York University, Vice President and Chief AI Scientist at Facebook, and co-recipient of the Turing Award.', 'Yann LeCun is best known as the founding father of convolutional neural networks and their application to optical character recognition and the famed MNIST dataset.']}], 'duration': 36.673, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo2489.jpg', 'highlights': ['Yann LeCun is best known as the founding father of convolutional neural networks and their application to optical character recognition and the famed MNIST dataset.', 'He is a professor at New York University, Vice President and Chief AI Scientist at Facebook, and co-recipient of the Turing Award.', 'Yann LeCun is considered one of the fathers of deep learning, a recent revolution in AI.']}, {'end': 537.562, 'segs': [{'end': 204.65, 'src': 'embed', 'start': 179.665, 'weight': 1, 'content': [{'end': 185.19, 'text': 'where it is alignment for the greater good of society, that an AI system will make decisions that are difficult?', 'start': 179.665, 'duration': 5.525}, {'end': 186.742, 'text': "Well, that's the trick.", 'start': 185.922, 'duration': 0.82}, {'end': 190.664, 'text': "I mean, eventually we'll have to figure out how to do this.", 'start': 186.822, 'duration': 3.842}, {'end': 195.726, 'text': "And again, we're not starting from scratch because we've been doing this with humans for millennia.", 'start': 190.884, 'duration': 4.842}, {'end': 204.65, 'text': "So designing objective functions for people is something that we know how to do, and we don't do it by programming things,", 'start': 196.506, 'duration': 8.144}], 'summary': 'Designing objective functions for ai aligns with human decision-making processes, building on existing knowledge and experience.', 'duration': 24.985, 'max_score': 179.665, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo24179665.jpg'}, {'end': 252.437, 'src': 'embed', 'start': 219.015, 'weight': 0, 'content': [{'end': 220.456, 'text': "That's an objective function.", 'start': 219.015, 'duration': 1.441}, {'end': 227.682, 'text': "So there is this idea somehow that it's a new thing for people to try to design objective functions that are aligned with the common good.", 'start': 221.716, 'duration': 5.966}, {'end': 230.805, 'text': "But no, we've been writing laws for millennia and that's exactly what it is.", 'start': 227.962, 'duration': 2.843}, {'end': 241.074, 'text': "So that's where the science of lawmaking and computer science will come together.", 'start': 232.126, 'duration': 8.948}, {'end': 246.649, 'text': "So there's nothing special about HAL or AI systems.", 'start': 242.884, 'duration': 3.765}, {'end': 252.437, 'text': "It's just the continuation of tools used to make some of these difficult ethical judgments that laws make.", 'start': 246.869, 'duration': 5.568}], 'summary': 'Lawmaking and computer science intersect to align objective functions with common good.', 'duration': 33.422, 'max_score': 219.015, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo24219015.jpg'}, {'end': 293.127, 'src': 'embed', 'start': 267.508, 'weight': 3, 'content': [{'end': 272.652, 'text': "And we have to be flexible enough about those rules so that they can be broken when it's obvious that they shouldn't be applied.", 'start': 267.508, 'duration': 5.144}, {'end': 279.477, 'text': "So you don't see this on the camera here, but all the decoration in this room is all pictures from 2001 A Space Odyssey.", 'start': 274.053, 'duration': 5.424}, {'end': 284.361, 'text': "Wow Is that by accident? 
It's not by accident.", 'start': 281.358, 'duration': 3.003}, {'end': 285.121, 'text': "It's by design.", 'start': 284.481, 'duration': 0.64}, {'end': 288.124, 'text': 'Oh, wow.', 'start': 285.141, 'duration': 2.983}, {'end': 293.127, 'text': 'So, if you were to build HAL 10,000, so an improvement of HAL 9,000, what would you improve?', 'start': 288.504, 'duration': 4.623}], 'summary': 'Flexibility in rules, intentional design of 2001: a space odyssey decor, and discussion about improving hal 10,000.', 'duration': 25.619, 'max_score': 267.508, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo24267508.jpg'}, {'end': 371.011, 'src': 'embed', 'start': 336.584, 'weight': 4, 'content': [{'end': 341.006, 'text': 'like a set of facts that should not be shared with the human operators?', 'start': 336.584, 'duration': 4.422}, {'end': 356.439, 'text': 'I think it should be a bit like in the design of autonomous AI systems there should be the equivalent of the oath that doctors sign up to.', 'start': 342.609, 'duration': 13.83}, {'end': 365.526, 'text': "So there's certain things, certain rules that you have to abide by.", 'start': 362.583, 'duration': 2.943}, {'end': 371.011, 'text': "And we can sort of hardwire this into our machines to kind of make sure they don't go..", 'start': 365.987, 'duration': 5.024}], 'summary': 'Autonomous ai systems should adhere to a set of rules similar to the oath that doctors sign up to.', 'duration': 34.427, 'max_score': 336.584, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo24336584.jpg'}, {'end': 517.032, 'src': 'embed', 'start': 492.227, 'weight': 6, 'content': [{'end': 504.808, 'text': 'The fact that you can build gigantic neural nets, train them on relatively small amounts of data relatively with stochastic gradient descent,', 'start': 492.227, 'duration': 12.581}, {'end': 505.788, 'text': 'and that it actually works.', 'start': 504.808, 'duration': 0.98}, {'end': 509.269, 'text': 'Breaks everything you read in every textbook,', 'start': 506.968, 'duration': 2.301}, {'end': 517.032, 'text': 'every pre-deep learning textbook that told you you need to have fewer parameters than you have data samples.', 'start': 509.269, 'duration': 7.763}], 'summary': 'Gigantic neural nets trained on small data with stochastic gradient descent defy traditional textbook limits.', 'duration': 24.805, 'max_score': 492.227, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo24492227.jpg'}], 'start': 37.442, 'title': 'Ethical implications of ai objective functions', 'summary': 'Delves into the ethical implications of objective functions in ai, emphasizing the need for aligning ai objective functions with the common good, discussing potential consequences of value misalignment, and highlighting the importance of ethical considerations in ai development. 
it also explores the significance of flexible rules for ai, the risks associated with ai secrecy and lies, and the success of deep learning despite traditional limitations.', 'chapters': [{'end': 266.708, 'start': 37.442, 'title': 'Ethical ai and objective functions', 'summary': "Discusses the ethical implications of objective functions in ai, drawing parallels to human society's laws, and the challenges in designing ai objective functions aligned with the common good, highlighting the potential consequences of value misalignment and the need for ethical considerations in ai development.", 'duration': 229.266, 'highlights': ['The chapter discusses the importance of shaping objective functions in AI, drawing parallels to laws in human society to prevent undesirable behaviors, emphasizing the need for designing objective functions aligned with the common good. (Importance of objective functions and parallels to human laws)', 'Value misalignment in AI is highlighted as a potential concern, with the example of machines striving to achieve objectives without constraints leading to damaging or stupid actions, emphasizing the need for ethical considerations and constraints in AI development. (Concerns about value misalignment and need for ethical constraints in AI)', 'The conversation explores the concept of designing objective functions for AI aligned with the greater good of society, drawing parallels to the design of objective functions for humans through legal code, highlighting the need for ethical and utilitarian considerations in AI decision-making. (Designing objective functions for AI aligned with societal good)']}, {'end': 537.562, 'start': 267.508, 'title': 'Ai ethics and design', 'summary': 'Discusses the importance of flexible rules for ai, the potential pitfalls of ai secrecy and lies, the need for ethical guidelines akin to the hippocratic oath for ai, and the surprising success of deep learning despite traditional textbook limitations.', 'duration': 270.054, 'highlights': ['The importance of flexible rules for AI Flexible rules are crucial for AI, allowing for exceptions when necessary, as exemplified by the intentional design of a room with 2001 A Space Odyssey decorations.', 'Potential pitfalls of AI secrecy and lies The discussion highlights the potential negative consequences of AI holding secrets and telling lies, emphasizing the need for ethical considerations in AI design.', 'The need for ethical guidelines akin to the Hippocratic Oath for AI The comparison to the Hippocratic Oath underscores the necessity of establishing ethical guidelines and boundaries for AI systems, similar to those adhered to by doctors.', 'Surprising success of deep learning despite traditional limitations The discussion reveals the surprising success of deep learning in training large neural nets with relatively small data and non-convex objectives, challenging traditional textbook constraints.']}], 'duration': 500.12, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo2437442.jpg', 'highlights': ['The chapter emphasizes the need for aligning AI objective functions with the common good, drawing parallels to laws in human society to prevent undesirable behaviors.', 'Value misalignment in AI is highlighted as a potential concern, emphasizing the need for ethical considerations and constraints in AI development.', 'The conversation explores the concept of designing objective functions for AI aligned with the greater good of society, highlighting the 
need for ethical and utilitarian considerations in AI decision-making.', 'Flexible rules are crucial for AI, allowing for exceptions when necessary, as exemplified by the intentional design of a room with 2001 A Space Odyssey decorations.', 'The discussion highlights the potential negative consequences of AI holding secrets and telling lies, emphasizing the need for ethical considerations in AI design.', 'The comparison to the Hippocratic Oath underscores the necessity of establishing ethical guidelines and boundaries for AI systems, similar to those adhered to by doctors.', 'The discussion reveals the surprising success of deep learning in training large neural nets with relatively small data and non-convex objectives, challenging traditional textbook constraints.']}, {'end': 1143.489, 'segs': [{'end': 623.747, 'src': 'embed', 'start': 598.352, 'weight': 0, 'content': [{'end': 606.818, 'text': "There's also the idea somehow that I've been convinced of since I was an undergrad that, even before, that intelligence is inseparable from learning.", 'start': 598.352, 'duration': 8.466}, {'end': 617.285, 'text': 'So the idea somehow that you can create an intelligent machine by basically programming, for me, was a non-starter from the start.', 'start': 606.918, 'duration': 10.367}, {'end': 623.747, 'text': 'Every intelligent entity that we know about arrives at this intelligence through learning.', 'start': 617.745, 'duration': 6.002}], 'summary': 'Intelligence is inseparable from learning; programming alone cannot create intelligence.', 'duration': 25.395, 'max_score': 598.352, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo24598352.jpg'}, {'end': 711.642, 'src': 'embed', 'start': 672.559, 'weight': 2, 'content': [{'end': 680.806, 'text': 'The question is how much prior structure do you have to put in the neural net so that something like human reasoning will emerge from learning?', 'start': 672.559, 'duration': 8.247}, {'end': 688.033, 'text': 'Another question is all of our model of what reasoning is that are based on logic,', 'start': 681.787, 'duration': 6.246}, {'end': 692.677, 'text': 'are discrete and are therefore incompatible with gradient-based learning?', 'start': 688.033, 'duration': 4.644}, {'end': 696.018, 'text': "And I'm a very strong believer in this idea of gradient-based learning.", 'start': 693.417, 'duration': 2.601}, {'end': 702.34, 'text': "I don't believe that other types of learning that don't use kind of gradient information, if you want.", 'start': 696.518, 'duration': 5.822}, {'end': 707.481, 'text': "So you don't like discrete mathematics? You don't like anything discrete? 
Well, it's not that I don't like it.", 'start': 702.58, 'duration': 4.901}, {'end': 711.642, 'text': "It's just that it's incompatible with learning, and I'm a big fan of learning right?", 'start': 707.601, 'duration': 4.041}], 'summary': "Exploring neural net's prior structure for human-like reasoning and gradient-based learning compatibility.", 'duration': 39.083, 'max_score': 672.559, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo24672559.jpg'}, {'end': 996.024, 'src': 'embed', 'start': 968.862, 'weight': 1, 'content': [{'end': 973.285, 'text': 'I mean, how you access and write into an associative memory in an efficient way.', 'start': 968.862, 'duration': 4.423}, {'end': 977.548, 'text': 'I mean sort of the original memory network maybe had something like the right architecture,', 'start': 973.305, 'duration': 4.243}, {'end': 983.773, 'text': "but if you try to scale up a memory network so that the memory contains all the Wikipedia, it doesn't quite work.", 'start': 977.548, 'duration': 6.225}, {'end': 987.856, 'text': "Right So there's a need for new ideas there.", 'start': 984.073, 'duration': 3.783}, {'end': 989.918, 'text': "But it's not the only form of reasoning.", 'start': 988.637, 'duration': 1.281}, {'end': 996.024, 'text': "So there's another form of reasoning, which is very classical also in some types of AI.", 'start': 989.998, 'duration': 6.026}], 'summary': 'Efficient access and writing into associative memory, scaling up memory network, need for new ideas in reasoning.', 'duration': 27.162, 'max_score': 968.862, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo24968862.jpg'}, {'end': 1069.874, 'src': 'embed', 'start': 1045.749, 'weight': 3, 'content': [{'end': 1052.101, 'text': 'And that allows you to, by energy minimization, figure out a sequence of action that optimizes a particular objective function,', 'start': 1045.749, 'duration': 6.352}, {'end': 1058.532, 'text': "which minimizes the number of times you're going to hit something and the energy you're going to spend doing the gesture and et cetera.", 'start': 1052.101, 'duration': 6.431}, {'end': 1062.189, 'text': "So that's a form of reasoning.", 'start': 1059.867, 'duration': 2.322}, {'end': 1063.489, 'text': 'Planning is a form of reasoning.', 'start': 1062.469, 'duration': 1.02}, {'end': 1069.874, 'text': 'And perhaps what led to the ability of humans to reason is the fact that,', 'start': 1063.549, 'duration': 6.325}], 'summary': 'Energy minimization optimizes actions to reduce collisions and energy expenditure, a form of reasoning.', 'duration': 24.125, 'max_score': 1045.749, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241045749.jpg'}], 'start': 537.562, 'title': 'Neural networks, reasoning, and memory in ai', 'summary': 'Explores the intuition behind neural networks and the significance of learning through gradient-based learning, as well as the requirements for reasoning and memory in ai, emphasizing the need for a working memory and iterative information processing.', 'chapters': [{'end': 768.904, 'start': 537.562, 'title': 'Neural networks and gradient-based learning', 'summary': 'Discusses the intuition behind neural networks, the inseparability of intelligence from learning, and the compatibility of neural networks with reasoning through gradient-based learning, emphasizing the significance of learning and the incompatibility with 
discrete mathematics in machine learning.', 'duration': 231.342, 'highlights': ['The idea that intelligence is inseparable from learning, making machine learning an obvious path, is emphasized, highlighting the crucial role of learning in the development of intelligence.', 'The compatibility of neural networks with reasoning through gradient-based learning is discussed, raising questions about the emergence of human-like reasoning through learning and the incompatibility of discrete mathematics with learning in machine learning.', 'The significance of gradient-based learning and its incompatibility with discrete mathematics is stressed, illustrating the difference in mathematical approaches between deep learning and traditional computer science.', "The contrast between the compulsive attention to details in computer science and the 'sloppiness' in machine learning, emphasizing the nature of machine learning as the science of sloppiness in comparison to the exactness of computer science."]}, {'end': 1143.489, 'start': 771.065, 'title': 'Reasoning and memory in ai', 'summary': 'Discusses the requirements for a system capable of reasoning, highlighting the need for a working memory, a network to access and process information iteratively, and different forms of reasoning such as energy minimization and planning.', 'duration': 372.424, 'highlights': ['The need for a working memory and a network to access and process information iteratively The system requires a working memory to store factual episodic information and a network to access and process this information iteratively, facilitating a chain of reasoning.', 'Different forms of reasoning such as energy minimization and planning Energy minimization and planning are identified as forms of reasoning, with energy minimization enabling planning and optimal control based on a model of the environment and the body.', 'Limitations of logic representation and the transition to probabilistic models The discussion outlines the limitations of logic representation, emphasizing the brittleness of variables and constraints, leading to the transition to probabilistic models and graphical representations in AI systems.']}], 'duration': 605.927, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo24537562.jpg', 'highlights': ['The idea that intelligence is inseparable from learning, making machine learning an obvious path, is emphasized, highlighting the crucial role of learning in the development of intelligence.', 'The need for a working memory and a network to access and process information iteratively The system requires a working memory to store factual episodic information and a network to access and process this information iteratively, facilitating a chain of reasoning.', 'The compatibility of neural networks with reasoning through gradient-based learning is discussed, raising questions about the emergence of human-like reasoning through learning and the incompatibility of discrete mathematics with learning in machine learning.', 'Different forms of reasoning such as energy minimization and planning Energy minimization and planning are identified as forms of reasoning, with energy minimization enabling planning and optimal control based on a model of the environment and the body.', 'The significance of gradient-based learning and its incompatibility with discrete mathematics is stressed, illustrating the difference in mathematical approaches between deep learning and traditional computer 
science.']}, {'end': 1479.995, 'segs': [{'end': 1215.83, 'src': 'embed', 'start': 1145.03, 'weight': 0, 'content': [{'end': 1151.412, 'text': 'So there is certainly a lot of interesting work going on in this area.', 'start': 1145.03, 'duration': 6.382}, {'end': 1153.833, 'text': 'The main issue with this is knowledge acquisition.', 'start': 1151.472, 'duration': 2.361}, {'end': 1164.77, 'text': 'How do you reduce a bunch of data to a graph of this type? It relies on the expert, on the human being to encode, to add knowledge.', 'start': 1153.913, 'duration': 10.857}, {'end': 1166.832, 'text': "That's essentially impractical.", 'start': 1165.091, 'duration': 1.741}, {'end': 1169.175, 'text': "It's not scalable.", 'start': 1167.133, 'duration': 2.042}, {'end': 1170.076, 'text': "That's a big question.", 'start': 1169.235, 'duration': 0.841}, {'end': 1176.62, 'text': 'The second question is do you want to represent knowledge as symbols And do you want to manipulate them with logic?', 'start': 1170.116, 'duration': 6.504}, {'end': 1178.681, 'text': "And again, that's incompatible with learning.", 'start': 1177.261, 'duration': 1.42}, {'end': 1188.803, 'text': 'So one suggestion which Jeff Hinton has been advocating for many decades is replace symbols by vectors.', 'start': 1179.401, 'duration': 9.402}, {'end': 1194.124, 'text': 'Think of it as pattern of activities in a bunch of neurons or units or whatever you want to call them.', 'start': 1189.403, 'duration': 4.721}, {'end': 1198.505, 'text': 'And replace logic by continuous functions.', 'start': 1195.145, 'duration': 3.36}, {'end': 1201.386, 'text': 'And that becomes now compatible.', 'start': 1199.966, 'duration': 1.42}, {'end': 1210.789, 'text': "There's a very good set of ideas written in a paper about 10 years ago by Leon Boutou, who is here at Facebook.", 'start': 1201.846, 'duration': 8.943}, {'end': 1215.83, 'text': 'The title of the paper is From Machine Learning to Machine Reasoning.', 'start': 1213.229, 'duration': 2.601}], 'summary': 'Challenges in knowledge acquisition and representation discussed, advocating for vector-based approach in machine reasoning.', 'duration': 70.8, 'max_score': 1145.03, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241145030.jpg'}, {'end': 1307.23, 'src': 'embed', 'start': 1260.311, 'weight': 3, 'content': [{'end': 1272.064, 'text': 'So his worry is that the current neural networks are not able to learn what causes what causal inference between things.', 'start': 1260.311, 'duration': 11.753}, {'end': 1275.26, 'text': "So I think he's right and wrong about this.", 'start': 1272.799, 'duration': 2.461}, {'end': 1283.324, 'text': "If he's talking about the sort of classic type of neural nets, people sort of didn't worry too much about this.", 'start': 1275.68, 'duration': 7.644}, {'end': 1289.166, 'text': "But there's a lot of people now working on causal inference, and there's a paper that just came out last week by Léon Boutou, among others,", 'start': 1283.844, 'duration': 5.322}, {'end': 1290.667, 'text': 'David Lopez-Paz and a bunch of other people.', 'start': 1289.166, 'duration': 1.501}, {'end': 1301.642, 'text': 'exactly on that problem of how do you get a neural net to pay attention to real causal relationships,', 'start': 1292.109, 'duration': 9.533}, {'end': 1307.23, 'text': 'which may also solve issues of bias in data and things like this.', 'start': 1301.642, 'duration': 5.588}], 'summary': 'Current neural networks struggle with 
causal inference, but recent research aims to address this through attention to causal relationships, potentially resolving bias in data.', 'duration': 46.919, 'max_score': 1260.311, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241260311.jpg'}, {'end': 1448.82, 'src': 'embed', 'start': 1420.157, 'weight': 4, 'content': [{'end': 1421.818, 'text': 'So they get the causal relationship backwards.', 'start': 1420.157, 'duration': 1.661}, {'end': 1426.221, 'text': "And it's because their understanding of the world and intuitive physics is not that great, right?", 'start': 1422.618, 'duration': 3.603}, {'end': 1428.243, 'text': 'I mean these are, like you know, four or five-year-old kids.', 'start': 1426.281, 'duration': 1.962}, {'end': 1433.267, 'text': "You know it gets better and then you understand that this it can't be right?", 'start': 1429.884, 'duration': 3.383}, {'end': 1437.577, 'text': 'but there are many things which we can.', 'start': 1434.056, 'duration': 3.521}, {'end': 1446.719, 'text': 'because of our common sense, understanding of things what people call common sense and our understanding of physics, we can.', 'start': 1437.577, 'duration': 9.142}, {'end': 1448.82, 'text': "there's a lot of stuff that we can figure out causality.", 'start': 1446.719, 'duration': 2.101}], 'summary': "Children's understanding of intuitive physics is limited, but they can still figure out causality through common sense and physics intuition.", 'duration': 28.663, 'max_score': 1420.157, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241420157.jpg'}], 'start': 1145.03, 'title': 'Transforming knowledge representation and causal inference in neural networks', 'summary': 'Delves into the challenges of knowledge representation and reasoning, proposing the replacement of symbols with vectors and logic with continuous functions. 
it also discusses the debate on causal inference in neural networks and recent work in this area, emphasizing the challenges in human understanding of causality.', 'chapters': [{'end': 1243.396, 'start': 1145.03, 'title': 'Transforming knowledge representation and reasoning', 'summary': "Explores the challenges of knowledge representation and reasoning, emphasizing the impracticality of relying on human expertise for data encoding, the incompatibility of representing knowledge as symbols with learning, and the proposal to replace symbols with vectors and logic with continuous functions, as advocated by jeff hinton and outlined in leon boutou's paper 'from machine learning to machine reasoning'.", 'duration': 98.366, 'highlights': ["Leon Boutou's paper 'From Machine Learning to Machine Reasoning' outlines the proposal to replace symbols with vectors and logic with continuous functions, making the system compatible with learning.", 'Jeff Hinton advocates for replacing symbols by vectors and logic by continuous functions, addressing the impracticality and scalability issues of relying on human expertise for data encoding.', 'The main issue in knowledge representation is the reliance on human expertise for data encoding, which is deemed impractical and non-scalable.']}, {'end': 1479.995, 'start': 1243.396, 'title': 'Causal inference in neural networks', 'summary': 'Discusses the debate on the ability of current neural networks to learn causal relationships, highlighting recent work on causal inference in neural nets and the challenges in human understanding of causality.', 'duration': 236.599, 'highlights': ['Recent work on causal inference in neural nets A paper by Léon Boutou, David Lopez-Paz, and others addresses the problem of getting neural nets to pay attention to real causal relationships.', 'Challenges in human understanding of causality The discussion includes the limitations of human understanding of causality, as illustrated by the example of children attributing the cause of wind to the movement of branches and trees.', 'Debate on the ability of current neural networks to learn causal relationships There is a debate on whether current neural networks are capable of learning causal inference, with concerns raised by Judea Pearl about the limitations of neural networks in understanding causality.']}], 'duration': 334.965, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241145030.jpg', 'highlights': ["Leon Boutou's paper 'From Machine Learning to Machine Reasoning' proposes replacing symbols with vectors and logic with continuous functions for system compatibility with learning.", 'Jeff Hinton advocates for replacing symbols by vectors and logic by continuous functions to address impracticality and scalability issues of human expertise for data encoding.', 'The main issue in knowledge representation is the reliance on human expertise for data encoding, which is deemed impractical and non-scalable.', 'Recent work on causal inference in neural nets addresses the problem of getting neural nets to pay attention to real causal relationships.', 'Challenges in human understanding of causality include limitations illustrated by the example of children attributing the cause of wind to the movement of branches and trees.', 'Debate on the ability of current neural networks to learn causal relationships raises concerns about the limitations of neural networks in understanding causality.']}, {'end': 1869.6, 'segs': [{'end': 1535.355, 'src': 
'embed', 'start': 1506.391, 'weight': 0, 'content': [{'end': 1507.712, 'text': "Yeah, it wasn't called deep learning yet.", 'start': 1506.391, 'duration': 1.321}, {'end': 1508.673, 'text': 'It was just called neural nets.', 'start': 1507.792, 'duration': 0.881}, {'end': 1509.674, 'text': 'Neural networks.', 'start': 1508.693, 'duration': 0.981}, {'end': 1512.867, 'text': 'Yeah, they lost interest.', 'start': 1511.627, 'duration': 1.24}, {'end': 1518.029, 'text': 'I mean, I think I would put that around 1995, at least the machine learning community.', 'start': 1513.828, 'duration': 4.201}, {'end': 1526.092, 'text': 'There was always a neural net community, but it became kind of disconnected from sort of mainstream machine learning, if you want.', 'start': 1518.049, 'duration': 8.043}, {'end': 1531.794, 'text': 'There were, it was basically electrical engineering that kept at it.', 'start': 1528.153, 'duration': 3.641}, {'end': 1533.975, 'text': 'Nice And computer science.', 'start': 1531.814, 'duration': 2.161}, {'end': 1535.355, 'text': 'Just gave up.', 'start': 1533.995, 'duration': 1.36}], 'summary': 'Neural nets lost interest around 1995, disconnecting from mainstream machine learning.', 'duration': 28.964, 'max_score': 1506.391, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241506391.jpg'}, {'end': 1578.662, 'src': 'embed', 'start': 1556.146, 'weight': 1, 'content': [{'end': 1565.072, 'text': 'it was very hard to make them work in the sense that you would implement backprop in your favorite language,', 'start': 1556.146, 'duration': 8.926}, {'end': 1570.155, 'text': "and that favorite language was not Python, it was not MATLAB, it was not any of those things, because they didn't exist.", 'start': 1565.072, 'duration': 5.083}, {'end': 1574.258, 'text': 'You had to write it in Fortran or something like this.', 'start': 1570.795, 'duration': 3.463}, {'end': 1578.662, 'text': 'So you would experiment with it.', 'start': 1576.34, 'duration': 2.322}], 'summary': 'Implementing backprop in non-python languages was challenging, requiring experimentation with languages like fortran.', 'duration': 22.516, 'max_score': 1556.146, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241556146.jpg'}, {'end': 1643.508, 'src': 'heatmap', 'start': 1593.778, 'weight': 1, 'content': [{'end': 1595.74, 'text': "So you'd say, I give up.", 'start': 1593.778, 'duration': 1.962}, {'end': 1599.983, 'text': "Also, you're trying it with batch gradient, which, you know, isn't really sufficient.", 'start': 1596.28, 'duration': 3.703}, {'end': 1606.187, 'text': "So there's a lot of bad good tricks that you had to know to make those things work, or you had to reinvent.", 'start': 1600.303, 'duration': 5.884}, {'end': 1609.43, 'text': "And a lot of people just didn't, and they just couldn't make it work.", 'start': 1607.028, 'duration': 2.402}, {'end': 1611.971, 'text': "So that's one thing.", 'start': 1611.351, 'duration': 0.62}, {'end': 1619.375, 'text': "The investment in software platform to be able to kind of display things, figure out why things don't work,", 'start': 1612.471, 'duration': 6.904}, {'end': 1622.177, 'text': 'kind of get a good intuition for how to get them to work,', 'start': 1619.375, 'duration': 2.802}, {'end': 1626.819, 'text': 'have enough flexibility so you can create network architectures like convolutional nets and stuff like that.', 'start': 1622.177, 'duration': 4.642}, {'end': 
1628.9, 'text': 'It was hard.', 'start': 1628.38, 'duration': 0.52}, {'end': 1630.501, 'text': 'I mean, you had to write everything from scratch.', 'start': 1629, 'duration': 1.501}, {'end': 1632.803, 'text': "And again, you didn't have any Python or MATLAB or anything, right?", 'start': 1630.521, 'duration': 2.282}, {'end': 1634.904, 'text': 'I read that.', 'start': 1634.343, 'duration': 0.561}, {'end': 1643.508, 'text': 'sorry to interrupt, but I read that you wrote in Lisp your first versions of Lynette with the convolutional neural networks, which, by the way,', 'start': 1634.904, 'duration': 8.604}], 'summary': 'Developing neural networks without existing resources was challenging, requiring reinvention and flexibility.', 'duration': 49.73, 'max_score': 1593.778, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241593778.jpg'}, {'end': 1626.819, 'src': 'embed', 'start': 1596.28, 'weight': 2, 'content': [{'end': 1599.983, 'text': "Also, you're trying it with batch gradient, which, you know, isn't really sufficient.", 'start': 1596.28, 'duration': 3.703}, {'end': 1606.187, 'text': "So there's a lot of bad good tricks that you had to know to make those things work, or you had to reinvent.", 'start': 1600.303, 'duration': 5.884}, {'end': 1609.43, 'text': "And a lot of people just didn't, and they just couldn't make it work.", 'start': 1607.028, 'duration': 2.402}, {'end': 1611.971, 'text': "So that's one thing.", 'start': 1611.351, 'duration': 0.62}, {'end': 1619.375, 'text': "The investment in software platform to be able to kind of display things, figure out why things don't work,", 'start': 1612.471, 'duration': 6.904}, {'end': 1622.177, 'text': 'kind of get a good intuition for how to get them to work,', 'start': 1619.375, 'duration': 2.802}, {'end': 1626.819, 'text': 'have enough flexibility so you can create network architectures like convolutional nets and stuff like that.', 'start': 1622.177, 'duration': 4.642}], 'summary': 'Challenges with batch gradient; investment in software platform for intuitive network architectures.', 'duration': 30.539, 'max_score': 1596.28, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241596280.jpg'}, {'end': 1694.371, 'src': 'embed', 'start': 1670.956, 'weight': 3, 'content': [{'end': 1677.757, 'text': 'we invented this idea of basically having modules that know how to forward propagate and back propagate gradients,', 'start': 1670.956, 'duration': 6.801}, {'end': 1680.078, 'text': 'and then interconnecting those modules in a graph.', 'start': 1677.757, 'duration': 2.321}, {'end': 1687.324, 'text': 'Leombo2 had made proposals about this in the late 80s, and we were able to implement this using a Lisp system.', 'start': 1681.618, 'duration': 5.706}, {'end': 1694.371, 'text': 'Eventually we wanted to use that system to build production code for character recognition at Bell Labs.', 'start': 1688.405, 'duration': 5.966}], 'summary': 'Invented modules for gradient propagation and graph interconnection, implemented in a lisp system for character recognition at bell labs.', 'duration': 23.415, 'max_score': 1670.956, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241670956.jpg'}, {'end': 1767.849, 'src': 'embed', 'start': 1737.275, 'weight': 4, 'content': [{'end': 1743.679, 'text': 'Now, at the time also, or today, this would turn into Torch or PyTorch or TensorFlow or whatever.', 'start': 
1737.275, 'duration': 6.404}, {'end': 1747.301, 'text': "We'd put it in open source, everybody would use it and realize it's good.", 'start': 1743.999, 'duration': 3.302}, {'end': 1757.147, 'text': "Back before 1995, working at AT&T, there's no way the lawyers would let you release anything in open source of this nature.", 'start': 1748.261, 'duration': 8.886}, {'end': 1759.408, 'text': 'And so we could not distribute our code, really.', 'start': 1757.827, 'duration': 1.581}, {'end': 1767.849, 'text': 'And on that point, and sorry to go on a million tangents, but on that point I also read that there was some almost patent,', 'start': 1760.822, 'duration': 7.027}], 'summary': "Before 1995, at&t couldn't release code in open source; now, it's accessible to all.", 'duration': 30.574, 'max_score': 1737.275, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241737275.jpg'}, {'end': 1850.634, 'src': 'embed', 'start': 1820.27, 'weight': 5, 'content': [{'end': 1821.65, 'text': 'They have a different concept, but you know.', 'start': 1820.27, 'duration': 1.38}, {'end': 1828.322, 'text': "I never actually strongly believed in this, but I don't believe in this kind of patent.", 'start': 1823.112, 'duration': 5.21}, {'end': 1830.987, 'text': "Facebook basically doesn't believe in this kind of patent.", 'start': 1828.943, 'duration': 2.044}, {'end': 1838.965, 'text': "Google files patents because they've been burned with Apple.", 'start': 1834.101, 'duration': 4.864}, {'end': 1844.389, 'text': "And so now they do this for defensive purpose, but usually they say, we're not going to sue you if you infringe.", 'start': 1839.305, 'duration': 5.084}, {'end': 1847.091, 'text': 'Facebook has a similar policy.', 'start': 1845.029, 'duration': 2.062}, {'end': 1850.634, 'text': 'They say, you know, we file patent on certain things for defensive purpose.', 'start': 1847.291, 'duration': 3.343}], 'summary': 'Facebook and google file patents for defensive purposes due to past conflicts with apple.', 'duration': 30.364, 'max_score': 1820.27, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241820270.jpg'}], 'start': 1480.035, 'title': 'Neural networks history and revival', 'summary': 'Discusses the challenges faced in neural network development, including the loss of interest in the 1990s leading to disconnection from mainstream machine learning, and the subsequent resurgence over a decade later. 
it also covers the early challenges in developing neural networks, such as implementation difficulties in lisp and controversies around patenting algorithms and software ideas.', 'chapters': [{'end': 1619.375, 'start': 1480.035, 'title': 'Ai winter and rebirth', 'summary': 'Discusses the loss of interest in neural nets around 1995 due to challenges in implementation and lack of software platforms, leading to disconnection from mainstream machine learning, with a subsequent resurgence over a decade later.', 'duration': 139.34, 'highlights': ['Neural nets lost interest around 1995 due to challenges in implementation and lack of software platforms, leading to disconnection from mainstream machine learning.', 'Implementing neural nets was difficult as it required writing code in languages like Fortran, making basic mistakes in initialization and network size, leading to a low success rate in training.', 'The use of batch gradient was not sufficient, and there were many tricks needed to make neural nets work, causing many to give up on them.']}, {'end': 1869.6, 'start': 1619.375, 'title': 'History of neural networks', 'summary': 'Discusses the early challenges in developing neural networks, particularly in lisp, including the invention of modules for interconnecting gradients, the difficulties of open sourcing the code, and the controversial topic of patenting algorithms and software ideas.', 'duration': 250.225, 'highlights': ['The invention of modules for interconnecting gradients in neural networks using a Lisp system around 1991. Invented around 1991.', 'The challenges of open sourcing the code before 1995 due to legal restrictions at AT&T. Legal restrictions before 1995.', 'The discussion on patenting algorithms and software ideas, with the speaker expressing skepticism and mentioning the policies of Facebook and Google towards patents. 
Policies of Facebook and Google towards patents.']}], 'duration': 389.565, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241480035.jpg', 'highlights': ['Neural nets lost interest around 1995 due to challenges in implementation and lack of software platforms, leading to disconnection from mainstream machine learning.', 'Implementing neural nets was difficult as it required writing code in languages like Fortran, making basic mistakes in initialization and network size, leading to a low success rate in training.', 'The use of batch gradient was not sufficient, and there were many tricks needed to make neural nets work, causing many to give up on them.', 'The invention of modules for interconnecting gradients in neural networks using a Lisp system around 1991.', 'The challenges of open sourcing the code before 1995 due to legal restrictions at AT&T.', 'The discussion on patenting algorithms and software ideas, with the speaker expressing skepticism and mentioning the policies of Facebook and Google towards patents.']}, {'end': 2652.863, 'segs': [{'end': 1971.816, 'src': 'embed', 'start': 1908.866, 'weight': 0, 'content': [{'end': 1915.589, 'text': 'And in 1994, a check reading system was deployed in ATM machines.', 'start': 1908.866, 'duration': 6.723}, {'end': 1920.331, 'text': 'In 1995, it was for large check reading machines in back offices, etc.', 'start': 1916.29, 'duration': 4.041}, {'end': 1928.647, 'text': 'And those systems were developed by an engineering group that we were collaborating with at AT&T, and they were commercialized by NCR,', 'start': 1920.711, 'duration': 7.936}, {'end': 1930.968, 'text': 'which at the time was a subsidiary of AT&T.', 'start': 1928.647, 'duration': 2.321}, {'end': 1942.515, 'text': 'Now AT&T split up in 1996, early 1996, and the lawyers just looked at all the patents and they distributed the patents among the various companies.', 'start': 1931.689, 'duration': 10.826}, {'end': 1948.759, 'text': 'They gave the convolutional net patent to NCR because they were actually selling products that used it.', 'start': 1943.015, 'duration': 5.744}, {'end': 1951.5, 'text': 'But nobody at NCR had any idea what a convolutional net was.', 'start': 1949.279, 'duration': 2.221}, {'end': 1953.563, 'text': 'Yeah Okay.', 'start': 1952.381, 'duration': 1.182}, {'end': 1962.75, 'text': "So between 1996 and 2007, so there's a whole period until 2002 where I didn't actually work on machine learning or consciousness.", 'start': 1954.004, 'duration': 8.746}, {'end': 1966.012, 'text': 'I resumed working on this around 2002.', 'start': 1962.77, 'duration': 3.242}, {'end': 1971.816, 'text': 'And between 2002 and 2007, I was working on them, crossing my fingers that nobody at NCR would notice and nobody noticed.', 'start': 1966.012, 'duration': 5.804}], 'summary': 'Atm check reading system deployed in 1994, developed by at&t engineering group, commercialized by ncr.', 'duration': 62.95, 'max_score': 1908.866, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241908866.jpg'}, {'end': 2028.346, 'src': 'embed', 'start': 1998.155, 'weight': 6, 'content': [{'end': 2000.616, 'text': "It's that we don't have the technology to build the things we want to build.", 'start': 1998.155, 'duration': 2.461}, {'end': 2003.957, 'text': 'We want to build intelligent virtual assistants that have common sense.', 'start': 2000.636, 'duration': 3.321}, {'end': 2006.618, 'text': "We don't have 
monopoly on good ideas for this.", 'start': 2004.978, 'duration': 1.64}, {'end': 2007.439, 'text': "We don't believe we do.", 'start': 2006.698, 'duration': 0.741}, {'end': 2010.2, 'text': "Maybe others believe they do, but we don't.", 'start': 2007.999, 'duration': 2.201}, {'end': 2017.362, 'text': "Okay If a startup tells you they have the secret to human level intelligence and common sense, don't believe them.", 'start': 2010.48, 'duration': 6.882}, {'end': 2017.742, 'text': "They don't.", 'start': 2017.462, 'duration': 0.28}, {'end': 2028.346, 'text': "And it's going to take the entire work of the world research community for a while to get to the point where you can go off and,", 'start': 2018.362, 'duration': 9.984}], 'summary': 'Challenges in developing intelligent virtual assistants, no monopoly on good ideas, global research needed.', 'duration': 30.191, 'max_score': 1998.155, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241998155.jpg'}, {'end': 2164.149, 'src': 'embed', 'start': 2140.183, 'weight': 5, 'content': [{'end': 2147.685, 'text': "So there's a lot of people who who try to take advantage of the hype for business reasons and so on.", 'start': 2140.183, 'duration': 7.502}, {'end': 2158.688, 'text': 'But let me sort of talk to this idea that the new ideas, the ideas that push the field forward, may not yet have a benchmark,', 'start': 2148.265, 'duration': 10.423}, {'end': 2160.788, 'text': 'or it may be very difficult to establish a benchmark.', 'start': 2158.688, 'duration': 2.1}, {'end': 2161.108, 'text': 'I agree.', 'start': 2160.808, 'duration': 0.3}, {'end': 2162.329, 'text': "That's part of the process.", 'start': 2161.348, 'duration': 0.981}, {'end': 2164.149, 'text': 'Establishing benchmarks is part of the process.', 'start': 2162.609, 'duration': 1.54}], 'summary': 'Establishing benchmarks for new ideas is part of the process.', 'duration': 23.966, 'max_score': 2140.183, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo242140183.jpg'}, {'end': 2219.466, 'src': 'embed', 'start': 2191.536, 'weight': 8, 'content': [{'end': 2199.798, 'text': "something like intelligence, like reasoning, like maybe you don't like the term, but AGI echoes of that kind of?", 'start': 2191.536, 'duration': 8.262}, {'end': 2208.265, 'text': 'Yeah, so a lot of people are working on interactive environments in which you can train and test intelligent systems.', 'start': 2200.498, 'duration': 7.767}, {'end': 2219.466, 'text': 'So, for example, you know The classical paradigm of supervised learning is that you have a dataset, you partition it into a training set,', 'start': 2208.785, 'duration': 10.681}], 'summary': 'Research focuses on interactive environments for training intelligent systems.', 'duration': 27.93, 'max_score': 2191.536, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo242191536.jpg'}, {'end': 2347.962, 'src': 'embed', 'start': 2318.462, 'weight': 7, 'content': [{'end': 2320.223, 'text': "It's very, very specialized.", 'start': 2318.462, 'duration': 1.761}, {'end': 2321.663, 'text': "We think it's general.", 'start': 2320.883, 'duration': 0.78}, {'end': 2323.784, 'text': "We'd like to think of ourselves as having general intelligence.", 'start': 2321.703, 'duration': 2.081}, {'end': 2325.144, 'text': "We don't, we're very specialized.", 'start': 2323.844, 'duration': 1.3}, {'end': 2328.605, 'text': "We're only 
slightly more general than- Why does it feel general??", 'start': 2326.124, 'duration': 2.481}, {'end': 2331.346, 'text': 'So you kind of the term general.', 'start': 2328.925, 'duration': 2.421}, {'end': 2341.195, 'text': "I think what's impressive about humans is ability to learn, as we were talking about learning to learn in just so many different domains.", 'start': 2332.086, 'duration': 9.109}, {'end': 2347.962, 'text': "It's perhaps not arbitrarily general, but just you can learn in many domains and integrate that knowledge somehow.", 'start': 2341.335, 'duration': 6.627}], 'summary': "Humans' ability to learn in diverse domains showcases their impressive but slightly specialized intelligence.", 'duration': 29.5, 'max_score': 2318.462, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo242318462.jpg'}], 'start': 1869.6, 'title': 'Convolutional net patent and commercialization and ai and agi benchmarks and challenges', 'summary': "Delves into the patent and commercialization of convolutional nets, filed in 1989 and 1990, leading to applications in check reading systems for atm and back offices, commercialized by ncr after at&t's split in 1996. it also discusses the need for benchmarks in ai development, cautioning against exaggerated claims of human-level intelligence, exploring the limitations of human intelligence, and the challenges in developing artificial general intelligence (agi).", 'chapters': [{'end': 1971.816, 'start': 1869.6, 'title': 'Convolutional net patent and commercialization', 'summary': "Discusses the patent and commercialization of convolutional nets, filed in 1989 and 1990, leading to applications in check reading systems for atm and back offices, commercialized by ncr after at&t's split in 1996, and the unawareness of ncr about convolutional nets until 2007.", 'duration': 102.216, 'highlights': ['In 1994, a check reading system using convolutional nets was deployed in ATM machines.', 'In 1995, the technology was utilized for large check reading machines in back offices.', 'The commercialization of the systems was carried out by NCR, a subsidiary of AT&T at the time.', 'AT&T split up in 1996, and the convolutional net patent was given to NCR, despite their lack of knowledge about it.', "Between 2002 and 2007, the speaker worked on convolutional nets, hoping that NCR would not notice, and they didn't."]}, {'end': 2652.863, 'start': 1972.116, 'title': 'Ai and agi benchmarks and challenges', 'summary': 'Discusses the need for benchmarks in ai development, cautioning against exaggerated claims of human-level intelligence, and explores the limitations of human intelligence and the challenges in developing artificial general intelligence (agi). 
it also emphasizes the specialized nature of human intelligence and the vastness of unapprehended tasks, drawing parallels to the concept of heat and entropy in physics.', 'duration': 680.747, 'highlights': ['The need for benchmarks in AI development The discussion emphasizes the importance of benchmarks in AI development to test and validate ideas, highlighting the necessity for practical testing and application, ultimately accelerating industry progress.', 'Cautions against exaggerated claims of human-level intelligence The chapter warns against unwarranted claims of possessing the secret to human-level intelligence, emphasizing the collaborative effort of the global research community and the lack of monopoly on good ideas in AI development.', 'Limitations and specialized nature of human intelligence The specialized nature of human intelligence is underscored, challenging the notion of human intelligence as general and emphasizing the ability to learn in multiple domains, despite its specialized nature.', "Challenges in developing artificial general intelligence (AGI) The chapter discusses the challenges in establishing benchmarks for AGI, focusing on the difficulties in testing new ideas and pushing the field forward, particularly in terms of reasoning and intelligence, while cautioning against the term 'AGI' and highlighting the impressive adaptability of the human brain."]}], 'duration': 783.263, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo241869600.jpg', 'highlights': ['In 1994, a check reading system using convolutional nets was deployed in ATM machines.', 'In 1995, the technology was utilized for large check reading machines in back offices.', 'The commercialization of the systems was carried out by NCR, a subsidiary of AT&T at the time.', 'AT&T split up in 1996, and the convolutional net patent was given to NCR, despite their lack of knowledge about it.', "Between 2002 and 2007, the speaker worked on convolutional nets, hoping that NCR would not notice, and they didn't.", 'The need for benchmarks in AI development The discussion emphasizes the importance of benchmarks in AI development to test and validate ideas, highlighting the necessity for practical testing and application, ultimately accelerating industry progress.', 'Cautions against exaggerated claims of human-level intelligence The chapter warns against unwarranted claims of possessing the secret to human-level intelligence, emphasizing the collaborative effort of the global research community and the lack of monopoly on good ideas in AI development.', 'Limitations and specialized nature of human intelligence The specialized nature of human intelligence is underscored, challenging the notion of human intelligence as general and emphasizing the ability to learn in multiple domains, despite its specialized nature.', "Challenges in developing artificial general intelligence (AGI) The chapter discusses the challenges in establishing benchmarks for AGI, focusing on the difficulties in testing new ideas and pushing the field forward, particularly in terms of reasoning and intelligence, while cautioning against the term 'AGI' and highlighting the impressive adaptability of the human brain."]}, {'end': 3178.624, 'segs': [{'end': 2713.849, 'src': 'embed', 'start': 2683.956, 'weight': 0, 'content': [{'end': 2686.318, 'text': 'Damn impressive demonstration of intelligence, whatever.', 'start': 2683.956, 'duration': 2.362}, {'end': 2692.843, 'text': 'And so on that topic, 
most successes in deep learning have been in supervised learning.', 'start': 2686.738, 'duration': 6.105}, {'end': 2697.706, 'text': 'What.. is your view on unsupervised learning.', 'start': 2693.844, 'duration': 3.862}, {'end': 2707.488, 'text': 'Is there a hope to reduce involvement of human input and still have successful systems that have practical use?', 'start': 2697.926, 'duration': 9.562}, {'end': 2709.608, 'text': "Yeah, I mean there's definitely a hope.", 'start': 2708.368, 'duration': 1.24}, {'end': 2711.088, 'text': "It's more than a hope, actually.", 'start': 2709.968, 'duration': 1.12}, {'end': 2713.849, 'text': "It's mounting evidence for it.", 'start': 2711.228, 'duration': 2.621}], 'summary': 'Deep learning successes mainly in supervised learning, hope for unsupervised learning with mounting evidence.', 'duration': 29.893, 'max_score': 2683.956, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo242683956.jpg'}, {'end': 2765.327, 'src': 'embed', 'start': 2732.382, 'weight': 2, 'content': [{'end': 2734.464, 'text': 'you know, when you say unsupervised learning, oh my God, you know.', 'start': 2732.382, 'duration': 2.082}, {'end': 2736.586, 'text': 'machines are going to learn by themselves and without supervision.', 'start': 2734.464, 'duration': 2.122}, {'end': 2739.65, 'text': 'You know, they see this as..', 'start': 2737.347, 'duration': 2.303}, {'end': 2740.411, 'text': "Where's the parents??", 'start': 2739.65, 'duration': 0.761}, {'end': 2743.933, 'text': 'Yeah, so I call it self-supervised learning, because, in fact,', 'start': 2740.831, 'duration': 3.102}, {'end': 2750.398, 'text': 'the underlying algorithms that I use are the same algorithms as the supervised learning algorithms,', 'start': 2743.933, 'duration': 6.465}, {'end': 2758.284, 'text': 'except that what we train them to do is not predict a particular set of variables, like the category of an image,', 'start': 2750.398, 'duration': 7.886}, {'end': 2765.327, 'text': 'and not to predict a set of variables that have been provided by human labelers.', 'start': 2760.524, 'duration': 4.803}], 'summary': 'Self-supervised learning uses same algorithms as supervised learning but trains machines without human-labeled variables.', 'duration': 32.945, 'max_score': 2732.382, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo242732382.jpg'}, {'end': 3099.622, 'src': 'embed', 'start': 3049.819, 'weight': 4, 'content': [{'end': 3052.201, 'text': 'And if the game is deterministic, it works fine.', 'start': 3049.819, 'duration': 2.382}, {'end': 3061.468, 'text': 'And that includes, you know, feeding the system with the action that your little character is going to take.', 'start': 3054.903, 'duration': 6.565}, {'end': 3069.034, 'text': 'The problem comes from the fact that the real world and most games are not entirely predictable.', 'start': 3063.089, 'duration': 5.945}, {'end': 3072.977, 'text': "And so there you get those blurry predictions and you can't do planning with blurry predictions.", 'start': 3069.714, 'duration': 3.263}, {'end': 3084.108, 'text': 'So, if you have a perfect model of the world, you can, in your head, run this model with a hypothesis for a sequence of actions,', 'start': 3074.495, 'duration': 9.613}, {'end': 3086.451, 'text': "and you're going to predict the outcome of that sequence of actions.", 'start': 3084.108, 'duration': 2.343}, {'end': 3093.676, 'text': 'But if your model is imperfect, 
how can you plan? Yeah, it quickly explodes.', 'start': 3088.594, 'duration': 5.082}, {'end': 3099.622, 'text': "What are your thoughts on the extension of this, which topic I'm super excited about?", 'start': 3094.857, 'duration': 4.765}], 'summary': 'Imperfect models hinder planning with blurry predictions, impacting outcomes.', 'duration': 49.803, 'max_score': 3049.819, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo243049819.jpg'}, {'end': 3178.624, 'src': 'embed', 'start': 3126.414, 'weight': 6, 'content': [{'end': 3127.594, 'text': 'asking for human input?', 'start': 3126.414, 'duration': 1.18}, {'end': 3133.617, 'text': "Do you see value in that kind of work? I don't see transformative value.", 'start': 3128.054, 'duration': 5.563}, {'end': 3140.796, 'text': "It's going to make things that we can already do more efficient, or they will learn slightly more efficiently,", 'start': 3134.637, 'duration': 6.159}, {'end': 3143.978, 'text': "but it's not going to make machines significantly more intelligent.", 'start': 3140.796, 'duration': 3.182}, {'end': 3149.081, 'text': 'And, by the way, there is no opposition.', 'start': 3145.579, 'duration': 3.502}, {'end': 3157.586, 'text': "there's no conflict between self-supervised learning, reinforcement learning and supervised learning, or imitation learning or active learning.", 'start': 3149.081, 'duration': 8.505}, {'end': 3163.209, 'text': 'I see self-supervised learning as a preliminary to all of the above.', 'start': 3159.127, 'duration': 4.082}, {'end': 3175.663, 'text': 'Yes, The example I use very often is how is it that so, if you use classical reinforcement learning, deep reinforcement learning,', 'start': 3163.87, 'duration': 11.793}, {'end': 3178.624, 'text': 'if you want the best methods?', 'start': 3175.663, 'duration': 2.961}], 'summary': 'Self-supervised learning is seen as a preliminary step to reinforcement learning, with no conflict between different learning methods mentioned.', 'duration': 52.21, 'max_score': 3126.414, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo243126414.jpg'}], 'start': 2652.883, 'title': 'Challenges in AI learning', 'summary': 'Discusses challenges in unsupervised and self-supervised learning in AI, focusing on representing uncertainty in predictions, addressing issues in real-world data generation, and exploring the potential of active learning.', 'chapters': [{'end': 2980.619, 'start': 2652.883, 'title': 'Challenges of unsupervised learning in AI', 'summary': 'Discusses the challenges of unsupervised learning in AI, emphasizing the potential of self-supervised learning and the technical obstacles in representing uncertainty in predictions for image and video recognition.', 'duration': 327.736, 'highlights': ['The chapter emphasizes the potential of self-supervised learning and the mounting evidence for its success in reducing human input in AI systems. Self-supervised learning shows potential for reducing human input in AI systems with mounting evidence for its success.', 'The speaker explains the technical challenges in representing uncertainty in predictions for image and video recognition, leading to the limited success of self-supervised learning in these contexts. 
Technical challenges in representing uncertainty in predictions for image and video recognition have limited the success of self-supervised learning in these contexts.', 'The discussion outlines the difference between unsupervised and self-supervised learning, highlighting the use of the same algorithms as supervised learning to train machines to reconstruct input rather than predict specific variables provided by human labelers. The chapter outlines the difference between unsupervised and self-supervised learning, emphasizing the use of the same algorithms as supervised learning to train machines to reconstruct input instead of predicting specific variables provided by human labelers.']}, {'end': 3178.624, 'start': 2981.638, 'title': 'Challenges in self-supervised learning', 'summary': 'Discusses the challenges of self-supervised learning for visual scenes, addressing issues of data generation and handling uncertainty in the real world, while also touching on the potential of active learning and its impact on machine intelligence.', 'duration': 196.986, 'highlights': ['The challenge of handling uncertainty in the real world and its impact on planning with imperfect models. The real world and most games are not entirely predictable, leading to blurry predictions that hinder effective planning.', 'The potential of active learning for more efficient learning processes, although it may not significantly enhance machine intelligence. Active learning has the potential to make existing processes more efficient, but it may not lead to a significant boost in machine intelligence.', 'The preliminary role of self-supervised learning in relation to reinforcement learning, supervised learning, imitation learning, and active learning. Self-supervised learning is seen as a preliminary step to other learning methods, laying the foundation for further advancements in machine learning.']}], 'duration': 525.741, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo242652883.jpg', 'highlights': ['Self-supervised learning shows potential for reducing human input in AI systems with mounting evidence for its success.', 'The chapter emphasizes the potential of self-supervised learning and the mounting evidence for its success in reducing human input in AI systems.', 'The chapter outlines the difference between unsupervised and self-supervised learning, emphasizing the use of the same algorithms as supervised learning to train machines to reconstruct input instead of predicting specific variables provided by human labelers.', 'The discussion outlines the difference between unsupervised and self-supervised learning, highlighting the use of the same algorithms as supervised learning to train machines to reconstruct input rather than predict specific variables provided by human labelers.', 'The challenge of handling uncertainty in the real world and its impact on planning with imperfect models.', 'The real world and most games are not entirely predictable, leading to blurry predictions that hinder effective planning.', 'The potential of active learning for more efficient learning processes, although it may not significantly enhance machine intelligence.', 'Active learning has the potential to make existing processes more efficient, but it may not lead to a significant boost in machine intelligence.', 'The preliminary role of self-supervised learning in relation to reinforcement learning, supervised learning, imitation learning, and active learning.', 'Self-supervised 
learning is seen as a preliminary step to other learning methods, laying the foundation for further advancements in machine learning.']}, {'end': 4537.952, 'segs': [{'end': 3243.125, 'src': 'embed', 'start': 3204.269, 'weight': 0, 'content': [{'end': 3224.131, 'text': 'and can reach better than human level with about the equivalent of 200 years of training playing against itself.', 'start': 3204.269, 'duration': 19.862}, {'end': 3230.059, 'text': "It's 200 years, right? It's not something that no human can, could ever do.", 'start': 3225.273, 'duration': 4.786}, {'end': 3231.882, 'text': "I mean, I'm not sure what lesson to take away from that.", 'start': 3230.099, 'duration': 1.783}, {'end': 3239.904, 'text': 'Okay, now take those algorithms, the best algorithms we have today, to train a car to drive itself.', 'start': 3232.342, 'duration': 7.562}, {'end': 3243.125, 'text': 'It would probably have to drive millions of hours.', 'start': 3241.345, 'duration': 1.78}], 'summary': 'Ai can surpass human level with 200 years of training. self-driving cars need millions of hours for training.', 'duration': 38.856, 'max_score': 3204.269, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo243204269.jpg'}, {'end': 3369.125, 'src': 'embed', 'start': 3342.25, 'weight': 2, 'content': [{'end': 3347.654, 'text': "There's some imitation and supervised learning because we have a driving instructor that tells us occasionally what to do.", 'start': 3342.25, 'duration': 5.404}, {'end': 3356.02, 'text': "But most of the learning is learning the model, learning physics that we've done since we were babies.", 'start': 3348.815, 'duration': 7.205}, {'end': 3358.122, 'text': "That's where almost all the learning..", 'start': 3356.42, 'duration': 1.702}, {'end': 3360.403, 'text': 'And the physics is somewhat transferable from..', 'start': 3358.122, 'duration': 2.281}, {'end': 3362.925, 'text': "It's transferable from scene to scene.", 'start': 3360.403, 'duration': 2.522}, {'end': 3365.127, 'text': 'Stupid things are the same everywhere.', 'start': 3363.306, 'duration': 1.821}, {'end': 3369.125, 'text': 'Yeah, I mean, if you have experience of the world,', 'start': 3365.563, 'duration': 3.562}], 'summary': 'Supervised learning with driving instructor; learning model and physics from experience.', 'duration': 26.875, 'max_score': 3342.25, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo243342250.jpg'}, {'end': 3445.592, 'src': 'embed', 'start': 3419.536, 'weight': 4, 'content': [{'end': 3423.518, 'text': 'The question is what other type of learning are you allowed to do?', 'start': 3419.536, 'duration': 3.982}, {'end': 3429.062, 'text': "If what you're allowed to do is train on some gigantic dataset of labeled digit, that's called transfer learning?", 'start': 3423.638, 'duration': 5.424}, {'end': 3429.782, 'text': 'We know that works.', 'start': 3429.182, 'duration': 0.6}, {'end': 3433.224, 'text': 'We do this at Facebook in production.', 'start': 3431.643, 'duration': 1.581}, {'end': 3440.189, 'text': 'We train large convolutional nets to predict hashtags that people type on Instagram, and we train on billions of images, literally billions.', 'start': 3433.525, 'duration': 6.664}, {'end': 3444.711, 'text': 'And then we chop off the last layer and fine tune on whatever task we want.', 'start': 3441.009, 'duration': 3.702}, {'end': 3445.592, 'text': 'That works really well.', 'start': 3444.931, 'duration': 
0.661}], 'summary': 'Transfer learning involves training on large datasets and fine-tuning for specific tasks, proven effective at Facebook using billions of images (see the fine-tuning sketch at the end of this document).', 'duration': 26.056, 'max_score': 3419.536, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo243419536.jpg'}, {'end': 3570.153, 'src': 'embed', 'start': 3543.07, 'weight': 3, 'content': [{'end': 3551.461, 'text': 'Okay, if I want to get a particular level of error rate for this task, I know I need a million samples.', 'start': 3543.07, 'duration': 8.391}, {'end': 3557.349, 'text': 'Can I do self-supervised pre-training to reduce this to about 100 or something?', 'start': 3552.303, 'duration': 5.046}, {'end': 3560.193, 'text': 'And you think the answer there is self-supervised pre-training?', 'start': 3557.79, 'duration': 2.403}, {'end': 3564.512, 'text': 'Yeah, some form, some form of it.', 'start': 3560.851, 'duration': 3.661}, {'end': 3567.633, 'text': 'Telling you, active learning, but you disagree.', 'start': 3565.072, 'duration': 2.561}, {'end': 3570.153, 'text': "No, it's not useless.", 'start': 3568.713, 'duration': 1.44}], 'summary': 'About a million labeled samples may be needed to reach a target error rate; self-supervised pre-training aims to cut that to roughly 100.', 'duration': 27.083, 'max_score': 3543.07, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo243543070.jpg'}, {'end': 3712.197, 'src': 'embed', 'start': 3680.417, 'weight': 5, 'content': [{'end': 3685.922, 'text': 'And then as technology progresses, we end up relying more and more on learning.', 'start': 3680.417, 'duration': 5.505}, {'end': 3691.145, 'text': "That's the history of character recognition, the history of speech recognition, now computer vision, natural language processing.", 'start': 3686.042, 'duration': 5.103}, {'end': 3703.172, 'text': 'And I think the same is going to happen with autonomous driving that currently, the methods that are closest to providing some level of autonomy,', 'start': 3691.585, 'duration': 11.587}, {'end': 3712.197, 'text': "some decent level of autonomy, where you don't expect a driver to kind of do anything, is where you constrain the world, so you only run within.", 'start': 3703.172, 'duration': 9.025}], 'summary': 'Advancements in technology lead to increased reliance on learning in character recognition, speech recognition, computer vision, and natural language processing, with implications for autonomous driving.', 'duration': 31.78, 'max_score': 3680.417, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo243680417.jpg'}, {'end': 3789.668, 'src': 'embed', 'start': 3763.484, 'weight': 6, 'content': [{'end': 3770.025, 'text': 'and possibly using a combination of self-supervised learning and model-based reinforcement or something like that.', 'start': 3763.484, 'duration': 6.541}, {'end': 3777.027, 'text': 'But ultimately learning will be not just at the core, but really the fundamental part of the system.', 'start': 3770.865, 'duration': 6.162}, {'end': 3779.667, 'text': "Yeah, it already is, but it'll become more and more.", 'start': 3777.167, 'duration': 2.5}, {'end': 3783.961, 'text': 'What do you think it takes to build a system with human-level intelligence?', 'start': 3780.396, 'duration': 3.565}, {'end': 3789.668, 'text': 'You talked about the AI system in the movie Her being way out of reach, our current reach.', 'start': 3784.101, 'duration': 5.567}], 'summary': 'Future AI systems will heavily 
rely on learning and self-supervised learning, aiming for human-level intelligence.', 'duration': 26.184, 'max_score': 3763.484, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo243763484.jpg'}, {'end': 3964.819, 'src': 'embed', 'start': 3936.624, 'weight': 7, 'content': [{'end': 3941.188, 'text': 'mostly by observation, with a little bit of interaction and learning those models of the world?', 'start': 3936.624, 'duration': 4.564}, {'end': 3945.892, 'text': "Because I think that's really a crucial piece of an intelligent autonomous system.", 'start': 3941.228, 'duration': 4.664}, {'end': 3951.294, 'text': 'So if you think about the architecture of an intelligent autonomous system, it needs to have a predictive model of the world.', 'start': 3946.372, 'duration': 4.922}, {'end': 3956.396, 'text': 'So something that says, here is the world at time t, here is the state of the world at time t plus one, if I take this action.', 'start': 3951.334, 'duration': 5.062}, {'end': 3960.938, 'text': "And it's not a single answer, it can be a bunch of- Yeah, it can be a distribution, yeah.", 'start': 3957.596, 'duration': 3.342}, {'end': 3964.819, 'text': "Well, but we don't know how to represent distributions in three-dimensional continuous spaces.", 'start': 3961.638, 'duration': 3.181}], 'summary': 'Intelligent autonomous systems need predictive models of the world for effective functioning.', 'duration': 28.195, 'max_score': 3936.624, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo243936624.jpg'}, {'end': 4154.103, 'src': 'heatmap', 'start': 4055.136, 'weight': 0.837, 'content': [{'end': 4063.14, 'text': 'You have the hardwired contentment objective computer, if you want, calculator.', 'start': 4055.136, 'duration': 8.004}, {'end': 4065.041, 'text': 'And then you have the three components.', 'start': 4063.941, 'duration': 1.1}, {'end': 4068.403, 'text': 'One is the objective predictor, which basically predicts your level of contentment.', 'start': 4065.221, 'duration': 3.182}, {'end': 4071.586, 'text': 'One is the model of the world.', 'start': 4068.983, 'duration': 2.603}, {'end': 4080.514, 'text': "And there's a third module I didn't mention, which is a module that will figure out the best course of action to optimize an objective,", 'start': 4072.547, 'duration': 7.967}, {'end': 4081.095, 'text': 'given your model.', 'start': 4080.514, 'duration': 0.581}, {'end': 4088.093, 'text': 'Okay?. a policy network or something like that, right?', 'start': 4082.756, 'duration': 5.337}, {'end': 4095.557, 'text': 'Now you need those three components to act autonomously, intelligently, and you can be stupid in three different ways.', 'start': 4089.434, 'duration': 6.123}, {'end': 4098.738, 'text': 'You can be stupid because your model of the world is wrong.', 'start': 4096.136, 'duration': 2.602}, {'end': 4108.683, 'text': 'You can be stupid because your objective is not aligned with what you actually want to achieve, okay? 
In humans, that would be a psychopath.', 'start': 4099.419, 'duration': 9.264}, {'end': 4114.988, 'text': 'And then the third way you can be stupid is that you have the right model.', 'start': 4110.044, 'duration': 4.944}, {'end': 4122.694, 'text': "you have the right objective, but you're unable to figure out a course of action to optimize your objective given your model.", 'start': 4114.988, 'duration': 7.706}, {'end': 4127.298, 'text': 'Some people who are in charge of big countries actually have all three that are wrong.', 'start': 4124.176, 'duration': 3.122}, {'end': 4130.68, 'text': 'All right.', 'start': 4127.818, 'duration': 2.862}, {'end': 4132.002, 'text': "Which countries? I don't know.", 'start': 4130.961, 'duration': 1.041}, {'end': 4144.22, 'text': 'Okay So if we think about this agent, if you think about the movie Her, criticized the art project that is Sophia the robot.', 'start': 4132.182, 'duration': 12.038}, {'end': 4154.103, 'text': 'And what that project essentially does is uses our natural inclination to anthropomorphize things that look like human and give them more.', 'start': 4144.88, 'duration': 9.223}], 'summary': 'A computer with 3 components: objective predictor, world model, action optimization. stupidity due to wrong model, misaligned objective, or inability to optimize action.', 'duration': 98.967, 'max_score': 4055.136, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo244055136.jpg'}, {'end': 4122.694, 'src': 'embed', 'start': 4099.419, 'weight': 8, 'content': [{'end': 4108.683, 'text': 'You can be stupid because your objective is not aligned with what you actually want to achieve, okay? In humans, that would be a psychopath.', 'start': 4099.419, 'duration': 9.264}, {'end': 4114.988, 'text': 'And then the third way you can be stupid is that you have the right model.', 'start': 4110.044, 'duration': 4.944}, {'end': 4122.694, 'text': "you have the right objective, but you're unable to figure out a course of action to optimize your objective given your model.", 'start': 4114.988, 'duration': 7.706}], 'summary': 'Being stupid: not aligned with objective or unable to optimize given model.', 'duration': 23.275, 'max_score': 4099.419, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo244099419.jpg'}, {'end': 4234.133, 'src': 'embed', 'start': 4200.432, 'weight': 9, 'content': [{'end': 4209.238, 'text': 'And the people who created Sophia are not honestly publicly communicating, trying to teach the public.', 'start': 4200.432, 'duration': 8.806}, {'end': 4212.527, 'text': "Here's a tough question.", 'start': 4211.667, 'duration': 0.86}, {'end': 4218.809, 'text': "Don't you think the same thing is?", 'start': 4213.448, 'duration': 5.361}, {'end': 4230.552, 'text': 'scientists in industry and research are taking advantage of the same misunderstanding in the public when they create AI companies or publish stuff Some companies?', 'start': 4218.809, 'duration': 11.743}, {'end': 4230.792, 'text': 'yes,', 'start': 4230.552, 'duration': 0.24}, {'end': 4234.133, 'text': "I mean, there is no sense of, there's no desire to delude.", 'start': 4231.152, 'duration': 2.981}], 'summary': 'Concerns raised about lack of transparent communication in ai creation and public understanding.', 'duration': 33.701, 'max_score': 4200.432, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo244200432.jpg'}, {'end': 4309.37, 'src': 
'embed', 'start': 4277.203, 'weight': 10, 'content': [{'end': 4282.128, 'text': "So I don't think we're going to get machines that really understand language without some level of grounding in the real world.", 'start': 4277.203, 'duration': 4.925}, {'end': 4287.875, 'text': "And it's not clear to me that language is a high enough bandwidth medium to communicate how the real world works.", 'start': 4282.509, 'duration': 5.366}, {'end': 4299.924, 'text': 'Can you talk about what grounding means to you? Grounding means that there is this classic problem of common sense reasoning, the Winograd schema.', 'start': 4289.616, 'duration': 10.308}, {'end': 4309.37, 'text': "I tell you the trophy doesn't fit in the suitcase because it's too big, or the trophy doesn't fit in the suitcase because it's too small,", 'start': 4301.725, 'duration': 7.645}], 'summary': 'Language understanding in machines requires grounding in real world, common sense reasoning, and communication challenges.', 'duration': 32.167, 'max_score': 4277.203, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo244277203.jpg'}, {'end': 4378.117, 'src': 'embed', 'start': 4338.069, 'weight': 11, 'content': [{'end': 4346.573, 'text': 'I think you need some low-level perception of the world, be it visual, touch, whatever, but some higher bandwidth perception of the world.', 'start': 4338.069, 'duration': 8.504}, {'end': 4351.498, 'text': "So by reading all the world's text, you still might not have enough information That's right.", 'start': 4346.613, 'duration': 4.885}, {'end': 4356.982, 'text': "There's a lot of things that just will never appear in text and that you can't really infer.", 'start': 4352.519, 'duration': 4.463}, {'end': 4363.427, 'text': 'So I think common sense will emerge from certainly a lot of language interaction,', 'start': 4357.062, 'duration': 6.365}, {'end': 4371.732, 'text': 'but also with watching videos or perhaps even interacting in virtual environments and possibly robot interacting in the real world.', 'start': 4363.427, 'duration': 8.305}, {'end': 4378.117, 'text': "But I don't actually believe necessarily that this last one is absolutely necessary, but I think there's a need for some grounding.", 'start': 4371.772, 'duration': 6.345}], 'summary': 'Low-level perception + high bandwidth perception essential for common sense. text alone may not provide enough info. language interaction, videos, virtual/robotic interaction contribute to common sense emergence.', 'duration': 40.048, 'max_score': 4338.069, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo244338069.jpg'}, {'end': 4478.023, 'src': 'embed', 'start': 4432.312, 'weight': 13, 'content': [{'end': 4436.554, 'text': "When you know for sure that something bad is going to happen to you, you kind of give up, right? 
It's not there anymore.", 'start': 4432.312, 'duration': 4.242}, {'end': 4438.655, 'text': "It's uncertainty that creates fear.", 'start': 4437.534, 'duration': 1.121}, {'end': 4443.096, 'text': "So the punchline is we're not going to have autonomous intelligence without emotions.", 'start': 4439.515, 'duration': 3.581}, {'end': 4448.758, 'text': 'Okay Whatever the heck emotions are.', 'start': 4444.717, 'duration': 4.041}, {'end': 4453.18, 'text': "So you mentioned very practical things of fear, but there's a lot of other mess around it.", 'start': 4448.778, 'duration': 4.402}, {'end': 4455.141, 'text': 'But there are kind of the results of drives.', 'start': 4453.4, 'duration': 1.741}, {'end': 4461.305, 'text': "Yeah, there's deeper biological stuff going on, and I've talked to a few folks on this.", 'start': 4456.42, 'duration': 4.885}, {'end': 4466.371, 'text': "There's fascinating stuff that ultimately connects to our brain.", 'start': 4461.345, 'duration': 5.026}, {'end': 4470.475, 'text': 'If we create an AGI system, sorry.', 'start': 4467.332, 'duration': 3.143}, {'end': 4471.656, 'text': 'Human level intelligence.', 'start': 4470.495, 'duration': 1.161}, {'end': 4478.023, 'text': 'Human level intelligence system, and you get to ask her one question what would that question be?', 'start': 4471.696, 'duration': 6.327}], 'summary': 'Uncertainty fuels fear, emotions essential for AGI, deeper biological connections to intelligence.', 'duration': 45.711, 'max_score': 4432.312, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo244432312.jpg'}], 'start': 3178.624, 'title': 'Challenges and components of intelligent autonomous systems in AI', 'summary': 'Discusses the limitations of model-free reinforcement learning, highlighting the time it takes to reach human-level performance in playing Atari games, to surpass human level in StarCraft, and for a car to learn to drive itself. It emphasizes the importance of predictive models and the potential of model-based reinforcement learning. Additionally, it covers self-supervised learning, transfer learning, and the potential impact of deep learning in autonomous driving. The key components of an intelligent autonomous system, potential pitfalls, misunderstandings of AI capabilities, and the necessity of grounding in AI are also addressed.', 'chapters': [{'end': 3391.254, 'start': 3178.624, 'title': 'Challenges in reinforcement learning', 'summary': 'Discusses the limitations of model-free reinforcement learning, highlighting that it takes approximately 80 hours to reach human-level performance in playing Atari games, 200 years of training to surpass human level in StarCraft, and millions of hours for a car to learn to drive itself without causing accidents. The importance of predictive models in human learning and the potential of model-based reinforcement learning are also emphasized.', 'duration': 212.63, 'highlights': ['It takes about 80 hours of training for model-free reinforcement learning to reach human-level performance in playing Atari games. Model-free reinforcement learning takes about 80 hours of training to reach the level that any human can reach in about 15 minutes.', 'AlphaStar can reach better than human level in StarCraft with approximately 200 years of training playing against itself. 
AlphaStar, to play StarCraft, can reach better than human level with about the equivalent of 200 years of training playing against itself.', 'Training a car to drive itself using current algorithms would require millions of hours and potentially cause thousands of accidents. The best algorithms today to train a car to drive itself would probably have to drive millions of hours and potentially cause thousands of accidents.', 'Humans possess predictive models of the world, allowing them to learn quickly and avoid stupid actions. Humans have predictive models of the world that allow them to learn quickly and avoid stupid actions.', 'The importance of learning physics and models in driving is emphasized, with the assertion that most learning is about understanding the model of the world. Most learning in driving is about learning the model, learning physics that has been understood since childhood.']}, {'end': 3936.624, 'start': 3391.274, 'title': 'Self-supervised learning and transfer learning', 'summary': 'Discusses the concepts of self-supervised learning, transfer learning, and the potential of deep learning in autonomous driving, emphasizing the importance of self-supervised pre-training to reduce the need for labeled data and its potential impact on medical image analysis.', 'duration': 545.35, 'highlights': ['The chapter emphasizes the potential of self-supervised pre-training to significantly reduce the number of labeled samples needed for supervised tasks, impacting fields like medical image analysis. Self-supervised pre-training could potentially reduce the need for a million labeled samples to about 100 for a specific level of error rate, affecting fields like medical image analysis.', 'The discussion on transfer learning highlights the effectiveness of training large convolutional nets on billions of images for tasks like predicting Instagram hashtags and fine-tuning on specific tasks, achieving superior performance compared to training from scratch. Training large convolutional nets on billions of images and fine-tuning on specific tasks can outperform training from scratch, potentially impacting tasks like ImageNet record beating.', 'The potential of deep learning in autonomous driving is explored, with an emphasis on the gradual shift towards relying more on learning, possibly incorporating self-supervised learning and model-based reinforcement in the long-term. Deep learning is seen as a crucial part of the solution for autonomous driving, with a predicted gradual shift towards relying more on learning, potentially including self-supervised learning and model-based reinforcement.', 'The chapter also discusses the challenges and potential of achieving human-level intelligence in AI systems, with a focus on the initial obstacle of self-supervised learning and the need to enable machines to learn models of the world through observation, akin to how babies learn. 
The initial obstacle in achieving human-level intelligence in AI systems is identified as self-supervised learning, aiming to enable machines to learn models of the world through observation, similar to how babies learn.']}, {'end': 4257.53, 'start': 3936.624, 'title': 'Components of intelligent autonomous systems', 'summary': "Discusses the key components of an intelligent autonomous system, including the need for a predictive model of the world, an objective to optimize, and a mechanism for figuring out the best course of action to optimize the objective, while highlighting the potential pitfalls of being 'stupid' in different ways and the public's misunderstanding of AI capabilities.", 'duration': 320.906, 'highlights': ['The need for a predictive model of the world, an objective to optimize, and a mechanism for figuring out the best course of action to optimize the objective: The architecture of an intelligent autonomous system requires a predictive model of the world, an objective to optimize, and a mechanism for figuring out the best course of action to optimize the objective (see the planning sketch at the end of this document).', "The potential pitfalls of being 'stupid' in different ways: An intelligent autonomous system can be 'stupid' in different ways, including having a wrong model of the world, an objective not aligned with the desired achievement, or an inability to figure out a course of action to optimize the objective given the model.", "The public's misunderstanding of AI capabilities: The general public often overestimates the capabilities of AI systems, such as Sophia the robot, and the creators are not honestly communicating its limitations, similar to the misunderstanding in the public when AI companies or research are publicized."]}, {'end': 4537.952, 'start': 4257.89, 'title': 'Necessity of grounding in AI', 'summary': 'Discusses the necessity of grounding in AI, emphasizing the importance of common sense reasoning, the inadequacy of language, and the role of emotions in achieving autonomous intelligence.', 'duration': 280.062, 'highlights': ['Grounding in the real world is necessary for machines to understand language and have common sense reasoning. Machines need grounding in the real world to understand language and common sense reasoning.', 'Language may not provide high enough bandwidth to communicate how the real world works, necessitating low-level perception of the world for AI. Language may not be enough to convey how the real world works, requiring low-level perception for AI.', 'Common sense will emerge from language interaction, watching videos, and possibly interacting in virtual environments, but some grounding is needed. Common sense will develop from language interaction, watching videos, and possibly interacting in virtual environments, but grounding is essential.', 'Autonomous intelligence in AI requires emotions, as they are integral for common sense reasoning and decision-making. Emotions are essential for autonomous intelligence in AI, aiding in common sense reasoning and decision-making.', 'The first AGI system may have the intelligence level of a four-year-old, and questions about common sense reasoning and causal inference can reveal its capabilities. 
The first AGI system may have the intelligence level of a four-year-old, and questions about common sense reasoning and causal inference can reveal its capabilities.']}], 'duration': 1359.328, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/SGSOCuByo24/pics/SGSOCuByo243178624.jpg', 'highlights': ['AlphaStar can reach better than human level in StarCraft with approximately 200 years of training playing against itself.', 'Training a car to drive itself using current algorithms would require millions of hours and potentially cause thousands of accidents.', 'The importance of learning physics and models in driving is emphasized, with the assertion that most learning is about understanding the model of the world.', 'Self-supervised pre-training could potentially reduce the need for a million labeled samples to about 100 for a specific level of error rate, affecting fields like medical image analysis.', 'The discussion on transfer learning highlights the effectiveness of training large convolutional nets on billions of images for tasks like predicting Instagram hashtags and fine-tuning on specific tasks, achieving superior performance compared to training from scratch.', 'The potential of deep learning in autonomous driving is explored, with an emphasis on the gradual shift towards relying more on learning, possibly incorporating self-supervised learning and model-based reinforcement in the long-term.', 'The initial obstacle in achieving human-level intelligence in AI systems is identified as self-supervised learning, aiming to enable machines to learn models of the world through observation, similar to how babies learn.', 'The architecture of an intelligent autonomous system requires a predictive model of the world, an objective to optimize, and a mechanism for figuring out the best course of action to optimize the objective.', "An intelligent autonomous system can be 'stupid' in different ways, including having a wrong model of the world, an objective not aligned with the desired achievement, or an inability to figure out a course of action to optimize the objective given the model.", 'The general public often overestimates the capabilities of AI systems, such as Sophia the robot, and the creators are not honestly communicating its limitations, similar to the misunderstanding in the public when AI companies or research are publicized.', 'Machines need grounding in the real world to understand language and common sense reasoning.', 'Language may not be enough to convey how the real world works, requiring low-level perception for AI.', 'Common sense will develop from language interaction, watching videos, and possibly interacting in virtual environments, but grounding is essential.', 'Emotions are essential for autonomous intelligence in AI, aiding in common sense reasoning and decision-making.', 'The first AGI system may have the intelligence level of a four-year-old, and questions about common sense reasoning and causal inference can reveal its capabilities.']}], 'highlights': ['Yann LeCun is best known as the founding father of convolutional neural networks and their application to optical character recognition and the famed MNIST dataset.', 'The chapter emphasizes the need for aligning AI objective functions with the common good, drawing parallels to laws in human society to prevent undesirable behaviors.', 'The idea that intelligence is inseparable from learning, making machine learning an obvious path, is emphasized, highlighting the crucial role of learning in 
the development of intelligence.', "Léon Bottou's paper 'From Machine Learning to Machine Reasoning' proposes replacing symbols with vectors and logic with continuous functions for system compatibility with learning.", 'Interest in neural nets waned around 1995 due to challenges in implementation and the lack of software platforms, leading to their disconnection from mainstream machine learning.', 'In 1994, a check reading system using convolutional nets was deployed in ATM machines.', 'Self-supervised learning shows potential for reducing human input in AI systems with mounting evidence for its success.', 'AlphaStar can reach better than human level in StarCraft with approximately 200 years of training playing against itself.', 'The importance of learning physics and models in driving is emphasized, with the assertion that most learning is about understanding the model of the world.', 'The architecture of an intelligent autonomous system requires a predictive model of the world, an objective to optimize, and a mechanism for figuring out the best course of action to optimize the objective.']}
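The description of self-supervised learning in the transcript (around 2740-2765: the same algorithms as supervised learning, except the training target is a hidden part of the input rather than a human-provided label) can be made concrete with a minimal sketch. Everything below is illustrative and assumed rather than taken from the conversation: a toy linear model, synthetic data, and a masking pretext task chosen as one simple instance of "reconstructing the input".

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))    # unlabeled data: no human annotations anywhere

W = np.zeros((32, 32))             # toy linear model; a real system would use a deep net
lr = 0.01

for step in range(500):
    batch = X[rng.integers(0, len(X), size=64)]
    mask = rng.random(batch.shape) < 0.3        # hide 30% of each input
    corrupted = np.where(mask, 0.0, batch)      # the model only sees the visible part
    pred = corrupted @ W                        # try to predict the full input back
    err = (pred - batch) * mask                 # the loss counts only the hidden entries,
    W -= lr * corrupted.T @ err / len(batch)    # so the data supervises itself

The gradient step is ordinary regression; the only difference from supervised learning is where the targets come from.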
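The transfer-learning recipe described around 3419-3445 (pre-train a large convolutional net on a huge proxy task, chop off the last layer, fine-tune on the task you care about) can be sketched as follows. The Instagram-hashtag model itself is not publicly packaged this way, so the sketch assumes an ImageNet-pretrained ResNet-50 from torchvision (0.13 or later for the weights argument) as a stand-in; num_classes and the training data are placeholders for a downstream task.

import torch
import torch.nn as nn
from torchvision import models

# Stand-in for a backbone pre-trained on a large proxy task.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# "Chop off the last layer": swap the classifier head for one sized to the new task.
num_classes = 10  # placeholder
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

# Freeze everything except the new head; more layers can be unfrozen later if needed.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.SGD(
    [p for p in backbone.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    # One ordinary supervised step, but starting from pre-trained features.
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Because the features are already learned, far fewer labeled examples are typically needed than when training from scratch, which is the point made about reducing a million labels toward a hundred.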
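The three components described around 4055-4122 (a model of the world, an objective, and a module that finds the course of action that optimizes the objective under the model) can be illustrated with a deliberately tiny, hypothetical example: a 1-D point that wants to reach position 10. The dynamics, objective, and random-shooting search below are all made up for illustration; they stand in for a learned world model and a real planner.

import random

def world_model(state, action):
    # Predicts the next state from the current state and an action.
    # (Hand-written toy dynamics standing in for a learned model.)
    position, velocity = state
    velocity = 0.9 * velocity + action
    return (position + velocity, velocity)

def objective(state):
    # Hardwired "contentment": closer to the goal position is better.
    position, _ = state
    return -abs(position - 10.0)

def plan(state, horizon=10, candidates=200):
    # Imagine candidate action sequences with the model, score the predicted
    # outcome with the objective, and keep the first action of the best sequence.
    best_score, best_first_action = float("-inf"), 0.0
    for _ in range(candidates):
        actions = [random.uniform(-1.0, 1.0) for _ in range(horizon)]
        imagined = state
        for a in actions:
            imagined = world_model(imagined, a)   # mental simulation only, no real step
        score = objective(imagined)
        if score > best_score:
            best_score, best_first_action = score, actions[0]
    return best_first_action

state = (0.0, 0.0)
for _ in range(30):
    # Act (here the same toy model plays the environment), then re-plan.
    state = world_model(state, plan(state))

If world_model is wrong, its errors compound over the imagined rollout, which is the "if your model is imperfect, how can you plan?" problem raised in the transcript.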