title
MIT 6.S191: The Future of Robot Learning

description
MIT Introduction to Deep Learning 6.S191: Lecture 10, The Future of Robot Learning. Lecturer: Daniela Rus. 2023 Edition. For all lectures, slides, and lab materials: http://introtodeeplearning.com (lecture outline coming soon). Subscribe to stay up to date with new deep learning lectures at MIT, or follow us @MITDeepLearning on Twitter and Instagram to stay fully connected!

detail

summary
Covers the increasing role of robotics in automating routine tasks (an image in the lecture highlights 19 different types of robots) and the potential of AI to reduce car accidents, improve healthcare, ensure privacy, enhance global communication, and optimize education and work efficiency. It also delves into robot anatomy, machine learning in robot perception, algorithm deployment considerations, the impact of LiDAR on autonomous vehicles, simulating vehicle scenarios, liquid networks for vehicle control, liquid networks in various applications, and the potential of machine learning and AI to revolutionize everyday life.

chapters

1. Robotics and AI (0:09-3:53)

Future of robotics (0:09-1:13). Robotics is a really cool and important direction for the future: we are moving toward a world where many routine tasks are taken off your plate. Fresh produce turns up at your doorstep delivered by drones, garbage bins take themselves out, smart infrastructure ensures the garbage gets removed, and robots help with recycling, with shelving, and with cleaning windows. The lecturer points out an image containing 19 different types of robots, including flying robots, cars, shopping-cart robots, and robots for carrying and shelving.

AI's impact on future technology (1:13-3:53). With AI we will see a wide breadth of applications: these technologies have the potential to reduce and eliminate car accidents; to better diagnose, monitor, and treat disease; to keep your information private and safe; to transport people and goods effectively; to provide instantaneous translations; and to optimize education and work efficiency. Robots are the machines that put computing in motion and give machines in the world the ability to navigate and manipulate it; artificial intelligence enables machines to see, hear, communicate, and make decisions like humans; and machine learning is about learning from and making predictions on data. This applies to cognitive tasks and to physical tasks, and regardless of the task, machine learning uses data to answer questions that are descriptive, predictive, or prescriptive.

2. Robot anatomy, function, perception, and machine learning (3:56-11:32)

Robot anatomy and function (3:56-4:44). Think of a robot as a programmable mechanical device that takes input with its sensors, reasons about this input, and then generates an action in the physical world. Robots are made of a body and a brain. The body, consisting of actuators and sensors, determines the range of tasks the robot can do: the robot can only do what its body is capable of doing, and a robot on wheels will not be able to climb stairs.

Robot perception and machine learning (4:45-11:32). In the context of robots there are three types of learning. In supervised learning, data is used to find the relationship between input and output; in unsupervised learning, data is used to find patterns; and there is also reinforcement learning. To use deep learning for the perception task of robots, manually labeled data is fed into a convolutional neural network, and the labels are used to classify what the data is. The ImageNet leaderboards show variations of image-classification algorithms performing well into 90% accuracy, which is really quite exciting; but if those algorithms were to run on a car, that is not good enough, because the car is a safety-critical system, which effectively demands 100% accuracy and the identification of extreme errors or corner cases during training.
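
As a concrete reference for the supervised perception pipeline summarized above (manually labeled images fed into a convolutional network whose outputs classify the scene), here is a minimal PyTorch sketch; the architecture and the random stand-in batch are illustrative assumptions, not the lecture's model.

```python
# Minimal sketch of the labeled-data -> CNN -> class-label pipeline.
import torch
import torch.nn as nn

class TinyPerceptionNet(nn.Module):
    """A small image classifier standing in for a perception network."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyPerceptionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a hypothetical labeled batch of camera frames.
images = torch.randn(8, 3, 64, 64)    # stand-in for road images
labels = torch.randint(0, 10, (8,))   # stand-in for manual labels
optimizer.zero_grad()
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
optimizer.step()
```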

3. Algorithm deployment considerations (11:32-24:39)

Algorithm deployment considerations (11:32-13:10). With a significant change in context, the performance of the top-performing ImageNet algorithms dropped by as much as 40% to 50%, which is really extraordinary. This is not a reason to avoid these algorithms, but when you deploy one, especially in a safety-critical application, it is important to understand the scope of the algorithm: what works and what doesn't, when you can apply it and when you shouldn't. Another critical issue for autonomous driving and for robots is adversarial attacks: the images fed from a car's camera streams to its decision-making engine can be attacked very easily, and in fact it is quite easy to take a stop sign, perturb it a little bit, and potentially cause chaos on the road.
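
The stop-sign perturbation described above is typically produced with a gradient-based attack. A minimal sketch of one-step FGSM follows (the lecture does not specify which attack was demonstrated); it works against any differentiable classifier, such as the one sketched earlier.

```python
# One-step fast gradient sign method (FGSM): nudge each pixel in the
# direction that increases the classifier's loss on the true label.
import torch
import torch.nn as nn

def fgsm(model: nn.Module, image: torch.Tensor, label: torch.Tensor,
         epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image` (shape (1, 3, H, W),
    pixels assumed in [0, 1]); `label` has shape (1,)."""
    image = image.clone().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```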

Reinforcement learning in robotics (13:11-24:39). Reinforcement learning is causing a huge revolution in robotics because fast simulation systems and simulation methodologies allow thousands of simulations to run in parallel to train a reinforcement learning policy, and the gap between hardware platforms and simulation engines keeps decreasing, leading to extraordinary capabilities in agents. A historical perspective on autonomous driving shows visual processing improving from one frame per 10 minutes to 100 frames per second, a game changer for autonomous cars that brings us back to the connection between hardware and software: we need both to get good solutions for real problems. LiDAR sensors then decreased uncertainty and increased safety, replacing less effective solutions such as sonar, and today many companies and groups are deploying self-driving cars; one example is a vehicle deployed in Singapore that the public rode, and in 2014 there were vehicles at MIT.
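
A minimal sketch of the "thousands of simulations in parallel" idea, using Gymnasium's vectorized environments as a stand-in for the lecture's simulation systems; the environment and the random policy are placeholders.

```python
# Collect experience from many simulations at once; in real training these
# parallel transitions would feed a single reinforcement-learning update.
import gymnasium as gym

num_envs = 8
envs = gym.vector.SyncVectorEnv(
    [lambda: gym.make("CartPole-v1") for _ in range(num_envs)]
)

obs, info = envs.reset(seed=0)
for _ in range(100):
    actions = envs.action_space.sample()  # batched: one action per simulation
    obs, rewards, terminated, truncated, infos = envs.step(actions)
envs.close()
```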

4. LiDAR and machine learning impact on autonomous vehicles (24:39-34:22)

LiDAR's impact on sonar algorithms (24:39-25:27). All the algorithms that were developed on sonar and didn't work started working when LiDAR was introduced. When we think about autonomous driving, several key parameters emerge: how complex is the environment the car is moving in (an empty road, as in the German case, makes the problem much easier); how complex are the interactions between the car and the environment; how complex is the reasoning of the car; and how fast is the car going. Underlying all these questions is a fundamental one about the uncertainty of deploying machine learning in safety-critical applications.

Machine learning for autonomous vehicles (25:27-34:22). Robot cars can operate effectively in easy environments, but the sensors don't work well in weather: the uncertainty of the perception system increases significantly if it rains hard or snows, and it also increases under extreme congestion, with erratic driving among vehicles, people, scooters, and even cows on the road, as in a video taken during a taxi ride in Bangalore. Many groups follow a simple recipe for turning a car into a self-driving car: extend the car to drive by wire, so that the computer can talk to the steering, acceleration, and throttle controls; extend it further with sensors, mostly cameras and LiDARs; and add software modules for perception, estimation, learning, planning, and control, along with computational units for processing sensor data, detecting obstacles, localizing the vehicle, and planning. The classical autonomous-driving pipeline requires hand-engineering parameters for the various road situations, which makes the solutions brittle. An alternative idea is to utilize a large data set to learn a representation of what humans did in similar situations, and to develop autonomous-driving solutions that drive more like humans than the much more robotic traditional pipeline. The question, then, is whether machine learning can take us directly from sensors to actuation: can we compress all the stuff in the middle and use learning to connect perception directly to action? Building on methods discussed earlier, deep learning and reinforcement learning can take us from images of roads to steering and throttle, training on certain roads and driving situations and inferring control signals for other situations, while providing human-like control and localization of the vehicle.
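
A minimal behavior-cloning sketch of the sensors-to-actuation idea: regress steering and throttle directly from camera frames toward recorded human controls, with no hand-engineered modules in between. The model, shapes, and stand-in data are illustrative assumptions.

```python
# End-to-end sketch: camera frames -> [steering, throttle].
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 48, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(48, 2)  # outputs: [steering, throttle]

    def forward(self, frames):
        return self.head(self.encoder(frames))

model = EndToEndDriver()
frames = torch.randn(4, 3, 96, 96)   # stand-in camera frames
human_controls = torch.randn(4, 2)   # stand-in human demonstrations
loss = nn.MSELoss()(model(frames), human_controls)
loss.backward()  # behavior cloning: move predictions toward what humans did
```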

5. Simulating vehicle scenarios and liquid networks for vehicle control (34:23-43:26)

Simulating vehicle scenarios with Vista (34:23-38:25). The Vista simulator can model multiple agents, multiple types of sensors, and multiple types of agent-to-agent interaction, and it can simulate different physical sensing modalities. Vista has recently been open-sourced (the code is available at vista.csail.mit.edu), and a lot of people are already using the system. High-quality data from a human-driven vehicle can be mapped in simulation, very realistically, into new simulated trajectories, including erratic ones, which then exist as part of the training set in Vista; this data can be used to learn a driving policy. The resulting solution works better than all the others: in a comparison of Vista against existing state-of-the-art simulators, with crash locations shown in red on the top line and mean trajectory variation shown in color on the bottom line, the Vista-trained solution does the best.
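
A rough usage sketch of such a data-driven simulation loop. The names below (vista.World, spawn_agent, step_dynamics) are assumptions modeled on Vista's published examples, not a verified API; consult the repository at vista.csail.mit.edu for the actual interface.

```python
# Hypothetical sketch: replay a logged human drive, let an agent deviate from
# it, and synthesize the sensor views it would have seen.
import vista  # assumes the open-sourced package is installed

world = vista.World(trace_paths=["./traces/logged_highway_drive"])
car = world.spawn_agent(config={"length": 5.0, "width": 2.0})

policy = lambda obs: (0.0, 10.0)  # placeholder controller: (curvature, speed)

world.reset()
while not car.done:
    observation = car.observations  # synthesized sensor view (assumed attribute)
    car.step_dynamics(policy(observation))  # deviate from the logged trajectory
```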

Liquid networks for vehicle control (38:25-43:26). The baseline decision engine has about 100,000 neurons and about half a million parameters, and it is really hard to find patterns that associate the state of the neurons with the behavior of the vehicle: there is just so much happening in parallel at the same time. Its attention map shows the vehicle looking at the bushes on the side of the road in order to make decisions; it still does a pretty good job, but can we do better, with more reliable learning-based solutions? Ramin introduced liquid networks and neural circuit policies, and the liquid-network solution has a much cleaner attention map: the vehicle looks at the road horizon and at the sides of the road, which is what we all do when we drive, and it can recover from orientations off-road or in the wrong lane. The liquid time-constant network is a continuous-time network that changes what the neuron computes: it changes the neuron equation and the wiring, with different types of neurons performing specific functions in the architecture, and the function that determines the state of the neuron is itself controlled by new input at the time of inference. What is really powerful about liquid networks is that the model can dynamically adapt after training, based on the inputs it sees.
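
The "function controlled by new input at inference time" has a concrete form in the liquid time-constant paper (Hasani et al., 2021). As a reference, the hidden-state ODE is, with x(t) the neuron state, I(t) the input, tau the base time constant, A a learned bias vector, and f a bounded neural network:

```latex
% Liquid time-constant (LTC) neuron dynamics, Hasani et al. (2021):
\frac{d\mathbf{x}(t)}{dt}
  = -\left[\frac{1}{\tau} + f\big(\mathbf{x}(t), \mathbf{I}(t), t, \theta\big)\right] \mathbf{x}(t)
  + f\big(\mathbf{x}(t), \mathbf{I}(t), t, \theta\big)\, A
```

The bracketed coefficient acts as an input-dependent time constant, which is what lets the trained network keep adapting its dynamics to the inputs it sees.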

6. Liquid networks in various applications (43:26-54:22)

Liquid networks for dynamic causal models (43:26-46:12). The liquid-network solution keeps the car on the road while requiring only 19 neurons to deliver that function, and its attention is extremely focused compared with models like CNNs, CT-RNNs, or LSTMs, which are much noisier. In a canyon-run task, a plane that can only go up and down has to avoid obstacles at locations it does not know, in an environment it does not know: implementing the task with one degree of freedom of control requires just 11 liquid neurons, and controlling all degrees of freedom requires 24. A two-degree-of-freedom solution to drone dodgeball shows how all the neurons fire, and the function of this learning-based controller can really be associated with the activation patterns of the neurons; in fact, decision trees can be extracted from these kinds of solutions, yielding human-understandable explanations.

Liquid networks for object detection (46:13-54:22). Liquid networks generalize to different seasons and environments, achieving zero-shot transfers: the same task was performed in the middle of winter, when there are no leaves and black tree lines make the environment look much, much different from the one where the models were trained. This ability to transfer from one set of training data to completely different environments is truly transformational for the capabilities of machine learning, and it addresses the lack of generalization to unseen test scenarios that today's neural networks suffer from. Deployed in an office, just outside the building, and on a baseball field to find the same object, a chair, the deep-neural-network solution gets completely confused, while the liquid-network solution, given the exact same input, has no problem guiding the robot. These tools also extend to studying the natural world: machine learning has been used to identify whales from a drone, which then servos to the center of the whale, tracking a group of whales as they move along; and whales can be studied from within the ocean with Sophie, a soft robotic fish that Joseph, who is with us today, participated in building, a very naturally moving robot that can get close to aquatic creatures and move the way they do without disturbing them.
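
A small sketch of how compact such a policy is in code, using the open-source ncps package that accompanies the liquid-network work; the 32-feature input width and the exact wiring call here are assumptions.

```python
# A 19-neuron liquid policy with a single motor (steering) output.
import torch
from ncps.torch import LTC
from ncps.wirings import AutoNCP

wiring = AutoNCP(19, 1)                     # 19 neurons total, 1 motor output
policy = LTC(32, wiring, batch_first=True)  # 32 perception features (assumed)

features = torch.randn(1, 50, 32)           # (batch, time, features)
steering_sequence, final_state = policy(features)
print(steering_sequence.shape)              # expected: torch.Size([1, 50, 1])
```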
other.', 'start': 3282.361, 'duration': 4.362}], 'summary': 'Robotic technologies can observe animal motion and listen to whales communicating.', 'duration': 24.155, 'max_score': 3262.568, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/WHvWSYKGMDQ/pics/WHvWSYKGMDQ3262568.jpg'}, {'end': 3451.623, 'src': 'embed', 'start': 3396.516, 'weight': 0, 'content': [{'end': 3407.173, 'text': 'We can use machine learning to differentiate the clicks that allow the whales to echolocate from the clicks that seem to be vocalization and information carrying clicks.', 'start': 3396.516, 'duration': 10.657}, {'end': 3412.457, 'text': 'We can begin to look at what the protocols for information exchange are.', 'start': 3408.674, 'duration': 3.783}, {'end': 3414.378, 'text': 'How do they engage in dialogue?', 'start': 3412.737, 'duration': 1.641}, {'end': 3420.022, 'text': 'And we can begin to ask what is the information that they say to one another?', 'start': 3415.138, 'duration': 4.884}, {'end': 3429.888, 'text': 'So, with our project, we are trying to understand the phonetics, the semantics and the syntax and the discourse for Wales.', 'start': 3421.002, 'duration': 8.886}, {'end': 3436.235, 'text': 'So we have a big data set consisting of about 22, 000 clicks.', 'start': 3431.673, 'duration': 4.562}, {'end': 3440.397, 'text': 'The clicks get grouped into codas.', 'start': 3437.396, 'duration': 3.001}, {'end': 3442.358, 'text': 'The codas are like the phonemes.', 'start': 3440.557, 'duration': 1.801}, {'end': 3447.721, 'text': 'And using machine learning, we can identify coda types.', 'start': 3443.999, 'duration': 3.722}, {'end': 3451.623, 'text': 'We can identify patterns for coda exchanges.', 'start': 3448.401, 'duration': 3.222}], 'summary': 'Using machine learning to identify and analyze whale clicks in a dataset of 22,000 clicks to understand phonetics, semantics, syntax, and discourse.', 'duration': 55.107, 'max_score': 3396.516, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/WHvWSYKGMDQ/pics/WHvWSYKGMDQ3396516.jpg'}, {'end': 3508.154, 'src': 'embed', 'start': 3484.224, 'weight': 4, 'content': [{'end': 3494, 'text': 'So let me close by saying that In this class, you have looked at a number of really exciting machine learning algorithms.', 'start': 3484.224, 'duration': 9.776}, {'end': 3500.23, 'text': 'But you have also looked at what some of the technical challenges with the machine learning algorithms are.', 'start': 3494.741, 'duration': 5.489}, {'end': 3508.154, 'text': 'including data availability, including data quality, including the amount of computation required,', 'start': 3501.468, 'duration': 6.686}], 'summary': 'Explored exciting machine learning algorithms with focus on technical challenges like data availability, quality, and computation.', 'duration': 23.93, 'max_score': 3484.224, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/WHvWSYKGMDQ/pics/WHvWSYKGMDQ3484224.jpg'}, {'end': 3561.253, 'src': 'embed', 'start': 3536.173, 'weight': 3, 'content': [{'end': 3545.26, 'text': 'There is so much opportunity for developing improved machine learning using existing models and inventing new models.', 'start': 3536.173, 'duration': 9.087}, {'end': 3553.947, 'text': 'And if we can do this, we can create an exciting world where machines will really empower us,', 'start': 3546.161, 'duration': 7.786}, {'end': 3561.253, 'text': 'will really augment us and enhance us in our cognitive abilities and in our physical 
abilities.', 'start': 3553.947, 'duration': 7.306}], 'summary': 'Opportunity for developing improved machine learning using existing and new models to create an exciting world where machines empower and enhance cognitive and physical abilities.', 'duration': 25.08, 'max_score': 3536.173, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/WHvWSYKGMDQ/pics/WHvWSYKGMDQ3536173.jpg'}], 'start': 3262.568, 'title': 'Machine learning and AI', 'summary': 'Discusses using machine learning to analyze whale vocalizations, aiming to understand whale communication, and explores the potential of machine learning and AI in revolutionizing everyday life with personalized assistance, programmable clothing, intelligent environments, and autonomous vehicles.', 'chapters': [{'end': 3482.763, 'start': 3262.568, 'title': 'Deciphering whale language through machine learning', 'summary': 'The chapter discusses using robotic technologies and machine learning to analyze whale vocalizations, aiming to understand the phonetics, semantics, and syntax of whale communication. It highlights the use of a big data set consisting of about 22,000 clicks and the application of machine learning to identify coda types and patterns for coda exchanges.', 'duration': 220.195, 'highlights': ['Using machine learning to differentiate the clicks that allow whales to echolocate from the clicks that seem to be vocalization and information-carrying clicks, aiming to understand the protocols for information exchange and the information that whales convey to one another.', 'Having a big data set consisting of about 22,000 clicks, which are grouped into codas and analyzed using machine learning to identify coda types and patterns for coda exchanges, contributing to the understanding of how whales exchange information.', 'Employing robotic technologies to observe the motion of animals and listening in on whale vocalizations, with the aim of understanding the phonetics, semantics, and syntax of whale communication, and using machine learning to make progress in deciphering whale language.']}, {'end': 3756.437, 'start': 3484.224, 'title': 'Future of machine learning and AI', 'summary': 'Explores the potential of machine learning and AI in revolutionizing everyday life by enabling personalized assistance, programmable clothing, intelligent environments, and autonomous vehicles, ultimately empowering individuals in their cognitive and physical abilities.', 'duration': 272.213, 'highlights': ['The potential of machine learning and AI to revolutionize everyday life is highlighted through examples such as personalized assistance, programmable clothing, intelligent environments, and autonomous vehicles.', 'Challenges with machine learning algorithms, such as data availability, data quality, computation requirements, model size, and interpretability, are emphasized as areas for improvement.', 'The vision of a future where machines empower and augment individuals in their cognitive and physical abilities is presented, creating an exciting prospect for the integration of advanced technologies into daily life.']}], 'duration': 493.869, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/WHvWSYKGMDQ/pics/WHvWSYKGMDQ3262568.jpg', 'highlights': ['Using machine learning to differentiate whale vocalizations and echolocation clicks, aiming to understand information exchange protocols.', 'Employing robotic technologies to observe whale motion and vocalizations, using machine learning to decipher whale language.',
'Analyzing a big data set of 22,000 clicks with machine learning to identify coda types and patterns for information exchange among whales.', 'Highlighting the potential of machine learning and AI to revolutionize everyday life with personalized assistance, programmable clothing, intelligent environments, and autonomous vehicles.', 'Emphasizing challenges in machine learning algorithms: data availability, quality, computation requirements, model size, and interpretability.', 'Presenting a vision of a future where machines empower and augment individuals in cognitive and physical abilities.']}], 'highlights': ['Liquid networks with 19 neurons revolutionize simulation-based, learning-based control', 'AI has potential to reduce and eliminate car accidents', 'Robots enable machines to navigate and manipulate the world', 'Machine learning connects perception and action for autonomous driving', 'LIDAR sensors decrease uncertainty and increase safety in autonomous driving', 'Machine learning and AI have potential to revolutionize everyday life', 'Liquid networks generalize to different seasons and environments, achieving zero-shot transfers', 'Using machine learning to decipher whale language and information exchange among whales', 'The pivotal role of LIDAR sensors in autonomous driving is highlighted', "The importance of a robot's body in determining the range of tasks it can perform", 'The Vista simulator can model multiple agents, sensors, and agent-to-agent interactions', 'The potential of machine learning and AI to revolutionize everyday life with personalized assistance, programmable clothing, intelligent environments, and autonomous vehicles', 'The solution utilizing liquid networks and neural circuit policies enhances learning-based control', 'The soft robotic fish, Sophie, designed to mimic the natural movement of aquatic creatures', 'The solution using Vista outperforms existing simulators in terms of crash locations and mean trajectory variation', 'The performance of top-performing ImageNet algorithms dropped by 40% to 50% in certain contexts', 'The liquid time constant network model changes the neuron equation and wiring, allowing dynamic adaptation after training', 'The liquid time constant network model with 19 neurons enables recovery from orientations off-road or in the wrong lane', 'The decision engine of the solution contains about 100,000 neurons and about half a million parameters', 'The definition of a robot as a programmable mechanical device with sensors and actuators', 'The application of machine learning in tracking whales using robotic drones and soft robotic fish', "Reinforcement learning's impact in robotics is due to the ability to run thousands of simulations in parallel for training", 'Machine learning involves learning from and making predictions on data', 'Historical perspectives on autonomous driving reveal significant advancements in visual processing', 'Employing convolutional neural networks for image classification in robot perception', 'Challenges in extreme weather and congestion for autonomous cars', 'Challenges in image classification for safety-critical systems, requiring 100% accuracy', 'Autonomous driving parameters: environment complexity, interactions, reasoning, speed', 'Advancements in autonomous driving involve using large datasets and machine learning', 'Adversarial attacks can easily manipulate camera images in autonomous driving',
'Analyzing a big data set of 22,000 clicks with machine learning to identify coda types and patterns for information exchange among whales', 'Liquid networks only require 19 neurons to keep a car on the road, showing focused attention compared to other models like CNN, CTRNN, or LSTM', 'Implementing a liquid network on a task of flying a plane with one degree of freedom requires 11 neurons, and 24 neurons for controlling all degrees of freedom', 'Employing robotic technologies to observe whale motion and vocalizations, using machine learning to decipher whale language', 'Discussing the role of robotics in automating routine tasks', 'Discussing machine learning methodologies: supervised, unsupervised, and reinforcement learning']}
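The fish segment above ("depending on how much water we move and in what proportions, you can get the fish to move forward, to turn left or to turn right") describes proportional pump control. Below is a minimal sketch of that idea, assuming a two-chamber tail driven like a differential drive; the chamber layout, function name, and volume scale are illustrative guesses, not Sophie's actual controller.

```python
# Hypothetical sketch: differential pump control for a soft robotic fish tail.
# Assumption (not stated in the lecture): the tail has left/right fluid
# chambers, and biasing the pumped-water split bends the stroke to turn.

def pump_split(forward: float, turn: float, max_volume_ml: float = 10.0):
    """Map forward effort (0..1) and turn command (-1..+1) to per-stroke
    water volumes (ml) for the left and right tail chambers."""
    forward = min(max(forward, 0.0), 1.0)  # clamp commands to valid ranges
    turn = min(max(turn, -1.0), 1.0)
    total = forward * max_volume_ml        # "how much water we move"
    left = total * (1.0 + turn) / 2.0      # "...and in what proportions"
    right = total * (1.0 - turn) / 2.0
    return left, right

print(pump_split(0.8, 0.0))   # equal split: swim straight
print(pump_split(0.8, 0.6))   # biased split: turn one way
print(pump_split(0.8, -0.6))  # opposite bias: turn the other way
```

An equal split yields a symmetric stroke (straight swimming); the turn command simply redistributes the same total volume, so turning does not change the overall pumping effort.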
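The whale-language passage describes grouping roughly 22,000 clicks into codas and using machine learning to identify coda types. A minimal sketch of one plausible pipeline follows, assuming each coda is summarized by its inter-click intervals (ICIs) and clustered with k-means; the synthetic data, fixed coda length, and k=3 are assumptions for illustration, not the project's actual method.

```python
# Hypothetical sketch: finding coda "types" by clustering inter-click
# intervals. Real inputs would be hydrophone click detections.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Fake click times for 200 five-click codas drawn from three rhythmic
# templates (stand-ins for distinct coda rhythms).
templates = np.array([
    [0.0, 0.2, 0.4, 0.6, 0.8],  # evenly spaced clicks
    [0.0, 0.1, 0.2, 0.5, 0.9],  # fast start, slow finish
    [0.0, 0.4, 0.5, 0.6, 1.0],  # clicks bunched in the middle
])
codas = np.stack([
    templates[rng.integers(len(templates))] + rng.normal(0, 0.01, 5)
    for _ in range(200)
])

# Feature: ICIs normalized by total coda duration, so rhythm matters
# but overall tempo does not.
icis = np.diff(codas, axis=1)
features = icis / icis.sum(axis=1, keepdims=True)

# Cluster the rhythm vectors; each cluster plays the role of a coda type.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for k in range(3):
    print(f"coda type {k}: {(labels == k).sum()} codas")
```

Normalizing by duration is one defensible design choice here: two codas with the same rhythm at different tempos land on the same feature vector, which matches the intuition that coda identity lies in the click pattern rather than its absolute speed.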
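Several highlights note that the liquid time-constant model "changes the neuron equation and wiring, allowing dynamic adaptation after training." Below is a single-neuron sketch under my reading of the published LTC equation, dx/dt = -(1/tau + f) * x + f * A, integrated with Euler steps; the sigmoid gate, scalar state, and parameter values are illustrative assumptions, not the lecture's implementation.

```python
# Hypothetical sketch: one liquid time-constant (LTC) neuron.
import math

def ltc_step(x, u, dt=0.01, tau=1.0, w_x=0.5, w_u=2.0, b=0.0, A=1.0):
    """One Euler step of dx/dt = -(1/tau + f) * x + f * A.
    Because the gate f depends on the input u, the effective time
    constant 1 / (1/tau + f) keeps changing with the input -- the
    'liquid' behavior that persists after training."""
    f = 1.0 / (1.0 + math.exp(-(w_x * x + w_u * u + b)))  # gate in (0, 1)
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx

x = 0.0
for t in range(200):
    u = 1.0 if t < 100 else 0.0  # step input that switches off halfway
    x = ltc_step(x, u)
print(f"final state: {x:.3f}")
```

With the input on, the gate opens and the state charges quickly toward A; when the input switches off, the gate closes and the state decays on a slower timescale, illustrating how the same neuron exhibits different dynamics for different inputs.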