title
The LangChain Cookbook - Beginner Guide To 7 Essential Concepts
description
Twitter: https://twitter.com/GregKamradt
Newsletter: https://mail.gregkamradt.com/signup
Cookbook Part 2: https://youtu.be/vGP4pQdCocw
Wild Belle - Keep You: https://open.spotify.com/track/1eREJIBdqeCcqNCB1pbz7w?si=3c3d30c473b54994
LangChain Cookbook: https://github.com/gkamradt/langchain-tutorials/blob/main/LangChain%20Cookbook%20Part%201%20-%20Fundamentals.ipynb
LangChain Conceptual Docs: https://docs.langchain.com/docs/category/components
Python Docs: https://python.langchain.com/en/latest/
JS/TS Docs: https://js.langchain.com/docs/
0:00 - Introduction
1:12 - Conceptual Docs
1:54 - Cookbook introduction
2:27 - What is LangChain?
5:10 - Schema (Text, Messages, Documents)
8:54 - Models (Language, Chat, Embeddings)
12:03 - Prompts (Template, Examples, Output Parse)
20:45 - Indexes (Loaders, Splitters, Retrievers, Vectorstores)
26:39 - Memory (Chat History)
28:12 - Chains (Simple, Summarize)
32:52 - Agents (Toolkits, Agents)
Music by lofigenerator.com / CC BY
detail
A beginner's guide to LangChain covering its seven core concepts — schema, models, prompts, indexes, memory, chains, and agents — with a focus on the theoretical and qualitative side, built on the OpenAI API and accompanied by the LangChain Cookbook.

Hello, good people. Have you ever wondered what LangChain is? Or maybe you've heard about it and played around with a few sections, but you're not quite sure where to look next. In this video we're going to cover all of the LangChain basics, with the goal of getting you building and having fun as quickly as possible. My name is Greg, and I've been having a ton of fun building out apps in LangChain. I share most of my work on Twitter, so if you want to follow along, the links are in the description.

The reason I'm doing a video on LangChain's new conceptual docs is that they take all of the technical pieces and abstract them up into the more theoretical, qualitative aspects of LangChain, which I think is extremely helpful. To make this easier to follow, I've created a companion for this video, the LangChain Cookbook; the link is in the description, so go check out the GitHub and follow along there.

So what is LangChain? Building with language models involves a lot of moving parts, and LangChain abstracts a ton of that away so you can work with models more easily, intermix different pieces, and customize things the way you need to. In short, LangChain makes the complicated parts of working and building with AI models easier, and it does this in two main ways. The first is integration: you can bring external data, such as your files, other applications, and API data, to your language models. The second is agency: you let the language model decide what to do when the path isn't so clear or is unknown — more on that later.

Why LangChain specifically? There are four big reasons I like it. The first is the components: LangChain makes it easy to swap out the abstractions and components needed to work with language models, and it ships a ton of tools that make it simple to work with models like ChatGPT or anything on Hugging Face. The second is out-of-the-box support for using and customizing chains. On the qualitative side, the third reason is speed: the project moves fast, and almost every day I make sure I'm on the latest branch of LangChain. The fourth is the community: there are meetups, a Discord channel, and events like webinars throughout the week, which are awesome learning resources. To summarize, why do we need LangChain at all? Language models on their own can be pretty straightforward; LangChain is what makes the more involved pieces easier to build.

The first group of LangChain components we'll look at is the schema. The first piece, which I almost didn't include, is text. What's really cool about these language models is that text is the new programming language — not verbatim, but we're using a lot more plain English to tell language models what to do. "What day comes after Friday?" is something I can ask a language model, and it will respond with a natural-language answer.

Next up are chat messages. They're like text, but they come in different types. A system message is helpful background context that tells the AI what to do — "you are a helpful teacher-assistant bot", that kind of thing. Human messages represent the user: literally user input, or something I might type to it. AI messages show what the AI responded with, and the neat part is that the AI may or may not have actually said it — you can tell it that it did, so it has additional context on how to answer you. In the cookbook I import ChatOpenAI and the three message types, create my chat model, and pass in two messages: a system message, "You are a nice AI bot that helps a user figure out what to eat in one short sentence", and a human message, "I like tomatoes, what should I eat?". Running that, I get an AI message back: "You could try making a tomato salad with fresh basil and mozzarella cheese." Thanks, AI. You can also pass more chat history and get responses that build on it — for example, "You are a nice AI bot that helps a user figure out where to travel to in one short sentence" — and if you're making a chatbot, you can see how you would keep appending the messages that go back and forth with the user.
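A minimal sketch of that chat-message call, assuming the 0.0.x-era LangChain imports used in the cookbook (newer releases moved these classes into the langchain_openai package); the API key is a placeholder, and the travel follow-up exchange is illustrative rather than the exact one from the video.

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage, AIMessage

chat = ChatOpenAI(temperature=0.7, openai_api_key="YOUR_OPENAI_API_KEY")

messages = [
    SystemMessage(content="You are a nice AI bot that helps a user figure out "
                          "what to eat in one short sentence."),
    HumanMessage(content="I like tomatoes, what should I eat?"),
]
# The chat model returns an AIMessage, e.g. suggesting a tomato salad.
print(chat(messages).content)

# Passing earlier AIMessages back in gives the model chat history to build on.
history = [
    SystemMessage(content="You are a nice AI bot that helps a user figure out "
                          "where to travel to in one short sentence."),
    HumanMessage(content="I like the beach, where should I go?"),
    AIMessage(content="You should go to Nice, France."),
    HumanMessage(content="What else should I do when I'm there?"),
]
print(chat(history).content)
```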
The next schema type is documents. A document represents a piece of text along with associated metadata — and metadata is just a fancy word for things about that document. The text itself lives in a field called page_content: "This is my document, it's full of text that I've gathered from other places." The metadata is extremely helpful when you're building large repositories of information and want to filter by it: instead of asking LangChain to look at every document in your database, you can filter by certain metadata fields. When I run this in the cookbook, I get back a document object with a bunch of metadata attached.
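A small sketch of constructing such a Document, again assuming the 0.0.x-era schema module; the metadata keys and values here are made up for illustration.

```python
from langchain.schema import Document

doc = Document(
    page_content="This is my document. It's full of text that I've gathered from other places.",
    metadata={
        # Illustrative metadata fields — anything describing the document works.
        "my_document_id": 234234,
        "my_document_source": "The LangChain Papers",
        "my_document_create_time": 1680013019,
    },
)

# Later you can filter a collection of documents by these metadata fields.
print(doc.page_content)
print(doc.metadata["my_document_source"])
```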
If those are the schemas we work with, the next thing to look at is models — the ways of interacting with, well, different models. What matters here is that there are different model types. The standard one is the language model: text goes in, text comes out. In the cookbook I import OpenAI, make my model, and ask it a question. The second type is the chat model, which works over chat messages and allows for more creativity and exaggeration — the cookbook example is an intentionally unhelpful AI bot that makes jokes in response to user input. The last type is the text embedding model. This one matters because we do a lot of similarity searches and a lot of comparing texts when working with language models. OpenAI has an embeddings model we can use; there are plenty of embedding models out there and you can use whatever you want, but I use OpenAI's because it feels like the standard and it's very simple right now. I pass in my API key, get my embeddings engine ready, define a piece of text, and embed it. I put the result in a variable called text_embedding, and if I check its length it comes out to 1536 — meaning there are 1,536 numbers in that list representing the meaning of my text, which makes it really easy to compare against other texts. That's a lot of numbers, and I'm glad I don't have to deal with them directly.
Next up are prompts, starting with prompt templates. The idea is that your prompts won't just be static strings you type out; you'll be inserting tokens, or placeholders, based on the scenario you're working with. In the cookbook I import my packages again — PromptTemplate is the new one — and use a Davinci model. Then I create a template: "I really want to travel to {location}". The opened and closed brackets around location mean it's a token that I'll be replacing later. When I format the template with an actual location and run it, the model gives me a response — which is cool.
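A short sketch of that prompt template, with the same API assumptions as above; the instruction text after the placeholder and the example location are filled in for illustration.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(model_name="text-davinci-003", openai_api_key="YOUR_OPENAI_API_KEY")

# {location} is the token that gets replaced at format time.
template = ("I really want to travel to {location}. What should I do there? "
            "Respond in one short sentence.")
prompt = PromptTemplate(input_variables=["location"], template=template)

final_prompt = prompt.format(location="Rome")
print(final_prompt)
print(llm(final_prompt))
```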
The next cool part is example selectors. Often when you're constructing your prompts you'll do something called in-context learning, which means showing the language model what you want it to do — how to answer a customer-service request, say, or how to respond to some nuanced question — and one of the main ways people do this is through examples. When you have a lot of examples, you need a way to pick the right ones. The star of the show here is the semantic similarity example selector — a long name for a piece of functionality that selects similar examples. I get my language model going again, set up an example prompt (which is just a prompt template like the one above), and define a list of examples: I name a noun, and I want the language model to tell me where that noun is usually found. Then I get the example selector ready: I pass it the list of examples, but also my embedding engine, because we're matching examples on their semantic meaning — not just on similar strings, but on what they actually mean — using the OpenAI embeddings model, and I specify how many examples to retrieve.

With that in place I build a few-shot prompt template — "few shot" meaning there will be a few examples in the prompt for the model. I give it the example selector, the example prompt we made up above, and a couple of short strings before and after to make things easier for the model: the prefix is "Give the location an item is usually found in", and the suffix holds the input and output slots based on whatever the user types.
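A sketch of that example-selector and few-shot setup, assuming the classic SemanticSimilarityExampleSelector.from_examples API with a local FAISS store (the faiss-cpu package) indexing the examples; the specific noun/location pairs are illustrative, not the ones from the video.

```python
from langchain.prompts import PromptTemplate, FewShotPromptTemplate
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Example Input: {input}\nExample Output: {output}",
)

# Illustrative examples: a noun, and where that noun is usually found.
examples = [
    {"input": "pirate", "output": "ship"},
    {"input": "pilot", "output": "plane"},
    {"input": "driver", "output": "car"},
    {"input": "tree", "output": "ground"},
    {"input": "bird", "output": "nest"},
]

# Match examples by semantic meaning (via embeddings), not string similarity.
example_selector = SemanticSimilarityExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(openai_api_key="YOUR_OPENAI_API_KEY"),
    FAISS,
    k=2,  # how many similar examples to pull into the prompt
)

similar_prompt = FewShotPromptTemplate(
    example_selector=example_selector,
    example_prompt=example_prompt,
    prefix="Give the location an item is usually found in",
    suffix="Input: {noun}\nOutput:",
    input_variables=["noun"],
)
print(similar_prompt.format(noun="student"))
```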
The next piece is output parsers. There are two big concepts here. The first is the format instructions: a prompt fragment that tells your language model how to respond to you, and LangChain provides conventions for generating this automatically, which is cool. The second is the parser itself: the tool that parses the output of your language model. The model can only return a string, so if we want a JSON object back, we need to parse that string and extract the JSON from it.

In the cookbook I use the structured output parser together with response schemas. I import my language model again and define a response schema — in this case I just want a two-field JSON object: bad_string, a poorly formatted user-input string, and good_string, your response, the nicely formatted version from the language model. I create my output parser from that response schema so it can parse responses for me in a second. First, though, come the format instructions: I call get_format_instructions on the output parser and print them out, and that's the piece of text that gets inserted into the prompt — it says the output should be a markdown code snippet formatted in the given schema. Once the model responds, I parse the result and get a nice JSON object back — well, in this case a Python dict, but you can see its type is dict.
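A sketch of that structured-output flow under the same API assumptions; the surrounding prompt wording and the sample user input are illustrative.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

llm = OpenAI(model_name="text-davinci-003", openai_api_key="YOUR_OPENAI_API_KEY")

response_schemas = [
    ResponseSchema(name="bad_string", description="A poorly formatted user input string"),
    ResponseSchema(name="good_string", description="Your response, a reformatted string"),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)

# Tells the model to reply as a markdown code snippet in the given schema.
format_instructions = output_parser.get_format_instructions()

template = (
    "You will be given a poorly formatted string from a user. "
    "Reformat it and make sure all the words are spelled correctly.\n\n"
    "{format_instructions}\n\nUSER INPUT:\n{user_input}\n\nYOUR RESPONSE:"
)
prompt = PromptTemplate(
    input_variables=["user_input"],
    partial_variables={"format_instructions": format_instructions},
    template=template,
)

llm_output = llm(prompt.format(user_input="welcom to califonya!"))
parsed = output_parser.parse(llm_output)  # a Python dict with bad_string / good_string
print(type(parsed), parsed)
```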
The next thing we'll look at is indexes: structuring documents in a way that language models have an easier time working with them. One of the main ways LangChain does this is through document loaders. These are very similar to the OpenAI plugins that were just released, except LangChain supports a lot of really cool data sources that aren't yet supported in the plugin world. In the cookbook I use a Hacker News data loader: I pass a simple URL to the loader and tell it to go get that data. It found 76 different comments within that Hacker News post, and when I print out a sample I can see a response by the moderator, dang, along with the other comments — all of which you can now work with inside your language model, which is pretty cool.
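A small sketch of such a loader, assuming the HNLoader class from the 0.0.x-era langchain.document_loaders module; the item URL is a placeholder, not the post from the video.

```python
from langchain.document_loaders import HNLoader

# Placeholder Hacker News item URL — swap in the post you actually want.
loader = HNLoader("https://news.ycombinator.com/item?id=1")
data = loader.load()

print(f"Found {len(data)} comments")
print("Here's a sample:\n", data[0].page_content[:200])
```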
Another big piece of what we do a ton of is text splitting. Oftentimes your document — your book, your essay, whatever — is going to be too long for your language model, so you split it into chunks. Using the recursive character text splitter, the one document I loaded above turns into 606 smaller documents after splitting, and if I preview them you can see they're nice and small. If I set the chunk size to 50, my chunks get a whole lot smaller; let me make that bigger again.

The next thing to look at is retrievers. Retrievers are easy ways to combine your documents with your language models. There are a lot of different retriever types, and the most widely supported is the vector store retriever — most widely supported because we do so much similarity search within embeddings. As an example, I load up a Paul Graham essay just like before, split it into a whole bunch of documents, and create embeddings out of all those little chunks — vectors that capture their semantic meaning. I store those vectors within a document store, call that my db, and then set the db as the retriever so it knows to go get stuff. If I inspect it, you can see that we have our vector store retriever.

Closely related are vectorstores, the databases that hold those vectors. The two main players in the space right now are Pinecone and Weaviate, though OpenAI's retriever documentation lists a whole bunch of others you may find useful. Back in the cookbook, I import my models, get my embeddings ready, and, based on splitting the document above with a chunk size of 1,000, I get 78 documents out of Paul Graham's essay. The embeddings are numerical representations of the semantic meaning of those documents, stored so they can be searched efficiently.
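A sketch of that load, split, embed, and retrieve flow, assuming the 0.0.x-era API with a local FAISS store standing in for a hosted vector store such as Pinecone or Weaviate; the file path and query are placeholders.

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Placeholder path to a local copy of the essay.
loader = TextLoader("data/paul_graham_essay.txt")
documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
print(f"Split into {len(docs)} documents")

# Embed every chunk and store the vectors in a local vector store.
embeddings = OpenAIEmbeddings(openai_api_key="YOUR_OPENAI_API_KEY")
db = FAISS.from_documents(docs, embeddings)

retriever = db.as_retriever()
docs_found = retriever.get_relevant_documents("What did the author build?")
print(docs_found[0].page_content[:200])
```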
it also mentions the main players in the vector store space and demonstrates the creation and storage of embeddings for efficient searching.', 'duration': 186.464, 'highlights': ['Demonstrating the usage of retrievers and vector store retrievers for combining documents with language models and enabling similarity search within embeddings.', "Mentioning the main players in the vector store space, including pinecone and weviate, and suggesting the exploration of other options listed in open AI's retriever documentation.", "Demonstrating the creation and storage of 78 embeddings for Paul Graham's worked essay, emphasizing the numerical representation of the semantic meaning of the documents and their storage for efficient searching."]}], 'duration': 366.729, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/2xxziIWmaSA/pics/2xxziIWmaSA1234210.jpg', 'highlights': ['Creating 606 smaller documents from an original one using text splitting', 'Finding 76 different comments within a post using document loaders', "Storing 78 embeddings for Paul Graham's worked essay for efficient searching", 'Using retrievers and vector store retrievers for combining documents with language models', 'Parsing a JSON object and receiving a dict as the output', 'Mentioning the main players in the vector store space, including pinecone and weviate']}, {'end': 2289.918, 'segs': [{'end': 1630.439, 'src': 'embed', 'start': 1601.319, 'weight': 0, 'content': [{'end': 1605.121, 'text': 'So this is going to be how you help your language models remember things.', 'start': 1601.319, 'duration': 3.802}, {'end': 1608.703, 'text': 'The most common use case for this is going to be your chat history.', 'start': 1605.642, 'duration': 3.061}, {'end': 1613.786, 'text': "So, if you're making a chat bot, then you can tell it the history messages that you've had beforehand,", 'start': 1609.023, 'duration': 4.763}, {'end': 1617.368, 'text': 'which makes it a whole lot better at helping your user do whatever it needs to do.', 'start': 1613.786, 'duration': 3.582}, {'end': 1623.033, 'text': "So in this case, I'm going to import chat message history and I'm going to import my chat open AI again.", 'start': 1617.888, 'duration': 5.145}, {'end': 1627.056, 'text': "And so I'm going to create my chat model and then I'm going to create my history model.", 'start': 1623.613, 'duration': 3.443}, {'end': 1630.439, 'text': "And to my history model, I'm going to add an AI message.", 'start': 1627.656, 'duration': 2.783}], 'summary': 'Help language models remember chat history for better user assistance.', 'duration': 29.12, 'max_score': 1601.319, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/2xxziIWmaSA/pics/2xxziIWmaSA1601319.jpg'}, {'end': 1699.547, 'src': 'embed', 'start': 1674.726, 'weight': 2, 'content': [{'end': 1682.07, 'text': 'And you can see here that it adds the capital of France is Paris to the end of my chat history, which makes it easy for me to work with.', 'start': 1674.726, 'duration': 7.344}, {'end': 1687.054, 'text': 'And another cool functionality of this too is Langchain makes it extremely simple to save this chat history.', 'start': 1682.451, 'duration': 4.603}, {'end': 1688.535, 'text': 'So you can go ahead and load it later.', 'start': 1687.094, 'duration': 1.441}, {'end': 1690.796, 'text': 'A lot of really cool functionality.', 'start': 1688.555, 'duration': 2.241}, {'end': 1691.837, 'text': 'I encourage you to go check out.', 'start': 1690.816, 'duration': 1.021}, 
{'end': 1694.521, 'text': "The next concept we're going to look at is chains.", 'start': 1692.398, 'duration': 2.123}, {'end': 1699.547, 'text': "So in this case, we're going to be combining different LLM calls and actions automatically.", 'start': 1694.841, 'duration': 4.706}], 'summary': 'Langchain adds capital of france as paris to chat history, easy to save and load, and combines llm calls and actions automatically.', 'duration': 24.821, 'max_score': 1674.726, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/2xxziIWmaSA/pics/2xxziIWmaSA1674726.jpg'}, {'end': 1755.518, 'src': 'embed', 'start': 1724.328, 'weight': 3, 'content': [{'end': 1727.09, 'text': 'The first one is going to be a simple sequential chain.', 'start': 1724.328, 'duration': 2.762}, {'end': 1733.054, 'text': "And in this case, I'm going to go ahead and tell it, hey, I want you to do X and then Y and then Z.", 'start': 1727.45, 'duration': 5.604}, {'end': 1738.277, 'text': 'Now, the reason why this is important or why I like to do it is because it helps break up the tasks.', 'start': 1733.054, 'duration': 5.223}, {'end': 1746.11, 'text': 'Now, language models can get distracted sometimes, and if you ask it to do too many things in a row, it could get confused,', 'start': 1738.838, 'duration': 7.272}, {'end': 1748.554, 'text': "it could start to hallucinate, and that's not good for anybody.", 'start': 1746.11, 'duration': 2.444}, {'end': 1755.518, 'text': 'Plus. I want to make sure that my thinking is sound, and that way I can kind of check out the different outputs of each one of my different actions here.', 'start': 1749.135, 'duration': 6.383}], 'summary': 'Sequential chain helps break up tasks and avoid confusion, hallucination in language models.', 'duration': 31.19, 'max_score': 1724.328, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/2xxziIWmaSA/pics/2xxziIWmaSA1724328.jpg'}, {'end': 1795.164, 'src': 'embed', 'start': 1762.701, 'weight': 6, 'content': [{'end': 1764.162, 'text': "I'm going to use two different prompt templates.", 'start': 1762.701, 'duration': 1.461}, {'end': 1769.504, 'text': 'So your job is to come up with a classic dish from the area that the user suggests.', 'start': 1764.462, 'duration': 5.042}, {'end': 1774.607, 'text': "I'm going to input the user location and I'm going to give it the user location, which we'll do in a second here.", 'start': 1769.945, 'duration': 4.662}, {'end': 1781.833, 'text': "And I'm gonna create a LLM chain with this and I'm gonna call it location chain, which basically is gonna take my language model.", 'start': 1775.587, 'duration': 6.246}, {'end': 1783.754, 'text': "it's gonna take a prompt template.", 'start': 1781.833, 'duration': 1.921}, {'end': 1785.996, 'text': "And then the next one we're gonna look at.", 'start': 1784.635, 'duration': 1.361}, {'end': 1791.201, 'text': 'Given a meal, give a short and simple recipe on how to make that dish at home.', 'start': 1786.677, 'duration': 4.524}, {'end': 1795.164, 'text': "So in this case, we have the user location, which that's not actually what we want.", 'start': 1791.701, 'duration': 3.463}], 'summary': 'Using prompt templates to generate local recipes from user input.', 'duration': 32.463, 'max_score': 1762.701, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/2xxziIWmaSA/pics/2xxziIWmaSA1762701.jpg'}, {'end': 1883.176, 'src': 'heatmap', 'start': 1762.701, 'weight': 0.762, 'content': [{'end': 1764.162, 'text': "I'm going to use 
two different prompt templates.", 'start': 1762.701, 'duration': 1.461}, {'end': 1769.504, 'text': 'So your job is to come up with a classic dish from the area that the user suggests.', 'start': 1764.462, 'duration': 5.042}, {'end': 1774.607, 'text': "I'm going to input the user location and I'm going to give it the user location, which we'll do in a second here.", 'start': 1769.945, 'duration': 4.662}, {'end': 1781.833, 'text': "And I'm gonna create an LLM chain with this and I'm gonna call it location chain, which basically is gonna take my language model.", 'start': 1775.587, 'duration': 6.246}, {'end': 1783.754, 'text': "It's gonna take a prompt template.", 'start': 1781.833, 'duration': 1.921}, {'end': 1785.996, 'text': "And then the next one we're gonna look at.", 'start': 1784.635, 'duration': 1.361}, {'end': 1791.201, 'text': 'Given a meal, give a short and simple recipe on how to make that dish at home.', 'start': 1786.677, 'duration': 4.524}, {'end': 1795.164, 'text': "So in this case, we have the user location, which that's not actually what we want.", 'start': 1791.701, 'duration': 3.463}, {'end': 1797.947, 'text': 'We want user meal output.', 'start': 1795.204, 'duration': 2.743}, {'end': 1802.03, 'text': "This wouldn't have mattered because I had the variables the same, but just to make it more clear.", 'start': 1798.267, 'duration': 3.763}, {'end': 1803.672, 'text': 'Uh, given a meal.', 'start': 1802.951, 'duration': 0.721}, {'end': 1805.553, 'text': 'Okay, cool, your response.', 'start': 1803.672, 'duration': 1.881}, {'end': 1806.634, 'text': "I'm gonna do the same thing.", 'start': 1805.553, 'duration': 1.081}, {'end': 1808.436, 'text': "I'm gonna put that into a meal chain.", 'start': 1806.634, 'duration': 1.802}, {'end': 1815.362, 'text': "So what it's gonna do is it's gonna output a meal, a classic dish, and then it's gonna output a simple recipe for that classic dish.", 'start': 1808.436, 'duration': 6.926}, {'end': 1819.801, 'text': "I'm going to create my simple sequential chain.", 'start': 1816.92, 'duration': 2.881}, {'end': 1824.562, 'text': "And in this case, I'm going to specify my chains as my location chain and then the meal chain.", 'start': 1820.241, 'duration': 4.321}, {'end': 1825.923, 'text': 'Order matters.', 'start': 1825.423, 'duration': 0.5}, {'end': 1827.103, 'text': 'Be careful on that.', 'start': 1826.543, 'duration': 0.56}, {'end': 1831.244, 'text': "I'm going to set verbose equals true, which means that it's going to tell us what it's thinking.", 'start': 1827.583, 'duration': 3.661}, {'end': 1834.525, 'text': "And it's actually going to print those statements out so it's easier to debug what's going on.", 'start': 1831.264, 'duration': 3.261}, {'end': 1836.086, 'text': "Let's go ahead and create that.", 'start': 1835.146, 'duration': 0.94}, {'end': 1839.507, 'text': "And then I'm going to say my overall chain, I want you to run.", 'start': 1836.586, 'duration': 2.921}, {'end': 1846.061, 'text': 'And in this case, I only have one input variable, which is gonna be Rome, which is gonna be the user location that I start in the first place.', 'start': 1840.117, 'duration': 5.944}, {'end': 1847.702, 'text': 'Let me go ahead and run this.', 'start': 1846.802, 'duration': 0.9}, {'end': 1856.108, 'text': "So you can see here that it's entering the new sequential chain and it ran Rome against the first prompt template and got me a classic dish, which is really cool.", 'start': 1848.162, 'duration': 7.946}, {'end': 1862.617, 'text': 'And then it gave me a 
recipe on how to make that classic dish, which is really cool.', 'start': 1857.048, 'duration': 5.569}, {'end': 1866.404, 'text': 'So all of a sudden it just did two different runs for me all in one go.', 'start': 1862.978, 'duration': 3.426}, {'end': 1868.387, 'text': "And I didn't have to run any complicated code.", 'start': 1866.684, 'duration': 1.703}, {'end': 1869.649, 'text': 'I could just use Langchain for that.', 'start': 1868.407, 'duration': 1.242}, {'end': 1870.631, 'text': "It's pretty sweet.", 'start': 1870.03, 'duration': 0.601}, {'end': 1877.009, 'text': 'Now, the next one that I wanna show is one that I use quite often, which is gonna be the summarization chain.', 'start': 1871.943, 'duration': 5.066}, {'end': 1883.176, 'text': 'The reason why this one is so cool is because if you have a long piece of text and you want it summarized,', 'start': 1877.509, 'duration': 5.667}], 'summary': 'Creating sequential chains to generate classic dishes and recipes based on user input.', 'duration': 120.475, 'max_score': 1762.701, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/2xxziIWmaSA/pics/2xxziIWmaSA1762701.jpg'}, {'end': 1883.176, 'src': 'embed', 'start': 1857.048, 'weight': 5, 'content': [{'end': 1862.617, 'text': 'And then it gave me a recipe on how to make that classic dish, which is really cool.', 'start': 1857.048, 'duration': 5.569}, {'end': 1866.404, 'text': 'So all of a sudden it just did two different runs for me all in one go.', 'start': 1862.978, 'duration': 3.426}, {'end': 1868.387, 'text': "And I didn't have to run any complicated code.", 'start': 1866.684, 'duration': 1.703}, {'end': 1869.649, 'text': 'I could just use Langchain for that.', 'start': 1868.407, 'duration': 1.242}, {'end': 1870.631, 'text': "It's pretty sweet.", 'start': 1870.03, 'duration': 0.601}, {'end': 1877.009, 'text': 'Now, the next one that I wanna show is one that I use quite often, which is gonna be the summarization chain.', 'start': 1871.943, 'duration': 5.066}, {'end': 1883.176, 'text': 'The reason why this one is so cool is because if you have a long piece of text and you want it summarized,', 'start': 1877.509, 'duration': 5.667}], 'summary': 'Langchain can automate recipe generation and summarization, saving time and effort.', 'duration': 26.128, 'max_score': 1857.048, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/2xxziIWmaSA/pics/2xxziIWmaSA1857048.jpg'}, {'end': 1991.577, 'src': 'embed', 'start': 1954.305, 'weight': 7, 'content': [{'end': 1956.006, 'text': "Here's the summary of chunk number two.", 'start': 1954.305, 'duration': 1.701}, {'end': 1957.947, 'text': "And it's asking for a summary of the summaries.", 'start': 1956.346, 'duration': 1.601}, {'end': 1960.348, 'text': 'And we finally get a summary of the summaries,', 'start': 1958.387, 'duration': 1.961}, {'end': 1969.894, 'text': 'which is really cool because all built into this one liner right here was all the different calls back and forth to figure out how to do the summary of the summaries,', 'start': 1960.348, 'duration': 9.546}, {'end': 1971.936, 'text': 'which is one of the powers of Langchain, which is really sweet.', 'start': 1969.894, 'duration': 2.042}, {'end': 1973.978, 'text': "The last thing we're going to look at is agents.", 'start': 1972.216, 'duration': 1.762}, {'end': 1979.022, 'text': "And this is one of the most complicated concepts within Langchain, which is why we're talking about it last here.", 'start': 1974.458, 'duration': 4.564}, {'end': 
1984.547, 'text': 'But I thought that the official Langchain documentation did a great job describing what agents are.', 'start': 1979.663, 'duration': 4.884}, {'end': 1991.577, 'text': 'Some applications will not require just a predetermined chain of calls to LLMs and other tools.', 'start': 1985.655, 'duration': 5.922}], 'summary': 'Langchain enables easy summary of complex concepts. agents in langchain are described well in the official documentation.', 'duration': 37.272, 'max_score': 1954.305, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/2xxziIWmaSA/pics/2xxziIWmaSA1954305.jpg'}], 'start': 1601.319, 'title': 'Enhancing language models and langchain', 'summary': 'Discusses improving language models by utilizing chat history and chains, and demonstrates the use of langchain for creating sequential chains to generate classic dishes and recipes, summarization chains for summarizing long texts, and agents for dynamically determining chains based on user input.', 'chapters': [{'end': 1762.221, 'start': 1601.319, 'title': 'Improving language models with chat history and chains', 'summary': 'Discusses utilizing chat history to enhance language models, enabling them to remember previous interactions and responses, and the concept of chains to automate sequential actions for better model performance and task management.', 'duration': 160.902, 'highlights': ['The importance of utilizing chat history to improve language models is emphasized, allowing the models to better assist users based on previous interactions.', 'Demonstration of adding chat history to the language model and observing its impact on model responses, showcasing the practical application of incorporating historical messages.', 'The ease of saving and loading chat history using Langchain is highlighted, providing a convenient functionality for managing and utilizing chat history.', 'Introduction to the concept of chains to automate sequential actions in language models, enhancing task management and model performance.', 'Explaining the significance of using chains to break up tasks and avoid model distraction or confusion, ensuring sound thinking and reliable outputs.']}, {'end': 2289.918, 'start': 1762.701, 'title': 'Langchain: agents, summarization, and sequential chains', 'summary': 'Demonstrates the use of langchain for creating sequential chains to generate classic dishes and recipes, summarization chains for summarizing long texts, and agents for dynamically determining chains based on user input, with a detailed example of an agent navigating a multi-step question.', 'duration': 527.217, 'highlights': ['The chapter demonstrates the use of Langchain for creating sequential chains to generate classic dishes and recipes, summarization chains for summarizing long texts, and agents for dynamically determining chains based on user input, with a detailed example of an agent navigating a multi-step question.', 'The author creates sequential chains using Langchain to generate classic dishes and recipes by inputting user location and meal, showcasing the capability to run two different prompt templates in one go.', 'The author showcases the use of summarization chains for chunking up longer pieces of text and finding summaries of those different chunks, ultimately obtaining a final concise summary, highlighting the power of Langchain in simplifying the process of summarizing texts.', "The chapter provides a comprehensive explanation of agents in Langchain, emphasizing their role in dynamically 
determining chains based on user input, with a detailed example illustrating an agent's ability to navigate a multi-step question and obtain the final answer without a predetermined chain.", 'The author emphasizes the importance of selecting the appropriate agent type for different tasks, encouraging readers to explore the documentation for further insights into utilizing agents effectively in Langchain.']}], 'duration': 688.599, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/2xxziIWmaSA/pics/2xxziIWmaSA1601319.jpg', 'highlights': ['The importance of utilizing chat history to improve language models is emphasized, allowing the models to better assist users based on previous interactions.', 'Demonstration of adding chat history to the language model and observing its impact on model responses, showcasing the practical application of incorporating historical messages.', 'The ease of saving and loading chat history using Langchain is highlighted, providing a convenient functionality for managing and utilizing chat history.', 'Introduction to the concept of chains to automate sequential actions in language models, enhancing task management and model performance.', 'Explaining the significance of using chains to break up tasks and avoid model distraction or confusion, ensuring sound thinking and reliable outputs.', 'The chapter demonstrates the use of Langchain for creating sequential chains to generate classic dishes and recipes, summarization chains for summarizing long texts, and agents for dynamically determining chains based on user input, with a detailed example of an agent navigating a multi-step question.', 'The author creates sequential chains using Langchain to generate classic dishes and recipes by inputting user location and meal, showcasing the capability to run two different prompt templates in one go.', 'The author showcases the use of summarization chains for chunking up longer pieces of text and finding summaries of those different chunks, ultimately obtaining a final concise summary, highlighting the power of Langchain in simplifying the process of summarizing texts.', "The chapter provides a comprehensive explanation of agents in Langchain, emphasizing their role in dynamically determining chains based on user input, with a detailed example illustrating an agent's ability to navigate a multi-step question and obtain the final answer without a predetermined chain.", 'The author emphasizes the importance of selecting the appropriate agent type for different tasks, encouraging readers to explore the documentation for further insights into utilizing agents effectively in Langchain.']}], 'highlights': ['Langchain simplifies working with AI models through integration and agency.', 'The video covers Langchain basics to help viewers start building and having fun as quickly as possible, using new conceptual docs from LangChain.', 'The content abstracts technical pieces into theoretical and qualitative aspects, making it extremely helpful for understanding LangChain.', 'The Langchain Cookbook, mentioned in the video, provides a companion resource for viewers to follow along and delve deeper into the content.', 'Viewers are encouraged to check out the GitHub link provided to explore the companion Langchain Cookbook and access additional resources for further understanding.', 'Text embedding converts text into a 1536-dimensional vector for easy comparison.', 'Prompt templates allow for dynamic prompt generation by replacing tokens.', 'Documents with metadata 
can be used for filtering in large repositories of information.', 'Different model types, such as language and chat models, play a crucial role in interacting with AI.', 'The importance of utilizing chat history to improve language models is emphasized, allowing the models to better assist users based on previous interactions.', 'Demonstration of adding chat history to the language model and observing its impact on model responses, showcasing the practical application of incorporating historical messages.', 'The ease of saving and loading chat history using Langchain is highlighted, providing a convenient functionality for managing and utilizing chat history.', 'Introduction to the concept of chains to automate sequential actions in language models, enhancing task management and model performance.', 'Explaining the significance of using chains to break up tasks and avoid model distraction or confusion, ensuring sound thinking and reliable outputs.', 'The chapter discusses in-context learning and the use of example selectors for prompts.', 'It introduces the semantic similarity example selector using OpenAI embeddings.', 'The process involves importing the semantic similarity example selector and defining a list of examples.', 'The chapter explains the process of creating a few-shot prompt template.', "It demonstrates the utilization of output parsers to convert the language model's response.", 'Creating 606 smaller documents from an original one using text splitting', 'Finding 76 different comments within a post using document loaders', "Storing 78 embeddings for Paul Graham's 'worked' essay for efficient searching", 'Using retrievers and vector store retrievers for combining documents with language models', 'Parsing a JSON object and receiving a dict as the output', 'Mentioning the main players in the vector store space, including Pinecone and Weaviate']}
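code sketches
The sketches below illustrate, in rough code form, the concepts walked through in this part of the video. They are minimal examples assuming the legacy langchain 0.0.x Python API used in the cookbook; API keys, file paths, and prompt wording are placeholders rather than the exact code from the notebook.

First, the retriever and vector store flow summarized above: a document is split into chunks, each chunk is embedded into a numeric vector that captures its semantic meaning, and the vectors are stored so that similar chunks can be looked up later. This sketch uses a local FAISS index (requires the faiss-cpu package) instead of a hosted store such as Pinecone or Weaviate.

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Load a long document and split it into smaller chunks.
documents = TextLoader("path/to/worked.txt").load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
docs = splitter.split_documents(documents)

# Embed each chunk (a numeric representation of its semantic meaning)
# and store the vectors so they can be searched efficiently.
embeddings = OpenAIEmbeddings(openai_api_key="YOUR_API_KEY")
vectorstore = FAISS.from_documents(docs, embeddings)

# A retriever wraps the vector store so relevant chunks can be pulled back
# out by similarity search and combined with a language model.
retriever = vectorstore.as_retriever()
relevant_docs = retriever.get_relevant_documents("What did the author work on?")
```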
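The memory segment above (import a chat message history and a chat model, add AI and user messages, then append the model's reply so the history can be saved and reloaded) maps onto roughly the following; the message text is illustrative.

```python
from langchain.memory import ChatMessageHistory
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(temperature=0, openai_api_key="YOUR_API_KEY")

# Build up a history of AI and user messages.
history = ChatMessageHistory()
history.add_ai_message("Hi! How can I help?")
history.add_user_message("What is the capital of France?")

# Pass the accumulated messages to the chat model and append its reply,
# so the answer (e.g. that the capital of France is Paris) ends up at the
# end of the chat history.
ai_response = chat(history.messages)
history.add_ai_message(ai_response.content)
print(history.messages)
```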
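The simple sequential chain walkthrough (a location chain that proposes a classic dish, followed by a meal chain that writes a short recipe, run with "Rome" as the input) corresponds roughly to the sketch below; the prompt text is paraphrased from the transcript, not copied from the notebook.

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

llm = OpenAI(temperature=1, openai_api_key="YOUR_API_KEY")

# Chain 1: user location -> a classic dish from that area.
location_prompt = PromptTemplate(
    input_variables=["user_location"],
    template=(
        "Your job is to come up with a classic dish from the area that the user suggests.\n"
        "% USER LOCATION\n{user_location}\n\nYOUR RESPONSE:"
    ),
)
location_chain = LLMChain(llm=llm, prompt=location_prompt)

# Chain 2: the dish from chain 1 -> a short, simple recipe.
meal_prompt = PromptTemplate(
    input_variables=["user_meal"],
    template=(
        "Given a meal, give a short and simple recipe on how to make that dish at home.\n"
        "% MEAL\n{user_meal}\n\nYOUR RESPONSE:"
    ),
)
meal_chain = LLMChain(llm=llm, prompt=meal_prompt)

# Order matters: the output of location_chain becomes the input of meal_chain.
# verbose=True prints each intermediate step, which makes debugging easier.
overall_chain = SimpleSequentialChain(chains=[location_chain, meal_chain], verbose=True)
result = overall_chain.run("Rome")
```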
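The summarization chain described above (split a long text into chunks, summarize each chunk, then ask for a summary of the summaries) is the "map_reduce" chain type. A minimal sketch, with a placeholder file path:

```python
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.summarize import load_summarize_chain

llm = OpenAI(temperature=0, openai_api_key="YOUR_API_KEY")

# Chunk up the long piece of text.
documents = TextLoader("path/to/long_text.txt").load()
docs = RecursiveCharacterTextSplitter(chunk_size=700, chunk_overlap=50).split_documents(documents)

# map_reduce: summarize each chunk, then summarize the chunk summaries.
# verbose=True shows the intermediate prompts and per-chunk summaries.
chain = load_summarize_chain(llm, chain_type="map_reduce", verbose=True)
summary = chain.run(docs)
print(summary)
```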
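For the agents section, a common minimal setup in the legacy API is initialize_agent with a search tool, so the language model decides which tool to call and in what order instead of following a predetermined chain. This assumes a SerpAPI key in addition to the OpenAI key, and the question is only an example of a multi-step query, not necessarily the one used in the video.

```python
from langchain.llms import OpenAI
from langchain.agents import load_tools, initialize_agent

llm = OpenAI(temperature=0, openai_api_key="YOUR_API_KEY")

# Give the agent a web-search tool it can choose to call.
toolkit = load_tools(["serpapi"], llm=llm, serpapi_api_key="YOUR_SERPAPI_KEY")

# The agent type controls how the LLM reasons about which tool to use next.
agent = initialize_agent(
    toolkit, llm, agent="zero-shot-react-description", verbose=True
)

# A multi-step question: the agent searches, reads the result, and may
# search again before settling on a final answer.
answer = agent.run("What band is Natalie Bergman part of, and what was that band's first album?")
print(answer)
```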
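Finally, the highlight about parsing a JSON object and receiving a dict refers to output parsers: the parser contributes format instructions to the prompt, then turns the model's JSON reply into a Python dict. A hedged sketch with illustrative field names:

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.output_parsers import StructuredOutputParser, ResponseSchema

llm = OpenAI(temperature=0, openai_api_key="YOUR_API_KEY")

# Describe the fields you want back; the parser turns this into format
# instructions that ask the model to reply with a JSON code block.
response_schemas = [
    ResponseSchema(name="bad_string", description="The poorly formatted user input"),
    ResponseSchema(name="good_string", description="The reformatted response"),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
format_instructions = output_parser.get_format_instructions()

prompt = PromptTemplate(
    input_variables=["user_input"],
    partial_variables={"format_instructions": format_instructions},
    template=(
        "You will be given a poorly formatted string from a user. Reformat it.\n\n"
        "{format_instructions}\n\n% USER INPUT:\n{user_input}\n\nYOUR RESPONSE:"
    ),
)

llm_output = llm(prompt.format(user_input="welcom to califonya!"))
parsed = output_parser.parse(llm_output)  # a Python dict with 'bad_string' and 'good_string' keys
```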