title
Live Day 1 - Introduction And Roadmap To Natural Language Processing And Quiz - 5000 INR Giveaway

description
Enroll for free on the Community Dashboard: https://ineuron.ai/course/NLP-Foundations
Only 5 days are left for the FSDA batch to start. We are happy to announce that iNeuron is coming up with a 6-month Live Full Stack Data Analytics batch with job assistance and internship, starting from 18th June 2022. The instructors of the course will be me and Sudhanshu. The course price is really affordable: 4000 INR, including GST. The course content, along with pre-recorded videos, will be available for a lifetime. You can check the course syllabus below.
Course link: https://courses.ineuron.ai/Full-Stack-Data-Analytics
From my side, you can avail an additional 10% off by using the coupon code Krish10. Don't miss this opportunity and grab it before it's too late. Happy Learning!!

detail
{'title': 'Live Day 1- Introduction And Roadmap To Natural Language Processing And Quiz-5000Inr Give Away', 'heatmap': [{'end': 1700.335, 'start': 1646.975, 'weight': 0.749}, {'end': 2524.417, 'start': 2369.187, 'weight': 0.882}, {'end': 2717.357, 'start': 2662.683, 'weight': 0.858}, {'end': 3159.501, 'start': 2955.723, 'weight': 0.913}], 'summary': "An introduction to a live nlp series with a 5000 inr giveaway and a roadmap covering nlp topics from basics to bert and transformers, emphasizing python, stats, machine learning, and deep learning knowledge. the series includes community sessions, an nlp quiz announcement, nlp importance in ai, nlp roadmap and techniques, nlp's role in text-to-image conversion, tokenization, text pre-processing steps, attention models, label to vector conversion, and live quiz with leaderboard standings and prize money transfer to winners.", 'chapters': [{'end': 406.917, 'segs': [{'end': 73.334, 'src': 'embed', 'start': 36.314, 'weight': 0, 'content': [{'end': 56.584, 'text': "Yeah?. So yes, today we are going to start the NLP series, and So just give me a confirmation if you're able to hear me.", 'start': 36.314, 'duration': 20.27}, {'end': 61.848, 'text': 'I think I can hear my voice too from my YouTube channel itself.', 'start': 56.664, 'duration': 5.184}, {'end': 69.153, 'text': "So do hit like because this is going to be an amazing series of NLP where I'm going to cover machine learning and deep learning.", 'start': 62.388, 'duration': 6.765}, {'end': 70.853, 'text': 'Apart from that today.', 'start': 69.793, 'duration': 1.06}, {'end': 73.334, 'text': "I'm also going to give you five thousand rupees.", 'start': 71.034, 'duration': 2.3}], 'summary': 'Starting nlp series covering ml and dl, offering 5000 rupees.', 'duration': 37.02, 'max_score': 36.314, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF836314.jpg'}, {'end': 137.735, 'src': 'embed', 'start': 94.772, 'weight': 1, 'content': [{'end': 100.654, 'text': "at the end of the day they can communicate me through the Instagram and I'll give you the money live through my Google pay or phone pay.", 'start': 94.772, 'duration': 5.882}, {'end': 102.615, 'text': 'You can actually give whatever information you want.', 'start': 100.674, 'duration': 1.941}, {'end': 107.937, 'text': "Okay So today we are going to have a quiz and we'll select three prices, I guess.", 'start': 102.995, 'duration': 4.942}, {'end': 115.701, 'text': "Let's make it like first price will be 2000, second price will be 2000 and third price will be 1000 rupees.", 'start': 109.338, 'duration': 6.363}, {'end': 122.925, 'text': "Okay, so we'll do it like that and the NLP plan.", 'start': 116.761, 'duration': 6.164}, {'end': 131.231, 'text': "the agenda will be in such a way that we'll try to cover everything from basics and we'll make sure that you know we go till BERT and Transformers.", 'start': 122.925, 'duration': 8.306}, {'end': 137.735, 'text': "So there's just not a simple session or basic session that we are going to cover, but instead we are going to cover in an amazing way.", 'start': 131.671, 'duration': 6.064}], 'summary': 'Offering quiz prizes of 2000, 2000, and 1000 rupees. 
agenda covers basics to bert and transformers in an amazing way.', 'duration': 42.963, 'max_score': 94.772, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF894772.jpg'}, {'end': 284.081, 'src': 'embed', 'start': 256.434, 'weight': 2, 'content': [{'end': 261.521, 'text': 'Because if you follow this roadmap, you will be able to easily crack any interviews.', 'start': 256.434, 'duration': 5.087}, {'end': 265.327, 'text': 'Okay So roadmap of NLP.', 'start': 261.641, 'duration': 3.686}, {'end': 273.174, 'text': 'Okay Second thing is why NLP? Or I can also make this as my first point, but we are going to understand this.', 'start': 265.607, 'duration': 7.567}, {'end': 278.417, 'text': 'Okay And then third, today we are also going to see a lot of examples.', 'start': 273.314, 'duration': 5.103}, {'end': 280.779, 'text': 'A lot of examples.', 'start': 278.437, 'duration': 2.342}, {'end': 284.081, 'text': 'Okay A lot of examples, real world scenarios and all.', 'start': 280.919, 'duration': 3.162}], 'summary': 'Learn nlp roadmap for cracking interviews, with many real-world examples.', 'duration': 27.647, 'max_score': 256.434, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF8256434.jpg'}], 'start': 15.39, 'title': 'Nlp series & roadmap', 'summary': 'Introduces a live nlp series with a 5000 rupees giveaway and covers the nlp roadmap, including topics from basics to bert and transformers, emphasizing the need for python, stats, machine learning, and deep learning knowledge.', 'chapters': [{'end': 115.701, 'start': 15.39, 'title': 'Nlp series & 5000 rupees giveaway', 'summary': 'Introduces a live nlp series and announces a 5000 rupees giveaway through a quiz, with distribution of 2000 rupees for the first prize, 2000 rupees for the second prize, and 1000 rupees for the third prize.', 'duration': 100.311, 'highlights': ['The chapter introduces a live NLP series and announces a 5000 rupees giveaway through a quiz, with distribution of 2000 rupees for the first prize, 2000 rupees for the second prize, and 1000 rupees for the third prize.', 'Participants need to follow the speaker on Instagram, and winners can communicate through Instagram to receive the money via Google pay or phone pay.', 'The speaker plans to start the quiz after completing the session, followed by the distribution of the prize money.']}, {'end': 406.917, 'start': 116.761, 'title': 'Nlp roadmap and prerequisites', 'summary': 'Covers the nlp roadmap, including topics from basics to bert and transformers, and outlines the prerequisites for learning nlp, emphasizing the need for python, stats, machine learning, and deep learning knowledge.', 'duration': 290.156, 'highlights': ['The chapter covers the NLP roadmap, including topics from basics to BERT and Transformers, and outlines the prerequisites for learning NLP. It mentions covering NLP topics from basics to BERT and Transformers and emphasizes the importance of knowing the prerequisites for learning NLP, which includes Python, stats, machine learning, and deep learning knowledge.', 'The roadmap of NLP is discussed, emphasizing its importance in easily cracking interviews. The chapter discusses the significance of following the NLP roadmap, highlighting its potential to aid in easily cracking interviews.', 'The prerequisites for learning NLP are outlined, including the need for Python, stats, machine learning, and deep learning knowledge. 
The chapter outlines the prerequisites for learning NLP, stressing the importance of knowledge in Python, stats, machine learning, and deep learning.']}], 'duration': 391.527, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF815390.jpg', 'highlights': ['The chapter introduces a live NLP series and announces a 5000 rupees giveaway through a quiz, with distribution of 2000 rupees for the first prize, 2000 rupees for the second prize, and 1000 rupees for the third prize.', 'The chapter covers the NLP roadmap, including topics from basics to BERT and Transformers, and outlines the prerequisites for learning NLP.', 'The roadmap of NLP is discussed, emphasizing its importance in easily cracking interviews.', 'Participants need to follow the speaker on Instagram, and winners can communicate through Instagram to receive the money via Google pay or phone pay.']}, {'end': 1047.96, 'segs': [{'end': 435.497, 'src': 'embed', 'start': 406.917, 'weight': 0, 'content': [{'end': 410.058, 'text': 'and again, all this has been covered in my community sessions.', 'start': 406.917, 'duration': 3.141}, {'end': 412.679, 'text': 'i hope everybody agrees to that, right.', 'start': 410.058, 'duration': 2.621}, {'end': 415.939, 'text': 'so this is all completed in community sessions.', 'start': 412.679, 'duration': 3.26}, {'end': 422.42, 'text': 'okay, now, after we complete all these topics, then we are going to have an amazing quiz.', 'start': 415.939, 'duration': 6.481}, {'end': 424.061, 'text': 'okay, the quiz.', 'start': 422.42, 'duration': 1.641}, {'end': 432.413, 'text': 'uh, we will be giving 5000 rupees for the first prize, 2000 rupees inr, Okay.', 'start': 424.061, 'duration': 8.352}, {'end': 435.497, 'text': 'For the second price, 1000 rupees INR.', 'start': 432.914, 'duration': 2.583}], 'summary': 'Community sessions cover all topics. 
a quiz with prizes: 5000 inr for 1st, 2000 inr for 2nd.', 'duration': 28.58, 'max_score': 406.917, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF8406917.jpg'}, {'end': 478.974, 'src': 'embed', 'start': 454.934, 'weight': 2, 'content': [{'end': 462.24, 'text': 'okay, so, and for participating, just go ahead and follow me in instagram.', 'start': 454.934, 'duration': 7.306}, {'end': 465.522, 'text': 'okay, because there only i will be able to take your information.', 'start': 462.24, 'duration': 3.282}, {'end': 472.388, 'text': 'okay, so whoever will be coming First to second, third, you will be able to understand this.', 'start': 465.522, 'duration': 6.866}, {'end': 478.974, 'text': 'So you just have to drop me a message if you come first so that I can validate that you are the genuine person or not.', 'start': 472.768, 'duration': 6.206}], 'summary': 'Participants must follow on instagram to submit information and validate their status.', 'duration': 24.04, 'max_score': 454.934, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF8454934.jpg'}, {'end': 953.057, 'src': 'embed', 'start': 922.947, 'weight': 3, 'content': [{'end': 926.888, 'text': 'the machine is not able to capture the sarcasm you know properly.', 'start': 922.947, 'duration': 3.941}, {'end': 931.509, 'text': 'Yes, Google is doing amazing amount of work, but it will still take time.', 'start': 927.368, 'duration': 4.141}, {'end': 937.091, 'text': 'You know, it is going to take time and probably in the upcoming days, you know, the sarcasm thing is also getting captured.', 'start': 931.769, 'duration': 5.322}, {'end': 942.832, 'text': 'Nvidia has come up with an open source algorithm or open source model.', 'start': 937.431, 'duration': 5.401}, {'end': 945.493, 'text': 'which can actually detect some amount of sarcasm.', 'start': 943.232, 'duration': 2.261}, {'end': 948.935, 'text': 'And recently the GitHub hackathon that we had in iNeuron.', 'start': 945.553, 'duration': 3.382}, {'end': 953.057, 'text': 'you know one of the guy used that and he dubbed the entire voice right?', 'start': 948.935, 'duration': 4.122}], 'summary': "Nvidia's open source algorithm can detect sarcasm, used in recent github hackathon.", 'duration': 30.11, 'max_score': 922.947, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF8922947.jpg'}, {'end': 994.691, 'src': 'embed', 'start': 968.546, 'weight': 1, 'content': [{'end': 974.879, 'text': 'So Wherever your text is data, you know wherever your text is data.', 'start': 968.546, 'duration': 6.333}, {'end': 978.221, 'text': 'at that point of time, you specifically need to use NLP.', 'start': 974.879, 'duration': 3.342}, {'end': 980.803, 'text': 'Okay So this is what is the importance of NLP.', 'start': 978.501, 'duration': 2.302}, {'end': 984.266, 'text': 'Yet at the end of the day, we are creating an AI application.', 'start': 980.883, 'duration': 3.383}, {'end': 985.666, 'text': 'Again, understand.', 'start': 985.046, 'duration': 0.62}, {'end': 994.691, 'text': 'Now, in this case, the AI application may be a language translator or it can be a chatbot, it can be a support chatbot and many more things right?', 'start': 986.027, 'duration': 8.664}], 'summary': 'Nlp is crucial for ai applications like language translators and chatbots.', 'duration': 26.145, 'max_score': 968.546, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF8968546.jpg'}], 'start': 406.917, 'title': 'Community sessions, quiz announcement, and nlp importance', 'summary': 'Covers the completion of topics in community sessions, an upcoming quiz with cash prizes, and the importance of nlp in ai, including its role, challenges, and applications such as language translation and chatbots.', 'chapters': [{'end': 553.157, 'start': 406.917, 'title': 'Community sessions and quiz announcement', 'summary': 'Introduced the completion of topics in community sessions, followed by the announcement of an upcoming quiz with cash prizes of 5000, 2000, and 1000 rupees inr for the top three participants, who need to follow the presenter on instagram to participate and validate their identity.', 'duration': 146.24, 'highlights': ['The presenter announced an upcoming quiz with cash prizes of 5000, 2000, and 1000 rupees INR for the top three participants.', 'Participants are required to follow the presenter on Instagram to provide information and validate their identity.', 'The presenter emphasized the importance of watching the session till the end to participate in the quiz, as the quiz content will be based on the topics covered in the session.']}, {'end': 1047.96, 'start': 553.338, 'title': 'Nlp importance in ai', 'summary': 'Discusses the importance of natural language processing (nlp) in ai, emphasizing its role in ai, machine learning, and deep learning, and the challenges and potential applications of nlp, including the need to understand and process text data for tasks such as language translation, chatbots, and sarcasm detection.', 'duration': 494.622, 'highlights': ["NLP's role in AI, machine learning, and deep learning NLP is crucial in AI, machine learning, and deep learning for processing text data and enabling tasks such as language translation and chatbots.", "Challenges in capturing sarcasm using NLP The challenge of capturing sarcasm using NLP, and the potential future developments in this field, such as Nvidia's open source model for sarcasm detection.", "Potential applications of NLP NLP's potential applications in language translation, chatbots, and voice dubbing for various languages, highlighting the importance of NLP in processing text data."]}], 'duration': 641.043, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF8406917.jpg', 'highlights': ['The presenter announced an upcoming quiz with cash prizes of 5000, 2000, and 1000 rupees INR for the top three participants.', "NLP's role in AI, machine learning, and deep learning NLP is crucial in AI, machine learning, and deep learning for processing text data and enabling tasks such as language translation and chatbots.", 'Participants are required to follow the presenter on Instagram to provide information and validate their identity.', "Challenges in capturing sarcasm using NLP The challenge of capturing sarcasm using NLP, and the potential future developments in this field, such as Nvidia's open source model for sarcasm detection.", 'The presenter emphasized the importance of watching the session till the end to participate in the quiz, as the quiz content will be based on the topics covered in the session.', "Potential applications of NLP NLP's potential applications in language translation, chatbots, and voice dubbing for various languages, highlighting the importance of NLP in processing text data."]}, {'end': 1910.144, 'segs': [{'end': 1107.357, 
'src': 'embed', 'start': 1077.456, 'weight': 2, 'content': [{'end': 1081.64, 'text': "Okay? I'm just going to go from bottom to top approach.", 'start': 1077.456, 'duration': 4.184}, {'end': 1083.862, 'text': 'Now here you can see that.', 'start': 1082.601, 'duration': 1.261}, {'end': 1092.389, 'text': "Let's say the first step you really need to know in machine learning, in NLP specifically, is called as text pre-processing.", 'start': 1084.602, 'duration': 7.787}, {'end': 1094.871, 'text': 'Text pre-processing.', 'start': 1093.59, 'duration': 1.281}, {'end': 1099.875, 'text': 'Now, what exactly is text pre-processing?', 'start': 1097.994, 'duration': 1.881}, {'end': 1103.495, 'text': 'See, guys, There are such scenarios and situation.', 'start': 1099.915, 'duration': 3.58}, {'end': 1103.835, 'text': 'you know.', 'start': 1103.495, 'duration': 0.34}, {'end': 1107.357, 'text': 'when we specifically get text data that may not be clean.', 'start': 1103.835, 'duration': 3.522}], 'summary': 'Text pre-processing is crucial in nlp for cleaning unstructured text data.', 'duration': 29.901, 'max_score': 1077.456, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF81077456.jpg'}, {'end': 1368.544, 'src': 'embed', 'start': 1341.543, 'weight': 3, 'content': [{'end': 1347.107, 'text': 'Second layer is again text preprocessing, but I would definitely say it as part two.', 'start': 1341.543, 'duration': 5.564}, {'end': 1350.049, 'text': 'So here I have text preprocessing again.', 'start': 1347.567, 'duration': 2.482}, {'end': 1357.555, 'text': 'And now, in this text preprocessing part 2, I am going to focus on how I can convert the words into vectors.', 'start': 1350.729, 'duration': 6.826}, {'end': 1361.518, 'text': 'So here I am going to basically have techniques like bag of words.', 'start': 1357.975, 'duration': 3.543}, {'end': 1364.14, 'text': 'I am going to have techniques like TF-IDF, right?', 'start': 1361.518, 'duration': 2.622}, {'end': 1368.544, 'text': 'Here I am going to have techniques like unigrams, bigrams, you know?', 'start': 1364.54, 'duration': 4.004}], 'summary': 'Text preprocessing part 2 focuses on converting words into vectors using techniques like bag of words, tf-idf, unigrams, and bigrams.', 'duration': 27.001, 'max_score': 1341.543, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF81341543.jpg'}, {'end': 1546.125, 'src': 'embed', 'start': 1518.848, 'weight': 0, 'content': [{'end': 1522.45, 'text': "We'll be learning about GRU, right? 
GRU RNN.", 'start': 1518.848, 'duration': 3.602}, {'end': 1531.195, 'text': 'So all these techniques we will try to learn and using these techniques now, because here we are actually moving into deep learning.', 'start': 1522.751, 'duration': 8.444}, {'end': 1537.297, 'text': 'Okay, Here we are actually moving into deep learning, and with the help of deep learning,', 'start': 1531.896, 'duration': 5.401}, {'end': 1542.902, 'text': 'you will be able to create an efficient model specifically related to NLP use cases.', 'start': 1537.297, 'duration': 5.605}, {'end': 1546.125, 'text': 'But it is always good that we know all these things also.', 'start': 1543.162, 'duration': 2.963}], 'summary': 'Learning about gru rnn in deep learning for nlp use cases.', 'duration': 27.277, 'max_score': 1518.848, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF81518848.jpg'}, {'end': 1720.945, 'src': 'heatmap', 'start': 1646.975, 'weight': 1, 'content': [{'end': 1651.221, 'text': "even we'll try to create, you know, machine translation problem statements.", 'start': 1646.975, 'duration': 4.246}, {'end': 1653.243, 'text': "i'll do it practically in front of you.", 'start': 1651.221, 'duration': 2.022}, {'end': 1656.387, 'text': 'then we will be having attention models Right?', 'start': 1653.243, 'duration': 3.144}, {'end': 1657.908, 'text': 'All these things we will try to learn.', 'start': 1656.547, 'duration': 1.361}, {'end': 1662.051, 'text': 'So that is the reason I have not told you that only 7 days I will take NLP.', 'start': 1658.308, 'duration': 3.743}, {'end': 1663.992, 'text': 'This will go till 15 to 20 days.', 'start': 1662.351, 'duration': 1.641}, {'end': 1668.515, 'text': 'That is what I feel if I am able to cover every day 1 and a half hour to 2 hour session.', 'start': 1664.152, 'duration': 4.363}, {'end': 1673.658, 'text': "And I will not go much with it because again I don't want you all to take stress of so many things.", 'start': 1668.895, 'duration': 4.763}, {'end': 1676.46, 'text': 'We will go slowly, we will try to convert this into 15 days.', 'start': 1674.019, 'duration': 2.441}, {'end': 1683.605, 'text': 'Okay? 
Then we will be having, we will be learning about transformers, we will be learning about the final thing which is called as BERT.', 'start': 1676.5, 'duration': 7.105}, {'end': 1686.987, 'text': 'okay, so this will be our pyramid of learning.', 'start': 1684.085, 'duration': 2.902}, {'end': 1693.111, 'text': 'the learning process will be going from bottom to top, okay, and we will try to learn in this specific way.', 'start': 1686.987, 'duration': 6.124}, {'end': 1694.071, 'text': 'now, what are libraries?', 'start': 1693.111, 'duration': 0.96}, {'end': 1695.332, 'text': 'we are going to cover.', 'start': 1694.071, 'duration': 1.261}, {'end': 1698.994, 'text': 'one library for machine learning will going to use nltk.', 'start': 1695.332, 'duration': 3.662}, {'end': 1700.335, 'text': 'then we are going to use space c.', 'start': 1698.994, 'duration': 1.341}, {'end': 1703.577, 'text': 'One more library is something called as text block.', 'start': 1701.696, 'duration': 1.881}, {'end': 1706.518, 'text': "So with respect to machine learning, we'll try to cover these three.", 'start': 1703.917, 'duration': 2.601}, {'end': 1713.802, 'text': 'And with respect to deep learning, we will be covering TensorFlow.', 'start': 1707.199, 'duration': 6.603}, {'end': 1719.484, 'text': 'So using TensorFlow, if you want PyTorch, you can give me 1000 likes.', 'start': 1714.582, 'duration': 4.902}, {'end': 1720.945, 'text': "I'll also teach you in PyTorch.", 'start': 1719.544, 'duration': 1.401}], 'summary': 'Nlp and machine learning sessions to cover machine translation, attention models, nltk, spacy, textblob, tensorflow, and possibly pytorch, with sessions lasting 15-20 days for 1.5-2 hours each.', 'duration': 73.97, 'max_score': 1646.975, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF81646975.jpg'}], 'start': 1048.06, 'title': 'Nlp roadmap and techniques', 'summary': 'Covers the roadmap of nlp, including various text preprocessing techniques such as bag of words, tf-idf, word2vec, stop words, lemmatization, and stemming, progressing to advanced techniques like gensim, word embeddings, bi-directional lstms, attention models, transformers, and bert, with a focus on machine learning, deep learning, and the use of libraries such as nltk, spacy, and textblob for machine learning, and tensorflow for deep learning.', 'chapters': [{'end': 1154.811, 'start': 1048.06, 'title': 'Nlp roadmap and text pre-processing', 'summary': 'Introduces the roadmap of nlp, emphasizing the importance of text pre-processing in converting text data into numerical vectors for machine learning applications, such as spam classification and chatbots.', 'duration': 106.751, 'highlights': ['Text pre-processing is the first step in NLP, involving the conversion of text data into numerical vectors, enabling applications like spam classification and chatbots.', 'The roadmap of NLP is presented, emphasizing the significance of text pre-processing in machine learning applications.']}, {'end': 1910.144, 'start': 1155.231, 'title': 'Nlp roadmap and techniques', 'summary': 'Covers the roadmap of nlp, including various text preprocessing techniques such as bag of words, tf-idf, word2vec, stop words, lemmatization, and stemming, progressing to advanced techniques like gensim, word embeddings, bi-directional lstms, attention models, transformers, and bert, with a focus on both machine learning and deep learning, and the use of libraries such as nltk, spacy, and textblob for machine learning, 
and tensorflow for deep learning.', 'duration': 754.913, 'highlights': ['The chapter covers various text preprocessing techniques such as bag of words, TF-IDF, word2vec, stop words, lemmatization, and stemming, which are essential for cleaning and converting data efficiently for NLP (e.g., word2vec is used in both machine learning and deep learning, and lemmatization and stop words are important techniques).', 'The roadmap progresses to advanced techniques like Gensim, word embeddings, bi-directional LSTMs, attention models, transformers, and BERT, providing a comprehensive understanding of text handling and deep learning models for NLP use cases.', 'The chapter also emphasizes the use of libraries such as NLTK, spacy, and textblob for machine learning, and TensorFlow for deep learning, to implement the various NLP techniques discussed.']}], 'duration': 862.084, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF81048060.jpg', 'highlights': ['The roadmap progresses to advanced techniques like Gensim, word embeddings, bi-directional LSTMs, attention models, transformers, and BERT, providing a comprehensive understanding of text handling and deep learning models for NLP use cases.', 'The chapter also emphasizes the use of libraries such as NLTK, spacy, and textblob for machine learning, and TensorFlow for deep learning, to implement the various NLP techniques discussed.', 'Text pre-processing is the first step in NLP, involving the conversion of text data into numerical vectors, enabling applications like spam classification and chatbots.', 'The chapter covers various text preprocessing techniques such as bag of words, TF-IDF, word2vec, stop words, lemmatization, and stemming, which are essential for cleaning and converting data efficiently for NLP.']}, {'end': 2155.107, 'segs': [{'end': 1977.636, 'src': 'embed', 'start': 1910.144, 'weight': 0, 'content': [{'end': 1911.825, 'text': 'okay, every information is basically done.', 'start': 1910.144, 'duration': 1.681}, {'end': 1915.987, 'text': 'But understand one thing if I write, if I click on images, okay?', 'start': 1912.285, 'duration': 3.702}, {'end': 1920.77, 'text': "If I'm searching for Krishnayak, how this text is getting related to images?", 'start': 1916.627, 'duration': 4.143}, {'end': 1923.771, 'text': 'I hope everybody has heard about DALL-E 2, right?', 'start': 1921.47, 'duration': 2.301}, {'end': 1927.113, 'text': 'DALL-E 2, I guess everybody has heard about it.', 'start': 1924.372, 'duration': 2.741}, {'end': 1931.375, 'text': 'You just write the text and it will automatically be converting to an image.', 'start': 1928.054, 'duration': 3.321}, {'end': 1935.418, 'text': 'So that entirely thing is basically text to image conversion.', 'start': 1931.455, 'duration': 3.963}, {'end': 1948.168, 'text': 'And that will basically be using NLP anyhow, right? So suppose I see that cat fighting with cat, right? 
Cat fighting with cat.', 'start': 1936.038, 'duration': 12.13}, {'end': 1948.729, 'text': 'You can see this?', 'start': 1948.188, 'duration': 0.541}, {'end': 1950.21, 'text': 'Images are there right?', 'start': 1948.889, 'duration': 1.321}, {'end': 1952.653, 'text': 'Images are there right?', 'start': 1951.812, 'duration': 0.841}, {'end': 1956.16, 'text': 'And these are like not created images right?', 'start': 1953.638, 'duration': 2.522}, {'end': 1959.903, 'text': 'These are the images that you can basically see over here.', 'start': 1956.32, 'duration': 3.583}, {'end': 1964.406, 'text': "Cat fighting, it's a real image, okay? Okay, cat, comedy cat.", 'start': 1960.003, 'duration': 4.403}, {'end': 1970.03, 'text': 'If I search right, how Google is able to understand anyhow it is able to see.', 'start': 1964.706, 'duration': 5.324}, {'end': 1972.392, 'text': 'see comedy, cat cat memes, see this?', 'start': 1970.03, 'duration': 2.362}, {'end': 1974.414, 'text': 'How it is being able to show here.', 'start': 1972.892, 'duration': 1.522}, {'end': 1977.636, 'text': "All the memes that are available in the internet, you'll be able to see over here.", 'start': 1974.574, 'duration': 3.062}], 'summary': 'Discussion on text-to-image conversion using dall-e 2 and nlp, including examples of images and memes.', 'duration': 67.492, 'max_score': 1910.144, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF81910144.jpg'}, {'end': 2076.286, 'src': 'embed', 'start': 2049.492, 'weight': 3, 'content': [{'end': 2052.975, 'text': 'Right And see, iNeural Intelligence, it has been linked with Eshchand.', 'start': 2049.492, 'duration': 3.483}, {'end': 2055.897, 'text': 'Eshchand is the is the company who have funded us.', 'start': 2053.054, 'duration': 2.843}, {'end': 2061.862, 'text': 'Again, thank you for Eshchand for trusting in us so that of trusting in affordable courses and all.', 'start': 2056.338, 'duration': 5.524}, {'end': 2067.724, 'text': 'So here you can see artificial intelligence, you can see data analytics, you can see technology, internship.', 'start': 2062.742, 'duration': 4.982}, {'end': 2071.205, 'text': 'If I probably search for Krishna, I may get other categories over here.', 'start': 2067.824, 'duration': 3.381}, {'end': 2076.286, 'text': 'I may get categories like Twitter, YouTube, missing values, Facebook, feature engineering.', 'start': 2071.625, 'duration': 4.661}], 'summary': 'Ineural intelligence is funded by eshchand, offering affordable courses in ai, data analytics, technology, and internships.', 'duration': 26.794, 'max_score': 2049.492, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82049492.jpg'}], 'start': 1910.144, 'title': "Nlp's role in text-to-image conversion", 'summary': "Discusses nlp's role in text-to-image conversion, focusing on dall-e 2's ability to convert text to images, relevance of images in search results, and the importance of nlp in ai, showcasing its relevance in various domains.", 'chapters': [{'end': 1977.636, 'start': 1910.144, 'title': 'Text to image conversion with nlp', 'summary': 'Discusses the concept of text to image conversion using nlp, particularly focusing on dall-e 2 and its ability to automatically convert text to images, demonstrating the relevance of images in search results and the use of real images and memes.', 'duration': 67.492, 'highlights': ['DALL-E 2 is capable of automatically converting text to images, demonstrating the 
advancement in text to image conversion technology.', 'The relevance of images in search results is highlighted, showcasing the impact of image understanding and retrieval in search engines.', 'The ability of Google to display memes and real images based on search queries is showcased, indicating the integration of image understanding in search results.']}, {'end': 2155.107, 'start': 1977.836, 'title': 'Importance of nlp in ai', 'summary': 'Emphasizes the importance of nlp in ai, showcasing how it links information, mentions company collaborations, and demonstrates the relevance of the technology in various domains through examples.', 'duration': 177.271, 'highlights': ["NLP's ability to link information from various sources is highlighted, showcasing its importance in AI. The transcript discusses how NLP can link information from different sources, demonstrating its significance in AI applications.", 'The mention of company collaborations and funding demonstrates the real-world relevance of NLP technology. The transcript mentions the collaboration with Eshchand, a company that provided funding, showcasing the real-world relevance of NLP technology.', "Demonstration of NLP's relevance in various domains through examples, such as news and social media platforms. The transcript showcases how NLP is relevant in various domains through examples, including news and social media platforms."]}], 'duration': 244.963, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF81910144.jpg', 'highlights': ['DALL-E 2 demonstrates advancement in text to image conversion technology.', 'Relevance of images in search results showcases impact of image understanding in search engines.', "Google's ability to display memes and real images based on search queries indicates integration of image understanding in search results.", "NLP's importance in AI demonstrated through its ability to link information from various sources.", 'Company collaborations and funding demonstrate real-world relevance of NLP technology.', "NLP's relevance in various domains showcased through examples like news and social media platforms."]}, {'end': 2494.682, 'segs': [{'end': 2223.485, 'src': 'embed', 'start': 2181.803, 'weight': 0, 'content': [{'end': 2183.364, 'text': "Right So let's go ahead.", 'start': 2181.803, 'duration': 1.561}, {'end': 2190.251, 'text': "Now coming to the next topic, let's start with something called as tokenization.", 'start': 2185.99, 'duration': 4.261}, {'end': 2198.314, 'text': "The first step of NLP that whenever you're starting to read is something called as tokenization.", 'start': 2191.152, 'duration': 7.162}, {'end': 2214.459, 'text': "Always understand guys, if I consider a ML use case, and let's say I am building a spam classifier.", 'start': 2203.896, 'duration': 10.563}, {'end': 2223.485, 'text': "okay. 
spam classifier, let's say gmail spam classifier or mail classifier.", 'start': 2217.239, 'duration': 6.246}], 'summary': 'Introducing tokenization, a key step in nlp, for ml use cases like spam classification.', 'duration': 41.682, 'max_score': 2181.803, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82181803.jpg'}, {'end': 2313.956, 'src': 'embed', 'start': 2287.389, 'weight': 1, 'content': [{'end': 2292.31, 'text': 'Okay So these two features, you can see that, okay, I have an email body and email subject.', 'start': 2287.389, 'duration': 4.921}, {'end': 2297.992, 'text': 'And based on these two features, I need to predict whether the output data is spam or ham.', 'start': 2292.75, 'duration': 5.242}, {'end': 2304.493, 'text': "Okay Let's say my email body say that you won $1 million.", 'start': 2298.292, 'duration': 6.201}, {'end': 2307.994, 'text': 'Okay You won $1 million.', 'start': 2304.594, 'duration': 3.4}, {'end': 2312.276, 'text': 'And the email subject is that billionaire.', 'start': 2309.195, 'duration': 3.081}, {'end': 2313.956, 'text': "Let's say.", 'start': 2313.616, 'duration': 0.34}], 'summary': 'Predict spam or ham based on email body and subject features.', 'duration': 26.567, 'max_score': 2287.389, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82287389.jpg'}, {'end': 2422.716, 'src': 'embed', 'start': 2393.548, 'weight': 2, 'content': [{'end': 2401.676, 'text': 'Then the next step that we basically apply is something called as, I can say stemming.', 'start': 2393.548, 'duration': 8.128}, {'end': 2407.802, 'text': 'And before applying stemming also, I can use something called as stop words.', 'start': 2403.458, 'duration': 4.344}, {'end': 2409.543, 'text': 'I will talk about it.', 'start': 2408.803, 'duration': 0.74}, {'end': 2413.327, 'text': 'Okay Then finally we apply something called as lemmatization.', 'start': 2409.603, 'duration': 3.724}, {'end': 2418.074, 'text': 'Okay So these are the three steps we specifically do.', 'start': 2415.273, 'duration': 2.801}, {'end': 2422.716, 'text': 'The first step, the second step along with stop words and the third step.', 'start': 2418.114, 'duration': 4.602}], 'summary': 'The process involves three steps: stemming, stop words, and lemmatization.', 'duration': 29.168, 'max_score': 2393.548, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82393548.jpg'}, {'end': 2511.69, 'src': 'embed', 'start': 2474.69, 'weight': 3, 'content': [{'end': 2476.171, 'text': "or whatever algorithm you're trying to do.", 'start': 2474.69, 'duration': 1.481}, {'end': 2478.592, 'text': 'So this is the first step of pre-processing.', 'start': 2476.491, 'duration': 2.101}, {'end': 2482.615, 'text': 'As soon as I applied tokenization, it is just going to take the sentence.', 'start': 2479.133, 'duration': 3.482}, {'end': 2486.317, 'text': 'It is just going to take the sentence and convert it to words.', 'start': 2483.235, 'duration': 3.082}, {'end': 2488.639, 'text': 'So this is the process of tokenization.', 'start': 2486.457, 'duration': 2.182}, {'end': 2490.46, 'text': 'Very simple, very clear.', 'start': 2489.399, 'duration': 1.061}, {'end': 2492.1, 'text': 'Nothing so complex.', 'start': 2490.78, 'duration': 1.32}, {'end': 2494.682, 'text': 'We are just taking the sentence, converting it into words.', 'start': 2492.441, 'duration': 2.241}, {'end': 2497.683, 'text': "Now let's go 
towards something called as stop words.", 'start': 2495.342, 'duration': 2.341}, {'end': 2499.984, 'text': "Let's say I have a sentence.", 'start': 2498.844, 'duration': 1.14}, {'end': 2511.69, 'text': 'Hey buddy, I want to go to your house.', 'start': 2501.925, 'duration': 9.765}], 'summary': 'The transcript covers pre-processing steps including tokenization and stop words.', 'duration': 37, 'max_score': 2474.69, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82474690.jpg'}], 'start': 2155.487, 'title': 'Nlp tokenization and text pre-processing steps', 'summary': 'Covers nlp tokenization for building a spam classifier using email features, and discusses pre-processing steps including tokenization, stop words removal, and lemmatization for enhancing text analysis.', 'chapters': [{'end': 2368.446, 'start': 2155.487, 'title': 'Nlp tokenization for spam classification', 'summary': 'Covers the basics of nlp tokenization and applies it to building a spam classifier using email body and subject as features to predict spam or ham, with examples and key points emphasized.', 'duration': 212.959, 'highlights': ['Tokenization is the first step of NLP and is crucial for building a spam classifier using email body and subject as features to predict spam or ham.', "Examples of email content such as 'You won $1 million' and 'credit card winner' are highlighted as potential indicators of spam.", 'The announcement of a quiz related to the topic is mentioned as an upcoming event.', 'Emphasizing the importance of understanding the roadmap and expressing excitement are noted at the beginning of the transcript.']}, {'end': 2494.682, 'start': 2369.187, 'title': 'Text pre-processing steps', 'summary': 'Discusses the three key pre-processing steps for text analysis: tokenization, stop words removal, and lemmatization, aiming to make the text understandable for machine learning algorithms.', 'duration': 125.495, 'highlights': ['The chapter discusses the three key pre-processing steps for text analysis: tokenization, stop words removal, and lemmatization. The speaker explains the three essential pre-processing steps for text analysis, which are tokenization, stop words removal, and lemmatization.', 'Tokenization is the process of converting sentences into words to make them understandable for machine learning algorithms. Tokenization involves converting sentences into individual words to improve the understandability of the text for machine learning algorithms.', 'Stop words removal is another important step in pre-processing. The removal of stop words is highlighted as a crucial pre-processing step for text analysis.', 'The chapter emphasizes the significance of making the text understandable for machine learning algorithms. 
The focus is on making the text comprehensible for machine learning algorithms, underlining its importance in text analysis.']}], 'duration': 339.195, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82155487.jpg', 'highlights': ['Tokenization is crucial for building a spam classifier using email features.', "Examples of email content like 'You won $1 million' and 'credit card winner' are potential indicators of spam.", 'The chapter discusses the three key pre-processing steps for text analysis: tokenization, stop words removal, and lemmatization.', 'Tokenization involves converting sentences into individual words to improve the understandability of the text for machine learning algorithms.', 'Stop words removal is highlighted as a crucial pre-processing step for text analysis.']}, {'end': 3368.887, 'segs': [{'end': 2593.033, 'src': 'embed', 'start': 2561.615, 'weight': 0, 'content': [{'end': 2563.977, 'text': 'So not keyword can play an important part.', 'start': 2561.615, 'duration': 2.362}, {'end': 2574.084, 'text': 'Now. but in the case of to of the he she, this kind of words right is not that important for some of the use cases like spam classification,', 'start': 2563.997, 'duration': 10.087}, {'end': 2579.648, 'text': 'or it can be important for some other use cases, like text summarization, you know.', 'start': 2574.084, 'duration': 5.564}, {'end': 2582.47, 'text': 'Okay, for text summarization, it can be important.', 'start': 2580.569, 'duration': 1.901}, {'end': 2585.111, 'text': 'But for some of the use cases, this will not be important.', 'start': 2582.73, 'duration': 2.381}, {'end': 2587.852, 'text': 'So what we do is that we can remove these words.', 'start': 2585.411, 'duration': 2.441}, {'end': 2589.072, 'text': 'We can remove these words.', 'start': 2587.992, 'duration': 1.08}, {'end': 2593.033, 'text': 'And in order to remove the words, we will be applying something called a stop words.', 'start': 2589.432, 'duration': 3.601}], 'summary': 'Stop words can be important for some use cases like text summarization, but not for others like spam classification.', 'duration': 31.418, 'max_score': 2561.615, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82561615.jpg'}, {'end': 2640.218, 'src': 'embed', 'start': 2608.419, 'weight': 2, 'content': [{'end': 2609.279, 'text': 'Just remove it.', 'start': 2608.419, 'duration': 0.86}, {'end': 2617.347, 'text': 'Okay But right now the library like NLTK, you know, the stop words not is also present over there.', 'start': 2610, 'duration': 7.347}, {'end': 2624.033, 'text': 'Okay So if I want to really create my own stop words, I will definitely not remove not.', 'start': 2618.187, 'duration': 5.846}, {'end': 2631.68, 'text': 'I will create my own list like to, the, he, see, of, go, something small, small words that you can see.', 'start': 2624.473, 'duration': 7.207}, {'end': 2635.363, 'text': "And I will probably create a list and I'll make sure that I will not include.", 'start': 2631.74, 'duration': 3.623}, {'end': 2640.218, 'text': 'not, okay, so these all are not giving that much meaningful information.', 'start': 2636.437, 'duration': 3.781}], 'summary': "Creating a custom stop word list with nltk library, excluding 'not' and other non-meaningful words.", 'duration': 31.799, 'max_score': 2608.419, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82608419.jpg'}, {'end': 
2717.357, 'src': 'heatmap', 'start': 2662.683, 'weight': 0.858, 'content': [{'end': 2667.626, 'text': 'Because, at the end of the day, I want to make sure that the machine will be understanding this text.', 'start': 2662.683, 'duration': 4.943}, {'end': 2669.967, 'text': 'Okay So this is the second step.', 'start': 2668.126, 'duration': 1.841}, {'end': 2676.731, 'text': 'So in the second step, what we did is that we removed this small, small words which are not playing a very important role.', 'start': 2670.527, 'duration': 6.204}, {'end': 2679.892, 'text': 'Now coming to the third step, which is called as stemming.', 'start': 2677.151, 'duration': 2.741}, {'end': 2683.154, 'text': 'Okay Stemming.', 'start': 2681.733, 'duration': 1.421}, {'end': 2687.119, 'text': 'Now this stemming word is super, super important.', 'start': 2684.557, 'duration': 2.562}, {'end': 2698.711, 'text': 'In stemming what we focus on is that we try to find out the base of this word, base of a specific word or base stem of a specific word.', 'start': 2688.681, 'duration': 10.03}, {'end': 2700.232, 'text': 'Let me show you one example.', 'start': 2699.071, 'duration': 1.161}, {'end': 2708.54, 'text': 'Suppose I have two separate words like this, historical and history.', 'start': 2701.313, 'duration': 7.227}, {'end': 2717.357, 'text': 'Okay Now, whenever I have this two words, okay, this two words represent different, different things.', 'start': 2710.034, 'duration': 7.323}], 'summary': 'Text processing involves removing insignificant words and finding word base forms, e.g. historical and history represent different things.', 'duration': 54.674, 'max_score': 2662.683, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82662683.jpg'}, {'end': 2945.213, 'src': 'embed', 'start': 2907.02, 'weight': 3, 'content': [{'end': 2909.101, 'text': "See, advantages of stemming, I'll say.", 'start': 2907.02, 'duration': 2.081}, {'end': 2911.827, 'text': 'Advantages of stemming.', 'start': 2910.706, 'duration': 1.121}, {'end': 2916.809, 'text': 'Stemming process is really fast.', 'start': 2912.547, 'duration': 4.262}, {'end': 2921.512, 'text': 'It can actually help you for text preprocessing for huge dataset.', 'start': 2917.49, 'duration': 4.022}, {'end': 2928.895, 'text': 'But the disadvantage that I will talk about is that it is removing the meaning of the word.', 'start': 2922.672, 'duration': 6.223}, {'end': 2935.819, 'text': 'It is removing the meaning of the word.', 'start': 2932.137, 'duration': 3.682}, {'end': 2942.731, 'text': 'It is removing the meaning of the word.', 'start': 2940.189, 'duration': 2.542}, {'end': 2945.213, 'text': 'So this is the disadvantage.', 'start': 2943.311, 'duration': 1.902}], 'summary': 'Stemming process is fast, but removes word meaning.', 'duration': 38.193, 'max_score': 2907.02, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82907020.jpg'}, {'end': 3159.501, 'src': 'heatmap', 'start': 2955.723, 'weight': 0.913, 'content': [{'end': 2960.547, 'text': 'But I tried it out with the help of NLTK library over there specifically final was coming.', 'start': 2955.723, 'duration': 4.824}, {'end': 2964.972, 'text': 'So whatever things I am showing you over here, I did it and then only I am showing you.', 'start': 2961.008, 'duration': 3.964}, {'end': 2970.394, 'text': 'Now in order to overcome the disadvantage, what we do is that we have something called as lemmatization.', 'start': 2965.592, 
'duration': 4.802}, {'end': 2977.778, 'text': 'So this will be my fourth step, which is called as lemmatization.', 'start': 2970.855, 'duration': 6.923}, {'end': 2981.84, 'text': "Tomorrow we'll try to see the practical also.", 'start': 2980.179, 'duration': 1.661}, {'end': 2987.483, 'text': 'So in lemmatization, what we do is specifically in lemmatization.', 'start': 2982.12, 'duration': 5.363}, {'end': 2989.424, 'text': 'now this is super, super important.', 'start': 2987.483, 'duration': 1.941}, {'end': 2996.916, 'text': 'in lemmatization we try to convert the word, but here we will be getting Meaningful word.', 'start': 2989.424, 'duration': 7.492}, {'end': 3004.218, 'text': 'Okay So when I probably give this to lemmatization in NLTK, there is something called as word let word net lemmatizer.', 'start': 2997.236, 'duration': 6.982}, {'end': 3007.859, 'text': 'So here I will be getting a meaningful word history.', 'start': 3004.598, 'duration': 3.261}, {'end': 3011.48, 'text': 'Okay And you may be thinking how it is able to do.', 'start': 3008.419, 'duration': 3.061}, {'end': 3013.6, 'text': 'It has the entire dictionary of words.', 'start': 3011.88, 'duration': 1.72}, {'end': 3018.221, 'text': "It will probably check with respect to the base word and it'll compare.", 'start': 3014.2, 'duration': 4.021}, {'end': 3024.423, 'text': 'Okay Now, if I also have a scenario, which it says like, finally, finally, final.', 'start': 3018.501, 'duration': 5.922}, {'end': 3031.448, 'text': 'and finalized, this will also get converted to a word which is called as final.', 'start': 3025.823, 'duration': 5.625}, {'end': 3037.113, 'text': 'okay, so that is how lemmatization work now over here.', 'start': 3031.448, 'duration': 5.665}, {'end': 3049.04, 'text': 'advantage yes, we are able to get a meaningful word, meaningful word, But disadvantage will be that it is slow.', 'start': 3037.113, 'duration': 11.927}, {'end': 3055.913, 'text': 'I hope everybody knows why it is slow because it has to really do a lot of comparison with respect to all the dictionary of word it has.', 'start': 3050.002, 'duration': 5.911}, {'end': 3058.478, 'text': 'It is slow when compared to STEMI.', 'start': 3056.394, 'duration': 2.084}, {'end': 3068.622, 'text': 'Now, if I talk about some of the use cases, if you say spam classification, spam ham classification or toxic classification,', 'start': 3059.795, 'duration': 8.827}, {'end': 3076.288, 'text': 'toxic basically means whether a person is basically writing a comment, whether it should give three star, one star, five star, like that.', 'start': 3068.622, 'duration': 7.666}, {'end': 3079.47, 'text': 'right?. 
In that particular case also, we can use stemming, okay?', 'start': 3076.288, 'duration': 3.182}, {'end': 3082.232, 'text': 'But lemmatization needs to be used in chatbots.', 'start': 3079.79, 'duration': 2.442}, {'end': 3087.736, 'text': 'So if I say about use cases, if I talk about use cases okay.', 'start': 3082.272, 'duration': 5.464}, {'end': 3093.388, 'text': 'in case of stemming okay here, i can say in spam classification.', 'start': 3087.736, 'duration': 5.652}, {'end': 3099.133, 'text': 'i can use this in spam classification, i can use stemming.', 'start': 3093.388, 'duration': 5.745}, {'end': 3108.521, 'text': 'second one is that comments classification, whether it is good, bad or review classification i can write okay, review classification,', 'start': 3099.133, 'duration': 9.388}, {'end': 3109.502, 'text': 'review classification.', 'start': 3108.521, 'duration': 0.981}, {'end': 3116.734, 'text': 'Now, based on this, you can also give reviews on my lectures that I usually take.', 'start': 3112.051, 'duration': 4.683}, {'end': 3119.336, 'text': 'Okay So it may be good or bad.', 'start': 3116.954, 'duration': 2.382}, {'end': 3120.616, 'text': 'It should not be bad.', 'start': 3119.896, 'duration': 0.72}, {'end': 3121.877, 'text': 'No Yeah.', 'start': 3120.676, 'duration': 1.201}, {'end': 3123.778, 'text': 'I know many people have cleared the interviews.', 'start': 3121.917, 'duration': 1.861}, {'end': 3125.019, 'text': 'You cannot say it is bad.', 'start': 3123.798, 'duration': 1.221}, {'end': 3129.542, 'text': 'If, if you say it is bad, then you definitely require a PhD person to teach you.', 'start': 3125.74, 'duration': 3.802}, {'end': 3130.823, 'text': 'Okay That is the thing.', 'start': 3129.602, 'duration': 1.221}, {'end': 3137.547, 'text': 'Now in case of lemmatization, if I talk about some use cases, because here meaningful words is important.', 'start': 3131.663, 'duration': 5.884}, {'end': 3142.257, 'text': 'So I can basically use something like text summarization, Okay.', 'start': 3137.627, 'duration': 4.63}, {'end': 3144.238, 'text': 'Language translation.', 'start': 3143.117, 'duration': 1.121}, {'end': 3147.678, 'text': 'Language translation.', 'start': 3146.398, 'duration': 1.28}, {'end': 3159.501, 'text': 'Here I can also write third chatbots because chatbots also require good, complete lemmatized words, right? So all this is super, super important.', 'start': 3149.579, 'duration': 9.922}], 'summary': 'Lemmatization provides meaningful words but is slower than stemming. 
use cases include chatbots, text summarization, and language translation.', 'duration': 203.778, 'max_score': 2955.723, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82955723.jpg'}, {'end': 3082.232, 'src': 'embed', 'start': 3037.113, 'weight': 5, 'content': [{'end': 3049.04, 'text': 'advantage yes, we are able to get a meaningful word, meaningful word, But disadvantage will be that it is slow.', 'start': 3037.113, 'duration': 11.927}, {'end': 3055.913, 'text': 'I hope everybody knows why it is slow because it has to really do a lot of comparison with respect to all the dictionary of word it has.', 'start': 3050.002, 'duration': 5.911}, {'end': 3058.478, 'text': 'It is slow when compared to STEMI.', 'start': 3056.394, 'duration': 2.084}, {'end': 3068.622, 'text': 'Now, if I talk about some of the use cases, if you say spam classification, spam ham classification or toxic classification,', 'start': 3059.795, 'duration': 8.827}, {'end': 3076.288, 'text': 'toxic basically means whether a person is basically writing a comment, whether it should give three star, one star, five star, like that.', 'start': 3068.622, 'duration': 7.666}, {'end': 3079.47, 'text': 'right?. In that particular case also, we can use stemming, okay?', 'start': 3076.288, 'duration': 3.182}, {'end': 3082.232, 'text': 'But lemmatization needs to be used in chatbots.', 'start': 3079.79, 'duration': 2.442}], 'summary': 'Advantage: meaningful words. disadvantage: slow due to extensive comparison. use cases: spam classification, toxic classification, chatbots.', 'duration': 45.119, 'max_score': 3037.113, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF83037113.jpg'}], 'start': 2495.342, 'title': 'Text pre-processing techniques', 'summary': 'Delves into the significance of stop words, emphasizing their impact on text summarization and spam classification. it also explores the processes of stemming and lemmatization, highlighting their advantages, disadvantages, and use cases in various applications such as spam and review classifications, chatbots, and language translation.', 'chapters': [{'end': 2654.821, 'start': 2495.342, 'title': 'Stop words and their importance', 'summary': 'Discusses the concept of stop words, their significance in natural language processing, and the process of removing them, emphasizing their impact on various use cases such as text summarization and spam classification.', 'duration': 159.479, 'highlights': ["Stop words like 'to', 'the', 'of' are not important in some use cases like spam classification, while they can be crucial for text summarization. Stop words like 'to', 'the', 'of' are not important for some use cases like spam classification, while they can be crucial for text summarization.", "The importance of 'not' as a keyword in text processing, as it can play a significant role in conveying opposite meanings. The importance of 'not' as a keyword in text processing, as it can play a significant role in conveying opposite meanings.", 'The ability to create custom stop word lists and the use of libraries like NLTK for stop word removal. The ability to create custom stop word lists and the use of libraries like NLTK for stop word removal.']}, {'end': 3368.887, 'start': 2654.821, 'title': 'Text pre-processing: stemming and lemmatization', 'summary': 'Discusses the importance of text pre-processing, particularly focusing on the processes of stemming and lemmatization. 
it highlights the advantages and disadvantages of stemming, such as its speed and loss of word meaning, and also emphasizes the significance of lemmatization in generating meaningful words, albeit at a slower pace. additionally, it outlines the use cases of both stemming and lemmatization, showcasing their relevance in spam and review classifications as well as in chatbots and language translation.', 'duration': 714.066, 'highlights': ['The advantages of stemming include its speed and applicability for text preprocessing on large datasets. Stemming process is really fast and can help for text preprocessing for huge datasets.', 'Stemming may result in the loss of word meaning, which is a significant disadvantage. Stemming process removes the meaning of the word, leading to a loss of word meaning.', 'Lemmatization yields meaningful words but is slower due to the extensive comparison required with the dictionary of words. Lemmatization provides meaningful words but is slow due to extensive comparison with the dictionary of words.', 'Stemming is suitable for spam and review classifications, while lemmatization is essential for text summarization, language translation, and chatbots. Stemming is applicable in spam and review classifications, while lemmatization is crucial for text summarization, language translation, and chatbots.']}], 'duration': 873.545, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF82495342.jpg', 'highlights': ["Stop words like 'to', 'the', 'of' are not important in some use cases like spam classification, while they can be crucial for text summarization.", "The importance of 'not' as a keyword in text processing, as it can play a significant role in conveying opposite meanings.", 'The ability to create custom stop word lists and the use of libraries like NLTK for stop word removal.', 'Stemming process is really fast and can help for text preprocessing for huge datasets.', 'Stemming may result in the loss of word meaning, which is a significant disadvantage.', 'Lemmatization provides meaningful words but is slow due to extensive comparison with the dictionary of words.', 'Stemming is applicable in spam and review classifications, while lemmatization is crucial for text summarization, language translation, and chatbots.']}, {'end': 4035.137, 'segs': [{'end': 3411.52, 'src': 'embed', 'start': 3369.367, 'weight': 3, 'content': [{'end': 3375.889, 'text': 'And remember guys, all these materials, you know, I will try to give it in the dashboard that is given in the description of this particular video.', 'start': 3369.367, 'duration': 6.522}, {'end': 3381.952, 'text': "Enroll in this dashboard because there you'll also be able to find out the video link and all these materials will be given you.", 'start': 3376.35, 'duration': 5.602}, {'end': 3384.853, 'text': 'Is my handwriting good? I like this diagram.', 'start': 3382.372, 'duration': 2.481}, {'end': 3390.195, 'text': 'This diagram is the best diagram that I could ever draw just with my bare hands, you know.', 'start': 3385.153, 'duration': 5.042}, {'end': 3392.355, 'text': 'Yeah, I will be teaching attention models.', 'start': 3390.655, 'duration': 1.7}, {'end': 3396.276, 'text': "Dilip Kumar, label to vector also I'll try to convert, okay? 
Complete.", 'start': 3392.876, 'duration': 3.4}, {'end': 3399.157, 'text': "So quickly, let's go towards quiz.", 'start': 3397.076, 'duration': 2.081}, {'end': 3402.538, 'text': 'First of all, everybody follow me in Instagram.', 'start': 3399.377, 'duration': 3.161}, {'end': 3411.52, 'text': "Follow me in Instagram, then only I'll be able to validate your quiz, okay? So I've opened my phone.", 'start': 3405.718, 'duration': 5.802}], 'summary': 'Teaching attention models and converting label to vector. check the dashboard for materials and video links.', 'duration': 42.153, 'max_score': 3369.367, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF83369367.jpg'}, {'end': 3496.057, 'src': 'embed', 'start': 3468.49, 'weight': 1, 'content': [{'end': 3473.631, 'text': 'understand, the first winner will get 2000 rupees, the second winner will get 1500 and third winner will get 1500.', 'start': 3468.49, 'duration': 5.141}, {'end': 3483.213, 'text': 'okay, so please make sure that you follow me and make sure that you actually so whoever will be winner,', 'start': 3473.631, 'duration': 9.582}, {'end': 3486.854, 'text': 'they can later message me and then i will transfer you from here itself.', 'start': 3483.213, 'duration': 3.641}, {'end': 3489.395, 'text': 'okay, you have to message me your up id.', 'start': 3486.854, 'duration': 2.541}, {'end': 3492.616, 'text': "okay, If one isn't on the Instagram, what to do??", 'start': 3489.395, 'duration': 3.221}, {'end': 3496.057, 'text': 'Then you can drop me a mail at krishnayak06 at the rate gmail.com.', 'start': 3492.656, 'duration': 3.401}], 'summary': 'Winners: 1st - 2000 rupees, 2nd - 1500 rupees, 3rd - 1500 rupees. prizes to be transferred via instagram or email.', 'duration': 27.567, 'max_score': 3468.49, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF83468490.jpg'}, {'end': 3606.146, 'src': 'embed', 'start': 3577.665, 'weight': 2, 'content': [{'end': 3581.929, 'text': "come on, let's make it thousand likes, so that we make this entertaining.", 'start': 3577.665, 'duration': 4.264}, {'end': 3583.05, 'text': "and it's a request.", 'start': 3581.929, 'duration': 1.121}, {'end': 3585.673, 'text': "see, i'm, i'm, i'm, i'm saying you like this.", 'start': 3583.05, 'duration': 2.623}, {'end': 3586.834, 'text': "it's a request.", 'start': 3585.673, 'duration': 1.161}, {'end': 3588.556, 'text': 'please share this links.', 'start': 3586.834, 'duration': 1.722}, {'end': 3593.662, 'text': 'you know, uh, you know this, this sessions of nlp with all your friends in linkedin.', 'start': 3588.556, 'duration': 5.106}, {'end': 3595.302, 'text': 'you know, understand, we are.', 'start': 3594.162, 'duration': 1.14}, {'end': 3597.823, 'text': 'we are focusing more on community building, right.', 'start': 3595.302, 'duration': 2.521}, {'end': 3602.525, 'text': "so if community building happens in a better way, don't you think everybody should use this right?", 'start': 3597.823, 'duration': 4.702}, {'end': 3606.146, 'text': 'many people will not be knowing, okay, so it is basically.', 'start': 3602.525, 'duration': 3.621}], 'summary': 'Request for 1000 likes and sharing nlp session links for community building.', 'duration': 28.481, 'max_score': 3577.665, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF83577665.jpg'}, {'end': 3880.797, 'src': 'embed', 'start': 3840.962, 'weight': 0, 'content': [{'end': 
3842.243, 'text': 'So you have 24 seconds.', 'start': 3840.962, 'duration': 1.281}, {'end': 3845.705, 'text': 'Whoever answers first and right will be in the top of the dashboard.', 'start': 3842.343, 'duration': 3.362}, {'end': 3866.505, 'text': 'And the answer is 3, 2, 1.', 'start': 3858.279, 'duration': 8.226}, {'end': 3867.346, 'text': 'All of the above.', 'start': 3866.506, 'duration': 0.84}, {'end': 3877.714, 'text': 'So people, there are 401 people who have given the right answer, right? 106 people have covered text classification, topic modeling, chatbots.', 'start': 3867.406, 'duration': 10.308}, {'end': 3880.797, 'text': 'Guys, I had actually covered chatbots and text classification right?', 'start': 3877.754, 'duration': 3.043}], 'summary': '401 people gave the right answer, 106 covered text classification, topic modeling, chatbots.', 'duration': 39.835, 'max_score': 3840.962, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF83840962.jpg'}], 'start': 3369.367, 'title': 'Attention models, label to vector, and nlp quiz', 'summary': 'Covers attention models and label to vector conversion, and introduces an nlp quiz on menti.com with a cash prize giveaway, attracting 704 live viewers and 509 participants, covering nlp topics, including text classification, topic modeling, and chatbots.', 'chapters': [{'end': 3411.52, 'start': 3369.367, 'title': 'Attention models and label to vector conversion', 'summary': 'Covers the distribution of materials through a dashboard link, teaching attention models and label to vector conversion, and instructing students to follow on instagram for quiz validation.', 'duration': 42.153, 'highlights': ['The chapter covers the distribution of materials through a dashboard link, including video links and other resources.', 'The instructor will be teaching attention models and label to vector conversion.', 'Students are instructed to follow the instructor on Instagram for quiz validation.']}, {'end': 4035.137, 'start': 3413.06, 'title': 'Nlp quiz & cash prize giveaway', 'summary': 'Introduces an nlp quiz on menti.com with a cash prize giveaway, attracting 704 live viewers and 509 participants, covering nlp topics including text classification, topic modeling, and chatbots.', 'duration': 622.077, 'highlights': ['The quiz attracted 704 live viewers and 509 participants, covering topics such as text classification, topic modeling, and chatbots. The quiz attracted a significant audience of 704 live viewers and 509 participants, covering NLP topics including text classification, topic modeling, and chatbots.', 'The first winner was set to receive 2000 rupees, the second 1500, and the third 1500, with the requirement to follow the host on Instagram for prize delivery. The prizes for the quiz were set at 2000 rupees for the first winner, 1500 for the second, and 1500 for the third, with the requirement for winners to follow the host on Instagram for prize delivery.', 'Encouragement to share the quiz and NLP sessions with friends and on social media platforms for community building and learning. 
The host encouraged participants to share the quiz and NLP sessions with friends and on social media platforms for community building and learning purposes.']}], 'duration': 665.77, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF83369367.jpg', 'highlights': ['The quiz attracted 704 live viewers and 509 participants, covering NLP topics including text classification, topic modeling, and chatbots.', 'The prizes for the quiz were set at 2000 rupees for the first winner, 1500 for the second, and 1500 for the third, with the requirement for winners to follow the host on Instagram for prize delivery.', 'Encouragement to share the quiz and NLP sessions with friends and on social media platforms for community building and learning.', 'The instructor will be teaching attention models and label to vector conversion.', 'Covers the distribution of materials through a dashboard link, including video links and other resources.', 'Students are instructed to follow the instructor on Instagram for quiz validation.']}, {'end': 4330.148, 'segs': [{'end': 4127.131, 'src': 'embed', 'start': 4100.011, 'weight': 1, 'content': [{'end': 4103.073, 'text': 'perfect. so many people have actually said right.', 'start': 4100.011, 'duration': 3.062}, {'end': 4108.676, 'text': 'it is classifying entity into predefined labels, which is quite amazing.', 'start': 4103.073, 'duration': 5.603}, {'end': 4111.92, 'text': 'you can see 322 people have said yes.', 'start': 4108.676, 'duration': 3.244}, {'end': 4117.783, 'text': 'most of them are saying it right, okay, so this is amazing.', 'start': 4111.92, 'duration': 5.863}, {'end': 4120.725, 'text': "ah, now this becomes a competition right, it's okay.", 'start': 4117.783, 'duration': 2.942}, {'end': 4124.148, 'text': "if you don't use insta, you can drop me a mail, you know.", 'start': 4120.725, 'duration': 3.423}, {'end': 4127.131, 'text': 'so this is amazing.', 'start': 4124.148, 'duration': 2.983}], 'summary': 'Classifying entities into predefined labels, with 322 people approving, is amazing.', 'duration': 27.12, 'max_score': 4100.011, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF84100011.jpg'}, {'end': 4192.493, 'src': 'embed', 'start': 4159.612, 'weight': 2, 'content': [{'end': 4163.653, 'text': 'Google Translator is an application of.', 'start': 4159.612, 'duration': 4.041}, {'end': 4165.053, 'text': 'You have 15 seconds to answer.', 'start': 4163.653, 'duration': 1.4}, {'end': 4170.734, 'text': 'Sentiment analysis, information extraction, information retrieval, machine translation.', 'start': 4165.973, 'duration': 4.761}, {'end': 4180.138, 'text': 'Quickly And here we go.', 'start': 4171.715, 'duration': 8.423}, {'end': 4180.718, 'text': "Time's up.", 'start': 4180.218, 'duration': 0.5}, {'end': 4187.185, 'text': 'Whoa, 378 people have actually told, right? 
It is used for machine translation.', 'start': 4182.099, 'duration': 5.086}, {'end': 4192.493, 'text': 'Okay, not for information retrieval guys, Google translator, you know, it is machine translation.', 'start': 4187.206, 'duration': 5.287}], 'summary': 'Google translator is used by 378 people for machine translation, not information retrieval.', 'duration': 32.881, 'max_score': 4159.612, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF84159612.jpg'}, {'end': 4274.644, 'src': 'embed', 'start': 4221.374, 'weight': 0, 'content': [{'end': 4224.275, 'text': 'Oh my God, Kaushal is in the first position, I guess.', 'start': 4221.374, 'duration': 2.901}, {'end': 4227.827, 'text': 'Yes, Kaushal is in the first position.', 'start': 4225.566, 'duration': 2.261}, {'end': 4229.447, 'text': 'Then we have Nitin.', 'start': 4228.367, 'duration': 1.08}, {'end': 4230.667, 'text': 'Okay, then we have Parth.', 'start': 4229.567, 'duration': 1.1}, {'end': 4231.728, 'text': 'Then we have Jim.', 'start': 4230.827, 'duration': 0.901}, {'end': 4237.389, 'text': 'Okay, guys, please make sure that you follow me on Instagram so that I can give you the money, okay?', 'start': 4232.848, 'duration': 4.541}, {'end': 4242.831, 'text': 'But over here you can see there is a very, you know, minute difference.', 'start': 4237.949, 'duration': 4.882}, {'end': 4244.371, 'text': '3798, 3785, 3727, 3711, 3697, okay? So the leaderboard is quite good.', 'start': 4242.851, 'duration': 1.52}, {'end': 4255.259, 'text': 'Okay? And here you have Gokul Krishna.', 'start': 4252.358, 'duration': 2.901}, {'end': 4258.339, 'text': 'So these people, you know, will definitely be able to win it.', 'start': 4255.279, 'duration': 3.06}, {'end': 4260.54, 'text': 'Right? Okay.', 'start': 4258.6, 'duration': 1.94}, {'end': 4264.141, 'text': 'Coming with the final question of the day.', 'start': 4260.66, 'duration': 3.481}, {'end': 4266.321, 'text': 'Woo! Final question.', 'start': 4265.021, 'duration': 1.3}, {'end': 4270.583, 'text': 'Okay? And by this, you can now determine who will be the winner.', 'start': 4266.642, 'duration': 3.941}, {'end': 4274.263, 'text': 'Okay? Before that, hit like, make it thousand.', 'start': 4270.983, 'duration': 3.28}, {'end': 4274.644, 'text': 'Come on.', 'start': 4274.423, 'duration': 0.221}], 'summary': 'Kaushal is leading with 3798 points, followed by nitin, parth, and jim. 
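Two of the quiz questions summarized here touch real techniques: named entity recognition (classifying entities into predefined labels) and machine translation (the Google Translator question). As a small illustration of the NER side, a sketch using NLTK's off-the-shelf chunker is below; the example sentence, the choice of chunker, and the printed labels are my own assumptions and were not shown in the session.

```python
# Named entity recognition sketch with NLTK's off-the-shelf chunker (illustrative).
import nltk

# One-time downloads for the tokenizer, POS tagger and NE chunker
# (newer NLTK releases may ask for the *_eng / *_tab variants of these names).
for resource in ['punkt', 'averaged_perceptron_tagger', 'maxent_ne_chunker', 'words']:
    nltk.download(resource)

sentence = "Sundar Pichai announced new Google products in California"

tokens = nltk.word_tokenize(sentence)
pos_tags = nltk.pos_tag(tokens)
tree = nltk.ne_chunk(pos_tags)

# Each recognised entity is a subtree whose label is one of the predefined classes.
for subtree in tree:
    if hasattr(subtree, 'label'):
        entity = " ".join(word for word, tag in subtree.leaves())
        print(entity, '->', subtree.label())
# Expected output, roughly (labels can vary by NLTK version):
#   Sundar Pichai -> PERSON
#   Google -> ORGANIZATION
#   California -> GPE
```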
the final question will determine the winner.', 'duration': 53.27, 'max_score': 4221.374, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF84221374.jpg'}, {'end': 4330.148, 'src': 'embed', 'start': 4300.586, 'weight': 3, 'content': [{'end': 4301.247, 'text': 'Just a while back.', 'start': 4300.586, 'duration': 0.661}, {'end': 4310.834, 'text': 'Okay And 445 people have actually said it right.', 'start': 4301.407, 'duration': 9.427}, {'end': 4311.855, 'text': '47 said it wrong.', 'start': 4310.854, 'duration': 1.001}, {'end': 4312.935, 'text': '7 this, this.', 'start': 4311.875, 'duration': 1.06}, {'end': 4315.577, 'text': "Now see why I've kept the less timestamp, right?", 'start': 4312.975, 'duration': 2.602}, {'end': 4317.899, 'text': 'Because these all are very easy questions, right?', 'start': 4315.617, 'duration': 2.282}, {'end': 4323.904, 'text': 'That is the reason I had actually kept it and I definitely want you all to try it out, okay?', 'start': 4318.359, 'duration': 5.545}, {'end': 4325.905, 'text': 'So tokenization stemming.', 'start': 4324.364, 'duration': 1.541}, {'end': 4327.566, 'text': 'lemmatization is the answer.', 'start': 4325.905, 'duration': 1.661}, {'end': 4330.148, 'text': 'I hope you like this quiz.', 'start': 4328.307, 'duration': 1.841}], 'summary': '445 people answered right, 47 wrong. quiz on tokenization, stemming, lemmatization.', 'duration': 29.562, 'max_score': 4300.586, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF84300586.jpg'}], 'start': 4035.777, 'title': 'Live quiz and leaderboard', 'summary': 'Covers a live quiz on nlp with 322 participants for name entity recognition and 378 for google translator, and the leaderboard standings with kaushal in first place, along with a final question where 445 answered correctly, 47 incorrectly, and a call to action for likes to reach a thousand.', 'chapters': [{'end': 4220.113, 'start': 4035.777, 'title': 'Live quiz on nlp', 'summary': 'Discussed a live quiz on nlp with 322 participants answering a question on name entity recognition and 378 participants answering a question on google translator.', 'duration': 184.336, 'highlights': ['322 participants answered a question on name entity recognition, with most of them correctly identifying it as classifying entity into predefined labels.', '378 participants correctly identified Google Translator as an application of machine translation.']}, {'end': 4330.148, 'start': 4221.374, 'title': 'Quiz leaderboard and final question', 'summary': 'Discusses the current leaderboard standings, with kaushal in the first position, and the final question of the day, where 445 people answered correctly, and 47 answered incorrectly, with a call to action for likes to reach a thousand.', 'duration': 108.774, 'highlights': ['Kaushal is in the first position on the leaderboard. Kaushal is currently leading the quiz competition.', '445 people answered the final question correctly, while 47 answered incorrectly. A total of 445 people provided the correct answer to the final question, while 47 answered incorrectly.', 'Call to action for likes to reach a thousand. Encouraging the audience to like the quiz post to reach a goal of a thousand likes.']}], 'duration': 294.371, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF84035777.jpg', 'highlights': ['Kaushal is in the first position on the leaderboard. 
Kaushal is currently leading the quiz competition.', '322 participants answered a question on name entity recognition, with most of them correctly identifying it as classifying entity into predefined labels.', '378 participants correctly identified Google Translator as an application of machine translation.', '445 people answered the final question correctly, while 47 answered incorrectly. A total of 445 people provided the correct answer to the final question, while 47 answered incorrectly.', 'Call to action for likes to reach a thousand. Encouraging the audience to like the quiz post to reach a goal of a thousand likes.']}, {'end': 4850.006, 'segs': [{'end': 4361.452, 'src': 'embed', 'start': 4330.789, 'weight': 0, 'content': [{'end': 4331.97, 'text': "Let's see the winner now.", 'start': 4330.789, 'duration': 1.181}, {'end': 4336.301, 'text': 'The winner is Kaushal.', 'start': 4334.04, 'duration': 2.261}, {'end': 4338.322, 'text': 'Okay, Kaushal is in the first.', 'start': 4336.942, 'duration': 1.38}, {'end': 4344.245, 'text': 'Gokul Krishna, I think it is in the second or what? Okay, so we have the winners.', 'start': 4339.122, 'duration': 5.123}, {'end': 4347.726, 'text': 'Kaushal, Gokul Krishna, Nitin.', 'start': 4345.105, 'duration': 2.621}, {'end': 4349.527, 'text': 'Ping me in Instagram.', 'start': 4348.326, 'duration': 1.201}, {'end': 4352.248, 'text': 'And Kaushal, claps.', 'start': 4350.707, 'duration': 1.541}, {'end': 4354.009, 'text': 'Kaushal is the first winner.', 'start': 4352.688, 'duration': 1.321}, {'end': 4358.851, 'text': 'Okay Kaushal, Gokul Krishna, Nitin, please ping me in the Instagram.', 'start': 4354.789, 'duration': 4.062}, {'end': 4361.452, 'text': 'And daily we are going to have this kind of session.', 'start': 4359.271, 'duration': 2.181}], 'summary': 'Kaushal is the winner; gokul krishna and nitin are also winners. daily sessions.', 'duration': 30.663, 'max_score': 4330.789, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF84330789.jpg'}, {'end': 4576.435, 'src': 'embed', 'start': 4415.123, 'weight': 1, 'content': [{'end': 4421.466, 'text': 'Kaushal, did you message or not? Yes, Nitin, I can see his message.', 'start': 4415.123, 'duration': 6.343}, {'end': 4426.009, 'text': 'Send me your UPI ID, Nitin.', 'start': 4424.548, 'duration': 1.461}, {'end': 4431.346, 'text': 'Navin Reddy is saying sending his new PID.', 'start': 4429.224, 'duration': 2.122}, {'end': 4432.987, 'text': 'Okay, Navin sir is also there.', 'start': 4431.406, 'duration': 1.581}, {'end': 4442.654, 'text': 'Okay Gokul Krishna, you there? 
Okay, Gokul Krishna also I have received it.', 'start': 4434.468, 'duration': 8.186}, {'end': 4444.515, 'text': 'Gokul Krishna, send me your UPID.', 'start': 4442.974, 'duration': 1.541}, {'end': 4451.901, 'text': 'Kaushal is also there.', 'start': 4450.56, 'duration': 1.341}, {'end': 4454.723, 'text': 'And please make sure that you send the screenshot, everyone.', 'start': 4452.201, 'duration': 2.522}, {'end': 4460.759, 'text': 'Send your screenshot.', 'start': 4459.759, 'duration': 1}, {'end': 4461.7, 'text': 'First three winners.', 'start': 4460.799, 'duration': 0.901}, {'end': 4480.446, 'text': 'Okay So, Koshal, UPID.', 'start': 4471.583, 'duration': 8.863}, {'end': 4481.566, 'text': "I'm getting it.", 'start': 4480.946, 'duration': 0.62}, {'end': 4485.533, 'text': 'Gokul Krishna UPIDs I got.', 'start': 4482.972, 'duration': 2.561}, {'end': 4487.634, 'text': 'Gokul is basically in the second.', 'start': 4485.693, 'duration': 1.941}, {'end': 4490.635, 'text': 'So I am going to transfer Gokul first.', 'start': 4488.314, 'duration': 2.321}, {'end': 4496.677, 'text': 'So Gokul just confirm or probably write in the chat whether you have got the money or not.', 'start': 4491.535, 'duration': 5.142}, {'end': 4499.598, 'text': 'Okay So here I am pasting it.', 'start': 4496.757, 'duration': 2.841}, {'end': 4503.879, 'text': 'So 1500 to Gokul.', 'start': 4501.418, 'duration': 2.461}, {'end': 4505.88, 'text': 'Quiz winner.', 'start': 4505.219, 'duration': 0.661}, {'end': 4511.282, 'text': 'Okay So you can see guys I have transferred him.', 'start': 4507.5, 'duration': 3.782}, {'end': 4520.108, 'text': 'Okay So if you I am transferring him.', 'start': 4511.362, 'duration': 8.746}, {'end': 4520.668, 'text': 'Just a second.', 'start': 4520.128, 'duration': 0.54}, {'end': 4522.33, 'text': "I don't know why it got rejected.", 'start': 4520.689, 'duration': 1.641}, {'end': 4528.196, 'text': "Okay, I'll do it by phone pay then.", 'start': 4526.474, 'duration': 1.722}, {'end': 4530.237, 'text': 'Phone pay.', 'start': 4529.597, 'duration': 0.64}, {'end': 4533.3, 'text': "Okay I'll do it one by one.", 'start': 4530.257, 'duration': 3.043}, {'end': 4535.222, 'text': 'But I hope everybody liked it.', 'start': 4533.721, 'duration': 1.501}, {'end': 4540.027, 'text': 'I hope everybody had fun.', 'start': 4535.242, 'duration': 4.785}, {'end': 4556.101, 'text': 'Just a second.', 'start': 4555.461, 'duration': 0.64}, {'end': 4557.022, 'text': "I'm doing it.", 'start': 4556.161, 'duration': 0.861}, {'end': 4565.408, 'text': "Everyone I'll give it.", 'start': 4564.547, 'duration': 0.861}, {'end': 4566.568, 'text': 'Wait just a second.', 'start': 4565.448, 'duration': 1.12}, {'end': 4574.974, 'text': 'Okay So for Goku, the money has gone.', 'start': 4572.212, 'duration': 2.762}, {'end': 4576.435, 'text': 'Everybody can see it over here.', 'start': 4575.054, 'duration': 1.381}], 'summary': 'Transferring prize money to quiz winners, gokul received 1500 via upi.', 'duration': 161.312, 'max_score': 4415.123, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF84415123.jpg'}, {'end': 4807.296, 'src': 'embed', 'start': 4777.683, 'weight': 5, 'content': [{'end': 4778.583, 'text': 'Sharing is caring.', 'start': 4777.683, 'duration': 0.9}, {'end': 4785.006, 'text': 'Because of you sharing to someone, it may help them to clear jobs in an amazing way.', 'start': 4779.204, 'duration': 5.802}, {'end': 4785.826, 'text': 'And this has happened.', 'start': 4785.026, 'duration': 
0.8}, {'end': 4790.588, 'text': 'Many people say that, Krish, your channel was shared by one friend, you know.', 'start': 4786.406, 'duration': 4.182}, {'end': 4794.93, 'text': "So tell them, I'm going to take this entire month with respect to NLP sessions, right?", 'start': 4791.088, 'duration': 3.842}, {'end': 4800.533, 'text': 'So thank you, and this kind of session, I think, was amazing, I guess.', 'start': 4795.831, 'duration': 4.702}, {'end': 4807.296, 'text': 'And I will also make sure that every time I have a quiz so that everybody earns, learn, learn and earn right?', 'start': 4800.713, 'duration': 6.583}], 'summary': 'Sharing led to success: 1 friend shared, leading to nlp sessions for the entire month and positive feedback.', 'duration': 29.613, 'max_score': 4777.683, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF84777683.jpg'}], 'start': 4330.789, 'title': 'Quiz winners announcement and session conclusion', 'summary': 'Discusses the announcement and transfer of prize money to the quiz winners kaushal, gokul krishna, and nitin, with kaushal receiving 1st prize, and gokul krishna receiving 2nd prize of 1500. it also concludes with a call for sharing the session on linkedin to help others, a commitment to donate youtube earnings, and a promise of future nlp sessions and word cloud implementation.', 'chapters': [{'end': 4576.435, 'start': 4330.789, 'title': 'Quiz winners announcement', 'summary': 'Discusses the announcement and transfer of prize money to the quiz winners kaushal, gokul krishna, and nitin, with kaushal receiving 1st prize, and gokul krishna receiving 2nd prize of 1500.', 'duration': 245.646, 'highlights': ['Kaushal is the first winner, receiving a prize.', 'Gokul Krishna is the second winner, receiving a prize of 1500.', 'Nitin is also a winner and is requested to provide his UPI ID.', 'The prize money is being transferred to the winners using phone pay.']}, {'end': 4850.006, 'start': 4576.455, 'title': 'Session conclusion and gratitude', 'summary': 'Concludes with a quiz winners announcement and a call for sharing the session on linkedin to help others, with a commitment to donate youtube earnings and a promise of future nlp sessions and word cloud implementation.', 'duration': 273.551, 'highlights': ['Krish announces quiz winners, with Kaushal winning 2000 and Nitin winning 1500.', 'Krish encourages viewers to share the session on LinkedIn to help others and commits to donating YouTube earnings to the audience.', 'Krish expresses gratitude to the audience and promises future NLP sessions and word cloud implementation.', 'Krish emphasizes the importance of sharing knowledge and announces his commitment to quiz sessions and future contributions to the audience.']}], 'duration': 519.217, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/CG9iLLhqQF8/pics/CG9iLLhqQF84330789.jpg', 'highlights': ['Kaushal is the first winner, receiving a prize.', 'Gokul Krishna is the second winner, receiving a prize of 1500.', 'Nitin is also a winner and is requested to provide his UPI ID.', 'The prize money is being transferred to the winners using phone pay.', 'Krish announces quiz winners, with Kaushal winning 2000 and Nitin winning 1500.', 'Krish encourages viewers to share the session on LinkedIn to help others and commits to donating YouTube earnings to the audience.', 'Krish emphasizes the importance of sharing knowledge and announces his commitment to quiz sessions and future contributions to 
the audience.']}], 'highlights': ['The chapter introduces a live NLP series and announces a 5000 rupees giveaway through a quiz, with distribution of 2000 rupees for the first prize, 2000 rupees for the second prize, and 1000 rupees for the third prize.', 'The presenter announced an upcoming quiz with cash prizes of 5000, 2000, and 1000 rupees INR for the top three participants.', 'The roadmap progresses to advanced techniques like Gensim, word embeddings, bi-directional LSTMs, attention models, transformers, and BERT, providing a comprehensive understanding of text handling and deep learning models for NLP use cases.', 'DALL-E 2 demonstrates advancement in text to image conversion technology.', 'Tokenization is crucial for building a spam classifier using email features.', "Stop words like 'to', 'the', 'of' are not important in some use cases like spam classification, while they can be crucial for text summarization.", 'The quiz attracted 704 live viewers and 509 participants, covering NLP topics including text classification, topic modeling, and chatbots.', 'Kaushal is in the first position on the leaderboard. Kaushal is currently leading the quiz competition.', 'Kaushal is the first winner, receiving a prize.', 'The prize money is being transferred to the winners using phone pay.']}
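Several of the highlights above point at the same end-to-end use case: tokenization, stop word removal, and stemming feeding a spam classifier built from email features. A toy sketch of that pipeline follows; scikit-learn, the four example emails, and the Naive Bayes model are illustrative assumptions, since the session only names the use case rather than an implementation.

```python
# Toy spam/ham classifier sketch showing where tokenization and stemming fit.
# scikit-learn, the four example emails and the Naive Bayes model are assumptions
# for illustration; the session only names the use case, not an implementation.
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

stemmer = PorterStemmer()

def preprocess(text):
    # Tokenize with a simple split, lowercase, and stem every token.
    return " ".join(stemmer.stem(word) for word in text.lower().split())

emails = [
    "win a free lottery prize now",       # spam
    "free offer claim your prize",        # spam
    "meeting scheduled for tomorrow",     # ham
    "please review the project report",   # ham
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

# Bag-of-words features over the stemmed text; sklearn drops English stop words.
vectorizer = CountVectorizer(stop_words='english')
X = vectorizer.fit_transform([preprocess(e) for e in emails])

model = MultinomialNB()
model.fit(X, labels)

test = vectorizer.transform([preprocess("claim your free prize today")])
print(model.predict(test))  # expected: [1], i.e. spam, on this toy data
```

On real email data you would replace the toy list with a labelled corpus and move from raw counts to TF-IDF or the word-embedding techniques mentioned in the roadmap.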