title
AdaBoost, Clearly Explained

description
AdaBoost is one of those machine learning methods that seems so much more confusing than it really is. It's really just a simple twist on decision trees and random forests.

NOTE: This video assumes you already know about Decision Trees... https://youtu.be/_L39rN6gz7Y ...and Random Forests... https://youtu.be/J4Wdy0Wc_xQ

For a complete index of all the StatQuest videos, check out: https://statquest.org/video-index/

Sources:
The original AdaBoost paper by Robert E. Schapire and Yoav Freund: https://www.sciencedirect.com/science/article/pii/S002200009791504X
And a follow-up by co-creator Schapire: http://rob.schapire.net/papers/explaining-adaboost.pdf
The idea of using the weights to resample the original dataset comes from Boosting: Foundations and Algorithms, by Robert E. Schapire and Yoav Freund: https://mitpress.mit.edu/books/boosting
Lastly, Chris McCormick's tutorial was super helpful: http://mccormickml.com/2013/12/13/adaboost-tutorial/

If you'd like to support StatQuest, please consider...
Buying The StatQuest Illustrated Guide to Machine Learning!!!
PDF - https://statquest.gumroad.com/l/wvtmc
Paperback - https://www.amazon.com/dp/B09ZCKR4H6
Kindle eBook - https://www.amazon.com/dp/B09ZG79HXC
Patreon: https://www.patreon.com/statquest
...or...
YouTube Membership: https://www.youtube.com/channel/UCtYLUTtgS3k1Fg4y5tAhLbw/join
...a cool StatQuest t-shirt or sweatshirt: https://shop.spreadshirt.com/statquest-with-josh-starmer/
...buying one or two of my songs (or go large and get a whole album!): https://joshuastarmer.bandcamp.com/
...or just donating to StatQuest! https://www.paypal.me/statquest

Lastly, if you want to keep up with me as I research and create new StatQuests, follow me on twitter: https://twitter.com/joshuastarmer

0:00 Awesome song and introduction
0:56 The three main ideas behind AdaBoost
3:30 Review of the three main ideas
3:58 Building a stump with the GINI index
6:27 Determining the Amount of Say for a stump
10:45 Updating sample weights
14:47 Normalizing the sample weights
15:32 Using the normalized weights to make the second stump
19:06 Using stumps to make classifications
19:51 Review of the three main ideas behind AdaBoost

Correction: 10:18. The Amount of Say for Chest Pain = (1/2)*log((1-(3/8))/(3/8)) = (1/2)*log((5/8)/(3/8)) = (1/2)*log(5/3) = 0.25, not 0.42.

#statquest #adaboost
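The corrected value can be checked directly. Here is a minimal Python sketch; the natural log is assumed, since that is the only base that reproduces the 0.97 the video gets for the patient weight stump:

```python
import math

def amount_of_say(total_error):
    # Amount of Say = (1/2) * log((1 - Total Error) / Total Error)
    return 0.5 * math.log((1 - total_error) / total_error)

print(round(amount_of_say(3 / 8), 2))  # 0.26 (0.2554 exactly; the correction rounds to 0.25)
print(round(amount_of_say(1 / 8), 2))  # 0.97, the patient weight stump's Amount of Say
```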

detail
Summary: The video explains AdaBoost and how it builds on decision trees and random forests: it combines many small trees (stumps) into a forest, modifies the sample weights so that misclassified samples get more emphasis when the next stump is built, and classifies a patient according to which group of stumps has the larger total Amount of Say.

Chapter 1 (0:00-1:25): Understanding AdaBoost with decision trees
The video assumes you already know about Decision Trees and also mentions Random Forests, so check out those StatQuests if you need to. It starts by using Decision Trees and Random Forests to explain the three main concepts behind AdaBoost, then gets into the nitty-gritty details of how AdaBoost creates a forest of trees from scratch and how that forest is used to make classifications. The key contrast: in a random forest, each tree you make is a full-sized tree; some trees might be bigger than others, but there is no predetermined maximum depth. In a forest of trees made with AdaBoost, the trees are usually just a node and two leaves, called a stump.
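To make the node-and-two-leaves idea concrete, here is a minimal sketch of a stump as a data structure; the class name, fields, and predict method are illustrative choices, not something defined in the video:

```python
from dataclasses import dataclass

@dataclass
class Stump:
    """One node and two leaves: a yes/no question plus the two answers."""
    feature: str          # e.g. "chest_pain" or "patient_weight"
    threshold: float      # split point for a numeric feature (e.g. 176)
    left_label: str       # prediction when the feature value is below the threshold
    right_label: str      # prediction when the feature value is at or above it
    amount_of_say: float  # the weight of this stump's vote in the final classification

    def predict(self, value: float) -> str:
        return self.left_label if value < self.threshold else self.right_label
```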
Chapter 2 (1:27-6:01): AdaBoost and patient predictions
The errors that the first stump makes influence how the second stump is made, the errors that the second stump makes influence how the third stump is made, and so on. To review, the three ideas behind AdaBoost are:
1. AdaBoost combines a lot of weak learners to make classifications, and the weak learners are almost always stumps.
2. Some stumps get more say in the classification than others.
3. Each stump is made by taking the previous stump's mistakes into account.
The worked example builds a forest of stumps to predict whether a patient has heart disease from three features: chest pain, blocked arteries, and patient weight. At the start, every sample gets the same weight, 1 divided by the total number of samples (here 1/8), which makes all samples equally important. After the first stump is made, these weights will change in order to guide how the next stump is created. Evaluating the first candidate stump: of the five samples with chest pain, three were correctly classified as having heart disease and two were incorrectly classified; of the three samples without chest pain, two were correctly classified as not having heart disease and one was incorrectly classified. The same evaluation is then done for blocked arteries and for patient weight.
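A minimal sketch of this setup, with a toy dataset constructed only to match the counts quoted above (the individual rows are illustrative, not the video's actual table):

```python
# Each patient: (has_chest_pain, has_heart_disease)
patients = [
    (True, True), (True, True), (True, True), (True, False), (True, False),
    (False, False), (False, False), (False, True),
]

# At the start, every sample gets the same weight: 1 / (number of samples).
weights = [1 / len(patients)] * len(patients)  # eight samples -> 1/8 each

# The chest pain stump predicts heart disease exactly when chest pain is present,
# so it is correct whenever the two values agree.
correct = sum(pain == disease for pain, disease in patients)
print(correct, "correct,", len(patients) - correct, "incorrect")  # 5 correct, 3 incorrect
```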
Chapter 3 (6:03-10:43): Building the first stump and its Amount of Say
Using the techniques described in the Decision Trees StatQuest, 176 is determined to be the best patient weight for separating the patients. The Gini index is then calculated for the three candidate stumps; patient weight has the lowest Gini index, so it becomes the first stump in the forest. This stump makes one error: a patient who weighs less than 176 has heart disease, but the stump says they do not. The Total Error for a stump is the sum of the weights associated with the incorrectly classified samples, so here it is 1/8. Because all of the sample weights add up to 1, Total Error is always between 0 (for a perfect stump) and 1 (for a horrible stump). Total Error determines how much say the stump has in the final classification:
Amount of Say = (1/2) * log((1 - Total Error) / Total Error)
When a stump does a good job and Total Error is small, the Amount of Say is a relatively large positive value; when it does a terrible job, the Amount of Say is a large negative value; and when Total Error is 0.5, the Amount of Say is zero. The formula blows up when Total Error is exactly 0 or 1, so in practice a small error term is added to prevent this. For the patient weight stump, Total Error is 1/8, so its Amount of Say is 0.97. As an illustration (not strictly needed, but it helps), the chest pain stump made three errors, so its Total Error would be 3/8 and its Amount of Say 0.25 (the video says 0.42; the description's correction fixes this to 0.25).
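The chapter's graph of Amount of Say versus Total Error can be reproduced by plugging in a bunch of numbers between 0 and 1. This sketch also includes the small error term that keeps the formula from blowing up at 0 and 1 (the epsilon value is an illustrative choice, not from the video):

```python
import math

def amount_of_say(total_error, eps=1e-10):
    # (1/2) * log((1 - E) / E), with E nudged away from exactly 0 and 1.
    total_error = min(max(total_error, eps), 1 - eps)
    return 0.5 * math.log((1 - total_error) / total_error)

for e in (0.01, 0.125, 0.375, 0.5, 0.875, 0.99):
    print(f"Total Error {e:5.3f} -> Amount of Say {amount_of_say(e):+.2f}")
# 0.125 gives +0.97 (the patient weight stump); 0.5 gives 0.00;
# errors above 0.5 give negative say, mirroring the positive side.
```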
Chapter 4 (10:44-15:59): Updating and normalizing the sample weights
Because the first stump incorrectly classified one sample, AdaBoost emphasizes the need for the next stump to correctly classify it by increasing that sample's weight and decreasing all of the other sample weights. For the incorrectly classified sample:
New Sample Weight = Sample Weight * e^(Amount of Say)
Plotting e^(Amount of Say) for different values shows how this scales the old weight: when the Amount of Say is relatively large, the new sample weight is much larger than the old one; when the Amount of Say is relatively low, the new weight is only a little larger. With an Amount of Say of 0.97, the misclassified sample's weight goes from 1/8 to 0.33. For all of the correctly classified samples the formula is the same, except the big difference is the negative sign in front of the Amount of Say:
New Sample Weight = Sample Weight * e^(-Amount of Say)
Here the graph is e raised to the negative Amount of Say: when the Amount of Say is relatively large, the old weight is scaled by a value close to zero, making the new sample weight very small. Each correctly classified sample's weight drops from 1/8 to 0.05, less than before. Finally, the new weights are normalized so that they add up to 1: right now they sum to 0.68, so each new sample weight is divided by 0.68. After normalizing, the weights add up to 1, plus or minus a little rounding error.
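A sketch of the full weight update for the eight-sample example; the numbers follow the chapter, and which position holds the misclassified patient is an illustrative choice:

```python
import math

amount_of_say = 0.97          # the patient weight stump's say
weights = [1 / 8] * 8         # the starting weights
misclassified = [False] * 8
misclassified[0] = True       # assume sample 0 is the misclassified patient

# Increase the weight of the misclassified sample, decrease all the others.
new_weights = [
    w * math.exp(amount_of_say if wrong else -amount_of_say)
    for w, wrong in zip(weights, misclassified)
]
print([round(w, 2) for w in new_weights])  # [0.33, 0.05, 0.05, ..., 0.05]

# Normalize so the weights add up to 1.
total = sum(new_weights)
normalized = [w / total for w in new_weights]
print(round(total, 2), [round(w, 2) for w in normalized])
# 0.66, [0.5, 0.07, ...] here; the video's rounded 0.33 and 0.05 give 0.68 and 0.49
```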
Chapter 5 (15:59-20:51): Making the second stump and classifying with the forest
To build the next stump, the normalized weights can be used to make a new collection of samples that contains duplicate copies of the samples with the largest sample weights. Start with a new but empty dataset that is the same size as the original, then repeatedly pick a random number between 0 and 1 and see where that number falls when the sample weights are used like a distribution: if the number is between 0 and 0.07, the first sample goes into the new collection; between 0.07 and 0.14, the second sample; between 0.14 and 0.21, the third sample; and so on, with the misclassified sample (normalized weight 0.49) owning a correspondingly wide slice. This continues until the new collection matches the size of the original. In the example, the misclassified sample is added to the new collection four times, reflecting its larger sample weight. All samples in the new collection then get equal weights again, but because the duplicates act as a block, there is now a large penalty for misclassifying that sample, and we look for the best stump on the new collection of samples.
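A minimal sketch of this weighted resampling; the cumulative-sum loop mirrors the "where does the random number fall" description, and the position of the 0.49 weight is an illustrative choice:

```python
import random
from itertools import accumulate

normalized = [0.07, 0.07, 0.07, 0.49, 0.07, 0.07, 0.07, 0.07]
cutoffs = list(accumulate(normalized))  # 0.07, 0.14, 0.21, 0.70, ...

def draw_index(cutoffs):
    """Pick a random number in [0, 1) and find which sample's slice it lands in."""
    r = random.random()
    for i, c in enumerate(cutoffs):
        if r < c:
            return i
    return len(cutoffs) - 1  # guard: the rounded weights sum to 0.98, not exactly 1

# Build the new collection, the same size as the original.
new_collection = [draw_index(cutoffs) for _ in range(len(normalized))]
print(new_collection)  # the heavily weighted sample tends to appear several times
```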
Making classifications: imagine that one group of stumps classifies a patient as "has heart disease" and another group classifies the patient as "does not have heart disease". Add up the Amounts of Say within each group; the patient is classified as having heart disease because that group has the larger sum. To review, the three ideas behind AdaBoost are:
1. AdaBoost combines a lot of weak learners to make classifications, and the weak learners are almost always stumps.
2. Some stumps get more say in the classification than others.
3. Each stump is made by taking the previous stump's mistakes into account: if we have a weighted Gini function, we use it with the sample weights; otherwise we create a new dataset that reflects those weights.
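A sketch of that final vote; the particular labels and Amount of Say values here are made up for illustration:

```python
from collections import defaultdict

# One (predicted_label, amount_of_say) pair per stump, for a single patient.
votes = [("has HD", 0.97), ("has HD", 0.25), ("no HD", 0.41), ("no HD", 0.30)]

totals = defaultdict(float)
for label, say in votes:
    totals[label] += say

print(max(totals, key=totals.get))  # "has HD": its sum 1.22 beats 0.71
```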