title
Lecture 31: Markov Chains | Statistics 110
description
We introduce Markov chains -- a very beautiful and very useful kind of stochastic process -- and discuss the Markov property, transition matrices, and stationary distributions.
detail
{'title': 'Lecture 31: Markov Chains | Statistics 110', 'heatmap': [{'end': 822.495, 'start': 697.82, 'weight': 0.723}, {'end': 1960.632, 'start': 1757.471, 'weight': 0.966}, {'end': 2383.299, 'start': 2349.589, 'weight': 0.764}], 'summary': 'Covers Markov chains, including the law of large numbers, central limit theorem, multivariate normal, transition probabilities, and stationary distributions. It also includes exam preparation recommendations: a 66-page review packet and past 110 finals, with the suggestion to practice under exam conditions.', 'chapters': [{'end': 134.433, 'segs': [{'end': 44.713, 'src': 'embed', 'start': 19.356, 'weight': 1, 'content': [{'end': 24.778, 'text': 'So this week it will focus on the law of large numbers, central limit theorem, and multivariate normal, which we have been doing.', 'start': 19.356, 'duration': 5.422}, {'end': 33.089, 'text': 'Then in the last section next week, the TFs will do examples of Markov chains, which is our last topic.', 'start': 25.267, 'duration': 7.822}, {'end': 38.191, 'text': "And okay, so I'll just start from the beginning of Markov chains.", 'start': 34.05, 'duration': 4.141}, {'end': 42.492, 'text': "There's also a handout that I wrote on Markov chains, which you can read at some point.", 'start': 38.211, 'duration': 4.281}, {'end': 44.713, 'text': "If you haven't seen it already, it's just posted under Handouts.", 'start': 42.532, 'duration': 2.181}], 'summary': 'This week focuses on the law of large numbers, central limit theorem, and multivariate normal. Next week, examples of Markov chains will be covered as the last topic.', 'duration': 25.357, 'max_score': 19.356, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY19356.jpg'}, {'end': 91.013, 'src': 'embed', 'start': 67.762, 'weight': 0, 'content': [{'end': 77.838, 'text': 'The 66 pages consist of 23 pages of review, and then the rest of it is literally just all the past 110 finals.', 'start': 67.762, 'duration': 10.076}, {'end': 82.385, 'text': "I've been at Harvard since 2006, teaching 110 every fall,", 'start': 77.938, 'duration': 4.447}, {'end': 91.013, 'text': "and I don't really believe in withholding information, or in some people getting access to more exams than others cuz they have a friend from whatever.", 'start': 82.385, 'duration': 8.628}], 'summary': 'A 66-page packet: 23 pages of review plus all the past 110 finals, promoting fair access to exams.', 'duration': 23.251, 'max_score': 67.762, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY67762.jpg'}, {'end': 142.18, 'src': 'embed', 'start': 116.353, 'weight': 2, 'content': [{'end': 125.639, 'text': 'So I would highly recommend doing at least two or three of those exams under as close as you can get to exam conditions.', 'start': 116.353, 'duration': 9.286}, {'end': 128.561, 'text': 'That is, time yourself strictly.', 'start': 125.659, 'duration': 2.902}, {'end': 132.223, 'text': 'You have four pages of notes, double sided, that kind of thing.', 'start': 128.621, 'duration': 3.602}, {'end': 134.433, 'text': 'Then after each one, you can go through it.', 'start': 132.591, 'duration': 1.842}, {'end': 142.18, 'text': "If you run out of time and you're not done, it's still a good idea to spend some time working on the problems, still without looking at the solutions.", 'start': 135.474, 'duration': 6.706}], 'summary': 'Recommends doing 2-3 past exams under strict exam conditions, then reviewing without looking at solutions.', 'duration':
25.827, 'max_score': 116.353, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY116353.jpg'}], 'start': 0.229, 'title': 'Markov chains and final exam preparation', 'summary': 'Covers the last topic, Markov chains, focusing first on the law of large numbers, central limit theorem, and multivariate normal. The final exam will be on December 15th, with a 66-page packet of review and past 110 finals, and the recommendation to practice at least two or three exams under exam conditions.', 'chapters': [{'end': 134.433, 'start': 0.229, 'title': 'Markov chains and final exam preparation', 'summary': 'Covers the last topic, Markov chains, with upcoming sections focusing on the law of large numbers, central limit theorem, and multivariate normal. The final exam will be on December 15th, with a 66-page packet of review and past 110 finals, and the recommendation to practice at least two or three exams under exam conditions.', 'duration': 134.204, 'highlights': ['The final exam will be on December 15th, with a 66-page packet of review and past 110 finals. Important information about the final exam date and its content, providing a clear overview of what to expect.', 'The upcoming sections will focus on the law of large numbers, central limit theorem, and multivariate normal. Key points about the topics to be covered, providing insight into the upcoming course content.', 'Recommendation to practice at least two or three exams under exam conditions. Encouragement for students to prepare for the exam by practicing under realistic conditions, emphasizing the importance of thorough preparation.']}], 'duration': 134.204, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY229.jpg', 'highlights': ['The final exam will be on December 15th, with a 66-page packet of review and past 110 finals. Important information about the final exam date and its content, providing a clear overview of what to expect.', 'The upcoming sections will focus on the law of large numbers, central limit theorem, and multivariate normal. Key points about the topics to be covered, providing insight into the upcoming course content.', 'Recommendation to practice at least two or three exams under exam conditions. Encouragement for students to prepare for the exam by practicing under realistic conditions, emphasizing the importance of thorough preparation.']}, {'end': 811.187, 'segs': [{'end': 258.891, 'src': 'embed', 'start': 230.598, 'weight': 0, 'content': [{'end': 241.903, 'text': "But the number of applications they've found since then has just been unbelievable, and it has gone in a lot of directions that Markov would not have anticipated.", 'start': 230.598, 'duration': 11.305}, {'end': 247.046, 'text': "Okay, so Markov chains, it's an example of a stochastic process.", 'start': 241.923, 'duration': 5.123}, {'end': 250.528, 'text': "So I'll tell you just very briefly what a stochastic process is.", 'start': 247.066, 'duration': 3.462}, {'end': 258.891, 'text': "It's one of the most important examples of stochastic processes.", 'start': 254.228, 'duration': 4.663}], 'summary': 'Unbelievable increase in applications for Markov chains, a key stochastic process.', 'duration': 28.293, 'max_score': 230.598, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY230598.jpg'}, {'end': 333.984, 'src': 'embed', 'start': 301.078, 'weight': 2, 'content': [{'end': 309.746, 'text': 'So a Markov chain is an example of a
stochastic process, which basically just means random variables evolving over time.', 'start': 301.078, 'duration': 8.668}, {'end': 319.757, 'text': "Okay, so I'll just give some examples of stochastic processes.", 'start': 314.915, 'duration': 4.842}, {'end': 326.56, 'text': "So what does that mean? It doesn't literally have to be time, but that's the most common interpretation.", 'start': 321.138, 'duration': 5.422}, {'end': 333.984, 'text': 'So we have this sequence of random variables, X0, X1, X2, and so on.', 'start': 326.921, 'duration': 7.063}], 'summary': 'A Markov chain is a stochastic process: random variables evolving over time.', 'duration': 32.906, 'max_score': 301.078, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY301078.jpg'}, {'end': 381.855, 'src': 'embed', 'start': 360.151, 'weight': 1, 'content': [{'end': 369.273, 'text': 'But if we allow these to be completely dependent in some arbitrarily complicated way, that can be extremely complicated, right?', 'start': 360.151, 'duration': 9.122}, {'end': 373.234, 'text': 'You have this infinite sequence and they can all relate to each other in some very complicated way.', 'start': 369.313, 'duration': 3.921}, {'end': 377.174, 'text': 'So Markov chains are kind of a compromise.', 'start': 373.994, 'duration': 3.18}, {'end': 381.855, 'text': "It's basically going one step beyond IID, right?", 'start': 377.755, 'duration': 4.1}], 'summary': 'Markov chains are a compromise, going one step beyond IID.', 'duration': 21.704, 'max_score': 360.151, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY360151.jpg'}, {'end': 447.023, 'src': 'embed', 'start': 417.484, 'weight': 5, 'content': [{'end': 424.028, 'text': "It's jumping from state to state, and that's at time n, which we're assuming is discrete right now.", 'start': 417.484, 'duration': 6.544}, {'end': 427.53, 'text': 'That is, n is a non-negative integer.', 'start': 425.188, 'duration': 2.342}, {'end': 435.635, 'text': 'So we can consider stochastic processes in continuous time or continuous space.', 'start': 428.55, 'duration': 7.085}, {'end': 440.799, 'text': 'Continuous time or discrete time, continuous space or discrete space.', 'start': 437.316, 'duration': 3.483}, {'end': 442.3, 'text': "So there's basically four possibilities.", 'start': 440.819, 'duration': 1.481}, {'end': 447.023, 'text': 'That is, this index; here we just did X0, X1, X2.', 'start': 443.04, 'duration': 3.983}], 'summary': 'Stochastic processes can be in continuous or discrete time and space, with four possibilities.', 'duration': 29.539, 'max_score': 417.484, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY417484.jpg'}, {'end': 793.86, 'src': 'embed', 'start': 741.754, 'weight': 3, 'content': [{'end': 744.937, 'text': 'So I just crossed out everything further back in time.', 'start': 741.754, 'duration': 3.183}, {'end': 749.653, 'text': 'Only the most recent one matters, conditionally.', 'start': 746.091, 'duration': 3.562}, {'end': 758.116, 'text': "That doesn't mean, though, that if we had Xn plus 1 given Xn minus 1, and we were not given Xn, we could get rid of the Xn minus 1,", 'start': 750.613, 'duration': 7.503}, {'end': 760.077, 'text': "right?
It's conditional independence.", 'start': 758.116, 'duration': 1.961}, {'end': 762.577, 'text': "So that's the Markov assumption.", 'start': 760.557, 'duration': 2.02}, {'end': 771.561, 'text': "Whether that assumption is good or not for a real example completely depends on the problem, okay? But that's the Markov property.", 'start': 763.338, 'duration': 8.223}, {'end': 780.771, 'text': "And in particular, for our purposes, let's actually assume that this doesn't depend on n.", 'start': 772.862, 'duration': 7.909}, {'end': 785.777, 'text': "So we'll usually call this thing qij, or call it whatever you want.", 'start': 780.771, 'duration': 5.006}, {'end': 787.879, 'text': "That's called a transition probability.", 'start': 786.037, 'duration': 1.842}, {'end': 793.86, 'text': 'So sometimes this is called a homogeneous Markov chain.', 'start': 789.301, 'duration': 4.559}], 'summary': 'Markov assumption: only the most recent state matters, conditionally; transition probability qij.', 'duration': 52.106, 'max_score': 741.754, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY741754.jpg'}], 'start': 135.474, 'title': 'Markov chains and stochastic processes', 'summary': 'Delves into Markov chains, a concept invented by Markov over 100 years ago, as an example of a stochastic process, with significant applications for studying sequences of random variables, and emphasizes the significance of the most recent information and the transition probability.', 'chapters': [{'end': 381.855, 'start': 135.474, 'title': 'Markov chains and stochastic processes', 'summary': 'Covers the concept of Markov chains, invented by Markov over 100 years ago, as an example of a stochastic process, involving random variables evolving over time, with significant applications and implications for studying sequences of random variables.', 'duration': 246.381, 'highlights': ['Markov chains were invented or discovered by Markov just over 100 years ago, and they are an example of a stochastic process, involving random variables evolving over time, with significant applications and implications for studying sequences of random variables.', 'Stochastic processes involve random variables evolving over time, with the most common interpretation being time, but it can be more broadly applicable.', 'The concept of Markov chains is a compromise between independent and identically distributed (IID) random variables and completely dependent variables, going one step beyond IID.']},
{'end': 811.187, 'start': 382.155, 'title': 'Understanding Markov chains', 'summary': 'Introduces the concept of Markov chains, discussing stochastic processes, the Markov property, and conditional independence, emphasizing the significance of the most recent information and the transition probability.', 'duration': 429.032, 'highlights': ['Markov property: It states that only the most recent information matters in predicting the future state of a system, leading to conditional independence given the present, simplifying the probability statement from the entire past history.', 'Transition probability: The assumption that the probability of transitioning from one state to another (qij) does not depend on time, known as a homogeneous Markov chain.', 'Stochastic processes: The chapter discusses various interpretations of Xn as the state of a system and the consideration of stochastic processes in continuous or discrete time and space.']}], 'duration': 675.713, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY135474.jpg', 'highlights': ['Markov chains were introduced by Markov over 100 years ago as an example of a stochastic process, with significant applications and implications for studying sequences of random variables.', 'The concept of Markov chains is a compromise between independent and identically distributed (IID) random variables and completely dependent variables, going one step beyond IID.', 'Stochastic processes involve random variables evolving over time, with the most common interpretation being time, but it can be more broadly applicable and involves sequences of random variables evolving in some way with some kind of dependence.', 'Markov property: It states that only the most recent information matters in predicting the future state of a system, leading to conditional independence given the present, simplifying the probability statement from the entire past history.', 'Transition probability: The assumption that the probability of transitioning from one state to another (qij) does not depend on time, known as a homogeneous Markov chain.', 'Stochastic processes: The chapter discusses various interpretations of Xn as the state of a system and the consideration of stochastic processes in continuous or discrete time and space.']}, {'end': 1026.911, 'segs': [{'end': 837.692, 'src': 'embed', 'start': 811.227, 'weight': 1, 'content': [{'end': 822.495, 'text': "So this is saying: we're looking at the case where the system is at state i, and we wanna know what's the probability that at the next time point it's at state j.", 'start': 811.227, 'duration': 11.268}, {'end': 829.601, 'text': "If that's always qij, if it doesn't change with time, then we would say that that's homogeneous, or time homogeneous.", 'start': 822.495, 'duration': 7.106}, {'end': 837.692, 'text': "For our purposes, we're just gonna be studying homogeneous Markov chains.", 'start': 832.77, 'duration': 4.922}], 'summary': 'Studying
homogeneous Markov chains with constant transition probabilities.', 'duration': 26.465, 'max_score': 811.227, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY811227.jpg'}, {'end': 1026.911, 'src': 'embed', 'start': 873.336, 'weight': 0, 'content': [{'end': 879.965, 'text': 'Okay, and then to specify the Markov chain, we just need to specify transition probabilities.', 'start': 873.336, 'duration': 6.629}, {'end': 884.171, 'text': "That is, what's the probability of going from state one to state two, and so on for all the different states.", 'start': 880.025, 'duration': 4.146}, {'end': 888.397, 'text': 'So it may not be possible to go from any state to any state in one step.', 'start': 884.532, 'duration': 3.865}, {'end': 890.059, 'text': "In fact, usually it's not possible.", 'start': 888.477, 'duration': 1.582}, {'end': 892.681, 'text': "So for example, here's just an example I made up.", 'start': 890.76, 'duration': 1.921}, {'end': 895.003, 'text': "Let's say the system is at state one.", 'start': 893.282, 'duration': 1.721}, {'end': 902.247, 'text': 'It has probability one-third of staying there, and it has probability two-thirds of going to state two.', 'start': 895.983, 'duration': 6.264}, {'end': 908.291, 'text': "I'm just drawing arrows and I'm putting probabilities on the arrows that reflect the probabilities of those different transitions.", 'start': 902.827, 'duration': 5.464}, {'end': 915.535, 'text': "From state two, it's equally likely to go back to state one or to go on to state three.", 'start': 908.871, 'duration': 6.664}, {'end': 918.497, 'text': "I guess I'm putting the probabilities above the arrow.", 'start': 915.555, 'duration': 2.942}, {'end': 923.186, 'text': 'So from state two, it either goes here or here, equally likely.', 'start': 920.103, 'duration': 3.083}, {'end': 932.514, 'text': 'From state four, it either goes back to state three with probability 1 quarter, or it goes back to state one with probability 1 half.', 'start': 923.666, 'duration': 8.848}, {'end': 947.93, 'text': 'Or it stays there with probability 1 quarter.', 'start': 941.528, 'duration': 6.402}, {'end': 954.412, 'text': "And then let's say from state 3, just to make life a little bit easier, from state 3 it always goes to state 4.", 'start': 948.25, 'duration': 6.162}, {'end': 955.793, 'text': "So I'll just put a 1 there.", 'start': 954.412, 'duration': 1.381}, {'end': 957.617, 'text': 'You make up your own example.', 'start': 956.536, 'duration': 1.081}, {'end': 961.981, 'text': 'Anyway, it just helps to have a picture like this in mind.', 'start': 958.057, 'duration': 3.924}, {'end': 969.527, 'text': "And you can have a Markov chain that has infinitely many states, and then you can't actually draw infinitely many states.", 'start': 962.641, 'duration': 6.886}, {'end': 973.771, 'text': "But we're only gonna be looking at the case where there are finitely many states.", 'start': 969.948, 'duration': 3.823}, {'end': 976.974, 'text': 'So for our purposes, you can always imagine a picture like this.', 'start': 974.031, 'duration': 2.943}, {'end': 978.455, 'text': 'We have some number of states.', 'start': 977.034, 'duration': 1.421}, {'end': 988.075, 'text': 'And then you have a particle randomly bouncing around between states, following the arrows, and the arrows may have different probabilities.', 'start': 980.888, 'duration': 7.187}, {'end': 990.538, 'text': "Okay, that's the mental picture.", 'start': 989.377, 'duration': 1.161}, {'end': 1002.35, 'text': "And then
to explain in that picture, what does this property say? It says that you're wandering around from state to state, like that.", 'start': 991.138, 'duration': 11.212}, {'end': 1012.399, 'text': 'What this says is, if I wanna know the future or predict the future, all I care about is the current state.', 'start': 1004.232, 'duration': 8.167}, {'end': 1018.704, 'text': "I don't care about the whole history of all the long wanderings throughout time to get there.", 'start': 1012.739, 'duration': 5.965}, {'end': 1022.207, 'text': "It doesn't matter how you got there, it only matters where you are, right?", 'start': 1018.964, 'duration': 3.243}, {'end': 1024.829, 'text': "That's what the Markov property says,", 'start': 1022.628, 'duration': 2.201}, {'end': 1026.911, 'text': 'intuitively, okay?', 'start': 1024.829, 'duration': 2.082}], 'summary': 'A Markov chain is specified by transition probabilities between states, simplifying prediction of the future.', 'duration': 153.575, 'max_score': 873.336, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY873336.jpg'}], 'start': 811.227, 'title': 'Markov chains', 'summary': "Discusses homogeneous Markov chains and transition probabilities, emphasizing the transition probabilities qij and their time-independence, illustrated with a simple example of a four-state system. It describes a Markov chain with transition probabilities between states, with specific transition probabilities outlined. Additionally, it introduces Markov chains with a focus on the case of finitely many states, emphasizing the Markov property's significance in predicting the future based solely on the current state.", 'chapters': [{'end': 895.003, 'start': 811.227, 'title': 'Homogeneous Markov chains and transition probabilities', 'summary': 'Discusses homogeneous Markov chains and transition probabilities, emphasizing the transition probabilities qij and their time-independence, illustrated with a simple example of a four-state system.', 'duration': 83.776, 'highlights': ['The concept of homogeneous Markov chains is explained in relation to the probability of a system transitioning from state i to state j, denoted as qij and emphasized as time-invariant.', 'The importance of mental visualization in understanding Markov chains is highlighted, stressing the value of creating a mental picture to comprehend the system dynamics.', 'The example of a four-state Markov chain is introduced to illustrate transition probabilities, emphasizing the necessity of specifying the probabilities of transitioning between different states.']}, {'end': 955.793, 'start': 895.983, 'title': 'Markov chain probability transitions', 'summary': 'Describes a Markov chain with transition probabilities between states: from state one, probability one-third of staying and two-thirds of moving to state two. State three always transitions to state four, while state two is equally likely to go back to state one or on to state three.',
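A quick illustration of that mental picture, as a minimal sketch rather than anything from the lecture itself: the Python below simulates the four-state example described above (the helper name `walk` and the data layout are assumptions of this sketch; the probabilities are those of the example). Each jump depends only on the current state, which is exactly what the Markov property says.

```python
import random

# Outgoing arrows of the four-state example: (target state, probability).
# Each state's outgoing probabilities sum to 1.
transitions = {
    1: [(1, 1/3), (2, 2/3)],
    2: [(1, 1/2), (3, 1/2)],
    3: [(4, 1.0)],
    4: [(1, 1/2), (3, 1/4), (4, 1/4)],
}

def walk(start, n_steps):
    """Bounce around between states; each jump looks only at the current state."""
    state = start
    path = [state]
    for _ in range(n_steps):
        targets, probs = zip(*transitions[state])
        state = random.choices(targets, weights=probs)[0]
        path.append(state)
    return path

print(walk(start=1, n_steps=10))  # e.g. [1, 2, 3, 4, 1, 1, 2, 3, 4, 3, 4]
```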
state three always transitions to state four, while state two equally transitions back to state one or to itself.', 'duration': 59.81, 'highlights': ['The initial state has a probability of one-third of staying and two-thirds of transitioning to the next state.', 'State two has an equal likelihood of transitioning back to state one or to itself.', 'State three always transitions to state four.']}, {'end': 1026.911, 'start': 956.536, 'title': 'Markov chains: finite state case', 'summary': "Introduces the concept of markov chains with a focus on the case of finitely many states, emphasizing the markov property's significance in predicting the future based solely on the current state.", 'duration': 70.375, 'highlights': ['Markov chains can have infinitely many states, but the focus is on the case where there are finitely many states. The chapter highlights the distinction between Markov chains with infinitely many states and the specific focus on the case where there are finitely many states.', 'The Markov property indicates that predicting the future only requires knowledge of the current state, disregarding the entire historical path to reach that state. The chapter emphasizes the significance of the Markov property, stating that predicting the future solely depends on the current state, irrespective of the historical path.', 'The mental picture involves a particle randomly moving between states, following arrows with different probabilities. It describes the mental picture of a particle moving randomly between states, guided by arrows with varying probabilities.']}], 'duration': 215.684, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY811227.jpg', 'highlights': ['The Markov property indicates that predicting the future only requires knowledge of the current state, disregarding the entire historical path to reach that state.', 'The concept of homogeneous Markov chains is explained in relation to the probability of a system transitioning from state i to state j, denoted as qij and emphasized as time-invariant.', 'The mental picture involves a particle randomly moving between states, following arrows with different probabilities.', 'The example of a four-state Markov chain is introduced to illustrate transition probabilities, emphasizing the necessity of specifying the probabilities of transitioning between different states.', 'Markov chains can have infinitely many states, but the focus is on the case where there are finitely many states.', 'State three always transitions to state four.', 'The initial state has a probability of one-third of staying and two-thirds of transitioning to the next state.', 'The importance of mental visualization in understanding Markov chains is highlighted, stressing the value of creating a mental picture to comprehend the system dynamics.', 'The Markov property emphasizes the significance of predicting the future solely depending on the current state, irrespective of the historical path.']}, {'end': 1634.277, 'segs': [{'end': 1161.872, 'src': 'embed', 'start': 1134.658, 'weight': 0, 'content': [{'end': 1141.426, 'text': 'Some people would use the transpose of this matrix instead, and then the columns sum to 1, just a convention.', 'start': 1134.658, 'duration': 6.768}, {'end': 1145.21, 'text': "We're gonna take the convention that we write it this way.", 'start': 1142.707, 'duration': 2.503}, {'end': 1151.28, 'text': 'that the ij entry is the probability of jumping from i to j in one step.', 'start': 
1145.834, 'duration': 5.446}, {'end': 1161.872, 'text': 'So therefore, each row sums to 1, right? Okay, columns may or may not sum to 1, but the rows sum to 1.', 'start': 1151.74, 'duration': 10.132}], 'summary': 'Transition matrix rows sum to 1, columns may not; a conventional representation.', 'duration': 27.214, 'max_score': 1134.658, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY1134658.jpg'}, {'end': 1246.219, 'src': 'embed', 'start': 1183.785, 'weight': 1, 'content': [{'end': 1186.127, 'text': "So that's what a transition matrix is.", 'start': 1183.785, 'duration': 2.342}, {'end': 1195.954, 'text': "By the way, other than the basic definition of the Markov property, there are two most important concepts we're gonna need.", 'start': 1187.11, 'duration': 8.844}, {'end': 1199.215, 'text': 'One is the transition matrix; now you know what a transition matrix is.', 'start': 1196.054, 'duration': 3.161}, {'end': 1202.817, 'text': "The other one's the stationary distribution, which we'll talk about later.", 'start': 1199.675, 'duration': 3.142}, {'end': 1207.959, 'text': "Okay, so that's the Markov property.", 'start': 1202.837, 'duration': 5.122}, {'end': 1223.991, 'text': "So, to tell you a little bit about the history: well, okay, so it's being used now in two sort of different ways.", 'start': 1209.48, 'duration': 14.511}, {'end': 1235.333, 'text': 'One is as a model that might be used in various problems in the social sciences, physical sciences, in biology,', 'start': 1225.391, 'duration': 9.942}, {'end': 1239.434, 'text': 'where you actually literally believe that you have a Markov chain.', 'start': 1235.333, 'duration': 4.101}, {'end': 1246.219, 'text': "Or you're using it as an approximation for some system evolving over time.", 'start': 1240.154, 'duration': 6.065}], 'summary': 'Markov property explained with key concepts: transition matrix and stationary distribution, used in various fields for modeling and approximation.', 'duration': 62.434, 'max_score': 1183.785, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY1183785.jpg'}, {'end': 1490.026, 'src': 'embed', 'start': 1463.824, 'weight': 2, 'content': [{'end': 1471.949, 'text': 'as a way of doing extremely complicated simulations and computations that would have been completely impossible 50 years ago, or even 30 years ago.', 'start': 1463.824, 'duration': 8.125}, {'end': 1478.901, 'text': "Okay, that's Markov chain Monte Carlo.", 'start': 1476.119, 'duration': 2.782}, {'end': 1479.681, 'text': 'So Markov chain', 'start': 1479.041, 'duration': 0.64}, {'end': 1482.943, 'text': 'Monte Carlo is when you build your own Markov chain, right?', 'start': 1479.681, 'duration': 3.262}, {'end': 1490.026, 'text': 'And then the other application is just that you have this natural system that you choose to model using a Markov chain.', 'start': 1483.183, 'duration': 6.843}], 'summary': 'Enables complex simulations and computations; a breakthrough of the past 50 years.', 'duration': 26.202, 'max_score': 1463.824, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY1463824.jpg'}], 'start': 1028.051, 'title': 'Transition matrices and Markov chains', 'summary': 'Covers the concept of transition matrices, including their role in representing transition probabilities and their significance in Markov chain analysis. It also explores the history and applications of Markov chains, highlighting their role as a model for various systems and their impact on scientific computing and complex simulations.',
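For concreteness, here is the same four-state example written as a transition matrix, a small sketch assuming NumPy and 0-based indexing (the lecture itself works this out on the board):

```python
import numpy as np

# Transition matrix Q for the four-state example, using the row convention
# from the lecture: Q[i, j] is the probability of jumping from state i+1
# to state j+1 in one step.
Q = np.array([
    [1/3, 2/3, 0,   0  ],
    [1/2, 0,   1/2, 0  ],
    [0,   0,   0,   1  ],
    [1/2, 0,   1/4, 1/4],
])

# Each row sums to 1; the columns may or may not.
assert np.allclose(Q.sum(axis=1), 1.0)
```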
'chapters': [{'end': 1207.959, 'start': 1028.051, 'title': 'Understanding transition matrices', 'summary': 'Introduces the concept of the transition matrix, explaining its role in representing transition probabilities, ensuring each row sums to 1, and its significance in Markov chain analysis.', 'duration': 179.908, 'highlights': ['The transition matrix represents transition probabilities and is conventionally organized with each row summing to 1.', 'The transition matrix is highlighted as one of the two most important concepts in Markov chain analysis, alongside the stationary distribution.', 'Explanation of how to construct a valid Markov chain transition matrix by filling in non-negative numbers, ensuring each row sums up to 1, and converting it to a graphical representation.']}, {'end': 1634.277, 'start': 1209.48, 'title': 'Markov chains: history and applications', 'summary': 'Discusses the history of Markov chains and their two main applications: as a model for various systems, and for Markov chain Monte Carlo simulations, which have revolutionized scientific computing and enabled complex simulations that were previously impossible.', 'duration': 424.797, 'highlights': ['Markov chain Monte Carlo has revolutionized scientific computing, being widely used in various fields and enabling complex simulations that were previously impossible.', 'Markov chains are used as a model for various systems in the social sciences, physical sciences, and biology, either as a literal belief in a Markov chain or as an approximation for system evolution over time.', 'Markov introduced Markov chains to address a religious debate over free will in the early 1900s, aiming to find a process one step in complexity beyond IID (independent and identically distributed).']}],
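Since Markov chain Monte Carlo comes up above, here is MCMC in miniature: a hedged sketch of a Metropolis sampler, where the unnormalized `target` and the random-walk proposal are illustrative assumptions, not anything from the lecture. The point is that you build your own Markov chain whose stationary distribution is the distribution you want to sample from.

```python
import random

# Hypothetical unnormalized target on states 0..4 (sums to 10, so the
# normalized probabilities are 0.1, 0.2, 0.4, 0.2, 0.1).
target = [1.0, 2.0, 4.0, 2.0, 1.0]

def metropolis_step(state):
    # Symmetric random-walk proposal: move one step left or right.
    proposal = state + random.choice([-1, 1])
    if not 0 <= proposal < len(target):
        return state  # out of range: target is 0 there, so always reject
    # Accept with probability min(1, target(proposal) / target(state)).
    if random.random() < min(1.0, target[proposal] / target[state]):
        return proposal
    return state

# The fraction of time spent in each state converges to the normalized target.
state, counts = 2, [0] * len(target)
for _ in range(100_000):
    state = metropolis_step(state)
    counts[state] += 1
print([round(c / 100_000, 3) for c in counts])  # roughly [0.1, 0.2, 0.4, 0.2, 0.1]
```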
'duration': 606.226, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY1028050.jpg', 'highlights': ['The transition matrix represents transition probabilities and is conventionally organized with each row summing to 1.', 'The chapter emphasizes the importance of the transition matrix and its role in Markov chain analysis.', 'Markov chain Monte Carlo has revolutionized scientific computing, being widely used in various fields and enabling complex simulations that were previously impossible.', 'Markov chains are used as a model for various systems in the social sciences, physical sciences, and biology, either as a literal belief in a Markov chain or as an approximation for system evolution over time.']}, {'end': 1997.673, 'segs': [{'end': 1731.757, 'src': 'embed', 'start': 1634.738, 'weight': 0, 'content': [{'end': 1640.024, 'text': 'And then he proved a version of the law of large numbers that applies to Markov chains.', 'start': 1634.738, 'duration': 5.286}, {'end': 1647.533, 'text': "And then he said, well look, you don't actually need IID in order for the law of large numbers to hold.", 'start': 1640.064, 'duration': 7.469}, {'end': 1655.284, 'text': 'Okay, and for the chain that he originally looked at, which might be the first Markov chain,', 'start': 1648.78, 'duration': 6.504}, {'end': 1662.308, 'text': 'he took a Russian novel and, kind of just empirically, was looking at consonants and vowels.', 'start': 1655.284, 'duration': 7.024}, {'end': 1667.771, 'text': 'And he said, okay, there are two states, vowel and consonant.', 'start': 1662.528, 'duration': 5.243}, {'end': 1671.253, 'text': 'And a vowel is either followed by a vowel or a consonant.', 'start': 1667.911, 'duration': 3.342}, {'end': 1674.154, 'text': 'A consonant is either followed by a vowel or a consonant.', 'start': 1671.333, 'duration': 2.821}, {'end': 1680.778, 'text': 'And then he kind of empirically computed the probability that a vowel is followed by a vowel, a vowel is followed by a consonant, and so on.', 'start': 1674.194, 'duration': 6.584}, {'end': 1682.558, 'text': 'He was just studying that simple chain.', 'start': 1680.838, 'duration': 1.72}, {'end': 1685.94, 'text': 'So just like this picture, except even simpler, only two states.', 'start': 1682.899, 'duration': 3.041}, {'end': 1689.642, 'text': "And I drew one with four states, but conceptually, it's the same thing.", 'start': 1686.04, 'duration': 3.602}, {'end': 1692.243, 'text': "You're just bouncing around between states.", 'start': 1689.662, 'duration': 2.581}, {'end': 1696.893, 'text': "Okay, so that's the whole idea of a Markov chain.", 'start': 1693.55, 'duration': 3.343}, {'end': 1705.039, 'text': 'And if you keep these simple pictures in mind, it will make things much, much easier, even for more complicated chains.', 'start': 1697.073, 'duration': 7.966}, {'end': 1713.626, 'text': 'We could have a Markov chain that has 10 to the 100th states, but you can still keep pictures like this in mind, okay?', 'start': 1705.54, 'duration': 8.086}, {'end': 1718.13, 'text': "So we've defined the transition matrix;", 'start': 1714.628, 'duration': 3.502}, {'end': 1720.251, 'text': "it's just this matrix of transition probabilities.", 'start': 1718.13, 'duration': 2.121}, {'end': 1729.776, 'text': "But I wanna show you how we
actually use the transition matrix to get higher-order things, like multi-step probabilities.", 'start': 1721.311, 'duration': 8.465}, {'end': 1731.757, 'text': 'For example, we wanna answer the following question.', 'start': 1730.056, 'duration': 1.701}], 'summary': 'The transcript discusses Markov chains and their application to the law of large numbers, with examples and empirical computations.', 'duration': 97.019, 'max_score': 1634.738, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY1634738.jpg'}, {'end': 1960.632, 'src': 'heatmap', 'start': 1757.471, 'weight': 0.966, 'content': [{'end': 1766.69, 'text': "So suppose, let's say, that at time n, which we're considering as the present,", 'start': 1757.471, 'duration': 9.219}, {'end': 1780.561, 'text': "the chain, which is Xn at time n, has a distribution which I'm thinking of as S, a row vector.", 'start': 1766.69, 'duration': 13.871}, {'end': 1788.127, 'text': 'In other words, we can think of it as a 1 by M matrix.', 'start': 1784.104, 'duration': 4.023}, {'end': 1798.356, 'text': "And all I mean by this is we're taking the PMF, and I'm assuming that there are M states.", 'start': 1790.991, 'duration': 7.365}, {'end': 1801.498, 'text': 'So here, M equals 4 in this example.', 'start': 1798.856, 'duration': 2.642}, {'end': 1803.68, 'text': 'M is the number of ovals in this picture.', 'start': 1801.999, 'duration': 1.681}, {'end': 1812.125, 'text': "So we're assuming M states, and we're saying, okay, the distribution, I just mean the PMF, cuz it's discrete.", 'start': 1804.5, 'duration': 7.625}, {'end': 1817.789, 'text': 'But since there are only finitely many values, you may as well just list them all out, right?', 'start': 1812.626, 'duration': 5.163}, {'end': 1826.355, 'text': 'This thing is the PMF, kind of just listed out as a vector, right?', 'start': 1819.47, 'duration': 6.885}, {'end': 1830.978, 'text': "So it's the probability of being in state one, the probability of being in state two, and so on.", 'start': 1826.395, 'duration': 4.583}, {'end': 1833.94, 'text': 'Just list out the probabilities, okay?', 'start': 1830.978, 'duration': 2.962}, {'end': 1842.166, 'text': "So, in particular, we know that S is a vector, we're writing it as a row, and the entries are non-negative and add up to one.", 'start': 1833.96, 'duration': 8.206}, {'end': 1854.63, 'text': "Okay, so let's assume that that's how things stand at time n, and then we wanna know what happens at time n plus 1.", 'start': 1843.066, 'duration': 11.564}, {'end': 1861.392, 'text': "So we wanna know what's the probability that Xn plus 1 is some state j.", 'start': 1854.63, 'duration': 6.762}, {'end': 1869.455, 'text': "So we wanna know the PMF; this is the PMF at time n, we're just writing it as a vector because that's convenient.", 'start': 1861.392, 'duration': 8.063}, {'end': 1873.057, 'text': "That's the PMF at time n.", 'start': 1870.375, 'duration': 2.682}, {'end': 1876.16, 'text': 'Now we want the PMF at time n plus 1, one step in the future.', 'start': 1873.057, 'duration': 3.103}, {'end': 1884.946, 'text': "Okay, well, to do that, it's not hard to guess that what we should do is condition on the value now, right? Condition on what we wish that we knew.", 'start': 1876.68, 'duration': 8.266}, {'end': 1891.65, 'text': "It would be convenient if we knew where the thing is now; that would be very useful, right?
So we're just gonna condition.", 'start': 1884.966, 'duration': 6.684}, {'end': 1895.453, 'text': "So we're gonna sum over all states i", 'start': 1892.771, 'duration': 2.682}, {'end': 1900.936, 'text': 'of the probability that Xn plus 1 equals j', 'start': 1895.453, 'duration': 5.483}, {'end': 1905.498, 'text': 'given Xn equals i, times the probability that Xn equals i,', 'start': 1900.936, 'duration': 4.562}, {'end': 1910.401, 'text': 'right? Just the law of total probability.', 'start': 1905.498, 'duration': 4.903}, {'end': 1913.263, 'text': 'But these are things we know.', 'start': 1912.222, 'duration': 1.041}, {'end': 1923.032, 'text': 'Just by definition, this is qij, that is, the probability of going from i to j.', 'start': 1914.585, 'duration': 8.447}, {'end': 1924.513, 'text': "By definition, that's qij.", 'start': 1923.032, 'duration': 1.481}, {'end': 1935.421, 'text': "And this thing here is just si, right? Because that's just the probability of being at i, and that's just the i-th value in this vector; that's just si.", 'start': 1925.313, 'duration': 10.108}, {'end': 1938.364, 'text': "I'm gonna write it on the left because it looks a little bit nicer to me.", 'start': 1935.641, 'duration': 2.723}, {'end': 1949.723, 'text': "That's the answer as a sum, but it's easier to think of this in terms of a matrix.", 'start': 1944.318, 'duration': 5.405}, {'end': 1960.632, 'text': 'This is, in fact, the jth entry of S times Q.', 'start': 1951.084, 'duration': 9.548}], 'summary': 'The transcript discusses the probability of state transitions in a Markov chain, with M=4 states, and uses matrices and vectors for calculations.', 'duration': 203.161, 'max_score': 1757.471, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY1757471.jpg'}], 'start': 1634.738, 'title': 'Markov chains and the transition matrix', 'summary': 'Discusses a version of the law of large numbers applicable to Markov chains, highlighting the absence of the need for IID. It presents an empirical study of a simple Markov chain based on consonants and vowels from a Russian novel.',
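The law-of-total-probability step above, written as code: a minimal sketch assuming NumPy and the illustrative four-state matrix from earlier. The PMF one step in the future is the row vector S times the transition matrix Q.

```python
import numpy as np

Q = np.array([
    [1/3, 2/3, 0,   0  ],
    [1/2, 0,   1/2, 0  ],
    [0,   0,   0,   1  ],
    [1/2, 0,   1/4, 1/4],
])

s = np.array([1.0, 0.0, 0.0, 0.0])  # PMF of Xn: say the chain is in state 1
s_next = s @ Q                      # PMF of Xn+1: (s @ Q)[j] = sum_i s[i] * q_ij
print(s_next)                       # [0.3333... 0.6666... 0. 0.]
assert np.isclose(s_next.sum(), 1.0)  # still a valid PMF
```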
it presents an empirical study of a simple markov chain based on consonants and vowels from a russian novel.', 'duration': 47.82, 'highlights': ['The chapter presents a version of the law of large numbers applicable to Markov chains, revealing the absence of the need for IID.', 'The speaker conducted an empirical study of a simple Markov chain based on consonants and vowels from a Russian novel, computing the probability of transitions between vowels and consonants.']}, {'end': 1997.673, 'start': 1682.899, 'title': 'Understanding markov chains and transition matrix', 'summary': 'Explains the concept of markov chains, transition matrix, and the probability distribution of states, emphasizing the ease of visualization and usage of transition matrices, with a focus on the relationship between the present and future states through conditional probabilities.', 'duration': 314.774, 'highlights': ['The concept of Markov chains simplifies to two states, facilitating easier comprehension for more complex scenarios. The simplicity of Markov chains, even with more states, aids in understanding complex scenarios.', "The transition matrix is utilized to calculate probabilities of transitioning between states over multiple time steps, providing a foundational understanding of the chain's behavior. The transition matrix facilitates the calculation of probabilities for transitioning between states over various time steps, enabling a comprehensive understanding of the chain's behavior.", 'The relationship between the present and future states is explained through conditional probabilities, demonstrating the calculation as a sum and presenting the concept in terms of matrices. The explanation of the relationship between present and future states through conditional probabilities, demonstrated as a sum and presented in terms of matrices, enhances the understanding of the concept.']}], 'duration': 362.935, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY1634738.jpg', 'highlights': ['The chapter presents a version of the law of large numbers applicable to Markov chains, revealing the absence of the need for IID.', 'The speaker conducted an empirical study of a simple Markov chain based on consonants and vowels from a Russian novel, computing the probability of transitions between vowels and consonants.', "The transition matrix is utilized to calculate probabilities of transitioning between states over multiple time steps, providing a foundational understanding of the chain's behavior.", 'The explanation of the relationship between present and future states through conditional probabilities, demonstrated as a sum and presented in terms of matrices, enhances the understanding of the concept.', 'The concept of Markov chains simplifies to two states, facilitating easier comprehension for more complex scenarios. The simplicity of Markov chains, even with more states, aids in understanding complex scenarios.']}, {'end': 2796.645, 'segs': [{'end': 2386.8, 'src': 'heatmap', 'start': 2349.589, 'weight': 0, 'content': [{'end': 2363.211, 'text': "So this is saying it's at i now, what's the probability that it will be at j m steps in the future? 
That's just gonna be the ij entry of Q to the m.", 'start': 2349.589, 'duration': 13.622}, {'end': 2377.216, 'text': 'So it works out pretty nicely that powers of the transition matrix actually give us a lot of information.', 'start': 2369.372, 'duration': 7.844}, {'end': 2383.299, 'text': "We don't need a separate study of every different transition and every different number of steps.", 'start': 2377.276, 'duration': 6.023}, {'end': 2386.8, 'text': 'Just work with powers of the transition matrix.', 'start': 2384.279, 'duration': 2.521}], 'summary': 'Using powers of the transition matrix provides information about future probabilities.', 'duration': 37.211, 'max_score': 2349.589, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY2349589.jpg'}, {'end': 2438.641, 'src': 'embed', 'start': 2411.144, 'weight': 1, 'content': [{'end': 2417.246, 'text': 'The transition matrix and the stationary distribution are the two most important ideas other than the basic definition.', 'start': 2411.144, 'duration': 6.102}, {'end': 2427.357, 'text': "So what's a stationary distribution of a chain? It's also called a steady state, or long run, or equilibrium, that kind of thing.", 'start': 2419.494, 'duration': 7.863}, {'end': 2429.758, 'text': "It's supposed to capture the idea of the long run.", 'start': 2427.657, 'duration': 2.101}, {'end': 2430.798, 'text': "So one reason it's important is:", 'start': 2429.758, 'duration': 1.04}, {'end': 2438.641, 'text': 'if we run the chain for a long time, we wanna know, does it converge to some limiting distribution?', 'start': 2430.798, 'duration': 7.843}], 'summary': 'The transition matrix and stationary distribution are key concepts in understanding convergence of chains.', 'duration': 27.497, 'max_score': 2411.144, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY2411144.jpg'}, {'end': 2727.968, 'src': 'embed', 'start': 2679.787, 'weight': 2, 'content': [{'end': 2685.832, 'text': 'So those are basic questions about stationary distributions.', 'start': 2679.787, 'duration': 6.045}, {'end': 2699.293, 'text': 'Okay, and so under some very mild conditions, the answer will be yes to the first three of these questions.', 'start': 2687.009, 'duration': 12.284}, {'end': 2708.456, 'text': "There are a few technical conditions that we'll get into, but under some mild technical conditions it will exist,", 'start': 2700.513, 'duration': 7.943}, {'end': 2709.396, 'text': 'it will be unique,', 'start': 2708.456, 'duration': 0.94}, {'end': 2714.058, 'text': 'and the chain will converge to the stationary distribution, so it does capture the long-run behavior.', 'start': 2709.396, 'duration': 4.662}, {'end': 2724.267, 'text': 'As for this last question, though, how to compute it: I mean, in principle, you can just use a computer.', 'start': 2715.063, 'duration': 9.204}, {'end': 2727.968, 'text': 'Or, well, if you had enough time, you could do it by hand in principle.', 'start': 2724.567, 'duration': 3.401}], 'summary': 'Under mild technical conditions, the chain will converge to a unique stationary distribution capturing long-run behavior.', 'duration': 48.181, 'max_score': 2679.787, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY2679787.jpg'}], 'start': 1997.693, 'title': 'Markov chains and stationary distributions', 'summary': 'Covers the transition matrix and stationary distributions:
existence, uniqueness, convergence, and computation in Markov chains, emphasizing their importance for understanding long-term behavior, with assurance under mild conditions.', 'chapters': [{'end': 2533.748, 'start': 1997.693, 'title': 'Markov chain: transition matrix and stationary distribution', 'summary': 'Explains the concept of the transition matrix and its powers to determine the distribution at various time steps, and introduces the idea of a stationary distribution, which is crucial for understanding the long-term behavior of a Markov chain.', 'duration': 536.055, 'highlights': ['The powers of the transition matrix Q give the distribution at various time steps, simplifying prediction and calculation.', 'The concept of a stationary distribution, denoted as S, is crucial for comprehending the long-term behavior of a Markov chain, particularly in determining if the chain converges to a limiting distribution after running for a prolonged period.', 'The stationary distribution S is defined as a probability vector that satisfies the equation S times Q equals S, resembling an eigenvalue-eigenvector equation.']}, {'end': 2796.645, 'start': 2533.929, 'title': 'Stationary distributions in Markov chains', 'summary': 'Discusses stationary distributions in Markov chains, addressing the existence, uniqueness, convergence, and computation of the stationary distribution, with the assurance that under mild conditions it will exist, be unique, and the chain will converge to it in the long run.', 'duration': 262.716, 'highlights': ['The stationary distribution in a Markov chain remains unchanged over time, representing the long-run behavior of the chain.', 'The key questions regarding stationary distributions include their existence, uniqueness, and convergence, as well as the efficient computation of the distribution.', 'Under mild technical conditions, the stationary distribution exists, is unique, and the chain converges to it, capturing the long-run behavior.', 'Efficient computation of the stationary distribution can be challenging in the general case, but certain examples of Markov chains allow for quick and easy computation without using matrices.']}], 'duration': 798.952, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/8AJPs3gvNlY/pics/8AJPs3gvNlY1997693.jpg', 'highlights': ['The powers of the transition matrix Q give the distribution at various time steps, simplifying prediction and calculation.', 'Introduction of the concept of a stationary distribution and its importance in understanding the long-term behavior of a Markov chain.', 'The key questions regarding stationary distributions include their existence, uniqueness, and convergence, as well as the efficient computation of the distribution.', 'Under mild technical conditions, the stationary distribution exists, is unique, and the chain converges to it, capturing the long-run behavior.']}], 'highlights': ['The final exam will be on December 15th, with a 66-page packet of review and past 110 finals. Important information about the final exam date and its content, providing a clear overview of what to expect.',
'Markov chains were introduced by Markov over 100 years ago as an example of a stochastic process, with significant applications and implications for studying sequences of random variables.', 'The concept of Markov chains is a compromise between independent and identically distributed (IID) random variables and completely dependent variables, going one step beyond IID.', 'The transition matrix represents transition probabilities and is conventionally organized with each row summing to 1.', 'Introduction of the concept of a stationary distribution and its importance in understanding the long-term behavior of a Markov chain.']}
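To close the loop on the two key objects, a hedged sketch (again assuming NumPy and the illustrative four-state Q): powers of Q give multi-step transition probabilities, and a stationary distribution solving S times Q equals S can be computed as a left eigenvector of Q with eigenvalue 1, normalized to sum to 1.

```python
import numpy as np

Q = np.array([
    [1/3, 2/3, 0,   0  ],
    [1/2, 0,   1/2, 0  ],
    [0,   0,   0,   1  ],
    [1/2, 0,   1/4, 1/4],
])

# m-step transition probabilities: P(at j, m steps after being at i) = (Q^m)[i, j].
print(np.linalg.matrix_power(Q, 2))  # two-step probabilities

# A stationary distribution solves s Q = s, i.e. s is a left eigenvector of Q
# (an eigenvector of Q.T) with eigenvalue 1, normalized into a probability vector.
vals, vecs = np.linalg.eig(Q.T)
s = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
s = s / s.sum()
assert np.allclose(s @ Q, s)  # running the chain one step leaves s unchanged

# This particular Q is irreducible and aperiodic, so, per the 'mild conditions'
# mentioned above, every row of Q^m converges to s as m grows.
print(np.linalg.matrix_power(Q, 50))  # each row is approximately s
print(s)
```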