title
Build 12 Data Science Apps with Python and Streamlit - Full Course

description
Learn how to build interactive and data-driven web apps in Python using the Streamlit library.

✏️ Course developed by Chanin Nantasenamat (aka Data Professor). Check out his YouTube channel for more data science tutorials: http://youtube.com/dataprofessor
🔗 And Medium blog posts for more data science tutorials: https://data-professor.medium.com/

⭐️ Course Contents ⭐️
⌨️ (0:00) Introduction
⌨️ (2:54) 1. Simple Stock Price
⌨️ (13:24) 2. Simple Bioinformatics DNA Count
⌨️ (29:44) 3. EDA Basketball
⌨️ (50:39) 4. EDA Football
⌨️ (1:00:48) 5. EDA SP500 Stock Price
⌨️ (1:24:03) 6. EDA Cryptocurrency
⌨️ (1:50:47) 7. Classification Iris
⌨️ (1:58:58) 8. Classification Penguins
⌨️ (2:16:08) 9. Regression Boston Housing
⌨️ (2:27:53) 10. Regression Bioinformatics Solubility
⌨️ (2:54:27) 11. Deploy to Heroku
⌨️ (3:04:37) 12. Deploy to Streamlit Sharing

⭐️ Code ⭐️
💻 1. Simple Stock Price: https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_1_simple_stock_price
💻 2. Simple Bioinformatics DNA Count: https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_2_simple_bioinformatics_dna
💻 3. EDA Basketball: https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_3_eda_basketball
💻 4. EDA Football: https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_4_eda_football
💻 5. EDA SP500 Stock Price: https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_5_eda_sp500_stock
💻 6. EDA Cryptocurrency: https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_6_eda_cryptocurrency
💻 7. Classification Iris: https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_7_classificatio_iris
💻 8. Classification Penguins: https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_8_classification_penguins
💻 9. Regression Boston Housing: https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_9_regression_boston_housing
💻 10. Regression Bioinformatics Solubility: https://github.com/dataprofessor/streamlit_freecodecamp/tree/main/app_10_regression_bioinformatics_solubility
💻 11. Deploy to Heroku: https://github.com/dataprofessor/penguins-heroku

⭐️ More ways to connect with Chanin Nantasenamat ⭐️
✅ Website: http://dataprofessor.org/
✅ Newsletter: http://newsletter.dataprofessor.org
✅ Twitter: https://twitter.com/thedataprof/
✅ Facebook: http://facebook.com/dataprofessor/
✅ Instagram: https://www.instagram.com/data.professor/
✅ LinkedIn: https://www.linkedin.com/in/chanin-nantasenamat/
✅ GitHub: https://github.com/dataprofessor/

--
Learn to code for free and get a developer job: https://www.freecodecamp.org
Read hundreds of articles on programming: https://freecodecamp.org/news
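The second app above (Simple Bioinformatics DNA Count) takes a FASTA-style text input, discards the name line, joins the remaining lines into one sequence, and counts each nucleotide. Below is a minimal, UI-free sketch of that core logic; the function name is illustrative, and the course version reads its input from a Streamlit text area rather than a string argument.

```python
from collections import Counter


def nucleotide_composition(fasta_text: str) -> dict:
    """Count A/T/G/C in a FASTA-style input string.

    The first line (the '>' sequence name) is discarded; the remaining
    lines are joined into one continuous sequence before counting.
    """
    lines = fasta_text.splitlines()
    sequence = "".join(lines[1:]).upper()  # drop the sequence name line
    counts = Counter(sequence)
    return {base: counts.get(base, 0) for base in "ATGC"}


if __name__ == "__main__":
    example = ">DNA Query\nGATTACA\nATG"
    print(nucleotide_composition(example))
```

In the Streamlit app this dictionary is then displayed as text, as a DataFrame, and as a bar chart.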

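The EDA apps in the course (basketball, football, S&P 500) let the user download the filtered stats table as a CSV file by base64-encoding the text and embedding it in an HTML link. Here is a stdlib-only sketch of that pattern, assuming plain CSV text rather than a pandas DataFrame; the helper and file names are illustrative.

```python
import base64


def csv_download_link(csv_text: str, filename: str = "playerstats.csv") -> str:
    """Base64-encode CSV text and wrap it in an HTML data-URI download link.

    A Streamlit app can render the returned string with
    st.markdown(link, unsafe_allow_html=True).
    """
    b64 = base64.b64encode(csv_text.encode()).decode()
    return (
        f'<a href="data:file/csv;base64,{b64}" '
        f'download="{filename}">Download CSV</a>'
    )


if __name__ == "__main__":
    print(csv_download_link("player,pts\nJordan,30.1\n"))
```

The browser decodes the base64 payload back into the original CSV when the link is clicked, so no file ever needs to be written on the server.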
detail
{'title': 'Build 12 Data Science Apps with Python and Streamlit - Full Course', 'heatmap': [{'end': 11285.537, 'start': 11161.192, 'weight': 1}], 'summary': 'Learn to build 12 data apps in python using streamlit, create interactive web apps, develop classification and regression models, and deploy to heroku and streamlit sharing platforms. the course also includes building sports stats and cryptocurrency web apps, web scraping, data visualization, and development of bioinformatics and molecular solubility prediction web apps.', 'chapters': [{'end': 203.65, 'segs': [{'end': 184.735, 'src': 'embed', 'start': 136.285, 'weight': 0, 'content': [{'end': 144.849, 'text': "And finally, we'll also be showing you how you can deploy your application to the Heroku platform and also to the Streamlit sharing platform.", 'start': 136.285, 'duration': 8.564}, {'end': 154.774, 'text': 'And so for more data science, machine learning and bioinformatics projects, please make sure to subscribe to my YouTube channel, The Data Professor,', 'start': 145.089, 'duration': 9.685}, {'end': 162.117, 'text': 'and also follow me on Medium, where I regularly publish blog posts on data science and also machine learning.', 'start': 154.774, 'duration': 7.343}, {'end': 166.779, 'text': 'So links to all of these are provided in the description of this video.', 'start': 162.337, 'duration': 4.442}, {'end': 168.98, 'text': 'Also, grab yourself a cup of coffee.', 'start': 167.14, 'duration': 1.84}, {'end': 171.902, 'text': "And without further ado, let's get started.", 'start': 169.461, 'duration': 2.441}, {'end': 184.735, 'text': 'have you ever wanted to build a data-driven web application for your data science projects?', 'start': 179.011, 'duration': 5.724}], 'summary': 'Learn to deploy applications on heroku and streamlit. 
subscribe to the data professor for data science and machine learning content.', 'duration': 48.45, 'max_score': 136.285, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM136285.jpg'}], 'start': 0.589, 'title': 'Building data apps and models', 'summary': "Covers a beginner's course on building 12 data apps in python using streamlit, featuring interactive web applications and utilizing python libraries. it also includes building eight different data applications, developing classification and regression models, and building a data-driven web app with deployment options to heroku and streamlit sharing platforms.", 'chapters': [{'end': 94.388, 'start': 0.589, 'title': 'Build 12 data apps in python', 'summary': "Introduces a beginner's course on building 12 data apps in python using streamlit, taught by chanin nantasenamat, featuring interactive web applications for preprocessing datasets, visualizing data, and making predictions from machine learning, utilizing python libraries such as numpy, scipy, matplotlib, and seaborn.", 'duration': 93.799, 'highlights': ['Chanin Nantasenamat, also known as the Data Professor, will teach the course on building 12 interactive data-driven web applications in Python using the Streamlit library.', 'The course will enable participants to utilize Python libraries like NumPy, SciPy, Matplotlib, Seaborn within the Streamlit environment to create interactive web applications for data preprocessing, visualization, and machine learning predictions.']}, {'end': 136.045, 'start': 94.588, 'title': 'Building data applications and models', 'summary': 'Covers building eight different data applications, including stock price, bioinformatics, and eda applications, and developing classification and regression models for various datasets.', 'duration': 41.457, 'highlights': ['Building eight different data applications', 'Developing classification and regression models']}, {'end': 203.65,
'start': 136.285, 'title': 'Build data-driven web app with few lines of code', 'summary': 'Highlights how to build a data-driven web application in just a few lines of code, and also mentions deployment to heroku and streamlit sharing platforms for data science, machine learning, and bioinformatics projects.', 'duration': 67.365, 'highlights': ['The chapter demonstrates how to build a data-driven web application in just a few lines of code.', 'The transcript mentions deployment to Heroku and Streamlit sharing platforms for data science, machine learning, and bioinformatics projects.', 'The speaker encourages the audience to subscribe to their YouTube channel and follow them on Medium for more data science and machine learning content.', 'The speaker suggests the audience grab a cup of coffee before getting started.', 'The speaker mentions the potential intimidation of coding in Django or in Flask for building web applications.']}], 'duration': 203.061, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM589.jpg', 'highlights': ['The course will enable participants to utilize Python libraries like NumPy, SciPy, Matplotlib, Seaborn within the Streamlit environment to create interactive web applications for data preprocessing, visualization, and machine learning predictions.', "Covers a beginner's course on building 12 data apps in python using streamlit, featuring interactive web applications and utilizing python libraries.", 'Building eight different data applications', 'Developing classification and regression models', 'The chapter demonstrates how to build a data-driven web application in just a few lines of code.', 'The transcript mentions deployment to Heroku and Streamlit sharing platforms for data science, machine learning, and bioinformatics projects.']}, {'end': 1385.827, 'segs': [{'end': 272.127, 'src': 'embed', 'start': 226.758, 'weight': 0, 'content': [{'end': 230.359, 'text': 'data-driven web application
for your data science project.', 'start': 226.758, 'duration': 3.601}, {'end': 238.504, 'text': 'and so the first thing that you want to do is head over to the streamlit website by typing in streamlit.io,', 'start': 230.959, 'duration': 7.545}, {'end': 242.586, 'text': "and so i'm going to provide you the link in the description of this video.", 'start': 238.504, 'duration': 4.082}, {'end': 250.411, 'text': 'so this is the website of streamlit and, as you will see, it says that it is the fastest way to build a data application,', 'start': 242.586, 'duration': 7.825}, {'end': 260.858, 'text': 'And so here you can see that you could build a OpenCV web application from within Streamlit.', 'start': 254.013, 'duration': 6.845}, {'end': 264.121, 'text': 'And you could add a lot of interactive elements as well.', 'start': 260.958, 'duration': 3.163}, {'end': 268.584, 'text': 'So in order to get started, you want to install Streamlit.', 'start': 266.062, 'duration': 2.522}, {'end': 272.127, 'text': 'And so you could do that by typing in pip install streamlit.', 'start': 268.704, 'duration': 3.423}], 'summary': 'Streamlit.io is the fastest way to build a data application with interactive elements. 
install it via pip install streamlit.', 'duration': 45.369, 'max_score': 226.758, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM226758.jpg'}, {'end': 871.813, 'src': 'embed', 'start': 841.983, 'weight': 5, 'content': [{'end': 843.985, 'text': 'And the environment is called the DP.', 'start': 841.983, 'duration': 2.002}, {'end': 846.567, 'text': 'DP standing for Data Professor.', 'start': 844.345, 'duration': 2.222}, {'end': 852.153, 'text': 'And so let me go to the folder where I have my Streamlit web application files.', 'start': 847.448, 'duration': 4.705}, {'end': 857.738, 'text': 'CD Desktop, CD Streamlit, CD DNA.', 'start': 853.994, 'duration': 3.744}, {'end': 863.367, 'text': 'Okay, so we have a total of three files here.', 'start': 859.224, 'duration': 4.143}, {'end': 871.813, 'text': "And so the aromatase.fasta is a example data file, but actually we're not using it to build the web application.", 'start': 863.627, 'duration': 8.186}], 'summary': 'Setting up a streamlit web app with 3 files, excluding aromatase.fasta', 'duration': 29.83, 'max_score': 841.983, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM841983.jpg'}, {'end': 963.855, 'src': 'embed', 'start': 931.309, 'weight': 8, 'content': [{'end': 933.17, 'text': 'And let me show it side by side.', 'start': 931.309, 'duration': 1.861}, {'end': 942.936, 'text': 'Okay Let me also increase the font size here.', 'start': 933.19, 'duration': 9.746}, {'end': 946.037, 'text': 'Alright, there you go.', 'start': 942.956, 'duration': 3.081}, {'end': 947.638, 'text': "So it's bigger for you guys.", 'start': 946.178, 'duration': 1.46}, {'end': 949.599, 'text': 'And right here too.', 'start': 948.939, 'duration': 0.66}, {'end': 952.521, 'text': 'Okay, there you go.', 'start': 951.621, 'duration': 0.9}, {'end': 953.642, 'text': "So it's a lot bigger now.", 'start': 952.621, 'duration': 1.021}, {'end': 
957.412, 'text': "Okay, so let's take a look at the code here.", 'start': 955.851, 'duration': 1.561}, {'end': 963.855, 'text': "So the first couple of lines here, we're going to be importing the necessary libraries for this web application.", 'start': 957.732, 'duration': 6.123}], 'summary': 'Demonstrating code with increased font size for better visibility.', 'duration': 32.546, 'max_score': 931.309, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM931309.jpg'}, {'end': 1013.409, 'src': 'embed', 'start': 989.47, 'weight': 2, 'content': [{'end': 996.356, 'text': "The block of code here from line number 10 to 24, we're essentially going to show the DNA logo.", 'start': 989.47, 'duration': 6.886}, {'end': 1004.042, 'text': "We're going to create a variable called image, which contains the name of the logo, and then we're going to display the image here,", 'start': 996.616, 'duration': 7.426}, {'end': 1009.006, 'text': "and then we're going to display it by allowing the image to expand to the column width here.", 'start': 1004.042, 'duration': 4.964}, {'end': 1013.409, 'text': "so it will expand to the column width and then we're going to print out the header here.", 'start': 1009.346, 'duration': 4.063}], 'summary': 'Display dna logo from line 10 to 24, allowing image to expand to column width.', 'duration': 23.939, 'max_score': 989.47, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM989470.jpg'}, {'end': 1154.986, 'src': 'embed', 'start': 1123.969, 'weight': 3, 'content': [{'end': 1127.592, 'text': "And then for the sequence variable, we're going to split the lines.", 'start': 1123.969, 'duration': 3.623}, {'end': 1131.061, 'text': 'So each of the line here will be split.', 'start': 1128.74, 'duration': 2.321}, {'end': 1136.502, 'text': 'By splitting it means that it will create a list of each of the line.', 'start': 1132.021, 'duration': 4.481},
{'end': 1143.643, 'text': 'So the first member of the list will be DNA query and then the second line will be the second member of the list.', 'start': 1136.802, 'duration': 6.841}, {'end': 1147.824, 'text': 'Third line will be the third member and the fourth line will be the fourth member of the list.', 'start': 1143.863, 'duration': 3.961}, {'end': 1154.986, 'text': 'And to provide this even clearer, let me show you.', 'start': 1150.325, 'duration': 4.661}], 'summary': 'Splitting the sequence variable creates a list of dna queries, with each line as a separate member.', 'duration': 31.017, 'max_score': 1123.969, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM1123969.jpg'}, {'end': 1280.59, 'src': 'embed', 'start': 1253.455, 'weight': 7, 'content': [{'end': 1261.997, 'text': "so here we're gonna slice or select in the bracket here index number one onwards, meaning that index number one here onwards until the end.", 'start': 1253.455, 'duration': 8.542}, {'end': 1265.079, 'text': 'And it has a total of three additional lines.', 'start': 1262.637, 'duration': 2.442}, {'end': 1267.32, 'text': 'Index one, index two, index three.', 'start': 1265.379, 'duration': 1.941}, {'end': 1270.002, 'text': 'And then it will be assigned to the same name, sequence.', 'start': 1267.5, 'duration': 2.502}, {'end': 1271.643, 'text': "Okay, so let's show it here again.", 'start': 1270.022, 'duration': 1.621}, {'end': 1278.809, 'text': 'As you can see, the name is now gone.', 'start': 1276.027, 'duration': 2.782}, {'end': 1280.59, 'text': 'And now we have only the sequence.', 'start': 1279.069, 'duration': 1.521}], 'summary': 'Slicing from index one onwards, resulting in a sequence with three additional lines.', 'duration': 27.135, 'max_score': 1253.455, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM1253455.jpg'}], 'start': 204.19, 'title': 'Building data-driven web apps 
with streamlit', 'summary': 'Introduces streamlit, a python library for creating simple, data-driven web applications, enabling users to build interactive web apps with just a few lines of code, install, and deploy, with examples and a showcase of web applications built using streamlit.', 'chapters': [{'end': 605.196, 'start': 204.19, 'title': 'Building data-driven web apps with streamlit', 'summary': 'Introduces streamlit, a python library for creating simple, data-driven web applications, allowing users to build interactive web apps with just a few lines of code, install, and deploy, with examples and a showcase of web applications built using streamlit.', 'duration': 401.006, 'highlights': ["The fastest way to build a data application is using Streamlit, a Python library recommended by a subscriber, enabling the development of simple, data-driven web applications for data science projects, installable via 'pip install streamlit' and deployable using Git, showcased by a gallery of web applications built using Streamlit.", 'Streamlit allows the creation of a simple web application in just a few lines of code, with the capability to add interactive elements such as a slider widget for selecting numbers, as well as easy deployment using Git, demonstrating a minimal framework for building powerful web applications.', 'A simple web application can be built using approximately 20 lines of code, with the ability to customize the application in real time and serve the updated version instantly, as shown by an example of a stock price application utilizing Streamlit to display stock closing prices and volumes.']}, {'end': 1123.529, 'start': 605.577, 'title': 'Creating markdown style text and interactive web application', 'summary': 'Demonstrates creating markdown-style text with different heading sizes and customization, as well as building an interactive bioinformatics web application using python with streamlit, including importing necessary libraries, displaying a 
logo, creating a text box for entering dna sequence, and capturing user input for nucleotide composition analysis.', 'duration': 517.952, 'highlights': ['Creating different heading sizes with Markdown-style text', 'Customizing text with bold, italic, and links in Markdown-style', 'Building an interactive bioinformatics web application with Python and Streamlit']}, {'end': 1385.827, 'start': 1123.969, 'title': 'Splitting and joining dna sequences', 'summary': 'Demonstrates the process of splitting lines to create a list and discarding the sequence name to compute dna composition, resulting in a final sequence with three members, then joining the three lines to form a continuous dna sequence without spaces.', 'duration': 261.858, 'highlights': ['The process of splitting lines to create a list and discarding the sequence name to compute DNA composition results in a final sequence with three members.', 'Joining the three lines to form a continuous DNA sequence without spaces results in a single line of sequence.']}], 'duration': 1181.637, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM204190.jpg', 'highlights': ['Streamlit enables building simple, data-driven web apps with minimal code and easy deployment', 'Creating a web app with Streamlit requires approximately 20 lines of code', 'Streamlit allows customization of web apps in real time and instant updates', "Streamlit is installable via 'pip install streamlit' and deployable using Git", 'Streamlit showcases a gallery of web applications built using the library', 'Streamlit allows adding interactive elements like a slider widget for selecting numbers', 'Building an interactive bioinformatics web application with Python and Streamlit', 'Customizing text with bold, italic, and links in Markdown-style', 'The process of splitting lines to create a list and discarding the sequence name to compute DNA composition results in a final sequence with three members', 
'Joining the three lines to form a continuous DNA sequence without spaces results in a single line of sequence']}, {'end': 2266.022, 'segs': [{'end': 1797.14, 'src': 'embed', 'start': 1761.869, 'weight': 4, 'content': [{'end': 1766.214, 'text': 'okay. and so there you have it a very simple bioinformatics web application.', 'start': 1761.869, 'duration': 4.345}, {'end': 1774.743, 'text': 'feel free to modify this to be another web application in bioinformatics or for any industry as well,', 'start': 1766.214, 'duration': 8.529}, {'end': 1782.132, 'text': 'because the code is quite applicable and you could use it as a template for building your own personal data science project.', 'start': 1774.743, 'duration': 7.389}, {'end': 1797.14, 'text': 'Okay so, this video is the fifth part of the Streamlit tutorial series where I go into detail, step by step,', 'start': 1789.116, 'duration': 8.024}], 'summary': 'Intro to bioinformatics web app with potential for customization and reuse.', 'duration': 35.271, 'max_score': 1761.869, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM1761869.jpg'}, {'end': 1979.387, 'src': 'embed', 'start': 1938.963, 'weight': 1, 'content': [{'end': 1945.526, 'text': "And so before we take a deep dive into the code, let's try to run this code and let's have a look at the web application.", 'start': 1938.963, 'duration': 6.563}, {'end': 1950.589, 'text': "So let's close the file for a moment and let's open up a command prompt.", 'start': 1945.726, 'duration': 4.863}, {'end': 1957.632, 'text': 'So if you are on a Windows type in CMD, if you are on a Mac or a Ubuntu, you want to open up your terminal.', 'start': 1951.069, 'duration': 6.563}, {'end': 1964.913, 'text': "And so this is only going to work on my computer because I'm going to type in conda activate DP.", 'start': 1958.988, 'duration': 5.925}, {'end': 1969.718, 'text': 'DP being the name of the conda environment that is installed on 
my computer.', 'start': 1965.214, 'duration': 4.504}, {'end': 1975.983, 'text': 'So if you have a conda environment installed on your computer, you can activate that particular conda environment.', 'start': 1969.858, 'duration': 6.125}, {'end': 1979.387, 'text': 'So you could type in conda activate and then the name of your environment.', 'start': 1976.164, 'duration': 3.223}], 'summary': 'Demonstrating code execution and activating conda environment.', 'duration': 40.424, 'max_score': 1938.963, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM1938963.jpg'}, {'end': 2207.62, 'src': 'embed', 'start': 2144.787, 'weight': 0, 'content': [{'end': 2153.791, 'text': 'okay. so it seems to work now, and so this is our nba player stats explorer web application that we are going to build today.', 'start': 2144.787, 'duration': 9.004}, {'end': 2157.133, 'text': "so let's have a look at the general characteristic of this web app.", 'start': 2153.791, 'duration': 3.342}, {'end': 2162.635, 'text': "so you're going to see that on the sidebar on the left, we're going to have three input parameters.", 'start': 2157.133, 'duration': 5.502}, {'end': 2169.678, 'text': 'so the first one is the year of the data that you want to have a look at, and then the second parameter will be the team.', 'start': 2162.635, 'duration': 7.043}, {'end': 2176.761, 'text': 'and so notice here that you could select multiple teams, and by default it will select all of the teams for you,', 'start': 2169.678, 'duration': 7.083}, {'end': 2182.443, 'text': "and then you could take on the teams that you don't want, and then the results will be updated on the fly.", 'start': 2176.761, 'duration': 5.682}, {'end': 2194.127, 'text': 'so notice that whenever i click on the x mark here, you will see that the number of rows will reduce right from 683 to 665..', 'start': 2182.443, 'duration': 11.684}, {'end': 2198.311, 'text': 'And so the third parameter of the 
input is the position of the players.', 'start': 2194.127, 'duration': 4.184}, {'end': 2204.917, 'text': "So we're going to have the five traditional positions here, center, power forward, small forward, point guard, and shooting guard.", 'start': 2198.531, 'duration': 6.386}, {'end': 2207.62, 'text': 'And so to the right here, which is the main panel.', 'start': 2205.037, 'duration': 2.583}], 'summary': 'Building an nba player stats explorer web app with input parameters for year, team, and position, reducing rows from 683 to 665.', 'duration': 62.833, 'max_score': 2144.787, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM2144787.jpg'}], 'start': 1385.967, 'title': 'Dna sequence preprocessing and building nba player stats explorer', 'summary': 'Discusses the preprocessing of a dna sequence, including addition of space, new lines, and dots, with specific reference to line number 46. it also details the process of building an nba player stats explorer web application using python and streamlit, covering data scraping, filtering, display, and exploratory data analysis within approximately 60 lines of code.', 'chapters': [{'end': 1454.287, 'start': 1385.967, 'title': 'Dna sequence preprocessing', 'summary': 'Discusses the preprocessing of a dna sequence, highlighting the addition of space, new lines, and dots, and the readiness of the sequence for computation, with a specific reference to line number 46 as the comment for the input dna sequence.', 'duration': 68.32, 'highlights': ['The DNA sequence is preprocessed and ready for computation.', 'The addition of space, new lines, and dots is demonstrated.', 'Specific reference to line number 46 as the comment for the input DNA sequence.']}, {'end': 2266.022, 'start': 1454.287, 'title': 'Building nba player stats explorer', 'summary': 'Details the process of building an nba player stats explorer web application using python and streamlit, covering the creation of 
input parameters for year, team, and player position, data scraping, filtering, display, and exploratory data analysis, all within approximately 60 lines of code.', 'duration': 811.735, 'highlights': ['The chapter details the process of building an NBA Player Stats Explorer web application using Python and Streamlit.', 'The input parameters for the web app include year, team, and player position, with the ability to select multiple teams and see the results updated in real-time.', 'The data scraping, filtering, and display are done using pandas, with the resulting data frame available for export as a CSV file.', 'The web app also features an exploratory data analysis, including the visualization of an intercorrelation heat map of the input parameters.']}], 'duration': 880.055, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM1385967.jpg', 'highlights': ['The DNA sequence is preprocessed and ready for computation.', 'The chapter details the process of building an NBA Player Stats Explorer web application using Python and Streamlit.', 'The input parameters for the web app include year, team, and player position, with the ability to select multiple teams and see the results updated in real-time.', 'The addition of space, new lines, and dots is demonstrated.', 'The data scraping, filtering, and display are done using pandas, with the resulting data frame available for export as a CSV file.', 'The web app also features an exploratory data analysis, including the visualization of an intercorrelation heat map of the input parameters.', 'Specific reference to line number 46 as the comment for the input DNA sequence.']}, {'end': 3868.339, 'segs': [{'end': 2536.969, 'src': 'embed', 'start': 2513.34, 'weight': 1, 'content': [{'end': 2520.704, 'text': "And then we're going to use it by dropping some of the redundant header, which is present throughout the table data.", 'start': 2513.34, 'duration': 7.364}, {'end': 
2529.387, 'text': 'And then, after removing those, we will perform some simple deletion of some index column here called rk,', 'start': 2520.904, 'duration': 8.483}, {'end': 2533.428, 'text': 'because it will be redundant with the index provided normally by pandas.', 'start': 2529.387, 'duration': 4.041}, {'end': 2536.969, 'text': 'And then finally, we will display the preprocessed data.', 'start': 2533.588, 'duration': 3.381}], 'summary': 'Preprocess table data by removing redundant headers and index column, then display the processed data.', 'duration': 23.629, 'max_score': 2513.34, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM2513340.jpg'}, {'end': 3300.694, 'src': 'embed', 'start': 3273.096, 'weight': 4, 'content': [{'end': 3279.084, 'text': "So if you click on the inter-correlation heat map, you're going to be seeing the inter-correlation of the variables here.", 'start': 3273.096, 'duration': 5.988}, {'end': 3284.662, 'text': "Okay And so let's take a line by line explanation of the code here.", 'start': 3281.28, 'duration': 3.382}, {'end': 3290.887, 'text': 'So the first six lines of code will be importing the necessary libraries that we are going to be using today.', 'start': 3284.943, 'duration': 5.944}, {'end': 3296.231, 'text': 'And so the first one is the streamlit because it allows us to build this essentially this web app.', 'start': 3291.207, 'duration': 5.024}, {'end': 3300.694, 'text': "And then we're going to import pandas as PD because of the data frame that we're using here.", 'start': 3296.411, 'duration': 4.283}], 'summary': 'Explanation of code: 6 lines import necessary libraries for building a web app and using data frame.', 'duration': 27.598, 'max_score': 3273.096, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM3273096.jpg'}, {'end': 3565.584, 'src': 'embed', 'start': 3542.052, 'weight': 3, 'content': [{'end': 3551.337,
'text': "here we're going to be filtering the data based on the input of the sidebar of the team selection and the position selection.", 'start': 3542.052, 'duration': 9.285}, {'end': 3561.742, 'text': "So line number 41 will be essentially filtering the data frame that we're seeing right here based on our input selection, the team and the position.", 'start': 3551.697, 'duration': 10.045}, {'end': 3565.584, 'text': 'lines number 43 through 45.', 'start': 3563.081, 'duration': 2.503}], 'summary': 'Filtering data based on sidebar input for team and position.', 'duration': 23.532, 'max_score': 3542.052, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM3542052.jpg'}, {'end': 3737.251, 'src': 'embed', 'start': 3681.155, 'weight': 0, 'content': [{'end': 3683.096, 'text': 'and the files are on the desktop.', 'start': 3681.155, 'duration': 1.941}, {'end': 3688.5, 'text': "I'm going to provide you the links to the files described in this tutorial.", 'start': 3684.097, 'duration': 4.403}, {'end': 3690.621, 'text': 'You want to check out the video description.', 'start': 3689.08, 'duration': 1.541}, {'end': 3701.167, 'text': "SP500 You're going to see that the only file that we're going to use is the SP500-app.py.", 'start': 3693.703, 'duration': 7.464}, {'end': 3703.029, 'text': "Let's have a look at that.", 'start': 3701.227, 'duration': 1.802}, {'end': 3724.477, 'text': "And before doing that, let's also open up the web app.", 'start': 3721.494, 'duration': 2.983}, {'end': 3731.625, 'text': 'Streamlit run sp500app.py.', 'start': 3726.6, 'duration': 5.025}, {'end': 3737.251, 'text': 'All right, so here we are.', 'start': 3735.349, 'duration': 1.902}], 'summary': 'Tutorial provides links to files for sp500-app.py and demonstrates opening web app using streamlit.', 'duration': 56.096, 'max_score': 3681.155, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM3681155.jpg'}], 'start': 2266.122, 'title': 'Creating sports stats web apps', 'summary': 'Demonstrates how to create web applications to visualize nba and nfl player stats using python libraries like streamlit, pandas, numpy, matplotlib, and seaborn. it covers data processing, interactive visualization, and web scraping to provide user-friendly data exploration and analysis platforms.', 'chapters': [{'end': 2513.2, 'start': 2266.122, 'title': 'Creating nba player stats web app', 'summary': "Demonstrates how to create a web app to visualize nba player stats using python's streamlit library, including importing necessary libraries, setting up the app's title and description, creating input features like a dropdown menu for selecting years, and performing web scraping to fetch and process data from basketball reference.com.", 'duration': 247.078, 'highlights': ['The code imports necessary libraries such as Streamlit, pandas, base64, Matplotlib, Seaborn, and NumPy for building the web app, handling data frames, performing web scraping, and creating the heatmap plot.', "The web app's title is set as 'NBA Player Stats Explorer' with a corresponding description and data source link provided in Markdown language.", 'Input features such as a dropdown menu for selecting years and a range of numbers from 1950 to 2019 are created, allowing dynamic data visualization based on user input.', 'The code includes web scraping and data pre-processing using the PD.read_html function to fetch and process data from basketball reference.com.']}, {'end': 3050.642, 'start': 2513.34, 'title': 'Data visualization and data processing', 'summary': 'Covers data processing and visualization techniques using python pandas and streamlit, including preprocessing, data filtering, and creating interactive visualizations, resulting in a streamlined and user-friendly data exploration and analysis platform.', 
'duration': 537.302, 'highlights': ['The chapter covers data processing and visualization techniques', 'Data filtering based on input selection in the sidebar menu affects the data dimensions', 'Creation of a heat map from an inter-correlation matrix calculation']}, {'end': 3542.052, 'start': 3050.642, 'title': 'Exploring NFL player stats with data science', 'summary': 'Discusses creating a web application to explore NFL player rushing stats using Python libraries such as Streamlit, pandas, NumPy, Matplotlib, and Seaborn, with pro-football-reference.com as the data source. The app allows users to select the year and team, and includes features such as an inter-correlation heat map and data cleaning suggestions.', 'duration': 491.41, 'highlights': ['The web application allows users to select the year and team, and provides options for data cleaning and visualization, using Python libraries such as Streamlit, pandas, NumPy, Matplotlib, and Seaborn.', 'The data source for the web application is pro-football-reference.com, and the scraping is done with the pandas library over a data range from 1990 to 2020.', 'The web scraping is achieved in only one line of code using the pandas library.', 'The app includes features such as an inter-correlation heat map and suggestions for data cleaning, with a data set of 117 rows.']}, {'end': 3868.339, 'start': 3542.052, 'title': 'Python data-driven web application', 'summary': 'Covers creating a data-driven web application in Python for retrieving NFL football player stats and web scraping S&P 500 stock prices in around 70 lines of code, including filtering data based on input selection, displaying player stats, downloading the data frame as a CSV file, encoding and decoding data with the base64 library, and creating a heat map of the inter-correlation between variables.', 'duration': 326.287, 'highlights': ['Creating a data-driven web application in Python for retrieving NFL football player stats data and web scraping S&P 500 stock 
prices using around 70 lines of code', 'Filtering data based on input selection and displaying player stats', 'Downloading data frame into a CSV file', 'Encoding and decoding data using the base64 library', 'Creating a heat map for intercorrelation between variables']}], 'duration': 1602.217, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM2266122.jpg', 'highlights': ['The code imports necessary libraries such as Streamlit, pandas, base64, Matplotlib, Seaborn, and NumPy for building the web app, handling data frames, performing web scraping, and creating the heatmap plot.', 'Creating a data-driven web application in Python for retrieving NFL football player stats data and web scraping S&P 500 stock prices using around 70 lines of code', 'The web application allows users to select the year and team, and provides options for data cleaning and visualization, using Python libraries like Streamlit, Pandas, NumPy, Matplotlib, and Seaborn.', "The web app's title is set as 'NBA Player Stats Explorer' with a corresponding description and data source link provided in Markdown language.", 'Input features such as a dropdown menu for selecting years and a range of numbers from 1950 to 2019 are created, allowing dynamic data visualization based on user input.']}, {'end': 5225.795, 'segs': [{'end': 4435.68, 'src': 'embed', 'start': 4367.706, 'weight': 0, 'content': [{'end': 4371.808, 'text': 'We have to rerun the load data function.', 'start': 4367.706, 'duration': 4.102}, {'end': 4376.572, 'text': 'Okay, and now it will be reassigned to the DF.', 'start': 4373.81, 'duration': 2.762}, {'end': 4385.277, 'text': 'because previously we have overwritten the df variable name, and it is lacking the symbol column.', 'start': 4377.455, 'duration': 7.822}, {'end': 4388.258, 'text': 'Right here.', 'start': 4387.938, 'duration': 0.32}, {'end': 4398.741, 'text': 'Right here, we have overwritten the same name, so I think we should call 
this something else.', 'start': 4393.78, 'duration': 4.961}, {'end': 4402.182, 'text': "Let's call it df2.", 'start': 4401.462, 'duration': 0.72}, {'end': 4423.205, 'text': 'And this will have to be df2.', 'start': 4421.883, 'duration': 1.322}, {'end': 4431.475, 'text': "Right? Let's do it again.", 'start': 4429.392, 'duration': 2.083}, {'end': 4433.057, 'text': 'This is df2.', 'start': 4431.495, 'duration': 1.562}, {'end': 4435.68, 'text': 'And then.', 'start': 4435.199, 'duration': 0.481}], 'summary': 'Rerunning the load data function and reassigning the result to df2, since the original df variable had been overwritten and lacked the symbol column.', 'duration': 67.974, 'max_score': 4367.706, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM4367706.jpg'}], 'start': 3868.719, 'title': 'Web scraping, data analysis, and visualization', 'summary': 'Demonstrates web scraping data from Wikipedia using pandas and Beautiful Soup, analyzing S&P 500 company sectors and stock price data with the yfinance library in Python, resulting in 11 unique sectors and stock price data for over 500 companies (only 2 failed to fetch) in a data frame containing 192 days of stock prices. It also covers visualizing stock prices in Python, creating a custom plotting function, deploying the visualization to a Streamlit web application, and building web applications with Python libraries to display S&P 500 stock prices and cryptocurrency prices with various custom functions and input parameters.', 'chapters': [{'end': 4210.828, 'start': 3868.719, 'title': 'Web scraping and data analysis', 'summary': 'Demonstrates web scraping data from Wikipedia using pandas and Beautiful Soup, analyzing the S&P 500 company sectors and stock price data with the yfinance library in Python, resulting in 11 unique sectors and stock price data for over 500 companies (2 failed to fetch) in a data frame containing 192 days of stock prices.', 'duration': 342.109, 'highlights': ['The chapter demonstrates web scraping data from Wikipedia using pandas and Beautiful Soup, resulting in a data frame with 11 unique sectors.', 'Analyzing the S&P 500 company sectors reveals 11 unique sectors and the number of companies in each sector.', 'Retrieving stock price data for over 500 companies using the yfinance library in Python results in a data frame containing 192 days of stock prices, with 2 companies failing to fetch data.']}, {'end': 4602.862, 'start': 4211.248, 'title': 'Visualizing stock prices with Python', 'summary': 'Demonstrates the process of visualizing stock prices using Python, focusing on creating a custom plotting function and deploying the visualization to a Streamlit web application that allows users to select sectors and the number of companies to display, with a maximum of five.', 'duration': 391.614, 'highlights': ['The chapter demonstrates the process of visualizing stock prices using Python, focusing on creating a custom function for plotting and deploying the visualization to a web application in Streamlit.', 'By comparing the stock prices from the beginning of the year until the present, it is 
noted that the price has increased by about $18.', "A custom function called 'price plot' is created, which takes the ticker symbol as an input argument and generates the plot, simplifying the process of plotting for different companies.", 'The demonstration includes deploying the visualization to a web application in Streamlit, where users can select sectors and the number of companies to display, with a maximum limit of five.']}, {'end': 4753.845, 'start': 4603.182, 'title': 'Building S&P 500 web app with Python libraries', 'summary': 'Discusses building a web application using Streamlit, pandas, base64, Matplotlib, and yfinance to display the S&P 500 stock price, with emphasis on the reduced number of Python libraries used and the custom function for web scraping the data, which was developed in Google Colab.', 'duration': 150.663, 'highlights': ['The chapter discusses building a web application using Streamlit, pandas, base64, Matplotlib, and yfinance to display the S&P 500 stock price, with emphasis on the reduced number of Python libraries used and the custom function for web scraping the data, which was developed in Google Colab.', "The function st.title is used to create the title of the web application, with 'S&P 500 app' shown as bold text.", 'A custom function for web scraping the S&P 500 data is cached, so the data need not be re-downloaded on subsequent runs after the first.', 'The chapter highlights the use of only five Python libraries, namely Streamlit, pandas, base64, Matplotlib, and yfinance, to display the S&P 500 stock price.', 'The chapter explains the process of assigning the web-scraped data to the df data frame, obtained from the custom web scraping function.']}, {'end': 5225.795, 'start': 4754.085, 'title': 'Building cryptocurrency price web app', 'summary': 'Covers building a cryptocurrency price web application, including grouping by sector names, filtering data based on user input, custom functions for data 
manipulation and plotting, and creating a web application with input parameters for currency, number of top cryptocurrencies, and percent change within specific time frames.', 'duration': 471.71, 'highlights': ['Grouping by sector names and filtering data based on user input', 'Creating custom functions for data manipulation and plotting', 'Creating a web application with input parameters']}], 'duration': 1357.076, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM3868719.jpg', 'highlights': ['Retrieving stock price data for over 500 companies using the yfinance library in Python results in a data frame containing 192 days of stock prices, with 2 companies failing to fetch data.', 'Demonstrates web scraping data from Wikipedia using pandas and Beautiful Soup, resulting in a data frame with 11 unique sectors.', 'The chapter demonstrates the process of visualizing stock prices using Python, focusing on creating a custom function for plotting and deploying the visualization to a web application in Streamlit.', 'The chapter discusses building a web application using Streamlit, pandas, base64, Matplotlib, and yfinance to display the S&P 500 stock price, with emphasis on the reduced number of Python libraries used and the custom function for web scraping the data, which was developed in Google Colab.']}, {'end': 6295.906, 'segs': [{'end': 5287.306, 'src': 'embed', 'start': 5228.038, 'weight': 3, 'content': [{'end': 5236.728, 'text': 'So you see here that the green color will represent the price change that is changed for the positive gain,', 'start': 5228.038, 'duration': 8.69}, {'end': 5241.153, 'text': 'while some of the cryptocurrency will have a negative pricing here.', 'start': 5236.728, 'duration': 4.425}, {'end': 5252.145, 'text': "Meaning that when it's compared between the first day and the seventh day, when the seventh day is selected, if the price change is negative,", 'start': 5242.438, 'duration': 9.707}, {'end': 5253.826, 'text': 
'it means that the price has reduced.', 'start': 5252.145, 'duration': 1.681}, {'end': 5258.11, 'text': 'However, if the price has increased, then there is a gain.', 'start': 5254.447, 'duration': 3.663}, {'end': 5265.295, 'text': "So it's essentially the gain of the price in green or the loss in red.", 'start': 5258.95, 'duration': 6.345}, {'end': 5271.533, 'text': 'Okay, so this is the cryptocurrency web app that we are going to be building today.', 'start': 5267.529, 'duration': 4.004}, {'end': 5279.92, 'text': 'And you can notice that here, the interface and the layout of the web application is full screen now.', 'start': 5271.553, 'duration': 8.367}, {'end': 5287.306, 'text': 'Because before, the web application will be a bit centered at the middle.', 'start': 5280.921, 'duration': 6.385}], 'summary': 'Cryptocurrency web app tracks price changes, with gain in green and loss in red.', 'duration': 59.268, 'max_score': 5228.038, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM5228038.jpg'}, {'end': 5644.763, 'src': 'embed', 'start': 5585.516, 'weight': 0, 'content': [{'end': 5593.238, 'text': 'particularly the page width that I have mentioned and also the extra column that is a new feature,', 'start': 5585.516, 'duration': 7.722}, {'end': 5597, 'text': 'you need to upgrade your Streamlit if you already have it installed on your computer.', 'start': 5593.238, 'duration': 3.762}, {'end': 5601.181, 'text': "However, if you haven't yet installed it, then you could install a fresh version.", 'start': 5597.64, 'duration': 3.541}, {'end': 5608.503, 'text': 'But in order to upgrade it, you need to type in pip install dash dash upgrade and then Streamlit.', 'start': 5602.199, 'duration': 6.304}, {'end': 5619.349, 'text': 'And because it is a new feature, Streamlit has probably used the term beta in front of the option here.', 'start': 5612.545, 'duration': 6.804}, {'end': 5621.11, 'text': 'Set page config.', 
'start': 5619.949, 'duration': 1.161}, {'end': 5624.952, 'text': 'And then the layout will be equal to wide.', 'start': 5623.051, 'duration': 1.901}, {'end': 5630.275, 'text': 'So this will allow us to expand the content to the full width of the page.', 'start': 5625.252, 'duration': 5.023}, {'end': 5634.417, 'text': "So let's try commenting it out and see what happens.", 'start': 5631.636, 'duration': 2.781}, {'end': 5639.74, 'text': 'All right, and here you go.', 'start': 5638.78, 'duration': 0.96}, {'end': 5644.763, 'text': 'You see that when we comment out the page width, it will be centered.', 'start': 5640, 'duration': 4.763}], 'summary': 'Upgrade Streamlit to access the new wide page layout feature.', 'duration': 59.247, 'max_score': 5585.516, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM5585516.jpg'}], 'start': 5228.038, 'title': 'Cryptocurrency web app development', 'summary': 'Discusses visualizing cryptocurrency price changes with green and red indicators and creating a full-screen web app. It also covers the recent Streamlit update enabling multiple columns for aesthetic partitioning, and the use of nine libraries. 
Furthermore, it explains web scraping cryptocurrency prices using Python and Streamlit, highlighting beta features, page layout customization, user interface options, caching data, and data filtering.', 'chapters': [{'end': 5287.306, 'start': 5228.038, 'title': 'Cryptocurrency price change visualization', 'summary': 'Discusses representing cryptocurrency price changes using green for gains and red for losses, with the aim of building a full-screen cryptocurrency web app.', 'duration': 59.268, 'highlights': ['Representing price change with green for a gain and red for a loss.', 'Building a full-screen cryptocurrency web app with an improved interface and layout.']}, {'end': 5531.047, 'start': 5288.727, 'title': 'Streamlit update: multiple columns and aesthetic partitioning', 'summary': 'Discusses the recent Streamlit update enabling the use of multiple columns in web applications for aesthetic partitioning of content, allowing for greater visual appeal, and the use of nine libraries for a new web application.', 'duration': 242.32, 'highlights': ['The Streamlit update allows for the use of multiple columns in web applications, enabling aesthetic partitioning of content', 'Nine libraries are used for the new web application, including Streamlit, PIL, pandas, base64, and Matplotlib']}, {'end': 6295.906, 'start': 5532.728, 'title': 'Web scraping crypto prices tutorial', 'summary': 'Explains the process of web scraping cryptocurrency prices using Python and Streamlit, highlighting the use of beta features, page layout customization, and user interface options, and also discusses the benefits of caching data and data filtering. It also showcases the CoinMarketCap website and the data to be scraped, emphasizing the selection and display of cryptocurrencies and the sorting of values.', 'duration': 763.178, 'highlights': ['The chapter explains the process of web scraping cryptocurrency prices using Python and Streamlit.', 'It discusses the benefits of caching data and data filtering.', 'It showcases the website CoinMarketCap and the data to be scraped, emphasizing the selection and display of cryptocurrencies and the sorting of values.', 'It highlights the use of beta features, page layout customization, and user interface options.']}], 'duration': 1067.868, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM5228038.jpg', 'highlights': ['Building a full-screen cryptocurrency web app with an improved interface and layout.', 'Representing price change with green for a gain and red for a loss.', 'The Streamlit update allows for the use of multiple columns in web applications, enabling aesthetic partitioning of content', 'Nine libraries are used for the new web application, including Streamlit, PIL, pandas, base64, and Matplotlib', 'The chapter explains the process of web scraping cryptocurrency prices using Python and Streamlit.', 'It discusses the benefits of caching data and data filtering.', 'It highlights the use of beta features, page layout customization, and user interface options.', 'Showcases the website CoinMarketCap and the data to be scraped, emphasizing the selection and display of cryptocurrencies and the sorting of values.']}, {'end': 7218.229, 'segs': [{'end': 6403.043, 'src': 'embed', 'start': 6360.629, 'weight': 3, 'content': [{'end': 6365.711, 'text': "It's going to print out that there are 100 rows and eight columns here.", 'start': 6360.629, 'duration': 5.082}, {'end': 6366.532, 'text': 'Eight columns.', 'start': 6365.931, 'duration': 0.601}, {'end': 6371.674, 'text': 'Column two data frame DF coins.', 'start': 
6369.413, 'duration': 2.261}, {'end': 6375.196, 'text': 'So this is DF coins is the data frame here.', 'start': 6371.754, 'duration': 3.442}, {'end': 6391.813, 'text': 'line number 122128 is going to be allowing us to download the data here as a csv file.', 'start': 6377.381, 'duration': 14.432}, {'end': 6398.979, 'text': 'and then lines number 133, preparing the data for the bar plot.', 'start': 6391.813, 'duration': 7.166}, {'end': 6403.043, 'text': "so here here in column 3, we're going to make the bar plot.", 'start': 6398.979, 'duration': 4.064}], 'summary': 'Data includes 100 rows and 8 columns, with preparations for a bar plot.', 'duration': 42.414, 'max_score': 6360.629, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM6360629.jpg'}, {'end': 6939.322, 'src': 'embed', 'start': 6909.114, 'weight': 5, 'content': [{'end': 6914.999, 'text': "sidebar. Okay, so let's save it and it'll move back to the sidebar, okay?", 'start': 6909.114, 'duration': 5.885}, {'end': 6925.116, 'text': 'and lines number 14 through 24 will be a custom function used to accept all of the four input parameters from the sidebar,', 'start': 6916.152, 'duration': 8.964}, {'end': 6935.201, 'text': 'and it will create a pandas data frame and the input parameters will be obtained from this sidebar as shown right here to the left hand side,', 'start': 6925.116, 'duration': 10.085}, {'end': 6939.322, 'text': 'and the text here will represent the name shown here.', 'start': 6935.201, 'duration': 4.121}], 'summary': 'Custom function on lines 14-24 creates pandas data frame from sidebar input parameters.', 'duration': 30.208, 'max_score': 6909.114, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM6909114.jpg'}, {'end': 7034.786, 'src': 'embed', 'start': 6966.071, 'weight': 0, 'content': [{'end': 6970.375, 'text': 'And so the first value here represents the minimum value, and it is 4.3.', 
'start': 6966.071, 'duration': 4.304}, {'end': 6974.438, 'text': 'And the second number here represents the maximum value, which is 7.9.', 'start': 6970.375, 'duration': 4.063}, {'end': 6978.882, 'text': 'And the third value represents the current selected value, or the default value, which is 5.4.', 'start': 6974.438, 'duration': 4.444}, {'end': 6984.306, 'text': 'So here, 4.3, 7.9, and 5.4.', 'start': 6978.882, 'duration': 5.424}, {'end': 6995.617, 'text': 'and so if you want to change the default value to something else like 5.8, and then it will be changed to 5.8, okay, let me change it back to 5.4.', 'start': 6984.306, 'duration': 11.311}, {'end': 6996.138, 'text': 'all right.', 'start': 6995.617, 'duration': 0.521}, {'end': 7005.407, 'text': "so here we're going to use the custom function that we built above user input features and then we're going to assign it into the df variable,", 'start': 6996.138, 'duration': 9.269}, {'end': 7009.278, 'text': 'and this will be on line number 26.', 'start': 7005.407, 'duration': 3.871}, {'end': 7012.803, 'text': 'okay, and lines number 28 and 29 will be right here.', 'start': 7009.278, 'duration': 3.525}, {'end': 7015.506, 'text': 'user input parameters.', 'start': 7012.803, 'duration': 2.703}, {'end': 7021.795, 'text': 'so you can see that in just two lines of code we could have the section header name and the corresponding table below.', 'start': 7015.506, 'duration': 6.289}, {'end': 7029.842, 'text': "so it's just a simple print out of the data frame, And lines number 31 will be essentially just loading in the iris dataset.", 'start': 7021.795, 'duration': 8.047}, {'end': 7034.786, 'text': 'Line number 32 will assign the iris.data into the x variable.', 'start': 7029.962, 'duration': 4.824}], 'summary': 'Minimum value: 4.3, maximum value: 7.9, default value: 5.4', 'duration': 68.715, 'max_score': 6966.071, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM6966071.jpg'}, {'end': 7083.573, 'src': 'embed', 'start': 7054.922, 'weight': 4, 'content': [{'end': 7061.728, 'text': 'we will be creating a classifier variable comprising of the random forest classifier and line number 36.', 'start': 7054.922, 'duration': 6.806}, {'end': 7071.883, 'text': "we're going to apply the classifier to build a training model using as input argument the x and the y data matrices And in line number 38,", 'start': 7061.728, 'duration': 10.155}, {'end': 7073.004, 'text': "we'll make the prediction.", 'start': 7071.883, 'duration': 1.121}, {'end': 7075.847, 'text': '39 will give you the prediction probability.', 'start': 7073.024, 'duration': 2.823}, {'end': 7083.573, 'text': 'Lines number 41 and 42 will be just a simple printout of the class label and their corresponding index number.', 'start': 7076.127, 'duration': 7.446}], 'summary': 'Creating a random forest classifier, training the model, and making predictions.', 'duration': 28.651, 'max_score': 7054.922, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM7054922.jpg'}], 'start': 6298.007, 'title': 'Data visualization and web app development', 'summary': 'Covers data sorting, bar plot preparation with 100 rows and 8 columns, creating a cryptocurrency price web app, machine learning for Iris flower prediction, conditional checks for bar plots, and a Streamlit tutorial on web app development using the Palmer Penguins dataset.', 'chapters': [{'end': 6464.902, 'start': 6298.007, 'title': 'Data sorting and bar plot preparation', 'summary': 'Explains how to sort data based on a specific value and how to prepare data for a bar plot, including a data frame of 100 rows and 8 columns and the creation of a new data frame for the bar plot.', 'duration': 166.895, 'highlights': ['The data frame contains 100 rows and 8 columns, providing a comprehensive dataset for analysis.', 'The process involves selecting specific columns like coin symbol, price change, and percent change for one hour, 24 hours, and seven days to create a new data frame called DF change.', 'The chapter demonstrates how to sort data based on a specific value, highlighting the impact of sorting on the order of displayed values, such as rank numbers and corresponding coins.']}, {'end': 6939.322, 'start': 6466.383, 'title': 'Cryptocurrency price web app', 'summary': 'Discusses the creation of a cryptocurrency price web application, along with machine learning capability to predict the Iris flower type based on four input parameters, utilizing conditional checks to create bar plots for different timeframes and a random forest classifier for prediction.', 'duration': 472.939, 'highlights': ['The chapter discusses the creation of a cryptocurrency price web application.', 'Utilizing conditional checks to create bar plots for different timeframes.', 'Utilizing a random forest classifier for prediction.']}, {'end': 7218.229, 'start': 6939.322, 'title': 'Streamlit tutorial: developing a web application', 'summary': 'Demonstrates the process of developing a web application using the Palmer Penguins dataset and Streamlit, and provides a quick recap of the previous tutorial series on Streamlit.', 'duration': 278.907, 'highlights': ['The tutorial focuses on developing a web application using the Palmer Penguins dataset and Streamlit.', 'A quick recap of the previous tutorial series on Streamlit is provided.']}], 'duration': 920.222, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM6298007.jpg', 'highlights': ['The data frame contains 100 rows and 8 columns, providing a comprehensive dataset for analysis.', 'The process involves selecting specific columns like coin symbol, price change, and percent change for one hour, 24 hours, and seven days to create a new data frame 
called DF change.', 'The chapter demonstrates how to sort data based on a specific value, highlighting the impact of sorting on the order of displayed values, such as rank numbers and corresponding coins.', 'The tutorial focuses on developing a web application using the Palmer Penguins dataset and Streamlit.', 'Utilizing conditional checks to create bar plots for different timeframes.', 'Utilizing a random forest classifier for prediction.', 'A quick recap of the previous tutorial series on Streamlit is provided.', 'The chapter discusses the creation of a cryptocurrency price web application.']}, {'end': 7937.385, 'segs': [{'end': 7356.895, 'src': 'embed', 'start': 7327.873, 'weight': 3, 'content': [{'end': 7332.796, 'text': 'And so as some of you have pointed out, this particular flaw of the code, I totally agree with you.', 'start': 7327.873, 'duration': 4.923}, {'end': 7337.119, 'text': 'And so the previous version was built like that for the simplicity of the tutorial.', 'start': 7332.976, 'duration': 4.143}, {'end': 7343.983, 'text': "And so in this tutorial we're going to use another approach where we could beforehand build a prediction model,", 'start': 7337.279, 'duration': 6.704}, {'end': 7346.684, 'text': 'pickle the object which is to save it into a file.', 'start': 7343.983, 'duration': 2.701}, {'end': 7350.827, 'text': "And then within the Streamlit code, we're going to read in the saved file.", 'start': 7346.804, 'duration': 4.023}, {'end': 7356.895, 'text': 'And so the advantage of that is that there is no need to rebuild the model every time that the input parameters are changed.', 'start': 7350.987, 'duration': 5.908}], 'summary': 'In the tutorial, a prediction model is pickled to save and read the file, avoiding the need to rebuild the model for input parameter changes.', 'duration': 29.022, 'max_score': 7327.873, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM7327873.jpg'}, {'end': 
7410.823, 'src': 'embed', 'start': 7388.422, 'weight': 0, 'content': [{'end': 7397.272, 'text': "And then we're going to define the target and the encode variable according to the excellent kernel Kaggle provided in this link from Pratic.", 'start': 7388.422, 'duration': 8.85}, {'end': 7401.857, 'text': "And so kudos to Pratic for the code that we're using as the basis of this tutorial.", 'start': 7397.532, 'duration': 4.325}, {'end': 7410.823, 'text': "And so here we're going to use ordinal feature encoding in order to encode the qualitative features such as species, island and sex.", 'start': 7402.037, 'duration': 8.786}], 'summary': 'Using ordinal feature encoding for qualitative features like species, island, and sex.', 'duration': 22.401, 'max_score': 7388.422, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM7388422.jpg'}], 'start': 7218.229, 'title': 'Building penguin classification web app', 'summary': 'Demonstrates building a classification web application using the palmer penguins dataset with 333 rows and 7 columns, discussing data imputation. 
It also covers a tutorial on building a predictive model for penguin species using a different approach, where the trained model is pickled and later loaded inside a Streamlit web application.', 'chapters': [{'end': 7287.33, 'start': 7218.229, 'title': 'Building classification web app with penguin dataset', 'summary': 'Demonstrates building a classification web application using the Palmer Penguins dataset, which has 333 rows and 7 columns after rows with missing values were dropped, and discusses the option of data imputation.', 'duration': 69.101, 'highlights': ['The dataset contains 333 rows and 7 columns, including species, island, bill length, bill depth, flipper length, body mass, and sex.', 'Fewer than 10 rows contained missing values, and these were deleted from the dataset.', 'The chapter discusses the option of data imputation to retain more of the data.']}, {'end': 7937.385, 'start': 7287.33, 'title': 'Streamlit app model building', 'summary': 'Covers a tutorial on building a predictive model for penguin species using a different approach, where the prediction model is built, pickled, and then read back from the saved file within the Streamlit code to avoid rebuilding the model every time the input parameters are changed, as demonstrated in a Streamlit web application.', 'duration': 650.055, 'highlights': ['A new approach is used to build and pickle a prediction model, which is then read back from the saved file within the Streamlit code to avoid rebuilding the model every time the input parameters are changed.', 'The tutorial demonstrates the use of ordinal feature encoding to encode qualitative features such as species, island, and sex for predicting the species of the penguin.', 'The tutorial involves building a random forest model using the scikit-learn library and saving the model using the pickle library to avoid rebuilding it every time the input parameters are changed.', 'The chapter also covers the creation of a Streamlit web application where users can upload a CSV file or input parameters directly to make predictions 
for penguin species.']}], 'duration': 719.156, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM7218229.jpg', 'highlights': ['The dataset contains 333 rows and 7 columns, including species, island, bill length, bill depth, flipper length, body mass, and sex.', 'The missing values were deleted, resulting in less than 10 missing values in the dataset.', 'The tutorial involves building a random forest model using the scikit-learn library and saving the model using the pickle library to avoid rebuilding it every time input parameters are changed.', 'A unique approach is used to build and pickle a prediction model, which is then read in the saved file within the Streamlit code to avoid rebuilding the model every time input parameters are changed.', 'The tutorial demonstrates the use of ordinal feature encoding to encode qualitative features such as species, island, and sex for predicting the species of the penguin.', 'The chapter discusses the option of data imputation to retain more of the data.', 'The chapter also covers the creation of a Streamlit web application where users can upload a CSV file or input parameters directly to make predictions for penguin species.']}, {'end': 8942.646, 'segs': [{'end': 8053.52, 'src': 'embed', 'start': 8021.846, 'weight': 5, 'content': [{'end': 8026.269, 'text': 'And then there will be two possibility for sex, three possibility for island.', 'start': 8021.846, 'duration': 4.423}, {'end': 8028.87, 'text': 'Okay All right.', 'start': 8026.289, 'duration': 2.581}, {'end': 8033.492, 'text': "So now we're going to display the user input in the user input features.", 'start': 8028.97, 'duration': 4.522}, {'end': 8036.914, 'text': 'So right here, user input feature is this block of code.', 'start': 8033.672, 'duration': 3.242}, {'end': 8039.014, 'text': "And we're going to use conditional again.", 'start': 8037.214, 'duration': 1.8}, {'end': 8044.697, 'text': 'And so the first 
possibility is if there is an uploaded file, write out the content.', 'start': 8039.034, 'duration': 5.663}, {'end': 8053.52, 'text': 'Otherwise, write out the content of the slider bar and then also display a message that we are awaiting the CSV file to be uploaded.', 'start': 8045.037, 'duration': 8.483}], 'summary': 'Two sex possibilities, three island possibilities, user input displayed with conditional logic.', 'duration': 31.674, 'max_score': 8021.846, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM8021846.jpg'}, {'end': 8820.753, 'src': 'embed', 'start': 8796.532, 'weight': 3, 'content': [{'end': 8803.738, 'text': "And then finally, in the remaining 10 lines of code, we're going to print out the plots provided by the SHAP library.", 'start': 8796.532, 'duration': 7.206}, {'end': 8809.923, 'text': 'So line numbers 75 and 76 are going to extract the SHAP values.', 'start': 8803.918, 'duration': 6.005}, {'end': 8812.345, 'text': 'Line number 78.', 'start': 8810.264, 'duration': 2.081}, {'end': 8814.667, 'text': 'is going to print out the header here, Feature Importance.', 'start': 8812.345, 'duration': 2.322}, {'end': 8820.753, 'text': 'Line number 79 is going to print the header of the plot, I mean the title of the plot.', 'start': 8815.008, 'duration': 5.745}], 'summary': 'Code extracts shap values and prints feature importance plots.', 'duration': 24.221, 'max_score': 8796.532, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM8796532.jpg'}, {'end': 8942.646, 'src': 'embed', 'start': 8885.925, 'weight': 0, 'content': [{'end': 8893.954, 'text': 'So, in prior episodes, I have shown you how you could use the Streamlit library in Python to build simple web applications,', 'start': 8885.925, 'duration': 8.029}, {'end': 8900.081, 'text': 'ranging from a simple financial web application where you could check the stock price,', 'start': 8893.954, 'duration': 
6.127}, {'end': 8904.366, 'text': 'a simple web application where you could predict the Boston housing price.', 'start': 8900.081, 'duration': 4.285}, {'end': 8907.149, 'text': 'a penguin species prediction web application.', 'start': 8904.366, 'duration': 2.783}, {'end': 8914.533, 'text': "And so in today's episode, we're going to talk about how you could build a simple bioinformatics web application.", 'start': 8907.389, 'duration': 7.144}, {'end': 8921.176, 'text': 'And it is going to be based on the prior tutorial videos that are mentioned in this channel.', 'start': 8915.073, 'duration': 6.103}, {'end': 8935.363, 'text': "So the bioinformatics web application that we're going to be building today will be an extension of a tutorial series where I have shown you how you could build a molecular solubility prediction model using machine learning.", 'start': 8921.416, 'duration': 13.947}, {'end': 8942.646, 'text': 'where, particularly, we are applying machine learning and python to the field of computational drug discovery.', 'start': 8935.763, 'duration': 6.883}], 'summary': 'Learn to build bioinformatics web app for drug discovery using machine learning in python.', 'duration': 56.721, 'max_score': 8885.925, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM8885925.jpg'}], 'start': 7937.385, 'title': 'Data upload, preprocessing & web app development', 'summary': 'Covers data upload and preprocessing for penguin prediction, including feature engineering and web app development for boston housing ml prediction and bioinformatics application with a focus on input parameters and model creation using python and machine learning. 
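The build-once, pickle, then load-inside-Streamlit pattern summarized above can be sketched as follows. This is a minimal sketch, not the course's code: `PenguinModel` is a hypothetical stand-in for the trained scikit-learn RandomForestClassifier, and an in-memory buffer stands in for the saved penguinsclf.pkl file so the example runs without scikit-learn installed.

```python
import io
import pickle


class PenguinModel:
    """Hypothetical stand-in for the trained classifier (the course uses
    a scikit-learn RandomForestClassifier saved as penguinsclf.pkl)."""

    def predict(self, rows):
        # Dummy rule so the sketch is self-contained; the real model
        # predicts from the ordinally encoded input features.
        return ["Adelie" for _ in rows]


# Model-building step: train once, then persist with pickle so the
# Streamlit app never has to retrain.
model = PenguinModel()
buf = io.BytesIO()  # stands in for open('penguinsclf.pkl', 'wb')
pickle.dump(model, buf)

# Streamlit app step: load the saved model instead of rebuilding it on
# every rerun triggered by a change of the input widgets.
buf.seek(0)
loaded = pickle.load(buf)
print(loaded.predict([{"bill_length_mm": 43.9, "sex": "male"}]))
```

This is why the course splits model building into its own script: the app only pays the cost of `pickle.load` per rerun, not a full model fit.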
the boston housing dataset with 506 rows and 14 columns is utilized for web app development, and the bioinformatics web application includes a random forest model and shap plots for molecular solubility prediction.', 'chapters': [{'end': 7997.81, 'start': 7937.385, 'title': 'Data upload and preprocessing for penguin prediction', 'summary': 'Covers the process of uploading and preprocessing penguin data, including dropping the species column and encoding categorical variables, with a focus on input features and expected values.', 'duration': 60.425, 'highlights': ["The encoding code expects multiple values for certain columns, such as three possibilities for the 'island' variable and two for the 'sex' column.", "The process involves reading in the data from 'penguins_clean.csv', dropping the 'species' column, and combining the input data frame with the penguin dataset.", 'The conditional block determines whether to upload a file or use a slider bar as input for the prediction process.']}, {'end': 8645.272, 'start': 7997.99, 'title': 'Web app for boston housing ml prediction', 'summary': 'Explores the development of a web application powered by machine learning for predicting boston housing prices, utilizing the boston housing dataset with 506 rows and 14 columns, along with the creation of a side panel for specifying input parameters and a custom function for accepting user input features.', 'duration': 647.282, 'highlights': ['The Boston housing dataset comprises 506 rows and 14 columns, with the target response being the median value of owner-occupied homes in $1,000 units.', 'The creation of a side panel allows for specifying input parameters and defining a custom function for accepting user input features, consisting of 13 features.', 'The development of a web application powered by machine learning for predicting Boston housing prices is showcased, along with the utilization of the Boston housing dataset and the creation of a side panel for specifying input 
parameters.']}, {'end': 8942.646, 'start': 8645.673, 'title': 'Building bioinformatics web app', 'summary': 'Explains the process of building a bioinformatics web application using streamlit library in python, extending a tutorial series on building a molecular solubility prediction model using machine learning and python for computational drug discovery, and highlights the steps involved in creating the application, including the use of streamlit library, building a random forest model, and displaying shap plots.', 'duration': 296.973, 'highlights': ['The chapter explains the process of building a bioinformatics web application using Streamlit library in Python.', 'It extends a tutorial series on building a molecular solubility prediction model using machine learning and Python for computational drug discovery.', 'The steps involved in creating the application, including the use of Streamlit library, building a random forest model, and displaying SHAP plots.']}], 'duration': 1005.261, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM7937385.jpg', 'highlights': ['The development of a web application powered by machine learning for predicting Boston housing prices is showcased, along with the utilization of the Boston housing dataset and the creation of a side panel for specifying input parameters.', "The process involves reading in the data from 'penguins_clean.csv', dropping the 'species' column, and combining the input data frame with the penguin dataset.", 'The conditional block determines whether to upload a file or use a slider bar as input for the prediction process.', 'The Boston housing dataset comprises 506 rows and 14 columns, with the target response being the median value of owner-occupied homes in $1,000 units.', "The encoding code expects multiple values for certain columns, such as three possibilities for the 'island' variable and two for the 'sex' column.", 'It extends a tutorial series on 
building a molecular solubility prediction model using machine learning and Python for computational drug discovery.', 'The chapter explains the process of building a bioinformatics web application using Streamlit library in Python.', 'The creation of a side panel allows for specifying input parameters and defining a custom function for accepting user input features, consisting of 13 features.', 'The steps involved in creating the application, including the use of Streamlit library, building a random forest model, and displaying SHAP plots.']}, {'end': 9718.403, 'segs': [{'end': 8970.494, 'src': 'embed', 'start': 8942.646, 'weight': 1, 'content': [{'end': 8948.488, 'text': 'and if you think of it in the grand scheme of things, it is part of the bioinformatics research area,', 'start': 8942.646, 'duration': 5.842}, {'end': 8953.03, 'text': 'and so this video will focus more on the aspect of actually building the web application.', 'start': 8948.488, 'duration': 4.542}, {'end': 8960.312, 'text': "and if you're interested in how to build the prediction model on the molecular solubility that we will be using today,", 'start': 8953.53, 'duration': 6.782}, {'end': 8964.793, 'text': 'let me refer you to the prior tutorial videos on this channel,', 'start': 8960.312, 'duration': 4.481}, {'end': 8970.494, 'text': 'and the links will be provided in the video description and also the pinned comments of this video.', 'start': 8964.793, 'duration': 5.701}], 'summary': 'This video focuses on building a web application for bioinformatics research, while prior tutorial videos cover molecular solubility prediction models.', 'duration': 27.848, 'max_score': 8942.646, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM8942646.jpg'}, {'end': 9174.86, 'src': 'embed', 'start': 9147.562, 'weight': 3, 'content': [{'end': 9153.323, 'text': 'all right, prediction has been made and it is assigned to the y pred variable,', 'start': 
9147.562, 'duration': 5.761}, {'end': 9159.905, 'text': 'and the prediction is made using model.predict and then using x as the input argument,', 'start': 9153.323, 'duration': 6.582}, {'end': 9163.248, 'text': "And then we're going to print the model performance here.", 'start': 9160.465, 'duration': 2.783}, {'end': 9174.86, 'text': 'And these four values are the regression coefficient values for each of the four input variables of the x, comprising MolLogP, molecular weight,', 'start': 9163.829, 'duration': 11.031}], 'summary': 'Model predicted using input x, with 4 regression coefficient values.', 'duration': 27.298, 'max_score': 9147.562, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM9147562.jpg'}, {'end': 9350.704, 'src': 'embed', 'start': 9322.466, 'weight': 2, 'content': [{'end': 9327.189, 'text': 'And so that Jupyter Notebook will be using this input file here,', 'start': 9322.466, 'duration': 4.723}, {'end': 9332.691, 'text': 'but also in the code it is downloading directly from the GitHub of the data professor.', 'start': 9327.189, 'duration': 5.502}, {'end': 9335.493, 'text': "So actually, we don't need this as well.", 'start': 9332.931, 'duration': 2.562}, {'end': 9337.554, 'text': 'So I can just delete that.', 'start': 9335.893, 'duration': 1.661}, {'end': 9340.935, 'text': "And then I'm just going to provide you with the Jupyter Notebook.", 'start': 9338.274, 'duration': 2.661}, {'end': 9348.623, 'text': 'And Okay, so a total of three files will be used for this web application.', 'start': 9341.656, 'duration': 6.967}, {'end': 9350.704, 'text': 'So the first one is the logo.', 'start': 9348.843, 'duration': 1.861}], 'summary': 'Jupyter notebook uses input file and downloads from github, three files used for web application.', 'duration': 28.238, 'max_score': 9322.466, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM9322466.jpg'}, {'end': 9417.711, 'src': 'embed', 'start': 9389.358, 'weight': 0, 'content': [{'end': 9393.72, 'text': "So there's a total of about 110 lines of code.", 'start': 9389.358, 'duration': 4.362}, {'end': 9400.103, 'text': 'And so notice that I have also included several lines of code that are essentially comments here.', 'start': 9393.98, 'duration': 6.123}, {'end': 9407.526, 'text': 'And so these are just for ease of reading, to have a look at what each block of code is doing.', 'start': 9400.683, 'duration': 6.843}, {'end': 9413.789, 'text': 'All right, so if we delete the comments, it will probably be just under 100 lines of code.', 'start': 9407.546, 'duration': 6.243}, {'end': 9417.711, 'text': "So let's take a look at the code here.", 'start': 9415.51, 'duration': 2.201}], 'summary': "Code consists of about 110 lines, with comments making up a portion. Without comments, it's around 100 lines.", 'duration': 28.353, 'max_score': 9389.358, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM9389358.jpg'}, {'end': 9718.403, 'src': 'embed', 'start': 9689.868, 'weight': 6, 'content': [{'end': 9694.472, 'text': "I'm going to search for it, command F and then search for smiles.", 'start': 9689.868, 'duration': 4.604}, {'end': 9697.985, 'text': 'And we have it here in 2.1.', 'start': 9694.692, 'duration': 3.293}, {'end': 9699.496, 'text': '4 canonical smiles.', 'start': 9697.985, 'duration': 1.511}, {'end': 9700.958, 'text': "So we're going to copy that.", 'start': 9699.977, 'duration': 0.981}, {'end': 9703.66, 'text': 'So this is the SMILES notation.', 'start': 9701.618, 'duration': 2.042}, {'end': 9706.602, 'text': "And so I'm going to copy and then I'm going to paste it here.", 'start': 9704.301, 'duration': 2.301}, {'end': 9713.989, 'text': 'And then after we paste it here, we have to press command and 
enter in order to apply this.', 'start': 9708.324, 'duration': 5.665}, {'end': 9718.403, 'text': 'And then note that the predicted value here will be updated.', 'start': 9714.98, 'duration': 3.423}], 'summary': 'Copied the canonical SMILES notation (section 2.1.4) and applied it to update the predicted value.', 'duration': 28.535, 'max_score': 9689.868, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM9689868.jpg'}], 'start': 8942.646, 'title': 'Building web & solubility prediction models', 'summary': 'covers building a web application for molecular solubility with a focus on importing computed descriptors and connecting to the streamlit directory. it also details building a solubility prediction model using linear regression with an r-squared value of 0.77 and mean squared error of 1.01, followed by creating a web application using streamlit and generating molecular descriptors.', 'chapters': [{'end': 9057.183, 'start': 8942.646, 'title': 'Building web application for molecular solubility', 'summary': 'provides a concise version of a prior video on building machine learning models for molecular solubility, focusing on building the web application, importing computed descriptors, and connecting to the streamlit directory.', 'duration': 114.537, 'highlights': ['The video focuses on building the web application for molecular solubility.', 'The tutorial provides a concise version of the prior video on building machine learning models for molecular solubility.', 'Importing computed descriptors directly from the GitHub of the data professor.', 'Connecting to the streamlit directory and clearing all the outputs.']}, {'end': 9718.403, 'start': 9057.623, 'title': 'Building solubility prediction model', 'summary': 'covers building a solubility prediction model using linear regression, achieving an r-squared value of 0.77 and mean squared error of 1.01, followed by creating a web application using streamlit and generating molecular descriptors.', 
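The linear-regression step described in this chapter (the course fits scikit-learn's LinearRegression on four descriptors and reports R-squared and mean squared error) can be illustrated with a single-descriptor, closed-form least-squares version. All numbers below are made up for illustration; only the metric formulas mirror what the tutorial reports.

```python
# Hypothetical data: one descriptor (MolLogP-like values) vs. measured LogS.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [-0.8, -1.9, -3.1, -3.9, -5.1]

# Closed-form least squares for y = intercept + slope * x.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
pred = [intercept + slope * x for x in xs]

# The same two metrics the tutorial prints: R-squared and mean squared error.
ss_res = sum((y - p) ** 2 for y, p in zip(ys, pred))
ss_tot = sum((y - my) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot
mse = ss_res / n
print(f"slope={slope:.2f}  R^2={r2:.3f}  MSE={mse:.4f}")
```

With four descriptors the course lets scikit-learn solve the multivariate version, but the R-squared and MSE definitions are exactly these.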
'duration': 660.78, 'highlights': ['Building a linear regression model to predict log S with an R-squared value of 0.77 and mean squared error of 1.01', 'Creating a web application using Streamlit and generating molecular descriptors', 'Separating the data frame into X and Y variables, with X containing all columns except log S and Y containing only log S']}], 'duration': 775.757, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM8942646.jpg', 'highlights': ['Building a linear regression model to predict log S with an R-squared value of 0.77 and mean squared error of 1.01', 'Creating a web application using Streamlit and generating molecular descriptors', 'Importing computed descriptors directly from the GitHub of the data professor', 'Connecting to the Streamlit directory and clearing all the outputs', 'The video focuses on building the web application for molecular solubility', 'The tutorial provides a concise version of the prior video on building machine learning models for molecular solubility', 'Separating the data frame into X and Y variables, with X containing all columns except log S and Y containing only log S']}, {'end': 11511.033, 'segs': [{'end': 10589.411, 'src': 'embed', 'start': 10532.902, 'weight': 0, 'content': [{'end': 10536.344, 'text': 'So this will provide us with the details that we see on the GitHub.', 'start': 10532.902, 'duration': 3.442}, {'end': 10547.027, 'text': 'And other than that, we have copied the following four files from the GitHub of the Penguins web application penguinsapp.py.', 'start': 10537.325, 'duration': 9.702}, {'end': 10552.188, 'text': 'penguinsclean.csv. 
penguinsclf.pkl.', 'start': 10547.027, 'duration': 5.161}, {'end': 10554.508, 'text': 'penguins-example.csv.', 'start': 10552.188, 'duration': 2.32}, {'end': 10560.73, 'text': "So these four files were copied directly from the repository that I'm going to show you right now.", 'start': 10554.709, 'duration': 6.021}, {'end': 10562.65, 'text': "So we're going to code.", 'start': 10561.65, 'duration': 1}, {'end': 10564.15, 'text': "We're going to streamlit.", 'start': 10562.75, 'duration': 1.4}, {'end': 10566.311, 'text': 'And it was from part three.', 'start': 10564.651, 'duration': 1.66}, {'end': 10575.703, 'text': 'So we copied everything except for the model building.py, because the model building.py will produce the PKL.', 'start': 10567.131, 'duration': 8.572}, {'end': 10579.929, 'text': "And that's what we need, which is the saved model that we have created.", 'start': 10575.964, 'duration': 3.965}, {'end': 10583.725, 'text': "Let's head back All right.", 'start': 10579.949, 'duration': 3.776}, {'end': 10589.411, 'text': 'And aside from the four files, which is directly related to the Streamlit web application,', 'start': 10583.986, 'duration': 5.425}], 'summary': 'Four files copied from github for streamlit web application.', 'duration': 56.509, 'max_score': 10532.902, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM10532902.jpg'}, {'end': 11087.298, 'src': 'embed', 'start': 11051.191, 'weight': 1, 'content': [{'end': 11052.492, 'text': "So it's apparently loading.", 'start': 11051.191, 'duration': 1.301}, {'end': 11057.093, 'text': 'And when the web application is loading for the first time, it might take you some time.', 'start': 11052.832, 'duration': 4.261}, {'end': 11059.934, 'text': 'Okay So now the web application is loaded.', 'start': 11057.533, 'duration': 2.401}, {'end': 11062.794, 'text': "And so let's play around with the input parameters.", 'start': 11060.114, 'duration': 2.68}, {'end': 
11068.596, 'text': 'And as we can see, the prediction label changes along with the prediction probability.', 'start': 11063.075, 'duration': 5.521}, {'end': 11070.237, 'text': 'Okay So congratulations.', 'start': 11068.856, 'duration': 1.381}, {'end': 11075.018, 'text': 'You have now successfully deployed your Streamlit web application onto Heroku.', 'start': 11070.317, 'duration': 4.701}, {'end': 11087.298, 'text': 'Do you want to deploy your web application that you have just created in Python using the Streamlit library?', 'start': 11082.034, 'duration': 5.264}], 'summary': 'Streamlit web app deployed successfully on heroku.', 'duration': 36.107, 'max_score': 11051.191, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM11051191.jpg'}, {'end': 11299.388, 'src': 'heatmap', 'start': 11161.192, 'weight': 3, 'content': [{'end': 11172.347, 'text': 'and then it will create a data frame of the data and then it will also allow us to download the data as a CSV file and finally,', 'start': 11161.192, 'duration': 11.155}, {'end': 11176.353, 'text': "we're also able to make some beautiful plots here.", 'start': 11172.347, 'duration': 4.006}, {'end': 11178.094, 'text': 'All right.', 'start': 11177.754, 'duration': 0.34}, {'end': 11180.736, 'text': "And so we're going to proceed with deploying the app.", 'start': 11178.214, 'duration': 2.522}, {'end': 11191.823, 'text': 'And so, before continuing further, it should be noted that we need also the requirements.txt file, which will list all of the dependency,', 'start': 11180.876, 'duration': 10.947}, {'end': 11198.167, 'text': "meaning the libraries that we're using in our web app and also the corresponding version number.", 'start': 11191.823, 'duration': 6.344}, {'end': 11200.329, 'text': 'So how do I get this? 
Let me show you.', 'start': 11198.468, 'duration': 1.861}, {'end': 11209.708, 'text': 'So you can get this by going to your terminal and I have already logged into my conda environment.', 'start': 11203.041, 'duration': 6.667}, {'end': 11217.437, 'text': 'So I will directly type pip freeze > requirements.txt.', 'start': 11209.909, 'duration': 7.528}, {'end': 11222.771, 'text': "And then let's have a look at the contents of the file.", 'start': 11220.27, 'duration': 2.501}, {'end': 11232.394, 'text': 'And then notice that I have all of the libraries that are installed in my conda environment along with the corresponding version number.', 'start': 11224.651, 'duration': 7.743}, {'end': 11238.675, 'text': 'So I will selectively choose the libraries that are used for the web app.', 'start': 11232.814, 'duration': 5.861}, {'end': 11241.676, 'text': "And then I'm going to copy the corresponding lines.", 'start': 11238.955, 'duration': 2.721}, {'end': 11245.257, 'text': "Like for example, we're making use of the yfinance library.", 'start': 11241.936, 'duration': 3.321}, {'end': 11249.559, 'text': "So we're going to copy this line, we also made use of the matplotlib library.", 'start': 11245.537, 'duration': 4.022}, {'end': 11252.539, 'text': "And so we're going to find matplotlib right here.", 'start': 11249.879, 'duration': 2.66}, {'end': 11253.72, 'text': "And then we're going to copy that.", 'start': 11252.579, 'duration': 1.141}, {'end': 11257.581, 'text': "And then do the same thing for all of the other libraries that you're using.", 'start': 11254.02, 'duration': 3.561}, {'end': 11269.843, 'text': 'Okay, so essentially you have the app file itself, you have the requirements.txt, and then you also have the readme.md file.', 'start': 11260.515, 'duration': 9.328}, {'end': 11277.53, 'text': 'So this is normally created automatically if you ticked it, and it will allow you to show this readme here.', 'start': 11270.104, 'duration': 7.426}, {'end': 
11285.537, 'text': "So I'm going to show you how you could include a button that will allow you to click on it, and then it will launch the web application.", 'start': 11277.771, 'duration': 7.766}, {'end': 11290.321, 'text': 'Okay, so let me open up the share Streamlit website.', 'start': 11286.238, 'duration': 4.083}, {'end': 11294.364, 'text': 'Share Streamlit.', 'start': 11293.403, 'duration': 0.961}, {'end': 11299.388, 'text': "And so it should be noted that for this one, I've signed in using my GitHub.", 'start': 11295.105, 'duration': 4.283}], 'summary': 'Demonstrating how to create a data frame, download as csv, and deploy a web app using streamlit.', 'duration': 29.284, 'max_score': 11161.192, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM11161192.jpg'}], 'start': 9718.584, 'title': 'Using rdkit library for molecular descriptors, molecular solubility prediction app, deploying streamlit web app on heroku, and deploying sp500 web app', 'summary': 'Covers using the rdkit library to compute molecular descriptors, developing a web app for predicting molecular solubility, deploying a streamlit web application on heroku, and deploying a web app for scraping sp500 data, including specific functionalities and practical implementation steps.', 'chapters': [{'end': 9765.672, 'start': 9718.584, 'title': 'Using rdkit library for molecular descriptors', 'summary': 'Demonstrates using the rdkit library to compute molecular descriptors including molecular weight, number of rotatable bonds, and aromatic proportion, with the aromatic proportion being computed using a custom function.', 'duration': 47.088, 'highlights': ['The computed molecular descriptors include molecular weight, number of rotatable bonds, and aromatic proportion.', 'The aromatic proportion is computed using a custom function.', 'The descriptors are computed using the RDKit library and the chem and descriptors function.']}, {'end': 10434.491, 
'start': 9765.672, 'title': 'Molecular solubility prediction app', 'summary': "Covers the development of a simple web application for predicting molecular solubility values using custom functions and markdown formatting, with an emphasis on the use of four specific descriptors based on john delaney's original research, and the process of reading input features, generating molecular descriptors, and making predictions.", 'duration': 668.819, 'highlights': ['The chapter covers the development of a simple web application for predicting molecular solubility values.', "The emphasis on the use of four specific descriptors based on John Delaney's original research.", 'The process of reading input features, generating molecular descriptors, and making predictions.']}, {'end': 11130.989, 'start': 10434.491, 'title': 'Deploying streamlit web app on heroku', 'summary': 'Discusses the deployment of a streamlit web application onto the internet using heroku, covering the necessary files, setup steps, and the deployment process, with a focus on the practical implementation and the benefits of using heroku for web application deployment.', 'duration': 696.498, 'highlights': ['The chapter explains the process of deploying a Streamlit web application onto the internet using Heroku, providing practical guidance and a step-by-step approach for the deployment process.', 'The speaker details the required files for the deployment onto Heroku, including penguinsapp.py, penguinsclean.csv, penguinsclf.pkl, penguins-example.csv, procfile, requirements.txt, and setup.sh.', 'The speaker emphasizes the importance of specifying the precise version numbers of the Python libraries used in the web application in the requirements.txt file, ensuring compatibility and functionality during deployment.', 'The chapter provides a walkthrough of the deployment process on Heroku, including connecting the GitHub repository, enabling automatic deploy options, and monitoring the real-time deployment 
progress.', 'The speaker highlights the simplicity of the deployment process on Heroku, eliminating the need to maintain the server and focusing solely on the application itself, making it an efficient and user-friendly platform for web application deployment.']}, {'end': 11511.033, 'start': 11131.009, 'title': 'Deploying sp500 web app', 'summary': 'Discusses deploying a web app that web scrapes data from wikipedia of the sp500, downloads y finance data from yahoo finance, creates a data frame, allows data download as a csv file, and facilitates plotting. it also covers creating a requirements.txt file, deploying the app on streamlit, and accessing the beta trial feature on streamlit share.', 'duration': 380.024, 'highlights': ['The app web scrapes data from Wikipedia of the SP500 and downloads the Y Finance data set from Yahoo Finance via the yfinance library.', 'Creating a requirements.txt file is necessary, listing all dependencies, and their corresponding version numbers.', 'Deploying the app on Streamlit involves clicking on the deploy button after providing the necessary details, such as the app name and repository.', 'Accessing the beta trial feature on Streamlit share requires being one of the 1000 selected users and requesting an invitation on the Streamlit share website.']}], 'duration': 1792.449, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/JwSS70SZdyM/pics/JwSS70SZdyM9718584.jpg', 'highlights': ['The computed molecular descriptors include molecular weight, number of rotatable bonds, and aromatic proportion.', 'The chapter covers the development of a simple web application for predicting molecular solubility values.', 'The chapter explains the process of deploying a Streamlit web application onto the internet using Heroku, providing practical guidance and a step-by-step approach for the deployment process.', 'The app web scrapes data from Wikipedia of the SP500 and downloads the Y Finance data set from Yahoo Finance via 
the yfinance library.']}], 'highlights': ['Building 12 data apps in Python using Streamlit with interactive web apps and Python libraries', 'Developing classification and regression models for data-driven web applications', 'Utilizing Python libraries like NumPy, SciPy, Matplotlib, Seaborn within the Streamlit environment', 'Creating a web app with Streamlit requiring approximately 20 lines of code', 'Retrieving stock price data for over 500 companies using the yfinance library in Python', 'Building a full-screen cryptocurrency web app with improved interface and layout', 'Building a linear regression model to predict log S with an R-squared value of 0.77 and mean squared error of 1.01', 'Building a web application for molecular solubility and deploying it using Heroku', 'Building a machine learning model for predicting Boston housing prices and creating a side panel for specifying input parameters', 'Building a bioinformatics web application using Streamlit library in Python', 'Building a random forest model using the scikit-learn library and saving the model using the pickle library', 'Building a data-driven web application in Python for retrieving NFL football player stats data and web scraping S&P 500 stock prices']}
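The requirements.txt workflow described in the deployment chapters (run `pip freeze`, then keep only the pinned lines for the libraries the app actually imports) can be sketched as follows. The freeze output and version numbers here are hypothetical; in practice you would run `pip freeze > requirements.txt` in your conda environment and trim the file, as the course does.

```python
# Hypothetical output of `pip freeze` in the app's conda environment.
freeze_output = """\
altair==4.1.0
matplotlib==3.3.4
numpy==1.19.5
streamlit==0.76.0
yfinance==0.1.55
"""

# Keep only the libraries the web app imports, pinned to exact versions
# so Heroku / Streamlit sharing installs a compatible environment.
app_libs = {"streamlit", "yfinance", "matplotlib"}
requirements = [
    line for line in freeze_output.splitlines()
    if line.split("==")[0].lower() in app_libs
]
print("\n".join(requirements))
```

Pinning exact versions matters here because, as the deployment chapter notes, a mismatched library version on the server can break an app that works locally.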