title
DevOps Course | DevOps Training | DevOps Tools | Intellipaat
description
🔥 Intellipaat DevOps Course: https://intellipaat.com/advanced-certification-devops-cloud-computing/
In this DevOps Course video, you will learn What is DevOps, How DevOps Works, the Importance of DevOps, Why We Need DevOps, What is GIT, GIT Architecture, DevOps Tools like Puppet, Nagios, Jenkins, Terraform, Ansible, and a lot more. In this DevOps Full Course video, you will also get an overview of How to Start Your Career in DevOps. We also have a special segment discussing DevOps Interview Questions and Answers, which will help you ace your interviews.
👇👇 The following topics are covered in this video:
00:00:00 - Introduction to DevOps Course
00:01:21 - Why DevOps?
00:11:21 - How DevOps works?
00:23:12 - DevOps Tools
00:30:14 - Introduction to GIT
00:32:15 - Common GIT commands
01:20:20 - Problems before Docker
01:27:18 - What is Docker?
01:33:12 - Docker vs Virtual Machine
01:40:27 - Docker Container Lifecycle
01:43:21 - Common Docker Operations
01:57:45 - Creating a Docker Hub Account
02:19:26 - What is a Dockerfile?
02:55:57 - What is a Monolithic Application?
03:01:17 - What are Microservices?
03:06:59 - What is Docker Compose?
03:29:26 - What is a Service?
03:36:52 - What is Kubernetes?
04:09:55 - YAML Files
04:42:28 - What is Puppet?
05:32:46 - What is Ansible?
09:15:59 - Introduction to the world of SDLC
09:24:40 - DevOps vs Agile
09:42:11 - How to Become a DevOps Engineer & Interview Questions
10:04:45 - Virtualization and Containerization
👉In this DevOps Course video, you will learn everything about DevOps from basic to advanced level. This video includes an introduction to DevOps, DevOps tools, Git, Docker, a Kubernetes tutorial, a DevOps project, an Ansible tutorial for beginners & DevOps Jenkins, with complete hands-on explanations. This is a must-watch session for everyone who wishes to learn DevOps and build a career in the cloud domain.
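The "Common GIT commands" chapter (00:32:15) walks through initializing a repository, checking its status, staging, committing, and branching. As a quick taste, here is a minimal sketch of that workflow; the directory, file, and branch names below are illustrative, not taken from the video:

```shell
# Sketch of the basic Git workflow covered in the course
# (directory, file, and branch names are illustrative)
mkdir devops-demo && cd devops-demo
git init                          # turn the folder into a Git repository
git config user.name  "Demo User" # identity Git needs before committing
git config user.email "demo@example.com"

echo "hello devops" > 1.txt       # create some work
git status                        # 1.txt shows up as untracked
git add 1.txt                     # stage it
git commit -m "first commit"      # snapshot it into Git's history
git log --oneline                 # view the commit history

git branch feature1               # create a branch for parallel work
git checkout feature1             # switch to it
```

From here the course layers on `git push origin master` to sync with a remote such as GitHub, and `git merge` to bring a feature branch back into master.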
#DevOpsCourse #DevOpsTraining #DevOpsTools #DevopsFullCourse #LearnDevOps #DevOpsForBeginners #Intellipaat
📌 Do subscribe to the Intellipaat channel & get regular updates on videos: https://goo.gl/hhsGWb
📕 Read the complete DevOps tutorial here: https://intellipaat.com/blog/tutorial/devops-tutorial/
📔 Interested in learning even more about DevOps? Check out our What is DevOps blog here: https://intellipaat.com/blog/what-is-devops/
❓ Are you looking for something more? Enroll in our DevOps certification course and become a certified DevOps professional (https://intellipaat.com/devops-certification-training/). It is a 32-hour instructor-led DevOps training provided by Intellipaat, fully aligned with industry standards and certification bodies.
If you’ve enjoyed this DevOps Full Course video, like it and subscribe to our channel for more similar informative videos.
Got any questions about DevOps Tools? Ask in the comment section below.
----------------------------
Intellipaat Edge
1. 24/7 Lifetime Access & Support
2. Flexible Class Schedule
3. Job Assistance
4. Mentors with 14+ Years of Experience
5. Industry Oriented Courseware
6. Lifetime free Course Upgrade
----------------------------
For more information:
Call Our Course Advisors IND: +91-7022374614 US: 1-800-216-8930 (Toll-Free)
Website: https://intellipaat.com/devops-certification-training/
Facebook: https://www.facebook.com/intellipaatonline
LinkedIn: https://www.linkedin.com/in/intellipaat/
Twitter: https://twitter.com/Intellipaat
Telegram: https://t.me/s/Learn_with_Intellipaat
Instagram: https://www.instagram.com/intellipaat
detail
{'title': 'DevOps Course | DevOps Training | DevOps Tools | Intellipaat', 'heatmap': [{'end': 3221.865, 'start': 1204.39, 'weight': 0.966}, {'end': 4834.112, 'start': 4419.858, 'weight': 0.747}, {'end': 6857.907, 'start': 6442.095, 'weight': 0.748}, {'end': 7653.968, 'start': 7247.341, 'weight': 0.719}, {'end': 9270.486, 'start': 8855.274, 'weight': 0.704}], 'summary': 'Covers devops lifecycle, git fundamentals, docker, kubernetes, puppet, jenkins, nagios installation, and career insights, including practical demonstrations and implementation of devops tools and techniques, providing insights into devops engineer demand and salaries.', 'chapters': [{'end': 1768.999, 'segs': [{'end': 168.382, 'src': 'embed', 'start': 139.589, 'weight': 4, 'content': [{'end': 144.111, 'text': 'he will only forward the code to somebody else who wants to test it right.', 'start': 139.589, 'duration': 4.522}, {'end': 146.632, 'text': 'but it did not run on the operation system guy.', 'start': 144.111, 'duration': 2.521}, {'end': 148.412, 'text': 'it could be the problem of environment.', 'start': 146.632, 'duration': 1.78}, {'end': 152.754, 'text': 'it could be the problem that he has not installed everything which is acquired by the software.', 'start': 148.412, 'duration': 4.342}, {'end': 156.895, 'text': 'so you used to push it back to the developer saying you know your code is not running on my system.', 'start': 152.754, 'duration': 4.141}, {'end': 157.596, 'text': 'please check your code.', 'start': 156.895, 'duration': 0.701}, {'end': 159.216, 'text': 'it is faulty right.', 'start': 157.596, 'duration': 1.62}, {'end': 161.738, 'text': 'but now the developer was furious.', 'start': 159.216, 'duration': 2.522}, {'end': 168.382, 'text': 'you know it ran fine on his system and if that operations guy is not installing something properly.', 'start': 161.738, 'duration': 6.644}], 'summary': 'The code did not run on the operations system, potentially due to environment issues or missing 
installations, leading to friction between the developer and operations.', 'duration': 28.793, 'max_score': 139.589, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY4139589.jpg'}, {'end': 512.008, 'src': 'embed', 'start': 488.277, 'weight': 2, 'content': [{'end': 494.86, 'text': "Building the code means if you have a Java file, for example, you'll have to create a jar file or an executable file out of that.", 'start': 488.277, 'duration': 6.583}, {'end': 499.402, 'text': 'So that is called building, right? So building your code and then you will be testing your code.', 'start': 495.16, 'duration': 4.242}, {'end': 505.445, 'text': "So what executable you have built, you'll basically test that code for any faults in the functionalities.", 'start': 499.422, 'duration': 6.023}, {'end': 510.327, 'text': 'And if there are no faults, you will basically release the code to the upstream.', 'start': 506.085, 'duration': 4.242}, {'end': 512.008, 'text': 'Now there are two kinds of release guys.', 'start': 510.387, 'duration': 1.621}], 'summary': 'Building code involves creating executable files, testing for faults, and releasing to upstream.', 'duration': 23.731, 'max_score': 488.277, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY4488277.jpg'}, {'end': 1352.941, 'src': 'embed', 'start': 1320.971, 'weight': 0, 'content': [{'end': 1322.991, 'text': 'it gets monitored for any errors.', 'start': 1320.971, 'duration': 2.02}, {'end': 1329.756, 'text': 'it gets monitored for any user comments on version upgrades right on version upgrades or, you can say, any feature requests.', 'start': 1322.991, 'duration': 6.765}, {'end': 1339.247, 'text': 'so all those feature requests, all those errors, all those logs are then visualized or then are then stored inside a monitoring tool.', 'start': 1329.756, 'duration': 9.491}, {'end': 1346.075, 'text': 'right, and this monitoring 
tool helps us in sorting these logs based on what we wanted.', 'start': 1339.247, 'duration': 6.828}, {'end': 1352.941, 'text': 'it will basically sorted on the basis of general logs, is sorted on the basis of error logs.', 'start': 1346.075, 'duration': 6.866}], 'summary': 'Logs and user comments are monitored and visualized for error detection and feature requests, aiding in sorting logs based on type.', 'duration': 31.97, 'max_score': 1320.971, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY41320971.jpg'}], 'start': 12.219, 'title': 'Devops and its lifecycle', 'summary': 'Discusses the demand for devops skills, its impact on productivity and release cycles, and explains the devops lifecycle with an emphasis on tools such as git, jenkins, docker, and nagios.', 'chapters': [{'end': 100.653, 'start': 12.219, 'title': 'Devops full course overview', 'summary': 'Discusses the popularity and demand for devops skills, outlines the agenda of the course, and introduces the roles of development and operations teams in the software release process, emphasizing the importance of devops in the it industry.', 'duration': 88.434, 'highlights': ['DevOps professionals are one of the highest paid ones in the IT industry The demand for DevOps professionals is high, and they are among the highest paid in the IT industry.', 'Agenda includes an introduction to DevOps, various DevOps tools, setting up CI-CD pipeline, comparison of DevOps and Agile, career path, salary, job opportunities, and interview questions and answers The agenda covers a wide range of topics, including DevOps introduction, tools, CI-CD pipeline setup, comparison with Agile, career guidance, and interview preparation.', 'Introduction to development and operations teams in the software release process The chapter introduces the roles of development and operations teams in software release, setting the stage for understanding the importance of DevOps in 
bridging these functions.']}, {'end': 606.617, 'start': 100.814, 'title': 'Devops methodology and its impact', 'summary': 'Discusses the challenges between developers and operations, the role of devops in solving these problems, and the comparison between traditional it and devops, highlighting the increased productivity, skill specialization, faster release cycles, and ease of goal achievement.', 'duration': 505.803, 'highlights': ['Using DevOps methodology, release cycles can be achieved in a day, as compared to weeks or months in traditional IT. DevOps methodology enables release cycles in a day, showcasing significant improvement over the weeks or months required in traditional IT.', 'DevOps methodology leads to smaller and frequent release cycles, utilizing automation to release software in shorter timeframes than traditional IT. DevOps promotes smaller and frequent release cycles, leveraging automation to release software in shorter timeframes compared to traditional IT.', 'DevOps fosters skill specialization in smaller teams, with individuals proficient in various technologies, unlike traditional IT where teams were skill-centric. DevOps encourages skill specialization within smaller teams, with individuals adept in various technologies, differing from the skill-centric teams in traditional IT.', 'DevOps methodology facilitates easier goal achievement through shorter release cycles, in contrast to the difficulties posed by longer-term objectives in traditional IT. 
DevOps methodology simplifies goal achievement with shorter release cycles, contrasting the challenges posed by longer-term objectives in traditional IT.']}, {'end': 1367.794, 'start': 607.018, 'title': 'Understanding devops lifecycle', 'summary': 'Explains the devops lifecycle, involving continuous development, integration, testing, deployment, and monitoring, where code is pushed to version control, integrated and tested, deployed, monitored, and the cycle repeats continuously.', 'duration': 760.776, 'highlights': ['Continuous monitoring involves sorting logs based on general logs, error logs, and feature requests, providing a single destination for planning and team coordination. Continuous monitoring organizes logs into categories such as general, error, and feature requests, serving as a centralized platform for planning and team coordination.', 'Continuous testing includes building executable files and running automated test suites to ensure the code passes all tests before deployment. Continuous testing involves creating and executing automated test suites on newly built executable files to verify code functionality before deployment.', 'Continuous integration automatically picks up code changes from the version control system and pushes them to the testing server, ensuring seamless integration of the development lifecycle. Continuous integration seamlessly integrates the development lifecycle by automatically picking up code changes from the version control system and pushing them to the testing server.', 'Continuous deployment involves automating the process of deploying code onto servers, ensuring the right software environment for the code to function. 
Continuous deployment automates the process of deploying code onto servers, ensuring the appropriate software environment for the code to operate.', 'Continuous development encompasses pushing code to a version control system, creating new code versions, and retaining previous versions for easy reversion if needed. Continuous development involves pushing code to a version control system, creating new code versions, and preserving previous versions for easy reversion if necessary.']}, {'end': 1768.999, 'start': 1367.794, 'title': 'Devops lifecycle and tools', 'summary': 'Covers the devops lifecycle stages, which enable shorter release cycles through automation with tools, including git for version control, jenkins for continuous integration, docker for virtualization, and nagios for continuous monitoring.', 'duration': 401.205, 'highlights': ['DevOps lifecycle stages enable shorter release cycles through automation with tools like Git, Jenkins, Docker, and Nagios. The DevOps lifecycle stages allow for shorter release cycles through automation using tools such as Git, Jenkins, Docker, and Nagios.', 'Git is a version control system used for team collaboration, tracking changes, and command line operations. Git is a version control system utilized for team collaboration, change tracking, and command line operations, commonly used in many companies.', 'Jenkins, an open-source tool, is used for implementing continuous integration features in the DevOps lifecycle. Jenkins, an open-source tool, is employed for implementing continuous integration features in the DevOps lifecycle.', 'Docker is used for virtualization and containerization, solving the problem of code portability and software dependencies by wrapping them in containers. 
Docker facilitates virtualization and containerization, addressing code portability and software dependencies by encapsulating them in containers.', 'Configuration management is achieved using tools like Puppet and Ansible, enabling the installation of required software without touching the target system. Tools like Puppet and Ansible enable configuration management, allowing the installation of necessary software without direct interaction with the target system.', 'Selenium is utilized for creating automation test suites in continuous testing to automatically test code once built. Selenium is employed for generating automation test suites in continuous testing to automatically test code after the build process.', 'Nagios is used for continuous monitoring, providing a dashboard for service monitoring, error detection, and user activity data collection. Nagios serves as a tool for continuous monitoring, offering a dashboard for service monitoring, error detection, and user activity data aggregation.']}], 'duration': 1756.78, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY412219.jpg', 'highlights': ['DevOps professionals are among the highest paid in the IT industry.', 'DevOps methodology enables release cycles in a day, showcasing significant improvement over traditional IT.', 'Continuous monitoring organizes logs into categories such as general, error, and feature requests, serving as a centralized platform for planning and team coordination.', 'Continuous testing involves creating and executing automated test suites on newly built executable files to verify code functionality before deployment.', 'The DevOps lifecycle stages allow for shorter release cycles through automation using tools such as Git, Jenkins, Docker, and Nagios.']}, {'end': 3629.522, 'segs': [{'end': 1982.584, 'src': 'embed', 'start': 1950.992, 'weight': 12, 'content': [{'end': 1954.933, 'text': "Then we'll be dealing with commands which 
can help you make some changes in Git.", 'start': 1950.992, 'duration': 3.941}, {'end': 1958.954, 'text': "Then we'll be dealing with commands which will help you do parallel development in Git.", 'start': 1955.173, 'duration': 3.781}, {'end': 1964.095, 'text': "And then towards the end, we'll also be learning commands which will help you in syncing your repositories.", 'start': 1959.374, 'duration': 4.721}, {'end': 1967.778, 'text': "Okay So let's discuss each of these domains one by one.", 'start': 1964.475, 'duration': 3.303}, {'end': 1970.819, 'text': "So let's talk about how to create repositories.", 'start': 1968.058, 'duration': 2.761}, {'end': 1972.14, 'text': "Let's start from the scratch.", 'start': 1970.939, 'duration': 1.201}, {'end': 1976.482, 'text': 'Right. So our first whenever you start a project,', 'start': 1972.6, 'duration': 3.882}, {'end': 1982.584, 'text': 'the first thing that you have to do is you have to go to your working directory and you have to do some code or make some code.', 'start': 1976.482, 'duration': 6.102}], 'summary': 'Git commands for making changes, parallel development, and syncing repositories will be discussed in detail.', 'duration': 31.592, 'max_score': 1950.992, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY41950992.jpg'}, {'end': 2090.014, 'src': 'embed', 'start': 2044.251, 'weight': 0, 'content': [{'end': 2051.634, 'text': 'Now for making it a Git initialized working directory all I have to do is type in Git init and hit enter.', 'start': 2044.251, 'duration': 7.383}, {'end': 2057.357, 'text': 'OK And now we have an initialized Git repository in front of us.', 'start': 2051.855, 'duration': 5.502}, {'end': 2062.108, 'text': 'OK So This is how you can initialize a directory in Git.', 'start': 2057.618, 'duration': 4.49}, {'end': 2068.853, 'text': 'So now you have created a repository in which you have your code which will now be tracked by Git.', 'start': 
2062.308, 'duration': 6.545}, {'end': 2070.295, 'text': "Let's come back to our slides.", 'start': 2069.114, 'duration': 1.181}, {'end': 2072.117, 'text': 'So this is exactly what we wanted to do.', 'start': 2070.415, 'duration': 1.702}, {'end': 2075.48, 'text': 'We wanted to initialize a Git repository and we have done that.', 'start': 2072.197, 'duration': 3.283}, {'end': 2077.562, 'text': 'We have added some files in that directory.', 'start': 2075.88, 'duration': 1.682}, {'end': 2080.023, 'text': "Now let's go ahead and see what our next command is.", 'start': 2077.88, 'duration': 2.143}, {'end': 2085.929, 'text': 'So our next command is about knowing the status of what are the files doing in that directory.', 'start': 2080.264, 'duration': 5.665}, {'end': 2087.692, 'text': 'Are those files saved in Git??', 'start': 2086.03, 'duration': 1.662}, {'end': 2090.014, 'text': 'What is the status of that directory?', 'start': 2088.032, 'duration': 1.982}], 'summary': 'Initialized git repository, added files, and checked status.', 'duration': 45.763, 'max_score': 2044.251, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY42044251.jpg'}, {'end': 3367.202, 'src': 'embed', 'start': 3341.213, 'weight': 7, 'content': [{'end': 3345.237, 'text': "okay, don't worry, if you're not understanding as of now, you'll understand as you move along, okay.", 'start': 3341.213, 'duration': 4.024}, {'end': 3347.278, 'text': 'so i have added a branch, feature one.', 'start': 3345.237, 'duration': 2.041}, {'end': 3355.094, 'text': 'now, if i want to delete a branch again, all i have to do is get branch hyphen d and then feature one.', 'start': 3347.278, 'duration': 7.816}, {'end': 3358.876, 'text': 'OK So it is deleted branch feature one.', 'start': 3356.275, 'duration': 2.601}, {'end': 3359.457, 'text': "That's it.", 'start': 3359.197, 'duration': 0.26}, {'end': 3362.699, 'text': 'OK So now your branch has been deleted.', 'start': 
3359.777, 'duration': 2.922}, {'end': 3365.961, 'text': 'OK And this is exactly what it what we wanted to do.', 'start': 3363.459, 'duration': 2.502}, {'end': 3367.202, 'text': 'Moving forward.', 'start': 3366.642, 'duration': 0.56}], 'summary': "Branch 'feature one' successfully added and deleted.", 'duration': 25.989, 'max_score': 3341.213, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY43341213.jpg'}], 'start': 1768.999, 'title': 'Git fundamentals and operations', 'summary': 'Introduces the importance of git in devops, its lifecycle, setting up repositories on github, syncing local and remote repositories, cloning and updating repositories, and branching in git for parallel development and isolation of changes.', 'chapters': [{'end': 2362.301, 'start': 1768.999, 'title': 'Introduction to git and its lifecycle', 'summary': "Introduces the importance of learning git in devops, highlighting its popularity as the most famous distributed version control system and detailing the process of creating a repository, tracking changes, and syncing repositories in git's lifecycle.", 'duration': 593.302, 'highlights': ['Git is the most famous among distributed version control systems, making it crucial for DevOps knowledge. The transcript emphasizes the popularity of Git as the most famous distributed version control system, highlighting its importance in DevOps knowledge.', "Process of creating a Git repository involves initializing it and adding files to be tracked, demonstrated with the 'git init' and 'git status' commands. The detailed process of creating a Git repository is explained, involving initialization and adding files to be tracked, demonstrated with the 'git init' and 'git status' commands.", 'Staging and committing changes to files in Git is crucial for tracking and saving modifications into the Git file system. 
The importance of staging and committing changes to files in Git is highlighted as crucial for tracking and saving modifications into the Git file system.', "Understanding the status of files and syncing repositories are integral parts of Git's lifecycle, demonstrated with the 'git status' command and syncing repositories process. The integral parts of Git's lifecycle, including understanding the status of files and syncing repositories, are detailed, demonstrated with the 'git status' command and the syncing repositories process."]}, {'end': 2553.773, 'start': 2362.301, 'title': 'Setting up a git repository on github', 'summary': 'Explains the process of creating a git repository on github, adding files to the repository, and pushing the files, with a demonstration of commands and successful file push to the repository.', 'duration': 191.472, 'highlights': ["The process of creating a Git repository on GitHub and adding files to the repository was demonstrated. Demonstration of creating a new public repository named 'DevOps course' on GitHub and copying the repository's HTTPS link.", "The command 'git remote add origin' was used to add the repository's URL to the command line. Explanation of using the command 'git remote add origin' to save the repository's URL in the command line for working on the Git repository.", "The command 'git push origin master' was used to push the files to the repository, requiring username and password authentication. 
Demonstration of using the command 'git push origin master' to push the files to the repository, including entering username and password for authentication."]}, {'end': 2759.643, 'start': 2553.973, 'title': 'Syncing local and remote repositories', 'summary': 'Demonstrates the process of adding, modifying, staging, and pushing files from a local repository to a remote repository using git, including adding 2.txt and 3.txt files, modifying 2.txt, and syncing the local repository to the remote repository.', 'duration': 205.67, 'highlights': ['The process of adding, modifying, staging, and pushing files from a local repository to a remote repository using Git is demonstrated. The chapter covers the steps involved in adding and modifying files, staging them, and pushing them from a local repository to a remote repository using Git.', "Adding 2.txt and 3.txt files to the remote repository is demonstrated. The process of adding 2.txt and 3.txt files to the remote repository is described, including using the 'git add' command and pushing the files to the origin master.", "Modifying the 2.txt file and pushing the changes to the remote repository is explained. The process of modifying the 2.txt file, staging the changes, and pushing them to the remote repository is detailed, including using the 'git add' and 'git push' commands."]}, {'end': 3243.939, 'start': 2759.643, 'title': 'Cloning and updating git repositories', 'summary': 'Discusses how to clone a repository using git, including the process of copying a repository to a local system, and the significance of git clone and git pull in updating and managing files in the repository.', 'duration': 484.296, 'highlights': ['Git clone allows for copying an entire repository to the local system, enabling developers to work on the code without the need for adding a remote origin, simplifying the process of pushing files to the repository. 
By using git clone, developers can easily copy an entire repository to their local system, eliminating the need to add a remote origin and simplifying the process of pushing files to the repository.', 'The significance of git pull is demonstrated in updating the local repository to the latest commit available on the remote repository, ensuring that developers have the most recent files in their system. Git pull plays a crucial role in updating the local repository to the latest commit available on the remote repository, ensuring that developers have the most recent files in their system.']}, {'end': 3629.522, 'start': 3243.939, 'title': 'Branching in git', 'summary': 'Explains how to create, switch, and manage branches in git, including the process of parallel development, merging, and pushing changes to a remote repository, emphasizing the importance of branching for parallel development and isolation of changes.', 'duration': 385.583, 'highlights': ["The process of creating a branch in Git is demonstrated, using the command 'git branch' followed by the name of the branch, resulting in the ability to assign code to multiple developers and enabling parallel development. By using the command 'git branch' followed by the branch name, developers can create branches to assign code to multiple developers, facilitating parallel development.", 'The concept of merging branches is explained, emphasizing that merging adds the code from the feature branch to the master branch, enabling changes made in the feature branch to be reflected in the master branch. 
Merging branches allows the code from the feature branch to be added to the master branch, ensuring that changes made in the feature branch are reflected in the master branch.', "The process of pushing changes to a remote repository is detailed, using the command 'git push origin feature one' to push the feature one branch to the remote repository, demonstrating the workflow of pushing changes to a specific branch in the remote repository. The command 'git push origin feature one' is used to push the feature one branch to the remote repository, showcasing the process of pushing changes to a specific branch in the remote repository."]}], 'duration': 1860.523, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY41768999.jpg', 'highlights': ['Git is crucial for DevOps knowledge, being the most famous distributed version control system.', 'Staging and committing changes in Git is crucial for tracking and saving modifications.', "Understanding the status of files and syncing repositories are integral parts of Git's lifecycle.", 'The process of creating a Git repository on GitHub and adding files was demonstrated.', "Using 'git remote add origin' to save the repository's URL in the command line is explained.", "Demonstration of using 'git push origin master' to push files to the repository is provided.", 'The chapter covers the steps involved in adding and modifying files, staging them, and pushing them from a local repository to a remote repository using Git.', 'The process of adding 2.txt and 3.txt files to the remote repository is demonstrated.', 'The process of modifying the 2.txt file, staging the changes, and pushing them to the remote repository is detailed.', 'By using git clone, developers can easily copy an entire repository to their local system.', 'Git pull plays a crucial role in updating the local repository to the latest commit available on the remote repository.', 'Developers can create branches to 
assign code to multiple developers, facilitating parallel development.', 'Merging branches allows the code from the feature branch to be added to the master branch.', "The command 'git push origin feature one' is used to push the feature one branch to the remote repository."]}, {'end': 4819.498, 'segs': [{'end': 4137.461, 'src': 'embed', 'start': 4108.54, 'weight': 8, 'content': [{'end': 4110.6, 'text': 'now, what will happen now?', 'start': 4108.54, 'duration': 2.06}, {'end': 4118.594, 'text': 'if i go to my master, get checkout master.', 'start': 4110.6, 'duration': 7.994}, {'end': 4123.816, 'text': 'okay, and if I do an LS, I can see the 7.txt file is no longer visible over here.', 'start': 4118.594, 'duration': 5.222}, {'end': 4128.478, 'text': 'okay, and also, if I do a cat on 5.txt, I can see this.', 'start': 4123.816, 'duration': 4.662}, {'end': 4132.439, 'text': 'this is the original file that the master should have right now.', 'start': 4128.478, 'duration': 3.961}, {'end': 4136.081, 'text': 'say, I want to wanted to do something in my master branch.', 'start': 4132.439, 'duration': 3.642}, {'end': 4137.461, 'text': "I'm done with that work.", 'start': 4136.081, 'duration': 1.38}], 'summary': 'Checked out master branch, 7.txt file disappeared, 5.txt file contains original content.', 'duration': 28.921, 'max_score': 4108.54, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY44108540.jpg'}, {'end': 4555.96, 'src': 'embed', 'start': 4524.78, 'weight': 0, 'content': [{'end': 4528.102, 'text': "so i don't no longer have 7.txt, which i created,", 'start': 4524.78, 'duration': 3.322}, {'end': 4535.587, 'text': 'and i no longer have the contents of 5t.txt to be the same as what was there in that particular commit.', 'start': 4528.102, 'duration': 7.485}, {'end': 4539.89, 'text': 'okay, but remember, guys, you still have that commit in your logs and if you want,', 'start': 4535.587, 'duration': 4.303}, {'end': 
4543.652, 'text': 'you can go to that commit and see what all things you want to fix right.', 'start': 4539.89, 'duration': 3.762}, {'end': 4545.534, 'text': "It's right here,", 'start': 4544.133, 'duration': 1.401}, {'end': 4555.96, 'text': 'but the head of the feature branch has now changed to a new commit which basically has undid changes or undone changes of what was there over here.', 'start': 4545.534, 'duration': 10.426}], 'summary': 'Changes in 7.txt and 5t.txt were undone in a new commit, but the original commit is still accessible for reference.', 'duration': 31.18, 'max_score': 4524.78, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY44524780.jpg'}, {'end': 4797.127, 'src': 'embed', 'start': 4774.15, 'weight': 2, 'content': [{'end': 4781.215, 'text': 'I am comparing my latest commit with what is there currently in my folder, that is,', 'start': 4774.15, 'duration': 7.065}, {'end': 4788.241, 'text': 'I hit enter and you can see it says that there is something which has changed in the 4.txt file.', 'start': 4781.215, 'duration': 7.026}, {'end': 4794.945, 'text': 'So there was test file for clone before, which now has been changed to test file for diff.', 'start': 4788.741, 'duration': 6.204}, {'end': 4797.127, 'text': 'And this is exactly what I wanted to check.', 'start': 4795.366, 'duration': 1.761}], 'summary': 'Comparing latest commit with current folder, 4.txt file changed from test file for clone to test file for diff.', 'duration': 22.977, 'max_score': 4774.15, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY44774150.jpg'}], 'start': 3629.522, 'title': 'Git commands and branch management', 'summary': 'Covers the usage of git log, stash, revert, and diff commands, providing insights into branch history, reverting commits, file comparison, and preventing mix-up when working with multiple branches, with practical examples and 
demonstrations.', 'chapters': [{'end': 3844.124, 'start': 3629.522, 'title': 'Git log command and branch history', 'summary': 'Covers the usage of the git log command to view the history of a repository, the relationship between branch creation and commit history, and how git revert undoes the changes of a specific commit.', 'duration': 214.602, 'highlights': ['The git log command displays the complete history of the repository, including all commits and their respective IDs.', "A branch created from master includes master's earlier commits in its history, while subsequent commits on master do not appear in the feature branch's log.", "The 'git revert' command, given a specific commit ID, creates a new commit that undoes the changes introduced by that commit, while the original commit stays in the log."]}, {'end': 4545.534, 'start': 3844.611, 'title': 'Git stash command and revert feature', 'summary': 'Explains the git stash command, which temporarily shelves uncommitted changes, and the git revert feature, which undoes the changes of a specific commit, both of which prevent file mix-ups and confusion when working with multiple branches, demonstrated with examples.', 'duration': 700.923, 'highlights': ['The git stash command lets developers temporarily shelve uncommitted changes before switching to another branch, preventing file mix-ups and potential problems when working with multiple branches.', 'Stashing keeps changes isolated to the branch they belong to, ensuring that work is committed to the correct branch.', 'The git revert feature undoes the changes of a specific commit, keeping code in different branches isolated.']}, {'end': 4819.498, 'start': 4545.534, 'title': 'Understanding git revert and git diff', 'summary': 'Covers the usage of git revert to undo changes and git diff to compare differences between commits or files, demonstrating the process with examples and clarifying its purpose.', 'duration': 273.964, 'highlights': ['The git diff command shows the differences between two files or commits, displaying added, deleted, and modified content; comparing two commit IDs reveals the changes to 5.txt and the addition of 7.txt.', "Running git diff with HEAD compares the committed version with the current directory's uncommitted changes, revealing the modification of 4.txt from 'test file for clone' to 'test file for diff'.
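The git workflow walked through in this section (branching, reverting, stashing, and diffing) can be reproduced end to end in a throwaway repository. A minimal sketch, assuming git is installed; file names such as 5.txt and 7.txt follow the demo, and the branch name feature1 is an assumption for the spoken "feature one":

```shell
# Scratch repo reproducing the demo: branch, commit, revert, diff, stash.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name Demo
main=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git version

echo "master content" > 5.txt
git add . && git commit -qm "add 5.txt"

git checkout -qb feature1               # new branch; it inherits master's history
echo "feature change" >> 5.txt
echo "new file" > 7.txt
git add . && git commit -qm "change 5.txt, add 7.txt"
git log --oneline                       # both commits are visible in the history

# git revert undoes a commit's changes via a NEW commit; the old one stays in the log.
git revert -n HEAD && git commit -qm "undo feature work"
test ! -f 7.txt                         # 7.txt is gone again

# git diff HEAD: committed version vs. uncommitted changes in the working directory.
echo "uncommitted edit" >> 5.txt
git diff HEAD -- 5.txt

# git stash shelves the uncommitted edit so we can switch branches cleanly.
git stash -q
git checkout -q "$main"
cat 5.txt                               # back to the original master content
```

Later, `git stash pop` on feature1 would reapply the shelved edit on the branch it belongs to, which is how the demo keeps files from mixing between branches.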
", 'Explanation of the purpose of git diff: checking differences between commits or files, underscored by practical demonstrations in the examples.']}], 'duration': 1189.976, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY43629522.jpg', 'highlights': ['The git log command enables users to view the complete history of the repository, including all commits and their respective IDs.', "Branch creation from the master branch includes the previous commits in its history, while subsequent commits on the master branch do not affect the feature branch's log.", "The 'git revert' command, when used with a specific commit ID, creates a new commit that undoes the changes introduced by that commit.", 'The git stash command helps in avoiding confusion by temporarily shelving changes and switching to another branch, preventing file mix-up and potential problems when working with multiple branches.', 'The git stash command helps in isolating changes to specific branches, preventing confusion, and ensuring that changes are committed to the correct branch, thus avoiding mix-ups.', 'The git revert feature allows developers to undo specific commit changes, ensuring that code in different branches remains isolated and preventing mix-ups and confusion.', 'The git diff command facilitates understanding differences in files or commits, exemplified by the comparison of commit IDs showing the changes in 5.txt and the addition of 7.txt.', "Running git diff with HEAD compares the committed version with the uncommitted changes in the current directory, highlighting the alteration of 4.txt content from 'test file for clone' to 'test file for diff'.", 'Git diff checks differences between commits or files, as demonstrated practically in the provided examples.']}, {'end': 7378.356, 'segs': [{'end': 4875.694, 'src': 'embed', 'start': 4844.698, 'weight': 1, 'content': [{'end': 4847.96, 'text': "Let's see what all components would be there in that particular environment.", 'start': 4844.698, 'duration': 3.262}, {'end': 4854.923, 'text': 'so the first thing that you would need in this environment, that is, around the php file, is an operating system.', 'start': 4848.38, 'duration': 6.543}, {'end': 4864.386, 'text': 'right. so an operating system which probably would have a browser or would have a text editor, right, so you need an operating system to work on Now.', 'start': 4854.923, 'duration': 9.463}, {'end': 4867.428, 'text': 'secondly, because you are developing a PHP application,', 'start': 4864.386, 'duration': 3.042}, {'end': 4875.694, 'text': 'of course you need the PHP software installed on that operating system so that your file can actually work right?', 'start': 4867.428, 'duration': 8.266}], 'summary': 'To develop a php application, you need an operating system with necessary components and php software installed.', 'duration': 30.996, 'max_score': 4844.698, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY44844698.jpg'}, {'end': 5201.707, 'src': 'embed', 'start': 5178.699, 'weight': 2, 'content': [{'end': 5187.543, 'text': 'the developer environment is going to be the same and And the moment it passes gets passed on to the ops guy he will again deploy the same wrapper.', 'start': 5178.699, 'duration': 8.844}, {'end': 5196.425, 'text': 'And his problem is also solved because the ops guy does not have to match the versions of the OS, the software, the libraries,', 'start': 5187.943,
'duration': 8.482}, {'end': 5201.707, 'text': 'because everything is there inside that wrapper or everything is there inside that container right?', 'start': 5196.425, 'duration': 5.282}], 'summary': 'Developers and ops can use the same environment, reducing version-matching issues.', 'duration': 23.008, 'max_score': 5178.699, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY45178699.jpg'}, {'end': 6857.907, 'src': 'heatmap', 'start': 6442.095, 'weight': 0.748, 'content': [{'end': 6452.604, 'text': 'so basically this if you get something like this, that this means that your container has just started and then to view all the running containers,', 'start': 6442.095, 'duration': 10.509}, {'end': 6458.909, 'text': 'all you have to do is pass the command docker ps and that would list down all the containers which are currently running in your system.', 'start': 6452.604, 'duration': 6.305}, {'end': 6460.63, 'text': 'So let us do that as well.', 'start': 6459.369, 'duration': 1.261}, {'end': 6462.452, 'text': 'So, let me just clear the screen.', 'start': 6460.81, 'duration': 1.642}, {'end': 6470.739, 'text': 'So, just pass in sudo docker ps and this would basically show you the container which you have just started.', 'start': 6462.952, 'duration': 7.787}, {'end': 6478.946, 'text': 'So, as you can see, I started Ubuntu container 29 seconds ago and this is the container ID for that particular container.', 'start': 6470.779, 'duration': 8.167}, {'end': 6481.108, 'text': 'Alright So, my container is running.', 'start': 6479.266, 'duration': 1.842}, {'end': 6484.628, 'text': "so what's next next step would be to see.", 'start': 6481.827, 'duration': 2.801}, {'end': 6489.09, 'text': 'docker ps would basically show you the containers which are run in the running state.', 'start': 6484.628, 'duration': 4.462}, {'end': 6492.231, 'text': 'but what if you want to see all the containers which are there in your system?', 
'start': 6489.09, 'duration': 3.141}, {'end': 6501.415, 'text': 'for example, what i can do is i can do a sudo docker stop and i can stop this container right.', 'start': 6492.231, 'duration': 9.184}, {'end': 6511.404, 'text': 'so this container is now stopped and what i can do is i can run one more container, which would be sudo docker, run hyphen it,', 'start': 6501.415, 'duration': 9.989}, {'end': 6514.365, 'text': 'hyphen d and then ubuntu.', 'start': 6511.404, 'duration': 2.961}, {'end': 6514.965, 'text': 'sorry about that.', 'start': 6514.365, 'duration': 0.6}, {'end': 6520.047, 'text': 'so if i do a docker ps, i would only be able to see the container which is currently running.', 'start': 6514.965, 'duration': 5.082}, {'end': 6520.587, 'text': 'so you can see.', 'start': 6520.047, 'duration': 0.54}, {'end': 6522.668, 'text': 'it is up seven seconds ago.', 'start': 6520.587, 'duration': 2.081}, {'end': 6529.431, 'text': 'but if i want to see all the containers which are there on my system, that could either be in the start state or the stop state.', 'start': 6522.668, 'duration': 6.763}, {'end': 6537.295, 'text': 'all i have to do is type in the command sudo docker ps and then hyphen A.', 'start': 6529.431, 'duration': 7.864}, {'end': 6540.236, 'text': 'And with this, I can see the containers which are running.', 'start': 6537.295, 'duration': 2.941}, {'end': 6544.857, 'text': 'So this is container which is running, which was made around 20 seconds ago.', 'start': 6540.276, 'duration': 4.581}, {'end': 6549.518, 'text': 'And this is the container which has exited, which we manually stopped.', 'start': 6545.397, 'duration': 4.121}, {'end': 6555.28, 'text': 'And we can see that also by passing the command sudo docker ps hyphen A.', 'start': 6550.359, 'duration': 4.921}, {'end': 6559.7, 'text': 'okay, now the next step would be to work with the container.', 'start': 6556.238, 'duration': 3.462}, {'end': 6565.083, 'text': 'so we have started the containers, but 
the next step would be to start working on them.', 'start': 6559.7, 'duration': 5.383}, {'end': 6566.444, 'text': 'and how can we do that?', 'start': 6565.083, 'duration': 1.361}, {'end': 6570.926, 'text': 'we can basically do that using the command docker exec.', 'start': 6566.444, 'duration': 4.482}, {'end': 6572.647, 'text': "so what we'll do?", 'start': 6570.926, 'duration': 1.721}, {'end': 6577.67, 'text': "we'll just first get the container id so that container id is docker ps.", 'start': 6572.647, 'duration': 5.023}, {'end': 6581.905, 'text': 'Alright, so this is the container.', 'start': 6580.242, 'duration': 1.663}, {'end': 6584.57, 'text': 'This is where the this is the container which is currently running.', 'start': 6581.925, 'duration': 2.645}, {'end': 6592.284, 'text': 'Now if you if I want to get inside this container, the command for that would be docker sudo docker.', 'start': 6584.971, 'duration': 7.313}, {'end': 6603.312, 'text': 'exec, so E-X-E-C, then hyphen IT, make it interactive, give the container ID, and then bash.', 'start': 6593.59, 'duration': 9.722}, {'end': 6610.154, 'text': "Bash would be that I want to run this container in the current terminal space that I'm working in,", 'start': 6603.792, 'duration': 6.362}, {'end': 6613.254, 'text': "and the current terminal space is bash and I'll hit enter.", 'start': 6610.154, 'duration': 3.1}, {'end': 6615.135, 'text': 'So let me just clear the screen.', 'start': 6613.954, 'duration': 1.181}, {'end': 6621.436, 'text': 'So can you see that you are now inside the container, right? 
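The exec walkthrough above condenses into a short annotated command sequence. This is illustrative only: it needs a running Docker daemon, and `<id>` stands for the container ID printed by `docker ps`:

```shell
# Attach an interactive bash shell inside a running container.
sudo docker exec -it <id> bash

# Inside the container you act as root of a minimal Ubuntu userland:
apt-get update      # works: apt is part of the Ubuntu image
docker ps           # fails: docker is not installed inside the container
sudo docker ps      # fails: even the sudo command is absent from the image

exit                # back to the host shell; the container keeps running
```

The failing commands are the point of the demo: a container ships only the bare-minimum libraries of its base image, so host tools like docker and sudo simply do not exist inside it.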
So this is the container ID.', 'start': 6615.555, 'duration': 5.881}, {'end': 6625.955, 'text': 'this is the user of the container.', 'start': 6623.354, 'duration': 2.601}, {'end': 6631.117, 'text': 'so we are basically acting as root of the container ID.', 'start': 6625.955, 'duration': 5.162}, {'end': 6635.078, 'text': 'that is, we are inside the container and we are acting as root.', 'start': 6631.117, 'duration': 3.961}, {'end': 6638.459, 'text': 'so this is basically the environment.', 'start': 6635.078, 'duration': 3.381}, {'end': 6645.301, 'text': 'this is the environment that I was talking about, that the developer will start working in right and this is basically a Ubuntu container.', 'start': 6638.459, 'duration': 6.842}, {'end': 6650.363, 'text': 'so all the Ubuntu commands are gonna work inside this particular container.', 'start': 6645.301, 'duration': 5.062}, {'end': 6654.009, 'text': 'So once we are inside the container, we can do whatever like.', 'start': 6651.047, 'duration': 2.962}, {'end': 6657.792, 'text': 'so, for example, if I have to update the container,', 'start': 6654.009, 'duration': 3.783}, {'end': 6668.819, 'text': "I can do an apt-get update and it will start updating the container as if it's a new operating system which is running on the system right.", 'start': 6657.792, 'duration': 11.027}, {'end': 6677.685, 'text': 'and also, to show you guys that this is different, or this is, this is completely independent from what we But doing outside the container.', 'start': 6668.819, 'duration': 8.866}, {'end': 6681.767, 'text': 'you remember we have docker installed on our host operating system.', 'start': 6677.685, 'duration': 4.082}, {'end': 6685.508, 'text': 'now, if I try to access Docker from here,', 'start': 6681.767, 'duration': 3.741}, {'end': 6688.99, 'text': 'I will not be able to do that so if I do a pseudo docker ps.', 'start': 6685.689, 'duration': 3.301}, {'end': 6693.792, 'text': 'You can see that it says Okay, let me do a 
command docker ps.', 'start': 6689.65, 'duration': 4.142}, {'end': 6698.102, 'text': 'You can see that it says docker command not found, and right.', 'start': 6694.173, 'duration': 3.929}, {'end': 6704.428, 'text': 'so basically, docker is not installed inside of the container and that is the reason it is not able to access docker.', 'start': 6698.102, 'duration': 6.326}, {'end': 6712.957, 'text': 'also, an interesting thing that you can see over here is that i pass a command sudo docker ps and it says sudo command not found.', 'start': 6704.428, 'duration': 8.529}, {'end': 6717.101, 'text': 'so the sudo library is not installed in the container.', 'start': 6712.957, 'duration': 4.144}, {'end': 6718.622, 'text': 'So I told you,', 'start': 6718.102, 'duration': 0.52}, {'end': 6731.557, 'text': 'the bare minimum libraries which are required for a container to make it behave as a particular operating system is only present in the container and nothing else,', 'start': 6718.622, 'duration': 12.935}, {'end': 6737.111, 'text': 'and that is why you can see even sudo as a command is not present inside the container.', 'start': 6731.557, 'duration': 5.554}, {'end': 6737.991, 'text': 'all right.', 'start': 6737.111, 'duration': 0.88}, {'end': 6744.593, 'text': 'so if you are inside the container and you want to exit the container, all you have to do is type in the command exit,', 'start': 6737.991, 'duration': 6.602}, {'end': 6748.494, 'text': 'and this will make you come out to your host operating system.', 'start': 6744.593, 'duration': 3.901}, {'end': 6750.515, 'text': 'but mind you guys, your container is still running.', 'start': 6748.494, 'duration': 2.021}, {'end': 6757.777, 'text': 'so if you do a docker ps, you can still see the container running in your docker ps space.', 'start': 6750.515, 'duration': 7.262}, {'end': 6759.585, 'text': 'all right.', 'start': 6758.985, 'duration': 0.6}, {'end': 6763.146, 'text': 'so this is how you can get inside a container.', 
'start': 6759.585, 'duration': 3.561}, {'end': 6770.047, 'text': "all you have to do is docker, exec, hyphen id and then the container id space, the terminal that you're working in,", 'start': 6763.146, 'duration': 6.901}, {'end': 6773.028, 'text': 'which in my case was bash all right.', 'start': 6770.047, 'duration': 2.981}, {'end': 6774.348, 'text': 'okay, now again.', 'start': 6773.028, 'duration': 1.32}, {'end': 6777.748, 'text': 'so this i already showed you that if you want to stop a container,', 'start': 6774.348, 'duration': 3.4}, {'end': 6786.49, 'text': 'all you have to do is sudo docker stop and then the container id hit enter and the container will be stopped.', 'start': 6777.748, 'duration': 8.742}, {'end': 6793.177, 'text': 'and then, if you do a docker ps, you will not be able to see any containers which will be running inside the system.', 'start': 6787.046, 'duration': 6.131}, {'end': 6799.388, 'text': 'so, as you can see once we do once we did a stop there are no containers running over there in a docker ps space.', 'start': 6793.177, 'duration': 6.211}, {'end': 6810.013, 'text': "now, okay, You can also kill a container in case a container becomes non-responsive and you're stopping the container but it's not able to exit.", 'start': 6799.388, 'duration': 10.625}, {'end': 6812.875, 'text': 'What you can do is you can kill a container.', 'start': 6810.473, 'duration': 2.402}, {'end': 6819.458, 'text': "It's similar to that of stop but when you stop a container it basically gracefully exits the container.", 'start': 6813.075, 'duration': 6.383}, {'end': 6826.042, 'text': "It's just like shutting down your computer or just switching off the power switch from behind.", 'start': 6820.219, 'duration': 5.823}, {'end': 6829.484, 'text': "the computer's power outlet right.", 'start': 6826.962, 'duration': 2.522}, {'end': 6839.23, 'text': 'so if you have the stop the container but the container is still not stopping because of some program which is running 
in loop inside the container or something like that,', 'start': 6829.484, 'duration': 9.746}, {'end': 6844.814, 'text': 'you can immediately kill the container using the command docker kill and then the container ID.', 'start': 6839.23, 'duration': 5.584}, {'end': 6847.776, 'text': 'then you have something called as docker RM.', 'start': 6844.814, 'duration': 2.962}, {'end': 6857.907, 'text': 'so basically, I told you guys that if you do a docker ps, Hyphen a, you can still see the containers which were stopped,', 'start': 6847.776, 'duration': 10.131}], 'summary': 'Using docker commands to start, stop, and work with containers, including accessing and updating containers, and handling non-responsive containers.', 'duration': 415.812, 'max_score': 6442.095, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY46442095.jpg'}, {'end': 7308.5, 'src': 'embed', 'start': 7274.807, 'weight': 3, 'content': [{'end': 7289.243, 'text': 'right now if i exit this container and say i do an ls, sorry if i do a sudo docker ps, i can see that the container is running.', 'start': 7274.807, 'duration': 14.436}, {'end': 7296.211, 'text': 'but the catch here is that if i delete this container, say i delete this container.', 'start': 7289.243, 'duration': 6.968}, {'end': 7299.475, 'text': 'okay. 
so one more thing, guys.', 'start': 7297.714, 'duration': 1.761}, {'end': 7308.5, 'text': 'if you want to remove or delete a container which is running, you can pass in the command sudo docker, RM, hyphen F and then the container ID.', 'start': 7299.475, 'duration': 9.025}], 'summary': "To delete a running docker container, use 'sudo docker rm -f '", 'duration': 33.693, 'max_score': 7274.807, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY47274807.jpg'}, {'end': 7378.356, 'src': 'embed', 'start': 7372.634, 'weight': 0, 'content': [{'end': 7377.336, 'text': 'these changes do not propagate into the image that you downloaded originally.', 'start': 7372.634, 'duration': 4.702}, {'end': 7378.356, 'text': 'all right now,', 'start': 7377.336, 'duration': 1.02}], 'summary': 'Changes do not propagate to original downloaded image.', 'duration': 5.722, 'max_score': 7372.634, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY47372634.jpg'}], 'start': 4819.879, 'title': 'Docker: simplifying development environments', 'summary': 'Discusses challenges faced by developers in setting up and replicating development environments, emphasizing the need for consistent and reproducible environments, and the role of docker in creating portable containers. 
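The lifecycle operations covered in this section fit into one reference sequence. A sketch assuming a host with Docker installed (`sudo` may be unnecessary if your user is in the docker group); `<id>` is a placeholder for a container ID from `docker ps`:

```shell
sudo docker pull ubuntu        # download the image from Docker Hub
sudo docker run -itd ubuntu    # -it: interactive terminal, -d: run as a daemon
sudo docker ps                 # running containers only
sudo docker ps -a              # all containers, including stopped ones

sudo docker stop <id>          # graceful shutdown, like powering off normally
sudo docker kill <id>          # force-stop a non-responsive container
sudo docker rm <id>            # delete a stopped container
sudo docker rm -f <id>         # delete a container even while it is running

# Shortcut from the demo: remove every container on the system at once.
sudo docker rm -f $(sudo docker ps -aq)
```

`docker ps -aq` prints only the IDs of all containers, which is what makes the one-line cleanup shortcut work.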
It also covers docker installation on various operating systems, docker container lifecycle, and managing docker containers and images.', 'chapters': [{'end': 4991.753, 'start': 4819.879, 'title': 'Docker: simplifying development environments', 'summary': 'Discusses the challenges faced by developers in setting up and replicating development environments, highlighting the need for consistent and reproducible environments and the potential complications arising from version discrepancies in software.', 'duration': 171.874, 'highlights': ['Developers require an operating system, necessary software, and libraries to create and test their programs, emphasizing the need for consistent development environments.', "Operations teams face challenges in replicating the developer's environment, especially with version differences in software such as PHP, creating potential complications in running the program.", 'The transition from PHP 5.6 to PHP 7 poses challenges due to legacy issues and changes in commands, impacting the code base and software compatibility.']}, {'end': 5753.797, 'start': 4991.753, 'title': 'Docker: a solution for portable environments', 'summary': "The chapter discusses the challenges in replicating development environments, the role of docker in creating portable containers, and the comparison between docker's lightweight containers and traditional vms.", 'duration': 762.044, 'highlights': ['Docker is a tool that creates lightweight containers, with sizes as low as 50 MB, containing all the necessary code, operating system, and libraries, allowing for easy replication and testing of environments.', 'Replicating development environments without Docker led to inconsistencies, large VM sizes, and complexity in handling staging, deployment, and production areas.', 'Docker containers, leveraging the underlying kernel of the host operating system, provide lightweight and portable environments, unlike traditional VMs that require full operating system installations, resulting in larger sizes.']}, {'end': 6070.317, 'start': 5753.797, 'title': 'Docker installation and container lifecycle', 'summary': 'Discusses the process of installing docker on different operating systems, including mac, windows, and ubuntu, emphasizing commands for ubuntu installation and verifying the installation. it also introduces the docker container lifecycle and the concept of a central repository like github for code base management.', 'duration': 316.52, 'highlights': ['The chapter explains the different installation processes for Mac, Windows, and Ubuntu, providing specific links for Mac and Windows and commands for Ubuntu.', "It provides specific commands for Ubuntu installation, including 'sudo apt-get update' and 'sudo apt-get install docker.io', and emphasizes checking the Docker version to verify the installation.", 'It introduces the Docker container lifecycle and compares a central repository like GitHub to the central place from which a code base is downloaded, a clear analogy for understanding container lifecycle and code base management.']}, {'end': 6370.155, 'start': 6071.35, 'title': 'Docker container lifecycle', 'summary': 'Explains the lifecycle of a docker container, including pulling an image from docker hub, running the image to create a container, and the common docker operations like finding the docker version and pulling an image from docker hub.', 'duration': 298.805, 'highlights': ['The first step of the Docker container lifecycle is pulling an image from Docker Hub, followed by running the image to create a container; common Docker operations include finding the Docker version and pulling an image from Docker Hub.', "The command 'docker version' shows the current Docker version installed on the system, along with the build name.", "The 'docker pull' command downloads a container image from Docker Hub, as shown by pulling the Ubuntu image.", "The 'docker images' command verifies the downloaded image and its size, showcasing the significantly smaller size of the Ubuntu container image (86.2 MB) compared to a traditional operating system (1.5-2 GB)."]}, {'end': 7378.356, 'start': 6370.155, 'title': 'Managing docker containers and images', 'summary': 'Covers managing docker containers and images, including running, stopping, killing, and removing containers, as well as removing images. It also includes creating a docker hub account and saving changes to a container.', 'duration': 1008.201, 'highlights': ["The command 'docker run' is used with flags like '-it' for an interactive terminal and '-d' to run the container as a daemon in the background.", "Running containers can be viewed using the 'docker ps' command, while all containers, including stopped ones, can be viewed using 'docker ps -a'.", "The 'docker exec' command is used to work within a running container by making it interactive and accessing its terminal.", "To save changes made within a container, commit them to the image using the 'docker commit' command.", 'Creating a Docker Hub account involves signing up on hub.docker.com, verifying the email, and remembering the unique Docker ID.']}], 'duration': 2558.477, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY44819879.jpg', 'highlights': ['Developers need consistent environments with operating systems, software, and libraries to run their programs.', 'Docker creates lightweight containers, with sizes as low as 50 MB, containing all the necessary code, operating system, and libraries.', 'The chapter discusses the process of installing Docker on different operating systems, including Mac, Windows, and Ubuntu.', 'The first step of the Docker container lifecycle is pulling an image from Docker Hub, followed by running the image to create a container.', "The command 'docker run' is used to run an image with additional flags like '-it' for interactive terminal and '-d' to run the container as a daemon."]}, {'end': 8788.827, 'segs': [{'end': 7520.908, 'src': 'embed', 'start': 7489.314, 'weight': 4, 'content': [{'end': 7501.598, 'text': 'right, and now I can run a container on the image test and all I have to do is sudo docker, run-it-d and then test.', 'start': 7489.314, 'duration': 12.284}, {'end': 7513.225, 'text': 'this would run the container and if I go inside the container now I can see that my changes would be there inside this container.', 'start': 7501.598, 'duration': 11.627}, {'end': 7514.666, 'text': 'so if I do a list.', 'start': 7513.225, 'duration': 1.441}, {'end': 7520.908, 'text': 'I can see that the app folder is present and this is how you can create a custom container.', 'start': 7514.666, 'duration': 6.242}], 'summary': "Running a custom container
using 'sudo docker run-it-d' command allows for changes to be made inside the container and the creation of a custom container with the 'app' folder present.", 'duration': 31.594, 'max_score': 7489.314, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY47489314.jpg'}, {'end': 7594.264, 'src': 'embed', 'start': 7561.447, 'weight': 1, 'content': [{'end': 7564.148, 'text': "we're going to save that container into an image.", 'start': 7561.447, 'duration': 2.701}, {'end': 7567.59, 'text': "Alright. so let's see how we can do this.", 'start': 7564.808, 'duration': 2.782}, {'end': 7574.556, 'text': "So let's clear the screen Let's exit this container and let us first clear everything up.", 'start': 7568.291, 'duration': 6.265}, {'end': 7575.937, 'text': 'So sudo docker ps.', 'start': 7574.816, 'duration': 1.121}, {'end': 7577.639, 'text': 'There are two containers running.', 'start': 7576.538, 'duration': 1.101}, {'end': 7581.382, 'text': 'Okay. 
so let me show you one more command, which is basically like a shortcut.', 'start': 7577.639, 'duration': 3.743}, {'end': 7581.942, 'text': 'so if you,', 'start': 7581.382, 'duration': 0.56}, {'end': 7594.264, 'text': "if there are like tens or five or more than five or more than three containers running on your system and You always don't have to pass in the command sudo docker-rm-f and then pass the container IDs.", 'start': 7581.942, 'duration': 12.322}], 'summary': 'Demonstrating saving container into an image and managing multiple containers.', 'duration': 32.817, 'max_score': 7561.447, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY47561447.jpg'}, {'end': 7642.035, 'src': 'embed', 'start': 7612.689, 'weight': 0, 'content': [{'end': 7616.51, 'text': 'You pass in this command, it would remove all the containers which are present in your system.', 'start': 7612.689, 'duration': 3.821}, {'end': 7624.079, 'text': 'So if I do a sudo docker ps now, you can see there are no containers running anymore.', 'start': 7617.173, 'duration': 6.906}, {'end': 7629.564, 'text': 'So this is a shortcut that you can use while working with Docker.', 'start': 7624.359, 'duration': 5.205}, {'end': 7633.447, 'text': 'So my next step is to basically install.', 'start': 7629.904, 'duration': 3.543}, {'end': 7637.05, 'text': 'So first I have to run an Ubuntu container.', 'start': 7634.008, 'duration': 3.042}, {'end': 7642.035, 'text': "So I'll do a sudo docker hyphen run hyphen id ubuntu.", 'start': 7637.071, 'duration': 4.964}], 'summary': "The command removes all containers from the system, as demonstrated by 'sudo docker ps', showing no running containers. 
a shortcut for docker tasks, the next step involves installing an ubuntu container with 'sudo docker run -id ubuntu'.", 'duration': 29.346, 'max_score': 7612.689, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY47612689.jpg'}, {'end': 8230.559, 'src': 'embed', 'start': 8202.726, 'weight': 5, 'content': [{'end': 8214.271, 'text': "so i'll pass in the username and then i'll pass in the password and on a successful login, you will get this message that is, login.", 'start': 8202.726, 'duration': 11.545}, {'end': 8216.031, 'text': 'succeeded, awesome.', 'start': 8214.271, 'duration': 1.76}, {'end': 8217.873, 'text': 'so i have logged into my Docker Hub account.', 'start': 8216.031, 'duration': 1.842}, {'end': 8218.994, 'text': 'now what i want to do?', 'start': 8217.873, 'duration': 1.121}, {'end': 8220.193, 'text': 'i want to push my image.', 'start': 8218.994, 'duration': 1.199}, {'end': 8225.097, 'text': "so i'll have to do a sudo docker push.", 'start': 8220.193, 'duration': 4.904}, {'end': 8226.277, 'text': 'and what was your image name?', 'start': 8225.097, 'duration': 1.18}, {'end': 8230.559, 'text': 'it was hshar slash apache.', 'start': 8226.277, 'duration': 4.282}], 'summary': "Logged into Docker Hub, pushing image 'hshar/apache' with sudo docker push.", 'duration': 27.833, 'max_score': 8202.726, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY48202726.jpg'}, {'end': 8318.192, 'src': 'embed', 'start': 8286.443, 'weight': 3, 'content': [{'end': 8289.164, 'text': 'and let us go ahead and learn what is docker file.', 'start': 8286.443, 'duration': 2.721}, {'end': 8291.905, 'text': 'but before that, let us recap what we have just learned.', 'start': 8289.164, 'duration': 2.741}, {'end': 8295.517, 'text': 'so we understood what docker is.', 'start': 8292.415, 'duration': 3.102}, {'end': 8296.657, 'text': 'why do we need docker?', 'start': 8295.517, 'duration': 
1.14}, {'end': 8298.739, 'text': 'what is docker exactly?', 'start': 8296.657, 'duration': 2.082}, {'end': 8303.903, 'text': 'then we went through some of the docker operations, the day-to-day operations that you would be going through.', 'start': 8298.739, 'duration': 5.164}, {'end': 8305.804, 'text': 'we got acquainted through that.', 'start': 8303.903, 'duration': 1.901}, {'end': 8318.192, 'text': 'then we saw how to run a container and then how to do some changes to the container and save it on your local system and finally how to push that container onto a Docker Hub.', 'start': 8305.804, 'duration': 12.388}], 'summary': 'Introduction to docker, including operations and container management', 'duration': 31.749, 'max_score': 8286.443, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY48286442.jpg'}, {'end': 8389.078, 'src': 'embed', 'start': 8367.771, 'weight': 2, 'content': [{'end': 8376.814, 'text': 'So a docker file is nothing but a text document in which you write how do you want your container to be customized right.', 'start': 8367.771, 'duration': 9.043}, {'end': 8381.514, 'text': 'So one example, like I just told you guys, was when I did it manually.', 'start': 8377.093, 'duration': 4.421}, {'end': 8389.078, 'text': 'I ran a container, I went inside it, I installed the software, came out, committed the container and then pushed it on to docker hub.', 'start': 8381.514, 'duration': 7.564}], 'summary': 'Docker file is a text document to customize containers. 
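The manual run → modify → commit → push cycle recapped here can be sketched with standard Docker CLI commands. The container name `webserver` is an illustrative choice of mine, and `hshar/apache` is the username/repository pair used in the video (yours will differ):

```shell
# 1. Launch a detached, interactive Ubuntu container to work in
sudo docker run -itd --name webserver ubuntu

# 2. Modify it: refresh the package index and install Apache inside it
sudo docker exec webserver apt-get update
sudo docker exec webserver apt-get install -y apache2

# 3. Freeze the modified container into an image named <hub-user>/<repo>
sudo docker commit webserver hshar/apache

# 4. Authenticate against Docker Hub and publish the image
sudo docker login
sudo docker push hshar/apache
```

The `<hub-user>/<repo>` naming is what lets `docker push` know which Docker Hub account and repository to publish to.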
one example is manual customization, such as installing software and pushing to docker hub.', 'duration': 21.307, 'max_score': 8367.771, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY48367771.jpg'}], 'start': 7378.356, 'title': 'Docker container customization and management', 'summary': "Covers saving changes inside a docker container using 'docker commit' command, resulting in a new image size of 212 mb, demonstrates docker port mapping for apache with successful access on port 82, and explains pushing an image to docker hub and introduces docker file, its syntax, usage, and differences with manual container customization.", 'chapters': [{'end': 7849.664, 'start': 7378.356, 'title': 'Saving changes inside a docker container', 'summary': "Explains how to use 'docker commit' command to save changes inside a container, create a custom image, and install software within the container, leading to a new image size of 212 mb.", 'duration': 471.308, 'highlights': ["The 'docker commit' command is used to save changes inside a container and create a new image. By using the 'docker commit' command, users can save changes made inside a container and create a new image, thereby preserving the modifications made within the container.", 'Installing software within a container can significantly increase the image size, as demonstrated by the increase from 86 MB to 212 MB after installing a software. The installation of software within a container resulted in a notable increase in image size, growing from 86 MB to 212 MB, showcasing the impact of software installations on container image sizes.', "The naming convention for custom images requires specifying the user ID and the desired container name. 
The naming convention for custom images necessitates specifying the user ID and the desired container name, following the format 'user ID/container name,' as explained for pushing images to a docker app account."]}, {'end': 8116.082, 'start': 7849.664, 'title': 'Docker port mapping for apache', 'summary': 'Demonstrates how to use docker to create a container running apache, showcasing port mapping to access the software externally, with a successful demonstration of accessing apache on port 82.', 'duration': 266.418, 'highlights': ['The chapter demonstrates how to use Docker to create a container running Apache, showcasing port mapping to access the software externally, with a successful demonstration of accessing Apache on port 82.', 'Port mapping allows the internal port of the container to be linked with the outside host operating system, as exemplified by mapping port 82 of the host OS to port 80 of the container, enabling successful access to Apache software externally.', "The successful execution of accessing Apache on port 82 after mapping the container's port 80 to the host OS's port 82 validates the functionality of port mapping within Docker."]}, {'end': 8788.827, 'start': 8116.802, 'title': 'Pushing image to docker hub & introduction to docker file', 'summary': 'Covers the process of pushing an image to docker hub, including commands and steps involved, as well as an introduction to docker file and its syntax, usage, and differences with manual container customization.', 'duration': 672.025, 'highlights': ["The process of pushing an image to Docker Hub involves logging in, using 'sudo docker login', pushing the image using 'sudo docker push', and verifying the image on Docker Hub by visiting the repository page and refreshing the page, showcasing the successful completion of the process.", "Introduction to Docker file, its role as a text document specifying container customization, its automation of changes, and its usage of syntaxes such as 'from', 
'add', 'run', 'CMD', 'entry point', and 'env' for specifying base image, adding files, running commands, and defining container start-up commands.", "Docker file serves as a script-style automation for making changes to a container image, providing a faster and more efficient approach compared to manual processes, with the ability to specify base image, add files, run commands, and define start-up commands using syntaxes like 'from', 'add', 'run', 'CMD', 'entry point', and 'env'."]}], 'duration': 1410.471, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY47378356.jpg', 'highlights': ["The 'docker commit' command is used to save changes inside a container and create a new image.", 'Installing software within a container can significantly increase the image size, as demonstrated by the increase from 86 MB to 212 MB after installing a software.', 'The naming convention for custom images requires specifying the user ID and the desired container name.', 'The chapter demonstrates how to use Docker to create a container running Apache, showcasing port mapping to access the software externally, with a successful demonstration of accessing Apache on port 82.', 'Port mapping allows the internal port of the container to be linked with the outside host operating system, as exemplified by mapping port 82 of the host OS to port 80 of the container, enabling successful access to Apache software externally.', "The process of pushing an image to Docker Hub involves logging in, using 'sudo docker login', pushing the image using 'sudo docker push', and verifying the image on Docker Hub by visiting the repository page and refreshing the page, showcasing the successful completion of the process.", "Introduction to Docker file, its role as a text document specifying container customization, its automation of changes, and its usage of syntaxes such as 'from', 'add', 'run', 'CMD', 'entry point', and 'env' for specifying base image, 
adding files, running commands, and defining container start-up commands.", "Docker file serves as a script-style automation for making changes to a container image, providing a faster and more efficient approach compared to manual processes, with the ability to specify base image, add files, run commands, and define start-up commands using syntaxes like 'from', 'add', 'run', 'CMD', 'entry point', and 'env'."]}, {'end': 10499.668, 'segs': [{'end': 9270.486, 'src': 'heatmap', 'start': 8855.274, 'weight': 0.704, 'content': [{'end': 8859.318, 'text': "so i'll specify nano and then docker file.", 'start': 8855.274, 'duration': 4.044}, {'end': 8860.539, 'text': 'we went inside.', 'start': 8859.318, 'duration': 1.221}, {'end': 8867.264, 'text': 'so the first thing that we want to do is we want the ubuntu image to be called.', 'start': 8860.539, 'duration': 6.725}, {'end': 8871.528, 'text': 'then we want to update this image.', 'start': 8867.264, 'duration': 4.264}, {'end': 8873.77, 'text': 'so apt get update.', 'start': 8871.528, 'duration': 2.242}, {'end': 8878.696, 'text': 'then we want to install apache inside it.', 'start': 8874.554, 'duration': 4.142}, {'end': 8884.479, 'text': 'apt-get-y. 
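The Dockerfile being dictated in this walkthrough can be written out as below. This is a sketch of what the narration describes (Ubuntu base, Apache install, files added to /var/www/html, an env variable, Apache in the foreground), not the exact file from the video; the image name `new_dockerfile` and port 84 follow the demo:

```shell
# Write out the Dockerfile the narration describes
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update
RUN apt-get install -y apache2
ADD . /var/www/html
ENV name Intellipaat
ENTRYPOINT apachectl -D FOREGROUND
EOF

# A minimal page for Apache to serve
cat > 1.html <<'EOF'
<html><body><h1>Hello from Intellipaat</h1></body></html>
EOF

# Build and run, guarded so the snippet is harmless without a Docker engine
if command -v docker >/dev/null 2>&1; then
  docker build . -t new_dockerfile
  docker run -itd -p 84:80 new_dockerfile   # host port 84 -> container port 80
fi
```

After this, browsing to `http://<host>:84/1.html` should show the page, as in the demo.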
sorry about that.', 'start': 8878.696, 'duration': 5.783}, {'end': 8894.904, 'text': 'so apt-get -y install apache2, all right, sounds good.', 'start': 8884.479, 'duration': 10.425}, {'end': 8903.515, 'text': 'then we are going to add all the files from this directory to the directory /var/www/html.', 'start': 8894.904, 'duration': 8.611}, {'end': 8909.578, 'text': "so we're going to create this uh file, do not worry, we'll just create this file right.", 'start': 8903.515, 'duration': 6.063}, {'end': 8917.501, 'text': 'once we have done that, uh, the next step would be to run apache in the foreground, that is, run it.', 'start': 8909.578, 'duration': 7.923}, {'end': 8926.125, 'text': 'so apachectl hyphen d foreground.', 'start': 8917.501, 'duration': 8.624}, {'end': 8931.342, 'text': 'So this would run Apache automatically.', 'start': 8929.241, 'duration': 2.101}, {'end': 8943.43, 'text': 'Okay. And you specify the entry point and say I also want to specify an environment variable.', 'start': 8931.362, 'duration': 12.068}, {'end': 8953.677, 'text': 'Say let me create an environment variable called name and I want to specify the value as Intellipaat.', 'start': 8943.51, 'duration': 10.167}, {'end': 8957.145, 'text': 'okay, sounds good.', 'start': 8955.484, 'duration': 1.661}, {'end': 8964.989, 'text': "so this is my docker file and i'll just save this now and let me create a html file as well.", 'start': 8957.145, 'duration': 7.844}, {'end': 8974.974, 'text': "so let's create a one dot html and let us make it a little simple.", 'start': 8964.989, 'duration': 9.985}, {'end': 8978.136, 'text': 'so this would be a hello world html file.', 'start': 8974.974, 'duration': 3.162}, {'end': 8999.302, 'text': 'okay, the body and in the body say i have an h1 which will say hello from Intellipaat.', 'start': 8985.276, 'duration': 14.026}, {'end': 9007.605, 'text': 'close the header, close the body and then close the html.', 'start': 8999.302, 'duration': 8.303}, {'end': 
9011.467, 'text': 'all right, this is what i want to do.', 'start': 9007.605, 'duration': 3.862}, {'end': 9023.618, 'text': "i'll save this and now we are done with the docker file and we have the uh html page in place.", 'start': 9011.467, 'duration': 12.151}, {'end': 9026.44, 'text': 'so the next step would be to build this docker file.', 'start': 9023.618, 'duration': 2.822}, {'end': 9027.8, 'text': "now let's see how we can build it.", 'start': 9026.44, 'duration': 1.36}, {'end': 9034.903, 'text': 'so for building this docker file, all you have to do is docker build.', 'start': 9027.8, 'duration': 7.103}, {'end': 9036.104, 'text': 'where do you want to build it?', 'start': 9034.903, 'duration': 1.201}, {'end': 9041.851, 'text': 'i want to build it from the current directory and the image that it will create.', 'start': 9036.104, 'duration': 5.747}, {'end': 9049.621, 'text': 'i want to name that image as say new underscore docker file.', 'start': 9041.851, 'duration': 7.77}, {'end': 9052.084, 'text': 'okay, so it will be named like this.', 'start': 9049.621, 'duration': 2.463}, {'end': 9056.93, 'text': "so i'll hit enter and i forgot to mention sudo.", 'start': 9052.084, 'duration': 4.846}, {'end': 9061.006, 'text': 'so let me just clear the screen.', 'start': 9058.985, 'duration': 2.021}, {'end': 9061.286, 'text': 'all right.', 'start': 9061.006, 'duration': 0.28}, {'end': 9068.008, 'text': 'so let me first teach you guys how to run a docker uh docker command without sudo.', 'start': 9061.286, 'duration': 6.722}, {'end': 9072.37, 'text': 'so for doing that, just type in sudo usermod,', 'start': 9068.008, 'duration': 4.362}, {'end': 9096.271, 'text': 'uh, hyphen a g, then docker, then dollar user, hit enter, and now all i have to do is just re-login into your session and this should work.', 'start': 9072.37, 'duration': 23.901}, {'end': 9103.853, 'text': 'So if I do a docker ps now without sudo, you can see the command is running. Awesome.', 'start': 
9096.271, 'duration': 7.582}, {'end': 9112.596, 'text': 'So What I want to do is I want to go inside the docker file.', 'start': 9104.513, 'duration': 8.083}, {'end': 9115.596, 'text': 'Okay, and now I want to build this docker file.', 'start': 9112.596, 'duration': 3}, {'end': 9125.889, 'text': 'so docker build, dot, hyphen t, new underscore docker file.', 'start': 9115.596, 'duration': 10.293}, {'end': 9148.722, 'text': 'so you can see the docker file is now being executed and basically this is creating the container for me and this will basically come up with the image which will have all the changes that we just told to the container.', 'start': 9125.889, 'duration': 22.833}, {'end': 9157.585, 'text': 'so now, if i do a docker images, i can see that there is a new image which has been created, which is new underscore docker file.', 'start': 9148.722, 'duration': 8.863}, {'end': 9163.427, 'text': 'so let us run this new underscore docker file: a docker run, hyphen it, hyphen d.', 'start': 9157.585, 'duration': 5.842}, {'end': 9166.868, 'text': 'uh, also, let us specify the port number.', 'start': 9163.427, 'duration': 3.441}, {'end': 9183.957, 'text': 'so let us open it at port 84 and then d, new underscore docker file, and then hit enter.', 'start': 9166.868, 'duration': 17.089}, {'end': 9187.318, 'text': 'okay, so the container is launched now.', 'start': 9183.957, 'duration': 3.361}, {'end': 9200.087, 'text': 'so if i do a docker ps, i can see that a container has been launched seven seconds ago. awesome, and it is being opened on port 84.', 'start': 9187.318, 'duration': 12.769}, {'end': 9204.832, 'text': "so let's check if everything is working well for us.", 'start': 9200.087, 'duration': 4.745}, {'end': 9211.498, 'text': 'so we just have to change this to 84 and you can see apache is running.', 'start': 9204.832, 'duration': 6.666}, {'end': 9219.666, 'text': "awesome, apache is running and now let's check our web page if it's there in the container or 
not. so we named it as one dot html.', 'start': 9211.498, 'duration': 8.168}, {'end': 9228.375, 'text': 'And yes, so this is the HTML that we created, which has been added inside the container.', 'start': 9220.749, 'duration': 7.626}, {'end': 9231.597, 'text': 'So let me show you inside the container what exactly happened.', 'start': 9228.655, 'duration': 2.942}, {'end': 9242.405, 'text': 'OK, so docker exec -it container name and then bash and let me just clear the screen OK.', 'start': 9233.158, 'duration': 9.247}, {'end': 9246.07, 'text': 'Let me compare it with my docker file.', 'start': 9243.429, 'duration': 2.641}, {'end': 9253.894, 'text': 'So the first thing that we did was apt-get update, apt-get hyphen y install apache2; this basically installed Apache.', 'start': 9246.07, 'duration': 7.824}, {'end': 9255.234, 'text': 'So this was clear.', 'start': 9253.894, 'duration': 1.34}, {'end': 9260.957, 'text': 'then we added everything from the current directory to /var/www/html, right.', 'start': 9255.234, 'duration': 5.723}, {'end': 9270.486, 'text': 'So if we go inside var and then www and then html and do an ls over there,', 'start': 9261.237, 'duration': 9.249}], 'summary': 'Created a docker file to build and run apache server, resulting in a new image and container running on port 84.', 'duration': 415.212, 'max_score': 8855.274, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY48855274.jpg'}, {'end': 10023.054, 'src': 'embed', 'start': 9990.642, 'weight': 3, 'content': [{'end': 9993.904, 'text': 'let me just duplicate the session.', 'start': 9990.642, 'duration': 3.262}, {'end': 9999.948, 'text': "and what i'm going to do now is i'm going to launch one more container.", 'start': 9995.167, 'duration': 4.781}, {'end': 10008.19, 'text': "okay, so i'm going to launch docker, run hyphen it, hyphen,", 'start': 9999.948, 'duration': 8.242}, {'end': 10018.792, 'text': 'hyphen mount and then say source is 
equal to test and target could be some other folder as well, but for the sake of simplicity,', 'start': 10008.19, 'duration': 10.602}, {'end': 10023.054, 'text': "let's keep slash app And then hyphen D and Ubuntu.", 'start': 10018.792, 'duration': 4.262}], 'summary': "Launching one more container using docker with source 'test' and target '/app' in ubuntu.", 'duration': 32.412, 'max_score': 9990.642, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY49990642.jpg'}, {'end': 10255.454, 'src': 'embed', 'start': 10205.42, 'weight': 0, 'content': [{'end': 10226.924, 'text': 'so if I exit this container and if I launch a new container with the mount as Apache and target would be, say slash app, slash app and the image,', 'start': 10205.42, 'duration': 21.504}, {'end': 10230.005, 'text': 'is this okay?', 'start': 10226.924, 'duration': 3.081}, {'end': 10235.487, 'text': 'so this is the Ubuntu image right, so it should not have the files inside it.', 'start': 10230.005, 'duration': 5.482}, {'end': 10253.673, 'text': 'but if I go inside this container now, Can you see the files over here are same as what was there inside this container where slash,', 'start': 10235.487, 'duration': 18.186}, {'end': 10255.454, 'text': 'double slash HTML.', 'start': 10253.673, 'duration': 1.781}], 'summary': 'Discussion about launching a new container with a specific mount and target directory.', 'duration': 50.034, 'max_score': 10205.42, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY410205420.jpg'}, {'end': 10361.292, 'src': 'embed', 'start': 10311.204, 'weight': 4, 'content': [{'end': 10315.426, 'text': 'i want to copy this tool.html inside this container.', 'start': 10311.204, 'duration': 4.222}, {'end': 10316.347, 'text': 'so what i will do?', 'start': 10315.426, 'duration': 0.921}, {'end': 10329.334, 'text': "i'll do a docker cp dot, slash two dot html and then slash, slash 
html and then the container id.", 'start': 10316.347, 'duration': 12.987}, {'end': 10347.545, 'text': 'Okay, so now two dot HTML is present inside the container.', 'start': 10343.783, 'duration': 3.762}, {'end': 10351.107, 'text': "Okay Now what I will do, I'll just delete the container.", 'start': 10347.845, 'duration': 3.262}, {'end': 10354.949, 'text': 'I delete the new underscore Docker file container.', 'start': 10351.607, 'duration': 3.342}, {'end': 10360.031, 'text': 'Docker RM hyphen F.', 'start': 10355.869, 'duration': 4.162}, {'end': 10361.292, 'text': 'Delete Okay.', 'start': 10360.031, 'duration': 1.261}], 'summary': "Copied tool.html into container using 'docker cp' command and then deleted the container.", 'duration': 50.088, 'max_score': 10311.204, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY410311204.jpg'}], 'start': 8788.827, 'title': 'Dockerfile, container management, and volume mounting', 'summary': 'Explains creating dockerfile with environment variables, building and managing containers, docker volumes, bind mounts, differences between volumes and bind mounts, and volume mounting in docker containers, showcasing file transfer impact on container behavior.', 'chapters': [{'end': 9023.618, 'start': 8788.827, 'title': 'Creating dockerfile with environment variables', 'summary': "Explains how to create a dockerfile with environment variables, update an ubuntu image, install apache, and configure a simple html file, while setting an environment variable, 'name', to 'intellipath'.", 'duration': 234.791, 'highlights': ["Using 'env' command to set environment variables inside a Docker container, such as setting 'name' variable to 'IntelliPath'.", "Creating a Dockerfile to update an Ubuntu image, install Apache, and add files to the directory where 'www.html' resides.", "Configuring a simple HTML file with 'hello from IntelliPath' message."]}, {'end': 9476.643, 'start': 9023.618, 'title': 
'Dockerfile building and container management', 'summary': 'Covers the process of building a dockerfile to create an image, running a container from the image, and managing the container, with key points on building, running, and managing containers without sudo, creating and managing docker volumes, and persisting data across container lifecycles.', 'duration': 453.025, 'highlights': ["Explaining the process of building a Dockerfile to create an image and running a container from the image, with key points on naming the image, managing sudo access, building the image, running the container, and accessing the container's web page and content. Docker image created, container launched, container accessed, web page verified.", 'Demonstrating the process of managing Docker volumes to persist data across the lifetime of a container, with key points on the importance of Docker volumes in persisting data and preventing data loss when containers are deleted and relaunched. Example scenario of website content loss due to container deletion.']}, {'end': 9763.325, 'start': 9476.643, 'title': 'Docker volumes and bind mounts', 'summary': 'Discusses how docker volumes and bind mounts can be used to host storage outside the container and map it inside, ensuring persistence of data across container lifecycles, and highlights the differences and limitations between bind mounts and docker volumes.', 'duration': 286.682, 'highlights': ['Docker volumes host storage outside the container and map it inside, ensuring persistence of data across container lifecycles. Docker volumes allow the storage to be on the host system rather than on the container, ensuring that the attached container will have the same file system as the older container, even if the container is deleted or started again.', 'Bind mounts map a particular file location inside the container, mirroring the directory on the host operating system, but may face limitations in different environment setups. 
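A bind mount of the kind described here can be sketched as follows. The `./site` directory and container name `binddemo` are illustrative choices, and the Docker steps are guarded so the snippet is harmless on a machine without a Docker engine:

```shell
# A bind mount mirrors a host directory inside the container: anything
# written under ./site on the host shows up at /app in the container,
# and vice versa.
mkdir -p site
echo '<h1>hello</h1>' > site/index.html

if command -v docker >/dev/null 2>&1; then
  docker run -itd --name binddemo \
    --mount type=bind,source="$(pwd)/site",target=/app ubuntu
  docker exec binddemo ls /app   # lists index.html from the host
fi
```

Because the files physically live on the host, edits made on either side are immediately visible on the other, which is exactly the dynamic behaviour (and the portability limitation) described above.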
Bind mounts mirror the files from the directory on the host operating system inside the container, allowing any changes made inside the directory to be dynamically available inside the container. However, bind mounts may face limitations in different environment setups, such as when working with different operating systems.']}, {'end': 10075.528, 'start': 9763.325, 'title': 'Docker volumes vs bind mounts', 'summary': 'Explains the differences between docker volumes and bind mounts, highlighting the advantage of volumes in managing storage and ease of migration, as well as the syntax and practical demonstration of creating and sharing volumes between containers.', 'duration': 312.203, 'highlights': ["Volumes are managed by Docker and provide automatic identification of storage location, making migration easy and handling file system itself, as demonstrated by the creation and sharing of 'test' volume between two containers. Volumes are automatically handled by the Docker engine; 'test' volume shared between two containers.", 'Bind mounts require manual identification of the storage location and binding to the container, making it less convenient for migration and not managed by Docker. Bind mounts need manual handling of storage location and binding to the container.', "Creating a volume is demonstrated using the command 'docker volume create' followed by the volume name, as shown by the creation of 'test' volume. 
Demonstration of creating a volume using 'docker volume create' for 'test' volume."]}, {'end': 10499.668, 'start': 10077.159, 'title': 'Docker volume mounting', 'summary': 'Discusses the process of mounting volumes in docker containers, showcasing the seamless transfer of files and directories, as well as the impact on container behavior when volumes are mounted or not, with quantifiable data on file transfers and container behavior.', 'duration': 422.509, 'highlights': ["The process of mounting volumes in Docker containers allows for seamless transfer of files and directories, maintaining the container's contents even when launched again, with an example showcasing the transfer of files from the host to the container. N/A", 'Demonstration of the impact of volume mounting on container behavior, showing files being automatically included in the volume directory, eliminating the need for manual addition, and the ability to access transferred files in subsequent containers, highlighting the practicality of volume mounting. N/A', 'Illustration of the impact of volume mounting on container behavior, exemplifying the presence of files inside the container even if not originally included in the image, with quantifiable data on file availability across containers. 
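The 'test' volume demonstration can be condensed into a few commands; the file name `data.txt` is illustrative, and the whole sequence is guarded so it only runs where a Docker engine exists:

```shell
if command -v docker >/dev/null 2>&1; then
  # A named volume is created and managed by the Docker engine itself
  docker volume create test

  # One container writes a file into the volume mounted at /app...
  docker run --rm --mount source=test,target=/app ubuntu \
    sh -c 'echo hello > /app/data.txt'

  # ...and a completely new container sees the same file, because the
  # data lives in the volume on the host, not inside either container.
  docker run --rm --mount source=test,target=/app ubuntu cat /app/data.txt
fi
```

This is the persistence property discussed above: deleting and relaunching containers does not touch the volume's contents.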
N/A']}], 'duration': 1710.841, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY48788827.jpg', 'highlights': ["Demonstration of creating a volume using 'docker volume create' for 'test' volume.", "Volumes are managed by Docker and provide automatic identification of storage location, making migration easy and handling file system itself, as demonstrated by the creation and sharing of 'test' volume between two containers.", "Explaining the process of building a Dockerfile to create an image and running a container from the image, with key points on naming the image, managing sudo access, building the image, running the container, and accessing the container's web page and content. Docker image created, container launched, container accessed, web page verified.", 'Docker volumes host storage outside the container and map it inside, ensuring persistence of data across container lifecycles. Docker volumes allow the storage to be on the host system rather than on the container, ensuring that the attached container will have the same file system as the older container, even if the container is deleted or started again.', "Using 'env' command to set environment variables inside a Docker container, such as setting 'name' variable to 'IntelliPath'.", 'Demonstration of the impact of volume mounting on container behavior, showing files being automatically included in the volume directory, eliminating the need for manual addition, and the ability to access transferred files in subsequent containers, highlighting the practicality of volume mounting.']}, {'end': 12611.863, 'segs': [{'end': 10933.602, 'src': 'embed', 'start': 10904.672, 'weight': 0, 'content': [{'end': 10911.679, 'text': "for example, in uber, until and unless you don't make the payment, you will not be allowed to book cab right.", 'start': 10904.672, 'duration': 7.007}, {'end': 10920.769, 'text': 'so still, communication has to be there between these two 
components, that is, the payment and the booking has to be, you know,', 'start': 10911.679, 'duration': 9.09}, {'end': 10922.351, 'text': 'communicating with each other.', 'start': 10920.769, 'duration': 1.582}, {'end': 10926.115, 'text': "but now they don't have to communicate through in between the program.", 'start': 10922.351, 'duration': 3.764}, {'end': 10933.602, 'text': 'they can communicate through probably HTTP ways or, you know, by hitting the API of each other.', 'start': 10926.115, 'duration': 7.487}], 'summary': 'Uber now uses http or hitting apis for communication between payment and booking components.', 'duration': 28.93, 'max_score': 10904.672, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY410904672.jpg'}, {'end': 12104.719, 'src': 'embed', 'start': 12078.285, 'weight': 1, 'content': [{'end': 12084.81, 'text': 'so there in the swarm there will be one machine which will be called as the leader, which will basically tell the workers what to do,', 'start': 12078.285, 'duration': 6.525}, {'end': 12088.912, 'text': 'and the workers will have the containers running on them.', 'start': 12084.81, 'duration': 4.102}, {'end': 12095.377, 'text': 'right. 
so you have the leader, like we have it over here, and then you have multiple workers which are running on the cluster,', 'start': 12088.912, 'duration': 6.465}, {'end': 12098.798, 'text': 'And these workers will run the containers that we want to launch.', 'start': 12095.977, 'duration': 2.821}, {'end': 12104.719, 'text': 'So this was about Docker Swarm, but this is not it.', 'start': 12100.618, 'duration': 4.101}], 'summary': 'Docker swarm involves a leader machine directing workers running containers on a cluster.', 'duration': 26.434, 'max_score': 12078.285, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY412078285.jpg'}, {'end': 12513.21, 'src': 'embed', 'start': 12488.367, 'weight': 3, 'content': [{'end': 12498.778, 'text': 'so this is how you can go ahead and create a docker swarm cluster, and the command, like i said, is docker swarm init,', 'start': 12488.367, 'duration': 10.411}, {'end': 12508.406, 'text': 'advertise address equal to ip address of leader, specify that, hit enter and just copy this command on the worker and it will work like a charm,', 'start': 12498.778, 'duration': 9.628}, {'end': 12509.087, 'text': 'all right.', 'start': 12508.406, 'duration': 0.681}, {'end': 12513.21, 'text': 'so our next step is now to deploy an app on the docker swarm.', 'start': 12509.087, 'duration': 4.123}], 'summary': "Create a docker swarm cluster using 'docker swarm init' command, then deploy an app on the swarm.", 'duration': 24.843, 'max_score': 12488.367, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY412488367.jpg'}], 'start': 10499.668, 'title': 'Docker volumes, microservices & container orchestration', 'summary': 'Covers docker volumes, microservices, and container orchestration, discussing their relevance in a production-grade environment, challenges of monolithic applications, advantages of microservices, deploying multiple containers with 
docker compose, and understanding docker swarm as a clustering and scheduling tool.', 'chapters': [{'end': 10620.76, 'start': 10499.668, 'title': 'Understanding docker volumes & microservices', 'summary': 'Discusses the concept of docker volumes, the relevance of microservices in a production-grade environment, and the characteristics of a monolithic application using examples like uber app, and how it contributes to the understanding of microservices architecture.', 'duration': 121.092, 'highlights': ['The relevance of microservices in a production-grade environment is explained, highlighting the need for multiple docker containers working together.', 'The concept of a monolithic application is described using the example of an Uber app, including its components like notification, mail, payments, location services, customer service, and passenger management.', 'An interactive question is posed to test the knowledge of DevOps tools, offering a prize of 5000 points in Intellipaat wallet to the correct answer.', 'The session introduces the understanding of Docker volumes and their significance in the context of DevOps.']}, {'end': 11172.666, 'start': 10620.76, 'title': 'Monolithic vs microservices', 'summary': 'Discusses the challenges of monolithic applications, including the risks of code changes impacting the entire app, downtime, and technology restrictions, and introduces the advantages of microservices, which include independent modules, reduced downtime, and flexibility in technology choices.', 'duration': 551.906, 'highlights': ['Microservices allow for independent modules that are not dependent on each other, reducing the risk of downtime and impact on the entire application when making changes. 
The microservices architecture allows for segregated modules that can exist independently, reducing the risk of downtime and impact on the entire application when making changes.', 'In contrast to monolithic applications, microservices enable reduced downtime when updating features, as problems are isolated to specific modules without affecting other services. With microservices, updating features in a specific module only affects that module and does not impact other services, reducing downtime compared to monolithic applications.', 'Microservices offer the flexibility to use different technologies for individual modules, solving the problem of technology restrictions present in monolithic applications. The use of microservices allows for the flexibility to use different technologies for individual modules, addressing the technology restrictions present in monolithic applications.', 'Monolithic applications pose challenges such as understanding dependencies between components, redeploying the entire application, and the potential for a bug in one module to affect the entire application. Challenges of monolithic applications include understanding dependencies between components, redeploying the entire application, and the potential for a bug in one module to affect the entire application.']}, {'end': 11459.909, 'start': 11172.666, 'title': 'Docker compose for deploying multiple containers', 'summary': 'Discusses the use of docker compose to deploy multiple containers at once, creating and configuring them with a single command, using a yaml file, and demonstrates it through a sample docker file deploying a wordpress website with mysql, specifying the containers, their images, environment variables, dependencies, and ports.', 'duration': 287.243, 'highlights': ['Docker Compose allows creating and configuring multiple containers at once with a single command using a YAML file. 
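The WordPress-plus-MySQL compose file walked through in the video is along these lines. This is a minimal sketch: the service names, credentials, and the host port mapping are illustrative assumptions, not values from the video.

```yaml
# Hypothetical docker-compose.yml: `docker-compose up -d` launches and
# wires both containers at once. Credentials/ports are placeholders.
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder password
      MYSQL_DATABASE: wordpress
  wordpress:
    image: wordpress:latest
    depends_on:
      - db                           # start the MySQL container first
    ports:
      - "8080:80"                    # host:container port mapping
    environment:
      WORDPRESS_DB_HOST: db:3306     # reach MySQL by its service name
      WORDPRESS_DB_PASSWORD: example
```

Within the compose network, the WordPress container resolves the database by the service name `db`, which is how the two containers interact as a multi-tier app.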
The tool enables the creation and configuration of multiple containers at once with a single command using a YAML file, making it efficient and convenient for managing multiple containers.', 'Demonstrating Docker Compose through a sample Docker file deploying a WordPress website with MySQL. The speaker demonstrates the power of Docker Compose by using a sample Docker file to deploy a WordPress website with MySQL, showcasing its ability to handle dependencies, environment variables, and container specifications.', 'Detailed explanation of the Docker Compose file specifying containers, images, environment variables, dependencies, and ports. The detailed breakdown of the Docker Compose file includes specifying containers, their images, environment variables, dependencies, and ports, providing a comprehensive understanding of its configuration.']}, {'end': 11923.053, 'start': 11459.909, 'title': 'Docker compose for multi-container deployment', 'summary': 'Discusses the process of installing docker compose on linux, creating a yaml file for multi-container deployment, launching multiple containers at once using docker compose, and understanding the interaction between containers in a multi-tier app.', 'duration': 463.144, 'highlights': ['The process of installing Docker Compose on Linux Explains the need to install Docker Compose separately on Linux, with details on how to install it and verify the installation.', 'Launching multiple containers at once using Docker Compose Describes the steps to create a YAML file for multi-container deployment, renaming the file to adhere to Docker Compose naming conventions, and launching multiple containers using Docker Compose.', 'Understanding the interaction between containers in a multi-tier app Provides insights into the interaction between containers in a multi-tier app, specifically showcasing how a WordPress container interacts with a MySQL container, and explains the concept of multi-tier apps and microservices 
deployment.']}, {'end': 12611.863, 'start': 11923.133, 'title': 'Understanding docker swarm orchestration', 'summary': 'Explains the importance of container orchestration, focusing on docker swarm as a clustering and scheduling tool, its functionality in monitoring container health, and the steps to create a docker swarm cluster.', 'duration': 688.73, 'highlights': ['Docker Swarm automatically repairs unhealthy containers by stopping and launching new ones, ensuring continuous service availability. Docker Swarm automatically repairs unhealthy containers by stopping and launching new ones.', 'Docker Swarm monitors and maintains the specified number of healthy containers, ensuring continuous operation. Docker Swarm helps keep a healthy number of specified containers always in the running state.', 'Docker Swarm uses a leader-worker architecture to manage and run containers, ensuring fault tolerance and high availability. Docker Swarm uses a leader-worker architecture to manage and run containers, ensuring fault tolerance and high availability.', "The process of creating a Docker Swarm cluster involves initializing a master node, joining worker nodes, and verifying the cluster's readiness. The process of creating a Docker Swarm cluster involves initializing a master node, joining worker nodes, and verifying the cluster's readiness.", "The command 'docker swarm init' initializes a master node, and 'docker swarm join' connects worker nodes to the master. 
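Put together, the swarm workflow described in these sections looks roughly like the following command sequence. It is a sketch: the leader IP, the join token, and the `httpd` image for the "Apache" service are illustrative assumptions.

```shell
# On the leader: initialize the swarm, advertising the leader's IP
# (the IP and the join token shown are placeholders).
docker swarm init --advertise-addr 10.0.0.5

# On each worker: paste the join command printed by `swarm init`.
docker swarm join --token SWMTKN-1-<token> 10.0.0.5:2377

# Create the service discussed above: 5 replicas, host port 83
# mapped to the container's port 80 (httpd image is an assumption).
docker service create --name apache --replicas 5 -p 83:80 httpd

docker service ls                 # list services and replica counts
docker service scale apache=7     # change the number of replicas
docker service rm apache          # remove the service from the cluster
docker swarm leave                # on a worker, leave the swarm
```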
The command 'docker swarm init' initializes a master node, and 'docker swarm join' connects worker nodes to the master."]}], 'duration': 2112.195, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY410499668.jpg', 'highlights': ['Microservices offer flexibility to use different technologies for individual modules, addressing technology restrictions in monolithic applications.', 'Docker Compose enables creating and configuring multiple containers at once with a single command using a YAML file.', 'Docker Swarm automatically repairs unhealthy containers by stopping and launching new ones, ensuring continuous service availability.', "The process of creating a Docker Swarm cluster involves initializing a master node, joining worker nodes, and verifying the cluster's readiness.", 'Monolithic applications pose challenges such as understanding dependencies between components, redeploying the entire application, and the potential for a bug in one module to affect the entire application.']}, {'end': 14948.7, 'segs': [{'end': 12642.528, 'src': 'embed', 'start': 12611.863, 'weight': 2, 'content': [{'end': 12614.184, 'text': "so let's go ahead and create a service.", 'start': 12611.863, 'duration': 2.321}, {'end': 12619.225, 'text': 'so, guys, this is the syntax for creating a service in docker swarm.', 'start': 12614.184, 'duration': 5.041}, {'end': 12626.977, 'text': "what we'll have to do is we'll have to type in docker service create and then specify the name of the service.", 'start': 12619.225, 'duration': 7.752}, {'end': 12632.38, 'text': 'Say I want the name of the service to be Apache and then specify the number of replicas that we want to run.', 'start': 12627.017, 'duration': 5.363}, {'end': 12637.984, 'text': 'So say I want to run five replicas and then the port mapping.', 'start': 12633.001, 'duration': 4.983}, {'end': 12642.528, 'text': 'So basically I want it to be launched on port 83.', 'start': 
12638.025, 'duration': 4.503}], 'summary': 'Creating a docker service named apache with 5 replicas on port 83.', 'duration': 30.665, 'max_score': 12611.863, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY412611863.jpg'}, {'end': 12686.806, 'src': 'embed', 'start': 12663.789, 'weight': 6, 'content': [{'end': 12673.574, 'text': 'So now, if you want to see what all services are running on your swarm cluster, all you have to do is say docker service ls.', 'start': 12663.789, 'duration': 9.785}, {'end': 12676.816, 'text': 'And it will show you that this is the service which is running.', 'start': 12673.674, 'duration': 3.142}, {'end': 12682.739, 'text': 'It is running in the mode replicated, right? And it has five of five replicas running.', 'start': 12676.896, 'duration': 5.843}, {'end': 12684.06, 'text': 'And this is the image name.', 'start': 12682.779, 'duration': 1.281}, {'end': 12686.806, 'text': "now i'll show you a very awesome thing.", 'start': 12684.745, 'duration': 2.061}], 'summary': 'The swarm cluster has 5 replicas running in replicated mode.', 'duration': 23.017, 'max_score': 12663.789, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY412663789.jpg'}, {'end': 12948.783, 'src': 'embed', 'start': 12923.847, 'weight': 5, 'content': [{'end': 12931.913, 'text': 'if you want to remove a service, if you want to remove an application from the cluster, all you have to do is docker service rm,', 'start': 12923.847, 'duration': 8.066}, {'end': 12936.396, 'text': 'and then specify the name of the service and it will remove that service.', 'start': 12931.913, 'duration': 4.483}, {'end': 12942.139, 'text': 'and now, if I do a docker ps, it should slowly remove everything out of here.', 'start': 12936.396, 'duration': 5.743}, {'end': 12945.941, 'text': "so let's do docker ps again.", 'start': 12942.139, 'duration': 3.802}, {'end': 
12948.783, 'text': 'slowly it will remove everything from the container.', 'start': 12945.941, 'duration': 2.842}], 'summary': "Removing a service using 'docker service rm' removes it from the cluster and containers.", 'duration': 24.936, 'max_score': 12923.847, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY412923847.jpg'}, {'end': 13177.884, 'src': 'embed', 'start': 13154.523, 'weight': 4, 'content': [{'end': 13161.346, 'text': 'in microservices, we basically decided that all the features should have different code base and should be deployed separately.', 'start': 13154.523, 'duration': 6.823}, {'end': 13164.787, 'text': 'now my customer service is deployed separately.', 'start': 13161.906, 'duration': 2.881}, {'end': 13166.468, 'text': "it's in a separate code base.", 'start': 13164.787, 'duration': 1.681}, {'end': 13169.389, 'text': 'my notifications have been deployed separately in us.', 'start': 13166.468, 'duration': 2.921}, {'end': 13175.323, 'text': "it's in a separate code base and all of them they interact using json's right.", 'start': 13169.389, 'duration': 5.934}, {'end': 13177.884, 'text': 'so if they have to communicate with each other,', 'start': 13175.323, 'duration': 2.561}], 'summary': 'Microservices enable separate deployment of customer service and notifications using different code bases, interacting with json.', 'duration': 23.361, 'max_score': 13154.523, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY413154523.jpg'}, {'end': 13662.089, 'src': 'embed', 'start': 13610.996, 'weight': 0, 'content': [{'end': 13616.319, 'text': "You don't have to worry about the number of containers running behind it because Kubernetes Automatically.", 'start': 13610.996, 'duration': 5.323}, {'end': 13621.521, 'text': 'it takes care of the health, monitoring and scaling up of your containers.', 'start': 13616.319, 'duration': 5.202}, {'end': 
13624.923, 'text': 'You only have to worry about what kind of services you want to launch.', 'start': 13621.601, 'duration': 3.322}, {'end': 13628.144, 'text': 'If there is a new feature that you have to include in your application,', 'start': 13625.043, 'duration': 3.101}, {'end': 13651.819, 'text': 'all you have to do is launch a new deployment which will basically have those All the containers which are deployed on the Kubernetes cluster basically can interact with each other as if they were on one system.', 'start': 13628.144, 'duration': 23.675}, {'end': 13661.549, 'text': 'But they could be on 100 nodes or 200 nodes or 300 nodes, but still they can interact with each other, irrespective of the fact that where they exist,', 'start': 13652.439, 'duration': 9.11}, {'end': 13662.089, 'text': 'on the cluster.', 'start': 13661.549, 'duration': 0.54}], 'summary': 'Kubernetes automates container management, enabling seamless interaction among containers on clusters of any size.', 'duration': 51.093, 'max_score': 13610.996, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY413610996.jpg'}, {'end': 13748.31, 'src': 'embed', 'start': 13714.5, 'weight': 1, 'content': [{'end': 13717.721, 'text': 'if your code runs fine, everything is going to run fine in kubernetes.', 'start': 13714.5, 'duration': 3.221}, {'end': 13723.458, 'text': "Moving forward, now let's discuss about how Kubernetes actually works.", 'start': 13719.136, 'duration': 4.322}, {'end': 13726.74, 'text': "So Kubernetes, like I said, it's basically a cluster.", 'start': 13723.758, 'duration': 2.982}, {'end': 13731.062, 'text': 'So you have a master and then you have several nodes inside the master.', 'start': 13727.16, 'duration': 3.902}, {'end': 13739.106, 'text': 'The job of the master is basically to schedule containers on the nodes which are under it, monitor the nodes,', 'start': 13731.582, 'duration': 7.524}, {'end': 13748.31, 'text': 'monitor the 
containers which are running inside each node and also keep a track of the logs of what operations are being performed on each container.', 'start': 13739.106, 'duration': 9.204}], 'summary': 'Kubernetes ensures smooth operations by scheduling containers and monitoring nodes and logs.', 'duration': 33.81, 'max_score': 13714.5, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY413714500.jpg'}, {'end': 13898.625, 'src': 'embed', 'start': 13872.504, 'weight': 3, 'content': [{'end': 13876.706, 'text': 'Okay So the first method of installing Kubernetes is kubeadm.', 'start': 13872.504, 'duration': 4.202}, {'end': 13882.969, 'text': 'Now guys, kubeadm is a method using which we can install Kubernetes on any bare metal server.', 'start': 13877.026, 'duration': 5.943}, {'end': 13886.071, 'text': 'So it need not have to be on premise.', 'start': 13883.19, 'duration': 2.881}, {'end': 13887.752, 'text': 'It need not have to be on cloud.', 'start': 13886.131, 'duration': 1.621}, {'end': 13891.76, 'text': 'on AWS, on Azure or on GCP.', 'start': 13888.337, 'duration': 3.423}, {'end': 13894.102, 'text': 'it can be any generic server right?', 'start': 13891.76, 'duration': 2.342}, {'end': 13896.244, 'text': 'It could be a server on Google Cloud.', 'start': 13894.262, 'duration': 1.982}, {'end': 13897.344, 'text': 'it could be a server on Azure.', 'start': 13896.244, 'duration': 1.1}, {'end': 13898.625, 'text': 'it could be a server on-premise.', 'start': 13897.344, 'duration': 1.281}], 'summary': 'Kubeadm allows installing kubernetes on any bare metal or cloud server.', 'duration': 26.121, 'max_score': 13872.504, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY413872504.jpg'}, {'end': 14071.302, 'src': 'embed', 'start': 14037.375, 'weight': 9, 'content': [{'end': 14042.458, 'text': "and you don't have to get into all the details of how to install kubernetes and 
everything.", 'start': 14037.375, 'duration': 5.083}, {'end': 14044.48, 'text': 'everything is set up for you automatically.', 'start': 14042.458, 'duration': 2.022}, {'end': 14051.946, 'text': 'you can just get started by using or by throwing kubectl commands on your gcp server and it will work like a charm.', 'start': 14044.48, 'duration': 7.466}, {'end': 14060.713, 'text': 'but again, in that sense, it would not be that much beneficial for people who basically want to set up kubernetes in their company.', 'start': 14051.946, 'duration': 8.767}, {'end': 14071.302, 'text': 'or they are probably a devops architect or a devops engineer who whose sole responsibility is to manage the life cycle And he only has to deal with setting up all the tools in his company.', 'start': 14060.713, 'duration': 10.589}], 'summary': 'Kubernetes setup on gcp is automatic, suitable for individual use, less beneficial for company-wide implementation.', 'duration': 33.927, 'max_score': 14037.375, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY414037375.jpg'}, {'end': 14326.271, 'src': 'embed', 'start': 14300.527, 'weight': 8, 'content': [{'end': 14305.19, 'text': 'So as we studied in the architecture that Docker is a common component among the master and the slave.', 'start': 14300.527, 'duration': 4.663}, {'end': 14308.88, 'text': "So let's update the machine that we are going to work on first.", 'start': 14305.478, 'duration': 3.402}, {'end': 14313.443, 'text': 'sudo apt-get update on master and similarly on slave as well.', 'start': 14309.06, 'duration': 4.383}, {'end': 14316.825, 'text': 'Okay So master is updated.', 'start': 14314.664, 'duration': 2.161}, {'end': 14318.506, 'text': "Now let's install Docker on it.", 'start': 14316.965, 'duration': 1.541}, {'end': 14323.229, 'text': 'sudo apt-get install docker.io.', 'start': 14318.586, 'duration': 4.643}, {'end': 14326.271, 'text': 'So Docker is getting installed on this 
system.', 'start': 14323.91, 'duration': 2.361}], 'summary': 'Docker is being installed on the master and slave systems in the architecture.', 'duration': 25.744, 'max_score': 14300.527, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY414300527.jpg'}], 'start': 12611.863, 'title': 'Docker, kubernetes, and orchestration', 'summary': 'Covers docker service creation, scaling, and removal, deploying applications on docker swarm, introduction to kubernetes for container orchestration, kubernetes installation methods, installing kubernetes on aws with ubuntu, and setting up a kubernetes cluster, emphasizing the advantages and essential features of docker and kubernetes in managing, monitoring, and scaling containers.', 'chapters': [{'end': 12978, 'start': 12611.863, 'title': 'Docker service creation and management', 'summary': 'Explains the creation and management of services in docker swarm, including scaling, auto-scaling, and removal of services, showcasing commands and their outcomes.', 'duration': 366.137, 'highlights': ["The command 'docker service create' is used to create a service in Docker swarm, where the number of replicas, port mapping, and image name are specified, with an example of creating a service named Apache with 5 replicas and port 83 mapping.", "The 'docker service ls' command displays the running services, their mode (replicated), number of replicas, and image name, allowing users to monitor the services in the swarm cluster.", 'Containers in the swarm are auto-scaled, as demonstrated by deleting containers and observing their automatic recreation to maintain the specified number of replicas, with a command example of scaling the number of replicas for a service.', "The process of removing a service from the cluster is explained using the 'docker service rm' command, showcasing the removal of services and the subsequent verification of the removal.", "Leaving a node from the swarm cluster is 
demonstrated using the 'sudo docker swarm leave' command, exemplifying the process of node removal from the cluster."]}, {'end': 13386.976, 'start': 12978, 'title': 'Deploying applications on docker swarm', 'summary': 'Explains the challenges of monolithic applications, the advantages of microservices, and how docker solved the problem of underutilization and scaling by enabling independent deployment of services on a single computer.', 'duration': 408.976, 'highlights': ['The challenges of monolithic applications and the advantages of microservices The chapter discusses the issues with monolithic applications, such as dependencies and difficulties in updates, and introduces the concept of microservices for independent deployment and interaction using JSON.', 'The problem of underutilization and scaling in microservices The chapter highlights the issue of underutilization and scaling in microservices, where deploying on separate servers led to inefficiencies in resource usage and scaling challenges.', 'How Docker solved the problem of underutilization and scaling in microservices The chapter explains how Docker provided a separate environment for independent deployment of services on a single computer, solving the underutilization and scaling issues in microservices.']}, {'end': 13764.041, 'start': 13386.976, 'title': 'Introduction to kubernetes for container orchestration', 'summary': 'Discusses the challenges of monitoring and orchestrating containers at scale, the role of kubernetes in solving these challenges, and the essential features of kubernetes, emphasizing its ability to manage, monitor, scale, and interact with containers across multiple nodes.', 'duration': 377.065, 'highlights': ['Kubernetes Features Kubernetes automates monitoring, scaling, and health checks for containers, eliminating manual intervention, and allows interaction among containers across multiple nodes, making it essential for managing large-scale container deployments.', 'Challenges 
Without Kubernetes Manually monitoring and scaling containers, handling manual intervention for container failures, and the inability to orchestrate containers across multiple servers were significant challenges addressed by Kubernetes.', 'Kubernetes Working Mechanism Kubernetes operates as a cluster with a master node scheduling, monitoring, and logging container operations, while worker nodes execute processing tasks, providing an efficient and centralized system for container orchestration.']}, {'end': 14111.185, 'start': 13764.041, 'title': 'Kubernetes installation methods', 'summary': 'Explains how kubernetes works, its ease of adding more nodes to the cluster, and the various popular ways of installing kubernetes including kubeadm, minikube, kops, and gcp, with a preference for kubeadm due to its versatility and benefits for all users.', 'duration': 347.144, 'highlights': ['Kubeadm is a versatile method for installing Kubernetes on any server, whether on-premise, on cloud platforms like AWS or Azure, or even on a virtual machine, providing flexibility and ease of use. Kubeadm allows for the installation of Kubernetes on any bare metal server, including on-premise, AWS, Azure, or GCP, as well as on virtual machines, offering versatility and ease of use.', 'Minikube provides a virtualized environment for practicing Kubernetes commands, although it may not be suitable for in-depth learning of Kubernetes functionalities. Minikube offers a virtualized environment for practicing Kubernetes commands, but may not be ideal for in-depth learning of Kubernetes functionalities.', 'KOPs is a shortcut for installing Kubernetes exclusively for AWS, but may not be cost-effective as it deploys instances that do not fall under the free tier. 
KOPs provides a shortcut for installing Kubernetes on AWS, but may not be cost-effective as it deploys instances that do not fall under the free tier, limiting its usability.', 'Kubernetes on GCP offers a pre-set-up service, making it easy to get started with Kubernetes, but may not be beneficial for individuals seeking to set up Kubernetes in their own company or for DevOps engineers responsible for managing the lifecycle of tools. Kubernetes on GCP offers a pre-set-up service, making it easy to get started with Kubernetes, but may not be beneficial for those seeking to set up Kubernetes in their own company or for DevOps engineers responsible for managing the tool lifecycle.']}, {'end': 14419.873, 'start': 14111.425, 'title': 'Installing kubernetes on aws with ubuntu', 'summary': 'Covers the process of launching two ubuntu instances on aws, installing docker on both instances, and running commands to prepare the master and slave for kubernetes installation.', 'duration': 308.448, 'highlights': ['Launching two Ubuntu instances on AWS The speaker guides through the process of launching two instances on AWS, specifying the OS as Ubuntu, and configuring instance details, ultimately deploying the instances on AWS.', 'Installing Docker on master and slave The process of updating the machines, installing Docker on both master and slave instances, and running commands to prepare for Kubernetes installation is explained.', 'Running commands to prepare for Kubernetes installation The speaker demonstrates running commands from the official documentation to prepare the master and slave for Kubernetes installation, emphasizing the need to follow the documentation based on the OS being used.']}, {'end': 14948.7, 'start': 14419.893, 'title': 'Setting up kubernetes cluster', 'summary': "Covers setting up a kubernetes cluster on a master and a slave, including initializing the cluster, configuring network plugins, and checking the cluster's readiness, with detailed instructions 
on commands and plugin choices.", 'duration': 528.807, 'highlights': ['Initializing Kubernetes cluster on master and slave The chapter highlights the steps to install and initialize Kubernetes on the master and slave nodes, including running commands for update, installation, and initialization.', 'Configuring network plugins and pod network CIDR It explains the significance of configuring the pod network CIDR, choosing the network plugin Calico, and setting the range as 192.168.0.0/16 to enable Kubernetes network functionality.', "Checking cluster readiness and status of nodes It demonstrates the usage of 'kubectl get nodes' and 'kubectl get pods --all-namespaces' commands to verify the readiness of the cluster and the status of pods running on it."]}], 'duration': 2336.837, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY412611863.jpg', 'highlights': ['Kubernetes automates monitoring, scaling, and health checks for containers, eliminating manual intervention.', 'Kubernetes operates as a cluster with a master node scheduling, monitoring, and logging container operations.', "The 'docker service create' command is used to create a service in Docker swarm, specifying the number of replicas, port mapping, and image name.", 'Kubeadm allows for the installation of Kubernetes on any bare metal server, providing flexibility and ease of use.', 'The challenges of monolithic applications and the advantages of microservices are discussed, emphasizing the benefits of independent deployment and interaction using JSON.', "The process of removing a service from the cluster is explained using the 'docker service rm' command.", "The 'docker service ls' command displays the running services, their mode, number of replicas, and image name, allowing users to monitor the services in the swarm cluster.", 'Kubernetes allows interaction among containers across multiple nodes, making it essential for managing large-scale container 
deployments.', 'The process of updating the machines, installing Docker on both master and slave instances, and running commands to prepare for Kubernetes installation is explained.', 'Kubernetes on GCP offers a pre-set-up service, making it easy to get started with Kubernetes.']}, {'end': 16855.901, 'segs': [{'end': 15732.136, 'src': 'embed', 'start': 15701.748, 'weight': 1, 'content': [{'end': 15703.108, 'text': 'i want the nginx to work.', 'start': 15701.748, 'duration': 1.36}, {'end': 15704.749, 'text': 'i want this app to be running now.', 'start': 15703.108, 'duration': 1.641}, {'end': 15706.529, 'text': 'inside that app we have three replicas.', 'start': 15704.749, 'duration': 1.78}, {'end': 15715.792, 'text': 'Now, whatever the status of these is, the load will be distributed in a round-robin fashion and whoever is free, the data will be directed to that.', 'start': 15706.849, 'duration': 8.943}, {'end': 15718.432, 'text': 'That is the job that the replicas do.', 'start': 15715.852, 'duration': 2.58}, {'end': 15720.313, 'text': 'It helps you balance the load out.', 'start': 15718.793, 'duration': 1.52}, {'end': 15723.594, 'text': 'You have different replicas which are running constantly.', 'start': 15720.453, 'duration': 3.141}, {'end': 15732.136, 'text': 'Now, what I want to show you all is, I was saying you that the API versions, if it is not perfectly defined, that will create a different problem.', 'start': 15723.874, 'duration': 8.262}], 'summary': 'Desire to have nginx and app running with 3 replicas for load balancing, and api version accuracy is crucial.', 'duration': 30.388, 'max_score': 15701.748, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY415701748.jpg'}, {'end': 15827.782, 'src': 'embed', 'start': 15801.707, 'weight': 2, 'content': [{'end': 15809.155, 'text': 'if these two are not matching, your deployment will not work, or whatever it is your service or whatever you are trying to 
deploy will not work.', 'start': 15801.707, 'duration': 7.448}, {'end': 15814, 'text': 'you need to have proper api versions so that you have a proper kind of thing that is working.', 'start': 15809.155, 'duration': 4.845}, {'end': 15819.465, 'text': 'every different thing is defined under certain things, so you have to get that spot on in order to work.', 'start': 15814, 'duration': 5.465}, {'end': 15823.63, 'text': "that's why the original part was working, because it was apps slash everyone.", 'start': 15819.465, 'duration': 4.165}, {'end': 15827.782, 'text': 'if we just nano it working body.', 'start': 15823.63, 'duration': 4.152}], 'summary': 'Matching api versions are crucial for successful deployment and functionality.', 'duration': 26.075, 'max_score': 15801.707, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY415801707.jpg'}, {'end': 16320.816, 'src': 'embed', 'start': 16290.311, 'weight': 0, 'content': [{'end': 16295.255, 'text': 'For example, I want a deployment which goes by the name of Nginx.', 'start': 16290.311, 'duration': 4.944}, {'end': 16307.787, 'text': 'The specification of this deployment would be that I want three replicas which should be running and the container should have the image Nginx 1.7.9 running on it.', 'start': 16295.856, 'duration': 11.931}, {'end': 16312.594, 'text': 'and the container should expose the port 80.', 'start': 16309.873, 'duration': 2.721}, {'end': 16315.615, 'text': 'right now this is the deployment that I want to run now.', 'start': 16312.594, 'duration': 3.021}, {'end': 16320.816, 'text': 'say I want to change the number of replicas to say five.', 'start': 16315.615, 'duration': 5.201}], 'summary': 'Requesting a deployment named nginx with 3 replicas, running nginx 1.7.9 on port 80, and considering changing the number of replicas to 5.', 'duration': 30.505, 'max_score': 16290.311, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY416290311.jpg'}], 'start': 14948.7, 'title': 'Kubernetes deployment and yaml basics', 'summary': 'Discusses kubernetes deployment readiness, yaml basics, deployment file structure, and api version and kind importance, emphasizing configuration, deployment creation, and nginx deployment with specific settings and expectations.', 'chapters': [{'end': 14983.723, 'start': 14948.7, 'title': 'Kubernetes deployment readiness', 'summary': 'Outlines the successful installation and configuration of kubernetes, making it ready to accept commands and deployments, setting the prime state for future sessions.', 'duration': 35.023, 'highlights': ['The cluster is now ready to accept commands to deploy applications on it.', 'Kubernetes has been successfully installed and configured to accept any kind of commands and deployments.', 'The purpose of this session was to set up Kubernetes to be ready for deployments.', 'The next session will involve working with Kubernetes.']}, {'end': 15218.546, 'start': 14983.723, 'title': 'Understanding yaml for kubernetes', 'summary': 'Covers the basics of yaml for kubernetes, including its definition, features, and usage, emphasizing the importance of proper indentation and its role in creating portable, consistent, and readable data, with a focus on how yaml serves as a subset of json and includes key data structures like maps and lists.', 'duration': 234.823, 'highlights': ['YAML is a data serialization language designed to be human friendly and works well with programming languages for everyday tasks. YAML is designed to be human friendly and works well with programming languages, making it suitable for everyday tasks.', 'YAML data is portable between programming languages and is a subset of JSON. 
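The deployment described in these segments (name nginx, three replicas, image nginx 1.7.9, container port 80) can be sketched as the kind of YAML file the session walks through; it also illustrates the map and list structures YAML is built on. The label selector is an illustrative assumption, since the video does not spell it out:

```yaml
apiVersion: apps/v1          # apiVersion and kind must match, or kubectl rejects the file
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                # change this to 5 to scale the deployment
  selector:
    matchLabels:
      app: nginx             # assumed label; must match the pod template below
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:            # a YAML list of maps
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f` should create the deployment and its three replica pods, matching what the session demonstrates.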
YAML data can be easily transferred between programming languages and serves as a subset of JSON.', 'Proper indentation is crucial while working with YAML files, ensuring consistency and readability. Proper indentation is crucial for consistency and readability when working with YAML files.', 'YAML includes two key data structures: maps and lists, used for defining key-value pairs and collections of data. YAML includes key data structures like maps and lists, vital for defining key-value pairs and collections of data.']}, {'end': 15723.594, 'start': 15218.626, 'title': 'Yaml deployment overview', 'summary': 'Illustrates the structure of a yaml deployment file, including key elements like api version, metadata, and specifications, and demonstrates the creation of a deployment using kubernetes, with the expectation of three replicas being created.', 'duration': 504.968, 'highlights': ['The chapter illustrates the structure of a YAML deployment file, including key elements like API version, metadata, and specifications The transcript explains the key components of a YAML deployment file, such as API version, metadata, and specifications, providing a comprehensive overview of its structure.', 'Demonstrates the creation of a deployment using Kubernetes, with the expectation of three replicas being created It demonstrates the process of creating a deployment using Kubernetes, with an expectation of three replicas being created, showcasing the practical application of the YAML file.', 'The deployment includes three replicas, running in a round-robin fashion to balance the load The chapter highlights the functionality of the three replicas in the deployment, emphasizing their role in balancing the load in a round-robin fashion.']}, {'end': 16031.017, 'start': 15723.874, 'title': 'Importance of api version and kind in yaml', 'summary': "Highlights the importance of properly defining api version and kind in yaml files for successful deployment, as mismatched versions will 
result in deployment failure, emphasized by the example of creating a 'not working' yaml file and correcting the deployment name to resolve the issue.", 'duration': 307.143, 'highlights': ["Mismatched API version and kind will cause deployment failure The API version and kind must be accurately defined in YAML files for successful deployment; otherwise, it will result in deployment failure, as demonstrated by the 'not working' YAML file not being recognized and failing to create the deployment.", "Correcting deployment name resolves the issue Changing the deployment name in the YAML file from 'nginx deployment' to 'nginx test' resolves the issue of having two deployments with the same name, demonstrating the importance of proper YAML file definition to avoid conflicts and ensure successful deployment.", 'Creating multiple deployments with varying replicas Creating multiple deployments with different replica numbers, such as deploying another YAML file with two replicas, demonstrates the flexibility and customization options available in YAML files for managing deployments.']}, {'end': 16855.901, 'start': 16031.037, 'title': 'Hands-on with kubernetes and deploying nginx', 'summary': 'Covers setting up a kubernetes cluster, deploying nginx with five replicas, accessing and load balancing the nginx service using node port, and the limitations of accessing pods outside the cluster.', 'duration': 824.864, 'highlights': ['The chapter covers setting up a Kubernetes cluster, deploying Nginx with five replicas, accessing and load balancing the Nginx service using node port, and the limitations of accessing pods outside the cluster.', "Creating a deployment in Kubernetes involves specifying the number of replicas, the container image, and the exposed port, such as deploying Nginx with five replicas and accessing them through the cluster's IP addresses.", "The process of creating a service in Kubernetes, specifically a node port type service, allows load balancing among the 
pods, as demonstrated by accessing the Nginx service through the cluster's master IP address and the specified port.", 'The limitations of accessing pods outside the cluster are highlighted, as attempting to access the pods directly from a browser results in failure due to their private IP addresses being valid only within the cluster.']}], 'duration': 1907.201, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY414948700.jpg', 'highlights': ['The chapter covers setting up a Kubernetes cluster, deploying Nginx with five replicas, accessing and load balancing the Nginx service using node port, and the limitations of accessing pods outside the cluster.', 'The deployment includes three replicas, running in a round-robin fashion to balance the load The chapter highlights the functionality of the three replicas in the deployment, emphasizing their role in balancing the load in a round-robin fashion.', "Mismatched API version and kind will cause deployment failure The API version and kind must be accurately defined in YAML files for successful deployment; otherwise, it will result in deployment failure, as demonstrated by the 'not working' YAML file not being recognized and failing to create the deployment.", "Creating a deployment in Kubernetes involves specifying the number of replicas, the container image, and the exposed port, such as deploying Nginx with five replicas and accessing them through the cluster's IP addresses."]}, {'end': 18004.876, 'segs': [{'end': 16888.961, 'src': 'embed', 'start': 16857.321, 'weight': 1, 'content': [{'end': 16866.426, 'text': 'okay, now again, if I create a service, it will be able to access to kubectl create service, which is nginx,', 'start': 16857.321, 'duration': 9.105}, {'end': 16874.651, 'text': 'and the type of the service is no port right and I want this service to be available on port 80.', 'start': 16866.426, 'duration': 8.225}, {'end': 16881.015, 'text': 'so I specify 
that.', 'start': 16874.651, 'duration': 6.364}, {'end': 16888.961, 'text': 'okay, so it has been created and if I do a kubectl get service now, i can see that this is the port it is available on.', 'start': 16881.015, 'duration': 7.946}], 'summary': 'A service for nginx has been created, accessible on port 80.', 'duration': 31.64, 'max_score': 16857.321, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY416857321.jpg'}, {'end': 16947.606, 'src': 'embed', 'start': 16919.458, 'weight': 2, 'content': [{'end': 16922.301, 'text': 'you can still access the service right.', 'start': 16919.458, 'duration': 2.843}, {'end': 16923.882, 'text': 'as you can see, you can still access the service.', 'start': 16922.301, 'duration': 1.581}, {'end': 16927.825, 'text': 'Reason being, this port is available throughout the cluster.', 'start': 16924.281, 'duration': 3.544}, {'end': 16934.573, 'text': "Any IP address that is of the master or the node I specify with this port, I'll be able to access the Nginx service.", 'start': 16928.065, 'duration': 6.508}, {'end': 16942.782, 'text': 'Just a quick info guys, Intellipaat provides DevOps online training validated and certified by NASSCOM Future Skills and IBM.', 'start': 16934.653, 'duration': 8.129}, {'end': 16945.365, 'text': 'The course link is given in the description below.', 'start': 16943.202, 'duration': 2.163}, {'end': 16947.606, 'text': "Now, let's continue with the session.", 'start': 16946.025, 'duration': 1.581}], 'summary': 'The nginx service is accessible via a specified port throughout the cluster, and intellipaat offers devops online training validated and certified by nasscom future skills and ibm.', 'duration': 28.148, 'max_score': 16919.458, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY416919458.jpg'}, {'end': 16997.47, 'src': 'embed', 'start': 16964.952, 'weight': 7, 'content': [{'end': 16974.936, 'text': 
'Now, the awesome thing about Puppet is that it can basically be installed on various platforms, like Microsoft Windows, various Linux platforms, etc.', 'start': 16964.952, 'duration': 9.984}, {'end': 16984.753, 'text': 'right so it is very versatile in terms of its working on different platforms and also it basically does not use any programming language.', 'start': 16974.936, 'duration': 9.817}, {'end': 16997.47, 'text': 'it has its own declarative format in which you have to specify what basically you want to install any type of or any set of configuration on any node or any group of nodes.', 'start': 16984.753, 'duration': 12.717}], 'summary': 'Puppet can be installed on multiple platforms, including microsoft windows and various linux platforms, and operates without using a programming language.', 'duration': 32.518, 'max_score': 16964.952, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY416964952.jpg'}, {'end': 17148.815, 'src': 'embed', 'start': 17104.938, 'weight': 5, 'content': [{'end': 17114.181, 'text': 'so the quality of features and the quality that this tool has to offer is more than what a private company can produce, right.', 'start': 17104.938, 'duration': 9.243}, {'end': 17123.864, 'text': "so therefore, uh, this is a key feature of puppet that there's a large open source community for puppet which basically keeps on adding features,", 'start': 17114.181, 'duration': 9.683}, {'end': 17130.125, 'text': 'keeps on resolving bugs and keeps the puppet software in its most optimal state right.', 'start': 17123.864, 'duration': 6.261}, {'end': 17131.986, 'text': 'the third thing is documentation.', 'start': 17130.125, 'duration': 1.861}, {'end': 17136.747, 'text': 'so documentation is very important for anyone who is trying to learn this tool,', 'start': 17131.986, 'duration': 4.761}, {'end': 17142.629, 'text': 'and there is an extensive documentation available for puppet on their official website, 
right.', 'start': 17136.747, 'duration': 5.882}, {'end': 17148.815, 'text': "so if you don't understand puppet, you don't have to take help from your colleague, you don't have to search for tutorials,", 'start': 17142.629, 'duration': 6.186}], 'summary': 'Puppet offers high-quality features, strong community support, and extensive documentation.', 'duration': 43.877, 'max_score': 17104.938, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY417104938.jpg'}, {'end': 17390.13, 'src': 'embed', 'start': 17351.69, 'weight': 0, 'content': [{'end': 17362.033, 'text': 'Right So this is a three step working of how the puppet master and slave interact when there is a change which is supposed to happen on the slave.', 'start': 17351.69, 'duration': 10.343}, {'end': 17370.116, 'text': "Similarly now, because there's a lot of communication or there's a lot of back and forth, which happens between the puppet master and puppet slave,", 'start': 17362.373, 'duration': 7.743}, {'end': 17378.382, 'text': 'the communication which happens between these nodes is encrypted and it is encrypted using ssl certificates, right.', 'start': 17370.116, 'duration': 8.266}, {'end': 17390.13, 'text': "so there's an ssl connection which is established between the puppet master and the puppet slave and the way it works is something like this right now for ssl connection we need certificates right.", 'start': 17378.382, 'duration': 11.748}], 'summary': 'Puppet master and slave interact via ssl-encrypted communication using certificates.', 'duration': 38.44, 'max_score': 17351.69, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY417351690.jpg'}, {'end': 17537.53, 'src': 'embed', 'start': 17507.048, 'weight': 8, 'content': [{'end': 17514.413, 'text': "and then i'll show you how you guys can basically sign certification puppet master, so that the connection becomes encrypted.", 'start': 17507.048, 
'duration': 7.365}, {'end': 17521.338, 'text': "so in this session we're going to start with setting up puppet master slave on the aws management console.", 'start': 17514.413, 'duration': 6.925}, {'end': 17523.519, 'text': "right, so let's go ahead and start with this session.", 'start': 17521.338, 'duration': 2.181}, {'end': 17527.002, 'text': "so i'll quickly jump on to my aws management console.", 'start': 17523.519, 'duration': 3.483}, {'end': 17529.327, 'text': 'So guys, here is my AWS management console.', 'start': 17527.386, 'duration': 1.941}, {'end': 17537.53, 'text': "The first thing that I'll do is I'll have to launch an instance, right? So I'll select the Ubuntu AMI, Ubuntu 18.04.", 'start': 17529.347, 'duration': 8.183}], 'summary': 'Setting up puppet master slave on aws with ubuntu 18.04.', 'duration': 30.482, 'max_score': 17507.048, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY417507048.jpg'}], 'start': 16857.321, 'title': 'Kubernetes services and puppet configuration', 'summary': "Covers creating and accessing a service in kubernetes, demonstrating port availability, and discusses puppet's features, emphasizing its popularity and ssl connection setup, as well as detailing the installation of puppet master and slave on aws, including security group setup and specific instructions.", 'chapters': [{'end': 16942.782, 'start': 16857.321, 'title': 'Creating and accessing services in kubernetes', 'summary': "Demonstrates creating a service in kubernetes, making it available on port 80, and accessing it from both the master and node ips, showcasing the port's cluster-wide availability.", 'duration': 85.461, 'highlights': ['The created service is available on port 80, and the instructor demonstrates accessing it by copying the port number and pasting it into the browser, showcasing the simplicity of the process.', 'The instructor explains that the created port is available throughout the cluster, 
allowing access from both the master and node IP addresses, exemplifying the cluster-wide availability of the service.', "The instructor emphasizes the cluster-wide availability of the port by demonstrating access to the Nginx service from both the master and node IP addresses, highlighting the practical implications of the port's availability."]}, {'end': 17551.935, 'start': 16943.202, 'title': 'Puppet: configuration management tool', 'summary': 'Discusses the key features of puppet, emphasizing its popularity due to a large open-source community, extensive documentation, and platform support, along with detailing the three-step interaction and ssl connection setup between puppet master and slave.', 'duration': 608.733, 'highlights': ["Puppet's popularity is attributed to a large open-source community constantly adding features and resolving bugs, making it a mature and versatile tool. The large open-source community constantly adding features and resolving bugs enhances Puppet's maturity and versatility, contributing to its popularity.", "Extensive documentation is available for Puppet on their official website, providing comprehensive guidance for users. The availability of extensive documentation on Puppet's official website offers comprehensive guidance for users, simplifying the learning process.", "Puppet's platform support enables its installation on various platforms, making it popular among companies with diverse server environments. The platform support of Puppet enables its installation on various platforms, catering to companies with diverse server environments and solving their configuration management needs.", 'The three-step interaction between Puppet master and slave involves state checking, change catalog transmission, and state matching, facilitating efficient configuration management. 
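The NodePort service created with kubectl in this segment could equivalently be declared in a YAML manifest. A minimal sketch, where the pod selector label and the explicit nodePort value are assumptions (Kubernetes auto-assigns a port from 30000-32767 if nodePort is omitted):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort             # reachable on <any master or node IP>:<nodePort>, cluster-wide
  selector:
    app: nginx               # assumed label on the nginx pods
  ports:
  - port: 80                 # service port inside the cluster
    targetPort: 80           # container port on the pods
    nodePort: 30080          # illustrative; auto-assigned if omitted
```

This is why the session can open the service from both the master and the node IP address: the node port is opened on every node in the cluster, while the pods' own private IPs remain unreachable from outside.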
The three-step interaction between Puppet master and slave involves state checking, change catalog transmission, and state matching, facilitating efficient configuration management.', 'The SSL connection setup between Puppet master and slave involves the exchange of certificates and manual signing, ensuring secure and encrypted communication. The SSL connection setup between Puppet master and slave involves the exchange of certificates and manual signing, ensuring secure and encrypted communication.']}, {'end': 18004.876, 'start': 17552.415, 'title': 'Install puppet master and slave on aws', 'summary': 'Details the process of installing puppet master and slave on aws, including setting up security groups, launching instances, connecting to the master and slave, and installing and configuring puppet on both machines, with specific instructions and commands provided.', 'duration': 452.461, 'highlights': ['The chapter details the process of installing Puppet Master and Slave on AWS, including setting up security groups, launching instances, connecting to the master and slave, and installing and configuring Puppet on both machines, with specific instructions and commands provided. installation process, setting up security groups, launching instances, connecting to master and slave, configuring Puppet, specific instructions and commands', "The security group settings involve allowing TCP connections from the IP address of the slave or allowing TCP connection on port 8140 from the slave's IP address. security group settings, allowing TCP connections, specific port settings", 'Instructions are provided for connecting to the master and slave using Putty, specifying private keys, updating the machines, and executing commands for installation and configuration. 
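Before the certificate exchange described above can happen, the agent has to know where the master is; the session notes this is done through the host file. A rough sketch of the two files involved, where the IP address is a placeholder and the puppet.conf path assumes a default open-source Puppet install on Ubuntu:

```
# /etc/hosts on the slave -- map the master's private IP to a hostname
172.31.0.10   puppet puppetmaster

# /etc/puppetlabs/puppet/puppet.conf on the slave (default path on Ubuntu)
[main]
server = puppet
```

The master's security group must also allow inbound TCP on port 8140 from the slave's IP address, as described in the session, since that is the port the master listens on.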
connecting to master and slave, specifying private keys, updating machines, executing commands']}], 'duration': 1147.555, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY416857321.jpg', 'highlights': ['The SSL connection setup between Puppet master and slave involves the exchange of certificates and manual signing, ensuring secure and encrypted communication.', 'The created service is available on port 80, and the instructor demonstrates accessing it by copying the port number and pasting it into the browser, showcasing the simplicity of the process.', 'The instructor explains that the created port is available throughout the cluster, allowing access from both the master and node IP addresses, exemplifying the cluster-wide availability of the service.', "The instructor emphasizes the cluster-wide availability of the port by demonstrating access to the Nginx service from both the master and node IP addresses, highlighting the practical implications of the port's availability.", 'The three-step interaction between Puppet master and slave involves state checking, change catalog transmission, and state matching, facilitating efficient configuration management.', "Puppet's popularity is attributed to a large open-source community constantly adding features and resolving bugs, making it a mature and versatile tool.", 'Extensive documentation is available for Puppet on their official website, providing comprehensive guidance for users.', "Puppet's platform support enables its installation on various platforms, making it popular among companies with diverse server environments.", 'The chapter details the process of installing Puppet Master and Slave on AWS, including setting up security groups, launching instances, connecting to the master and slave, and installing and configuring Puppet on both machines, with specific instructions and commands provided.']}, {'end': 19500.997, 'segs': [{'end': 18036.361, 'src': 
'embed', 'start': 18005.544, 'weight': 0, 'content': [{'end': 18011.267, 'text': 'the moment puppet slave starts, it basically sends a request for the master certificate.', 'start': 18005.544, 'duration': 5.723}, {'end': 18013.708, 'text': 'how does the puppet slave know where the master is?', 'start': 18011.267, 'duration': 2.441}, {'end': 18016.769, 'text': 'we just specify the ip address in the host file right.', 'start': 18013.708, 'duration': 3.061}, {'end': 18018.75, 'text': 'so it knows where the puppet master is.', 'start': 18016.769, 'duration': 1.981}, {'end': 18021.411, 'text': 'so it sends the request the master.', 'start': 18018.75, 'duration': 2.661}, {'end': 18029.035, 'text': 'the master in turn sends the master certificate and at the same time it also requests for the slave certificate.', 'start': 18021.411, 'duration': 7.624}, {'end': 18036.361, 'text': 'so this when, when the master requests for the slave certificate, the puppet slave basically sends the slave certificate.', 'start': 18029.035, 'duration': 7.326}], 'summary': 'Puppet slave requests master certificate; master sends both certificates; ip address specified in host file for communication.', 'duration': 30.817, 'max_score': 18005.544, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY418005544.jpg'}, {'end': 18303.137, 'src': 'embed', 'start': 18271.167, 'weight': 1, 'content': [{'end': 18272.968, 'text': 'is it works on?', 'start': 18271.167, 'duration': 1.801}, {'end': 18278.185, 'text': 'and then the ip address of the slave, which is this right.', 'start': 18272.968, 'duration': 5.217}, {'end': 18279.466, 'text': 'so congratulations, guys.', 'start': 18278.185, 'duration': 1.281}, {'end': 18283.289, 'text': 'your puppet master slave setup is now complete right.', 'start': 18279.466, 'duration': 3.823}, {'end': 18289.414, 'text': 'so now you can use the puppet master slave to basically do configuration management.', 'start': 
18283.289, 'duration': 6.125}, {'end': 18297.455, 'text': 'so you just have to specify the configuration on the master and then it will be applied on the slave, the code basics for puppet,', 'start': 18289.414, 'duration': 8.041}, {'end': 18303.137, 'text': 'which will enable you to basically get started in puppet and start your own configuration management projects.', 'start': 18297.455, 'duration': 5.682}], 'summary': 'Puppet master slave setup complete for configuration management.', 'duration': 31.97, 'max_score': 18271.167, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY418271167.jpg'}, {'end': 18435.317, 'src': 'embed', 'start': 18404.97, 'weight': 2, 'content': [{'end': 18407.69, 'text': 'So what is Nginx? Nginx is basically a package.', 'start': 18404.97, 'duration': 2.72}, {'end': 18409.591, 'text': 'So the resource type goes package.', 'start': 18407.87, 'duration': 1.721}, {'end': 18411.491, 'text': 'The package name is Nginx.', 'start': 18409.831, 'duration': 1.66}, {'end': 18413.412, 'text': 'And what do we want with Nginx?', 'start': 18411.631, 'duration': 1.781}, {'end': 18420.214, 'text': 'We basically want to ensure that it is installed on the system on which we are targeting on right?', 'start': 18413.792, 'duration': 6.422}, {'end': 18423.435, 'text': 'So this is how you define a resource in Puppet.', 'start': 18420.494, 'duration': 2.941}, {'end': 18428.176, 'text': "And through this, you can basically configure your agent's configurations.", 'start': 18423.695, 'duration': 4.481}, {'end': 18430.115, 'text': 'Using the resources.', 'start': 18428.975, 'duration': 1.14}, {'end': 18435.317, 'text': 'so, basically, whatever resources you mentioned gets installed on the agent.', 'start': 18430.115, 'duration': 5.202}], 'summary': 'Nginx is a package that puppet ensures is installed on the targeted system.', 'duration': 30.347, 'max_score': 18404.97, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY418404970.jpg'}, {'end': 19403.762, 'src': 'embed', 'start': 19376.965, 'weight': 3, 'content': [{'end': 19385.712, 'text': 'So we successfully installed two packages, two softwares on this machine using loops in manifests.', 'start': 19376.965, 'duration': 8.747}, {'end': 19386.212, 'text': 'All right.', 'start': 19385.892, 'duration': 0.32}, {'end': 19387.694, 'text': "So let's come back to our slides.", 'start': 19386.413, 'duration': 1.281}, {'end': 19389.215, 'text': 'So we are done with variables.', 'start': 19387.914, 'duration': 1.301}, {'end': 19390.276, 'text': "We're done with loops.", 'start': 19389.295, 'duration': 0.981}, {'end': 19393.038, 'text': "Now let's go ahead and understand what are conditions.", 'start': 19390.496, 'duration': 2.542}, {'end': 19393.759, 'text': 'All right.', 'start': 19393.458, 'duration': 0.301}, {'end': 19395.42, 'text': "So now let's discuss conditions.", 'start': 19393.819, 'duration': 1.601}, {'end': 19403.762, 'text': 'So conditions are basically statements which helps us to take a decision based on a particular output.', 'start': 19395.834, 'duration': 7.928}], 'summary': 'Installed 2 packages using loops in manifests. completed variables and loops, moving on to conditions.', 'duration': 26.797, 'max_score': 19376.965, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY419376965.jpg'}], 'start': 18005.544, 'title': 'Puppet configuration and management', 'summary': 'Covers puppet slave and master interaction, configuration setup, nginx configuration, and manifest implementation. 
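The resources walked through in this segment can be collected into a site.pp manifest roughly like the following; the file path is an illustrative assumption, while the package, content, and mode come from the session:

```puppet
# site.pp -- the master compiles this into a catalog and applies it on each agent

# Ensure the nginx package is installed on the targeted node
package { 'nginx':
  ensure => installed,
}

# Create a file recording the result (path is an assumed example)
file { '/tmp/status.txt':
  ensure  => file,
  content => 'Nginx installed',
  mode    => '0644',
}
```

On the next agent run, Puppet compares the node's current state against this catalog and applies only the changes needed, which is the three-step master-slave interaction described earlier.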
it includes processes like certificate exchange, manifest creation, and successful configuration changes, with a demonstration of nginx configuration and manifest implementation resulting in a successful catalog application in 0.01 seconds.', 'chapters': [{'end': 18271.167, 'start': 18005.544, 'title': 'Puppet slave and master interaction', 'summary': 'Explains the process of puppet slave and master interaction, including the request and exchange of certificates, signing of certificates, and manifest creation, resulting in successful configuration changes on the slave server.', 'duration': 265.623, 'highlights': ['The puppet slave sends a request for the master certificate and, in turn, the master requests the slave certificate, resulting in successful certificate exchange. The puppet slave sends a request for the master certificate, and the master requests the slave certificate, leading to successful certificate exchange.', 'The puppet master and slave interact after successful certificate signing, leading to the slave requesting and implementing configuration changes from the master. After successful certificate signing, the puppet master and slave interact, with the slave requesting and implementing configuration changes from the master.', 'The creation of a manifest specifying changes to be made on the slave server, including file presence, permissions, and content, results in successful implementation of the specified changes. Creating a manifest specifying changes on the slave server, such as file presence, permissions, and content, leads to the successful implementation of the specified changes.']}, {'end': 18615.052, 'start': 18271.167, 'title': 'Puppet master-slave configuration', 'summary': 'Explains how to set up a puppet master-slave configuration and the basics of defining resources and creating manifests in puppet, enabling configuration management for agents. 
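The variables, loops, and conditions covered in these segments might look like this in a manifest; the exec command and check path are assumptions used for illustration, while the two packages match the session's example:

```puppet
# A variable holding the packages to install (nginx and mysql-server, per the session)
$packages = ['nginx', 'mysql-server']

# Loop: declare one package resource per entry in the list
$packages.each |String $pkg| {
  package { $pkg:
    ensure => installed,
  }
}

# Condition via onlyif: the command runs only when the check command succeeds
exec { 'record-apache':
  command => '/bin/echo apache present > /tmp/apache.txt',
  onlyif  => '/usr/bin/test -e /etc/apache2',
}
```

The `each` function is the Puppet 4+ iteration style; `onlyif` is the exec attribute the session refers to as the 'only if' keyword.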
it also demonstrates creating a manifest file to install apache on an agent.', 'duration': 343.885, 'highlights': ['The chapter explains how to set up a puppet master-slave configuration, enabling configuration management for agents, and demonstrates creating a manifest file to install Apache on an agent.', 'The most basic unit of coding in puppet is a resource, which can be a file, command, package, etc., defined in a puppet catalog and executed on agents.', 'Manifests in Puppet are files that group resources together, and once applied on an agent, all the resources are installed one by one.']}, {'end': 18881.246, 'start': 18615.052, 'title': 'Configuring nginx with puppet', 'summary': 'Demonstrates how to use puppet to configure nginx and create a file on a slave, encountering errors and resolving them, with a successful application of the catalog in 0.01 seconds.', 'duration': 266.194, 'highlights': ['The chapter demonstrates using Puppet to configure Nginx and create a file, encountering errors and resolving them, with a successful application of the catalog in 0.01 seconds.', "The resource type 'package' is used to specify the installation of Nginx, with the attribute 'ensure' set to 'installed'.", "A file is created with the content 'Nginx installed' and permissions set to '0644', encountering errors in the process.", "The folder name 'environments' is corrected, and the site.pp file syntax error is identified and resolved by closing a bracket."]}, {'end': 19500.997, 'start': 18881.246, 'title': 'Implementing manifests in puppet', 'summary': 'Covers the implementation of manifests in puppet, including installing nginx and creating files, using variables to store values and pass them on to resources, using loops to iteratively install packages, and using conditions to make decisions based on specific outputs.', 'duration': 619.751, 'highlights': ['Using loops to iteratively install packages The transcript demonstrates how to use loops in manifests to 
iteratively install multiple packages, such as Nginx and MySQL server, on a slave machine, showcasing the practical application of loops in Puppet.', 'Creating variables to store values and pass them on to resources The chapter explains the process of creating variables inside a manifest file to store values and then pass them on to resources, illustrating the functionality and practical use of variables in Puppet.', 'Installing Nginx and creating files The transcript showcases the successful implementation of a manifest to install Nginx and create a file on the agent, providing a practical example of using manifests to perform specific tasks in Puppet.', "Using conditions to make decisions based on specific outputs The chapter discusses the usage of conditions in Puppet, using the 'only if' keyword to make decisions based on the success of a particular command, demonstrating the practical application of conditions in Puppet manifests."]}], 'duration': 1495.453, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY418005544.jpg', 'highlights': ['Demonstrates successful certificate exchange between puppet master and slave.', 'Explains setting up puppet master-slave configuration for configuration management.', 'Illustrates using Puppet to configure Nginx and resolving errors for successful application.', 'Showcases practical application of loops, variables, and conditions in Puppet manifests.']}, {'end': 22127.497, 'segs': [{'end': 20165.22, 'src': 'embed', 'start': 20141.681, 'weight': 3, 'content': [{'end': 20148.767, 'text': "Now, how do you write the Ansible playbook? 
You write it in the YAML language, which is YAML Ain't Markup Language.", 'start': 20141.681, 'duration': 7.086}, {'end': 20151.129, 'text': 'Right, and this is the full form for that.', 'start': 20149.147, 'duration': 1.982}, {'end': 20156.213, 'text': "And as you move along, I'm going to show you guys how you can start writing an Ansible playbook.", 'start': 20151.829, 'duration': 4.384}, {'end': 20158.675, 'text': 'Right So guys, Ansible playbook.', 'start': 20156.513, 'duration': 2.162}, {'end': 20161.077, 'text': "Let's focus on the word playbook first.", 'start': 20158.895, 'duration': 2.182}, {'end': 20165.22, 'text': 'Like what is a playbook? A playbook means a book of plays.', 'start': 20161.157, 'duration': 4.063}], 'summary': 'Writing Ansible playbooks in YAML, focusing on the word playbook.', 'duration': 23.539, 'max_score': 20141.681, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY420141681.jpg'}, {'end': 21209.771, 'src': 'embed', 'start': 21177.435, 'weight': 0, 'content': [{'end': 21179.876, 'text': 'At the click of one button right?', 'start': 21177.435, 'duration': 2.441}, {'end': 21187.559, 'text': 'I just executed one playbook which configured two servers of mine, installed two different software packages on them based on the name that I specified.', 'start': 21180.016, 'duration': 7.543}, {'end': 21188.24, 'text': 'All right.', 'start': 21187.96, 'duration': 0.28}, {'end': 21189.22, 'text': 'So awesome guys.', 'start': 21188.6, 'duration': 0.62}, {'end': 21193.322, 'text': 'Uh, I think we have almost begun.', 'start': 21189.46, 'duration': 3.862}, {'end': 21197.064, 'text': 'We are almost past the beginner stage now with the playbooks.', 'start': 21193.602, 'duration': 3.462}, {'end': 21199.405, 'text': "I'm sure you guys can also write your playbooks.", 'start': 21197.224, 'duration': 2.181}, {'end': 21201.265, 'text': "but let's take it a notch up.", 'start': 21199.724, 
'duration': 1.541}, {'end': 21203.827, 'text': "let's try to do something else as well.", 'start': 21201.265, 'duration': 2.562}, {'end': 21209.771, 'text': 'now. if you have to execute scripts on remote system, how can you do that?', 'start': 21203.827, 'duration': 5.944}], 'summary': 'Executed playbook to configure 2 servers and install 2 softwares based on specified name.', 'duration': 32.336, 'max_score': 21177.435, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY421177435.jpg'}, {'end': 21380.748, 'src': 'embed', 'start': 21353.685, 'weight': 2, 'content': [{'end': 21358.547, 'text': 'Can you see automatically a website was created which is now called Hello World.', 'start': 21353.685, 'duration': 4.862}, {'end': 21361.388, 'text': 'Now let me show you one more awesome thing over here.', 'start': 21358.887, 'duration': 2.501}, {'end': 21363.429, 'text': "Now I'm not going to any of those servers right.", 'start': 21361.648, 'duration': 1.781}, {'end': 21365.489, 'text': "This is my slave to I'm not going over here.", 'start': 21363.449, 'duration': 2.04}, {'end': 21366.47, 'text': "I'm still on my master.", 'start': 21365.509, 'duration': 0.961}, {'end': 21369.891, 'text': 'Now what I can do is I can just simply change the script file.', 'start': 21367.19, 'duration': 2.701}, {'end': 21373.973, 'text': 'So if I turn the script file to be say I go inside the script file.', 'start': 21370.151, 'duration': 3.822}, {'end': 21377.865, 'text': 'I change the text to hello world 12.', 'start': 21374.542, 'duration': 3.323}, {'end': 21380.748, 'text': "Okay Mind you, I'm not going to any other server.", 'start': 21377.865, 'duration': 2.883}], 'summary': "A website called hello world was created, with a script file changed to 'hello world 12'.", 'duration': 27.063, 'max_score': 21353.685, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY421353685.jpg'}, 
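The playbook structure described in these segments (a play targeting a host group, tasks installing software, and a script pushed out to a slave so the website updates without logging into the server) might be sketched as below; the group name, package, script path, and task names are assumptions for illustration, not the exact playbook from the video:

```yaml
# playbook.yml -- hedged sketch of a playbook like the one demonstrated
- name: Configure web server
  hosts: webservers        # group defined in the Ansible hosts (inventory) file
  become: yes
  tasks:
    - name: Install Apache
      apt:
        name: apache2
        state: present
    - name: Deploy the site via a local script
      script: ./create_site.sh   # copied to and run on the remote host
```

Running `ansible-playbook playbook.yml` from the master applies the play to every host in the group, which is why editing the script on the master is enough to change the website on the slave.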
{'end': 21648.159, 'src': 'embed', 'start': 21619.426, 'weight': 1, 'content': [{'end': 21623.247, 'text': "that is, uh, another use case, what I'm going to show you from my side.", 'start': 21619.426, 'duration': 3.821}, {'end': 21625.108, 'text': 'okay, so I have.', 'start': 21623.247, 'duration': 1.861}, {'end': 21628.789, 'text': 'I have written a Jenkins file.', 'start': 21625.108, 'duration': 3.681}, {'end': 21630.929, 'text': 'okay, so this is one Jenkins file.', 'start': 21628.789, 'duration': 2.14}, {'end': 21637.334, 'text': "I'm just copying this Jenkins file for our work and we will modify it according to our requirement.", 'start': 21630.929, 'duration': 6.405}, {'end': 21642.657, 'text': 'So first of all, you have to go into the new item demo project one.', 'start': 21637.714, 'duration': 4.943}, {'end': 21644.738, 'text': "I'm going to build a pipeline job.", 'start': 21643.097, 'duration': 1.641}, {'end': 21648.159, 'text': 'Okay, this is my script.', 'start': 21646.679, 'duration': 1.48}], 'summary': 'Demonstration of modifying a Jenkins file and creating a new pipeline job.', 'duration': 28.733, 'max_score': 21619.426, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY421619426.jpg'}], 'start': 19501.457, 'title': 'Configuration Management Tools', 'summary': 'Introduces executing commands with Puppet using the exec resource, discusses Ansible basics and playbook structure, and covers adding new hosts to an Ansible cluster. 
it also delves into creating a Jenkins pipeline for implementing the DevOps lifecycle.', 'chapters': [{'end': 19953.048, 'start': 19501.457, 'title': 'Puppet Basics: Executing Commands with Puppet', 'summary': "The chapter introduces the concept of executing commands with Puppet using the exec resource to check for the presence of Apache 2, demonstrating the creation of files and executing commands based on conditions, and explains the 'onlyif' and 'unless' attributes with practical examples.", 'duration': 451.591, 'highlights': ['Introducing the concept of executing commands with Puppet using the exec resource to check for the presence of Apache 2 and create a file based on the conditions specified The chapter demonstrates the use of the exec resource in Puppet to check for the presence of Apache 2 and create a file with specific conditions, showcasing practical implementation.', "Explaining the 'onlyif' and 'unless' attributes with practical examples, illustrating the conditions for executing commands based on successful or failed conditions The chapter provides practical examples and explanations of 'onlyif' and 'unless' in Puppet, highlighting the conditions for executing commands based on successful or failed conditions.", 'Demonstrating the creation of files and executing commands based on conditions, showcasing practical implementation of executing commands with Puppet The chapter demonstrates the creation of files and executing commands based on conditions, providing practical implementation and usage of Puppet for executing commands.']}, {'end': 20844.193, 'start': 19953.328, 'title': 'Introduction to Ansible Configuration Management', 'summary': 'The chapter discusses the basics of Ansible, a configuration management tool, and its significance in automating software installations on multiple systems, with a focus on Ansible playbook structure and execution, as well as the process of creating and running a playbook. 
the chapter also delves into the acquisition of ansible by red hat and its open-source nature.', 'duration': 890.865, 'highlights': ['The significance of Ansible and configuration management in automating software installations on multiple systems, reducing time and eliminating errors, leading to the popularity of such tools and the development of Ansible and Puppet.', 'The origin of Ansible, written by Michael DeHaan, and its acquisition by Red Hat in 2015, ensuring its updates and features are directed by Red Hat, while remaining open source and available for free download and modification.', 'The core aspect of Ansible, the playbook, written in YAML language, defining plays and tasks to be executed on connected servers, emphasizing the concept of plays, tasks, and the execution structure within the playbook.', 'The detailed process of creating an Ansible playbook, including defining hosts, specifying tasks, and executing the playbook to automate tasks, with a practical demonstration of installing Apache software on a remote system and verifying the successful installation.']}, {'end': 21418.833, 'start': 20844.574, 'title': 'Adding and configuring new host in ansible', 'summary': 'Describes the process of adding a new host to an ansible cluster, configuring it, and executing tasks such as installing software and executing scripts, resulting in successful configuration of two servers with different software, and automatic creation and modification of a website without manually accessing the servers.', 'duration': 574.259, 'highlights': ['The user launches a new Ubuntu instance, adds it to the Ansible host file, and successfully establishes keyless SSH connection, resulting in the addition of a new host to the Ansible cluster.', 'The user configures a playbook to install different software on two servers, executes the playbook, and successfully configures two servers with Nginx and Apache, demonstrating the ability to configure multiple servers with different 
software using a single command.', 'The user creates and executes a script to automatically create a website on the Apache server, demonstrates the ability to modify the script and automatically reflect changes on the website without accessing the server, showcasing the automation capabilities of Ansible.', 'The user emphasizes the significance of configuration management and highlights the advantage of using Ansible for managing a large number of servers, illustrating the time-saving and efficiency benefits of Ansible in business operations.']}, {'end': 22127.497, 'start': 21419.154, 'title': 'Creating jenkins pipeline for devops lifecycle', 'summary': 'Covers the process of creating a jenkins pipeline for implementing the devops lifecycle, including setting up a jenkins server, creating pipeline jobs, modifying jenkins files, setting up apache servers, and configuring jenkins slave machines.', 'duration': 708.343, 'highlights': ['The process of creating a Jenkins pipeline for implementing the DevOps lifecycle is covered, including the setup of a Jenkins server, creation of pipeline jobs, and modification of Jenkins files. Jenkins pipeline setup, creation of pipeline jobs', 'The process of setting up Apache servers and configuring Jenkins slave machines is explained in detail. 
Setting up Apache servers, configuring Jenkins slave machines']}], 'duration': 2626.04, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY419501457.jpg', 'highlights': ['The user configures a playbook to install different software on two servers, executes the playbook, and successfully configures two servers with Nginx and Apache, demonstrating the ability to configure multiple servers with different software using a single command.', 'The process of creating a Jenkins pipeline for implementing the DevOps lifecycle is covered, including the setup of a Jenkins server, creation of pipeline jobs, and modification of Jenkins files.', 'The user creates and executes a script to automatically create a website on the Apache server, demonstrates the ability to modify the script and automatically reflect changes on the website without accessing the server, showcasing the automation capabilities of Ansible.', 'The core aspect of Ansible, the playbook, written in YAML language, defining plays and tasks to be executed on connected servers, emphasizing the concept of plays, tasks, and the execution structure within the playbook.']}, {'end': 24481.364, 'segs': [{'end': 22209.238, 'src': 'embed', 'start': 22179.366, 'weight': 2, 'content': [{'end': 22183.169, 'text': 'you have to configure it your Jenkins file or Jenkins script.', 'start': 22179.366, 'duration': 3.803}, {'end': 22188.312, 'text': 'so this is the node parameter and you have to provide that slave name.', 'start': 22183.169, 'duration': 5.143}, {'end': 22190.353, 'text': 'it is going to run on that machine.', 'start': 22188.312, 'duration': 2.041}, {'end': 22193.616, 'text': 'it will download this website.git.', 'start': 22190.353, 'duration': 3.263}, {'end': 22195.717, 'text': 'okay with this content.', 'start': 22193.616, 'duration': 2.101}, {'end': 22198.239, 'text': 'okay, what we next we have to do?', 'start': 22195.717, 'duration': 2.522}, {'end': 22201.661, 
'text': "let's see how it looks like string error.", 'start': 22198.239, 'duration': 3.422}, {'end': 22204.103, 'text': "let's first check what error it is.", 'start': 22201.661, 'duration': 2.442}, {'end': 22207.476, 'text': 'So it is closing here.', 'start': 22205.233, 'duration': 2.243}, {'end': 22209.238, 'text': 'It is closing here.', 'start': 22207.496, 'duration': 1.742}], 'summary': 'Configure jenkins file with node parameter, download website.git, and troubleshoot errors.', 'duration': 29.872, 'max_score': 22179.366, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY422179366.jpg'}, {'end': 22268.203, 'src': 'embed', 'start': 22241.009, 'weight': 1, 'content': [{'end': 22247.471, 'text': "now let's try to deploy your index.html images folder into the proper location.", 'start': 22241.009, 'duration': 6.462}, {'end': 22251.592, 'text': "so i told you, as it's going to be run with the jenkins user, we have to use sudo.", 'start': 22247.471, 'duration': 4.121}, {'end': 22256.615, 'text': 'everything needs to be copied under html folder.', 'start': 22252.592, 'duration': 4.023}, {'end': 22258.276, 'text': 'so this is a deployment step.', 'start': 22256.615, 'duration': 1.661}, {'end': 22260.878, 'text': "okay, let's try and run this time.", 'start': 22258.276, 'duration': 2.602}, {'end': 22261.599, 'text': 'what will happen?', 'start': 22260.878, 'duration': 0.721}, {'end': 22266.202, 'text': "let's see, failed why images?", 'start': 22261.599, 'duration': 4.603}, {'end': 22268.203, 'text': 'so understand, this is a folder.', 'start': 22266.202, 'duration': 2.001}], 'summary': 'Deploy index.html and images folder under html folder using sudo, encountered deployment failure due to images folder.', 'duration': 27.194, 'max_score': 22241.009, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY422241009.jpg'}, {'end': 22489.099, 'src': 'embed', 'start': 
22465.376, 'weight': 0, 'content': [{'end': 22472.079, 'text': 'my pipeline will compile, clone, build and deploy the code.', 'start': 22465.376, 'duration': 6.703}, {'end': 22475.24, 'text': 'so this is the way your work is going to be happening.', 'start': 22472.079, 'duration': 3.161}, {'end': 22479.362, 'text': 'In any organization, you have to write this type of Jenkins file.', 'start': 22475.24, 'duration': 4.122}, {'end': 22481.169, 'text': 'There are two types of Jenkins file', 'start': 22479.907, 'duration': 1.262}, {'end': 22483.091, 'text': 'that are present in the market.', 'start': 22481.169, 'duration': 1.922}, {'end': 22485.354, 'text': 'One is declarative and another is scripted.', 'start': 22483.271, 'duration': 2.083}, {'end': 22489.099, 'text': 'I hope you guys are aware of these types of Jenkins files.', 'start': 22485.774, 'duration': 3.325}], 'summary': 'Pipeline will compile, clone, build, and deploy code. Two types of Jenkins file: declarative and scripted.', 'duration': 23.723, 'max_score': 22465.376, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY422465376.jpg'}, {'end': 23218.866, 'src': 'embed', 'start': 23193.714, 'weight': 3, 'content': [{'end': 23201.196, 'text': 'Centralized workflow will directly go ahead and merge with the master or directly merge it in the public server.', 'start': 23193.714, 'duration': 7.482}, {'end': 23209.218, 'text': 'And the second one, which is feature branching, will create separate branches for each and every developer,', 'start': 23201.956, 'duration': 7.262}, {'end': 23214.562, 'text': 'so that developers send changes one by one, which we can merge into the master.', 'start': 23209.218, 'duration': 5.344}, {'end': 23218.866, 'text': 'and finally, Git flow workflow, where we will not directly merge it to the master.', 'start': 23214.562, 'duration': 4.304}], 'summary': 'Comparison of 
centralized, feature branching, and git flow workflows for merging code.', 'duration': 25.152, 'max_score': 23193.714, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY423193714.jpg'}, {'end': 24263.8, 'src': 'embed', 'start': 24228.735, 'weight': 4, 'content': [{'end': 24230.515, 'text': 'continue so that what happened?', 'start': 24228.735, 'duration': 1.78}, {'end': 24232.016, 'text': 'my Jenkins will be unlocked.', 'start': 24230.515, 'duration': 1.501}, {'end': 24236.957, 'text': 'Jenkins will be unlocked and over here will have other further.', 'start': 24232.016, 'duration': 4.941}, {'end': 24240.698, 'text': 'basically, install suggested plugins or select plugins to install.', 'start': 24236.957, 'duration': 3.741}, {'end': 24241.838, 'text': 'what exactly these are?', 'start': 24240.698, 'duration': 1.14}, {'end': 24243.779, 'text': 'These are nothing but the plugins.', 'start': 24242.119, 'duration': 1.66}, {'end': 24251.177, 'text': 'Jenkins basically use certain plugins that plugins will help us to have a web UI where I can go ahead and use it over here.', 'start': 24244.195, 'duration': 6.982}, {'end': 24255.058, 'text': 'These are the plugins that are suggested directly by the Jenkins.', 'start': 24251.217, 'duration': 3.841}, {'end': 24263.8, 'text': "I'm installing the suggestions that is given by Jenkins to me then once I completed with that it will open a new page.", 'start': 24255.438, 'duration': 8.362}], 'summary': 'Installing jenkins plugins suggested by jenkins for web ui.', 'duration': 35.065, 'max_score': 24228.735, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY424228735.jpg'}], 'start': 22127.497, 'title': 'Jenkins, git, and aws setup', 'summary': 'Covers setting up jenkins for deployment and compilation workflows, implementing jenkins pipeline and capstone project configuration, git workflow for website deployment, launching 
AWS instance, and configuring Jenkins and nodes, with a focus on the DevOps lifecycle and AWS integration.', 'chapters': [{'end': 22465.376, 'start': 22127.497, 'title': 'Jenkins Deployment and Compilation Workflow', 'summary': 'Covers the process of setting up a Jenkins slave, configuring jobs to run on specific machines, deploying files with proper permissions, and using Maven for compilation, with emphasis on understanding and resolving failures and errors.', 'duration': 337.879, 'highlights': ['Understanding and resolving deployment failures The process of deploying files with proper permissions is demonstrated, including identifying and resolving issues such as failed copying of folders and the need for recursive copy commands.', 'Setting up Jenkins slave and configuring jobs The process of setting up a Jenkins slave and configuring jobs to run on specific machines is explained, emphasizing the use of Jenkins files or scripts to specify the node parameters and slave names.', 'Using Maven for compilation and deployment The use of Maven for compiling code and deploying packages to Tomcat server is demonstrated, with details on successful compilation and deployment, as well as the verification of the application running on the designated port.', 'Troubleshooting errors and making script changes The ability to troubleshoot errors and make script changes using the replay feature in Jenkins is highlighted, enabling modifications without altering the main script and facilitating experimentation or testing.']}, {'end': 23022.209, 'start': 22465.376, 'title': 'Jenkins Pipeline and Capstone Project Configuration', 'summary': 'Covers the implementation of the Jenkins pipeline, distinguishing between declarative and scripted Jenkins files, and the usage of Docker and a monitoring tool called Nagios in the capstone project configuration, which aims to implement a DevOps lifecycle for a product-based company available on a GitHub link.', 'duration': 556.833, 'highlights': ['Jenkins 
Pipeline Implementation The chapter emphasizes the implementation of Jenkins pipeline for code compilation, building, and deployment, highlighting the distinction between declarative and scripted Jenkins files.', 'Usage of Docker in Capstone Project The chapter explains the usage of Docker for container orchestration, emphasizing its ability to provide minimal image size and reduce the data from GBs to MBs for application deployment.', "Capstone Project Configuration The chapter details the implementation of a DevOps lifecycle for a product-based company available on a GitHub link, aiming to enhance the company's methodology and utilize a CICD pipeline for efficient deployment and integration."]}, {'end': 23521.785, 'start': 23022.209, 'title': 'Implementing git workflow for website deployment', 'summary': 'Discusses the implementation of a git workflow for website deployment, including the three types of workflows in git, the use of webhook for triggering code, and the containerization of code using a docker file. it also highlights the process of creating instances in aws for integration.', 'duration': 499.576, 'highlights': ['The chapter discusses the three types of workflows in Git: centralized workflow, feature branching, and Git flow workflow. It explains the different specifications and processes for each type of workflow in Git, providing an overview of the options available for managing code versioning and deployment.', 'The use of webhook for triggering code and building the website automatically is explained. The use of webhook as a triggering point for automating the build process in response to code changes is highlighted, emphasizing its role in streamlining the development and deployment workflow.', 'The containerization of code with the help of a Docker file is discussed, emphasizing its role in automating required tasks. 
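A declarative Jenkinsfile of the shape discussed in these chapters (clone, build, and deploy stages, pinned to a named slave node, with Maven for compilation and a recursive copy for the images folder) might look like the following sketch; the node label, repository URL, and deploy path are assumptions, not the exact script from the video:

```groovy
// Jenkinsfile -- illustrative declarative pipeline, not the author's exact script
pipeline {
    agent { label 'test-server' }          // run the job on the named slave node
    stages {
        stage('Clone') {
            steps { git url: 'https://github.com/example/website.git' }
        }
        stage('Build') {
            steps { sh 'mvn clean package' }   // Maven compile and package step
        }
        stage('Deploy') {
            // -r is needed because images/ is a folder (recursive copy),
            // and sudo because the job runs as the jenkins user
            steps { sh 'sudo cp -r index.html images /var/www/html/' }
        }
    }
}
```

A scripted pipeline would express the same stages inside a `node { ... }` block with Groovy code instead of the declarative `pipeline` sections.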
It explains the purpose and functionality of a Docker file in containerizing code, highlighting its ability to automate tasks and streamline the deployment process.', 'The process of creating instances in AWS for integration is highlighted, demonstrating the practical implementation of the deployment environment. It provides insights into the practical steps involved in creating instances within the AWS console for integration, showcasing the real-world application of the deployment process.']}, {'end': 24228.735, 'start': 23521.785, 'title': 'Launching aws instance and installing jenkins', 'summary': 'Discusses launching an aws instance, configuring security groups, and installing jenkins on the server, including the process of forking a website and accessing jenkins web ui.', 'duration': 706.95, 'highlights': ["Configuring security groups and launching AWS instance The speaker discusses setting up a security group named 'test project' and enabling all traffic to avoid issues with ports or firewalls while launching an instance.", 'Forking a website and working with instances The chapter explains the process of forking a website and using two instances, one for master server and another for testing server, to work on the copied content from the website.', 'Installing and configuring Jenkins on the server The speaker provides a detailed guide on installing Java, adding necessary packages, importing keys, and installing Jenkins on the server, followed by accessing the Jenkins web UI.']}, {'end': 24481.364, 'start': 24228.735, 'title': 'Setting up jenkins and configuring nodes', 'summary': 'Describes the process of setting up jenkins, including unlocking jenkins, installing suggested plugins, configuring user details, setting up jenkins ui, configuring credentials, and adding a new node, emphasizing the importance of clustering for data transfer and the use of web sockets for connecting nodes.', 'duration': 252.629, 'highlights': ['The chapter describes the process of 
setting up Jenkins, including unlocking Jenkins, installing suggested plugins, configuring user details, setting up Jenkins UI, configuring credentials, and adding a new node This encompasses the main steps involved in setting up Jenkins and its initial configuration.', 'Emphasizes the importance of clustering for data transfer and the use of web sockets for connecting nodes Highlighting the significance of clustering for efficient data transfer and the use of web sockets for connecting nodes in different availability zones or data centers.', 'Explains the concept of remote directory and the launch method for adding a new node Provides information on defining the remote directory and the launch method for adding a new node, essential for configuring and making changes to the added node.', 'Describes the process of installing suggested plugins and their purpose in Jenkins Details the process of installing suggested plugins and their role in enabling specific functionalities in Jenkins.', 'Details the configuration of user details and the process of setting up Jenkins UI Provides information on configuring user details and the steps involved in setting up the Jenkins UI, including setting the Jenkins URL and modifying the default password.']}], 'duration': 2353.867, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY422127497.jpg', 'highlights': ['Jenkins Pipeline Implementation The chapter emphasizes the implementation of Jenkins pipeline for code compilation, building, and deployment, highlighting the distinction between declarative and scripted Jenkins files.', 'Understanding and resolving deployment failures The process of deploying files with proper permissions is demonstrated, including identifying and resolving issues such as failed copying of folders and the need for recursive copy commands.', 'Setting up Jenkins slave and configuring jobs The process of setting up a Jenkins slave and configuring jobs to 
run on specific machines is explained, emphasizing the use of Jenkins files or scripts to specify the node parameters and slave names.', 'The chapter discusses the three types of workflows in Git: centralized workflow, feature branching, and Git flow workflow. It explains the different specifications and processes for each type of workflow in Git, providing an overview of the options available for managing code versioning and deployment.', 'The process of setting up Jenkins, including unlocking Jenkins, installing suggested plugins, configuring user details, setting up Jenkins UI, configuring credentials, and adding a new node This encompasses the main steps involved in setting up Jenkins and its initial configuration.']}, {'end': 25739.73, 'segs': [{'end': 24541.661, 'src': 'embed', 'start': 24482.645, 'weight': 0, 'content': [{'end': 24494.651, 'text': "And we'll save it once we save it node test server is created, but it is showing a cross symbol which basically means it is not active.", 'start': 24482.645, 'duration': 12.006}, {'end': 24498.933, 'text': 'So for that to making that active we need to download two files.', 'start': 24495.111, 'duration': 3.822}, {'end': 24501.354, 'text': 'which is present inside it.', 'start': 24499.793, 'duration': 1.561}, {'end': 24504.056, 'text': "Once you click on test server, you'll have two files.", 'start': 24501.654, 'duration': 2.402}, {'end': 24511.68, 'text': 'One is launch which is JNLP and the other one is agent.jar.', 'start': 24504.716, 'duration': 6.964}, {'end': 24514.202, 'text': 'Let me keep this both files.', 'start': 24512.621, 'duration': 1.581}, {'end': 24518.684, 'text': 'Once it is downloaded, what we need to do is I need to copy this file.', 'start': 24514.842, 'duration': 3.842}, {'end': 24525.409, 'text': 'I need to copy this file into my node machine to connect it to Jenkins cluster.', 'start': 24518.945, 'duration': 6.464}, {'end': 24527.59, 'text': 'I need to copy it to the node machine.', 'start': 
24525.889, 'duration': 1.701}, {'end': 24529.291, 'text': 'Then only I can do that.', 'start': 24528.11, 'duration': 1.181}, {'end': 24533.956, 'text': 'How to copy it? Let me open a tool called FileZilla.', 'start': 24529.793, 'duration': 4.163}, {'end': 24541.661, 'text': 'It is a tool that is used to copy the data from our local machine to the node machine.', 'start': 24534.576, 'duration': 7.085}], 'summary': 'To activate the test server, download two files, slave-agent.jnlp and agent.jar, and copy them to the node machine using FileZilla.', 'duration': 59.016, 'max_score': 24482.645, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY424482645.jpg'}, {'end': 24653.581, 'src': 'embed', 'start': 24623.662, 'weight': 1, 'content': [{'end': 24625.102, 'text': 'Let me drag and drop it.', 'start': 24623.662, 'duration': 1.44}, {'end': 24628.163, 'text': "Then I'll do the same for the agent.jar file.", 'start': 24625.362, 'duration': 2.801}, {'end': 24630.604, 'text': 'Let me drag and drop both the files.', 'start': 24628.603, 'duration': 2.001}, {'end': 24632.124, 'text': 'Once I drag and drop it.', 'start': 24630.964, 'duration': 1.16}, {'end': 24635.825, 'text': 'There is a file created, slave-agent.jnlp.', 'start': 24632.184, 'duration': 3.641}, {'end': 24636.886, 'text': "I'll rename it.", 'start': 24636.105, 'duration': 0.781}, {'end': 24642.467, 'text': 'There is an additional number 2 in the name because I have downloaded some other files previously.', 'start': 24637.566, 'duration': 4.901}, {'end': 24647.229, 'text': "So I'll modify or rename that file over here itself.", 'start': 24643.008, 'duration': 4.221}, {'end': 24651.861, 'text': 'and then finally the other file, agent.jar.', 'start': 24647.919, 'duration': 3.942}, {'end': 24653.581, 'text': 'it will take a bit of time.', 'start': 24651.861, 'duration': 1.72}], 'summary': 'Dragging and dropping files, renaming, and modifying, takes some 
time.', 'duration': 29.919, 'max_score': 24623.662, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY424623662.jpg'}, {'end': 24724.073, 'src': 'embed', 'start': 24671.108, 'weight': 3, 'content': [{'end': 24684.183, 'text': 'the reason for renaming this particular file is that there is a command given to us in the browser which will help us run the command to join this node.', 'start': 24671.108, 'duration': 13.075}, {'end': 24689.887, 'text': 'for that we need to have Java installed on our machine and the directory needs to be created.', 'start': 24684.183, 'duration': 5.704}, {'end': 24693.45, 'text': 'then we can go ahead and implement Git on the test server.', 'start': 24689.887, 'duration': 3.563}, {'end': 24697.572, 'text': 'but before that, let me get into the node machine first.', 'start': 24693.45, 'duration': 4.122}, {'end': 24700.194, 'text': 'these are the steps for setting up Jenkins.', 'start': 24697.572, 'duration': 2.622}, {'end': 24704.357, 'text': 'it might take some time, but the thing is that it will be helpful for you.', 'start': 24700.194, 'duration': 4.163}, {'end': 24710.047, 'text': 'it will be helpful for you to go ahead and get the opportunity to connect to the cluster.', 'start': 24704.357, 'duration': 5.69}, {'end': 24718.971, 'text': 'Then the cluster will have both master and test server, which will be helpful for us to implement the DevOps life cycle.', 'start': 24710.427, 'duration': 8.544}, {'end': 24724.073, 'text': 'So let me open PuTTY and paste the IP address of the node.', 'start': 24720.211, 'duration': 3.862}], 'summary': 'Renaming the file enables running the join command, setting up Jenkins, and connecting to the cluster for implementing the DevOps life cycle.', 'duration': 52.965, 'max_score': 24671.108, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY424671108.jpg'}, {'end': 25718.638, 'src': 'embed', 'start': 25690.61, 'weight': 10, 
'content': [{'end': 25693.432, 'text': 'There is a build starting in the build history.', 'start': 25690.61, 'duration': 2.822}, {'end': 25703.018, 'text': 'It is showing that there is a build going on, which basically means our job is triggered automatically on my Jenkins.', 'start': 25694.493, 'duration': 8.525}, {'end': 25707.601, 'text': "Under Jenkins, my job got triggered and we'll wait for the output.", 'start': 25703.559, 'duration': 4.042}, {'end': 25715.867, 'text': 'What exactly comes? If we get a blue color ball or a blue color at the ending of this particular thing, that means success.', 'start': 25707.982, 'duration': 7.885}, {'end': 25717.548, 'text': 'our job is executed successfully.', 'start': 25715.867, 'duration': 1.681}, {'end': 25718.638, 'text': 'SCM tab.', 'start': 25717.998, 'duration': 0.64}], 'summary': 'A build is ongoing on jenkins. success is indicated by a blue color ball at the end.', 'duration': 28.028, 'max_score': 25690.61, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY425690610.jpg'}], 'start': 24482.645, 'title': 'Setting up test server connection, configuring jenkins and git, and jenkins webhook', 'summary': 'Details setting up a test server connection to jenkins, configuring jenkins and git on the test server, and integrating git with jenkins. 
it emphasizes the importance of these steps in implementing the devops life cycle and explains setting up and triggering jenkins webhook for automated job execution.', 'chapters': [{'end': 24671.108, 'start': 24482.645, 'title': 'Setting up test server connection', 'summary': 'Details the process of setting up a test server connection to jenkins, involving downloading two files, using filezilla to copy the files to the node machine, and renaming the files upon completion of the file transfer.', 'duration': 188.463, 'highlights': ['Using FileZilla to copy the downloaded files to the node machine The speaker explains the process of using FileZilla to copy the downloaded files to the node machine, emphasizing the requirement of the public IP address and the username for the host, with the port being 22.', "Renaming the files after successful file transfer The chapter highlights the process of renaming the files upon successful completion of the file transfer, indicating that the file transfer is successful and the renaming of the files as 'slave-agent.jnlp' and 'agent.jar'.", "Downloading two essential files for activating the test server The chapter emphasizes the necessity of downloading two files, namely 'launch' (JNLP) and 'agent.jar', to activate the test server and connect it to the Jenkins cluster."]}, {'end': 25160.055, 'start': 24671.108, 'title': 'Configuring jenkins and git on test server', 'summary': 'Details the process of configuring jenkins and git on the test server, setting up the integration, installing java and creating directories, as well as initiating and cloning git repositories, emphasizing the importance of these steps in implementing the devops life cycle.', 'duration': 488.947, 'highlights': ['Configuring Jenkins and Git on the test server The chapter focuses on setting up Jenkins and Git on the test server, highlighting the importance of this configuration in implementing the DevOps life cycle.', 'Setting up the integration on 
the testing server Emphasizes the significance of setting up integration on the testing server for obtaining code from the developer, a crucial aspect of the DevOps life cycle.', 'Installing Java and creating directories The importance of installing Java and creating the necessary directories for successful configuration and implementation of Jenkins and Git is highlighted.', 'Initiating and cloning Git repositories The process of initiating and cloning Git repositories is explained, showcasing its essential role in the development and implementation of the DevOps life cycle.']}, {'end': 25379.916, 'start': 25160.415, 'title': 'Integrating git with jenkins', 'summary': 'Discusses integrating git with jenkins, including adding files to the staging area, committing changes, creating a webhook, and integrating a testing server for continuous integration.', 'duration': 219.501, 'highlights': ["Adding files to staging area using 'git add .' and committing changes using 'git commit -m' are essential steps in integrating Git with Jenkins. The process of adding files to the staging area using 'git add .' and committing changes using 'git commit -m' is crucial for tracking files and defining commit messages.", 'Creating a webhook in GitHub to trigger Jenkins using the Jenkins URL is a key step in integrating Git with Jenkins. Creating a webhook in GitHub to trigger Jenkins using the Jenkins URL is crucial for automating the process of pushing changes to Git and enabling continuous integration.', 'The discussion emphasizes the importance of integrating a testing server with Git and Jenkins for continuous integration, showcasing the use of webhooks to trigger Jenkins for automation. 
Integrating a testing server with Git and Jenkins for continuous integration is highlighted as a crucial step, with the use of webhooks to trigger Jenkins for automation.']}, {'end': 25739.73, 'start': 25380.316, 'title': 'Understanding jenkins webhook', 'summary': 'Explains the process of setting up and triggering jenkins webhook for automated job execution, showing the successful execution and the need for potential changes in case of private github usage.', 'duration': 359.414, 'highlights': ['The chapter explains the process of setting up and triggering Jenkins webhook for automated job execution The process of setting up and triggering Jenkins webhook for automated job execution is explained, demonstrating the automated triggering of the job and the successful execution.', 'Showing the successful execution and the need for potential changes in case of private GitHub usage The successful execution of the job is shown, stressing the potential need for changes in case of private GitHub usage, where credentials need to be provided.']}], 'duration': 1257.085, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY424482645.jpg', 'highlights': ['Downloading two essential files for activating the test server', 'Renaming the files after successful file transfer', 'Using FileZilla to copy the downloaded files to the node machine', 'Configuring Jenkins and Git on the test server', 'Setting up the integration on the testing server', 'Installing Java and creating directories', 'Initiating and cloning Git repositories', 'Creating a webhook in GitHub to trigger Jenkins using the Jenkins URL', "Adding files to staging area using 'git add .' 
and committing changes using 'git commit -m' are essential steps in integrating Git with Jenkins", 'The chapter explains the process of setting up and triggering Jenkins webhook for automated job execution', 'The discussion emphasizes the importance of integrating a testing server with Git and Jenkins for continuous integration']}, {'end': 26923.286, 'segs': [{'end': 26113.702, 'src': 'embed', 'start': 26080.391, 'weight': 1, 'content': [{'end': 26082.312, 'text': 'It will create a separate container for us.', 'start': 26080.391, 'duration': 1.921}, {'end': 26086.675, 'text': "At that time, I'll show you what exactly the container is.", 'start': 26082.973, 'duration': 3.702}, {'end': 26088.657, 'text': 'But we are not creating it.', 'start': 26087.536, 'duration': 1.121}, {'end': 26092.067, 'text': 'and let me save this particular file.', 'start': 26090.166, 'duration': 1.901}, {'end': 26096.43, 'text': 'This is inside my developer branch.', 'start': 26093.588, 'duration': 2.842}, {'end': 26101.253, 'text': "I have saved it and what I'll do is sudo git add.", 'start': 26096.971, 'duration': 4.282}, {'end': 26107.798, 'text': "I'll add all the files and sudo git commit.", 'start': 26102.414, 'duration': 5.384}, {'end': 26110.62, 'text': 'I send a commit message.', 'start': 26108.358, 'duration': 2.262}, {'end': 26113.702, 'text': "I'll be giving docker file added.", 'start': 26110.7, 'duration': 3.002}], 'summary': 'Creating a separate container, adding files using git, and committing changes.', 'duration': 33.311, 'max_score': 26080.391, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY426080391.jpg'}, {'end': 26485.544, 'src': 'embed', 'start': 26454.778, 'weight': 0, 'content': [{'end': 26461.524, 'text': 'I do that once I created or configured this job. I have got to install docker.', 'start': 26454.778, 'duration': 6.746}, {'end': 26464.097, 'text': 'Right now.', 'start': 26463.757, 'duration': 0.34}, 
{'end': 26469.799, 'text': "I am currently giving only master and also if I want to give I'll also give it develop as well.", 'start': 26464.117, 'duration': 5.682}, {'end': 26472.64, 'text': 'Let me give it develop.', 'start': 26470.479, 'duration': 2.161}, {'end': 26476.041, 'text': "Let's suppose my developer branch completed.", 'start': 26473.96, 'duration': 2.081}, {'end': 26480.262, 'text': 'Have you enabled Auto refresh in Jenkins? No, with the recent update.', 'start': 26476.421, 'duration': 3.841}, {'end': 26481.523, 'text': 'It is already enabled.', 'start': 26480.362, 'duration': 1.161}, {'end': 26485.544, 'text': 'It will work automatically.', 'start': 26481.543, 'duration': 4.001}], 'summary': 'Configured the job and will install docker, with auto refresh enabled.', 'duration': 30.766, 'max_score': 26454.778, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY426454778.jpg'}, {'end': 26567.125, 'src': 'embed', 'start': 26537.366, 'weight': 3, 'content': [{'end': 26543.01, 'text': "And right now I'll be going ahead and doing some command, some execute shell command.", 'start': 26537.366, 'duration': 5.644}, {'end': 26551.878, 'text': "what exactly we'll do in the execute shell is we'll write some docker command, sudo docker build /home.", 'start': 26543.01, 'duration': 8.868}, {'end': 26558.421, 'text': "I'll give the location /ubuntu/website.", 'start': 26552.118, 'duration': 6.303}, {'end': 26564.604, 'text': 'slash dot; the dot basically means it will pick the docker file automatically.', 'start': 26558.421, 'duration': 6.183}, {'end': 26567.125, 'text': "then I'll give a tag to it, -t test.", 'start': 26564.604, 'duration': 2.521}], 'summary': "Executing shell command to build docker image with tag 'test'", 'duration': 29.759, 'max_score': 26537.366, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY426537366.jpg'}, 
{'end': 26644.05, 'src': 'embed', 'start': 26617.877, 'weight': 5, 'content': [{'end': 26625.719, 'text': 'Do we need to create a second slave to have more than one job? No, you need not create any other slaves for other jobs.', 'start': 26617.877, 'duration': 7.842}, {'end': 26628.64, 'text': 'You can directly use it.', 'start': 26626.419, 'duration': 2.221}, {'end': 26631.2, 'text': "Right now I'll be in the master.", 'start': 26629.82, 'duration': 1.38}, {'end': 26634.141, 'text': "I'll go ahead and start installing Docker in it.", 'start': 26631.86, 'duration': 2.281}, {'end': 26635.321, 'text': 'A simple command.', 'start': 26634.541, 'duration': 0.78}, {'end': 26644.05, 'text': 'sudo apt-get install docker.io, which is a command useful for us to install docker.', 'start': 26635.828, 'duration': 8.222}], 'summary': 'No need for additional slaves. installing docker with sudo apt-get install docker.io.', 'duration': 26.173, 'max_score': 26617.877, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY426617877.jpg'}, {'end': 26859.565, 'src': 'embed', 'start': 26829.353, 'weight': 2, 'content': [{'end': 26834.356, 'text': 'Basically, this is the command that will be helpful for me to enter my master machine.', 'start': 26829.353, 'duration': 5.003}, {'end': 26840.302, 'text': 'Once I am in the master, if I do an ls, there are only two files.', 'start': 26835.917, 'duration': 4.385}, {'end': 26841.682, 'text': "So I'll merge it.", 'start': 26840.842, 'duration': 0.84}, {'end': 26845.142, 'text': 'sudo git merge develop.', 'start': 26841.942, 'duration': 3.2}, {'end': 26854.284, 'text': 'What exactly happens? 
If I merge it, the docker file that is present inside the develop branch is directly merged into my system.', 'start': 26846.063, 'duration': 8.221}, {'end': 26859.565, 'text': 'So once I check the status, everything is clean.', 'start': 26854.724, 'duration': 4.841}], 'summary': "Merged 'develop' branch into master, adding docker file and ensuring clean status.", 'duration': 30.212, 'max_score': 26829.353, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY426829353.jpg'}], 'start': 25739.73, 'title': 'Git branching, docker, and jenkins integration', 'summary': 'Discusses implementing git branching, automatic job execution triggered by push commands, creating docker file, integrating jenkins with github, and setting up docker in jenkins. it covers details such as creating and modifying branches, handling remote repositories, automation of the build process, and docker setup on master and test server.', 'chapters': [{'end': 25933.866, 'start': 25739.73, 'title': 'Implementing git branching and automatic job execution', 'summary': 'Discusses implementing branching in git, creating a develop branch, and automatic job execution triggered by push commands, with details on creating and modifying branches, and handling remote repositories using different locations.', 'duration': 194.136, 'highlights': ['Creating a develop branch and explaining the concept of branching and merging in Git. 2 branches created, master and develop.', 'Demonstrating how Git handles remote repositories when using different locations like Bitbucket or Azure Repo. Explanation of handling remote repositories using different locations.', 'Creating a Docker file inside the develop branch and adding content to it. 
Demonstration of creating a Docker file inside the develop branch.']}, {'end': 26367.634, 'start': 25934.406, 'title': 'Docker file and jenkins integration', 'summary': 'Covers the creation of a docker file, integration of jenkins with github, and the use of webhook to trigger jenkins jobs, aiming to automate the build process and branch-based triggering.', 'duration': 433.228, 'highlights': ['The creation of a Docker file for containerizing the code and its automatic build upon GitHub push. The task involves creating a Docker file to containerize the code and ensuring that it is built every time there is a push to GitHub.', 'Use of pre-built containers and the role of webhooks in triggering Jenkins jobs. The discussion involves utilizing pre-built containers and leveraging webhooks to inform or trigger Jenkins jobs upon specific events, aiming for automation.', "Explanation of the content and purpose of the pre-built container for the organization's application and its components. Detailed explanation of the pre-built container, comprising the operating system, required software, and the specific components like Apache, aiming for clarity on container content and purpose."]}, {'end': 26923.286, 'start': 26367.634, 'title': 'Setting up docker in jenkins', 'summary': 'Describes the process of setting up docker in jenkins to build a website, including creating a new job, installing docker on master and test server, and pushing changes from the develop branch to master, emphasizing the need to install docker on both machines in the cluster.', 'duration': 555.652, 'highlights': ['Creating a new job to build a website using a Docker file The speaker sets up a new job in Jenkins to build a website using a Docker file created by the developer, emphasizing the use of a freestyle project and the configuration of the job with specific details.', "Installing Docker on both master and test server The importance of installing Docker on both machines in the cluster is 
emphasized, with the speaker providing the command 'sudo apt-get install docker.io' for installation.", 'Pushing changes from the develop branch to master The speaker demonstrates the process of pushing changes from the develop branch to master, including updating the Docker file and merging changes into the master system.']}], 'duration': 1183.556, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY425739730.jpg', 'highlights': ['Demonstrating how Git handles remote repositories when using different locations like Bitbucket or Azure Repo. Explanation of handling remote repositories using different locations.', 'Creating a develop branch and explaining the concept of branching and merging in Git. 2 branches created, master and develop.', 'The creation of a Docker file for LV containerizing the code and its automatic build upon GitHub push. The task involves creating a Docker file to containerize the code and ensuring that it is built every time there is a push to GitHub.', 'Use of pre-built containers and the role of webhooks in triggering Jenkins jobs. 
The discussion involves utilizing pre-built containers and leveraging webhooks to inform or trigger Jenkins jobs upon specific events, aiming for automation.', 'Creating a new job to build a website using a Docker file The speaker sets up a new job in Jenkins to build a website using a Docker file created by the developer, emphasizing the use of a freestyle project and the configuration of the job with specific details.', "Installing Docker on both master and test server The importance of installing Docker on both machines in the cluster is emphasized, with the speaker providing the command 'sudo apt-get install docker.io' for installation."]}, {'end': 29353.944, 'segs': [{'end': 27004.549, 'src': 'embed', 'start': 26949.552, 'weight': 4, 'content': [{'end': 26950.152, 'text': 'Yeah over here.', 'start': 26949.552, 'duration': 0.6}, {'end': 26951.852, 'text': "I'll copy the test IP address.", 'start': 26950.212, 'duration': 1.64}, {'end': 26954.513, 'text': "I'll paste it over here.", 'start': 26953.353, 'duration': 1.16}, {'end': 26958.928, 'text': 'on port 82.', 'start': 26956.087, 'duration': 2.841}, {'end': 26965.309, 'text': "I'll place it on a new tab with a port 82 and I'll see the output port exactly and get it.", 'start': 26958.928, 'duration': 6.381}, {'end': 26966.21, 'text': 'this is the website.', 'start': 26965.309, 'duration': 0.901}, {'end': 26968.43, 'text': 'guys, this is the website that we want to build.', 'start': 26966.21, 'duration': 2.22}, {'end': 26973.472, 'text': 'it. 
see, the developer has sent some data to us, which we merged.', 'start': 26968.43, 'duration': 5.042}, {'end': 26978.173, 'text': 'once we push it to the docker, once we push it to the github,', 'start': 26973.472, 'duration': 4.701}, {'end': 26990.104, 'text': 'then automatically the job which I installed in the Jenkins got triggered and this particular website is created with the help of the index.html which we have used.', 'start': 26978.173, 'duration': 11.931}, {'end': 26992.164, 'text': 'why port 82?', 'start': 26990.104, 'duration': 2.06}, {'end': 26998.427, 'text': 'i have given port as 82 while creating the job, so it is opened in the port 82.', 'start': 26992.164, 'duration': 6.263}, {'end': 26999.207, 'text': 'that is the reason.', 'start': 26998.427, 'duration': 0.78}, {'end': 26999.987, 'text': "that's it.", 'start': 26999.207, 'duration': 0.78}, {'end': 27001.148, 'text': 'there is no specific reason.', 'start': 26999.987, 'duration': 1.161}, {'end': 27003.488, 'text': 'you can use any port.', 'start': 27001.148, 'duration': 2.34}, {'end': 27004.549, 'text': 'that is fine.', 'start': 27003.488, 'duration': 1.061}], 'summary': 'Developer merged data, pushed to docker and github, triggering jenkins job to create website on port 82.', 'duration': 54.997, 'max_score': 26949.552, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY426949552.jpg'}, {'end': 27249.593, 'src': 'embed', 'start': 27173.757, 'weight': 3, 'content': [{'end': 27177.659, 'text': 'i need to give the container id right for that.', 'start': 27173.757, 'duration': 3.902}, {'end': 27182.943, 'text': "i'll be writing sudo docker ps -a -q.", 'start': 27177.659, 'duration': 5.284}, {'end': 27185.427, 'text': 'this is the command.', 'start': 27184.446, 'duration': 0.981}, {'end': 27188.629, 'text': 'guys, what exactly will this command do for us?', 'start': 27185.427, 'duration': 3.202}, {'end': 27194.454, 'text': 'it will 
remove the previous container and it will be used for building the new container.', 'start': 27188.629, 'duration': 5.825}, {'end': 27197.016, 'text': 'let me save this job.', 'start': 27194.454, 'duration': 2.562}, {'end': 27201.019, 'text': 'so docker container is a lightweight image where the website will be hosted every time.', 'start': 27197.016, 'duration': 4.003}, {'end': 27202.58, 'text': 'we put something into the master.', 'start': 27201.019, 'duration': 1.561}, {'end': 27205.242, 'text': 'yes, after pushing in develop branch.', 'start': 27202.58, 'duration': 2.662}, {'end': 27207.564, 'text': 'why did it not build?', 'start': 27205.242, 'duration': 2.322}, {'end': 27209.365, 'text': 'we did not push it from the develop branch,', 'start': 27207.564, 'duration': 1.801}, {'end': 27218.394, 'text': 'the developer branch we already tested for the first time with the first job, and the second time, when we are doing nothing, nothing is supposed to come up,', 'start': 27210.171, 'duration': 8.223}, {'end': 27220.175, 'text': 'so it is not triggered.', 'start': 27218.394, 'duration': 1.781}, {'end': 27222.656, 'text': 'can there be more than one git job in a slave?', 'start': 27220.175, 'duration': 2.481}, {'end': 27227.298, 'text': 'yes, you can have n number of jobs after pushing in developer branch.', 'start': 27222.656, 'duration': 4.642}, {'end': 27228.899, 'text': 'why did it not build again?', 'start': 27227.298, 'duration': 1.601}, {'end': 27234.441, 'text': 'the same answer, basically: the push is saying everything is up to date.', 'start': 27228.899, 'duration': 5.542}, {'end': 27238.462, 'text': 'it does not push or modify any changes into the github.', 'start': 27234.441, 'duration': 4.021}, {'end': 27239.943, 'text': 'so that does not work.', 'start': 27238.462, 'duration': 1.481}, {'end': 27246.071, 'text': "why can't we use docker compose, then bring down the container using docker.", 'start': 27241.007, 'duration': 5.064}, {'end': 27249.593, 'text': 'compose down 
and then bring it up using docker.', 'start': 27246.071, 'duration': 3.522}], 'summary': "Using 'sudo docker ps -a -q' command to remove previous containers and build new ones for hosting the website, encountering issues with triggering jobs and updating github.", 'duration': 75.836, 'max_score': 27173.757, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY427173757.jpg'}, {'end': 27678.034, 'src': 'embed', 'start': 27647.497, 'weight': 9, 'content': [{'end': 27648.698, 'text': 'Test1 is the name.', 'start': 27647.497, 'duration': 1.201}, {'end': 27652.32, 'text': 'I want to include my public main also.', 'start': 27649.278, 'duration': 3.042}, {'end': 27653.66, 'text': 'Let me open it.', 'start': 27652.9, 'duration': 0.76}, {'end': 27656.721, 'text': 'Yes, this is my class which I have created.', 'start': 27653.86, 'duration': 2.861}, {'end': 27661.123, 'text': "Once it is started, what we'll do is we'll give the web driver.", 'start': 27657.362, 'duration': 3.761}, {'end': 27663.584, 'text': "We'll define the web driver first.", 'start': 27661.604, 'duration': 1.98}, {'end': 27667.446, 'text': "There is, we'll create it like this.", 'start': 27664.405, 'duration': 3.041}, {'end': 27669.147, 'text': 'Web driver.', 'start': 27668.426, 'duration': 0.721}, {'end': 27678.034, 'text': 'The very first thing what we need to do is I need to import the package web driver, which I have done once I double click on it.', 'start': 27670.147, 'duration': 7.887}], 'summary': "Creating a class 'test1' and importing the 'web driver' package.", 'duration': 30.537, 'max_score': 27647.497, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY427647497.jpg'}, {'end': 27864.479, 'src': 'embed', 'start': 27834.039, 'weight': 0, 'content': [{'end': 27838.522, 'text': 'we need to inspect our website and we need to check the xpath.', 'start': 27834.039, 'duration': 4.483}, 
{'end': 27844.486, 'text': "for that, press F12 and i'll copy the xpath over here.", 'start': 27838.522, 'duration': 5.964}, {'end': 27850.853, 'text': 'it will be slash home, slash ubuntu only, but for that we need to define it.', 'start': 27844.486, 'duration': 6.367}, {'end': 27857.256, 'text': 'we need to define what we need to search for.', 'start': 27850.853, 'duration': 6.403}, {'end': 27858.297, 'text': "yeah, we'll check it.", 'start': 27857.256, 'duration': 1.041}, {'end': 27860.978, 'text': 'what exactly needs to be tested over here?', 'start': 27858.297, 'duration': 2.681}, {'end': 27864.479, 'text': 'these testing cases are part of your tester job.', 'start': 27860.978, 'duration': 3.501}], 'summary': 'Inspect website for xpath /home/ubuntu, define and test search criteria.', 'duration': 30.44, 'max_score': 27834.039, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY427834039.jpg'}, {'end': 28403.232, 'src': 'embed', 'start': 28368.516, 'weight': 10, 'content': [{'end': 28372.118, 'text': 'what I need to do is I need to execute in a headless mode.', 'start': 28368.516, 'duration': 3.602}, {'end': 28376.479, 'text': 'headless mode is nothing, but it will execute without opening any browser.', 'start': 28372.118, 'duration': 4.361}, {'end': 28379.5, 'text': 'It will execute without opening any browser.', 'start': 28376.659, 'duration': 2.841}, {'end': 28387.803, 'text': 'If you want me to check, you can go ahead and add the browser again or the driver address over here, then try to execute it.', 'start': 28379.94, 'duration': 7.863}, {'end': 28391.504, 'text': 'What happens is it will not open any other browser.', 'start': 28388.043, 'duration': 3.461}, {'end': 28393.205, 'text': 'Let me copy this address.', 'start': 28391.544, 'duration': 1.661}, {'end': 28403.232, 'text': "of my chrome driver and let me show you first, then i'll go ahead and copy with this particular home slash 
ubuntu slash chrome driver.", 'start': 28394.105, 'duration': 9.127}], 'summary': 'Executing in headless mode means running without opening any browser, demonstrated using chrome driver.', 'duration': 34.716, 'max_score': 28368.516, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY428368516.jpg'}, {'end': 28692.356, 'src': 'embed', 'start': 28663.596, 'weight': 2, 'content': [{'end': 28667.618, 'text': "So I'll enter into D Drive where I have hello.jar.", 'start': 28663.596, 'duration': 4.022}, {'end': 28669.798, 'text': 'Let me scroll it down over here.', 'start': 28667.878, 'duration': 1.92}, {'end': 28671.459, 'text': 'This is the hello.jar.', 'start': 28670.338, 'duration': 1.121}, {'end': 28673.62, 'text': 'Let me double click on it.', 'start': 28672.119, 'duration': 1.501}, {'end': 28680.702, 'text': 'What happens is when I double click on it, it automatically transfers into the node machine which I have connected already.', 'start': 28674.1, 'duration': 6.602}, {'end': 28684.229, 'text': 'once it is completed with the uploading.', 'start': 28681.667, 'duration': 2.562}, {'end': 28692.356, 'text': 'it will take a bit of time, but once it is completed with the uploading we can go ahead and execute that particular test jar file.', 'start': 28684.229, 'duration': 8.127}], 'summary': 'Uploading hello.jar to node machine for test execution.', 'duration': 28.76, 'max_score': 28663.596, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY428663596.jpg'}, {'end': 28842.376, 'src': 'embed', 'start': 28814.626, 'weight': 8, 'content': [{'end': 28820.771, 'text': 'the testing will be there inside the IT industry over here.', 'start': 28814.626, 'duration': 6.145}, {'end': 28824.834, 'text': 'you can use any other automated tools for testing purposes.', 'start': 28820.771, 'duration': 4.063}, {'end': 28825.995, 'text': 'there is no issue with that.', 'start': 28824.834, 
'duration': 1.161}, {'end': 28836.093, 'text': 'but selenium is one of the tools which is widely used in the market, so we have included it in our course curriculum over here.', 'start': 28826.949, 'duration': 9.144}, {'end': 28838.574, 'text': "i'll select the key pair.", 'start': 28836.093, 'duration': 2.481}, {'end': 28842.376, 'text': "i'll select the group that i have created yesterday,", 'start': 28838.574, 'duration': 3.802}], 'summary': 'Selenium is widely used in the it industry for testing purposes.', 'duration': 27.75, 'max_score': 28814.626, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY428814626.jpg'}, {'end': 28970.505, 'src': 'embed', 'start': 28937.781, 'weight': 12, 'content': [{'end': 28943.762, 'text': "We'll start with the installation of Java inside my machine: sudo apt-get.", 'start': 28937.781, 'duration': 5.981}, {'end': 28956.884, 'text': 'let me type directly sudo apt-get install openjdk-8-jdk and then install our for me, which is a site for installing Jenkins,', 'start': 28943.762, 'duration': 13.122}, {'end': 28960.005, 'text': 'or adding this node to a Jenkins cluster.', 'start': 28956.884, 'duration': 3.121}, {'end': 28962.422, 'text': 'java is completely installed.', 'start': 28960.822, 'duration': 1.6}, {'end': 28970.505, 'text': "so we have installed that java and also we'll go ahead and install some other things as well, which is nothing but my docker,", 'start': 28962.422, 'duration': 8.083}], 'summary': 'Installed java using sudo apt-get and also added jenkins node to cluster.', 'duration': 32.724, 'max_score': 28937.781, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY428937781.jpg'}, {'end': 29274.154, 'src': 'embed', 'start': 29244.639, 'weight': 1, 'content': [{'end': 29251.005, 'text': 'You can use a sudo mv command as well to rename it, but this is quite easy for us.', 'start': 29244.639, 'duration': 6.366}, 
{'end': 29253.107, 'text': 'So let me rename it.', 'start': 29251.966, 'duration': 1.141}, {'end': 29266.118, 'text': 'What happens if I do an ls over here? The names are modified accordingly, agent.jar and slave-agent.jnlp, right now over here.', 'start': 29255.329, 'duration': 10.789}, {'end': 29271.223, 'text': 'If you can see the difference, on the top the IP address is different.', 'start': 29266.178, 'duration': 5.045}, {'end': 29274.154, 'text': 'from the IP address that is present over here.', 'start': 29271.994, 'duration': 2.16}], 'summary': 'Renamed files using command, showing ip address differences.', 'duration': 29.515, 'max_score': 29244.639, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY429244639.jpg'}], 'start': 26923.987, 'title': 'Devops and test automation', 'summary': 'Covers the process of automatically building and managing docker containers, implementing devops methodologies and testing processes, using xpath and chrome driver for test automation, creating a runnable jar file, and setting up a production server for a jenkins cluster, resulting in successful website deployment and testing. it emphasizes port conflict avoidance, docker image creation, headless testing, server deployment under aws, and jenkins cluster setup.', 'chapters': [{'end': 27307.312, 'start': 26923.987, 'title': 'Building and managing docker containers', 'summary': 'Demonstrates the process of automatically building a new job in jenkins, with a focus on managing containers, removing previous containers, and building new ones to avoid port conflicts, resulting in successful website deployment on port 82.', 'duration': 383.325, 'highlights': ['The process of automatically building a new job in Jenkins and successfully deploying a website on port 82 is demonstrated. 
Automatic building of a new job in Jenkins, successful deployment of a website on port 82', "The need to remove previous containers before building new ones to avoid port conflicts is highlighted, with the demonstration of the command 'sudo docker rm -f' and 'sudo docker ps -q'. Necessity of removing previous containers, demonstration of 'sudo docker rm -f' and 'sudo docker ps -q' commands", 'Explanation of the reason for removing the previous container before building the new one, leading to a successful output. Explanation of the reason for removing the previous container, successful output after removing and building a new container']}, {'end': 27979.968, 'start': 27307.733, 'title': 'Devops process and testing methodologies', 'summary': 'Discusses the implementation of a devops methodology using git for version controlling, webhook for triggering changes, building a docker file for creating an image, testing processes using manual and automated testing, and setting up test cases in eclipse for automated website testing.', 'duration': 672.235, 'highlights': ['Implementation of DevOps methodology using Git for version controlling, webhook for triggering changes, and building a Docker file for creating an image. Discussing the implementation of DevOps methodology using Git, webhook, and Docker for version controlling and image creation.', 'Explanation of manual and automated testing processes, including the drawbacks of manual testing and the use of Selenium for automated testing. Explaining the differences between manual and automated testing, highlighting the drawbacks of manual testing and the use of Selenium for automated testing.', 'Setting up test cases in Eclipse for automated website testing, including defining the web driver, getting the X path of the element, and adding options for running on a server without a user interface. 
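The cleanup step highlighted above — `sudo docker rm -f` fed by `sudo docker ps -q` before each rebuild, so host port 82 stays free — can be sketched as follows. The image name `website` is an assumption for illustration; the `run`/`DRY_RUN` wrapper is our addition, not from the video.

```shell
#!/bin/sh
# Sketch: free the host port before redeploying, then rebuild and rerun.
# DRY_RUN=1 (the default) only prints the commands.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# docker ps -q lists running container ids; docker rm -f stops and removes them
run sh -c 'sudo docker rm -f $(sudo docker ps -q)'
run sudo docker build -t website .        # rebuild from the Dockerfile
run sudo docker run -d -p 82:80 website   # republish on host port 82
```

Skipping the `rm -f` step is what produces the port-conflict failure the narration describes: the old container still holds port 82.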
Detailing the process of setting up test cases in Eclipse for automated website testing, covering the steps of defining the web driver, obtaining the X path of the element, and adding options for running on a server without a user interface.']}, {'end': 28508.192, 'start': 27980.468, 'title': 'Using xpath and chrome driver in test automation', 'summary': 'Demonstrates the process of using xpath for element identification and setting up chrome driver for headless testing in a linux environment, emphasizing the need for headless mode and addressing the configuration and installation process.', 'duration': 527.724, 'highlights': ['The chapter emphasizes the need for using XPath to find elements inside the website, and addresses the usage of HTML/XPath for test cases. XPath is discussed as a method for finding elements within the website, with a specific example of using HTML/XPath for test cases.', 'The chapter discusses the process of setting up Chrome Driver in a Linux environment, covering the installation and configuration steps for headless testing. The process of setting up Chrome Driver in a Linux environment is explained, focusing on installation and configuration for headless testing.', 'The chapter explains the need for headless mode in testing, outlining its purpose and how it enables executing test cases without opening a browser. 
The concept of headless mode in testing is elaborated, highlighting its purpose in executing test cases without opening a browser interface.']}, {'end': 28891.821, 'start': 28508.192, 'title': 'Creating runnable jar file and server deployment', 'summary': 'Covers the process of converting code into a runnable jar file, including its configuration and packaging of libraries, with emphasis on server deployment and instance creation under aws, discussing the usage of selenium and the need for testing in devops.', 'duration': 383.629, 'highlights': ['The process of converting code into a runnable jar file, its configuration, and packaging of libraries is covered. The speaker explains the steps involved in converting code into a runnable jar file, including the configuration process and packaging of libraries to ensure all required packages are included within the jar.', 'Discussion on server deployment and instance creation under AWS is presented. The speaker discusses the deployment of a production server, including the process of launching a new instance under AWS, and the addition of the server to the Jenkins cluster for deployment.', 'Usage of Selenium and the importance of testing in DevOps is emphasized. The importance of understanding the usage of Selenium and testing in DevOps is highlighted, explaining the need to check if the expectations for a product are met and the relevance of different automated testing tools in the industry.']}, {'end': 29353.944, 'start': 28891.961, 'title': 'Setting up production server for jenkins cluster', 'summary': 'Details the process of setting up a production server for a jenkins cluster, including installing java, docker, and configuring a new node, with an emphasis on using ubuntu as the default username and addressing ip address changes.', 'duration': 461.983, 'highlights': ['Installing Java and Docker on the production server for Jenkins cluster. 
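The chromedriver setup on Linux described above can be sketched like this. The `VERSION` value and the storage URL layout are assumptions — check the chromedriver release page for the build matching the installed Chrome; the `run`/`DRY_RUN` wrapper is our addition.

```shell
#!/bin/sh
# Sketch: install chromedriver on a Linux server so Selenium tests can run
# headless (the server has no display). DRY_RUN=1 only prints the commands.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

VERSION="${VERSION:-2.41}"   # assumed version; match your Chrome build
run wget "https://chromedriver.storage.googleapis.com/${VERSION}/chromedriver_linux64.zip"
run unzip chromedriver_linux64.zip
run sudo mv chromedriver /usr/local/bin/        # put it on the PATH
run sudo chmod +x /usr/local/bin/chromedriver
```

In the test code itself, headless mode is what lets the cases execute without opening a browser window — in Java Selenium, roughly `ChromeOptions#addArguments("--headless")` before constructing the driver.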
Java installed using sudo apt-get install openjdk-8-jdk, Docker installed for executing containers, and new node added to Jenkins console for the production server.', "Adding a new node named 'production' to the Jenkins cluster and configuring its location. Configuring the new node for production and setting the location to /home/ubuntu/Jenkins, distinguishing it from the test server and live server.", 'Addressing IP address changes and ensuring connectivity to the new server for the Jenkins cluster. Modifying IP addresses due to instance restarts, updating the command to connect to the new server, and confirming connectivity by testing the server.']}], 'duration': 2429.957, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY426923987.jpg', 'highlights': ['Automatic building of a new job in Jenkins, successful deployment of a website on port 82', "Necessity of removing previous containers, demonstration of 'sudo docker rm -f' and 'sudo docker ps -q' commands", 'Explanation of the reason for removing the previous container, successful output after removing and building a new container', 'Discussing the implementation of DevOps methodology using Git, webhook, and Docker for version controlling and image creation', 'Explaining the differences between manual and automated testing, highlighting the drawbacks of manual testing and the use of Selenium for automated testing', 'Detailing the process of setting up test cases in Eclipse for automated website testing, covering the steps of defining the web driver, obtaining the X path of the element, and adding options for running on a server without a user interface', 'XPath is discussed as a method for finding elements within the website, with a specific example of using HTML/XPath for test cases', 'The process of setting up Chrome Driver in a Linux environment is explained, focusing on installation and configuration for headless testing', 'The concept of headless mode in 
testing is elaborated, highlighting its purpose in executing test cases without opening a browser interface', 'The speaker explains the steps involved in converting code into a runnable jar file, including the configuration process and packaging of libraries to ensure all required packages are included within the jar', 'The speaker discusses the deployment of a production server, including the process of launching a new instance under AWS, and the addition of the server to the Jenkins cluster for deployment', 'The importance of understanding the usage of Selenium and testing in DevOps is highlighted, explaining the need to check if the expectations for a product are met and the relevance of different automated testing tools in the industry', 'Java installed using sudo apt-get install openjdk-8-jdk, Docker installed for executing containers, and new node added to Jenkins console for the production server', 'Configuring the new node for production and setting the location to /home/ubuntu/Jenkins, distinguishing it from the test server and live server', 'Modifying IP addresses due to instance restarts, updating the command to connect to the new server, and confirming connectivity by testing the server']}, {'end': 30816.308, 'segs': [{'end': 29621.482, 'src': 'embed', 'start': 29578.987, 'weight': 2, 'content': [{'end': 29592.636, 'text': "I'll execute this particular java file command sudo java item jar slash home slash ubuntu slash.", 'start': 29578.987, 'duration': 13.649}, {'end': 29604.104, 'text': 'jenkin, sorry, it will be inside ubuntu only hello dot jar.', 'start': 29592.636, 'duration': 11.468}, {'end': 29608.847, 'text': 'this is the jar file which we want to it.', 'start': 29604.104, 'duration': 4.743}, {'end': 29611.508, 'text': "i'll save this command first.", 'start': 29608.847, 'duration': 2.661}, {'end': 29612.589, 'text': "i'll be copying.", 'start': 29611.508, 'duration': 1.081}, {'end': 29613.63, 'text': "why i'm doing this?", 'start': 29612.589, 
'duration': 1.041}, {'end': 29621.482, 'text': 'because i want to enter this jar file inside my website directory, where i am pushing into my github through github.', 'start': 29613.63, 'duration': 7.852}], 'summary': 'Executing sudo java command to move jar file to website directory for github', 'duration': 42.495, 'max_score': 29578.987, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY429578987.jpg'}, {'end': 30306.452, 'src': 'embed', 'start': 30282.609, 'weight': 4, 'content': [{'end': 30291.113, 'text': 'can we export our jenkin jobs and re-import it again where you are want to export it over here?', 'start': 30282.609, 'duration': 8.504}, {'end': 30296.075, 'text': 'the test job is built successfully, but it want to be a continuous process.', 'start': 30291.113, 'duration': 4.962}, {'end': 30298.457, 'text': 'it should be a continuous process.', 'start': 30296.075, 'duration': 2.382}, {'end': 30306.452, 'text': 'i cannot directly use once because it should be a continuous process when there is changes which will automate the complete process,', 'start': 30298.457, 'duration': 7.995}], 'summary': 'Need to export/import jenkins jobs for continuous automation.', 'duration': 23.843, 'max_score': 30282.609, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY430282609.jpg'}, {'end': 30412.852, 'src': 'embed', 'start': 30384.664, 'weight': 0, 'content': [{'end': 30386.925, 'text': 'you can do it by taking the image of it.', 'start': 30384.664, 'duration': 2.261}, {'end': 30391.604, 'text': 'Once you take the image inside the docker, you can build the image again.', 'start': 30387.822, 'duration': 3.782}, {'end': 30395.285, 'text': 'After it for that you need to write a separate job.', 'start': 30391.624, 'duration': 3.661}, {'end': 30401.387, 'text': 'Right now, if I execute again from here what happened,', 'start': 30396.605, 'duration': 4.782}, {'end': 
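The flow narrated above — run the packaged jar, then place it in the website working copy and push so Jenkins picks it up — can be sketched as below. The paths (`/home/ubuntu/hello.jar`, `~/website`) come from the demo; the commit message and the `run`/`DRY_RUN` wrapper are our additions.

```shell
#!/bin/sh
# Sketch: execute the runnable test jar, then push it through the website
# repository to GitHub. DRY_RUN=1 (the default) only prints the commands.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run sudo java -jar /home/ubuntu/hello.jar    # run the Selenium test jar
run cp /home/ubuntu/hello.jar ~/website/     # drop it into the repo
run git -C ~/website add hello.jar
run git -C ~/website commit -m "add test jar"
run git -C ~/website push origin master      # webhook then triggers Jenkins
```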
30410.731, 'text': 'my Jenkins jobs will be executed successfully and it create a process to automate the changes due to the network.', 'start': 30401.387, 'duration': 9.344}, {'end': 30411.891, 'text': 'this issue occurred.', 'start': 30410.731, 'duration': 1.16}, {'end': 30412.852, 'text': "Don't worry on it.", 'start': 30411.951, 'duration': 0.901}], 'summary': 'Automate image creation and job execution in jenkins, resolving network issues.', 'duration': 28.188, 'max_score': 30384.664, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY430384664.jpg'}, {'end': 30619.769, 'src': 'embed', 'start': 30594.839, 'weight': 3, 'content': [{'end': 30602.943, 'text': 'in problem statement it says that if you push and develop, then test, only without pushing to the production moment.', 'start': 30594.839, 'duration': 8.104}, {'end': 30608.706, 'text': 'what exactly we have done just now is we pushed it from the master, right from the master.', 'start': 30602.943, 'duration': 5.763}, {'end': 30611.187, 'text': 'when we are pushing, it is built successfully,', 'start': 30608.706, 'duration': 2.481}, {'end': 30619.769, 'text': "but for the developer branch what we need to do is we need to create a separate job for it where exactly we'll be writing a command.", 'start': 30611.187, 'duration': 8.582}], 'summary': 'Pushed to master branch without separate job for developer branch.', 'duration': 24.93, 'max_score': 30594.839, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY430594839.jpg'}, {'end': 30769.737, 'src': 'embed', 'start': 30738.163, 'weight': 1, 'content': [{'end': 30743.524, 'text': "I'll make it as 85 port and I'll name it as test one.", 'start': 30738.163, 'duration': 5.361}, {'end': 30747.305, 'text': 'then finally, for executing it, sudo copy.', 'start': 30743.524, 'duration': 3.781}, {'end': 30753.646, 'text': 'we did not add it inside my doc branch 
file.', 'start': 30747.305, 'duration': 6.341}, {'end': 30758.468, 'text': "don't worry when you are practicing what these two commands to do.", 'start': 30753.646, 'duration': 4.822}, {'end': 30769.737, 'text': 'these two commands will build the docker file with the tag test one and test one image is executed on a particular post over here.', 'start': 30758.468, 'duration': 11.269}], 'summary': "Creating a docker image named 'test one' at port 85 and executing it with 'sudo copy'.", 'duration': 31.574, 'max_score': 30738.163, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY430738163.jpg'}], 'start': 29353.944, 'title': 'Jenkins ci/cd setup', 'summary': 'Covers setting up jenkins for test and production environments, configuring post build actions, cd pipeline configuration, and job automation, including creating test jobs, defining source code management, deploying a website, and automating continuous integration and deployment.', 'chapters': [{'end': 29647.157, 'start': 29353.944, 'title': 'Setting up jenkins for test and production environments', 'summary': 'Outlines the process of setting up jenkins to execute test and production jobs, including creating test jobs, defining source code management, executing shell commands, and managing files within the workspace.', 'duration': 293.213, 'highlights': ['Creating test jobs and defining source code management for executing test and production jobs', 'Executing shell commands to build and manage files within the workspace', 'Copying and executing Java files for website testing and pushing to GitHub']}, {'end': 29891.813, 'start': 29647.457, 'title': 'Configuring post build actions in jenkins', 'summary': 'Demonstrates the process of configuring post build actions in jenkins, including adding downstream projects and creating a build pipeline using the build pipeline plugin.', 'duration': 244.356, 'highlights': ['Configuring post build actions to trigger 
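The two commands described above — build the Dockerfile with the tag `test1` and run that image on host port 85 to check the output — can be sketched as follows. The tag and port come from the narration; the `--name` flag and the `run`/`DRY_RUN` wrapper are our additions.

```shell
#!/bin/sh
# Sketch: build and run the "test one" image on port 85.
# DRY_RUN=1 (the default) only prints the commands.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run sudo docker build -t test1 .                      # image tagged test1
run sudo docker run -d --name test1 -p 85:80 test1    # publish on port 85
```

Running the candidate image on a spare port like 85 lets you inspect the output without touching whatever is already serving on port 80 or 82.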
downstream projects upon updates, creating a chain of job executions. The post build actions trigger downstream projects upon updates, creating a chain of job executions.', "Adding downstream projects such as 'hyphen website' and 'test-website' to the job configuration. The addition of downstream projects 'hyphen website' and 'test-website' to the job configuration.", 'Installing the build pipeline plugin to create a visual pipeline of sequential job executions. Installing the build pipeline plugin to visualize a sequential pipeline of job executions.']}, {'end': 30282.609, 'start': 29891.813, 'title': 'Jenkins cd pipeline configuration', 'summary': 'Discusses the configuration of a jenkins cd pipeline, including copying and pushing jar files, executing build and test jobs, and deploying a website on a production server using docker, while emphasizing the separation of testing and production environments.', 'duration': 390.796, 'highlights': ['The pipeline includes steps such as copying jar files, committing changes to GitHub, and pushing changes to the master branch, with an emphasis on the flow of the pipeline and the execution of build and test jobs.', 'The process of deploying a website on a production server is detailed, involving Docker build commands and configuring the server to run the website on port 80, highlighting the separation of master and testing servers for configuration management and maintenance.', 'The usage of Jenkins CD for executing test cases without creating multiple commits and the execution of specific commands during the job running process are explained, emphasizing the efficiency and functionality of the pipeline.']}, {'end': 30816.308, 'start': 30282.609, 'title': 'Jenkins job automation and continuous integration', 'summary': 'Discusses the process of exporting and re-importing jenkins jobs, automating continuous integration and deployment, using docker to manage containers, and creating separate jobs for different branches, 
resulting in successful execution and deployment on the production server.', 'duration': 533.699, 'highlights': ['The process involves removing previous containers and automating the complete process of continuous integration and deployment, resulting in successful job execution and automation of changes due to network issues.', 'Using Docker to manage containers involves listing and removing running containers, creating job backups by taking images, and building and executing images on different ports to check output.', 'Creating separate jobs for different branches, such as the developer branch, involves using puppet configurations, writing manifest files, and executing commands for building and running Docker images, ensuring the process is restricted to the test server only.']}], 'duration': 1462.364, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY429353944.jpg', 'highlights': ['Configuring post build actions to trigger downstream projects upon updates, creating a chain of job executions.', 'The process involves removing previous containers and automating the complete process of continuous integration and deployment, resulting in successful job execution and automation of changes due to network issues.', 'The pipeline includes steps such as copying jar files, committing changes to GitHub, and pushing changes to the master branch, with an emphasis on the flow of the pipeline and the execution of build and test jobs.', 'Creating separate jobs for different branches, such as the developer branch, involves using puppet configurations, writing manifest files, and executing commands for building and running Docker images, ensuring the process is restricted to the test server only.', 'The process of deploying a website on a production server is detailed, involving Docker build commands and configuring the server to run the website on port 80, highlighting the separation of master and testing servers for 
configuration management and maintenance.']}, {'end': 32347.098, 'segs': [{'end': 31124.623, 'src': 'embed', 'start': 31089.569, 'weight': 2, 'content': [{'end': 31095.391, 'text': 'What exactly it will do it will add it will add some data to the puppet master.', 'start': 31089.569, 'duration': 5.822}, {'end': 31101.373, 'text': 'It will allocate some data to puppet master which I need to add inside the following file.', 'start': 31096.051, 'duration': 5.322}, {'end': 31103.931, 'text': 'i have saved the command as well.', 'start': 31102.35, 'duration': 1.581}, {'end': 31110.335, 'text': 'where i am you giving 512 mb and i have saved it.', 'start': 31103.931, 'duration': 6.404}, {'end': 31124.623, 'text': "once i saved it, what i'll do is i'll go ahead and restart the puppet master, sudo system, ctl, restart puppet, hyphen master.", 'start': 31110.335, 'duration': 14.288}], 'summary': 'Adding 512 mb data to puppet master and restarting it', 'duration': 35.054, 'max_score': 31089.569, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY431089569.jpg'}, {'end': 31193.383, 'src': 'embed', 'start': 31157.592, 'weight': 4, 'content': [{'end': 31168.355, 'text': 'as you see, slash force over here we need to define our public IP of the master.', 'start': 31157.592, 'duration': 10.763}, {'end': 31180.018, 'text': "yeah, I'll be using the master server, not any production or our master server IP address, along with the name puppet, so that what happened then?", 'start': 31168.355, 'duration': 11.663}, {'end': 31184.559, 'text': "finally, what we'll do is we'll save it and restart it.", 'start': 31180.018, 'duration': 4.541}, {'end': 31193.383, 'text': "that's it from the master, and We'll be adding a file for will be adding a file to create the manifest file.", 'start': 31184.559, 'duration': 8.824}], 'summary': 'Defining public ip for master server, saving and restarting configuration, adding manifest file.', 'duration': 
35.791, 'max_score': 31157.592, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY431157592.jpg'}, {'end': 31412.01, 'src': 'embed', 'start': 31377.207, 'weight': 1, 'content': [{'end': 31390.137, 'text': 'i need to do the same for the node machine, which is a production environment, because they asked us to do it for both servers.', 'start': 31377.207, 'duration': 12.93}, {'end': 31398.448, 'text': "then after it, what i'll do is i'll go ahead and do the installation first and I'll run the test case which I have written, not test case.", 'start': 31390.137, 'duration': 8.311}, {'end': 31412.01, 'text': 'basically, the manifest file will have the manifest file where it will define the condition for sudo apt-get update.', 'start': 31398.448, 'duration': 13.562}], 'summary': 'Need to implement changes in production environment for both servers, including installation and testing.', 'duration': 34.803, 'max_score': 31377.207, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY431377207.jpg'}, {'end': 31847.567, 'src': 'embed', 'start': 31812.462, 'weight': 0, 'content': [{'end': 31814.422, 'text': 'There is already an issue.', 'start': 31812.462, 'duration': 1.96}, {'end': 31820.304, 'text': "What it's showing is the file manifest is already declared.", 'start': 31816.043, 'duration': 4.261}, {'end': 31823.345, 'text': 'Conditions is already declared over here.', 'start': 31821.004, 'duration': 2.341}, {'end': 31825.845, 'text': 'But no issues with what we have written.', 'start': 31823.925, 'duration': 1.92}, {'end': 31830.586, 'text': 'Let me go ahead and modify it.', 'start': 31828.566, 'duration': 2.02}, {'end': 31832.747, 'text': 'Let me test it.', 'start': 31831.887, 'duration': 0.86}, {'end': 31837.84, 'text': 'and notice: using cached catalog.', 'start': 31833.957, 'duration': 3.883}, {'end': 31841.042, 'text': 'there is a command attribute.', 'start':
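The master-side wiring described above — map the name `puppet` to the master's public IP in `/etc/hosts` and restart the puppet master service — can be sketched as follows. `MASTER_IP` is a placeholder (the RFC 5737 example range), the service may be named `puppetmaster` or `puppet-master` depending on the package, and the `run`/`DRY_RUN` wrapper is our addition.

```shell
#!/bin/sh
# Sketch: let agents reach the master by the conventional name "puppet",
# then restart the master. DRY_RUN=1 (the default) only prints the commands.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

MASTER_IP="${MASTER_IP:-203.0.113.10}"   # placeholder: use the master's public IP
run sh -c "echo '$MASTER_IP puppet' | sudo tee -a /etc/hosts"
run sudo systemctl restart puppetmaster   # service name varies by package
```

Because instance restarts change the public IP (as the narration notes), this `/etc/hosts` entry is what has to be updated each time before agents can reconnect.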
31837.84, 'duration': 3.202}, {'end': 31844.265, 'text': 'let me modify it once again.', 'start': 31841.042, 'duration': 3.223}, {'end': 31847.567, 'text': 'let me do one more thing so that it will be clear for us.', 'start': 31844.265, 'duration': 3.302}], 'summary': 'Issue with file manifest and conditions, modifications and testing required.', 'duration': 35.105, 'max_score': 31812.462, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY431812462.jpg'}, {'end': 32045.807, 'src': 'embed', 'start': 32017.551, 'weight': 3, 'content': [{'end': 32023.316, 'text': "I'll be installing it on my system and then I'll execute the puppet.", 'start': 32017.551, 'duration': 5.765}, {'end': 32027.719, 'text': "then I'll execute the puppet and I'll see what exactly the changes happen.", 'start': 32023.316, 'duration': 4.403}, {'end': 32033.124, 'text': "then what I'll do is I'll go ahead and run puppet agent test again.", 'start': 32027.719, 'duration': 5.405}, {'end': 32040.629, 'text': 'once I executed the puppet agent test again, it is executed, but there is a non-written code existed.', 'start': 32033.124, 'duration': 7.505}, {'end': 32045.807, 'text': 'the reason for it is basically is my apache is not starting successful.', 'start': 32040.629, 'duration': 5.178}], 'summary': 'Installing puppet, testing agent, apache not starting successfully', 'duration': 28.256, 'max_score': 32017.551, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY432017551.jpg'}], 'start': 30816.308, 'title': 'Puppet configuration and management', 'summary': 'Covers triggering the develop branch in the test server, setting up puppet master and agent, installation and configuration of puppet, along with testing configuration files and managing servers, emphasizing devops and jenkins integration.', 'chapters': [{'end': 30904.942, 'start': 30816.308, 'title': 'Triggering develop branch in test 
server', 'summary': 'Discusses triggering the execution of the develop branch in the test server when changes are committed, and highlights the process of build and testing in the devops environment.', 'duration': 88.634, 'highlights': ['The problem statement mentions that changes made inside the develop branch will be executed separately, including build and testing processes.', 'Committing changes in the develop branch triggers the automatic execution of relevant jobs in the test server.', 'The session provides information on DevOps online training validated and certified by NASCOM Future Skills and IBM.']}, {'end': 31277.742, 'start': 30905.262, 'title': 'Setting up puppet master and puppet agent', 'summary': 'Provides a detailed guide on setting up a puppet master and puppet agent, including the installation steps and configuration process, emphasizing the use of puppet for server management.', 'duration': 372.48, 'highlights': ['The chapter provides a detailed guide on setting up a Puppet master and Puppet agent. The focus of the transcript is on setting up a Puppet master and Puppet agent.', 'The installation steps for Puppet are explained, including updating the system, installing prerequisites like W get and checking for existing installations. The transcript includes detailed installation steps for Puppet, such as updating the system, installing prerequisites like W get, and checking for existing installations.', 'The use of Puppet for server management and configuration is emphasized, with the process involving allocating data to the Puppet master, restarting the Puppet master, and defining the host. 
The emphasis is on using Puppet for server management and configuration, involving allocating data to the Puppet master, restarting the Puppet master, and defining the host.']}, {'end': 31751.844, 'start': 31277.742, 'title': 'Puppet installation and configuration', 'summary': 'Covers the installation and configuration of puppet on a test machine and a production environment, including the use of jenkins plugins for monitoring and the creation of manifest files for configuration management.', 'duration': 474.102, 'highlights': ['Puppet installation and configuration on test and production servers The session covers the installation and configuration of Puppet on a test machine and a production environment, including the use of Jenkins plugins for monitoring.', 'Creation of manifest files for configuration management The process involves creating manifest files defining conditions such as the presence of Apache and the use of unless statements for waiting until Apache is executed.', 'Use of Jenkins plugins for monitoring Jenkins plugins are utilized for monitoring changes and notifications within the Puppet configuration.', 'Signing of certificates for server communication The signing of certificates allows for server communication and configuration of the cluster, ensuring secure connections and updates.']}, {'end': 32347.098, 'start': 31751.844, 'title': 'Config management and puppet agent test', 'summary': 'Discusses the process of modifying and testing configuration files on a puppet agent, including issues encountered and solutions, as well as the significance of the agent test in managing multiple servers and the process of installing apache on a system.', 'duration': 595.254, 'highlights': ['The chapter discusses the process of modifying and testing configuration files on a Puppet agent, including issues encountered and solutions. 
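A manifest of the shape described above — run `apt-get update`, then ensure Apache is present and running — could look roughly like the following. The resource titles are our assumptions; on a real master this would live at the master's `site.pp`, but the sketch writes to a scratch path by default so it is safe to run anywhere.

```shell
#!/bin/sh
# Sketch: generate a sample site.pp like the one described in the session.
# MANIFEST defaults to a local scratch file, not the real Puppet path.
MANIFEST="${MANIFEST:-./site.pp.sample}"
cat > "$MANIFEST" <<'EOF'
exec { 'apt-update':
  command => '/usr/bin/apt-get update',
}
package { 'apache2':
  ensure  => installed,
  require => Exec['apt-update'],   # refresh the package index first
}
service { 'apache2':
  ensure => running,
  enable => true,
}
EOF
echo "wrote $MANIFEST"
```

Agents then pull and apply it with `sudo puppet agent --test` — the "agent test" the session refers to — which is how one manifest fans out across a whole cluster without manual intervention.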
The speaker details the process of modifying and testing configuration files on a Puppet agent, encountering issues such as file manifest and conditions declaration, and subsequently providing solutions and modifications.', 'The significance of the agent test in managing multiple servers is emphasized. The speaker explains the importance of the agent test when managing a cluster of 100 to 250 machines, enabling the distribution of file updates across multiple servers without manual intervention.', 'The process of installing Apache on a system and its impact on server configuration is explained. The speaker describes the process of installing Apache on a system, highlighting the impact on server configuration and addressing issues related to port utilization and directory creation.']}], 'duration': 1530.79, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY430816308.jpg', 'highlights': ['Committing changes in the develop branch triggers automatic execution of relevant jobs in the test server.', 'The session covers the installation and configuration of Puppet on a test machine and a production environment, including the use of Jenkins plugins for monitoring.', 'The process involves creating manifest files defining conditions such as the presence of Apache and the use of unless statements for waiting until Apache is executed.', 'The process of modifying and testing configuration files on a Puppet agent, encountering issues such as file manifest and conditions declaration, and subsequently providing solutions and modifications.', 'The emphasis is on using Puppet for server management and configuration, involving allocating data to the Puppet master, restarting the Puppet master, and defining the host.']}, {'end': 34149.132, 'segs': [{'end': 32408.928, 'src': 'embed', 'start': 32378.033, 'weight': 5, 'content': [{'end': 32380.434, 'text': "what i'll do is i'll start with the installation.", 'start': 32378.033,
'duration': 2.401}, {'end': 32387.03, 'text': 'These are the commands that are the prerequisites for installing Nagios.', 'start': 32382.867, 'duration': 4.163}, {'end': 32390.033, 'text': "I'll be doing a quick thing.", 'start': 32387.771, 'duration': 2.262}, {'end': 32392.054, 'text': 'You can get the commands, no issues.', 'start': 32390.173, 'duration': 1.881}, {'end': 32394.296, 'text': 'These commands are basically the installation steps.', 'start': 32392.094, 'duration': 2.202}, {'end': 32404.484, 'text': 'It will take a bit of time for the installation, and as we are running a Nagios server, it needs to create a separate user for it.', 'start': 32394.717, 'duration': 9.767}, {'end': 32408.928, 'text': 'Let me execute these commands one by one and see what exactly each will do.', 'start': 32405.225, 'duration': 3.703}], 'summary': 'The transcript covers installation commands for Nagios, with a focus on creating a separate user for running the Nagios server.', 'duration': 30.895, 'max_score': 32378.033, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY432378033.jpg'}, {'end': 33239.624, 'src': 'embed', 'start': 33211.006, 'weight': 6, 'content': [{'end': 33219.192, 'text': "what we'll do is we'll go ahead and build that website from the testing server with the help of the continuous integration tool Jenkins.", 'start': 33211.006, 'duration': 8.186}, {'end': 33231.536, 'text': 'from there, once it is executed successfully, we use automated test cases to check whether the website gives the exact output or not.', 'start': 33219.192, 'duration': 12.344}, {'end': 33239.624, 'text': "then, after testing that particular website, we'll push it to the production environment, which is a live server that people can access.", 'start': 33231.536, 'duration': 8.088}], 'summary': 'Build website using jenkins, test with automated cases, push to live server.', 'duration': 28.618, 'max_score': 33211.006,
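The Nagios prerequisite steps described above — install the build dependencies, then create the dedicated user and command group — can be sketched as below. The exact package list is a typical Ubuntu set and is an assumption, as are the group names; the `run`/`DRY_RUN` wrapper is our addition.

```shell
#!/bin/sh
# Sketch: prerequisites and the separate user for a Nagios server install.
# DRY_RUN=1 (the default) only prints the commands.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run sudo apt-get update -y
run sudo apt-get install -y autoconf gcc make wget unzip apache2 php libgd-dev
run sudo useradd nagios                  # Nagios runs under its own user
run sudo groupadd nagcmd                 # command group used by the web UI
run sudo usermod -a -G nagcmd nagios
run sudo usermod -a -G nagcmd www-data   # let Apache submit external commands
```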
'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY433211006.jpg'}, {'end': 33588.838, 'src': 'embed', 'start': 33561.109, 'weight': 3, 'content': [{'end': 33564.87, 'text': 'And finally, it lowers the rate of failure of the new releases.', 'start': 33561.109, 'duration': 3.761}, {'end': 33570.811, 'text': 'If you release a new software every time, you know that that software is going to be stable,', 'start': 33565.51, 'duration': 5.301}, {'end': 33575.413, 'text': 'because you have tested it thoroughly and you have pushed it through your automated code pipeline.', 'start': 33570.811, 'duration': 4.602}, {'end': 33577.433, 'text': "Alright, now let's talk about Agile.", 'start': 33575.633, 'duration': 1.8}, {'end': 33583.795, 'text': 'Now, Agile basically focuses on having multiple short-term development life cycles.', 'start': 33577.553, 'duration': 6.242}, {'end': 33588.838, 'text': 'Having shorter development life cycle improves the whole quality of the product.', 'start': 33584.235, 'duration': 4.603}], 'summary': 'Implementing agile reduces new release failure rate and ensures stable, thoroughly tested software through automated code pipeline.', 'duration': 27.729, 'max_score': 33561.109, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY433561109.jpg'}, {'end': 33843.205, 'src': 'embed', 'start': 33816.316, 'weight': 4, 'content': [{'end': 33822.878, 'text': 'once you start using devops, the whole quality of your product increases and the whole cost efficiency also increases.', 'start': 33816.316, 'duration': 6.562}, {'end': 33826.339, 'text': 'you are making your product more efficient and your profits are increasing.', 'start': 33822.878, 'duration': 3.461}, {'end': 33829.2, 'text': "now. 
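The build-test-promote flow described for Jenkins can be sketched as a declarative pipeline. The stage names, node label, and commands below are assumptions for illustration, not taken from the video:

```groovy
// Hypothetical Jenkinsfile mirroring the flow described: build on the test
// server, run automated test cases, then push to the live production server.
pipeline {
    agent { label 'test-server' }              // assumed node label
    stages {
        stage('Build') {
            steps { sh 'make build' }          // assumed build command
        }
        stage('Automated tests') {
            steps { sh 'make test' }           // a failure here stops the pipeline
        }
        stage('Deploy to production') {
            when { branch 'develop' }          // matches the develop-branch trigger
            steps { sh './deploy.sh production' }   // assumed deploy script
        }
    }
}
```

Because each stage gates the next, a release that reaches production has always passed the automated tests, which is what lowers the failure rate of new releases as described above.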
let's see the impact of agile now.", 'start': 33826.339, 'duration': 2.861}, {'end': 33836.743, 'text': "unlike for startups, where they don't have to choose the old methods, the older companies that have already existed for a long time.", 'start': 33829.2, 'duration': 7.543}, {'end': 33843.205, 'text': "they've been using the older traditional methodologies for a longer period of time and because of that, the transition from them,", 'start': 33836.743, 'duration': 6.462}], 'summary': 'Adopting devops improves product quality and cost efficiency, leading to increased profits. older companies transitioning from traditional methodologies face challenges.', 'duration': 26.889, 'max_score': 33816.316, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY433816316.jpg'}, {'end': 33876.681, 'src': 'embed', 'start': 33854.088, 'weight': 2, 'content': [{'end': 33862.533, 'text': "So after they've crossed that barrier of transitioning into a newer methodology like Agile, they have become much more prosperous.", 'start': 33854.088, 'duration': 8.445}, {'end': 33865.315, 'text': 'Companies like British Telecom, National Bank of Canada,', 'start': 33862.613, 'duration': 2.702}, {'end': 33876.681, 'text': 'Cisco and Lego have seen a lot of increase in their profits and a lot of increase in the efficiency after they started to introduce Agile as a methodology to be used.', 'start': 33865.315, 'duration': 11.366}], 'summary': 'Companies like british telecom, national bank of canada, cisco, and lego have seen increased profits and efficiency after transitioning to agile methodology.', 'duration': 22.593, 'max_score': 33854.088, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY433854088.jpg'}, {'end': 34103.53, 'src': 'embed', 'start': 34061.161, 'weight': 0, 'content': [{'end': 34069.126, 'text': "Create an atmosphere where both of these teams do the same thing and work 
towards the organization's goal.", 'start': 34061.161, 'duration': 7.965}, {'end': 34071.748, 'text': 'Have a single goal, not a departmental goal again.', 'start': 34069.407, 'duration': 2.341}, {'end': 34075.711, 'text': 'And again, increase the efficiency of the whole development process.', 'start': 34072.108, 'duration': 3.603}, {'end': 34084.722, 'text': "Whereas in agile, the whole aim is to decrease the gap between the understanding of the customer's need and the developers and the testers.", 'start': 34076.251, 'duration': 8.471}, {'end': 34091.224, 'text': 'So agile basically helps you to understand what the customer needs in more depth.', 'start': 34085.142, 'duration': 6.082}, {'end': 34098.707, 'text': "Whereas in DevOps, you're creating an environment for the employees to make the whole process of creating software more efficient.", 'start': 34091.524, 'duration': 7.183}, {'end': 34103.53, 'text': 'Now, if we were to discuss which one is better, which one is better, DevOps or Agile?', 'start': 34099.167, 'duration': 4.363}], 'summary': 'Create aligned teams, increase efficiency in development, compare devops vs. agile.', 'duration': 42.369, 'max_score': 34061.161, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY434061161.jpg'}, {'end': 34155.996, 'src': 'embed', 'start': 34127.867, 'weight': 1, 'content': [{'end': 34130.19, 'text': 'What does DevOps stand for? 
A.', 'start': 34127.867, 'duration': 2.323}, {'end': 34133.253, 'text': 'Development and Operations B.', 'start': 34130.19, 'duration': 3.063}, {'end': 34135.835, 'text': 'Drive and Operations C.', 'start': 34133.253, 'duration': 2.582}, {'end': 34138.538, 'text': 'Digital and Operations D.', 'start': 34135.835, 'duration': 2.703}, {'end': 34141.581, 'text': 'None of these Comment your answers in the comment section below.', 'start': 34138.538, 'duration': 3.043}, {'end': 34144.324, 'text': 'Subscribe to Intellipaat to know the right answer.', 'start': 34142.041, 'duration': 2.283}, {'end': 34146.626, 'text': "Now, let's continue with the session.", 'start': 34145.024, 'duration': 1.602}, {'end': 34149.132, 'text': 'why should you become a DevOps engineer?', 'start': 34147.031, 'duration': 2.101}, {'end': 34155.996, 'text': 'I will be going through what are the things that you need to learn to become a DevOps engineer, and why should you become one first of all?', 'start': 34149.292, 'duration': 6.704}], 'summary': 'Devops stands for development and operations. subscribe to intellipaat for the right answer.', 'duration': 28.129, 'max_score': 34127.867, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY434127867.jpg'}], 'start': 32348.019, 'title': 'Nagios installation and devops methodologies', 'summary': 'Covers the step-by-step installation process of nagios monitoring tool, enabling monitoring of websites and services, taking 5 to 10 minutes. 
it also explains devops lifecycle, continuous monitoring every one minute, and the benefits of devops and agile, emphasizing automation, faster deployment, and increased productivity.', 'chapters': [{'end': 32835.332, 'start': 32348.019, 'title': 'Installing nagios monitoring tool', 'summary': 'Covers the step-by-step installation process of nagios monitoring tool, including commands, prerequisites, and user creation, taking approximately 5 to 10 minutes, enabling monitoring of websites and services, and explaining the difference between start and enable commands.', 'duration': 487.313, 'highlights': ['The installation process of Nagios involves executing commands for prerequisites, creating a separate user, and configuring web interfaces, taking around 5 to 10 minutes.', 'The chapter explains the start and enable commands for creating a service and a system link on a production server.', 'The installation involves the use of sudo commands for make all, make install, and starting the Nagios service.', 'Nagios provides pre-created templates for monitoring different services and websites, sourced from official documentation.', 'The process includes installing Nagios NRPE plugins on the node for monitoring, with an explanation of the purpose of Nagios as a monitoring tool.']}, {'end': 33272.534, 'start': 32835.332, 'title': 'Setting up nagios monitoring for production server', 'summary': 'Details the steps to set up nagios monitoring for a production server, including adding configuration, defining hosts and services, restarting nagios, and accessing the nagios ui to monitor the production server and its services, with a focus on monitoring an http service and the overall devops architecture.', 'duration': 437.202, 'highlights': ['The chapter details the steps to set up Nagios monitoring for a production server, including adding configuration, defining hosts and services, restarting Nagios, and accessing the Nagios UI to monitor the production server and its services, with a focus on monitoring an HTTP service and the overall DevOps architecture. (relevance: 5)', 'The process involves adding the configuration under a load host in a specific location, defining the host master machine, adding the IP address of the master server, creating and defining the production server address, and modifying the nagios.cfg file to include the production server configuration. (relevance: 4)', 'The chapter demonstrates the steps to define a monitoring service for an HTTP service, including defining the service, setting check commands, intervals, and retry intervals, saving the configuration, and restarting Nagios to enable the monitoring service, with a focus on checking the status of the HTTP service. (relevance: 3)', 'The chapter provides an overview of the DevOps architecture, including version controlling, building the website from the testing server using Jenkins, automated test cases, pushing to the production environment, configuration management, and monitoring using Nagios, emphasizing the flow and steps involved in the DevOps process. 
(relevance: 2)']}, {'end': 33481.892, 'start': 33272.534, 'title': 'Devops lifecycle and methodologies', 'summary': 'Explains the devops lifecycle, including the continuous monitoring process every one minute, the stages of development, and the importance of devops in promoting better communication and automation in software development.', 'duration': 209.358, 'highlights': ["The chapter explains the continuous monitoring process that checks the website's status every one minute, ensuring the detection of any issues, and the ability to modify the interval.", 'It outlines the stages of software development, including continuous development with version control, building the website with container orchestration, testing, and production, as well as continuous feedback and monitoring in the production server.', 'It emphasizes the importance of DevOps in promoting better communication between different teams involved in software development, such as infrastructure, development, testing, and operations teams, to ensure high-quality software production and the automation of every process in software development.']}, {'end': 33756.1, 'start': 33481.892, 'title': 'Devops and agile benefits', 'summary': 'Explores the benefits of devops and agile, emphasizing automation, faster deployment, increased productivity, and quicker market reach, as well as the advantages of shorter development cycles, improved product quality, and increased transparency and predictability of costs and schedules.', 'duration': 274.208, 'highlights': ['DevOps increases productivity and product quality through automation, resulting in faster deployment and quicker market reach, satisfying the growing market demand. DevOps introduces automation into every aspect of the software development process, leading to increased productivity and product quality. 
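The host and service definitions walked through in the Nagios chapter follow the object-configuration format sketched below. The host name and address are placeholders, and `check_interval 1` reflects the one-minute monitoring cadence mentioned in the summary:

```cfg
; Placed in a .cfg file that nagios.cfg references via a cfg_file= line.
define host {
    use                  linux-server        ; pre-created template shipped with Nagios
    host_name            production-server   ; placeholder name
    address              192.0.2.10          ; placeholder IP of the production machine
}

define service {
    use                  generic-service
    host_name            production-server
    service_description  HTTP
    check_command        check_http          ; probes the web server
    check_interval       1                   ; minutes between checks
    retry_interval       1
}
```

After saving, the configuration can be validated with `nagios -v /usr/local/nagios/etc/nagios.cfg` and loaded by restarting the Nagios service, at which point the host and its HTTP check appear in the Nagios UI.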
This results in faster deployment, satisfying the ever-increasing market demand.', 'DevOps enables faster time to market, quicker recovery time, and lower rate of failure of new releases, leading to increased profits and improved reputation. With DevOps, faster time to market is achievable, allowing products to reach consumers quickly. Additionally, automated rollbacks decrease the recovery time of software, maintaining reputation and increasing profits, while lowering the rate of failure of new releases.', "Agile's shorter development life cycles prioritize feedback, improving product quality and allowing for early market capture, as opposed to traditional methodologies like waterfall. Agile's shorter development life cycles prioritize client feedback, resulting in improved product quality and early market capture. This is in contrast to traditional methodologies like waterfall, which have longer software development life cycles.", 'Agile promotes transparency, easier cost prediction, and schedule tasking, with the ability to accommodate changes in a dynamic market environment. Agile promotes transparency by allowing customers and clients to know the progress of the development, leading to improved customer loyalty. 
Additionally, it enables easier cost prediction and schedule tasking, with the ability to accommodate changes in a dynamic market environment.']}, {'end': 33876.681, 'start': 33756.38, 'title': 'Impact of agile and devops', 'summary': 'Explains the impact of agile and devops, emphasizing how they improve product quality, increase efficiency, and drive profits for companies like amazon, walmart, and british telecom.', 'duration': 120.301, 'highlights': ['DevOps is being applied by big enterprises like Amazon, Walmart, Sony Pictures, Netflix, Adobe, as well as small startups like Ola, Flipkart, and Grofers, leading to improved product quality and increased cost efficiency.', 'Transitioning to Agile methodologies has led to increased profits and efficiency for companies like British Telecom, National Bank of Canada, Cisco, and Lego, despite the initial challenges of implementation.', 'Agile allows for adapting to market changes and continuously improving product quality through a feedback loop.']}, {'end': 34149.132, 'start': 33876.941, 'title': 'Devops vs agile comparison', 'summary': 'Compares devops and agile methodologies, highlighting differences in philosophy, implementation, team structure, emphasis, evaluation, tools used, and goals, emphasizing the need to choose based on specific organizational goals and needs.', 'duration': 272.191, 'highlights': ['DevOps emphasizes organizational focus and removing silos, while Agile focuses on delivering the product on time and using relevant feedback to improve the product. DevOps emphasizes organizational focus and removing silos, while Agile focuses on delivering the product on time and using relevant feedback to improve the product.', 'DevOps aims to decrease the gap between the dev and ops team, create a single goal for both teams, and increase the efficiency of the development process, while Agile aims to decrease the gap between understanding customer needs and the development and testing teams. 
', 'DevOps has a wide range of tools including Jenkins, Kubernetes, Docker, Git, and more, while Agile has specific tools like Jira, Agile Bench, and Pivotal Tracker to create an agile environment.', 'DevOps evaluation is given by employees themselves, focusing on the code pipeline, while Agile evaluation is given by customers and clients, using their feedback to improve the product.', '
DevOps does not have specific frameworks but relies on individual requirements and tools, while Agile offers multiple frameworks such as Scrum, Lean, and feature-driven development for software development.']}], 'duration': 1801.113, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY432348019.jpg', 'highlights': ['The installation process of Nagios involves executing commands for prerequisites, creating a separate user, and configuring web interfaces, taking around 5 to 10 minutes.', 'Nagios provides pre-created templates for monitoring different services and websites, sourced from official documentation.', 'The process includes installing Nagios NRPE plugins on the node for monitoring, with an explanation of the purpose of Nagios as a monitoring tool.', 'The chapter details the steps to set up Nagios monitoring for a production server, including adding configuration, defining hosts and services, restarting Nagios, and accessing the Nagios UI to monitor the production server and its services, with a focus on monitoring an HTTP service and the overall DevOps architecture.', 'The process involves adding the configuration under a load host in a specific location, defining the host master machine, adding the IP address of the master server, creating and defining the production server address, and modifying the nagios.cfg file to include the production server configuration.', "The chapter explains the continuous monitoring process that checks the website's status every one minute, ensuring the detection of any issues, and the ability to modify the interval.", 'DevOps increases productivity and product quality through automation, resulting in faster deployment and quicker market reach, satisfying the growing market demand.', 'DevOps enables faster time to market, quicker recovery time, and lower rate of failure of new releases, leading to increased profits and improved reputation.', "Agile's shorter development life 
cycles prioritize feedback, improving product quality and allowing for early market capture, as opposed to traditional methodologies like waterfall.", 'DevOps is being applied by big enterprises like Amazon, Walmart, Sony Pictures, Netflix, Adobe, as well as small startups like Ola, Flipkart, and Grofers, leading to improved product quality and increased cost efficiency.', 'DevOps emphasizes organizational focus and removing silos, while Agile focuses on delivering the product on time and using relevant feedback to improve the product.', 'DevOps aims to decrease the gap between the dev and ops team, create a single goal for both teams, and increase the efficiency of the development process, while Agile aims to decrease the gap between understanding customer needs and the development and testing teams.', 'DevOps has a wide range of tools including Jenkins, Kubernetes, Docker, Git, and more, while Agile has specific tools like Jira, Agile Bench, and Pivotal Tracker to create an agile environment.', 'DevOps evaluation is given by employees themselves, focusing on the code pipeline, while Agile evaluation is given by customers and clients, using their feedback to improve the product.']}, {'end': 36237.934, 'segs': [{'end': 34175.588, 'src': 'embed', 'start': 34149.292, 'weight': 5, 'content': [{'end': 34155.996, 'text': 'I will be going through what are the things that you need to learn to become a DevOps engineer, and why should you become one first of all?', 'start': 34149.292, 'duration': 6.704}, {'end': 34164.902, 'text': "There are three things that you might think about whenever you're planning to apply for a particular profile or whenever you're planning to do a career change.", 'start': 34156.337, 'duration': 8.565}, {'end': 34169.344, 'text': 'And these are the three things that you should actually think about in that perspective.', 'start': 34165.322, 'duration': 4.022}, {'end': 34171.706, 'text': 'The first thing is job opportunities.', 'start': 34169.624, 
'duration': 2.082}, {'end': 34175.588, 'text': 'You should think about how many job opportunities are there right?', 'start': 34171.766, 'duration': 3.822}], 'summary': 'Becoming a devops engineer involves considering job opportunities and career change.', 'duration': 26.296, 'max_score': 34149.292, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY434149292.jpg'}, {'end': 34485.882, 'src': 'embed', 'start': 34460.235, 'weight': 1, 'content': [{'end': 34469.537, 'text': 'right. so it could be that your starting salary would be less, but rest assured, once you progress in your career, your salary will become high,', 'start': 34460.235, 'duration': 9.302}, {'end': 34475.979, 'text': 'because there is a lot of scope, you know, at least in the cloud and DevOps domain in the coming years,', 'start': 34469.537, 'duration': 6.442}, {'end': 34485.882, 'text': 'because most of the companies now are making use of these practices and these technologies for getting their product up and ready on production systems right?', 'start': 34475.979, 'duration': 9.903}], 'summary': 'Starting salary may be lower, but career progression promises higher pay in cloud and devops domain due to increasing industry adoption.', 'duration': 25.647, 'max_score': 34460.235, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY434460235.jpg'}, {'end': 35765.712, 'src': 'embed', 'start': 35735.114, 'weight': 0, 'content': [{'end': 35738.196, 'text': 'You can see the code has been changed just now.', 'start': 35735.114, 'duration': 3.082}, {'end': 35742.378, 'text': 'So it says 44 seconds ago, the code was changed.', 'start': 35738.356, 'duration': 4.022}, {'end': 35751.462, 'text': 'Awesome So now because my code has been changed, if I go to this website now and hit enter, you can see the background is now changed.', 'start': 35742.798, 'duration': 8.664}, {'end': 35753.764, 'text': 'It is now a 
different background.', 'start': 35751.623, 'duration': 2.141}, {'end': 35759.847, 'text': 'Now what I want to do is I realize that this change that I did is probably wrong.', 'start': 35754.364, 'duration': 5.483}, {'end': 35765.712, 'text': 'and I want to revert to a particular commit, to the older commit that was actually working.', 'start': 35760.367, 'duration': 5.345}], 'summary': 'Code changed 44 seconds ago, background updated, seeking to revert to older commit.', 'duration': 30.598, 'max_score': 35735.114, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY435735114.jpg'}, {'end': 36059.167, 'src': 'embed', 'start': 36025.498, 'weight': 3, 'content': [{'end': 36033.124, 'text': 'But if you can write a code which will basically test each and every functionality, that code will never make a mistake.', 'start': 36025.498, 'duration': 7.626}, {'end': 36037.707, 'text': 'That is why you should always automate things as far as possible.', 'start': 36033.164, 'duration': 4.543}, {'end': 36046.174, 'text': 'Like in my example, what happened was that there was basically a commit to the repository, which was basically a feature addition.', 'start': 36038.588, 'duration': 7.586}, {'end': 36059.167, 'text': 'and the tester did not see the all the functionalities or forgot to see some of the functionalities that could impact the other components of my product and because of that,', 'start': 36047.995, 'duration': 11.172}], 'summary': 'Automating testing can prevent missed functionalities and errors in code.', 'duration': 33.669, 'max_score': 36025.498, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY436025498.jpg'}], 'start': 34149.292, 'title': 'Devops engineer career and job insights', 'summary': 'Discusses the considerations for becoming a devops engineer, job opportunities and salaries in india and the us, key insights into the demand for devops engineers, 
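The change-then-revert cycle described above can be reproduced in a throwaway repository; the file name and contents below are invented for illustration:

```shell
# Self-contained demo of git revert in a temporary repository.
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo

echo "old background" > index.html
git add index.html
git commit -qm "working background"

echo "new background" > index.html
git commit -qam "change the background"   # the change we now regret

# git revert creates a NEW commit that undoes the last one,
# so history is preserved -- nothing is rewritten.
git revert --no-edit HEAD
cat index.html                            # -> old background
```

Because revert adds a commit instead of deleting one, it is safe on a shared branch, which is why it suits the production quick-fix scenario described here.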
sequential approach for becoming a devops engineer, and git architecture, lifecycle, and reverting commits with practical implementation in a ci-cd pipeline.', 'chapters': [{'end': 34235.538, 'start': 34149.292, 'title': 'Becoming a devops engineer: considerations', 'summary': 'Discusses the key considerations, including job opportunities, salary prospects, and industry demand, for individuals planning to become a devops engineer, emphasizing the importance of these factors in career decision-making.', 'duration': 86.246, 'highlights': ['The chapter emphasizes the significance of job opportunities, salary prospects, and industry demand in career decision-making, providing a comprehensive guide for individuals aspiring to become DevOps engineers.', 'The importance of job opportunities, salary considerations, and industry demand is highlighted, underscoring the critical factors for individuals contemplating a career transition to become a DevOps engineer.', 'The significance of job opportunities, salary prospects, and industry demand is underscored, offering valuable insights for individuals seeking to pursue a career as a DevOps engineer.']}, {'end': 34460.235, 'start': 34235.538, 'title': 'Devops engineer opportunities and salaries', 'summary': 'Explores the job opportunities and salaries for devops engineers in india and the us, revealing that there are 2600 open positions in india and 21,000+ in the us, with starting salaries of 80,000 inr and $11,250 respectively, and the potential for much higher earnings based on skill set and experience.', 'duration': 224.697, 'highlights': ['There are 2600 job positions open for DevOps engineers in India, with 900+ in Bangalore alone, and over 21,000 jobs in the US, with 1200+ in California. 
', 'The starting salary for a DevOps engineer in India is around 80,000 INR per month, and the average salary is approximately 1 lakh per month, with potential for higher earnings based on skills and experience.', 'In the US, the average starting salary for a DevOps engineer is $11,250 per month, and it can go upwards to around $16,000 per month, with potential for higher earnings based on skills and experience.', "Salaries in service-based companies are generally lower than those in product-based companies, and the company's own product can influence the salary it offers. 
"]}, {'end': 34945.364, 'start': 34460.235, 'title': 'Devops engineer job insights', 'summary': 'Highlights the increasing demand for devops engineers, with big companies like walmart, amazon, adobe, dell, and vmware seeking candidates with skills in cloud platforms, programming languages, unix/linux, devops tools like ansible, terraform, jenkins, docker, and databases like sql and pl sql.', 'duration': 485.129, 'highlights': ['The demand for DevOps engineers is on the rise, with big companies like Walmart, Amazon, Adobe, Dell, and VMware seeking candidates with skills in cloud platforms, programming languages, Unix/Linux, and DevOps tools like Ansible, Terraform, Jenkins, Docker (e.g., companies with tech teams of 10-15 people are making use of DevOps practices, and a cloud and DevOps engineer is needed when the team size goes beyond 25 people).', 'Job descriptions often include mandatory skills like knowledge of Unix/Linux operating systems, build tools like Gradle or Maven, Artifactory integration, experience in working with tools like Git, TeamCity, Jenkins, and other CI/CD tools, and optional skills like Java, web services, REST and SOAP APIs (e.g., these skills can fetch a higher package, but are not mandatory).', 'DevOps engineers are expected to have knowledge of open source tools like puppet and ansible, cloud automation software, and a cloud platform like AWS (e.g., understanding open source tools first is suggested, as the knowledge of implementing cloud tools will automatically follow once the base is clear).']}, {'end': 35278.287, 'start': 34945.805, 'title': 'Becoming a devops engineer', 'summary': 'Provides a sequential approach to becoming a devops engineer, emphasizing the need to learn devops tools, programming languages, linux operating system, cloud implementation, and specific tools 
like ansible and jenkins.', 'duration': 332.482, 'highlights': ['Understanding the major DevOps tools and their working is essential before moving forward to learn programming languages like Python. ', 'Deep diving into understanding the Linux operating system comes after acquiring knowledge about DevOps tools and programming languages. ', 'Learning about cloud implementation, especially AWS, is crucial as most companies are using cloud infrastructure, followed by a focus on specific tools like Ansible and Jenkins. ', 'Acquiring soft skills such as passion, proactiveness, and testing skills is also emphasized for a successful career as a DevOps engineer. ']}, {'end': 35525.475, 'start': 35278.908, 'title': 'Git architecture and reverting commits in devops', 'summary': 'Covers the explanation of git architecture, including the distributed version control system and the process of reverting commits in git, as well as the implementation of the revert procedure using the git revert command in a ci-cd pipeline.', 'duration': 246.567, 'highlights': ['Explanation of Git architecture and distributed version control system The transcript provides a detailed explanation of Git architecture, emphasizing its distributed version control system, which is crucial for troubleshooting and working as a DevOps engineer.', 'Process of reverting commits in Git and quick fix in production server The chapter discusses the process of reverting commits in Git, highlighting the necessity for quick fixes in the production server and the intention behind the revert procedure to roll back to the last working commit.', 'Implementation of the revert procedure using git revert command The transcript includes a demonstration of how to implement the revert procedure using the git revert command, providing practical insights for DevOps engineers in a CI-CD pipeline scenario.']}, {'end': 36237.934, 'start': 35526.656, 'title': 'Git lifecycle and reverting commits', 'summary': 'Covers the 
git lifecycle including cloning a repository, making changes to a website, reverting commits, and handling failed deployments. key points include the process of cloning a repository, making and reverting changes to a website, and best practices for handling failed deployments.', 'duration': 711.278, 'highlights': ['Cloning a repository involves copying the repository address and using the git clone command to create a local copy, creating a specific website, and making changes to its code. ', "Reverting a commit is demonstrated by using git log to find the commit ID and then using git revert to revert the changes, ensuring that the website's code reverts to a previous state. ", 'Best practices for handling failed deployments include automating code testing, using Docker for the same environment, employing microservices, and overcoming risks to avoid failures, with examples of how these practices can prevent issues in production. ']}], 'duration': 2088.642, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY434149292.jpg', 'highlights': ['The demand for DevOps engineers is on the rise, with big companies like Walmart, Amazon, Adobe, Dell, and VMware seeking candidates with skills in cloud platforms, programming languages, Unix/Linux, and DevOps tools like Ansible, Terraform, Jenkins, Docker.', 'There are 2600 job positions open for DevOps engineers in India, with 900+ in Bangalore alone, and over 21,000 jobs in the US, with 1200+ in California.', 'The starting salary for a DevOps engineer in India is around 80,000 INR per month, and the average salary is approximately 1 lakh per month, with potential for higher earnings based on skills and experience.', 'Understanding the major DevOps tools and their working is essential before moving forward to learn programming languages like Python.', 'Explanation of Git architecture and distributed version control system The transcript provides a detailed explanation of Git 
architecture, emphasizing its distributed version control system, which is crucial for troubleshooting and working as a DevOps engineer.', 'Cloning a repository involves copying the repository address and using the git clone command to create a local copy, creating a specific website, and making changes to its code.']}, {'end': 38598.918, 'segs': [{'end': 36328.21, 'src': 'embed', 'start': 36302.279, 'weight': 0, 'content': [{'end': 36310.41, 'text': 'so virtualization is nothing but installing a new piece of operating system on top of virtualized hardware.', 'start': 36302.279, 'duration': 8.131}, {'end': 36311.431, 'text': 'what does that mean?', 'start': 36310.41, 'duration': 1.021}, {'end': 36319.181, 'text': 'so, basically, there is software, like a hypervisor or any other software, which specializes in virtualizing hardware.', 'start': 36311.431, 'duration': 7.75}, {'end': 36328.21, 'text': 'so if you have a server which has around 64 gigs of RAM and 1000 TB of hard disk space, with software like a hypervisor,', 'start': 36319.181, 'duration': 9.029}], 'summary': 'Virtualization installs a new OS on virtualized hardware using software like a hypervisor, enabling efficient use of server resources.', 'duration': 25.931, 'max_score': 36302.279, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY436302279.jpg'}, {'end': 36813.207, 'src': 'embed', 'start': 36779.646, 'weight': 2, 'content': [{'end': 36783.508, 'text': 'so this is the dollar prompt, which shows that we are outside the container right now.', 'start': 36779.646, 'duration': 3.862}, {'end': 36790.971, 'text': 'now, if I again do the same command, that is, again I search for processes which have the word watch in it,', 'start': 36783.508, 'duration': 7.463}, {'end': 36799.536, 'text': 'I can actually see that there is a new process which is running over here and this process is running inside the container,', 'start': 36790.971, 'duration': 8.565}, {'end': 
36803.48, 'text': "which I'm able to see from the host operating system level, right?", 'start': 36799.536, 'duration': 3.944}, {'end': 36813.207, 'text': 'So the host operating system is basically treating this particular process as if it was running on its own system, that is,', 'start': 36803.78, 'duration': 9.427}], 'summary': 'A new process is running inside the container, visible from the host OS.', 'duration': 33.561, 'max_score': 36779.646, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY436779646.jpg'}, {'end': 37264.122, 'src': 'embed', 'start': 37228.918, 'weight': 4, 'content': [{'end': 37239.124, 'text': 'Great. Now if I run this image with docker run -it -p, say I run it on port 84, run it as a daemon, and run the image.', 'start': 37228.918, 'duration': 10.206}, {'end': 37246.017, 'text': 'Okay, great.', 'start': 37245.256, 'duration': 0.761}, {'end': 37253.248, 'text': "So if I go to port 84 now, let's see if the container is working first.", 'start': 37246.498, 'duration': 6.75}, {'end': 37255.571, 'text': 'So yes, the container is working.', 'start': 37253.809, 'duration': 1.762}, {'end': 37262.581, 'text': 'Now if I go inside DevOps IQ, what do I see? 
Great.', 'start': 37256.272, 'duration': 6.309}, {'end': 37264.122, 'text': 'So I can see the web.', 'start': 37262.721, 'duration': 1.401}], 'summary': 'Using Docker, the image runs on port 84 as a daemon and is accessible at DevOps IQ.', 'duration': 35.204, 'max_score': 37228.918, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY437228918.jpg'}, {'end': 37384.625, 'src': 'embed', 'start': 37357.327, 'weight': 5, 'content': [{'end': 37360.408, 'text': 'And all of this is possible using containers.', 'start': 37357.327, 'duration': 3.081}, {'end': 37366.97, 'text': 'So, basically, what they would have done is they would have run each and every component inside a container.', 'start': 37360.808, 'duration': 6.162}, {'end': 37375.238, 'text': 'Now, the problem over here is, when you have a website like Amazon, you would be dealing like.', 'start': 37367.833, 'duration': 7.405}, {'end': 37384.625, 'text': 'you would be dealing with minimum like 10 or 11 containers for one particular copy of that website or one particular instance of that website,', 'start': 37375.238, 'duration': 9.387}], 'summary': 'A website like Amazon needs a minimum of 10 or 11 containers for each running instance.', 'duration': 27.298, 'max_score': 37357.327, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY437357327.jpg'}, {'end': 37756.367, 'src': 'embed', 'start': 37706.789, 'weight': 6, 'content': [{'end': 37713.134, 'text': "or I'll say it's a stage which basically connects all the other stages of the DevOps lifecycle.", 'start': 37706.789, 'duration': 6.345}, {'end': 37717.797, 'text': 'For example, you push your code to Git, like we took an example.', 'start': 37713.394, 'duration': 4.403}, {'end': 37725.983, 'text': 'When you push the code to Git, you might have provisions which might allow you that the moment the code is pushed onto the remote repository,', 'start': 37718.137, 'duration': 
7.846}, {'end': 37730.046, 'text': 'it automatically gets deployed on the servers as well.', 'start': 37725.983, 'duration': 4.063}, {'end': 37739.113, 'text': 'Well, if that is the case, basically that would be possible using integration tools that would integrate your Git repository with your remote server.', 'start': 37730.706, 'duration': 8.407}, {'end': 37741.815, 'text': 'And that is exactly what Jenkins does.', 'start': 37739.794, 'duration': 2.021}, {'end': 37745.858, 'text': "It's a continuous integration tool which helps you,", 'start': 37741.935, 'duration': 3.923}, {'end': 37754.105, 'text': 'which helps us integrate different DevOps lifecycle stages together so that they work like an organism, right?', 'start': 37745.858, 'duration': 8.247}, {'end': 37756.367, 'text': 'This is what continuous integration means.', 'start': 37754.465, 'duration': 1.902}], 'summary': 'Jenkins is a continuous integration tool that connects devops lifecycle stages, enabling automatic code deployment from git to remote servers.', 'duration': 49.578, 'max_score': 37706.789, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY437706789.jpg'}, {'end': 37841.293, 'src': 'embed', 'start': 37808.219, 'weight': 8, 'content': [{'end': 37818.807, 'text': 'which in turn will integrate or will take the website from the GitHub repository and push it onto the build server on which we want the website to be deployed.', 'start': 37808.219, 'duration': 10.588}, {'end': 37820.408, 'text': 'All right, sounds awesome.', 'start': 37819.207, 'duration': 1.201}, {'end': 37824.169, 'text': "Great. Now, let's go ahead and do this demo.", 'start': 37820.948, 'duration': 3.221}, {'end': 37830.51, 'text': 'For that, we will have to SSH into our server.', 'start': 37824.829, 'duration': 5.681}, {'end': 37831.251, 'text': 'Let us do that.', 'start': 37830.55, 'duration': 0.701}, {'end': 37834.391, 'text': "I'm in now.", 'start': 37833.611, 'duration': 
0.78}, {'end': 37837.252, 'text': 'Let me clear the screen.', 'start': 37835.532, 'duration': 1.72}, {'end': 37841.293, 'text': "First, let's check if our Jenkins is running on this server.", 'start': 37837.312, 'duration': 3.981}], 'summary': 'Integrating website from github and deploying it onto the build server using ssh for jenkins demo.', 'duration': 33.074, 'max_score': 37808.219, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY437808219.jpg'}, {'end': 37906.048, 'src': 'embed', 'start': 37872.109, 'weight': 7, 'content': [{'end': 37882.616, 'text': 'Now our aim is to create a job which basically will push a website that we are uploading to GitHub onto a particular server.', 'start': 37872.109, 'duration': 10.507}, {'end': 37883.597, 'text': 'All right.', 'start': 37883.297, 'duration': 0.3}, {'end': 37886.319, 'text': "So let's create a new job first.", 'start': 37883.917, 'duration': 2.402}, {'end': 37893.564, 'text': "So let's call our job demo job.", 'start': 37886.579, 'duration': 6.985}, {'end': 37898.265, 'text': "okay, and let's name it as a freestyle project and click OK.", 'start': 37894.404, 'duration': 3.861}, {'end': 37903.627, 'text': 'so this will create a job in Jenkins for us, all right.', 'start': 37898.265, 'duration': 5.362}, {'end': 37906.048, 'text': 'so our job has now been created.', 'start': 37903.627, 'duration': 2.421}], 'summary': 'Create a job in Jenkins that deploys a website uploaded to GitHub onto a server.', 'duration': 33.939, 'max_score': 37872.109, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY437872109.jpg'}, {'end': 38023.244, 'src': 'embed', 'start': 37991.481, 'weight': 3, 'content': [{'end': 38000.082, 'text': 'Once this is done I want to build my container which is going to have my website.', 'start': 37991.481, 'duration': 8.601}, {'end': 38001.123, 'text': 'All 
right.', 'start': 38000.803, 'duration': 0.32}, {'end': 38002.083, 'text': 'Now how can we do that?', 'start': 38001.203, 'duration': 0.88}, {'end': 38009.099, 'text': "For that, I'll have to push the code to my GitHub, which will have the Dockerfile as well.", 'start': 38003.132, 'duration': 5.967}, {'end': 38013.544, 'text': 'So we created a Dockerfile inside.', 'start': 38010.28, 'duration': 3.264}, {'end': 38015.366, 'text': 'So here it is.', 'start': 38014.805, 'duration': 0.561}, {'end': 38023.244, 'text': 'So we have our Dockerfile created in the DevOps IQ folder, which was there in my home directory.', 'start': 38016.519, 'duration': 6.725}], 'summary': 'The speaker plans to build a container for their website by pushing the code to GitHub, including a Dockerfile.', 'duration': 31.763, 'max_score': 37991.481, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY437991481.jpg'}], 'start': 36237.934, 'title': 'Devops and containerization', 'summary': 'Discusses virtualization vs containerization, docker processes, Dockerfiles and container orchestration, and continuous integration with jenkins. It compares docker swarm and kubernetes, and explains setting up jenkins for ci/cd pipeline deployment.', 'chapters': [{'end': 36525.751, 'start': 36237.934, 'title': 'Virtualization vs containerization', 'summary': 'Discusses the differences between virtualization and containerization, highlighting that virtualization involves installing a new os on virtualized hardware, while containerization uses a container engine to run with the host os, resulting in smaller and more efficient containers.', 'duration': 287.817, 'highlights': ['Virtualization involves installing a new OS on virtualized hardware, allowing the division of resources among multiple operating systems, while containerization uses a container engine on the host OS, resulting in smaller and more efficient containers.', 'Virtualization virtualizes the hardware and installs a guest OS on top of the hypervisor, while containerization does not encourage installing a whole OS, resulting in smaller containers with the bare minimum libraries required for the container to behave as a particular OS.', 'Containers do not contain an operating system and are based on the same kernel as the host OS, while virtualization involves a separate kernel present for the virtual OS.']}, {'end': 36900.983, 'start': 36526.312, 'title': 'Devops online training & docker processes', 'summary': 'Discusses how to view processes running inside a docker container without using docker, and demonstrates launching a process inside a container and viewing it from the host operating system. 
It also touches on the purpose of a Dockerfile.', 'duration': 374.671, 'highlights': ["You can view processes running inside a Docker container from the host operating system using the 'ps aux' command.", 'Demonstrating the process by launching a watch command inside a container and viewing it from the host operating system.', 'Explanation of a Dockerfile as a text document used to create an image by adding files to an older image.']}, {'end': 37658.875, 'start': 36900.983, 'title': 'Dockerfile and container orchestration', 'summary': 'Explains the usage of a Dockerfile to build an apache image and deploy a website inside a container, the concept of container orchestration, and the differences between docker swarm and kubernetes. It also mentions the benefits of using docker swarm over kubernetes in terms of ease of installation and speed, but highlights the lack of auto-scaling functionality in docker swarm compared to kubernetes.', 'duration': 757.892, 'highlights': ['Usage of a Dockerfile to Deploy a Website Inside a Container The speaker demonstrates the process of using a Dockerfile to build an Apache image and deploy a website inside a container, showcasing the efficient and automated method compared to manual steps, offering a streamlined approach for image creation and deployment.', 'Explanation of Container Orchestration The chapter delves into the concept of container orchestration, highlighting its significance in managing multiple containers, ensuring synchronization, communication, scaling, and health monitoring, and mentions Kubernetes and Docker Swarm as popular container orchestration tools that automate manual tasks and provide health status reports.', 'Differences Between Docker Swarm and Kubernetes The differences between Docker Swarm and Kubernetes are discussed, emphasizing the ease of installation and pre-packaged nature of Docker Swarm, its faster deployment, and the absence of auto-scaling functionality, while Kubernetes is described as 
having more dependencies, slower deployment, but offering auto-scaling capabilities and a broader range of features.']}, {'end': 38598.918, 'start': 37659.415, 'title': 'Understanding continuous integration with jenkins', 'summary': 'Explains the concept of continuous integration in devops, with a focus on using jenkins to create a ci/cd pipeline to deploy a website on every commit to the main branch of a git repository, detailing the process of setting up jenkins, creating a job, integrating with github, building and deploying the website, and troubleshooting issues with the deployment.', 'duration': 939.503, 'highlights': ['Explanation of Continuous Integration and Jenkins The chapter provides a detailed explanation of continuous integration as a development practice that connects all stages of the DevOps lifecycle, emphasizing the role of Jenkins as a continuous integration tool to integrate different lifecycle stages and automate the deployment process.', 'Setting up Jenkins and Creating a Job It details the process of setting up Jenkins, creating a new job, specifying the GitHub repository, and configuring the build trigger to automatically deploy the code from the repository to a remote server on every push to the master branch.', 'Building and Deploying the Website Using Jenkins It explains the commands to remove running containers, build the Docker file, and run the Docker image to deploy the website on a specified port, providing a step-by-step demonstration of the process within Jenkins.', 'Troubleshooting Deployment Issues The chapter demonstrates the troubleshooting process for deployment issues, including fixing a command error, making code changes, pushing the changes to the repository, triggering the Jenkins job, and verifying the successful deployment of the website.']}], 'duration': 2360.984, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY436237934.jpg', 'highlights': ['Virtualization allows 
division of resources among multiple OS', 'Containerization uses a container engine on the host OS, resulting in smaller and more efficient containers', 'Containers do not contain an operating system and are based on the same kernel as the host OS', 'Usage of a Dockerfile to Deploy a Website Inside a Container', 'Explanation of Container Orchestration', 'Differences Between Docker Swarm and Kubernetes', 'Explanation of Continuous Integration and Jenkins', 'Setting up Jenkins and Creating a Job', 'Building and Deploying the Website Using Jenkins', 'Troubleshooting Deployment Issues']}, {'end': 40114.814, 'segs': [{'end': 38826.941, 'src': 'embed', 'start': 38799.247, 'weight': 0, 'content': [{'end': 38802.148, 'text': 'Awesome guys, so we have completed the demo.', 'start': 38799.247, 'duration': 2.901}, {'end': 38810.733, 'text': 'We were basically asked to create a CI/CD pipeline using Git and Jenkins to deploy a website on every commit on the main branch.', 'start': 38802.769, 'duration': 7.964}, {'end': 38812.434, 'text': 'So we have done it successfully.', 'start': 38811.173, 'duration': 1.261}, {'end': 38820.579, 'text': "Awesome, let's move on to our next domain which talks about configuration management and continuous monitoring.", 'start': 38812.454, 'duration': 8.125}, {'end': 38826.941, 'text': "Awesome. So what is configuration management and what is continuous monitoring? 
Let's understand it.", 'start': 38821.179, 'duration': 5.762}], 'summary': 'Completed cicd pipeline using git and jenkins for website deployment on every commit on main branch, moving on to configuration management and continuous monitoring.', 'duration': 27.694, 'max_score': 38799.247, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY438799247.jpg'}, {'end': 38930.287, 'src': 'embed', 'start': 38900.807, 'weight': 1, 'content': [{'end': 38908.492, 'text': "So you don't have to sweat a lot or you don't have to sweat much on learning the commands for Ansible because it is based on Python.", 'start': 38900.807, 'duration': 7.685}, {'end': 38911.975, 'text': 'So if you know Python, Ansible is going to be a cakewalk for you.', 'start': 38908.512, 'duration': 3.463}, {'end': 38916.778, 'text': 'It is preferred for environments which are designed to scale rapidly.', 'start': 38912.915, 'duration': 3.863}, {'end': 38919.52, 'text': 'Basically with Ansible.', 'start': 38917.659, 'duration': 1.861}, {'end': 38930.287, 'text': "the thing is that you don't have to install the Ansible client software on the systems on which you want to basically deploy the configuration.", 'start': 38919.52, 'duration': 10.767}], 'summary': "Learning ansible is easier if you know python. 
it's preferred for rapidly scalable environments and does not require client software installation.", 'duration': 29.48, 'max_score': 38900.807, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY438900807.jpg'}, {'end': 39287.159, 'src': 'embed', 'start': 39255.539, 'weight': 2, 'content': [{'end': 39264.952, 'text': 'So NRPE plugins are basically extensions to Nagios which help you monitor the local resources of the client machines right?', 'start': 39255.539, 'duration': 9.413}, {'end': 39271.808, 'text': "So you don't have to SSH into the client machines to see how much of memory or how much of CPU is being used.", 'start': 39265.253, 'duration': 6.555}, {'end': 39274.389, 'text': 'Nagios being a monitoring tool,', 'start': 39272.268, 'duration': 2.121}, {'end': 39287.159, 'text': 'you just have to install the NRPE extension on the client machine and it will give you a real-time data of the resources that are being consumed on that particular client machine.', 'start': 39274.389, 'duration': 12.77}], 'summary': 'Nrpe plugins extend nagios to monitor local resources, providing real-time data without ssh access.', 'duration': 31.62, 'max_score': 39255.539, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY439255539.jpg'}, {'end': 39855.581, 'src': 'embed', 'start': 39825.068, 'weight': 3, 'content': [{'end': 39830.717, 'text': 'So the Selenium tool is used widely for automatic testing or automated testing.', 'start': 39825.068, 'duration': 5.649}, {'end': 39837.517, 'text': "But what are the problems that you get with Selenium? 
So if you're using Selenium, mobile testing cannot be done.", 'start': 39830.777, 'duration': 6.74}, {'end': 39843.498, 'text': 'So if you have developed an application for your mobile, you cannot test it using Selenium.', 'start': 39838.097, 'duration': 5.401}, {'end': 39848.199, 'text': 'The reporting capabilities of Selenium are very limited.', 'start': 39844.218, 'duration': 3.981}, {'end': 39855.581, 'text': 'If your application or your web application deals with pop-up windows or it gives pop-up windows,', 'start': 39848.579, 'duration': 7.002}], 'summary': 'Selenium is widely used for automated testing, but has limitations for mobile testing and reporting capabilities.', 'duration': 30.513, 'max_score': 39825.068, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY439825068.jpg'}], 'start': 38598.918, 'title': 'Devops tools and techniques', 'summary': 'Covers the creation of a cicd pipeline with git and jenkins, a comparison of ansible, chef, and puppet as configuration management tools, configuration management and monitoring with nagios and ansible, and continuous testing with selenium.', 'chapters': [{'end': 38820.579, 'start': 38598.918, 'title': 'Cicd pipeline demo with git and jenkins', 'summary': 'Demonstrates the successful creation of a cicd pipeline using git and jenkins to deploy a website on every commit on the main branch, with successful test pushes and a demonstration of reverting to a previous version.', 'duration': 221.661, 'highlights': ['The chapter demonstrates the successful creation of a CICD pipeline using Git and Jenkins to deploy a website on every commit on the main branch, with successful test pushes and a demonstration of reverting to a previous version.', 'The port was changed to 82, and the job was successfully completed, indicating that Apache is working.', 'The website was successfully deployed on the build server, reflecting changes made on GitHub, with the background also 
being updated.', 'A demonstration of reverting to a previous version was successfully executed using Git, showcasing the ability to revert changes made on the website.', 'The demo successfully achieved the goal of creating a CI/CD pipeline using Git and Jenkins for deploying a website on every commit on the main branch.']}, {'end': 39235.739, 'start': 38821.179, 'title': 'Configuration management tools comparison', 'summary': 'Presents a comparison of ansible, chef, and puppet as configuration management tools, highlighting their differences in terms of language, ease of setup, stability, flexibility, automation support, and scalability.', 'duration': 414.56, 'highlights': ['Puppet offers strong support for automation, making it suitable for automating configuration management, while it is not ideal for scaling deployments with a growing number of servers.', 'Chef, being Ruby-based, has a more complicated initial setup compared to Ansible, but it is stable and flexible, especially for OS and middleware management.', 'Ansible, based on Python, is easy to learn and preferred for environments designed to scale rapidly, with simplified orchestration and no need to install client software on target systems.']}, {'end': 39777.603, 'start': 39235.739, 'title': 'Configuration management and monitoring with nagios and ansible', 'summary': 'Discusses configuration management with ansible, monitoring with nagios, including nrpe plugins for real-time resource monitoring, the difference between active and passive checks in nagios, and the deployment of apache on a client server using ansible.', 'duration': 541.864, 'highlights': ['NRPE Plugins in Nagios NRPE plugins are extensions to Nagios that allow real-time monitoring of client machine resources without the need to SSH in, facilitating centralized resource monitoring across multiple machines.', 'Active and Passive Checks in Nagios Active checks involve Nagios agents collecting data from clients, while passive checks utilize logs pushed by third-party software to the Nagios master, with logs from both types published to a queue for monitoring metrics creation.', 'Deploying Apache with Ansible An Ansible playbook is used to install Apache on a client server without manually logging in to it, demonstrating centralized software deployment to multiple servers through Ansible.']}, {'end': 40114.814, 'start': 39778.143, 'title': 'Continuous testing with selenium', 'summary': 'Explores continuous testing with selenium, highlighting its limitations, differences between verify and assert commands, and the distinction between set speed and sleep methods in selenium testing.', 'duration': 336.671, 'highlights': ['The limitations of Selenium for mobile testing, pop-up windows, non-web applications, and image testing are discussed, providing insights into the challenges faced with using Selenium.', 'The differences between verify and assert commands in Selenium are explained, emphasizing their impact on test execution and the scenarios in which they are beneficial.', 'The distinction between set speed and sleep 
methods in Selenium testing is outlined, illustrating their respective functionalities and use cases in controlling task execution and program suspension.']}], 'duration': 1515.896, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/YzwD02ImKY4/pics/YzwD02ImKY438598918.jpg', 'highlights': ['The chapter demonstrates the successful creation of a CICD pipeline using Git and Jenkins for deploying a website on every commit on the main branch, with successful test pushes and a demonstration of reverting to a previous version.', 'Ansible, based on Python, is easy to learn and preferred for environments designed to scale rapidly, with simplified orchestration and no need to install client software on target systems.', 'NRPE Plugins in Nagios allow real-time monitoring of client machine resources without the need to SSH in, facilitating centralized resource monitoring across multiple machines.', 'The limitations of Selenium for mobile testing, pop-up windows, non-web applications, and image testing are discussed, providing insights into the challenges faced with using Selenium.']}], 'highlights': ['DevOps professionals are among the highest paid in the IT industry.', 'Covers devops lifecycle, git fundamentals, docker, kubernetes, puppet, jenkins, nagios installation, and career insights.', 'Continuous monitoring organizes logs into categories such as general, error, and feature requests, serving as a centralized platform for planning and team coordination.', "Understanding the status of files and syncing repositories are integral parts of Git's lifecycle.", 'Docker creates lightweight containers, with sizes as low as 50 MB, containing all the necessary code, operating system, and libraries.', "The 'docker commit' command is used to save changes inside a container and create a new image.", 'Microservices offer flexibility to use different technologies for individual modules, addressing technology restrictions in monolithic applications.', 
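The Docker highlights just above (lightweight images, 'docker commit', and a Dockerfile that adds files to an older image) can be sketched as a minimal Dockerfile. This is an illustrative sketch only: the httpd base image, the website/ folder, and the ports are assumptions, not details taken from the video.

```dockerfile
# Hypothetical sketch: extend the official Apache (httpd) base image
# and bake a static website into it, instead of hand-crafting the
# container and saving it with 'docker commit'.
FROM httpd:2.4

# Copy the site's files into Apache's document root.
COPY ./website/ /usr/local/apache2/htdocs/

# httpd:2.4 already listens on port 80; EXPOSE just documents it.
EXPOSE 80
```

Building with `docker build -t devopsiq .` and running with `docker run -d -p 84:80 devopsiq` would match the transcript's port-84, run-as-a-daemon demo; the `devopsiq` tag is made up for the example.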
'Kubernetes automates monitoring, scaling, and health checks for containers, eliminating manual intervention.', 'The SSL connection setup between Puppet master and slave involves the exchange of certificates and manual signing, ensuring secure and encrypted communication.', 'The user configures a playbook to install different software on two servers, executes the playbook, and successfully configures two servers with Nginx and Apache, demonstrating the ability to configure multiple servers with different software using a single command.', 'Jenkins Pipeline Implementation The chapter emphasizes the implementation of Jenkins pipeline for code compilation, building, and deployment, highlighting the distinction between declarative and scripted Jenkins files.', 'Understanding and resolving deployment failures The process of deploying files with proper permissions is demonstrated, including identifying and resolving issues such as failed copying of folders and the need for recursive copy commands.', 'Configuring post build actions to trigger downstream projects upon updates, creating a chain of job executions.', 'The demand for DevOps engineers is on the rise, with big companies like Walmart, Amazon, Adobe, Dell, and VMware seeking candidates with skills in cloud platforms, programming languages, Unix/Linux, and DevOps tools like Ansible, Terraform, Jenkins, Docker.', 'Virtualization allows division of resources among multiple OS', 'Containerization uses a container engine on the host OS, resulting in smaller and more efficient containers', 'The chapter demonstrates the successful creation of a CICD pipeline using Git and Jenkins for deploying a website on every commit on the main branch, with successful test pushes and a demonstration of reverting to a previous version.']}
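The git revert workflow the summary keeps returning to (find the bad commit with git log, roll back with git revert) can be reproduced in a throwaway repository. A minimal sketch, with made-up file names and commit messages:

```shell
# Sketch of the revert workflow from the summary, run in a temp repo.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "demo@example.com"   # identity is required to commit
git config user.name "Demo"

echo "working version" > index.html
git add index.html
git commit -qm "last working commit"

echo "broken deployment" > index.html
git commit -qam "bad commit"

# git log --oneline would show the bad commit on top; git revert creates
# a NEW commit that undoes it, so history is preserved (unlike git reset).
git revert --no-edit HEAD >/dev/null
cat index.html   # prints "working version"
```

Because revert adds a commit rather than rewriting history, it is the safe choice on a shared branch feeding a CI/CD pipeline: the push of the revert commit triggers the same deployment job as any other commit.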