title
DevOps Crash Course (Docker, Terraform, and GitHub Actions)

description
In this DevOps and Cloud Infrastructure tutorial, you will learn what DevOps is and how to apply some of the most important concepts, including:
- Docker containers
- Infrastructure as Code
- Continuous Integration and Continuous Deployment

DevOps Directive YouTube Channel: https://www.youtube.com/c/DevOpsDirective
Link to application: storybooks.devopsdirective.com
GitHub Repos:
- https://github.com/bradtraversy/storybooks (Original)
- https://github.com/sidpalas/storybooks (Version from video)

NOTE: After filming I discovered that the set-env command that I used within the GitHub Action was deprecated due to a security vulnerability (https://github.blog/changelog/2020-10-01-github-actions-deprecating-set-env-and-add-path-commands/). I replaced the usage with the updated method described here (https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-commands-for-github-actions#environment-files).

Timestamps:
0:00 - Intro
1:18 - Project Overview
2:21 - Application Architecture
4:14 - Part 1: Getting the initial project running
7:51 - Part 2: Dockerize the application
9:55 - Docker-compose
11:51 - Aside: Makefiles!
12:31 - Part 3: Terraform (Infrastructure as Code)
17:16 - Setting up Terraform providers
22:21 - GCP Resources
25:55 - Terraform variables
28:23 - Atlas MongoDB Resources
31:42 - Cloudflare Resources
34:33 - Aside: Secrets/credential management
37:21 - Part 4: Deploying Manually
44:25 - Part 5: CI/CD with Github Actions
50:14 - Testing the Github action
51:25 - Separate staging and production
57:22 - Outro

detail

Summary: A one-hour DevOps crash course covering Docker for containerization, Terraform for infrastructure as code, and GitHub Actions for continuous integration and delivery, showing the benefits of these DevOps practices working together in concert.

Chapter 1: One-hour DevOps crash course (0:07 - 2:58)
- DevOps is a set of practices, techniques, and tools that speed up the software development lifecycle by bringing together the historically separate functions of development and operations.
- Over the hour, an application goes from development to production: containerized with Docker, with infrastructure as code set up using Terraform, continuous integration and delivery built out with GitHub Actions, and deployment into multiple environments.
- The project is based on Brad Traversy's two-and-a-half-hour tutorial building a full-stack application with a MongoDB database, a Node.js based API, and OAuth authentication using Passport; this video applies DevOps practices to it.
- Architecture: a Node.js application deployed on a virtual machine running in Google Cloud. Scaling further would mean adding replicas behind a load balancer, which is outside the scope of the video.
- The MongoDB database is offloaded to Atlas, a database-as-a-service product from MongoDB. It costs a bit more to do it this way, but the operations of the database are in good hands.
- The speaker plans to live stream during the launch, monitoring system resource usage and traffic in the hope that the site can withstand a Traversy Media hug of death; viewers are encouraged to open the application, refresh the page, and create a few stories to add some load.

Chapter 2: Development environment and Dockerizing the application (2:59 - 12:20)
- Prerequisites, each fairly easy to Google and install: Node and NPM, Docker, Terraform, gcloud (the command line utility for Google Cloud Platform), and Make. Windows users will probably need the Windows Subsystem for Linux for the code to work properly.
- The original code can be found in a repository on Brad's GitHub; the modified, updated version is a fork of that repository on the speaker's GitHub.
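The prerequisite list above can be verified quickly from a terminal. This is a small sketch of our own (the function name and output format are not from the video, only the tool list is):

```shell
# Hypothetical helper: report which of the prerequisite CLIs are on PATH.
check_tools() {
  for t in node npm docker terraform gcloud make; do
    if command -v "$t" >/dev/null 2>&1; then
      echo "$t: found"
    else
      echo "$t: MISSING"
    fi
  done
}

check_tools
```

Any line printing MISSING points at a tool still to install before following along.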
- Docker keeps the development environment as close as possible to production, so slight inconsistencies don't cause bugs down the road.
- Before populating the Dockerfile, a .dockerignore file is created listing the node_modules directory, so locally installed dependencies don't get copied into the image and cause issues.
- The image is built with docker build; the -t flag specifies a tag, and the trailing period uses the current directory as the build context.
- The container could be run directly with docker run, but since Mongo is also running in a container, docker-compose coordinates the two and the networking between them.
- A volume ensures the data persists across restarts (it wouldn't if stored inside the container itself); it is mounted into the container at /data/db, the default location where Mongo stores its data.
- With the application running inside a container, localhost no longer has the same meaning; it connects to Mongo using a hostname equal to the service name in the docker-compose file.
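A compose file matching the description above might look roughly like this. It is a sketch, not the repo's actual file: the service names, port, and environment variable are assumptions.

```yaml
version: "3"
services:
  api:
    build: .            # image built from the Dockerfile in this directory
    ports:
      - "3000:3000"
    environment:
      # inside the compose network, the DB hostname is the service name, not localhost
      - MONGO_URI=mongodb://mongo:27017/storybooks
    depends_on:
      - mongo
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db   # named volume so data survives container restarts
volumes:
  mongo-data:
```

The `mongo` hostname in MONGO_URI is the key detail: compose's internal DNS resolves service names, which is why localhost no longer works from inside the api container.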
- Aside: Cloudflare is set up to route traffic to the server using the flexible SSL option; full encryption with Let's Encrypt certificates is also a consideration and fairly easy to set up.
- A MongoDB database is first run locally in a Docker container using docker run with port forwarding, with connection settings kept in the config environment file.
- A Google OAuth client is configured by creating a new Google Cloud project and setting up the OAuth consent screen.
- A Makefile stores frequently used commands, starting with a target that executes docker-compose up and growing incrementally as new commands are introduced.
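The Makefile described above might start as small as this; the target names are our own illustration, and the video adds targets incrementally (recipe lines must be indented with tabs):

```makefile
# bring up the app and Mongo together via compose
run:
	docker-compose up

# standalone local Mongo with port forwarding, as used before compose was added
run-db:
	docker run -p 27017:27017 mongo
```

Keeping these in make means the exact flags don't have to be remembered or retyped as the project grows.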
Chapter 3: Terraform - infrastructure as code (12:20 - 22:10)
- Terraform produces a state file, which should be stored in a remote backend such as Google Cloud Storage so that its sensitive information is protected by identity and access management roles.
- The state bucket uses the project ID variable in its name for reusability. While most resource names are scoped to within your own project, the names of Google Cloud Storage buckets must be globally unique. A make target creates the bucket, and its name (minus the prefix) is copied into the Terraform config.
- A service account grants Terraform the permissions to interact with Google Cloud; initializing the backend needs Storage Object Admin access.
- A key file is created and downloaded, and Terraform is pointed at it to authenticate. The key lives in a local JSON file added to both .gitignore and .dockerignore so it can't be accidentally checked into version control or built into the container.
- Because the Makefile sits in a parent directory of the terraform directory, targets start with a change-directory command; a double ampersand combines the two commands into a single shell invocation, since make would otherwise execute each command separately. Invoking the make target creates the workspace, which then still needs to be reinitialized.
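A sketch of the remote backend block described above. The bucket name and prefix are placeholders (the video derives the real bucket name from the project ID), and authentication is assumed to come from the downloaded key:

```hcl
terraform {
  backend "gcs" {
    # bucket names are globally unique; the video builds this from the project ID
    bucket = "my-project-id-terraform"   # placeholder
    prefix = "terraform/state"
    # credentials come from the service account key file, e.g. by exporting
    # GOOGLE_APPLICATION_CREDENTIALS to point at the (git- and docker-ignored) JSON key
  }
}
```

With this in place, terraform init migrates state into the bucket instead of keeping it on the local disk.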
- A firewall rule is adapted from the provider's example usage, changing both the internal Terraform reference name and the name of the firewall rule within GCP. It references the default network through the google_compute_network data object (the data type, the internal reference name, then the name field). ICMP access isn't needed, so that block is deleted entirely.
- Make targets set up separate staging and production environments, running terraform workspace new and switching dynamically between the two.
- Configuration is organized with one file per cloud provider, plus a variables.tf file declaring the variables used across resources; providers are set up for Google Cloud, Atlas, and Cloudflare.
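A minimal sketch of the firewall pattern just described; the rule name, ports, and tag value here are illustrative, not taken from the repo:

```hcl
# look up the default network rather than hard-coding its name in multiple places
data "google_compute_network" "default" {
  name = "default"
}

resource "google_compute_firewall" "allow-http" {
  name    = "allow-http"
  network = data.google_compute_network.default.name

  allow {
    protocol = "tcp"
    ports    = ["80", "443"]
  }

  # only instances carrying this tag pick up the rule
  target_tags = ["allow-http"]
}
```

Scoping the rule with target_tags is what lets the compute instance opt in later simply by carrying the matching tag.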
Chapter 4: GCP resources, Atlas, and Cloudflare (22:10 - 33:58)
- The virtual machine runs Container-Optimized OS, a purpose-built operating system from Google designed specifically to run containers. Its smaller attack surface improves security, but it includes no package manager, so additional software packages can't be installed directly onto the instance; any configuration has to be handled at the container level.
- The machine type is a variable (declared, like all variables, in variables.tf), allowing a smaller, affordable instance type on staging and a bigger machine type on production to handle the production load.
- The tags field is how the firewall rule defined earlier gets applied: reference the google_compute_firewall by its internal name and specify its target tags key.
- The network interface is changed to reference the static IP defined earlier, and the example's metadata and metadata startup script blocks are removed.
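Putting those pieces together, an instance sketch with the machine-type variable and firewall tag wired in. This assumes a firewall rule tagged "allow-http", a static IP resource named ip_address, and a default-network data lookup are defined elsewhere in the config; the resource names and image are illustrative:

```hcl
variable "machine_type" {
  description = "Smaller on staging, larger on production"
  type        = string
}

resource "google_compute_instance" "vm" {
  name         = "storybooks-vm"
  machine_type = var.machine_type

  # must match the firewall rule's target_tags to pick up that rule
  tags = ["allow-http"]

  boot_disk {
    initialize_params {
      image = "cos-cloud/cos-stable"   # Container-Optimized OS stable family
    }
  }

  network_interface {
    network = data.google_compute_network.default.name
    access_config {
      nat_ip = google_compute_address.ip_address.address  # the static IP, defined elsewhere
    }
  }
}
```

Per-environment tfvars files can then set machine_type differently for the staging and production workspaces.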
- The service account scopes are narrowed to storage read-only access, which is necessary to read the Docker image from the Google Container Registry.
- With all of the Google Cloud resources defined, a Makefile target executes terraform plan, substituting the env variable into the path of the environment-specific variables file; the terminal output shows exactly which resources Terraform will create when the configuration is applied.
- Rather than copying the target and changing plan to apply, a single generic target uses an environment variable (tfaction); the question-mark-equals syntax means it defaults to plan when no environment variable is set and otherwise uses the variable.
- Atlas: the cluster definition includes the instance settings (the size of the disk, the size of the machine, and the region it's deployed into). A database user is then created for the application to read and write from the database, using the project ID variable plus a new variable for the user's password.
- With everything configured, rerunning the terraform apply make target provisions the resources. The cluster takes a while to provision, so Cloudflare configuration begins in the meantime; as with the other cloud platforms, the first step is to generate the access key.
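The generic plan/apply target might be sketched like this. The variable spelling and tfvars layout are assumptions (the transcript only names "tfaction" and the ?= default), and recipe lines must be tab-indented:

```makefile
TF_ACTION?=plan   # ?= : default to plan unless the environment overrides it

terraform-action:
	cd terraform && \
		terraform $(TF_ACTION) \
		-var-file=./environments/common.tfvars \
		-var-file=./environments/$(ENV)/config.tfvars
```

Usage would then be something like `ENV=staging make terraform-action` for a plan, and `TF_ACTION=apply ENV=production make terraform-action` to apply.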
- The Cloudflare token page is reached by clicking the settings menu in the top right, then My Profile, on the API Tokens tab; when you create a token, you give it specific permissions.
- The second Cloudflare resource is a DNS A record (address record) pointing the domain at the virtual machine's IP address. It references the zones data object, whose internal name is changed from example to cfzones; since the zones object returns an array, the zone ID is referenced as data cloudflare_zones (the type), cfzones (the internal name), the zones field at the 0th index, and then the id field. The record's name field is the subdomain the site is hosted on.
- Chapter highlights: target tags attach virtual machines to firewall rules for a more secure network setup; Container-Optimized OS trades a package manager for a smaller attack surface; machine-type variables let staging and production run different sizes; referencing the network data object avoids hard-coded strings in multiple places; and service account scopes are limited to storage read-only for pulling images from the registry.
- The chapter closes with Terraform provisioning of the Google Cloud resources, environment-specific variable files, the MongoDB resources within Atlas, and the Cloudflare DNS settings.
environment-specific variable files and passing them to Terraform plan command The speaker creates environment and staging subdirectories for defining specific variables, including common variables such as app name and MongoDB access keys, and unique variables like GCP machine type, then passes these variable files to the Terraform plan command.', 'Creating a generic target to perform both plan and apply actions in Terraform The speaker uses the environment variable tfaction to create a generic target that can perform both plan and apply actions in Terraform, allowing flexibility based on the environment variable set, enhancing the efficiency of Terraform usage.', 'Defining MongoDB resources within Atlas, including creating a cluster and database user The speaker defines MongoDB resources within Atlas by creating a M10 tier cluster with specific storage and RAM, stores the project ID in a variable, and creates a database user with a separate password for staging and production, enhancing security and customization.', 'Configuring Cloudflare by generating access key and creating DNS A record The speaker configures Cloudflare by generating an access key with specific permissions, declares the variable for the domain name, and creates a DNS A record pointing from the domain to the IP address of the virtual machine, facilitating DNS settings for the application.']}], 'duration': 706.74, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI1332117.jpg', 'highlights': ['Using variables for machine types enables flexibility, allowing for a smaller instance type on staging and a bigger machine type on production to handle different loads.', 'Creating environment-specific variable files and passing them to Terraform plan command The speaker creates environment and staging subdirectories for defining specific variables, including common variables such as app name and MongoDB access keys, and unique variables like GCP 
machine type, then passes these variable files to the Terraform plan command.', 'Defining MongoDB resources within Atlas, including creating a cluster and database user The speaker defines MongoDB resources within Atlas by creating a M10 tier cluster with specific storage and RAM, stores the project ID in a variable, and creates a database user with a separate password for staging and production, enhancing security and customization.', 'Configuring Cloudflare by generating access key and creating DNS A record The speaker configures Cloudflare by generating an access key with specific permissions, declares the variable for the domain name, and creates a DNS A record pointing from the domain to the IP address of the virtual machine, facilitating DNS settings for the application.', 'Target tags are used to attach virtual machines to firewall rules, allowing for a more secure network setup.', 'Container Optimized OS from Google is designed for running containers, providing a smaller attack surface area for improved security.', 'Referencing the data object for network interface configuration allows for easier modification in the future, minimizing hard-coded strings in multiple places.', 'Modifying the service account scopes to have only storage read-only access is necessary for reading Docker images from the Google Container Registry.', 'Creating a generic target to perform both plan and apply actions in Terraform The speaker uses the environment variable tfaction to create a generic target that can perform both plan and apply actions in Terraform, allowing flexibility based on the environment variable set, enhancing the efficiency of Terraform usage.']}, {'end': 2511.968, 'segs': [{'end': 2101.977, 'src': 'embed', 'start': 2061.172, 'weight': 3, 'content': [{'end': 2067.077, 'text': 'As you can see from the output in the console, that MongoDB Atlas cluster is still being provisioned six minutes later.', 'start': 2061.172, 'duration': 5.905}, {'end': 2072.021, 'text': 
"So we're going to go ahead and move on while that finishes before we can deploy these Cloudflare resources.", 'start': 2067.097, 'duration': 4.924}, {'end': 2078.502, 'text': "Now, while we're waiting on that, I'm going to go take care of something that I think gets overlooked too often on YouTube tutorials,", 'start': 2072.978, 'duration': 5.524}, {'end': 2079.842, 'text': "and that's management of secrets.", 'start': 2078.502, 'duration': 1.34}, {'end': 2089.128, 'text': "So up until this point, I've been storing my secrets as either environment variables and end files or within that dfr file to authenticate Terraform.", 'start': 2080.322, 'duration': 8.806}, {'end': 2094.452, 'text': "I'm going to move those into Google Secret Manager and only retrieve them when needed.", 'start': 2089.649, 'duration': 4.803}, {'end': 2098.614, 'text': "That way, I don't need to store any of this private information on my local system.", 'start': 2094.831, 'duration': 3.783}, {'end': 2101.977, 'text': 'The only thing I need locally is to be authenticated to Google Cloud.', 'start': 2098.875, 'duration': 3.102}], 'summary': 'Mongodb atlas cluster provisioning in progress. 
implementing secure secret management with google secret manager.', 'duration': 40.805, 'max_score': 2061.172, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI2061172.jpg'}, {'end': 2170.816, 'src': 'embed', 'start': 2121.161, 'weight': 1, 'content': [{'end': 2123.202, 'text': 'such that the impact of a leak is minimized.', 'start': 2121.161, 'duration': 2.041}, {'end': 2133.579, 'text': "Okay, it looks like the MongoDB cluster has finished provisioning, so let's run Terraform Apply and try to create these Cloudflare resources.", 'start': 2126.573, 'duration': 7.006}, {'end': 2139.924, 'text': 'Oh, I had used the incorrect field name for the API token.', 'start': 2136.581, 'duration': 3.343}, {'end': 2143.987, 'text': 'Rather than just token, it needs to be API underscore token.', 'start': 2140.364, 'duration': 3.623}, {'end': 2149.791, 'text': 'Running the Terraform Apply make target once more, and it looks like it was added.', 'start': 2145.548, 'duration': 4.243}, {'end': 2154.175, 'text': "I'll confirm this just by going to the DNS tab within the Cloudflare GUI.", 'start': 2150.552, 'duration': 3.623}, {'end': 2163.45, 'text': "Earlier, I created all those secrets within Google Secret Manager, but I didn't actually explain how I was going to use them.", 'start': 2157.665, 'duration': 5.785}, {'end': 2170.816, 'text': "I'm going to create a little helper function within my Makefile called getSecret, where I can retrieve those at the time that they're needed.", 'start': 2164.151, 'duration': 6.665}], 'summary': 'Provisioned mongodb cluster, corrected field name, added resources, created helper function for accessing secrets.', 'duration': 49.655, 'max_score': 2121.161, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI2121161.jpg'}, {'end': 2252.727, 'src': 'embed', 'start': 2220.991, 'weight': 4, 'content': [{'end': 2223.652, 'text': "And it's asking 
for the MongoDB Atlas private key.", 'start': 2220.991, 'duration': 2.661}, {'end': 2226.852, 'text': 'Oh, it looks like I used the wrong variable name.', 'start': 2224.912, 'duration': 1.94}, {'end': 2228.733, 'text': 'I called it just Atlas private key.', 'start': 2227.193, 'duration': 1.54}, {'end': 2230.934, 'text': 'Let me fix that.', 'start': 2230.313, 'duration': 0.621}, {'end': 2238.816, 'text': "If I run it again, the command succeeds and we see that there's no changes to the infrastructure.", 'start': 2234.535, 'duration': 4.281}, {'end': 2239.436, 'text': 'This is good.', 'start': 2238.976, 'duration': 0.46}, {'end': 2247.364, 'text': 'With all of our infrastructure defined as code and successfully provisioned, we can now move on to deploying the application.', 'start': 2241.482, 'duration': 5.882}, {'end': 2252.727, 'text': "There are a number of ways that we could handle deployment for this application, but here I'm just going to keep it simple.", 'start': 2247.805, 'duration': 4.922}], 'summary': 'Fixed variable name, infrastructure defined as code, ready for application deployment.', 'duration': 31.736, 'max_score': 2220.991, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI2220991.jpg'}, {'end': 2304.562, 'src': 'embed', 'start': 2267.718, 'weight': 0, 'content': [{'end': 2272.501, 'text': 'The command takes an SSH string as an input, which is user at and the name of the instance.', 'start': 2267.718, 'duration': 4.783}, {'end': 2278.486, 'text': "And then I'll also specify the project ID as well as the zone to uniquely identify the machine.", 'start': 2274.063, 'duration': 4.423}, {'end': 2286.97, 'text': 'Executing this make target takes a little while while the SSH key propagates to the Compute Engine instance metadata,', 'start': 2280.345, 'duration': 6.625}, {'end': 2290.172, 'text': 'but once it finishes we should have a live shell running on that VM.', 'start': 2286.97, 'duration': 
3.202}, {'end': 2295.075, 'text': 'While we could just type out the Docker commands that we want to run here within that session,', 'start': 2290.872, 'duration': 4.203}, {'end': 2299.398, 'text': "I'm going to create another make target to send individual commands over SSH.", 'start': 2295.075, 'duration': 4.323}, {'end': 2304.562, 'text': 'This will be much easier to reuse down the road when we build out a GitHub action to automate this process.', 'start': 2299.818, 'duration': 4.744}], 'summary': 'Ssh command creates live shell on vm, plans to automate via github action', 'duration': 36.844, 'max_score': 2267.718, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI2267718.jpg'}], 'start': 2039.417, 'title': 'Deploying with terraform and docker', 'summary': 'Covers deploying environments, managing secrets with google secret manager, and eliminating sensitive information from terraform configuration. additionally, it outlines the process of deploying a docker application using make targets and ssh commands, including building and pushing docker images and crafting commands to deploy the application via ssh.', 'chapters': [{'end': 2247.364, 'start': 2039.417, 'title': 'Managing secrets and infrastructure with terraform', 'summary': 'Discusses deploying environments, managing secrets with google secret manager, and eliminating sensitive information from terraform configuration, while also addressing issues encountered during the deployment process.', 'duration': 207.947, 'highlights': ['Deploying environments and managing secrets with Google Secret Manager The speaker discusses deploying environments and managing secrets with Google Secret Manager to eliminate storing private information on the local system.', 'Fixing issues encountered during the deployment process The speaker addresses issues encountered during the deployment process, such as using incorrect field names and variable names in the Terraform 
configuration.', 'Eliminating sensitive information from Terraform configuration The speaker uses a helper function to retrieve secrets from Google Secret Manager and removes sensitive information from the Terraform configuration by passing them in directly with the dash var option.']}, {'end': 2511.968, 'start': 2247.805, 'title': 'Deployment process for docker application', 'summary': 'Outlines the process of deploying a docker application using make targets and ssh commands, including creating ssh commands, building and pushing docker images, and crafting commands to deploy the application via ssh.', 'duration': 264.163, 'highlights': ['Creating make target for SSHing into virtual machine The process involves creating a make target using gcloud compute SSH command, which takes an SSH string as an input and specifies the project ID and zone to uniquely identify the machine.', "Creating make target to send individual commands over SSH Another make target is created to send individual commands over SSH, allowing for easier reuse and execution of specific commands, such as executing 'echo hello' on the virtual machine.", 'Building and pushing Docker images to Google Container Registry The process includes creating make targets to build and push Docker images, with the push make target tagging the local image with the remote tag before pushing it to Google Container Registry.', 'Crafting commands to deploy the application via SSH The final make target involves crafting commands to use SSH CMD make target and execute commands via SSH on the virtual machine, including running Docker credential GCR helper function, docker pull to fetch the latest image, and stopping and removing existing containers.']}], 'duration': 472.551, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI2039417.jpg', 'highlights': ['Crafting commands to deploy the application via SSH', 'Building and pushing Docker images to Google Container 
Registry', 'Creating make target to send individual commands over SSH', 'Deploying environments and managing secrets with Google Secret Manager', 'Eliminating sensitive information from Terraform configuration', 'Fixing issues encountered during the deployment process', 'Creating make target for SSHing into virtual machine']}, {'end': 3474.447, 'segs': [{'end': 2603.777, 'src': 'embed', 'start': 2554.612, 'weight': 5, 'content': [{'end': 2558.053, 'text': 'We can go into the Atlas GUI and it will provide us a template string.', 'start': 2554.612, 'duration': 3.441}, {'end': 2569.598, 'text': 'I added escape double quotes to the beginning and end of this environment variable definition to deal with how the shell is going to expand the string before it gets passed to Docker.', 'start': 2560.734, 'duration': 8.864}, {'end': 2572.439, 'text': 'Within the template string.', 'start': 2571.138, 'duration': 1.301}, {'end': 2579.223, 'text': 'I then needed to update both the database user as well as the database cluster name so that they can easily switch between staging and production.', 'start': 2572.439, 'duration': 6.784}, {'end': 2590.009, 'text': 'I then retrieve and pass in the database user password from Google Secret Manager and set the database name.', 'start': 2580.483, 'duration': 9.526}, {'end': 2600.935, 'text': 'I then need to pass in the environment variables to configure our OAuth setup.', 'start': 2597.373, 'duration': 3.562}, {'end': 2603.777, 'text': "We'll paste the client ID here.", 'start': 2602.196, 'duration': 1.581}], 'summary': 'Configured environment variables for database and oauth setup in atlas gui.', 'duration': 49.165, 'max_score': 2554.612, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI2554612.jpg'}, {'end': 2804.121, 'src': 'embed', 'start': 2778.598, 'weight': 7, 'content': [{'end': 2783.421, 'text': 'Because the instance name and the zone are handled within the make file, I can 
just delete these two.', 'start': 2778.598, 'duration': 4.823}, {'end': 2789.158, 'text': "I'm also going to set the project ID directly here rather than as a secret within the repository.", 'start': 2784.417, 'duration': 4.741}, {'end': 2795.619, 'text': "If I were using the same project within many different actions, I might store it as a secret, but here it's simpler just to do it this way.", 'start': 2789.478, 'duration': 6.141}, {'end': 2799.38, 'text': 'We can see that the first step of this workflow is the checkout action.', 'start': 2796.199, 'duration': 3.181}, {'end': 2804.121, 'text': 'This is a standard action provided by GitHub that will be the first step of almost every workflow.', 'start': 2799.84, 'duration': 4.281}], 'summary': 'Make file handles instance name and zone, setting project id directly simplifies process.', 'duration': 25.523, 'max_score': 2778.598, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI2778598.jpg'}, {'end': 3083.723, 'src': 'embed', 'start': 3002.984, 'weight': 1, 'content': [{'end': 3004.926, 'text': "So I'll add that directory to my gitignore.", 'start': 3002.984, 'duration': 1.942}, {'end': 3014.891, 'text': "With all of the files staged, I'll just add my commit message, commit, and then push.", 'start': 3010.029, 'duration': 4.862}, {'end': 3024.914, 'text': "Now, if I go to the repository within GitHub, under the Actions tab, I'll see the workflow executing for the first time.", 'start': 3018.452, 'duration': 6.462}, {'end': 3033.057, 'text': "Within this view, we'll see the workflow progress through all of the steps that were defined within it.", 'start': 3028.956, 'duration': 4.101}, {'end': 3035.409, 'text': "It's checked out the code.", 'start': 3034.428, 'duration': 0.981}, {'end': 3042.514, 'text': "It's setting up the gcloud command line utility, authorizing Docker building our image,", 'start': 3035.909, 'duration': 6.605}, {'end': 3048.839, 'text': 
'pushing that image to the Google Container Registry and finally deploying onto our virtual machine.', 'start': 3042.514, 'duration': 6.325}, {'end': 3079.741, 'text': "With the green check mark signaling successful completion, let's go try to log into the site again and make sure everything's still working.", 'start': 3073.439, 'duration': 6.302}, {'end': 3083.723, 'text': 'Cool, looks like we managed to not break anything.', 'start': 3081.482, 'duration': 2.241}], 'summary': 'Successfully executed github actions workflow deploying image to google container registry and virtual machine, ensuring no issues with the site.', 'duration': 80.739, 'max_score': 3002.984, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI3002984.jpg'}, {'end': 3175.875, 'src': 'embed', 'start': 3147.202, 'weight': 4, 'content': [{'end': 3153.585, 'text': 'I can then put this CheckEnvMakeTarget as a prerequisite for these other MakeTargets so that it will get run before they do.', 'start': 3147.202, 'duration': 6.383}, {'end': 3163.509, 'text': 'Because production will live as a separate workspace within Terraform.', 'start': 3160.048, 'duration': 3.461}, {'end': 3169.152, 'text': 'the first thing that I need to do is run my MakeTerraformCreateWorkspace MakeTarget with env set to prod.', 'start': 3163.509, 'duration': 5.643}, {'end': 3175.875, 'text': 'With the workspace created, I can then run MakeTerraformInit, again with env set to prod, to initialize that workspace.', 'start': 3170.032, 'duration': 5.843}], 'summary': 'Use checkenvmaketarget as a prerequisite for maketargets to run in prod workspace.', 'duration': 28.673, 'max_score': 3147.202, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI3147202.jpg'}, {'end': 3278.909, 'src': 'heatmap', 'start': 3239.445, 'weight': 1, 'content': [{'end': 3244.31, 'text': 'and that is to deploy to production on version tags associated 
with releases.', 'start': 3239.445, 'duration': 4.865}, {'end': 3252.758, 'text': "And so I'll specify that we want to trigger this action on tags that match a regular expression starting with a V followed by three digits.", 'start': 3244.85, 'duration': 7.908}, {'end': 3256.34, 'text': 'For example, v0.0.1, etc.', 'start': 3253.399, 'duration': 2.941}, {'end': 3262.223, 'text': 'This way, the normal workflow for making a change would be to build that change, merge it to master,', 'start': 3256.78, 'duration': 5.443}, {'end': 3267.985, 'text': 'verify that everything is working on staging before issuing a release on GitHub which will trigger a production deploy.', 'start': 3262.223, 'duration': 5.762}, {'end': 3277.209, 'text': 'The other thing that I need to add to the action is the ability to set that env environment variable to prod or staging,', 'start': 3270.746, 'duration': 6.463}, {'end': 3278.909, 'text': 'depending on which event triggered the action.', 'start': 3277.209, 'duration': 1.7}], 'summary': "Deploy to production on version tags matching 'v' + three digits, triggering a production deploy after a release on github.", 'duration': 39.464, 'max_score': 3239.445, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI3239445.jpg'}, {'end': 3321.08, 'src': 'embed', 'start': 3293.243, 'weight': 0, 'content': [{'end': 3295.924, 'text': 'This will contain a reference to the event that triggered the action.', 'start': 3293.243, 'duration': 2.681}, {'end': 3301.085, 'text': "If it's a push to the master branch, it'll be slash refs, slash heads, slash master.", 'start': 3296.264, 'duration': 4.821}, {'end': 3305.666, 'text': "or if it's triggered by a tag, it'll be slash refs, slash tags and then that tag.", 'start': 3301.085, 'duration': 4.581}, {'end': 3312.848, 'text': "I'm going to use a bash technique called prefix removal with this pound sign to remove everything except for the substring after that final 
slash.", 'start': 3306.106, 'duration': 6.742}, {'end': 3315.925, 'text': "Then I'll check whether that's equal to master or not.", 'start': 3313.919, 'duration': 2.006}, {'end': 3319.094, 'text': "If it is, I'll set the environment variable to staging.", 'start': 3316.406, 'duration': 2.688}, {'end': 3321.08, 'text': "If it's not, I'll set it to prod.", 'start': 3319.555, 'duration': 1.525}], 'summary': 'Using bash technique to trigger environment variable based on branch/tag, e.g., master -> staging, else prod.', 'duration': 27.837, 'max_score': 3293.243, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI3293243.jpg'}, {'end': 3374.448, 'src': 'embed', 'start': 3338.543, 'weight': 3, 'content': [{'end': 3340.164, 'text': "For now, I'm just going to leave it like this though.", 'start': 3338.543, 'duration': 1.621}, {'end': 3345.067, 'text': "Now, before I commit these changes, I'm going to just add a few exclamation points to this header,", 'start': 3340.685, 'duration': 4.382}, {'end': 3350.791, 'text': 'just to prove 100% to myself that these changes are getting properly integrated and deployed through this process.', 'start': 3345.067, 'duration': 5.724}, {'end': 3353.793, 'text': "Now I'll just stage all these changes.", 'start': 3352.272, 'duration': 1.521}, {'end': 3360.884, 'text': 'commit them, and push.', 'start': 3359.264, 'duration': 1.62}, {'end': 3366.886, 'text': 'Because we were on the master branch.', 'start': 3365.305, 'duration': 1.581}, {'end': 3374.448, 'text': 'we should see a new action triggered and if we go into the logs and look at the environment variables for any steps after our first one,', 'start': 3366.886, 'duration': 7.562}], 'summary': 'Adding exclamation points to the header to validate 100% integration and deployment through the process.', 'duration': 35.905, 'max_score': 3338.543, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI3338543.jpg'}, {'end': 3474.447, 'src': 'heatmap', 'start': 3447.458, 'weight': 0.833, 'content': [{'end': 3454.342, 'text': "I hope that you've learned a few things along the way that you'll be able to apply to your own projects to make them more robust and easier to work with.", 'start': 3447.458, 'duration': 6.884}, {'end': 3457.905, 'text': 'Thank you so much for watching and sticking around until the end.', 'start': 3455.023, 'duration': 2.882}, {'end': 3463.129, 'text': 'I would love to have you come join me in my small but growing community over at DevOps Directive.', 'start': 3458.345, 'duration': 4.784}, {'end': 3467.772, 'text': "So head on over, subscribe, and leave me a comment telling me you came from Brad's channel.", 'start': 3463.509, 'duration': 4.263}, {'end': 3469.818, 'text': "That's it for today.", 'start': 3468.936, 'duration': 0.882}, {'end': 3474.447, 'text': "Again, I'm Sid with DevOps Directive, and remember, just keep building.", 'start': 3470.078, 'duration': 4.369}], 'summary': 'Learn to make projects more robust at devops directive. join the growing community. keep building.', 'duration': 26.989, 'max_score': 3447.458, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI3447458.jpg'}], 'start': 2514.079, 'title': 'Docker container configuration and deployment automation', 'summary': 'Covers configuring and running a docker container, setting up port forwarding, environment variables, and retrieving sensitive information from google secret manager. 
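The container-run step summarized here (port forwarding 80 to 3000, environment variables, secrets pulled from Google Secret Manager at deploy time) can be sketched roughly as below. This is a hedged reconstruction, not the repo's actual script: the project ID, image name, secret names, user, and cluster host are all illustrative placeholders.

```shell
#!/bin/sh
# Sketch of the deploy step described in this chapter. All names here
# (my-project, storybooks, prod-db-password, oauth-client-secret) are
# placeholders, not values taken from the actual repository.
set -eu

IMAGE="gcr.io/my-project/storybooks:latest"

# Fetch secrets from Google Secret Manager only when needed, so nothing
# sensitive has to live on the local machine or in the repo.
DB_PASSWORD="$(gcloud secrets versions access latest --secret=prod-db-password)"
OAUTH_SECRET="$(gcloud secrets versions access latest --secret=oauth-client-secret)"

# Forward port 80 on the VM to port 3000 inside the container, and pass
# the runtime configuration (port, Mongo URI, OAuth client) as env vars.
docker run -d \
  -p 80:3000 \
  -e PORT=3000 \
  -e MONGO_URI="mongodb+srv://db-user:${DB_PASSWORD}@cluster0.mongodb.net/storybooks" \
  -e GOOGLE_CLIENT_ID="${GOOGLE_CLIENT_ID}" \
  -e GOOGLE_CLIENT_SECRET="${OAUTH_SECRET}" \
  "$IMAGE"
```

This is a command fragment that requires an authenticated gcloud session and a Docker daemon; it is shown only to make the sequence of steps concrete.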
it also discusses automating deployment using github actions, setting up service accounts with specific roles, creating a json key, adding a secret to the github repository, and configuring github actions for deploying to google container registry and virtual machines.', 'chapters': [{'end': 2632.785, 'start': 2514.079, 'title': 'Configuring and running docker container', 'summary': 'Explains how to configure and run a docker container, setting up port forwarding, environment variables, and retrieving sensitive information from google secret manager before running the container with a specific tag.', 'duration': 118.706, 'highlights': ['Configuring port forwarding on port 80 of the VM to port 3000 inside the container for handling HTTP requests.', 'Setting environment variables including port and Mongo URI to connect to the Atlas DB instance.', 'Retrieving and setting the database user password from Google Secret Manager.', 'Configuring OAuth setup by passing client ID and retrieving the client secret from Google Secret Manager.', 'Running the Docker container with the specific tag for the image stored in Google Container Registry.']}, {'end': 2815.352, 'start': 2634.406, 'title': 'Automating deployment with github actions', 'summary': 'Discusses automating deployment using github actions, emphasizing its ease of sharing and reuse, and the process of modifying and creating a github action workflow for deploying to google cloud platform.', 'duration': 180.946, 'highlights': ['GitHub actions emphasize sharing and reuse, making it easy to find and modify existing actions for specific needs. GitHub actions have an emphasis on sharing and reuse, allowing easy modification of existing actions to suit specific needs.', 'Modifying make targets within the action to prevent echoing sensitive commands and environment variables in GitHub action logs. 
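The SSH-based make targets mentioned in this chapter boil down to wrapping `gcloud compute ssh`, once for an interactive shell and once with `--command` for one-off remote commands reusable from CI. A minimal sketch, with instance, project, and zone values as assumed placeholders:

```shell
#!/bin/sh
# Sketch of the two SSH helpers described above; the instance, project,
# and zone values are placeholders, not taken from the video's config.
set -eu

SSH_STRING="user@storybooks-vm"   # user at instance name
PROJECT="my-project"
ZONE="us-central1-a"

# Interactive shell on the VM (the first make target). Commented out here
# because it opens a live session:
# gcloud compute ssh "$SSH_STRING" --project "$PROJECT" --zone "$ZONE"

# One-off remote command (the second make target), e.g. the "echo hello"
# smoke test from the video; easy to reuse later from a GitHub Action.
gcloud compute ssh "$SSH_STRING" --project "$PROJECT" --zone "$ZONE" \
  --command "echo hello"
```

As with the other deploy fragments, this requires an authenticated gcloud session; the first run is slow while the SSH key propagates to the instance metadata.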
Modifying make targets to prevent echoing sensitive commands and environment variables in GitHub action logs, improving security.', 'Using the official Google Cloud Platform action as the base for the GitHub action workflow, with modifications for deploying to the virtual machine. Utilizing the official Google Cloud Platform action as the base for the GitHub action workflow, with modifications for deploying to the virtual machine.', 'Creating a GitHub action workflow named build push deploy, including modifying environment variable section and utilizing standard checkout action provided by GitHub. Creating a GitHub action workflow named build push deploy, modifying environment variable section, and utilizing the standard checkout action provided by GitHub.']}, {'end': 3474.447, 'start': 2816.392, 'title': 'Setting up service accounts and github actions', 'summary': 'Details setting up a new service account with specific roles, creating a json key for the service account, adding a secret to the github repository, and configuring github actions to push to google container registry and deploy onto virtual machines. 
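The service-account setup summarized here (create the account, grant service account user and secret accessor roles plus access to the Container Registry bucket, then export a JSON key for the GitHub secret) might look roughly like the following. The project ID and account name are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the CI service-account setup described above. PROJECT and the
# account name "deployer" are illustrative placeholders.
set -eu

PROJECT="my-project"
SA="deployer@${PROJECT}.iam.gserviceaccount.com"

gcloud iam service-accounts create deployer --project "$PROJECT"

# Roles mentioned in the video: service account user + secret accessor.
gcloud projects add-iam-policy-binding "$PROJECT" \
  --member "serviceAccount:${SA}" --role roles/iam.serviceAccountUser
gcloud projects add-iam-policy-binding "$PROJECT" \
  --member "serviceAccount:${SA}" --role roles/secretmanager.secretAccessor

# Container Registry stores images in a GCS bucket; grant object access there.
gsutil iam ch "serviceAccount:${SA}:objectAdmin" "gs://artifacts.${PROJECT}.appspot.com"

# JSON key to paste into the GitHub repository secret used by the workflow.
gcloud iam service-accounts keys create key.json --iam-account "$SA"
```

Keys like `key.json` should be deleted locally once stored as a GitHub secret, in line with the video's point about minimizing the impact of a leak.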
it also demonstrates the use of environment variables and github actions to support different deployments for staging versus production.', 'duration': 658.055, 'highlights': ['The chapter details setting up a new service account with specific roles, such as service account user, secret manager accessor, and access to the Google container registry bucket.', 'It highlights the process of creating a JSON key for the service account and adding it as a secret within the GitHub repository to be accessed during the workflow.', 'The configuration of GitHub actions to push to Google Container Registry and deploy onto virtual machines, utilizing a make file to handle different configurations and environment variables.', 'The demonstration of using reserved environment variables within GitHub Actions, such as GitHub_SHA, to tag the image with a unique version number for each version.', 'The chapter also demonstrates the use of GitHub Actions to support different deployments for staging versus production, by setting the env environment variable based on the event that triggered the action.']}], 'duration': 960.368, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/OXE2a8dqIAI/pics/OXE2a8dqIAI2514079.jpg', 'highlights': ['Running the Docker container with the specific tag for the image stored in Google Container Registry.', 'Setting environment variables including port and Mongo URI to connect to the Atlas DB instance.', 'Retrieving and setting the database user password from Google Secret Manager.', 'Configuring port forwarding on port 80 of the VM to port 3000 inside the container for handling HTTP requests.', 'Configuring OAuth setup by passing client ID and retrieving the client secret from Google Secret Manager.', 'Modifying make targets within the action to prevent echoing sensitive commands and environment variables in GitHub action logs.', 'Creating a GitHub action workflow named build push deploy, modifying environment variable section, and 
utilizing the standard checkout action provided by GitHub.', 'Utilizing the official Google Cloud Platform action as the base for the GitHub action workflow, with modifications for deploying to the virtual machine.', 'The chapter details setting up a new service account with specific roles, such as service account user, secret manager accessor, and access to the Google container registry bucket.', 'The demonstration of using reserved environment variables within GitHub Actions, such as GitHub_SHA, to tag the image with a unique version number for each version.']}], 'highlights': ['The video covers a one-hour DevOps crash course, including containerizing with Docker, setting up infrastructure as code using Terraform, and deploying into multiple environments.', 'The tutorial is based on a two and a half hour tutorial building a full stack application using a MongoDB database, a Node.js based API, and OAuth authentication using Passport, and aims to apply DevOps practices to it.', 'Terraform produces a state file, which should be stored in a remote backend like Google Cloud Storage to protect sensitive information with identity and access management roles.', 'Using variables for machine types enables flexibility, allowing for a smaller instance type on staging and a bigger machine type on production to handle different loads.', 'Crafting commands to deploy the application via SSH', 'Running the Docker container with the specific tag for the image stored in Google Container Registry.', 'Creating a GitHub action workflow named build push deploy, modifying environment variable section, and utilizing the standard checkout action provided by GitHub.']}
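The branch-versus-tag check described in the final chapter (bash prefix removal on GITHUB_REF with the pound sign, mapping master to staging and release tags to prod) can be sketched as follows; the variable names are illustrative:

```shell
#!/bin/sh
# Sketch of the env-selection step from the GitHub Action: strip everything
# up to the final slash of $GITHUB_REF, then map master -> staging and
# anything else (e.g. a tag like v0.0.1) -> prod.
GITHUB_REF="${GITHUB_REF:-refs/heads/master}"   # set by GitHub in Actions

short_ref="${GITHUB_REF##*/}"   # prefix removal: keep text after last "/"

if [ "$short_ref" = "master" ]; then
  ENV=staging
else
  ENV=prod
fi

echo "$ENV"
```

In the workflow itself this value would be written to the environment file (the replacement for the deprecated set-env command noted in the description) rather than just echoed.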