title
Learn Docker - DevOps with Node.js & Express
description
Learn the core fundamentals of Docker by building a Node/Express app with a Mongo & Redis database.
We'll start off by keeping things simple with a single container, and gradually add more complexity to our app by integrating a Mongo container and then finally adding in a Redis database for authentication.
We'll learn how to do things manually with the CLI, then move on to Docker Compose. We'll focus on the challenges of moving from a development environment to a production environment.
We'll deploy an Ubuntu VM as our production server, and utilize a container orchestrator like Docker Swarm to handle rolling updates.
✏️ Course developed by Sanjeev Thiyagarajan. Check out his channel: https://www.youtube.com/channel/UC2sYgV-NV6S5_-pqLGChoNQ
⭐️ Course Contents ⭐️
0:00:14 Intro & demo express app
0:04:18 Custom Images with Dockerfile
0:10:34 Docker image layers & caching
0:20:26 Docker networking opening ports
0:26:36 Dockerignore file
0:31:46 Syncing source code with bind mounts
0:45:30 Anonymous Volumes hack
0:51:58 Read-Only Bind Mounts
0:54:58 Environment variables
0:59:16 Loading environment variables from file
1:01:31 Deleting stale volumes
1:04:01 Docker Compose
1:21:36 Development vs Production configs
Part 02: Working with multiple containers
1:44:47 Adding a Mongo Container
2:01:48 Communicating between containers
2:12:00 Express Config file
2:21:45 Container bootup order
2:32:26 Building a CRUD application
2:51:27 Sign up and Login
3:06:57 Authentication with sessions & Redis
3:34:36 Architecture Review
3:40:48 Nginx for Load balancing to multiple node containers
3:54:33 Express CORS
Part 03: Moving to Prod
3:57:44 Installing Docker on Ubuntu (DigitalOcean)
4:03:21 Setup Git
4:05:37 Environment Variables on Ubuntu
4:14:12 Deploying app to production server
4:18:57 Pushing changes the hard way
4:25:58 Rebuilding Containers
4:27:32 Dev to Prod workflow review
4:30:50 Improved Docker Hub workflow
4:46:10 Automating with Watchtower
4:56:06 Why we need an orchestrator
5:03:32 Docker Swarm
5:16:13 Pushing changes to Swarm stack
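Several of the chapters above (Docker Compose, the Mongo container, environment variables) come together in a compose file; a minimal sketch, with assumed service names and placeholder credentials:

```yaml
version: "3"
services:
  node-app:
    build: .              # build from the Dockerfile in this directory
    ports:
      - "3000:3000"       # host port : container port
    environment:
      - PORT=3000
  mongo:
    image: mongo          # official Mongo image from Docker Hub
    environment:
      - MONGO_INITDB_ROOT_USERNAME=username
      - MONGO_INITDB_ROOT_PASSWORD=password
```

The course splits this into development and production variants; the sketch above is the shared baseline.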
--
Learn to code for free and get a developer job: https://www.freecodecamp.org
Read hundreds of articles on programming: https://freecodecamp.org/news
detail
Summary: Learn Docker for DevOps with Node.js & Express: setting up the workflow, optimizing the image build process, troubleshooting file syncing, managing volumes, environment configuration, production & development setup, data persistence, networking, user authentication, security, server setup, and automating image updates.

Chapter notes:
- Intro & demo Express app: after confirming the dummy Express app works locally, the course sets up a workflow for developing the application exclusively inside a Docker container (Docker is assumed to already be installed).
- Custom images: a custom image is built on top of the official Node image from hub.docker.com; the source code is copied in and dependencies like Express are installed, so the final image has everything needed to run the application. This is done by creating a Dockerfile.
- Layers & caching: each Dockerfile instruction creates a layer, and Docker caches the result of every layer on the first build. Copying package.json and running npm install before copying the rest of the source means a rebuild with unchanged dependencies reuses the cache; the first docker build is slow, but subsequent builds take less than a second.
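The layer-ordering optimization described above can be sketched as a Dockerfile (the node:15 base and index.js entry point follow the video; treat exact file names as assumptions):

```dockerfile
# Base image with Node preinstalled
FROM node:15
# Run all following commands from /app inside the image
WORKDIR /app
# Copy package.json first: the npm install layer is cached and only
# re-runs when dependencies change, not on every code edit
COPY package.json .
RUN npm install
# Copy the rest of the source; edits invalidate layers only from here on
COPY . .
# Documentation only: EXPOSE does not actually open the port
EXPOSE 3000
CMD ["node", "index.js"]
```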
- The demo Express app itself takes about a minute to set up: create package.json, install Express, and create index.js with a single test route.
- Dockerfile steps: specify a base image, set the working directory, copy package.json, install dependencies with npm, then copy the rest of the source code into the image.
- Naming and running: docker image ls lists images and docker image rm <id> deletes one; building with the -t flag (docker build -t node-app-image .) gives the image a name instead of leaving it unnamed. Running with the -d flag starts the container detached so the command line stays free, and docker ps shows the running container.
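The build-and-run cycle above, sketched as CLI commands (image and container names follow the video):

```shell
# Build the image and tag it with -t
docker build -t node-app-image .
# List local images to confirm the tag
docker image ls
# Run detached (-d) so the command line stays free
docker run -d --name node-app node-app-image
# Confirm the container is up
docker ps
```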
- Networking: by default a container can reach the internet and other devices on the host network, but the outside world cannot reach the container; this is a built-in security mechanism. To accept outside traffic, the host forwards a port to the container: in the -p flag, the number left of the colon is the host port receiving outside traffic, and the number right of the colon is the container port it is forwarded to. Before re-running, the old container is removed with docker rm node-app -f (the -f flag force-removes a container that is still running).
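Port forwarding as described above, as a sketch (the 3000:3000 mapping follows the video; the 8080 variant is illustrative):

```shell
# Force-remove the old container (-f removes it even while running)
docker rm node-app -f
# Left of the colon: host port receiving outside traffic.
# Right of the colon: container port the traffic is forwarded to.
docker run -d --name node-app -p 3000:3000 node-app-image
# The numbers need not match: this would serve the same app at localhost:8080
# docker run -d --name node-app -p 8080:3000 node-app-image
```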
- Dockerignore: COPY . . copies every file into the image, including files that should not be there: the Dockerfile itself, an environment file full of secrets, anything Git-related, and the large node_modules folder (copying it is a waste, since package.json is copied and npm install runs inside the image). A .dockerignore file excludes them, the same concept as Git's .gitignore.
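A .dockerignore covering the files called out above (the .env name is an assumption for the secrets file):

```
node_modules
Dockerfile
.dockerignore
.git
.gitignore
.env
```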
- Stale code and bind mounts: after editing the source, the running container still serves the old version, because the image was built before the change; the hard way out is to delete the container, rebuild the image, and redeploy. Docker volumes give containers persistent data, and a special kind, the bind mount, syncs a folder on the host machine with a folder inside the container, so code changes show up without a rebuild.
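The bind-mount workflow, sketched as a run command (the anonymous volume on node_modules is the hack from the course outline; exact paths are assumptions):

```shell
# Sync the current directory into /app so code edits appear in the
# container without rebuilding the image
docker run -d --name node-app \
  -v "$(pwd)":/app \
  -v /app/node_modules \
  -p 3000:3000 node-app-image
```

The second -v overlays an anonymous volume on /app/node_modules so the bind mount does not hide the modules installed during the image build.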
Using the -T flag to give a specific name to the Docker image results in the new image being listed.', 'Running a Docker container in detached mode allows the command line to remain free and open. Running a Docker container in detached mode enables the command line to remain free and open.']}, {'end': 1372.689, 'start': 1193.536, 'title': 'Troubleshooting docker container connectivity', 'summary': "Explains the issue of not being able to connect to a docker container on localhost:3000 due to the misconception about the 'expose' command in the docker file and the need to configure port forwarding to enable outside devices to talk to the docker container.", 'duration': 179.153, 'highlights': ["The 'expose' command in the Docker file does not actually open port 3000 for the container, serving more as a documentation purpose. The 'expose' command in the Docker file serves as a documentation purpose and does not actually open up port 3000 for the container.", 'By default, outside devices cannot talk to a Docker container, requiring port forwarding on the host machine to allow traffic to be forwarded to the Docker container. Outside devices cannot talk to a Docker container by default, necessitating the need for port forwarding on the host machine to allow traffic to be forwarded to the Docker container.', "The process of configuring port forwarding involves killing the container, specifying the name, and using the '-F' flag to force deletion of a running container. 
The process of configuring port forwarding involves killing the container, specifying the name, and using the '-F' flag to force deletion of a running container."]}, {'end': 1653.865, 'start': 1373.269, 'title': 'Understanding port forwarding in docker', 'summary': 'Explains how to set up port forwarding in docker, using the example of forwarding traffic from port 3000 on the host machine to the container, and emphasizes the significance of matching the port numbers for successful communication, demonstrated through a diagram and file system view in the container.', 'duration': 280.596, 'highlights': ['The process of setting up port forwarding in Docker involves specifying a port using the -P flag, with the significance of matching the port numbers for successful communication emphasized. The speaker explains the process of setting up port forwarding in Docker using the -P flag and emphasizes the importance of matching the port numbers for successful communication.', "A detailed explanation of the two numbers associated with port forwarding: the number to the right represents the port traffic is sent to on the container, while the number to the left represents traffic coming in from the outside world. The speaker provides a detailed explanation of the significance of the two numbers in port forwarding, where the number to the right represents the container's port for traffic, and the number to the left represents incoming traffic from the outside world.", "The significance of matching the port numbers is illustrated through an example, emphasizing that the numbers don't have to be the same and can be changed to allow traffic from different ports to be forwarded to the container. 
The speaker provides an example to illustrate the significance of matching the port numbers and emphasizes that the numbers don't have to be the same and can be changed to forward traffic from different ports to the container.", 'A diagram is used to visually demonstrate the concept of port forwarding, illustrating the flow of traffic from the host machine to the container based on the specified port numbers. A diagram is used to visually demonstrate the concept of port forwarding, showing the flow of traffic from the host machine to the container based on the specified port numbers.', 'The speaker demonstrates how traffic sent to the local host IP and different ports can be forwarded to the container, reiterating the importance of the container listening on the specified port for successful forwarding. The speaker demonstrates how traffic sent to the local host IP and different ports can be forwarded to the container, emphasizing the importance of the container listening on the specified port for successful forwarding.', 'The speaker showcases the successful forwarding of a request to the Docker container on port 3000, verifying the effectiveness of the port forwarding setup. The speaker showcases the successful forwarding of a request to the Docker container on port 3000, verifying the effectiveness of the port forwarding setup.', 'A view of the file system in the container is presented, displaying the files and directories present, providing insight into the contents of the container. 
']}, {'end': 1815.968, 'start': 1653.865, 'title': 'Optimizing docker container files', 'summary': 'Discusses the need to optimize the Docker build by creating a .dockerignore file to avoid copying unnecessary files like node_modules, the Dockerfile, and Git-related files, reducing image size and improving the development workflow.', 'duration': 162.103, 'highlights': ['Creating a .dockerignore file helps reduce image size and improve the development workflow by preventing unnecessary files like node_modules, the Dockerfile, and Git-related files from being copied into the image.', 'Avoiding copying the node_modules folder keeps the image smaller, since the folder is typically large and unnecessary because dependencies are installed by npm install during the build.', 'Pointing out the need to avoid copying files like the Dockerfile and environment files containing secrets into the Docker container. 
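A .dockerignore along the lines described might look like this (the exact entries depend on the project; the .env line reflects the warning about environment files holding secrets):

```
node_modules
Dockerfile
.dockerignore
.git
.gitignore
.env
```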
']}, {'end': 2336.848, 'start': 1816.768, 'title': 'Updating code in docker container', 'summary': 'Demonstrates how to update code in a Docker container, including rebuilding the image, using volumes for persistent data and bind mounts for syncing local directories, resulting in faster development time.', 'duration': 520.08, 'highlights': ["The chapter begins with rebuilding the image using the build command 'docker build -t' against the current directory, followed by running a container from the new image, ensuring the application works in a Docker container.", 'It explains the issue of code not getting updated in the container due to running the previous image before making changes, and the solution involves deleting the container, rebuilding the image, and deploying the updated image to reflect the changes.', 'The concept of using bind mounts, a special kind of volume in Docker, to sync a local folder with a folder in the Docker container is demonstrated, offering a solution to avoid continuously rebuilding the image and redeploying the container for every code change.']}], 'duration': 1264.717, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ1072131.jpg', 'highlights': ['Using the -t flag to give a specific name to the Docker image results in the new image being listed.', 'Running a Docker container in detached mode keeps the command line free and open.', 'Layer caching results in a much faster second run of the build command.', 'Using the docker image rm command deletes the image and removes it from the list of images.', 'By default, outside devices cannot talk to a Docker container, requiring port forwarding on the host machine to allow traffic to be forwarded to the Docker 
container.', "The process of configuring port forwarding involves killing the container, specifying the name, and using the '-f' flag to force removal of a running container.", "The speaker provides a detailed explanation of the significance of the two numbers in port forwarding, where the number to the right represents the container's port for traffic, and the number to the left represents incoming traffic from the outside world.", 'Creating a .dockerignore file helps in optimizing image size and improving development workflow by preventing unnecessary files like node_modules, the Dockerfile, and Git-related files from being copied into the Docker image.', 'Avoiding copying the node_modules folder optimizes the image size, as it is usually large and unnecessary due to the npm install step.', 'Emphasizing the need to avoid copying files such as the Dockerfile and environment files containing secrets into the Docker container to enhance security and prevent unnecessary data transfer.', "The chapter begins with rebuilding the image in Docker by using the build command 'docker build -t' for the current directory, followed by running the container from the new Docker image, ensuring the application works in a Docker container.", 'The concept of using bind mounts as a special volume in Docker to sync a local folder or file system with a folder in the Docker container is demonstrated, offering a solution to avoid continuously rebuilding the image and redeploying the container for every code change.']}, {'end': 3522.567, 'segs': [{'end': 2501.311, 'src': 'embed', 'start': 2472.172, 'weight': 1, 'content': [{'end': 2477.274, 'text': "And if any changes take place, it's going to restart the node process so that the changes are updated in real time.", 'start': 2472.172, 'duration': 5.102}, {'end': 2478.775, 'text': "So let's get that set up and installed.", 'start': 2477.314, 'duration': 1.461}, {'end': 2483.278, 'text': "I'm going to exit out here and we need to 
update our package.json file.', 'start': 2479.315, 'duration': 3.963}, {'end': 2485.84, 'text': "So let's get NodeMon installed as a dev dependency.", 'start': 2483.338, 'duration': 2.502}, {'end': 2490.183, 'text': "And I'm going to do this on my local machine just so that we can have it set up in this file.", 'start': 2486.42, 'duration': 3.763}, {'end': 2496.287, 'text': "So I'm going to do an npm install nodemon --save-dev.", 'start': 2490.203, 'duration': 6.084}, {'end': 2501.311, 'text': "So this is going to be a dev dependency just because we don't need it to run when we actually deploy to production.", 'start': 2496.768, 'duration': 4.543}], 'summary': 'Setting up nodemon as a dev dependency for real-time updates.', 'duration': 29.139, 'max_score': 2472.172, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ2472172.jpg'}, {'end': 2574.562, 'src': 'embed', 'start': 2547.257, 'weight': 0, 'content': [{'end': 2550, 'text': "It's going to restart the node process anytime there's any changes to our source code.", 'start': 2547.257, 'duration': 2.743}, {'end': 2559.85, 'text': 'Now as a heads up, when I was doing a dry run of this demo, I did run into some issues, specifically, it looks like, on Windows machines.', 'start': 2551.843, 'duration': 8.007}, {'end': 2566.995, 'text': "So if you're on a Windows machine and for some reason later on in this video you run into some issues with nodemon not actually restarting,", 'start': 2560.45, 'duration': 6.545}, {'end': 2569.117, 'text': 'you may need to pass in the -L flag.', 'start': 2566.995, 'duration': 2.122}, {'end': 2571.679, 'text': 'So that kind of fixed most of the issues.', 'start': 2569.818, 'duration': 1.861}, {'end': 2574.562, 'text': 'If you do run into the issue, just go ahead and try the -L flag.', 'start': 2572.019, 'duration': 2.543}], 'summary': 'Node process restarts on source code changes; issues encountered on Windows machines.', 
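The install step and the resulting package.json scripts might look like this sketch (the index.js entry-point name is an assumption about the course's app):

```shell
# Install nodemon as a dev dependency; it isn't needed in production
npm install nodemon --save-dev

# package.json would then carry scripts along these lines
# (entry file name "index.js" is assumed):
#   "scripts": {
#     "start": "node index.js",
#     "dev": "nodemon -L index.js"
#   }
# The -L (legacy watch/polling) flag is the workaround mentioned for
# Windows machines where nodemon fails to detect changes in a bind mount.
```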
'duration': 27.305, 'max_score': 2547.257, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ2547257.jpg'}, {'end': 2631.904, 'src': 'embed', 'start': 2608.532, 'weight': 2, 'content': [{'end': 2615.177, 'text': 'To do a Docker, let me see if I have that command cache someplace.', 'start': 2608.532, 'duration': 6.645}, {'end': 2615.878, 'text': 'Here we go build.', 'start': 2615.237, 'duration': 0.641}, {'end': 2617.539, 'text': "So we'll do the Docker build.", 'start': 2615.898, 'duration': 1.641}, {'end': 2622.296, 'text': "And notice how it's taking a little bit longer this time.", 'start': 2620.495, 'duration': 1.801}, {'end': 2625.379, 'text': "And that's because our package dot JSON file changed.", 'start': 2622.336, 'duration': 3.043}, {'end': 2631.904, 'text': 'So it had to rerun step three, where we copy package dot JSON and then rerun step four and then rerun step five,', 'start': 2625.819, 'duration': 6.085}], 'summary': 'Docker build took longer due to changes in package.json file.', 'duration': 23.372, 'max_score': 2608.532, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ2608532.jpg'}, {'end': 2778.635, 'src': 'embed', 'start': 2752.27, 'weight': 3, 'content': [{'end': 2758.815, 'text': "And what I'm going to do is I'm going to take this node modules folder on my local machine which I don't need anymore because we're not developing on my local machine,", 'start': 2752.27, 'duration': 6.545}, {'end': 2759.576, 'text': "and I'm just going to delete it.", 'start': 2758.815, 'duration': 0.761}, {'end': 2766.485, 'text': "Alright, so now that it's gone, I'm going to redeploy the container.", 'start': 2763.322, 'duration': 3.163}, {'end': 2770.008, 'text': "I'm going to show you that we're going to break our application.", 'start': 2768.186, 'duration': 1.822}, {'end': 2778.635, 'text': "So if I it's now running, and if I go to my web browser, hit 
refresh, you can see it spins and it's eventually going to crash.", 'start': 2771.149, 'duration': 7.486}], 'summary': 'Deleted node modules folder, causing application crash on refresh.', 'duration': 26.365, 'max_score': 2752.27, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ2752270.jpg'}, {'end': 2981.575, 'src': 'embed', 'start': 2951.069, 'weight': 4, 'content': [{'end': 2952.289, 'text': 'We also have an anonymous volume.', 'start': 2951.069, 'duration': 1.22}, {'end': 2956.086, 'text': 'And volumes work based off of specificity.', 'start': 2953.405, 'duration': 2.681}, {'end': 2959.727, 'text': "So we've got a volume on our container for the slash app directory.", 'start': 2956.246, 'duration': 3.481}, {'end': 2964.809, 'text': 'But we want to make sure that we preserve the slash app slash node modules.', 'start': 2960.327, 'duration': 4.482}, {'end': 2970.051, 'text': "So we want to make sure this bind mount doesn't override the slash node modules folder within the app directory.", 'start': 2964.929, 'duration': 5.122}, {'end': 2977.654, 'text': 'And the way we can do that is we could just specify another volume.', 'start': 2970.091, 'duration': 7.563}, {'end': 2981.575, 'text': 'So first of all, let me delete our broken container.', 'start': 2977.674, 'duration': 3.901}], 'summary': 'Ensuring preservation of specific directory using bind mount and additional volume.', 'duration': 30.506, 'max_score': 2951.069, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ2951069.jpg'}, {'end': 3139.312, 'src': 'embed', 'start': 3109.997, 'weight': 5, 'content': [{'end': 3112.998, 'text': 'we still need this copy command for when we deploy into production,', 'start': 3109.997, 'duration': 3.001}, {'end': 3118.459, 'text': 'because there is no bind mount and we have to make sure all of our source code gets copied into the image for our production 
container.', 'start': 3112.998, 'duration': 5.461}, {'end': 3120.079, 'text': 'Alright, guys.', 'start': 3118.479, 'duration': 1.6}, {'end': 3124.4, 'text': 'so when it comes to Docker volumes and bind mounts, I want to show you one last thing.', 'start': 3120.079, 'duration': 4.321}, {'end': 3128.56, 'text': "we're going to make one slight change, not required, but it's kind of best practice.", 'start': 3124.4, 'duration': 4.16}, {'end': 3132.001, 'text': 'So what I want to do is I have a I have my container still running.', 'start': 3129.241, 'duration': 2.76}, {'end': 3136.902, 'text': "If you don't go ahead and just run this command again, we have the bind mount and the anonymous volume.", 'start': 3132.381, 'duration': 4.521}, {'end': 3139.312, 'text': 'And I want to drop into bash.', 'start': 3138.012, 'duration': 1.3}], 'summary': 'Using copy command for production, demonstrating docker volumes and bind mounts', 'duration': 29.315, 'max_score': 3109.997, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ3109997.jpg'}, {'end': 3355.715, 'src': 'embed', 'start': 3326.208, 'weight': 6, 'content': [{'end': 3329.29, 'text': "If this variable is not set, it's going to default to a value of 3000.", 'start': 3326.208, 'duration': 3.082}, {'end': 3334.774, 'text': "So how do we make use of environment variables within Docker containers? 
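The Node-side fallback just mentioned would typically look like this (a sketch; the course's actual index.js and variable names may differ):

```javascript
// Use the PORT environment variable when it is set,
// otherwise default to 3000, as described in the video
const port = process.env.PORT || 3000;
console.log(`App would listen on port ${port}`);
```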
First of all, let's go to our Docker file.", 'start': 3329.29, 'duration': 5.484}, {'end': 3339.502, 'text': "we're going to specify a default value for our port variable.", 'start': 3335.879, 'duration': 3.623}, {'end': 3344.966, 'text': "So after our copy command, doesn't technically matter where you put this, but we can say ENV.", 'start': 3339.682, 'duration': 5.284}, {'end': 3348.409, 'text': "So this is referencing environment variables, we're going to create one called port.", 'start': 3345.006, 'duration': 3.403}, {'end': 3355.715, 'text': "And we're going to say the default value for this environment is port 3000.", 'start': 3348.429, 'duration': 7.286}], 'summary': "By setting an environment variable 'port' to default to 3000 in a docker container, we can make use of environment variables within it.", 'duration': 29.507, 'max_score': 3326.208, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ3326208.jpg'}], 'start': 2340.072, 'title': 'Troubleshooting file syncing and docker deployment', 'summary': 'Discusses troubleshooting file syncing and real-time updates in node and express applications, emphasizing the need to restart the node process after making code changes and introducing the use of nodemon. 
It also covers docker caching when rebuilding images, redeploying a container with bind mount for automatic code syncing, and troubleshooting docker container crashes related to bind mounts.', 'chapters': [{'end': 2566.995, 'start': 2340.072, 'title': 'Troubleshooting file syncing and real-time updates', 'summary': 'Discusses troubleshooting file syncing and real-time updates in node and express applications, emphasizing the need to restart the node process after making code changes and introducing the use of nodemon for automatic process restarts.', 'duration': 226.923, 'highlights': ['In any Node or Express application, the Node process must be restarted after code changes for the changes to take effect in the web browser.', 'NodeMon is introduced as a solution to automatically restart the node process whenever changes take place in the source code, ensuring real-time updates without manual intervention.', 'The process of setting up NodeMon as a dev dependency and configuring scripts in the package.json file, including installation and usage, is elaborated.', 'Mention of potential issues with NodeMon on Windows machines, providing a heads up for potential troubleshooting. 
']}, {'end': 2770.008, 'start': 2566.995, 'title': 'Docker caching and redeployment', 'summary': 'Discusses docker caching when rebuilding images due to changes in package.json, and the process of redeploying a container with bind mount for automatic code syncing, also highlighting potential issues with node_modules deletion and redeployment.', 'duration': 203.013, 'highlights': ['Explains Docker caching when rebuilding images due to changes in package.json, and the need to rerun steps 3, 4, and 5, taking a little longer due to cache invalidation.', 'Discusses the process of redeploying a container with bind mount for automatic code syncing and testing the functionality with code changes.', 'Highlights the potential issue of breaking the application when deleting the node_modules folder and redeploying the container. 
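That caching behavior follows from the instruction order in the Dockerfile; a sketch of the pattern (base image tag, paths, and the CMD are assumptions, not necessarily the course's exact file):

```dockerfile
FROM node
WORKDIR /app
# Copy package.json first: the npm install layer stays cached
# until package.json itself changes
COPY package.json .
RUN npm install
# Copying the rest of the source only invalidates the layers below,
# so a code-only change skips the dependency install
COPY . ./
EXPOSE 3000
CMD ["node", "index.js"]
```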
']}, {'end': 3045.311, 'start': 2771.149, 'title': 'Docker node app troubleshooting', 'summary': 'Covers troubleshooting a docker container crash, identifying the cause as the bind mount overriding the node modules folder, and implementing a hack using an anonymous volume to prevent the override.', 'duration': 274.162, 'highlights': ['The issue of a Docker container crash is identified, with the cause traced to the bind mount syncing the local directory with the container directory, leading to the deletion of the node modules folder.', "A solution is proposed through the creation of an anonymous volume to prevent the bind mount from overriding the node modules folder, ensuring the container's functionality and preventing crashes.", 'Explanation of the specificity of volumes in Docker containers is provided, with the hack leveraging the specificity of volumes to preserve the node modules folder from being overridden by the bind mount.']}, {'end': 3522.567, 'start': 3045.851, 'title': 'Docker volumes and bind mounts', 'summary': 'Discusses the use of bind mounts in docker for development and production, emphasizing the need for a copy command during deployment, and demonstrates the creation of a read-only bind mount and the use of environment variables within docker containers.', 'duration': 476.716, 'highlights': ['The bind mount is essential for development but requires a copy command for deployment in production, as it allows syncing of files between the local machine and container, ensuring smooth code changes. 
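Put together, a development run command along the lines described might be (image/container names, the /app path, and the .env file are assumptions):

```shell
# Read-only bind mount syncs source into /app without letting the
# container modify it; the more specific anonymous volume shields
# /app/node_modules from being overwritten by the bind mount
docker run -v $(pwd):/app:ro -v /app/node_modules \
  --env-file ./.env -p 3000:3000 -d --name node-app node-app-image
```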
', 'Demonstrates the creation of a read-only bind mount to prevent the Docker container from making changes to the source code, ensuring the protection of source code from unauthorized alterations.', 'Illustrates the utilization of environment variables within Docker containers, allowing for the specification of default values and the override of values during deployment, providing flexibility in managing environment configurations.']}], 'duration': 1182.495, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ2340072.jpg', 'highlights': ['Introduction of NodeMon for automatic process restarts to ensure real-time updates of changes in the source code.', 'The process of setting up NodeMon as a dev dependency and configuring scripts in the package.json file is explained.', 'Explains Docker caching when rebuilding images due to changes in package.json, and the need to rerun steps 3, 4, and 5, taking a little longer due to cache invalidation.', 'Highlights the potential issue of breaking the application when deleting the node modules folder and redeploying the container.', "A solution is proposed through the creation of an anonymous volume to prevent the bind mount from overriding the node modules folder, ensuring the container's functionality and preventing crashes.", 'The bind mount is essential for 
development but requires a copy command for deployment in production, as it allows syncing of files between the local machine and container, ensuring smooth code changes.', 'Illustrates the utilization of environment variables within Docker containers, allowing for the specification of default values and the override of values during deployment, providing flexibility in managing environment configurations.']}, {'end': 4486.176, 'segs': [{'end': 3563.814, 'src': 'embed', 'start': 3522.567, 'weight': 3, 'content': [{'end': 3528.391, 'text': 'I do want to make sure that the environment variable did get set so we can drop back into that container again, like we always do.', 'start': 3522.567, 'duration': 5.824}, {'end': 3530.752, 'text': "Where's that command? Docker exec.", 'start': 3529.491, 'duration': 1.261}, {'end': 3537.877, 'text': 'So in a Linux machine, if you want to see the environment variables, you just type in print env.', 'start': 3533.314, 'duration': 4.563}, {'end': 3543.954, 'text': 'And so here we can see that the environment variable of port equals 4000 was set.', 'start': 3539.69, 'duration': 4.264}, {'end': 3550.1, 'text': 'And so that confirms that when we ran our Docker run command and passed in the dash dash ENV flag,', 'start': 3544.054, 'duration': 6.046}, {'end': 3556.626, 'text': 'we were successfully able to overwrite the port variable or the port environment variable that was specified in our Docker file.', 'start': 3550.1, 'duration': 6.526}, {'end': 3562.593, 'text': 'Now, when it comes to your application, you may have more than one environment variable.', 'start': 3558.669, 'duration': 3.924}, {'end': 3563.814, 'text': 'Actually, you most certainly will.', 'start': 3562.633, 'duration': 1.181}], 'summary': "Successfully set environment variable 'port' to 4000 using docker run command.", 'duration': 41.247, 'max_score': 3522.567, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ3522567.jpg'}, {'end': 3621.128, 'src': 'embed', 'start': 3589.75, 'weight': 4, 'content': [{'end': 3590.751, 'text': "And it's a little bit of a pain.", 'start': 3589.75, 'duration': 1.001}, {'end': 3596.836, 'text': 'And so what you can do instead is we can actually create a file that stores all of our environment variables.', 'start': 3590.931, 'duration': 5.905}, {'end': 3603.041, 'text': "So here I'm going to call this, you can call it whatever you want, but standard convention is dot ENV.", 'start': 3597.477, 'duration': 5.564}, {'end': 3610.12, 'text': 'And here we can just specify port equals 4000.', 'start': 3605.163, 'duration': 4.957}, {'end': 3612.362, 'text': "Right And so that's going to essentially do the same thing in here.", 'start': 3610.12, 'duration': 2.242}, {'end': 3616.565, 'text': "You could just provide a list of all of your environment variables and let's save that.", 'start': 3612.382, 'duration': 4.183}, {'end': 3621.128, 'text': "And I'm going to kill my Docker container real quick.", 'start': 3618.426, 'duration': 2.702}], 'summary': 'Create a .env file to store environment variables, e.g., port=4000.', 'duration': 31.378, 'max_score': 3589.75, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ3589750.jpg'}, {'end': 3723.069, 'src': 'embed', 'start': 3694.595, 'weight': 5, 'content': [{'end': 3700.238, 'text': "Now, one thing I want to point out is, as you've been creating Docker containers and deleting them.", 'start': 3694.595, 'duration': 5.643}, {'end': 3704.74, 'text': "if we do a Docker PS, you'll see that we just have one container running.", 'start': 3700.238, 'duration': 4.502}, {'end': 3709.842, 'text': 'If you do a Docker volume, LS is going to list all the volumes that you have.', 'start': 3705.48, 'duration': 4.362}, {'end': 3713.064, 'text': "And you can see that we've kind of 
built up a couple of different volumes.", 'start': 3710.423, 'duration': 2.641}, {'end': 3715.185, 'text': 'And you might be wondering what are these from?', 'start': 3713.264, 'duration': 1.921}, {'end': 3716.846, 'text': 'and why are they building up?', 'start': 3715.185, 'duration': 1.661}, {'end': 3723.069, 'text': "As you keep creating containers and deleting containers, they're going to slowly build up over and over and you'll eventually end up with hundreds.", 'start': 3716.866, 'duration': 6.203}], 'summary': 'Creating and deleting docker containers leads to accumulation of volumes, eventually reaching hundreds.', 'duration': 28.474, 'max_score': 3694.595, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ3694595.jpg'}, {'end': 3770.472, 'src': 'embed', 'start': 3734.676, 'weight': 6, 'content': [{'end': 3735.636, 'text': 'this is anonymous volume.', 'start': 3734.676, 'duration': 0.96}, {'end': 3741.5, 'text': "So every time you delete your container, it's going to preserve that node modules folder in here.", 'start': 3735.676, 'duration': 5.824}, {'end': 3748.119, 'text': "And we don't actually need to preserve it, right? Because we're going to be deleting and creating new containers all the time.", 'start': 3742.993, 'duration': 5.126}, {'end': 3757.97, 'text': 'So you can go in and manually delete the volumes, right? 
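The volume cleanup commands discussed here can be sketched as follows (the container name is a placeholder):

```shell
docker volume ls       # list volumes that have accumulated
docker volume prune    # remove all volumes not used by a container

# Or delete a container together with its anonymous volumes:
docker rm node-app -fv # -f force-removes a running container,
                       # -v also removes its anonymous volumes
```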
You can always do a Docker volume RM and then specify the volume name.', 'start': 3748.179, 'duration': 9.791}, {'end': 3762.551, 'text': 'or you can do a Docker volume prune that should remove all unnecessary volumes.', 'start': 3758.63, 'duration': 3.921}, {'end': 3770.472, 'text': "But if you want to make sure that these volumes don't build up, usually what I like to do is when you do the Docker RM command,", 'start': 3763.531, 'duration': 6.941}], 'summary': 'Unused docker volumes can be manually deleted or pruned to prevent buildup.', 'duration': 35.796, 'max_score': 3734.676, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ3734676.jpg'}, {'end': 3957.366, 'src': 'embed', 'start': 3914.002, 'weight': 0, 'content': [{'end': 3920.107, 'text': "So what I'm going to show you guys is a way we can kind of automate all of these steps, so that we don't have to run this monstrosity of command.", 'start': 3914.002, 'duration': 6.105}, {'end': 3923.209, 'text': "And we're going to use a feature called Docker compose,", 'start': 3920.147, 'duration': 3.062}, {'end': 3929.353, 'text': 'where we can create a file that has all the steps and all of the configuration settings we want for each Docker container.', 'start': 3923.209, 'duration': 6.144}, {'end': 3933.597, 'text': 'So we can say like, hey, I want to create a node container using the image that I created.', 'start': 3929.373, 'duration': 4.224}, {'end': 3936.619, 'text': 'And I want to create a volume for the bind mount.', 'start': 3934.237, 'duration': 2.382}, {'end': 3941.681, 'text': 'And I want to create an anonymous volume I want to pass in the environment files, and I want to open up these ports.', 'start': 3936.639, 'duration': 5.042}, {'end': 3948.563, 'text': 'So you can pass in all these steps into a file, and then you can just run one very simple command to bring up as many containers as you want.', 'start': 3941.761, 'duration': 6.802}, {'end': 
3951.464, 'text': 'So if you have like six or seven different containers in your development environment,', 'start': 3948.583, 'duration': 2.881}, {'end': 3956.126, 'text': 'you can bring up all six or seven all at once with one command and bring them all down with one command.', 'start': 3951.464, 'duration': 4.662}, {'end': 3957.366, 'text': 'So let me show you guys how to do that.', 'start': 3956.286, 'duration': 1.08}], 'summary': 'Automate docker setup using docker compose to bring up multiple containers with one command.', 'duration': 43.364, 'max_score': 3914.002, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ3914002.jpg'}, {'end': 4046.598, 'src': 'embed', 'start': 4019.346, 'weight': 7, 'content': [{'end': 4023.611, 'text': 'Now, the next thing we want to do is we want to specify all of the containers that we want to create.', 'start': 4019.346, 'duration': 4.265}, {'end': 4029.018, 'text': 'And so within our Docker compose file, each container is referred to as a service.', 'start': 4024.172, 'duration': 4.846}, {'end': 4030.039, 'text': 'So we do services.', 'start': 4029.258, 'duration': 0.781}, {'end': 4034.173, 'text': 'And then we just specify all of our services.', 'start': 4032.432, 'duration': 1.741}, {'end': 4036.414, 'text': 'Now, this is very important with YAML files.', 'start': 4034.253, 'duration': 2.161}, {'end': 4038.454, 'text': 'Spacing matters.', 'start': 4037.454, 'duration': 1}, {'end': 4044.617, 'text': "OK, so we're going under services and we're going to provide a list of all the different services that we want.", 'start': 4038.474, 'duration': 6.143}, {'end': 4046.598, 'text': 'So I want you to hit tab just once.', 'start': 4044.957, 'duration': 1.641}], 'summary': 'Specify containers as services in docker compose file with correct spacing.', 'duration': 27.252, 'max_score': 4019.346, 'thumbnail': 
'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ4019346.jpg'}], 'start': 3522.567, 'title': 'Docker configuration and management', 'summary': 'Covers setting environment variables, managing volumes, and configuring containers using docker compose, enabling efficient environment management, simplifying container management, and emphasizing the importance of yaml file spacing, ultimately reducing complexity in docker setup.', 'chapters': [{'end': 3691.592, 'start': 3522.567, 'title': 'Setting environment variables in docker', 'summary': 'Discusses setting environment variables in docker, demonstrating how to overwrite variables, pass multiple variables using flags, and load variables from a file, facilitating efficient environment management for docker containers.', 'duration': 169.025, 'highlights': ["Demonstrates overwriting environment variable 'port' with '4000' using Docker run command and '-e' flag The chapter demonstrates using the Docker run command and passing the '-e' flag to successfully overwrite the port environment variable with the value 4000.", "Explains how to store multiple environment variables in a file and load them into a Docker container The chapter explains the process of creating a file to store multiple environment variables, such as 'port=4000', and subsequently loading these variables into a Docker container, offering a more efficient alternative to passing each variable individually.", "Illustrates the use of 'printenv' command to display environment variables The chapter illustrates the use of the 'printenv' command in a Linux machine to display the environment variables, providing a practical demonstration of inspecting the environment setup."]}, {'end': 4017.344, 'start': 3694.595, 'title': 'Managing docker volumes and simplifying container management', 'summary': 'Explains how to manage docker volumes, prevent buildup of unnecessary volumes, and introduces docker compose to simplify the 
management of multiple containers, reducing the complexity of the development and production environment setup.', 'duration': 322.749, 'highlights': ["The command 'docker volume ls' lists all the volumes, which can accumulate as containers are created and deleted.", "The chapter outlines the impact of anonymous volumes on the accumulation of unnecessary data, providing solutions such as manual deletion or using 'docker volume prune' to remove unnecessary volumes.", 'The introduction of Docker Compose as a means to automate container management and simplify the process of bringing up multiple containers with a single command is discussed.']}, {'end': 4486.176, 'start': 4019.346, 'title': 'Docker compose configuration', 'summary': 'Explains how to configure containers using docker compose, emphasizing the importance of spacing in yaml files and demonstrating the creation of a node app service with specific settings, including building the image, exposing ports, adding volumes, setting environment variables, and running docker compose to bring up the services and containers.', 'duration': 466.83, 'highlights': ['The chapter explains the process of creating a node app service within a Docker Compose file, specifying settings such as building the image, exposing ports (e.g., 3000), adding volumes for bind mount and anonymous volume, and setting environment 
variables (e.g., port=3000).', 'Emphasizing the importance of spacing in YAML files and specifying all containers as services within a Docker Compose file The importance of spacing in YAML files is highlighted, and the process of specifying all containers as services within a Docker Compose file is explained to ensure correct configuration and setup.', 'Running Docker Compose command to bring up the services and containers The process of running the Docker Compose command to bring up the configured services and containers is demonstrated, providing a streamlined approach to setting up and tearing down the entire development environment with a single command.']}], 'duration': 963.609, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ3522567.jpg', 'highlights': ['The chapter introduces Docker Compose as a solution to automate container management and simplify the process of bringing up multiple containers with a single command.', 'The process of running the Docker Compose command to bring up the configured services and containers is demonstrated, providing a streamlined approach to setting up and tearing down the entire development environment with a single command.', 'The chapter explains the process of creating a node app service within a Docker Compose file, specifying settings such as building the image, exposing ports (e.g., 3000), adding volumes for bind mount and anonymous volume, and setting environment variables (e.g., port=3000).', "The chapter demonstrates using the Docker run command and passing the '-e' flag to successfully overwrite the port environment variable with the value 4000.", "The chapter explains the process of creating a file to store multiple environment variables, such as 'port=4000', and subsequently loading these variables into a Docker container, offering a more efficient alternative to passing each variable individually.", "By running 'docker volume ls', the accumulation of volumes 
due to creating and deleting containers is evident.", "The chapter addresses the impact of anonymous volumes on volume accumulation and introduces solutions like manual deletion or 'docker volume prune' to remove unnecessary volumes.", 'The importance of spacing in YAML files is highlighted, and the process of specifying all containers as services within a Docker Compose file is explained to ensure correct configuration and setup.', "The chapter illustrates the use of the 'printenv' command in a Linux machine to display the environment variables, providing a practical demonstration of inspecting the environment setup."]}, {'end': 5676.419, 'segs': [{'end': 4518.358, 'src': 'embed', 'start': 4486.377, 'weight': 0, 'content': [{'end': 4489.139, 'text': 'And it looks it looks like that, guys, everything works perfectly.', 'start': 4486.377, 'duration': 2.762}, {'end': 4496.648, 'text': 'So hopefully you guys can see how easy it is now to actually bring up our entire Docker environment, which is one simple command.', 'start': 4489.2, 'duration': 7.448}, {'end': 4501.289, 'text': 'And bringing it up is just as easy as bringing it down.', 'start': 4497.927, 'duration': 3.362}, {'end': 4508.393, 'text': "So now if you want to tear down everything, uh, we can do, uh, let's see, we just do Docker dash compose instead of up.", 'start': 4501.309, 'duration': 7.084}, {'end': 4510.194, 'text': "I'm sure you can guess what the command is.", 'start': 4508.893, 'duration': 1.301}, {'end': 4510.754, 'text': "It's just down.", 'start': 4510.274, 'duration': 0.48}, {'end': 4518.358, 'text': 'And just like when it comes to deleting Docker containers by default, it will not delete those, uh, anonymous volumes.', 'start': 4512.055, 'duration': 6.303}], 'summary': 'Bringing up docker environment with one command. bringing it down is just as easy. 
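The single-command workflow summarized above, including the anonymous-volume caveat, boils down to a handful of CLI invocations. This is an illustrative command sketch, not runnable output; it assumes a `docker-compose.yml` in the working directory and a running Docker daemon:

```sh
# Bring the whole environment up (detached)
docker-compose up -d

# Tear it down; anonymous volumes are NOT deleted by default
docker-compose down

# -v also removes volumes; beware, this includes named volumes
docker-compose down -v

# Inspect accumulated volumes, then clean up any unused ones
docker volume ls
docker volume prune
```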
deleting docker containers will not delete anonymous volumes.', 'duration': 31.981, 'max_score': 4486.377, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ4486377.jpg'}, {'end': 4572.054, 'src': 'embed', 'start': 4549.338, 'weight': 1, 'content': [{'end': 4556.864, 'text': 'And you do get some extra features and perks when it comes to having a custom network like DNS so that you can reference names within your project.', 'start': 4549.338, 'duration': 7.526}, {'end': 4558.466, 'text': "But don't worry about that for now.", 'start': 4557.425, 'duration': 1.041}, {'end': 4564.309, 'text': "So now if we do a Docker PS, you can see that the container is deleted and everything's cleaned up for you.", 'start': 4559.066, 'duration': 5.243}, {'end': 4564.69, 'text': "That's right.", 'start': 4564.369, 'duration': 0.321}, {'end': 4572.054, 'text': 'One command and you can start and stop theoretically hundreds of containers, right? Now, there is one thing I want to tell you guys.', 'start': 4564.87, 'duration': 7.184}], 'summary': 'Custom network provides perks like dns for project referencing. 
docker allows quick start and stop of containers.', 'duration': 22.716, 'max_score': 4549.338, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ4549338.jpg'}, {'end': 5007.162, 'src': 'embed', 'start': 4982.238, 'weight': 2, 'content': [{'end': 4989.339, 'text': "So we're going to use only one Docker file, but we are going to split up the Docker compose files into two different files,", 'start': 4982.238, 'duration': 7.101}, {'end': 4992.479, 'text': "because showing you how to do with two different Docker files, it's pretty easy, right?", 'start': 4989.339, 'duration': 3.14}, {'end': 4996.06, 'text': 'You just create two different Docker files and then reference them when you actually run the build command.', 'start': 4992.499, 'duration': 3.561}, {'end': 5003.361, 'text': "So there's nothing really to that, but I do want to show you how to do it with one file, just in case you want to know that,", 'start': 4998.24, 'duration': 5.121}, {'end': 5004.541, 'text': 'because it was a little bit trickier.', 'start': 5003.361, 'duration': 1.18}, {'end': 5007.162, 'text': 'We do create like a custom bash script that actually handles it.', 'start': 5004.961, 'duration': 2.201}], 'summary': 'Using one docker file split into two compose files is a bit trickier, requiring a custom bash script.', 'duration': 24.924, 'max_score': 4982.238, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ4982238.jpg'}, {'end': 5188.468, 'src': 'embed', 'start': 5162.015, 'weight': 3, 'content': [{'end': 5167.377, 'text': "So ports, we'll set that equal to 3000 colon 3000 like we always have.", 'start': 5162.015, 'duration': 5.362}, {'end': 5182.182, 'text': "And then we'll also set an environment variable to be port 3000.", 'start': 5172.657, 'duration': 9.525}, {'end': 5186.306, 'text': "Alright, so this is the only configuration that's shared between both it's our development.", 
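The shared configuration just described (the 3000:3000 port mapping plus a port environment variable, common to development and production) would live in the base compose file. A minimal sketch; the file name `docker-compose.yml` and the service name `node-app` are assumptions, not the course's exact values:

```yaml
# docker-compose.yml: settings shared by development and production
version: "3"
services:
  node-app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - PORT=3000
```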
'start': 5182.182, 'duration': 4.124}, {'end': 5188.468, 'text': 'So in our development, our production environment.', 'start': 5186.787, 'duration': 1.681}], 'summary': 'Ports set to 3000 for both development and production environments.', 'duration': 26.453, 'max_score': 5162.015, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ5162015.jpg'}], 'start': 4486.377, 'title': 'Docker environment management and configuration', 'summary': 'Covers effortless management of docker environment, including bringing up and tearing down the environment with one command, deleting anonymous volumes, managing containers with docker compose, creating docker compose files for production and development, and configuring docker environments for development and production.', 'chapters': [{'end': 4527.223, 'start': 4486.377, 'title': 'Effortless docker environment management', 'summary': 'Details how easy it is to bring up and tear down the entire docker environment with one simple command, and also explains how to delete anonymous volumes using docker compose down with the -v flag.', 'duration': 40.846, 'highlights': ['Bringing up and tearing down the entire Docker environment is achieved with one simple command, demonstrating the ease of management (1 simple command).', 'Explains how to delete anonymous volumes using Docker compose down with the -V flag, providing a solution for deleting unnecessary volumes (using -V flag to delete unnecessary volumes).']}, {'end': 4819.153, 'start': 4530.123, 'title': 'Docker compose: building and managing containers', 'summary': "Explains how docker compose manages containers, highlighting the creation of a custom network, the build process for docker images, and the use of the 'dash dash build' flag to force a rebuild.", 'duration': 289.03, 'highlights': ['Docker Compose automatically creates a separate network for all services, providing features like DNS for referencing names within the 
project.', 'Docker Compose skips the entire build process and only creates the network and starts the container if the image already exists, leading to quicker results.', "Using the 'dash dash build' flag with 'Docker compose up' forces a rebuild of the image, ensuring that changes in the Dockerfile are reflected in the container."]}, {'end': 5160.314, 'start': 4819.193, 'title': 'Creating docker compose files for production and development', 'summary': 'Covers setting up docker compose files for both production and development environments, including the differences between the two, creating separate docker compose files, and configuring shared settings in a shared docker compose file.', 'duration': 341.121, 'highlights': ['Setting up Docker compose files for both production and development environments The chapter explains the need to set up separate Docker compose files for production and development environments, highlighting the differences in configurations and command execution for each environment.', 'Creating separate Docker compose files for production and development It discusses the process of creating separate Docker compose files for production and development, allowing for distinct configurations and commands to be specified for each environment.', 'Configuring shared settings in a shared Docker compose file The chapter emphasizes the creation of a shared Docker compose file for configurations that are shared between production and development environments, ensuring efficient management of shared settings.']}, {'end': 5676.419, 'start': 5162.015, 'title': 'Configuring docker environments', 'summary': 'Illustrates how to configure docker environments for development and production, including setting up ports, environment variables, volumes, and command overrides.', 'duration': 514.404, 'highlights': ['Both development and production environments share the same configuration, using port 3000 and a shared environment variable.', 'In the development 
environment, bind mounts and an extra anonymous volume for the node modules folder are set up to ensure files are not deleted, while in the production environment, these are not required.', 'The Docker Compose command is used to run different configurations for development and production, overwriting configurations as needed and ensuring changes are reflected by rebuilding the image.']}], 'duration': 1190.042, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ4486377.jpg', 'highlights': ['Bringing up and tearing down the entire Docker environment is achieved with one simple command, demonstrating the ease of management (1 simple command).', 'Docker Compose automatically creates a separate network for all services, providing features like DNS for referencing names within the project.', 'Setting up Docker compose files for both production and development environments The chapter explains the need to set up separate Docker compose files for production and development environments, highlighting the differences in configurations and command execution for each environment.', 'Both development and production environments share the same configuration, using port 3000 and a shared environment variable.']}, {'end': 6451.452, 'segs': [{'end': 5709.263, 'src': 'embed', 'start': 5676.459, 'weight': 0, 'content': [{'end': 5678.28, 'text': 'Because remember, we are in production environment.', 'start': 5676.459, 'duration': 1.821}, {'end': 5688.989, 'text': 'So this confirms that we now have set up a different Docker compose file for our production environment and for our development environment to accommodate our specific needs for each environment.', 'start': 5678.3, 'duration': 10.689}, {'end': 5694.634, 'text': "Now, there is one last thing we got to do because there's a little bit of an issue.", 'start': 5690.711, 'duration': 3.923}, {'end': 5696.375, 'text': "And I'll show you why.", 'start': 5695.275, 'duration': 1.1}, 
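The development-only pieces mentioned here (a bind mount for source syncing plus an extra anonymous volume protecting the node_modules folder) belong in an override file layered on top of the shared one. A sketch under assumed names (`docker-compose.dev.yml`, `npm run dev`):

```yaml
# docker-compose.dev.yml: development-only overrides
version: "3"
services:
  node-app:
    volumes:
      - ./:/app            # bind mount: sync local source into the container
      - /app/node_modules  # anonymous volume: keep the bind mount from hiding node_modules
    environment:
      - NODE_ENV=development
    command: npm run dev   # e.g. nodemon for live reload
```

The production override would omit the bind mounts and run node directly; either stack is then started with something like `docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d --build`, where `--build` forces an image rebuild so Dockerfile changes are reflected.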
{'end': 5702.821, 'text': "Right now we what was the last command we're we're running in production mode, right? So we ran the Docker compose up prod.", 'start': 5697.336, 'duration': 5.485}, {'end': 5709.263, 'text': "And I'm going to do a Docker PS, let's just quickly get the name of that Docker image.", 'start': 5704.682, 'duration': 4.581}], 'summary': 'Separate docker compose files for production and development environments set up to accommodate specific needs.', 'duration': 32.804, 'max_score': 5676.459, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ5676459.jpg'}, {'end': 5827.951, 'src': 'embed', 'start': 5800.142, 'weight': 3, 'content': [{'end': 5810.03, 'text': "If you want to actually deploy this to production, right, you would normally run a you would normally run I think it's a dash dash only.", 'start': 5800.142, 'duration': 9.888}, {'end': 5812.838, 'text': 'equals production, right.', 'start': 5811.337, 'duration': 1.501}, {'end': 5816.561, 'text': "And so that'll prevent any dev dependencies from getting installed.", 'start': 5812.878, 'duration': 3.683}, {'end': 5819.124, 'text': "Because you're running in production mode.", 'start': 5817.883, 'duration': 1.241}, {'end': 5827.951, 'text': 'So what we have to do now is set up our Docker file to be intelligent enough to know whether we are in development mode or production mode,', 'start': 5819.724, 'duration': 8.227}], 'summary': "To deploy to production, use '--only=production' to prevent dev dependencies from getting installed.", 'duration': 27.809, 'max_score': 5800.142, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ5800142.jpg'}, {'end': 6368.812, 'src': 'embed', 'start': 6343.348, 'weight': 4, 'content': [{'end': 6348.271, 'text': "So let's go to our Docker compose file and let's add in this new Mongo database.", 'start': 6343.348, 'duration': 4.923}, {'end': 6353.446, 'text': 
'And so first of all, we have to figure out where we need to add it inside this Docker compose file.', 'start': 6349.164, 'duration': 4.282}, {'end': 6358.628, 'text': 'And so if you already forgot, under services, this is where we actually define all of our containers.', 'start': 6353.966, 'duration': 4.662}, {'end': 6361.089, 'text': 'So each container is a different service.', 'start': 6358.668, 'duration': 2.421}, {'end': 6364.45, 'text': 'So we have one service called node app, which is our node container.', 'start': 6361.129, 'duration': 3.321}, {'end': 6368.812, 'text': "So logically, if you want to add a MongoDB container, we're just going to create a new service.", 'start': 6364.87, 'duration': 3.942}], 'summary': 'Adding a new mongodb container to docker compose file under services.', 'duration': 25.464, 'max_score': 6343.348, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ6343348.jpg'}], 'start': 5676.459, 'title': 'Docker setup for production and development', 'summary': 'Discusses docker compose setup for production and development, addressing specific needs, focusing on running in production mode, and differentiating between development and production environments. 
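The "intelligent" Dockerfile described above, one file that decides at build time whether to install dev dependencies, can be sketched as follows; the `NODE_ENV` build argument and the `node:15` base image are assumptions, but the conditional `npm install --only=production` mirrors what the section describes:

```dockerfile
FROM node:15
WORKDIR /app
COPY package.json .

# Build-time switch: dev installs everything, prod skips devDependencies
ARG NODE_ENV
RUN if [ "$NODE_ENV" = "development" ]; \
      then npm install; \
      else npm install --only=production; \
      fi

COPY . .
ENV PORT=3000
EXPOSE $PORT
CMD ["node", "index.js"]
```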
It also covers preventing dev-dependency installation and adding a MongoDB container to the application.', 'chapters': [{'end': 5744.88, 'start': 5676.459, 'title': 'Docker compose for production', 'summary': 'Discusses the setup of different docker compose files for production and development environments to accommodate specific needs, with a focus on addressing issues related to running in production mode and copying docker compose files.', 'duration': 68.421, 'highlights': ['Setting up different Docker compose files for production and development environments to accommodate specific needs.', 'Identifying and addressing issues related to running in production mode and copying Docker compose files.', 'Adding an entry to the .dockerignore file so that any file whose name starts with "docker-compose" is not copied into the image.']}, {'end': 6451.452, 'start': 5744.9, 'title': 'Docker file setup for production and development', 'summary': 'Discusses setting up a docker file to differentiate between development and production environments, preventing dev dependencies from being installed, and adding a mongodb container to the application.', 'duration': 706.552, 'highlights': ["Setting up Docker file to differentiate between development and production environments The chapter explains how to set up an embedded bash script in the Docker file to determine if the environment is in development or production, and then run 'npm install' or 'npm install --only=production' accordingly.", "Preventing dev dependencies from being installed It details the process of preventing development dependencies from being installed by running 'npm install --only=production' in the production environment to save space and improve efficiency.", 'Adding a MongoDB container to the application The chapter covers the addition of a MongoDB container to the application to create a more real-world application, enabling data persistence and demonstrating how to define
the MongoDB container in the Docker Compose file.']}], 'duration': 774.993, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ5676459.jpg', 'highlights': ['Setting up different Docker compose files for production and development environments to accommodate specific needs.', 'Identifying and addressing issues related to running in production mode and copying Docker compose files.', 'Setting up Docker file to differentiate between development and production environments.', "Preventing dev dependencies from being installed by running 'npm install --only=production' in the production environment.", 'Adding a MongoDB container to the application to enable data persistence and define it in the Docker Compose file.']}, {'end': 7484.69, 'segs': [{'end': 6537.756, 'src': 'embed', 'start': 6491.031, 'weight': 0, 'content': [{'end': 6491.851, 'text': "So let's save this.", 'start': 6491.031, 'duration': 0.82}, {'end': 6494.671, 'text': "And then let's do a Docker compose up.", 'start': 6491.871, 'duration': 2.8}, {'end': 6497.632, 'text': "We don't need to do a dash dash build.", 'start': 6496.112, 'duration': 1.52}, {'end': 6506.134, 'text': 'Alright, so now we can see that it is now creating our Mongo container.', 'start': 6497.652, 'duration': 8.482}, {'end': 6510.247, 'text': 'And if I do a Docker PS, we should now see two containers.', 'start': 6507.085, 'duration': 3.162}, {'end': 6514.229, 'text': 'We have our no Docker Mongo container as well as our node app as well.', 'start': 6510.267, 'duration': 3.962}, {'end': 6521.253, 'text': 'So now that we have our Mongo container up and running, what I want to do is I want to connect into the container and just poke around a bit.', 'start': 6515.309, 'duration': 5.944}, {'end': 6524.214, 'text': "So let's do a Docker exec dash it.", 'start': 6521.313, 'duration': 2.901}, {'end': 6527.636, 'text': 'And then the name of the container.', 'start': 6525.995, 'duration': 
1.641}, {'end': 6531.492, 'text': "And then we'll do bash.", 'start': 6530.612, 'duration': 0.88}, {'end': 6533.193, 'text': 'So we can take a look at the file system.', 'start': 6531.873, 'duration': 1.32}, {'end': 6537.756, 'text': "And so here, since we're connected to the container, we can actually connect into Mongo.", 'start': 6533.854, 'duration': 3.902}], 'summary': 'Docker compose sets up 2 containers; connects to mongo', 'duration': 46.725, 'max_score': 6491.031, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ6491031.jpg'}, {'end': 6661.941, 'src': 'embed', 'start': 6632.591, 'weight': 2, 'content': [{'end': 6635.192, 'text': 'Alright, and so this means we successfully wrote to our database.', 'start': 6632.591, 'duration': 2.601}, {'end': 6645.776, 'text': 'And if I type in DB dot books dot find, this is going to list out all of the documents within our books collection.', 'start': 6635.792, 'duration': 9.984}, {'end': 6649.497, 'text': "So here we've got one entry, and we can see the name is set to Harry Potter.", 'start': 6646.516, 'duration': 2.981}, {'end': 6651.438, 'text': "Perfect And we'll do a show DBs.", 'start': 6649.917, 'duration': 1.521}, {'end': 6654.099, 'text': 'Now, we can see my DB is now listed on there.', 'start': 6651.478, 'duration': 2.621}, {'end': 6656.66, 'text': 'So let me log out of here.', 'start': 6655.5, 'duration': 1.16}, {'end': 6659.38, 'text': 'And let me log out of here.', 'start': 6658.399, 'duration': 0.981}, {'end': 6661.941, 'text': 'And I do want to show you guys one thing real quick.', 'start': 6660.14, 'duration': 1.801}], 'summary': "Successfully wrote to database, 1 entry found with name 'harry potter'.", 'duration': 29.35, 'max_score': 6632.591, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ6632591.jpg'}, {'end': 7043.668, 'src': 'embed', 'start': 7019.93, 'weight': 4, 'content': [{'end': 
7027.996, 'text': 'And when it comes to named volumes, we do have to declare this volume, uh, in another portion of our compose, uh, in our Docker compose file.', 'start': 7019.93, 'duration': 8.066}, {'end': 7032.82, 'text': "And that's because a named volume can be used, um, by multiple services.", 'start': 7028.336, 'duration': 4.484}, {'end': 7040.405, 'text': 'So you know, if we had a um, you know, like another Mongo instance or another Mongo service or any other service,', 'start': 7033.38, 'duration': 7.025}, {'end': 7043.668, 'text': 'they can attach to the same exact volume, just like this Mongo service does.', 'start': 7040.405, 'duration': 3.263}], 'summary': 'Named volumes must be declared in docker compose file for use by multiple services.', 'duration': 23.738, 'max_score': 7019.93, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ7019930.jpg'}, {'end': 7157.923, 'src': 'embed', 'start': 7125.309, 'weight': 6, 'content': [{'end': 7126.69, 'text': "And that's why I wanted to show you guys this.", 'start': 7125.309, 'duration': 1.381}, {'end': 7134.694, 'text': "So remember, we're using this dash V volume, this dash V flag to automatically delete this anonymous volume.", 'start': 7127.19, 'duration': 7.504}, {'end': 7137.072, 'text': "because we don't need it.", 'start': 7136.071, 'duration': 1.001}, {'end': 7140.013, 'text': "It's just there for that one little workaround for our node application.", 'start': 7137.112, 'duration': 2.901}, {'end': 7145.136, 'text': 'However, the problem is, is that this will delete not only anonymous volumes, but also named volumes.', 'start': 7140.534, 'duration': 4.602}, {'end': 7151.98, 'text': "So if we pass in this dash V flag, it's going to also delete this database, our Mongo database, a volume.", 'start': 7145.496, 'duration': 6.484}, {'end': 7157.923, 'text': "And so we obviously don't want to do that because we just went through all of this hassle so that 
we could save our database data.", 'start': 7152.94, 'duration': 4.983}], 'summary': 'Using -v flag deletes both anonymous and named volumes, affecting the mongo database volume as well.', 'duration': 32.614, 'max_score': 7125.309, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ7125309.jpg'}, {'end': 7221.339, 'src': 'embed', 'start': 7166.728, 'weight': 3, 'content': [{'end': 7168.43, 'text': 'And after that finishes running.', 'start': 7166.728, 'duration': 1.702}, {'end': 7173.215, 'text': 'if you do a Docker volume, LS is going to list out all of your volumes.', 'start': 7168.43, 'duration': 4.785}, {'end': 7175.797, 'text': 'you can see we have our node Docker MongoDB volume.', 'start': 7173.215, 'duration': 2.582}, {'end': 7177.319, 'text': "So you can see it's given a nice name.", 'start': 7176.017, 'duration': 1.302}, {'end': 7179.2, 'text': 'So we know exactly what this is being used for.', 'start': 7177.339, 'duration': 1.861}, {'end': 7183.425, 'text': "But we've got all of these anonymous volumes.", 'start': 7179.761, 'duration': 3.664}, {'end': 7185.326, 'text': "So you'll see over time, they start to build up.", 'start': 7183.445, 'duration': 1.881}, {'end': 7188.231, 'text': 'And you just have to delete them yourselves.', 'start': 7186.629, 'duration': 1.602}, {'end': 7194.696, 'text': "And so there's a nice easy command called a Docker volume, I believe prune, but don't run it yet.", 'start': 7188.551, 'duration': 6.145}, {'end': 7197.839, 'text': 'Instead of what I recommend that you do is start up your containers.', 'start': 7194.796, 'duration': 3.043}, {'end': 7213.933, 'text': 'So bring this back up and then do a Docker volume and then do a dash dash help.', 'start': 7199.781, 'duration': 14.152}, {'end': 7221.339, 'text': 'And so we have this prune command, which removes all unused local volumes.', 'start': 7217.776, 'duration': 3.563}], 'summary': "Use 'docker volume ls' to manage 
volumes, prune removes unused volumes", 'duration': 54.611, 'max_score': 7166.728, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ7166728.jpg'}, {'end': 7316.511, 'src': 'embed', 'start': 7288.189, 'weight': 8, 'content': [{'end': 7295.26, 'text': "And let's just do show DBs and we can see that our data is still there.", 'start': 7288.189, 'duration': 7.071}, {'end': 7301.883, 'text': 'And if I do a db.books.find, Oh, I forgot to switch databases.', 'start': 7295.721, 'duration': 6.162}, {'end': 7303.304, 'text': 'So we have to use my DB.', 'start': 7301.963, 'duration': 1.341}, {'end': 7304.965, 'text': 'And then now we do this.', 'start': 7304.104, 'duration': 0.861}, {'end': 7305.925, 'text': 'And there we go.', 'start': 7305.345, 'duration': 0.58}, {'end': 7309.227, 'text': "So we've now got persistent data for our Mongo database.", 'start': 7305.965, 'duration': 3.262}, {'end': 7316.511, 'text': "So now that our Mongo database is up and running, let's set up our Express application to connect to our Mongo database.", 'start': 7309.887, 'duration': 6.624}], 'summary': 'Set up persistent data for mongo database and connect to express application.', 'duration': 28.322, 'max_score': 7288.189, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ7288189.jpg'}], 'start': 6451.532, 'title': 'Managing docker volumes and data persistence with mongodb', 'summary': 'Covers setting up and interacting with a mongo container, managing container lifecycle, demonstrating data loss on restart, implementing data persistence using named volumes, and managing docker volumes and pruning, resulting in successful database management and data persistence.', 'chapters': [{'end': 6659.38, 'start': 6451.532, 'title': 'Setting up and interacting with a mongo container', 'summary': 'Demonstrates setting up a mongo container, connecting to it, creating a new database, inserting 
data, and querying the database, resulting in successful insertion and retrieval of data.', 'duration': 207.848, 'highlights': ['The chapter demonstrates setting up a Mongo container with root username and password, then running the container using Docker compose, resulting in the creation of the Mongo container and a node app container. The process involves setting environment variables for the root username and password, followed by running the Mongo container using Docker compose, which results in the creation of both Mongo and node app containers.', 'The chapter illustrates connecting to the Mongo container and accessing the file system, then logging into the Mongo instance using the provided username and password, and executing various Mongo commands, including switching to a new database, creating a collection, and inserting a document. After connecting to the Mongo container and logging in, various operations are performed, such as switching to a new database, creating a collection, and inserting a document into the collection, showcasing successful interaction with the Mongo instance.', "The chapter demonstrates the successful insertion of a document into the 'books' collection within the newly created database and querying the database to retrieve the inserted data. 
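The Mongo setup walked through in this part (root credentials via environment variables, plus a named volume declared at the top level so multiple services could attach to it) might look as follows in the compose file. The credentials, the volume name `mongo-db`, and the service name are illustrative, not the course's exact values:

```yaml
services:
  mongo:
    image: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root     # illustrative credentials
      - MONGO_INITDB_ROOT_PASSWORD=example
    volumes:
      - mongo-db:/data/db   # named volume: data survives docker-compose down

# Named volumes must also be declared at the top level;
# this lets other services attach to the same volume.
volumes:
  mongo-db:
```

Poking around as in the video then looks like `docker exec -it <mongo-container> bash`, followed in the mongo shell by `mongo -u root -p example`, `use mydb`, `db.books.insert({name: "Harry Potter"})`, and `db.books.find()`.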
The process involves inserting a document with the name 'Harry Potter' into the 'books' collection of the newly created database and then querying it back, demonstrating successful insertion and retrieval.

MongoDB container management and data persistence

Setting up the Mongo container with a root username and password and bringing it up with Docker Compose creates both the Mongo container and the node app container. Running mongo -u <username> inside the container drops straight into the Mongo shell without any extra steps, and the usual commands all work: switching to a new database, creating a collection, inserting a document.

The catch is persistence: after tearing the Mongo container down and bringing it back up, the data created in the previous container is gone, which means potential data loss and application instability. Anonymous volumes carry a risk of accidental deletion, so named volumes are preferred for data that has to survive. In Docker Compose, a named volume is created by giving it a human-readable name and mapping it to a path inside the container (name:path format), and a declared named volume can be mounted by multiple services, making it a flexible, reusable persistence mechanism.

Managing Docker volumes and pruning

The -v flag on docker-compose down automatically deletes anonymous volumes, which is useful for clearing out the node app's throwaway volume. Be careful, though: -v also deletes named volumes, including the Mongo database volume, so drop the flag whenever that data must persist. For general cleanup, docker volume prune removes all unused local volumes, keeping volume buildup down over time. With persistence sorted, the Express application is connected to Mongo using Mongoose: import mongoose, call connect, and work out which address the Docker container is reachable at.

Communicating between containers

Docker Compose creates a brand-new network just for the application, and every container and service in the compose file is placed on it. Running docker inspect on the node container shows its IP address and its default gateway; running the same inspect on the Mongo container confirms it sits on that same network, with an IP address of 172.25.0.2.
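The named-volume pattern described above looks roughly like this in a compose file (service and volume names here are illustrative, not the course's verbatim file):

```yaml
services:
  mongo:
    image: mongo
    volumes:
      # human-readable name mapped to the path inside the container
      - mongo-db:/data/db

# the named volume is declared once at the top level;
# any service in this file may then mount mongo-db
volumes:
  mongo-db:
```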
That inspected address gets copied straight into the Mongoose connection string, along with the port Mongo listens on — the default 27017, as long as none of the default configs were changed — plus one extra property, authSource=admin. Chaining .then onto the connect call logs "successfully connected to database" on success, and .catch logs the error on failure. After saving, docker ps followed by docker logs on the node application shows the success message.

It works, but there is something not to like: the IP address had to be dug out of docker inspect and pasted into the code. Stopping and starting the containers, or running docker-compose down and back up, gives no guarantee of getting the same IP address — and even if it did, someone would still have to fetch the address and update the code by hand. That is a sloppy way of doing things.

Docker has a feature that makes it easy for containers to talk to each other, and it only exists on custom networks. docker network ls lists the networks: bridge and host are the two defaults bundled with Docker, and alongside them sits the custom network Docker Compose created just for this application (node-docker default). Custom networks — the ones you create, never the two defaults — come with DNS: when one container wants to talk to another, it can use that container's name or its service name. In the docker-compose file, the node service is called node-app and the Mongo service is called mongo.
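The difference between the fragile hard-coded address and the DNS-based approach is just the hostname inside the connection string. A small sketch (the credentials are placeholders, not the course's values):

```javascript
// Build a MongoDB connection string for a given host. On a Compose custom
// network, the service name ("mongo") resolves via Docker's built-in DNS,
// so the inspected IP is never needed.
function buildMongoUri(user, password, host, port = 27017) {
  return `mongodb://${user}:${password}@${host}:${port}/?authSource=admin`;
}

// Fragile: the IP can change on every docker-compose down/up.
console.log(buildMongoUri('root', 'example', '172.25.0.2'));
// → mongodb://root:example@172.25.0.2:27017/?authSource=admin

// Stable: the service name is resolved by Docker's DNS at runtime.
console.log(buildMongoUri('root', 'example', 'mongo'));
// → mongodb://root:example@mongo:27017/?authSource=admin
```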
Pinging mongo from inside the node container proves the point: DNS automatically resolves the name to 172.25.0.2, the Mongo container's address. Any time one container needs to talk to another, referring to its service name is enough, because DNS is built into Docker; docker network ls and docker network inspect are the commands for listing and examining these custom networks. Networks also segregate traffic: containers placed on a given network can only communicate with the other containers on that network.

Express config file

One remaining annoyance is that the connection URL is hard-coded into the application, which you never want. Stored as an environment variable instead, nothing in the code needs to change when moving to production: the values are simply pulled from whatever is set in Docker Compose or on the host machine. A new folder in the base directory holds a config.js file that collects every environment variable — the MongoDB username, password, and IP address among them — in one central place, giving clarity and a single spot for any future change.
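A minimal sketch of such a config.js, assuming environment-variable names along these lines (the exact names in the course may differ):

```javascript
// config.js — every environment variable the app needs, in one place.
// Defaults suit development; production overrides them via docker-compose
// or the host machine, with no code change required.
const config = {
  MONGO_USER: process.env.MONGO_USER || 'root',
  MONGO_PASSWORD: process.env.MONGO_PASSWORD || 'example',
  MONGO_IP: process.env.MONGO_IP || 'mongo', // resolved by Compose DNS
  MONGO_PORT: process.env.MONGO_PORT || 27017,
};

module.exports = config;
```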
Container bootup order

The node container should not come up before Mongo exists, so Docker needs a way to load the Mongo container first and have the node container connect only once it is running. Docker Compose has a depends_on field for this: in the shared docker-compose.yaml (both environments want the same behavior), the node service lists mongo as a dependency.

That is not the whole story. There may well be cleaner ways to implement this, but the point to drive home is that the application must handle the logic itself. Don't rely on an orchestrator, on Docker, or on Docker Compose, because none of them can truly guarantee that the Mongo database is fully up and running before the application starts; the application has to be intelligent enough to handle that scenario.

Building a CRUD application

With the node application successfully talking to the MongoDB instance, it is time to build out a demo CRUD application. An exhaustive search of YouTube turned up not a single tutorial on building a to-do application, so that is exactly what gets built here — just kidding.

Sign up and login

Password handling works like this: the hashed password lives in the database, and when a user tries to log in, the password they supplied is hashed and that hash is compared with the stored one. If the two match, the user can log in; bcrypt.compare takes the password the user tried to log in with and the stored hash and performs the comparison.

Authentication with sessions and Redis

Once a user logs in, that state has to be stored somewhere in the application, and sessions are the answer. A user can currently sign up and log in, but retrieving or modifying posts should require logging in and authenticating first, and express-session handles that. The connect-redis documentation walks through the whole process of wiring express-session up to a Redis database. First the app needs an actual Redis database: searching Docker Hub for redis turns up the official image as the first result, and it gets added to the compose file. In the code, the Redis-backed session store comes from the connect-redis package.
let RedisStore = require('connect-redis')(session) pulls the store in, passing the express-session object into it. Next comes the Redis client: let redisClient = redis.createClient(...) — createClient, not require — which needs two things, the host URL and the port the Redis server listens on. Both belong in config.js as environment variables. REDIS_URL (really just the host address) is process.env.REDIS_URL, and when nothing is set it defaults to redis: DNS is at our disposal on the custom network, so any container that needs the Redis database's address can simply reference the service name redis. In development the URL is never actually passed in; the variable exists so that, if the Redis database ever stops being a Docker container that the name redis can resolve — a managed Redis server, say — its address can be supplied as an environment variable instead. The port gets the same treatment: REDIS_PORT is process.env.REDIS_PORT, defaulting to Redis's standard port, 6379. The client is then created with host set to REDIS_URL and port set to REDIS_PORT.

Right before the first existing middleware, a brand-new session middleware is registered: session() is passed an object whose store is new RedisStore({ client: redisClient }) — referencing the Redis client created above — and whose secret is a random string kept on the Express server and used when handling sessions. It can be any string, so it too becomes an environment variable defined in config.
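Gathered into one object, the session middleware options look roughly like this (a sketch; the course passes the object inline to session(), and the 30-second maxAge is deliberately short, for demonstration):

```javascript
// Builds the options object handed to express-session (sketch).
// In the real app: store is new RedisStore({ client: redisClient }),
// and secret comes from an environment variable via config.js.
function makeSessionOptions(store, secret) {
  return {
    store,
    secret,
    cookie: {
      maxAge: 30000,  // cookie lifetime: 30 seconds, short on purpose
      httpOnly: true, // cookie is not readable from client-side JavaScript
    },
  };
}

const options = makeSessionOptions({ kind: 'redis-store-placeholder' }, 'any-random-string');
console.log(options.cookie.maxAge); // → 30000
```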
Container bootup order, reviewed

depends_on specifies the dependency between containers so that the Mongo container starts before the node container, preventing the crash on startup — but depends_on only orders container startup; it cannot confirm the database is ready. The real safety net is retry logic in the application: a connectWithRetry function that leans on Mongoose's built-in behavior of trying each connection attempt for 30 seconds and schedules fresh attempts with setTimeout. Running only the node service with docker-compose up and the --no-deps flag — which starts the service without triggering its linked dependencies — shows this in the logs: the application retries continuously until the Mongo database finally comes up, then connects.

Building a blog application, reviewed

The blog application comes together as a full CRUD app: Mongoose models for blog posts, post controllers for the CRUD operations, and routes wired to the different HTTP methods, all developed against the Docker setup with the development-to-production workflow in mind.
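The retry loop can be sketched generically; connectWithRetry is the name used in the course, but the stub connect function and the short interval below are illustrative:

```javascript
// Keep attempting a connection until it succeeds: on failure, log and
// schedule another attempt with setTimeout, mirroring connectWithRetry.
function connectWithRetry(connect, intervalMs) {
  connect()
    .then(() => console.log('successfully connected to database'))
    .catch((err) => {
      console.log('connection failed, retrying:', err.message);
      setTimeout(() => connectWithRetry(connect, intervalMs), intervalMs);
    });
}

// Stand-in for mongoose.connect(mongoURI): fails twice, then succeeds,
// simulating a Mongo container that is still booting.
let attempts = 0;
const fakeConnect = () =>
  ++attempts < 3 ? Promise.reject(new Error('mongo not up yet')) : Promise.resolve();

connectWithRetry(fakeConnect, 10);
```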
Each piece is tested with Postman: creating, retrieving, updating, and deleting blog posts against the running containers.

Sign up and login, reviewed

The user model carries username and password properties. The auth controller's signup handler wraps its work in try/catch, hashes the password with bcrypt before storing it, and returns the new user on success; the login controller, also with try/catch error handling, compares the hashed login attempt against the stored hash. Both flows are verified with POST requests, confirming that a new user can be created and then log in.

Authentication with sessions and Redis, reviewed

express-session provides the authentication layer over the login and signup routes, with the Redis database integrated as the session store — wiring express-session to Redis is straightforward and gives a fast, shared place to keep sessions. The final configuration defines the Redis store and Redis client, moves the host and port into environment variables, registers the session middleware with its secret, and sets the properties of the cookie sent back to the user: a maxAge of 30 seconds (deliberately short for the demo) and httpOnly set to true. The matching Docker environment variables for development, including the session secret, round it out.

And you might be
wondering why did it return an empty array, we have a session.', 'start': 12186.064, 'duration': 2.804}, {'end': 12198.839, 'text': 'Well, if you recall, where I set some properties on my cookies and my sessions, is that the that this is only going to last 30 seconds.', 'start': 12189.328, 'duration': 9.511}, {'end': 12200.941, 'text': 'So the session dies after 30 seconds.', 'start': 12199.259, 'duration': 1.682}, {'end': 12203.784, 'text': "So what I'm going to do is, I'm going to re log in.", 'start': 12200.961, 'duration': 2.823}, {'end': 12205.926, 'text': 'This should create a new session.', 'start': 12204.505, 'duration': 1.421}, {'end': 12208.487, 'text': "And then now let's quickly run the same command.", 'start': 12206.466, 'duration': 2.021}, {'end': 12210.647, 'text': 'And you can see we have a session right there.', 'start': 12208.967, 'duration': 1.68}, {'end': 12215.109, 'text': 'And if you want to see the details for a session, you can type in get and then the key for that.', 'start': 12210.667, 'duration': 4.442}], 'summary': 'Session expires after 30 seconds, new session created upon re-logging in.', 'duration': 29.045, 'max_score': 12186.064, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ12186064.jpg'}, {'end': 12507.545, 'src': 'embed', 'start': 12479.988, 'weight': 2, 'content': [{'end': 12483.171, 'text': 'depending on how your application works, they have to be logged in.', 'start': 12479.988, 'duration': 3.183}, {'end': 12486.715, 'text': 'And the way we can accomplish that is by using Express middleware.', 'start': 12483.712, 'duration': 3.003}, {'end': 12490.918, 'text': 'And a middleware is nothing more than a function that runs before your controller.', 'start': 12487.155, 'duration': 3.763}, {'end': 12493.52, 'text': 'this function is going to have a little bit of logic.', 'start': 12491.559, 'duration': 1.961}, {'end': 12499.482, 'text': "And all it's going to do is it's going 
to check that sessions object to see if there's a user property attached to it.", 'start': 12493.84, 'duration': 5.642}, {'end': 12506.024, 'text': 'And if there is a user property to it attached to it, then it will then forward the request on to the controller.', 'start': 12499.902, 'duration': 6.122}, {'end': 12507.545, 'text': 'So the controller can handle that logic.', 'start': 12506.044, 'duration': 1.501}], 'summary': 'Express middleware checks if user is logged in before forwarding request to controller.', 'duration': 27.557, 'max_score': 12479.988, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ12479988.jpg'}, {'end': 13002.499, 'src': 'embed', 'start': 12975.929, 'weight': 3, 'content': [{'end': 12981.632, 'text': "And it's also a little bit of a security vulnerability because your database holds all of your critical application data.", 'start': 12975.929, 'duration': 5.703}, {'end': 12983.333, 'text': "It's got all of your user information.", 'start': 12982.072, 'duration': 1.261}, {'end': 12984.393, 'text': "it's got all of their emails.", 'start': 12983.333, 'duration': 1.06}, {'end': 12989.516, 'text': 'it could have potentially other sensitive information like social security number passwords and other things like that.', 'start': 12984.393, 'duration': 5.123}, {'end': 12995.417, 'text': "So generally, it's best not to make the Mongo container accessible to the outside world.", 'start': 12990.296, 'duration': 5.121}, {'end': 13002.499, 'text': "And I love how Docker by default, you know, if we don't open up any ports, it already isolates the Mongo container.", 'start': 12996.097, 'duration': 6.402}], 'summary': "Exposing the mongo container can pose security risks, holding critical user data including emails and potentially sensitive information. 
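The middleware just described (check the session object for a `user` property, forward to the controller or reject) can be sketched as a single Express-style function. A minimal sketch: the 401 response body shown here is an assumption, not the course's exact wording.

```javascript
// Express-style middleware: runs before the controller and only
// forwards the request when the session carries a user property.
function protect(req, res, next) {
  const { user } = req.session || {};
  if (!user) {
    // No logged-in user on the session: reject before the controller runs.
    return res.status(401).json({ status: 'fail', message: 'unauthorized' });
  }
  // User is logged in: hand the request on to the controller.
  next();
}
```

Wired up per route, this would look like `router.post('/', protect, postController.createPost)`, so unauthenticated requests never reach the controller.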
docker's default isolation provides added security.", 'duration': 26.57, 'max_score': 12975.929, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ12975929.jpg'}], 'start': 12049.271, 'title': 'Session management and application architecture', 'summary': 'Discusses setting up sessions, session management, and user authentication in node.js and express, with a focus on user login, session expiration, and access restriction. it also introduces docker application architecture and emphasizes security measures for limiting port access to the mongo database.', 'chapters': [{'end': 12186.024, 'start': 12049.271, 'title': 'Session setup and database interaction', 'summary': 'Discusses setting up sessions and creating a session upon user login, demonstrating cookie creation and expiration, as well as accessing and viewing the redis database for session entries.', 'duration': 136.753, 'highlights': ['Upon user login, a session needs to be created, and the cookies tab shows a received cookie with a value of one, set to expire in 30 seconds.', "The session cookie's properties include the domain set to locals, HTTP only set to true, and secure set to false, which can be adjusted based on environment requirements.", 'Demonstrates accessing and viewing the Redis database for session entries by executing commands to interact with the database.']}, {'end': 12792.655, 'start': 12186.064, 'title': 'Session management and user authentication', 'summary': 'Explains how to manage user sessions and implement user authentication using sessions in node.js, setting session expiration, storing user information, and using express middleware for protecting routes, with a focus on ensuring user authentication and session management, enabling users to be logged in for a specific duration and restricting access to certain routes based on user authentication.', 'duration': 606.591, 'highlights': ['Implementing session expiration and setting 
session duration to 60 seconds, demonstrating the process of logging in and creating a new session, and accessing user information within the session. Session expiration set to 60 seconds, logging in and creating a new session, accessing user information within the session.', 'Using Express middleware to protect routes, ensuring user authentication by checking the presence of a user property in the session object, and granting access to controllers based on user authentication. Implementing Express middleware to protect routes, checking user authentication, granting access to controllers based on user authentication.', 'Demonstrating the process of session expiration leading to user being logged out, and the subsequent inability to create a post after session expiration. Session expiration leading to user being logged out, inability to create a post after session expiration.']}, {'end': 12989.516, 'start': 12801.651, 'title': 'Docker application architecture', 'summary': 'Introduces an express application and its architecture, showcasing the docker setup and highlighting the importance of limiting port access to the mongo database for security purposes.', 'duration': 187.865, 'highlights': ['The current architecture includes an Express application listening on port 3000 and a Mongo database accessible on port 27017.', 'The tutorial emphasizes the significance of restricting outside access to the Mongo database due to its critical data, including user information and sensitive details.', 'The chapter concludes the Express application side and previews the upcoming Docker-related content for the next video.']}], 'duration': 940.245, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ12049271.jpg', 'highlights': ['Upon user login, a session needs to be created, and the cookies tab shows a received cookie with a value of one, set to expire in 30 seconds.', 'Implementing session expiration and setting session 
duration to 60 seconds, demonstrating the process of logging in and creating a new session, and accessing user information within the session.', 'Using Express middleware to protect routes, ensuring user authentication by checking the presence of a user property in the session object, and granting access to controllers based on user authentication.', 'The tutorial emphasizes the significance of restricting outside access to the Mongo database due to its critical data, including user information and sensitive details.']}, {'end': 14347.22, 'segs': [{'end': 13117.794, 'src': 'embed', 'start': 13087.623, 'weight': 0, 'content': [{'end': 13091.226, 'text': "And then we'd have to grab a different port on our local machine like 3001.", 'start': 13087.623, 'duration': 3.603}, {'end': 13099.031, 'text': 'And so any traffic that gets sent to our local host on port 3001 will get mapped to port 3000 on our second node container.', 'start': 13091.226, 'duration': 7.805}, {'end': 13103.474, 'text': "And if we wanted a third one, we'd have to open up another port like 3002 and so on.", 'start': 13099.791, 'duration': 3.683}, {'end': 13108.84, 'text': 'So if we had 50 containers, 50 node apps, we would need to open up 50 different ports.', 'start': 13103.534, 'duration': 5.306}, {'end': 13112.045, 'text': "And you know, like I said, that's not a scalable solution.", 'start': 13109.36, 'duration': 2.685}, {'end': 13117.794, 'text': "You know, our front end shouldn't have to be aware of the number of node containers that we're running on our back end.", 'start': 13112.065, 'duration': 5.729}], 'summary': 'To avoid opening multiple ports for containers, a scalable solution is needed.', 'duration': 30.171, 'max_score': 13087.623, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ13087623.jpg'}, {'end': 13232.925, 'src': 'embed', 'start': 13203.991, 'weight': 1, 'content': [{'end': 13207.133, 'text': "And then what our NGINX is 
going to do is it's going to act as a load balancer.", 'start': 13203.991, 'duration': 3.142}, {'end': 13212.816, 'text': "So every request that it receives, it's going to load balance it to our two express instances.", 'start': 13207.593, 'duration': 5.223}, {'end': 13223.222, 'text': 'And if we have four, five, one, or a thousand instances, NGINX will be able to load balance all of those requests across all of our node instances.', 'start': 13213.316, 'duration': 9.906}, {'end': 13232.925, 'text': 'And so this is a much cleaner, elegant solution, because, first of all, we only have to publish one port, and then Nginx, which is highly efficient,', 'start': 13223.962, 'duration': 8.963}], 'summary': 'Nginx acts as a load balancer, efficiently distributing requests to multiple node instances.', 'duration': 28.934, 'max_score': 13203.991, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ13203991.jpg'}, {'end': 13410.951, 'src': 'embed', 'start': 13386.795, 'weight': 2, 'content': [{'end': 13392.979, 'text': "And one of the things that nginx does is you'll lose the original IP address of the sender.", 'start': 13386.795, 'duration': 6.184}, {'end': 13396.622, 'text': 'So you know what was the IP address that originated that request.', 'start': 13393.039, 'duration': 3.583}, {'end': 13401.025, 'text': 'So we can tell nginx to make sure that we forward that along to our node applications.', 'start': 13396.742, 'duration': 4.283}, {'end': 13403.586, 'text': "Now our node application isn't actually making use of that.", 'start': 13401.305, 'duration': 2.281}, {'end': 13409.11, 'text': "But if you're doing any kind of like rate limiting per IP address, these are all things that you want to need.", 'start': 13404.087, 'duration': 5.023}, {'end': 13410.951, 'text': "So it's always best practice to configure this.", 'start': 13409.13, 'duration': 1.821}], 'summary': 'Nginx can forward original ip addresses, essential for 
rate limiting per ip address.', 'duration': 24.156, 'max_score': 13386.795, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ13386795.jpg'}, {'end': 13919.621, 'src': 'heatmap', 'start': 13521.706, 'weight': 0.876, 'content': [{'end': 13524.832, 'text': "Now, You know, for what we're doing, we're just building a backend.", 'start': 13521.706, 'duration': 3.126}, {'end': 13532.599, 'text': 'However, if you wanted this Nginx server to also handle serving your front end assets, what you would ultimately want to do is,', 'start': 13524.892, 'duration': 7.707}, {'end': 13541.748, 'text': "in your API and we've already kind of set this up is that for all of your routes, you want to make sure that they are listening on API slash V1,", 'start': 13532.599, 'duration': 9.149}, {'end': 13542.609, 'text': 'slash something.', 'start': 13541.748, 'duration': 0.861}, {'end': 13548.991, 'text': 'So that way we know that the NGINX server can actually specify that any request that starts with API is meant for our back end.', 'start': 13543.149, 'duration': 5.842}, {'end': 13554.473, 'text': "And then any request that's meant for a URL that does not have the API that's meant for our front end.", 'start': 13549.411, 'duration': 5.062}, {'end': 13560.275, 'text': 'So since all of these requests are listening for API first, well, except for this one, but we can add that real quick.', 'start': 13554.573, 'duration': 5.702}, {'end': 13568.221, 'text': 'what we can do is we can go back to that configuration file and we can say slash api.', 'start': 13563.533, 'duration': 4.688}, {'end': 13572.929, 'text': 'so in this case, whatever is, whatever urls passed for location,', 'start': 13568.221, 'duration': 4.708}, {'end': 13581.166, 'text': 'this is going to specify what the request needs to look like for us to forward it to our node application.', 'start': 13572.929, 'duration': 8.237}, {'end': 13585.287, 'text': "So any request that 
comes in starting with slash API, we'll send it to our node app.", 'start': 13581.226, 'duration': 4.061}, {'end': 13588.808, 'text': "And then anything that doesn't have slash API right now is just going to drop.", 'start': 13585.707, 'duration': 3.101}, {'end': 13595.229, 'text': 'But in the future, we could configure it so that it can handle, you know, redirect that traffic to our Nginx.', 'start': 13589.188, 'duration': 6.041}, {'end': 13599.99, 'text': 'Sorry, not our Nginx, but like a React application or whatever our front end application is.', 'start': 13595.649, 'duration': 4.341}, {'end': 13605.475, 'text': "Right now let's go ahead and go to our Docker compose file and let's add our, add our NGINX service.", 'start': 13600.97, 'duration': 4.505}, {'end': 13607.277, 'text': 'We can do NGINX.', 'start': 13606.296, 'duration': 0.981}, {'end': 13619.93, 'text': 'And then the image, uh, this is going to be NGINX and then we can, and let me put a space NGINX.', 'start': 13612.302, 'duration': 7.628}, {'end': 13623.714, 'text': "And then I'm going to grab the stable Alpine version.", 'start': 13620.831, 'duration': 2.883}, {'end': 13626.488, 'text': 'All right.', 'start': 13626.308, 'duration': 0.18}, {'end': 13632.312, 'text': 'And so now, first of all, we no longer have to publish ports for our node application.', 'start': 13626.628, 'duration': 5.684}, {'end': 13634.453, 'text': 'So we can remove that actually.', 'start': 13632.872, 'duration': 1.581}, {'end': 13641.418, 'text': "And let's actually go into our dev and prod, make sure we remove any of the ports being opened there as well.", 'start': 13636.975, 'duration': 4.443}, {'end': 13642.799, 'text': "And it doesn't look like we have anything.", 'start': 13641.458, 'duration': 1.341}, {'end': 13644.82, 'text': 'And prod looks okay.', 'start': 13643.78, 'duration': 1.04}, {'end': 13647.562, 'text': "And so let's go back to our Docker compose.", 'start': 13645.221, 'duration': 2.341}, {'end': 13650.104, 
'text': "And then here, let's open up a port.", 'start': 13648.703, 'duration': 1.401}, {'end': 13651.064, 'text': "So we'll say ports.", 'start': 13650.284, 'duration': 0.78}, {'end': 13655.326, 'text': "And then let's pick the port that we want to publish.", 'start': 13653.206, 'duration': 2.12}, {'end': 13656.167, 'text': 'So pick anything.', 'start': 13655.386, 'duration': 0.781}, {'end': 13657.827, 'text': 'We can do 3000 still if we want.', 'start': 13656.207, 'duration': 1.62}, {'end': 13664.068, 'text': "And then we just want to make sure we map it to port 80 because that's the port that our NGINX server is listening on.", 'start': 13659.567, 'duration': 4.501}, {'end': 13670.609, 'text': "And for production, actually, let's copy this.", 'start': 13667.949, 'duration': 2.66}, {'end': 13675.19, 'text': "For production, it's going to be a different port.", 'start': 13673.03, 'duration': 2.16}, {'end': 13680.703, 'text': "So instead of opening up port 3000, we're going to open up port 80.", 'start': 13675.25, 'duration': 5.453}, {'end': 13683.224, 'text': "And we can remove that image because we're not changing anything.", 'start': 13680.703, 'duration': 2.521}, {'end': 13694.288, 'text': 'And actually, why even have this here? 
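The pieces being wired together here can be summarized in two config fragments. These are hedged reconstructions from the narration, not the repository's exact files: the upstream service name (`node-app`), image tag, and port numbers follow the transcript; everything else is conventional nginx/compose boilerplate.

```nginx
# nginx/default.conf — reverse proxy / load balancer for the node containers
server {
    listen 80;

    # Only requests starting with /api are meant for the backend;
    # anything else is dropped (and could later go to a front end).
    location /api {
        # Forward the original client IP so the app can use it
        # (e.g. for per-IP rate limiting).
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;

        # Docker's DNS resolves the service name to the node containers,
        # so nginx balances requests across all of them on port 3000.
        proxy_pass http://node-app:3000;
    }
}
```

And the compose side, with the config bind-mounted read-only into the path where nginx expects it:

```yaml
# docker-compose fragment (sketch): only the nginx service is shown
services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - "3000:80"   # dev mapping; prod publishes "80:80" instead
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
```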
I can just copy this and put this in the dev section.', 'start': 13688.946, 'duration': 5.342}, {'end': 13710.789, 'text': 'All right, now the next thing that we have to do is we have to get our configuration file that we built out into our nginx container.', 'start': 13704.244, 'duration': 6.545}, {'end': 13717.494, 'text': "So there's a couple of different ways we can do this, we can create our own custom nginx image that already has our configuration built in.", 'start': 13710.989, 'duration': 6.505}, {'end': 13722.598, 'text': 'Or we can just configure a volume specifically a bind mount and just have it sync those two files.', 'start': 13717.934, 'duration': 4.664}, {'end': 13724.359, 'text': "And so that'll.", 'start': 13723.438, 'duration': 0.921}, {'end': 13725.42, 'text': "I think that's the route we're going to go.", 'start': 13724.359, 'duration': 1.061}, {'end': 13729.943, 'text': "we're just going to configure bind mounts, that we don't have to worry about building a custom image and doing all of that nonsense.", 'start': 13725.42, 'duration': 4.523}, {'end': 13744.39, 'text': 'under volumes So you have to understand a little bit about where nginx looks for this config.', 'start': 13736.445, 'duration': 7.945}, {'end': 13753.276, 'text': 'So nginx is going to look for this in the slash etsy slash nginx slash conf dot d slash default dot conf file.', 'start': 13744.41, 'duration': 8.866}, {'end': 13755.198, 'text': "So that's where it expects the configuration.", 'start': 13753.377, 'duration': 1.821}, {'end': 13760.682, 'text': "And we're going to sync that with nginx default dot conf.", 'start': 13756.018, 'duration': 4.664}, {'end': 13766.246, 'text': "So we'll just go dot slash nginx slash default dot conf.", 'start': 13760.722, 'duration': 5.524}, {'end': 13771.227, 'text': "And then on the nginx container side, we're going to make this read only.", 'start': 13767.046, 'duration': 4.181}, {'end': 13773.367, 'text': 'It should never have to change 
the configuration.', 'start': 13771.327, 'duration': 2.04}, {'end': 13775.028, 'text': 'So a little bit of a security check.', 'start': 13773.407, 'duration': 1.621}, {'end': 13779.729, 'text': "And let's tear everything down.", 'start': 13778.529, 'duration': 1.2}, {'end': 13785.11, 'text': "And let's build it back up.", 'start': 13784.05, 'duration': 1.06}, {'end': 13787.371, 'text': "We're just going to do one instance just for now.", 'start': 13785.55, 'duration': 1.821}, {'end': 13788.731, 'text': "Let's just make sure everything's working.", 'start': 13787.391, 'duration': 1.34}, {'end': 13798.034, 'text': "And let's try sending a request.", 'start': 13796.773, 'duration': 1.261}, {'end': 13799.996, 'text': "So we're going to try to log in again.", 'start': 13798.095, 'duration': 1.901}, {'end': 13804.101, 'text': "And let's go to the body here.", 'start': 13800.016, 'duration': 4.085}, {'end': 13808.165, 'text': 'And it looks like we broke our application.', 'start': 13806.503, 'duration': 1.662}, {'end': 13814.435, 'text': "So let's take a look and see what exactly we broke.", 'start': 13811.932, 'duration': 2.503}, {'end': 13816.857, 'text': 'All right, guys, so I made a stupid mistake.', 'start': 13814.455, 'duration': 2.402}, {'end': 13819.54, 'text': 'I just forgot to update this to port 3000.', 'start': 13816.917, 'duration': 2.623}, {'end': 13820.681, 'text': 'So I had left it at 3001.', 'start': 13819.54, 'duration': 1.141}, {'end': 13825.046, 'text': "But make sure we change that to 3000 because that's what our Nginx server is listening on.", 'start': 13820.681, 'duration': 4.365}, {'end': 13829.47, 'text': "So now if I log in, we can see that it's successful and we did receive the cookie.", 'start': 13825.066, 'duration': 4.404}, {'end': 13833.431, 'text': 'Alright, so it looks like we got our application up into a working state,', 'start': 13830.37, 'duration': 3.061}, {'end': 13838.693, 'text': 'using nginx as a proxy so that I can load balance 
requests to all of our node instances.', 'start': 13833.431, 'duration': 5.262}, {'end': 13841.194, 'text': "But there's a couple more things that we got to do.", 'start': 13839.614, 'duration': 1.58}, {'end': 13845.576, 'text': "So So what I'm going to do is I'm going to pull up this web page right here.", 'start': 13841.354, 'duration': 4.222}, {'end': 13850.038, 'text': "So I just want you to search for Express and then proxy, and it'll be the first result you get.", 'start': 13845.596, 'duration': 4.442}, {'end': 13854.901, 'text': 'it just explains that we do have to add one extra configuration into our Express application.', 'start': 13850.698, 'duration': 4.203}, {'end': 13864.428, 'text': "When, whenever our Express application is sitting behind a proxy and this isn't technically required for our example, for our demonstration project,", 'start': 13855.301, 'duration': 9.127}, {'end': 13866.009, 'text': 'but in a production grade project.', 'start': 13864.428, 'duration': 1.581}, {'end': 13871.092, 'text': 'you probably will need to add this that the configuration right here is this app dot set, trust proxy.', 'start': 13866.009, 'duration': 5.083}, {'end': 13878.858, 'text': "And so all this is saying is that we're going to trust some of the headers that our nginx proxy is going to be adding on to the request.", 'start': 13871.553, 'duration': 7.305}, {'end': 13880.358, 'text': 'And so remember,', 'start': 13879.558, 'duration': 0.8}, {'end': 13888.881, 'text': 'we configure nginx server to basically add the originating senders IP address into the headers so that if our Express application does need it,', 'start': 13880.358, 'duration': 8.523}, {'end': 13889.742, 'text': 'it has access to it.', 'start': 13888.881, 'duration': 0.861}, {'end': 13895.924, 'text': "All we're doing here is we're just telling Express to trust whatever our nginx server is adding onto those headers.", 'start': 13890.262, 'duration': 5.662}, {'end': 13899.856, 'text': 'And so all 
we have to do is we have one simple configuration.', 'start': 13897.836, 'duration': 2.02}, {'end': 13902.897, 'text': 'So if we go to our middleware, we have our session middleware.', 'start': 13899.936, 'duration': 2.961}, {'end': 13906.078, 'text': "So right above this, we're going to do app dot enable.", 'start': 13902.917, 'duration': 3.161}, {'end': 13908.238, 'text': 'And then we just say trust proxy.', 'start': 13906.878, 'duration': 1.36}, {'end': 13910.139, 'text': "That's the only thing that we have to do.", 'start': 13908.899, 'duration': 1.24}, {'end': 13915.8, 'text': "But this is really just in cases for when you need access to that IP address, which we don't.", 'start': 13910.199, 'duration': 5.601}, {'end': 13919.621, 'text': "But if you're doing some sort of rate limiting, it can be it's going to be necessary.", 'start': 13915.86, 'duration': 3.761}], 'summary': 'Configuring nginx to serve backend and front end assets, using it as a proxy to load balance requests to node instances.', 'duration': 397.915, 'max_score': 13521.706, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ13521706.jpg'}, {'end': 14302.49, 'src': 'embed', 'start': 14273.91, 'weight': 4, 'content': [{'end': 14277.533, 'text': "So there's still a lot of things that we need to learn about Docker, a lot of best practices.", 'start': 14273.91, 'duration': 3.623}, {'end': 14282.738, 'text': "So in the deployment section, what's going to happen is I'm going to show you guys how to deploy it.", 'start': 14277.613, 'duration': 5.125}, {'end': 14285.16, 'text': "And we're going to start off by doing it the wrong way.", 'start': 14283.238, 'duration': 1.922}, {'end': 14291.904, 'text': "And then we're going to slowly correct each mistake one by one, so that you know exactly why we are doing these things.", 'start': 14285.7, 'duration': 6.204}, {'end': 14296.166, 'text': 'And then, when we get to our final deployment scenario,', 
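What `app.enable('trust proxy')` buys you is that Express derives `req.ip` from the `X-Forwarded-For` header the nginx proxy appends, instead of from the proxy's own socket address. A rough sketch of that derivation, assuming a single trusted proxy hop; Express's real logic (delegated to the `proxy-addr` package) also handles trust lists and multiple hops, so this is illustrative only.

```javascript
// Roughly what Express computes for req.ip once 'trust proxy' is enabled:
// take the left-most entry of X-Forwarded-For (the original client),
// falling back to the socket address when the header is absent or untrusted.
function clientIp(xForwardedFor, socketAddr, trustProxy) {
  if (!trustProxy || !xForwardedFor) return socketAddr;
  return xForwardedFor.split(',')[0].trim();
}
```

Without trusting the proxy, every request would appear to come from the nginx container's internal address, which defeats any per-IP rate limiting.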
'start': 14292.244, 'duration': 3.922}, {'end': 14302.49, 'text': 'where we actually deploy our application the proper way and we know how to properly update our application,', 'start': 14296.166, 'duration': 6.324}], 'summary': 'Learning docker best practices and deployment process, correcting mistakes and achieving proper deployment.', 'duration': 28.58, 'max_score': 14273.91, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ14273910.jpg'}], 'start': 12990.296, 'title': 'Securing docker containers, scaling node apps, and implementing nginx load balancer', 'summary': "Discusses securing a mongo container, leveraging docker's default isolation for security, scaling node containers, implementing nginx load balancer for efficient request balancing across multiple node instances, configuring nginx for express server, and deploying applications with nginx and docker for load balancing, cors enabling, and docker best practices.", 'chapters': [{'end': 13108.84, 'start': 12990.296, 'title': 'Securing docker containers and scaling node apps', 'summary': "Discusses securing a mongo container by not making it accessible to the outside world, leveraging docker's default isolation to add security, and scaling up node containers to handle increased traffic by publishing different ports for each container.", 'duration': 118.544, 'highlights': ['Securing the Mongo container by not making it accessible to the outside world By not opening any ports for the Mongo container, it is isolated and only the express application and containers within the network can communicate with it, adding a layer of security to the application.', "Leveraging Docker's default isolation for added security Without opening any ports, Docker isolates the Mongo container, allowing only the express application and containers within the network to communicate with the Mongo database, enhancing application security.", 'Scaling up node containers to handle 
increased traffic by publishing different ports for each container To handle increased traffic, additional node express containers are spun up and connected to the Mongo database using different ports, allowing each container to be accessed via a unique port on the local machine, enabling scalability to handle increased load in traffic.']}, {'end': 13366.386, 'start': 13109.36, 'title': 'Implementing nginx load balancer', 'summary': 'Discusses implementing an nginx load balancer to efficiently balance requests across multiple node instances, utilizing a single port and default ports 80 and 443 for http and https, respectively, and configuring nginx to redirect traffic to express or node containers via the proxy pass field.', 'duration': 257.026, 'highlights': ['Implementing Nginx load balancer to efficiently balance requests The Nginx load balancer efficiently balances requests across all node instances, providing a cleaner and more elegant solution, requiring the publication of only one port.', 'Utilizing a single port and default ports 80 and 443 for HTTP and HTTPS The use of a single port for Nginx container, defaulting to ports 80 and 443 for HTTP and HTTPS, streamlines the process, simplifying the configuration and ensuring efficient traffic management.', 'Configuring Nginx to redirect traffic to Express or node containers via the proxy pass field Nginx is configured to redirect traffic to Express or node containers through the proxy pass field, utilizing DNS access to efficiently load balance across all node app containers on port 3000.']}, {'end': 13773.367, 'start': 13366.386, 'title': 'Configuring nginx for express server', 'summary': "Covers configuring nginx as a proxy to forward requests to an express application, including ensuring the original sender's ip is passed, specifying the proxy server ips in headers, and setting up nginx to handle requests for backend and front-end applications.", 'duration': 406.981, 'highlights': ["Configuring NGINX to pass 
the original sender's IP address to the Express application to ensure it is forwarded along for potential use in rate limiting per IP address. Original sender's IP address", 'Setting up NGINX to include a list of proxy server IPs in the headers for each request, as a best practice for configuration. Proxy server IPs', "Specifying NGINX to handle requests for the backend by listening on routes starting with 'API', and ensuring requests without 'API' are meant for the front-end application. Routing requests to backend and front-end applications"]}, {'end': 14347.22, 'start': 13773.407, 'title': 'Deploying application with nginx and docker', 'summary': 'Covers setting up nginx as a proxy for load balancing requests to multiple node instances, configuring express application to trust nginx proxy headers, scaling the application to two node instances, and enabling cors for cross-domain api requests, with a plan to deploy the application into production and cover docker best practices.', 'duration': 573.813, 'highlights': ['Setting up Nginx as a proxy for load balancing requests to multiple node instances. The speaker successfully sets up Nginx as a proxy to load balance requests to multiple node instances, confirming successful load balancing by verifying logs.', 'Configuring Express application to trust Nginx proxy headers. The speaker explains the need to configure the Express application to trust headers added by the Nginx proxy, ensuring access to additional headers if required.', 'Scaling the application to two node instances. The speaker scales the application to two node instances, demonstrating the use of the scale flag and verifying load balancing by generating logs from both instances.', 'Enabling CORS for cross-domain API requests. 
The speaker adds the CORS library to enable cross-domain API requests, demonstrating the simple configuration process and verifying successful API access from a different domain.', 'Plan to deploy the application into production and cover Docker best practices. The speaker outlines a plan to deploy the application into production, highlighting the intention to demonstrate the wrong deployment methods and gradually correct mistakes to understand proper deployment procedures and Docker best practices.']}], 'duration': 1356.924, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ12990296.jpg', 'highlights': ['Scaling up node containers to handle increased traffic by publishing different ports for each container', 'Implementing Nginx load balancer to efficiently balance requests', "Configuring NGINX to pass the original sender's IP address to the Express application to ensure it is forwarded along for potential use in rate limiting per IP address", 'Setting up Nginx as a proxy for load balancing requests to multiple node instances', 'Plan to deploy the application into production and cover Docker best practices']}, {'end': 15528.227, 'segs': [{'end': 14375.225, 'src': 'embed', 'start': 14347.28, 'weight': 2, 'content': [{'end': 14352.022, 'text': "So definitely stick to watching this video, even if you can't follow along with all of the steps.", 'start': 14347.28, 'duration': 4.742}, {'end': 14356.6, 'text': "All right, so let's now get our Ubuntu server up and running.", 'start': 14353.919, 'duration': 2.681}, {'end': 14360.641, 'text': "And like I said, we're going to deploy this on DigitalOcean as a droplet.", 'start': 14356.92, 'duration': 3.721}, {'end': 14369.023, 'text': 'However, if you want to use a different platform like AWS or Azure, or even run it as a virtual machine on VirtualBox on your local machine,', 'start': 14360.981, 'duration': 8.042}, {'end': 14369.883, 'text': 'feel free to do that.', 
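The Nginx setup summarized in the chapters above (a single published port, forwarding the original client IP, routing requests that start with /api to the backend, and proxy_pass load balancing across the node containers on port 3000) can be sketched as a config fragment. The service name `node-app` and the `/api` prefix follow the course's description, but treat the exact directives as an illustrative assumption rather than the course's verbatim file:

```nginx
# Minimal sketch of the proxy described above; adjust names to your compose file.
server {
    listen 80;

    location /api {
        # Forward the original sender's IP so Express can use it,
        # e.g. for per-IP rate limiting.
        proxy_set_header X-Real-IP $remote_addr;
        # Append the list of proxy server IPs in the headers for each request.
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;

        # Docker's embedded DNS resolves the service name and load-balances
        # across all node-app containers listening on port 3000.
        proxy_pass http://node-app:3000;
    }
}
```

Because requests now arrive through a proxy, the course also notes the Express app must be told to trust those headers (e.g. Express's `trust proxy` setting) before reading the forwarded IP.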
'start': 14369.023, 'duration': 0.86}, {'end': 14375.225, 'text': 'As long as you have an Ubuntu server someplace, you should be able to follow along with everything that I do.', 'start': 14370.504, 'duration': 4.721}], 'summary': 'Follow along with deploying ubuntu server on digitalocean or other platforms.', 'duration': 27.945, 'max_score': 14347.28, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ14347280.jpg'}, {'end': 14435.794, 'src': 'embed', 'start': 14389.114, 'weight': 0, 'content': [{'end': 14390.275, 'text': "We'll select the basic plan.", 'start': 14389.114, 'duration': 1.161}, {'end': 14393.277, 'text': "And then we want to select regular Intel with SSD because it's cheaper.", 'start': 14390.595, 'duration': 2.682}, {'end': 14395.539, 'text': 'And then we select the cheapest option that we can find.', 'start': 14393.297, 'duration': 2.242}, {'end': 14399.741, 'text': "And then by default, because I'm on the East Coast, it's going to default to the New York data center.", 'start': 14396.419, 'duration': 3.322}, {'end': 14402.443, 'text': 'Just pick whichever data center is closest to you geographically.', 'start': 14399.781, 'duration': 2.662}, {'end': 14404.965, 'text': 'And then we want to select our password.', 'start': 14402.463, 'duration': 2.502}, {'end': 14406.026, 'text': 'So put in your password here.', 'start': 14404.985, 'duration': 1.041}, {'end': 14414.322, 'text': 'then we just hit create droplet.', 'start': 14412.861, 'duration': 1.461}, {'end': 14420.946, 'text': "And so we'll let this run for a couple minutes.", 'start': 14417.804, 'duration': 3.142}, {'end': 14424.668, 'text': 'It does take some time for digital ocean to spin up a new VM.', 'start': 14421.486, 'duration': 3.182}, {'end': 14428.25, 'text': "And then, once that gets started, we'll then install Docker.", 'start': 14425.108, 'duration': 3.142}, {'end': 14432.532, 'text': "we do need to install Docker on our 
production environment, because that's how our application runs obviously.", 'start': 14428.25, 'duration': 4.282}, {'end': 14435.794, 'text': "So I'll see you guys in a couple of minutes.", 'start': 14432.552, 'duration': 3.242}], 'summary': 'Select basic plan with regular intel and ssd, cheapest option, east coast data center, set password, create droplet, install docker for production environment.', 'duration': 46.68, 'max_score': 14389.114, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ14389114.jpg'}, {'end': 14502.186, 'src': 'embed', 'start': 14471.047, 'weight': 4, 'content': [{'end': 14473.99, 'text': 'And the first thing that we have to do is get Docker installed.', 'start': 14471.047, 'duration': 2.943}, {'end': 14476.332, 'text': "And so there's a couple of different ways to do this.", 'start': 14474.63, 'duration': 1.702}, {'end': 14482.437, 'text': "So if you pull up the documentation for installing Docker engine on Ubuntu, they've got some very easy steps to go through.", 'start': 14476.372, 'duration': 6.065}, {'end': 14483.538, 'text': "It's just a couple of commands.", 'start': 14482.457, 'duration': 1.081}, {'end': 14485.72, 'text': "However, there's an even easier method.", 'start': 14483.978, 'duration': 1.742}, {'end': 14493.364, 'text': "So if you go to get.docker.com right here, there's actually a script hosted on this website,", 'start': 14485.94, 'duration': 7.424}, {'end': 14495.864, 'text': 'that actually installs Docker for you automatically.', 'start': 14493.364, 'duration': 2.5}, {'end': 14497.325, 'text': 'So you just have to run one command.', 'start': 14495.904, 'duration': 1.421}, {'end': 14502.186, 'text': 'So here under this section right here, you just copy this curl command.', 'start': 14497.865, 'duration': 4.321}], 'summary': 'Install docker using an easy script from get.docker.com, requiring just one command.', 'duration': 31.139, 'max_score':
14471.047, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ14471047.jpg'}, {'end': 14614.754, 'src': 'embed', 'start': 14552.828, 'weight': 5, 'content': [{'end': 14553.708, 'text': "Let's pull up the directions.", 'start': 14552.828, 'duration': 0.88}, {'end': 14556.549, 'text': "So we'll search for Docker compose install", 'start': 14554.288, 'duration': 2.261}, {'end': 14561.641, 'text': 'Ubuntu. And then here we can just select Linux.', 'start': 14558.339, 'duration': 3.302}, {'end': 14574.527, 'text': 'And we just copy this command, paste it in there, and then copy this command.', 'start': 14561.661, 'duration': 12.866}, {'end': 14586.373, 'text': 'So now if we do docker-compose -v, we should see it return a version.', 'start': 14578.949, 'duration': 7.424}, {'end': 14591.825, 'text': "Alright, and so now we've got Docker installed on our Ubuntu machine.", 'start': 14587.984, 'duration': 3.841}, {'end': 14593.266, 'text': 'In the next video,', 'start': 14592.246, 'duration': 1.02}, {'end': 14600.729, 'text': "let's set up git for our application so that we can store our application in a git repository and then we can pull it into our production server.", 'start': 14593.266, 'duration': 7.463}, {'end': 14607.771, 'text': "Alright, so let's get started on creating a git repo for our application.", 'start': 14602.809, 'duration': 4.962}, {'end': 14613.453, 'text': "So, logged into GitHub, we're gonna hit this plus sign, click New repository, and we'll give it whatever name we want.", 'start': 14608.031, 'duration': 5.422}, {'end': 14614.754, 'text': "I'm just gonna call this node Docker.", 'start': 14613.473, 'duration': 1.281}], 'summary': 'Installed docker on ubuntu and created a git repository for the application.', 'duration': 61.926, 'max_score': 14552.828, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ14552828.jpg'}, {'end':
14770.422, 'src': 'embed', 'start': 14745.094, 'weight': 7, 'content': [{'end': 14750.176, 'text': "And so you'll see here that there's a couple of environment variables that we need in our application for it to work, right.", 'start': 14745.094, 'duration': 5.082}, {'end': 14755.137, 'text': 'So first of all, we need the Mongo user and the Mongo password in our node app.', 'start': 14750.216, 'duration': 4.921}, {'end': 14757.118, 'text': 'And then we also need our session secret.', 'start': 14755.157, 'duration': 1.961}, {'end': 14761.199, 'text': 'And then under our Mongo server, we need a couple of things.', 'start': 14757.778, 'duration': 3.421}, {'end': 14763.68, 'text': 'So we need the root user and then the root password.', 'start': 14761.639, 'duration': 2.041}, {'end': 14770.422, 'text': 'So here we are hard-coding it into our Docker compose dev file, because this is our development environment.', 'start': 14763.72, 'duration': 6.702}], 'summary': 'Application requires environment variables for mongo and session secret in development environment.', 'duration': 25.328, 'max_score': 14745.094, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ14745094.jpg'}, {'end': 15050.886, 'src': 'embed', 'start': 15024.786, 'weight': 8, 'content': [{'end': 15030.01, 'text': 'So I want to show you my method of getting our environment variables set on our machine,', 'start': 15024.786, 'duration': 5.224}, {'end': 15035.113, 'text': 'that will actually persist through reboots: if the server goes down and comes back up, it is automatically going to load all of our environment variables.', 'start': 15030.01, 'duration': 5.103}, {'end': 15041.758, 'text': 'And so the first thing that I want to do is I want to create an environment file that is going to store all of our environment variables.', 'start': 15036.074, 'duration': 5.684}, {'end': 15047.343, 'text': "So here I'm under my root folder; I'm just going to make that file and store it here.", 'start': 15042.478, 'duration': 4.865}, {'end': 15050.886, 'text': "I'm not saying you should be doing anything under the root user.", 'start': 15047.743, 'duration': 3.143}], 'summary': 'Creating an environment file to persist variables through reboots.', 'duration': 26.1, 'max_score': 15024.786, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ15024786.jpg'}, {'end': 15355.781, 'src': 'embed', 'start': 15317.932, 'weight': 9, 'content': [{'end': 15318.893, 'text': "I'm going to cd into app.", 'start': 15317.932, 'duration': 0.961}, {'end': 15329.63, 'text': "And then we're going to clone our git repo.", 'start': 15327.588, 'duration': 2.042}, {'end': 15332.292, 'text': 'So copy this.', 'start': 15329.67, 'duration': 2.622}, {'end': 15338.837, 'text': "I'm gonna say git clone, and then clone it into our current directory.", 'start': 15332.312, 'duration': 6.525}, {'end': 15344.121, 'text': 'And so now if I do an ls, we should see all of our application files.', 'start': 15340.698, 'duration': 3.423}, {'end': 15355.781, 'text': "And so now, just like we did on our local machine, let's run a docker-compose up and let's see if this works on our production server.", 'start': 15347.594, 'duration': 8.187}], 'summary': 'Cloning git repo into app directory and running docker compose on production server.', 'duration': 37.849, 'max_score': 15317.932, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ15317932.jpg'}], 'start': 14347.28, 'title': 'Setting up ubuntu server, docker, and docker compose', 'summary': 'Details setting up an ubuntu server on digitalocean, installing docker and git, and configuring docker compose for production.
it includes selecting server type, plan, data center, creating a git repository, and deploying the application, with adaptability for other platforms like aws or azure.', 'chapters': [{'end': 14404.965, 'start': 14347.28, 'title': 'Setting up ubuntu server on digitalocean', 'summary': 'Details the process of setting up an ubuntu server on digitalocean, highlighting the steps involved in selecting the server type, plan, data center, and setting up a password, emphasizing that the instructions can be adapted for other platforms such as aws or azure.', 'duration': 57.685, 'highlights': ['Selecting the Ubuntu 20.04 image and choosing the basic plan with regular Intel and SSD can provide cost-effective deployment.', 'Geographically selecting the closest data center and setting up a password are crucial steps in the server setup process.', 'The instructions can be adapted for other platforms like AWS or Azure, providing flexibility for users to deploy the server as per their preferences.']}, {'end': 14723.057, 'start': 14404.985, 'title': 'Setting up docker and git on ubuntu', 'summary': 'Covers setting up docker on ubuntu by creating a droplet on digital ocean, installing docker, verifying the installation, installing docker compose, and setting up a git repository on github for an application.', 'duration': 318.072, 'highlights': ['A droplet was created on Digital Ocean to host the production environment. Creating a droplet on Digital Ocean to host the production environment, which takes a couple of minutes for the new VM to spin up.', 'Docker was successfully installed on the Ubuntu machine using the method from get.docker.com, which simplifies the process to just one command. Successfully installing Docker on the Ubuntu machine using the method from get.docker.com, which simplifies the process to just one command, making it easier and quicker.', 'Docker Compose was installed on the Ubuntu machine to manage multi-container Docker applications. 
Installing Docker Compose on the Ubuntu machine to manage multi-container Docker applications, which involves running specific commands for installation and verification.', 'Setting up a Git repository on GitHub for the application, including creating a .gitignore file and initializing the repository. Setting up a Git repository on GitHub for the application, including creating a .gitignore file to exclude node_modules, initializing the repository, adding and committing files, and pushing the changes to the remote repository.']}, {'end': 15528.227, 'start': 14725.098, 'title': 'Configuring docker compose for production', 'summary': 'Covers configuring docker compose for production, including setting up environment variables, creating an environment file, and deploying the application on a production server using docker.', 'duration': 803.129, 'highlights': ['Configuring environment variables for production Explains the process of setting up environment variables for production, including the need for variables like Mongo user, Mongo password, and session secret, and the method of fetching these values from the host Ubuntu machine.', 'Creating an environment file to store variables Describes the creation of a separate environment file to store production environment variables, ensuring they do not get pushed into GitHub and persisting across reboots.', 'Deploying the application on a production server using Docker Demonstrates the steps to clone the git repo, run Docker Compose for production, fix configuration issues, and deploy the application on a production server, ensuring it works as expected.']}], 'duration': 1180.947, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ14347280.jpg', 'highlights': ['Selecting the Ubuntu 20.04 image and choosing the basic plan with regular Intel and SSD can provide cost-effective deployment.', 'Geographically selecting the closest data center and setting up a password are 
crucial steps in the server setup process.', 'The instructions can be adapted for other platforms like AWS or Azure, providing flexibility for users to deploy the server as per their preferences.', 'A droplet was created on Digital Ocean to host the production environment, which takes a couple of minutes for the new VM to spin up.', 'Docker was successfully installed on the Ubuntu machine using the method from get.docker.com, which simplifies the process to just one command.', 'Docker Compose was installed on the Ubuntu machine to manage multi-container Docker applications.', 'Setting up a Git repository on GitHub for the application, including creating a .gitignore file to exclude node_modules, initializing the repository, adding and committing files, and pushing the changes to the remote repository.', 'Configuring environment variables for production Explains the process of setting up environment variables for production, including the need for variables like Mongo user, Mongo password, and session secret, and the method of fetching these values from the host Ubuntu machine.', 'Creating an environment file to store variables Describes the creation of a separate environment file to store production environment variables, ensuring they do not get pushed into GitHub and persisting across reboots.', 'Deploying the application on a production server using Docker Demonstrates the steps to clone the git repo, run Docker Compose for production, fix configuration issues, and deploy the application on a production server, ensuring it works as expected.']}, {'end': 17331.578, 'segs': [{'end': 16164.333, 'src': 'embed', 'start': 16136.978, 'weight': 2, 'content': [{'end': 16142.585, 'text': 'So if you do this on a production server, you could end up starving your actual production traffic,', 'start': 16136.978, 'duration': 5.607}, {'end': 16146.83, 'text': 'because all of the compute power and all of your memory is going towards building an image.', 'start': 16142.585, 
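The env-file approach highlighted above (a separate file on the host Ubuntu machine, kept out of GitHub, whose variables persist across reboots) can be sketched in shell. The file path, variable names, and values below are placeholder assumptions, not the course's exact ones; the idea is to source the file from `.profile` so every login re-exports the variables:

```shell
# Sketch: persist production env vars across reboots (placeholder values).

# 1) Store the secrets in a file that is never committed to git.
cat > "$HOME/.env" <<'EOF'
MONGO_USER=devuser
MONGO_PASSWORD=devpassword
SESSION_SECRET=supersecret
EOF

# 2) Source that file from .profile (idempotent append), so the variables
#    are exported automatically after every reboot/login.
LOADER='set -a; . "$HOME/.env"; set +a'
grep -qxF "$LOADER" "$HOME/.profile" 2>/dev/null || echo "$LOADER" >> "$HOME/.profile"

# 3) Simulate a fresh login and confirm a variable is present.
. "$HOME/.profile"
echo "MONGO_USER=$MONGO_USER"
```

`set -a` turns on auto-export while the file is sourced, so plain `NAME=value` lines become exported environment variables that Docker Compose can then interpolate.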
'duration': 4.245}, {'end': 16152.283, 'text': 'And your production server should only be meant for one thing, and that is just to handle production traffic.', 'start': 16147.899, 'duration': 4.384}, {'end': 16153.784, 'text': 'It should never be doing anything else.', 'start': 16152.303, 'duration': 1.481}, {'end': 16164.333, 'text': "So what I ultimately want to do is move away from this development workflow and move towards a workflow that allows us to build an image on a machine that's not a production server.", 'start': 16154.385, 'duration': 9.948}], 'summary': 'Avoid using production server for image building to prevent production traffic disruption.', 'duration': 27.355, 'max_score': 16136.978, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ16136978.jpg'}, {'end': 16262.598, 'src': 'embed', 'start': 16237.599, 'weight': 0, 'content': [{'end': 16243.123, 'text': 'And you can see that by building it on the dev server, we no longer have to build it on our production server.', 'start': 16237.599, 'duration': 5.524}, {'end': 16246.806, 'text': "So next video, we're going to actually go ahead and implement this.", 'start': 16243.764, 'duration': 3.042}, {'end': 16249.808, 'text': "And I'm going to show you guys how much better of a workflow this actually is.", 'start': 16246.846, 'duration': 2.962}, {'end': 16257.454, 'text': 'Alright, so to implement our new workflow, the first thing that we have to do is create an account on Docker Hub.', 'start': 16249.828, 'duration': 7.626}, {'end': 16262.598, 'text': "So if you haven't already done that, go ahead and sign up to Docker Hub, and then sign in.", 'start': 16258.055, 'duration': 4.543}], 'summary': 'Implementing new workflow eliminates production server builds. 
next video will demonstrate improved workflow and docker hub account creation.', 'duration': 24.999, 'max_score': 16237.599, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ16237599.jpg'}, {'end': 16511.842, 'src': 'embed', 'start': 16478.615, 'weight': 3, 'content': [{'end': 16482.096, 'text': 'However, before our production server can actually pull this image,', 'start': 16478.615, 'duration': 3.481}, {'end': 16488.698, 'text': 'we need to tell Docker compose that we want to actually use this image for application moving forward.', 'start': 16482.096, 'duration': 6.602}, {'end': 16495.22, 'text': 'So how do we do that, right? Because we still need to be able to build the build our image ourselves with Docker compose.', 'start': 16489.277, 'duration': 5.943}, {'end': 16499.001, 'text': 'But we also need to be able to tell it that you know when we want to actually run the application.', 'start': 16495.24, 'duration': 3.761}, {'end': 16501.542, 'text': 'we want to use this specific image in this repository.', 'start': 16499.001, 'duration': 2.541}, {'end': 16504.803, 'text': "So what we have to do is let's go to Docker compose out YAML.", 'start': 16502.081, 'duration': 2.722}, {'end': 16511.842, 'text': 'And under node dash app, what we can do is we can pass under here an image property.', 'start': 16506.678, 'duration': 5.164}], 'summary': 'In order to use a specific image in the docker compose file, the image property should be added under the node-app.', 'duration': 33.227, 'max_score': 16478.615, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ16478615.jpg'}, {'end': 17211.564, 'src': 'embed', 'start': 17186.296, 'weight': 1, 'content': [{'end': 17193.6, 'text': "wouldn't it be cool if there was a nice way to have the production server automatically detect that we pushed a new image and pull that new image?", 'start': 17186.296, 'duration': 7.304}, 
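The `image` property change summarized above can be sketched as a compose fragment. The Docker Hub namespace `myuser` is a placeholder assumption; the point is that one service entry can both build the image locally and name the repository it is pushed to and pulled from:

```yaml
# Sketch of the docker-compose.yml change described above ("myuser" is a
# placeholder Docker Hub username).
services:
  node-app:
    build: .
    # With an image name set, `docker compose build` tags the result as
    # myuser/node-app, `docker compose push node-app` uploads it to Docker
    # Hub, and the production server can `docker compose pull node-app`
    # followed by `up -d --no-deps node-app` instead of rebuilding there.
    image: myuser/node-app
```

This is what lets the development machine do the expensive build while the production server only pulls and runs.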
{'end': 17200.123, 'text': 'Well, there is a tool that we can use, called watchtower, that will automatically check Docker Hub periodically for a new image.', 'start': 17194, 'duration': 6.123}, {'end': 17207.826, 'text': "And whenever an image gets pushed, it'll automatically pull it to your production server and then restart a your container with the brand new image.", 'start': 17200.503, 'duration': 7.323}, {'end': 17211.564, 'text': 'Now, some people, you know, like this feature.', 'start': 17208.743, 'duration': 2.821}], 'summary': 'Watchtower tool automatically pulls new docker images to production server, restarting containers.', 'duration': 25.268, 'max_score': 17186.296, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ17186296.jpg'}], 'start': 15529.228, 'title': 'Optimizing image building workflow', 'summary': 'Emphasizes not building images on production servers, and presents a new workflow involving building and pushing images to docker hub from a development server, enhancing efficiency and preventing resource starvation. it also demonstrates creating a docker hub account, pushing images to the repository, updating docker compose yaml file, and automating image updates with watchtower.', 'chapters': [{'end': 15639.22, 'start': 15529.228, 'title': 'Deploying code changes to production', 'summary': 'Discusses the process of deploying code changes to the production server, including pushing code changes to github and pulling them into the production server, ensuring successful deployment and handling production traffic.', 'duration': 109.992, 'highlights': ['The process of deploying code changes to the production server is crucial for handling production traffic and ensuring successful deployment. 
crucial for handling production traffic, ensuring successful deployment', "The first step involves making a code change and pushing it to the GitHub repository using commands like 'git add', 'git commit', and 'git push'. making code change, pushing to GitHub repository, using commands like 'git add', 'git commit', and 'git push'", "After pushing the changes to GitHub, the next step is to pull the updated code into the production server using the command 'git pull'. pulling updated code into production server using command 'git pull'"]}, {'end': 15897.439, 'start': 15639.3, 'title': 'Managing docker compose for production', 'summary': 'Discusses managing docker compose for production, covering the process of updating code, optimizing the rebuild process, and addressing dependencies, highlighting the need for specific rebuilds and the impact of dependencies.', 'duration': 258.139, 'highlights': ['The need for specific rebuilds - When updating code, only the node app container needs to be rebuilt, avoiding unnecessary checks on other containers and reducing potential outage.', "Optimizing the rebuild process - Using the 'no-deps' flag in Docker Compose prevents the rebuilding of dependencies, such as the Mongo container, when specific services need updating.", 'Impact of dependencies - Docker Compose checks and potentially rebuilds dependent containers, like Mongo, when updating specific services due to their interdependencies, leading to unnecessary checks and rebuilds.', 'Recreating containers for code changes - Code updates trigger the rebuilding of the node app container, while other unchanged containers like Mongo, Redis, and Nginx remain unaffected, ensuring efficient updates and minimal downtime.']}, {'end': 16099.845, 'start': 15897.459, 'title': 'Docker compose workflow', 'summary': 'Discusses docker compose workflow, including rebuilding a container, using specific flags for rebuilding, and pushing changes from development to production.', 'duration': 
202.386, 'highlights': ['Rebuilding node container triggers rebuilding of dependent containers, as seen with the Mongo container being rebuilt due to its dependency, impacting the development to production workflow.', 'Using specific flags like --force-recreate and --no-deps are essential for triggering container rebuilds, avoiding unnecessary container recreations and managing dependencies effectively.', "The process of pushing changes from development to production involves pushing code to GitHub, pulling the code in the production server, and then triggering a rebuild of the node image and container using 'docker compose up --build'."]}, {'end': 16626.63, 'start': 16100.165, 'title': 'Optimizing image building workflow', 'summary': 'Emphasizes the importance of not building images on production servers, outlining the resource costs and potential impact on production traffic. it introduces a new workflow that involves building and pushing images to docker hub from a development server, and then pulling and deploying the images on the production server, enhancing efficiency and preventing resource starvation. it also demonstrates the process of creating a docker hub account, pushing images to the repository, renaming images, updating docker compose yaml file, and deploying the updated application.', 'duration': 526.465, 'highlights': ['The importance of not building images on production servers is emphasized due to the resource costs and potential impact on production traffic. Building images on production servers consumes resources, such as CPU cycles and memory, potentially leading to resource starvation for production traffic.', 'Introduction of a new workflow involving building and pushing images to Docker Hub from a development server, and then pulling and deploying the images on the production server, aiming to enhance efficiency and prevent resource starvation. 
The new workflow emphasizes building and pushing images from a development server to Docker Hub, followed by pulling and deploying these images on the production server, aiming to improve efficiency and prevent resource starvation.', 'Demonstration of the process of creating a Docker Hub account, pushing images to the repository, renaming images, updating Docker Compose YAML file, and deploying the updated application. The process involves creating a Docker Hub account, pushing images to the repository, renaming images, updating the Docker Compose YAML file to specify the image repository, and deploying the updated application.']}, {'end': 17331.578, 'start': 16626.63, 'title': 'Docker image workflow', 'summary': 'Discusses the process of building, pushing, and pulling docker images, including using docker compose for image building, pushing images to docker hub, and automating image updates with watchtower.', 'duration': 704.948, 'highlights': ["The process of using Docker Compose to build an image for production involves specifying the production YAML file and using the 'build' command to build all services, resulting in the creation of the image for the specified service. Using Docker Compose to build an image for production involves specifying the production YAML file, using the 'build' command to build all services, and creating the image for the specified service.", "After building the image, it can be pushed to Docker Hub using the 'push' command, with the option to push all images or specify a single service's image. After building the image, it can be pushed to Docker Hub using the 'push' command, with the option to push all images or specify a single service's image.", "The process of pulling a new image onto the production server involves using the 'pull' command, with the ability to specify the service for which the image should be pulled. 
The process of pulling a new image onto the production server involves using the 'pull' command and specifying the service for which the image should be pulled.", 'The option to automate the detection and pulling of new images to the production server using Watchtower is discussed, providing a tool to periodically check Docker Hub for new images and automatically update containers with the new images. The option to automate the detection and pulling of new images to the production server using Watchtower is discussed, providing a tool to periodically check Docker Hub for new images and automatically update containers with the new images.']}], 'duration': 1802.35, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ15529228.jpg', 'highlights': ['New workflow: build and push images to Docker Hub from dev server, enhance efficiency, prevent resource starvation', 'Demonstration: create Docker Hub account, push images to repository, update Docker Compose YAML, automate image updates with Watchtower', 'Avoid building images on production servers to prevent resource starvation and enhance efficiency', "Using Docker Compose to build image for production involves specifying production YAML file and using 'build' command", 'Automate detection and pulling of new images to production server using Watchtower']}, {'end': 19317.676, 'segs': [{'end': 17682.863, 'src': 'embed', 'start': 17657.067, 'weight': 1, 'content': [{'end': 17661.669, 'text': "So then it says it's determined that there is a new image, so it's going to do a poll.", 'start': 17657.067, 'duration': 4.602}, {'end': 17662.809, 'text': "So it's pulling the new image.", 'start': 17661.709, 'duration': 1.1}, {'end': 17665.831, 'text': 'Uh, then it stopped our container.', 'start': 17664.07, 'duration': 1.761}, {'end': 17667.332, 'text': "It's deleting our container.", 'start': 17666.251, 'duration': 1.081}, {'end': 17668.772, 'text': "It's creating a brand new 
container.", 'start': 17667.392, 'duration': 1.38}, {'end': 17670.673, 'text': "It's then starting the new container.", 'start': 17669.172, 'duration': 1.501}, {'end': 17676.476, 'text': "Right And then after 50 seconds, it's going to do all of the same stuff over again.", 'start': 17672.534, 'duration': 3.942}, {'end': 17680.402, 'text': "All right, so let's test this out.", 'start': 17679.161, 'duration': 1.241}, {'end': 17682.863, 'text': "So remember, we didn't do anything.", 'start': 17681.082, 'duration': 1.781}], 'summary': 'Automated process: pulls new image, updates container, repeats every 50 seconds.', 'duration': 25.796, 'max_score': 17657.067, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/9zUHg7xjIqQ/pics/9zUHg7xjIqQ17657067.jpg'}, {'end': 17816.145, 'src': 'embed', 'start': 17785.051, 'weight': 2, 'content': [{'end': 17788.672, 'text': 'Now, one of the things I want to talk to you guys about is our current workflow.', 'start': 17785.051, 'duration': 3.621}, {'end': 17794.654, 'text': "And that is, you know whether you're using watchtower to do the pulling of the image and restarting in the container,", 'start': 17789.213, 'duration': 5.441}, {'end': 17799.776, 'text': "or if you're mainly do it manually, doing it yourself by doing a Docker, compose, pull and then an up.", 'start': 17794.654, 'duration': 5.122}, {'end': 17801.957, 'text': 'at the end of the day we have to recreate the container.', 'start': 17799.776, 'duration': 2.181}, {'end': 17809.32, 'text': 'So we have to tear down our current container, we have to build a brand new container with a brand new image, and then start that container.', 'start': 17802.137, 'duration': 7.183}, {'end': 17816.145, 'text': 'and during that window of tearing down and building up, we are going to experience, uh, a network outage.', 'start': 17810.12, 'duration': 6.025}], 'summary': 'Discussing workflow for image pulling and container recreation, with network outage during the 
The bigger limitation is scale. To distribute, say, five or six Express containers across multiple servers, so that if one goes down the others can pick up the slack, Docker Compose is not enough; it can only deploy onto a single server. That is where Docker Swarm comes in. Docker Swarm is an orchestrator, with real logic behind it, whereas Docker Compose is ultimately just a bunch of docker run commands listed out in YAML format.
Swarm gives us more flexibility in production and tools that Docker Compose does not provide. In particular, it gives us a multi-node environment: applications can be deployed across multiple servers rather than everything running on one. Each server within a Docker swarm is referred to as a node.
That is all we really need for Docker Swarm. Integrating it into an existing Docker Compose workflow is easy: add a couple of properties to the Compose file and we are good to go. Since the Compose file changed, the changes have to reach the production server the usual way: git add, git commit, then git push. The remainder of the section covers setting up Watchtower with a 50-second poll interval, automated image updates, and the limitations of Docker Compose.
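The "couple of properties" live under a deploy key in the Compose file. A minimal sketch, with illustrative values rather than the course's exact numbers:

```yaml
services:
  node-app:
    image: youruser/node-app    # hypothetical Docker Hub repository
    deploy:
      replicas: 8               # number of tasks (containers) to run
      restart_policy:
        condition: any          # restart a task whenever it exits
      update_config:
        parallelism: 2          # replicas updated per batch
        delay: 15s              # wait between batches
```

Historically the deploy key was ignored by plain Compose; it takes effect when the file is deployed to a swarm.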
It also discusses Docker Swarm's benefits, setting it up in production, and deploying applications with rolling updates.

Recapping the chapter details. Watchtower setup: the poll interval is set to 50 seconds; the Docker socket is mounted as a volume following the documentation; the image repository is given in the course as container/watchtower; the docker run command must list the containers Watchtower should watch; and the environment variables watchtower_trace and watchtower_debug can be set to true or false as needed. For images kept in a private repository, docker login credentials are required on the production server so the pull can authenticate. Finally, while there are hacks to approximate rolling updates with Docker Compose, it is not a container orchestrator, and recreating containers with it still means a window of network outage.
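A Watchtower invocation matching that description might look like the following. The image is written here as containrrr/watchtower, the repository Watchtower is commonly published under; the course's container/watchtower may be a transcription slip. The container name to watch is a placeholder:

```shell
docker run -d --name watchtower \
  -e WATCHTOWER_TRACE=true \
  -e WATCHTOWER_DEBUG=true \
  -e WATCHTOWER_POLL_INTERVAL=50 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  app_node-app_1   # placeholder: name(s) of the containers to watch
```

Mounting the Docker socket is what lets Watchtower stop, delete, and recreate containers on the host.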
Docker Swarm addresses both problems: it handles rolling updates and distributes containers across multiple servers for redundancy, where Compose can only deploy onto one server. Swarm is an orchestrator with logic and brains behind it, distributing containers across servers and managing the update process. Within a swarm, manager nodes handle task distribution and control, while worker nodes execute the tasks.
Setting up Swarm in production: initialize it with docker swarm init, using the advertise-addr flag to specify the server's public-facing IP address; the initializing node defaults to the manager role, and further nodes can join as workers or managers. Docker service commands, which mirror the regular docker commands, create, list, delete, orchestrate, and update services within the swarm. And just as docker run configurations can be captured in a Compose file, Swarm can consume Compose files too, extended with Swarm-specific parameters such as replicas, restart policy, and update configuration.
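Enabling Swarm as described might look like this; the IP address is a documentation placeholder, not a real server:

```shell
# On the server that will become the first manager:
docker swarm init --advertise-addr 203.0.113.10

# Print the join command a second server would run to join as a worker:
docker swarm join-token worker
```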
Deployment itself uses docker stack deploy against the Compose file. Commands such as docker node ls, docker stack ls, docker stack services, and docker stack ps manage and monitor the nodes, stacks, services, and tasks within the swarm. Rolling updates are governed by parallelism (how many replicas update at once) and a wait time between batches, minimizing downtime during an update. The course closes by reviewing the challenges of moving from development to production with Docker and suggesting a CI/CD pipeline for Docker-based applications as the natural next step.
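Deploying and inspecting the stack with the commands named above; the stack name myapp and the file names are placeholders:

```shell
# Deploy (or update) the stack from the Compose files.
docker stack deploy -c docker-compose.yml -c docker-compose.prod.yml myapp

docker node ls                 # nodes in the swarm and their roles
docker stack ls                # stacks and how many services each runs
docker stack services myapp    # services with current/desired replica counts
docker stack ps myapp          # individual tasks and which node runs them
```

Re-running the same deploy command after pushing a new image is what triggers the rolling update.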
Finally, a course-wide recap of the highlights:

- Build custom images with a Dockerfile from a node base image: copy the source code and install dependencies.
- Order Dockerfile instructions to exploit layer caching; with no changes in the cached layers, rebuilds that initially took a long time complete in under a second.
- Open ports with port forwarding; use the -f flag with docker rm to force-delete a running container.
- Use bind mounts to sync the local source folder into the container, avoiding an image rebuild and redeploy for every code change; pair this with nodemon (set up as a dev dependency with scripts in package.json) for automatic restarts, a read-only flag for safety, and the anonymous-volume hack to protect node_modules.
- Docker Compose brings the whole development environment up, and tears it down, with a single command; keep separate Compose files for development and production.
- Add a Mongo container with a root username and password; Compose creates a network for the application and places every service in it, so containers reach each other by service name. Debug with docker commands and docker logs.
- Build the CRUD blog application with Node and Express, add sign-up and login, and back session-based authentication with Redis.
- Put Nginx in front as a load balancer across multiple node containers, and handle CORS in Express.
- Deploy to an Ubuntu 20.04 VM (on DigitalOcean, the basic plan with regular Intel and SSD is cost-effective); build and push images to Docker Hub from the dev server so production never builds, preventing resource starvation; automate image updates with Watchtower.
- Move to Docker Swarm for multi-server deployments: manager nodes handle task distribution and control, worker nodes execute the tasks, and rolling updates keep downtime to a minimum.
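The layer-caching point in the recap hinges on Dockerfile ordering: copy the dependency manifest and install before copying the rest of the source, so code edits do not invalidate the npm install layer. A minimal sketch; the base image tag, port, and start script are illustrative:

```dockerfile
FROM node:15
WORKDIR /app
# Copying only package.json first keeps this layer, and the
# npm install layer below it, cached across source-code edits.
COPY package.json .
RUN npm install
# A source change invalidates only the layers from here down.
COPY . .
EXPOSE 3000
CMD ["npm", "run", "start"]
```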