This article is part 2 of the series. Please see part one: Docker Cheatsheet: Guideline That Will Make You Comfortable With Docker. Part 1.
Docker Container
And now it is time to run your first Container from the Image you’ve just created. Check out this command:
docker run -d -p 8000:3000 --rm --name <your-container-name> <your-image-name:and-tag>
Let’s review it first. One remark: don’t forget to substitute your own values for everything in “<>”.
docker run #instructs Docker to run a Container (required)
-d #instructs Docker to launch it in detached mode
--rm #instructs Docker to remove this Container when stopped
--name #provides a name for the Container
<your-image-name:and-tag> #provides the Image for the Container (required), always last
-p instructs Docker on the port mapping. Your application listens on port 3000, but on the Host machine that port may be occupied. So with -p you specify which port of the Host machine you dedicate to your Container. The application still listens on port 3000, no change required, and Docker handles all the routing between the Host port and the Container port.
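Put together with concrete values (the image name my-app:latest and Container name my-app are my assumptions, not values from the article), the command and a quick check might look like this:

```shell
# host port 8000 -> container port 3000
docker run -d -p 8000:3000 --rm --name my-app my-app:latest

# the app listens on 3000 inside the Container,
# but from the Host it is reached through 8000:
curl http://localhost:8000
```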
There are a few modes in which you can launch your Container. In particular, we will review two:
- Detached mode, where we don’t see what is going on inside our Container and can’t interact with the terminal inside it. We can still interact with it from our CLI and send commands that it will execute internally, but it works in isolation and we don’t see inside it. For example, if your program requires manual input from the user, in detached mode you will not be able to input anything to your program.
- Interactive mode (use -it instead of -d) allows us to see what is going on inside the Container. If there are console logs or input required from the user, we will be able to see them and interact with the application live.
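As a quick way to try interactive mode, you can drop into a throwaway Container (the node:12-alpine image matches the Dockerfiles below; using sh as the command is an assumption that holds for Alpine-based images):

```shell
# -it attaches your terminal to the Container, so you can type into it;
# --rm cleans the Container up once you exit the shell
docker run -it --rm node:12-alpine sh
```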
--rm stands for “remove”, which instructs Docker to remove the Container when it is stopped. This is very common practice because usually we stop one Container to replace it with a Container that has a new version of our application, so we do not need the old one.
# you can also remove all stopped Containers using
docker container prune
Now that your Container is started, you can go ahead and check the app in your browser. Open a new tab or window and navigate to “http://localhost:8000”.
If you run your Container in the detached mode, you can check logs of running Container by executing the following command:
docker logs <container-name>
You can add the optional parameter -f to this command to follow the logs. In this case, your terminal will show logs from the Container in real time.
Now you can check the list of running Containers with the following command:
docker container ps
Now you should see your Container in the list. If you add -a at the end of the command above, you will see the list of all Containers (running and stopped). Although at the moment you probably do not have any stopped Containers.
You can stop running Container with the following command:
docker stop <container-name-or-id>
You can get the id of a running Container from the table that appears when you run the “docker container ps” command.
And last but not least, you can remove the Container with the following command:
docker container rm <name-or-id-of-one-or-multiple-containers-separated-by-space>
When you have stopped and removed your Container, you can go ahead and remove the Image that we used to create it. You can get the Image id from the table that Docker shows when you run “docker image ls”. You can run this command to remove Images you no longer need.
docker image rm <image-ids-separated-by-space>
Volumes
When you run the Container from the Image, you can optionally specify
#unnamed volume like so
docker run -d -p 8000:3000 --rm --name <your-container-name> -v <internal-path-inside-container> <your-image-name:and-tag>
#named volume like so
docker run -d -p 8000:3000 --rm --name <your-container-name> -v <name>:<internal-path-inside-container> <your-image-name:and-tag>
#bind mount like so
docker run -d -p 8000:3000 --rm --name <your-container-name> -v <full-path-to-folder-on-host-machine>:<internal-path-inside-container> <your-image-name:and-tag>
When you use bind mount, you must specify the full absolute path to the folder you want Docker to synchronize.
You can add bind mount in read-only mode:
-v <path-on-host>:<path-in-container>:ro
Sometimes you will have to mount unnamed, named, and bind volumes simultaneously to the same Container, and things can quickly become complicated. One helpful rule will let you avoid surprises: if the paths of your volumes overlap, the volume with the more specific (deeper) path overrides the volume mounted at the higher level.
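This rule is what makes a common Node.js development setup work. A sketch, assuming an image named my-app:latest whose app lives in /app:

```shell
# Bind mount the project folder for live code updates, plus an
# unnamed volume at the deeper path /app/node_modules: the deeper
# path wins, so the dependencies installed in the Image are not
# shadowed by the (possibly missing) node_modules folder on the Host.
docker run -d -p 8000:3000 --rm --name my-app \
  -v "$(pwd)":/app \
  -v /app/node_modules \
  my-app:latest
```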
You can view all volumes registered in the system:
docker volume ls
And remove one or more:
docker volume rm <volume-names-separated-by-space>
And get rid of all unused volumes:
docker volume prune
Arguments and Environment variables
We spoke about arguments and environment variables above.
You can specify arguments when you build Image:
docker build -t <tagname> --build-arg <argument-name>=<argument-value> .
Then you can use the build argument in your Dockerfile. So, for example, you can specify port like so:
docker build -t <tagname> --build-arg DEFAULT_PORT=8000 .
And then use it in Dockerfile:
#pulling node js base image
FROM node:12-alpine
# Adding build tools to make yarn install work on Apple silicon / arm64 machines
RUN apk add --no-cache python2 g++ make
#/app is going to be our working directory
WORKDIR /app
#copy all files from the folder where our Dockerfile is located to the working dir
COPY . .
#install dependencies
RUN yarn install --production
#expect argument
ARG DEFAULT_PORT
#we use the DEFAULT_PORT argument to set the environment variable PORT, we will use it later
ENV PORT $DEFAULT_PORT
EXPOSE $PORT
#launch our server
CMD ["node", "src/index.js"]
You can also specify arguments inside the Dockerfile. This way, you set up a default value that is used when the argument is not specified during the build.
#pulling node js base image
FROM node:12-alpine
# Adding build tools to make yarn install work on Apple silicon / arm64 machines
RUN apk add --no-cache python2 g++ make
#/app is going to be our working directory
WORKDIR /app
#copy all files from the folder where our Dockerfile is located to the working dir
COPY . .
#install dependencies
RUN yarn install --production
#expect argument
ARG DEFAULT_PORT=8000
#we use the DEFAULT_PORT argument to set the environment variable PORT, we will use it later
ENV PORT $DEFAULT_PORT
EXPOSE $PORT
#launch our server
CMD ["node", "src/index.js"]
When you run the Container (docker container run), you have a few options for specifying environment variables:
- in CLI command argument,
- in Dockerfile,
- from file.
CLI command will look like so:
docker run -d -p <host-port>:<container-port> --env <variable>=<value> --rm <name-of-image> #(or -e PORT=8000, can have multiple -e separated by space)
You can specify multiple env variables separating them with space:
-e PORT=8000 -e MODE=PROD ...
Dockerfile with environment variables should have ENV instructions in it, like so:
#pulling node js base image
FROM node:12-alpine
# Adding build tools to make yarn install work on Apple silicon / arm64 machines
RUN apk add --no-cache python2 g++ make
#/app is going to be our working directory
WORKDIR /app
#copy all files from the folder where our Dockerfile is located to the working dir
COPY . .
#install dependencies
RUN yarn install --production
#expect argument
ARG DEFAULT_PORT=8000
#the resulting PORT environment variable can be used in our app code
ENV PORT $DEFAULT_PORT
EXPOSE $PORT
#launch our server
CMD ["node", "src/index.js"]
There is another scenario (which I personally recommend): specify all your environment variables in a file (.env) and load them when launching your Container:
#.env
PORT=8000
#cli command
docker run -d -p <host-port>:<container-port> --env-file <path-to-env-file> --rm <name-of-image> #for example --env-file ./.env
When you pass your environment variables, you can use them inside your code. In Node.js, for example, you can do it by using
process.env.<variable-name>
#for example
app.listen(process.env.PORT);
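A slightly more defensive sketch of the same idea, assuming a fallback port of 3000 when PORT is unset (both the helper name and the fallback value are my assumptions, not from the article):

```javascript
// Resolve the listening port from the environment.
// PORT arrives via ENV in the Dockerfile, -e on the CLI, or --env-file.
function resolvePort(env) {
  const parsed = Number(env.PORT);
  // Fall back to 3000 (assumed default) when PORT is missing or invalid.
  return Number.isInteger(parsed) && parsed > 0 ? parsed : 3000;
}

// app.listen(resolvePort(process.env));
console.log(resolvePort({ PORT: '8000' })); // → 8000
```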
Container networking
You can create a network by running the following command:
docker network create <network-name>
There are not many parameters to specify apart from --driver. With this parameter, you can control the behavior of the network.
docker network create --driver <driver-name> <network-name>
The default driver is “bridge”, which allows Containers to identify each other by name and send requests to one another. Most of the time, you will use this driver. You can read about other drivers and their use-cases here.
Now you can add the --network argument to the docker run command:
docker run -d -p <host-port>:<container-port> --network <network-name> --rm <name-of-image>
And use the names of your Containers when you want to send requests from one Container to another. For example, run your Node.js app in one Container and a MongoDB database in another. If you want the Node.js app to use the MongoDB database, you can connect like so:
mongoose.connect(
  'mongodb://<other-container-name(mongodb-for-example)>:<necessary-port(27017)>/<database-name>', // connection string to the MongoDB hosted in another Container
  { useNewUrlParser: true },
  (err) => {
    if (err) {
      console.log(err);
    } else {
      app.listen(3000);
    }
  }
);
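For completeness, here is a sketch of the two docker run commands behind such a connection string; the network name my-net and the image name my-app:latest are my assumptions:

```shell
docker network create my-net

# the Container name "mongodb" becomes its hostname on the network
docker run -d --rm --name mongodb --network my-net mongo

# the app Container on the same network can reach mongodb:27017 by name
docker run -d -p 8000:3000 --rm --name my-app --network my-net my-app:latest
```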
Conclusion
To be honest, I didn’t think it would become such a long guide. So I sincerely thank you for staying here and following this guide.
At the same time, this guide covers only the bare minimum, and I encourage you to explore the world of Docker because it is fantastic.
In the following articles, I will write about Docker Compose and Kubernetes.
Thank you, and May the Force be with You!