Docker Containers: Getting Started
SOURCE: https://docs.docker.com/get-started/
Get Started, Part 1: Orientation and Setup
Estimated reading time: 3 minutes
Welcome! We are excited you want to learn how to use Docker.
In this six-part tutorial, you will:
- Get set up and oriented, on this page.
- Build and run your first app
- Turn your app into a scaling service
- Span your service across multiple machines
- Add a visitor counter that persists data
- Deploy your swarm to production
The application itself is very simple so that you are not too distracted by what the code is doing. After all, the value of Docker is in how it can build, ship, and run applications; it’s totally agnostic as to what your application actually does.
Prerequisites
While we’ll define concepts along the way, it is good for you to understand what Docker is and why you would use Docker before we begin.
We also need to assume you are familiar with a few concepts before we continue:
- IP Addresses and Ports
- Virtual Machines
- Editing configuration files
- Basic familiarity with the ideas of code dependencies and building
- Machine resource usage terms, like CPU percentages, RAM use in bytes, etc.
A brief explanation of containers
An image is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including the code, a runtime, libraries, environment variables, and config files.
A container is a runtime instance of an image – what the image becomes in memory when actually executed. It runs completely isolated from the host environment by default, only accessing host files and ports if configured to do so.
Containers run apps natively on the host machine’s kernel. They have better performance characteristics than virtual machines that only get virtual access to host resources through a hypervisor. Containers can get native access, each one running in a discrete process, taking no more memory than any other executable.
Containers vs. virtual machines
Consider this diagram comparing virtual machines to containers:
Virtual Machine diagram
Virtual machines run guest operating systems – note the OS layer in each box. This is resource intensive, and the resulting disk image and application state is an entanglement of OS settings, system-installed dependencies, OS security patches, and other easy-to-lose, hard-to-replicate ephemera.
Container diagram
Containers can share a single kernel, and the only information that needs to be in a container image is the executable and its package dependencies, which never need to be installed on the host system. These processes run like native processes, and you can manage them individually by running commands like docker ps – just like you would run ps on Linux to see active processes. Finally, because they contain all their dependencies, there is no configuration entanglement; a containerized app “runs anywhere.”
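To get a feel for that side by side, you can compare the host’s process list with the Docker daemon’s container list (purely illustrative; output depends on what is running on your machine):
ps aux       # native processes running on the host
docker ps    # containers currently managed by the Docker daemon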
Setup
Before we get started, make sure your system has the latest version of Docker installed.
Get Docker CE for CentOS: https://docs.docker.com/engine/installation/linux/docker-ce/centos/
Note: version 1.13 or higher is required.
You should be able to run docker run hello-world and see a response like this:
$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
...(snipped)...
Now would also be a good time to make sure you are using version 1.13 or higher:
$ docker --version
Docker version 17.05.0-ce-rc1, build 2878a85
If you see messages like the ones above, you’re ready to begin the journey.
Conclusion
The unit of scale being an individual, portable executable has vast implications. It means that CI/CD can push updates to one part of a distributed application, system dependencies aren’t a thing you worry about, resource density is increased, and orchestrating scaling behavior is a matter of spinning up new executables, not new VM hosts. We’ll be learning about all of those things, but first let’s learn to walk.
Get Started, Part 2: Containers
Estimated reading time: 10 minutes
Prerequisites
- Install Docker version 1.13 or higher.
- Read the orientation in Part 1.
- Give your environment a quick test run to make sure you’re all set up:
docker run hello-world
Introduction
It’s time to begin building an app the Docker way. We’ll start at the bottom of the hierarchy of such an app, which is a container, which we cover on this page. Above this level is a service, which defines how containers behave in production, covered in Part 3. Finally, at the top level is the stack, defining the interactions of all the services, covered in Part 5.
- Stack
- Services
- Container (you are here)
Your new development environment
In the past, if you were to start writing a Python app, your first order of business was to install a Python runtime onto your machine. But, that creates a situation where the environment on your machine has to be just so in order for your app to run as expected; ditto for the server that runs your app.
With Docker, you can just grab a portable Python runtime as an image, no installation necessary. Then, your build can include the base Python image right alongside your app code, ensuring that your app, its dependencies, and the runtime, all travel together.
These portable images are defined by something called a Dockerfile.
Define a container with a Dockerfile
The Dockerfile defines what goes on in the environment inside your container. Access to resources like networking interfaces and disk drives is virtualized inside this environment, which is isolated from the rest of your system, so you have to map ports to the outside world, and be specific about what files you want to “copy in” to that environment. However, after doing that, you can expect that the build of your app defined in this Dockerfile will behave exactly the same wherever it runs.
Dockerfile
Create an empty directory and put this file in it, with the name Dockerfile. Take note of the comments that explain each statement.
# Use an official Python runtime as a base image
FROM python:2.7-slim
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]
This Dockerfile refers to a couple of things we haven’t created yet, namely app.py and requirements.txt. Let’s get those in place next.
The app itself
Grab these two files and place them in the same folder as Dockerfile. This completes our app, which as you can see is quite simple. When the above Dockerfile is built into an image, app.py and requirements.txt will be present because of that Dockerfile’s ADD command, and the output from app.py will be accessible over HTTP thanks to the EXPOSE command.
requirements.txt
Flask
Redis
app.py
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis
redis = Redis(host="redis", db=0)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        visits = redis.incr('counter')
    except RedisError:
        visits = "<i>cannot connect to Redis, counter disabled</i>"

    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}"
    return html.format(name=os.getenv('NAME', "world"), hostname=socket.gethostname(), visits=visits)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
Now we see that pip install -r requirements.txt installs the Flask and Redis libraries for Python, and the app prints the environment variable NAME, as well as the output of a call to socket.gethostname(). Finally, because Redis isn’t running (as we’ve only installed the Python library, and not Redis itself), we should expect that the attempt to use it here will fail and produce the error message.
Note: Accessing the name of the host when inside a container retrieves the container ID, which is like the process ID for a running executable.
Build the app
That’s it! You don’t need Python or anything in requirements.txt on your system, nor will building or running this image install them on your system. It doesn’t seem like you’ve really set up an environment with Python and Flask, but you have.
Here’s what ls should show:
$ ls
Dockerfile app.py requirements.txt
Now run the build command. This creates a Docker image, which we’re going to tag using -t so it has a friendly name.
docker build -t friendlyhello .
Where is your built image? It’s in your machine’s local Docker image registry:
$ docker images
REPOSITORY TAG IMAGE ID
friendlyhello latest 326387cea398
Run the app
Run the app, mapping your machine’s port 4000 to the container’s EXPOSEd port 80 using -p:
docker run -p 4000:80 friendlyhello
You should see a notice that Python is serving your app at http://0.0.0.0:80. But that message is coming from inside the container, which doesn’t know you mapped port 80 of that container to 4000, making the correct URL http://localhost:4000. Go there, and you’ll see the “Hello World” text, the container ID, and the Redis error message.
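You can also fetch the same page from the command line with curl (an illustrative sketch; the hostname shown will be whatever container ID your run produced):
$ curl http://localhost:4000
<h3>Hello World!</h3><b>Hostname:</b> 1fa4ab2cf395<br/><b>Visits:</b> <i>cannot connect to Redis, counter disabled</i>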
Note: This port remapping of 4000:80 is to demonstrate the difference between what you EXPOSE within the Dockerfile, and what you publish using docker run -p. In later steps, we’ll just map port 80 on the host to port 80 in the container and use http://localhost.
Hit CTRL+C in your terminal to quit.
Now let’s run the app in the background, in detached mode:
docker run -d -p 4000:80 friendlyhello
You get the long container ID for your app and then are kicked back to your terminal. Your container is running in the background. You can also see the abbreviated container ID with docker ps (and both work interchangeably when running commands):
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
1fa4ab2cf395 friendlyhello "python app.py" 28 seconds ago
You’ll see that CONTAINER ID matches what’s on http://localhost:4000.
Now use docker stop to end the process, using the CONTAINER ID, like so:
docker stop 1fa4ab2cf395
Share your image
To demonstrate the portability of what we just created, let’s upload our build and run it somewhere else. After all, you’ll need to learn how to push to registries to make deployment of containers actually happen.
A registry is a collection of repositories, and a repository is a collection of images – sort of like a GitHub repository, except the code is already built. An account on a registry can create many repositories. The docker CLI is preconfigured to use Docker’s public registry by default.
Note: We’ll be using Docker’s public registry here just because it’s free and pre-configured, but there are many public ones to choose from, and you can even set up your own private registry using Docker Trusted Registry.
If you don’t have a Docker account, sign up for one at cloud.docker.com. Make note of your username.
Log in on your local machine:
docker login
Now, publish your image. The notation for associating a local image with a repository on a registry is username/repository:tag. The :tag is optional, but recommended; it’s the mechanism that registries use to give Docker images a version. So, putting all that together, enter your username, and repo and tag names, so your existing image will upload to your desired destination:
docker tag friendlyhello username/repository:tag
Upload your tagged image:
docker push username/repository:tag
Once complete, the results of this upload are publicly available. From now on, you can use docker run and run your app on any machine with this command:
docker run -p 4000:80 username/repository:tag
Note: If you don’t specify the :tag portion of these commands, the tag of :latest will be assumed, both when you build and when you run images.
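As a concrete example (the Docker ID johndoe and the repository name get-started are hypothetical placeholders; substitute your own), the whole publish-and-run sequence would look like this:
docker tag friendlyhello johndoe/get-started:part2
docker push johndoe/get-started:part2
docker run -p 4000:80 johndoe/get-started:part2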
No matter where docker run executes, it pulls your image, along with Python and all the dependencies from requirements.txt, and runs your code. It all travels together in a neat little package, and the host machine doesn’t have to install anything but Docker to run it.
Conclusion of part two
That’s all for this page. In the next section, we will learn how to scale our application by running this container in a service.
Recap and cheat sheet (optional)
Here is a list of the basic commands from this page, and some related ones if you’d like to explore a bit before moving on.
docker build -t friendlyname . # Create image using this directory's Dockerfile
docker run -p 4000:80 friendlyname # Run "friendlyname" mapping port 4000 to 80
docker run -d -p 4000:80 friendlyname # Same thing, but in detached mode
docker ps # See a list of all running containers
docker stop <hash> # Gracefully stop the specified container
docker ps -a # See a list of all containers, even the ones not running
docker kill <hash> # Force shutdown of the specified container
docker rm <hash> # Remove the specified container from this machine
docker rm $(docker ps -a -q) # Remove all containers from this machine
docker images -a # Show all images on this machine
docker rmi <imagename> # Remove the specified image from this machine
docker rmi $(docker images -q) # Remove all images from this machine
docker login # Log in this CLI session using your Docker credentials
docker tag <image> username/repository:tag # Tag <image> for upload to registry
docker push username/repository:tag # Upload tagged image to registry
docker run username/repository:tag # Run image from a registry
Get Started, Part 3: Services
Estimated reading time: 5 minutes
Prerequisites
- Install Docker version 1.13 or higher.
- Read the orientation in Part 1.
- Learn how to create containers in Part 2.
- Make sure you have pushed the container you created to a registry, as instructed; we’ll be using it here.
- Ensure your image is working by running this and visiting http://localhost/ (slotting in your info for username, repo, and tag): docker run -p 80:80 username/repo:tag
Introduction
In part 3, we scale our application and enable load-balancing. To do this, we must go one level up in the hierarchy of a distributed application: the service.
- Stack
- Services (you are here)
- Container (covered in part 2)
Understanding services
In a distributed application, different pieces of the app are called “services.” For example, if you imagine a video sharing site, there will probably be a service for storing application data in a database, a service for video transcoding in the background after a user uploads something, a service for the front-end, and so on.
A service really just means, “containers in production.” A service only runs one image, but it codifies the way that image runs – what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on. Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.
Luckily it’s very easy to define, run, and scale services with the Docker platform – just write a docker-compose.yml file.
Your first docker-compose.yml file
A docker-compose.yml file is a YAML file that defines how Docker containers should behave in production.
docker-compose.yml
Save this file as docker-compose.yml wherever you want. Be sure you have pushed the image you created in Part 2 to a registry, and use that info to replace username/repo:tag:
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:
This docker-compose.yml file tells Docker to do the following:
- Run five instances of the image we uploaded in part 2 as a service called web, limiting each one to use, at most, 10% of the CPU (across all cores), and 50MB of RAM.
- Immediately restart containers if one fails.
- Map port 80 on the host to web’s port 80.
- Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves will publish to web’s port 80 at an ephemeral port.)
- Define the webnet network with the default settings (which is a load-balanced overlay network).
Run your new load-balanced app
Before we can use the docker stack deploy command we’ll first run:
docker swarm init
Note: We’ll get into the meaning of that command in part 4. If you don’t run docker swarm init you’ll get an error that “this node is not a swarm manager.”
Now let’s run it. You have to give your app a name – here it is set to getstartedlab:
docker stack deploy -c docker-compose.yml getstartedlab
See a list of the five containers you just launched:
docker stack ps getstartedlab
You can run curl http://localhost several times in a row, or go to that URL in your browser and hit refresh a few times. Either way, you’ll see the container ID randomly change, demonstrating the load-balancing; with each request, one of the five replicas is chosen at random to respond.
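A quick shell loop makes the rotation easy to see (a sketch; the hostnames printed will be your own replicas’ container IDs):
for i in 1 2 3 4 5; do curl -s http://localhost | grep -o "Hostname:</b> [0-9a-f]*"; done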
Scale the app
You can scale the app by changing the replicas value in docker-compose.yml, saving the change, and re-running the docker stack deploy command:
docker stack deploy -c docker-compose.yml getstartedlab
Docker will do an in-place update; there is no need to tear the stack down first or kill any containers.
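For example, to drop from five replicas to two (an arbitrary value chosen for illustration), you would change only this part of docker-compose.yml before re-running the deploy command:
    deploy:
      replicas: 2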
Take down the app
Take the app down with docker stack rm:
docker stack rm getstartedlab
It’s as easy as that to stand up and scale your app with Docker. You’ve taken a huge step towards learning how to run containers in production. Up next, you will learn how to run this app on a cluster of machines.
Note: Compose files like this are used to define applications with Docker, and can be uploaded to cloud providers using Docker Cloud, or on any hardware or cloud provider you choose with Docker Enterprise Edition.
Recap and cheat sheet (optional)
To recap, while typing docker run is simple enough, the true implementation of a container in production is running it as a service. Services codify a container’s behavior in a Compose file, and this file can be used to scale, limit, and redeploy our app. Changes to the service can be applied in place, as it runs, using the same command that launched the service: docker stack deploy.
Some commands to explore at this stage:
docker stack ls # List all running applications on this Docker host
docker stack deploy -c <composefile> <appname> # Run the specified Compose file
docker stack services <appname> # List the services associated with an app
docker stack ps <appname> # List the running containers associated with an app
docker stack rm <appname> # Tear down an application
Get Started, Part 4: Swarms
Estimated reading time: 12 minutes
Prerequisites
- Install Docker version 1.13 or higher.
- Read the orientation in Part 1.
- Learn how to create containers in Part 2.
- Make sure you have pushed the container you created to a registry, as instructed; we’ll be using it here.
- Ensure your image is working by running this and visiting http://localhost/ (slotting in your info for username, repo, and tag): docker run -p 80:80 username/repo:tag
- Have a copy of your docker-compose.yml from Part 3 handy.
Introduction
In part 3, you took an app you wrote in part 2, and defined how it should run in production by turning it into a service, scaling it up 5x in the process.
Here in part 4, you deploy this application onto a cluster, running it on multiple machines. Multi-container, multi-machine applications are made possible by joining multiple machines into a “Dockerized” cluster called a swarm.
Understanding Swarm clusters
A swarm is a group of machines that are running Docker and have been joined into a cluster. After that has happened, you continue to run the Docker commands you’re used to, but now they are executed on a cluster by a swarm manager. The machines in a swarm can be physical or virtual. After joining a swarm, they are referred to as nodes.
Swarm managers can use several strategies to run containers, such as “emptiest node”, which fills the least utilized machines with containers, or “global”, which ensures that each machine gets exactly one instance of the specified container. You instruct the swarm manager to use these strategies in the Compose file, just like the one you have already been using.
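As a rough sketch of the “global” strategy in Compose file version 3 syntax (this snippet is illustrative and not part of this tutorial’s file), a service can request one task per node via its deploy mode:
  web:
    image: username/repo:tag
    deploy:
      mode: global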
Swarm managers are the only machines in a swarm that can execute your commands, or authorize other machines to join the swarm as workers. Workers are just there to provide capacity and do not have the authority to tell any other machine what it can and can’t do.
Up until now you have been using Docker in a single-host mode on your local machine. But Docker also can be switched into swarm mode, and that’s what enables the use of swarms. Enabling swarm mode instantly makes the current machine a swarm manager. From then on, Docker will run the commands you execute on the swarm you’re managing, rather than just on the current machine.
Set up your swarm
A swarm is made up of multiple nodes, which can be either physical or virtual machines. The basic concept is simple enough: run docker swarm init to enable swarm mode and make your current machine a swarm manager, then run docker swarm join on other machines to have them join the swarm as workers. We’ll use VMs to quickly create a two-machine cluster and turn it into a swarm.
Create a cluster
VMs on your local machine (Mac, Linux, Windows 7 and 8)
First, you’ll need a hypervisor that can create VMs, so install VirtualBox for your machine’s OS.
Note: If you’re on a Windows system that has Hyper-V installed, such as Windows 10, there is no need to install VirtualBox and you should use Hyper-V instead.
Now, create a couple of VMs using docker-machine, using the VirtualBox driver:
$ docker-machine create --driver virtualbox myvm1
$ docker-machine create --driver virtualbox myvm2
You now have two VMs created, named myvm1 and myvm2. The first one will act as the manager, which executes docker commands and authenticates workers to join the swarm, and the second will be a worker.
You can send commands to your VMs using docker-machine ssh. Instruct myvm1 to become a swarm manager with docker swarm init and you’ll see output like this:
$ docker-machine ssh myvm1 "docker swarm init"
Swarm initialized: current node <node ID> is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token <token> \
<ip>:<port>
Getting an error about needing to use --advertise-addr? Copy the IP address for myvm1 by running docker-machine ls, then run the docker swarm init command again, using that IP and specifying port 2377 (the port for swarm joins) with --advertise-addr. For example:
docker-machine ssh myvm1 "docker swarm init --advertise-addr 192.168.99.100:2377"
As you can see, the response to docker swarm init contains a pre-configured docker swarm join command for you to run on any nodes you want to add. Copy this command, and send it to myvm2 via docker-machine ssh to have myvm2 join your new swarm as a worker:
$ docker-machine ssh myvm2 "docker swarm join \
--token <token> \
<ip>:<port>"
This node joined a swarm as a worker.
Congratulations, you have created your first swarm.
Note: You can also run docker-machine ssh myvm2 with no command attached to open a terminal session on that VM. Type exit when you’re ready to return to the host shell prompt. It may be easier to paste the join command in that way.
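To confirm both machines are in the swarm, you can list the nodes from the manager (a sketch; the IDs are placeholders and the hostnames match your VM names):
$ docker-machine ssh myvm1 "docker node ls"
ID                  HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
<node ID> *         myvm1      Ready    Active         Leader
<node ID>           myvm2      Ready    Active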
Deploy your app on a cluster
The hard part is over. Now you just repeat the process you used in part 3 to deploy on your new swarm. Just remember that only swarm managers like myvm1 execute Docker commands; workers are just for capacity.
Copy the file docker-compose.yml you created in part 3 to the swarm manager myvm1’s home directory (alias: ~) by using the docker-machine scp command:
docker-machine scp docker-compose.yml myvm1:~
Now have myvm1 use its powers as a swarm manager to deploy your app, by sending the same docker stack deploy command you used in part 3 to myvm1 using docker-machine ssh:
docker-machine ssh myvm1 "docker stack deploy -c docker-compose.yml getstartedlab"
And that’s it, the app is deployed on a cluster.
Wrap all the commands you used in part 3 in a call to docker-machine ssh, and they’ll all work as you’d expect. Only this time, you’ll see that the containers have been distributed between both myvm1 and myvm2.
$ docker-machine ssh myvm1 "docker stack ps getstartedlab"
ID NAME IMAGE NODE DESIRED STATE
jq2g3qp8nzwx test_web.1 username/repo:tag myvm1 Running
88wgshobzoxl test_web.2 username/repo:tag myvm2 Running
vbb1qbkb0o2z test_web.3 username/repo:tag myvm2 Running
ghii74p9budx test_web.4 username/repo:tag myvm1 Running
0prmarhavs87 test_web.5 username/repo:tag myvm2 Running
Accessing your cluster
You can access your app from the IP address of either myvm1 or myvm2. The network you created is shared between them and load-balanced. Run docker-machine ls to get your VMs’ IP addresses and visit either of them in a browser, hitting refresh (or just curl them). You’ll see five possible container IDs all cycling by randomly, demonstrating the load-balancing.
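For reference, docker-machine ls output looks roughly like this (the addresses and Docker versions shown are illustrative); the IP you want is the host portion of each URL:
$ docker-machine ls
NAME    ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
myvm1   -        virtualbox   Running   tcp://192.168.99.100:2376           v17.06.2-ce
myvm2   -        virtualbox   Running   tcp://192.168.99.101:2376           v17.06.2-ce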
The reason both IP addresses work is that nodes in a swarm participate in an ingress routing mesh. This ensures that a service deployed at a certain port within your swarm always has that port reserved to itself, no matter what node is actually running the container. Here’s a diagram of how a routing mesh for a service called my-web published at port 8080 on a three-node swarm would look:
Note: If you’re having any connectivity trouble, keep in mind that in order to use the ingress network in the swarm, you need to have the following ports open between the swarm nodes before you enable swarm mode:
- Port 7946 TCP/UDP for container network discovery.
- Port 4789 UDP for the container ingress network.
Iterating and scaling your app
From here you can do everything you learned about in part 3: scale the app by changing the docker-compose.yml file, or change the app behavior by editing code. In either case, simply running docker stack deploy again deploys these changes. You can tear down the stack with docker stack rm. You can also join any machine, physical or virtual, to this swarm, using the same docker swarm join command you used on myvm2, and capacity will be added to your cluster; just run docker stack deploy afterwards and your app will take advantage of the new resources.
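As a sketch of that last point, adding a third (hypothetical) VM named myvm3 as extra capacity could look like this, keeping <token> and <ip> exactly as docker swarm init reported them:
docker-machine create --driver virtualbox myvm3
docker-machine ssh myvm3 "docker swarm join --token <token> <ip>:2377"
docker-machine ssh myvm1 "docker stack deploy -c docker-compose.yml getstartedlab"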
Recap and cheat sheet (optional)
In part 4 you learned what a swarm is, how nodes in swarms can be managers or workers, created a swarm, and deployed an application on it. You saw that the core Docker commands didn’t change from part 3; they just had to be targeted to run on a swarm manager. You also saw the power of Docker’s networking in action, which kept load-balancing requests across containers, even though they were running on different machines. Finally, you learned how to iterate and scale your app on a cluster.
Here are some commands you might like to run to interact with your swarm a bit:
docker-machine create --driver virtualbox myvm1 # Create a VM (Mac, Win7, Linux)
docker-machine create -d hyperv --hyperv-virtual-switch "myswitch" myvm1 # Win10
docker-machine env myvm1 # View basic information about your node
docker-machine ssh myvm1 "docker node ls" # List the nodes in your swarm
docker-machine ssh myvm1 "docker node inspect <node ID>" # Inspect a node
docker-machine ssh myvm1 "docker swarm join-token -q worker" # View join token
docker-machine ssh myvm1 # Open an SSH session with the VM; type "exit" to end
docker-machine ssh myvm2 "docker swarm leave" # Make the worker leave the swarm
docker-machine ssh myvm1 "docker swarm leave -f" # Make master leave, kill swarm
docker-machine start myvm1 # Start a VM that is currently not running
docker-machine stop $(docker-machine ls -q) # Stop all running VMs
docker-machine rm $(docker-machine ls -q) # Delete all VMs and their disk images
docker-machine scp docker-compose.yml myvm1:~ # Copy file to node's home dir
docker-machine ssh myvm1 "docker stack deploy -c <file> <app>" # Deploy an app
Get Started, Part 5: Stacks
Estimated reading time: 7 minutes
Prerequisites
- Install Docker version 1.13 or higher.
- Read the orientation in Part 1.
- Learn how to create containers in Part 2.
- Make sure you have pushed the container you created to a registry, as instructed; we’ll be using it here.
- Ensure your image is working by running this and visiting http://localhost/ (slotting in your info for username, repo, and tag): docker run -p 80:80 username/repo:tag
- Have a copy of your docker-compose.yml from Part 3 handy.
- Have the swarm you created in part 4 running and ready.
Introduction
In part 4, you learned how to set up a swarm, which is a cluster of machines running Docker, and deployed an application to it, with containers running in concert on multiple machines.
Here in part 5, you’ll reach the top of the hierarchy of distributed applications: the stack. A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together. A single stack is capable of defining and coordinating the functionality of an entire application (though very complex applications may want to use multiple stacks).
Some good news is, you have technically been working with stacks since part 3, when you created a Compose file and used docker stack deploy
. But that was a single service stack running on a single host, which is not usually what takes place in production. Here, you’re going to take what you’ve learned and make multiple services relate to each other, and run them on multiple machines.
This is the home stretch, so congratulate yourself!
Adding a new service and redeploying
It’s easy to add services to our docker-compose.yml file. First, let’s add a free visualizer service that lets us look at how our swarm is scheduling containers. Open up docker-compose.yml in an editor and replace its contents with the following:
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
The only thing new here is the peer service to web, named visualizer. It introduces two new keys: a volumes key, giving the visualizer access to the host’s socket file for Docker, and a placement key, ensuring that this service only ever runs on a swarm manager – never a worker. That’s because this container, built from an open source project created by Docker, displays Docker services running on a swarm in a diagram.
We’ll talk more about placement constraints and volumes in a moment. But for now, copy this new docker-compose.yml file to the swarm manager, myvm1:
docker-machine scp docker-compose.yml myvm1:~
Now just re-run the docker stack deploy command on the manager, and whatever services need updating will be updated:
$ docker-machine ssh myvm1 "docker stack deploy -c docker-compose.yml getstartedlab"
Updating service getstartedlab_web (id: angi1bf5e4to03qu9f93trnxm)
Updating service getstartedlab_visualizer (id: l9mnwkeq2jiononb5ihz9u7a4)
You saw in the Compose file that visualizer runs on port 8080. Get the IP address of one of your nodes by running docker-machine ls. Go to either IP address at port 8080 and you will see the visualizer running:
The single copy of visualizer is running on the manager as you expect, and the five instances of web are spread out across the swarm. You can corroborate this visualization by running docker stack ps <stack>:
docker-machine ssh myvm1 "docker stack ps getstartedlab"
The visualizer is a standalone service that can run in any app that includes it in the stack. It doesn’t depend on anything else. Now let’s create a service that does have a dependency: the Redis service that will provide a visitor counter.
Persisting data
Go through the same workflow once more. Save this new docker-compose.yml file, which finally adds a Redis service.
version: "3"
services:
  web:
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - ./data:/data
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
Redis has an official image in the Docker library and has been granted the short image name of just redis, so no username/repo notation here. The Redis port, 6379, has been pre-configured by Redis to be exposed from the container to the host, and here in our Compose file we expose it from the host to the world, so you can actually enter the IP for any of your nodes into Redis Desktop Manager and manage this Redis instance, if you so choose.
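For instance, once the stack is up you could read the visit counter with the standard redis-cli client (a sketch; replace <node-ip> with an address from docker-machine ls, and the value returned is whatever your counter currently holds):
$ redis-cli -h <node-ip> -p 6379 get counter
"3"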
Most importantly, there are a couple of things in the redis specification that make data persist between deployments of this stack:
- redis always runs on the manager, so it’s always using the same filesystem.
- redis accesses an arbitrary directory in the host’s file system as /data inside the container, which is where Redis stores data.
Together, this is creating a “source of truth” in your host’s physical filesystem for the Redis data. Without this, Redis would store its data in /data inside the container’s filesystem, which would get wiped out if that container were ever redeployed.
This source of truth has two components:
- The placement constraint you put on the Redis service, ensuring that it always uses the same host.
- The volume you created that lets the container access ./data (on the host) as /data (inside the Redis container). While containers come and go, the files stored in ./data on the specified host will persist, enabling continuity.
To deploy your new Redis-using stack, create ./data on the manager, copy over the new docker-compose.yml file with docker-machine scp, and run docker stack deploy one more time.
$ docker-machine ssh myvm1 "mkdir ./data"
$ docker-machine scp docker-compose.yml myvm1:~
$ docker-machine ssh myvm1 "docker stack deploy -c docker-compose.yml getstartedlab"
Check the results on http://localhost and you’ll see that a visitor counter is now live and storing information on Redis.
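You can check the same thing from a terminal (a sketch; the visit count shown is illustrative and increments with each request):
$ curl -s http://$(docker-machine ip myvm1) | grep -o "Visits:</b> [0-9]*"
Visits:</b> 3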
Recap (optional)
You learned that stacks are inter-related services all running in concert, and that – surprise! – you’ve been using stacks since part three of this tutorial. You learned that to add more services to your stack, you insert them in your Compose file. Finally, you learned that by using a combination of placement constraints and volumes you can create a permanent home for persisting data, so that your app’s data survives when the container is torn down and redeployed.
Get Started, Part 6: Deploy your app
Estimated reading time: 6 minutes
Prerequisites
- Install Docker version 1.13 or higher.
- Read the orientation in Part 1.
- Learn how to create containers in Part 2.
- Make sure you have pushed the container you created to a registry, as instructed; we’ll be using it here.
- Ensure your image is working by running this and visiting http://localhost/ (slotting in your info for username, repo, and tag): docker run -p 80:80 username/repo:tag
- Have the final version of docker-compose.yml from Part 5 handy.
Introduction
You’ve been editing the same Compose file for this entire tutorial. Well, we have good news: that Compose file works just as well in production as it does on your machine. Here, we go through some options for running your Dockerized application.
Choose an option
If you’re okay with using Docker Community Edition in production, you can use Docker Cloud to help manage your app on popular cloud providers such as Amazon Web Services, DigitalOcean, and Microsoft Azure.
To set up and deploy:
- Connect Docker Cloud with your preferred provider, granting Docker Cloud permission to automatically provision and “Dockerize” VMs for you.
- Use Docker Cloud to create your computing resources and create your swarm.
- Deploy your app.
Note: We will be linking into the Docker Cloud documentation here; be sure to come back to this page after completing each step.
Connect Docker Cloud
First, link Docker Cloud with your cloud provider:
- Amazon Web Services setup guide
- DigitalOcean setup guide
- Microsoft Azure setup guide
- Packet setup guide
- SoftLayer setup guide
- Use the Docker Cloud Agent to Bring your Own Host
Create your swarm
After your cloud provider is all set up, create a Swarm:
- If you’re on AWS you can automatically create a swarm.
- Otherwise, create your nodes in the Docker Cloud UI, and run the docker swarm init and docker swarm join commands you learned in part 4 over SSH via Docker Cloud. Finally, enable Swarm Mode by clicking the toggle at the top of the screen, and register the swarm you just made.
Deploy your app
Connect to your swarm via Docker Cloud. This opens a terminal whose context is your local machine, but whose Docker commands are routed up to the swarm running on your cloud provider. This is a little different from the paradigm you’ve been following, where you were slinging commands via SSH; now, you can directly access both your local file system and your remote swarm, enabling some very tidy-looking commands:
docker stack deploy -c docker-compose.yml getstartedlab
That’s it! Your app is running in production and is managed by Docker Cloud.
Congratulations!
You’ve taken a full-stack, dev-to-deploy tour of the entire Docker platform.
There is much more to the Docker platform than what was covered here, but you have a good idea of the basics of containers, images, services, swarms, stacks, scaling, load-balancing, volumes, and placement constraints.
Want to go deeper? Here are some resources we recommend:
- Samples: Our samples include multiple examples of popular software running in containers, and some good labs that teach best practices.
- User Guide: The user guide has several examples that explain networking and storage in greater depth than was covered here.
- Admin Guide: Covers how to manage a Dockerized production environment.
- Training: Official Docker courses that offer in-person instruction and virtual classroom environments.
- Blog: Covers what’s going on with Docker lately.