Docker Tutorial: How to Install and Use Containers

Getting started with Docker

Container-based virtualization is not a new thing at all. But a new implementation of container-based virtualization has made it a hot topic for application developers and administrators. What made it popular is the flexibility with which it manages containers and the applications inside them. Developers can now easily ship an application together with its entire set of dependencies, with the guarantee that it will run anywhere. Many companies have already started adopting it as part of their continuous integration environment.

If you are quite new to container-based virtualization and are interested in understanding the major difference between hypervisor-based and container-based virtualization, then I would recommend reading the below article before going ahead.


Read: Main Difference between Containers and Hypervisors


Unlike other types of virtualization, containers are much more lightweight and have less overhead. Hypervisor-based virtualization technologies require virtual hardware to be created, on top of which an operating system with its own kernel and drivers is installed. So technically, hypervisor-based virtualization is full isolation: to the base operating system, the guest system looks like an independent host that has no relation to it.

Containers share the same kernel that the base operating system is using. This is the main advantage of container-based virtualization as well as its main limitation. It is a limitation because a container cannot run any other operating system, such as Windows (simply because a Linux kernel can only run Linux, as simple as that).


What is Docker?

Docker is nothing but a software engine that manages and automates the deployment of software applications into containers. To put it a better way, it is something that can be used to build, ship, and run applications anywhere. It is open source, released under the Apache 2.0 license, and developed by Docker, Inc.

Docker has become a popular tool in the IT industry because of the flexibility that it brings along with containers. You can actually get your first container running in seconds (yes, believe me, seconds).

The main advantage of using Docker is the "shipping" part. When I say shipping, you are shipping an entire container, with your applications inside it along with all their dependencies. This makes running those containerized applications anywhere very easy, and gives you the guarantee that your application will run as expected. Portability is the main thing that Docker aims at.

Before we go ahead with the installation part, let's first understand some of the core components of docker. 

  • Docker Client and Server
  • Docker Images
  • Docker Registry
  • Docker Container


Docker works in a client-server model. The Docker client contacts the server (a daemon waiting for requests to be processed), and the server daemon does the actual job of creating, stopping, and otherwise managing containers.
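Once Docker is installed, you can see this client-server split directly: `docker version` reports the client's version and then asks the daemon for the server's version. A guarded sketch that degrades to a note on machines without Docker:

```shell
# Show the client and server halves of Docker talking to each other.
# Falls back to a note when Docker (or its daemon) is not available.
if command -v docker >/dev/null 2>&1; then
    docker version || echo "docker client found, but the daemon is not reachable"
else
    echo "docker is not installed on this machine"
fi
```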


Docker Client Server Architecture

As shown in the above diagram, Docker works in a client-server model. The Docker daemon waits on the Docker host, and the docker client utility connects to it and executes commands.
Docker images are what we launch our containers from. In simple words, Docker images are the source code for our containers. You can find many base images like Ubuntu, CentOS, etc. in the official Docker registry; you can launch your own container from such an image, install your required applications, and save the result as another image to be used as a base image for your application.
Docker images are the main components with which you will ship and download your application (so that you can launch a container from them).


Docker registries are nothing but the place where you store and retrieve your images for later use. A registry is like a Git repository, or say a yum repository, for your Docker images. And similar to yum repositories, Docker registries can be public as well as private.


Docker Containers: You can think of containers as the main execution part of Docker. You can run one or many processes inside a container, just as you would on a normal Linux system. So containers are nothing but an execution environment for the end user. You can have many containers running inside a single system, with names to identify them, each running different applications/processes or even interconnected processes. For example, you can have one container running only an Apache server and another container on the same Docker host running MySQL (which Apache will use as its database).


Shipping your applications has now become very easy: you simply make an image out of your running container, tag it, and push it to your registry, so that anybody who has access to your registry account can pull it and run that container.

Docker can be run on any server that has a modern Linux kernel. The recommended kernel version for running Docker is 3.8 or later.
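A quick way to check whether your running kernel meets that requirement is to compare `uname -r` against 3.8 using a version-aware sort:

```shell
# Verify that the running kernel is at least 3.8 (needed by Docker).
required="3.8"
current="$(uname -r | cut -d- -f1)"   # e.g. "3.8.0" from "3.8.0-27-generic"
# sort -V orders version strings; if "required" sorts first (or ties),
# the current kernel is new enough.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel OK: $current"
else
    echo "kernel too old: $current (need >= $required)"
fi
```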

How to Install Docker ?


Installing Docker is quite easy and straightforward on Linux, because Docker is available as a package through different package managers like apt-get and yum. For this reason, Docker can be installed easily on many Linux distributions out there. That being said, it is best tested on Ubuntu and Red Hat Enterprise Linux.

In this tutorial we will see how to install Docker on Ubuntu as well as on Red Hat Enterprise Linux. Let's start with Ubuntu first. The below versions of Ubuntu are recommended for running Docker.


  • Ubuntu Precise 12.04 (LTS) (64-bit)
  • Ubuntu Saucy 13.10 (64-bit)
  • Ubuntu Raring 13.04 (64-bit)      
  • Ubuntu Trusty 14.04 (LTS) (64-bit)

Please note that Docker will also work on other, earlier Ubuntu versions (provided they have the recommended kernel version and other dependencies), but different releases of Ubuntu ship different kernel versions, and the earlier releases are simply not officially recommended.


Step 1: The first step is to verify whether you have the correct kernel version on your system. If you are using Ubuntu 13.10 or later, then you don't have to worry, because you already have a recent enough version. But in this example I will be using an Ubuntu 12.04 release, which has an older kernel. So the first step is to install a newer kernel version.

You can do this on Ubuntu by running the below commands.

$ sudo apt-get update
$ sudo apt-get install linux-headers-3.8.0-27-generic linux-image-3.8.0-27-generic linux-headers-3.8.0-27



The above commands will install a kernel version new enough for Docker. Now let's update the Ubuntu GRUB configuration with these kernel changes.

$ sudo update-grub


Now reboot your server, and you should see something like the below when executing the uname -a command.

root@docker-workstation:~# uname -a
Linux 3.8.0-27-generic #34~precise1-Ubuntu SMP Wed Sep 3 21:30:07 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux


Please note that you only need to follow the above steps if you are running a release older than Ubuntu 12.04.3. If you are running Ubuntu 12.04.3 or later, you don't need to upgrade your kernel, because you should already have a 3.8.0 x86_64 or later kernel by default.

Step 2: We will be using the devicemapper storage driver for Docker. Hence the next thing to verify before going ahead is whether we have device mapper installed and enabled in the kernel. This can be done as follows.


root@docker-workstation:~# grep device-mapper /proc/devices
252 device-mapper

If you do not find it, you can enable that kernel module by using the below modprobe command.


root@docker-workstation:~# modprobe dm_mod


A core component of Docker is its use of layered images. To implement this layered approach, Docker uses several kernel features. Each layer is uniquely named, can be mounted when required, and can even be modified when required; new layers can be created on top of an existing layer under another unique name. Each Docker image in the system is stored in the form of such layers. For now, keep in mind that the device mapper provides this layering feature to Docker, hence it is required to be enabled/installed. We will be discussing these concepts in more detail in upcoming posts related to Docker.
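Once Docker is installed, you can see these layers for yourself: `docker history <image>` lists every layer an image is built from, newest first. A guarded example (it prints a note instead when Docker or the image is absent):

```shell
# List the layers of the "ubuntu" image, newest layer first.
# Guarded so the snippet degrades gracefully without Docker.
if command -v docker >/dev/null 2>&1; then
    docker history ubuntu 2>/dev/null || echo "ubuntu image not present locally"
else
    echo "docker is not installed; nothing to inspect"
fi
```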



Step 3: Now, as we are using Ubuntu, the next step is to add Docker's official apt repository to our system. This can be done by using the below command.

root@docker-workstation:~# sh -c "echo deb docker main > /etc/apt/sources.list.d/docker.list"


Now let's add the GPG key for this repo to apt. This can be done as shown below.

root@docker-workstation:~# curl -s | sudo apt-key add -



After this, simply run apt-get update, and we are done adding the Docker package repository.


Step 4: The final step is to install Docker itself. This can now be done with a single apt-get command, as we have already added the apt repository to our list.


root@docker-workstation:~# apt-get install lxc-docker


The above command will install Docker along with all required dependencies, and we are ready to go. Using the below command you can get the current status of all containers, images, data space, driver details, kernel version, OS details, and much more.

root@docker-workstation:~# docker info


Step 5: There is a slight modification required in your default firewall setup (this only applies if you are using the Ubuntu UFW firewall). The change is to allow forwarded packets to reach your containers, and it is nothing but modifying the /etc/default/ufw file.



The default forward policy in that file is set to DROP; modify it to ACCEPT so that forwarded packets are allowed. Now let's go ahead with understanding the installation on a Red Hat Linux system. Then we will see how to get our first container running.
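Concretely, the line to change in /etc/default/ufw is DEFAULT_FORWARD_POLICY. The sketch below performs the edit on a throwaway copy, since modifying the real file requires root; on an actual host you would run the same sed against /etc/default/ufw and then reload UFW:

```shell
# Demonstrate the UFW forward-policy change on a throwaway copy.
conf=$(mktemp)
echo 'DEFAULT_FORWARD_POLICY="DROP"' > "$conf"

# Flip the forward policy from DROP to ACCEPT so packets can be
# forwarded on to containers.
sed -i 's/^DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' "$conf"

cat "$conf"   # now reads: DEFAULT_FORWARD_POLICY="ACCEPT"
rm -f "$conf"
```

On the real file that would be `sudo sed -i ... /etc/default/ufw` followed by `sudo ufw reload`.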


Installing docker on Red Hat Enterprise Linux

Now let's see how to install Docker on RHEL. It's quite simple, just as on Ubuntu.

Docker is included in the default package list of Red Hat Enterprise Linux 7, and Docker on RHEL 7 is officially supported. No other version of RHEL officially supports Docker.

On the Red Hat side, Docker runs on the below versions.

  • Red Hat Enterprise Linux (and CentOS) 6 and later (64-bit)
  • Fedora Core 19 and later (64-bit)

Step 1: So the first step here is again to verify the kernel version. This can be done with the same command, as shown below (that being said, all the supported Red Hat versions have the required kernel by default).


root@docker-workstation:~# uname -a
Linux 3.10.9-200.fc19.x86_64 #1 SMP Wed Nov 20 10:34:35 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux


Step 2: Now, as we did on Ubuntu, we need to confirm whether devicemapper is installed and enabled in the kernel. This is done the same way, as shown below.

root@docker-workstation:~# grep device-mapper /proc/devices


In case you do not have it installed, you can install and enable it with the below commands.

root@docker-workstation:~# yum install -y device-mapper
root@docker-workstation:~# modprobe dm_mod


Step 3: Now let's install Docker as shown below. On Red Hat Enterprise Linux 6 or CentOS 6, the docker-io package comes from the EPEL repository, so first add EPEL with the below command.

root@docker-workstation:~# rpm -Uvh

Installing the Docker package is now one single command away, as shown below.


root@docker-workstation:~# yum -y install docker-io


If you are using Red Hat Enterprise Linux 7 (provided you have a Red Hat server subscription), then the method is slightly different. You can do it by running the below commands.

root@docker-workstation:~# subscription-manager repos --enable=rhel-7-server-extras-rpms
root@docker-workstation:~# sudo yum install -y docker

On Fedora 20 and later, you need to run the below command instead to install Docker.

root@docker-workstation:~# yum -y install docker


The simplest method to Install Docker on Linux

So far we have installed Docker through package managers. Now let's see the easiest and simplest method to install Docker on Linux: a script officially provided by Docker. For this we first need to ensure that curl is installed on our system.

root@docker-workstation:~# apt-get -y install curl

On Red Hat or Fedora, you need to run the below command instead.

root@docker-workstation:~# yum -y install curl


Now run the below command to execute the script provided by Docker.

curl | sudo sh



The above command pipes the script to the shell, which in turn executes it. The script will verify the kernel, device mapper, and all the other requirements, then install Docker and start the daemon. This is the simplest method out there to install Docker on Linux.


Understanding Docker Daemon and Client


Now that we have installed Docker on our system, let's see how it functions and launches containers. As I mentioned in the beginning, Docker is a client-server application. The Docker daemon must be running on the server in order to execute the commands sent by the Docker client. The Docker daemon needs to run as root, because it performs tasks that normal users do not have the privileges for.

By default the Docker daemon listens on the Unix socket /var/run/docker.sock for incoming Docker client requests. Although we can change this and make it listen on a TCP port (so that clients on remote machines can execute commands on this Docker host), it is not advisable to do so.


Making the Docker daemon bind to a TCP port increases the security risk, because that port becomes a door for others to get inside your system and gain root-level access. So do this only if your Docker host is in a trusted private network.


echo 'DOCKER_OPTS="-H=tcp://0.0.0.0:4243"' >> /etc/default/docker


The above command permanently adds a startup argument so that the daemon listens on TCP port 4243, allowing remote Docker clients to connect. Alternatively, you can make it listen on that port by manually starting it as below.


/usr/bin/docker -H tcp://0.0.0.0:4243 -d &


Please keep in mind that the Docker daemon by default does not provide any form of authentication. This means that anybody who can reach this port with a Docker client can run commands. There is, however, a form of TLS authentication in Docker, which I will be discussing in a separate post.

You can also bind the Docker daemon to both /var/run/docker.sock and TCP port 4243 at the same time, as shown below.

/usr/bin/docker -d -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
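One quoting pitfall with the /etc/default/docker approach: the inner double quotes must survive into the file, so the whole assignment needs single quotes around it. A quick check on a scratch file (the real edit targets /etc/default/docker and needs root; the 0.0.0.0:4243 bind address matches the port used above):

```shell
# Append a DOCKER_OPTS line and confirm the double quotes survive.
conf=$(mktemp)
echo 'DOCKER_OPTS="-H tcp://0.0.0.0:4243"' >> "$conf"
cat "$conf"   # DOCKER_OPTS="-H tcp://0.0.0.0:4243"
rm -f "$conf"
```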


Running our First Container with Docker


Now that we have installed and configured the Docker daemon and it is running on our system, let's get started with containers. All administration activities, like launching, stopping, removing, and imaging containers, are done through the docker client command line and its different parameters.

root@docker-workstation:~# docker help


Using the above command you can get all command line arguments related to Docker.

root@docker-workstation:~# docker run -i -t ubuntu /bin/bash


Boom! After the above command has executed, you are sitting inside a Docker container, with a new root filesystem. Isn't that amazing and fast compared to launching a virtualized host?


The above command's output will show the different steps, like pulling down the ubuntu image from the official Docker registry. The -i flag we used keeps STDIN open for us. As STDIN is kept open, we need to attach a terminal to it, which is done by the -t argument (which provides us with a pseudo-terminal inside the container). The next argument we gave ("ubuntu") is the name of the image to use for this container (by default the docker client will pull the image from the official Docker registry if it is not found on the local drive).

Images like "ubuntu", which we just launched, can be used as a base for our applications to be built on top of. This ubuntu image is now stored on the local drive for later use. The final argument we gave is the command to be run inside the container. That is why, after running the above docker run command, you end up sitting inside the container (at the bash prompt of the newly launched ubuntu container).
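To make the role of that final argument obvious, here is a one-off, non-interactive variant: the container runs a single echo and then exits. (Guarded with fallbacks so the snippet also runs on machines without Docker or without the image available.)

```shell
# Run one command inside a fresh ubuntu container; the container
# exits as soon as the command finishes.
if command -v docker >/dev/null 2>&1; then
    docker run ubuntu /bin/echo "hello from inside the container" \
        || echo "docker run failed (daemon or image unavailable)"
else
    echo "docker is not installed; command shown for illustration only"
fi
```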

You are now sitting inside a whole new Linux operating system with completely separate namespaces for processes, network, filesystems, etc. (but it shares the kernel with the base Docker host). You can set a new hostname for your container, configure its own resolver, and so on.


1. Try running ps -a inside your newly launched container (it will show only the bash process we launched, plus the ps command itself).

2. Try running the hostname command (it will show a random container ID as the hostname; you can refer to the container by this ID, or you can even change the hostname).

3. Run ip a to see the IP address of the container.


As soon as you type exit or press CTRL + D, the container stops. That is because a container exists only for the duration of the final command we provided as the argument (/bin/bash in our docker run command). However, we can also launch a container in the background (which is normally the case), as shown below.


root@docker-workstation:~# docker run --name test_container -t -d ubuntu /bin/sh
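Once a container like test_container is running detached, the usual follow-ups are docker ps to see it and docker stop to halt it. A guarded sketch (it prints a note instead when no Docker daemon is reachable):

```shell
# Inspect and stop a background container (name from the example above).
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker ps      # lists running containers with their names and IDs
    docker stop test_container 2>/dev/null \
        || echo "no running container named test_container"
else
    echo "Docker daemon not reachable; commands shown for illustration"
fi
```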



In the upcoming Docker tutorials, we will see a lot of docker command-line options for different tasks, like seeing container processes from outside, installing applications, taking images, working with registries, building a private registry, the Docker API, and much more. Hope this article was helpful in getting started with Docker installation in your environment and in understanding some basics.


Read the Next Docker Tutorial in this series: Running Docker Containers
