Docker Network Configuration Tutorial

Container Networking Configuration

In our previous tutorials, we covered an introduction to Docker containers, Docker installation, running our first container, pulling images, creating images, and so on. But we haven't yet talked about how Docker does networking. Understanding the networking side of Docker is important, because the primary use case of Docker is to create services that users or other services will connect to and communicate with.

 

Previous tutorials can be accessed via the following links.


Read: Difference between Containers and Virtual Machines

Read: How to Install Docker

Read: How to Run Docker Containers

Read: How to Build Docker images using a Dockerfile

 

Let's get started by understanding the default network configuration of a Docker container. We'll start a default Ubuntu container and analyse its network configuration to see what's going on.

 

root@ip-10-1-136-71:~# docker run -it --rm ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
0bf056161913: Pull complete
1796d1c62d0c: Pull complete
e24428725dd6: Pull complete
89d5d8e8bafb: Pull complete
Digest: sha256:a2b67b6107aa640044c25a03b9e06e2a2d48c95be6ac17fb1a387e75eebafd7c
Status: Downloaded newer image for ubuntu:latest
root@1dfa7ba191a9:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02  
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1296 (1.2 KB)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

 

The first command shown above runs the default ubuntu:latest container. Since the system does not have that image locally, Docker downloads it from Docker Hub and runs it interactively. Once the image is pulled and the container is started, we land at the container's shell prompt (shown as root@1dfa7ba191a9, where 1dfa7ba191a9 is the container ID).

We then ran the basic ifconfig command to see the IP address assigned inside the container.

 

eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02  
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0

 

 

So the container has an IP address of 172.17.0.2 and a netmask of 255.255.0.0, which means the default Docker internal network has a CIDR of 172.17.0.0/16. This is always the case by default, unless you override it.

 

Now let's see how the underlying host deals with this network. Exit the container by typing exit or pressing CTRL + D, then look at the ifconfig output on the underlying host.

 

root@ip-10-1-136-71:~# ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:a8:de:ed:84  
          inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:a8ff:fede:ed84/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:9001  Metric:1
          RX packets:23 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1580 (1.5 KB)  TX bytes:648 (648.0 B)

eth0      Link encap:Ethernet  HWaddr 06:c8:1f:44:eb:01  
          inet addr:10.1.136.71  Bcast:10.1.136.255  Mask:255.255.255.0
          inet6 addr: fe80::4c8:1fff:fe44:eb01/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:90197 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7374 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:131280146 (131.2 MB)  TX bytes:762026 (762.0 KB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1728 (1.7 KB)  TX bytes:1728 (1.7 KB)

 

 

So the underlying host has three interfaces: the usual eth0 and lo, plus an additional interface called docker0, shown above.

 

The docker0 interface is a bridge interface created when Docker was installed. Docker is quite smart when it comes to IP addressing: it tries to use a network that does not conflict with the host network. The flow is something like this: when the Docker service first starts on your host, it checks whether you have given a custom bridge option (using the -b argument to the Docker daemon at startup). If you have not provided a custom bridge, Docker attempts to create a bridge of its own called docker0 (which we saw in the host's ifconfig output above).

As mentioned, while creating the bridge, Docker first tries to find a network range that is not being used by the host. Docker uses the ip route command to find the networks the host needs to reach, and then picks a range that does not conflict with any of them.
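
For instance, the routing table Docker consults on this host would look something like the following (illustrative output; the exact lines depend on your system):

root@ip-10-1-136-71:~# ip route
default via 10.1.136.1 dev eth0
10.1.136.0/24 dev eth0  proto kernel  scope link  src 10.1.136.71
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1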

This method of finding a non-conflicting network using ip route is not foolproof, because sometimes the host has no specific route to another private network but can still reach it via the default gateway. Say the host's default gateway is 10.1.136.1, and the host can reach another private server, 172.17.0.2, via that gateway. Docker does not know about this, as that specific route is not present on the host, so it may still pick a conflicting range.

 

In such cases it is always better to create your own bridge with your own defined network scheme, so that there is no conflict of any kind (we will do that shortly).

Let's now see how Docker containers communicate with the outside world. Exit the container and let's look at the iptables NAT rules.

 

root@ip-10-1-136-71:~# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  all  --  172.17.0.0/16        anywhere            

Chain DOCKER (2 references)
target     prot opt source               destination         

 

 

See the POSTROUTING chain in the above output. It has a rule that masquerades all traffic originating from 172.17.0.0/16 (the Docker bridge network). This masquerade rule makes all outgoing traffic from containers look to the outside world as if it originated from the host. However, the outside world cannot yet talk to the containers directly.

 

To let the outside world reach container ports directly, we need a corresponding iptables DNAT rule. Say you want the outside world to interact with container port 3306. In that case, you have to add a DNAT rule stating that requests from anywhere towards port 3306 on the host must be mapped to port 3306 on the container.

 

This is exactly what Docker does when you pass an option like -p 3306:3306 to the docker run command. Let's see that happen by simply passing -p 3306:3306 during docker run.
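
A minimal example (a sketch; we reuse the plain ubuntu image here, since any image will do for demonstrating the NAT rule):

root@ip-10-1-136-71:~# docker run -it -d -p 3306:3306 ubuntu

Checking the NAT table again (only the DOCKER chain is shown below):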

 

root@ip-10-1-136-71:~# iptables -t nat -L
Chain DOCKER (2 references)
target     prot opt source               destination         
DNAT       tcp  --  anywhere             anywhere             tcp dpt:mysql to:172.17.0.2:3306

 

From the above output it's clear that Docker added exactly the DNAT rule we discussed. In simple words: if the outside world wants to talk to the MySQL service running in the container, it uses the host address with port 3306. The host inspects the IP packet, reads the source, destination and destination port, and on matching the DNAT rule forwards the request to port 3306 of the container.

The above DNAT rule forwards requests on port 3306 to 172.17.0.2 (the IP address of the container we launched using docker run).

Docker does not provide a command-line option to publish ports of an already running container. Say you forgot to pass the -p option for a required port during docker run. In that case, you can always add the DNAT iptables rule yourself, using the container's IP address, to expose the required port of the running container.

Say your container's IP address is 172.17.0.6 and you want to expose one of its ports to the outside world (say port 8080 for this example). You can simply fire the command below to expose the port.

 

iptables -t nat -A DOCKER -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.6:8080
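
You can verify that the rule was appended by listing just the DOCKER chain:

root@ip-10-1-136-71:~# iptables -t nat -L DOCKER -n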

 

By default, all containers residing on the same host can communicate with each other using their container IP addresses. This is because all containers running on the host are connected to the same bridge interface and are part of the same Docker private network, and the default iptables rule sets do not prevent this communication.
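
For example, assuming a second container received 172.17.0.3 (as one of ours does later in this tutorial), you could verify this from inside the first container:

root@1dfa7ba191a9:/# ping 172.17.0.3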

 

If you want to manage iptables entirely yourself and do not want Docker to create rule sets, you can pass --iptables=false in the Docker startup parameters. If you have not set --iptables=false, Docker will by default populate the FORWARD chain with all sources allowed (shown below is the default FORWARD rule set created by Docker).

 

root@ip-10-1-136-71:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere            
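
If you do decide to manage the rules yourself, the flag goes in the same daemon options file used throughout this article; on Ubuntu, /etc/default/docker would look something like this (a sketch):

DOCKER_OPTS="--iptables=false"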

 

 

Although Docker allows inter-container communication by default, you can disable this feature for security reasons. Disabling inter-container communication is done by passing the --icc=false option at Docker startup. This can be set by modifying the /etc/default/docker file on Ubuntu, or the /etc/sysconfig/docker file on RHEL-based systems. My /etc/default/docker file looks like the below.

 

root@ip-10-1-136-71:~# cat /etc/default/docker
DOCKER_OPTS="--icc=false"

 

 

Once you enable --icc=false and restart the Docker daemon, inter-container communication will no longer work. This is because Docker adds a DROP rule for forwarded packets on the host, as shown below.

 

root@ip-10-1-136-71:~# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere            
DROP       all  --  anywhere             anywhere            

 

 

Note the last DROP rule in the FORWARD chain above (which was not there earlier).

Setting --icc=false is a good idea for security reasons. If one of your containers is compromised, it should not be able to discover other containers on the same network by scanning it.

 

Although communication between two containers on the same host works out of the box, discovering the IP address of the container you want to reach is not easy in Docker. That is, you need a way to determine a container's IP address once it is launched, so that other containers can connect to it. Docker assigns a free IP address from the bridge's range dynamically, so you have to come up with your own method of discovering container addresses.
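
One common approach is to query the Docker daemon itself from the host. For example, docker inspect can print just a container's IP address (shown here with a placeholder; substitute your own container name or ID):

root@ip-10-1-136-71:~# docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-name-or-id>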

 

The best solution, from both a security and a convenience perspective, is to use Docker container linking.

Keep in mind that container linking as such has no direct relation to Docker's networking.

 

Container linking depends on the name of the container. Let's look at the names of the containers we launched while discussing bridged networking earlier (the docker ps command shows the details of running containers, including their names, as below).

 

root@ip-10-1-136-71:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
9c7c0c41d694        ubuntu              "/bin/bash"         About an hour ago   Up 37 minutes       0.0.0.0:80->80/tcp       desperate_ptolemy
396e2a687528        ubuntu              "/bin/bash"         About an hour ago   Up 36 minutes       0.0.0.0:3306->3306/tcp   elated_lichterman

 

If you remember, we did not pass a name option during the earlier docker run commands. That is why Docker assigned random names to our containers (in the example above, the names are desperate_ptolemy and elated_lichterman). Names are important for identifying your containers. Using the --name option during docker run, we can assign names ourselves. Let's create a container from the ubuntu:latest image and name it database.

 

root@ip-10-1-136-71:~# docker run -it -d --name=database -p 3306:3306 ubuntu
a9bec6b068b88e0d2b484e3bd2a33e2c818c274d8437c40ab29ab1ee261a1c0a
root@ip-10-1-136-71:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
a9bec6b068b8        ubuntu              "/bin/bash"         2 seconds ago       Up 1 seconds                            database

 

Now let's create another container called website and link it to the database container we just created. This can be done as shown below.

 

root@ip-10-1-136-71:~# docker run -it -d --name=website --link database:database ubuntu
7ec96444baa61420bd505d7e734fd51f08593690657d126ab331d0412091bf6d
root@ip-10-1-136-71:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
7ec96444baa6        ubuntu              "/bin/bash"         2 seconds ago       Up 1 seconds                                 website
9233d556669e        ubuntu              "/bin/bash"         12 seconds ago      Up 11 seconds       0.0.0.0:3306->3306/tcp   database

 

The --link argument we passed above has the format --link name-of-the-container-to-link:alias. The alias is optional; you can simply use --link name-of-the-container-to-link.
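
For instance, either of the following forms would have worked when creating our website container (a sketch; you would pick one, since container names must be unique):

docker run -it -d --name=website --link database ubuntu
docker run -it -d --name=website --link database:db ubuntu

With the alias db, the environment variables injected into the website container would be prefixed DB_ instead of DATABASE_.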

So we used the --link option to link our website container to our database container. Let's see what's going on inside the website container.

 

root@ip-10-1-136-71:~# docker exec -it 7ec96444baa6 /bin/bash
root@7ec96444baa6:/# env
DATABASE_PORT_3306_TCP_PROTO=tcp
HOSTNAME=7ec96444baa6
TERM=xterm
DATABASE_PORT_3306_TCP_ADDR=172.17.0.2
DATABASE_NAME=/website/database
DATABASE_PORT=tcp://172.17.0.2:3306
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
DATABASE_PORT_3306_TCP=tcp://172.17.0.2:3306
SHLVL=1
HOME=/root
DATABASE_PORT_3306_TCP_PORT=3306
LESSOPEN=| /usr/bin/lesspipe %s
LESSCLOSE=/usr/bin/lesspipe %s %s
_=/usr/bin/env


The env command output above shows a few environment variables related to the database container we linked to. What happened during linking is that the source container (the database container) provided a few details to our website container. The details provided by the source container are listed below.

 

  • Exposed ports
  • IP address of the source container
  • Protocol (i.e., tcp or udp)
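
These variables let an application in the website container locate the database without hard-coding an IP. A hypothetical use from inside the website container (assuming a MySQL client were installed, which the stock ubuntu image does not include):

root@7ec96444baa6:/# mysql -h "$DATABASE_PORT_3306_TCP_ADDR" -P "$DATABASE_PORT_3306_TCP_PORT" -u root -p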

Another thing happened on the website container during this linking process, shown below.

 

root@7ec96444baa6:/# cat /etc/hosts
172.17.0.3    7ec96444baa6
127.0.0.1    localhost
::1    localhost ip6-localhost ip6-loopback
fe00::0    ip6-localnet
ff00::0    ip6-mcastprefix
ff02::1    ip6-allnodes
ff02::2    ip6-allrouters
172.17.0.2    database 9233d556669e

An entry has been added to the /etc/hosts file for the database container, along with its IP address. So you can ping the database container using the container name itself.

root@7ec96444baa6:/# ping database
PING database (172.17.0.2) 56(84) bytes of data.
64 bytes from database (172.17.0.2): icmp_seq=1 ttl=64 time=0.074 ms
64 bytes from database (172.17.0.2): icmp_seq=2 ttl=64 time=0.053 ms

 

 

The number of port, IP and protocol environment variables depends on how many ports are exposed by the container you link to. If the database container exposed two different ports, then after linking, the website container's environment would contain the corresponding variables for both exposed ports, following the same naming pattern.

Note that pinging the source container will not work if you have --icc=false set. However, you can still connect directly to port 3306 of the database container from the website container, because linking adds the specific iptables rules needed for the exposed ports.
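
You can test such a TCP connection from the website container without installing any extra tools, using bash's built-in /dev/tcp pseudo-device (a sketch; it only succeeds if something is actually listening on 3306 in the database container, which our bare ubuntu container is not, so treat this as illustrative):

root@7ec96444baa6:/# (echo > /dev/tcp/database/3306) && echo "3306 reachable"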

 

Although we discussed Docker's default bridge network at the beginning, we have not yet created our own bridge for Docker to use. A custom bridge is handy when you need to define your own bridge network configuration, or when you already have bridges on the host that Docker should use.

To set up a custom bridge, you first need to delete the docker0 bridge created by the default Docker installation, if it exists. The first step is to install the bridge-utils package on your system.

 

root@ip-10-1-136-71:~# apt-get install bridge-utils

The same package can be installed on RHEL/CentOS using yum install bridge-utils.

Now let's stop docker service as shown below.

 

root@ip-10-1-136-71:~# stop docker
docker stop/waiting

The equivalent command on RHEL/CentOS systems is service docker stop. Now let's bring the default docker0 bridge down and then delete it, as shown below.

 

root@ip-10-1-136-71:~# ip link set dev docker0 down
root@ip-10-1-136-71:~# brctl delbr docker0

 

The next step is to flush the iptables POSTROUTING rules left over from the old docker0 bridge, if any. This can be done as shown below (note that this flushes the entire POSTROUTING chain, so be careful on a host that has other NAT rules).

root@ip-10-1-136-71:~# iptables -t nat -F POSTROUTING

 

Now let's create our own custom bridge as shown below.

root@ip-10-1-136-71:~# brctl addbr mybridge0
root@ip-10-1-136-71:~# ip addr add 192.168.0.0/16 dev mybridge0
root@ip-10-1-136-71:~# ip link set dev mybridge0 up

 

Note that we assigned the network address 192.168.0.0 itself to the bridge above; conventionally you would assign a usable host address such as 192.168.0.1/16 instead. Let's now confirm that the newly created bridge is up, using the ifconfig command as shown below.

root@ip-10-1-136-71:~# ifconfig
eth0      Link encap:Ethernet  HWaddr 06:c8:1f:44:eb:01  
          inet addr:10.1.136.71  Bcast:10.1.136.255  Mask:255.255.255.0
          inet6 addr: fe80::4c8:1fff:fe44:eb01/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:98973 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13039 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:132955084 (132.9 MB)  TX bytes:1513378 (1.5 MB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1728 (1.7 KB)  TX bytes:1728 (1.7 KB)

mybridge0 Link encap:Ethernet  HWaddr c6:b9:8d:19:c8:92  
          inet addr:192.168.0.0  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::c4b9:8dff:fe19:c892/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:258 (258.0 B)

 


So we have our bridge mybridge0 running with our defined subnet 192.168.0.0/16. Now let's point Docker at the newly created bridge by modifying its startup options. This is done by editing /etc/default/docker on Ubuntu systems and /etc/sysconfig/docker on RHEL/CentOS-based systems. My /etc/default/docker file looks like the below.

 

root@ip-10-1-136-71:~# cat /etc/default/docker
DOCKER_OPTS="--bridge=mybridge0"

 

 

So we asked Docker to use the bridge mybridge0 instead of the default docker0 bridge (which we deleted earlier). Start the Docker service back up, and then let's start the old containers we created (our database and website containers).

 

root@ip-10-1-136-71:~# docker start 9233d556669e
9233d556669e
root@ip-10-1-136-71:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
9233d556669e        ubuntu              "/bin/bash"         59 minutes ago      Up 2 seconds        0.0.0.0:3306->3306/tcp   database
root@ip-10-1-136-71:~# docker exec -it 9233d556669e /bin/bash
root@9233d556669e:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:c0:a8:00:02  
          inet addr:192.168.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:c0ff:fea8:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

 

 

See that? The same database container is now using an IP address from our custom bridge subnet (192.168.0.2, which is part of the 192.168.0.0/16 range we assigned when creating mybridge0). Let's also confirm that Docker has set up the iptables rule sets correctly.

 

root@ip-10-1-136-71:~# iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
MASQUERADE  all  --  192.168.0.0/16       anywhere            
MASQUERADE  tcp  --  192.168.0.2          192.168.0.2          tcp dpt:mysql

Chain DOCKER (2 references)
target     prot opt source               destination         
DNAT       tcp  --  anywhere             anywhere             tcp dpt:mysql to:192.168.0.2:3306

 

Cool. So Docker has already adjusted the POSTROUTING rule sets to match the addresses of our newly created bridge subnet. That's great: we now have a custom bridge that Docker is using.

Docker also provides a --bip option (for example, --bip=192.168.0.1/16), with which you can set the IP address and subnet of the default docker0 bridge without creating a bridge yourself.
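
On newer Docker installations this setting usually goes into /etc/docker/daemon.json instead of the init-script defaults file, as also mentioned in the comments at the end of this article (a sketch):

{
  "bip": "192.168.0.1/16"
}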

 

Apart from the default bridge, custom bridges and the container linking options we have seen so far, Docker has another interesting networking feature called host networking.

With the host networking option to docker run, the container directly uses the network of the host system. In other words, the container does not create a separate network stack for itself: it still has its own process namespace, but not its own network. If you run the ifconfig command from inside the container, the interface listing you get is nothing but the host's network interface listing.

If you are using host networking, publishing ports with the -p option makes no sense, because any service running in the container sits directly on the host network. Everything is already exposed.

Let's start a container with the host networking option and see what it looks like.

 

root@ip-10-1-136-71:~# docker run -it -d --net=host ubuntu
70072d99a18f7b096f52b713e319ca233cb7a2e09c33676417250cc539f1197e
root@ip-10-1-136-71:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                    NAMES
70072d99a18f        ubuntu              "/bin/bash"         2 seconds ago       Up 1 seconds                                 distracted_colden
9233d556669e        ubuntu              "/bin/bash"         About an hour ago   Up 31 minutes       0.0.0.0:3306->3306/tcp   database
root@ip-10-1-136-71:~# docker exec -it 70072d99a18f /bin/bash
root@ip-10-1-136-71:/# ifconfig
eth0      Link encap:Ethernet  HWaddr 06:c8:1f:44:eb:01  
          inet addr:10.1.136.71  Bcast:10.1.136.255  Mask:255.255.255.0
          inet6 addr: fe80::4c8:1fff:fe44:eb01/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:100306 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13841 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:133046028 (133.0 MB)  TX bytes:1627584 (1.6 MB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1728 (1.7 KB)  TX bytes:1728 (1.7 KB)

mybridge0 Link encap:Ethernet  HWaddr 56:c5:9d:3c:6e:8d  
          inet addr:192.168.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::c4b9:8dff:fe19:c892/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:536 (536.0 B)  TX bytes:648 (648.0 B)

vetha244cf5 Link encap:Ethernet  HWaddr 56:c5:9d:3c:6e:8d  
          inet6 addr: fe80::54c5:9dff:fe3c:6e8d/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

 

 

Host networking is enabled with the --net=host option to docker run. Notice that the hostname of the container did not change (it is the same as the host itself), and the ifconfig output shows the host's network interfaces, including the bridge. This is handy if you want to avoid the bridge and separate IP addresses for your containers entirely. This option is also best suited if you plan to run only one container per host.

In the next article, we will discuss Docker's inter-host networking features.


Comments

Hi,

I have followed your steps, but when running Docker my containers are using the default bridge network and not the custom network IP. Is there any way to force Docker to use the custom network by default, via the compose file or any other method?


Hi Swapnil,

To just change the subnet used by docker0, you can simply use the bip setting (generally in the /etc/docker/daemon.json file). Example daemon.json content is shown below.

{
  "bip": "172.19.0.0/16"
}

Please provide more details about the environment, like OS and config file contents. Let me know if that worked.

Thanks

Sarath

Hi Sarath!

Thanks for your reply. We have a React JS app with Node as the backend and Mongo as the database. I am in the process of dockerizing the app. The OS is CentOS 7.
I have three containers: one for React JS (frontend), one for Node JS (backend) and one for Mongo (db).
I am using the default bridge network and did not modify anything in the configuration files.
When I run docker-compose up --build, I do not get any errors except a few warnings from Node. I can reach the backend from the browser at http://127.0.0.1:9000, but I am stuck at a point where my frontend is not communicating with the backend, so the app is not launching at http://127.0.0.1:8081.

While looking for solutions I came across your article, which is very informative! Any feedback from you on this will be really helpful.

If you want to have a look at the Dockerfile or the docker-compose file, please let me know.

Thanks & Regards
Swapnil Kulkarni
reach me at swwapnilk24@gmail.com


Hi Swapnil,

The simplest method to interconnect containers when using docker-compose locally for testing is the link method. For example, imagine we have two images, one backend and one frontend. Create a link as shown below (an example).

frontend:
  image: frontend:latest
  ports:
    - "8081:8081"
  links:
    - backend
backend:
  image: backend:latest
  ports:
    - "9000:9000"

Now inside the frontend configuration (i.e., wherever you mention the backend location), you can use the name backend followed by port 9000 (backend:9000). The name backend will resolve to your actual backend container.

Thanks

Sarath

Hi Sarath

Our docker compose presently looks like this!

version: "3"
services:
  front_end:
    container_name: front_end
    restart: always
    build: /frontend/app
    ports:
      - "8081:8081"
    volumes:
      - /frontend/app:/usr/src/app
      - /usr/src/app/node_modules
    links:
      - app
    environment:
      NODE_ENV: production
  app:
    container_name: app
    restart: always
    build: /api/backend
    ports:
      - "9000:9000"
    volumes:
      - /api/backend:/usr/src/app
      - /usr/src/app/node_modules
    links:
      - mongo
    environment:
      NODE_ENV: production
      MONGO_URL: mongodb://mongo:27017/api

  mongo:
    container_name: mongo
    image: mongo:latest
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"

We are linking the containers using links, but we are still not able to get connectivity between the frontend and the backend.


Did you try a port connection from container to container after logging into them? What error are you getting? (Try with telnet; you will have to install it inside the container.)

Also, what about executing "iptables -F" on the machine where you are running these? CentOS/Red Hat comes with a default rule set.

Thanks
Sarath

Hi Sarath,

Is it possible to add one of the host's network interfaces to a bridge-mode container's network?

Can you please help me understand how a Docker container in bridge mode can access the host's backend network to connect to other hosts.
