Are you a software developer, a DevOps engineer, an IT student, or simply a tech enthusiast? If yes, my guess is that you already know what containers are and how the Docker project is making them better. Unlike virtual machines, which each run their own kernel on top of a hypervisor, containers share the host's single kernel to run multiple isolated operating system environments. The upside to using containers is definitely lower memory and CPU consumption, which means more applications on fewer resources. In addition, containers provide higher agility in developing, testing and deploying software. This ultimately reduces the total CAPEX and OPEX of running a cloud or data center.
What is Docker?
According to Docker Inc., Docker is a container-based virtualization technology that is lightweight, open and secure by default. It runs a Docker engine on top of the host operating system and lets software binaries and libraries run on top of that. These containers wrap a software package in a complete filesystem that includes everything needed to run it: code, runtime, system tools, system libraries, and so on. This ensures the application can be transported and deployed easily and as-is.
Docker Installation
For this post, I’m going to use Ubuntu 16.10 as my host operating system. You can find corresponding installation guides for other platforms in this documentation. Here are the commands I entered:
[code language="bash"]
# Adding a docker repo with codename xenial (16.04). It also works for Yakkety Yak (16.10)
sajjan@learner:~$ sudo vi /etc/apt/sources.list.d/docker.list
deb https://apt.dockerproject.org/repo ubuntu-xenial main
# Note: the repository's GPG key also has to be added (see the Docker installation docs)
# Refresh the APT package index so the new repository is picked up
sajjan@learner:~$ sudo apt-get update
# Remove any existing docker packages
sajjan@learner:~$ sudo apt-get purge lxc-docker
# Verify that docker-engine will be installed from the Docker repository
sajjan@learner:~$ sudo apt-cache policy docker-engine
# Install docker-engine
sajjan@learner:~$ sudo apt-get install docker-engine
[/code]
These installation steps make sure we’re setting up the latest version from the Docker repository. If you’d rather follow an easier process, simply enter the command “sudo apt-get install docker-engine”. This will install the Docker version that is currently available in Ubuntu’s own repository. After installation, let’s start the Docker service and begin using it.
[code language="bash"]
# Start docker engine
sajjan@learner:~$ sudo systemctl start docker
# Get docker’s installed version
sajjan@learner:~$ sudo docker version
[/code]
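Optionally, we can also make the Docker service start on boot and confirm the engine is healthy. A quick sketch, assuming a systemd-based system like Ubuntu 16.10:
[code language="bash"]
# Enable the docker service so it starts automatically on boot (optional)
sajjan@learner:~$ sudo systemctl enable docker
# Check the service status and the engine's details
sajjan@learner:~$ sudo systemctl status docker
sajjan@learner:~$ sudo docker info
[/code]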
Next, let’s also configure our system so that we don’t need to enter “sudo” every time we use Docker. This is an optional step, but a really useful one, because we’ll be using the “docker” command quite often and it won’t be pleasant to prefix it with “sudo” each time. It’s done by adding our user to the docker group as follows:
[code language="bash"]
# Add a group called docker. It generally gets added during Docker installation
sajjan@learner:~$ sudo groupadd docker
# Modify current user to be member of docker group
sajjan@learner:~$ sudo usermod -aG docker $USER
[/code]
For this change to take effect, we must log out and log back into the system. After re-login, we can run docker without prefixing it with “sudo”.
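Alternatively, if you don’t want to log out right away, the new group membership can be activated in the current shell. A small optional sketch:
[code language="bash"]
# Apply the new group membership in the current shell (alternative to re-login)
sajjan@learner:~$ newgrp docker
# Verify that docker now works without sudo
sajjan@learner:~$ docker ps
[/code]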
Getting Started With Docker Containers
Now, let’s perform some basic container actions:
[code language="bash"]
# Running simple hello-world container
sajjan@learner:~$ docker run hello-world
# Search for available images for CentOS container
sajjan@learner:~$ docker search centos
# Pull the CentOS image. Since no registry or tag is specified, it’ll pull centos:latest from Docker Hub
sajjan@learner:~$ docker pull centos
# List locally available container images
sajjan@learner:~$ docker images
[/code]
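Alongside these, a couple of housekeeping commands are worth knowing. Here’s a small optional sketch for cleaning up containers and images, using the standard Docker CLI:
[code language="bash"]
# List all containers, including stopped ones
sajjan@learner:~$ docker ps -a
# Remove the stopped hello-world container (the most recently created one)
sajjan@learner:~$ docker rm `docker ps -l -q`
# Remove an image that is no longer needed
sajjan@learner:~$ docker rmi hello-world
[/code]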
Now that we have an image for CentOS, let’s go ahead and run it. To get interactive terminal access to this container, let’s run it as follows:
[code language="bash"]
sajjan@learner:~$ docker run --name mycent -it centos
[root@96ee7cce09b7 /]# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
# Enter Ctrl+P+Q to switch to host’s shell
[/code]
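Note that Ctrl+P+Q detaches from the container while leaving it running. We can reattach later, or open another shell in it with “docker exec”. A quick example, reusing the “mycent” name from the run command above:
[code language="bash"]
# Confirm the container is still running after detaching
sajjan@learner:~$ docker ps
# Open another interactive shell inside the running container
sajjan@learner:~$ docker exec -it mycent /bin/bash
[/code]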
Since this is just a core version of CentOS, we won’t be able to run our usual system commands like ifconfig, ip, ssh, and so on. To get them working, we have to install their packages inside this container and then commit it to generate a new image. Alternatively, we can build a custom image using a Dockerfile.
Now, let’s install the packages using Yum. If you’re wondering how networking inside the container works: by default, the Docker engine creates a bridge interface on the host and assigns the containers’ virtual interfaces addresses from the 172.17.0.0/16 range. We can verify this by viewing the interface details on the host and trying to ping in both directions. I’ll be exploring container networking in detail in future posts; for now, let’s just understand that Docker provides bridged networking by default and containers can reach the external network through this bridge.
[code language="bash"]
sajjan@learner:~$ ifconfig | more
sajjan@learner:~$ docker network inspect bridge
# Ping container
sajjan@learner:~$ ping 172.17.0.2
# Attach to docker container named "mycent"
sajjan@learner:~$ docker attach mycent
# Ping host
[root@96ee7cce09b7 /]# ping 172.17.0.1
[root@96ee7cce09b7 /]# yum install -y iproute openssh-server
[/code]
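Instead of guessing the container’s IP address, we can also ask the Docker engine for it directly. A small sketch using “docker inspect” with a Go-template format string; in my case it returns the same address we pinged above:
[code language="bash"]
# Print the bridge IP address assigned to the "mycent" container
sajjan@learner:~$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' mycent
172.17.0.2
[/code]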
Now, I’ve modified the initial image and would like to build a custom image out of it so that I can reuse it later. We can do this by committing the container:
[code language="bash"]
sajjan@learner:~$ docker stop mycent
# Commit the container to build custom image
sajjan@learner:~$ docker commit -m "sshd + ip" -a "SSH N IP" `docker ps -l -q` sajjanbh/centos:v1
sajjan@learner:~$ docker images
[/code]
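To double-check that the commit worked, we can run a throwaway container from the new image, for example:
[code language="bash"]
# Run a temporary container from the committed image and confirm the ip command is now available
sajjan@learner:~$ docker run -it --rm sajjanbh/centos:v1 ip addr
[/code]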
Working with Dockerfiles
Lastly for this post, let’s explore the basics of the Dockerfile. A Dockerfile is similar to the installation scripts we use to provision and deploy operating systems or servers: it’s a batch of instructions that are executed to build a container image. Here’s what my example Dockerfile looks like:
[code language="bash"]
sajjan@learner:~$ mkdir -p dockerfiles/centos
sajjan@learner:~$ vi dockerfiles/centos/Dockerfile
FROM centos
MAINTAINER sajjan <sajjanbhattarai@gmail.com>
# Install ssh server and ip commands
RUN yum install -y openssh-server iproute
# Backup original sshd config file
RUN cp /etc/ssh/sshd_config /etc/ssh/sshd_config.orig
# Remove write permission from the backed-up sshd config file
RUN chmod a-w /etc/ssh/sshd_config.orig
RUN mkdir /var/run/sshd
# Set login password for user root in container
RUN echo 'root:najjas123' | chpasswd
# Allow root login through SSH
RUN sed -i 's/#PermitRootLogin/PermitRootLogin/' /etc/ssh/sshd_config
# Generate keys for SSH. Without these, SSH daemon won’t start
RUN /usr/bin/ssh-keygen -q -t rsa -f /etc/ssh/ssh_host_rsa_key -C '' -N ''
RUN /usr/bin/ssh-keygen -q -t dsa -f /etc/ssh/ssh_host_dsa_key -C '' -N ''
# Start SSHD daemon in container, so that we can login into it with SSH
CMD ["/usr/sbin/sshd", "-D"]
# Expose the container's SSH port
EXPOSE 22
# Build a custom image of CentOS using the Dockerfile
sajjan@learner:~$ docker build -t sajjanbh/centsshd:v1 dockerfiles/centos/
# List docker images, it’ll contain our newly built image
sajjan@learner:~$ docker images
# Start the new container
sajjan@learner:~$ docker run -d --name centsshd -P sajjanbh/centsshd:v1
# List running containers and get the host port mapped to the container, e.g. 32771
sajjan@learner:~$ docker ps
# SSH into container
sajjan@learner:~$ ssh root@localhost -p 32771
[root@96ee7cce09b7 /]#
[/code]
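As a side note, instead of reading the mapped port from the “docker ps” output, the “docker port” command prints it directly. And once we’re done experimenting, we can stop and remove the container. A quick sketch (32771 is just the example mapping from above):
[code language="bash"]
# Show which host port is mapped to the container's SSH port
sajjan@learner:~$ docker port centsshd 22
0.0.0.0:32771
# Stop and remove the container when finished
sajjan@learner:~$ docker stop centsshd
sajjan@learner:~$ docker rm centsshd
[/code]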
Here’s the whole tutorial video in action:
Well, this is it! I believe this article has fulfilled its purpose as an introduction and getting-started guide for Docker. I hope this has been informative for you. Please don’t hesitate to like, comment, share and subscribe to this blog. And as always, thanks for reading!