“Let’s dockerize it” is a phrase that has gained a lot of popularity among techies in the past few years. Chances are you have heard it too, whether you are a blockchain developer, an AI developer, a product owner, a DevOps engineer or even a QA tester.
Docker essentially makes life easier for developers by making it effortless to deploy applications across different infrastructures and environments. It allows developers to pack an application with all its ingredients, such as libraries, code, drivers and other dependencies, and ship it out as a single package that can be deployed on any other machine, regardless of the customized settings the target machine might have. A Docker container is similar to a virtual machine, but instead of creating a whole virtual operating system, it lets the application share the kernel of the system it is running on. The application only needs to be packaged with the components that are not already present on the host, which reduces the size of the package and gives a significant boost to its performance.
Docker can host multiple containers and instances to build a unique blockchain network. For instance, Hyperledger Fabric uses Docker containers to host the different roles such as Peer, Orderer, and Database/Ledger. Hyperledger Fabric also uses Docker to run chaincode in an isolated container, which affords extra security. Since the whole fabric runs all of its roles in Docker containers, it becomes easy to host the Fabric API and run it in production without relying on traditional servers. Because each role ships as executable code inside a container, Fabric can use container orchestration to build a complete blockchain network stack.
Using Docker has several benefits, as explained below. If you are already aware of them, you can skip ahead to the important security measures to consider when using Docker containers.
Highly portable: One of the features that makes Docker really popular is its portability. Docker containers can run inside a Google Compute Engine instance, an Amazon EC2 instance, VirtualBox or a Rackspace server, provided the host OS supports Docker. Because of this, a container running on an Amazon EC2 instance can be ported between environments, say to VirtualBox, with the same consistency and functionality. Beyond AWS and GCP, Docker works well with other IaaS providers such as OpenStack and Microsoft Azure, and can also be used with configuration managers like Chef, Puppet, and Ansible.
Continuous deployment and testing: If you want to upgrade during a product’s release cycle, you can make the necessary changes to the Docker containers, test them and roll the changes out to your existing containers. This kind of flexibility is one of the key advantages of using Docker. It allows you to build, test and release images that can be deployed across multiple servers.
Easier program isolation: Removing applications from a server is usually a painful process that can cause dependency conflicts. Docker ensures clean app removal since each application runs in its own container. If an application is no longer needed, you can simply delete its container; it won’t leave any temporary files on the host OS.
Environment standardization and version control: Imagine that you are performing a component upgrade that breaks your environment. It is easy to roll back to a previous version of your Docker image, and the whole process can be tested in a few minutes. Compared to VM backup and image creation processes, Docker is fast, allowing you to make quick replications and achieve redundancy.
Every technology comes with its own vulnerabilities and security issues. When using Docker, the host machines and the containers they run should undergo a security audit. This basic audit helps the Docker machine run securely and reduces the attack surface. Some of the steps to secure Docker are discussed below.
Docker containers were originally built on LXC, which relies on the same kernel namespace mechanism as OpenVZ. When the docker run command is triggered, separate namespaces (user ID, process ID, group ID, etc.) are assigned to each container hosted on the same kernel. By default, however, the container’s root user maps to the host’s root user, which can lead to privilege escalation: all containers may end up running as a highly privileged root user. It is therefore important to manually set the user namespace for each stack that runs on a particular kernel.
(To assign a namespace, refer to the official Docker documentation – https://docs.docker.com/engine/security/userns-remap/#enable-userns-remap-on-the-daemon )
As mentioned in the official documentation, set the UID and GID ranges, and once the container is up and running, list the namespace that has been assigned to that particular container.
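As a minimal sketch (following the userns-remap page linked above, and assuming the daemon config lives at /etc/docker/daemon.json), enabling remapping with the default dockremap user looks roughly like this:
# /etc/docker/daemon.json -- enable user-namespace remapping
{
  "userns-remap": "default"
}
# restart the daemon, then check that the container's root maps to a non-root host user
$ sudo systemctl restart docker
$ docker run --rm -d alpine sleep 60
$ ps -eo user,args | grep "sleep 60"   # the process shows a remapped, non-root user on the host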
Access control in Docker is based on the user ID (UID) and group ID (GID). When pulling a Docker image (downloading an application from Docker Hub) and running it, it is important to pass a UID and GID to the run command. By default, the container may run as root on the kernel, which allows it to access other resources on the Docker host. By setting a UID and GID, not everyone who pulls a particular image can run it with full privileges, and even once the container is up and running there is a difference between the real root user and the root user inside the container: you may run the container as root, but to the host it is a non-root user.
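A minimal sketch of running a container under a non-root identity, assuming a host UID and GID of 1000:
$ docker run --rm --user 1000:1000 alpine id   # reports uid=1000 gid=1000 inside the container
# the same can be baked into an image with the USER instruction in a Dockerfile
USER 1000:1000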
While pulling an image, it is important to check its legitimacy: whether the image contains malicious code or behaves unpredictably. Downloading such images creates a chance of a breach of the Docker host.
Docker Content Trust (DCT) is Docker’s built-in mechanism for checking image legitimacy. The publisher of the image must sign it with DCT so that consumers of the image can verify it.
To configure signature verification for the images you pull, export the following environment variable:
$ export DOCKER_CONTENT_TRUST=1
(To configure DCT for the Docker Engine itself for signature verification, the relevant settings need to be added to daemon.json.)
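As a sketch, with content trust enabled the client refuses unsigned tags (the second image name below is only an example):
$ export DOCKER_CONTENT_TRUST=1
$ docker pull alpine:latest            # succeeds: the official tag is signed
$ docker pull example/unsigned:latest  # rejected because no trust data exists for the tag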
Hosting containers on a laptop or a low-powered computer may not be a good idea if you need to scale the number of containers. Docker can instead be hosted in the cloud, for example on an AWS Ubuntu machine, where anyone can reach the containers through the DNS entry. The HTTP routing therefore needs to be protected from packet capture and packet spoofing by moving from HTTP to HTTPS.
This can be configured by adding a certificate, just like a traditional website. Docker certificates are based on the X.509 standard and add TLS to the HTTP connection. We can generate certificates ourselves with OpenSSL (not recommended here because of the Heartbleed vulnerability) or obtain them from a Certificate Authority for a more secure connection. This mechanism prevents data leakage and keeps sniffers from reading the data in transit. It is important to run the docker command with the necessary TLS flags.
TLS can be used to reach the endpoint safely, but it is also important to secure the Docker registry and daemon. Generating your own certificates with OpenSSL (again, not recommended due to the Heartbleed vulnerability) takes time, but you can instead obtain one from Let’s Encrypt or another CA (Certificate Authority) issuer. Once the certificate has been created, add it to the certificate folder under /etc/docker/certs.d. (Make sure the added certificate has the .crt extension so Docker can identify it when it is up and running.)
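A minimal sketch of protecting the daemon endpoint with TLS, assuming certificates named ca.pem, server-cert.pem and server-key.pem (and a matching client pair) have already been issued:
# daemon side: only accept clients whose certificates are signed by the trusted CA
$ dockerd --tlsverify \
    --tlscacert=ca.pem \
    --tlscert=server-cert.pem \
    --tlskey=server-key.pem \
    -H=0.0.0.0:2376
# client side: present a client certificate when talking to the remote daemon
$ docker --tlsverify \
    --tlscacert=ca.pem \
    --tlscert=cert.pem \
    --tlskey=key.pem \
    -H=your-docker-host:2376 version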
AppArmor is a Linux kernel security mechanism that works by creating profiles that restrict access to paths. It ships by default on many Linux systems, so before creating and running Docker containers, load and configure AppArmor. As an administrator, AppArmor lets you audit profile management and define file permissions.
An AppArmor profile governs privileges such as reading, writing, memory mapping and creating executables. Docker ships with a default profile called docker-default, which is loaded as soon as a container starts running. We can override the default by writing a new profile and pointing the container to its path.
To load a new profile, create one using the AppArmor profile syntax and run the following command:
$ apparmor_parser -r -W /path/to/your_profile
(-r replaces/reloads the profile, -W writes it to the cache)
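Once loaded, the profile can be attached to a container by its declared name (your_profile below is only an example and must match the name inside the profile file):
$ docker run --rm -it --security-opt apparmor=your_profile debian bash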
AppArmor can be stopped and the newly added profile unloaded using the following commands:
# stop apparmor
$ /etc/init.d/apparmor stop
# unload the profile
$ apparmor_parser -R /path/to/profile
# start apparmor
$ /etc/init.d/apparmor start
By default, AppArmor produces audit logs, which we can store at a location of our choice. AppArmor desktop notifications can also be enabled to alert on violations of a profile path (i.e. when a breach occurs). (Detailed flag usage and descriptions for AppArmor can be found here: https://wiki.archlinux.org/index.php/AppArmor )
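As a sketch, the aa-notify helper from AppArmor’s userspace tools can surface those violations as desktop notifications, assuming audit logging goes to /var/log/audit/audit.log:
$ sudo aa-notify -p -f /var/log/audit/audit.log   # -p pops a desktop notification for each new denial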
Seccomp also deals with kernel security and restricts the system calls a process can make. As with AppArmor, the kernel must already be configured with seccomp support. To check the kernel configuration, use the following command:
$ grep CONFIG_SECCOMP= /boot/config-$(uname -r)
CONFIG_SECCOMP=y
Like AppArmor, Docker applies a default seccomp profile as soon as a container starts running, blocking 60+ system calls. This default can be overridden by creating a new profile and setting it manually with the --security-opt flag. Use the following command to specify a seccomp profile:
$ docker run --rm \
    -it \
    --security-opt seccomp=/path/to/seccomp/profile.json \
    hello-world
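As a minimal sketch of what such a profile.json could contain, using a whitelist approach where any syscall not listed returns an error (a real workload needs a much longer allow-list):
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "execve", "brk", "exit", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}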
To run a container without seccomp, use the following command:
$ docker run --rm -it --security-opt seccomp=unconfined debian:jessie \
    unshare --map-root-user --user sh -c whoami
There are network configuration settings in Docker that should be secured before running containers. In Docker, swarm networking treats a collection of nodes as a cluster, and the nodes communicate with one another much like the services in a microservice stack. These nodes can be secured using an overlay network, which helps manage the cluster network and encrypt communication over the TCP connection. By default, when a swarm is created, data traffic is managed by the ingress network and docker_gwbridge. We can add encryption by passing the --opt encrypted flag to the docker network create command, which makes data transfer between two different containers secure.
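A minimal sketch, assuming a swarm has already been initialised and using a hypothetical network and service name:
$ docker network create \
    --driver overlay \
    --opt encrypted \
    secure_net        # hypothetical network name
$ docker service create \
    --name web \
    --network secure_net \
    nginx             # traffic between this service's containers now travels over the encrypted overlay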
Docker can run on any computer, on any cloud and infrastructure. The flexibility, portability, and simplicity it offers are the main reasons why Docker has become so popular in such a short period. But it is important that you never let your guard down. You have to be extra careful while running Docker containers. It is better to be vigilant now than to suffer later.