DevOps

Docker for Java Developers: Introduction

This article is part of our Academy Course titled Docker Tutorial for Java Developers.

In this course, we provide a series of tutorials so that you can develop your own Docker-based applications. We cover a wide range of topics, from the Docker command line to development, testing, deployment and continuous integration. With our straightforward tutorials, you will be able to get your own projects up and running in minimal time. Check it out here!

1. Introduction

If you have not heard about Docker, then you have probably spent the last few years on some other planet of the Solar System. Docker stormed into our industry and in no time dramatically changed many well-established software development and operational practices and patterns. These days pretty much every organization is using Docker (or an equivalent of it), the brave ones even in production, and its adoption is growing at a fantastic pace.

In this tutorial we are going to talk about how Docker can help us, Java developers, accomplish our day-to-day tasks. The tutorial consists of several parts in which we are going to touch upon different aspects of Docker and its applicability to Java application development.

We will start off by learning the basics:

  • Why we should invest our time in learning Docker
  • Getting to know the Docker command line tooling
  • Using the REST façade to talk to Docker

Then we will move on to the topics specific to Docker in the context of Java application development:

  • Building
  • Developing
  • Testing
  • Deploying
  • Continuous Integration / Delivery

The material we will be going through assumes that you have some basic familiarity with Docker and have at least version 17.06.1-ce already installed on your machine (it does not really matter whether you are on Linux, Windows or Mac).

2. Linux Containers: The Big Bang

The story which made Docker and friends possible begins back in 2006, when a couple of awesome engineers at Google started work on a feature under the name “process containers”. It was later rebranded to “control groups” (or cgroups, as we know them today) and was merged into the Linux kernel starting from version 2.6.24, released in January 2008.

Essentially, cgroups is a Linux kernel feature that limits, accounts for, prioritizes and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of processes. Most importantly, the Linux kernel does not need to start any virtual machines or hypervisors to support all that. Along with namespaces, another very powerful feature of the Linux kernel, cgroups serve as a fundamental building block for containers: operating system-level virtualization.
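
As a small taste of what this looks like in practice, here is a quick sketch assuming a Linux host with the cgroups v1 hierarchy mounted at /sys/fs/cgroup (the demo group name is arbitrary):

# Create a new cgroup and cap its memory usage at 256 MB
sudo mkdir /sys/fs/cgroup/memory/demo
echo $((256*1024*1024)) | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
# Move the current shell into the cgroup; every child process inherits the cap
echo $$ | sudo tee /sys/fs/cgroup/memory/demo/cgroup.procs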

Container-based virtualization is exceptionally lightweight compared to traditional virtual machines: containers impose little to no overhead, share the same operating system kernel and do not require special hardware support to perform efficiently. In other words, containers are a new model for wrapping applications so they can run in isolation on a shared operating system. Although not without limitations, containers have become mainstream in the virtualization space nowadays.

To be fair, not all Linux/Unix operating systems use the same mechanisms for operating system-level virtualization. To mention a couple of examples, FreeBSD has jails for such purposes, while Solaris has the concept of zones.

So, how do you get started with containers? Well, you may have heard abbreviations like LXC or LXD, which are essentially the entry points for container management on most Linux/Unix distributions. The thing is, those are somewhat low-level and not easy to start with. But luckily we have Docker and rkt, the application-centric container management engines, which right from their inception became the de facto choices for application developers across the globe.

3. Docker: Containers for the Masses

So what is Docker, essentially? It started off as a powerful and easy-to-use container engine, but these days it would be fair to call it a full-fledged container management platform. It is written in Go and takes advantage of Linux kernel features (mostly namespaces and cgroups) to do the job. The community edition is downloadable free of charge, whereas the enterprise edition is available through subscription offerings. To set the stage, throughout this tutorial we are going to use the features of the community edition only.

3.1. Architecture

From the architectural perspective, Docker consists of three main parts. At the heart of Docker sits the daemon process, dockerd. In turn, dockerd relies on another daemon, containerd, as the abstraction layer to interface with the Linux kernel namespaces and cgroups. The last piece of the puzzle is a set of command line tools (such as docker and docker-compose), known as the Docker CLI, which talk to the dockerd daemon through the Docker Engine API it exposes.
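
To see this layering for yourself (assuming a local Linux installation exposing the default Unix socket), you can bypass the CLI and query the Engine API directly:

# The docker CLI is just one client of the Engine API served by dockerd;
# plain curl over the Unix socket reaches the very same endpoint.
curl --unix-socket /var/run/docker.sock http://localhost/version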

Each of the Docker components mentioned above provides so many interesting features and capabilities that it deserves its own tutorial; our focus, though, will be primarily centered on the Docker Engine API and the Docker CLI family (docker and docker-compose).

One of the strongest arguments in favor of choosing Docker is that it runs natively on the majority of Linux distributions, but it does not stop there. The macOS and Windows operating systems are also supported pretty well, with a few caveats to be aware of.

In order to understand how Docker works, we have to unveil a bit of its internal model. At any time, if you feel like not enough details have been uncovered about the subject, please do not hesitate to consult the official documentation.

3.2. Images

In Docker, everything you do revolves around managing specific objects. Images and containers are arguably the most important ones; however, there are others, like volumes, networks and plugins, to name a few. We are going to see all of them in action in different sections of the tutorial, starting with images and containers right away.

An image can be treated as a set of instructions on how to create a container. In Docker, one image can be inherited from (or based on) another image, adding additional instructions on top of the base ones. Each image consists of multiple layers, which are effectively immutable. Under the hood these layers are backed by dedicated file systems (UnionFS by default, but others can be plugged in as well), making them very lightweight and fast.

So … how can you create such images for your own needs? It is actually pretty simple: to build your own image in Docker you create a Dockerfile, which is just a text document that defines the set of steps (or instructions) required to assemble the image (and run it later). Along the way you may decide to create completely customized images yourself or, in most cases, reference images created by others and published in a registry. To give you a sneak peek at how Dockerfiles may look, here is a quick example:

# Start from the minimal Alpine Linux base image
FROM alpine:3.6
# Print the kernel information when a container is started
CMD ["uname", "-a"]

Each instruction in a Dockerfile creates a new layer, so in the end each image is a list of immutable layers, stacked on top of each other, that represent the filesystem differences.
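
You can inspect that stack yourself with the docker history command, which lists the layers of an image together with the instruction that produced each of them:

# Show the layers of the image built above, newest first
docker history uname-demo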

In the upcoming sections we will be writing quite a lot of different Dockerfiles, closely following the best practices and recommendations.

3.3. Containers

When you have your images ready, it is time to bring them to life. This is where containers appear on the stage: they are runnable instances of images. You can run as many of them as you want, assuming the target host (where Docker is installed) has enough resources. All of that is feasible because containers are well isolated from each other, at least by default (though you have quite a lot of options to control that).

When Docker creates an instance of a container, it also adds a new writable layer on top of the underlying stack of image layers, often called the container layer. All the changes made to the running container (such as creating, deleting or modifying files) are written to this thin layer.

It is important to think about containers as ephemeral: when a container is terminated (stopped and removed), any changes to its state disappear (unless they are stored in persistent storage).
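
A quick way to observe both the container layer and its ephemeral nature (a small sketch, with demo being an arbitrary container name):

# Create a file inside a short-lived container ...
docker run --name demo alpine:3.6 touch /hello.txt
# ... docker diff reveals it in the thin writable container layer ...
docker diff demo
# ... and removing the container discards that layer, file included.
docker rm demo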

As of now, Docker internally uses its own image format and heavily relies on libcontainer and runc for spawning and running the containers.

3.4. Registries

The purpose of registries in the Docker architecture is to store images so they can be shared and used as base images. Docker Hub and Docker Cloud are the well-known public registries that anyone can use. To keep things simple, Docker is configured to look for images on Docker Hub by default.

You may also consider hosting your own private Docker registry (or registries). There are a lot of good reasons to do that, particularly in the world of enterprises. One of the most crucial ones is security, as public images do not undergo a comprehensive security audit and may have known security vulnerabilities or exposures. However, things are getting better as more and more companies maintain so-called official repositories, which are curated and adhere to higher standards.
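
Conveniently, Docker distributes the registry itself as an image, so spinning up a basic private registry takes a single command (a minimal sketch, with no TLS or authentication configured):

# Run a private registry locally on port 5000 ...
docker run -d -p 5000:5000 --name registry registry:2
# ... then retag any image against it and push.
docker tag alpine:3.6 localhost:5000/alpine:3.6
docker push localhost:5000/alpine:3.6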

To make things even more intriguing, a new player has recently joined the registry game: Docker Store, a flagship Docker product, has been announced as generally available.


3.5. Image is the new RPM

With Docker and other container engines becoming more and more popular and widespread, the way we package and distribute applications is also changing dramatically. Literally, the image becomes a “new RPM” in the sense that you can distribute it to any platform where the container engine of your choice is supported (more on that later) and just run it as a container. It is indeed easy, simple and powerful.
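
Pushing to a registry is the usual distribution channel, but an image can just as well travel as a plain archive (reusing the hypothetical uname-demo image from earlier):

# Export the image to a tarball on one machine ...
docker save -o uname-demo.tar uname-demo
# ... and import it on any other machine running Docker.
docker load -i uname-demo.tar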

4. Moby: The Future of Docker

Docker has been seeing a lot of changes recently. Driven by initiatives to break Docker into modular components and consolidate all of its open source collaborations, the Moby Project was born.

The Moby Project is a new open-source project to advance the software containerization movement and help the ecosystem take containers mainstream. It provides a library of components, a framework for assembling them into custom container-based systems and a place for all container enthusiasts to experiment and exchange ideas. – https://blog.docker.com/2017/04/introducing-the-moby-project/

As Docker continues to be split up into more components, the Moby Project will become the home for those components as well, so let us keep an eye on it and look forward to exciting announcements.

5. Towards Interoperability

With multiple application-centric container engines available at the moment, namely Docker and rkt (and very likely more to appear in the future), the obvious question to ask would be: how do you pick one? And what happens if you have to switch to another one along the way?

Indeed, at the moment, if you select one container engine over the other, you will probably have to stick with it, as moving to an alternative may prove difficult. But there is hope, thanks to the Open Container Initiative (or just OCI), that container engine interoperability will improve significantly once enough players support the common specifications.

We have mentioned earlier that at the moment Docker uses its own image format. To our luck, an important milestone towards openness in this space has been achieved recently with the first release of the OCI Runtime and Image specifications.

6. Docker and Java

There have been a lot of discussions lately regarding the legal consequences and licensing considerations of using Java inside containers. The official Oracle position on the matter is nicely summarized in this Q&A excerpt:

Are there any licensing considerations for Oracle Java SE that are unique to Docker?

No. Docker is a containerization platform and there are no unique or special restrictions in the license for use or redistribution as compared to any operating system, virtualization or packaging format. The Oracle JDK is widely used and adopted in the Docker ecosystem. – https://blogs.oracle.com/developers/official-docker-image-for-oracle-java-and-the-openjdk-roadmap-for-containers

Although the answer refers to Docker only, it is equally applicable to other container-based virtualization engines as well.

To prove the point, Oracle has published the official Oracle Java 8 SE (Server JRE) image to the Docker Store, one of the places to find trusted commercial and free software distributed as Docker images.
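
To illustrate how little it takes, here is a minimal Dockerfile for containerizing a Java application on top of the official OpenJDK image from Docker Hub (a sketch assuming a pre-built app.jar; the tag and paths are examples only):

# Start from the official OpenJDK 8 JRE image (Alpine variant)
FROM openjdk:8-jre-alpine
# Copy the pre-built application JAR into the image (hypothetical path)
COPY target/app.jar /app/app.jar
# Launch the application when a container starts
CMD ["java", "-jar", "/app/app.jar"]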

7. Conclusions

In this introductory section of the tutorial we have looked a bit into the evolution of virtualization mechanisms in Linux/Unix operating systems. We have learned at a high level what images and containers are, what their benefits are compared to traditional virtual machines, and how we can get started using them.

8. What’s next

We have not done much in this section beyond going through a bare listing of facts and terms. However, roll up your sleeves: in the next section we are going to take a closer look at the Docker container engine by learning its tooling and commands.

Andrey Redko

Andriy is a well-grounded software developer with more than 12 years of practical experience using Java/EE, C#/.NET, C++, Groovy, Ruby, functional programming (Scala), databases (MySQL, PostgreSQL, Oracle) and NoSQL solutions (MongoDB, Redis).