
Docker for Java Developers: Deploy on Docker

This article is part of our Academy Course titled Docker Tutorial for Java Developers.

In this course, we provide a series of tutorials so that you can develop your own Docker-based applications. We cover a wide range of topics, from Docker on the command line to development, testing, deployment and continuous integration. With our straightforward tutorials, you will be able to get your own projects up and running in minimum time. Check it out here!

1. Introduction

Many companies had been using container-based virtualization to deploy applications (including JVM-based ones) in production well before Docker appeared on the horizon. However, it is primarily because of Docker that container-based deployment practices have become mainstream these days.

In this section of the tutorial we are going to glance over some of the most popular orchestration and cluster management engines (covering a couple of cloud offerings as well) which natively support the deployment and lifecycle management of containerized applications.

The topics we are going to talk about are worth several books (at least!), so the goal of this part is merely to serve as an introduction. If any of them sparks your interest, there is a tremendous amount of publicly available resources to explore.

2. Containers as Deployment Units

These days containers (in particular, Docker containers) have become standardized units of software deployment. They come prepackaged with everything the application may need to run: code (or binaries), runtime, system tools and libraries, you name it.

Managing and orchestrating container-based deployments is quite a hot topic and, as we are going to see in a moment, every solution out there supports it right from the start.

3. Orchestration using Kubernetes

In the rare case you haven’t heard about it yet, Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It is one of the most exciting, innovative and actively evolving projects in the open-source community. If you or your company is looking for a container orchestration solution, Kubernetes is probably the de facto choice nowadays.

There are a lot of things which Kubernetes could do for you, but in this part of the tutorial we are going to see how easy it is to deploy our containerized Spring Boot application stack, including MySQL, using only a minimal subset of the Kubernetes features.

Setting up a full-fledged Kubernetes cluster on a local machine might sound a bit impractical, but luckily Kubernetes can be run in development mode via minikube. This awesome tool spawns a single-node Kubernetes cluster inside a virtual machine so you can develop against it day-to-day, using your laptop or desktop for example.

Assuming you have installed minikube on the operating system of your choice, let us move on to the next step and deploy a couple of Docker containers on it.

$ minikube start

Once minikube is started, we can immediately initiate the MySQL deployment, sticking to the same version 8.0.2 we have been using throughout this tutorial.

$ kubectl run mysql --image=mysql:8.0.2 --env='MYSQL_ROOT_PASSWORD=p$ssw0rd' --env='MYSQL_DATABASE=my_app_db' --env='MYSQL_ROOT_HOST=%' --port=3306

It will take some time, but in the end you should be able to see the MySQL container (or, better to say, Kubernetes pod) up and running:

$ kubectl get pod
NAME                       READY     STATUS    RESTARTS   AGE
mysql-5d4dbfcd58-6fmck     1/1       Running   0          22m
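
If the pod gets stuck in a non-Running state instead, its logs and events usually point at the cause (a quick sketch; the pod name will differ in your environment):

$ kubectl logs mysql-5d4dbfcd58-6fmck
$ kubectl describe pod mysql-5d4dbfcd58-6fmck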

Excellent. Now we need to rebuild the Docker image of the Spring Boot application we developed previously, using the Docker environment of the deployed Kubernetes cluster (alternatively, we could have used a private registry).

$ eval $(minikube docker-env)
$ docker image build \
  --build-arg BUILD_VERSION=0.0.1-SNAPSHOT \
  -f Dockerfile.build \
  -t jcg/spring-boot-webapp:latest \
  -t jcg/spring-boot-webapp:0.0.1-SNAPSHOT .

With this step completed, our Docker image is available for deployment in Kubernetes and we can run it as another pod.

$ kubectl run spring-boot-webapp --image=jcg/spring-boot-webapp:latest --env='DB_HOST=mysql.default.svc.cluster.local' --port=19900 --image-pull-policy=Never
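
Note that the DB_HOST value mysql.default.svc.cluster.local presupposes a Kubernetes service named mysql in the default namespace. In case such a service has not been created as part of your setup, the MySQL deployment can be exposed internally first (a hedged sketch):

$ kubectl expose deployment mysql --port=3306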

Let us check that we have two pods up and running:

$ kubectl get pod
NAME                                  READY     STATUS    RESTARTS   AGE
mysql-5d4dbfcd58-6fmck                1/1       Running   0          33m
spring-boot-webapp-5ff8456bf5-gf5qv   1/1       Running   0          31m

Last but not least, we have to expose our Spring Boot application deployment as a Kubernetes service to make it accessible:

$ kubectl expose deployment spring-boot-webapp --type=NodePort

And quickly check that it is listed among the other services:

$ kubectl get service
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
kubernetes           ClusterIP   10.96.0.1       <none>        443/TCP           4d
spring-boot-webapp   NodePort    10.109.108.87   <none>        19900:30253/TCP   40s

And that’s basically it: in a few simple steps we have gotten our Docker containers up and running, all managed by Kubernetes. Let us make sure that the application is actually passing all of its health checks:

$ curl $(minikube service spring-boot-webapp --url)/application/health
{
    "status":"UP",
    "details": {
        "diskSpace": {
            "status":"UP",
            "details": {
                "total":17293533184,
                "free":14476333056,
                "threshold":10485760
            }
        },
        "db": {
            "status":"UP",
            "details": {
                "database":"MySQL",
                "hello":1
            }
        }
    }
}

And it really does! To finish up our discussion of Kubernetes, it is worth mentioning that Docker already ships an early native integration with Kubernetes on some of its edge channels.
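
Before moving on, here is a hedged clean-up sketch, assuming the deployments and the service were created exactly as above:

$ kubectl delete service spring-boot-webapp
$ kubectl delete deployment spring-boot-webapp mysql
$ minikube stop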

4. Orchestration using Apache Mesos

Apache Mesos is arguably one of the oldest resource and cluster management frameworks in use. It effectively abstracts resources (CPU, memory, storage) away from physical or virtual machines, allowing fault-tolerant and elastic distributed systems to be built and operated with ease. One of its strengths is an exceptional level of extensibility and support for containerized application deployments; however, it is also known to be quite complex and difficult to operate.

Architecturally, Apache Mesos consists of a master that manages agents (running on each cluster node) and frameworks (that run tasks on those agents). The master enables fine-grained sharing of resources (CPU, RAM, …) across frameworks by making them resource offers. Limiting ourselves to only the necessary components, let us take a look at how an Apache Mesos cluster could be defined using the docker-compose specification.

version: "3"

services:
  zookeeper:
    image: zookeeper
    networks:
      - mesos-network
    environment:
      ZOO_TICK_TIME: 2000
      ZOO_INIT_LIMIT: 10
      ZOO_SYNC_LIMIT: 5
      ZOO_MAX_CLIENT_CNXNS: 128
      ZOO_PORT: 2181
      ZOO_MY_ID: 1

  mesos-master:
    image: mesosphere/mesos-master:1.3.2
    networks:
      - mesos-network
    ports:
      - "5050:5050"
    environment:
      MESOS_ZK: zk://zookeeper:2181/mesos
      MESOS_QUORUM: 1
      MESOS_CLUSTER: docker-compose
      MESOS_REGISTRY: replicated_log
    volumes:
      - /var/run/docker.sock:/run/docker.sock
    depends_on:
      - zookeeper

  mesos-slave:
    image: mesosphere/mesos-slave:1.3.2
    privileged: true
    networks:
      - mesos-network
    ports:
      - "5051:5051"
    links:
      - zookeeper
      - mesos-master
    environment:
      - MESOS_CONTAINERIZERS=docker
      - MESOS_ISOLATOR=cgroups/cpu, cgroups/mem
      - MESOS_LOG_DIR=var/log
      - MESOS_MASTER=zk://zookeeper:2181/mesos
      - MESOS_PORT=5051
      - MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins
      - MESOS_EXECUTOR_SHUTDOWN_GRACE_PERIOD=90secs
      - MESOS_DOCKER_STOP_TIMEOUT=90secs
      - MESOS_RESOURCES=cpus:2;mem:2080;disk:5600;ports(*):[19000-19999]
      - MESOS_WORK_DIR=/var/lib/mesos
      - MESOS_SYSTEMD_ENABLE_SUPPORT=false
    volumes:
      - /var/run/docker.sock:/run/docker.sock
    dns:
      - mesos-dns
    depends_on:
      - mesos-master
      - mesos-dns

  marathon:
    image: mesosphere/marathon:v1.5.6
    networks:
      - mesos-network
    environment:
      - MARATHON_ZK=zk://zookeeper:2181/marathon
      - MARATHON_MASTER=zk://zookeeper:2181/mesos
    ports:
      - "8080:8080"
    depends_on:
      - mesos-master

  mesos-dns:
    image: mesosphere/mesos-dns:v0.6.0
    command: [ "/usr/bin/mesos-dns", "-v=2", "-config=/config.json" ]
    ports:
      - 53:53/udp
      - 8123:8123
    volumes:
      - ./config.json:/config.json
      - /tmp
    links:
      - zookeeper
    dns:
      - 8.8.8.8
      - 8.8.4.4
    networks:
      - mesos-network

networks:
  mesos-network:
    driver: bridge

As we can see, there are quite a few moving parts in there, besides just Apache Mesos (and Apache Zookeeper). At the heart of it is the Marathon framework, a container orchestration platform. Marathon provides a beautiful web UI as well as REST(ful) APIs to manage application deployments, and it uses its own JSON-based specification format. Continuing with our Spring Boot application stack, let us take a look at an example MySQL deployment descriptor (stored in the mysql.json file):

{
  "id": "/jcg/mysql",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mysql:8.0.2",
      "network": "BRIDGE",
      "portMappings": [
        {
          "containerPort": 3306,
          "servicePort": 3306,
          "hostPort": 0,
          "protocol": "tcp"
        }
      ],
      "parameters": [
        {
          "key": "hostname",
          "value": "mysql"
        }
      ]
    }
  },
  "env": {
    "MYSQL_ROOT_PASSWORD": "p$ssw0rd",
    "MYSQL_DATABASE": "my_app_db",
    "MYSQL_ROOT_HOST": "%"
  },
  "instances": 1,
  "cpus": 0.1,
  "mem": 500,
  "healthChecks": [
    {
      "protocol": "COMMAND",
      "command": { "value": "ss -ltn src :3306 | grep 3306" },
      "gracePeriodSeconds": 10,
      "intervalSeconds": 10,
      "timeoutSeconds": 5,
      "maxConsecutiveFailures": 2
    }
  ]
}

With that, to bring the MySQL container to life we just need to submit this descriptor to Marathon through its REST(ful) APIs (assuming our Apache Mesos cluster is up and running), for example:

$ curl -X POST http://localhost:8080/v2/apps -d @mysql.json -H "Content-type: application/json"

{
  "id":"/jcg/mysql",
  "container":{
     …
  },
  …
}

Looks good. Let us do the same for the Spring Boot application, starting off with its Marathon deployment descriptor, persisted in the spring-webapp.json file.

{
  "id": "/jcg/spring-webapp",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "jcg/spring-boot-webapp:latest",
      "network": "BRIDGE",
      "portMappings": [
        { 
          "containerPort": 19900, 
          "servicePort": 19900, 
          "hostPort": 19900 
        }
      ],
      "parameters": [
        { "key": "hostname", "value": "spring-webapp" }
      ]
    }
  },
  "env": {
    "DB_HOST": "172.17.0.2"
  },
  "instances": 1,
  "cpus": 0.1,
  "mem": 512,
  "healthChecks": [
    {
      "protocol": "COMMAND",
      "command": { "value": "nc -z localhost 19900" },
      "gracePeriodSeconds": 25,
      "intervalSeconds": 10,
      "timeoutSeconds": 5,
      "maxConsecutiveFailures": 3
    }
  ]
}

The next step is to submit it to Marathon:

$ curl -X POST http://localhost:8080/v2/apps -d @spring-webapp.json -H "Content-type: application/json"

{
  "id":"/jcg/spring-webapp",
  "container":{
     …
  },
  …
}

And we are effectively done! Once the deployment is finished, we should be able to see our application stack in an operational state using, for example, Marathon’s web UI, accessible at http://localhost:8080/ui/.

Marathon Applications
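
If the web UI is not at hand, the same information can be retrieved through Marathon’s REST(ful) APIs as well; the first call lists all applications, the second returns the detailed state (including running and healthy task counts) of a single application:

$ curl http://localhost:8080/v2/apps
$ curl http://localhost:8080/v2/apps/jcg/spring-webapp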

The curious reader may wonder how we linked those two Marathon applications, Spring Boot and MySQL, together. For this particular case, we have been relying on the service discovery and load balancing capabilities of Apache Mesos, fulfilled by Mesos-DNS. To illustrate them in action, this is how we can query the IP address of the MySQL instance by its name, mysql-jcg.marathon.mesos:

$ curl http://localhost:8123/v1/hosts/mysql-jcg.marathon.mesos
[
  {
   "host": "mysql-jcg.marathon.mesos.",
   "ip": "172.17.0.2"
  }
]
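
The same name is also resolvable directly over DNS and should return the very same 172.17.0.2 address (a quick sketch, assuming the mesos-dns container has successfully bound port 53/udp on the host, as configured in the docker-compose file above):

$ dig @localhost +short mysql-jcg.marathon.mesos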

As usual, let us confirm that the Spring Boot application is up and running by sending an HTTP request to its health endpoint:

$ curl http://localhost:19900/application/health

{
    "details": {
        "db": {
            "details": {
                "database": "MySQL",
                "hello": 1
            },
            "status": "UP"
        },
        "diskSpace": {
            "details": {
                "free": 44011802624,
                "threshold": 10485760,
                "total": 49536962560
            },
            "status": "UP"
        }
    },
    "status": "UP"
}


 

5. Orchestration using Docker Swarm

It has been a while since Docker embedded cluster management and orchestration features into the engine. This orchestration layer used to be known as Docker Swarm but later evolved into, essentially, just a special mode (swarm mode) in which to run the Docker Engine.

If you are betting heavily on Docker and prefer not to look anywhere else, running the Docker Engine in swarm mode could be a good choice of orchestrator. It is also fairly easy to get started with:

$ docker swarm init

Conceptually, swarm mode introduces quite a few differences, beyond just changes in the Docker Engine itself. First and foremost, you should start thinking in terms of services, not containers. The assumption that everything runs inside a single Docker host is no longer accurate either, as a swarm will very likely consist of many Docker hosts distributed over the network.
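
For instance, instead of running individual containers with docker run, workloads are declared as replicated services. A purely illustrative sketch using the stock nginx image (the service name, replica count and published port are hypothetical):

$ docker service create --name web --replicas 2 --publish 8888:80 nginx
$ docker service ls
$ docker service ps web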

When running the Docker Engine in swarm mode, we can deploy a complete application (service) stack using the already familiar docker-compose specification. There is only one constraint though: the docker-compose file format should be version 3 (or above) to be compatible with both Docker Compose and the Docker Engine’s swarm mode.

To see the swarm mode in action, let us rework our Spring Boot application stack a bit to comply with docker-compose file format version 3.3.

version: '3.3'

services:
  mysql:
    image: mysql:8.0.2
    environment:
      - MYSQL_ROOT_PASSWORD=p$$ssw0rd
      - MYSQL_DATABASE=my_app_db
      - MYSQL_ROOT_HOST=%
    healthcheck:
      test: ["CMD-SHELL", "ss -ltn src :3306 | grep 3306"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - my-app-network

  java-app:
    image: jcg/spring-boot-webapp:latest
    environment:
      - DB_HOST=mysql
    ports:
      - 19900:19900
    depends_on:
      - mysql
    healthcheck:
      test: ["CMD-SHELL", "nc -z localhost 19900"]
      interval: 10s
      timeout: 5s
      retries: 3
    networks:
      - my-app-network

networks:
  my-app-network:
    driver: overlay

Frankly speaking, we didn’t have to make many changes. To support swarm mode specifically, a dedicated family of commands has been introduced into the docker tooling: docker stack. We did not cover them in the second part of this tutorial, but now is the time to do so.

With the deploy command and the docker-compose specification at hand, we can initiate the deployment of the application stack into the swarm cluster.

$ docker stack deploy --compose-file docker-compose.yml springboot-webapp

To see the status of the deployment and all of the deployed services, we can use the services command, as in the example below (the output has been shortened a bit):

$ docker stack services  springboot-webapp
NAME                        IMAGE                         REPLICAS PORTS
springboot-webapp_java-app  jcg/spring-boot-webapp:latest 1/1      *:19900->19900/tcp
springboot-webapp_mysql     mysql:8.0.2                   1/1   
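
To drill down to the individual tasks (and the nodes they have been scheduled on), the docker stack ps command comes in handy:

$ docker stack ps springboot-webapp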

Awesome, it looks like our Spring Boot application is up and running. Let us confirm that by sending an HTTP request to its health endpoint:

$ curl http://localhost:19900/application/health

{
    "details": {
        "db": {
            "details": {
                "database": "MySQL",
                "hello": 1
            },
            "status": "UP"
        },
        "diskSpace": {
            "details": {
                "free": 3606978560,
                "threshold": 10485760,
                "total": 19195224064
            },
            "status": "UP"
        }
    },
    "status": "UP"
}

It looks exactly like what we expected. To conclude, the Docker Engine in swarm mode is an interesting option to consider; however, it is worth mentioning that it is not as popular as Kubernetes or Apache Mesos.
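
When you are done experimenting, the stack can be removed and the single-node swarm dismantled with a couple of commands (a quick clean-up sketch):

$ docker stack rm springboot-webapp
$ docker swarm leave --force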

6. Containers in the Cloud

Every major player in the cloud space is rushing their own managed offerings to market, intended to support the deployment and orchestration of containers so that you can just drop a container in and the magic happens. Let us quickly glance through them.

6.1. Amazon Elastic Container Service

Amazon Elastic Container Service (or simply Amazon ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster. As you may expect, Amazon ECS integrates nicely with many other cloud offerings from the Amazon Web Services portfolio, including:

  • AWS Identity and Access Management
  • Amazon EC2 Auto Scaling
  • Elastic Load Balancing
  • Amazon Elastic Container Registry
  • AWS CloudFormation

Amazon ECS is a regional service that simplifies running application containers in a highly available manner across multiple availability zones within a region.
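
Just to give a feeling of the workflow, here is a rough sketch using the AWS CLI, assuming it is configured, container instances have been registered with the cluster, and a task definition describing our container has been prepared in task-definition.json (the cluster and task definition names are hypothetical):

$ aws ecs create-cluster --cluster-name jcg-cluster
$ aws ecs register-task-definition --cli-input-json file://task-definition.json
$ aws ecs run-task --cluster jcg-cluster --task-definition spring-boot-webapp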

6.2. Google Kubernetes Engine

Google Kubernetes Engine (formerly known as Google Container Engine) is a managed environment for deploying containerized applications. It brings Google’s unique experience and latest innovations in developer productivity, resource efficiency, automated operations, and open source flexibility to accelerate the time to market.

Google has been running production workloads in containers for a very long time and has incorporated the best of what it learned into Kubernetes, the industry-leading open-source container orchestrator that powers Kubernetes Engine. It offers (but is not limited to) the following distinguishing features:

  • Identity & Access Management
  • Hybrid Networking
  • Security and Compliance
  • Integrated Logging & Monitoring
  • Auto Scale
  • Auto Upgrade
  • Auto Repair
  • Resource Limits
  • Stateful Application Support
  • Docker Images Support
  • Fully Managed
  • OS Built for Containers
  • Private Container Registry
  • Fast Consistent Builds
  • Open Source Portability
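
Since Kubernetes Engine exposes a standard Kubernetes control plane, provisioning a cluster and pointing kubectl at it typically boils down to a couple of gcloud commands (a hedged sketch, assuming the Google Cloud SDK is installed, a project and a default compute zone are configured, and using a hypothetical cluster name):

$ gcloud container clusters create jcg-cluster --num-nodes=3
$ gcloud container clusters get-credentials jcg-cluster
$ kubectl get nodes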

6.3. Azure Container Service

Azure Container Service (AKS) manages the hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand, without the need to take the applications offline.

As a managed Kubernetes service, Azure Container Service provides:

  • Automated Kubernetes version upgrades and patching
  • Easy cluster scaling
  • Self-healing hosted control plane (masters)
  • Pay only for running agent pool nodes
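
Getting a cluster up and running follows a similar pattern with the Azure CLI (a hedged sketch, assuming the az tooling is installed and you are logged in; the resource group and cluster names are hypothetical):

$ az group create --name jcg-rg --location eastus
$ az aks create --resource-group jcg-rg --name jcg-aks --node-count 1 --generate-ssh-keys
$ az aks get-credentials --resource-group jcg-rg --name jcg-aks
$ kubectl get nodes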

7. Conclusions

In this section of the tutorial we have looked at a number of leading cluster management and orchestration solutions which offer comprehensive support for containerized application deployments. By and large, Kubernetes is taking the lead here; however, every option has its own pros and cons.

8. What’s next

In the next and final part of the tutorial we are going to talk about the very important subject of continuous integration practices and how Docker fits into the picture there.

The complete set of configuration and specification files is available for download.

Andrey Redko

Andriy is a well-grounded software developer with more than 12 years of practical experience using Java/EE, C#/.NET, C++, Groovy, Ruby, functional programming (Scala), databases (MySQL, PostgreSQL, Oracle) and NoSQL solutions (MongoDB, Redis).