
Run 1,000 Docker Redis Containers In Less Than 15 Minutes On A Cluster Of 5 Cloud Servers With 2GB Of Memory Each

Background

While application portability (i.e. being able to run the same application on any Linux host) is still the leading driver for the adoption of Linux Containers, another key advantage is being able to optimize server utilization so that you can use every bit of compute. Of course, for upstream environments, like PROD, you may still want to dedicate more than enough CPU & Memory for your workload – but in DEV/TEST environments, which typically represent the majority of compute resource consumption in an organization, optimizing server utilization can lead to significant cost savings.

This all sounds good on paper — but DevOps engineers and infrastructure operators still struggle with the following questions:

  • How can I group servers across different clouds into clusters that map to business groups, development teams, or application projects?
  • How do I monitor these clusters and get insight into the resource consumption by different groups or users?
  • How do I set up networking across servers in a cluster so that containers across multiple hosts can communicate with each other?
  • How do I define my own capacity-based placement policy so that I can use every bit of compute in a cluster?
  • How can I automatically scale out the cluster to meet the demands of the developers for new container-based application deployments?

DCHQ, available in hosted and on-premise versions, addresses all of these challenges and provides the most advanced infrastructure provisioning, auto-scaling, clustering and placement policies for infrastructure operators or DevOps engineers.

  • A user can register any Linux host running anywhere by running an auto-generated script to install the DCHQ agent, along with Docker and the software-defined networking layer (optional). This task can be automated programmatically using our REST APIs for creating “Docker Servers” (https://dchq.readme.io/docs/dockerservers) – see the sketch after this list.
  • Alternatively, DCHQ integrates with 13 cloud providers, allowing users to automatically spin up virtual infrastructure on vSphere, OpenStack, CloudStack, Amazon Elastic Compute Cloud, Google Compute Engine, Rackspace, DigitalOcean, SoftLayer, Microsoft Azure, and many others.
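
For illustration, a host registration call might look like the following sketch. This is a hypothetical example: the /api/1.0/dockerservers path is inferred from the documentation link above, and the JSON fields are placeholders rather than the documented schema – consult https://dchq.readme.io/docs/dockerservers for the actual request format.

#!/bin/bash
# Hypothetical sketch: register a host as a "Docker Server" via the REST API.
# The endpoint path and JSON fields are assumptions, not the documented schema.
curl -X POST \
     -H "Content-Type: application/json" \
     -d '{"name": "dev-host-1", "ip": "<host-ip>"}' \
     https://user%40dchq.io:<dchq-password>@<www.dchq.io_or_ip>/api/1.0/dockerservers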


Servers across hybrid clouds or local development machines can be associated with a cluster, which is a logical mapping of infrastructure. This cluster has advanced options, like:

  • Networking – a user can select either Docker networking or the software-defined networking layer to facilitate cross-container communication across multiple hosts
  • Lease – a user can specify when the servers in this cluster expire so that DCHQ can automatically destroy those servers.
  • Placement Policy – a user can select from a number of placement policies like a proximity-based policy, round robin, or the default policy, which is a capacity-based placement policy that will place the Docker workload on the host that has sufficient compute resources.
  • Quota – a user can indicate whether or not this cluster adheres to the quota profiles that are assigned to users and groups. For example, in DCHQ.io, all users are assigned a quota of 8GB of Memory.
  • Auto-Scale Policy – a user can define an auto-scale policy to automatically add servers if the cluster runs out of compute resources to meet the developer’s demands for new container-based application deployments.
  • Granular Access Controls – a tenant admin can define access controls on a cluster to dictate who is able to deploy container-based applications to it. For example, a developer may register his/her local machine and mark it as private. A tenant admin, on the other hand, may share a cluster with a specific group of users or with all tenant users.


In addition to the advanced infrastructure provisioning & clustering capabilities, DCHQ simplifies the containerization of enterprise applications through an advanced application composition framework that extends Docker Compose with cross-image environment variable bindings, extensible BASH script plug-ins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto scaling.

Once an application is provisioned, a user can monitor the CPU, Memory, & I/O of the running containers, get notifications & alerts, and perform day-2 operations like Scheduled Backups, Container Updates using BASH script plug-ins, and Scale In/Out. Moreover, out-of-box workflows that facilitate Continuous Delivery with Jenkins allow developers to refresh the Java WAR file of a running application without disrupting the existing dependencies & integrations.

In this blog, we will go over the deployment of 1,000 Redis containers in less than fifteen minutes on a cluster of 5 cloud servers on IBM’s SoftLayer. We will cover:

  • Building the application template for the clustered Redis that can be re-used on any Linux host running anywhere
  • Provisioning the underlying infrastructure on any cloud (with SoftLayer being the example in this blog)
  • Deploying the Redis cluster programmatically using DCHQ’s REST APIs
  • Monitoring the CPU, Memory & I/O of the Running Containers

Building the Application Template for the Redis Cluster

Once logged in to DCHQ (either the hosted DCHQ.io or on-premise version), a user can navigate to Manage > Templates and then click on the + button to create a new Docker Compose template.

We have created a simple Redis Cluster for the sake of this scalability test. You will notice that the cluster_size parameter allows you to specify the number of containers to launch (with the same application dependencies).

The host parameter allows you to specify the host you would like to use for container deployments. That way you can ensure high availability for your application server clusters across different hosts (or regions) and comply with affinity rules to ensure, for example, that the database runs on a separate host. Here are the values supported for the host parameter (an illustrative template sketch follows this list):

  • host1, host2, host3, etc. – selects a host randomly within a data-center (or cluster) for container deployments
  • <IP Address 1, IP Address 2, etc.> — allows a user to specify the actual IP addresses to use for container deployments
  • <Hostname 1, Hostname 2, etc.> — allows a user to specify the actual hostnames to use for container deployments
  • Wildcards (e.g. “db-*”, or “app-srv-*”) – to specify the wildcards to use within a hostname
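
To make these parameters concrete, here is a minimal sketch of what such a template might look like. The exact schema of DCHQ’s extended Docker Compose format may differ – the service name and image tag below are illustrative – but it shows how cluster_size and host extend a standard Docker Compose service definition:

redis:
  image: redis:latest
  cluster_size: 10    # launch 10 containers with the same definition
  host: host1         # pick a host at random within the cluster

With cluster_size set to 10, each of the 100 deploy calls in the script later in this blog launches 10 containers – hence the 1,000 total.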


Provisioning the Underlying Infrastructure on Any Cloud

Once an application is saved, a user can register a Cloud Provider to automate the provisioning and auto-scaling of clusters on 13 different cloud endpoints including vSphere, OpenStack, CloudStack, Amazon Web Services, Rackspace, Microsoft Azure, DigitalOcean, HP Public Cloud, IBM SoftLayer, Google Compute Engine, and many others.

First, a user can register a Cloud Provider for SoftLayer (IBM) by navigating to Manage > Repo & Cloud Provider and then clicking on the + button to select SoftLayer (IBM). The SoftLayer API Key needs to be provided – it can be retrieved from the Account Settings section.


A user can then create a cluster with an auto-scale policy to automatically spin up new Cloud Servers. This can be done by navigating to the Manage > Clusters page and then clicking on the + button. You can select a capacity-based placement policy and then Weave as the networking layer in order to facilitate secure, password-protected cross-container communication across multiple hosts within a cluster.


A user can now provision a number of Cloud Servers on the newly created cluster by navigating to Manage > Hosts and then clicking on the + button to select SoftLayer (IBM). Once the Cloud Provider is selected, a user can select the region, size and image needed. Ports can be opened on the new Cloud Servers (e.g. 32000-59000 for Docker, 6783 for Weave, and 5672 for RabbitMQ). A Cluster is then selected and the number of Cloud Servers can be specified.


Deploying the Redis Cluster Programmatically Using DCHQ’s REST APIs

Once the Cloud Servers are provisioned, a user can deploy the Redis cluster programmatically using DCHQ’s REST APIs. To simplify the use of the APIs, a user will need to select the cluster created earlier as the default cluster. This can be done by navigating to User’s Name > My Profile, and then selecting the default cluster needed.


Once the default cluster is selected, a user can simply execute the following curl script that invokes the “deploy” API (https://dchq.readme.io/docs/deployid).

#!/bin/bash

for i in `seq 1 1 100`; do
    curl -X POST https://user%40dchq.io:<dchq-password>@<www.dchq.io_or_ip>/api/1.0/apps/deploy/<id>
    echo
    echo $i
    sleep 8
done

exit 0;

In this simple curl script, we have the following:

  • A for loop, from 1 to 100
  • With each iteration we’re deploying the clustered Redis application using the default cluster assigned to the user.
  • user%40dchq.io stands for user@dchq.io, where the @ symbol is URL-encoded as %40
  • The @ between the password and the host is not encoded
  • <id> refers to the Redis cluster application ID. This can be retrieved by navigating to the Library > Customize for the Redis cluster. The ID should be in the URL
  • sleep 8 pauses for 8 seconds between iterations; across 100 iterations this accounts for 800 seconds – or roughly 13.3 minutes – of the total run time
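
As a side note, the URL-encoding is only needed because the credentials are embedded in the URL. The same request can be made with curl’s standard -u flag, which accepts the @ sign as-is:

curl -u 'user@dchq.io:<dchq-password>' -X POST https://<www.dchq.io_or_ip>/api/1.0/apps/deploy/<id>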

You can try out this curl script yourself. You can either install DCHQ On-Premise or sign up on DCHQ.io Hosted PaaS.

Monitoring the CPU, Memory & I/O Utilization of the Cluster, Servers & Running Containers

DCHQ allows users to monitor the CPU, Memory, Disk and I/O of the clusters, hosts and containers.

  • To monitor clusters, you can just navigate to Manage > Clusters
  • To monitor hosts, you can just navigate to Manage > Hosts > Monitoring Icon
  • To monitor containers, you can just navigate to Live Apps > Monitoring Icon

We tracked the performance of the hosts and cluster before and after we launched the 1,000 containers.

Before spinning up the containers, we captured a screenshot of the performance charts for the hosts. You can see that the CPU utilization was negligible and the Memory utilization was at 25%.


After spinning up 500 containers, we captured screenshots of the performance charts for the hosts. You can see that the highest CPU utilization was about 18% and the highest Memory utilization was at 49%.


When we drilled down into one of the 5 hosts, we saw more details, like the number of containers running on that particular host, the number of images pulled and, of course, the CPU/Memory/Disk utilization.


After spinning up 1,000 containers, we captured screenshots of the performance charts for the hosts. You can see that the highest CPU utilization was about 31% and the highest Memory utilization was at 75%.


Drilling down into one of the 5 hosts again showed the same details – the number of containers running on that particular host, the number of images pulled and the CPU/Memory/Disk utilization.


Here’s a view of all 100 running Redis clusters (where each cluster had 10 containers).


After leaving these apps running for a few hours, we captured screenshots of our cluster. You can see that the CPU Utilization was 4% and the Memory Utilization was at 81%. These are aggregated metrics across the 5 servers in the cluster.


After we deleted all our container-based applications, we captured other screenshots for the cluster. The Memory Utilization was at 35%.

We then drilled down into one of the servers to view the historical performance – where the Memory Utilization decreased from close to 90% all the way down to 38%.


Conclusion

Orchestrating Docker-based application deployments is still a challenge for many DevOps engineers and infrastructure operators, who often struggle to manage pools of servers across multiple development teams – where access controls, monitoring, networking, capacity-based placement, auto-scale-out policies and quotas all need to be configured.

DCHQ, available in hosted and on-premise versions, addresses all of these challenges and provides the most advanced infrastructure provisioning, auto-scaling, clustering and placement policies for infrastructure operators or DevOps engineers.

In addition to the advanced infrastructure provisioning & clustering capabilities, DCHQ simplifies the containerization of enterprise applications through an advanced application composition framework that extends Docker Compose with cross-image environment variable bindings, extensible BASH script plug-ins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto scaling.

Sign Up for FREE on http://DCHQ.io or download DCHQ On-Premise to get access to out-of-box multi-tier Java application templates along with application lifecycle management functionality like monitoring, container updates, scale in/out and continuous delivery.

Amjad Afanah

Amjad Afanah is the founder of DCHQ. He has extensive experience in application deployment automation and systems management. DCHQ was part of 500 Startups.