Containerization: The perfect mechanism for the deployment and orchestration of microservices

December 27, 2018  |  DIEBOLD NIXDORF

In part one of this series, we discussed how we’re replacing the waterfall method of software development and deployment with a microservices architecture framework that enables us to make targeted updates more quickly and efficiently. In this post, I’ll explain why containerization is necessary to this new approach.

The microservice approach certainly offers many benefits, but it also introduces considerable complexity into the software architecture: each microservice needs to be tested, deployed and managed individually, and may even have to be available in different versions at runtime. Deploying and operating a microservice application can be challenging, but we've found that the benefits far outweigh the new demands. Let's take a look at how we develop our retail software portfolio and how we've solved these issues.

Deploying different microservices onto physical servers raises the challenge of competing and conflicting runtime environments and dependencies. Initially, we can use virtual machines to isolate the services from each other. Each virtual machine contains a specific runtime environment (an operating system plus frameworks and libraries in particular versions) and can be used to deploy and host the microservices that match these prerequisites. The virtual machines themselves can be prepared and managed via an automated setup and update process.

Because the whole system is captured in an image file, each virtual machine can easily be stored, distributed and backed up in the data center. The virtual disk image can then be used to spin up multiple virtual machine instances with identical runtime conditions.

Solving the problem of size limitations

Each virtual machine, however, carries overhead. Every instance contains a full operating system stack, which not only results in large image files (usually >5 GB) but also increases memory and CPU utilization at runtime. Due to this overhead, we can generally host no more than about 10 virtual machines on a single, regular-sized server.

To increase the number of microservices per server and make better use of the available hardware resources, we have to eliminate the overhead of virtual machines and find another way to encapsulate and isolate the different runtime environments and the microservices running in them. This is exactly where containers offer a novel approach, and a very good fit.

Containers solve several problems at once. First, they reduce the payload per instance and per server. With virtual machines as the isolation mechanism and deployment target, every instance carries a separate OS that must be loaded and started before any application code can run, which is why we typically end up with only around 10 virtual machines per physical server. By stripping the operating system out of the image, containers offer a much more lightweight approach to packaging, deploying and hosting our microservices.

A container contains only the application code itself, its application-specific runtime environment, and the required libraries and configuration files. This results in a smaller payload as well as faster deployment and boot times: application containers typically start in seconds, whereas virtual machines take minutes.

Whereas a single virtual machine is normally reused to host multiple applications that require the same runtime environment, libraries and frameworks, a container generally hosts only a single process that fulfills a specific business need, i.e. a microservice. Containers are therefore even more strongly isolated and consume far less CPU and memory than virtual machines, which generally allows us to host up to 100 containers per physical server.

Figure 1: Differences between hosting services in Virtual Machines (left) vs. Containers (right) (Source: Docker Inc.)

Tapping into reusable building blocks and image repositories

Beyond these deployment benefits, containers allow us to create a library of deployable, reusable building blocks in the form of container images and their deployment descriptions. Developers use them for their development tasks and functional tests, while quality engineers and operational staff can validate and distribute the final application from the same set of software artifacts. To make this possible, the application development team also provides a build script (in the form of a Dockerfile) that generates the container image, including all the frameworks, libraries and configuration files the application needs to boot up.
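As a rough illustration, such a build script for a hypothetical microservice could look like the following sketch; the service name, file paths and base image are illustrative assumptions, not one of our actual build scripts:

```dockerfile
# Hypothetical Dockerfile for a "pricing-service" microservice; names, paths
# and the base image are illustrative only.
# Start from a slim runtime image instead of a full operating system stack.
FROM openjdk:8-jre-alpine

# Copy only the application artifact and its configuration into the image.
COPY target/pricing-service.jar    /app/pricing-service.jar
COPY config/application.properties /app/config/application.properties

# Document the port the service listens on.
EXPOSE 8080

# A container hosts exactly one process: the microservice itself.
ENTRYPOINT ["java", "-jar", "/app/pricing-service.jar"]
```

Running `docker build` against this file produces the container image that is then pushed to the central repository described below.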

The resulting container image can be pushed to a central container image repository for others to access. External resources (such as databases, other services or persistent storage systems) are linked to the individual application container at runtime, and the whole setup of containers, storage services and runtime environments needed to boot a specific configuration of the application can be expressed in an application deployment description. Developers, quality engineers and operational staff alike can use this description to boot a given configuration of the application in a reproducible fashion. Needless to say, the container build scripts and application deployment descriptions should live in the application's source code repository and be kept in sync with changing requirements and dependencies.
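Such an application deployment description can, for example, be written in the Docker Compose format. The following is a minimal sketch under assumed names; the registry URL, image tags and credentials are placeholders rather than real artifacts:

```yaml
# Hypothetical docker-compose.yml serving as the application deployment
# description; registry URL, image tags and credentials are placeholders.
version: "3"
services:
  pricing-service:
    image: registry.example.com/retail/pricing-service:1.4.2  # pulled from the central image repository
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=pricing-db            # external resources are linked at runtime
    depends_on:
      - pricing-db

  pricing-db:
    image: postgres:11                # reusable infrastructure image from a public registry
    environment:
      - POSTGRES_PASSWORD=changeme
    volumes:
      - pricing-data:/var/lib/postgresql/data  # persistent storage lives outside the container

volumes:
  pricing-data:
```

Developers, quality engineers and operations can all run `docker-compose up` against this file to boot the same configuration reproducibly.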

Docker Inc. (the creator of the Docker Engine) also operates a SaaS container registry in the public cloud, Docker Hub, where we can find many predefined, reusable container images for databases, runtime environments, message queues and more. Application developers can reference these images instead of creating them from scratch every time (for example, reusing Microsoft's SQL Server container instead of manually setting up an SQL Server installation each time).
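Reusing such a prebuilt database image comes down to a couple of commands; the image name, tag and password below are illustrative, so check the registry for the current coordinates:

```bash
# Pull Microsoft's prebuilt SQL Server image instead of installing SQL Server
# manually (image name/tag and the password are illustrative).
docker pull mcr.microsoft.com/mssql/server:2017-latest

# Start a database container; accepting the EULA and setting an SA password
# are required by the image.
docker run -d --name retail-db \
  -e "ACCEPT_EULA=Y" \
  -e "SA_PASSWORD=Example!Passw0rd" \
  -p 1433:1433 \
  mcr.microsoft.com/mssql/server:2017-latest
```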

Figure 2: Container-based development and deployment workflow (Source: Docker Inc.)

In the end, the reusability of the various application and infrastructure containers, together with the application deployment descriptions, allows IT operations to easily spin up a new instance of the application without worrying about deployment problems caused by incompatible or missing libraries or frameworks, incorrectly configured application servers or systems, or difficulties in isolating and running different versions of an application side by side.

Making the move from waterfall to microservices

As with every new and disruptive technology, the most important question is: how do you get started?

An application or service started from scratch should be designed and implemented for a microservice architecture and container deployment scenarios from the very beginning. Existing applications and services that were not built as microservices can still take advantage of containerized deployments: instead of hosting the monolithic application in a virtual machine, it can initially be placed in a container to ease deployment and operation. In subsequent releases of the containerized application, functional building blocks can then be separated out of the monolith into 'mini services' that are built, packaged and deployed individually.

Even this intermediate stage already delivers substantial benefits compared to the old approach. By splitting the individual mini services up even further, the application eventually arrives at a microservice-based architecture, in which each functional building block is packaged and deployed in its own container.
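Sketched as a deployment description, the intermediate stage could look roughly like this, with the containerized monolith running next to the first extracted mini service; all names and tags are illustrative:

```yaml
# Hypothetical intermediate stage: the containerized monolith runs alongside
# the first 'mini service' split out of it (names and tags are illustrative).
version: "3"
services:
  retail-monolith:
    image: registry.example.com/retail/monolith:9.2          # existing application, now containerized
    ports:
      - "8080:8080"
  loyalty-service:
    image: registry.example.com/retail/loyalty-service:0.1   # first building block extracted from the monolith
    ports:
      - "8081:8080"
```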

Figure 3: Transforming a monolithic application into a microservice architecture

All in all, these new design principles—and the features that come with microservice-based software architecture and the containerized deployment approach—help us to overcome the disadvantages of traditional monolithic architectures. Thanks to modular, scalable and reusable services, this novel architectural model improves the productivity and speed of our solutions development and reduces the time-to-market for our software products dramatically.

Want to learn more about how our innovative software solutions could transform your business? Let’s start a conversation.
