Containers continue to grow – What this means for storage solutions

Why 85 percent of companies will opt for containers by 2025

IT is in the midst of a tectonic shift. Almost everything about the way companies deploy and develop applications is changing in the context of the so-called digital transformation.

According to Pure Storage, this can be characterized by three main elements:

  • Firstly, it involves the digital enablement of processes, both within companies and outward to customers and partners.
  • Secondly, it is strongly shaped by the cloud, whether through the actual use of cloud resources or through cloud-like operating models.
  • Thirdly, application development itself is shifting towards a continuous integration and deployment model that allows frequent iterative changes.

At the forefront of these three elements is containerization. It provides an opportunity to create applications based on a continuous development model. These applications are extremely self-contained, highly scalable and portable, as well as granular in terms of the service components they encapsulate.

It is not only in the opinion of Pure Storage that containerized applications, deployed and managed via an orchestration platform such as Kubernetes, will play a central role in the IT development of the next decade. According to Gartner, 85 percent of companies will use containers in production by 2025, up from 35 percent in 2019.

Containers can run at a much higher density than traditional virtual workloads, which means fewer servers are required. This has the side effect of reducing licensing costs and, above all, energy requirements. For these reasons, containerization is increasingly becoming the basis of cost reduction initiatives and broader business scenarios, with companies usually aiming to containerize 25 to 40 percent of their applications as a starting point.

But what about storage, backups, snapshots, replication, high availability and disaster recovery? These areas are of crucial importance to a company's application infrastructure, but they can be a challenge in containerized environments. Before looking at how to solve this problem, Pure Storage believes it is worth taking a look at why containers are so important and how they work.

Agility of a containerized application deployment

Let’s say that the core business of a company is focused on the frequent launch of many new products with rapid spikes in demand and the associated analytical requirements. For example, it could be a ticket sales operation with sudden and massive spikes in sales. Traditional applications built on a three-tier architecture (client, server, database) would be slow to implement, would not scale well and would collapse under high demand. Containers are designed to cope with just such a situation.

This is because containers encapsulate the countless components of an application as microservices. This means that many such microservices are reusable as new applications are developed, and can be rapidly proliferated to meet the demands of scaling. In addition, containers carry all the API connectivity to the services on which they depend and can be ported to numerous operating environments. For example, a sudden increase in demand for event tickets can be absorbed by rapidly duplicating the interconnected containerized service instances and distributing them across several data centers, including the public cloud.
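To make this concrete, here is a minimal sketch, using the official Kubernetes Python client, of how such a scale-out could be triggered programmatically. The deployment name "ticket-frontend", the namespace "shop" and the replica count are purely illustrative assumptions, not part of the article.

from kubernetes import client, config

# Load credentials from the local kubeconfig (use load_incluster_config()
# when running inside a cluster).
config.load_kube_config()

apps = client.AppsV1Api()

# Scale a hypothetical "ticket-frontend" Deployment to 20 replicas to absorb
# a sudden spike in demand; Kubernetes schedules the extra pods across the
# available nodes (or clusters, when a multi-cluster setup is in place).
apps.patch_namespaced_deployment_scale(
    name="ticket-frontend",
    namespace="shop",
    body={"spec": {"replicas": 20}},
)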

The technical basics of containers, greatly simplified, are that they are a form of virtualization. Unlike virtual servers, however, containers run directly on the host operating system, without an intermediary hypervisor. Containers can therefore be thought of as lean virtual machines with much higher granularity, each typically providing a separate component of the overall application, connected to the others by code in the form of APIs.

So there is no hypervisor and therefore no hypervisor overhead, but containers do benefit from an orchestration layer provided by tools such as Kubernetes. These organize one or more running containers, each with its code, runtime, dependencies and resource calls, into so-called pods. The intelligence for executing pods sits above them in one or more Kubernetes clusters.
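As an illustration only, the following minimal sketch uses the Kubernetes Python client to define and create a single-container pod; the image name, port and namespace are hypothetical placeholders.

from kubernetes import client, config

config.load_kube_config()

# A pod wrapping one container: its image, exposed port and labels are
# declared as data, and Kubernetes takes care of scheduling and running it.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ticket-api", labels={"app": "ticket-api"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="api",
                image="registry.example.com/ticket-api:1.0",  # hypothetical image
                ports=[client.V1ContainerPort(container_port=8080)],
            )
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="shop", body=pod)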

Pure Storage explains: storage and backup challenges with Kubernetes

One of the biggest challenges Kubernetes has to deal with is the storage and protection of data. The roots of the problem go back to the origins of containers, which were originally intended to run as ephemeral, i.e. temporary, instances on a developer’s laptop, and for which data storage existed only as long as the container was running. However, now that containers have become established in enterprise application development, this is no longer sufficient. Most enterprise applications are stateful, i.e. they create, interact with and store data.
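A common way to give a container persistent state in Kubernetes is a PersistentVolumeClaim. The following sketch, again with the Kubernetes Python client, shows such a request; the claim name, storage class and size are illustrative assumptions.

from kubernetes import client, config

config.load_kube_config()

# Request 50 GiB of persistent storage that survives container restarts.
# "fast-block" is a hypothetical StorageClass provided by the storage platform.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "fast-block",
        "resources": {"requests": {"storage": "50Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="shop", body=pvc)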

Companies that want to deploy containers with enterprise-grade storage and data protection must therefore look at a newly emerging product category: the container storage management platform, from which Kubernetes can be operated and its storage and data protection requirements deployed and managed.

What to look for in this product category?

An important point is that every Kubernetes storage product should be container-native. This means that the storage services an application requires are themselves provided as containerized microservices. The requirements for deployment, connectivity and performance are written as code, with all the dynamics and agility that this entails. This is in contrast to other methods such as CSI (Container Storage Interface), which rely on hard-coded drivers for the storage allocated to the containers.
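How "storage requirements written as code" can look in practice is sketched below with a StorageClass; the provisioner name and parameters are hypothetical and depend entirely on the storage product in use.

from kubernetes import client, config

config.load_kube_config()

# A StorageClass describes, in code, how volumes of a given profile are
# provisioned. Provisioner and parameters below are purely illustrative.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "tickets-fast"},
    "provisioner": "example.com/container-native-block",  # hypothetical provisioner
    "parameters": {"replication": "3", "io_profile": "db"},  # provisioner-specific
    "allowVolumeExpansion": True,
}

client.StorageV1Api().create_storage_class(body=storage_class)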

A software-defined, container-native Kubernetes storage platform should provide access to block, file and object storage, and should be able to use cloud storage as well. The aim is to replicate the central features and advantages of containerization and Kubernetes: the data should be just as portable as the containerized application, it should be managed via a common control plane, and it should scale autonomously and be self-healing.
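Whether a claim is served as a raw block device or as a mounted file system is itself just a field in the claim specification, as the sketch below shows; all names and sizes are chosen purely for illustration, and object storage is typically consumed via an S3-compatible API rather than a claim.

# Two hypothetical claims against the same storage platform: one consumed as
# a mounted file system, one as a raw block device.
file_claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "media-files"},
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "volumeMode": "Filesystem",
        "storageClassName": "shared-file",   # hypothetical file storage class
        "resources": {"requests": {"storage": "1Ti"}},
    },
}

block_claim = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "db-raw-device"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "volumeMode": "Block",
        "storageClassName": "fast-block",    # hypothetical block storage class
        "resources": {"requests": {"storage": "200Gi"}},
    },
}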

As for data protection, such a platform should ideally provide all the main methods for safeguarding data, including backups and snapshots, synchronous and asynchronous replication, and migration capabilities. Again, the cloud should be usable as a source or destination in these processes. To handle the scalability of Kubernetes environments, the product must be able to manage clusters, nodes and containers running in the hundreds, thousands and hundreds of thousands respectively, with a manageable storage capacity of tens of petabytes. Finally, it should be intelligent and offer rule-based, automated management. This allows, among other things, containers to be created, replicated and deleted according to preset monitoring triggers, and storage to be provisioned and resized as required.
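As an example of one of these protection methods, the sketch below creates a CSI volume snapshot of the claim from the earlier example via the snapshot.storage.k8s.io API; the snapshot class name is a hypothetical assumption, and the snapshot CRDs plus a compatible driver must be installed in the cluster.

from kubernetes import client, config

config.load_kube_config()

# A point-in-time snapshot of the "orders-data" claim. VolumeSnapshot is a
# custom resource, so it is created through the generic custom objects API.
snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "orders-data-nightly"},
    "spec": {
        "volumeSnapshotClassName": "fast-block-snapshots",  # hypothetical class
        "source": {"persistentVolumeClaimName": "orders-data"},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="shop",
    plural="volumesnapshots",
    body=snapshot,
)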

IT managers who have found and implemented a solution that meets all these criteria will understand why 85 percent of companies will rely on containers by 2025. They will also wonder why they did not take this step earlier.
