There is an acute shortage of developers worldwide. According to IDC, in 2021 companies around the world lacked 1.4 million full-time developers, and by 2025 the deficit will grow to 4.0 million. To let developers focus on writing code and application business logic, shorten time to market, and reduce infrastructure costs at the same time, organizations of all sizes, from startups to large enterprises, are increasingly turning to serverless technologies. According to a Datadog study, use of the AWS Lambda FaaS service tripled in just one year, and Mordor Intelligence forecasts that the serverless market will grow at an average annual rate of more than 23% in 2021-2026. How does serverless computing change DevOps practices?
Why the idea of resource sharing has become popular again
Technology development is cyclical: the same approaches and methods regularly reappear at a new stage of progress. It began in the era of mainframes, when computing resources were scarce and were allocated to one specific task at a time. The next stage was shared servers running executable scripts: a request arrived at the server, was processed, and a certain amount of resources was allocated to a specific script. Operating systems have long offered mechanisms for dividing the computing resources of a single server, for example jails in FreeBSD, cgroups in Linux, and zones in Solaris. Serverless is a new turn of the same idea: sharing resources within one large system.
The cloud can be imagined as one large computer with shared resources: you hand it a piece of code, and the code is executed there. The main idea of serverless is to think less about where and how data is stored and processed. The developers' task within the company is to focus on implementing business logic, while the cloud provider monitors the execution of specific functions: the number of instances, startup time, resource utilization, and so on.
Another important factor is the overall direction of technology: toward microservice architectures and the decomposition of large monolithic applications into separate, sometimes very small, components. In the Function as a Service (FaaS) approach, each individual function is a microservice that implements a specific piece of functionality within the overall process.
But once software is divided into microservices, it becomes expensive and difficult to manage: there are additional costs for management, security, and maintenance. At this point it becomes economically justified to use a cloud platform that streamlines microservice management and, as a result, reduces costs.
Division of responsibility
The service lifecycle is tied to a platform that ensures its launch, response speed, runtime, and security. When working with individual functions or serverless containers, developers do not need to think about keeping the infrastructure running: it is enough to create a function, and it will be executed; all the necessary parameters can be provided by the cloud provider.
A full-fledged serverless ecosystem necessarily includes object storage, databases, stream-processing services, and trigger support, and the developer no longer needs to think about how to deploy or maintain these services. For example, to launch a function it is enough to upload its code to a cloud functions service and configure public access to it; there is no need to handle security, allocate resources, or provision extra capacity for launching function instances within the existing quota. Likewise, for serverless databases the cloud provider takes over most of the administration, while the user gets access to an automatically scaled system and pays only for the resources actually consumed.
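To make this division of responsibility concrete, here is a minimal sketch of what the developer's side of such a deployment can look like. The `handler(event, context)` signature follows a convention used by several cloud function services, but the exact event envelope shown here is an assumption and varies by provider.

```python
import json


def handler(event, context):
    """Entry point invoked by the FaaS platform for each request.

    Instances, scaling, OS patching, and networking are the
    provider's responsibility; this code holds only business logic.
    """
    # The HTTP body arrives inside the event envelope (provider-specific).
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deployment then amounts to uploading this file to the cloud functions service and enabling public access; the developer writes no server, container image, or scaling policy.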
Thanks to this approach, the volume of DevOps tasks within microservice architectures shrinks: part of the routine is taken off developers, and their cognitive load decreases. Serverless computing changes DevOps practice because developers are no longer required to maintain operating systems or answer for every possible risk. Instead, they can write generic code, upload it to a serverless platform, and monitor its execution.
For example, thanks to the serverless approach, bioinformaticians at Genotek were able to focus on writing code instead of worrying about database performance, maintenance, and scaling. Their service builds family trees and searches for relatives using a database of genetic data. Serverless technologies implement the function-as-a-service approach: for each request, a separate container or virtual machine with the required characteristics is created automatically, and after execution the created object is destroyed. The function fetches data from the database on request and builds a tree for each user. The developer gets automatic scaling and fault tolerance, and the application turns into a collection of individual functions launched when needed. When the number of users grows, there is no need to bring up new virtual machines or configure load balancing: additional instances of the function are created automatically and run in parallel.
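The per-request pattern described above can be sketched as follows. The names and data are illustrative stand-ins, not Genotek's actual code, and a plain dictionary stands in for the genetic-data store.

```python
# Illustrative stand-in for the relatives database:
# user id -> ids of directly linked relatives.
RELATIVES = {
    "u1": ["u2", "u3"],
    "u2": ["u4"],
    "u3": [],
    "u4": [],
}


def build_tree(user_id, db=RELATIVES):
    """Build one user's family tree for a single request.

    In a FaaS deployment this runs inside a short-lived instance
    that the platform creates for the request and destroys after
    execution; concurrent requests simply get parallel instances.
    """
    return {
        "id": user_id,
        "relatives": [build_tree(r, db) for r in db.get(user_id, [])],
    }
```

The function itself is stateless: all state lives in the managed database, which is what lets the platform create and destroy instances freely.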
What tasks is serverless suitable for?
Serverless is an ideal fit for peak loads driven by seasonality and other periodic factors: for example, analyzing already-processed orders as part of a marketing campaign. It also helps when a developer needs to regulate load through quotas: increasing the number of function instances that can simultaneously receive incoming messages raises the system's throughput, and quotas provide a convenient knob for controlling how such a volume of data is processed. The same approach can be implemented with other tools, for example by routing the entire incoming data stream into a serverless queue and parsing it with triggers. However, this scenario applies only when an immediate response to external consumers' requests is not required.
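A minimal sketch of the queue-plus-trigger scenario: the platform invokes the handler with a batch of buffered messages. The envelope shape (`{"messages": [{"body": ...}]}`) is hypothetical and differs between providers.

```python
import json


def queue_handler(event, context=None):
    """Invoked by a queue trigger with a batch of messages.

    Throughput is tuned via the concurrency quota: the platform
    may run many instances of this handler in parallel, each
    receiving its own batch from the queue.
    """
    processed = []
    for message in event.get("messages", []):
        # Each message body carries one buffered order to analyze.
        order = json.loads(message["body"])
        processed.append(order["order_id"])
    return {"processed": processed}
```

Because the queue decouples producers from this handler, orders can pile up during a peak and be drained afterwards, which is exactly why the pattern suits deferred rather than real-time processing.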
For example, the portal Auto.ru used serverless technologies when launching the "Big SDA Exam" project, an online traffic-rules test for pedestrians and drivers. The specially built landing page was expected to face a heavy load once users began taking the test en masse, so it was important to ensure automatic scaling of the service. As a result, the application withstood the peak load: more than 100 thousand people successfully completed the test.
Serverless solutions are not suitable for tasks that require guaranteed fast response times and real-time request processing. Serverless assumes a constantly changing load pattern; if the load is steady, it is simpler to use services designed for continuous utilization of allocated resources, such as containers in a Kubernetes stack or databases running 24/7 with guaranteed response times.
What will happen next with serverless technologies
At the AWS re:Invent 2021 conference, the hyperscaler's main announcements were dedicated to serverless technologies. The idea connecting the talks was that the serverless approach has passed the peak of hype and reached the plateau of productivity. The developer community is coming to the conclusion that the resources offered by cloud providers should become serverless.
Serverless is the next stage in the evolution of cloud services built on open-source databases. Developers have long been accustomed to existing solutions and want either exactly the same protocol for working with serverless databases, or their own database operating on the serverless principle. In the next couple of years we can expect broader serverless support inside existing databases, or the emergence of solutions that emulate them.