Year for Momentum in Cloud, Service Mesh and Serverless Computing

By Avishai Sharlin | Managed Services News

Feb 10

Container use will grow, while serverless computing and service mesh standards will develop.

Amdocs' Avishai Sharlin

As cloud momentum storms ahead, the industry this year will introduce new related platforms and software as it develops a more standard approach to usage. We can also expect service mesh and serverless computing technologies to take off in exciting directions.

What to Look For

Containers will become the de facto software packaging model, and application modernization will accelerate accordingly. Container platforms have become an essential part of the hybrid cloud landscape, accelerating multicloud adoption in enterprises. According to a Portworx and Aqua Security survey, 87% of respondents said they were now running container technologies, a remarkable increase from just over half (55%) two years earlier.

At the same time, existing networks and tools continue to operate unabated. As cloud adoption picks up speed, this era of coexistence between traditional and emerging technologies will create growing operational challenges for the business, which must be addressed through a continuous transformation process.

Kubernetes (K8s) will play a vital role in the upcoming Project Pacific, VMware's re-architecture of vSphere around Kubernetes. The release will blur the line between what is virtualized and what runs in a container, which helps organizations containerize legacy applications: VMs and containers share a single unified K8s domain, so even workloads that aren't cloud-native can be deployed and managed in the same fashion as K8s workloads.

In 2020, developers and operations teams will adopt a new breed of containers: Kata Containers (which absorbed Intel's Clear Containers project). These could have a disruptive impact on the virtual machine concepts many DevOps teams are used to, because each container runs in a lightweight VM with its own kernel rather than sharing the host's Linux kernel.
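On Kubernetes, this kind of VM-isolated container is typically selected per workload through a RuntimeClass. A minimal sketch, assuming a cluster whose nodes already have the Kata runtime installed and registered under the handler name `kata` (the exact `apiVersion` depends on your Kubernetes version):

```yaml
# RuntimeClass pointing at a Kata Containers handler
# (assumes the node's container runtime, e.g. containerd,
# is configured with a handler named "kata").
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
# A pod that opts into the VM-isolated runtime instead of runc.
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app        # hypothetical pod name
spec:
  runtimeClassName: kata
  containers:
    - name: app
      image: nginx          # example image
```

Pods that omit `runtimeClassName` keep using the default runtime, so teams can move sensitive workloads onto Kata isolation incrementally.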

Serverless computing will gain more momentum as it's brought to on-premises platforms such as OpenShift and Google Anthos via Knative. According to RightScale's 2018 State of the Cloud report (registration required), serverless was the fastest-growing cloud service model, with an annual growth rate of 75%. In 2020, we will see a move toward maturing best practices, security solutions and tooling as more communications providers look to implement the technology.
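What makes Knative attractive for these platforms is how little it asks of developers: a single manifest describes a container, and Knative handles routing, revisioning and scale-to-zero. A minimal sketch, using Knative's published Go sample image (the service name and environment variable are illustrative):

```yaml
# Minimal Knative Service: Knative creates a revision, routes
# traffic to it, and scales the pods to zero when idle.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello               # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # sample image
          env:
            - name: TARGET
              value: "serverless on-premises"
```

Applied with `kubectl apply`, this gives a request-driven workload on any Knative-enabled cluster, whether in the cloud or in the data center.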

In 2019, we saw platforms such as OpenShift and Google Anthos adopt Knative to connect microservices and offer fully managed serverless workloads. These solutions, designed to address platform-as-a-service needs for developers as well as hybrid cloud deployments for enterprises, are recognized as a way to standardize Kubernetes environments.

However, I expect AWS Outposts, a managed service that extends AWS infrastructure, services, APIs and tools to on-premises locations, to be the chief disruptor of the industry in 2020, as it blurs the line between on-cloud and on-premises workloads and services.

To scale out, organizations must look beyond K8s alone. DevOps teams need on-premises infrastructure capable of running and managing serverless at scale, and most organizations cannot compete with AWS or other public cloud vendors on this front. How to run serverless on-premises remains a riddle to be solved.

Istio/service mesh will become a standard approach to running cloud-native apps and microservices. Cloud technologies are becoming part of every IT environment, from operations to orchestration and beyond. With this broadening impact, hybrid ecosystems are increasingly becoming the norm, with traditional applications, cloud-native apps and virtual network functions (VNFs) running together. In fact, according to the Cloud Native Computing Foundation, around 62% of companies rely on cloud-native technologies for more than half of their new applications.

Running cloud-native applications at scale can be highly complex, especially when thousands of microservices are involved, which is why securing communication between microservices is crucial. Controlling QoS and achieving predictable performance are also challenging, as is monitoring, debugging and observing metrics across thousands of microservices.
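A service mesh addresses the security part of this problem declaratively: rather than teaching every microservice to do TLS, one policy makes the mesh's sidecar proxies authenticate and encrypt all service-to-service traffic. A sketch of such a policy using Istio's peer-authentication resource (placement in the mesh's root namespace, typically `istio-system`, makes it mesh-wide):

```yaml
# Mesh-wide policy requiring mutual TLS between sidecars, so every
# call between services is authenticated and encrypted in transit.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace => applies mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plaintext service-to-service traffic
```

The same declarative style extends to the traffic-management and observability problems above: routing rules, retries and metrics collection are configured in the mesh instead of in each application.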

Here is where Istio, an open-source project from IBM, Google and Lyft, comes in. It provides a single standard service mesh on top of …
