Last Year’s Predictions and 4 Kubernetes and Edge Trends to Watch – Spiceworks News and Insights

The edge computing landscape is evolving fast. How can enterprises best prepare to ride the upcoming trends? In this article, Stewart McGrath, CEO and co-founder of Section, reviews the predictions from last year about Kubernetes and the edge and examines four key trends to look forward to.

As this year draws to a close, I thought it would be a good time to throw out a few predictions about what 2023 holds for the Kubernetes, container orchestration and edge computing landscape. But first, I’d like to hold ourselves accountable and look back on the predictions we made this time last year. In retrospect, how did we score?

Reviewing Our Predictions for 2022

1. The use of containers at the edge will continue to grow
The Internet of Things, online gaming, video conferencing and a whole host of emerging use cases mean the use of containers at the edge will continue to grow. Moreover, as usage increases, so too will organizational expectations. Companies will demand more from edge platform providers in terms of support to help ease deployment and ongoing operations.

This one is tough to measure, as there’s little hard data available. The outcome seems inevitable, and anecdotal evidence from conversations with analysts, customers and others in the industry indicates it is, in fact, happening. That said, without hard evidence, I have to give us an N/A on the score check here.

2. Kubernetes will become central to edge computing
Hosting and edge platforms built to support Kubernetes will have a competitive advantage in flexibly supporting modern DevOps teams’ requirements. Edge platform providers who can ease integration with Kubernetes-aware environments will attract attention from the growing cloud-native community; for example, leveraging Helm charts to allow application builders to hand over their application manifest and rely on an intelligent edge orchestration system to deploy clusters accordingly.

How about 7.5 out of 10 on this one? The overall ecosystem developing around Cloud Native Computing Foundation (CNCF) technologies is growing quickly and extensively. CNCF projects and related technologies like KubeVirt, Knative, WASM, Krustlet and Dapr indicate the growing acceptance of Kubernetes as an operating system of choice for not only containers but also virtual machines and serverless workloads. Providers of Kubernetes distributions, such as VMware’s Tanzu, Rafay Systems and Platform9, continue to build and help customers run on multi-location, always-on footprints, while our location-aware global Kubernetes platform as a service grew substantially in its ability to help customers instantly run Kubernetes workloads in the right place at the right time.

3. CDN attempts to reinvent themselves will gain pace
In the year ahead, content delivery networks (CDNs) will increasingly recognize the need to diversify away from the steadily declining margins of large object (e.g., video and download) delivery. In addition to reinventing themselves as application security platforms, CDNs will continue to lean into the application hosting market. Cloudflare and Fastly have built on their existing infrastructure to deliver distributed serverless. We expect other CDNs will enter and/or expand offerings focused on the application hosting market as they seek to capitalize on their investment in building distributed networks.

I am going to take a 10 out of 10 here. Akamai indicated a major shift when it spent nearly $1 billion acquiring Linode to plunge headlong into the application hosting space and recently announced its investment in data network company Macrometa. Fastly and Cloudflare have continued to expand their Edge offerings and, at recent conferences, reinforced the importance of their Edge compute plays for the future of their companies.

4. Telcos will rise
Telcos will start developing more mature approaches to application hosting and leverage their unique differentiation of massively distributed networks to deliver hosting options at the edge. Additionally, more partnerships will emerge to facilitate the connection between developers and telcos’ 5G and edge infrastructure to solve their lack of expertise in this space.

We were too optimistic, so I’ll give this one a 5 out of 10. The telcos do seem to be moving in this direction but are moving at a typical telco pace. While players like Lumen have continued to roll out hosting infrastructure in distributed footprints, we did not see a monumental shift from any telco during 2022.

See More: What’s Next for DevOps? Four DevOps Predictions for 2023

2022 Overall Score

Overall, I’d give us 22.5 out of 30, or 75% (having excluded the N/A score). Definitely a passing mark, but some headroom for excellence this year!

Four Trends to Watch in 2023

  1. The rise of Kubernetes as a service
    Kubernetes has been described as an operating system for containers. As workload management continues to expand to serverless and virtual machines, and the operations ecosystem (e.g., security and observability) matures and hardens, we will see Kubernetes increasingly abstracted from users. No developer building an application really needs (or probably wants) to understand and manage Kubernetes; what they really want is the benefits of Kubernetes when managing their applications in production. In the same way, no developer wants to manage Linux or even the servers it runs on, which is why cloud computing gave us compute as a service. Kubernetes sits one layer above that compute and is a natural fit for an “as a service” offering; in 2023, we’ll see that take off.
  2. The rise of telcos – again
    Doubling down here; I am going to take another swing at this one. This year we will see some movement from the telcos after they spent 2022 watching and planning. We will continue to see investment in Edge infrastructure from ISPs, telcos, CDNs, hosting companies and hyperscalers. And we will see these infra providers develop a need for application-level technologies that enable developers to place their workloads on that infrastructure.
  3. Data distribution to go mainstream
    One of the key concerns for the global distribution of applications is appropriately managing connections to a central data store. Consistency and durability of data are challenges when working with distributed systems. For a long time, centralizing data stores has been the standard way to solve for consistency and the easy way to achieve ACID (Atomicity, Consistency, Isolation, Durability) properties. Distributing data for Edge applications brings challenges for consistency. Fortunately, there has been significant investment in solving these problems by organizations such as Cockroach, MongoDB, Macrometa, Fauna and PolyScale. Caching, distribution and replication are all techniques these organizations are employing to make data available in distributed footprints while retaining ACID (or close to ACID) properties.
  4. The Edge will remain a nebulous and disputed concept
    Edge is a bad name for a distributed compute paradigm. There is simultaneously no edge to the Internet, and many Edges, depending on your perspective. The debate will continue to rage about where the Edge is and whether some distributed systems are more or less “Edge-y” than others. What will not be disputed is that distributing applications across wider hosting footprints has advantages with respect to elements such as latency, reliability, redundancy and data backhaul cost. So maybe a new phrase will emerge focusing on application distribution rather than Edge.
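The consistency trade-off behind trend 3 can be sketched in a few lines. Below is a minimal, hypothetical read-through cache with TTL-bounded staleness; it is an illustration of the caching technique mentioned above, not any vendor’s actual API (all names here are invented for the example):

```python
import time

class ReadThroughCache:
    """Minimal sketch of an edge read-through cache with TTL-bounded staleness.

    Reads served from the local (edge) copy are fast but may be up to
    `ttl` seconds stale relative to the central store. This is the
    relaxed, "close to ACID" behavior described above.
    """

    def __init__(self, origin, ttl=5.0):
        self.origin = origin          # central data store (here: a plain dict)
        self.ttl = ttl                # maximum tolerated staleness, in seconds
        self._cache = {}              # key -> (value, fetched_at)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        hit = self._cache.get(key)
        if hit is not None and now - hit[1] < self.ttl:
            return hit[0]             # fast, possibly stale edge read
        value = self.origin[key]      # fall through to the central store
        self._cache[key] = (value, now)
        return value

origin = {"user:1": "alice"}
cache = ReadThroughCache(origin, ttl=5.0)
print(cache.get("user:1", now=0.0))   # "alice" (fetched from origin)
origin["user:1"] = "bob"              # central store updated elsewhere
print(cache.get("user:1", now=1.0))   # still "alice": within TTL, stale read
print(cache.get("user:1", now=6.0))   # "bob": TTL expired, re-fetched
```

Shrinking the TTL moves the system toward strong consistency at the cost of more round trips to the central store; the distributed-database vendors named above effectively automate this trade-off with replication and invalidation protocols.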

See More: Predictions for Service Mesh and Microservices: What Does 2023 Have in Store?

Long-term Predictions for the Next Five Years

Kubernetes environments allow for the dynamic scheduling of unrelated workloads in a single cluster. With the development of greater levels of Kubernetes abstraction and the hardening of security and observability, I can see a world where providers of Kubernetes clusters will announce the availability of their clusters to a general global pool of resources on which a developer could deploy workloads.

Each cluster will be able to describe its attributes (location, capacity, compliance, etc.), and devs will be able to let an overall orchestration system match workload requirements to the underlying attributes of contributed clusters (e.g., needs GPU, PCI DSS compliance, specific always-on locations). This will be the next evolution of cloud computing: a dynamic cloud of clusters.
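The matching step in that vision can be sketched as a simple filter over advertised cluster attributes. This is a toy illustration of the idea, with invented attribute names and cluster data, not any real platform’s scheduling API:

```python
def match_clusters(workload, clusters):
    """Return names of clusters whose advertised attributes satisfy
    every requirement of the workload (a sketch of the matching step)."""
    def satisfies(cluster):
        req = workload["requirements"]
        return (
            # GPU required only if the workload asks for one
            (not req.get("gpu") or cluster.get("gpu", False))
            # cluster must cover all required compliance regimes
            and cluster.get("compliance", set()) >= set(req.get("compliance", []))
            # if locations are constrained, the cluster must be in one of them
            and (not req.get("locations")
                 or cluster.get("location") in req["locations"])
        )
    return [c["name"] for c in clusters if satisfies(c)]

# Hypothetical pool of contributed clusters, each describing its attributes.
clusters = [
    {"name": "syd-1", "location": "sydney",    "gpu": True,  "compliance": {"PCI-DSS"}},
    {"name": "fra-1", "location": "frankfurt", "gpu": False, "compliance": {"PCI-DSS", "GDPR"}},
    {"name": "nyc-1", "location": "new-york",  "gpu": True,  "compliance": set()},
]

workload = {"requirements": {"gpu": True,
                             "compliance": ["PCI-DSS"],
                             "locations": ["sydney", "new-york"]}}

print(match_clusters(workload, clusters))   # ['syd-1']
```

A real orchestration system would go further, scoring the candidates (e.g., by latency or cost) rather than just filtering, but the core idea of matching declared requirements to declared attributes is the same.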

The Kubernetes ecosystem has continued to demonstrate remarkable growth over the past 12 months. I have no doubt we’ll see further evolution in the coming year as the demand for better automation of deployment, scaling and management of containerized applications is clear.

What’s your take on the trends predicted? Share your thoughts with us on Facebook, Twitter, and LinkedIn.
