Achieve the Cloud-Edge Continuum Without Burdening Developers – DevPro Journal


One of the really cool things about the internet is that all computers are fundamentally equal. It's probably one of the things that made me fall in love with it.

When I first got an iPhone around 2008, almost the first thing I did was install Linux on it and run Drupal. Talk about edge computing! I actually think this is where we are going. The grand period of the hyper-scalers is clearly not coming to an end, but things are going to change quite a bit.

Think about it. When you push at least some of the personalization, and even the collaboration, to computers that are really close to the end-user, their private information doesn't need to traverse the whole internet. You not only get incomparable performance; respecting the user's privacy also becomes easier. Maybe even the default. It also means that complying with frameworks like the GDPR (and even the hyper-local frameworks we see popping up all over the place) becomes much easier.

It also changes the picture quite a bit regarding environmental impact. A huge share of the carbon impact of running web applications happens at the network level rather than at the origins. In our models, in some cases, it can be as high as 90%.

In the same way, doing a lot of the security work at the edge layer has proven effective: much of the attack volume never reaches any of the inner machines.

The flip side is that none of this happens trivially. Computers may all be equal on the internet, but the software they run is not. Performance, privacy, and carbon impact are not the only constraints. Sometimes you must have strong consistency and store data that will be reliably available.

We always say that there aren’t really many useful stateless services. Most services that have value change something in the world. An ecommerce transaction. A comment on a design. And when you change the world it all becomes much trickier.

Consistency is not impossible in a distributed, peer-to-peer edge scenario, but it is far more expensive, considerably slower, and often carries a huge environmental impact (think Web3).
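To make the "considerably slower" part concrete, here is a small sketch (not from the article; the function name and numbers are made up) of why strong consistency gets slow once replicas are spread geographically. Quorum-based protocols such as Raft or Paxos commit a write only after a majority of replicas acknowledge it, so commit latency is bounded by the round-trip time to the median-distance replica:

```python
def quorum_commit_latency(rtts_ms):
    """Estimate commit latency for a majority-quorum write.

    rtts_ms: round-trip times (ms) from the leader to each follower.
    The leader counts toward the majority itself, so with n total
    replicas it needs acks from floor(n/2) followers; latency is the
    RTT of the slowest ack it must wait for.
    """
    n = len(rtts_ms) + 1          # total replicas, including the leader
    acks_needed = n // 2          # follower acks needed for a majority
    return sorted(rtts_ms)[acks_needed - 1]

# Followers in one nearby region: commit in a few milliseconds.
print(quorum_commit_latency([2, 3]))               # -> 2

# "Geo-replicated at the edge": followers on other continents.
print(quorum_commit_latency([80, 120, 150, 200]))  # -> 120
```

Same protocol, same code path; the only thing that changed is the geography, and every strongly consistent write now pays a cross-continent round trip.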

When we say computers are equal, it's evidently a huge simplification. They differ in how much raw power they have, in their concurrency capabilities, and most importantly in their throughput and latency to other computers. You can only load balance efficiently if the computer taking over the load is nearby. So it's really a complex set of tradeoffs. I see a lot of fluff going around with magical promises that don't consider those tradeoffs. Can you run a relational database that is "geo-replicated" at the edge? Yes. But should you? In most cases, the answer is no. It is going to be brittle as hell, and you'll pay for it in a couple of years (or months!).
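The point about proximity can be sketched in a few lines. This is an illustration, not the author's system: the node names, RTTs, and load threshold are all invented. The idea is simply that a proximity-aware balancer picks the closest node that still has capacity, rather than the globally least-loaded one:

```python
# Hypothetical nodes: (name, RTT in ms from this user, load from 0 to 1).
NODES = [
    ("edge-paris",    8, 0.95),   # closest, but overloaded
    ("edge-lyon",    15, 0.40),
    ("core-virginia", 90, 0.10),  # least loaded, but an ocean away
]

def pick_node(nodes, max_load=0.8):
    """Nearest node under the load threshold; else the least-loaded one."""
    candidates = [n for n in nodes if n[2] < max_load]
    if candidates:
        return min(candidates, key=lambda n: n[1])[0]
    return min(nodes, key=lambda n: n[2])[0]

print(pick_node(NODES))  # -> edge-lyon
```

A balancer that only looked at load would ship this request to Virginia and pay 90 ms each way; one that also looks at distance keeps it a short ping away.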

You shouldn't expect magic, and you can't rewrite the whole software stack we have created over the last few decades to just "project everything to the edge." When people complain about "cold starts" on serverless functions while what they actually deploy is a huge Java monolith that makes a thousand queries to a relational database on every call, there is simply an impedance mismatch here.

Part of our mission is figuring this out at the infrastructure level so that we can coordinate the application clusters. This allows us to project what can be moved closer to the end customer, while keeping services that want to be close to each other a very short ping away. It eliminates the need for customers to rewrite everything, without blockchains, and without incredibly brittle and complex machinery.

This is harder to do than it sounds. But I think it also confirms our approach to infrastructure orchestration and a lot of the intuitions we had early on. We are in a unique position because we always refused the "patchwork approach" to running containerized applications. Because we control (and see) the whole thing, from the storage layer through the network dependencies and the dependencies of each service, we can understand the constraints: what we can push further away from the center, and what must stay closely knit. And we've been looking quite a bit at what can be done in terms of "just right consistency" at the actual edge.

At the end of the day, from my standpoint, it's not really a question of "what you should do at the edge vs. what you should do at the origin." It's a continuum question, and a question of the developer's standpoint. How can we make sure the optimum is achieved without burdening the app developer? And more importantly, how can we solve as much of this as possible, transparently, within the infrastructure?
