Edge computing is nothing new. But building applications and solutions at the edge that leverage the cloud for analytics while using the network as efficiently as possible can be challenging.
But developing a solution that works is not the only challenge. How do developers handle post-deployment monitoring and maintenance? Deploying a cloud-native app at the edge may open a Pandora’s box of unknown interoperability, scalability and maintenance issues.
“The biggest problem is developers still don’t know, at the edge, how to bring a legacy application and make it cloud-native,” said Ajay Mungara (pictured), senior director of edge SW and AI, developer solutions, and engineering at Intel. “So they just wrap it all into one Docker and they say, ‘OK, now I’m containerized.’ So we [Intel Dev Cloud] tell them how to do it right. So we train these developers. We give them an opportunity to experiment with all these use cases so that they get closer and closer to what the customer solutions need to be.”
Mungara spoke with theCUBE industry analysts Dave Vellante and Paul Gillin during the recent Red Hat Summit event, an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed DevCloud, edge computing, use cases and solutions. [The following content has been condensed for clarity.] (* Disclosure below.)
Vellante: DevCloud, what’s it all about?
Mungara: A lot of the time, people think about edge solutions as just computers at the edge, but you’ve also got to have some component of the cloud and the network. And edge is complicated because of the variety of edge devices that you need. And when you’re building a solution, you’ve got to figure out: Where am I going to push the compute? How much of the compute am I going to run in the cloud? How much am I going to push to the network, and how much do I need to run at the edge? A lot of times what happens for developers is they don’t have one environment where all three come together.
So, what we did is we took all of these edge devices that will theoretically get deployed at the edge and put them in a cloud environment. All of these devices are available to you. You can pull all of these together, and we give you one place where you can build, test and run performance benchmarks. So when you actually go into the field to deploy, you know what type of sizing you need.
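The build, test and benchmark loop Mungara describes can be sketched in miniature. The snippet below is a hypothetical stand-in, not Intel’s actual tooling: the `benchmark` helper and the toy workload are invented for illustration, where a real run on a DevCloud device would time an actual inference call to produce the latency and throughput numbers that drive a sizing decision.

```python
import time

def benchmark(workload, n_iters=100):
    """Time a workload to estimate per-run latency and throughput."""
    start = time.perf_counter()
    for _ in range(n_iters):
        workload()
    elapsed = time.perf_counter() - start
    return {"latency_ms": elapsed / n_iters * 1000, "runs_per_s": n_iters / elapsed}

# Stand-in workload; on a real edge device this would be a model inference.
result = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"{result['latency_ms']:.3f} ms per run, {result['runs_per_s']:.0f} runs/s")
```

Comparing these figures across candidate devices is what tells you whether two small boxes or one mini data center is the right size for the field deployment.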
Vellante: Take that example of AI inferencing at the edge. So I’ve got an edge device, I’ve developed an application, and I want to do the AI inferencing in real time. You’ve got some kind of streaming data coming in; I want to persist that data, send it back to the cloud, and be able to develop that, test it and benchmark it.
Mungara: What we have is a product, Intel OpenVINO, an open-source toolkit that does all of the optimizations you need for edge inference. So you develop … the training model somewhere in the cloud. I’ve developed the model, I’ve annotated the different video streams, etc. And you don’t want to send all of your video streams to the cloud; it’s too expensive, and bandwidth costs a lot. So you want to compute that inference at the edge. To do that inference at the edge, you need some environment. What type of edge device do you really need? What type of compute do you need? How many cameras are you computing for?
And the bigger challenge at the edge (developing a solution is fine) is when you go to actual deployment and post-deployment monitoring and maintenance. Making sure you are managing it all is very complicated. What we have seen recently is that over 50% of developers are developing some kind of cloud-native application. So, we believe that if you bring that type of cloud-native development model to the edge, you can get a handle on the scaling problem, the maintenance problem and the question of how you actually deploy it.
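Mungara’s bandwidth argument above can be made concrete with some back-of-the-envelope arithmetic. The sketch below is purely illustrative: the `edge_infer` stub stands in for an optimized inference call (such as one built with OpenVINO), and the frame and detection sizes are assumptions, not measurements. The point is that shipping per-frame detection metadata instead of raw 1080p video shrinks the uplink by several orders of magnitude.

```python
# Illustrative numbers (assumptions, not measurements):
FRAME_BYTES = 1920 * 1080 * 3   # one uncompressed 1080p RGB frame
DETECTION_BYTES = 200           # a small metadata blob per frame
FPS = 30                        # one second of video

def edge_infer(frame):
    """Stub standing in for an optimized edge-inference call."""
    return {"label": "person", "confidence": 0.93}

frames = [bytes(8) for _ in range(FPS)]          # stand-in frames
detections = [edge_infer(f) for f in frames]     # inference runs at the edge

raw_uplink = FRAME_BYTES * len(frames)           # shipping raw video to cloud
edge_uplink = DETECTION_BYTES * len(detections)  # shipping only detections
print(f"raw: {raw_uplink / 1e6:.0f} MB/s vs edge: {edge_uplink / 1e3:.1f} kB/s")
# → raw: 187 MB/s vs edge: 6.0 kB/s
```

Compression narrows the gap in practice, but the asymmetry is why the inference itself belongs at the edge while the cloud keeps the training and management roles.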
Vellante: What’s the edge look like? What’s that architecture?
Mungara: I’m not talking about the far edge, where there are tiny microcontrollers and similar devices. I’m talking about the devices that connect to those far edge devices, collect the data, do some analytics, some computing, etc. A far edge device could be a camera, could be a temperature sensor, could be a weighing scale, could be anything, right? And then, instead of pushing all the data to the cloud in order to do the analysis, you are going to have some type of edge set of devices that is collecting all this data and making decisions close to the data; you’re doing some analysis there.
So, you have a bunch of devices sitting there, and those devices can all be managed and clustered in an environment. So the question is, how do you deploy applications to that edge? How do you collect all the data coming through the camera and other sensors, process it close to where the data is being generated and make immediate decisions? So the architecture would look like this: You have some cloud, which does some management of these edge devices, management of these applications, some type of control. You have some network, because you need to connect to that. Then you have the whole plethora of edge, starting from a hybrid environment where you have an entire mini data center sitting at the edge, down to one or two devices that are just collecting data from these sensors and processing it.
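The three-tier architecture Mungara outlines, far-edge sensors feeding an edge node that decides locally and reports only summaries to a cloud management layer, can be sketched as below. Everything here is invented for illustration: the `EdgeNode` class, the alert threshold and the sensor readings are hypothetical, not part of any Intel product.

```python
from statistics import mean

class EdgeNode:
    """Aggregates far-edge sensor readings and makes decisions locally."""

    def __init__(self, threshold):
        self.threshold = threshold  # illustrative alert threshold
        self.readings = []

    def ingest(self, value):
        # Immediate decision, made close to where the data is generated.
        self.readings.append(value)
        return "alert" if value > self.threshold else "ok"

    def summary(self):
        # Only this compact summary travels up to the cloud layer,
        # not the raw sensor stream.
        return {"count": len(self.readings), "mean": mean(self.readings)}

node = EdgeNode(threshold=80.0)
decisions = [node.ingest(v) for v in [21.5, 35.0, 95.2, 40.1]]
print(decisions)        # per-reading decisions made at the edge
print(node.summary())   # what cloud-side management receives
```

The cloud tier in this picture never sees the individual readings; it manages the fleet of edge nodes and consumes their summaries, which is exactly the split between local decision-making and central control described above.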
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the Red Hat Summit event:
(* Disclosure below: TheCUBE is a paid media partner for Red Hat Summit. Neither Red Hat Inc., the sponsor for theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)