Years before cloud computing utterly revolutionized where and how we could access technology, there were regional, national, and even global “grids” handling huge datasets for researchers, without requiring heavy-duty machinery in each of their labs.
But if grid computing was the precursor to the commercial clouds operated by the likes of Microsoft, Amazon, IBM, Google, and Salesforce, those clouds are now enabling new kinds of computing that promise to work with billions of devices in faster, more efficient fashion.
As Ian Foster, the Argonne National Laboratory computer scientist who pioneered grid computing in the mid-1990s, explained in a recent interview on Argonne’s website:
Grid and cloud [computing] were each made possible by increasingly widely deployed and capable physical networks—first among scientific laboratories, for grid, and then to homes and businesses, for cloud. But this reliance on physical connections means that these utilities can never be universal.
Even with the millions of servers supporting cloud technology, latency—the time it takes for data to move from one point to another—remains a challenge for certain kinds of applications. And though the commercial providers have addressed many of the questions about the security, cost, and bandwidth needs of cloud computing, there are limits to what the technology can do.
Enter a potential new wave of computing architecture, summed up by Foster as “the emergence of ultrafast wireless networks that will permit access to computing anywhere, anytime, with the only limit being the speed of light.”
Check out our glossary to understand where cloud computing came from, and where it’s headed.
In grid computing, computers work together from afar to handle heavy-duty processing needs. It's primarily used in scientific research, but it's also used for risk-management calculations at financial firms, development tasks for video-game designers, and even special effects in films. The whole constellation of machines, or “nodes,” runs on software and standards that keep the data easily shareable across the grid.
Described by some as “grid with a business model,” a cloud is essentially a network of servers that can store and process data. Crucially, in the cloud, data is retrievable on demand and delivered over the internet.
The term “cloud computing” entered the lexicon in the mid-1990s (it's believed to have been coined at Compaq Computer). But the concept didn't start to take off until Amazon Web Services launched a beta version of a public cloud in 2002. Even then, it would be years before anyone outside a small circle of early adopters knew what to do with it.
Today, cloud computing is close to a $500 billion global business, projected by Gartner Research to reach nearly $600 billion in 2023.
What if the kinds of computing performed in the cloud could be fully decentralized, with the processing moved close to where the data is generated, or even onto the individual devices that depend on it? Well, then you would have edge computing, conducted at the far edges of a network. Here, latency and bandwidth aren't as much of a problem because the computing takes place much closer to the devices themselves.
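To see why proximity matters, here's a back-of-the-envelope sketch of the latency floor that physics imposes, echoing Foster's point that the speed of light is the ultimate limit. The distances are illustrative assumptions, not measurements of any real deployment:

```python
# Hypothetical comparison of the minimum network round-trip time to a
# distant cloud region vs. a nearby edge node. Distances are assumed
# for illustration; real latency also includes routing and processing.

SPEED_OF_LIGHT_KM_S = 300_000  # approximate speed of light, km/s

def min_round_trip_ms(distance_km: float) -> float:
    """Lower bound on round-trip latency imposed by distance alone."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_S * 1000

cloud_rtt = min_round_trip_ms(2000)  # device to a far-away cloud data center
edge_rtt = min_round_trip_ms(10)     # device to a nearby edge node

print(f"cloud: {cloud_rtt:.2f} ms, edge: {edge_rtt:.4f} ms")
```

Even this idealized lower bound shows a roughly 200-fold gap, which is why latency-sensitive applications push computation toward the edge.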
Finally, there’s fog computing, coined by networking equipment maker Cisco in 2014. An amalgam of cloud and edge computing, it essentially brings the cloud down to the edge of a network, where it settles like a layer of fog over a landscape, sending data to and from smart devices that both generate and need information—for example, self-driving cars. With fog computing, the data doesn’t need to be sent all the way back to the main part of the cloud, which cuts down on latency and bandwidth requirements.
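The routing decision at the heart of fog computing can be sketched as a simple rule: handle latency-critical work at a nearby fog node, and defer bulk, non-urgent work to the central cloud. This is a minimal illustration under a hypothetical smart-device setup, not any vendor's actual implementation:

```python
# Minimal sketch of a fog-style routing decision. The "latency_critical"
# flag and the node names are hypothetical, chosen only to illustrate
# the split between edge-local and cloud-bound processing.

def route(reading: dict) -> str:
    """Decide where a sensor reading should be processed."""
    if reading.get("latency_critical"):
        return "fog-node"  # processed near the device, minimizing latency
    return "cloud"         # sent upstream for heavy, non-urgent analytics

# Example: a self-driving car's obstacle reading stays local,
# while a routine diagnostic log can travel to the cloud.
print(route({"latency_critical": True}))
print(route({"latency_critical": False}))
```

In practice this kind of triage is what cuts the latency and bandwidth requirements the paragraph above describes: only the data that needs the full cloud ever makes the trip.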