A brief history of edge computing – TechRepublic

Image: Yury Zap/Adobe Stock

Edge computing is one of the most important technologies of modern times. Thanks to the edge, organizations can leverage data-consuming innovations like artificial intelligence, biometrics, the Internet of Things and endpoint management.

Combined with 5G and the cloud, the edge lets companies bring data processing closer to where data is generated, enabling real-time operations while reducing latency and IT costs. But where did it all start? What is the history of edge computing?

What was there before the edge?

To understand the early days of the edge, we must go back to the history of computers. The origins of computing can be traced back more than 200 years. However, it wasn’t until around World War II that data-processing computers began taking real shape, building on devices like MIT’s 1931 mechanical analog computer and the 1936 Turing machine, a model of a universal computing machine conceived by the British scientist Alan Turing.

As LiveScience’s timeline reveals, the ’40s, ’50s and ’60s saw steady computing advancements, but all of these computers shared common ground: They were large, often taking up entire rooms, and they processed all data on-site. They were, in fact, data servers. These massive computers were expensive, rare and hard to build, and they were used mainly by the military, governments and big industries.

SEE: Don’t curb your enthusiasm: Trends and challenges in edge computing (TechRepublic)

By the late ’70s, big tech companies like IBM, Intel, Microsoft and Apple were taking shape, and microprocessors and other micro-scale technology gave form to the first personal computers. By the ’80s, iconic machines like the 1984 Apple Macintosh were finding their way into homes. These personal computers enabled new applications, but just like the big machines of the early days, they processed all data on the device.

It wasn’t until 1989 that a significant shift in data computing began, when Tim Berners-Lee invented the World Wide Web, the first web server, the first web browser and the formatting language called Hypertext Markup Language.

Data shifted from being processed on individual devices to being processed by servers, creating the server-computer model. But even in the web’s earliest days, Berners-Lee knew this model had a big problem: congestion. He realized that as more devices connected to the internet, the servers providing the data became stressed. A breaking point would inevitably be reached, and applications and sites were bound to malfunction and crash.

From centralized servers to the first edge

Nearly a decade after the web was created, a small group of computer scientists from MIT presented a business proposition at the 1998 MIT $50K competition and were selected as finalists that year. From that group emerged a company that would change how data is managed throughout the world. The company’s name: Akamai.

Akamai today — with annual revenue of $3.5 billion, more than 355,000 servers in more than 135 countries and over 1,300 networks around the world — is a content delivery network, cybersecurity and cloud services company. But back in 1998, it was a small group of scientists working to solve the early World Wide Web’s traffic congestion problem. They foresaw how congestion would cripple the internet and developed an innovative concept to keep data flowing smoothly without sites crashing. The first edge computing architecture was born.

The model shifted away from centralized servers managing all data transfers and away from the simple server-device relationship. The edge would decentralize this model, creating thousands of networks and servers to relieve bandwidth pressure and reduce latency and data processing fatigue.

SEE: 20 good habits network administrators need — and 10 habits to break (free PDF) (TechRepublic)

Akamai’s 2002 paper, Globally Distributed Content Delivery, revealed how the company deployed its system of 12,000 servers across more than 1,000 networks to fight service bottlenecks and shutdowns by delivering content from the internet’s edge.

“Serving web content from a single location can present serious problems for site scalability, reliability and performance,” Akamai explained. “By caching content at the internet’s edge, we reduce demand on the site’s infrastructure and provide faster service for users, whose content comes from nearby servers.”

When it launched in 1999, the Akamai system focused on delivering web objects like images and documents. It soon evolved to distribute dynamically generated pages and applications, handling flash crowds by allocating more servers to sites experiencing high loads. With automatic network control and mapping, the edge computing concept Akamai presented is still in use today.
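The core idea in Akamai’s quote above — caching content at the edge so users are served from nearby servers instead of the origin — can be sketched in a few lines. This is a minimal, illustrative TTL cache, not Akamai’s actual implementation; the class name, the `fetch_from_origin` callback and the TTL value are all assumptions made for the example.

```python
import time


class EdgeCache:
    """Toy edge node: serves cached copies of content so repeated
    requests don't travel back to the origin server (illustrative only)."""

    def __init__(self, fetch_from_origin, ttl_seconds=60):
        self.fetch_from_origin = fetch_from_origin  # callback to origin server
        self.ttl = ttl_seconds                      # how long a copy stays fresh
        self.store = {}                             # url -> (content, expires_at)
        self.origin_hits = 0                        # requests that reached the origin

    def get(self, url):
        now = time.monotonic()
        entry = self.store.get(url)
        if entry is not None and entry[1] > now:
            return entry[0]  # cache hit: served from the edge, origin untouched
        # Cache miss or stale copy: fetch from origin and cache it.
        content = self.fetch_from_origin(url)
        self.origin_hits += 1
        self.store[url] = (content, now + self.ttl)
        return content
```

With this sketch, a thousand requests for the same object within the TTL window cost the origin a single fetch — the same load-shedding effect the quote describes, reduced to its simplest form.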

Edge computing: From content data to business uses

Soon after the rise of the Akamai edge network, big tech companies and vendors began offering similar content delivery networks to meet the demands of the internet’s global boom. For the next decade, the edge’s main focus was data management for websites, but new technology would find new uses for it.

The central servers–edge servers–device model would see another shift as IoT, smart devices and new endpoints began to emerge. Today’s edge network adds devices and nodes that can process data on the machine itself; their function is no longer limited to internet content distribution.

Businesses use the edge to process data on-site, avoiding costly and time-consuming transfers to the cloud and enhancing their operations. Retailers use IoT devices connected by 5G for instant payment options, inventory and customer experience, while industrial companies use IoT and endpoint devices to improve performance, insights, security and operations.

While the uses of the edge have moved beyond online content distribution and now vary from business to business, storing, processing, managing and distributing data at the edge remains its essence.

The history of edge computing is still being written: The past 30 years have seen incredible developments, and innovation shows no signs of slowing. The edge will continue to drive advancements where centralized servers and the cloud cannot compete on speed, latency, cost, security and data management.

This UrIoTNews article is syndicated from Google News.