Predicting future technological performance is tricky business: we expect linear growth but experience something different. So, as much as we might like to, we can't predict the future by extrapolating a straight line. Unfortunately for us forecasters, this gap between expectation and reality makes it hard to anticipate the exponential nature of technological progress, and that holds us back as change accelerates.
Futurists frequently apply Moore’s Law, the observation that transistor density (and, loosely, processing power) doubles roughly every two years, to technological advancements. For example, in April 2020, Zscaler announced that the cloud-based Zscaler Zero Trust Exchange was processing more than 100 billion transactions daily. Eighteen months later, the Zscaler Zero Trust Exchange is processing more than 200 billion transactions daily. (Thanks, Gordon!) For context, it’s estimated that there are between 7 and 10 billion Google searches and around 5 billion YouTube video views daily. So 200 billion for Zscaler is truly remarkable.
Moore’s Law and Neven’s Law define the trajectory of the technology revolution
Moore’s prediction has defined the trajectory of the technology revolution. But in the next 10 years, Moore’s Law will confront physical limitations. Neven’s Law, a newer postulate, holds that quantum computers are gaining computational power at a doubly exponential rate. Neven’s Law could theoretically supplant Moore’s Law to accurately predict technology evolution.
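To see why a doubly exponential curve so quickly dwarfs Moore-style doubling, here is a minimal sketch. The units and period lengths are illustrative assumptions, not figures from either law; only the shapes of the curves matter.

```python
# Illustrative comparison: exponential (Moore-style) vs. doubly
# exponential (Neven-style) growth. Units are arbitrary.

def moore(n: int) -> int:
    """Capability after n doubling periods: 2^n."""
    return 2 ** n

def neven(n: int) -> int:
    """Capability growing doubly exponentially: 2^(2^n)."""
    return 2 ** (2 ** n)

for n in range(1, 6):
    print(f"period {n}: exponential={moore(n):>3}  "
          f"doubly exponential={neven(n):,}")
```

After just five periods the exponential curve has reached 32 while the doubly exponential one has passed four billion, which is why Neven's Law, if it holds, would reshape forecasting so dramatically.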
Smartphone development aligns with Moore’s Law. We will continue to see smaller, more powerful devices with more memory and computational power. This is also true for networking bandwidth. But when it comes to network latency, no such luck. Latency reduction, when it happens, comes in small increments.
Computing, storage, memory, and bandwidth capacity will continue to accelerate and grow in the future. So how do we deal with latency? Unfortunately, transmitting data faster than the speed of light is presumably impossible. I had thought leveraging quantum entanglement to transmit data might address this challenge, but that doesn’t seem to be the case, at least for the foreseeable future.
Latency’s limit: When traveling at the speed of light isn’t fast enough
Moore’s Law correlates loosely with Zscaler’s exponential transactional-processing growth on its platform. But that same growth forecasting model breaks when it comes up against physical limits.
Some physics: Light travels at 299,792,458 meters/second and covers a kilometer’s distance in 3.33 microseconds. The light you see from that beautiful sunrise took more than eight minutes to reach Earth from the sun, some 152 million kilometers away. The eight-minute delay is latency in its purest form: the speed of light is an absolute boundary in the physical world. As much as we might want to, we can’t go faster than that.
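The figures above can be checked with a little arithmetic. The snippet below is a back-of-the-envelope verification, using the same speed-of-light and sun-distance values stated in the text:

```python
# Back-of-the-envelope check of the speed-of-light figures above.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

# Time for light to cross one kilometer, in microseconds.
per_km_us = 1 / C_KM_PER_S * 1e6
print(f"{per_km_us:.2f} us per km")  # ~3.34 us

# One-way delay from the sun at ~152 million km.
sun_delay_s = 152_000_000 / C_KM_PER_S
print(f"{sun_delay_s / 60:.1f} minutes")  # ~8.5 minutes
```

Both results match the text: a shade over 3.3 microseconds per kilometer, and a bit more than eight minutes from the sun.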
Light slows when it moves through a physical medium, and latency increases accordingly. In a fiber optic cable, for example, light takes around 4.9 microseconds to travel one kilometer.
While 4.9 microseconds of latency per kilometer may not sound like much, it adds up over distance. And that latency is particularly significant in the world of networking. For example, a direct fiber cable laid in a straight line from Copenhagen, Denmark, to Auckland, New Zealand, would stretch 17,500 kilometers. The roundtrip signal travel time? 178 ms. That’s direct, mind you. Real-world routing includes hops, routers, and suboptimal routing protocols along the way, all of which lengthen the travel distance and add latency: the total is more like 300 ms.
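The Copenhagen-to-Auckland estimate can be reproduced as a simple function. Note that plugging in exactly 4.9 µs/km yields roughly 172 ms for the direct roundtrip; the 178 ms figure quoted above implies a slightly higher per-kilometer delay, which is plausible since the effective delay varies with fiber type. The constant below is taken from the text, not a measured value.

```python
# Rough fiber-latency estimate for the Copenhagen-Auckland example above.
# FIBER_US_PER_KM is the ~4.9 us/km figure from the text; real links vary
# with fiber type, which is why quoted numbers differ slightly.
FIBER_US_PER_KM = 4.9

def roundtrip_ms(distance_km: float,
                 us_per_km: float = FIBER_US_PER_KM) -> float:
    """Roundtrip propagation delay in milliseconds over a direct fiber path."""
    return 2 * distance_km * us_per_km / 1000

print(f"{roundtrip_ms(17_500):.1f} ms direct roundtrip")
```

Extra hops and suboptimal routes only stretch `distance_km` further, which is how the real-world figure climbs toward 300 ms.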
Why latency matters
Latency — in all of its combined forms — impacts enterprise network throughput, creates performance problems for collaboration platforms, and affects any application requiring connectivity. As a result, it’s the bane of application performance, leading to productivity reduction and even profit loss.
Emerging technologies like 5G, IoT/OT, VR/AR, “smart city” applications, and even autonomous vehicles demand near real-time connectivity performance. Vendors of those technologies often promote the associated latency reductions and response-time improvements.
But no matter how those technologies are promoted, there will always be some element of latency.
The fixed, constant baseline for latency: Why we can’t ignore the flat line
Latency improvements, whether in protocols, TCP handshaking, DNS response, or elsewhere, all converge toward an absolute baseline: the speed of light.
Figure 1. Computational exponential growth vs. slow convergence toward the speed of light for application latency (Note: speed of light not to scale).
While networking protocol overheads tend to add the most latency, other aspects can slow connectivity performance. To ensure an optimal path for data traffic, IT leaders seek to reduce built-in infrastructure latency, particularly when they shift to fog and edge computing. When centralized, security adds more latency as users travel long distances over backhauled MPLS networks to move data single-file through stacked appliances. Placing security processing (in a distributed fashion) at the cloud edge improves performance by shortening travel distance and — at least in the case of the Zscaler Zero Trust Exchange — removing linear security processing.
That security must be automated and software-defined to ensure scalability and simple policy enforcement.
The next evolution in connectivity acceleration
New advances in digital telecommunications are disrupting traditional connectivity. We already see this in the deployment of 5G networks: companies can connect more directly and more often with employees, customers, and partners, with computing occurring closer to users and devices.
Importantly, data travels a comparatively shorter distance, promising faster performance.
Telco companies behind 5G are moving away from legacy in-house, monolithic solutions and toward massively scalable, cloud-first, and (importantly!) highly distributed enterprise designs.
They are refactoring infrastructure to be centrally managed but dynamically implemented at the cloud edge, nearer to onramps and consumption points.
An internet future: Security Service Edge (SSE) to the rescue
Latency — in its many forms — complicates the delivery of effective cybersecurity solutions over traditional networking infrastructure. This new cloud-first, device-agnostic, work-from-anywhere world requires a management mindset change in how we architect security into the organization: we must protect users, devices, and workloads no matter where resources reside. We must ensure policy is user-, device-, and workload-centric, not network-centric.
Enforcement must be architected for speed, leveraging single-scan multiple action technology to accelerate performance.
We may never be able to travel faster than the speed of light. But we can do something to reduce the time it takes for data to travel from A to B: we must bring security to the users and devices rather than expecting the users and devices to travel to the security. Achieving that requires distributed, cloud-edge-delivered security, and specifically, a Security Service Edge (SSE) architecture that ensures secure connectivity while minimizing data travel.
The future is user-experience-based. The way we interact with technology requires that security enables edge-based speed. Businesses will not survive without it.
To learn more about Security Service Edge, visit Zscaler.