Keep your business running smoothly.
In today’s data-driven world, business success and data have become inextricably intertwined. Consequently, the cost and impact of system downtime continue to rise. In the past, if systems went down, people simply reverted to manual methods or “patiently” waited for operations to come back online.
Today, even a few minutes of unplanned downtime can lead to lost customers, a damaged reputation, and long-term ill will, not to mention the legal and regulatory compliance ramifications.
Prominent cases making headlines, such as the Slack work chat app’s recent technical issues that brought it down for hours and contributed to a Q2 earnings report that sent its stock down more than 14%, show just how damaging downtime can be to a business.
While Slack appears to have the wherewithal to recover from the financial and reputational backlash, what about small-to-midsize enterprises (SMEs)? How would they weather an outage storm? Unfortunately, in all likelihood, many wouldn’t.
It can seem as though there is no light at the end of the IT tunnel for SMEs and distributed enterprises trying to keep pace with increasingly sophisticated cyber threats and the requirement for their business to remain “always-on.” Fortunately, for SMEs, and for those in industries with distributed operations such as financial services or retail, there are innovative technologies, namely hyperconvergence and edge computing, that when combined not only enable the highest levels of availability but also offer features and functionality previously available only to enterprise data centers, all at a lower total cost of ownership (TCO) and with a high return on investment (ROI).
Business Continuity at the Edge
It seems like just yesterday that the idea of an organization’s data processing being done anywhere other than a centralized data center seemed ludicrous. Today, however, edge computing has emerged as a smart alternative to centralized data processing: computing is done at or near the source of the data, closest to where the data is being used.
Chief among edge computing’s benefits, alongside enhanced control, speed, and reliability of data processing, is the elimination of a single point of failure (SPOF). To illustrate: when devices at branch locations, like point-of-sale cash registers in retail stores, all connect to a centralized data center, an outage at that central data center can bring down every branch location.
By putting an edge computing platform at individual branch locations, a failure at the central data center would not interrupt branch operations, because each branch can run independently from it. A solid virtualized environment can run all of the different applications needed to provide customers with the high-tech services they have come to expect.
Many might ask why this hasn’t been done before, and there is a simple answer: until very recently, it was cost-prohibitive to implement the kind of infrastructure needed to make this work – highly available infrastructure. Creating a highly available virtualized infrastructure involved considerable investment in a shared-storage appliance, multiple host servers, virtual hypervisor licensing, and then a disaster recovery (DR) solution.
Enabling Optimum Performance for Hyperconverged Infrastructure
Hyperconvergence enables the consolidation of all those various components into an easy-to-deploy, easy-to-manage, low-cost solution. However, not all hyper-converged infrastructure (HCI) solutions are alike, and not all of them deliver cost-effective edge computing.
Some HCI solutions are still designed like traditional virtualization architectures and emulate storage area network (SAN) technology to support that legacy architecture. This emulation wastes resources and requires larger systems whose cost is incompatible with edge computing.
The solution is HCI with hypervisor-embedded storage, which offers a smaller, cost-effective, highly-available infrastructure that allows each branch location to run independently, even if the central data center goes down.
A small cluster of three HCI appliances can remain running notwithstanding drive failures or even the failure of an entire appliance. There is no way to prevent downtime completely, but edge computing, with the right highly-available infrastructure, can insulate branches and enable independent continuous business operations.
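The reason a three-appliance cluster can survive the loss of a drive or an entire appliance comes down to majority quorum: the cluster keeps serving workloads as long as a strict majority of nodes remains healthy. The sketch below is illustrative only, with hypothetical names, and is not any vendor’s actual API; it simply shows the quorum arithmetic that makes a three-node cluster tolerate one full appliance failure.

```python
# Illustrative sketch (hypothetical names, not a vendor API):
# why a three-node HCI cluster tolerates one appliance failure.

def has_quorum(total_nodes: int, healthy_nodes: int) -> bool:
    """The cluster keeps running only while a strict majority of
    nodes is healthy, which also prevents split-brain scenarios."""
    return healthy_nodes > total_nodes // 2

# A 3-node cluster survives the loss of one appliance:
print(has_quorum(3, 2))  # True: majority intact, the branch stays up
print(has_quorum(3, 1))  # False: majority lost, the cluster must stop
```

This is also why three appliances, not two, is the typical minimum: with two nodes, losing either one leaves no majority.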
With HCI, the central data center is still a critical component of the total IT infrastructure. It consolidates data from all of the branch locations for analysis to make key business decisions. That doesn’t need to change with edge computing.
On-site edge computing platforms can provide local computing while communicating key data back to the central data center for reporting and analytics. By taking the single point of failure out of the equation, outages at the central data center or any single location need not have far-reaching consequences across the whole organization.
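One common way to realize this pattern is store-and-forward: the branch always processes transactions locally and queues key data for the central data center, flushing the backlog once connectivity is restored. The following minimal sketch assumes a hypothetical `BranchUplink` class of our own invention; it is not drawn from any specific product.

```python
# Minimal store-and-forward sketch (hypothetical class, not a real
# product API): a branch keeps operating locally and defers syncing
# key data until the central data center is reachable again.
from collections import deque

class BranchUplink:
    def __init__(self):
        self.outbox = deque()  # records awaiting central sync

    def record(self, event: dict) -> None:
        # Local processing always succeeds; the upload is deferred.
        self.outbox.append(event)

    def sync(self, central_up: bool) -> int:
        # Flush the queue only while the central site is reachable.
        sent = 0
        while central_up and self.outbox:
            self.outbox.popleft()  # stand-in for an actual upload
            sent += 1
        return sent

uplink = BranchUplink()
uplink.record({"sale": 1})
uplink.record({"sale": 2})
assert uplink.sync(central_up=False) == 0  # central outage: nothing lost
assert uplink.sync(central_up=True) == 2   # recovered: backlog flushed
```

The key design point is that branch operations never block on the central site; only reporting and analytics are delayed during an outage.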
Always-On, Always-Available Infrastructure with HCI
Always-on and always-available are no longer nice-to-haves; they have become an IT and business necessity. By deploying HCI at the edge, SMEs can achieve both and enjoy the powerful infrastructure capabilities previously available only to resource-rich enterprises. HCI at the edge enables SMEs with distributed branches to level the playing field, compete, and win in today’s digital age.