Ensuring distributed enterprises are ‘always-on’ with Edge Computing and hyperconvergence – Intelligent CIO ME

Alan Conboy, Office of the CTO at Scale Computing, discusses how technologies like HCI and Edge Computing could be the answer to enabling distributed enterprises to achieve a robust cybersecurity strategy and mitigate the risk of costly downtime.

IT downtime is terrible news for any business in this increasingly data- and technology-driven world. Neglecting to remain ‘always-on’ is especially damaging when the consequences of just minutes of downtime ripple across an organisation’s reputation, capital and customer satisfaction. Recent high-profile cases, such as the outage that led British Airways to pay an estimated £100 million after cancelling more than 400 flights and stranding 75,000 passengers in one day, illustrate how much downtime can truly cost a business.

British Airways may have been able to withstand the monetary and reputational backlash, but many businesses could be so badly affected that they never fully recover. For organisations in industries with distributed enterprises, like retail or financial services, there is an answer to the challenge of keeping pace not only with ever more sophisticated cyberthreats, but also with the growing requirement to remain ‘always-on’.

Distributed enterprises should be looking to the latest technologies, like hyperconvergence and Edge Computing, that offer high availability, lower total cost of ownership (TCO) and easy deployment and management at each site.

By investing in technologies that are simple to manage and don’t require onsite IT experts, distributed organisations can achieve a sophisticated cybersecurity strategy that mitigates the risk of costly downtime.

Kiss single point of failure goodbye

Edge Computing is built for distributed enterprises because it puts computing resources close to where they are needed most. This contrasts with the traditional model, in which devices at branch locations, such as point-of-sale cash registers in retail stores, all connect to a centralised data centre, creating a single point of failure. In that scenario, an outage at the central data centre can affect every branch location.

But by putting an Edge Computing platform at each branch location, a failure at the central data centre need not bring everything down, because each branch can run independently of it. A solid virtualised environment can run all the different applications needed to provide customers with the high-tech services they have come to expect.
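The branch-independence idea above can be sketched in a few lines. This is a simplified illustration, not any vendor’s actual implementation: the class name `BranchEdgeNode` and its methods are hypothetical, and it models a branch that processes transactions locally and queues them for the central data centre, so a central outage never blocks the branch.

```python
from collections import deque

class BranchEdgeNode:
    """Hypothetical sketch of a branch-level Edge node: transactions are
    processed locally and queued for later sync, so the branch keeps
    operating even when the central data centre is unreachable."""

    def __init__(self):
        self.pending = deque()  # transactions awaiting central sync
        self.processed = 0      # count of locally completed sales

    def process_sale(self, sale):
        # Local processing happens regardless of central availability.
        self.processed += 1
        self.pending.append(sale)

    def sync_to_central(self, central_up):
        """Flush queued transactions only when the central site is reachable."""
        synced = []
        while central_up and self.pending:
            synced.append(self.pending.popleft())
        return synced

branch = BranchEdgeNode()
branch.process_sale({"item": "coffee", "amount": 2.50})
branch.process_sale({"item": "tea", "amount": 1.80})

# Central data centre is down: sales are still processed, nothing is lost.
assert branch.processed == 2
assert branch.sync_to_central(central_up=False) == []

# Central recovers: queued transactions flush for reporting and analytics.
assert len(branch.sync_to_central(central_up=True)) == 2
```

The design choice here is store-and-forward: the branch never depends on the central site being up to serve a customer, only to report results afterwards.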

You might be asking why this hasn’t been done before, and there is a simple answer: until now, the highly available infrastructure needed to make this work was cost-prohibitive to implement. Until very recently, building a highly available virtual infrastructure meant a sizeable investment in a shared-storage appliance, multiple host servers, virtual hypervisor licensing and a disaster recovery solution.

Hyperconvergence at work

Then came hyperconvergence, which consolidated those components into an easy-to-deploy, low-cost solution. That said, not all hyperconverged infrastructure (HCI) solutions are cost-effective for Edge Computing. Many are designed like traditional virtualisation architectures and emulate SAN technology to support that legacy design, which wastes resources and requires larger systems whose cost does not fit Edge Computing.

But there is still hope. HCI with hypervisor-embedded storage can offer smaller, cost-effective, highly available infrastructure that allows each branch location to run independently, even if the central data centre goes down. A small cluster of three HCI appliances can keep running despite drive failures or even the failure of an entire appliance. There is no way to prevent downtime completely, but Edge Computing, with the right highly available infrastructure, can insulate branches so they continue operating independently.
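The reason a three-appliance cluster can survive the loss of a whole appliance comes down to the usual majority-quorum rule used by clustered systems. The sketch below is a generic illustration of that rule, assuming a simple strict-majority policy; it is not Scale Computing’s or any specific product’s algorithm, and the function name is hypothetical.

```python
def cluster_has_quorum(total_nodes, healthy_nodes):
    """A cluster keeps serving workloads while a strict majority of its
    nodes is healthy; losing quorum halts the cluster to avoid
    split-brain (two halves both acting as the 'real' cluster)."""
    return healthy_nodes > total_nodes // 2

# Three-appliance cluster: one appliance can fail outright.
assert cluster_has_quorum(3, 3)      # all healthy
assert cluster_has_quorum(3, 2)      # one appliance lost: still available
assert not cluster_has_quorum(3, 1)  # two lost: cluster halts to stay safe
```

This is why three appliances is the common minimum for high availability: it is the smallest cluster that can lose an entire node and still hold a majority.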

With HCI, the central data centre remains a vital piece of the overall IT infrastructure; the difference is that it now consolidates data from all the branch locations for analysis to inform key business decisions. The same is true with Edge Computing, as on-site Edge Computing platforms can provide local computing while communicating key data back to the central data centre for reporting and analytics. With the central data centre no longer a single point of failure, an outage at any location need not have far-reaching effects across the whole organisation.
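The reporting flow described above can be sketched as each branch shipping only summary figures to the central data centre, which then aggregates them for analytics. This is a minimal, hypothetical illustration; the branch names, record shapes and `summarise_branch` helper are all assumptions for the example.

```python
# Hypothetical sketch: branches run locally and send only summaries
# to the central data centre for reporting and analytics.
def summarise_branch(sales):
    """Reduce a branch's local sales log to the figures head office needs."""
    return {"count": len(sales),
            "revenue": sum(s["amount"] for s in sales)}

branch_sales = {
    "london": [{"amount": 2.5}, {"amount": 4.0}],
    "manchester": [{"amount": 3.0}],
}

# Central data centre consolidates per-branch summaries for analysis.
central_view = {name: summarise_branch(sales)
                for name, sales in branch_sales.items()}
total_revenue = sum(v["revenue"] for v in central_view.values())

assert central_view["london"]["count"] == 2
assert total_revenue == 9.5
```

Sending summaries rather than raw data keeps branch-to-centre traffic light, and a branch that temporarily cannot reach the centre simply reports later without interrupting local operations.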

One step closer to 100% uptime

Technology is becoming ever more commonplace in our daily lives, and it is no different for businesses. Because of this, high availability is becoming a necessity rather than a luxury. Traditional virtualisation infrastructure is quickly being replaced by technologies such as HCI and Edge Computing because they make high availability more accessible for everyone, including distributed enterprises.
