
The Long-Term Vision for Distributed Applications in Data Centers

October 22, 2020

A phrase heard often these days is “cloud-native application.” It describes a deployment model in which developers build their customer-facing services and applications in the same cloud environment where they will eventually be tested and deployed. It also entails a new kind of relationship between developers, IT administrators, and security engineers. And it is the end goal of almost every enterprise IT project today.

So where does the enterprise data center provider fit in with all of this? Because of the way the cloud is marketed, enterprises and consumers equate it with the brand names of the corporations whose services they consume: Amazon AWS, Microsoft Azure, and Google Cloud. Even in a hybrid cloud deployment, it’s one of these three that ends up on top. And when multicloud becomes desirable, all three leap into the public pool at once.

Outside of their own IT departments, enterprises don’t often equate the idea of “cloud” with the virtual infrastructure providers that actually make it possible: Red Hat, VMware, and HPE. And they certainly don’t equate it with the proprietor of their local or regional data center, whose connectivity services they used to reach the public cloud in the first place. The business relationship ends up being between the two parties at the ends of the communication line: the big three public cloud service providers and the enterprise IT department. The enterprise data center or colocation provider ends up a passive participant – just a conduit.

This is not the kind of scenario on which a healthy IT economy is built. Data centers are the backbone of information technology.

Recently, private data centers have begun offering their customers interconnectivity – sometimes as a premium service option, but more often as a standard feature. It’s a way to move beyond mere cross-connects between buildings and open up new opportunities with fiber optic links between major points of presence (PoPs). Telecommunications carriers’ facilities are one such class of PoP, enabling access to mobile devices and edge computing sites.

Public cloud service providers are – more than ever before – the most desirable PoP. It’s like living in a rural district for years and suddenly having a shopping mall open up within ten miles of you: a world of products and services you never found when you walked downtown.

This puts the private data center in the role of the bypassed, neglected, perhaps forgotten local general store. And you’ve seen what a blight the loss of that store becomes for the local economy.

For the time being, interconnectivity to public cloud infrastructure, supplied by hyperscale facilities throughout the big three providers’ availability zones, gives data centers a leg up in delivering web applications and distributed virtual infrastructure to remote customers. But it is the aim of the big three to make the data center’s customers their own customers full-time. Recent analyst reports based on enterprise surveys reveal that once enterprises are invested in services such as object data stores, managed Kubernetes, and real-time analytics, very few are willing to migrate to another provider, even when they suspect they might like it better. It’s too much trouble.

If enterprise data centers were to pool their resources, they could provide a massively distributed cloud platform that the big three aren’t yet capable of providing: a very granular, diverse, geographic distribution of applications and resources. Data centers have always had the advantage of locality. Pool several facilities’ advantages together, and there may yet be an avenue of opportunity for them to onboard a new class of customers: enterprises that need managed web applications delivered to specific locations, without having to invest in content delivery networks or private delivery routes.
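To make the locality advantage concrete, here is a minimal sketch of how a pooled partner network might place a workload in the member facility nearest its end users. The facility names and coordinates are invented for illustration; a real platform would also weigh capacity, cost, and compliance, not distance alone.

    from math import radians, sin, cos, asin, sqrt

    # Hypothetical partner facilities: name -> (latitude, longitude)
    PARTNER_SITES = {
        "frankfurt-dc": (50.11, 8.68),
        "tel-aviv-dc": (32.08, 34.78),
        "chicago-dc": (41.88, -87.63),
    }

    def haversine_km(a, b):
        """Great-circle distance between two (lat, lon) points, in kilometres."""
        lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
        h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))

    def nearest_site(user_location):
        """Pick the partner data center closest to a given user location."""
        return min(PARTNER_SITES, key=lambda name: haversine_km(user_location, PARTNER_SITES[name]))

    print(nearest_site((48.85, 2.35)))  # a user in Paris -> "frankfurt-dc"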

Technologically, nothing has ever prevented data centers from providing such a service. Indeed, none of the tools for implementing such a global network overlay are new. What was lacking were the business relationships between data center partners to build such a platform, the pooling of skills and resources needed to keep it running, and the collective willingness to piece together all these data centers’ regional scale into a combined, global scale – big enough to compete with the big three.

Ridge now gives data centers the opportunity to offer their existing clients the means to deploy modern, containerized workloads on the infrastructure they already have, with zero upfront capital investment. The result is a set of managed services that delivers what customers look for from AWS, Azure, and Google Cloud – quality, reliability, security, planetary scale – while addressing their needs directly, where they live and work, and keeping everything manageable through a single administrative console. That’s a competitive strategy for the long term.
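In practice, “modern, containerized workloads” usually means Kubernetes. As a hedged illustration of what location-pinned deployment can look like – this is not Ridge’s actual API, and the kubeconfig, container image, and region value below are placeholders – the standard Kubernetes Python client can constrain a workload to nodes in a chosen partner region via the well-known topology.kubernetes.io/region label:

    from kubernetes import client, config

    # Assumes a kubeconfig pointing at the partner-hosted cluster.
    config.load_kube_config()

    container = client.V1Container(
        name="web",
        image="registry.example.com/web:1.0",  # placeholder image
        ports=[client.V1ContainerPort(container_port=8080)],
    )

    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(
            containers=[container],
            # Pin the pods to nodes in a specific partner region.
            node_selector={"topology.kubernetes.io/region": "eu-central"},
        ),
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=template,
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

A single control plane issuing requests like this against clusters in many partner facilities is the kind of “single administrative console” experience described above.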

In the whitepaper The Cloud Equalizer: Restoring Enterprise Data Centers’ Competitive Advantage, Ridge details how a worldwide partner network of data centers can use its existing infrastructure, with the aid of Ridge APIs, to implement a global cloud and offer competitive cloud services to enterprise customers for the first time – services that are fundamentally the same as what the big three offer, but with the added virtue of geolocation.