This is part of Solutions Review’s Premium Content Series, a collection of contributed columns written by industry experts in maturing software categories. In this submission, Arcion Founder and Chief Architect Rajkumar Sen breaks down the pros and cons of putting all your data in one cloud and details the hybrid cloud approach.
Enterprises are embracing the cloud to modernize their infrastructure on a scale never seen before. In fact, nearly half (48 percent) of companies plan to move 50 percent or more of their apps to the cloud by the end of this year. Why? Because the benefits of cloud adoption are substantial, the biggest being improved data availability. The catch is that companies aren't actually taking full advantage of what's on offer, and in the process they're leaving their data more exposed than they realize.
Modernizing Critical Data Systems
Major cloud providers like AWS, Azure, and Google Cloud Platform provide enterprises an easy and scalable way to use and deploy raw infrastructure and virtual machines (AWS EC2, EKS, etc.) as well as build applications on cloud databases (AWS RDS, Azure Databases, MongoDB Atlas, etc.), often on fully managed platforms.
The net result of having virtually limitless resources on tap 24/7 is that enterprises across industries have begun building or converting their mission-critical applications to run on the cloud. This is happening across verticals and across cloud providers. In 2019, Verizon Wireless announced that it had started moving some of its mission-critical data applications to AWS RDS databases. The reasons for migrating to the cloud are several; 24/7/365 high availability of the data is the most important criterion for this transition, followed by automated backups, automatic failover, and similar managed features. A pre-pandemic whitepaper from Oracle lists numerous other instances of enterprises (Ryanair, Samsung, Intuit, Equinox, and others) moving their critical database applications to AWS.
What Can Go Wrong?
Most mission-critical data applications running on cloud databases achieve resilience by spreading data across multiple data centers within one cloud provider. The applications are highly available, but only within the domain of a single cloud provider, and that poses a serious business continuity risk. Can we truly guarantee 24/7 availability if our application is built and run entirely on database infrastructure within a single cloud?
For most data professionals, the answer would be no. Business continuity is a tricky topic, but most of us can agree that keeping mission-critical data within a single domain isn't ideal, especially today, when increasing cloud adoption is putting vendors under unprecedented load. Take AWS: it controls 32 percent of the cloud services market, and in the last year alone it had 27 outages. AWS isn't alone in this; Google Cloud and Microsoft Azure also face frequent outages. You may even have noticed your favorite apps, like WhatsApp, Twitter, and YouTube, going down more often.
Most important for enterprises, a single outage lasting only minutes can cause millions, even billions, of dollars in lost revenue and delay operations by days, if not weeks, especially when mission-critical data is involved.
Hybrid-Cloud to the Rescue? Not Quite
What did enterprises do for mission-critical applications in the pre-cloud era? The most common deployment model ensured high availability across multiple data centers, with one data center typically assigned as the primary and another as a standby. These data centers were largely independent of each other and were designed so that the failure of one would rarely cause others to fail. Those assumptions do not always hold when an application runs in multiple regions within a single cloud provider.

But what if, instead of using multiple data centers, we could simply use multiple cloud vendors, adding a whole new dimension to our business continuity strategy? Less than a decade ago, the logistical challenges would have made this nearly impossible. Deploying a hybrid cloud architecture is still extremely challenging today, but it is no longer out of reach. The key to making such an architecture work is seamless cross-cloud data movement and replication, so that data is always up to date and consistent in both the primary and the secondary cloud environment.
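The cross-cloud replication requirement can be pictured as a small loop: pull the changes committed on the primary since the last checkpoint and apply them, in order, to the standby. The following is only a minimal sketch of that idea; `read_changes`, `apply_change`, and the `"lsn"` field are hypothetical stand-ins for a real change-data-capture reader and a writer against the standby database.

```python
def replicate_changes(read_changes, apply_change, checkpoint):
    """One pass of a log-based replication loop (illustrative only).

    read_changes(since=...) -- yields changes committed on the primary
        after the given checkpoint, in commit order (hypothetical API).
    apply_change(change)    -- writes one change to the standby; should be
        idempotent so that retries after a crash are safe.
    checkpoint              -- last log position known to be applied.
    """
    last_applied = checkpoint
    for change in read_changes(since=last_applied):
        apply_change(change)
        # Remember the log sequence number so a restart resumes here.
        last_applied = change["lsn"]
    return last_applied  # persist this as the new checkpoint
```

In a real deployment the same loop would run in both directions, which is what makes the bi-directional synchronization discussed below hard: conflicting writes on the two sides must be detected and resolved, not just shipped.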
The problem is that cloud providers have little incentive to build such tools, since doing so means sending data to competing clouds. Vendor lock-in is a serious issue, and there has been a severe lack of production-ready tools that allow cross-cloud data movement and synchronization.
Enabling Cross-Cloud Data Movement
The benefit of a free market is that as the need for hybrid cloud deployment grows, tools will inevitably appear that enable data replication and synchronization across data centers located in different cloud providers. Such tools need to replicate and synchronize data in both directions. Using such bi-directional data movement solutions, organizations can keep data fresh and available in at least two major cloud providers, designating one as the primary and the other as the standby. Applications connect to the primary by default; when the primary goes down, they switch to the standby. The beauty of all this is, of course, new levels of automation and, more importantly, real-time data transfer. When the primary comes back up, it is synchronized from the standby, and once it is fully up to date, applications can reconnect to it.
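The failover behavior described above can be sketched from the application side as a thin connection wrapper. This is a simplified illustration, not a production pattern: the endpoint names are hypothetical, and `connect` is whatever callable the application uses to open a database connection.

```python
import time


class FailoverConnection:
    """Prefer the primary endpoint; fall back to the standby when the
    primary is unreachable. A real system would also handle fencing,
    health checks, and switching back once the primary is resynced."""

    def __init__(self, primary, standby, connect, max_retries=3):
        self.primary = primary
        self.standby = standby
        self.connect = connect          # callable: endpoint -> connection
        self.max_retries = max_retries

    def get_connection(self):
        # Retry the primary a few times to absorb transient blips.
        for attempt in range(self.max_retries):
            try:
                return self.connect(self.primary)
            except ConnectionError:
                time.sleep(2 ** attempt)  # simple exponential backoff
        # Primary looks down: route traffic to the standby cloud.
        return self.connect(self.standby)
```

The deliberate design choice here is that failover is driven by observed connection failures rather than by a coordinator; real deployments usually combine both, because a client-only view cannot distinguish a dead primary from a network partition.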
Only such a hybrid cloud solution will ensure close to 24/7 year-round availability for mission-critical applications.