Ransomware Defense Fears Drive Backup & Storage Strategies

Solutions Review’s Expert Insights Series is a collection of contributed articles written by industry experts in enterprise software categories. In this feature, Scality Field CTO Candida Valois offers a reaction to current ransomware defense fears with some strategies for success.

A ransomware attack is thought to happen every 11 seconds, and by 2031 that interval is predicted to shrink to every 2 seconds. In the past year, 85 percent of businesses experienced at least one ransomware attack.

Multiple surveys of IT leaders confirm that they view ransomware protection as Job #1. It’s the lead item on every set of board slides and scrawled across every whiteboard, with multiple action items beneath.

The near inevitability of malfeasance has redefined priorities and aligned IT teams with a new set of technology answers.

Three concerns are driving new, more robust strategies that can outsmart bad actors.

Backups Under Attack

As ransomware attackers strive to eliminate every possibility for a victim to recover data, backups have grown in importance as a target. In 2022, 94 percent of attackers attempted to delete backup repositories, according to Veeam’s study on ransomware trends. Seventy-two percent of the firms surveyed said their backups had been partially or completely compromised, and 36 percent of their data could not be recovered.

This statistic has a real-world illustration: LastPass recently disclosed that in a November 2022 breach, criminals stole encrypted backups of customer data. It makes sense, then, that by 2025, 60 percent of all companies will require storage products to include integrated ransomware defense mechanisms, up from 10 percent in 2022.

A modern backup strategy involves far more than keeping an extra copy of the data. It requires immutable copies of the data, a deliberate split between short-term and long-term retention, and possibly offsite storage. A rough sketch of how that retention split can be expressed follows.
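
As a rough sketch (not from the original article), the short-term versus long-term retention split can be expressed as an S3 lifecycle policy. The bucket name, prefixes and retention periods below are hypothetical, and storage class names vary by provider:

```python
import boto3

# Assumes an S3-compatible endpoint and credentials are already configured.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="backup-archive",  # hypothetical backup bucket
    LifecycleConfiguration={
        "Rules": [
            {
                # Short-term operational backups: keep for 30 days, then delete.
                "ID": "short-term-retention",
                "Filter": {"Prefix": "daily/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            },
            {
                # Long-term copies: move to colder, cheaper storage after 90 days.
                "ID": "long-term-retention",
                "Filter": {"Prefix": "monthly/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            },
        ]
    },
)
```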

Understanding the Ups and Downs of Cloud Migration

Post-COVID supply chain disruptions have led many companies to accelerate their migration to the cloud and to look for cloud- and hardware-agnostic models that reduce their reliance on potentially delayed hardware shipments and avoid vendor lock-in.

Yet large public clouds have vulnerabilities that make the headlines with increasing regularity. When data is put into a public cloud, the organization gives up a certain level of control over both data residency and sovereignty in exchange for the public cloud’s scalability and flexibility. Considerations in financial services and healthcare, for example, may tip that balance differently than in other industries.

According to a recent survey, nearly half of IT leaders use hybrid clouds or regional cloud service providers to store their data, compared to 40 percent who typically use a large public cloud such as Azure, AWS or Google Cloud. The remaining 11 percent use or plan to use an on-site data center.

Unstructured Data Growth

Growing amounts of unstructured data necessitate a new method of data delivery that is centered on the application rather than on the location or the technology. Applications need faster access to the massive volumes of data being created everywhere, at the edge, in the cloud and on-premises, in order to uncover insights that can be acted upon.

AI and machine learning require an underlying data storage system that can handle a range of workloads, including both large and small files. Workload sizes can range from a few tens of gigabytes to many petabytes in some cases. Not every solution is built to handle enormous files, and not every solution handles very small ones well; it’s essential to choose one that can accommodate both.

The scaling capacity of conventional block and file storage solutions is limited to a few hundred terabytes. The address space of object storage, by contrast, is completely flat and unrestricted; a standard file system’s hierarchy and capacity limitations do not apply. This allows object storage in the data center to scale elastically to tens of petabytes or more within a single global namespace.
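
As an illustrative aside (the bucket and key names here are hypothetical), a flat namespace means applications simply address data by key over the S3 API; directory-like views are just prefix listings, with no hierarchy to hit a scaling wall:

```python
import boto3

# Assumes an S3-compatible object store and configured credentials.
s3 = boto3.client("s3")

# There is no directory tree: the "path" is just part of the object key.
s3.put_object(Bucket="data-lake", Key="edge/site-42/sensor.parquet", Body=b"...")

# A prefix listing gives a directory-like view without any hierarchy limits.
resp = s3.list_objects_v2(Bucket="data-lake", Prefix="edge/site-42/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```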

Architecting for Ultra-Resiliency

Ultra-resiliency is the ideal to aim for; the term is reserved for copies of data that are air-gapped, immutable or offline, and it addresses all three concerns noted above. When ransomware strikes and you need to recover data, having a duplicate of your backup data that meets at least one of these descriptions is an incredibly resilient approach; in some cases, a duplicate will have more than one of these properties. Immutability is regarded as crucial in the fight against ransomware, but it also protects data from unauthorized change or destruction, whether intentional or unintentional.

Support for the S3 Object Lock API enables immutability. This capability helps ensure the lowest recovery point objective (RPO) and recovery time objective (RTO), which makes recovering from ransomware and other disasters straightforward. S3 Object Lock applies a fixed retention period to data; during this period, it is impossible to update, alter or delete the object. S3-compatible immutable object storage has become the standard for affordable, air-gapped protection in the cloud.
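
To make this concrete, here is a minimal sketch of using the S3 Object Lock API through boto3; the bucket name, object key, local file and 90-day retention window are all hypothetical, and the target store must support Object Lock:

```python
from datetime import datetime, timedelta, timezone
import boto3

# Works against any S3 API-compatible store that supports Object Lock.
s3 = boto3.client("s3")

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket="immutable-backups", ObjectLockEnabledForBucket=True)

# Write a backup copy with a fixed retention period; in COMPLIANCE mode the
# object cannot be overwritten or deleted by anyone until the date passes.
with open("backup.tar.gz", "rb") as backup_file:
    s3.put_object(
        Bucket="immutable-backups",
        Key="backups/weekly-full.tar.gz",
        Body=backup_file,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
    )
```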

Immutability For the Win

The reality of ransomware in today’s digital world is constant and sometimes harsh. Criminal actors find backups particularly alluring because destroying them increases the chances that victims will pay the ransom. As a result, immutability has quickly become essential. Create an ultra-resilient backup strategy built on immutability to keep your data out of the hands of attackers.

Candida Valois