How to Decrease Cost and Increase Efficiency of Mainframe Backup and Recovery

This is part of Solutions Review’s Premium Content Series, a collection of contributed columns written by industry experts in maturing software categories. In this submission, Model9’s Offer Baruch shares thoughts on decreasing the cost and increasing the efficiency of mainframe backup and recovery.

Mainframe backup and disaster recovery situations are about much more than full-blown disasters. Costly outages and interruptions are much more frequent, so making plans to ensure minimum downtime during interruptions is a strategic objective that cannot be ignored. A recent ITIC survey revealed the huge costs associated with these “smaller” uptime issues:

  • 98% of organizations said that one hour of downtime costs more than $100,000
  • 86% indicated that an hour of downtime costs their business over $300,000
  • 34% reported that one hour of downtime costs their firms $1–5 million

Given these huge costs, mainframe leaders must be prepared to meet the dual challenge of minimizing both downtime and backup and archiving costs. Unfortunately, legacy mainframe backup and archive data management solutions don’t make this easy. Handling enormous quantities of data in a way that enables efficient recovery without going over budget is a challenge, but it’s not an impossible one. Mainframe leaders have several tools immediately at their disposal to reduce costs and increase recovery efficiency, as well as newer cloud data management options for the mainframe.

Reducing Mainframe Backup Costs

Mainframe data management incurs both direct and indirect costs. Inefficiencies in the backup and archive process often go undetected, silently adding to expenditures without delivering any additional value. Below are a few changes that will decrease the total cost of ownership (TCO) without requiring large-scale re-engineering, allowing organizations to reduce backup costs without sacrificing recovery requirements:

  • Incremental backup: Limiting backups to data sets that have changed since the last backup, instead of backing up every data set every time, saves time and money on redundant processing (see the sketch after this list).
  • Deduplication: If your target storage system supports it, eliminate duplicate copies of data. Removing repeated data frees up significant storage capacity.
  • Compression: Compressing data sets before they are sent over the network reduces costs in two ways: it lowers the capacity consumed on the target storage, and it cuts the amount of cloud provider network bandwidth used. Secure and efficient compression is critical.
  • Leveraging commodity storage: Legacy tape and VTL tools write data serially, causing performance bottlenecks, and they are expensive to maintain. A solution that securely delivers mainframe data to any cloud or on-premises storage system eliminates vendor lock-in and makes it possible to benefit from pay-as-you-go cloud storage instead of endlessly stocking up on tapes and VTLs.
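To make the first three ideas concrete, here is a minimal Python sketch of an incremental, deduplicating, compressing backup pass. It is illustrative only: the catalog file, staging directory, and POSIX-style file access are all assumptions, and a real mainframe implementation would read data sets through z/OS access methods and ship them to object storage rather than a local directory.

```python
import gzip
import hashlib
import json
import time
from pathlib import Path

# Hypothetical catalog recording each data set's last backup time and content hash.
CATALOG = Path("backup_catalog.json")

def load_catalog() -> dict:
    return json.loads(CATALOG.read_text()) if CATALOG.exists() else {}

def compress_and_stage(dataset: Path, staging: Path) -> Path:
    # Compression: shrink the payload before transfer to cut both target
    # storage capacity and cloud network bandwidth charges.
    target = staging / (dataset.name + ".gz")
    with dataset.open("rb") as src, gzip.open(target, "wb") as dst:
        dst.writelines(src)
    return target

def run_backup(datasets: list[Path], staging: Path) -> None:
    staging.mkdir(parents=True, exist_ok=True)
    catalog = load_catalog()
    for ds in datasets:
        entry = catalog.setdefault(str(ds), {})
        # Incremental: skip data sets untouched since their last backup.
        if ds.stat().st_mtime <= entry.get("backed_up_at", 0.0):
            continue
        # Deduplication: skip payloads whose content is already stored.
        digest = hashlib.sha256(ds.read_bytes()).hexdigest()
        if digest != entry.get("sha256"):
            compress_and_stage(ds, staging)
            entry["sha256"] = digest
        entry["backed_up_at"] = time.time()
    CATALOG.write_text(json.dumps(catalog))
```

The catalog is the key design choice in this sketch: recording a timestamp per data set enables the incremental skip, while recording a content hash enables deduplication even when a data set is rewritten with identical contents.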

Legacy mainframe backup and archive tools also indirectly add to the TCO of data management by requiring separate encryption software, and by taking the option of tiered long-term storage in the cloud off the table. For example, regulatory requirements oblige banks to retain masses of archived data, most of which will never be accessed, for years. Paying the same rate to store this data as you do for hot data significantly increases backup costs for no reason. Selecting the right class of storage for this type of data, for instance via an automated lifecycle policy like the one sketched below, can significantly reduce backup costs.
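As an illustration, cloud object stores support lifecycle rules that automatically migrate aging archives to colder, cheaper tiers. The snippet below uses AWS S3 via boto3; the bucket name, prefix, and transition schedule are hypothetical and would need to match your own retention requirements.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and schedule: archives move to Glacier after 30 days
# and to Deep Archive after a year, instead of paying hot-storage rates
# for data that will likely never be read again.
s3.put_bucket_lifecycle_configuration(
    Bucket="mainframe-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-old-archives",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```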

Improving Recovery Efficiency

How companies approach the backup stage dictates how efficient their recovery process is. Unplanned downtime is so expensive, and potential non-compliance fees so high, that every option for avoiding them should be considered. Several practices can increase recovery efficiency:

  • Availability & Replication: Cloud storage makes data recovery possible from anywhere, and data can also be replicated to a different region, whether cloud or on-premises, for disaster recovery.
  • Write Once, Read Many (WORM) storage: WORM storage prevents erasure and tampering, which also protects you against ransomware attacks (see the sketch after this list).
  • Immutable backups: Keeping an immutable backup copy in the cloud means your data is available as soon as your system is back up and running, without any need to retrieve archived data.
  • Multiple snapshots: Taking regular flash copies of volumes and data sets preserves accurate point-in-time versions, enabling automated recovery.
  • Stand-alone restore: Stand-alone restore allows bare-metal recovery from tape or cloud after cyber-attacks, disasters, or operational errors. Cloud-based backup platforms can enable an initial program load (IPL) from a cloud server for a quick recovery that significantly reduces unplanned downtime.
  • End-to-end encryption: End-to-end encryption reduces the risk of malicious data corruption, which can cause logical failures and other problems that make recovery scenarios more complex and more expensive. Encryption is also critical for meeting regulatory requirements for data security and privacy.
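WORM protection and immutability are straightforward to sketch against a cloud object store. The example below uses AWS S3 Object Lock via boto3; the bucket and key names are hypothetical, and the bucket must have been created with Object Lock enabled.

```python
import boto3
from datetime import datetime, timedelta, timezone

s3 = boto3.client("s3")

# Hypothetical bucket and key. COMPLIANCE mode makes the backup copy
# immutable: no user can delete or overwrite it until the retention
# date passes, protecting the backup against ransomware and tampering.
with open("prod.volumes.dump.gz", "rb") as body:
    s3.put_object(
        Bucket="mainframe-backup-worm",
        Key="backups/2024-01-15/prod.volumes.dump.gz",
        Body=body,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=365),
    )
```

In COMPLIANCE mode, not even the account root user can shorten the retention period or delete the object before the retention date, which is what makes the copy a reliable recovery point after a ransomware event.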

Decreasing the cost and increasing the efficiency of mainframe backup and recovery is an important strategic goal for IT leaders. Although it’s a big challenge, companies that systematically address the core issues will be well on their way to increasing security and minimizing downtime without incurring a significant increase in costs.
