
Expert Reveals Key Data Management Trends for 2024 to Know

Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise tech. In this feature, Komprise’s Krishna Subramanian offers key 2024 data management trends to know.

More than ever, IT infrastructure is a core piece of IT strategy: it is a fundamental pillar supporting broader business and organizational objectives. Everything goes back to data. The way organizations manage their data, store and protect it, find and leverage it, and deliver it to departments can make or break success in every area, from customer relationships to employee satisfaction, operational efficiency and marketplace innovation.

While the global economy is chugging along amid significant pressures and conflicts, there is no clear forecast for economic conditions in 2024. In that light, we are predicting several trends for enterprise IT organizations: cost and operational efficiency, watertight disaster recovery at the data management level, more attention to preparing unstructured data for AI, and more. Let’s start at the top with a nod to using automation to save money and time.

Efficiency Becomes a Must-Have for Unstructured Data Management

In the past few years, we have seen IT buyers focus increasingly on cost savings when making decisions, particularly in unstructured data management. Many organizations are being more cautious and analytical with cloud spend and seeking ways to use secondary and tertiary storage classes to save money as data ages. However, unstructured data continues to grow explosively, and it’s hard to keep up. Finite capacity and budget, plus urgent needs for big data analytics, mean that we’ve reached a tipping point where cutting costs isn’t the whole story.

Cost-efficiency will remain vital, but IT leaders will push for overall efficiency across the entire stack. In the world of unstructured data, this means simpler administration and management of data lifecycles from creation to replication, archiving and eventually deletion; better automation with analytics to continuously right-place data into the optimal storage; and central visibility across storage and data metrics for easier decision-making.
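To make the idea of continuously right-placing data concrete, here is a minimal sketch of an age-based tiering policy. The paths, threshold and dry-run behavior are hypothetical; a commercial data management product would apply far richer analytics, but the core loop looks something like this:

```python
# Minimal sketch of an age-based tiering policy (hypothetical paths and threshold).
# Files on a primary share that have not been accessed within COLD_AFTER_DAYS
# are moved to a cheaper "cold" tier; everything else stays put.
import os
import shutil
import time

PRIMARY = "/mnt/primary-nas"      # assumed primary file share
COLD_TIER = "/mnt/cold-archive"   # assumed cheaper secondary tier
COLD_AFTER_DAYS = 180             # example policy threshold

def tier_cold_files(dry_run: bool = True) -> None:
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    for root, _dirs, files in os.walk(PRIMARY):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getatime(src) < cutoff:           # last access older than cutoff
                dst = os.path.join(COLD_TIER, os.path.relpath(src, PRIMARY))
                if dry_run:
                    print(f"would move {src} -> {dst}")  # report first, act later
                else:
                    os.makedirs(os.path.dirname(dst), exist_ok=True)
                    shutil.move(src, dst)

if __name__ == "__main__":
    tier_cold_files()  # dry run: preview what the policy would do
```

Running it in dry-run mode first mirrors the visibility-before-action principle above: analytics and reporting come before any data is moved.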

Organizations Must Plan for More Frequent Disasters and Outages with Strategic Management of File and Object Data

With two major global conflicts now underway, a pandemic that has become endemic, ongoing pressures on the global economy and supply chain, an accelerating frequency of climate calamities, and growth in ransomware attacks, uncertainty is the new normal. Organizations need to be better prepared for a variety of disasters and disruptions. IT leaders have been addressing these threats by understanding data needs and investing in the right infrastructure and security products and services to be resilient, yet resilience goes beyond deploying cybersecurity systems at the network and database level.

Comprehensive disaster recovery requires more than traditional backup and data protection, since those solutions are too expensive to be tenable for anything but the most mission-critical, active data. Unstructured data management will deliver resiliency at a fraction of the cost. The goal is to create cheap copies of non-critical data, which is the bulk of all data in storage, in durable object storage in the cloud. This “poor man’s data resiliency” approach will complement the 3x backup method for hot data to create a cost-effective and holistic disaster recovery strategy.
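As a rough illustration of the “cheap copy” approach, the sketch below walks a cold file share and replicates each file into a durable, low-cost object storage bucket instead of a full 3x backup. The share path, bucket name and storage class are assumptions, not a prescribed setup:

```python
# Minimal sketch: replicate non-critical files to durable, low-cost object storage
# (an S3-compatible bucket here) as a "poor man's" resiliency copy.
# SOURCE_DIR and BUCKET are hypothetical.
import os
import boto3

SOURCE_DIR = "/mnt/primary-nas/projects-archive"   # assumed cold file share
BUCKET = "dr-cold-copies"                          # hypothetical bucket
s3 = boto3.client("s3")

def copy_to_object_storage() -> None:
    for root, _dirs, files in os.walk(SOURCE_DIR):
        for name in files:
            path = os.path.join(root, name)
            key = os.path.relpath(path, SOURCE_DIR)
            # A deep-archive storage class keeps the resiliency copy cheap.
            s3.upload_file(path, BUCKET, key,
                           ExtraArgs={"StorageClass": "DEEP_ARCHIVE"})
            print(f"copied {path} -> s3://{BUCKET}/{key}")

if __name__ == "__main__":
    copy_to_object_storage()
```

The point is the economics: hot, mission-critical data keeps its 3x backups, while the long tail of cold files gets a single durable copy in cheap object storage.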

Storage Tech Expands to Address New Use Cases, from Sustainability to AI

Data storage technology will continue to evolve based on the changing needs of the enterprise. This is great news for enterprise customers, as choice will continue to expand next year. Large storage vendors are creating GPU- and flash-optimized products specifically for the extreme processing needs of AI and machine learning applications. Storage vendors are also incorporating more security features to protect data at the source, along with energy-efficient features that help organizations comply with sustainability policies and mandates.

Across the board, we’ll see more storage tiers, from on-premises to edge to cloud storage. However, more choice and the distinct move away from one-size-fits-all storage mean that enterprise IT complexity will grow. IT teams will need strategies, knowledge and guidance for data lifecycle management to ensure that data continually moves to the best storage tier for current needs. By doing so, IT can optimize data storage across the board for maximum cost savings, data protection and performance for users.

AI Drives Urgency to Efficiently Deliver Value from Unstructured Data

Various surveys have shown that most unstructured data is not being used or analyzed to support business decision-making. Organizations may lack funding for large-scale analytics initiatives, but they also may lack the right approach to better leverage all the data they store and collect. Given the high cost of storing and analyzing petabytes of data or millions of files, it’s vital to find only the data that you want the AI to ingest so the effort is economically viable. Then you need to sort the results of the AI process and move that data somewhere else, such as to another analytics application. Vendors and data scientists are working to improve AI’s ability to identify patterns and to do so faster and less expensively.

But to move the needle on using AI to extract value from unstructured data, organizations need data management scaffolding that makes AI more trustworthy and easier to use. Such scaffolding delivers automated workflows to find, sort, tag and move data to and from AI and other locations as it’s processed. Another issue is that today we don’t have an inventory of all our data. A searchable index of all the data, with the ability to access that data regardless of the technology on which it resides, is required to deliver the right unstructured data to AI.
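One way to picture the searchable index idea is a simple metadata catalog that can be queried to hand only the relevant files to an AI pipeline. This is a minimal sketch assuming a single mounted share; the share path, file type and size limit are illustrative, and a real index would span many storage systems:

```python
# Minimal sketch of a searchable file-metadata index built with SQLite.
# SHARE is a hypothetical mount point; a production index would cover
# many file and object stores, not just one.
import os
import sqlite3

SHARE = "/mnt/primary-nas"   # assumed mount point

def build_index(db_path: str = "file_index.db") -> sqlite3.Connection:
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS files
                    (path TEXT PRIMARY KEY, ext TEXT, size INTEGER, mtime REAL)""")
    for root, _dirs, files in os.walk(SHARE):
        for name in files:
            path = os.path.join(root, name)
            st = os.stat(path)
            conn.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?)",
                         (path, os.path.splitext(name)[1].lower(),
                          st.st_size, st.st_mtime))
    conn.commit()
    return conn

def ai_candidates(conn: sqlite3.Connection, ext: str = ".pdf",
                  max_bytes: int = 50_000_000):
    # Select only the subset worth feeding to an AI pipeline:
    # the right file type, within a bounded size.
    return conn.execute("SELECT path FROM files WHERE ext = ? AND size <= ?",
                        (ext, max_bytes)).fetchall()

if __name__ == "__main__":
    conn = build_index()
    for (path,) in ai_candidates(conn):
        print(path)   # e.g., hand these off to a tagging or ingestion workflow
```

The query step is the part that matters for cost control: the index lets you scope what AI ingests before any petabytes move, rather than paying to process everything and filter afterward.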
