7 Key Big Data Trends and Predictions for 2023 & Beyond

Solutions Review’s Expert Insights Series is a collection of contributed articles written by industry experts in enterprise software categories. In this feature, Dataddo CEO Petr Nemeth reveals his most important big data trends and predictions for 2023.

Contrary to what you might think, the percentage of businesses investing in digital transformation today is not much higher than it was before the pandemic. What is higher today, however, is the percentage of businesses at more advanced stages of transformation.

They are using more data-producing tools, sharing data with more end users, and making more concerted efforts to govern data.

This raises many questions about the future of effective data management and BI. Is the need for more tools endless? How can we ensure the data they generate is continually integrated, shared, and interpreted correctly? How will we keep the data secure and clean?

Here are seven predictions to help business leaders anticipate the answers to these questions in 2023 and beyond.

Data-Producing Tools Become More Diverse, but the Customer Lifecycle of Each Tool Becomes Shorter

There is no doubt anymore that the number of SaaS tools available, as well as the volume of data they collectively produce, will only continue to grow. Look at the size of the SaaS market: in 2023, it's projected to be worth twice what it was in 2019. Companies are adopting more and more tools year after year, and there is no obvious end in sight.

But one not-so-obvious side effect of this is likely to be a shortening of the average customer lifecycle of these tools.

Organizations large and small waste millions of dollars annually on tools that are rarely used, if at all. They are constantly trying out new ones while at the same time forgetting about others.

Moreover, many of these tools are adopted at the department, team, and employee levels, resulting in large enterprises being unaware of about half of their deployed SaaS tools and small enterprises being unaware of about one-third.

To counteract the pile-up of unused tools, we will see increased consolidation and purging by IT departments. This, together with increased adoption, will result in shortened life cycles for most SaaS tools.

The exception will be tools essential to company infrastructure, like CRMs and data integration tools.

Data Integrations Become Architecture-Agnostic

It is common for businesses today to use separate platforms for ETL/ELT, reverse ETL, and sometimes data replication.

This is understandable because while ETL/ELT and data replication are established processes in the world of data integration, reverse ETL is a very new process that is only offered by a few special vendors.

Reverse ETL is also the final piece of modern data architecture, so companies interested in it usually already have established relationships with vendors of ETL/ELT and data replication solutions. It may therefore seem natural to seek a separate platform exclusively for reverse ETL.

But, over time, data integration will become such a core facet of business that companies will stop perceiving the differences between integration processes. The tools for integration will become more user-friendly, and users will no longer want to think about the type of engineering that connects data sources with data destinations.

Instead, they will want one architecture-agnostic platform that serves all integration types: pick a source, pick a destination, and send.
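To make the idea concrete, here is a minimal sketch of what "pick a source, pick a destination, and send" could look like behind an architecture-agnostic platform. All names here (`Connector`, `sync`) are hypothetical illustrations, not any vendor's actual API; the point is that ETL, reverse ETL, and replication all reduce to the same call.

```python
# Hypothetical sketch: one sync() call covers every integration direction.
from dataclasses import dataclass, field

@dataclass
class Connector:
    """A source or destination: a SaaS app, a warehouse, or a database."""
    name: str
    records: list = field(default_factory=list)

def sync(source: Connector, destination: Connector) -> int:
    """Copy records from source to destination, regardless of direction.

    The same function covers ETL/ELT (app -> warehouse), reverse ETL
    (warehouse -> app), and replication (database -> database).
    """
    moved = list(source.records)
    destination.records.extend(moved)
    return len(moved)

# ETL: SaaS app into a warehouse
crm = Connector("crm", [{"lead": "A"}, {"lead": "B"}])
warehouse = Connector("warehouse")
sync(crm, warehouse)

# Reverse ETL: the same call, with the warehouse as the source
ad_platform = Connector("ad_platform")
sync(warehouse, ad_platform)
```

In a real platform, each `Connector` would wrap authentication, scheduling, and schema mapping, but the user-facing mental model stays this simple.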

Business Pros Become More Data Literate & Low- to No-Code BI and Data Integration Tools Become the Norm

The percentage of non-technical professionals who recognize the need to become data-savvy is high (58 percent according to a 2022 survey by Qlik), and the percentage of decision-makers who expect them to be data-savvy is even higher (82 percent according to a 2022 survey by Tableau via Forrester). If these professionals want to remain relevant on the job market, they will have to develop competencies that used to be the exclusive domain of engineers.

Fortunately for them, the technical knowledge required to operate data tools (BI tools, data integration tools, even some data storage platforms) is getting lower and lower.

Gartner predicts that, by 2025, 70 percent of new applications developed by enterprises will rely on low- and no-code technologies. Though the terms “low-code” and “no-code” are often used to describe development platforms, we will more and more see them used to describe BI and data integration platforms.

This trend, together with the push for data literacy within companies, will effectively offload busywork from engineers and empower non-technical employees to build their own data solutions.

Demand for Citizen Data Scientists Continues

Citizen data scientists are professionals in business departments who have some knowledge of data and analytics and sometimes coding but who are not fully-fledged data scientists. In the near future, they will play a major role in bridging the gap between business teams and data teams. Their duties can range from determining measurements for success, to collecting and interpreting data, to evaluating and deploying data models.

The U.S. Bureau of Labor Statistics predicts that, through 2029, the field of data science will grow more than any other. So, it’s no wonder global companies like BP and Epsilon are already reaping the benefits of citizen data scientists.

The rise to prominence of this new class of professionals will have a decentralizing effect on the data governance policies of many companies, as defined by the hub-and-spoke governance model.

The resulting empowerment of business teams will shift the focus of data teams to security and quality.

Data Security Becomes a Major Concern for Buyers

Decentralizing data competencies is necessary for organizations that want more analytics flexibility at the operational level. But, with data breaches and other privacy issues becoming more and more common, it also exposes them to a higher degree of risk.

In Europe, data protection authorities constantly issue fines for GDPR violations, with the stiffest fines going to tech companies. So far in 2022, the highest fine was €405 million (or $402 million), served to Instagram owner Meta Platforms Ireland Limited in September.

In the US, although there is no federal data privacy law, businesses still have state laws to worry about. And, of course, hackers. Just this year, Microsoft, Uber, Red Cross, and News Corp were all hacked.

SaaS buyers are noticing and will soon become much more conscious about what data they provide to vendors. Vendors will find it harder to close large deals without certifications like SOC 2. We at Dataddo can see this firsthand. Ultimately, data security will take precedence over other buying criteria like user-friendliness and price.

Data Quality Remains a Challenge & AI Plays a Bigger Role in Cleaning Data

For as long as people have been collecting data, data quality has been a challenge. But, with data today coming from an increasing number of disparate sources and being handled by an increasing number of line-of-business professionals, the cost of mistakes being proliferated to downstream systems is getting a lot more tangible.

In 2021, Gartner estimated that bad data costs organizations an average of $12.9 million annually.

Though data quality will never be perfect, one thing that will greatly contribute to keeping it high will be the gradual implementation of AI-based mechanisms in analytics and data integration tools. (Dataddo, for example, is an integration tool with an AI anomaly detector under the hood.)

These technologies will get better and better at flagging outliers and keeping missing, incorrect, and corrupt data out of pipelines and dashboards.

It’s also important to note that since AI-based data quality solutions will always be most effective when analyzing large datasets over longer periods, they should always be implemented alongside classic, people-focused solutions.
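The basic mechanism is straightforward to illustrate. Below is a minimal sketch of the kind of check an automated data-quality layer performs; Dataddo's actual detector is not described in this article, so this simple robust-statistics example (modified z-score based on the median absolute deviation, which is less distorted by the very outliers it is trying to catch than a mean/stdev z-score) is only an illustration of the idea, with hypothetical names and data.

```python
# Illustrative sketch, not any vendor's real detector: flag missing
# values and statistical outliers before they reach downstream systems.
import statistics

def flag_anomalies(values, threshold=3.5):
    """Return indices of missing entries and robust outliers.

    Uses the modified z-score (based on the median absolute deviation),
    so a single extreme value does not mask itself by inflating the
    spread estimate, as it would with a mean/stdev z-score.
    """
    present = [v for v in values if v is not None]
    med = statistics.median(present)
    mad = statistics.median(abs(v - med) for v in present)
    flagged = []
    for i, v in enumerate(values):
        if v is None:
            flagged.append(i)                      # missing value
        elif mad and 0.6745 * abs(v - med) / mad > threshold:
            flagged.append(i)                      # statistical outlier
    return flagged

daily_orders = [102, 98, 105, None, 101, 9999, 97]
print(flag_anomalies(daily_orders))  # -> [3, 5]
```

A production system would add learned seasonality and trend models on top of checks like this, but even this simple rule catches the gap and the spike a human reviewer might miss in a large dataset.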

BI Tools Become Mobile-Friendly for Passive Use

It seems natural that BI would enter the mobile sphere.

Regular consumers of data, like marketers, salespersons, and upper management, increasingly need to access it when they are not in front of a computer. And professionals who don't spend the majority of their day in front of a computer, like warehouse staff and truck drivers, are beginning to need access to it more regularly.

It’s therefore not a surprise that the market value for mobile BI is expected to rise—from $10 billion in 2021 to around $55.5 billion in 2030. Nevertheless, these values are only a fraction of the market value for BI as a whole, which is expected to rise from $35.2 billion in 2020 to $224.2 billion in 2028.

This supports the prediction that mobile BI tools, no matter how advanced and streamlined they may become, will serve primarily to deliver insights. For production of insights, e.g., via deep drilldowns and heavy dashboard customizations, the desktop interface will always be king.

Staying Ahead of the Curve

The race to digital transformation is an extremely dynamic one. But one way to stay ahead of the curve is to keep an eye out for emerging trends in data management and BI. They give us a sneak peek of what’s around the corner and can help inform the strategies we implement today.

Businesses should be thinking about:

  • Actively promoting adoption (and reducing pile-up) of SaaS tools by giving more support to end users.
  • Investing in future-ready data integration tools.
  • Fostering data literacy among non-technical business professionals.
  • Making every effort to become and stay compliant with international data security standards.

These are all golden opportunities, and the best time to take them is now.

Petr Nemeth