Working Backwards – Knowing When Data Consolidation Should be a Priority
insightsoftware’s Jay Allardyce offers insights on knowing when data consolidation should be a priority. This article originally appeared on Solutions Review’s Insight Jam, an enterprise IT community enabling the human conversation on AI.
Enterprises are swamped with data, but many don’t know how to leverage it to remain competitive in their industry. This is especially true when data is siloed, making it challenging for users to share and collaborate. Too often, organizations continue old patterns and build data programs that go hunting for a business problem rather than addressing the problem at hand. In fact, organizations spend $115,856 per developer yearly to tackle data concerns like inconsistencies and performance issues. This is where a well-structured data consolidation project can be most effective. Let’s take a look at the best ways to start a data consolidation project and the top mistakes to avoid to ensure long-term success.
Kicking off Data Consolidation
By starting with the business outcome in mind, organizations gain a deep understanding of which experiences or KPIs they want to impact. Too often, organizations frame the outcome as cost savings or efficiency gains. If that is the headline of the business case, STOP. Instead, organizations need to think about the top-line or productivity improvement that data consolidation will bring and, therefore, what new insights it will give the business. This means understanding where the business is today and where teams want it to be, which may require asking some of the following questions: Will the project be ROI-driven? Do we want to reduce the number of servers? Should our ultimate goal be to drive cost efficiencies in the cloud? Or should we aim to bring disparate data sources together? Or rather, step back and say to yourself: I want to be able to more accurately forecast my demand and correlate it with my supply inventory, using various data sources, both internal and external.
Once these questions are answered, organizations must take the following steps to ensure they are building a data consolidation project that will reach their desired outcomes:
- Determine the company’s current data-driven culture and its overall data landscape: Depending on the answer, a data consolidation project might further establish this desired culture and identify who will need to use the consolidated data the most. Find a balance between serving the business and building the architecture to underpin the project.
- Define/refine the outcome via the Jobs-To-Be-Done (JTBD) framework: Be clear on who’s using the information and in what way, and work closely with the business to fulfill specific use cases. For example, if the goal is to have a daily signal on product/competitive positioning, this might entail combining internal data (product, pricing, sales performance, marketing leads) and external data. Ultimately, teams should be outcome-driven, not architecture- or technology-driven, while investing in technologies and architectural approaches that can be flexible, agile, and evolving.
- Identify quick wins close to the domain and business user: Organizations should start small and expand, remaining fast, nimble, and distributed, focusing first on easy-to-achieve objectives. Once successful, organizations can move toward larger goals that might require more time and resources. Be mindful of maintaining consistent, source-independent master data and hierarchies, but avoid full-blown companywide master data management initiatives until the project’s complexity demands it.
- Develop a data governance framework: This framework should support flexibility and allow teams to access data when and how they need it, while still keeping valuable data secure. Don’t constrain users with policy; instead, empower the user, who has domain context, to do their job with the right data and data access.
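To make the governance point concrete, here is a minimal sketch of a role-based access policy that grants domain users the datasets they need rather than applying blanket restrictions. All role and dataset names are hypothetical, invented purely for illustration.

```python
# Minimal sketch of domain-scoped data access: empower each role with the
# datasets its job requires, instead of one restrictive companywide policy.
# Role and dataset names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class AccessPolicy:
    # Maps a domain role to the set of datasets it may read.
    grants: dict[str, set[str]] = field(default_factory=dict)

    def allow(self, role: str, dataset: str) -> None:
        self.grants.setdefault(role, set()).add(dataset)

    def can_read(self, role: str, dataset: str) -> bool:
        return dataset in self.grants.get(role, set())


policy = AccessPolicy()
policy.allow("demand_planner", "sales_history")
policy.allow("demand_planner", "external_market_index")
policy.allow("marketing_analyst", "campaign_leads")

print(policy.can_read("demand_planner", "sales_history"))     # granted
print(policy.can_read("marketing_analyst", "sales_history"))  # not granted
```

The point of the sketch is the shape of the rule set: grants are expressed per domain role, so adding a new use case means extending a role's grants, not rewriting a global policy.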
With these steps, teams work backward from the reports and dashboards the business requires: a visualized end state. This is a good way to focus the consolidation project and keep the cost profile to a minimum. It forces precision around which data sources are required and which data items need to be moved or referenced.
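The working-backwards idea above can be sketched in a few lines: start from the fields a dashboard needs and derive the minimal set of source systems the project must actually touch. The field names and source catalog below are illustrative assumptions, not a real schema.

```python
# Hedged sketch of "working backwards": from required dashboard fields,
# greedily select the fewest source systems needed to cover them.
# All source and field names are hypothetical.

# Which source system holds which fields (illustrative catalog).
source_catalog = {
    "erp": {"supply_inventory", "unit_cost"},
    "crm": {"sales_pipeline", "customer_segment"},
    "web": {"pricing", "product_views"},
    "external": {"competitor_pricing", "market_index"},
}

# Fields a demand-forecast dashboard actually requires.
required = {"supply_inventory", "sales_pipeline", "pricing", "competitor_pricing"}

# Greedy cover: repeatedly pick the source covering the most missing fields.
needed, missing = [], set(required)
while missing:
    best = max(source_catalog, key=lambda s: len(source_catalog[s] & missing))
    if not source_catalog[best] & missing:
        raise ValueError(f"no source provides: {missing}")
    needed.append(best)
    missing -= source_catalog[best]

print(sorted(needed))  # the only sources the consolidation must touch
```

Anything outside `needed` stays where it is, which is exactly how working backward from the visualized end state keeps the cost profile down.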
Data Consolidation Mistakes to Avoid
One of the biggest mistakes organizations make when launching a data consolidation project is doing too much without showing results, and failing to evaluate the project against the company’s overall goals. They must also recognize that a transformation can’t happen overnight, which is why it’s important to show quick wins that build credibility while still aligning with business and IT objectives. Furthermore, organizations often fail to think ahead and evaluate their KPIs or JTBD frameworks against how their industry is evolving; they are too focused on how the business runs today.
Many organizations also don’t realize that data and domain expertise need to go hand in hand. Without such knowledge, organizations will lack the context to know how their data should be used. So, instead of going right to consolidating data, they should first identify their stakeholder map and focus on how to consistently use their data to become more efficient and respond to ongoing market changes effectively. This is especially true when considering industry and regulatory environments – whether that be for healthcare, finance, government, etc. Enterprises must work within each industry’s unique boundaries to best manage and access their data. Without this knowledge, a data consolidation project can prove to be even more challenging.
Technology and implementation teams often spend too much time, money, and emphasis on technology and tooling choices to satisfy infrastructure requirements. Selecting a business intelligence tool, data pipeline infrastructure, cloud data warehouse, and ETL technologies isn’t enough to solve the business problems on its own. Producing the reports the business needs takes a combination of technical and functional expertise, whether that expertise sits within the department’s vertical or close to the source system, and the quality of that expertise is a better indicator of success.
IT organizations tend to favor the “we can build it” approach, which can manifest as an internal build or an external systems integrator coming in to build for the business. Often underestimated in these projects is the cost of maintaining the solutions and the infrastructure, which can span a range of required functional and technical expertise. Overall, these projects are notoriously difficult, lengthy, and risky, and they also end up increasing the distance between the implementation and the business stakeholder.
Data fuels organizations, and data teams can spend 90 percent of their time consolidating data into informative reports. However, these reports are only useful if they can advise what businesses should be doing and where they should be headed, instead of only reporting on what has already happened. By taking a step back and understanding where the company is today and what problems actually need solving, an organization can determine exactly what tools are needed to maximize the insights derived from such large data sets.