Bernd Harzog’s 2016 Big Data Market Predictions: Part 1

By Bernd Harzog

Enterprises have consistently drawn the short straw when it comes to successful IT management. For decades, service quality and performance have topped the list of focus areas for IT operations; yet no vendor or tool has helped IT teams adequately address this priority. Vendors have miscalculated the speed at which innovation occurs, leaving enterprises struggling to provide uninterrupted services.

How did we get here? Digital businesses operate at a faster pace than traditional businesses, and that pace generates exponentially more business and operations data. These disparate streams of data need to be related, analyzed, and made consumable for users in real time.

In 2016, we’ll see businesses adopt new strategies that address this longstanding challenge. Below I’ve outlined several Big Data predictions that will inevitably change the landscape and enable enterprises to solve these service quality issues once and for all.

1. The diversity in the sources of Big Data will explode: To date, most big data has been business data, focused on understanding patterns that affect revenue. This was the logical place for big data to start, since it had a huge bottom-line business impact, and the collection and subsequent analysis of the data could be done after the fact, in batch. While analyzing "after the fact" batch business data at scale with sophisticated analytics for a reasonable cost was a huge step forward, we have only started the journey. In the very near future, the sources of big data will expand beyond last quarter's sales patterns to include detailed data about the experience of end users of applications, the performance and throughput of the applications that support online commerce, and how the IT infrastructure that supports those applications is behaving. This will also include data from everything in the Internet of Things (IoT) and data about how every "thing" in the IoT is functioning.

2. Big Data will become a stream of tsunamis: Many of these new sources will produce data at a far more rapid rate than the daily, weekly, or monthly batches that characterize big data today. Consider the data collected by one of our partners, ExtraHop, which captures all of the data flowing through a switch. Consider the implications of collecting data from 50,000 virtual servers in a private or public cloud. Virtual Instruments collects the response time of every transaction that hits a disk in a data center. AppDynamics and Dynatrace collect data about every transaction that hits every online system in the enterprise. A rough back-of-envelope sketch of what such rates add up to follows below.
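To make the tsunami concrete, here is a rough back-of-envelope sketch of the sustained rate a 50,000-server cloud could generate. The per-server figures are my own illustrative assumptions, not numbers from any of the vendors above.

```python
# Back-of-envelope ingest rate for 50,000 virtual servers.
# All per-server figures below are illustrative assumptions.
servers = 50_000
metrics_per_server = 200   # assumed metrics collected per VM
sample_interval_s = 10     # assumed collection interval (seconds)
bytes_per_sample = 100     # assumed serialized size of one sample

samples_per_sec = servers * metrics_per_server / sample_interval_s
mb_per_sec = samples_per_sec * bytes_per_sample / 1e6

print(f"{samples_per_sec:,.0f} samples/s, ~{mb_per_sec:.0f} MB/s sustained")
# -> 1,000,000 samples/s, ~100 MB/s sustained
```

Even under these conservative assumptions, that is a million samples arriving every second, continuously, which no daily or weekly batch window can absorb.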


3. Streams of data must be processed in real time: It is not enough to ingest data as it arrives; it must also be turned around immediately, so that end users can consume it and derive value from it in their tools. This breaks the existing batch big data paradigm. Today, big data analytics gives users last week's or last month's data at best. How about the data from the last minute, across all data sources, continuously updated? Kafka will play a key role in managing the consumption of these streams of data, and Spark will play a key role in processing them; a minimal sketch of that pattern follows below.
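As a hedged illustration of that Kafka-plus-Spark pattern, the sketch below uses Spark Structured Streaming to consume a Kafka topic and maintain a continuously updated one-minute rolling view. The broker address, topic name, and event schema are assumptions for the example, and the job needs the spark-sql-kafka connector on its classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, col, from_json, window
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("last-minute-metrics").getOrCreate()

# Hypothetical shape of one metric event; field names are illustrative.
schema = StructType([
    StructField("source", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
    StructField("ts", TimestampType()),
])

# Consume the raw stream from Kafka (broker and topic are assumptions).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "ops-metrics")
       .load())

# Parse the JSON payloads into typed columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

# A continuously updated one-minute average per source and metric:
# "the data from the last minute", not last week's batch.
per_minute = (events
              .withWatermark("ts", "2 minutes")
              .groupBy(window(col("ts"), "1 minute"),
                       col("source"), col("metric"))
              .agg(avg("value").alias("avg_value")))

query = (per_minute.writeStream
         .outputMode("update")
         .format("console")
         .start())
query.awaitTermination()
```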

4. Dumb Big Data is a bad thing: Existing big data approaches presume the existence of a big data analyst who has the domain knowledge and expertise to query the correct parts of the data and perform the correct analytics on the result of that query. This is because most existing big data is "dumb big data": nothing in the data itself tells anyone what any given piece of data is related to. The sketch below illustrates the difference.
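A minimal sketch of the difference, using hypothetical record shapes: a "dumb" metric carries no links to anything else, while a self-describing record embeds relationship metadata that lets any consumer discover what the measurement belongs to, with no domain expert hand-writing the join.

```python
# A "dumb" record: nothing in it says what it is related to.
dumb = {"metric": "response_time_ms", "value": 412.0}

# A self-describing record (hypothetical shape): relationship
# metadata travels with the measurement itself.
smart = {
    "metric": "response_time_ms",
    "value": 412.0,
    "object": {"type": "transaction", "id": "txn-98321"},
    "related": [
        {"type": "application", "id": "checkout-service"},
        {"type": "host", "id": "vm-4471"},
    ],
}

def related_ids(record, obj_type):
    """Walk a record's relationships; impossible for the dumb record."""
    return [r["id"] for r in record.get("related", []) if r["type"] == obj_type]

print(related_ids(smart, "host"))  # ['vm-4471']
print(related_ids(dumb, "host"))   # [] (the link simply is not there)
```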

5. Intelligence must be applied to Big Data at ingest time: You cannot bolt intelligence onto data after it has been ingested; doing so after the fact falls afoul of the garbage-in, garbage-out problem. If all you have is a bunch of metrics and unrelated objects (stores, geographies, transactions, applications, etc.), you cannot use analytics (statistics) to draw intelligent conclusions from dumb and unrelated data. The sketch below shows one way relationships might be attached as each event is ingested.
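Here is one way that might look in practice, as a sketch rather than any vendor's actual pipeline: a hypothetical topology map supplies the relationships, and they are attached while each event is in flight, so the stored record is never dumb.

```python
# Hypothetical topology map, e.g. from a CMDB or a discovery tool.
TOPOLOGY = {
    "vm-4471": {"application": "checkout-service", "datacenter": "us-east-1"},
}

def enrich_at_ingest(event):
    """Attach relationships while the event is being ingested.

    If the raw event were stored without this context, the links
    between metric, host, and application would be lost: the
    garbage-in, garbage-out problem described above.
    """
    context = TOPOLOGY.get(event.get("host"), {})
    return {**event, **context}

raw_event = {"host": "vm-4471", "metric": "response_time_ms", "value": 412.0}
print(enrich_at_ingest(raw_event))
# {'host': 'vm-4471', 'metric': 'response_time_ms', 'value': 412.0,
#  'application': 'checkout-service', 'datacenter': 'us-east-1'}
```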

Click here for part two of Harzog’s 2016 Big Data market predictions.

Bernd Harzog is the CEO and Founder of OpsDataStore Inc., where he is responsible for the strategy, execution, and financing activities of the company. Before founding OpsDataStore, Harzog was the CEO and founder of APM Experts, CEO of RTO Software, Inc., founding VP of Products at Netuitive, a general manager at XcelleNet, and a Research Director for the Gartner Group focusing on the Windows Server operating system family of products. Connect with him on LinkedIn.
