Recent fragmentation in the legacy Data Integration market made us wonder whether traditional integration tools were becoming obsolete before our very eyes. With this in mind, we asked the crowd whether they believed Data Integration as we’ve known it was dying. To our surprise, the answer was a resounding no; legacy tools remain in use across many verticals as enterprises prepare for the next wave of data tools. But just because ETL Data Integration solutions still appear in data architectures everywhere doesn’t mean they aren’t frustrating to use.
Forward-thinking vendors are looking to get out ahead of what is becoming a major problem for enterprises: data latency. Organizations, now more than ever, need integration tools that enable their end-users to self-serve both on-premises and in the cloud. Modern integration solutions, such as those branded as Data Virtualization, Integration Platform as a Service, or more commonly, self-service, provide agility that legacy offerings simply cannot match.
If you’ve been working with data for any amount of time and have been exposed to the traditional ETL process, you’re very likely aware that it can be painstaking, to say the least. And though it’s being offered by fewer and fewer solution providers, the crowd has made it loud and clear that this technology is still being utilized. Some business professionals who work in more traditional data environments describe the process only in expletives, others in screams, still others in not very attractive facial expressions. In a changing world where data streams from a variety of sources in real time, collecting and integrating it via this process after the fact is time-consuming.
ETL tools, while formerly the market standard for migrating data from one place to another, are being replaced by software that can do all of this in closer to real time, often as an automated process. Many data professionals consider ETL one of the biggest IT bottlenecks, if not the biggest, forcing end-users to wait for data to be extracted from a source or storage medium, transferred to the new location, and loaded into the database where it can be pulled for analysis. All of these added steps cause huge amounts of latency.
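The extract-transfer-load sequence described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's implementation; the source rows, the transform rule, and the in-memory "warehouse" target are all assumptions made for the example. The point is that each discrete stage runs to completion before the next begins, which is where the batch latency accumulates.

```python
# Minimal sketch of a batch ETL run: each stage must finish
# before the next starts, so end-users wait on the whole chain.

def extract(source):
    """Pull raw records from a source system (here, an in-memory list)."""
    return list(source)

def transform(records):
    """Normalize records in flight; every added step adds batch latency."""
    return [{"id": r["id"], "name": r["name"].strip().title()} for r in records]

def load(records, target):
    """Write transformed records into the target store; return the row count."""
    target.extend(records)
    return len(records)

# Hypothetical source data and target store for the illustration.
source_rows = [{"id": 1, "name": "  alice  "}, {"id": 2, "name": "BOB"}]
warehouse = []
loaded = load(transform(extract(source_rows)), warehouse)
```

Only after `load` returns can the data be queried from `warehouse` for analysis, which is the waiting-on-the-pipeline effect the paragraph describes.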
Though ETL has seen some technological movement over the last half-decade, such as change data capture enhancements that migrate only changed data and shave down processing time, enterprises and vendors alike have their eyes on the future. Hadoop and other more modern Data Management solutions have taken the market by storm, providing the kinds of automation and integrated storage that integration tools have long gone without. And while it’s true that many large organizations would like to avoid replacing all of their current systems with Hadoop because of the cost and effort involved, the fact of the matter is that ETL is simply too slow and wonky to keep up with the real-time, self-service integration the market demands.
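The change data capture idea mentioned above, migrating only changed data rather than reloading everything, can be sketched as a snapshot diff. This is a simplified, hypothetical illustration (real CDC tools typically read database transaction logs rather than comparing snapshots); the table contents and function name are assumptions made for the example.

```python
# Minimal sketch of the change-data-capture idea: compare the current
# state to the last-seen snapshot and migrate only new or modified rows,
# instead of re-extracting and re-loading the entire table.

def capture_changes(previous, current):
    """Return only the rows that are new or modified since the last run."""
    return {key: row for key, row in current.items() if previous.get(key) != row}

# Hypothetical snapshots keyed by primary key.
last_snapshot = {1: "alice@example.com", 2: "bob@example.com"}
current_state = {1: "alice@example.com",       # unchanged -> skipped
                 2: "bob@new.example.com",     # modified  -> migrated
                 3: "carol@example.com"}       # new       -> migrated

delta = capture_changes(last_snapshot, current_state)
```

Here only two of the three rows move, which is the processing-time saving the enhancement provides; the unchanged row is never re-extracted.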
Latest posts by Timothy King