SnapLogic recently announced the spring release of its Elastic Integration Platform. The update adds new capabilities for integrating streaming data and powering big data analytics in the cloud, with support for Apache Kafka, Microsoft HDInsight, and Google Cloud Storage. The spring release also includes enhancements that automate the data management and shaping tasks critical to transforming data into insights.
SnapLogic’s Vice President of Engineering Vaikom Krishnan adds: “SnapLogic’s platform is now processing over 100 billion JSON documents per month, delivering enterprise-scale data and application integration as a service to our customers. The spring 2016 release further expands our big data integration capabilities with advanced streaming capabilities that are well suited for Internet of Things and Data Lake use cases.”
SnapLogic’s spring release includes the following enhancements:
Self-service integration for streaming data
This makes it simple for users to create low-latency big data pipelines without coding, and helps make Kafka enterprise-ready with pre-built Snaps for common data transformation operations plus connectors for more than 400 endpoints. It can be used in conjunction with SnapLogic Ultra Pipelines: always-on data flows that receive input from a website or application and return data to the requesting endpoint at speeds up to 10 times faster, making them well suited to Internet of Things data flows.
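SnapLogic pipelines are built visually rather than in code, but the kind of per-document work a low-latency Kafka pipeline performs can be sketched in plain Python. In the sketch below, `transform_doc` and its field names (`deviceId`, `temp_c`) are hypothetical stand-ins for a pre-built transformation Snap, and plain iterables stand in for the Kafka consumer and producer; this is not SnapLogic's API.

```python
import json
from typing import Iterator, List

def transform_doc(raw: str) -> str:
    """Hypothetical per-document transformation, standing in for the kind
    of pre-built transformation Snap a no-code pipeline would apply to
    each JSON document flowing through a Kafka topic."""
    doc = json.loads(raw)
    # Normalize a field name and add a derived field (illustrative only).
    doc["device_id"] = doc.pop("deviceId", None)
    doc["temp_f"] = doc["temp_c"] * 9 / 5 + 32
    return json.dumps(doc)

def run_pipeline(source: Iterator[str]) -> List[str]:
    # In a real deployment the source would be a Kafka consumer and the
    # sink a Kafka producer; here the documents are simply collected.
    return [transform_doc(msg) for msg in source]

incoming = ['{"deviceId": "sensor-7", "temp_c": 20.0}']
outgoing = run_pipeline(incoming)
```

The point of the sketch is the shape of the flow: each JSON document is transformed independently as it streams past, which is what keeps latency low.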
The power to do analytics and data management in the cloud
With SnapLogic, users can ingest data from virtually any source into an HDInsight cluster, then prepare and deliver timely, relevant data for analysis to business intelligence tools or off-cluster data stores. SnapLogic’s ability to accept any data from anywhere is further strengthened in the spring 2016 offering with new support for Google Cloud Storage, which complements SnapLogic’s Snaps for Google BigQuery.
Automated data preparation and sharing
The SnapLogic Designer enables users to operationalize many of the data quality, preparation, and transformation tasks required for analysis through automated tasks within visual data flow pipelines. The spring 2016 release includes data mapping improvements, with SmartLink simplifying the work by suggesting field-to-field mappings. The release also adds new transformation Snaps for Spark.
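SmartLink's actual matching logic is not public; as a minimal sketch of the general idea — suggesting field-to-field mappings from name similarity — the snippet below uses Python's standard `difflib`. The field names and the 0.6 similarity cutoff are assumptions for illustration, not SnapLogic behavior.

```python
import difflib

def suggest_mappings(source_fields, target_fields, cutoff=0.6):
    """Suggest a target field for each source field by name similarity.

    Returns a dict of source field -> best-matching target field,
    omitting source fields with no match above the cutoff.
    """
    mappings = {}
    for src in source_fields:
        # get_close_matches ranks candidates by sequence similarity.
        matches = difflib.get_close_matches(src, target_fields, n=1, cutoff=cutoff)
        if matches:
            mappings[src] = matches[0]
    return mappings

suggested = suggest_mappings(
    ["cust_name", "cust_email", "order_total"],
    ["customer_name", "customer_email", "total_amount"],
)
```

A tool like SmartLink would present such suggestions for the user to confirm or override, rather than applying them blindly; fields below the cutoff are left unmapped for manual attention.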
Containerized integration pipelines
SnapLogic has also previewed a new capability, currently under development, that will containerize hybrid cloud and big data integration. While the feature is available only through a customer preview program, it will allow users to deploy SnapLogic just-in-time via a Docker container. The container can be deployed in any cloud environment that hosts Docker containers, and can run in data centers running Docker Swarm.
Timothy King