
Cloudian’s Gary Ogasawara on Object Storage and Edge Computing


Less than a decade ago, when object storage was announced to the world, the technology distinguished itself as a distributed, scalable, high-performing, and cost-effective approach that redefined the limits of what legacy storage could do. But edge computing has emerged, and with its decentralized infrastructure, it demands a new method of storing terabytes and petabytes of data. We had the opportunity to speak with Gary Ogasawara, Chief Technology Officer of Cloudian, about the distributed properties of object storage and how it can best support edge computing.


What differentiates object storage from the other approaches to data storage?

Today, enterprise organizations produce large volumes of unstructured data on a regular basis. Object storage solves the scale problem with an architecture that lets you combine multiple types of data in a single, global namespace. Traditional storage systems were designed with an upper limit on capacity. Unlike NAS, in which the file hierarchy limits growth, object storage is natively distributed and horizontally scalable, allowing the management of exabytes of data. As such, object storage is the only storage type that can scale to exabyte volumes and beyond with the required application APIs, security, and performance.
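To make the flat, global namespace concrete, here is a minimal sketch using the AWS SDK for Python (boto3) against a generic S3-compatible endpoint; the endpoint URL, bucket name, and key prefix are illustrative placeholders, not details of any particular deployment.

```python
import boto3

# Hypothetical on-prem S3-compatible endpoint; swap in a real URL and
# credentials for an actual deployment.
s3 = boto3.client("s3", endpoint_url="https://s3.example.internal")

# Objects live in one flat namespace: each is addressed by bucket + key.
# "Directories" are just key prefixes, not a hierarchy that caps growth.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="sensor-data", Prefix="plant-7/2020/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```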

Here are additional facts on what separates object storage from the more traditional SAN and NAS:

  • Object storage is standardizing on the S3 API
    • Object storage typically employs the S3 API as the de facto standard. Object storage also incorporates data management features that simplify data placement; public cloud and on-prem storage become two parts of a single global namespace. Having multiple storage systems and tools support the same API makes it easier for app developers and other users.
  • Object storage creates one storage pool that can span the globe
    • With the advent of IoT and remote-sensing technologies, large volumes of streaming data now arrive continuously and rapidly, placing new demands on networking and storage technologies. Object storage addresses this challenge with a distributed system in which nodes may be deployed wherever needed. Low-cost, remote storage lets analysis happen where the data is collected, rather than loading the network with raw information and transporting it to a central hub for storage and processing.
  • Object storage has robust metadata capabilities
    • Metadata includes user-defined tags that can be associated with each object. Object storage has rich metadata tagging features built in, unlike NAS, which has very limited metadata, or SAN, which has none (see the sketch after this list).
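To illustrate the S3 API and metadata points, here is a minimal sketch, again assuming boto3 and a generic S3-compatible endpoint; the bucket, key, and tag values are hypothetical.

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.example.internal")  # placeholder

# User-defined metadata travels with the object (as x-amz-meta-* headers).
with open("000123.jpg", "rb") as f:
    s3.put_object(
        Bucket="media-archive",
        Key="frames/cam42/000123.jpg",
        Body=f,
        Metadata={"camera": "cam42", "location": "lot-b", "reviewed": "false"},
    )

# The same tags come back on a HEAD request, so downstream tools can route
# or search objects without reading their payloads.
head = s3.head_object(Bucket="media-archive", Key="frames/cam42/000123.jpg")
print(head["Metadata"])
```

Because the S3 API is the common denominator, the same snippet runs against a public cloud bucket or an on-prem system by changing only the endpoint.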

How does object storage work to support edge computing?

The distributed properties of the technology make object storage the “go-to” for edge computing. Why? Object storage software can run both at the edge, on small edge processors, and on large servers at central hubs. Networked together, the object storage software distributes the data across the different layers. Putting object storage and data processing code at the edge enables quick data filtering, image processing, and distributed processing.

In the legacy batch processing model, data must travel to “compute” to be filtered and processed. This means the code is not portable, requiring larger physical resources and limiting the volume of data that can be managed. When it comes to edge technologies, the exact opposite needs to occur: the code needs to move to where the data is being generated.

Edge computing demands that data be processed and managed as close as possible to where it is generated. This enables filtering, so only certain important data is pushed to a data center or an intermediate hub, rather than all of it. In other words, object storage can help make sense of the data being processed at the edge, so only a finite amount gets sent back. For example, autonomous cars have a camera rig on each vehicle that generates pictures of the road (approximately 5GB of data per second), resulting in terabytes of data per day and petabytes per year. Object storage can be deployed at the edge (the car) to collect all data, then leverage machine learning to send only anomalous or important data back to the hub.
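A minimal sketch of that edge-filtering loop, assuming boto3, a local S3-compatible store on the vehicle, and a central hub endpoint; both URLs, the bucket names, and the is_anomalous() check are hypothetical stand-ins for a real deployment and model.

```python
import boto3

edge = boto3.client("s3", endpoint_url="http://localhost:9000")   # on-vehicle store
hub = boto3.client("s3", endpoint_url="https://hub.example.com")  # central hub

def is_anomalous(frame: bytes) -> bool:
    # Stand-in for an on-vehicle ML model; here, a trivial size heuristic.
    return len(frame) > 4_000_000

# Walk the locally collected frames and forward only the interesting ones,
# so raw terabytes never have to cross the network to the hub.
paginator = edge.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="camera-frames"):
    for obj in page.get("Contents", []):
        body = edge.get_object(Bucket="camera-frames", Key=obj["Key"])["Body"]
        frame = body.read()
        if is_anomalous(frame):
            hub.put_object(Bucket="anomalies", Key=obj["Key"], Body=frame)
```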

Where do you see the object storage space going in the future?

  • Scalability – Customers will continue to want the ability to start small and scale to petabytes and beyond without disruption.
  • Performance – While object storage has traditionally been focused on providing massive capacity rather than ultra-low latency, the technology is evolving to serve both needs (e.g., incorporating flash).
  • Intelligence – With the rapid growth of AI and ML applications, object storage will take on a larger role in enabling these workloads.

Looking to learn more? Download our free 2020 Buyer’s Guide for Data Storage with full profiles on the top 28 providers in the space.


Tess Hanna

