At Mobile World Congress 2017 there is a lot of buzz around the massive amounts and types of data that mobile devices will add to the deluge of big data traffic already traveling over today’s enterprise networks. The recently published “Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2016-2021” predicts that there will be 11.6 billion mobile-connected devices by 2021, including machine-to-machine (M2M) modules and IoT devices, exceeding the world’s projected population of 7.8 billion. Global mobile data traffic from those devices will increase sevenfold between 2016 and 2021, reaching 49.0 exabytes per month, and annual traffic will exceed half a zettabyte.
Here lies the dilemma for many enterprises: Access to multiple types of mobile data from numerous sources and locations, in either an event-based or time-scheduled manner, places a large burden on enterprise data infrastructure and services. Without fast, scalable and secure distributed data access at the edge and tools to segment access logically and geographically, the pendulum will swing over to distributed data sprawl. In addition, the physics of latency, along with ever-increasing bandwidth costs, mean that backhauling all this data to a centralized corporate data center is not sustainable.
This is where deploying an Interconnection Oriented Architecture™ (IOA™) strategy to develop a distributed data repository can help you create a higher-performance, more secure and scalable data infrastructure for the increasing amount of mobile data traffic coming your way.
Deploy a distributed data repository at the edge
The first step to deploying an IOA-based data strategy is to localize your data requirements in a digital edge node. This allows you to balance protection with accessibility, and govern data movement and placement. Each node can be tailored for the local or shared data services at that geographic location, placing you in control of your data and performance.
From there, you can configure a geographically distributed data repository that is designed for scale and consistency and leverages both private and public cloud capacity as a single distributed pool. This data pool can become your default tier of data service available everywhere, with both file system and API interfaces.
Deploying a single namespace data service that is available in all digital edge node locations optimizes for high availability and data protection. It can be immutable, to protect against human error and data corruption, and provide policy-based controls to address logical and geographical data segmentation.
Equinix Distributed Data Repository Reference Model
A distributed data repository is designed for scale and consistency, allowing enterprises to leverage both private and public cloud capacity as a single distributed data pool.
Solving the scalability dilemma with an interconnection-first strategy
Leveraging a distributed data repository will allow your enterprise to benefit from the same proven technology cloud providers use to support massive-scale, multi-tenant data services (e.g., private cloud storage, object storage). Until now, this technology has not been widely used by enterprises, since it requires multiple geographical locations and an optimized WAN to be truly effective.
But now all that can be solved if you deploy an IOA strategy and geographically place data nodes in each edge node. Direct and private connectivity will give you high-speed, low-latency connectivity between data, mobile and other network carriers, and cloud services.
To get started, strategically place latency-sensitive data in proximity to the services (applications, analytics, cloud) that require access to it, backed by a faster local cache repository. Establish this cache/copy at the edge to make it securely accessible to multiple clouds and business partners, or to services running locally in an edge node.
From there, you can use built-in algorithms to interpret policies and store the actual data in a way that protects it from device, location or even regional failures without losing data or access. This offers far more protection than a “copy” of the data and uses much less storage. Data services are also optimized for integration, supporting multiple interfaces (e.g., web, APIs, file systems). By placing data at the edge, you also satisfy data sovereignty or sensitivity requirements that mandate data containment in the node or region/location.
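The “more protection than a copy, with much less storage” claim is typically achieved with erasure coding, a reasonable reading of the algorithms referenced above, though the source does not name a specific scheme. A quick back-of-envelope comparison, assuming a common (k, m) fragment layout:

```python
# Back-of-envelope storage math: full replicas vs. (k, m) erasure coding.
# A (k, m) scheme splits each object into k data fragments plus m parity
# fragments; any k of the k+m fragments reconstruct the object, so it
# survives the loss of up to m fragments (devices or locations).

def replication_overhead(copies):
    """Extra storage, as a fraction of the original data, for N full copies."""
    return copies - 1

def erasure_overhead(k, m):
    """Extra storage, as a fraction of the original data, for k+m coding."""
    return m / k

print(replication_overhead(3))   # three full copies: 2.0 (200% extra storage)
print(erasure_overhead(10, 4))   # 10+4 coding: 0.4 (40% extra storage),
                                 # while tolerating 4 simultaneous losses
```

Three full replicas tolerate two losses at 200% overhead; a 10+4 erasure code tolerates four losses at 40% overhead, which is where the “much less storage” advantage comes from.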
You can also place data analytics services at the edge, whether standalone analytics over large data sets or real-time event processing. In addition, streaming data from mobile devices can be aggregated at the edge at an interconnection point from multiple sources and be made available to multiple destinations – many sources and many subscribers.
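The many-sources/many-subscribers pattern above is a publish/subscribe fan-in/fan-out. Here is a minimal in-process sketch with hypothetical topic and device names; a production edge node would use a streaming platform (e.g., Kafka or an MQTT broker) rather than this toy broker.

```python
from collections import defaultdict

# Minimal many-to-many broker sketch (illustrative, not production code).

class EdgeBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Fan-out: one event is delivered to every subscriber of the topic.
        for callback in self.subscribers[topic]:
            callback(event)

broker = EdgeBroker()
analytics, archive = [], []

# Two destinations subscribe to the same aggregated stream.
broker.subscribe("device/telemetry", analytics.append)
broker.subscribe("device/telemetry", archive.append)

# Fan-in: multiple mobile devices publish to one aggregation point.
for device in ("phone-1", "sensor-7"):
    broker.publish("device/telemetry", {"src": device, "battery": 0.8})

print(len(analytics), len(archive))  # both destinations received both events
```

Aggregating once at the interconnection point and fanning out locally means each event crosses the WAN at most once, regardless of how many subscribers consume it.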
Your distributed data service, with centralized management, will also be designed to handle zettabyte-scale data, and its edge accessibility will significantly reduce the volume of data moved, copied and stored, decreasing your WAN bandwidth and storage costs. And these strategies can be applied to all types of data, not just mobile.
Access the IOA Knowledge Base to learn more about re-architecting your data collection, processing and access at the digital edge.