Time-as-a-Service for a Distributed Future

Stan Chernavsky

Every second, 127 new IoT devices connect to the Internet. By 2025, there will be more than 64 billion of them, generating gargantuan amounts of data. Figuring out how to harness the full value of this data has become a central pursuit of the technology industry.

For instance, 5G development is enabling ultra-high bandwidth wireless data transfers, and quantum computing research is redefining what’s possible in big data processing. Machine learning—a data-driven approach to building mathematical models—will leverage such advancements in networking and computation to bring unprecedented automation capabilities to our IoT devices.

Running real-time interactions on IoT-enabled infrastructure will mandate low-latency communication. As the speed of data transfer over a network is ultimately limited by the speed of light, achieving low delays will require compute and storage resources to be deployed in close proximity to the endpoint devices they manage. Here’s where the edge enters the picture.

The rise of edge computing

Edge computing expands the computing landscape: compared to cloud computing, it positions resources closer to the edge of a network in order to reduce jitter and latency for delay-sensitive services and applications. However, running an application on the edge for a global marketplace adds the challenge of managing a distributed system.

Fortunately, the rise of container technology, which enables isolated and portable workload environments, coupled with powerful container-orchestration systems like Kubernetes, is making running a globally distributed system “on-prem” increasingly flexible, painless, and cost-effective.

Globally distributed systems require streamlined tools for delivering precise, reliable time synchronization

While Kubernetes can greatly facilitate the deployment and management of distributed software, the fragmentation of the underlying infrastructure will demand equally powerful hardware management tools. First and foremost, globally distributed systems will require streamlined tools for delivering precise and reliable time synchronization.

Time synchronization is essential for providing efficiency and trust in the billions of daily transactions that support digitally driven industries and government organizations. The deficiencies of our current time synchronization methods are being felt across the telecommunications, broadcasting, finance, gaming, defense and manufacturing industries.

Financial institutions subject to regulatory regimes such as FINRA and MiFID II must ensure that trading systems are precisely synchronized, and that transaction timestamps are easily traceable. Gaming companies need to guarantee a tightly synchronized gaming experience regardless of where in the world participants are located. Defense organizations are looking for higher levels of accuracy and security. Manufacturers need to orchestrate high-volume data flows generated by IoT devices to coordinate critical processes. Municipalities can improve the quality of infrastructure services like utilities, traffic, safety and security through better integration and coordination of real-time data.

Limitations of NTP and PTP

In most hardware devices, local time is maintained by internal crystal oscillators. However, crystal oscillators are imperfect keepers of time; their accuracy degrades significantly in a matter of days, or even hours, due to variations in temperature and humidity. An external, more accurate time source is therefore used to stabilize a local hardware clock. Typically, local hardware clocks are synchronized over the Internet using the Network Time Protocol (NTP), which is designed to mitigate the effects of variable network latency. Internet NTP servers are usually offered for free, with no precision or uptime guarantees.
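To make the mechanism concrete, here is a minimal sketch of an SNTP client that queries a public NTP server over UDP and compares the result with the local clock. It is illustrative only; the choice of pool.ntp.org and the five-second timeout are assumptions for the example, not part of any service described here.

```python
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"      # public NTP pool, used here purely for illustration
NTP_EPOCH_OFFSET = 2208988800    # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def query_ntp(server: str = NTP_SERVER) -> float:
    """Send a minimal SNTP (version 3, client mode) request and return the server time as a Unix timestamp."""
    # First byte: leap indicator = 0, version = 3, mode = 3 (client) -> 0b00011011
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
    # The server's transmit timestamp (seconds field) sits at bytes 40-43 of the 48-byte reply.
    transmit_seconds = struct.unpack("!I", data[40:44])[0]
    return transmit_seconds - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    server_time = query_ntp()
    print(f"offset vs. local clock: {server_time - time.time():+.3f} s")
```

This naive version reads only the server's transmit timestamp; real NTP clients exchange four timestamps so that symmetric network delay cancels out of the offset estimate, as discussed further below.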

Alternatively, many companies, particularly financial firms with strict regulatory requirements on transaction timestamps, elect to deploy timing infrastructure in-house in order to take advantage of hardware-timestamping network cards, which can provide sub-microsecond precision over a more advanced protocol known as the Precision Time Protocol (PTP). However, deploying and maintaining PTP timing infrastructure on-prem can be an incredibly costly endeavor, particularly if hardware must be replicated at each node of a distributed system.

Need for robust, scalable timing technology

With the fundamental model of computing shifting toward a distributed paradigm, systems will rely more heavily on the coordination and scheduling of actions across multiple machines. Concurrent processes running on different hosts will need a more robust and scalable technology for ordering events and generating timestamps.
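For a concrete picture of event ordering across hosts, the sketch below shows a Lamport logical clock, a classic technique for ordering events without relying on synchronized wall clocks. It is an illustration, not something prescribed by the services discussed here; and because logical clocks order events without producing traceable wall-clock timestamps, precise physical synchronization remains necessary for use cases like regulatory timestamping.

```python
class LamportClock:
    """Minimal Lamport logical clock: orders events across processes
    independently of how well the hosts' wall clocks are synchronized."""

    def __init__(self) -> None:
        self.counter = 0

    def tick(self) -> int:
        """Local event: advance the clock and return the new timestamp."""
        self.counter += 1
        return self.counter

    def send(self) -> int:
        """Stamp an outgoing message."""
        return self.tick()

    def receive(self, remote_timestamp: int) -> int:
        """Merge the timestamp carried by an incoming message."""
        self.counter = max(self.counter, remote_timestamp) + 1
        return self.counter

# Two processes exchanging a message: the receiver's clock jumps past the sender's,
# so the receive event is always ordered after the send event.
a, b = LamportClock(), LamportClock()
ts = a.send()     # a.counter == 1
b.receive(ts)     # b.counter == 2
```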

Moreover, tightening industry requirements over transaction traceability will demand greater precision from timing hardware. For most organizations, building and managing an in-house time synchronization system for distributed digital infrastructure will be daunting; high-precision time synchronization hardware is expensive, and so are the network engineers required to maintain it. This deployment overhead can manifest as a serious drag on business agility and scalability.

Time-as-a-Service

The reality is that our time synchronization systems are showing their age. In today’s service-based technology landscape, time synchronization apparently missed the memo.

Consuming time “as-a-service” could greatly increase flexibility and reduce operating costs for IoT-intensive or distributed deployment strategies. Software-as-a-Service (SaaS) allows companies to adapt on demand, whether that means being first to a new market or scaling down to weather a dwindling business climate.

Furthermore, Time-as-a-Service (TaaS) could help democratize the process of running a business based on distributed technology. By replacing timing infrastructure and IT personnel with a monthly bill, companies could fully outsource the operating costs associated with staying up to date with the latest industry regulations, monitoring clock precision and protecting hardware from GPS spoofing attacks.

Precision timing requires consistent, reliable network infrastructure

Time synchronization algorithms, while quite resilient to moderate packet propagation delays, require jitter, the variance in that delay, to be very low. Network infrastructure for high-precision Time-as-a-Service must therefore be consistent and reliable. On the public Internet, with its unpredictable latency and unreliable connectivity, jitter spikes are practically certain.
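A small worked example helps show why jitter, rather than delay itself, is the enemy. The four-timestamp exchange used by protocols such as NTP and PTP cancels the network delay only when that delay is symmetric in both directions, so any asymmetric spike leaks straight into the offset estimate. The timestamp values below are made up purely for illustration.

```python
def clock_offset(t1: float, t2: float, t3: float, t4: float) -> float:
    """Standard four-timestamp offset estimate:
    t1 = client send, t2 = server receive, t3 = server send, t4 = client receive.
    The network delay cancels only if it is symmetric in both directions."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

# True offset between client and server is zero in both cases below.
# Symmetric 10 ms path delay: the estimate is exact.
print(clock_offset(0.000, 0.010, 0.011, 0.021))   # 0.0

# A 5 ms jitter spike on the return path only: half of it appears as offset error.
print(clock_offset(0.000, 0.010, 0.011, 0.026))   # -0.0025 (2.5 ms of error)
```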

Equinix Cloud Exchange Fabric (ECX Fabric™) provides an alternative to Internet-based software delivery by offering private, secure and reliable global interconnection.

ECX directly and dynamically connects distributed infrastructure by establishing data center-to-data center network connections via software-defined interconnection. Thus, ECX is perfectly positioned to enable the consumption of “as-a-service” precision time synchronization for the highly distributed and IoT-centric infrastructure of the future.
