By John Knuff (Part 5 in an 8-part series)
New Rules:
- Longer but faster end-to-end supply chains
- Leveraging network-neutral, ubiquitous connectivity
So how do firms effectively engineer distributed yet collaborating trading algorithms to span the global market?
There are many different architectural strategies to cope with constantly changing markets and technology.
With greater distribution, each trading component can adapt to local needs, accessing local data or services, while still optimizing end-to-end latencies.
Once again, colocation offerings in neutral data centers provide the right local choices and ubiquitous connectivity for evolving needs.
Some hedge funds, for example, might run a single strategy server in each colocation node at Eurex, Liffe and CME, with perhaps a central control server to manage positions and limits and apply an arbitrage strategy across them. The strategy servers may either communicate via the control server or collaborate directly with each other.
FX hedging platforms and less active derivatives exchanges might then be accessed remotely from the most appropriate server location.
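As a rough sketch of that control-server pattern, the fragment below tracks per-venue positions, enforces a limit and fires a simple two-legged arbitrage. The venue names, limit and trigger are purely illustrative, not any particular firm's setup.

```python
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class StrategyNode:
    """One strategy server colocated at a single venue."""
    venue: str
    position: int = 0

    def execute(self, side: str, qty: int) -> None:
        # In reality this would send an order to the local matching engine.
        self.position += qty if side == "buy" else -qty
        print(f"{self.venue}: {side} {qty}, position now {self.position}")


@dataclass
class ControlServer:
    """Central node that tracks positions, enforces limits and
    coordinates a two-legged arbitrage between venue nodes."""
    nodes: dict[str, StrategyNode] = field(default_factory=dict)
    position_limit: int = 100  # illustrative per-venue limit

    def add_node(self, node: StrategyNode) -> None:
        self.nodes[node.venue] = node

    def within_limit(self, venue: str, qty: int) -> bool:
        return abs(self.nodes[venue].position + qty) <= self.position_limit

    def arbitrage(self, cheap_venue: str, rich_venue: str, qty: int) -> None:
        # Buy where the contract looks cheap, sell where it looks rich,
        # but only if both legs stay inside the position limit.
        if self.within_limit(cheap_venue, qty) and self.within_limit(rich_venue, -qty):
            self.nodes[cheap_venue].execute("buy", qty)
            self.nodes[rich_venue].execute("sell", qty)


control = ControlServer()
for venue in ("EUREX", "LIFFE", "CME"):
    control.add_node(StrategyNode(venue))
control.arbitrage("EUREX", "CME", 10)  # one illustrative round trip
```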
Alternatively, some firms will insist on trading engines at every node.
In Asia, multiple servers are the norm, given the large distances involved. However, if an exchange is slow, it can safely be accessed remotely. There might also be a post-trade risk server, possibly outsourced, to pick up drop copies and margin calls and send out risk signals in a standardized way.
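A post-trade risk listener of that kind could look, in very rough outline, like the sketch below; the drop-copy fields, margin threshold and JSON alert format are assumptions made for illustration.

```python
import json
from collections import defaultdict

margin_used: dict = defaultdict(float)   # running margin usage per account
MARGIN_ALERT = 0.8                       # alert at 80% of a notional margin line


def send_risk_signal(account: str, usage: float) -> None:
    # In practice this would go out over a messaging bus in an agreed schema;
    # here we simply serialise a normalized JSON payload.
    print(json.dumps({"type": "MARGIN_ALERT", "account": account, "usage": round(usage, 2)}))


def on_drop_copy(fill: dict) -> None:
    """Handle one drop-copy fill reported by a venue or clearer."""
    margin_used[fill["account"]] += fill["margin_delta"]
    if margin_used[fill["account"]] >= MARGIN_ALERT:
        send_risk_signal(fill["account"], margin_used[fill["account"]])


on_drop_copy({"account": "FUND-A", "margin_delta": 0.85})  # triggers one alert
```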
“When the market became too competitive for simple algorithms,” said a multi-strategy trader, “we looked cross market for opportunities. We began with a simple master-slave arrangement with the master node trading the less liquid futures product in one colo, while the slave node managed the hedge in colo with another more liquid product. This kept things simple.”
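The arrangement the trader describes might be sketched as follows, with the master node trading the illiquid leg and the slave node rebalancing the hedge in the liquid leg; the hedge ratio, quantities and class names are invented for illustration.

```python
class SlaveHedger:
    """Runs in the colo of the more liquid hedge product."""

    def __init__(self) -> None:
        self.hedge_position = 0

    def on_master_fill(self, filled_qty: int, hedge_ratio: float) -> None:
        # Offset the master's fill with the liquid instrument.
        target = -round(filled_qty * hedge_ratio)
        delta = target - self.hedge_position
        self.hedge_position = target
        print(f"slave: adjust hedge by {delta}, now {self.hedge_position}")


class MasterStrategy:
    """Runs in the colo of the less liquid futures product."""

    def __init__(self, slave: SlaveHedger, hedge_ratio: float = 1.0) -> None:
        self.slave = slave
        self.hedge_ratio = hedge_ratio

    def on_fill(self, qty: int) -> None:
        print(f"master: filled {qty} in the illiquid leg")
        self.slave.on_master_fill(qty, self.hedge_ratio)


master = MasterStrategy(SlaveHedger(), hedge_ratio=0.8)
master.on_fill(25)  # slave moves its hedge to -20 in the liquid product
```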
“Volatility levels can change very quickly and liquidity can shift from one venue to another – in minutes or even seconds at times,” observes a European prop trader. “You just have to adapt, taking market microstructure and local rules into account. This is the role of our central control server. It can launch new strategies in different locations or vary the capital allocated to each.”
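The capital-reallocation side of that role reduces, in toy form, to something like the function below, where the liquidity scores and capital figure are made up.

```python
def reallocate(capital: float, liquidity: dict) -> dict:
    """Split total capital across venues in proportion to observed liquidity."""
    total = sum(liquidity.values())
    return {venue: capital * score / total for venue, score in liquidity.items()}


# Shift weight toward the venue currently showing the deepest book.
print(reallocate(10_000_000, {"EUREX": 3.0, "LIFFE": 1.0, "CME": 6.0}))
```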
These dynamic elements also adapt to constantly changing latency and liquidity patterns as brokers and telecom carriers leapfrog each other’s latency claims.
This led one Swiss hedge fund to move from a two-tier to a three-tier architecture to optimize its cross-market and cross-asset strategies. A fast intermediate tier was introduced to maintain the models for regional colo centers. “It’s a constant learning process,” they observed.
For market data, most high-frequency firms will take both direct feeds from each exchange and some slower, but richer, aggregated data or news feeds from third-party networks.
For the lowest latencies, firms use mesh networks that capture, normalize and filter each feed at the source colocation and then multicast the normalized feed to all trading nodes over the shortest possible network routes. If aggregation is required, it is performed as a pre-processing step in front of the trading engine.
Accurate latency data can also be measured with this model, allowing smart order routers to take best-execution decisions. Retransmit requests for market ticks also involve fewer delays if the feed handler is colocated with the data vendor rather than operating remotely.
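In simplified form, the mesh model amounts to the sketch below: a publisher at the source colo normalizes each raw tick and stamps it before multicasting, and the receiving trading node derives an observed latency from that stamp. The multicast group, message fields and clock assumptions are illustrative; real deployments would use synchronized clocks and compact binary encodings.

```python
import json
import socket
import time

GROUP, PORT = "239.1.1.1", 5007  # hypothetical multicast group and port


def publish(raw_tick: dict) -> None:
    """Runs colocated with the exchange: normalize, stamp and multicast."""
    normalized = {
        "symbol": raw_tick["sym"].upper(),
        "price": float(raw_tick["px"]),
        "sent_ns": time.time_ns(),  # source timestamp used for latency metrics
    }
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(json.dumps(normalized).encode(), (GROUP, PORT))


def on_receive(payload: bytes) -> dict:
    """Runs at the trading node: decode the tick and record observed latency."""
    tick = json.loads(payload)
    tick["latency_ns"] = time.time_ns() - tick["sent_ns"]
    return tick
```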
Customizations are also straightforward to build and share when the main work is done at source. Of course, less latency-sensitive firms may optimize on costs, perhaps using a standard extranet, even if it means leaving a few basis points on the table.
Traditional aggregated feeds take extra network hops to normalize data at a central node and then broadcast it to all subscribers, and those hops add significant latency.
This is much slower and less flexible for the data vendor than the mesh approach, where data is broadcast at each source, transmitted directly to every trading engine and then aggregated within the feed handler.
Nor can the traditional model provide latency metrics, since it does not follow the direct link paths between markets and traders.
So market data aggregators are either moving into the colocation centers themselves or offering their own colocation services to traders who wish to optimize end-to-end latencies.
All of these innovations show how M2M 2.0 encourages architects to move processes to the data instead of bringing data to the process.
Being more data-driven rather than process-driven, distributed strategies facilitate both agility and speed.
Coming up in Part 6: Optimizing Space and Time
In M2M 2.0 this means optimizing when and where we process market data and trading decisions, but also how we allocate infrastructure capacity.