Latency equalization

Quick definition

Latency equalization is the practice of ensuring that all connections to a trading venue's matching engine have the same latency.

What is Latency equalization?

Since the data centers that house matching engines are physically vast, two trading participants colocated in the same data center can experience non-negligible and unequal propagation latencies to the matching engine. This may give rise to concerns about fairness, especially if the trading venue promises pure FIFO matching (i.e. price-time priority) and charges a significant cost for direct connectivity.
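To get a sense of scale, here's a rough back-of-the-envelope sketch in Python. Light propagates through standard single-mode fiber at roughly two-thirds of its speed in vacuum, or about 5 ns per meter, so a few hundred meters of difference between two cable runs amounts to a gap on the order of a microsecond. The cable lengths below are hypothetical, purely for illustration:

# Back-of-the-envelope estimate of the propagation latency gap between two
# colocated participants with different cable runs to the matching engine.
C_VACUUM_M_PER_S = 299_792_458    # speed of light in vacuum
FIBER_INDEX = 1.468               # typical refractive index of single-mode fiber
NS_PER_METER = FIBER_INDEX / C_VACUUM_M_PER_S * 1e9   # ~4.9 ns per meter of fiber

run_a_m = 60.0    # hypothetical cross-connect length for participant A
run_b_m = 310.0   # hypothetical cross-connect length for participant B

gap_ns = (run_b_m - run_a_m) * NS_PER_METER
print(f"One-way propagation gap: {gap_ns:.0f} ns")    # ~1,224 ns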

To achieve latency equalization, trading venues usually spool large, equal lengths of optical fiber so that every cross-connect has the same transmission distance to the matching engine. As a result, this practice is often referred to as fiber equalization. A third-party auditor is sometimes engaged to verify that it's implemented within a specified tolerance.
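As a minimal sketch of the idea, assume the venue simply pads every cross-connect with spooled fiber up to the length of the longest run. The cage names, lengths, and tolerance below are illustrative assumptions, not values from any real venue:

# Pad each cross-connect with spooled fiber so every run matches the longest
# one within a tolerance. All lengths here are hypothetical.
raw_runs_m = {"cage_a": 60.0, "cage_b": 310.0, "cage_c": 145.0}
tolerance_m = 1.0    # hypothetical audit tolerance (~5 ns of fiber)

target_m = max(raw_runs_m.values())
spool_m = {cage: target_m - run for cage, run in raw_runs_m.items()}

for cage, extra in spool_m.items():
    total = raw_runs_m[cage] + extra
    assert abs(total - target_m) <= tolerance_m
    print(f"{cage}: add {extra:.1f} m of spooled fiber (total {total:.1f} m)")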

Another way to think of this is that fiber equalization attempts to bring the venue closer to true FIFO matching across handoffs. While the terms are often used interchangeably, in practice, fiber equalization doesn't guarantee latency equalization: it's easy to equalize propagation latency along the lengths of fiber, but other significant sources of latency variance may still exist upstream in the matching engine's load balancers and gateways.

Since this is costly to implement, latency equalization is usually only practiced by large exchanges, whose participants are more concerned with latency variance and the adverse selection that may arise from it. There's no guarantee that a smaller exchange or ECN will implement latency equalization.

For venues that don't employ latency equalization, latency to the matching engine may be an important trade secret. In such cases, infrastructure vendors and trading participants may hold on to unused colocation space within the data center, even at significant cost, if that space is known to offer better physical proximity and lower latency to the venue's matching engine. Such a space may sometimes fetch a premium rate when subleased to a trading participant. Hoarding rack space with good latency properties is quite common at larger data centers like Equinix NY4 and NY5, which house the matching engines of many trading venues.

Hoarding of rack space like this is a barrier to entry and a form of adverse selection for a newer entrant to low-latency trading. Data centers that serve financial customers tend to operate close to full capacity and lease rack space on an availability basis, which means newer customers will usually be left to pick from racks with less favorable latency. Moreover, the sales teams of these data centers are generally unable to provide information about the physical proximity of a rack to another customer in the data center (like a specific exchange that doesn't employ fiber equalization). It's usually possible, however, to tour the data center and draw your own inferences, or to extract this information from (former or current) employees of the trading venue and from technicians who are familiar with the layout of the data center.
