Dedicated connectivity
Databento provides private, dedicated connectivity for customers who require predictable latency, high throughput, and higher uptime.
The following table provides an overview of the four dedicated connectivity options supported by Databento and the recommended use case of each.
| Connectivity option | Services | Best for | Port speeds | Latency (90th) |
|---|---|---|---|---|
| Interconnect with AWS, Google Cloud, or Microsoft Azure | Live, Historical | Mission-critical, cloud-based applications requiring high uptime or consuming entire feed(s) | 1G | 1.7+ ms |
| Interconnect at proximity hosting location | Live, Historical | Mission-critical, self-hosted applications requiring high uptime or consuming entire feed(s) | 1G | 0.5+ ms |
| Cross-connect with any colocation or managed services provider (MSP) | Live | Applications requiring lowest latency | 10G, 25G | 42.4 μs |
| Colocation with Databento | Live | Applications requiring lowest latency | 10G, 25G | 42.4 μs |
Contact support if you need dedicated connectivity or any customized connectivity solution.
Interconnect with AWS, Google Cloud, or Microsoft Azure
This is the recommended setup to achieve 1.5 to 7 ms latency with Databento, and it is the most cost-effective of our four dedicated connectivity solutions. A 1 Gbps connection is provided, which is adequate for consuming entire feeds.
This setup leverages our existing interconnections to AWS, Google Cloud, and Microsoft Azure on-ramps at Equinix CH1 and NY5. A dedicated layer 3 connection is installed, ensuring that our data traffic goes through your cloud provider's backbone to reach your cloud services in any availability region or zone.
Note that connecting to the cloud over a dedicated interconnect may only achieve median latency similar to connecting over the public internet, as Databento operates a highly optimized IP network with diverse routes across tier 1 ISPs. However, this solution is expected to achieve better tail latencies.
| Total NRC (non-recurring charge) | Total MRC (monthly recurring charge) | Latency* (90th) |
|---|---|---|
| $300 | $750 | 1.7 ms, DC3 to Azure US N. Central. 6.8 ms, DC3 to GCP us-central1. |
* Varies with site. Measured from the first byte of data in at our boundary switch to the last byte read from your client socket.
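To make the measurement window concrete, here is a minimal sketch of how a 90th-percentile figure like the one above could be computed from timestamp pairs. The sample values are invented for illustration and do not reflect Databento's actual measurement pipeline.

```python
import statistics

# Each pair is (first byte in at boundary switch, last byte read from
# client socket), in nanoseconds. These values are invented placeholders.
samples_ns = [
    (1_000_000, 2_700_000),  # 1.70 ms
    (1_000_000, 2_640_000),  # 1.64 ms
    (1_000_000, 2_910_000),  # 1.91 ms
    (1_000_000, 2_580_000),  # 1.58 ms
    (1_000_000, 2_760_000),  # 1.76 ms
]

latencies_ms = [(last - first) / 1e6 for first, last in samples_ns]

# statistics.quantiles(n=10) returns the nine deciles; index 8 is the
# 90th percentile.
p90_ms = statistics.quantiles(latencies_ms, n=10)[8]
print(f"90th-percentile latency: {p90_ms:.2f} ms")
```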
Advantages
- Ease of setup, especially if your infrastructure is already on AWS, Google Cloud, or Microsoft Azure.
- Avoids contention on public internet routes, ensuring stable tail latencies and sufficient bandwidth between you and our live data gateways.
- Simplicity: redundancy is built in with BGP, which falls back on our public internet routes if the dedicated connection is down. Once set up, you can connect to our gateways using our public DNS hostnames, so your application code doesn't change (see the sketch after this list).
- You can use public cloud services, which are more cost-effective than conventional hosting and colocation services; for example, it's cheaper to spin up redundant servers on public cloud.
- Your cloud servers may be situated in any availability region or zone, giving you flexible hosting options.
- Most cost-effective of our four dedicated connectivity options.
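Because the interconnect is reached through the same public DNS hostnames, client code is unchanged. As an illustrative sketch using the official Python client, where the API key, dataset, schema, and symbol are placeholders for your own subscription:

```python
import databento as db

# Traffic to the live gateway rides the dedicated interconnect when it is
# up, and BGP falls back to public internet routes if it is down.
client = db.Live(key="YOUR_API_KEY")  # placeholder API key

client.subscribe(
    dataset="GLBX.MDP3",   # placeholder dataset
    schema="trades",       # placeholder schema
    stype_in="raw_symbol",
    symbols=["ESM2"],      # placeholder symbol
)

# Iterating the client starts the session and yields records as they arrive.
for record in client:
    print(record)
```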
Disadvantages
- Most other financial services providers, e.g. your broker, trading venues, extranet providers, etc., only provide connectivity at certain points-of-presence (PoPs) and don't support connections to public cloud. If you need to connect to such external services besides Databento, you'll most likely rely on us to arrange additional cross-connects and backhaul. If those services require much more than 1 Gbps of bandwidth, it becomes more cost-effective to pursue our other dedicated connectivity options.
- Higher latency than our other connectivity options.
Interconnect at proximity hosting location
This is similar to the setup for an Interconnect with AWS, Google Cloud, or Microsoft Azure, except that we'll connect to your own infrastructure at a proximity hosting site instead of a public cloud site. At the moment, the following proximity hosting locations are supported:
- Equinix CH1/4, 350 E Cermak Rd, Chicago.
- Equinix NY1, 165 Halsey St, Newark.
- Equinix NY2, 275 Hartz Way, Secaucus.
- Equinix NY9, 111 8th Ave, New York.
- Equinix LD4, 2 Buckingham Ave, Slough, UK.
- Equinix FR2, Kruppstrasse 121-127, Frankfurt, Germany.
- LSE, 1 Earl St, London, UK.
- Interxion LON-1, 11 Hanbury St, London, UK.
| Total NRC | Total MRC | Latency* (90th) |
|---|---|---|
| $300 | $750+ | 0.59 ms, NY4 to NY1. 1.05 ms, DC3 to CH1. |
* Varies with site. Measured from the first byte of data in at our boundary switch to the last byte read from your client socket.
Cross-connect with any colocation or managed services provider (MSP)
This is the minimum recommended setup to achieve sub-50 μs latency with Databento. We're carrier-neutral and vendor-neutral, meaning you may use any colocation provider or MSP that allows you to run a cross-connect to us at either CyrusOne Aurora I (DC3) or Equinix NY4/5. At the moment, our Raw API for live data only supports TCP transport, and a single port is sufficient for reliable transmission with this setup.
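As a rough sketch of what a single port over TCP means in practice, the client side reduces to one socket connection to the live gateway. The hostname and port below are placeholders, and a real Raw API session must complete the gateway's authentication handshake before any data is streamed:

```python
import socket

# Placeholder endpoint: substitute the gateway host and port for your
# dataset from your own Raw API configuration.
GATEWAY = ("gateway.example.net", 13000)

# One TCP connection on a single port is sufficient for reliable delivery.
sock = socket.create_connection(GATEWAY, timeout=5)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # latency-sensitive

# Read the gateway's initial session output; authentication follows before
# any market data is streamed.
print(sock.recv(4096).decode(errors="replace"))
sock.close()
```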
Latency varies with the site and data feed. The estimate below is based on our slowest path, which includes the following hops (summed in the sketch after this list):
- Arista 7050SX3-48YC, 800 ns
- Arista 7260X3, 450 ns
- Arista 7060SX2, 565 ns
- Mellanox Spectrum SN2410, 680 ns
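For scale, the listed switch hops alone sum to roughly 2.5 μs, a small fraction of the 42.4 μs 90th-percentile figure; the remainder accrues elsewhere along the measured path. A quick check:

```python
# Sum of the per-hop switch latencies listed above.
hops_ns = {
    "Arista 7050SX3-48YC": 800,
    "Arista 7260X3": 450,
    "Arista 7060SX2": 565,
    "Mellanox Spectrum SN2410": 680,
}
total_us = sum(hops_ns.values()) / 1_000
print(f"Total switch transit: {total_us:.3f} us")  # 2.495 us
```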
| Total NRC | Total MRC | Latency* (90th) |
|---|---|---|
| $300 | $2,177.50 | 42.4 μs |
* Measured from the first byte of data in at our boundary switch to the last byte of data out onto your cross-connect.
Colocation with Databento
Info: This option is only available to customers with an existing annual, flat-rate subscription for Databento Live.
This is similar to cross-connecting with a colocation provider or MSP, and it achieves similar latency. The main difference is that you're colocating with us directly.
| Total NRC | Total MRC | Latency* (90th) |
|---|---|---|
| $1,500 | $2,048.75 | 42.4 μs |
* Measured from the first byte of data in at our boundary switch to the last byte of data out on your data port.
Advantages
- Allows us to provide more end-to-end support and debug a wider range of issues.
- Provides you with predictable latency that matches our test bench.
- No cross-connect to Databento required. You're in the same rack as our live gateway, which skips hops through your colocation provider's or MSP's edge switches (border leaf or spine).
Disadvantages
- Not suitable if you anticipate a complex environment with more than 3 cross-connects, have frequently changing requirements, or need a diverse selection of on-net counterparties and direct venue connectivity.