API reference - Historical
Databento's historical data service can be accessed programmatically over its HTTP API. To make it easier to integrate the API, we also provide official client libraries that simplify the code you need to write.
Our HTTP API is designed as a collection of RPC-style methods, which can be called using URLs in the form https://hist..
Our client libraries wrap these HTTP RPC-style methods with more idiomatic interfaces in their respective languages.
You can use our API to stream or load data directly into your application. You can also use our API to make batch download requests, which instruct our service to prepare the data as flat files that can be downloaded from the Download center.
Overview
Our historical API has the following structure:
- Metadata provides information about the datasets themselves.
- Time series provides all types of time series data. This includes subsampled data (second, minute, hour, daily aggregates), trades, top-of-book, order book deltas, order book snapshots, summary statistics, static data and macro indicators. We also provide properties of products such as expirations, tick sizes and symbols as time series data.
- Symbology provides methods that help find and resolve symbols across different symbology systems.
- Batch provides a means of submitting and querying for details of batch download requests.
Authentication
Databento uses API keys to authenticate requests. You can view and manage your keys on the API keys page of your portal.
Each API key is a 32-character string starting with db-.
The library will use the environment variable DATABENTO_API_KEY as your API key if the key_from_env method is called.
Alternatively, you can pass an API key directly to the historical::ClientBuilder through the key method.
Calling the build method constructs and returns an instance of HistoricalClient.
Related: Securing your API keys.
use databento::HistoricalClient;
// Establish connection and authenticate
let mut client =
HistoricalClient::builder().key_from_env()?.build()?;
// Authenticated request
let datasets = client.metadata().list_datasets(None).await?;
for dataset in datasets {
println!("{dataset}");
}
Schemas and conventions
A schema is a data record format represented as a collection of different data fields. Our datasets support multiple schemas, such as order book, tick data, bar aggregates, and so on. You can see a full list from our List of market data schemas.
You can get a list of all supported schemas for any given dataset using the metadata.list_schemas method. The same information can also be found on each dataset's detail page found through the Explore feature.
The following table provides details about the data types and conventions used for various fields that you will commonly encounter in the data.
| Name | Field | Description |
|---|---|---|
| Dataset | dataset | A unique string name assigned to each dataset by Databento. The full list of datasets can be found from the metadata. |
| Publisher ID | publisher_id | A unique u16 assigned to each publisher by Databento. The full list of publisher IDs can be found from the metadata. |
| Instrument ID | instrument_id | A unique u32 assigned to each instrument by the venue. Information about instrument IDs for any given dataset can be found in the symbology. |
| Order ID | order_id | A unique u64 assigned to each order by the venue. |
| Timestamp (event) | ts_event | The matching-engine-received timestamp expressed as the number of nanoseconds since the UNIX epoch. |
| Timestamp (receive) | ts_recv | The capture-server-received timestamp expressed as the number of nanoseconds since the UNIX epoch. |
| Timestamp delta (in) | ts_in_delta | The matching-engine-sending timestamp expressed as the number of nanoseconds before ts_recv. See timestamping guide. |
| Timestamp out | ts_out | The Databento gateway-sending timestamp expressed as the number of nanoseconds since the UNIX epoch. See timestamping guide. |
| Price | price | The price expressed as a signed integer where every 1 unit corresponds to 1e-9, i.e. 1/1,000,000,000 or 0.000000001 (see the conversion example after this table). |
| Book side | side | The side that initiates the event. Can be Ask for a sell order (or sell aggressor in a trade), Bid for a buy order (or buy aggressor in a trade), or None where no side is specified by the original source. |
| Size | size | The order quantity. |
| Flags | flags | A bit field indicating event end, message characteristics, and data quality. |
| Action | action | The event type or order book operation. Can be Add, Cancel, Modify, clear book, Trade, Fill, or None. |
| Sequence number | sequence | The original message sequence number from the venue. |
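For example, converting the fixed-point price field to a floating-point value is a single multiplication. The snippet below is a minimal sketch of that conversion; the raw value is taken from the TradeMsg example later in this reference.
// Prices are fixed-point integers where 1 unit = 1e-9 (0.000000001).
let raw_price: i64 = 4_108_500_000_000; // e.g. the `price` field of a decoded TradeMsg
let price = raw_price as f64 * 1e-9;
println!("{price}"); // approximately 4108.5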
Datasets
Databento provides time series datasets for a variety of markets, sourced from different publishers. Our available datasets can be browsed through the search feature on our site.
Each dataset is assigned a unique string identifier (dataset ID) in the form PUBLISHER.DATASET, such as GLBX.MDP3.
For publishers that are also markets, we use standard four-character ISO 10383 Market Identifier Codes (MIC).
Otherwise, Databento arbitrarily assigns a four-character identifier for the publisher.
These dataset IDs are also found on the Data catalog and Download request features of the Databento user portal.
When a publisher provides multiple data products with different levels of granularity, Databento subscribes to the most-granular product. We then provide this dataset with alternate schemas to make it easy to work with the level of detail most appropriate for your application.
More information about different types of venues and publishers is available in our FAQs.
Symbology
Databento's historical API supports several ways to select an instrument in a dataset. An instrument is specified using a symbol and a symbology type, also referred to as an stype. The supported symbology types are:
- Raw symbology (RawSymbol): original string symbols used by the publisher in the source data.
- Instrument ID symbology (InstrumentId): unique numeric IDs assigned to each instrument by the publisher.
- Parent symbology (Parent): groups instruments related to the market for the same underlying.
- Continuous contract symbology (Continuous): proprietary symbology that specifies instruments based on certain systematic rules.
When requesting data from our timeseries.get_range or batch.submit_job endpoints, an input and output symbology type can be specified. By default, our client libraries will use raw symbology for the input type and instrument ID symbology for the output type. Not all symbology types are supported for every dataset.
The process of converting from one symbology type to another is called symbology resolution. This conversion can be done, at no cost, with the symbology.resolve endpoint.
For more about symbology at Databento, see our Standards and conventions.
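As an example of overriding the default input symbology type, the sketch below requests hourly aggregates for all SPXW options using parent symbology. It mirrors the timeseries::get_range example shown later in this reference.
use databento::{
    dbn::{SType, Schema},
    historical::timeseries::GetRangeParams,
    HistoricalClient,
};
use time::macros::datetime;
let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
// Select every instrument under the SPXW parent symbol.
let mut decoder = client
    .timeseries()
    .get_range(
        &GetRangeParams::builder()
            .dataset("OPRA.PILLAR")
            .stype_in(SType::Parent)
            .symbols("SPXW.OPT")
            .schema(Schema::Ohlcv1H)
            .date_time_range((
                datetime!(2023-08-07 00:00 UTC),
                datetime!(2023-08-08 00:00 UTC),
            ))
            .build(),
    )
    .await?;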
Encodings
DBN
Databento Binary Encoding (DBN) is an extremely fast message encoding and highly-compressible storage format for normalized market data. It includes a self-describing metadata header and adopts a binary format with zero-copy serialization.
We recommend using our Python, C++, or Rust client libraries to read DBN files locally. A CLI tool is also available for converting DBN files to CSV or JSON.
CSV
Comma-separated values (CSV) is a simple text file format for tabular data. CSVs can be easily opened with Excel, loaded into pandas data frames, or parsed in C++.
Our CSVs have one header line, followed by one record per line.
Lines use UNIX-style \n separators.
JSON
JavaScript Object Notation (JSON) is a flexible text file format with broad language support and wide adoption across web apps.
Our JSON files follow the JSON lines specification, where
each line of the file is a JSON record.
Lines use UNIX-style \n separators.
Compression
Databento provides options for compressing files from our API. Available compression formats depend on the encoding you select.
Zstd
The Zstd compression option uses the Zstandard format.
This option is available for all encodings, and is recommended for faster transfer speeds and smaller files.
The DBN crate comes with support for reading Zstandard-compressed DBN files and you can read any Zstandard file in Rust using the zstd library.
Read more about working with Zstandard-compressed files.
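For instance, a Zstandard-compressed DBN file saved from a previous request can be opened directly with the decoder's from_zstd_file constructor. A minimal sketch (the file name is hypothetical):
use databento::dbn::{decode::AsyncDbnDecoder, TradeMsg};
// Open a Zstandard-compressed DBN file without decompressing it first.
let mut decoder =
    AsyncDbnDecoder::from_zstd_file("GLBX-ESM2-20220606.trades.dbn.zst").await?;
if let Some(trade) = decoder.decode_record::<TradeMsg>().await? {
    println!("{trade:?}");
}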
None
The None compression option disables compression entirely, resulting
in significantly larger files.
However, this can be useful for loading small CSV files directly into Excel.
Dates and times
Our Rust client library uses the time crate for representing both dates and datetimes.
Dates and datetimes from the historical API will be deserialized into time::Date and time::OffsetDateTime.
To localize these, use OffsetDateTime::to_offset.
In DBN records, timestamps are represented as u64 nanosecond-precision UNIX timestamps.
These timestamps are always in UTC.
These can be parsed with OffsetDateTime::from_unix_timestamp_nanos.
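For example, the ts_event value from the TradeMsg example later in this reference can be converted to an OffsetDateTime as follows (a minimal sketch):
use time::{OffsetDateTime, UtcOffset};
// u64 nanoseconds since the UNIX epoch, always UTC.
let ts_event: u64 = 1654473600070033767;
let dt = OffsetDateTime::from_unix_timestamp_nanos(ts_event as i128)?;
println!("{dt}"); // 2022-06-06 0:00:00.070033767 +00:00:00
// Localize with to_offset, e.g. to UTC-5.
let local = dt.to_offset(UtcOffset::from_hms(-5, 0, 0)?);
println!("{local}");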
Errors
Our historical API uses HTTP response codes to indicate the success or failure of an API request. The client library provides an error enum to wrap these response codes.
- 2xx indicates success.
- 4xx indicates an error on the client side. Represented as an Error::Http.
- 5xx indicates an error with Databento's servers. Represented as an Error::Http.
Use the status method to get the status code associated with an error.
The full list of the response codes and associated causes is as follows:
| Code | Message | Cause |
|---|---|---|
| 200 | OK | Successful request. |
| 206 | Partial Content | Successful request, with partially resolved symbols. |
| 400 | Bad Request | Invalid request. Usually due to a missing, malformed or unsupported parameter. |
| 401 | Unauthorized | Invalid username or API key. |
| 402 | Payment Required | Issue with your account payment information. |
| 403 | Forbidden | The API key has insufficient permissions to perform the request. |
| 404 | Not Found | A resource is not found, or a requested symbol does not exist. |
| 409 | Conflict | A resource already exists. |
| 422 | Unprocessable Entity | The request is well formed, but we cannot or will not process the contained instructions. |
| 429 | Too Many Requests | API rate limit exceeded. |
| 500 | Internal Server Error | Unexpected condition encountered in our system. |
| 503 | Service Unavailable | Data gateway is offline or overloaded. |
| 504 | Gateway Timeout | Data gateway is available but other parts of our system are offline or overloaded. |
pub enum Error {
/// An invalid argument was passed.
BadArgument {
/// The name of the parameter to which the bad argument was passed.
param_name: String,
/// The description of how the argument was invalid.
desc: String,
},
/// An I/O error while reading or writing DBN or another encoding.
Io(std::io::Error),
/// An HTTP error.
Http(reqwest::Error),
/// An error from the Databento API.
Api(ApiError),
/// An error internal to the client.
Internal(String),
/// An error related to DBN encoding.
Dbn(dbn::Error),
/// An error when authentication failed.
Auth(String),
}
pub struct ApiError {
/// The request ID.
pub request_id: Option<String>,
/// The HTTP status code of the response.
pub status_code: reqwest::StatusCode,
/// The message from the Databento API.
pub message: String,
/// The link to documentation related to the error.
pub docs_url: Option<String>,
}
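A minimal sketch of inspecting these variants after a failed request; it only uses the fields shown above:
use databento::{Error, HistoricalClient};
let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
match client.metadata().list_datasets(None).await {
    Ok(datasets) => println!("{datasets:#?}"),
    // Errors reported by the Databento API carry an ApiError payload.
    Err(Error::Api(api_err)) => {
        eprintln!("API error {}: {}", api_err.status_code, api_err.message);
        if let Some(docs_url) = &api_err.docs_url {
            eprintln!("see {docs_url}");
        }
    }
    Err(other) => eprintln!("request failed: {other}"),
}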
Rate limits
Our historical API allows each IP address up to:
- 100 concurrent connections.
- 100 time series requests per second.
- 100 symbology requests per second.
- 20 metadata requests per second.
- 20 batch list jobs requests per second.
- 20 batch submit job requests per minute.
When a request exceeds a rate limit, an Error::Http will be returned with a 429 error code.
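A sketch of one way to handle this, assuming reqwest is available as a direct dependency for the StatusCode type:
use std::time::Duration;
use databento::{Error, HistoricalClient};
use reqwest::StatusCode;
let mut client =
    HistoricalClient::builder().key_from_env()?.build()?;
let datasets = loop {
    match client.metadata().list_datasets(None).await {
        // Back off briefly and retry when the rate limit is hit.
        Err(Error::Http(err))
            if err.status() == Some(StatusCode::TOO_MANY_REQUESTS) =>
        {
            tokio::time::sleep(Duration::from_secs(1)).await;
        }
        res => break res?,
    }
};
println!("{datasets:#?}");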
Size limits
There is no size limit for either stream or batch download requests. Batch download is more manageable for large datasets, so we recommend using batch download for requests over 5 GB.
You can also manage the size of your request by splitting it into
multiple, smaller requests. The historical API allows you to make stream and
batch download requests with time ranges specified up to nanosecond resolution.
You can also use the limit parameter in any request to limit the number of
data records returned from the service.
Batch download supports different
delivery methods which can be specified using the delivery parameter.
use std::num::NonZeroU64;
use databento::{
dbn::Schema, historical::batch::SubmitJobParams,
HistoricalClient, Symbols,
};
use time::macros::datetime;
let mut client =
HistoricalClient::builder().key_from_env()?.build()?;
let job = client
.batch()
.submit_job(
&SubmitJobParams::builder()
.dataset("GLBX.MDP3")
.symbols(Symbols::All)
.schema(Schema::Trades)
.date_time_range((
datetime!(2022-08-26 00:00:00 UTC),
datetime!(2022-09-28 00:00:00 UTC),
))
.limit(NonZeroU64::new(1000))
.build(),
)
.await?;
Metered pricing
Databento only charges for the data that you use. You can find rates (per MB) for the various datasets and estimate pricing on our Data catalog. We meter the data by its uncompressed size in binary encoding.
When you stream the data, you are billed incrementally for each outbound byte of data sent from our historical gateway. If your connection is interrupted while streaming our data and our historical gateway detects a connection timeout of over 5 seconds, it will immediately stop sending data and you will not be billed for the remainder of your request.
Duplicate streaming requests will incur repeated charges. If you intend to access the same data multiple times, we recommend using our batch download feature. When you make a batch download request, you are only billed once for the request and, subsequently, you can download the data from the Download center multiple times over 30 days for no additional charge.
You will only be billed for usage of time series data. Access to metadata, symbology, and account management is free.
Related: Billing management.
Versioning
Our historical API and its client libraries adopt MAJOR.MINOR.PATCH format
for version numbers. These version numbers conform to
semantic versioning. We are using major version 0 for
initial development, where our API is not considered stable.
Once we release major version 1, our public API will be stable. This means that
you will be able to upgrade minor or patch versions to pick up new functionality,
without breaking your integration.
For major versions after 1, we will provide support for previous versions for one year after the date of the subsequent major release.
For example, if version 2.0.0 is released on January 1, 2024, then all versions
1.x.y of the API and client libraries will be deprecated. However, they will
remain supported until January 1, 2025.
We may introduce backwards-compatible changes between minor versions in the form of:
- New data encodings
- Additional fields to existing data schemas
- Additional batch download customizations
Our Release notes will contain information about both breaking and backwards-compatible changes in each release.
Our API and official client libraries are kept in sync with same-day releases
for major versions. For instance, 1.x.y of the Rust client
library will use the same functionality found in any 1.x.y version of the Python client.
Related: Release notes.
HistoricalClient
To access Databento's historical API, first create an instance of HistoricalClient.
The API is exposed through four subclients:
- metadata
- timeseries
- symbology
- batch
Note that the API key can be passed as an argument, which is
not recommended for production applications.
Instead, you can use the historical::ClientBuilder through HistoricalClient::builder(), which includes a key_from_env method for setting key from the DATABENTO_API_KEY environment variable.
Parameters
gateway: Only Bo1 is supported. If using ClientBuilder, it defaults to Bo1.

impl ClientBuilder {
// Required
pub fn key(
self,
key: impl ToString,
) -> databento::Result<Self>;
pub fn key_from_env(self) -> databento::Result<Self>;
// Optional
pub fn gateway(mut self, gateway: HistoricalGateway) -> Self;
pub fn upgrade_policy(
mut self,
upgrade_policy: VersionUpgradePolicy,
) -> Self;
pub async fn build(self) -> databento::Result<Client>;
}
HistoricalClient::metadata::list_publishers
List all publisher ID mappings.
Use this method to list the mappings of publisher names to publisher IDs.
Returns
Vec<PublisherDetail>
A list of publisher details, where PublisherDetail is:
[
PublisherDetail {
publisher_id: 1,
dataset: "GLBX.MDP3",
venue: "GLBX",
description: "CME Globex MDP 3.0",
},
PublisherDetail {
publisher_id: 2,
dataset: "XNAS.ITCH",
venue: "XNAS",
description: "Nasdaq TotalView-ITCH",
},
PublisherDetail {
publisher_id: 3,
dataset: "XBOS.ITCH",
venue: "XBOS",
description: "Nasdaq BX TotalView-ITCH",
},
PublisherDetail {
publisher_id: 4,
dataset: "XPSX.ITCH",
venue: "XPSX",
description: "Nasdaq PSX TotalView-ITCH",
},
PublisherDetail {
publisher_id: 5,
dataset: "BATS.PITCH",
venue: "BATS",
description: "Cboe BZX Depth",
},
PublisherDetail {
publisher_id: 6,
dataset: "BATY.PITCH",
venue: "BATY",
description: "Cboe BYX Depth",
},
PublisherDetail {
publisher_id: 7,
dataset: "EDGA.PITCH",
venue: "EDGA",
description: "Cboe EDGA Depth",
},
PublisherDetail {
publisher_id: 8,
dataset: "EDGX.PITCH",
venue: "EDGX",
description: "Cboe EDGX Depth",
},
PublisherDetail {
publisher_id: 9,
dataset: "XNYS.PILLAR",
venue: "XNYS",
description: "NYSE Integrated",
},
PublisherDetail {
publisher_id: 10,
dataset: "XCIS.PILLAR",
venue: "XCIS",
description: "NYSE National Integrated",
},
PublisherDetail {
publisher_id: 11,
dataset: "XASE.PILLAR",
venue: "XASE",
description: "NYSE American Integrated",
},
PublisherDetail {
publisher_id: 12,
dataset: "XCHI.PILLAR",
venue: "XCHI",
description: "NYSE Texas Integrated",
},
PublisherDetail {
publisher_id: 13,
dataset: "XCIS.BBO",
venue: "XCIS",
description: "NYSE National BBO",
},
PublisherDetail {
publisher_id: 14,
dataset: "XCIS.TRADES",
venue: "XCIS",
description: "NYSE National Trades",
},
PublisherDetail {
publisher_id: 15,
dataset: "MEMX.MEMOIR",
venue: "MEMX",
description: "MEMX Memoir Depth",
},
PublisherDetail {
publisher_id: 16,
dataset: "EPRL.DOM",
venue: "EPRL",
description: "MIAX Pearl Depth",
},
PublisherDetail {
publisher_id: 17,
dataset: "XNAS.NLS",
venue: "FINN",
description: "FINRA/Nasdaq TRF Carteret",
},
PublisherDetail {
publisher_id: 18,
dataset: "XNAS.NLS",
venue: "FINC",
description: "FINRA/Nasdaq TRF Chicago",
},
PublisherDetail {
publisher_id: 19,
dataset: "XNYS.TRADES",
venue: "FINY",
description: "FINRA/NYSE TRF",
},
PublisherDetail {
publisher_id: 20,
dataset: "OPRA.PILLAR",
venue: "AMXO",
description: "OPRA - NYSE American Options",
},
PublisherDetail {
publisher_id: 21,
dataset: "OPRA.PILLAR",
venue: "XBOX",
description: "OPRA - BOX Options",
},
PublisherDetail {
publisher_id: 22,
dataset: "OPRA.PILLAR",
venue: "XCBO",
description: "OPRA - Cboe Options",
},
PublisherDetail {
publisher_id: 23,
dataset: "OPRA.PILLAR",
venue: "EMLD",
description: "OPRA - MIAX Emerald",
},
PublisherDetail {
publisher_id: 24,
dataset: "OPRA.PILLAR",
venue: "EDGO",
description: "OPRA - Cboe EDGX Options",
},
PublisherDetail {
publisher_id: 25,
dataset: "OPRA.PILLAR",
venue: "GMNI",
description: "OPRA - Nasdaq GEMX",
},
PublisherDetail {
publisher_id: 26,
dataset: "OPRA.PILLAR",
venue: "XISX",
description: "OPRA - Nasdaq ISE",
},
PublisherDetail {
publisher_id: 27,
dataset: "OPRA.PILLAR",
venue: "MCRY",
description: "OPRA - Nasdaq MRX",
},
PublisherDetail {
publisher_id: 28,
dataset: "OPRA.PILLAR",
venue: "XMIO",
description: "OPRA - MIAX Options",
},
PublisherDetail {
publisher_id: 29,
dataset: "OPRA.PILLAR",
venue: "ARCO",
description: "OPRA - NYSE Arca Options",
},
PublisherDetail {
publisher_id: 30,
dataset: "OPRA.PILLAR",
venue: "OPRA",
description: "OPRA - Options Price Reporting Authority",
},
PublisherDetail {
publisher_id: 31,
dataset: "OPRA.PILLAR",
venue: "MPRL",
description: "OPRA - MIAX Pearl",
},
PublisherDetail {
publisher_id: 32,
dataset: "OPRA.PILLAR",
venue: "XNDQ",
description: "OPRA - Nasdaq Options",
},
PublisherDetail {
publisher_id: 33,
dataset: "OPRA.PILLAR",
venue: "XBXO",
description: "OPRA - Nasdaq BX Options",
},
PublisherDetail {
publisher_id: 34,
dataset: "OPRA.PILLAR",
venue: "C2OX",
description: "OPRA - Cboe C2 Options",
},
PublisherDetail {
publisher_id: 35,
dataset: "OPRA.PILLAR",
venue: "XPHL",
description: "OPRA - Nasdaq PHLX",
},
PublisherDetail {
publisher_id: 36,
dataset: "OPRA.PILLAR",
venue: "BATO",
description: "OPRA - Cboe BZX Options",
},
PublisherDetail {
publisher_id: 37,
dataset: "OPRA.PILLAR",
venue: "MXOP",
description: "OPRA - MEMX Options",
},
PublisherDetail {
publisher_id: 38,
dataset: "IEXG.TOPS",
venue: "IEXG",
description: "IEX TOPS",
},
PublisherDetail {
publisher_id: 39,
dataset: "DBEQ.BASIC",
venue: "XCHI",
description: "DBEQ Basic - NYSE Texas",
},
PublisherDetail {
publisher_id: 40,
dataset: "DBEQ.BASIC",
venue: "XCIS",
description: "DBEQ Basic - NYSE National",
},
PublisherDetail {
publisher_id: 41,
dataset: "DBEQ.BASIC",
venue: "IEXG",
description: "DBEQ Basic - IEX",
},
PublisherDetail {
publisher_id: 42,
dataset: "DBEQ.BASIC",
venue: "EPRL",
description: "DBEQ Basic - MIAX Pearl",
},
PublisherDetail {
publisher_id: 43,
dataset: "ARCX.PILLAR",
venue: "ARCX",
description: "NYSE Arca Integrated",
},
PublisherDetail {
publisher_id: 44,
dataset: "XNYS.BBO",
venue: "XNYS",
description: "NYSE BBO",
},
PublisherDetail {
publisher_id: 45,
dataset: "XNYS.TRADES",
venue: "XNYS",
description: "NYSE Trades",
},
PublisherDetail {
publisher_id: 46,
dataset: "XNAS.QBBO",
venue: "XNAS",
description: "Nasdaq QBBO",
},
PublisherDetail {
publisher_id: 47,
dataset: "XNAS.NLS",
venue: "XNAS",
description: "Nasdaq Trades",
},
PublisherDetail {
publisher_id: 48,
dataset: "EQUS.PLUS",
venue: "XCHI",
description: "Databento US Equities Plus - NYSE Texas",
},
PublisherDetail {
publisher_id: 49,
dataset: "EQUS.PLUS",
venue: "XCIS",
description: "Databento US Equities Plus - NYSE National",
},
PublisherDetail {
publisher_id: 50,
dataset: "EQUS.PLUS",
venue: "IEXG",
description: "Databento US Equities Plus - IEX",
},
PublisherDetail {
publisher_id: 51,
dataset: "EQUS.PLUS",
venue: "EPRL",
description: "Databento US Equities Plus - MIAX Pearl",
},
PublisherDetail {
publisher_id: 52,
dataset: "EQUS.PLUS",
venue: "XNAS",
description: "Databento US Equities Plus - Nasdaq",
},
PublisherDetail {
publisher_id: 53,
dataset: "EQUS.PLUS",
venue: "XNYS",
description: "Databento US Equities Plus - NYSE",
},
PublisherDetail {
publisher_id: 54,
dataset: "EQUS.PLUS",
venue: "FINN",
description: "Databento US Equities Plus - FINRA/Nasdaq TRF Carteret",
},
PublisherDetail {
publisher_id: 55,
dataset: "EQUS.PLUS",
venue: "FINY",
description: "Databento US Equities Plus - FINRA/NYSE TRF",
},
PublisherDetail {
publisher_id: 56,
dataset: "EQUS.PLUS",
venue: "FINC",
description: "Databento US Equities Plus - FINRA/Nasdaq TRF Chicago",
},
PublisherDetail {
publisher_id: 57,
dataset: "IFEU.IMPACT",
venue: "IFEU",
description: "ICE Europe Commodities",
},
PublisherDetail {
publisher_id: 58,
dataset: "NDEX.IMPACT",
venue: "NDEX",
description: "ICE Endex",
},
PublisherDetail {
publisher_id: 59,
dataset: "DBEQ.BASIC",
venue: "DBEQ",
description: "Databento US Equities Basic - Consolidated",
},
PublisherDetail {
publisher_id: 60,
dataset: "EQUS.PLUS",
venue: "EQUS",
description: "EQUS Plus - Consolidated",
},
PublisherDetail {
publisher_id: 61,
dataset: "OPRA.PILLAR",
venue: "SPHR",
description: "OPRA - MIAX Sapphire",
},
PublisherDetail {
publisher_id: 62,
dataset: "EQUS.ALL",
venue: "XCHI",
description: "Databento US Equities (All Feeds) - NYSE Texas",
},
PublisherDetail {
publisher_id: 63,
dataset: "EQUS.ALL",
venue: "XCIS",
description: "Databento US Equities (All Feeds) - NYSE National",
},
PublisherDetail {
publisher_id: 64,
dataset: "EQUS.ALL",
venue: "IEXG",
description: "Databento US Equities (All Feeds) - IEX",
},
PublisherDetail {
publisher_id: 65,
dataset: "EQUS.ALL",
venue: "EPRL",
description: "Databento US Equities (All Feeds) - MIAX Pearl",
},
PublisherDetail {
publisher_id: 66,
dataset: "EQUS.ALL",
venue: "XNAS",
description: "Databento US Equities (All Feeds) - Nasdaq",
},
PublisherDetail {
publisher_id: 67,
dataset: "EQUS.ALL",
venue: "XNYS",
description: "Databento US Equities (All Feeds) - NYSE",
},
PublisherDetail {
publisher_id: 68,
dataset: "EQUS.ALL",
venue: "FINN",
description: "Databento US Equities (All Feeds) - FINRA/Nasdaq TRF Carteret",
},
PublisherDetail {
publisher_id: 69,
dataset: "EQUS.ALL",
venue: "FINY",
description: "Databento US Equities (All Feeds) - FINRA/NYSE TRF",
},
PublisherDetail {
publisher_id: 70,
dataset: "EQUS.ALL",
venue: "FINC",
description: "Databento US Equities (All Feeds) - FINRA/Nasdaq TRF Chicago",
},
PublisherDetail {
publisher_id: 71,
dataset: "EQUS.ALL",
venue: "BATS",
description: "Databento US Equities (All Feeds) - Cboe BZX",
},
PublisherDetail {
publisher_id: 72,
dataset: "EQUS.ALL",
venue: "BATY",
description: "Databento US Equities (All Feeds) - Cboe BYX",
},
PublisherDetail {
publisher_id: 73,
dataset: "EQUS.ALL",
venue: "EDGA",
description: "Databento US Equities (All Feeds) - Cboe EDGA",
},
PublisherDetail {
publisher_id: 74,
dataset: "EQUS.ALL",
venue: "EDGX",
description: "Databento US Equities (All Feeds) - Cboe EDGX",
},
PublisherDetail {
publisher_id: 75,
dataset: "EQUS.ALL",
venue: "XBOS",
description: "Databento US Equities (All Feeds) - Nasdaq BX",
},
PublisherDetail {
publisher_id: 76,
dataset: "EQUS.ALL",
venue: "XPSX",
description: "Databento US Equities (All Feeds) - Nasdaq PSX",
},
PublisherDetail {
publisher_id: 77,
dataset: "EQUS.ALL",
venue: "MEMX",
description: "Databento US Equities (All Feeds) - MEMX",
},
PublisherDetail {
publisher_id: 78,
dataset: "EQUS.ALL",
venue: "XASE",
description: "Databento US Equities (All Feeds) - NYSE American",
},
PublisherDetail {
publisher_id: 79,
dataset: "EQUS.ALL",
venue: "ARCX",
description: "Databento US Equities (All Feeds) - NYSE Arca",
},
PublisherDetail {
publisher_id: 80,
dataset: "EQUS.ALL",
venue: "LTSE",
description: "Databento US Equities (All Feeds) - Long-Term Stock Exchange",
},
PublisherDetail {
publisher_id: 81,
dataset: "XNAS.BASIC",
venue: "XNAS",
description: "Nasdaq Basic - Nasdaq",
},
PublisherDetail {
publisher_id: 82,
dataset: "XNAS.BASIC",
venue: "FINN",
description: "Nasdaq Basic - FINRA/Nasdaq TRF Carteret",
},
PublisherDetail {
publisher_id: 83,
dataset: "XNAS.BASIC",
venue: "FINC",
description: "Nasdaq Basic - FINRA/Nasdaq TRF Chicago",
},
PublisherDetail {
publisher_id: 84,
dataset: "IFEU.IMPACT",
venue: "XOFF",
description: "ICE Europe - Off-Market Trades",
},
PublisherDetail {
publisher_id: 85,
dataset: "NDEX.IMPACT",
venue: "XOFF",
description: "ICE Endex - Off-Market Trades",
},
PublisherDetail {
publisher_id: 86,
dataset: "XNAS.NLS",
venue: "XBOS",
description: "Nasdaq NLS - Nasdaq BX",
},
PublisherDetail {
publisher_id: 87,
dataset: "XNAS.NLS",
venue: "XPSX",
description: "Nasdaq NLS - Nasdaq PSX",
},
PublisherDetail {
publisher_id: 88,
dataset: "XNAS.BASIC",
venue: "XBOS",
description: "Nasdaq Basic - Nasdaq BX",
},
PublisherDetail {
publisher_id: 89,
dataset: "XNAS.BASIC",
venue: "XPSX",
description: "Nasdaq Basic - Nasdaq PSX",
},
PublisherDetail {
publisher_id: 90,
dataset: "EQUS.SUMMARY",
venue: "EQUS",
description: "Databento Equities Summary",
},
PublisherDetail {
publisher_id: 91,
dataset: "XCIS.TRADESBBO",
venue: "XCIS",
description: "NYSE National Trades and BBO",
},
PublisherDetail {
publisher_id: 92,
dataset: "XNYS.TRADESBBO",
venue: "XNYS",
description: "NYSE Trades and BBO",
},
PublisherDetail {
publisher_id: 93,
dataset: "XNAS.BASIC",
venue: "EQUS",
description: "Nasdaq Basic - Consolidated",
},
PublisherDetail {
publisher_id: 94,
dataset: "EQUS.ALL",
venue: "EQUS",
description: "Databento US Equities (All Feeds) - Consolidated",
},
PublisherDetail {
publisher_id: 95,
dataset: "EQUS.MINI",
venue: "EQUS",
description: "Databento US Equities Mini",
},
PublisherDetail {
publisher_id: 96,
dataset: "XNYS.TRADES",
venue: "EQUS",
description: "NYSE Trades - Consolidated",
},
PublisherDetail {
publisher_id: 97,
dataset: "IFUS.IMPACT",
venue: "IFUS",
description: "ICE Futures US",
},
PublisherDetail {
publisher_id: 98,
dataset: "IFUS.IMPACT",
venue: "XOFF",
description: "ICE Futures US - Off-Market Trades",
},
PublisherDetail {
publisher_id: 99,
dataset: "IFLL.IMPACT",
venue: "IFLL",
description: "ICE Europe Financials",
},
PublisherDetail {
publisher_id: 100,
dataset: "IFLL.IMPACT",
venue: "XOFF",
description: "ICE Europe Financials - Off-Market Trades",
},
PublisherDetail {
publisher_id: 101,
dataset: "XEUR.EOBI",
venue: "XEUR",
description: "Eurex EOBI",
},
PublisherDetail {
publisher_id: 102,
dataset: "XEEE.EOBI",
venue: "XEEE",
description: "European Energy Exchange EOBI",
},
PublisherDetail {
publisher_id: 103,
dataset: "XEUR.EOBI",
venue: "XOFF",
description: "Eurex EOBI - Off-Market Trades",
},
PublisherDetail {
publisher_id: 104,
dataset: "XEEE.EOBI",
venue: "XOFF",
description: "European Energy Exchange EOBI - Off-Market Trades",
},
]
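A request along these lines produces the listing above (a minimal sketch; list_publishers is assumed to take no parameters):
use databento::HistoricalClient;
let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let publishers = client.metadata().list_publishers().await?;
println!("{publishers:#?}");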
HistoricalClient::metadata::list_datasets
List all available dataset IDs on Databento.
Use this method to list the dataset IDs (string identifiers) of all available datasets, so you can use
other methods which take the dataset parameter.
Constants for dataset IDs are also available in databento::dbn::datasets.
Parameters
If None, datasets for all available dates are listed.
Returns
Vec<String>
A list of dataset IDs.
use databento::HistoricalClient;
use time::{macros::date, Duration};
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let dataset_codes = client
.metadata()
.list_datasets(Some(date!(2024 - 01 - 28).into()))
.await?;
println!("{dataset_codes:#?}");
[
"ARCX.PILLAR",
"BATS.PITCH",
"BATY.PITCH",
"DBEQ.BASIC",
"EDGA.PITCH",
"EDGX.PITCH",
"EPRL.DOM",
"EQUS.MINI",
"GLBX.MDP3",
"IEXG.TOPS",
"IFEU.IMPACT",
"IFUS.IMPACT",
"MEMX.MEMOIR",
"NDEX.IMPACT",
"OPRA.PILLAR",
"XASE.PILLAR",
"XBOS.ITCH",
"XCHI.PILLAR",
"XCIS.TRADESBBO",
"XNAS.ITCH",
"XNYS.PILLAR",
"XPSX.ITCH",
]
HistoricalClient::metadata::list_schemas
List all available schemas for a dataset.
Parameters
Returns
Vec<Schema>
A list of available data schemas for the dataset.
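A minimal sketch of calling it, assuming list_schemas takes the dataset ID directly (check the crate documentation for the exact parameter type):
use databento::HistoricalClient;
let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
// Assumption: the dataset ID is passed directly.
let schemas = client.metadata().list_schemas("GLBX.MDP3").await?;
println!("{schemas:#?}");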
HistoricalClient::metadata::list_fields
List all fields for a schema and encoding.
Parameters
Returns
Vec<FieldDetail>
A list of field details objects, where FieldDetail is:
use databento::{
dbn::{Encoding, Schema},
historical::metadata::ListFieldsParams,
HistoricalClient,
};
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let fields = client
.metadata()
.list_fields(
&ListFieldsParams::builder()
.schema(Schema::Trades)
.encoding(Encoding::Dbn)
.build(),
)
.await?;
println!("{fields:#?}");
[
FieldDetail {
name: "length",
type_name: "uint8_t",
},
FieldDetail {
name: "rtype",
type_name: "uint8_t",
},
FieldDetail {
name: "publisher_id",
type_name: "uint16_t",
},
FieldDetail {
name: "instrument_id",
type_name: "uint32_t",
},
FieldDetail {
name: "ts_event",
type_name: "uint64_t",
},
FieldDetail {
name: "price",
type_name: "int64_t",
},
FieldDetail {
name: "size",
type_name: "uint32_t",
},
FieldDetail {
name: "action",
type_name: "char",
},
FieldDetail {
name: "side",
type_name: "char",
},
FieldDetail {
name: "flags",
type_name: "uint8_t",
},
FieldDetail {
name: "depth",
type_name: "uint8_t",
},
FieldDetail {
name: "ts_recv",
type_name: "uint64_t",
},
FieldDetail {
name: "ts_in_delta",
type_name: "int32_t",
},
FieldDetail {
name: "sequence",
type_name: "uint32_t",
},
]
HistoricalClient::metadata::list_unit_prices
List unit prices for each data schema in US dollars per gigabyte.
Parameters
Returns
Vec<UnitPricesForMode>
A list of objects with the unit prices for a feed mode, where UnitPricesForMode is:
[
UnitPricesForMode {
mode: Historical,
unit_prices: {
Cbbo1S: 2.0,
Statistics: 11.0,
Definition: 5.0,
Status: 5.0,
Cmbp1: 0.16,
Tcbbo: 210.0,
Ohlcv1D: 600.0,
Cbbo1M: 2.0,
Trades: 280.0,
Ohlcv1S: 280.0,
Ohlcv1M: 280.0,
Ohlcv1H: 600.0,
},
},
UnitPricesForMode {
mode: HistoricalStreaming,
unit_prices: {
Trades: 280.0,
Statistics: 11.0,
Ohlcv1D: 600.0,
Definition: 5.0,
Status: 5.0,
Ohlcv1S: 280.0,
Cbbo1S: 2.0,
Tcbbo: 210.0,
Cmbp1: 0.16,
Cbbo1M: 2.0,
Ohlcv1M: 280.0,
Ohlcv1H: 600.0,
},
},
UnitPricesForMode {
mode: Live,
unit_prices: {
Status: 6.0,
Ohlcv1H: 720.0,
Ohlcv1D: 720.0,
Definition: 6.0,
Trades: 336.0,
Cmbp1: 0.2,
Statistics: 13.2,
Tcbbo: 252.0,
Ohlcv1M: 336.0,
Cbbo1S: 2.4,
Ohlcv1S: 336.0,
Cbbo1M: 2.4,
},
},
]
HistoricalClient::metadata::get_dataset_condition
Get the dataset condition from Databento.
Use this method to discover data availability and quality.
Parameters
Returns
Vec<DatasetConditionDetail>
A list of conditions per date, where DatasetConditionDetail is:
last_modified_date will be None if condition is Missing.

Possible values for condition:
- Available: the data is available with no known issues
- Degraded: the data is available, but there may be missing data or other correctness issues
- Pending: the data is not yet available, but may be available soon
- Missing: the data is not available
use databento::{
historical::metadata::GetDatasetConditionParams,
HistoricalClient,
};
use time::macros::date;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let conditions = client
.metadata()
.get_dataset_condition(
&GetDatasetConditionParams::builder()
.dataset("GLBX.MDP3")
.date_range((
date!(2019 - 06 - 06),
date!(2019 - 06 - 10),
))
.build(),
)
.await?;
println!("{conditions:#?}");
[
DatasetConditionDetail {
date: 2019-06-06,
condition: Available,
last_modified_date: Some(
2024-05-13,
),
},
DatasetConditionDetail {
date: 2019-06-07,
condition: Available,
last_modified_date: Some(
2024-05-13,
),
},
DatasetConditionDetail {
date: 2019-06-09,
condition: Available,
last_modified_date: Some(
2024-05-13,
),
},
DatasetConditionDetail {
date: 2019-06-10,
condition: Available,
last_modified_date: Some(
2024-05-13,
),
},
]
HistoricalClient::metadata::get_dataset_range
Get the available range for the dataset given the user's entitlements.
Use this method to discover data availability.
The start and end values in the response can be used with the timeseries::get_range and batch::submit_job endpoints.
Parameters
Returns
DatasetRange
The available range for the dataset.
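A request along these lines returns a range like the output below (a minimal sketch; get_dataset_range is assumed to take the dataset ID directly):
use databento::HistoricalClient;
let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
// Assumption: the dataset ID is passed directly.
let range = client.metadata().get_dataset_range("GLBX.MDP3").await?;
println!("{range:#?}");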
DatasetRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
range_by_schema: {
Ohlcv1M: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Trades: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Definition: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Statistics: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Status: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Tbbo: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Mbp10: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Imbalance: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Ohlcv1H: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:00:00.0 +00:00:00,
},
Bbo1M: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Mbo: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Mbp1: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Ohlcv1D: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-18 0:00:00.0 +00:00:00,
},
Bbo1S: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
Ohlcv1S: DateTimeRange {
start: 2018-05-01 0:00:00.0 +00:00:00,
end: 2025-10-20 20:30:00.0 +00:00:00,
},
},
}
HistoricalClient::metadata::get_record_count
Get the record count of the time series data query.
This method may not be accurate for time ranges that are not discrete multiples of 10 minutes, potentially over-reporting the number of records in such cases. The definition schema is only accurate for discrete multiples of 24 hours.
Parameters
Returns
u64
The record count.
use databento::{
dbn::Schema, historical::metadata::GetRecordCountParams,
HistoricalClient,
};
use time::macros::datetime;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let record_count = client
.metadata()
.get_record_count(
&GetRecordCountParams::builder()
.dataset("GLBX.MDP3")
.date_time_range((
datetime!(2022-01-06 12:00 UTC),
datetime!(2022-03-10 00:00 UTC),
))
.symbols("ESM2")
.schema(Schema::Mbo)
.build(),
)
.await?;
println!("{record_count}");
HistoricalClient::metadata::get_billable_size
Get the billable uncompressed raw binary size for historical streaming or batched files.
This method may not be accurate for time ranges that are not discrete multiples of 10 minutes, potentially over-reporting the size in such cases. The definition schema is only accurate for discrete multiples of 24 hours.
Info: The amount billed will be based on the actual amount of bytes sent; see our pricing documentation for more details.
Parameters
Returns
u64
The size in number of bytes used for billing.
use databento::{
dbn::Schema, historical::metadata::GetBillableSizeParams,
HistoricalClient,
};
use time::macros::datetime;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let billable_size = client
.metadata()
.get_billable_size(
&GetBillableSizeParams::builder()
.dataset("GLBX.MDP3")
.date_time_range((
datetime!(2022-06-06 00:00 UTC),
datetime!(2022-06-10 12:10 UTC),
))
.symbols("ESM2")
.schema(Schema::Trades)
.build(),
)
.await?;
println!("{billable_size}");
HistoricalClient::metadata::get_cost
Get the cost in US dollars for a historical streaming or batch download request. This cost respects any discounts provided by flat rate plans.
This method may not be accurate for time ranges that are not discrete multiples of 10 minutes, potentially over-reporting the cost in such cases. The definition schema is only accurate for discrete multiples of 24 hours.
Info: The amount billed will be based on the actual amount of bytes sent; see our pricing documentation for more details.
Parameters
Returns
f64
The cost in US dollars.
use databento::{
dbn::Schema, historical::metadata::GetCostParams,
HistoricalClient,
};
use time::macros::datetime;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let cost = client
.metadata()
.get_cost(
&GetCostParams::builder()
.dataset("GLBX.MDP3")
.date_time_range((
datetime!(2022-06-06 00:00 UTC),
datetime!(2022-06-10 12:10 UTC),
))
.symbols("ESM2")
.schema(Schema::Trades)
.build(),
)
.await?;
println!("{cost:.4}");
HistoricalClient::timeseries::get_range
Makes a streaming request for time series data from Databento.
This is the primary method for getting historical market data, instrument definitions, and status data directly into your application.
This method returns an async decoder. To persist the data immediately, use timeseries::get_range_to_file. For large requests, consider using batch::submit_job instead.
Parameters
Returns
An AsyncDbnDecoder object for incrementally decoding the records from the stream.
A full list of fields for each schema is available through metadata::list_fields.
use std::num::NonZeroU64;
use databento::{
dbn::{Schema, TradeMsg},
historical::timeseries::GetRangeParams,
HistoricalClient,
};
use time::macros::datetime;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
.timeseries()
.get_range(
&GetRangeParams::builder()
.dataset("GLBX.MDP3")
.date_time_range((
datetime!(2022-06-06 00:00 UTC),
datetime!(2022-06-10 12:10 UTC),
))
.symbols("ESM2")
.schema(Schema::Trades)
.limit(NonZeroU64::new(1))
.build(),
)
.await?;
let trade = decoder.decode_record::<TradeMsg>().await?.unwrap();
println!("{trade:#?}");
TradeMsg {
hd: RecordHeader {
length: 12,
rtype: Mbp0,
publisher_id: GlbxMdp3Glbx,
instrument_id: 3403,
ts_event: 1654473600070033767,
},
price: 4108.500000000,
size: 1,
action: 'T',
side: 'A',
flags: 0,
depth: 0,
ts_recv: 1654473600070314216,
ts_in_delta: 18681,
sequence: 157862,
}
HistoricalClient::timeseries::get_range_to_file
Makes a streaming request for time series data from Databento.
This is the primary method for getting historical market data, instrument definitions, and status data directly into your application.
This method persists the stream to a file at the given path before returning an async decoder on that file. For large requests, consider using batch::submit_job instead.
Parameters
Returns
An AsyncDbnDecoder object for incrementally decoding the records from the file.
A full list of fields for each schema is available through metadata::list_fields.
use std::num::NonZeroU64;
use databento::{
dbn::{Schema, TradeMsg},
historical::timeseries::GetRangeToFileParams,
HistoricalClient,
};
use time::macros::datetime;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
.timeseries()
.get_range_to_file(
&GetRangeToFileParams::builder()
.dataset("GLBX.MDP3")
.date_time_range((
datetime!(2022-06-06 00:00 UTC),
datetime!(2022-06-10 12:10 UTC),
))
.symbols("ESM2")
.schema(Schema::Trades)
.limit(NonZeroU64::new(1))
.path("ESM2_20220606-20220610.dbn.zst")
.build(),
)
.await?;
let trade = decoder.decode_record::<TradeMsg>().await?.unwrap();
println!("{trade:#?}");
TradeMsg {
hd: RecordHeader {
length: 12,
rtype: Mbp0,
publisher_id: GlbxMdp3Glbx,
instrument_id: 3403,
ts_event: 1654473600070033767,
},
price: 4108.500000000,
size: 1,
action: 'T',
side: 'A',
flags: 0,
depth: 0,
ts_recv: 1654473600070314216,
ts_in_delta: 18681,
sequence: 157862,
}
HistoricalClient::symbology::resolve
Resolve a list of symbols from an input symbology type to an output symbology type.
Take, for example, a raw symbol to an instrument ID: ESM2 → 3403.
Parameters
Returns
Resolution
The results for the symbology resolution.
Can be converted to a TsSymbolMap via the CreateSymbolMap() method.
use databento::{
historical::symbology::ResolveParams, HistoricalClient,
};
use time::macros::date;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let resolution = client
.symbology()
.resolve(
&ResolveParams::builder()
.dataset("GLBX.MDP3")
.date_range((
date!(2022 - 06 - 01),
date!(2022 - 06 - 30),
))
.symbols("ESM2")
.build(),
)
.await?;
println!("{resolution:#?}");
Batch downloads
Batch downloads allow you to download flat files directly from within your portal. For more information, see Streaming vs. batch download.
HistoricalClient::batch::submit_job
Make a batch download job request for flat files.
Once a request is submitted, our system processes the request and prepares the batch files in the background. The status of your request and the files can be accessed from the Download center from your user portal or downloaded with batch::download.
This method takes longer than a streaming request, but is advantageous for larger requests as it supports delivery mechanisms that allow multiple accesses of the data without additional cost for each subsequent download after the first.
Related: batch::list_jobs.
Parameters
Returns
BatchJob
The description of the submitted batch job.
Several fields are None until the job is done processing. Only Download delivery is supported at this time.

use databento::{
dbn::Schema, historical::batch::SubmitJobParams,
HistoricalClient,
};
use time::macros::datetime;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let job = client
.batch()
.submit_job(
&SubmitJobParams::builder()
.dataset("GLBX.MDP3")
.date_time_range((
datetime!(2022-06-06 12:00 UTC),
datetime!(2022-06-10 00:00 UTC),
))
.symbols("ESM2")
.schema(Schema::Trades)
.build(),
)
.await?;
println!("{job:#?}");
BatchJob {
id: "GLBX-20220720-BTW9J5HY5C",
user_id: Some(
"46PCMCVF",
),
cost_usd: None,
dataset: "GLBX.MDP3",
symbols: Symbols(
[
"ESM2",
],
),
stype_in: RawSymbol,
stype_out: InstrumentId,
schema: Trades,
start: 2022-06-06 12:00:00.0 +00:00:00,
end: 2022-06-10 0:00:00.0 +00:00:00,
limit: None,
encoding: Dbn,
compression: Zstd,
pretty_px: false,
pretty_ts: false,
map_symbols: false,
split_duration: Some(Day),
split_size: None,
delivery: Download,
record_count: None,
billed_size: None,
actual_size: None,
package_size: None,
state: Queued,
ts_received: 2023-07-28 21:44:15.77437 +00:00:00,
ts_queued: None,
ts_process_start: None,
ts_process_done: None,
ts_expiration: None,
}
HistoricalClient::batch::list_jobs
List batch job details for the user account.
The job details will be sorted in order of ts_received.
Related: Download center.
Parameters
Returns
Vec<BatchJob>
A list of batch job details.
use databento::{
historical::batch::{JobState, ListJobsParams},
HistoricalClient,
};
use time::macros::datetime;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let jobs = client
.batch()
.list_jobs(
&ListJobsParams::builder()
.states(vec![
JobState::Queued,
JobState::Processing,
JobState::Done,
])
.since(datetime!(2025-01-01 00:00 UTC))
.build(),
)
.await?;
println!("{jobs:#?}");
[
BatchJob {
id: "XNAS-20230704-NMN5T38NUD",
user_id: Some(
"NBPDLF33",
),
cost_usd: Some(
0.00075096637011,
),
dataset: "XNAS.ITCH",
symbols: Symbols(
[
"QQQ",
],
),
stype_in: RawSymbol,
stype_out: InstrumentId,
schema: Ohlcv1M,
start: 2023-06-01 0:00:00.0 +00:00:00,
end: 2023-06-02 0:00:00.0 +00:00:00,
limit: None,
encoding: Csv,
compression: None,
pretty_px: false,
pretty_ts: false,
map_symbols: false,
split_duration: Some(Day),
split_size: None,
delivery: Download,
record_count: Some(
1267,
),
billed_size: Some(
47432,
),
actual_size: Some(
71023,
),
package_size: Some(
74450,
),
state: Done,
ts_received: 2023-07-04 12:22:30.06656 +00:00:00,
ts_queued: Some(
2023-07-04 12:22:30.352526 +00:00:00,
),
ts_process_start: Some(
2023-07-04 12:22:44.992907 +00:00:00,
),
ts_process_done: Some(
2023-07-04 12:22:45.93711 +00:00:00,
),
ts_expiration: Some(
2023-08-03 12:22:45.93711 +00:00:00,
),
},
...
]
HistoricalClient::batch::list_files
List files for a batch job.
This will include all data files and support files.
Related: Download center.
Parameters
Returns
Vec<BatchFileDesc>
The file details for the batch job.
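A request along these lines returns the file details shown below (a minimal sketch; list_files is assumed to take the batch job ID directly):
use databento::HistoricalClient;
let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
// Assumption: the job ID is passed directly.
let files = client.batch().list_files("XNAS-20250108-VVS57U5PD8").await?;
println!("{files:#?}");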
[
BatchFileDesc {
filename: "manifest.json",
size: 1889,
hash: "sha256:9f43e431be88c403e73ce244bb2e94b293c9c05ac74d93a00443311fc0c9ef09",
urls: {
"https": "https://api.databento.com/v0/batch/download/NBPDLF33/XNAS-20250108-VVS57U5PD8/manifest.json",
"ftp": "ftp://ftp.databento.com/NBPDLF33/XNAS-20250108-VVS57U5PD8/manifest.json",
},
},
BatchFileDesc {
filename: "condition.json",
size: 122,
hash: "sha256:43dba9f90ba29334f233de4f76541f3d78f5378ed4baff0227673a680fde95d7",
urls: {
"ftp": "ftp://ftp.databento.com/NBPDLF33/XNAS-20250108-VVS57U5PD8/condition.json",
"https": "https://api.databento.com/v0/batch/download/NBPDLF33/XNAS-20250108-VVS57U5PD8/condition.json",
},
},
BatchFileDesc {
filename: "metadata.json",
size: 699,
hash: "sha256:001a0e2b8e285875b8f1ac1aa5fa3dfadc39b4b92f664b12d99734c6a1bae148",
urls: {
"https": "https://api.databento.com/v0/batch/download/NBPDLF33/XNAS-20250108-VVS57U5PD8/metadata.json",
"ftp": "ftp://ftp.databento.com/NBPDLF33/XNAS-20250108-VVS57U5PD8/metadata.json",
},
},
BatchFileDesc {
filename: "symbology.json",
size: 1753497,
hash: "sha256:3f3908205ec8b24def7cb3589d9f1be523ec64b4ac63be1c706d8dfca2051d78",
urls: {
"https": "https://api.databento.com/v0/batch/download/NBPDLF33/XNAS-20250108-VVS57U5PD8/symbology.json",
"ftp": "ftp://ftp.databento.com/NBPDLF33/XNAS-20250108-VVS57U5PD8/symbology.json",
},
},
BatchFileDesc {
filename: "xnas-itch-20250106.imbalance.dbn.zst",
size: 88237480,
hash: "sha256:7d0945aa1f04dad3e263237dcbdb7529ce7f0f41d902e468bb99d686754ce599",
urls: {
"https": "https://api.databento.com/v0/batch/download/NBPDLF33/XNAS-20250108-VVS57U5PD8/xnas-itch-20250106.imbalance.dbn.zst",
"ftp": "ftp://ftp.databento.com/NBPDLF33/XNAS-20250108-VVS57U5PD8/xnas-itch-20250106.imbalance.dbn.zst",
},
},
]
HistoricalClient::batch::download
Download a batch job or a specific file to {output_dir}/{job_id}/.
Will automatically create any necessary directories if they do not already exist. Verifies the checksum of each downloaded file and will retry on a download failure.
Related: Download center.
Parameters
Returns
Vec<PathBuf>
A list of paths to the downloaded files.
use std::path::PathBuf;
use databento::{
historical::batch::DownloadParams, HistoricalClient,
};
let mut params = DownloadParams::builder()
.output_dir(PathBuf::from("my_data"))
.job_id("XNAS-20250108-VVS57U5PD8")
.build();
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
// Download all files for the batch job
let files = client.batch().download(&params).await?;
println!("{files:#?}");
// Download a specific file from the batch job
params.filename_to_download = Some("metadata.json".to_owned());
let file = client.batch().download(&params).await?;
assert_eq!(file.len(), 1);
println!("{}", file[0].display());
[
"my_data/XNAS-20230704-NMN5T38NUD/manifest.json",
"my_data/XNAS-20230704-NMN5T38NUD/condition.json",
"my_data/XNAS-20230704-NMN5T38NUD/metadata.json",
"my_data/XNAS-20230704-NMN5T38NUD/symbology.json",
"my_data/XNAS-20230704-NMN5T38NUD/xnas-itch-20230601.ohlcv-1m.csv",
]
AsyncDbnDecoder
An object for working with DBN-encoded data. Typically this object is created when performing historical requests. However, it can be created directly using DBN data on disk or in memory using the provided associated functions:
- AsyncDbnDecoder::new
- AsyncDbnDecoder::from_file
- AsyncDbnDecoder::from_zstd_file
- AsyncDbnDecoder::with_zstd
- AsyncDbnDecoder::with_zstd_buffer
See also: The crate documentation for a comprehensive list of methods and implemented traits.
AsyncDbnDecoder::new
Create a new decoder from a DBN stream that implements AsyncReadExt. Immediately decodes the DBN Metadata.
Parameters
Returns
An AsyncDbnDecoder object.
This function will return an error if it is unable to parse the metadata in reader or the input is encoded in a newer version of DBN.
use std::io::SeekFrom;
use databento::{
dbn::{
decode::{AsyncDbnDecoder, DbnMetadata},
encode::{
dbn::AsyncEncoder as AsyncDbnEncoder,
AsyncEncodeRecord,
},
Schema, TradeMsg,
},
historical::timeseries::GetRangeParams,
HistoricalClient,
};
use time::macros::datetime;
use tokio::{fs::OpenOptions, io::AsyncSeekExt};
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
.timeseries()
.get_range(
&GetRangeParams::builder()
.dataset("GLBX.MDP3")
.date_time_range((
datetime!(2022-06-06 00:00 UTC),
datetime!(2022-06-07 00:00 UTC),
))
.symbols("ESM2")
.schema(Schema::Trades)
.build(),
)
.await?;
// Save streamed data to .dbn
let path = "GLBX-ESM2-20220606.trades.dbn.zst";
let mut file = OpenOptions::new()
.read(true)
.write(true)
.create(true)
.truncate(true)
.open(path)
.await?;
let mut encoder =
AsyncDbnEncoder::new(&mut file, decoder.metadata()).await?;
while let Some(trade) =
decoder.decode_record::<TradeMsg>().await?
{
encoder.encode_record(trade).await?;
}
encoder.flush().await?;
// Open saved data
file.seek(SeekFrom::Start(0)).await?;
let mut decoder = AsyncDbnDecoder::new(file).await?;
for _ in 0..5 {
let trade =
decoder.decode_record::<TradeMsg>().await?.unwrap();
println!("{trade:?}");
}
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600070033767 }, price: 4108.500000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473600070314216, ts_in_delta: 18681, sequence: 157862 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600089830441 }, price: 4108.250000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473600090544076, ts_in_delta: 18604, sequence: 157922 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600807018955 }, price: 4108.250000000, size: 4, action: 'T', side: 'B', flags: 0, depth: 0, ts_recv: 1654473600807324169, ts_in_delta: 18396, sequence: 158072 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473601317385867 }, price: 4108.000000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473601317722490, ts_in_delta: 22043, sequence: 158111 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473601317385867 }, price: 4108.000000000, size: 7, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473601317736158, ts_in_delta: 17280, sequence: 158112 }
AsyncDbnDecoder::from_file
Create a new decoder from a DBN file. If the file is zstd-compressed, use from_zstd_file instead.
Parameters
Returns
An AsyncDbnDecoder object.
This function will return an error if it is unable to parse the metadata in the file or the input is encoded in a newer version of DBN.
use databento::{
dbn::{
decode::{AsyncDbnDecoder, DbnMetadata},
encode::{
dbn::AsyncEncoder as AsyncDbnEncoder,
AsyncEncodeRecord,
},
Schema, TradeMsg,
},
historical::timeseries::GetRangeParams,
HistoricalClient,
};
use time::macros::datetime;
use tokio::fs::File;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
.timeseries()
.get_range(
&GetRangeParams::builder()
.dataset("GLBX.MDP3")
.date_time_range((
datetime!(2022-06-06 00:00 UTC),
datetime!(2022-06-07 00:00 UTC),
))
.symbols("ESM2")
.schema(Schema::Trades)
.build(),
)
.await?;
// Save streamed data to .dbn
let path = "GLBX-ESM2-20220606.trades.dbn.zst";
let file = File::create(path).await?;
let mut encoder =
AsyncDbnEncoder::new(file, decoder.metadata()).await?;
while let Some(trade) =
decoder.decode_record::<TradeMsg>().await?
{
encoder.encode_record(trade).await?;
}
encoder.flush().await?;
// Open saved data
let mut decoder = AsyncDbnDecoder::from_file(path).await?;
for _ in 0..5 {
let trade =
decoder.decode_record::<TradeMsg>().await?.unwrap();
println!("{trade:?}");
}
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600070033767 }, price: 4108.500000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473600070314216, ts_in_delta: 18681, sequence: 157862 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600089830441 }, price: 4108.250000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473600090544076, ts_in_delta: 18604, sequence: 157922 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600807018955 }, price: 4108.250000000, size: 4, action: 'T', side: 'B', flags: 0, depth: 0, ts_recv: 1654473600807324169, ts_in_delta: 18396, sequence: 158072 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473601317385867 }, price: 4108.000000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473601317722490, ts_in_delta: 22043, sequence: 158111 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473601317385867 }, price: 4108.000000000, size: 7, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473601317736158, ts_in_delta: 17280, sequence: 158112 }
AsyncDbnDecoder::from_zstd_file
Create a new decoder from a Zstandard-compressed DBN file. If the file is uncompressed, use from_file instead.
Parameters
Returns
An AsyncDbnDecoder object.
This function will return an error if it is unable to parse the metadata in the file or the input is encoded in a newer version of DBN.
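A minimal sketch of reading back a previously saved file (the file name is hypothetical):
use databento::dbn::{
    decode::{AsyncDbnDecoder, DbnMetadata},
    TradeMsg,
};
let mut decoder =
    AsyncDbnDecoder::from_zstd_file("GLBX-ESM2-20220606.trades.dbn.zst").await?;
// The DBN metadata header is decoded immediately.
println!("{:#?}", decoder.metadata());
while let Some(trade) = decoder.decode_record::<TradeMsg>().await? {
    println!("{trade:?}");
}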
AsyncDbnDecoder::with_zstd
Create a new decoder from a Zstandard-compressed DBN stream that implements AsyncReadExt.
If reader implements AsyncBufRead, it's better to use with_zstd_buffer to avoid unnecessary additional buffering.
Parameters
Returns
An AsyncDbnDecoder object.
This function will return an error if it is unable to parse the metadata in reader or the input is encoded in a newer version of DBN.
AsyncDbnDecoder::with_zstd_buffer
Create a new decoder from a buffered Zstandard-compressed DBN stream that implements AsyncBufReadExt.
Parameters
Returns
An AsyncDbnDecoder object.
This function will return an error if it is unable to parse the metadata in reader or the input is encoded in a newer version of DBN.
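A minimal sketch using a buffered tokio file reader (the file name is hypothetical):
use databento::dbn::{decode::AsyncDbnDecoder, TradeMsg};
use tokio::{fs::File, io::BufReader};
// Wrap the file in a BufReader so the decoder can reuse its buffer.
let reader =
    BufReader::new(File::open("GLBX-ESM2-20220606.trades.dbn.zst").await?);
let mut decoder = AsyncDbnDecoder::with_zstd_buffer(reader).await?;
if let Some(trade) = decoder.decode_record::<TradeMsg>().await? {
    println!("{trade:?}");
}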
AsyncDbnStore::decode_record
Decode a single record of a specific type. If the record type is unknown, such as when working with Live data where the stream can contain several different record types, use decode_record_ref.
Parameters
Returns
A reference to the decoded record of type T or Ok(None) if the stream has been exhausted.
This function will return an error if the record is not of type T or there's an error reading from the input stream.
use databento::{
dbn::{OhlcvMsg, Schema},
historical::timeseries::GetRangeParams,
HistoricalClient,
};
use time::macros::datetime;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
.timeseries()
.get_range(
&GetRangeParams::builder()
.dataset("XNAS.ITCH")
.date_time_range((
datetime!(2023-08-07 00:00 UTC),
datetime!(2023-08-08 00:00 UTC),
))
.symbols("AAPL")
.schema(Schema::Ohlcv1M)
.build(),
)
.await?;
let bar = decoder.decode_record::<OhlcvMsg>().await?.unwrap();
println!("{bar:#?}");
AsyncDbnStore::decode_record_ref
Decode a single record of an unknown type.
Returns
A RecordRef (a wrapper around a record of a dynamic type) or Ok(None) if the stream has been exhausted.
This function will return an error if there's an error reading from the input stream.
use databento::{
dbn::{OhlcvMsg, SType, Schema},
historical::timeseries::GetRangeParams,
HistoricalClient,
};
use time::macros::datetime;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
.timeseries()
.get_range(
&GetRangeParams::builder()
.dataset("OPRA.PILLAR")
.stype_in(SType::Parent)
.date_time_range((
datetime!(2023-08-07 00:00 UTC),
datetime!(2023-08-08 00:00 UTC),
))
.symbols("SPXW.OPT")
.schema(Schema::Ohlcv1H)
.build(),
)
.await?;
let rec = decoder.decode_record_ref().await?.unwrap();
if let Some(bar) = rec.get::<OhlcvMsg>() {
    println!("{bar:#?}");
}
AsyncDbnStore::metadata
Get a reference to the decoded DBN Metadata.
Returns
A reference to the DBN Metadata.
use databento::{
dbn::{decode::DbnMetadata, OhlcvMsg, Schema},
historical::timeseries::GetRangeParams,
HistoricalClient,
};
use time::macros::datetime;
let mut client =
HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
.timeseries()
.get_range(
&GetRangeParams::builder()
.dataset("XNAS.ITCH")
.date_time_range((
datetime!(2023-08-07 00:00 UTC),
datetime!(2023-08-08 00:00 UTC),
))
.symbols("META")
.schema(Schema::Ohlcv1H)
.build(),
)
.await?;
println!("{:#?}", decoder.metadata());
Metadata {
version: 3,
dataset: "XNAS.ITCH",
schema: Some(
Ohlcv1H,
),
start: 1691366400000000000,
end: Some(
1691452800000000000,
),
limit: None,
stype_in: Some(
RawSymbol,
),
stype_out: InstrumentId,
ts_out: false,
symbol_cstr_len: 71,
symbols: [
"META",
],
partial: [],
not_found: [],
mappings: [
SymbolMapping {
raw_symbol: "META",
intervals: [
MappingInterval {
start_date: 2023-08-07,
end_date: 2023-08-08,
symbol: "6508",
},
],
},
],
}
Metadata
The contents of the header of a DBN stream.
See also: The crate documentation for a comprehensive list of methods and implemented traits.
Fields
schema is None for live data, which can mix schemas; end is None for live data.
pub struct Metadata {
pub version: u8,
pub dataset: String,
pub schema: Option<Schema>,
pub start: u64,
pub end: Option<NonZeroU64>,
pub limit: Option<NonZeroU64>,
pub stype_in: Option<SType>,
pub stype_out: SType,
pub ts_out: bool,
pub symbol_cstr_len: usize,
pub symbols: Vec<String>,
pub partial: Vec<String>,
pub not_found: Vec<String>,
pub mappings: Vec<SymbolMapping>,
}
pub struct SymbolMapping {
pub raw_symbol: String,
pub intervals: Vec<MappingInterval>,
}
pub struct MappingInterval {
pub start_date: time::Date,
pub end_date: time::Date,
pub symbol: String,
}
Metadata::symbol_map
Create a symbology mapping from instrument ID and date to text symbol from the mappings in the metadata.
Returns
A TsSymbolMap with the symbol mappings for the query range indexed by instrument ID and date.
This function returns an error if it fails to parse a symbol mapping into a u32 instrument ID.
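A minimal sketch pairing symbol_map with TsSymbolMap::get_for_rec (described below), assuming decoder is an open decoder over a trades request and that the SymbolIndex trait is re-exported at the dbn crate root:
use databento::dbn::{decode::DbnMetadata, SymbolIndex, TradeMsg};
// Build the map once from the stream's metadata, then label each record
let symbol_map = decoder.metadata().symbol_map()?;
while let Some(trade) = decoder.decode_record::<TradeMsg>().await? {
    if let Some(symbol) = symbol_map.get_for_rec(trade) {
        println!("{symbol}: {trade:?}");
    }
}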
Metadata::symbol_map_for_date
Create a symbology mapping from the mappings in the metadata for the specified date.
Parameters
Returns
A PitSymbolMap with the symbol mappings for the query range indexed by instrument ID.
This function returns an error if it fails to parse a symbol mapping into a u32 instrument ID or the provided date is outside the query range.
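A minimal sketch for a single-day query such as the XNAS.ITCH example above, again assuming SymbolIndex is re-exported at the dbn crate root:
use databento::dbn::{decode::DbnMetadata, OhlcvMsg, SymbolIndex};
use time::macros::date;
// The date must fall inside the query range used to request the data
let symbol_map = decoder
    .metadata()
    .symbol_map_for_date(date!(2023 - 08 - 07))?;
while let Some(bar) = decoder.decode_record::<OhlcvMsg>().await? {
    if let Some(symbol) = symbol_map.get_for_rec(bar) {
        println!("{symbol}: {bar:?}");
    }
}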
RecordRef
A wrapper around a non-owning immutable reference to a DBN record. This wrapper allows for mixing of record types and schemas, and runtime record polymorphism.
Both AsyncRecordDecoder::decode_record_ref and LiveClient::next_record return RecordRef objects.
See also: The crate documentation for a comprehensive list of methods and implemented traits.
RecordRef::header
Get a reference to the RecordHeader which is found at the start of every DBN record.
This allows access to basic information about the record, such as the instrument ID, without first determining its concrete type.
Info: This method is part of the Record trait, so you must import the trait to call this method.
Returns
An immutable reference to the record's header (RecordHeader).
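A minimal sketch, assuming rec is a RecordRef obtained from decode_record_ref and that the Record trait is imported from the dbn crate root:
use databento::dbn::Record;
// Header fields are available without downcasting to a concrete type
let hd = rec.header();
println!("instrument_id={} ts_event={}", hd.instrument_id, hd.ts_event);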
RecordRef::get
Get a reference to a particular record type.
Usually paired with if let Some(...) or with has.
Returns
Option<&'a T>
A reference to the underlying record of type T or None if the RecordRef points to another type.
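A minimal sketch, assuming rec is a RecordRef from a stream that may mix record types:
use databento::dbn::{MboMsg, TradeMsg};
// Downcast before touching schema-specific fields
if let Some(trade) = rec.get::<TradeMsg>() {
    println!("trade: {trade:?}");
} else if let Some(mbo) = rec.get::<MboMsg>() {
    println!("order book event: {mbo:?}");
}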
TsSymbolMap
A timeseries symbol map, i.e. instrument IDs to text symbols by date. These objects can be obtained from Metadata::symbol_map.
See also: The crate documentation for a comprehensive list of methods and implemented traits.
TsSymbolMap::get_for_rec
Get the symbol mapping for a record.
Info: This method is part of the SymbolIndex trait, so you must import the trait to call this method.
Parameters
Returns
Option<&String>
The corresponding text symbol for the record's instrument ID and timestamp.
PitSymbolMap
A point-in-time symbol map. Useful for working with real-time symbology, a historical request over a single day, and other situations where the symbol mappings are known not to change. These objects can be obtained from Metadata::symbol_map_for_date for historical data.
See also: The crate documentation for a comprehensive list of methods and implemented traits.
PitSymbolMap::get_for_rec
Get the symbol mapping for a record.
Info: This method is part of the SymbolIndex trait, so you must import the trait to call this method.
Parameters
Returns
Option<&String>
The corresponding text symbol for the record's instrument ID.
PitSymbolMap::on_record
Update the symbol map with the contents of the record.
Only SymbolMappingMsg records affect the map; all other record types will be ignored.
Parameters
Returns
()
This function returns an error if record contains a SymbolMappingMsg with invalid UTF-8 symbols.
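A minimal sketch for live usage, assuming client is a connected LiveClient and that PitSymbolMap::new() constructs an empty map:
use databento::dbn::{PitSymbolMap, SymbolIndex};
let mut symbol_map = PitSymbolMap::new();
while let Some(rec) = client.next_record().await? {
    // Only SymbolMappingMsg records change the map; other records pass through
    symbol_map.on_record(rec)?;
    if let Some(symbol) = symbol_map.get_for_rec(&rec) {
        println!("received record for {symbol}");
    }
}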
ListFieldsParams
The parameter struct and builder for HistoricalClient::metadata::list_fields.
The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method.
For every field, there is an identically named setter method on the builder.
Fields
For the encoding, Encoding::Dbn is recommended.
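A minimal sketch of building the parameters and calling list_fields, assuming encoding and schema setters matching the note above; the printed shape of the result is illustrative only:
use databento::{
    dbn::{Encoding, Schema},
    historical::metadata::ListFieldsParams,
};
let fields = client
    .metadata()
    .list_fields(
        &ListFieldsParams::builder()
            .encoding(Encoding::Dbn)
            .schema(Schema::Trades)
            .build(),
    )
    .await?;
println!("{fields:?}");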
GetDatasetConditionParams
The parameter struct and builder for HistoricalClient::metadata::get_dataset_condition.
The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method.
For every field, there is an identically named setter method on the builder.
Fields
If the date range is None, the condition for all available dates will be returned.
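A minimal sketch, reusing the builder pattern shown for the other parameter structs; the date_range setter, dataset, and dates are assumptions for illustration:
use databento::historical::metadata::GetDatasetConditionParams;
use time::macros::date;
let conditions = client
    .metadata()
    .get_dataset_condition(
        &GetDatasetConditionParams::builder()
            .dataset("GLBX.MDP3")
            .date_range((date!(2022 - 06 - 06), date!(2022 - 06 - 10)))
            .build(),
    )
    .await?;
println!("{conditions:?}");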
GetQueryParams
The parameter struct and builder for several historical metadata endpoints:
- HistoricalClient::metadata::get_record_count.
- HistoricalClient::metadata::get_billable_size.
- HistoricalClient::metadata::get_cost.
The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method.
For every field, there is an identically named setter method on the builder.
Fields
If symbols is Symbols::All, all symbols will be selected. If limit is None, there is no limit; it defaults to None.
pub struct GetQueryParams {
pub dataset: String,
pub symbols: Symbols,
pub schema: Schema,
pub date_time_range: DateTimeRange,
pub stype_in: SType,
pub limit: Option<NonZeroU64>,
}
pub type GetRecordCountParams = GetQueryParams;
pub type GetBillableSizeParams = GetQueryParams;
pub type GetCostParams = GetQueryParams;
use databento::{
dbn::{SType, Schema},
historical::metadata::GetQueryParams,
};
use time::macros::datetime;
assert_eq!(
GetQueryParams {
dataset: "OPRA.PILLAR".to_owned(),
date_time_range: (
datetime!(2023-08-01 00:00 UTC),
datetime!(2023-08-08 00:00 UTC)
)
.into(),
symbols: vec!["VIX.OPT".to_owned()].into(),
schema: Schema::Trades,
stype_in: SType::Parent,
limit: None,
},
GetQueryParams::builder()
.dataset("OPRA.PILLAR")
.date_time_range((
datetime!(2023-08-01 00:00 UTC),
datetime!(2023-08-08 00:00 UTC)
))
.symbols("VIX.OPT")
.schema(Schema::Trades)
.stype_in(SType::Parent)
.build()
);
GetRangeParams
The parameter struct and builder for HistoricalClient::timeseries::get_range.
The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method.
For every field, there is an identically named setter method on the builder.
This struct can also be created from a GetRangeToFileParams struct via the From trait.
Fields
- symbols: if Symbols::All, all symbols will be selected.
- date_time_range: filters on ts_recv if it exists in the schema, otherwise ts_event.
- stype_out: defaults to InstrumentId.
- limit: if None, there is no limit. Defaults to None.
use databento::{
dbn::{SType, Schema, VersionUpgradePolicy},
historical::timeseries::GetRangeParams,
};
use time::macros::datetime;
assert_eq!(
GetRangeParams {
dataset: "XNAS.ITCH".to_owned(),
date_time_range: (
datetime!(2023-11-03 14:00 -4),
datetime!(2023-11-03 16:00 -4)
)
.into(),
symbols: "NVDA".into(),
schema: Schema::Trades,
stype_in: SType::RawSymbol,
stype_out: SType::InstrumentId,
limit: None,
#[expect(deprecated)]
upgrade_policy: VersionUpgradePolicy::default(),
},
GetRangeParams::builder()
.dataset("XNAS.ITCH")
.symbols("NVDA")
.date_time_range((
datetime!(2023-11-03 14:00 -4),
datetime!(2023-11-03 16:00 -4)
))
.schema(Schema::Trades)
.build()
);
GetRangeToFileParams
The parameter struct and builder for HistoricalClient::timeseries::get_range_to_file.
The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method.
For every field, there is an identically named setter method on the builder.
This struct can also be created from a GetRangeParams struct via the GetRangeParams::with_path() method.
Fields
- symbols: if Symbols::All, all symbols will be selected.
- date_time_range: filters on ts_recv if it exists in the schema, otherwise ts_event.
- stype_out: defaults to InstrumentId.
- limit: if None, there is no limit. Defaults to None.
use std::path::PathBuf;
use databento::{
dbn::{Dataset, SType, Schema, VersionUpgradePolicy},
historical::timeseries::GetRangeToFileParams,
};
use time::macros::datetime;
assert_eq!(
GetRangeToFileParams {
dataset: Dataset::IfeuImpact.to_string(),
date_time_range: (
datetime!(2024-05-17 00:00 UTC),
datetime!(2024-05-20 00:00 UTC)
)
.into(),
symbols: "BRN.OPT".into(),
schema: Schema::Statistics,
stype_in: SType::Parent,
stype_out: SType::InstrumentId,
limit: None,
#[expect(deprecated)]
upgrade_policy: VersionUpgradePolicy::default(),
path: PathBuf::from(
"ifeu-impact.statistics.20240517.dbn.zst"
),
},
GetRangeToFileParams::builder()
.dataset(Dataset::IfeuImpact)
.symbols("BRN.OPT")
.stype_in(SType::Parent)
.date_time_range((
datetime!(2024-05-17 00:00 UTC),
datetime!(2024-05-20 00:00 UTC)
))
.schema(Schema::Statistics)
.path("ifeu-impact.statistics.20240517.dbn.zst")
.build()
);
ResolveParams
The parameter struct and builder for HistoricalClient::symbology::resolve.
The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method.
For every field, there is an identically named setter method on the builder.
Fields
If symbols is Symbols::All, all symbols will be selected (not available for every dataset). stype_out defaults to InstrumentId.
use databento::{
dbn::SType, historical::symbology::ResolveParams,
};
use time::macros::date;
assert_eq!(
ResolveParams {
dataset: "XNAS.ITCH".to_owned(),
date_range: (
date!(2020 - 01 - 01),
date!(2022 - 01 - 01)
)
.into(),
symbols: vec!["IWM", "SPY", "QQQ"].into(),
stype_in: SType::RawSymbol,
stype_out: SType::InstrumentId,
},
ResolveParams::builder()
.dataset("XNAS.ITCH")
.symbols(vec!["IWM", "SPY", "QQQ"])
.date_range((
date!(2020 - 01 - 01),
date!(2022 - 01 - 01)
))
.build()
);
SubmitJobParams
The parameter struct and builder for HistoricalClient::batch::submit_job.
The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method.
For every field, there is an identically named setter method on the builder.
Fields
- symbols: if Symbols::All, all symbols will be selected.
- date_time_range: filters on ts_recv if it exists in the schema, otherwise ts_event.
- encoding: defaults to Dbn.
- pretty_px: only applies to the Csv or Json encodings. Defaults to false.
- pretty_ts: only applies to the Csv or Json encodings. Defaults to false.
- map_symbols: only applies to the Csv or Json encodings. Defaults to false.
- split_symbols: cannot be used with limit. Defaults to false.
- split_duration: defaults to Day.
- split_size: defaults to None.
- delivery: only Download is supported at this time.
- stype_out: defaults to InstrumentId.
- limit: if None, there is no limit. Cannot be used with split_symbols.
pub struct SubmitJobParams {
pub dataset: String,
pub symbols: Symbols,
pub schema: Schema,
pub date_time_range: DateTimeRange,
pub encoding: Encoding,
pub compression: Compression,
pub pretty_px: bool,
pub pretty_ts: bool,
pub map_symbols: bool,
pub split_symbols: bool,
pub split_duration: Option<SplitDuration>,
pub split_size: Option<NonZeroU64>,
pub delivery: Delivery,
pub stype_in: SType,
pub stype_out: SType,
pub limit: Option<NonZeroU64>,
}
use databento::{
dbn::{Compression, Encoding, SType, Schema},
historical::batch::{
Delivery, SplitDuration, SubmitJobParams,
},
};
use time::macros::datetime;
assert_eq!(
SubmitJobParams {
dataset: "GLBX.MDP3".to_owned(),
date_time_range: (
datetime!(2019-01-01 00:00 UTC),
datetime!(2020-09-03 00:00 UTC)
)
.into(),
symbols: vec!["CL.c.0", "NG.c.0"].into(),
schema: Schema::Ohlcv1M,
stype_in: SType::Continuous,
stype_out: SType::InstrumentId,
limit: None,
encoding: Encoding::Csv,
compression: Compression::None,
pretty_px: true,
pretty_ts: true,
map_symbols: true,
split_symbols: false,
split_duration: Some(SplitDuration::Day),
split_size: None,
delivery: Delivery::Download,
},
SubmitJobParams::builder()
.dataset("GLBX.MDP3")
.symbols(vec!["CL.c.0", "NG.c.0"])
.stype_in(SType::Continuous)
.date_time_range((
datetime!(2019-01-01 00:00 UTC),
datetime!(2020-09-03 00:00 UTC)
))
.schema(Schema::Ohlcv1M)
.encoding(Encoding::Csv)
.compression(Compression::None)
.pretty_px(true)
.pretty_ts(true)
.map_symbols(true)
.build()
);
ListJobsParams
The parameter struct and builder for HistoricalClient::batch::list_jobs.
The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method.
For every field, there is an identically named setter method on the builder.
Fields
Valid states are Queued, Processing, Done, and Expired. If states is None, it defaults to all states except Expired.
use databento::historical::batch::{JobState, ListJobsParams};
use time::macros::datetime;
assert_eq!(
ListJobsParams {
states: Some(vec![JobState::Done]),
since: Some(datetime!(2023-11-06 00:00 UTC))
},
ListJobsParams::builder()
.states(vec![JobState::Done])
.since(datetime!(2023-11-06 00:00 UTC))
.build()
);
DownloadParams
The parameter struct and builder for HistoricalClient::batch::download.
The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method.
For every field, there is an identically named setter method on the builder.
Fields
If filename_to_download is None, all files for the batch job will be downloaded.
use std::path::PathBuf;
use databento::historical::batch::DownloadParams;
use time::macros::datetime;
assert_eq!(
DownloadParams {
output_dir: PathBuf::from("/tmp"),
job_id: "GLBX-20230926-ANMGJK7JB6".to_owned(),
filename_to_download: None,
},
DownloadParams::builder()
.output_dir("/tmp")
.job_id("GLBX-20230926-ANMGJK7JB6")
.build()
);