
API reference - Historical

Databento's historical data service can be accessed programmatically over its HTTP API. To make it easier to integrate the API, we also provide official client libraries that simplify the code you need to write.

Our HTTP API is designed as a collection of RPC-style methods, which can be called using URLs in the form https://hist.databento.com/v0/METHOD_FAMILY.METHOD.

Our client libraries wrap these HTTP RPC-style methods with more idiomatic interfaces in their respective languages.

You can use our API to stream or load data directly into your application. You can also use our API to make batch download requests, which instruct our service to prepare the data as flat files that can be downloaded from the Download center.

Client libraries are available for Python, C++, and Rust, in addition to the raw HTTP API. This reference covers the Rust client library, which can be added to a project with Cargo:

$ cargo add databento

Basics

Overview

Our historical API has the following structure:

  • Metadata provides information about the datasets themselves.
  • Time series provides all types of time series data. This includes subsampled data (second, minute, hour, daily aggregates), trades, top-of-book, order book deltas, order book snapshots, summary statistics, static data and macro indicators. We also provide properties of products such as expirations, tick sizes and symbols as time series data.
  • Symbology provides methods that help find and resolve symbols across different symbology systems.
  • Batch provides a means of submitting and querying for details of batch download requests.

Authentication

Databento uses API keys to authenticate requests. You can view and manage your keys on the API keys page of your portal.

Each API key is a 32-character string starting with db-.

The library will use the environment variable DATABENTO_API_KEY as your API key if the key_from_env method is called. Alternatively, you can pass an API key directly to the historical::ClientBuilder through the key method. Calling the build method constructs and returns an instance of HistoricalClient.

Related: Securing your API keys.

Example usage
use databento::HistoricalClient;

// Establish connection and authenticate
let mut client =
    HistoricalClient::builder().key_from_env()?.build()?;

// Authenticated request
let datasets = client.metadata().list_datasets(None).await?;
for dataset in datasets {
    println!("{dataset}");
}

Schemas and conventions

A schema is a data record format represented as a collection of different data fields. Our datasets support multiple schemas, such as order book, tick data, bar aggregates, and so on. You can see the full list in our List of market data schemas.

You can get a list of all supported schemas for any given dataset using the metadata.list_schemas method. The same information is also available on each dataset's detail page, accessible through the Explore feature.

The following table provides details about the data types and conventions used for various fields that you will commonly encounter in the data.

Name Field Description
Dataset dataset A unique string name assigned to each dataset by Databento. Full list of datasets can be found from the metadata.
Publisher ID publisher_id A unique u16 assigned to each publisher by Databento. Full list of publisher IDs can be found from the metadata.
Instrument ID instrument_id A unique u32 assigned to each instrument by the venue. Information about instrument IDs for any given dataset can be found in the symbology.
Order ID order_id A unique u64 assigned to each order by the venue.
Timestamp (event) ts_event The matching-engine-received timestamp expressed as the number of nanoseconds since the UNIX epoch.
Timestamp (receive) ts_recv The capture-server-received timestamp expressed as the number of nanoseconds since the UNIX epoch.
Timestamp delta (in) ts_in_delta The matching-engine-sending timestamp expressed as the number of nanoseconds before ts_recv. See timestamping guide.
Timestamp out ts_out The Databento gateway-sending timestamp expressed as the number of nanoseconds since the UNIX epoch. See timestamping guide.
Price price The price expressed as a signed integer where every 1 unit corresponds to 1e-9, i.e. 1/1,000,000,000 or 0.000000001.
Book side side The side that initiates the event. Can be Ask for a sell order (or sell aggressor in a trade), Bid for a buy order (or buy aggressor in a trade), or None where no side is specified by the original source.
Size size The order quantity.
Flags flags A bit field indicating event end, message characteristics, and data quality.
Action action The event type or order book operation. Can be Add, Cancel, Modify, clear book, Trade, Fill, or None.
Sequence number sequence The original message sequence number from the venue.
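
As a worked example of the price convention above, here is a minimal sketch (not taken from the documentation) that converts a raw fixed-precision price into a floating-point value for display:

// Prices are signed integers where 1 unit corresponds to 1e-9.
const PRICE_SCALE: f64 = 1_000_000_000.0;

fn px_to_f64(raw_price: i64) -> f64 {
    raw_price as f64 / PRICE_SCALE
}

fn main() {
    // 4108.5 as it would be stored in a record (value chosen for illustration)
    let raw: i64 = 4_108_500_000_000;
    println!("{}", px_to_f64(raw)); // prints 4108.5
}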

Datasets

Databento provides time series datasets for a variety of markets, sourced from different publishers. Our available datasets can be browsed through the search feature on our site.

Each dataset is assigned a unique string identifier (dataset ID) in the form PUBLISHER.DATASET, such as GLBX.MDP3. For publishers that are also markets, we use standard four-character ISO 10383 Market Identifier Codes (MIC). Otherwise, Databento arbitrarily assigns a four-character identifier for the publisher.

These dataset IDs are also found on the Data catalog and Download request features of the Databento user portal.

When a publisher provides multiple data products with different levels of granularity, Databento subscribes to the most-granular product. We then provide this dataset with alternate schemas to make it easy to work with the level of detail most appropriate for your application.

More information about different types of venues and publishers is available in our FAQs.

Symbology

Databento's historical API supports several ways to select an instrument in a dataset. An instrument is specified using a symbol and a symbology type, also referred to as an stype. The supported symbology types are:

  • Raw symbology (RawSymbol): the original string symbols used by the publisher in the source data.
  • Instrument ID symbology (InstrumentId): the unique numeric ID assigned to each instrument by the publisher.
  • Parent symbology (Parent): groups instruments related to the market for the same underlying.
  • Continuous contract symbology (Continuous): proprietary symbology that specifies instruments based on certain systematic rules.

When requesting data from our timeseries.get_range or batch.submit_job endpoints, an input and output symbology type can be specified. By default, our client libraries will use raw symbology for the input type and instrument ID symbology for the output type. Not all symbology types are supported for every dataset.
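
As an illustration, here is a minimal sketch of setting the symbology types explicitly on a streaming request; it assumes GetRangeParams exposes stype_in and stype_out builder methods and otherwise mirrors the get_range example later in this reference:

use databento::{
    dbn::{SType, Schema},
    historical::timeseries::GetRangeParams,
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key_from_env()?.build()?;
let mut decoder = client
    .timeseries()
    .get_range(
        &GetRangeParams::builder()
            .dataset("GLBX.MDP3")
            .symbols("ESM2")
            // Raw symbols in, instrument IDs out (the library defaults)
            .stype_in(SType::RawSymbol)
            .stype_out(SType::InstrumentId)
            .schema(Schema::Trades)
            .date_time_range((
                datetime!(2022-06-06 00:00 UTC),
                datetime!(2022-06-07 00:00 UTC),
            ))
            .build(),
    )
    .await?;
// Decode records from `decoder` as shown under timeseries::get_range below.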

The process of converting from one symbology type to another is called symbology resolution. This conversion can be done at no cost with the symbology.resolve endpoint.

For more about symbology at Databento, see our Standards and conventions.

Encodings

DBN

Databento Binary Encoding (DBN) is an extremely fast message encoding and highly-compressible storage format for normalized market data. It includes a self-describing metadata header and adopts a binary format with zero-copy serialization.

We recommend using our Python, C++, or Rust client libraries to read DBN files locally. A CLI tool is also available for converting DBN files to CSV or JSON.

CSV

Comma-separated values (CSV) is a simple text file format for tabular data. CSVs can be easily opened in Excel, loaded into pandas data frames, or parsed in C++.

Our CSVs have one header line, followed by one record per line. Lines use UNIX-style \n separators.

JSON

JavaScript Object Notation (JSON) is a flexible text file format with broad language support and wide adoption across web apps.

Our JSON files follow the JSON lines specification, where each line of the file is a JSON record. Lines use UNIX-style \n separators.

Compression

Databento provides options for compressing files from our API. Available compression formats depend on the encoding you select.

Zstd

The Zstd compression option uses the Zstandard format.

This option is available for all encodings, and is recommended for faster transfer speeds and smaller files.

The DBN crate comes with support for reading Zstandard-compressed DBN files, and you can read any Zstandard file in Rust using the zstd library.

Read more about working with Zstandard-compressed files.
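
For example, here is a minimal sketch (assuming a hypothetical local file named trades.csv.zst) that decompresses a Zstandard-compressed CSV with the zstd crate and prints the first few lines:

use std::fs::File;
use std::io::{BufRead, BufReader};

fn main() -> std::io::Result<()> {
    let file = File::open("trades.csv.zst")?; // hypothetical file name
    let reader = BufReader::new(zstd::Decoder::new(file)?);
    for line in reader.lines().take(5) {
        println!("{}", line?);
    }
    Ok(())
}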

None

The None compression option disables compression entirely, resulting in significantly larger files. However, this can be useful for loading small CSV files directly into Excel.

Dates and times

Our Rust client library uses the time crate for representing both dates and datetimes.

Dates and datetimes from the historical API are deserialized into time::Date and time::OffsetDateTime, respectively. To localize these, use OffsetDateTime::to_offset.

In DBN records, timestamps are represented as u64 nanosecond-precision UNIX timestamps, always in UTC. They can be parsed with OffsetDateTime::from_unix_timestamp_nanos.
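
A minimal sketch (not from the documentation) of converting a DBN nanosecond timestamp into a time::OffsetDateTime and localizing it:

use time::{OffsetDateTime, UtcOffset};

fn main() -> Result<(), time::error::ComponentRange> {
    // ts_event from a record: nanoseconds since the UNIX epoch, in UTC
    let ts_event: u64 = 1654473600070033767;
    let utc = OffsetDateTime::from_unix_timestamp_nanos(ts_event as i128)?;
    // Localize to UTC-5; the offset is chosen only for illustration
    let local = utc.to_offset(UtcOffset::from_hms(-5, 0, 0).unwrap());
    println!("{utc} -> {local}");
    Ok(())
}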

Errors

Our historical API uses HTTP response codes to indicate the success or failure of an API request. The client library provides an error enum to wrap these response codes.

  • 2xx indicates success.
  • 4xx indicates an error on the client side. Represented as an Error::Http.
  • 5xx indicates an error with Databento's servers. Represented as an Error::Http.

Use the status method to get the status code associated with an error.

The full list of the response codes and associated causes is as follows:

Code Message Cause
200 OK Successful request.
206 Partial Content Successful request, with partially resolved symbols.
400 Bad Request Invalid request. Usually due to a missing, malformed or unsupported parameter.
401 Unauthorized Invalid username or API key.
402 Payment Required Issue with your account payment information.
403 Forbidden The API key has insufficient permissions to perform the request.
404 Not Found A resource is not found, or a requested symbol does not exist.
409 Conflict A resource already exists.
422 Unprocessable Entity The request is well formed, but we cannot or will not process the contained instructions.
429 Too Many Requests API rate limit exceeded.
500 Internal Server Error Unexpected condition encountered in our system.
503 Service Unavailable Data gateway is offline or overloaded.
504 Gateway Timeout Data gateway is available but other parts of our system are offline or overloaded.
API method
pub enum Error {
    /// An invalid argument was passed.
    BadArgument {
        /// The name of the parameter to which the bad argument was passed.
        param_name: String,
        /// The description of how the argument was invalid.
        desc: String,
    },
    /// An I/O error while reading or writing DBN or another encoding.
    Io(std::io::Error),
    /// An HTTP error.
    Http(reqwest::Error),
    /// An error from the Databento API.
    Api(ApiError),
    /// An error internal to the client.
    Internal(String),
    /// An error related to DBN encoding.
    Dbn(dbn::Error),
    /// An error when authentication failed.
    Auth(String),
}

pub struct ApiError {
    /// The request ID.
    pub request_id: Option<String>,
    /// The HTTP status code of the response.
    pub status_code: reqwest::StatusCode,
    /// The message from the Databento API.
    pub message: String,
    /// The link to documentation related to the error.
    pub docs_url: Option<String>,
}
Example usage
use databento::HistoricalClient;

assert!(HistoricalClient::builder().key("invalid").is_err());
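
A minimal sketch (not from the documentation) of distinguishing an API error from other failures and reading its status code and message, using the ApiError fields shown above:

use databento::{Error, HistoricalClient};

let mut client =
    HistoricalClient::builder().key_from_env()?.build()?;
match client.metadata().list_datasets(None).await {
    Ok(datasets) => println!("{} datasets available", datasets.len()),
    // API error responses carry an HTTP status code and a message
    Err(Error::Api(err)) => {
        eprintln!("API error {}: {}", err.status_code, err.message)
    }
    Err(other) => eprintln!("request failed: {other}"),
}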

Rate limits

Our historical API applies rate limits to each IP address.

When a request exceeds a rate limit, an Error::Http will be returned with a 429 error code.

Size limits

There is no size limit for either streaming or batch download requests. Batch download is more manageable for large datasets, so we recommend it for requests over 5 GB.

You can also manage the size of your request by splitting it into multiple, smaller requests. The historical API allows you to make stream and batch download requests with time ranges specified up to nanosecond resolution. You can also use the limit parameter in any request to limit the number of data records returned from the service.

Batch download supports different delivery methods which can be specified using the delivery parameter.

Example usage
use std::num::NonZeroU64;

use databento::{
    dbn::Schema, historical::batch::SubmitJobParams,
    HistoricalClient, Symbols,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key_from_env()?.build()?;
let job = client
    .batch()
    .submit_job(
        &SubmitJobParams::builder()
            .dataset("GLBX.MDP3")
            .symbols(Symbols::All)
            .schema(Schema::Trades)
            .date_time_range((
                datetime!(2022-08-26 00:00:00 UTC),
                datetime!(2022-09-28 00:00:00 UTC),
            ))
            .limit(NonZeroU64::new(1000))
            .build(),
    )
    .await?;

Metered pricing

Databento only charges for the data that you use. You can find rates (per MB) for the various datasets and estimate pricing on our Data catalog. We meter the data by its uncompressed size in binary encoding.

When you stream the data, you are billed incrementally for each outbound byte of data sent from our historical gateway. If your connection is interrupted while streaming our data and our historical gateway detects a connection timeout of more than 5 seconds, it will immediately stop sending data and you will not be billed for the remainder of your request.

Duplicate streaming requests will incur repeated charges. If you intend to access the same data multiple times, we recommend using our batch download feature. When you make a batch download request, you are only billed once for the request and, subsequently, you can download the data from the Download center multiple times over 30 days for no additional charge.

You will only be billed for usage of time series data. Access to metadata, symbology, and account management is free.
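
To avoid surprises, you can estimate the cost of a query before streaming it. A minimal sketch using metadata's get_cost method (documented in full under Metadata below):

use databento::{
    dbn::Schema, historical::metadata::GetCostParams,
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key_from_env()?.build()?;
let cost = client
    .metadata()
    .get_cost(
        &GetCostParams::builder()
            .dataset("GLBX.MDP3")
            .symbols("ESM2")
            .schema(Schema::Trades)
            .date_time_range((
                datetime!(2022-06-06 00:00 UTC),
                datetime!(2022-06-10 00:00 UTC),
            ))
            .build(),
    )
    .await?;
println!("Estimated cost: ${cost:.2}");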

Related: Billing management.

Versioning

Our historical API and its client libraries adopt MAJOR.MINOR.PATCH format for version numbers. These version numbers conform to semantic versioning. We are using major version 0 for initial development, where our API is not considered stable.

Once we release major version 1, our public API will be stable. This means that you will be able to upgrade minor or patch versions to pick up new functionality, without breaking your integration.

Starting with major versions after 1, we will provide support for previous versions for one year after the date of the subsequent major release. For example, if version 2.0.0 is released on January 1, 2024, then all versions 1.x.y of the API and client libraries will be deprecated. However, they will remain supported until January 1, 2025.

We may introduce backwards-compatible changes between minor versions.

Our Release notes will contain information about both breaking and backwards-compatible changes in each release.

Our API and official client libraries are kept in sync with same-day releases for major versions. For instance, version 1.x.y of the Rust client library will provide the same functionality found in any 1.x.y version of the Python client library.

Related: Release notes.

Client

HistoricalClient

To access Databento's historical API, first create an instance of HistoricalClient. The API is exposed through four subclients:

  • metadata
  • timeseries
  • symbology
  • batch

Note that the API key can be passed as an argument, which is not recommended for production applications. Instead, you can use the historical::ClientBuilder through HistoricalClient::builder(), which includes a key_from_env method for setting the key from the DATABENTO_API_KEY environment variable.

Parameters

key
String
32-character API key. Found on your API keys page.
gateway
databento::HistoricalGateway
Site of historical gateway to connect to. Currently only Bo1 is supported. If using ClientBuilder, it defaults to Bo1.
upgrade_policy
VersionUpgradePolicy
How to handle data from prior DBN versions. By default, records from DBN versions 1 and 2 are upgraded to version 3.
API method
impl ClientBuilder {
    // Required
    pub fn key(
        self,
        key: impl ToString,
    ) -> databento::Result<Self>;
    pub fn key_from_env(self) -> databento::Result<Self>;

    // Optional
    pub fn gateway(mut self, gateway: HistoricalGateway) -> Self;
    pub fn upgrade_policy(
        mut self,
        upgrade_policy: VersionUpgradePolicy,
    ) -> Self;

    pub fn build(self) -> databento::Result<Client>;
}
Example usage
use databento::HistoricalClient;

let client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let client =
    HistoricalClient::builder().key_from_env()?.build()?;

Metadata

HistoricalClient::metadata::list_publishers

List all publisher ID mappings.

Use this method to list the mappings of publisher names to publisher IDs.

Returns

Vec<PublisherDetail>

A list of publisher details, where PublisherDetail is:

publisher_id
u16
The publisher ID assigned by Databento.
dataset
String
The dataset ID for the publisher.
venue
String
The venue for the publisher.
description
String
The publisher description.
API method
pub async fn list_publishers(
    &mut self,
) -> databento::Result<Vec<PublisherDetail>>;
Example usage
use databento::HistoricalClient;

let mut client =
    HistoricalClient::builder().key_from_env()?.build()?;
let publisher_mappings =
    client.metadata().list_publishers().await?;
println!("{publisher_mappings:#?}");
Example response
[
    PublisherDetail {
        publisher_id: 1,
        dataset: "GLBX.MDP3",
        venue: "GLBX",
        description: "CME Globex MDP 3.0",
    },
    PublisherDetail {
        publisher_id: 2,
        dataset: "XNAS.ITCH",
        venue: "XNAS",
        description: "Nasdaq TotalView-ITCH",
    },
    PublisherDetail {
        publisher_id: 3,
        dataset: "XBOS.ITCH",
        venue: "XBOS",
        description: "Nasdaq BX TotalView-ITCH",
    },
    PublisherDetail {
        publisher_id: 4,
        dataset: "XPSX.ITCH",
        venue: "XPSX",
        description: "Nasdaq PSX TotalView-ITCH",
    },
    PublisherDetail {
        publisher_id: 5,
        dataset: "BATS.PITCH",
        venue: "BATS",
        description: "Cboe BZX Depth",
    },
    PublisherDetail {
        publisher_id: 6,
        dataset: "BATY.PITCH",
        venue: "BATY",
        description: "Cboe BYX Depth",
    },
    PublisherDetail {
        publisher_id: 7,
        dataset: "EDGA.PITCH",
        venue: "EDGA",
        description: "Cboe EDGA Depth",
    },
    PublisherDetail {
        publisher_id: 8,
        dataset: "EDGX.PITCH",
        venue: "EDGX",
        description: "Cboe EDGX Depth",
    },
    PublisherDetail {
        publisher_id: 9,
        dataset: "XNYS.PILLAR",
        venue: "XNYS",
        description: "NYSE Integrated",
    },
    PublisherDetail {
        publisher_id: 10,
        dataset: "XCIS.PILLAR",
        venue: "XCIS",
        description: "NYSE National Integrated",
    },
    PublisherDetail {
        publisher_id: 11,
        dataset: "XASE.PILLAR",
        venue: "XASE",
        description: "NYSE American Integrated",
    },
    PublisherDetail {
        publisher_id: 12,
        dataset: "XCHI.PILLAR",
        venue: "XCHI",
        description: "NYSE Texas Integrated",
    },
    PublisherDetail {
        publisher_id: 13,
        dataset: "XCIS.BBO",
        venue: "XCIS",
        description: "NYSE National BBO",
    },
    PublisherDetail {
        publisher_id: 14,
        dataset: "XCIS.TRADES",
        venue: "XCIS",
        description: "NYSE National Trades",
    },
    PublisherDetail {
        publisher_id: 15,
        dataset: "MEMX.MEMOIR",
        venue: "MEMX",
        description: "MEMX Memoir Depth",
    },
    PublisherDetail {
        publisher_id: 16,
        dataset: "EPRL.DOM",
        venue: "EPRL",
        description: "MIAX Pearl Depth",
    },
    PublisherDetail {
        publisher_id: 17,
        dataset: "XNAS.NLS",
        venue: "FINN",
        description: "FINRA/Nasdaq TRF Carteret",
    },
    PublisherDetail {
        publisher_id: 18,
        dataset: "XNAS.NLS",
        venue: "FINC",
        description: "FINRA/Nasdaq TRF Chicago",
    },
    PublisherDetail {
        publisher_id: 19,
        dataset: "XNYS.TRADES",
        venue: "FINY",
        description: "FINRA/NYSE TRF",
    },
    PublisherDetail {
        publisher_id: 20,
        dataset: "OPRA.PILLAR",
        venue: "AMXO",
        description: "OPRA - NYSE American Options",
    },
    PublisherDetail {
        publisher_id: 21,
        dataset: "OPRA.PILLAR",
        venue: "XBOX",
        description: "OPRA - BOX Options",
    },
    PublisherDetail {
        publisher_id: 22,
        dataset: "OPRA.PILLAR",
        venue: "XCBO",
        description: "OPRA - Cboe Options",
    },
    PublisherDetail {
        publisher_id: 23,
        dataset: "OPRA.PILLAR",
        venue: "EMLD",
        description: "OPRA - MIAX Emerald",
    },
    PublisherDetail {
        publisher_id: 24,
        dataset: "OPRA.PILLAR",
        venue: "EDGO",
        description: "OPRA - Cboe EDGX Options",
    },
    PublisherDetail {
        publisher_id: 25,
        dataset: "OPRA.PILLAR",
        venue: "GMNI",
        description: "OPRA - Nasdaq GEMX",
    },
    PublisherDetail {
        publisher_id: 26,
        dataset: "OPRA.PILLAR",
        venue: "XISX",
        description: "OPRA - Nasdaq ISE",
    },
    PublisherDetail {
        publisher_id: 27,
        dataset: "OPRA.PILLAR",
        venue: "MCRY",
        description: "OPRA - Nasdaq MRX",
    },
    PublisherDetail {
        publisher_id: 28,
        dataset: "OPRA.PILLAR",
        venue: "XMIO",
        description: "OPRA - MIAX Options",
    },
    PublisherDetail {
        publisher_id: 29,
        dataset: "OPRA.PILLAR",
        venue: "ARCO",
        description: "OPRA - NYSE Arca Options",
    },
    PublisherDetail {
        publisher_id: 30,
        dataset: "OPRA.PILLAR",
        venue: "OPRA",
        description: "OPRA - Options Price Reporting Authority",
    },
    PublisherDetail {
        publisher_id: 31,
        dataset: "OPRA.PILLAR",
        venue: "MPRL",
        description: "OPRA - MIAX Pearl",
    },
    PublisherDetail {
        publisher_id: 32,
        dataset: "OPRA.PILLAR",
        venue: "XNDQ",
        description: "OPRA - Nasdaq Options",
    },
    PublisherDetail {
        publisher_id: 33,
        dataset: "OPRA.PILLAR",
        venue: "XBXO",
        description: "OPRA - Nasdaq BX Options",
    },
    PublisherDetail {
        publisher_id: 34,
        dataset: "OPRA.PILLAR",
        venue: "C2OX",
        description: "OPRA - Cboe C2 Options",
    },
    PublisherDetail {
        publisher_id: 35,
        dataset: "OPRA.PILLAR",
        venue: "XPHL",
        description: "OPRA - Nasdaq PHLX",
    },
    PublisherDetail {
        publisher_id: 36,
        dataset: "OPRA.PILLAR",
        venue: "BATO",
        description: "OPRA - Cboe BZX Options",
    },
    PublisherDetail {
        publisher_id: 37,
        dataset: "OPRA.PILLAR",
        venue: "MXOP",
        description: "OPRA - MEMX Options",
    },
    PublisherDetail {
        publisher_id: 38,
        dataset: "IEXG.TOPS",
        venue: "IEXG",
        description: "IEX TOPS",
    },
    PublisherDetail {
        publisher_id: 39,
        dataset: "DBEQ.BASIC",
        venue: "XCHI",
        description: "DBEQ Basic - NYSE Texas",
    },
    PublisherDetail {
        publisher_id: 40,
        dataset: "DBEQ.BASIC",
        venue: "XCIS",
        description: "DBEQ Basic - NYSE National",
    },
    PublisherDetail {
        publisher_id: 41,
        dataset: "DBEQ.BASIC",
        venue: "IEXG",
        description: "DBEQ Basic - IEX",
    },
    PublisherDetail {
        publisher_id: 42,
        dataset: "DBEQ.BASIC",
        venue: "EPRL",
        description: "DBEQ Basic - MIAX Pearl",
    },
    PublisherDetail {
        publisher_id: 43,
        dataset: "ARCX.PILLAR",
        venue: "ARCX",
        description: "NYSE Arca Integrated",
    },
    PublisherDetail {
        publisher_id: 44,
        dataset: "XNYS.BBO",
        venue: "XNYS",
        description: "NYSE BBO",
    },
    PublisherDetail {
        publisher_id: 45,
        dataset: "XNYS.TRADES",
        venue: "XNYS",
        description: "NYSE Trades",
    },
    PublisherDetail {
        publisher_id: 46,
        dataset: "XNAS.QBBO",
        venue: "XNAS",
        description: "Nasdaq QBBO",
    },
    PublisherDetail {
        publisher_id: 47,
        dataset: "XNAS.NLS",
        venue: "XNAS",
        description: "Nasdaq Trades",
    },
    PublisherDetail {
        publisher_id: 48,
        dataset: "EQUS.PLUS",
        venue: "XCHI",
        description: "Databento US Equities Plus - NYSE Texas",
    },
    PublisherDetail {
        publisher_id: 49,
        dataset: "EQUS.PLUS",
        venue: "XCIS",
        description: "Databento US Equities Plus - NYSE National",
    },
    PublisherDetail {
        publisher_id: 50,
        dataset: "EQUS.PLUS",
        venue: "IEXG",
        description: "Databento US Equities Plus - IEX",
    },
    PublisherDetail {
        publisher_id: 51,
        dataset: "EQUS.PLUS",
        venue: "EPRL",
        description: "Databento US Equities Plus - MIAX Pearl",
    },
    PublisherDetail {
        publisher_id: 52,
        dataset: "EQUS.PLUS",
        venue: "XNAS",
        description: "Databento US Equities Plus - Nasdaq",
    },
    PublisherDetail {
        publisher_id: 53,
        dataset: "EQUS.PLUS",
        venue: "XNYS",
        description: "Databento US Equities Plus - NYSE",
    },
    PublisherDetail {
        publisher_id: 54,
        dataset: "EQUS.PLUS",
        venue: "FINN",
        description: "Databento US Equities Plus - FINRA/Nasdaq TRF Carteret",
    },
    PublisherDetail {
        publisher_id: 55,
        dataset: "EQUS.PLUS",
        venue: "FINY",
        description: "Databento US Equities Plus - FINRA/NYSE TRF",
    },
    PublisherDetail {
        publisher_id: 56,
        dataset: "EQUS.PLUS",
        venue: "FINC",
        description: "Databento US Equities Plus - FINRA/Nasdaq TRF Chicago",
    },
    PublisherDetail {
        publisher_id: 57,
        dataset: "IFEU.IMPACT",
        venue: "IFEU",
        description: "ICE Europe Commodities",
    },
    PublisherDetail {
        publisher_id: 58,
        dataset: "NDEX.IMPACT",
        venue: "NDEX",
        description: "ICE Endex",
    },
    PublisherDetail {
        publisher_id: 59,
        dataset: "DBEQ.BASIC",
        venue: "DBEQ",
        description: "Databento US Equities Basic - Consolidated",
    },
    PublisherDetail {
        publisher_id: 60,
        dataset: "EQUS.PLUS",
        venue: "EQUS",
        description: "EQUS Plus - Consolidated",
    },
    PublisherDetail {
        publisher_id: 61,
        dataset: "OPRA.PILLAR",
        venue: "SPHR",
        description: "OPRA - MIAX Sapphire",
    },
    PublisherDetail {
        publisher_id: 62,
        dataset: "EQUS.ALL",
        venue: "XCHI",
        description: "Databento US Equities (All Feeds) - NYSE Texas",
    },
    PublisherDetail {
        publisher_id: 63,
        dataset: "EQUS.ALL",
        venue: "XCIS",
        description: "Databento US Equities (All Feeds) - NYSE National",
    },
    PublisherDetail {
        publisher_id: 64,
        dataset: "EQUS.ALL",
        venue: "IEXG",
        description: "Databento US Equities (All Feeds) - IEX",
    },
    PublisherDetail {
        publisher_id: 65,
        dataset: "EQUS.ALL",
        venue: "EPRL",
        description: "Databento US Equities (All Feeds) - MIAX Pearl",
    },
    PublisherDetail {
        publisher_id: 66,
        dataset: "EQUS.ALL",
        venue: "XNAS",
        description: "Databento US Equities (All Feeds) - Nasdaq",
    },
    PublisherDetail {
        publisher_id: 67,
        dataset: "EQUS.ALL",
        venue: "XNYS",
        description: "Databento US Equities (All Feeds) - NYSE",
    },
    PublisherDetail {
        publisher_id: 68,
        dataset: "EQUS.ALL",
        venue: "FINN",
        description: "Databento US Equities (All Feeds) - FINRA/Nasdaq TRF Carteret",
    },
    PublisherDetail {
        publisher_id: 69,
        dataset: "EQUS.ALL",
        venue: "FINY",
        description: "Databento US Equities (All Feeds) - FINRA/NYSE TRF",
    },
    PublisherDetail {
        publisher_id: 70,
        dataset: "EQUS.ALL",
        venue: "FINC",
        description: "Databento US Equities (All Feeds) - FINRA/Nasdaq TRF Chicago",
    },
    PublisherDetail {
        publisher_id: 71,
        dataset: "EQUS.ALL",
        venue: "BATS",
        description: "Databento US Equities (All Feeds) - Cboe BZX",
    },
    PublisherDetail {
        publisher_id: 72,
        dataset: "EQUS.ALL",
        venue: "BATY",
        description: "Databento US Equities (All Feeds) - Cboe BYX",
    },
    PublisherDetail {
        publisher_id: 73,
        dataset: "EQUS.ALL",
        venue: "EDGA",
        description: "Databento US Equities (All Feeds) - Cboe EDGA",
    },
    PublisherDetail {
        publisher_id: 74,
        dataset: "EQUS.ALL",
        venue: "EDGX",
        description: "Databento US Equities (All Feeds) - Cboe EDGX",
    },
    PublisherDetail {
        publisher_id: 75,
        dataset: "EQUS.ALL",
        venue: "XBOS",
        description: "Databento US Equities (All Feeds) - Nasdaq BX",
    },
    PublisherDetail {
        publisher_id: 76,
        dataset: "EQUS.ALL",
        venue: "XPSX",
        description: "Databento US Equities (All Feeds) - Nasdaq PSX",
    },
    PublisherDetail {
        publisher_id: 77,
        dataset: "EQUS.ALL",
        venue: "MEMX",
        description: "Databento US Equities (All Feeds) - MEMX",
    },
    PublisherDetail {
        publisher_id: 78,
        dataset: "EQUS.ALL",
        venue: "XASE",
        description: "Databento US Equities (All Feeds) - NYSE American",
    },
    PublisherDetail {
        publisher_id: 79,
        dataset: "EQUS.ALL",
        venue: "ARCX",
        description: "Databento US Equities (All Feeds) - NYSE Arca",
    },
    PublisherDetail {
        publisher_id: 80,
        dataset: "EQUS.ALL",
        venue: "LTSE",
        description: "Databento US Equities (All Feeds) - Long-Term Stock Exchange",
    },
    PublisherDetail {
        publisher_id: 81,
        dataset: "XNAS.BASIC",
        venue: "XNAS",
        description: "Nasdaq Basic - Nasdaq",
    },
    PublisherDetail {
        publisher_id: 82,
        dataset: "XNAS.BASIC",
        venue: "FINN",
        description: "Nasdaq Basic - FINRA/Nasdaq TRF Carteret",
    },
    PublisherDetail {
        publisher_id: 83,
        dataset: "XNAS.BASIC",
        venue: "FINC",
        description: "Nasdaq Basic - FINRA/Nasdaq TRF Chicago",
    },
    PublisherDetail {
        publisher_id: 84,
        dataset: "IFEU.IMPACT",
        venue: "XOFF",
        description: "ICE Europe - Off-Market Trades",
    },
    PublisherDetail {
        publisher_id: 85,
        dataset: "NDEX.IMPACT",
        venue: "XOFF",
        description: "ICE Endex - Off-Market Trades",
    },
    PublisherDetail {
        publisher_id: 86,
        dataset: "XNAS.NLS",
        venue: "XBOS",
        description: "Nasdaq NLS - Nasdaq BX",
    },
    PublisherDetail {
        publisher_id: 87,
        dataset: "XNAS.NLS",
        venue: "XPSX",
        description: "Nasdaq NLS - Nasdaq PSX",
    },
    PublisherDetail {
        publisher_id: 88,
        dataset: "XNAS.BASIC",
        venue: "XBOS",
        description: "Nasdaq Basic - Nasdaq BX",
    },
    PublisherDetail {
        publisher_id: 89,
        dataset: "XNAS.BASIC",
        venue: "XPSX",
        description: "Nasdaq Basic - Nasdaq PSX",
    },
    PublisherDetail {
        publisher_id: 90,
        dataset: "EQUS.SUMMARY",
        venue: "EQUS",
        description: "Databento Equities Summary",
    },
    PublisherDetail {
        publisher_id: 91,
        dataset: "XCIS.TRADESBBO",
        venue: "XCIS",
        description: "NYSE National Trades and BBO",
    },
    PublisherDetail {
        publisher_id: 92,
        dataset: "XNYS.TRADESBBO",
        venue: "XNYS",
        description: "NYSE Trades and BBO",
    },
    PublisherDetail {
        publisher_id: 93,
        dataset: "XNAS.BASIC",
        venue: "EQUS",
        description: "Nasdaq Basic - Consolidated",
    },
    PublisherDetail {
        publisher_id: 94,
        dataset: "EQUS.ALL",
        venue: "EQUS",
        description: "Databento US Equities (All Feeds) - Consolidated",
    },
    PublisherDetail {
        publisher_id: 95,
        dataset: "EQUS.MINI",
        venue: "EQUS",
        description: "Databento US Equities Mini",
    },
    PublisherDetail {
        publisher_id: 96,
        dataset: "XNYS.TRADES",
        venue: "EQUS",
        description: "NYSE Trades - Consolidated",
    },
    PublisherDetail {
        publisher_id: 97,
        dataset: "IFUS.IMPACT",
        venue: "IFUS",
        description: "ICE Futures US",
    },
    PublisherDetail {
        publisher_id: 98,
        dataset: "IFUS.IMPACT",
        venue: "XOFF",
        description: "ICE Futures US - Off-Market Trades",
    },
    PublisherDetail {
        publisher_id: 99,
        dataset: "IFLL.IMPACT",
        venue: "IFLL",
        description: "ICE Europe Financials",
    },
    PublisherDetail {
        publisher_id: 100,
        dataset: "IFLL.IMPACT",
        venue: "XOFF",
        description: "ICE Europe Financials - Off-Market Trades",
    },
    PublisherDetail {
        publisher_id: 101,
        dataset: "XEUR.EOBI",
        venue: "XEUR",
        description: "Eurex EOBI",
    },
    PublisherDetail {
        publisher_id: 102,
        dataset: "XEEE.EOBI",
        venue: "XEEE",
        description: "European Energy Exchange EOBI",
    },
    PublisherDetail {
        publisher_id: 103,
        dataset: "XEUR.EOBI",
        venue: "XOFF",
        description: "Eurex EOBI - Off-Market Trades",
    },
    PublisherDetail {
        publisher_id: 104,
        dataset: "XEEE.EOBI",
        venue: "XOFF",
        description: "European Energy Exchange EOBI - Off-Market Trades",
    },
]

HistoricalClient::metadata::list_datasets

List all available dataset IDs on Databento.

Use this method to list the dataset IDs (string identifiers) of all available datasets, so you can use other methods which take the dataset parameter.

Constants for dataset IDs are also available in databento::dbn::datasets.

Parameters

date_range
Option<DateRange>
The UTC date request range with an inclusive start date and an exclusive end date. If None, all available dates are included.

Returns

Vec<String>

A list of dataset IDs.

API method
pub async fn list_datasets(
    &mut self,
    date_range: Option<DateRange>,
) -> databento::Result<Vec<String>>;
Example usage
use databento::HistoricalClient;
use time::{macros::date, Duration};

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let dataset_codes = client
    .metadata()
    .list_datasets(Some(date!(2024 - 01 - 28).into()))
    .await?;
println!("{dataset_codes:#?}");
Example response
[
    "ARCX.PILLAR",
    "BATS.PITCH",
    "BATY.PITCH",
    "DBEQ.BASIC",
    "EDGA.PITCH",
    "EDGX.PITCH",
    "EPRL.DOM",
    "EQUS.MINI",
    "GLBX.MDP3",
    "IEXG.TOPS",
    "IFEU.IMPACT",
    "IFUS.IMPACT",
    "MEMX.MEMOIR",
    "NDEX.IMPACT",
    "OPRA.PILLAR",
    "XASE.PILLAR",
    "XBOS.ITCH",
    "XCHI.PILLAR",
    "XCIS.TRADESBBO",
    "XNAS.ITCH",
    "XNYS.PILLAR",
    "XPSX.ITCH",
]

HistoricalClient::metadata::list_schemas

List all available schemas for a dataset.

Parameters

dataset
&str
The dataset code (string identifier). Must be one of the values from list_datasets.

Returns

Vec<Schema>

A list of available data schemas for the dataset.

API method
pub async fn list_schemas(
    &mut self,
    dataset: &str,
) -> databento::Result<Vec<Schema>>;
Example usage
use databento::HistoricalClient;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let schemas =
    client.metadata().list_schemas("XNAS.ITCH").await?;
println!("{schemas:#?}");
Example response
[
    Mbo,
    Mbp1,
    Mbp10,
    Bbo1S,
    Bbo1M,
    Tbbo,
    Trades,
    Ohlcv1S,
    Ohlcv1M,
    Ohlcv1H,
    Ohlcv1D,
    Definition,
    Statistics,
    Status,
    Imbalance,
]

HistoricalClient::metadata::list_fields

List all fields for a schema and encoding.

Parameters

params
ListFieldsParams
See ListFieldsParams for details.

Returns

Vec<FieldDetail>

A list of field details objects, where FieldDetail is:

name
String
The name of the field.
type_name
String
The type of the field.
API method
pub async fn list_fields(
    &mut self,
    params: &ListFieldsParams,
) -> databento::Result<Vec<FieldDetail>>;
Example usage
use databento::{
    dbn::{Encoding, Schema},
    historical::metadata::ListFieldsParams,
    HistoricalClient,
};

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let fields = client
    .metadata()
    .list_fields(
        &ListFieldsParams::builder()
            .schema(Schema::Trades)
            .encoding(Encoding::Dbn)
            .build(),
    )
    .await?;
println!("{fields:#?}");
Example response
[
    FieldDetail {
        name: "length",
        type_name: "uint8_t",
    },
    FieldDetail {
        name: "rtype",
        type_name: "uint8_t",
    },
    FieldDetail {
        name: "publisher_id",
        type_name: "uint16_t",
    },
    FieldDetail {
        name: "instrument_id",
        type_name: "uint32_t",
    },
    FieldDetail {
        name: "ts_event",
        type_name: "uint64_t",
    },
    FieldDetail {
        name: "price",
        type_name: "int64_t",
    },
    FieldDetail {
        name: "size",
        type_name: "uint32_t",
    },
    FieldDetail {
        name: "action",
        type_name: "char",
    },
    FieldDetail {
        name: "side",
        type_name: "char",
    },
    FieldDetail {
        name: "flags",
        type_name: "uint8_t",
    },
    FieldDetail {
        name: "depth",
        type_name: "uint8_t",
    },
    FieldDetail {
        name: "ts_recv",
        type_name: "uint64_t",
    },
    FieldDetail {
        name: "ts_in_delta",
        type_name: "int32_t",
    },
    FieldDetail {
        name: "sequence",
        type_name: "uint32_t",
    },
]

HistoricalClient::metadata::list_unit_prices

List unit prices for each data schema in US dollars per gigabyte.

Parameters

dataset
&str
The dataset code (string identifier). Must be one of the values from list_datasets.

Returns

Vec<UnitPricesForMode>

A list of objects with the unit prices for a feed mode, where UnitPricesForMode is:

mode
FeedMode
The feed mode.
unit_prices
HashMap<Schema, f64>
The map of unit prices in US dollars by schema.
API method
pub async fn list_unit_prices(
    &mut self,
    dataset: &str,
) -> databento::Result<Vec<UnitPricesForMode>>;
Example usage
use databento::{
    historical::metadata::FeedMode, HistoricalClient,
};

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let unit_prices =
    client.metadata().list_unit_prices("OPRA.PILLAR").await?;
println!("{unit_prices:#?}");
Example response
[
    UnitPricesForMode {
        mode: Historical,
        unit_prices: {
            Cbbo1S: 2.0,
            Statistics: 11.0,
            Definition: 5.0,
            Status: 5.0,
            Cmbp1: 0.16,
            Tcbbo: 210.0,
            Ohlcv1D: 600.0,
            Cbbo1M: 2.0,
            Trades: 280.0,
            Ohlcv1S: 280.0,
            Ohlcv1M: 280.0,
            Ohlcv1H: 600.0,
        },
    },
    UnitPricesForMode {
        mode: HistoricalStreaming,
        unit_prices: {
            Trades: 280.0,
            Statistics: 11.0,
            Ohlcv1D: 600.0,
            Definition: 5.0,
            Status: 5.0,
            Ohlcv1S: 280.0,
            Cbbo1S: 2.0,
            Tcbbo: 210.0,
            Cmbp1: 0.16,
            Cbbo1M: 2.0,
            Ohlcv1M: 280.0,
            Ohlcv1H: 600.0,
        },
    },
    UnitPricesForMode {
        mode: Live,
        unit_prices: {
            Status: 6.0,
            Ohlcv1H: 720.0,
            Ohlcv1D: 720.0,
            Definition: 6.0,
            Trades: 336.0,
            Cmbp1: 0.2,
            Statistics: 13.2,
            Tcbbo: 252.0,
            Ohlcv1M: 336.0,
            Cbbo1S: 2.4,
            Ohlcv1S: 336.0,
            Cbbo1M: 2.4,
        },
    },
]

HistoricalClient::metadata::get_dataset_condition

Get the dataset condition from Databento.

Use this method to discover data availability and quality.

Parameters

params
GetDatasetConditionParams
See GetDatasetConditionParams for details.

Returns

Vec<DatasetConditionDetail>

A list of conditions per date, where DatasetConditionDetail is:

date
time::Date
The UTC date of the described data.
condition
DatasetCondition
The condition code describing the quality and availability of the data on the given day. Possible values are listed below.
last_modified_date
Option<time::Date>
The UTC date when any schema in the dataset on the given day was last generated or modified. Will be None when condition is Missing.

Possible values for condition:

  • Available: the data is available with no known issues
  • Degraded: the data is available, but there may be missing data or other correctness issues
  • Pending: the data is not yet available, but may be available soon
  • Missing: the data is not available
API method
pub async fn get_dataset_condition(
    &mut self,
    params: &GetDatasetConditionParams,
) -> databento::Result<Vec<DatasetConditionDetail>>;
Example usage
use databento::{
    historical::metadata::GetDatasetConditionParams,
    HistoricalClient,
};
use time::macros::date;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let conditions = client
    .metadata()
    .get_dataset_condition(
        &GetDatasetConditionParams::builder()
            .dataset("GLBX.MDP3")
            .date_range((
                date!(2019 - 06 - 06),
                date!(2019 - 06 - 10),
            ))
            .build(),
    )
    .await?;
println!("{conditions:#?}");
Example response
[
    DatasetConditionDetail {
        date: 2019-06-06,
        condition: Available,
        last_modified_date: Some(
            2024-05-13,
        ),
    },
    DatasetConditionDetail {
        date: 2019-06-07,
        condition: Available,
        last_modified_date: Some(
            2024-05-13,
        ),
    },
    DatasetConditionDetail {
        date: 2019-06-09,
        condition: Available,
        last_modified_date: Some(
            2024-05-13,
        ),
    },
    DatasetConditionDetail {
        date: 2019-06-10,
        condition: Available,
        last_modified_date: Some(
            2024-05-13,
        ),
    },
]

HistoricalClient::metadata::get_dataset_range

Get the available range for the dataset given the user's entitlements.

Use this method to discover data availability. The start and end values in the response can be used with the timeseries::get_range and batch::submit_job endpoints.

Parameters

dataset
&str
The dataset code (string identifier). Must be one of the values from list_datasets.

Returns

DatasetRange

The available range for the dataset.

start
time::OffsetDateTime
The inclusive UTC start timestamp of the available range.
end
time::OffsetDateTime
The exclusive UTC end timestamp of the available range.
range_by_schema
HashMap<Schema, DateTimeRange>
A mapping of schema names to per-schema start and end timestamps.
API method
pub async fn get_dataset_range(
    &mut self,
    dataset: &str,
) -> databento::Result<DatasetRange>;
Example usage
use databento::HistoricalClient;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let range =
    client.metadata().get_dataset_range("XNAS.ITCH").await?;
println!("{range:#?}");
Example response
DatasetRange {
    start: 2018-05-01 0:00:00.0 +00:00:00,
    end: 2025-10-20 20:30:00.0 +00:00:00,
    range_by_schema: {
        Ohlcv1M: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Trades: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Definition: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Statistics: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Status: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Tbbo: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Mbp10: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Imbalance: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Ohlcv1H: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:00:00.0 +00:00:00,
        },
        Bbo1M: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Mbo: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Mbp1: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Ohlcv1D: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-18 0:00:00.0 +00:00:00,
        },
        Bbo1S: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
        Ohlcv1S: DateTimeRange {
            start: 2018-05-01 0:00:00.0 +00:00:00,
            end: 2025-10-20 20:30:00.0 +00:00:00,
        },
    },
}

HistoricalClient::metadata::get_record_count

Get the record count of the time series data query.

This method may not be accurate for time ranges that are not discrete multiples of 10 minutes, potentially over-reporting the number of records in such cases. The definition schema is only accurate for discrete multiples of 24 hours.

Parameters

params
GetRecordCountParams
See GetRecordCountParams for details.

Returns

u64

The record count.

API method
pub async fn get_record_count(
    &mut self,
    params: &GetRecordCountParams,
) -> databento::Result<u64>;
Example usage
use databento::{
    dbn::Schema, historical::metadata::GetRecordCountParams,
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let record_count = client
    .metadata()
    .get_record_count(
        &GetRecordCountParams::builder()
            .dataset("GLBX.MDP3")
            .date_time_range((
                datetime!(2022-01-06 12:00 UTC),
                datetime!(2022-03-10 00:00 UTC),
            ))
            .symbols("ESM2")
            .schema(Schema::Mbo)
            .build(),
    )
    .await?;
println!("{record_count}");
Example response
86913546

HistoricalClient::metadata::get_billable_size

Get the billable uncompressed raw binary size for historical streaming or batched files.

This method may not be accurate for time ranges that are not discrete multiples of 10 minutes, potentially over-reporting the size in such cases. The definition schema is only accurate for discrete multiples of 24 hours.

Info

The amount billed will be based on the actual amount of bytes sent; see our pricing documentation for more details.

Parameters

params
GetBillableSizeParams
See GetBillableSizeParams for details.

Returns

u64

The size in number of bytes used for billing.

API method
pub async fn get_billable_size(
    &mut self,
    params: &GetBillableSizeParams,
) -> databento::Result<u64>;
Example usage
use databento::{
    dbn::Schema, historical::metadata::GetBillableSizeParams,
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let billable_size = client
    .metadata()
    .get_billable_size(
        &GetBillableSizeParams::builder()
            .dataset("GLBX.MDP3")
            .date_time_range((
                datetime!(2022-06-06 00:00 UTC),
                datetime!(2022-06-10 12:10 UTC),
            ))
            .symbols("ESM2")
            .schema(Schema::Trades)
            .build(),
    )
    .await?;
println!("{billable_size}");
Example response
99219648

HistoricalClient::metadata::get_cost

Get the cost in US dollars for a historical streaming or batch download request. This cost respects any discounts provided by flat rate plans.

This method may not be accurate for time ranges that are not discrete multiples of 10 minutes, potentially over-reporting the cost in such cases. The definition schema is only accurate for discrete multiples of 24 hours.

Info

The amount billed will be based on the actual amount of bytes sent; see our pricing documentation for more details.

Parameters

params
GetCostParams
See GetCostParams for details.

Returns

f64

The cost in US dollars.

API method
pub async fn get_cost(
    &mut self,
    params: &GetCostParams,
) -> databento::Result<f64>;
Example usage
use databento::{
    dbn::Schema, historical::metadata::GetCostParams,
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let cost = client
    .metadata()
    .get_cost(
        &GetCostParams::builder()
            .dataset("GLBX.MDP3")
            .date_time_range((
                datetime!(2022-06-06 00:00 UTC),
                datetime!(2022-06-10 12:10 UTC),
            ))
            .symbols("ESM2")
            .schema(Schema::Trades)
            .build(),
    )
    .await?;
println!("{cost:.4}");
Example response
2.5874

Time series

HistoricalClient::timeseries::get_range

Makes a streaming request for time series data from Databento.

This is the primary method for getting historical market data, instrument definitions, and status data directly into your application.

This method returns an async decoder. To persist the data immediately, use timeseries::get_range_to_file. For large requests, consider using batch::submit_job instead.

Parameters

params
GetRangeParams
See GetRangeParams for details.

Returns

An AsyncDbnDecoder object for incrementally decoding the records from the stream.

A full list of fields for each schema is available through metadata::list_fields.

API method
pub async fn get_range(
    &mut self,
    params: &GetRangeParams,
) -> databento::Result<AsyncDbnDecoder>;
Example usage
use std::num::NonZeroU64;

use databento::{
    dbn::{Schema, TradeMsg},
    historical::timeseries::GetRangeParams,
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
    .timeseries()
    .get_range(
        &GetRangeParams::builder()
            .dataset("GLBX.MDP3")
            .date_time_range((
                datetime!(2022-06-06 00:00 UTC),
                datetime!(2022-06-10 12:10 UTC),
            ))
            .symbols("ESM2")
            .schema(Schema::Trades)
            .limit(NonZeroU64::new(1))
            .build(),
    )
    .await?;
let trade = decoder.decode_record::<TradeMsg>().await?.unwrap();
println!("{trade:#?}");
Example response
TradeMsg {
    hd: RecordHeader {
        length: 12,
        rtype: Mbp0,
        publisher_id: GlbxMdp3Glbx,
        instrument_id: 3403,
        ts_event: 1654473600070033767,
    },
    price: 4108.500000000,
    size: 1,
    action: 'T',
    side: 'A',
    flags: 0,
    depth: 0,
    ts_recv: 1654473600070314216,
    ts_in_delta: 18681,
    sequence: 157862,
}
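
The example above decodes only one record because of the limit parameter. A minimal sketch for draining the entire stream with the same decoder (assuming the limit is removed):

// Iterate until decode_record returns None, i.e. the end of the stream.
while let Some(trade) = decoder.decode_record::<TradeMsg>().await? {
    // price is a raw fixed-precision integer (1 unit = 1e-9)
    println!("{} @ {}", trade.size, trade.price);
}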

HistoricalClient::timeseries::get_range_to_file

Makes a streaming request for time series data from Databento.

This is the primary method for getting historical market data, instrument definitions, and status data directly into your application.

This method persists the stream to a file at the given path before returning an async decoder on that file. For large requests, consider using batch::submit_job instead.

Parameters

params
GetRangeToFileParams
See GetRangeToFileParams for details.

Returns

An AsyncDbnDecoder object for incrementally decoding the records from the file.

A full list of fields for each schema is available through metadata::list_fields.

API method
pub async fn get_range_to_file(
    &mut self,
    params: &GetRangeToFileParams,
) -> databento::Result<
    AsyncDbnDecoder<ZstdDecoder<BufReader<File>>>,
>;
Example usage
use std::num::NonZeroU64;

use databento::{
    dbn::{Schema, TradeMsg},
    historical::timeseries::GetRangeToFileParams,
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
    .timeseries()
    .get_range_to_file(
        &GetRangeToFileParams::builder()
            .dataset("GLBX.MDP3")
            .date_time_range((
                datetime!(2022-06-06 00:00 UTC),
                datetime!(2022-06-10 12:10 UTC),
            ))
            .symbols("ESM2")
            .schema(Schema::Trades)
            .limit(NonZeroU64::new(1))
            .path("ESM2_20220606-20220610.dbn.zst")
            .build(),
    )
    .await?;
let trade = decoder.decode_record::<TradeMsg>().await?.unwrap();
println!("{trade:#?}");
Example response
TradeMsg {
    hd: RecordHeader {
        length: 12,
        rtype: Mbp0,
        publisher_id: GlbxMdp3Glbx,
        instrument_id: 3403,
        ts_event: 1654473600070033767,
    },
    price: 4108.500000000,
    size: 1,
    action: 'T',
    side: 'A',
    flags: 0,
    depth: 0,
    ts_recv: 1654473600070314216,
    ts_in_delta: 18681,
    sequence: 157862,
}

Symbology

HistoricalClient::symbology::resolve

Resolve a list of symbols from an input symbology type to an output symbology type.

For example, resolving a raw symbol to an instrument ID: ESM2 → 3403.

Parameters

params
ResolveParams
See ResolveParams for details.

Returns

Resolution

The results for the symbology resolution.

mappings
HashMap<String, Vec<MappingInterval>>
The symbol mappings for historical data.
partial
Vec<String>
The symbols that resolved for only part of the query time range.
not_found
Vec<String>
The symbols that did not resolve for any day in the query time range.

Can be converted to a TsSymbolMap.

API method
pub async fn resolve(
    &mut self,
    params: &ResolveParams,
) -> databento::Result<Resolution>;
Example usage
use databento::{
    historical::symbology::ResolveParams, HistoricalClient,
};
use time::macros::date;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let resolution = client
    .symbology()
    .resolve(
        &ResolveParams::builder()
            .dataset("GLBX.MDP3")
            .date_range((
                date!(2022 - 06 - 01),
                date!(2022 - 06 - 30),
            ))
            .symbols("ESM2")
            .build(),
    )
    .await?;
println!("{resolution:#?}");
Example response
Resolution {
    mappings: {
        "ESM2": [
            MappingInterval {
                start_date: 2022-06-01,
                end_date: 2022-06-26,
                symbol: "3403",
            },
        ],
    },
    partial: [],
    not_found: [],
    stype_in: RawSymbol,
    stype_out: InstrumentId,
}
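
A minimal sketch (field names taken from the example response above) for finding the instrument ID in effect on a particular date, treating each mapping interval as half-open [start_date, end_date):

use time::macros::date;

let target = date!(2022 - 06 - 15);
if let Some(intervals) = resolution.mappings.get("ESM2") {
    for interval in intervals {
        if interval.start_date <= target && target < interval.end_date {
            println!("ESM2 -> {}", interval.symbol);
        }
    }
}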

Batch downloads

Batch downloads allow you to download flat files directly from within your portal. For more information, see Streaming vs. batch download.

HistoricalClient::batch::submit_job

Make a batch download job request for flat files.

Once a request is submitted, our system processes the request and prepares the batch files in the background. The status of your request and the resulting files can be accessed from the Download center in your user portal, or downloaded with batch::download.

This method takes longer than a streaming request, but it is advantageous for larger requests, as the prepared files can be downloaded multiple times with no additional cost for downloads after the first.

Related: batch::list_jobs.

Parameters

params
SubmitJobParams
See SubmitJobParams for details.

Returns

BatchJob

The description of the submitted batch job.

id
String
The unique job ID for the request.
user_id
Option<String>
The user ID of the user who made the request.
cost_usd
Option<f64>
The cost of the job in US dollars (None until the job is done processing).
dataset
String
The dataset code (string identifier).
symbols
Symbols
The list of symbols specified in the request.
stype_in
SType
The symbology type of input symbols.
stype_out
SType
The symbology type of output symbols.
schema
Schema
The data record schema.
start
time::OffsetDateTime
The inclusive UTC timestamp start of request time range.
end
time::OffsetDateTime
The exclusive UTC timestamp end of request time range.
limit
Option<NonZeroU64>
The maximum number of records to return.
encoding
Encoding
The data encoding.
compression
Compression
The data compression mode.
pretty_px
bool
If prices are formatted to the correct scale (using the fixed-precision scalar 1e-9).
pretty_ts
bool
If timestamps are formatted as ISO 8601 strings.
map_symbols
bool
If a symbol field is included with each text-encoded record.
split_symbols
bool
If files are split by raw symbol.
split_duration
Option<SplitDuration>
The maximum time interval for an individual file before splitting into multiple files. A week starts on Sunday UTC. Defaults to Day.
split_size
Option<NonZeroU64>
The maximum size for an individual file before splitting into multiple files.
delivery
Delivery
The delivery mechanism of the batch data. Only Download is supported at this time.
record_count
Option<u64>
The number of data records (None until the job is processed).
billed_size
Option<u64>
The size of the raw binary data used to process the batch job (used for billing purposes).
actual_size
Option<u64>
The total size of the result of the batch job after splitting and compression.
package_size
Option<u64>
The total size of the result of the batch job after any packaging (including metadata).
state
JobState
The current status of the batch job.
ts_received
time::OffsetDateTime
The UTC timestamp when Databento received the batch job.
ts_queued
Option<time::OffsetDateTime>
The UTC timestamp when the batch job was queued.
ts_process_start
Option<time::OffsetDateTime>
The UTC timestamp when the batch job began processing (if it's begun).
ts_process_done
Option<time::OffsetDateTime>
The UTC timestamp when the batch job finished processing (if it's finished).
ts_expiration
Option<time::OffsetDateTime>
The UTC timestamp when the batch job will expire from the Download center.
API method
pub async fn submit_job(
    &mut self,
    params: &SubmitJobParams,
) -> databento::Result<BatchJob>;
Example usage
use databento::{
    dbn::Schema, historical::batch::SubmitJobParams,
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let job = client
    .batch()
    .submit_job(
        &SubmitJobParams::builder()
            .dataset("GLBX.MDP3")
            .date_time_range((
                datetime!(2022-06-06 12:00 UTC),
                datetime!(2022-06-10 00:00 UTC),
            ))
            .symbols("ESM2")
            .schema(Schema::Trades)
            .build(),
    )
    .await?;
println!("{job:#?}");
Example response
BatchJob {
    id: "GLBX-20220720-BTW9J5HY5C",
    user_id: Some(
        "46PCMCVF",
    ),
    cost_usd: None,
    dataset: "GLBX.MDP3",
    symbols: Symbols(
        [
            "ESM2",
        ],
    ),
    stype_in: RawSymbol,
    stype_out: InstrumentId,
    schema: Trades,
    start: 2022-06-06 12:00:00.0 +00:00:00,
    end: 2022-06-10 0:00:00.0 +00:00:00,
    limit: None,
    encoding: Dbn,
    compression: Zstd,
    pretty_px: false,
    pretty_ts: false,
    map_symbols: false,
    split_duration: Some(Day),
    split_size: None,
    delivery: Download,
    record_count: None,
    billed_size: None,
    actual_size: None,
    package_size: None,
    state: Queued,
    ts_received: 2023-07-28 21:44:15.77437 +00:00:00,
    ts_queued: None,
    ts_process_start: None,
    ts_process_done: None,
    ts_expiration: None,
}

HistoricalClient::batch::list_jobs

List batch job details for the user account.

The job details will be sorted in order of ts_received.

Related: Download center.

Parameters

params
ListJobsParams
See ListJobsParams for details.

Returns

Vec<BatchJob>

A list of batch job details.

API method
pub async fn list_jobs(
    &mut self,
    params: &ListJobsParams,
) -> databento::Result<Vec<BatchJob>>;
Example usage
use databento::{
    historical::batch::{JobState, ListJobsParams},
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let jobs = client
    .batch()
    .list_jobs(
        &ListJobsParams::builder()
            .states(vec![
                JobState::Queued,
                JobState::Processing,
                JobState::Done,
            ])
            .since(datetime!(2025-01-01 00:00 UTC))
            .build(),
    )
    .await?;
println!("{jobs:#?}");
Example response
[
    BatchJob {
        id: "XNAS-20230704-NMN5T38NUD",
        user_id: Some(
            "NBPDLF33",
        ),
        cost_usd: Some(
            0.00075096637011,
        ),
        dataset: "XNAS.ITCH",
        symbols: Symbols(
            [
                "QQQ",
            ],
        ),
        stype_in: RawSymbol,
        stype_out: InstrumentId,
        schema: Ohlcv1M,
        start: 2023-06-01 0:00:00.0 +00:00:00,
        end: 2023-06-02 0:00:00.0 +00:00:00,
        limit: None,
        encoding: Csv,
        compression: None,
        pretty_px: false,
        pretty_ts: false,
        map_symbols: false,
        split_duration: Some(Day),
        split_size: None,
        delivery: Download,
        record_count: Some(
            1267,
        ),
        billed_size: Some(
            47432,
        ),
        actual_size: Some(
            71023,
        ),
        package_size: Some(
            74450,
        ),
        state: Done,
        ts_received: 2023-07-04 12:22:30.06656 +00:00:00,
        ts_queued: Some(
            2023-07-04 12:22:30.352526 +00:00:00,
        ),
        ts_process_start: Some(
            2023-07-04 12:22:44.992907 +00:00:00,
        ),
        ts_process_done: Some(
            2023-07-04 12:22:45.93711 +00:00:00,
        ),
        ts_expiration: Some(
            2023-08-03 12:22:45.93711 +00:00:00,
        ),
    },
    ...
]

HistoricalClient::batch::list_files

List files for a batch job.

This will include all data files and support files.

Related: Download center.

Parameters

job_id
&str
The batch job identifier.

Returns

Vec<BatchFileDesc>

The file details for the batch job.

filename
String
The file name.
size
u64
The size of the file in bytes.
hash
String
The SHA256 hash of the file.
urls
HashMap<String, String>
A map of download protocol to URL.
API method
pub async fn list_files(
    &mut self,
    job_id: &str,
) -> databento::Result<Vec<BatchFileDesc>>;
Example usage
use databento::HistoricalClient;
use time::macros::date;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let files = client
    .batch()
    .list_files("XNAS-20250108-VVS57U5PD8")
    .await?;
println!("{files:#?}");
Example response
[
    BatchFileDesc {
        filename: "manifest.json",
        size: 1889,
        hash: "sha256:9f43e431be88c403e73ce244bb2e94b293c9c05ac74d93a00443311fc0c9ef09",
        urls: {
            "https": "https://api.databento.com/v0/batch/download/NBPDLF33/XNAS-20250108-VVS57U5PD8/manifest.json",
            "ftp": "ftp://ftp.databento.com/NBPDLF33/XNAS-20250108-VVS57U5PD8/manifest.json",
        },
    },
    BatchFileDesc {
        filename: "condition.json",
        size: 122,
        hash: "sha256:43dba9f90ba29334f233de4f76541f3d78f5378ed4baff0227673a680fde95d7",
        urls: {
            "ftp": "ftp://ftp.databento.com/NBPDLF33/XNAS-20250108-VVS57U5PD8/condition.json",
            "https": "https://api.databento.com/v0/batch/download/NBPDLF33/XNAS-20250108-VVS57U5PD8/condition.json",
        },
    },
    BatchFileDesc {
        filename: "metadata.json",
        size: 699,
        hash: "sha256:001a0e2b8e285875b8f1ac1aa5fa3dfadc39b4b92f664b12d99734c6a1bae148",
        urls: {
            "https": "https://api.databento.com/v0/batch/download/NBPDLF33/XNAS-20250108-VVS57U5PD8/metadata.json",
            "ftp": "ftp://ftp.databento.com/NBPDLF33/XNAS-20250108-VVS57U5PD8/metadata.json",
        },
    },
    BatchFileDesc {
        filename: "symbology.json",
        size: 1753497,
        hash: "sha256:3f3908205ec8b24def7cb3589d9f1be523ec64b4ac63be1c706d8dfca2051d78",
        urls: {
            "https": "https://api.databento.com/v0/batch/download/NBPDLF33/XNAS-20250108-VVS57U5PD8/symbology.json",
            "ftp": "ftp://ftp.databento.com/NBPDLF33/XNAS-20250108-VVS57U5PD8/symbology.json",
        },
    },
    BatchFileDesc {
        filename: "xnas-itch-20250106.imbalance.dbn.zst",
        size: 88237480,
        hash: "sha256:7d0945aa1f04dad3e263237dcbdb7529ce7f0f41d902e468bb99d686754ce599",
        urls: {
            "https": "https://api.databento.com/v0/batch/download/NBPDLF33/XNAS-20250108-VVS57U5PD8/xnas-itch-20250106.imbalance.dbn.zst",
            "ftp": "ftp://ftp.databento.com/NBPDLF33/XNAS-20250108-VVS57U5PD8/xnas-itch-20250106.imbalance.dbn.zst",
        },
    },
]

HistoricalClient::batch::download

Download a batch job or a specific file to {output_dir}/{job_id}/.

Any necessary directories are created automatically if they do not already exist. The checksum of each downloaded file is verified, and failed downloads are retried.

Related: Download center.

Parameters

params
DownloadParams
See DownloadParams for details.

Returns

Vec<PathBuf>

A list of paths to the downloaded files.

API method
pub async fn download(
    &mut self,
    params: &DownloadParams,
) -> databento::Result<Vec<PathBuf>>;
Example usage
use std::path::PathBuf;

use databento::{
    historical::batch::DownloadParams, HistoricalClient,
};

let mut params = DownloadParams::builder()
    .output_dir(PathBuf::from("my_data"))
    .job_id("XNAS-20250108-VVS57U5PD8")
    .build();
let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
// Download all files for the batch job
let files = client.batch().download(&params).await?;
println!("{files:#?}");

// Download a single file from the batch job
params.filename_to_download = Some("metadata.json".to_owned());
let file = client.batch().download(&params).await?;
assert_eq!(file.len(), 1);
println!("{}", file[0].display());
Example response
[
    "my_data/XNAS-20230704-NMN5T38NUD/manifest.json",
    "my_data/XNAS-20230704-NMN5T38NUD/condition.json",
    "my_data/XNAS-20230704-NMN5T38NUD/metadata.json",
    "my_data/XNAS-20230704-NMN5T38NUD/symbology.json",
    "my_data/XNAS-20230704-NMN5T38NUD/xnas-itch-20230601.ohlcv-1m.csv",
]

Helpers

AsyncDbnDecoder

An object for working with DBN-encoded data. Typically this object is created when performing historical requests. However, it can also be created directly from DBN data on disk or in memory using the associated functions described below.

See also

The crate documentation for a comprehensive list of methods and implemented traits.

AsyncDbnDecoder::new

Create a new decoder from a DBN stream that implements AsyncReadExt. Immediately decodes the DBN Metadata.

Parameters

reader
R: tokio::io::AsyncReadExt + Unpin
A readable DBN byte stream.

Returns

An AsyncDbnDecoder object.

This function will return an error if it is unable to parse the metadata in reader or the input is encoded in a newer version of DBN.

API method
pub async fn new<R>(mut reader: R) -> dbn::Result<Self>
where
    R: tokio::io::AsyncReadExt + Unpin;
Example usage
use std::io::SeekFrom;

use databento::{
    dbn::{
        decode::{AsyncDbnDecoder, DbnMetadata},
        encode::{
            dbn::AsyncEncoder as AsyncDbnEncoder,
            AsyncEncodeRecord,
        },
        Schema, TradeMsg,
    },
    historical::timeseries::GetRangeParams,
    HistoricalClient,
};
use time::macros::datetime;
use tokio::{fs::OpenOptions, io::AsyncSeekExt};

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
    .timeseries()
    .get_range(
        &GetRangeParams::builder()
            .dataset("GLBX.MDP3")
            .date_time_range((
                datetime!(2022-06-06 00:00 UTC),
                datetime!(2022-06-07 00:00 UTC),
            ))
            .symbols("ESM2")
            .schema(Schema::Trades)
            .build(),
    )
    .await?;

// Save streamed data to .dbn
let path = "GLBX-ESM2-20220606.trades.dbn.zst";
let mut file = OpenOptions::new()
    .read(true)
    .write(true)
    .create(true)
    .truncate(true)
    .open(path)
    .await?;
let mut encoder =
    AsyncDbnEncoder::new(&mut file, decoder.metadata()).await?;
while let Some(trade) =
    decoder.decode_record::<TradeMsg>().await?
{
    encoder.encode_record(trade).await?;
}
encoder.flush().await?;

// Open saved data
file.seek(SeekFrom::Start(0)).await?;
let mut decoder = AsyncDbnDecoder::new(file).await?;
for _ in 0..5 {
    let trade =
        decoder.decode_record::<TradeMsg>().await?.unwrap();
    println!("{trade:?}");
}
Example response
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600070033767 }, price: 4108.500000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473600070314216, ts_in_delta: 18681, sequence: 157862 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600089830441 }, price: 4108.250000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473600090544076, ts_in_delta: 18604, sequence: 157922 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600807018955 }, price: 4108.250000000, size: 4, action: 'T', side: 'B', flags: 0, depth: 0, ts_recv: 1654473600807324169, ts_in_delta: 18396, sequence: 158072 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473601317385867 }, price: 4108.000000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473601317722490, ts_in_delta: 22043, sequence: 158111 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473601317385867 }, price: 4108.000000000, size: 7, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473601317736158, ts_in_delta: 17280, sequence: 158112 }

AsyncDbnDecoder::from_file

Create a new decoder from a DBN file. If the file is zstd-compressed, use from_zstd_file instead.

Parameters

path
impl AsRef<Path>
A path to an uncompressed DBN file.

Returns

An AsyncDbnDecoder object.

This function will return an error if it is unable to parse the metadata in the file or the input is encoded in a newer version of DBN.

API method
pub async fn from_file(
    path: impl AsRef<Path>,
) -> dbn::Result<Self>;
Example usage
use databento::{
    dbn::{
        decode::{AsyncDbnDecoder, DbnMetadata},
        encode::{
            dbn::AsyncEncoder as AsyncDbnEncoder,
            AsyncEncodeRecord,
        },
        Schema, TradeMsg,
    },
    historical::timeseries::GetRangeParams,
    HistoricalClient,
};
use time::macros::datetime;
use tokio::fs::File;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
    .timeseries()
    .get_range(
        &GetRangeParams::builder()
            .dataset("GLBX.MDP3")
            .date_time_range((
                datetime!(2022-06-06 00:00 UTC),
                datetime!(2022-06-07 00:00 UTC),
            ))
            .symbols("ESM2")
            .schema(Schema::Trades)
            .build(),
    )
    .await?;

// Save streamed data to .dbn
let path = "GLBX-ESM2-20220606.trades.dbn.zst";
let file = File::create(path).await?;
let mut encoder =
    AsyncDbnEncoder::new(file, decoder.metadata()).await?;
while let Some(trade) =
    decoder.decode_record::<TradeMsg>().await?
{
    encoder.encode_record(trade).await?;
}
encoder.flush().await?;

// Open saved data
let mut decoder = AsyncDbnDecoder::from_file(path).await?;
for _ in 0..5 {
    let trade =
        decoder.decode_record::<TradeMsg>().await?.unwrap();
    println!("{trade:?}");
}
Example response
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600070033767 }, price: 4108.500000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473600070314216, ts_in_delta: 18681, sequence: 157862 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600089830441 }, price: 4108.250000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473600090544076, ts_in_delta: 18604, sequence: 157922 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473600807018955 }, price: 4108.250000000, size: 4, action: 'T', side: 'B', flags: 0, depth: 0, ts_recv: 1654473600807324169, ts_in_delta: 18396, sequence: 158072 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473601317385867 }, price: 4108.000000000, size: 1, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473601317722490, ts_in_delta: 22043, sequence: 158111 }
TradeMsg { hd: RecordHeader { length: 12, rtype: Mbp0, publisher_id: GlbxMdp3Glbx, instrument_id: 3403, ts_event: 1654473601317385867 }, price: 4108.000000000, size: 7, action: 'T', side: 'A', flags: 0, depth: 0, ts_recv: 1654473601317736158, ts_in_delta: 17280, sequence: 158112 }

AsyncDbnDecoder::from_zstd_file

Create a new decoder from a Zstandard-compressed DBN file. If the file is uncompressed, use from_file instead.

Parameters

path
impl AsRef<Path>
A path to a Zstandard-compressed DBN file.

Returns

An AsyncDbnDecoder object.

This function will return an error if it is unable to parse the metadata in the file or the input is encoded in a newer version of DBN.

API method
pub async fn from_zstd_file(
    path: impl AsRef<Path>,
) -> dbn::Result<Self>;
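Usage mirrors from_file. A minimal sketch, assuming a Zstandard-compressed DBN file saved from an earlier request (the file name below is hypothetical):
use databento::dbn::{decode::AsyncDbnDecoder, TradeMsg};

// Hypothetical path to a Zstandard-compressed DBN file
let path = "GLBX-ESM2-20220606.trades.dbn.zst";
let mut decoder = AsyncDbnDecoder::from_zstd_file(path).await?;
while let Some(trade) =
    decoder.decode_record::<TradeMsg>().await?
{
    println!("{trade:?}");
}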

AsyncDbnDecoder::with_zstd

Create a new decoder from a Zstandard-compressed DBN stream that implements AsyncReadExt. If reader implements AsyncBufRead, it's better to use with_zstd_buffer to avoid unnecessary additional buffering.

Parameters

reader
R: tokio::io::AsyncReadExt + Unpin
A readable Zstandard-compressed DBN byte stream.

Returns

An AsyncDbnDecoder object.

This function will return an error if it is unable to parse the metadata in reader or the input is encoded in a newer version of DBN.

API method
pub async fn with_zstd<R>(mut reader: R) -> dbn::Result<Self>
where
    R: tokio::io::AsyncReadExt + Unpin;
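A minimal sketch, assuming a Zstandard-compressed DBN file on disk; tokio::fs::File implements AsyncReadExt, so it can be passed directly:
use databento::dbn::decode::AsyncDbnDecoder;
use tokio::fs::File;

// Hypothetical path to a Zstandard-compressed DBN file
let file = File::open("GLBX-ESM2-20220606.trades.dbn.zst").await?;
let mut decoder = AsyncDbnDecoder::with_zstd(file).await?;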

AsyncDbnDecoder::with_zstd_buffer

Create a new decoder from a buffered Zstandard-compressed DBN stream that implements AsyncBufReadExt.

Parameters

reader
R: tokio::io::AsyncBufReadExt + Unpin
A readable buffered Zstandard-compressed DBN byte stream.

Returns

An AsyncDbnDecoder object.

This function will return an error if it is unable to parse the metadata in reader or the input is encoded in a newer version of DBN.

API method
pub async fn with_zstd_buffer<R>(
    mut reader: R,
) -> dbn::Result<Self>
where
    R: tokio::io::AsyncBufReadExt + Unpin;
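A minimal sketch, assuming the same hypothetical file as above, wrapped in a tokio::io::BufReader so the stream implements AsyncBufReadExt:
use databento::dbn::decode::AsyncDbnDecoder;
use tokio::{fs::File, io::BufReader};

// Hypothetical path to a Zstandard-compressed DBN file
let file = File::open("GLBX-ESM2-20220606.trades.dbn.zst").await?;
let mut decoder =
    AsyncDbnDecoder::with_zstd_buffer(BufReader::new(file)).await?;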

AsyncDbnDecoder::decode_record

Decode a single record of a specific type. If the record type is unknown, such as when working with Live data where the stream can contain several different record types, use decode_record_ref.

Parameters

T
HasRType
A DBN record type.

Returns

A reference to the decoded record of type T or Ok(None) if the stream has been exhausted.

This function will return an error if the record is not of type T or there's an error reading from the input stream.

API method
pub async fn decode_record<T: HasRType>(
    &mut self,
) -> dbn::Result<Option<&T>>;
Example usage
use databento::{
    dbn::{OhlcvMsg, Schema},
    historical::timeseries::GetRangeParams,
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
    .timeseries()
    .get_range(
        &GetRangeParams::builder()
            .dataset("XNAS.ITCH")
            .date_time_range((
                datetime!(2023-08-07 00:00 UTC),
                datetime!(2023-08-08 00:00 UTC),
            ))
            .symbols("AAPL")
            .schema(Schema::Ohlcv1M)
            .build(),
    )
    .await?;
let bar = decoder.decode_record::<OhlcvMsg>().await?.unwrap();
println!("{bar:#?}");
Example response
OhlcvMsg {
    hd: RecordHeader {
        length: 14,
        rtype: Ohlcv1M,
        publisher_id: XnasItchXnas,
        instrument_id: 32,
        ts_event: 1691395200000000000,
    },
    open: 183.440000000,
    high: 183.460000000,
    low: 182.750000000,
    close: 183.300000000,
    volume: 2030,
}

AsyncDbnDecoder::decode_record_ref

Decode a single record of an unknown type.

Returns

A RecordRef—a wrapper around a record of a dynamic type—or Ok(None) if the stream has been exhausted.

This function will return an error if there's an error reading from the input stream.

API method
pub async fn decode_record_ref(
    &mut self,
) -> dbn::Result<Option<RecordRef>>;
Example usage
use databento::{
    dbn::{OhlcvMsg, SType, Schema},
    historical::timeseries::GetRangeParams,
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
    .timeseries()
    .get_range(
        &GetRangeParams::builder()
            .dataset("OPRA.PILLAR")
            .stype_in(SType::Parent)
            .date_time_range((
                datetime!(2023-08-07 00:00 UTC),
                datetime!(2023-08-08 00:00 UTC),
            ))
            .symbols("SPXW.OPT")
            .schema(Schema::Ohlcv1H)
            .build(),
    )
    .await?;
let rec = decoder.decode_record_ref().await?.unwrap();
let bar = rec.get::<OhlcvMsg>().unwrap();
println!("{bar:#?}");
Example response
OhlcvMsg {
    hd: RecordHeader {
        length: 14,
        rtype: Ohlcv1H,
        publisher_id: OpraPillarXcbo,
        instrument_id: 587251473,
        ts_event: 1691413200000000000,
    },
    open: 47.400000000,
    high: 47.500000000,
    low: 47.400000000,
    close: 47.500000000,
    volume: 16,
}

AsyncDbnDecoder::metadata

Get a reference to the decoded DBN Metadata.

Returns

A reference to the DBN Metadata.

API method
pub fn metadata(&self) -> &dbn::Metadata;
Example usage
use databento::{
    dbn::{decode::DbnMetadata, OhlcvMsg, Schema},
    historical::timeseries::GetRangeParams,
    HistoricalClient,
};
use time::macros::datetime;

let mut client =
    HistoricalClient::builder().key("$YOUR_API_KEY")?.build()?;
let mut decoder = client
    .timeseries()
    .get_range(
        &GetRangeParams::builder()
            .dataset("XNAS.ITCH")
            .date_time_range((
                datetime!(2023-08-07 00:00 UTC),
                datetime!(2023-08-08 00:00 UTC),
            ))
            .symbols("META")
            .schema(Schema::Ohlcv1H)
            .build(),
    )
    .await?;
println!("{:#?}", decoder.metadata());
Example response
Metadata {
    version: 3,
    dataset: "XNAS.ITCH",
    schema: Some(
        Ohlcv1H,
    ),
    start: 1691366400000000000,
    end: Some(
        1691452800000000000,
    ),
    limit: None,
    stype_in: Some(
        RawSymbol,
    ),
    stype_out: InstrumentId,
    ts_out: false,
    symbol_cstr_len: 71,
    symbols: [
        "META",
    ],
    partial: [],
    not_found: [],
    mappings: [
        SymbolMapping {
            raw_symbol: "META",
            intervals: [
                MappingInterval {
                    start_date: 2023-08-07,
                    end_date: 2023-08-08,
                    symbol: "6508",
                },
            ],
        },
    ],
}

Metadata

The contents of the header of a DBN stream.

See also

The crate documentation for a comprehensive list of methods and implemented traits.

Fields

version
u8
The DBN schema version.
dataset
String
The dataset code.
schema
Option<Schema>
The data record schema. Will be None for live data which can mix schemas.
start
u64
The timestamp of the start of request time range (inclusive) expressed as the number of nanoseconds since the UNIX epoch.
end
Option<NonZeroU64>
The timestamp of the end of request time range (exclusive) expressed as the number of nanoseconds since the UNIX epoch. Will be None for live data.
limit
Option<NonZeroU64>
The maximum number of records to return.
stype_in
Option<SType>
The symbology type of input symbols. Will be None for live data.
stype_out
SType
The symbology type of output symbols.
ts_out
bool
Whether the stream contains live data with the send timestamps appended to every record.
symbol_cstr_len
usize
The length of fixed-length symbol strings.
symbols
Vec<String>
The product symbols from the original request.
partial
Vec<String>
The symbols that did not resolve for at least one day in the query time range.
not_found
Vec<String>
The symbols that did not resolve for any day in the query time range.
mappings
Vec<SymbolMapping>
The symbol mappings for historical data.
API method
pub struct Metadata {
    pub version: u8,
    pub dataset: String,
    pub schema: Option<Schema>,
    pub start: u64,
    pub end: Option<NonZeroU64>,
    pub limit: Option<NonZeroU64>,
    pub stype_in: Option<SType>,
    pub stype_out: SType,
    pub ts_out: bool,
    pub symbol_cstr_len: usize,
    pub symbols: Vec<String>,
    pub partial: Vec<String>,
    pub not_found: Vec<String>,
    pub mappings: Vec<SymbolMapping>,
}

pub struct SymbolMapping {
    pub raw_symbol: String,
    pub intervals: Vec<MappingInterval>,
}

pub struct MappingInterval {
    pub start_date: time::Date,
    pub end_date: time::Date,
    pub symbol: String,
}

Metadata::symbol_map

Create a symbology mapping from instrument ID and date to text symbol from the mappings in the metadata.

Returns

A TsSymbolMap with the symbol mappings for the query range indexed by instrument ID and date.

This function returns an error if it fails to parse a symbol mapping into a u32 instrument ID.

API method
pub fn symbol_map(&self) -> dbn::Result<TsSymbolMap>;
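A minimal sketch, assuming decoder was returned by an earlier timeseries::get_range request; calling get_for_rec requires the SymbolIndex trait to be in scope:
use databento::dbn::{decode::DbnMetadata, SymbolIndex, TradeMsg};

// Build the map once, then label each decoded record
let symbol_map = decoder.metadata().symbol_map()?;
while let Some(trade) =
    decoder.decode_record::<TradeMsg>().await?
{
    if let Some(symbol) = symbol_map.get_for_rec(trade) {
        println!("{symbol}: {trade:?}");
    }
}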

Metadata::symbol_map_for_date

Create a symbology mapping from the mappings in the metadata for the specified date.

Parameters

date
time::Date
The date to create the symbol map for.

Returns

A PitSymbolMap with the symbol mappings for the query range indexed by instrument ID.

This function returns an error if it fails to parse a symbol mapping into a u32 instrument ID or the provided date is outside the query range.

API method
pub fn symbol_map_for_date(
    &self,
    date: time::Date,
) -> dbn::Result<PitSymbolMap>;
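A minimal sketch, assuming decoder was returned by a single-day timeseries::get_range request; the resulting PitSymbolMap can be used with get_for_rec in the same way as a TsSymbolMap:
use databento::dbn::decode::DbnMetadata;
use time::macros::date;

// The date must fall within the query range of the request
let symbol_map = decoder
    .metadata()
    .symbol_map_for_date(date!(2022 - 06 - 06))?;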

RecordRef

A wrapper around a non-owning immutable reference to a DBN record. This wrapper allows for mixing of record types and schemas, and runtime record polymorphism.

Both AsyncRecordDecoder::decode_record_ref and LiveClient::next_record return RecordRef objects.

See also

The crate documentation for a comprehensive list of methods and implemented traits.

API method
pub struct RecordRef<'a> { /* private fields */ }

RecordRef::header

Get a reference to the RecordHeader, which is found at the start of every DBN record. This provides access to basic information about the record, such as the instrument ID, without first determining its type.

Info

This method is part of the Record trait, so you must import the trait to call this method.

Returns

An immutable reference to the record's header (RecordHeader).

API method
pub fn header(&self) -> &'a RecordHeader;
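A minimal sketch, assuming rec is a RecordRef obtained from decode_record_ref; the Record trait must be imported to call header:
use databento::dbn::Record;

// Read basic header information without knowing the record type
let instrument_id = rec.header().instrument_id;
println!("instrument ID: {instrument_id}");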

RecordRef::rtype

Get the record's record type (RType) enum, which can be used in match expressions with different handling for different types of records.

Returns

RType

This function returns an error if the header contains an unknown RType.

API method
pub fn rtype(&self) -> dbn::Result<RType>;
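A minimal sketch, assuming rec is a RecordRef obtained from decode_record_ref, dispatching on the record type:
use databento::dbn::{MboMsg, RType, TradeMsg};

// Handle each record type separately
match rec.rtype()? {
    RType::Mbp0 => println!("{:?}", rec.get::<TradeMsg>().unwrap()),
    RType::Mbo => println!("{:?}", rec.get::<MboMsg>().unwrap()),
    rtype => println!("unhandled rtype: {rtype:?}"),
}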

RecordRef::has

Check if the record reference points to a record of a particular type.

Returns

bool

Whether the record reference points to a record of type T.

API method
pub fn has<T: HasRType>(&self) -> bool;

RecordRef::get

Get a reference to a particular record type. Usually paired with if let Some(...) or with has.

Returns

Option<&'a T>

A reference to the underlying record of type T or None if the RecordRef points to another type.

API method
pub fn get<T: HasRType>(&self) -> Option<&'a T>;
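A minimal sketch, assuming rec is a RecordRef obtained from decode_record_ref; the call returns None if the record is of another type (pair with has to check first, if preferred):
use databento::dbn::TradeMsg;

// Only prints when the record is actually a trade
if let Some(trade) = rec.get::<TradeMsg>() {
    println!("{trade:?}");
}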

TsSymbolMap

A timeseries symbol map, i.e. instrument IDs to text symbols by date. These objects can be obtained from Metadata::symbol_map.

See also

The crate documentation for a comprehensive list of methods and implemented traits.

API method
pub struct TsSymbolMap(/* private fields */);

TsSymbolMap::get_for_rec

Get the symbol mapping for a record.

Info

This method is part of the SymbolIndex trait, so you must import the trait to call this method.

Parameters

rec
R: Record
The record to fetch a symbol mapping for.

Returns

Option<&String>

The corresponding text symbol for the record's instrument ID and timestamp.

API method
pub fn get_for_rec<R: Record>(&self, record: &R) -> Option<&String>;

PitSymbolMap

A point-in-time symbol map. Useful for working with real-time symbology, a historical request over a single day, or other situations where the symbol mappings are known not to change. These objects can be obtained from Metadata::symbol_map_for_date for historical data.

See also

The crate documentation for a comprehensive list of methods and implemented traits.

API method
pub struct PitSymbolMap(/* private fields */);

PitSymbolMap::get_for_rec

Get the symbol mapping for a record.

Info

This method is part of the SymbolIndex trait, so you must import the trait to call this method.

Parameters

rec
R: Record
The record to fetch a symbol mapping for.

Returns

Option<&String>

The corresponding text symbol for the record's instrument ID.

API method
pub fn get_for_rec<R: Record>(&self, record: &R) -> Option<&String>;

PitSymbolMap::on_record

Update the symbol map with the contents of the record. Only SymbolMappingMsg records affect the map; all other record types will be ignored.

Parameters

record
RecordRef
The record to update the map from.

Returns

()

This function returns an error if record contains a SymbolMappingMsg with invalid UTF-8 symbols.

API method
pub fn on_record(&mut self, record: RecordRef) -> dbn::Result<()>;
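A minimal sketch, assuming decoder was returned by an earlier request and yields records of mixed types via decode_record_ref; get_for_rec requires the SymbolIndex trait:
use databento::dbn::{PitSymbolMap, SymbolIndex};

let mut symbol_map = PitSymbolMap::new();
while let Some(rec) = decoder.decode_record_ref().await? {
    // Look up the symbol in effect for this record, then apply
    // any mapping update the record itself carries
    if let Some(symbol) = symbol_map.get_for_rec(&rec) {
        println!("{symbol}");
    }
    symbol_map.on_record(rec)?;
}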

PitSymbolMap::on_symbol_mapping

Update the symbol map with the contents of the record, either a SymbolMappingMsg or a SymbolMappingMsgV1.

Parameters

symbol_mapping
SymbolMappingRec
The record to update the map from.

Returns

()

This function returns an error if record contains invalid UTF-8 symbols.

API method
pub fn on_symbol_mapping<S: SymbolMappingRec>(
    &mut self,
    symbol_mapping: &S,
) -> dbn::Result<()>;

ListFieldsParams

The parameter struct and builder for HistoricalClient::metadata::list_fields.

The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method. For every field, there is an identically named setter method on the builder.

Fields

encoding
Encoding
The data encoding. Encoding::Dbn is recommended.
schema
Schema
The data record schema. Must be one of the values from list_schemas.
API method
pub struct ListFieldsParams {
    pub encoding: Encoding,
    pub schema: Schema,
}
Example usage
use databento::{
    dbn::{Encoding, Schema},
    historical::metadata::ListFieldsParams,
};

assert_eq!(
    ListFieldsParams {
        schema: Schema::Tbbo,
        encoding: Encoding::Csv,
    },
    ListFieldsParams::builder()
        .schema(Schema::Tbbo)
        .encoding(Encoding::Csv)
        .build()
);

GetDatasetConditionParams

The parameter struct and builder for HistoricalClient::metadata::get_dataset_condition.

The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method. For every field, there is an identically named setter method on the builder.

Fields

dataset
String
The dataset code (string identifier). Must be one of the values from list_datasets.
date_range
Option<DateRange>
The UTC date request range with an inclusive start date and an inclusive end date. If None then will return all available dates.
API method
pub struct GetDatasetConditionParams {
    pub dataset: String,
    pub date_range: Option<DateRange>,
}
Example usage
use databento::historical::metadata::GetDatasetConditionParams;

assert_eq!(
    GetDatasetConditionParams {
        dataset: "XNAS.ITCH".to_owned(),
        date_range: None
    },
    GetDatasetConditionParams::builder()
        .dataset("XNAS.ITCH")
        .build()
);

GetQueryParams

The parameter struct and builder for several historical metadata endpoints:

The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method. For every field, there is an identically named setter method on the builder.

Fields

dataset
String
The dataset code (string identifier). Must be one of the values from list_datasets.
symbols
Symbols
The product symbols to filter for. Takes up to 2,000 symbols per request. If Symbols::All then will select all symbols.
schema
Schema
The data record schema. Must be one of the values from list_schemas.
date_time_range
DateTimeRange
The request range with an inclusive start and an exclusive end.
stype_in
SType
The symbology type of input symbols. Defaults to RawSymbol.
limit
Option<NonZeroU64>
The maximum number of records to return. If None then no limit. Defaults to None.
API method
pub struct GetQueryParams {
    pub dataset: String,
    pub symbols: Symbols,
    pub schema: Schema,
    pub date_time_range: DateTimeRange,
    pub stype_in: SType,
    pub limit: Option<NonZeroU64>,
}
pub type GetRecordCountParams = GetQueryParams;
pub type GetBillableSizeParams = GetQueryParams;
pub type GetCostParams = GetQueryParams;
Example usage
use databento::{
    dbn::{SType, Schema},
    historical::metadata::GetQueryParams,
};
use time::macros::datetime;

assert_eq!(
    GetQueryParams {
        dataset: "OPRA.PILLAR".to_owned(),
        date_time_range: (
            datetime!(2023-08-01 00:00 UTC),
            datetime!(2023-08-08 00:00 UTC)
        )
            .into(),
        symbols: vec!["VIX.OPT".to_owned()].into(),
        schema: Schema::Trades,
        stype_in: SType::Parent,
        limit: None,
    },
    GetQueryParams::builder()
        .dataset("OPRA.PILLAR")
        .date_time_range((
            datetime!(2023-08-01 00:00 UTC),
            datetime!(2023-08-08 00:00 UTC)
        ))
        .symbols("VIX.OPT")
        .schema(Schema::Trades)
        .stype_in(SType::Parent)
        .build()
);

GetRangeParams

The parameter struct and builder for HistoricalClient::timeseries::get_range.

The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method. For every field, there is an identically named setter method on the builder.

This struct can also be created from a GetRangeToFileParams struct via the From trait.

Fields

dataset
String
The dataset code (string identifier). Must be one of the values from list_datasets.
symbols
Symbols
The product symbols to filter for. Takes up to 2,000 symbols per request. If Symbols::All then will select all symbols.
schema
Schema
The data record schema. Must be one of the values from list_schemas.
date_time_range
DateTimeRange
The request range with an inclusive start and an exclusive end. Filters on ts_recv if it exists in the schema, otherwise ts_event.
stype_in
SType
The symbology type of input symbols. Defaults to RawSymbol.
stype_out
SType
The symbology type of output symbols. Defaults to InstrumentId.
limit
Option<NonZeroU64>
The maximum number of records to return. If None then no limit. Defaults to None.
API method
pub struct GetRangeParams {
    pub dataset: String,
    pub symbols: Symbols,
    pub schema: Schema,
    pub date_time_range: DateTimeRange,
    pub stype_in: SType,
    pub stype_out: SType,
    pub limit: Option<NonZeroU64>,
}
Example usage
use databento::{
    dbn::{SType, Schema, VersionUpgradePolicy},
    historical::timeseries::GetRangeParams,
};
use time::macros::datetime;

assert_eq!(
    GetRangeParams {
        dataset: "XNAS.ITCH".to_owned(),
        date_time_range: (
            datetime!(2023-11-03 14:00 -4),
            datetime!(2023-11-03 16:00 -4)
        )
            .into(),
        symbols: "NVDA".into(),
        schema: Schema::Trades,
        stype_in: SType::RawSymbol,
        stype_out: SType::InstrumentId,
        limit: None,
        #[expect(deprecated)]
        upgrade_policy: None,
    },
    GetRangeParams::builder()
        .dataset("XNAS.ITCH")
        .symbols("NVDA")
        .date_time_range((
            datetime!(2023-11-03 14:00 -4),
            datetime!(2023-11-03 16:00 -4)
        ))
        .schema(Schema::Trades)
        .build()
);

GetRangeToFileParams

The parameter struct and builder for HistoricalClient::timeseries::get_range_to_file.

The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method. For every field, there is an identically named setter method on the builder.

This struct can also be created from a GetRangeParams struct via the GetRangeParams::with_path() method.

Fields

dataset
String
The dataset code (string identifier). Must be one of the values from list_datasets.
symbols
Symbols
The product symbols to filter for. Takes up to 2,000 symbols per request. If Symbols::All then will select all symbols.
schema
Schema
The data record schema. Must be one of the values from list_schemas.
date_time_range
DateTimeRange
The request range with an inclusive start and an exclusive end. Filters on ts_recv if it exists in the schema, otherwise ts_event.
stype_in
SType
The symbology type of input symbols. Defaults to RawSymbol.
stype_out
SType
The symbology type of output symbols. Defaults to InstrumentId.
limit
Option<NonZeroU64>
The maximum number of records to return. If None then no limit. Defaults to None.
path
PathBuf
The file path to persist the stream data to.
API method
pub struct GetRangeToFileParams {
    pub dataset: String,
    pub symbols: Symbols,
    pub schema: Schema,
    pub date_time_range: DateTimeRange,
    pub stype_in: SType,
    pub stype_out: SType,
    pub limit: Option<NonZeroU64>,
    pub path: PathBuf,
}
Example usage
use std::path::PathBuf;

use databento::{
    dbn::{Dataset, SType, Schema, VersionUpgradePolicy},
    historical::timeseries::GetRangeToFileParams,
};
use time::macros::datetime;

assert_eq!(
    GetRangeToFileParams {
        dataset: Dataset::IfeuImpact.to_string(),
        date_time_range: (
            datetime!(2024-05-17 00:00 UTC),
            datetime!(2024-05-20 00:00 UTC)
        )
            .into(),
        symbols: "BRN.OPT".into(),
        schema: Schema::Statistics,
        stype_in: SType::Parent,
        stype_out: SType::InstrumentId,
        limit: None,
        #[expect(deprecated)]
        upgrade_policy: None,
        path: PathBuf::from(
            "ifeu-impact.statistics.20240517.dbn.zst"
        ),
    },
    GetRangeToFileParams::builder()
        .dataset(Dataset::IfeuImpact)
        .symbols("BRN.OPT")
        .stype_in(SType::Parent)
        .date_time_range((
            datetime!(2024-05-17 00:00 UTC),
            datetime!(2024-05-20 00:00 UTC)
        ))
        .schema(Schema::Statistics)
        .path("ifeu-impact.statistics.20240517.dbn.zst")
        .build()
);

ResolveParams

The parameter struct and builder for HistoricalClient::symbology::resolve.

The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method. For every field, there is an identically named setter method on the builder.

Fields

dataset
String
The dataset code (string identifier). Must be one of the values from list_datasets.
symbols
Symbols
The symbols to filter for. Takes up to 2,000 symbols per request. If Symbols::All then will select all symbols (not available for every dataset).
stype_in
SType
The symbology type of input symbols. Defaults to RawSymbol.
stype_out
SType
The symbology type of output symbols. Defaults to InstrumentId.
date_range
DateRange
The UTC date range with an inclusive start and an exclusive end.
API method
pub struct ResolveParams {
    pub dataset: String,
    pub symbols: Symbols,
    pub stype_in: SType,
    pub stype_out: SType,
    pub date_range: DateRange,
}
Example usage
use databento::{
    dbn::SType, historical::symbology::ResolveParams,
};
use time::macros::date;

assert_eq!(
    ResolveParams {
        dataset: "XNAS.ITCH".to_owned(),
        date_range: (
            date!(2020 - 01 - 01),
            date!(2022 - 01 - 01)
        )
            .into(),
        symbols: vec!["IWM", "SPY", "QQQ"].into(),
        stype_in: SType::RawSymbol,
        stype_out: SType::InstrumentId,
    },
    ResolveParams::builder()
        .dataset("XNAS.ITCH")
        .symbols(vec!["IWM", "SPY", "QQQ"])
        .date_range((
            date!(2020 - 01 - 01),
            date!(2022 - 01 - 01)
        ))
        .build()
);

SubmitJobParams

The parameter struct and builder for HistoricalClient::batch::submit_job.

The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method. For every field, there is an identically named setter method on the builder.

Fields

dataset
String
The dataset code (string identifier). Must be one of the values from list_datasets.
symbols
Symbols
The product symbols to filter for. Takes up to 2,000 symbols per request. If Symbols::All then will select all symbols.
schema
Schema
The data record schema. Must be one of the values from list_schemas.
date_time_range
DateTimeRange
The request range with an inclusive start and an exclusive end. Filters on ts_recv if it exists in the schema, otherwise ts_event.
encoding
Encoding
The data encoding. Defaults to Dbn.
compression
Compression
The data compression mode. For fastest transfer speed, Zstd is recommended. Defaults to Zstd.
pretty_px
bool
If prices should be formatted to the correct scale (using the fixed-precision scalar 1e-9). Only applicable for Csv or Json encodings. Defaults to false.
pretty_ts
bool
If timestamps should be formatted as ISO 8601 strings. Only applicable for Csv or Json encodings. Defaults to false.
map_symbols
bool
If a symbol field should be included with each text-encoded record. Only applicable for Csv or Json encodings. Defaults to false.
split_symbols
bool
If files should be split by raw symbol. Cannot be used with limit. Defaults to false.
split_duration
Option<SplitDuration>
The maximum time duration before batched data is split into multiple files. A week starts on Sunday UTC. Defaults to Day.
split_size
Option<NonZeroU64>
The maximum size (in bytes) of each batched data file before being split. Must be an integer between 1e9 and 10e9 inclusive (1GB - 10GB). Defaults to None.
delivery
Delivery
The delivery mechanism for the batched data files once processed. Only Download is supported at this time.
stype_in
SType
The symbology type of input symbols. Defaults to RawSymbol.
stype_out
SType
The symbology type of output symbols. Defaults to InstrumentId.
limit
Option<NonZeroU64>
The maximum number of records to return. If None then no limit. Cannot be used with split_symbols.
API method
pub struct SubmitJobParams {
    pub dataset: String,
    pub symbols: Symbols,
    pub schema: Schema,
    pub date_time_range: DateTimeRange,
    pub encoding: Encoding,
    pub compression: Compression,
    pub pretty_px: bool,
    pub pretty_ts: bool,
    pub map_symbols: bool,
    pub split_symbols: bool,
    pub split_duration: Option<SplitDuration>,
    pub split_size: Option<NonZeroU64>,
    pub delivery: Delivery,
    pub stype_in: SType,
    pub stype_out: SType,
    pub limit: Option<NonZeroU64>,
}
Example usage
use databento::{
    dbn::{Compression, Encoding, SType, Schema},
    historical::batch::{
        Delivery, SplitDuration, SubmitJobParams,
    },
};
use time::macros::datetime;

assert_eq!(
    SubmitJobParams {
        dataset: "GLBX.MDP3".to_owned(),
        date_time_range: (
            datetime!(2019-01-01 00:00 UTC),
            datetime!(2020-09-03 00:00 UTC)
        )
            .into(),
        symbols: vec!["CL.c.0", "NG.c.0"].into(),
        schema: Schema::Ohlcv1M,
        stype_in: SType::Continuous,
        stype_out: SType::InstrumentId,
        limit: None,
        encoding: Encoding::Csv,
        compression: Compression::None,
        pretty_px: true,
        pretty_ts: true,
        map_symbols: true,
        split_symbols: false,
        split_duration: Some(SplitDuration::Day),
        split_size: None,
        delivery: Delivery::Download,
    },
    SubmitJobParams::builder()
        .dataset("GLBX.MDP3")
        .symbols(vec!["CL.c.0", "NG.c.0"])
        .stype_in(SType::Continuous)
        .date_time_range((
            datetime!(2019-01-01 00:00 UTC),
            datetime!(2020-09-03 00:00 UTC)
        ))
        .schema(Schema::Ohlcv1M)
        .encoding(Encoding::Csv)
        .compression(Compression::None)
        .pretty_px(true)
        .pretty_ts(true)
        .map_symbols(true)
        .build()
);

ListJobsParams

The parameter struct and builder for HistoricalClient::batch::list_jobs.

The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method. For every field, there is an identically named setter method on the builder.

Fields

states
Option<Vec<JobState>>
The filter for job states. Can include Queued, Processing, Done, and Expired. If None, defaults to all except Expired.
since
Option<time::OffsetDateTime>
The filter for timestamp submitted (will not include jobs prior to this).
API method
pub struct ListJobsParams {
    pub states: Option<Vec<JobState>>,
    pub since: Option<OffsetDateTime>,
}
Example usage
use databento::historical::batch::{JobState, ListJobsParams};
use time::macros::datetime;

assert_eq!(
    ListJobsParams {
        states: Some(vec![JobState::Done]),
        since: Some(datetime!(2023-11-06 00:00 UTC))
    },
    ListJobsParams::builder()
        .states(vec![JobState::Done])
        .since(datetime!(2023-11-06 00:00 UTC))
        .build()
);

DownloadParams

The parameter struct and builder for HistoricalClient::batch::download.

The builder can be instantiated with the associated function builder(), and the struct is created from the builder with the build() method. For every field, there is an identically named setter method on the builder.

Fields

output_dir
PathBuf
The directory to download the file(s) to.
job_id
String
The batch job identifier.
filename_to_download
Option<String>
The specific file to download. If None then will download all files for the batch job.
API method
pub struct DownloadParams {
    pub output_dir: PathBuf,
    pub job_id: String,
    pub filename_to_download: Option<String>,
}
Example usage
use std::path::PathBuf;

use databento::historical::batch::DownloadParams;

assert_eq!(
    DownloadParams {
        output_dir: PathBuf::from("/tmp"),
        job_id: "GLBX-20230926-ANMGJK7JB6".to_owned(),
        filename_to_download: None,
    },
    DownloadParams::builder()
        .output_dir("/tmp")
        .job_id("GLBX-20230926-ANMGJK7JB6")
        .build()
);