OnixS C++ CME MDP Premium Market Data Handler  5.8.3
API Documentation
Replaying market data

The SDK supports market data replay from the following sources:

Replaying the handler's log files

The handler's log files can be used to reproduce the handler's original behaviour during the live processing of the recorded data.

The handler can extract market data from a log file only if that file was produced by an OnixS::CME::MDH::FileLogger instance. Replaying market data from user-defined log files is not supported. Instead, users can use the Feed Engine abstraction, which extracts data from log files in their own formats. See Custom Feed Engines for more information.

To record market data into a log file, set OnixS::CME::MDH::FileLoggerSettings::severityLevel to either OnixS::CME::MDH::LogSeverity::Regular or OnixS::CME::MDH::LogSeverity::Debug. Otherwise, the handler will not trigger market-related events during the replay due to the absence of source data.

The OnixS::CME::MDH::replayLogFiles function performs the market data replay. It accepts the list of log files to be replayed and a set of OnixS::CME::MDH::Handler instances to process the data stored in the given logs.

If log file names were created using the OnixS::CME::MDH::makeLogFilename function, the OnixS::CME::MDH::gatherLogFiles function can find such files automatically:

FileList logs;
// Collects all log files recorded for channel `310` and stored in the "data" folder.
gatherLogFiles(logs, 310, "data");
replayLogFiles(logs, handler);

Alternatively, users may assemble the list of log files manually. A manually populated list makes it possible to replay log files whose names differ from those produced by the OnixS::CME::MDH::makeLogFilename function:

FileList logs;
// List the files in the exact order the logger recorded them
// (the file names below are illustrative).
logs.push_back("FirstCustomName.txt");
logs.push_back("SecondCustomName.txt");
replayLogFiles(logs, handler);

The SDK does not limit the amount of data to be replayed, and there are no restrictions on file size beyond those imposed by the operating system. The critical requirement is that the files must be listed in the exact order the logger recorded them and must form a continuous recording of a single processing session. For example, suppose the handler is launched each morning and stopped each evening during the trading week. The files a logger produces within a single day belong to a single processing session, so they can be replayed as one list. Files recorded on different days belong to different processing sessions and cannot be replayed together.

Suppose log files were recorded within a single trading week, which usually starts on Sunday and ends on Friday. In that case, it is possible to replay them as one list, although with some limitations. Please see Log replay settings.

Replaying logs containing data from multiple channels

The logging services allow using the same logger instance for multiple handler instances. Thus, a single log file may contain market data belonging to several channels.

The following code snippet shows how to replay a log file containing data recorded by two handlers for channels 310 and 312:

FileList logs;
// Populate `logs` with the log files containing data for both channels.
Handler handler310;
Handler handler312;
Handler* handlers[] = { &handler310, &handler312 };
const size_t handlerQty = staticArrayLength(handlers);
replayLogFiles(logs, handlers, handlerQty);
The channel identifier must be unique within the collection of handlers passed to the replay.
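The uniqueness requirement can be illustrated with a small self-contained check that reports whether every handler in a replay collection is bound to a distinct channel identifier (the type and function names are illustrative, not part of the SDK API):

```cpp
#include <set>
#include <vector>

// Returns true when every channel id in the collection is distinct,
// mirroring the requirement stated for the handlers passed to replay.
bool channelsAreUnique(const std::vector<int>& channelIds)
{
    std::set<int> seen;
    for (int id : channelIds)
        if (!seen.insert(id).second)
            return false; // duplicate channel id found
    return true;
}
```

Such a check would typically run before starting the replay so that a configuration mistake fails fast rather than producing ambiguous dispatching.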

Log replay settings

By default, the log replay extracts the parameters of a processing session from the given log files and uses them to configure the handler instances participating in the replay. However, the SDK allows overriding this default behaviour and lets the replay process logged market data according to user-defined settings.

The OnixS::CME::MDH::LogReplaySettings class contains settings affecting the log replay:

Parameter Description

The given parameter defines how the log replay handles the processing-session parameters of the handlers participating in the log replay.

If the parameter is set to OnixS::CME::MDH::HandlerSettingsUse::Suggested, the log replay extracts the processing-session parameters from the log files and temporarily applies them to each handler instance participating in the replay. This is the default value.

Alternatively, if the parameter is set to OnixS::CME::MDH::HandlerSettingsUse::AsIs, the log replay does not modify the processing-session parameters of any handler instance. In this mode, the processing flow may differ from the one observed during the original session (whose data is replayed).


The log replay logic is based on feed identifier matching: the replay service pushes extracted market data to a feed used by a handler participating in the replay when the data-source identifier matches the feed identifier.

However, sometimes more flexibility in matching is needed. For example, a log file may contain data recorded for a different channel.

The given parameter therefore allows customizing the data-source matching. It represents a set of aliases for data sources: when an alias is defined for a data-source identifier, the replay engine uses the alias instead of direct correspondence.

OnixS::CME::MDH::LogReplaySettings::timeSpan By default, the log replay processes all records from the log files to be replayed. The given parameter allows defining the time span for which the logged data must be replayed. The replay logic skips entries whose log time is outside the given time span.
OnixS::CME::MDH::LogReplaySettings::speed The given parameter controls the speed at which market data is replayed. By default, the log replay pushes extracted market data to the handler without delays. Therefore, recorded data is replayed faster than it was processed during the original session. However, it is possible to override this behaviour and tell the replay logic to replay market data at the same speed at which it was processed during the original session.
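The time-span behaviour can be illustrated with a small self-contained sketch that keeps only the records whose timestamps fall inside the configured span (the record type and function are illustrative, not part of the SDK):

```cpp
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <vector>

// Illustrative log record: only the logged timestamp matters here.
struct LogRecord
{
    std::int64_t time; // e.g. nanoseconds since epoch (illustrative unit)
};

// Keeps only the records whose log time falls inside [begin, end];
// entries outside the span are skipped, mirroring the behaviour
// described for the timeSpan parameter.
std::vector<LogRecord> filterByTimeSpan(const std::vector<LogRecord>& records,
                                        std::int64_t begin,
                                        std::int64_t end)
{
    std::vector<LogRecord> kept;
    std::copy_if(records.begin(), records.end(), std::back_inserter(kept),
                 [begin, end](const LogRecord& record)
                 { return begin <= record.time && record.time <= end; });
    return kept;
}
```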

The following sample shows how to replay a log file recorded for a different channel using different session settings.

LogReplaySettings supplements;
// Map both incremental feeds of channel 310 to a single incremental feed A of channel 312.
supplements.aliases()["310IA"] = "312IA";
supplements.aliases()["310IB"] = "312IA";
Handler handler;
FileList logs;
gatherLogFiles(logs, 310, "Data");
replayLogFiles(logs, handler, supplements);
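Conceptually, the alias table above acts as a lookup applied to each data-source identifier before feed matching. A minimal self-contained sketch of that lookup (type and function names are hypothetical, not part of the SDK API):

```cpp
#include <map>
#include <string>

// Hypothetical alias table keyed by data-source (feed) identifier.
using FeedAliases = std::map<std::string, std::string>;

// Returns the identifier the replay should match against: the alias when one
// is defined, otherwise the original identifier (direct correspondence).
std::string resolveFeedId(const FeedAliases& aliases, const std::string& sourceId)
{
    const auto it = aliases.find(sourceId);
    return it != aliases.end() ? it->second : sourceId;
}
```

With the aliases from the snippet above, both "310IA" and "310IB" resolve to "312IA", so data recorded for channel 310's incremental feeds is dispatched to channel 312's feed A.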

Event notifications during the log replay

From the event notification perspective, there is no difference whether the handler processes market data received from the network or from a log file. This is because the log replay functionality is built on the same Feed Engine concept and internally uses its own Feed Engine, which extracts data not from network sources but from the given log files.

During the replay, the OnixS::CME::MDH::PacketArgs::receiveTime member returns the original time at which the network packet was received from the network.

Replaying network packet capture files (*.pcap)

The API for replaying market data stored in network packet capture files (*.pcap files) is very similar to the handler's log files replay.

The following table describes the network packet capture files replay API:

Function Description

OnixS::CME::MDH::gatherFiles Gathers the files stored in the given folder with the given extension. Gathered files are sorted by name.

In contrast to the OnixS::CME::MDH::gatherLogFiles function, which collects log files in the exact order a logger produced them, this routine is generic: it finds files matching the given pattern (extension).

OnixS::CME::MDH::replayPcapFiles Replays the given list of *.pcap files.
In contrast to the handler's log files, *.pcap files do not include the handler's settings used while the recorded market data was processed. As a result, handlers must be configured explicitly. Also, the current implementation is limited to replaying Incremental data only. Therefore, the handler instances participating in the replay must be explicitly configured to process market data in the natural refresh mode without any live recovery capabilities.
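The gathering routine's documented behaviour (select files by extension, sort by name) can be sketched as a small self-contained helper operating on a list of directory entries (names are hypothetical, not the SDK API):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Returns true when `name` ends with `extension` (e.g. ".pcap").
bool hasExtension(const std::string& name, const std::string& extension)
{
    return name.size() >= extension.size() &&
           name.compare(name.size() - extension.size(),
                        extension.size(), extension) == 0;
}

// Selects the entries matching the extension and sorts them by name,
// mirroring the documented behaviour of the gathering routine.
std::vector<std::string> gatherByExtension(const std::vector<std::string>& entries,
                                           const std::string& extension)
{
    std::vector<std::string> gathered;
    for (const std::string& entry : entries)
        if (hasExtension(entry, extension))
            gathered.push_back(entry);
    std::sort(gathered.begin(), gathered.end());
    return gathered;
}
```

Note that sorting by name reproduces the recording order only when file names embed a sortable timestamp or sequence number, which is why log files gathered this way must still form a continuous recording.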

Recovering Instrument Definitions while replaying *.pcap files

The network capture replay service processes data from Incremental feeds only. Usually, CME MDP 3.0 transmits instrument definitions through Incremental feeds only at the beginning of the week. Thus, no instrument definitions may be present in the recorded packets if capturing was performed later in the week. The absence of a security definition causes the handler to use default values for the direct and implied book depths. Therefore, relevant values must be assigned to the corresponding parameters; otherwise, order book maintenance issues may occur during the replay.

Default depths for the direct and implied books can be accessed by the following paths:


The data replay subsystem allows recovering instrument definitions from a previously recorded cache file or the secdef.dat file downloaded from the CME FTP. In this case, the processing session should be configured to recover instruments at the start-up (join) stage, and a path to the file containing instrument definitions must be defined:

Handler handler;

Having instrument definitions while replaying market data from PCAP files is also essential for security filtering. Security filtering allows selecting instruments based on attributes like id, security group, symbol, and asset. However, only the security id is a primary attribute, as MDP uses it to link various data, like order book updates, with instruments. Other attributes, like symbols or assets, are retrieved from an instrument definition. Therefore, the lack of instrument definitions narrows the filtering capabilities: only filtering by security id will function correctly. Filtering by any other attribute, like symbol, group, or asset, will not work for securities whose definitions are not available during the data replay.
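The limitation can be modelled with a self-contained sketch: symbol-based filtering has to look each security id up in the set of known definitions, so an instrument whose definition was never seen is silently missed (types, names, and the sample symbol are illustrative, not the SDK API):

```cpp
#include <map>
#include <string>
#include <vector>

using SecurityId = int;

// Illustrative definition store: security id -> symbol, populated only for
// instruments whose definitions were available during the replay.
using Definitions = std::map<SecurityId, std::string>;

// Selects the ids whose symbol matches `wantedSymbol`. An instrument without
// a definition has no known symbol, so symbol-based filtering cannot select
// it, which is the limitation described above for replayed *.pcap data.
std::vector<SecurityId> filterBySymbol(const Definitions& definitions,
                                       const std::vector<SecurityId>& ids,
                                       const std::string& wantedSymbol)
{
    std::vector<SecurityId> selected;
    for (SecurityId id : ids)
    {
        const auto it = definitions.find(id);
        if (it != definitions.end() && it->second == wantedSymbol)
            selected.push_back(id);
    }
    return selected;
}
```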

Packet captures matching & aliasing

The replay service extracts the multicast group information from a captured packet and dispatches the packet to the feed with the same multicast group. In this way, the replay service matches packets to feeds. However, sometimes more flexibility in matching packets to feeds is needed. The OnixS::CME::MDH::PcapReplaySettings::aliases member exposes an instance of the OnixS::CME::MDH::NetAddressAliases class, which allows redirecting data from one source to another at replay time.

Suppose the captured packets belong to the production environment, while the handler is configured with the certification environment's connectivity configuration. The production and certification environments use different multicast groups to serve the same channels, thus avoiding simultaneous data transmission conflicts. Replaying data belonging to the production environment with the certification connectivity configuration will therefore lead to nothing happening from the user's perspective: the handler will trigger no events because the replay subsystem will find no data for the certification feeds. Users must define source aliases to make the replay subsystem treat production feeds as certification ones. The following code snippet shows how to solve the described case:

PcapReplaySettings supplements;
// Multicast group for the primary incremental feed of channel 310 in the production environment.
const NetFeedConnection production("", 14310);
// Multicast group for the same feed (primary incremental, channel 310) but belonging to the certification environment.
const NetFeedConnection certification("", 14310);
supplements.aliases()[production] = certification;
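The redirection above can be sketched with a simplified (multicast group, port) endpoint model in place of the SDK's NetFeedConnection/NetAddressAliases types; the multicast addresses below are illustrative placeholders, not real CME groups:

```cpp
#include <map>
#include <string>
#include <utility>

// Simplified endpoint: (multicast group, port). The real SDK uses its own
// NetFeedConnection type, which this sketch only imitates.
using Endpoint = std::pair<std::string, unsigned short>;
using EndpointAliases = std::map<Endpoint, Endpoint>;

// Maps a captured packet's source endpoint onto the endpoint the handler's
// feeds are configured with; endpoints without an alias pass through unchanged.
Endpoint resolveEndpoint(const EndpointAliases& aliases, const Endpoint& captured)
{
    const auto it = aliases.find(captured);
    return it != aliases.end() ? it->second : captured;
}
```

Every captured packet's source is resolved through this table before feed matching, so production traffic is handed to the certification feeds it is aliased to.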

Replaying CME DataMine files

The SDK allows replaying historical data from CME DataMine:

Function Description

OnixS::CME::MDH::gatherFiles Gathers the files stored in the given folder with the given extension. Gathered files are sorted by name.

In contrast to the OnixS::CME::MDH::gatherLogFiles function, which collects the handler's log files in the exact order a logger produced them, this routine is generic: it finds files matching the given pattern (extension).

OnixS::CME::MDH::replayDatamineFiles Replays the given list of CME DataMine files.
From the user's perspective, the DataMine data replay is close to the network packet capture (*.pcap) replay in terms of limitations and behaviour customization. Therefore, it is highly recommended to get familiar with the *.pcap replay functionality first.
For the same reasons as in the case of network packet capture replay, replaying historical data from CME DataMine is limited to processing Incremental data only. Therefore, the handler instances participating in the replay must be configured to process market data in the natural refresh mode without any recovery capabilities (except Instrument Definition recovery from an instrument cache).

The critical aspect of the historical data replay feature is the kind of data supported. CME DataMine offers historical data in various formats, including FIX and market data packet captures. The replay functionality accepts historical data as market data packet captures only: the handler's processing engine is built over SBE binary structures to gain maximal performance, and the replay functionality simulates data reception and raises events related to packet handling. FIX messages do not contain the information stored in packets and therefore cannot be used as a replay data source.

The CME calls its format for storing historical data "packet captures". However, it differs from network packet captures, which contain raw network attributes like IP headers. The CME Packet Capture dataset represents a different binary format; thus, CME historical data stored as packet captures cannot be replayed by the OnixS::CME::MDH::replayPcapFiles function. For this reason, the SDK exposes an additional function designed to extract data from files in this particular format.
The replayed files can also be gzip- (*.gz) or zstd- (*.zst) compressed. Note that the zstd decompressor is not supported on some platforms.

See CME Packet Capture Dataset for more information.

FileList logs;
// Collects all the data files stored in the current folder.
gatherFiles(logs, "./", ".gz");
// Replays the collected files.
replayDatamineFiles(logs, handler);
See also