OnixS C++ CME Market Data Handler  5.3.0
API documentation
Feature Summary

The Updated Feed Engine Concept

The Feed Engine machinery is used by the Handler to receive and dispatch market data. The concept has been reworked into a set of abstract classes, with ready-to-use implementations exposed by the SDK so that users can choose the solution that best fits their needs. For example, users with Solarflare network cards can use the implementation based on the Solarflare ef_vi SDK instead of the socket-based one to bypass the kernel and thus reduce overall latency. Users may also build their own feed engines that extract market data from various sources, including the network and file-based stores.

Starting with the 5.3 Release, the new OnixS::CME::MDH::NetFeedEngine::process member runs the Feed Engine machinery explicitly. As a result, the thread management layer has been moved out of all Feed Engine implementations exposed by the SDK (e.g., the socket-based and Solarflare ef_vi-based ones).

To provide multi-threaded processing similar to previous Feed Engine implementations such as OnixS::CME::MDH::MultithreadedFeedEngine, the new OnixS::CME::MDH::FeedEngineThreads class has been added to the SDK.

See The Feed Engine Concept and Setting Up Feed Engine for more information.

The Feed Engine Based On Solarflare ef_vi SDK

Since the 5.1 Release, the SDK has exposed an implementation of the Feed Engine concept based on the Solarflare ef_vi SDK. The ef_vi SDK provides high-performance raw Ethernet networking: it bypasses the kernel and uses zero-copy techniques while handling data. The functionality is exposed by the OnixS::CME::MDH::SolarflareFeedEngine class.

See The Feed Engine Based On Solarflare ef_vi SDK for more information.

Hardware Timestamping of Received Packets

The 5.1 Release introduces support for hardware timestamps of received packets. The new feature allows customers to measure latency starting from the moment data arrives at the network adapter. The Benchmarking sample has been enhanced to let users take advantage of the new feature.

See Hardware Timestamping of Incoming Packets for more information.

Enhanced Log and PCAP Files Replay

The Log Replay functionality has been enhanced and now allows replaying log files containing market data from different channels using multiple Handler instances. Users can thus replay data for multiple channels in a way close to the original processing.

New parameters affecting replay behavior have been added. Users can now select the range of market data to be replayed as well as control the replay speed.

The replay subsystem now supports PCAP (Packet Capture) files as a market data source. The API and functionality are close to those of the Log Replay, giving users the same experience.

See Replaying Logs, PCAPs, CME DataMine/Historical Data for more information.

More Flexibility In Configuring The Handler

Previously, configuring the Handler consisted of several stages, and most configuration parameters had to be set at instance construction. This approach has been redesigned: the Handler can now be configured after construction and at any time between market data processing sessions. The API has also been updated so that all settings are gathered in a single place and exposed by the Handler as an instance of the OnixS::CME::MDH::HandlerSettings class through the OnixS::CME::MDH::Handler::settings() member. Listeners and other services, such as the Feed Engine and the TCP Recovery Service, can be bound through the Handler's settings.

Thus, starting with this major release, the Handler follows the 'Construct->Configure->Use' pattern.

The SDK still provides the legacy way of configuring the Handler to minimize the effort of migrating from previous versions. However, the legacy members will eventually be removed from the API, so migrating to the newer analogues as soon as possible is highly recommended.

See Adjusting Handler Settings for more information.

TCP Recovery Request Throttling

The TCP Recovery service limits the number of requests it serves per second. Accordingly, the OnixS::CME::MDH::TcpRecoveryService class has been improved to support request throttling. The OnixS::CME::MDH::TcpRecoverySettings class now exposes the new OnixS::CME::MDH::TcpRecoverySettings::maxRequests parameter, which controls the number of requests allowed per second. If the number of requests exceeds the defined limit, the recovery request is rejected by the service, and the Handler will either have to retry the attempt to recover the missing data, recover the market state from snapshots, or resume natural incremental processing.

Updated Messaging Subsystem

To support the BrokerTec markets, CME updated the message specifications, introducing new messages. As part of the update, messages predating the Price Precision Update were removed from the specification. The messaging subsystem has been updated in the 5.3 Release to reflect all changes in the SBE message specification.

See also
Migration Guide