Under normal conditions, the handler logs important events and the market data transmitted by MDP into a log file. Binary data, such as incoming market data packets, is Base64-encoded before being stored in the log, which adds extra time to the processing cycle. Finally, if the Logger implementation stores its data in a file, writing may be a relatively slow operation.
To eliminate the slowdowns caused by flushing data to the filesystem and by the extra encoding operations, users can disable logging by binding an OnixS::CME::MDH::NullLogger instance to the handler. In this case, the handler does not construct log events, and nothing is logged at all.
For example:
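The sketch below shows the idea as pseudo-code. OnixS::CME::MDH::NullLogger is the SDK type named above, but the handler type and the accessor used to bind the logger are assumptions here; consult the API reference for the exact call:

```
// Pseudo-code: bind a NullLogger so the handler constructs no log events.
OnixS::CME::MDH::NullLogger nullLogger;    // discards everything

OnixS::CME::MDH::Handler handler;          // assumed handler type
handler.settings().logger(nullLogger);     // assumed accessor for binding a logger
```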
Invoking the OnixS::CME::MDH::NetFeedEngine::process member in place avoids the use of additional threads. It lets the application combine market data handling with other tasks, such as sending and receiving orders through an order management system.
The pseudo-code below illustrates this approach:
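For instance, a single thread can drive both the Feed Engine and the order-management work. All names except OnixS::CME::MDH::NetFeedEngine::process are illustrative:

```
while (!stopRequested)
{
    // Handle any pending market data in the calling thread.
    feedEngine.process();

    // Interleave order-management tasks on the same thread,
    // avoiding extra threads and context switches.
    orderSystem.sendPendingOrders();
    orderSystem.receiveExecutionReports();
}
```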
Suppose the application uses a thread-safe implementation of the Feed Engine and invokes its OnixS::CME::MDH::NetFeedEngine::process member across multiple threads. In that case, the threads participating in the execution of the Feed Engine logic may need additional tuning. Under normal circumstances, threads can be scheduled onto any processor available in the system, which may hurt overall performance due to unnecessary thread context switching.
Establishing thread affinity for each working thread avoids or minimizes switching between processors. Suppose the application uses the OnixS::CME::MDH::FeedEngineThreadPool class to run the Feed Engine logic across multiple threads. In that case, the affinity can be specified through the settings supplied at the instance construction:
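A pseudo-code sketch of the idea follows; the settings member names are assumptions, so see the OnixS::CME::MDH::FeedEngineThreadPool reference for the exact API:

```
// Pseudo-code: pin Feed Engine working threads to dedicated cores
// to minimize cross-processor migration and context switching.
FeedEngineThreadPoolSettings poolSettings;      // assumed settings type
poolSettings.threadCount(2);                    // two working threads
poolSettings.threadAffinity(CpuIndexes{2, 3});  // assumed affinity accessor

FeedEngineThreadPool pool(feedEngine, poolSettings);
```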
In addition to establishing the affinity for working threads, the OnixS::CME::MDH::FeedEngineThreadPool also provides a set of events triggered by working threads at the beginning of the master processing loop and before the loop ends. See Multi-threaded processing for more information.
With the help of the working thread events, it is possible to perform more advanced thread tuning, such as updating the thread priority:
When the handler finishes processing previously received market data, it tries to receive new data. Pauses between incoming network packets may cause the Feed Engine to block while waiting for incoming data. As a result, the data and the executable code may be evicted from the processor's cache, and subsequent execution may be slower. The OnixS::CME::MDH::SocketFeedEngineSettings::dataWaitTime parameter defines how long the Feed Engine stays blocked while waiting for incoming data. Reducing the parameter increases the number of wake-ups and lowers the probability that the Feed Engine's data and code are evicted from the processor's cache. If the parameter is zero, the Feed Engine constantly checks for data availability (busy-waits). The drawback of reducing the waiting interval is increased CPU consumption (up to 100% for a zero value).
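For example, configuring a busy-wait looks roughly like this; the parameter name comes from the SDK, while the surrounding construction is pseudo-code:

```
// Pseudo-code: trade CPU consumption for cache residency.
SocketFeedEngineSettings feSettings;  // construction details omitted
feSettings.dataWaitTime(0);           // 0 = busy-wait: lowest latency, ~100% CPU
```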
Under normal conditions, the handler efficiently reuses the internal structures that hold incoming market data. Packets and messages are reused once the handler has processed the contained data, so no memory is allocated during real-time market data processing.
However, data may be copied within the callbacks through which the handler reports market data events. For example, copying an order book causes memory allocation and thus harms performance and latency.
To achieve ultra-low latency, minimize data copying. Where copies are unavoidable, pre-allocation strategies can decrease the latency. For example, order book snapshots can be constructed with a particular capacity, capable of storing a specific number of price levels. Building book snapshots with an initial capacity sufficient to hold books of the maximal possible depth eliminates reallocations.
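A pseudo-code sketch of the pre-allocation idea; the snapshot type and its constructor parameter are illustrative, not the SDK's exact names:

```
// Pseudo-code: build the snapshot with capacity for the deepest
// possible book so that copying never triggers reallocation.
const size_t maxDepth = 10;       // maximal book depth for the subscribed products
BookSnapshot snapshot(maxDepth);  // capacity reserved up front

// Inside a market data callback:
// book.snapshot(snapshot);       // copies into pre-allocated storage
```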