OnixS C++ eSpeed ITCH Market Data Handler  1.7.3
API documentation
Low Latency Best Practices

This topic covers how to configure the Handler to achieve the best performance characteristics and the lowest processing latency.

Disable logging

Use settings like the following to restrict logging operations to the bare minimum:

// Log only messages for fatal errors.
// Log only to a file.
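
The assignments that accompany these comments depend on the Handler settings API. A minimal sketch is given below; the settings class and member names (HandlerSettings, logLevel, logSettings) and the enumeration values (LogLevel::Fatal, LogSettings::TraceToFile) are assumptions and should be verified against the HandlerSettings reference for this release.

using namespace OnixS::eSpeed::MarketData::Itch;

// Assumed settings class name.
HandlerSettings handlerSettings;

// Log only messages for fatal errors (assumed member and enumeration names).
handlerSettings.logLevel = LogLevel::Fatal;

// Log only to a file, without tracing to the console (assumed member and enumeration names).
handlerSettings.logSettings = LogSettings::TraceToFile;

// Assumed Handler constructor taking the settings.
Handler handler(handlerSettings);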

Tuning Working Threads

Market data is processed asynchronously by working threads. Under normal conditions, these threads may be executed on any processor available in the system, which can degrade overall performance due to unnecessary thread context switching.

To avoid switching threads between processors, the Feed Engine, which manages the working threads, allows establishing processor affinity for each working thread:

FeedEngineSettings feSettings1;
FeedEngineSettings feSettings2;
// Scheduling Feed Engines to use different processors.
feSettings1.threadAffinity().insert(1);
feSettings2.threadAffinity().insert(2);

In addition to the ability to control thread affinity for its working threads, the Feed Engine provides a set of events triggered by the working threads at the beginning of processing and before the processing loop ends. See Feed Engine Events for more information.

With the help of the working-thread events, it is possible to perform more advanced thread tuning, such as updating the thread priority:

#include <windows.h> // SetThreadPriority and GetCurrentThread are Windows API calls.

using namespace OnixS::eSpeed::MarketData::Itch;

// Raises the priority of each Feed Engine working thread as soon as it starts.
struct ThreadPriorityManager : FeedEngineListener
{
    void onFeedEngineThreadBegin(const FeedEngine&)
    {
        SetThreadPriority(
            GetCurrentThread(),
            THREAD_PRIORITY_TIME_CRITICAL);
    }
};

FeedEngineSettings feSettings;
ThreadPriorityManager priorityManager;
FeedEngine feedEngine(feSettings, &priorityManager);
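
SetThreadPriority and GetCurrentThread are available on Windows only. On Linux, the same onFeedEngineThreadBegin event can be used to move the working thread to a real-time scheduling policy. The sketch below shows one possible approach using the standard pthread_setschedparam call; the choice of the SCHED_FIFO policy and of the priority value 50 is an assumption and should be adapted to the deployment environment.

#include <pthread.h>
#include <sched.h>

using namespace OnixS::eSpeed::MarketData::Itch;

// Linux variant: switch each working thread to a real-time scheduling policy.
struct RealtimePriorityManager : FeedEngineListener
{
    void onFeedEngineThreadBegin(const FeedEngine&)
    {
        sched_param param;
        param.sched_priority = 50; // 1..99 for SCHED_FIFO; 50 is an arbitrary example value.

        // Requires appropriate privileges (e.g., CAP_SYS_NICE); error handling omitted for brevity.
        pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
    }
};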

Decreasing the Time the Handler Spends Waiting for Incoming Data

When the Handler completes processing of previously received market data, it initiates the reception of new incoming data if any is available in the feed. In reality, data packets do not arrive one right after another; there is a certain time interval between two neighboring packets. Pauses between incoming packets cause the Feed Engine bound to the Handler to suspend its activities and sleep in the kernel while waiting for incoming data. As a result, data and executable code can be evicted from the processor's cache, so the next iteration of the processing loop runs more slowly than the previous one.

To control the amount of time the Feed Engine waits in the kernel for incoming data, it provides the OnixS::eSpeed::MarketData::Itch::FeedEngineSettings::dataWaitTime parameter, whose value defines the time the Feed Engine spends in an I/O operation while waiting for incoming data. Reducing the value of this parameter increases the number of wake-ups and thus reduces the probability of the Feed Engine being evicted from the processor's cache. If the parameter is set to zero, the Feed Engine only checks whether data is available and does not enter the kernel to sleep. The drawback of reducing the waiting time is increased processor consumption (up to 100% when the parameter is zero).
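
A minimal sketch of tuning this parameter is shown below. It assumes dataWaitTime is exposed as a setter taking an integral number of milliseconds; the exact signature and units should be verified against the FeedEngineSettings reference for this release.

using namespace OnixS::eSpeed::MarketData::Itch;

FeedEngineSettings feSettings;

// Wake up frequently: wait in the kernel for at most 1 ms per I/O operation
// (setter form and millisecond units are assumptions).
feSettings.dataWaitTime(1);

// Alternatively, busy-poll: check for available data without sleeping in the kernel at all.
// This minimizes latency but drives processor consumption toward 100%.
// feSettings.dataWaitTime(0);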