OnixS C++ CME MDP Conflated UDP Handler  1.1.2
API documentation
Frequently Asked Questions

Summary

General

Order Books and Prices

Troubleshooting

Miscellaneous

General

Can more than one channel per Handler be processed? Alternatively, are multiple Handler instances necessary?

One instance of the OnixS::CME::ConflatedUDP::Handler processes market data for a single channel only. Users must create multiple instances of the Handler to process data from multiple channels.
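The pattern can be sketched as follows. This is an illustrative sketch only: it assumes a `HandlerSettings` structure and a `Handler` constructor accepting it, and omits channel-specific configuration, so consult the SDK reference for the actual members.

```cpp
#include <OnixS/CME/ConflatedUDP.h>

using namespace OnixS::CME::ConflatedUDP;

int main()
{
    // One Handler instance per channel: each is configured and
    // constructed independently. The settings members are omitted
    // because they are installation-specific; see the HandlerSettings
    // reference for the actual fields.
    HandlerSettings settingsForChannel310;
    HandlerSettings settingsForChannel312;

    Handler channel310(settingsForChannel310);
    Handler channel312(settingsForChannel312);

    // Each Handler is started and stopped independently and processes
    // market data only for its own channel.
    return 0;
}
```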


Is the Handler implicitly subscribed to all instruments of the specified channel? Will it receive security definitions for all instruments of this channel?

Yes. The Handler is implicitly subscribed to all instruments of the specified channel and receives security definitions for all of them.


How does instrument/group filtering work? Why would we want to do this?

Filtering (instrument selection) is useful when market data is required for specific instruments only. Instrument selection works like an inclusive whitelist: e.g., if security A is selected, security-related events (members of OnixS::CME::ConflatedUDP::SecurityListener) are raised for security A only.

If two or more securities are selected, events are triggered only for those securities.

See also
Selecting Securities of Interest


How are problems with market data indicated (e.g., a book becoming invalid)?

The Handler reports all errors and warnings through OnixS::CME::ConflatedUDP::HandlerListener::onError and OnixS::CME::ConflatedUDP::HandlerListener::onWarning callbacks respectively.
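For example, an application can log these issues by implementing the listener. The callback signatures below (a reference to the Handler plus an error/warning object with a textual description) are assumptions for illustration; check the HandlerListener reference for the exact parameters.

```cpp
#include <iostream>

using namespace OnixS::CME::ConflatedUDP;

// Sketch only: the parameter types and signatures are assumptions.
class IssueLogger : public HandlerListener
{
public:
    void onError(Handler& handler, const Error& error) override
    {
        // Invoked for failures, e.g. when a book can no longer be maintained.
        std::cerr << "Error: " << error.what() << std::endl;
    }

    void onWarning(Handler& handler, const Warning& warning) override
    {
        // Invoked for recoverable issues (e.g. QueueOverflow).
        std::cerr << "Warning: " << warning.what() << std::endl;
    }
};
```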

See also
Handling Issues Like Errors and Warnings


Order Books and Prices

How to retrieve/receive the initial order book?

Each OnixS::CME::ConflatedUDP::SecurityListener::onBookUpdate call provides a consistent order book state (except in Natural Refresh mode). If the book state is needed at startup, simply take a snapshot on the first call.
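One possible shape of such a one-time snapshot is sketched below. The callback signature and the book-copying mechanism are assumptions; see "Constructing And Copying Books" for the supported copy API.

```cpp
using namespace OnixS::CME::ConflatedUDP;

// Sketch: captures the book state on the first update only.
class InitialBookCapture : public SecurityListener
{
public:
    InitialBookCapture() : initialBookCaptured_(false) {}

    void onBookUpdate(Handler& handler, const Book& book) override
    {
        if (!initialBookCaptured_)
        {
            // The first onBookUpdate already delivers a consistent book
            // (except in Natural Refresh mode), so a copy of it serves
            // as the "initial" book.
            // initialBook_ = copyOf(book);  // placeholder for the copy API
            initialBookCaptured_ = true;
        }
    }

private:
    bool initialBookCaptured_;
};
```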

See also
Constructing And Copying Books


How do we know whether the data delivered through onBookAtomicUpdate is the first or last change of a "transaction"? That is, how can it be determined that the order book is valid/consistent/not-crossed after a single onBookAtomicUpdate() call?

Book consistency and up-to-dateness are a bit more subtle. Please note that CME does not guarantee that the book is fully up to date on intermediate updates; this is guaranteed only after the corresponding 'End of ... quotes' market event. You therefore have to check this indicator to ensure the book is fully up to date.

This can be achieved by using the book update callbacks of OnixS::CME::ConflatedUDP::SecurityListener instead of listening to atomic book updates. These callbacks publish books only on 'End of ... quotes' market events, guaranteeing that the books are consistent and up to date.

See also
OnixS::CME::ConflatedUDP::SecurityListener, OnixS::CME::ConflatedUDP::MarketDataListener


What's the difference between 'Book Atomic Update' and 'Book Update' events?

A Book Atomic Update (OnixS::CME::ConflatedUDP::SecurityListener::onBookAtomicUpdate) is an elementary action over a book, and there are usually multiple atomic updates inside a single snapshot or incremental refresh. Also, the book may not be valid between two atomic updates. Only when all atomic updates from a single market event have been processed is the book considered valid.

A Book Update (OnixS::CME::ConflatedUDP::SecurityListener::onBookUpdate) is raised exactly when all atomic updates have been processed and the book is valid and up to date.
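The contrast between the two callbacks can be sketched as follows; the callback signatures are assumptions for illustration, so check the SecurityListener reference for the exact parameters.

```cpp
using namespace OnixS::CME::ConflatedUDP;

// Sketch contrasting the two callbacks (signatures are assumptions).
class BookEventTracer : public SecurityListener
{
public:
    BookEventTracer() : atomicUpdates_(0) {}

    void onBookAtomicUpdate(Handler& handler, const Book& book) override
    {
        // Elementary change: the book may be transiently invalid here,
        // so do not act on its state yet.
        ++atomicUpdates_;
    }

    void onBookUpdate(Handler& handler, const Book& book) override
    {
        // All atomic updates of the market event have been applied:
        // the book is now valid and safe to consume, e.g. publish(book).
        atomicUpdates_ = 0;
    }

private:
    size_t atomicUpdates_;
};
```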

See also
Building and Maintaining Books by Yourself


Is there any way to get the message sequence number for the moment when an order book update happened?

Previously, all atomic updates were sent by the MDP within a single message. Therefore, there was a direct correspondence between the updated book and the message that caused the book update. For that reason, order book updates were raised by the Handler at the end of processing a single message, and the SDK exposed the ability to obtain the message that caused a book update through the order book class interfaces.

This concept changed with MDP 3.0: the relation between the updated book and the message that caused the update was eliminated. Order book updates are now raised at the end of a market (sub-)event such as 'End Of Quotes' or 'End Of Event'. In general, a single market event may consist of multiple messages. As a result, the data referring to a single book update may be distributed across multiple messages.

Also, the MDP assigns sequence numbers to packets, not to messages. Previously, each packet sent by the MDP consisted of a single message, so a message could be identified by the sequence number of the packet containing it. Therefore, the SDK exposed a sequence number attribute on its message classes.

Since MDP 3.0, a single packet may include multiple messages. Therefore, the sequence number attribute was removed from the SDK classes encapsulating messages. Instead, the SDK exposes a class representing the packet.

Finally, the attribute identifying the end of a market event is sent inside the messages. Since a single packet may contain multiple messages, multiple market events may be delivered within a single packet. Thus, multiple order book updates may be raised while processing a single packet.

While processing market data for CME, I noticed that prices for some instruments must be divided by 100, others by 10000, and others not at all. How do I determine the value by which to scale the prices?

The security definition sent by the Market Data Platform contains fields representing the multipliers used to convert the CME Globex display price to the conventional price.

See also
CME Pricing documentation.


Troubleshooting

After switching the operating system, even the sample program can no longer receive market data (code=NoNetworkActivity). However, I can ping the CME router and the iLink server. Any insight or suggestions on what may be going on here?

See Connectivity Troubleshooting.


If the Handler joins while the market is busy, warnings (code=QueueOverflow) are reported. Is there any possibility to raise the configured limit, and if so, how?

When the Handler performs book recovery, it caches all the data received on incremental feeds. When the market is busy, the number of cached messages may exceed the configured limit defined by the OnixS::CME::ConflatedUDP::HandlerSettings::recoveryQueueMaxSize parameter. Therefore, to avoid QueueOverflow warnings, increase the value of this parameter.
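For instance (assuming recoveryQueueMaxSize is a plain settings data member; the value below is illustrative only and should be sized to the expected burst during recovery):

```cpp
using namespace OnixS::CME::ConflatedUDP;

HandlerSettings settings;

// Assumption: recoveryQueueMaxSize is a plain data member.
// The value is illustrative only.
settings.recoveryQueueMaxSize = 50000;
```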


What is the cause of 'QueueOverflow' warnings?

The cause is a violation of the received packet sequence (a gap) that was not recovered within the OnixS::CME::ConflatedUDP::RealtimeFeedSettings::outOfOrderPacketMaxInterval number of packets or before the OnixS::CME::ConflatedUDP::HandlerSettings::lostPacketWaitTime elapsed. A violation of the packet sequence can be caused by packet loss at different levels: CME environment issues, network connectivity issues, or system and application issues.

Also, when the Handler performs a large-scale recovery due to a gap, or when joining the market late, incremental packets are cached. Under intensive data transmission, the queue keeping the incremental packets may reach its limit, causing the noted warning.

Please refer to our Lost multicast packets troubleshooting guide for more details.

See also
Connectivity Troubleshooting.


How can packet issues (back-pressure/queuing) be diagnosed?

The Handler allows tracing packets as they are received by the active feeds, including out-of-order or duplicated packets (in contrast to market data listening, which exposes only packets that have passed the session rules). With the help of the OnixS::CME::ConflatedUDP::FeedListener, it is possible to build advanced diagnostic services.
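As a rough illustration, such a diagnostic service derives from the listener and counts raw packets per feed. The callback name and signature shown in the comments are assumptions; see "Listening for Activity on the Feeds" for the real API.

```cpp
using namespace OnixS::CME::ConflatedUDP;

// Sketch of a packet-level diagnostic probe.
class PacketProbe : public FeedListener
{
public:
    // Hypothetical per-packet callback: count raw packets to detect
    // back-pressure (e.g. compare arrival rate against processing rate).
    // void onPacket(const Feed& feed, const Packet& packet) override
    // {
    //     ++received_;
    // }

private:
    size_t received_ = 0;
};
```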

See also
Listening for Activity on the Feeds


Miscellaneous

How much latency does the Handler add?

The Benchmarking Results topic contains latency measurement results.


What is the Handler's CPU consumption?

It depends significantly on many different parameters, including the network, hardware, operating system environment, market data rate, and the applied Handler settings.


What is the expected maximum latency our callback implementation should have?

The smaller, the better. Callback execution should be as fast as possible so as not to impact the ability to process data for highly active instruments. Although the Handler and the Feed Engines use various techniques to avoid data loss during data bursts, long callback execution may lead to buffer overflows and thus to data loss.
