OnixS Eurex EDCI Handler C++ library 1.0.0
Users' manual and API documentation
Threading Model

The Handler starts two threads: one for sending messages and one for receiving them.

One additional thread is created if the asynchronous logging facility is used. For details, see the Asynchronous Logging section.

All listener callbacks are fired from a single thread.

It is therefore critical that user code in these callbacks returns as quickly as possible; any delay postpones the processing of subsequent messages.

Backpressure and Slow Consumer Risk

Warning
EDCI is a push-based protocol. The exchange does not queue messages indefinitely for slow consumers. If the TCP receive buffer fills because the application is not draining messages fast enough, the TCP receive window closes. Sustained backpressure causes the exchange to terminate the session. There is no prior notification — the first indication is an onError callback with OnixS::Eurex::DropCopy::ErrorCode::General.

Why This Happens

All listener callbacks fire on the receive thread. If any callback blocks — to acquire a mutex, perform I/O, write to a database, or call an external service — the receive thread stalls. During the stall:

  1. No further bytes are read from the socket.
  2. The socket receive buffer fills.
  3. The kernel advertises a zero receive window to the exchange.
  4. After a protocol timeout the exchange closes the session.

This is not a theoretical edge case. In production, a single slow callback during a burst — for example an auction uncross producing hundreds of fill reports, or a mass-cancel event — is sufficient to cause disconnection.

Design Guidance

Keep callbacks non-blocking. The receive thread callback must do nothing more than copy the message data into a lock-free queue or ring buffer and return immediately. All downstream processing — state updates, persistence, risk checks, notifications — must run in a separate consumer thread.

// Correct pattern: copy and return
void onOrderExecReportBroadcast(
    const OrderExecReportBroadcast& msg, const MessageInfo& info) override
{
    queue_.push(msg); // non-blocking, lock-free push (copy by value)
}

Never block on a mutex in a callback. A contended mutex in a callback is equivalent to blocking the receive thread. Use atomic operations or lock-free data structures to share state with consumer threads. If a mutex is unavoidable, use try_lock and drop the message if the lock is not immediately available, rather than blocking.

Size your queue for burst capacity, not average throughput. During an auction uncross or mass-cancel event the message rate can spike by orders of magnitude for hundreds of milliseconds. The queue depth must be sufficient to absorb the burst while the consumer thread catches up.

Define a queue-full policy. If the consumer thread falls behind and the queue fills, the receive thread will either block (acceptable for a short time) or drop data. Choose a policy explicitly: drop, circuit-break, or alert and disconnect intentionally via OnixS::Eurex::DropCopy::Handler::disconnectAsync().

Disable synchronous logging in production. As noted in Low Latency Best Practices, synchronous log writes inside callbacks add latency directly to the receive path. Use LogLevel::Fatal in production.

Pin the receive thread to a dedicated core. Use OnixS::Eurex::DropCopy::HandlerSettings::receivingThreadAffinity to eliminate OS scheduling jitter from the receive-to-callback path.
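Putting the last two points together, the relevant settings might be configured as sketched below. Only HandlerSettings::receivingThreadAffinity and LogLevel::Fatal are named in this document; the logLevel field name, the assignment syntax, and the surrounding setup are assumptions about the Handler API, so consult the HandlerSettings reference for the exact fields.

```cpp
using namespace OnixS::Eurex::DropCopy;

HandlerSettings settings;
settings.logLevel = LogLevel::Fatal;     // assumed field name: no synchronous
                                         // logging on the hot path in production
settings.receivingThreadAffinity = 3;    // pin the receive thread to a dedicated
                                         // core (assumed assignment syntax)

Handler handler(settings);
```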

Heartbeat Violations

If the receive thread stalls for longer than the session heartbeat interval (HeartBtInt from LogonResponse), the exchange concludes the session is dead and terminates it with OnixS::Eurex::DropCopy::SessionRejectReason::HeartbeatViolation. This triggers an onError callback. The Handler then attempts to reconnect according to OnixS::Eurex::DropCopy::HandlerSettings::connectionRetries, re-executing the full session logon and synchronization sequence described in Understanding Handler States.