Inter-thread messaging with Disruptor
Vladimir Iliev, July 4th, 2015
Aug 08, 2015
Agenda
1. Problem definition
2. Current mainstream solution
3. Why it doesn't work
4. Why the Disruptor works better
5. How much better?
6. Q & A
What problem are we trying to solve?
1. Retail financial exchange
2. Low latency
3. High throughput
4. High availability
5. Reliability
6. Can't lose data
The problem with queues
A lot of garbage
• Queues get resized, and objects get allocated and released on every message
• Garbage collection pauses cause latency jitter
CPU unfriendliness
• Queue implementations do not take advantage of what CPUs have to offer in terms of optimizing performance
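The garbage point above can be sketched as follows. This is a hypothetical illustration (the `Slot` and `PreallocatedRing` names are mine, not the Disruptor's API): all entries are allocated once up front, and publishing a message mutates an existing slot instead of allocating a new object, so the steady state produces no garbage.

```java
// Hypothetical sketch: pre-allocate all entries so that publishing reuses
// an existing object instead of allocating (and later collecting) a new one.
final class Slot {
    long value;                            // reusable payload, allocated once
}

final class PreallocatedRing {
    private final Slot[] slots;
    private final int mask;

    PreallocatedRing(int size) {           // size must be a power of two
        slots = new Slot[size];
        for (int i = 0; i < size; i++) {
            slots[i] = new Slot();         // all allocation happens here, once
        }
        mask = size - 1;
    }

    // Claiming a sequence just maps it onto an existing slot - no allocation,
    // so there is nothing for the garbage collector to do at runtime.
    Slot claim(long sequence) {
        return slots[(int) (sequence & mask)];
    }
}
```

Because the buffer never resizes, sequence `n` and sequence `n + size` reuse the very same object, which is exactly what keeps GC pauses out of the hot path.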
The cost of contention
1. Locks
• Provide mutual exclusion and visibility of change
• Require arbitration by the OS kernel
• Caches lose hot data and instructions
2. CAS
• Atomic conditional updates to a single word
• Significantly more efficient than locks
• A lot harder to get right
3. Memory barriers
• Indicate sections of code where read/write order is important
• Required in a multi-threaded environment for visibility
• In Java – the volatile keyword
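The lock-versus-CAS distinction can be sketched with `java.util.concurrent.atomic`. This is a minimal illustration (the `Counters` class is mine): the locked increment may park the thread and involve the kernel under contention, while the CAS loop simply retries the atomic conditional update in user space — more efficient, but the retry loop is the part that is "a lot harder to get right".

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: the same counter increment, once with a monitor lock and once
// with a compare-and-swap (CAS) retry loop.
final class Counters {
    private long lockedCount;                         // guarded by synchronized
    private final AtomicLong casCount = new AtomicLong();

    // Lock-based: mutual exclusion via the object monitor; under contention
    // the OS kernel arbitrates which thread runs next.
    synchronized void incrementWithLock() {
        lockedCount++;
    }

    // CAS-based: atomic conditional update of a single word; on failure we
    // re-read and retry instead of blocking.
    void incrementWithCas() {
        long current;
        do {
            current = casCount.get();
        } while (!casCount.compareAndSet(current, current + 1));
    }

    synchronized long locked() { return lockedCount; }
    long cas() { return casCount.get(); }
}
```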
Why queues don't get along with CPUs
1. Even in a 1P1C (one producer, one consumer) scenario there are two writers to the queue – the producer updates the tail and the consumer updates the head
2. A queue is usually either full or empty – head and tail often end up in the same cache line
3. Linked-list-backed queues are hard for the hardware to pre-fetch
4. Array-backed queues are hard to resize
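The "same cache line" point is about false sharing: when two threads write fields that share a 64-byte cache line, each write invalidates the other core's copy. A classic pre-`@Contended` mitigation is to pad the hot fields apart, as in this sketch (the `PaddedSequences` name is mine; note that the JVM is free to reorder fields, so real implementations such as the Disruptor use class-hierarchy tricks or `jdk.internal.vm.annotation.@Contended` to make the padding reliable):

```java
// Sketch: keep head and tail on separate cache lines by padding.
// Seven longs (56 bytes) plus the field itself spans a typical 64-byte line,
// so a write to tail no longer invalidates the line holding head.
final class PaddedSequences {
    volatile long head;                     // written by the consumer
    long p1, p2, p3, p4, p5, p6, p7;        // padding between the two hot fields
    volatile long tail;                     // written by the producer
    long q1, q2, q3, q4, q5, q6, q7;        // padding against neighbouring objects
}
```

Functionally the padding fields are dead weight; their only job is to change the memory layout so the two counters stop ping-ponging a cache line between cores.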
Summary
1. Understand how the underlying hardware works
2. Aim for simplicity – the single-writer principle, simple clean code
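The single-writer principle above can be sketched as a minimal single-producer/single-consumer ring buffer. This is a hypothetical illustration, not the Disruptor itself: because each sequence has exactly one writing thread (the producer owns `tail`, the consumer owns `head`), no locks or CAS are needed – the `volatile` stores alone provide the required visibility and ordering.

```java
// Hypothetical sketch: a lock-free SPSC ring buffer built on the
// single-writer principle. Safe only with exactly one producer thread
// and one consumer thread.
final class SpscRingBuffer {
    private final long[] buffer;
    private final int mask;
    private volatile long head;            // written ONLY by the consumer
    private volatile long tail;            // written ONLY by the producer

    SpscRingBuffer(int size) {             // size must be a power of two
        buffer = new long[size];
        mask = size - 1;
    }

    // Producer thread only.
    boolean offer(long value) {
        if (tail - head == buffer.length) {
            return false;                  // full
        }
        buffer[(int) (tail & mask)] = value;
        tail = tail + 1;                   // volatile store publishes the slot
        return true;
    }

    // Consumer thread only; returns null when empty.
    Long poll() {
        if (head == tail) {
            return null;                   // empty
        }
        long value = buffer[(int) (head & mask)];
        head = head + 1;                   // volatile store frees the slot
        return value;
    }
}
```

Note how each counter only ever increases and has a single owner, so neither thread ever needs to win a race – the simplicity is the point.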