Miss Manners and Waltzdb
While durable_rules is a framework for complex event coordination, its linguistic abstraction is also suitable for solving business rules problems such as constraint propagation. At its core durable_rules is powered by a variation of the Rete algorithm. Measuring and driving its performance using well-established Business Rules industry standards has been extremely valuable: it has brought the quality of the implementation to a new level, improving performance by orders of magnitude and making the system more robust.
The most popular industry standards for production Business Rules systems are “Miss Manners” and Waltzdb. Designed more than 25 years ago, they have been the subject of endless debate: their direct applicability to customer problems has been questioned, as have features added by platform vendors specifically to improve the numbers. Even so, we still use these benchmarks to understand the performance of Business Rules systems. Manners and Waltzdb are complementary: while Manners tests the speed of combinatoric evaluation, Waltzdb tests the efficiency of action scheduling and evaluation.
Just as Thomas Edison famously said about genius, performance improvements too are one percent inspiration and ninety-nine percent perspiration. Because durable_rules relies on Redis to store inference state, all performance improvements came from optimizing how Redis is used. For each experiment I reset the slowlog, executed the test, reviewed the slowlog results and implemented an improvement (sometimes a regression 😦 ), over and over again. In my first experiments I set the slowlog threshold to 1 second; as the improvements took effect I lowered the threshold, and by the end I was only reviewing commands that took more than 1 millisecond.
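As an illustration of that loop, here is a minimal sketch using the redis-py client and the standard SLOWLOG commands; the workload function and the thresholds shown are placeholders for whichever benchmark run was being measured:

```python
import redis

# Connect the way the benchmarks were run: over a Unix socket.
client = redis.Redis(unix_socket_path='/tmp/redis.sock')

def run_experiment(workload, threshold_us):
    # Log any command slower than the threshold (in microseconds).
    client.config_set('slowlog-log-slower-than', threshold_us)
    client.slowlog_reset()

    workload()  # run the benchmark under measurement

    # Review the slowest commands recorded during the run.
    for entry in client.slowlog_get(25):
        print(entry['duration'], entry['command'])

# Early experiments used a 1 second threshold, later ones 1 millisecond.
# 'run_manners_128' is a placeholder for the actual benchmark driver.
# run_experiment(run_manners_128, threshold_us=1_000_000)
# run_experiment(run_manners_128, threshold_us=1_000)
```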
All performance measurements were done with Redis 3.0 over Unix sockets, with Redis disk persistence disabled, on an iMac (4GHz i7, 32GB 1600MHz DDR3, 1.12 TB Fusion Drive).
Consider the following tests, plotted in the chart below (a rough reproduction sketch follows the list):
- Blue: a set of sequential hget operations issued from a node.js client in a tight loop. The throughput for retrieving a 200 byte value is about 25K operations per second; as the value size increases, throughput decreases to 11K operations per second for 20KB values.
- Green: sets of 100 hget operations batched in a multi-exec command. The throughput for a 200 byte value is around 160K operations per second, degrading to 18K when the value is 20KB.
- Yellow: a set of hget operations inside a Lua script. The throughput for 200 byte values jumps all the way up to 3M operations per second, degrading to 458K for 20KB values.
- Red: a set of cmsgpack unpack operations inside a Lua script. The throughput for 200 byte values is 9M operations per second, degrading to 1.1M for 20KB values.
- Orange: a set of Lua table get operations inside a Lua script. The throughput remains constant at 63M operations per second.
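The sketch below shows roughly how the first three measurements can be reproduced with redis-py; the value size, batch size and iteration count are assumptions, and absolute numbers will vary with hardware:

```python
import time
import redis

client = redis.Redis(unix_socket_path='/tmp/redis.sock')
client.hset('bench', 'field', b'x' * 200)  # assumed 200 byte value
N = 10000

# Blue: sequential hget calls, one round trip each.
start = time.time()
for _ in range(N):
    client.hget('bench', 'field')
print('sequential:', int(N / (time.time() - start)), 'ops/sec')

# Green: hget calls batched 100 at a time in MULTI/EXEC transactions.
start = time.time()
for _ in range(N // 100):
    pipe = client.pipeline(transaction=True)
    for _ in range(100):
        pipe.hget('bench', 'field')
    pipe.execute()
print('batched:', int(N / (time.time() - start)), 'ops/sec')

# Yellow: hget calls issued from inside a Lua script, no round trips at all.
script = client.register_script("""
for i = 1, tonumber(ARGV[1]) do
    redis.call('hget', KEYS[1], 'field')
end
return ARGV[1]
""")
start = time.time()
script(keys=['bench'], args=[N])
print('lua:', int(N / (time.time() - start)), 'ops/sec')
```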
What matters in the experiment above is the order of magnitude of each operation. For the Manners and Waltzdb benchmarks, where hundreds of thousands of combinations need to be evaluated in a few seconds, reducing sequential and batched access from Redis clients is critical but not enough: access to Redis structures from Lua, as well as packing and unpacking with cmsgpack, can also become a bottleneck. All performance optimizations therefore followed these principles (a small sketch after the list illustrates them):
- Push the combinatorics evaluation as close to the data as possible by leveraging Lua scripts.
- Design chunky Lua script function signatures, batching operations when possible.
- Avoid fetching the same information from Redis data structures multiple times by caching it in Lua local variables.
- Be careful with the shape of the data stored, and avoid excessive cmsgpack usage.
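As a schematic illustration of these principles (not the actual durable_rules scripts), the following sketch pushes a whole batch of candidate evaluations into one Lua call, reads the target value from Redis only once and keeps it in a Lua local:

```python
import redis

client = redis.Redis(unix_socket_path='/tmp/redis.sock')
client.hset('facts', 'target_field', 'b')

# One chunky call: the target value is fetched from the hash once and kept in
# a Lua local, and every candidate in the batch is evaluated next to the data.
evaluate_batch = client.register_script("""
local target = redis.call('hget', KEYS[1], ARGV[1])
local matches = {}
for i = 2, #ARGV do
    if ARGV[i] == target then
        matches[#matches + 1] = ARGV[i]
    end
end
return matches
""")

# A single round trip carries the whole batch of candidates.
print(evaluate_batch(keys=['facts'], args=['target_field', 'a', 'b', 'c']))
```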
Miss Manners has decided to throw a party. She wants her guests not to get bored, so she needs to seat them such that adjacent guests are always of the opposite sex and share at least one hobby. The problem needs to be solved for 8, 16, 32 and 128 guests.
The ruleset for the algorithm is relatively simple, consisting of only seven rules (Ruby, Python, JScript). Guests' hobbies are expressed as facts, so the input datasets consist of 20, 40, 83 and 439 facts for 8, 16, 32 and 128 guests respectively. The declarative algorithm conceptually (not practically) creates the combinatorics tree for all the guests and selects the first valid solution via depth-first search.
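For illustration only, and assuming the standard durable_rules Python API with the 'manners' ruleset (the seven rules linked above) already defined, a guest is presumably asserted as one fact per hobby, which is how a handful of guests expands into the fact counts above:

```python
from durable.lang import *

# Hypothetical guest: one fact per hobby, so three hobbies become three facts.
assert_fact('manners', {'t': 'guest', 'name': 'guest-1', 'sex': 'm', 'hobby': 'h1'})
assert_fact('manners', {'t': 'guest', 'name': 'guest-1', 'sex': 'm', 'hobby': 'h2'})
assert_fact('manners', {'t': 'guest', 'name': 'guest-1', 'sex': 'm', 'hobby': 'h3'})
```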
The antecedent in the rule below (expressed in Python) is the heart of the algorithm, as it determines valid solutions. Consider the case of 128 seatings with 439 guest assertions: by the end of the test execution the first three terms of the expression will have generated hundreds of thousands of combinations, and the next two terms will have filtered out the invalid ones. Recalculating the full expression, for all terms and all combinations, every time a new seating is asserted slows down execution significantly.

```python
@when_all(c.seating << (m.t == 'seating') & (m.path == True),
          c.right_guest << (m.t == 'guest') & (m.name == c.seating.right_guest_name),
          c.left_guest << (m.t == 'guest') & (m.sex != c.right_guest.sex) & (m.hobby == c.right_guest.hobby),
          none((m.t == 'path') & (m.p_id == c.seating.s_id) & (m.guest_name == c.left_guest.name)),
          none((m.t == 'chosen') & (m.c_id == c.seating.s_id) & (m.guest_name == c.left_guest.name) & (m.hobby == c.right_guest.hobby)),
          (m.t == 'context') & (m.l == 'assign'))
def find_seating(c):
    ...
```

I addressed the problem with the following three improvements:
- When evaluating combinatorics and resolving action scheduling conflicts, the rules engine has to choose the most recent result, which enables the depth-first search for a valid solution. This resolution policy has been implemented and pushed as a de-facto standard since the early days of Business Rules.
- For a given rule, to avoid scanning and evaluating all expression lvalues (the inference list) for every assertion, the inference results are stored in a hash-set indexed by the combined result of all equality expressions (see the sketch after this list). The complexity of the combinatorics calculation thus becomes linear in the number of assertions made on the rule.
- When evaluating a fact through a complex expression, the engine might compare it with the same target fact multiple times; caching target facts in Lua local variables reduces the number of Redis data structure accesses.
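The sketch below illustrates the second improvement in isolation, with made-up key names rather than durable_rules' real data layout: guest facts are indexed by the value of their equality term (the hobby), so a new seating looks up only the guests that can possibly match instead of rescanning every guest fact asserted so far:

```python
import redis

client = redis.Redis(unix_socket_path='/tmp/redis.sock')

def index_guest(name, sex, hobby):
    # One Redis set per equality-key value: guests sharing a hobby live together.
    client.sadd('manners!guests!' + hobby, '%s!%s' % (name, sex))

def candidate_left_guests(right_guest_sex, right_guest_hobby):
    # Direct lookup by the equality key, then the cheap inequality filter on sex.
    members = client.smembers('manners!guests!' + right_guest_hobby)
    return [m for m in members if not m.decode().endswith('!' + right_guest_sex)]
```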
Waltz is an image recognition algorithm invented by Dr. David Waltz in 1972. Given a set of lines in a 2D space, the computer needs to interpret the 3D depth of the image. The first part of the algorithm identifies four types of junctions: Line, Arrow, Fork and Tee. Junctions have 16 different possible labelings following the Huffman-Clowes notation, where + is convex, – is concave and an arrow is occluding. Valid junctions and labelings:
It is important to point out that pairs of adjacent junctions constrain each other's edge labeling. So, after choosing the labeling for an initial junction, the second part of the algorithm iterates through the graph, propagating the labeling constraints by removing inconsistent labels. An example of labeling a cube with 9 lines:
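Conceptually, the propagation step looks like the plain-Python sketch below (durable_rules expresses it declaratively across its rules; the junction and constraint representations here are placeholders):

```python
from collections import deque

def propagate(candidates, constraints):
    # candidates: {junction: set of candidate labelings}
    # constraints: list of (a, b, consistent) where consistent(la, lb) says whether
    # labeling la of junction a agrees with labeling lb of junction b on the edge
    # they share; both directions of each adjacent pair are listed.
    queue = deque(constraints)
    while queue:
        a, b, consistent = queue.popleft()
        # Keep only labelings of a supported by some surviving labeling of b.
        supported = {la for la in candidates[a]
                     if any(consistent(la, lb) for lb in candidates[b])}
        if supported != candidates[a]:
            candidates[a] = supported
            # A change at a can invalidate labelings at its other neighbours.
            queue.extend(c for c in constraints if c[1] == a)
    return candidates
```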
The ruleset for the algorithm consists of 34 rules (Ruby, Python, JScript). In the Waltzdb benchmark, the dataset is a set of cubes organized in regions. Each region consists of 4 visible and 2 hidden cubes composed of 72 lines. In addition, all tests are flanked by 4 visible and 1 hidden cube formed by 76 more lines. The tests were run with 4 regions (376 lines, 20 visible cubes, 9 hidden cubes), 8 (680 lines, 36 visible cubes, 17 hidden cubes), 12 (984 lines, 52 visible cubes, 23 hidden cubes), 16 (1288 lines, 68 visible cubes, 33 hidden cubes) and 50 (3872 lines, 204 visible cubes, 101 hidden cubes). Getting Waltzdb to perform well required a few additional optimizations:
- Waltzdb requires a large number of action evaluations. Some actions (reversing edges or making junctions) don't imply a change of context (moving the state forward); such actions can be batched and marshaled to the scripting engine in a single shot (a batching sketch follows this list).
- A fact assertion can lead to multiple rule evaluations. Instead of invoking a Lua function for every rule evaluation, all rule evaluations for a single fact assertion can be batched and marshaled to Redis in a single shot.
- durable_rules doesn’t support fact modification. To avoid excessive churn in the scheduled action queue, events can be used (instead of facts) to drive the state machine coordination.
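A schematic sketch of the first batching idea, with made-up key names and payload shape: pending actions that don't move the context forward are packed into one payload and applied by a single Lua call instead of one round trip each:

```python
import json
import redis

client = redis.Redis(unix_socket_path='/tmp/redis.sock')

# Apply a whole batch of edge-labeling actions in one script call instead of
# one EVAL per action; 'waltzdb!edges' and the action fields are placeholders.
apply_batch = client.register_script("""
local actions = cjson.decode(ARGV[1])
for i = 1, #actions do
    redis.call('hset', KEYS[1], actions[i].edge, actions[i].label)
end
return #actions
""")

pending = [{'edge': 'j1-j2', 'label': '+'}, {'edge': 'j2-j3', 'label': '-'}]
applied = apply_batch(keys=['waltzdb!edges'], args=[json.dumps(pending)])
```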
durable_rules' efficiency in combinatorics evaluation is competitive, and its memory efficiency is remarkable. The results compare well with well-established production Business Rules systems. There is a performance penalty for marshaling actions in and out of Redis, as Waltzdb shows.
Redis is a great technology, but it has to be used appropriately. Manners pushes 225K Redis commands per second and Waltzdb 159K, while a simple experiment reached 3M hget commands per second, which means I might still be able to squeeze more performance out of this system.