Inference: From Expert Systems to Cloud Scale Event Processing

by jruizwp

For a few years now I have been interested in event processing: tracking and analyzing information about things that happen. Not only because time and again I have stumbled upon problems that can easily be modeled with events, but also because I have come to believe that proper event modeling, just like data modeling, is one of the fundamental pillars of distributed application design and development. E-commerce, supply-chain management, financial, social and health-care applications are some of the domains where event processing is interesting.

There are a number of high-end commercial products specifically tailored for event processing. Despite very strong theoretical underpinnings, in my opinion many of them suffer from fundamental problems:

  • Complex abstraction: Very powerful abstractions can cover a large number of cases. The learning curve, however, is steep. In addition, the abstraction is not well integrated with the most popular languages and tooling.
  • Type system impedance: Event rules and event data are defined in a custom type system, while user state is typically handled separately using relational models. As a result, applications spend a large number of cycles transforming data.
  • Event and state storage: Because of the type system impedance, events and state are often stored separately, which leads to increased network access and complex distributed coordination.
  • Difficult to manage: Monolithic servers cannot readily be deployed to the cloud, as they require special network and management arrangements.

First Round

About a year ago, given the rapid developments in cloud technologies and the trend towards enterprise software democratization, I decided to invest my personal time in researching this area and sharing the results with anyone interested. After a few months of intense creativity, learning, and coding I published the results to NPM under the name ‘durable’ version 0.0.x, git repository http://www.github.com/jruizgit/durable.

Important learnings to report:

  • The best place for writing code is the airplane, preferably an overseas roundtrip to/from Shanghai. Indeed: no email, no messages, no phone, 12-14 hours of uninterrupted coding! It is an unsurpassable remedy for jet lag when flying out of Seattle around noon. Getting back to Seattle at 8 AM is a little rough, though, given you need to start the day after staring at a computer for more than 10 hours.
  • A new exciting project is a perfect way to justify to your partner the purchase of new equipment (in my case a beautiful MacBook Pro). I must say: the Retina display has kept me motivated, as I look forward to working on my project at least a few minutes every day just because the editor, the browser and the terminal look so clear and beautiful.

On a more serious note: From the beginning of the project I established a few core principles worth enumerating and carrying along in future iterations.

  • JSON type system: Event information and user state are defined, stored and queried using the JSON type system.
  • Storage: Events and user state are stored in the same place. This eliminates unnecessary network access and distributed consistency coordination.
  • REST API: Events are raised through a REST API, which allows for easy integration with different event sources (see the sketch after this list).
  • Fault tolerance: Reactions to events are guaranteed to be executed at least once. The tradeoff is that reactive functions need to be idempotent.
  • Cloud ready: Leverage technologies that can easily be hosted and managed in the cloud: Node.js, D3.js and MongoDB in this case.
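
To make the REST API and fault tolerance principles concrete, here is a minimal sketch of raising an event from Node.js. The host, port, path and event fields are illustrative assumptions, not the actual ‘durable’ API:

    // Raise an event by POSTing a JSON document (the endpoint is hypothetical).
    var http = require('http');

    // The 'id' field lets the server deduplicate retries, which keeps
    // at-least-once delivery compatible with idempotent reactions.
    var event = JSON.stringify({ id: 1, subject: 'approve', amount: 100 });

    var request = http.request({
        host: 'localhost',
        port: 5000,
        path: '/approvalRuleset/events',
        method: 'POST',
        headers: { 'Content-Type': 'application/json' }
    }, function (response) {
        console.log('event accepted with status ' + response.statusCode);
    });

    request.write(event);
    request.end();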

I was very happy with the results of my work for about a week. Then disaster! The code got old, it started to rot and had to be rewritten (yes, I have worked as a manager for a long time; however, I’m still a developer and I can’t help it). A couple of fundamental problems made me feel uncomfortable with my recent work:

  • Meta-linguistic abstraction: It had many concepts, which made it complex. It heavily relied on the ‘Promise’ concept, which did not compose well with the Node.js ecosystem. And it inherited the MongoDB expression language with its faults and limitations: for example, a logical ‘and’ was defined using a JSON object, which implies no ordering (see the example after this list).
  • Message brokering: Events were stored as messages in a queue. A background daemon periodically fetched and correlated messages with user state. As a result, I could not bring the performance of the system quite to where I wanted it.
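
To illustrate the ordering problem: per the JSON specification, object members are unordered while array elements are ordered. The operator syntax below is only meant to illustrate the idea, not the exact expression language of MongoDB or ‘durable’ 0.0.x:

    // An 'and' expressed as a JSON object gives the engine no say over
    // which term is evaluated first, because object keys are unordered:
    var unordered = { $and: { subject: 'approve', amount: { $lt: 100 } } };

    // An 'and' expressed as a JSON array preserves evaluation order:
    var ordered = { $and: [ { subject: 'approve' }, { amount: { $lt: 100 } } ] };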

Note: I will describe the benchmark in future posts; suffice it to say that on a quad-core, 16 GB iMac I got 700 event-action-event cycles/sec.

Second Round

In September of last year, I had heard a number of people allude to the importance and relevance of business rules. I was familiar with this domain, but I had always considered it a solution for a very specific problem. With some skepticism, I spent a few days reading documents on inference, forward chaining and Rete algorithms. It occurred to me that inference could improve the performance of ‘durable’ by at least an order of magnitude. So I decided to start a new code iteration by implementing a Rete algorithm that could scale out simply by adding commodity hardware. The published papers on forward chaining only consider an in-memory, single-process environment, with no consistency problems and no cost of memory access. So my main area of focus became the use of cache technology for storing inference state. In the end I decided to use Redis because it is fast, it offers powerful data structures (hashes and sorted sets) and server-side scripting, and, believe it or not, its single-threaded model is great for handling concurrency and consistency.
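
As a rough sketch of how Redis can hold inference state: facts matched by a rule’s antecedent can be kept in a hash, and a server-side Lua script can record a fact and test for a complete match atomically, relying on Redis’ single-threaded execution instead of distributed locks. The key names and matching logic below are assumptions for illustration, not the actual ‘durable’ implementation:

    // Sketch: store partially matched facts in a Redis hash and use a
    // Lua script so the update and the match test happen atomically.
    var redis = require('redis');
    var client = redis.createClient();

    var evalRule = 'redis.call("hset", KEYS[1], ARGV[1], ARGV[2]) ' +
                   'if redis.call("hlen", KEYS[1]) == tonumber(ARGV[3]) then ' +
                   '    return redis.call("hgetall", KEYS[1]) ' +
                   'end ' +
                   'return nil';

    // Record the fact 'amount' for session 's1'; this hypothetical rule
    // needs two facts before its antecedent is satisfied.
    client.eval(evalRule, 1, 'rule:approve:s1', 'amount', '100', '2',
        function (err, match) {
            if (match) {
                console.log('antecedent satisfied:', match);
            }
            client.quit();
        });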

On top of the basic principles I had established in the first iteration (JSON, storage, REST API, fault tolerance and cloud ready), I adopted four new principles:

  • Meta-linguistic abstraction: A reduced set of concepts provides an intuitive, minimalistic abstraction to ease the learning curve.
  • Rules: The basic building block of the system is the ‘Rule’, which is composed of an antecedent (expression) and a consequent (action). By allowing rules to be expressed over incoming events as well as user state, more complex constructs such as statecharts and flowcharts can be supported, because they can be reduced to a set of rules (see the sketch after this list).
  • Forward chaining: Allows for quick event evaluation without the need to recompute all expressions or to read and deserialize user state. This is the key to a significant performance boost over my previous implementation based on message brokering.
  • Caching: Inference state is stored in an out-of-process cache, which allows scaling out rule evaluation beyond a single process using commodity hardware.
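
Putting the new principles together, a rule pairs an antecedent with a consequent. The shape below is a hypothetical sketch of such a definition (the whenAll, run and post names are assumptions; see the repository for the actual ‘durable’ 0.10.x syntax):

    // Hypothetical rule definition: an antecedent over events and state,
    // paired with a consequent action (not the exact 'durable' API).
    var approvalRuleset = {
        approve: {
            // Antecedent: an ordered 'and' over the incoming event.
            whenAll: [ { subject: 'approve' }, { amount: { $lt: 1000 } } ],
            // Consequent: must be idempotent, as it may run more than once.
            run: function (session) {
                session.state.status = 'approved';
                session.post({ id: 2, subject: 'approved' });
            }
        }
    };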

I just published my work to NPM under the name ‘durable’ version 0.10.x, git repository http://www.github.com/jruizgit/rules. So far I’m still happy with my work, but a few questions remain unanswered:

  • Can I improve the performance of the system even more by reimplementing it in C and cutting the cost of event marshaling to Redis?
  • Or will the next performance boost come from moving rule evaluation closer to the memory location?
  • Can I leverage a C implementation to support meta-linguistic abstractions for Python and Ruby?
  • Now that I’m not traveling to Shanghai as often, how can I find 12 hours of uninterrupted coding time?

Note: After more than a month of performance tuning, using the same benchmark as in First Round, on a quad-core, 16 GB iMac I got 7500 event-action-event cycles/sec.