Note! This project has moved to github.com/armmbed/mbed-events
Composable Event Loops¶
At the simplest level, composable event loops combine the cheap synchronization of event loops with the composability of preemptive threads.
Multithreading¶
In a traditional OS, asynchronous tasks are generally accomplished by multiple threads or processes. Each thread has its own stack and is preempted periodically to switch between active threads. Low-level synchronization primitives or synchronized queues are often used to communicate between running threads.
- Complicated logic can be executed linearly without worrying about delays
- Multiple threads can be composed without impacting the existing timing of other threads
- Supports multiple cores
- Parallelism is transparent to the user
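For illustration, a minimal sketch of two threads communicating over a synchronized queue, written with standard C++ threads rather than any particular OS's primitives:

// Minimal sketch of thread-based communication over a synchronized queue.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// A queue protected by a mutex and condition variable so that a producer
// thread and a consumer thread can safely share it.
template <typename T>
class SyncQueue {
public:
    void push(T value) {
        std::lock_guard<std::mutex> lock(_mutex);
        _queue.push(value);
        _ready.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(_mutex);
        _ready.wait(lock, [this] { return !_queue.empty(); });
        T value = _queue.front();
        _queue.pop();
        return value;
    }

private:
    std::queue<T> _queue;
    std::mutex _mutex;
    std::condition_variable _ready;
};

int main() {
    SyncQueue<int> readings;

    // The producer runs linearly without worrying about the consumer's timing.
    std::thread producer([&] {
        for (int i = 0; i < 5; i++) {
            readings.push(i * 10);
        }
        readings.push(-1); // sentinel: no more readings
    });

    // The consumer is preempted and scheduled independently of the producer.
    std::thread consumer([&] {
        for (int value = readings.pop(); value != -1; value = readings.pop()) {
            printf("got: %d\n", value);
        }
    });

    producer.join();
    consumer.join();
}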
Event Loop¶
In the classic event loop model, a single process executes short-lived events sequentially. Only a single stack is used and no actual synchronization is needed between events. Ensuring that events return in a timely manner is left up to the user.
- Synchronization is free
- Reduced memory consumption with only a single stack
- No cost for context switching and synchronization
- Parallelism is explicit and controlled
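For comparison, a minimal sketch of the classic event loop model: short-lived callbacks are dispatched sequentially from one queue, and no locking is needed between them (standard C++, not the simple events library's API):

// Minimal sketch of a single-threaded event loop.
#include <cstdio>
#include <functional>
#include <queue>

class SimpleLoop {
public:
    // Enqueue an event; no locking is needed since everything runs on one thread.
    void post(std::function<void()> event) {
        _events.push(event);
    }

    // Dispatch queued events until the queue is empty. Each event must
    // return promptly, or it blocks every event behind it.
    void dispatch() {
        while (!_events.empty()) {
            std::function<void()> event = _events.front();
            _events.pop();
            event();
        }
    }

private:
    std::queue<std::function<void()>> _events;
};

int main() {
    SimpleLoop loop;
    loop.post([] { printf("first event\n"); });
    loop.post([&] {
        printf("second event, posting a third\n");
        loop.post([] { printf("third event\n"); });
    });
    loop.dispatch();
}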
Composable Event Loops¶
A composable event loop system is a model for structuring multitasking programs such that each module is contained in a single event loop. Multiple modules are separated into distinct parallel threads and can communicate through message passing in the form of registering events across modules.
At minimum, composable event loops need three primitives:
- Modular event loops - dispatching of a module's queued events
- Multithreading - isolation between multiple modules' threads of execution
- Synchronized event registration - message passing between modules in the form of enqueuing events
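A rough sketch of how these primitives might fit together, with hypothetical names rather than the simple events library's actual API: each module owns an event loop that dispatches on its own thread, and other modules register events by enqueuing onto that loop through a lock-protected queue.

// Hypothetical sketch of the three primitives working together; the names here
// are illustrative and are not the simple events library's API.
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class ModuleLoop {
public:
    // Multithreading: each module's loop dispatches on its own thread.
    ModuleLoop() : _thread([this] { dispatch(); }) {}

    // Shut down by posting a stop event onto the module's own loop.
    ~ModuleLoop() {
        post([this] { _running = false; });
        _thread.join();
    }

    // Synchronized event registration: other modules enqueue events here.
    // This lock is the only synchronization callers ever need.
    void post(std::function<void()> event) {
        std::lock_guard<std::mutex> lock(_mutex);
        _events.push(event);
        _ready.notify_one();
    }

private:
    // Modular event loop: dispatch this module's queued events in order,
    // each one running unsynchronized on this module's own thread.
    void dispatch() {
        while (_running) {
            std::unique_lock<std::mutex> lock(_mutex);
            _ready.wait(lock, [this] { return !_events.empty(); });
            std::function<void()> event = _events.front();
            _events.pop();
            lock.unlock();
            event();
        }
    }

    bool _running = true;
    std::queue<std::function<void()>> _events;
    std::mutex _mutex;
    std::condition_variable _ready;
    std::thread _thread;
};

int main() {
    ModuleLoop logger;
    ModuleLoop sensor;

    // Message passing between modules is just enqueuing an event
    // on the other module's loop.
    sensor.post([&] {
        logger.post([] { printf("reading received\n"); });
    });
    // Destructors join both loops, draining any queued events first.
}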
Example¶
An asynchronous teapot based on the simple events library:
/* An asynchronous tea pot built on composable event loops */
class TeaPot {
private:
    // Internal event loop and set of events for the tea pot
    EventLoop loop;
    Event<void()> begin_event;
    Event<void(float)> read_event;

    // Other tea pot internals
    HeatingElement heater;
    AnalogIn sensor;
    FuncPtr<void()> done_callback;

public:
    // The publicly-accessible boil method can be called from other threads.
    // Registering with the internal event loop synchronizes the tea pot.
    void boil(FuncPtr<void()> callback) {
        done_callback = callback;
        begin_event.attach(&loop, this, &TeaPot::begin_callback);
        begin_event.trigger();
    }

private:
    // Event registered from the boil method in case the heating element is not thread-safe.
    void begin_callback() {
        heater.on();
        read_event.attach(&loop, this, &TeaPot::read_callback);
        sensor.read(&read_event);
    }

    // Event for handling read callbacks. This callback is issued from the AnalogIn sensor,
    // but runs on the TeaPot's event loop. AnalogIn may or may not use an event loop.
    void read_callback(float data) {
        // Hopefully this log call will be pretty quick. However, if the log call takes a while,
        // we only have to worry about it blocking our tea pot.
        log.log("got: %f\n", data);

        if (data > 110) {
            heater.off();

            // Since events are compatible function objects, triggering events that may be from
            // other event loops is as easy as a function call.
            if (done_callback) {
                done_callback();
            }
        } else {
            sensor.read(&read_event);
        }
    }
};
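From another module or thread, using the tea pot is then a single call; the callback name below is hypothetical:

// Safe to call from any thread: boil() only registers an event on the
// tea pot's own loop, which provides the synchronization.
TeaPot teapot;
teapot.boil(tea_is_ready); // tea_is_ready: a hypothetical FuncPtr<void()> supplied by the caller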
Benefits¶
The core benefit of composable event loops is that synchronization concerns align with the separation of concerns between individual modules. When creating a module, a developer only needs to worry about internal synchronization and timing; external synchronization is provided through event registration.
- Modules are internally synchronized, removing the need to synchronize individual components or to worry about interactions between internal threads
- Multiple modules can be composed without impacting the existing timing of other modules
- The number of stacks is statically bounded by the number of modules
- Supports multiple cores, up to the number of modules
- Easily integrated with interrupts, which can be treated as another module boundary
- Easily integrated with traditional multithreaded environments
- Parallelism is transparent across module boundaries but controlled inside modules