Note! This project has moved to github.com/armmbed/mbed-events
Composable Event Loops
At the simplest level, composable event loops combine the cheap synchronization of event loops with the composability of preemptive threads.
Multithreading
In a traditional OS, asynchronous tasks are generally accomplished by multiple threads or processes. Each thread has its own stack and is preempted periodically to switch between active threads. Low-level synchronization primitives or synchronized queues are often used to communicate between running threads.
Example of a multithreaded teapot:
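A minimal sketch of what this might look like, using standard C++ threads; the HeatingElement and AnalogIn classes are hypothetical hardware drivers borrowed from the composable event loop example at the end of this page:

```cpp
// Minimal sketch of a multithreaded tea pot, using standard C++ threads.
// HeatingElement and AnalogIn are hypothetical driver classes borrowed from
// the composable event loop example at the end of this page.
#include <chrono>
#include <functional>
#include <mutex>
#include <thread>

class TeaPot {
private:
    HeatingElement heater;
    AnalogIn sensor;
    std::mutex mutex;  // low-level synchronization guarding the heater

public:
    // boil() can be called from any thread; the blocking logic runs on its
    // own preempted thread so callers are never delayed.
    void boil(std::function<void()> done) {
        std::thread([this, done] {
            std::lock_guard<std::mutex> lock(mutex);
            heater.on();
            // Complicated logic can be written linearly, blocking as needed
            while (sensor.read() <= 110) {
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            }
            heater.off();
            done();
        }).detach();
    }
};
```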
Benefits:
- Complicated logic can be executed linearly without worrying about delays
- Multiple threads can be composed with minimal impact on the timing of existing threads
- Supports multiple cores
- Parallelism is transparent to the user
Event Loop
In the classic event loop model, a single process executes short-lived events sequentially. Only a single stack is used and no synchronization is needed between events. The user is responsible for ensuring that events return in a timely manner.
Example of an event loop based teapot:
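A minimal sketch of the same teapot as a single event loop, using a hand-rolled queue of std::function events; HeatingElement and AnalogIn are again hypothetical driver classes:

```cpp
// Minimal sketch of an event-loop-based tea pot, using a hand-rolled queue of
// std::function events. HeatingElement and AnalogIn are hypothetical driver
// classes borrowed from the example at the end of this page.
#include <functional>
#include <queue>

using Event = std::function<void()>;
std::queue<Event> event_queue;  // one queue, one stack, no locks

class TeaPot {
private:
    HeatingElement heater;
    AnalogIn sensor;
    std::function<void()> done_callback;

public:
    void boil(std::function<void()> done) {
        done_callback = done;
        heater.on();
        event_queue.push([this] { check(); });
    }

private:
    // Each event must return quickly so it does not starve other events.
    void check() {
        if (sensor.read() > 110) {
            heater.off();
            done_callback();
        } else {
            event_queue.push([this] { check(); });  // check again on a later pass
        }
    }
};

// The single event loop: events run to completion, one at a time.
void dispatch() {
    while (!event_queue.empty()) {
        Event event = event_queue.front();
        event_queue.pop();
        event();
    }
}
```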
Benefits:
- Synchronization is elided
- Reduced memory consumption with only a single stack
- No cost for context switching and synchronization
- Parallelism is explicit and controlled
Composable Event Loops
A composable event loop system is a model for structuring multitasking programs such that each module is contained in a single event loop. Multiple modules are separated into distinct parallel threads and can communicate through message passing in the form of registering events across modules.
A full example of a teapot using composable event loops is given in the Example section below.
At minimum, composable event loops need three primitives, sketched after this list:
- Modular event loops - dispatching of a module's queued events
- Multithreading - isolation between multiple modules' threads of execution
- Synchronized event registration - message passing between modules in the form of enqueuing events
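A rough interface sketch of these three primitives, loosely modeled on the example below; the actual mbed-events API may differ:

```cpp
// Rough interface sketch of the three primitives, loosely modeled on the
// example below; the actual mbed-events API may differ.
class EventLoop {
public:
    // 1. Modular event loops: each module owns a loop that dispatches only
    //    that module's queued events.
    void dispatch();

    // 3. Synchronized event registration: enqueuing an event is the one
    //    thread-safe operation, so it can be called from other modules,
    //    other threads, or interrupt handlers.
    template <typename Obj, typename Method, typename... Args>
    void trigger(Obj *obj, Method method, Args... args);
};

// 2. Multithreading: each module's loop runs in its own isolated thread of
//    execution, so separate modules never share a stack.
void module_thread(EventLoop *loop) {
    loop->dispatch();
}
```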
Example
An asynchronous teapot based on the simple events library:
```cpp
/* An asynchronous tea pot built on composable event loops */
class TeaPot {
private:
    // Internal event loop and set of events for the tea pot
    EventLoop loop;
    Event<void(float)> read_event{&loop};

    // Other tea pot internals
    HeatingElement heater;
    AnalogIn sensor;
    FuncPtr<void()> done_callback;

public:
    // The publicly accessible boil method can be called from other threads.
    // Registering with the internal event loop synchronizes the tea pot.
    void boil(FuncPtr<void()> callback) {
        loop.trigger(this, &TeaPot::begin_callback, callback);
    }

private:
    // Event registered from the boil method in case the heating element is not thread-safe.
    void begin_callback(FuncPtr<void()> callback) {
        done_callback = callback;
        heater.on();
        read_event.attach(this, &TeaPot::read_callback);
        sensor.read(&read_event);
    }

    // Event for handling read callbacks. This callback is issued from the AnalogIn sensor,
    // but runs on the TeaPot's event loop. AnalogIn may or may not use an event loop.
    void read_callback(float data) {
        // Hopefully this log call will be pretty quick. However, if the log call takes a while,
        // we only have to worry about it blocking our tea pot.
        log.log("got: %f\n", data);

        if (data > 110) {
            heater.off();

            // Since events are compatible function objects, triggering events that may be from
            // other event loops is as easy as a function call.
            if (done_callback) {
                done_callback();
            }
        } else {
            sensor.read(&read_event);
        }
    }
};
```
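As a usage sketch, another module might compose with the TeaPot like this; the Kitchen class is invented for illustration, and the FuncPtr constructor taking an object and member function is an assumption:

```cpp
// Hypothetical usage sketch: a Kitchen module (made up for illustration)
// composing with the TeaPot across module boundaries.
class Kitchen {
private:
    // The Kitchen is its own module with its own event loop
    EventLoop loop;
    TeaPot teapot;

public:
    // Can be called from any thread; the TeaPot synchronizes itself internally
    void make_tea() {
        teapot.boil(FuncPtr<void()>(this, &Kitchen::tea_ready));
    }

private:
    // Called from the TeaPot's event loop; hand off to the Kitchen's own loop
    // so the rest of the work stays synchronized with the Kitchen's events
    void tea_ready() {
        loop.trigger(this, &Kitchen::pour);
    }

    // Runs on the Kitchen's event loop
    void pour() {
        // pour the tea...
    }
};
```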
Benefits
The core benefit of composable event loops is the alignment of synchronization issues with the separation of concerns between individual modules. When creating a module, a developer only needs to worry about external synchronization and internal timing.
Benefits:
- Modules are internally synchronized, removing the need to synchronize individual components or worry about interactions between internal threads
- Multiple modules can be composed with minimal impact on the timing of existing modules
- The number of stacks is statically bounded by the number of modules
- Supports multiple cores, with parallelism bounded by the number of modules
- Easily integrated with interrupts by treating them as another module boundary
- Easily integrated with traditional multithreaded environments
- Parallelism is transparent across module boundaries but controlled inside modules