Monday, October 30, 2006
Real-time systems and Java
Writing real-time systems in Java is not easy because of the following:
- Garbage collection - the GC can kick in at any time, and there is no upper bound on how long it will run to reclaim memory. A full GC with compaction (due to heavy fragmentation) can cause a stop-the-world pause at the worst possible time for a time-sensitive code block.
- Just-in-time compilation - JIT or HotSpot compilation can happen while time-critical code is executing, leading to unacceptable delay.
- Class initialization - occurs at first use and may trigger initialization of other classes, causing performance problems while time-critical code is executing.
- Collections and arrays causing memory and CPU issues - such as a large backing array being allocated and copied when an ArrayList, Vector, or StringBuffer is resized, or the internal rehashing of hash maps and hash sets consuming precious CPU time exactly when it is needed the most (a pre-sizing sketch follows this list).
Some strategies to mitigate these issues:
- Use a time-deterministic library such as Javolution.
- Initialize all required classes at application startup. Strategies such as object pooling with weak object references can also be used (sketches of both appear after this list).
- Minimize the need for the GC to run: avoid allocating large objects (recycling objects is much preferred) so as to avoid GC compaction due to fragmentation, and avoid creating large numbers of short-lived objects.
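For the resizing and rehashing point above, a minimal precaution is to size collections up front so the hot path never triggers an array copy or a rehash. The sketch below is illustrative only; the class name and capacities are made up and should be replaced by figures derived from the real expected load.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PreSizedCollections {
    // Illustrative capacity; size it for the actual worst-case load.
    private static final int EXPECTED_ORDERS = 10000;

    // An ArrayList never has to grow (and copy its backing array) if the
    // initial capacity covers the worst case.
    private final List<String> orders = new ArrayList<String>(EXPECTED_ORDERS);

    // A HashMap rehashes once size exceeds capacity * loadFactor, so pick a
    // capacity that keeps the expected entry count below that threshold.
    private final Map<String, String> ordersById =
            new HashMap<String, String>(EXPECTED_ORDERS * 2, 0.75f);

    // StringBuffer also copies its backing array when it grows, so pre-size
    // buffers used on the time-critical path as well.
    private final StringBuffer messageBuffer = new StringBuffer(4096);
}
```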
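Eager class initialization can be done at startup with Class.forName, which runs static initializers immediately instead of at first use. A minimal sketch follows; the class names passed to forName are placeholders for whatever the time-critical path actually depends on.

```java
// Force static initialization of hot-path classes at application startup so
// the first time-critical request does not pay that cost.
public class StartupWarmup {
    public static void initialize() {
        try {
            // initialize=true makes forName run the static initializers now.
            Class.forName("com.example.OrderBook", true,
                          StartupWarmup.class.getClassLoader());
            Class.forName("com.example.MarketDataHandler", true,
                          StartupWarmup.class.getClassLoader());
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException("Class warm-up failed", e);
        }
    }
}
```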
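And a minimal sketch of object pooling with weak references, as suggested above: recycled objects are parked in a queue behind WeakReference wrappers, so steady-state traffic reuses objects instead of allocating new ones, yet the pool itself never prevents the GC from reclaiming memory under pressure. PooledMessage and MessagePool are made-up names standing in for whatever object the hot path needs.

```java
import java.lang.ref.WeakReference;
import java.util.concurrent.ConcurrentLinkedQueue;

public class MessagePool {

    public static class PooledMessage {
        public long timestamp;
        public String payload;

        void reset() {            // clear state before the object is reused
            timestamp = 0L;
            payload = null;
        }
    }

    // Weak references let the GC clear pooled entries if memory gets tight;
    // cleared entries are simply skipped in acquire().
    private final ConcurrentLinkedQueue<WeakReference<PooledMessage>> pool =
            new ConcurrentLinkedQueue<WeakReference<PooledMessage>>();

    public PooledMessage acquire() {
        WeakReference<PooledMessage> ref;
        while ((ref = pool.poll()) != null) {
            PooledMessage m = ref.get();
            if (m != null) {
                return m;          // reuse a recycled object
            }
        }
        return new PooledMessage(); // allocate only when the pool is empty
    }

    public void recycle(PooledMessage m) {
        m.reset();
        pool.offer(new WeakReference<PooledMessage>(m));
    }
}
```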
Sunday, October 29, 2006
Staged Event-Driven Architecture (SEDA)
Staged Event-Driven Architecture or SEDA is a very interesting concept for building high-performance software systems. It is a classical "pipes & filters" software architecture (see the Garlan & Shaw paper, An Introduction to Software Architecture) with admission control policies attached to each pipe (called a queue in SEDA) to manage load bursts without global degradation of performance. The best way to describe SEDA is to quote its originator, Matt Welsh, from his web site:
SEDA is an acronym for staged event-driven architecture, and decomposes a complex, event-driven application into a set of stages connected by queues. This design avoids the high overhead associated with thread-based concurrency models, and decouples event and thread scheduling from application logic. By performing admission control on each event queue, the service can be well-conditioned to load, preventing resources from being overcommitted when demand exceeds service capacity. SEDA employs dynamic control to automatically tune runtime parameters (such as the scheduling parameters of each stage), as well as to manage load, for example, by performing adaptive load shedding. Decomposing services into a set of stages also enables modularity and code reuse, as well as the development of debugging tools for complex event-driven applications.
It would be an interesting exercise to marry the ACE (ADAPTIVE Communication Environment) concepts of the Reactor, Acceptor, and Handler patterns with SEDA, where the Reactor-to-Acceptor handoff is performed by queueing requests and, similarly, the Acceptor-to-Handler handoff is performed via another queue. We could then give each stage a thread pool with a fixed number of threads to process Acceptor and Handler messages and achieve high throughput. A rough sketch follows.
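Here is one way that marriage might look, using nothing but java.util.concurrent: two bounded queues carry the Reactor-to-Acceptor and Acceptor-to-Handler handoffs, each stage drains its queue with a fixed-size thread pool, and a full queue is the admission-control signal for shedding load. The class name, queue sizes, and thread counts are all hypothetical; this is a sketch of the idea, not a full SEDA implementation.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SedaPipeline {

    // Bounded queues between stages; offer() returning false means overload.
    private final BlockingQueue<String> acceptorQueue = new ArrayBlockingQueue<String>(1000);
    private final BlockingQueue<String> handlerQueue  = new ArrayBlockingQueue<String>(1000);

    // Fixed-size thread pools, one per stage.
    private final ExecutorService acceptorPool = Executors.newFixedThreadPool(2);
    private final ExecutorService handlerPool  = Executors.newFixedThreadPool(8);

    // Called by the Reactor (e.g. the thread running the select() loop).
    public boolean enqueueFromReactor(String rawEvent) {
        return acceptorQueue.offer(rawEvent);   // false => shed or reject the request
    }

    public void start() {
        // Acceptor stage: parse/validate events and hand off to the handlers.
        for (int i = 0; i < 2; i++) {
            acceptorPool.submit(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            String event = acceptorQueue.take();
                            // ... accept/validate the event here ...
                            if (!handlerQueue.offer(event)) {
                                // Handler stage is saturated: shed load here.
                            }
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        // Handler stage: a fixed number of threads run the application logic.
        for (int i = 0; i < 8; i++) {
            handlerPool.submit(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            String event = handlerQueue.take();
                            // ... application logic for one event ...
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }
}
```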