A similar philosophy can be used in memory allocation. Take the actual allocation and freeing of the memory out of the critical response path. Do this by always having enough memory “chunks” pre-allocated for usage by the real-time events. The “softer realtime tasks” can ensure there are memory blocks available at all times, and can do the garbage collection and defragmentation.
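To make that concrete, here is a minimal sketch of such a pre-allocated chunk pool in C (the names `pool_alloc`/`pool_free` and the sizes are my own, for illustration): all memory is reserved at compile time, and the real-time path only ever does an O(1) free-list pop, never a `malloc()`.

```c
#include <stddef.h>

/* Illustrative chunk pool: everything is reserved up front, so the
 * time-critical code never touches the system allocator. */
#define CHUNK_SIZE  64
#define NUM_CHUNKS  16

typedef union chunk {
    union chunk *next;              /* free-list link while the chunk is unused */
    unsigned char data[CHUNK_SIZE]; /* payload while the chunk is in use */
} chunk_t;

static chunk_t pool[NUM_CHUNKS];
static chunk_t *free_list;

/* Run once at startup; a softer task could also do replenishment here. */
void pool_init(void)
{
    free_list = NULL;
    for (size_t i = 0; i < NUM_CHUNKS; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

/* O(1) and no system call: safe inside the critical response path. */
void *pool_alloc(void)
{
    chunk_t *c = free_list;
    if (c)
        free_list = c->next;
    return c;                       /* NULL when the pool is exhausted */
}

void pool_free(void *p)
{
    chunk_t *c = p;
    c->next = free_list;
    free_list = c;
}
```

On a multitasking system the free-list push/pop would of course need to be protected (interrupt masking or a lock-free exchange), which is omitted here for brevity.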
One thing I am always dubious about, though, is garbage collection and the languages that rely upon it. IMO the control of object lifetimes should be in the hands of the engineer, not a compiler's run-time system, and relaxed disposal schemes actually need more thought than simply using new and delete or malloc and free.
Long answer:
As many have pointed out, dynamic memory allocation is almost taboo for real-time embedded systems. This is generally true when referring to a traditional global heap, using the allocate/free mechanisms available in C and C++. Memory fragmentation, caused by the non-deterministic lifetime and granularity of data objects, makes dynamic memory unacceptable for mission-critical systems.
I design bare-metal systems with over 180K lines of C code for mission-critical industrial systems, using no heap at all. All real-time tasks use static allocation.
But it is also true that such systems make wide use of FIFOs, transmit/receive buffers and queues, as Mattias pointed out.
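Those FIFOs and buffers can themselves be statically allocated, so their worst-case RAM usage is known at link time. A minimal sketch of such a ring-buffer FIFO (the `fifo_t` type and power-of-two masking trick are my own illustration, not from the original posts):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative receive queue: capacity fixed at compile time, so its
 * worst-case RAM usage can be proven up front. */
#define FIFO_DEPTH 8u               /* must be a power of two for the masking below */

typedef struct {
    uint8_t  buf[FIFO_DEPTH];
    uint32_t head;                  /* free-running write counter */
    uint32_t tail;                  /* free-running read counter */
} fifo_t;

static fifo_t rx_fifo;              /* static allocation: no heap involved */

bool fifo_put(fifo_t *f, uint8_t byte)
{
    if (f->head - f->tail == FIFO_DEPTH)
        return false;               /* full: caller decides to drop or retry */
    f->buf[f->head++ & (FIFO_DEPTH - 1u)] = byte;
    return true;
}

bool fifo_get(fifo_t *f, uint8_t *byte)
{
    if (f->head == f->tail)
        return false;               /* empty */
    *byte = f->buf[f->tail++ & (FIFO_DEPTH - 1u)];
    return true;
}
```

With a single producer and single consumer (e.g. ISR fills, task drains) this shape is lock-free on most platforms, since each side writes only its own counter.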
You have to analyse the data usage patterns and prove that you can never run out of memory in the critical part of the system.
However, using fixed-block heaps and dynamic allocation for the non-critical portions of the code, such as socket servlets and user-interface objects, can reduce the RAM footprint of the system without jeopardizing the core. You can even use more than one heap, one for each concern that has a different usage pattern.
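A fixed-block heap sidesteps fragmentation because every block in a given heap has the same size, so freeing and re-allocating can never splinter the region. A sketch under my own naming (`fbheap_t` etc.; the caller must supply a pointer-aligned region and a block size of at least `sizeof(void *)`):

```c
#include <stddef.h>

/* Illustrative fixed-block heap: one instance per block size, e.g. one
 * for socket buffers and another for UI objects. */
typedef struct {
    unsigned char *region;          /* caller-supplied backing storage */
    size_t         block_size;      /* must be >= sizeof(void *) */
    size_t         num_blocks;
    void          *free_list;       /* linked list threaded through free blocks */
} fbheap_t;

void fbheap_init(fbheap_t *h, void *region, size_t block_size, size_t num_blocks)
{
    h->region     = region;
    h->block_size = block_size;
    h->num_blocks = num_blocks;
    h->free_list  = NULL;
    for (size_t i = 0; i < num_blocks; i++) {
        void *blk = h->region + i * block_size;
        *(void **)blk = h->free_list;   /* link block into the free list */
        h->free_list  = blk;
    }
}

void *fbheap_alloc(fbheap_t *h)
{
    void *blk = h->free_list;
    if (blk)
        h->free_list = *(void **)blk;
    return blk;                         /* NULL when the heap is exhausted */
}

void fbheap_free(fbheap_t *h, void *blk)
{
    *(void **)blk = h->free_list;
    h->free_list  = blk;
}
```

A system might then keep one such heap of 64-byte blocks for socket buffers and another of 256-byte blocks for UI objects; each can only ever exhaust itself, never fragment or starve the other.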
When designing deeply embedded systems, you need to know the details of memory allocation, RAM usage patterns and behavior over extended periods of time. The same applies to automatic data objects that allocate RAM on the stack in long call chains.
For non-critical embedded systems without hard real-time requirements, Java (or C# on a Microsoft platform) may be more suitable. These languages provide copying garbage collectors, which avoid fragmentation (provided you don’t allocate and free large objects) as well as memory leaks.