
Memory Model


This is a basic overview of the memory model of the mbed, with a focus on how RAM is used. Please note that for the purposes of this discussion, I am conveniently forgetting about the special RAM blocks used for USB, CAN, Ethernet and so on, and the memory mapped peripheral space. For a more detailed breakdown, have a look at the User Manual for the processor in your mbed.

The Sections

The information in your program comes in several sorts:

  • Executable code
  • Constants and other read-only data
  • Initialised global/static variables
  • Uninitialised global/static variables
  • Local variables
  • Dynamically created data

Each of these groups of information gets allocated to a region of the memory space called a section. The executable code, constants and other read-only data get put in a section called "RO" (for read-only), which is stored in the flash memory of the device. Initialised static and global variables go into a section called "RW" (read-write), and the uninitialised ones into one called "ZI" (Zero Initialised). I'll come on to the local and dynamic data in a bit, but these along with RW and ZI need to live in RAM.
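As a rough illustration, here is a sketch of where each sort of data typically ends up. The names are just examples, and the exact placement depends on the toolchain and linker settings:

    const char greeting[] = "hello";    // RO: constant data, stays in flash
    int counter = 42;                   // RW: initialised global, copied into RAM at startup
    int total;                          // ZI: uninitialised global, zero-filled at startup

    int add(int a, int b) {             // the machine code for add() is in RO (flash)
        int result = a + b;             // local variable, lives on the stack while add() runs
        int *scratch = new int[8];      // dynamically created data, lives on the heap
        delete[] scratch;
        return result;
    }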

On reset, RAM is in an undefined state. However, those initialised variables have to have their values in place for your program to work correctly. What happens is that when you compile your program, a data block containing all these variables with their initial values is defined, and put into the image that you program into the flash. When the mbed starts executing code, one of the first things it does is copy this data block into the beginning of RAM, and this becomes the runtime RW section. One of the other things it does is to zero fill the next section of RAM, which is where the ZI section lives.

	Loaded		After startup
      +--------+        +--------+    High Address
      |        |        |        |         |
      |        |        |        |         |
RAM   |        |        +--------+         ^
      |        |        | ZI = 0 |         |
      |        |        +--------+         |
      |        |   +->  |RW Data |         |
      +========+   |    +========+         ^
      |        |   |    |        |         |
      |        | copy   |        |         |
      +--------+   |    +--------+         |
Flash |RW Data | >-+    |        |         ^
      +--------+        +--------+         |
      |Program |        |Program |         |
      +--------+        +--------+     Low Address
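For the curious, the startup behaviour described above boils down to something like the sketch below. This is not the actual mbed startup code, and the symbol names are made up for illustration; real toolchains use linker-defined symbols for the section boundaries:

    extern unsigned int __rw_load_addr;   // start of the RW initial values in flash (hypothetical symbol)
    extern unsigned int __rw_start;       // start of the RW section in RAM
    extern unsigned int __rw_end;         // end of the RW section in RAM
    extern unsigned int __zi_start;       // start of the ZI section in RAM
    extern unsigned int __zi_end;         // end of the ZI section in RAM

    void init_ram_sections(void) {
        unsigned int *src = &__rw_load_addr;
        unsigned int *dst = &__rw_start;

        // Copy the RW data block from flash into the beginning of RAM
        while (dst < &__rw_end) {
            *dst++ = *src++;
        }

        // Zero fill the ZI section that follows it
        for (dst = &__zi_start; dst < &__zi_end; dst++) {
            *dst = 0;
        }
    }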

Heap and Stack

The last two sorts of information are the local variables and the dynamically created data. The first goes into a region of RAM called the stack, and the second into a region called the heap. Obviously, the sizes of these two regions vary during program execution, and only their starting points are fixed. Unlike some systems, we use the single memory space shared stack/heap model. What this means is that the heap starts at the first address after the end of ZI, growing up into higher memory addresses, and the stack starts at the last memory address of RAM, and grows downwards into lower memory addresses:

      +--------+   Last Address of RAM
      | Stack  |
      |   |    |
      |   v    |
      +--------+
RAM   |        |
      |        |
      +--------+
      |   ^    |
      |   |    |
      | Heap   |
      +--------+
      |   ZI   |
      +--------+
      |   RW   |  
      +========+  First Address of RAM
      |        |
Flash |        |

When you call a function, its parameters and any non-static variables you have defined in that function are stored on the stack (as is other information to do with register values, and how to return from the function). So, if you start having deep function call trees or recursion, your stack grows downwards. As you return back up the call tree, your stack decreases in size.
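For example, a recursive function like this sketch pushes a fresh stack frame for every level of recursion, so each call makes the stack grow a little further downwards until the calls start returning:

    // Each call to sum_to() gets its own copy of n, its own scratch array
    // and its own return bookkeeping on the stack, so deep recursion eats
    // stack space quickly.
    int sum_to(int n) {
        int scratch[8] = {0};                // local array, part of this call's stack frame
        scratch[0] = n;
        if (n <= 0) {
            return 0;
        }
        return scratch[0] + sum_to(n - 1);   // another frame is pushed here
    }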

When you create a new instance of an object using 'new', or if you allocate a block of memory using malloc and friends, you use memory in the heap. If there is a piece of unused memory in the heap which is big enough for what you need, then that is used. If there is not, then the heap grows upwards to fit the new instance/memory block. When you use delete or free, the memory it was using inside the heap is deallocated, ready for use again. However, unless the memory you released was at the very end of the heap, the heap does not shrink.
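Here is a small sketch of both styles of heap allocation (the struct and function names are just examples):

    #include <cstdlib>

    struct Sample {
        int readings[16];
    };

    void heap_example() {
        Sample *s = new Sample;                          // heap grows, or an existing free block is reused
        int *buffer = (int *)malloc(64 * sizeof(int));   // malloc and friends use the same heap

        // ... use s and buffer ...

        free(buffer);    // the blocks are marked free for reuse, but the heap
        delete s;        // does not necessarily shrink
    }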

Collision

Obviously, looking at this, it is possible for your heap and stack to collide, which is never going to end well. To try and help you prevent this, the routines that allocate memory on the heap (new, malloc and friends) tell you when you don't have enough memory. Instead of passing you a pointer to the newly allocated block, they pass you the value NULL.
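So it is worth checking every allocation before using it. A small sketch (note that standard C++ 'new' normally throws std::bad_alloc on failure rather than returning NULL, unless you use the nothrow form shown here; malloc always signals failure by returning NULL):

    #include <cstdio>
    #include <new>

    int *make_buffer(unsigned int count) {
        int *buffer = new (std::nothrow) int[count];   // nothrow form returns NULL on failure
        if (buffer == NULL) {
            printf("Out of heap memory!\r\n");
            return NULL;                               // report the failure instead of using a NULL pointer
        }
        return buffer;
    }

    void use_buffer(void) {
        int *buffer = make_buffer(256);
        if (buffer != NULL) {
            // ... use buffer ...
            delete[] buffer;
        }
    }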

If your stack and your heap do collide, despite your best efforts, then the results will be unpredictable. You may get data corruption, or you may get a hard fault. You can write a hard fault handler and/or implement a watchdog to recover from some of these faults, but you basically have to restart the system.
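As a rough illustration, a last-resort hard fault handler can simply restart the system. This sketch relies on the CMSIS convention of a weak HardFault_Handler symbol that you can override, and NVIC_SystemReset() is the CMSIS reset call made available through the device header:

    #include "mbed.h"   // pulls in the CMSIS device header for the target

    // Override the weak default handler; called when the processor hard faults.
    extern "C" void HardFault_Handler(void) {
        // Recovery in place is rarely possible, so restart the system.
        NVIC_SystemReset();
    }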

