
New Memory Management Plan for Feedback

The Problem

This document is intended to briefly address some of the shortcomings of our memory management strategy in Seamonkey. Nothing here is particularly new; there are no state-of-the-art design ideas. But it's my hope that by applying a few of these patterns we can improve the overall state of affairs, thus raising our quality level a bit.

The impetus for this was a quick survey that I did of the Seamonkey source code. A quick look revealed that we have over 2000 memory allocation points in 600+ files. These numbers don't even include the calls to malloc and its peers.

More troubling is the fact that we regularly write code like this:

nsresult nsMyClass::DoSomething() {
  nsresult theResult=NS_OK;
  nsFoo* aFoo = new nsFoo();  //assume this fails to alloc memory...
  aFoo->DoSomethingElse();    //...and this dereferences a null pointer
  return theResult;
}
There are several problems with this code:
  1. The memory allocation result is never conditioned for failure.
  2. Error propagation of the memory failure is nonexistent.
  3. Object aFoo is dereferenced unconditionally.

Memory Management Proposal

I propose that each team allocate 1 day during each Milestone for the explicit purpose of systematically reviewing our code in order to resolve and improve memory related issues.

Step 1:

Memory and resource cleanup and tuning. First, let's review our code and add pointer conditioning to make sure we're not dereferencing null pointers. This is especially true at memory allocation points like the example given above. So we'll rewrite our first example like this:

nsresult nsMyClass::DoSomething() { 
  nsresult theResult=NS_OK; 
  nsFoo* aFoo = new nsFoo(); 
  if(aFoo) {
    aFoo->DoSomethingElse(); //only touch aFoo if the allocation succeeded
  }
  return theResult;
}
Note that to be correct, allocation conditioning needs to deal with a few subtleties. For example, assume that you've been given (via an API) pointers to a few objects that you intend to cache. Then you go to make your cache (an nsDeque, for example) and the allocation fails. You have to be particularly careful about what you do with the objects you meant to store. You have to know, for example, whether you can simply drop them on the floor, versus RELEASING them.
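
The ownership subtlety above can be sketched as follows. This is a simplified, hypothetical stand-in for XPCOM refcounting (real code would use nsISupports, NS_RELEASE, and nsDeque); the names RefCounted and CacheObjects are illustrative only:

```cpp
#include <cstddef>
#include <new>

// Minimal stand-in for an XPCOM-style refcounted object (hypothetical).
struct RefCounted {
  static int sLiveCount;  // tracks live objects so leaks are visible
  int mRefCnt;
  RefCounted() : mRefCnt(1) { ++sLiveCount; }
  ~RefCounted() { --sLiveCount; }
  void AddRef()  { ++mRefCnt; }
  void Release() { if (--mRefCnt == 0) delete this; }
};
int RefCounted::sLiveCount = 0;

// The caller hands us objects it has already AddRef'd on our behalf.
// If the cache allocation fails we still own those references, so we
// must Release them rather than simply dropping the pointers.
bool CacheObjects(RefCounted* aFirst, RefCounted* aSecond, bool aSimulateOOM) {
  RefCounted** theCache = aSimulateOOM ? 0 : new (std::nothrow) RefCounted*[2];
  if (!theCache) {
    aFirst->Release();   // give the references back...
    aSecond->Release();  // ...instead of leaking the objects
    return false;
  }
  theCache[0] = aFirst;  // the cache now owns the references
  theCache[1] = aSecond;
  // (demo only: a real cache would outlive this call; balance the
  // references here so the example cleans up after itself)
  theCache[0]->Release();
  theCache[1]->Release();
  delete [] theCache;
  return true;
}
```

The key point is the failure branch: whether you Release or simply drop the pointers depends entirely on the ownership convention of the API that gave them to you.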

There's also the issue of benign failures, where you wanted extra memory (or an instance of a given class), but your algorithm would continue to work correctly even if you can't get the resource you requested. Handling benign failures offers a level of complexity all its own.
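
Here's a sketch of a benign failure, under the assumption that the cache is purely an optimization (LineCache and CountUpTo are made-up names, not from the Seamonkey tree):

```cpp
#include <new>

// A hypothetical cache used only to speed things up.
struct LineCache { int mLastIndex; LineCache() : mLastIndex(0) {} };

int CountUpTo(int aLimit, bool aSimulateOOM) {
  // If this allocation fails, we fall back to the uncached path.
  LineCache* theCache = aSimulateOOM ? 0 : new (std::nothrow) LineCache();
  int theCount = 0;
  for (int i = 0; i < aLimit; ++i) {
    if (theCache) theCache->mLastIndex = i;  // remember progress if we can...
    ++theCount;                              // ...but work correctly either way
  }
  delete theCache;  // deleting a null pointer is safe
  return theCount;
}
```

The algorithm produces the same answer with or without the cache; only the conditioning on theCache distinguishes the two paths.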

Step 2:

Let's factor the memory allocation sites, which can help in general with quality. Instead of having n unique allocation sites, we can have considerably fewer, which will help with memory related issues. (Fewer sites means easier maintenance.)

Your task is to review your files in all the areas where you work to discover places where you call new xxx. Next, see if you can't (at least) create a common function where classes of type xxx get created. Better still, perhaps you want to make a static function on your class that constructs new instances. (See step 5.)

Whatever you decide, the goal is to get the number of places where object allocations occur down to a minimum.
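
A minimal sketch of the single-allocation-site idea, using simplified stand-ins for nsresult and the error codes (the real XPCOM definitions differ):

```cpp
#include <new>

// Simplified stand-ins for illustration only.
typedef int nsresult;
const nsresult NS_OK = 0;
const nsresult NS_ERROR_OUT_OF_MEMORY = -1;

class nsFoo {
public:
  // The one shared allocation site for nsFoo objects.
  static nsresult CreateInstance(nsFoo** aNewInstance);
  int mValue;
private:
  nsFoo() : mValue(42) {}  // private ctor forces callers through the factory
};

nsresult nsFoo::CreateInstance(nsFoo** aNewInstance) {
  *aNewInstance = new (std::nothrow) nsFoo();
  return *aNewInstance ? NS_OK : NS_ERROR_OUT_OF_MEMORY;
}
```

Making the constructor private is a design choice worth considering: it guarantees at compile time that nobody can reintroduce a scattered call to new nsFoo().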

Step 3:

Let's add error propagation for memory allocation failures. Returning to our first example, it's obvious that if a memory error occurred, it would not get propagated outside the local stack frame. This is a real problem for the app, as other code on the execution stack will likely have unpredictable results. So let's rewrite that method assuming the use of a factory, and include error propagation:

nsresult nsMyClass::DoSomething() { 
  nsFoo* aFoo=0;
  nsresult theResult=nsFoo::CreateInstance(&aFoo,...args...);
  return theResult;
}
One of the hardest parts of this step is to correctly propagate the memory allocation error from the call site outward through the call stack via a return value. Lots of code may need to get cleaned up to deal with this, but I'd argue that such error propagation must be in place in order to deal with other potential XPCOM errors anyway.
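
Propagation through several stack frames might look like this sketch. The types, codes, and the NS_FAILED helper are simplified stand-ins for the real XPCOM definitions; AllocateBuffer, DoSomething, and Caller are hypothetical:

```cpp
#include <new>

// Simplified stand-ins for illustration only.
typedef int nsresult;
const nsresult NS_OK = 0;
const nsresult NS_ERROR_OUT_OF_MEMORY = -1;
inline bool NS_FAILED(nsresult aResult) { return aResult != NS_OK; }

nsresult AllocateBuffer(char** aBuffer, bool aSimulateOOM) {
  *aBuffer = aSimulateOOM ? 0 : new (std::nothrow) char[64];
  return *aBuffer ? NS_OK : NS_ERROR_OUT_OF_MEMORY;
}

nsresult DoSomething(bool aSimulateOOM) {
  char* theBuffer = 0;
  nsresult theResult = AllocateBuffer(&theBuffer, aSimulateOOM);
  if (NS_FAILED(theResult))
    return theResult;       // hand the failure to our caller...
  // ...use theBuffer...
  delete [] theBuffer;
  return NS_OK;
}

nsresult Caller(bool aSimulateOOM) {
  nsresult theResult = DoSomething(aSimulateOOM);
  if (NS_FAILED(theResult))
    return theResult;       // ...and so on, outward through the stack
  return NS_OK;
}
```

Every frame between the allocation site and whoever can actually handle the failure has to check and forward the result; one frame that swallows the error breaks the chain.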

Step 4:

For step 4 we will implement and share a common allocator interface, let's call it nsIAllocator. A default implementation will get registered as a service, so everyone can get quick access to it. This will help to eliminate the impedance mismatch we have today with the seemingly random calls to new versus the various forms of malloc().

class nsIAllocator {
  virtual nsresult Alloc(PRUint32 aCount,void** aPtr)=0;
  virtual nsresult Realloc(PRUint32 aCount,void** aPtr)=0;
  virtual nsresult Free(void** aPtr)=0;
};
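
A default implementation could be sketched over malloc/realloc/free along these lines. The interface is repeated here with simplified stand-in types (PRUint32, nsresult) so the sketch is self-contained; nsDefaultAllocator is a hypothetical name:

```cpp
#include <cstdlib>

// Simplified stand-ins for illustration only.
typedef unsigned int PRUint32;
typedef int nsresult;
const nsresult NS_OK = 0;
const nsresult NS_ERROR_OUT_OF_MEMORY = -1;

class nsIAllocator {
public:
  virtual ~nsIAllocator() {}
  virtual nsresult Alloc(PRUint32 aCount, void** aPtr) = 0;
  virtual nsresult Realloc(PRUint32 aCount, void** aPtr) = 0;
  virtual nsresult Free(void** aPtr) = 0;
};

class nsDefaultAllocator : public nsIAllocator {
public:
  virtual nsresult Alloc(PRUint32 aCount, void** aPtr) {
    *aPtr = std::malloc(aCount);
    return *aPtr ? NS_OK : NS_ERROR_OUT_OF_MEMORY;
  }
  virtual nsresult Realloc(PRUint32 aCount, void** aPtr) {
    void* thePtr = std::realloc(*aPtr, aCount);
    if (!thePtr) return NS_ERROR_OUT_OF_MEMORY;  // old block is still valid
    *aPtr = thePtr;
    return NS_OK;
  }
  virtual nsresult Free(void** aPtr) {
    std::free(*aPtr);
    *aPtr = 0;  // null out the caller's pointer to prevent reuse
    return NS_OK;
  }
};
```

Taking void** in Free() lets the allocator null the caller's pointer, which turns some use-after-free bugs into easy-to-diagnose null dereferences.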

Step 5:

Step 5 is a bit more ambitious: I propose implementing recyclers and/or arenas on a class-by-class basis.

Recyclers work by maintaining a list of "recycled" instances of a given type. When a caller invokes CreateInstance(), the factory first checks the recycled list. If a recycled object is available it is returned to the caller (perhaps after a reinitialization call). If a recycled object is unavailable, new is called as usual. When the caller is done with the instance, they call a second static method, RecycleInstance(...) to return it to the recycled list for reuse later. Also, keep in mind that it is really easy to combine recyclers with our nsIAllocator interface.

Here's our nsFoo class rewritten to use recyclers:

class nsFoo {
  nsFoo() { } //ctor

  static nsresult CreateInstance(nsFoo** aNewInstance,...args...) {
    *aNewInstance=0;
    nsFooRecycler* theFooRecycler=GetFooRecycler();
    if(theFooRecycler){
      *aNewInstance=theFooRecycler->Pop(); //pop last recycled instance...
    }
    if(!*aNewInstance){
      *aNewInstance=new nsFoo(...args...); //otherwise stamp out a new one
    }
    nsresult result=(!*aNewInstance) ? NS_MEMORY_ALLOCATION_ERROR : NS_OK;
    return result;
  }

  static nsresult RecycleInstance(nsFoo* anInstance) {
    //add instance to recycled list, or return it to the arena...
    nsFooRecycler* theFooRecycler=GetFooRecycler();
    if(theFooRecycler){
      theFooRecycler->Push(anInstance); //cache the instance for reuse later...
    }
    else delete anInstance;
    return NS_OK;
  }
};


Step 6:

The last step in this proposal is to hook up the recyclers to a global (service-based) memory pressure API. Whenever a memory request fails, the call site can invoke this service. In turn, the memory pressure API notifies all registered observers, asking each one to release as much memory as possible. Then the memory request can be made a second time. If it fails again, the app is in critical condition; otherwise we get back some recycled memory and continue (with caution).

Note that the memory pressure service is really just an object that recyclers and other caching objects "observe". When memory gets low, error handling will get routed to the mem-pressure API, which in turn will notify all the recycler/observers. They in turn will free up as much cached memory as they can. (See below for a simple API.)
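
The allocate / flush / retry protocol described above can be sketched as follows. Every name here is illustrative, and the "allocation" is simulated so the flush-then-retry flow is visible:

```cpp
#include <cstddef>
#include <vector>

// An observer that can dump its caches on demand (hypothetical interface).
struct Observer {
  virtual size_t FreeMemory() = 0;
  virtual ~Observer() {}
};

class PressureService {
  std::vector<Observer*> mObservers;
public:
  void Register(Observer* anObserver) { mObservers.push_back(anObserver); }
  size_t Flush() {
    size_t theFreed = 0;
    for (size_t i = 0; i < mObservers.size(); ++i)
      theFreed += mObservers[i]->FreeMemory();  // each observer dumps its caches
    return theFreed;
  }
};

// A recycler holding some cached instances it can surrender under pressure.
struct Recycler : Observer {
  size_t mCached;
  Recycler() : mCached(3) {}
  virtual size_t FreeMemory() { size_t theCount = mCached; mCached = 0; return theCount; }
};

// Simulated allocation: succeeds only once the recycler's caches are flushed.
bool TryAlloc(Recycler& aRecycler) { return aRecycler.mCached == 0; }

bool AllocWithPressure(PressureService& aService, Recycler& aRecycler) {
  if (TryAlloc(aRecycler)) return true;
  aService.Flush();            // ask observers to give memory back
  return TryAlloc(aRecycler);  // second attempt; failure here is critical
}
```
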
Brief Survey of Memory Management Patterns


Recyclers

Used in cases where many instances of an object are created. Rather than destruct objects of a given type, they're added to a "recycler" which caches a certain number for later use. When a new object of that type is required, the recycler is asked to provide one. A new instance is constructed only if the recycler can't provide a "recycled" instance. Upon termination of the application, the recycler destructs all remaining unused instances.

Memory Pressure APIs

Typically implemented as a service, this API is invoked when memory allocation fails or is near to failing. Recyclers register themselves with this API as observers. The pressure API iterates through its list of memory-pressure observers asking each to free up available resources (usually this results in them dumping the contents of the recycled object lists).

class nsIMemoryPressureService {
  virtual nsresult RegisterObserver(nsIMemoryPressureObserver* anObserver)=0;
  virtual nsresult UnregisterObserver(nsIMemoryPressureObserver* anObserver)=0;
  virtual nsresult FreeMemory(size_t aSize,size_t* aFreedAmount)=0;
};

class nsIMemoryPressureObserver {
  virtual nsresult FreeMemory(size_t* aFreedAmount)=0;
};


Arenas

Arenas are private heaps from which memory allocation for a given object type (or types) is provided. This technique offers a powerful allocation optimization and a simple way to free large chunks of memory. It is usually used in conjunction with an overloaded version of operator new(), using the standard C++ placement-allocation protocol.
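
A toy arena using that overloaded operator new() might look like this sketch. It makes simplifying assumptions (a fixed-size buffer, crude alignment via a union, no per-object delete), and the Arena and Token names are made up for illustration:

```cpp
#include <cstddef>

// A private heap that hands out chunks from one buffer; freeing the
// arena frees every object carved from it in one shot.
class Arena {
  union { char mBuffer[1024]; double mAlign; };  // crude alignment guarantee
  size_t mUsed;
public:
  Arena() : mUsed(0) {}
  void* Allocate(size_t aSize) {
    if (mUsed + aSize > sizeof(mBuffer)) return 0;  // arena exhausted
    void* thePtr = mBuffer + mUsed;
    mUsed += aSize;
    return thePtr;
  }
  size_t Used() const { return mUsed; }
};

struct Token {
  int mType;
  Token(int aType) : mType(aType) {}
  // Route 'new (arena) Token(...)' through the arena.
  static void* operator new(size_t aSize, Arena& anArena) {
    return anArena.Allocate(aSize);
  }
  // Matching operator delete, called only if the ctor throws.
  static void operator delete(void*, Arena&) { }
};
```

Objects built this way are never deleted individually; their storage vanishes when the arena itself is destroyed, which is exactly the "free large chunks at once" property described above.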

Smart Pointers

This is a handy way to get allocation "right" in many cases. Also used to handle reference counting (like our XPCOM smart pointers do). Finally, this technique also makes automatic destruction or recycling of heap-based objects easier.
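
A minimal scoped pointer illustrating the automatic-destruction idea might look like this. It is far simpler than the real XPCOM smart pointers (no reference counting), and ScopedPtr and Tracked are hypothetical names:

```cpp
// Deletes its object automatically when it goes out of scope.
template <class T>
class ScopedPtr {
  T* mPtr;
  ScopedPtr(const ScopedPtr&);             // non-copyable: exactly one owner
  ScopedPtr& operator=(const ScopedPtr&);
public:
  explicit ScopedPtr(T* aPtr = 0) : mPtr(aPtr) {}
  ~ScopedPtr() { delete mPtr; }            // automatic cleanup on scope exit
  T* operator->() const { return mPtr; }
  T& operator*() const { return *mPtr; }
  T* get() const { return mPtr; }
};

// Hypothetical payload that counts live instances, so the automatic
// destruction is observable.
struct Tracked {
  static int sLiveCount;
  Tracked() { ++sLiveCount; }
  ~Tracked() { --sLiveCount; }
};
int Tracked::sLiveCount = 0;
```

A recycling variant would simply call RecycleInstance() in the destructor instead of delete, tying this pattern back to step 5.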


A technique straight out of the book "Design Patterns". This pattern offers a simple way to reduce the number of instances of a given object.



MozillaZine and the MozillaZine Logo Copyright © 2000 Chris Nelson. All Rights Reserved.