One of the best patterns for using all the available heap memory without ever hitting the dreaded OutOfMemoryError is to hold your memory-intensive objects through the CLDC 1.1 WeakReference class. This class is often overlooked, but it is designed specifically for caching. When an object is reachable only through a WeakReference, rather than through a traditional hard object reference, this signals to the garbage collector that it may collect the object if memory runs low. Architecturally this is very useful, because it means you can allocate as much memory as you like and let the garbage collector and the phone's total heap size determine how many of those objects to keep.
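In its simplest form the mechanics look like this minimal sketch; the Image and its 240 x 320 size are just illustrative placeholders:

import java.lang.ref.WeakReference;
import javax.microedition.lcdui.Image;

// ...
Image bigImage = Image.createImage(240, 320);                 // hard reference keeps the Image alive
final WeakReference weakImage = new WeakReference(bigImage);  // weak reference does not

bigImage = null;                         // drop the hard reference; only the WeakReference remains
final Image cached = (Image) weakImage.get();
if (cached == null) {
    // the garbage collector reclaimed the Image while memory was low: re-create or re-load it here
}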
WeakReference caching simplifies your design compared with explicit memory management of large objects held by hard references. The pattern is: get the object from the hashtable; if it is null (it has been garbage collected), re-create it (load it from flash memory or re-draw it) and put the new object back into the Hashtable. Note that if the object is definitely needed in the near future (for example an image that is currently visible, nearly visible at the edge of the screen, or part of the next likely view), you prevent it from being garbage collected by also keeping a normal hard object reference. For example, your code is simpler if you always fetch images from a Hashtable of WeakReferences, but also keep hard references to the images that are visible. This gives you maximum use of the available memory: if the object becomes visible again and is still in memory, you do not have to (slowly) re-create or re-load it.
To understand this pattern better, first let’s look at the implementation in the Tantalum 3 library. This and similar utilities are available in open source form to make it faster for you to utilise high performance design patterns.
import java.lang.ref.WeakReference;
import java.util.Hashtable;

public class WeakHashCache {
    protected final Hashtable hash = new Hashtable();

    public Object get(final Object key) {
        final WeakReference reference = (WeakReference) hash.get(key);

        if (reference != null) {
            return reference.get();
        }

        return null;
    }

    public void put(final Object key, final Object value) {
        synchronized (hash) {
            if (key == null) {
                return;
            }
            if (value == null) {
                hash.remove(key);
                return;
            }
            hash.put(key, new WeakReference(value));
        }
    }

    public void remove(final Object key) {
        if (key != null) {
            hash.remove(key);
        }
    }

    public boolean containsKey(final Object key) {
        if (key != null) {
            return hash.containsKey(key);
        }

        return false;
    }

    public int size() {
        return hash.size();
    }

    public void clear() {
        hash.clear();
    }
}
As we can see by examining the code, we can put() as many objects as we like, but when memory gets low, get() may return null because the phone has had to collect some of the objects. We have no control over which objects are collected before others, so the trick is to always get objects from the WeakHashCache, but if something must not be garbage collected right now (for example an image currently visible on the screen), also maintain a traditional hard reference to it so the object cannot be garbage collected.
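For example, the get-or-re-create pattern might look like the minimal sketch below; loadImageFromFlash() is a hypothetical re-creation routine, and visibleImage stands in for whatever hard reference your view keeps to on-screen content:

private final WeakHashCache imageCache = new WeakHashCache();
private Image visibleImage; // hard reference pins the currently visible image in memory

private Image getImage(final String name) {
    Image image = (Image) imageCache.get(name);

    if (image == null) {
        // the weakly held Image was garbage collected (or never created): re-create it
        image = loadImageFromFlash(name); // hypothetical re-creation routine
        imageCache.put(name, image);
    }

    return image;
}

// when an image scrolls onto the screen, pin it:      visibleImage = getImage(name);
// when it scrolls away, release it so the garbage collector may reclaim it:  visibleImage = null;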
The key question with a WeakHashCache is: what does my application do when it gets back null instead of the object it requested? You may be able to re-generate the object from other available data, but another solution is to also store the objects in flash memory on the phone, thereby creating a sort of virtual-memory extension of the heap. For a complete implementation of this, see the Tantalum 3 StaticWebCache, which takes care of all the load, store, and fetch details associated with caching web service data on your phone.
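A minimal sketch of such a flash-backed fallback, using the MIDP Record Management System (RMS), might look like the following; the FlashCache class name and the one-record-store-per-key layout are illustrative assumptions, not the Tantalum implementation:

import javax.microedition.rms.RecordStore;
import javax.microedition.rms.RecordStoreException;

public class FlashCache {
    // One record store per key; note that RMS limits store names to 32 characters.
    public void write(final String key, final byte[] value) {
        try {
            final RecordStore store = RecordStore.openRecordStore(key, true);
            try {
                if (store.getNumRecords() == 0) {
                    store.addRecord(value, 0, value.length);   // first write creates record 1
                } else {
                    store.setRecord(1, value, 0, value.length); // later writes overwrite it
                }
            } finally {
                store.closeRecordStore();
            }
        } catch (RecordStoreException e) {
            // flash write failed; the in-memory copy is still valid
        }
    }

    public byte[] read(final String key) {
        try {
            final RecordStore store = RecordStore.openRecordStore(key, false);
            try {
                return store.getNumRecords() > 0 ? store.getRecord(1) : null;
            } finally {
                store.closeRecordStore();
            }
        } catch (RecordStoreException e) {
            return null; // nothing cached yet
        }
    }
}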
A common performance need is to make your application paint, and in particular scroll, smoothly and quickly. This need exists on all platforms; Windows Phone, for example, automatically stores items as images and re-renders them for you. We can do the same on Series 40 phones by painting each item into its own Image, keeping that pre-painted Image in a cache, and reusing it as the item moves around the screen.
This is essentially just a WeakReference cache of pre-painted Images. Using this technique, the frame rate of a scrolling list of news items increased from 3 fps to 12 fps on a Nokia Asha 305 (Nokia Asha software platform) after the application was changed to re-use rendered list items by caching them as Images behind WeakReferences.
See the difference for yourself in this video.
The source code for this solution can be found in VerticalListView in the Tantalum 3 Series 40 Example. To enable it, replace the default IconListView with a VerticalListView.
Although flash memory is very slow, accessing it is much faster than downloading data from the web (see Flash memory for more information about flash). It also costs less in places where data bundles are relatively expensive.
One of the nicest places to use flash memory caching is on the main screen of your application. Users prefer not to wait while you load data from the web at each application start-up; instead, display the existing items from the flash cache. After start-up you can update the data asynchronously, and the end-user gets the feeling that the application starts quickly. You can also pre-fetch data and images needed for other pages and store them in the flash cache, so that performance appears instantaneous as you navigate between screens.
Case example: the start-up time of a news reader application dropped from 10 seconds to 2 seconds after the application was changed to load the news XML from the flash cache instead of from the web (Nokia Asha 305 on the Nokia Asha software platform). This makes it much more rewarding to start the application, so people use it more often. See this video.
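Put together with the flash cache sketch above, the start-up pattern might look roughly like this; downloadNewsXml() and showNews() are hypothetical placeholders for your own networking and UI code:

// Show cached content immediately, then refresh it in the background.
final FlashCache flashCache = new FlashCache();       // illustrative class from the sketch above
final byte[] cachedXml = flashCache.read("news");     // hypothetical cache key
if (cachedXml != null) {
    showNews(cachedXml);                              // paint the old news right away
}
new Thread(new Runnable() {
    public void run() {
        final byte[] freshXml = downloadNewsXml();    // hypothetical HTTP fetch
        if (freshXml != null) {
            flashCache.write("news", freshXml);       // update the flash cache for next start-up
            showNews(freshXml);                       // then refresh the visible list
        }
    }
}).start();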
Some algorithms are relatively slow. Typically these scale as "order n": as the number of items in a data structure grows, the code slows down linearly, because each item in the data structure must be touched. Vector.contains() is a frequent example of where this starts to get slow. If profiling shows that this is a performance bottleneck for your application, consider storing the data in a Hashtable in addition to the Vector or existing code. Hashtable.containsKey() is extremely fast and can speed up your program at these critical points.
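A minimal sketch of keeping such an index alongside the Vector, assuming the stored items have sensible equals() and hashCode() implementations:

import java.util.Hashtable;
import java.util.Vector;

// ...
private final Vector items = new Vector();            // keeps insertion order for display
private final Hashtable itemIndex = new Hashtable();  // gives O(1) membership checks

public void addItem(final Object item) {
    if (!itemIndex.containsKey(item)) {
        items.addElement(item);
        itemIndex.put(item, item); // Hashtable forbids null values, so store the item itself
    }
}

public boolean containsItem(final Object item) {
    // Hashtable.containsKey() is a hash lookup; Vector.contains() walks every element
    return itemIndex.containsKey(item);
}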
Note that this technique can also apply in surprising places that only become visible when you profile. For example, Font.stringWidth() is surprisingly slow, but may be necessary for laying out multi-line text on your Canvas. Creating a Hashtable with the width of each character you have used in the Font turns this into a fast operation and increases Canvas.paint() speed. This is of course not needed if you already use render caching.
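A sketch of such a per-character width cache is shown below; note that summing individual character widths is an approximation of Font.stringWidth(), which is usually close enough for word wrapping with Series 40 fonts:

import java.util.Hashtable;
import javax.microedition.lcdui.Font;

// ...
private final Font font = Font.getDefaultFont();
private final Hashtable charWidths = new Hashtable();

private int cachedCharWidth(final char c) {
    final Character key = new Character(c);
    Integer width = (Integer) charWidths.get(key);

    if (width == null) {
        width = new Integer(font.charWidth(c)); // measure once per distinct character
        charWidths.put(key, width);
    }

    return width.intValue();
}

private int cachedStringWidth(final String s) {
    // Sum the cached character widths instead of calling Font.stringWidth() on every paint
    int width = 0;
    for (int i = 0; i < s.length(); i++) {
        width += cachedCharWidth(s.charAt(i));
    }
    return width;
}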