[Dev] two layer object caching

David McCusker david at treedragon.com
Mon Nov 4 12:47:08 PST 2002


I usually tell folks a two layer caching scheme is a good idea, as
opposed to just one layer of caching.  Below I explain what this means.
It might or might not be applicable to a given storage system or object
persistence system, depending on how the system works inside.  Note this
technique is not applicable to load-everything-in-memory systems.

Caching is an attempt to preserve work already done so it's not repeated.
In storage contexts this means not throwing away something already read
from disk, or already assembled into desired object form, until memory
pressure demands re-using space used by objects not accessed recently.

For a disk-based storage system, the lowest layer of caching is buffering
in memory the content read from secondary storage.  One can use pseudo
virtual memory (mimicking VM demand paging) or native memory mapping, but
memory mapping databases larger than 4 gigabytes is hard in a 32-bit
address space, so pseudo VM is the most general approach for scaling.
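
As a concrete sketch of the low layer (Python here just for brevity; the
class name, page size, and pool size are made up for illustration, not
taken from any real product), the low cache is basically a bounded pool of
raw pages keyed by page number, evicting the least recently used page when
the pool is full:

    import collections

    class BlockCache:
        # Low cache sketch: a bounded pool of raw disk pages with
        # least-recently-used eviction (pseudo VM demand paging in miniature).
        def __init__(self, file, page_size=4096, max_pages=1024):
            self.file = file                        # an open binary file
            self.page_size = page_size
            self.max_pages = max_pages
            self.pages = collections.OrderedDict()  # page number -> bytes

        def read_page(self, page_no):
            if page_no in self.pages:
                self.pages.move_to_end(page_no)     # hit: mark recently used
                return self.pages[page_no]
            self.file.seek(page_no * self.page_size)
            data = self.file.read(self.page_size)   # miss: the only disk access
            self.pages[page_no] = data
            if len(self.pages) > self.max_pages:
                self.pages.popitem(last=False)      # evict least recently used
            return data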

In a two layer caching scheme, the higher layer is a cache of objects
used by the application, in the non-serialized form used directly by app
code in normal execution.  The higher cache prevents excessive or repeated
trips back and forth between serialized and non-serialized object formats.

In short, a high cache is for objects, and a low cache is for serialized
objects.
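
Continuing the sketch above (again with hypothetical names and interfaces),
the high cache sits on top of the low cache and keys already-deserialized
objects by identity, so a hit skips both the disk and the parsing:

    class ObjectCache:
        # High cache sketch: deserialized objects keyed by object id,
        # layered over a low cache such as BlockCache above.
        def __init__(self, block_cache, deserialize, max_objects=64):
            self.block_cache = block_cache      # the low cache
            self.deserialize = deserialize      # bytes -> application object
            self.max_objects = max_objects
            self.objects = collections.OrderedDict()  # object id -> object

        def get(self, obj_id, page_no):
            if obj_id in self.objects:
                self.objects.move_to_end(obj_id)        # hit: no disk, no parsing
                return self.objects[obj_id]
            raw = self.block_cache.read_page(page_no)   # may still hit the low cache
            obj = self.deserialize(raw)                 # the work worth not repeating
            self.objects[obj_id] = obj
            if len(self.objects) > self.max_objects:
                self.objects.popitem(last=False)        # evict least recently used
            return obj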

A low cache usually needs to be a significant fraction of the entire db
in size in order to have excellent performance with a lot of random access.
But for a really large db, where it's not practical to use as much RAM as
you'd like, the only rule of thumb is "bigger is better".  Anyway, a low
cache typically needs to be big.

A high cache need not be nearly as big as a low cache to be effective.  In
some situations, caching a single most recently used object is enough to
fix some pathological app behaviors.  Here's a Netscape address book example:

The Metrowerks app framework for rendering tables was structured so each
table cell would make calls to get data needed to render a cell, without
any coordination between cells.  As a result, when a single row in a table
described a single address book entry, this entry was fetched independently
from the database, once per cell for all seven cells on every line.

(I call this behavior a typical example of "the left hand doesn't know
what the right hand is doing" in object oriented frameworks.)
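
Here is a hedged sketch of the fix in the same Python style (the names are
invented for illustration, not the actual framework or Netscape code): the
table view remembers only the last entry it fetched, so seven cell
callbacks on the same row cost one database fetch instead of seven.

    class AddressBookTableView:
        # Sketch: a one-entry cache of the most recently fetched entry.
        def __init__(self, db):
            self.db = db                 # anything with a fetch_entry(row_id)
            self.last_row_id = None
            self.last_entry = None

        def entry_for_row(self, row_id):
            if row_id != self.last_row_id:                  # only first cell misses
                self.last_entry = self.db.fetch_entry(row_id)
                self.last_row_id = row_id
            return self.last_entry

        def cell_text(self, row_id, column):
            return self.entry_for_row(row_id)[column]       # e.g. 'name', 'email'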

Anyway, just caching the most recently used object in exactly the form
ready for application use fixes this worst case problem.  Better still is
caching the last N recently used objects (or the last M bytes of footprint
worth of objects).  However, the amount of RAM used by the N objects in a
high cache need not be a significant fraction of database size to work well.
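
For the last-N / last-M-bytes variant, the same idea generalizes to a small
LRU with two budgets (another illustrative sketch, with a fetch function
standing in for whatever loads an object and reports its size):

    import collections

    class RecentObjectCache:
        # Sketch: keep the last N objects, bounded also by approximate
        # total byte footprint; evict least recently used first.
        def __init__(self, fetch, max_objects=32, max_bytes=256 * 1024):
            self.fetch = fetch                    # key -> (object, size in bytes)
            self.max_objects = max_objects
            self.max_bytes = max_bytes
            self.total_bytes = 0
            self.entries = collections.OrderedDict()  # key -> (object, size)

        def get(self, key):
            if key in self.entries:
                self.entries.move_to_end(key)
                return self.entries[key][0]
            obj, size = self.fetch(key)
            self.entries[key] = (obj, size)
            self.total_bytes += size
            while (len(self.entries) > self.max_objects
                   or self.total_bytes > self.max_bytes):
                _, (_, old_size) = self.entries.popitem(last=False)
                self.total_bytes -= old_size
            return obj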

A low cache avoids the unnecessary disk latency of re-accessing the same
parts of secondary storage over and over.  And because disks are incredibly
slow in comparison to current chip clock speeds, it's important to avoid
ongoing disk costs during app operations that involve very many seeks.  (A
disk can typically perform only on the order of 30 to 120 seeks per second,
and opening a file can take nearly half a second on some network file
servers.)
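
As a back-of-the-envelope illustration (my own assumed numbers, not
measurements): at roughly 100 seeks per second, redrawing 30 rows with a
fetch per cell versus a fetch per row is the difference between about two
seconds and a third of a second of pure seek time.

    seeks_per_second = 100           # assumed mid-range figure for a disk
    rows, cells_per_row = 30, 7
    fetch_per_cell = rows * cells_per_row / seeks_per_second   # ~2.1 seconds
    fetch_per_row  = rows / seeks_per_second                   # ~0.3 seconds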

When the low cache is working well and disk cost is low, then a good high
cache starts to matter because wasted cycles converting back and forth
between serialized and non-serialized form can become the next bottleneck
instead of disk access time.

Sorry for the long lecture, and I hope it was not overly obvious.  I just
thought it would be useful to say it once in one place for future reference,
rather than explaining numerous times later.

Some operating systems are good about caching recently used files in RAM,
which makes app buffering somewhat less critical.  (For example, Linux
is very nice about using RAM to hold files.)  But this is more a deployment
context issue than an architecture issue for cross platform applications.

--David McCusker



