Coming soon – A new RAM cache

Last year Martin Jonsson blogged about the HTTP compression feature he was working on, and although it hasn't yet been enabled in standard 5.0 builds it will be in the near future. As Martin wrote, you can test it now by using the -DHTTP_COMPRESSION start flag.

Another recent development which is about to leave the lab is a brand new RAM cache module. Like the HTTP compression capability, we are giving it some real-world stress testing before enabling it universally, but soon we will flip the switch and make it official.

What is the cache doing?

Roxen's RAM cache is used to store many different data types such as compiled RXML pages, <cache> tag output, XML node trees, repository metadata, XSLT stylesheets, file objects etc. Its main purpose is to allow quick retrieval of data which is expensive to build from scratch.

However, caching something also has a cost since total RAM is limited. The cache has to decide whether it makes more sense to keep an item or to throw it away to make room for other data. To help in this decision, it collects statistics on how many times the item has been retrieved in the past and uses that to predict its future relevance.

Another crucial parameter, aside from the memory footprint, is the time it takes to rebuild a given entry if it's evicted from the cache. These two properties, size and regeneration cost, vary greatly depending on data type. For instance, a short directory listing may require disk I/O to be reconstructed and thus be more expensive to produce than evaluating a large RXML code block inside a <cache> tag. The age of an item is another property that can be taken into consideration, as is the hit ratio for all items of a particular data type.
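To make the trade-off concrete, here is a purely illustrative Pike sketch of how such properties could be weighed against each other. The function, its parameters and the weights are hypothetical and not taken from Roxen's actual policy code; they only show the kind of cost/benefit reasoning described above.

  // Hypothetical scoring function -- not Roxen's actual eviction policy.
  // A higher score means the entry is more worth keeping: entries that
  // are expensive to rebuild, frequently hit and small score well, while
  // large entries that haven't been used in a long time score poorly.
  float entry_score(int size_bytes, float create_cost_s, int hits, int age_s)
  {
    // Benefit: rebuild time we would save, scaled by how often the
    // entry is actually requested.
    float value = create_cost_s * (hits + 1);

    // Discount entries that have been idle for a long time.
    float age_penalty = 1.0 / (1.0 + age_s / 3600.0);

    // Normalize by the memory the entry occupies.
    return value * age_penalty / (size_bytes + 1);
  }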

What's wrong with the old cache?

Only in Pike 7.8 did we get the proper low-level calls to compute the size of complex data items stored in the cache. Moreover, in the old cache regeneration cost wasn't measured at all, so eviction decisions were based on hit rate and item age alone.
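For the curious, the Pike 7.8 addition referred to here is presumably Pike.count_memory(), which follows internal references and reports how many bytes a data structure actually occupies. A minimal stand-alone sketch (the sample data is of course made up):

  int main()
  {
    // A string-heavy mapping, roughly comparable to a cached page fragment.
    mapping page_data = ([ "title": "Hello", "body": "lorem ipsum " * 200 ]);

    // count_memory(0, ...) measures the memory occupied by the structure,
    // including data reachable through it.
    int bytes = Pike.count_memory(0, page_data);
    write("Entry size: %d bytes\n", bytes);
    return 0;
  }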

The most serious problem was that, due to the lack of reliable size information, we were unable to determine the total memory usage of the cache. Without this knowledge we couldn't place an upper limit on the cache; instead, many product subsystems had to adopt very conservative cache policies just to keep the total cache size reasonable in worst-case situations. Failing to do that would have led to swapping or even server crashes on systems where the usable address space is tight (32-bit Windows comes to mind). Needless to say, these worst-case scenarios represent an extreme that most installations never encounter, and because of that we got less than optimal use of RAM in normal operation.

Anyone who looked at the Cache Status page in the administration interface could see that perfectly cacheable items, such as expensive XSLT transforms, only stayed in RAM for a few minutes even when the server was idling. Ideally such data should stay cached indefinitely if nothing else needs the space.

How does it work?

The new cache takes advantage of the Pike additions mentioned above to correctly compute the memory footprint of complex data structures. It also adds advanced policy managers based on scientific research. Finally, there is an admin-configurable size limit which allows you to control how much memory you want to assign to the cache. If you have the need, a 64-bit Roxen server lets you go beyond the traditional 2/3/4 GB address-space boundaries seen in 32-bit versions. A relatively small investment in a RAM upgrade may give significant performance improvements for existing servers.

Of course, you shouldn't reserve so much memory that the Roxen process (or other processes) starts swapping; you must also leave room for other (uncached) data in Roxen's memory. At this time we have no exact guidelines, but I'm sure we'll know more once this has been in use for a while.

Let me try now!

Make sure you are using Roxen 5.0.449-release3 or later, since earlier versions have known problems in this area. Next, start your server with the -DNEW_RAM_CACHE flag and you are done!

Size configuration takes place in the Globals > Cache page in the administration interface, and there is a Cache Status wizard in Tasks > Status.

Compatibility issues for module developers

Module developers should note that the Pike-level API for the cache is backwards compatible. Still, there are some recommended changes for certain usage patterns that will help the cache function better:

  • Call cache_peek() instead of cache_lookup() when you only want to peek in the cache; it avoids skewing the internal statistics when you don't really want the data (see the sketch after this list).
  • The cache detects a cache_lookup() failure followed by a subsequent cache_set() and uses the time in between as the cache entry's generation cost. If your cache access doesn't follow this model, you should pass a special flag to cache_set() to prevent the timing from being computed incorrectly.
  • The best way to empty a whole cache is to call cache_expire() rather than calling cache_remove() with zero as the second argument.
  • The new cache implementation contains several policy managers with different heuristics. Developers can choose which one to use for each cache by calling cache_register(), though it's not mandatory to do so. The internal behavior of the managers may differ in future versions or even between operating systems in the same version.
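To make these recommendations concrete, here is a hedged sketch of how a module could use the API. The exact argument lists, the cache name "my_module:dirs" and the helper render_listing() are assumptions for illustration only; consult server-x.y.z/base_server/cache.pike for the authoritative signatures.

  // Hedged sketch -- check base_server/cache.pike for the real signatures.
  // The cache name and render_listing() are hypothetical.

  string render_listing(string dir)
  {
    // Stand-in for an expensive operation (e.g. one involving disk I/O).
    return "listing of " + dir;
  }

  mixed get_listing(string dir)
  {
    // Normal pattern: a failed cache_lookup() followed by cache_set()
    // lets the cache use the time spent in between as the generation cost.
    mixed data = cache_lookup("my_module:dirs", dir);
    if (!data) {
      data = render_listing(dir);
      cache_set("my_module:dirs", dir, data);
    }
    return data;
  }

  int(0..1) have_listing(string dir)
  {
    // Only checking for presence: cache_peek() avoids skewing the
    // internal hit statistics.
    return !!cache_peek("my_module:dirs", dir);
  }

  void flush_listings()
  {
    // Empty the whole cache with cache_expire() rather than calling
    // cache_remove() with zero as the second argument.
    cache_expire("my_module:dirs");
  }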

The full API and documentation can be seen in server-x.y.z/base_server/cache.pike.

Credits

Thanks to Martin Stjernholm and Martin Jonsson for the development work.

2010-03-22, 21:39 by Jonas Wallden

1   Martin Stjernholm

2010-03-23 14:05

In the old cache, people often had to tune down the gc interval setting (under Globals -> Cache) to keep the cache from growing out of hand. That's also a thing of the past - the new cache never exceeds the configured size.

Rather, the gc interval in the new cache says how often Roxen should go through the cache to throw out garbage (i.e. stale or timed-out) entries to make more room for new data. Under the Garbage Collector heading on the Cache Status page, there are some useful statistics for tuning the GC interval. Using those you can strike a balance between the amount of garbage and the time taken in the garbage collector.

Footnote: For those of you that sometimes hear us pike nerds talk about the "pike gc", I can mention that this gc and the pike gc are two entirely different things. Don't mix them up.

2   Martin Stjernholm

2010-03-23 14:14

Should also mention that there are several more caches in Roxen besides this one. In particular, the http protocol cache (i.e. the one that caches complete http responses and which is configured under Cache tab for each site) is separate and (so far) not affected by this cache system.
