Evaluating effects of cache memory compression on embedded systems
Anderson Farias Briglia
Nokia Institute of Technology
anderson.briglia@indt.org.br
Allan Bezerra
Nokia Institute of Technology
allan.bezerra@nokia.com
Leonid Moiseichuk
Nokia Multimedia, OSSO
leonid.moiseichuk@nokia.com
Nitin Gupta
VMware Inc.
ngupta@vmware.com
Abstract
Cache memory compression (or compressed caching)
was originally developed for desktop and server platforms,
but it has also attracted interest on embedded systems,
where memory is generally a scarce resource and
hardware changes bring additional cost and energy consumption.
Cache memory compression brings a considerable
advantage to I/O-intensive applications
by providing a virtually larger cache for the local
file system through compression algorithms. As a result,
it increases the probability of fetching the necessary data
from RAM itself, avoiding slow calls to
local storage. This work evaluates an open source implementation
of cache memory compression applied
to Linux on an embedded platform, dealing with the unavoidable
processor and memory resource limitations as
well as with existing architectural differences.
We describe the Compressed Cache (CCache) design,
the compression algorithms used, memory behavior
tests, performance and power-consumption overhead,
and CCache tuning for embedded Linux.
1 Introduction
Compressed caching is the introduction of a new level
into the virtual memory hierarchy. Specifically, RAM
is used to store both an uncompressed cache of pages
in their ‘natural’ encoding, and a compressed cache of
pages in some compressed format. By using RAM to
store some number of compressed pages, the effective
size of RAM is increased, and so the number of page
faults that must be handled by very slow hard disks is
decreased. Our aim is to improve system performance.
When that is not possible, our goal is to introduce no (or
minimal) overhead when compressed caching is enabled
in the system.
Experimental data show that not only