On Wed, May 30, 2007 at 09:49:38AM -0700, Michael Nishimoto wrote:
> Has anyone done any work or had thoughts on changes required
> to reduce the total memory footprint of high extent xfs files?
A bit over a year ago we changed the way we do memory
allocation to avoid needing large contiguous chunks of memory;
that solved the main OOM problem being reported with highly
fragmented files.
> Obviously, it is important to reduce fragmentation as files
> are generated and to regularly defrag files, but neither of
> these alternatives is a complete solution.
> To reduce memory consumption, xfs could bring in extents
> from disk as needed (or just before needed) and could free
> up mappings when certain extent ranges have not been recently
> accessed. A solution should become more aggressive about
> reclaiming extent mapping memory as free memory becomes limited.
Yes, it could, but that's a pretty major overhaul of the extent
interface, which currently assumes everywhere that the entire
extent tree is in core.
Can you describe the problem you are seeing that leads you to
ask this question? What's the problem you need to solve?
SGI Australian Software Group