On Wed, Aug 17, 2011 at 03:05:28PM +0200, Bernd Schubert wrote:
> >(squeeze-x86_64)fslab2:~# xfs_bmap -a
> > 0: [0..7]: 92304..92311
> (Sorry, I have no idea what "0: [0..7]: 92304..92311" is supposed to
> tell me).
It means that you have a single extent spanning 8 blocks for xattr
storage, mapping to physical blocks 92304 to 92311 in the filesystem.
It sounds to me like your workload has a lot more than 256 bytes of
xattrs, or the underlying code is doing something rather stupid.
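A quick way to verify that (untested sketch, the path is just a
placeholder for one of the files bonnie++ created) is to dump the
xattrs on a test file and add up the name and value sizes:

    getfattr -d -m '' -e hex /mnt/test/somefile

If the total no longer fits into the inode's literal area you will
always end up with an out-of-line attr extent like the one above.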
> Looking at 'top' and 'iostat -x' output, I noticed we are actually
> not limited by io to disk, but CPU bound. If you should be
> interested, I have attached 'perf record -g' and 'perf report -g'
> output, of the bonnie file create (create + fsetfattr() ) phase.
It's mostly spending a lot of time copying things into the CIL
buffers, which is expected and intentional as that allows for additional
parallelism. If you switch the workload to multiple instances doing
the create in parallel you should be able to scale to better numbers.
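E.g. something like this (just a sketch, the directory layout and
options are made up) would run four create phases in parallel instead
of one:

    # one bonnie++ instance per directory, small-file tests only
    # (add -u <user> if running as root)
    for i in 1 2 3 4; do
        bonnie++ -d /mnt/test/dir$i -s 0 -n 100:256:256/10 &
    done
    wait

With a single process you mostly measure the single-threaded CPU cost
of those copies rather than the log or the disk.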
> > 100:256:256/10 37026 91 +++++ +++ 43691 93 35960 92 +++++ +++ 40708 92
> >Latency 4328us 765us 2960us 527us 440us 1075us
> mkfs.xfs -f -i size=512 -i maxpct=90 -l lazy-count=1 -n size=64k /dev/sdd
Do 64k dir blocks actually help you with the workload? They also tend
to do a lot of useless memcpys in their current form, although these
didn't show up on your profile. Did you try using a larger inode size
as suggested in my previous mail?
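E.g. something along these lines (sketch only, I just bumped the inode
size and left the directory block size at the default) should leave
enough room in the inode to keep a few hundred bytes of attrs inline:

    mkfs.xfs -f -i size=2048,maxpct=90 -l lazy-count=1 /dev/sdd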