Subject: Large sequential reads.
From: Kalvinder Singh <ksingh@xxxxxxxxxxx>
Date: Mon, 09 Apr 2001 15:01:39 +1000
User-agent: Mozilla/5.0 (X11; U; Linux 2.2.16-22 i686; en-US; 0.8.1) Gecko/20010326
Hi Guys,

I have been running bonnie++ over XFS + RAID5, and I have to say congrats on a great job. You have made my life a lot easier, and I probably owe each of you quite a few beers.
My question, though, concerns large sequential reads (45K to 64K). I noticed that I was not able to increase the block size on XFS (from 4K to 16K or 32K). Is there another way to improve these reads? Are there improvements planned for the kernel code or XFS in the near future so that large sequential reads are efficient?
I would like to hear a response to this, even if it says "no way are we going to do this...".
Anyway, once again, congrats on a great job guys. Cheers, Kal.