
Results:

References: [ +subject:/^(?:^\s*(re|sv|fwd|fw)[\[\]\d]*[:>-]+\s*)*buffered\s+writeback\s+torture\s+program\s*$/: 19 ]

Total 19 documents matching your query.

1. buffered writeback torture program (score: 1)
Author: Chris Mason <chris.mason@xxxxxxxxxx>
Date: Wed, 20 Apr 2011 14:23:29 -0400
Hi everyone, The basic idea is: 1) make a nice big sequential 8GB file 2) fork a process doing random buffered writes inside that file 3) overwrite a second 4K file in a loop, doing fsyncs as you go.
/archives/xfs/2011-04/msg00255.html (13,549 bytes)
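The three steps Chris describes can be sketched, heavily scaled down, roughly like this (a Python stand-in for the original C test; the file names and sizes are placeholders, the real program used an 8GB file, and it forked a separate process for the random writer rather than running the steps in sequence):

```python
import os
import random
import time

# Placeholder sizes: the original used an 8GB sequential file.
BIG_SIZE = 8 * 1024 * 1024
BLOCK = 4096

def make_sequential_file(path, size):
    """Step 1: create a big file with one sequential streaming write."""
    with open(path, "wb") as f:
        buf = b"\0" * (1024 * 1024)
        written = 0
        while written < size:
            chunk = buf[:min(len(buf), size - written)]
            f.write(chunk)
            written += len(chunk)
        f.flush()
        os.fsync(f.fileno())

def random_buffered_writes(path, size, count):
    """Step 2: random buffered (never fsynced) writes inside the big file."""
    fd = os.open(path, os.O_WRONLY)
    try:
        for _ in range(count):
            off = random.randrange(0, size - BLOCK) & ~(BLOCK - 1)
            os.pwrite(fd, b"x" * BLOCK, off)
    finally:
        os.close(fd)

def fsync_loop(path, iters):
    """Step 3: overwrite a small 4K file in a loop, fsyncing each pass,
    and record how long each fsync takes."""
    times = []
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        for _ in range(iters):
            os.pwrite(fd, b"y" * BLOCK, 0)
            t0 = time.monotonic()
            os.fsync(fd)
            times.append(time.monotonic() - t0)
    finally:
        os.close(fd)
    return times
```

The point of the test is that the step-3 fsyncs of a tiny file can stall for a long time while the kernel is busy with the dirty pages produced by steps 1 and 2.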

2. Re: buffered writeback torture program (score: 1)
Author: Vivek Goyal <vgoyal@xxxxxxxxxx>
Date: Wed, 20 Apr 2011 18:06:26 -0400
Chris, You seem to be doing 1MB (32768*32) writes on the fsync file instead of 4K. I changed the size to 4K, still not much difference though. Once the program has exited because of high write time, I res
/archives/xfs/2011-04/msg00258.html (10,647 bytes)

3. Re: buffered writeback torture program (score: 1)
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Thu, 21 Apr 2011 04:32:58 -0400
I can't really reproduce this locally on XFS: setting up random write file done setting up random write file starting fsync run starting random io! write time: 0.0023s fsync time: 0.5949s write time:
/archives/xfs/2011-04/msg00269.html (10,115 bytes)

4. Re: buffered writeback torture program (score: 1)
Author: Chris Mason <chris.mason@xxxxxxxxxx>
Date: Thu, 21 Apr 2011 07:09:11 -0400
Excerpts from Vivek Goyal's message of 2011-04-20 18:06:26 -0400: Whoops, I had that change made locally but didn't get it copied out. I see this for some of my runs as well. I haven't traced it to s
/archives/xfs/2011-04/msg00275.html (12,026 bytes)

5. Re: buffered writeback torture program (score: 1)
Author: Chris Mason <chris.mason@xxxxxxxxxx>
Date: Thu, 21 Apr 2011 11:25:41 -0400
Excerpts from Chris Mason's message of 2011-04-21 07:09:11 -0400: Oh, I see now. The test program first creates the file with a big streaming write. So the task doing the streaming writes gets nailed
/archives/xfs/2011-04/msg00278.html (11,902 bytes)

6. Re: buffered writeback torture program (score: 1)
Author: Vivek Goyal <vgoyal@xxxxxxxxxx>
Date: Thu, 21 Apr 2011 11:35:47 -0400
Ok, that makes sense. So the initial file creation accounted lots of buffered IO to this process, hence the VM thinks it has crossed its dirty limits, and later this task comes in with a 4K write and gets throttled.
/archives/xfs/2011-04/msg00279.html (12,700 bytes)

7. Re: buffered writeback torture program (score: 1)
Author: Jan Kara <jack@xxxxxxx>
Date: Thu, 21 Apr 2011 18:55:29 +0200
Ok, so there isn't a problem with fsync() as such if I understand it right. We just block tasks in balance_dirty_pages() for a *long* time because it takes long time to write out that dirty IO and we
/archives/xfs/2011-04/msg00281.html (13,074 bytes)

8. Re: buffered writeback torture program (score: 1)
Author: Chris Mason <chris.mason@xxxxxxxxxx>
Date: Thu, 21 Apr 2011 12:57:17 -0400
Excerpts from Jan Kara's message of 2011-04-21 12:55:29 -0400: You're right. With one small exception, we probably do want to rotor out of the random buffered writes in hopes of finding some sequenti
/archives/xfs/2011-04/msg00282.html (13,662 bytes)

9. Re: buffered writeback torture program (score: 1)
Author: Chris Mason <chris.mason@xxxxxxxxxx>
Date: Thu, 21 Apr 2011 13:34:44 -0400
Excerpts from Christoph Hellwig's message of 2011-04-21 04:32:58 -0400: Sorry, this doesn't do it. I think that given what a strange special case this is, we're best off waiting for the IO-less throt
/archives/xfs/2011-04/msg00283.html (11,375 bytes)

10. Re: buffered writeback torture program (score: 1)
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Thu, 21 Apr 2011 13:41:21 -0400
I'm not sure what you mean with seek aware. XFS only clusters additional pages that are in the same extent, and in fact only does so for asynchronous writeback. Not sure how this should be more see
/archives/xfs/2011-04/msg00284.html (10,515 bytes)

11. Re: buffered writeback torture program (score: 1)
Author: Andreas Dilger <adilger@xxxxxxxxx>
Date: Thu, 21 Apr 2011 11:59:37 -0600
But doesn't XFS have potentially very large extents, especially in the case of files that were fallocate()'d or linearly written? If there is a single 8GB extent, and then random writes within that e
/archives/xfs/2011-04/msg00285.html (11,024 bytes)

12. Re: buffered writeback torture program (score: 1)
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Thu, 21 Apr 2011 14:02:13 -0400
It doesn't cluster any writes in an extent. It only writes out additional dirty pages directly following the one we were asked to write out. As soon as we hit a non-dirty page we give up.
/archives/xfs/2011-04/msg00286.html (10,862 bytes)
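The clustering rule Christoph describes, which entry 16 notes is also bounded by nr_to_write, can be modeled with a few lines (a simplified illustration, not XFS code; the function name is made up):

```python
def cluster_writeback(dirty, start, nr_to_write):
    """Model of the rule described above: write the requested page, then
    additional dirty pages directly following it, stopping at the first
    non-dirty page or once nr_to_write pages have been written."""
    written = []
    page = start
    while page in dirty and len(written) < nr_to_write:
        written.append(page)
        page += 1
    return written
```

So with dirty pages {10, 11, 12, 14} and a writeback request for page 10, the model writes 10, 11 and 12 and gives up at clean page 13, never reaching 14 even though it is in the same extent.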

13. Re: buffered writeback torture program (score: 1)
Author: Chris Mason <chris.mason@xxxxxxxxxx>
Date: Thu, 21 Apr 2011 14:00:29 -0400
Excerpts from Christoph Hellwig's message of 2011-04-21 13:41:21 -0400: How big are extents? fiemap tells me the file has a single 8GB extent. There's a little room for seeking inside there. -chris
/archives/xfs/2011-04/msg00287.html (10,006 bytes)

14. Re: buffered writeback torture program (score: 1)
Author: Chris Mason <chris.mason@xxxxxxxxxx>
Date: Thu, 21 Apr 2011 14:02:43 -0400
Excerpts from Christoph Hellwig's message of 2011-04-21 14:02:13 -0400: For this program, they are almost all dirty pages. I tried patching it to give up if we seek but it is still pretty slow. There
/archives/xfs/2011-04/msg00288.html (11,182 bytes)

15. Re: buffered writeback torture program (score: 1)
Author: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Thu, 21 Apr 2011 14:08:05 -0400
I'm not sure where you get this being too aggressive from - it's doing exactly the same amount of I/O as a filesystem writing out a single page from ->writepage or using write_cache_pages (either directly
/archives/xfs/2011-04/msg00289.html (11,033 bytes)

16. Re: buffered writeback torture program (score: 1)
Author: Chris Mason <chris.mason@xxxxxxxxxx>
Date: Thu, 21 Apr 2011 14:29:37 -0400
Excerpts from Christoph Hellwig's message of 2011-04-21 14:08:05 -0400: Ok, I see what you mean. The clustering code stops once it hits nr_to_write, I missed that. So we shouldn't be doing more than
/archives/xfs/2011-04/msg00290.html (11,326 bytes)

17. Re: buffered writeback torture program (score: 1)
Author: Andreas Dilger <adilger@xxxxxxxxx>
Date: Thu, 21 Apr 2011 12:43:47 -0600
I wonder if it makes sense to disentangle the two processes' state in the kernel, by forking the fsync thread before any writes are done. That would avoid penalizing the random writer in the VM/VFS, b
/archives/xfs/2011-04/msg00291.html (11,835 bytes)

18. Re: buffered writeback torture program (score: 1)
Author: Chris Mason <chris.mason@xxxxxxxxxx>
Date: Thu, 21 Apr 2011 14:47:22 -0400
Excerpts from Andreas Dilger's message of 2011-04-21 14:43:47 -0400: The test itself may not be realistic, but I actually think it's a feature that we end up stuck doing the random buffered IOs. Someh
/archives/xfs/2011-04/msg00292.html (12,537 bytes)

19. Re: buffered writeback torture program (score: 1)
Author: Jan Kara <jack@xxxxxxx>
Date: Thu, 21 Apr 2011 22:44:34 +0200
Flusher thread should do this - it writes at most 1024 pages (as much as the queue takes) and goes on with the next dirty inode. So if there is some sequential IO as well, we should get to it... Honza -- Jan Kara
/archives/xfs/2011-04/msg00309.html (16,283 bytes)
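Jan's point, that the flusher takes at most 1024 pages from one inode before moving to the next dirty inode, so sequential IO elsewhere still gets serviced, can be modeled as a simple round-robin (a toy illustration with made-up names, not kernel code):

```python
from collections import deque

MAX_PAGES_PER_INODE = 1024  # the per-inode batch Jan mentions

def flusher_rounds(dirty_pages):
    """Toy model of the round-robin writeback described above: take up to
    MAX_PAGES_PER_INODE pages from each dirty inode, then move on, cycling
    until everything is clean. Returns the sequence of (inode, pages) batches."""
    queue = deque(dirty_pages.items())
    order = []
    while queue:
        inode, remaining = queue.popleft()
        batch = min(remaining, MAX_PAGES_PER_INODE)
        order.append((inode, batch))
        if remaining - batch > 0:
            queue.append((inode, remaining - batch))
    return order
```

In this model an inode with 3000 dirty random-write pages cannot starve a 500-page sequential file: the sequential inode gets its turn after the first 1024-page batch.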


This search system is powered by Namazu