We are having some problems with preallocation of large files. We have found that we can preallocate about 500 1 GB files on a volume using the resvsp and truncate commands (via xfs_io), but the extents are still showing up as preallocated/unwritten. Is this a problem? The OS appears to think the files are allocated and correctly sized.
For reference, we are trying to create files for an external piece of equipment to write to an SSD. The SSD would then be mounted in RHEL and the data pulled off in 1 GB chunks. Because of the nature of the data, we need to constantly erase and recreate the files, and preallocation seems to be the fastest option. We don't really care if the data gets zeroed out. Is there another method? allocsp takes too long for this application. Or, does it matter if XFS thinks the extents are preallocated but unwritten, if no other files are written to the disk?
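For what it's worth, here is a minimal sketch of the approach we are describing, expressed through the portable fallocate path instead of the XFS-specific resvsp ioctl. On XFS, posix_fallocate/fallocate reserves the same kind of unwritten extents that resvsp does, and it also sets the file size in one call (so no separate truncate step). File names and sizes below are purely illustrative.

```python
import os

def preallocate(path, size):
    """Preallocate `size` bytes for `path` without writing any data.

    On XFS this reserves unwritten extents (the same on-disk state
    resvsp leaves behind), so it returns quickly; reads of the file
    see zeroes until real data is written over the extents.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        # Allocates blocks for [0, size) and extends the file size,
        # replacing the resvsp + truncate pair with one call.
        os.posix_fallocate(fd, 0, size)
    finally:
        os.close(fd)

# Example (path is hypothetical): one 1 GiB chunk.
# preallocate("/mnt/ssd/chunk000.dat", 1 << 30)
```

Erasing and recreating a chunk is then just an unlink followed by another preallocate call, which never touches the data blocks themselves.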