On Wed, Jan 15, 2003 at 09:57:28AM -0600, Mandy Kirkconnell wrote:
> Jason Joines wrote:
> > Mandy Kirkconnell wrote:
> >> Jason Joines wrote:
> >>> What's the maximum file size for a file to be dumped by xfsdump?
> >> xfsdump doesn't (really) have a maximum file size limitation. There
> >> is a maximum file size defined in xfsdump/dump/content.c but it is
> >> set to the largest theoretical file size, 18 million terabytes. The
> >> limit is defined in bytes:
> >> /* max "unsigned long long int" */
> >> #define ULONGLONG_MAX 18446744073709551615LLU
> >> Obviously this maximum limit is impossible to hit, which is why I say
> >> xfsdump doesn't have a max file size limit. You should be able to
> >> dump the biggest possible file you can create.
> >> There is, however, a command line option (-z) to set a maximum file
> >> size, in kilobytes, for your dump; files over this size are
> >> excluded from the dump.
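For what it's worth, a -z invocation would look something like this
(the size is illustrative; -z takes kilobytes, so 4194304 KiB = 4 GiB,
and the paths are just copied from the command quoted below):

```shell
# Exclude any file larger than 4 GiB from the dump (sketch only)
xfsdump -z 4194304 -l 0 -f /local/backup/weekly/sdb3.dmp /dev/sdb3
```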
> > When running a dump with "xfsdump -F -e -f
> > /local/backup/weekly/sdb3.dmp -l 0 /dev/sdb3" I get the message,
> > "xfsdump: WARNING: could not open regular file ino 4185158 mode
> > 0x000081b0: File too large: not dumped". The file in question is 5.0 GB.
> > Jason
> > ===========
> xfsdump does not set EFBIG (errno 27) anywhere. It looks like the error
> is coming from the filesystem on the first attempt to open the file.
Another thing to note is that in April 2002 a fix went into libhandle,
which xfsdump uses:
xfsprogs-2.0.2 (4 April 2002)
- Bumped version of libhandle to libhandle.so.1.0.1
This changes open_by_handle() and friends so that
O_LARGEFILE is added to the open flags.
This allows xfsdump to dump files greater than
2^31-1 bytes instead of not dumping the large
files and giving warning messages.