On Fri, 17 Oct 2003 at 18:06, W. Chris Shank wrote:
> I'm trying to move a 28G file from Linux to OS X. Transmit fails after
> 4G. scp fails after about the same. I'm trying to avoid cutting it up -
> I already tried to make one 5.6G file - but that fails too (no
> surprise). Unless I can take my 28G tarball and slice it into 7 4G
> pieces and reassemble them on the other side. It just really bothers me
> that it's bombing at 4G. I assume because the file is being cached in
> RAM. Any way to force scp to incrementally write the file to disk?

I checked it out further, and it appears that HFS has a 2 GB file size
limit. HFS+, however, has a file size limit of 2^63 bytes, which is
about 8 exabytes, so the filesystem itself shouldn't be the problem
here. As Adam points out, something you're using is not large-file-aware:
a failure at exactly 4 GB is the classic signature of a 32-bit file size
or offset (2^32 bytes) somewhere in the chain, not of the file being
cached in RAM. Be aware, though, that the culprit may be your shell
rather than your copy of cp (scp isn't an alias for cp; it's essentially
a file copy carried over an ssh connection).
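A quick way to check whether a given binary on the Linux side was built
large-file-aware is to see how big an off_t it was compiled with. Here's
a minimal sketch (assumes gcc and glibc; "probe.c" is just a placeholder
name):

    /* probe.c - report the size of off_t this binary was built with */
    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        /* With a 4-byte off_t, calls like lseek() and stat() fail with
         * EOVERFLOW once a file crosses 2^31-1 bytes. With an 8-byte
         * off_t the same calls handle huge files fine. */
        printf("sizeof(off_t) = %zu bytes\n", sizeof(off_t));
        return 0;
    }

On a 32-bit Linux box, "gcc probe.c" should print 4, while
"gcc -D_FILE_OFFSET_BITS=64 probe.c" should print 8; tools built the
first way are the ones that die around the 2-4 GB mark.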
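As for slicing the tarball into pieces and reassembling on the other
side, split(1) and cat already do exactly that (something like
"split -b 2000m bigfile" here and "cat x* > bigfile" on the far end).
If you want to see the idea spelled out, here's a rough C sketch of the
same thing; the 2 GiB piece size and file names are just placeholders:

    /* splitter.c - carve a big file into 2 GiB pieces so each piece
     * stays under any 32-bit limit in the transfer path.
     * Build with: gcc -D_FILE_OFFSET_BITS=64 -o splitter splitter.c
     * Reassemble with: cat bigfile.part* > bigfile */
    #include <stdio.h>
    #include <stdlib.h>

    #define PIECE ((long long)2 * 1024 * 1024 * 1024)  /* 2 GiB/piece */
    #define BUFSZ (1 << 20)                            /* 1 MiB buffer */

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s bigfile\n", argv[0]);
            return 1;
        }
        FILE *in = fopen(argv[1], "rb");
        if (!in) { perror(argv[1]); return 1; }

        char *buf = malloc(BUFSZ);
        char name[4096];
        int piece = 0;
        size_t n;
        long long written = PIECE;   /* forces the first piece open */
        FILE *out = NULL;

        while ((n = fread(buf, 1, BUFSZ, in)) > 0) {
            if (written >= PIECE) {  /* start the next piece */
                if (out) fclose(out);
                snprintf(name, sizeof name, "%s.part%03d", argv[1], piece++);
                out = fopen(name, "wb");
                if (!out) { perror(name); return 1; }
                written = 0;
            }
            /* BUFSZ divides PIECE evenly, so every piece except the
             * last one comes out exactly 2 GiB. */
            fwrite(buf, 1, n, out);
            written += n;
        }
        if (out) fclose(out);
        fclose(in);
        free(buf);
        return 0;
    }

Each piece then goes over scp individually, and nothing in the path ever
has to represent a size bigger than 2^31.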
--
<< Tobias DiPasquale >>
88FA 30C9 1E63 CFE2 CBD8 37C4 DA1C E2BF 1D26 F036
http://cbcg.net/