Here are some quick tips for copying a ton of files between unixy machines really fast. You're probably thinking "why not use rsync?" Well, rsync can be miserably slow if your source or destination CPU is underpowered, since its per-file scanning and protocol overhead eat cycles. You can always run an rsync after these commands to make 100% certain that everything checks out, but try one of these methods for the initial copy:
- One way of doing it is with tar over ssh
tar -cf - /path/to/dir | ssh user@remote_server 'tar -xpvf - -C /absolute/path/to/remotedir'
You'll be prompted for the remote server's password, or you can authenticate with your private key via ssh's -i switch (an example follows at the end of this method). This has the side benefit of preserving permissions. An alternate version of this command can also be used to locally move folder structures across mount points while preserving permissions:
tar -cf - -C srcdir . | tar -xpf - -C destdir
or
cd srcdir ; tar -cf - . | (cd destdir ; tar -xpf -)
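If you went the key route mentioned above, the same pipe works with ssh's -i switch. Here's a sketch; the key path and directories are placeholders you'd swap for your own:
tar -cf - -C /path/to/dir . | ssh -i ~/.ssh/my_key user@remote_server 'tar -xpf - -C /absolute/path/to/remotedir'
Using -C on the sending side keeps the archived paths relative, so everything unpacks cleanly under the destination directory.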
- Another way of doing it with netcat (nc) is
srv2$ nc -l -p 4321 | tar -xpvf -
followed by
srv1$ tar -cvf - * | nc -w1 remote.server.net 4321
Note that you start the listener on the destination machine (srv2) first, then run the tar command on the source machine (srv1); otherwise nc has nothing to connect to. On systems with the BSD flavour of netcat, the listener is spelled nc -l 4321, without the -p.
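These raw TCP pipes give no progress indication. If you happen to have pv installed (an extra tool, not required for any of this), you can wedge it into the source side of the pipe to get a live throughput readout:
srv1$ tar -cvf - * | pv | nc -w1 remote.server.net 4321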
- Yet another way of doing it with the socat utility is
host2$ socat tcp4-listen:portnum stdout | tar -xvpf -
followed by
host1$ tar -cvf - * | socat stdin tcp4:host2:portnum
As with netcat, start the listener on the destination machine (host2) first, then run the tar command on the source machine (host1).
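If the network rather than the CPU turns out to be your bottleneck, you can trade some CPU for bandwidth by compressing the stream in flight. Here's a sketch using tar's -z (gzip) flag with the same socat pipe; remember the whole premise here is a weak CPU, so measure before committing to this:
host2$ socat tcp4-listen:portnum stdout | tar -xzpvf -
host1$ tar -czvf - * | socat stdin tcp4:host2:portnum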
Once your favourite process (above) is done, you can do a quick rsync to tie up any loose ends.
rsync -avW -e ssh /path/to/dir/ remote_server:/path/to/remotedir
Rsync will now fly through the filesystem as 99.9% of the time, 99.9% of the files on the destination are good. If you're extra paranoid, there's a checksum variant below. And as always, make sure you understand the commands before you use them… and keep backups, just in case :-).
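For that extra paranoia, rsync's -c (--checksum) flag compares full file checksums instead of just size and modification time. It's much slower, since every file gets read on both ends, but it will catch silent corruption:
rsync -avc -e ssh /path/to/dir/ remote_server:/path/to/remotedir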