I am looking for an 'sftp' alternative to the following command:
cat list_of_files_to_copy.txt | xargs -I % cp -r % -t /target/folder/
That is: read a text file containing the folder paths to be copied, and pass each line (here via xargs) to a cp command, processing them one by one.
I want to do this so I can parallelize the copying: by partitioning the full set of folders into several text files, I can feed each file to a separate copy command in its own terminal (if this will not work the way I expect, please comment).
For some reason the copy command is very slow on my system (even without parallelizing), whereas an sftp get seems more efficient.
Is there any way I can implement this using sftp get?
scp is the non-interactive counterpart of sftp, so why don't you just create a loop like this:
for F in $(<list_of_files_to_copy.txt); do
    scp -r "user@host:$F" /target/folder/    # user@host and /target/folder/ are placeholders
done
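If the goal is parallel transfers, here is a minimal sketch assuming GNU xargs (for the -P option); user@host and /target/folder/ are placeholders you would substitute:
# run up to 4 scp fetches at a time, one per line of the list
xargs -a list_of_files_to_copy.txt -I % -P 4 scp -r "user@host:%" /target/folder/
And if you specifically want sftp get, one option is to turn the list into a batch file and run it in a single sftp session:
# build a batch file of "get" commands and feed it to sftp -b
awk '{print "get -r \"" $0 "\" /target/folder/"}' list_of_files_to_copy.txt > batch.sftp
sftp -b batch.sftp user@host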
I'm trying to make a bash/expect script that takes input such as user, host, password, and file names and then copies the files from remote to local. From what I've read so far, scp'ing multiple files from remote to local works just fine when you're assuming the files are coming from ~/, e.g.:
scp -r user@host:"file1 file2 file3" .
would copy file1, file2, and file3 from ~/ into the current working directory. But I need to be able to pass in another directory as an argument, so my bash script looks like this (but doesn't work, I'll explain how):
eval spawn scp -oStrictHostKeyChecking=no -oCheckHostIP=no -r $user@$host:$dir/"$file1 $file2 $file3" ~/Downloads
This doesn't work after the first file: the shell raises a "No such file or directory" error for the remaining files, which I assume means the command only applies $dir to the first file, then falls back to ~/ and of course can't find the files there. I've looked everywhere for an answer on this but can't find one, and it would be super tedious to do this one file at a time.
Assuming your remote login shell understands brace expansion, this should work:
scp $user@$host:$dir/"{$file1,$file2,$file3}" ~/Downloads
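For illustration, with some hypothetical values (not from the question) this plays out as follows:
# hypothetical values, for illustration only
user=alice; host=server.example.com; dir=/data
file1=a.txt; file2=b.txt; file3=c.txt
scp $user@$host:$dir/"{$file1,$file2,$file3}" ~/Downloads
# the remote shell sees /data/{a.txt,b.txt,c.txt} and brace-expands it to the
# three paths, so all files are fetched in a single scp invocation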
If you want to download multiple files matching a specific pattern, for example all zip files, you can do the following:
scp -r user@host:/path/to/files/"*.zip" /your/local/path
I am using the command cp -a <source>/* <destination> to copy files into one particular destination. In the destination, the above command only replaces the files inside a folder that is also present in the source. If there are other files present in the destination, the command does nothing to them and leaves them as they are. Now, before doing the copy, I want to take a backup of the files that are about to be replaced. Is there an option in the cp command that does this?
There is no such option in the cp command; here you need a shell script. First, run ls in your destination directory and store the output in a file like history.txt. Then, just before the cp, grep the history file for the file you want to copy, to check whether it is already present in the destination. If it is (that is, the name appears in the history file), back up the existing file in the destination with today's datestamp first, and then copy the file of the same name from source to destination.
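A rough sketch of that approach (the source and destination paths below are placeholders):
#!/bin/bash
# back up any destination file that is about to be overwritten, then copy
src=/path/to/source
dst=/path/to/destination
ls "$dst" > history.txt
for f in "$src"/*; do
    name=$(basename "$f")
    if grep -qxF "$name" history.txt; then
        # the file already exists in the destination: back it up with today's datestamp
        cp -a "$dst/$name" "$dst/$name.$(date +%Y%m%d).bak"
    fi
    cp -a "$f" "$dst/"
done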
If you want to back up the files that would be overwritten by the copy, use the -b option, available in GNU cp:
cp -ab <source>/* <destination>
There are two caveats you should know about:
1. To my knowledge, this option is not available on non-GNU systems (such as BSD systems).
2. It will ask for confirmation for each existing file in the target. You can reduce the problem with the -u option, but that is still unusable in a script.
It appears to me that you are trying to make a backup (copy files to another location without erasing them or overwriting those already there), so you probably want to take a look at the rsync command. The same command would be written:
rsync -ab --suffix=".bak" <source>/ <destination>
and rsync is much more flexible for handling this sort of thing.
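As a hedged sketch (the paths are placeholders), rsync can for example collect the replaced files in a dated backup directory inside the destination instead of renaming them in place:
# replaced files are moved into backup-YYYYMMDD/ under the destination
rsync -ab --backup-dir="backup-$(date +%Y%m%d)" /path/to/source/ /path/to/destination/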
I like to create tar files to distribute some scripts, using bash.
Every script needs certain configuration files and libraries (or toolboxes),
e.g. a script called CheckTool.py needs Checks.ini, CheckToolbox.py and CommonToolbox.py to run; these are stored in specific folders on my hard disk and need to be copied in the same layout onto the user's hard disk.
I can create a tar file manually for each script, but I would like something simpler.
My idea is to define, for a specific script, a list of all needed files with their paths, and read that list in a bash script which creates the tar file.
I started with:
#!/bin/bash
while read -r line
do
    echo "$line"
done < "$1"
This reads the file names and paths. In my example the lines are:
./CheckTools/CheckMesh.bs
./Configs/CheckMesh.ini
./Toolboxes/CommonToolbox.bs
./Toolboxes/CheckToolbox.bs
My question is: how do I have to organize the data to make a tar file with the specified files using bash?
Or does someone have a better idea?
No need for a complicated script; use tar's -T option. Every file listed in the given file will be added to the tar archive:
-T, --files-from FILE
get names to extract or create from FILE
So your script becomes:
#!/bin/bash
tar -cvpf something.tar -T listoffiles.txt
The listoffiles.txt format is super easy: one file per line. You might want to use full paths to ensure you get the right files:
./CheckTools/CheckMesh.bs
./Configs/CheckMesh.ini
./Toolboxes/CommonToolbox.bs
./Toolboxes/CheckToolbox.bs
You can add tar commands to the script as needed, or you could loop over several list files, as sketched below; from that point on, your imagination is the limit!
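For example, a minimal sketch of that loop, assuming you name your lists *.list (a naming convention of my own, not from the question):
#!/bin/bash
# for each foo.list in the current directory, create foo.tar
# containing exactly the files named in it
for list in *.list; do
    tar -cvpf "${list%.list}.tar" -T "$list"
done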
I'm using the 7z command in a bash script to create a 7z archive for backup purposes. My script also checks whether this newly created 7z archive already exists in my backup folder, and if it does, I run md5sum to see whether the content differs. So if the archive file doesn't exist yet, or the md5sum differs from the previous one, I copy it to my backup folder. I tried a simple example to test the script, but the problem is that I sometimes get a different md5sum for the same folder I am compressing. Why is that? Is there any other reliable way of checking whether file content differs? The commands are simple:
SourceFolder="/home/user/Documents/"
for file in "$SourceFolder"*
do
    localfile=${file##*/}
    7z a -t7z "$SourceFolder${localfile}.7z" "$file"
    md5value=$(md5sum "$SourceFolder${localfile}.7z" | cut -d ' ' -f 1)
    # ...copying of files goes on from here...
done
The reliable way to check if two different losslessly compressed files have identical contents is to expand their contents and compare those (e.g. using md5sum). Comparing the compressed files is going to end badly sooner or later, regardless of which compression scheme you use.
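As a minimal sketch, assuming your 7z build supports extraction to stdout with -so (old.7z and new.7z are placeholders), you would hash the extracted data rather than the archive files:
# compare the contents of two 7z archives, not the archive files themselves
old_md5=$(7z x -so old.7z | md5sum | cut -d ' ' -f 1)
new_md5=$(7z x -so new.7z | md5sum | cut -d ' ' -f 1)
[ "$old_md5" = "$new_md5" ] && echo "contents identical"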
I've partially solved this. It looks like it matters whether or not you specify the full path to the folder you are compressing: the resulting file is not the same. This affects both 7z and tar. I mean like this:
value1=$(tar -c /tmp/at-spi2/|md5sum|cut -d ' ' -f 1)
value2=$(tar -c at-spi2/|md5sum|cut -d ' ' -f 1)
So obviously I'm doing this wrong. Is there a switch for 7z and tar which would remove the absolute path?
Is it possible to use a wildcard in scp?
I am trying to achieve:
loop
{
substitue_host (scp path/file.jar user@host:path1/foo*/path2/jar/)
}
I keep on getting "scp: ambiguous target"
Actually I am calling an API with source and dest that uses scp underneath and loops over different hosts to put the files.
Thanks!
In general, yes, it is certainly possible to use a wildcard in scp.
But, in your scp command, the second argument is the target, the first argument is the source. You certainly cannot copy a source into multiple targets.
If you were trying to copy multiple jars, for example, then the following would certainly work:
scp path/*.jar user@host:path2/jar/
"ambigious target" in this case is specifically complaining that the wildcard you're using results in multiple possible target directories on the #host system.
--- EDIT:
If you want to copy to multiple directories on a remote system and have to determine them dynamically, a script like the following should work:
dir_list=$(ssh user@host ls -d '/path1/foo*/path2/jar/')
for dir in $dir_list; do
    scp path/file.jar "user@host:$dir"
done
The dir_list variable holds the results of running ls on the remote system. The -d is so that you get the directory names, not their contents. The single quotes ensure that the wildcard is expanded on the remote system, not on the local one.
And then you'll loop through each dir to do the remote copy into that directory.
(All this is ksh syntax, btw.)