I've got a collection of hundreds of alphabetically sorted directories, each containing differently named files, and I want to copy these directories to another location using rsync.
I don't want to go over all the directories manually; instead I want to use rsync's --include option or a bash loop to iterate over them.
So far I've tried the bash script below, but without success.
for dir in {A..Z}; do
    echo "$dir"
    rsync --progress --include="$dir*" --exclude='*' -rt -e ssh username@192.168.1.123:/source/directory/ ~/target/directory/
done
Does anyone know what would be the correct way to go over the directories using rsync's --include option?
Update:
The bash script above was more to try out the loop to go over my directories and see what comes out. The command I actually wanted to use was this one:
for dir in /*; do
    rsync --progress --include="$dir*" --exclude='*' --bwlimit=2000 -rt -e ssh username@192.168.1.123:/source/directory/ ~/target/directory/
done
I know bash can do something like {A..Z}, but this doesn't seem to get me the result I want. I had already copied half of the alphabet of directories, so I was trying {F..Z} as the range.
Update
I've come up with the following script to run from my source directories location.
#!/bin/bash
time=$(date +"%Y-%m-%d %H:%M:%S")   # Timestamp for log output
dir='/source/directory/[P-Z]*'      # Glob matching directories whose names start with "P" through "Z"
printf '[%s] Transferring: %s\n' "$time" "$dir"
# $dir is deliberately left unquoted so the glob expands to the matching directories
rsync -trP -e 'ssh -p 123' --bwlimit=2000 $dir username@192.168.1.123:/target/directory
This will transfer all directories whose names start with "P" through "Z" from the source directory, over ssh using port 123.
This works for me as a shell script. I'm sure there are better ways to do this in a single command, but this is what I came up with to help myself out.
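For what it's worth, the same P-to-Z selection can be expressed in a single rsync command with filter rules instead of a shell glob. A sketch (untested, same host and paths as above):
rsync -trP -e 'ssh -p 123' --bwlimit=2000 --include='/[P-Z]*/' --include='/[P-Z]*/**' --exclude='*' /source/directory/ username@192.168.1.123:/target/directory
The first include matches the top-level directories, the second their contents, and the final exclude drops everything else.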
Sounds like you want recursive rsync. I'd go with:
rsync -r / --restOfYourRsyncArgs
That walks over every file/folder/subfolder in / (which could be A LOT, so consider excludes and/or a narrower source path) and uploads/downloads them. Set excludes for files and folders you don't want sent.
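For example, excludes could look like this (the paths and patterns here are only illustrations, not taken from the question):
rsync -r / --exclude='/proc' --exclude='/sys' --exclude='*.tmp' --restOfYourRsyncArgs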
Related
I am running numerous simulations on a remote server (via ssh). The outcomes of these simulations are stored as .tar archives in an archive directory on this remote server.
What I would like to do is write a bash script that connects to the remote server via ssh and extracts the required output files from each .tar archive into separate folders on my local hard drive.
These folders should have the same name as the .tar file the files come from (for example, if the output of simulation 1 is stored in the archive S1.tar on the remote server, I want all .dat and .def files within that archive extracted to a directory S1 on my local drive).
For the extraction itself, I was trying:
for f in *.tar; do
    (
        mkdir "../${f%.tar}"
        tar -x -f "$f" -C "../${f%.tar}" "*.dat" "*.def"
    ) &   # run each extraction in a background subshell
done
wait      # wait for all extractions to finish
Every .tar file is around 1 GB and there are a lot of them, so downloading everything takes too much time, which is why I only want to extract the necessary files (see the extensions in the code above).
Now the code works perfectly when I have the .tar files on my local drive. What I can't figure out is how to do it without first downloading all the .tar archives from the server.
When I connect to the remote server via ssh username@host, the script just stops and I end up in an interactive session on the server.
Btw, I am doing this in VS Code and running the script through the terminal on my MacBook.
I hope I have described it clear enough. Thanks for the help!
Stream the results of tar back with filenames via SSH
To get the data you wish to retrieve out of the .tar files, you can pass each extracted file through a set of commands with tar's --to-command option. In the example below, we'll run three commands per file.
# Send the files name back to your shell
echo $TAR_FILENAME
# Send the contents of the file back
cat /dev/stdin
# Send EOF (Ctrl+d) back (note: since we're already in a $'' we don't use the $ again)
echo '\004'
Once the information is captured in your shell, we can start to process the data. This is a three-step process.
Get the file's name
note that this code doesn't handle directories at all; it simply strips them away (i.e. dir/1.dat -> 1.dat)
you could create directories for each file by splitting the path on the forward slashes and iterating over the components, but that seems out of scope here
Check for the EOF (end-of-file)
Add content to file
# Get the files via ssh and tar
files=$(ssh -n <user@server> $'tar -xf <tar-file> --wildcards \'*\' --to-command=$\'echo $TAR_FILENAME; cat /dev/stdin; echo \'\004\'\'')
# Keeps track of what state we're in (filename or content)
state="filename"
filename=""
# Each line is one of these:
# - file's name
# - file's data
# - EOF
while IFS= read -r line; do
if [[ $state == "filename" ]]; then
filename=${line/*\//}
touch "$filename"
echo "Copying: $filename"
state="content"
elif [[ $state == "content" ]]; then
# look for EOF (ctrl+d)
if [[ $line == $'\004' ]]; then
filename=""
state="filename"
else
# append data to file
echo "$line" >> <output-folder>/"$filename"
fi
fi
# Double quotes here are very important
done < <(echo -e "$files")
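One caveat worth noting: the loop above processes the stream line by line with read and echo, so it is only reliable for text files. Binary .dat files would be corrupted by the newline handling, in which case the alternative below is the safer route.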
Alternative: tar + scp
If the above example seems overly complex for what it's doing, it is. An alternative that touches the disk more and requires two separate ssh connections would be to extract the files you need from your .tar file to a folder and then scp that folder back to your workstation.
ssh -n <username>@<server> 'mkdir output/; tar -C output/ -xf <tar-file> --wildcards "*.dat" "*.def"'
scp -r <username>@<server>:output/ ./
The breakdown
First, we'll make a place to keep our outputted files. You can skip this if you already know the folder they'll be in.
mkdir output/
Then, we'll extract the matching files to this folder we created (if you don't want them to be in a different folder remove the -C output/ option).
tar -C output/ -xf <tar-file> --wildcards "*.dat" "*.def"
Lastly, now that we're running commands on our machine again, we can run scp to reconnect to the remote machine and pull the files back.
scp -r <username>@<server>:output/ ./
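If the staging folder shouldn't be left behind on the server, a third command (same placeholders as above) can remove it afterwards:
ssh <username>@<server> 'rm -rf output/'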
I'm implementing a bash agent script to pull files from a remote server over SFTP.
The script must:
connect via SFTP
list the files
loop over the files found
get each file and copy it to the agent side
delete each file after it has been copied
Here is the script:
#!/bin/bash
SFTP_CONNECTION="sftp -oIdentityFile=/home/account_xxx/.ssh/service_ssh user@host"
DEST_DATA=/tmp/test/data/
# GET list file by ls command ###############
$SFTP_CONNECTION
$LIST_FILES_DATA_OSM1 = $("ls fromvan/test/data/test_1")
echo $LIST_FILES_DATA_OSM1
for file in "${LIST_FILES_DATA_OSM1[@]}"
do
$SFTP_CONNECTION get $file $DEST_DATA
$SFTP_CONNECTION rm $file
done
I tried the script, but it seems that the connection and the command execution (ls) happen separately.
How can I make the commands run sequentially, as described above?
Screenshots (not reproduced here) showed an invalid find command, ssh seemingly unavailable, and the result of using rsync to fetch the files.
Thanks
First of all, I would recommend the following syntax changes:
#!/bin/bash
sftp_connection() {
    sftp -oIdentityFile=/home/account_xxx/.ssh/service_ssh user@host "$@";
}
Dest_Data=/tmp/test/data/
# GET list of files by ls command ###############
sftp_connection
List_Files_D_OSM1=$(ls fromvan/test/data/test_1)
echo "$List_Files_D_OSM1"
for file in "${List_Files_D_OSM1[@]}"
do
    sftp_connection get "$file" "$Dest_Data"
    sftp_connection rm "$file"
done
Quote $file and $List_Files_D_OSM1 to prevent globbing and word splitting.
Assignments can't start with a $; otherwise bash will try to execute List_Files_D_OSM1 and complain with a command not found.
No whitespace around the = in assignments like List_Files_D_OSM1 = $("ls fromvan/test/data/test_1").
You can use ShellCheck to catch these kinds of errors.
Having said that, it is in general not a good idea to use ls in such way.
What you can use instead is something like find. For example:
find . -type d -exec echo '{}' \;
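To address the original problem of the connection and the commands running separately: sftp reads commands from standard input (or from a file via -b), so the whole get-and-delete sequence can run over a single connection. A minimal sketch, reusing the identity file and paths from the question (untested against your server):
sftp -oIdentityFile=/home/account_xxx/.ssh/service_ssh user@host <<'EOF'
get fromvan/test/data/test_1/* /tmp/test/data/
rm fromvan/test/data/test_1/*
EOF
Note the small race: a file that arrives between the get and the rm would be deleted without ever having been copied.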
Use a different client. lftp supports sftp as a transport, and has a subcommand for mirroring which will do the work of listing the remote directory and iterating over files for you.
Assuming your ~/.ssh/config contains an entry like:
Host myhost
IdentityFile /home/account_xxx/.ssh/service_ssh
...you can run:
lftp -e 'mirror fromvan/test/data/test_1 /tmp/test/data' sftp://user@myhost
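Since the original script also deletes each file after copying it, it's worth knowing that lftp's mirror has a --Remove-source-files option which does exactly that:
lftp -e 'mirror --Remove-source-files fromvan/test/data/test_1 /tmp/test/data' sftp://user@myhost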
Source path /var/www/html/20170101/*.jpg
Destination path /Backup/html/20170101/
There are a lot of source paths like this: directories named by date, such as 20170101 or 20171231, each containing many images. How do I write an rsync shell script for this? Right now I'm running, for example:
rsync -avzh --progress /var/www/html/20170101/ /Backup/html/20170101/
I want to write a script that makes the date part a variable, so that when one day's directory is finished, it moves on to the next day's directory and continues with rsync.
Is this what you want? Put all the directory names in a text file, in the format below:
cat dirnamefile
20170101
20171231
Now write a shell script to read the file and substitute the directory each time:
i=0
while read -r line; do
    arr[$i]=$line
    i=$((i + 1))
done < dirnamefile

for var in "${arr[@]}"; do
    rsync -avzh --progress "/var/www/html/$var/" "/Backup/html/$var/"
done
The above script reads in the list of directory names and then loops through them, running rsync on each one.
Is this what you are looking for?
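If the intermediate array isn't needed, the same thing can be done in a single pass (same file and paths as above):
while IFS= read -r var; do
    rsync -avzh --progress "/var/www/html/$var/" "/Backup/html/$var/"
done < dirnamefile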
I have a script which syncs a few files with a remote host. The commands I want to issue are of the form
rsync -avz ~/.alias user@y:~/.alias
My script looks like this:
files=(~/.alias ~/.vimrc)
for file in "${files[#]}"; do
rsync -avz "${file}" "user#server:${file}"
done
But the ~ always gets expanded and in fact I invoke the command
rsync -avz /home/user/.alias user@server:/home/user/.alias
instead of the one above. But the path to the home directory is not necessarily the same locally as on the server. I could use e.g. sed to replace this part, but that gets extremely tedious with several servers that all have different paths. Is there a way to keep ~ from being expanded while the script runs, but still have rsync understand that it refers to the home directory?
files=(~/.alias ~/.vimrc)
The paths are already expanded at this point. If you don't want that, escape them or quote them.
files=('~/.alias' \~/.vimrc)
Of course, then you can't use them, because you prevented the shell from expanding '~':
~/.alias: No such file or directory
You can expand the tilde later in the command using eval (always try to avoid eval though!) or a simple substitution:
for file in "${files[@]}"; do
    rsync -avz "${file/#\~/$HOME}" "user@server:${file}"
done
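A quick illustration of what the substitution produces (assuming $HOME is /home/user):
file='~/.alias'
echo "${file/#\~/$HOME}"   # /home/user/.alias  -- local side, expanded
echo "$file"               # ~/.alias           -- remote side, left for the server to expand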
You don't need to loop, you can just do:
rsync -avz ~/.alias/* 'user@y:~/.alias/'
EDIT: You can do:
files=(.alias .vimrc)
for file in "${files[#]}"; do
rsync -avz ~/"${file}" 'user@server:~/'"${file}"
done
I'm sure there is a simple way to do this, but I am not finding it. What I want to do is execute a series of commands using lftp, and I want to avoid repeatedly connecting to the server if possible.
Basically, I have a file with a list full of ftp directories on the server. I want to connect to the server then execute something like the following: (assume at this point that I have already converted the text file into an array of lines using cat)
for f in "${myarray[@]}"
do
cd $f;
nlist >> $f.txt;
cd ..;
done
Of course that doesn't work, but I have to imagine there is a simple solution to what I am trying to accomplish.
I am quite inexperienced when it comes to shell scripting. Any suggestions?
First build a string that contains the list of lftp commands. Then call lftp, passing the command on its standard input. Lftp itself can redirect the output of a command to a file, with a syntax that resembles the shell.
list_commands=""
for dir in "${myarray[@]}"; do
list_commands="$list_commands
cd \"$dir\"
nlist >\"$dir.txt\"
cd .."
done
lftp <<EOF
open -u $username,$password $site
$list_commands
bye
EOF
Note that I assume that the directory names don't contain backslashes, single quotes or globbing characters. Add proper escaping if necessary.
By the way, to read lines from a file, see Why is while IFS= read used so often, instead of IFS=; while read..?. You might prefer to combine reading from the list of directories and building the commands:
list_commands=""
while IFS= read -r dir; do
list_commands="$list_commands
cd \"$dir\"
nlist >\"$dir.txt\"
cd .."
done <directory_list.txt
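The resulting $list_commands string is then fed to the same lftp <<EOF heredoc shown above.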