Bash shell: copy files from one location to another using break and continue

I have a small script to copy all the files from one directory (SRC) to another directory (DES). The script below runs perfectly.
#!/bin/bash
SRC="/home/user/dir1"
DES="/home/user/dir2/"
for file in "$SRC"/*
do
  if [ -f "$file" ]
  then
    cp "$file" "$DES"
    echo "$file -----> file copied"
  fi
done
Now, while copying files from one directory to the other, how do I skip a file when a file with the same name already exists in the DES directory, and continue copying the remaining files from source to destination as usual?
How do I use break and continue in the loop to perform this action?
Thanks,

I recommend using rsync:
src="/home/user/dir1/"
dst="/home/user/dir2/"
rsync -rav --ignore-existing "${src}" "${dst}"
The switch --ignore-existing tells rsync to skip files which exist at the destination.
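To answer the break/continue part of your question: break isn't needed here, because you don't want to stop the whole loop; continue is enough. A minimal sketch based on your script (same SRC and DES paths as in your question):
#!/bin/bash
SRC="/home/user/dir1"
DES="/home/user/dir2"
for file in "$SRC"/*
do
  if [ -f "$file" ] && [ -e "$DES/$(basename "$file")" ]
  then
    echo "$(basename "$file") already exists -----> skipped"
    continue   # skip this file and move on to the next one
  fi
  if [ -f "$file" ]
  then
    cp "$file" "$DES"
    echo "$file -----> file copied"
  fi
done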

Why not just reduce the entire script to a one-liner?
cp -n /home/user/dir1/* /home/user/dir2/
The -n flag (--no-clobber) prevents cp from overwriting existing files.
If your real situation is more complicated, you can also take a look at rsync.
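If you want the same per-file feedback as the echo in your loop, both GNU and BSD cp also accept -v, which prints each copy while -n keeps skipping existing files:
cp -nv /home/user/dir1/* /home/user/dir2/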

Related

Extracting certain files from a tar archive on a remote ssh server

I am running numerous simulations on a remote server (via ssh). The outcomes of these simulations are stored as .tar archives in an archive directory on this remote server.
What I would like to do, is write a bash script which connects to the remote server via ssh and extracts the required output files from each .tar archive into separate folders on my local hard drive.
These folders should have the same name as the .tar file from which the files come (to give an example: if the output of simulation 1 is stored in the archive S1.tar on the remote server, I want all '.dat' and '.def' files within this .tar archive to be extracted to a directory S1 on my local drive).
For the extraction itself, I was trying:
for f in *.tar; do
  (
    mkdir "../${f%.tar}"
    tar -x -f "$f" -C "../${f%.tar}" "*.dat" "*.def"
  ) &
done
wait
Every .tar file is around 1 GB and there are a lot of them, so downloading everything takes too much time, which is why I only want to extract the necessary files (see the extensions in the code above).
Now the code works perfectly when I have the .tar files on my local drive. However, what I can't figure out is how I can do it without first having to download all the .tar archives from the server.
When I first connect to the remote server via ssh username@host, the terminal just connects to the server and the script stops there.
Btw, I am doing this in VS Code and running the script through the terminal on my MacBook.
I hope I have described it clear enough. Thanks for the help!
Stream the results of tar back with filenames via SSH
To get the data you wish to retrieve from .tar files, you'll need to pass the results of tar to a string of commands with the --to-command option. In the example below, we'll run three commands.
# Send the file's name back to your shell
echo $TAR_FILENAME
# Send the contents of the file back
cat /dev/stdin
# Send EOF (Ctrl+d) back (note: since we're already in a $'' we don't use the $ again)
echo '\004'
Once the information is captured in your shell, we can start to process the data. This is a three-step process:
1. Get the file's name. Note that, in this code, we aren't handling directories at all (we simply strip them away, i.e. dir/1.dat -> 1.dat). You could create directories for each file by splitting the name on the forward slashes / and iterating over each directory name, but that seems out of scope here.
2. Check for the EOF (end-of-file) marker.
3. Append the content to the file.
# Get the files via ssh and tar
files=$(ssh -n <user@server> $'tar -xf <tar-file> --wildcards \'*\' --to-command=$\'echo $TAR_FILENAME; cat /dev/stdin; echo \'\004\'\'')
# Keeps track of what state we're in (filename or content)
state="filename"
filename=""
# Each line is one of these:
# - file's name
# - file's data
# - EOF
while IFS= read -r line; do
  if [[ $state == "filename" ]]; then
    filename=${line/*\//}   # strip any leading directories
    touch "$filename"
    echo "Copying: $filename"
    state="content"
  elif [[ $state == "content" ]]; then
    # look for EOF (ctrl+d)
    if [[ $line == $'\004' ]]; then
      filename=""
      state="filename"
    else
      # append data to file
      echo "$line" >> <output-folder>/"$filename"
    fi
  fi
# Double quotes here are very important
done < <(echo -e "$files")
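One caveat: because the data is shuttled through read and echo line by line, this only works reliably for text files; binary .dat files could be corrupted in transit, or trip the \004 end-of-file marker early.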
Alternative: tar + scp
If the above example seems overly complex for what it's doing, it is. An alternative that touches the disk more and requires two separate ssh connections is to extract the files you need from your .tar file to a folder and scp that folder back to your workstation.
ssh -n <username>@<server> 'mkdir output/; tar -C output/ -xf <tar-file> --wildcards "*.dat" "*.def"'
scp -r <username>@<server>:output/ ./
The breakdown
First, we'll make a place to keep our outputted files. You can skip this if you already know the folder they'll be in.
mkdir output/
Then, we'll extract the matching files to this folder we created (if you don't want them to be in a different folder remove the -C output/ option).
tar -C output/ -xf <tar-file> --wildcards "*.dat" "*.def"
Lastly, now that we're running commands on our machine again, we can run scp to reconnect to the remote machine and pull the files back.
scp -r <username>@<server>:output/ ./
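A third option is to skip the intermediate output/ folder and stream the archive over ssh, letting your local tar pick out the matching members. A sketch, assuming a hypothetical archive path on the server; note the whole .tar still travels over the network, you just avoid storing it anywhere:
mkdir -p S1
ssh -n <username>@<server> 'cat path/to/S1.tar' | tar -x -C S1 -f - '*.dat' '*.def'
macOS's tar (bsdtar) treats the trailing arguments as match patterns by default; with GNU tar you would add --wildcards.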

How to check if a file is in a dir and then delete it and another file?

I'm now using Ubuntu, and increasingly using terminal.
I would like to delete files from Trash via command line.
So, I've gotta delete files from ~/.local/share/Trash/files dir.
All right, here's the question:
When I move some file to trash, it also creates a file_name.trashinfo file in ~/.local/share/Trash/info.
How could I automatically delete the corresponding .trashinfo file when I delete something in ../files?
You can use the following script to delete both files simultaneously. Save it in some file in the ~/.local/share/Trash directory, and then call bash <script.sh> <path-to-file-to-be-deleted-in-files-dir>.
A sample call to delete the file test if you named the script del.sh: bash del.sh files/test
#!/bin/bash
file=$1
if [ -e "$file" ]                 # check if file exists
then
  rm -rf "$file"                  # remove file
  base=$(basename "$file")
  rm -rf "info/$base.trashinfo"   # remove second file in info/<file>.trashinfo
  echo 'files deleted!'
fi
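The same pairing idea extends to emptying the trash entirely. A minimal sketch, assuming the standard ~/.local/share/Trash layout from your question:
#!/bin/bash
cd ~/.local/share/Trash || exit 1
for f in files/*; do
  base=$(basename "$f")
  rm -rf "$f" "info/$base.trashinfo"
done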

Shell Script to redirect to different directory and create a list file

src_dir="/export/home/destination"
list_file="client_list_file.txt"
file=".csv"
echo "src directory=$src_dir"
echo "list_file=$list_file"
echo "file=$file"
cd "$src_dir"
touch "$list_file"
ls *"$file" | sort > "$list_file"
if [ -s "$list_file" ]
then
    echo "List File is available, archiving now"
    tar -cvf mystuff.tar "$list_file"
else
    echo "List File is not available"
fi
The above script works fine: it creates a list file of all the .csv files and tars it.
However, I am trying to run it from a different directory, so the script should go to the destination directory, make a list file of all the .csv files in that directory, and archive the list file into a .tar.
I am not sure what to change.
There are a lot of tricks in filename handling. The one thing you should know is that file naming under POSIX sucks; commands like ls or find may not return the expected result (but 99% of the time they will). So here is what you have to do to truly get the list of files:
for file in "$src_dir"/*.csv; do
    basename "$file" >> "$src_dir/$list_file"
done
tar cvf "$src_dir/mystuff.tar" "$src_dir/$list_file"
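One refinement: if the directory happens to contain no .csv files at all, the unexpanded pattern is passed through literally and *.csv ends up in the list file. In bash you can guard against that with nullglob, which makes the loop body simply not run in that case:
shopt -s nullglob
for file in "$src_dir"/*.csv; do
    basename "$file" >> "$src_dir/$list_file"
done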
Maybe you should learn bash in a serious manner, and try to Google first before asking a question on SO next time.
http://www.gnu.org/software/bash/manual/html_node/index.html#SEC_Contents
http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO.html

Bash - Moving files from subdirectories

I am relatively new to bash scripting.
I need to create a script that will loop through a series of directories, go into subdirectories with a certain name, and then move their file contents into a common folder for all of the files.
My code so far is this:
#!/bin/bash
# used to gather usable pdb files
mkdir -p usable_pdbFiles
# loop through directories in "pdb" folder
for pdbDirectory in */
do
  # go into usable_* directory
  for innerDirectory in usable_*/
  do
    if [ -d "$innerDirectory" ] ; then
      for file in *.ent
      do
        mv $file ../../usable_pdbFiles
      done < $file
    fi
  done < $innerDirectory
done
exit 0
Currently I get
usable_Gather.sh: line 7: $innerDirectory: ambiguous redirect
when I try and run the script.
Any help would be appreciated!
The redirections < $innerDirectory and < $file are invalid and this is causing the problem. You don't need to use a loop for this, you can instead rely on the shell's filename expansion and use mv directly:
mkdir -p usable_pdbFiles
mv */usable_*/*.ent usable_pdbFiles
Bear in mind that this solution, and the loop based one that you are working on, will overwrite files with the same name in the destination directory.
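If you do want to keep a loop (say, to log each move), a corrected sketch of your script would drop the stray redirections and glob from the top level instead of cd-ing around:
#!/bin/bash
mkdir -p usable_pdbFiles
for innerDirectory in */usable_*/; do
  for file in "$innerDirectory"*.ent; do
    [ -e "$file" ] && mv "$file" usable_pdbFiles/ && echo "moved $file"
  done
done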

Simple Bash Script File Copy

I am having trouble with a simple grading script I am writing. I have a directory called HW5 containing a folder for each student in the class. From my current directory, which contains the HW5 folder, I would like to copy all files starting with the word mondial to each of the students' folders. My script runs but does not copy any of the files over. Any suggestions?
#!/bin/bash
for file in ./HW5; do
  if [ -d $file ]; then
    cp ./mondial.* ./$file;
  fi
done
Thanks,
The first loop was executing only once, with file equal to ./HW5. Add the star to actually select the files or directories inside it.
#!/bin/bash
for file in ./HW5/*; do
  if [ -d "$file" ]; then
    cp ./mondial.* ./"$file"
  fi
done
As suggested by Mark Reed, this can be simplified:
for file in ./HW5/*/; do
  cp ./mondial.* ./"$file"
done
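If it ever looks like nothing was copied again, cp's -v flag makes each copy visible:
for file in ./HW5/*/; do
  cp -v ./mondial.* ./"$file"
done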
