what is the purpose of the command rsync -rvzh - bash

I'm trying to understand what these two commands are doing:
config=$(date +%s)
rsync -rvzh $1 /var/lib/tomcat7/webapps/ROOT/DataMining/target > /var/lib/tomcat7/webapps/ROOT/DataMining/$config
These lines appear in a bigger script, script.sh, which looks like this:
#!/bin/bash
config=$(date +%s)
rsync -rvzh $1 /var/lib/tomcat7/webapps/ROOT/DataMining/target > /var/lib/tomcat7/webapps/ROOT/DataMining/$config
countC=0
countS=`wc -l /var/lib/tomcat7/webapps/ROOT/DataMining/$config | sed 's/^\([0-9]*\).*$/\1/'`
let countS--
let countS--
let countS--
while read LINEC # read line
do
    if [ "$countC" -gt 0 ]; then
        if [ "$countC" -lt "$countS" ]; then
            FILENAME="/var/lib/tomcat7/webapps/ROOT/DataMining/target/"$LINEC
            count=0
            countW=0
            while read LINE
            do
                for word in $LINE; do
                    echo "INSERT INTO data_mining.data (word, line, numWordLine, file) VALUES ('$word', '$count', '$countW', '$FILENAME');" >> /var/lib/tomcat7/webapps/ROOT/DataMining/query
                    mysql -u root -Alaba1515 < /var/lib/tomcat7/webapps/ROOT/DataMining/query
                    echo > /var/lib/tomcat7/webapps/ROOT/DataMining/query
                    let countW++
                done
                countW=0
                let count++
            done < $FILENAME
            count=0
            rm -f /var/lib/tomcat7/webapps/ROOT/DataMining/query
            rm -f /var/lib/tomcat7/webapps/ROOT/DataMining/$config
        fi
    fi
    let countC++
done < /var/lib/tomcat7/webapps/ROOT/DataMining/$config # finish while
I was able to find lots of documentation about rsync and what it does, but I don't understand what the rest of the command does. Any help, please?

The first command assigns the current time (in seconds since epoch) to the shell variable config. For example:
$ config=$(date +%s)
$ echo $config
1446506996
rsync is a file copying utility. The second command thus makes a backup copy of the directory passed as the script's first argument (referred to as $1). The backup copy is placed in /var/lib/tomcat7/webapps/ROOT/DataMining/target. A log file of what was copied is saved in /var/lib/tomcat7/webapps/ROOT/DataMining/$config:
rsync -rvzh $1 /var/lib/tomcat7/webapps/ROOT/DataMining/target > /var/lib/tomcat7/webapps/ROOT/DataMining/$config
The rsync options mean:
-r tells rsync to copy files recursively, descending into subdirectories.
-v tells it to be verbose, so that it shows what is copied.
-z tells it to compress files during their transfer from one location to the other.
-h tells it to show any numbers in the output in human-readable format.
Note that because $1 is not inside double-quotes, this script will fail if the name of directory $1 contains whitespace.
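A minimal hardened sketch of that line, with the two variables double-quoted so that a directory name containing spaces survives:
rsync -rvzh "$1" /var/lib/tomcat7/webapps/ROOT/DataMining/target > /var/lib/tomcat7/webapps/ROOT/DataMining/"$config"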

Related

Send files to folders using bash script

I want to copy the functionality of a Windows program called files2folder, which basically lets you right-click a bunch of files and send them to their own individual folders.
So
1.mkv 2.png 3.doc
gets put into directories called
1 2 3
I have got it to work using this script, but it sometimes throws errors while still accomplishing what I want:
#!/bin/bash
ls > list.txt
sed -i '/list.txt/d' ./list.txt
sed 's/.$//;s/.$//;s/.$//;s/.$//' ./list.txt > list2.txt
for i in $(cat list2.txt); do
    mkdir $i
    mv $i.* ./$i
done
rm *.txt
Is there a better way of doing this? Thanks
EDIT: My script failed with real-world filenames, as they contained more than one ".", so I had to use a different sed command to make it work. This is an example filename I'm working with:
Captain.America.The.First.Avenger.2011.INTERNAL.2160p.UHD.BluRay.X265-IAMABLE
I guess you are getting errors on . and .., so change your call to ls to:
ls -A > list.txt
-A List all entries except for . and .. (always set for the super-user).
You don't have to create a file to achieve the same result, just assign the output of your ls command to a variable. Doing something like this:
files=`ls -A`
for file in $files; do
echo $file
done
You can also check if the resource is a file or directory like this:
files=`ls -A`
for res in $files; do
    if [[ -d $res ]]; then
        echo "$res is a folder"
    fi
done
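As a side note, looping over the output of ls breaks on names containing spaces; letting the shell expand a glob directly avoids that. A minimal sketch:
for res in *; do
    if [[ -d $res ]]; then
        echo "$res is a folder"
    fi
done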
This script will do what you ask for:
files2folder:
#!/usr/bin/env sh
for file; do
    dir="${file%.*}"
    { ! [ -f "$file" ] || [ "$file" = "$dir" ]; } && continue
    echo mkdir -p -- "$dir"
    echo mv -n -- "$file" "$dir/"
done
Example directory/files structure:
ls -1 dir/*.jar
dir/paper-279.jar
dir/paper.jar
Running the script above:
chmod +x ./files2folder
./files2folder dir/*.jar
Output:
mkdir -p -- dir/paper-279
mv -n -- dir/paper-279.jar dir/paper-279/
mkdir -p -- dir/paper
mv -n -- dir/paper.jar dir/paper/
To make it actually create the directories and move the files, remove both echo prefixes.

why is the 'ls' command printing the directory content multiple times

I have the following shell script, in which I want to check specific directory contents on remote machines and print them to a file.
file=serverList.csv
n=0
while [ $n -le 2 ]
do
    while IFS=: read -r f1 f2
    do
        # echo line is stored in $line
        if echo $f1 | grep -q "xx.xx.xxx";
        then
            ssh user@$f1 ls path/*war_* > path/$f1.txt < /dev/null; ls path/*zip_* >> path/$f1.txt < /dev/null;
            ssh user@$f1 ls -d /apps/jetty*_* >> path/$f1.txt < /dev/null;
        fi
    done < "$file"
    sleep 15
    n=$(( n+1 ))
done
I am running this script from a cron job every 2 minutes, as follows:
*/2 * * * * /path/myscript.sh
but somehow I am ending up with the following output file:
/apps/jetty/webapps_wars/test_new.war
path/ReleaseTest.static.zip_2020-08-05
path/ReleaseTest.static.zip_2020-08-05
path/ReleaseTest.static.zip_2020-08-05
path/jetty_xx.xx_2020-08-05
path/jetty_new
path/jetty_xx.xx_2020-08-05
path/jetty_new
I am not sure why I am getting the files in the list twice, sometimes 3 times. But when I execute the script directly from PuTTY, it works fine. What do I need to change in order to correct this script?
Example:
~$ cd tmp
~/tmp$ mkdir test
~/tmp$ cd !$
cd test
~/tmp/test$ mkdir -p apps/jetty/webapp_wars/ && touch apps/jetty/webapp_wars/test_new.war
~/tmp/test$ mkdir path
~/tmp/test$ touch path/{ReleaseTest.static.zip_2020-08-05,jetty_xx.xx_2020-08-05,jetty_new}
~/tmp/test$ cd ..
~/tmp$ listpath=$(find test/path \( -name "*2020-08-05" -o -name "*new" \) )
~/tmp$ listapps=$(find test/apps/ -name "*war" )
~/tmp$ echo ${listpath[@]}" "${listapps[@]} | tr " " "\n" | sort > resultfile
~/tmp$
~/tmp$ cat resultfile
test/apps/jetty/webapp_wars/test_new.war
test/path/jetty_new
test/path/jetty_xx.xx_2020-08-05
test/path/ReleaseTest.static.zip_2020-08-05
~/tmp$ rm -rf test/ && unset listapps && unset listpath && rm resultfile
~/tmp$
This way you get only one result for each pattern you are looking for in your if...then...else block of code.
Just adapt the ssh ... find commands and take care of quotes and parentheses. This is the easiest solution, since this way you do not have to rewrite the script from scratch. And be careful with local vs. remote variables if you use them.
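Adapted to the remote side, that could look roughly like this (a sketch only; user, the $f1 host field and the path names are the question's placeholders):
ssh user@$f1 'find path /apps -maxdepth 1 \( -name "*war_*" -o -name "*zip_*" -o -name "jetty*_*" \)' | sort -u > path/"$f1".txt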
You really should not use ls but the fundamental problem is probably that three separate commands with three separate wildcards could match the same file three times.
Also, one of your commands is executed locally (you forgot to put ssh etc in front of the second one), so if the wildcard matches on your local computer, that would produce a result which doesn't reflect the situation on the remote server.
Try this refactoring.
file=serverList.csv
n=0
while [ $n -le 2 ]
do
    while IFS=: read -r f1 f2
    do
        # echo line is stored in $line <- XXX this is not true
        if echo "$f1" | grep -q "xx.xx.xxx";
        then
            ssh user@$f1 "printf '%s\n' path/*war_* path/*zip_* /apps/jetty*_*" | sort -u >path/"$f1".txt < /dev/null
        fi
    done < "$file"
    sleep 15
    n=$(( n+1 ))
done
The sort gets rid of any duplicates. This assumes none of your file names contain newlines; if they do, you'd need to use something which robustly handles them (try printf '%s\0' and sort -z but these are not portable).
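For example, the NUL-separated variant could look like this (a sketch; it requires GNU sort, and the final tr converts the NULs back to newlines for the output file, which reintroduces newline ambiguity only at that last step):
ssh user@$f1 "printf '%s\0' path/*war_* path/*zip_* /apps/jetty*_*" | sort -z -u | tr '\0' '\n' > path/"$f1".txt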
ls would definitely also accept three different wildcards, but as has been explained many times elsewhere, you really never want to parse the output of ls in scripts.

How to access a target directory using bash scripting

I am relatively new to shell scripting. I am writing a script to compress all the files in the current directory and in a target directory. I have had success compressing the files of the current directory, but I'm unable to write a script that compresses files in a target directory. Can anyone guide me?
I want to do something like this
% myCompress -t /home/users/bigFoot/ pdf ppt jpg
Next time, try to indent your code (it will make it easier to answer):
#!/bin/bash
if [[ $# == 0 ]]; then
    echo "This shell script compresses files with specific extensions"
    echo "Call Syntax: compress <extension_list>"
    exit
fi
for ext in $*; do
    for file in ls *.$ext; do
        gzip -k $file
    done
done
Mistakes made
1) $* expands to all arguments passed to the command, so the -t flag and the path also end up being iterated as $ext values.
2) ls *.$ext is read in the loop as two plain words, "ls" and "*.$ext"; it would have to be written as $(ls *.$ext) to actually execute the ls command.
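Even $(ls *.$ext) would still word-split the results, though; a safer sketch is to let the shell expand the glob directly:
for file in *."$ext"; do
    [ -e "$file" ] || continue   # skip if the glob matched nothing
    gzip -k "$file"              # -k keeps the original, as in the script above
done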
My script for your request
#!/bin/bash
script_name=`basename "$0"`
if [[ $# == 0 ]]; then
    echo "This shell script compresses files with specific extensions"
    echo "Call Syntax: $script_name <directories_list> <extension_list>"
    exit
fi
# sort the arguments into directories and extensions
path=". "
file_type=""
for check_type in $* ; do
    if [[ -d $check_type ]]; then
        path=$path$check_type" "
    else
        file_type=$file_type"*."$check_type" "
    fi
done
echo paths to gzip: $path
echo file types to check: "$file_type"
for x in $path; do
    cd $x
    for file in $(ls $file_type); do
        gzip $file
    done
    cd -
done
Explanation
1) basename "$0" - gets the script's name; this is more generic in case you later rename the script.
2) path=". " - a variable holding a string of all directories to be compressed; your request is to run it also on the current directory ". ".
file_type="" - a variable holding a string of all extensions to be compressed in the $path directories.
3) The loop runs over all input args and concatenates directory names onto the $path string and the remaining arguments, as file types, onto $file_type.
4) For each of the directories passed to the script:
i. cd $x - enter the directory
ii. gzip - compress all files with the given extensions
iii. cd - - go back to the previous directory
Check gzip
I'm not familiar with the gzip command; check that your gzip has the -k flag (it keeps the input files instead of deleting them).
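One way to check up front, assuming a GNU-style gzip whose --help output mentions its long options (an assumption, not guaranteed on every platform):
if gzip --help 2>&1 | grep -q -- '--keep'; then
    echo "this gzip supports -k (--keep)"
else
    echo "warning: no -k support; gzip will delete the input files" >&2
fi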

Change directory for saving a file and return to old directory

I wrote a very small and basic script which compares two files and writes all matching lines into a file.
I now want to ensure that, no matter from which directory/working directory you run the bash script, the file is stored in the directory where the script is located.
#!/bin/bash
typeset -i count=1
typeset -i useable_counter=1
file="Fundstellen.txt"
curDir=`pwd`
wantedDir="/Users/Stephan/Documents/Schule/SYT/Skripting/bin/Uebungen"
echo `pushd ${wantedDir}`
if [ -e $file ]; then
    echo `chmod 777 ${file}`
    echo `rm ${file}`
fi
echo `touch ${file}`
while read pass; do
    pass_nr=`echo $pass | cut -d ":" -f 3`
    while read groups; do
        group_nr=`echo $groups | cut -d ":" -f 3`
        if [ "$pass_nr" = "$group_nr" ]; then
            if [ $count -gt 15 ]; then
                echo "#$useable_counter: $pass in $groups" >> $file
                useable_counter=$useable_counter+1
            fi
            count=$count+1
        fi
    done < /etc/group
done < /etc/passwd
echo `chmod 444 $file`
echo `popd`
echo "Writing done!"
echo "Writing done!"
That's my script, with the pushd command to get to the directory in which the script is located and popd to return.
But still, the output file is created in the directory/working directory from where the script is launched.
What do I have to change so it will work? I already tried using a plain cd to change the directory; that's why the variable curDir stores the starting directory.
By putting pushd into backticks, you're running it in a subshell. No subshell can change the current working directory of its parent shell.
Just call
pushd "$wantedDir"
directly, and same with popd.
Your script is a hopeless mess. All you need to produce output in the named file is
command >/Users/Stephan/Documents/Schule/SYT/Skripting/bin/Uebungen/Fundstellen.txt
Generally, very few scripts need to explicitly cd and fewer still would need to pushd and popd -- these commands are almost exclusively for interactive use.
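If the goal is simply to write next to the script itself, the usual idiom is to compute the script's directory once, without any pushd/popd (a sketch, assuming bash):
script_dir=$(cd -- "$(dirname -- "$0")" && pwd)   # directory containing this script
file="$script_dir/Fundstellen.txt"
Every later redirection to "$file" then lands in the script's directory, regardless of where the script was started.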
The loop where you read all of the group file for every entry in the passwd file is extremely inefficient, especially when the purpose of the inner loop seems to be to find only a small subset of the records in the file. Very often, when you see while read, you want Awk instead. Here is a simple framework for doing that.
awk -F : 'NR==FNR { ++p[$3]; next }
FNR > 15 && $3 in p { print "#" ++i ": " $3 " in " $0 }' /etc/passwd /etc/group
It's not clear what the 15 is supposed to accomplish. Is it a bug in your script, or is the intent to only skip the first 15 lines on the first iteration?

bash call script with variable

What I want to achieve is the following :
I want the subtitles for my TV Show downloaded automatically.
The script "getSubtitle.sh" is ran as soon as the show is downloaded, but it can happen that no subtitle are released yet.
So what I am doing to counter this:
Creating a file each time "getSubtitle.sh" is run. It contains the location of the script with its arguments, for example:
/Users/theo/logSubtitle/getSubtitle.sh "The Walking Dead - 5x10 - Them.mp4" "The.Walking.Dead.S05E10.480p.HDTV.H264.mp4" "/Volumes/Window HD/Série/The Walking Dead"
If a subtitle has been found, this file will contain only this line; if no subtitle has been found, the file will have 2 lines (the first one being "no subtitle downloaded" and the second one being the path to the script, as explained above).
Now, once I get this, I'm planning to run a cron everyday that will do the following :
Remove all files that have only 1 line (subtitle found), and execute the script again for the remaining files. Here is the full script:
cd ~/logSubtitle/waiting/
for f in *
do
    nbligne=$(wc -l $f | cut -c 8)
    if [ "$nbligne" = "1" ]
    then
        rm $f
    else
        command=$(sed -n "2 p" $f)
        sh $command 3>&1 1>&2 2>&3 | grep down > $f ; echo $command >> $f
    fi
done
Unfortunately this is not working; I have the feeling that the script is not called.
When I replace $command with the line from the text file, it works.
I am sure that $command matches the line, because of the "echo $command >> $f" at the end of my script.
So I really don't get what I am missing here. Any ideas?
Thanks.
I'm not sure what you're trying to achieve with the cut -c 8 part in wc -l $f | cut -c 8. cut -c 8 will select the 8th character of the output of wc -l.
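A more robust way to get just the count is to redirect the file into wc, so that no filename appears in its output at all:
nbligne=$(wc -l < "$f")   # prints only the number (possibly padded with blanks)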
A suggestion: to check whether your file contains one or two lines (and since you'll need the content of the second line anyway, if there is one), use mapfile. This will slurp the file into an array, one line per field. You can use the option -n 2 to read at most 2 lines. This will be much more efficient, safer and nicer than your solution:
mapfile -t -n 2 ary < file
Then:
if ((${#ary[@]}==1)); then
    printf 'File contains one line only: %s\n' "${ary[0]}"
elif ((${#ary[@]}==2)); then
    printf 'File contains (at least) two lines:\n'
    printf ' %s\n' "${ary[@]}"
else
    printf >&2 'Error, no lines found in file\n'
fi
Another suggestion: use more quotes!
With this, a better way to write your script:
#!/bin/bash
dir=$HOME/logSubtitle/waiting/
shopt -s nullglob
for f in "$dir"/*; do
mapfile -t -n 2 ary < "$f"
if ((${#ary[#]}==1)); then
rm -- "$f" || printf >&2 "Error, can't remove file %s\n" "$f"
elif ((${#ary[#]}==2)); then
{ sh -c "${ary[1]}" 3>&1 1>&2 2>&3 | grep down; echo "${ary[1]}"; } > "$f"
else
printf >&2 'Error, file %s contains no lines\n' "$f"
fi
done
After the done keyword you can even add the redirection 2>> logfile to a log file if you wish. Make sure the cron job is run with your user: check crontab -l and, if needed, edit it with crontab -e.
Use eval instead of sh. The reason it works with eval and not sh is the number of passes used to evaluate the string. sh treats the words of $command literally, so the quoting embedded in it is never re-parsed, while eval evaluates the string first and then executes the result.
Briefly explained.
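A minimal illustration of the difference, reusing the command line from the question:
cmd='/Users/theo/logSubtitle/getSubtitle.sh "The Walking Dead - 5x10 - Them.mp4"'
sh $cmd        # word-splitting: the quoted filename arrives as several mangled arguments
eval "$cmd"    # the string is re-parsed, so the quotes are honored and the filename stays intact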
