Change directory for saving a file and return to old directory - bash

I wrote a very small, basic script that compares two files and writes all matching lines to an output file.
I now want to make sure that, no matter which working directory the bash script is run from, the file is stored in the directory where the script is located.
#! /bin/bash
typeset -i count=1
typeset -i useable_counter=1
file="Fundstellen.txt"
curDir=`pwd`
wantedDir="/Users/Stephan/Documents/Schule/SYT/Skripting/bin/Uebungen"
echo `pushd ${wantedDir}`
if [ -e $file ]; then
    echo `chmod 777 ${file}`
    echo `rm ${file}`
fi
echo `touch ${file}`
while read pass; do
    pass_nr=`echo $pass | cut -d ":" -f 3`
    while read groups; do
        group_nr=`echo $groups | cut -d ":" -f 3`
        if [ "$pass_nr" = "$group_nr" ]; then
            if [ $count -gt 15 ]; then
                echo "#$useable_counter: $pass in $groups" >> $file
                useable_counter=$useable_counter+1
            fi
            count=$count+1
        fi
    done < /etc/group
done < /etc/passwd
echo `chmod 444 $file`
echo `popd`
echo "Writing done!"
That's my script: pushd is supposed to change into the directory in which the script is located, and popd should return afterwards.
But the output file is still created in the working directory from which the script is launched.
What do I have to change so it works? I already tried a plain cd to change the directory; that's why the variable curDir stores the starting directory.

By putting pushd into backticks, you're running it in a subshell. No subshell can change the current working directory of its parent shell.
Just call
pushd "$wantedDir"
directly, and same with popd.
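As a minimal sketch (reusing the directory and file name from your script), the relevant structure would be:
#!/bin/bash
wantedDir="/Users/Stephan/Documents/Schule/SYT/Skripting/bin/Uebungen"
file="Fundstellen.txt"
pushd "$wantedDir" > /dev/null   # runs in the current shell, so the directory change sticks
# ... create and fill "$file" here; it now ends up in $wantedDir ...
popd > /dev/null                 # return to the directory you started from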

Your script is a hopeless mess. All you need to produce output in the named file is
command >/Users/Stephan/Documents/Schule/SYT/Skripting/bin/Uebungen/Fundstellen.txt
Generally, very few scripts need to explicitly cd and fewer still would need to pushd and popd -- these commands are almost exclusively for interactive use.
The loop where you read all of the group file for every entry in the passwd file is extremely inefficient, especially when the purpose of the inner loop seems to be to find only a small subset of the records in the file. Very often, when you see while read, you want Awk instead. Here is a simple framework for doing that.
awk -F : 'NR==FNR { ++p[$3]; next }
    FNR > 15 && $3 in p { print "#" ++i ": " $3 " in " $0 }' /etc/passwd /etc/group
It's not clear what the 15 is supposed to accomplish. Is it a bug in your script, or is the intent to only skip the first 15 lines on the first iteration?

Related

What is the purpose of the command rsync -rvzh

I'm trying to understand what these two commands are doing:
config=$(date +%s)
rsync -rvzh $1 /var/lib/tomcat7/webapps/ROOT/DataMining/target > /var/lib/tomcat7/webapps/ROOT/DataMining/$config
These lines appear in a bigger script, script.sh, which looks like this:
#! /bin/bash
config=$(date +%s)
rsync -rvzh $1 /var/lib/tomcat7/webapps/ROOT/DataMining/target > /var/lib/tomcat7/webapps/ROOT/DataMining/$config
countC=0
countS=`wc -l /var/lib/tomcat7/webapps/ROOT/DataMining/$config | sed 's/^\([0-9]*\).*$/\1/'`
let countS--
let countS--
let countS--
while read LINEC #read line
do
    if [ "$countC" -gt 0 ]; then
        if [ "$countC" -lt "$countS" ]; then
            FILENAME="/var/lib/tomcat7/webapps/ROOT/DataMining/target/"$LINEC
            count=0
            countW=0
            while read LINE
            do
                for word in $LINE;
                do
                    echo "INSERT INTO data_mining.data (word, line, numWordLine, file) VALUES ('$word', '$count', '$countW', '$FILENAME');" >> /var/lib/tomcat7/webapps/ROOT/DataMining/query
                    mysql -u root -Alaba1515< /var/lib/tomcat7/webapps/ROOT/DataMining/query
                    echo > /var/lib/tomcat7/webapps/ROOT/DataMining/query
                    let countW++
                done
                countW=0
                let count++
            done < $FILENAME
            count=0
            rm -f /var/lib/tomcat7/webapps/ROOT/DataMining/query
            rm -f /var/lib/tomcat7/webapps/ROOT/DataMining/$config
        fi
    fi
    let countC++
done < /var/lib/tomcat7/webapps/ROOT/DataMining/$config #finish while
I was able to find plenty of documentation about rsync and what it does, but I don't understand what the rest of the command does. Any help please?
The first command assigns the current time (in seconds since epoch) to the shell variable config. For example:
$ config=$(date +%s)
$ echo $config
1446506996
rsync is a file copying utility. The second command thus makes a backup copy of the directory listed in argument 1 (referred to as $1). The backup copy is placed in /var/lib/tomcat7/webapps/ROOT/DataMining/target. A log file of what was copied is saved in /var/lib/tomcat7/webapps/ROOT/DataMining/$config:
rsync -rvzh $1 /var/lib/tomcat7/webapps/ROOT/DataMining/target > /var/lib/tomcat7/webapps/ROOT/DataMining/$config
The rsync options mean:
-r tells rsync to copy files, descending recursively into subdirectories.
-v tells it to be verbose so that it shows what is copied.
-z tells it to compress files during their transfer from one location to the other.
-h tells it to show any numbers in the output in human-readable format.
Note that because $1 is not inside double-quotes, this script will fail if the name of directory $1 contains whitespace.
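With quoting added, that line would read:
rsync -rvzh "$1" /var/lib/tomcat7/webapps/ROOT/DataMining/target > "/var/lib/tomcat7/webapps/ROOT/DataMining/$config"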

bash call script with variable

What I want to achieve is the following :
I want the subtitles for my TV Show downloaded automatically.
The script "getSubtitle.sh" is ran as soon as the show is downloaded, but it can happen that no subtitle are released yet.
So what I am doing to counter this :
Creating a file each time "getSubtitle.sh" is ran. It contain the location of the script with its arguments, for example :
/Users/theo/logSubtitle/getSubtitle.sh "The Walking Dead - 5x10 - Them.mp4" "The.Walking.Dead.S05E10.480p.HDTV.H264.mp4" "/Volumes/Window HD/Série/The Walking Dead"
If a subtitle has been found, this file will contain only this line; if no subtitle has been found, this file will have two lines (the first one being "no subtitle downloaded", and the second one being the path to the script as explained above).
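For example, based on that description, a file for which no subtitle was found would look roughly like this:
no subtitle downloaded
/Users/theo/logSubtitle/getSubtitle.sh "The Walking Dead - 5x10 - Them.mp4" "The.Walking.Dead.S05E10.480p.HDTV.H264.mp4" "/Volumes/Window HD/Série/The Walking Dead"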
Now, once I have this, I'm planning to run a cron job every day that will do the following:
Remove all files that have only one line (subtitle found), and execute the script again for the remaining files. Here is the full script:
cd ~/logSubtitle/waiting/
for f in *
do
    nbligne=$(wc -l $f | cut -c 8)
    if [ "$nbligne" = "1" ]
    then
        rm $f
    else
        command=$(sed -n "2 p" $f)
        sh $command 3>&1 1>&2 2>&3 | grep down > $f ; echo $command >> $f
    fi
done
This is unfortunately not working; I have the feeling that the script is not called.
When I replace $command with the line from the text file, it works.
I am sure that $command matches the line, because of the "echo $command >> $f" at the end of my script.
So I really don't get what I am missing here. Any ideas?
Thanks.
I'm not sure what you're trying to achieve with the cut -c 8 part in wc -l $f | cut -c 8. cut -c 8 will select the 8th character of the output of wc -l.
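For illustration (the exact output of wc -l varies between systems, so a fixed character position is fragile; waiting.txt is just an example file name):
$ wc -l waiting.txt
       2 waiting.txt
$ wc -l < waiting.txt
2
On some systems the count is padded with spaces and followed by the file name, which is what makes cut -c 8 unreliable; reading the file via stdin prints only the number.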
A suggestion: to check whether your file contains one or two lines (and since you'll need the content of the second line, if any, anyway), use mapfile. This will slurp the file into an array, one line per element. You can use the option -n 2 to read at most two lines. This will be much more efficient, safer and nicer than your solution:
mapfile -t -n 2 ary < file
Then:
if ((${#ary[@]}==1)); then
    printf 'File contains one line only: %s\n' "${ary[0]}"
elif ((${#ary[@]}==2)); then
    printf 'File contains (at least) two lines:\n'
    printf ' %s\n' "${ary[@]}"
else
    printf >&2 'Error, no lines found in file\n'
fi
Another suggestion: use more quotes!
With this, a better way to write your script:
#!/bin/bash
dir=$HOME/logSubtitle/waiting/
shopt -s nullglob
for f in "$dir"/*; do
mapfile -t -n 2 ary < "$f"
if ((${#ary[#]}==1)); then
rm -- "$f" || printf >&2 "Error, can't remove file %s\n" "$f"
elif ((${#ary[#]}==2)); then
{ sh -c "${ary[1]}" 3>&1 1>&2 2>&3 | grep down; echo "${ary[1]}"; } > "$f"
else
printf >&2 'Error, file %s contains no lines\n' "$f"
fi
done
After the done keyword you can even add the redirection 2>> logfile to a log file if you wish. Make sure the cron job is run with your user: check crontab -l and, if needed, edit it with crontab -e.
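As a sketch, the daily cron job you describe could be scheduled with an entry like this (script name and time are placeholders):
0 6 * * * /bin/bash /Users/theo/logSubtitle/check_subtitles.sh 2>> /Users/theo/logSubtitle/cron.log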
Use eval instead of sh. The reason it works with eval and not sh is the number of passes used to evaluate the command: sh treats the string produced by the sed command as the command to execute verbatim, while eval evaluates that string as shell code first and then executes the result.
Briefly explained.
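A small sketch of the difference, using a hypothetical stored command line:
cmd='/Users/theo/logSubtitle/getSubtitle.sh "The Walking Dead - 5x10 - Them.mp4"'
sh $cmd       # word-splits the string: the quotes stay literal and the argument arrives as several separate words
eval "$cmd"   # re-parses the string as shell code, so the quoted argument is passed as a single word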

Remove the bottom level from a path

I am writing a script where I want to check if a folder relative to the script's working directory is in the shell's path.
For example, if the project structure is:
top/
    bin/
    tests/
        test1/
            foo.sh
        test2/
            foo.sh
I want to check if top/bin is in PATH from either foo.sh. I know the following will work:
cd ../../bin
if echo $PATH|grep `pwd`; then
    echo "Success"
else
    echo "Failure"
fi
But then I have to keep track of what directory I started in so I can cd back there. Can I do something like chopping off the last two directories after pwd and then appending bin to that? Is there some other intelligent way to handle this?
As a bonus, I'd like to make this script robust against additional directory levels inside tests, but that's not strictly necessary if it complicates things.
# Exits with code 0 if the first argument is on the user's path.
is_on_path() {
    CANONICAL_NAME=$(readlink -f "$1")
    while read -r -d: COMPONENT; do
        if [[ "$CANONICAL_NAME" = "$COMPONENT" ]]; then
            return 0
        fi
    done <<< "$PATH"
    return 1
}
readlink -f will resolve path components like ../.. as needed.
read -r -d: COMPONENT reads a colon-separated list from its input, one entry at a time, into the COMPONENT variable.
<<< "$PATH" feeds the value of PATH into the while loop's stdin.
Now you can call that like this:
if is_on_path "../../bin"; then
    echo "Success!"
else
    echo "Failure!"
fi
Bonus: From here it's pretty easy to recursively apply this to all parent directories:
find_matching_ancestor() {
    current_dir=$(pwd)
    while [[ "$current_dir" != "/" ]]; do
        if is_on_path "$current_dir/bin"; then
            echo "$current_dir"
            return 0
        fi
        current_dir=$(dirname "$current_dir")
    done
    return 1
}
This seems to work
echo $PATH | xargs -d: -n1 | grep ^$(readlink -f ../../bin)$ | wc -l
Perhaps the most reliable solution would be to put a file with the executable bit set into that directory and to use "which".
For example, is /bin on my PATH?
which ls
/bin/ls

Bash Script exiting without error while using diff

I'm fairly new to bash and was trying to create an autograder script for running some test cases. Currently my script is acting strangely: when the -e flag is set, bash just exits when a diff file has a positive size, and when the -e flag is not set, the script ignores any differences in the diff files and says that all tests passed.
The script exits immediately after the "write_diff_out=...." command; the next line is not printed. I've only included the diffing portion of the script, as everything else runs fine (and the files all exist).
# Validate outputs and print results
echo "> Comparing current build's final memory output with golden memory output...";
for file in `ls test_progs`;
do
    file=$(echo $file | cut -d '.' -f1);
    echo "$file";
    write_diff_out=$(diff ./log/$file.writeback.out ./log/$file.writeback.gold.out > ./diff/$file.writeback.diff);
    echo "Finished write_diff";
    program_diff_out=$(diff -u <(grep -E '###' ./log/$file.program.out) <(grep -E '###' ./log/$file.program.gold.out) > ./diff/$file.program.diff);
    echo "Finished program diff";
    if [ -z "$write_diff_out" ] && [ -z "$program_diff_out" ]; then
        printf "%20s:\e[0;32mPASSED\e[0m\n" "$file";
    else
        printf "%20s:\e[0;31mFAILED\e[0m\n" "$file";
    fi
done
echo "> Done comparing test outputs.";
echo "> Done comparing test outputs.";
Feel free to suggest a better way of formatting the diff commands as well; I know there are different ways of writing them.
I don't know exactly what your problem is, but I have rewritten your script to follow some best practices. Perhaps it will work better.
#!/bin/bash
# Debugging mode: prints every command as executed, remove when unneeded
set -x
# Validate outputs and print results
echo "> Comparing current build's final memory output with golden memory output..."
cd test_progs
for file in *; do
    file="$(echo "$file" | sed 's/\.[^.]*$//')"
    echo "$file"
    # PASSED when neither diff finds any differences (both exit with status 0)
    if diff "log/$file.writeback.out" \
            "log/$file.writeback.gold.out" > \
            "diff/$file.writeback.diff" && \
       diff -u <(grep -E '###' "log/$file.program.out") \
               <(grep -E '###' "log/$file.program.gold.out") > \
               "diff/$file.program.diff"; then
        printf '%20s:\e[0;32mPASSED\e[0m\n' "$file"
    else
        printf '%20s:\e[0;31mFAILED\e[0m\n' "$file"
    fi
done
echo "> Done comparing test outputs."
echo "> Done comparing test outputs."
It avoids parsing ls, uses quotes where they are due, and tests diff's exit status directly instead of storing its output in a variable.
If you really wanted to store diff's output in a variable, you would do this:
write_diff_out="$(diff "log/$file.writeback.out" "log/$file.writeback.gold.out" | tee "diff/$file.writeback.diff")"
Then $write_diff_out would contain the same data the diff/$file.writeback.diff file has.
EDIT: edited my answer a bit to implement some of the suggestions from the comments.

bash save last user input value permanently in the script itself

Is it possible to save the last value of a variable entered by the user in the bash script itself, so that I can reuse the value the next time the script is executed?
Eg:
#!/bin/bash
if [ -d "/opt/test" ]; then
    echo "Enter path:"
    read path
    p=$path
else
    .....
    ........
fi
The above script is just a sample example (which may be wrong). Is it possible to save the value of p permanently in the script itself, so that I can use it later in the script even when the script is re-executed?
EDIT:
I am already using sed to overwrite lines in the script while executing. This method works, but it is not good practice, as said. Replacing the lines in the same file, as described in the answer below, is much better than what I am currently using, shown here:
...
....
PATH=""; #This is line no 7
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )";
name="$(basename "$(test -L "$0" && readlink "$0" || echo "$0")")";
...
if [ condition ]
fi
path=$path
sed -i '7s|.*|PATH='$path';|' $DIR/$name;
Something like this should do what you're asking for:
#!/bin/bash
ENTERED_PATH=""
if [ "$ENTERED_PATH" = "" ]; then
    echo "Enter path"
    read path
    ENTERED_PATH=$path
    sed -i 's/ENTERED_PATH=""/ENTERED_PATH='$path'/g' $0
fi
This script will ask the user for a path only if ENTERED_PATH has not previously been defined, and it stores the value directly in the current file with the sed line.
Maybe a safer way to do this would be to write a config file somewhere with the data you want to save, and source it (. data.saved) at the beginning of your script.
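A minimal sketch of that approach (reusing the data.saved name from the sentence above; the details are just one way to do it):
#!/bin/bash
config="$(dirname "$0")/data.saved"
# load a previously saved value, if any
[ -f "$config" ] && . "$config"
if [ -z "$ENTERED_PATH" ]; then
    echo "Enter path"
    read ENTERED_PATH
    # persist the value for the next run
    printf 'ENTERED_PATH=%q\n' "$ENTERED_PATH" > "$config"
fi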
In the script itself? Yes, with sed, but it's not advisable.
#!/bin/bash
test='0'
echo "test currently is: $test";
test=`expr $test + 1`
echo "changing test to: $test"
sed -i "s/test='[0-9]*'/test='$test'/" $0
Preferable method:
Try saving the value in a separate file; then you can easily do
myvar=`cat varfile.txt`
and whatever was in the file is now in your variable.
I would suggest using the /tmp/ dir to store the file in.
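A short sketch of that idea (the file name under /tmp is just an example):
varfile=/tmp/myscript.lastvalue
# restore the previous value if the file exists
[ -f "$varfile" ] && myvar=$(cat "$varfile")
# ... after the user has entered a new value ...
printf '%s\n' "$myvar" > "$varfile"    # save it for the next run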
Another option would be to save the value as an extended attribute attached to the script file. This has many of the same problems as editing the script's contents (permissions issues, weird for multiple users, etc) plus a few of its own (not supported on all filesystems...), but IMHO it's not quite as ugly as rewriting the script itself (a config file really is a better option).
I don't use Linux, but I think the relevant commands would be something like this:
path="$(getfattr --only-values -n "user.saved_path" "${BASH_SOURCE[0]}")"
if [[ -z "$path" ]]; then
    read -p "Enter path:" path
    setfattr -n "user.saved_path" -v "$path" "${BASH_SOURCE[0]}"
fi
