suppress the output to screen in a shell script

Hi,
I have written a small script:
#!/usr/bin/ksh
for i in *.DAT
do
awk 'BEGIN{OFS=FS=","}$3~/^353/{$3="353861958962"}{print}' $i >> $i_changed
awk '$3~/^353/' $i_changed >> $i_353
rm -rf $i_changed
done
exit
I tested it and it's working fine.
But it is printing the output to the screen, and I don't need the output on the screen.
I simply need the final file that is made, $i_353.
How is that possible?

Wrap the body of the script in braces and redirect to /dev/null:
#!/usr/bin/ksh
{
for i in *.DAT
do
awk 'BEGIN{OFS=FS=","}$3~/^353/{$3="353861958962"}{print}' $i >> $i_changed
awk '$3~/^353/' $i_changed >> $i_353
rm -rf $i_changed
done
} >/dev/null 2>&1
This sends errors to the bit-bucket too. That may not be such a good idea; if you don't want that, remove the 2>&1 redirection.
Also, beware: you probably need to use ${i}_changed and ${i}_353. This is why the output is not going to the files: your variables ${i_changed} and ${i_353} are not initialized, and hence the redirections don't name a file.
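For reference, here is a sketch of the corrected loop (this assumes each run should rebuild the intermediate file from scratch, hence > rather than >>; adjust if you really do want to append across runs). With the redirections naming real files, nothing is printed to the screen any more, so the /dev/null wrapper becomes optional:
#!/usr/bin/ksh
for i in *.DAT
do
    # rewrite field 3 and store the result in <name>_changed
    awk 'BEGIN{OFS=FS=","}$3~/^353/{$3="353861958962"}{print}' "$i" > "${i}_changed"
    # keep only the matching lines in <name>_353
    awk '$3~/^353/' "${i}_changed" > "${i}_353"
    rm -f "${i}_changed"
done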

Related

bash: cat command output into a file only if it succeeds

I'm trying to redirect a command's output into a file only if the command has been successful, because I don't want it to erase the file's contents when it fails.
(The command reads the file as input.)
I'm currently using
cat <<< $( <command> ) > file;
which erases the file if the command fails.
It's possible to do what I want by storing the output in a temp file, like this:
<command> > temp_file && cat temp_file > file
But it looks kind of messy to me; I want to avoid manually creating temp files (I know the <<< redirection creates a temp file anyway).
I finally came up with this trick
cat <<< $( <command> || cat file) > file;
which will not change the contents of the file when the command fails... but which is even messier, I guess.
Perhaps capture the output into a variable, and echo the variable into the file if the exit status is zero:
output=$(command) && echo "$output" > file
Testing
$ out=$(bash -c 'echo good output') && echo "$out" > file
$ cat file
good output
$ out=$(bash -c 'echo bad output; exit 1') && echo "$out" > file
$ cat file
good output
Remember, the > operator replaces the existing contents of the file with the output of the command. If you want to save the output of multiple commands to a single file, you’d use the >> operator instead. This will append the output to the end of the file.
For example, the following command will append output information to the file you specify:
ls -l >> /path/to/file
So, to log the command's output only if it succeeds, you can retry until it succeeds and append only the successful run's output:
until output=$(command)
do
    sleep 1    # keep retrying until the command succeeds
done
echo "$output" >> /path/to/file

Replace file only if not being accessed in bash

My requirement is to replace file only when it is not being accessed. I have the following snippet:
if [ -f file ]
then
while true
do
if [ -n "$(fuser "file")" ]
then
echo "file is in use..."
else
echo "file is free..."
break
fi
done
fi
{
flock -x 3
mv newfile file
} 3>file
But I doubt that I am handling concurrency properly. Please give some insight and a possible way to achieve this.
Thanks.
My requirement is to replace file only when it is not being accessed.
Getting requirements right can be hard. In case your actual requirement is the following, you can boil down the whole script to just one command.
My guess on the actual requirement (not as strict as the original):
Replace file without disturbing any programs reading/writing file.
If this is the case, you can use a very neat behavior: on Unix-like systems, file descriptors always point to the file (not the path) for which they were opened. You can move or even delete the corresponding path. See also How do the UNIX commands mv and rm work with open files?
Example:
Open a terminal and enter
i=1; while true; do echo $((i++)); sleep 1; done > file &
tail -f file
The first command writes output to file and runs in the background. The second command reads the file and continues to print its changing content.
Open another terminal and move or delete file, for instance with
mv file file2
echo overwritten > otherFile
mv otherFile file2
rm file2
echo overwritten > file
echo overwritten > file2
While executing these commands, have a look at the output of tail -f in the first terminal; it won't be affected by any of these commands. You will never see overwritten.
Solution For New Requirement:
Because of this behavior you can replace the whole script with just one mv command:
mv newfile file
Consider lsof.
mvWhenClear() {
    delay=${delay:-1}                             # poll interval in seconds, overridable by the caller
    while [[ -f "$1" ]] && lsof "$1" > /dev/null
    do sleep "$delay"
    done
    mv "$1" "$2"    # still allows a race condition
}
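A hypothetical usage, assuming the goal is to wait until file is free, move it out of the way, and then drop the replacement in place (the name file.old is made up for illustration):
delay=2                      # poll interval used by mvWhenClear
mvWhenClear file file.old    # waits until "file" is no longer open, then moves it aside
mv newfile file              # put the replacement in place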

Change directory for saving a file and return to old directory

I wrote a very small and basic script which compares two files and writes all matching lines into a file.
I now want to ensure that, no matter from which directory/working directory you run the bash script, the file is stored in the directory where the script is located.
#! /bin/bash
typeset -i count=1
typeset -i useable_counter=1
file="Fundstellen.txt"
curDir=`pwd`
wantedDir="/Users/Stephan/Documents/Schule/SYT/Skripting/bin/Uebungen"
echo `pushd ${wantedDir}`
if [ -e $file ]; then
echo `chmod 777 ${file}`
echo `rm ${file}`
fi
echo `touch ${file}`
while read pass; do
pass_nr=`echo $pass | cut -d ":" -f 3`
while read groups; do
group_nr=`echo $groups | cut -d ":" -f 3`
if [ "$pass_nr" = "$group_nr" ]; then
if [ $count -gt 15 ]; then
echo "#$useable_counter: $pass in $groups" >> $file
useable_counter=$useable_counter+1
fi
count=$count+1
fi
done < /etc/group
done < /etc/passwd
echo `chmod 444 $file`
echo `popd`
echo "Writing done!"
That's my script, with the pushd command to get to the directory in which the script is located; popd should then return.
But still, the output file is created in the directory/working directory from where the script is launched.
What do I have to change so it'll work? I already tried using a plain cd to change the directory; that's why the variable curDir stores the starting directory.
By putting pushd into backticks, you're running it in a subshell. No subshell can change the current working directory of its parent shell.
Just call
pushd "$wantedDir"
directly, and same with popd.
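A minimal sketch of that corrected structure, using the path from your own script:
#!/bin/bash
wantedDir="/Users/Stephan/Documents/Schule/SYT/Skripting/bin/Uebungen"
pushd "$wantedDir" > /dev/null    # runs in the current shell, so the working directory really changes
touch Fundstellen.txt             # created inside $wantedDir
popd > /dev/null                  # return to wherever the script was started from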
Your script is a hopeless mess. All you need to produce output in the named file is
command >/Users/Stephan/Documents/Schule/SYT/Skripting/bin/Uebungen/Fundstellen.txt
Generally, very few scripts need to explicitly cd and fewer still would need to pushd and popd -- these commands are almost exclusively for interactive use.
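If you'd rather not hard-code that absolute path, a common sketch is to derive the script's own directory from $0 and write there (this assumes the script is not invoked through a symlink):
#!/bin/bash
script_dir=$(cd "$(dirname "$0")" && pwd)   # directory containing this script
file="$script_dir/Fundstellen.txt"
command > "$file"                           # whatever produces the output, as above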
The loop where you read all of the group file for every entry in the passwd file is extremely inefficient, especially when the purpose of the inner loop seems to be to find only a small subset of the records in the file. Very often, when you see while read, you want Awk instead. Here is a simple framework for doing that.
awk -F : 'NR==FNR { ++p[$3]; next }
FNR > 15 && $3 in p { print "#" ++i ": " $3 " in " $0 }' /etc/passwd /etc/group
It's not clear what the 15 is supposed to accomplish. Is it a bug in your script, or is the intent to only skip the first 15 lines on the first iteration?

send bash stderr to logfile, but only if an error exists

I am using the following code to send stderr to a file.
.script >2 "errorlog.$(date)"
The problem is that a blank log file is created every time I run the script, even if an error doesn't exist. I have looked online and in a few books as well, and can't figure out how to create a log file only if errors exist.
Output redirection opens the file before the script is run, so there is no way to tell if the file will receive any output. What you can do, however, is immediately delete the file if it winds up being empty:
logfile="errorlog.$(date)"
# Note your typo; it's 2>, not >2
script 2> "$logfile"; [ -s "$logfile" ] || rm -f "$logfile"
I use -f just in case, as -s can fail if $logfile does not exist, not just if it's empty. I use ; to separate the commands because whether or not $logfile contains anything does not depend on whether or not script succeeds.
You can wrap this up in a function to make it easier to use.
save_log () {
logfile=${1:-errorlog.$(date)}
cat - > "$logfile"
[ -s "$logfile" ] || rm -f "$logfile"
}
script 2> >( save_log )
script 2> >( save_log my_logfile.txt )
Not quite as simple as redirecting to a file, and depends on a non-standard feature (process substitution), but not too bad, either.
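If process substitution is not available (plain sh, for instance), a minimal alternative sketch is to capture stderr in a temporary file and keep it only when it is non-empty; run_with_errlog and errfile are names made up for illustration, and mktemp is assumed to be available:
run_with_errlog () {
    errfile=$(mktemp) || return
    "$@" 2> "$errfile"                         # run the command, stderr goes to the temp file
    status=$?
    if [ -s "$errfile" ]; then
        mv "$errfile" "errorlog.$(date)"       # keep the log only if something was written
    else
        rm -f "$errfile"
    fi
    return "$status"
}
run_with_errlog script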

nohup for loop output naming

I use a for loop to run a specific tool on a set of files:
nohup sh -c 'for i in ~/files/*txt; do ID=`echo ${i} | sed 's/^.*\///'`; ./tool $i &&
mv output ${ID}.out; done' &
This tool uses a fixed name for its output file, and I want to rename the output so it isn't overwritten; it is also simpler for me.
However, this specific mv doesn't work with nohup: files are not renamed individually and get overwritten.
How can I solve this problem?
Why the complicated nohup dance, and not just
for i in ~/files/*.txt; do
./tool "$i" && mv output "$(basename "$i").out"
done
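If the job really does need to survive closing the terminal, here is a sketch that keeps nohup but avoids the nested-single-quote problem of the original command (loop.log is just a placeholder name for nohup's combined output):
# the inner script uses only double quotes, so the outer single quotes stay intact
nohup bash -c 'for i in ~/files/*.txt; do ./tool "$i" && mv output "$(basename "$i").out"; done' > loop.log 2>&1 &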
