bash: cat command output to a file only if it succeeds

I'm trying to redirect a command's output to a file, but only if the command succeeds, because I don't want it to erase the file's contents when it fails.
(The command reads the file as input.)
I'm currently using
cat <<< $( <command> ) > file;
which erases the file if the command fails.
It's possible to do what I want by storing the output in a temp file, like this:
<command> > temp_file && cat temp_file > file
But that looks kind of messy to me; I want to avoid manually creating temp files (I know the <<< redirection creates a temp file).
I finally came up with this trick:
cat <<< $( <command> || cat file) > file;
which will not change the contents of the file... but is even messier, I guess.

Perhaps capture the output into a variable, and echo the variable into the file if the exit status is zero:
output=$(command) && echo "$output" > file
Testing
$ out=$(bash -c 'echo good output') && echo "$out" > file
$ cat file
good output
$ out=$(bash -c 'echo bad output; exit 1') && echo "$out" > file
$ cat file
good output
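One caveat worth knowing: $( ) strips trailing newlines, and echo can mangle backslashes or a leading -n in some shells. A minimal sketch of the same idea as a reusable function (the name write_if_ok is made up), using printf for predictable output:
write_if_ok() {
    local dest=$1; shift
    local out
    out=$("$@") || return    # command failed: leave $dest untouched
    printf '%s\n' "$out" > "$dest"
}
write_if_ok file <command> arg1 arg2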

Remember, the > operator replaces the existing contents of the file with the output of the command. If you want to save the output of multiple commands to a single file, you’d use the >> operator instead. This will append the output to the end of the file.
For example, the following command appends output to the file you specify:
ls -l >> /path/to/file
So, to log the command output only if it succeeds, you can try something like this:
until output=$(command)
do
    sleep 1    # command failed: wait and retry
done
printf '%s\n' "$output" >> /path/to/file
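If you would rather use a temp file without the manual bookkeeping from the question, here is a sketch using mktemp (the target path /path/to/file is the placeholder from above). Note that mv is an atomic rename only within one filesystem, which is why the temp file is created next to the target:
tmp=$(mktemp /path/to/file.XXXXXX) || exit 1
if command > "$tmp"
then
    mv "$tmp" /path/to/file    # atomic rename within one filesystem
else
    rm -f "$tmp"               # command failed: /path/to/file is untouched
fi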

Related

How to make "rename -n ..." output to a file?

I'm experimenting with some arguments for the rename command, using the -n option to do "dry runs". How do I make it output to a file so I can analyze it further? The following does not work -- the resultant rename.log is empty:
$ echo "XXX" > \"XXX\"__XXX.txt
$ rename -n 's/"([^\/"《》]+)"__(.*)/“$1”__$2/' '{}' \; *.txt > rename.log
'"XXX"__XXX.txt' would be renamed to '“XXX”__XXX.txt'
Mark's comment is correct; it seems the -n option writes to stderr. So you can run a command like this:
rename -n [options] > rename.log 2>&1
If you wanted to pipe the output to another command (as I was trying to do), put the redirection before the pipe:
rename -n [options] 2>&1 | less
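In general, > captures only stdout; stderr keeps going to the screen unless you redirect it too, and the order of the redirections matters. A small sketch (demo is a made-up stand-in that writes to both streams):
demo() { echo "to stdout"; echo "to stderr" >&2; }
demo > out.log          # out.log gets stdout; "to stderr" still hits the screen
demo > out.log 2>&1     # both streams end up in out.log
demo 2>&1 | less        # both streams reach the pipe
demo 2>&1 > out.log     # order matters: stderr goes to the old stdout, the screen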

Replace file only if not being accessed in bash

My requirement is to replace file only when it is not being accessed. I have following snippet:
if [ -f file ]
then
    while true
    do
        if [ -n "$(fuser "file")" ]
        then
            echo "file is in use..."
        else
            echo "file is free..."
            break
        fi
    done
fi
{
    flock -x 3
    mv newfile file
} 3>file
But I suspect I am not handling concurrency properly. Please give some insight and a possible way to achieve this.
Thanks.
My requirement is to replace file only when it is not being accessed.
Getting requirements right can be hard. In case your actual requirement is the following, you can boil down the whole script to just one command.
My guess on the actual requirement (not as strict as the original):
Replace file without disturbing any programs reading/writing file.
If this is the case, you can use a very neat behavior: on Unix-like systems, file descriptors always point to the file (not the path) for which they were opened. You can move or even delete the corresponding path. See also How do the UNIX commands mv and rm work with open files?.
Example:
Open a terminal and enter
i=1; while true; do echo $((i++)); sleep 1; done > file &
tail -f file
The first command writes output to file and runs in the background. The second command reads the file and continues to print its changing content.
Open another terminal and move or delete file, for instance with
mv file file2
echo overwritten > otherFile
mv otherFile file2
rm file2
echo overwritten > file
echo overwritten > file2
While executing these commands, have a look at the output of tail -f in the first terminal – it won't be affected by any of these commands. You will never see overwritten.
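On Linux you can confirm this from a third terminal: the kernel keeps the open file alive even though its path is gone. A quick sketch (the pgrep pattern and the Linux-only /proc path are assumptions):
pid=$(pgrep -f 'tail -f file')    # PID of the reader started above
ls -l "/proc/$pid/fd"             # the unlinked file's entry ends in '(deleted)'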
Solution For New Requirement:
Because of this behavior you can replace the whole script with just one mv command:
mv newfile file
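One caveat: mv is an atomic rename only when newfile is on the same filesystem as file; across filesystems it degrades to copy-plus-delete. A sketch that stages the replacement next to the target first (the mktemp template is an assumption):
staged=$(mktemp ./file.XXXXXX) || exit 1
cp newfile "$staged"    # put the contents on the target's filesystem
mv "$staged" file       # a true rename(2): existing readers keep the old file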
Consider lsof.
mvWhenClear() {
    local delay=${delay:-1}    # seconds between checks; caller may override
    while [[ -f "$1" ]] && lsof "$1" > /dev/null 2>&1
    do
        sleep "$delay"
    done
    mv "$1" "$2"    # still allows a race condition
}
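For example, with the function above defined (the 2-second delay is arbitrary; it falls back to 1 second if unset):
delay=2 mvWhenClear newfile file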

Changing the variables passed to a subscript inside a loop

I have a bash script that takes a file and performs an operation on it. The operation produces an out_file. When it's done, I call another script (script_2) from my script to perform a further operation on the out_file. The problem I have is passing parameters to script_2, which are different for each initial file:
#!/bin/bash
for i in $(ls folder); do
    ./operation.sh folder/$i    # this step produces the out_file.$i
    ./script_2 out_file.$i parameter_1 parameter_2
done
Thus, parameter_1 and parameter_2 should be different for each out_file. So, is it possible to pass different parameters each time inside the loop, rather than launching script_2 separately for each file?
Without more information it's hard to know what your purpose is:
$ ls
script1.sh script2.sh script3.sh testfiles
$ ls ./testfiles/
file1.txt file2.txt
$ cat script1.sh
#!/bin/bash
for i in $(ls ./testfiles/); do
    ./script2.sh $i
    ./script3.sh ./testfiles/out_file.$i parameter_1 parameter_2
done
$ cat script2.sh
#!/bin/bash
touch ./testfiles/out_$1.txt
exit
$ cat script3.sh
#!/bin/bash
echo "dollar1: $1
dollar2 $2
dollar3 $3 "
$ ./script1.sh
dollar1: ./testfiles/out_file.file1.txt
dollar2 parameter_1
dollar3 parameter_2
dollar1: ./testfiles/out_file.file2.txt
dollar2 parameter_1
dollar3 parameter_2
$ ls ./testfiles/
file1.txt file2.txt out_file1.txt.txt out_file2.txt.txt
As you can see, it loops through all the files in the folder, creates the out file, and then passes it into script 3.
I wouldn't advise running the script again in its current form (it'll then loop through the out files too), but you get the idea.
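If the parameters genuinely differ per file, one sketch is to look them up in a case statement keyed on the file name inside the same loop (the parameter values below are placeholders):
#!/bin/bash
for f in ./testfiles/*; do
    i=$(basename "$f")
    case "$i" in    # per-file parameters; values are made up
        file1.txt) p1=alpha;   p2=10 ;;
        file2.txt) p1=beta;    p2=20 ;;
        *)         p1=default; p2=0  ;;
    esac
    ./script2.sh "$i"
    ./script3.sh "./testfiles/out_file.$i" "$p1" "$p2"
done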

Issue with creating a file containing $ from a bash script

I want to produce a udev rules file from a bash script. For this I'm using the cat command. Unfortunately, the produced file is missing the "$" characters. Here is the example test.sh script:
#!/bin/sh
rc=`cat <<stmt1 > ./test.txt
-p $tempnode
archive/$env{ID_FS_LABEL_ENC}
stmt1`
The result is the following:
$ cat test.txt
-p ''
archive/{ID_FS_LABEL_ENC}
Where is the issue?
If you don't want any variable interpolation, use:
#!/bin/sh
group="test_1"
cat <<'stmt1' > ./test.txt
-p $tempnode
archive/$env{ID_FS_LABEL_ENC}
stmt1
rc=$?
(Notice the single quotes around stmt1.)
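If you need some expansion in the same here-document, another option is to leave the delimiter unquoted and escape only the dollar signs that must stay literal. A sketch:
#!/bin/sh
group="test_1"
cat <<stmt1 > ./test.txt
GROUP="$group"
-p \$tempnode
archive/\$env{ID_FS_LABEL_ENC}
stmt1
Here $group is expanded by the shell, while \$tempnode and \$env{ID_FS_LABEL_ENC} reach the file as literal $tempnode and $env{ID_FS_LABEL_ENC}.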

Suppress the output to the screen in a shell script

Hi,
I have written a small script:
#!/usr/bin/ksh
for i in *.DAT
do
    awk 'BEGIN{OFS=FS=","}$3~/^353/{$3="353861958962"}{print}' $i >> $i_changed
    awk '$3~/^353/' $i_changed >> $i_353
    rm -rf $i_changed
done
exit
I tested it and it's working fine.
But it is printing the output to the screen, and I don't need the output on screen.
I simply need the final file that is made, $i_353.
How is this possible?
Wrap the body of the script in braces and redirect to /dev/null:
#!/usr/bin/ksh
{
    for i in *.DAT
    do
        awk 'BEGIN{OFS=FS=","}$3~/^353/{$3="353861958962"}{print}' $i >> $i_changed
        awk '$3~/^353/' $i_changed >> $i_353
        rm -rf $i_changed
    done
} >/dev/null 2>&1
This sends errors to the bit-bucket too. That may not be such a good idea; if you don't want that, remove the 2>&1 redirection.
Also: beware - you probably need to use ${i}_changed and ${i}_353. This is why the output is not going to the files: your variables ${i_changed} and ${i_353} are not initialized, and hence the redirections don't name a file.
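To see the difference the braces make, a tiny transcript (the file name sample.DAT is made up):
$ i=sample.DAT
$ echo ">>$i_changed<<"      # the shell expands the (unset) variable i_changed
>><<
$ echo ">>${i}_changed<<"    # braces end the name at i; _changed is literal
>>sample.DAT_changed<<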
