I've seen several answers on SO about how to append to a file if it exists and create a new file if it doesn't (echo "hello" >> file.txt) or overwrite a file if it exists and create one if it doesn't (echo "hello" > file.txt).
But how do I make echo "hello" append to the file only if it already exists, and raise an error if it doesn't?
EDIT: Right now, I'm already checking for the file using [ -f file.txt ]. I was wondering if there's a way in which I could simply use echo.
Assuming the file is either nonexistent or both readable and writable, you can try to open it for reading first to determine whether it exists or not, e.g.:
command 3<file 3<&- >>file
In most cases 3<&- may be omitted, since a program is not expected to start reading from file descriptor 3 without redirecting it first.
Proof of concept:
$ echo hello 3<file 3<&- >>file
bash: file: No such file or directory
$ ls file
ls: cannot access 'file': No such file or directory
$ touch file
$ echo hello 3<file 3<&- >>file
$ cat file
hello
$
This works because redirections are processed from left to right, and a redirection error causes the execution of a command to halt. So if file doesn't exist (or is not readable), 3<file fails, the shell prints an error message, and it stops processing this command. Otherwise, 3<&- closes the descriptor (3) associated with file in the previous step, and >>file reopens file for appending and redirects standard output to it.
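If you use this trick a lot, you could wrap it in a function; a minimal sketch, assuming bash (the name appendOnly is mine):
appendOnly() {
    local f=$1
    shift
    # fails with "No such file or directory" if $f doesn't exist
    "$@" 3<"$f" 3<&- >>"$f"
}
appendOnly file.txt echo hello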
I think a simple if as proposed in the other answers would be best. However, here are some more exotic solutions:
Using dd
dd can do the check and the redirection in one step:
echo hello | dd conv=nocreat of=file.txt
Note that dd prints statistics to stderr. You can silence them by appending 2> /dev/null, but then the "file does not exist" warning goes missing too.
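If your dd comes from GNU coreutils (an assumption; BSD dd lacks this option), status=none suppresses the statistics but still prints real errors such as the missing-file message:
echo hello | dd conv=nocreat of=file.txt status=none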
Using a custom function
If you do this kind of redirection very often, a reusable function may be appropriate. Some examples:
Run echo and redirect only if the file exists. Otherwise, fail with the error -bash: $(...): ambiguous redirect.
ifExists() { [ -f "$1" ] && printf %s "$1"; }
echo hello >> "$(ifExists file.txt)"
Always run echo, but print a warning and discard the output if the file does not exist.
ifExists() {
    if [ -f "$1" ]; then
        printf %s "$1"
    else
        echo "File $1 does not exist. Discarding output." >&2
        printf /dev/null
    fi
}
echo hello >> "$(ifExists file.txt)"
Please note that ifExists cannot handle all file names. If you deal with very unusual file names ending in newlines, the command substitution $(...) will remove those trailing newlines, and the resulting file name will differ from the one specified. To solve this problem you have to use a pipe.
As before, echo always runs, but the input is appended only if the file exists; otherwise a warning is printed and the piped input is discarded:
appendIfExists() {
    if [ -f "$1" ]; then
        cat >> "$1"
    else
        echo "File $1 does not exist. Discarding output." >&2
        return 1
    fi
}
echo hello | appendIfExists file.txt
Just check:
if [ -f file.txt ]; then
    echo "hello" >> file.txt
else
    echo "No file.txt" >&2
    exit 1
fi
There's no way in bash to alter how >> works; it will always (try to) create a file if it doesn't already exist.
For example:
if [ -f "filename" ]; then
echo "hello" >>filename
fi
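If you prefer a one-liner, the same check can be compressed; note that the || branch also fires in the unlikely case that the append itself fails:
[ -f file.txt ] && echo "hello" >> file.txt || { echo "No file.txt" >&2; exit 1; }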
Related
I have a lot of bash commands. Some of them fail for different reasons.
I want to check if some of my errors contain a substring.
Here's an example:
#!/bin/bash
if [[ $(cp nosuchfile /foobar) =~ "No such file" ]]; then
    echo "File does not exist. Please check your files and try again."
else
    echo "No match"
fi
When I run it, the error is printed to the screen and I get "No match":
$ ./myscript
cp: cannot stat 'nosuchfile': No such file or directory
No match
Instead, I wanted the error to be captured and match my condition:
$ ./myscript
File does not exist. Please check your files and try again.
How do I correctly match against the error message?
P.S. I've found a possible solution; what do you think about it?
out=`cp file1 file2 2>&1`
if [[ $out =~ "No such file" ]]; then
    echo "File does not exist. Please check your files and try again."
elif [[ $out =~ "omitting directory" ]]; then
    echo "You have specified a directory instead of a file"
fi
I'd do it like this:
# Make sure we always get error messages in the same language
# regardless of what the user has specified.
export LC_ALL=C
case $(cp file1 file2 2>&1) in
    #or use backticks; double quoting the case argument is not necessary
    #but you can do it if you wish
    #(it won't get split or glob-expanded in either case)
    *"No such file"*)
        echo >&2 "File does not exist. Please check your files and try again."
        ;;
    *"omitting directory"*)
        echo >&2 "You have specified a directory instead of a file"
        ;;
esac
This'll work with any POSIX shell too, which might come in handy if you ever decide to
convert your bash scripts to POSIX shell (dash is quite a bit faster than bash).
You need the 2>&1 redirection because executables normally write information that is not primarily meant for further machine processing to stderr.
You should use the >&2 redirections with the echos because what you're outputting there fits into that category.
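If you'd rather not export LC_ALL for the whole script, a variant is to scope the override to the one command whose output you match:
case $(LC_ALL=C cp file1 file2 2>&1) in
    *"No such file"*)
        echo >&2 "File does not exist. Please check your files and try again."
        ;;
esac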
PSkocik's answer is the correct one when you need to check for a specific string in an error message. However, if you came here looking for ways to detect errors in general:
I want to check whether or not a command failed
Check the exit code instead of the error messages:
if cp nosuchfile /foobar
then
    echo "The copy was successful."
else
    ret="$?"
    echo "The copy failed with exit code $ret"
fi
I want to differentiate different kinds of failures
Before looking for substrings, check the exit code documentation for your command. For example, man wget lists:
EXIT STATUS
    Wget may return one of several error codes if it encounters problems.

    0   No problems occurred.
    1   Generic error code.
    2   Parse error---for instance, when parsing command-line options.
    3   File I/O error.
    (...)
in which case you can check it directly:
wget "$url"
case "$?" in
0) echo "No problem!";;
6) echo "Incorrect password, try again";;
*) echo "Some other error occurred :(" ;;
esac
Not all commands are this disciplined in their exit status, so you may need to check for substrings instead.
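If you need both, here is a sketch that captures stderr and the exit status in one pass (the variable names are mine); the 2>&1 >/dev/null ordering routes stderr into the substitution while discarding stdout:
err=$(cp nosuchfile /foobar 2>&1 >/dev/null)
status=$?
if [ "$status" -ne 0 ] && [[ $err == *"No such file"* ]]; then
    echo "File does not exist. Please check your files and try again." >&2
fi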
Both examples:
out=`cp file1 file2 2>&1`
and
case $(cp file1 file2 2>&1) in
have the same issue: they mix stderr and stdout into one output, which is then examined. The problem arises when you run a complex command with interactive output, e.g. top or ddrescue, and you need to keep stdout untouched while examining only stderr.
To avoid this issue you can try this (works only in bash 4.2 and newer!):
shopt -s lastpipe
declare errmsg_variable="errmsg_variable UNSET"
command 3>&1 1>&2 2>&3 | read errmsg_variable
if [[ "$errmsg_variable" == *"substring to find"* ]]; then
    : # commands to execute only when an error occurs and the substring is found in stderr
fi
Explanation
This line
command 3>&1 1>&2 2>&3 | read errmsg_variable
redirects stderr into errmsg_variable (using a file-descriptor trick and a pipe) without mixing it with stdout. Normally, a pipe spawns its own subprocess, so after a command with pipes has run, any assignments made inside the pipeline are not visible in the main process, and examining them in the rest of the code is ineffective. To prevent this you have to change the standard shell behavior by using:
shopt -s lastpipe
which executes the last part of a pipeline in the current process, so:
| read errmsg_variable
assigns the content "pumped" into the pipe (in our case the error message) to a variable that resides in the main process. Now you can examine this variable in the rest of the code to look for a specific substring:
if [[ "$errmsg_variable" == *"substring to find"* ]]; then
#commands to execute only when error occurs and specific substring find in stderr
fi
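If your bash is older than 4.2, or you'd rather not change shell options, process substitution achieves the same thing, since read then runs in the current shell; a sketch using the same descriptor swap:
read errmsg_variable < <(command 3>&1 1>&2 2>&3)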
What I have to do is edit a script given to me that will check whether the user has write permission for a file named journal-file in the user's home directory. The script should take appropriate action if journal-file exists and the user does not have write permission to the file.
Here is what I have written so far:
if [ -w $HOME/journal-file ]
then
    file=$HOME/journal-file
    date >> file
    echo -n "Enter name of person or group: "
    read name
    echo "$name" >> $file
    echo >> $file
    cat >> $file
    echo "--------------------------------" >> $file
    echo >> $file
    exit 1
else
    echo "You do not have write permission."
    exit 1
fi
When I run the script it prompts me to input the name of the person/group, but after I press Enter nothing happens. It just sits there letting me keep typing and never continues past that part. Why is it doing this?
The statement:
cat >>$file
will read from standard input and write to the file. That means it will wait until you indicate end of file with something like CTRL-D. It's really no different from just typing cat at a command line and seeing that nothing happens until you enter something and it waits until you indicate end of file.
If you're trying to append another file to the output file, you need to specify its name, such as cat $HOME/myfile.txt >>$file.
If you're trying to get a blank line in there, use echo rather than cat, such as echo >>$file.
You also have a couple of other problems, the first being:
date >> file
since that will try to create a file called file (in your working directory). Use $file instead.
The second is the exit code of 1 in the case where what you're trying to do has succeeded. That may not be a problem now but someone using this at a later date may wonder why it seems to indicate failure always.
To be honest, I'm not really a big fan of the if ... then return else ... construct. I prefer fail-fast with less indentation and better grouping of output redirection, such as:
file=${HOME}/journal-file
if [[ ! -w ${file} ]] ; then
    echo "You do not have write permission."
    exit 1
fi
echo -n "Enter name of person or group: "
read name
(
    date
    echo "$name"
    echo
    echo "--------------------------------"
    echo
) >>${file}
I believe that's far more readable and maintainable.
It's this line
cat >> $file
cat is appending input from standard input (i.e. whatever you type) to $file
I think the part
cat >> $file
copies everything from stdin to the file. Maybe if you hit Ctrl+D (end of file), the script can continue.
1) You'd better first check whether the file exists:
[[ -e $HOME/journal-file ]] || \
    { echo "$HOME/journal-file does not exist"; exit 1; }
2) You have to replace cat >> $file with whatever you actually want to do with the file. This is the command that is blocking the execution of the script.
I'm trying to write a very simple bash script that modifies a number of files, and I'm outputting the results of each command to a log as a check on whether the command completed successfully. Everything appears to be working except that I can't pass cat with variables to my script -- I keep getting a cat: >>: No such file or directory error.
#! /bin/bash
file1="./file1"
file2="./file2"
check () {
    if ( $1 > /dev/null ) then
        echo " $1 : completed" | tee -a log
        return 0;
    else
        echo "ERR> $1 : command failed" | tee -a log
        return 1;
    fi
}
check "cp $file1 $file1.bak" # this works fine
check "sed -i s/text/newtext/g $file1" # this works, too
check "cat $file1 >> $file2" # this does not work
I've tried any number of combinations of quoting the command. The only way that I can get it to work is by using the following:
check $(cat $file1 >> $file2)
However, this does not pass the command itself to check, only the return value, so $1 in function check carries /dev/null and not the command performed, which is not the particular behaviour I want.
Just for completeness, the log file looks like:
cp ./file1 ./file1.bak : completed
sed -i s/text/newtext/g ./file1 : completed
ERR> cat ./file1 >> ./file2 : command failed
I'm sure the solution is rather simple, but it has eluded me for a few hours and no amount of Google searches has yielded any help. Thanks for having a look.
The problem is that the I/O redirection in your cat command is not being interpreted as I/O redirection but rather as a simple argument to the cat command. It's not cat so much as the I/O redirection that is causing grief. Trying a pipeline would also give you problems.
Options available to remedy this include:
check "cp $file1 $file2" # Use copy instead of cat and I/O redirection; clobbers file2
check "eval cat $file1 >> $file2" # Use eval to handle I/O redirection, piping, etc
If either $file1 or $file2 contains shell special characters, the eval option is dangerous.
The cp command substitutes something that works without needing I/O redirection. You could even use a (microscopic) shell script to handle the job — where your script executes the shell script, and the shell script handles the redirection:
#!/bin/sh
exec cat "${1:?}" >> "${2:?}"
This generates a default error message if either argument 1 or 2 is missing (but doesn't object to extra arguments).
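For example, assuming you saved that two-line script as append.sh next to your main script and made it executable (the name is my choice), the call becomes:
check "./append.sh $file1 $file2" # redirection now happens inside append.sh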
EDIT: the approach I first tried below doesn't quite work. There's another trick that can rescue this, even without resorting to bash magic, but it's getting ugly.
The >> redirection occurs at the wrong level, in this case. You wind up asking cat to read ./file1, then a file named >>, then ./file2. To get the redirection to occur you'll need to do it elsewhere (see below), or invoke eval.
I'd recommend not using eval; instead, rejigger the logic of function check. You can redirect check at the top level, e.g.:
check() {
    if "$@"; then
        echo " $* : completed" | tee -a log
        return 0
    else
        echo "ERR> $* : failed, status $?" | tee -a log
        return 1
    fi
}
check cp "$file1" "$file1.bak" # doesn't print anything
check sed -i s/text/newtext/g "$file1" >/dev/null # does print, so >/dev/null
check cat "$file1" >> "$file2"
(The double quotes here in the invocations of check are in case file1 and/or file2 ever acquire meta-characters like * or ;, or white space, etc.)
EDIT: as #cdm and #rici note, this fails for the append-to-file cases, because check's output is redirected even for the tee command. Again the redirection is happening at the wrong level. It's possible to fix this by adding another level of indirection:
append_to_file() {
    local fname
    fname="$1"
    shift
    "$@" >> "$fname"
}
check cp "$file1" "$file.bak"
check append_to_file /dev/null sed -e s/text/newtext/g "$file1"
check append_to_file "$file2" cat "$file1"
Now, though, the completed and failure messages log append_to_file at the front, which is really pretty klunky. I think I'd go back to eval instead.
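For the record, a sketch of the eval route using the rewritten check; it works as long as the file names contain no single quotes:
check eval "cat '$file1' >> '$file2'"
The log then records the command with a leading eval, which at least reflects what actually ran.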
for i in `cat ${DIR}/${INDICATOR_FLIST}`
do
    X=`ls $i|wc -l`
    echo $X
    echo $X>>txt.txt
done
I have code like this to check if a file is present in a directory or not, but it is not working and gives an error like this:
not foundtage001/dev/BadFiles/RFM_DW/test_dir/sales_crdt/XYZ.txt
You can see there is no space between "not found" and the file path.
I have code like this to check if a file is present in a directory or not ...
It seems that you're trying to read a list of files from ${DIR}/${INDICATOR_FLIST} and then trying to determine if those actually exist. The main problem is:
You're trying to parse the output of ls to figure out whether the file exists.
This is what results in the sort of output that you see: a mix of stderr and stdout. Use the test operator instead, as given below.
The following will tell you whether each file exists or not:
while read line; do
    [ -e "${line}" ] && echo "${line} exists" || echo "${line} does not exist";
done < ${DIR}/${INDICATOR_FLIST}
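A slightly more defensive variant of the same loop; IFS= and read -r preserve leading whitespace and backslashes in the file names:
while IFS= read -r line; do
    [ -e "$line" ] && echo "$line exists" || echo "$line does not exist"
done < "${DIR}/${INDICATOR_FLIST}"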
Your file ${INDICATOR_FLIST} has CRLF line terminators (DOS-style). You need to strip out the CR characters, as Unix convention is for LF-only line terminators.
You can tell this by the way "not found" is printed at the start of the line. The immediately preceding character (the last char of the filename) is a CR, which sends the cursor back to the start of the line.
Find a dos2unix utility, or run tr -d \\015 over it (this deletes all CR's indiscriminately).
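For example, writing the stripped copy to a temporary file and moving it back (the temporary name is my choice):
tr -d '\r' < "${DIR}/${INDICATOR_FLIST}" > "${DIR}/${INDICATOR_FLIST}.fixed" &&
    mv "${DIR}/${INDICATOR_FLIST}.fixed" "${DIR}/${INDICATOR_FLIST}"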
Maybe the location of your file isn't OK.
Take this example:
m:~ tr$ echo "1 2 3 4 5" > file.txt
m:~ tr$ cat file.txt
1 2 3 4 5
m:~ tr$ for i in `cat file.txt`;do echo $i ;done
1
2
3
4
5
m:~ tr$
You could print the file location before the for loop and check whether the file exists:
echo "location : ${DIR}/${INDICATOR_FLIST}"
if [ -e ${DIR}/${INDICATOR_FLIST} ];then echo "file exists ";else echo "file was not found";fi
for i in `cat ${DIR}/${INDICATOR_FLIST}`;
do
    let "X=`ls $i|wc -l`";
    echo $X;
    echo $X>>txt.txt;
done
The proper syntax is:
for i in `cat data-file`
do
    echo $i
done
which you are following, so the problem must be in the values of $DIR and $INDICATOR_FLIST; double-check the location of your file.
I am trying to write a loop, and this doesn't work:
for t in `ls $TESTS_PATH1/cmd*.in` ; do
    diff $t.out <($parser_test `cat $t`)
    # Print result
    if [[ $? -eq 0 ]] ; then
        printf "$t ** TEST PASSED **"
    else
        printf "$t ** TEST FAILED **"
    fi
done
This also doesn't help:
$parser_test `cat $t` | $DIFF $t.out -
diff shows that the output differs (strangely, I see the expected error line printed to the screen as if it went straight to stdout, rather than being caught by diff), but when running with a temporary file, everything works fine:
for t in `ls $TESTS_PATH1/cmd*.in` ; do
    # Compare output with template
    $parser_test `cat $t` 1> $TMP_FILE 2> $TMP_FILE
    diff $TMP_FILE $t.out
    # Print result
    if [[ $? -eq 0 ]] ; then
        printf "$t $CGREEN** TEST PASSED **$CBLACK"
    else
        printf "$t $CRED** TEST FAILED **$CBLACK"
    fi
done
I must avoid using a temporary file. Why doesn't the first loop work, and how do I fix it?
Thank you.
P.S. The *.in files contain erroneous command-line parameters for the program, and the *.out files contain the error messages the program must print for those parameters.
First, to your error, you need to redirect standard error:
diff $t.out <($parser_test `cat $t` 2>&1)
Second, to all the other problems you may not be aware of:
don't use ls with a for loop (it has numerous problems, such as unexpected behavior in filenames containing spaces); instead, use: for t in $TESTS_PATH1/cmd*.in; do
to support file names with spaces, quote your variable expansion: "$t" instead of $t
don't use backquotes; they are deprecated in favor of $(command)
don't use a subshell to cat one file; instead, just run: $parser_test <$t
use either [[ $? == 0 ]] (new syntax) or [ $? -eq 0 ] (old syntax)
if you use printf instead of echo, don't forget that you need to add \n at the end of the line manually
never use 1> $TMP_FILE 2> $TMP_FILE - this just overwrites stdout with stderr in a non-predictable manner. If you want to combine standard out and standard error, use: 1>$TMP_FILE 2>&1
by convention, ALL_CAPS names are used for/by environment variables. In-script variable names are recommended to be no_caps.
you don't need to use $? right after executing a command, it's redundant. Instead, you can directly run: if command; then ...
After fixing all that, your script would look like this:
for t in $tests_path1/cmd*.in; do
    if diff "$t.out" <($parser_test <"$t" 2>&1); then
        echo "$t ** TEST PASSED **"
    else
        echo "$t ** TEST FAILED **"
    fi
done
If you don't care for the actual output of diff, you can add >/dev/null right after diff to silence it.
Third, if I understand correctly, your file names are of the form foo.in and foo.out, and not foo.in and foo.in.out (like the script above expects). If this is true, you need to change the diff line to this:
diff "${t/.in}.out" <($parser_test <"$t" 2>&1)
In your second test you are capturing standard error, but in the first one (and the pipe example) stderr remains uncaptured, and perhaps that's the "diff" (pun intended).
You can probably add a '2>&1' in the proper place to combine the stderr and stdout streams.
E.g.:
diff $t.out <($parser_test `cat $t` 2>&1)
Also, you don't say what "doesn't work" means: does it not find a difference, or does it exit with an error message? Please clarify in case you need more info.