I am working on a shell script and I am getting an error saying "Test: argument expected". It's basically a sed command followed by a check for whether there were any errors.
Please see the code below:
sed "s|${var1}|${var2}|g" $FILE_PATH$FILE_NAME > /tmp/$FILE_NAME
if [ "$command_error" != 0 ] ; then
date
echo "Error $command_error reading file $WS_FILE"
echo "File Does not exist or is not readable"
exit 30
fi
What shell are you using?
I think you are missing a line:
command_error=$?
between the two blocks.
Be aware that many (all?) commands (e.g. echo $?) will change its value, so it's a good idea to assign $? to a temporary variable immediately after the command you care about.
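A minimal sketch of the corrected sequence, reusing the variables from the question:
sed "s|${var1}|${var2}|g" "$FILE_PATH$FILE_NAME" > "/tmp/$FILE_NAME"
command_error=$?    # capture the exit status right away, before anything else overwrites $?
if [ "$command_error" != 0 ] ; then
    date
    echo "Error $command_error reading file $WS_FILE"
    echo "File Does not exist or is not readable"
    exit 30
fi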
How do I create a Bash script that takes a file name as input? Then, if that file exists, it should print "File exists"; if not, print "File does not exist".
For example, if I ran ./do-i-exist.sh ./do-i-exist.sh, the output should be only 'File exists'
file="$1"
read answer
if [ $file != -$2 ]
then
echo "File exists"
else
echo "File does not exist"
fi
This is what I'm working with, but it's not working for me: whenever I add an extension like .sh, .txt or something similar, it won't find the file.
The test for whether a file exists can be done like this:
if [ -f "$file" ]
then
This tests for a regular file, not for other kinds of files like a directory.
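If you also care about the other cases, the related tests look like this (a small illustrative sketch; $file is the same variable as above):
if [ -e "$file" ]; then echo "exists (any kind of file)"; fi
if [ -f "$file" ]; then echo "is a regular file"; fi
if [ -d "$file" ]; then echo "is a directory"; fi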
This is how you can do it. Pass the name of the file when calling the script, like ./do-i-exist.sh file_path.
if [ -f "$1" ]
then
echo "File Exists"
else
echo "File does not exist"
fi
First of all, I want to thank everyone who tried to help. After three hard days of work, I found the answer; here it is:
#!/bin/bash
file="$#"
if [ -f $file ]
then
echo "File exists"
else
echo "File does not exist"
fi
Using this table:
Variable Name   Description
$0              The name of the Bash script
$1 - $9         The first 9 arguments to the Bash script
$#              Number of arguments passed to the Bash script
$@              All arguments passed to the Bash script
$?              The exit status of the most recently run process
$$              The process ID of the current script
$USER           The username of the user running the script
$HOSTNAME       The hostname of the machine
$RANDOM         A random number
$LINENO         The current line number in the script
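To see the difference in practice, here is a tiny demo (args-demo.sh is just a made-up name; run it as, say, ./args-demo.sh one two three):
#!/bin/bash
echo "Script name (\$0): $0"
echo "First argument (\$1): $1"
echo "Number of arguments (\$#): $#"
echo "All arguments (\$@): $@"
ls /no/such/path 2>/dev/null
echo "Exit status of the previous command (\$?): $?"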
I and other users were focused on using $1, which, from my understanding, refers to the first argument passed to the script, but for some reason it wasn't working, since more inputs needed to be passed.
As I mentioned in my previous comments, I didn't have control over the input. The input was hidden in a locked file, and I needed to feed my script to it.
From what we know, $0 is only used to check the file name, $1 gets the first argument, and $@ will just take anything (I guess).
I knew absolutely nothing about Bash and this was my first time ever using it, which is why it took me three days to solve this puzzle. This was part of a CTF, and just like me, many others may struggle in the future to understand how to write a script that adapts to a series of inputs from a second script.
This is how it was supposed to work:
I was given access to a very restricted server, and on this server I was given the encrypted-file.sh file. This file was supposed to be fed to /path/to/myfile.sh; then encrypted-file.sh would execute a second command to open a third, locked file hiding a flag in it.
This only works if the right Bash file uses the right variables, so that encrypted-file.sh runs without errors, which is what I accomplished here.
I used a while loop because it made sense in my case: I really needed an existing file for the script to work.
restore_file="$1"
while [ ! -f "$restore_file" ]
do
echo "File not found: $restore_file"
echo "Please provide a valid file:"
read restore_file
done
As written above, $1 is the first argument given to the script. In this case, if no argument is given or the argument is not an existing file, the script will prompt again.
By the way, use -d instead of -f to check for a directory.
I have the following bash script:
echo one
echo two
cd x
echo three
which fails at the third line, as there is no directory named x. However, after running the script, when I check $? it returns 0, even though the script had an error. How do I detect whether the script ran successfully or not?
Check the condition of directory existence in the script statements:
[ -d x ] && cd x || { echo "no such directory"; exit 1; }
Or put set -e after shebang line:
#!/bin/bash
set -e
echo one
echo two
cd x
echo three
You should end with an exit statement
echo one
echo two
cd x
exitCode=$?
echo three
exit $exitCode;
Then
./myscript
echo $?
1
I have searched all over and found no clear answer to this. Put simply, it doesn't appear to be a native feature of Bash, so I will give you the hard way.
Suppose you have a .sh script with multiple commands, you don't know which of them might error, but you want to check whether at least one has. You would need to put a $? check after literally every command and redirect it to a text file. Once it's in the text file you could format it like:
Command1 = 0 Success.
Command2 = 127 Fail.
Or you could just add the numbers to the file, run it through some kind of calculator to add everything together, and if the total is greater than zero then a command failed at some point. But this won't be overly useful if you want the exact error code and there is more than one failure.
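A rough sketch of that logging idea (command1, command2 and the log path are just placeholders):
#!/bin/bash
# Sketch only: command1 and command2 stand in for your real commands,
# and /tmp/command_status.log is just an illustrative file name.
log=/tmp/command_status.log
: > "$log"    # start with an empty log

command1; echo "command1 = $?" >> "$log"
command2; echo "command2 = $?" >> "$log"

# Add the recorded codes together; a nonzero total means at least one command failed.
total=$(awk -F'= ' '{ sum += $2 } END { print sum + 0 }' "$log")
if [ "$total" -gt 0 ]; then
    echo "At least one command failed; see $log"
fi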
UPDATED - This is the best way I could find.
You can put this at the top of your script file so that it exits as soon as any command fails:
set -euo pipefail
Feel free to read the manual pages.
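For instance, a minimal sketch that reuses the cd example from the earlier question:
#!/bin/bash
set -euo pipefail
echo one
echo two
cd x          # if directory x does not exist, set -e stops the script right here
echo three    # never reached when cd fails; the script exits with cd's nonzero status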
Example script:
#!/bin/bash
printf '1\n1\n1\n1\n' | ./script2*.sh >/dev/null 2>/dev/null
Shellcheck returns the following:
In script1.sh line 3:
printf '1\n1\n1\n1\n' | ./script2*.sh >/dev/null 2>/dev/null
^-- SC2211: This is a glob used as a command name. Was it supposed to be in ${..}, array, or is it missing quoting?
According to https://github.com/koalaman/shellcheck/wiki/SC2211, there should be no exceptions to this rule.
Specifically, it suggests "If you want to specify a command name via glob, e.g. to not hard code version in ./myprogram-*/foo, expand to array or parameters first to allow handling the cases of 0 or 2+ matches."
The reason I'm using the glob in the first place is that I append or update the date in the name of any script I have just created or changed. Interestingly enough, when I use "bash script2*.sh" instead of "./script2*.sh", the complaint goes away.
Have I fixed the problem or I am tricking shellcheck into ignoring a problem that should not be ignored? If I am using bad bash syntax, how might I execute another script that needs to be referenced to using a glob the proper way?
The problem with this is that ./script2*.sh may end up running
./script2-20171225.sh ./script2-20180226.sh ./script2-copy.sh
which is a strange and probably unintentional thing to do, especially if the script is confused by such arguments, or if you wanted your most up-to-date file to be used. Your "fix" has the same fundamental problem.
The suggestion you mention would take the form:
array=(./script2*.sh)
[ "${#array[#]}" -ne 1 ] && { echo "Multiple matches" >&2; exit 1; }
"${array[0]}"
and guard against this problem.
Since you appear to assume that you'll only ever have exactly one matching file to be invoked without parameters, you can turn this into a function:
runByGlob() {
if (( $# != 1 ))
then
echo "Expected exactly 1 match but found $#: $*" >&2
exit 1
elif command -v "$1" > /dev/null 2>&1
then
"$1"
else
echo "Glob is not a valid command: $*" >&2
exit 1
fi
}
whatever | runByGlob ./script2*.sh
Now if you ever have zero or multiple matching files, it will abort with an error instead of potentially running the wrong file with strange arguments.
I have a lot of bash commands. Some of them fail for different reasons.
I want to check if some of my errors contain a substring.
Here's an example:
#!/bin/bash
if [[ $(cp nosuchfile /foobar) =~ "No such file" ]]; then
echo "File does not exist. Please check your files and try again."
else
echo "No match"
fi
When I run it, the error is printed to screen and I get "No match":
$ ./myscript
cp: cannot stat 'nosuchfile': No such file or directory
No match
Instead, I wanted the error to be captured and match my condition:
$ ./myscript
File does not exist. Please check your files and try again.
How do I correctly match against the error message?
P.S. I've found a solution; what do you think about this one?
out=`cp file1 file2 2>&1`
if [[ $out =~ "No such file" ]]; then
echo "File does not exist. Please check your files and try again."
elif [[ $out =~ "omitting directory" ]]; then
echo "You have specified a directory instead of a file"
fi
I'd do it like this
# Make sure we always get error messages in the same language
# regardless of what the user has specified.
export LC_ALL=C
case $(cp file1 file2 2>&1) in
#or use backticks; double quoting the case argument is not necessary
#but you can do it if you wish
#(it won't get split or glob-expanded in either case)
*"No such file"*)
echo >&2 "File does not exist. Please check your files and try again."
;;
*"omitting directory"*)
echo >&2 "You have specified a directory instead of a file"
;;
esac
This'll work with any POSIX shell too, which might come in handy if you ever decide to
convert your bash scripts to POSIX shell (dash is quite a bit faster than bash).
You need the 2>&1 redirection on the cp call because executables normally send information that isn't primarily meant for further machine processing to stderr.
You should use the >&2 redirections with the echos because what you're outputting there falls into that category.
PSkocik's answer is the correct one when you need to check for a specific string in an error message. However, if you came looking for ways to detect errors:
I want to check whether or not a command failed
Check the exit code instead of the error messages:
if cp nosuchfile /foobar
then
echo "The copy was successful."
else
ret="$?"
echo "The copy failed with exit code $ret"
fi
I want to differentiate different kinds of failures
Before looking for substrings, check the exit code documentation for your command. For example, man wget lists:
EXIT STATUS
Wget may return one of several error codes if it encounters problems.
0 No problems occurred.
1 Generic error code.
2 Parse error---for instance, when parsing command-line options
3 File I/O error.
(...)
in which case you can check it directly:
wget "$url"
case "$?" in
0) echo "No problem!";;
6) echo "Incorrect password, try again";;
*) echo "Some other error occurred :(" ;;
esac
Not all commands are this disciplined in their exit status, so you may need to check for substrings instead.
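If the exit codes alone are not specific enough, you can combine both approaches; a rough sketch using the cp example from the question:
if ! out=$(cp nosuchfile /foobar 2>&1); then
    case $out in
        *"No such file"*) echo "File does not exist. Please check your files and try again." ;;
        *)                echo "The copy failed: $out" ;;
    esac
fi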
Both examples:
out=`cp file1 file2 2>&1`
and
case $(cp file1 file2 2>&1) in
have the same issue: they mix stderr and stdout into one output, which can then be examined. The problem arises when you run a complex command with interactive output, e.g. top or ddrescue, and you need to preserve stdout untouched and examine only stderr.
To avoid this issue you can try this (works only in bash >= 4.2!):
shopt -s lastpipe
declare errmsg_variable="errmsg_variable UNSET"
command 3>&1 1>&2 2>&3 | read errmsg_variable
if [[ "$errmsg_variable" == *"substring to find"* ]]; then
#commands to execute only when an error occurs and the specific substring is found in stderr
fi
Explanation
This line
command 3>&1 1>&2 2>&3 | read errmsg_variable
redirects stderr into errmsg_variable (using a file descriptor trick and a pipe) without mixing it with stdout. Normally a pipe spawns its own subprocess, and after a piped command has executed, any assignments made inside the pipeline are not visible in the main process, so examining them in the rest of the code won't work. To prevent this you have to change the standard shell behavior by using:
shopt -s lastpipe
which makes the last stage of a pipeline execute in the current shell process, so:
| read errmsg_variable
assigns the content "pumped" into the pipe (in our case the error message) to a variable that resides in the main process. Now you can examine this variable in the rest of the code to look for a specific substring:
if [[ "$errmsg_variable" == *"substring to find"* ]]; then
#commands to execute only when an error occurs and the specific substring is found in stderr
fi
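Putting it all together, a complete sketch that uses the cp example from the earlier question in place of the generic command:
#!/bin/bash
shopt -s lastpipe    # bash 4.2+; only effective when job control is off, as in a normal script

declare errmsg_variable="errmsg_variable UNSET"
# Swap stdout and stderr so that only the error message goes into the pipe.
cp nosuchfile /foobar 3>&1 1>&2 2>&3 | read -r errmsg_variable

if [[ "$errmsg_variable" == *"No such file"* ]]; then
    echo "File does not exist. Please check your files and try again."
fi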
The following script works fine on one server, but on the other it gives an error:
#!/bin/bash
processLine(){
line="$#" # get the complete first line which is the complete script path
name_of_file=$(basename "$line" ".php") # seperate from the path the name of file excluding extension
ps aux | grep -v grep | grep -q "$line" || ( nohup php -f "$line" > /var/log/iphorex/$name_of_file.log & )
}
FILE=""
if [ "$1" == "" ]; then
FILE="/var/www/iphorex/live/infi_script.txt"
else
FILE="$1"
# make sure file exist and readable
if [ ! -f $FILE ]; then
echo "$FILE : does not exists. Script will terminate now."
exit 1
elif [ ! -r $FILE ]; then
echo "$FILE: can not be read. Script will terminate now."
exit 2
fi
fi
# read $FILE using the file descriptors
# $IFS is a shell variable known as the internal field separator. It varies from version to version.
# Set loop separator to end of line
BACKUPIFS=$IFS
#use a temp. variable such that $ifs can be restored later.
IFS=$(echo -en "\n")
exec 3<&0
exec 0<"$FILE"
while read -r line
do
# use $line variable to process line in processLine() function
processLine $line
done
exec 0<&3
# restore $IFS which was used to determine what the field separators are
IFS=$BACKUPIFS
exit 0
I am just trying to read a file containing the paths of various scripts, checking whether those scripts are already running, and if not, running them. The file /var/www/iphorex/live/infi_script.txt is definitely present. I get the following error on my Amazon server:
[: 24: unexpected operator
infinity.sh: 32: cannot open : No such file
Thanks in advance for your help.
You should just initialize FILE with
FILE=${1:-/var/www/iphorex/live/infi_script.txt}
and then skip the existence check. If the file
does not exist or is not readable, the exec 0< will
fail with a reasonable error message (there's no point
in you trying to guess what the error message will be,
just let the shell report the error.)
I think the problem is that the shell on the failing server
does not like "==" in the equality test. (Many implementations
of test only accept one '=', but I thought even older bash
had a builtin that accepted two '==' so I might be way off base.)
I would simply eliminate your lines from FILE="" down to
the end of the existence check and replace them with the
assignment above, letting the shell's standard default
mechanism work for you.
Note that if you do eliminate the existence check, you'll want
to either add
set -e
near the top of the script, or add a check on the exec:
exec 0<"$FILE" || exit 1
so that the script does not continue if the file is not usable.
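So, as a sketch of that suggestion, the whole block from FILE="" down to the end of the existence check could shrink to:
#!/bin/bash
FILE=${1:-/var/www/iphorex/live/infi_script.txt}
exec 0<"$FILE" || exit 1   # the shell itself reports why $FILE could not be opened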
For bash (and ksh and others), you want [[ "$x" == "$y" ]] with double brackets. That uses the built-in expression handling. A single bracket calls out to the test executable which is probably barfing on the ==.
Also, you can use [[ -z "$x" ]] to test for zero-length strings, instead of comparing to the empty string. See "CONDITIONAL EXPRESSIONS" in your bash manual.
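For example, the empty-argument test at the top of the original script could be rewritten with -z instead of comparing against an empty string (a small sketch):
if [[ -z "$1" ]]; then
    FILE="/var/www/iphorex/live/infi_script.txt"
else
    FILE="$1"
fi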