I have a function that checks for duplicate values held within objects inside a json file. When duplicates are found the function returns something like this:
{
  "Basket1": [
    "Apple",
    "Orange"
  ],
  "Basket2": [
    "Apple",
    "Orange"
  ]
}
If no duplicates are found then it returns an empty object:
{}
Currently I am using -s in bash like so to check whether there are dups in the output:
<"$baskets" jq -L $HOME 'check_dups' > "$dups"
if [[ ! -s "$dups" ]];
then
echo -e "${RED}[Error]${NC} Duplicates found! Please review duplicates below" >&2
jq '.' "$dups"
else
echo -e "${GREEN}[SUCCESS]${NC} No duplicates found" >&2
fi
However, the empty object returned when no dups are found causes the -s file check in bash to succeed regardless. What would be the best way, using jq or bash, to check whether the output of this function is an empty object or not?
You can use error() to cause a failure if your input is identical to {}, and proceed otherwise.
jq '
if . == {} then error("empty document found") else . end
| ...rest of your processing here...
'
As a quick example:
<<<"{}" jq 'if . == {} then error("empty document found") else . end | {"output": (.)}'
...emits a nonzero exit status even without jq -e.
(Addressing a concern @Thomas brought up: error() produces a different exit status than an input parsing error; the former exits with 5 and the latter with 2, while compile errors exit with 3, so should there be a need to distinguish them, it's entirely possible.)
You can compare to the empty object {}, so . == {} or . != {} produce a boolean, which should do what you want.
Furthermore, you could use jq's -e option which sets the exit code based on the result, for integration into the shell:
<"$baskets" jq -L $HOME 'check_dups' > "$dups"
if jq -e '. == {}' <"$dups" >/dev/null
then ...
else ...
fi
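As a self-contained check of the -e wiring (a literal {} on stdin stands in for the contents of "$dups" here):

```shell
# -e sets jq's exit status from its last output: 0 when truthy,
# 1 when false or null, so '. == {}' maps "empty object" to success.
if jq -e '. == {}' <<<'{}' >/dev/null; then
  echo "no duplicates"      # the input compared equal to {}
else
  echo "duplicates found"
fi
```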
You could stick with your approach using the shell's -s test by arranging for your jq command to emit nothing instead of {}. This could be done using the following at the end of your jq program:
if . == {} then empty else . end
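A minimal sketch of that approach, with a literal {} piped in place of the real check_dups output and a temp file standing in for "$dups": emitting `empty` for an empty object leaves the output file truly empty, so the shell's -s test behaves as intended.

```shell
# `empty` produces no output at all, unlike printing {}
dups=$(mktemp)
printf '{}' | jq 'if . == {} then empty else . end' > "$dups"
if [[ -s "$dups" ]]; then
  echo "duplicates found"
else
  echo "no duplicates"     # file is zero bytes, -s is false
fi
```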
awkOut1="awkOut1.csv"
awkOut2="awkOut2.csv"
if [[ "$(-s $awkOut1)" || "$(-s $awkOut2)" ]]
The above 'if' check in my shell script gives me the error below:
-bash: -s: command not found
Suggestions anyone?
If you just have 2 files, I would do:
if [[ -e "$awkOut1" && ! -s "$awkOut1" ]] &&
[[ -e "$awkOut2" && ! -s "$awkOut2" ]]
then
echo both files exist and are empty
fi
Since [[ is a command, you can chain the exit statuses of several [[ commands together with && to ensure they are all true; within [[ itself (but not [), you can also use && to chain tests together.
Note that -s tests true if the file exists and is not empty, so I'm explicitly adding the -e tests: with existence established, ! -s means "empty" rather than "missing or empty".
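The -e / -s distinction can be seen on a scratch file (mktemp is used here purely for the demo):

```shell
# A freshly created temp file exists but has zero bytes
tmp=$(mktemp)
[[ -e "$tmp" && ! -s "$tmp" ]] && echo "exists and is empty"

echo data > "$tmp"                 # now it has content
[[ -s "$tmp" ]] && echo "exists and is non-empty"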
If you have more than 2:
files=( awkOut1.csv awkOut2.csv ... )
sum=$( stat -c '%s' "${files[@]}" | awk '{sum += $1} END {print sum}' )
if (( sum == 0 )); then
echo all the files are empty
fi
This one does not test for existence of the files.
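A plain-bash loop (no stat or awk needed) scales to any number of files and also reports missing ones; the filenames below are the hypothetical ones from the question, created empty here just for the demo.

```shell
# Set up two empty demo files in a temp directory
tmpdir=$(mktemp -d)
touch "$tmpdir/awkOut1.csv" "$tmpdir/awkOut2.csv"

files=( "$tmpdir/awkOut1.csv" "$tmpdir/awkOut2.csv" )
all_empty=true
for f in "${files[@]}"; do
  if [[ ! -e "$f" ]]; then
    echo "missing: $f" >&2
    all_empty=false
  elif [[ -s "$f" ]]; then
    all_empty=false            # exists and has content
  fi
done
"$all_empty" && echo "all the files are empty"
```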
You can use basic Bourne shell syntax and the test command (a single left bracket) to find out if either file is non-empty:
if [ -s "$awkOut1" -o -s "$awkOut2" ]; then
echo "One of the files is non-empty."
fi
When using single brackets, the -o means "or", so this expression is checking to see if awkOut1 or awkOut2 is non-empty.
If you have a whole directory full of files and you want to find out if any of them is empty, you could do something like this (again with basic Bourne syntax and standard utilities):
find . -empty | grep -q . && echo "some are empty" || echo "no file is empty"
In this line, find will print any files in the current directory (and recursively in any subdirectories) that are empty; grep will turn that into an exit status; and then you can take action based on success or failure to find empties. In an if statement, it would look like this:
if find . -empty | grep -q .; then
echo "some are empty"
else
echo "no file is empty"
fi
Here is one for GNU awk and the filefuncs extension. It checks every file given as a parameter and exits as soon as the first empty one is found:
$ touch foo
$ awk '
@load "filefuncs"                    # enable the extension
END {
for(i=1;i<ARGC;i++) { # all given files
if(stat(ARGV[i], fdata)<0) {         # stat each file
printf("could not stat %s: %s\n",    # nonexistent: report and exit
ARGV[i], ERRNO) > "/dev/stderr"
exit 1
}
if(fdata["size"]==0) { # file size check
printf("%s is empty\n",
ARGV[i]) > "/dev/stderr"
exit 2
}
}
exit
}' foo
Output:
foo is empty
I use:
md5sum * > checklist.chk # Generates a list of checksums and files.
and use:
md5sum -c checklist.chk # runs through the list to check them
How can I automate a PASS or FAIL state? I basically want to get a notification if something in my app changes, whether by a hacker or an unauthorized change by a developer. I want to write a script that will notify me of any changes to my code.
I found a few scripts online, but they only appear to work for single files; I have been unable to adapt them to work for multiple files with pass or fail states.
if [ "$(md5sum < File.name)" = "24f4ce42e0bc39ddf7b7e879a -" ]
then
echo Pass
else
echo Fail
fi
Reference:
Shell scripts and the md5/md5sum command: need to decide when to use which one
https://unix.stackexchange.com/questions/290240/md5sum-check-no-file
I would do something like:
for f in $(awk '{printf "%s ", $2}' checklist.chk); do
md5sum=$(grep "$f" checklist.chk | awk '{print $1}')
if [[ "$(md5sum < "$f")" = "$md5sum  -" ]]; then
echo Pass
else
echo Fail
fi
done
Store your checksums directly in the script. Then just run the md5sum -c.
Something like:
#!/bin/bash
get_stored_checksums() {
grep -P '^[0-9a-f]{32} .' <<-'EOF'
#########################################################
# Stored checksums in the script itself
# the output from md5sum for the files you want to guard
5e3f61b243679426d7f74c22b863438b Workbook1.xls
777a161c82fe0c810e00560411fb076e Workbook1.xlsx
# empty lines and comments - they're simply ignored
d41d8cd98f00b204e9800998ecf8427e abc def.xxx
# this very important file
809f911bcde79d6d0f6dc8801d367bb5 jj.xxx
#########################################################
EOF
}
#MAIN
cd /where/the/files/are
#run the md5sum in the check-mode
result=$( md5sum -c <(get_stored_checksums) )
if (( $? ))
then
#found some problems
echo "$result"
#mail -s "PROBLEM" security.manager@example.com <<<"$result"
#else
# echo "all OK"
fi
If something is wrong, you will see something like:
Workbook1.xls: OK
Workbook1.xlsx: OK
abc def.xxx: FAILED
jj.xxx: OK
md5sum: WARNING: 1 of 4 computed checksums did NOT match
Of course, you can change the get_stored_checksums function to anything else, such as:
get_stored_checksums() {
curl -s 'http://integrityserver.example.com/mytoken'
}
and you will fetch the guarded checksums from the remote server...
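As an aside, GNU md5sum's --quiet flag suppresses the per-file "OK" lines, so any verification output at all signals a mismatch. A hypothetical end-to-end demo (the filenames are made up for illustration):

```shell
# Build a checklist, then tamper with the guarded file
workdir=$(mktemp -d); cd "$workdir"
echo hello > guarded.txt
md5sum guarded.txt > checklist.chk
echo tampered >> guarded.txt              # simulate an unauthorized change

# --quiet: only FAILED lines are printed; exit status is nonzero on mismatch
if ! md5sum --quiet -c checklist.chk 2>/dev/null; then
  echo "integrity check failed"
fi
```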
In bash scripting, you can check for errors by doing something like this:
if some_command; then
printf 'some_command succeeded\n'
else
printf 'some_command failed\n'
fi
If you have a large script, constantly having to do this becomes very tedious and long. It would be nice to write a function so all you have to do is pass the command and have it output whether it failed or succeeded and exiting if it failed.
I thought about doing something like this:
function cmdTest() {
if $($@); then
echo OK
else
echo FAIL
exit
fi
}
So then it would be called by doing something like cmdTest ls -l, but when I do this I get:
ls -l: Command not found.
What is going on? How can I fix this?
Thank you.
Your cmdTest function is limited to simple commands (no pipes, no &&/||, no redirections) and their arguments. It's only a little more typing to use the far more robust
{ ... ;} && echo OK || { echo Fail; exit 1; }
where ... is to be replaced by any command line you might want to execute. For simple commands, the braces and trailing semi-colon are unnecessary, but in general you will need
them to ensure that your command line is treated as a single command for purposes of using &&.
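The idiom above applied to concrete (made-up) command lines; the braces make the whole pipeline feed a single exit status into &&. (Plain `echo Fail` is used on the failing line so both can run in one demo; a real script would use `{ echo Fail; exit 1; }`.)

```shell
# grep -q succeeds: the && branch fires
{ printf 'alpha\n' | grep -q alpha ;} && echo OK || echo Fail   # prints OK

# grep -q finds nothing: the || branch fires
{ printf 'alpha\n' | grep -q beta  ;} && echo OK || echo Fail   # prints Fail
```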
If you tried to run something like
cmdTest grep "foo" file.txt || echo "foo not found"
your function would only run grep "foo" file.txt, and if your function failed, then echo "foo not found" would run. An attempt to pass the entire x || y to your function like
cmdTest 'grep "foo" file.txt || echo "foo not found"'
would fail even worse: the entire command string would be treated as a command name, which is not what you intended. Likewise, you cannot pass pipelines
cmdTest grep "foo" file.txt | grep "bar"
because the pipeline consists of two commands: cmdTest grep "foo" file.txt and grep "bar".
Shell (whether bash, zsh, POSIX sh or nearly any other kind) is simply not a language that is suited to passing arbitrary command lines as an argument to another function to be executed. The only workaround, the eval command, is fragile at best and a security risk at worst, and cannot be discouraged strongly enough. (And no, I will not provide an example of how to use eval, even in a limited capacity.)
Use this function like this:
function cmdTest() { if "$@"; then echo "OK"; else return $?; fi; }
OR to print Fail also:
function cmdTest() { if "$@"; then echo "OK"; else ret=$?; echo "Fail"; return $ret; fi; }
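A quick sanity check of the second variant, using the shell built-ins true and false as stand-in commands:

```shell
cmdTest() { if "$@"; then echo "OK"; else ret=$?; echo "Fail"; return $ret; fi; }

cmdTest true     # prints OK
cmdTest false    # prints Fail, and the function returns false's status (1)
```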
I'm sort of a newbie when it comes to shell scripting. What am I doing wrong?
I'm trying to grep a running log file and take action if the grep returns data.
# grep for "success" in the log which will tell us if we were successful
tail -f file.log | grep success > named_pipe &
# send signal to my server to do something
/bin/kill -10 $PID
timeout=0;
while : ; do
OUTPUT=$(cat < named_pipe)
if test [-n] $OUTPUT
then
echo "output is '" $OUTPUT "'"
echo_success
break;
else
timeout=$((timeout+1))
sleep 1
if [ $timeout -ge $SHUTDOWN_TIMEOUT ]; then
echo_failure
break
fi
fi
done
I'm finding that even when "success" is not in the log, test [-n] $OUTPUT returns true. This is because apparently OUTPUT is equal to " ". Why is OUTPUT a single space rather than empty?
How can I fix this?
Here's a smaller test case for your problem:
output=""
if test [-n] $output
then
echo "Why does this happen?"
fi
This happens because when $output is empty or whitespace, it expands to nothing, and you just run test [-n].
test foo is true when foo is non-empty. It doesn't matter that your foo is a flag wrapped in square brackets.
The correct way to do this is without the brackets, and with quotes:
if test -n "$output"
then
...
fi
As for why $OUTPUT is a single space, that's simple: it isn't. echo just writes out its arguments separated as spaces, and you specified multiple arguments. The correct code is echo "output is '$OUTPUT'"
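A small demonstration of both behaviors side by side: with OUTPUT empty, the unquoted expansion vanishes and `test -n` (a single argument) is always true, while the quoted form is evaluated correctly.

```shell
OUTPUT=""
unquoted=false; test -n $OUTPUT   && unquoted=true   # expands to `test -n`: true
quoted=false;   test -n "$OUTPUT" && quoted=true     # `test -n ""`: false
echo "unquoted: $unquoted, quoted: $quoted"          # unquoted: true, quoted: false
```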