Executing code in bash only if a string is not found in a file

I'm trying to execute a block of code only if the string SVN_BRANCH is not found in /etc/profile. My current code looks like the following:
a = cat /etc/profile
b = `$a | grep 'SVN_BRANCH'`
not_if "$b"
{
....
...
...
}
This fails as given. How should it be done?

grep can take a file as an argument; you don't need to cat the file and pipe its contents to grep. The pipe is entirely unnecessary.
This is an example of if else block with grep:
if grep -q "pattern" filepath; then
echo "do something"
else
echo "do something else"
fi
Note:
The -q option is for quiet operation. It hides grep's normal output (errors will still be printed).
If you don't want it to print any errors either, use this:
if grep -sq "pattern" filepath; then
Or this:
if grep "pattern" filepath >/dev/null 2>&1; then
>/dev/null redirects stdout to /dev/null.
2>&1 redirects stderr to wherever stdout points, so both streams are discarded.

You can use the exit code of the grep command to determine whether to execute your code block, like this (note the redirection order: >/dev/null 2>&1 discards both streams, whereas 2>&1 >/dev/null would leave stderr on the terminal):
grep SVN_BRANCH /etc/profile >/dev/null 2>&1 || {
....
....
}
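Putting this together for the original question, a minimal sketch; the export line is a hypothetical placeholder for the real code block:
if ! grep -q 'SVN_BRANCH' /etc/profile; then
# runs only when SVN_BRANCH is absent from /etc/profile
echo 'export SVN_BRANCH=trunk' >> /etc/profile # placeholder
fi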

Related

Bash script - Modify output of command and print into file

I'm trying to get the text output of a specified command, modify it somehow (e.g. add a prefix), and print it into a file (.txt or .log)
LOG_FILE=...
LOG_ERROR_FILE=..
command_name >> ${LOG_FILE} 2>> ${LOG_ERROR_FILE}
I would like to do it in one line: modify what the command returns and print it into the files, the same for the error output and the regular output.
I'm a beginner at bash scripting, so please be understanding.
Create a function to execute commands and capture stderr and stdout to variables.
function execCommand(){
local command="$@"
{
# the stream carries: the command's stderr, NUL, its stdout, NUL;
# each read -d '' consumes one NUL-terminated chunk
IFS=$'\n' read -r -d '' STDERR;
IFS=$'\n' read -r -d '' STDOUT;
} < <((printf '\0%s\0' "$($command)" 1>&2) 2>&1)
}
function testCommand(){
grep foo bar
echo "return code $?"
}
execCommand testCommand
echo err: $STDERR
echo out: $STDOUT
execCommand "touch /etc/foo"
echo err: $STDERR
echo out: $STDOUT
execCommand "date"
echo err: $STDERR
echo out: $STDOUT
output
err: grep: bar: No such file or directory
out: return code 2
err: touch: cannot touch '/etc/foo': Permission denied
out:
err:
out: Mon Jan 31 16:29:51 CET 2022
Now you can modify $STDERR & $STDOUT
execCommand testCommand && { echo "$STDERR" > err.log; echo "$STDOUT" > out.log; }
Explanation: Look at the answer from madmurphy
A pipe | and/or a redirect > is the answer, it seems.
So, as a bogus example to show what I mean: to get all interfaces that the command ip a spits out, you could pipe that to the processing commands and do output redirection into a file.
ip a | awk -F': *' '/^[0-9]/ { print $2 }' > my_file.txt
If you wish to send it to separate processing, you could redirect into a sub-shell:
$ command -V cd curl bogus > >(awk '{print $NF}' > stdout.txt) 2> >(sed 's/.*\s\(\w\+\):/\1/' > stderr.txt)
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
But it might be better for readability to process in a separate step:
$ command -V cd curl bogus >stdout.txt 2>stderr.txt
$ sed -i 's/.*\s//' stdout.txt
$ sed -i 's/.*\s\(\w\+\):/\1/' stderr.txt
$ cat stdout.txt
builtin
(/usr/bin/curl)
$ cat stderr.txt
bogus not found
There are a myriad of ways to do what you ask, and I guess the situation will have to decide what to use, but here's a start.
To modify the output and write it to a file, while modifying the error stream differently and writing to a different file, you just need to manipulate the file descriptors appropriately. eg:
#!/bin/sh
# A command that writes trivial data to both stdout and stderr
cmd() {
echo 'Hello stdout!'
echo 'Hello stderr!' >&2
}
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' > "$LOG_ERROR_FILE"; } 3>&1 |
sed 's/stdout/world/' > "$LOG_FILE"
The technique is to redirect the error stream to stdout so it can flow into the pipe (2>&1), and then redirect the output stream to an ancillary file descriptor, which is being redirected into a different pipe.
You can clean it up a bit by moving the file redirections into an earlier exec call. eg:
#!/bin/sh
cmd() {
echo 'Hello stdout!'
echo 'Hello stderr!' >&2
}
exec > "$LOG_FILE"
exec 2> "$LOG_ERROR_FILE"
# Filter both streams and redirect to different files
{ cmd 2>&1 1>&3 | sed 's/stderr/cruel world/' >&2; } 3>&1 | sed 's/stdout/world/'
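Applied back to the original question, both streams can be prefixed and logged in one line using process substitution (this is a sketch; it needs bash rather than plain sh, and the OUT:/ERR: prefixes are arbitrary):
command_name > >(sed 's/^/OUT: /' >> "${LOG_FILE}") 2> >(sed 's/^/ERR: /' >> "${LOG_ERROR_FILE}")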

bash stdout some information and pipe other from inside loop

How to print output from a loop which is piped to some other command:
for f in "${!myList[@]}"; do
echo $f > /dev/stdout # echoed to stdout, how to?
unzip -qqc $f # piped to awk script
done | awk -f script.awk
You can use /dev/stderr or file descriptor 2:
echo something >&2 | grep nothing
echo something >/dev/stderr | grep nothing
You can use another file descriptor that will be connected to stdout:
# for a single command group
{ echo something >&3 | grep nothing; } 3>&1
# or for everywhere
exec 3>&1
echo something >&3 | grep nothing
# same as above with named file descriptor
exec {LOG}>&1
echo 123 >&$LOG | grep nothing
You can also redirect the output to current controlling terminal /dev/tty (if there is one):
echo something >/dev/tty | grep nothing
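A sketch applying the extra file descriptor to the loop from the question (myList and script.awk are from the original post):
exec 3>&1
for f in "${!myList[@]}"; do
echo "$f" >&3 # bypasses the pipe, goes to the original stdout
unzip -qqc "$f" # goes into the pipe to awk
done | awk -f script.awk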

Set a command to a variable in bash script problem

Trying to run a command as a variable but I am getting strange results
Expected result "1" :
grep -i nosuid /etc/fstab | grep -iq nfs
echo $?
1
Unexpected result as a variable command:
cmd="grep -i nosuid /etc/fstab | grep -iq nfs"
$cmd
echo $?
0
It seems it returns 0 because the command itself was valid, not because of the actual outcome. How can I do this better?
You can only execute exactly one command stored in a variable. The pipe is passed as an argument to the first grep.
Example
$ printArgs() { printf %s\\n "$@"; }
# Two commands. The 1st command has parameters "a" and "b".
# The 2nd command prints stdin from the first command.
$ printArgs a b | cat
a
b
$ cmd='printArgs a b | cat'
# Only one command with parameters "a", "b", "|", and "cat".
$ $cmd
a
b
|
cat
How to do this better?
Don't execute the command using variables.
Use a function.
$ cmd() { grep -i nosuid /etc/fstab | grep -iq nfs; }
$ cmd
$ echo $?
1
Solution to the actual problem
I see three options to your actual problem:
Use a DEBUG trap and the BASH_COMMAND variable inside the trap (a sketch follows after the example below).
Enable bash's history feature for your script and use the history command.
Use a function which takes a command string and executes it using eval.
Regarding your comment on the last approach: You only need one function. Something like
execAndLog() {
description="$1"
shift
if eval "$*"; then
info="PASSED: $description: $*"
passed+=("${FUNCNAME[1]}")
else
info="FAILED: $description: $*"
failed+=("${FUNCNAME[1]}")
fi
}
You can use this function as follows
execAndLog 'Scanned system' 'grep -i nfs /etc/fstab | grep -iq noexec'
The first argument is the description for the log, the remaining arguments are the command to be executed.
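For the first of the three options above, here is a minimal sketch of a DEBUG trap that logs each command via BASH_COMMAND before it runs (cmd.log is a hypothetical log file):
#!/bin/bash
trap 'echo "about to run: $BASH_COMMAND" >> cmd.log' DEBUG
grep -iq nosuid /etc/fstab
echo "exit status: $?"
The trap fires before every simple command, including the echo itself.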
Using bash -x or set -x will allow you to see what bash executes:
> cmd="grep -i nosuid /etc/fstab | grep -iq nfs"
> set -x
> $cmd
+ grep -i nosuid /etc/fstab '|' grep -iq nfs
as you can see your pipe | is passed as an argument to the first grep command.

Bash - catch the output of a command

I am trying to check the output of a command and run different commands depending on the output.
count="1"
for f in "$#"; do
BASE=${f%.*}
# if [ -e "${BASE}_${suffix}_${suffix2}.mp4" ]; then
echo -e "Reading GPS metadata using MediaInfo file ${count}/${##} "$(basename "${BASE}_${suffix}_${suffix2}.mp4")"
mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep "©xyz" | head -n 1
if [[ $? != *xyz* ]]; then
echo -e "WARNING!!! No GPS information found! File ${count}/${##} "$(basename "${BASE}_${suffix}_${suffix2}.mp4")" || exit 1
fi
((count++))
done
MediaInfo is the command I am checking the output of.
If a video file has "©xyz" atom written into it the output looks like this:
$ mediainfo FILE | grep "©xyz" | head -n 1
$ ©xyz : +60.9613-125.9309/
$
otherwise it is null
$ mediainfo FILE | grep "©xyz" | head -n 1
$
The above code does not work and echoes the warning even when ©xyz is present.
Any ideas of what I am doing wrong?
The syntax you are using to capture the output of the mediainfo command is plain wrong. When using grep you can use its return code (the value of $?) directly in the if-conditional:
if mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep -q "©xyz" 2> /dev/null;
then
..
The -q flag instructs grep to run silently without writing any results to stdout, and the 2>/dev/null part suppresses any errors written to stderr, so the if-conditional passes when the string is present and fails when it is not.
$? is the exit code of the command: a number between 0 and 255. It's not related to stdout, where your value "xyz" is written.
To match in stdout, you can just use grep:
if mediainfo "${BASE}_${suffix}_${suffix2}.mp4" | grep -q "©xyz"
then
echo "It contained that thing"
else
echo "It did not"
fi
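If you also need the matched line itself (for example the coordinates), a sketch that captures it into a variable and tests for non-emptiness ($file stands in for the path expression used in the question):
xyz=$(mediainfo "$file" | grep "©xyz" | head -n 1)
if [[ -n "$xyz" ]]; then
echo "GPS found: $xyz"
else
echo "WARNING: no GPS information found" >&2
fi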

How to search an expression in a file from a bash script?

I have a bash script.
I need to look if "text" exists in the file and do something if it exists.
If you need to execute a command on all files containing the text, you can combine grep with xargs. For example, this would remove all files containing "yourtext":
grep -l "yourtext" * | xargs rm
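If filenames may contain spaces or newlines, a null-delimited variant is safer (this assumes GNU grep's -Z option together with xargs -0):
grep -lZ "yourtext" * | xargs -0 rm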
To search a single file, use if grep ...
if grep -q "yourtext" yourfile ; then
# Found
fi
Something like the following would do what you need.
grep -w "text" file > /dev/null
if [ $? -eq 0 ]; then
#Do something
else
#Do something else
fi
grep is your friend here
You can put the grep inside the if statement, and you can use the -q flag to silence it.
if grep -q "text" file; then
:
else
:
fi
Run cat file | grep "text" and check the return code with $?
Check out the excellent:
Advanced Bash-Scripting Guide
just use the shell
while read -r line
do
case "$line" in
*text* )
echo "do something here"
;;
* ) echo "text not found"
esac
done <"file"
