How to grep for specific text after executing a command in the terminal? - bash

I'm executing
eb deploy my_env
after which quite a bit of text is shown in the command line. Upon completion of the command, I would like to grep for "update completed successfully" and, if it is not found, return an exit status of 1.
I was wondering how this is possible.
(If you're interested in why: I've found that eb deploy returns an exit status of 0 even when the deployment fails.)

This is a textbook case of what pipes are used for:
eb deploy my_env | grep "update completed successfully"
If you want to suppress the output of grep, use the -q flag. The result will act just as if eb deploy had an error code that actually matched your needs:
eb deploy my_env | grep -q "update completed successfully"
As you mention in your comments, eb deploy my_env can actually be any other pipeline of commands that contains "update completed successfully" somewhere in the final output. The exit status of the entire pipeline will be the exit status of grep, since it is the last command in the pipeline.
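To get the exit status of 1 you asked for, you can act on grep's result directly. A minimal sketch (the || branch runs only when grep finds no match):
eb deploy my_env | grep -q "update completed successfully" || exit 1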
This type of piping is actually a fundamental principle of UNIX design. UNIX commands are ideally small blocks that perform one single function and do it well. The idea is that it is much easier to pipe together some number of robust modules than to write a single very complex program that does everything.
Interesting Update
You can actually see the raw output from eb deploy and still use grep to determine the return code. The simplest way (which better illustrates chained pipes) is to use process substitution, which is available in bash, ksh and zsh:
eb deploy my_env | tee >(cat) | grep -q "update completed successfully"
tee duplicates its input to a file and to stdout; the stdout copy is piped to grep as before. Instead of specifying a file, we use a process substitution, >(cat), which just prints the output to the terminal.
For other, more portable methods, refer to this post on Unix/Linux stack exchange for more info.
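One such portable variant, as a sketch (it assumes the script has a controlling terminal, since the visible copy is written to /dev/tty):
eb deploy my_env | tee /dev/tty | grep -q "update completed successfully"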

Grep returns an exit code. To use only the exit code without outputting the match, use the following:
if <command> | grep -q "<text>"; then ...
-q is for quiet mode.

Redirect the output to a file. Then grep that file.
eb deploy my_env > deploy_results.txt
grep 'update completed successfully' deploy_results.txt
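If you also want the exit status the question asks for, a sketch building on the above (the 2>&1 is an assumption, in case eb writes some of its output to stderr):
eb deploy my_env > deploy_results.txt 2>&1
grep -q 'update completed successfully' deploy_results.txt || exit 1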

Related

Stopping a task in shell script if logs contain specific string

I am trying to run a command in a shell script and would like to exit it if the processing logs (not sure what you call the logs that are output to the terminal while the task is running) contain the string "INFO | Next session will start at"
I tried using grep, but because the string "INFO | Next session will start at" is not on stdout, it is not detected while the command is running.
The specific command I'm running is below
pipenv run python3 run.py --config accounts/user/config.yml
By 'processing logs' I mean the log output shown in the terminal before the stdout is displayed.
...
[D 211127 10:07:12 init:400] atx-agent version 0.10.0
[D 211127 10:07:12 init:403] device wlan ip: route ip+net: no such network interface
[11/27 10:07:12] INFO | Time delta has set to 00:11:51.
[11/27 10:07:13] INFO | Kill atx agent.
[11/27 09:59:32] INFO | Next session will start at: 10:28:30 (2021/11/27).
[11/27 09:59:32] INFO | Time left: 00:28:57.
I am trying to do this because the yml file I'm running has a limit on the times at which you can execute it, and I would like to exit the task if the time requirement is not met.
I tried to give as much context but if there's something missing please let me know.
This may work:
pipenv run python3 run.py --config accounts/user/config.yml |
sed "/INFO | Next session will start at/q"
sed prints the piped input until it matches the expression, then quits (q). The program will receive SIGPIPE (broken pipe) the next time it tries to write, and will (likely) exit. It's the same thing that happens when you run something like find | head.
You could also use kill in a shell wrapper:
sh -c 'pipenv run python3 run.py --config accounts/user/config.yml |
{ sed "/INFO | Next session will start at/q"; kill -- -$$; }'
Notes:
The program may print a different log if stdout is not a terminal.
If you want to match a literal string, you could use grep -Fm 1 PATTERN, but then the other log output will be hidden. grep fails if there is no match, which can be useful.
This will work in any shell, including zsh. zsh or bash can also be used for the kill wrapper.
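If the log lines are written to stderr rather than stdout (which the question seems to suggest), merge the streams first; a sketch:
pipenv run python3 run.py --config accounts/user/config.yml 2>&1 |
sed "/INFO | Next session will start at/q"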
There are other approaches. This thread focuses on tail, but is a useful reference: https://superuser.com/questions/270529/monitoring-a-file-until-a-string-is-found

Determining all the processes started with a given executable in Linux

I have this need to collect/log all the command lines that were used to start a process on my machine during the execution of a Perl script, which happens to be a test automation script. This Perl script starts the executable in question (MySQL) multiple times with various command lines, and I would like to inspect all of the command lines of those invocations. What would be the right way to do this? One possibility I see is to run something like "ps aux | grep mysqld | grep -v grep" in a loop in a shell script and capture the results in a file, but then I would have to do some post-processing on this, remove duplicates etc., and I could possibly miss some process command lines because of timing issues. Is there a better way to achieve this?
Processing the ps output can always miss some processes. It will only capture the ones currently existing. The best way would be to modify the Perl script to log each command before or after it executes it.
If that's not an option, you can get the child pids of the perl script by running:
pgrep -P $pid -a
-a gives the full process command. $pid is the pid of the perl script. Then process just those.
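A sketch of how that could look (run_tests.pl is a hypothetical name for the Perl script, and this assumes it is the only matching process):
pid=$(pgrep -f run_tests.pl)   # PID of the Perl script
pgrep -P "$pid" -a             # direct children with their full command lines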
You could use strace to log calls to execve.
$ strace -f -o strace.out -e execve perl -e 'system("echo hello")'
hello
$ egrep ' = 0$' strace.out
11232 execve("/usr/bin/perl", ["perl", "-e", "system(\"echo hello\")"], 0x7ffc6d8e3478 /* 55 vars */) = 0
11233 execve("/bin/echo", ["echo", "hello"], 0x55f388200cf0 /* 55 vars */) = 0
Note that strace.out will also show the failed execs (where execve returned -1), hence the egrep command to find the successful ones. A successful execve call does not return, but strace records it as if it returned 0.
Be aware that this is a relatively expensive solution, because it is necessary to include the -f option (follow forks), as perl will be doing the exec call from forked subprocesses. The option applies recursively, which means that your MySQL executable will itself run under strace. But for a one-off diagnostic test it might be acceptable.
Because the tracing is recursive, any exec calls made by your MySQL executable will also appear in strace.out, and you will have to filter those out. But the PID is shown for all calls, and if you were also to log any fork or clone calls (i.e. strace -e execve,fork,clone), you would see both the parent and child PIDs, in the form <parent_pid> clone(......) = <child_pid>, so you should then have enough information to reconstruct the process tree and decide which processes you are interested in.
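Putting the pieces together for the MySQL case, a sketch (test.pl is a hypothetical name for the automation script):
strace -f -o strace.out -e execve perl test.pl
grep mysqld strace.out | egrep ' = 0$'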

Loop through docker output until I find a String in bash

I am quite new to bash (barely any experience at all) and I need some help with a bash script.
I am using docker-compose to create multiple containers - for this example let's say 2 containers. The 2nd container will execute a bash command, but before that, I need to check that the 1st container is operational and fully configured. Instead of using a sleep command I want to create a bash script that will be located in the 2nd container and once executed do the following:
Execute a command and log the console output in a file
Read that file and check if a String is present. The command that I will execute in the previous step will take a few seconds (5 - 10) to complete, and I need to read the file after it has finished executing. I suppose I can add sleep to make sure the command has finished executing, or is there a better way to do this?
If the string is not present I want to execute the same command again until I find the String I am looking for
Once I find the string I am looking for I want to exit the loop and execute a different command
I found out how to do this in Java, but I need to do this in a bash script.
The docker-containers have alpine as an operating system, but I updated the Dockerfile to install bash.
I tried this solution, but it does not work.
#!/bin/bash
[command to be executed] > allout.txt 2>&1
until
    tail -n 0 -F /path/to/file | \
    while read LINE
    do
        if echo "$LINE" | grep -q $string
        then
            echo -e "$string found in the console output"
        fi
    done
do
    echo "String is not present. Executing command again"
    sleep 5
    [command to be executed] > allout.txt 2>&1
done
echo -e "String is found"
In your docker-compose file, make use of the depends_on option.
depends_on will take care of the startup and shutdown sequence of your multiple containers.
But it does not check whether a container is ready before moving on to the next container's startup. To handle this scenario, check this out.
As described in this link,
You can use tools such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
OR
Alternatively, write your own wrapper script to perform a more application-specific health check.
In case you don't want to make use of the above tools, check this out. It uses a combination of HEALTHCHECK and the service_healthy condition, as shown here. For a complete example, check this.
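As a rough sketch of that combination (the service names and the postgres image are assumptions, not taken from the question):
# docker-compose.yml (sketch)
services:
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10
  app:
    image: my-app:latest
    depends_on:
      db:
        condition: service_healthy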
Just:
while :; do
    # 1. Execute a command and log the console output in a file
    command > output.log
    # TODO: handle errors, etc.
    # 2. Read that file and check if a String is present.
    if grep -q "searched_string" output.log; then
        # Once I find the string I am looking for I want to exit the loop
        break
    fi
    # 3. If the string is not present, execute the same command again
    #    until the String is found
    # add e.g. sleep 0.1 so the loop delays a little and doesn't use 100% cpu
done
# ...and execute a different command
different_command
You can timeout a command with timeout.
Notes:
colon is a utility that returns a zero exit status, much like true. I prefer while : to while true; they mean the same thing.
The code presented should work in any POSIX shell.
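For example, to put an upper bound on each attempt (60 seconds is an arbitrary choice; command is the placeholder from the loop above):
timeout 60 command > output.log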

Piping to a process when the process doesn't exist?

Say I start with the following statement, which echoes a string into the ether:
$ echo "foo" 1>/dev/null
I then submit the following pipeline:
$ echo "foo" | cat -e - 1>/dev/null
I then leave the process out:
$ echo "foo" | 1>/dev/null
Why is this not returning an error message? The documentation on bash and piping doesn't seem to make direct mention of what might be the cause. Is an EOF sent before the first read from echo (or whatever process is running upstream of the pipe)?
A shell simple command is not required to have a command name. For a command without a command-name:
variable assignments apply to the current execution environment. The following will set two variables to argument values:
arg1=$1 arg3=$3
redirections occur in a subshell, but the subshell doesn't do anything other than initialize the redirect. The following will truncate or create the indicated file (if you have appropriate permissions):
>/file/to/empty
However, a simple command must contain at least one word, assignment, or redirection; a completely empty command is a syntax error (which is why it is occasionally necessary to use :).
Answer summarized from POSIX XCU §2.9.1
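To see the two cases in isolation, a small sketch:
>/tmp/some_file       # redirection-only command: creates/truncates the file, nothing runs
greeting=hello        # assignment-only command: sets the variable in the current shell
echo "$greeting"
In the question's pipeline, the part after | is a redirection-only command: it sets up the redirect in a subshell and exits without ever reading, so no error is reported.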

complex command within a variable

I am writing a script that among other things runs a shell command several times. This command doesn't handle exit codes very well and I need to know if the process ended successfully or not.
So what I was thinking is to analyze the stderr to find the word error (using grep). I know this is not the best thing to do; I'm working on it....
Anyway, the only way I can imagine is to put the stderr of that program in a variable and then use grep to, well, "grep" it and store the result in another variable. Then I can check whether that variable is populated, meaning that there was an error, and do my work.
The question is: how can I do this?
I don't really want to run the program inside a command substitution, because it has a lot of arguments (with special characters such as backslashes, quotes, double quotes...) and it's a memory- and I/O-intensive program.
Awaiting your reply, thanks.
Redirect the stderr of that command to a temporary file and check if the word "error" is present in that file.
mycommand 2> /tmp/temp.txt
grep error /tmp/temp.txt
Thanks @Jdamian, this was my answer too, in the end.
I asked my principal if I could write a temp file and it was allowed, so this is the end result:
... script
command to be launched -argument -other "argument" -other "other" argument 2>&1 | tee "$TEMPFILE"
ERRORCODE=$(grep -i error "$TEMPFILE")
if [ -z "$ERRORCODE" ]; then
    some actions ....
I haven't tested this yet because there are some other scripts involved that I need to write first.
What I'm trying to do is:
run the command, having its stderr redirected to stdout;
using tee, have the above result printed on screen and also written to the temp file;
have grep store the string error found in the temp file (if any) in a variable called ERRORCODE;
if that variable is populated (which means it has been created by grep), the script stops, quitting with status 1; else it continues.
What do you think?
If you don't need the standard output:
if mycommand 2>&1 >/dev/null | grep -q error; then
    echo an error occurred
fi
Note the order of the redirections: 2>&1 first points stderr at the current stdout (the pipe), then >/dev/null discards the original stdout, so grep sees only stderr.
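A sketch of the full pattern with the exit status described in the question (mycommand stands in for the real program; -i matches the case-insensitive grep used above):
if mycommand 2>&1 >/dev/null | grep -qi error; then
    exit 1
fi
some actions ....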
