bash: parse number at the end of a long command

I'm writing a shell script that will run a command and parse out the last few numbers (they change every time).
The text to parse comes from running npm run server, which outputs:
Please visit http;//mysite.com/id/2318
I want to parse out the value and assign it to id:
2318
My attempt:
id=$(echo npm run server | sed -n 's:.*id\/\(.*\)\n.*:\1:p')
Nothing is being returned.

Addressing your original one-liner:
My attempt:
id=$(echo npm run server | sed -n 's:.*id\/\(.*\)\n.*:\1:p')
Nothing is being returned.
You could try this instead:
id=$(npm run server | sed -E -e 's:(^.*)(id/)(.*$):\3:g')
NOTE: This addresses only the sed portion of your original attempt, which is where the obvious problems are. It assumes nothing beyond the output string you quoted as coming from the command; i.e. I reproduced it using the following command:
echo 'Please visit http;//mysite.com/id/2318' | sed -E -e 's:(^.*)(id/)(.*$):\3:g'
So assuming that when you run npm run server, you get the output 'Please visit http;//mysite.com/id/2318' (which, by the way, I'd suggest should probably be http:// and not http;//), then this command should return just the id component.
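If the id is always the trailing run of digits, another option (just a sketch against your quoted sample line) is to let grep extract it directly:
echo 'Please visit http;//mysite.com/id/2318' | grep -Eo '[0-9]+$'
This prints 2318, and the same grep can sit at the end of the npm run server pipeline.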
Note that if it's stderr:
If the text you're trying to filter is coming out of stderr and not stdout, you may in fact need to use this instead:
id=$(npm run server &> >(sed -E -e 's:(^.*)(id/)(.*$):\3:g'))
For example, parsing the output of an unconfigured npm server:
06:38:23 ✗ :~ >npm run server
npm ERR! Darwin 15.5.0
06:38:23 ✗ :~ >npm run server | sed -E -e "s/(Darwin)/HELLO/g"
npm ERR! Darwin 15.5.0
06:38:56 ✗ :~ >npm run server &> >(sed -E -e "s/(Darwin)/HELLO/g")
npm ERR! HELLO 15.5.0
You can read about redirecting stderr in the bash manual:
Redirecting Standard Output and Standard Error
Bash allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be redirected to the file whose name is the expansion of word with this construct.
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred.
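If you want something more portable than the &> >(...) form, redirecting stderr into stdout before the pipe has the same effect here (a sketch, not tested against npm specifically):
id=$(npm run server 2>&1 | sed -E -e 's:(^.*)(id/)(.*$):\3:g')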

I'm assuming:
That you want to invoke npm run server as a command.
That this command at some point emits the given message on its stdout (as opposed to stderr, direct to the TTY, etc).
That this command does not self-background, and that you want it to keep running even after that output is given.
That it's not important that npm run server continue running after the shell script that started it has exited.
If all those assumptions are correct, consider a process substitution for this job:
#!/usr/bin/env bash
regex='Please visit .*/([[:digit:]]+)$'      # define a regex to search for in the output

exec 3< <(npm run server)                    # attach output from "npm run server" to FD 3

## the action is here: searching through output from the server until we find a match
while read -r server_output <&3; do          # loop reading a line at a time from the server
    if [[ $server_output =~ $regex ]]; then  # if a line matches the regex...
        id=${BASH_REMATCH[1]}                # ...then put the first capture group in a variable
        break                                # ...and stop looping further.
    fi
done

## after-the-fact: log success/failure, and set up any future output to be consumed
## ...so the server doesn't hang trying to write later output/logs to stdout w/ no readers.
if [[ $id ]]; then                           # if that variable contains a non-empty value
    echo "Detected server instance $id" >&2  # log it...
    cat </dev/fd/3 >/dev/fd/2 & cat_pid=$!   # start a background process to copy any further
                                             # stdout from the server to stderr...
    exec 3<&-                                # ...then close our own copy of the handle.
else
    echo "Unable to find an id in stdout of 'npm run server'" >&2
    exit 1
fi

## and we're done: if you like, run your other code here.
## also: if you want to wait until the server has exited
## (or at least closed its stdout), consider:
[[ $cat_pid ]] && wait "$cat_pid"
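To dry-run the loop without a real npm project, you could substitute a mock for npm run server; the fake_server function below is purely illustrative:
fake_server() {
    echo 'Please visit http://mysite.com/id/2318'
    sleep 5      # stand in for a server that keeps running briefly after printing the URL
}
exec 3< <(fake_server)   # use this in place of the exec line above
With that substitution the script should log "Detected server instance 2318" and carry on.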


Trying to exit main command from a piped grep condition

I'm struggling to find a good solution for what I'm trying to do.
So I have a CreateReactApp instance that is booted through a yarn run start:e2e. As soon as the output from that command has "Compiled successfully", I want to be able to run next command in the bash script.
Different things I tried:
if yarn run start:e2e | grep "Compiled successfully"; then
    exit 0
fi
echo "THIS NEEDS TO RUN"
This does appear to stop the logs, but it does not run the next command.
yarn run start:e2e | while read -r line;
do
    echo "$line"
    if [[ "$line" == *"Compiled successfully!"* ]]; then
        exit 0
    fi
done
echo "THIS NEEDS TO RUN"
yarn run start:e2e | grep -q "Compiled successfully";
echo $?
echo "THIS NEEDS TO RUN"
I've read about the differences between pipes and process substitutions, but I don't see a practical implementation for my use case.
Can someone enlighten me on what I'm doing wrong?
Thanks in advance!
EDIT: Because I got multiple proposed solutions and none of them worked, let me redefine my main problem a bit.
So yarn run start:e2e boots up a React app that has a sort of "watch" mode, so it keeps spewing out logs after the "Compiled successfully" part whenever changes occur to the source code, typechecks, and so on.
After the React part is booted (so once the log Compiled successfully is output), the logs do not matter anymore, but the localhost:3000 (that yarn compiles to) must remain active.
Then I run other commands after the yarn run to do some testing against localhost:3000.
So basically, here is what I want to achieve in pseudo-code (the pipe stuff in command A is very abstract and may not even look like the correct solution, but I'm trying to explain thoroughly):
# command A
yarn run dev | cmd_to_watch_the_output "Compiled successfully" | exit 0 -> localhost:3000 active but the shell is back in 'this' window
-> keep watching the output until Compiled successfully occurs
-> if it occurs, then the logs do not matter anymore and I want to run command B
# command B
echo "I WANT TO SEE THIS LOG"
... do other stuff ...
I hope this clears it up a bit more :D
Thanks already for the propositions!
If you want yarn run to keep running even after Compiled successfully, you can't just pipe its stdout to another program that exits after that line: that stdout needs to have somewhere to go so yarn's future attempts to write logs don't fail or block.
#!/usr/bin/env bash
case $BASH_VERSION in
    ''|[0-3].*|4.[012].*) echo "Error: bash 4.3+ required" >&2; exit 1;;
esac

exec {yarn_fd}< <(yarn run); yarn_pid=$!

while IFS= read -r line <&$yarn_fd; do
    printf '%s\n' "$line"
    if [[ $line = *"Compiled successfully!"* ]]; then
        break
    fi
done

# start a background process that reads future stdout from `yarn run`
cat <&$yarn_fd >/dev/null & cat_pid=$!
# close our copy of the FD so the background `cat` holds the only one
exec {yarn_fd}<&-

echo "Doing other things here!"
echo "When ready to shut down yarn, kill $yarn_pid and $cat_pid"

`grep` causes bash script to stop [duplicate]

I'm studying the content of this preinst script, which is executed before the package is unpacked from its Debian archive (.deb) file.
The script has the following code:
#!/bin/bash
set -e
# Automatically added by dh_installinit
if [ "$1" = install ]; then
if [ -d /usr/share/MyApplicationName ]; then
echo "MyApplicationName is just installed"
return 1
fi
rm -Rf $HOME/.config/nautilus-actions/nautilus-actions.conf
rm -Rf $HOME/.local/share/file-manager/actions/*
fi
# End automatically added section
My first query is about the line:
set -e
I think that the rest of the script is pretty simple: it checks whether the Debian/Ubuntu package manager is executing an install operation. If it is, it checks whether my application has just been installed on the system. If it has, the script prints the message "MyApplicationName is just installed" and ends (return 1 means that it ends with an "error", doesn't it?).
If the user is asking the Debian/Ubuntu package system to install my package, the script also deletes two directories.
Is this right or am I missing something?
From help set:
-e Exit immediately if a command exits with a non-zero status.
But it's considered bad practice by some (bash FAQ and irc freenode #bash FAQ authors). It's recommended to use:
trap 'do_something' ERR
to run do_something function when errors occur.
See http://mywiki.wooledge.org/BashFAQ/105
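A minimal sketch of that trap-based approach (the handler name and message are illustrative, and note that the ERR trap is a bash feature, not POSIX sh):
on_error() {
    echo "$0: command failed with status $? near line ${BASH_LINENO[0]}" >&2
}
trap on_error ERR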
set -e stops the execution of a script if a command or pipeline has an error - which is the opposite of the default shell behaviour, which is to ignore errors in scripts. Type help set in a terminal to see the documentation for this built-in command.
I found this post while trying to figure out what the exit status was for a script that was aborted due to a set -e. The answer didn't appear obvious to me; hence this answer. Basically, set -e aborts the execution of a command (e.g. a shell script) and returns the exit status code of the command that failed (i.e. the inner script, not the outer script).
For example, suppose I have the shell script outer-test.sh:
#!/bin/sh
set -e
./inner-test.sh
exit 62;
The code for inner-test.sh is:
#!/bin/sh
exit 26;
When I run outer-test.sh from the command line, my outer script terminates with the exit code of the inner script:
$ ./outer-test.sh
$ echo $?
26
As per bash - The Set Builtin manual, if -e/errexit is set, the shell exits immediately if a pipeline (which may consist of a single simple command), a list, or a compound command returns a non-zero status.
By default, the exit status of a pipeline is the exit status of the last command in the pipeline, unless the pipefail option is enabled (it's disabled by default).
If pipefail is enabled, the pipeline's return status is that of the last (rightmost) command to exit with a non-zero status, or zero if all commands exit successfully.
If you'd like to execute something on exit, try defining a trap, for example:
trap onexit EXIT
where onexit is your function to do something on exit, like the one below, which prints a simple stack trace:
onexit(){ while caller $((n++)); do :; done; }
There is also the similar option -E/errtrace, which makes the ERR trap be inherited by shell functions and subshells, e.g.:
trap onerr ERR
Examples
Zero status example:
$ true; echo $?
0
Non-zero status example:
$ false; echo $?
1
Negating status examples:
$ ! false; echo $?
0
$ false || true; echo $?
0
Test with pipefail being disabled:
$ bash -c 'set +o pipefail -e; true | true | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; false | false | true; echo success'; echo $?
success
0
$ bash -c 'set +o pipefail -e; true | true | false; echo success'; echo $?
1
Test with pipefail being enabled:
$ bash -c 'set -o pipefail -e; true | false | true; echo success'; echo $?
1
This is an old question, but none of the answers here discuss the use of set -e aka set -o errexit in Debian package handling scripts. The use of this option is mandatory in these scripts, per Debian policy; the intent is apparently to avoid any possibility of an unhandled error condition.
What this means in practice is that you have to understand under what conditions the commands you run could return an error, and handle each of those errors explicitly.
Common gotchas are e.g. diff (returns an error when there is a difference) and grep (returns an error when there is no match). You can avoid the errors with explicit handling:
diff this that ||
echo "$0: there was a difference" >&2
grep cat food ||
echo "$0: no cat in the food" >&2
(Notice also how we take care to include the current script's name in the message, and to write diagnostic messages to standard error instead of standard output.)
If no explicit handling is really necessary or useful, explicitly do nothing:
diff this that || true
grep cat food || :
(The use of the shell's : no-op command is slightly obscure, but fairly commonly seen.)
Just to reiterate,
something || other
is shorthand for
if something; then
: nothing
else
other
fi
i.e. we explicitly say other should be run if and only if something fails. The longhand if (and other shell flow control statements like while, until) is also a valid way to handle an error (indeed, if it weren't, shell scripts with set -e could never contain flow control statements!)
And also, just to be explicit, in the absence of a handler like this, set -e would cause the entire script to immediately fail with an error if diff found a difference, or if grep didn't find a match.
On the other hand, some commands don't produce an error exit status when you'd want them to. Commonly problematic commands are find (exit status does not reflect whether files were actually found) and sed (exit status won't reveal whether the script received any input or actually performed any commands successfully). A simple guard in some scenarios is to pipe to a command which does scream if there is no output:
find things | grep .
sed -e 's/o/me/' stuff | grep ^
It should be noted that the exit status of a pipeline is the exit status of the last command in that pipeline. So the above commands actually completely mask the status of find and sed, and only tell you whether grep finally succeeded.
(Bash, of course, has set -o pipefail; but Debian package scripts cannot use Bash features. The policy firmly dictates the use of POSIX sh for these scripts, though this was not always the case.)
In many situations, this is something to separately watch out for when coding defensively. Sometimes you have to e.g. go through a temporary file so you can see whether the command which produced that output finished successfully, even when idiom and convenience would otherwise direct you to use a shell pipeline.
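A sketch of that temporary-file pattern in POSIX sh (the file names and the find expression are illustrative; mktemp is not strictly POSIX but is available on Debian):
tmp=$(mktemp) || exit 1
if find things -name '*.conf' >"$tmp"; then
    # find itself succeeded; now check whether it actually produced any output
    grep . "$tmp" || echo "$0: no matching files found" >&2
else
    echo "$0: find failed" >&2
fi
rm -f "$tmp"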
I believe the intention is for the script in question to fail fast.
To test this yourself, simply type set -e at a bash prompt. Now, try running ls. You'll get a directory listing. Now, type lsd. That command is not recognized and returns an error code, so your bash session will exit (due to set -e).
Now, to understand this in the context of a 'script', use this simple script:
#!/bin/bash
# set -e
lsd
ls
If you run it as is, you'll get the directory listing from the ls on the last line. If you uncomment the set -e and run again, you won't see the directory listing as bash stops processing once it encounters the error from lsd.
The set -e option instructs bash to immediately exit if any command has a non-zero exit status. You wouldn't want to set this for your command-line shell, but in a script it's massively helpful. In all widely used general-purpose programming languages, an unhandled runtime error - whether that's a thrown exception in Java, or a segmentation fault in C, or a syntax error in Python - immediately halts execution of the program; subsequent lines are not executed.
By default, bash does not do this. This default behavior is exactly what you want if you are using bash on the command line; you don't want a typo to log you out! But in a script, you really want the opposite.
If one line in a script fails, but the last line succeeds, the whole script has a successful exit code. That makes it very easy to miss the error.
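For instance, without set -e a failure in the middle is silently swallowed by the successful last command:
sh -c 'false; true'; echo $?    # prints 0 even though false failed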
Again, what you want when using bash as your command-line shell and using it in scripts are at odds here. Being intolerant of errors is a lot better in scripts, and that's what set -e gives you.
Copied from https://gist.github.com/mohanpedala/1e2ff5661761d3abd0385e8223e16425; this may help you.
Script 1: without setting -e
#!/bin/bash
decho "hi"
echo "hello"
This will throw an error at decho, and the program continues to the next line.
Script 2: With setting -e
#!/bin/bash
set -e
decho "hi"
echo "hello"
# The shell processes up to decho "hi", then the program exits; it does not proceed further
It stops execution of a script if a command fails.
A notable exception is an if statement, e.g.:
set -e
false
echo never executed

set -e
if false; then
    echo never executed
fi
echo executed
false
echo never executed
cat a.sh
#! /bin/bash
#going forward report subshell or command exit value if errors
#set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
hi
0
With set -e commented out, we see the exit status of echo "hi" (the last command) being reported, and hi is printed.
cat a.sh
#! /bin/bash
#going forward report subshell or command exit value if errors
set -e
(cat b.txt)
echo "hi"
./a.sh; echo $?
cat: b.txt: No such file or directory
1
Now we see the cat b.txt error being reported instead, and no hi is printed.
So the default behaviour of a shell script is to ignore command errors, continue processing, and report the exit status of the last command. If you want to exit on error and report its status, use the -e option.

retaining stdin if script is run through pipe

I have the following script which is built to read in stdin and print it out:
#############
# myscript.sh
#############
#!/bin/sh
STDIN=$(less <&0 2>/dev/null)
echo $STDIN
This script works if run the "normal"/"expected" way:
echo Testing | ./myscript.sh
Testing
However, I need to run it differently, i.e. storing the script in a variable and running it from there. The problem is that when I do, I lose the stdin information.
[root@ip-test]# thescript=$(cat ./myscript.sh)
[root@ip-test]#
[root@ip-test]# echo Testing | echo "${thescript}" | sh
## I get no response
How can I resolve this?
Pass the script as a command line argument instead:
echo Testing | sh -c "$thescript"
In your version, sh reads the script text from its standard input, so that stream is already consumed and cannot also carry the data from echo Testing; with -c, the script arrives as an argument and stdin stays free for your data.
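With the myscript.sh shown above, this should print the piped text again:
$ thescript=$(cat ./myscript.sh)
$ echo Testing | sh -c "$thescript"
Testing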

Command output redirection works only from console, not from script

I would like to capture a command's output in a variable in bash, but also display it on the console.
exec 5>&1
STATUS=$(zypper info rar|tee >(cat - >&5))
echo $STATUS
It works as expected in the console. When called from within the following simple script, it also works as expected.
#!/bin/bash
exec 5>&1
STATUS=$(zypper info rar|tee >(cat - >&5))
echo $STATUS
But when called from within the following script, it produces an error.
#!/bin/sh
#
# description: currency_trader_tools installation script
# Currency_Trader software.
#
# prerequisities:
# OpenSuse Leap 42.1 x86_64
# clean installation of Minimal Server Selection (Text mode)
# install:
# Midnight Commander - linux file manager
# x11vnc - X11 vnc server
# xvfb-run - X11 virtual frame buffer server
# java - latest JDK environment rpm
#
# commit_id = "0f46a17011ca82c57ddb7f81636984c7bebd5798";
# build_revision_full = "Build 0144 created 2016-05-11 18:04:00 based on commit 0f46a17011ca82c57ddb7f81636984c7bebd5798";
# build_revision_short = "0f46a17";
# build_revision = "0144";
RETVAL=0
ZIP_FILE_VERSIONED="Currency_Trader_Bash_Scripts_0_9_1-r-0144-0f46a17.zip"
ZIP_FILE="Currency_Trader_Bash_Scripts_0_9_1.zip"
# See how we were called.
if [[ ! `whoami` = "root" ]]; then
echo "You must have administrative privileges to run this script"
echo "Try 'sudo ./currency_trader_tools_install'"
exit 1
fi
exec 5>&1
STATUS=$(zypper info rar|tee >(cat - >&5))
echo
echo $STATUS
case "$1" in
all)
install_all
;;
*)
echo $"Usage: currency_trader_tools_install {all}"
exit 1
esac
exit $RETVAL
Error is:
./Currency_Trader_Bash_Scripts_0_9_1-Install-Script: command substitution: line 34: syntax error near unexpected token `('
./Currency_Trader_Bash_Scripts_0_9_1-Install-Script: command substitution: line 34: `zypper info rar|tee >(cat - >&5))'
Any recommendation on how to do the same using sh and not bash?
>(...) is not part of the POSIX standard, and your second script starts with #!/bin/sh, so it is parsed by a shell that doesn't accept process substitution; you would need to use an explicit named pipe instead. However, managing this properly could get tricky. Just capture the output, and output it to the console explicitly.
STATUS=$(zypper info rar)
echo "$STATUS"
(The script is already outputting the captured output to the terminal; there doesn't seem to be any need for tee in the first place.)
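If you really do need the capture-and-display behaviour under plain sh, the named-pipe route mentioned above looks roughly like this (a sketch, not tested against zypper specifically):
fifo=${TMPDIR:-/tmp}/zypper_out.$$
mkfifo "$fifo" || exit 1
cat "$fifo" &                           # stream a live copy to the console
STATUS=$(zypper info rar | tee "$fifo")
wait                                    # let the background cat drain the pipe
rm -f "$fifo"
echo "$STATUS"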

How to get errors and their codes from drush in a shell (sh) script?

I'm launching some drush commands inside a shell (sh) script. How can I tell whether a command has terminated with no error? And, in case of error, how can I get the error and present it to the user executing the script?
As I said in my first comment, if you put together a shell script with drush commands, the user executing the script will see all the messages drush writes to the console. But if you want to write a more complex script, i.e. with checks for errors etc., then here is a short example that should get you started:
msg=$(drush blah 2>&1)
if [[ "$msg" =~ error* ]]
then
    echo "we had an error!"
else
    echo "success"
fi
msg=$(drush cc all 2>&1)
echo $msg
NOTE: this script assumes that you have a bash shell.
The construct $(command) captures the command's stdout in a variable. Since drush writes error messages to stderr, we also need to suffix the drush command with 2>&1, which redirects stderr to stdout. The if check is then basically a substring check for "error" in the $msg variable.
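If you would rather rely on the exit status than on parsing the message text (assuming your drush version returns a non-zero status on failure, which it normally does), something along these lines should also work:
if msg=$(drush cc all 2>&1); then
    echo "success"
else
    rc=$?
    echo "drush failed with exit code $rc:" >&2
    echo "$msg" >&2
fi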
Here is a good resource for Bash programming:
http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO.html
Hope this helps.
=========== EDIT =============
If you are in a Bourne shell, you would use the following to capture output in a variable:
msg=`drush cc all 2>&1`
You should be able to see which shell you are using by executing echo $SHELL.
