Use a different command variation on Solaris - bash

Since the default grep on Solaris doesn't have the -E option, I need my bash script to use a specific grep. This is what I do.
It works on the command line, but when I put it in the bash script, the script doesn't seem to pick up the alias and still uses the normal grep. (I do not want to change the whole $PATH.)
Please advise:
export isSolaris=`uname -a | grep -i "sunos"`
if [ -n "$isSolaris" ]; then
  alias grep="/usr/xpg4/bin/grep -E"
fi

bash and ksh don't expand an alias on lines that have already been read when the alias is defined, and non-interactive bash doesn't expand aliases at all unless you set shopt -s expand_aliases... which means you can't usefully define it in the same script that will use it. You can put it in another file and . (source) that into your script, though.
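For example (a sketch; solaris-grep.sh is a hypothetical file name):
# solaris-grep.sh: define the alias in a separate file
shopt -s expand_aliases   # bash only: scripts don't expand aliases by default
alias grep="/usr/xpg4/bin/grep -E"
...and then in the script that needs it:
. ./solaris-grep.sh
grep 'foo.*bar' somefile   # now runs /usr/xpg4/bin/grep -E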
Alternately, use a shell function.
mygrep() {
  if test -n "$isSolaris"; then
    /usr/xpg4/bin/grep -E ${1+"$@"}
  else
    grep ${1+"$@"}
  fi
}
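You'd then call it exactly as you'd call grep (logfile.txt is just a stand-in):
mygrep 'foo.*bar' logfile.txt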

Consider using egrep instead; it probably works everywhere, even though POSIX/SUS doesn't list it as a command any more. (SUS v2 from 1997 listed egrep as a 'legacy' utility; POSIX 1003.1:2004 omitted egrep).

Bash get the command that is piping into a script

Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the pipe(), dup2(), fork() and execve() syscalls. This doesn't provide a way to tell a program what the commands connected to its stdin are. Indeed, there's not guaranteed to be a string representing a pipeline of programs being used to generate stdin at all, even if your stdin really is a pipe connected to another program's stdout; the pipeline could have been set up by programs calling fork(), execve() and friends directly, with no shell involved.
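You can see this from the receiving side. A Linux-only sketch (inspect-stdin.sh is a hypothetical name):
#!/usr/bin/env bash
# inspect-stdin.sh: stdin is just an anonymous pipe; no command string is attached
ls -l /proc/self/fd/0
Running ls -l | grep -i readme | ./inspect-stdin.sh prints something like 0 -> pipe:[123456], a pipe inode rather than a command line.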
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$@"   # generate a shell command representing our arguments
while IFS= read -r line; do
  printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$@")                 # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell can have been running, or accepting new commands (or, failing that, running myscript.sh), since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script (it won't work with /bin/dash), and the calling shell needs a little prep work. Sometime before the script is run, do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
history -w
printf 'calling command was: %s\n' \
"$(history | rev |
grep "$0" ~/.bash_history | tail -1)"
Test run:
echo googa | ./myscript.sh &
Output (minus the "&"-associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
I think you should pass it as one string parameter, like this:
./myscript.sh "$(ls -l | grep -i readme)"
I think it is possible; have a look at this example:
#!/bin/bash
result=""
while IFS= read -r line; do
  result="$result$line"
done
echo "$result"
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)

Serialize a subset of environment variables

I'm trying to export some environment variables for use by a Tomcat process.
There's a few ways to do this (I know how to solve the overall problem), but it bugged me that I didn't know how to do this particular shell task.
Tomcat recommends that all your environment customizations be exported in "$CATALINA_HOME/bin/setenv.sh".
This whole thing is gonna be stuffed into a Docker container, so the only parameterizability will be via Docker env variables (let's assume for this task that I don't want to use volume mounts or create setenv.sh during the build process).
First, observe that docker run -e can be used to pass environment into the container:
🍔 docker run -eMY_VAR=SUP alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=a528b6fc264b
MY_VAR=SUP
no_proxy=*.local, 169.254/16
HOME=/root
If we wanted to copy all of that env into setenv.sh, it's as simple as:
SETENV="/usr/local/tomcat/bin/setenv.sh"
echo '#!/bin/sh' > "$SETENV"
export -p >> "$SETENV"
But copying everything somewhat defeats the point of setenv.sh -- which is, to give your tomcat process a clean environment, with only intentional customizations.
So, we can agree on a convention for "which env vars are ones that we want to pass through to setenv.sh". Everything prefixed with MY_.
And now we get to an interesting shell problem.
env | grep '^MY_' | sed 's/^MY_/export /'
This gets us pretty close. Output looks like:
🍔 docker run -e MY_VAR=hey alpine sh -c "env | grep '^MY_' | sed 's/^MY_/export /'"
export VAR=hey
So we've selected, from the env output, only the vars prefixed with MY_, and we can redirect that output to setenv.sh.
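For instance (a sketch, still with the flaw discussed next):
SETENV="/usr/local/tomcat/bin/setenv.sh"
{ echo '#!/bin/sh'; env | grep '^MY_' | sed 's/^MY_/export /'; } > "$SETENV"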
Why do I say "pretty close"? Looks like we're done, right?
Try this for size:
🍔 docker run -e MY_VAR='multi
quote> line
quote> string' alpine sh -c "env | grep '^MY_' | sed 's/^MY_/export /'"
export VAR=multi
The script only works for a simple subset of possibilities: we only managed to export the first line of our multi-line string.
For your convenience: env output for multi-line strings looks like this:
🍔 docker run -e MY_VAR='multi
line
string' alpine env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=0d0afaac6bec
MY_VAR=multi
line
string
no_proxy=*.local, 169.254/16
HOME=/root
I hesitate to try and tackle this using awk; there may be further string escaping complications that I have not considered.
I wonder whether there's a better way altogether to select & serialize a subset of exported environment?
EDIT: I negligently tagged this as a bash question, when really my intention was to pose an sh question. Specifically my intention is to get something that will work with no dependencies other than those that come with the alpine docker image. i.e. BusyBox sh, sed, grep, awk, env.
I've retained the bash tag so as not to punish the initial answer that was submitted when this was a bash-only question.
But I will give preference to an sh-compatible answer, and in particular to one that works with just the BusyBox UNIX utils.
So you need several things:
Enumerate the environment variables and select a subset.
For each selected environment variable, emit sh code that sets the variable to the desired value.
You can use export -p if you want to export all variables in a form that can be read back in, but parsing it to select only certain variables is harder. One way to make use of export -p is to unset the other variables. This only works if none of the environment variables is read-only, but you can work around that by running a separate shell instance (as opposed to a subshell).
To gather the list of variables to unset, you only need to get a superset of the list of all environment variables, and remove the ones you want to keep. You can easily do that by filtering the env output. I do that with a simple grep, you may want to use more complex code if your criteria for inclusion are more complex than “begins with a specific prefix”.
The occasional false positive due to a variable containing a newline followed by a valid variable name and an equal sign will only lead to calling unset on a non-existent variable, which does nothing. The desired variables are removed from the exclusion list, so the final output will never omit a desired variable.
excluded=$(env | LC_ALL=C sed -n 's/^\([A-Z_a-z][0-9A-Z_a-z]*\)=.*/\1/p' |
           grep -v '^MY_')
sh -c 'unset $1; export -p' sh "$excluded" >setenv.sh
Dash prints an extra export PATH (with no value) if PATH was in the environment when it was invoked. If that bothers you, change sh -c … to (unset PATH; sh -c …).
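For the earlier multi-line example, the generated setenv.sh would then contain something like (exact quoting varies by shell; this is the form a POSIX sh's export -p tends to produce):
export MY_VAR='multi
line
string'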
Assuming GNU grep:
grep -z '^MY_' </proc/self/environ
...will emit your environment variables in NUL-delimited form (newlines intact); -z (--null-data) makes grep treat both input and output records as NUL-terminated.
Similarly, if you have bash:
while IFS= read -r -d '' vardef; do
  [[ $vardef = MY_* ]] && printf '%s\0' "$vardef"
done </proc/self/environ
Note that /proc/self/environ reflects the environment handed to the process at exec time, so variables exported later in the same shell session won't show up there. To pick those up, run the loop in a freshly exec'd shell:
bash -c 'while IFS= read -r -d "" vardef; do
  [[ $vardef = MY_* ]] && printf "%s\0" "$vardef"
done </proc/self/environ'
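To turn those NUL-delimited records into a sourceable setenv.sh, one option is a bash-only sketch built on printf %q (note that %q may emit bash-style $'...' quoting for multi-line values, so the generated file needs bash rather than plain sh to read it back):
while IFS= read -r -d '' vardef; do
  [[ $vardef = MY_* ]] || continue
  printf 'export %q=%q\n' "${vardef%%=*}" "${vardef#*=}"
done </proc/self/environ >setenv.sh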
The alpine image doesn't ship with bash, though.
You can use this script to extract all MY_* variables, including ones with newlines in their values:
docker run -e MY_FOO=bar -e MY_VAR="multi' export MY_INJECTED='val" -e MY_VAR2=$'multi
0MY_line=val
string' alpine sh -c "awk -v RS='\06' -F= '/^MY_/{k=\$1; sub(/^[^=]+=/, \"\");
gsub(/\047/, \"\047\\\\\\047\047\"); printf \"export %s=\047%s\047\n\", k, \$0
}' /proc/self/environ"
This will output:
export MY_FOO='bar'
export MY_VAR='multi'\'' export MY_INJECTED='\''val'
export MY_VAR2='multi
0MY_line=val
string'
Here is how the awk works:
-v RS='\06': sets the record separator to \06, which works for the NUL byte as well (assuming \06 doesn't occur in any value)
-F=: sets the field separator to =
/^MY_/: only processes records starting with MY_
k=$1: stores the variable name ($1) in k
sub(): strips everything up to the first =, leaving just the value in $0
printf: formats the output so that it can be used in the $CATALINA_HOME/bin/setenv.sh file
\047 prints a single quote; gsub() rewrites each embedded quote as '\'' so values (like the MY_VAR injection attempt above) can't break out of the quoting
What about:
declare -p ${!MY_*}
and
declare -p ${!MY_*} | sed -r 's/^declare (-[^ ]*)* MY_/export /'
or
declare -p ${!MY_*} | sed 's/^declare \(-[^ ]*\)* MY_/export /'
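With MY_VAR=hey exported, declare -p ${!MY_*} prints something like:
declare -x MY_VAR="hey"
...and the sed rewrite turns that into:
export VAR="hey"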
EDIT: POSIX-compliant version:
Some env or printenv implementations accept a -0 option to terminate each entry with \0 rather than a newline. Thus:
env -0 | perl -ne 'BEGIN{$/="\0";$\="\n";$q="\047"}next unless /^MY_/;chomp;s/$q/$q\\$q$q/g;s/=/=$q/;s/$/$q/;print'
How it works:
$/: the input record separator
$\: the output record separator
$q: holds a single quote (\047), needed because the whole command is wrapped in single quotes
next: skips everything except "MY_" variables
chomp: removes the input record separator
s///: escapes embedded single quotes and wraps the value in quotes
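Against the earlier multi-line example, this emits, e.g.:
MY_VAR='multi
line
string'
(Prepending export to each record would be one more s/^/export / substitution.)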
EDIT: a variation of the perl version in POSIX shell (case replaces the non-POSIX [[ ... ]] test):
env -0 | xargs -0 sh -c 'for entry; do case $entry in MY_*) ;; *) continue;; esac; printf "%s=\047%s\047\n" "${entry%%=*}" "$(echo "${entry#*=}" | sed '\''s/\x27/\x27\\\x27\x27/g'\'' )"; done' -

Escaping Shebang in grep

In a shell script, I'm trying to find out if another file is a shell script. I'm doing that by grepping the shebang line, but my grep statement doesn't work:
if [[ $($(cat $file) | grep '^#! /bin' | wc -l) -gt 0 ]]
then
  echo 'file is a script!'
else
  echo "no script"
fi
I always get the error "bash: #!: command not found". I tried several ways to escape the shebang, but that didn't work.
Maybe you can help me with that? :)
Cheers,
narf
I would suggest that you change your condition to this:
if grep -q '^#! */bin' "$file"
The -q option to grep is useful in this case, as it tells grep to produce no output, exiting successfully if the pattern is matched. This can be used with if directly; there's no need to wrap everything in a [[ test (and especially no need for a useless use of cat). Incidentally, the error you saw comes from the nested $(cat $file): it expands to the file's contents, which the shell then tries to execute as a command, hence "bash: #!: command not found".
I also modified your pattern slightly so that the space between #! and /bin is optional.
It's worth noting that this will produce false positives in cases where the match is on a different line of the file, or when another shebang is used. You could work around the first issue by piping head -n 1 to grep, so that only the first line would be checked:
if head -n 1 "$file" | grep -q '^#! */bin'
If you are searching for a known list of shebangs, e.g. /bin/sh and /bin/bash, you could change the pattern to something like ^#! */bin/\(sh\|bash\).
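If you'd rather not involve grep at all, the same check can be done in plain shell by reading only the first line (a sketch):
IFS= read -r first_line < "$file"
case $first_line in
  '#!'*/bin*) echo 'file is a script!' ;;
  *) echo 'no script' ;;
esac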

How to redirect echoed shell commands as they are executed

Following up on the question In a shell script: echo shell commands as they are executed, I wonder how I can redirect the executed/echoed command to a file (or a variable).
I tried the usual stdout redirection, like ls $HOME > foo.txt, after setting the bash verbose mode, set -v, but only the output of ls was redirected.
PS: What I want is to have a function (call it "save_succ_cmdline()") that I could put in front of a (complex) command-line (e.g, save_succ_cmdline grep -m1 "model name" /proc/cpuinfo | sed 's/.*://' | cut -d" " -f -3) so that this function will save the given command-line if it succeeds.
Notice that the grep -m1 "model name" ... example above is just there to give an example of a command line with special characters (|, ', "). What I expect from such a function "save_succ_cmdline()" is that the actual command (after the function name, grep -m1 "model name"...) is executed, and that the function checks the exit code ($? equal to 0) to decide whether the command line should be saved. If the actual command succeeded, the function ("save_succ_cmdline") can save the command-line expression (with the pipes and everything else).
My plan is to use the bash -o verbose feature to capture and (temporarily) save the command line, but I have not been able to make it work.
Thanks in advance.
Your save_succ_cmdline function will only see the grep -m1 "model name" /proc/cpuinfo part of the command line, because the shell itself processes the pipe.
That being said if you just want the grep part then this will do what you want.
save_succ_cmdline() {
  "$@" && cmd="$*"   # run the arguments as a command; on success, save them
}
If you want the whole pipeline then you would need to quote the entire argument to save_succ_cmdline and use eval on "$@" (or similar), and I'm not sure you could make that work for arbitrary quoting.
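A sketch of that eval variant, fragile under arbitrary quoting as noted:
save_succ_cmdline() {
  eval "$1" && cmd=$1   # $1 is the whole pipeline, passed as one quoted string
}
save_succ_cmdline 'grep -m1 "model name" /proc/cpuinfo | sed "s/.*://" | cut -d" " -f -3'
echo "$cmd"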

Using grep, ls to get a file in bash

I'm trying to write a bash script which would locate a single file in the current directory. The file will be used later, but I don't need help there. I tried using ls and grep, but it doesn't work; I'm a bash newbie.
#!/bin/sh
#Here I need smt like
#trFile = ls | grep myString (but I get file not found error)
echo $trFile
Use shell wildcards, as in
ls *${pattern}*
And, to store the result in a variable, put it inside a $() structure (you can also use deprecated backticks if you like using deprecated functionality that doesn't nest well)
var=$( ls *${pattern}* )
Or, put your ls | grep in there (but that's bad practice, IMHO):
var=$( ls | grep -- "$pattern" )
#!/bin/sh
#
trfile=$( ls | grep myString )
echo $trfile
The $( xxx ) causes the commands within to be executed and the output returned.
I believe you are looking for something like this:
#!/bin/sh
trFile=`ls | grep "$myString"`
In order to run a command and capture its output, you need to put the command between backticks. The variable that will store the output, the equals sign, and the backtick need to be written together without spaces, as in my example.
Hope this helps.
If I've guessed right, you're trying to capture a filename into a shell variable by grepping the output of ls. Try this:
#!/bin/sh
trFile=`ls | grep "name_of_file"`
echo $trFile
Notice the back-tick operator surrounding the command: whatever it outputs gets captured.
Using the output of ls will bite you when you least expect it. Better to use globbing:
http://tldp.org/LDP/abs/html/globbingref.html
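For example, a glob-based sketch (myString stands in for whatever pattern you are matching):
#!/bin/sh
for f in *myString*; do
  [ -e "$f" ] || continue   # no match: the glob stays literal, so skip it
  trFile=$f
  break                     # keep the first match only
done
echo "$trFile"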
