Retrieve bash version in AIX - bash

After the Shellshock vulnerability was detected in bash on Unix systems, I have to create a script, as part of my internship at a firm, in order to update bash.
Prerequisites to the installation of the updated IBM bash*.rpm are:
having bash installed (simple check)
having bash version under 4.2.50
I have a problem with the second part, since the true bash version is given by the command bash -version rather than by rpm -qi bash, which essentially gives the version/release of the installation package (and not necessarily the true bash version).
Basically, my script goes like this:
if [[ bash installed ]] ; then
    if [[ bash version installed is < 4.2.50 ]] ; then
        install bash version 4.2.50
    fi
fi
bash -version returns a lot of text, and I would like to pick out the bash version.
So far, I've used the following command:
$ bash -version | grep version | awk '{print $4}' | head -n 1
That returns:
4.2.50(1)-release
Is there any way to retrieve the real bash version? I've played around with the sed command with no success.

It seems like you're trying to get output like this:
$ bash -version | awk -F'[ (]' '/version/{print $4;exit}'
4.3.11

I guess you can directly use the $BASH_VERSION variable:
$ echo "$BASH_VERSION"
4.2.47(1)-release
From man bash:
Shell Variables
BASH_VERSION
Expands to a string describing the version of this instance of bash.
Then, to check whether the version is under or above 4.2.50, you can make use of sort -V to order them:
$ printf "%s\n%s\n" 4.2.50 $BASH_VERSION | sort -V
4.2.47(1)-release
4.2.50
$ printf "%s\n%s\n" 4.2.50 5.2.33 | sort -V
4.2.50
5.2.33
This should be enough for you to determine whether your current bash version is under or above the desired one; just a bit of head or tail is needed to pick out the final result, as sketched below.
From man sort:
-V, --version-sort
natural sort of (version) numbers within text
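For example, a minimal sketch of the whole check, assuming a sort that supports -V (GNU coreutils; AIX's native sort may not) and leaving the actual install step as a placeholder from the question:

#!/bin/bash
required=4.2.50
current=${BASH_VERSION%%(*}    # strips the suffix, e.g. "4.2.50(1)-release" -> "4.2.50"
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n 1)
if [ "$current" != "$required" ] && [ "$lowest" = "$current" ]; then
    echo "bash $current is older than $required, updating..."
    # placeholder: install the updated IBM bash*.rpm here
fi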

Related

Get sourced filename in busybox ash

The ash shell in busybox doesn't seem to implement any of the standard ways to get the filename that's being sourced. For instance:
testo:
#!/usr/bin/env -S busybox ash
echo hello whorl
echo using source
source ./sourceme
echo using .
. ./sourceme
sourceme:
echo underscore $_
echo bs $BASH_SOURCE
echo zero $0
# ./testo
hello whorl
using source
underscore ./testo
bs
zero ./testo
using .
underscore ./testo
bs
zero ./testo
I need something to put in sourceme that will get its own name/path.
This excellent answer contains a very clever way to accomplish this. I've adapted their solution here - insert this line into sourceme:
echo lsof `lsof -p $$ -Fn | tail -n1 | sed 's!^[^/]*!!g'`
And you get:
lsof /absolute/path/to/sourceme
Note: since we're talking about busybox here, this is an implementation of the above using busybox's lsof:
lsof | grep '^'$$ | tail -n1 | awk '{print $3}'
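For instance, a sketch of how that could be used inside sourceme to capture its own path into a variable (the variable name self is mine, and the awk field number assumes busybox lsof's pid/command/path column layout, which may differ between builds):

# inside sourceme, run under busybox ash
self=$(lsof | grep "^$$" | tail -n1 | awk '{print $3}')
echo "sourced from: $self"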
Note: if anyone finds a way to do this using some builtin mechanism from busybox ash, post an answer and I'll change the accepted answer to yours.

While-read nested loop giving me nothing in return [duplicate]

I want to write a script that loops through the output (array possibly?) of a shell command, ps.
Here is the command and the output:
$ ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh
3089 python /var/www/atm_securit 37:02
17116 python /var/www/atm_securit 00:01
17119 python /var/www/atm_securit 00:01
17122 python /var/www/atm_securit 00:01
17125 python /var/www/atm_securit 00:00
Converting it into a bash script (snippet):
for tbl in $(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh)
do
echo $tbl
done
But the output becomes:
3089
python
/var/www/atm_securit
38:06
17438
python
/var/www/atm_securit
00:02
17448
python
/var/www/atm_securit
00:01
How do I loop through every row like in the shell output, but in a bash script?
Never loop with for over the results of a shell command if you want to process it line by line, unless you change the internal field separator $IFS to \n. Otherwise the lines will be subject to word splitting, which leads to exactly the results you are seeing. For example, if you have a file like this:
foo bar
hello world
The following for loop
for i in $(cat file); do
echo "$i"
done
gives you:
foo
bar
hello
world
Even if you use IFS=$'\n', the lines might still be subject to filename expansion.
I recommend using while + read instead, because read reads input line by line.
Furthermore, I would use pgrep if you are searching for the pids belonging to a certain binary. However, since python might appear under different binary names, like python2.7 or python3.4, I suggest passing -f to pgrep, which makes it search the whole command line rather than just binaries called python. But this will also find processes that were started like cat foo.py. You have been warned! In the end you can refine the regex passed to pgrep as you wish.
Example:
pgrep -f python | while read -r pid ; do
echo "$pid"
done
or if you also want the process name:
pgrep -af python | while read -r line ; do
echo "$line"
done
If you want the process name and the pid in separate variables:
pgrep -af python | while read -r pid cmd ; do
echo "pid: $pid, cmd: $cmd"
done
You see, read offers a flexible and stable way to process the output of a command line-by-line.
Btw, if you prefer your ps .. | grep command line over pgrep use the following loop:
ps -ewo pid,etime,cmd | grep python | grep -v grep | grep -v sh \
| while read -r pid etime cmd ; do
echo "$pid $cmd $etime"
done
Note how I changed the order of etime and cmd, so that cmd, which can contain whitespace, can be read into a single variable. This works because read breaks the line into as many fields as you specified variables; the remaining part of the line, possibly including whitespace, gets assigned to the last variable specified on the command line.
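A quick illustration of that remainder behaviour, with made-up values:

$ echo "123 /usr/bin/python foo.py" | while read -r pid cmd; do echo "pid=$pid cmd=$cmd"; done
pid=123 cmd=/usr/bin/python foo.py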
I found you can do this by just using double quotes around the command substitution, fed to the loop with a here-string:
while read -r proc; do
#do work
done <<< "$(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh)"
This will feed each whole line to the loop rather than each whitespace-separated item.
When using for loops, bash splits the given list by whitespace by default; this can be changed by adjusting the so-called Internal Field Separator, or IFS for short.
IFS The Internal Field Separator that is used for word splitting after
expansion and to split lines into words with the read builtin command.
The default value is <space><tab><newline>.
For your example, we need to tell IFS to use newlines as the break point.
IFS=$'\n'
for tbl in $(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh)
do
echo $tbl
done
This example returns the following output on my machine.
668 /usr/bin/python /usr/bin/ud 03:05:54
27892 python 00:01
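Note that changing IFS like this affects the rest of the script, so you may want to save and restore it; a rough sketch:

oldIFS=$IFS
IFS=$'\n'
for tbl in $(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh)
do
echo "$tbl"
done
IFS=$oldIFS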
Here is another bash-based solution, inspired by a comment from @Gordon Davisson.
For this we need at least bash v1.13.5 (1992) or a later version, because process substitution (while read var; do { ... }; done < <(...), etc.) is used.
#!/bin/bash
while IFS= read -a oL ; do {    # reads a single line into oL
    echo "${oL}";               # prints that single line
};
done < <(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh);
unset oL;
Note: You can use any simple or complex command/command-set inside the <(...) which may have multiple output lines.
And here is a single/one-liner way:
while IFS= read -a oL ; do { echo "${oL}"; }; done < <(ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh); unset oL;
(As process substitution is not part of POSIX yet, it is not supported in many POSIX-compliant shells, nor in the POSIX mode of the bash shell. Process substitution has existed in bash since 1992 (28 years ago as of 2020) and in ksh86 before that, so POSIX arguably should have included it.)
If you want something similar to process substitution in a POSIX-compliant shell (e.g. sh, ash, dash, pdksh/mksh), look into named pipes.
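A rough sketch of that named-pipe approach in a POSIX shell (the temporary FIFO path is only illustrative):

#!/bin/sh
fifo=/tmp/psloop.$$    # illustrative temporary path
mkfifo "$fifo"
ps -ewo pid,cmd,etime | grep python | grep -v grep | grep -v sh > "$fifo" &
while IFS= read -r oL
do
echo "$oL"
done < "$fifo"
rm -f "$fifo"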

How to get bash ssh version number only?

Running ssh -V gives me:
OpenSSH_7.6p1, OpenSSL 1.1.0i-fips 14 Aug 2018
Now I would like to get just
7.6
to allow me to compare version numbers.
NOTE: I needed the ssh version number to allow me to compare it in my bash scripts. Since I didn't find an easy solution online, I thought it would be nice to document this for future users as a self-answered Q&A.
Could you please try the following (tested in GNU awk)?
ssh -V 2>&1 | awk -F'[_,]' '{print $2+0}'
Here $2+0 makes awk treat the field as a number, keeping the leading numeric part and dropping the text after it, which yields the exact version of ssh.
You may use awk also:
ssh -V 2>&1 | awk -F '[^0-9.]+' '{print $2}'
7.6
Using sed:
ssh -V 2>&1 | sed 's/OpenSSH_\([^p]*\)p.*/\1/'
explanation:
2>&1 : for some strange reason ssh prints the version info to stderr; we redirect to stdout to allow parsing.
\([^p]*\) : take all characters that are not a p.
Or with pure Bash Regex:
[[ $(ssh -V 2>&1) =~ [0-9.]+ ]];echo $BASH_REMATCH
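If the end goal is a version comparison inside a script, here is one possible sketch building on the BASH_REMATCH approach (the 7.0 threshold is just an example, and a plain numeric comparison treats 7.10 as less than 7.9, so sort -V is safer for multi-part versions):

[[ $(ssh -V 2>&1) =~ [0-9]+\.[0-9]+ ]] && ver=$BASH_REMATCH
if awk -v v="$ver" 'BEGIN{exit !(v >= 7.0)}'; then
echo "OpenSSH $ver is 7.0 or newer"
fi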

alternative to readarray, because it does not work on mac os x

I have a varsValues.txt file
cat varsValues.txt
aa=13.7
something=20.6
countries=205
world=1
languages=2014
people=7.2
oceans=3.4
And I would like to create 2 arrays, vars and values. It should contain
echo ${vars[@]}
aa something countries world languages people oceans
echo ${values[@]}
13.7 20.6 205 1 2014 7.2 3.4
I use
Npars=7
readarray -t vars < <(cut -d '=' -f1 varsValues.txt)
readarray -t values < <(cut -d '=' -f2 varsValues.txt)
for (( yy=0; yy<$Npars; yy++ )); do
eval ${vars[$yy]}=${values[$yy]}
done
echo $people
7.2
But I would like to do it without readarray, which does not work on Mac (OS X), and without IFS (the internal field separator).
Any other solution? awk? perl? which I can use in my bash script.
Thanks.
You could use a read loop.
while IFS=\= read var value; do
vars+=($var)
values+=($value)
done < varsValues.txt
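A quick check of the arrays this loop builds, using the values from the question:

$ echo "${vars[@]}"
aa something countries world languages people oceans
$ echo "${values[@]}"
13.7 20.6 205 1 2014 7.2 3.4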
Here's the awk version. Note that Npars is not hardcoded.
vars=($(awk -F= '{print $1}' varsValues.txt))
values=($(awk -F= '{print $2}' varsValues.txt))
Npars=${#vars[@]}
for ((i=0; i<$Npars; i++)); do
eval ${vars[$i]}=${values[$i]}
done
echo $people
You can use declare builtin:
declare -a vars=( $(cut -d '=' -f1 varsValues.txt) )
declare -a values=( $(cut -d '=' -f2 varsValues.txt) )
Although, as commenters have pointed out, declare -a is superfluous.
vars=( $(cut -d '=' -f1 varsValues.txt) )
values=( $(cut -d '=' -f2 varsValues.txt) )
Works just as well.
Try:
IFS=$'\n' vars=($(cut -d '=' -f1 varsValues.txt))
IFS=$'\n' values=($(cut -d '=' -f2 varsValues.txt))
perl -0777 -nE '@F = split /[=\r\n]/; say "@F[grep !($_%2), 0..$#F]"; say "@F[grep $_%2, 0..$#F]"' varsValues.txt
or by reading same file twice,
perl -F'=' -lane 'print $F[0]' varsValues.txt
perl -F'=' -lane 'print $F[1]' varsValues.txt
Let's start with this:
$ awk -F'=' '{values[$1]=$2} END{print values["people"]}' file
7.2
$ awk -F'=' '{values[$1]=$2} END{for (name in values) print name, values[name]}' file
languages 2014
oceans 3.4
world 1
something 20.6
countries 205
people 7.2
aa 13.7
Now - what else do you need to do?
Figured I'd toss this in here: https://raw.githubusercontent.com/AdrianTP/new-environment-setup/master/utils/readarray.sh
#!/bin/bash
# from: https://peniwize.wordpress.com/2011/04/09/how-to-read-all-lines-of-a-file-into-a-bash-array/
readarray() {
local __resultvar=$1
declare -a __local_array
let i=0
while IFS=$'\n' read -r line_data; do
__local_array[i]=${line_data}
((++i))
done < $2
if [[ "$__resultvar" ]]; then
eval $__resultvar="'${__local_array[@]}'"
else
echo "${__local_array[@]}"
fi
}
I keep this in a "utils" folder in my "new-environment-setup" GitHub repo, and I just clone it down and import it whenever I need to read a file into an array of lines on a new computer or a freshly wiped drive. It should thus act as a backfill for readarray's absence on Mac.
Import looks like:
# From: https://stackoverflow.com/a/12694189/771948
DIR="${BASH_SOURCE%/*}"
if [[ ! -d "$DIR" ]]; then DIR="$PWD"; fi
. "$DIR/utils/readarray.sh"
Usage looks like readarray "<output_var_name>" "<input_file_name>".
Yes it's a little rough. Sorry about that. It may not even work correctly anymore, but it did at one point, so I thought I would share it here to plant the idea of simply...writing your own backfill.
Mac uses an outdated version of bash by default (due to licensing reasons), which lacks the readarray command.
This solution worked best for me (Mac user):
Check version of bash (probably version 3 from 2007)
bash --version
Download latest version of bash
brew install bash
Open a new terminal (which will load the new environment), then check the new version of bash (should be version 5 or higher)
bash --version
Check location(s) of bash
which -a bash
Output:
/usr/local/bin/bash
/bin/bash
You can see that you now have two versions of bash. Usually, both of these paths are in your PATH variable.
Check PATH variable
echo $PATH
The /usr/local/bin/bash should be standing before the /bin/bash in this variable. The shell is searching for executables in the order of occurrence in the PATH variable and takes the first one it finds.
Make sure to use a bash shell (rather than zsh) when using this command.
Try out the readarray command by, for example, feeding it the output of the ls command via process substitution, generating an array that lists the filenames of the current folder:
readarray var < <(ls); echo ${var[@]}
Also, if you want to write a bash script make sure to use the correct Shebang:
#!/usr/local/bin/bash
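With that shebang in place, the original task from the question can use readarray directly again; a quick sketch using the varsValues.txt file from the question:

#!/usr/local/bin/bash
# uses the Homebrew-installed bash, which provides readarray/mapfile
readarray -t vars < <(cut -d '=' -f1 varsValues.txt)
readarray -t values < <(cut -d '=' -f2 varsValues.txt)
echo "${vars[@]}"
echo "${values[@]}"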
