Is there a way to only require one echo in this scenario? - bash

I have the following line of code:
for h in "${Hosts[@]}" ; do echo "$MyLog" | grep -m 1 -B 3 -A 1 "$h" >> /LogOutput ; done
My Hosts variable is a large array of hosts.
Is there a better way to do this that doesn't require an echo on each loop iteration? Like grepping a variable instead?

No echo, no loop
#!/bin/bash
hosts=(host1 host2 host3)
MyLog="
asf host
sdflkj
sadkjf
sdlkjds
lkasf
sfal
asf host2
sdflkj
sadkjf
"
re="${hosts[@]}"
egrep -m 1 -B 3 -A 1 ${re// /|} <<< "$MyLog"
Variant with one echo
echo "$MyLog" | egrep -m 1 -B 3 -A 1 ${re// /|}
Usage
$ ./test
sdlkjds
lkasf
sfal
asf host2
sdflkj
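The trick here is that assigning the array to a scalar joins the elements with spaces, and the `${re// /|}` expansion then rewrites every space as `|`, producing an extended-regex alternation. A minimal sketch of just that step:

```shell
hosts=(host1 host2 host3)
re="${hosts[@]}"      # scalar assignment joins the elements with spaces
echo "${re// /|}"     # every space becomes |, yielding an alternation
```

This prints host1|host2|host3, which is exactly the pattern egrep receives.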

One echo, no loops, and all grepping done in parallel, with GNU Parallel:
echo "$MyLog" | parallel -k --tee --pipe 'grep -m 1 -B 3 -A 1 {}' ::: "${hosts[@]}"
The -k keeps the output in order.
The --tee and the --pipe ensure that the stdin is duplicated to all processes.
The processes that are run in parallel are enclosed in single quotes.

printf your array to multiple lines that you can then grep? Something like:
printf '%s\n' "${Hosts[@]}" | grep -m 1 -B 3 -A 1 "$h" >> /LogOutput

Assuming you're on a GNU system; otherwise see info grep.
From grep --help:
grep --help | head -n1
Output
Usage: grep [OPTION]... PATTERN [FILE]...
So according to that you can do:
for h in "${Hosts[@]}" ; do grep -m 1 -B 3 -A 1 "$h" "$MyLog" >> /LogOutput ; done
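Another loop-free option, not from the answers above but a sketch of grep's own multi-pattern support: -f reads one pattern per line, and process substitution can feed it the array. Note that -m 1 would then apply across all patterns combined rather than once per host, so the behavior differs slightly from the loop. The data here is made up for illustration:

```shell
Hosts=(host1 host2)                        # stand-in for the real array
MyLog=$'one\nasf host1\ntwo\nasf host2'    # stand-in log contents
# -f takes one pattern per line; <(...) supplies the array as a "file"
grep -A 1 -f <(printf '%s\n' "${Hosts[@]}") <<< "$MyLog"
```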

Related

How to group all arguments as one positional argument for `xargs`

I have a script which takes in only one positional parameter which is a list of values, and I'm trying to get the parameter from stdin with xargs.
However, by default xargs passes each list item to my script as a separate positional parameter, e.g. when doing:
echo 1 2 3 | xargs myScript, it will essentially be myScript 1 2 3, and what I'm looking for is myScript "1 2 3". What is the best way to achieve this?
Change the delimiter.
$ echo 1 2 3 | xargs -d '\n' printf '%s\n'
1 2 3
Not all xargs implementations have -d though.
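Where -d is missing, one portable workaround is to skip xargs entirely and hand the whole of stdin to the script as a single argument via command substitution. A sketch, assuming the input fits in one argument (myScript stands for your script):

```shell
# "$(cat)" collapses all of stdin into one argument (trailing newline stripped)
echo 1 2 3 | sh -c 'myScript "$(cat)"'
```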
And not sure if there is an actual use case for this but you can also resort to spawning another shell instance if you have to. Like
$ echo -e '1 2\n3' | xargs sh -c 'printf '\''%s\n'\'' "$*"' sh
1 2 3
If the input can be altered, you can do this. But not sure if this is what you wanted.
echo \"1 2 3\"|xargs ./myScript
Here is the example.
$ cat myScript
#!/bin/bash
echo $1; shift
echo $1; shift
echo $1;
$ echo \"1 2 3\"|xargs ./myScript
1 2 3
$ echo 1 2 3|xargs ./myScript
1
2
3

grep -c kills script when no match using set -e

Basic example:
#!/bin/bash
set -e
set -x
NUM_LINES=$(printf "Hello\nHi" | grep -c "How$")
echo "Number of lines: ${NUM_LINES}" # never prints 0
Output:
++ grep -c 'How$'
++ printf 'Hello\nHi'
+ NUM_LINES=0
If there are matches, it prints the correct number of lines. Also grep "How$" | wc -l works instead of using grep -c "How$".
You can suppress grep's exit code by running : when it "fails". : always succeeds.
NUM_LINES=$(printf "Hello\nHi" | grep -c "How$" || :)
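Putting it together, a minimal sketch of the fixed script from the question:

```shell
#!/bin/bash
set -e
# grep -c still prints 0 when nothing matches, but exits with status 1;
# under `set -e` that status would abort the script, so `|| :` absorbs it.
NUM_LINES=$(printf "Hello\nHi" | grep -c "How$" || :)
echo "Number of lines: ${NUM_LINES}"   # now prints: Number of lines: 0
```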

append variables from a while loop into a command line option

I have a while loop, where A=1~3
mysql -e "select A from xxx;" | while read A; do
whatever
done
The MySQL command will return only numbers, one number per line. So the while loop here will have A=1, A=2, A=3.
I would like to append the integer number in the loop (here is A=1~3) into a command line to run outside the while loop. Any bash way to do this?
parallel --joblog test.log --jobs 2 -k sh ::: 1.sh 2.sh 3.sh
You probably want something like this:
mysql -e "select A from xxx;" | while read A; do
whatever > standard_out 2>standard_error
echo "$A.sh"
done | xargs parallel --joblog test.log --jobs 2 -k sh :::
Thanks for enlightening me. xargs works perfectly here:
Assuming we have A.csv (mimicking the mysql command):
1
2
3
4
We can simply do:
cat A.csv | while read A; do
echo "echo $A" > $A.sh
echo "$A.sh"
done | xargs -I {} parallel --joblog test.log --jobs 2 -k sh ::: {}
The above will print the following output as expected
1
2
3
4
Here -I {} and {} are the argument-list markers:
https://www.cyberciti.biz/faq/linux-unix-bsd-xargs-construct-argument-lists-utility/

Use argument twice from standard output pipelining

I have a command line tool which receives two arguments:
TOOL arg1 -o arg2
I would like to invoke it with the same argument provided for arg1 and arg2, and to make that easy for me, I thought I would do:
each <arg1_value> | TOOL $1 -o $1
but that doesn't work: $1 is not replaced, but is added once to the end of the command line.
An explicit example, performing:
cp fileA fileA
returns an error fileA and fileA are identical (not copied)
While performing:
echo fileA | cp $1 $1
returns the following error:
usage: cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file target_file
cp [-R [-H | -L | -P]] [-fi | -n] [-apvX] source_file ... target_directory
any ideas?
If you want to use xargs, the [-I] option may help:
-I replace-str
Replace occurrences of replace-str in the initial-arguments with names read from standard input. Also, unquoted blanks do not terminate input items; instead the separator is the newline character. Implies -x and -L 1.
Here is a simple example:
mkdir test && cd test && touch tmp
ls | xargs -I '{}' cp '{}' '{}'
Returns an Error cp: tmp and tmp are the same file
The xargs utility will duplicate its input stream to replace all placeholders in its argument if you use the -I flag:
$ echo hello | xargs -I XXX echo XXX XXX XXX
hello hello hello
The placeholder XXX (may be any string) is replaced with the entire line of input from the input stream to xargs, so if we give it two lines:
$ printf "hello\nworld\n" | xargs -I XXX echo XXX XXX XXX
hello hello hello
world world world
You may use this with your tool:
$ generate_args | xargs -I XXX TOOL XXX -o XXX
Where generate_args is a script, command or shell function that generates arguments for your tool.
The reason
each <arg1_value> | TOOL $1 -o $1
did not work, apart from each not being a command that I recognise, is that $1 expands to the first positional parameter of the current shell or function.
The following would have worked:
set -- "arg1_value"
TOOL "$1" -o "$1"
because that sets the value of $1 before calling your tool.
You can re-run a shell to perform variable expansion, with sh -c. The -c option takes an argument which is the command to run in a shell, performing expansion. The arguments that follow are bound to $0, $1, and so on, for use inside the -c command. For example:
sh -c 'echo $1, i repeat: $1' foo bar baz will execute echo $1, i repeat: $1 with $1 set to bar ($0 is set to foo and $2 to baz), finally printing bar, i repeat: bar.
The $1, $2 ... $N parameters are only visible inside bash scripts and functions, where they name the arguments passed in; they won't work the way you want them to here. Piping redirects stdout to stdin and is not what you are looking for either.
If you just want a one-liner, use something like
ARG1=hello && tool $ARG1 $ARG1
Using GNU parallel to use STDIN four times, to print a multiplication table:
seq 5 | parallel 'echo {} \* {} = $(( {} * {} ))'
Output:
1 * 1 = 1
2 * 2 = 4
3 * 3 = 9
4 * 4 = 16
5 * 5 = 25
One could encapsulate the tool using awk:
$ echo arg1 arg2 | awk '{ system("echo TOOL " $1 " -o " $2) }'
TOOL arg1 -o arg2
Remove the echo within the system() call and TOOL should be executed in accordance with requirements:
echo arg1 arg2 | awk '{ system("TOOL " $1 " -o " $2) }'
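One caveat: the interpolated fields are unquoted inside the command string, so arguments containing spaces or shell metacharacters would break. A hedged variant (still using echo as a stand-in for TOOL) that wraps each field in double quotes before the shell sees it:

```shell
# The escaped quotes survive into the shell that system() spawns,
# so $1 and $2 each arrive as a single word even with metacharacters.
echo 'arg1 arg2' | awk '{ system("echo TOOL \"" $1 "\" -o \"" $2 "\"") }'
```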
Double up the data from a pipe, and feed it to a command two at a time, using sed and xargs:
seq 5 | sed p | xargs -L 2 echo
Output:
1 1
2 2
3 3
4 4
5 5

BASH: Function isn't being run after installation into new server

Perhaps BASH differences? Worked fine in old server, not working in new.
It never echoes "made it" in the get_running_palaces() function but instead outputs
comm: /dev/fd/63: No such file or directory
comm: /dev/fd/63: No such file or directory
#!/bin/bash
TYPE=$1
get_palaces(){
for PALACE in $(ls -trI shared /home | sort); do
if [ -d "/home/$PALACE/palace" ]; then
echo $PALACE
fi
done
}
# comm -12 file1 file2 Print only lines present in both file1 and file2.
# comm -3 file1 file2 Print lines in file1 not in file2, and vice versa.
get_running_palaces(){
echo "made it";
PSFRONT_A=$(ps ax | grep '[p]sfront -p .* -r /home/.*/palace ' | sed 's| *\([0-9]*\).*/home/\(.*\)/palace.*$|\2|' | uniq | sort)
PSERVER_A=$(ps ax | grep '[p]server.* -f /home/.*/palace/psdata/pserver.conf ' | sed 's| *\([0-9]*\).*/home/\(.*\)/palace.*$|\2|' | sort)
ERRORS=$(comm -3 <(echo "${PSERVER_A[*]}") <(echo "${PSFRONT_A[*]}"))
if [ ! -z "$ERRORS" ]; then
comm -3 <(echo "${PSERVER_A[*]}") <(echo "${ERRORS[*]}")
else
echo "$PSERVER_A"
fi
}
case "$TYPE" in
online)
KNOWN_PALACES=$(get_palaces)
ERROR_LESS=$(get_running_palaces)
ONLINE=$(comm -12 <(echo "${KNOWN_PALACES[*]}") <(echo "${ERROR_LESS[*]}"))
[ ! -z "$ONLINE" ] && echo "$ONLINE"
;;
offline)
KNOWN_PALACES=$(get_palaces | sort)
ERROR_LESS=$(get_running_palaces)
OFFLINE=$(comm -3 <(echo "${KNOWN_PALACES[*]}") <(echo "${ERROR_LESS[*]}"))
[ ! -z "$OFFLINE" ] && echo "$OFFLINE"
;;
*)
get_palaces
;;
esac
exit 0;
Information:
New server:
uname -a
Linux www.ipalaces.org 2.6.32-274.7.1.el5.028stab095.1 #1 SMP Mon Oct 24 20:49:24 MSD 2011 x86_64 GNU/Linux
lsb_release -rd
-bash: lsb_release: command not found
bash --version
GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu)
Old server:
uname -a
Linux ipalaces.org 2.6.32-5-686 #1 SMP Mon Jan 16 16:04:25 UTC 2012 i686 GNU/Linux
lsb_release -rd
Description: Debian GNU/Linux 6.0.4 (squeeze)
Release: 6.0.4
bash --version
GNU bash, version 4.1.5(1)-release (i486-pc-linux-gnu)
Process substitution requires /dev/fd/* on Linux (how it's implemented varies with how Bash is built, I think). Maybe you have a screwed-up /dev structure at the point where this script is running? Stuff like that happens.
I've seen boot-time bash scripts fail from trying to generate a here document, which required /tmp which wasn't mounted yet (and would come from tmpfs later, so there is no such directory in the root volume or anywhere else).
Does process substitution work at all on that system? I mean if you log in to a system that is up and running, can you do things like
diff <(echo "a") <(echo "b")
?
If that doesn't work, you either have to fix /dev, or change how Bash is built (get it to use FIFOs for process substitution), or just change your script not to rely on process substitution.
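For illustration, the FIFO flavor of process substitution can be reproduced by hand with mkfifo. This is only a sketch; real process substitution also handles cleanup and corner cases:

```shell
tmp=$(mktemp -d) || exit 1
mkfifo "$tmp/a" "$tmp/b"
# Writers run in the background; each blocks until comm opens its pipe.
printf 'x\n' > "$tmp/a" &
printf 'x\n' > "$tmp/b" &
comm -12 "$tmp/a" "$tmp/b"   # prints the line common to both: x
rm -rf "$tmp"
```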
If you cannot figure out how to enable process substitution in Bash on the new server, perhaps you should refactor the script to use a more traditional processing model. Basically, that boils down to using temporary files.
ps ax |
grep '[p]sfront -p .* -r /home/.*/palace ' |
sed 's| *\([0-9]*\).*/home/\(.*\)/palace.*$|\2|' |
uniq | sort >/tmp/PSFRONT_A
ps ax |
grep '[p]server.* -f /home/.*/palace/psdata/pserver.conf ' |
sed 's| *\([0-9]*\).*/home/\(.*\)/palace.*$|\2|' |
sort >/tmp/PSERVER_A
ERRORS=$(comm -3 /tmp/PSERVER_A /tmp/PSFRONT_A)
rm /tmp/PSERVER_A /tmp/PSFRONT_A
Incidentally, this is completely POSIX compatible, so you could change the shebang line to #!/bin/sh while you are at it.
You should simplify the grep | sed and refactor the recurring functionality; also, proper use of temporary files calls for the use of a trap to remove the temporary files even if the script is interrupted by a signal midway through.
t=`mktemp -t -d palaces.XXXXXXXX` || exit 127
trap 'rm -rf $t' 0
trap 'exit 126' 1 2 3 5 15
psg () {
local re
re=$1
ps ax |
sed -n "\\%$re%"'s| *\([0-9]*\).*/home/\(.*\)/palace.*$|\2|p'
}
psg '[p]sfront -p .* -r /home/.*/palace ' |
uniq | sort >$t/PSFRONT_A
psg '[p]server.* -f /home/.*/palace/psdata/pserver\.conf ' |
sort >$t/PSERVER_A
comm -3 $t/PSERVER_A $t/PSFRONT_A >$t/ERRORS
if [ -s $t/ERRORS ]; then
comm -3 $t/PSERVER_A $t/ERRORS
else
cat $t/PSERVER_A
fi
The rest of the script can be adapted accordingly.
