Expansion of variable does not work when calling bash functions - bash

See also my previous question.
So... I have a script:
function go_loop (){
for i in `grep -v ^# $1`; do
$2
done
}
go_loop "/tmp/text.txt" "echo $i"
I should have in a result:
9
20
21
...
But apparently I only get an empty result. How can I feed the second input parameter to the loop?
Please don't advise me to do this:
for i in `grep -v ^# $1`; do
echo $i
done
I need two input parameters: the first is the name of the file, the second is the command to execute.

You need to eval the second parameter like this:
eval $2
and pass it like this:
go_loop "/tmp/text.txt" 'echo $i'

You can do this using exec inside your loop, which will run $2 as a bash command:
[root@box ~]# ./test.sh 1 ls
test.sh tests_passed.txt
[root@box ~]# cat test.sh
exec $2
The exec builtin command is used to replace the shell with a given program (executing it, not as a new process), or to set redirections for the program to execute or for the current shell.
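For example, a minimal sketch of both uses (/tmp/all_output.log is just an illustrative name):
exec > /tmp/all_output.log       # redirection only: the current shell's stdout now goes to the file
echo "this line ends up in the log"
exec wc -l /tmp/all_output.log   # with a command: replaces the shell; nothing after this line runs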

grep output different in bash script

I am creating a bash script that will simply use grep to look through a bunch of logs for a certain string.
Something interesting happens though.
For the purpose of testing, the log files are named test1.log, test2.log, test3.log, etc.
When using the grep command:
grep -oHnR TEST Logs/test*
The output contains all instances from all files in the folder as expected.
But when using the same command from within the bash script below:
#!/bin/bash
#start
grep -oHnR $1 $2
#end
The output displays the instances from only 1 file.
When running the script I am using the following command:
bash test.bash TEST Logs/test*
Here is an example of the expected output (what occurs when simply using grep):
Logs/test2.log:8:TEST
Logs/test2.log:20:TEST
Logs/test2.log:41:TEST
Logs/test.log:2:TEST
Logs/test.log:18:TEST
and here is an example of the output received when using the bash script:
Logs/test2.log:8:TEST
Logs/test2.log:20:TEST
Logs/test2.log:41:TEST
Can someone explain to me why this happens?
When you call the line
bash test.bash TEST Logs/test*
this will be translated by the shell to
bash test.bash TEST Logs/test1.log Logs/test2.log Logs/test3.log Logs/test4.log
(if you have four log files).
The command line parameters TEST, Logs/test1.log, Logs/test2.log, etc. will be given the names $1, $2, $3, etc.; $1 will be TEST, $2 will be Logs/test1.log.
You just ignore the remaining parameters and use just one log file when you use $2 only.
A correct version would be this:
#!/bin/bash
#start
grep -oHnR "$#"
#end
This passes all the parameters through properly and also takes care of nastiness like spaces in file names (your version would have had trouble with those).
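To see why the quoting matters, consider a file name containing a space (a contrived example):
set -- TEST "Logs/te st.log"   # simulate the script's arguments
grep -oHnR $@                  # word-splits into TEST, Logs/te, st.log -- two bogus paths
grep -oHnR "$@"                # each argument stays intact: TEST and "Logs/te st.log"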
To understand what's happening, you can use a simpler script:
#!/bin/bash
echo $1
echo $2
That outputs the first two arguments, as you asked for.
You want to use the first argument, and then use all the rest as input files. So use shift like this:
#!/bin/bash
search=$1
shift
echo "$1"
echo "$#"
Notice also the use of double quotes.
In your case, because you want the search string and the filenames to be passed to grep in the same order, you don't even need to shift:
#!/bin/bash
grep -oHnR -e "$#"
(I added the -e in case the search string begins with -)
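For instance, with a pattern that begins with a dash (a hypothetical pattern, just to illustrate the flag):
grep -oHnR -e "-->" Logs/test*   # without -e, "-->" would be taken for an option and grep would fail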
The unquoted * is being affected by globbing when you are calling the script.
Using set -x to output what is running from the script makes this more clear.
$ ./greptest.sh TEST test*
++ grep -oHnR TEST test1.log
$ ./greptest.sh TEST "test*"
++ grep -oHnR TEST test1.log test2.log test3.log
In the first case, bash is expanding the * into the list of file names versus the second case it is being passed to grep. In the first case you actually have >2 args (as each filename expanded would become an arg) - adding echo $# to the script shows this too:
$ ./greptest.sh TEST test*
++ grep -oHnR TEST test1.log
++ echo 4
4
$ ./greptest.sh TEST "test*"
++ grep -oHnR TEST test1.log test2.log test3.log
++ echo 2
2
You probably want to escape the wildcard on your bash invocation:
bash test.bash TEST Logs/test\*
That way it'll get passed through to grep as a *; otherwise the shell will have expanded it to every file in the Logs dir whose name starts with test.
Alternatively, change your script to allow more than one file on the command line:
#!/bin/bash
hold=$1
shift
grep -oHnR "$hold" "$@"
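With that change, the original invocation works as expected (output reused from the example above):
$ bash test.bash TEST Logs/test*
Logs/test2.log:8:TEST
Logs/test2.log:20:TEST
Logs/test2.log:41:TEST
Logs/test.log:2:TEST
Logs/test.log:18:TEST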

Ubuntu bash script output correct value - A script calls another script

I have a script that calls another script, but the output from the script seems to be wrong. Below is my code:
print_output.sh
#!/bin/bash
echo $1
total=$(eval $1 | awk '{sum+=$1} END {print sum}')
echo "Total is $total"
try.sh
#!/bin/bash
./print_output.sh "ssh $1 'cd /somelocation/logs; cat data-$1.txt' | grep ..."
Usage is: ./try.sh ukdry-01
I expect the output to be:
uk
along with the "Total is 11" in this case. But the output is appearing as "cat data.txt".
Where I am going wrong? As I expect the input to be "ukdry-01". The "cat data.txt" is a command that be evaluated (i.e eval $1) from print_output.sh and executed within the script.
data.txt contains for example
1 AUTH
2 AND
8 BOOLEAN
The idea is that the total is printed. The script should ssh to a hostname supplied by the user, i.e. ukdry-01, cat data-ukdry-01.txt in that location, log out, and run a grep on the data found. The filtered data, which is all integers, is fed into print_output.sh, which prints the total of column one. The usage has to be the script name, i.e. ./try.sh ukdry-01, and internally it calls ./print_output.sh with the ssh command etc.
Below is what had worked for me:
try.sh
#!/bin/bash
./print_output.sh $# "ssh $1 'cd /somelocation/logs; cat data-$1.txt' | grep ..."
Usage is: ./try.sh ukdry-01
The $@ copies the positional parameters, so I can make use of ukdry-01 as $1.
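For clarity, here is a minimal sketch of how print_output.sh could pick up that extra parameter. It assumes the hostname now arrives as $1 and the command string as $2, which is what the $@ call above produces:
#!/bin/bash
# print_output.sh (sketch)
host=$1    # e.g. ukdry-01
cmd=$2     # the ssh ... | grep ... pipeline built by try.sh
echo "$host"
total=$(eval "$cmd" | awk '{sum+=$1} END {print sum}')
echo "Total is $total"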

Reusing output from last command in Bash

Is the output of a Bash command stored in any register? E.g. something similar to $? capturing the output instead of the exit status.
I could assign the output to a variable with:
output=$(command)
but that's more typing...
You can use $(!!)
to recompute (not re-use) the output of the last command.
The !! on its own executes the last command.
$ echo pierre
pierre
$ echo my name is $(!!)
echo my name is $(echo pierre)
my name is pierre
The answer is no. Bash doesn't keep the output of commands in any parameter or anywhere in its memory. Also, you can only interact with bash through its defined interface; bash's private data is not accessible unless you hack it.
Very Simple Solution
One that I've used for years.
Script (add to your .bashrc or .bash_profile)
# capture the output of a command so it can be retrieved with ret
cap () { tee /tmp/capture.out; }
# return the output of the most recent command that was captured by cap
ret () { cat /tmp/capture.out; }
Usage
$ find . -name 'filename' | cap
/path/to/filename
$ ret
/path/to/filename
I tend to add | cap to the end of all of my commands. This way when I find I want to do text processing on the output of a slow running command I can always retrieve it with ret.
If you are on a Mac, and don't mind storing your output in the clipboard instead of writing it to a variable, you can use pbcopy and pbpaste as a workaround.
For example, instead of doing this to find a file and diff its contents with another file:
$ find app -name 'one.php'
/var/bar/app/one.php
$ diff /var/bar/app/one.php /var/bar/two.php
You could do this:
$ find app -name 'one.php' | pbcopy
$ diff $(pbpaste) /var/bar/two.php
The string /var/bar/app/one.php is in the clipboard when you run the first command.
By the way, the pb in pbcopy and pbpaste stands for pasteboard, a synonym for clipboard.
One way of doing that is by using trap DEBUG:
f() { bash -c "$BASH_COMMAND" >& /tmp/out.log; }
trap 'f' DEBUG
Now most recently executed command's stdout and stderr will be available in /tmp/out.log
Only downside is that it will execute a command twice: once to redirect output and error to /tmp/out.log and once normally. Probably there is some way to prevent this behavior as well.
Inspired by anubhava's answer, which I think is not actually acceptable as it runs each command twice.
save_output() {
exec 1>&3
{ [ -f /tmp/current ] && mv /tmp/current /tmp/last; }
exec > >(tee /tmp/current)
}
exec 3>&1
trap save_output DEBUG
This way the output of last command is in /tmp/last and the command is not called twice.
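With the trap in place, a session looks roughly like this (the echo is just an illustration):
$ echo hello
hello
$ cat /tmp/last    # the previous command's output, retrieved without re-running it
hello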
Yeah, why type extra lines each time; agreed.
You can pipe the output of one command into the input of another, but redirecting printed output back into input (1>&0) is not an option, at least not for multi-line output.
Also you won't want to write the same function again and again in each file. So let's try something else.
A simple workaround would be to use the printf builtin to store values in a variable.
printf -v myoutput "`cmd`"
such as
printf -v var "`echo ok;
echo fine;
echo thankyou`"
echo "$var" # don't forget the backquotes and quotes in either command.
Here is another customizable, general solution (which I use myself) that runs the desired command only once and captures its multi-line output in an array variable, line by line.
If you are not exporting the files anywhere and intend to use it locally only, you can have the terminal set up the function declaration. You have to add the function to your ~/.bashrc file or your ~/.profile file. In the second case, you need to enable Run command as login shell from Edit > Preferences > yourProfile > Command.
Make a simple function, say:
get_prev() # preferably pass the commands in quotes. Single commands might still work without.
{
# option 1: create an executable with the command(s) and run it
#echo $* > /tmp/exe
#bash /tmp/exe > /tmp/out
# option 2: if your command is a single command (no pipes, no semicolons); it may still not run correctly in some cases.
#echo `"$*"` > /tmp/out
# option 3: (I actually used below)
eval "$*" > /tmp/out # or simply "$*" > /tmp/out
# return the command(s) outputs line by line
IFS=$(echo -en "\n\b")
arr=()
exec 3</tmp/out
while read -u 3 -r line
do
arr+=($line)
echo $line
done
exec 3<&-
}
So what we did in option 1 was write the whole command to a temporary file /tmp/exe, run it, save the output to another file /tmp/out, and then read the contents of /tmp/out line by line into an array.
Options 2 and 3 are similar, except that the commands are executed directly, without writing them to an executable first.
In main script:
#run your command:
cmd="echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
get_prev $cmd
#or simply
get_prev "echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
Now, bash keeps variables set inside a function visible outside it, so the arr variable created in the get_prev function is accessible even outside the function in the main script:
#get previous command outputs in arr
for((i=0; i<${#arr[@]}; i++))
do
echo ${arr[i]}
done
#if you're sure that your output won't have escape sequences you bother about, you may simply print the array
printf "${arr[*]}\n"
Edit:
I use the following code in my implementation:
get_prev()
{
usage()
{
echo "Usage: alphabet [ -h | --help ]
[ -s | --sep SEP ]
[ -v | --var VAR ] \"command\""
}
ARGS=$(getopt -a -n alphabet -o hs:v: --long help,sep:,var: -- "$@")
if [ $? -ne 0 ]; then usage; return 2; fi
eval set -- $ARGS
local var="arr"
IFS=$(echo -en '\n\b')
for arg in $*
do
case $arg in
-h|--help)
usage
echo " -h, --help : opens this help"
echo " -s, --sep : specify the separator, newline by default"
echo " -v, --var : variable name to put result into, arr by default"
echo " command : command to execute. Enclose in quotes if multiple lines or pipelines are used."
shift
return 0
;;
-s|--sep)
shift
IFS=$(echo -en $1)
shift
;;
-v|--var)
shift
var=$1
shift
;;
-|--)
shift
;;
*)
cmd=$option
;;
esac
done
if [ ${#} -eq 0 ]; then usage; return 1; fi
ERROR=$( { eval "$*" > /tmp/out; } 2>&1 )
if [ $ERROR ]; then echo $ERROR; return 1; fi
local a=()
exec 3</tmp/out
while read -u 3 -r line
do
a+=($line)
done
exec 3<&-
eval $var=\(\${a[@]}\)
print_arr $var # comment this to suppress output
}
print()
{
eval echo \${$1[@]}
}
print_arr()
{
eval printf "%s\\\n" "\${$1[#]}"
}
I've been using this to print the space-separated output of multiple and/or pipelined commands as line-separated:
get_prev -s " " -v myarr "cmd1 | cmd2; cmd3 | cmd4"
For example:
get_prev -s ' ' -v myarr whereis python # or "whereis python"
# can also be achieved (in this case) by
whereis python | tr ' ' '\n'
Now tr command is useful at other places as well, such as
echo $PATH | tr ':' '\n'
But for multiple/piped commands... you know now. :)
-Himanshu
Like konsolebox said, you'd have to hack into bash itself. Here is a quite good example of how one might achieve this. The stderred repository (actually meant for coloring stdout) gives instructions on how to build it.
I gave it a try: Defining some new file descriptor inside .bashrc like
exec 41>/tmp/my_console_log
(the number is arbitrary) and modifying stderred.c accordingly so that content also gets written to fd 41. It kind of worked, but contains loads of NUL bytes, weird formatting, and is basically binary data, not readable. Maybe someone with a good understanding of C could try that out.
If so, everything needed to get the last printed line is tail -n 1 [logfile].
Not sure exactly what you're needing this for, so this answer may not be relevant. You can always save the output of a command: netstat >> output.txt, but I don't think that's what you're looking for.
There are of course programming options though; you could simply get a program to read the text file above after that command is run and associate it with a variable, and in Ruby, my language of choice, you can create a variable out of command output using 'backticks':
output = `ls` #(this is a comment) create variable out of command
if output.include? "Downloads" #if statement to see if command includes 'Downloads' folder
print "there appears to be a folder named downloads in this directory."
else
print "there is no directory called downloads in this file."
end
Stick this in a .rb file and run it: ruby file.rb and it will create a variable out of the command and allow you to manipulate it.
If you don't want to recompute the previous command you can create a macro that scans the current terminal buffer, tries to guess the -supposed- output of the last command, copies it to the clipboard and finally types it to the terminal.
It can be used for simple commands that return a single line of output (tested on Ubuntu 18.04 with gnome-terminal).
Install the following tools: xdotool, xclip, ruby
In gnome-terminal go to Preferences -> Shortcuts -> Select all and set it to Ctrl+shift+a.
Create the following ruby script:
cat >${HOME}/parse.rb <<EOF
#!/usr/bin/ruby
stdin = STDIN.read
d = stdin.split(/\n/)
e = d.reverse
f = e.drop_while { |item| item == "" }
g = f.drop_while { |item| item.start_with? "${USER}@" }
h = g[0]
print h
EOF
In the keyboard settings add the following keyboard shortcut:
bash -c '/bin/sleep 0.3 ; xdotool key ctrl+shift+a ; xdotool key ctrl+shift+c ; ( (xclip -out | ${HOME}/parse.rb ) > /tmp/clipboard ) ; (cat /tmp/clipboard | xclip -sel clip ) ; xdotool key ctrl+shift+v '
The above shortcut:
copies the current terminal buffer to the clipboard
extracts the output of the last command (only one line)
types it into the current terminal
I have an idea that I don't have time to try to implement immediately.
But what if you do something like the following:
$ MY_HISTORY_FILE=`get_temp_filename`
$ MY_HISTORY_FILE=$MY_HISTORY_FILE bash -i 2>&1 | tee $MY_HISTORY_FILE
$ some_command
$ cat $MY_HISTORY_FILE
$ # ^You'll want to filter that down in practice!
There might be issues with IO buffering. Also the file might get too huge. One would have to come up with a solution to these problems.
I think using script command might help. Something like,
script -c bash -qf fifo_pid
Using bash features to set after parsing.
Demo for non-interactive commands only: http://asciinema.org/a/395092
For also supporting interactive commands, you'd have to hack the script binary from util-linux to ignore any screen-redrawing console codes, and run it from bashrc to save your login session's output to a file.
You can use -exec to run a command on the output of a command, so it is a form of reusing the output, as in the find example below:
find . -name anything.out -exec rm {} \;
Here you are saying: find a file called anything.out in the current folder; if found, remove it. If it is not found, everything after -exec is skipped.

Passing a variable into awk within a shell script

I have a shell script that I'm writing to search for a process by name and return output if that process is over a given value.
I'm working on finding the named process first. The script currently looks like this:
#!/bin/bash
findProcessName=$1
findCpuMax=$2
#echo "parameter 1: $findProcessName, parameter2: $findCpuMax"
tempFile=`mktemp /tmp/processsearch.XXXXXX`
#echo "tempDir: $tempFile"
processSnapshot=`ps aux > $tempFile`
findProcess=`awk -v pname="$findProcessName" '/pname/' $tempFile`
echo "process line: "$findProcess
`rm $tempFile`
The error is occurring when I try to pass the variable into the awk command. I checked my version of awk and it definitely does support the -v flag.
If I replace the '/pname/' portion of the findProcess assignment with a literal process name, the script works.
I checked my syntax and it looks right. Could anyone point out where I'm going wrong?
The processSnapshot will always be empty: the ps output is going to the file
when you pass the pattern as a variable, use the pattern match operator:
findProcess=$( awk -v pname="$findProcessName" '$0 ~ pname' $tempFile )
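The difference is that /pname/ matches the literal text "pname", while $0 ~ pname matches against the variable's contents. A quick illustration with contrived input:
printf 'firefox\npname\n' | awk -v pname="firefox" '/pname/'       # prints: pname
printf 'firefox\npname\n' | awk -v pname="firefox" '$0 ~ pname'    # prints: firefox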
only use backticks when you need the output of a command. This
`rm $tempFile`
executes the rm command, returns the output back to the shell and, if the output is non-empty, the shell attempts to execute that output as a command.
$ `echo foo`
bash: foo: command not found
$ `echo whoami`
jackman
Remove the backticks.
Of course, you don't need the temp file at all:
pgrep -fl $findProcessName

bash set -x and stream

Can you explain the output of the following test script to me:
# prepare test data
echo "any content" > myfile
# set bash to inform me about the commands used
set -x
cat < myfile
output:
+cat
any content
Namely why does the line starting with + not show the "< myfile" bit?
How can I force bash to do that? I need to inform the user of my script's doings, as in:
mysql -uroot < the_new_file_with_a_telling_name.sql
and I can't.
EDIT: additional context: I use variables. Original code:
SQL_FILE=`ls -t $BACKUP_DIR/default_db* | head -n 1` # get latest db
mysql -uroot mydatabase < ${SQL_FILE}
-v won't expand variables and cat file.sql | mysql will produce two lines:
+mysql
+cat file.sql
so neither does the trick.
You could try set -v or set -o verbose instead which enables command echoing.
Example run on my machine:
[me@home]$ cat x.sh
echo "any content" > myfile
set -v
cat < myfile
[me@home]$ bash x.sh
cat < myfile
any content
The caveat here is that set -v simply echoes the command literally and does not do any shell expansion or interpolation. As pointed out by Jonathan in the comments, this can be a problem if the filename is defined in a variable (e.g. command < $somefile), making it difficult to identify what $somefile refers to.
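A small sketch of that caveat (using myfile from above, with the name held in a variable):
somefile=myfile
set -v
cat < $somefile    # echoed literally as "cat < $somefile" -- you can't tell which file was read
set +v
set -x
cat $somefile      # echoed expanded as "+ cat myfile"; a redirection would still be dropped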
The difference there is quite simple:
in the first case, you're using the program cat, and you're redirecting the contents of myfile to the standard input of cat. This means you're executing cat, and that's what bash shows you when you have set -x;
in a possible second case, you could use cat myfile, as pointed out by @Jonathan Leffler, and you'd see +cat myfile, which is what you're executing: the program cat with the parameter myfile.
From man bash:
-x After expanding each simple command, for command, case command,
select command, or arithmetic for command, display the expanded
value of PS4, followed by the command and its expanded arguments or
associated word list.
As you can see, it simply displays the command line expanded, and its argument list -- redirections are neither part of the expanded command cat nor part of its argument list.
As pointed out by @Shawn Chin, you may use set -v, which, as per man bash:
-v Print shell input lines as they are read.
Basically, that's the way bash works with its -x option. I checked on a Solaris 5.10 box, and the /bin/sh there (which is close to a genuine Bourne shell) also omits I/O redirection.
Given the command file (x3.sh):
echo "Hi" > Myfile
cat < Myfile
rm -f Myfile
The trace output on the Solaris machine was:
$ sh -x x3.sh
+ echo Hi
+ cat
Hi
+ rm -f Myfile
$ /bin/ksh -x x3.sh
+ echo Hi
+ 1> Myfile
+ cat
+ 0< Myfile
Hi
+ rm -f Myfile
$ bash -x x3.sh
+ echo Hi
+ cat
Hi
+ rm -f Myfile
$
Note that bash and sh (which are definitely different executables) produce the same output. The ksh output includes the I/O redirection information — score 1 for the Korn shell.
In this specific example, you can use:
cat myfile
to see the name of the file. In the general case, it is hard, but consider using ksh instead of bash to get the I/O redirection reported.
