I'm writing a test suite for my app and using a bash script to check that the test suite output matches the expected output. Here is a section of the script:
for filename in test/*.bcs ;
do
./BCSC $filename > /dev/null
NUMBER=`echo "$filename" | awk -F"[./]" '{print $2}'`
gcc -g -m32 -mstackrealign runtime.c $filename.s -o test/e$NUMBER
# run the file and diff against expected output
echo "Running test file... "$filename
test/e$NUMBER > test/e$NUMBER.out
if [ $NUMBER = "4" ]
then
# it's trying to read the line
# Pass some input to the file...
fi
diff test/e$NUMBER.out test/o$NUMBER.out
done
Test #4 tests reading input from stdin. I'd like to detect when the loop reaches test #4 and, if so, pass it a set of sample inputs.
I just realized you could do it like
test/e4 < test/e4.in > test/e4.out
where e4.in has the sample inputs. Is there another way to pass input to a running script?
If you want to supply the input data directly in the script, use a here-document:
if [ $NUMBER = "4" ]; then
test/e$NUMBER > test/e$NUMBER.out <<END_DATA
test input goes here
you can supply as many lines of input as you want
END_DATA
else
test/e$NUMBER > test/e$NUMBER.out
fi
There are several variants: if you quote the delimiter (i.e. <<'END_DATA'), it won't perform expansions like $variable substitution inside the here-document. If you use <<-DELIMITER, it'll remove leading tab characters from each line of input (so you can indent the input to match the surrounding code). See the "Here Documents" section in the bash man page for details.
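A quick sketch of both variants (END_DATA is an arbitrary delimiter name; the indentation in the second example must be real tab characters):
# quoted delimiter: $NUMBER is NOT expanded, the text is taken literally
cat <<'END_DATA'
$NUMBER stays as-is here
END_DATA
# <<- strips leading tabs, so the body can be indented to match the code
if true; then
	cat <<-END_DATA
	this line's leading tab is stripped
	END_DATA
fi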
The way you mentioned is the conventional method to redirect a file into stdin when issuing a command/script.
Maybe it would help if you elaborated on the "other way" you're looking for: why do you even need a different way to do this? Is there anything you need to do that this method does not allow?
You can do:
cat test/e4.in | test/e4 > test/e4.out
Related
I am creating a script (myscript.sh) in BASH that reads from STDIN, typically a stream of data that comes from cat or from a file, and outputs the stream of data (amazing!), like this:
$cat myfile.txt
hello world!
$cat myfile.txt | myscript.sh
hello world!
$myscript.sh myfile.txt
hello world!
But I also would like the following behaviour: if I call the script without arguments I'd like it to output a brief help:
$myscript.sh
I am the help: I just print what you say.
== THE PROBLEM ==
The problem is that I am capturing the stream of data like this:
if [[ $# -eq 0 ]]; then
stream=$(cat <&0)
elif [[ -n "$stream" ]]; then
echo "I am the help: I just print what you say."
else
echo "Unknown error."
fi
And when I call the script with no arguments like this:
$myscript.sh
It SHOULD print the "help" part, but it just keeps waiting for a stream of data at line 2 of the code above...
Is there any way to tell bash that if nothing comes from STDIN it should just break and continue executing?
Thanks in advance.
There's always a standard input stream; if no arguments are given and input isn't redirected, standard input is the terminal.
If you want to treat that specially, use test -t to test if standard input is connected to a terminal.
if [[ $# -eq 0 && -t 0 ]]; then
echo "I am the help: I just print what you say."
else
stream=$(cat -- "$@")
fi
There's no need to test $#. Just pass your arguments to cat; if it gets filenames it will read from them, otherwise it will read from standard input.
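For example, assuming the snippet above is saved as myscript.sh and myfile.txt contains "hello world!":
$ ./myscript.sh myfile.txt         # argument given: cat reads the file
hello world!
$ cat myfile.txt | ./myscript.sh   # stdin is a pipe, so -t 0 is false
hello world!
$ ./myscript.sh                    # no arguments and stdin is the terminal
I am the help: I just print what you say.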
I agree with @Barmar's solution.
However, it might be better to entirely avoid a situation where your program behavior depends on whether the input file descriptor is a terminal (there are situations where a terminal is mimicked even though there's none -- in such a situation, your script would just produce the help string).
You could instead introduce a special - argument to explicitly request reading from stdin. This will result in simpler option handling and uniform behavior of your script, no matter what the environment is.
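A minimal sketch of that convention (my own illustration, not tested against the original script; the help text is borrowed from the question):
#!/bin/bash
if [[ $# -eq 0 ]]; then
    echo "I am the help: I just print what you say."
    exit 0
fi
for arg in "$@"; do
    if [[ $arg == "-" ]]; then
        cat            # "-" explicitly requests reading from stdin
    else
        cat -- "$arg"  # anything else is treated as a filename
    fi
done
Called as myscript.sh -, it reads the pipe; called with no arguments at all, it prints the help, whether or not stdin is a terminal.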
The first answer is to help yourself: try running the script with bash -x myscript.sh. It will print a lot of information to help you.
In your specific case, the condition $# -eq 0 was flipped. Per the requirement, you want to print the help message if NO ARGUMENTS ARE PROVIDED:
if [[ $# -eq 0 ]] ; then
echo "I am the help: I just print what you say."
exit 0
fi
# Rest of your script: read data from a file, etc.
cat -- "$@"
Assuming this approach is taken, if you want to process standard input rather than a file, simply pass '-' as the parameter: cat foobar.txt | myscript.sh -
I need to add new lines with specific information to one or multiple files at the same time.
I tried to automate this task using the following script:
for i in /apps/data/FILE*
do
echo "nice weather 20190830 friday" >> $i
done
It does the job, yet I wish I could automate it more and have the script ask me to provide the file name and the line I want to add.
I expect the output to be like
enter file name : file01
enter line to add : IWISHIKNOW HOWTODOTHAT
Thank you everyone.
In order to read user input you can use
read user_input_file
read user_input_text
read user_input_line
You can print a prompt before each question with echo -n:
echo -n "enter file name : "
read user_input_file
echo -n "enter line to add : "
read user_input_text
echo -n "enter line position : "
read user_input_line
In order to add the line at the desired position you can "play" with head and tail:
new_file="$user_input_file.tmp"    # write the result to a temporary file
head -n $((user_input_line - 1)) "$user_input_file" > "$new_file"
echo "$user_input_text" >> "$new_file"
tail -n +"$user_input_line" "$user_input_file" >> "$new_file"
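A hypothetical session, assuming the snippets above are combined into a script called addline.sh (both the script name and the .tmp suffix are my own choices):
$ printf 'line one\nline two\n' > file01
$ ./addline.sh
enter file name : file01
enter line to add : IWISHIKNOW HOWTODOTHAT
enter line position : 2
$ cat file01.tmp
line one
IWISHIKNOW HOWTODOTHAT
line two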
Requiring interactive input is horrible for automation. Make a command which accepts a message and a list of files to append to as command-line arguments instead.
#!/bin/sh
msg="$1"
shift
echo "$msg" | tee -a "$#"
Usage:
scriptname "today is a nice day" file1 file2 file3
The benefits for interactive use are obvious: you get to use your shell's history mechanism and filename completion (usually bound to Tab), and it's also much easier to build more complicated scripts on top of this one later.
Putting the message in the first command-line argument may seem baffling to newcomers, but it allows for a very simple overall design where "the other arguments" (zero or more) are the files you want to manipulate. See how grep has this design, and sed, and many, many other standard Unix commands.
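Compare the shape of the invocations:
grep 'pattern' file1 file2 file3                # pattern first, then files
sed 's/old/new/' file1 file2                    # script first, then files
scriptname "today is a nice day" file1 file2    # message first, then files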
You can use the read statement to prompt for input.
read does make your script interactive, but if you wish to automate it, you then need an accompanying expect script that feeds input to the read statements.
Instead, you can have the script take arguments, which helps with automation. No prompting...
#!/usr/bin/env bash
[[ $# -ne 2 ]] && echo "print usage here" && exit 1
file=$1 && shift
con=$1
for i in $file    # deliberately unquoted so the glob passed by the caller expands here
do
echo "$con" >> "$i"
done
To use:
./script.sh "<filename>" "<content>"
The quotes are important for the content so that the spaces in the content are considered to be part of it. For filenames use quotes so that the shell does not expand them before calling the script.
Example: ./script.sh "file*" "samdhaskdnf asdfjhasdf"
Here's my problem, from console if I type the below,
var=`history 1`
echo $var
I get the desired output. But when I do the same inside a shell script, it is not showing any output. Also, for other commands like pwd, ls etc, the script shows the desired output without any issue.
As the value of the variable contains a space, add quotes around it.
E.g.:
var='history 1'
echo $var
I believe all you need is the following:
1- Ask the user for the line number of the history entry to print.
2- Run the script, take the input from the user, and get the output:
cat get_history.ksh
echo "Enter the line number of history which you want to get.."
read number
if [[ $# -eq 0 ]]
then
echo "Usage of script: get_history.ksh number_of_lines"
exit
else
history "$number"
fi
I added logic to check the arguments: if the number of arguments passed is 0, the script prints a usage message and exits.
By default history is turned off in a script, therefore you need to turn it on:
set -o history
var=$(history 1)
echo "$var"
Note the preferred use of $( ) rather than the deprecated backticks.
However, this will only look at the history of the current process, that is this shell script, so it is fairly useless.
Is the output of a Bash command stored in any register? E.g. something similar to $? capturing the output instead of the exit status.
I could assign the output to a variable with:
output=$(command)
but that's more typing...
You can use $(!!)
to recompute (not re-use) the output of the last command.
The !! on its own executes the last command.
$ echo pierre
pierre
$ echo my name is $(!!)
echo my name is $(echo pierre)
my name is pierre
The answer is no. Bash doesn't save the output of commands to any parameter or any block of its memory. Also, you can only interact with Bash through its defined interface operations; Bash's private data is not accessible unless you hack it.
Very Simple Solution
One that I've used for years.
Script (add to your .bashrc or .bash_profile)
# capture the output of a command so it can be retrieved with ret
cap () { tee /tmp/capture.out; }
# return the output of the most recent command that was captured by cap
ret () { cat /tmp/capture.out; }
Usage
$ find . -name 'filename' | cap
/path/to/filename
$ ret
/path/to/filename
I tend to add | cap to the end of all of my commands. This way when I find I want to do text processing on the output of a slow running command I can always retrieve it with ret.
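The captured text can also feed the next command directly; a hypothetical session:
$ find . -name '*.log' | cap
./app/debug.log
$ wc -l $(ret)    # reuse the captured path without re-running find
42 ./app/debug.log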
If you are on a Mac, and don't mind storing your output in the clipboard instead of writing it to a variable, you can use pbcopy and pbpaste as a workaround.
For example, instead of doing this to find a file and diff its contents with another file:
$ find app -name 'one.php'
/var/bar/app/one.php
$ diff /var/bar/app/one.php /var/bar/two.php
You could do this:
$ find app -name 'one.php' | pbcopy
$ diff $(pbpaste) /var/bar/two.php
The string /var/bar/app/one.php is in the clipboard when you run the first command.
By the way, the pb in pbcopy and pbpaste stands for pasteboard, a synonym for clipboard.
One way of doing that is by using trap DEBUG:
f() { bash -c "$BASH_COMMAND" >& /tmp/out.log; }
trap 'f' DEBUG
Now the most recently executed command's stdout and stderr will be available in /tmp/out.log.
The only downside is that it will execute each command twice: once to redirect output and error to /tmp/out.log, and once normally. There is probably some way to prevent this behavior as well.
Inspired by anubhava's answer, which I think is not actually acceptable as it runs each command twice.
save_output() {
exec 1>&3
{ [ -f /tmp/current ] && mv /tmp/current /tmp/last; }
exec > >(tee /tmp/current)
}
exec 3>&1
trap save_output DEBUG
This way the output of the last command is in /tmp/last, and the command is not called twice.
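With that in place, each new command first rotates the previous command's output into /tmp/last, so that is the file to read:
$ echo hello
hello
$ cat /tmp/last    # holds the output of the previous command
hello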
Yeah, why type extra lines each time; agreed.
You can redirect the output of one command to the input of another with a pipeline, but redirecting already-printed output back to input (1>&0) is a no-go, at least not for multi-line output.
Also, you won't want to write the same function again and again in each file. So let's try something else.
A simple workaround would be to use the printf builtin to store values in a variable.
printf -v myoutput "`cmd`"
such as
printf -v var "`echo ok;
echo fine;
echo thankyou`"
echo "$var" # don't forget the backquotes and quotes in either command.
Here is another customizable, general solution (which I use myself) for running the desired command only once and capturing its multi-line output in an array variable, line by line.
If you are not exporting the files anywhere and intend to use it locally only, you can have the terminal set up the function declaration: add the function to your ~/.bashrc or ~/.profile file. In the second case, you need to enable "Run command as login shell" from Edit > Preferences > yourProfile > Command.
Make a simple function, say:
get_prev() # preferably pass the commands in quotes. Single commands might still work without.
{
# option 1: create an executable with the command(s) and run it
#echo $* > /tmp/exe
#bash /tmp/exe > /tmp/out
# option 2: works if your command is a single command (no pipes, no semicolons); even then it may not run correctly in some cases.
#echo `"$*"` > /tmp/out
# option 3: (I actually used below)
eval "$*" > /tmp/out # or simply "$*" > /tmp/out
# return the command(s) outputs line by line
IFS=$(echo -en "\n\b")
arr=()
exec 3</tmp/out
while read -u 3 -r line
do
arr+=($line)
echo $line
done
exec 3<&-
}
So what option 1 does is print the whole command to a temporary file /tmp/exe, run it, save the output to another file /tmp/out, and then read the contents of /tmp/out line by line into an array.
Options 2 and 3 are similar, except that the commands are executed directly, without being written to an executable file first.
In main script:
#run your command:
cmd="echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
get_prev $cmd
#or simply
get_prev "echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
Now, bash keeps the variable alive outside the function's scope, so the arr variable created in the get_prev function is accessible outside the function in the main script:
#get previous command outputs in arr
for((i=0; i<${#arr[@]}; i++))
do
echo ${arr[i]}
done
#if you're sure that your output won't have escape sequences you care about, you may simply print the array
printf "${arr[*]}\n"
Edit:
I use the following code in my implementation:
get_prev()
{
usage()
{
echo "Usage: alphabet [ -h | --help ]
[ -s | --sep SEP ]
[ -v | --var VAR ] \"command\""
}
ARGS=$(getopt -a -n alphabet -o hs:v: --long help,sep:,var: -- "$@")
if [ $? -ne 0 ]; then usage; return 2; fi
eval set -- $ARGS
local var="arr"
IFS=$(echo -en '\n\b')
for arg in $*
do
case $arg in
-h|--help)
usage
echo " -h, --help : opens this help"
echo " -s, --sep : specify the separator, newline by default"
echo " -v, --var : variable name to put result into, arr by default"
echo " command : command to execute. Enclose in quotes if multiple lines or pipelines are used."
shift
return 0
;;
-s|--sep)
shift
IFS=$(echo -en $1)
shift
;;
-v|--var)
shift
var=$1
shift
;;
-|--)
shift
;;
*)
cmd=$arg
;;
esac
done
if [ ${#} -eq 0 ]; then usage; return 1; fi
ERROR=$( { eval "$*" > /tmp/out; } 2>&1 )
if [ -n "$ERROR" ]; then echo "$ERROR"; return 1; fi
local a=()
exec 3</tmp/out
while read -u 3 -r line
do
a+=($line)
done
exec 3<&-
eval $var=\(\${a[@]}\)
print_arr $var # comment this to suppress output
}
print()
{
eval echo \${$1[@]}
}
print_arr()
{
eval printf "%s\\\n" "\${$1[@]}"
}
I've been using this to print the space-separated output of multiple and/or pipelined commands as line-separated output:
get_prev -s " " -v myarr "cmd1 | cmd2; cmd3 | cmd4"
For example:
get_prev -s ' ' -v myarr whereis python # or "whereis python"
# can also be achieved (in this case) by
whereis python | tr ' ' '\n'
Now, the tr command is useful in other places as well, such as
echo $PATH | tr ':' '\n'
But for multiple/piped commands... you know now. :)
Like konsolebox said, you'd have to hack into bash itself. Here is a quite good example of how one might achieve this. The stderred repository (actually meant for coloring stderr) gives instructions on how to build it.
I gave it a try: define a new file descriptor inside .bashrc, like
exec 41>/tmp/my_console_log
(the number is arbitrary) and modify stderred.c accordingly so that content also gets written to fd 41. It kind of worked, but the result contains loads of NUL bytes and weird formatting and is basically binary data, not readable. Maybe someone with a good understanding of C could try that out.
If that works, everything needed to get the last printed line is tail -n 1 [logfile].
Not sure exactly what you're needing this for, so this answer may not be relevant. You can always save the output of a command: netstat >> output.txt, but I don't think that's what you're looking for.
There are of course programming options, though; you could simply have a program read the text file above after the command is run and assign its contents to a variable. In Ruby, my language of choice, you can create a variable from command output using backticks:
output = `ls` #(this is a comment) create variable out of command
if output.include? "Downloads" #if statement to see if command includes 'Downloads' folder
print "there appears to be a folder named downloads in this directory."
else
print "there is no directory called downloads in this file."
end
Stick this in a .rb file and run it: ruby file.rb and it will create a variable out of the command and allow you to manipulate it.
If you don't want to recompute the previous command, you can create a macro that scans the current terminal buffer, tries to guess the (supposed) output of the last command, copies it to the clipboard, and finally types it back into the terminal.
It can be used for simple commands that return a single line of output (tested on Ubuntu 18.04 with gnome-terminal).
Install the following tools: xdotool, xclip, ruby
In gnome-terminal go to Preferences -> Shortcuts -> Select all and set it to Ctrl+shift+a.
Create the following ruby script:
cat >${HOME}/parse.rb <<EOF
#!/usr/bin/ruby
stdin = STDIN.read
d = stdin.split(/\n/)
e = d.reverse
f = e.drop_while { |item| item == "" }
g = f.drop_while { |item| item.start_with? "${USER}@" }
h = g[0]
print h
EOF
In the keyboard settings add the following keyboard shortcut:
bash -c '/bin/sleep 0.3 ; xdotool key ctrl+shift+a ; xdotool key ctrl+shift+c ; ( (xclip -out | ${HOME}/parse.rb ) > /tmp/clipboard ) ; (cat /tmp/clipboard | xclip -sel clip ) ; xdotool key ctrl+shift+v '
The above shortcut:
copies the current terminal buffer to the clipboard
extracts the output of the last command (only one line)
types it into the current terminal
I have an idea that I don't have time to try to implement immediately.
But what if you do something like the following:
$ MY_HISTORY_FILE=`get_temp_filename`
$ MY_HISTORY_FILE=$MY_HISTORY_FILE bash -i 2>&1 | tee $MY_HISTORY_FILE
$ some_command
$ cat $MY_HISTORY_FILE
$ # ^You'll want to filter that down in practice!
There might be issues with IO buffering. Also the file might get too huge. One would have to come up with a solution to these problems.
I think using the script command might help. Something like,
script -c bash -qf fifo_pid
Then use bash features to set variables after parsing the captured output.
Demo for non-interactive commands only: http://asciinema.org/a/395092
For also supporting interactive commands, you'd have to hack the script binary from util-linux to ignore any screen-redrawing console codes, and run it from bashrc to save your login session's output to a file.
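For the non-interactive case, a minimal sketch with util-linux script (the log file name is arbitrary):
script -q -c 'ls -l' /tmp/session.log    # run the command, record its output
cat /tmp/session.log                     # re-read the recorded output later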
You can use -exec to run a command on the output of another command, reusing that output, as in the find example below:
find . -name anything.out -exec rm {} \;
What this says is: find a file called anything.out in the current folder and, if found, remove it. If it is not found, everything after -exec is skipped.
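If many files can match, the + terminator is worth knowing: it hands the matches to a single invocation in batches instead of running the command once per file:
find . -name '*.out' -exec rm {} +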
I'm trying to write a small script that either takes input from a file or from user, then it gets rid of any blank lines from it.
I'm trying to make it so that if no file name is specified it will prompt the user for input. Also, is it better to write the manual input to a file and then run the code, or to store it in a variable?
So far I have this, but when I run it with a file it gives 1 line of error before returning the output I want. The error says ./deblank: line 1: [blank_lines.txt: command not found
if [$@ -eq "$NO_ARGS"]; then
cat > temporary.txt; sed '/^$/d' <temporary.txt
else
sed '/^$/d' <$@
fi
Where am I going wrong?
You need spaces around [ and ]. In bash, [ is a command, and you need spaces around it for bash to interpret it as such.
You can also check for the presence of arguments by using (( ... )). So your script could be rewritten as:
if ((!$#)); then
cat > temporary.txt; sed '/^$/d' <temporary.txt
else
sed '/^$/d' "$#"
fi
If you want to use only the first argument, then you need to say $1 (and not $@).
Try using this
if [ $# -eq 0 ]; then
cat > temporary.txt; sed '/^$/d' <temporary.txt
else
cat $@ | sed '/^$/d'
fi
A space is needed between [ and $@, and your usage of $@ is not good: $@ represents all the arguments, while -eq is used to compare numeric values.
There are multiple problems here:
You need to leave a space between the square brackets [ ] and the variables.
When using a string type, you cannot use -eq, use == instead.
When using a string comparison you need to use double square brackets.
So the code should look like:
if [[ "$#" == "$NO_ARGS" ]]; then
cat > temporary.txt; sed '/^$/d' <temporary.txt
else
sed '/^$/d' <$@
fi
Or else use $# instead.
Instead of forcing user input to a file, I'd force the given file to stdin:
#!/bin/bash
if [[ $1 && -r $1 ]]; then
# it's a file
exec 0<"$1"
elif ! tty -s; then
: # input is piped from stdin
else
# get input from user
echo "No file specified, please enter your input, ctrl-D to end"
fi
# now, let sed read from stdin
sed '/^$/d'
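Saved as deblank and made executable, this gives uniform behavior across all three call styles (a hypothetical session):
$ ./deblank somefile.txt          # file argument: redirected to stdin
$ cat somefile.txt | ./deblank    # piped input: used as-is
$ ./deblank                       # interactive: prompts the user
No file specified, please enter your input, ctrl-D to end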