I'm trying to write a very simple bash script that modifies a number of files, and I'm outputting the result of each command to a log as a check of whether the command completed successfully. Everything appears to be working except that I can't pass cat with variables to my script -- I keep getting a cat: >>: No such file or directory error.
#! /bin/bash
file1="./file1"
file2="./file2"
check () {
    if ( $1 > /dev/null ) then
        echo " $1 : completed" | tee -a log
        return 0;
    else
        echo "ERR> $1 : command failed" | tee -a log
        return 1;
    fi
}
check "cp $file1 $file1.bak" # this works fine
check "sed -i s/text/newtext/g $file1" # this works, too
check "cat $file1 >> $file2" # this does not work
I've tried any number of combinations of quoting the command. The only way that I can get it to work is by using the following:
check $(cat $file1 >> $file2)
However, this does not pass the command itself to check, only the return value, so $1 in the check function carries /dev/null rather than the command performed, which is not the behaviour I want.
Just for completeness, the log file looks like:
cp ./file1 ./file1.bak : completed
sed -i s/text/newtext/g ./file1 : completed
ERR> cat ./file1 >> ./file2 : command failed
I'm sure the solution is rather simple, but it has eluded me for a few hours and no amount of Google searches has yielded any help. Thanks for having a look.
The problem is that the I/O redirection in your cat command is not being interpreted as I/O redirection but rather as a simple argument to the cat command. It's not cat so much as the I/O redirection that is causing grief. Trying a pipeline would also give you problems.
Options available to remedy this include:
check "cp $file1 $file2" # Use copy instead of cat and I/O redirection; clobbers file2
check "eval cat $file1 >> $file2" # Use eval to handle I/O redirection, piping, etc
If either $file1 or $file2 contains shell special characters, the eval option is dangerous.
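To make the danger concrete, here is a hypothetical sketch (the injected command is deliberately harmless): a ; smuggled into $file2 gets re-parsed by eval as a command separator.
    file2='file2; touch /tmp/injected'
    check "eval cat $file1 >> $file2"
    # eval ends up parsing: cat ./file1 >> file2; touch /tmp/injected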
The cp command substitutes something that works without needing I/O redirection. You could even use a (microscopic) shell script to handle the job — where your script executes the shell script, and the shell script handles the redirection:
#!/bin/sh
exec cat "${1:?}" >> "${2:?}"
This generates a default error message if either argument 1 or 2 is missing (but doesn't object to extra arguments).
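For example, assuming you save that two-liner as an executable named append.sh (the name is just for illustration), the call through check no longer needs any shell redirection operators:
    chmod +x append.sh
    check "./append.sh $file1 $file2"   # append.sh performs the >> internally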
EDIT: the approach I first tried below doesn't quite work. There's another trick that can rescue this, even without resorting to bash magic, but it's getting ugly.
The >> redirection occurs at the wrong level in this case. You wind up asking cat to read ./file1, then a file named >>, then ./file2. To get the redirection to occur you'll need to do it elsewhere (see below), or invoke eval.
I'd recommend not using eval, but rather rejiggering the logic of the check function. You can redirect check at the top level, e.g.:
check() {
    if "$@"; then
        echo " $* : completed" | tee -a log
        return 0
    else
        echo "ERR> $* : failed, status $?" | tee -a log
        return 1
    fi
}
check cp "$file1" "$file.bak" # doesn't print anything
check sed -i s/text/newtext/g "$file1" >/dev/null # does print, so >/dev/null
check cat "$file1" >> "$file2"
(The double quotes here in the invocations of check are in case file1 and/or file2 ever acquire meta-characters like * or ;, or white space, etc.)
EDIT: as @cdm and @rici note, this fails for the append-to-file cases, because check's output is redirected even for the tee command. Again the redirection is happening at the wrong level. It's possible to fix this by adding another level of indirection:
append_to_file() {
    local fname
    fname="$1"
    shift
    "$@" >> "$fname"
}
check cp "$file1" "$file.bak"
check append_to_file /dev/null sed -e s/text/newtext/g "$file1"
check append_to_file "$file2" cat "$file1"
Now, though, the completed and failure messages log append_to_file at the front, which is really pretty klunky. I think I'd go back to eval instead.
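If you do fall back to eval, a minimal sketch of an eval-based check (assuming the command strings you pass in are trusted) could look like this:
    check() {
        if eval "$1" > /dev/null; then
            echo " $1 : completed" | tee -a log
        else
            echo "ERR> $1 : command failed" | tee -a log
            return 1
        fi
    }
    check "cat $file1 >> $file2"   # the >> is re-parsed by eval, so it now works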
(This question is a follow-up on this comment, in an answer about git hooks)
I'm far too unskilled in bash (so far) to fully understand the remark and how to act accordingly. More specifically, I've been advised to avoid using the bash command cat this way:
echo "$current_branch" $(cat "$1") > "$1"
because the order of operations depends on the specific shell and it could end up destroying the contents of the passed argument, i.e. the commit message itself, if I got it right?
Also, how to "save the contents in a separate step"?
Would the following make any sense?
tmp = "$1"
echo "$current_branch" $(cat $tmp) > "$1"
The proposed issue is not about overwriting variables or arguments, but about the fact that both reading from and writing to a file at the same time is generally a bad idea.
For example, this command may look like it will just write a file to itself, but instead it truncates it:
cat myfile > myfile # Truncates the file to size 0
However, this is not a problem in your specific command. It is guaranteed to work in a POSIX-compliant shell because the order of operations specifies that redirections happen after expansions:
The words that are not variable assignments or redirections shall be expanded. If any fields remain following their expansion, the first field shall be considered the command name and remaining fields are the arguments for the command.
Redirections shall be performed as described in Redirection.
Double-however, it's still a bit fragile in the sense that seemingly harmless modifications may trigger the problem, such as if you wanted to run sed on the result. Since the redirection (> "$1") and command substitution $(cat "$1") are now in separate commands, the POSIX definition no longer saves you:
# Command may now randomly result in the original message being deleted
echo "$current_branch $(cat "$1")" | sed -e 's/(c)/©/g' > "$1"
Similarly, if you refactor it into a function, it will also suddenly stop working:
# Command will now always delete the original message
modify_message() {
echo "$current_branch $(cat "$1")"
}
modify_message "$1" > "$1"
You can avoid this by writing to a temporary file and then replacing your original.
tmp=$(mktemp) || exit
echo "$current_branch $(cat "$1")" > "$tmp"
mv "$tmp" "$1"
In my opinion, it's better to save to another file.
You may try something like
echo "$current_branch" > tmp
cat "$1" >> tmp # merge these into
# echo "$current_branch" $(cat "$1") > tmp
# may both OK
mv tmp "$1"
However I am not sure if my understanding is right, or there are some better solutions.
This is what I consider the core of the question. It is hard to decide the "precedence" of the $() block and >. If > is executed earlier, then echo "$current_branch" will rewrite the "$1" file and drop its original content, which is a disaster. If $() is executed earlier, then everything works as expected. However, the risk exists, and we should avoid it.
A command group would be far better than a command substitution here. Note the similarity to Geno Chen's answer.
{
echo "$current_branch"
cat "$1"
} > tmp && mv tmp "$1"
I designed a custom script to grep a concatenated list of .bash_history backup files. In my script, I am creating a temporary file with mktemp and saving it to a variable temp. Next, I am redirecting output to that file using the cat command.
Is there a means to create a temporary file (using mktemp), redirect output to it, then store it in a variable in one command, while preserving newline characters?
The snippet of code below works just fine, but I have a feeling there is a more terse and canonical way to achieve this in one line – maybe using process substitution or something of the sort.
# Concatenate all .bash_history files into a temporary file `temp`.
temp="$(mktemp)"
cat "$HOME/.bash_history."* > $temp
trap 'rm -f $temp' 0
# Set `HISTFILE` shell variable to the `temp` file.
HISTFILE="$temp"
keyword="$1"
# Search for `keyword` using the `history` command
if [[ "$keyword" ]]; then
# Enable history
set -o history
history | grep "$keyword"
# Disable history
set +o history
else
echo -e "usage: search <keyword>"
exit 0
fi
If you're comfortable with the side effect of making the assignment conditional on tempfile not previously having a nonempty value, this is straightforward via the ${var:=value} expansion:
cat "$HOME/.bash_history" >"${tempfile:=$(mktemp)}"
cat myfile.txt | f=`mktemp` && cat > "${f}"
I guess there is more than one way to do it. I found the following to work for me:
cat myfile.txt > $(echo "$(mktemp)")
Also don't forget about tee:
cat myfile.txt | tee "$(mktemp)" > /dev/null
I made a script like this:
#! /usr/bin/bash
a=`ls ../wrfprd/wrfout_d0${i}* | cut -c22-25`
b=`ls ../wrfprd/wrfout_d0${i}* | cut -c27-28`
c=`ls ../wrfprd/wrfout_d0${i}* | cut -c30-31`
d=`ls ../wrfprd/wrfout_d0${i}* | cut -c33-34`
f=$a$b$c$d
echo $f
sed "s/.* startdate=.*/export startdate=${f}/g" ./post_process > post_process2
The echo command works and gives 2008042118, which is what I want, but in the file post_process2 the line comes out as export startdate= and does not pick up the variable f. I want to produce a line like export startdate=2008042118.
First -- don't use ls here -- it's both expensive in terms of performance (compared to globbing, which is performed internal to the shell without starting any external programs), and doesn't guarantee useful output for the full range of possible filenames, making its use in this context inherently bug-prone. A better way to retrieve pieces from a filename, assuming a ksh-derived shell such as bash or zsh, would look like this:
#!/bin/bash
# this is an array, but we're only going to use the first element
file=( "../wrfprd/wrfout_d0${i}"* )
[[ -e $file ]] || { echo "No file found" >&2; exit 1; }
f=${file:21:4}${file:26:2}${file:29:2}${file:32:2}   # bash substrings are 0-based, so these offsets match cut -c22-25, -c27-28, -c30-31, -c33-34
Second, don't use sed to modify code -- doing so requires that your runtime user have permission to modify its own code, and moreover invites injection vulnerabilities. Just write your content out to a data file:
printf '%s\n' "$f" >startdate.txt
...and, in your second script, to read in the value from that file:
# if the shebang is #!/bin/bash
startdate=$(<startdate.txt)
# if the shebang is #!/bin/sh
startdate=$(cat startdate.txt)
Is the output of a Bash command stored in any register? E.g. something similar to $? capturing the output instead of the exit status.
I could assign the output to a variable with:
output=$(command)
but that's more typing...
You can use $(!!)
to recompute (not re-use) the output of the last command.
The !! on its own executes the last command.
$ echo pierre
pierre
$ echo my name is $(!!)
echo my name is $(echo pierre)
my name is pierre
The answer is no. Bash doesn't save any output to any parameter or any block of its memory. Also, you can only access Bash through its allowed interface operations; Bash's private data is not accessible unless you hack it.
Very Simple Solution
One that I've used for years.
Script (add to your .bashrc or .bash_profile)
# capture the output of a command so it can be retrieved with ret
cap () { tee /tmp/capture.out; }
# return the output of the most recent command that was captured by cap
ret () { cat /tmp/capture.out; }
Usage
$ find . -name 'filename' | cap
/path/to/filename
$ ret
/path/to/filename
I tend to add | cap to the end of all of my commands. This way when I find I want to do text processing on the output of a slow running command I can always retrieve it with ret.
If you are on mac, and don't mind storing your output in the clipboard instead of writing to a variable, you can use pbcopy and pbpaste as a workaround.
For example, instead of doing this to find a file and diff its contents with another file:
$ find app -name 'one.php'
/var/bar/app/one.php
$ diff /var/bar/app/one.php /var/bar/two.php
You could do this:
$ find app -name 'one.php' | pbcopy
$ diff $(pbpaste) /var/bar/two.php
The string /var/bar/app/one.php is in the clipboard when you run the first command.
By the way, the pb in pbcopy and pbpaste stands for pasteboard, a synonym for clipboard.
One way of doing that is by using trap DEBUG:
f() { bash -c "$BASH_COMMAND" >& /tmp/out.log; }
trap 'f' DEBUG
Now the most recently executed command's stdout and stderr will be available in /tmp/out.log.
The only downside is that it will execute each command twice: once to redirect output and error to /tmp/out.log and once normally. There is probably some way to prevent this behavior as well.
Inspired by anubhava's answer, which I think is not actually acceptable as it runs each command twice.
save_output() {
    exec 1>&3
    { [ -f /tmp/current ] && mv /tmp/current /tmp/last; }
    exec > >(tee /tmp/current)
}
exec 3>&1
trap save_output DEBUG
This way the output of the last command is in /tmp/last and the command is not called twice.
Yeah, why type extra lines each time; agreed.
You can pipe the output of a command into another command's input, but redirecting already-printed output back to input (1>&0) is not possible, at least not for multi-line output.
Also, you won't want to write the same function again and again in every file. So let's try something else.
A simple workaround is to use the printf builtin to store values in a variable.
printf -v myoutput "`cmd`"
such as
printf -v var "`echo ok;
echo fine;
echo thankyou`"
echo "$var" # don't forget the backquotes and quotes in either command.
Another customizable, general solution (which I use myself) for running the desired command only once and getting the command's multi-line output into an array variable, line by line.
If you are not exporting the files anywhere and intend to use it locally only, you can have Terminal set up the function declaration. You have to add the function to your ~/.bashrc or ~/.profile file. In the second case, you need to enable Run command as login shell under Edit > Preferences > yourProfile > Command.
Make a simple function, say:
get_prev() # preferably pass the commands in quotes. Single commands might still work without.
{
    # option 1: create an executable with the command(s) and run it
    #echo $* > /tmp/exe
    #bash /tmp/exe > /tmp/out

    # option 2: if your command is a single command (no pipes, no semicolons); even then it may not run correctly in some corner cases
    #echo `"$*"` > /tmp/out

    # option 3: (what I actually used)
    eval "$*" > /tmp/out # or simply "$*" > /tmp/out

    # return the command(s) outputs line by line
    IFS=$(echo -en "\n\b")
    arr=()
    exec 3</tmp/out
    while read -u 3 -r line
    do
        arr+=($line)
        echo $line
    done
    exec 3<&-
}
So what we did in option 1 was print the whole command to a temporary file /tmp/exe, run that file, save the output to another file /tmp/out, and then read the contents of /tmp/out line by line into an array.
Options 2 and 3 are similar, except that the commands are executed directly rather than being written to an executable first.
In main script:
#run your command:
cmd="echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
get_prev $cmd
#or simply
get_prev "echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
Now, since bash variables set in a function are global by default, the arr variable created inside get_prev is accessible outside the function in the main script:
#get previous command outputs in arr
for ((i=0; i<${#arr[@]}; i++))
do
    echo ${arr[i]}
done
#if you're sure that your output won't have escape sequences you bother about, you may simply print the array
printf "${arr[*]}\n"
Edit:
I use the following code in my implementation:
get_prev()
{
    usage()
    {
        echo "Usage: alphabet [ -h | --help ]
                        [ -s | --sep SEP ]
                        [ -v | --var VAR ] \"command\""
    }

    ARGS=$(getopt -a -n alphabet -o hs:v: --long help,sep:,var: -- "$@")
    if [ $? -ne 0 ]; then usage; return 2; fi
    eval set -- $ARGS

    local var="arr"
    IFS=$(echo -en '\n\b')
    for arg in $*
    do
        case $arg in
            -h|--help)
                usage
                echo " -h, --help : opens this help"
                echo " -s, --sep  : specify the separator, newline by default"
                echo " -v, --var  : variable name to put result into, arr by default"
                echo " command    : command to execute. Enclose in quotes if multiple lines or pipelines are used."
                shift
                return 0
                ;;
            -s|--sep)
                shift
                IFS=$(echo -en $1)
                shift
                ;;
            -v|--var)
                shift
                var=$1
                shift
                ;;
            -|--)
                shift
                ;;
            *)
                cmd=$arg
                ;;
        esac
    done
    if [ ${#} -eq 0 ]; then usage; return 1; fi

    ERROR=$( { eval "$*" > /tmp/out; } 2>&1 )
    if [ -n "$ERROR" ]; then echo "$ERROR"; return 1; fi

    local a=()
    exec 3</tmp/out
    while read -u 3 -r line
    do
        a+=($line)
    done
    exec 3<&-

    eval $var=\(\${a[@]}\)
    print_arr $var # comment this to suppress output
}

print()
{
    eval echo \${$1[@]}
}

print_arr()
{
    eval printf "%s\\\n" "\${$1[@]}"
}
I've been using this to print the space-separated output of multiple or pipelined commands (or both) as line-separated:
get_prev -s " " -v myarr "cmd1 | cmd2; cmd3 | cmd4"
For example:
get_prev -s ' ' -v myarr whereis python # or "whereis python"
# can also be achieved (in this case) by
whereis python | tr ' ' '\n'
Now tr command is useful at other places as well, such as
echo $PATH | tr ':' '\n'
But for multiple/piped commands... you know now. :)
Like konsolebox said, you'd have to hack into bash itself. Here is a quite good example on how one might achieve this. The stderred repository (actually meant for coloring stdout) gives instructions on how to build it.
I gave it a try: Defining some new file descriptor inside .bashrc like
exec 41>/tmp/my_console_log
(the number is arbitrary) and modifying stderred.c accordingly so that content also gets written to fd 41. It kind of worked, but the log contains loads of NUL bytes and weird formatting and is basically binary data, not readable. Maybe someone with a good understanding of C could try that out.
If so, everything needed to get the last printed line is tail -n 1 [logfile].
Not sure exactly what you're needing this for, so this answer may not be relevant. You can always save the output of a command: netstat >> output.txt, but I don't think that's what you're looking for.
There are of course programming options though; you could simply get a program to read the text file above after that command is run and associate it with a variable, and in Ruby, my language of choice, you can create a variable out of command output using 'backticks':
output = `ls` # (this is a comment) create variable out of command
if output.include? "Downloads" # if statement to see if command includes 'Downloads' folder
  print "there appears to be a folder named downloads in this directory."
else
  print "there is no directory called downloads in this file."
end
Stick this in a .rb file and run it: ruby file.rb and it will create a variable out of the command and allow you to manipulate it.
If you don't want to recompute the previous command you can create a macro that scans the current terminal buffer, tries to guess the -supposed- output of the last command, copies it to the clipboard and finally types it to the terminal.
It can be used for simple commands that return a single line of output (tested on Ubuntu 18.04 with gnome-terminal).
Install the following tools: xdotool, xclip, ruby
In gnome-terminal go to Preferences -> Shortcuts -> Select all and set it to Ctrl+shift+a.
Create the following ruby script:
cat >${HOME}/parse.rb <<EOF
#!/usr/bin/ruby
stdin = STDIN.read
d = stdin.split(/\n/)
e = d.reverse
f = e.drop_while { |item| item == "" }
g = f.drop_while { |item| item.start_with? "${USER}@" }
h = g[0]
print h
EOF
In the keyboard settings add the following keyboard shortcut:
bash -c '/bin/sleep 0.3 ; xdotool key ctrl+shift+a ; xdotool key ctrl+shift+c ; ( (xclip -out | ${HOME}/parse.rb ) > /tmp/clipboard ) ; (cat /tmp/clipboard | xclip -sel clip ) ; xdotool key ctrl+shift+v '
The above shortcut:
copies the current terminal buffer to the clipboard
extracts the output of the last command (only one line)
types it into the current terminal
I have an idea that I don't have time to try to implement immediately.
But what if you do something like the following:
$ MY_HISTORY_FILE=`get_temp_filename`
$ MY_HISTORY_FILE=$MY_HISTORY_FILE bash -i 2>&1 | tee $MY_HISTORY_FILE
$ some_command
$ cat $MY_HISTORY_FILE
$ # ^You'll want to filter that down in practice!
There might be issues with IO buffering. Also the file might get too huge. One would have to come up with a solution to these problems.
I think using the script command might help. Something like,
script -c bash -qf fifo_pid
Using bash features to set after parsing.
Demo for non-interactive commands only: http://asciinema.org/a/395092
For also supporting interactive commands, you'd have to hack the script binary from util-linux to ignore any screen-redrawing console codes, and run it from bashrc to save your login session's output to a file.
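As a rough illustration of the non-interactive idea with util-linux script (the output file name here is arbitrary, and BSD/macOS script takes different options):
    script -q -c 'ls /tmp' /tmp/last_output   # record one command's output to a file
    cat /tmp/last_output                      # replay it; script may add its own header/footer lines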
You can use -exec to run a command on the output of another command, so it is a way of reusing the output, as in the example with find below:
find . -name anything.out -exec rm {} \;
You are saying here: find a file called anything.out in the current folder and, if found, remove it. If it is not found, everything after -exec is skipped.
I have the following shell script. The purpose is to loop thru each line of the target file (whose path is the input parameter to the script) and do work against each line. Now, it seems only work with the very first line in the target file and stops after that line got processed. Is there anything wrong with my script?
#!/bin/bash
# SCRIPT: do.sh
# PURPOSE: loop thru the targets
FILENAME=$1
count=0
echo "proceed with $FILENAME"
while read LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done < $FILENAME
echo "\ntotal $count targets"
In do_work.sh, I run a couple of ssh commands.
The problem is that do_work.sh runs ssh commands and by default ssh reads from stdin which is your input file. As a result, you only see the first line processed, because the command consumes the rest of the file and your while loop terminates.
This happens not just for ssh, but for any command that reads stdin, including mplayer, ffmpeg, HandBrakeCLI, httpie, brew install, and more.
To prevent this, pass the -n option to your ssh command to make it read from /dev/null instead of stdin. Other commands have similar flags, or you can universally use < /dev/null.
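For example, a minimal sketch of the loop if the ssh call sat directly in its body (the host and command are placeholders; the same -n or < /dev/null fix applies inside do_work.sh):
    while read -r line; do
        # -n makes ssh read stdin from /dev/null instead of eating the loop's input file
        ssh -n user@example.com "echo processing '$line'"
    done < "$FILENAME"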
A very simple and robust workaround is to change the file descriptor from which the read command receives input.
This is accomplished by two modifications: the -u argument to read, and the redirection operator for < $FILENAME.
In BASH, the default file descriptor values (i.e. values for -u in read) are:
0 = stdin
1 = stdout
2 = stderr
So just choose some other unused file descriptor, like 9 just for fun.
Thus, the following would be the workaround:
while read -u 9 LINE; do
    let count++
    echo "$count $LINE"
    sh ./do_work.sh $LINE
done 9< $FILENAME
Notice the two modifications:
read becomes read -u 9
< $FILENAME becomes 9< $FILENAME
As a best practice, I do this for all while loops I write in BASH.
If you have nested loops using read, use a different file descriptor for each one (9,8,7,...).
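For instance, a hypothetical nested pair of loops over two files (the file names are only illustrative) keeps the reads from interfering with each other:
    while read -u 9 -r outer; do
        while read -u 8 -r inner; do
            echo "$outer / $inner"
        done 8< "$file_b"
    done 9< "$file_a"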
More generally, a workaround which isn't specific to ssh is to redirect standard input for any command which might otherwise consume the while loop's input.
while read -r line; do
    ((count++))
    echo "$count $line"
    sh ./do_work.sh "$line" </dev/null
done < "$filename"
The addition of </dev/null is the crucial point here, though the corrected quoting is also somewhat important for robustness; see also When to wrap quotes around a shell variable?. You will want to use read -r unless you specifically require the slightly odd legacy behavior you get for backslashes in the input without -r. Finally, avoid upper case for your private variables.
Another workaround of sorts which is somewhat specific to ssh is to make sure any ssh command has its standard input tied up, e.g. by changing
ssh otherhost some commands here
to instead read the commands from a here document, which conveniently (for this particular scenario) ties up the standard input of ssh for the commands:
ssh otherhost <<'____HERE'
some commands here
____HERE
The ssh -n option prevents checking the exit status of ssh when using a heredoc while piping output to another program.
So using /dev/null as stdin is preferred.
#!/bin/bash
while read ONELINE ; do
    ssh ubuntu@host_xyz </dev/null <<EOF 2>&1 | filter_pgm
echo "Hi, $ONELINE. You come here often?"
process_response_pgm
EOF
    if [ ${PIPESTATUS[0]} -ne 0 ] ; then
        echo "aborting loop"
        exit ${PIPESTATUS[0]}
    fi
done < input_list.txt
This was happening to me because I had set -e and a grep in a loop was returning with no output (which gives a non-zero error code).
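A common workaround in that case (the file names here are made up) is to keep grep's non-zero status from tripping set -e:
    set -e
    while read -r pattern; do
        # grep exits 1 when nothing matches; '|| true' stops set -e from ending the loop
        count=$(grep -c "$pattern" data.txt || true)
        echo "$pattern: $count"
    done < patterns.txt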