Pretty-print for shell script - bash

I'm looking for something similar to indent but for (bash) scripts. Console only, no colorizing, etc.
Do you know of one?

Vim can indent bash scripts, but it does not reformat them before indenting.
Back up your bash script, open it with vim, type gg=GZZ and the indentation will be corrected. (Note for the impatient: this overwrites the file, so be sure to make that backup!)
There are some quirks with << heredocs though (the closing EOF is expected to be the first character on a line).
EDIT: ZZ not ZQ
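The same reformatting can be scripted without opening the vim UI; a rough sketch (again, keep a backup):
# run vim in silent ex mode, reindent the whole buffer, then write and quit
vim -es -c 'normal gg=G' -c 'wq' script.sh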

In bash I do this:
reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3 | sed -e "s/^\s\s\s\s//"
}
This eliminates comments and reindents the script "the bash way".
If you have heredocs in your script, they get mangled by the sed in the function above.
So use:
reindent() {
source <(echo "Zibri () {";cat "$1"; echo "}")
declare -f Zibri|head --lines=-1|tail --lines=+3
}
But your whole script will end up with a 4-space indentation.
Or you can do:
reindent ()
{
rstr=$(mktemp -u "XXXXXXXXXX");
source <(echo "Zibri () {";cat "$1"|sed -e "s/^\s\s\s\s/$rstr/"; echo "}");
echo '#!/bin/bash';
declare -f Zibri | head --lines=-1 | tail --lines=+3 | sed -e "s/^\s\s\s\s//;s/$rstr/ /"
}
which also takes care of heredocs.
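Assuming one of the reindent functions above is loaded in your current shell, usage looks like this:
reindent myscript.sh > myscript.indented.sh   # write the reformatted copy to a new file
diff myscript.sh myscript.indented.sh         # review the result before replacing the original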

bash 5+ has a --pretty-print option. It will remove comments though, including a first-line '#!/bin...' shebang.
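For example (a sketch; the formatted script is written to stdout):
bash --pretty-print script.sh > script.formatted.sh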

shfmt works very well.
You can format bash scripts and also check the formatting from pre-commit hooks.
# reformat
shfmt -l -w script.sh
# check if the formatting is OK
shfmt -d script.sh
# works on the whole directory as well
shfmt -l -w .
The only missing feature is that it does not (yet) reformat according to line length.
Since it is written in Go, you can just download the binary for most platforms, e.g. for Travis CI (.travis.yml):
install:
- curl -LsS -o ~/shfmt https://github.com/mvdan/sh/releases/download/v3.1.2/shfmt_v3.1.2_linux_amd64
- chmod +x ~/shfmt
script:
- ~/shfmt -d .
There is also a cross-compiled JS version on npm and a lot of editor plugins (see the related projects).
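For the pre-commit check mentioned above, a plain git hook is enough. A minimal sketch (assuming shfmt is on your PATH; remember to make the hook executable):
#!/usr/bin/env bash
# .git/hooks/pre-commit: abort the commit if shfmt finds unformatted scripts
shfmt -d . || { echo "shfmt found formatting issues; run 'shfmt -l -w .'" >&2; exit 1; }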

Related

Bash get the command that is piping into a script

Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the pipe(), dup2(), fork() and execve() syscalls. This doesn't provide a way to tell a program what the commands connected to its stdin are. Indeed, there's no guarantee that a string representing a pipeline of programs generating stdin exists at all, even if your stdin really is a FIFO connected to another program's stdout; the pipeline could have been set up by programs calling execve() and friends directly.
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$@" # generate a shell command representing our arguments
while IFS= read -r line; do
printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$#") # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell has been running, or accepting new commands (or, failing that, running myscript.sh), since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script (it won't work with /bin/dash), and the calling shell needs a little prep work. Some time before the script is run, first do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
history -w
printf 'calling command was: %s\n' \
"$(grep "$0" ~/.bash_history | tail -1)"
Test run:
echo googa | ./myscript.sh &
Output, (minus the "&" associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
I think you should pass it as one string parameter like this
./myscript.sh "$(ls -l | grep -i readme)"
I think that it is possible; have a look at this example:
#!/bin/bash
result=""
while read line; do
result=$result"${line}"
done
echo $result
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)

Bash scripting - escape characters in unix

I am trying to execute a word search in my directory where I look into all text files, and try to find words that have a length of 14.
$ ls *.txt | & grep -o -w '\w\{14,14\}'
This works as intended when I run it in a command line.
Now I want to run this same exact command but in a .sh file
In my file, I have this:
eval $("ls *.txt | & grep -o -w '\w\{14,14\}'")
I then get this error:
test.sh: line 1: ls *.txt | & grep -o -w '\w{14,14}': command not found
Is the problem that I have escape characters in my text? How can I get it so that when I run my .sh file, I get the same output as if I ran it on the command line?
You don't need to eval something to put it in a shell script. eval is evil, and should be avoided like the plague (as in, only use if you have invented the antidote yourself). Your statement could be changed to simply grep -o -w '\w\{14,14\}' *.txt and chucked verbatim into a script file.
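In other words, the whole script can shrink to something like this (a sketch):
#!/usr/bin/env bash
# find words that are exactly 14 characters long in all .txt files of the current directory
grep -o -w '\w\{14,14\}' *.txt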
An excellent web site to learn both the fundamentals and the subtleties of Bash scripting is Greg's Wiki.

Reusing output from last command in Bash

Is the output of a Bash command stored in any register? E.g. something similar to $? capturing the output instead of the exit status.
I could assign the output to a variable with:
output=$(command)
but that's more typing...
You can use $(!!) to recompute (not re-use) the output of the last command.
The !! on its own executes the last command.
$ echo pierre
pierre
$ echo my name is $(!!)
echo my name is $(echo pierre)
my name is pierre
The answer is no. Bash doesn't save command output to any parameter or to any block of its memory. Also, you are only allowed to access bash through its allowed interface operations; bash's private data is not accessible unless you hack it.
Very Simple Solution
One that I've used for years.
Script (add to your .bashrc or .bash_profile)
# capture the output of a command so it can be retrieved with ret
cap () { tee /tmp/capture.out; }
# return the output of the most recent command that was captured by cap
ret () { cat /tmp/capture.out; }
Usage
$ find . -name 'filename' | cap
/path/to/filename
$ ret
/path/to/filename
I tend to add | cap to the end of all of my commands. This way when I find I want to do text processing on the output of a slow running command I can always retrieve it with ret.
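For example, continuing the session above, the captured output can be reprocessed later without re-running the command:
$ ret | grep -c filename
1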
If you are on macOS and don't mind storing your output in the clipboard instead of writing it to a variable, you can use pbcopy and pbpaste as a workaround.
For example, instead of doing this to find a file and diff its contents with another file:
$ find app -name 'one.php'
/var/bar/app/one.php
$ diff /var/bar/app/one.php /var/bar/two.php
You could do this:
$ find app -name 'one.php' | pbcopy
$ diff $(pbpaste) /var/bar/two.php
The string /var/bar/app/one.php is in the clipboard when you run the first command.
By the way, pb in pbcopy and pbpaste stands for pasteboard, a synonym for clipboard.
One way of doing that is by using trap DEBUG:
f() { bash -c "$BASH_COMMAND" >& /tmp/out.log; }
trap 'f' DEBUG
Now the most recently executed command's stdout and stderr will be available in /tmp/out.log.
The only downside is that it executes each command twice: once to redirect output and error to /tmp/out.log and once normally. There is probably some way to prevent this behavior as well.
Inspired by anubhava's answer, which I think is not actually acceptable as it runs each command twice.
save_output() {
exec 1>&3
{ [ -f /tmp/current ] && mv /tmp/current /tmp/last; }
exec > >(tee /tmp/current)
}
exec 3>&1
trap save_output DEBUG
This way the output of last command is in /tmp/last and the command is not called twice.
Yeah, why type extra lines each time; agreed.
You can pipe a command's output into another command, but redirecting printed output back to input (1>&0) is not possible, at least not for multi-line output.
Also, you won't want to write the same function again and again in each file. So let's try something else.
A simple workaround is to use the printf builtin to store a command's output in a variable.
printf -v myoutput "`cmd`"
such as
printf -v var "`echo ok;
echo fine;
echo thankyou`"
echo "$var" # don't forget the backquotes and quotes in either command.
Another, customizable, general solution (which I use myself) runs the desired command only once and stores its multi-line printed output in an array variable, line by line.
If you are not exporting the files anywhere and intend to use this locally only, you can have the terminal set up the function declaration: add the function to your ~/.bashrc or ~/.profile file. In the second case, you need to enable Run command as a login shell under Edit > Preferences > yourProfile > Command.
Make a simple function, say:
get_prev() # preferably pass the commands in quotes. Single commands might still work without.
{
# option 1: create an executable with the command(s) and run it
#echo $* > /tmp/exe
#bash /tmp/exe > /tmp/out
# option 2: if your command is a single command (no pipes, no semicolons); it may still not run correctly in some cases.
#echo `"$*"` > /tmp/out
# option 3: (I actually used below)
eval "$*" > /tmp/out # or simply "$*" > /tmp/out
# return the command(s) outputs line by line
IFS=$(echo -en "\n\b")
arr=()
exec 3</tmp/out
while read -u 3 -r line
do
arr+=($line)
echo $line
done
exec 3<&-
}
So what we did in option 1 was write the whole command to a temporary file /tmp/exe, run it, save the output to another file /tmp/out, and then read the contents of /tmp/out line by line into an array.
Options 2 and 3 are similar, except that the commands are executed directly, without writing them to an executable file first.
In main script:
#run your command:
cmd="echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
get_prev $cmd
#or simply
get_prev "echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
Now, bash keeps the variable even outside the previous scope, so the arr variable created in the get_prev function is accessible outside the function, in the main script:
#get previous command outputs in arr
for((i=0; i<${#arr[@]}; i++))
do
echo ${arr[i]}
done
#if you're sure that your output won't have escape sequences you bother about, you may simply print the array
printf "${arr[*]}\n"
Edit:
I use the following code in my implementation:
get_prev()
{
usage()
{
echo "Usage: alphabet [ -h | --help ]
[ -s | --sep SEP ]
[ -v | --var VAR ] \"command\""
}
ARGS=$(getopt -a -n alphabet -o hs:v: --long help,sep:,var: -- "$@")
if [ $? -ne 0 ]; then usage; return 2; fi
eval set -- $ARGS
local var="arr"
IFS=$(echo -en '\n\b')
for arg in $*
do
case $arg in
-h|--help)
usage
echo " -h, --help : opens this help"
echo " -s, --sep : specify the separator, newline by default"
echo " -v, --var : variable name to put result into, arr by default"
echo " command : command to execute. Enclose in quotes if multiple lines or pipelines are used."
shift
return 0
;;
-s|--sep)
shift
IFS=$(echo -en $1)
shift
;;
-v|--var)
shift
var=$1
shift
;;
-|--)
shift
;;
*)
cmd=$arg
;;
esac
done
if [ ${#} -eq 0 ]; then usage; return 1; fi
ERROR=$( { eval "$*" > /tmp/out; } 2>&1 )
if [ -n "$ERROR" ]; then echo "$ERROR"; return 1; fi
local a=()
exec 3</tmp/out
while read -u 3 -r line
do
a+=($line)
done
exec 3<&-
eval $var=\(\${a[@]}\)
print_arr $var # comment this to suppress output
}
print()
{
eval echo \${$1[@]}
}
print_arr()
{
eval printf "%s\\\n" "\${$1[@]}"
}
I've been using this to print the space-separated output of multiple and/or pipelined commands as line-separated output:
get_prev -s " " -v myarr "cmd1 | cmd2; cmd3 | cmd4"
For example:
get_prev -s ' ' -v myarr whereis python # or "whereis python"
# can also be achieved (in this case) by
whereis python | tr ' ' '\n'
Now tr command is useful at other places as well, such as
echo $PATH | tr ':' '\n'
But for multiple/piped commands... you know now. :)
-Himanshu
Like konsolebox said, you'd have to hack into bash itself. Here is quite a good example of how one might achieve this. The stderred repository (actually meant for coloring stderr) gives instructions on how to build it.
I gave it a try: Defining some new file descriptor inside .bashrc like
exec 41>/tmp/my_console_log
(number is arbitrary) and modify stderred.c accordingly so that content also gets written to fd 41. It kind of worked, but the result contains loads of NUL bytes and weird formatting and is basically binary data, not readable. Maybe someone with a good understanding of C could try that out.
If so, everything needed to get the last printed line is tail -n 1 [logfile].
Not sure exactly what you're needing this for, so this answer may not be relevant. You can always save the output of a command: netstat >> output.txt, but I don't think that's what you're looking for.
There are of course programming options though; you could simply get a program to read the text file above after that command is run and associate it with a variable, and in Ruby, my language of choice, you can create a variable out of command output using 'backticks':
output = `ls` #(this is a comment) create variable out of command
if output.include? "Downloads" #if statement to see if command includes 'Downloads' folder
print "there appears to be a folder named downloads in this directory."
else
print "there is no directory called downloads in this file."
end
Stick this in a .rb file and run it: ruby file.rb and it will create a variable out of the command and allow you to manipulate it.
If you don't want to recompute the previous command you can create a macro that scans the current terminal buffer, tries to guess the -supposed- output of the last command, copies it to the clipboard and finally types it to the terminal.
It can be used for simple commands that return a single line of output (tested on Ubuntu 18.04 with gnome-terminal).
Install the following tools: xdotool, xclip, ruby.
In gnome-terminal go to Preferences -> Shortcuts -> Select all and set it to Ctrl+shift+a.
Create the following ruby script:
cat >${HOME}/parse.rb <<EOF
#!/usr/bin/ruby
stdin = STDIN.read
d = stdin.split(/\n/)
e = d.reverse
f = e.drop_while { |item| item == "" }
g = f.drop_while { |item| item.start_with? "${USER}@" }
h = g[0]
print h
EOF
In the keyboard settings add the following keyboard shortcut:
bash -c '/bin/sleep 0.3 ; xdotool key ctrl+shift+a ; xdotool key ctrl+shift+c ; ( (xclip -out | ${HOME}/parse.rb ) > /tmp/clipboard ) ; (cat /tmp/clipboard | xclip -sel clip ) ; xdotool key ctrl+shift+v '
The above shortcut:
copies the current terminal buffer to the clipboard
extracts the output of the last command (only one line)
types it into the current terminal
I have an idea that I don't have time to try to implement immediately.
But what if you do something like the following:
$ MY_HISTORY_FILE=`get_temp_filename`
$ MY_HISTORY_FILE=$MY_HISTORY_FILE bash -i 2>&1 | tee $MY_HISTORY_FILE
$ some_command
$ cat $MY_HISTORY_FILE
$ # ^You'll want to filter that down in practice!
There might be issues with IO buffering. Also the file might get too huge. One would have to come up with a solution to these problems.
I think using the script command might help. Something like:
script -c bash -qf fifo_pid
Using bash features to set after parsing.
Demo for non-interactive commands only: http://asciinema.org/a/395092
For also supporting interactive commands, you'd have to hack the script binary from util-linux to ignore any screen-redrawing console codes, and run it from bashrc to save your login session's output to a file.
You can use -exec to run a command on the output of a command, i.e. to reuse the output, as in the following example with find:
find . -name anything.out -exec rm {} \;
This says: find a file called anything.out in the current folder; if it is found, remove it. If it is not found, the part after -exec is skipped.

bash set -x and stream

Can you explain the output of the following test script to me:
# prepare test data
echo "any content" > myfile
# set bash to inform me about the commands used
set -x
cat < myfile
output:
+cat
any content
Namely, why does the line starting with + not show the "< myfile" bit?
How can I force bash to show it? I need to inform the user of my script's doings, as in:
mysql -uroot < the_new_file_with_a_telling_name.sql
and I can't.
EDIT: additional context: I use variables. Original code:
SQL_FILE=`ls -t $BACKUP_DIR/default_db* | head -n 1` # get latest db
mysql -uroot mydatabase < ${SQL_FILE}
-v won't expand variables and cat file.sql | mysql will produce two lines:
+mysql
+cat file.sql
so neither does the trick.
You could try set -v or set -o verbose instead which enables command echoing.
Example run on my machine:
[me@home]$ cat x.sh
echo "any content" > myfile
set -v
cat < myfile
[me@home]$ bash x.sh
cat < myfile
any content
The caveat here is that set -v simply echoes the command literally and does not do any shell expansion or interpolation. As pointed out by Jonathan in the comments, this can be a problem if the filename is defined in a variable (e.g. command < $somefile), making it difficult to identify what $somefile refers to.
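For instance, a sketch continuing the example above but with the filename in a variable; the trace shows the literal, unexpanded line:
[me@home]$ cat y.sh
echo "any content" > myfile
somefile=myfile
set -v
cat < $somefile
[me@home]$ bash y.sh
cat < $somefile
any content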
The difference there is quite simple:
in the first case, you're using the program cat, and you're redirecting the contents of myfile to the standard input of cat. This means you're executing cat, and that's what bash shows you when you have set -x;
in a possible second case, you could use cat myfile, as pointed out by @Jonathan Leffler, and you'd see + cat myfile, which is what you're executing: the program cat with the parameter myfile.
From man bash:
-x After expanding each simple command, for command, case command,
select command, or arithmetic for command, display the expanded
value of PS4, followed by the command and its expanded arguments or
associated word list.
As you can see, it simply displays the command line expanded, and its argument list -- redirections are neither part of the expanded command cat nor part of its argument list.
As pointed out by @Shawn Chin, you may use set -v, which, from man bash:
-v Print shell input lines as they are read.
Basically, that's the way bash works with its -x option. I checked on a Solaris 5.10 box, and the /bin/sh there (which is close to a genuine Bourne shell) also omits I/O redirection.
Given the command file (x3.sh):
echo "Hi" > Myfile
cat < Myfile
rm -f Myfile
The trace output on the Solaris machine was:
$ sh -x x3.sh
+ echo Hi
+ cat
Hi
+ rm -f Myfile
$ /bin/ksh -x x3.sh
+ echo Hi
+ 1> Myfile
+ cat
+ 0< Myfile
Hi
+ rm -f Myfile
$ bash -x x3.sh
+ echo Hi
+ cat
Hi
+ rm -f Myfile
$
Note that bash and sh (which are definitely different executables) produce the same output. The ksh output includes the I/O redirection information — score 1 for the Korn shell.
In this specific example, you can use:
cat myfile
to see the name of the file. In the general case, it is hard, but consider using ksh instead of bash to get the I/O redirection reported.
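To make that concrete, a minimal sketch of the difference under set -x:
set -x
cat myfile     # traced as: + cat myfile
cat < myfile   # traced only as: + cat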

Copy current command at bash prompt to clipboard

I would like a quick keyboard command sequence to copy the current command at a bash prompt to the clipboard.
So that, for example, to copy the last bash command to the clipboard, I'd press up+[some command sequence] to copy it. Or, for example, to search for a command in bash history, I'd use ctrl+r, search, display it on the command prompt, and then [some command sequence] to copy it, etc.
My current solution is using bash pipes: Pipe to/from the clipboard
So, to copy the previous command to clipboard:
echo "!!" | pbcopy
Which isn't too terrible, but what if the command to copy isn't the last command, etc.
What's the proper way to achieve what I'm trying to achieve here?
Taking @Lauri's post for inspiration, here's a solution using the bind command:
bind '"\C-]":"\C-e\C-u pbcopy <<"EOF"\n\C-y\nEOF\n"'
ctrl-] then will copy whatever is on the current bash prompt to the clipboard.
To make it persistent, you can add the bind command as above to your ~/.bashrc, or you can strip off the outer quotes and remove the 'bind' part of the call and add the result to your ~/.inputrc.
Non-OS-X users will have to swap pbcopy out with the appropriate command, probably xclip.
A quoted heredoc was used instead of an echo+pipe technique so that both single and double quotes in the command at the bash prompt are preserved. With this technique, for example, I was able to hit ctrl-], copy the actual bind command from the terminal prompt, and paste it here in the answer. So the heredoc technique handles all of the special characters in the bind command here.
You can use READLINE_LINE with bind -x in bash 4:
copyline() { printf %s "$READLINE_LINE"|pbcopy; }
bind -x '"\C-xc":copyline'
You can install bash 4 and make it the default login shell by running brew install bash;echo /usr/local/bin/bash|sudo tee -a /etc/shells;chsh -s /usr/local/bin/bash.
I also use this function to copy the last command:
cl() { history -p '!!'|tr -d \\n|pbcopy; }
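On Linux, the same two functions should work with xclip in place of pbcopy (a sketch, assuming xclip is installed):
copyline() { printf %s "$READLINE_LINE" | xclip -selection clipboard; }
bind -x '"\C-xc":copyline'
cl() { history -p '!!' | tr -d \\n | xclip -selection clipboard; }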
I spent a decent amount of time today writing a simple zsh implementation for macOS; usage is as follows:
example command: git commit -m "Changed a few things"
command that copies: c git commit -m "Changed a few things"
# The second command does not actually execute the command, it just copies it.
# Using zsh, this should reduce the whole process to about 3 keystrokes:
#
# 1) CTRL + A (to go to the beginning of the line)
# 2) 'c' + ' '
# 3) ENTER
preexec() is a zsh hook function that gets called right when you press enter, but before the command actually executes.
Since zsh strips arguments of certain characters like ' " ', we will want to use preexec(), which allows you to access the unprocessed, original command.
Pseudocode goes like this:
1) Make sure the command has 'c ' in the beginning
2) If it does, copy the whole command, char by char, to a temp variable
3) Pipe the temp variable into pbcopy, macOS's copy buffer
Real code:
c() {} # you'll want this so that you don't get a command unrecognized error
preexec() {
tmp="";
if [ "${1:0:1}" = "c" ] && [ "${1:1:1}" = " " ] && [ "${1:2:1}" != " " ]; then
for (( i=2; i<${#1}; i++ )); do
tmp="${tmp}${1:$i:1}";
done
echo "$tmp" | pbcopy;
fi
}
Go ahead and stick the two aforementioned functions in your .zshrc file, or wherever you want (I put mine in a file in my .oh-my-zsh/custom directory).
If anyone has a more elegant solution, plz speak up.
Anything to avoid using the mouse.
If xsel is installed on your system you can add this in .inputrc :
C-]: '\C-e\C-ucat <<"EOF" | tr -d "\\n" | xsel -ib\n\C-y\nEOF\n'
Alternatively, if xclip is installed you could add this:
C-]: '\C-e\C-ucat <<"EOF" | tr -d "\\n" | xclip -se c\n\C-y\nEOF\n'
Note: this uses code from @Clayton's answer.
I use history to find the command number that I am looking for, then I do:
echo "!command_number" | xclip -in
$ history | cut -c 8- | tail -1 | pbcopy
or in .zshrc file add an alias
alias copy='history | cut -c 8- | tail -1 | pbcopy'
