It seems that by executing code in the PS0 and PS1 variables (which, as I understand, are evaluated before and after each command is run) it should be possible to record the time of each running command and display it in the prompt. Something like this:
user@machine ~/tmp
$ sleep 1
user@machine ~/tmp 1.01s
$
However, I quickly got stuck with recording time in PS0, since something like this doesn't work:
PS0='$(START=$(date +%s.%N))'
As I understand, START assignment happens in a sub-shell, so it is not visible in the outer shell. How would you approach this?
I was looking for a solution to a different problem and came upon this question, and decided this sounds like a cool feature to have. Using @Scheff's excellent answer as a base, in addition to the solutions I developed for my other problem, I came up with a more elegant and full-featured solution.
First, I created a few functions that read/write the time to/from memory. Writing to the shared-memory folder avoids disk access, and the files do not persist across reboots if they are not cleaned up for some reason.
function roundseconds (){
# rounds a number to 3 decimal places
echo m=$1";h=0.5;scale=4;t=1000;if(m<0) h=-0.5;a=m*t+h;scale=3;a/t;" | bc
}
function bash_getstarttime (){
# writes the current epoch time (seconds, with ns precision) into shared memory
date +%s.%N >"/dev/shm/${USER}.bashtime.${1}"
}
function bash_getstoptime (){
# reads stored epoch time and subtracts from current
local endtime=$(date +%s.%N)
local starttime=$(cat /dev/shm/${USER}.bashtime.${1})
roundseconds $(echo $(eval echo "$endtime - $starttime") | bc)
}
The input to the bash_ functions is the bash PID
Those functions and the following are added to the ~/.bashrc file
ROOTPID=$BASHPID
bash_getstarttime $ROOTPID
These create the initial time value and store the bash PID in a separate variable that can be passed to a function. Then you add the functions to PS0 and PS1:
PS0='$(bash_getstarttime $ROOTPID) etc..'
PS1='\[\033[36m\] Execution time $(bash_getstoptime $ROOTPID)s\n'
PS1="$PS1"'and your normal PS1 here'
Now the start time is recorded in PS0 before the entered command runs, the time is taken again in PS1 after it finishes, and the difference is calculated and added to PS1. And finally, this code cleans up the stored time when the terminal exits:
function runonexit (){
rm /dev/shm/${USER}.bashtime.${ROOTPID}
}
trap runonexit EXIT
Putting it all together, plus some additional code being tested, it looks like this:
The important parts are the execution time in ms, and the user.bashtime files for all active terminal PIDs stored in shared memory. The PID is also shown right after the terminal input, as I added display of it to PS0, and you can see the bashtime files added and removed.
PS0='$(bash_getstarttime $ROOTPID) $ROOTPID experiments \[\033[00m\]\n'
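For reference, a minimal sketch of how the pieces above might sit together in ~/.bashrc (it assumes the functions are defined earlier in the file; the trailing part of PS1 is just a placeholder for your usual prompt):

ROOTPID=$BASHPID
bash_getstarttime $ROOTPID

PS0='$(bash_getstarttime $ROOTPID)'
PS1='\[\033[36m\] Execution time $(bash_getstoptime $ROOTPID)s\n'
PS1="$PS1"'\u@\h:\w\$ '   # your normal PS1 goes here

trap runonexit EXIT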
As @tc said, using arithmetic expansion allows you to assign variables during the expansion of PS0 and PS1. Newer bash versions also allow PS*-style expansion, so you don't even need a subshell to get the current time. With bash 4.4:
# PS0 extracts a substring of length 0 from PS1; as a side-effect it assigns
# the current time as epoch seconds to PS0time (no visible output in this case)
PS0='\[${PS1:$((PS0time=\D{%s}, PS1calc=1, 0)):0}\]'
# PS1 uses the same trick to calculate the time elapsed since PS0 was output.
# It also expands the previous command's exit status ($?), the current time
# and directory ($PWD rather than \w, which shortens your home directory path
# prefix to "~") on the next line, and finally the actual prompt: 'user#host> '
PS1='\nSeconds: $((PS1calc ? \D{%s}-$PS0time : 0)) Status: $?\n\D{%T} ${PWD:PS1calc=0}\n\u#\h> '
(The %N date directive does not seem to be implemented as part of \D{...} expansion with bash 4.4. This is a pity, since it limits us to one-second resolution.)
Since PS0 is only evaluated and printed if there is a command to execute, the PS1calc flag is set to 1 so that the time difference is computed (following the command) during PS1 expansion; PS1calc being 0 means PS0 was not previously expanded and so did not re-record PS0time. PS1 then resets PS1calc to 0. In this way an empty line (just hitting return) doesn't accumulate seconds between return key presses.
One nice thing about this method is that there is no output when you have set -x active. No subshells or temporary files in sight: everything is done within the bash process itself.
I took this as a puzzle and want to show the result of my puzzling:
First I fiddled with time measurement. The date +%s.%N (which I hadn't come across before) was where I started from. Unfortunately, bash's arithmetic evaluation does not support floating point. Thus, I chose something else:
$ START=$(date +%s.%N)
$ awk 'BEGIN { printf("%fs", '$(date +%s.%N)' - '$START') }' /dev/null
8.059526s
$
This is sufficient to compute the time difference.
Next, I confirmed what you already described: sub-shell invocation prevents the use of shell variables. Thus, I thought about where else I could store the start time so that it is global for sub-shells but local enough to be used in multiple interactive shells concurrently. My solution is temporary files (in /tmp). To provide a unique name, I came up with this pattern: /tmp/$USER.START.$BASHPID.
$ date +%s.%N >/tmp/$USER.START.$BASHPID ; \
> awk 'BEGIN { printf("%fs", '$(date +%s.%N)' - '$(cat /tmp/$USER.START.$BASHPID)') }' /dev/null
cat: /tmp/ds32737.START.11756: No such file or directory
awk: cmd. line:1: BEGIN { printf("%fs", 1491297723.111219300 - ) }
awk: cmd. line:1: ^ syntax error
$
Damn! Again I'm trapped by the sub-shell issue. To work around this, I defined another variable:
$ INTERACTIVE_BASHPID=$BASHPID
$ date +%s.%N >/tmp/$USER.START.$INTERACTIVE_BASHPID ; \
> awk 'BEGIN { printf("%fs", '$(date +%s.%N)' - '$(cat /tmp/$USER.START.$INTERACTIVE_BASHPID)') }' /dev/null
0.075319s
$
Next step: fiddle this together with PS0 and PS1. In a similar puzzle (SO: How to change bash prompt color based on exit code of last command?), I already mastered the "quoting hell". Thus, I should be able to do it again:
$ PS0='$(date +%s.%N >"/tmp/${USER}.START.${INTERACTIVE_BASHPID}")'
$ PS1='$(awk "BEGIN { printf(\"%fs\", "$(date +%s.%N)" - "$(cat /tmp/$USER.START.$INTERACTIVE_BASHPID)") }" /dev/null)'"$PS1"
0.118550s
$
Ahh. It starts to work. Thus, there is only one issue left: finding the right start-up script for initializing INTERACTIVE_BASHPID. I found ~/.bashrc, which seems to be the right one for this, and which I had already used in the past for some other personal customizations.
So, putting it all together - these are the lines I added to my ~/.bashrc:
# command duration puzzle
INTERACTIVE_BASHPID=$BASHPID
date +%s.%N >"/tmp/${USER}.START.${INTERACTIVE_BASHPID}"
PS0='$(date +%s.%N >"/tmp/${USER}.START.${INTERACTIVE_BASHPID}")'
PS1='$(awk "BEGIN { printf(\"%fs\", "$(date +%s.%N)" - "$(cat /tmp/$USER.START.$INTERACTIVE_BASHPID)") }" /dev/null)'"$PS1"
The 3rd line (the date command) has been added to solve another issue. Comment it out and start a new interactive bash to find out why.
A snapshot of my cygwin xterm with bash where I added the above lines to ~/.bashrc:
Notes:
I consider this more a solution to a puzzle than a "serious productive" solution. I'm sure that this kind of time measurement itself consumes a lot of time. The time command might provide a better solution: SE: How to get execution time of a script effectively?. However, this was a nice exercise for practicing bash...
Don't forget that this code pollutes your /tmp directory with a growing number of small files. Either clean up /tmp from time to time or add the appropriate commands for clean-up (e.g. to ~/.bash_logout).
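As an illustration, a hedged sketch of such a clean-up for ~/.bash_logout, matching the file-name pattern used above:

# ~/.bash_logout: remove this shell's start-time file on logout
rm -f "/tmp/${USER}.START.${INTERACTIVE_BASHPID}"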
Arithmetic expansion runs in the current process and can assign to variables. It also produces output, which you can consume with something like \e[$((...,0))m (to output \e[0m) or ${t:0:$((...,0))} (to output nothing, which is presumably better). Bash's 64-bit integer support will count POSIX nanoseconds until the year 2262.
$ PS0='${t:0:$((t=$(date +%s%N),0))}'
$ PS1='$((( t )) && printf %d.%09ds $((t=$(date +%s%N)-t,t/1000000000)) $((t%1000000000)))${t:0:$((t=0))}\n$ '
0.053282161s
$ sleep 1
1.064178281s
$
$
PS0 is not evaluated for empty commands, which leaves a blank line (I'm not sure if you can conditionally print the \n without breaking things). You can work around that by switching to PROMPT_COMMAND instead (which also saves a fork):
$ PS0='${t:0:$((t=$(date +%s%N),0))}'
$ PROMPT_COMMAND='(( t )) && printf %d.%09ds\\n $((t=$(date +%s%N)-t,t/1000000000)) $((t%1000000000)); t=0'
0.041584565s
$ sleep 1
1.077152833s
$
$
That said, if you do not require sub-second precision, I would suggest using $SECONDS instead (which is also more likely to return a sensible answer if something sets the time).
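A minimal sketch of that $SECONDS idea, assuming bash 4.4+ for PS0 (the helper variable t is mine, not part of the answer above):

$ t=-1
$ PS0='${t:0:$((t=SECONDS,0))}'
$ PROMPT_COMMAND='(( t >= 0 )) && echo "elapsed: $((SECONDS - t))s"; t=-1'
elapsed: 0s
$ sleep 2
elapsed: 2s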
As correctly stated in the question, the command substitution in PS0 runs inside a sub-shell, which makes it unusable for setting the start time there.
Instead, one can use the history command with epoch seconds %s and the built-in variable $EPOCHSECONDS to calculate when the command finished by leveraging only $PROMPT_COMMAND.
# Save start time before executing command (does not work due to PS0 sub-shell)
# preexec() {
# STARTTIME=$EPOCHSECONDS
# }
# PS0=preexec
# Save end time, without duplicating commands when pressing Enter on an empty line
precmd() {
local st=$(HISTTIMEFORMAT='%s ' history 1 | awk '{print $2}');
if [[ -z "$STARTTIME" || (-n "$STARTTIME" && "$STARTTIME" -ne "$st") ]]; then
ENDTIME=$EPOCHSECONDS
STARTTIME=$st
else
ENDTIME=0
fi
}
__timeit() {
precmd;
if ((ENDTIME - STARTTIME >= 0)); then
printf 'Command took %d seconds.\n' "$((ENDTIME - STARTTIME))";
fi
# Do not forget your:
# - OSC 0 (set title)
# - OSC 777 (notification in gnome-terminal, urxvt; note, this one has preexec and precmd as OSC 777 features)
# - OSC 99 (notification in kitty)
# - OSC 7 (set url) - out of scope for this question
}
export PROMPT_COMMAND=__timeit
Note: If you have ignoredups in your $HISTCONTROL, then this will not report back for a command that is re-run.
Following @SherylHohman's use of variables in PS0, I've come up with this complete script. I've found you don't need a PS0Time flag, as PS0Calc doesn't exist on empty prompts, so the _elapsed function just exits.
#!/bin/bash
# string preceding ms, use color code or ascii
_ELAPTXT=$'\E[1;33m \uf135 '
# extract time
_printtime () {
local _var=${EPOCHREALTIME/,/};
echo ${_var%???}
}
# get diff time, print it and end color codings if any
_elapsed () {
[[ -v "${1}" ]] || ( local _VAR=$(_printtime);
local _ELAPSED=$(( ${_VAR} - ${1} ));
echo "${_ELAPTXT}$(_formatms ${_ELAPSED})"$'\n\e[0m' )
}
# format _elapsed with simple string substitution
_formatms () {
local _n=$((${1})) && case ${_n} in
? | ?? | ???)
echo $_n"ms"
;;
????)
echo ${_n:0:1}${_n:0,-3}"ms"
;;
?????)
echo ${_n:0:2}","${_n:0,-3}"s"
;;
??????)
printf $((${_n:0:3}/60))m+$((${_n:0:3}%60)),${_n:0,-3}"s"
;;
???????)
printf $((${_n:0:4}/60))m$((${_n:0:4}%60))s${_n:0,-3}"ms"
;;
*)
printf "too much!"
;;
esac
}
# prompts
PS0='${PS1:(PS0time=$(_printtime)):0}'
PS1='$(_elapsed $PS0time)${PS0:(PS0time=0):0}\u@\h:\w\$ '
[image of the resulting prompt]
Save it as _prompt and source it to try:
source _prompt
Change text, ascii codes and colors in _ELAPTXT
_ELAPTXT='\e[33m Elapsed time: '
I'm thinking of writing a script for cygwin to cd into a Windows directory whose path is copied from Windows Explorer.
e.g.
cdw D:\working\test
equals to
cd /cygdrive/d/working/test
But it seems that in a shell script all backslashes in parameters are ignored unless I use single quotes 'D:\working\test' or double backslashes D:\\working\\test.
But in my case that would be very inconvenient, because I can't simply paste the directory name into the command line to execute the script.
Is there any way to make cdw D:\working\test work?
Well, you can do it, but you want something strange :)
cdw()
{
set $(history | tail -1 )
shift 2
path="$*"
cd "$(cygpath "$path")"
}
Example of usage:
$ cdw D:\working\test
$ pwd
/cygdrive/d/working/test
The main point here is the usage of history.
You don't use the argument directly, but get it from the history in the form in which it was typed.
$ rawarg() { set $(history | tail -1 ); shift 2; echo "$#"; }
$ rawarg C:\a\b\c\d
C:\a\b\c\d
Of course, you can use this trick in an interactive shell only (for obvious reasons).
The problem you are dealing with is related to the shell. Any argument you add to cdw on the command line will be processed by the shell before cdw gets executed.
In order to prevent that processing to happen, you need at least one level of quoting,
either by enclosing the whole string in single quotes:
cd 'D:\working\test'
or with double backslashes:
cd D:\\working\\test
A separate program will not help, because the damage is already done before it runs. ;-)
However, I have a possible function for cdw, which works in my AST UWIN ksh:
function cdw { typeset dir
read -r dir?"Paste Directory Path: "
cd ${dir:?}
}
And this one works in Bash (which does not support read var?prompt):
function cdw {
typeset dir
printf "Paste Directory Path: "
read -r dir || return
cd ${dir:?}
}
For me, I just type the two single quotes around the pasted value.
Adding the single quotes still lets you copy and paste the path as-is.
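For instance, a small sketch of how the quoted path can be combined with cygpath (the path is just an example):

$ cd "$(cygpath 'D:\working\test')"
$ pwd
/cygdrive/d/working/test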
I want to make a script that takes a file path for argument, and cds into its folder.
Here is what I made :
#!/bin/bash
#remove the file name, and change every space into \space
shorter=`echo "$1" | sed 's/\/[^\/]*$//' | sed 's/\ /\\\ /g'`
echo $shorter
cd $shorter
I actually have 2 questions (I am a relative newbie to shell scripts) :
How could I make the cd "persistent"? I want to put this script into /usr/bin and then call it from wherever in the filesystem. Upon return of the script, I want to stay in the $shorter folder. Basically, if pwd were /usr/bin, I could make it work by typing . script /my/path instead of ./script /my/path, but what if I am in another folder?
The second question is trickier. My script fails whenever there is a space in the given argument. Although $shorter is exactly what I want (for instance /home/jack/my\ folder/subfolder), cd fails with the error /usr/bin/script: line 4: cd: /home/jack/my\: no file or folder of this type. I think I have tried everything; using things like cd '$shorter' or cd "'"$shorter"'" doesn't help. What am I missing?
Thanks a lot for your answers
in your .bashrc add the following line:
function shorter() { cd "${1%/*}"; }
% means remove the shortest match of the pattern from the end
/* is the pattern
Then in your terminal:
$ . ~/.bashrc # to refresh your bash configuration
$ type shorter # to check if your new function is available
shorter is a function
shorter ()
{
cd "${1%/*}"
}
$ shorter ./your/directory/filename # this will move to ./your/directory
The first part:
The change of directory won't be “persistent” beyond the lifetime of your script, because your script runs in a new shell process. You could, however, use a shell alias or a shell function. For example, you could embed the code in a shell function and define it in your .bash_profile or other source location.
mycdfunction () {
cd /blah/foo/"$1"
}
As for the “spaces in names” bit:
The general syntax for referring to a variable in Bourne shells is: "$var" — the "double quotes" tell the shell to expand any variables inside of them, but to group the outcome as a single parameter.
Omitting the double quotes around $var tells the shell to expand the variable, but then split the results into parameters (“words”) on whitespace. This is how the shell splits up parameters, normally.
Using 'single quotes' causes the shell to not expand any contents, but to group the contents together as a single parameter.
You can use \ (backslash-blank) to escape a space when you're typing (or in a script), but that's usually harder to read than using 'single quotes' or "double quotes"…
Note that the expansion phase includes: $variables wild?cards* {grouping,names}with-braces $(echo command substitution) and other effects.
            | expansion   | no expansion
 -----------+-------------+------------------
 grouping   | " "         | ' '
 splitting  | (no punc.)  | (not easily done)
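A quick interactive sketch of those rules (the variable and its value are just examples):

$ var='two words'
$ printf '[%s]\n' "$var"    # double quotes: expanded, kept as one parameter
[two words]
$ printf '[%s]\n' $var      # unquoted: expanded, then split on whitespace
[two]
[words]
$ printf '[%s]\n' '$var'    # single quotes: no expansion at all
[$var]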
For the first part, there is no need for the shorter variable at all. You can just do:
#!/bin/bash
cd "${1%/*}"
Explanation
Most shells, including bash, have what is called Parameter Expansion. It is very powerful and efficient, as it lets you manipulate variables natively within the shell where you would normally need a call to an external binary.
Two common examples of where you can use Parameter Expansion over an external call would be:
${var%/*} # replaces dirname
${var##*/} # replaces basename
See this FAQ on Parameter Expansion to learn more. In fact, while you're there you might as well go over the whole FAQ.
When you put your script inside /usr/bin you can call it from anywhere. And to deal with whitespace in the shell, just put the target between double quotes "" (but this doesn't matter here!).
Well here is a demo:
#!/bin/bash
#you can use dirname, but that's not appropriate here
#shorter=$(dirname $1)
#use parameter expansion instead (much better)
shorter=${1%/*}
echo $shorter
An alternate way to do it, since you have dirname on your Mac:
#!/bin/sh
cd "$(dirname "$1")"
Since you mentioned in the comments that you wanted to be able to drag files into a window and cd to them, you might want to make your script allow file or directory paths as arguments:
#!/bin/sh
[ -f "$1" ] && set "$(dirname "$1")" # convert a file to a directory
cd "$1"
We all know how to use <ctrl>-R to reverse search through history, but did you know you can use <ctrl>-S to forward search if you set stty stop ""? Also, have you ever tried running bind -p to see all of your keyboard shortcuts listed? There are over 455 on Mac OS X by default.
What is your single most favorite obscure trick, keyboard shortcut or shopt configuration using bash?
Renaming/moving files with suffixes quickly:
cp /home/foo/realllylongname.cpp{,-old}
This expands to:
cp /home/foo/realllylongname.cpp /home/foo/realllylongname.cpp-old
cd -
It's the command-line equivalent of the back button (takes you to the previous directory you were in).
Another favorite:
!!
Repeats your last command. Most useful in the form:
sudo !!
My favorite is '^string^string2' which takes the last command, replaces string with string2 and executes it
$ ehco foo bar baz
bash: ehco: command not found
$ ^ehco^echo
foo bar baz
Bash command line history guide
rename
Example:
$ ls
this_has_text_to_find_1.txt
this_has_text_to_find_2.txt
this_has_text_to_find_3.txt
this_has_text_to_find_4.txt
$ rename 's/text_to_find/been_renamed/' *.txt
$ ls
this_has_been_renamed_1.txt
this_has_been_renamed_2.txt
this_has_been_renamed_3.txt
this_has_been_renamed_4.txt
So useful
I'm a fan of the !$, !^ and !* expandos, which return, from the most recently submitted command line: the last item, the first non-command item, and all non-command items. To wit (note that the shell prints out the command first):
$ echo foo bar baz
foo bar baz
$ echo bang-dollar: !$ bang-hat: !^ bang-star: !*
echo bang-dollar: baz bang-hat: foo bang-star: foo bar baz
bang-dollar: baz bang-hat: foo bang-star: foo bar baz
This comes in handy when you, say ls filea fileb, and want to edit one of them: vi !$ or both of them: vimdiff !*. It can also be generalized to "the nth argument" like so:
$ echo foo bar baz
$ echo !:2
echo bar
bar
Finally, with pathnames, you can get at parts of the path by appending :h and :t to any of the above expandos:
$ ls /usr/bin/id
/usr/bin/id
$ echo Head: !$:h Tail: !$:t
echo Head: /usr/bin Tail: id
Head: /usr/bin Tail: id
When running commands, sometimes I'll want to run a command with the previous one's arguments. To do that, you can use this shortcut:
$ mkdir /tmp/new
$ cd !!:*
Occasionally, in lieu of using find, I'll break out a one-line loop if I need to run a bunch of commands on a list of files.
for file in *.wav; do lame "$file" "$(basename "$file" .wav).mp3" ; done;
Configuring the command-line history options in my .bash_login (or .bashrc) is really useful. The following is a set of settings that I use on my MacBook Pro.
Setting the following makes bash erase duplicate commands in your history:
export HISTCONTROL="erasedups:ignoreboth"
I also jack my history size up pretty high too. Why not? It doesn't seem to slow anything down on today's microprocessors.
export HISTFILESIZE=500000
export HISTSIZE=100000
Another thing that I do is ignore some commands from my history. No need to remember the exit command.
export HISTIGNORE="&:[ ]*:exit"
You definitely want to set histappend. Otherwise, bash overwrites your history when you exit.
shopt -s histappend
Another option that I use is cmdhist. This lets you save multi-line commands to the history as one command.
shopt -s cmdhist
Finally, on Mac OS X (if you're not using vi mode), you'll want to stop <CTRL>-S from acting as the terminal's scroll-stop character. Otherwise the terminal swallows it and bash never gets to interpret it as forward search.
stty stop ""
How to list only subdirectories in the current one?
ls -d */
It's a simple trick, but you wouldn't believe how much time I needed to find that one!
ESC.
Inserts the last argument from your last bash command. It comes in handy more often than you think.
cp file /to/some/long/path
cd ESC.
Sure, you can "diff file1.txt file2.txt", but Bash supports process substitution, which allows you to diff the output of commands.
For example, let's say I want to make sure my script gives me the output I expect. I can just wrap my script in <( ) and feed it to diff to get a quick and dirty unit test:
$ cat myscript.sh
#!/bin/sh
echo -e "one\nthree"
$
$ ./myscript.sh
one
three
$
$ cat expected_output.txt
one
two
three
$
$ diff <(./myscript.sh) expected_output.txt
1a2
> two
$
As another example, let's say I want to check if two servers have the same list of RPMs installed. Rather than sshing to each server, writing each list of RPMs to separate files, and doing a diff on those files, I can just do the diff from my workstation:
$ diff <(ssh server1 'rpm -qa | sort') <(ssh server2 'rpm -qa | sort')
241c240
< kernel-2.6.18-92.1.6.el5
---
> kernel-2.6.18-92.el5
317d315
< libsmi-0.4.5-2.el5
727,728d724
< wireshark-0.99.7-1.el5
< wireshark-gnome-0.99.7-1.el5
$
There are more examples in the
Advanced Bash-Scripting Guide at http://tldp.org/LDP/abs/html/process-sub.html.
My favorite command is "ls -thor"
It summons the power of the gods to list the most recently modified files in a conveniently readable format.
More of a novelty, but it's clever...
Top 10 commands used:
$ history | awk '{print $2}' | awk 'BEGIN {FS="|"}{print $1}' | sort | uniq -c | sort -nr | head
Sample output:
242 git
83 rake
43 cd
33 ss
24 ls
15 rsg
11 cap
10 dig
9 ping
3 vi
^R reverse search. Hit ^R, type a fragment of a previous command you want to match, and hit ^R until you find the one you want. Then I don't have to remember recently used commands that are still in my history. Not exclusively bash, but also: ^E for end of line, ^A for beginning of line, ^U and ^K to delete before and after the cursor, respectively.
I often have aliases for vi, ls, etc., but sometimes you want to escape the alias. Just add a backslash in front of the command:
Eg:
$ alias vi=vim
$ # To escape the alias for vi:
$ \vi # This doesn't open VIM
Cool, isn't it?
Here's a couple of configuration tweaks:
~/.inputrc:
"\C-[[A": history-search-backward
"\C-[[B": history-search-forward
This works the same as ^R but using the arrow keys instead. This means I can type (e.g.) cd /media/ then hit up-arrow to go to the last thing I cd'd to inside the /media/ folder.
(I use Gnome Terminal, you may need to change the escape codes for other terminal emulators.)
Bash completion is also incredibly useful, but it's a far more subtle addition. In ~/.bashrc:
if [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
This will enable per-program tab-completion (e.g. attempting tab completion when the command line starts with evince will only show files that evince can open, and it will also tab-complete command line options).
Works nicely with this also in ~/.inputrc:
set completion-ignore-case on
set show-all-if-ambiguous on
set show-all-if-unmodified on
I use the following a lot:
The :p modifier to print a history result. E.g.
!!:p
Will print the last command so you can check that it's correct before running it again. Just enter !! to execute it.
In a similar vein:
!?foo?:p
Will search your history for the most recent command that contained the string 'foo' and print it.
If you don't need to print,
!?foo
does the search and executes it straight away.
I have got a secret weapon: shell-fu.
There are thousands of smart tips, cool tricks and efficient recipes that most of the time fit on a single line.
One that I love (but I cheat a bit, since I use the fact that Python is installed on most Unix systems now):
alias webshare='python -m SimpleHTTPServer'
Now every time you type "webshare", the current directory will be available through port 8000. Really nice when you want to share files with friends on a local network without a USB key or remote dir. Streaming video and music will work too.
And of course the classic fork bomb that is completely useless but still a lot of fun :
$ :(){ :|:& };:
Don't try that on a production server...
You can use the watch command in conjunction with another command to look for changes. An example of this was when I was testing my router, and I wanted to get up-to-date numbers on stuff like signal-to-noise ratio, etc.
watch --interval=10 lynx -dump http://dslrouter/stats.html
type -a PROG
in order to find all the places where PROG is available, usually somewhere in ~/bin
rather than the one in /usr/bin/PROG that might have been expected.
I like to construct commands with echo and pipe them to the shell:
$ find dir -name \*~ | xargs echo rm
...
$ find dir -name \*~ | xargs echo rm | ksh -s
Why? Because it allows me to look at what's going to be done before I do it. That way if I have a horrible error (like removing my home directory), I can catch it before it happens. Obviously, this is most important for destructive or irrevocable actions.
When downloading a large file I quite often do:
while ls -la <filename>; do sleep 5; done
And then just ctrl+c when I'm done (or if ls returns non-zero). It's similar to the watch program but it uses the shell instead, so it works on platforms without watch.
Another useful tool is netcat, or nc. If you do:
nc -l -p 9100 > printjob.prn
Then you can set up a printer on another computer but instead use the IP address of the computer running netcat. When the print job is sent, it is received by the computer running netcat and dumped into printjob.prn.
pushd and popd almost always come in handy
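A quick sketch of the basic usage (the directories are just examples):

$ pushd /etc          # remember the current directory, then cd to /etc
$ pushd /var/log      # stack is now: /var/log /etc <where you started>
$ dirs -v             # show the directory stack with indices
$ popd                # drop /var/log, back to /etc
$ popd                # back to where you started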
One preferred way of navigating when I'm using multiple directories in widely separate places in a tree hierarchy is to use acd_func.sh (listed below). Once defined, you can do
cd --
to see a list of recent directories, with a numerical menu
cd -2
to go to the second-most recent directory.
Very easy to use, very handy.
Here's the code:
# do ". acd_func.sh"
# acd_func 1.0.5, 10-nov-2004
# petar marinov, http:/geocities.com/h2428, this is public domain
cd_func ()
{
local x2 the_new_dir adir index
local -i cnt
if [[ $1 == "--" ]]; then
dirs -v
return 0
fi
the_new_dir=$1
[[ -z $1 ]] && the_new_dir=$HOME
if [[ ${the_new_dir:0:1} == '-' ]]; then
#
# Extract dir N from dirs
index=${the_new_dir:1}
[[ -z $index ]] && index=1
adir=$(dirs +$index)
[[ -z $adir ]] && return 1
the_new_dir=$adir
fi
#
# '~' has to be substituted by ${HOME}
[[ ${the_new_dir:0:1} == '~' ]] && the_new_dir="${HOME}${the_new_dir:1}"
#
# Now change to the new dir and add to the top of the stack
pushd "${the_new_dir}" > /dev/null
[[ $? -ne 0 ]] && return 1
the_new_dir=$(pwd)
#
# Trim down everything beyond 11th entry
popd -n +11 2>/dev/null 1>/dev/null
#
# Remove any other occurrence of this dir, skipping the top of the stack
for ((cnt=1; cnt <= 10; cnt++)); do
x2=$(dirs +${cnt} 2>/dev/null)
[[ $? -ne 0 ]] && return 0
[[ ${x2:0:1} == '~' ]] && x2="${HOME}${x2:1}"
if [[ "${x2}" == "${the_new_dir}" ]]; then
popd -n +$cnt 2>/dev/null 1>/dev/null
cnt=cnt-1
fi
done
return 0
}
alias cd=cd_func
if [[ $BASH_VERSION > "2.05a" ]]; then
# ctrl+w shows the menu
bind -x "\"\C-w\":cd_func -- ;"
fi
Expand complicated lines before hitting the dreaded enter
Alt+Ctrl+e — shell-expand-line (may need to use Esc, Ctrl+e on your keyboard)
Ctrl+_ — undo
Ctrl+x, * — glob-expand-word
$ echo !$ !-2^ * Alt+Ctrl+e
$ echo aword someotherword * Ctrl+_
$ echo !$ !-2^ * Ctrl+x, *
$ echo !$ !-2^ LOG Makefile bar.c foo.h
&c.
I've always been partial to:
ctrl-E # move cursor to end of line
ctrl-A # move cursor to beginning of line
I also use shopt -s cdable_vars, then you can create bash variables to common directories. So, for my company's source tree, I create a bunch of variables like:
export Dcentmain="/var/localdata/p4ws/centaur/main/apps/core"
then I can change to that directory by cd Dcentmain.
pbcopy
This copies to the Mac system clipboard. You can pipe commands to it...try:
pwd | pbcopy
$ touch {1,2}.txt
$ ls [12].txt
1.txt 2.txt
$ rm !:1
rm [12].txt
$ history | tail -10
...
10007 touch {1,2}.txt
...
$ !10007
touch {1,2}.txt
$ for f in *.txt; do mv $f ${f/txt/doc}; done
Using 'set -o vi' from the command line, or better, in .bashrc, puts you in vi editing mode on the command line. You start in 'insert' mode so you can type and backspace as normal, but if you make a 'large' mistake you can hit the esc key and then use 'b' and 'f' to move around as you do in vi. cw to change a word. Particularly useful after you've brought up a history command that you want to change.
Similar to many above, my current favorite is the keystroke Alt+. (the Alt and "." keys together). This is the same as !$ (it inserts the last argument from the previous command) except that it's immediate and, for me, easier to type. (It just can't be used in scripts.)
eg:
mkdir -p /tmp/test/blah/oops/something
cd [alt].
String multiple commands together using the && operator:
./run.sh && tail -f log.txt
or
kill -9 1111 && ./start.sh
Shell scripts are often used as glue, for automation and simple one-off tasks. What are some of your favorite "hidden" features of the Bash shell/scripting language?
One feature per answer
Give an example and short description of the feature, not just a link to documentation
Label the feature using bold title as the first line
See also:
Hidden features of C
Hidden features of C#
Hidden features of C++
Hidden features of Delphi
Hidden features of Python
Hidden features of Java
Hidden features of JavaScript
Hidden features of Ruby
Hidden features of PHP
Hidden features of Perl
Hidden features of VB.Net
insert preceding line's final parameter
alt-. is the most useful key combination ever; try it and see, for some reason no one knows about this one.
Press it again and again to select older last parameters.
Great when you want to do something else to something you used just a moment ago.
If you want to keep a process running after you log out:
disown -h <pid>
is a useful bash built-in. Unlike nohup, you can run disown on an already-running process.
First, stop your job with control-Z, get the pid from ps (or use echo $!), use bg to send it to the background, then use disown with the -h flag.
Don't forget to background your job or it will be killed when you logout.
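For example, the whole sequence might look roughly like this (the command name and job number are placeholders):

$ some_long_job
^Z
[1]+  Stopped                 some_long_job
$ bg %1           # resume it in the background
$ disown -h %1    # detach it so it survives logout
$ exit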
Almost everything listed under the EXPANSION section in the manual
In particular, parameter expansion:
$ I=foobar
$ echo ${I/oo/aa} #replacement
faabar
$ echo ${I:1:2} #substring
oo
$ echo ${I%bar} #remove suffix
foo
$ echo ${I#foo} #remove prefix
bar
My favorite:
sudo !!
Rerun the previous command with sudo.
More magic key combinations:
Ctrl + r begins a “reverse incremental search” through your command history. As you continue to type, it retrieves the most recent command that contains all the text you enter.
Tab completes the word you've typed so far if it's unambiguous.
Tab Tab lists all completions for the word you've typed so far.
Alt + * inserts all possible completions, which is particularly helpful, say, if you've just entered a potentially destructive command with wildcards:
rm -r source/d*.c Alt + *
rm -r source/delete_me.c source/do_not_delete_me.c
Ctrl + Alt + e performs alias, history, and shell expansion on the current line. In other words, the current line is redisplayed as it will be processed by the shell:
ls $HOME/tmp Ctrl Alt + e
ls -N --color=tty -T 0 /home/cramey
Get back history commands and arguments
It's possible to selectively access previous commands and arguments using the ! operator. It's very useful when you are working with long paths.
You can check your last commands with history.
You can use previous commands with !<n>, where n is the index of the command in history; negative numbers count backwards from the last command in history.
ls -l foo bar
touch foo bar
!-2
You can use previous arguments with !:<n>, zero is the command, >= 1 are the arguments.
ls -l foo
touch !:2
cp !:1 bar
And you can combine both with !<n>:<m>
touch foo bar
ls -l !:1 !:2
rm !-2:1 !-2:2
!-2
You can also use argument ranges !<n>:<x>-<y>
touch boo far
ls -l !:1-2
Other ! special modifiers are:
* for all the arguments
ls -l foo bar
ls !*
^ for the first argument (!:1 == !^)
$ for the last argument
ls -l foo bar
cat !$ > /dev/null
I like the -x feature, which lets you see what's going on in your script.
bash -x script.sh
SECONDS=0; sleep 5 ; echo "that took approximately $SECONDS seconds"
SECONDS
Each time this parameter is referenced, the number of seconds since shell invocation is returned. If a value is assigned to SECONDS, the value returned upon subsequent references is the number of seconds since the assignment plus the value assigned. If SECONDS is unset, it loses its special properties, even if it is subsequently reset.
Here is one of my favorites. This sets tab completion to not be case sensitive. It's really great for quickly typing directory paths, especially on a Mac where the file system is not case sensitive by default. I put this in .inputrc in my home folder.
set completion-ignore-case on
The special variable $RANDOM:
if [[ $(($RANDOM % 6)) = 0 ]]
then echo "BANG"
else
echo "Try again"
fi
Regular expression handling
Recent bash releases feature regular expression matching, so you can do:
if [[ "mystring" =~ REGEX ]] ; then
echo match
fi
where REGEX is a raw regular expression in the format described by man re_format.
Matches from any bracketed parts are stored in the BASH_REMATCH array, starting at element 1 (element 0 is the matched string in its entirety), so you can use this to do regex-powered parsing too.
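For instance, a small sketch that pulls fields out of a date-like string via BASH_REMATCH (the pattern and input are just examples):

if [[ "2009-06-05" =~ ^([0-9]{4})-([0-9]{2})-([0-9]{2})$ ]]; then
    echo "year:  ${BASH_REMATCH[1]}"
    echo "month: ${BASH_REMATCH[2]}"
    echo "day:   ${BASH_REMATCH[3]}"
fi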
Ctrl-x Ctrl-e
This will load the current command into the editor defined in the variable VISUAL. This is really useful for long commands like some of those listed here.
To use vi as your editor:
export VISUAL=vi
Quick & Dirty correction of typos (especially useful for long commands over slow connections where using the command history and scrolling through it would be horrible):
$ cat /proc/cupinfo
cat: /proc/cupinfo: No such file or directory
$ ^cup^cpu
Also try !:s/old/new which substitutes old with new in the previous command once.
If you want to substitute many occurrences you can do a global substitution with !:gs/old/new.
You can use the gs and s commands with any history event, e.g.
!-2:s/old/new
To substitute old with new (once) in the second to last command.
Here are two of my favorites:
To check the syntax without really executing the script, use:
bash -n script.sh
Go back to the last directory (yes I know pushd and popd, but this is quicker)
cd -
Using Infix Boolean Operators
Consider the simple if:
if [ 2 -lt 3 ]
then echo "Numbers are still good!"
fi
That -lt looks kinda ugly. Not very modern. If you use double brackets around your boolean expression you can use the normal boolean operators!
if [[ 2 < 3 ]]
then echo "Numbers are still good!"
fi
Arrays:
#!/bin/bash
array[0]="a string"
array[1]="a string with spaces and \"quotation\" marks in it"
array[2]="a string with spaces, \"quotation marks\" and (parenthesis) in it"
echo "There are ${#array[*]} elements in the array."
for n in "${array[@]}"; do
echo "element = >>${n}<<"
done
More details on arrays (and other advanced bash scripting stuff) can be found in the Advanced Bash-Scripting Guide.
Running a command before displaying the bash prompt
Set a command in the "PROMPT_COMMAND" env variable and it will be run automatically before each prompt.
Example:
[lsc@home]$ export PROMPT_COMMAND="date"
Fri Jun 5 15:19:18 BST 2009
[lsc@home]$ ls
file_a file_b file_c
Fri Jun 5 15:19:19 BST 2009
[lsc@home]$ ls
For the next April Fools' Day, add "export PROMPT_COMMAND=cd" to someone's .bashrc, then sit back and watch the confusion unfold.
Magic key combinations from the bash man pages:
Ctrl + a and Ctrl + e move the cursor to the beginning and end of the current line, respectively.
Ctrl + t and Alt + t transpose the character and word before the cursor with the current one, then move the cursor forward.
Alt + u and Alt + l convert the current word (from the cursor to the end) to uppercase and lowercase.
Hint: Press Alt + – followed by either of these commands to convert the beginning of the current word.
Bonus man tips:
While viewing man pages, use / to search for text within the pages. Use n to jump ahead to the next match or N for the previous match.
Speed your search for a particular command or sub-section within the man pages by taking advantage of their formatting:
o Instead of typing /history expansion to find that section, try /^history, using the caret (^) to find only lines that begin with "history."
o Try / read, with a few leading spaces, to search for that builtin command. Builtins are always indented in the man pages.
export TMOUT=$((15*60))
Terminate bash after 15 minutes of idle time; set to 0 to disable. I usually put this in ~/.bashrc on my root accounts. It's handy when administering your boxes, in case you forget to log out before walking away from the terminal.
Undo
C-S-- (Control Shift Minus) undoes typing actions.
Kill / Yank
Any delete operation, such as C-w (delete previous word), C-k (delete to end of line), or C-u (delete to start of line), copies its deleted text to the kill ring. You can paste the last kill with C-y and cycle through (and paste from) the ring of deleted items with Alt-y.
You can ignore certain files while tab completing by setting the FIGNORE variable.
For example, if you have a subversion repo and you want to navigate more easily, do
export FIGNORE=".svn"
now you can cd without being blocked by .svn directories.
Using arithmetic:
if [[ $((2+1)) = $((1+2)) ]]
then echo "still ok"
fi
Brace expansion
Standard expansion with {x,y,z}:
$ echo foo{bar,baz,blam}
foobar foobaz fooblam
$ cp program.py{,.bak} # very useful with cp and mv
Sequence expansion with {x..y}:
$ echo {a..z}
a b c d e f g h i j k l m n o p q r s t u v w x y z
$ echo {a..f}{0..3}
a0 a1 a2 a3 b0 b1 b2 b3 c0 c1 c2 c3 d0 d1 d2 d3 e0 e1 e2 e3 f0 f1 f2 f3
I recently read Csh Programming Considered Harmful which contained this astounding gem:
Consider the pipeline:
A | B | C
You want to know the status of C, well, that's easy: it's in $?, or
$status in csh. But if you want it from A, you're out of luck -- if
you're in the csh, that is. In the Bourne shell, you can get it, although
doing so is a bit tricky.
Here's something I had to do where I ran dd's
stderr into a grep -v pipe to get rid of the records in/out noise, but had
to return the dd's exit status, not the grep's:
device=/dev/rmt8
dd_noise='^[0-9]+\+[0-9]+ records (in|out)$'
exec 3>&1
status=`((dd if=$device ibs=64k 2>&1 1>&3 3>&- 4>&-; echo $? >&4) |
egrep -v "$dd_noise" 1>&2 3>&- 4>&-) 4>&1`
exit $status;
Truncate content of a file (zeroing file)
> file
Specifically, this is very good for truncating log files, when the file is open by another process, which still may write to the file.
Not really a feature but rather a pointer: I found many "hidden features", secrets and various bash usefulness at commandlinefu.com. Many of the highest rated answers to this question I learned on that site :)
Another small one:
Alt+#
comments out the current line and moves it into the history buffer.
So when you're assembling a command line and you need to issue an interim command to e.g. find a file, you just hit alt+#, issue the other command, go up in the history, uncomment and proceed.
Braces in lieu of do and done in for loop
A for loop body usually goes in do...done (just an example):
for f in *;
do
ls "$f";
done
But we can use a C style with braces instead:
for f in *; {
ls "$f";
}
I think this looks better than do...done and I prefer this one. I have not yet found this in any Bash documentation, so it really is a hidden feature.
C style numeric expressions:
let x="RANDOM%2**8"
echo -n "$x = 0b"
for ((i=8; i>=0; i--)); do
let n="2**i"
if (( (x&n) == n )); then echo -n "1"
else echo -n "0"
fi
done
echo ""
These settings are another one of my favorites.
export HISTCONTROL=erasedups
export HISTSIZE=1000
The first one makes sure bash doesn't log commands more than once, which really improves history's usefulness. The other expands the history size to 1000 from the default of 100. I actually set this to 10000 on my machines.