How to prepend to a file using fish shell

I saw there were several good answers for bash and even zsh (e.g. here). However, I wasn't able to find a good one for fish.
Is there a canonical or clean way to prepend a string or a couple of lines to an existing file (in place)? Similar to what echo "new text" >> test.txt does for appending.

As part of fish's intentional aim toward simplicity, it avoids syntactic sugar found in zsh. The equivalent of the zsh-only code <<< "to be prepended" < test.txt | sponge test.txt in fish is:
begin; echo "to be prepended"; cat test.txt; end | sponge test.txt
sponge is a tool from the moreutils package; the fish version requires it just as much as the zsh original did. However, you can replace it with a function easily enough; consider the below:
# Note: this requires GNU chmod. It still works if GNU chmod is installed under a
# different name (e.g. via "coreutils" from nixpkgs or MacPorts on macOS);
# the function tries to figure that out.
function copy_file_permissions -a srcfile destfile
    if command -v coreutils &>/dev/null # works with nixpkgs-installed coreutils on macOS
        coreutils --coreutils-prog=chmod --reference=$srcfile -- $destfile
    else if command -v gchmod &>/dev/null # works with Homebrew coreutils on macOS
        gchmod --reference=$srcfile -- $destfile
    else
        # hope that plain "chmod" is the GNU version, or --reference won't work
        chmod --reference=$srcfile -- $destfile
    end
end
function mysponge -a destname
    set tempfile (mktemp -t $destname.XXXXXX)
    if test -e $destname
        copy_file_permissions $destname $tempfile
    end
    cat >$tempfile
    mv -- $tempfile $destname
end
function prependString -a stringToPrepend outputName
    begin
        echo $stringToPrepend
        cat -- $outputName
    end | mysponge $outputName
end
prependString "First Line" out.txt
prependString "No I'm First" out.txt

For the specific case where the file is small to medium-sized (fits in memory), consider using the ed program, which avoids temporary files by loading the data into memory. For example, use the following script. This approach avoids the need to install extra packages (moreutils, etc.).
#!/usr/bin/env fish
function prepend
    set t $argv[1]
    set f $argv[2]
    # "0a" appends after line 0, i.e. inserts at the top of the file;
    # printf supplies the text and the ed commands on separate lines
    printf '0a\n%s\n.\nwq\n' $t | ed -s $f
end
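Usage matches the sponge-based function above; note that the target file should already exist, since ed opens it for editing (with GNU ed, -s just silences the byte counts):
prepend "First Line" out.txt
prepend "No I'm First" out.txt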


Ruby equivalent of Perl's one-line string replacer

With perl, you can do this:
$ perl -pi -e 's/foo/bar/g' *.txt
Which will replace the string "foo" with "bar" on all *.txt files in the current directory.
I like this, but I was wondering if the same thing is possible using Ruby.
Yep. Ruby has an equivalent for most of Perl's command line options, and many of them are identical.
$ ruby -pi -e 'gsub /foo/, "bar"' *.txt
Here are the relevant docs from man ruby:
-i extension – Specifies in-place-edit mode. The extension, if specified, is added to old file name to make a backup copy. For example:
% echo matz > /tmp/junk
% cat /tmp/junk
matz
% ruby -p -i.bak -e '$_.upcase!' /tmp/junk
% cat /tmp/junk
MATZ
% cat /tmp/junk.bak
matz
-n – Causes Ruby to assume the following loop around your script, which makes it iterate over file name arguments somewhat like sed -n or awk.
while gets
...
end
-p – Acts mostly same as -n switch, but prints the value of variable $_ at the end of each loop. For example:
% echo matz | ruby -p -e '$_.tr! "a-z", "A-Z"'
MATZ
My code above uses Kernel#gsub, which is only available in -p/-n mode. Per the docs:
gsub(pattern, replacement) → $_
gsub(pattern) {|...| block } → $_
Equivalent to $_.gsub..., except that $_ will be updated if
substitution occurs. Available only when -p/-n command line option
specified.
There are a handful of other such Kernel methods, which are useful to know: chomp, chop, and (naturally) sub.
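Among those, sub behaves like gsub but replaces only the first match on each line; a quick sketch reusing the hypothetical *.txt files from above:
ruby -pi -e 'sub /foo/, "bar"' *.txt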
Check out man ruby; there are a lot of great features.

Performance with bash loop when renaming files

Sometimes I need to rename a number of files, e.g. to add a prefix or remove something.
At first I wrote a Python script. It works well, but I wanted a shell version, so I wrote something like this:
$1 – the directory to list,
$2 – the pattern to be replaced,
$3 – the replacement.
echo "usage: dir pattern replacement"
for fname in `ls $1`
do
    newName=$(echo $fname | sed "s/^$2/$3/")
    echo 'mv' "$1/$fname" "$1/$newName&&"
    mv "$1/$fname" "$1/$newName"
done
It works, but it is very slow, probably because it has to create a process (sed and mv here), destroy it, and create the same process again just to run it with a different argument. Is that true? If so, how can I avoid that and get a faster version?
I thought of giving all the processed files their names at once (using sed to process them in one pass), but that still needs mv in the loop.
How do you do this? Thanks. If you find my question hard to understand, please be patient; my English is not very good.
--- update ---
I am sorry for my description. My core question is: "If we have to run some command in a loop, will that lower performance?" For example, in for i in {1..100000}; do ls 1>/dev/null; done, creating and destroying the process takes most of the time. What I want to know is: is there any way to reduce that cost?
Thanks to kev and S.R.I for giving me a rename solution.
Every time you call an external binary (ls, sed, mv), bash has to fork itself to exec the command, and that takes a big performance hit.
You can do everything you want to do in pure bash 4.x, calling only mv:
pat_rename(){
    if [[ ! -d "$1" ]]; then
        echo "Error: '$1' is not a valid directory"
        return
    fi
    shopt -s globstar
    cd "$1"
    for file in **; do
        # dry run: prints the mv commands; drop the echo to actually rename
        echo "mv $file ${file//$2/$3}"
    done
}
Simplest first. What's wrong with rename?
mkdir tstbin
for i in `seq 1 20`
do
    touch tstbin/filename$i.txt
done
rename .txt .html tstbin/*.txt
Or are you using an older *nix machine?
To avoid re-executing sed on each file, you can instead set up two name streams, one original and one transformed, and then sip from the ends:
exec 3< <(ls)
exec 4< <(ls | sed 's/from/to/')
IFS=`echo`   # command substitution strips the newline, so this sets IFS to empty and read keeps whitespace intact
while read -u3 orig && read -u4 to; do
    mv "${orig}" "${to}";
done;
I think you can store all the file names in a file or string, and use awk and sed to process them at once instead of one by one.
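A sketch of that idea (dir, old, and new are placeholders; it assumes filenames without quotes or newlines): build every mv command in one sed pass, preview it, then pipe it to sh:
ls dir | sed -n 's|^old\(.*\)|mv "dir/old\1" "dir/new\1"|p'        # preview the commands
ls dir | sed -n 's|^old\(.*\)|mv "dir/old\1" "dir/new\1"|p' | sh   # actually rename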

How to list variables declared in script in bash?

In my script in bash, there are lot of variables, and I have to make something to save them to file.
My question is how to list all variables declared in my script and get list like this:
VARIABLE1=abc
VARIABLE2=def
VARIABLE3=ghi
set will output the variables; unfortunately, it will also output the function definitions.
Luckily POSIX mode only outputs the variables:
( set -o posix ; set ) | less
Pipe to less, or redirect to wherever you want the output.
So to get the variables declared in just the script:
( set -o posix ; set ) >/tmp/variables.before
source script
( set -o posix ; set ) >/tmp/variables.after
diff /tmp/variables.before /tmp/variables.after
rm /tmp/variables.before /tmp/variables.after
(Or at least something based on that :-) )
compgen -v
It lists all variables including local ones.
I learned it from Get list of variables whose name matches a certain pattern, and used it in my script.
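A minimal sketch combining compgen -v with the before/after diff idea (script is whatever file you want to inspect; this yields names only, not values):
compgen -v | sort > /tmp/vars.before
source script
compgen -v | sort > /tmp/vars.after
comm -13 /tmp/vars.before /tmp/vars.after   # names that appeared after sourcing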
for i in _ {a..z} {A..Z}; do eval "echo \${!$i@}" ; done | xargs printf "%s\n"
This prints all shell variable names. You can get a list before and after sourcing your file, just as with set, to diff which variables are new (as explained in the other answers). But keep in mind that such diff filtering can drop variables you need that were already set before you sourced your file.
In your case, if you know your variables' names start with "VARIABLE", then you can source your script and do:
for var in ${!VARIABLE@}; do
    printf "%s%q\n" "$var=" "${!var}"
done
UPDATE: For a pure bash solution (no external commands used):
for i in _ {a..z} {A..Z}; do
    for var in `eval echo "\\${!$i@}"`; do
        echo $var
        # you can test whether $var matches some criteria and write it to the file or ignore it
    done
done
Based on some of the above answers, this worked for me:
before=$(set -o posix; set | sort)
source file
comm -13 <(printf %s "$before") <(set -o posix; set | sort | uniq)
If you can post-process (as already mentioned), you might just place a set call at the beginning and end of your script (each redirected to a different file) and diff the two files. Realize that this will still contain some noise.
You can also do this programmatically. To limit the output to just your current scope, you would have to implement a wrapper around variable creation. For example:
store() {
    export ${1}="${*:2}"
    # the regex must be unquoted: quoting the pattern makes =~ match it as a literal string
    [[ ${STORED} =~ (^|\ )${1}($|\ ) ]] || STORED="${STORED} ${1}"
}
store VAR1 abc
store VAR2 bcd
store VAR3 cde
for i in ${STORED}; do
    echo "${i}=${!i}"
done
Which yields
VAR1=abc
VAR2=bcd
VAR3=cde
A little late to the party, but here's another suggestion:
#!/bin/bash
set_before=$( set -o posix; set | sed -e '/^_=/d' )
# create/set some variables
VARIABLE1=a
VARIABLE2=b
VARIABLE3=c
set_after=$( set -o posix; unset set_before; set | sed -e '/^_=/d' )
diff <(echo "$set_before") <(echo "$set_after") | sed -e 's/^> //' -e '/^[[:digit:]].*/d'
The diff+sed pipeline outputs all script-defined variables in the desired format (as specified in the OP's post):
VARIABLE1=a
VARIABLE2=b
VARIABLE3=c
Here's something similar to the @GinkgoFr answer, but without the problems identified by @Tino or @DejayClayton,
and it is more robust than @DouglasLeeder's clever set -o posix bit:
+ function SOLUTION() { (set +o posix; set) | sed -ne '/^\w\+=/!q; p;'; }
The difference is that this solution STOPS after the first non-variable report, e.g. at the first function reported by set.
BTW: the "Tino" problem is solved. Even though POSIX mode is turned off and functions are reported by set, the sed portion of the solution only lets variable reports through (i.e. VAR=VALUE lines). In particular, A2 does not spuriously make it into the output.
+ function a() { echo $'\nA2=B'; }; A0=000; A9=999;
+ SOLUTION | grep '^A[0-9]='
A0=000
A9=999
AND: The "DejayClayton" problem is solved (embedded newlines in variable values do not disrupt the output - each VAR=VALUE get a single output line):
+ A1=$'111\nA2=222'; A0=000; A9=999;
+ SOLUTION | grep '^A[0-9]='
A0=000
A1=$'111\nA2=222'
A9=999
NOTE: the solution provided by @DouglasLeeder suffers from the "DejayClayton" problem (values with embedded newlines). Below, A1 is wrong and A2 should not show at all.
$ A1=$'111\nA2=222'; A0=000; A9=999; (set -o posix; set) | grep '^A[0-9]='
A0=000
A1='111
A2=222'
A9=999
FINALLY: I don't think the version of bash matters, but it might. I did my testing / developing on this one:
$ bash --version
GNU bash, version 4.4.12(1)-release (x86_64-pc-msys)
POST-SCRIPT: Given some of the other responses to the OP, I'm less than 100% sure that set always converts newlines within a value to \n, which this solution relies upon to avoid the "DejayClayton" problem. Perhaps that's a modern behavior? Or a compile-time variation? Or a set -o or shopt option setting? If you know of such variations, please add a comment...
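A quick way to test that assumption on your own bash (a sketch): if set escapes the newline, the variable is reported on a single line:
A1=$'111\nA2=222'
set | grep '^A1='
# expected single-line (escaped) output: A1=$'111\nA2=222'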
If you're only concerned with printing a list of variables with static values (i.e. expansion doesn't work in this case) then another option would be to add start and end markers to your file that tell you where your block of static variable definitions is, e.g.
#!/bin/bash
# some code
# region variables
VAR1=FOO
VAR2=BAR
# endregion
# more code
Then you can just print that part of the file.
Here's something I whipped up for that:
function show_configuration() {
    local START_LINE=$(( $(< "$0" grep -m 1 -n "region variables" | cut -d: -f1) + 1 ))
    local END_LINE=$(( $(< "$0" grep -m 1 -n "endregion" | cut -d: -f1) - 1 ))
    < "$0" awk "${START_LINE} <= NR && NR <= ${END_LINE}"
}
First, note that the block of variables resides in the same file this function is in, so I can use $0 to access the contents of the file.
I use "region" markers to separate different regions of code. So I simply grep for the "variable" region marker (first match: grep -m 1) and let grep prefix the line number (grep -n). Then I have to cut the line number from the match output (splitting on :). Lastly, add or subtract 1 because I don't want the markers to be part of the output.
Now, to print that range of the file I use awk with line number conditions.
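Calling show_configuration on the sample script above prints just the block between the markers:
VAR1=FOO
VAR2=BAR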
Try using a script (let's call it "ls_vars"):
#!/bin/bash
set -a
env > /tmp/a
source "$1"
env > /tmp/b
diff /tmp/{a,b} | sed -ne 's/^> //p'
chmod +x it, and:
ls_vars your-script.sh > vars.files.save
From a security perspective, either @akostadinov's answer or @JuvenXu's answer is preferable to relying upon the unstructured output of the set command, due to the following potential security flaw:
#!/bin/bash
function doLogic()
{
    local COMMAND="${1}"
    if ( set -o posix; set | grep -q '^PS1=' )
    then
        echo 'Script is interactive'
    else
        echo 'Script is NOT interactive'
    fi
}
doLogic 'hello' # Script is NOT interactive
doLogic $'\nPS1=' # Script is interactive
The above function doLogic uses set to check for the presence of variable PS1 to determine if the script is interactive or not (never mind if this is the best way to accomplish that goal; this is just an example.)
However, the output of set is unstructured, which means that any variable that contains a newline can totally contaminate the results.
This, of course, is a potential security risk. Instead, use either Bash's support for indirect variable name expansion, or compgen -v.
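For instance, here is a sketch of the same check done safely with bash's [[ -v ]] test (bash 4.2+), which avoids parsing set output entirely:
if [[ -v PS1 ]]; then
    echo 'Script is interactive'
else
    echo 'Script is NOT interactive'
fi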
Try this: set | egrep "^\w+=" (with or without the | less piping).
The first proposed solution, ( set -o posix ; set ) | less, works but has a drawback: it transmits control codes to the terminal, so they are not displayed properly. For example, if there is (as is likely) an IFS=$' \t\n' variable, we see:
IFS='
'
…instead.
My egrep solution displays this (and possibly other similar ones) properly.
I probably stole this answer a while ago... anyway, here it is, slightly different, as a function:
##
# usage: source bin/nps-bash-util-funcs
# doEchoVars
doEchoVars(){
    # if the tmp dir does not exist, create it and take the "before" snapshot
    test -z "${tmp_dir}" && \
        export tmp_dir="$(cd "$(dirname $0)/../../.."; pwd)""/dat/log/.tmp.$$" && \
        mkdir -p "$tmp_dir" && \
        ( set -o posix ; set ) | sort >"$tmp_dir/.vars.before"
    ( set -o posix ; set ) | sort >"$tmp_dir/.vars.after"
    cmd="$(comm -3 $tmp_dir/.vars.before $tmp_dir/.vars.after | perl -ne 's#\s+##g;print "\n $_ "' )"
    echo -e "$cmd"
}
The printenv command:
printenv prints all environment variables along with their values. Note that it only shows exported (environment) variables, not unexported shell variables.
Good luck...
A simple way to do this is to dump the shell's variables with set at the top and bottom of your script, then use diff to keep only the ones your script defined:
# Add this line at the top of your script:
set > /tmp/old_vars.log
# Add this line at the end of your script:
set > /tmp/new_vars.log
# Optionally, remove unwanted variables (e.g., passwords) with grep:
set | grep -v "PASSWORD1=\|PASSWORD2=\|PASSWORD3=" > /tmp/new_vars.log
# Now you can compare the two files to extract the variables your script set:
diff /tmp/old_vars.log /tmp/new_vars.log | grep "^>" > /tmp/script_vars.log
You can now retrieve variables of your script in /tmp/script_vars.log.
Or at least something based on that!
TL;DR
With zsh's typeset -m <GLOBPATH>¹:
$ VARIABLE1=abc
$ VARIABLE2=def
$ VARIABLE3=ghi
$ noglob typeset -m VARIABLE*
VARIABLE1=abc
VARIABLE2=def
VARIABLE3=ghi
¹ Documentation for typeset can be found in man zshbuiltins or man zshall.

What is your single most favorite command-line trick using Bash? [closed]

We all know how to use <ctrl>-R to reverse search through history, but did you know you can use <ctrl>-S to forward search if you set stty stop ""? Also, have you ever tried running bind -p to see all of your keyboard shortcuts listed? There are over 455 on Mac OS X by default.
What is your single most favorite obscure trick, keyboard shortcut or shopt configuration using bash?
Renaming/moving files with suffixes quickly:
cp /home/foo/realllylongname.cpp{,-old}
This expands to:
cp /home/foo/realllylongname.cpp /home/foo/realllylongname.cpp-old
cd -
It's the command-line equivalent of the back button (takes you to the previous directory you were in).
Another favorite:
!!
Repeats your last command. Most useful in the form:
sudo !!
My favorite is ^string^string2, which takes the last command, replaces string with string2, and executes it:
$ ehco foo bar baz
bash: ehco: command not found
$ ^ehco^echo
foo bar baz
Bash command line history guide
rename
Example:
$ ls
this_has_text_to_find_1.txt
this_has_text_to_find_2.txt
this_has_text_to_find_3.txt
this_has_text_to_find_4.txt
$ rename 's/text_to_find/been_renamed/' *.txt
$ ls
this_has_been_renamed_1.txt
this_has_been_renamed_2.txt
this_has_been_renamed_3.txt
this_has_been_renamed_4.txt
So useful
I'm a fan of the !$, !^ and !* expandos, returning, from the most recent submitted command line: the last item, first non-command item, and all non-command items. To wit (Note that the shell prints out the command first):
$ echo foo bar baz
foo bar baz
$ echo bang-dollar: !$ bang-hat: !^ bang-star: !*
echo bang-dollar: baz bang-hat: foo bang-star: foo bar baz
bang-dollar: baz bang-hat: foo bang-star: foo bar baz
This comes in handy when you, say ls filea fileb, and want to edit one of them: vi !$ or both of them: vimdiff !*. It can also be generalized to "the nth argument" like so:
$ echo foo bar baz
$ echo !:2
echo bar
bar
Finally, with pathnames, you can get at parts of the path by appending :h and :t to any of the above expandos:
$ ls /usr/bin/id
/usr/bin/id
$ echo Head: !$:h Tail: !$:t
echo Head: /usr/bin Tail: id
Head: /usr/bin Tail: id
When running commands, sometimes I'll want to run a command with the previous one's arguments. To do that, you can use this shortcut:
$ mkdir /tmp/new
$ cd !!:*
Occasionally, in lieu of using find, I'll break out a one-line loop if I need to run a bunch of commands on a list of files.
for file in *.wav; do lame "$file" "$(basename "$file" .wav).mp3" ; done;
Configuring the command-line history options in my .bash_login (or .bashrc) is really useful. The following is a cadre of settings that I use on my Macbook Pro.
Setting the following makes bash erase duplicate commands in your history:
export HISTCONTROL="erasedups:ignoreboth"
I also jack my history size up pretty high. Why not? It doesn't seem to slow anything down on today's microprocessors.
export HISTFILESIZE=500000
export HISTSIZE=100000
Another thing that I do is ignore some commands from my history. No need to remember the exit command.
export HISTIGNORE="&:[ ]*:exit"
You definitely want to set histappend. Otherwise, bash overwrites your history when you exit.
shopt -s histappend
Another option that I use is cmdhist. This lets you save multi-line commands to the history as one command.
shopt -s cmdhist
Finally, on Mac OS X (if you're not using vi mode), you'll want to reset <CTRL>-S from being scroll stop. This prevents bash from being able to interpret it as forward search.
stty stop ""
How to list only the subdirectories of the current one?
ls -d */
It's a simple trick, but you wouldn't believe how much time I needed to find that one!
Esc-.
Inserts the last argument of your previous bash command. It comes in handy more than you think.
cp file /to/some/long/path
cd Esc-.
Sure, you can "diff file1.txt file2.txt", but Bash supports process substitution, which allows you to diff the output of commands.
For example, let's say I want to make sure my script gives me the output I expect. I can just wrap my script in <( ) and feed it to diff to get a quick and dirty unit test:
$ cat myscript.sh
#!/bin/sh
echo -e "one\nthree"
$
$ ./myscript.sh
one
three
$
$ cat expected_output.txt
one
two
three
$
$ diff <(./myscript.sh) expected_output.txt
1a2
> two
$
As another example, let's say I want to check if two servers have the same list of RPMs installed. Rather than sshing to each server, writing each list of RPMs to separate files, and doing a diff on those files, I can just do the diff from my workstation:
$ diff <(ssh server1 'rpm -qa | sort') <(ssh server2 'rpm -qa | sort')
241c240
< kernel-2.6.18-92.1.6.el5
---
> kernel-2.6.18-92.el5
317d315
< libsmi-0.4.5-2.el5
727,728d724
< wireshark-0.99.7-1.el5
< wireshark-gnome-0.99.7-1.el5
$
There are more examples in the
Advanced Bash-Scripting Guide at http://tldp.org/LDP/abs/html/process-sub.html.
My favorite command is "ls -thor"
It summons the power of the gods to list the most recently modified files in a conveniently readable format.
More of a novelty, but it's clever...
Top 10 commands used:
$ history | awk '{print $2}' | awk 'BEGIN {FS="|"}{print $1}' | sort | uniq -c | sort -nr | head
Sample output:
242 git
83 rake
43 cd
33 ss
24 ls
15 rsg
11 cap
10 dig
9 ping
3 vi
^R reverse search. Hit ^R, type a fragment of a previous command you want to match, and hit ^R until you find the one you want. Then I don't have to remember recently used commands that are still in my history. Not exclusively bash, but also: ^E for end of line, ^A for beginning of line, ^U and ^K to delete before and after the cursor, respectively.
I often have aliases for vi, ls, etc., but sometimes you want to escape the alias. Just prefix the command with a backslash:
Eg:
$ alias vi=vim
$ # To escape the alias for vi:
$ \vi # runs the real vi, not the vim alias
Cool, isn't it?
Here's a couple of configuration tweaks:
~/.inputrc:
"\C-[[A": history-search-backward
"\C-[[B": history-search-forward
This works the same as ^R but using the arrow keys instead. This means I can type (e.g.) cd /media/ then hit up-arrow to go to the last thing I cd'd to inside the /media/ folder.
(I use Gnome Terminal, you may need to change the escape codes for other terminal emulators.)
Bash completion is also incredibly useful, but it's a far more subtle addition. In ~/.bashrc:
if [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
fi
This will enable per-program tab-completion (e.g. attempting tab completion when the command line starts with evince will only show files that evince can open, and it will also tab-complete command line options).
Works nicely with this also in ~/.inputrc:
set completion-ignore-case on
set show-all-if-ambiguous on
set show-all-if-unmodified on
I use the following a lot:
The :p modifier to print a history result. E.g.
!!:p
Will print the last command so you can check that it's correct before running it again. Just enter !! to execute it.
In a similar vein:
!?foo?:p
Will search your history for the most recent command that contained the string 'foo' and print it.
If you don't need to print,
!?foo
does the search and executes it straight away.
I have got a secret weapon : shell-fu.
There are thousand of smart tips, cool tricks and efficient recipes that most of the time fit on a single line.
One that I love (though I cheat a bit, since it relies on Python being installed on most Unix systems now):
alias webshare='python -m SimpleHTTPServer'
Now every time you type "webshare", the current directory will be served on port 8000. Really nice when you want to share files with friends on a local network without a USB key or remote dir. Streaming video and music will work too. (With Python 3, the equivalent is python3 -m http.server.)
And of course the classic fork bomb that is completely useless but still a lot of fun :
$ :(){ :|:& };:
Don't try that in a production server...
You can use the watch command in conjunction with another command to look for changes. An example of this was when I was testing my router, and I wanted to get up-to-date numbers on stuff like signal-to-noise ratio, etc.
watch --interval=10 lynx -dump http://dslrouter/stats.html
type -a PROG
in order to find all the places where PROG is available, usually somewhere in ~/bin
rather than the one in /usr/bin/PROG that might have been expected.
I like to construct commands with echo and pipe them to the shell:
$ find dir -name \*~ | xargs echo rm
...
$ find dir -name \*~ | xargs echo rm | ksh -s
Why? Because it allows me to look at what's going to be done before I do it. That way if I have a horrible error (like removing my home directory), I can catch it before it happens. Obviously, this is most important for destructive or irrevocable actions.
When downloading a large file I quite often do:
while ls -la <filename>; do sleep 5; done
And then just ctrl+c when I'm done (or if ls returns non-zero). It's similar to the watch program but it uses the shell instead, so it works on platforms without watch.
Another useful tool is netcat, or nc. If you do:
nc -l -p 9100 > printjob.prn
Then you can set up a printer on another computer but instead use the IP address of the computer running netcat. When the print job is sent, it is received by the computer running netcat and dumped into printjob.prn.
pushd and popd almost always come in handy
One preferred way of navigating when I'm using multiple directories in widely separated places in a tree hierarchy is to use acd_func.sh (listed below). Once defined, you can do
cd --
to see a list of recent directories, with a numerical menu
cd -2
to go to the second-most recent directory.
Very easy to use, very handy.
Here's the code:
# do ". acd_func.sh"
# acd_func 1.0.5, 10-nov-2004
# petar marinov, http:/geocities.com/h2428, this is public domain
cd_func ()
{
    local x2 the_new_dir adir index
    local -i cnt
    if [[ $1 == "--" ]]; then
        dirs -v
        return 0
    fi
    the_new_dir=$1
    [[ -z $1 ]] && the_new_dir=$HOME
    if [[ ${the_new_dir:0:1} == '-' ]]; then
        #
        # Extract dir N from dirs
        index=${the_new_dir:1}
        [[ -z $index ]] && index=1
        adir=$(dirs +$index)
        [[ -z $adir ]] && return 1
        the_new_dir=$adir
    fi
    #
    # '~' has to be substituted by ${HOME}
    [[ ${the_new_dir:0:1} == '~' ]] && the_new_dir="${HOME}${the_new_dir:1}"
    #
    # Now change to the new dir and add to the top of the stack
    pushd "${the_new_dir}" > /dev/null
    [[ $? -ne 0 ]] && return 1
    the_new_dir=$(pwd)
    #
    # Trim down everything beyond 11th entry
    popd -n +11 2>/dev/null 1>/dev/null
    #
    # Remove any other occurrence of this dir, skipping the top of the stack
    for ((cnt=1; cnt <= 10; cnt++)); do
        x2=$(dirs +${cnt} 2>/dev/null)
        [[ $? -ne 0 ]] && return 0
        [[ ${x2:0:1} == '~' ]] && x2="${HOME}${x2:1}"
        if [[ "${x2}" == "${the_new_dir}" ]]; then
            popd -n +$cnt 2>/dev/null 1>/dev/null
            cnt=cnt-1
        fi
    done
    return 0
}
alias cd=cd_func
if [[ $BASH_VERSION > "2.05a" ]]; then
# ctrl+w shows the menu
bind -x "\"\C-w\":cd_func -- ;"
fi
Expand complicated lines before hitting the dreaded enter
Alt+Ctrl+e — shell-expand-line (may need to use Esc, Ctrl+e on your keyboard)
Ctrl+_ — undo
Ctrl+x, * — glob-expand-word
$ echo !$ !-2^ * Alt+Ctrl+e
$ echo aword someotherword * Ctrl+_
$ echo !$ !-2^ * Ctrl+x, *
$ echo !$ !-2^ LOG Makefile bar.c foo.h
etc.
I've always been partial to:
ctrl-E # move cursor to end of line
ctrl-A # move cursor to beginning of line
I also use shopt -s cdable_vars, then you can create bash variables to common directories. So, for my company's source tree, I create a bunch of variables like:
export Dcentmain="/var/localdata/p4ws/centaur/main/apps/core"
then I can change to that directory by cd Dcentmain.
pbcopy
This copies to the Mac system clipboard. You can pipe commands to it...try:
pwd | pbcopy
$ touch {1,2}.txt
$ ls [12].txt
1.txt 2.txt
$ rm !:1
rm [12].txt
$ history | tail -10
...
10007 touch {1,2}.txt
...
$ !10007
touch {1,2}.txt
$ for f in *.txt; do mv $f ${f/txt/doc}; done
Using 'set -o vi' from the command line, or better, in .bashrc, puts you in vi editing mode on the command line. You start in 'insert' mode so you can type and backspace as normal, but if you make a 'large' mistake you can hit the esc key and then use 'b' and 'f' to move around as you do in vi. cw to change a word. Particularly useful after you've brought up a history command that you want to change.
Similar to many above, my current favorite is the keystroke Alt-. (the Alt and "." keys together). This is the same as !$ (it inserts the last argument of the previous command), except that it's immediate and, for me, easier to type. (It just can't be used in scripts.)
e.g.:
mkdir -p /tmp/test/blah/oops/something
cd Alt-.
String multiple commands together using the && operator:
./run.sh && tail -f log.txt
or
kill -9 1111 && ./start.sh
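&& runs the second command only if the first succeeds; its complement is ||, which runs the second command only if the first fails. For example (the log file name here is just an illustration):
./run.sh || tail -f error.log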
