Bash: select a previous command that matches a pattern

I know about the bash history navigation with the Up and Down arrows.
I would like a lazy way to select a previous command that matches some regex (that is shorter than the whole command, so it takes less time to be typed).
Is it possible with bash?
If not, do other shells have such a feature?

You can always use CTRL-R to search your history backward, and type some part of the previous command. Hitting CTRL-R again (after the first hit) repeats your query (jumps to the next match if any).
Personally I use this for regex search (since, AFAIK, CTRL-R doesn't support regexes):
# search (using perl regexp syntax) your entire history
function histgrep()
{
    grep -P "$@" ~/.bash_history
}
Edit:
For searching the most recent history items via that function, see this post (on setting $PROMPT_COMMAND).
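One common way to make the current session's commands visible to that function right away (an assumption on my part; the linked post may describe something different) is to have bash append new history entries to the history file after every command, via $PROMPT_COMMAND:
# Flush this session's new history lines to the history file after each prompt,
# so histgrep also sees commands from the running session.
PROMPT_COMMAND='history -a'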

Zsolt, avoid hard-coded filenames; use the HISTFILE variable instead, with a fallback if you're really paranoid: ${HISTFILE:-~/.bash_history} ;-)
And why grep directly through the history file?! You'd lose the history number, which you need to re-run a command (e.g. !33 to execute the 33rd entry of your history) without having to copy&paste grep's output.
Please keep in mind that passing the arguments straight through to grep like that may fail at various (epic) levels. For instance, an argument beginning with "-" (histgrep -h) will usually hang, or shoot you in the foot. Admittedly, this basic example can be worked around easily, following the classic "--" way of separating options from arguments, but the discussion has no end once you remember that the arguments fed to this hack are regular expressions. ;-)
Oh, and isn't histgrep a somewhat verbose name? h, i, Tab, g, Tab :)
IMHO I'd stick to using ^R, falling back to history | grep ... whenever necessary.
Anyway, for the sake of the example, I'd (lazily) rewrite this little helper as:
function hgrep() { history | grep -P -- "$*"; }
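A hypothetical session (the history entry and its number are made up for illustration) shows why keeping the history numbers is handy - you can feed them straight back to history expansion:
$ hgrep 'tar .*\.tgz'
  112  tar czf backup.tgz notes/
$ !112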

See my answer here
Example:
$ echo happy
happy
$ !?pp?
happy
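If you want to check what !?pp? would recall before actually running it, bash's :p history modifier prints the expanded command without executing it (not part of the answer above, just a standard history-expansion feature):
$ !?pp?:p
echo happy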

I prefer this solution, since you can see the entire contents of the history in addition to searching it by regex, and the regex flavour is that of vim's very good regex engine:
vim -u /root/.vimrc -M + <( history )
I have described this in my post:
https://unix.stackexchange.com/questions/718828/search-in-whole-bash-history-list-with-full-featured-regex-engine
Regards
Anton Wessel

Related

Conditional Characters in Shell Globbing - Bash/Zsh

I'm trying to get a case statement to match one of four inputs in a Bash/Zsh shell:
v
-v
version
--version
I'm looking for this in the below case statement:
case "$1" in
?(--)version|?(-)v)
# do stuff
;;
esac
And I find that this isn't working. From what I've read, ?(pattern) is how to match zero or one occurrence of a pattern.
I did have it working by matching the cases explicitly:
--version|version|-v|v)
But it would be nice to have something neater, plus, it's a learning experience!
I imagine this is probably down to me not escaping things properly, but I also tried enclosing my original pattern in double quotes ("), yet again it didn't match.
Any advice?
To provide an answer to this, the best solution (if insisting on using some form of globbing) is:
(|--)version|(|-)v )
But as rightly pointed out by @kvantour, it's much better to just stick with the simpler:
--version|version|-v|v )
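For the bash side of the question, note that ?(pattern) only works once the extglob shell option is enabled, which is the usual reason such patterns silently fail to match. A minimal sketch (my addition, assuming bash):
#!/usr/bin/env bash
shopt -s extglob    # extended globbing must be enabled before the case statement is parsed

case "$1" in
    ?(--)version|?(-)v)
        echo "version requested"
        ;;
esac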

Weird issue when running grep with the --include option

Here is the code at the bash shell. How is the file mask supposed to be specified, if not this way? I expected both commands to find the search expression, but it's not happening. In this example, I know in advance that I prefer to restrict the search to python source code files only, because unqualified searches are silly time wasters.
So, this works as expected:
grep -rni '/home/ga/projects' -e 'def Pr(x,u,v)'
/home/ga/projects/anom/anom.py:27:def Pr(x,u,v): blah, blah, ...
but this won't work:
grep --include=\*.{py} -rni '/home/ga/projects' -e 'def Pr(x,u,v)'
I'm using GNU grep version 2.16.
--include=\*.{py} looks like a broken attempt to use brace expansion (an unquoted {...} expression).
However, for brace expansion
to occur in bash (and ksh and zsh), you must either have:
a list of at least 2 items, separated with ,; e.g. {py,txt}, which expands to 2 arguments, py and txt.
or, a range of items formed from two end points, separated with ..; e.g., {1..3}, which expands to 3 arguments, 1, 2, and 3.
Thus, with a single item, simply do not use brace expansion:
--include=\*.py
If you did have multiple extensions to consider, e.g., *.py as well as *.pyc files, here's a robust form that illustrates the underlying shell features:
'--include=*.'{py,pyc}
Here:
Brace expansion is applied, because {...} contains a 2-item list.
Since the {...} directly follows the literal (single-quoted) string --include=*., the results of the brace expansion include the literal part.
Therefore, 2 arguments are ultimately passed to grep, with the following literal content:
--include=*.py
--include=*.pyc
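You can watch the expansion happen before grep is ever invoked by letting echo print the resulting words (illustrative):
$ echo '--include=*.'{py,pyc}
--include=*.py --include=*.pyc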
Your command fails because of the braces '{}': since the shell doesn't expand a single-item brace list, grep looks for them literally in the file name. You can create a file named 'myscript.{py}' to convince yourself; you'll see it appear in the results.
The correct option parameter would be '*.py' or the equivalent \*.py. Either way protects it from being (mis)interpreted by the shell.
On the other hand, I can only advise using the find command for such jobs:
find /home/ga/projects -regex '.*\.py$' -exec grep -e "def Pr(x,u,v)" {} +
That will protect you from hard-to-understand shell behaviour.
Try like this (using quotes to be safe; also better readability than backslash escaping IMHO):
grep --include='*.py' ...
Your \*.{py} brace expansion usage isn't supported at all by grep. Please see the comments below for the full investigation regarding this. For the record, blame this answer for the resulting brace wars ;)
By the way, brace expansion generally works fine in Bash. See mklement0's answer for more details.
Ack. As an alternative, you might consider switching to ack from now on. It's a tool just like grep, but fully optimized for programmers.
It's a great fit for what you are doing. A nice quote about it:
Every once in a while something comes along that improves an idea so much, you can't ignore it. Such a thing is ack, the grep replacement.

What platform-independent way is there to find the directory of a shell executable in a shell script?

According to POSIX:
http://pubs.opengroup.org/onlinepubs/9699919799/utilities/sh.html
there are some cases where it is not obvious. For example:
If the file is not in the current working directory,
the implementation may perform a search for an executable
file using the value of PATH, as described in Command Search and Execution.
My Bash 4.x doesn't follow this optional rule (due to security concerns??) so I can't test how it behaves in real life...
What platform-independent way is there to find the directory of a shell executable in a shell script?
PS. The dirname $0 approach also fails with:
#!/bin/sh
echo $0
dirname $0
when you:
$ sh runme.sh
runme.sh
.
So you need something like:
CMDPATH=`cd $(dirname $0); echo $PWD`
To make the code depend only on built-in shell capabilities, I rewrote it as:
PREVPWD=$PWD
cd ${0%${0##*/}}.
CMDPATH=$PWD
cd $PREVPWD
This looks ugly but doesn't require forking any executables...
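A slightly more defensive variant of the same built-in-only idea (a sketch on my part, assuming $0 holds a usable path) quotes the expansions so paths containing spaces survive:
PREVPWD=$PWD
# Strip the filename from $0; the trailing "." covers the case where $0
# contains no slash at all (the expansion is then empty, so we `cd .`).
cd -- "${0%"${0##*/}"}." || exit
CMDPATH=$PWD
cd -- "$PREVPWD" || exit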
EDIT3:
Though not strictly POSIX yet, realpath has been a GNU coreutils app since 2012. Full disclosure: I had never heard of it before I noticed it in the info coreutils TOC and immediately thought of this question, but using the following function as demonstrated should reliably (soon POSIXLY?) and, I hope, efficiently
provide its caller with an absolutely sourced $0:
% _abs_0() {
> o1="${1%%/*}"; ${o1:="${1}"}; ${o1:=`realpath -s "${1}"`}; eval "$1=\${o1}";
> }
% _abs_0 ${abs0:="${0}"} ; printf %s\\n "${abs0}"
/no/more/dots/in/your/path2.sh
EDIT4: It may be worth highlighting that this solution uses POSIX parameter expansion to first check whether the path actually needs expanding and resolving at all before attempting to do so. This should return an absolutely sourced $0 via a messenger variable (with the notable exception that -s will preserve symlinks) about as efficiently as I can imagine it being done, whether or not the path is already absolute.
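For comparison, plain command-line usage of realpath (assuming GNU coreutils' realpath is installed) is much simpler than the function above:
abs0=$(realpath -s -- "$0")    # -s keeps symlinks unresolved, as noted above
script_dir=${abs0%/*}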
EDIT2:
Now I believe I understand your problem much better which, unfortunately, actually renders most of the below irrelevant.
(minor edit: before finding realpath in the docs, I had at least pared down my version of this not to depend on the time field, but, fair warning, after testing some I'm less convinced ps is fully reliable in its command path expansion capacity)
On the other hand, you could do this:
ps ww -fp $$ | grep -Eo '/[^:]*'"${0#*/}"
eval "abs0=${`ps ww -fp $$ | grep -Eo ' /'`#?}"
I need to fix it to work properly with fields instead of expecting the time field to come just before the process's path and relying on its included colon as a reference, especially because this will not work with a colon in your process's path, but that's trivial and will happen soon, I think. The functionality is otherwise POSIX compliant, I believe, and probably parameter expansion alone can do what is necessary.
Not strictly relevant (or correct):
This should work in every case that conforms to POSIX guidelines:
echo ${0%/*}
EDIT:
So I'll confess that, at least at first blush, I don't fully understand the issue you describe. Obviously in your question you demonstrate some familiarity with POSIX standards for variable string manipulation via parameter expansion (even if your particular implementation seems slightly strained at a glance), so it's likely I'm missing some vital piece of information in my interpretation of your question and perhaps, at least in its current form, this is not the answer you seek.
I have posted before on parameter expansion for inline variable null/set tests which may or may not be of use to you as you can see at the "Portable Way to Check Emptiness of a Shell Variable" question. I mention this mainly because my answer there was in large part copied/pasted from the POSIX guidelines on parameter expansion, includes an anchored link to the guidelines coverage on this subject, and a few examples from both the canonical documentation and my own perhaps less expertly demonstrated constructs.
I will freely admit, however, that while I do not yet fully understand what it is you ask, I don't believe you will find a specific answer there. Instead I suspect you may have forgotten, as I do occasionally, that the # and % operators in POSIX string manipulation specify the part of the string you want to remove, not the part you wish to retain, as some might find more intuitive. What I mean is that any string slice you search for in this way is designed to disappear from your output, which will then be only what remains of your original string after the specified pattern is removed.
So here's a bit of an overview:
A single instance of either operator removes only as little as possible to fully satisfy your search, but when doubled the search becomes greedy and removes as much of the original string as your pattern could possibly allow.
Other than that you need only know some basic glob (filename-expansion) patterns and remember that # removes a matching prefix, searching from the left, while % removes a matching suffix, searching from the right.
## short example before better learning if I'm on the right track
## demonstrating path manipulation with '#' and '%'
% _path_one='/one/two/three/four.five'
% _path_two='./four.five'
## short searching from the right with our wildcard to the right
## side of a single character removes everything to the right of
## the specified character, and the character itself;
## this is a very simple means of stripping extensions and paths
% echo ${_path_one%.*} ${_path_one%/*}
/one/two/three/four /one/two/three
## long searching from the left, with the wildcard to the left,
## of course produces the opposite results
% echo ${_path_one##*.} ${_path_one##*/}
five four.five
## will soon come back to show more probably
I believe you can get it using readlink:
scriptPath=$(readlink -f -- "$0")
scriptDirectory=${scriptPath%/*}
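readlink -f is a GNU/BusyBox extension rather than POSIX, so where it's missing, a commonly used fallback sketch (not from the answer above) is:
# cd into the script's directory in a subshell and print the physical path;
# CDPATH= keeps a user's CDPATH setting from injecting extra output.
scriptDirectory=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd -P)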

Why do I get this error using {1..9} in zsh?

I run the following code
zgrep -c compinit /usr/share/man/man{1..9}/zsh*
I get
zsh: no matches found: /usr/share/man/man2/zsh*
This is strange, since the following works
echo Masi{1..9}/masi
This suggests me that the problem may be a bug in Zsh.
Is the above a bug in Zsh for {1..9}?
It's not a bug, and it does work fine inside words. The trouble you're having here is that {1..9} is not a wildcard expression like * is; as your echo example shows, it's an iterative expansion. So your zgrep example is exactly the same as if you had typed each alternative version on the command line, and since there are no man pages starting with zsh in man2, it errors out. (It's erroring out on a failure to find a match, not on anything intrinsically related to your brace sequence expansion.)
If you did this, on the other hand:
zgrep -c compinit /usr/share/man/man[1-9]/zsh*
you'd get the results you expect, because [1-9] is a normal wildcard expression.
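If you'd rather keep the {1..9} brace sequence, one workaround (my own suggestion, not from this answer) is zsh's (N) glob qualifier, which makes a pattern with no matches expand to nothing instead of aborting the whole command:
zgrep -c compinit /usr/share/man/man{1..9}/zsh*(N)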
In zsh, if you want to use numeric ranges in filenames, the glob operator <x-y> matches any integer in that range, and zle can expand it against any real names it finds. That is to say:
$ touch a0b a1b a5b a7b
$ print a<0-100>b
And then hitting <Tab> right after the final b would leave you with print a0b a1b a5b a7b expanded on the line.
For all other intents and purposes - full range requirements, non-file and scripting use - I'd express this with the rather succinct, idiomatic zsh loop:
for n ({1..50}); do print $n; done
This will let you process the whole range of numbers 1 to 50 :) after which you can do all sorts of useful things, such as building a file collection that doesn't exist yet:
arr=($(for n ({1..50}); do print /my/path/file$n.txt; done)) && print $arr[33]

Search and replace in Shell

I am writing a shell (bash) script and I'm trying to figure out an easy way to accomplish a simple task.
I have some string in a variable.
I don't know if this is relevant, but it can contain spaces, newlines, because actually this string is the content of a whole text file.
I want to replace the last occurrence of a certain substring with something else.
Perhaps I could use a regexp for that, but there are two points that confuse me:
I need to match from the end, not from the start
the substring that I want to scan for is fixed, not variable.
for truncating at the start: ${var#pattern}
truncating at the end ${var%pattern}
${var/pattern/repl} for general replacement
the patterns are 'filename' style expansion, and the last one can be prefixed with # or % to match only at the start or end (respectively)
It's all in the (long) bash manpage; check the "Parameter Expansion" section.
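Putting those expansions together, here is a minimal bash sketch (variable names and values are made up) of replacing the last occurrence of a fixed substring using only parameter expansion:
var='one,two,one,two'
old='two'
new='TWO'

# ${var%"$old"*} keeps everything before the last occurrence of $old;
# ${var##*"$old"} keeps everything after it.
if [ "$var" != "${var%"$old"*}" ]; then
    var=${var%"$old"*}$new${var##*"$old"}
fi
echo "$var"    # prints: one,two,one,TWO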
An expression like this
s/match string here$/new string/
should do the trick - s is for substitute, the / characters delimit the parts of the command, and the $ is the end-of-line anchor. You can try this in vi to see if it does what you need.
I would look up the man pages for awk or sed.
Javier's answer is shell specific and won't work in all shells.
The sed answers that MrTelly and epochwolf alluded to are incomplete and should look something like this:
MyString="stuff ttto be edittted"
NewString=`echo "$MyString" | sed -e 's/\(.*\)ttt\(.*\)/\1xxx\2/'`
The reason this works without having to use the $ to mark the end is that the first '.*' is greedy and will attempt to gather up as much as possible while allowing the rest of the regular expression to be true.
This sed command should work fine in any shell context used.
Usually when I get stuck with Sed I use this page,
http://sed.sourceforge.net/sed1line.txt
