Fuzzy search Shell command? - bash

The following situation:
I am on a different Mac (no command history) using the Terminal (bash) and remember only part of a command, e.g. I am looking for a command with util in it but did not remember that it was mdutil.
How can I fuzzy-search for a command efficiently, entirely in the terminal and without creating new files?
Typical ways I do it now:
To find that command I could google it, which is not always efficient and needs an internet connection and a browser.
Or hit Tab Tab, see all commands, and scroll through them until I recognize the right one.
Or output all commands to a text file and search in that.

I guess you could do something like this:
oldIFS="$IFS"
IFS=:
for dir in $PATH; do
    # list anything in this PATH entry whose name contains "util"
    ls "$dir"/*util* 2> /dev/null
done
IFS="$oldIFS"
That would loop through all the directories in your $PATH looking for a command that contains util.

How about starting with man -k and refining, like this:
man -k util | grep -i meta
Moose::Util::MetaRole(3pm) - Apply roles to any metaclass, as well as the object base class
mdutil(1) - manage the metadata stores used by Spotlight

compgen -ca | grep util
did it best. Instead of util you can search for any part of a command name.
Like gniourf_gniourf said, a better solution would be
compgen -caX '!*util*'
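If you want this at your fingertips, you could wrap it in a small function. This is just a sketch of mine, not part of the answers above; the name cmdsearch is made up, and it assumes bash (compgen is a bash builtin):
cmdsearch() {
    # -c: command names, -a: aliases; -X '!*PATTERN*' filters the list,
    # keeping only entries that contain the given substring
    compgen -caX "!*${1}*" | sort -u
}
Then cmdsearch util would list mdutil among the matches.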

Searching in source code

I often work on different projects, and sometimes there is a lack of documentation.
So I decided to use open-source code to look at how other people solved various problems.
The idea is that if I run into a function I don't know how to use, I look at how other developers have used that function before.
Approach:
I downloaded a few pretty decent projects done by other people and put them into one folder.
Now, if I don't know how a function is used (e.g. main()), I do:
find . -name \*.py | xargs cat | grep -n "main()"
Consequently I get examples of its use.
But there is a problem: I don't know which file each example comes from. It would be perfect if I could get the name of the file as well as the line number.
This seems to be a limitation of using cat: it concatenates all the files together, so the line numbers refer to positions in cat's output rather than in the individual files. So I feel this approach is flawed at its root.
i.e.
I want to be able to search for functions/symbols in a large body of source code
and get the file and line number where a certain pattern was found.
I prefer a console-based way.
Any advice?
Try this:
find . -name \*.py -exec grep -nH "main()" {} \;
Explanation:
The "-exec" option says to execute the following command, up until \; for each file it finds.
The "-H" option to grep causes it to print the name of the file in which the string was found.
The "-n" option causes grep to print the line numbers.
The {} is a placeholder that expands to the name of the file that "find" just found.
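A small efficiency note of my own (not part of the original answer): terminating the -exec clause with + instead of \; batches many file names into a single grep invocation, which is usually faster on large trees:
find . -name '*.py' -exec grep -nH "main()" {} +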
You only need the grep command:
$ grep -nr 'main()' /path/to/projects/folder/* | grep '.py:'
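As an aside (my suggestion, assuming a grep that supports --include, as GNU grep does), you can restrict the recursive search to .py files directly instead of filtering with a second grep:
$ grep -rn --include='*.py' 'main()' /path/to/projects/folder/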
Want to search source files? Why not http://beyondgrep.com/?
I won't answer from the bash point of view.
I don't know which editor/IDE you are using, but for dissecting code there is no better toolset for me than:
Vim combined with Ctags,
the CtrlP and ctrlp-funky plugins plus an MRU plugin,
proper search and regex usage, and
a good vim debugger.
There is no part of the code that can't be examined. Apologies if you are using other tools; I am just suggesting what I find best for code analysis.

Loop through a directory with Grep (newbie)

I'm trying to loop through the current directory that the script resides in, which has a bunch of files ending with _list.txt. I would like to grab each file name, assign it to a variable, execute some additional commands, and then move on to the next file until there are no more _list.txt files to process.
I assume I want something like:
while file_name=`grep "*_list.txt" *`
do
Some more code
done
But this doesn't work as expected. Any suggestions on how to accomplish this newbie task?
Thanks in advance.
If I understand your problem correctly, you don't need grep. You can just do:
for file in *_list.txt
do
    # use "$file", e.g. echo "$file"
done
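One caveat worth adding (my note, not part of the original answer): if no *_list.txt files exist, the loop body still runs once with the literal string *_list.txt as $file. In bash, the nullglob option avoids that:
shopt -s nullglob           # unmatched globs expand to nothing
for file in *_list.txt
do
    echo "Processing $file"
    # additional commands using "$file" here
done
shopt -u nullglob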
grep is one of the most useful Unix commands and is well worth comprehending; see some useful examples here. As for your current requirement, I think the following code will be useful:
for file in *.*
do
    echo "Happy Programming"
done
In place of *.* you can also use other shell glob patterns (note that these are globs, not regular expressions). For more such examples, see First Time Linux, or read about all of grep's options in your terminal using man grep.

What is the `< <()` syntax?

I've been using RVM for a while, and every time I just copied and pasted the following command to get it setup:
bash < <(curl -s https://rvm.beginrescueend.com/install/rvm)
It bugs me that I don't fully understand the syntax, and why we need the double <, and the parentheses. Can someone explain this or point me to the right reference?
The first < is input redirection: it feeds the contents of a file to the program as input. The second construct, <(), is process substitution: it treats the output of a process like a file. In this case, the effect is that you run the contents of that URL as though it were a bash script, which is very dangerous! If you don't trust the source completely, don't do it: an attacker could use this method to make you run commands that compromise your system.
Just my 2 cents. Bash's <() construct, as @Daenyth stated, "treats the output of a process like a file". This construct can be very useful. Just consider the following:
diff <(ls dir1) <(ls dir2)
This will use diff to show the differences between the contents of dir1 and dir2. Using vimdiff instead of diff is even cooler.
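The combined < <() form from the question is also handy for another classic reason: it feeds a loop without putting it in a pipeline subshell, so variables set inside the loop survive. A minimal sketch (the ls input and variable names are just illustrative):
count=0
while read -r name; do
    count=$((count + 1))      # this increment happens in the current shell
done < <(ls)
echo "Saw $count entries"     # $count is still visible here
With ls | while read ... instead, the loop would run in a subshell and count would be lost.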

Possible to grab text from an online .txt file via bash?

Is it possible to grab text from an online text file via grep/cat/awk or something else (in bash)?
The way I currently do this is to download the text file to disk and then grep/cat the file for its text:
curl -o "$TMPDIR"/"text.txt" http://www.example.com/text.txt
cat/grep "$TMPDIR"/text.txt
rm -rf "$TMPDIR"/"text.txt"
Is one of the text grabbers (or another tool) capable of grabbing something from a text file on the internet?
This would get rid of the whole download-file, read-file, delete-file process and replace it with one command, speeding things up considerably if you have a lot of those strings.
I couldn't find anything in the man pages or by googling around; maybe you know something.
Use curl -o - http://www.example.com/text.txt | grep "something".
-o - tells curl to download to stdout; other utilities such as wget, lynx, and links have corresponding functionality.
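For reference, a wget equivalent of that one-liner (my addition; assumes wget is installed):
# -q: quiet (no progress noise), -O -: write the downloaded document to stdout
wget -q -O - http://www.example.com/text.txt | grep "something"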
You might try netcat - this is exactly what it was made for.
You could at least pipe your commands to avoid manually creating a temporary file:
curl … | cat/grep …

bash script: How to implement your own history mechanism?

I'm implementing an interactive bash script similar to the MySQL client, /usr/bin/mysql. In this script I need to issue various types of 'commands'. I also need to provide a history mechanism whereby the user can use the up/down arrow keys to scroll through the commands entered so far.
The snippet listed here (Example 15-6, Detecting the arrow keys) does not exactly do what I want it to. I really want the following:
The up/down arrow keys should operate in silent mode. Meaning, they should not echo their character codes on the terminal.
The other keys however (which will be used to read the command names and their arguments) must not operate in silent mode.
The problem with read -s -n3 is that it does not satisfy my simultaneously conflicting requirements of silent mode and echo mode, based solely on the character code. Also, the value -n3 will work for arrow keys but, for other/regular keys, it won't 'return control' to the calling program until 3 characters have been consumed.
Now, I could try -n1 and manually assemble the input, one character at a time (yuck!). But the character-code based silent-/echo-mode switching problem would still persist!
Has anyone attempted this thing in bash? (Note: I cannot use C, nor other scripting languages like Perl, Python, etc.)
You can use read -e to have read use readline. It will process your cursor keys and maintain the history for you. You will also need to manually add your desired entries to your history via history -s, like so:
while read -e x; do
history -s "$x"
# ...
done
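Putting it together, here is a minimal interactive-loop sketch (my illustration, not from the answers; the myshell> prompt and the quit command are made up):
while IFS= read -e -r -p "myshell> " cmd; do
    [ -n "$cmd" ] && history -s "$cmd"   # only non-empty input goes into history
    case "$cmd" in
        quit) break ;;                   # hypothetical exit command
        *)    echo "You entered: $cmd" ;;
    esac
done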
MySQL and Bash use the Readline library to implement this. Maybe you could use something like rlwrap or rlfe?
rlwrap has a special "one-shot" mode to act as a replacement for the 'read' shell command. If you wish, every occurrence of this command in your script can be given its own history and completion word list.
Use it like this:
REPLY=$(rlwrap -o cat)
or, specifying a history file and a completion wordlist:
REPLY=$(rlwrap -H my_history -f my_completions -o cat)
