I was googling how to verify the availability of commands.
In the process, I found the builtin variable $commands.
https://zsh.sourceforge.io/Doc/Release/Zsh-Modules.html#index-commands
But when I tried, some commands were not in $commands even though they were available.
I would like to understand the timing and conditions under which $commands gets populated, to get a sense of whether it can be useful for checking the existence of commands.
tl;dr
You may have a command that was moved or installed but never run, and the shell has never traversed the command's location since that move or installation; as a result the command hash table, which $commands accesses, is not up to date with it.
If you need a reliable way to check whether a command exists, use command -v; see this answer.
If you just want to see the command(s) in the command hash table, run hash -r or rehash (zsh only).
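A minimal sketch of both suggestions (mycmd is just a placeholder name):

if command -v mycmd > /dev/null 2>&1; then
    echo "mycmd is available"
fi

hash -r    # bash and zsh: empty the command hash table so it is rebuilt as needed
rehash     # zsh only: equivalent to hash -r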
Context
What is $commands?
You've found this in the documentation you've linked in your question:
This array gives access to the command hash table. The keys are the names of external commands, the values are the pathnames of the files that would be executed when the command would be invoked. Setting a key in this array defines a new entry in this table in the same way as with the hash builtin. Unsetting a key as in ‘unset "commands[foo]"’ removes the entry for the given key from the command hash table.
– https://zsh.sourceforge.io/Doc/Release/Zsh-Modules.html#index-commands
"Gives access to the command hash table"... let's see what this is.
What is the "command hash table"?
For this we'll go to chapter 3 of the ZSH Guide, specifically section 1 (External Commands).
The only major issue is therefore how to find [external commands]. This is done through the parameters $path and $PATH [...].
There is a subtlety here. The shell tries to remember where the commands are, so it can find them again the next time. It keeps them in a so-called 'hash table', and you find the word 'hash' all over the place in the documentation: all it means is a fast way of finding some value, given a particular key. In this case, given the name of a command, the shell can find the path to it quickly. You can see this table, in the form key=value, by typing hash.
In fact the shell only does this when the option HASH_CMDS is set, as it is by default. As you might expect, it stops searching when it finds the directory with the command it's looking for. There is an extra optimisation in the option HASH_ALL, also set by default: when the shell scans a directory to find a command, it will add all the other commands in that directory to the hash table. This is sensible because on most UNIX-like operating systems reading a whole lot of files in the same directory is quite fast.
– https://zsh.sourceforge.io/Guide/zshguide03.html
Ok, so we now know that $commands gives access to the command hash table, which is essentially a cache of command locations. This cache needs to be updated at some point, right?
When does the command hash table get updated?
1. When the command is found for the first time
We know this from the documentation above:
The shell tries to remember where the commands are, so it can find them again the next time. It keeps them in a so-called 'hash table' [...].
There is additional documentation, for HASH_CMDS, that supports this:
Note the location of each command the first time it is executed. Subsequent invocations of the same command will use the saved location, avoiding a path search.
– https://github.com/zsh-users/zsh/blob/daa208e90763d304dc1d554a834d0066e0b9937c/Doc/Zsh/options.yo#L1275-L1283
2. When your shell scans a directory looking for a command and finds other commands
Again, we know this because of the above documentation:
There is an extra optimisation in the option HASH_ALL, also set by default: when the shell scans a directory to find a command, it will add all the other commands in that directory to the hash table.
Ok this is all the context.
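Before moving on, you can peek at the table and check the relevant option in your own shell; a quick zsh check (output will vary):

[[ -o hash_cmds ]] && echo "HASH_CMDS is set"   # on by default
hash | head -n 5                                # the table, printed as key=value
print -r -- $commands[ls]                       # the pathname cached for ls, e.g. /bin/ls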
Back to the problem, why?
Why are the commands you are looking for not appearing in the $commands array? Why are they not in the command hash table?
Well, we now know when the command hash table is updated, so we can surmise that the update conditions weren't met. Here are some possibilities for the situation you may be in:
When the command is found for the first time
You have recently installed a new command and have never run it.
You have had an existing command moved and have never run it.
When your shell scans a directory looking for a command and finds other commands
You have not run a command that necessitated a path search that would have traversed the location where your new/moved command is installed.
Anything that can be done?
It depends on what you're doing.
It's critical that I know whether a command exists
Don't use $commands. Use command -v <command_name>; see this answer.
I just want to see the command in the command hash table
You can force the command hash table to be refreshed with hash -r or, in zsh, rehash.
Further reading
What is the purpose of the hash command?
Related
What happens when you type a command and it's not recognized as one of the internal commands?
The usual answer you get is that bash will look for it in the search path. Even recent professional textbooks offer that answer. Unfortunately, that is not exactly what happens.
Bash first checks its own built-in hash table for a match; if there is no match, it searches the path and adds the directory where the executable was found to the hash table. The idea of using a hash table in the first place is to avoid repeatedly searching the path and thus speed things up. However, it can result in strange-looking situations, as we'll see in the following test.
We can see what's currently hashed with the hash command:
hash
hash: hash table empty
You'll see that output if you have just opened your terminal.
Let's run "date" and see again:
date
Tue Jan 20 14:13:42 AST 2015
hash
hits command
1 /bin/date
So far everything is OK. Now let's move "date" to another location. We select a destination that is in the search path:
sudo mv /bin/date /usr/bin/date
Let's check that we are still able to run "date"; it should be found, since the new location is in the search path:
date
bash: /bin/date: No such file or directory
Oops. Bash does not try the search path at all when the hash table lookup returns a path (which is now invalid).
This functionality is described in the Bash Reference Manual.
You can manually force "hash" to record the new location of "date":
hash date
and everything is fine again.
Why is bash not able to search $PATH when the hash table returns an invalid path for an executable?
It is because bash adds an entry to the hash table ONLY when a command runs successfully.
So if you had this earlier:
hash
hits command
1 /bin/date
And then run something like:
qwerty
-bash: qwerty: command not found
Since the qwerty command wasn't found, it won't be added to the hash table. If you run hash again you will still get:
hash
hits command
1 /bin/date
No entry for qwerty was added to the table. So bash works on the assumption that if an entry is present in the hash table (which is nothing but a cache), then that binary can be run again using that entry, and $PATH won't be searched.
That is how Bash is currently designed: in that situation an error message is shown instead of attempting to find the executable on the search path $PATH.
I have two separate scripts with the same filename, in different paths, for different projects:
/home/me/projects/alpha/bin/hithere and /home/me/projects/beta/bin/hithere.
Correspondingly, I have two separate bash completion scripts, because the proper completions differ for each of the scripts. In the completion scripts, the "complete" command is run for each completion specifying the full name of the script in question, i.e.
complete -F _alpha_hithere_completion /home/me/projects/alpha/bin/hithere
However, only the most-recently-run script seems to have an effect, regardless of which actual version of hithere is invoked: it seems that bash completion only cares about the filename of the command and disregards path information.
Is there any way to change this behavior so that I can have these two independent scripts with the same name, each with different completion functions?
Please note that I'm not interested in a solution which requires alpha to know about beta, or which would require a third component to know about either of them--that would defeat the purpose in my case.
The Bash manual describes the lookup process for completions:
If the command word is a full pathname, a compspec for the full pathname is searched for first. If no compspec is found for the full pathname, an attempt is made to find a compspec for the portion following the final slash. If those searches do not result in a compspec, any compspec defined with the -D option to complete is used as the default.
So the full path is used by complete, but only if you invoke the command via its full path. As for getting completions to work using just the short name, I think your only option (judging from the spec) is going to be some sort of dynamic hook that determines which completion function to invoke based on the $PWD - I don't see any evidence that Bash supports overloading a completion name like you're envisioning.
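A rough sketch of such a hook, assuming a second completion function _beta_hithere_completion exists for the beta project (the paths are the ones from the question):

# one completion registered for the short name, dispatching on the current directory
_hithere_dispatch() {
    if [[ $PWD == /home/me/projects/alpha* ]]; then
        _alpha_hithere_completion "$@"
    elif [[ $PWD == /home/me/projects/beta* ]]; then
        _beta_hithere_completion "$@"
    fi
}
complete -F _hithere_dispatch hithere

Because the compspec lookup falls back to the portion after the final slash, the dispatcher is also used when hithere is invoked via its full path. Note, though, that this is exactly the kind of third component that knows about both projects, which the question hoped to avoid.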
Yes, this is possible. But it's a bit tricky. I am using this for a future scripting concept I am developing: all the scripts have the same name, as they are build scripts, but bash completion can still do its job.
For this I use a two step process: First of all I place a main script in ~/.config/bash_completion.d. This script is designed to cover all scripts of the particular shared script name. I configured ~/.bashrc in order to load that bash completion file for these scripts.
The script will obtain the full file path of the particular script I want to have bash completion for. From this path I generate an identifier, and for this identifier there exists a file that provides the actual bash completion data. So when bash completion is performed, the completion function from the main bash completion script will check for that file and load its content. Then it will continue with the regular bash completion operation.
If you have two scripts with the same name you will have two different identifiers as those scripts share the same name but have different paths. Therefore two different configurations for bash completion can be used.
This concept works like a charm.
NOTE: I will update this answer soon providing some source code.
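In the meantime, here is a rough sketch of the idea in bash; every file name, path, and the COMPLETION_WORDS variable below is illustrative, not the author's actual code:

# main completion function, registered once for the shared script name
_shared_build_completion() {
    local word=${COMP_WORDS[0]} path id datafile
    # resolve the full path of the particular script being completed
    if [[ $word == */* ]]; then
        path=$(readlink -f "$word")
    else
        path=$(readlink -f "$(type -P "$word")")
    fi
    [[ -n $path ]] || return
    # derive an identifier from the path and look for its completion data
    id=${path//\//%}
    datafile=$HOME/.config/bash_completion.d/data/$id
    # the data file is expected to define COMPLETION_WORDS for this particular script
    [[ -r $datafile ]] && source "$datafile"
    COMPREPLY=( $(compgen -W "${COMPLETION_WORDS[*]}" -- "${COMP_WORDS[COMP_CWORD]}") )
}
complete -F _shared_build_completion hithere

Two scripts with the same name but different paths resolve to two different identifiers, and therefore to two different data files.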
Previously I wrote a script which logs my visited directories to a sqlite3 database, and some shortcuts to quickly search and navigate through that history. Now I am thinking of doing the same with my bash commands.
When I execute a command in bash, how can I get the command name? Do I have to change the part of bash's source code responsible for writing the bash history? Once I have a database of my command history, I can do smart searches in it.
Sorry to come to this question so late!
I tend to run a lot of shells where I work, and as a result the history of long-running shells gets mixed up or lost all the time. I finally got so fed up I started logging to a database :)
I haven't worked out the integration totally, but here is my setup:
Recompile bash with SYSLOG enabled. Since bash version 4.1 this code is all in place; it just needs to be enabled in config-top.h, I believe.
Install the new bash and configure your syslog client to log user.info messages.
Install rsyslog and the rsyslog-pgsql plugin as well as postgresql. I had a couple of problems getting this installed on Debian testing; PM me if you run into problems, or ask here :)
Configure the user messages to feed into the database.
At the end of all this, all your commands should be logged into a database table called systemevents. You will definitely want to set up indexes on a couple of the fields if you use the shell regularly, as queries can start to take forever :)
Here are a couple of the indexes I set up:
Indexes:
"systemevents_pkey" PRIMARY KEY, btree (id)
"systemevents_devicereportedtime_idx" btree (devicereportedtime)
"systemevents_fromhost_idx" hash (fromhost)
"systemevents_priority_idx" btree (priority)
"systemevents_receivedat_idx" btree (receivedat)
fromhost, receivedat, and devicereportedtime are especially helpful!
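If you want to create the same indexes yourself, plus a typical lookup, it would look roughly like this from the shell, assuming the database and table created by the stock rsyslog-pgsql scripts (often Syslog and systemevents; adjust the names to your setup):

psql Syslog <<'SQL'
CREATE INDEX systemevents_devicereportedtime_idx ON systemevents USING btree (devicereportedtime);
CREATE INDEX systemevents_fromhost_idx ON systemevents USING hash (fromhost);
CREATE INDEX systemevents_priority_idx ON systemevents USING btree (priority);
CREATE INDEX systemevents_receivedat_idx ON systemevents USING btree (receivedat);

-- example: the most recent commands logged from one host
SELECT receivedat, message
  FROM systemevents
 WHERE fromhost = 'myhost'
 ORDER BY receivedat DESC
 LIMIT 20;
SQL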
From just the short time I've been using it, this is really amazing. It lets me find commands across any servers I've been on recently! Never lose a command again! Also, you can correlate it with downtime / other problems if you have multiple users.
I'm planning on writing my own rsyslog plugin to make the history format in the database a little more usable. I'll update when I do :)
Good luck!
You can use the Advanced Shell History tool to write your shell history to sqlite3 and query the database from the command line using the provided ash_query tool.
vagrant@precise32:~$ ash_query -Q
Query Description
CWD Shows the history for the current working directory only.
DEMO Shows who did what, where and when (not WHY).
ME Select the history for just the current session.
RCWD Shows the history rooted at the current working directory.
You can write your own custom queries and also make them available from the command line.
This tool gives you a lot of extra historical information besides commands - you get exit codes, start and stop times, current working directory, tty, etc.
Full disclosure - I am the author and maintainer.
Bash already records all of your commands to ~/.bash_history, which is a plain text file.
You can browse the contents with the up/down arrows, or search them by pressing control-r.
Take a look at fc:
fc: fc [-e ename] [-lnr] [first] [last] or fc -s [pat=rep] [command]
Display or execute commands from the history list.
fc is used to list or edit and re-execute commands from the history list.
FIRST and LAST can be numbers specifying the range, or FIRST can be a
string, which means the most recent command beginning with that
string.
Options:
-e ENAME select which editor to use. Default is FCEDIT, then EDITOR,
then vi
-l list lines instead of editing
-n omit line numbers when listing
-r reverse the order of the lines (newest listed first)
With the `fc -s [pat=rep ...] [command]' format, COMMAND is
re-executed after the substitution OLD=NEW is performed.
A useful alias to use with this is r='fc -s', so that typing `r cc'
runs the last command beginning with `cc' and typing `r' re-executes
the last command.
Exit Status:
Returns success or status of executed command; non-zero if an error occurs.
You can invoke it to get the text to insert into your table, but why bother if it's already saved by bash?
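That said, if you do want a copy in your own table, a minimal bash hook is enough. A sketch, where the database path and schema are made up and duplicate prompts are not filtered out:

log_last_command() {
    local cmd sq="'"
    cmd=$(fc -ln -1)                        # last history entry, without the line number
    cmd=${cmd#"${cmd%%[![:space:]]*}"}      # strip the leading indentation fc adds
    cmd=${cmd//$sq/$sq$sq}                  # double single quotes for SQL
    sqlite3 ~/.cmd_history.db \
        "CREATE TABLE IF NOT EXISTS history(ts TEXT, cmd TEXT);
         INSERT INTO history VALUES (datetime('now'), '$cmd');"
}
PROMPT_COMMAND=log_last_command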
In order to get the full history, either use the history command and process its output:
$ history > history.log
or flush the history (as it is being kept in memory by BASH) using:
$ history -a
and then process ~/.bash_history
I am creating a new CLI application where I want to get some sensitive input from the user. Since this input can be quite descriptive, as well as a bit sensitive, I want to allow the user to enter a command like this in the app:
app new entry
after which I want to provide the user with a Vim session where they can write this descriptive input; when they exit the Vim session, it will be captured by my script and used for further processing.
Can someone tell me a way (probably some hidden Vim feature, since I am always amazed by them) to do this without creating any temporary file? As explained in a comment below, I would prefer a somewhat in-memory file, since the information is a bit sensitive; I would like to process it first in my script and only then write it to disk in an encrypted manner.
Git actually does this: when you type git commit, a new Vim instance is created and a temporary file is used in this instance. In that file, you type your commit message.
Once Vim gets closed again, the content of the temporary file is read and used by Git. Afterwards, the temporary file gets deleted again.
So, to get what you want, you need the following steps:
create a unique temporary file (Create a tempfile without opening it in Ruby)
open Vim on that file (Ruby, Difference between exec, system and %x() or Backticks)
wait until Vim gets terminated again (also contained in the above SO thread)
read the temporary file (How can I read a file with Ruby?)
delete the temporary file (Deleting files in ruby)
That's it.
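For comparison, the same flow in plain shell is only a few lines (a minimal sketch, nothing Git- or Ruby-specific):

tmpfile=$(mktemp) || exit 1     # 1. create a unique temporary file
vim "$tmpfile"                  # 2. open Vim on it and 3. wait for it to exit
entry=$(<"$tmpfile")            # 4. read the temporary file into a variable
rm -f "$tmpfile"                # 5. delete it (consider shred for sensitive data)
# $entry now holds whatever the user wrote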
You can make the shell create a file descriptor attached to your function and make Vim write there, like this (but you need to split the script into two parts: one that calls Vim and one that processes its input):
# First script
…
vim --cmd $'aug ScriptForbidReading\nau BufReadCmd /proc/self/fd/* :' --cmd 'aug END' >(second-script)
Notes:
second-script might actually be a function defined in first script (at least in zsh). This also requires bash or zsh (tested only on the latter).
Requires *nix, maybe won’t work on some OSes considered to be *nix.
BufReadCmd is needed because Vim hangs when trying to read a write-only descriptor.
It is suggested that you set the filetype (if needed) right away, without using ftdetect plugins, in case your script is not the only one using this method.
Zsh will wait for second-script to finish, so you may continue the script right after the vim command in case the information from second-script is not needed (it would be hard to get it from there).
The second script will be launched in a subshell, so no variable modifications made there will be seen by code running after the vim call.
The second script will receive whatever Vim saves on its standard input. The parent's standard input is not directly accessible, but using </dev/tty will probably work.
This is for a zsh/bash script. Nothing really prevents you from using the same idea in Ruby (it is likely more convenient and does not require splitting into two scripts), but I do not know Ruby well enough to say how one can create file descriptors there.
Using vim for this seems like overkill.
The highline ruby gem might do what you need:
require 'highline'
irb> pw = HighLine.new.ask('info: ') {|q| q.echo = false }
info:
=> "abc"
The user's text is not displayed when you set echo to false.
This is also safer than creating a file and then deleting it, because then you'd have to ensure that the delete was secure (overwriting the file several times with random data so it can't be recovered; see the shred or srm utilities).
I wrote a script that retrieves the currently running command using $BASH_COMMAND. The script basically does some logic to figure out the current command and the file being opened for each tmux session. Everything works great, except when the user runs a piped command (e.g. cat file | less), in which case $BASH_COMMAND only seems to store the first command before the pipe. As a result, instead of showing the command as less[file] (which is the actual program that has the file open), the script outputs it as cat[file].
One alternative I tried is relying on history 1 instead of $BASH_COMMAND. There are a couple of issues with this alternative as well. First, it does not auto-expand aliases like $BASH_COMMAND does, which in some cases could confuse the script (for example, if I tell it to ignore ls but use ll instead (mapped to ls -l), the script will not ignore the command and will process it anyway), and including extra conditionals for each alias doesn't seem like a clean solution. The second problem is that I'm using HISTIGNORE to filter out some common commands which I still want the script to be aware of; using history would just make the script miss the last command unless it's tracked by history.
I also tried using ${#PIPESTATUS[@]} to see if the array length is 1 (no pipes) or higher (pipes used, in which case I would retrieve the history instead), but it seems to always only be aware of 1 command as well.
Is anyone aware of other alternatives that could work for me (such as another variable that would store $BASH_COMMAND for the other subcalls that are to be executed after the current subcall is complete, or some way to be aware if the pipe was used in the last command)?
I think you will need to change your implementation a bit and use the "history" command to get it to work. Also, use the "alias" command to check all of the configured aliases, and the "which" command to check whether a command is actually stored in any PATH directory. Good luck!
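A rough sketch of that approach in bash, for illustration only: it expands a single leading alias by hand and takes the command after the last pipe, and it does not handle nested aliases or tricky quoting:

last=$(fc -ln -1)                          # last command line, including any pipes
last=${last#"${last%%[![:space:]]*}"}      # strip the indentation fc adds
first=${last%% *}                          # first word, possibly an alias
if expansion=$(alias "$first" 2>/dev/null); then
    expansion=${expansion#*=\'}            # alias ll='ls -l'  ->  ls -l'
    expansion=${expansion%\'}              #                   ->  ls -l
    last="$expansion${last#"$first"}"
fi
final=${last##*|}                          # the part after the last pipe
echo "$final"                              # e.g. " less" for "cat file | less"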