Previously I wrote a script which logs my visited directories to an sqlite3 database, plus some shortcuts to quickly search and navigate through that history. Now I am thinking of doing the same with my bash commands.
When I execute a command in bash, how can I get the command name? Do I have to change the part of bash's source code responsible for writing the history? Once I have a database of my command history, I can do smart searches in it.
Sorry to come to this question so late!
I tend to run a lot of shells where I work, and as a result the history of long-running shells gets mixed up or lost all the time. I finally got so fed up that I started logging to a database :)
I haven't totally worked out the integration yet, but here is my setup:
Recompile bash with SYSLOG enabled. Since bash version 4.1 this code is all in place; it just needs to be enabled in config-top.h, I believe.
Install the new bash and configure your syslog client to log user.info messages.
Install rsyslog and the rsyslog-pgsql plugin, as well as PostgreSQL. I had a couple of problems getting this installed on Debian testing; PM me if you run into problems, or ask here :)
Configure the user messages to feed into the database.
At the end of all this, all your commands should be logged into a database table called systemevents. You will definitely want to set up indexes on a couple of the fields if you use the shell regularly, as queries can start to take forever :)
Here are a couple of the indexes I set up:
Indexes:
"systemevents_pkey" PRIMARY KEY, btree (id)
"systemevents_devicereportedtime_idx" btree (devicereportedtime)
"systemevents_fromhost_idx" hash (fromhost)
"systemevents_priority_idx" btree (priority)
"systemevents_receivedat_idx" btree (receivedat)
fromhost, receivedat, and devicereportedtime are especially helpful!
From just the short time I've been using it, this is really amazing. It lets me find commands across any servers I've been on recently! Never lose a command again! Also, you can correlate it with downtime / other problems if you have multiple users.
I'm planning on writing my own rsyslog plugin to make the history format in the database a little more usable. I'll update when I do :)
Good luck!
You can use the Advanced Shell History tool to write your shell history to sqlite3 and query the database from the command line using the provided ash_query tool.
vagrant@precise32:~$ ash_query -Q
Query    Description
CWD      Shows the history for the current working directory only.
DEMO     Shows who did what, where and when (not WHY).
ME       Select the history for just the current session.
RCWD     Shows the history rooted at the current working directory.
You can write your own custom queries and also make them available from the command line.
This tool gives you a lot of extra historical information besides the commands themselves: exit codes, start and stop times, current working directory, tty, etc.
Full disclosure - I am the author and maintainer.
Bash already records all of your commands to ~/.bash_history, which is a plain text file.
You can browse the contents with the up/down arrow keys, or search them by pressing Ctrl-R.
Take a look at fc:
fc: fc [-e ename] [-lnr] [first] [last] or fc -s [pat=rep] [command]
Display or execute commands from the history list.
fc is used to list or edit and re-execute commands from the history list.
FIRST and LAST can be numbers specifying the range, or FIRST can be a
string, which means the most recent command beginning with that
string.
Options:
-e ENAME select which editor to use. Default is FCEDIT, then EDITOR,
then vi
-l list lines instead of editing
-n omit line numbers when listing
-r reverse the order of the lines (newest listed first)
With the `fc -s [pat=rep ...] [command]' format, COMMAND is
re-executed after the substitution OLD=NEW is performed.
A useful alias to use with this is r='fc -s', so that typing `r cc'
runs the last command beginning with `cc' and typing `r' re-executes
the last command.
Exit Status:
Returns success or status of executed command; non-zero if an error occurs.
You can invoke it to get the text to insert into your table, but why bother if it's already saved by bash?
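For example, a couple of common fc invocations matching the help text above (a sketch; the count of 10 is arbitrary):

```shell
# List recent history with fc, and set up the `r` alias from the help text
fc -l -10           # list the last 10 commands, with history numbers
fc -ln -10          # the same, without numbers
alias r='fc -s'     # now `r cc` re-runs the last command starting with `cc`
```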
In order to get the full history, either use the history command and process its output:
$ history > history.log
or flush the history (as it is kept in memory by bash) using:
$ history -a
and then process ~/.bash_history
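Putting the two steps together, a minimal sketch (the log name history.log and the timestamp format are my own choices, not prescribed anywhere above):

```shell
# Flush in-memory history to $HISTFILE (bash only; harmless elsewhere),
# then turn each history line into a dated entry in history.log
history -a 2>/dev/null || true
while IFS= read -r cmd; do
  printf '%s\t%s\n' "$(date +%F)" "$cmd"
done < "${HISTFILE:-$HOME/.bash_history}" > history.log
```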
I was googling how to verify the availability of commands.
In the process, I found the builtin variable $commands.
https://zsh.sourceforge.io/Doc/Release/Zsh-Modules.html#index-commands
But when I tried, some commands were not in $commands even though they were available.
I want to understand when and under which conditions $commands gets populated, so I can judge whether it is useful for checking that a command exists.
tl;dr
You may have commands that were moved or installed but never run, and the shell may never have traversed their location since the move or installation. As a result, the command hash table, which $commands accesses, is not up to date with these commands.
If you need a reliable way to check if a command exists use command -v, see this answer.
If you just want to see the command(s) in the command hash table run hash -r or rehash (ZSH only).
Context
What is $commands?
You've found this in the documentation you've linked in your question:
This array gives access to the command hash table. The keys are the names of external commands, the values are the pathnames of the files that would be executed when the command would be invoked. Setting a key in this array defines a new entry in this table in the same way as with the hash builtin. Unsetting a key as in ‘unset "commands[foo]"’ removes the entry for the given key from the command hash table.
– https://zsh.sourceforge.io/Doc/Release/Zsh-Modules.html#index-commands
"Gives access to the command hash table"... let's see what this is.
What is the "command hash table"?
For this we'll go to chapter 3 of the ZSH Guide, specifically section 1 (External Commands).
The only major issue is therefore how to find [external commands]. This is done through the parameters $path and $PATH [...].
There is a subtlety here. The shell tries to remember where the commands are, so it can find them again the next time. It keeps them in a so-called 'hash table', and you find the word 'hash' all over the place in the documentation: all it means is a fast way of finding some value, given a particular key. In this case, given the name of a command, the shell can find the path to it quickly. You can see this table, in the form key=value, by typing hash.
In fact the shell only does this when the option HASH_CMDS is set, as it is by default. As you might expect, it stops searching when it finds the directory with the command it's looking for. There is an extra optimisation in the option HASH_ALL, also set by default: when the shell scans a directory to find a command, it will add all the other commands in that directory to the hash table. This is sensible because on most UNIX-like operating systems reading a whole lot of files in the same directory is quite fast.
– https://zsh.sourceforge.io/Guide/zshguide03.html
Ok, so we now know $commands gives access to the command hash table which is essentially a cache of locations. This cache needs to be updated, right?
When does the command hash table get updated?
1. When the command is found for the first time
We know this from the documentation above:
The shell tries to remember where the commands are, so it can find them again the next time. It keeps them in a so-called 'hash table' [...].
There is additional documentation, for HASH_CMDS, that supports this:
Note the location of each command the first time it is executed. Subsequent invocations of the same command will use the saved location, avoiding a path search.
– https://github.com/zsh-users/zsh/blob/daa208e90763d304dc1d554a834d0066e0b9937c/Doc/Zsh/options.yo#L1275-L1283
2. When your shell scans a directory looking for a command and finds other commands
Again, we know this because of the above documentation:
There is an extra optimisation in the option HASH_ALL, also set by default: when the shell scans a directory to find a command, it will add all the other commands in that directory to the hash table.
Ok this is all the context.
Back to the problem, why?
Why are the commands you are looking for not appearing in the $commands array? Why are they not in the command hash table?
Well, we now know when the command hash table is updated, so we can surmise that the update conditions weren't met. Here are some possibilities as to what situation you may be in:
When the command is found for the first time
You have recently installed a new command and have never run it.
You have had an existing command moved and have never run it since.
When your shell scans a directory looking for a command and finds other commands
You have not run a command that necessitated a path search that would have traversed the location where your new/moved command is installed.
Anything that can be done?
It depends on what you're doing.
It's critical that I know whether a command exists
Don't use $commands. Use command -v <command_name>, see this answer.
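A minimal sketch of that check (the command name git is just an example; command -v performs a real PATH lookup rather than consulting the hash table):

```shell
# Reliable existence check with the POSIX `command -v` builtin
if command -v git >/dev/null 2>&1; then
  echo "git available at: $(command -v git)"
else
  echo "git not installed"
fi
```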
I just want to see the command in the command hash table
You can force the command hash table to update with hash -r or in ZSH rehash.
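For example, in bash (zsh users would run rehash instead of hash -r):

```shell
hash -r             # empty the command hash table
ls /tmp >/dev/null  # running an external command records its location
hash                # the table now shows the resolved path for ls
```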
Further reading
What is the purpose of the hash command?
I have a UniVerse (Rocket U2) system, and want to be able to call certain UniVerse/TCL commands from a shell script. However whenever I run the uv binary it seems to stop the execution of the rest of the shell script.
For Example if I run:
/u2/uv/bin/uv
It starts a UniVerse session. The next line of the script (RUNPY run_tests.py) is meant to be executed in the TCL environment, but is never passed to TCL. I have tried passing string parameters to the uv binary to be executed, but they don't appear to do anything.
Is there a way to call UniVerse/TCL commands from a UNIX/Shell environment?
You can type this manually or put it into a shell script. I have not run into any issues with this paradigm, but your choice of shell could theoretically affect it. You will certainly want to either be in the directory of the account you want to execute it in, or cd to it in the script.
/u2/uv/bin/uv <<start
RUNPY run_tests.py
start
Good Luck.
One thing to watch out for is if you have a LOGIN paragraph or something else that runs automatically to start your application (which is really common), then you need to find a way to bypass this for non-interactive users.
https://groups.google.com/forum/#!topic/comp.databases.pick/B2hzuXq3X9A mentions
IF OCONV(#TTY,'MCU')='PHANTOM' THEN ABORT
I kick off scripts from unix as a PHANTOM, to a) capture the log output in _PH_ and b) end the process if extra input is requested, rather than have it hang around. In UD that's:
$echo "PHANTOM COUNT VOC" | udt
UniData Release 8.1 Build: (2008)
Current UniData home is /unidata/ud81/.
Current working directory is /usr/ud81/demo
:PHANTOM COUNT VOC
PHANTOM process 18743448 started.
COMO file is '_PH_/dsiroot45172_18743448'.
:
Critical abort condition found.
$cat _PH_/dsiroot45172_18743448
COUNT VOC
14670 record(s) counted.
PHANTOM process 18743448 has completed.
Van Amburg's answer is the most correct for handling multiple lines of input. The variant I used: instead of the << heredoc for multi-line strings, I just added quotes around a single command (single and double quotes both work):
/u2/uv/bin/uv "RUNPY run_tests.py"
I know one can dig through history or copy & paste commands right after running them, but both ways require several extra operations. Being a tend-to-be-lazy person, I would really like a feature that logs all commands that ran successfully to a file (e.g. .commands.log) for archiving purposes. Failed commands are not of interest to me. A bonus point would be to exclude some banal commands starting with cd, ls, etc., or sensitive information (e.g. passwords).
One way I am thinking of is to have a function that actually digs through the history and appends to .commands.log:
savecmd # save the command just before this
savecmd -l 5 # save last 5 commands
savecmd -t 1 # save commands run during the last hour
This approach does not do anything about the return status (success/failure), but I can live with that if there is no better way.
But I would still prefer the auto-logging daemon.
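An approximation of that daemon can be built into bash itself with PROMPT_COMMAND, which runs after every interactive command. This is only a sketch; the function name, log path, and skip patterns are my own choices:

```shell
# Log the previous command to ~/.commands.log, but only if it
# succeeded and is not one of the banal commands
log_ok() {
  local status=$? cmd                           # $? is the last command's status
  [ "$status" -eq 0 ] || return                 # skip failed commands
  cmd=$(fc -ln -1 | sed 's/^[[:space:]]*//')    # last command from history
  case $cmd in cd|cd\ *|ls|ls\ *|pwd) return ;; esac   # skip banal commands
  printf '%s\n' "$cmd" >> "$HOME/.commands.log"
}
PROMPT_COMMAND=log_ok
```

Sensitive entries (e.g. anything matching "password") could be filtered with another pattern in the same case statement.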
I am creating a new CLI application where I want to get some sensitive input from the user. Since this input can be quite descriptive and the information is a bit sensitive, I want to allow the user to enter a command like this from the app:
app new entry
after which I want to drop the user into a Vim session where he can write this descriptive input; when he exits the Vim session, the text will be captured by my script and used for further processing.
Can someone tell me a way (probably some hidden Vim feature, since I am always amazed by those) to do so without creating any temporary file? As explained in a comment below, I would prefer a somewhat in-memory file, since the information can be a bit sensitive; I would like to process it first in my script and only then write it to disk in an encrypted manner.
Git actually does this: when you type git commit, a new Vim instance is created and a temporary file is used in that instance. In that file, you type your commit message.
Once Vim is closed again, the content of the temporary file is read and used by Git. Afterwards, the temporary file is deleted again.
So, to get what you want, you need the following steps:
create a unique temporary file (Create a tempfile without opening it in Ruby)
open Vim on that file (Ruby, Difference between exec, system and %x() or Backticks)
wait until Vim gets terminated again (also contained in the above SO thread)
read the temporary file (How can I read a file with Ruby?)
delete the temporary file (Deleting files in ruby)
That's it.
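The steps above can be sketched in shell terms (mktemp and the $EDITOR fallback are my assumptions; the links above show the Ruby equivalents):

```shell
#!/bin/sh
# Create a temp file, let the user edit it, read it back, delete it
tmpfile=$(mktemp) || exit 1
"${EDITOR:-vim}" "$tmpfile"     # blocks until the editor exits
msg=$(cat "$tmpfile")           # capture what the user wrote
rm -f "$tmpfile"                # delete the temporary file again
printf '%s\n' "$msg"
```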
You can make the shell create file descriptors attached to your function and make vim write there, like this (but you need to split the script into two parts: one that calls vim and one that processes its input):
# First script
…
vim --cmd $'aug ScriptForbidReading\nau BufReadCmd /proc/self/fd/* :' --cmd 'aug END' >(second-script)
Notes:
second-script might actually be a function defined in the first script (at least in zsh). This also requires bash or zsh (tested only on the latter).
Requires *nix, maybe won’t work on some OSes considered to be *nix.
BufReadCmd is needed because vim hangs when trying to read write-only descriptor.
It is suggested that you set filetype (if needed) right away, without using ftdetect plugins: in case your script is not the only one which will use this method.
Zsh will wait for second-script to finish, so you may continue script right after vim command in case information from second-script is not needed (it would be hard to get from there).
Second script will be launched from a subshell. Thus no variable modifications will be seen in code running after vim call.
Second script will receive whatever vim saves on standard input. Parent standard input is not directly accessible, but using </dev/tty will probably work.
This is for a zsh/bash script. Nothing will really prevent you from using the same idea in ruby (it is likely more convenient and does not require splitting into two scripts), but I do not know ruby well enough to say how one can create file descriptors there.
Using vim for this seems like overkill.
The highline ruby gem might do what you need:
require 'highline'
irb> pw = HighLine.new.ask('info: ') {|q| q.echo = false }
info:
=> "abc"
The user's text is not displayed when you set echo to false.
This is also safer than creating a file and then deleting it, because then you'd have to ensure that the delete was secure (overwriting the file several times with random data so it can't be recovered; see the shred or srm utilities).
I wrote a script that retrieves the currently running command using $BASH_COMMAND. The script basically does some logic to figure out the current command and the file being opened, for each tmux session. Everything works great, except when the user runs a piped command (e.g. cat file | less), in which case $BASH_COMMAND only stores the first command before the pipe. As a result, instead of showing the command as less[file] (which is the actual program that has the file open), the script outputs it as cat[file].
One alternative I tried is relying on history 1 instead of $BASH_COMMAND. There are a couple of issues with this alternative as well. First, it does not auto-expand aliases like $BASH_COMMAND does, which in some cases could confuse the script (for example, if I tell it to ignore ls but use ll instead, mapped to ls -l, the script will not ignore the command and processes it anyway), and including extra conditionals for each alias doesn't seem like a clean solution. The second problem is that I'm using HISTIGNORE to filter out some common commands which I still want the script to be aware of; using history will simply make the script miss the last command unless it's tracked by history.
I also tried using ${#PIPESTATUS[@]} to see if the array length is 1 (no pipes) or higher (pipes used, in which case I would retrieve the history instead), but it also always seems to be aware of only one command.
Is anyone aware of other alternatives that could work for me (such as another variable that would store $BASH_COMMAND for the other subcalls that are to be executed after the current subcall is complete, or some way to be aware if the pipe was used in the last command)?
I think you will need to change your implementation a bit and use the history command to get this to work. Also, use the alias command to check all of the configured aliases, and the which command to check whether the command is actually stored in any PATH directory. Good luck.
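For the alias part of this, a bash sketch (the function name resolve is mine; it parses the `alias name='value'` output format of the alias builtin):

```shell
# Print an alias's definition if one exists, else the name unchanged
resolve() {
  local def
  if def=$(alias "$1" 2>/dev/null); then
    def=${def#*=}                 # drop the leading "alias name="
    def=${def#\'}; def=${def%\'}  # strip the surrounding single quotes
    printf '%s\n' "$def"
  else
    printf '%s\n' "$1"
  fi
}
```

With `alias ll='ls -l'` defined, `resolve ll` prints `ls -l`, while `resolve cat` prints `cat`, so the ignore list can be matched against the expanded command.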