Log successful commands to a file at current directory - bash

I know one can dig through history or copy & paste commands right after running them, but both ways require several extra operations. Being a rather lazy person, I would really like a feature that logs all commands that ran successfully to a file (e.g. .commands.log) for archiving purposes. Failed commands are not of interest to me. A bonus would be excluding banal commands starting with cd, ls, etc., or anything containing sensitive information (e.g. passwords).
One way I am considering is a function that digs through history and appends to .commands.log:
savecmd # save the command just before this
savecmd -l 5 # save last 5 commands
savecmd -t 1 # save commands run during the last 1 hour
This approach does not take return status (success/failure) into account, but I can live with that if there is no better way.
But I would still prefer the auto-logging daemon.
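A minimal sketch of how this could be approximated in bash without a separate daemon: PROMPT_COMMAND runs just before each prompt, at which point the exit status and the history entry of the previous command are both still available. The file name .commands.log and the excluded prefixes are illustrative, and pressing Enter on an empty prompt will log the previous command a second time:
log_successful_command() {
    local status=$?                                           # exit status of the last command
    local cmd
    cmd=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')  # last history entry, number stripped
    [ "$status" -ne 0 ] && return                             # skip failed commands
    case $cmd in
        cd*|ls*|history*) return ;;                           # skip banal commands
    esac
    printf '%s\n' "$cmd" >> ./.commands.log
}
PROMPT_COMMAND=log_successful_command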

Related

Want to run a list of commands, but be able to edit the list while it's running

I have a list of (bash) commands I want to run:
<Command 1>
<Command 2>
...
<Command n>
Each command takes a long time to run, and sometimes after seeing the output of (e.g.) <Command 1>, I'd like to update a parameter of <Command 5>, or add a new <Command k> at an arbitrary position in the list. But I want to be able to walk away from my machine at any time, and have it keep working through my last update to the list.
This is similar to the question here: Edit shell script while it's running. Some of those answers could be made to serve, but that question had the additional constraint of wanting to edit the script file itself, and I suspect there is a simpler answer because I don't have that exact constraint.
My current solution is to end my script with a call to a second script. I can edit the second file while the first one runs; this lets me append new commands to the end of my list, but I can't make any changes to the list of commands in the first file. And once execution has started in the second file, I can't make any more changes. But I often stop my script to insert updates, and this sometimes means stopping a long command that is almost complete, only so that I can update later items on the list before I leave my machine for a time. I could of course chain together many files in this way, but that seems a mess for what (hopefully) has a simple solution.
This is more of a conceptual answer than one where I provide the full code. My idea would be to run Redis (Redis description here) - it is pretty simple to install - and use it as a data-structure server. In your case, the data structure would be a list of jobs.
So, you basically add each job to a Redis list which you can do using LPUSH at the command-line:
echo "lpush jobs job1" | redis-cli
You can then start one or more workers, in parallel if you wish; they sit in a loop, doing a repeated BRPOP of jobs (a blocking pop that waits until a job is available) off the list and processing them:
#!/bin/bash
while :; do
    # Blocking pop: wait until a job is available. BRPOP returns the key
    # name and the value, so keep only the value (the job itself).
    job=$(redis-cli brpop jobs 0 | tail -n 1)
    eval "$job"
done
And then you are at liberty to modify the list while the worker(s) is/are running using deletions and insertions.
Example here.
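For illustration, the queue could be edited from another shell along these lines; the job names are made up, and LREM/LINSERT/LPUSH are standard Redis list commands:
redis-cli lrem jobs 1 "job3"                  # delete one occurrence of a queued job
redis-cli linsert jobs BEFORE "job5" "job4b"  # insert a new job before an existing one
redis-cli lpush jobs "job9"                   # add a brand-new job to the queue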
I would put each command that you want to run in its own file, and in the main file list all of the command files.
ex: main.sh
#!/bin/bash
# Here you define the absolute path of your script
scriptPath="/home/script/"
# Name of your script
scriptCommand1="command_1.sh"
scriptCommand2="command_2.sh"
...
scriptCommandN="command_N.sh"
# Here you execute your script
$scriptPath/$scriptCommand1
$scriptPath/$scriptCommand2
...
$scriptPath/$scriptCommandN
I suppose that while one is running you can modify the others, since they are external files.

When data is piped from one program via | is there a way to detect what that program was from the second program?

Say you have a shell command like
cat file1 | ./my_script
Is there any way, from inside the my_script command, to detect the command that produced the pipe input (in the above example, cat file1)?
I've been digging into it and so far I've not found any possibilities.
I've been unable to find any environment variables set in the process space of the second command that record the full command line. The command data that my_script sees (via /proc etc.) is just ./my_script and doesn't include any information about it being run as part of a pipe. Even checking the process list from inside the second command doesn't seem to provide any data, since the first process seems to exit before the second starts.
The best information I've been able to find suggests that in bash, in some cases, you can get the exit codes of processes in the pipe via PIPESTATUS; unfortunately, nothing similar seems to exist for the names of the commands/files in the pipe. My research seems to say it's impossible to do in a generic manner (I can't control how people decide to run my_script, so I can't force third-party pipe replacement tools to be used over built-in shell pipes), but at the same time it doesn't seem like it should be impossible, since the shell has the full command line present as the command is run.
(Update: adding later information following on from the comments below.)
I am on Linux.
I've investigated the /proc/$$/fd data and it almost does the job. If the first command doesn't exit for several seconds while piping data to the second command, you can read /proc/$$/fd/0 to see the value pipe:[PIPEID] that it symlinks to. That can then be used to search through the rest of the /proc/*/fd/ data of other running processes to find another process with a pipe open using the same PIPEID, which gives you the first process's pid.
However, in most real-world tests of piping that I've done, you can't trust that the first command will stay running long enough for the second one to locate its pipe fd in /proc before it exits (which removes the proc data, preventing it from being read). So I can't rely on this method returning any information.
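For reference, a sketch of the /proc scan described above (Linux only); it works only while the writing process is still alive, and it can only see processes whose /proc entries the script is allowed to read:
#!/bin/bash
# Resolve the pipe inode on our stdin, e.g. pipe:[123456]
pipe_id=$(readlink "/proc/$$/fd/0")
case $pipe_id in
    pipe:*) ;;
    *) echo "stdin is not a pipe" >&2; exit 1 ;;
esac
# Scan other processes' fds for the same pipe inode
for fd in /proc/[0-9]*/fd/*; do
    pid=${fd#/proc/}; pid=${pid%%/*}
    [ "$pid" = "$$" ] && continue
    if [ "$(readlink "$fd" 2>/dev/null)" = "$pipe_id" ]; then
        echo "pipe peer pid $pid: $(tr '\0' ' ' < "/proc/$pid/cmdline")"
    fi
done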

Last run time of shell script?

I need to create some sort of fail safe in one of my scripts, to prevent it from being re-executed immediately after failure. Typically when a script fails, our support team reruns the script using a 3rd party tool. Which is usually ok, but it should not happen for this particular script.
I was going to echo out a time-stamp into the log, and then make a condition to see if the current time-stamp is at least 2 hrs greater than the one in the log. If so, the script will exit itself. I'm sure this idea will work. However, this got me curious to see if there is a way to pull in the last run time of the script from the system itself? Or if there is an alternate method of preventing the script from being immediately rerun.
It's a SunOS Unix system, using the Ksh Shell.
Just do it as you proposed: save the date to some file and check it at script start. You can either:
check the last line (as a date string itself),
or check the file's last modification time (i.e. when the last date command modified the file).
Another common method is to create a dedicated lock-file or pid-file, such as /var/run/script.pid. Its content is usually the PID (and hostname, if needed) of the process that created it. The file's modification time tells you when it was created, and from its content you can check the PID of the running process. If that PID no longer exists (e.g. the previous process died) and the file's modification time is older than X minutes, you can start the script again.
This method is useful mainly because you can use a simple cron job plus some script_starter.sh that periodically checks whether the script is running and restarts it when needed.
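A rough sketch of the pid-file idea, in plain Bourne/ksh syntax; the path and script name are illustrative, and the "older than X minutes" check on the file's modification time is left out for brevity:
#!/bin/ksh
# Refuse to start if a previous instance is still alive.
pidfile=/var/run/myscript.pid
if [ -f "$pidfile" ]; then
    oldpid=$(cat "$pidfile")
    # kill -0 only tests whether the process still exists
    if kill -0 "$oldpid" 2>/dev/null; then
        echo "Already running as PID $oldpid; exiting." >&2
        exit 1
    fi
fi
echo $$ > "$pidfile"
# ... the real work goes here ...
rm -f "$pidfile"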
If you want to use system facilities (and have root access), you can use accton + lastcomm.
I don't know SunOS, but it probably has those programs. accton starts system-wide accounting of all programs (it needs root), and lastcomm command_name | tail -n 1 shows when command_name was last executed.
Check man lastcomm for the command-line switches.

Logging bash-history in a database

Previously I wrote a script which logs my visited directories to a sqlite3 db, along with some shortcuts to quickly search and navigate through that history. Now I am thinking of doing the same with my bash commands.
When I execute a command in bash, how can I get the command's text? Do I have to change the part of bash's source code responsible for writing the bash history? Once I have a database of my command history, I can do smart searches in it.
Sorry to come to this question so late!
I tend to run a lot of shells where I work, and as a result the history of long-running shells gets mixed up or lost all the time. I finally got so fed up that I started logging to a database :)
I haven't worked out the integration totally but here is my setup:
Recompile bash with SYSLOG enabled. Since bash version 4.1 this code is all in place; it just needs to be enabled in config-top.h, I believe.
Install new bash and configure your syslog client to log user.info messages
Install rsyslog and the rsyslog-pgsql plugin as well as postgresql. I had a couple of problems getting this installed on Debian testing; PM me if you run into problems, or ask here :)
Configure the user messages to feed into the database.
At the end of all this, all your commands should be logged into a database table called systemevents. You will definitely want to set up indexes on a couple of the fields if you use the shell regularly, as queries can start to take forever :)
Here are a couple of the indexes I set up:
Indexes:
"systemevents_pkey" PRIMARY KEY, btree (id)
"systemevents_devicereportedtime_idx" btree (devicereportedtime)
"systemevents_fromhost_idx" hash (fromhost)
"systemevents_priority_idx" btree (priority)
"systemevents_receivedat_idx" btree (receivedat)
fromhost, receivedat, and devicereportedtime are especially helpful!
From just the short time I've been using it, this is really amazing. It lets me find commands across any servers I've been on recently! Never lose a command again! Also, you can correlate it with downtime / other problems if you have multiple users.
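For example, a hypothetical query against that table from the shell; the database name Syslog and the column names follow the stock rsyslog-pgsql schema, so adjust them if your setup differs:
psql Syslog -c "
    SELECT receivedat, fromhost, message
    FROM systemevents
    WHERE message LIKE '%rsync%'
    ORDER BY receivedat DESC
    LIMIT 20;"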
I'm planning on writing my own rsyslog plugin to make the history format in the database a little more usable. I'll update when I do :)
Good luck!
You can use the Advanced Shell History tool to write your shell history to sqlite3 and query the database from the command line using the provided ash_query tool.
vagrant#precise32:~$ ash_query -Q
Query Description
CWD Shows the history for the current working directory only.
DEMO Shows who did what, where and when (not WHY).
ME Select the history for just the current session.
RCWD Shows the history rooted at the current working directory.
You can write your own custom queries and also make them available from the command line.
This tool gives you a lot of extra historical information besides commands - you get exit codes, start and stop times, current working directory, tty, etc.
Full disclosure - I am the author and maintainer.
Bash already records all of your commands to ~/.bash_history, which is a plain text file.
You can browse the contents with the up/down arrows, or search them by pressing Ctrl-R.
Take a look at fc:
fc: fc [-e ename] [-lnr] [first] [last] or fc -s [pat=rep] [command]
Display or execute commands from the history list.
fc is used to list or edit and re-execute commands from the history list.
FIRST and LAST can be numbers specifying the range, or FIRST can be a
string, which means the most recent command beginning with that
string.
Options:
-e ENAME select which editor to use. Default is FCEDIT, then EDITOR,
then vi
-l list lines instead of editing
-n omit line numbers when listing
-r reverse the order of the lines (newest listed first)
With the `fc -s [pat=rep ...] [command]' format, COMMAND is
re-executed after the substitution OLD=NEW is performed.
A useful alias to use with this is r='fc -s', so that typing `r cc'
runs the last command beginning with `cc' and typing `r' re-executes
the last command.
Exit Status:
Returns success or status of executed command; non-zero if an error occurs.
You can invoke it to get the text to insert into your table, but why bother if it's already saved by bash?
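If you do want to capture it yourself, a minimal sketch might combine fc with sqlite3; note that fc only works where command history is available (interactive shells), and the database path and table layout here are made up for illustration:
# Grab the text of the most recent command (fc -ln -1 prints it without a number)
last_cmd=$(fc -ln -1 | sed 's/^[[:space:]]*//')
# Append it to a hypothetical sqlite3 history database, doubling single quotes for SQL
sqlite3 ~/.cmd_history.db \
    "CREATE TABLE IF NOT EXISTS history(ts TEXT, cmd TEXT);
     INSERT INTO history VALUES (datetime('now'), '$(printf %s "$last_cmd" | sed "s/'/''/g")');"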
In order to get the full history, either use the history command and process its output:
$ history > history.log
or flush the history (as it is being kept in memory by BASH) using:
$ history -a
and then process ~/.bash_history

Get last bash command including pipes

I wrote a script that retrieves the currently run command using $BASH_COMMAND. The script basically does some logic to figure out the current command and the file being opened for each tmux session. Everything works great, except when the user runs a piped command (e.g. cat file | less), in which case $BASH_COMMAND only seems to store the first command before the pipe. As a result, instead of showing the command as less[file] (which is the actual program that has the file open), the script outputs it as cat[file].
One alternative I tried is relying on history 1 instead of $BASH_COMMAND. There are a couple of issues with this alternative as well. First, it does not auto-expand aliases the way $BASH_COMMAND does, which in some cases could confuse the script (for example, if I tell it to ignore ls but use ll instead (mapped to ls -l), the script will not ignore the command and will process it anyway), and adding extra conditionals for each alias doesn't seem like a clean solution. The second problem is that I'm using HISTIGNORE to filter out some common commands which I still want the script to be aware of; using history would make the script miss the last command unless it's tracked by history.
I also tried using ${#PIPESTATUS[@]} to see if the array length is 1 (no pipes) or higher (pipes used, in which case I would retrieve the history instead), but it seems to always be aware of only 1 command as well.
Is anyone aware of other alternatives that could work for me (such as another variable that would store $BASH_COMMAND for the other subcalls that are to be executed after the current subcall is complete, or some way to be aware if the pipe was used in the last command)?
I think that you will need to change your implementation a bit and use the "history" command to get it to work. Also, use the "alias" command to check all of the configured aliases, and the "which" command to check whether the command is actually stored in any PATH directory. Good luck.
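A rough sketch of that suggestion, assuming bash: take the last history entry and expand a leading alias via the BASH_ALIASES array (which is only populated in interactive shells where the aliases are defined):
# Last history entry, with the leading number stripped
last=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')
first_word=${last%%[[:space:]]*}
# If the first word is an alias, substitute its definition
if [ -n "${BASH_ALIASES[$first_word]}" ]; then
    last="${BASH_ALIASES[$first_word]}${last#"$first_word"}"
fi
printf '%s\n' "$last"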

Resources