Get last bash command including pipes

I wrote a script that retrieves the currently running command using $BASH_COMMAND. The script basically does some logic to figure out the current command and the file being opened for each tmux session. Everything works great, except when the user runs a piped command (e.g. cat file | less), in which case $BASH_COMMAND only seems to store the first command before the pipe. As a result, instead of showing the command as less[file] (less being the program that actually has the file open), the script outputs it as cat[file].
One alternative I tried is relying on history 1 instead of $BASH_COMMAND. There are a couple of issues with this as well. First, it does not auto-expand aliases the way $BASH_COMMAND does, which in some cases could confuse the script (for example, if I tell it to ignore ls but use ll instead (aliased to ls -l), the script will not ignore the command and will process it anyway), and adding extra conditionals for each alias doesn't seem like a clean solution. Second, I'm using HISTIGNORE to filter out some common commands that I still want the script to be aware of; with history, the script will simply miss the last command unless history tracks it.
I also tried using ${#PIPESTATUS[@]} to see whether the array length is 1 (no pipes) or higher (pipes used, in which case I would fall back to the history), but it also seems to only ever be aware of one command.
Is anyone aware of other alternatives that could work for me (such as another variable that stores $BASH_COMMAND for the remaining commands of the pipeline as each one runs, or some way to tell whether a pipe was used in the last command)?

I think you will need to change your implementation a bit and use the "history" command to get this to work. Also, use the "alias" command to check all of the configured aliases, and "which" to check whether the command is actually stored in any PATH directory. Good luck.
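For example, here's a minimal sketch of that approach (assuming it runs inside the interactive shell, e.g. from PROMPT_COMMAND, since history and aliases are per-shell state):

last=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')  # strip the event number
first=${last%% *}
# If the first word is an alias (e.g. ll -> 'ls -l'), substitute its expansion.
if def=$(alias "$first" 2>/dev/null); then
    def=${def#*=\'}; def=${def%\'}
    last="$def${last#"$first"}"
fi
echo "expanded: $last"

The type -a builtin is a handy alternative to which here, since it also reports whether a name is an alias, a function, or a binary on PATH.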

Related

Bash: How to create a test mode that displays commands instead of executing them

I have a bash script that executes a series of commands, some involving redirection. See cyrus-mark-ham-spam.
I want the script to have a test mode, where all the commands are printed instead of executed. As you can see, I have tried to do that by just putting "echo" in front of each command in test mode.
Unfortunately this doesn't deal with redirection - any redirections are still done, so the program leaves lots of temp files littered about the place when run in test mode.
I have tried various ways to get round this, like quoting the whole command and passing it to a function that either prints it or runs it, but either the redirections work in test mode, or they don't work in run mode.
I thought this must have come up before, and wonder if there is a known solution that does not involve wrapping every command in an if TEST conditional?
Please note, this is NOT a duplicate of show commands without executing them because neither that question, nor its answers, covers redirection (which is the essence of this question).
I see that it is not a duplicate, but there is no general solution to this. You need to look at each command separately.
As long as the command doesn't use arguments containing spaces, like
cmd -a -b -c > filename
, you can quote it:
echo 'cmd -a -b -c > filename'
But real life code is more complex, sure.
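A common workaround for the function approach the question mentions is to pass the whole command line as a single string and eval it in run mode; that way the redirection is quoted in test mode and interpreted in run mode. A sketch (eval carries the usual quoting risks):

run() {
    if [ "${TEST_MODE:-0}" -eq 1 ]; then
        printf '%s\n' "$1"   # test mode: print the command, redirection included
    else
        eval "$1"            # run mode: the shell parses the redirection
    fi
}

TEST_MODE=1
run 'cmd -a -b -c > filename'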

When data is piped from one program via | is there a way to detect what that program was from the second program?

Say you have a shell command like
cat file1 | ./my_script
Is there any way, from inside the 'my_script' command, to detect the command that produced the pipe input (in the above example, cat file1)?
I've been digging into it and so far I've not found any possibilities.
I've been unable to find any environment variables set in the process space of the second command recording the full command line; the command data my_script sees (via /proc etc.) is just ./my_script and doesn't include any information about it being run as part of a pipe. Even checking the process list from inside the second command doesn't seem to provide any data, since the first process seems to exit before the second starts.
The best information I've been able to find suggests that in bash you can sometimes get the exit codes of processes in the pipe via PIPESTATUS; unfortunately nothing similar seems to exist for the names of the commands/files in the pipe. My research seems to say it's impossible to do in a generic manner (I can't control how people decide to run my_script, so I can't force third-party pipe-replacement tools to be used over built-in shell pipes), but at the same time it doesn't seem like it should be impossible, since the shell has the full command line present as the command is run.
(update adding in later information following on from comments below)
I am on Linux.
I've investigated the /proc/$$/fd data and it almost does the job. If the first command doesn't exit for several seconds while piping data to the second command, you can read /proc/$$/fd/0 to see the pipe:[PIPEID] value it symlinks to. That can then be used to search through the /proc/*/fd/ data of the other running processes for another process holding a pipe open with the same PIPEID, which gives you the first process's pid.
However, in most real-world tests of piping I've done, you can't trust that the first command will stay running long enough for the second one to locate its pipe fd in /proc before it exits (which removes the proc data, preventing it from being read). So I can't rely on this method returning any information.
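For reference, a minimal sketch of that /proc approach (Linux only, and racy for the reason above: the writer may exit before we look):

pipe_id=$(readlink /proc/$$/fd/0)               # e.g. "pipe:[123456]"
case $pipe_id in pipe:*) ;; *) echo "stdin is not a pipe" >&2; exit 1 ;; esac
for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "$pipe_id" ]; then
        pid=${fd#/proc/}; pid=${pid%%/*}
        [ "$pid" != "$$" ] && echo "peer: $pid $(tr '\0' ' ' < /proc/$pid/cmdline)"
    fi
done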

Are shell scripts read in their entirety when invoked?

I ask because I recently made a change to a KornShell (ksh) script that was executing. A short while after I saved my changes, the executing process failed. Judging from the error message, it looked as though the running process had seen some -- but not all -- of my changes. This strongly suggests that when a shell script is invoked, the entire script is not read into memory.
If this conclusion is correct, it suggests that one should avoid making changes to scripts that are running.
$ uname -a
SunOS blahblah 5.9 Generic_122300-61 sun4u sparc SUNW,Sun-Fire-15000
No. Shell scripts are read and executed either line by line, or command by command (commands separated by ;), with the exception of constructs such as if ... fi blocks, which are interpreted as a chunk:
A shell script is a text file containing shell commands. When such a file is used as the first non-option argument when invoking Bash, and neither the -c nor -s option is supplied (see Invoking Bash), Bash reads and executes commands from the file, then exits. This mode of operation creates a non-interactive shell.
You can demonstrate that the shell waits for the fi of an if block to execute commands by typing them manually on the command line.
http://www.gnu.org/software/bash/manual/bashref.html#Executing-Commands
http://www.gnu.org/software/bash/manual/bashref.html#Shell-Scripts
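A quick experiment that makes the incremental reading visible (buffering behavior can vary between shells and versions, so treat this as a demonstration, not a guarantee): a script that appends a command past its own current read position ends up executing it.

#!/bin/bash
# self-append.sh -- bash reads this file as it executes it, so the line
# appended below is picked up and run when the reader reaches it.
# (Each run grows the file by one line.)
echo 'echo "this line was added while the script was running"' >> "$0"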
It's funny that most OSes I know do NOT read the entire content of a script into memory but run it from disk. Reading it in wholesale would make it safe to change the script while it runs. I don't understand why that isn't done, given that:
scripts are usually very small (and don't take much memory anyway)
at some point, as shown in this thread, people will make changes to a script that is already running anyway
But, acknowledging this, here's something to think about: if you decide that a script is not running OK (because you are writing/changing/debugging it), do you care about the rest of that run? You can go ahead and make the changes, save them, and ignore all output and actions done by the current run.
But... sometimes, depending on the script in question, a subsequent run of the same script (modified or not) can become a problem, since the current/previous run is misbehaving. It will typically skip some steps, or suddenly jump to parts of the script it shouldn't. And THAT may be a problem. It may leave "things" in a bad state, particularly if file manipulation/creation is involved.
So, as a general rule: whether or not the OS supports the feature, it's best to let the current run finish and THEN save the updated script. You can make the changes already, just don't save them.
It's not like the old days of DOS, where you had only one screen in front of you (one DOS screen) and no choice but to wait for the run to complete before you could open the file again.
No, they are not, and there are many good reasons for that.
One thing you should keep in mind is that a shell is not an interpreter, even if there are some similarities. Shells are designed to work with a stream of commands, whether from a TTY, a pipe, a FIFO, or even a socket.
The shell reads from its source line by line until the kernel returns EOF.
Most shells have no extra support for interpreting files; they work with a file as they would with a terminal.
In fact this is considered a nice feature, because you can do interesting stuff like this: How do Linux binary installers (.bin, .sh) work?
You can take a binary file and prepend a shell script to it. You can't do this with an interpreter, because it parses the whole file, or at least would try to and fail. A shell just interprets the file line by line and doesn't care about the garbage at the end. You just have to make sure execution of the script terminates before it reaches the binary part, as in the sketch below.
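A minimal sketch of such a self-extracting script (the __PAYLOAD__ marker name and output path are just illustrative):

#!/bin/sh
# Everything after the marker line is raw binary data appended to this file.
skip=$(awk '/^__PAYLOAD__$/ { print NR + 1; exit }' "$0")
tail -n +"$skip" "$0" > /tmp/payload.bin
chmod +x /tmp/payload.bin
exit 0   # crucial: stop the shell before it reads the binary part
__PAYLOAD__

The installer would be built with something like cat extractor.sh program.bin > installer.sh.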

Converting a history command into a shell script

This is sort of one of those things that I figured a lot of people would use a lot, but I can't seem to find anyone who has written about it.
I find that I often iterate on a command-line one-liner, and when I end up using it a lot, anticipate wanting to use it in the future, or find it cumbersome to work with on one line, it's generally a good idea to turn the one-liner into a shell script and stick it somewhere reasonable and easily accessible like ~/bin.
It's obviously too cumbersome to use any sort of roundabout method involving a text editor to get this done, and it's possible to do it directly in the shell; for instance, in zsh, typing
echo "#!/usr/bin/env sh" > ~/bin/command_from_history_number_523.sh && echo !523 >> ~/bin/command_from_history_number_523.sh
followed by pressing Tab to expand the !523 history reference and somehow shoehorning it into an acceptable string to be saved.
This is particularly cumbersome and has at minimum three problems:
Does not work in bash as it does not complete the !523
Requires some manual inspection and string escaping
Requires too much typing; for example, the script name must be entered twice
So it looks like I need to do some meta shell scripting here.
I think a good solution would function under both bash and zsh, and it should probably work by taking two arguments: an integer for the history command number, and a name for the shell script to poop out into a hardcoded directory, containing that one command. Furthermore, under bash it appears that multi-line commands are treated as separate commands, but I'm willing to assume we only care about one-liners here, and I only use zsh anyway at this point.
The stumbling block here is that I think I'll still be running shell scripts through bash even when using zsh, so the script likely won't be able to parse zsh's history files. I may need to make this into two separate programs.
Update: I agree with @Floris's comment that direct use of commands like !! would be helpful, though I am not sure how to make it work. Suppose the usage were
mkscript command_number_24 !24
This is inadequate because mkscript will receive the expanded-out contents of the 24th command. If the 24th command contains any file globs or such, they will already have been expanded. This is bad: I basically want the contents of the history file, i.e. the raw command string. I guess this could be worked around by manually implementing those shortcuts here. Or just screw it and take an integer argument.
function mkscript() {
    # $1 = history event number, $2 = script name
    echo '#!/bin/bash' > ~/bin/"$2"
    history -p "!$1" >> ~/bin/"$2"   # -p expands the event without running it
    chmod +x ~/bin/"$2"
}
Only tested in Bash.
Update from OP: In zsh I can accomplish this with fc -l $2 $2
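Putting that together, a zsh variant of the function might look like this (a sketch; fc -ln N N prints entry N without its event number):

mkscript() {
    echo '#!/bin/zsh' > ~/bin/"$2"
    fc -ln "$1" "$1" >> ~/bin/"$2"   # raw, unexpanded history entry
    chmod +x ~/bin/"$2"
}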

Get Input for a bash script by capturing it from a VIM session

I am creating a new CLI application where I want to get some sensitive input from the user. Since this input can be quite long and the information is a bit sensitive, I wanted to allow the user to enter a command like this from the app:
app new entry
after which I want to drop the user into a Vim session where they can write this descriptive input; when they exit the Vim session, the text will be captured by my script and used for further processing.
Can someone tell me a way (probably some hidden Vim feature, since I am always amazed by those) to do this without creating any temporary file? As explained in a comment below, I would prefer a somewhat in-memory file: the information is sensitive, so I would like to process it first in my script and only then write it to disk in encrypted form.
Git actually does this: when you type git commit, a new Vim instance is created and a temporary file is used in that instance. In that file, you type your commit message.
Once Vim is closed again, the content of the temporary file is read and used by Git. Afterwards, the temporary file is deleted.
So, to get what you want, you need the following steps:
create a unique temporary file (Create a tempfile without opening it in Ruby)
open Vim on that file (Ruby, Difference between exec, system and %x() or Backticks)
wait until Vim gets terminated again (also contained in the above SO thread)
read the temporary file (How can I read a file with Ruby?)
delete the temporary file (Deleting files in ruby)
That's it.
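In shell terms, the same round trip is short (a sketch; mktemp and an EDITOR environment variable are assumed to be available):

tmpfile=$(mktemp) || exit 1
trap 'rm -f "$tmpfile"' EXIT        # delete the temporary file on exit
"${EDITOR:-vim}" "$tmpfile"         # blocks until the editor exits
entry=$(<"$tmpfile")                # capture what the user wrote

Note that this still touches disk; the descriptor-based answer below avoids the temporary file entirely.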
You can make the shell create a file descriptor attached to your function and make Vim write there, like this (but you need to split the script in two parts: one that calls Vim and one that processes its input):
# First script
…
vim --cmd $'aug ScriptForbidReading\nau BufReadCmd /proc/self/fd/* :' --cmd 'aug END' >(second-script)
Notes:
second-script might actually be a function defined in the first script (at least in zsh). This also requires bash or zsh (tested only on the latter).
Requires *nix; it may not work on some OSes considered to be *nix.
BufReadCmd is needed because Vim hangs when trying to read a write-only descriptor.
It is suggested that you set the filetype (if needed) right away, without relying on ftdetect plugins, in case your script is not the only one using this method.
Zsh will wait for second-script to finish, so you may continue the script right after the vim command in case information from second-script is not needed (it would be hard to get anything back from there).
The second script will be launched from a subshell, so no variable modifications it makes will be visible to code running after the vim call.
The second script will receive whatever Vim saves on its standard input. The parent's standard input is not directly accessible, but using </dev/tty will probably work.
This is for a zsh/bash script. Nothing really prevents you from using the same idea in Ruby (it is likely more convenient and does not require splitting into two scripts), but I do not know Ruby well enough to say how one can create such file descriptors there.
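For completeness, a sketch of what second-script itself might do; the name comes from the answer above, and it simply reads from standard input whatever Vim writes to the descriptor on :w.

#!/bin/bash
entry=$(cat)                                   # everything Vim saved
printf 'captured %d bytes\n' "${#entry}" >&2   # report without using stdout
# process "$entry" here, e.g. encrypt it before anything touches disk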
Using vim for this seems like overkill.
The highline ruby gem might do what you need:
irb> require 'highline'
irb> pw = HighLine.new.ask('info: ') { |q| q.echo = false }
info:
=> "abc"
The user's text is not displayed when you set echo to false.
This is also safer than creating a file and then deleting it, because then you'd have to ensure that the delete was secure (overwriting the file several times with random data so it can't be recovered; see the shred or srm utilities).
