I ask because I recently made a change to a KornShell (ksh) script that was executing. A short while after I saved my changes, the executing process failed. Judging from the error message, it looked as though the running process had seen some -- but not all -- of my changes. This strongly suggests that when a shell script is invoked, the entire script is not read into memory.
If this conclusion is correct, it suggests that one should avoid making changes to scripts that are running.
$ uname -a
SunOS blahblah 5.9 Generic_122300-61 sun4u sparc SUNW,Sun-Fire-15000
No. Shell scripts are read and executed incrementally, line by line (or command by command when commands are separated by ;), with the exception of compound constructs such as if ... fi blocks, which are read and interpreted as a single unit:
A shell script is a text file containing shell commands. When such a
file is used as the first non-option argument when invoking Bash, and
neither the -c nor -s option is supplied (see Invoking Bash), Bash
reads and executes commands from the file, then exits. This mode of
operation creates a non-interactive shell.
You can demonstrate that the shell waits for the fi of an if block to execute commands by typing them manually on the command line.
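For example, in an interactive bash session (assuming the default secondary prompt > for continuation lines), nothing inside the block is executed until the closing fi is typed:
$ if true
> then
>     echo "first"
>     echo "second"
> fi
first
second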
http://www.gnu.org/software/bash/manual/bashref.html#Executing-Commands
http://www.gnu.org/software/bash/manual/bashref.html#Shell-Scripts
It's funny that most OSes I know do NOT read the entire content of a script into memory, but instead run it from disk. Reading the whole script up front would make it safe to change the script while it is running. I don't understand why it's done this way, given that:
scripts are usually very small (and wouldn't take much memory anyway)
at some point, as shown in this thread, people will start making changes to a script that is already running anyway
But, acknowledging this, here's something to think about: if you've decided that a script is not running correctly (because you are writing/changing/debugging it), do you really care about the rest of that run? You can go ahead and make the changes, save them, and ignore all output and actions produced by the current run.
But... sometimes, depending on the script in question, a subsequent run of the same script (modified or not) can become a problem, because the current/previous run is behaving abnormally: it will typically skip some steps, or suddenly jump to parts of the script it shouldn't. And THAT may be a problem. It may leave "things" in a bad state, particularly if file manipulation/creation is involved.
So, as a general rule: whether or not the OS supports this feature, it's best to let the current run finish and THEN save the updated script. You can make your changes already, just don't save them yet.
It's not like the old days of DOS, where you only had one screen in front of you (one DOS screen), so you can't claim you have to wait for the run to complete before you can open the file again.
No, they are not, and there are many good reasons for that.
One of the things you should keep in mind is that a shell is not an interpreter, even if there are some similarities. Shells are designed to work with a stream of commands, whether from a TTY, a pipe, a FIFO, or even a socket.
The shell reads from its source line by line until EOF is returned by the kernel.
Most shells have no extra support for interpreting files; they work with a file the same way they would work with a terminal.
In fact this is considered a nice feature, because it lets you do interesting things like this: How do Linux binary installers (.bin, .sh) work?
You can take a binary file and prepend a shell script to it. You can't do this with an interpreter, because it parses the whole file (or at least it would try to, and fail). A shell just interprets it line by line and doesn't care about the "garbage" at the end of the file. You just have to make sure the execution of the script terminates before it reaches the binary part.
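As a rough illustration of that trick, here is a minimal self-extracting-installer sketch (the __ARCHIVE__ marker and payload name are made up for illustration, not from any particular tool):
#!/bin/sh
# Shell header first; a binary payload is appended after the __ARCHIVE__ marker.
payload_line=$(awk '/^__ARCHIVE__$/ {print NR + 1; exit}' "$0")
tail -n +"$payload_line" "$0" | tar xzf -    # extract the tarball appended below
exit 0                                       # stop before the shell reaches the binary part
__ARCHIVE__
[binary payload appended here, e.g. with: cat payload.tar.gz >> installer.sh]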
I have a UniVerse (Rocket U2) system, and want to be able to call certain UniVerse/TCL commands from a shell script. However whenever I run the uv binary it seems to stop the execution of the rest of the shell script.
For Example if I run:
/u2/uv/bin/uv
It starts a UniVerse session. The next line of the script (RUNPY run_tests.py) is meant to be executed in the TCL environment, but is never input to TCL. I have tried passing string parameters to the uv binary to be executed, but they don't appear to do anything.
Is there a way to call UniVerse/TCL commands from a UNIX/Shell environment?
You can type this manually or put it into a shell script. I have not run into any issues with this paradigm, but your choice of shell could theoretically affect this. You certainly want to either be in the directory of the account you want to execute it in, or cd to it in the script.
/u2/uv/bin/uv <<start
RUNPY run_tests.py
start
Good Luck.
One thing to watch out for is if you have a LOGIN paragraph or something else that runs automatically to start your application (which is really common), then you need to find a way to bypass this for non-interactive users.
https://groups.google.com/forum/#!topic/comp.databases.pick/B2hzuXq3X9A mentions
IF OCONV(#TTY,'MCU')='PHANTOM' THEN ABORT
In UD, I kick off scripts from unix as a phantom to a) capture the log output in PH and b) end the process if extra input is requested, rather than hanging around. In UD that's
$echo "PHANTOM COUNT VOC" | udt
UniData Release 8.1 Build: (2008)
Current UniData home is /unidata/ud81/.
Current working directory is /usr/ud81/demo
:PHANTOM COUNT VOC
PHANTOM process 18743448 started.
COMO file is '_PH_/dsiroot45172_18743448'.
:
Critical abort condition found.
$cat _PH_/dsiroot45172_18743448
COUNT VOC
14670 record(s) counted.
PHANTOM process 18743448 has completed.
Van Amburg's answer is the most correct for handling multiple lines of input. The variant I used: instead of the << heredoc for multi-line input, I just put quotes around a single command (single and double quotes both work):
/u2/uv/bin/uv "RUNPY run_tests.py"
I spent some time building this handy bash script that accepts input via stdin. I got the idea from the top answer to this question: Pipe input into a script
However, I did something really dumb. I typed the following into the terminal:
echo '{"test": 1}' > ./myscript.sh
I meant to pipe it | to my script instead of redirecting > the output of echo.
Up until this point in my life, I never accidentally clobbered any file in this manner. I'm honestly surprised that it took me until today to make this mistake. :D
At any rate, now I've made myself paranoid that I'll do this again. Aside from marking the script as read-only or making backup copies of it, is there anything else I can do to protect myself? Is it a bad practice in the first place to write a script that accepts input from stdin?
Yes, there is one thing you can do -- check your scripts into a source-code-control repository (git, svn, etc).
bash scripts are code, and any non-trivial code you write should be checked in to source-code-control (and changes committed regularly) so that when something like this happens, you can just restore the most-recently-committed version of the file and continue onwards.
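For example (the repository location and file name here are just illustrative):
cd ~/scripts                          # wherever the script lives
git init                              # one-time setup
git add myscript.sh
git commit -m "last known good version"
# ...later, after accidentally clobbering the file...
git checkout -- myscript.sh           # restore the last committed version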
This is a very open-ended question, but I usually put scripts in a global bin folder (~/.bin or so). This lets me invoke them as myscript rather than path/to/myscript.sh, so if I accidentally used > instead of |, it'd just create a file by that name in the current directory - which is virtually never ~/.bin.
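If you want to try that layout, the setup is something like this (the directory name and shell rc file are assumptions; adjust to taste):
mkdir -p ~/.bin
cp myscript.sh ~/.bin/myscript               # drop the .sh so it is invoked as a plain command
chmod +x ~/.bin/myscript
echo 'export PATH="$HOME/.bin:$PATH"' >> ~/.bashrc   # make it reachable from any directory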
Say you have a shell command like
cat file1 | ./my_script
Is there any way, from inside the 'my_script' command, to detect the command that was run as the pipe's input (in the above example, cat file1)?
I've been digging into it and so far I've not found any possibilities.
I've been unable to find any environment variable set in the process space of the second command that records the full command line; the command data that my_script sees (via /proc etc.) is just ./my_script and doesn't include any information about it being run as part of a pipe. Even checking the process list from inside the second command doesn't seem to provide any data, since the first process seems to exit before the second starts.
The best information I've been able to find suggests that in bash, in some cases, you can get the exit codes of the processes in the pipe via PIPESTATUS; unfortunately nothing similar seems to exist for the names of the commands/files in the pipe. My research seems to say it's impossible to do in a generic manner (I can't control how people decide to run my_script, so I can't force third-party pipe-replacement tools to be used instead of the shell's built-in pipes), but at the same time it doesn't seem like it should be impossible, since the shell has the full command line present when the command is run.
(update adding in later information following on from comments below)
I am on Linux.
I've investigated the /proc/$$/fd data and it almost does the job. If the first command doesn't exit for several seconds while piping data to the second command, you can read /proc/$$/fd/0 to see the pipe:[PIPEID] value that it symlinks to. That can then be used to search through the /proc/*/fd/ data of the other running processes to find another process with the same PIPEID open, which gives you the first process's pid.
However, in most real-world tests of piping I've done, you can't trust that the first command will stay running long enough for the second one to locate its pipe fd in /proc before it exits (which removes the proc data, preventing it from being read). So I can't rely on this method returning any information.
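For what it's worth, a rough sketch of that /proc walk (Linux only, subject to the race condition described above, and requiring permission to read the other process's fd entries):
#!/bin/bash
# Inside my_script: try to find the process feeding our stdin pipe.
my_pipe=$(readlink /proc/$$/fd/0)                    # e.g. "pipe:[123456]"
case $my_pipe in
  pipe:*)
    for fd in /proc/[0-9]*/fd/*; do
      pid=${fd#/proc/}; pid=${pid%%/*}
      [ "$pid" = "$$" ] && continue                  # skip our own fds
      if [ "$(readlink "$fd" 2>/dev/null)" = "$my_pipe" ]; then
        echo "other end of the pipe: PID $pid ($(tr '\0' ' ' < "/proc/$pid/cmdline"))"
      fi
    done
    ;;
  *) echo "stdin is not a pipe" ;;
esac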
I am creating a new CLI application, where I want to get some sensitive input from the user. Since this input can be quite descriptive (free-form text) and the information is a bit sensitive, I wanted to allow the user to enter a command like this from the app:
app new entry
after which I want to provide the user with a VIM session where he can write this descriptive input; when he exits the VIM session, it will be captured by my script and used for further processing.
Can someone tell me a way (probably some hidden VIM feature, since I am always amazed by those) to do this without creating any temporary file? As explained in a comment below, I would prefer a somewhat in-memory file, since the information can be a bit sensitive; I would like to process it first in my script and only then write it to disk, in encrypted form.
Git actually does this: when you type git commit, a new Vim instance is created and a temporary file is used in this instance. In that file, you type your commit message
Once Vim gets closed again, the content of the temporary file is read and used by Git. Afterwards, the temporary file gets deleted again.
So, to get what you want, you need the following steps:
create a unique temporary file (Create a tempfile without opening it in Ruby)
open Vim on that file (Ruby, Difference between exec, system and %x() or Backticks)
wait until Vim gets terminated again (also contained in the above SO thread)
read the temporary file (How can I read a file with Ruby?)
delete the temporary file (Deleting files in ruby)
That's it.
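The links above are Ruby-specific; purely as an illustration of the same flow, here is what it looks like sketched in plain shell (the encrypt-and-store step at the end is a hypothetical placeholder for your own processing):
#!/bin/sh
# Git-commit-style flow: edit a temp file in Vim, read it back, then remove it.
tmpfile=$(mktemp /tmp/app-entry.XXXXXX)    # 1. create a unique temporary file
vim "$tmpfile"                             # 2./3. open Vim on it and wait for it to exit
entry=$(cat "$tmpfile")                    # 4. read what the user wrote
rm -f "$tmpfile"                           # 5. delete the temporary file
printf '%s\n' "$entry" | encrypt-and-store # hypothetical further processing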
You can make the shell create file descriptors attached to your function and make vim write there, like this (but you need to split the script into two parts: one that calls vim and one that processes its input):
# First script
…
vim --cmd $'aug ScriptForbidReading\nau BufReadCmd /proc/self/fd/* :' --cmd 'aug END' >(second-script)
Notes:
second-script might actually be a function defined in the first script (at least in zsh). This also requires bash or zsh (tested only on the latter).
Requires *nix, maybe won’t work on some OSes considered to be *nix.
BufReadCmd is needed because vim hangs when trying to read write-only descriptor.
It is suggested that you set the filetype (if needed) right away, without relying on ftdetect plugins, in case your script is not the only one using this method.
Zsh will wait for second-script to finish, so you may continue your script right after the vim command if the information from second-script is not needed there (it would be hard to get it back from the subshell anyway).
The second script will be launched from a subshell. Thus no variable modifications will be seen in the code running after the vim call.
The second script will receive on its standard input whatever vim saves. The parent's standard input is not directly accessible, but using </dev/tty will probably work.
This is for zsh/bash script. Nothing will really prevent you from using the same idea in ruby (it is likely more convenient and does not require splitting into two scripts), but I do not know ruby enough to say how one can create file descriptors in ruby.
Using vim for this seems like overkill.
The highline ruby gem might do what you need:
require 'highline'
irb> pw = HighLine.new.ask('info: ') {|q| q.echo = false }
info:
=> "abc"
The user's text is not displayed when you set echo to false.
This is also safer than creating a file and then deleting it, because then you'd have to ensure that the delete was secure (overwriting the file several times with random data so it can't be recovered; see the shred or srm utilities).
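For reference, GNU shred can do the overwrite-and-remove in one step (assuming it is installed; the path is illustrative):
shred -u -n 3 /path/to/tempfile    # overwrite 3 times, then remove the file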
I wrote a script that's retrieving the currently run command using $BASH_COMMAND. The script is basically doing some logic to figure out current command and file being opened for each tmux session. Everything works great, except when user runs a piped command (i.e. cat file | less), in which case $BASH_COMMAND only seems to store the first command before the pipe. As a result, instead of showing the command as less[file] (which is the actual program that has the file open), the script outputs it as cat[file].
One alternative I tried was relying on history 1 instead of $BASH_COMMAND. There are a couple of issues with this alternative as well. First, it does not auto-expand aliases the way $BASH_COMMAND does, which in some cases could confuse the script (for example, if I tell it to ignore ls but use ll instead (mapped to ls -l), the script will not ignore the command and will process it anyway), and adding extra conditionals for each alias doesn't seem like a clean solution. The second problem is that I'm using HISTIGNORE to filter out some common commands that I still want the script to be aware of; using history would make the script miss the last command unless it's tracked by history.
I also tried using ${#PIPESTATUS[@]} to see if the array length is 1 (no pipes) or higher (pipes used, in which case I would retrieve the history instead), but it also seems to only ever be aware of one command.
Is anyone aware of other alternatives that could work for me (such as another variable that would store $BASH_COMMAND for the other subcalls that are to be executed after the current subcall is complete, or some way to be aware if the pipe was used in the last command)?
I think that you will need to change your implementation a bit and use the "history" command to get this to work. Also, use the "alias" command to check all of the configured aliases, and the "which" command to check whether a command is actually stored in any PATH dir. Good luck.
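A rough, untested sketch of that idea (variable names are illustrative, and it assumes the script runs in a context where the interactive history and aliases are available):
# Full last command line, including anything after a pipe.
last_cmd=$(HISTTIMEFORMAT= history 1 | sed 's/^ *[0-9]* *//')

# history does not expand aliases, so expand a leading alias by hand.
first_word=${last_cmd%% *}
expansion=$(alias "$first_word" 2>/dev/null | sed "s/^alias $first_word='//; s/'\$//")
[ -n "$expansion" ] && last_cmd="$expansion${last_cmd#"$first_word"}"

# Detect whether a pipe was used in the last command.
case $last_cmd in
  *\|*) echo "pipeline: $last_cmd" ;;
  *)    echo "simple command: $last_cmd" ;;
esac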