Take, for instance, how it's used in these examples:
shell out to bundle from inside a command invoked by bundle exec
or
shell out to a Ruby command that is not part of your current bundle,
http://bundler.io/man/bundle-exec.1.html
or
i'm shelling out to the heroku command in the rake task
https://github.com/sstephenson/rbenv/issues/400
It means executing a subprocess using backticks (as in `command`), a call to system, or other similar methods. These execute the command in a sub-shell, hence the name.
You can find a lot more details in this answer: https://stackoverflow.com/a/18623297/29470
Spawning a pipeline of connected programs via an intermediate shell —
a.k.a. “shelling out”
http://julialang.org/blog/2012/03/shelling-out-sucks/
And the related reddit comment thread: http://www.reddit.com/r/programming/comments/1bwbyf/shelling_out_sucks/
So, from what I can gather, it means, in broad terms, "going out from the context of the executing program to the surrounding program or execution environment". Usually you go out to the Unix shell, hence the term "shell out".
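Concretely, a system-style call usually forks a child process and hands the whole command string to an intermediate shell. A rough plain-shell sketch of what, say, Ruby's backticks do with a pipeline (the pipeline itself is just an arbitrary example) is:

/bin/sh -c 'ls -l | wc -l'   # the intermediate shell parses and runs the pipeline

That intermediate /bin/sh is the "shell" you are going out to.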
Related
I am just trying to understand the BitBake build system.
I have a question about how shell functions/tasks get executed.
I am going through the documentation here: https://docs.yoctoproject.org/bitbake/
In one part, the documentation says shell scripts are executed by /bin/sh.
In another part, it says "BitBake writes a shell script to ${T}/run.do_taskname.pid and then executes the script".
What is run.do_taskname.pid? What exactly does it do? What exactly happens when BitBake encounters a shell script?
Thanks in advance
I am attempting to write an expect script, which executes/runs another shell script. This shell script configures an emulator, so the expect script is intended to automatically configure the emulator by sending back the appropriate data. However, when I wrote exec followed by the name of the shell script in my expect script, nothing happened. The console just sits and waits. Entering strings and whatnot does not appease the script. Failure to launch. DOA... I read from other posts that using exec is not a good fit when interacting with the subprogram is necessary.
Any advice for how I can execute the shell script within the expect script then?
Thanks!
If you want to interact with the shell script, you need to spawn it, then expect to see patterns and send responses.
If you're brand new to expect, check out the book "Exploring Expect" by Don Libes, the author of expect.
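A minimal sketch of that pattern, runnable from a shell via expect -c (assuming expect is installed; the script name, prompt text, and response below are all made up for illustration):

expect -c '
    spawn ./configure_emulator.sh
    expect "Enter IP address:"   ;# wait for the configuration prompt
    send "10.0.0.5\r"            ;# answer it
    expect eof                   ;# wait for the script to finish
'

Unlike exec, spawn attaches expect to the script through a pseudo-terminal, so the expect/send pairs can drive it interactively.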
I ask because I recently made a change to a KornShell (ksh) script that was executing. A short while after I saved my changes, the executing process failed. Judging from the error message, it looked as though the running process had seen some -- but not all -- of my changes. This strongly suggests that when a shell script is invoked, the entire script is not read into memory.
If this conclusion is correct, it suggests that one should avoid making changes to scripts that are running.
$ uname -a
SunOS blahblah 5.9 Generic_122300-61 sun4u sparc SUNW,Sun-Fire-15000
No. Shell scripts are read and executed one command at a time: either line by line, or command by command where commands are separated by semicolons, with the exception of compound constructs such as if ... fi blocks, which are read in as a single chunk:
A shell script is a text file containing shell commands. When such a file is used as the first non-option argument when invoking Bash, and neither the -c nor -s option is supplied (see Invoking Bash), Bash reads and executes commands from the file, then exits. This mode of operation creates a non-interactive shell.
You can demonstrate that the shell waits for the fi of an if block to execute commands by typing them manually on the command line.
http://www.gnu.org/software/bash/manual/bashref.html#Executing-Commands
http://www.gnu.org/software/bash/manual/bashref.html#Shell-Scripts
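You can also demonstrate the read-as-you-go behaviour directly. A minimal sketch with bash (which picks up bytes appended past its current read position; exact behaviour varies by shell and buffering):

cat > demo.sh <<'EOF'
echo started
sleep 5
EOF
bash demo.sh &                    # prints "started", then sleeps
echo 'echo appended' >> demo.sh   # modify the script while it runs
wait                              # when the sleep ends, "appended" prints too

This is exactly the failure mode in the question: the running shell saw the edited bytes because it had not yet read that far into the file.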
It's funny that most shells I know of do NOT read the entire content of a script into memory, but run it from disk. Reading the whole file up front would make it safe to change the script while it's running. I don't understand why it's done this way, given that:
scripts are usually very small (and wouldn't take much memory anyway)
at some point, as shown in this thread, people will start making changes to a script that is already running anyway
But, acknowledging this, here's something to think about: if you've decided that a script is not running OK (because you are writing/changing/debugging it), do you care about the rest of that run? You can go ahead and make the changes, save them, and ignore all the output and actions of the current run.
But... sometimes, depending on the script in question, a subsequent run of the same script (modified or not) can become a problem, since the current/previous run went off the rails. It would typically skip some commands, or suddenly jump to parts of the script it shouldn't. And THAT may be a problem. It may leave "things" in a bad state, particularly if file manipulation/creation is involved.
So, as a general rule: whether or not your shell tolerates live edits, it's best to let the current run finish, and THEN save the updated script. You can make the changes already; just don't save them yet.
It's not like the old days of DOS, where you had only one screen in front of you (one DOS screen) and had to wait for a run to complete before you could open the file again.
No, they are not, and there are good reasons for that.
One thing to keep in mind is that a shell is not an interpreter in the usual sense, even if there are some similarities. Shells are designed to work with a stream of commands, whether from a TTY, a pipe, a FIFO, or even a socket.
The shell reads from its input source line by line until the kernel returns EOF.
Most shells have no extra support for interpreting files; they work with a file just as they would work with a terminal.
In fact, this is considered a nice feature, because you can do interesting stuff like this: How do Linux binary installers (.bin, .sh) work?
You can take a binary file and prepend a shell script to it. You can't do this with an interpreter, because it parses the whole file, or at least it would try to and fail. A shell just interprets the file line by line and doesn't care about the garbage at the end. You only have to make sure the execution of the script terminates before it reaches the binary part.
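A minimal sketch of such a self-extracting installer header (the marker name and payload format are made up; the file would be built with something like cat header.sh payload.tar.gz > installer.sh):

#!/bin/sh
# Find the first line after the __PAYLOAD__ marker in this very file.
PAYLOAD_LINE=$(awk '/^__PAYLOAD__$/ { print NR + 1; exit }' "$0")
# Stream everything from that line onward into tar.
tail -n +"$PAYLOAD_LINE" "$0" | tar xzf -
exit 0   # crucial: stop the shell before it reads into the binary data
__PAYLOAD__

The exit 0 is the "make sure execution terminates" part: the shell never reads past it, so the binary bytes after the marker are never parsed.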
I want to run a shell command within CCL, but this command may hang for some reason, so I want to kill all the subprocesses generated by it. How can I do this?
I have tried trivial-shell to run the shell command; when the command doesn't hang, it works well.
I also used the with-timeout macro from trivial-shell to check for the timeout, but it just gives me a timeout-error condition while the shell process is still hanging there. I just want to kill them all and return something.
Thank you all.
As far as I can tell, trivial-shell only provides a synchronous shell call so there's no simple way to terminate ongoing subprocesses.
I suggest calling Clozure Common Lisp's implementation-specific ccl:run-program function with :wait nil to run the jobs asynchronously. You can then call ccl:signal-external-process on the running process to kill it if you need. Documentation here.
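If you do have to stay with a synchronous call, one shell-level workaround (assuming GNU coreutils' timeout is available; the command name here is made up) is to have the command kill itself after a deadline, so the caller always gets control back:

timeout --signal=KILL 30 some-flaky-command   # SIGKILL after 30 seconds

That doesn't give you programmatic control from Lisp the way ccl:run-program does, but it does guarantee that a hung process goes away.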
Bash commands are available from an interactive tclsh session. E.g. in a tclsh session you can have
% ls
instead of
$ exec ls
However, you can't have a Tcl script which calls bash commands directly (i.e. without exec).
How can I make tclsh recognize bash commands while interpreting Tcl script files, just like it does in an interactive session?
I guess there is some Tcl package (or something like that) which is loaded automatically when launching an interactive session to support direct calls of bash commands. How can I load it manually in Tcl script files?
If you want to have specific utilities available in your scripts, write bridging procedures:
proc ls args {
    # Resolve the real executable with auto_execok, then expand both the
    # command and the caller's arguments into the exec call
    exec {*}[auto_execok ls] {*}$args
}
That will even work (with obvious adaptation) for most shell builtins or on Windows. (To be fair, you usually don't want to use an external ls; the internal glob command usually suffices, sometimes with extra help from some file subcommands.) Some commands need a little more work (e.g., redirecting input so it comes from the terminal, with an extra <@stdin or </dev/tty; that's needed for stty on some platforms) but that works reasonably well.
However, if what you're asking for is to have arbitrary execution of external programs without any extra code to mark that they are external, that's considered to be against the ethos of Tcl. The issue is that it makes the code quite a lot harder to maintain; it's not obvious that you're doing an expensive call-out instead of using something (relatively) cheap that's internal. Putting in the exec in that case isn't that onerous…
What's going on here is that the unknown proc gets invoked when you type a command like ls, because that's not an existing Tcl command. By default, unknown checks whether the command was invoked from an interactive session at the top level (not indirectly from within a proc body), and if so, whether the name exists as an executable somewhere on the path. You can get something like this in scripts by writing your own unknown proc.
For a good start on this, examine the output of
info body unknown
One thing you should know is that ls is not a Bash command; it's a standalone utility. The clue to how tclsh runs such utilities is right there in its name: "sh" means "shell". So tclsh is a rough equivalent of Bash, in that Bash is also a shell. But Tcl != tclsh, so in a Tcl script you have to use exec.