I'm learning about Docker at the moment, and going through the Dockerfile reference, specifically the RUN instruction. There are two forms of RUN - the shell form, which runs the command in a shell, and the exec form, which "does not invoke a command shell" (quoted from the Note section).
If I understood the documentation correctly, my question is: can Docker run a command without a shell, and if so, how?
Note that the answers to Can a command be executed without a shell? don't actually answer the question.
If I understand your question correctly, you're asking how something can be run (specifically in the context of docker) without invoking a command shell.
The way things are run on Linux is usually via the exec family of system calls.
You pass the path to the executable you want to run and the arguments that need to be passed to it, via an execl call for example.
This is actually what your shell (sh, bash, ksh, zsh) does under the hood anyway. You can observe this yourself if you run something like strace -f bash -c "cat /tmp/foo"
In the output of that command you'll see something like this:
execve("/bin/cat", ["cat", "/tmp/foo"], [/* 66 vars */]) = 0
What's really going on is that bash looks up cat in $PATH and finds that cat is an executable binary available at /bin/cat. It then simply invokes it via execve with the correct arguments, as you can see above.
You can trivially write a C program that does the same thing as well.
This is what such a program would look like:
#include <unistd.h>

int main(void) {
    /* Replace this process image with /bin/cat, passing /tmp/foo as its
       argument; the argument list must end with a NULL pointer. */
    execl("/bin/cat", "/bin/cat", "/tmp/foo", (char *)NULL);
    return 0; /* reached only if execl fails */
}
Every language provides its own way of interfacing with these system calls: C does, Python does, and Go, which is what Docker is mostly written in, does as well. A RUN instruction in a Dockerfile likely translates to one of these exec calls when you run docker build. You can run strace -f docker build and then grep for exec calls in the log to see how the magic happens.
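For instance, Python's rough equivalent of the C program above (a minimal sketch using the standard os module) looks like this:

import os

# Replace the current process image with /bin/cat, just like execve would;
# the first list element is argv[0], the rest are the arguments.
os.execv("/bin/cat", ["/bin/cat", "/tmp/foo"])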
The only difference between running something via a shell and running it directly is that you lose out on all the fancy stuff your shell does for you, such as variable expansion, executable searching, and so on.
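You can see exactly this difference between the two RUN forms in a Dockerfile (a small illustration; the base image is arbitrary):

FROM ubuntu
# Shell form: run through /bin/sh -c, so the shell expands $HOME.
RUN echo $HOME
# Exec form: execve'd directly with no shell, so $HOME is passed literally.
RUN ["echo", "$HOME"]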
A program can execute another program without a shell; you just create a new process and load an executable into it, so you don't need the shell for that. The shell is needed for a user to start a program because it is the user interface to the system. Also, a program is not able to run a shell built-in command like cd without a shell, because there's no executable to be found (there are alternative ways, though, but not as simple).
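A sketch of that create-a-process-and-load-an-executable pattern in C, the classic fork-then-exec (the paths are just examples):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* create a new process */
    if (pid == 0) {
        /* child: load the cat executable into the new process */
        execl("/bin/cat", "/bin/cat", "/tmp/foo", (char *)NULL);
        perror("execl");             /* reached only if execl fails */
        exit(1);
    }
    waitpid(pid, NULL, 0);           /* parent: wait for the child to finish */
    return 0;
}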
Very generally: docker run starts a container with its default process, while docker exec lets you run any process you want inside the container.
For example, running the microsoft/iis container with docker run microsoft/iis will run the default process, which is PowerShell.
But you can run cmd with docker exec -i my_container cmd
See this answer for more details.
In cmd, it is possible to use Linux commands with the ubuntu or bash commands, but they are very fickle. In batch, it is also possible to make a VBScript-batch hybrid, which got me thinking, is it possible to make a Bash-batch hybrid? Besides being a tongue-twister, I feel that Bash-batch scripts may be really useful.
What I have tried so far
So far I have tried using the bare bash and ubuntu commands alone, since they switch the normal command prompt to the Ubuntu/Bash shell, but even if you put commands after ubuntu/bash, they don't show or do anything.
After I tried that, I tried using the ubuntu -run command, but like I said earlier, it’s really fickle and inconsistent on what things work and what things don't. It is less inconsistent when you pipe things into it, but it still usually doesn't work.
I looked here since it seemed like it would answer my question and I tried it, but it didn't work since it required another program (I think).
I also looked at this, and I guess it failed miserably, but it's an interesting concept.
What I've gathered from all of my research is that when this comes up, most people think of a file that can be run either as a .bat file or as a .sh shell file, rather than my goal: a file that runs both batch and Bash commands in the same instance.
What I want this for relates to my other question where I am trying to hash a string instead of a file in cmd, and you could do it with a Bash command, but I would still like to keep the file as a batch file.
Sure you can use Bash in batch, assuming it is available. Just use the command bash -c 'cmd', where cmd is the command that you want to run in Bash.
The following batch line pipes Hello to the cat -A command, which prints it including the invisible characters:
echo Hello | bash -c "cat -A"
Compare the output with the result of the version completely written in Bash:
bash -c "echo Hello | cat -A"
They will differ slightly! In the first version, cmd's echo produces the output, including the space before the pipe and a CRLF line ending; in the second, Bash's echo produces just Hello with a plain LF.
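For reference, assuming a default WSL bash, the outputs should look roughly like this (cat -A renders the carriage return as ^M and the line end as $):

C:\> echo Hello | bash -c "cat -A"
Hello ^M$

C:\> bash -c "echo Hello | cat -A"
Hello$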
I'm working on an interactive Ruby script which builds and packages resources. In the middle of the process, I'd like to drop into an interactive shell that is pre-cd'd into a specific working directory and shows an explanatory message (CTRL-D to continue). The interactive bash plus a given initial command is the problematic part.
Per the answer for doing something like this in Bash, given at https://stackoverflow.com/a/36152028, I've tried
system '/bin/bash', '--init-file', '<(echo "cd ~/src/devops; pwd")'
However, bash runs interactively but completely ignores the '<(echo "cd ~/src/devops; pwd")' section.
Interestingly, system '/bin/bash', '--init-file' complains if no argument is given, but with literally any argument bash runs, just with no initial command.
Note that --rcfile instead of --init-file has the same effect.
Change the working directory of the Ruby script first, so that bash inherits the correct working directory. (The original attempt fails because <(...) is bash process-substitution syntax; system with multiple arguments bypasses the shell, so bash receives that literal string as the name of an init file that doesn't exist.)
curr_dir = Dir.pwd
Dir.chdir("#{Dir.home}/src/devops")
system "/bin/bash"
Dir.chdir(curr_dir) # Restore the original working directory if desired
Oh, this is probably far better (you can probably guess how little familiarity I have with Ruby):
system("/bin/bash", :chdir=>"#{Dir.home}/src/devops")
I have a node.js app which is using the child_process.execFile command to run a command-line utility.
I'm worried that it would be possible for a user to run commands locally (an rm -rf / horror scenario comes to mind).
How secure is using execFile for Bash scripts? Any tips to ensure that flags I pass to execFile are escaped by the unix box hosting the server?
Edit
To be more precise, I'm more wondering if the arguments being sent to the file could be interpreted as a command and executed.
The other concern is inside the bash script itself, which is technically outside the scope of this question.
Using child_process.execFile by itself is perfectly safe as long as the user doesn't get to specify the command name.
It does not run the command in a shell (like child_process.exec does), so there is no need to escape anything.
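A minimal sketch of the safe pattern (the utility and flag here are placeholders): each argument is handed to the program as a single argv entry, so shell metacharacters have no effect:

const { execFile } = require('child_process');

// Even a hostile-looking argument is passed verbatim as one argv entry;
// ";", "|" and "$(...)" are never interpreted because no shell is involved.
const userInput = 'foo; rm -rf /';
execFile('/usr/bin/file', ['--brief', userInput], (error, stdout, stderr) => {
  if (error) throw error;
  console.log(stdout);
});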
child_process.execFile will execute commands with the user id of the node process, so it can do anything that user could do, which includes removing all the server files.
It's not a good idea to let the user pass in the command, as you seem to be implying in your question.
You could consider running the script in a sandbox by using chroot, and limiting the commands and what resides on the available file system, but this could get complex in a hurry.
The command you pass will get executed directly via some flavor of exec, so unless what you're trying to execute is a script, it does not need to be escaped in any way.
I am using a scheduler to run a Unix script which starts up my application. The script is in the PATH of the user used by the scheduler, so it can be run from any directory.
My application's log files are created relative to where the script is run from. Unfortunately, the scheduler does not run the script from the folder I had hoped, so the log files do not go to the correct folder.
Is there any way I can get the script to run as though it had been run from a specified folder, e.g. ./ScriptName.sh Working_Folder | Run_Folder?
Note: I cannot change the script
If your scheduler runs your tasks using a shell (which it probably does), you can use { cd /log/dir; script; } directly as the command.
If not, you need to use a wrapper script as stated by @Gilles, but I would do:
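For example, if the scheduler is cron (which hands the command line to sh), the crontab entry could look like this (the schedule and paths are illustrative):

0 3 * * * cd /log/dir && /path/to/ScriptName.sh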
#!/bin/sh
cd /log/dir
exec /path/to/script "$@"
to save a little memory. The extra exec will make sure only the script interpreter is in memory instead of both (sh and the script interpreter).
If you can't change the script, you'll have to make the scheduler run a different command, not the script directly. For example, make the scheduler run a simple wrapper script.
#!/bin/sh
cd /desired/directory/for/log/files
/path/to/script "$@"
Bash commands are available from an interactive tclsh session. E.g. in a tclsh session you can have
% ls
instead of
$ exec ls
However, you can't have a Tcl script which calls bash commands directly (i.e. without exec).
How can I make tclsh recognize bash commands while interpreting Tcl script files, just like it does in an interactive session?
I guess there is some Tcl package (or something like that) which is loaded automatically when launching an interactive session to support direct calls of bash commands. How can I load it manually in Tcl script files?
If you want to have specific utilities available in your scripts, write bridging procedures:
proc ls args {
    # Resolve the real ls executable and run it with any given arguments.
    exec {*}[auto_execok ls] {*}$args
}
That will even work (with obvious adaptation) for most shell builtins or on Windows. (To be fair, you usually don't want to use an external ls; the internal glob command usually suffices, sometimes with extra help from some file subcommands.) Some commands need a little more work (e.g., redirecting input so it comes from the terminal, with an extra <#stdin or </dev/tty; that's needed for stty on some platforms) but that works reasonably well.
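For instance, instead of bridging to an external ls, a script can usually use glob directly (a small sketch):

# List files and directories in the current directory, sorted, one per line.
set entries [lsort [glob -nocomplain -types {f d} *]]
puts [join $entries \n]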
However, if what you're asking for is to have arbitrary execution of external programs without any extra code to mark that they are external, that's considered to be against the ethos of Tcl. The issue is that it makes the code quite a lot harder to maintain; it's not obvious that you're doing an expensive call-out instead of using something (relatively) cheap that's internal. Putting in the exec in that case isn't that onerous…
What's going on here is that the unknown proc gets invoked when you type a command like ls that is not an existing Tcl command. By default, unknown checks whether the command was invoked from an interactive session at the top level (not indirectly in a proc body), and if so it looks for an executable of that name somewhere on the PATH. You can get something like this in scripts by writing your own proc unknown.
For a good start on this, examine the output of
info body unknown
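A stripped-down sketch of such an override (the real unknown does considerably more, so treat this as illustrative only):

# Fall back to an external executable for any unrecognised command name.
proc unknown {name args} {
    set cmd [auto_execok $name]
    if {$cmd ne ""} {
        # Wire the command to the terminal so interactive programs work.
        return [exec {*}$cmd {*}$args <@stdin >@stdout 2>@stderr]
    }
    error "invalid command name \"$name\""
}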
One thing you should know is that ls is not a Bash command; it's a standalone utility. The clue for how tclsh runs such utilities is right there in its name: sh means "shell". So tclsh is the rough equivalent of Bash, in that Bash is also a shell. But Tcl != tclsh, so in a Tcl script you have to use exec.