boot2docker commands work in the shell, but not in a script - macOS

New to Docker here. I have a series of commands which work just fine if I fire them off in the shell, but not if I put them in a script.
boot2docker destroy
boot2docker init
boot2docker start
boot2docker ssh &
host=$(boot2docker ip 2> /dev/null)
# everything works fine up to here
ssh -i $HOME/.ssh/id_boot2docker -o "StrictHostKeyChecking no" -o "UserKnownHostsFile /dev/null" docker@$host docker run --net=host my-image
If I don't try to run a command via ssh, everything works. Viz:
ssh -i $HOME/.ssh/id_boot2docker -o "StrictHostKeyChecking no" -o "UserKnownHostsFile /dev/null" docker@$host
This brings up the docker ssh prompt. But if I do run the command via the script (and this is what I actually need to do) I get the error message:
level="fatal" msg="Post http:///var/run/docker.sock/v1.16/containers/create: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?"
Again, if I just enter that last command, or the whole litany of commands, into the shell, no problems. How can I make this script work?
Thanks
Update
If I put that last line in its own script and run the two scripts in sequence from the command line, everything is fine (the same as just typing all the commands in sequence). If I chain the scripts, or create a third to run them both in sequence, I get the error. What am I to make of this?
Thanks

host probably isn't defined when you try to use it. You can probably confirm that by echoing its value before running ssh. The easiest solution would be to put these two lines together in the same file:
host=$(boot2docker ip 2> /dev/null)
ssh -i $HOME/.ssh/id_boot2docker -o "StrictHostKeyChecking no" -o "UserKnownHostsFile /dev/null" docker@$host docker run --net=host my-image
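For instance, a minimal combined script (a sketch, assuming the same key path and image name as above):
#!/bin/bash
# run everything in one shell, so $host is defined when ssh uses it
boot2docker destroy
boot2docker init
boot2docker start
host=$(boot2docker ip 2> /dev/null)
echo "host is: $host"   # sanity check before connecting
ssh -i $HOME/.ssh/id_boot2docker \
    -o "StrictHostKeyChecking no" -o "UserKnownHostsFile /dev/null" \
    docker@$host docker run --net=host my-image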

Related

How to fix "Pseudo-terminal will not be allocated because stdin is not a terminal" in Alpine Linux?

I am writing a PHP program that runs a variety of shell commands. Sometimes it needs to call su, and by design, I want that to prompt for an elevated privilege password. Using passthru() in PHP works fine for this.
I have chosen to write functional tests only for my program, since it is dependent on ssh, su and other shell commands. I thus want to run the real thing inside PHPUnit to see if it is working as I expect.
Since this requires a SSH server and user account configurations, I have set up the tests to run in Docker. This approach looks like it will be fine - the image builds, when it runs it calls PHPUnit, and then exits. I expect there will be a way I can return a result via Docker to a calling system, such as Travis CI.
I have selected Alpine Linux as my base Docker image, but I am having problems with terminal allocation. I originally thought PHPUnit was getting in the way (see the original version of this question) but I have now narrowed it down to SSH, either run by PHP or even just at the console.
This works fine:
su -c whoami
However, this does not:
ssh localhost -t 'su -c whoami'
I get:
su: must be suid to work properly
Connection to localhost closed.
Note that ssh is set up with passwordless access to localhost (for testing purposes) so the only password I am expecting to enter is the su one.
The manual for ssh suggests that -f is helpful for cases where passwords are to be asked for (the switch "requests ssh to go to background just before command execution"), so I try this:
ssh localhost -t -f 'su -c whoami'
And I get:
Pseudo-terminal will not be allocated because stdin is not a terminal.
4275760bde94:~$ su: must be suid to work properly
Ah, another error! OK, so I've tried these ideas, particularly forcing a terminal:
ssh localhost -tt -f 'su -c whoami'
The double-t still emits complaints about SUID, and still does not work.
However, if I do this on my Ubuntu dev machine (also configured with passwordless public-key access to itself) it works fine:
$ ssh localhost -t 'su -c whoami'
Password:
root
Connection to localhost closed.
That makes it look like Alpine or BusyBox's OpenSSH is at fault. How can I dig into this further?
One solution is just to swap to another base distro, and Ubuntu would surely work fine, but that would massively inflate my Docker image size (currently at a very nice 68M total). So I'd like to persist with Alpine for a bit if I can.
Unlikely to be Docker
I did wonder whether Docker might be preventing a terminal from being created or attached, but quickly discounted this, since I can get an interactive shell with docker exec -it container_name sh. Furthermore, I can do the ssh and then su in two separate commands inside a Docker shell just fine.
Bash doesn't help
I notice that Bash is available in Alpine, but this has not helped either, which surprises me:
/ $ apk add bash
bash-4.3$ bash
bash-4.3$ su nonpriv
bash-4.3$ ssh localhost whoami
nonpriv
bash-4.3$ ssh localhost 'su -c whoami'
su: must be suid to work properly
bash-4.3$ ssh localhost 'su -s /bin/bash -c whoami'
su: must be suid to work properly
bash-4.3$ ssh -t localhost 'su -s /bin/bash -c whoami'
su: must be suid to work properly
Connection to localhost closed.
bash-4.3$ ssh -tt localhost 'su -s /bin/bash -c whoami'
su: must be suid to work properly
Connection to localhost closed.
bash-4.3$ ssh -tf localhost 'su -s /bin/bash -c whoami'
Pseudo-terminal will not be allocated because stdin is not a terminal.
bash-4.3$ su: must be suid to work properly
bash-4.3$ ssh -ttf localhost 'su -s /bin/bash -c whoami'
bash-4.3$ su: must be suid to work properly
Connection to localhost closed.
Trying shell interpolation
I have found this nearly works:
/ $ ssh -t localhost "$( su -c whoami )"
sh: Password:: not found
sh: root: not found
Connection to localhost closed.
This uses the standard Alpine sh shell and the "$()" construct, found at the aforementioned link, which I do not fully understand. It outputs Password:, which is the prompt from su, and then waits for the password. When the password is entered, the whoami is run, which prints root.
So it looks like it is running the command, but I am not sure whether it runs the whoami immediately and then passes the output to ssh (not what I want), or whether it gets a remote shell to localhost and then runs it there (which is what I intend).
In any case, it is trying to run stdout lines as if they were commands themselves. How can I get it just to print the output run via SSH?
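To see which side evaluates that construct, a quick hedged experiment helps: the $( ) is expanded by the local shell before ssh even starts, and ssh then treats its output as the remote command.
# the local shell expands the substitution first, so this...
ssh localhost "$( echo 'echo ran-on-remote' )"
# ...is equivalent to: ssh localhost 'echo ran-on-remote'
So in the command above, su -c whoami runs locally (which is where the Password: prompt comes from), and its output lines are then executed on the remote side, which explains the sh: ... not found errors.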
I guessed in my question that this would work in an Ubuntu container, and that proved to be correct. Interestingly I still get the "must be run from a terminal" error with this command:
su -c whoami
But not here, where a terminal is allocated explicitly in an SSH passwordless command to self:
ssh -t localhost 'su -c whoami'
Unfortunately the new OS has bumped up my 68M image to 430M, urgh! I should therefore be most happy to receive new answers that get this working on Alpine.
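As a starting point for digging further, one hedged diagnostic: BusyBox applets such as su can only elevate privileges if the BusyBox binary itself is setuid root, so inspecting the binaries usually explains the error:
# check what su actually is and whether it can be setuid
ls -l $(which su)    # on Alpine this is typically a symlink into BusyBox
ls -l /bin/busybox   # no 's' bit here means su cannot become root
If the setuid bit is missing, that would account for "su: must be suid to work properly" regardless of how the pseudo-terminal is allocated.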

BASH instead of CSH while running Commands on a Remote Linux Server over SSH

I would like to run a command on a remote server using ssh, under bash, while my default session is csh.
Minimal example (the true command is more complex and is generated by my IDE's remote debugger):
ssh hostname 'ls | head'
I don't have admin privileges. Trying chsh -s /bin/bash results in the error chsh: cannot lock /etc/passwd; try again later.
I tried adding the following to .cshrc:
setenv SHELL /bin/bash
exec /bin/bash --login
but it freezes the console when sending a command through ssh (while regular interactive ssh works).
Any idea how to solve that?
NOTE: I must have a solution that configures the host, because I don't have access to the ssh command, which is generated automatically by the debugger of my IDE. In the IDE I can only set the host name and port number. (EDIT) Therefore solutions like ssh hostname '/bin/bash -c "ls | head"' won't apply.
EDIT2:
Actual command shown by IDE (again, I can't edit it):
ssh://username@localhost:2213/home/lab/username/anaconda2/envs/tf_011b/bin/python -u /specific/a/home/cc/cs/username/.pycharm_helpers/pydev/pydevd.py --multiproc --qt-support --client '0.0.0.0' --port 41823 --file /home/lab/username/remote_py/nlteach/show_attend_and_tell/train_saat_classifier.py --train_dir=/home/lab/username/nlteach/output/train/d=cub/imSD=11%imSP=rnd%tcSP=cvpr16/CSat/res50%lr0_02LrDTexpLrDc0_938OrmspWDc0/emb=512%ldTrn=0%nU=512%noHid=1%lr=0_02%lrDT=fix%lrDc=1%o=rmsp/
I am not sure why, but it works on a bash-enabled server, while it fails on the csh host.
Thanks!
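Since only the host can be configured here, one hedged possibility is to keep the exec in ~/.cshrc but guard it so it only fires for interactive logins; the freeze likely happens because exec /bin/bash --login also replaces the shell for non-interactive ssh commands, leaving bash waiting on stdin:
# ~/.cshrc -- sketch: $prompt is only set in interactive csh sessions
if ( $?prompt ) then
    setenv SHELL /bin/bash
    exec /bin/bash --login
endif
Note that this stops the freeze but means non-interactive ssh commands still run under csh, so it only helps if those commands are shell-agnostic.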
Invoke bash on the remote side, telling it what commands to run:
ssh hostname '/bin/bash -c "ls | head"'
If the command is too complicated (e.g. because of quotation-mark escaping), then write your commands to a script, copy the script over, and run the script:
scp script.bash hostname:/tmp/
ssh hostname '/bin/bash /tmp/script.bash'
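If copying the script over is inconvenient, a hedged alternative is to start bash remotely and feed it the script on standard input; -s tells bash to read commands from stdin, so nothing needs to exist on the remote side:
ssh hostname '/bin/bash -s' < script.bash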

Run an ssh script into an Ubuntu instance, do something, and stay at the remote prompt on exit

I am running a very simple script that will ssh into a remote Ubuntu instance, move around the directory structure, and execute a few things; then I want the prompt to stay in Ubuntu. When the script ends, it ends back at the local prompt. How do I modify the script so that it finishes at the remote prompt?
local$ ssh -i xxx.pem ubuntu@xxx.ap-region.compute.amazonaws.com \
"cd virtualenv; ls -lh;"
There are two things you need to add to your command line:
The bash command at the end starts the bash shell (you can start any other shell you want).
The -t switch makes sure the remote server allocates a TTY, so your shell works as expected:
local$ ssh -t -i xxx.pem ubuntu@xxx.ap-region.compute.amazonaws.com \
"cd virtualenv; ls -lh; bash"

How to use multiple terminals in a docker container?

I know it is weird to use multiple terminals in a docker container.
My purpose is to test some commands and finally build a Dockerfile from these commands.
So I need to use multiple terminals, say, two: one runs some commands, the other is used to test those commands.
If I were using a real machine, I could ssh into it to use multiple terminals, but how can I do this in docker?
Maybe the solution is to run docker with CMD /bin/bash, and use screen inside that bash?
EDIT
In my situation, one shell runs a server program and the other runs a client program to test the server. Because the server and client programs are compiled together, the default link method in docker is not suitable.
The docker way would be to run the server in one container and the client in another. You can use links to make the server visible from the client, and volumes to make the files on the server available from the client (see the sketch at the end of this answer). If you really want to have two terminals into the same container, there is nothing stopping you from using ssh. I tested this docker server:
from: https://docs.docker.com/examples/running_ssh_service/
# sshd
#
# VERSION 0.0.1
FROM ubuntu:14.04
MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
You need to base this image on your image, or the other way around, to get all the functionality together. After you have built and started your container, you can get its IP using
docker inspect <id or name of container>
From the docker host you can now ssh in as root with the password from the Dockerfile. Now you can spawn as many ssh clients as you want. I tested with:
while true; do echo "test" >> tmpfile; sleep 1; done
from one client and
tail -f tmpfile
from another
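As for the links approach mentioned at the top of this answer, a minimal sketch with hypothetical image and container names:
# start the server container, then link the client to it
docker run -d --name server my-server-image
docker run -it --link server:server my-client-image sh
Inside the client container, the server should then be reachable under the hostname server, via the alias docker sets up.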
If I understand the problem correctly, you can use nsenter.
Assuming you have a running container named nginx (with nginx started), run the following command from the host:
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx`
This will start a program (by default $SHELL) inside the namespaces of the given PID.
You can run more than one shell by issuing the command more than once (from the host). Then you can run any binary that exists in the given container, or tail, rm, etc. its files; for example, tail the log file of nginx, as shown below.
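A concrete version of that tail example (a sketch; the log path assumes the stock nginx layout inside the container):
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx` tail -f /var/log/nginx/access.log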
Further information can be found in the nsenter man page.
If you want to just play around, you can run sshd in your image and explore it the way you are used to:
docker run -d -p 22 your_image /usr/sbin/sshd -D
When you are done with your explorations, you can proceed to create your Dockerfile as usual.

How to run a command in background using ssh and detach the session

I'm currently trying to ssh into a remote machine and run a script, then leave the node with the script running. Below is my script. However, when it runs, the script is successfully run on the machine but the ssh session hangs. What's the problem?
ssh -x $username@$node 'rm -rf statuslist
mkdir statuslist
chmod u+x ~/monitor/concat.sh
chmod u+x ~/monitor/script.sh
nohup ./monitor/concat.sh &
exit;'
There are some situations when you want to execute/start a script on a remote machine/server (which will terminate automatically) and disconnect from the server.
For example: a script running on a box which, when executed,
takes a model and copies it to a remote server
creates a script for running a simulation with the model and pushes it to the server
starts the script on the server and disconnects
The script thus started runs the simulation on the server and, once completed (it will take days), copies the results back to the client.
I would use the following command:
ssh remoteserver 'nohup /path/to/script </dev/null >nohup.out 2>&1 &'
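What each piece contributes (for clarity):
# nohup           - keeps the script alive after the ssh session ends
# </dev/null      - detaches stdin, so ssh has nothing to wait on
# >nohup.out 2>&1 - redirects stdout and stderr away from the ssh channel
# &               - backgrounds the script so the remote command returns at once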
@CKeven, you may put all those commands in one script, push it to the remote server, and initiate it as follows:
echo '#!/bin/bash
rm -rf statuslist
mkdir statuslist
chmod u+x ~/monitor/concat.sh
chmod u+x ~/monitor/script.sh
nohup ./monitor/concat.sh &
' > script.sh
chmod u+x script.sh
rsync -azvp script.sh remotehost:/tmp
ssh remotehost '/tmp/script.sh </dev/null >nohup.out 2>&1 &'
Hope this works ;-)
Edit:
You can also use
ssh user@host 'screen -S SessionName -d -m "/path/to/executable"'
This creates a detached screen session and runs the target command within it.
What do you think about using screen for this? You could run screen via ssh to start the command (concat.sh) and then you'd be able to return to the screen session if you wanted to monitor it (could be handy, depending on what concat does).
To be more specific, try this:
ssh -t $username@$node screen -dm -S testing ./monitor/concat.sh
You should find that the prompt returns immediately, and that concat.sh is running on the remote machine. I'll explain some of the options:
ssh -t makes a TTY. screen needs this.
screen -dm makes it start in "detached" mode. This is like "background" for your purposes.
-S testing gives your screen session a name. It is optional but recommended.
Now, once you've done this, you can go to the remote machine and run this:
screen -r testing
This will attach you to the screen session which contains your program. From there you can control it, kill it, see its output, and so on. Ctrl-A, then d will detach you from the screen session. screen -ls will list all running sessions.
It could be the standard input stream keeping the session open. Try ssh -n ... or ssh -f ....
For me, only this worked:
screen -dmS name sh my-script.sh
This, of course, depends on screen, and lets you attach later, if you ever want stdin or stdout. Screen will terminate itself when my-script.sh ends.
Below is a more involved solution that took some effort to find, and it really works for me:
#!/usr/bin/bash
theScreenSessionName="test"
theTabNumber="1"
theStuff="date; hostname; cd /usr/local; pwd; /usr/local/bin/top"
echo "this is a test"
ssh -f user@server "/usr/local/bin/screen -x $theScreenSessionName -p $theTabNumber -X stuff \"
$theStuff
\""
It sends the $theStuff list of commands to tab number $theTabNumber of the screen session $theScreenSessionName, created beforehand on 'server' on behalf of 'user'.
Please be aware of the trailing whitespace after
-X stuff \"
which is sent to work around a glitch in the 'stuff' option. The whitespace and $theStuff on the next line are each followed by 'Enter' (^M) keystrokes. Don't miss them!
The "this is a test" message is echoed in the initial terminal, and $theStuff commands are really executed inside the mentioned screen/tab.
