Gitlab pipeline: SSH to Windows and execute script - windows

I'm trying to set up a Gitlab pipeline, and one of the steps includes running a .bat script on a Windows Server.
The Windows Server has an SSH daemon installed and configured.
I've tried the following command from a Unix host
sshpass -p <pwd> ssh -o StrictHostKeyChecking=no <user>@<ip> "C:\Temp\test.bat"
and everything is working fine.
Gitlab job will be executed from a custom image as this:
build_and_deploy_on_integrazione:
stage: build
tags:
- maven
image: <custom_image>
script:
- apt-get update -y
- apt-get install -y sshpass
- sshpass -p <pwd> ssh -o StrictHostKeyChecking=no <user>#<ip>
"C:\Temp\test.bat"
- echo "Done"
Just to be sure, I've started a container of the custom image from the command line on the same machine that is hosting the Gitlab Runner instance and executed the steps of the script, and they also run fine.
But when I run the pipeline from Gitlab the bat file is not executed; the only output I see is
Warning: Permanently added '<ip>' (RSA) to the list of known hosts.
and nothing else.
I've checked the SSH daemon log and the connection is established correctly, so the "SSH" part of the script seems to be working, but the script is not executed.
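Not an authoritative fix, but a hedged debugging sketch: running ssh in verbose mode and wrapping the batch file in an explicit cmd.exe /c call can help surface why test.bat produces no output from the runner (WIN_PASSWORD below is a hypothetical masked CI/CD variable standing in for <pwd>):
# sketch: verbose ssh plus an explicit Windows shell for the remote command
sshpass -p "$WIN_PASSWORD" ssh -v -o StrictHostKeyChecking=no <user>@<ip> 'cmd.exe /c "C:\Temp\test.bat"'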

Related

sshpass not executing in bash script

I have a dockerfile (these are the relevant commands):
RUN apk app --update bash openssh sshpass
CMD ["bin/sh", "/home/build/build.sh"]
My dockerfile gets run by this command
docker run --rm -it -v $(pwd):/home <image-name>
and all of the commands within my bash script, which is inside the mounted volume, execute. These commands range from npm installs to using tar to zip up a file, and I want to SFTP that tar.gz file.
I am using sshpass to automate logging in, which I know isn't secure, but I'm not worried about that with this application.
sshpass -p <password> sftp -P <port> username@host << EOF
<command>
<command>
EOF
But the sshpass command is never executed. I've tested my docker run command by appending /bin/sh to it and trying it that way, and it also does not run; the SFTP command by itself does.
And when I say it's never executed, I mean I don't receive an error or anything.
Two possible reasons:
Your apk command is wrong; it should be RUN apk add --update bash openssh sshpass, but I assume that is a typo.
It seems the known-hosts entry is missing; you should check the container logs with docker logs -f <container name>. You also need to add an entry to known_hosts; check the suggested build script below.
Here is a working example that you can try
Dockerfile
FROM alpine
RUN apk add --update bash openssh sshpass
COPY build.sh /home/build/build.sh
CMD ["bin/sh", "/home/build/build.sh"]
build script
#!/bin/bash
echo "adding host to known host"
mkdir -p ~/.ssh
touch ~/.ssh/known_hosts
ssh-keyscan sftp >> ~/.ssh/known_hosts
echo "run command on remote server"
sshpass -p pass sftp foo@sftp << EOF
ls
pwd
EOF
Now build the image, docker build -t ssh-pass .
and finally, the docker-compose for testing the above
version: '3'
services:
  sftp-client:
    image: ssh-pass
    depends_on:
      - sftp
  sftp:
    image: atmoz/sftp
    ports:
      - "2222:22"
    command: foo:pass:1001
so you will be able to connect to the sftp container using docker-compose up.

Execute a sudo command through SSH on a remote server using Git Bash?

I am trying to use a .sh script on Windows 10 (through Git Bash) to restart my nginx server.
This is what I'm trying
$ ssh myname@myserver 'sudo /usr/sbin/nginx -s reload'
sudo: Command not found.
I'm not sure why this happens; I know sudo isn't defined in Git Bash, but shouldn't the command execute on the server? When I ssh in manually and run the same command, it works:
$ ssh myname@myserver
$ myserver:/home/myname[ 51 ] --> sudo /usr/sbin/nginx -s reload
Password:
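A hedged sketch of a possible workaround rather than a confirmed answer: the "Command not found" usually comes from the non-interactive shell that ssh starts on the server not having sudo in its PATH, so giving sudo's full path or forcing a login shell is worth trying. The /usr/bin/sudo path below is an assumption and may differ on the server:
# try the absolute path to sudo; -t allocates a tty so sudo can prompt for a password
ssh -t myname@myserver '/usr/bin/sudo /usr/sbin/nginx -s reload'
# or force a login shell so the server's normal PATH is loaded first
ssh -t myname@myserver 'bash -lc "sudo /usr/sbin/nginx -s reload"'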

boot2docker command works on shell, but not in script

New to docker here. I have a series of commands which, if I fire them off on the shell, work just fine, but if I put them in a script, don't.
boot2docker destroy
boot2docker init
boot2docker start
boot2docker ssh &
host=$(boot2docker ip 2> /dev/null)
# everything works fine up to here
ssh -i $HOME/.ssh/id_boot2docker -o "StrictHostKeyChecking no" -o "UserKnownHostsFile /dev/null" docker@$host docker run --net=host my-image
If I don't try to run a command via ssh, everything works. Viz:
ssh -i $HOME/.ssh/id_boot2docker -o "StrictHostKeyChecking no" -o "UserKnownHostsFile /dev/null" docker@$host
This brings up the docker ssh prompt. But if I do run the command via the script (and this is what I actually need to do) I get the error message:
level="fatal" msg="Post http:///var/run/docker.sock/v1.16/containers/create: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?"
Again, if I just enter that last command, or the whole litany of commands, into the shell, no problems. How can I make this script work?
Thanks
update
If I put that last line in its own script, and run the two scripts in sequence from the command line, everything is fine (same as just typing all the commands in sequence.) If I chain the scripts, or create a third to run them in sequence, I get the error. What am I to make of this?
Thanks
host probably isn't defined when you try to use it. You can probably confirm that by echoing its value before running ssh. The easiest solution would be to put these two lines together in the same file:
host=$(boot2docker ip 2> /dev/null)
ssh -i $HOME/.ssh/id_boot2docker -o "StrictHostKeyChecking no" -o "UserKnownHostsFile /dev/null" docker@$host docker run --net=host my-image
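A small sketch of that check, assuming the same layout as above (the guard is illustrative and not part of the original answer):
host=$(boot2docker ip 2> /dev/null)
# confirm the variable actually holds an address before the ssh call
echo "host is: '$host'"
[ -n "$host" ] || { echo "boot2docker ip returned nothing" >&2; exit 1; }
ssh -i $HOME/.ssh/id_boot2docker -o "StrictHostKeyChecking no" -o "UserKnownHostsFile /dev/null" docker@$host docker run --net=host my-image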

How to use multiple terminals in a docker container?

I know it is weird to use multiple terminals in a docker container.
My purpose is to test some commands and finally build a dockerfile from these commands.
So I need to use multiple terminals, say, two: one runs some commands, the other is used to test those commands.
If I use a real machine, I can ssh into it to get multiple terminals, but in docker, how can I do this?
Maybe the solution is to run docker with CMD /bin/bash, and use screen inside that bash?
EDIT
In my situation, one shell runs a server program and the other runs a client program that tests it. Because the server program and the client program are compiled together, the default link method in docker is not suitable.
The docker way would be to run the server in one container and the client in another. You can use links to make the server visible from the client, and you can use volumes to make the files at the server available from the client (a minimal sketch of this follows the ssh example below). If you really want to have two terminals to the same container, there is nothing stopping you from using ssh. I tested this docker server:
from: https://docs.docker.com/examples/running_ssh_service/
# sshd
#
# VERSION 0.0.1
FROM ubuntu:14.04
MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
You need to base this image on your image, or the other way around, to get all the functionality together. After you have built and started your container, you can get its IP using
docker inspect <id or name of container>
From the docker host you can now ssh in as root with the password from the Dockerfile. Now you can spawn as many ssh clients as you want. I tested with:
while true; do echo "test" >> tmpfile; sleep 1; done
from one client and
tail -f tmpfile
from another
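A minimal sketch of the two-container approach mentioned at the start of this answer, assuming hypothetical my-server-image and my-client-image images and a client binary that takes the server host as an argument:
# start the server in its own container
docker run -d --name server my-server-image
# run the client in a second container; --link makes the server reachable under the hostname "server"
docker run --rm --link server:server my-client-image ./client --host server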
If I understand the problem correctly, you can use nsenter.
Assuming you have a running container named nginx (with nginx started), run the following command from the host:
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx`
This will start a program (by default $SHELL) in the namespaces of the given PID.
You can run more than one shell by issuing it more than once (from the host). Then you can run any binary that exists in the given container, or tail, rm, etc. files. For example, tail the log file of nginx.
Further information can be found in the nsenter man page.
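For instance, tailing nginx's access log from the host could look like the sketch below (the log path assumes the default nginx layout inside the container):
# run tail inside the nginx container's namespaces instead of the default shell
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx` tail -f /var/log/nginx/access.log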
If you want to just play around, you can run sshd in your image and explore it the way you are used to:
docker run -d -p 22 your_image /usr/sbin/sshd -D
When you are done with your explorations, you can proceed to create the Dockerfile as usual.
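A usage sketch for the command above: -p 22 publishes the container's port 22 on a random host port, which docker port can look up before sshing in (the container name and the 49154 port below are illustrative):
# find the host port mapped to the container's port 22
docker port <id or name of container> 22
# then connect through that mapping, e.g. if it reports 0.0.0.0:49154
ssh root@localhost -p 49154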

Jenkins not able to execute ssh script

I have the below ssh script which I am trying to execute from Jenkins; it runs fine when I invoke it from a shell.
#ssh to remote machine
sshpass ssh 10.40.94.36 -l root -o StrictHostKeyChecking=no
#Remove old slave.jar
rm -f slave.jar
#download slave.jar to that machine
wget http://10.40.95.14:8080/jnlpJars/slave.jar
pwd
#make new dir to that machine
mkdir //var//Jenkins
# make slave online
java -jar slave.jar -jnlpUrl http://10.40.95.14:8080/computer/nodeV/slave-agent.jnlp
When I execute this script through a shell it downloads the jar file to the remote machine and also makes a new directory. But when I invoke it via the shell plugin of Jenkins, every command runs separately, so the jar gets downloaded on the master and the directory also gets created on the master.
Also, I am using sshpass for automated password login, which sometimes fails. Is there any other way of doing this?
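A hedged sketch of one way to make every command actually run on the remote machine: feed the whole block to a single ssh invocation as a here-document, so it executes in the remote shell rather than line by line on the Jenkins master (ROOT_PASSWORD is a hypothetical Jenkins credential variable standing in for the real password):
# everything between the EOF markers runs on 10.40.94.36, not on the Jenkins master
sshpass -p "$ROOT_PASSWORD" ssh -o StrictHostKeyChecking=no root@10.40.94.36 << 'EOF'
rm -f slave.jar
wget http://10.40.95.14:8080/jnlpJars/slave.jar
pwd
mkdir -p /var/Jenkins
java -jar slave.jar -jnlpUrl http://10.40.95.14:8080/computer/nodeV/slave-agent.jnlp
EOF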
