sshpass not executing in bash script

I have a Dockerfile (these are the relevant commands):
RUN apk app --update bash openssh sshpass
CMD ["bin/sh", "/home/build/build.sh"]
The resulting image is run with this command:
docker run --rm -it -v $(pwd):/home <image-name>
All of the commands within my bash script that operate on the mounted volume execute fine. They range from npm installs to using tar to zip up a file, and I want to SFTP that tar.gz file.
I am using sshpass to automate logging in, which I know isn't secure, but I'm not worried about that for this application.
sshpass -p <password> sftp -P <port> username@host << EOF
<command>
<command>
EOF
But the sshpass command is never executed. I've tested my docker run command by appending /bin/sh to it and running the script manually, and it also does not run; the SFTP command by itself does.
And when I say it's never executed, I mean I don't receive an error or any output at all.

Two possible reasons:
1. Your apk command is wrong; it should be RUN apk add --update bash openssh sshpass, but I assume that's a typo.
2. The known-hosts entry seems to be missing; check the container logs (docker logs -f). You also need to add an entry to known_hosts; see the suggested build script below.
Here is a working example that you can try
Dockerfile
FROM alpine
RUN apk add --update bash openssh sshpass
COPY build.sh /home/build/build.sh
CMD ["bin/sh", "/home/build/build.sh"]
build script
#!/bin/bash
echo "adding host to known host"
mkdir -p ~/.ssh
touch ~/.ssh/known_hosts
ssh-keyscan sftp >> ~/.ssh/known_hosts
echo "run command on remote server"
sshpass -p pass sftp foo@sftp << EOF
ls
pwd
EOF
Now build the image: docker build -t ssh-pass .
And finally, here is the docker-compose file for testing the above:
version: '3'
services:
  sftp-client:
    image: ssh-pass
    depends_on:
      - sftp
  sftp:
    image: atmoz/sftp
    ports:
      - "2222:22"
    command: foo:pass:1001
so you will be able to connect to the sftp container using docker-compose up.
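If you prefer to skip the keyscan step, a hedged alternative (not part of the original answer) is to disable strict host-key checking for just this connection, which trades host verification for convenience much like sshpass itself:

sshpass -p pass sftp -o StrictHostKeyChecking=no foo@sftp << EOF
ls
pwd
EOF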

Related

GitLab pipeline: SSH to Windows and execute script

I'm trying to set up a GitLab pipeline, and one of the steps includes running a .bat script on a Windows server.
The Windows server has an SSH daemon installed and configured.
I've tried the following command from a Unix host:
sshpass -p <pwd> ssh -o StrictHostKeyChecking=no <user>@<ip> "C:\Temp\test.bat"
and everything is working fine.
The GitLab job is executed from a custom image like this:
build_and_deploy_on_integrazione:
  stage: build
  tags:
    - maven
  image: <custom_image>
  script:
    - apt-get update -y
    - apt-get install -y sshpass
    - sshpass -p <pwd> ssh -o StrictHostKeyChecking=no <user>@<ip> "C:\Temp\test.bat"
    - echo "Done"
Just to be sure, I started a container of the custom image from the command line on the same machine that hosts the GitLab Runner instance and executed the steps of the script, and they also ran fine.
But when I run the pipeline from GitLab, the .bat file is not executed; the only output I see is
Warning: Permanently added '<ip>' (RSA) to the list of known hosts.
and nothing else.
I've checked the SSH daemon log and the connection is established correctly, so the SSH part of the script seems to be working, but the script itself is not executed.
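Since this related question shows no accepted fix, a hedged first debugging step is to rerun the job with verbose SSH output (ssh -vvv is standard) to see where the session stops:

    - sshpass -p <pwd> ssh -vvv -o StrictHostKeyChecking=no <user>@<ip> "C:\Temp\test.bat"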

How to add an entry to the hosts file inside a Docker container?

I have a Kafka instance running on my local machine (macOS Mojave) and I'm trying to have a Docker container see that.
There are two files in the Java program that will be built as the Docker container:
docker-entrypoint.sh:
#!/bin/bash
HOST_DOMAIN="kafka"
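# Pull the first address that precedes a "32 host" entry in the kernel's FIB trie (intended to be the host's IP as seen from the container)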
HOST_IP=$(awk '/32 host/ { print f } {f=$2}' <<< "$(</proc/net/fib_trie)" | head -n 1)
Dockerfile:
# ...
COPY docker-entrypoint.sh ./
RUN chmod 755 docker-entrypoint.sh
RUN apt-get install -y sudo
CMD ["./docker-entrypoint.sh"]
Now I want to write the following line:
$HOST_IP\t$HOST_DOMAIN
to /etc/hosts so the Docker container can work with Kafka. How can I do that, considering elevated access is needed to write to that file? I have tried these:
1- Changing CMD ["./docker-entrypoint.sh"] to CMD ["sudo", "./docker-entrypoint.sh"]
2- Using sudo tee
3- Using su root;tee ...
4- Running echo "%<user> ALL=(ALL) ALL" | tee -a /etc/sudoers > /dev/null, so I can then tee ... without sudo.
1, 2, and 3 lead to the following error:
sudo: no tty present and no askpass program specified
I don't understand this error. A search for it turned up solutions for when one is sshing in to run a command, but there is no ssh here.
And to do 4, I would already need sudo rights, correct?
So, how can I achieve what I'm looking to do?
Dockerfile commands typically run as root unless you've changed the user account, so you should not need sudo.
You don't need to edit any hosts file
You can use host.docker.internal to reach the host from the container
https://docs.docker.com/docker-for-mac/networking/
Otherwise, just run Kafka in a container if you want to set things up locally.
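For example, a minimal sketch of docker-entrypoint.sh under that approach (assuming Docker Desktop for Mac, where host.docker.internal resolves to the host; the getent lookup is illustrative):

#!/bin/bash
# Containers run as root by default, so /etc/hosts is writable at runtime without sudo.
HOST_DOMAIN="kafka"
# Resolve the special host.docker.internal name provided by Docker Desktop
HOST_IP=$(getent hosts host.docker.internal | awk '{ print $1 }')
echo -e "$HOST_IP\t$HOST_DOMAIN" >> /etc/hosts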

How to substitute a command output into a string and append that to a file (in Alpine Linux running in Docker)

I'm trying to build the following Dockerfile:
FROM alpine:latest
EXPOSE 9050 9051
RUN apk --update add tor
RUN echo "ControlPort 9051" >> /etc/tor/torrc
RUN password_hash=$(tor --hash-password "foo")
RUN echo "HashedControlPassword $password_hash" >> /etc/tor/torrc
CMD ["tor"]
I'm trying to add the line HashedControlPassword [pw] to /etc/tor/torrc, where [pw] is generated by the command tor --hash-password "foo". (I'm using "foo" as the password in this example.)
If I build the image using docker build --tag my_tor . and enter the command line using
docker run -it my_tor /bin/ash
and run cat /etc/tor/torrc, I see
ControlPort 9051
HashedControlPassword
In other words, in the end the torrc doesn't seem to contain the hashed password. However, similar commands in my Ubuntu terminal do work. Can anyone spot what the problem is?
Each RUN step executes in its own shell, so the password_hash variable set in one RUN is gone by the time the next RUN runs. You can use ARG and do the substitution in a single RUN instead:
FROM alpine:latest
EXPOSE 9050 9051
ARG password
RUN apk --update add tor
RUN echo "ControlPort 9051" >> /etc/tor/torrc
RUN echo "HashedControlPassword $(tor --hash-password $password)" >> /etc/tor/torrc
CMD ["tor"]
And then build using:
docker build --build-arg password=foo .
In general I would not bake a password into an image. It would be better to provide it when you run the container, using -e.
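A minimal sketch of that approach (the entrypoint.sh name and the TOR_PASSWORD variable are illustrative, not part of the original answer):

#!/bin/sh
# entrypoint.sh: hash the password at container start instead of baking it into the image.
# Run with: docker run -e TOR_PASSWORD=foo my_tor
echo "HashedControlPassword $(tor --hash-password "$TOR_PASSWORD")" >> /etc/tor/torrc
exec tor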

Executing a shell script within docker with RUN command

New to Docker, so please bear with me.
My Dockerfile contains an ENTRYPOINT:
ENV MONGOD_START "mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"
ENTRYPOINT ["/bin/sh", "-c", "$MONGOD_START"]
I have a shell script that adds an entry to the database through a Python script and then starts the server.
The script, startApp.sh:
chmod +x /addAddress.py
python /addAddress.py $1
cd /myapp/webapp
grunt serve --force
Now, both of the docker run commands below are unsuccessful in executing this script:
sudo docker run -it --privileged myApp -C /bin/bash && /myApp/webapp/startApp.sh loc
sudo docker run -it --privileged myApp /myApp/webapp/startApp.sh loc
The docker log of the container is:
"about to fork child process, waiting until server is ready for connections. forked process: 7 child process started successfully, parent exiting "
Also, startApp.sh executes fine when I open a bash prompt in the container and run it.
I am unable to figure out what I am doing wrong; help, please.
I would suggest you create an entrypoint.sh file:
#!/bin/sh
# Initialize the DB start command:
# pick it from the env variable MONGOD_START if it exists,
# else use the default value provided in quotes.
START_DB=${MONGOD_START:-"mongod --fork --logpath /var/log/mongodb.log --logappend --smallfiles"}

# This will start your DB in the background
${START_DB} &

# Go to the app directory and execute the commands
chmod +x /addAddress.py
python /addAddress.py "$1"
cd /myapp/webapp
grunt serve --force
Then modify your Dockerfile by removing the last line and replacing it with the following 3 lines:
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Then rebuild your container image using
docker build -t NAME:TAG .
Now run the following command to verify that the ENTRYPOINT is /entrypoint.sh:
docker inspect NAME:TAG | less
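Or, more directly, print just the entrypoint with inspect's format flag:

docker inspect -f '{{.Config.Entrypoint}}' NAME:TAG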
I guess (and I might be wrong, since I'm neither a MongoDB nor a Docker expert) that your combination of mongod --fork and /bin/sh -c is the culprit.
What you're essentially executing is this:
/bin/sh -c mongod --fork ...
which:
- executes a shell
- the shell executes a single command and waits for it to finish
- that command launches MongoDB in daemon mode
- MongoDB forks and the parent process immediately exits, so the shell's single command is finished, the shell (the container's main process) exits, and the container stops
The easiest fix is probably to just use
CMD ["mongod"]
like the official MongoDB Docker image does.
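If you do need extra options, they can go in the same exec-form array (a sketch using standard mongod flags), keeping mongod in the foreground so it remains the container's main process:

CMD ["mongod", "--logpath", "/var/log/mongodb.log", "--logappend"]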

Bash / Docker exec: file redirection from inside a container

I can't figure out how to read the content of a file from a Docker container. I want to execute the content of a SQL file in my PGSQL container. I tried:
docker exec -it app_pgsql psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql
My application is mounted in /usr/src/app. But I got an error:
bash: /usr/src/app/migrations/*.sql: No such file or directory
It seems that Bash interprets this path as a host path, not a guest one. Indeed, executing the command in two steps works perfectly:
docker exec -it app_pgsql
psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql
I think that's more a Bash issue than a Docker one, but I'm still stuck! :)
Try using a shell to execute that command:
sh -c 'psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql'
The full command would be:
docker exec -it app_pgsql sh -c 'psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql'
try with sh -c "your long command"
This also works when piping a backup into the mysql command:
cat backup.sql | docker exec -i CONTAINER /usr/bin/mysql -u root --password=root DATABASE
You can use the database client to connect to your container and redirect the database file, then perform the restore.
Here is an example with MySQL: a container running MySQL using the host network stack. Since the container uses the host network stack (assuming no restrictions on your MySQL or whatever database), you can connect via localhost and perform the commands transparently:
mysql -h 127.0.0.1 -u user -pyour_passwd database_name < db_backup.sql
You can do the same with PostgreSQL (see: Restore a postgres backup file using the command line?):
pg_restore --host 127.0.0.1 --port 5432 --username "postgres" --dbname "mydatabase" --no-password --clean "/home/dinesh/db/mydb.backup"
It seems that "docker exec" does not support input redirection; I will verify this and maybe open an issue for the Docker community on GitHub, if applicable.
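For the PostgreSQL case in the question, a hedged equivalent is to pipe the file in from the host instead of redirecting inside the container (this assumes the migrations are also reachable on the host, and migration.sql stands in for whatever the *.sql glob matches):

cat /usr/src/app/migrations/migration.sql | docker exec -i app_pgsql psql --host=127.0.0.1 --username=foo foo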
