Pass in environment variables when running a local Docker image - Windows

I have the following docker image:
REPOSITORY                       TAG           IMAGE ID       CREATED        SIZE
mcr.microsoft.com/mssql/server   2017-latest   a9ac6b268134   2 months ago   1.49GB
I am trying to run my local version rather than re-downloading it. The following does start, but the container then dies because it needs the EULA to be accepted. It also needs an SQL Server SA password to be useful:
docker run -i -t a9ac6b268134
So I am trying to pass those in. That's where I'm failing. This is one of my attempts. What am I missing?
docker run -i -t a9ac6b268134 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Aaaaaa1!" -p 1433:1433

Did you try passing the image ID at the end?
docker run -i -t -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Aaaaaa1!" -p 1433:1433 a9ac6b268134
Source: https://hub.docker.com/_/microsoft-mssql-server
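The order matters because docker run treats everything after the image name as the command (and arguments) to run inside the container, so in the failing attempt the -e and -p flags were being handed to the container's entrypoint instead of to Docker. The general shape, per the docker run reference:

# Options must come before the image; anything after it goes to the container.
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]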

Get the name of running docker container inside shell script

I am currently developing an application for which I want to automate a testing process to speed up my development. I use a Postgres DB container, and I then want to check that the preparation of the database is correct.
My process is currently as follows:
docker run -p 5432:5432 --env-file=".db_env" -d postgres # Start the postgres db
# Prep the db, do some other stuff
# ...
docker exec -it CONTAINER_NAME psql -U postgres
Currently, I have to run docker ps to get the container name and then paste it in, replacing CONTAINER_NAME. The container is the only one running, so I'm thinking I could easily find the container ID or the container name automatically instead of retrieving it manually with docker ps, but I don't know how. How do I do this using Bash?
Thank you!
The container ID is returned by the docker run command when you run it with -d:
CONTAINER_ID=$(docker run -p 5432:5432 --env-file=".db_env" -d postgres)
You can choose the name of your container with docker run --name CONTAINER_NAME.
https://docs.docker.com/engine/reference/run/#name---name
You can get its ID using:
docker ps -aqf "name=postgres"
If you're using Bash, you can do something like:
docker exec -it $(docker ps -aqf "name=postgres") psql -U postgres
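If a fixed name works for you, combining --name with exec avoids the lookup entirely. A minimal sketch (the name my-postgres is just an example):

docker run --name my-postgres -p 5432:5432 --env-file=".db_env" -d postgres
# ... prep the db ...
docker exec -it my-postgres psql -U postgres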
In the end, I made use of @mrcl's answer, from which I developed a complete solution. Thank you for that, @mrcl!
CONTAINER_ID=$(docker run -p 5432:5432 --env-file=".db_env" -d postgres)
# Do some other stuff
# ...
docker exec -it $CONTAINER_ID psql -U postgres

Dockerfile does not replace text but command line does, possible bug?

I have a simple Dockerfile that copies over a template, in which I then use sed to replace some of the variables. Pretty straightforward, and from everything I've seen and read, it should work:
COPY /my-dir/my-textfile.conf /to/my/docker/path.conf
RUN sed -i s:TEXTTOREPLACE:my-new-text:g /to/my/docker/path.conf
I then run docker build ..., then docker run ... bash.
Then I cat my file, and TEXTTOREPLACE is still there.
Running the same sed command inside that bash session works with no problem.
Any thoughts? What am I doing wrong/not seeing?
Thanks!
EDIT per request: the base image is debian:7.11; my workstation runs macOS.
Just to recap.
I have the file my-textfile.conf in my working directory. Its content is:
I need to change TEXTTOREPLACE with my-new-text
My test system is Ubuntu Linux 16.04 running Docker version 18.09.0, build 4d60db4.
This is the Dockerfile
FROM debian:7.11
COPY my-textfile.conf /tmp/path.conf
RUN sed -i s:TEXTTOREPLACE:my-new-text:g /tmp/path.conf
I run the following commands:
docker build -t mytestimage .
docker run -ti -d --name mytestcontainer mytestimage
docker exec -ti mytestcontainer /bin/bash
Then, inside the container, I run:
cat /tmp/path.conf
and I get this result:
I need to change my-new-text with my-new-text
So it seems it works as expected.
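As a side note, if you just want to verify the file in the built image, you can skip the interactive shell entirely; a quick check along these lines (using the image built above):

# Run cat in a throwaway container and print the processed file
docker run --rm mytestimage cat /tmp/path.conf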

Jenkins console does not show the output of a command run on a Docker container

I am running the below command to execute my tests on a Docker container:
sudo docker exec -i 6d49272f772c bash -c "mvn clean install test"
The above command runs in a Jenkins shell (bash) build step, but the Jenkins console does not show the logs for the test execution.
I had a similar problem with docker start (which is similar to docker exec). I used the -i option and it would work fine outside Jenkins, but the console in Jenkins didn't show any output from this command. I replaced -i with -a similar to the following:
sudo docker container create -it --name container-name some-docker-image some-command
sudo docker container start -a container-name
sudo docker container rm -f container-name
The docker exec command doesn't have a -a option, so possibly removing the -i option would work too (since you are not interacting with the container in Jenkins). If that doesn't work, you can convert to create/start commands like the ones above and achieve a similar result, with standard out being captured.
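Applied to your case, that would look something like this (a sketch; test-runner and my-test-image are placeholders for a container name and the image your container was created from):

# First attempt: the same exec call without -i
sudo docker exec 6d49272f772c bash -c "mvn clean install test"

# Fallback: the create/start -a pattern from above
sudo docker container create -it --name test-runner my-test-image bash -c "mvn clean install test"
sudo docker container start -a test-runner
sudo docker container rm -f test-runner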

How to use run_input.py to debug a user’s input?

How can I debug a customer's input?
What is run_input.py?
How do I use it?
Check which integrations version the customer currently has.
in MT: run kubectl describe pods -n namespace | grep Image: in watchdog
in ST: run docker ps on the user’s machine
Create a container in the right environment (watchdog prod / dev is a good choice). The following command creates a container with the integrations version you choose; the container will remove itself when you exit it (because of --rm):
docker run --rm -ti -e 'VAULT_GITHUB_TOKEN=<your github token>' -v `pwd`/out:/out docker-registry-2-i.alooma.io/integrations:<user's commit id> bash
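For reference, a breakdown of the flags in that command (these are standard docker run options):

# --rm    remove the container automatically when you exit it
# -ti     allocate an interactive terminal
# -e      pass your GitHub token into the container as an environment variable
# -v      mount ./out from the host at /out, so dump files survive the container
docker run --rm -ti -e 'VAULT_GITHUB_TOKEN=<your github token>' \
  -v `pwd`/out:/out \
  docker-registry-2-i.alooma.io/integrations:<user's commit id> bash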
Run the debugging script (location: scripts/run_input.py):
To see the script's possible flags, run pypy run_input.py -h.
Execution example:
pypy run_input.py -e prod -d gadventures-1 -i f0dd88a4-ee3b-4414-aa90-c21b323bf473 -s e05fd -k .kafka_dump -r .result_dump
If you don’t specify a state ID, the script will run configure_tasks.

Running Docker Commands with a bash script inside a container

I'm trying to automate deployment with webhooks from Docker Hub, based on this tutorial. One container runs the web app on port 80. On the same host, I run another container that listens for POST requests from Docker Hub, triggering the host to update the web app image. The POST request triggers a bash script that looks like this:
echo pulling...
docker pull my_username/image
docker stop img
docker rm img
docker run --name img -d -p 80:80 my_username/image
A test payload successfully triggers the script. However, the container logs the following complaints:
pulling...
app/deploy.sh: line 4: docker: command not found
...
app/deploy.sh: line 7: docker: command not found
It seems that the bash script does not access the host implicitly. How should I proceed?
Things I tried that did not work:
When firing up the listener container, I added the host IP like this, based on the docs:
HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print\$2}' | cut -d / -f 1`
docker run --name listener --add-host=docker:${HOSTIP} -e TOKEN="test654321" -d -p 5000:5000 mjhea0/docker-hook-listener
Similarly, I substituted the --add-host option with --add-host=dockerhost:$(ip route | awk '/docker0/ { print $NF }'), based on this suggestion.
Neither the docker binary nor the docker socket will be present in a container by default (why would they be?).
You can solve this fairly easily by mounting the binary and socket from the host when you start the container, e.g.:
$ docker run -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock debian docker --version
Docker version 1.7.0, build 0baf609
You seem to be a bit confused about how Docker works; I'm not sure exactly what you mean by "access the host implicitly" or how you think it would work. Think of a container as an isolated and ephemeral machine, completely separate from your host, something like a fast VM.
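Applied to the listener setup from the question, that would look something like the following (a sketch in the spirit of the mount shown above; note that the mounted binary also needs its library dependencies to resolve inside the image, which is not guaranteed):

# Mount the host's docker socket and binary into the listener container
docker run --name listener \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker \
  -e TOKEN="test654321" -d -p 5000:5000 mjhea0/docker-hook-listener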
