This is my Dockerfile:
# Use the Oracle SQLcl image as the base image
FROM container-registry.oracle.com/database/sqlcl:latest
# Set the current working directory
WORKDIR /app/
# Set the TNS_ADMIN environment variable
ENV TNS_ADMIN=/opt/oracle/network/admin
# Copy the TNSNAMES.ORA file into the container
COPY TNSNAMES.ORA $TNS_ADMIN/TNSNAMES.ORA
# Login with AQTOPDATA
ENTRYPOINT ["sql", "connection_string"]
# Execute the script
CMD ["#scripts/script1.sql"]
If I access the container via bash:
winpty docker run --rm -it --entrypoint bash sqlcl_opt
and do:
sql connection_string
SQL > #"my path/myfile.sql"
The file is executed.
However, if I do:
docker run sqlcl_opt #"my path/myfile.sql"
it is not. How can I troubleshoot the problem? Is it taking the file from the container or from my local machine?
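One way to check is to temporarily override the entrypoint with echo so the container just prints the arguments it receives (a sketch, using the image name from above). Note that in bash an unquoted # starts a comment, so in the failing command everything from # onward may never reach docker at all; quoting the argument rules that out:
winpty docker run --rm --entrypoint echo sqlcl_opt '#"my path/myfile.sql"'
# whatever echo prints here is exactly what the SQLcl entrypoint would have received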
Related
I have a custom entrypoint where an environment variable is exported. The value of the environment variable is constructed from two other variables provided at runtime.
Snippet from the Dockerfile:
CMD ["bash", "/opt/softwareag/laas-api-server/entrypoint.sh"]
Snippet from entrypoint.sh
export URL="$SCHEME://$HOST:$PORT"
echo "URL:$URL"
The command docker run -e HOST="localhost" -e PORT="443" mycentos prints URL:localhost:443 as expected, but the same variable appears to have lost its value when the following command is executed:
docker exec -ti <that-running-container-from-myimage> bash
container-prompt> echo $URL
<empty-line>
Why would the exported variable appear to have lost the value of URL? What is getting lost here?
The environment variable will not persist across bash sessions. When the container runs, the variable is only available in the entrypoint's session; if it was set with export, it will not be available in later sessions.
docker ENV vs RUN export
If you want to use them across all sessions, you should set them in the Dockerfile:
ENV SCHEME=http
ENV HOST=example.com
ENV PORT=3000
On the application side you can use them together, and they will be available in every session:
curl "${SCHEME}://${HOST}:${PORT}"
# output from the image build:
Step 8/9 : RUN echo "${SCHEME}://${HOST}:${PORT}"
---> Running in afab41115019
http://example.com:3000
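Because they are set with ENV, these values are also just defaults; they can still be overridden per container at run time (a sketch, assuming the image with the ENV lines above is tagged myimage):
docker run -e HOST=localhost -e PORT=443 myimage printenv SCHEME HOST PORT
# http
# localhost
# 443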
Now, if we look at the way you are using it, it will not work, because:
export URL="$SCHEME://$HOST:$PORT"
# only set in this session
echo "URL:$URL"
# URL is inherited by the node process too, but only within this session
node app.js
For example, look at this Dockerfile:
FROM node:alpine
RUN echo $'#!/bin/sh \n\
export URL=example.com \n\
echo "${URL}" \n\
node -e \'console.log("ENV URL value inside nodejs", process.env.URL)\' \n\
exec "$@" \n\
' >> /bin/entrypoint.sh
RUN chmod +x /bin/entrypoint.sh
ENTRYPOINT ["/bin/entrypoint.sh"]
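Assuming the image is built and tagged myapp (the name used in the commands below):
docker build -t myapp .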
So when you run the Docker container for the first time, you will be able to see the expected response:
docker run -it --rm myapp
example.com
ENV URL value inside nodejs example.com
Now let's check a later session. Keep the container running:
docker run -it --rm --name myapp myapp tail -f /dev/null
example.com
ENV URL value inside nodejs example.com
While the container is up, we can verify from another session:
docker exec -it myapp sh -c "node -e 'console.log(\"ENV URL value inside nodejs\", process.env.URL)'"
ENV URL value inside nodejs undefined
As we can see, it is the same script but different behaviour because of how Docker sessions work: the variable is only available in that one session. You can write the values to a file if you need them in a later session.
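A minimal sketch of that file-based approach (the path /tmp/app_env.sh is an arbitrary choice):
# in the entrypoint script
export URL="$SCHEME://$HOST:$PORT"
echo "export URL=\"$URL\"" > /tmp/app_env.sh
exec "$@"
# later, from another session
docker exec -it myapp sh -c '. /tmp/app_env.sh && echo "$URL"'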
Here is my Dockerfile:
FROM httpd:latest
ENV ENV_VARIABLE "http://localhost:8081"
# COPY BUILD AND CONFIGURATION FILES
COPY entrypoint.sh /
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Here is the entrypoint.sh file
#!/bin/bash
sed -i 's,ENV_VARIABLE,'"$ENV_VARIABLE"',g' /path/to/config/file
exec "$#"
To run the container
docker run -e ENV_VARIABLE=some-value <image-name>
The sed command works perfectly fine and the value from the environment variable gets reflected in the config file. But whenever I start the container, it stops automatically.
I ran docker logs to check the logs, but they were empty.
The Dockerfile reference notes:
If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.
So you need to find the CMD from the base image and repeat it in your Dockerfile. Among other places, you can find it in the image history on the Docker Hub listing:
CMD ["httpd-foreground"]
docker inspect httpd or docker history httpd would also be able to tell you this.
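For example, a one-liner that prints the default CMD of the base image (the --format template is just one convenient way to ask for it):
docker inspect --format '{{json .Config.Cmd}}' httpd:latest
# ["httpd-foreground"]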
I'm a newbie to Docker. I used a Dockerfile to build an image, and when I try to run the following command:
docker run --name ai --rm -it -v /C/AI_project/:/AI_project project:latest bash
it creates the container, but the AI_project folder is empty. I have edited this line many times, but it never copies the folder.
How do I add a folder from the local host into the container?
If all you want to do is copy a file from the host to the container, you can use docker cp:
docker cp local.file container:/path/local.file
If you want to mount a host file into the container when starting it, you should do something like:
docker run -v local.file:/path/local.file --name name image
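One thing to watch out for: -v needs an absolute host path; a relative name like local.file is treated as a named volume instead, which would explain an empty directory. A sketch that expands the current directory:
docker run -v "$(pwd)/local.file:/path/local.file" --name name image
On Windows with Git Bash (as in the question), the shell may also mangle paths like /C/AI_project; prefixing the command with MSYS_NO_PATHCONV=1 is a common workaround.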
I am trying to develop a Dockerfile for my application that loads a large number of environment variables after initialisation. Somehow, these variables are not reachable when I later execute the following commands:
docker exec -it container_name bash
printenv
My environment variables are not visible. If I load the file manually, however, they are:
docker exec -it container_name bash
source .env
printenv
... environment variables are shown ...
This is my Dockerfile:
Dockerfile
FROM python:3.6
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt
COPY . /usr/src/app/
RUN chmod 755 load_env_variables.sh
ENTRYPOINT ["/bin/bash", "-c", "/usr/src/app/load_env_variables.sh"]
load_env_variables.sh
#!/bin/bash
source .env
python start_application
And my .env file contains lines as follows: 'export name=value'.
The reason for this behavior is that docker exec -it container_name bash starts a new bash process. A new bash gets only the environment that Docker itself set for the container (plus anything from .bashrc or .bash_profile), not variables exported by the entrypoint's shell.
A proper solution for your problem would be to use the option --env-file with the docker run command. Be aware that the env-file needs to look like this:
test1=test1
test2=test2
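A sketch of how that fits together (env.list is a hypothetical file name):
docker run --env-file ./env.list --name mycontainer myimage
docker exec -it mycontainer printenv
# test1 and test2 now show up, because Docker set them for the container itself, not just for one shell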
I'm having trouble with Docker: containers created from my image do not have the environment variables that I know I set in the image definition.
I have created a Dockerfile that builds an openSUSE 42.3 image. I need some environment variables set up in the image so that anyone who starts a container from it can use code that I've compiled and placed in the image.
I have created a shell file called "image_env_setup.sh" that contains the necessary environment variable definitions. I also manually added those environment variable definitions to the Dockerfile.
USER codeUser
COPY ./docker/image_env_setup.sh /opt/MyCode
ENV PATH="$PATH":"/opt/MyCode/bin:/usr/lib64/mpi/gcc/openmpi/bin"
ENV LD_LIBRARY_PATH="/usr/lib64:/opt/MyCode/lib:"
ENV PS1="[\u@docker: \w]\$ "
ENV TERM="xterm-256color"
ENV GREP_OPTIONS="--color=auto"
ENV EDITOR=/usr/bin/vim
USER root
RUN chmod +x /opt/MyCode/image_env_setup.sh
USER codeUser
RUN /opt/MyCode/image_env_setup.sh
RUN /bin/bash -c "source /opt/MyCode/image_env_setup.sh"
The command that I use to create the container is:
docker run -it -d --name ${containerName} -u $userID:$groupID \
-e USER=$USER --workdir="/home/codeUser" \
--volume="${home}:/home/codeUser" ${imageName} /bin/bash \
The only thing that works is to pass the shell file to be run again when the container starts up.
docker start $MyImageTag
docker exec -it $MyImageTag /bin/bash --rcfile /opt/MyCode/image_env_setup.sh
I didn't think it would be that difficult to just have the shell variables set up within the container so that anyone entering it would have them already defined.
RUN entries cannot modify environment variables (I assume you want to set more variables in image_env_setup.sh). Only ENV entries in the Dockerfile (and docker options like --rcfile) can change the environment.
You can also decide to source image_env_setup.sh from the .bashrc, of course.
For example, you could either pre-fabricate a .bashrc and pull it in with COPY, or do
RUN echo '. /opt/MyCode/image_env_setup.sh' >> ~/.bashrc
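A sketch of the COPY variant (the file layout is an assumption):
# bashrc, shipped next to the Dockerfile, contains the line:
#   . /opt/MyCode/image_env_setup.sh
COPY ./docker/bashrc /home/codeUser/.bashrc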
You can also put a line sourcing /opt/MyCode/image_env_setup.sh in ~/.bash_profile or ~/.bashrc of the container, so that every time you enter the container the environment variables are already set.