I'm trying to create a Docker image that sets a custom Tomcat port. (I know you can map an external port with the Docker flag "-p 8888:8080", but for my use case I want to change the internal port as well.)
When I try to start catalina.sh, the run argument is being ignored for some reason.
Dockerfile:
# Tomcat 8 alpine dockerfile copied here (URL below)... minus the CMD line at the end
# https://github.com/docker-library/tomcat/blob/5f1abae99c0b1ebbd4f020bc4b5696619d948cfd/8.0/jre8-alpine/Dockerfile
ADD server.xml $CATALINA_HOME/conf/server.xml
ADD start-tomcat.sh /start-tomcat.sh
RUN chmod +x /start-tomcat.sh
ENTRYPOINT ["/bin/sh","/start-tomcat.sh"]
The tomcat file, server.xml, is the same as the default except for the line:
<Connector port="${port.http.nonssl}" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
start-tomcat.sh:
#!/bin/sh
export JAVA_OPTS=-Dport.http.nonssl=${PORT}
catalina.sh run
The image builds successfully, but when I run with
docker run -p 8888:8888 -e PORT=8888 customtomcat
I just get a list of catalina.sh commands as if I didn't give it an argument. I've also tried
/usr/local/tomcat/bin/catalina.sh run
sh -c "catalina.sh run"
sh -c "/usr/local/tomcat/bin/catalina.sh run"
cd /usr/local/tomcat/bin
./catalina.sh run
I'm pretty sure I'm missing something simple here. I'd guess it's a syntax issue, but maybe it's something about Docker or Alpine that I'm not aware of. This is my first time using Alpine Linux.
---Edit 1---
To explain my use case... I'm setting PORT after the Docker image is created because it's being set by an Apache Mesos task. For my purposes I need to run the Docker container (from Marathon) in host mode, not bridged mode.
---Edit 2---
I modified things to only focus on my main issue. The docker file now only has the following appended to the end:
ADD start-tomcat.sh /start-tomcat.sh
RUN chmod +x /start-tomcat.sh
ENTRYPOINT ["/bin/sh","/start-tomcat.sh"]
And start-tomcat.sh:
#!/bin/bash
catalina.sh run
Still no luck.
Update: if "catalina.sh run" fails with an invalid option, first check for line endings from a Windows system. They'll cause errors when the shell script is read in a Linux environment.
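For example, a quick way to check for and strip carriage returns (a sketch; dos2unix also works if it is installed):
# Show the first line with non-printing characters; a trailing ^M means CRLF endings
cat -v start-tomcat.sh | head -n 1
# Strip the carriage returns in place
sed -i 's/\r$//' start-tomcat.sh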
Looking at catalina.sh, I believe you want CATALINA_OPTS, not JAVA_OPTS:
# Control Script for the CATALINA Server
#
# Environment Variable Prerequisites
#
# Do not set the variables in this script. Instead put them into a script
# setenv.sh in CATALINA_BASE/bin to keep your customizations separate.
#
# CATALINA_HOME May point at your Catalina "build" directory.
#
# CATALINA_BASE (Optional) Base directory for resolving dynamic portions
# of a Catalina installation. If not present, resolves to
# the same directory that CATALINA_HOME points to.
#
# CATALINA_OUT (Optional) Full path to a file where stdout and stderr
# will be redirected.
# Default is $CATALINA_BASE/logs/catalina.out
#
# CATALINA_OPTS (Optional) Java runtime options used when the "start",
# "run" or "debug" command is executed.
# Include here and not in JAVA_OPTS all options, that should
# only be used by Tomcat itself, not by the stop process,
# the version command etc.
# Examples are heap size, GC logging, JMX ports etc.
#
# CATALINA_TMPDIR (Optional) Directory path location of temporary directory
# the JVM should use (java.io.tmpdir). Defaults to
# $CATALINA_BASE/temp.
#
# JAVA_HOME Must point at your Java Development Kit installation.
# Required to run with the "debug" argument.
#
# JRE_HOME Must point at your Java Runtime installation.
# Defaults to JAVA_HOME if empty. If JRE_HOME and JAVA_HOME
# are both set, JRE_HOME is used.
#
# JAVA_OPTS (Optional) Java runtime options used when any command
# is executed.
# Include here and not in CATALINA_OPTS all options, that
# should be used by Tomcat and also by the stop process,
# the version command etc.
# Most options should go into CATALINA_OPTS.
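So a minimal sketch of start-tomcat.sh using CATALINA_OPTS instead (assuming PORT is passed with -e, as in your docker run command):
#!/bin/sh
# The port property goes in CATALINA_OPTS so it only applies to "run"/"start",
# not to the stop or version commands
export CATALINA_OPTS="-Dport.http.nonssl=${PORT}"
# exec so Tomcat replaces the shell as PID 1
exec catalina.sh run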
Related
When running GitHub Actions on a self-hosted runner machine, how do I access existing custom environment variables that have been set on the machine in my GitHub Actions .yaml script?
I have set those variables and restarted the runner virtual machine several times, but they are not accessible using the $VAR syntax in my script.
If you want to set a variable only for one run, you can add an export command when you configure the self-hosted runner on the GitHub repository, before running the ./run.sh command:
Example (linux) with a TEST variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Add new variable
$ export TEST="MY_VALUE"
# Last step, run it!
$ ./run.sh
That way, you will be able to access the variable by using $TEST, and it will also appear when running env:
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $TEST
If you want to set a variable permanently, you can add a file at /etc/profile.d/<filename>.sh, as suggested by @frennky above, but you will also have to update the shell for it to be aware of the new env variables, each time, before running the ./run.sh command:
Example (linux) with a HTTP_PROXY variable:
# Create the runner and start the configuration experience
$ ./config.sh --url https://github.com/owner/repo --token ABCDEFG123456
# Create new profile http_proxy.sh file
$ sudo touch /etc/profile.d/http_proxy.sh
# Update the http_proxy.sh file
$ sudo vi /etc/profile.d/http_proxy.sh
# Manually add the new line to the http_proxy.sh file
export HTTP_PROXY=http://my.proxy:8080
# Save the changes (:wq)
# Update the shell
$ bash
# Last step, run it!
$ ./run.sh
That way, you will be able to access the variable using $HTTP_PROXY, and it will also appear when running env, the same way as above.
job:
  runs-on: self-hosted
  steps:
    - run: env
    - run: echo $HTTP_PROXY
    - run: |
        cd $HOME
        pwd
        cd ../..
        cat etc/profile.d/http_proxy.sh
The /etc/profile.d/<filename>.sh file will persist, but remember that you will have to update the shell each time you want to start the runner, before executing the ./run.sh command. At least that is how it worked with the EC2 instance I used for this test.
Inside the application directory of the runner, there is a .env file, where you can put all variables for jobs running on this runner instance.
For example
LANG=en_US.UTF-8
TEST_VAR=Test!
Every time .env changes, restart the runner (assuming it is running as a service):
sudo ./svc.sh stop
sudo ./svc.sh start
Test by printing the variable
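For example, with a step like this (a sketch reusing the TEST_VAR from above):
job:
  runs-on: self-hosted
  steps:
    - run: echo $TEST_VAR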
I have the following Dockerfile in a simple Spring Boot app:
FROM maven:3.6-jdk-8-alpine as build
WORKDIR /app
COPY ./pom.xml ./pom.xml
RUN mvn dependency:go-offline -B
# copy your other files
COPY ./src ./src
# build for release
RUN mvn package -DskipTests
FROM openjdk:8-jre-alpine
ARG artifactid
ARG version
ENV artifact ${artifactid}-${version}.jar
WORKDIR /app
COPY --from=build /app/target/${artifact} /app
EXPOSE 8080
ENTRYPOINT ["sh", "-c"]
CMD ["java","-jar ${artifact}"]
When I build it with the required arguments:
docker build --build-arg artifactid=spring-demo --build-arg version=0.0.1 -t spring-demo .
it builds with no errors.
When I try to run the image with:
docker container run -it spring-demo
it fails with the following error:
Usage: java [-options] class [args...]
           (to execute a class)
   or  java [-options] -jar jarfile [args...]
           (to execute a jar file)
where options include:
    -d32          use a 32-bit data model if available
    -d64          use a 64-bit data model if available
    -server       to select the "server" VM
                  The default VM is server,
                  because you are running on a server-class machine.
    -cp <class search path of directories and zip/jar files>
    -classpath <class search path of directories and zip/jar files>
                  A : separated list of directories, JAR archives,
                  and ZIP archives to search for class files.
    -D<name>=<value>
                  set a system property
    -verbose:[class|gc|jni]
                  enable verbose output
    -version      print product version and exit
    -version:<value>
                  Warning: this feature is deprecated and will be removed
                  in a future release.
                  require the specified version to run
    -showversion  print product version and continue
    -jre-restrict-search | -no-jre-restrict-search
                  Warning: this feature is deprecated and will be removed
                  in a future release.
                  include/exclude user private JREs in the version search
    -? -help      print this help message
    -X            print help on non-standard options
    -ea[:<packagename>...|:<classname>]
    -enableassertions[:<packagename>...|:<classname>]
                  enable assertions with specified granularity
    -da[:<packagename>...|:<classname>]
    -disableassertions[:<packagename>...|:<classname>]
                  disable assertions with specified granularity
    -esa | -enablesystemassertions
                  enable system assertions
    -dsa | -disablesystemassertions
                  disable system assertions
    -agentlib:<libname>[=<options>]
                  load native agent library <libname>, e.g. -agentlib:hprof
                  see also, -agentlib:jdwp=help and -agentlib:hprof=help
    -agentpath:<pathname>[=<options>]
                  load native agent library by full pathname
    -javaagent:<jarpath>[=<options>]
                  load Java programming language agent, see java.lang.instrument
    -splash:<imagepath>
                  show splash screen with specified image
See http://www.oracle.com/technetwork/java/javase/documentation/index.html for more details.
What's wrong with the above settings, please?
The app example code can be found here.
You should delete that ENTRYPOINT line and use the shell form of CMD.
# No ENTRYPOINT
CMD java -jar ${artifact}
The Dockerfile ENTRYPOINT and CMD lines get combined into a single command line. In your Dockerfile, that gets interpreted as
sh -c java '-jar ${artifact}'
But the sh -c option only actually takes the next single word and interprets it as the command to run; so that really gets processed as
sh -c 'java' # '-jar ${artifact}'
ignoring the -jar option.
There are two ways to "spell" CMD (and ENTRYPOINT and RUN). As you've done it with JSON arrays, you specify exactly the "words" that go into the command line, so for example, -jar ${artifact} would be passed as a single argument including the embedded space. If you just pass a command line, Docker will insert a sh -c wrapper for you, and the shell will handle word parsing and variable interpolation. You shouldn't ever need to manually include sh -c in a Dockerfile.
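To illustrate the two spellings (a sketch; ${artifact} is the build-time ENV from your Dockerfile, and app.jar is a made-up literal):
# Exec form: the listed words are the exact argv; no shell runs, so ${artifact} would NOT be expanded
CMD ["java", "-jar", "app.jar"]
# Shell form: Docker wraps the line in sh -c for you, so ${artifact} is expanded at runtime
CMD java -jar ${artifact}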
It looks to me that you have an error with the sh -c: the arguments are not read correctly. You can check that by running docker inspect on the exited container and searching the output for "Cmd".
ENTRYPOINT ["sh", "-c"]
CMD ["java","-jar ${artifact}"]
If you would like to run it with sh -c, you have to quote the arguments as one, like:
CMD ["java -jar ${artifact}"]
Can you give it a try?
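For reference, one way to see what was actually stored (a sketch; replace <container> with your exited container's ID or name):
docker inspect <container> --format '{{json .Config.Entrypoint}} {{json .Config.Cmd}}'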
ARG values are only available during the build. To get environment variables into the container at runtime, you have to use --env, -e, or --env-file. It is best to use --env-file.
See this for the same problem, answered already: How do I pass environment variables to Docker containers?
Also look at this: Use environment variables in CMD
Here is one possible solution:
Keep your CMD instruction like this:
CMD ["java","-jar ${artifact}"]
Use this docker run command:
docker container run -it -e artifact=spring-demo-0.0.1.jar spring-demo
I want to export the Docker container hostname as an environment variable which I can later use in my app. In my Dockerfile I call my script "run" as the last command
CMD run
The run file is executable and works fine with the rest of the commands I perform, but before them I want to export the container hostname to an environment variable as follows
"run" File Try 1
#!/bin/bash
export DOCKER_MACHINE_IP=`hostname -i`
my_other_commands
exec tail -f /dev/null
But when I enter the Docker container and check, the variable is not set. If I use
echo $DOCKER_MACHINE_IP
in the run file after exporting, it shows the IP on the console when I run
docker logs
I also tried sourcing another script from the "run" file as follows
"run" File Try 2
#!/bin/bash
source ./bin/script
my_other_commands
exec tail -f /dev/null
and the script again contains the export command. But this also does not set the environment variable. What am I doing wrong?
When you execute a script, any environment variable set by that script will be lost when the script exits.
But in both of the cases you've posted above, the environment variable should be accessible to the commands in your scripts. When you enter the Docker container via docker exec, however, you get a new shell, which does not contain your variable.
tl;dr: Your exported environment variable will only be available to subshells of the shell which set the variable. If you need it when logging in, you should source the ./bin/script file.
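A small sketch of the scoping (mycontainer is a hypothetical container name, and the working directory is assumed to contain ./bin/script):
# A shell opened with docker exec is not a child of the run script,
# so the variable is unset there:
docker exec -it mycontainer sh -c 'echo "$DOCKER_MACHINE_IP"'
# Sourcing the script inside the new shell sets the variable for that shell:
docker exec -it mycontainer sh -c '. ./bin/script && echo "$DOCKER_MACHINE_IP"'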
During the build stage of my Docker images, I would like to set some environment variables automatically for every subsequent RUN command.
However, I would like to set these variables from within the Docker container, because setting them depends on some internal logic.
Using the Dockerfile ENV instruction is no good, because it cannot rely on internal logic (it cannot be set from a command run inside the container).
Normally (if this were not Docker) I would set my ~/.profile file. However, Docker does not load this file in non-interactive shells.
So at the moment I have to run each Docker RUN command with:
RUN bash -c "source ~/.profile && do_something_here"
However, this is very tedious (and unclean) when I have to repeat it every time I want to run a bash command. Is there some other "profile" file I can use instead?
You can try setting the ARG as an ENV, like this:
ARG my_env
ENV my_env=${my_env}
in the Dockerfile, and pass my_env=prod in the build args so that the env is set for subsequent RUN commands.
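For example (a sketch; the image tag is arbitrary):
docker build --build-arg my_env=prod -t myimage .
# subsequent RUN commands in the Dockerfile can then read it, e.g.
# RUN echo "building for $my_env"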
You can also use the env_file: option in a docker-compose.yml file in the case of a stack deploy.
I had a similar problem and couldn't find a satisfactory solution. What I did was create a script that would source the variables, then do the operation. I then rewrote the RUN commands in the Dockerfile to use that script instead.
In your case, if you need to run multiple commands, you could create a wrapper that loads the variables, runs the command given as an argument, and include that script in the Docker image, as in the sketch below.
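A minimal sketch of that wrapper idea (with-env.sh and the ~/.profile path are assumed names, not from the original):
#!/bin/sh
# with-env.sh: source the variables, then run whatever command was passed in
. ~/.profile
exec "$@"
And in the Dockerfile:
COPY with-env.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/with-env.sh
RUN with-env.sh do_something_here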
From examples I've seen one can set environment variables in docker-compose.yml like so:
services:
  postgres:
    image: my_node_app
    ports:
      - 8080:8080
    environment:
      APP_PASSWORD: mypassword
...
For security reasons, my use case requires me to fetch the password from a server that we have a bash client for:
#!/bin/bash
get_credential <server> <dev-environment> <role> <key>
In the Docker documentation, I found this, which says that I can pass shell environment variable values to docker compose. So I can run the bash client to grab the passwords in the starting shell that creates the Docker instances. However, that requires me to have my bash client outside Docker and inside my Maven project.
Another way to do this would be to run/cmd/entrypoint a bash script that can set environment variable for the docker instance. Since my docker image runs node.js, currently my Dockerfile is like this:
FROM node:4-slim
MAINTAINER myself
# ... do Dockerfile stuff
# TRIAL #1: run a bash script to set the environment varable --- UNSUCCESSFUL!
COPY set_en_var.sh /
RUN chmod +x /set_en_var.sh
RUN /bin/bash /set_en_var.sh
# original entry point
#ENTRYPOINT ["node", "mynodeapp.js", "configuration.js"]
# TRIAL #2: use a bash script as entrypoint that sets
# the environment variable and runs my node app . --- UNSUCCESSFUL TOO!
ENTRYPOINT ["/entrypoint.sh"]
Here is the code for entrypoint.sh:
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
export APP_PASSWORD=( $cred_str )
# run the original entrypoint command
node mynodeapp.js configuration.js
And here is code for my set_en_var.sh:
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
export APP_PASSWORD=( $cred_str )
So 2 questions:
Which is a better choice, having my bash client for password live inside docker or outside docker?
If I were to have it inside docker, how can I use cmd/run/entrypoint to achieve this?
Which is a better choice, having my bash client for password live inside docker or outside docker?
Always have it inside. You don't want dependencies on the host OS; you want to avoid that situation as much as possible.
If I were to have it inside docker, how can I use cmd/run/entrypoint to achieve this?
Consider the line of code you used:
RUN /bin/bash /set_en_var.sh
This won't work at all, because it doesn't change the container image as such. You just run a bash process which sets some environment variables; then that bash exits, and nothing in the OS is changed. A Dockerfile build only keeps the changes a command makes to the OS, and in your case, nothing changes outside that bash session.
Next, your approach of doing this at build time is also not justified. If you build the image with the environment variables baked into it, then you defeat the purpose of having a command that fetches the latest credentials. Suppose you change the password; this would require you to rebuild the image (in case it had worked).
Now, your entrypoint.sh approach is the right one, and it should work; you should just check what is going wrong with it. Also echo $cred_str while testing, to make sure you are getting the right credential details back from the command.
Lastly, you should change the line
node mynodeapp.js configuration.js
to
exec node mynodeapp.js configuration.js
This makes sure that your node process becomes PID 1.
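Putting it together, a sketch of the revised entrypoint.sh (assuming mybashclient.sh and get_credential behave as in the question; note it also exports the password as a plain string rather than a bash array):
#!/bin/bash
. mybashclient.sh
cred_str=$(get_credential <server> <dev-environment> <role> <key>)
# Export a plain string; APP_PASSWORD=( ... ) would create a bash array,
# and arrays are not passed through the environment to child processes
export APP_PASSWORD="$cred_str"
# exec so node replaces the shell and becomes PID 1
exec node mynodeapp.js configuration.js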