I currently have to containerize a legacy Spring WAR project that runs on an Apache Tomcat server.
On my local machine I do not run into any trouble running it with the following configuration.
On the actual server, Tomcat seems to be started with startup.sh instead of catalina.sh, as can be seen in the shell script below:
export CATALINA_OPTS="-Denv=product -Denv.servername=instance3 -Dfile.encoding=UTF-8 -Dspring.profiles.active=prod"
cd $CATALINA_HOME/bin
./startup.sh
#./catalina.sh run
I have tried building a Dockerfile containing the information from the shell script, as below:
FROM tomcat:9.0.40
EXPOSE 8083
COPY ./build/libs/ROOT.war "$CATALINA_HOME"/webapps/ROOT.war
ENV JAVA_OPTS='-Dspring.profiles.active=local'
RUN chmod +x $CATALINA_HOME/bin/startup.sh
RUN cd $CATALINA_HOME/bin
RUN /startup.sh
However, I run into an error stating that /startup.sh was not found:
=> CACHED [2/5] COPY ./build/libs/ROOT.war /usr/local/tomcat/webapps/ROOT.war 0.0s
=> [3/5] RUN chmod +x /usr/local/tomcat/bin/startup.sh 0.3s
=> [4/5] RUN cd /usr/local/tomcat/bin 0.3s
=> ERROR [5/5] RUN /startup.sh 0.2s
------
> [5/5] RUN /startup.sh:
#9 0.162 /bin/sh: 1: /startup.sh: not found
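For reference, a sketch of how this Dockerfile is usually written instead. Each RUN executes in a fresh shell in a new layer, so the `RUN cd` on the previous line does not persist and `/startup.sh` is resolved against the filesystem root; beyond that, startup.sh backgrounds Tomcat, so the container would exit immediately even if it ran. The option values below are copied from the server's shell script above and may need adjusting:

```dockerfile
FROM tomcat:9.0.40
EXPOSE 8083
COPY ./build/libs/ROOT.war "$CATALINA_HOME"/webapps/ROOT.war
# Mirror the server's CATALINA_OPTS (values taken from the shell script above)
ENV CATALINA_OPTS="-Denv=product -Denv.servername=instance3 -Dfile.encoding=UTF-8 -Dspring.profiles.active=prod"
# The base image's default command is "catalina.sh run", which keeps Tomcat
# in the foreground -- startup.sh would background it and the container
# would exit. Stating it explicitly here for clarity:
CMD ["catalina.sh", "run"]
```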
This is my first time using a WAR file, so I am a bit unfamiliar with what I am doing; any feedback would be deeply appreciated.
Thank you in advance!
Related
I created an instance in Google Cloud Compute Engine (Debian OS) to host a Spring Boot Maven application, and I installed Maven. While configuring the instance, I added the script below in the automation startup box:
cd spring-boot-app/
mvn clean package
cd target/
nohup java -jar artifact-1.0.jar &
I used nohup and & to run the application in the background.
Now when I stop and then start/resume the instance, open a terminal through SSH, and run the following command:
ps ax | grep java
I don't see my app running. What am I doing wrong here?
When monitoring the logs, I found this -
cd spring-boot-app/ directory does not exists
So I realized that the script is being executed in the root directory, whereas when I connected to the VM's terminal through SSH I was placed in my username's home directory.
By updating the script as follows, I was able to make the startup script work:
cd /home/{username}/
cd spring-boot-app/
mvn clean package
cd target/
nohup java -jar artifact-1.0.jar &
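A slightly more defensive version of the same startup script, sketched here with `{username}` and the log path as placeholders: GCE startup scripts run as root with / as the working directory, so an absolute path avoids relying on any particular home directory, and redirecting output keeps nohup from needing a writable working directory for nohup.out:

```shell
#!/bin/bash
# Startup scripts run as root from /, so use absolute paths.
cd /home/{username}/spring-boot-app/ || exit 1
mvn clean package
# Capture output explicitly instead of relying on nohup.out in the CWD.
nohup java -jar target/artifact-1.0.jar > /var/log/spring-app.log 2>&1 &
```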
I have a deploy script that I am trying to use on my server for CD, but I am running into issues writing the bash script to complete some of my required steps, such as running npm and the migration commands.
How would I go about getting into the container's bash from this script, running the commands below, and then exiting to finish bringing up the changes?
Here is the script I am trying to automate:
cd /Project
docker-compose -f docker-compose.prod.yml down
git pull
docker-compose -f docker-compose.prod.yml build
# all good until here because it opens bash and does not allow more commands to run
docker-compose -f docker-compose.prod.yml run --rm web bash
npm install # should be run inside of web bash
python manage.py migrate_all # should be run inside of web bash
exit # should be run inside of web bash
# back out of web bash
docker-compose -f docker-compose.prod.yml up -d
Typically a Docker image is self-contained, and knows how to start itself up without any user intervention. With some limited exceptions, you shouldn't ever need to docker-compose run interactive shells to do post-deploy setup, and docker exec should be reserved for emergency debugging.
You're doing two things in this script.
The first is to install Node packages. These should be encapsulated in your image; your Dockerfile will almost always look something like
FROM node
WORKDIR /app
COPY package*.json .
RUN npm ci # <--- this line
COPY . .
CMD ["node", "index.js"]
Since the dependencies are in your image, you don't need to re-install them when the image starts up. Conversely, if you change your package.json file, re-running docker-compose build will re-run the npm install step and you'll get a clean package tree.
(There's a somewhat common setup that puts the node_modules directory into an anonymous volume, and overwrites the image's code with a bind mount. If you update your image, it will get the old node_modules directory from the anonymous volume and ignore the image updates. Delete these volumes: and use the code that's built into the image.)
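For illustration, the compose anti-pattern described in the previous paragraph typically looks like the hypothetical fragment below; it is the volumes: lines that should be deleted:

```yaml
services:
  web:
    build: .
    volumes:
      - .:/app             # bind mount: hides the code baked into the image
      - /app/node_modules  # anonymous volume: pins a stale node_modules
```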
Database migrations are a little trickier since you can't run them during the image build phase. There are two good approaches to this. One is to always have the container run migrations on startup. You can use an entrypoint script like:
#!/bin/sh
python manage.py migrate_all
exec "$@"
Make this script executable and set it as the image's ENTRYPOINT, leaving the CMD as the command that actually starts the application. On every container startup it will run migrations and then run the main container command, whatever it may be.
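The Dockerfile wiring for this might look like the following sketch (the script name and CMD are illustrative, not taken from your project):

```dockerfile
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# The entrypoint runs migrations, then hands off to whatever CMD provides
ENTRYPOINT ["/entrypoint.sh"]
CMD ["node", "index.js"]
```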
This approach doesn't necessarily work well if you have multiple replicas of the container (especially in a cluster environment like Docker Swarm or Kubernetes) or if you ever need to downgrade. In these cases it might make more sense to manually run migrations by hand. You can do that separately from the main container lifecycle with
docker-compose run web \
python manage.py migrate_all
Finally, in terms of the lifecycle you describe, Docker images are immutable: this means that it's safe to rebuild new images while the old ones are running. A minimum-downtime approach to the upgrade sequence you describe might look like:
git pull
# Build new images (includes `npm install`)
docker-compose build
# Run migrations (if required)
docker-compose run web python manage.py migrate_all
# Restart all containers
docker-compose up --force-recreate
I'd like to know whether there is a good way to move a folder/file that is outside the build context into the build context when running docker-compose build.
Is it possible to solve this with an init.sh or startup.sh when using docker-compose?
When I build this Dockerfile,
….
# set assets to inside docker container
COPY ../../frontend/src/assets /var/www/assets
….
And I did docker-compose build
However, I got the following error:
Step 19/21 : COPY ../../frontend/src/assets /var/www/assets
ERROR: Service 'test' failed to build: COPY failed: Forbidden path outside the build context: ../../frontend/src/assets ()
If I execute "cp -rf ../../frontend/src/assets ./" before the build and change the folder path in the Dockerfile, there is no problem,
but if I could, I would like to avoid this extra step.
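One common approach, sketched here assuming the compose file sits two levels below the repository root (adjust the paths to your layout): docker-compose lets you point build.context at a parent directory and name the Dockerfile relative to that context, so the assets end up inside the context without copying:

```yaml
services:
  test:
    build:
      context: ../..                         # repo root becomes the build context
      dockerfile: backend/docker/Dockerfile  # path relative to that context
```

The COPY instruction in the Dockerfile is then written relative to the new context, e.g. COPY frontend/src/assets /var/www/assets.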
I tried to run my simple Spring web application JAR on Docker, but I always get the following error. The ubuntu and openjdk images exist and their state is UP. Why can I not run my JAR file on Docker, and how can I get rid of this error?
ubuntu@ip-172-31-16-5:~/jar$ docker run -d -p 8080:8080 spring-docker tail -f /dev/null
c8eb92e5315adbaccfd894ed9e74b8e0d0eed88a81eaa07037cf8ada133c81fd
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"java\": executable file not found in $PATH": unknown.
Related Dockerfile:
FROM ubuntu
FROM openjdk
VOLUME /tmp
ADD /spring-boot-web-0.0.1-SNAPSHOT.jar myapp.jar
RUN sh -c 'touch /myapp.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/myapp.jar"]
Check the sequence below, which works for me.
Build the image using the following command:
docker build -t demo-img .
Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
COPY demo-0.0.1-SNAPSHOT.jar app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
Then run it as below:
docker run --name demo-container -d -p 8080:8080 demo-img
Make sure you run all these commands from the directory in which the Dockerfile and the JAR are present.
When I try to run 'maven servicemix' it throws the following error.
To run it, I used the following command:
.\bin\servicemix
Error:
jdk1.7.0_71/bin/java: cannot execute binary file
Any idea how to run Maven ServiceMix?
I found the solution by adding 'sudo' permission; run it as follows in the terminal.
Go inside the ServiceMix folder:
cd /apache-servicemix-7.0.0.M1/
Run it with 'sudo' permission:
sudo ./bin/servicemix
Done!
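For anyone hitting the same message: "cannot execute binary file" usually points to an architecture or binary-format mismatch (for example, a 64-bit JDK on a 32-bit machine, or a JDK built for another OS) rather than a permissions problem, so it may be worth comparing the JVM binary with the machine. The JDK path below is the one from the error above:

```shell
# Inspect the binary format of the bundled JVM (path taken from the error);
# "|| true" keeps the check non-fatal if the path or the file tool is absent.
file jdk1.7.0_71/bin/java || true
# Show the machine architecture to compare against, e.g. x86_64
uname -m
```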