I have to run a Maven wrapper command from a Dockerfile, but I don't know how to do it.
When I tried writing it like this:
RUN ./mvnw -s settings.xml clean install
the command did not work; I got the error mvnw: not found.
My Dockerfile:
FROM ubuntu
WORKDIR /app
COPY ./ ./
RUN ./mvnw -s .mvn/settings.xml -B -f /app/pom.xml dependency:resolve-plugins dependency:resolve dependency:go-offline
I found the way to fix this issue.
I just added a chmod +x ./mvnw command, and the final RUN command looks like this:
RUN chmod +x ./mvnw && \
./mvnw -s .mvn/settings.xml -B -f /app/pom.xml dependency:resolve-plugins dependency:resolve dependency:go-offline
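Alternatively (a sketch, assuming BuildKit with Dockerfile syntax 1.2 or newer is available), the execute bit can be set at copy time instead of with a separate chmod:
COPY . ./
# re-copy the wrapper with the execute bit set (BuildKit-only --chmod flag)
COPY --chmod=755 mvnw ./
The usual root cause is that the execute bit was lost when mvnw was committed (for example, from Windows); git update-index --chmod=+x mvnw stores it in the repository permanently.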
I am trying to dockerize a Go application which uses a Go Java JNI library (https://github.com/timob/jnigi), and I am getting an error at the build stage, as follows:
/go/src/github.com/timob/jnigi/cinit.go:8:9: fatal error: jni.h: No such file or directory
    8 | #include <jni.h>
      |          ^~~~~~~
compilation terminated.
My Dockerfile:
FROM golang:alpine as BUILD
ENV GO111MODULE=auto
RUN apk update && \
apk upgrade && \
apk add git && \
apk add unzip && \
apk add openssl-dev && \
apk add build-base && \
apk add --no-cache gcc musl-dev && \
apk add --no-cache openjdk8-jre
COPY . /go/src/project
WORKDIR /go/src/project
RUN go get -d -v
RUN CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o /go/dist/app
FROM alpine:latest AS FINAL
COPY --from=BUILD /go/dist/app /project-runtime/app
RUN apk update && \
apk add tzdata && \
apk add apr && \
apk add ca-certificates && \
apk add openssl && \
rm -rf /var/cache/apk/*
RUN update-ca-certificates
WORKDIR /project-runtime
ENTRYPOINT ["./app"]
The error happens when the "RUN CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o /go/dist/app" instruction is executed. How should I add the jni.h file? Could you please help me?
I think you are missing a step: you need to run this script, https://github.com/timob/jnigi/blob/master/compilevars.sh, passing it the JDK root path, per the library's instructions.
CGO_CFLAGS needs to be set so cgo can find the JNI C header files, and the compilevars.sh script does this for you.
# put this in your build script
source <gopath>/src/tekao.net/jnigi/compilevars.sh <root path of jdk>
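In a Dockerfile, one way to apply this (a sketch; the paths are assumptions for Alpine's openjdk8 package) is to set the same variables directly, since each RUN step gets a fresh shell. Note that jni.h ships with the full JDK, not the JRE, so openjdk8-jre alone will not provide it:
# full JDK needed: the JRE package does not include jni.h
RUN apk add --no-cache openjdk8
# JAVA_HOME path is an assumption for Alpine's openjdk8 package
ENV JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk
# equivalent to what compilevars.sh does: point cgo at the JNI headers
ENV CGO_CFLAGS="-I${JAVA_HOME}/include -I${JAVA_HOME}/include/linux"
After that, the go build step should be able to find jni.h.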
I was writing a Dockerfile and concatenated several RUN instructions into one for better layer caching, but I realised one of the commands uses --no-cache. Could you please advise how caching works here?
RUN go mod download \
&& apk update --no-cache \
&& apk add git \
&& CGO_ENABLED=0 go build -o golang-sdk .
The apk update --no-cache does not make sense: apk's --no-cache option makes apk fetch the package index on the fly instead of keeping a local copy, so a separate apk update (whose only job is to refresh that local copy) is pointless. It also has nothing to do with Docker's layer cache. Strike it and move the flag onto the git install:
RUN apk add git --no-cache \
&& go mod download \
&& CGO_ENABLED=0 go build -o golang-sdk .
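For reference, apk add --no-cache is roughly equivalent to the older pattern of updating the index and deleting it afterwards, so nothing extra needs cleaning up:
RUN apk update && apk add git && rm -rf /var/cache/apk/*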
Even better: do a two-stage build:
FROM golang:latest AS build
WORKDIR /go/src/github.com/you/project/
RUN [yourstuff]
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /usr/local/bin
COPY --from=build /go/src/github.com/you/project/app .
CMD ["/usr/local/bin/app"]
This way, you can do all the stuff you like while building without worrying about image size, and still end up with the smallest possible image for app.
I am trying to reduce the time it takes to build a Docker image for a React app; the app should be served statically, without server-side rendering.
Right now it takes around 5-10 minutes to create an image, and the image size on my local machine is around 1.5 GB! The other issue is that even on the second build, after I change something in the code, it doesn't use any cache.
I am looking for a way to cut both the time and the size. Here is my Dockerfile after a lot of changes:
# Production and dev build
FROM node:14.2.0-alpine3.10 AS test1
RUN apk update
RUN apk add \
build-base \
libtool \
autoconf \
automake \
jq \
openssh \
libexecinfo-dev
ADD package.json package-lock.json /app/
# set working directory
WORKDIR /app
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
ADD . /app/
RUN rm -rf node_modules
RUN npm install --production
# copy production node_modules aside, to prevent collecting them
RUN cp -R node_modules prod_node_modules
# install ALL node_modules, including 'devDependencies'
RUN npm install
RUN npm install react-scripts#3.4.1 -g --silent
RUN npm run build
RUN rm -rf node_modules
RUN cp -R prod_node_modules node_modules
#FROM node:13.14.0
FROM test1
# copy app sources
COPY --from=test1 /app/build .
COPY --from=test1 /app/env-config.js .
# serve is what we use to run the web application
RUN npm install -g serve
# remove the sources & other needless stuff
RUN rm -rf ./src
RUN rm -rf ./prod_node_modules
# Add bash
RUN apk add --no-cache bash
CMD ["/bin/bash", "-c", "serve -s build"]
You're hitting two basic dynamics here. The first is that your image contains a fairly large amount of build-time content, including at least some parts of a C toolchain; since your run-time "stage" is built FROM the build stage as-is, it brings the entire build toolchain along with it. The second is that each RUN command produces a new Docker layer holding the differences from the previous layer, so RUN commands can only make the image larger. In particular, RUN rm -rf ... makes the image slightly larger and does not result in any space savings.
You can use a multi-stage build to improve this. Each FROM line causes docker build to start over from some specified base image, and you can COPY --from=... previous build stages. I'd do this in two stages, a first stage that builds the application and a second stage that runs it.
# Build stage:
FROM node:14.2.0-alpine3.10 AS build
# Install OS-level dependencies (including C toolchain)
RUN apk update \
&& apk add \
build-base \
libtool \
autoconf \
automake \
jq \
openssh \
libexecinfo-dev
# set working directory
WORKDIR /app
# install app dependencies
# (copy _just_ the package.json here so Docker layer caching works)
COPY package.json package-lock.json ./
RUN npm install
# build the application
COPY . ./
RUN npm run build
# Final stage:
FROM node:14.2.0-alpine3.10
# set working directory
WORKDIR /app
# install dependencies
COPY package.json package-lock.json ./
RUN npm install --production
# get the build tree
COPY --from=build /app/build/ ./build/
# explain how to run the application
ENTRYPOINT ["npx"]
CMD ["serve", "-g", "build"]
Note that when we get to the second stage, we run npm install --production on a clean Node installation; we don't try to shuffle back and forth between dev and prod dependencies. Rather than trying to RUN rm -rf src, we just don't COPY it into the final image.
This also requires making sure you have a .dockerignore file that contains node_modules (which will reduce build times and avoid some potential conflicts; RUN npm install will recreate it inside the image). If you need react-scripts or serve, they should be listed in your package.json file.
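A minimal .dockerignore for this layout could look like this (the exact entries depend on your project):
node_modules
build
.git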
I have created an image from the following Dockerfile.
FROM alpine
WORKDIR /usr/src/app
RUN apk add nodejs-current
RUN apk add nodejs-npm
RUN npm install pm2 -g
COPY process.yaml .
CMD pm2 start process.yaml --no-daemon --log-date-format 'DD-MM HH:mm:ss.SSS'
process.yaml looks like this:
- script: ./run-services.sh
watch : false
But run-services.sh does not run in my container. What is the problem?
The problem is that bash is not installed by default in Alpine, and pm2 runs script files via the bash command. So there are two ways to solve the problem:
Change the default pm2 interpreter from bash to /bin/sh:
- script: ./run-services.sh
interpreter: /bin/sh
watch : false
Install bash in Alpine, so the Dockerfile changes as follows:
FROM alpine
RUN apk update && apk add bash
WORKDIR /usr/src/app
RUN apk add nodejs-current
RUN apk add nodejs-npm
RUN npm install pm2 -g
COPY process.yaml .
CMD pm2 start process.yaml --no-daemon --log-date-format 'DD-MM HH:mm:ss.SSS'
I have this script:
echo building deployme for branch $SourceBranch
rm -Rf project1
rm -Rf project2
git clone ssh://git@gitlab.yyy.com:2222/xxx/project1.git
cd project1
mvn --settings .m2/settings.xml clean package
cd ..
git clone ssh://git@gitlab.yyy.com:2222/xxx/project2.git
cd project2
mvn --settings .m2/settings.xml clean package
which works in Jenkins.
Is there an easy/fast way to execute this in GitLab CI to test the process? Thanks!
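A minimal .gitlab-ci.yml sketch that runs the same commands in a job might look like this (the image, stage, and SSH_PRIVATE_KEY variable names are assumptions; you would need to register the key as a CI/CD variable and have the settings.xml files in the repositories):
build:
  stage: build
  image: maven:3-openjdk-11
  before_script:
    # load the deploy key so the ssh:// clones work inside the job
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    - mkdir -p ~/.ssh
    - ssh-keyscan -p 2222 gitlab.yyy.com >> ~/.ssh/known_hosts
  script:
    - git clone ssh://git@gitlab.yyy.com:2222/xxx/project1.git
    - (cd project1 && mvn --settings .m2/settings.xml clean package)
    - git clone ssh://git@gitlab.yyy.com:2222/xxx/project2.git
    - (cd project2 && mvn --settings .m2/settings.xml clean package)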