Mount volume not working in docker - maven

I am new to Docker. I am using Windows 10, have installed Docker on my machine, and am working with Docker via PowerShell.
My problem is that I can't get my files mounted via the docker-compose.yml file.
My file looks like this:
version: '2'
services:
  maven:
    image: maven:3.3.3-jdk-8
    volumes:
      - ~/.m2:/root/.m2
      - /d/projects/test/:/code
    working_dir: /code
    links:
      - mongodb
    entrypoint: /code/wait-for-it.sh mongodb:27017 -- mvn clean install
    environment:
      - mongodb_hosts=mongodb
  mongodb:
    image: mongo:3.2.4
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092"
This test project uses Maven and has a lot of files in it. But it gives an error like:
ERROR: for maven Cannot start service maven: oci runtime error: exec: "/code/wait-for-it.sh": stat /code/wait-for-it.sh: no such file or directory
ERROR: Encountered errors while bringing up the project.
I have also shared my local drive in the Docker settings, but the mount problem is still there.
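For reference, here is a hedged sketch of the same volume entry written with the drive-letter syntax that Docker Desktop for Windows typically accepts once the drive is shared (this is an assumption about the path format, not a verified fix):

volumes:
  - ~/.m2:/root/.m2
  # assumption: Docker Desktop resolves the drive-letter form when drive D is shared
  - D:/projects/test:/code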
Please help me, thanks in advance.

Related

How to read/write the file system in Docker (Windows)

I'm trying to deploy my Java Spring Boot application into a Windows Docker container (WSL support is enabled).
When I run my yml file, the Java server comes up and runs fine.
But I have functionality where I need to access files from the local disks C and D on the host machine.
Whenever I try to access a file with a path like "D:\Folder\example.pdf", I get a FileNotFoundException.
Here is my docker-compose.yml file:
version: "3"
services:
java_spring_backend:
image: java_spring_backend:latest
restart: unless-stopped
container_name: java_spring_backend
build: ./server/
ports:
- "8081:8080"
You must mount the host folder into your Docker container, for example:
version: "3"
services:
java_spring_backend:
image: java_spring_backend:latest
restart: unless-stopped
container_name: java_spring_backend
build: ./server/
volumes:
- "./Folder:/FolderInsideDocker"
Or as an absolute path, e.g. for WSL: /mnt/c/Folder:/FolderInsideDocker
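Since the question needs files from both local disks C and D, here is a hedged sketch of how the same service could mount both under WSL (the folder names are just the example names used above, and the exact host paths are an assumption):

version: "3"
services:
  java_spring_backend:
    image: java_spring_backend:latest
    restart: unless-stopped
    container_name: java_spring_backend
    build: ./server/
    ports:
      - "8081:8080"
    volumes:
      # assumption: the host drives are exposed to WSL under /mnt/c and /mnt/d
      - /mnt/c/Folder:/FolderInsideDockerC
      - /mnt/d/Folder:/FolderInsideDockerD

The application then has to open the container path (for example /FolderInsideDockerD/example.pdf) rather than the Windows path "D:\Folder\example.pdf".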
Link to Documentation

Docker shared volume not working in MacOs

I have a docker-compose.yml file. It works fine on Windows 10, but whenever I try to run it on macOS it doesn't work, especially the shared volumes.
Here is the content of my docker-compose.yml file and directory structure:
version: '3'
services:
  database:
    image: mongo
    container_name: pcore-database
    ports:
      - '27017:27017'
  node-server:
    image: node
    container_name: pcore-node-server
    volumes:
      - ./node-services :/usr/app/node-services
    working_dir: /usr/app/node-services
    command: npm run dev
    ports:
      - '3000:3000'
    links:
      - database
      - nginx-server
    depends_on:
      - database
  apache-server:
    image: webdevops/php-apache
    container_name: pcore-apache-server
    working_dir: /app
    volumes:
      - ./php-services :/app
    ports:
      - '8000:80'
Check the node-server service and nginx-server.
Now when I run the command docker-compose up, it creates additional directories with the same name and throws an error.
Check the error and the additional directories it created.
I don't know what's going on. It works fine on Windows 10, but on macOS it creates additional directories and does not share the volumes. Can someone guide me?
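One detail that stands out in the compose file above (an observation, not a confirmed fix from this thread): both volume entries have a space before the colon, so the host path is literally "./node-services " with a trailing space, which could explain the extra directories. A hedged sketch of the same mappings without the space:

services:
  node-server:
    volumes:
      # no space before the colon
      - ./node-services:/usr/app/node-services
  apache-server:
    volumes:
      - ./php-services:/app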

Docker - Problem with java netty_tcnative

I am trying to dockerize 4 services and I have a problem with one of them. In particular, this service is implemented as a Spring Boot service and uses the Google Vision API. When building the images and starting the containers everything works fine, until it gets to the part where the Google Vision API code is used. I then get the following runtime errors when running the containers:
netty-tcnative unavailable (this may be normal)
java.lang.IllegalArgumentException: Failed to load any of the given libraries: [netty_tcnative_linux_x86_64, netty_tcnative_linux_x86_64_fedora, netty_tcnative_x86_64, netty_tcnative]
at io.grpc.netty.shaded.io.netty.util.internal.NativeLibraryLoader.loadFirstAvailable(NativeLibraryLoader.java:104) ~[grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.netty.handler.ssl.OpenSsl.loadTcNative(OpenSsl.java:526) ~[grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.netty.handler.ssl.OpenSsl.<clinit>(OpenSsl.java:93) ~[grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.defaultSslProvider(GrpcSslContexts.java:244) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.configure(GrpcSslContexts.java:171) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.GrpcSslContexts.forClient(GrpcSslContexts.java:120) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder.buildTransportFactory(NettyChannelBuilder.java:385) [grpc-netty-shaded-1.18.0.jar!/:1.18.0]
at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:435) [grpc-core-1.18.0.jar!/:1.18.0]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel(InstantiatingGrpcChannelProvider.java:223) [gax-grpc-1.42.0.jar!/:1.42.0]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel(InstantiatingGrpcChannelProvider.java:164) [gax-grpc-1.42.0.jar!/:1.42.0]
at com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel(InstantiatingGrpcChannelProvider.java:156) [gax-grpc-1.42.0.jar!/:1.42.0]
at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:157) [gax-1.42.0.jar!/:1.42.0]
at com.google.cloud.vision.v1.stub.GrpcImageAnnotatorStub.create(GrpcImageAnnotatorStub.java:84) [google-cloud-vision-1.66.0.jar!/:1.66.0]
at com.google.cloud.vision.v1.stub.ImageAnnotatorStubSettings.createStub(ImageAnnotatorStubSettings.java:120) [google-cloud-vision-1.66.0.jar!/:1.66.0]
at com.google.cloud.vision.v1.ImageAnnotatorClient.<init>(ImageAnnotatorClient.java:136) [google-cloud-vision-1.66.0.jar!/:na]
at com.google.cloud.vision.v1.ImageAnnotatorClient.create(ImageAnnotatorClient.java:117) [google-cloud-vision-1.66.0.jar!/:na]
at com.google.cloud.vision.v1.ImageAnnotatorClient.create(ImageAnnotatorClient.java:108) [google-cloud-vision-1.66.0.jar!/:na]
Complete log file of the error can be found in this link:
Complete Log File.
Here are my docker-compose.yml file and the Dockerfile of the service causing the problem:
Dockerfile
FROM maven:3.6.0-jdk-8-alpine
WORKDIR /app/back
COPY src src
COPY pom.xml .
RUN mvn clean package
FROM openjdk:8-jdk-alpine
RUN apk add --no-cache curl
WORKDIR /app/back
COPY --from=0 /app/back/target/imagescanner*.jar ./imagescanner.jar
COPY --from=0 /app/back/target/classes/API-Key.json .
ENV GOOGLE_APPLICATION_CREDENTIALS ./API-Key.json
EXPOSE 8088
ENTRYPOINT ["java", "-jar", "./imagescanner.jar"]
docker-compose.yml
version: '3'
services:
  front:
    container_name: demoLab_front
    build: ./front
    image: demolab/front:latest
    expose:
      - "3000"
    ports:
      - "8087:3000"
    restart: always
  back:
    container_name: demoLab_backGCV
    build: ./backGCV
    image: demolab/backgcv:latest
    depends_on:
      - lab
    ports:
      - "8088:8088"
    restart: always
  lab:
    container_name: demoLab_labGCV
    build: ./lab
    image: demolab/labgcv:latest
    expose:
      - "8089"
    ports:
      - "8089:8089"
    restart: always
  sift:
    container_name: demoLab_labSIFT
    build: ./detect-label-service
    image: demolab/labsift:latest
    expose:
      - "5000"
    ports:
      - "5000:5000"
    restart: always
EDIT
After some googling I found out that gRPC Java examples do not work on Alpine Linux, since the required libnetty-tcnative-boringssl-static depends on glibc. Alpine uses musl libc, and application startup fails with a message similar to mine.
I found a project that tries to build the right images, but it seems broken for a lot of people (the build didn't work in my case).
Problem solved by replacing this line of the Dockerfile:
FROM openjdk:8-jdk-alpine
with this line:
FROM koosiedemoer/netty-tcnative-alpine
The problem: Suppressed: java.lang.UnsatisfiedLinkError: no netty_tcnative in java.library.path, on an Alpine container.
There is a simple workaround:
apk add libressl
apk add openssl
ln -s /lib/ld-musl-x86_64.so.1 /lib/libcrypt.so.1

Running Sonarqube with docker-compose using bind mount volumes

I’m trying to run Sonarqube in a Docker container on a CentOS 7 server using docker-compose. Everything works as expected using named volumes, as configured in this docker-compose.yml file:
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled_plugins:
postgresql:
postgresql_data:
However, my /var/lib/docker/volumes directory is not large enough to house the named volumes. So, I changed the docker-compose.yml file to use bind mount volumes as shown below.
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- /data/sonarqube/conf:/opt/sonarqube/conf
- /data/sonarqube/data:/opt/sonarqube/data
- /data/sonarqube/extensions:/opt/sonarqube/extensions
- /data/sonarqube/bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- /data/postgresql:/var/lib/postgresql
- /data/postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
However, after running docker-compose up -d, the app starts up but none of the bind mount volumes are written to. As a result, the Sonarqube plugins are not loaded and the sonar PostgreSQL database is not initialized. I thought it might be an SELinux issue, but I temporarily disabled it with no success. I’m unsure what to look at next.
I think my answer from "How to persist configuration & analytics across container invocations in Sonarqube docker image" would help you as well.
For good measure I have also pasted it in here:
.....
Notice this line SONARQUBE_HOME in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
  ...
  ...
  -e SONARQUBE_HOME=/sonarqube-data \
  -v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data, and other folders and store data there, as needed.
Or with Kubernetes, in your deployment YAML file, do:
...
...
env:
  - name: SONARQUBE_HOME
    value: /sonarqube-data
...
...
volumeMounts:
  - name: app-volume
    mountPath: /sonarqube-data
And the name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
This again will make Sonarqube use the /sonarqube-data mountPath for creating the extensions, conf, and other folders, and save data there.
And voilà, your Sonarqube data is thereby persisted.
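Since the question uses docker-compose rather than plain docker run or Kubernetes, here is a hedged sketch of the same SONARQUBE_HOME idea translated into the question's compose file (an adaptation of this answer, not something verified in this thread; /data/sonarqube is the host path from the question):

version: "3"
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      - sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
      # assumption: point SONARQUBE_HOME at a single bind-mounted directory
      - SONARQUBE_HOME=/sonarqube-data
    volumes:
      - /data/sonarqube:/sonarqube-data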
I hope this will help others.
N.B. Notice that the YAML and Docker run examples are not exhaustive. They focus on the issue of persisting Sonarqube data.
Try it out BobC and let me know.
Have a great day.
I hope the configuration below will help you do this with a single command.
Create a new docker-compose file named docker-compose.yaml:
version: "3"
services:
sonarqube:
image: sonarqube:8.2-community
depends_on:
- db
ports:
- "9000:9000"
networks:
- sonarqubenet
environment:
SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonarqube
SONAR_JDBC_USERNAME: sonar
SONAR_JDBC_PASSWORD: sonar
volumes:
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_logs:/opt/sonarqube/logs
- sonarqube_temp:/opt/sonarqube/temp
restart: on-failure
container_name: sonarqube
db:
image: postgres
networks:
- sonarqubenet
environment:
POSTGRES_USER: sonar
POSTGRES_PASSWORD: sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
restart: on-failure
container_name: postgresql
networks:
sonarqubenet:
driver: bridge
volumes:
sonarqube_data:
sonarqube_extensions:
sonarqube_logs:
sonarqube_temp:
postgresql:
postgresql_data:
Then, execute the commands:
$ docker-compose up -d
$ docker container ps
Sounds like the container is running and, as you mentioned, Sonarqube starts up. When it starts, is it showing that it's using the H2 in-memory db? After running docker-compose up -d, use docker logs -f <container_name> to see what's happening on Sonarqube startup.
To simplify viewing your logs with a known name, I suggest you also add a container name to your Sonarqube service. For example, container_name: sonarqube.
Also, while I know the plan is to deprecate the use of environment variables for the username, password and jdbc connection, I've had better luck in docker-compose using environment variables rather than the corresponding property value. For the connection string, try: SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar without specifying the default port for postgres.
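For concreteness, a hedged sketch of what that suggestion looks like in the question's compose service (the username and password variable names are assumptions based on the older SONARQUBE_JDBC_* naming this answer refers to):

services:
  sonarqube:
    image: sonarqube
    environment:
      SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar
      # assumption: matching credential variables in the same naming scheme
      SONARQUBE_JDBC_USERNAME: sonar
      SONARQUBE_JDBC_PASSWORD: sonar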

How to have "RUN" command in docker-compose similar to dockerfile?

Dockerfile
FROM elasticsearch:2
RUN /usr/share/elasticsearch/bin/plugin install --batch cloud-aws
from https://www.elastic.co/blog/elasticsearch-docker-plugin-management
Can someone please help me add the ES plugin in the docker-compose file?
version: '2'
services:
  nitrogen:
    build: .
    ports:
      - "8000:8000"
    volumes:
      - ~/mycode:/mycode
    depends_on:
      - couchdb
      - elasticsearch
  elasticsearch:
    image: elasticsearch:1.7.5
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"
What is missing in the docker-compose above is the installation of the plugin.
I tried this, but it runs on the local machine instead of in the Docker container:
command: /usr/share/elasticsearch/bin/plugin install elasticsearch/elasticsearch-river-couchdb/2.6.0
You have to create your own Docker image (for example my-elasticsearch) with the Dockerfile you mentioned, and then refer to that image in docker-compose.yml.
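For example, a hedged sketch of what that could look like, assuming the Dockerfile shown above is placed in an ./elasticsearch/ directory next to docker-compose.yml (the directory layout is an assumption):

services:
  elasticsearch:
    # assumption: ./elasticsearch/ contains the Dockerfile with the plugin install RUN step
    build: ./elasticsearch
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
    ports:
      - "9200:9200"
      - "9300:9300"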
