I have a docker-compose setup on Windows with cap_add and privileged set so that I can mount a Windows network share via CIFS in the Dockerfile (the image runs Debian).
During the build I always get "Unable to apply new capability set". However, if I get into the running container with bash, I can mount without any problem.
Here is the relevant Dockerfile code:
RUN apt-get update && apt-get install -y cifs-utils
RUN mkdir /opt/shared
# RUN mount -v -t cifs //10.20.25.14/external /opt/shared -o "user=username,password=mypass-,domain=mydm,sec=ntlm"
and this is the docker-compose part:
anaconda:
  privileged: true
  image: piano_anaconda:latest
  security_opt:
    - seccomp:unconfined
  cap_add:
    - SYS_ADMIN
    - DAC_READ_SEARCH
  build:
    context: .
    dockerfile: dockerfile_anaconda
I have read this as well, but it did not really help with mounting within the Dockerfile.
What am I missing here?
Thanks in advance to all for your help.
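One detail worth knowing here: cap_add, privileged and security_opt apply only to running containers, not to docker build, so a RUN mount step has no way to get SYS_ADMIN during the build. A common workaround, sketched below using the share and credentials from the question (entrypoint.sh is a hypothetical name), is to move the mount into an entrypoint script so it runs at container start, where the capabilities do apply:
#!/bin/bash
# entrypoint.sh (hypothetical): do the CIFS mount at container start,
# when the capabilities granted by docker-compose are actually available,
# then hand control to the container's main command.
set -e
mount -v -t cifs //10.20.25.14/external /opt/shared \
  -o "user=username,password=mypass-,domain=mydm,sec=ntlm"
exec "$@"
And in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]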
What am I trying to do?
I want to run my containerized Spring Boot app, which runs on amd64 systems, under Docker on a Raspberry Pi 4. How can I fix my problem?
What OS is running on the Raspberry Pi?
I've installed Ubuntu Server 20.04.2 LTS for arm64 architectures using Raspberry Pi Imager v1.6.1 on a Raspberry Pi 4.
What steps have I taken to fix the problem? (The problem persists.)
The default version does not work on my Raspberry Pi, so I've tried a couple of things, described below.
I changed the FROM line in my Dockerfile from openjdk:15-jdk-slim to arm64v8/openjdk:17, as you can see:
FROM arm64v8/openjdk:17
COPY . /projects/red-dir
WORKDIR /projects/red-dir
ARG JAR_FILE=target/*.jar
COPY ${JAR_FILE} /projects/red-dir/my_red_app.jar
ENTRYPOINT ["java","-jar","/projects/red-dir/my_red_app.jar"]
Afterwards, I built and pushed my app using Maven and Docker:
$ ./mvnw clean package -Dmaven.test.skip=true && java -jar target/my_red_app.jar
$ docker build -t user/my_red_app:v1.0.0 .
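Note that the two commands above build the image but do not themselves push it; the push step was presumably something like:
$ docker push user/my_red_app:v1.0.0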
Then, since my app depends on MongoDB, I changed that image from mongo to arm64v8/mongo in docker-compose.yml, as you can see below:
version: "3"
services:
mongodb:
container_name: mongodb
image: arm64v8/mongo
restart: always
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD: 12345
networks:
- shared-net
colour_app:
container_name: my_red_app
image: user/my_red_app
restart: always
ports:
- 7070:7070
depends_on:
- mongodb
networks:
- shared-net
networks:
shared-net:
driver: bridge
Things that work and things that don't work
MongoDB is running; I can reach it via MongoDB Compass. But my Spring Boot app doesn't work. When I run docker-compose up, I encounter output like this:
...
...
colour_app | standard_init_linux.go:219: exec user process caused: exec format error
colour_app | standard_init_linux.go:219: exec user process caused: exec format error
colour_app | standard_init_linux.go:219: exec user process caused: exec format error
colour_app | standard_init_linux.go:219: exec user process caused: exec format error
Let me know what I missed. Thank you for reading.
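As a quick diagnostic for errors like this: exec format error usually means the image was built for a different CPU architecture than the host, and the two can be compared directly (a sketch, using the image name from the commands above):
docker image inspect --format '{{.Os}}/{{.Architecture}}' user/my_red_app
uname -m   # on the Pi running arm64 Ubuntu this prints aarch64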
I've solved my problem and have tried to put together a simple guide for anyone who encounters this problem.
BUILD FOR OTHER OS (multi-architecture / buildx command)
Turn On "experimental" feature's on Docker Settings.
For Linux OS:
sudo nano /etc/docker/daemon.json
Add the following content to it:
{
"experimental": true
}
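Then restart the daemon so the change takes effect (assuming a systemd-based distribution):
sudo systemctl restart docker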
For Windows 10 / macOS:
Open the Docker Desktop application.
Go to Settings.
Select the Docker Engine tab on the left side.
Find "experimental": false.
Change it to "experimental": true.
Restart Docker.
Build the multi-architecture image. (Important: the base image you are using must be available for the platforms you want to target.)
Building for just a SINGLE platform:
You can use the load or push options.
--load means that Docker saves the image to the local disk:
docker buildx build --load --platform linux/arm64 -t <dockerhub_username>/<repository_name>:<tag_name> .
--push means that Docker doesn't save the image to the local disk but pushes it to a registry (Docker Hub):
docker buildx build --push --platform linux/arm64 -t <dockerhub_username>/<repository_name>:<tag_name> .
--load and --push cannot be set together.
Building for MULTIPLE platforms:
You can only use the push option when building for multiple platforms.
--push means that Docker doesn't save the image to the local disk but pushes it to a registry (Docker Hub):
docker buildx build --push --platform linux/amd64,linux/arm64,linux/ppc64le -t <dockerhub_username>/<repository_name>:<tag_name> .
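Depending on your installation, buildx may also need a builder instance that supports multi-platform builds before the commands above will work; a minimal sketch (the builder name is arbitrary):
docker buildx create --name multiarch-builder --use   # create and select a new builder
docker buildx inspect --bootstrap                     # start it and list the supported platforms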
I have a Spring Boot application which I am trying to dockerize for the first time. I am using Docker version 20.10.1 and my host PC is Ubuntu 20.04.
This Spring Boot application has a data directory, whose contents are created while the application is running. I want to access this data from the host operating system; that is why I am using a volume.
When I try to mount my container to a named volume or a host volume, it always creates an anonymous volume regardless of the command I type.
Here is my Dockerfile:
FROM openjdk:15
COPY target/lib/* /usr/src/app/lib/
COPY target/core-api-7.3.6.jar /usr/src/app/lib/core-api-7.3.6.jar
COPY config/application.properties /usr/src/app/config/application.properties
COPY data/poscms/config/* /usr/src/app/data/poscms/config/
WORKDIR /usr/src/app
ENTRYPOINT ["java", "-jar", "lib/core-api-7.3.6.jar"]
VOLUME /usr/src/app/data
/usr/src/app/data is the directory where the core-app.jar application will create its runtime data; I need to access this data from my host PC.
The following is the command for building the image:
docker build -t core-app:5.0 .
Then I create and run a container using the following command:
docker run -it -d -p 7071:7071 core-app:5.0 -v /home/bob/data/:/usr/src/app/data
When I check the volumes by running the following command:
docker volume ls
I can see an anonymous volume being created by this container,
and my host path, /home/kapila/data/, is empty; container data is not written to the host path.
I experience the same behaviour with a named volume as well.
I created a named volume using the following commands:
docker volume create tmp
docker run -it -d -p 7071:7071 core-app:5.0 -v tmp:/usr/src/app/data
and still Docker creates an anonymous volume, and data is not written to the tmp volume.
My host PC is an Ubuntu PC. Could someone point out what I am doing wrong here?
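One thing to check: docker run treats everything after the image name as arguments to the container's entrypoint, so in the commands above the -v flag is passed to the Java application and silently ignored, and the VOLUME instruction in the Dockerfile then falls back to creating an anonymous volume. Moving the flag before the image name should give the expected mounts:
docker run -it -d -p 7071:7071 -v /home/bob/data/:/usr/src/app/data core-app:5.0
docker run -it -d -p 7071:7071 -v tmp:/usr/src/app/data core-app:5.0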
I do something like this:
In your project root, have these files pertaining to Docker, as required:
1. Dockerfile
2. docker-compose.yml
3. docker-env-preview.env
Dockerfile content
FROM openjdk:8-jdk-alpine
ARG jarfilepath
RUN mkdir /src
WORKDIR /src
VOLUME /src/tomcat
ADD $jarfilepath yourprojectname.jar
docker-compose.yml content
version: '3'
services:
  project-name:
    container_name: project-name-service
    build:
      context: .
      args:
        jarfilepath: ./target/project-0.0.1.jar
    env_file:
      - docker-env-preview.env
    ports:
      - "8831:8831"
      - "5005:5005"
    networks:
      - project-name_subnet
    command: java -jar -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 projectname.jar --spring.profiles.active=preview
networks:
  project-name_subnet:
    external: true
docker-env-preview.env content
This file contains the values of your environment variables. application.properties can read this file to fetch the values, e.g. buildserver.ip=${BUILD_SERVER_DOMAIN}. Basically, you define whatever you need, like the example below.
GARBABE_SERVER_DOMAIN=h-db-preview
GARBABE_SERVER_PORT=5422
GARBABE_DB=projectdb
GARBABE_USER=user
GARBABE_PASSWORD=pwd
JPA_DDL_AUTO=validate
JPA_DIALECT=org.hibernate.dialect.PostgreSQLDialect
JPA_SHOW_SQL=false
JPA_SE_SQL_COMMENTS=true
JPA_FORMAT_SQL=false
JPA_NON_CONTEXTUAL_CREATION=true
APP_NAME=project-name-service
BUILD_SERVER_METHOD=http
BUILD_SERVER_DOMAIN=7.8.9.4
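For illustration, a hypothetical application.properties fragment consuming these variables might look like this (the property keys are standard Spring Boot ones; the mapping itself is an assumption, not taken from the project above):
spring.datasource.url=jdbc:postgresql://${GARBABE_SERVER_DOMAIN}:${GARBABE_SERVER_PORT}/${GARBABE_DB}
spring.datasource.username=${GARBABE_USER}
spring.datasource.password=${GARBABE_PASSWORD}
spring.jpa.hibernate.ddl-auto=${JPA_DDL_AUTO}
spring.jpa.show-sql=${JPA_SHOW_SQL}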
Commands to execute:
mvn clean package (if you use Maven)
docker-compose up -d --build (then execute docker ps to check the details of the running container)
To view the logs: sudo docker logs <project-name-service> -f
To get into the container console: docker exec -it <project-name-service> bash
I was able to fix the issue, and the only change I made to get it working was to change the base image from
FROM openjdk:15
to
FROM adoptopenjdk/openjdk15:ubi
and now named and host volume mounts are working as expected. I am not sure what is wrong with the official openjdk:15 image.
I am on my MacBook terminal, trying to get a Jenkins container up and running on my local machine.
I first created a docker-compose.yml:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
networks:
  net:
As you can see in the volumes section, I have defined the jenkins_home folder under my current directory as the volume for Jenkins data.
Then, under my current directory on my machine, I created a folder named jenkins_home. Here is my current directory:
-rw-r--r-- 1 john 1349604816 220 Sep 4 00:08 docker-compose.yml
drwxr-xr-x 2 john 1349604816 64 Sep 4 00:06 jenkins_home
As you can see, I need to change the ownership of the jenkins_home folder in order for the jenkins container to be able to write data to it (because the UID is not 1000). So I executed the command:
sudo chown 1000:1000 jenkins_home/
Then, my current directory looks like this:
-rw-r--r-- 1 john 1349604816 220 Sep 4 00:08 docker-compose.yml
drwxr-xr-x 2 1000 1000 64 Sep 4 00:06 jenkins_home
After that I ran my container with the command docker-compose up, but I ended up with this error:
Starting jenkins ... done
Attaching to jenkins
jenkins | touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
jenkins | Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
jenkins exited with code 1
Why do I still get the permission error after changing the ownership of the jenkins_home folder under my current directory on my machine?
P.S. I understand there could be other ways to get a Jenkins container running, but I would still like to understand what is wrong with my approach and hopefully get it working.
Jenkins needs to create or use an existing jenkins_home directory.
When Docker sees that the jenkins_home volume doesn't exist on your machine, it creates it with your macOS UID & GID.
If you create the jenkins_home folder yourself, you must keep your current directory permissions and not change them.
The UID Docker runs with isn't the same as your machine's; they may have different UID and GID.
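A quick way to see which UID the container actually runs as is to ask the image directly (a sketch, assuming the stock jenkins/jenkins image):
docker run --rm --entrypoint id jenkins/jenkins
# typically prints: uid=1000(jenkins) gid=1000(jenkins) groups=1000(jenkins)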
Linux namespaces provide isolation for running processes, limiting
their access to system resources without the running process being
aware of the limitations. For more information on Linux namespaces,
see Linux namespaces.
The best way to prevent privilege-escalation attacks from within a
container is to configure your container’s applications to run as
unprivileged users. For containers whose processes must run as the
root user within the container, you can re-map this user to a
less-privileged user on the Docker host. The mapped user is assigned a
range of UIDs which function within the namespace as normal UIDs from
0 to 65536, but have no privileges on the host machine itself.
There is a wonderful video explaining how Docker works with namespaces.
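If you do want the remapping described above, the daemon supports it via the userns-remap setting in /etc/docker/daemon.json (a sketch; "default" tells Docker to create and use a dockremap user):
{
  "userns-remap": "default"
}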
Does the actual jenkins user/group exist on the Mac?
This is what I do on my Linux servers, where:
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
On my alpine server:
addgroup -g ${gid} ${group}
adduser -u ${uid} -G ${group} -s /bin/bash -D ${user}
to become
addgroup -g 1000 jenkins
adduser -u 1000 -G jenkins -s /bin/bash -D jenkins
On my centos8 server:
groupadd -g ${gid} ${group}
useradd -u ${uid} -g ${group} -s /bin/bash -m ${user}
to become
groupadd -g 1000 jenkins
useradd -u 1000 -g jenkins -s /bin/bash -m jenkins
then:
sudo chown jenkins:jenkins jenkins_home/
I do not use a Mac, but I presume it is similar.
UPDATE
Based on all the above, try the following:
docker-compose.yml
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - net
networks:
  net:
I have added the following:
port 50000 (only needed if you want to attach build slave servers, as opposed to just running builds on the master)
volume /var/run/docker.sock (to be able to use the Docker daemon from Jenkins, you need to mount this socket)
!!DO THE FOLLOWING!! Delete the original jenkins_home directory that you created before. Now run docker-compose up: since the host volume directory does not exist, Docker will create the required directory on the host based on the configuration in the docker-compose.yml (in this case $PWD/jenkins_home), so it will now have the correct ownership and permissions for the jenkins container to use it.
If that doesn't work, make the jenkins container run in privileged mode, as shown below:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    privileged: true
    user: root
    ports:
      - 8080:8080
      - 50000:50000
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - net
networks:
  net:
I'm trying to get a setup where I can deploy a Docker container with a Java app installed on it, where the app can write to a folder in the container and those files appear on the host machine.
I believe that a --mount with type=bind is the correct solution for this. However, I cannot seem to get the mount to show up when I run docker inspect MonitorContainer.
My full setup is as follows:
Dockerfile:
FROM openjdk:14.0.2-jdk-nanoserver
WORKDIR /monitor
COPY target/monitor.jar ./
ENTRYPOINT ["java", "-jar", "monitor.jar"] --restart unless-stopped
Build Command:
docker build -t monitor .
and my Run command:
docker run --restart unless-stopped --name MonitorContainer monitor --mount type=bind,source=C:/test,target=/monitor
With the above, I'm attempting to have the folder C:/test available for reading and writing inside the container at the path /monitor. However, when I run docker inspect MonitorContainer, I see that the Mounts section is empty, so it appears the mount is not created.
I receive no errors, so it's all rather confusing.
I would appreciate any help; I'm a complete novice at this, so please be nice :)
p.s. This question differs from Docker bind mount usage because it's dealing with Volumes, not Bind Mounts (despite the title.)
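One observation on the run command above: docker run only parses options that appear before the image name, so --mount placed after monitor is handed to the Java entrypoint as a program argument rather than applied as a mount, which would explain the empty Mounts section without any error. (The --restart unless-stopped fragment after ENTRYPOINT in the Dockerfile is likewise a docker run flag, not Dockerfile syntax.) A reordered command might look like:
docker run --restart unless-stopped --name MonitorContainer --mount type=bind,source=C:/test,target=/monitor monitor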
I have up-and-running containers and I wish to execute a database backup. Apparently, a simple command in the container such as sudo mkdir new_folder results in: bash: sudo: command not found
What have I tried (on an intuitive level)? I accessed one of the running containers with docker exec -i -t 434a38fedd69 /bin/bash and ran:
apt-get update
apt-get install sudo
When I exited back out of the container and tried to perform sudo mkdir new_folder, I got the same message: bash: sudo: command not found.
Baresp@adhg MINGW64 /c/Program Files/Docker Toolbox/postgre
$ mkdir new_folder
mkdir: cannot create directory ‘new_folder’: Permission denied
Baresp@adhg MINGW64 /c/Program Files/Docker Toolbox/postgre
$ sudo mkdir new_folder
bash: sudo: command not found
BTW, I'm not sure if this is relevant, but the docker-compose file I was using is:
version: '2'
services:
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: changeme
      PGDATA: /data/postgres
    volumes:
      - /data/postgres:/data/postgres
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
    container_name: xx_postgres
  pgadmin:
    links:
      - postgres:postgres
    image: fenglc/pgadmin4
    volumes:
      - /data/pgadmin:/root/.pgadmin
    ports:
      - "5050:5050"
    networks:
      - postgres
    restart: unless-stopped
    container_name: xx_pgadmin
networks:
  postgres:
    driver: bridge
First, nothing you do in a docker exec is persistent outside of that particular running container (a copy of the image), so if you want future containers run from that image to include sudo, those apt-get commands need to go into the Dockerfile that builds the image. Since you're using docker-compose, that would require you to first write a Dockerfile and specify its location in the YAML.
Second, what do you mean by "exit back to docker"? Nothing you do inside a container will have any effect on the system Docker itself is running on, but it looks like you're running software-install commands inside a Docker container and then expecting the newly installed software to be available outside the container, on the Windows system that is running Docker.
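As a sketch of the first point, assuming the stock postgres image (which is Debian-based) and that you really do want sudo baked in, the Dockerfile could be as small as this, with the compose service then using build: . instead of image: postgres:
FROM postgres
# install sudo into the image so every container started from it has the tool
RUN apt-get update \
    && apt-get install -y --no-install-recommends sudo \
    && rm -rf /var/lib/apt/lists/*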
To do a backup of the postgres database in the container, you first have to enter the container (similar to how you do it):
docker exec -it postgres bash
(substitute postgres with the real container name you get from docker-compose ps)
Now you are in the container as root. That means you don't need sudo for anything. Next, create your backup folder:
mkdir /tmp/backup
Now run the backup command; from a quick Google search I found the following (you might know better):
pg_dumpall > /tmp/backup/filename
Then exit the shell within the container by typing exit. From your host system, run the following to copy the backup file out of the container:
docker cp postgres:/tmp/backup/filename .
(postgres is your container name again)
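As a variation, the whole backup can be done in one step from the host, without an interactive shell (assuming the container name xx_postgres and the postgres superuser from the compose file above):
docker exec -t xx_postgres pg_dumpall -U postgres > backup.sql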