I'm trying to set the local time in my Docker container to match my host's local time, so that my Spring Boot application uses it when saving data.
spring-boot app:
...
cpoWorkflowExecution.setStartDate(LocalDateTime.now());
...
Local localtime:
[ainacio@brsp1nf01 cpo-dockers]$ ls -l /etc/localtime
lrwxrwxrwx. 1 root root 39 Oct 6 07:42 /etc/localtime -> ../usr/share/zoneinfo/America/Sao_Paulo
Local timezone:
[ainacio@brsp1nf01 cpo-dockers]$ cat /etc/timezone
America/Sao_Paulo
Container localtime didn't work:
root@ffbd68eeaccd:/# ls -l /etc/localtime
lrwxrwxrwx. 1 root root 27 Nov 17 13:37 /etc/localtime -> /usr/share/zoneinfo/Etc/UTC
Container timezone worked:
root@ffbd68eeaccd:/# cat /etc/timezone
America/Sao_Paulo
Docker-compose:
version: '3'
services:
  cpo-executor:
    container_name: cpo-executor
    image: .../cpo-executor:1.103.3.1
    ports:
      - 8879:8879
    environment:
      - TZ=America/Sao_Paulo
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - cpo-network
    depends_on:
      - cpo-config-server
I've also tried setting the TZ environment variable, but that didn't work either.
I'm generating the Docker image with the Spring Boot Maven plugin:
mvn spring-boot:build-image -DskipTests -Dspring-boot.build-image.imageName=.../cpo-executor:1.103.3.1
What am I missing?
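If neither the bind mounts nor TZ take effect in the buildpack-built image, one option worth trying (a minimal sketch, not verified against this particular image) is to force the JVM's default zone directly via JAVA_TOOL_OPTIONS; LocalDateTime.now() will then use that zone regardless of /etc/localtime:
services:
  cpo-executor:
    environment:
      - TZ=America/Sao_Paulo
      - JAVA_TOOL_OPTIONS=-Duser.timezone=America/Sao_Paulo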
I am working on a Spring Boot project with Docker. I tried to mount a volume so I could access files generated by the Spring Boot application from my local directory. The data is generated in the Docker container, but I cannot find it in the local directory.
I have read many topics but none seems to be helpful.
Please assist; I am still new to Docker and would appreciate any suggestions.
I have tried to mount the volume directly in the Dockerfile, since there is a docker-compose file that runs the service alongside others. Below is what I have in my Dockerfile and docker-compose; a note on the VOLUME line follows the Dockerfile.
Dockerfile
FROM iron/java:1.8
EXPOSE 8080
ENV USER_NAME myprofile
ENV APP_HOME /home/$USER_NAME/app
#Test Script>>>>>>>>>>>>>>>>>>>>>>
#Modifiable
ENV SQL_SCRIPT $APP_HOME/SCRIPTS_TO_RUN
ENV SQL_OUTPUT_FILE $SQL_SCRIPT/data
ENV NO_OF_USERS 3
ENV RANGE_OF_SKILLS "1-4"
ENV HOST_PATH C:"/Users/user1/IdeaProjects/path/logs"
#>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
RUN adduser -S $USER_NAME
RUN mkdir $APP_HOME
RUN mkdir $SQL_SCRIPT
RUN chown $USER_NAME $SQL_SCRIPT
VOLUME $HOST_PATH: $SQL_SCRIPT
ADD myprofile-*.jar $APP_HOME/myprofile.jar
RUN chown $USER_NAME $APP_HOME/myprofile.jar
USER $USER_NAME
WORKDIR $APP_HOME
RUN sh -c 'touch myprofile.jar'
ENTRYPOINT ["sh", "-c","java -Djava.security.egd=file:/dev/./urandom -jar myprofile.jar -o $SQL_OUTPUT_FILE -n $NO_OF_USERS -r $RANGE_OF_SKILLS"]
Docker-compose
myprofile-backend:
  extra_hosts:
    - remotehost
  container_name: samplecontainer-name
  image: sampleimagename
  links:
    - rabbitmq
    - db:redis
  expose:
    - "8080"
  ports:
    - "8082:8080"
  volumes:
    - ./logs/:/tmp/logs
    - ./logs/:/app
The problem here is that you are mounting the same folder ./logs twice. The docker-compose volume mount syntax is - <your-host-path>:<your-container-path>. Also, it's better to use relative paths when you are building the application. So change the docker-compose file to the following (assuming you want to see the files in ./target relative to the Dockerfile):
myprofile-backend:
  extra_hosts:
    - remotehost
  container_name: samplecontainer-name
  image: sampleimagename
  links:
    - rabbitmq
    - db:redis
  expose:
    - "8080"
  ports:
    - "8082:8080"
  volumes:
    - ./logs/:/tmp/logs
    - ./target/:/app
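After bringing the service up, files written by the container should show up on the host side of each mount; a quick way to check (paths assume the compose file above):
docker-compose up -d myprofile-backend
ls ./logs     # contents of /tmp/logs inside the container
ls ./target   # contents of /app inside the container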
I want to run tests using Gradle after docker-compose up (Postgres DB + Spring Boot app). The whole flow must run inside the GitLab merge request step. The problem appeared when I ran my tests from the script part of the gitlab-ci file. Importantly, in that situation we are in the correct directory, where GitLab checked out my project. Part of the gitlab-ci file:
before_script:
  - ./gradlew clean build
  - cp x.jar /path/x.jar
  - docker-compose -f /path/docker-compose.yaml up -d
script:
  - ./gradlew :functional-tests:clean test -Penv=gitlab --info
But there I can't call http://localhost:8080 -> connection refused. I tried putting 0.0.0.0 or 172.17.0.3 or docker.host, etc. into the test config, but it didn't work.
So, inside docker-compose I added another container in which I try to run my tests via an entrypoint command. To do that, I need the current GitLab directory, but I can't mount it.
My current solution:
Gitlab-ci:
run-functional-tests:
  stage: run_functional_tests
  image:
    name: 'xxxx/docker-compose-java-11:0.0.7'
  script:
    - ./gradlew clean build -x test
    - 'export SHARED_PATH="$(dirname ${CI_PROJECT_DIR})"' # current GitLab workspace dir
    - cp $CI_PROJECT_DIR/x.jar $CI_PROJECT_DIR/docker/gitlab/x.jar
    - docker-compose -f $CI_PROJECT_DIR/docker/gitlab/docker-compose.yaml up -d
    - docker-compose -f $CI_PROJECT_DIR/docker/gitlab/docker-compose.yaml logs -f
  timeout: 30m
docker-compose.yaml
version: '3'
services:
  postgres:
    build:
      context: ../postgres
    container_name: postgres
    restart: always
    networks:
      - app-postgres
    ports:
      - 5432
  app:
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    container_name: app
    depends_on:
      - postgres
    ports:
      - "8080:8080"
    networks:
      - app-postgres
  functional-tests:
    build:
      context: .
    container_name: app-functional-tests
    working_dir: /app
    volumes:
      - ${SHARED_PATH}:/app
    depends_on:
      - app
    entrypoint: ["bash", "-c", "sleep 20 && ./gradlew :functional-tests:clean test -Penv=gitlab --info"]
    networks:
      - app-postgres
networks:
  app-postgres:
But in such a situation my working_dir (/app) is empty. Can someone assist with that?
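One direction that may help (a sketch, under the assumption that the runner's Docker daemon cannot see the job's checkout directory, which commonly leaves such bind mounts empty): bake the sources into the test image with a build context instead of mounting ${SHARED_PATH}. The Dockerfile.tests name here is hypothetical; its main step would be COPY . /app:
functional-tests:
  build:
    context: ../..                               # project root, assuming the compose file sits in docker/gitlab
    dockerfile: docker/gitlab/Dockerfile.tests   # hypothetical Dockerfile that copies the project into /app
  container_name: app-functional-tests
  working_dir: /app
  depends_on:
    - app
  entrypoint: ["bash", "-c", "sleep 20 && ./gradlew :functional-tests:clean test -Penv=gitlab --info"]
  networks:
    - app-postgres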
I'm running Docker on Windows:
❯ docker version
Client: Docker Engine - Community
 Cloud integration: 1.0.12
 Version: 20.10.5
 API version: 1.41
 Go version: go1.13.15
 Git commit: 55c4c88
 Built: Tue Mar 2 20:14:53 2021
 OS/Arch: windows/amd64
 Context: default
 Experimental: true

Server: Docker Engine - Community
 Engine:
  Version: 20.10.5
  API version: 1.41 (minimum version 1.24)
  Go version: go1.13.15
  Git commit: 363e9a8
  Built: Tue Mar 2 20:26:56 2021
  OS/Arch: windows/amd64
  Experimental: false
When I launch an interactive session for a Windows docker container via docker run, the docker container terminal respects my current terminal layout - that is, I can use all rows and columns available in the viewport.
For example, running this command:
docker run -i --rm mcr.microsoft.com/windows/servercore:20H2
Yields a session that uses the full terminal viewport (screenshot omitted).
If I run the same container in docker-compose, the behaviour is different. I only have 80 columns and 25 rows available, which causes overlapping text and cursor-hopping.
For example, using these basic docker-compose.yml files:
version: "3.7"
services:
test:
image: mcr.microsoft.com/windows/servercore:20H2
command: cmd.exe
version: "3.7"
services:
test:
image: mcr.microsoft.com/windows/servercore:20H2
command: powershell.exe
When launched via:
docker-compose run --rm test
Yields a session restricted to 80 columns and 25 rows (screenshot omitted).
Modifying the docker-compose file to include the stdin_open and tty options as shown below makes no difference.
version: "3.7"
services:
test:
image: mcr.microsoft.com/windows/servercore:20H2
command: powershell.exe
stdin_open: true
tty: true
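As a possible workaround (unverified, and specific to cmd.exe sessions), the console size can be forced from inside the container with the mode command, either interactively or via the compose command:
version: "3.7"
services:
  test:
    image: mcr.microsoft.com/windows/servercore:20H2
    command: cmd.exe /k "mode con: cols=200 lines=50"
    stdin_open: true
    tty: true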
I want to start a Docker container with Oracle XE via docker-compose and then run an SQL script (setup_database.sql) to create some tables.
How can I integrate the following commands into my docker-compose:
docker run -d -p 49161:1521 -v "$PWD":/duo --name duodb --hostname duodb --network duo-test -e ORACLE_ALLOW_REMOTE=true wnameless/oracle-xe-11g-r2
Run a terminal in the container:
docker exec -ti duodb /bin/bash
Go into the right directory:
cd duo/sql
Kick off the setup_database script:
sqlplus system/oracle@xe @setup_database
I've tried to run this:
oracle:
  container_name: duodb
  image: wnameless/oracle-xe-11g-r2
  ports:
    - '49161:1521'
  volumes:
    - .:/duo
  command: ["/bin/bash", "-c", "sqlplus system/oracle@xe @setup_database"]
  environment:
    - ORACLE_ALLOW_REMOTE=true
But this outputs the following error:
Creating network "duo_default" with the default driver
Creating duodb
Creating duomail
Creating duolocal
Attaching to duomail, duodb, duolocal
duomail | MailDev webapp running at http://0.0.0.0:80
duomail | MailDev SMTP Server running at 0.0.0.0:25
duodb | /bin/bash: sqlplus: command not found
duodb exited with code 127
duolocal | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.3. Set the 'ServerName' directive globally to suppress this message
duolocal | [Fri Nov 15 08:17:55.944907 2019] [ssl:warn] [pid 1] AH01909: 172.20.0.3:443:0 server certificate does NOT include an ID which matches the server name
duolocal | AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.20.0.3. Set the 'ServerName' directive globally to suppress this message
duolocal | [Fri Nov 15 08:17:55.977329 2019] [ssl:warn] [pid 1] AH01909: 172.20.0.3:443:0 server certificate does NOT include an ID which matches the server name
duolocal | [Fri Nov 15 08:17:55.980390 2019] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.38 (Debian) PHP/7.1.32 OpenSSL/1.1.1d configured -- resuming normal operations
duolocal | [Fri Nov 15 08:17:55.980423 2019] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
I am not a Docker expert, but as far as I know, a network is automatically created for all containers inside a docker-compose file, so you do not need the network. Furthermore, you can name the service, so I think container_name is also not needed. Which version do you use to start the compose file? You could try something like this:
version: "3"
services:
duodb:
image: wnameless/oracle-xe-11g-r2
ports:
- 49161:1521
volumes:
- .:/duo
environment:
ORACLE_ALLOW_REMOTE=true
MYSQL_ROOT_USER: root
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: my_database_name
version: "3"
services:
duodb:
image: wnameless/oracle-xe-11g-r2
ports:
- 49161:1521
volumes:
- .:/duo
environment:
- ORACLE_ALLOW_REMOTE=true
- MYSQL_ROOT_USER=root
- MYSQL_ROOT_PASSWORD=secret
- MYSQL_DATABASE=my_database_name
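Back on the original goal of running setup_database.sql automatically: several Oracle XE images (I believe wnameless/oracle-xe-11g-r2 among them, but check the image's README) execute scripts mounted under /docker-entrypoint-initdb.d on first startup, which avoids overriding the container command with sqlplus entirely. A hedged sketch:
version: "3"
services:
  duodb:
    image: wnameless/oracle-xe-11g-r2
    ports:
      - 49161:1521
    volumes:
      - ./sql:/docker-entrypoint-initdb.d   # assumption: setup_database.sql lives in ./sql and the image picks up init scripts here
    environment:
      - ORACLE_ALLOW_REMOTE=true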
I think this might be related to file system incompatibility (NTFS/ext*).
How can I compose my containers and persist the DB without the container exiting?
I'm using the Bitnami MongoDB image.
Error:
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mongodb'
mongodb_1 exited with code 1
Full Output:
Recreating mongodb_1 ... done
Starting node_1 ... done
Attaching to node_1, mongodb_1
mongodb_1 |
mongodb_1 | Welcome to the Bitnami mongodb container
mongodb_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb_1 |
mongodb_1 | nami INFO Initializing mongodb
mongodb_1 | mongodb INFO ==> Deploying MongoDB from scratch...
mongodb_1 | Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mongodb'
mongodb_1 exited with code 1
Docker Version:
Docker version 18.06.0-ce, build 0ffa825
Windows Version:
Microsoft Windows 10 Pro
Version 10.0.17134 Build 17134
This is my docker-compose.yml so far:
version: "2"
services:
node:
image: "node:alpine"
user: "node"
working_dir: /home/node/app
environment:
- NODE_ENV=development
volumes:
- ./:/home/node/app
ports:
- "8888:8888"
command: "tail -f /dev/null"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- "./data/db:/bitnami"
- "./conf/mongo:/opt/bitnami/mongodb/conf"
I do not use Windows, but you can definitely try a named volume and see if the permission problem goes away:
version: "2"
services:
node:
image: "node:alpine"
user: "node"
working_dir: /home/node/app
environment:
- NODE_ENV=development
volumes:
- ./:/home/node/app
ports:
- "8888:8888"
command: "tail -f /dev/null"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- mongodata:/bitnami:rw
- "./conf/mongo:/opt/bitnami/mongodb/conf"
volumes:
mongodata:
I would like to stress that this is a named volume, as opposed to the host volumes you are using. It is the best option for production, but be aware that Docker will manage and store the files for you, so you will not see them in your project folder.
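If you need to see where Docker keeps that data, you can inspect the named volume (with docker-compose, its name is typically prefixed with the project name):
docker volume ls
docker volume inspect <project>_mongodata   # the Mountpoint field shows where the files are stored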
If you still want to use host volumes (volumes that write to the location you specify in a subfolder of your project on the host machine), you need to apply a permission fix. Here is an example for MariaDB, but it will work for Mongo too:
https://github.com/bitnami/bitnami-docker-mariadb/issues/136#issuecomment-354644226
In short, you need to find the user id on your host filesystem (in the example, 1001 is the id of my logged-in user on my host machine) and then chown the folder to that user, so the owner of the folder matches the user on your host system.
A full example:
version: "2"
services:
fix-mongodb-permissions:
image: 'bitnami/mongodb:latest'
user: root
command: chown -R 1001:1001 /bitnami
volumes:
- "./data:/bitnami"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- ./data:/bitnami:rw
depends_on:
- fix-mongodb-permissions
I hope this helps