Oracle OUD - Docker Container - Add custom objectClass with custom attributes

I have a docker container inside docker-compose:
oud:
  image: ****
  container_name: oud
  ports:
    - 1389:1389
  environment:
    - OUD_INSTANCE_NAME=OUD_LOCAL
    - rootUserDN=cn=admin
    - rootUserPassword=admin
    - baseDN=o=uzytkownicy
    - ldifFile_1=/u01/oracle/user_projects/config/oud_local.ldif
  volumes:
    - ./docker/infra/oud/config:/u01/oracle/user_projects/config
I'm trying to add a custom objectClass with custom attributes in my oud_local.ldif file, but it doesn't work.
Here is what I added (example from the Oracle docs):
dn: cn=schema
changetype: modify
add: attributeTypes
attributeTypes: ( 1.3.6.1.4.1.32473.1.1.590
  NAME ( 'blog' 'blogURL' )
  DESC 'URL to a personal weblog'
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
  SINGLE-VALUE
  X-ORIGIN 'Oracle Unified Directory Server'
  USAGE userApplications )

dn: o=uzytkownicy
objectClass: organization
objectClass: top
o: uzytkownicy

dn: o=grupy,o=uzytkownicy
objectClass: organization
objectClass: top
o: grupy
... and more
Only the first block crashes. I can create entries, but not a custom objectClass.
What I actually want is a local copy of the remote test server's schema in my container, because I need it for development.
I tried exporting an ApacheDS schema .ldif from the test server and loading it in the container, but every object (about ~2100) got rejected.
Can somebody tell me what I did wrong?

OK, I solved it by adding the schemaConfigFile_1 variable in docker-compose:
oud:
  image: harbor.nekken.pl/oracle/middleware/oud:12.2.1.4.0
  container_name: oud
  ports:
    - 1389:1389
  environment:
    - OUD_INSTANCE_NAME=OUD_LOCAL
    - rootUserDN=cn=admin
    - rootUserPassword=admin
    - baseDN=o=uzytkownicy
    - schemaConfigFile_1=/u01/oracle/user_projects/config/schema/schema.ldif
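For anyone else landing here: OUD (like its OpenDJ ancestor) keeps schema definitions in LDIF files under config/schema, and those files use the subschema-entry format rather than a changetype: modify operation, which may be why the schema block above was rejected when fed through ldifFile_1. A minimal sketch of such a schema.ldif (the blogUser objectClass and the OIDs under 1.3.6.1.4.1.32473 are illustrative placeholders; note that LDIF continuation lines must begin with a single space):

```ldif
dn: cn=schema
objectClass: top
objectClass: ldapSubentry
objectClass: subschema
cn: schema
attributeTypes: ( 1.3.6.1.4.1.32473.1.1.590
  NAME 'blog'
  DESC 'URL to a personal weblog'
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15
  SINGLE-VALUE
  X-ORIGIN 'user defined' )
objectClasses: ( 1.3.6.1.4.1.32473.1.2.100
  NAME 'blogUser'
  DESC 'auxiliary class carrying the blog attribute'
  SUP top AUXILIARY
  MAY blog
  X-ORIGIN 'user defined' )
```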

Related

docker-compose: no declaration was found in the volumes section

I'm trying to use Docker Compose on Microsoft Windows to create a stack for Seafile.
The error message after deploying is:
Deployment error
failed to deploy a stack: Named volume "C:/Users/Administrator/Docker/Volumes/Seafile/Mysql:/var/lib/mysql:rw" is used in service "db" but no declaration was found in the volumes section. : exit status 1
Here's my problematic docker-compose.yaml file:
version: '2'
services:
  db:
    image: mariadb:10.5
    container_name: seafile-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=db_dev # Required, sets the root password of the MySQL service.
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - C:/Users/Administrator/Docker/Volumes/Seafile/Mysql:/var/lib/mysql # Required, specifies the path to the MySQL persistent data store.
    networks:
      - seafile-net
  memcached:
    image: memcached:1.5.6
    container_name: seafile-memcached
    entrypoint: memcached -m 256
    networks:
      - seafile-net
  seafile:
    image: seafileltd/seafile-mc:latest
    container_name: seafile
    ports:
      - "9000:80"
      # - "443:443" # If https is enabled, uncomment this.
    volumes:
      - C:/Users/Administrator/Docker/Volumes/Seafile/Seafile:/shared # Required, specifies the path to the Seafile persistent data store.
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=db_dev # Required, should be the root password of the MySQL service.
      - TIME_ZONE=Etc/UTC # Optional, default is UTC. Should be uncommented and set to your local time zone.
      - SEAFILE_ADMIN_EMAIL=me@example.com # Specifies the Seafile admin user, default is 'me@example.com'.
      - SEAFILE_ADMIN_PASSWORD=asecret # Specifies the Seafile admin password, default is 'asecret'.
      - SEAFILE_SERVER_LETSENCRYPT=false # Whether to use https or not.
      - SEAFILE_SERVER_HOSTNAME=docs.seafile.com # Specifies your host name if https is enabled.
    depends_on:
      - db
      - memcached
    networks:
      - seafile-net
networks:
  seafile-net:
If you see the error "no declaration was found in the volumes section", you are probably not declaring the volumes in the top-level volumes section.
The error message can cause confusion. Here's how to do it correctly:
...
services:
  ...
    volumes:
      - a:/path1
      - b:/path2
  ...
volumes:
  a:
  b:
...
I know this can look scattered, and Docker could handle it differently in another universe, but in the current version this is how it works: the top-level section declares the volumes, while the services section just uses them.
Let me know if this was your problem.
More info:
https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose
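Applied to the Seafile file from the question, one hedged sketch (keep in mind that a Windows path like C:/Users/... is really a bind mount, not a named volume; declaring a named volume at the root instead lets Docker manage the storage location itself):

```yaml
services:
  db:
    image: mariadb:10.5
    volumes:
      - seafile-mysql:/var/lib/mysql   # named volume, declared below

volumes:
  seafile-mysql:   # the top-level declaration Compose was asking for
```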

Create database on docker startup

I want to create a MySQL database on docker-compose startup from a database.sql script. My database.sql script is at src/main/java/com/project_name/resources/db/database.sql. How should I write that in my docker-compose.yml file? Right now neither of these works:
volumes:
  - ./database.sql:/data/application/database.sql
or something like:
volumes:
  - ./database.sql:/src/main/java/com/project_name/resources/db/database.sql
Try like this (the official MySQL image executes any *.sql files mounted into /docker-entrypoint-initdb.d/ on first startup):
volumes:
  - ./src/main/java/com/project_name/resources/db/database.sql:/docker-entrypoint-initdb.d/database.sql
Or just use a database migration tool like Flyway or Liquibase.
You can mount the schema and data as volumes as demonstrated below; make sure the backup file has proper access permissions, and verify the path on your machine.
version: '3.8'
services:
  db:
    image: mysql:8.0
    restart: always
    environment:
      - MYSQL_DATABASE=DB_NAME
      - MYSQL_USER=DB_USER
      - MYSQL_ROOT_PASSWORD=DB_PASSWORD
    ports:
      - '3306:3306'
    volumes:
      - db:/var/lib/mysql
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
volumes:
  db:
    driver: local

docker reverse proxy - how to use authorization with htpasswd

I want to protect my reverse proxy server with basic authentication support. According to the [read-me][1] I have added -v /path/to/htpasswd:/etc/nginx/htpasswd to my docker-compose file:
version: '2'
services:
  frontproxy:
    image: traskit/nginx-proxy
    container_name: frontproxy
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen"
    restart: always
    environment:
      DEFAULT_HOST: default.vhost
      HSTS: "off"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /home/frank/Data/htpasswd:/etc/nginx/htpasswd
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - "certs-volume:/etc/nginx/certs:ro"
      - "/etc/nginx/vhost.d"
      - "/usr/share/nginx/html"
  nginx-letsencrypt-companion:
    restart: always
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - "certs-volume:/etc/nginx/certs"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    volumes_from:
      - "frontproxy"
volumes:
  certs-volume:
The htpasswd file contains what I copied from the .htpasswd file on my working nginx server. I am aware of the difference between .htpasswd and htpasswd, but I don't understand which format and name should be used here.
The proxy server connects to the services (in my case radicale) without checking for authorisation (passwords are not stored in the browser!).
What must be changed to make nginx check authorisation?
[1]: https://github.com/nginx-proxy/nginx-proxy#readme
I think you overlooked that htpasswd here is a folder, and the name of your corresponding htpasswd file has to match your virtual host name:
you have to create a file named as its equivalent VIRTUAL_HOST variable on directory /etc/nginx/htpasswd/$VIRTUAL_HOST
That means:
You mount a folder into /etc/nginx/htpasswd of your docker container.
In this folder, you create a passwd file named after your vhost address, like example.de.
You can create this file with the command:
htpasswd -c example.de username
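Put together, the steps might look like this sketch (radicale.example.org stands in for whatever VIRTUAL_HOST your radicale service actually uses; the openssl variant is an assumption for systems without the apache2-utils package, which provides htpasswd):

```shell
# folder that gets mounted to /etc/nginx/htpasswd in the proxy container
mkdir -p htpasswd

# one passwd file per virtual host, named exactly like its VIRTUAL_HOST value;
# equivalent to: htpasswd -c htpasswd/radicale.example.org frank
printf 'frank:%s\n' "$(openssl passwd -apr1 's3cret')" > htpasswd/radicale.example.org
```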

Is there an equivalent tag for Docker's "aliases" in ansible?

This is an example set-up of a classical bridge docker network.
services:
  jitsi-web:
    (...)
    networks:
      meet.jitsi:
  jitsi-prosody:
    (...)
    networks:
      meet.jitsi:
        aliases:
          - xmpp.meet.jitsi
(...)
Now I wanted to translate this into Ansible syntax:
- name: create a docker_network for internal communication
  docker_network:
    name: jitsi-meet-net
    connected:
      - jitsi-web
      - jitsi-prosody
      (...)
    appends: yes
But I am struggling to integrate the aliases into this task and couldn't find any hints in the documentation.
Aliases are not supported on docker_network, but they are supported on docker_container. So after you add a container to a network, you can update the container with the aliased names.
- name: Update network with aliases
  docker_container:
    name: jitsi-prosody
    networks:
      - name: jitsi-meet-net
        aliases:
          - xmpp.meet.jitsi
          - zzzz
Check the docs on docker_container, as they might offer other solutions that fit your setup better.
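Put together, a hedged sketch of how the two tasks could sit in one playbook (the fully qualified module names assume the community.docker collection; the network name and alias are taken from the question):

```yaml
- name: create a docker network for internal communication
  community.docker.docker_network:
    name: jitsi-meet-net

- name: attach prosody to the network with an alias
  community.docker.docker_container:
    name: jitsi-prosody
    networks:
      - name: jitsi-meet-net
        aliases:
          - xmpp.meet.jitsi
```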

Windows 10 bind mounts in docker-compose not working

I'm using docker-compose to manage a multi-container application. One of those containers needs access to the contents of a directory on the host.
This seems simple according to the various sources of documentation on Docker and docker-compose, but I'm struggling to get it working.
event_processor:
  environment:
    - COMPOSE_CONVERT_WINDOWS_PATHS=1
  build: ./Docker/event_processor
  ports:
    - "15672:15672"
  entrypoint: python -u /src/event_processor/event_processor.py
  networks:
    - app_network
  volumes:
    - C/path/to/interesting/directory:/interesting_directory
Running this I get the error message:
ERROR: Named volume
"C/path/to/interesting/directory:/interesting_directory:rw" is used in
service "event_processor" but no declaration was found in the
volumes section.
I understand from the docs that a top level declaration is only necessary if data is to be shared between containers
which isn't the case here.
The docs for docker-compose I linked above have an example which seems to do exactly what I need:
version: "3.2"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - type: volume
        source: mydata
        target: /data
        volume:
          nocopy: true
      - type: bind
        source: ./static
        target: /opt/app/static
networks:
  webnet:
volumes:
  mydata:
However when I try, I get errors about the syntax:
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it
should be a string
So I tried to play along:
volumes:
  - type: "bind"
    source: "C/path/to/interesting/directory"
    target: "/interesting_directory"
ERROR: The Compose file '.\docker-compose.yaml' is invalid because:
services.audio_event_processor.volumes contains an invalid type, it should be a string
So again the same error.
I tried the following too:
volumes:
  - type=bind, source=C/path/to/interesting/directory,destination=/interesting_directory
No error, but attaching to the running container, I see the following two folders:
type=bind, source=C
So it seems I can create a number of volumes from one string (the forward slashes cut the string apart in this case), but I am not mapping it to the host directory.
I've read the docs, but I think I'm missing something.
Can someone post an example of mounting a Windows directory from a host into a Linux container, so that the existing contents of the Windows dir are available from the container?
OK, so there were multiple issues here:
1.
I had
version: '3'
at the top of my docker-compose.yml. The long syntax described here wasn't implemented until 3.4, so the bizarre syntax error stopped once I updated this to:
version: '3.6'
2.
I use my docker account on two Windows PCs. Following a hint from another Stack Overflow post, I reset Docker to the factory settings. I had to give Docker the computer's username and password, with a notice that this was necessary to access the contents of the local filesystem. At this point I remembered doing this on another PC, so I'm not sure whether the credentials were correct on this one. With the correct credentials for the current PC, I was able to bind-mount the volume with the expected results, as follows:
version: '3.6'
services:
  event_processor:
    environment:
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
    build: ./Docker/event_processor
    ports:
      - "15672:15672"
    entrypoint: python -u /src/event_processor/event_processor.py
    networks:
      - app_network
    volumes:
      - type: bind
        source: c:/path/to/interesting/directory
        target: /interesting_directory
Now it works as expected. I'm not sure if it was the factory reset or the updated credentials that fixed it. I'll find out tomorrow when I use another PC and update.
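One more hedged note: COMPOSE_CONVERT_WINDOWS_PATHS is read by docker-compose itself, so it belongs in the shell or .env file on the host rather than in the service's environment section, where it only reaches the container. With it set on the host, the short syntax can reportedly also work if the drive-letter path is quoted so YAML doesn't misparse the colons (untested across Docker Desktop versions):

```yaml
volumes:
  - "c:/path/to/interesting/directory:/interesting_directory"
```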
