docker reverse proxy - how to use authorization with htpasswd - jwilder-nginx-proxy

I want to protect my reverse proxy server with basic authentication support. According to the [read-me][1] I have added -v /path/to/htpasswd:/etc/nginx/htpasswd to my docker-compose file:
version: '2'
services:
  frontproxy:
    image: traskit/nginx-proxy
    container_name: frontproxy
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.docker_gen"
    restart: always
    environment:
      DEFAULT_HOST: default.vhost
      HSTS: "off"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /home/frank/Data/htpasswd:/etc/nginx/htpasswd
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - "certs-volume:/etc/nginx/certs:ro"
      - "/etc/nginx/vhost.d"
      - "/usr/share/nginx/html"
  nginx-letsencrypt-companion:
    restart: always
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - "certs-volume:/etc/nginx/certs"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    volumes_from:
      - "frontproxy"
volumes:
  certs-volume:
The htpasswd file contains what I copied from the .htpasswd file of my working nginx server. I am aware of the difference between .htpasswd and htpasswd, but I do not understand which format and name should be used here.
The proxy server connects to the services (in my case Radicale) without asking for authentication (no passwords are stored in the browser!).
What must be changed to make nginx check authorization?
[1]: https://github.com/nginx-proxy/nginx-proxy#readme

I think you overlooked that htpasswd here is a folder, and that the name of the corresponding htpasswd file has to match your virtual host name:
you have to create a file named as its equivalent VIRTUAL_HOST variable on directory /etc/nginx/htpasswd/$VIRTUAL_HOST
That means:
You mount a folder into /etc/nginx/htpasswd of your docker container.
In this folder, you create a passwd file named after your vhost address, e.g. example.de.
You can create this file with the command:
htpasswd -c example.de username
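As a sketch of how the pieces fit together (the Radicale image name and the example.de vhost are assumptions for illustration, not from the original setup): the proxy mounts the folder of password files, and the proxied service declares the matching VIRTUAL_HOST:

```yaml
services:
  frontproxy:
    image: traskit/nginx-proxy
    volumes:
      # a directory of per-vhost password files, not a single file
      - /home/frank/Data/htpasswd:/etc/nginx/htpasswd
  radicale:
    image: tomsquest/docker-radicale   # assumed image name
    environment:
      # nginx-proxy then protects this vhost with the credentials
      # in /etc/nginx/htpasswd/example.de
      VIRTUAL_HOST: example.de
```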

docker-compose: no declaration was found in the volumes section

I'm trying to use Docker Compose on Microsoft Windows to create a stack for Seafile.
The error message after deploying is:
Deployment error
failed to deploy a stack: Named volume "C:/Users/Administrator/Docker/Volumes/Seafile/Mysql:/var/lib/mysql:rw" is used in service "db" but no declaration was found in the volumes section. : exit status 1
Here's my problematic docker-compose.yaml file:
version: '2'
services:
  db:
    image: mariadb:10.5
    container_name: seafile-mysql
    environment:
      - MYSQL_ROOT_PASSWORD=db_dev # Required, sets the root password of the MySQL service.
      - MYSQL_LOG_CONSOLE=true
    volumes:
      - C:/Users/Administrator/Docker/Volumes/Seafile/Mysql:/var/lib/mysql # Required, path to the MySQL persistent store.
    networks:
      - seafile-net
  memcached:
    image: memcached:1.5.6
    container_name: seafile-memcached
    entrypoint: memcached -m 256
    networks:
      - seafile-net
  seafile:
    image: seafileltd/seafile-mc:latest
    container_name: seafile
    ports:
      - "9000:80"
      # - "443:443" # If https is enabled, uncomment this.
    volumes:
      - C:/Users/Administrator/Docker/Volumes/Seafile/Seafile:/shared # Required, path to the Seafile persistent store.
    environment:
      - DB_HOST=db
      - DB_ROOT_PASSWD=db_dev # Required, should be the root password of the MySQL service.
      - TIME_ZONE=Etc/UTC # Optional, default is UTC. Uncomment and set to your local time zone.
      - SEAFILE_ADMIN_EMAIL=me@example.com # Specifies the Seafile admin user, default is 'me@example.com'.
      - SEAFILE_ADMIN_PASSWORD=asecret # Specifies the Seafile admin password, default is 'asecret'.
      - SEAFILE_SERVER_LETSENCRYPT=false # Whether to use https or not.
      - SEAFILE_SERVER_HOSTNAME=docs.seafile.com # Specifies your host name if https is enabled.
    depends_on:
      - db
      - memcached
    networks:
      - seafile-net
networks:
  seafile-net:
If you see the error "no declaration was found in the volumes section", you are probably not declaring the volumes in the root section. The error message can cause confusion. Here is how to do it correctly:
...
services:
  ...
    volumes:
      - a:/path1
      - b:/path2
  ...
volumes:
  a:
  b:
...
I know this layout can seem scattered, and Docker could have handled it differently in another universe, but in the current version this is how it works: the root section declares the volumes, while the services section just uses them.
Let me know if this was your problem.
More info:
https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose
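A minimal sketch of the distinction (the service and volume names here are made up for illustration): only a plain name on the left-hand side is a named volume that must be declared at the root, while a host path, as in the Seafile file above, is a bind mount:

```yaml
services:
  db:
    image: mariadb:10.5
    volumes:
      - dbdata:/var/lib/mysql                   # named volume: must be declared below
      - C:/Users/Administrator/backup:/backup   # host-path bind mount: no declaration needed
volumes:
  dbdata:
```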

Docker / Oracle Database / Change Port 1521

I have set up an Oracle docker image (https://github.com/oracle/docker-images/tree/main/OracleDatabase/SingleInstance/dockerfiles) which by default runs on port 1521.
I would like to change the port in the image to 1531.
I know that in the docker-compose I can set "1531:1521", BUT the other containers still look for port 1521 on the created network.
I tried to modify the port referenced in the Dockerfile of the version I want to use (19.3.0) and also in createDB.sh, but when I try to connect with the SID it fails; the listener is not working as expected.
Has anybody already succeeded?
Update 1:
Here is the error message when I try to connect to the database after changing the port.
SQL> CONNECT sys/HyperSecuredPassword@ORCLCDB AS sysdba;
ERROR: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Update 2:
I have the following docker-compose.yaml to set up the other containers for my project.
version: "3.8"
services:
  hadea-database:
    image: hadea_oracle_1521:19.3.0
    container_name: hadea_oracle_1930
    ports:
      - "1521:1521"
    environment:
      - ORACLE_SID=ORCLCDB
      - ORACLE_PDB=ORCLPDB
      - ORACLE_PWD=Oracle4System
      - ORACLE_MEM=2000
    volumes:
      - ./database/OracleDB/oradata:/opt/oracle/oradata
      - ./database/OracleDB/setup:/opt/oracle/scripts/setup
      - ./database/OracleDB/startup:/opt/oracle/scripts/startup
    networks:
      - hadea-network
  hadea-maildev:
    image: maildev/maildev
    container_name: hadea_maildev
    command: bin/maildev --web 80 --smtp 25 --hide-extensions STARTTLS
    ports:
      - "8081:80"
    networks:
      - hadea-network
  hadea-server:
    build:
      context: ./server
      dockerfile: Dockerfile
    container_name: hadea_back
    environment:
      - HTTP_PORT=3000
      - HTTP_HOST=0.0.0.0
      - DATABASE_HOST=hadea-database
      - DATABASE_PORT=1521 # CONTAINER port, NOT the HOST port
      - DATABASE_SID=ORCLCDB
      - MAIL_HOST=hadea-maildev
      - MAIL_PORT=25 # CONTAINER port, NOT the HOST port
    ports:
      - "3000:3000"
    working_dir: /usr/src/app
    volumes:
      - ./server:/usr/src/app
    networks:
      - hadea-network
    depends_on:
      - hadea-database
      - hadea-maildev
  hadea-front:
    build:
      context: ./front
      dockerfile: Dockerfile
    container_name: hadea_front
    ports:
      - "4200:4200"
      - "3001:3001"
    volumes:
      - ./front:/usr/src/app
    networks:
      - hadea-network
    depends_on:
      - hadea-database
      - hadea-maildev
      - hadea-server
networks:
  hadea-network:
If you want to change the port used WITHIN the container (I think this is the question), you could try building a new image after modifying the conf file, e.g. for the 18c image. The other images hard-code the 1521 port in various files in that repo depending on the Oracle version you are using, so those would have to be changed prior to building the image.
I have been using this image: container-registry.oracle.com/database/express:latest. This is version 18c, and it has a conf file within the image located at /etc/sysconfig/oracle-xe-18c.conf. I would just build a new Dockerfile and overwrite that file with one that has the port you require. Or you could extract the entire contents of that directory, dump it to a host directory, modify the file as needed, and map a volume to /etc/sysconfig (make sure the permissions are correct). This way you could tweak the file from the host.
It might be possible to set the variable in that conf file from an environment variable within a docker-compose.yaml file or on the docker command line; the variable is named LISTENER_PORT. Note, however, that some of the variables in these scripts are defined locally and do not pull their values from environment variables.
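As a sketch of the volume approach for the 18c express image (the host directory ./sysconfig is an assumption; it would hold a copy of the container's /etc/sysconfig with LISTENER_PORT changed in oracle-xe-18c.conf):

```yaml
services:
  oracle:
    image: container-registry.oracle.com/database/express:latest
    ports:
      - "1531:1531"   # publish the new listener port
    volumes:
      # host copy of the container's /etc/sysconfig, edited so that
      # oracle-xe-18c.conf contains LISTENER_PORT=1531
      - ./sysconfig:/etc/sysconfig
```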

Configuring ssl in rabbitmq.config using rabbitmq docker image

My goal is to set up rabbitmq with ssl support, which I previously achieved using the rabbitmq.config file below, which resides in the host's /etc/rabbitmq path.
Now I want to be able to configure a rabbitmq user and password other than the defaults guest/guest.
I'm using the rabbitmq docker image with the following docker-compose configuration:
version: '2'
services:
  rabbitmq:
    build: ./rabbitmq
    ports:
      - "8181:8181"
    expose:
      - "15672"
      - "8181"
    volumes:
      - /etc/rabbitmq:/etc/rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: user123
      RABBITMQ_DEFAULT_PASS: 1234
Rabbitmq config:
[{rabbit,
  [
   {loopback_users, []},
   {heartbeat, 0},
   {ssl_listeners, [8181]},
   {ssl_options, [{cacertfile, "/etc/rabbitmq/ca/cacert.pem"},
                  {certfile, "/etc/rabbitmq/server/cert.pem"},
                  {keyfile, "/etc/rabbitmq/server/key.pem"},
                  {verify, verify_none},
                  {fail_if_no_peer_cert, false}]}
  ]}
].
Rabbitmq dockerfile:
FROM rabbitmq:management
# and some certificate generating logic
I noticed that upon adding the environment section, the current rabbitmq.config file is overridden with an auto-generated configuration, presumably by the docker-entrypoint.sh file.
For building the configuration using the certs I found environment variables that can do this (look here).
However, I didn't find any reference for defining the ssl_listeners section with its port, as seen in the rabbitmq.config above.
My question is: how can I create the exact configuration shown above using env variables, OR how can I keep my own rabbitmq.config while defining a new rabbitmq user and password in some dynamic way (maybe by templating the config file)?
Try this:
version: '2'
services:
  rabbitmq:
    build: ./rabbitmq
    ports:
      - "8181:8181"
    expose:
      - "15672"
      - "8181"
    volumes:
      - /etc/rabbitmq:/etc/rabbitmq
    command: rabbitmq-server
    entrypoint: ""
    environment:
      RABBITMQ_DEFAULT_USER: user123
      RABBITMQ_DEFAULT_PASS: 1234
This will override the docker-entrypoint and just run the rabbitmq server. Note that docker-entrypoint.sh also sets certain environment variables, which may be needed in your case, so check that everything you need is still set.
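One caveat, as a hedged sketch: with the entrypoint bypassed, RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS are no longer applied by the startup script, but the same credentials can be set directly in rabbitmq.config via the rabbit application's default_user / default_pass keys (values reused from the compose file above):

```erlang
[{rabbit,
  [
   %% replaces RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS
   {default_user, <<"user123">>},
   {default_pass, <<"1234">>},
   {loopback_users, []},
   {ssl_listeners, [8181]}
  ]}
].
```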

Bitnami Magento site always point to port 80 for any links

I am new to this area. I have a docker-compose.yml file which starts Magento & MariaDB containers. Here is the script:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    environment:
      - ENVIRONMENT=Test3
    ports:
      - '89:80' # for Test3
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'apache_data:/bitnami/apache'
      - 'php_data:/bitnami/php'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
I tried to use http://127.0.0.1:89 for the site, and it worked at the beginning (e.g. I could open the site at http://127.0.0.1:89). However, when I view the page source I find that the style/js files still point to http://127.0.0.1 (port 80). Also, I couldn't access other pages like http://127.0.0.1:89/admin.
Then I googled; for example, some posts mention that I need to change the base_url value in the "core_config_data" table, which I did (https://magento.stackexchange.com/questions/39752/how-do-i-fix-my-base-urls-so-i-can-access-my-magento-site). And I did clear the var/cache folder on both the Magento & MariaDB containers, but the result is still the same. (I didn't find the var/session folder that link mentions; maybe the Bitnami layout differs a bit from others.)
So what could I try now? Also, is there any way to set base_url with the correct port in my docker-compose.yml file from the very beginning?
P.S. Everything works fine when using the default port 80.
Thanks a lot!
You can indicate the port on which Apache should listen in the docker-compose.yml file in this way:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    ports:
      - '89:89'
      - '443:443'
    environment:
      - APACHE_HTTP_PORT=89
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'php_data:/bitnami/php'
      - 'apache_data:/bitnami/apache'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
Please note the use of the APACHE_HTTP_PORT environment variable on the Magento container. Also note that the port forwarding should be 89:89 in this case.
Take into account that this change must be made when you launch the containers for the first time. If you already have volumes, this method won't work, because your configuration will be restored from those volumes. So ensure that you don't have any volumes. You can check by executing
docker volume ls
and verifying that there isn't any volume named
local DATE_apache_data
local DATE_magento_data
local DATE_mariadb_data
Alternatively, you can delete the volumes by executing:
docker-compose down -v

Using a shared MySQL container

Tl;dr: Trying to get a WordPress docker-compose container to talk to another docker-compose container.
On my Mac I have WordPress & MySQL containers which I have built and configured with a linked MySQL server. In production I plan to use a Google Cloud MySQL instance, so I plan on removing the MySQL container from the docker-compose file (unlinking it) and running a separate shared container I can use from multiple docker containers.
The issue I'm having is that I can't connect the WordPress container to the separate MySQL container. Could anyone shed any light on how I might go about this?
I have tried unsuccessfully to create a network, as well as creating a fixed IP that the local box has a reference to via the /etc/hosts file (my preferred configuration, as I can update the file according to ENV).
WP:
version: '2'
services:
  wordpress:
    container_name: spmfrontend
    hostname: spmfrontend
    domainname: spmfrontend.local
    image: wordpress:latest
    restart: always
    ports:
      - 8080:80
    # creates an entry in /etc/hosts
    extra_hosts:
      - "ic-mysql.local:172.20.0.1"
    # Sets up the env, passwords etc
    environment:
      WORDPRESS_DB_HOST: ic-mysql.local:9306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: root
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_TABLE_PREFIX: spm
    # sets the working directory
    working_dir: /var/www/html
    # creates a link to the volume local to the file
    volumes:
      - ./wp-content:/var/www/html/wp-content
# Any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network
MySQL:
version: '2'
services:
  mysql:
    container_name: ic-mysql
    hostname: ic-mysql
    domainname: ic-mysql.local
    restart: always
    image: mysql:5.7
    ports:
      - 9306:3306
    # Create a static IP for the container
    networks:
      default:
        ipv4_address: 172.20.0.1
    # Sets up the env, passwords etc
    environment:
      MYSQL_ROOT_PASSWORD: root # TODO: Change this
      MYSQL_USER: root
      MYSQL_PASS: root
      MYSQL_DATABASE: wordpress
    # saves /var/lib/mysql to persistent volume
    volumes:
      - perstvol:/var/lib/mysql
      - backups:/backups
# creates volumes to persist data
volumes:
  perstvol:
  backups:
# Any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network
What you probably want to do is create a shared Docker network for the two containers to use, and point them both to it. You can create a network using docker network create <name>. I will use sharednet as an example below, but you can use any name you like.
Once the network is there, you can point both containers to it. When you're using docker-compose, you would do this at the bottom of your YAML file. This would go at the top level of the file, i.e. all the way to the left, like volumes:.
networks:
  default:
    external:
      name: sharednet
To do the same thing on a normal container (outside compose), you can pass the --network argument.
docker run --network sharednet [ ... ]
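Once both containers are on the same network, a sketch of the WordPress side (hedged: this assumes the MySQL container is named ic-mysql as above; note the container port 3306 is used, not the published host port 9306, and no extra_hosts entry is needed because Docker's embedded DNS resolves container names on user-defined networks):

```yaml
services:
  wordpress:
    image: wordpress:latest
    environment:
      # container name + container port, resolved via the shared network
      WORDPRESS_DB_HOST: ic-mysql:3306
networks:
  default:
    external:
      name: sharednet
```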
