Using a shared MySQL container - macOS

TL;DR: Trying to get a WordPress docker-compose container to talk to another docker-compose container.
On my Mac I have a WordPress & MySQL container which I have built and configured with a linked MySQL server. In production I plan to use a Google Cloud MySQL instance, so I plan on removing the MySQL container from the docker-compose file (unlinking it) and running a separate shared MySQL container that I can use from multiple Docker containers.
The issue I'm having is that I can't connect the WordPress container to the separate MySQL container. Would anyone be able to shed any light on how I might go about this?
I have tried, unsuccessfully, to create a network, and I have also tried creating a fixed IP that the local box has a reference to via the /etc/hosts file (my preferred configuration, as I can update the file according to ENV).
WP:
version: '2'
services:
  wordpress:
    container_name: spmfrontend
    hostname: spmfrontend
    domainname: spmfrontend.local
    image: wordpress:latest
    restart: always
    ports:
      - 8080:80
    # creates an entry in /etc/hosts
    extra_hosts:
      - "ic-mysql.local:172.20.0.1"
    # Sets up the env, passwords etc
    environment:
      WORDPRESS_DB_HOST: ic-mysql.local:9306
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: root
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_TABLE_PREFIX: spm
    # sets the working directory
    working_dir: /var/www/html
    # creates a link to the volume local to the file
    volumes:
      - ./wp-content:/var/www/html/wp-content
# Any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network
MySQL:
version: '2'
services:
  mysql:
    container_name: ic-mysql
    hostname: ic-mysql
    domainname: ic-mysql.local
    restart: always
    image: mysql:5.7
    ports:
      - 9306:3306
    # Create a static IP for the container
    networks:
      default:
        ipv4_address: 172.20.0.1
    # Sets up the env, passwords etc
    environment:
      MYSQL_ROOT_PASSWORD: root # TODO: Change this
      MYSQL_USER: root
      MYSQL_PASS: root
      MYSQL_DATABASE: wordpress
    # saves /var/lib/mysql to a persistent volume
    volumes:
      - perstvol:/var/lib/mysql
      - backups:/backups
# creates a volume to persist data
volumes:
  perstvol:
  backups:
# Any networks the container should be associated with
networks:
  default:
    external:
      name: ic-network

What you probably want to do is create a shared Docker network for the two containers to use, and point them both at it. You can create a network using docker network create <name>. I will use sharednet as an example below, but you can use any name you like.
Once the network is there, you can point both containers to it. When you're using docker-compose, you do this at the bottom of your YAML file, at the top level, i.e. all the way to the left, like volumes:.
networks:
  default:
    external:
      name: sharednet
To do the same thing on a normal container (outside compose), you can pass the --network argument.
docker run --network sharednet [ ... ]
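To sanity-check the setup, you can create the network and then confirm that both containers have joined it; docker network inspect lists the attached containers (the container name ic-mysql below is taken from the question and may differ in your setup):
docker network create sharednet
docker network inspect sharednet
Once both containers are on the same user-defined network, Docker's built-in DNS also lets WordPress reach MySQL by container name on the container port (e.g. ic-mysql:3306), without needing the fixed-IP /etc/hosts workaround.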

Related

Testing a container against DynamoDB-Local

I wanted to test a container locally before pushing it to AWS ECS.
I ran unit tests against a docker-compose stack including a dynamodb-local container, using a Go (aws-sdk-go-v2) endpoint resolver with http://localhost:8000 as the URL.
So I wanted to build and test the container locally, and realised I needed to attach it to the default network created by docker-compose. I struggled with this a bit, so I built a stripped-down trial: I created an endpoint resolver with a URL of http://dynamo-local:8000 (the container is named dynamo-local in docker-compose) and attached it to the default network within docker run (sketched after the compose file below).
That all works now, and I can perform the various table operations successfully. But one thing confuses me: if I run the AWS CLI:
aws --endpoint-url=http://localhost:8000 dynamodb list-tables
then the output shows that no tables exist, when there is definitely a table. I had naively assumed that, since I can reach port 8000 of the same container through different endpoints, I should be able to access the same resources. Wrong.
Obviously a gap in my education. What am I missing? I need to expand the trial into a proper test of the full app, so it's important to me that I understand what is going on here.
Is there a way I can use the AWS CLI to access the table?
docker-compose file:
version: '3.5'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test
    ports:
      - '4566:4566'
    environment:
      - SERVICES=s3,sns,sqs,lambda
      - DEBUG=1
      - DATA_DIR=
    volumes:
      - './.AWSServices:/tmp/AWSServices'
      - '/var/run/docker.sock:/var/run/docker.sock'
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
    image: chanonry/urlfiles-nginx:latest
    container_name: nginx
    ports:
      - '8080:80'
  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    ports:
      - '8000:8000'
networks:
  default:
    name: test-net
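For reference, attaching a container to that compose network from plain docker run, as described above, might look like the following; my-test-app is a hypothetical image name standing in for the app under test:
docker run --rm --network test-net my-test-app
Inside that network, http://dynamo-local:8000 resolves through Docker's DNS to the DynamoDB container, whereas http://localhost:8000 from the host reaches it through the published port.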

Getting a 404 because of a wrong domain when running Laravel in Docker under WSL2

I've been developing for Laravel using Homestead (VirtualBox and Vagrant) on Windows 10. Recently I wanted to switch to Docker and the Windows Subsystem for Linux (WSL2).
Under Homestead I've been running my app under my-domain.test. In my docker-compose file I use localhost on port 8008. I can access the website at localhost:8008, but I get a 404 on every single page I want to access. Inspecting the links, Laravel seems to use my old domain my-domain.test for every generated link. So instead of creating links like localhost:8008/xyz it generates links like https://my-domain.test/xyz.
Of course I've updated my .env file, cleared the (config) cache, cloned a completely new copy of my repository, and set up the project in a completely new directory within the subsystem. I've also uninstalled all pieces of Vagrant, VirtualBox and Homestead.
I've searched the complete project for references to the old domain. I haven't found anything.
On another system it works. Somehow my current system seems to hang on to the old domain.
How can I fix this without resetting my whole computer?
This is my docker-compose:
version: '3.3'
services:
  pdbv1-db:
    image: mariadb
    container_name: pdbv1-db
    restart: unless-stopped
    ports:
      - "3306:3306"
    tty: true
    environment:
      MYSQL_ROOT_PASSWORD: pdb
      MYSQL_DATABASE: pdb
      MYSQL_USER: pdb
      MYSQL_PASSWORD: pdb
    networks:
      - app-network
    volumes:
      - ./docker-db:/var/lib/mysql
  pdbv1-backend:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - WITH_XDEBUG=true
        - USERID=$UID
    env_file:
      - .env
    user: $UID:$GID
    container_name: pdbv1-backend
    restart: unless-stopped
    tty: true
    environment:
      SERVICE_NAME: pdbv1-backend
      SERVICE_TAGS: dev
      PHP_IDE_CONFIG: serverName=Docker
    working_dir: /var/www
    ports:
      - "8008:8080"
    volumes:
      - ./:/var/www
    networks:
      - app-network
# Docker Networks
networks:
  app-network:
    driver: bridge
There are two ways to look at this:
1. Go with my-domain.test: add that domain to your Windows hosts file and point it to 127.0.0.1 (see the example entry below). Also check the Dockerfile of your nginx, and check your nginx conf file for your domain.
2. Fix it in the Laravel code: check the URL in your .env file; is it localhost or my-domain.test? Then look through the entire source code for my-domain.test, and of course in the database itself as well. (Edit: I see that you've already done that, but it would be the only explanation.)
Frankly, I would go with option 1: you get your my-domain.test back, and you can use multiple domains / multiple projects. I only use localhost for quick stuff and for managing my database and my redis.
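A hosts entry for option 1 might look like this (on Windows the file lives at C:\Windows\System32\drivers\etc\hosts; the domain is the one from the question):
127.0.0.1 my-domain.test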

Data isn't persisted in the database when using MongoDB with Docker volumes?

There is a service that uses MongoDB, but when I restart my computer or the Docker machine, no data is kept in the database.
docker-compose.yml:
version: "3"
Services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/dockerdata/db
volumes:
- ./dockerdata/db:/data/db
ports:
- 27017:27017
command: mongod
I tried to store the database on the host, but that didn't help either:
docker-compose.yml:
version: "3"
Services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/c/users/frol/mongodata/db
volumes:
- /c/users/frol/mongodata/db:/data/db
ports:
- 27017:27017
command: mongod
If I create a named volume, Docker reports an error:
ERROR: for test_mongodb_1  Cannot create container for service mongodb: failed to mount local volume: mount /c/users/frol/mongodata/db:/mnt/sda1/var/lib/docker/volumes/test_mongodata/_data, flags: 0x1000: no such file or directory
docker-compose.yml:
version: "3"
services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/c/users/frol/mongodata/db
volumes:
- mongodata:/data/db
ports:
- 27017:27017
command: mongod
volumes:
mongodata:
driver: local
driver_opts:
type: none
device: /c/users/frol/mongodata/db
o: bind
Host: Windows 8.1 with Docker Toolbox 19.03.1 installed.
Please help me; I'm a novice. How do I make sure that the database data isn't lost?
Your first attempt would work if you just fix a simple typo in your compose file:
version: "3"
services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/data/db # changed
volumes:
- ./dockerdata/db:/data/db
ports:
- 27017:27017
command: mongod
But since /data/db is the default value of MONGO_DATA_DIR, setting it is pretty redundant.
I'd prefer to use a named volume instead; that way the data still persists, but I don't have to look at the "ugly" database storage folder:
version: "3"
services:
...
mongodb:
restart: always
image: mongo:latest
volumes:
- mongodata:/data/db
ports:
- 27017:27017
command: mongod
volumes:
mongodata:
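To verify the data actually survives a restart, a quick check might be the following (assuming the compose project is named test, as the test_ prefix in the error message above suggests):
docker-compose down   # removes the containers but keeps named volumes
docker-compose up -d  # recreates the containers; MongoDB finds its old data
docker volume ls      # the named volume (e.g. test_mongodata) is still listed
Note that docker-compose down -v would delete the named volumes along with the containers.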
Don't set $MONGO_DATA_DIR; leave it at its default of /data/db.
services:
  mongodb:
    restart: always
    image: mongo:latest
    # No need to specifically set $MONGO_DATA_DIR
    volumes:
      - ./dockerdata/db:/data/db
    ports:
      - 27017:27017
    # No need to override command:
Docker containers have a separate filesystem space from the host filesystem. A typical setup for most databases is to have the database storage in a fixed location inside the container; for MongoDB that's the /data/db directory. You can mount a named volume or filesystem path there, but the code inside the container doesn't know the difference.
If you do set environment variables like $MONGO_DATA_DIR, they need to reflect paths inside the container; they can't directly specify host-system paths. (@ruohola's answer works because it changes the container-filesystem path of the bind mount to match the container-filesystem path in the environment variable; the host ./dockerdata and container /dockerdata paths are totally unrelated.)
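In other words, in a volume mapping the left side is a host path and the right side is a container path; a minimal annotated line:
volumes:
  - ./dockerdata/db:/data/db   # host path (left) : container path (right)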
As you are defining the data dir explicitly, you need to mount the volume at that same directory to persist the data:
version: "3"
services:
...
mongodb:
restart: always
image: mongo:latest
environment:
- MONGO_DATA_DIR=/data/db #data directory
volumes:
- ./dockerdata/db:/data/db #same data directory which you defined above
ports:
- 27017:27017
command: mongod

Running Sonarqube with docker-compose using bind mount volumes

I'm trying to run Sonarqube in a Docker container on a CentOS 7 server using docker-compose. Everything works as expected using named volumes, as configured in this docker-compose.yml file:
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_bundled_plugins:
postgresql:
postgresql_data:
However, my /var/lib/docker/volumes directory is not large enough to house the named volumes. So, I changed the docker-compose.yml file to use bind mount volumes as shown below.
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
networks:
- sonarnet
environment:
- sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
volumes:
- /data/sonarqube/conf:/opt/sonarqube/conf
- /data/sonarqube/data:/opt/sonarqube/data
- /data/sonarqube/extensions:/opt/sonarqube/extensions
- /data/sonarqube/bundled_plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
networks:
- sonarnet
environment:
- POSTGRES_USER=sonar
- POSTGRES_PASSWORD=sonar
volumes:
- /data/postgresql:/var/lib/postgresql
- /data/postgresql_data:/var/lib/postgresql/data
networks:
sonarnet:
driver: bridge
However, after running docker-compose up -d, the app starts up but none of the bind mount volumes are written to. As a result, the Sonarqube plugins are not loaded and the sonar PostgreSQL database is not initialized. I thought it might be an SELinux issue, but temporarily disabling it brought no success. I'm unsure what to look at next.
I think my answer from "How to persist configuration & analytics across container invocations in Sonarqube docker image" would help you as well.
For good measure I have also pasted it in here:
.....
Notice the SONARQUBE_HOME line in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
  ...
  ...
  -e SONARQUBE_HOME=/sonarqube-data \
  -v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data and so forth folders and store data therein. As needed.
Or with Kubernetes, in your deployment YAML file, do:
...
...
env:
  - name: SONARQUBE_HOME
    value: /sonarqube-data
...
...
volumeMounts:
  - name: app-volume
    mountPath: /sonarqube-data
And the name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
This again will make Sonarqube use the /sonarqube-data mountPath for creating the extensions, conf, and so forth folders, then save data therein.
And voilà, your Sonarqube data is thereby persisted.
I hope this will help others.
N.B. The YAML and docker run examples are not exhaustive; they focus on the issue of persisting Sonarqube data.
Try it out, BobC, and let me know.
Have a great day.
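Translated to the asker's docker-compose setup, that suggestion might look like the sketch below; the /data/sonarqube host path is taken from the question, and /sonarqube-data is the container path chosen in the answer above:
services:
  sonarqube:
    image: sonarqube
    environment:
      - SONARQUBE_HOME=/sonarqube-data
    volumes:
      - /data/sonarqube:/sonarqube-data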
The compose file below should get you there with a single command, I hope.
Create a new compose file named docker-compose.yaml:
version: "3"
services:
sonarqube:
image: sonarqube:8.2-community
depends_on:
- db
ports:
- "9000:9000"
networks:
- sonarqubenet
environment:
SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonarqube
SONAR_JDBC_USERNAME: sonar
SONAR_JDBC_PASSWORD: sonar
volumes:
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_logs:/opt/sonarqube/logs
- sonarqube_temp:/opt/sonarqube/temp
restart: on-failure
container_name: sonarqube
db:
image: postgres
networks:
- sonarqubenet
environment:
POSTGRES_USER: sonar
POSTGRES_PASSWORD: sonar
volumes:
- postgresql:/var/lib/postgresql
- postgresql_data:/var/lib/postgresql/data
restart: on-failure
container_name: postgresql
networks:
sonarqubenet:
driver: bridge
volumes:
sonarqube_data:
sonarqube_extensions:
sonarqube_logs:
sonarqube_temp:
postgresql:
postgresql_data:
Then execute the commands:
$ docker-compose up -d
$ docker container ps
Sounds like the container is running and, as you mentioned, Sonarqube starts up. When it starts, is it showing that it's using the H2 in-memory DB? After running docker-compose up -d, use docker logs -f <container_name> to see what's happening on Sonarqube startup.
To simplify viewing your logs with a known name, I suggest you also add a container name to your Sonarqube service, for example container_name: sonarqube.
Also, while I know the plan is to deprecate the use of environment variables for the username, password and JDBC connection, I've had better luck in docker-compose using environment variables rather than the corresponding property values. For the connection string, try SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar, without specifying the default port for Postgres.

Bitnami Magento site always points to port 80 for all links

I am new to this area. I have a docker-compose.yml file which starts Magento & MariaDB containers. Here is the script:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    environment:
      - ENVIRONMENT=Test3
    ports:
      - '89:80' # for Test3
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'apache_data:/bitnami/apache'
      - 'php_data:/bitnami/php'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
I tried to use http://127.0.0.1:89 for the site, and it did work at the beginning (i.e. I could open the site at http://127.0.0.1:89). However, when I view the page source I find that the style/JS links still point to http://127.0.0.1 (port 80). Also, I couldn't access other pages like http://127.0.0.1:89/admin.
Then I googled; for example, some posts mention I need to change the base_url value in the core_config_data table, which I did (https://magento.stackexchange.com/questions/39752/how-do-i-fix-my-base-urls-so-i-can-access-my-magento-site). And I did clear the var/cache folder on both the Magento & MariaDB containers, but the result is still the same. (I didn't find the var/session folder which that link mentions; maybe Bitnami's layout differs a bit from other setups.)
So what could I try now? And is there any way I could set base_url with the correct port right from the start, in my docker-compose.yml file?
P.S. Everything works fine when using the default port 80.
Thanks a lot!
You can indicate the port where Apache should be listening in the docker-compose.yml file in this way:
version: '2'
services:
  mariadb:
    image: 'bitnami/mariadb:latest'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    volumes:
      - 'mariadb_data:/bitnami/mariadb'
  magento:
    image: 'bitnami/magento:latest'
    ports:
      - '89:89'
      - '443:443'
    environment:
      - APACHE_HTTP_PORT=89
    volumes:
      - 'magento_data:/bitnami/magento'
      - 'php_data:/bitnami/php'
      - 'apache_data:/bitnami/apache'
    depends_on:
      - mariadb
volumes:
  mariadb_data:
    driver: local
  magento_data:
    driver: local
  apache_data:
    driver: local
  php_data:
    driver: local
Please note the use of the APACHE_HTTP_PORT environment variable on the Magento container. Also note that the port forwarding should be 89:89 in this case.
Take into account that this change should be performed when you launch the containers for the first time. That means that if you already have volumes, this method won't work, because your configuration will be restored from those volumes. So ensure that you don't have any volumes. You can check by executing
docker volume ls
and checking that there isn't any volume named
local DATE_apache_data
local DATE_magento_data
local DATE_mariadb_data
You can also delete the volumes by executing:
docker-compose down -v
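Putting it together, a clean restart that picks up the new port might look like this; note that -v deletes the existing Magento and MariaDB data, so only do this on a fresh setup:
docker-compose down -v   # remove containers and volumes (destroys existing data)
docker-compose up -d     # recreate with APACHE_HTTP_PORT=89 applied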
