Problem making a cluster with Laravel and Docker Swarm using laradock - laravel

I recently started with Docker and have been following these steps to make a cluster with laradock:
https://github.com/jarnovanleeuwen/laravel-dock
First I created two nodes with docker-machine, one manager and one worker. Then with this command I set the manager node as the leader:
docker swarm init --advertise-addr 192.168.99.100
I added the other node as a worker with
docker swarm join
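For reference, docker swarm join on its own is not enough: the worker needs the join token that swarm init printed. A short sketch of recovering it (run on the manager; the worker address is the one from this question):

```shell
# On the manager: print the full join command, including the token,
# for adding worker nodes to the swarm.
docker swarm join-token worker

# Then, on the worker node, paste the printed command, e.g.:
# docker swarm join --token <token> 192.168.99.100:2377
```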
Then I accessed the manager node with docker-machine ssh manager and created a registry service:
docker service create --name registry --publish published=5000,target=5000 registry:2
I left the virtual machine with exit and changed into the laravel-dock folder, then I brought the containers up to create the images with
./dock up
then took everything down with
./dock down
I built the image with
./dock build
Then I pushed it to the registry I created with
./dock push
The push refers to repository [192.168.99.100:5000/registry]
Get https://192.168.99.100:5000/v2/: http: server gave HTTP response to HTTPS client
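This push error means the Docker daemon doing the push speaks HTTPS to a registry that only serves HTTP. One common workaround (assuming an insecure local registry is acceptable for this setup) is to list the registry in the daemon's insecure-registries, e.g. in /etc/docker/daemon.json, and restart the daemon:

```json
{
  "insecure-registries": ["192.168.99.100:5000"]
}
```

With docker-machine, the equivalent is creating the machine with --engine-insecure-registry 192.168.99.100:5000 so the flag is baked into the VM's daemon configuration.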
Then I tried to deploy the stack with
./dock deploy
Are you sure you want to deploy [192.168.99.100:5000/registry:web] to [docker@192.168.99.100]? (y/N) y
Uploading deployment configuration...The authenticity of host '192.168.99.100 (192.168.99.100)' can't be established.
ECDSA key fingerprint is SHA256:r1F+7kuet+grlysNruBECAmYpRVVlvORYhAR4ipNFco.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.99.100' (ECDSA) to the list of known hosts.
docker@192.168.99.100's password:
docker@192.168.99.100's password:
OK
docker@192.168.99.100's password:
./dock-swarm: line 4: .env: No such file or directory
It asks for the password of the machine (which is "tcuser") and fails with ".env: No such file or directory", so I manually copied the .env into the folder created on the manager node with
docker-machine scp -r .env manager:/home/docker/laraveldock
I tried to deploy again and got this error:
sudo ./dock deploy
Are you sure you want to deploy [192.168.99.100:5000/registry:web] to [docker@192.168.99.100]? (y/N) y
Uploading deployment configuration...docker@192.168.99.100's password:
docker@192.168.99.100's password:
OK
docker@192.168.99.100's password:
Top-level object must be a mapping
At this point I am stuck with this error. Am I doing something wrong in these steps? Here are the configurations:
.env
APP_ID=laraveldock
DEPLOY_SERVER=docker@192.168.99.100
DOCKER_REPOSITORY=192.168.99.100:5000/registry:web
REGISTRY=localhost:5000
REGISTRY_USER=docker
REGISTRY_PASSWORD=tcuser
APP_NAME=Laravel
APP_ENV=local
APP_KEY=base64:25q1JJ78zFzeGYOBIyIndCmuSJBOCaezyW2utE7hE3Y=
APP_DEBUG=true
APP_URL=http://localhost
LOG_CHANNEL=stack
DB_CONNECTION=mysql
DB_DATABASE=laraveldock
DB_USERNAME=root
DB_PASSWORD=secret
REDIS_PASSWORD=rediz
BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
SESSION_DRIVER=file
SESSION_LIFETIME=120
MAIL_DRIVER=smtp
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
docker-compose.yml
version: '3.7'

# Volumes
volumes:
  mysql:
    driver: local
  redis:
    driver: local

services:
  # Apache + PHP
  app:
    environment:
      - CONTAINER_ROLE=app
      - DB_HOST=db
      - DB_PORT=3306
      - REDIS_HOST=redis
      - REDIS_PORT=6379
    env_file: ../.env
    ports:
      - "80:80"
  # Scheduler
  scheduler:
    user: webdev
    environment:
      - CONTAINER_ROLE=scheduler
      - DB_HOST=db
      - DB_PORT=3306
      - REDIS_HOST=redis
      - REDIS_PORT=6379
    env_file: ../.env
  # Queue worker
  queue:
    user: webdev
    environment:
      - CONTAINER_ROLE=queue
      - DB_HOST=db
      - DB_PORT=3306
      - REDIS_HOST=redis
      - REDIS_PORT=6379
    env_file: ../.env
  # MySQL
  db:
    image: mysql:5.7
    command: ["--character-set-server=utf8mb4", "--collation-server=utf8mb4_unicode_ci"]
    environment:
      - MYSQL_ROOT_PASSWORD=${DB_PASSWORD}
      - MYSQL_DATABASE=${DB_DATABASE}
    ports:
      - "3306:3306"
    volumes:
      - mysql:/var/lib/mysql/
  # Redis
  redis:
    image: redis:5.0
    command: ["redis-server", "--appendonly", "yes", "--requirepass", "${REDIS_PASSWORD}"]
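"Top-level object must be a mapping" is a YAML parse error: the file handed to docker stack deploy parsed to something other than a mapping (for example, the wrong file, or an empty one, was uploaded to the server). A quick sanity check you can run locally before deploying, assuming docker-compose is installed:

```shell
# Parse and validate the compose file; prints nothing on success,
# or the YAML/compose error otherwise.
docker-compose -f docker-compose.yml config >/dev/null && echo "compose file OK"
```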

I spent an hour on this bug. In my case the problem was caused by docker-compose missing on the remote server.

Related

How to dockerize vue3 laravel9 application?

I am developing a Vue 3 / Laravel 9 SPA.
The app is currently running in dev, and in production on a web host.
I would like to distribute it as a dockerized application without the need of a web host, but I don't know Docker that well.
I based my approach on this example: https://github.com/AveryHowell/docker-laravel-vue
I just copied the following from the example into my app:
laravel/nginx (folder)
laravel/docker-compose.yml (file)
laravel/laravel.dockerfile (file)
vue/nginx.conf (file)
vue/vue.dockerfile (file)
The changes I made to the example are very limited.
First I changed the MySQL image in laravel/docker-compose.yml to mariadb:10.3.37:
# Mysql DB
mysql:
  image: mariadb:10.3.37
  container_name: db
  restart: unless-stopped
  tty: true
  ports:
    - "33061:3306"
  environment:
    MYSQL_ROOT_PASSWORD: secret
    MYSQL_DATABASE: laravel
  volumes:
    - ../mysql:/mysql
  networks:
    - app-network
This is the version I currently use; it avoids some trouble with default values in JSON columns.
Secondly I changed the database config and some URLs in .env:
APP_NAME='my application'
APP_ENV=local
APP_KEY=base64:Wh2GGnDZ6YLEjNghheIX4qL+6P9lHuS8sO03L/pCufA=
APP_DEBUG=true
APP_URL=http://localhost
APP_API_URL=http://localhost/api/
APP_API_DEV_URL=http://localhost:8080/
LOG_CHANNEL=stack
LOG_DEPRECATIONS_CHANNEL=null
LOG_LEVEL=debug
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=root
DB_PASSWORD=secret
DB_ROOT_PASSWORD=secret
BROADCAST_DRIVER=log
CACHE_DRIVER=file
FILESYSTEM_DISK=local
QUEUE_CONNECTION=sync
SESSION_DRIVER=cookie
SESSION_LIFETIME=120
SESSION_DOMAIN=.localhost
SANCTUM_STATEFUL_DOMAINS=localhost:8080
SPA_URL=http://localhost:8080
After that I ran these commands:
docker-compose up -d
docker-compose exec laravel composer update
docker-compose exec laravel composer install
docker-compose exec laravel composer dump-autoload
docker-compose exec laravel php artisan key:generate
docker-compose exec laravel php artisan config:cache
docker-compose exec laravel php artisan migrate:fresh
The tables were created in MariaDB, but when I visited localhost:8080 as the example says, I got this message:
The connection was reset
The connection to the server was reset while the page was loading.
The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer’s network connection.
If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the web.
What is wrong?
EDIT AFTER FIRST COMMENT
[jaaf@localhost laravel]$ docker container ls --filter label=com.docker.compose.project
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5719d4b36179 laravel-webserver "/docker-entrypoint.…" 44 minutes ago Up 44 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp webserver
9b88d442636c laravel-vue "docker-entrypoint.s…" 44 minutes ago Restarting (1) 51 seconds ago vue
37743ac9307f laravel "docker-php-entrypoi…" 44 minutes ago Up 44 minutes 9000/tcp laravel
1cf764c56ad0 mariadb:10.3.37 "docker-entrypoint.s…" 44 minutes ago Up 44 minutes 0.0.0.0:33061->3306/tcp, :::33061->3306/tcp db
[jaaf@localhost laravel]$
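Note that in the listing above the vue container is in a restart loop ("Restarting (1) 51 seconds ago"), which by itself would explain the connection reset on :8080. Its logs should show why it keeps exiting:

```shell
# Show the last lines of output from the crashing container
# (the container name "vue" is the one from the listing above).
docker logs --tail 50 vue
```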

Laravel running inside docker successfully but unable to see anything in the browser?

I have been following this tutorial, making a few changes when needed: https://medium.com/@pierangelo1982/dockerize-an-existing-laravel-application-with-docker-compose-a45eb7956cbd
After running docker-compose build and docker-compose up, all the migrations run successfully and I get a message saying the app is running on http://0.0.0.0:8000. However, neither that URL nor the one specified in the tutorial shows anything.
Here is my Dockerfile:
FROM php:7
RUN apt-get update -y && apt-get install -y openssl zip unzip git
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install pdo pdo_mysql
WORKDIR /app
COPY . /app
RUN composer install
CMD php artisan serve --host=0.0.0.0 --port=8181
EXPOSE 8181
my docker-compose.yaml file:
version: '2'
services:
  app:
    build: .
    ports:
      - '8009:8000'
    volumes:
      - .:/app
    env_file: .env
    working_dir: /app
    command: bash -c 'composer install && php artisan migrate && php artisan serve --host 0.0.0.0'
    depends_on:
      - db
    links:
      - db
  db:
    image: 'mysql:5.7'
    environment:
      - MYSQL_ROOT_PASSWORD=1
      - MYSQL_DATABASE=posts
      - MYSQL_PASSWORD=1
    volumes:
      - ./data/:/var/lib/mysql
    ports:
      - '3306:3306'
  phpmyadmin:
    depends_on:
      - db
    image: phpmyadmin/phpmyadmin
    restart: always
    ports:
      - 8090:80
    environment:
      PMA_HOST: db
      MYSQL_ROOT_PASSWORD: 1
and my .env file
APP_NAME=Laravel
APP_ENV=local
APP_KEY=base64:oD3DGqKdE7ne91FGu7YJzAJo721v56uVjAZGRGT6VNk=
APP_DEBUG=true
APP_URL=http://posts.test
LOG_CHANNEL=stack
LOG_LEVEL=debug
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
DB_DATABASE=posts
DB_USERNAME=root
DB_PASSWORD=1
BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
SESSION_DRIVER=file
SESSION_LIFETIME=120
MEMCACHED_HOST=127.0.0.1
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_MAILER=smtp
MAIL_HOST=mailhog
MAIL_PORT=1025
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
MAIL_FROM_ADDRESS=null
MAIL_FROM_NAME="${APP_NAME}"
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=
PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
Not really sure what is happening at this point, since everything seems to run fine, but then I see nothing on the specified port.
Hello, I think you should check your port forwarding config. Inside the container the Dockerfile starts the web server with
CMD php artisan serve --host=0.0.0.0 --port=8181
and I don't see where in your docker-compose.yaml you forward that inner port 8181 to a host port. Note also that the command: in docker-compose.yaml overrides the Dockerfile CMD and serves on artisan's default port 8000, so with the mapping '8009:8000' the app should be reachable at http://localhost:8009, not on port 8000.
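A minimal sketch of making the published port and the serving port agree (under the assumption that either the compose command or the Dockerfile CMD runs, but not both):

```yaml
# The published port must match the port the server listens on.
# With the compose "command" kept (artisan's default port 8000), the
# existing mapping is already consistent -- browse http://localhost:8009:
services:
  app:
    ports:
      - "8009:8000"   # host 8009 -> container 8000
# If instead the Dockerfile CMD runs (it serves on 8181), publish 8181:
#      - "8009:8181"
```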

Database migrations only work with 127.0.0.1 but website access needs localhost with Homestead

I have a Homestead setup in Users/[username]/Homestead and set up a Vagrant box to run several websites; this is working fine.
ip: 192.168.10.10
memory: 2048
cpus: 2
provider: virtualbox
mariadb: true
authorize: ~/.ssh/id_rsa.pub
keys:
    - ~/.ssh/id_rsa
folders:
    - map: '~/Sites/domain1'
      to: /home/vagrant/domain1
    - map: '~/Sites/domain2'
      to: /home/vagrant/domain2
sites:
    - map: domain1.app
      to: /home/vagrant/domain1/public
    - map: domain2.app
      to: /home/vagrant/domain2/public
databases:
    - homestead
    - domain1database
    - domain2database
I run vagrant up and vagrant ssh from the Users/[username]/Homestead directory.
The problem relates to the migrations and then accessing the database from the frontend, i.e. https://domain1.app and https://domain2.app.
For example, the .env with the following settings allows migrations from /Users/[username]/Sites/domain1:
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=33060
DB_DATABASE=domain1database
DB_USERNAME=homestead
DB_PASSWORD=secret
But I can only access these databases from the frontend with the following:
DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=33060
DB_DATABASE=domain1database
DB_USERNAME=homestead
DB_PASSWORD=secret
So at the moment I have to keep switching between 127.0.0.1 and localhost.
Do the sites need to reside in the Homestead folder?
UPDATE/SOLUTION:
I have managed to sort the issue with these settings in the .env.
DB_CONNECTION=mysql
DB_HOST=192.168.10.10
DB_PORT=3306
DB_DATABASE=domain1database
DB_USERNAME=homestead
DB_PASSWORD=secret
The same settings also work in Sequel Pro. (This makes sense: from the host, 127.0.0.1:33060 is the forwarded MySQL port, while the sites run inside the VM, so the VM's private IP 192.168.10.10 with MySQL's native port 3306 is reachable from both sides.)

php artisan migrate on Azure (in BitBucket pipeline)

I have setup a pipeline in BitBucket to automatically deploy my master branch of my project to an Azure Web App instance.
The app deploys the files and runs composer update as expected (although it warns that it's running as root), but php artisan migrate --force returns:
Illuminate\Database\QueryException : SQLSTATE[HY000] [1045] Access
denied for user 'forge'@'127.0.0.1' (using password: NO) (SQL: select
* from information_schema.tables where table_schema = forge and table_name = migrations)
I have already created the .env file, and when I run php artisan migrate from within a shell it runs successfully and the tables are created.
Since 'forge' is the default user in database.php, I figure .env isn't being loaded when the command is fired from the deploy script.
Is there something obvious I've missed to cause this issue, or should I somehow set it up to not run as root?
I could replace the database details in database.php but I feel that's the wrong thing to do.
edit
.env contents (with certain data replaced with ********):
APP_NAME=Laravel
APP_ENV=local
APP_KEY=********
APP_DEBUG=true
APP_URL=********
LOG_CHANNEL=stack
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=********
DB_DATABASE=********
DB_USERNAME=********
DB_PASSWORD=********
BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=sync
SESSION_DRIVER=file
SESSION_LIFETIME=120
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_DRIVER=smtp
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
edit 2
I realise I have yet to publish my bitbucket-pipelines.yml file:
image: php:7.2-fpm

pipelines:
  branches:
    master:
      - step:
          script:
            - apt-get update && apt-get install -qy git curl libmcrypt-dev mysql-client && apt-get install -qy unzip git
            - yes | pecl install mcrypt-1.0.1
            - docker-php-ext-install pdo_mysql
            - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
            - composer update
            - php artisan migrate --force
            - php artisan serve --port=80 &
            - sleep 5
            - curl -vk http://localhost:80
          deployment: staging
          services:
            - mysql

definitions:
  services:
    mysql:
      image: mysql:5.7
      environment:
        MYSQL_DATABASE: '******'
        MYSQL_RANDOM_ROOT_PASSWORD: 'yes'
        MYSQL_USER: '******'
        MYSQL_PASSWORD: '******'
        MYSQL_PORT: '******'
I also have a .env.pipelines file:
APP_ENV=local
APP_KEY=******
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_DATABASE=******
DB_USERNAME=******
DB_PASSWORD=******
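One thing that stands out: nothing in the pipeline script above copies .env.pipelines into place, which would match the symptom of .env not being loaded. A hedged sketch (the file names are from the question; the step itself is an assumption about the intent):

```yaml
# Added to the step's script, before "php artisan migrate --force":
- cp .env.pipelines .env
- php artisan config:clear
```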
This error basically comes from changes to the .env file:
Illuminate\Database\QueryException : SQLSTATE[HY000] [1045] Access
denied for user 'forge'@'127.0.0.1' (using password: NO) (SQL: select
* from information_schema.tables where table_schema = forge and table_name = migrations)
Whenever we change DB_DATABASE, DB_USERNAME or DB_PASSWORD in the .env file, we need to clear the config cache.
After editing .env, rebuild the cache with: php artisan config:cache
NOTE: If no password is set on the database, leave DB_PASSWORD empty and remove any trailing whitespace too (I've hit this before: a blank space gets treated as the password).
Without seeing your deploy script and how you connect to your Azure server: you would need to run
php artisan config:clear // this reloads the .env file into the cache
after you have connected to your server but before you run
php artisan migrate
Please check out this link:
https://laravel.com/docs/5.7/configuration#configuration-caching
php artisan config:cache
The above command just regenerates the cache for you (if added as part of a deployment script).
Otherwise you can use php artisan config:clear to clear the existing config and fetch values from the .env/config files (add it as part of your deployment script).

Can docker-compose.yml read database connection from laravel .env file?

My folder structure looks like this:
- root-dir
  -- docker
  -- src          // contains laravel application
     --- .env
  -- docker-compose.yml
As you might know, in both the Laravel .env and the docker-compose.yml you need to specify the connection settings for the db:
// .env
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=homestead
DB_USERNAME=homestead
DB_PASSWORD=secret
// docker-compose.yml
environment:
  - MYSQL_ROOT_PASSWORD=secret
  - MYSQL_DATABASE=homestead
  - MYSQL_USER=homestead
  - MYSQL_PASSWORD=secret
Is there a way I can make docker-compose "read" the settings from the .env file, since the latter is not tracked by git? Basically, if I have to change settings I would only have to do it in one place, and the credentials used by docker-compose.yml would not be tracked in git either.
You can do it like this (from the Docker documentation: https://docs.docker.com/compose/environment-variables/#the-env-file):
The “.env” file
You can set default values for any environment variables referenced in the Compose file, or used to configure Compose, in an environment file named .env:
$ cat .env
TAG=v1.5
$ cat docker-compose.yml
version: '3'
services:
  web:
    image: "webapp:${TAG}"
You can also use:
The “env_file” configuration option
You can pass multiple environment variables from an external file through to a service’s containers with the ‘env_file’ option, just like with docker run --env-file=FILE ...:
web:
  env_file:
    - web-variables.env
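Applying that to the question's database settings: Compose substitutes ${VAR} references from a .env next to the compose file, so you can reuse Laravel's variable names and keep the credentials in one place. Note that Compose only auto-reads .env from its own directory; with the .env under src/, you would need a symlink or (on newer versions) docker-compose --env-file src/.env. The fragment below is a sketch under that assumption:

```yaml
services:
  db:
    image: mysql:5.7
    environment:
      # All four values come from Laravel's .env via variable substitution.
      - MYSQL_ROOT_PASSWORD=${DB_PASSWORD}
      - MYSQL_DATABASE=${DB_DATABASE}
      - MYSQL_USER=${DB_USERNAME}
      - MYSQL_PASSWORD=${DB_PASSWORD}
```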
