Laravel Sail & Docker: Adding Additional Sites (Multi Project) - macOS

It is now recommended to use Sail & Docker with Laravel 8
I currently use Homestead, but I wanted to upgrade my system to the latest version 8, so I set it up with Docker Desktop and Sail. http://localhost works, and Node.js, npm, MySQL, and Redis are all ready.
What I want to learn is Sail & Docker: how do multiple projects work in this structure?
For example, in Homestead I previously worked with this config:
- map: homestead.test
  to: /home/vagrant/project1/public
- map: another.test
  to: /home/vagrant/project2/public
Thanks

You need to change the ports (MySQL, Redis, MailHog etc.) if you want to run multiple projects at the same time.
Application, MySQL, and Redis Ports
Add the desired ports to the .env file:
APP_PORT=81
FORWARD_DB_PORT=3307
FORWARD_REDIS_PORT=6380
MailHog Ports
Update MailHog ports in the docker-compose.yml file. Change these lines:
ports:
- 1025:1025
- 8025:8025
to this:
ports:
- 1026:1025
- 8026:8025
Once the containers have been started, you can access your application at http://localhost:81 and the MailHog web interface at http://localhost:8026.
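For context, these variables work because the docker-compose.yml that Sail generates uses them in its port mappings with defaults. A trimmed sketch of the relevant parts (your generated file may differ depending on the services you selected):

services:
    laravel.test:
        ports:
            - '${APP_PORT:-80}:80'
    mysql:
        ports:
            - '${FORWARD_DB_PORT:-3306}:3306'
    redis:
        ports:
            - '${FORWARD_REDIS_PORT:-6379}:6379'

So setting APP_PORT=81 in the second project's .env only moves the host-side port; nothing changes inside the containers.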

Yes. You have to change every conflicting port for each Laravel Sail project.
If you want to use custom domains like you did in Homestead,
you can use an Nginx proxy to run multiple projects, just like Homestead.
Here is my article: a step-by-step tutorial you can follow....
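As a rough illustration of that approach (a sketch only; the image, domains, and network name below are assumptions, not the article's exact setup): run one shared reverse-proxy container on port 80, attach every Sail project to the same external network, and give each app container a VIRTUAL_HOST.

# docker-compose.yml for the shared proxy (run once, on its own)
services:
    proxy:
        image: nginxproxy/nginx-proxy
        ports:
            - '80:80'
        volumes:
            - /var/run/docker.sock:/tmp/docker.sock:ro
        networks:
            - sail-proxy
networks:
    sail-proxy:
        name: sail-proxy

Then, in each Sail project's docker-compose.yml, add VIRTUAL_HOST (e.g. project1.test) to the laravel.test service's environment, attach that service to the sail-proxy network (declared with external: true), and point project1.test at 127.0.0.1 in /etc/hosts.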

Related

Laravel Sail Share gives 404 or ERR_CONNECTION_REFUSED

I'm trying to use the Sail Share command with Laravel Sail, but it's not working. I've tried removing the beyondcodegmbh/expose-server image as recommended here.
It installs and everything seems to be working fine. I can get to the expose dashboard where you can follow the requests, but trying to use the expose http url without the port gives me a 404, and the url with the port just spins forever and never does anything. If I click the https link that comes up in the CLI I get ERR_CONNECTION_REFUSED.
I'm using WSL2 with Ubuntu 20.04. Sail 8.1. Docker version 20.10.12, build e91ed57.
My use case is that I'm trying to interact with an external API while doing my local development, and the API requires an HTTPS redirect url, so obviously I have to have SSL and the ability to expose the URL.
If there's another way to accomplish this without using expose, please let me know. Currently also looking into using Caddy, based on some other resources I've found.
UPDATE
I've tried changing my docker-compose.yml file with the following:
expose:
    image: beyondcodegmbh/expose-server:latest
    extra_hosts:
        - "host.docker.internal:host-gateway"
    ports:
        - '${APP_PORT:-80}:80'
    environment:
        port: ${APP_PORT}
        domain: ${APP_URL}
        username: 'admin'
        password: 'admin'
    restart: always
    volumes:
        - ./database/expose.db:/root/.expose
which I got from this link, but it's still not working. I'm using port 80 in the APP_URL in my .env file, and then I put 8080 for the expose port, I'm not sure if that's right.
As of right now, it appears to be a known limitation.
The https links generated from sail share do not work.
See this ticket

How to network 2 separate docker containers to communicate with each other?

I'm pretty new to docker, and I've tried searching about networking but haven't found a solution that's worked.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the container name of the API inside my Laravel .env file and have it dynamically resolve the container IP so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API runs in an external container, my app can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside the docker-compose.yml files for each of the containers, I specified:
networks:
    my-network:
        name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. Was hoping someone familiar with docker might be able to explain why since I'm sure it's something very obvious I'm missing. Thanks!
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you, is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
    server:
        image: node:16.9.0
        container_name: server
        tty: true
        stdin_open: true
        depends_on:
            - mongo
        command: bash
    mongo:
        image: mongo
        environment:
            MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker will create a default network and assign the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (MongoDB listens on port 27017 by default):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
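For two containers that belong to separate compose projects (as in your case), the same name-based lookup works once both projects attach their services to one shared, pre-created network. A minimal sketch, assuming the network was created with docker network create my-network and the API service is called my-api (hypothetical names):

# in the API project's docker-compose.yml
services:
    my-api:                       # this service name becomes the DNS hostname
        image: my-api-image       # hypothetical image name
        networks:
            - my-network

# in BOTH projects' docker-compose.yml
networks:
    my-network:
        external: true            # reuse the pre-created network instead of creating a new one

In the Laradock project, also list my-network under networks: for the workspace and php-fpm services. After that, ping my-api from laradock-workspace-1 should resolve, and http://my-api:<port> can go straight into the Laravel .env file.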

How to get docker containers from different projects speaking to each other

I have developed and dockerised two applications, web (React) and api (Laravel, MySQL); they have separate codebases and separate directories.
Could somebody please explain how I can get my web application talking to my api while using Docker?
Update: Ultimately what I want to achieve is to have both my frontend and backend running on port 80 without having two web servers running as containers, so that my Docker development environment works the same as using Valet or MAMP, etc.
For development you could make use of docker-compose.
Key benefits:
- Configure your app's services in YAML.
- A single command creates and starts all the services defined in that configuration (see the commands sketched below).
- Compose creates a default network for your app. Each container joins this default network and they can all see each other.
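For instance, with the compose file shown further below, the day-to-day workflow is just the standard Compose CLI (a usage sketch):

docker-compose up -d --build    # build the images and start every service in the background
docker-compose logs -f backend  # follow the logs of a single service
docker-compose down             # stop and remove the containers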
I use the following structure for a project.
projectFolder
|_backend (laravel app)
|_frontend (react app)
|_docker-compose.yml
|_backend.dockerfile
|_frontend.dockerfile
My docker-compose.yml
version: "3.3"
services:
    frontend:
        build:
            context: ./
            dockerfile: frontend.dockerfile
            args:
                - NODE_ENV=development
        ports:
            - "3000:3000"
        volumes:
            - ./frontend:/opt/app
            - ./frontend/package.json:/opt/package.json
        environment:
            - NODE_ENV=development
    backend:
        build:
            context: ./
            dockerfile: backend.dockerfile
        working_dir: /var/www/html/actas
        volumes:
            - ./backend:/var/www/html/actas
        environment:
            - "DB_PORT=3306"
            - "DB_HOST=mysql"
        ports:
            - "8000:8000"
    mysql:
        image: mysql:5.6
        ports:
            - "3306:3306"
        volumes:
            - dbdata:/var/lib/mysql
        environment:
            - "MYSQL_DATABASE=homestead"
            - "MYSQL_USER=homestead"
            - "MYSQL_PASSWORD=secret"
            - "MYSQL_ROOT_PASSWORD=secret"
volumes:
    dbdata:
Each part of the application is defined by a service in the docker-compose file. E.g.
frontend
backend
mysql
Docker Compose will create a default network and add each container to it. The hostname for each container will be the service name defined in the yml file.
For example, the backend container accesses the mysql server using the name mysql. You can see this in the service definition itself:
backend:
    ...
    environment:
        - "DB_PORT=3306"
        - "DB_HOST=mysql"  # <-- the hostname of the mysql container is the name of the service
With this, in the React app, I can set up the proxy configuration in package.json as follows:
"proxy": "http://backend:8000",
One last thing, as mentioned by David Maze in the comments: add backend to your hosts file, so the browser can resolve that name.
E.g. /etc/hosts on Ubuntu:
127.0.1.1 backend
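The compose file above references frontend.dockerfile and backend.dockerfile without showing them. A minimal sketch of what they could contain (hypothetical; adjust the base images and commands to your own apps):

# frontend.dockerfile
FROM node:16
WORKDIR /opt/app
COPY frontend/package.json ./
RUN npm install
COPY frontend/ ./
EXPOSE 3000
CMD ["npm", "start"]

# backend.dockerfile
FROM php:7.4-fpm
RUN docker-php-ext-install pdo_mysql
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
WORKDIR /var/www/html/actas
EXPOSE 8000
CMD ["php", "artisan", "serve", "--host=0.0.0.0", "--port=8000"]

Since the compose file bind-mounts ./frontend and ./backend into the containers, the COPY lines mainly matter for the first build; the running code comes from the volumes.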

Communication between two ddev projects

I have two ddev projects that need to interact with each other. When I ran into some issues, I checked the resolved IP for the connection.
I did this by SSHing into project1 and pinging project2 (ping project2.ddev.local).
The domain resolves to 127.0.0.1.
So every request I send to this domain stays in the current container and is not routed to the other project.
Steps to reproduce:
Start two separate ddev containers and ssh into one of them. Try to ping the other project by using its ddev domain.
Is there a way for two (or more) projects to interact with each other?
Edit 2019-01-08: It's actually easy to do this with just the docker name of the container, no extra docker-compose config is required. For a db container that's ddev-<projectname>-db. So you can access the db container of a project named "d8composer" by using the hostname ddev-d8composer-db; for example mysql -udb -pdb -h ddev-d8composer-db db
Here's another technique that actually does have two projects communicating with each other.
Let's say that you have two projects named project1 and project2, and you want project2 to have access to the db container from project1.
Add a .ddev/docker-compose.extradb.yaml to project2's .ddev folder with this content:
version: '3.6'
services:
    web:
        external_links:
            - ddev-project1-db:proj1-db
And now project1's database container is accessible from the web container on project2. For example, you can mysql -h proj1-db from within the project2 web container.
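Assuming ddev's default database credentials (user db, password db, database db), that looks roughly like this:

ddev ssh                         # enter project2's web container
mysql -udb -pdb -h proj1-db db   # connect to project1's database through the alias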
Note that this is often a bad idea; it's best not to have two dev projects depend on each other, and better to keep development environments as simple as possible. If you just need an extra database, you might want to try "How can I create and load a second database in ddev?". If you just need an additional web container as an API server or whatever, the other answer is better.
A simple example of extra_hosts. I needed to use an HTTPS URL in a Drupal module's UI, entity_share, to cURL another ddev site.
On foo I add a .ddev/docker-compose.hosts.yaml
version: '3.6'
services:
    web:
        extra_hosts:
            - bar.ddev.site:172.18.0.6
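The IP is whatever the other project's web container currently has on the shared docker network. One way to look it up (a sketch; ddev containers are normally named ddev-<project>-web):

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' ddev-bar-web

Keep in mind the address can change when containers are recreated, so the extra_hosts entry may need updating.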
I tried this and it worked quite nicely; the basic idea is to run a separate ddev-webserver as a service. We usually think of a ddev "service" as something like redis or memcache or solr, but it can really be an API server of any type, and can use the ddev-webserver image (or any other webserver image you want to use).
For example, add this docker-compose.api.yaml to your project's .ddev folder (updated for ddev v1.1.1):
version: '3.6'
services:
    myapi:
        image: drud/ddev-webserver:v1.1.0
        restart: "no"
        labels:
            com.ddev.site-name: ${DDEV_SITENAME}
            com.ddev.approot: $DDEV_APPROOT
            com.ddev.app-url: $DDEV_URL
        volumes:
            - "../myapi_docroot/:/var/www/html:cached"
            - ".:/mnt/ddev_config:ro"
    web:
        links:
            - myapi:$DDEV_HOSTNAME
and put a dummy index.html in your project's ./myapi_docroot.
After ddev start you can ddev ssh -s myapi and do whatever you want there (and myapi_docroot is mounted at /var/www/html). If you ddev ssh into the web container you can curl http://myapi and you'll see the contents of your myapi_docroot/index.html. Your myapi container can access the 'db' container, or you can run another db container, or ...
Note that this mounts a subdirectory of the main project as /var/www/html, but it can actually mount anything you want. For example,
volumes:
    - "../../fancyapiproject/:/var/www/html:cached"

Docker: How to install memcached in container

I am using the php:5.6-apache Docker image for local development. I am developing a portal using CodeIgniter 3 and want to use Memcached for caching, but it is not included by default in the php:5.6-apache image.
How can I install memcached in my container?
Right now I am trying to use a memcached container for this purpose but haven't had any success yet. Any help will be appreciated.
With something like this:
docker-compose.yml
version: "3"
services:
    php:
        build: .
    memcached:
        image: memcached
You can point to the memcached container from the php container like this: memcached:11211.
You will need to install and enable the php-memcached extension in your php Dockerfile.
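A rough sketch of such a Dockerfile (assuming the PECL memcached 2.x branch, the last one compatible with PHP 5.6):

FROM php:5.6-apache
# build dependencies plus the memcached extension
RUN apt-get update && apt-get install -y libmemcached-dev zlib1g-dev \
    && pecl install memcached-2.2.0 \
    && docker-php-ext-enable memcached

After rebuilding (docker-compose build php), the extension should show up in php -m, and CodeIgniter can be pointed at host memcached, port 11211.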
