Laravel Sail Share gives 404 or ERR_CONNECTION_REFUSED - laravel

I'm trying to use the share command with Laravel Sail, but it's not working. I've tried removing the beyondcodegmbh/expose-server image, as recommended here.
It installs and everything seems to work. I can get to the Expose dashboard where you can follow the requests, but the Expose HTTP URL without the port gives me a 404, the URL with the port just spins forever and never loads, and if I click the HTTPS link that comes up in the CLI I get ERR_CONNECTION_REFUSED.
I'm using WSL2 with Ubuntu 20.04. Sail 8.1. Docker version 20.10.12, build e91ed57.
My use case is that I'm trying to interact with an external API while doing my local development, and the API requires an HTTPS redirect url, so obviously I have to have SSL and the ability to expose the URL.
If there's another way to accomplish this without using Expose, please let me know. I'm currently also looking into Caddy, based on some other resources I've found.
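For reference, here is a minimal sketch of the Caddy idea I'm considering: run Caddy as a local HTTPS reverse proxy in front of the Sail app. The service name, hostname, and the sail network name are assumptions based on a default Sail setup, not something I've confirmed:

# docker-compose.yml fragment (hypothetical) - Caddy terminates HTTPS locally
# and proxies to the Sail app service
caddy:
    image: caddy:2
    ports:
        - '443:443'
    # myapp.localhost and laravel.test:80 are placeholders for the local domain
    # and the Sail app service/port
    command: caddy reverse-proxy --from myapp.localhost --to laravel.test:80
    networks:
        - sail    # assumed name of Sail's default network

Note that this only gives locally trusted HTTPS; if the third-party API has to reach the redirect URL from the outside, a tunnel like Expose is still needed.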
UPDATE
I've tried changing my docker-compose.yml file with the following:
expose:
    image: beyondcodegmbh/expose-server:latest
    extra_hosts:
        - "host.docker.internal:host-gateway"
    ports:
        - '${APP_PORT:-80}:80'
    environment:
        port: ${APP_PORT}
        domain: ${APP_URL}
        username: 'admin'
        password: 'admin'
    restart: always
    volumes:
        - ./database/expose.db:/root/.expose
which I got from this link, but it's still not working. I'm using port 80 in the APP_URL in my .env file and 8080 for the expose port; I'm not sure if that's right.
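For reference, this is roughly how those two variables plug into the expose block above; the values here are just an illustration, not something I've confirmed works:

# .env - interpolated by docker-compose into the expose service above
APP_PORT=8080                  # host side of the '${APP_PORT:-80}:80' mapping
APP_URL=http://localhost       # passed through to the expose server's domain setting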

As of right now, it appears to be a known limitation.
The HTTPS links generated by sail share do not work.
See this ticket

Related

Docker: port 0.0.0.0:4500->9090/tcp -- What is my host url?

Edit: Solved. It should be
services:
    web:
        ports:
            - "4500:4500" # instead of "4500:9090"
Does anyone know which URL I should use to reach the deployed website? The MySQL server at jdbc:mysql://192.168.99.100:3306/ works fine.
But for the web app I've tried 192.168.99.100:9090/category/all, 192.168.99.100:4500/category/all, localhost:4500/actuator, and localhost:9090/actuator, and nothing works.
Edit: after using curl, as Nightt advised
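For anyone hitting the same confusion, the fix above comes down to the HOST:CONTAINER order of a ports mapping. A minimal sketch, with illustrative names and ports rather than the exact project config:

services:
    web:
        ports:
            # HOST:CONTAINER - the left side is what you hit from the host
            # (http://localhost:4500); the right side must match the port the
            # app listens on inside the container
            - "4500:9090"    # use "4500:4500" instead if the app listens on 4500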

How to network 2 separate docker containers to communicate with each other?

I'm pretty new to Docker, and I've tried searching about networking but haven't found a solution that works.
I have a Laravel app that uses Laradock.
I also have an external third-party API that runs in its own Docker container.
I basically want to specify the container name of the API inside my Laravel .env file and have it dynamically resolve the container IP, so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like MariaDB/MySQL, but since my API lives in an external container, Laravel can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside my docker-compose.yml files for each of the containers, I specified:
networks:
    my-network:
        name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. I was hoping someone familiar with Docker might be able to explain, since I'm sure it's something obvious I'm missing. Thanks!
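For comparison, a hedged sketch of what attaching a container to a pre-created network usually looks like; my-api and its image are placeholders, and the key detail is the per-service networks entry:

networks:
    my-network:
        external: true    # created beforehand with `docker network create my-network`
services:
    my-api:
        image: my-api:latest    # placeholder image
        networks:
            - my-network        # without this entry the container never joins my-network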
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you, is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
    server:
        image: node:16.9.0
        container_name: server
        tty: true
        stdin_open: true
        depends_on:
            - mongo
        command: bash
    mongo:
        image: mongo
        environment:
            MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker creates a default network and assigns the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (on MongoDB's default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
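Applied to the Laradock question above, that would look roughly like this, assuming the API's Compose service is called my-api and listens on port 3000 (both are placeholders):

# Laravel .env - the hostname is the API's Compose service name, resolved by Docker's internal DNS
API_BASE_URL=http://my-api:3000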

How to reference service in another docker-compose.yml in .env?

I have some containers on the same network, but they are in separate docker-compose.yml files.
service_1 is an API container.
service_2 is a Laravel app using Laradock.
I do this inside both my docker-compose.yml files:
networks:
    default:
        external:
            name: my_network
I check with docker network inspect my_network that both my containers are connected.
Now, inside service_2's Laravel .env file, I want to reference my_network and grab the IP/host of service_1 dynamically, so I don't have to change the IP every time.
This doesn't work, however: when I try to make a call from my Laravel app, it doesn't resolve the hostname and tries to make the request to service_1:443.
Hopefully someone can explain!
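One hedged way to narrow this down is to first confirm the name actually resolves from inside service_2, and only then reference it in .env; the container and service names below are assumptions:

# from the host: can service_2 resolve and reach service_1 over my_network?
# (assumes ping is available in the image)
docker exec -it service_2 ping -c 3 service_1

# Laravel .env inside service_2 - Docker's embedded DNS resolves the service name at runtime
API_URL=https://service_1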

Laravel Sail & Docker Adding Additional Sites (Multi Project)

It is now recommended to use Sail & Docker with Laravel 8
I currently use Homestead, but I wanted to upgrade my system to the latest version 8, so I installed Docker Desktop and Sail and did the setup. http://localhost works, and Node.js, npm, MySQL, and Redis are all ready.
What I want to learn is Sail & Docker: how do multiple projects work in this structure?
For example, Homestead previously handled this with the following config:
- map: homestead.test
  to: /home/vagrant/project1/public
- map: another.test
  to: /home/vagrant/project2/public
Thanks
You need to change the ports (MySQL, Redis, MailHog etc.) if you want to run multiple projects at the same time.
Application, MySQL, and Redis Ports
Add the desired ports to the .env file:
APP_PORT=81
FORWARD_DB_PORT=3307
FORWARD_REDIS_PORT=6380
MailHog Ports
Update MailHog ports in the docker-compose.yml file. Change these lines:
ports:
    - 1025:1025
    - 8025:8025
to this:
ports:
    - 1026:1025
    - 8026:8025
Once the containers have been started, you can access your application at http://localhost:81 and the MailHog web interface at http://localhost:8026.
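A quick way to confirm the remapped ports are live once the containers are up; these commands are a generic check, not part of the original answer:

# start the stack, then hit the remapped ports
./vendor/bin/sail up -d
curl -I http://localhost:81       # the Laravel app
curl -I http://localhost:8026     # the MailHog web interface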
Yes, you have to change every conflicting port for each Laravel Sail project.
If you want to use custom domains like you did in Homestead, you can use an Nginx proxy to run multiple projects, just like Homestead.
Here is my article: a step-by-step tutorial you can follow.
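One common shape for that proxy setup is roughly the following sketch; it assumes the nginxproxy/nginx-proxy image and a shared external network called proxy, so adapt the names to your projects:

# docker-compose.yml for the proxy itself
services:
    proxy:
        image: nginxproxy/nginx-proxy
        ports:
            - '80:80'
        volumes:
            - /var/run/docker.sock:/tmp/docker.sock:ro    # lets the proxy discover containers
        networks:
            - proxy
networks:
    proxy:
        external: true

# in each project's docker-compose.yml, the app service joins the same network
# and advertises its hostname:
#     environment:
#         VIRTUAL_HOST: project1.test
#     networks:
#         - proxy

You would still point project1.test (and the other local domains) at 127.0.0.1 in your hosts file.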

How to use host network while linking to a container?

In my docker-compose:
laravel:
    image: trackware
    links:
        - postgis:postgis
    ports:
        - "80:80"
        - "3306:3306"
        - "443:443"
        - "220:22"
        - "8000:8000"
    net: "host"
    restart: always
    volumes:
        - C:/H/repositories/pubnub:/share
    container_name: laravel
postgis:
    image: mdillon/postgis
    env_file: .postgis_env
    ports:
        - "9090:9000"
        - "54320:5432"
    container_name: postgis
If I run docker-compose up -d, I get this error:
Conflicting options: host type networking can't be used with links. This would result in undefined behavior
So, how can I use net: "host" while linking to the postgis container?
The laravel container needs to run a PubNub client, which needs high-performance networking for real-time message handling, and it also needs to link to the postgis container to access the database.
Any advice? I am using Docker 1.10.2.
Since you publish the postgis ports to the host, you can skip linking and connect through the published host ports (for example localhost:54320 for Postgres, given the 54320:5432 mapping). I believe this will work since the Laravel application resides on the host network and shares those ports.
The reason we use the links keyword is so that Docker can resolve hostnames internally, letting two separate containers communicate with each other.
In your case, if you were not using the host network and were using the links keyword, Docker would have created an internal hostname for each container name so the two containers could talk to each other.
When you use "host" network mode, you are telling Docker that you will use the host's underlying network, so by simply exposing the ports on localhost your containers can communicate with each other.
I don't know the exact reason, but you shouldn't combine the "host" driver with port mappings, or at least you won't get the expected result. With a mapping like "220:22" under host networking, the container's port 22 ends up exposed on the host directly.
"net" is outdated as far as I know; use "network_mode" instead.
I would also recommend updating docker-compose to the latest version, currently 1.6.2; previous versions had some networking problems.
Maybe you can use the "bridge" driver instead? In your case, I can't see a problem it couldn't solve.
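A sketch of what the bridge-network variant could look like, dropping both net: "host" and links; the network name is illustrative and newer Compose syntax is assumed:

services:
    laravel:
        image: trackware
        ports:
            - '80:80'
            - '443:443'
        networks:
            - app-net
    postgis:
        image: mdillon/postgis
        env_file: .postgis_env
        networks:
            - app-net
networks:
    app-net:
        driver: bridge
# the laravel service can now reach Postgres at postgis:5432 via Compose's built-in DNS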
