How to change Vite dev server asset base path - laravel

I'm running a Vite dev server in a Docker container and trying to load assets from that Vite server in a Laravel Blade template with the @vite directive, but the @vite directive outputs asset URLs like http://127.0.0.1:5173/path/to/asset or http://[::]:5173/path/to/asset. I want to test the website from a different computer on the same network as the server, and of course the assets won't load there, because those addresses are only reachable from the server machine itself.
I see that the @vite directive gets its base path from a public/hot file generated by the Vite dev server at runtime, but I can't for the life of me figure out how the hot file gets generated or what determines its contents. I'm assuming there must be a configuration option, but I've tried every config and environment variable I can find, and nothing ever affects the hot file, even after restarting and rebuilding the Vite docker container.
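For reference, the hot file itself appears to be nothing more than a one-line text file holding the dev server URL, which the @vite directive then prepends to every asset path, along the lines of:
http://127.0.0.1:5173
So whatever decides the URL written into that file is the thing I need to change. Here is my setup: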
docker-compose.yml
...
  vite:
    build:
      context: .
      dockerfile: vite/Dockerfile
    image: me/vite
    container_name: appvite
    restart: unless-stopped
    tty: true
    ports:
      - "5173:5173"
      - "8000:8000"
    working_dir: /app/
    volumes:
      - .:/app/
    networks:
      - app-network
...
vite.config.js
import { defineConfig } from 'vite';
import laravel from 'laravel-vite-plugin';

export default defineConfig(({ command }) => {
    const config = { plugins: [], server: {} };

    if (command === 'serve') {
        // Dev specific configs
        config.plugins.push(
            laravel({
                input: ['resources/js/app.js'],
                refresh: true,
            })
        );
        config.server.origin = "http://test1";
        config.origin = "http://test2";
        config.base = "http://test3";
        config.server.base = "http://test4";
    } else {
        // Build specific configs
        config.plugins.push(
            laravel({
                input: ['resources/js/app.js'],
                refresh: false
            })
        );
    }

    return config;
});
vite/Dockerfile
FROM node:16
WORKDIR /app
COPY . /app
EXPOSE 8000
EXPOSE 5173
CMD ["npm", "run", "dev-debug"]
.env
...
ASSET_URL=http://test1
VITE_ASSET_URL=http://test2
...

Just change .env:
ASSET_URL=MACHINE_IP:VITE_PORT
for example: 192.168.1.150:5173
It should work.
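Spelled out as a full URL in .env (192.168.1.150 here is just a stand-in for the dev machine's LAN IP, so adjust to your setup):
ASSET_URL=http://192.168.1.150:5173
Treat this as a sketch of the suggestion above rather than a guaranteed fix; whether it ends up in public/hot depends on how the hot file is generated in your setup.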
Based on the Laravel documentation,

you can set the VITE_DEV_SERVER_PROXY environment variable to proxy requests to the development server.
Here's how you can do it: add the following line to the .env.local file:
VITE_DEV_SERVER_PROXY=http://192.168.1.100:3000/
This will tell Vite to proxy all requests to the 192.168.1.100:3000 IP address, which should be the IP address of your development machine running the Vite dev server.
Restart the Laravel development server and the Vite dev server.
Now, when you use vite('main.css') on your development server, it should replace localhost:5173/assets/main.css with 192.168.1.100:3000/assets/main.css.

Related

Can't send HTTP Request from one dockerized laravel app to another

I have two dockerized Laravel apps, both of which are going to be used as APIs.
One is the Main API, and the other is the Payment API.
Docker compose of Main API:
version: '3.8'
services:
  api:
    image: 'myapp/api:1.0'
    container_name: 'myapp-api'
    restart: 'on-failure'
    user: '1000:1000'
    build:
      context: .
      dockerfile: '.docker/Dockerfile'
      args:
        UID: '1000'
        GID: '1000'
    ports:
      - '${APP_PORT:-80}:80'
    volumes:
      - '.:/var/www/html'
    networks:
      - myapp
networks:
  myapp:
    driver: bridge
Docker compose of Payment API:
version: '3.8'
services:
  payment-api:
    image: 'myapp/payment-api:1.0'
    container_name: 'myapp-payment-api'
    restart: 'on-failure'
    user: '1000:1000'
    build:
      context: .
      dockerfile: '.docker/Dockerfile'
      args:
        UID: '1000'
        GID: '1000'
    ports:
      - '${APP_PORT:-801}:80'
    volumes:
      - '.:/var/www/html'
    networks:
      - myapp
networks:
  myapp:
    driver: bridge
From the Main API, I tried calling the health check of Payment API:
use Illuminate\Support\Facades\Http;
Http::get('http://payment-api/api/health_check');
but I am getting cURL error:
cURL error 6: Could not resolve host: payment-api (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for http://payment-api/api/health_check
Note that health_check doesn't have any middleware attached to it; it's just a plain and simple route to test whether the route is reachable.
Hitting http://localhost:801/api/health_check in the browser works, but it won't work when called from inside the Laravel app.
I am using the container's name as the host here since http://localhost doesn't work either.
Make sure your two containers are on the same network, because Docker Compose prefixes your network name with the folder name.
How to check:
Check both container settings.
Use docker inspect <containerName or containerId> (replace "<...>" with your container name or container id) and check NetworkSettings > Networks; the two containers' networks should be the same.
// Example
"NetworkSettings": {
    ...
    "Networks": {
        "my-app": { ... } // <- check network name
    }
}
Check from the network side.
Use docker network inspect <networkName> and check that your containers are using the same network.
// Example
...
"Containers": {
    ...
    "2248c433dae5d0a1b08bdd11dad86184785b89e269b42a76806b11cf6fbaccfa": {
        "Name": "myapp-api", // <- should see your first container name
        ...
    },
    "3b64e3d359e13bdae60bbfe283a76516ca51678da69c0c81b6e83be315aea8f2": {
        "Name": "myapp-payment-api", // <- should see your second container name
        ...
    },
    ...
},
If this is what you are facing and you don't want the Docker Compose prefix, you can simply add "name" to your YAML:
networks:
  myapp:
    driver: bridge
    name: myapp # like this
Okay, after hours of troubleshooting, I found out that there's a command docker network ls, and my network isn't listed. So I searched and noticed that the way I defined my network is the legacy way, which is probably why it wasn't created. Also, for my Payment API container I didn't specify the network as external, so even if the network had been created it still probably wouldn't have worked, because Docker Compose would have assumed the network is not external and the container wouldn't be on the same network as the Main API.
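For anyone hitting the same thing, a minimal sketch of that shared-network setup (assuming the network is created ahead of time with docker network create myapp) is to mark it as external in both docker-compose files:
networks:
  myapp:
    external: true
    name: myapp
With that in both files, neither Compose project creates (or prefixes) its own network; both attach their services to the pre-existing myapp network, so the payment-api service name resolves from the Main API container.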

Calling backend docker container from frontend + client side vs server side rendering

I'm currently working on a full stack application using Spring Boot (Kotlin), SvelteKit (run with Vite), and MongoDB, each with their own Docker container. My backend service is forwarded to port 6868 on my localhost. When I run my frontend service locally with "npm run dev" (which triggers the script vite dev --host 0.0.0.0 --port 8080) and remove the service from my docker-compose.yml (see the frontend-svelte service in the docker-compose below), I am able to call my backend at localhost:6868 (see +page.js below).
However, when I run my frontend inside of a Docker container, the request to localhost:6868 fails. This sort of makes sense to me, since localhost:6868 would refer to the inside of the Docker container when the code runs on the server (the Docker container) as opposed to the browser. When I change localhost:6868 to spring-boot:8080 (the Docker container), the initial request sent server-side does succeed (i.e. the console log in /frontend-svelte/routes/+page.js does print out), but there is still an error in the browser for the subsequent requests sent from the client side.
It seems to me that the issue is the discrepancy between requests sent from the client side vs. the server side, so how can I resolve this? Thanks everyone for your help!
docker-compose.yml
version: "3.8"
services:
  mongodb:
    image: mongo:5.0.2
    restart: unless-stopped
    ports:
      - 27017:27017
    volumes:
      - db:/data/db
  spring-boot:
    image: bracket_backend
    build:
      context: ./backend
      dockerfile: Dev.Dockerfile
    depends_on:
      - mongodb
    ports:
      - 6868:8080
    stdin_open: true
    tty: true
    volumes:
      - ./backend/src:/app/src
    env_file:
      - ./.env
  frontend-svelte:
    image: bracket_frontend
    build:
      context: ./frontend-svelte
      dockerfile: Dev.Dockerfile
    ports:
      - 1234:8080
    stdin_open: true
    tty: true
    volumes:
      - ./frontend-svelte/src:/app/src
    depends_on:
      - spring-boot
volumes:
  db:
/backend/Dev.Dockerfile
FROM maven:3.8.6-openjdk-18-slim
WORKDIR /app
COPY ./.mvn ./mvn
COPY ./mvnw ./
COPY ./pomDev.xml ./
# Note that src is mounted as a volume to allow code update w/o restarting container
ENTRYPOINT mvn spring-boot:run -f pomDev.xml
/frontend-svelte/Dev.Dockerfile
FROM node:16-slim
WORKDIR /app
COPY package.json .
RUN npm install --legacy-peer-deps
COPY svelte.config.js .
COPY vite.config.js .
COPY jsconfig.json .
COPY playwright.config.js .
# Note that src is mounted so changes will occur.
ENTRYPOINT npm run dev
/frontend-svelte/routes/+page.js (this is where the request to the backend is made; it succeeds when not run from the docker container)
import { getBaseUrl } from '$lib/utils.js';

/** @type {import('./$types').PageLoad} */
export async function load({ params }) {
    console.log("test");
    const response = await fetch(`http://localhost:6868/api/groups`); // THIS LINE MAKES THE REQUEST TO THE BACKEND
    console.log("response is: ");
    console.log(response);
    if (!response.ok) {
        throw new Error(`Error! status: ${response.status}`);
    }
    return response.json();
}
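One way to make that client-side vs. server-side split concrete is to pick the base URL depending on where the load function runs; recent SvelteKit versions expose a browser flag from $app/environment. This is only a sketch using the hostnames from the question, not a verified fix for this setup:
import { browser } from '$app/environment';

// Server-side (inside the frontend container) the backend is reachable via the
// compose service name; in the browser it is only reachable via the published port.
const BASE_URL = browser ? 'http://localhost:6868' : 'http://spring-boot:8080';

/** @type {import('./$types').PageLoad} */
export async function load({ fetch }) {
    const response = await fetch(`${BASE_URL}/api/groups`);
    if (!response.ok) {
        throw new Error(`Error! status: ${response.status}`);
    }
    return response.json();
}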

How to get docker-compose container to see Redis host?

I have this simple docker-compose.yml file:
version: '3.8'
services:
  bot:
    build:
      dockerfile: Dockerfile
      context: .
    links:
      - redis
    depends_on:
      - redis
  redis:
    image: redis:7.0.0-alpine
    ports:
      - "6379:6379"
    environment:
      - REDIS_REPLICATION_MODE=master
    restart: always
    volumes:
      - cache:/data
    command: redis-server
volumes:
  cache:
    driver: local
This is how the bot (in Go) connects to redis:
import "github.com/go-redis/redis/v8"
func setRedisClient() {
rdb = redis.NewClient(&redis.Options{
Addr: "redis:6379",
Password: "",
DB: 0,
})
}
bot Dockerfile:
FROM golang:1.18.3-alpine3.16
WORKDIR /go/src/bot-go
COPY . .
RUN go build .
RUN ./bot-go
But when I run docker-compose up --build I always get:
panic: dial tcp: lookup redis on 192.168.65.5:53: no such host
The redis host is never found, no matter what changes I make to the host or to the docker-compose file.
The app does work without Docker when I configure the client to use a local address.
What am I doing wrong exactly?
The problem is that the bot-go image never finishes building: RUN ./bot-go executes the bot at image build time, when the Compose network (and the redis service) isn't available yet. Change RUN ./bot-go to CMD [ "./bot-go" ] in the Dockerfile so the binary runs when the container starts, and everything will work fine.
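In other words, the corrected Dockerfile would look roughly like this (same as the one in the question, with only the last instruction changed):
FROM golang:1.18.3-alpine3.16
WORKDIR /go/src/bot-go
COPY . .
RUN go build .
CMD [ "./bot-go" ]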

Got unknown error: net::ERR_CONNECTION_REFUSED with laravel dusk in docker

I'm trying to implement browser testing with Laravel Dusk in a Docker environment.
But when I run the command php artisan dusk (in my php container), it displays this error for all my test cases:
1) Tests\Browser\ExampleTest::testBasicExample
Facebook\WebDriver\Exception\UnknownErrorException: unknown error: net::ERR_CONNECTION_REFUSED
(Session info: headless chrome=96.0.4664.45)
/var/www/vendor/php-webdriver/webdriver/lib/Exception/WebDriverException.php:139
/var/www/vendor/php-webdriver/webdriver/lib/Remote/HttpCommandExecutor.php:372
/var/www/vendor/php-webdriver/webdriver/lib/Remote/RemoteWebDriver.php:585
/var/www/vendor/php-webdriver/webdriver/lib/Remote/RemoteExecuteMethod.php:27
/var/www/vendor/php-webdriver/webdriver/lib/WebDriverNavigation.php:41
/var/www/vendor/laravel/dusk/src/Browser.php:153
/var/www/tests/Browser/ExampleTest.php:19
/var/www/vendor/laravel/dusk/src/Concerns/ProvidesBrowser.php:69
/var/www/tests/Browser/ExampleTest.php:21
Here is my configuration:
I added the selenium container to my docker-compose file by following this guide.
docker-compose.yaml
version: '3.7'
networks:
  laravel:
services:
  php:
    build:
      context: .
      dockerfile: dockers/php/Dockerfile
    container_name: php
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/
    ports:
      - "9000:9000"
    networks:
      - laravel
    depends_on:
      - selenium
  selenium:
    container_name: selenium
    image: 'selenium/standalone-chrome'
    volumes:
      - './selenium:/selenium'
    networks:
      - laravel
I have followed the instructions on the Laravel documentation page here to change the APP_URL config to http://selenium:4444/wd/hub.
DuskTestCase.php
public static function prepare()
{
    if (! static::runningInSail()) {
        static::startChromeDriver();
    }
}

protected function driver()
{
    $options = (new ChromeOptions)->addArguments(collect([
        '--window-size=1920,1080',
    ])->unless($this->hasHeadlessDisabled(), function ($items) {
        return $items->merge([
            '--disable-gpu',
            '--headless'
        ]);
    })->all());

    return RemoteWebDriver::create(
        'http://selenium:4444/wd/hub', // change here
        DesiredCapabilities::chrome()->setCapability(
            ChromeOptions::CAPABILITY, $options
        )
    );
}
Then when I run the tests in the php container it shows the error above.
UPDATE
I've checked my selenium logs and they show this error whenever I run php artisan dusk:
Starting ChromeDriver 96.0.4664.45 (76e4c1bb2ab4671b8beba3444e61c0f17584b2fc-refs/branch-heads/4664#{#947}) on port 58188
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
[1639062447.255][SEVERE]: bind() failed: Cannot assign requested address (99)
I wonder if this error comes from my configuration or from my implementation steps. So, here is the detail of the steps I used to run the tests:
docker-compose build
docker-compose up -d
docker-compose exec php bash
// php container
php artisan config:clear
php artisan dusk
Hope this helps to figure out a solution.
Just fixed this issue. I had configured the wrong APP_URL in .env.dusk.local.
It should have been APP_URL=http://nginx (where nginx is the container running the nginx server).
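So the fix amounts to a one-line change in .env.dusk.local (with nginx being whatever your web-server container/service is named):
APP_URL=http://nginx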
You are missing shm_size on your selenium instance, so it probably doesn't work properly. Here's the example docker-compose file: https://github.com/SeleniumHQ/docker-selenium/blob/trunk/docker-compose-v3.yml
Note that it should be >= 2gb.
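A minimal sketch of that change in the docker-compose.yaml above (shm_size as a service-level option, sized per the note above; adjust if your Compose version expects a different form):
  selenium:
    container_name: selenium
    image: 'selenium/standalone-chrome'
    shm_size: '2gb'
    volumes:
      - './selenium:/selenium'
    networks:
      - laravel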

How to use Laravel docker container & MySQL DB with a Vue one?

I have an app which uses Vue CLI as a front-end and Laravel as a back-end. Now I am trying to launch my app on a server using docker.
My Docker skills only allow me one thing: a Vue docker container. But since I have to use Laravel as a back-end, I have to create a container for that too (+ MySQL, of course).
So here's what I've got:
Dockerfile
FROM node:lts-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
EXPOSE 8080
CMD ["npm", "run", "serve"]
docker-compose.yml
version: '3'
services:
  web:
    build: .
    stdin_open: true
    tty: true
    ports:
      - "8080:8080"
    volumes:
      - "/app/node_modules"
      - ".:/app"
The problem is that I don't understand how to connect Laravel into the Dockerfile. It just doesn't add up in my mind.
Maybe I should use Ubuntu, not just Node? Anyway, I'm asking once again for your support.
According to this article you will need to follow the steps below.
Make your project folder look like this (d: directory, f: file):
d: backend
d: frontend
d: etc
    d: nginx
        d: conf.d
            f: default.conf.nginx
    d: php
        f: .gitignore
d: dockerize
    d: backend
        f: Dockerfile
f: docker-compose.yml
Add docker-compose.yml
version: '3'
services:
  www:
    image: nginx:alpine
    volumes:
      - ./etc/nginx/conf.d/default.conf.nginx:/etc/nginx/conf.d/default.conf
    ports:
      - 81:80
    depends_on:
      - backend
      - frontend
  frontend:
    image: node:current-alpine
    user: ${UID}:${UID}
    working_dir: /home/node/app
    volumes:
      - ./frontend:/home/node/app
    environment:
      NODE_ENV: development
    command: "npm run serve"
  backend:
    build:
      context: dockerize/backend
    # this way container interacts with host on behalf of current user.
    # !!! NOTE: $UID is a _shell_ variable, not an environment variable!
    # To make it available as a shell var, make sure you have this in your ~/.bashrc (./.zshrc etc):
    #   export UID="$UID"
    user: ${UID}:${UID}
    volumes:
      - ./backend:/app
      # custom adjustments to php.ini
      # i. e. "xdebug.remote_host" to debug the dockerized app
      - ./etc/php:/usr/local/etc/php/local.conf.d/
    environment:
      # add our custom config files for the php to scan
      PHP_INI_SCAN_DIR: "/usr/local/etc/php/conf.d/:/usr/local/etc/php/local.conf.d/"
    command: "php artisan serve --host=0.0.0.0 --port=8080"
  mysql:
    image: mysql:5.7.22
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - "4306:3306"
    volumes:
      - ./etc/mysql:/var/lib/mysql
    environment:
      MYSQL_DATABASE: tor
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
Add default.conf.nginx
server {
    listen 81;
    server_name frontend;

    error_log /var/log/nginx/error.log debug;

    location / {
        proxy_pass http://frontend:8080;
    }

    location /sockjs-node {
        proxy_pass http://frontend:8080;
        proxy_set_header Host $host;
        # below lines make ws://localhost/sockjs-node/... URLs work, enabling hot-reload
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

    location /api/ {
        # on the backend side, the request URI will _NOT_ contain the /api prefix,
        # which is what we want for a pure-api project
        proxy_pass http://backend:8080/;
        proxy_set_header Host localhost;
    }
}
Add Dockerfile
FROM php:fpm-alpine
RUN apk add --no-cache $PHPIZE_DEPS oniguruma-dev libzip-dev curl-dev \
&& docker-php-ext-install pdo_mysql mbstring zip curl \
&& pecl install xdebug redis \
&& docker-php-ext-enable xdebug redis
RUN mkdir /app
VOLUME /app
WORKDIR /app
EXPOSE 8080
CMD php artisan serve --host=0.0.0.0 --port=8080
DON'T FORGET TO ADD vue.config.js to your frontend folder:
// vue.config.js
module.exports = {
    // options...
    devServer: {
        disableHostCheck: true,
        host: 'localhost',
        headers: {
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Headers': 'Origin, X-Requested-With, Content-Type, Accept'
        },
        watchOptions: {
            poll: true
        },
        proxy: 'http://localhost/api',
    }
}
Run sudo docker-compose up
If you want to do migrations run this: sudo docker-compose exec backend php artisan migrate
You will need 4 containers, defined in a docker-compose file:
frontend (your Vue application, which you already have)
backend (Laravel application)
web server (eg. Nginx or Apache)
database (MySQL)
It is possible to combine the 'web-server' and 'backend' containers into one, but this is generally bad advice.
Your compose file would look similar to this:
version: '3'
services:
  frontend:
    build: ./frontend
    ports:
      - 8080:8080
    volumes:
      - ./frontend:/app
  backend:
    build: ./backend
    volumes:
      - ./backend:/var/www/my_app
    environment:
      - DB_HOST=db
      - DB_PORT=3306
  webserver:
    image: nginx:alpine
    ports:
      - 8000:8000
    volumes:
      - ./backend:/var/www/my_app
  database:
    image: mariadb:latest
    container_name: db
    ports:
      - 3306:3306
    environment:
      MYSQL_DATABASE: dbname
      MYSQL_ROOT_PASSWORD: dbpass
    volumes:
      - ./sql:/var/lib/mysql
where ./backend contains the Laravel application code, ./frontend contains the Vue application, and both contain a Dockerfile. Refer to Docker Hub for specific instructions on each image needed. This exposes 3 ports to your host system: 8080 (Vue app), 8000 (Laravel app), and 3306 (MySQL).
Alternatively, you can omit the web server if you use the artisan cli's serve command in your Laravel container, similar to what you're already doing in the Dockerfile for your Vue application.
The image would have to include something like CMD php artisan serve --host=0.0.0.0 --port=8000
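As a rough sketch of that last option, a hypothetical ./backend/Dockerfile could look like this (it skips composer install and PHP extensions, which a real image would need):
FROM php:8.1-cli-alpine
WORKDIR /var/www/my_app
COPY . .
EXPOSE 8000
CMD php artisan serve --host=0.0.0.0 --port=8000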
