Laravel WebSockets: Subscribing to private channels does not work - laravel

Software:
Laravel 5.8
Laravel WebSockets 1.1
Vue 2.6.10
In websockets.php (complete file) I have local_cert and local_pk set up with my certificates. If I leave these options blank I cannot connect at all. I have also set verify_peer to false, because if I don't I cannot connect either.
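For reference, the relevant part of the ssl section looks roughly like this (a sketch; the certificate paths are placeholders, not my real values):
// config/websockets.php: sketch of the 'ssl' block described above.
'ssl' => [
    // PEM-encoded certificate and private key used by the WebSocket server itself.
    'local_cert' => env('LARAVEL_WEBSOCKETS_SSL_LOCAL_CERT', '/path/to/fullchain.pem'),
    'local_pk' => env('LARAVEL_WEBSOCKETS_SSL_LOCAL_PK', '/path/to/privkey.pem'),
    'passphrase' => env('LARAVEL_WEBSOCKETS_SSL_PASSPHRASE', null),
    // Peer verification disabled, otherwise the connection fails (see above).
    'verify_peer' => false,
],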
broadcasting.php:
'pusher' => [
'driver' => 'pusher',
'key' => env('PUSHER_APP_KEY'),
'secret' => env('PUSHER_APP_SECRET'),
'app_id' => env('PUSHER_APP_ID'),
'options' => [
'cluster' => env('PUSHER_APP_CLUSTER'),
'host' => '127.0.0.1',
'port' => 6001,
'scheme' => 'https',
'curl_options' => [
CURLOPT_SSL_VERIFYHOST => 0,
CURLOPT_SSL_VERIFYPEER => 0,
]
],
],
If I get rid of the curl options I get an empty Broadcast exception, as described here.
bootstrap.js:
window.Pusher = require('pusher-js');
window.Echo = new Echo({
broadcaster: 'pusher',
key: '7d23096ae0ab2d02d220',
wsHost: window.location.hostname,
wsPort: 6001,
wssPort: 6001,
encrypted: true,
disableStats: true,
auth: {
headers: {
'X-CSRF-TOKEN': window.App.csrfToken,
},
},
})
This is all I get from the logs after running php artisan websockets:serve:
New connection opened for app key 7d23096ae0ab2d02d220.
Connection id 49092664.114416323 sending message {"event":"pusher:connection_established","data":"{\"socket_id\":\"49092664.114416323\",\"activity_timeout\":30}"}
What I should also see are messages about joining channels, sending messages, and so on, but none of that is happening at the moment. I have things like:
Echo.private('notifications.' + this.user.id)
.listen('UserNotificationSent', (e) => {
console.log(e)
})
Events: UserNotificationSent.php for example.
Of course internally I have everything else set up as well: channels with auth, etc. Everything worked locally on my machine on an older Laravel version (5.4), but I recently updated to 5.8 and deployed to a server, and now I am struggling with this.
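For context, the channel authorization lives in routes/channels.php and looks roughly like this (a sketch; the ownership check is simplified here):
// routes/channels.php: sketch of the private channel used by Echo.private('notifications.' + user.id).
use Illuminate\Support\Facades\Broadcast;

Broadcast::channel('notifications.{id}', function ($user, $id) {
    // Only allow a user to listen on their own notification channel.
    return (int) $user->id === (int) $id;
});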
I also opened an issue on GitHub.
IMPORTANT UPDATE
This is actually not due to the deployment; I have the same problem in my local setup. What is interesting is that listening to channels via Echo.channel() works, but .private() does not. On GitHub (link above) I came across someone with the exact same problem; we have not found a solution yet.

I found the problem.
I had this in my web.php:
Route::post('/broadcasting/auth', function (Illuminate\Http\Request $req) {
if ($req->channel_name == 'users') {
return Broadcast::auth($req);
}
});
I don't remember exactly why or when I added this, but it was probably from here. For some reason it didn't generate any errors. Anyway, I got rid of it and now it's working like a charm.
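For anyone landing here: the package's standard auth endpoint is already registered by Broadcast::routes() in the BroadcastServiceProvider, so no hand-rolled /broadcasting/auth route is needed. Roughly (the stock Laravel provider, shown as a sketch):
// app/Providers/BroadcastServiceProvider.php
use Illuminate\Support\Facades\Broadcast;

public function boot()
{
    // Registers the /broadcasting/auth route and channel authorization for you.
    Broadcast::routes();

    require base_path('routes/channels.php');
}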

This happens because port 6001 is already in use on the live server (explanation at the bottom), so nginx cannot listen on it as well. I had to set up a reverse proxy in nginx to make it work, and used port 6002 for websockets on the live server.
In nginx (upon request, I added the full nginx code):
server {
#The nginx domain configurations
root /var/www/laravel/public;
index index.html index.htm index.php index.nginx-debian.html;
server_name example.com www.example.com;
#WHAT YOU NEED IS FROM HERE...
location / {
try_files $uri $uri/ /index.php?$query_string;
# "But why port 6000, 6002 and 433? Scroll at the bottom"
proxy_pass http://127.0.0.1:6001;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-VerifiedViaNginx yes;
proxy_read_timeout 60;
proxy_connect_timeout 60;
proxy_redirect off;
# Specific for websockets: force the use of HTTP/1.1 and set the Upgrade header
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
#..UNTIL HERE - The rest are classic nginx config and certbot
#The default Laravel nginx config
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/run/php/php7.2-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include fastcgi_params;
}
#SSL by certbot
listen [::]:443 ssl ipv6only=on; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl on;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
ssl_session_cache shared:SSL:30m;
ssl_protocols TLSv1.1 TLSv1.2;
# Diffie Hellman performance improvements
ssl_ecdh_curve secp384r1;
}
Everything that connects to your domain over TLS will be proxied to a local service on port 6001, in plain text. This offloads all the TLS (and certificate management) to Nginx, keeping your websocket server configuration as clean and simple as possible.
This also makes automation via Let’s Encrypt a lot easier, as there are already implementations that will manage the certificate configuration in your Nginx and reload them when needed. - Source - Mattias Geniar
Echo setup:
let isProduction = process.env.MIX_WS_CONNECT_PRODUCTION === 'true';
Vue.prototype.Echo = new LaravelEcho({
broadcaster: 'pusher',
key: process.env.MIX_PUSHER_APP_KEY,
wssHost: window.location.hostname,
wssPort: isProduction ? 6002 : 6001,
wsHost: window.location.hostname,
wsPort: isProduction ? 6002 : 6001,
disableStats: false,
encrypted: isProduction,
enabledTransports: ['ws', 'wss'],
disabledTransports: ['sockjs', 'xhr_polling', 'xhr_streaming']
});
In websockets.php
'apps' => [
[
'id' => env('MIX_PUSHER_APP_ID'),
'name' => env('APP_NAME'),
'key' => env('MIX_PUSHER_APP_KEY'),
'secret' => env('MIX_PUSHER_APP_SECRET'),
'enable_client_messages' => false,
'enable_statistics' => true,
],
],
// I kept them null but I use LetsEncrypt for SSL certs too.
'ssl' => [
'local_cert' => null,
'local_pk' => null,
'passphrase' => null,
]
And broadcasting.php
'pusher' => [
'driver' => 'pusher',
'key' => env('MIX_PUSHER_APP_KEY'),
'secret' => env('MIX_PUSHER_APP_SECRET'),
'app_id' => env('MIX_PUSHER_APP_ID'),
'options' => [
'cluster' => env('MIX_PUSHER_APP_CLUSTER'),
'encrypted' => env('MIX_WS_CONNECT_PRODUCTION'),
'host' => '127.0.0.1',
'port' => env('MIX_WS_CONNECT_PRODUCTION') ? 6002 : 6001,
'scheme' => 'http'
],
],
This was the full cycle that made it work for me. Hope it helps.
Quoting from Alex Bouma's explanation:
"But why port 6000, 6002 and 433, what a mess!?"
I hear ya! Let me explain a bit, it will hopefully all make sense afterwards.
Here is the thing: a port on your server can only be opened by one application at a time (technically that is not entirely true, but let's keep it simple here). So if we let NGINX listen on port 6001, we cannot also start our websockets server on port 6001, since it would conflict with NGINX (and the other way around). Therefore we let NGINX listen on port 6002 and have it proxy (NGINX is a reverse proxy, after all) all that traffic to port 6001 (the websockets server) over plain HTTP, stripping away the SSL so the websockets server has no need to know how to handle SSL.
So NGINX handles all the SSL magic and forwards the traffic as plain HTTP to port 6001 on your server, where the websockets server is listening for requests.
The reason we are not configuring any SSL in the websockets.php config, and define the scheme in our broadcasting.php as http with port 6001, is to bypass NGINX and communicate with the websockets server directly and locally without needing SSL, which is faster (and easier to configure and maintain).

Use the broadcastAs() method on the event:
public function broadcastAs()
{
return 'UserNotificationSent';
}
And listen for it like this:
.listen('.UserNotificationSent', function (e) {
....
});
Put a dot (.) before the UserNotificationSent event name when listening for it.
See here: https://laravel.com/docs/6.x/broadcasting#broadcast-name
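Putting it together, a minimal event class would look something like this (a sketch; the channel name and the user_id attribute are assumptions based on the question above, not a verified implementation):
// app/Events/UserNotificationSent.php: minimal sketch of a broadcastable event.
namespace App\Events;

use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;

class UserNotificationSent implements ShouldBroadcast
{
    public $notification;

    public function __construct($notification)
    {
        $this->notification = $notification;
    }

    public function broadcastOn()
    {
        // Matches Echo.private('notifications.' + user.id) on the client;
        // user_id on the notification is an assumed attribute name.
        return new PrivateChannel('notifications.' . $this->notification->user_id);
    }

    public function broadcastAs()
    {
        // Broadcast under a short name; listen with '.UserNotificationSent'.
        return 'UserNotificationSent';
    }
}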

Related

Laravel websockets with nginx

I have followed tutorials to configure Laravel with websockets using Docker, except that my PHP version is 8.1.9 and my Laravel version is 8.
My nginx container is a reverse proxy to web_dev container, and does SSL termination. So further communication internally is over http.
nginx.conf
server {
listen 80;
server_name mydomain.co.uk *.mydomain.co.uk;
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
server_name mydomain.co.uk *.mydomain.co.uk;
include common.conf;
include ssl.conf;
location / {
include common_location.conf;
proxy_pass http://web_dev;
}
}
server {
listen 6001 ssl;
server_name mydomain.co.uk *.mydomain.co.uk;
include common.conf;
include ssl.conf;
location /ws {
proxy_read_timeout 60;
proxy_connect_timeout 60;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_cache_bypass $http_upgrade;
proxy_set_header Sec-WebSocket-Key 'SGVsbG8sIHdvcmxkIQAAAA==';
proxy_set_header Sec-WebSocket-Version '13';
include common_location.conf;
proxy_pass http://web_dev:6001/;
}
}
Then I have a curl command:
curl -k \
--no-buffer \
--header "Connection: upgrade" \
--header "Upgrade: websocket" \
-v \
https://mydomain.co.uk:6001/ws/app/mydomainkey
This is the output:
Connected to mydomain.co.uk (127.0.0.1) port 6001 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
....
* TLSv1.2 (IN), TLS handshake, Finished (20):
* TLSv1.2 (OUT), TLS header, Supplemental data (23):
> GET /ws/app/mydomainkey HTTP/1.1
> Host: mydomain.co.uk:6001
> User-Agent: curl/7.81.0
> Accept: */*
> Connection: upgrade
> Upgrade: websocket
>
* TLSv1.2 (IN), TLS header, Supplemental data (23):
* Mark bundle as not supporting multiuse
< HTTP/1.1 101 Switching Protocols
< Server: nginx/1.21.5
< Date: Sun, 04 Sep 2022 12:00:33 GMT
< Connection: upgrade
< Upgrade: websocket
< Sec-WebSocket-Accept: 5Tz1rM6Lpj3X4PQub3+HhJRM11s=
< X-Powered-By: Ratchet/0.4.4
< Strict-Transport-Security: max-age=31536000; includeSubDomains
<
* TLSv1.2 (IN), TLS header, Supplemental data (23):
�r{"event":"pusher:connection_established","data":"{\"socket_id\":\"155833011.137698690\",\"activity_timeout\":30}"}
This, I think, shows that port 6001 and SSL are configured correctly and that the websocket connection was established.
When I go to the url for websockets dashboard and click connect, I get
WebSocket connection to 'wss://mydomain.co.uk:6001/ws/app/bookhaircut?protocol=7&client=js&version=4.3.1&flash=false' failed:
I also tried fetch("wss://mydomain.co.uk:6001/ws/app/mydomainkey?protocol=7&client=js&version=4.3.1&flash=false"); which gives a different error:
On the web_dev container, I'm running supervisor, which runs php artisan websockets:serve. I can verify that my web_dev container can connect to its running websockets service, because I ran this in php artisan tinker:
`event (new \App\Events\NewTrade('test'))`
=> []
Then I checked the supervisor logs and saw many entries:
So in conclusion:
nginx is serving correctly, because the curl command works with nginx over SSL (this also generates log entries in supervisor)
laravel is internally connecting correctly
But pusher-js in the web browser is complaining that the wss scheme is not supported and that the websocket connection failed. My nginx version: nginx/1.21.5.
I'm not even using Echo yet. Echo uses pusher-js, and the dashboard is implemented with pusher-js, so if the dashboard doesn't work, neither will my Echo.
I have now downgraded pusher-js to 4.3.1 in package.json, but that didn't do anything because the version is hard-coded in the dashboard (also 4.3, though).
Any ideas?
I tried to modify the dashboard to pull a different version, https://cdnjs.cloudflare.com/ajax/libs/pusher/6.0.3/pusher.js, and that didn't make any difference.
Could nginx proxy to wss:// instead of http://? I have only seen configs with http; configs with wss:// throw an error.
So I'm out of ideas.
Here's my config files:
websockets.php
'apps' => [
[
'id' => env('PUSHER_APP_ID'),
'name' => env('APP_NAME'),
'key' => env('PUSHER_APP_KEY'),
'secret' => env('PUSHER_APP_SECRET'),
'path' => "/ws",
'capacity' => null,
'enable_client_messages' => false,
'enable_statistics' => true,
],
],
/* ssl is left empty.*/
related problems:
https://github.com/beyondcode/laravel-websockets/issues/102
How I found out:
I first tried to set an SSL cert for the internal communication despite nginx doing the termination (I had to try it), but then the curl command broke: I couldn't connect to my websockets unless I removed the SSL certificates (I was getting a bad gateway).
So I ended up with these:
'pusher' => [
'driver' => 'pusher',
'key' => env('PUSHER_APP_KEY'),
'secret' => env('PUSHER_APP_SECRET'),
'app_id' => env('PUSHER_APP_ID'),
'options' => [
'cluster' => env('PUSHER_APP_CLUSTER'),
'encrypted' => false,
'host' => "127.0.0.1",
'port' => 6001,
'useTLS' => false,
'scheme' => 'http',
],
],
And in config/websockets.php
'apps' => [
[
'id' => env('PUSHER_APP_ID'),
'name' => env('APP_NAME'),
'key' => env('PUSHER_APP_KEY'),
'secret' => env('PUSHER_APP_SECRET'),
'path' => "/ws", //<-- important for nginx path detection
'capacity' => null,
'enable_client_messages' => false,
'enable_statistics' => true,
],
],
'ssl' => [
'local_cert' => null,
'local_pk' => null,
'passphrase' => null,
],
Then I noticed one thing about logging. When I got the websocket error in the browser, I was also receiving a new log entry in my supervisor/websocket.log.
I tried removing the 'path' => "/ws" line and I received the exact same browser error, but no log entry.
That gave me the first hint, since something is better than nothing: I needed to stick to settings that would not break this logging behavior.
Then I found that when I removed these lines from nginx.conf, it all started working, shockingly!
proxy_set_header Sec-WebSocket-Key 'SGVsbG8sIHdvcmxkIQAAAA==';
proxy_set_header Sec-WebSocket-Version '13';
Originally I added these so I could simplify the curl command, but I didn't know that this would break the browser's websockets. So the curl for testing now looks like this (the Sec-WebSocket-Key value is random; you can reuse this one):
curl -k \
--no-buffer \
--header "Connection: upgrade" \
--header "Upgrade: websocket" \
--header "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQAAAA==" \ //<-- this is random you can use it
--header "Sec-WebSocket-Version: 13" \
-v \
https://mydomain.co.uk:6001/ws/app/mydomainkey

Problem when using Docker and Nginx as reverse proxy for Laravel WebSocket

I've set up my Laravel project with docker and docker-compose.
I've exposed port 6001, which is the default port for the WebSocket server, and configured Nginx based on the documentation, but when I try to connect to my WS server using my domain, I get this error in the Nginx log file:
upstream prematurely closed connection while reading response header from upstream, client: *, server: site.cpm, request: "GET /app/7f0cb504a1426efd6854a03779a1c8a9?protocol=7&client=js&version=7.0.6&flash=false HTTP/1.1", upstream: "http://172.19.0.5:3030/app/7f0cb504a1426efd6854a03779a1c8a9?protocol=7&client=js&version=7.0.6&flash=false", host: "site.com"
Another weird problem is that now I can't even connect to my WS server using the direct IP and port.
I should also mention that there isn't any firewall activated on my server (at least, as far as I know).
Nginx config file:
map $http_upgrade $type {
default "web";
websocket "ws";
}
upstream websocket {
server php:6001;
}
server {
listen 80;
listen 443 ssl http2;
listen [::]:443 ssl http2;
ssl_certificate /app/docker/nginx/certs/public-selfsigned.crt;
ssl_certificate_key /app/docker/nginx/certs/private-selfsigned.key;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
server_name panel.melodigram.app;
client_max_body_size 2048M;
index index.php index.html;
error_log /var/log/nginx/fpm.log;
access_log /var/log/nginx/access.log;
root /app/public_html;
location / {
try_files /nonexistent @$type;
}
location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass php:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param PATH_INFO $fastcgi_path_info;
}
location @web {
try_files $uri $uri/ /index.php?$query_string;
gzip_static on;
}
location @ws {
proxy_pass http://websocket;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_read_timeout 60;
proxy_connect_timeout 60;
proxy_redirect off;
}
}
WebSocket config: websockets.php
<?php
use BeyondCode\LaravelWebSockets\Dashboard\Http\Middleware\Authorize;
return [
/*
* Set a custom dashboard configuration
*/
'dashboard' => [
'port' => env('LARAVEL_WEBSOCKETS_PORT', 6001),
],
/*
* This package comes with multi tenancy out of the box. Here you can
* configure the different apps that can use the webSockets server.
*
* Optionally you specify capacity so you can limit the maximum
* concurrent connections for a specific app.
*
* Optionally you can disable client events so clients cannot send
* messages to each other via the webSockets.
*/
'apps' => [
[
'id' => env('PUSHER_APP_ID'),
'name' => env('APP_NAME'),
'key' => env('PUSHER_APP_KEY'),
'secret' => env('PUSHER_APP_SECRET'),
'path' => env('PUSHER_APP_PATH'),
'capacity' => null,
'enable_client_messages' => false,
'enable_statistics' => true,
],
],
/*
* This class is responsible for finding the apps. The default provider
* will use the apps defined in this config file.
*
* You can create a custom provider by implementing the
* `AppProvider` interface.
*/
'app_provider' => BeyondCode\LaravelWebSockets\Apps\ConfigAppProvider::class,
/*
* This array contains the hosts of which you want to allow incoming requests.
* Leave this empty if you want to accept requests from all hosts.
*/
'allowed_origins' => [
//
],
/*
* The maximum request size in kilobytes that is allowed for an incoming WebSocket request.
*/
'max_request_size_in_kb' => 250,
/*
* This path will be used to register the necessary routes for the package.
*/
'path' => 'laravel-websockets',
/*
* Dashboard Routes Middleware
*
* These middleware will be assigned to every dashboard route, giving you
* the chance to add your own middleware to this list or change any of
* the existing middleware. Or, you can simply stick with this list.
*/
'middleware' => [
'web',
Authorize::class,
],
'statistics' => [
/*
* This model will be used to store the statistics of the WebSocketsServer.
* The only requirement is that the model should extend
* `WebSocketsStatisticsEntry` provided by this package.
*/
'model' => \BeyondCode\LaravelWebSockets\Statistics\Models\WebSocketsStatisticsEntry::class,
/**
* The Statistics Logger will, by default, handle the incoming statistics, store them
* and then release them into the database on each interval defined below.
*/
'logger' => BeyondCode\LaravelWebSockets\Statistics\Logger\HttpStatisticsLogger::class,
/*
* Here you can specify the interval in seconds at which statistics should be logged.
*/
'interval_in_seconds' => 60,
/*
* When the clean-command is executed, all recorded statistics older than
* the number of days specified here will be deleted.
*/
'delete_statistics_older_than_days' => 60,
/*
* Use an DNS resolver to make the requests to the statistics logger
* default is to resolve everything to 127.0.0.1.
*/
'perform_dns_lookup' => false,
],
/*
* Define the optional SSL context for your WebSocket connections.
* You can see all available options at: http://php.net/manual/en/context.ssl.php
*/
'ssl' => [
/*
* Path to local certificate file on filesystem. It must be a PEM encoded file which
* contains your certificate and private key. It can optionally contain the
* certificate chain of issuers. The private key also may be contained
* in a separate file specified by local_pk.
*/
'local_cert' => env('LARAVEL_WEBSOCKETS_SSL_LOCAL_CERT', null),
/*
* Path to local private key file on filesystem in case of separate files for
* certificate (local_cert) and private key.
*/
'local_pk' => env('LARAVEL_WEBSOCKETS_SSL_LOCAL_PK', null),
/*
* Passphrase for your local_cert file.
*/
'passphrase' => env('LARAVEL_WEBSOCKETS_SSL_PASSPHRASE', null),
],
/*
* Channel Manager
* This class handles how channel persistence is handled.
* By default, persistence is stored in an array by the running webserver.
* The only requirement is that the class should implement
* `ChannelManager` interface provided by this package.
*/
'channel_manager' => \BeyondCode\LaravelWebSockets\WebSockets\Channels\ChannelManagers\ArrayChannelManager::class,
];
Docker compose file: docker-compose.yml
version: '3'
services:
nginx:
build: ./docker/nginx
restart: always
command: ['nginx-debug', '-g', 'daemon off;']
volumes:
- ./:/app/
- ./docker/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
ports:
- 80:80
- 443:443
depends_on:
- php
networks:
- internal
php:
build: ./
image: zagreus/melodi
container_name: zagreus-melodi
restart: always
working_dir: /app
command: [ "bash", "/app/docker/initialize.sh" ]
ports:
- 6001:6001
volumes:
- ./:/app/
- ./docker/php/supervisord.conf:/etc/supervisor/conf.d/supervisord.conf
depends_on:
- database
- redis
networks:
- internal
database:
image: mariadb
container_name: melodi-database
restart: always
environment:
- MARIADB_RANDOM_ROOT_PASSWORD=yes
- MARIADB_DATABASE=${DB_DATABASE:?DB_DATABASE not entered}
- MARIADB_USER=${DB_USERNAME:?DB_USERNAME not entered}
- MARIADB_PASSWORD=${DB_PASSWORD?DB_PASSWORD not entered}
volumes:
- ${DB_PERSIST_PATH:?DB_PERSIST_PATH not entered}:/var/lib/mysql
networks:
- internal
# ports:
# - 3306:3306
phpmyadmin:
image: phpmyadmin
container_name: melodi-phpmyadmin
restart: always
depends_on:
- database
ports:
- 8080:80
networks:
- internal
environment:
- PMA_HOST=database
- MYSQL_ROOT_PASSWORD=${DB_PASSWORD:-password}
- PMA_ARBITRARY=1
- UPLOAD_LIMIT=2G
redis:
image: redis:7.0.0-bullseye
container_name: melodi-redis
restart: always
networks:
- internal
networks:
internal:
driver: bridge
Thanks for the time you've given to this question <3
OK, so I found out that the problem was that even though I was using Nginx as a reverse proxy and Nginx was handling SSL itself, I had also filled in LARAVEL_WEBSOCKETS_SSL_LOCAL_CERT and LARAVEL_WEBSOCKETS_SSL_LOCAL_PK in my environment file, and that was causing the problem.
So if you are using Nginx as a reverse proxy, after making sure that the port is open on your container, just let Nginx handle the SSL and don't configure it again in this package.

Laravel (Nginx, EC2) + Vue.js (S3bucket, CloudFront) - CORS errors

I got stuck with a bit of a problem since I deployed to production. I am facing a constant CORS error and error code 500 (internal error) whenever I try to upload an image to the S3 bucket.
Access to XMLHttpRequest at 'https://api.example.co.uk/api/user/7/update' from origin 'https://www.example.co.uk' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
To upload images I am using Storage::disk('s3')->putFileAs.
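For reference, the upload call itself looks roughly like this (a sketch; the controller method, request field and target path are assumptions, not the actual code):
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;

// Sketch of the upload inside a controller method; field and path names are assumed.
public function update(Request $request)
{
    $path = Storage::disk('s3')->putFileAs(
        'uploads',                  // directory inside the bucket
        $request->file('image'),    // the uploaded file from the request
        uniqid() . '.jpg'           // target file name
    );

    return response()->json(['path' => $path]);
}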
filesystems.php
's3' => [
'driver' => 's3',
'key' => env('AWS_ACCESS_KEY_ID'),
'secret' => env('AWS_SECRET_ACCESS_KEY'),
'region' => env('AWS_DEFAULT_REGION'),
'bucket' => env('AWS_BUCKET'),
'url' => env('AWS_URL'),
'endpoint' => env('AWS_ENDPOINT'),
'use_path_style_endpoint' => env('AWS_USE_PATH_STYLE_ENDPOINT', false),
],
In my case I do not provide AWS_ENDPOINT, and AWS_USE_PATH_STYLE_ENDPOINT is set to false.
Current architectural setup is as follows:
Vue.js (2) lives in S3Bucket and is served as static website via CloudFront (+SSL from AWS)
Laravel lives in EC2 instance and is served by nginx (+SSL via letsencrypt)
Both live under the same domain; however, Laravel is served under a subdomain. We've got https://www.example.co.uk for the frontend and https://api.example.co.uk for the backend.
The domain is registered with GoDaddy, however I moved DNS to Route53.
cors.php
'paths' => [
'api/*',
'sanctum/csrf-cookie',
'broadcasting/auth'
],
'allowed_methods' => ['*'],
'allowed_origins' => ['http://localhost:8080', 'https://www.example.co.uk', 'https://example.co.uk'],
'allowed_origins_patterns' => [],
'allowed_headers' => ['*'],
'exposed_headers' => ['*'],
'max_age' => 0,
'supports_credentials' => true,
];
The S3 bucket is open to the public, with a policy that allows GetObject. The same bucket has the following CORS setup (probably not the safest, but I just want to make it work first):
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET",
"HEAD",
"PUT",
"POST"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": [
"Access-Control-Allow-Origin",
"x-amz-server-side-encryption",
"x-amz-request-id",
"x-amz-id-2"
],
"MaxAgeSeconds": 3000
}
]
CloudFront Behavior is to Redirect HTTP to HTTPS
Allowed HTTP methods are: GET, HEAD, OPTIONS
Cache Policy is my custom policy that attaches the following headers:
Origin
Access-Control-Request-Method
Access-Control-Allow-Origin
Access-Control-Request-Headers
Origin Request Policy - CORS-S3Origin
Response Headers Policy - CORS-With-Preflight
Nginx api.example.co.uk.conf file
server {
listen 80 default_server;
listen [::]:80 default_server;
listen 443 ssl;
server_name api.example.co.uk www.api.example.co.uk;
ssl_certificate /etc/letsencrypt/live/api.example.co.uk/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.co.uk/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers HIGH:!aNULL:!MD5;
location / {
try_files $uri $uri/ /index.php; # /index.php$is_args$args; # =404;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_pass unix:/run/php/php8.1-fpm.sock;
}
# deny access to .htaccess, if Apache's document root
# concurs with nginx's one
location ~ /\.ht {
deny all;
}
}
If any more information is necessary please let me know, and please bear in mind that everything works on localhost; the problem appeared after deployment.
Moreover, I am not getting any CORS errors for other paths; the only CORS error happens when I explicitly upload images to the S3 bucket.
I would appreciate any help with this, as I've been stuck on it for the past two days now and I am sure it's something silly.
EDIT
It might be worth noting that Storage::url() returns error 500 (CORS) as well.
Another important info might be that I have two buckets.
One bucket www.example.co.uk contains the static website and is served with cloudFront.
The other bucket example-live-bucket is the one where I am trying to store files.

Laravel-websocket can't connect to Azure VM through apache2

I want to implement Laravel WebSockets, the Pusher substitute (https://github.com/beyondcode/laravel-websockets), together with Laravel Echo on an Azure VM behind an apache2 reverse proxy. However, no matter what I try, they just can't connect. It always fails with error 404, 502 or 500 when a client-side listener tries to connect.
This VM only allows entry through ports 80 and 443. Therefore, I set up a reverse proxy which redirects '/websocket' to http://127.0.0.1:6001 (where the WebSocket server runs, over TCP). The server is behind https. I've tried modifying the server .conf file to no avail. I've tried enabling/disabling SSL and 'encrypted' as well. I tried to access 127.0.0.1:6001 directly from within the VM using elinks; with http I got 'error connecting to socket', with https I got 'SSL error'. I'm fairly sure the requests at least reach the machine, because if I shut down the WebSocket server the error changes to 503 Service Unavailable. Also, I can see the requests being made in the apache2 error logs. Most of the time it is a 502 proxy error (AH01102: error reading status line from remote server 127.0.0.1:6001). If I tweak the SSL settings it generally changes to 404, which I take as a sign of an incorrect certificate/key in this case. The firewall is open.
I've tried every guide I can find. Most of them are about nginx, which I can't switch to. If possible I would rather not set up another virtual host.
This is in websockets.php:
....
'apps' => [
[
'id' => env('PUSHER_APP_ID'),
'name' => env('APP_NAME'),
'key' => env('PUSHER_APP_KEY'),
'secret' => env('PUSHER_APP_SECRET'),
'enable_client_messages' => false,
'enable_statistics' => true,
],
]
....
'ssl' => [
'local_cert' => 'my_self_signed_cert.pem',
'local_pk' => 'my_key.pem',
'passphrase' => null,
]
...
This is the pusher setting in broadcasting.php:
...
'pusher' => [
'driver' => 'pusher',
'key' => env('PUSHER_APP_KEY'),
'secret' => env('PUSHER_APP_SECRET'),
'app_id' => env('PUSHER_APP_ID'),
'options' => [
'cluster' => env('PUSHER_APP_CLUSTER'),
'host' => env('PUSHER_APP_HOST'),
'port' => env('PUSHER_APP_PORT'),
'scheme' => 'https',
'encrypted' => true,
],
]
...
I tried 127.0.0.1 as the host and 6001 as the port by default, then I followed this guide: https://42coders.com/setup-beyondcode-laravel-websockets-with-laravel-forge/ and changed them to my-domain-name and 443 respectively (the websocket server is still running on 6001).
This is the echo definition in /resources/js/bootstrap.js (suggested by the package documentation):
import Echo from "laravel-echo";
window.Pusher = require("pusher-js");
window.Echo = new Echo({
broadcaster: "pusher",
key: "my-pusher-key",
wsHost: window.location.hostname + '/websocket',
wsPort:443,
disableStats: true,
});
This is the apache2 setting:
<VirtualHost *:443>
ServerName my_server_name
RequestHeader unset x-forwarded-host
...
SSLEngine on
SSLCertificateFile my-self-cert
SSLCertificateKeyFile my-key
SSLProxyEngine on
SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off
ProxyPreserveHost On
<Location /websocket>
ProxyPass "http://127.0.0.1:6001" Keepalive=On
ProxyPassReverse "http://127.0.0.1:6001"
</Location>
...
RewriteEngine on
RewriteCond %{HTTP:UPGRADE} websocket [NC]
RewriteCond %{HTTP:CONNECTION} Upgrade [NC]
RewriteRule /(.*) http://127.0.0.1:6001/$1 [P,L]
ProxyRequests Off
</VirtualHost>
Changing http to ws or wss triggers the error 'No protocol handler was valid', despite the fact that I have already enabled the wstunnel module.
I expect the WebSocket console to immediately react as soon as the listener subscribes to the broadcast channel.
OK, this issue has finally been resolved! The way to do it is to set up a separate server (https) as a dedicated WebSocket server. Please follow this guide:
https://42coders.com/setup-beyondcode-laravel-websockets-with-laravel-forge/
It looks like the setting has to be https or the WebSocket server can't receive a response from the backend. Also, I encountered a strange problem where Pusher.php in vendor/pusher/pusher-php-server was missing about 400 lines of code, and several other files were missing as well, despite it being the same version of the package (~3.0) as my local installation. If you get a 'call to undefined method' error when the WebSocket server receives a broadcast event, this is likely the reason. If you can't connect to the websocket in Chrome but you can in Firefox, you need to disable peer verification; please check the section on Laravel Valet in the Laravel WebSockets documentation for more details (even if you don't use Laravel Valet).

ExpressJS/Node: 404 images

I've set up a basic node/express server which serves public static javascript and css files fine, but returns a 404 error when attempting to serve images.
The strangest part is that everything works fine when run locally. When run on my remote server (Linode), the image problem arises.
It's really got me scratching my head... What might be the problem?
Here's the server:
/**
* Module dependencies.
*/
var express = require('express')
, routes = require('./routes')
var app = module.exports = express.createServer();
// Configuration
app.configure(function(){
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
app.use(express.bodyParser());
app.use(express.methodOverride());
app.use(express.compiler({ src: __dirname + '/public', enable: ['less'] }));
app.use(express.static(__dirname + '/public'));
app.use(app.router);
});
app.configure('development', function(){
app.use(express.errorHandler({ dumpExceptions: true, showStack: true }));
});
app.configure('production', function(){
app.use(express.errorHandler());
});
// Globals
app.set('view options', {
sitename: 'Site Name',
myname: 'My Name'
});
// Routes
app.get('/', routes.index);
app.get('/*', routes.fourohfour);
app.listen(3000);
console.log("Express server listening on port %d in %s mode", app.address().port, app.settings.env);
If it works fine locally, maybe it's a case sensitivity issue. Do your file names have capitals, etc.?
I had this issue, but it ended up being that I had caps in the trailing .JPG extension and was calling .jpg in the HTML. Windows is not case sensitive with file names; CentOS is...
Alrighty, I got around the issue by renaming my images folder from "/public/images" to /public/image. I don't know why the naming would cause an issue, but I'm glad that's all that was needed.
I had this exact issue. All user-generated images uploaded to /static/uploads were not being served by Express. The strange thing is that everything in static/images, static/js and static/css was served fine. I made sure it wasn't a permissions issue but was still getting a 404. Finally I configured NGINX to serve all of my static files (which is probably faster anyway) and it worked!
I'd still love to know why Express wasn't serving my images, though.
Here's my NGINX conf if anyone is having this issue:
server {
# listen for connections on all hostname/IP and at TCP port 80
listen *:80;
# name-based virtual hosting
server_name staging.mysite.com;
# error and access output
error_log /var/log/nginx/error.log;
access_log /var/log/nginx/access.log;
location / {
# redefine and add some request header lines which will be transferred to the proxied server
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
#set the location of the web root
root /var/www/mysite.com;
# set the address of the node proxied server. port should be the port you set in express
proxy_pass http://127.0.0.1:9001;
# forbid all proxy_redirect directives at this level
proxy_redirect off;
}
# do a case insensitive regular expression match for any files ending in the list of extensions
location ~* ^.+\.(html|htm|png|jpeg|jpg|gif|pdf|ico|css|js|txt|rtf|flv|swf)$ {
# location of the web root for all static files
root /var/www/mysite.com/static;
# clear all access_log directives for the current level
#access_log off;
# set the Expires header to 31 December 2037 23:59:59 GMT, and the Cache-Control max-age to 10 years
#expires max;
}
}
