Laravel WebSockets broadcasts nothing if the backend is on another port - laravel

My Laravel app works as a REST API. The frontend and backend run on different ports, and nginx proxies requests to the backend or the frontend.
I can connect from the browser to the WebSocket on port 6001 without any problems; the connection shows up in the statistics and there are no errors in the console. The statistics dashboard itself is also served on port 6001, i.e. the WebSocket server is working fine.
But on the production server Laravel broadcasts no events: both the statistics and the console stay empty.
Nothing blocks the traffic; the firewall is disabled.
I have spent half a day on this and still don't understand what the problem is.
Any thoughts ...

Friends, as always, the answer was simple: in production I had never updated the Pusher settings in the .env file.
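For anyone hitting the same thing, this is the kind of .env block worth double-checking on the production server. A sketch only, assuming a typical laravel-websockets setup where config/broadcasting.php reads these variables; all values are placeholders:

    # Events only reach the websockets server when broadcasting uses the pusher driver
    BROADCAST_DRIVER=pusher

    PUSHER_APP_ID=local
    PUSHER_APP_KEY=local
    PUSHER_APP_SECRET=local
    # Point the Pusher driver at the local laravel-websockets server, not pusher.com
    PUSHER_HOST=127.0.0.1
    PUSHER_PORT=6001
    PUSHER_SCHEME=http

After changing these on production, remember to clear the config cache (php artisan config:clear), otherwise the old values keep being used.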

Related

Investigating incoming Stripe webhook connection issues on an Ubuntu VPS

I'm investigating some issues with Stripe webhooks not reaching our test server.
According to their docs they submit requests from the following IPs: https://stripe.com/docs/ips#webhook-notifications
I have added these IPs to iptables:
I'm not an iptables expert, but looking at this it seems that only 54.187.216.72 is being matched. Other requests from Stripe fail with a timeout error, and I'm assuming those come from the other IPs.
Only the working IP shows up in my Apache logs. I think I can rule out ufw / firewall issues because I have also tried temporarily disabling it during testing.
My question: How do I investigate this issue further? Is my iptables setup correct? Is there anything else that could block IPs other than iptables and ufw?
Stripe could not tell me which IPs their requests were sent from.
I hope I'm providing the correct information here, if not please let me know!
Thanks a lot!
The problem is that you do not have information from network diagnostic tools and cannot get it. A timeout error in the Stripe dashboard does not necessarily mean that your server is blocking incoming requests or failing to respond. Such error messages are generated by the Stripe backend, not by network tools. Where are the packets lost? Are they lost at all? It is quite possible that Stripe enforces strict time limits on its requests and simply rejects them regardless of the result.
You can't run traceroute from their IPs. Those IPs don't respond to ping. You can talk to your hosting provider and get a "no problem here" response. You can turn off the firewall and still see the errors. I'm pretty sure your filter rules are not the root of this issue.
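What you can still do on your side is confirm whether any packets from the listed IPs reach the server at all, and which iptables rules are actually being hit. A rough sketch, using only the IP quoted in the question; extend the filter with the rest of Stripe's published list:

    # Watch for any inbound packets from a listed Stripe IP on the HTTPS port
    sudo tcpdump -ni any 'tcp port 443 and src host 54.187.216.72'

    # Show the INPUT chain with packet/byte counters to see which rules match
    sudo iptables -L INPUT -v -n --line-numbers

If tcpdump shows nothing while Stripe reports a timeout, the requests are being lost before they ever reach your machine.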
P.S.: We had the same issue. Without any filtering rules, our endpoint was reachable from all over the world except from those IPs. The problem appeared without any configuration change and disappeared without any configuration change. Neither Stripe nor the hosting provider could find anything wrong.

Load balancer and WebSockets

Our infrastructure is composed by
1 F5 load balancer
3 nodes
We have an application that uses WebSockets: when a user visits our site, the browser opens a WebSocket to the balancer, which connects it to the first available node, and this works as expected.
Our troubles arrive with maintenance tasks. When we update our software we take one node offline at a time, deploy the new release, and then bring it back up. While doing this, the balancer drops the open WebSocket connections to that node, and the clients retry connecting to the first available node after a few seconds. This is an inconvenience for the client, who could miss one or more signals.
How can we keep the connection between the client and the balancer while changing the backend WebSocket server? Is the load balancer enough to achieve our goal, or do we need to change our infrastructure?
To avoid this kind of problem I recommend reading about Azure SignalR. With it you don't need to think about things like load balancers, a Redis backplane, and the other infrastructure you would otherwise need for WebSocket connections.
Basically the clients do not connect to your nodes directly but are redirected to Azure SignalR. You can read more about it here: https://learn.microsoft.com/en-us/azure/azure-signalr/signalr-overview
Since it is important for your application to maintain the connection, I don't see any other way to achieve zero connection drops to your nodes, given that you need to shut them down.
It's important to understand that the F5 is a full TCP proxy. This means that the F5 is the server to the client and the client to the server. If you are using the websockets protocol then you must apply a websockets profile to the F5 Virtual Server in order for the websockets application to be handled properly by the Load Balancer.
Details of the websockets profile can be found here: https://support.f5.com/csp/article/K14754
If both a websockets and an HTTP profile are applied to the Virtual Server - meaning that websockets and web traffic share the same port and LB nodes - then the F5 will pass the websockets traffic through. Also keep in mind that if this is an HTTPS virtual server, you will need to ensure that client-side and server-side SSL profiles (SSL offload) are applied to the Virtual Server.
While there are a variety of ways you can fiddle with load balancers to minimize the downtime caused by a software upgrade, none of them solves the underlying problem, which is that your application-layer protocol does not seem to tolerate small network outages.
Even if you have a perfect load balancer and your software deploys cause zero downtime, the customer's computer may be on flaky Wi-Fi that drops the network for half a second, or on Ethernet while someone reconfigures routing on their LAN, and so on.
I'd suggest having your server maintain a queue of messages per client (up to some size/time limit) so that when a client drops a connection, whether because of load balancers, upgrades, or any other reason, it can reconnect and continue without disruption.
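A minimal sketch of that per-client queue idea in Go, assuming a simple sequence-number protocol that the question does not describe; the names (ClientBuffer, Append, Resume) are illustrative, not from any framework:

    package main

    import (
        "fmt"
        "sync"
    )

    type Message struct {
        Seq     uint64
        Payload []byte
    }

    // ClientBuffer keeps the most recent messages sent to one client so that a
    // reconnecting client can replay anything it missed while disconnected.
    type ClientBuffer struct {
        mu       sync.Mutex
        messages []Message
        nextSeq  uint64
        maxLen   int
    }

    func NewClientBuffer(maxLen int) *ClientBuffer {
        return &ClientBuffer{nextSeq: 1, maxLen: maxLen}
    }

    // Append stores a message, assigns it a sequence number, and evicts the
    // oldest entries once the size limit is exceeded.
    func (b *ClientBuffer) Append(payload []byte) Message {
        b.mu.Lock()
        defer b.mu.Unlock()
        m := Message{Seq: b.nextSeq, Payload: payload}
        b.nextSeq++
        b.messages = append(b.messages, m)
        if len(b.messages) > b.maxLen {
            b.messages = b.messages[len(b.messages)-b.maxLen:]
        }
        return m
    }

    // Resume returns every buffered message newer than the last sequence number
    // the client acknowledged before it lost the connection.
    func (b *ClientBuffer) Resume(lastAcked uint64) []Message {
        b.mu.Lock()
        defer b.mu.Unlock()
        var missed []Message
        for _, m := range b.messages {
            if m.Seq > lastAcked {
                missed = append(missed, m)
            }
        }
        return missed
    }

    func main() {
        buf := NewClientBuffer(1000)
        buf.Append([]byte("signal-1"))
        buf.Append([]byte("signal-2"))
        // A reconnecting client reports the last sequence number it saw (0 = none).
        for _, m := range buf.Resume(0) {
            fmt.Printf("replay seq=%d payload=%s\n", m.Seq, m.Payload)
        }
    }

The client only has to remember the highest sequence number it has processed and send it back when it reopens the WebSocket; the server replays everything newer from the buffer.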

GKE + WebSocket + NodePort 30s dropped connections

I have a golang service that implements a WebSocket client using gorilla and is exposed on a Google Container Engine (GKE)/k8s cluster via a NodePort (30002 in this case).
I've got a manually created load balancer (i.e. NOT a k8s ingress/load balancer) with HTTP/HTTPS frontends (i.e. 80/443) that forward traffic to nodes in my GKE/k8s cluster on port 30002.
I can get my JavaScript WebSocket implementation in the browser (Chrome 58.0.3029.110 on OSX) to connect, upgrade and send / receive messages.
I log ping/pongs in the golang WebSocket client and all looks good until 30s in. 30 seconds after connecting, my golang WebSocket client gets an EOF / close 1006 (abnormal closure) and my JavaScript code gets a close event. As far as I can tell, neither my Golang nor my JavaScript code is initiating the WebSocket closure.
As far as I know, I don't particularly need session affinity in this case, but I have tried both IP- and cookie-based affinity in the load balancer with long-lived cookies.
Additionally, this exact same set of k8s deployment/pod/service specs and golang service code works great on my KOPS based k8s cluster on AWS through AWS' ELBs.
Any ideas where the 30s forced closures might be coming from? Could that be a k8s default cluster setting specific to GKE or something on the GCE load balancer?
Thanks for reading!
-- UPDATE --
There is a backend configuration timeout setting on the load balancer which is for "How long to wait for the backend service to respond before considering it a failed request".
The WebSocket is not unresponsive. It is sending ping/pong and other messages right up until getting killed which I can verify by console.log's in the browser and logs in the golang service.
That said, if I bump the load balancer backend timeout setting to 30000 seconds, things "work".
It doesn't feel like a real fix though, because the load balancer would keep feeding traffic to genuinely unresponsive services, never mind the case where the WebSocket itself becomes unresponsive.
I've isolated the high timeout to a specific backend service using a path map, but I'm hoping to come up with a real fix for the problem.
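For reference, the same timeout bump can be applied directly to a backend service with gcloud. A sketch only; the backend name is a placeholder and 86400 seconds is just an arbitrarily long value:

    # Raise the backend response timeout so long-lived WebSocket connections
    # are not closed at the default 30 s
    gcloud compute backend-services update my-websocket-backend \
        --global --timeout=86400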
I think this may be Working as Intended. Google just updated the documentation today (about an hour ago).
LB Proxy Support docs
Backend Service Components docs
Cheers,
Matt
Check out the following example: https://github.com/kubernetes/ingress-gce/tree/master/examples/websocket
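If the service later moves behind a GKE Ingress, the backend timeout is usually set declaratively with a BackendConfig attached to the Service (via the cloud.google.com/backend-config annotation). A sketch with placeholder names:

    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: websocket-backendconfig
    spec:
      # Allow long-lived WebSocket connections instead of the 30 s default
      timeoutSec: 86400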

HTTP GET requests work but POST requests do not

Our Spring application is running on several different servers. For one of those servers, POST requests do not seem to be working. All site functionality that uses GET requests works completely fine; however, as soon as I hit something that uses a POST request (e.g. a form submit) the site just hangs permanently. The server won't give any response. We can see the requests in Tomcat Manager, but they don't time out.
Has anyone ever seen this?
We have found the problem. Our DBA accidentally deleted the MySQL database files on that particular server (/sigh). In our Spring application we use GET requests for record retrieval, and the records we were trying to retrieve must have been cached by MySQL, which made it seem as if GET requests were working. When trying to add new data to the database, which we do with POST requests, Tomcat would wait for a response from MySQL that never came.
In my experience, if you're getting a timeout error it's almost always due to not having the correct ports open for your application. For example, go into your virtual machine's rules and ensure ports 8080/8443 or 80/443 are open for HTTP and HTTPS traffic.
In Google Cloud Platform it's under VPC network -> Firewall rules. Azure and AWS are similar.
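As a concrete GCP example, a rule along these lines opens the usual web ports to the instance; the rule name and port list are placeholders to adapt:

    # Allow inbound HTTP/HTTPS (and the common Tomcat ports) from anywhere
    gcloud compute firewall-rules create allow-web \
        --direction=INGRESS --action=ALLOW \
        --rules=tcp:80,tcp:443,tcp:8080,tcp:8443 \
        --source-ranges=0.0.0.0/0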

Amazon Load Balancers dropping Web Socket connections to TorqueBox

I'm running TorqueBox on Amazon AWS. I've created a load balancer that does TCP pass-through for WebSocket connections on port 8675. When I first load the page this seems to work quite nicely; however, if I leave the page open for a while, the connection just stops working. I don't get an error message; it just silently ignores any further messages sent over the connection. If I reload the page at this point, everything works fine again.
I've tried connecting to individual nodes in the cluster directly, and the connection does not get dropped in that case, so my suspicion is that it has something to do with the load balancers.
Any ideas what might be causing this?
More information about your specific architecture might be useful, but my first guess is that you should enable session stickiness so that requests from the same host get directed to the same machine on AWS (if a request gets directed to another machine, the protocol would have to be renegotiated).
