Laravel echo/pusher not sending ping when receiving data

I'm running a Laravel websocket server and have a connection over wss.
I am running commands on the server, and the commands are logged to a file. Each line is also sent over a websocket to the front-end so I can view it. Each Laravel command has its own file and broadcast channel.
CommandLogger:
use Carbon\Carbon;
use Illuminate\Support\Str;

class CommandLogger implements Logger {
    public $commandname = '';
    public $broadcast = false;

    public function __construct($commandname, $broadcast = false) {
        $this->commandname = Str::camel(Str::slug($commandname));
        $this->broadcast = $broadcast;
    }

    public function log($message) {
        $message = Carbon::now()->format('Y-m-d H:i:s').": ".$message;
        // Append the line to the per-command log file...
        file_put_contents(storage_path("logs/commands/$this->commandname.log"), $message.PHP_EOL, FILE_APPEND | LOCK_EX);
        // ...and optionally broadcast it over the websocket.
        if($this->broadcast) {
            event(new CommandLogCreated($message, $this->commandname));
        }
    }
}
In Vue.js I listen with Echo (which uses pusher-js):
Echo.private('logs.commands.' + command)
    .listen('.command-log:created', event => {
        this.log[command] += event.message + "\n";
        let splitLog = this.log[command].split("\n");
        let splitLength = splitLog.length;
        if (splitLength > 200) {
            splitLog = splitLog.slice(splitLength - 200, splitLength);
        }
        this.log[command] = splitLog.join('\n');
        this.trigger++;
    })
    .error(error => {
        console.log(error);
    });
The issue I'm experiencing only happens when the command is sending a lot of messages to Echo.
At a normal rate, with some pauses between the messages, Echo does the ping-pong and the connection keeps receiving messages.
At higher message rates, it seems Echo is not sending the ping-pong and my socket silently stops receiving data. After it stops receiving, it starts ping-ponging again as if nothing happened. No disconnect occurs on either server or client.
Websocket messages (notice it stops at 205000, gives no error and resumes ping-pong):
Actual command output (at 230000 and still running):
If I refresh the page, I will receive messages again.
I've updated the websockets:serve command (vendor/beyondcode/laravel-websockets/src/Console/Commands/StartServer.php) directly and disabled the pong tracker:
protected function configurePongTracker()
{
    // $this->loop->addPeriodicTimer(10, function () {
    //     $this->laravel
    //         ->make(ChannelManager::class)
    //         ->removeObsoleteConnections();
    // });
}
Then I restarted the websocket server and tried again. This time, no matter how fast I sent messages in, Echo kept receiving them.
TL;DR:
My conclusion is that Echo should ping-pong in between receiving messages from the server, but it currently seems to fail at doing so, and the websocket server eventually cleans up the connection without disconnecting it. How can I either force Echo to do the ping-pong, or make sure the server does not clean up the connection, without running the risk of keeping runaway connections around?
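For what it's worth, pusher-js (which Echo uses under the hood) exposes activityTimeout and pongTimeout options, and Echo forwards its options to the Pusher constructor. A minimal sketch of tightening them, assuming a typical laravel-websockets client setup (the key, host, port and values below are placeholders, and whether this actually helps depends on whether incoming traffic resets the client's activity timer):
// Sketch only: activityTimeout / pongTimeout are standard pusher-js options;
// the connection details and values here are placeholders, not a verified fix.
import Echo from 'laravel-echo';
import Pusher from 'pusher-js';

window.Pusher = Pusher;

window.Echo = new Echo({
    broadcaster: 'pusher',
    key: 'your-app-key',              // placeholder
    wsHost: window.location.hostname, // assuming laravel-websockets runs on the same host
    wsPort: 6001,
    forceTLS: true,
    activityTimeout: 30000, // ms of inactivity before the client sends its own ping
    pongTimeout: 10000,     // ms to wait for a pong before treating the connection as dead
});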
Update 1:
I've been diving a little more into the StartServer.php file and found this:
public function removeObsoleteConnections(): PromiseInterface
{
    if (! $this->lock()->acquire()) {
        return Helpers::createFulfilledPromise(false);
    }

    $this->getLocalConnections()->then(function ($connections) {
        foreach ($connections as $connection) {
            $differenceInSeconds = $connection->lastPongedAt->diffInSeconds(Carbon::now());

            if ($differenceInSeconds > 120) {
                $this->unsubscribeFromAllChannels($connection);
            }
        }
    });

    return Helpers::createFulfilledPromise(
        $this->lock()->forceRelease()
    );
}
So that explains why I stop receiving messages but see no disconnect: it just silently unsubscribes the channels I'm listening to (I can't see this on the front-end) and keeps the connection alive.
I've also created an issue on GitHub (laravel/echo), because I do think this is unwanted behaviour. I'm just not sure whether the issue lies within Echo or within pusher-js.

I have been through this. Check your channel name and type, and make sure you have access. Also, check that no other JS files interfere with your app.js file, like what happened to me. Check this Laracasts question out.

Related

MassTransit: GetSendEndpoint

I have a producer which sends more than 1000 messages per minute to a specific endpoint. I'm using Microsoft DI and I've configured the send endpoint as described here: https://masstransit-project.com/usage/producers.html#send.
// MassTransit setup
serviceCollection.AddMassTransit(mt =>
{
    mt.UsingAzureServiceBus((ctx, cfg) =>
    {
        cfg.Host(massTransitSettings.TestServiceBusConnectionString);

        cfg.ReceiveEndpoint("mytestmessage", e =>
        {
            e.MaxDeliveryCount = 3; // How many times the transport will redeliver the message on negative acknowledgment
        });
    });
});

serviceCollection.AddTransient<ITestMessageProducer, TestMessageProducer>();

// Producer setup
public class TestMessageProducer : ITestMessageProducer
{
    private readonly ISendEndpointProvider _testEndpoint;

    public TestMessageProducer(ISendEndpointProvider testEndpoint)
    {
        _testEndpoint = testEndpoint;
    }

    public async Task SendTestMessage(ITestMessage testmessage)
    {
        var endpoint = await _testEndpoint.GetSendEndpoint(new Uri("queue:mytestmessage"));
        await endpoint.Send(testmessage);
    }
}
Query:
The SendTestMessage function is called very frequently, as mentioned above. Is it OK to call GetSendEndpoint every time? I have read somewhere that GetSendEndpoint creates a new instance of ISendEndpoint every time.
Will MaxDeliveryCount still apply to my send endpoint?
Thank you.
Send endpoints are cached by address; only a single instance will be created.
MaxDeliveryCount is a receive endpoint concern, but you should not configure a receive endpoint without consumers, as all messages will be moved to the _skipped queue.

Connection timeout in jPOS client

I am using a jPOS client (in one of the classes of a Java Spring MVC program) to connect to an ISO 8583-based server. However, for some reason the server is not able to respond, so my program keeps waiting for the response and hangs. What is the proper way to implement a connection timeout?
My client program looks like this:
public FieldsModal sendFundTransfer(FieldsModal field) {
    try {
        JposLogger logger = new JposLogger(ISO_LOG_LOCATION);
        org.jpos.iso.ISOPackager customPackager = new GenericPackager(ISO_PACKAGER);
        ISOChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager); // live
        logger.jposlogconfig(channel);
        channel.connect();
        log4j.info("Connection established using PostChannel");

        ISOMsg m = new ISOMsg();
        m.set(0, field.getMti());
        //m.set(2, field.getField2());
        m.set(3, field.getField3());
        m.set(4, field.getField4());
        m.set(11, field.getField11());
        m.set(12, field.getField12());
        m.set(17, field.getField17());
        m.set(24, field.getField24());
        m.set(32, field.getField32());
        m.set(34, field.getField34());
        m.set(41, field.getField41());
        m.set(43, field.getField43());
        m.set(46, field.getField46());
        m.set(49, field.getField49());
        m.set(102, field.getField102());
        m.set(103, field.getField103());
        m.set(123, field.getField123());
        m.set(125, field.getField125());
        m.set(126, field.getField126());
        m.set(127, field.getField127());
        m.setPackager(customPackager);

        System.out.println(ISOUtil.hexdump(m.pack()));
        channel.send(m);
        log4j.info("Message has been send");

        ISOMsg r = channel.receive();
        r.setPackager(customPackager);
        System.out.println(ISOUtil.hexdump(r.pack()));

        channel.disconnect();
    } catch (Exception err) {
        System.out.println("sendFundTransfer : " + err);
    }
    return field;
}
Well, the real proper way would be to use Q2. Given you don't need a persistent connection, you could just set a timeout for the channel:
PostChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager); // live
channel.setTimeout(timeout); // timeout in millis
This way the channel will auto-disconnect if nothing happens during the time specified by timeout, and your call to receive will throw an exception.
The alternative is using Q2 and a mux (see QMUX, for which you need to run Q2, or ISOMUX, which is kind of deprecated).

What's the difference between socket.to(id) and socket.broadcast.to(id)?

I'm writing an application using socket.io.
I'm confused by the official documentation about socket.broadcast.
From my testing, the code below has the same effect:
socket.to(id).emit('some event')
socket.broadcast.to(id).emit('some event')
What does broadcast do?
broadcast sets a flag on the socket:
Socket.prototype.__defineGetter__('broadcast', function () {
    this.flags.broadcast = true;
    return this;
});
which tells the manager to omit the current socket from broadcasting:
Socket.prototype.packet = function (packet) {
    if (this.flags.broadcast) {
        this.log.debug('broadcasting packet');
        this.namespace.in(this.flags.room).except(this.id).packet(packet);
    } else {
        ...
Thus socket.broadcast.to(room) has the following effect: the client connected to the socket will not receive the message, whereas with socket.to(room) all of the room's clients will receive the message, including the one connected to the socket.
I've just verified this for socket.io v0.9, but I doubt these mechanics are different in v1.x.
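For illustration, a minimal server-side sketch of the two calls side by side, assuming socket.io v0.9 semantics as described above and a hypothetical room named 'room1':
// Minimal sketch, assuming socket.io v0.9 and the behaviour described in the answer.
var io = require('socket.io').listen(3000);

io.sockets.on('connection', function (socket) {
    socket.join('room1');

    // Per the answer: every client in 'room1' receives this,
    // including the socket that emitted it.
    socket.to('room1').emit('some event', { from: socket.id });

    // Per the answer: every client in 'room1' EXCEPT this socket receives it.
    socket.broadcast.to('room1').emit('some event', { from: socket.id });
});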

Sails.js session access during custom onConnect in socket.io

I'm trying to group all my socket.io connections into groups.
I want one group for each Sails.js session.
My first goal is to authenticate all tabs at the same time.
So I tried to do this with onConnect in config/sockets.js, like this:
onConnect: function(session, socket) {
    // By default: do nothing
    // This is a good place to subscribe a new socket to a room, inform other users that
    // someone new has come online, or any other custom socket.io logic
    if (typeof session.socket == 'undefined') {
        session.socket = [];
    }
    session.socket.push(socket.id);
    session.save();
    console.log(session, socket);
},

// This custom onDisconnect function will be run each time a socket disconnects
onDisconnect: function(session, socket) {
    // By default: do nothing
    // This is a good place to broadcast a disconnect message, or any other custom socket.io logic
    if (Array.isArray(session.socket)) {
        var i = session.socket.indexOf(socket.id);
        if (i != -1) {
            session.socket.splice(i, 1);
            session.save();
        }
    }
    console.log(session, socket);
},
But I realized that the session doesn't save my modifications.
I tried session.save, but Sails.js doesn't know about req; internally it calls:
Session.set(sessionKey, req.session, function (err) {
I want to access the Sails.js session, but I don't know how to do it.
I tried to find a solution, but after 6 hours of searching I think it's time to ask for some help!
Thanks, and sorry for my poor English (I'm French).
There appears to be a bug in the implementation of onConnect and onDisconnect in Sails v0.9.x. You can work around it for now by adding the following line before a call to session.save in those methods:
global.req = {}; global.req.session = session;
then changing session.save() to:
session.save(function(){delete global.req;});
That will provide the missing req var as a global, and then delete the global (for safety) after the session is saved.
Note that this issue only affects sessions in the onConnect and onDisconnect methods; inside of controller code session.save should work fine.
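Put together, a sketch of what onConnect from the question might look like with this workaround applied (everything except the two global.req lines and the save callback comes from the question's config/sockets.js):
// config/sockets.js: onConnect with the Sails v0.9.x workaround applied (sketch)
onConnect: function(session, socket) {
    if (typeof session.socket == 'undefined') {
        session.socket = [];
    }
    session.socket.push(socket.id);

    // Workaround: expose the session on a temporary global `req`
    // so the internal Session.set(sessionKey, req.session, ...) call can find it.
    global.req = {};
    global.req.session = session;

    session.save(function () {
        delete global.req; // remove the global again once the session is persisted
    });
},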
Thanks for pointing this out!

RestSharp - when a test runs for the first time, it fails. When I debug, it passes. What's going on?

Pretty basic test:
[TestClass]
public class ApiClientTest
{
    private RestClient _client;

    [TestInitialize()]
    public virtual void TestInitialize()
    {
        _client = new RestClient("http://localhost:24144");
        _client.CookieContainer = new System.Net.CookieContainer();
    }

    [TestMethod]
    public void ApiClientTestCRUD()
    {
        // 1. Log out twice. Verify Unauthorized.
        var response = LogOut();
        response = LogOut();
        Assert.AreEqual(response.StatusCode, HttpStatusCode.Unauthorized);
        // Error here:
Result Message: Assert.AreEqual failed. Expected:<0>. Actual:<Unauthorized>.
I get <0>, which isn't even something that my WebAPI returns.
I think the issue is with my use of RestSharp, because if I debug one time it passes, and then subsequent runs pass. Any clue what's going on?
To be clear - this occurs when I open up my solution and attempt to run the test for the first time. I can fix it by debugging once, watching it pass, and then running without debugging as much as I want. I can reproduce this by closing VS and opening up the solution again - and running the test without debugging first.
Here's the LogOut method in my WebAPI:
[Authorize]
public HttpResponseMessage LogOut()
{
    try
    {
        if (User.Identity.IsAuthenticated)
        {
            WebSecurity.Logout();
            return Request.CreateResponse(HttpStatusCode.OK, "logged out successfully.");
        }
        return Request.CreateResponse(HttpStatusCode.Conflict, "already done.");
    }
    catch (Exception e)
    {
        return Request.CreateResponse(HttpStatusCode.InternalServerError, e);
    }
}
UPDATE:
I ended up running the tests with Trace.WriteLine:
// 1. Log out twice. Verify Unauthorized.
Trace.WriteLine("ENTERING FIRST LOGOUT");
var response = LogOut();
Trace.WriteLine("Content: " + response.Content);
Trace.WriteLine("ErrorMessage: " + response.ErrorMessage);
Trace.WriteLine("ResponseStatus: " + response.ResponseStatus);
Trace.WriteLine("StatusCode: " + response.StatusCode);
Trace.WriteLine("StatusDescription: " + response.StatusDescription);
response = LogOut();
Trace.WriteLine("COMPLETED LOGOUTS");
Assert.AreEqual(response.StatusCode, HttpStatusCode.Unauthorized);
And I found the following:
ENTERING FIRST LOGOUT
Content:
ErrorMessage: Unable to connect to the remote server
ResponseStatus: Error
StatusCode: 0
StatusDescription:
COMPLETED LOGOUTS
My solution has a test project with this RestSharp test, and a WebAPI project that's supposed to be accepting these requests. If I debug, the RestClient connects. If not, it times out. Any tips?
When debugging can't solve the problem, go the old-fashioned way.
Add Trace.WriteLine (or even append text to a C:\temp.txt file).
Write some string before every return in the LogOut method, then try writing some more information (if it's the last return, write the Exception message; if it's the second return, write the Identity information).
Hope this helps.
How are you hosting the server? I see that you're using port 24144. Maybe in debug mode you're running the IIS Express web server and that's the port, but in non-debug mode it's not?
