Is there a way to restart a queue from specific logic - Laravel

I have a daemon queue worker running, which I am using for a socket connection. Is there any way to restart this daemon from inside the code when specific logic applies? Any command? I want to restart the daemon queue from inside that queue:
\Ratchet\Client\connect($this->stream)->then(function ($ws) {
    $ws->on('message', function ($data) {
        // handle incoming messages
    });

    $ws->on('close', function ($code = null, $reason = null) {
        //RESTART THE QUEUE THIS SOCKET BELONGS TO
        Log::alert("userData: WebSocket Connection closed! ({$code} - {$reason})" . PHP_EOL);
    });
});
There are multiple sockets running (dispatched from the same job), and I want to restart all of them if one is closed.

Related

How to re-establish grpc bidirectional stream if internet connection is down

I am using a Go client and server which are connected with a gRPC bidirectional stream. I need that stream to keep running forever without any disconnection, but the stream disconnects within 3 minutes when the internet is down. Is there any way to stop the client from disconnecting, or is there any way to reconnect automatically to the server when the connection drops? If so, please guide me with this. Thank you.
When a gRPC connection is closed, the state of the gRPC client connection will be IDLE or TRANSIENT_FAILURE. Here is my example of a custom reconnect mechanism for gRPC bidirectional streaming in Go. First, I have a for loop that keeps reconnecting until the gRPC server is up; the state becomes READY after calling conn.Connect().
for {
    select {
    case <-ctx.Done():
        return false
    default:
        if client.Conn.GetState() != connectivity.Ready {
            client.Conn.Connect()
        }
        // reserve a short duration (customizable) for conn to change state from idle to ready if grpc server is up
        time.Sleep(500 * time.Millisecond)
        if client.Conn.GetState() == connectivity.Ready {
            return true
        }
        // define reconnect time interval (backoff) or/and reconnect attempts here
        time.Sleep(2 * time.Second)
    }
}
Also, a goroutine will be spawned to execute the reconnect tasks. After successfully reconnecting, it spawns another goroutine to listen to the gRPC server.
for {
    select {
    case <-ctx.Done():
        return
    case <-reconnectCh:
        if client.Conn.GetState() != connectivity.Ready && *isConnectedWebSocket {
            if o.waitUntilReady(client, isConnectedWebSocket, ctx) {
                err := o.generateNewProcessOrderStream(client, ctx)
                if err != nil {
                    logger.Logger.Error("failed to establish stream connection to grpc server ...")
                }
                // re-listening server side streaming
                go o.listenProcessOrderServerSide(client, reconnectCh, ctx, isConnectedWebSocket)
            }
        }
    }
}
Note that the listening task is handled concurrently by another goroutine.
// listening server side streaming
go o.listenProcessOrderServerSide(client, reconnectCh, websocketCtx, isConnectedWebSocket)
You can check out my code example here. Hope this helps.

Ubuntu Mosquitto broker websocket is not working

I'm new to IoT and the MQTT communication protocol. I'm trying to connect to my broker, which runs on an Amazon EC2 instance, from my Vue web app via WebSockets. I have started mosquitto with:
root@ip-xxx-xx-xx-xx:~# mosquitto -c /etc/mosquitto/conf.d/default.conf
1618518468: mosquitto version 1.6.7 starting
1618518468: Config loaded from /etc/mosquitto/conf.d/default.conf.
1618518468: Opening ipv4 listen socket on port 1883.
1618518468: Opening ipv6 listen socket on port 1883.
1618518468: Opening websockets listen socket on port 9001.
The /etc/mosquitto/conf.d/default.conf file contains:
listener 1883
protocol mqtt
allow_anonymous true
listener 9001
protocol websockets
allow_anonymous true
My test JS file is:
var mqtt = require('mqtt');
var count =0;
var client = mqtt.connect("mqtt://xx.xxx.xxx.xxx",{clientId:"mqttjs01"});
console.log("connected flag " + client.connected);

//handle incoming messages
client.on('message',function(topic, message, packet){
    console.log("message is "+ message);
    console.log("topic is "+ topic);
});

client.on("connect",function(){
    console.log("connected "+ client.connected);
})

//handle errors
client.on("error",function(error){
    console.log("Can't connect" + error);
    process.exit(1)
});

//publish
function publish(topic,msg,options){
    console.log("publishing",msg);
    if (client.connected == true){
        client.publish(topic,msg,options);
    }
    count+=1;
    if (count==2) //ens script
        clearTimeout(timer_id); //stop timer
        client.end();
}

//////////////
var options={
    retain:true,
    qos:1};
var topic="acs";
var message="test message";
var topic_list=["topic2","topic3","topic4"];
var topic_o={"topic22":0,"topic33":1,"topic44":1};

console.log("subscribing to topics");
client.subscribe(topic,{qos:0}); //single topic
client.subscribe(topic_list,{qos:1}); //topic list
client.subscribe(topic_o); //object

var timer_id=setInterval(function(){publish(topic,message,options);},5000);
//notice this is printed even before we connect
console.log("end of script");
But I'm getting this error:
New client connected from 176.xxx.xxx.xx as mqttjs01 (p2, c1, k60).
1618518546: Socket error on client mqttjs01, disconnecting.
I have installed libwebsockets and I have tried various mosquitto versions; the current version is 1.6.7.
Is there a problem with my client or broker? How can I fix this?
At the end of the publish() function, the if statement is missing enclosing braces, so it doesn't do what you think it does.
function publish(topic,msg,options){
    console.log("publishing",msg);
    if (client.connected == true){
        client.publish(topic,msg,options);
    }
    count+=1;
    if (count==2) //ens script
        clearTimeout(timer_id); //stop timer
        client.end();
}
Let's fix the indentation so we can see more clearly.
function publish(topic,msg,options){
    console.log("publishing",msg);
    if (client.connected == true){
        client.publish(topic,msg,options);
    }
    count+=1;
    if (count==2) //ens script
        clearTimeout(timer_id); //stop timer
    client.end();
}
As you can see, client.end() will ALWAYS be called whenever publish() is called. If you only want to publish twice, you need to wrap the two statements in braces (this is not Python, where whitespace has meaning):
if (count==2) { //ens script
    clearTimeout(timer_id); //stop timer
    client.end();
}
You really should indent all your code properly; it will make it much easier to read and to spot errors like this.
Also, as #JDAllen mentioned, you are not making use of the WebSocket connection, unless this code is running in the browser, where the sandbox will force it to be a WebSocket connection even if you specify mqtt:// as the schema in the URL. You will also have to include the port number to make it actually connect, e.g.
ws://xxx.xxx.xxx.xxx:9001
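For example, a minimal sketch (the broker address is a placeholder) of pointing the same mqtt.js client at the WebSockets listener:
var mqtt = require('mqtt');

// Use the ws:// scheme and the port of the "protocol websockets" listener
// (9001 in the config above) instead of mqtt:// on the default port.
var client = mqtt.connect("ws://xx.xxx.xxx.xxx:9001", {clientId: "mqttjs01"});

client.on("connect", function() {
    console.log("connected over websockets " + client.connected);
});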

Socket.io - using multiple nodes

So I was looking into running socket.io across multiple processes.
The guide here: https://socket.io/docs/using-multiple-nodes/ left me with some questions.
It mentions configuring nginx to load balance between socket.io processes, but it also mentions using the built-in cluster module in Node.js below.
Am I supposed to be using nginx AND the cluster module in Node.js for this?
Also how do I tell if load balancing is working?
I tested the nginx option with two socket.io processes running, using the redis adapter and the cluster module.
This is what I had in my nginx config:
http {
    upstream io_nodes {
        ip_hash;
        server 127.0.0.1:6001;
        server 127.0.0.1:6002;
    }

    server {
        listen 3000;
        server_name example.com;

        location / {
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_http_version 1.1;
            proxy_pass http://io_nodes;
        }
    }
}
This is an example of my socket.io code (most of it taken from here: https://github.com/elad/node-cluster-socket.io):
var express = require('express'),
    cluster = require('cluster'),
    net = require('net'),
    redis = require('redis'),
    sio = require('socket.io'),
    sio_redis = require('socket.io-redis');

var port = 6001,
    num_processes = require('os').cpus().length;

if (cluster.isMaster) {
    console.log('is master 6001');

    // This stores our workers. We need to keep them to be able to reference
    // them based on source IP address. It's also useful for auto-restart,
    // for example.
    var workers = [];

    // Helper function for spawning worker at index 'i'.
    var spawn = function(i) {
        workers[i] = cluster.fork();

        // Optional: Restart worker on exit
        workers[i].on('exit', function(code, signal) {
            console.log('respawning worker', i);
            spawn(i);
        });
    };

    // Spawn workers.
    for (var i = 0; i < num_processes; i++) {
        spawn(i);
    }

    // Helper function for getting a worker index based on IP address.
    // This is a hot path so it should be really fast. The way it works
    // is by converting the IP address to a number by removing non numeric
    // characters, then compressing it to the number of slots we have.
    //
    // Compared against "real" hashing (from the sticky-session code) and
    // "real" IP number conversion, this function is on par in terms of
    // worker index distribution only much faster.
    var worker_index = function(ip, len) {
        var s = '';
        for (var i = 0, _len = ip.length; i < _len; i++) {
            if (!isNaN(ip[i])) {
                s += ip[i];
            }
        }
        return Number(s) % len;
    };

    // Create the outside facing server listening on our port.
    var server = net.createServer({ pauseOnConnect: true }, function(connection) {
        // We received a connection and need to pass it to the appropriate
        // worker. Get the worker for this connection's source IP and pass
        // it the connection.
        var worker = workers[worker_index(connection.remoteAddress, num_processes)];
        worker.send('sticky-session:connection', connection);
    }).listen(port);
} else {
    // Note we don't use a port here because the master listens on it for us.
    var app = new express();

    // Here you might use middleware, attach routes, etc.

    // Don't expose our internal server to the outside.
    var server = app.listen(0, 'localhost'),
        io = sio(server);

    // Tell Socket.IO to use the redis adapter. By default, the redis
    // server is assumed to be on localhost:6379. You don't have to
    // specify them explicitly unless you want to change them.
    io.adapter(sio_redis({ host: 'localhost', port: 6379 }));

    // Here you might use Socket.IO middleware for authorization etc.
    io.on('connection', function(socket) {
        console.log('port 6001');
        console.log(socket.id);
    });

    // Listen to messages sent from the master. Ignore everything else.
    process.on('message', function(message, connection) {
        if (message !== 'sticky-session:connection') {
            return;
        }

        // Emulate a connection event on the server by emitting the
        // event with the connection the master sent us.
        server.emit('connection', connection);
        connection.resume();
    });
}
Connections worked just fine with this, although I'm testing it all locally.
How do I know if it's working properly? Every time the client connects, it seems to connect to the socket.io process on port 6001.
The client connect code connects to port 3000.
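For reference, a minimal sketch (assuming local testing against the nginx listener above) of that client connect code:

// Connect to nginx on port 3000; nginx proxies the request to one of the
// socket.io worker processes behind it.
var socket = require('socket.io-client')('http://localhost:3000');

socket.on('connect', function() {
    console.log('connected as', socket.id);
});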
Am I supposed to be using nginx AND the cluster module in Node.js for this?
If all your server processes are on one computer, you can use the cluster module without NGINX.
If you're using multiple server computers, then you need a piece of network infrastructure like NGINX to load balance among the different servers since node.js clustering cannot do that for you.
And you can use both together (multiple servers load balanced by something like NGINX, with clustering running on each server). The key here is that node.js clustering only spreads the load among different processes on the same host.
Also how do I tell if load balancing is working?
You can have each process log the activity it is processing and add the process ID as part of the logging and if you are loading your server with multiple requests at the same time, you should see some load being handled by each process. If you do actual load testing, you should get significantly more throughput when clustering is on and working vs. not using clustering. Keep in mind that total throughput depends upon where your bottlenecks are so if your server is actually database bound and all clustered processes are using the same database, you may not benefit much from clustering the node.js process. If, on the other hand, your node.js process is compute intensive and you have multiple cores in your server, you may get a significant benefit from clustering.
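For example, a quick sanity check (a sketch building on the clustered code above) is to tag each worker's connection log with its process ID:

// Inside each worker: log which process accepted the connection so that,
// under load, you can see connections spread across different PIDs.
io.on('connection', function(socket) {
    console.log('worker', process.pid, 'handled connection', socket.id);
});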
Adding one more point to the above solution.
Also how do I tell if load balancing is working?
I am using node-debug for the same; it opens a separate debugger for each worker process. Now you can add breakpoints to check whether the load is being distributed properly.

How to listen for ACK packets on ephemeral ports

I need to write a TFTP client implementation to send a file from a Windows Phone 8.1 device to a piece of hardware.
Because I need to support Windows 8.1, I need to use the Windows.Networking.Sockets classes.
I'm able to send my Write request packet, but I am having trouble receiving the ACK packet (seen in Wireshark). This ACK packet is sent to an "ephemeral port" according to the TFTP specification, but the port is blocked according to Wireshark.
I know how to use sockets on a specific port, but I don't know how to receive ACK packets sent to different (ephemeral) ports. I need to use the port used for that ACK packet to continue the TFTP communication.
How would I be able to receive the ACK packets and continue to work on a different port? Do I need to bind the socket to multiple ports? I've been trying to find answers in the Microsoft docs and on Google, but other implementations gave me no luck so far.
For reference, my current implementation:
try {
    hostName = new Windows.Networking.HostName(currentIP);
} catch (error) {
    WinJS.log && WinJS.log("Error: Invalid host name.", "sample", "error");
    return;
}

socketsSample.clientSocket = new Windows.Networking.Sockets.DatagramSocket();
socketsSample.clientSocket.addEventListener("messagereceived", onMessageReceived);
socketsSample.clientSocket.bindEndpointAsync(new Windows.Networking.HostName(hostName), currentPort);

WinJS.log && WinJS.log("Client: connection started.", "sample", "status");

socketsSample.clientSocket.connectAsync(hostName, serviceName).done(function () {
    WinJS.log && WinJS.log("Client: connection completed.", "sample", "status");
    socketsSample.connected = true;

    var remoteFile = "test.txt";
    var tftpMode = Modes.Octet;
    var sndBuffer = createRequestPacket(Opcodes.Write, remoteFile, tftpMode);

    if (!socketsSample.clientDataWriter) {
        socketsSample.clientDataWriter =
            new Windows.Storage.Streams.DataWriter(socketsSample.clientSocket.outputStream);
    }

    var writer = socketsSample.clientDataWriter;
    var reader;
    var stream;
    writer.writeBytes(sndBuffer);

    // The call to store async sends the actual contents of the writer
    // to the backing stream.
    writer.storeAsync().then(function () {
        // For the in-memory stream implementation we are using, the flushAsync call
        // is superfluous, but other types of streams may require it.
        return writer.flushAsync();
    });
}, onError);
Finally found the issue.
Instead of connectAsync I used getOutputStreamAsync, and now the client socket receives messages.
Some code:
tftpSocket.clientSocket.getOutputStreamAsync(new Windows.Networking.HostName(self.hostName), tftpSocket.serviceNameConnect).then(function (stream) {
    console.log("Client: connection completed.", "sample", "status");

    var writer = new Windows.Storage.Streams.DataWriter(stream); //use the stream that was created when calling getOutputStreamAsync
    tftpSocket.clientDataWriter = writer; //keep the writer so that when we close the sockets we can also close the writer
    writer.writeBytes(sndBytes);

    // The call to store async sends the actual contents of the writer
    // to the backing stream.
    writer.storeAsync().then(function () {
        // For the in-memory stream implementation we are using, the flushAsync call
        // is superfluous, but other types of streams may require it.
        return writer.flushAsync();
    });
}, self.onError);

Subscribing to a removed queue with spring-websocket and RabbitMQ broker (Queue NOT_FOUND)

I have a spring-websocket (4.1.6) application on Tomcat 8 that uses a STOMP RabbitMQ (3.4.4) message broker for messaging. When a client (Chrome 47) starts the application, it subscribes to an endpoint, creating a durable queue. When this client unsubscribes from the endpoint, the queue will be cleaned up by RabbitMQ after 30 seconds, as defined in a custom-made RabbitMQ policy. When I try to reconnect to an endpoint whose queue was cleaned up, I receive the following exception in the RabbitMQ logs: "NOT_FOUND - no queue 'position-updates-user9zm_szz9' in vhost '/'\n". I don't want to use an auto-delete queue since I have some reconnect logic in case the websocket connection dies.
This problem can be reproduced by adding the following code to the spring-websocket-portfolio github example.
In the container div in the index.html add:
<button class="btn" onclick="appModel.subscribe()">SUBSCRIBE</button>
<button class="btn" onclick="appModel.unsubscribe()">UNSUBSCRIBE</button>
In portfolio.js replace:
stompClient.subscribe("/user/queue/position-updates", function(message) {
with:
positionUpdates = stompClient.subscribe("/user/queue/position-updates", function(message) {
and also add the following:
self.unsubscribe = function() {
    positionUpdates.unsubscribe();
}

self.subscribe = function() {
    positionUpdates = stompClient.subscribe("/user/queue/position-updates", function(message) {
        self.pushNotification("Position update " + message.body);
        self.portfolio().updatePosition(JSON.parse(message.body));
    });
}
Now you can reproduce the problem by:
Launch the application
Click unsubscribe
Delete the position-updates queue in the RabbitMQ console
Click subscribe
Find the error message in the websocket frame via the Chrome DevTools and in the RabbitMQ logs.
reconnect logic in case the websocket connection dies.
and
no queue 'position-updates-user9zm_szz9' in vhost
are two completely different stories.
I'd suggest you implement "re-subscribe" logic for the case of a deleted queue.
Actually, that is how STOMP works: it creates an auto-delete (generated) queue for the subscription and, yes, it is removed on unsubscribe.
See more info in the RabbitMQ STOMP Adapter Manual.
On the other hand, consider subscribing to an existing AMQP queue:
To address existing queues created outside the STOMP adapter, destinations of the form /amq/queue/<name> can be used.
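In the portfolio example that might look something like this (a sketch; the queue name is illustrative and refers to a queue created outside the STOMP adapter):

// Subscribe to a pre-existing AMQP queue instead of a server-generated one.
positionUpdates = stompClient.subscribe("/amq/queue/position-updates", function(message) {
    self.pushNotification("Position update " + message.body);
    self.portfolio().updatePosition(JSON.parse(message.body));
});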
The problem is STOMP won't recreate the queue if it gets deleted by the RabbitMQ policy. I worked around it by creating the queue myself when the SessionSubscribeEvent is fired.
public void onApplicationEvent(AbstractSubProtocolEvent event) {
    if (event instanceof SessionSubscribeEvent) {
        MultiValueMap nativeHeaders = (MultiValueMap) event.getMessage().getHeaders().get("nativeHeaders");
        List destination = (List) nativeHeaders.get("destination");
        String queueName = ((String) destination.get(0)).substring("/queue/".length());
        try {
            Connection connection = connectionFactory.newConnection();
            Channel channel = connection.createChannel();
            channel.queueDeclare(queueName, true, false, false, null);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
