Do I need to run Mosquitto to interact with a remote mosquitto broker - ruby

I am new to mqtt and would like to get my head around something.
I need to get messages from (subscribe to) topics from a remote mosquitto broker. The documentation for the service says I need to run a mosquitto broker on my server.
If I understand correctly then a script that uses the mqtt gem and manages to connect using something like this:
MQTT::Client.connect(conn_opts) do |c|
  # The block will be called when messages arrive on the topic
  c.get('test') do |topic, message|
    puts "#{topic}: #{message}"
  end
end
IS a broker? Do I need to run mosquitto on my machine or can I get away with just a script and mqtt?
The doc describes the architecture and includes these lines:
The 3rd party platform needs an MQTT broker installed that will allow
communication with the different boxes on our servers. The broker on our servers will
initiate the connection and provide the credentials to allow
bidirectional communication.
The architecture I have in mind is a scheduled background process, using ruby-mqtt, that will spawn, connect to the remote mosquitto server and pull down new messages in batches before finishing. Does this sound like a reasonable approach for getting messages from a remote mosquitto broker?
I have a sneaking suspicion there is something I am not getting... any help/direction will be appreciated. Thanks!

No, you do not need a local MQTT server, you can connect directly to the remote server from your ruby script.
It is typical to keep the MQTT client running all the time, rather than downloading periodically using cron. That said, I imagine it could work, provided you use QoS 1/2 and disable clean sessions, so that messages are retained on the remote server while you are disconnected. Despite its name, MQTT is not a message queuing protocol, it is a publish/subscribe protocol, so it is possible that the remote server will not allow you to build up a large pool of messages.
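For instance, a minimal sketch of that batch approach with the mqtt gem might look like this (host, credentials and topic are placeholders; the fixed client_id plus clean_session: false is what asks the broker to hold messages between runs):

require 'mqtt'

conn_opts = {
  host: 'remote-broker.example.com',    # placeholder
  username: 'user', password: 'secret', # placeholders
  client_id: 'batch-subscriber-01',     # must stay the same across runs
  clean_session: false                  # keep session state on the broker
}

MQTT::Client.connect(conn_opts) do |c|
  c.subscribe('test' => 1) # ask for QoS 1 so the broker queues messages
                           # (but see the caveat about ruby-mqtt's QoS support below)
  sleep 2                  # crude: give the broker a moment to replay the backlog
  until c.queue_empty?
    topic, message = c.get
    puts "#{topic}: #{message}"
  end
end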
It may however be desirable to have a local MQTT server (eg mosquitto):
* Your local MQTT server could deal with storing messages to disk until your Ruby script is ready for them
* It allows multiple local clients to receive the same message without the remote server having to send it over the network multiple times
* Multiple local clients can send messages to each other, even when the remote network is down
Also be warned that ruby-mqtt doesn't yet support QoS 1 properly, nor message persistence or automatic reconnects, so a local mosquitto instance could solve some of those problems for you.
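If you do go the local-broker route, mosquitto can maintain the remote subscription for you via a bridge. A sketch of the relevant mosquitto.conf section (address, credentials and topic pattern are all illustrative):

# Bridge to the remote broker
connection remote-bridge
address remote-broker.example.com:1883
remote_username user
remote_password secret
cleansession false
topic test/# in 1

Your Ruby script then subscribes to the local broker, which stores anything that arrives while the script isn't running.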

Related

When 2 servers are connected to the same socket.io redis adapter, do both of them get messages from the client at the same time?

I have two servers, server-a and server-b.
To use socket.io across both, the two servers usually use the redis adapter; the client can then connect to either server-a or server-b.
Now the question is: if the client is connected to server-a and emits a message, does server-b have a way to get the message?
The client code:
io.emit('sendMessage',myMessage)
The Server-a Code:
io.on('sendMessage', function () {
  console.log('Server A got the message')
})
The Server-b Code:
io.on('sendMessage', function () {
  console.log('Server B got the message')
})
The client is connected only to server-a. server-a & server-b are using the same redis adapter.
The question is: when the client emits a message, will server-b get it? (server-b is only connected to the same redis.)
What I want to do: I have several servers that should perform an action based on a client request. When the client requests something, all the servers need to start working. I thought to do it with socket.io, and to keep one connection between the client and one of the servers.
All the servers will use socket.io to get the same message from the client.
If you are using the redis adapter properly with all your servers, then when you do something like:
io.emit('sendMessage',myMessage)
from any one of your servers, then that message will end up being sent to all the clients connected to all your servers. What happens internally is that the message is sent to a redis channel which all the servers are listening to. When each server gets the message, it broadcasts to its own connected users; these last steps are handled transparently for you by the redis adapter and redis store.
So, io.emit() is used to send to all connected clients (which uses all the servers in order to carry out the broadcast). It is not used to broadcast the same message directly to all your servers so that they can each manually process that message.
To send to each of your servers, you could use your own custom redis publish/subscribe channel, since each server is already connected to redis and this is something that redis is good at.
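For example, a minimal sketch of such a control channel, shown here in Ruby with the redis gem for brevity (the same pattern exists in Node's redis package; the channel name is made up):

require 'redis'

# Each server runs a dedicated subscriber connection on a shared channel
# (subscribe blocks its connection, hence the separate thread).
Thread.new do
  Redis.new.subscribe('server-commands') do |on|
    on.message do |_channel, payload|
      puts "this server received: #{payload}"
    end
  end
end

# Whichever server holds the client's socket republishes the request:
Redis.new.publish('server-commands', 'start-work')
sleep 1 # demo only: give the subscriber a moment before exiting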
Or, you could designate one master socket.io server and have all the other servers connect to it with socket.io. Then any server could ask the central server to broadcast a message to all the other servers.

When to choose a remote queue design versus local queue for get/put activities

I'm trying to figure out under what conditions I would want to implement a remote queue versus a local one for 2 endpoint applications.
Consider this scenario: App A on Server A needs to send messages to App B on Server B via MQServer1.
It seems like the simplest configuration would be to create a single local queue on MQServer1 and configure AppA to put messages to the local queue while configuring AppB to get messages from the same local queue. Both AppA and AppB would connect to the same Queue Manager but execute different commands.
What sort of circumstances would require installing another MQ server (e.g. MQServer2) and configuring a remote queue on MQServer1 which instead sends the messages from AppA over a channel to a local queue on MQServer2 to be consumed by AppB?
I believe I understand the benefit of remote queuing but I'm not sure when it's best used over the simpler design.
Here are some problems with what you call the simpler design that you don't have with remote queuing:
Time Independence - Server 1 has to be available all the time, whereas with a remote queue, once the messages have been moved to Server B, Server A and Server 1 don't need to be online when App B wants to get its messages.
Network Efficiency - with two client applications putting to or getting from a central queue, you have two inefficient network hops, instead of one efficient, batched channel connection from Server A to Server B (no need for Server 1 in the middle).
Network Problems - no network, no messages. Whereas when messages are stored locally, any that have already arrived can be processed even while the network is down. Likewise, the application putting messages is not held up by a network problem; the messages sit on the transmit queue ready to be moved, and the application can get on with the next thing.
Of course your applications should be written so that they aren't even aware of the difference, and it's just configuration changes that switch you from one design to the other.
Alternatively, each application can have its own Queue Manager. Application A puts the message to a remote queue defined on its local Queue Manager, which places it on a transmission queue; the defined channels (which need to be configured in the Queue Managers) then move it across to the local queue of Application B.
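As a sketch, the definitions for that two-queue-manager design might look like this in MQSC (queue, channel and host names are all illustrative):

* On QM1 (Server A's queue manager)
DEFINE QLOCAL('QM2.XMITQ') USAGE(XMITQ)
DEFINE QREMOTE('APPB.QUEUE') RNAME('APPB.QUEUE') RQMNAME('QM2') XMITQ('QM2.XMITQ')
DEFINE CHANNEL('QM1.TO.QM2') CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('serverb(1414)') XMITQ('QM2.XMITQ')

* On QM2 (Server B's queue manager)
DEFINE QLOCAL('APPB.QUEUE')
DEFINE CHANNEL('QM1.TO.QM2') CHLTYPE(RCVR) TRPTYPE(TCP)

App A puts to APPB.QUEUE on QM1 exactly as if it were local; the channel moves the message across, so switching between the two designs really is just a configuration change, as noted above.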

Does websocket only broadcast data to all connected clients instead of sending to a particular client?

I am new to websockets. While reading about websockets, I have not been able to find answers to some of my doubts. I would appreciate it if someone could clarify them.
Does websocket only broadcast the data to all connected clients instead of sending to a particular client? Every example I have tried (mainly chat apps) sends data to all the clients. Is it possible to alter this?
How does it work for clients behind NAT (behind a router)?
Since the client-server connection will always remain open, how will it affect server performance for a large number of connections?
Since I want all my clients to get real-time updates, all my clients need to stay connected to the server, so how should I handle the client connection limit?
NOTE: My client is not a web browser but a desktop application.
No, websocket is not only for broadcasting. You send messages to specific clients; a broadcast just sends the same message to every connected client, but you can send different messages to different clients, for example in a game session.
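For example, a sketch of per-client sends using the em-websocket gem (the identification scheme here is made up; any way of remembering individual sockets works):

require 'em-websocket' # gem install em-websocket

clients = {} # our own registry of individual connections

EM.run do
  EM::WebSocket.run(host: '0.0.0.0', port: 8080) do |ws|
    ws.onopen do |handshake|
      # Identify the client however you like; the request path is one option.
      clients[handshake.path] = ws
    end
    ws.onmessage do |msg|
      # Send to one specific client instead of broadcasting to everyone.
      target = clients['/player-2'] # hypothetical client id
      target.send("private: #{msg}") if target
    end
    ws.onclose { clients.delete_if { |_, socket| socket == ws } }
  end
end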
The clients connect to the server and initialise the connections, so NAT is not a problem.
It's good to use a scalable server, e.g. an event-driven server (e.g. Node.js) that doesn't use a separate thread for each connection, or an Erlang server with lightweight processes (a good choice for a game server).
This should not be a problem if you use a good server OS (e.g. Linux), but may be a limitation if your server uses a desktop version of Windows (e.g. may be limited to 200 connections).

Do I *really* need RPC and NETBIOS to use transactional NServiceBus queues between local servers and Amazon EC2?

We have been trying - without success - to get transactional message queues working between local servers and our cloud servers up in Amazon EC2.
We're using NServiceBus, and have got the pub/sub examples and various other trivial apps working locally between here and EC2, but trying to spin up the components of our actual application is proving... vexatious.
As far as I can work out, to allow a local server (DYLAN-PC) to send a message transactionally via a queue on an Amazon EC2 instance, I will need to:
Enable NETBIOS name resolution (e.g. via the /etc/lmhosts file) at both ends
Allow RPC connections to be initiated from either end (so open port 135 for RPC plus various other ports)
Configure MSDTC on both systems, enabling remote connections and inbound/outbound connections
Have I missed something? In particular, the requirement to allow NetBIOS in an age where everything (including Active Directory!) runs on DNS seems particularly archaic. Are we doing something stupid trying to use MSMQ between sites like this? This is the first big project where we've tried this kind of distributed architecture, and the deployment/configuration is starting to hurt so much I'm convinced we've taken a wrong turn somewhere... a little perspective or advice would be gratefully received!
If you're looking to build a geographically distributed system, where you can't arrange a VPN between these sites, you should be using the gateway capabilities of NServiceBus to communicate over alternate transports (like HTTP) between those sites.
RPC is required for reading from remote queues.
If you push to remote queues and pull from local queues, you won't be using RPC.

How to send a message from Server A to Server B using MSMQ?

How do I set up a message queue that automatically sends all its messages to another server?
I'm working on a proof of concept for a system that needs to run on multiple servers, writing to local message queues, then have a central service on another server running that reads its local queue to pick up all the messages from the other servers.
From what I've read I believe this is possible, but I'm not seeing how to set it up...
Thanks
When your application sends a message to a remote computer, the MSMQ service actually writes the message to a local queue (a temporary outgoing queue) first, then forwards it. So in practice the behaviour of MSMQ is exactly what you want. Can you elaborate more about your scenario?
Update to comment: there is one problem. You can't create a remote queue, so the destination queue has to be created on the receiving machine itself.
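For illustration, the usual consequence is that senders address the remote queue with a direct format name (machine and queue names here are placeholders) while the queue itself is created on the receiving machine:

FormatName:DIRECT=OS:ServerB\private$\CentralQueue

MSMQ stores the message in a local outgoing queue and forwards it whenever ServerB is reachable, which is the store-and-forward behaviour described above.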
