messaging between clients without polling - client-server

I have a network of MS SQL servers connected to each other, with C++/C# clients connected to them, and I'm designing a way of messaging between clients, as well as server-client messaging.
I've already read about MS SQL Service Broker and other brokers like Apache Qpid,
but I still can't figure out how this would work. I would be thankful if someone could point me to better sources, or if someone who has already worked on such a problem could share their experience.
How could I make sending and receiving messages between clients possible without polling?
And please note: this is not school homework or a university course project.
I would really appreciate any helpful comment or advice.
Thanks!

MSDN has quite a bit of technical information regarding SQL Service Broker. It is fairly high-level, but if you dig / read enough you will be a pro in no time.
http://msdn.microsoft.com/en-us/library/ms166043%28SQL.90%29.aspx
There are also a bunch of useful code samples floating around on the internet that should get you up and running so you can start experimenting.
http://blogs.msdn.com/b/sql_service_broker/
http://www.mssqltips.com/tip.asp?tip=1836
Best of Luck!

Why not poll? It's easy, the "dumbest thing that could possibly work".
I suggest you consider polling unless you have established what the problem with polling is.
Considerations:
Timeliness. How quickly must the message be received?
Frequency. How many messages are sent to each client per hour? Per day?
Plus, if your application has a connection heartbeat anyway, you could have it report whether there are any new messages and kill two birds with one stone.
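The heartbeat-piggybacking idea can be sketched roughly like this (a Python sketch; the `client` object and all its method names are my own invention, standing in for whatever your protocol provides):

```python
import time

def poll_with_heartbeat(client, interval_seconds=30):
    """Piggyback message polling on the connection heartbeat.

    `client` is assumed to expose send_heartbeat() -> int (count of
    pending messages), fetch_messages() -> list, and handle(msg);
    all hypothetical stand-ins for your actual protocol.
    """
    while client.connected:
        pending = client.send_heartbeat()  # keep-alive doubles as a poll
        if pending:
            for message in client.fetch_messages():
                client.handle(message)
        time.sleep(interval_seconds)
```

The point is that no extra round-trips are added: the heartbeat you already pay for carries the "anything new?" answer.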

If you are afraid of doing a blocking receive loop like this:
// Pseudocode (Qpid-style API): block on fetch with a timeout,
// looping until the consumer is stopped
while (!stopped) {
    try {
        message = receiver.fetch(timeout);
        // process the message here
    } catch (TimeoutException e) {
        // nothing arrived within the timeout; loop and try again
    }
}
you could always prefetch. Set the prefetch capacity:
receiver.setCapacity(100);
and then use the available-messages functionality. In all reality, though, that sounds like polling in a backwards way ;)

Related

Simple Server to PUSH lots of data to Browser?

I'm building a web application that consumes data pushed from a server.
Each message is JSON and could be large (hundreds of kilobytes); messages are sent a couple of times per minute, and the order doesn't matter.
The server should be able to persist not-yet-delivered messages, potentially storing a couple of megabytes per client for a couple of days, until the client comes online. There's a limit on the storage size for unsent messages, say 20 MB per client, and old undelivered messages get deleted when this limit is exceeded.
The server should be able to handle around a thousand simultaneous connections. How could this be implemented simply?
Possible Solutions
I was thinking maybe I could store messages as files on disk and have the browser poll every second for new messages, serving them with Nginx or something like that? Are there configs / modules for Nginx for such use cases?
Or maybe it's better to use an MQTT server, or some message queue like RabbitMQ with a browser adapter?
Actually, MQTT supports the concept of sessions that persist across client connections, but the client must first connect and request a "non-clean" session. After that, if the client is disconnected, the broker will hold all the QoS=1 or 2 messages destined for that client until it reconnects.
With MQTT v3.x, technically, the server is supposed to hold all the messages for all these disconnected clients forever! Each message maxes out at a 256 MB payload, but the server is supposed to hold everything you give it. This created a big problem for servers, which MQTT v5 came along to fix, and most real-world brokers have configurable settings around this.
But MQTT shines if the connections are over unreliable networks (wireless, cell modems, etc) that may drop and reconnect unexpectedly.
If the clients are connected over fairly reliable networks, AMQP with RabbitMQ is considerably more flexible, since clients can create and manage the individual queues. But the neat thing is that you can mix the two protocols using RabbitMQ, as it has an MQTT plugin. So, smaller clients on an unreliable network can connect via MQTT, and other clients can connect via AMQP, and they can all communicate with each other.
MQTT is most likely not what you are looking for. The protocol is meant to be lightweight, and as the comments pointed out, the specification allows only "Control Packets of size up to 268,435,455 (256 MB)" (source). Clearly, this is much too small for your use case.
Moreover, if a client isn't connected (and subscribed to that particular topic) at the time a message is published, the message will never be delivered. EDIT: As @Brits pointed out, this only applies to QoS 0 publications/subscriptions.
Like JD Allen mentioned, you need a queuing service like Rabbit MQ or AMQ. There are countless other such services/libraries/packages in existence so please investigate more.
If you want to roll your own, it might be worth considering using AWS SQS and wrapping some of your own application logic around it. That'll likely be a bit hacky, though, so take the suggestion with a grain of salt.
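Whichever broker or home-grown approach you pick, the retention rule from the question (roughly 20 MB per client, oldest undelivered messages evicted first, delivered on reconnect) is simple enough to sketch on its own. A stdlib-only illustration, with names of my own choosing:

```python
from collections import deque

class ClientOutbox:
    """Holds undelivered messages for one client, evicting the oldest
    once a byte cap is exceeded. A broker-agnostic sketch of the
    retention policy described in the question (~20 MB per client).
    """
    def __init__(self, max_bytes=20 * 1024 * 1024):
        self.max_bytes = max_bytes
        self.messages = deque()
        self.total_bytes = 0

    def enqueue(self, payload: bytes):
        self.messages.append(payload)
        self.total_bytes += len(payload)
        # Drop oldest messages until we are back under the cap.
        while self.total_bytes > self.max_bytes:
            dropped = self.messages.popleft()
            self.total_bytes -= len(dropped)

    def drain(self):
        """Yield everything once the client reconnects."""
        while self.messages:
            payload = self.messages.popleft()
            self.total_bytes -= len(payload)
            yield payload
```

In a real deployment the deque would be backed by disk or the broker's own persistence, but the eviction logic stays the same shape.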

Web server and ZeroMQ patterns

I am running an Apache server that receives HTTP requests and connects to a daemon script over ZeroMQ. The script implements the Multithreaded Server pattern (http://zguide.zeromq.org/page:all#header-73), it successfully receives the request and dispatches it to one of its worker threads, performs the action, responds back to the server, and the server responds back to the client. Everything is done synchronously as the client needs to receive a success or failure response to its request.
As the number of users is growing into a few thousands, I am looking into potentially improving this. The first thing I looked at is the different ZeroMQ patterns, and whether what I am using is optimal for my scenario. I've read the guide, but I find it challenging to understand all the details and differences across patterns. I was looking, for example, at the Load Balancing Message Broker pattern (http://zguide.zeromq.org/page:all#header-73). It seems quite a bit more complicated to implement than what I am currently using, and if I understand things correctly, its advantages are:
Actual load balancing vs the round-robin task distribution that I currently have
Asynchronous requests/replies
Is that everything? Am I missing something? Given the description of my problem and its synchronous requirement, what would you say is the best pattern to use? Lastly, how would the answer change if I wanted to make my setup distributed (i.e., having the Apache server load-balance the requests across different machines)? I was thinking of doing that by simply creating yet another layer, based on the Multithreaded Server pattern, and having that layer bridge the communication between the web server and my workers.
Some thoughts on the subject...
Keep it simple
I would try to keep things simple and stick with "plain" ZeroMQ as long as possible. To increase performance, I would simply change your backend script to send requests out on a DEALER socket and move the request-handling code into its own program. Then you could run multiple worker servers on different machines to get more requests handled.
I assume this is the approach you were describing:
I was thinking of doing that by simply creating yet another layer, based on the Multithreaded Server pattern, and have that layer bridge the communication between the web server and my workers.
The only problem here is that there is no request retry in the backend: if a worker fails to handle a given task, it is lost forever. However, you could write the worker servers so that they finish handling all the requests they have received before shutting down. With this kind of setup it is possible to update backend workers without clients noticing any outage. It will not save requests that are lost if a server crashes, though.
I have a feeling that in common scenarios this kind of approach would be more than enough.
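The missing retry can be sketched at the dispatch layer. A minimal, broker-free illustration (plain Python queues standing in for ZeroMQ sockets; `handle` and the retry cap are my own stand-ins, not part of any ZeroMQ API):

```python
import queue

def dispatch_with_retry(tasks, handle, max_attempts=3):
    """Re-queue a task when a worker raises, so a crash mid-task does
    not lose the request (up to max_attempts tries per task).

    `tasks` is a queue.Queue of (attempt, payload) pairs; `handle`
    stands in for whatever a worker does with one request.
    """
    results, dead = [], []
    while True:
        try:
            attempt, payload = tasks.get_nowait()
        except queue.Empty:
            return results, dead
        try:
            results.append(handle(payload))
        except Exception:
            if attempt + 1 < max_attempts:
                tasks.put((attempt + 1, payload))  # retry later
            else:
                dead.append(payload)  # give up: dead-letter it
```

With real ZeroMQ, the same idea usually means the dispatcher keeps each request until a worker acknowledges it, resending on timeout.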
Mongrel2
Mongrel2 seems to handle many of the things you have already implemented. It might be worthwhile to check it out. It probably will not completely solve your problem, but it provides tested infrastructure for distributing the workload, which could be used to deliver requests to multithreaded servers running on different machines.
Broker
One way to increase the robustness of the setup is a broker. In this scenario the broker's main role would be to provide robustness by implementing a queue for the requests. I understood that all the requests the workers handle are basically of the same type; if requests had different types, the broker could also do lookups to route each request to the correct server.
Using a queue provides a way to ensure that every request is handled by some worker even if worker servers crash. This does not come without a price: the broker is itself a single point of failure, and if it crashes or is restarted, all in-flight messages could be lost.
These problems can be avoided, but it requires quite a lot of work: requests could be persisted to disk, and the broker servers could be clustered. The need has to be weighed against the payoff: do you want to spend your time writing a message broker or the actual system?
If a message broker seems like a good idea, the time required to implement one can be reduced by using an already-implemented product (like RabbitMQ). The downside is that it may bring a lot of unwanted features, and adding new things is not as straightforward as with a self-made broker.
Writing your own broker can turn into reinventing the wheel. Many brokers provide similar things: security, logging, a management interface, and so on. It seems likely that these will eventually be needed in a home-made solution as well. But if not, a single home-made broker that does one thing and does it well can be a good choice.
Even if a broker product is chosen, I think it is a good idea to hide the broker behind a ZeroMQ proxy: a dedicated piece of code that sends/receives messages from the broker. Then no other part of the system has to know anything about the broker, and it can easily be replaced.
Using a broker is somewhat heavy on developer time: you either need time to implement the broker or time to get used to a product. I would avoid this route until it is clearly needed.
Some links
Comparison between broker and brokerless
RabbitMQ
Mongrel2

ajax based notification/messaging system

I have a question.
Facebook presumably uses Ajax to notify a user about a new message; is this correct?
If so, wouldn't this tax the database to incredible levels?
I mean, millions of online users requesting message status every second.
Or am I thinking about this the wrong way?
You are asking about a technique called polling, and you are correct that it has scalability issues; in general it's not a good idea.
[rant]I have no idea what Facebook actually does internally; I'm no fan of the site.[/rant]
There are better alternatives to polling. One technique is called long polling, and then there is server side push.
See
How do I implement basic "Long Polling"?
and
https://stackoverflow.com/questions/6883540/http-server-to-client-push-technologies-standards-libraries.
In long polling, the client sends a request but does not expect a response immediately; the response could come immediately, in a second, or in an hour. The challenge is for the server to manage the outstanding requests in a non-resource intensive way.
With server side push, the server maintains a connection with clients and can broadcast messages to its connections when events occur.
Which alternative to use depends a bit on your technology stack. For example, Node.js has a library called socket.io (server-side push over HTML5 WebSockets, with fallback transports), which I hear good things about.
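To make the long-polling idea concrete: the server-side challenge described above is essentially "park the request cheaply until a message arrives or a timeout fires." A minimal sketch using only the Python standard library (all names here are mine, not from any web framework):

```python
import threading
from collections import deque

class LongPollMailbox:
    """Blocks a long-poll request until a message arrives or the
    timeout elapses, instead of burning CPU in a poll loop."""
    def __init__(self):
        self._messages = deque()
        self._ready = threading.Condition()

    def publish(self, message):
        with self._ready:
            self._messages.append(message)
            self._ready.notify_all()  # wake any waiting poll request

    def wait_for_message(self, timeout):
        """Return the next message, or None if the poll times out;
        the caller (an HTTP handler) then tells the client to re-poll."""
        with self._ready:
            self._ready.wait_for(lambda: self._messages, timeout=timeout)
            return self._messages.popleft() if self._messages else None
```

A real long-polling server would hold one such mailbox per client and cap the wait (say, 30 seconds) so idle connections recycle; an async framework can do the same without a thread per connection.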

JMS design: topic and queue combination

I am relatively new to JMS and I have been reading a lot on it lately.
I am planning to design a web app which would do the following:
User logs into the system and publishes a message/question to a topic.
All the users who have subscribed to the topic read the message/question and reply to it.
The originator reviews all the answers and picks the best answer.
The originator now replies to only the user whose answer he/she picked and asks for further clarification.
The responder gets the message and replies.
So, once the originator has picked the answer, the JMS now becomes a request/reply design.
My questions are:
Is it possible to publish to a topic with setJMSReplyTo(tempQueue)?
Can request/reply approach be async?
Is it a good idea to have per user queue?
These questions might seem dumb to some of the experts here, but please bear in mind that I am still learning.
Thanks.
Is it possible to publish to a topic with setJMSReplyTo(tempQueue)?
You should be able to, but I'm not 100% sure about it. I searched my bookmarks and found this link that explains how to build a request/response system using JMS:
http://activemq.apache.org/how-should-i-implement-request-response-with-jms.html
Can request/reply approach be async?
A message listener is an object that acts as an asynchronous event handler for messages. So your request/reply approach, if built on JMS message listeners, is asynchronous by default.
http://docs.oracle.com/javaee/1.3/jms/tutorial/1_3_1-fcs/doc/prog_model.html#1023398
Is it a good idea to have per user queue?
I don't know how many users you expect to have, but having one queue per user is not a good way to handle the messages. I had a problem similar to yours: we used a single queue for each macro area and structured the message to carry information about the user who sent it, so we could store that information later and use it for further analysis.
JMSReplyTo is just a message header, nothing else, so it is possible to publish a message to a topic with a specific value in this header.
Sure! If you would like to build a scalable system, you should design an event-driven system using an asynchronous rather than a blocking approach. MessageListener can help you here.
That is specific to the JMS broker implementation. If queue creation is cheap, there is no problem with such a solution.
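To illustrate the JMSReplyTo/JMSCorrelationID pattern discussed above without a real broker, here is a sketch using in-process Python queues as stand-ins for JMS destinations (the names `responder` and `request_reply` are mine; a temporary queue plays the role of a JMS TemporaryQueue):

```python
import queue
import threading
import uuid

def responder(requests):
    """Worker that answers each request on its reply-to queue,
    echoing the correlation id (mirrors JMSCorrelationID)."""
    while True:
        msg = requests.get()
        if msg is None:  # sentinel: shut down
            return
        reply = {"correlation_id": msg["correlation_id"],
                 "body": msg["body"].upper()}
        msg["reply_to"].put(reply)

def request_reply(requests, body, timeout=5.0):
    """Publish with a temporary reply queue (the JMSReplyTo idea)
    and block until the correlated reply arrives."""
    reply_to = queue.Queue()           # stands in for a TemporaryQueue
    corr_id = str(uuid.uuid4())
    requests.put({"correlation_id": corr_id, "body": body,
                  "reply_to": reply_to})
    reply = reply_to.get(timeout=timeout)
    assert reply["correlation_id"] == corr_id  # match request to reply
    return reply["body"]
```

With a real JMS broker the mechanics are the same: the requester sets JMSReplyTo and JMSCorrelationID, and the responder copies the correlation ID into its reply so the requester (or its MessageListener, in the async case) can match replies to requests.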

HornetQ : Timeout durable subscriptions where client not connected for X amount of time

We have recently started using HornetQ on a project, and we use durable subscriptions to topics. However, we have no control over the subscribers. What we would like to do is time out the durable subscriptions if the client hasn't connected in the last 24 hours. There are a couple of reasons for wanting to do this; the main one is that the client ID could change, or the connecting client might just disappear.
Does anyone have any ideas?
Thanks in advance.
That seems a nice feature to have. You should open a feature request.
Meanwhile, you could use the management API to discover the queues you have and the number of consumers on them. At that point you would have to control such a thing outside of HornetQ.
We would accept contributions if someone implemented the feature, and we could collaborate on the process.