What is a good work queue for cross-platform usage? - Resque

Scenario:
In a web application some parts are realized in PHP and others in node.js. Communication between PHP and node.js should happen via an asynchronous queue/worker system.
In the PHP part of the application API requests should be queued. In the node.js part queued API requests should be processed (worker). Results should be saved back to the queue. Later the results should be retrieved using PHP. The queue should support retry strategies and support notification (to the client) on completed requests.
Question:
I do not want to implement the queue on my own. The work queue itself should not run in PHP, because I do not want long-running PHP processes.
I found the work queues
beanstalkd
resque
celery
rabbitmq
Are they suitable for this scenario? Resque looks great, but can a PHP client work together with a Ruby queue? Does anybody have experience with something similar? Can workers write results back to the queue? Can clients be notified of results?

After doing a lot of research I am using RabbitMQ.
There are "official" client libraries for multiple platforms out there, so subsystems running on different platforms can work together quite simply.
There are PHP forks of Resque out there, but I like the RabbitMQ way: one message broker, good documentation, "official" client libraries.

Related

How can I run a WebSocket in Apache Flink (serverless, Java)?

I have a Java program to run in Apache Flink on AWS. I want real-time communication through a WebSocket. How can I integrate a serverless WebSocket in Apache Flink Java?
Thank you.
Flink is designed to help you process and move data continuously between storage or streaming solutions. It is not intended to, and would not work well with websockets directly for these reasons:
When submitting a job, the runtime serializes your logic and moves it to other TaskManager instances so that it can parallelize them. These can be on another machine entirely. Now, if you were intending to service a websocket with that code, it has just moved elsewhere!
TaskManagers can be stopped and restarted (scaling event, recovering from a checkpoint/savepoint, etc). That's where your websocket connection will be cut.
Also, the Flink planner can decide that your source functions need to be read twice if it helps the processing. This means your WebSockets would need to maintain a history of messages received, and make sure each message is sent once to each operator instance.
This being said, you can have a webserver managing the WebSocket, piping messages back and forth to a Kafka topic, which Flink can then operate on.
Since you're talking about AWS, I suggest you learn about their Websocket API Gateway service. I believe these can be connected easily with Kinesis, which Flink can read from and write to easily.

Golang background processing

How can one do background processing/queueing in Go?
For instance, a user signs up and you send them a confirmation email. You want to send the confirmation email in the background because it may be slow, the mail server may be down, etc.
In Ruby a very nice solution is DelayedJob, which queues your job to a relational database (i.e. simple and reliable), and then uses background workers to run the tasks, and retries if the job fails.
I am looking for a simple and reliable solution, not something low level if possible.
While you could just open a goroutine and do every async task you want, this is not a great solution if you want reliability, i.e. the promise that if you trigger a task it will get done.
If you really need this to be production grade, opt for a distributed work queue. I don't know of any such queues specific to Go, but you can work with RabbitMQ, Beanstalk, Redis, or similar queuing engines to offload such tasks from your process and add fault tolerance and queue persistence.
A simple goroutine can do the job:
http://golang.org/doc/effective_go.html#goroutines
Open a goroutine with the email delivery and then answer the HTTP request (or whatever) without waiting for it.
If you want to use a work queue you can use a RabbitMQ or Beanstalkd client like:
https://github.com/streadway/amqp
https://github.com/kr/beanstalk
Or maybe you can create a queue in your process with a FIFO queue running in a goroutine:
https://github.com/iNamik/go_container
But maybe the best solution is this job queue library, which lets you set the concurrency limit, etc.:
https://github.com/otium/queue
import "github.com/otium/queue"

q := queue.NewQueue(func(email string) {
    // your mail delivery code
}, 20)
q.Push("foo@bar.com")
I have created a library for running asynchronous tasks using a message queue (currently RabbitMQ and Memcache are supported brokers but other brokers like Redis or Cassandra could easily be added).
You can take a look. It might be good enough for your use case (and it also supports chaining and workflows).
https://github.com/RichardKnop/machinery
It is an early stage project though.
You can also use the goworker library to schedule jobs.
http://www.goworker.org/
If you are coming from a Ruby background and looking for something like Sidekiq, Resque, or DelayedJob, please check out the asynq library.
Queue semantics are very similar to Sidekiq's.
https://github.com/hibiken/asynq
If you want a library with a very simple yet robust, Go-like interface that uses Redis as the backend and RabbitMQ as the message broker, you can try
https://github.com/Joker666/cogman

Using Torquebox to send messages to the browser

So our team has recently adopted TorqueBox in our JRuby on Rails applications. The purpose of this was to be able to receive queue/topic messages from an outside source which is streaming live data.
We have setup our queues/topics and they are receiving the messages without an issue. The next step we want to take is to get these messages on the browser.
So we started looking into leveraging the power of STOMP. But we have come across some issues with this. It seems from the documentation that the purpose of using STOMP + WebSockets is to receive messages from the client side and push those messages to other clients. But we want to receive messages on our queues and then push those messages to the client side using WebSockets. Is this possible? Or would we have to bring in a different technology such as Pusher or socket.io to get the queue/topic messages to the browser?
Thanks.
I think Stomplets are a good solution for this task. In the Rails application you should use a Ruby-based STOMP client, and in the browser a JavaScript-based STOMP client. In Rails just send data, and in the browser just receive it.
You can find more detail on how to do this in the TorqueBox documentation:
http://torquebox.org/documentation/2.0.0/stomp.html
It is indeed possible to push messages straight from the server to clients. It took me quite a bit of digging to find, as it is not listed in the documentation directly. Their blog shows it in their example of how to build a chat client using WebSockets.
http://torquebox.org/news/2011/08/23/stomp-chat-demo-part3/
Basically you use the inject method to choose which channel you're publishing to, and then use the publish method on the returned object to actually send the message. This code excerpt from the article should get you pointed in the right direction.
inject( '/topics/chat' ).publish( message,
  :properties => {
    :recipient => username,
    :sender    => 'system'
  } )
It looks like :properties is the same thing as message headers. I'll be giving this a go over the next couple of days to see how well this works in Rails.

Solution/Architecture: queues or something else?

I have multiple frontends to my service written in Node.js and workers written in Ruby. Now the question is how to make them communicate. I need to maintain a dynamic pool of workers to handle load (spawn more workers when load rises), and messages are quite big, ~2-3 MB, because I'm sending user-uploaded images to the workers through the Node.js frontends. Because I want nice scaling I thought about some queuing solution, but I didn't find any existing solution (or I misunderstood the guides) that provides:
Fallback mechanisms. Solutions I've found so far have a single point of failure - the message broker - and no way to provide fallbacks.
Serialization, so that when the broker fails tasks are not lost.
Ability to pass big messages.
Easy API for Ruby and Node.js
Some API to track queue size so I could rearrange workers pool.
Preferably lightweight.
Maybe my approach is wrong? Maybe I shouldn't use queues but some other way? Or there's some queueing solution that fits requirements above?
No doubt you require a queue to scale, and you can monitor this queue to spawn workers.
Apache ActiveMQ is very robust and supports a REST protocol. A Ruby client is also available to access the queue.
Interesting article on RESTful queue using Apache ActiveMQ
At the end of the day I took the ZeroMQ queue solution. Very fast, robust, and lightweight implementation. I had to write my own broker, but that's the only con of this solution.
Redis publish/subscribe should do the trick:
http://redis.io/topics/pubsub

Sinatra message Queue

Starling is a great (at least for small projects) and simple message queue; however, it doesn't actually manage or start the workers that consume the queues. Workling does this for Rails projects, but doesn't work for pure Ruby applications, nor for Sinatra.
Before I fork workling, or create my own custom one with threads/fork, is there another project that does it?
Look at Resque. It is framework-agnostic and contains rake tasks to start an arbitrary number of workers to consume your queues. It uses Redis lists for the queue backend, so you will need to install and manage Redis.
