Is there any way to make non-blocking SOAP requests within EventMachine?
I'm creating a Ruby application which interacts with the Google AdWords API (which is SOAP-based), using the adwords4r gem. The application uses EM to receive messages over a STOMP connection, and then processes those messages by making SOAP calls to the AdWords API. Obviously I need those calls to be non-blocking, since the processing happens on the reactor thread. One option would be to use EM.defer, but I'd rather not incur the overhead of a bunch of threads in a threadpool.
HandSoap can use EventMachine.
After earning a tumbleweed badge with this question, I ended up asking on the #eventmachine IRC channel. Apparently there are no EventMachine-friendly options for making SOAP calls, besides using EM.defer.
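For what it's worth, EM.defer is workable when the thread overhead is acceptable: it runs the blocking SOAP call on a threadpool thread and schedules the callback back on the reactor, so the callback can safely touch reactor state. Below is a stdlib-only sketch of that hand-off pattern (all names hypothetical; EventMachine's real threadpool defaults, if I recall correctly, to 20 threads):

```ruby
require "thread"

# Sketch of the EM.defer hand-off: run a blocking operation on a worker
# thread, then deliver its result back to the "reactor" via a queue.
class DeferSketch
  def initialize
    @completed = Queue.new   # results waiting for the reactor thread
  end

  # op runs on a worker thread; callback runs later on the reactor thread
  def defer(op, callback)
    Thread.new { @completed << [callback, op.call] }
  end

  # The reactor drains this between I/O events (EM does this internally).
  def drain
    callback, result = @completed.pop
    callback.call(result)
  end
end

reactor = DeferSketch.new
reactor.defer(-> { "SOAP response" },           # stands in for the adwords4r call
              ->(res) { puts "got: #{res}" })
reactor.drain                                   # prints "got: SOAP response"
```

In real EM code you would simply call EM.defer(op, callback) inside the reactor loop; the sketch just illustrates why the result lands back on the reactor thread.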
I am wondering how Quarkus handles simultaneous requests to, for example, a REST API using json-rest.
Example:
REST API is called by lots of clients simultaneously
REST API call calls other APIs
REST API processes the response of the other called APIs and returns the processed response
Questions:
Are the requests queued and processed one by one?
Are requests rejected if the API is busy?
Is handling parallelism offloaded to the infrastructure using tools like Istio?
Can someone please point me to some documentation about that or give an explanation? Thank you.
Quarkus uses Vert.x under the hood, which implements an event loop. This means it can handle thousands of concurrent requests, because its event-loop threads are never blocked.
You can read more about it in the Vert.x documentation: https://vertx.io/docs/vertx-core/java/
We are designing an API for an application where the clients (external) can
interact with it synchronously to say:
a) request a plan
b) cancel a plan etc
However, once the plan is made, the decision as to whether the plan is
approved or rejected happens asynchronously. The application itself
can also send other notifications to the clients asynchronously. This part
has been implemented using Spring's STOMP-over-WebSocket framework, and it
works perfectly fine.
Now, coming to the synchronous part of the API, the plan is to provide
a RESTful interface for that interaction. If it is done this way, the
clients will have to build two different client APIs: one using HTTP
for making RESTful calls, and another using a STOMP client to consume notifications.
Should we rather make it accessible via one interface?
I am not convinced we should use STOMP for the synchronous calls, since I think a REST framework
addresses that use case well. However, I am concerned about requiring clients to do both, even though it is for different functionality.
Will it be okay to support both? Is this good design practice? Can someone please advise?
HTTP-based clients could a) send requests ("simple" polling) at long intervals to limit bandwidth usage, or b) use HTTP long polling (blocking), which returns control to the client code as soon as the server sends a response.
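Option b) can be as simple as an ordinary HTTP GET with a generous read timeout, re-issued as soon as a response (or timeout) arrives. A minimal Ruby sketch with stdlib Net::HTTP (the endpoint URL is hypothetical):

```ruby
require "net/http"
require "uri"

uri  = URI("http://example.com/notifications")  # hypothetical endpoint
http = Net::HTTP.new(uri.host, uri.port)
http.read_timeout = 300  # hold the connection open for up to 5 minutes

# The polling loop itself (commented out so the sketch runs offline):
# loop do
#   response = http.get(uri.request_uri)  # blocks until the server replies
#   handle(response)                      # then immediately re-poll
# rescue Net::ReadTimeout
#   # no news within the window; just reconnect
# end
```

The server completes the pending request only when it actually has a notification, which is what makes delivery feel immediate to the client.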
Currently I am using Jersey 1.0 and am about to switch to 2.0. For REST requests that may last over a second or two, I use the following pattern:
Client calls GET or PUT
Server returns a polling URL to the client
The client polls the URL until it gets a redirect to the completed resource
Pretty standard and straightforward. However, I noticed that Jersey 2.0 has an AsyncResponse capability. But it looks like this involves no changes on the wire. In other words, the client still blocks waiting for the result while the server processes the request asynchronously.
So what good is this? Should I be using it instead of my current asynchronous approach for calls >1 second? Or is it really just to keep the connections freed on the server for calls that would be only a few hundred milliseconds?
I want my server to be as scalable as possible but the approach I use now can be tedious for the client. AsyncResponse seems super simple but I'm not sure how it would work for something like a heroku service where you want very short connection times.
AsyncResponse presumably gives you more scalability within the app server for standard requests, in terms of thread-pool resources, but I don't think it changes anything about the client experience, which will continue to block on a read on its connection. Therefore, if you have already implemented a polling solution on your client side, this won't add much value for you, IMHO.
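For comparison, the client side of the redirect-polling pattern described in the question is only a few lines. Here is a Ruby sketch (the fetch lambda is injectable purely so the loop can be exercised without a live server; the URLs are hypothetical):

```ruby
require "net/http"
require "uri"

# Poll a status URL until the server redirects us to the completed resource,
# then return the redirect target.
def poll_until_redirect(poll_url, interval: 1,
                        fetch: ->(u) { Net::HTTP.get_response(URI(u)) })
  loop do
    res = fetch.call(poll_url)
    return res["Location"] if res.is_a?(Net::HTTPRedirection)  # done
    sleep interval                                             # still processing
  end
end
```

On the wire this is exactly the 202-then-303 flow from the question; AsyncResponse, by contrast, changes only how the server-side thread is parked.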
So our team has recently integrated TorqueBox into our JRuby on Rails applications. The purpose of this was to be able to receive queue/topic messages from an outside source which is streaming live data.
We have setup our queues/topics and they are receiving the messages without an issue. The next step we want to take is to get these messages on the browser.
So we started to look into leveraging STOMP. But we have come across some issues. From the documentation, the purpose of using STOMP + WebSockets seems to be to receive messages from the client side and push those messages to other clients. But we want to receive messages on our queues, and then push those messages to the client side using WebSockets. Is this possible? Or would we have to adopt a different technology, such as Pusher or socket.io, to get the queue/topic messages to the browser?
Thanks.
I think Stomplets are a good solution for this task. In the Rails application you should use a Ruby-based STOMP client, and in the browser a JavaScript-based STOMP client. From Rails you just send the data, and in the browser you just receive it.
You can find more detail on how to do this in the TorqueBox documentation:
http://torquebox.org/documentation/2.0.0/stomp.html
It is indeed possible to push messages straight from the server to clients. It took me quite a bit of digging to find, as it is not listed in the documentation directly. Their blog shows it in an example of how to build a chat client using WebSockets.
http://torquebox.org/news/2011/08/23/stomp-chat-demo-part3/
Basically, you use the inject method to choose which channel you're publishing to, and then call the publish method on the returned object to actually send the message. This code excerpt from the article should point you in the right direction.
inject( '/topics/chat' ).publish( message,
  :properties => {
    :recipient => username,
    :sender    => 'system'
  } )
It looks like :properties is the same thing as message headers. I'll give this a go over the next couple of days to see how well it works in Rails.
I am working with XMPP, and I have a message callback which is activated whenever a message is sent. My aim is to send the data arriving in the message to an API within the callback, and, based on the response, send something back using the XMPP client.
User types a message (browser chat client)
Message arrives at the server via XMPP
Message is sent to the API
Response is received
Response is sent back to the chat client
My code for this is as follows
admin_muc_client.activate_message_callbacks do |m|
  sender    = m.from.resource
  room_slug = m.from.node
  message   = m.body
  # interpolate and URL-escape the message text (the original sent the
  # literal string "message")
  r = HTTParty.get("http://example.com/api/v1/query?msg=#{CGI.escape(message)}")
  Rails.logger.debug(r.inspect)
  admin_muc_client.send_message("Message #{r.body}") if sender != 'admin'
end
My concern here is that, since the callback is evented and the request to the API is a blocking call, this could become a bottleneck for the entire application.
I would prefer something like AJAX in JavaScript, which fires the request and hands the data to a callback when the response is available. How could I implement that in Ruby?
I have looked at delayed_job and backgroundrb, which look like tools for fire-and-forget operations. I need something that activates a callback asynchronously with the response.
I would really appreciate some help on how to achieve the asynchronous behavior that I want. I am also familiar with message queues like RabbitMQ, which I feel would add significant complexity.
Did you look at girl_friday? From its wiki:
girl_friday is a Ruby library for performing asynchronous tasks. Often times you don't want to block a web response by performing some task, like sending an email, so you can just use this gem to perform it in the background. It works with any Ruby application, including Rails 3 applications.
Why not use any of the zillions of other async solutions (Resque, dj, etc)? Because girl_friday is easier and more efficient than those solutions: girl_friday runs in your Rails process and uses the actor pattern for safe concurrency. Because it runs in the same process, you don't have to monitor a separate set of processes, deploy a separate codebase, waste hundreds of extra MB in RAM for those processes, etc. See my intro to Actors in Ruby for more detail.
You do need to write thread-safe code. This is not hard to do: the actor pattern means that you get a message and process that message. There is no shared data which requires locks and could lead to deadlock in your application code. Because girl_friday does use Threads under the covers, you need to ensure that your Ruby environment can execute Threads efficiently. JRuby, Rubinius 1.2 and Ruby 1.9.2 should be sufficient for most applications. I do not support Ruby 1.8 because of its poor threading support.
I think this is what you are looking for.
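To make the shape concrete, here is a stdlib-only sketch of the pattern girl_friday implements: a named in-process queue whose worker thread runs your block for each pushed message. The real gem gives you the same shape (via GirlFriday::WorkQueue) plus sized worker pools and error handling; the msg keys and commented API call below are hypothetical stand-ins for the XMPP handler from the question:

```ruby
require "thread"

# Stdlib sketch of the girl_friday idea: push work onto an in-process
# queue; a dedicated worker thread runs the handler block per message.
class MiniWorkQueue
  def initialize(&handler)
    @queue  = Queue.new
    @worker = Thread.new do
      while (msg = @queue.pop) != :shutdown
        handler.call(msg)
      end
    end
  end

  def push(msg)
    @queue << msg
  end

  def shutdown
    @queue << :shutdown
    @worker.join
  end
end

# In the XMPP callback you would push instead of blocking; the slow
# HTTParty call then runs on the worker, not in the evented callback:
api_queue = MiniWorkQueue.new do |msg|
  # r = HTTParty.get("http://example.com/api/v1/query?msg=#{msg[:text]}")
  # msg[:reply].call("Message #{r.body}")
end
api_queue.push(text: "hello", reply: ->(body) {})
api_queue.shutdown
```

Because the queue lives in the same process, there is no separate daemon to deploy or monitor, which is exactly the trade-off the wiki excerpt describes.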