I have a webhook simulator I created for my QA team. To prevent the webhooks from arriving before the synchronous request completes, I moved the code block that sends the webhook into a forked process that delays sending for 60s.
I was curious whether it is possible to test the code in the forked process with RSpec.
# Code block
Process.detach(
  fork do
    sleep(60)
    hook_resp = Faraday.post(url, webhook_body, 'Content-Type' => 'application/json')
    logger.debug("Pushed to webhook listener, HTTP #{hook_resp.status} received")
  end
)
# RSpec test
it 'should send to receiver' do
  expect(Faraday).to receive(:post)
    .with('http://web.hook', anything, anything).and_return(double(status: 200))
  subject.call_method(params)
end
Obviously, I am getting a failing test. Is this testable at all? I do not really care about the response from the webhook receiver; it is just fire and forget.
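One possibility, sketched under the assumption that fork and sleep are invoked in the subject's own context (subject, call_method and params are the names from the test above; the fake pid is illustrative), is to stub the process handling so the block runs inline:
# A hedged sketch, not a verified solution:
it 'should send to receiver' do
  allow(Process).to receive(:detach)                          # nothing real to detach
  allow(subject).to receive(:sleep)                           # skip the 60s delay
  allow(subject).to receive(:fork) { |&blk| blk.call; 12345 } # run the block inline, return a fake pid
  # `logger` inside the block may also need stubbing, depending on the class.

  expect(Faraday).to receive(:post)
    .with('http://web.hook', anything, anything).and_return(double(status: 200))

  subject.call_method(params)
end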
In all EventMachine code that I've seen, the callbacks/errbacks were declared after the actual call of the method.
Here's a simple example:
about = EventMachine::HttpRequest.new('http://google.ca/search?q=eventmachine').get
about.callback { } # callback nesting, ad infinitum
about.errback { }  # error-handling code
Why are the callbacks and errbacks declared AFTER? Is it not possible that the EM::HttpRequest has already finished with some sort of success or error state? How does EM guarantee that the callbacks and errbacks are actually invoked?
The .get call only sets up the request; see the get request method in the EM::HttpRequest module.
EM::HttpRequest mixes in the EM::Deferrable module, which acts as a kind of switch.
Put these two together and you get the following behaviour: the request is first built and then waits until a response is received. So, on the first iteration of the EM.run do..end loop, the connection is set up and the callbacks are registered; when the response arrives (in a later iteration, whenever that happens), the deferred status is set to :succeeded or :failed via set_deferred_status and the corresponding callback or errback is executed.
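To illustrate that switch behaviour with a minimal, self-contained sketch (the Job class here is purely illustrative, not part of EM::HttpRequest): a callback attached after the deferred status has already been set still fires immediately.
require 'eventmachine'

class Job
  include EM::Deferrable   # gives the object a deferred status plus callback/errback
end

EM.run do
  job = Job.new
  job.set_deferred_status :succeeded, 'result'                  # finish first...
  job.callback { |res| puts "still called: #{res}"; EM.stop }   # ...attach later; fires immediately
end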
Take the following code....
http = EM::HttpRequest.new('http://google.com').get
http.callback {puts "it was a great call"}
http.errback { puts "it was a bad call" }
You might think that if the asynchronous request finishes before we can set the callback, it's possible that the callback will never be called.
The request is happening asynchronously, so we might think it's a possibility. But it's not. What if we put some really long-running code between issuing the request and actually setting up the callback? I'll demonstrate that the callbacks still work.
http = EM::HttpRequest.new('http://google.com').get
# Some really long running code
100000000000.times do
  # some really long running code
end
http.callback {puts "it was a great call"}
http.errback { puts "it was a bad call" }
In this case the request completes long before the long-running code finishes, yet the callback is still called. Why?
The reason is that HttpRequest is a Deferrable: it mixes in EM::Deferrable. Even though a Deferrable is something that runs asynchronously, Deferrables have a status, success or fail, and we still have a reference to that Deferrable in a variable called http.
When we call http.callback { puts "it was a great call" }, it immediately checks whether the status of the Deferrable, in this case http, is success. If it is, the callback is invoked immediately; otherwise it is stored so that the Deferrable invokes it whenever it finishes with a success status. It's that simple. As long as we have a reference to that Deferrable, we can set the callback at any time.
My guess was confirmed when I actually took a look at the source code for Deferrable.
http://eventmachine.rubyforge.org/EventMachine/Deferrable.html#callback-instance_method
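The logic there boils down to something like the following simplified paraphrase (not the verbatim EventMachine source):
def callback(&block)
  return unless block
  if @deferred_status == :succeeded
    block.call(*@deferred_args)   # already succeeded: invoke right away
  elsif @deferred_status != :failed
    @callbacks ||= []
    @callbacks.unshift(block)     # still pending: store until the status is set
  end
end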
So let's say I have a Sidekiq worker that sends off an HTTP POST request that I don't want to wait for. I don't want this to be a blocker on the speed of the workers.
One idea I have is to use this simple sample code for EventMachine::HttpRequest:
EventMachine.run do
  http = EventMachine::HttpRequest.new("http://www.example.com").post :options => {...}
  http.callback do
    puts "got a response"
    puts http.response
    EventMachine.stop
  end
  puts "worker finished"
end
So let's assume my worker process finishes before the callback is called. What will happen here? Does this mean the reference to the callback will fail? I'd like to understand the flow of control here.
Depending on what you need:
You want to utilize CPU
Sidekiq workers are very lightweight. You can run more of them to utilize the CPU while waiting for responses.
You want workers to finish faster
You can enqueue each request to be processed by a different worker; it will be like next_tick() in EM (sketched below).
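A rough sketch of that second option (the worker class name and the payload are placeholders, not taken from the question):
require 'sidekiq'
require 'net/http'

class WebhookPushWorker
  include Sidekiq::Worker

  def perform(url, payload)
    # The blocking HTTP call lives in its own lightweight job,
    # so the worker that enqueued it returns immediately.
    Net::HTTP.post(URI(url), payload, 'Content-Type' => 'application/json')
  end
end

# In the original worker, instead of calling the endpoint directly:
# WebhookPushWorker.perform_async("http://www.example.com", '{"message":"hello"}')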
I'm excited about Sidekiq and Celluloid because they change the way we think. http://www.slideshare.net/KyleDrake/hybrid-concurrency-patterns?utm_source=rubyweekly&utm_medium=email
The EventMachine.run block will not return until you call EventMachine.stop. So, in your case, the worker won't finish before the callback has run.
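A minimal sketch of that blocking behaviour, with a timer standing in for the HTTP request:
require 'eventmachine'

EM.run do
  EM.add_timer(1) do
    puts "callback ran"
    EM.stop                       # only now does EM.run return
  end
  puts "this prints first, but EM.run keeps blocking until EM.stop"
end
puts "reached only after EM.stop"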
I have a RabbitMQ queue full of requests, and I want to send the requests as HTTP GETs asynchronously, without waiting for each request's response. Now I'm confused about whether it is better to use threads or just EM. The way I'm using it at the moment is something like the following, but it would be great to know if there is a better implementation with better performance, since this is a very crucial part of the program:
AMQP.start(:host => "localhost") do |connection|
  queue = MQ.queue("some_queue")
  queue.subscribe do |body|
    EventMachine::HttpRequest.new('http://localhost:9292/faye').post :body => {:message => body.to_json }
  end
end
With the code above, will the system wait for each request to finish before starting the next one? If there are any tips here, I would highly appreciate them.
HTTP is synchronous, so you have to wait for the replies. If you want to simulate an async environment, you could have a thread pool and pass each request to a thread, which waits for the reply and then goes back into the pool until the next request. You would either send the thread a callback function to use when the reply arrives, or immediately return a future reply object, which lets you put off waiting for the reply until you actually need the reply data.
The other way is to have a pool of processes, each of which is processing a request, waiting for the reply, and so on.
In both cases, you have to have a pool that is big enough, or else you will still end up waiting some of the time.
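A minimal sketch of the thread-pool idea (the pool size is arbitrary, the URL is the one from the question, and the message formatting is simplified):
require 'net/http'

POOL_SIZE = 10
jobs = Queue.new

workers = Array.new(POOL_SIZE) do
  Thread.new do
    while (body = jobs.pop)   # a nil signals shutdown
      # Each thread blocks on its own reply, so the producer never waits.
      Net::HTTP.post(URI('http://localhost:9292/faye'), body,
                     'Content-Type' => 'application/json')
    end
  end
end

# Producer side (e.g. the AMQP subscribe block) only enqueues:
# queue.subscribe { |body| jobs << body }

# Shutdown: POOL_SIZE.times { jobs << nil }; workers.each(&:join)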
I am trying to implement Superfeedr subscriptions using PubSubHubbub and Ruby on Rails. The problem is, the subscriptions are never confirmed, even though my callback prints out the hub.challenge string, which it successfully receives.
def push
  feed = Feed.find(params[:id])
  if feed.present?
    if params['hub.mode'].present? and params['hub.verify_token'] == feed.secret
      feed.update_attribute(:is_active, (params['hub.mode'] == 'subscribe'))
      render text: params['hub.challenge']
      return
    elsif params['hub.secret'] == feed.secret
      parse(feed, request.raw_post)
    end
  end
  render nothing: true
end
It sets feed.is_active = true, but Superfeedr Analytics shows no sign of a subscription.
I am hosting on a single Heroku dyno and using the async verification method.
The first thing you should check is the HTTP status code and the response BODY of your subscription request. I expect the code to be 422, indicating that the subscription failed, but the body will help us know exactly what is going on.
Also, do you see the verification request in the logs?
A common issue with Heroku is that if you use hub.verify=sync, you will need 2 dynos, because you have two concurrent requests in that case...
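To check that status code and body, here is a hedged sketch of logging the subscription request itself (login, token, feed URL and callback URL are placeholders; the endpoint shown is Superfeedr's public hub):
require 'net/http'

uri = URI('https://push.superfeedr.com/')
req = Net::HTTP::Post.new(uri)
req.basic_auth('your_login', 'your_token')
req.set_form_data(
  'hub.mode'         => 'subscribe',
  'hub.topic'        => 'http://example.com/feed.xml',
  'hub.callback'     => 'https://yourapp.example.com/feeds/1/push',
  'hub.verify'       => 'async',
  'hub.verify_token' => 'feed-secret'
)

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
# Per the answer above, a 422 here means the subscription was rejected;
# the body should say why.
puts "status=#{res.code} body=#{res.body.inspect}"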
I am using the Typhoeus gem. The official documentation refers to Memoization:
Memoization: Hydra memoizes requests within a single run call. You
can also disable memoization.
hydra = Typhoeus::Hydra.new
2.times do
  r = Typhoeus::Request.new("http://localhost:3000/users/1")
  hydra.queue r
end
hydra.run # this will result in a single request being issued. However, the on_complete handlers of both will be called.

hydra.disable_memoization
2.times do
  r = Typhoeus::Request.new("http://localhost:3000/users/1")
  hydra.queue r
end
hydra.run # this will result in two requests.
How do I write code to send and run a request multiple times but stop on the first successful response? Also, I would like to skip the current request if it has timed out.
Take a look at Typhoeus's times.rb example.
Don't submit multiple requests for the same URL to the Hydra queue; only queue one per URL.
Inside the on_complete block you have access to the response object. The response object has a timed_out? method which checks whether the request timed out. If it did, resubmit the request to Hydra and then exit the block; otherwise, process the content as normal.
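Putting that together, a hedged sketch of the pattern (the URL is illustrative and no retry cap is shown):
require 'typhoeus'

hydra = Typhoeus::Hydra.new

enqueue = lambda do
  request = Typhoeus::Request.new("http://localhost:3000/users/1")
  request.on_complete do |response|
    if response.timed_out?
      enqueue.call                 # timed out: skip this one and queue a fresh attempt
    elsif response.success?
      puts "first successful response, stopping here"
      # Nothing new is queued, so hydra.run drains and returns.
    end
  end
  hydra.queue(request)
end

enqueue.call
hydra.run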