I'm setting up Backburner as a work queue, and my job items need to return JSON for the resulting data they create. I'm not sure how to structure this. As a test I've tried doing:
class PrintJob
  include Backburner::Performable

  def self.print(text)
    puts text
    return "results"
  end
end

Backburner.configure do |config|
  config.beanstalk_url = ["beanstalk://127.0.0.1"]
  # etc
end
val = PrintJob.async.print('some cool text')
puts val
and running Backburner.work inside IRB. The puts works but the return value comes back as true instead of "results".
Is there a way to get return values out of async methods? Or should I try a different approach, e.g. having one queue for jobs and another for results? If so, how can I associate the result 'job' with the original work it belongs to?
Note: I'm eventually using Sinatra and not Rails.
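For reference, the async call can only tell you that the job was enqueued (hence the true); the method body runs later in a separate worker process, so its return value never reaches the caller. One way to get results back is the two-queue/shared-store idea above: generate an id before enqueueing, have the worker write its result under that id, and read it back later. A minimal sketch, assuming a reachable Redis server (the redis gem usage and key names here are illustrative, not part of Backburner):

require 'json'
require 'securerandom'
require 'redis'
require 'backburner'

class PrintJob
  include Backburner::Performable

  def self.print(text, job_id)
    puts text
    # The worker process stores its result under the caller-chosen id.
    Redis.new.set("results:#{job_id}", { data: "results" }.to_json)
  end
end

job_id = SecureRandom.uuid
PrintJob.async.print('some cool text', job_id)

# Later (e.g. in a Sinatra route), poll for the result:
result = Redis.new.get("results:#{job_id}")  # nil until the worker finishes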
Related
I have a controller action somewhat similar to this:

def reports
  puts params
  @stats = Client.stats(params)
  puts params
end
The initial params might look like this:
{ end: '2012-01-01 21:00:19' }
And in my Client model, I have this:
def self.stats(opts)
  opts[:start] = (Time.now - 30.days).to_i
  # ... do some calculations ...
  return stats
end
If I inspect the params object that was sent before and after the function runs, I can see it's been modified by the self.stats method.
In the example above, I'm not sending 'start' in the initial params; the method adds it for the calculations, as expected.
What I was not expecting was that the function would modify the original hash!
Can someone explain why this is happening?
--EDIT--
I forgot to say I tried to create a copy of the params and use that instead, same issue.
def reports
  a = params
  @stats = Client.stats(a)
  puts params
end
The params are still updated?!
That's because your method call receives a reference to params, not a copy. When you do opts[:start] = (Time.now - 30.days).to_i, you are editing the original params object.

With a = params, both variables point to the same object in memory; you copied only the reference.

Search for "ruby object copy" or "ruby deep copy", or look it up on Stack Overflow. As a first attempt, try params.clone.
Whenever you update any value of params, take a copy of it first:

a = params.clone

This creates a new object in memory. If you instead write

a = params

no new object is created; both variables point to the same memory.
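As a quick illustration you can paste into IRB (note that clone and dup make shallow copies):

h = { end: '2012-01-01 21:00:19' }
a = h                  # same object: a.equal?(h) => true
a[:start] = 0
h                      # => {:end=>"2012-01-01 21:00:19", :start=>0} -- original mutated

b = h.clone            # shallow copy: a new top-level hash
b[:other] = 1
h.key?(:other)         # => false -- the original is untouched
# clone/dup copy only the top level; nested objects are still shared, so
# deeply nested data needs a deep copy, e.g. Marshal.load(Marshal.dump(h))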
I'm trying to write a custom parser for my cucumber results. In doing so, I want to write rspec tests around it. What I currently have is as follows:
describe 'determine_test_results' do
  it 'returns a scenario name as the key of the scenario results, with the scenario_line attached' do
    pcr = ParseCucumberJsonReport.new
    expected_results = {"I can login successfully"=>{"status"=>"passed", "scenario_line"=>4}}
    cucumber_results = JSON.parse(IO.read('example_json_reports/json_passing.json'))
    pcr.determine_test_results(cucumber_results[0]).should == expected_results
  end
end
The problem is that determine_test_results internally calls another method, determine_step_results, so this is really an integration test between the two methods and not a unit test for determine_test_results.
How would I mock out the "response" from determine_step_results?
Assume determine_step_results returns {"status"=>"passed", "scenario_line"=>4}
What I have tried:
pcr.stub(:determine_step_results).and_return({"status"=>"passed", "scenario_line"=>6})
and
allow(pcr).to receive(:determine_step_results).and_return({"status"=>"passed", "scenario_line"=>6})
You could use stubs for what you're trying to accomplish. The RSpec Mocks documentation would be good reading for this particular case. I have added some code below as a suggestion.
describe 'determine_test_results' do
  it 'returns a scenario name as the key of the scenario results, with the scenario_line attached' do
    pcr = ParseCucumberJsonReport.new
    expected_results = {"I can login successfully"=>{"status"=>"passed", "scenario_line"=>4}}
    # Stub the step results: any call to determine_step_results on pcr will
    # return this canned hash, no matter which method on pcr invokes it.
    allow(pcr).to receive(:determine_step_results).and_return({"status"=>"passed", "scenario_line"=>4})
    cucumber_results = JSON.parse(IO.read('example_json_reports/json_passing.json'))
    pcr.determine_test_results(cucumber_results[0]).should == expected_results
  end
end
If all determine_test_results does is call determine_step_results, you should not really test it, since it is trivial...

If you do decide to test it, all you need to test is that it calls the delegate method and returns the delegate's result:
describe ParseCucumberJsonReport do
  describe '#determine_test_results' do
    it 'calls determine_step_results' do
      result = double(:result)
      input = double(:input)
      expect(subject).to receive(:determine_step_results).with(input).and_return(result)
      expect(subject.determine_test_results(input)).to eq(result)
    end
  end
end
If it is doing anything more (like adding the result to a larger hash) you can describe it too:
describe ParseCucumberJsonReport do
  describe '#determine_test_results' do
    it 'merges the step results into the larger hash' do
      result = double(:result)
      input = double(:input)
      expect(subject).to receive(:determine_step_results).with(input).and_return(result)
      expect(subject.larger_hash).to receive(:merge).with(result)
      expect(subject.determine_test_results(input)).to eq(result)
    end
  end
end
If I have a method like this:
require 'tweetstream'

# client is an instance of TweetStream::Client
# twitter_ids is an array of up to 1000 integers
def add_first_users_to_stream(client, twitter_ids)
  # Add the first 100 ids to sitestream.
  client.sitestream(twitter_ids.slice!(0, 100))

  # Add any extra IDs individually.
  twitter_ids.each do |id|
    client.control.add_user(id)
  end

  return client
end
I want to use RSpec to test that:

1. client.sitestream is called, with the first 100 Twitter IDs.
2. client.control.add_user() is called with the remaining IDs.

The second point is the trickiest for me: I can't work out how to stub (or whatever) a method on an object that is itself a property of another object.
(I'm using Tweetstream here, although I expect the answer could be more general. If it helps, client.control would be an instance of TweetStream::SiteStreamClient.)
(I'm also not sure a method like my example is best practice, accepting and returning the client object like that, but I've been trying to break my methods down so that they're more testable.)
This is actually a pretty straightforward situation for RSpec. The following will work, as an example:

describe "add_first_users_to_stream" do
  it "should add ids to client" do
    bulk_add_limit = 100
    twitter_ids = (0..bulk_add_limit + rand(50)).collect { rand(4000) }
    extras = twitter_ids[bulk_add_limit..-1]

    client = double('client')
    expect(client).to receive(:sitestream).with(twitter_ids[0...bulk_add_limit])

    client_control = double('client_control')
    expect(client).to receive(:control).exactly(extras.length).times.and_return(client_control)
    # add_user's return value is never used by the method, so no and_return is needed.
    expect(client_control).to receive(:add_user).exactly(extras.length).times

    add_first_users_to_stream(client, twitter_ids)
  end
end
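If you don't need to assert exactly how many times client.control is read, a looser variant (a sketch under the same assumptions) can stub the intermediate object with allow and keep the expectation focused on add_user:

describe "add_first_users_to_stream" do
  it "adds the remaining ids one by one" do
    twitter_ids = (1..120).to_a
    extras = twitter_ids[100..-1]

    client = double('client')
    allow(client).to receive(:sitestream)
    control = double('control')
    allow(client).to receive(:control).and_return(control)
    expect(control).to receive(:add_user).exactly(extras.length).times

    add_first_users_to_stream(client, twitter_ids)
  end
end

Newer versions of rspec-mocks also offer receive_message_chain(:control, :add_user) for stubbing chained calls, though needing a chain is often a hint that the code could be restructured.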
def test_post_with_file(filename = 'test01.xml')
  File.open(filename) do |file|
    response = @http_client.post(url, { 'documents' => file })
  end
end
How do I modify the above method to handle a multi-file-array post/upload?
file_array = ['test01.xml', 'test02.xml']
You mean like this?
def test_post_with_file(file_array = [])
  file_array.each do |filename|
    File.open(filename) do |file|
      response = @http_client.post(url, { 'documents' => file })
    end
  end
end
I was having the same problem and finally figured out how to do it:
def test_post_with_file(file_array)
  form = file_array.map { |n| ['documents[]', File.open(n)] }
  response = @http_client.post(@url, form)
end
The docs show how to pass multiple values: http://rubydoc.info/gems/httpclient/HTTPClient#post_content-instance_method .

In the "body" row, I tried the 4th example without success; HttpClient just applies .to_s to each hash in the array.

Then I tried the 2nd solution, and it wouldn't work either, because the server keeps only the last value. But after some tinkering I discovered that the second solution does work if the parameter name includes square brackets to indicate there are multiple values as an array.

Maybe this is a bug in Sinatra (that's what I'm using), maybe the handling of such data is implementation-dependent, maybe the HttpClient doc is outdated or wrong. Or a combination of these.
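For reference, a sketch of the receiving side in Sinatra (the route and field names are illustrative): with 'documents[]' as the field name, params['documents'] arrives as an array of upload hashes, whereas a plain 'documents' name keeps only the last file.

require 'sinatra'

post '/upload' do
  files = params['documents']               # an Array when posted as documents[]
  # Each entry is a Rack upload hash with :filename, :type and :tempfile.
  files.map { |f| f[:filename] }.join(', ')
end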
My Sinatra app has to parse a ~60MB XML file. This file hardly ever changes: a nightly cron job overwrites it with a new one.
Are there tricks or ways to keep the parsed file in memory, as a variable, so that I can read from it on incoming requests, but not have to parse it over and over for each incoming request?
Some pseudocode to illustrate my problem:
get '/projects/:id' do
  return @nokogiri_object.search("//projects/project[@id=#{params[:id]}]/name/text()")
end

post '/projects/update' do
  if params[:token] == "s3cr3t"
    @nokogiri_object = reparse_the_xml_file
  end
end
What I need to know is how to create such a @nokogiri_object so that it persists while Sinatra runs. Is that possible at all? Or do I need some storage for that?
You could try:

configure do
  @@nokogiri_object = parse_xml
end

Then @@nokogiri_object will be available in your request methods. It's a class variable rather than an instance variable, but should do what you want.
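Applied to the question, a minimal sketch might look like this (the file name is illustrative; note the warning discussed just below):

require 'sinatra'
require 'nokogiri'

configure do
  # Parse once at boot; reparse only when the update route is hit.
  @@nokogiri_object = Nokogiri::XML(File.read('projects.xml'))
end

get '/projects/:id' do
  @@nokogiri_object.search("//projects/project[@id=#{params[:id]}]/name/text()").to_s
end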
The proposed solution gives a warning:

warning: class variable access from toplevel

You can use a class method to access the class variable, and the warning will disappear:
require 'sinatra'

class Cache
  @@count = 0

  def self.init()
    @@count = 0
  end

  def self.increment()
    @@count = @@count + 1
  end

  def self.count()
    return @@count
  end
end

configure do
  Cache::init()
end

get '/' do
  if Cache::count() == 0
    Cache::increment()
    "First time"
  else
    Cache::increment()
    "Another time #{Cache::count()}"
  end
end
Two options:

1. Save the parsed data to a new file and always read that one. You can serialize to a file a hash with two keys: 'last-modified' and 'data'. The 'last-modified' value is a date; on every request you check whether that date is today. If it is not, the new file is parsed and stored with today's date. The 'data' value is the parsed content. That way you parse just once a day, a simple cache. (A sketch follows after this list.)

2. Save the parsed data to a NoSQL database, for example Redis.
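A minimal sketch of option 1 (file names are illustrative). Nokogiri documents wrap C structs and cannot be Marshal-dumped directly, so this caches a plain Ruby structure derived from the XML instead:

require 'nokogiri'
require 'date'

CACHE_FILE = 'projects_cache.dump'

def project_names
  if File.exist?(CACHE_FILE)
    cache = Marshal.load(File.binread(CACHE_FILE))
    # Reuse the cached data if it was produced today.
    return cache['data'] if cache['last-modified'] == Date.today
  end
  doc  = Nokogiri::XML(File.read('projects.xml'))
  data = doc.xpath('//projects/project')
            .to_h { |p| [p['id'], p.at('name').text] }
  File.binwrite(CACHE_FILE, Marshal.dump('last-modified' => Date.today, 'data' => data))
  data
end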