I am using the parallel_tests gem to run multiple features at the same time. The problem I face is that I have user-based sessions (SSO), so only one user can be logged in at a time.
To work around this I was thinking of randomly selecting a user if one is available, but tracking their login status globally presents an issue for me.
My setup
Before and after each scenario a user will log in and out:
Before('@login_automated_user') do
  @user = FactoryGirl.attributes_for(:automated_user)
  login_steps("#{APP}")
end

After('@login_automated_user') do
  logout_steps
end
I was thinking of keeping a pool of users in an array, randomly selecting an available one, and returning the user to the pool once finished:
module UserSession
  def choose_user
    user_array = factory_girl_users.values
    if user_array.length > 0
      @user = user_array.pop
    end
  end

  def return_user
    user_array << @user
  end

  def factory_girl_users
    Hash[:user_1 => FactoryGirl.attributes_for(:automated_user), :user_2 => FactoryGirl.attributes_for(:automated_user_1)]
  end
end

World(UserSession)
This would then make my Before and After hooks look like:
Before('@login_automated_user') do
  @user = choose_user
  login_steps("#{APP}")
end

After('@login_automated_user') do
  logout_steps
  return_user
end
One issue I can see here is that I'm using @user across two sessions (or more if I had more users), so do I need to separate them out?
Would anyone offer some tips or solutions to contemplate, please?
I'm not sure I understand your exact problem, but I assume it's that a single browser is shared by tests running in parallel, so one user's login affects another test running at the same time.
One solution is a named session per test, though I'm not sure that's reliable.
Another is to have each of your parallel processes open its own browser; then you won't have any session problems.
That requires more memory, but it doesn't seem to affect test speed.
Also, instead of logging out after each test, you could just call Capybara.reset_sessions! in a global teardown.
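A minimal sketch of that global teardown, assuming the standard Capybara API:

After do
  Capybara.reset_sessions! # drops the state of every registered session instead of clicking through a logout flow
end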
I have the following worker:
class JobBlastingWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'job_blasting_worker'

  def perform(job_id, action=nil)
    job = Job.find(job_id)
    JobBlastingService.new(job).call
    sidekiq_id = JobBlastingWorker.perform_in(2.minutes, job.id, 're-blast', true)
    job.sidekiq_trackers.create(sidekiq_id: sidekiq_id, worker_type: 'blast_version_update')
  end
end
In my RSpec test, I have the following job_blasting_worker_spec.rb:
require 'rails_helper'

describe JobBlastingWorker do
  before(:all) do
    Rails.cache.clear
  end

  describe 'perform' do
    context 'create' do
      it 'creates job schedule for next 2mins' do
        @job = create(:job)
        worker = JobBlastingWorker.new
        expect(JobBlastingWorker).to have_enqueued_sidekiq_job(@job.id, 're-blast').in(2.minutes)
        worker.perform(@job.id, 'create')
      end
    end
  end
end
I expect this to work, but the Sidekiq job that should be scheduled for the next 2 minutes never gets created, so the test fails.
How can I ensure that the Sidekiq job is actually scheduled 2 minutes out and that the test passes?
Well... for this kind of expectation, I suggest just testing the message sent to the method.
worker = JobBlastingWorker.new
expect(JobBlastingWorker).to receive(:perform_in).with(2.minutes, @job.id, 're-blast', true).and_call_original
worker.perform(@job.id, 'create')
expect(JobBlastingWorker).to have_enqueued_sidekiq_job(@job.id, 're-blast', true)
Of course, if you dig hard enough, I think you will eventually find a way to locate the job object in the queue, for example by using the Redis API directly.
You could then examine the object further and read the time you set for the job to be performed.
But why? It's Sidekiq's responsibility to make sure those jobs are performed at the right time.
Finding this out doesn't help you much, and that behavior should already be covered by Sidekiq's own test suite.
I don't think you need to worry about it unless it works incorrectly and you want to reproduce the situation and file a bug.
On the other hand, the time you send to the method is something you should care about: you don't want someone to change it to 2.hours by accident.
So I suggest you test the message you send to the method.
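If you still want to assert against the queue as in the question, the matcher has to run after perform has enqueued the follow-up job, and it has to list every argument given to perform_in. A sketch, assuming the rspec-sidekiq matcher with its .in chain and the default fake testing mode:

it 'creates job schedule for next 2mins' do
  @job = create(:job)
  worker = JobBlastingWorker.new
  worker.perform(@job.id, 'create')
  # Assert after the call, with all three enqueued arguments and the delay.
  expect(JobBlastingWorker).to have_enqueued_sidekiq_job(@job.id, 're-blast', true).in(2.minutes)
end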
In my Sinatra web application, I have a route:
get "/" do
temp = MyClass.new("hello",1)
redirect "/home"
end
Where MyClass is:
class MyClass
  @instancesArray = []

  def initialize(string, id)
    @string = string
    @id = id
    @instancesArray[id] = self
  end

  def run(id)
    puts @instancesArray[id].string
  end
end
At some point I would want to run MyClass.run(1), but I wouldn't want it to execute immediately, because that would slow down the server's response to some clients. I would want the server to wait for some time with a lighter load before running MyClass.run(temp). How could I tell it to wait until there is an empty or light load, and then run MyClass.run(temp)? Can I do that?
Addendum
Here is some sample code for what I would want to do:
$var = 0

get "/" do
  $var = $var + 1 # each time a request is received, it increments
end
After that I would have a loop that counts requests per minute (after a minute it would reset $var to 0), and if $var was less than some number it would run tasks until the load increased.
As Andrew mentioned (correctly; I'm not sure why he was voted down), Sinatra stops processing a route when it sees a redirect, so any subsequent statements will never execute. As you stated, you don't want to put those statements before the redirect because that would block the request until they complete. You could potentially send the redirect status and headers to the client without using the redirect method and then call MyClass#run. This would have the desired effect (from the client's perspective), but the server process (or thread) would still block until it completes. This is undesirable because that process (or thread) would not be able to serve any new requests until it unblocks.
You could fork a new process (or spawn a new thread) to handle this background task asynchronously from the main process associated with the request. Unfortunately, this approach has the potential to get messy. You would have to code around different situations like the background task failing, or the fork/spawn failing, or the main request process not ending if it owns a running thread or other process. (Disclaimer: I don't really know enough about IPC in Ruby and Rack under different application servers to understand all of the different scenarios, but I'm confident that here there be dragons.)
The most common solution pattern for this type of problem is to push the task onto some kind of work queue to be serviced later by another process. Pushing a task onto the queue is ideally a very quick operation, and won't block the main process for more than a few milliseconds. This introduces a few new challenges (where does the queue live? how is the task described so that it can be executed later without any context? how do we maintain the worker processes?) but fortunately a lot of the legwork has already been done by other people. :-)
There is the delayed_job gem, which seems to provide a nice all-in-one solution. Unfortunately, it's mostly geared towards Rails and ActiveRecord, and the efforts people have made in the past to make it work with Sinatra look to be unmaintained. The contemporary, framework-agnostic solutions are Resque and Sidekiq. It might take some effort to get up and running with either option, but it would be well worth it if you have several "run when you can" type functions in your application.
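With Sidekiq, for example, the route from the question might hand the work off like this (a sketch: RunLaterWorker is an illustrative name, and it assumes a running Redis plus a separate sidekiq process):

require 'sidekiq'

class RunLaterWorker
  include Sidekiq::Worker

  def perform(string, id)
    # Job arguments must be simple JSON-serializable values,
    # so rebuild the object here instead of passing it in.
    MyClass.new(string, id).run(id)
  end
end

get "/" do
  RunLaterWorker.perform_async("hello", 1) # pushes to Redis and returns in milliseconds
  redirect "/home"
end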
MyClass.run(temp) never actually executes. In your current request to the / path, you instantiate a new instance of MyClass, and then it immediately redirects to /home. I'm not entirely sure what the question is, though. If you want something to execute after the redirect, that functionality needs to exist within the /home route.
get '/home' do
  # some code like MyClass.run(some_arg)
end
I am looking for some assistance on where to start to run concurrent tests with Cucumber/Capybara. I need to do this without the parallel_tests gem; the reason is that I can't seem to have a separate user log in for each process.
I was thinking that I could have a shared pool of users, most likely in an array, but I can't share this data across separate processes with the gem.
Some feedback I have received is to use IO.pipe, but I don't yet know enough about it.
I have a standalone Cucumber framework, no Rails etc.
Thought I would post my solution in case it helps anyone else. I ended up separating the user pool from my application and storing the users in Redis.
I then have a simple method that picks a random user from a Redis Set, and another that puts the user back once the test has finished with it:
def choose_redis_user
  @redis = Redis.new
  @randUser = @redis.spop("users")
  $user_username = @redis.hget(@randUser, "username")
  $user_password = @redis.hget(@randUser, "password")
end

def return_redis_user
  @redis.sadd("users", @randUser)
end
Then within my tests I can run:
login_page.username.set($user_username)
login_page.password.set($user_password)
This works really well with multiple parallel_tests processes.
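Wired into the hooks from the original question, the pool might be used like this (a sketch reusing the tag and step helpers shown earlier):

Before('@login_automated_user') do
  choose_redis_user
  login_steps("#{APP}")
end

After('@login_automated_user') do
  logout_steps
  return_redis_user # put the user back so another process can pick it up
end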
I'm working with JRuby 1.7.12 and Celluloid (0.16.0). My application uses pools and generates actors in a loop:
require 'kaiwa'

Kaiwa::Launcher.run
Celluloid.logger = Kaiwa::Logger.logger

class KaiwaTest
  include Celluloid

  Celluloid::LINKING_TIMEOUT = 5

  def initialize
  end

  def create_kaiwa_users(handle)
    Kaiwa::Manager.create_user(handle)
  end

  def send_kaiwa_messages(to_handle, from_handle, message)
    Kaiwa::Manager.send_message(to_handle, from_handle, message)
  end
end

kt = KaiwaTest.pool(size: 4)

(0..1_00_000).to_a.each do |index|
  kaiwa_test_pool.async.create_kaiwa_users("user_#{index}")
end
Within my library each user is an actor that gets linked to the manager, which is also an actor. I've tried eliminating the linking altogether and the problem still persists. The minute I create more than 30 user actors, my system hangs.
There seem to be some similar timeout errors discussed, with mention of a JRuby issue, but nothing that specifically touches the linking timeout issue. I cannot figure out what is causing it.
Thanks in advance.
The entire codebase is available at https://github.com/supersid/kaiwa
Would appreciate any help I can get.
Interesting project (XMPP + Celluloid + JRuby).
I'm assuming kaiwa_test_pool ought to be kt?
I would give this a try using 0.17.0:
https://github.com/celluloid/celluloid/wiki/0.17.0-Prerelease
You do not need a Pool here; you just need a specialized Supervision::Container.
What you need to do is instantiate a Manager, using Kaiwa::Manager.supervise as: :manager.
Then instantiate a supervision container for your user actors.
I'm not sure why you create 0..1_00_000 users?
Either way, no Pool is needed: use 0.17.0 and a plain supervision container.
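In 0.17.0 terms, the manager registration might look like this (a sketch; the exact container API shifted between the 0.17.0 prereleases):

Kaiwa::Manager.supervise as: :manager # supervised: restarted automatically if it crashes
manager = Celluloid::Actor[:manager]  # look it up in the actor registry instead of holding a raw reference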
My User model has a nasty method that should not be called simultaneously for two instances of the same record. I need to execute two HTTP requests in a row and at the same time make sure that no other thread executes the same method for the same record at the same time.
class User
  ...

  def nasty_long_running_method
    # Something nasty will happen if this method is called simultaneously
    # for two instances of the same record and the later one finishes http_request_1
    # before the first one finishes http_request_2.
    http_request_1 # Takes 1-3 seconds.
    http_request_2 # Takes 1-3 seconds.
    update_model
  end
end
For example this would break everything:
user = User.first

Thread.new { user.nasty_long_running_method }
Thread.new { user.nasty_long_running_method }
But this would be ok and it should be allowed:
user1 = User.find(1)
user2 = User.find(2)

Thread.new { user1.nasty_long_running_method }
Thread.new { user2.nasty_long_running_method }
What would be the best way to make sure the method is not called simultaneously for two instances of the same record?
I found the remote_lock gem when searching for a solution to my problem. It is a mutex solution that uses Redis as its backend.
It:
- is accessible to all processes
- does not lock the database
- lives in memory, so it is fast with no I/O

The method looks like this now:
def nasty
  $lock = RemoteLock.new(RemoteLock::Adapters::Redis.new(REDIS))
  $lock.synchronize("capi_lock_#{user_id}") do
    http_request_1
    http_request_2
    update_user
  end
end
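Note the design of the lock key: because it embeds user_id, two instances of the same record contend for the same lock, while calls for different records proceed in parallel, which is exactly the behaviour the question asks for.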
I would start by adding a mutex or semaphore. Read about Mutex here: http://www.ruby-doc.org/core-2.1.2/Mutex.html
class User
  ...

  def nasty
    @semaphore ||= Mutex.new
    @semaphore.synchronize {
      # only one thread at a time can enter this block...
    }
  end
end
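Note that @semaphore lives on a single User instance here, so two instances of the same record would still each get their own mutex. Sharing one mutex per record id at the class level narrows that gap within a single process; a sketch (not part of the original answer):

class User
  MUTEXES_GUARD = Mutex.new
  @mutexes = {}

  # Hand out exactly one mutex per record id, creating it safely on first use.
  def self.mutex_for(id)
    MUTEXES_GUARD.synchronize { @mutexes[id] ||= Mutex.new }
  end

  def nasty
    self.class.mutex_for(id).synchronize {
      # only one thread at a time can enter this block per record...
    }
  end
end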
If your class is an ActiveRecord object you might want to use Rails' locking and database transactions. See: http://api.rubyonrails.org/classes/ActiveRecord/Locking/Pessimistic.html
def nasty
  User.transaction do
    lock!
    ...
    save!
  end
end
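For context: lock! reloads the record with SELECT ... FOR UPDATE, so a competing transaction calling lock! on the same row blocks until the first transaction commits or rolls back.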
Update: You updated your question with more details, and it seems like my solutions no longer really fit. The first solution does not work if you have multiple instances running. The second locks only the database row; it does not prevent multiple threads from entering the code block at the same time.
Therefore I would think about building a database-based semaphore.
class Semaphore < ActiveRecord::Base
  belongs_to :item, :polymorphic => true

  def self.get_lock(item, identifier = nil)
    # may raise an invalid-key exception from unique-key constraints in the db
    create(:item => item) rescue false
  end

  def release
    destroy
  end
end
The database should have a unique index covering the columns of the polymorphic association to item. That should prevent multiple threads from getting a lock for the same item at the same time.
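A sketch of that index, assuming a semaphores table with the usual polymorphic columns:

add_index :semaphores, [:item_type, :item_id], unique: true

Your method would look like this: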
def nasty
  semaphore = nil
  semaphore = Semaphore.get_lock(user) until semaphore
  ...
  semaphore.release
end
There are a couple of problems to solve around this: How long do you want to wait to get the semaphore? What happens if the external HTTP requests take ages? Do you need to store additional pieces of information (hostname, pid) to identify which thread locked an item? You will need some kind of cleanup task that removes locks that still exist after a certain period of time or after a server restart.
Furthermore, I think it is a terrible idea to have something like this in a web server. At the very least you should move all that stuff into background jobs, which might solve your problem by itself if your app is small and a single background worker can get everything done.
You state that this is an ActiveRecord model, in which case the usual approach would be to use a database lock on that record. No need for additional locking mechanisms as far as I can see.
Take a look at the short (one page) Rails Guides section on pessimistic locking - http://guides.rubyonrails.org/active_record_querying.html#pessimistic-locking
Basically you can get a lock on a single record or a whole table (if you were updating a lot of things).
In your case something like this should do the trick...
class User < ActiveRecord::Base
  ...

  def nasty_long_running_method
    with_lock do
      # Something nasty will happen if this method is called simultaneously
      # for two instances of the same record and the later one finishes http_request_1
      # before the first one finishes http_request_2.
      http_request_1 # Takes 1-3 seconds.
      http_request_2 # Takes 1-3 seconds.
      update_model
    end
  end
end
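with_lock is a convenience wrapper that opens a transaction and calls lock! on the receiver, so it gives you the same row-level pessimistic lock as the explicit transaction example above.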
I recently created a gem called szymanskis_mutex. It is a module that you can include in your User class, and it provides the method mutual_exclusion(concern) to give you the functionality you want.
It doesn't rely on a database and doesn't depend on how many processes want to enter the critical section at any given moment.
Note that it will not work if the class is instantiated on different servers.
It may suit your needs if your app is small enough. Your code would look like this:
class User
  include SzymanskisMutex
  ...

  def nasty_long_running_method
    mutual_exclusion(:nasty_long) do
      http_request_1 # Takes 1-3 seconds.
      http_request_2 # Takes 1-3 seconds.
    end
    update_model
  end
end
I suggest rethinking your architecture, as this is not going to scale: imagine having multiple Ruby processes, failing processes, timeouts, and so on. In-process locking and spawning threads are also quite dangerous for application servers.
If you want to sleep well in production, try an asynchronous background-processing framework for long-running tasks, with a serial queue that ensures the tasks run in order. Plain RabbitMQ would do, or see this Q&A: Best practice for Rails App to run a long task in the background? Alternatively, stay with the DB but use optimistic locking.
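With RabbitMQ via the bunny gem, for instance, the web process would only publish a message, and a single consumer process would work through the queue in order (a sketch; the queue name and payload are illustrative):

require 'bunny'

connection = Bunny.new
connection.start
channel = connection.create_channel
queue = channel.queue('nasty_user_tasks', durable: true)

# In the web process: enqueue the record id and return immediately.
channel.default_exchange.publish('1', routing_key: queue.name, persistent: true)

# In a single worker process: consume one message at a time, preserving order.
queue.subscribe(manual_ack: true, block: true) do |delivery_info, _properties, body|
  User.find(body.to_i).nasty_long_running_method
  channel.ack(delivery_info.delivery_tag)
end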