Run concurrent tests with Cucumber/Capybara - Ruby

I am looking for some assistance on where to start to run concurrent tests with Cucumber/Capybara. I need to do this without the 'parallel_tests' gem; the reason being is that I can't seem to have separate users log in for each process.
I was thinking that I could have a shared pool of users, most likely in an array, but I can't share this data across separate processes with the gem.
Some feedback I have received is to use IO.pipe, but as yet I do not know enough about it.
I have a standalone Cucumber framework, no Rails etc.
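(For context, the IO.pipe suggestion amounts to something like the sketch below: a parent process hands credentials to a forked child over a pipe. The credentials here are made up for illustration.)

require 'socket'

reader, writer = IO.pipe

pid = fork do
  writer.close
  line = reader.gets.chomp          # child receives "alice:secret"
  username, password = line.split(":")
  # password would be fed to the login form here
  puts "child #{Process.pid} logging in as #{username}"
end

reader.close
writer.puts "alice:secret"          # parent hands out one user
writer.close
Process.wait(pid)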

Thought I would post my solution in case it helps anyone else. I have ended up separating the user pool away from my application and storing them using Redis.
I then have a simple method that will pick a random user from a Set in Redis, and another that puts it back once the test has finished with it:
def choose_redis_user
  @redis = Redis.new
  # SPOP atomically removes and returns a random member of the set,
  # so no two parallel processes can grab the same user
  @randUser = @redis.spop("users")
  $user_username = @redis.hget(@randUser, "username")
  $user_password = @redis.hget(@randUser, "password")
end

def return_redis_user
  # put the user back into the pool once we're done with it
  @redis.sadd("users", @randUser)
end
Then within my tests I can run:
login_page.username.set($user_username)
login_page.password.set($user_password)
This works really well with multiple parallel test processes.
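In case it helps anyone reproduce this, the pool can be seeded roughly as follows. The key layout is my assumption from the methods above: a set named "users" whose members are the keys of per-user hashes (names and credentials below are made up):

require 'redis'

redis = Redis.new

# each user lives in a hash keyed by an arbitrary name...
redis.hset("user:1", "username", "alice")
redis.hset("user:1", "password", "secret1")
redis.hset("user:2", "username", "bob")
redis.hset("user:2", "password", "secret2")

# ...and the "users" set holds the free pool that SPOP draws from
redis.sadd("users", "user:1")
redis.sadd("users", "user:2")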

Related

How do I properly use Threads to ping a URL?

I am trying to ping a large number of URLs and retrieve information about each URL's certificate. As I read in this Thoughtbot article on threads and others, the best way to do this is by using Threads. When I implement threads, however, I keep running into Timeout errors and other problems for URLs that I can retrieve successfully on their own. I've been told in another related question that I asked earlier that I should not use Timeout with Threads. However, the examples I see wrap API/Net::HTTP/TCPSocket calls in the Timeout block, and based on what I've read, that entire API/Net::HTTP/TCPSocket call will be nested within the Thread. Here is my code:
require 'socket'
require 'openssl'
require 'timeout'

class SslClient
  attr_reader :url, :port, :timeout

  def initialize(url, port = '443', timeout = 30)
    @url = url
    @port = port
    @timeout = timeout
  end

  def ping_for_certificate_info
    context = OpenSSL::SSL::SSLContext.new
    certificates = nil
    verify_result = nil
    Timeout.timeout(timeout) do
      tcp_client = TCPSocket.new(url, port)
      ssl_client = OpenSSL::SSL::SSLSocket.new(tcp_client, context)
      ssl_client.hostname = url
      ssl_client.sync_close = true
      ssl_client.connect
      certificates = ssl_client.peer_cert_chain
      verify_result = ssl_client.verify_result
      tcp_client.close
    end
    { certificate: certificates.first, verify_result: verify_result }
  rescue => error
    puts url
    puts error.inspect
  end
end
[VERY LARGE LIST OF URLS].map do |url|
  Thread.new do
    ssl_client = SslClient.new(url)
    cert_info = ssl_client.ping_for_certificate_info
    puts cert_info
  end
end.map(&:value)
If you run this code in your terminal, you will see many Timeout errors and Errno::ETIMEDOUT errors for sites like fandango.com, fandom.com, mcaffee.com, google.de etc. that should return information. When I run these individually, however, I get the information I need; when I run them in threads they tend to fail, especially for domains with a foreign domain name. What I'm asking is whether I am using Threads correctly. This snippet of code is part of a larger piece that interacts with ActiveRecord objects in Rails depending on the results. Am I using Timeout and Threads correctly? What do I need to do to make this work? Why would a ping work individually but not wrapped in a thread? Help would be greatly appreciated.
There are several issues:
You should not spawn thousands of threads; use a connection pool (e.g. https://github.com/mperham/connection_pool) so you have at most 20-30 concurrent requests going. This maximum should be determined by testing the point at which network performance drops and you start getting these timeouts.
It's difficult to guarantee that your code is not broken when you use threads, which is why I suggest you use something where others have figured it out for you, like https://github.com/httprb/http (with examples for thread safety and concurrent requests at https://github.com/httprb/http/wiki/Thread-Safety). There are other libs out there (Typhoeus, Patron), but this one is pure Ruby, so basic thread safety is easier to achieve.
You should not use Timeout (see https://jvns.ca/blog/2015/11/27/why-rubys-timeout-is-dangerous-and-thread-dot-raise-is-terrifying and https://medium.com/@adamhooper/in-ruby-dont-use-timeout-77d9d4e5a001). Use IO.select or something else.
Also, I suggest you learn about threading issues like deadlocks, starvation and all the other gotchas. In your case you are causing starvation of network resources, because all the threads are fighting for bandwidth.
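To make the pooling point concrete, here is a minimal sketch using only the standard library (the connection_pool gem or http.rb linked above would be the more robust route). It reuses the question's SslClient and caps concurrency with a fixed number of worker threads draining a shared queue; urls stands in for the big list from the question:

require 'thread'

MAX_THREADS = 20  # tune by measuring where the timeouts start appearing
queue = Queue.new
urls.each { |url| queue << url }

workers = MAX_THREADS.times.map do
  Thread.new do
    loop do
      url = begin
        queue.pop(true)  # non-blocking pop raises ThreadError when empty
      rescue ThreadError
        break            # queue drained, this worker is done
      end
      puts SslClient.new(url).ping_for_certificate_info
    end
  end
end
workers.each(&:join)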

Run when you can

In my sinatra web application, I have a route:
get "/" do
temp = MyClass.new("hello",1)
redirect "/home"
end
Where MyClass is:
class MyClass
  @@instances_array = []

  attr_reader :string

  def initialize(string, id)
    @string = string
    @id = id
    @@instances_array[id] = self  # `self`, not `this`, in Ruby
  end

  def self.run(id)
    puts @@instances_array[id].string
  end
end
At some point I would want to run MyClass.run(1), but I wouldn't want it to execute immediately, because that would slow down the server's response to some clients. I would want the server to wait to run it until there was some time with a lighter load. How could I tell it to wait until there is an empty/light load, and then run MyClass.run(temp)? Can I do that?
Addendum
Here is some sample code for what I would want to do:
$var = 0

get "/" do
  $var = $var + 1 # each time a request is received, it increments
end
After that I would have a loop that counts requests per minute (so after a minute it would reset $var to 0), and if $var was less than some number, it would run tasks until the load increased.
As Andrew mentioned (correctly, and I'm not sure why he was voted down), Sinatra stops processing a route when it sees a redirect, so any subsequent statements will never execute. As you stated, you don't want to put those statements before the redirect because that will block the request until they complete. You could potentially send the redirect status and header to the client without using the redirect method and then call MyClass#run. This will have the desired effect (from the client's perspective), but the server process (or thread) will block until it completes. This is undesirable because that process (or thread) will not be able to serve any new requests until it unblocks.
You could fork a new process (or spawn a new thread) to handle this background task asynchronously from the main process associated with the request. Unfortunately, this approach has the potential to get messy. You would have to code around different situations like the background task failing, or the fork/spawn failing, or the main request process not ending if it owns a running thread or other process. (Disclaimer: I don't really know enough about IPC in Ruby and Rack under different application servers to understand all of the different scenarios, but I'm confident that here there be dragons.)
The most common solution pattern for this type of problem is to push the task into some kind of work queue to be serviced later by another process. Pushing a task onto the queue is ideally a very quick operation, and won't block the main process for more than a few milliseconds. This introduces a few new challenges (where is the queue? how is the task described so that it can be facilitated at a later time without any context? how do we maintain the worker processes?) but fortunately a lot of the leg work has already been done by other people. :-)
There is the delayed_job gem, which seems to provide a nice all-in-one solution. Unfortunately, it's mostly geared towards Rails and ActiveRecord, and the efforts people have made in the past to make it work with Sinatra look to be unmaintained. The contemporary, framework-agnostic solutions are Resque and Sidekiq. It might take some effort to get up and running with either option, but it would be well worth it if you have several "run when you can" type functions in your application.
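As a rough sketch of what the Sidekiq variant might look like (the worker class name is made up, and enqueueing assumes a running Redis instance and a separate Sidekiq process):

require 'sidekiq'

# Pushing the job is quick; a separate Sidekiq process picks it up
# and runs it outside the request cycle.
class RunMyClassJob
  include Sidekiq::Worker

  def perform(id)
    MyClass.run(id)
  end
end

get "/" do
  RunMyClassJob.perform_async(1) # enqueues in milliseconds
  redirect "/home"
end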
MyClass.run(temp) is never actually executed. In your current request to the / path, you instantiate a new instance of MyClass and then immediately redirect to /home. I'm not entirely sure what the question is, though. If you want something to execute after the redirect, that functionality needs to exist within the /home route.
get '/home' do
  # some code like MyClass.run(some_arg)
end

Keeping track of sessions in Ruby and Capybara

I am using the parallel_tests gem to run multiple features at the same time. The problem I face is that I have user-based sessions (SSO), so only one user can be logged in at a time.
To combat this I was thinking of being able to randomly select users if they are available, but tracking their login status globally presents an issue for me.
My setup
Before and after each scenario a user will log in:
Before('@login_automated_user') do
  @user = FactoryGirl.attributes_for(:automated_user)
  login_steps("#{APP}")
end

After('@login_automated_user') do
  logout_steps
end
I was thinking of having a pool of users in an array, randomly selecting one to use, and returning the user to the pool once finished:
module UserSession
  def user_pool
    # memoized so choose_user and return_user share the same array
    @user_pool ||= factory_girl_users.values
  end

  def choose_user
    @user = user_pool.pop if user_pool.length > 0
  end

  def return_user
    user_pool << @user
  end

  def factory_girl_users
    { :user_1 => FactoryGirl.attributes_for(:automated_user),
      :user_2 => FactoryGirl.attributes_for(:automated_user_1) }
  end
end

World(UserSession)
World(UserSession)
This would then make my Before and After hooks look like:
Before('@login_automated_user') do
  @user = choose_user
  login_steps("#{APP}")
end

After('@login_automated_user') do
  logout_steps
  return_user
end
One issue I can see here is that I'm using @user across two sessions (or more if I had more users), so do I need to separate them out?
Would anyone offer some tips/solutions to contemplate please?
I'm not sure I understand your exact problem, but I assume it's that a single browser is shared by several tests running in parallel, so one user's login affects another test running at the same time.
One solution is having a named session per test. I'm not sure how reliable that is.
Another is having each of your parallel processes open its own browser, so you won't have any session problems.
That requires more memory, but doesn't seem to affect test speed.
Also, instead of logging out after each test, you could just use reset_sessions! in a global teardown.
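A minimal sketch of those two suggestions, assuming a Cucumber support file such as features/support/env.rb:

# reset every Capybara session instead of clicking through a UI logout
After do
  Capybara.reset_sessions!
end

# and if you do want two users in one process, named sessions look like:
Capybara.using_session(:first_user) do
  login_steps("#{APP}")
end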

Celluloid::TimeoutError: linking timeout of 5 seconds exceeded

I'm working with JRuby 1.7.12 and Celluloid (0.16.0). My application uses pools and generates actors in a loop:
require 'kaiwa'

Kaiwa::Launcher.run
Celluloid.logger = Kaiwa::Logger.logger

class KaiwaTest
  include Celluloid

  Celluloid::LINKING_TIMEOUT = 5

  def initialize
  end

  def create_kaiwa_users(handle)
    Kaiwa::Manager.create_user(handle)
  end

  def send_kaiwa_messages(to_handle, from_handle, message)
    Kaiwa::Manager.send_message(to_handle, from_handle, message)
  end
end

kt = KaiwaTest.pool(size: 4)

(0..1_00_000).to_a.each do |index|
  kaiwa_test_pool.async.create_kaiwa_users("user_#{index}")
end
Within my library, each user is an actor which gets linked to the manager, which is also an actor. I've tried eliminating the linking altogether and the problem still persists: the minute I create more than 30 user actors, my system hangs.
There seem to be some similar timeout errors discussed elsewhere, with a mention of a JRuby issue, but nothing that specifically touches the linking timeout. I cannot figure out what is causing it.
Thanks in advance.
The entire codebase is available at https://github.com/supersid/kaiwa
Would appreciate any help I can get.
Interesting project (XMPP + Celluloid + JRuby).
I'm assuming kaiwa_test_pool ought to be kt?
I would give this a try using 0.17.0:
https://github.com/celluloid/celluloid/wiki/0.17.0-Prerelease
You do not need a Pool here; you just need a specialized Supervision::Container.
What you need to do is instantiate a Manager, using Kaiwa::Manager.supervise as: :manager.
Then instantiate a supervision container for your user actors, as in the sketch below.
Not sure why you create 0..1_00_000 users?
Either way, no Pool is needed: use 0.17.0 and a plain supervision container.
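A rough sketch of the supervised-manager part, assuming the 0.17.0 API matches the wiki linked above and that create_user is exposed as an instance method on the supervised manager (Kaiwa::Manager is from your project; the registry lookup is standard Celluloid):

require 'celluloid/current'
require 'kaiwa'

# supervise the manager under a well-known name...
Kaiwa::Manager.supervise as: :manager

# ...then reach it through the actor registry instead of a pool
(0..100).each do |index|
  Celluloid::Actor[:manager].async.create_user("user_#{index}")
end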

How do I loop the restart of a daemon?

I am trying to use Ruby's daemons gem and loop the restart of a daemon that has its own loop. My code looks like this now:
require 'daemons'

while true
  listener = Daemons.call(:force => true) do
    users = accounts.get_updated_user_list
    TweetStream::Client.new.follow(users) do |status|
      puts "#{status.text}"
    end
  end
  sleep(60)
  listener.restart
end
Running this gives me the following error (after 60 seconds):
undefined method `restart' for #<Daemons::Application:0x007fc5b29f5658> (NoMethodError)
So obviously Daemons.call doesn't return a controllable daemon like I thought it did. What do I need to do to set this up correctly? Is a daemon even the right tool here?
I think this is what you're after, although I haven't tested it.
require 'tweetstream'
require 'eventmachine'

class RestartingUserTracker
  def initialize
    @client = TweetStream::Client.new
  end

  def handle_status(status)
    # do whatever it is you're going to do with the status
  end

  def fetch_users
    accounts.get_updated_user_list
  end

  def restart
    @client.stop_stream
    users = fetch_users
    @client.follow(users) do |status|
      handle_status(status)
    end
  end
end

EM.run do
  client = RestartingUserTracker.new
  client.restart

  EM::PeriodicTimer.new(60) do
    client.restart
  end
end
Here's how it works:
TweetStream uses EventMachine internally, as a way of polling the API forever and handling the responses. I can see why you might have felt stuck, because the normal TweetStream API blocks forever and doesn't give you a way to intervene at any point. However, TweetStream does allow you to set up other things in the same event loop. In your case, a timer. I found the documentation on how to do that here: https://github.com/intridea/tweetstream#removal-of-on_interval-callback
By starting up our own EventMachine reactor, we're able to inject our own code into the reactor as well as use TweetStream. In this case, we're using a simple timer that just restarts the client every 60 seconds.
EventMachine is an implementation of something called the Reactor Pattern. If you want to fully understand and maintain this code, it would serve you well to find some resources about it and gain a full understanding. The reactor pattern is very powerful, but can be difficult to grasp at first.
However, this code should get you started. Also, I'd consider renaming the RestartingUserTracker to something more appropriate.