I've written an rspec test using Watir against a web application and it's running fine. However, I now want to be able to run this test against the web application running on different domain names.
My initial thought was that I'd be able to pass a value to spec at the command line to set a variable within my script, but I can't see any easy method of doing this. So my second thought was that I might need to add an array of domains into my script and have it test all of them - but I don't always want to test every domain, and the domains are constantly changing as we add and remove sites to be tested.
What are my options for allowing the choice of targets I want?
You can set an environment variable, as those get passed on. RSpec uses this for its rake tasks, btw.
In your spec you can do something like:
before { @host = ENV['TARGET'] || 'default_target.com' }
You can run it like this:
TARGET=google.com spec .
Or:
TARGET=stackoverflow.com rake spec
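To make this concrete, here is a minimal sketch of a Watir/RSpec spec that picks up its target from the environment (the default domain, page, and title check are placeholders):

require 'rspec'
require 'watir-webdriver'

describe 'home page' do
  before(:all) do
    # fall back to a default domain when TARGET is not set
    @host = ENV['TARGET'] || 'default_target.com'
    @browser = Watir::Browser.new
  end

  after(:all) { @browser.close }

  it 'loads and has a title' do
    @browser.goto "http://#{@host}/"
    @browser.title.should_not be_empty
  end
end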
We're using Cucumber as our test harness
We have a folder structure like this:
- automation
- api
- api1.feature
- api2.feature
- gui
- gui1.feature
- gui2.feature
- step_definitions
- api_steps.rb
- gui.steps.rb
- support
- hooks.rb
- cucumber.yml
- env.rb
- Gemfile
- Rakefile
I need to have different actions occurring in my hooks.rb file (or support directory in general) for the api tests vs. the gui tests. For my api tests, I need to authorize through our REST API and get an auth cookie. For my gui tests, I need to create a Selenium browser instance.
I need this to work both when I execute everything by issuing 'cucumber' alone from the 'automation' level of this folder structure AND when I execute a single feature file by doing:
$ cucumber gui/gui1.feature -r features
So my questions are regarding how best to do that.
Can I/should I create some kind of conditional within my hooks.rb file in the 'Before do' block to perform different before actions based on which directory the feature is executing from?
OR Can I/should I create separate support/hooks.rb directories/files within each of the directories 'api' and 'gui'? (effectively will Cucumber recognize and selectively utilize multiple support directories?)
Thanks!
Cucumber really doesn't care about directories; they are there for organizational purposes only. I would try to avoid any implementation based on file location.
I would implement this with tagged scenarios and hooks.
Before('@gui') do
  # create browser/login
end

Before('@api') do
  # create restapi/auth cookie
end

Before('~@gui', '~@api') do
  fail('Silly developer, all scenarios must have an @api or @gui tag!')
end
If you run cucumber --help and look in particular at the -r option, you will find there are alternative approaches you can use. One of these is to run separate instances of cucumber with different included files. The best way to configure this is with profiles via cucumber.yml.
You can easily set up a gui profile to include hooks/gui.rb and an api profile to include hooks/api.rb.
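For example, a hypothetical cucumber.yml could define per-suite profiles (the hook file names are assumptions):

# cucumber.yml
gui: --require support --require hooks/gui.rb gui
api: --require support --require hooks/api.rb api

You would then run cucumber -p gui or cucumber -p api to load only the matching hooks.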
If you need even more separation, another approach would be to run cucumber from each of your subfolders, e.g. automation/api and automation/gui, and create a support/hooks structure inside those folders.
I'm experimenting, and I'm trying to launch a dummy Sinatra application from RSpec and kill it when the spec is finished. Something like:
# spec/some_spec.rb
before(:all) do
  # launch sinatra dummy app
end

after(:all) do
  # kill sinatra dummy app
end
it 'should return list of whatever' do
expect(JSON.parse(make_request('0.0.0.0:4567/test.json')))
.to include('whatever')
end
I could use system("ruby test/dummy/dummy_app.rb"), but how can I kill that process only? Does anyone know how I can launch the Sinatra app inside a test (or from another Ruby script)? I know about WebMock, but I want to see if I can manage to make my test work this way.
Look under RSpec on "Testing Sinatra with Rack::Test". I'd suggest you use that code as boilerplate to get started.
Just add this to your describe block:
def app
  Sinatra::Application
end
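A minimal sketch of that approach, assuming the dummy app from the question lives at test/dummy/dummy_app.rb and serves GET /test.json:

require 'rack/test'
require 'json'
require_relative 'test/dummy/dummy_app'

describe 'dummy app' do
  include Rack::Test::Methods

  # Rack::Test drives the app in-process, so no server needs to be started or killed
  def app
    Sinatra::Application
  end

  it 'returns a list containing whatever' do
    get '/test.json'
    expect(JSON.parse(last_response.body)).to include('whatever')
  end
end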
I would suggest you read up on RSpec.
Since you want to test an external system, by the looks of your comment, instead of system "curl whatever.com" you can use Net::HTTP to make requests and then test against the response.
Have a look at "Testing an external API using RSpec's request specs".
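As a rough sketch, the Net::HTTP version of the request from the question could look like this (URL and expectation copied from there):

require 'net/http'
require 'json'
require 'uri'

def make_request(url)
  # Net::HTTP.get returns the response body as a string
  Net::HTTP.get(URI("http://#{url}"))
end

it 'should return list of whatever' do
  expect(JSON.parse(make_request('0.0.0.0:4567/test.json'))).to include('whatever')
end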
Rather than writing request specs to ensure the features won't be broken, I decided to write separate Cucumber features. The nice thing is that I can use Capybara, and thanks to Selenium WebDriver I can launch a server before I run my tests.
So, I created a dummy Sinatra application that represents the external service to which the code I'm testing makes requests (including a nasty system('curl whatever.com')).
All I have to do is stub the host and port passed to curl so they use Capybara.current_session.server.host and Capybara.current_session.server.port.
Once I'm done with my refactoring, all I have to do is remove the Capybara server variables and Selenium WebDriver from the Cucumber/Capybara configuration.
After that brief change the tests will still work and remain valid.
Update
In the end I wrote it all with RSpec request tests, as doing it in Cucumber was a little bit time consuming and I had already spent too much time on this.
I mark these kinds of request tests with an RSpec tag, and before I launch them I manually launch a simple Sinatra/Grape dummy API application to which the requests are made. (Then I run the RSpec tests with this tag.)
So basically I end up with specs for the net/http functionality that use WebMock and don't need a server, and request tests for which I need to run the server before I run the specs. So the original question remains: how to launch a server before the tests start.
After I cover all the functionality I'm going to rewrite the curl calls to net/http; however, I'm going to keep those request specs, as I discovered they are a nice idea when it comes to crazy API scenarios (like testing HTTPS + digest authentication).
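For reference, one way to automate that manual launch is to spawn the dummy app from a suite-level hook keyed to the tag. This is only a sketch, assuming the app lives at test/dummy/dummy_app.rb as in the original question, that the tag is called :external_api, and that a short sleep is enough for the server to boot:

RSpec.configure do |config|
  config.before(:all, external_api: true) do
    # spawn the dummy Sinatra app in its own process so only it gets killed later
    @dummy_api_pid = Process.spawn('ruby', 'test/dummy/dummy_app.rb')
    sleep 2 # crude wait for the server to start listening
  end

  config.after(:all, external_api: true) do
    Process.kill('TERM', @dummy_api_pid)
    Process.wait(@dummy_api_pid)
  end
end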
I am writing an automated test suite for a program that has mailing lists. I am trying to decide on the best practice for structuring the tools I am going to use. The tests need to send email to a variety of email addresses, then use the application to perform an action (approve, reject, discard). Finally, the script needs to check its mail and compare the email it has received against the list of emails it expects to receive. Here is the list of tools I am using:
- Ruby
- Rake
- Selenium Webdriver
- Test-unit
- Jenkins
What I wanted to do was to treat everything as a dependency (in Rake) of the last step (checking the email). My problem came when I tried to make every email unique. I plan to embed into each email the time the test was run and a number assigned to that email (the number is the same for each run of the test, so I can identify where each email should go). I need a way to pass the timestamp from the beginning of the test to the end of the test.
The solutions I see to my problem are to get rid of Rake (because I can't, or don't know how to, pass a variable between tasks) or to write to a file and then access the file in the separate tasks.
Any recommendations?
I would advise setting an ENV variable in your Rakefile before each test is run, like this:
ENV['TIMESTAMP_CONTROL'] = Time.now.to_s
You can then reference the variable anywhere in your Rakefile and scripts, like any other Ruby variable, until you reset it again:
assert_equal ENV['TIMESTAMP_CONTROL'], @email_response_text
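A sketch of how that could sit in the Rakefile with test-unit tasks (the task and file names are made up):

require 'rake/testtask'

# one timestamp per rake invocation; child test processes inherit ENV
ENV['TIMESTAMP_CONTROL'] = Time.now.to_s

Rake::TestTask.new(:send_mail) do |t|
  t.test_files = FileList['test/send_mail_test.rb']
end

Rake::TestTask.new(:check_mail) do |t|
  t.test_files = FileList['test/check_mail_test.rb']
end

# checking mail depends on having sent it first; both tasks see the same timestamp
task :check_mail => :send_mail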
So I'm working on a website here, and I would like to run multiple browser tests at one time. What I mean by this is that it should perform my smoke tests on IE, Firefox and Chrome at the same time and report back each browser's results. I'm currently only testing with IE using RSpec and watir-webdriver, but I would like to automate for the other two browsers. Are there any existing gems out there (I haven't been able to find any)? If not, what would be the best way to go about solving this issue?
You should try WatirGrid.
It won't do all the work for you, but it will give you a platform to launch multiple tests at once. Then you can just launch the same test three times, changing the target browser, and the grid will handle where they are executed.
You don't need anything except watir-webdriver to run multiple browsers on the same machine.
ie = Watir::Browser.new :ie
firefox = Watir::Browser.new :firefox
chrome = Watir::Browser.new :chrome
opera = Watir::Browser.new :opera
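If the goal really is to run the same checks in all three browsers at once on a single machine, threads are one rough option (the URL and the check are placeholders):

require 'watir-webdriver'

threads = [:ie, :firefox, :chrome].map do |name|
  Thread.new do
    browser = Watir::Browser.new name
    browser.goto 'http://example.com'
    result = browser.title # collect something per browser to report later
    browser.close
    [name, result]
  end
end

threads.each { |t| p t.value } # e.g. [:firefox, "Example Domain"]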
If you have multiple machines or VMs to work with, Jenkins is the answer. My approach is similar to Chuck's, but instead of a flat configuration file I let Jenkins prompt for these values via drop-down menus, etc. Jenkins is easy to set up and can automatically distribute test jobs to any available machine for testing.
So, I click "Google Search Test" and select "Internet Explorer"... then I do the same thing and select a different browser. Concurrent tests in various browsers, with HTML/email output and a great log history.
I'll also be writing more about this, but I'm still on vacation!
Here is an example of a configuration file (these assign default values if, for example, Jenkins is not used to launch the tests). NOTE: "||=" means "if nil, use this value; if not nil, keep the current value". In other words, I'm only setting values that Jenkins has not already set.
ENV['BROWSER'] ||= "firefox"
ENV['ENVIRONMENT'] ||= "qa"
ENV['LIMIT'] ||= "10"
ENV['DISTRICT'] ||= "any"
ENV['TYPE'] ||= "pkg-new"
# Not necessary, but added for sanity/cleanliness:
$type = ENV['TYPE'].downcase
$browser = ENV['BROWSER'].downcase
$env = ENV['ENVIRONMENT'].downcase
$district = ENV['DISTRICT'].downcase
puts "\t** Testing #{$env.upcase} using #{$browser.upcase}... **"
The Jenkins portion is surprisingly easy - so easy I didn't think it was actually set up. You create a variable for your job, and whatever you name the variable becomes ENV["VariableName"] - immediately available to your script.
So I have a variable named "BROWSER" that is set by a drop down with Firefox, IE and Chrome options. The user has no room to confuse the script with free text, and they can run a custom test whenever they want. My Devs/PMs/Users love me :D.
If you want to run the exact same test code for the tests, you will need to externalize the browser type, either as an environment variable or in a YAML file or some such.
Ruby has some stuff that makes dealing with yaml files super easy (I need to write a blog posting on this) so you can put something at the top of your script that calls a method to get the config info and then set the browser type accordingly.
In my testconfig.yml YAML file I have:
global:
  browser: ie # possible values: ie, ff, chrome
Note that I don't currently test against Opera (market segment too small) or it would be in the list of possible values. The comment is just there to make life easy on whoever might have to edit that file.
I have a read_config method defined in a readconfig.rb file which looks (in part) like this
require 'yaml'
def read_config
  config = YAML.load_file('testconfig.yml')
  $browser_type = config['global']['browser']
end
And at the top of my tests there is code like this
require 'rubygems'
require 'readconfig'
require 'watir-webdriver'
read_config
$browser = Watir::Browser.new $browser_type.to_sym
This way I can have a different config file on each system. The config file also sets up a lot of other things, like the current passwords (changed on a regular basis), which test environment to use, and per-environment settings for things like the test server URLs, database server and name, etc. When developing tests, a simple change to the config file lets me run all the tests against a given browser. Or, if I want to run in parallel, I can have the systems set up with their own customized config files, let them pull the current scripts from source control, and then run them against whatever browser, server, etc. is configured in the config file.
This is probably dirt simple stuff to any accomplished ruby dev, but it's like magic for any of us new to ruby, and especially for getting hard-coded values OUT of my scripts and into one single place where I can control and change them.
I'm building a VS.NET 2010 Load Test solution.
Everything works really well except for one thing. When I record a .webtest script, it grabs the site's domain name, like so:
http://test1/page1
http://test1/page2
So test1 is hardcoded in the script.
What I would like to do is run the same load test against a different test environment; the goal is to compare the two environments without rewriting the recording.
I see that Run Settings has "Context Parameters" - is this it?
For a recorded .webtest, this article shows how to use context parameters with the {{parameterName}} syntax for form parameters; the same mechanism can indeed be used for the server name.
From inside a coded test (which inherits from WebTest), you can access it with:
this.Context["KeyName"]