How do I add Ruby code prior to running all the feature files in Cucumber? I tried to do that directly in the feature file (like ap 'hi'), but Cucumber seemed to ignore that and just proceeded to the scenario.
If you want code to be executed before all features/scenarios are run, you can use the AfterConfiguration hook:
AfterConfiguration do
  # code you want run
end
This hook is executed only once, after Cucumber has finished setting up its environment.
If you want code to run before every scenario, you can use the Before hook:
Before do
  # code you want run
end
And if you want the code to run only for certain scenarios, you can use the tag filtering functionality of the hooks:
@my_scenario
Scenario: This is my scenario

Scenario: This is not my scenario
with the hook
Before('@my_scenario') do
  # code you want run
end
The above hook will only run for the "This is my scenario" scenario.
I have some test setup code that I need to run before any Capybara tests that run JavaScript with the @javascript tag. I don't want the code to run the rest of the time, since this test setup is expensive in terms of system resources and cognitive load.
I've searched the documentation extensively and was unable to find any examples of running arbitrary Ruby before tests based on tagging. Can anyone help me out?
Edit: after thinking about this some more, I only need the code to run once before any tests are run, so this is probably a simpler problem than I first described.
Since you're asking about a @javascript tag I'm assuming you're talking about Cucumber-driven tests; if not, please clarify.
To run code before a test you use Before:
Before('@javascript') do
  # any code here will get run before each test tagged with @javascript
end
To make it only run that code once you'd need to use a global variable
Before('@javascript') do
  $already_run ||= false
  next if $already_run
  # code here will get run once, before the first test tagged @javascript
  $already_run = true
end
Recently I made a simple Ruby application and have been using Minitest to test it.
Following the advice of the Head First Ruby book, I automated this testing using Rake (I'll put the contents of the Rakefile at the end of this post, in case that helps). The tests seem to run fine (everything passes in a way I would expect it to), but I always get this notification at the end of it all:
rvm/gems/ruby-2.3.0/gems/guard-2.14.0/lib/guard/notifier.rb:28: warning: instance variable @notifier not initialized
Testing things manually, by telling ruby which files I want to include, does not have this issue; it only happens when I use "rake test".
As far as I can tell, this is related to setting up Guard while following Michael Hartl's Rails Tutorial, at the end of chapter 3. I followed the directions for setting that up (correctly, as far as I can tell), and this was all in a completely different folder (ultimately my Ruby and Rails projects share the same parent folder, but they sit in separate ruby_projects and rails_projects folders). If possible, I would like to stop this notification appearing for the Ruby application I am testing. Is there a good way to do this?
Contents of the Rakefile I am using, if that helps:
require "rake/testtask"
Rake::TestTask.new(:test) do |t|
t.libs << "lib"
t.test_files=FileList['test/**/test_*.rb']
end
My test file requires minitest/autorun and the file for the application under test, and then defines the normal tests.
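For reference, a minimal sketch of such a test file; the file, class, and method names here are placeholders, not taken from the actual project:

# test/test_my_app.rb (sketch)
require 'minitest/autorun'
require_relative '../lib/my_app' # hypothetical path to the application file

class TestMyApp < Minitest::Test
  def test_answer_is_computed
    # placeholder assertion standing in for the real tests
    assert_equal 42, MyApp.new.answer
  end
end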
Seems like there's some weird conflict...
The reason is that Guard::Notifier isn't connected. Normally, when you run guard, Guard.setup is called, which takes care of this.
If you're not using guard (e.g. interactively), then calling the following from your Rakefile should work around the problem:
Guard::Notifier.connect(notify: false, silent: true)
Guard::Notifier.disconnect
This will initialize the variable.
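In context, that could look roughly like the Rakefile below. This is only a sketch: it assumes the guard gem is loadable from the Rakefile, hence the rescue around the require.

require "rake/testtask"

begin
  require "guard/notifier"
  # Connect and immediately disconnect so the notifier's internal state is
  # initialized, silencing the warning without sending any notifications.
  Guard::Notifier.connect(notify: false, silent: true)
  Guard::Notifier.disconnect
rescue LoadError
  # guard isn't installed here; nothing to work around
end

Rake::TestTask.new(:test) do |t|
  t.libs << "lib"
  t.test_files = FileList['test/**/test_*.rb']
end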
For a faster response, always report such issues on the project page on GitHub. If you can share the project where this occurs, maybe a better fix is possible. (It's best to provide a repository, since it really speeds up fixing things, and errors like this are often very hard to reproduce without the exact code.)
I'm experimenting, and I'm trying to launch a dummy Sinatra application from RSpec and kill it when the spec is finished. Something like:
# spec/some_spec.rb
before(:all) do
  # launch sinatra dummy app
end

after(:all) do
  # kill sinatra dummy app
end

it 'should return list of whatever' do
  expect(JSON.parse(make_request('0.0.0.0:4567/test.json')))
    .to include('whatever')
end
I could use system("ruby test/dummy/dummy_app.rb"), but how can I kill that process only? Does anyone know how I can launch Sinatra from inside a test (or from another Ruby script)? I know about WebMock, but I want to see if I can manage to make my test work this way.
Look under RSpec on "Testing Sinatra with Rack::Test". I'd suggest you use that code as boilerplate to get started.
Just add this to your describe block:
def app
  Sinatra::Application
end
I would suggest you read up on RSpec.
Since you want to test an external system (by the looks of your comment), instead of system "curl whatever.com" you can use Net::HTTP to make requests and then test against the response.
Have a look at "Testing an external API using RSpec's request specs".
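A bare-bones sketch of that approach, reusing the URL from the question; the example name and the expectation are placeholders:

require 'net/http'
require 'json'
require 'uri'

RSpec.describe 'the external test.json endpoint' do
  it 'returns a list including whatever' do
    # Net::HTTP.get returns the response body as a string
    body = Net::HTTP.get(URI('http://0.0.0.0:4567/test.json'))
    expect(JSON.parse(body)).to include('whatever')
  end
end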
Since I'm writing request specs to make sure the features won't break, I decided to instead write separate Cucumber features. The nice thing is that I can use Capybara, and thanks to Selenium WebDriver, I can launch a server before I run my tests.
So I created a dummy Sinatra application that represents the external service to which the code I'm testing makes requests (including a nasty system('curl whatever.com')).
All I have to do is stub the methods that feed curl so they use Capybara.current_session.server.host and Capybara.current_session.server.port.
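A rough sketch of that kind of stubbing, assuming the code under test reads its external API base URL from a configurable setting (the hypothetical MyApp.external_api_url below) rather than hard-coding it:

# features/support/point_at_capybara_server.rb
# Sketch only: MyApp.external_api_url is an assumed configuration hook,
# not something from the original code.
Before do
  server = Capybara.current_session.server
  MyApp.external_api_url = "http://#{server.host}:#{server.port}"
end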
Once I'm done with my refactoring, all I have to do is remove the Capybara server variables and Selenium WebDriver from the Cucumber/Capybara configuration.
After that brief change the tests will still work and still be valid.
Update
In the end I wrote it all with RSpec request tests, as doing it in Cucumber was a little bit time consuming and I had already spent too much time on this.
I mark these kinds of request tests with an RSpec tag, and before I launch them I manually launch a simple Sinatra/Grape dummy API application to which the requests are made. (Then I run the RSpec tests with this tag.)
So basically I end up with specs for functionality that uses net/http, which use WebMock and don't need a server, and request tests for which I need to run the server before I run the specs. So the original question remains: how to launch a server before the tests start.
After I cover all the functionality I'm going to rewrite the curl calls to net/http; however, I'm going to keep those request specs, as I discovered they are a nice idea when it comes to crazy API scenarios (like testing HTTPS plus digest authentication).
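For the part that is still open (starting the dummy server automatically rather than by hand), one possible sketch is to spawn it from an RSpec before(:suite) hook. The file path, the boot wait, and the way the dummy app is started are assumptions, not taken from the original setup:

# spec/spec_helper.rb (sketch)
RSpec.configure do |config|
  config.before(:suite) do
    # Start the dummy API in a child process and remember its PID
    $dummy_api_pid = Process.spawn('ruby', 'spec/dummy/dummy_api.rb')
    sleep 1 # crude wait for the server to boot; polling the port would be more robust
  end

  config.after(:suite) do
    # Stop the dummy API once the whole suite has finished
    Process.kill('TERM', $dummy_api_pid)
    Process.wait($dummy_api_pid)
  end
end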
I have a MiniTest suite. I'm using the basic Minitest::Unit::TestCase, not specs. I have setup and teardown methods defined in my TestCase subclass. They work perfectly when I run a test file like so: ruby test/whatever_test.rb. But when I run rake test, setup and teardown are not called. The relevant portion of my Rakefile is:
require 'rake/testtask'

Rake::TestTask.new do |t|
  t.test_files = FileList['test/*_test.rb']
  t.verbose = true
end
Why wouldn't the setup and teardown be run when Rake::TestTask is used?
I'd paste the test case code into here, but there's quite a lot of it. I'll certainly paste in some subset of it, if there's a particular section you'd like to see.
I'm running Minitest 4.3.2 on Ruby 1.9.3-p194.
The problem was that another test case was overwriting the setup and teardown methods. I had accidentally given two test cases the same class name, which is why the overwriting happened. Naturally, this error didn't happen when I ran a single test case, which explains the difference in behavior when using Rake.
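As an illustration of how that kind of overwrite happens (the class, method, and file names here are invented), giving two test files the same class name means the later file simply reopens the class, and its setup replaces the earlier one when rake loads both files:

# test/widget_test.rb
require 'minitest/autorun'

class WidgetTest < MiniTest::Unit::TestCase
  def setup
    @value = 'from widget_test'
  end

  def test_value_is_set
    refute_nil @value
  end
end

# test/zz_other_test.rb -- accidentally reuses the class name WidgetTest;
# when rake loads this file after widget_test.rb, this setup replaces the
# one above and test_value_is_set starts failing with a nil @value
class WidgetTest < MiniTest::Unit::TestCase
  def setup
    # does not set @value
  end
end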
In my case, I was writing tests for socket communication and had added a helper method named send. Since MiniTest uses send internally to call the teardown methods, it was calling my own send rather than doing normal method dispatch.
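A stripped-down illustration of that collision; the class and the helper's body are made up:

require 'minitest/autorun'

class SocketTest < MiniTest::Unit::TestCase
  # Defining a helper called send shadows Object#send. MiniTest dispatches
  # test and teardown methods by name via send, so its internal calls land
  # here instead, and the real methods silently never run.
  def send(message)
    (@sent ||= []) << message
  end

  def test_messages_are_recorded
    send('ping')
    assert_includes @sent, 'ping'
  end
end

Renaming the helper (for example to send_message) avoids the clash.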
I was wondering if there is any way to run a Cucumber scenario with and without JavaScript without duplicating code.
I'm developing a website that uses HTML5 navigation. However, it should work fine if the browser doesn't support HTML5 features.
I would like to write a Cucumber test for that navigation.
I know I can test basic HTML navigation with a simple Cucumber scenario, and I can test HTML5 navigation with the same scenario but with the @javascript tag.
I would really love to avoid this code duplication.
I was experimenting with around hooks, hoping that I could simply call the block, then call the same block again with
Capybara.using_driver(Capybara.javascript_driver) { block.call }
However this doesn't work.
Anyone have any idea how to implement this?
P.S.
I'm quite new to Ruby, and just started working with Cucumber.
It looks like you need two different scenarios. I'd use the Background feature to avoid repeating steps, but it's a matter of taste.
Based on the solution by Jon M of using the environment variable, you need to set the current_driver before each scenario runs (which seems better than changing the default_driver).
Before do
  if ENV['USE_JS_DRIVER']
    Capybara.current_driver = Capybara.javascript_driver
  end
end
And then running
cucumber .
USE_JS_DRIVER=1 cucumber .
If you don't want to create separate features to deal with both types of browser, then one solution is to use an environment variable to tell cucumber which type of browser driver to use, and invoke cucumber twice.
You'd need to query the environment variable to set the correct driver, probably in env.rb:
if ENV['USE_JS_DRIVER']
  Capybara.current_driver = Capybara.javascript_driver
end
And then you could run either/both of:
cucumber .
USE_JS_DRIVER=1 cucumber .
You'd have to find some useful way of merging the results from both cucumber runs, but depending on your needs this could be a simpler solution than duplicating your scenarios.
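If it helps, one way to wrap the two passes in a single command is a small Rake task. This is only a sketch: the task name is arbitrary, and the inline environment variable assumes a Unix-like shell.

# Rakefile (sketch)
desc 'Run the features once per driver'
task :features_both_drivers do
  sh 'cucumber'
  sh 'USE_JS_DRIVER=1 cucumber'
end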