I'm using Protractor and Jasmine and would like to organize my E2E tests in the best way.
Example:
There is a set of tests that check the registration function (registration with valid credentials, registering as an existing user, etc.).
I need to run those tests in three different projects. The tests are the same, but the credentials are different. One project may have 3 fields in the registration form, another one 6.
Now everything is organized in a very complicated way:
each single test is written not as an "it" but as a function
there is a function which contains all the tests (the test functions)
there is a file with a describe function for each project
in that file there is one "it" which calls the function containing all the tests
there is a test suite for each project
I believe there is an established practice for organizing this the right way, so that each test lives in its own "it". I'd be happy to see some links or advice.
Thank you in advance!
Since it's a broad question, I will point you to a few links. You should probably look at the Page Object model for Protractor. It will help you simplify your tests and set a standard for organizing them in a way that is readable and easy to use. Here's the link to it as described by the Protractor team.
page-object model
However, if you want to know why we need such a pattern: plain Protractor tests have many shortcomings, which can be solved by using it. A detailed explanation is here:
shortcomings of protractor and how to overcome them
EDIT: Based on your comments, I feel that you are trying to make a unified file/function that can cater to all the suites that will be using it. To handle such cases, try adding a generalized function (to fill the form fields, in your case), export that function and then require it into your test suites. Here's a sample link for it -
Exports and require
Hope this helps.
I'm working on a project regarding website test automation, and I hope someone can help me with this question.
How would you recommend setting up automated test processes that would not constantly need to be updated, to cover each of the following core flows for a website:
login
register
sign in with facebook
save an item
delete an item
check that a few key pages (both logged in and logged out) are working
Thanks in advance.
I'm not sure if I understand correctly.
But here is an example of a project setup.
Split your KeywordTest section into two folders:
A Test folder: it should contain all the KeywordTests that are called when you run your test suite. Each of those KeywordTests should test a specific feature (Verify_Login_Fail, Verify_Login_Success...)
A Library folder: it should contain all the KeywordTests that are reusable and likely to be called often by the KeywordTests in your Test folder. It's a kind of function library: it avoids code repetition and is easier to maintain.
For instance, you can create a KeywordTest that takes a login and password as parameters and performs the actions on your website to log a user in.
Store the data that changes often in a file or a database rather than hardcoding it. For instance, a file Login.csv where you store all the combinations of login and password you want to test.
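To illustrate the data-driven idea outside of KeywordTest, here is a minimal sketch in plain Ruby; the Login.csv file name, its login/password headers and the log_in helper are assumptions, not something your project necessarily has:

require 'csv'

# Read every credential pair from Login.csv and run the same login steps for each.
CSV.foreach('Login.csv', headers: true) do |row|
  # log_in is a hypothetical helper that drives the browser through the login form.
  log_in(row['login'], row['password'])
end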
You have to write all the steps in a class that lives in your library package, and then call those methods from the test class in your test package. Use TestNG in your test class, then create a test suite of the test classes and run the script.
I have a Sinatra app with the following structure:
controllers
helpers
models
views
public
I will be using RSpec for testing it. As I see it, there are two kinds of tests: tests using Rack::Test::Methods, to check responses, the content of the body and so on; and tests of the "core" logic, for example whether the method "find_most_expensive" really returns the item with the maximum price, whether a new product is really created, and that kind of thing.
What I'm wondering is how to organise these tests in the spec folder. Should I have only name_of_controller_spec.rb files where both kinds of tests go? Or should they be separate? And how? To sum up, I have never written tests and I don't know exactly how and where to put them :( Any kind of advice would be appreciated! :)
You could make just three folders: controllers (with, for example, posts_spec.rb), models (containing post_spec.rb, where you test the methods you have implemented in your model) and helpers (say, utils_spec.rb).
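As a rough illustration, a controller spec could use Rack::Test like the minimal sketch below; the App class name, the spec_helper that loads your Sinatra app and the /posts route are all assumptions:

# spec/controllers/posts_spec.rb
require 'rack/test'
require_relative '../spec_helper' # assumed to require your Sinatra app

RSpec.describe 'Posts controller' do
  include Rack::Test::Methods

  # Rack::Test needs an `app` method that returns the Sinatra application.
  def app
    App # hypothetical Sinatra::Base subclass
  end

  it 'responds successfully to GET /posts' do
    get '/posts'
    expect(last_response).to be_ok
  end
end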
Take a look at Testing Sinatra with Rack::Test and some repositories on GitHub to get a better idea of how you should organize your code.
https://pragprog.com/book/7web/seven-web-frameworks-in-seven-weeks
The source code is free and it has a chapter on Sinatra tests.
I know, I just ran them while updating the deprecated should syntax to expect.
They use shell scripts for their testing, but you may already be familiar with what you want to use. The point is to notice the different names for the folders, not the tests themselves; maybe that could work for you. I like the answer posted first; the two can complement each other.
Something I noticed with RuboCop is that it flags test methods longer than about 25 lines, so I would say keep them broken down into small groups.
Ruby Koans has tests that you could look at too.
I just completed writing detailed RSpec Capybara integration and unit tests for a Rails app, including mocking an OmniAuth (Twitter) login, filling in forms, data validations, etc. However, I am wondering whether there is a need to write separate controller or functional tests.
I would appreciate your input and any links to further reading, etc.
I'll play devil's advocate here, since I know I'm probably in the minority with this opinion: I actually prefer to do exceedingly thorough controller testing. A few reasons:
1) I find it easier to systematically test every path and outcome at the controller level than at the integration-test level. My integration tests are primarily just happy paths, plus some of the more common error paths.
2) A lot of potential security issues occur at the controller level. Thorough testing helps me ensure that nothing malicious can get through to my model logic.
3) This is subjective, but it really forces me to think about some of the long-tail paths that my application might go through. What if someone tries to force an invalid password-reset token into the URL? Controller testing ensures that I consider all the options.
4) Unlike integration tests, they're fairly straightforward to write. Each action is just a Ruby method!
Personally, I think that if your request (integration) specs exercise all the code paths, you're covered. Ryan Bates has a great Railscast about how he tests here: http://railscasts.com/episodes/275-how-i-test?autoplay=true and at about 5:05 in he says a similar thing. Like you, I prefer to write integration tests rather than controller specs. Most of the time controllers simply front CRUD-type operations anyway (especially if you're careful about keeping domain logic out of the controller), so all you're testing is the scaffolding.
We have an isolated test automation team responsible for automating only the Watir + Cucumber functional test cases. Their code base is not attached to the Rails app that the other developers are working on, but kept separate. We have automated several test cases so far, and the problem we have now is that some of the (Watir/Cucumber) test cases require certain data to already exist in the db, so that the test case can focus only on the problem statement and not on creating the data it requires.
For example, if a test has to check whether rating works for a post, it requires that a post object already exist and just checks the rating, rather than first creating the post object and then checking its rating.
What are the best approaches here? We have fixtures and factory_girl for Rails unit testing; what is there for Cucumber specs? Or should we make use of features only here? These testers may not know all the models that exist; do they need to be aware of them in order to use fixtures via the Rails model interface?
My idea was that a feature file should not point to or talk about any model, which feels like meta stuff. The Watir test cases should only be aware of the web application/browser as the interface for talking to the application. They should not know any other interface (fixtures/models). Hence they should create their own data themselves, using the single interface they know.
Again, what I want to know is: is there any Ruby library/code that, given table names, column names and values (much like fixture YAML), along with db parameters, will simply insert them into the db without the context of a Rails environment? Then testers whose environment is isolated from the Rails web developers would be able to work on their own. Rails fixtures and factory_girl seem to be tightly coupled to Rails, or am I incorrect?
Like Chirantan said, you could use factory_girl with Cucumber.
Just as you require your factories in Test::Unit or RSpec, you can do the same in Cucumber's env.rb file or any custom config file; a short sketch follows the links below.
http://robots.thoughtbot.com/post/284805810/gimme-three-steps
http://www.claytonlz.com/2010/03/zero-to-tested-with-cucumber-and-factory-girl/
http://www.andhapp.com/blog/2009/11/07/using-factory_girl-with-cucumber/
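A minimal sketch of that env.rb wiring, assuming factory_girl and a factories.rb file under features/support (the file locations and the syntax helper follow common convention and are assumptions here):

# features/support/env.rb
require 'factory_girl'

# Load the factory definitions, e.g. features/support/factories.rb.
require File.expand_path('factories', File.dirname(__FILE__))

# Lets step definitions call create(:user) and build(:post) directly.
World(FactoryGirl::Syntax::Methods)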
When using Cucumber, the Given statement sets the test situation up:
Given I have a basic user with a password
and the When statement triggers the test:
When the user logs in
and the Then statement checks the test results:
Then they see the basic menu
The data gets loaded in the Given statement.
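For example, the step definition behind that Given might create the record with factory_girl; a minimal sketch (the :user factory and the password attribute are assumptions):

# features/step_definitions/user_steps.rb
Given(/^I have a basic user with a password$/) do
  # Creates the precondition data directly, so the scenario can focus on the login itself.
  @user = FactoryGirl.create(:user, password: 'secret123')
end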
The Wikipedia article on test-driven development says to first write a test that fails because the feature does not exist, and then build the code to pass the test. What does this test look like?
How do you figure out what test will best represent the feature you want to create?
Can someone give an example?
For example, if I add a logout button feature to a web application, would the test be hitting the page and looking for the button? Or what?
I've heard test-driven development is nice for regression testing; I just don't know how to start integrating it into my work.
Well, obviously some areas are more suited to TDD than others, and frontend development is one of the areas I find difficult to do TDD on. But you can.
You can use WatiN or WebAii to do that kind of test. You could then:
Write a test that checks whether a button exists on the page ... fail it, then implement it, and pass.
Write a test that clicks the button and checks that something changes on the frontend; fail it, implement the feature and pass the test.
But normally you would test the logic behind the actions you perform. You would test the logout functionality on your authentication service, which is called by your event handler in WebForms, or by the controller action in MVC.
What does this test look like?
A test has 3 parts.
it sets up a context
it performs an action
it makes an assertion that the action did what it was supposed to do
How do you figure out what test will best represent the feature you want to create?
Tests are not based on features (unless you are talking about a high-level framework like Cucumber); they are based on "units" of code. Typically a unit is a function, and you will write multiple tests to assert that all possible behaviors of that function work correctly.
Can someone give an example?
It really varies based on the framework you use. Personally, my favorite is shoulda, which is an extension to the Ruby Test::Unit framework.
Here is a shoulda example from the README. In a BDD-style framework like this, the contextual setup happens in its own block:
class UserTest < Test::Unit::TestCase
  context "A User instance" do
    setup do
      @user = User.find(:first)
    end

    should "return its full name" do
      assert_equal 'John Doe', @user.full_name
    end

    context "with a profile" do
      setup do
        @user.profile = Profile.find(:first)
      end

      should "return true when sent #has_profile?" do
        assert @user.has_profile?
      end
    end
  end
end
For example, if I add a logout button feature to a web application, would the test be hitting the page and looking for the button? Or what?
There are 3 main types of tests.
First you have unit tests (which is what people usually assume you are talking about when you talk about TDD). A unit test tests a single unit of work and nothing else. This means that if your method usually hits a database, you make sure that it doesn't actually hit that database for the duration of the test (using a technique called "mocking").
Next, you have integration tests. An integration test usually involves interaction with the infrastructure and is more "full stack" testing. So from your top-level API, if you have an insert method, you would go through the full insert and then check the resulting data in the database. Because there is more setup in these sorts of tests, they shouldn't really be run from developer machines (it is better to automate them on your build server).
Finally, you have UI testing. This is the most unreliable, and requires a UI scripting framework like Selenium or Watir to automate clicking around your UI. Don't go crazy with this sort of testing, because these tests are notoriously fragile (a small change can break them), and they won't catch whole classes of issues anyway (like styling).
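To make the mocking idea concrete, here is a minimal sketch using rspec-mocks; the AuthService class and its repository collaborator are made up purely for illustration:

require 'rspec'

# Hypothetical service under test: it only talks to a repository object.
class AuthService
  def initialize(repository)
    @repository = repository
  end

  def logout(user_id)
    @repository.end_session(user_id)
  end
end

RSpec.describe AuthService do
  it 'ends the session through the repository, never touching a real database' do
    repository = double('UserRepository') # test double instead of the real db layer
    expect(repository).to receive(:end_session).with(42).and_return(true)

    expect(AuthService.new(repository).logout(42)).to be true
  end
end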
The unit test would call the logout function and verify that the expected results occurred (the user's login record was ended, for example).
Clicking the logout button would be more like an acceptance test, which is also a good thing to do and (in my opinion) well within the scope of TDD, but it tests TWO features: the button, and the resulting action.
It depends on what platform you are using as to how your tests would look. TDD is much harder in ASP.NET WebForms than in ASP.NET MVC because it's very difficult to mock up the HTTP environment in WebForms to get the expected state of Session, Application, ViewState etc., as opposed to ASP.NET MVC.
A typical test is built around Arrange, Act, Assert:
// Arrange
... setup needed elements for this atomic test
// Act
... set values and/or call methods
// Assert
... test a single expected outcome
It's very difficult to give deeper examples unless you let us know the platform you plan to code with. Please give us more information.
Say I want to make a function that adds one to a number (a really simple example).
First off, write a test that says f(10) == 11, then one that says f(10) != 10. Then write a function that passes those tests. If you realise the function needs more capabilities, add more tests.
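In Ruby's Test::Unit, those two tests and the simplest implementation that passes them might look like this (the add_one name is just an example; the tests fail until the function exists):

require 'test/unit'

class AddOneTest < Test::Unit::TestCase
  def test_adds_one_to_ten
    assert_equal 11, add_one(10)
  end

  def test_result_differs_from_input
    assert_not_equal 10, add_one(10)
  end
end

# The simplest implementation that makes both tests pass.
def add_one(n)
  n + 1
end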
The test would make sure that when the logout function was executed, the user was successfully logged out. Generally a unit testing framework such as NUnit or MSTest (for .NET) would be used.
Web applications are notoriously hard to unit test because of all the contextual information generally required to execute server code on a web server. However, a typical example would mock up that information, call the logout logic, and then verify that the correct result was returned. A loose example is an MVC-style test using NUnit and Moq:
[Test]
public void LogoutActionShouldLogTheUserOut()
{
    var mockController = new Mock<HomeController>() { CallBase = true };
    var result = mockController.Object.Logout() as ViewResult;

    Assert.That(result.ViewName == "LogoutSuccess",
        "Logout function did not return logout view!");
}
This is a loose example because really it's just testing that the "LogoutSuccess" view was returned, and not that any logout logic was executed. In a real test I would mock an HttpContext and ensure the session was cleared or whatever, but I just copied this ;)
Unit tests would not test that a UI element was properly wired up to an event handler. Ensuring that the whole application works from top to bottom is called integration testing, and you would use something besides unit tests for it. Tools such as Selenium are commonly used for web integration tests, whereas macro-recording programs are often used for desktop applications.