Automated spider test - continuous-integration

I'm looking to add a very simple layer of automated integration testing to our current Continuous Integration setup. (CI currently only checks for build breaks).
Is there a product that will:
- From a base URL, spider a site and report back any 404/500 error codes?
- Allow me to add a step to log on, to be able to spider the authorized pages?
Bonuses / would-be-nice:
- Report JS errors
- Report 404s linked from CSS
I've had a quick look at SilkTest & Selenium, and they don't seem to feature quite such a site-agnostic approach. (The logon step is obviously something they can do...)
We simply want to cull out the simplest/dumbest regression errors, and we have an absolute minimum of time to implement such an automated check, hence the spidering. Ideally the solution can be run on the command line and output its results in something I can parse into TeamCity (our continuous integration package).
Much appreciated.

Here is a list of utilities to look at.

SilkTest should be able to handle your use case. You'll need to write a script that navigates through your site; depending on its complexity, a simple recursive descent might be sufficient. If it gets more complex, you might need some sort of already-visited-URL set to avoid infinite loops.
As for the results, if you use either Silk4J or Silk4Net which both use xUnit runners to drive the tests, I assume you should be able to get the results into TeamCity.
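For a rough sense of scale (this is not one of the utilities above, just an illustration): the recursive-descent-plus-visited-set approach fits in a short script. Here is a minimal sketch in TypeScript, assuming Node 18+ for the global fetch; BASE_URL is a placeholder, and the logon step and JS-error reporting are left out:

// spider.ts - a minimal sketch, not a production crawler
const BASE_URL = "http://localhost:8080/"; // placeholder for your site

async function spider(base: string): Promise<void> {
  const visited = new Set<string>(); // the already-visited set that avoids infinite loops
  const queue: string[] = [base];
  const failures: Array<{ url: string; status: number }> = [];

  while (queue.length > 0) {
    const url = queue.shift()!;
    if (visited.has(url)) continue;
    visited.add(url);

    const res = await fetch(url);
    if (res.status >= 400) {
      failures.push({ url, status: res.status });
      continue;
    }

    // Crude link extraction; a real crawler would use an HTML parser.
    const html = await res.text();
    for (const match of html.matchAll(/href="([^"#]+)"/g)) {
      try {
        const next = new URL(match[1], url);
        if (next.origin === new URL(base).origin) queue.push(next.href);
      } catch {
        // ignore unparsable hrefs
      }
    }
  }

  for (const f of failures) console.error(`${f.status} ${f.url}`);
  process.exitCode = failures.length > 0 ? 1 : 0; // non-zero fails the CI step
}

spider(BASE_URL);

A non-zero exit code is enough for TeamCity to fail the build, and the per-URL lines on stderr give you something to parse.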

Related

Reuse Cucumber steps across feature files in Cypress

Is there a way to reuse steps in our features from "other" step files?
I.e. I have a project with a login page and a top bar that I want to test after login:
I've got LoginPage.feature and a LoginPage.js step file; everything works fine and all tests run correctly.
I would like to reuse the steps “Given user open TestPage login page” and “When user login using valid credentials” from LoginPage.js in TopBarCmp.feature:
But it always ends with an error:
A long time ago I used SpecFlow (Cucumber for .NET), and it was normal to reuse steps with the same signatures across all features.
What is the correct way of handling this kind of situation, where we would like to reuse a part that was already automated?
Looks like you can put them either in cypress/integration/common or in cypress/support/step_definitions, and they will be available to share across features.
This article explains it in more detail: https://www.linkedin.com/pulse/part-2-hands-on-test-automation-project-cypress-reis-fernandes
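As a sketch of what such a shared file can look like, assuming the cypress-cucumber-preprocessor package and hypothetical selectors and routes (the step texts are the ones quoted in the question):

// cypress/support/step_definitions/login.steps.js
import { Given, When } from "cypress-cucumber-preprocessor/steps";

// Once the steps live here, both LoginPage.feature and TopBarCmp.feature can use them.
Given("user open TestPage login page", () => {
  cy.visit("/login"); // hypothetical route
});

When("user login using valid credentials", () => {
  cy.get("[data-test=username]").type(Cypress.env("USERNAME")); // hypothetical selectors
  cy.get("[data-test=password]").type(Cypress.env("PASSWORD"), { log: false });
  cy.get("[data-test=submit]").click();
});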

Trying to locate an element on a page that contains "/abc/def"

So I have written a bunch of NW tests for our dev environment. Unfortunately, being new to automated testing and to this product, I have learned that dev is not the same as our prod environment. The difference is that the dev login button matches 'href*="/abc/def"' while prod matches 'href*="www.example.com/abc/def"'. The parents and classes of these elements are too different to be usable.
I am just creating the page objects and wondering if there is a way to store the selector with either a wildcard (like SQL's %) or something like href.contains?
I apologise if none of this makes sense, I am completely fresh to programming in general.
You can click the login button using browser.click(cssSelector).
The CSS selector should look like this: a[href*="/abc/def"]
So you end up with: browser.click('a[href*="/abc/def"]')
The asterisk works as a wildcard which matches a substring; see also this additional information about different approaches.
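Assuming NW here means Nightwatch, a minimal test sketch using that substring selector (the test name and page are placeholders):

// tests/login.ts - minimal sketch; assumes a Nightwatch + TypeScript setup
import { NightwatchBrowser } from "nightwatch";

export default {
  "login link is clickable in dev and prod"(browser: NightwatchBrowser) {
    browser
      .url(browser.launch_url) // launch_url comes from the per-environment config
      // href*= matches a substring, so one selector covers both the dev
      // "/abc/def" and the prod "www.example.com/abc/def" hrefs
      .click('a[href*="/abc/def"]')
      .end();
  },
};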

Common asserts in any automation project

Can anyone briefly explain the common asserts to consider in any automation project, whether it is an in-house or a public web application? For example, I am presently using Selenium (Java) to automate an eCommerce web application. As this is my first website to automate, I am running out of ideas for what to verify, apart from the few I know, mentioned below:
1. Verify each page title
2. Verify a button, text, link, image, custom text, etc.
Apart from these, is there anything else I can verify? Please feel free to correct my question, and if you have worked on various automation projects, which areas did you add asserts to in order to verify or validate something on a webpage?
Basically, you do automation to decrease the execution time of regression cycles by automating the test cases related to the functionality of the application. So first develop test cases, using test design techniques like ECP, BVA, etc.
Each test case must have an assertion, i.e. an expected result or behaviour (otherwise it won't be called a test case).
This assertion can be anything, like:
- whether login is successful after giving valid credentials
- whether an error message is shown after entering wrong credentials, etc.
Selenium helps us automate web interactions (navigation, clicks, entering text, etc.) and doesn't perform any assertions for you.
Assertions are provided by frameworks like JUnit and TestNG (in Java) via their assertion classes. There is also built-in support in some programming languages, like the assert keyword in Python and Java (http://docs.oracle.com/javase/7/docs/technotes/guides/language/assert.html).
So the things you mentioned in your question as common assertions (verify each page title, etc.) are just web interactions; they don't decide whether a test is PASS or FAIL. It is you who defines the criteria for whether a test is PASS/FAIL.
For example, there is a test case related to successful login.
Here, you automate web interactions like navigating to the login page, entering credentials, and clicking the Submit button.
Then, to validate whether you successfully logged in or not, you look for a web element on the logged-in user's home page (like a "welcome user" message). In automation, you try to find the "welcome user" text via a web element, then use the assertions provided by the framework to assert whether the expected message is present on the page, like
Assertions.assertEquals(expected_message, actual_message); // just an example
If expected_message and actual_message are the same, the method doesn't throw any exception, which results in the framework marking the test case as PASS.
If they are not the same, assertEquals throws an AssertionError, which results in the framework marking the test case as FAIL.
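To make the whole flow concrete, here is a sketch of that login test. The question is about Java, but this sketch uses the Node selenium-webdriver bindings to keep the code samples in this write-up in one language; the URL, locators, and welcome text are all hypothetical:

import assert from "node:assert";
import { Builder, By, until } from "selenium-webdriver";

async function loginShowsWelcome(): Promise<void> {
  const driver = await new Builder().forBrowser("chrome").build();
  try {
    // Web interactions: navigate, enter credentials, submit.
    await driver.get("https://shop.example.com/login"); // hypothetical URL
    await driver.findElement(By.id("username")).sendKeys("validUser");
    await driver.findElement(By.id("password")).sendKeys("validPass");
    await driver.findElement(By.css("button[type='submit']")).click();

    // The assertion is what makes this a test: expected vs. actual.
    const banner = await driver.wait(
      until.elementLocated(By.css(".welcome")), // hypothetical locator
      5000
    );
    const actualMessage = await banner.getText();
    assert.strictEqual(actualMessage, "Welcome user"); // throws on mismatch -> FAIL
  } finally {
    await driver.quit();
  }
}

loginShowsWelcome().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});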

One set of tests for a few projects with different parameters

I'm using Protractor and Jasmine and would like to organize my E2E tests in the best way.
Example:
There is a set of tests to check the registration function (registration with valid credentials, registering as an existing user, etc.).
I need to run those tests in three different projects. The tests are the same, but the credentials are different. One project might have 3 fields in the registration form; another might have 6.
Right now everything is organized in a very complicated way:
- each single test is written not as an "it" but as a function
- there is one function which contains all the tests (the testing functions)
- there is a file with a describe block for each project
- in that file there is one "it" which calls the function that contains all the tests
- there is a test suite for each project
I believe there is a practice for organizing everything the right way, so that each test is in its own "it". I would be happy to see some links or advice.
Thank you in advance!
Since it's a broad question, I will redirect you to a few links. You should probably be looking at Protractor's page object model. It will help you simplify and standardize your tests in a way that is readable and easy to use. Here's the link to it as described by the Protractor team:
page-object model
However, if you want to know why such a pattern is needed, there are many shortcomings it can solve. A detailed explanation is here:
shortcomings of protractor and how to overcome them
EDIT: Based on your comments, I feel that you are trying to make a unified file/function that can cater to all the suites that will be using it. To handle this, try adding a generalised function (to fill form fields, in your case), export that function, and then require it into your test suites; a sketch follows below. Here's a sample link:
Exports and require
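For illustration, a minimal sketch of that approach, where each project keeps its own spec with its own "it" blocks and only the data differs (file names, field names, and the config import are hypothetical):

// registration.helpers.ts - the shared, exported helper
import { element, by } from "protractor";

export async function fillRegistrationForm(
  fields: Record<string, string> // per-project map of field name -> value
): Promise<void> {
  for (const [name, value] of Object.entries(fields)) {
    await element(by.name(name)).sendKeys(value);
  }
  await element(by.buttonText("Register")).click();
}

// projectA.registration.spec.ts - one spec per project, each test in its own "it"
import { fillRegistrationForm } from "./registration.helpers";
import { validUser, existingUser } from "./config/projectA"; // hypothetical data file

describe("registration (project A)", () => {
  it("registers with valid credentials", async () => {
    await fillRegistrationForm(validUser);
    // expect(...) the project-specific success state here
  });

  it("rejects an already registered user", async () => {
    await fillRegistrationForm(existingUser);
    // expect(...) the duplicate-user error here
  });
});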
Hope this helps.

Facebook test users with user-to-user requests

I asked this question last week but only got 8 views.
A part of the application I'm working on requires creating a ton of user-to-user requests and validating that they all get processed correctly in the application. This requires countless hours of QA work and could be automated with a simple script like:
# Koala gem: create a network of test users, then try to send each
# request type between every ordered pair of them.
users_api = Koala::Facebook::TestUsers.new(config)
users = users_api.create_network(10, true, "email,user_likes,publish_actions")
users.permutation(2) do |u1, u2|
  graph = Koala::Facebook::API.new(u1['access_token'])
  requests_types.each do |req| # requests_types: app-specific list, defined elsewhere
    graph # .user_to_user_request(u2, req) Oh noes I can't do this part
  end
end
Everything I've seen points to the fact that it's impossible to create user-to-user requests in a script, even for test users. Is there any other (automated) way to do this?
Edit
What I'm trying to find is a way to create the user-to-user requests; the validation would still be done manually by the QA team. The problem we're facing is that they need to create 90 requests, make sure they didn't skip a single one, and then validate the data.
The solution to this is a tricky one. You probably have two options, depending on what you need.
The first is to manually provide access tokens for the tests. That would require creating several fictional users or gathering access tokens from friends via the API Explorer. This is of course very inconvenient, but it's probably needed for the second idea, so I'm mentioning it. The question is how many users you will need to test; in most situations 3-4 users should be enough for the test cases.
The second idea requires actually running the test suite once using the first approach and recording the results with gems like webmock or fakeweb. This lets you capture what the API responds with and replay it in later tests, without needing to regenerate tokens. It should also speed up your tests significantly, since you avoid waiting on each request to the FB API.
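As a sketch of that record-and-replay idea, shown with nock (a Node analogue of WebMock, used here to match the other code samples in this write-up; the endpoint and fixture are hypothetical):

import nock from "nock";

// One-time recording pass: prints interceptor definitions for the real
// responses as the suite runs with live tokens.
// nock.recorder.rec({ output_objects: true });

// Replay: serve the recorded response instead of hitting the Graph API.
nock("https://graph.facebook.com")
  .get("/me") // hypothetical endpoint
  .query(true) // match any query string (access_token, etc.)
  .reply(200, { id: "100000000000001", name: "Test User" }); // recorded fixture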
