How to access different test fragments present in an external test plan - JMeter

I want to create a common test plan and define multiple test fragments in that file, then use specific test fragments in specific test plans. Here is an outline of the two test plans:
common-test-plan.jmx

common-test-plan
|--TestFragment1
|  |-Sampler11
|
|--TestFragment2
|  |-Sampler21

Specific-test-plan.jmx

Some-Test-plan
|--ThreadGroup1
|  |-IncludeController
|  |-Module Controller (referencing the Include Controller)
|  |-Sampler1
|  |-Sampler2
I used an Include Controller to include the external test plan's components. When I use the Module Controller, it only shows the Include Controller in the list; it doesn't show the test fragments present in the external test plan.
Is there any way I can use just some of the test fragments present in the external test plan?

IncludeController references aren't loaded until you run the test plan, which means the ModuleController can't reference their internals.
The best you can do without a code change is one Test Fragment per Include Controller: use a Test Fragment in your main test plan to include them all. Module Controllers can then reference each included file to execute its contents.
(Answer based on Anthony Johnson's reply on the JMeter mailing list.)

Related

Is there a way to specify a where condition for all tests in a model in DBT?

I would like to write some tests for my dbt model, but I only want the tests to cover certain rows (data within the last month). I could write a where clause for every single test in the .yml file, like so:
- not_null:
    config:
      where: "current_date - date_column <= 30"
However, I was wondering if there is some shortcut to put the clause on the model and have the where clause apply to all of its tests (which is a lot easier to write and also means I don't have to worry about forgetting it if I add more tests).
This article gives an example of how to do that at the project level, but I don't want it for the whole project, just one model.
Any config that can be applied in dbt_project.yml can be scoped to specific directories or resources. However, tests are their own resources, independent of the models they test, and currently (dbt v1.2) it is not possible to apply config to all tests for a given model.
As a workaround, you could consider putting the .yml file that defines the tests for that model in a directory by itself, and applying the config to a directory.
Apply where to a whole project:
tests:
  +where: "date_column = current_date"
Apply where only to .yml files nested in the models/marts/finance/ directory:
tests:
  my_project:
    marts:
      finance:
        +where: "date_column = current_date"
Apply where to a specific test:
tests:
  my_project:
    marts:
      finance:
        not_null_revenue_id:
          +where: "date_column = current_date"
See the docs for resource-path for more info
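To illustrate the workaround, the directory-scoped config keys off wherever the tests are defined, so you could isolate the model's test definitions in their own directory. A hypothetical models/marts/finance/finance.yml (the model and column names here are invented for illustration) might look like:

```yml
# models/marts/finance/finance.yml -- hypothetical model/column names
version: 2

models:
  - name: revenue
    columns:
      - name: revenue_id
        tests:
          - not_null
          - unique
```

With this file alone in models/marts/finance/, the directory-level +where config in dbt_project.yml applies only to these tests.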

Cucumber Transforms for Multiple Variable Scenario Outline Examples

I have a set of functionally similar websites that I want to write Cucumber specs for, to drive both development and Selenium browser tests. The sites are in different languages and will have different URLs, but will have mostly the same features.
An example scenario might be
Scenario Outline: Photo Gallery Next Action
Given I visit a "<photo-gallery-page>"
When I click "<next-button>" in the gallery
Then the photo should advance
Examples:
| photo-gallery-page | next-button |
| www.site1.com/photo-gallery | Next |
| www.site2.com/la-galerie-de-photos | Suivant |
This is fine when I have a small number of scenarios and examples. However, I'm anticipating hundreds of scenarios and fairly regular launches of new sites. I want to avoid having to edit each scenario to add examples when launching new sites.
I think I need to store all my example variables in a per site configuration, so that I can run the same scenario against all sites. Then I can add new configurations fairly easily and avoid editing all the scenario examples and making them unreadable.
site[:en].photo-gallery-page = 'www.site1.com/photo-gallery'
site[:fr].photo-gallery-page = 'www.site2.com/la-galerie-de-photos'
site[:en].next-button = 'Next'
site[:fr].next-button = 'Suivant'
One option would be to store this config somewhere, then generate the site-specific Gherkin files using a script. I could then run these generated files, which would contain the required examples.
I'm wondering if there's an easier way. My other idea was to use table transforms to replace the Examples blocks. I've had a read, but as far as I can tell I can only transform a table (and replace it with a custom code block) if it's an inline table within a step; I can't transform an Examples block in the same way.
Have I understood that correctly? Any other suggestions on how best to achieve this?
I wonder if there's a better way... This all feels very brittle.
What if:
Given I follow a link to the gallery "MyGallery"
And the gallery "MyGallery" contains the following photos:
|PhotoID|PhotoName|
|1 |MyPhoto1 |
|2 |MyPhoto2 |
And the photo "MyPhoto1" is displayed
When I view the next photo
Then the next photo "MyPhoto2" should be displayed
Note that you've taken out the notion of button names, etc. - implementation details that are presumably better defined in your step definitions. The behaviour you're defining is simply going to a gallery, viewing an image, requesting the next one, viewing the next image. Define how in your step definitions.
There's some reading I found very useful on this topic at http://cuke4ninja.com/. Download the PDF and check out the web automation section (it details the web automation pyramid).
To address your configuration problem, maybe you could define some kind of config class and supply it to the step definition files via dependency injection. You could make it site-specific by loading different config files, as you suggested, in its constructor. Step definitions could then pull the relevant site-specific data from the config class's properties. I think this would make your scenarios more readable and less brittle.
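A minimal Ruby sketch of that config-class idea (the class name, accessor names, and the per-site YAML layout are all assumptions for illustration, not part of the original answer):

```ruby
require 'yaml'

# A tiny site-specific configuration holder. Step definitions receive an
# instance of this class instead of hard-coding URLs and button labels.
class SiteConfig
  def initialize(data)
    @data = data
  end

  # Load the settings for one site, e.g. config/en.yml or config/fr.yml
  def self.load(site, dir: 'config')
    new(YAML.load_file(File.join(dir, "#{site}.yml")))
  end

  def photo_gallery_page
    @data.fetch('photo_gallery_page')
  end

  def next_button
    @data.fetch('next_button')
  end
end

# A step definition could then read from the config instead of the
# Examples table, e.g.:
#   Given('I visit the photo gallery') do
#     visit site_config.photo_gallery_page
#   end
```

Launching a new site then means adding one YAML file rather than editing every scenario's Examples block.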

What is the best strategy for BDD testing which relies on data

What are some strategies for writing BDD tests, which can test behaviour that relies on certain data being in the system?
For example, say I was working with the following scenario:
Feature: Search for friend
In order to find a friend
As a user
I want to search my list of friends
And filter by 'first name'
How could this test ever succeed unless/until some "dummy" friends had been entered into the system?
More to the point, what "dummy" criteria would the test utilize?
Should I hard-code the name of a friend, assuming it to already exist in the database?
But what if I move my code to a new environment with a fresh database?
Or, should I write code to manually insert dummy data into the system prior to executing each test?
But this would be modifying the internal state of the application from within a test framework, which seems like a bad approach, since we're supposed to be treating the program as a black-box, and only dealing with it through an interface.
Or, would I create other scenarios/tests, in which the data is created using an interface of the program?
For example, 'Feature: Add a new friend to my list'. Then I could run that test, to add a user called 'Lucy', then run the 'Search for friend' tests to search for 'Lucy', which would now exist in the database.
But, then I'd be introducing dependencies between my scenarios, which contradicts the common advice that tests should be independently runnable.
Which is the best strategy? Or is there a better way?
You would use the Given clause in your scenario to get the system into the appropriate state for the test. The actual implementation of this would be hidden in the step definition.
If the data is going to shared across your scenarios then you could have in a background step:
Background:
Given I have the following friends:
| andy smith |
| andy jones |
| andrew brown |
To add these friends you could either insert records directly into the database:
def add_friend(name)
  Friend.create!(:name => name)
end
or automate the UI, e.g.:
def add_friend(name)
  visit '/friends/new'
  fill_in 'Name', :with => name
  click_button 'Add'
end
For the scenarios themselves, you would need to think of key examples to validate the behaviour, e.g.:
Scenario: Searching for an existing person by first name
When I search for 'andy'
Then I should see the friends:
| andy smith |
| andy jones |
But I should not see "andrew brown"
Scenario: Searching for a non-existing person by first name
When I search for 'peter'
Then I should not see any friends
You're correct that tests should be independent, so you shouldn't rely on other scenarios to leave the database in a particular state. You will probably need some mechanism to clean up after each test, for example the database_cleaner gem if you're using Cucumber and Rails.
You are referring to BDD and integration-style testing. If you use a decent ORM (NHibernate?) you can create an in-memory database before each test runs and clean it up after the test succeeds; since the DB is in memory, it won't take much time compared to running against a real database.
You can use pre/post-test hooks to insert the data your scenario needs and clean it up afterwards, so that your tests can run without depending on each other.
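In Cucumber that pre/post pattern usually lives in Before/After hooks. A self-contained Ruby sketch of the idea (the FriendStore class and its API are invented stand-ins for a real database layer):

```ruby
# A toy in-memory store standing in for the database. It is created
# fresh before each scenario and emptied afterwards, so scenarios stay
# independent of one another.
class FriendStore
  def initialize
    @friends = []
  end

  def add(name)
    @friends << name
  end

  # First-name search: keep friends whose name starts with the prefix.
  def search(prefix)
    @friends.select { |n| n.start_with?(prefix) }
  end

  def clear
    @friends.clear
  end
end

# In Cucumber this would typically be wired into hooks, e.g.:
#   Before { @store = FriendStore.new }
#   After  { @store.clear }
store = FriendStore.new          # "Before" hook: fresh state
store.add('andy smith')
store.add('andy jones')
store.add('andrew brown')
store.search('andy')             # matches the two andys, not andrew
store.clear                      # "After" hook: clean up
```

Because each scenario builds and tears down its own data, no scenario depends on another having run first.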

In MSTest's Test View, how can I get a list of all Tests with no category?

I'm using MSTest and most of my unit tests have no TestCategory attribute, but I want to put a few tests into a "Slow" category. Then I want to be able to easily run all the tests that have no category assigned.
When I go to Test View, I can filter by Test Categories, but I can't filter on an empty keyword. I can easily find my "Slow" ones, but how do I find my non-Slow ones? I'm trying to avoid putting a test category on all my tests.
I guess it's not the end of the world if I have to... a search and replace should get them all, but if there's a way to find the non-categorized, I would like to know.
You can also exclude these tests from the command line.
mstest /testcontainer:foo.tests.dll /category:!Slow
In the test list editor, add Test Categories as a column, group by None, and sort on Test Categories. Tests with no categories will be at the top. Unfortunately, you can't group by Test Category.
It seems that /category:!Slow doesn't work at all. Bad thing!
So it's necessary to put, for example, a [TestCategory("unit")] attribute on all of the tests.
Another, better way is to separate unit and integration tests into different projects and run them separately.

Best way to associate data files with particular tests in RSpec / Ruby

For my RSpec tests I would like to automatically associate data files with each test. To clarify: if my tests each require an XML file as input data, and then some XPath statements to validate the responses they get back, I would like to externalize the XML and XPath as files and have the testing framework associate them with the particular test being run, using the unique ID of the test as the file name.
I tried to get this behavior, but my solution isn't very clean. I wrote a helper method that takes the value of "description" and combines it with __FILE__ to create a unique identifier, which is set into a global variable that other utilities can access. The unique identifier is used to associate the data files I need. I have to call this helper method as the first line of every test, which is ugly.
If I have an RSpec example that looks like this:
describe "Basic functions of this server I'm testing" do
  it "should give me back a response" do
    # Sets a global var to: "my_tests_spec.rb_should_give_me_back_a_response"
    TestHelper::who_am_i __FILE__, description
    ...
  end
end
Is there some better/cleaner/slicker way I can get an unique ID for each test that I could use to associate data files with? Perhaps something build into RSpec I'm unaware of?
Thank you,
-Bill
I just learned about the nifty global before and after hooks. I can hide the unique-ID creation code there, which makes things much cleaner. I'll probably go with this solution unless there's an even slicker way to acquire a unique ID for each test. Thanks.
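One way to sketch the hook approach: modern RSpec yields the running example to before hooks, exposing its description and metadata. The helper below and the data-file naming scheme are assumptions for illustration:

```ruby
# Turn a spec file path and example description into a filesystem-safe
# unique ID, e.g. "my_tests_spec.rb_should_give_me_back_a_response".
def test_id(file, description)
  [File.basename(file), description].join('_').gsub(/[^A-Za-z0-9._-]+/, '_')
end

# In RSpec this could be wired up once in a global hook instead of
# calling a helper at the top of every example:
#
#   RSpec.configure do |config|
#     config.before(:each) do |example|
#       @test_id = test_id(example.metadata[:file_path],
#                          example.description)
#     end
#   end
#
# Each example can then look up its data files by convention,
# e.g. "data/#{@test_id}.xml" and "data/#{@test_id}.xpath".
```

Since the hook runs before every example automatically, no per-test boilerplate is needed.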
