JMeter - graph generator plugin

I have the graph generator plugin. I want to create graphs after I enter the users in GUI mode. Do I have to run the script in advance and then run it again in order to see the graphs? I'm asking because the plugin asks for a 'JMeter Results File', which won't exist unless I've already run the test.

There are two ways to make graphs: at run time, or from old results. If you want the former, add the listener to your test plan and follow the instructions here:
http://jmeter-plugins.org/wiki/GraphsGeneratorListener/#Generate-CSV-PNG-for-current-test-results
Note that, like many listeners, this one has a fairly high performance cost, so the documentation suggests avoiding it in GUI mode.
Alternatively, you can run your normal test without this listener, then run a second 'fake' test with it to generate your graphs:
http://jmeter-plugins.org/wiki/GraphsGeneratorListener/#Generate-CSV-PNG-for-existing-previous-test-results
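If you want to script those two runs, here is a minimal sketch in Python, purely for illustration. The jmeter flags -n, -t, and -l are the standard non-GUI, test-plan, and results-file options; the file names and the name of the second 'graphs only' plan are placeholders.

import subprocess

# Placeholder file names for illustration.
TEST_PLAN = "test_plan.jmx"          # your real test, without the listener
RESULTS_FILE = "results.jtl"         # produced by the first run
GRAPHS_PLAN = "generate_graphs.jmx"  # a dummy plan containing only the Graphs Generator listener

# Step 1: run the real test in non-GUI mode and save the results file.
subprocess.run(["jmeter", "-n", "-t", TEST_PLAN, "-l", RESULTS_FILE], check=True)

# Step 2: run the 'fake' plan whose Graphs Generator listener points at results.jtl.
subprocess.run(["jmeter", "-n", "-t", GRAPHS_PLAN], check=True)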

Related

Cypress - how to describe a multiple step test

This is probably an anti-pattern, but I'm new to e2e testing and I'm not sure there's a way around my requirements.
I have the need to test a scenario in my system that is several steps long (maybe 50-60 steps). This means going to about 15-20 pages, clicking and entering various details, and seeing the results. The reason the requirement is so long is that the system processes orders from creating an order, creating line items with various details, and running it through a long production process. My most valuable test would be to run through the whole process and verify the results.
I can't find a good way of isolating, say, steps 40-42 to verify that this process alone works well, because to get an order in that state I'd have to run through the first 39 steps.
Is there a good way to write tests to cover this scenario?

Find and run all scenarios where a step is used?

I'm kinda new to SpecFlow, but I would like to find and run all scenarios where a step is used. I know about the Ctrl+Shift+Alt+S option, but when a step is used 20+ times across many feature files it is hard to test them all one after another. This question came to mind when I updated a step and needed to retest it.
Add a tag to the scenarios that contain that step; these will then appear in the Test Explorer if you filter based on 'Traits'. You can then run all scenarios with that tag.
So, for example, you would have:
@TAGHERE
Scenario: Your Scenario
Given some precondition
When some action
Then some expected result

Jmeter Report Making Approach

Usually I run the JMeter tests multiple times and select a consistent result out of all the runs, then use those statistics to make the JMeter report.
But someone from my team says we need to calculate the average of all runs and use that for the report.
If I do that, I cannot generate the built-in graphs which JMeter provides, and the statistics I present for the test are no longer the originals; they have been altered by averaging.
Which is the better approach to follow?
I think you can use the Merge Results tool to join 2 or more results files into a single one and apply your analysis onto the generated aggregate result. You will also be able to compare results from different test runs.
You can install the tool using the JMeter Plugins Manager.
I am now developing a tool based on Python + Django + Postgres which helps to run/parse/analyze/monitor JMeter load tests and compare results. It's at an early stage but already not so bad (though sadly badly documented):
https://github.com/v0devil/JMeter-Control-Center
There is also a static report generator based on Python + Pandas; maybe you can modify it for your tasks :) https://github.com/v0devil/jmeter_jenkins_report_generator
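If you do go down the averaging route, a rough pandas sketch along these lines could aggregate several result files. It assumes JMeter's default CSV output with 'label' and 'elapsed' columns; the file path pattern is a placeholder.

import glob
import pandas as pd

# Read every results file from the individual runs
# (assumes the default JMeter CSV output with 'label' and 'elapsed' columns).
frames = [pd.read_csv(path) for path in glob.glob("results/run_*.jtl")]
all_runs = pd.concat(frames, ignore_index=True)

# Average response time per sampler label across all runs.
summary = all_runs.groupby("label")["elapsed"].agg(["count", "mean", "median", "max"])
print(summary)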

How to test-drive software that uses external command-line tools

I'm trying to figure out how to test-drive software that launches external processes which take file paths as input and, after lengthy processing, write output to stdout or to a file. Are there common patterns for writing tests in this kind of situation? It is hard to create fast-running tests that verify correct usage of the external tools without launching the actual tools in the tests and inspecting the results.
You could memoize (http://en.wikipedia.org/wiki/Memoization) the external processes. Write a wrapper in Ruby that computes the md5 sum of the input file and checks it against a database of known checksums. If it matches one, copy over the right output; otherwise, invoke the tool normally.
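The answer above describes a Ruby wrapper; purely as an illustration, a Python sketch of the same memoization idea might look like the following. The tool name and the cache layout are hypothetical.

import hashlib
import shutil
import subprocess
from pathlib import Path

CACHE_DIR = Path("tool_cache")  # hypothetical cache of known outputs, keyed by input checksum

def run_tool_memoized(input_path: str, output_path: str) -> None:
    digest = hashlib.md5(Path(input_path).read_bytes()).hexdigest()
    cached = CACHE_DIR / digest
    if cached.exists():
        # Known input: copy over the previously recorded output instead of running the tool.
        shutil.copy(cached, output_path)
        return
    # Unknown input: invoke the real tool, then record its output for next time.
    subprocess.run(["external-tool", input_path, "-o", output_path], check=True)  # hypothetical CLI
    CACHE_DIR.mkdir(exist_ok=True)
    shutil.copy(output_path, cached)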
Test right up to your boundaries. In your case, the boundary is the command line that you construct to invoke the external program (which you can capture by monkey-patching). If you're consuming that program's stdout (or processing its results by reading files), that's another boundary. The test is whether your program can process that "input".
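As one illustration of capturing that boundary, a test could patch the process-launching call and inspect the command line that gets built. The converter module, its convert() function, and the expected command below are all hypothetical.

from unittest import mock

import converter  # hypothetical module under test; assumed to call subprocess.run() internally

def test_convert_builds_expected_command_line():
    # Patch subprocess.run as seen from the converter module so no real process is launched.
    with mock.patch("converter.subprocess.run") as fake_run:
        converter.convert("input.wav", "output.mp3")
    args, kwargs = fake_run.call_args
    # The command line is the boundary: assert it was constructed as intended.
    assert args[0] == ["encoder", "--quality", "high", "input.wav", "output.mp3"]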
The 90%-case answer would be to mock the external command-line tools and verify that the right input is being passed to them at the dividing interface between the two. This helps keep the test suite fast. Also, you shouldn't have to bring in the command-line tools, since they are not 'your code under test'; doing so introduces the possibility that the unit test could fail either due to changes in your code or due to some change in the behavior of the command-line utility.
But it seems like you're having trouble defining the 'right input', in which case an optimization like memoization (as Dave suggests) might give you the best of both worlds.
Assuming the external programs are well-tested, you should just test that your program is passing the correct data to them.
I think you are running into a common issue with unit testing, in that correctness is really determined by whether the integration works, so how does the unit test help you?
The basic answer is that the unit test verifies that the parameters you intend to pass to the command-line tool are in fact getting passed that way, and that the results you anticipate getting back are in fact processed the way you intend to process them.
Then there is a second level of tests, which may or may not be automated (preferably they are, but that depends on whether it is practical), at the functional level, where the real utilities are called so that you can see that what you intend to pass and what you anticipate getting back match what actually happens.
There would also be nothing wrong with a set of tests which "tests" the external tools (which perhaps run on a different schedule, or only when you upgrade those tools) which establish your assumptions, passing in the raw input and asserting that you get back the raw output. That way if you upgrade the tool you can catch any behavior changes which may affect you.
You have to decide if those last set of tests are worthwhile or not. It very much depends on the tools involved.
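If you do keep that last set of tests, one way to run them on their own schedule is to tag them. Here is a sketch using pytest with a custom 'external' marker (which you would register in your pytest configuration) and a hypothetical external-tool CLI with placeholder expectations.

import subprocess
import pytest

# Run these separately, e.g. only when the tool is upgraded: pytest -m external
@pytest.mark.external
def test_tool_is_present_and_reports_a_version():
    result = subprocess.run(["external-tool", "--version"],  # hypothetical CLI
                            capture_output=True, text=True, check=True)
    assert result.stdout.strip() != ""

@pytest.mark.external
def test_tool_produces_expected_output_for_known_input():
    # Feed a known raw input and assert the raw output your code relies on downstream.
    result = subprocess.run(["external-tool", "fixtures/known_input.dat"],  # hypothetical CLI
                            capture_output=True, text=True, check=True)
    assert "EXPECTED MARKER" in result.stdout  # placeholder expectation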

In test driven development, do you write every possible test first, then the code?

In doing test driven development I have been in the habit of writing the first unit test for a new piece of functionality first, then writing the code for that functionality. If I have additional tests to write to cover all scenarios, I usually write them after the code is written. Is this considered bad form? Should I try and write every conceivable test for a piece of functionality first, before ever writing that code?
In order to do TDD properly, you always write the test first, and then the functionality second.
To add to that, I would take one scenario at a time: don't write 20 tests and then write the code for those 20 tests. Write one test, take it from red to green, then move on to your next test. This also keeps you following one of the core tenets of TDD, which is to do the simplest implementation possible that meets all of your requirements/scenarios.
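As a small illustration of that one-test-at-a-time cycle, sketched in Python with a hypothetical slugify function:

# Step 1 (red): write a single failing test for the next scenario only.
def test_slugify_replaces_spaces_with_hyphens():
    assert slugify("hello world") == "hello-world"

# Step 2 (green): the simplest implementation that makes this one test pass.
def slugify(text: str) -> str:
    return text.replace(" ", "-")

# Step 3: pick the next scenario (e.g. uppercase input), write its test, and repeat.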
Actually, no; I often discover functionality on the go. Let me explain the "no" a bit further:
I usually start out writing a test case for a high-level feature, defining its interface. After that, I usually set this test to ignored and continue writing tests for each piece of the interface's functionality. My cycle goes like this:
1. Integration test for Story A (high-level API)
2. Write a unit test for method xyz called in the integration test
3. Implement the method (red/green/refactor)
4. Repeat 2 and 3 until the integration test passes.
While doing so, I often realize I have forgotten some small functionality in my main test. I then usually take time to look back at my customer's requirements. If it's a fit, I go back and add a test for it, set to ignored, as I first want to finish what I started.
Sometimes I see the chance to do a refactoring. I usually finish an implementation until I reach a commit point and do the refactoring then; however, sometimes I stash my changes, go back, and do the refactoring (including new tests if necessary) first. This workflow is powered by Mercurial MQ.
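A rough sketch of that workflow in pytest terms, with the high-level test set to ignored while the unit tests drive the implementation. OrderApi, its module, and its methods are hypothetical here.

import pytest
from orders import OrderApi  # hypothetical module under test

# High-level test for Story A, parked (ignored) until the pieces exist.
@pytest.mark.skip(reason="finish the unit-level work first")
def test_story_a_place_order_end_to_end():
    api = OrderApi()
    order = api.place_order(customer_id=42, items=["widget"])
    assert order.status == "confirmed"

# Unit test for one method called by the integration test;
# drive it red/green/refactor, then move on to the next method.
def test_validate_items_rejects_empty_list():
    with pytest.raises(ValueError):
        OrderApi().validate_items([])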
For most people, TDD and incremental/agile development go together. This looks something like:
Write a test for some feature
Write just enough code to make the test pass, refactoring as necessary
Repeat.
If you happen to have a detailed specification ahead of time, you could write all of the tests first, but you'd have to live with having some tests not passing for a while.
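One way to live with those not-yet-passing tests, sketched with pytest and a hypothetical pricing module, is to mark them as expected failures until the functionality lands:

import pytest

# Tests written ahead of the implementation can be marked as expected failures
# so the suite stays green while you work through them one by one.
@pytest.mark.xfail(reason="discount rules not implemented yet")
def test_bulk_orders_get_a_discount():
    from pricing import price_order  # hypothetical module under test
    assert price_order(quantity=100, unit_price=2.0) == 180.0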
The sooner you write the tests, the better. I usually find writing tests to be a harder task than actually implementing the functionality, because you have to be aware of all the possible outcomes. So I tend to write more tests when I'm "in the zone", and when I realize during coding that I might have missed a test case, I just note it down on a to-do list.
So in my opinion it's up to you, but I would write the tests in multiple batches.
The way I see it, test driven development isn't necessarily tests first development. Your tests drive your development and you are really writing your tests as you develop your application. You start by writing a simple test that fails because you haven't written the functionality yet. Then you write your code to implement that so that the tests pass.
Then you go back to your test, make modifications that will force you to add more functionality or refactor your code to follow better practices or reduce duplicate code, go fix your code to make the test pass...repeat, repeat, repeat.
A video that demonstrates this is linked below, although you can probably find a lot more by googling "TDD video":
http://agilesoftwaredevelopment.com/videos/test-driven-development-basic-tutorial
I try to write a test at some level before each bit of functionality. Sometimes, I have to write a little more code to get through the compiler, but I try to minimise that. Writing the test first means that I've thought about what the code is supposed to achieve before writing it.
One technique I find useful is to keep an index card or notepad handy, and make a note of all the cases that I think of along the way. That allows me to focus on the current task without losing track of all the other things I'm supposed to think about. Afterwards, I can work through the list and either complete the extra cases or drop them as not necessary.
You could do that, but you wouldn't be doing TDD. The problem (well, one of them, anyway) with writing all of your tests up front is that in any case where the requirements are non-trivial, your tests will be building in a lot of assumptions about the structure of the code you're test-driving. Big steps lead to missteps.
One of the keys of successful TDD involves taking small steps. Small steps mean fewer changes to back out when something goes wrong. Small steps mean you can more often get your head around the effects of the changes you're making. And because small steps are easier to take with confidence, they have the paradoxical effect of increasing your velocity.
The TDD cycle starts with requirements. Start by choosing a requirement you know how to define through tests immediately, in a few short steps. If you look at a requirement and you're not sure how to test it, or you think, "Yeah, but to do that, I'd need to [insert ill-defined steps] first", then you should either skip to another requirement that you know how to do, or you should break this requirement into smaller requirements that you know how to do.
Once you have that, you work in a short red-green-refactor cycle: Write a test that quantifies some part of the requirement ("red", because it fails, because it has no implementation to test yet), write any code that will pass the test ("green"), then rework the code to remove duplication, magic numbers, and other code smells ("refactor"). During the refactoring phase, you should continue working in small steps, frequently re-running the test to make sure you haven't broken anything. Continue this cycle until you can look your boss/client in the eye and call the requirement met.
Now that you have one simple piece of your system defined, you've opened up the list of requirements available to implement - requirements that are adjacent to or dependent on the one you just implemented can now be tested and implemented in smaller steps building on what you've already done.
So the upshot of all that is: Don't try to do all your tests at once. One (small) thing at a time.
The point of TDD is that you have to observe the test failing while the feature is not yet implemented, so you have to write the test before the code.
When you get into the TDD rhythm, you write one test at a time and make it work. Very short red-green-refactor cycles are where you really feel that rhythm. That said, there is nothing wrong with other approaches (and they may even make more sense for some types of problems), but typically the only thing you need to do about the other tests you are thinking of is write them down (or have your pair write them down, if you are pair programming) so you don't forget them. You have to do that anyway, because you could forget about a test in the middle of writing a different test.
Write just enough tests to cover one unit of code at a time, then write the actual code until it passes the tests; rinse, wash, repeat until done.
If you find yourself needing to write many tests for one unit of code (a method, a function, etc.), it might be a sign that you are trying to do too much in that unit, which in turn makes the unit difficult to test and to refactor later.
