My Problem
I've got a script that uses properties to set user-defined variables. This works great during GUI testing. When testing in non-GUI mode, however, the script produces nothing but failures.
I am using properties because the values are created in the Endpoint Creation thread group, but I need them to be globally accessible. Because of this, I used some RegEx extractors and a BeanShell assertion to assign the values to several different properties. Here's what that looks like.
Here are my User Defined Variables with the properties set as their values.
I know for a fact that the properties are the issue in non-GUI mode, because if I replace the properties with their hard-coded values, the non-GUI test passes.
When I am ready to start testing, I toggle the Endpoint Creation thread group off as I only need it to configure the User Defined Variables.
I should mention that I am required to use non-GUI mode during testing for performance reasons.
Questions
Does non-GUI JMeter treat user defined properties differently than GUI JMeter?
Is there a way for me to keep these properties and have them work in non-GUI mode?
I can think of 2 possible problems:
non-GUI test execution is much faster and consumes less memory, so your logic of reading, overwriting, and reading the properties again may break somewhere along the way
Beanshell itself is not the best scripting option; it has well-known performance problems and can become the bottleneck of your test.
In both cases, check the jmeter.log file for any suspicious entries.
Recommendations:
You don't need this User Defined Variables step at all. JMeter properties are global to all Thread Groups (in fact, to the whole JVM), so you can leave only Salt there and remove the other entries; just refer to the properties using the __P() or __property() functions, e.g. ${__P(someproperty)}.
Just in case you use Beanshell scripting for anything else: replace the Beanshell test elements with JSR223 elements and make sure to use the Groovy language, as it provides optimal performance. Also remember not to use JMeter functions or variables inside the scripts; go for code-based equivalents instead, to wit:
props.put('someproperty', vars.get('somevariable'))
Related
I have around 10 it blocks in my spec. Two of these it blocks test important features, and I want to highlight them in my report so that I can immediately see whether those features are working or not. Is there any way to achieve this?
I am using the Jasmine reporter with Protractor for reporting.
Try the reporter below for better reporting
https://www.npmjs.com/package/protractor-html-reporter-2
For a better view of the report, make a separate describe for the particular it blocks.
Hope it helps you
My request has 3800 viewstates that come from the previous request's response. It's very hard to capture the values one by one with regular expressions and replace them with variables.
Is there any simple way to handle them?
There is an alternative way of recording a JMeter test using a cloud-based proxy service. It is capable of exporting recordings in SmartJMX format with automatic detection and correlation of dynamic parameters, so you won't have to handle them manually: the necessary PostProcessors and variable substitutions will be added to the test plan automatically.
Check out the How to Cut Your JMeter Scripting Time by 80% article for more details.
In general I would recommend talking to your application developers, as almost 4k dynamic parameters is too many: at the very least it will create massive network I/O overhead to pass them back and forth, and demand immense CPU/RAM to parse them on both sides.
Usually I run the JMeter tests multiple times and select a consistent result out of all the runs, then use its statistics to build the JMeter report.
But someone from my team asked that we calculate the average of all runs and use that for the report.
If I do so, I cannot generate the built-in graphs which JMeter provides, and the statistics I present for the test are no longer the originals; they have been manipulated/altered by the averaging.
Which is the better approach to follow?
I think you can use the Merge Results tool to join two or more result files into a single one and apply your analysis solution(s) to the generated aggregate result. Moreover, you will be able to compare the results of different test runs.
You can install the tool using the JMeter Plugins Manager.
I am currently developing a tool based on Python + Django + Postgres which helps to run/parse/analyze/monitor JMeter load tests and compare results. It's at an early stage, but already not so bad (though sadly poorly documented):
https://github.com/v0devil/JMeter-Control-Center
Also there is a static report generator based on Python + Pandas. Maybe you can modify it for your tasks :) https://github.com/v0devil/jmeter_jenkins_report_generator
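If you just want to prototype the averaging yourself, a minimal pandas sketch in the same spirit could look like the following. It assumes CSV-format .jtl result files with the default header row (so at least "label" and "elapsed" columns); the results/run_*.jtl pattern is only a placeholder.

import glob
import pandas as pd

# One DataFrame per run; each .jtl is assumed to be a CSV with a header row
# containing at least the default "label" and "elapsed" columns.
runs = [pd.read_csv(path) for path in glob.glob("results/run_*.jtl")]
merged = pd.concat(runs, ignore_index=True)

# Per-label statistics aggregated over all runs together
summary = merged.groupby("label")["elapsed"].agg(["count", "mean", "median", "max"])
print(summary.round(1))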
We need to perform load testing on a Java-based ERP application, and we're thinking about using JMeter. Is it possible to test more than one screen at a time, since the output of the first screen is the input for the second screen? Kindly assist us.
If you want to replicate user navigation, I believe you should record two or more different navigation scenarios and then simply run them in parallel. For instructions on how to record a navigation, read this document.
On the other hand, if what you meant is to use the result of a request as input to a subsequent one, have a look here.
I'm trying to figure out how to test-drive software that launches external processes that take file paths as input and, after lengthy processing, write their output to stdout or to some file. Are there common patterns for writing tests in this kind of situation? It is hard to create fast-executing tests that verify correct usage of external tools without launching the actual tools in the tests and inspecting the results.
You could memoize (http://en.wikipedia.org/wiki/Memoization) the external processes. Write a wrapper in Ruby that computes the md5 sum of the input file and checks it against a database of known checksums. If it matches one, copy over the right output; otherwise, invoke the tool normally.
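To make the idea concrete, here is a rough sketch of such a wrapper in Python (the answer describes it in Ruby); the tool name "slowtool" and the .tool_cache directory are placeholders.

import hashlib
import pathlib
import subprocess
import sys

CACHE_DIR = pathlib.Path(".tool_cache")

def run_tool(input_path):
    # Key the cache on the content of the input file
    digest = hashlib.md5(pathlib.Path(input_path).read_bytes()).hexdigest()
    cached = CACHE_DIR / digest
    if cached.exists():
        return cached.read_text()  # known input: replay the recorded output
    # Unknown input: invoke the real (slow) tool and remember its output
    result = subprocess.run(["slowtool", input_path],
                            capture_output=True, text=True, check=True)
    CACHE_DIR.mkdir(exist_ok=True)
    cached.write_text(result.stdout)
    return result.stdout

if __name__ == "__main__":
    print(run_tool(sys.argv[1]), end="")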
Test right up to your boundaries. In your case, the boundary is the command line that you construct to invoke the external program (which you can capture by monkey patching). If you're gluing yourself into that program's stdout (or processing its result by reading files), that's another boundary. The test is whether your program can process that "input".
The 90%-case answer would be to mock the external command-line tools and verify that the right input is being passed to them at the dividing interface between the two. This helps keep the test suite fast. Also, you shouldn't have to bring in the command-line tools, since they are not 'your code under test'; doing so introduces the possibility that the unit test could fail either due to changes in your code or due to some change in the behavior of the command-line utility.
But it seems like you're having trouble defining the 'right input', in which case using optimizations like memoization (as Dave suggests) might give you the best of both worlds.
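As a rough illustration of mocking at that dividing interface, here is a sketch using Python's unittest.mock; the convert function, the "converter" tool, and its flags are invented for the example.

import subprocess
from unittest import mock

def convert(input_path, output_path):
    # Code under test: build the command line and launch the external tool
    cmd = ["converter", "--fast", input_path, "-o", output_path]
    return subprocess.run(cmd, check=True)

def test_convert_builds_expected_command_line():
    # The external tool is never launched; we only verify the boundary
    with mock.patch("subprocess.run") as fake_run:
        convert("in.dat", "out.dat")
    fake_run.assert_called_once_with(
        ["converter", "--fast", "in.dat", "-o", "out.dat"], check=True)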
Assuming the external programs are well-tested, you should just test that your program is passing the correct data to them.
I think you are getting into a common issue with unit testing, in that correctness is really determined by whether the integration works, so how does the unit test help you?
The basic answer is that the unit test checks that the parameters you intend to pass to the command-line tool are in fact getting passed that way, and that the results you anticipate getting back are in fact processed the way you intend to process them.
Then there is a second level of tests, which may or may not be automated (preferably they are, but it does depend on whether that is practical), at the functional level where the real utilities are called, so that you can see that what you intend to pass and what you anticipate getting back match what actually happens.
There would also be nothing wrong with a set of tests which "test" the external tools (run perhaps on a different schedule, or only when you upgrade those tools) to establish your assumptions: pass in the raw input and assert that you get back the raw output. That way, if you upgrade the tool, you can catch any behavior changes which may affect you.
You have to decide whether that last set of tests is worthwhile or not. It very much depends on the tools involved.
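If you do keep that last set of tool-assumption tests, a sketch might look like the one below; the "converter" tool, its flags, the expected output, and the external_tool marker are all placeholders.

import subprocess
import pytest

@pytest.mark.external_tool  # custom marker so these slow tests can run on their own schedule
def test_converter_behaves_as_we_assume(tmp_path):
    sample = tmp_path / "sample.dat"
    sample.write_text("known input")
    # Invoke the real tool with raw input and pin down the raw output we rely on
    result = subprocess.run(["converter", "--fast", str(sample)],
                            capture_output=True, text=True, check=True)
    assert "expected marker" in result.stdout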