JMeter JSON comparison

I am currently moving some data-driven API tests (data from CSV) from Robot Framework to JMeter, and what troubles me is the lack of a proper JSON assertion that can ignore some keys during comparison. I am totally new to JMeter, so I am not sure whether such an option exists.
I am fairly sure we are using the wrong tool for this job, especially because functional testers would take over writing new tests. However, my approach (to make it as easy as possible for them) is to create a JMeter plugin that takes the response and compares it to a baseline, excluding ignored keys defined in its GUI. What do you think? Is there a built-in I can use instead? Or do you know of an existing plugin?
Thanks in advance.

The "proper" assertion is the JSON Assertion, available since JMeter 4.0.
You can use arbitrary JSON Path queries to check the response against your expected result.
If that is not enough, you can always go for the JSR223 Assertion; the Groovy language has built-in JSON support, so it will be far more flexible than any existing or future plugin.
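For illustration, here is a sketch in plain Java of the core logic such a JSR223 Assertion would run (in JMeter itself you would write this in Groovy, parse both payloads with groovy.json.JsonSlurper, and flag failures via AssertionResult): walk both parsed structures and skip any keys listed as ignored. The class and method names here are mine, not JMeter's.

```java
import java.util.*;

public class JsonCompareSketch {
    // Recursively compare two parsed JSON structures (maps, lists, scalars),
    // skipping any map keys listed in `ignored`.
    static boolean equalsIgnoring(Object expected, Object actual, Set<String> ignored) {
        if (expected instanceof Map && actual instanceof Map) {
            Map<?, ?> e = (Map<?, ?>) expected, a = (Map<?, ?>) actual;
            Set<Object> keys = new HashSet<>(e.keySet());
            keys.addAll(a.keySet());
            for (Object k : keys) {
                if (ignored.contains(k)) continue;            // ignored key: skip
                if (!equalsIgnoring(e.get(k), a.get(k), ignored)) return false;
            }
            return true;
        }
        if (expected instanceof List && actual instanceof List) {
            List<?> e = (List<?>) expected, a = (List<?>) actual;
            if (e.size() != a.size()) return false;
            for (int i = 0; i < e.size(); i++) {
                if (!equalsIgnoring(e.get(i), a.get(i), ignored)) return false;
            }
            return true;
        }
        return Objects.equals(expected, actual);              // scalar comparison
    }

    public static void main(String[] args) {
        Map<String, Object> baseline = new HashMap<>();
        baseline.put("id", 1);
        baseline.put("name", "alice");
        baseline.put("timestamp", "2020-01-01T00:00:00Z");

        Map<String, Object> response = new HashMap<>();
        response.put("id", 1);
        response.put("name", "alice");
        response.put("timestamp", "2024-06-15T12:34:56Z");    // differs, but ignored

        Set<String> ignored = Collections.singleton("timestamp");
        System.out.println(equalsIgnoring(baseline, response, ignored));
    }
}
```

The ignored-key set is exactly what the proposed plugin GUI would feed in; the rest is a dozen lines of script rather than a plugin.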

Please find below the approach I can think of:
Take a response/HTML/JSON source dump for the baseline using "Save Responses to a file".
Take a response dump for the AUT that needs to be compared, or simply a second-run dump.
Use two FTP samplers to fetch the locally saved response dumps.
Use a Compare Assertion to compare the two FTP responses. In the Compare Assertion you can use "RegEx String and Substitution" to mask the timestamps or user IDs to something common to both, so that they are ignored in the comparison.
You need to take care of how you save and fetch the responses.
Hope this helps.
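The masking step described above can be sketched outside JMeter like this. The regexes and placeholder values are illustrative assumptions; the point is that two dumps become byte-identical once volatile fields are normalised, so a plain equality comparison then ignores them.

```java
public class MaskVolatileFields {
    // Replace ISO-8601 timestamps and numeric userId values with fixed
    // placeholders so two response dumps can be compared byte-for-byte.
    static String mask(String body) {
        String out = body.replaceAll(
                "\\d{4}-\\d{2}-\\d{2}T\\d{2}:\\d{2}:\\d{2}(?:\\.\\d+)?Z?",
                "<TIMESTAMP>");
        out = out.replaceAll("\"userId\"\\s*:\\s*\\d+", "\"userId\":<ID>");
        return out;
    }

    public static void main(String[] args) {
        String run1 = "{\"userId\": 101, \"created\": \"2024-01-01T10:00:00Z\"}";
        String run2 = "{\"userId\": 202, \"created\": \"2024-06-15T12:34:56Z\"}";
        // After masking, the two runs differ only in fields we chose to keep.
        System.out.println(mask(run1).equals(mask(run2)));
    }
}
```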

Related

How can we increase the string limit or size for base64Encode?

[Error](https://i.stack.imgur.com/iqUDy.png)
I tried to pass a string from a CSV file to the ${__base64Encode()} function, but my API request fails. When I checked the JMeter logs I found that the base64Encode function is unable to encode the string because it is too long; the limit is 65,535. Is there any way to increase the string limit, or some other way to encode the data? I need that string encoded for my script to work.
Please see the screenshot attached at the link above for the error, and let me know if there is a solution to this issue.
Don't inline JMeter functions or variables in Groovy scripts, as per the JSR223 Sampler documentation:
The JSR223 test elements have a feature (compilation) that can significantly increase performance. To benefit from this feature:
Use Script files instead of inlining them. This will make JMeter compile them if this feature is available on ScriptEngine and cache them.
Or Use Script Text and check Cache compiled script if available property.
When using this feature, ensure your script code does not use JMeter variables or JMeter function calls directly in script code as caching would only cache first replacement. Instead use script parameters.
Amend your code to look like:
vars.put('token', vars.get('token1').bytes.encodeBase64().toString())
and the error should go away
More information on Groovy scripting in JMeter: Apache Groovy: What Is Groovy Used For?
Alternatively, just use the ${__base64Encode(${token1},token)} function where required, but not inside the JSR223 test element's script.
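For reference, the Groovy one-liner above leans on the JVM's own Base64 support, which has no such length limit. A plain-Java equivalent (class name mine, string length chosen just to exceed the 65,535 limit the asker hit) would be:

```java
import java.util.Base64;

public class EncodeLongString {
    public static void main(String[] args) {
        // Build a string well beyond the 65,535-character limit that the
        // inline __base64Encode() function call ran into.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100_000; i++) sb.append('x');
        String token = sb.toString();

        // Encoding in code has no such limit.
        String encoded = Base64.getEncoder().encodeToString(token.getBytes());
        System.out.println(encoded.length());
    }
}
```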

How can I embed a test data set in my JMeter test plan?

At the moment, my JMeter test uses a CSV Data Set Config to iterate through a limited set of input data for each HTTP request that I do.
But I don't want to deal with the hassle of an external file (uploading it to my test runner, etc.) - I'd like to just embed the data into the jmx file itself.
I was hoping for something like a "test data" node, that would work similarly to a CSV data set (with Recycle on EOF especially) and I'd just copy/paste the data into the test plan instead of working with an external file.
I'm thinking I might be able to work around it with a JSR223 preprocessor - but is there a better built-in way?
Edit: As per comment: the data cannot be generated.
If you want to do this via JSR223 test elements and the Groovy language, the correct syntax would be:
vars.put("messageId", "wibble");
vars is a shorthand for JMeterVariables class instance, see the JavaDoc for available functions and properties.
An easier way would be to go for User Defined Variables, User Parameters, or even better the Set Variables Action.
You can create a text file containing keys and values separated by tabs, and copy all the text.
Note that if you have a properties file, you can replace = with a tab.
Add a User Defined Variables element in the JMeter GUI and click "Add from Clipboard".
It will load all your variables into JMeter without you entering them by hand in the GUI.
This is my first go at a script based approach using a JSR223 preprocessor node:
// This is where the data is embedded. Up to a couple of hundred entries
// is probably fine; more than that is likely a bad idea.
def messageIdList = ["graffle", "wibble", "wobble", "flobble", "gibble", ...]
// Modulo the full size so every entry, including the last, gets used.
def messageIndex = (vars.getIteration() - 1) % messageIdList.size()
println "iteration ${vars.getIteration()}, size ${messageIdList.size()}, index: ${messageIndex}"
vars.put("messageId", messageIdList[messageIndex])
This appears to do what I want, even when run in a Thread Group with multiple threads.
I'm not sure exactly what the vars.getIteration() represents, and I'm not clear about the precise lifetime / scope of the variables. But it'll do for now.
Any better answers will be cheerfully accepted, marked and upvoted.

How to use JMeter to update XML to POST later

I have a JMeter job that I'm taking the XML response from, updating the XML, then PUT the XML back to update the record. I have ALL the parts, except the XML update part.
I need to update XML root/Settings/HasRetry from not only "false" to "true", but also add other nodes directly under it. Order matters in this XML.
xml to update
Ideas on the best way to do this? I tried it via a simple JavaScript replace:
java replace
...but the JS pukes on that because of the special characters in the XML string.
Ideas?
You have an error in your script: the last line should look like vars.put("ResponceData", res); as your current implementation simply discards the result of the replacement.
Also be aware that using Beanshell for scripting is not recommended; since JMeter 3.1, users are encouraged to switch to JSR223 test elements and the Groovy language.
Groovy is Java-compliant, so in the majority of cases valid Beanshell code is valid Groovy code. However, do not use JMeter variable references like ${ResponceData} inside the script, as that defeats the script-compilation feature and makes performance a big question mark; switch to vars.get('ResponceData') instead.
Example Groovy code for values replacement will look something like:
def str = vars.get('ResponceData')
def res = str.replace(vars.get('findSearch'), vars.get('findReplaceWith'))
vars.put('ResponceData', res)
See Apache Groovy - Why and How You Should Use It to learn more about Groovy scripting in JMeter.

JMeter - query, store in variable, then use that variable in post-processing

I'm looking to utilize JMeter for some automated testing, but I'm facing a problem. As a preliminary to my tests, I want to run a query on my DB and store the result in a text file.
I thought I'd do it via a JDBC request as such:
Then immediately after I want to do some post-processing that writes the result to our file:
I've also tried putting the parameter passed to vars.get in quotation marks, but no such luck. JMeter does write a file, but that file is empty, and if I run the query independently, it does return results.
Does anybody know how to get this desired behavior?
If you look into the jmeter.log file you should see a Beanshell-related error.
This is because the "Result Variable Name" holds an ArrayList, not a String, hence:
Use the vars.getObject() method instead of vars.get()
Ensure you quote the variable name
Remove the ";" at the end of the SQL query
You need to iterate the ArrayList somehow, or serialize it to a file.
If the result set is large, it's better to do this via a JSR223 Sampler using "groovy" as the language, as Beanshell has some performance limitations. See the Beanshell vs JSR223 vs Java JMeter Scripting: The Performance-Off You've Been Waiting For! guide for benchmarking results and instructions on how to set up Groovy scripting-engine support.
To write the output to a file, see:
Append data into a file using Apache Commons I/O
If you decide to use Groovy, it will be even easier:
http://grails.asia/groovy-file-examples
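As a sketch of the iterate-and-write step: in a JSR223 element you would fetch the ArrayList of HashMaps (one map per row, column name to value) with vars.getObject("resultVar"); here the rows are faked in plain Java, and the variable name is an illustrative assumption.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;

public class WriteJdbcResult {
    // Render rows (column name -> value maps) as tab-separated lines.
    static String formatRows(List<Map<String, Object>> rows, String... columns) {
        StringBuilder sb = new StringBuilder();
        for (Map<String, Object> row : rows) {
            for (int i = 0; i < columns.length; i++) {
                if (i > 0) sb.append('\t');
                sb.append(row.get(columns[i]));
            }
            sb.append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Fake what the JDBC sampler's "Result Variable Name" would hold:
        // an ArrayList of HashMaps, one per result-set row.
        List<Map<String, Object>> rows = new ArrayList<>();
        Map<String, Object> row = new HashMap<>();
        row.put("id", 1);
        row.put("name", "alice");
        rows.add(row);

        Path out = Files.createTempFile("query-result", ".txt");
        // StandardOpenOption.APPEND mirrors the Commons-IO append behaviour.
        Files.write(out, formatRows(rows, "id", "name").getBytes(),
                StandardOpenOption.APPEND);
        System.out.print(new String(Files.readAllBytes(out)));
    }
}
```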

External Data File for Unit Tests

I'm a newbie to Unit Testing and I'm after some best practice advice. I'm coding in Cocoa using Xcode.
I've got a method that's validating a URL that a user enters. I want it to only accept http:// protocol and only accept URLs that have valid characters.
Is it acceptable to have one test for this and use a test data file? The data file provides example valid/invalid URLs and whether or not the URL should validate. I'm also using this to check the description and domain of the error message.
Why I'm doing this
I've read Pragmatic Unit Testing in Java with JUnit and this gives an example with an external data file, which makes me think this is OK. Plus it means I don't need to write lots of unit tests with very similar code just to test different data.
But on the other hand...
If I'm testing for:
invalid characters
and an invalid protocol
and valid URLs
all in the same test data file (and therefore in the same test) will this cause me problems later on? I read that one test should only fail for one reason.
Is what I'm doing OK?
How do other people use test data in their unit tests, if at all?
In general, use a test data file only when it's necessary. There are a number of disadvantages to using a test data file:
The code for your test is split between the test code and the test data file. This makes the test more difficult to understand and maintain.
You want to keep your unit tests as fast as possible. Having tests that unnecessarily read data files can slow down your tests.
There are a few cases where I do use data files:
The input is large (for example, an XML document). While you could use String concatenation to create a large input, it can make the test code hard to read.
The test is actually testing code that reads a file. Even in this case, you might want to have the test write a sample file in a temporary directory so that all of the code for the test is in one place.
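A sketch of that second case (the unit under test and the fixture contents are invented for illustration): the test writes its own sample file into a temporary directory, so the fixture data sits right next to the assertion instead of in a separate data file.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Arrays;

public class TempFileTestSketch {
    // Hypothetical unit under test: counts non-empty lines in a file.
    static long countNonEmptyLines(Path file) throws IOException {
        return Files.readAllLines(file).stream()
                .filter(line -> !line.isEmpty())
                .count();
    }

    public static void main(String[] args) throws IOException {
        // The test creates its own input, so all the code for the test
        // (fixture plus assertion) lives in one place.
        Path tmp = Files.createTempFile("fixture", ".txt");
        Files.write(tmp, Arrays.asList("one", "", "two"));
        long count = countNonEmptyLines(tmp);
        if (count != 2) throw new AssertionError("expected 2, got " + count);
        Files.deleteIfExists(tmp);   // clean up the temporary fixture
        System.out.println("ok");
    }
}
```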
Instead of encoding the valid and invalid URLs in the file, I suggest writing the tests in code. I suggest creating a test for invalid characters, a test for invalid protocol(s), a test for invalid domain(s), and a test for a valid URL. If you don't think that has enough coverage, you can create a mini integration test to test multiple valid and invalid URLs. Here's an example in Java and JUnit:
public void testManyValidUrls() {
    UrlValidator validator = new UrlValidator();
    assertValidUrl(validator, "http://foo.com");
    assertValidUrl(validator, "http://foo.com/home");
    // more asserts here
}

private static void assertValidUrl(UrlValidator validator, String url) {
    assertTrue(url + " should be considered valid", validator.isValid(url));
}
While I think this is a perfectly reasonable question to ask, I don't think you should be overly concerned about this. Strictly speaking, you are correct that each test should only test for one thing, but that doesn't preclude your use of a data file.
If your System Under Test (SUT) is a simple URL parser/validator, I assume that it takes a single URL as a parameter. As such, there's a limit to how much simultaneously invalid data you can feed into it. Even if you feed in an URL that contains both invalid characters, and an invalid protocol, it would only cause a single result (that the URL was invalid).
What you are describing is a Data-Driven Test (also called a Parameterized Test). If you keep the test itself simple, feeding it with different data is not problematic in itself.
What you do need to be concerned about is being able to quickly locate why a test fails when/if that happens some months from now. If your test output points to a specific row in your test data file, you should be able to quickly figure out what went wrong. On the other hand, if the only message you get is that the test failed, and any of the rows in the file could be at fault, you will begin to see the contours of a test maintainability nightmare.
Personally, I lean slightly towards having the test data as closely associated with the tests as possible. That's because I view the concept of Tests as Executable Specifications as very important. When the test data is hard-coded within each test, it can very clearly specify the relationship between input and expected output. The more you remove the data from the test itself, the harder it becomes to read this 'specification'.
This means that I tend to define the values of input data within each test. If I have to write a lot of very similar tests where the only variation is input and/or expected output, I write a Parameterized Test, but still invoke that Parameterized Test from hard-coded tests (that each is only a single line of code). I don't think I've ever used an external data file.
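A minimal sketch of that pattern (the validator logic and all names are invented for illustration, and shown without a test framework): each hard-coded, one-line test delegates to a shared parameterized body, so the input/expected-output pairs still read like a specification.

```java
import java.util.regex.Pattern;

public class UrlValidationTests {
    // Hypothetical validator: http:// scheme plus a restricted character set.
    static boolean isValid(String url) {
        return Pattern.matches("http://[A-Za-z0-9./_-]+", url);
    }

    // The shared parameterized test body...
    static void assertUrl(String url, boolean expectedValid) {
        if (isValid(url) != expectedValid) {
            throw new AssertionError(
                    url + " should be " + (expectedValid ? "valid" : "invalid"));
        }
    }

    // ...invoked from hard-coded one-line tests, each a readable spec entry.
    static void testValidUrl()        { assertUrl("http://foo.com/home", true); }
    static void testInvalidProtocol() { assertUrl("ftp://foo.com", false); }
    static void testInvalidChars()    { assertUrl("http://foo com", false); }

    public static void main(String[] args) {
        testValidUrl();
        testInvalidProtocol();
        testInvalidChars();
        System.out.println("all passed");
    }
}
```

When one of these fails, the AssertionError names the exact URL and expectation, which addresses the maintainability concern raised above.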
But then again, these days, I don't even know what my input is, since I use Constrained Non-Determinism. Instead, I work with Equivalence Classes and Derived Values.
Take a look at: http://xunitpatterns.com/Data-Driven%20Test.html