How do we send emails to specific running threads using JMeter?

I have used an If Controller with the ${__threadNum} function but it's not working.
Sometimes it works fine and sometimes it doesn't.

Unfortunately we cannot help without seeing your If Controller setup; make sure to use the correct expression there, i.e. the __jexl3() function.
For example, this is how you can send emails only from thread #2:
${__jexl3(${__threadNum} == 2,)}
With this condition only the 2nd thread sends the email.
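If you prefer Groovy, an equivalent condition would be the sketch below; note that ${__threadNum} is 1-based while ctx.getThreadNum() inside __groovy() is 0-based, so thread #2 is index 1:
${__groovy(ctx.getThreadNum() == 1,)}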
Also try increasing JMeter logging verbosity to DEBUG level; it can be done by adding the next line to the log4j2.xml file:
<Logger name="org.apache.jmeter" level="debug" />
Once you apply the change and restart JMeter, you will see the complete information about what is going on under the hood when you execute your test; it will help you compare the "working" and "not working" cases and figure out the reason.

Related

Dynamic value from server response differs from the replaced value in the JMeter script

A few of the token values in the script differ from the values in the server response after replacement. Please let me know how to overcome this issue.
1st value
Recorded Server response in JMeter recording log
VI1js8eNsTKaakYaEsdhPPg+nlPY2SL6/0RoyxBL1BE=
Replaced value in the JMeter script
VI1js8eNsTKaakYaEsdhPPg%2BnlPY2SL6%2F0RoyxBL1BE%3D
2nd value
Server response
C/K6QoR6Qjk/pLQAyvQ5FiRXFK9BAxeRJAEDJ+BGA+w=
Replaced value in the JMeter script
C%2FK6QoR6Qjk%2FpLQAyvQ5FiRXFK9BAxeRJAEDJ%2BBGA%2Bw%3D
Please help me to overcome this issue.
Thanks
Raghav
Most probably there is some form of encryption/decryption logic applied to certain request parameters. It is not possible to guess the algorithm, so if you don't know it you need to ask around.
Once you know the exact encryption algorithm being applied, you should be able to reproduce it using the __digest() or __groovy() functions.
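That said, the two pairs of values above differ only by URL encoding (+ becomes %2B, / becomes %2F, = becomes %3D), so it may be enough to URL-encode the extracted value before reusing it. A minimal JSR223 PostProcessor sketch in Groovy, assuming the extracted variable is called token (the variable names are illustrative, not from the original script):

import java.net.URLEncoder

// Take the raw value extracted by e.g. a Regular Expression Extractor
def raw = vars.get('token')
if (raw != null) {
    // URL-encode it so that + / = become %2B %2F %3D, matching the recorded request
    vars.put('tokenEncoded', URLEncoder.encode(raw, 'UTF-8'))
}

The request that previously contained the encoded value can then reference ${tokenEncoded} instead.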

JMeter does not expand variables when it is itself stressed

My JMeter test plan sets a token variable using a Regular Expression Extractor (a Post-Processor element), but after a while this variable is not expanded to its value and, as a consequence, the literal ${token} is sent to the REST API under test.
I added Constant Delay Timer elements between each request a user (thread) sends, and it seems to overcome the problem to a great extent but not entirely: I still get some Bad Request responses from the API side.
I have attached a screen capture, and a second one showing the full test plan (or a large portion of it).
Can someone please explain why this is happening and how to resolve it?
It might be the case that your Regular Expression Extractor fails to find the value and therefore falls back to the default value, leaving ${token} unresolved.
My expectation is that under load your application cannot respond properly, i.e. the response doesn't contain the token. You can double-check this using the Debug Sampler and View Results Tree listener combination.
You can temporarily enable saving of response data by adding the next lines to the user.properties file:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.response_data=true
and the next time you run your test in command-line non-GUI mode, JMeter will keep the response data for all samplers. You will then be able to examine the responses and add a Response Assertion to ensure that the token is present in the POST LOGIN sampler's response data.
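In addition to (or instead of) the Response Assertion, a JSR223 Assertion could fail the sampler explicitly whenever the variable keeps its default. A Groovy sketch, assuming the extractor's variable name is token and its default value is token_NOT_FOUND (both names are illustrative):

// JSR223 Assertion (Groovy)
def token = vars.get('token')
if (token == null || token == 'token_NOT_FOUND') {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('token was not extracted from the previous response')
}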

Disable/Enable all Test Action controllers in JMeter test

I have placed many DIFFERENT think times (pauses) in my JMeter test. In this way I am trying to simulate real user think times, because at different places the user needs a different amount of time to think/wait: sometimes 5 seconds to figure out the next action, sometimes 15 seconds. As far as I can tell, Test Action controllers are the only way for me to do this. BUT my problem is that while I am creating/repairing the test I don't want to wait through all these pauses just to check whether my change works. I want an easy way to disable all Test Action controllers while building the test, and then to enable all pauses again when I run the real test with a bunch of concurrent threads.
First modify your Test Actions this way:
Set the pause (sleep) to 0
Add a Timer with the pause you want as a child of each one
Then
There is a menu entry called "Start no pauses" in the GUI which exists exactly for this need:
http://www.ubik-ingenierie.com/blog/jmeter-new-gui-features-to-increase-your-productivity/
Since JMeter 3.0 there is also a "Validation Mode" that you can access by right-clicking on a Thread Group and selecting Validate:
https://jmeter.apache.org/changes.html
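If you go the child-Timer route, a JSR223 Timer driven by a JMeter property gives you a single switch for all pauses. This is a sketch, not from the original answer, and the property name think.time.ms is illustrative:

// JSR223 Timer (Groovy): the returned value is used as the delay in milliseconds
// Override from the command line, e.g. jmeter -Jthink.time.ms=0, to run without pauses
return Long.parseLong(props.getProperty('think.time.ms', '5000'))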
I believe the easiest way of doing it is running your test via the Taurus tool. It can enable/disable test elements based on their names, so you will be able to switch your Test Action samplers on and off.
See the Modifications for Existing Scripts chapter for more details.
Example Taurus YAML config to disable Test Action samplers. Save it as e.g. test.yml in the same folder where your .jmx script lives:
---
scenarios:
  modification_example:
    script: test.jmx  # Name of your original JMeter test script
    modifications:
      disable:  # Names of the tree elements to disable
        - Test Action
execution:
  - scenario: modification_example
Running the bzt test.yml -gui command will open the JMeter GUI with the modifications applied.
Taurus entry level information: Navigating your First Steps Using Taurus
Don't waste your time trying the "Run no pauses" and "Validate" options; the guy doesn't seem to know how JMeter works.
All right, here's a JMeter-only solution via a Beanshell Sampler:
import org.apache.jmeter.gui.GuiPackage;
import org.apache.jmeter.gui.tree.JMeterTreeNode;
import org.apache.jmeter.sampler.TestAction;
import java.util.List;

// Works in GUI mode only: GuiPackage.getInstance() returns null in non-GUI mode
GuiPackage guiInstance = GuiPackage.getInstance();

// Find every Test Action element in the test plan tree and disable it
List testactionlist = guiInstance.getTreeModel().getNodesOfType(TestAction.class);
for (Object testAction : testactionlist) {
    JMeterTreeNode testActionSampler = (JMeterTreeNode) testAction;
    testActionSampler.setEnabled(false);
}
I used Test Actions because I did not understand how timers work: I thought they are always applied to all samplers (which is not true). I can easily accomplish what I want with a Constant Timer, so using Test Actions wasn't a must; I will just add a Constant Timer to a specific sampler and it will be applied only to that sampler. Sorry if I caused confusion with the way I asked the question :(
If you open the test plan in any text editor you will see all the Test Action elements as XML lines containing testname="Test Action" enabled="true".
You can easily find & replace testname="Test Action" enabled="true" with testname="Test Action" enabled="false".
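If you want to script that find & replace, a standalone Groovy sketch (run outside JMeter; the file name test.jmx is just an example) could look like this:

// Disable every element named "Test Action" in the .jmx file
def planFile = new File('test.jmx')
planFile.text = planFile.text.replace(
    'testname="Test Action" enabled="true"',
    'testname="Test Action" enabled="false"'
)

Swap the two strings to re-enable the pauses before the real load test.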

Developing a JMeter test plan with results from multiple REST endpoints

Is it possible in JMeter to develop a test plan where the result of the first test (an ID) is the input of the next test, and so on, for up to 4 tests? Each test generates a unique ID and the IDs are dependent on each other, related as follows: submission ID > execution ID > both generate a completion ID with a pass or fail result. These are REST API calls. I need to run load testing with concurrent users. Finally, I need to measure latency and throughput for each test.
Between sampler requests, parse the API response using a JSON post processor (or a Regular Expression Extractor), assign the ID to a ${variable_name}, and use it in the following requests; a sketch of the extraction step follows after the outline below.
It should look something like this:
Thread group
Userdefined variables
Http Sampler
Regex to get id
Http Sampler
Regex to get id
If you want to measure the response time of all the samplers together, put them under a common parent controller (a Transaction Controller will report the combined time as a single sample).
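A sketch of the extraction step as a JSR223 PostProcessor in Groovy, assuming the first response is JSON with a field called submissionId (the field and variable names are illustrative):

import groovy.json.JsonSlurper

// Parse the previous sampler's response and store the ID for the next request
def json = new JsonSlurper().parseText(prev.getResponseDataAsString())
vars.put('submissionId', json.submissionId as String)

The next HTTP sampler can then use ${submissionId} in its path, e.g. /2ndStep/${submissionId}.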
Thank you for the quick tip. I was able to get one step working by passing the ID via a regular expression, but the same regular expression did not work for the 3rd step. Let me give more details: the first POST command gives a submission ID > I feed that ID into a regular expression > the next step runs a GET with a URL like '/../2ndStep/submissionId' > this passes > I use the same regular expression in the next GET with a URL like '/../3rdStep/submissionId/executions' > this is supposed to give another executionId and it is failing for me. I'm not sure what I'm missing.
Thank you all for suggesting a working solution, but I need to do this a different way to meet the following requirement.
When I run the POST command test against my REST API HTTP request in JMeter, it returns an ID in the response. This ID is used by the other steps to complete the job. I'm currently passing the ID via a regular expression and using it between the samplers of each step as suggested above, and then measuring latency, but the GET steps that depend on that ID can take some time to complete. So I cannot put those GET steps into the same thread, because two of the steps fail as they can take some time to finish. Is there a way to separate the POST command from the rest and automatically start polling the GET commands for the remaining steps to remedy this? Bottom line: I need to measure the latency and throughput of each step. Please let me know if there is a way to achieve this in JMeter.
Thanks again,
Santana

How to run JMeter failed threads after test stops?

I'm using JMeter to run a functional test to update the password of a lot of users (22K). I've separated the users into 2 scripts and used an Ultimate Thread Group with Start Threads Count = 100, which is the value with which I got the fewest errors; however I still got 1.5% failed transactions, and I need to rerun only these failed threads, because all users need to end up with the same password.
I've tried to find answers to this specific problem, but I have only found ways to prevent it from happening, like using a While Controller with a timer, or logging the full response on failure, but I haven't found whether there is a way to specifically rerun the failed threads.
Does anyone know if this is possible?
You will have to do the following (a sketch of the PostProcessor and the While Controller condition follows below):
Use a JSR223 Sampler to set rescode=0
Add a While Controller with the condition (rescode != 200)
Put the HTTP Sampler inside it
Add a JSR223 PostProcessor with JavaScript as the scripting language
Store the response code using prev.getResponseCode()
e.g. vars.put("rescode", prev.getResponseCode());
You might have to add some more intelligence to the script to avoid an infinite loop.
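A minimal sketch of the polling pieces, using Groovy for the PostProcessor instead of JavaScript (the variable names rescode and attempts are just examples):

// JSR223 PostProcessor (Groovy) under the HTTP Sampler
vars.put('rescode', prev.getResponseCode())
// Count attempts so the While Controller can bail out eventually
def attempts = (vars.get('attempts') ?: '0') as int
vars.put('attempts', String.valueOf(attempts + 1))

While Controller condition, for example:
${__groovy(vars.get('rescode') != '200' && Integer.parseInt(vars.get('attempts') ?: '0') < 5,)}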
Another approach to solving the problem would be to anticipate errors on some of the password update calls and build a data file upon failure with the information you need.
For example:
Create a Regular Expression PostProcessor that has a default value of false and a template value of true. Make the expression match the expected (successful) response, so the variable ends up false when the sample fails.
Then, after that sampler, add an If Controller based on the new true/false variable. If it is false, you know the previous password update failed. Inside the If Controller, add a Dummy Sampler whose response data contains all the information you need to identify which accounts you must retry.
Then add a simple file writer to this Dummy Sampler and log the Dummy Sampler's response data to a file. At the end of a test run this data file will contain all the information you need to retry the failed accounts.
Sadly this is a slightly manual process, but I'm sure that with a little creativity you could automate recursive test runs until the retry file is empty. Beanshell file I/O might let you handle it all inside a single test run.
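A sketch of how the retry information could be written from a JSR223 PostProcessor instead of the Dummy Sampler + file writer combination (the file name retry-users.csv and the variable username are illustrative):

// JSR223 PostProcessor (Groovy) attached to the password-update sampler
if (!prev.isSuccessful()) {
    // Append the account that failed so it can be fed back in via a CSV Data Set Config
    new File('retry-users.csv') << vars.get('username') + System.lineSeparator()
}

Note that with many concurrent threads appending to the same file, lines can interleave, so a lock or a per-thread file may be needed.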
-Addled
