I am implementing an HTTP request test plan in JMeter and the scenarios need to run in a certain order because they are dependent, e.g.:
create user (extract the user id with a JSON Extractor)
create an order for this user id (extract the order id with a JSON Extractor)
ship the order for this order id
Is there a way to run the scenarios so that this dependency among the HTTP requests is respected, and so that with several threads I won't face issues like shipping an order that has already been shipped?
JMeter executes requests sequentially from top to bottom (unless there are Logic Controllers which amend this behaviour), so there is nothing extra you need to do.
JMeter Variables are local to the thread, so you can be confident that each ID will be unique to each JMeter thread (virtual user).
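For example, here is a minimal JSR223 PostProcessor sketch (Groovy) attached to the "create user" request, assuming a hypothetical response shape of {"id": 123} and a variable name of my choosing (the JSON Extractor achieves the same without code):

    import groovy.json.JsonSlurper

    // Parse the "create user" response and store the id as a
    // thread-local JMeter Variable; every thread gets its own copy.
    def json = new JsonSlurper().parseText(prev.getResponseDataAsString())
    vars.put('userId', json.id as String)
    // The next sampler can then reference it, for example:
    // POST /users/${userId}/orders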
You may find API Testing With JMeter and the JSON Extractor article useful as well
I recorded login and logoff requests using BlazeMeter. The recording created nearly 10 requests, some of which have .../signalr/.../connectionToken labels.
When I run the test these labels return errors.
The test includes 10 users with different usernames and passwords. The other labels (apart from these SignalR ones) pass.
So I wonder now: can I disable these pages and not include them in the tests? Or
is there any solution for this issue?
can I disable these pages and not include them in the tests
Theoretically yes: you can ask around whether SignalR is in scope for performance testing, and if not you could disable those requests. However, a well-behaved JMeter test should act exactly like a real browser, so if the real browser sends these requests to .../signalr/.../connectionToken, JMeter should send them too.
is there any solution for this issue
I don't think you can record and successfully replay anything in the Web 2.0 era. The majority of web applications heavily use dynamic tokens for various reasons (saving client state, security, tracking, etc.). When you record a request you get a hard-coded value which can expire or be invalidated by other means, so you need to identify all the dynamic values, like this SignalR token, and perform correlation: the process of extracting dynamic values from previous responses using suitable JMeter Post-Processors and re-using them in subsequent requests in the form of JMeter Variables or Functions.
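To illustrate, here is a JSR223 PostProcessor sketch (Groovy) that could sit under the SignalR negotiate request; the response shape ({"ConnectionToken":"..."}) and the variable name are assumptions, and a Regular Expression or JSON Extractor would do the same job without scripting:

    // Pull the ConnectionToken out of the negotiate response and
    // expose it as a JMeter Variable for the following requests.
    def matcher = (prev.getResponseDataAsString() =~ /"ConnectionToken"\s*:\s*"([^"]+)"/)
    if (matcher.find()) {
        vars.put('connectionToken', matcher.group(1))
    }
    // The hard-coded token captured at recording time is then
    // replaced with ${connectionToken} in the subsequent requests.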
If you record the same user scenario a second time and compare the two resulting test plans, all the request parameters which change will require correlation. There is also an alternative recording option capable of exporting recorded requests in "SmartJMX" mode with automatic detection and correlation of dynamic parameters; see How to Cut Your JMeter Scripting Time by 80% for more details.
I can somehow run a JMeter test for a single user, but I need to run the HTTP requests several times, as if there were multiple users. The problem is that authentication only supports one session, and I don't think it's OK to create 50 users in LDAP just to be able to test. I tried to use the Parallel Controller, but after executing the first request the others have the status 'Socket closed'.
I don't think it's OK to create 50 users in LDAP just to be able to test
I think this is what you should really be doing.
The Parallel Controller is a kind of workaround to bypass the JMeter threads model limitation when it comes to implementing specific test scenarios, like simulating AJAX requests: it fires several requests in parallel, triggered by a single thread (virtual user).
In the majority of cases a user performs sequential actions: open the login page -> log in -> navigate somewhere -> type something -> etc.
Ideally each JMeter thread (virtual user) should represent a real user with unique credentials, so I would strongly recommend creating as many users in LDAP as you need to simulate. If you're not allowed to keep test users in LDAP on a permanent basis, you can even create them from JMeter (see the sketch after this list), like:
setUp Thread Group - create users
normal Thread Group with your main test actions
tearDown Thread Group - delete users
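A rough sketch of the setUp part using plain JNDI from a JSR223 Sampler; the URL, bind credentials, base DN, attributes and user count are all placeholders you would need to adapt:

    import javax.naming.Context
    import javax.naming.directory.BasicAttribute
    import javax.naming.directory.BasicAttributes
    import javax.naming.directory.InitialDirContext

    // Placeholder connection settings - adjust URL, bind DN and password
    def env = new Hashtable()
    env.put(Context.INITIAL_CONTEXT_FACTORY, 'com.sun.jndi.ldap.LdapCtxFactory')
    env.put(Context.PROVIDER_URL, 'ldap://ldap.example.com:389')
    env.put(Context.SECURITY_AUTHENTICATION, 'simple')
    env.put(Context.SECURITY_PRINCIPAL, 'cn=admin,dc=example,dc=com')
    env.put(Context.SECURITY_CREDENTIALS, 'changeit')

    def ldap = new InitialDirContext(env)
    try {
        // One test user per virtual user you plan to simulate
        (1..50).each { i ->
            def attrs = new BasicAttributes(true)
            def objectClass = new BasicAttribute('objectClass')
            ['top', 'person', 'organizationalPerson', 'inetOrgPerson'].each {
                objectClass.add(it)
            }
            attrs.put(objectClass)
            attrs.put('cn', "testuser$i")
            attrs.put('sn', "Test User $i")
            attrs.put('userPassword', "secret$i")
            ldap.createSubcontext("cn=testuser$i,ou=people,dc=example,dc=com", attrs)
        }
    } finally {
        ldap.close()
    }

The tearDown Thread Group would mirror this with ldap.destroySubcontext(...) for each created user.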
See the How to Load Test LDAP with Apache JMeter article to learn more about the different types of LDAP requests you can send from JMeter.
I am working on migrating scripts from Performance Center to JMeter 5.2.1.
As part of this migration we are keeping the same functional flow we had in Performance Center.
My scenario consists of users logging in to the web application, performing 10-15 iterations, and then logging out.
This is my test plan:
TestPlan
--ThreadGroup1
----Once Only Controller (login of users)
----Loop Controller (10 iterations)
------HTTP1
------HTTP2
------HTTP3
------...
----Once Only Controller (logout of users)
--CSV Data Set Config (username/password)
--CSV Data Set Config (unique data for the Loop Controller)
With this approach I am noticing that the time taken to complete the test in JMeter is much longer than in Performance Center (I took care of think times and added similar values).
Why is my test running slowly in JMeter?
Is the Loop Controller sequential, meaning that at a given time it can run only one request?
If not the Loop Controller, what other options do I have to satisfy my scenario?
If I use different thread groups, the JSESSIONIDs need to be carried across thread groups, which is not a best practice.
Update: comparison between Performance Center and JMeter settings.
Below are the settings in JMeter (screenshots of the Thread Group and Test Plan omitted):
--HTTP Cookie Manager at Thread Group level
--CSV Data Set Configs at Test Plan level
--Once Only Controllers for login and logout
--Loop Controller for the iterations
--HTTP Request Defaults: even without "Retrieve all embedded resources" and parallel downloads checked, the test takes more than an hour for 3 users
Performance Center results (screenshot of the entire test plan omitted):
--Every sampler has an HTTP Header Manager
Given you send the same requests, you should get the same response times, no matter which tool is used under the hood.
It's hard to say what the differences are without seeing the full scripts from both tools, so the generic advice is to use a third-party sniffer tool like Wireshark or Fiddler in order to identify the differences and configure JMeter to behave exactly like the "performance center" (whatever it is).
For example, I fail to see an HTTP Cache Manager; its absence will cause JMeter to download the embedded resources (images, scripts, styles, sounds, fonts) for each and every HTTP request, while a real browser does this only once.
I also don't see an HTTP Header Manager, which might be very important; for example, if you send the Accept-Encoding header the server becomes aware that the client can understand gzipped responses, which greatly reduces network traffic.
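If you prefer doing this in code rather than via the HTTP Header Manager GUI, a JSR223 PreProcessor sketch along these lines is possible (it assumes an HTTP Header Manager is already attached to the sampler, otherwise getHeaderManager() returns null):

    import org.apache.jmeter.protocol.http.control.Header

    // Make the current HTTP sampler advertise gzip support,
    // the way a real browser does on every request.
    def headerManager = sampler.getHeaderManager()
    if (headerManager != null) {
        headerManager.removeHeaderNamed('Accept-Encoding')
        headerManager.add(new Header('Accept-Encoding', 'gzip, deflate'))
    }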
More information: How to make JMeter behave more like a real browser
I expect to send multiple PUT requests through JMeter to test a PUT API with the constraints mentioned below:
Multiple logged-in users.
Multiple users with different data to be updated.
Looking forward to getting help in this regard.
You need to create scripts in JMeter if you want to send multiple PUT requests: create the PUT request in JMeter using an HTTP Request sampler.
Multiple logged-in users.
If the API you are testing uses some authentication mechanism (which I reckon it should), then you have to add the authentication requests as additional HTTP requests in your JMeter test. Otherwise, you can just increase the number of users in the Thread Group and run the test.
Multiple users with different data to be updated.
For this task you have to parametrize the PUT request data, for example with a CSV Data Set Config; see the sketch below. You can also search for parametrization in JMeter to learn more.
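A minimal sketch, assuming a CSV Data Set Config with Variable Names set to userId,userName so that each thread reads its own row; a JSR223 PreProcessor on the PUT sampler then builds the per-user payload (the field names are made up for illustration):

    import groovy.json.JsonBuilder

    // Build a per-user PUT payload from the CSV-driven variables
    def payload = new JsonBuilder([
        id  : vars.get('userId'),
        name: vars.get('userName')
    ])
    vars.put('putBody', payload.toString())
    // The HTTP Request sampler's Body Data is then simply: ${putBody}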
I would suggest you first explore JMeter and learn some basics of performance testing before going ahead with the testing.
I've written a simple test plan in JMeter; the plan consists of one Thread Group and one controller with HTTP Request elements. The plan performs login to a website, session refresh, and logout.
Is there a way to run the threads with different parameters (login/password) each time?
Thanks.
There are lots of different ways to parametrize your test. If you have a lot of users, I would recommend using a CSV Data Set Config, as sketched below. If you only have a few, you can try User Defined Variables or User Parameters. Make sure you check out the documentation, as each one is used slightly differently.
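For example, a minimal sanity check, assuming a hypothetical users.csv next to the test plan and a CSV Data Set Config with Variable Names set to username,password; this JSR223 Sampler just logs which credentials the current thread picked up:

    // users.csv (hypothetical contents):
    //   alice,secret1
    //   bob,secret2
    // Each thread reads the next row, so ${username} and ${password}
    // in the login request resolve to different credentials.
    log.info("Thread ${ctx.getThreadNum()} logs in as ${vars.get('username')}")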