Performance testing the Hybris back-office using JMeter and the ZK plugin

I am trying to create a performance test script for the Hybris back-office using JMeter and the ZK plugin (I assume the back-office is built on the ZK AJAX framework). I am able to generate the desktop ID (dtid) and component IDs, and for some requests I get the same response as the browser.
But for other requests I get a blank response ( {"rs":[],"rid":126} ), even though the script sends the same parameters as the browser. The failing requests include coordinate-like parameters such as data_1 = {"top":242,"left":0}. Is the test failing because of these coordinates?
Could you please help me with this issue, or suggest an alternative tool for testing the Hybris back-office?
Thank you

Performance testing a ZK application is generally not easy, and test cases tend to be hard to maintain. It's best to probe the initial page rendering performance without too many interactions (and DON'T forget to send the rmDesktop command at the end of each test, or your test case will not reflect reality).
I don't have a better/easier alternative to JMeter (similar tools capturing the network requests/responses pose the same challenges).
Besides that, the mouse coordinates don't matter for an onClick event unless the server-side event listener actually uses them to determine the outcome of the event. In 99.99% of cases the server side is interested in the button-click event, not the mouse coordinates. If you're getting unexpected responses, you're most likely firing events at the wrong component UUID. In such cases the server simply ignores the event, since it can't be dispatched to any matching component. Then, if no event listener fires, the response is most likely empty, indicated by {"rs":[],"rid":126}.
One important thing is to disable UUID recycling, which reuses UUIDs on the server side and would otherwise likely cause the non-deterministic problems you encounter.
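For reference, a hedged sketch of both points. The cleanup request at the end of a test iteration typically takes this shape in the ZK AU protocol (the dtid value here is hypothetical):

    POST /zkau
        dtid=z_abc123        (the desktop ID extracted earlier in the script)
        cmd_0=rmDesktop
        opt_0=i

And UUID recycling can be switched off in zk.xml; the property name below is my reading of the ZK library properties, so verify it against your ZK version:

    <!-- zk.xml: keep desktop IDs and component UUIDs stable between runs -->
    <library-property>
        <name>org.zkoss.zk.ui.uuidRecycle.disabled</name>
        <value>true</value>
    </library-property>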

Related

Dynamic data not recorded by LoadRunner or JMeter

In one of my projects we search for a vehicle number, and the application has to show whether the vehicle number exists or not. But in the dev tools I am not able to find the searched vehicle number.
Let's say I have searched for 5555; 5555 should be recorded as a parameter in JMeter/LoadRunner, but even the browser developer tools don't show it.
How can I handle this in JMeter?
Let's assume you start a transaction just before you submit your form with '5555' in it within the LoadRunner recording (either the default sockets recorder or the Fiddler-based proxy recorder). You submit your form. You stop your transaction. You then complete your recording.
What do those requests between the start and end transaction markers look like?
If the browser developer tools don't show the request, it means there is no request; most probably everything happens on the client side only, hence there is nothing to test with JMeter/LoadRunner there.
Or the request might be hidden, filtered, or using another protocol. For example, this page won't show sent and received requests in the default view because they're happening in a single WebSocket connection, and you need to switch to the WS tab and then open the Messages view.
Such a website cannot be recorded with JMeter/LoadRunner as-is; in JMeter you will need to use WebSocket Samplers, and in LoadRunner the web_websocket_* functions.
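If you want to confirm outside the browser that the traffic really is WebSocket, here is a minimal sketch using Java 11's built-in java.net.http client (the endpoint URL is hypothetical; take the real one from the WS tab):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.WebSocket;
    import java.util.concurrent.CompletionStage;

    public class WsProbe {
        public static void main(String[] args) throws Exception {
            WebSocket ws = HttpClient.newHttpClient().newWebSocketBuilder()
                    .buildAsync(URI.create("wss://example.com/vehicle-search"),
                            new WebSocket.Listener() {
                                @Override
                                public CompletionStage<?> onText(WebSocket webSocket,
                                        CharSequence data, boolean last) {
                                    // the server's reply also arrives as a WS message
                                    System.out.println("<< " + data);
                                    return WebSocket.Listener.super.onText(webSocket, data, last);
                                }
                            }).join();
            // the searched value travels inside the WebSocket connection,
            // which is why no separate HTTP request shows up in the recording
            ws.sendText("5555", true);
            Thread.sleep(2000); // give the reply time to arrive before the JVM exits
        }
    }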
You can also consider a more powerful packet capturing solution like Wireshark, which can catch literally everything; look for your payload in the raw network packets, and it will allow you to identify the network protocol.
And last but not least: try recording your scenario in incognito (private browsing) mode, as it might be the case that your application has cached the response and it's being returned from e.g. local storage, so no actual network request is being made.

Response errors (403) for SignalR labels recorded by BlazeMeter for JMeter

I recorded login and logoff requests using BlazeMeter. After the recording, nearly 10 requests were created by BlazeMeter, some of which include .../signalr/.../connectionToken labels.
When I run the test, these labels return errors (403).
The test includes 10 users with different usernames and passwords. The other labels (apart from these SignalR labels) succeed.
So I wonder now: can I disable these pages and not include them in the tests? Or is there any solution for this issue?
"can I disable these pages and not include them in the tests"
Theoretically yes: you can ask around whether SignalR is in scope for performance testing and, if not, disable those requests. However, a well-behaved JMeter test should act exactly like a real browser, so if the real browser sends these requests to .../signalr/.../connectionToken, JMeter should send them too.
"or is there any solution for this issue"
I don't think you can record and successfully replay anything in Web 2.0 times; the majority of web applications heavily use dynamic tokens for various reasons (saving client state, security, tracking, etc.). When you record a request you get a hard-coded value, which can expire or be invalidated by other means. You need to identify all the dynamic values, like this SignalR token, and perform correlation: extracting dynamic values from previous responses using suitable JMeter Post-Processors and re-using them in subsequent requests in the form of JMeter Variables or Functions.
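For example, assuming the classic SignalR /negotiate response returns the token in a JSON field called ConnectionToken (inspect your own response first; the field name depends on the SignalR version), a Regular Expression Extractor added as a child of that request could be configured as:

    Regular Expression Extractor
        Apply to:            Main sample only
        Reference Name:      connectionToken
        Regular Expression:  "ConnectionToken":"(.+?)"
        Template:            $1$
        Match No.:           1

    Subsequent requests then use:  connectionToken=${connectionToken}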
If you record the same user scenario a second time and compare the two resulting test plans, all the request parameters which change will require correlation. There is also an alternative recording option capable of exporting recorded requests in "SmartJMX" mode with automatic detection and correlation of dynamic parameters; see How to Cut Your JMeter Scripting Time by 80% for more details.

How to track & trend end to end performance (the client experience)

I am trying to figure out how best to track and trend end-to-end performance between releases. By end to end I mean the experience of a client visiting this app via a browser, which includes download time, DOM rendering, JavaScript execution, etc.
Currently I am running load tests using JMeter, which is great for proving application and database capacity. Unfortunately, JMeter will never show me the full picture of the user experience: JMeter is not a browser, and therefore will never simulate the impact of JavaScript and DOM rendering. For example, if time to first byte is 100 ms but it takes the browser 10 seconds to download assets and render the DOM, we have problems.
I need a tool to help me with this. My initial idea is to leverage Selenium. It could run a set of tests (log in, view this, create that) and somehow record timings for each. We would need to run the same scenario multiple times, likely across a set of browsers. This would be done before every release and would allow me to identify changes in the experience for the user.
For example, this is what I would like to generate:
action      | v1.5 | v1.6 | v1.7
------------|------|------|------
login       | 2.3s | 3.1s | 1.2s
create user | 2.9s | 2.7s | 1.5s
The problems with Selenium are that 1. I am not sure it is designed for this, and 2. it appears that detecting DOM-ready or JavaScript rendering is really hard.
Is this the right path? Does anyone have any pointers? Are there tools out there that I could leverage for this?
I think you have good goals, but I would split them:
Measuring DOM rendering, JavaScript execution, etc. is not really part of the "experience of the client visiting this app via a browser", because your clients are usually unaware that you are "rendering the DOM" or "running JavaScript" - and they don't care. But those metrics are something I'd want to check after every committed change, not just release to release, because it can be hard to trace a degradation back to a particular change if such a test is not running all the time. So I would put it in continuous integration at the build level.
Then you probably want to know whether server-side performance is the same, worse, or better. For that, JMeter is ideal. Such testing could be done on a schedule (e.g. nightly or on each release) and can be automated using, for example, the JMeter plug-in for Jenkins. If server-side performance got worse, you don't really need end-to-end testing, since you already know what will happen.
But if the server is doing well, then an "end user experience" test using a real browser has real value, so Selenium actually fits well here. Since it can be integrated with any of the testing frameworks (JUnit, NUnit, etc.), it also fits into an automated process and can generate a report, including durations (JUnit, for instance, has a TestWatcher which allows you to add consistent duration measurement to every test).
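A minimal sketch of that idea using JUnit 4's Stopwatch rule (a TestWatcher subclass added in JUnit 4.12; the class and test names are hypothetical):

    import java.util.concurrent.TimeUnit;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.rules.Stopwatch;
    import org.junit.runner.Description;

    public class LoginTimingTest {
        // logs a duration for every test method in a consistent format,
        // ready to be collected into the per-release comparison table
        @Rule
        public Stopwatch stopwatch = new Stopwatch() {
            @Override
            protected void finished(long nanos, Description description) {
                System.out.printf("%s | %d ms%n",
                        description.getMethodName(),
                        TimeUnit.NANOSECONDS.toMillis(nanos));
            }
        };

        @Test
        public void login() {
            // drive the login flow with Selenium WebDriver here
        }
    }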
After all this automation, I would also do a "real end user experience" test while the JMeter performance test is running against the same server: get a real person to use the app while it's under load. People, unlike automation, are unpredictable, which is good for finding bugs.
Regarding "JMeter is not a browser". It is really not a browser, but it may act like a browser given proper configuration, so make sure you:
add HTTP Cookie Manager to your Test Plan to represent browser cookies and deal with cookie-based authentication
add HTTP Header Manager to send the appropriate headers
configure HTTP Request samplers via HTTP Request Defaults to
Retrieve all embedded resources
Use thread pool of around 5 concurrent threads to do it
Add HTTP Cache Manager to represent browser cache (i.e. embedded resources retrieved only once per virtual user per iteration)
if your application is build on AJAX - you need to mimic AJAX requests with JMeter as well
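A sketch of the relevant HTTP Request Defaults settings (the checkbox labels below are from the JMeter 2.x/3.x GUI and vary slightly between versions):

    HTTP Request Defaults -> Advanced
        [x] Retrieve All Embedded Resources
        [x] Use concurrent pool    Size: 5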
Regarding "rendering", for example you detect that your application renders slowly on a certain browser and there is nothing you can do by tuning the application. What's next? You will be developing a patch or raising an issue to browser developers? I would recommend focus on areas you can control, and rendering DOM by a browser is not something you can.
If you still need these client-side metrics for any reason, you can consider running a WebDriver Sampler alongside the main JMeter load test so that real browser metrics can be added to the final report. You can even use the Navigation Timing API to collect the exact timings and add them to the load test report.
See Using Selenium with JMeter's WebDriver Sampler to get started.
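As a standalone sketch of that measurement in plain Java + Selenium (the URL is hypothetical; inside a WebDriver Sampler the same calls go through the WDS.browser object instead):

    import org.openqa.selenium.JavascriptExecutor;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class NavigationTimingProbe {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://example.com/login"); // hypothetical page under test
                // Navigation Timing API: navigation start -> load event end
                long loadMs = (Long) ((JavascriptExecutor) driver).executeScript(
                        "return performance.timing.loadEventEnd - performance.timing.navigationStart;");
                System.out.println("Full page load: " + loadMs + " ms");
            } finally {
                driver.quit();
            }
        }
    }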
There are multiple options for tracking your application's performance between builds (and JMeter test executions), e.g.:
JChav - JMeter Chart History And Visualisation - a standalone tool
Jenkins Performance Plugin - a Continuous Integration solution

Page Loading Time in JMeter

I would like to measure the loading time of a document in my test web app. I have used JMeter for this, but I am getting different values for each run. I am measuring the average time in the Summary Report.
I am not sure whether the value is correct. Is this approach right, or is there a JMeter plugin available for this?
I have used HttpWatch to get the rendering time, but I can't use that tool for more than 1 user (load testing). I am using JMeter 2.13. Could you please help me with this?
With the help of the Aggregate Report or the CSV/XML results you get the required information regarding response times, BUT:
In JMeter: response time = processing time + latency (time taken by the network while transferring data)
In a browser: response time = processing time + latency + rendering time
Hence you will find a difference between HttpWatch response times and JMeter response times. For example, if JMeter reports 300 ms for a page while HttpWatch shows 1.2 s, the remaining 900 ms is spent by the browser fetching assets and rendering.
If you need to include rendering times in your response times, then use tools like LoadRunner (commercial), Selenium (open source) and so on. Personally, in my opinion, client-side rendering is not a measurable value unless all of the users accessing the application have the same hardware, software and network configuration. However, while the JMeter test is running at peak load, you can manually browse the site using various browsers and find the rendering times with the help of the developer tools.
"I am getting different values for each run" - this depends upon the test data you are using, server health status, network delays and so on.
I doubt you'll ever get two fully identical test run results; there will always be some fluctuation caused by the underlying hardware and software, so you should be receiving similar results with some statistical noise.
If this is not the case, your JMeter test might be misconfigured. From the "realness" perspective, mind the following configuration:
Make sure you have the Retrieve All Embedded Resources from HTML Files box checked and Use concurrent pool enabled. The easiest way to configure this for all the samplers is via HTTP Request Defaults.
Add an HTTP Cache Manager to your test plan. The previous setting "tells" JMeter to fetch embedded resources like scripts, styles, etc. from the pages; real browsers do this as well, but only once - on subsequent requests these resources are returned from the browser's cache.
Add an HTTP Cookie Manager to your test plan. It represents browser cookies, enables cookie-based authentication and maintains sessions.
Add an HTTP Header Manager to represent browser headers like User-Agent, Content-Type, encoding, etc.
When you use a straight HTTP-protocol-level virtual user, independent of the tool (JMeter, LoadRunner, SOASTA, Grinder, ...), what you are timing is the request/response traffic coming from the server, with very little coloration from local processing on the client for JavaScript and the final "drawing on the screen", which is rendering.
Up until the point where the server is degraded due to the number of requests or network limitations, the only area where you can tune is the page architecture; if you are waiting until the last 100 yards before deployment to address that, then you are likely in trouble.
Steve Souders has written quite a bit on the subject of page architecture in his book "High Performance Web Sites" and related works. In short, the rule of thumb comes down to making fewer requests, sending smaller responses, and serving the data from the closest possible location to the client. These have the effect of minimizing use of the most expensive finite resource to a web client: the network. For instance, a browser sprite reduces the number of calls for images, minification and compression reduce the size of the transmission, and a CDN changes the number of hops to the requested item to a location closer to the end client.
In order to effect changes to page architecture, you need to move upstream into your development and functional testing cycles. You will need to work with development to implement hard gates where code/pages cannot be submitted to the project without first passing performance gates related to design, and your development team and functional testing members will need to respect those gates. As to what the gates should be, I refer you back to the works of Mr Souders as a great source of data for constructing your gate rules.
This gets you to the level of "works for one: performant for one." Then you can use that as a known good to answer questions about server scalability and the point at which service to the client begins to degrade. If you have a CDN in your organization, be sure to take it into account in your test model, for if you do not, you will overload your server versus production.
As far as actually speeding up the "rendering", or drawing on the screen? You would need to buy a faster video card, barring changes from the browser manufacturer. Speeding up JavaScript? Make sure that all of your JavaScript is as small and as lean as possible. Have your functional test team test on very dirty browsers with lots of add-ins, as well as on lower-powered hardware, for a view of the maximum out-of-spec response. If you need a view of what your clients' standard hardware model looks like (browser/OS/some hardware info), you can process the data in your HTTP request logs, specifically the user agent, for client configuration information.

How to bypass session data in JMeter

I'm working on what should be a simple JMeter script whose goal is to load a fairly large amount of data into a software system for testing.
JMeter records a ton of session-specific information that, of course, can't really be "played back" as-is. All of the target application's URL construction is handled behind the scenes and sent in the responses. Is there a way to simply ignore all this session data and more or less script JMeter as I would a QTP/Selenium test?
To clarify: we have buttons that post session-specific URLs. I'd like to be able to just "click the buttons" and let things flow naturally, without needing to handle any of the session specifics.
Sorry for the "click the button" metaphor - I know the tool doesn't interact with the GUI, but it's the best description I can come up with.
Session data is not avoidable without back-end changes, such as disabling cookies or shutting off security tokens, neither of which was an option here.
I handled the problem by capturing all the necessary session tokens and parameterizing my scripts properly.
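As an illustration (the regex and names below are hypothetical; the real pattern depends on how the application builds its URLs), capturing a session-specific form action with a Regular Expression Extractor might look like:

    Regular Expression Extractor (child of the sampler that returns the page with the button)
        Reference Name:      buttonUrl
        Regular Expression:  action="(/app/[^"]+)"
        Template:            $1$
        Match No.:           1

    The next HTTP Request sampler then uses Path: ${buttonUrl}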
JMeter doesn't parse HTML or execute JavaScript, for performance reasons (doing so takes a lot of time). Instead, JMeter works at the HTTP protocol level, and thus uses far fewer system resources than Selenium tests.
You have to structure those HTTP requests yourself, and you have to handle the session specifics. The HTTP Proxy Server (known as the HTTP(S) Test Script Recorder in recent JMeter versions) may make your life easier.
