I'm using CAPL for automated testing. I want to log CAN bus data at a certain point (such as a test-fail point), and I need the CAN bus data from the 10 seconds before that point (not after it).
Can someone help me?
Open Analysis -> Measurement Setup.
Right-click on the right-hand side of the Measurement Setup window.
Add a new logging block.
Set the toggle on/off method to CAPL.
Set the pre-trigger time to 10 000 ms, or however much you need.
Configure the output file as you need it.
Whenever the test fails, trigger the block with startLogging(nameOfYourLoggingBlock).
Remember that you need to end the logging at some point with stopLogging(nameOfYourLoggingBlock).
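For example, a CAPL test case could trigger the pre-configured logging block on failure roughly like this (a minimal sketch; "FailLogging" is a placeholder for the name of your logging block, and the 2 s post-trigger wait is arbitrary):

testcase tcCheckSomething()
{
  // ... perform the actual checks here ...

  // on failure, report it and fire the logging block; the configured
  // 10 s pre-trigger captures the bus traffic leading up to this point
  testStepFail("Check", "Signal out of range");
  startLogging("FailLogging");

  // keep logging for a short time after the failure, then stop
  testWaitForTimeout(2000);
  stopLogging("FailLogging");
}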
At recording time I record the complete activity up to placing an order, but when I run the same action 5 times, every run hits the same URL that was generated while recording, and because of that the orders placed during replay are not recorded in my admin account.
In the majority of cases you will unfortunately not be able to successfully replay the recorded script, because modern web applications widely use dynamic parameters, for example for client-side state tracking or for security reasons.
So you need to parameterize and/or correlate your test and replace the recorded hard-coded values either with dynamic values extracted from a previous response using a suitable JMeter Post-Processor, with random values, or with pre-defined values from, for example, an external CSV file.
The easiest way of detecting dynamic parameters is to record your test one more time and compare the 2 recorded scripts - the parameters which differ will require your attention.
You can also consider using an alternative recording solution which is capable of exporting recorded scripts in "SmartJMX" mode with automatic detection and correlation of dynamic values; check out the How to Cut Your JMeter Scripting Time by 80% article for more details.
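To give a concrete (hypothetical) example of the Post-Processor approach: if the place-order page carries a hidden token, a Regular Expression Extractor added as a child of the request that returns it could be configured as

  Reference Name: orderToken
  Regular Expression: name="orderToken" value="(.+?)"
  Template: $1$
  Match No.: 1

and the following requests would then send ${orderToken} instead of the hard-coded value captured during recording (the token name and regex here are made up for illustration).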
I have recorded my web application through a template, and I just want to confirm whether the load test results I am getting are correct. Does simply increasing the number of users give proper results? Is that enough for load testing a web application?
First of all you need to ensure that your test does what it is supposed to be doing. Recorded tests can rarely be successfully replayed, so normally you should be acting as follows:
Add View Results Tree listener and run your test with 1 user. Inspect request and response details to verify your test steps.
Perform correlation and parametrization if required.
Correlation: the process of identifying and handling any dynamic parameters. Most often people use Regular Expression Extractor for it.
Parametrization: the process of making your test data-driven. For example, if your application assumes multiple authenticated users, you need to store the credentials somewhere. The most commonly used test element for this is the CSV Data Set Config (see the example after these steps).
Make your test realistic. Virtual users simulated by JMeter need to represent real users using real browsers as close as possible with all the related stuff: cookies, headers, cache, etc. See How To Make JMeter Behave More Like A Real Browser to learn how to configure JMeter to act closer to real users. Also real users need some time to "think" between operations so make sure you are using Timers to simulate this behaviour as well.
Only after you have applied the above points should you add more virtual users. Again, run your test with 2-3 users and iterations to ensure your test functions as designed. Once you are happy with it you can increase the load, but don't overwhelm your server: increase the load gradually and check the impact of the increasing load on your application, i.e. how response time, throughput and the number of errors change as the load grows. The same applies to decreasing the load: don't switch it off all at once, decrease the number of virtual users gradually.
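To illustrate the parametrization point: a CSV Data Set Config with Variable Names set to username,password and pointing at a file like the (made-up) users.csv below gives each virtual user its own credentials:

  alice,secret1
  bob,secret2
  charlie,secret3

The login request then uses ${username} and ${password} instead of the single recorded account.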
Building a Web Test Plan
Building an Advanced Web Test Plan
I'm an absolute Gatling beginner, and have mostly written my simulations by copying bits and pieces of other simulations in my org's code. I searched around a bunch, and didn't see a similar question (and couldn't find anything in the gatling docs), so now I'm here.
I know Gatling has an after{} hook that will run code after the simulation finishes. I need to know how to multi-thread the after{} hook the same way the simulation is multi-threaded. Basically, can I ramp up users within the after{} hook?
My issue is: My simulation ramps up 100 users, logs them into a random account (from a list of 1000 possible accounts), and then creates 500 projects within that account. This is to test the performance of a project creation endpoint.
The problem is that we have another simulation that tests the performance of an endpoint that counts the number of projects in a given account. That test is starting to suffer because of the sheer volume of projects in these accounts (they're WAYYY more loaded than even our largest real-world account -- by orders of magnitude), so I want my "project creation" simulation to clean up the accounts when it's done.
Ideally, the after would do something like this:
after {
  // ramp up 1000 users
  // each user should then...
  //   log into an account
  //   delete all but N projects (where N is a number of projects close to our largest user account)
}
I have the log in code, and I can write the delete code... but how do I ramp up users within the after {} hook?
Is that even doable? Every example I've seen of the after{} hook (very few) has been something simple like printing text that the test is complete or something.
Thanks in advance!
You can use a global ConcurrentHashMap to store data from exec(session => session) blocks.
Then, you would be able to access this data from an after hook. But then, as explained in the doc, you won't be able to use the Gatling DSL in there. If you're dealing with REST APIs, you can directly use AsyncHttpClient, which Gatling is built upon.
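A rough sketch of that first approach, assuming the scenario records created project IDs per account in a shared map and that a DELETE endpoint like /accounts/{id}/projects/{id} exists (the URL and field names are made up for illustration):

import java.util.concurrent.ConcurrentHashMap
import org.asynchttpclient.Dsl._

// filled from the scenario, e.g.
// exec { session =>
//   createdProjects.put(session("projectId").as[String], session("accountId").as[String])
//   session
// }
val createdProjects = new ConcurrentHashMap[String, String]()

after {
  // no Gatling DSL here: use AsyncHttpClient (or any HTTP client) directly
  val client = asyncHttpClient()
  val it = createdProjects.entrySet().iterator()
  while (it.hasNext) {
    val entry = it.next()
    val projectId = entry.getKey
    val accountId = entry.getValue
    client
      .prepareDelete(s"https://myapp.example.com/accounts/$accountId/projects/$projectId")
      .execute()
      .get()
  }
  client.close()
}

This runs the clean-up sequentially in a single thread; if that is too slow you can submit the delete calls to your own thread pool, but you won't get Gatling's user ramp-up inside after {}.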
Some people would also log to a file instead of in memory, and delete from a second simulation that would use this file as a feeder.
Anyway, I agree this is a bit cumbersome, but we don't currently consider that setting/cleaning up the system under test is the responsibility of the load test tool.
I'm testing a large BizTalk system using Visual Studio Load Test. The load test pushes messages into MQ; these are picked up by BizTalk and then processed.
Rather than having the test finish (and all performance counters ending) as soon as Visual Studio has finished injecting messages into MQ, I want the test to end only when some condition is met (in my case, when SELECT COUNT(*) FROM BizTalkMsgBoxDb.dbo.Spool returns 4).
I can see a bunch of ways to run stuff after the test is complete, but no obvious way to extend the test and continue monitoring unless some user-defined exit condition is met.
Is this possible, or if not, does anyone have an idea for a good work-around/hack to achieve this?
You'll want to write a custom load test plugin. Details begin at this URL: http://msdn.microsoft.com/en-us/library/ms243153.aspx
The plugin can manipulate the scenario, extending the duration of the test until your condition is met.
I imagine you want to keep the load test running after queueing up a bunch of requests in order to continue to monitor the performance while the requests are processed. Although we can't control the load test duration, there is a way to achieve this.
Don't limit the test duration: Set the load test duration (or number of iterations) to a very large value -- larger than you anticipate (or know) it will take for the end condition to be satisfied.
Limit the scenario that queues up requests: In the load test scenario properties, in the Options section, set the Maximum Test Iterations so that the user load will drop to zero after sending the desired number of requests. If setting an iteration limit is not possible for some reason, you can instead write a load test plugin that sets the user load to zero in a specified scenario after a certain amount of test time has elapsed.
Check for end condition: Write a web test plugin that checks the database for your end condition. Attach this plugin to a new webtest in a new scenario and set Think Time Between Test Iterations on the scenario so that the test runs only as often as needed (60 seconds?). When the condition is reached, the plugin should write a predetermined value into the user context (the user context is accessible in the web test context as $LoadTestUserContext, and is only available in a load test, not when running a webtest standalone).
Abort the test: Write a load test plugin that looks for the flag value in the user context in the TestFinished event. When the value is found, the plugin calls LoadTest.Abort().
There is one minor disadvantage to this method: the test state is marked as Aborted in the results database.
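A very rough sketch of the two plugins described above, in C# (the class names are made up, and the exact shape of the event arguments and of LoadTestUserContext should be verified against your Visual Studio version):

using Microsoft.VisualStudio.TestTools.LoadTesting;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Web test plugin: after each request, check the end condition and set a
// flag in the load test user context ($LoadTestUserContext).
public class EndConditionWebTestPlugin : WebTestPlugin
{
    public override void PostRequest(object sender, PostRequestEventArgs e)
    {
        if (SpoolCountReachedTarget())  // e.g. SELECT COUNT(*) FROM BizTalkMsgBoxDb.dbo.Spool
        {
            var userContext = (LoadTestUserContext)e.WebTest.Context["$LoadTestUserContext"];
            userContext["EndConditionReached"] = true;
        }
    }

    private bool SpoolCountReachedTarget()
    {
        // run the SQL count against the message box here
        return false;
    }
}

// Load test plugin: when a test iteration finishes, look for the flag and abort.
public class AbortOnConditionLoadTestPlugin : ILoadTestPlugin
{
    private LoadTest _loadTest;

    public void Initialize(LoadTest loadTest)
    {
        _loadTest = loadTest;
        _loadTest.TestFinished += OnTestFinished;
    }

    private void OnTestFinished(object sender, TestFinishedEventArgs e)
    {
        object flag;
        if (e.UserContext != null &&
            e.UserContext.TryGetValue("EndConditionReached", out flag) &&
            (bool)flag)
        {
            _loadTest.Abort();
        }
    }
}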
At the time of writing there is (still) no way to extend the duration of the test using a custom load test plugin, nor by having a virtual user type that refuses to exit, nor by locking the close-down period of the test and preventing it from exiting that way.
The only way we managed to do something like this was to directly manipulate the LoadTest database and inject performance counter data in afterwards from log files, but this is neither smart nor recommended.
Oh well..
We are using JMeter for performance testing. To generate a load of 1000 users we are using 8 JMeter instances (125 x 8 = 1000). All works fine, but at the end of the execution the generated graph covers the first instance only. I want a graph for all 8 instances. What can be done about this? Please help.
Option for GUI mode: http://code.google.com/p/jmeter-plugins/wiki/ActiveThreadsOverTime
Option for non-GUI mode: http://code.google.com/p/jmeter-plugins/wiki/ConsoleStatusLogger
Hope this helps...
Create a Simple Data Writer listener and save all your results out into one file. After the test has finished, you can import that results file into ANY of the viewers in JMeter. This would give you a graph for the entire test.
You want to avoid having "real-time" viewers while running a load test (visual graphs, aggregate report, tree view, etc.) as they are very memory intensive.
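For example, assuming each of the 8 instances is started in non-GUI mode and writes CSV results (the file names here are illustrative), the per-instance files can be merged afterwards and loaded into any listener:

  # on each instance
  jmeter -n -t testplan.jmx -l results_1.jtl

  # afterwards: keep one header line and append the data rows of every file
  head -n 1 results_1.jtl > merged.jtl
  for f in results_*.jtl; do tail -n +2 "$f" >> merged.jtl; done

Opening merged.jtl in a graph listener or the Aggregate Report then gives one view over the whole 1000-user test.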