How to run a test with distribution of load - JMeter

I am new to JMeter and need your help with a problem.
I have 4 test scenarios and I need to run them with a 30-user load, distributed as 30, 10, 30, and 30 percent. Out of the 4 scenarios, one creates a customer ID, and that ID is used in the rest of the scenarios. To test this, I have created test data of customer IDs with my first scenario and saved it in a CSV file. Now my question is: when I run my test, how do I handle the customer IDs generated at run time, and how do I manage them alongside the test data I have already created? Please help me.

With regards to reusing the data generated at run time: you can extract the required data (i.e. the customer ID) using a suitable JMeter Post-Processor and store it in a JMeter Variable. Once done, the variable can be reused in other scenarios. The process is known as correlation, and there is a lot of information with implementation examples on the web.
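For instance, a minimal JSR223 PostProcessor sketch (written in Java-style code, which the recommended Groovy engine accepts; the response format and the customerId field name are assumptions, so adjust the regular expression to your actual response):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// 'prev' is the result of the parent sampler; 'vars' holds the JMeter Variables.
String body = prev.getResponseDataAsString();
Matcher m = Pattern.compile("\"customerId\"\\s*:\\s*\"?(\\w+)\"?").matcher(body);
if (m.find()) {
    vars.put("customerId", m.group(1)); // reuse later as ${customerId}
} else {
    prev.setSuccessful(false); // fail the sample if the ID is missing
    prev.setResponseMessage("customerId not found in response");
}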
With regards to the distribution, there are different approaches as well:
Throughput Controller (see the layout sketch after this list)
Switch Controller
Weighted Switch Controller
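For example, with each Throughput Controller set to "Percent Executions", the Test Plan could be laid out like this (a sketch; the scenario names are illustrative):

Thread Group (30 users)
  Throughput Controller 30.0% -> Scenario 1 (creates the customer ID)
  Throughput Controller 10.0% -> Scenario 2
  Throughput Controller 30.0% -> Scenario 3
  Throughput Controller 30.0% -> Scenario 4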
With regards to "manage test data you created" - you can read the values from a CSV file using CSV Data Set Config or __CSVRead() function

Is there a way to get only required transactions in the JMeter Summary report

I am new to JMeter and I have a couple of questions. Can someone help me out?
I am using a master-slave architecture (master and 4 slaves) for a 4000-user load. On which machine will I get the consolidated results for the complete load?
I have configured the Summary Report for results, but how can we get the report only for the required transactions and not all of them end to end?
It's not exactly what you are looking for, but one option is to generate the HTML report configured to include only the transactions of interest. This is done by updating the user.properties file with the following property:
# This property is used by menu item "Export transactions for report"
# It is used to select which transactions by default will be exported
#jmeter.reportgenerator.exported_transactions_pattern=[a-zA-Z0-9_\\-{}\\$\\.]*[-_][0-9]*
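For example, to export only transactions whose names start with "Checkout" (an illustrative name; the value is a regular expression), you could uncomment and set:
jmeter.reportgenerator.exported_transactions_pattern=Checkout.*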
You can use the Transaction Controller to get the consolidated time taken by the nested elements. Add a Transaction Controller as a parent element and set the flag Generate Parent Sample to get the overall time without the details of the nested elements.
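A sketch of the layout (the sampler names are illustrative):

Transaction Controller "Login flow" (Generate Parent Sample: checked)
  HTTP Request - open login page
  HTTP Request - submit credentials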
By default, JMeter stores all samplers' execution metrics in the .jtl results file.
If you're not interested in some of the results, you can remove them using the Filter Results Tool (it doesn't come with JMeter and needs to be installed using the JMeter Plugins Manager).
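A sketch of its command-line form (the label names are illustrative; verify the flag names against your plugin version):

FilterResults.bat --input-file results.jtl --output-file filtered.jtl --include-labels "Login,Checkout"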

Data integration from Magento to QuickBooks

I'm currently new to Talend and I'm learning through videos and documentation, so I'm just not sure how to approach/implement this with best practices.
Goal
Integrate Magento and QuickBooks using Talend.
My thoughts
Initially my first thought was to set up a direct DB connection to Magento, take the relevant data I need, process it, and send it to QuickBooks using the REST APIs (specifically the bulk APIs, in batches).
But then again, I thought it would be a little hectic to query the Magento database (multiple joins), so another option is to use Magento's REST API.
As I'm not very familiar with the tool, I'm struggling a little to find the most suitable approach, so any help is appreciated.
What I've done so far
I've saved my auth (for QuickBooks) and DB (Magento) credentials in a file, and using tFileInputDelimited and tContextLoad I'm storing them in context variables so they are accessible globally.
I've successfully configured the database connection and the DB input component, but I've not used metadata for the connection (should I, and if yes, how can I pass dynamic values there?). I've used my context variables in the DB connection settings.
I've taken the relevant fields for now, but if I want more fields, a simple query is not enough, as Magento stores data for customers etc. across multiple tables. It's not a big deal, but I think it might increase my work.
That's what I've built for now; my next step is to send the data to QuickBooks via REST, getting an access_token and saving it to a context variable, and then storing the QuickBooks reference back in the Magento DB.
Also, I've decided to use the QuickBooks bulk APIs, but I'm not sure how I can process data in chunks in Talend (I tried checking multiple resources but had no luck), i.e. if Magento returns 500 rows, I want to process them in chunks of 30, as the QuickBooks batch max limit is 30. I will send them via REST to QuickBooks and, as I said, I also want to store the QuickBooks reference ID back in Magento (so I can update it later).
Also, this will all be local; how can I do the same in production? How can I maintain separate development and production environments?
Resources I'm referring to
For REST and Auth best practices - https://community.talend.com/t5/How-Tos-and-Best-Practices/Using-OAuth-2-0-with-Talend-to-Access-Goo...
Nice example for batch processing here:
https://community.talend.com/t5/Design-and-Development/Batch-processing-in-talend-job/td-p/51952
Redirect your input to a tFileOutputDelimited.
Enter the output filename, tick the option "Split output in several files" in the "Advanced settings", and enter a value in the field "Rows in each output file" (e.g. 1000, or 30 in your case to match the QuickBooks batch limit). This will create n files based on the filename, with that many rows in each.
In the next subjob, use a tFileList to iterate over this file list to get the records from each file.
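If you would rather keep the rows in memory than go through intermediate files, the same chunking logic can be written in plain Java, e.g. in a Talend routine (a minimal sketch; the class and variable names are illustrative):

import java.util.ArrayList;
import java.util.List;

public class BatchSplitter {
    // Splits rows into chunks of at most batchSize (30 for the QuickBooks batch API).
    public static <T> List<List<T>> chunk(List<T> rows, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            batches.add(new ArrayList<>(rows.subList(i, Math.min(i + batchSize, rows.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 1; i <= 500; i++) rows.add(i); // e.g. 500 Magento rows
        List<List<Integer>> batches = chunk(rows, 30);
        System.out.println(batches.size() + " batches"); // 17 (16 of 30 rows + 1 of 20)
    }
}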

JMeter for concurrent users

I have been using the JMeter plugin Ultimate Thread Group for concurrent requests.
But now I'm finding it difficult to use, because the scenario is:
Each request has a tracking number (the tracking numbers are already generated in the system when a form is submitted, so I have to use the generated tracking numbers from the DB), which is passed as a POST parameter in the HTTP request. These tracking numbers are unique, and I have configured a CSV config for passing them. Once a tracking number is used, it can't be used again (it would give me an error message). So can someone please suggest how to stress test this scenario, where I have to hit a particular URL (with a unique tracking number from the CSV file) for approximately 60/30 minutes (with a varying number of threads) until I find the crash point of the system?
1st way:
You can pass the tracking numbers via a CSV file (a sketch of the setup follows these steps):
Allocate all the tracking numbers to specific users (this can be done with a database query).
Copy-paste those tracking numbers into a CSV file.
Pass those tracking numbers as a parameter via CSV Data Set Config.
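A sketch of the setup (the file and variable names are illustrative):

trackingnumbers.csv - one tracking number per line
CSV Data Set Config:
  Filename: trackingnumbers.csv
  Variable Names: trackingNumber
  Recycle on EOF?: False (each number is single-use)
  Stop thread on EOF?: True
HTTP Request - POST parameter value: ${trackingNumber}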
2nd way:
Fill in the form; the generated tracking number can be fetched via a Regular Expression Extractor (a sketch follows these steps).
Set the allocation logic to a specific user each time (disable the other users).
Log in with this user and pass the fetched tracking number.
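A sketch of the extractor (the field name and regular expression are assumptions; adjust them to the actual response):

Regular Expression Extractor:
  Reference Name: trackingNumber
  Regular Expression: name="trackingNumber" value="(\d+)"
  Template: $1$
  Match No.: 1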
Hope this will be helpful to you.

Visual Studio Load Test - Using Data Source with Multiple Agents

I'm using Visual Studio 2015 Load Test and running a Web Performance test that has a data source connected. The data source contains user login information for 250 users.
Running this in sequential order on a single agent works fine. However, I'm attempting to add in 10 test agents to share the load. By design, the Load Test copies the data source to each agent and runs the test. What ends up happening is that all 10 agents start the test using the row-1 user from the data source. I'm hoping there's a way to set up the Load Test to run sequentially across all agents (ex: Agent 1 uses row 1, Agent 2 uses row 2, Agent 3 uses row 3, etc...).
I suspect there's no option to set this up, but I wondered if anyone has run into this and has workarounds to offer. I did find this info via http://vsptqrg.codeplex.com:
Multiple machines running as a rig
Sequential – This works the same as if you are on one machine. Each agent receives a full copy of the data, and each starts with row 1 in the data source. Then each agent will run through each row in the data source and continue looping until the load test completes.
Random – This also works the same as if you run the test on one machine. Each agent will receive a full copy of the data source and randomly select rows.
Unique – This one works a little differently. Each row in the data source will be used once. So if you have 3 agents, the data will be spread across the 3 agents and no row will be used more than once. As with one machine, once every row is used, the web test will stop executing.
You can split the data set/CSV and distribute it to each agent, i.e. in your case 25 rows per agent, and execute the test.
Each agent can use its own data set/CSV.
CSV split: http://monchito.com/blog/autosplit-csv
The nearest you can get to what you want is to use the Unique setting. However, each data source row will only be used once, then the test will stop. With a data source containing 250 lines, only 250 test executions will take place. I do not know the exact distribution of data source rows to agents when Unique is specified.
If more than one execution per data source row is wanted, then another approach is to have one data source column per agent. Use the agent ID to select the column, with sequential data source access. A variation is to have just one set of data in the data source but append the agent ID to some of the values in the data source. This answer has some variations on these ideas and some code.
Another possibility is to use the MoveDataTableCursor method to set a specific row for each test execution. This could be called in a PreWebTest method of a WebTestPlugin. The code would use the context parameters $AgentId and $WebTestIteration; the row index combines the two so that no two agents ever compute the same row. The call would be based on the following:
MoveDataTableCursor(..., ..., $WebTestIteration * NumberOfAgents + $AgentId);
Notes:
The values of $AgentId and $WebTestIteration from the context are strings; they would need to be converted to numbers to do the multiply and add.
You would also need to check whether the two values are zero-based or one-based.
The documentation for MoveDataTableCursor is not very informative.

In JMeter, for a web service, how to provide load dynamically

I want to provide load (i.e. for 100 users there will be different data for each of the 100 users) to one of my web service methods dynamically, using JMeter.
I have tried using the __StringFromFile function, but it is not feasible for me to create 100 CSV files with different data for 100 users.
I want to know other JMeter functions which can be used for creating load dynamically.
Looking forward to your reply.
You can use the following test elements for parameterization (short examples follow the list):
JDBC PreProcessor - to get values from a database
__CSVRead() - to get values from CSV file(s)
CSV Data Set Config - basically the same as point 2, but might be more convenient and easier to use
__RandomString() - to provide a random value
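Short examples (the file, column, and character-set values are illustrative):

${__CSVRead(users.csv,0)} - read column 0 of the current row of users.csv
${__CSVRead(users.csv,next)} - move the pointer to the next row
${__RandomString(10,abcdefghijklmnopqrstuvwxyz)} - a random 10-character string

With a single users.csv holding one row per virtual user, each thread can take its own row, so there is no need for 100 separate files.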
