I have set up JMeter distributed testing with 1 master and 2 slaves.
During my test, I have added deviceId as a variable with MIN and MAX ranges. When I run the distributed test, the insert succeeds on one slave machine and fails on the other with 'E11000 duplicate key error collection'.
How can I set up the test so that the insert is successful on both machines?
It depends on how you actually "define" this variable and where the "ranges" come from.
If E is a prefix and 11000 is a postfix, it means that you can send only up to 100,000 "unique" entries.
So one of the solutions would be to pre-generate the list of these entries and store it in a CSV file like:
E00000
E00001
...
E99999
Then you can split the file into 2 parts, copy one part to each slave, and read the lines from the file using the CSV Data Set Config.
If you need the ability to continue the test where you left off without re-using the data, you can go for the HTTP Simple Table Server or Redis Data Set instead.
Another possible option is checking whether the value already exists in the database using a JDBC Request sampler and generating a new one if it does; however, this option is not the best one, as it adds an extra database query per iteration.
And last but not least, you can generate truly unique values using the __UUID() function if the output format is appropriate.
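If you'd rather script the generation than type the file by hand, here is a minimal Java sketch assuming the "E" + 5 digits format above (the file names and the 50/50 split point are illustrative assumptions):

import java.io.IOException;
import java.io.PrintWriter;

// Pre-generates E00000..E99999 and splits the list across two CSV files,
// one per slave, so the two machines never share a deviceId.
public class GenerateDeviceIds {
    public static void main(String[] args) throws IOException {
        try (PrintWriter slave1 = new PrintWriter("deviceIds-slave1.csv");
             PrintWriter slave2 = new PrintWriter("deviceIds-slave2.csv")) {
            for (int i = 0; i < 100_000; i++) {
                String id = String.format("E%05d", i); // E00000 .. E99999
                (i < 50_000 ? slave1 : slave2).println(id);
            }
        }
    }
}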
I want to use 100 concurrent users in one thread group in JMeter, but I want each thread to use a different user login and password. How can I achieve that?
Full credit to the Apache JMeter documentation:
Some test plans need to use different values for different users/threads. For example, you might want to test a sequence that requires a unique login for each user. This is easy to achieve with the facilities provided by JMeter.
For example:
Create a text file containing the user names and passwords, separated by commas. Put this in the same directory as your test plan.
Add a CSV DataSet configuration element to the test plan. Name the variables USER and PASS.
Replace the login name with ${USER} and the password with ${PASS} on the appropriate samplers.
The CSV Data Set element will read a new line for each thread.
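For illustration, such a file (all names and passwords here are made up) could look like:

user1,password1
user2,password2
user3,password3

With Variable Names set to USER,PASS, the first thread gets user1/password1, the second thread gets user2/password2, and so on.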
UPDATE: documentation link:
https://jmeter.apache.org/usermanual/best-practices.html
There are multiple options depending on where you want to keep the credentials.
The most commonly used approach is storing login/password combinations in a CSV file and using the CSV Data Set Config for reading them. By default, each JMeter thread will read the next line from the file on each iteration.
If your credentials are in a database, you can use the JDBC PreProcessor.
If you plan to run your test in Distributed Mode and don't want to worry about copying test data to all the slave machines - there are the HTTP Simple Table Server and the Redis Data Set Config.
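As a sketch (the table and column names here are hypothetical), the JDBC PreProcessor could be configured like:

Query Type: Select Statement
Query: SELECT login, password FROM users
Variable Names: USER,PASS

JMeter stores JDBC results with a row-number suffix, so the samplers would then reference ${USER_1} and ${PASS_1}.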
More information: JMeter Parameterization - The Complete Guide
I have one use case to create an incremental data ingestion pipeline from a database to AWS S3. I have created a pipeline and it is working fine except for the one scenario where no incremental data is found.
In the case of a zero record count, it writes a header-only file (Parquet file). I want to skip the target write when there are no incremental records.
How can I implement this in IICS?
I have already tried to implement a Router transformation with the condition "if record count > 0 then write to target", but it is still not working.
First of all: the target file gets created even before any data is read from the source. This is to ensure the process has write access to the target location. So even if there is no data to store, an empty file will get created.
The possible ways out here will be to:
Have a Command Task check the number of lines in the output file and delete it if there is just a header (a minimal sketch follows this list). This would require the file to be created locally, verified, and uploaded to S3 afterwards, e.g. using a Mass Ingestion task - all invoked sequentially via a taskflow.
Have a session that first checks whether there is any data available, and only then runs the data extraction.
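Here is a minimal sketch of the check from the first option, written in Java and assuming the target was created locally as a delimited flat file (the file name is illustrative; simple line counting would not apply to binary Parquet):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Deletes the locally created target file if it contains only the header
// row, so a header-only file never gets uploaded to S3.
public class DropHeaderOnlyFile {
    public static void main(String[] args) throws IOException {
        Path target = Path.of("incremental_out.csv"); // illustrative name
        long lines;
        try (Stream<String> stream = Files.lines(target)) {
            lines = stream.count();
        }
        if (lines <= 1) {
            Files.delete(target);
            System.out.println("Header-only file removed, skipping upload.");
        }
    }
}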
We have a constraint in our application: for the test data provided in JMeter execution (using the CSV Data Set Config element), we are not supposed to provide duplicate test data, as it won't be accepted in any of the fields. So we kept unique test data (up to 8K rows for 8K concurrent users) for all the fields in CSV format.
Here I have a manual intervention: after each test execution (i.e. 100 users, 1000 users, up to 8000 users) we need to delete the corresponding rows (matching the users in the thread group) from the CSV file, otherwise duplicate data will be fetched in the next execution and the result will fail.
Here my questions are:
1. How can I avoid repeated/duplicate test data, i.e. avoid already-executed data, without deleting it from the CSV file?
2. During JMeter test execution with CSV files, how can we specify taking the data from specific rows, for example the 101st, 1001st and 7999th rows (in a file which contains 8000 rows of data)?
The easiest option will be using the HTTP Simple Table Server: its READ command has a KEEP=FALSE attribute, so you will be able to feed your test with unique data without having to physically remove it from the original CSV file.
You can install the HTTP Simple Table Server plugin using the JMeter Plugins Manager.
In general, if your test doesn't need to be repeatable, then instead of keeping the data in the CSV file you can consider generating it on the fly using such JMeter Functions as:
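For example, assuming the default STS port 9191 and a hypothetical users.csv dataset, the data can be loaded once and then consumed row by row via:

http://localhost:9191/sts/INITFILE?FILENAME=users.csv
http://localhost:9191/sts/READ?READ_MODE=FIRST&KEEP=FALSE&FILENAME=users.csv

With KEEP=FALSE the returned row is not put back into the in-memory list, so it cannot be served twice, while the CSV file on disk stays intact.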
__Random()
__RandomString()
__RandomDate()
__UUID()
etc.
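For example (all parameter values below are illustrative):

${__Random(1,8000,)} - a random number between 1 and 8000
${__RandomString(10,abcdefghijklmnopqrstuvwxyz,)} - a 10-character random string built from the given characters
${__UUID()} - a standard type 4 UUID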
We have a flow where GenerateTableFetch takes input from SplitJson, which supplies TableName and ColumnName as arguments. Multiple tables are passed at once as input to GenerateTableFetch, and next ExecuteSQL executes the generated queries.
Now I want to trigger a new process when all the files for a table have been processed by the processors below (at the end there is PutFile).
How can I find out that all the files created for a table have been processed?
You may need NIFI-5601 to accomplish this; there is a patch currently under review at the time of this writing, which I hope to get into NiFi 1.9.0.
EDIT: Adding potential workarounds in the meantime
If you can use ListDatabaseTables instead of getting your table names from a JSON file, then you can set Include Count to true. Then you will get attributes for the table name and the count of its rows. Divide the count by the value of the Partition Size in GTF (rounding up) and that will give you the number of fetches (let's call it X). Then add an attribute via UpdateAttribute called "parent" or something, and set it to ${UUID()}. Keep these attributes in the flow files going into GTF and ExecuteSQL; then you can use Wait/Notify to wait until X flow files are received (setting Target Signal Count to ${X}) and using ${parent} as the Release Signal Identifier.
If you can't use ListDatabaseTables, then you may be able to put ExecuteSQL(Record) after your SplitJson and execute something like SELECT COUNT(*) FROM ${table.name}. If using ExecuteSQL, you may need a ConvertAvroToJSON; if using ExecuteSQLRecord, use a JsonRecordSetWriter. Then you can extract the count from the flow file contents using EvaluateJsonPath.
Once you have the table name and the row count in attributes, you can continue with the flow I outlined above (i.e. determine the number of flow files that GTF will generate, etc.).
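As a sketch of that attribute setup (the attribute names and the Partition Size of 10000 are illustrative assumptions), the UpdateAttribute processor could set:

parent = ${UUID()}
fetch.count = ${row.count:plus(9999):divide(10000)}

Adding 9999 before the integer division rounds the result up, since a partial last partition still produces one more fetch; the Wait processor then gets Target Signal Count = ${fetch.count} and Release Signal Identifier = ${parent}.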
How to login multiple users with different input in different threads in JMeter using the CSV Data Set Config?
I have added a CSV Data Set Config, but the thread is picking only the first entry and I am not able to see the responses for the other user logins.
You can use a CSV Data Set Config, which contains parameterized values that users (threads) will use while running the script.
Below, a CSV Data Set Config points to an emp.csv file which contains values like:
nachiket,101,test
nikhil,102,test
harish,103,test
which are empname, empid and passwd respectively, for 3 users.
If you run the test with 3 users, then thread 1 will pick the first value, the 2nd thread will pick the 2nd value, and so on; you can also recycle the file if it has fewer values than the number of threads.
You need to provide enough loops/iterations, as given one iteration the CSV Data Set Config will read only the first entry.
Try putting the request you want to parameterize under a Loop Controller, set enough loops, and see whether it resolves your issue.
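For reference, a typical CSV Data Set Config for per-user logins might look like (file and variable names are illustrative):

Filename: users.csv
Variable Names: USER,PASS
Recycle on EOF?: True
Stop thread on EOF?: False
Sharing mode: All threads

With Sharing mode set to All threads, each thread pulls the next line on each iteration, so every login in the loop gets fresh credentials until the file recycles.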
See Using CSV DATA SET CONFIG guide for more details.