I have created a test plan to stress test our MySQL DB. The test plan executes without errors, but the response times in JMeter are considerably higher than when I execute the query manually. With JMeter the average response time is about 500 ms, while the same query takes 0.00 sec when run manually from the console.
I am using JDBC samplers to execute queries, which include INSERT, UPDATE and SELECT. Are there any best practices to make a DB stress test as close to reality as possible? I was not able to find much on DB stress testing with JMeter.
Related
In a JMeter test, if a transaction takes a long time to respond and then fails, will that transaction be accounted for in the calculation of the average response time?
It will. You can check it yourself using e.g. the Dummy Sampler and Aggregate Report listener combination.
If you don't want to see failed transactions in the results, you can filter them out using the Filter Results Tool.
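The arithmetic is easy to check outside JMeter as well. A minimal sketch in plain Java (the sample values are invented) showing how the average shifts when a slow, failed transaction is included versus filtered out:

```java
import java.util.List;

public class AverageWithFailures {
    // One sampler result: elapsed time in ms and whether it passed.
    record Sample(long elapsedMs, boolean success) {}

    // Average elapsed time, optionally including failed samples.
    static double average(List<Sample> samples, boolean includeFailed) {
        return samples.stream()
                .filter(s -> includeFailed || s.success())
                .mapToLong(Sample::elapsedMs)
                .average()
                .orElse(0.0);
    }

    public static void main(String[] args) {
        // Two fast successes and one slow transaction that timed out and failed.
        List<Sample> samples = List.of(
                new Sample(100, true),
                new Sample(200, true),
                new Sample(3000, false));

        System.out.println(average(samples, true));  // 1100.0 - failure included
        System.out.println(average(samples, false)); // 150.0 - failure filtered out
    }
}
```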
I am going to test database read performance with a JMeter Java Sampler.
Basically, I will query the database 10,000 times with the primary key (ID) as the query condition, using a Thread Group and the Java Sampler.
I need to load 10,000 records into the database before executing the Thread Group.
The 10,000 records will be looped over for the 10,000 database reads.
I have looked into JMeter's preprocessors. I can insert the 10,000 records into the database in a preprocessor, but I do not know how to pass the 10,000 IDs to the Thread Group or Java Sampler. Concatenating the IDs into a single String parameter would make it far too long.
How can I achieve this? Any comment is welcome.
Instead of inserting the data using a PreProcessor, I would rather recommend preparing the test data in a setUp Thread Group. You can write the generated IDs into a file using e.g. the Flexible File Writer and then read them back with the CSV Data Set Config.
If the database you're testing supports the JDBC protocol, it makes more sense to use the JDBC Request sampler, because the JDBC Connection Configuration allows using the connection pool pattern, which your application will most probably also be using when talking to the database. The main idea is to set up JMeter to produce the same footprint as the application that will be accessing the database.
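Outside of JMeter the same hand-off can be sketched in plain Java: the setUp phase writes the generated IDs to a file, and the test phase reads them back one per iteration, which is what the Flexible File Writer / CSV Data Set Config pairing does for you (the file name and sequential IDs below are made up for the example):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.IntStream;

public class IdHandoff {
    // "setUp Thread Group" role: generate the IDs and persist them to a CSV file.
    static Path writeIds(int count) throws IOException {
        Path csv = Files.createTempFile("ids", ".csv");
        List<String> ids = IntStream.rangeClosed(1, count)
                .mapToObj(Integer::toString)
                .toList();
        return Files.write(csv, ids);
    }

    // "CSV Data Set Config" role: read the IDs back, one per loop iteration.
    static List<String> readIds(Path csv) throws IOException {
        return Files.readAllLines(csv);
    }

    public static void main(String[] args) throws IOException {
        Path csv = writeIds(10_000);
        List<String> ids = readIds(csv);
        System.out.println(ids.size()); // 10000
        System.out.println(ids.get(0)); // 1
        // Each iteration would then use one ID, e.g. SELECT * FROM t WHERE id = ?
    }
}
```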
I'm using the Jdbc input in LogStash to retrieve data from an MS SQL database once a minute.
Usually it works fine, but database performance is not a very reliable thing, and sometimes a query takes longer than one minute to return - sometimes even 5 minutes.
The Jdbc scheduler still starts a query once a minute, however, so there are situations when multiple queries run at the same time. This creates additional pressure on the database, and after some time there are 20 almost identical queries running at once.
I assume I'm not the first person to encounter this problem. I'm sure there is some way to make the Jdbc input run the next query a minute after the previous one has finished. Am I right?
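This is the classic fixed-rate vs fixed-delay distinction. I don't know whether the Logstash jdbc input exposes such an option, but the behaviour you're after - start the next run one interval after the previous one finished - is exactly what fixed-delay scheduling does. A standalone Java sketch, with the intervals shrunk to milliseconds for the demo:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedDelayPolling {

    // Runs a 100 ms "query" on a 50 ms fixed delay for ~500 ms and
    // returns the gaps (in ms) between consecutive run starts.
    static List<Long> runDemo() throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        List<Long> starts = new ArrayList<>();

        // scheduleWithFixedDelay counts the 50 ms delay from the END of the
        // previous run, so runs can never overlap, however slow the query is.
        // scheduleAtFixedRate would keep firing on the clock instead.
        scheduler.scheduleWithFixedDelay(() -> {
            synchronized (starts) { starts.add(System.nanoTime()); }
            try {
                Thread.sleep(100); // simulate a slow database query
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, 0, 50, TimeUnit.MILLISECONDS);

        Thread.sleep(500);
        scheduler.shutdownNow();
        scheduler.awaitTermination(1, TimeUnit.SECONDS);

        List<Long> gaps = new ArrayList<>();
        synchronized (starts) {
            for (int i = 1; i < starts.size(); i++) {
                gaps.add((starts.get(i) - starts.get(i - 1)) / 1_000_000);
            }
        }
        return gaps;
    }

    public static void main(String[] args) throws InterruptedException {
        for (long gap : runDemo()) {
            // Every gap is at least ~150 ms (100 ms query + 50 ms delay),
            // never the bare 50 ms schedule interval.
            System.out.println("gap between runs: " + gap + " ms");
        }
    }
}
```

With fixed-rate scheduling the 100 ms "query" would pile up behind the 50 ms schedule, which is the same stacking you see with the once-a-minute schedule and a five-minute query.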
We run Mondrian (version "3.7.preview.506") on a Tomcat web server.
We have some long running MDX-queries.
For example: the first calculation takes 276,764 ms and sends 84 SQL requests to the database (30 to 700 ms for each SQL statement).
We see that the SQL statements are not executed in parallel - only two "mondrian.rolap.agg.SegmentCacheManager$sqlExecutor" threads are running at the same time.
Is there a way to force Mondrian/olap4j to execute the SQL statements more in parallel?
What about the property "mondrian.rolap.maxSqlThreads", which is set to 100 by default?
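For reference, that property goes in mondrian.properties; the line below only spells out the default value the question quotes - whether the SegmentCacheManager executor actually honours it is precisely what is being asked:

```
# mondrian.properties
mondrian.rolap.maxSqlThreads=100
```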
Afterwards we execute the same MDX query again and the calculation finishes in 4,904 ms.
Conclusion: if the "internal cache" (mondrian.rolap.agg.SegmentCacheManager) has loaded the segments, the calculation is executed without any database requests - but ...
3. How can we "warm up" the internal cache?
One way we tried was to rewrite the MDX queries - we load several months into the cache at once (MDX-B):
MDX-A: SELECT ... ON ROWS FROM cube01 WHERE {[Time].[Default].[2017].[4]}
becomes
MDX-B: SELECT ... ON COLUMNS, CrossJoin( ... , {[Time].[Default].[2017].[2]:[Time].[Default].[2017].[4]}) ON ROWS FROM cube01
The rewritten MDX query takes 1,235,128 ms (244 SQL requests) - afterwards we execute our original MDX query (MDX-A) and the calculation takes 6,987 ms.
The interesting part for us was that this calculation takes longer than 5 sec (compared with the second execution of the same query), even though we did not have any SQL requests anymore.
The warm-up of the cache does not work as expected (in our opinion) - MDX-B takes much longer to collect the data in one statement than running the monthly execution in three steps (February to April) would, and the calculation in memory also takes more time. Why? How does segment loading really work?
What is the best practice for loading the segments to speed up the in-memory calculation?
Is there a way to feed the "Mondrian-Cube" with simple SQL statements?
Thanks in advance.
Fact table with 3,026,236 rows - growing daily
6 dimension tables.
Date dimension table with 21,183 rows.
We have monitored our test classes with JVM's VisualAdmin.
Mondrian 3.7.preview.506 - olap4j-1.1.0
Database: Oracle Database 11g Release 11.2.0.4.0 - 64bit
(we also tried the memSQL database, but we were only 50% faster ...)
I implemented a JDBC test plan against the database on a web server (I built the web server myself). When I start a simple request from the JMeter client (e.g. SELECT * FROM link d WHERE d.link LIKE '%com%'), JMeter's CPU usage goes high (90-100%) for a long time (~5 min), although I set my test plan to 6 s. On the server side, the CPU is high for only a very short time - 5-7 seconds (I think this is the time the query spends in the database). I tried to increase the HEAP in jmeter.bat to more than 1024m, but it wasn't successful.
Can you help me solve this problem?
I'd run EXPLAIN PLAN on that SQL query. You're likely to see a TABLE SCAN because of the way you wrote the WHERE clause: a LIKE pattern with a leading wildcard ('%com%') cannot use an index. That takes a lot of time, and more as your table grows, because it requires examining each and every record.
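The cost difference can be illustrated without a database: a leading-wildcard match has to inspect every row, while an exact match on an indexed column goes straight to the matching entry. A toy sketch with in-memory data (the table contents are invented):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ScanVsIndex {
    // LIKE '%com%': no index can help, every row must be examined.
    // Returns {matches, rowsExamined}.
    static int[] likeScan(List<String> table, String fragment) {
        int examined = 0, matches = 0;
        for (String row : table) {
            examined++;                        // full table scan
            if (row.contains(fragment)) matches++;
        }
        return new int[] {matches, examined};
    }

    public static void main(String[] args) {
        // A tiny "table" of links.
        List<String> table = List.of(
                "http://example.com/a", "http://example.org/b",
                "http://test.com/c", "http://sample.net/d");

        int[] result = likeScan(table, "com");
        // The scan touched every row to find its matches.
        System.out.println(result[0] + " matches, " + result[1] + " rows examined");

        // Exact match on an indexed column: the "index" (a hash map here)
        // jumps straight to the row without scanning anything.
        Map<String, String> index = new HashMap<>();
        for (String row : table) index.put(row, row);
        System.out.println(index.get("http://test.com/c")); // one probe, no scan
    }
}
```

As the table grows, the scan's cost grows with the row count while the indexed lookup stays essentially constant, which is why the JMeter-side symptoms get worse over time.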