I have 30 different DB queries. I want to run 5 of them concurrently at any given time, and a query that has already been executed should not be picked again.
For example, when any one of the 30 queries completes, the next unexecuted query should be picked up.
Can we achieve this use case in JMeter? Please help with this.
For executing queries concurrently, use a Synchronizing Timer with its group size set to the desired concurrency (5 in this case).
For picking up unique queries, the easiest option is a CSV Data Set Config.
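As a sketch of that approach, put one statement per line in a CSV file (queries.csv is a hypothetical name) and reference it from the JDBC Request sampler as ${query}:

    SELECT count(*) FROM orders
    SELECT max(created_at) FROM users
    SELECT id FROM customers WHERE active = 1

In the CSV Data Set Config, set the Variable Name to query, Recycle on EOF to False, and Stop thread on EOF to True, so that each line is consumed exactly once across the thread group.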
I have a few views in my Redshift database. There are a couple of users who perform simple select statements on these views. When a single select query is run, it executes quickly (typically a few seconds), but when multiple select queries (the same simple select statement) are run at the same time, they all get queued on the Redshift side and take forever to return results. I'm not sure why the same query that takes a few seconds gets queued when triggered in parallel with other select queries.
I am curious to know how this can be resolved, or whether there is any workaround I should consider.
There are a number of reasons why this could be happening. First off, how many parallel queries are we talking about: 10, 100, 1,000?
The WLM configuration determines the parallelism that a cluster is set up to perform. If the WLM has a queue with only one slot, then only one query can run at a time.
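As a sketch for checking this, the stv_wlm_query_state system table shows which queries are queued versus executing in each WLM service class:

    -- Queued vs. executing queries per WLM service class (times in microseconds)
    SELECT query, service_class, slot_count, state, queue_time, exec_time
    FROM stv_wlm_query_state
    ORDER BY service_class;

If many queries sit in the Queued state while few are Executing, the queue's slot count is the bottleneck.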
Just because a query is simple doesn't mean it is cheap. If the tables aren't configured correctly, or if a lot of data is being read (or spilled), significant system resources may be needed to perform the query. When many such queries come along, these resources get overloaded and things slow down. You may need to evaluate your cluster / table configurations to address any issues.
I could keep guessing at possibilities, but the better approach would be to provide a query example, the WLM configuration, and some cluster performance metrics (from the console) to help narrow things down.
I understand jdbcTemplate.batchUpdate is used for sending several records to the database in one communication.
Let's say I have 1,000 records to be updated: instead of 1,000 round trips from the application to the database, the application sends all 1,000 records in one request, for example as shown below.
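A minimal sketch of that pattern (the person table, Person class, and people list are hypothetical):

    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import org.springframework.jdbc.core.BatchPreparedStatementSetter;

    // One JDBC batch: all rows are sent to the database in a single round trip.
    jdbcTemplate.batchUpdate(
        "UPDATE person SET status = ? WHERE id = ?",
        new BatchPreparedStatementSetter() {
            public void setValues(PreparedStatement ps, int i) throws SQLException {
                Person p = people.get(i);
                ps.setString(1, p.getStatus());
                ps.setLong(2, p.getId());
            }
            public int getBatchSize() { return people.size(); }
        });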
Coming to JdbcBatchItemWriterBuilder, it is part of Spring Batch, where a job is composed of steps (tasks).
My question is: if there are 1,000 records to be processed (INSERT statements) via JdbcBatchItemWriterBuilder, are all the INSERTs executed in one go, or one after another?
If one after another, doesn't connecting to the database 1,000 times via JdbcBatchItemWriterBuilder cause performance issues? How is that handled?
I would also like to understand whether Spring Batch performs better than running 1,000 INSERT statements using jdbcTemplate.update.
The JdbcBatchItemWriter uses java.sql.PreparedStatement#addBatch and java.sql.Statement#executeBatch internally (See https://github.com/spring-projects/spring-batch/blob/c4010fbffa6b71cbcfe79d523023251ce73666a4/spring-batch-infrastructure/src/main/java/org/springframework/batch/item/database/JdbcBatchItemWriter.java#L189-L195), so there will be a single batch insert for all items of the chunk.
Moreover, this will be executed in a single transaction as described in the Chunk-oriented Processing section of the reference documentation.
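As a minimal sketch of how such a writer is typically configured (the Person class, table, and column names are assumptions):

    import javax.sql.DataSource;
    import org.springframework.batch.item.database.JdbcBatchItemWriter;
    import org.springframework.batch.item.database.builder.JdbcBatchItemWriterBuilder;

    JdbcBatchItemWriter<Person> writer = new JdbcBatchItemWriterBuilder<Person>()
            .dataSource(dataSource) // existing DataSource bean
            .sql("INSERT INTO person (first_name, last_name) VALUES (:firstName, :lastName)")
            .beanMapped()           // bind :firstName / :lastName from Person getters
            .build();

All items of a chunk are added with addBatch and flushed with a single executeBatch call, so a chunk size of 1,000 means one round trip rather than 1,000.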
I'm using the Jdbc input in Logstash to retrieve data from an MS SQL database once a minute.
Usually it works fine, but database performance is not the most reliable thing, and sometimes it takes longer than one minute for a query to return, occasionally even 5 minutes.
The Jdbc scheduler still fires the query once a minute, so there are situations where multiple queries run at the same time. This creates additional pressure on the database, and after a while there are 20 nearly identical queries running simultaneously.
I assume I'm not the first person to encounter this problem. I'm sure there is some way to make the Jdbc input run the next query a minute after the previous one has finished. Am I right?
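For reference, the setup in question looks roughly like this (connection details and the statement are placeholders):

    input {
      jdbc {
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        jdbc_connection_string => "jdbc:sqlserver://dbhost:1433;databaseName=mydb"
        jdbc_user => "logstash"
        schedule => "* * * * *"  # cron syntax: fires every minute regardless of
                                 # whether the previous run has finished
        statement => "SELECT * FROM events WHERE updated_at > :sql_last_value"
      }
    }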
I want to fetch 200k records in a single JPA select query within 5 seconds. I am selecting one column, which is already indexed. Currently it is taking more than 5 minutes. Is it possible to select over 100k records in 5 seconds?
This is not possible with Hibernate or a normal native query, since hundreds of thousands of objects have to be created on the Java side and the results need to be sent over the network (serialization and deserialization).
You could try the following steps for fine tuning:
On the DB side, you could change the index method from the default (B-tree) to HASH.
Use parallel threads to retrieve the results in paginated mode (use native SQL).
Hope this gives some input for further fine tuning.
Use this query hint to retrieve lakhs (hundreds of thousands) of records:

    query.setHint("org.hibernate.fetchSize", 5000);
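A hedged sketch of how that hint can be combined with pagination so that everything is not materialized at once (the MyEntity name, id column, and lastSeenId variable are hypothetical):

    import java.util.List;
    import javax.persistence.TypedQuery;

    // Keyset pagination: fetch the next page of ids after the last one seen.
    TypedQuery<Long> query = entityManager.createQuery(
            "select e.id from MyEntity e where e.id > :lastId order by e.id", Long.class)
        .setParameter("lastId", lastSeenId)
        .setMaxResults(5000);
    query.setHint("org.hibernate.fetchSize", 5000); // JDBC fetch size passed to the driver
    List<Long> page = query.getResultList();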
How do I specify the number of records to delete in the Tibco JDBC Update activity in batch update mode?
I need to delete 25 million records from the database, so I wrote Tibco code to do it, but it is taking a lot of time. I am planning to use batch mode in the delete query, but I don't know how to specify the number of records in the JDBC Update activity.
Please help if anyone has any idea. Thanks.
From the docs for the Batch Update checkbox:
This field is only meaningful if there are prepared parameters in the SQL statement (see Prepared Parameters).
In which case the input will be an array of records. It will execute the statement once for each record.
To avoid running out of memory, you will still need to iterate over the 25 million rows, but you can iterate in groups of 1,000 or 10,000, for example with a chunked DELETE like the sketch below.
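A hedged sketch of chunked deletion (SQL Server syntax shown, since the exact form varies by database; the table name and predicate are placeholders):

    -- Delete in fixed-size chunks; loop until no rows are affected.
    DELETE TOP (10000) FROM my_table
    WHERE created_date < '2020-01-01';

Each iteration keeps the transaction small, which avoids the log and memory pressure a single 25M-row delete would cause.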
If this is not something you would do often (deleting 25M rows sounds pretty one-off), an alternative is to use BW to create a file containing the delete statements and then give the file to a DBA to execute.
Please use the subset feature of the JDBC palette. Let me know if you face any issues.
I would suggest two points:
If this is a one-time activity, then using Tibco BW code for it is not advised; a SQL script would be the better alternative.
When you say 25 million records, what criteria is that based on? It can be achieved through subset iteration, but there should be proper load testing in the pre-prod environment to check that the process does not cause any memory/DB issues.
You can also try using a SQL procedure and invoking it through BW.