Why are queries getting progressively slower when using Postgres with Spring Batch?

I'm running a job using Spring Batch 4.2.0 with Postgres (11.2) as the backend, all wrapped in a Spring Boot app. I have 5 steps, and each one uses a simple partitioning strategy to divide the data by id ranges and read it into partitions, which are processed by separate threads. There are about 18M rows in the table; each step reads all 18M rows, changes a few fields, and writes them back. The issue I'm facing is that the queries that pull data into each thread scan by id range, like
select field_1, field_2, field_66 from table where id >= 1 and id < 10000.
In this case each thread processes 10,000 rows at a time. When there's no other traffic the query takes less than a second to read all 10,000 rows. But when the job runs, there are about 70 threads reading that data in, and it gets progressively slower, up to almost a minute and a half. Any ideas where to start troubleshooting this?
I do see autovacuum running in the background for almost the whole duration of the job. The app definitely has enough memory to hold all that data (about 6GB max heap). Postgres has shared_buffers = 2GB and max_wal_size = 2GB, but I'm not sure whether that in itself is sufficient. Another thing I see is loads of COMMIT queries hanging around when checking pg_stat_activity, usually as many as there are partitions. So instead of 70 connections being used by 70 partitions, there are 140 connections in use, with 70 of them running COMMIT. As time progresses these COMMITs get progressively slower too.
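For reference, a query along these lines against pg_stat_activity (just a sketch using the standard PostgreSQL 11 columns) is what I use to watch those lingering COMMIT sessions and what they are waiting on:

-- Sketch: list sessions currently running COMMIT and what they wait on
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       now() - xact_start  AS xact_age,
       now() - query_start AS query_age,
       left(query, 60)     AS query
FROM pg_stat_activity
WHERE query ILIKE 'COMMIT%'
ORDER BY xact_age DESC NULLS LAST;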

You are probably hitting https://github.com/spring-projects/spring-batch/issues/3634.
This issue has been fixed, and the fix will be part of version 4.2.3, planned to be released this week.

Related

Apache Pulsar Clickhouse Sink - does it have intervals between inserts?

ClickHouse allows high-performance writes, but only if they are done in bulk and with intervals between them (the recommendation is at least a 1-second interval between inserts). The documentation for the JDBC connector for ClickHouse mentions a batchSize option, but there is nothing about intervals between inserts, and I didn't manage to find the insertion logic in the code (I am not a Java person, though).
Does that mean there are no intervals and Pulsar simply does inserts as frequently as it can?
I know nothing about Pulsar.
recommended is at least 1 second interval between inserts
That recommendation means "one insert per second". It says nothing about sleeping for 1 second.
It is also a very basic and rather vague recommendation; every project is unique, with its own environment and requirements.
In one project I insert 10 million very wide rows per minute on RAID 10 with HDD disks. In another project I do 1,000 inserts of ~100 narrow rows each per second, using in-memory parts on a single NVMe disk.
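To make the point concrete, "one insert per second" is about shaping the writes like the following sketch (the table and columns are made up): one bulk INSERT carrying many rows, rather than many single-row INSERTs fired back to back.

-- Hypothetical table; the point is a single bulk INSERT with many rows.
INSERT INTO events (event_time, user_id, value) VALUES
    ('2020-01-01 00:00:00', 1, 0.5),
    ('2020-01-01 00:00:00', 2, 1.7),
    ('2020-01-01 00:00:01', 3, 2.3);
-- In practice each batch would carry thousands of rows collected over ~1 second.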

Why do these 2 similar queries in Snowflake have very different performance?

See the profiles of these 2 Snowflake queries. They do similar work (both update the same 370M-row table, joined with a small table: 21k rows in one case, 9k in the other), but the performance differs by 5x.
The first query finished in around 15 mins, using one xsmall VDW:
Fast query finished in around 15 mins
And this query, updating the same table of 370M rows but joining with an even smaller DIM table of 9k rows, is still running after 1 hour 30 mins:
Still running after 90 minutes
From the query profile, I cannot explain why the 2nd query runs so much slower than the first one. The 2nd one was run right after the first one.
Any idea? Thanks
In the second query you can see that bytes spilled to local storage is 272 GB. This means the work done in processing was too large to fit in the cluster memory and so had to spill to locally attached SSD. From a performance perspective this is a costly operation, and I think it is probably why the 2nd query took so long to run (query 1 only had 2 GB of spilling). The easiest solution is to increase the size of the VDW, or you could rewrite the query:
https://docs.snowflake.net/manuals/user-guide/ui-query-profile.html#queries-too-large-to-fit-in-memory
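If you do go the resize route, it is a one-line change; the warehouse name below is made up for illustration:

-- Hypothetical warehouse name; bump the size one notch so the join no longer spills
ALTER WAREHOUSE my_xsmall_wh SET WAREHOUSE_SIZE = 'SMALL';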
Note also that query 1 managed to read 100% of its data set from VDW memory, which is very efficient, whereas query 2 could only find about half of its data set there and so had to perform remote I/O (reads from cloud storage) to get the rest. Queries/work performed prior to running queries 1 and 2 had pulled that data into the local VDW cache, which retains it on an LRU basis.
The join in the slow query is producing more rows than flow into it. This can be what you want, but it is often caused by duplicate values in the tables. I'd do a sanity check on whether that's expected here; see the sketch below.
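A quick check along these lines (table and key names are placeholders) will show whether the dimension table has duplicate join keys:

-- Placeholder names: any count > 1 means the join will fan out rows.
SELECT join_key, COUNT(*) AS cnt
FROM dim_table
GROUP BY join_key
HAVING COUNT(*) > 1
ORDER BY cnt DESC;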

Excessive partitioning (too many tasks) on Apache Spark/Cassandra cluster

I'm running a simplistic application on a Spark/Cassandra cluster. Since moving to a new environment (Spark 1.5 instead of 1.2, plus a minor Cassandra version upgrade), a substantial performance degradation has been observed (from 4 seconds to 1-5 minutes for the same task and the same amount of data).
After initial investigation it seems that, for exactly the same code, from the spark-driver's perspective many more tasks are generated (20k+, where it used to be at most 5), and the logs on the executor's end reflect the same situation:
many sequential executions of the same query on different partitions:
...
CassandraTableScanRDD: Fetched 0 rows from x.y for partition 20324 in 0.138 s.
CassandraTableScanRDD: Fetched 0 rows from x.y for partition 20327 in 0.058 s.
CassandraTableScanRDD: Fetched 0 rows from x.y for partition 20329 in 0.053 s.
...
where it used to be a single one:
CassandraTableScanRDD: Fetched 905 rows from x.y for partition 0 in 2.992 s.
Since the application code is the same, I wonder what could have caused such a difference in partitioning behavior, and what can be done to remediate it?
NB! The setup of the two environments is different; configuration is not shared/inherited.
Thanks.
The new version of the Spark Cassandra Connector uses a system table inside more modern Cassandra to estimate split size. This table is updated periodically (currently every 5 minutes), although the number of splits you are seeing is extremely large. The value read out of this table is divided by your split size.
If you are using a C* version earlier than 2.1.5, this table does not exist and the partitioning will need to be done manually.
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/FAQ.md#what-does-inputsplitsize_in_mb-use-to-determine-size
You can manually pass in the number of splits via the ReadConf if you are continuing to see issues.

Hbase concurrency making it slow

I have 1 master server and 5 region servers; each server has 200 GB of disk space and 16 GB of RAM. I created a table in HBase which has 10 million records. I am using hbase-0.96 on hadoop 2.
Table Name - sh_self_profiles
column family - profile
In this table, we have 30 columns in each row.
When I get a single column value from HBase, it takes around 10 ms. My problem is that when I send 100 or more concurrent requests, the latency slowly accumulates and increases to more than 400 ms instead of staying around 10 ms. When the 100 requests are sent sequentially, each one takes only 10 ms.
One thing that you should check is how well distributed your table is.
You can do this by going to the HBase master web console at http://<master-host>:60010; there you will be able to see how many regions you have for your table. If you have not done anything special at table creation, you could easily have only one or two regions, which means that all the requests are being directed to a single region server.
If this is the case, you can recreate your table with pre-split regions (I would suggest a multiple of 5, such as 15 or 20), and make sure that the concurrent gets that you are doing are equally spread over the row-key space.
Also, please check how much RAM you have allocated to the region server; you might need to increase it from the default. If you are not running anything other than the HBase RegionServer on those machines, you could probably increase it to 8 GB of RAM.
Other than that, you could also adjust the default for hbase.regionserver.handler.count.
I hope this helps.
Which client are you using? Are you using the standard Java client, the Thrift client, the HTTP REST client, or something else? If your use case is a high amount of random reads of single column values, I highly recommend you try asynchbase as it is much faster than the standard synchronous Java client.

SQL Server - Merging large tables without locking the data

I have a very large set of data (~3 million records) which needs to be merged with updates and new records on a daily schedule. I have a stored procedure that breaks the record set into 1,000-record chunks and uses the MERGE command with temp tables in an attempt to avoid locking the live table while the data is being updated. The problem is that it doesn't exactly help. The table still "locks up" and our website that uses the data receives timeouts when attempting to access it. I even tried splitting it into 100-record chunks, and tried a WAITFOR DELAY '000:00:5' to pause between merging the chunks. It's still rather sluggish.
I'm looking for any suggestions, best practices, or examples on how to merge large sets of data without locking the tables.
Thanks
Change your front end to use NOLOCK or READ UNCOMMITTED when doing the SELECTs.
You can't NOLOCK a MERGE, INSERT, or UPDATE, as the records must be locked in order to perform the update. However, you can NOLOCK the SELECTs.
Note that you should use this with caution. If dirty reads are okay, then go ahead. However, if the reads require the updated data then you need to go down a different path and figure out exactly why merging 3M records is causing an issue.
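As a sketch of what that looks like on the read side (the table and column names are placeholders):

-- Per-query hint (placeholder table name)
SELECT Id, Name, UpdatedAt
FROM dbo.BigTable WITH (NOLOCK)
WHERE Id = 42;

-- Or set it once for the whole connection
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Id, Name, UpdatedAt
FROM dbo.BigTable
WHERE Id = 42;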
I'd be willing to bet that most of the time is spent reading data from disk during the MERGE command and/or working around low-memory situations. You might be better off simply putting more RAM into your database server.
An ideal amount would be enough RAM to pull the whole database into memory as needed. For example, if you have a 4 GB database, make sure you have 8 GB of RAM, in an x64 server of course.
I'm afraid I've had quite the opposite experience. We were performing updates and insertions where the source table had only a fraction of the number of rows of the target table, which was in the millions.
When we combined the source table records across the entire operational window and then performed the MERGE just once, we saw a 500% increase in performance. My explanation is that you pay for the up-front analysis of the MERGE command just once, instead of over and over again in a tight loop.
Furthermore, I am certain that merging 1.6 million rows (source) into 7 million rows (target), as opposed to 400 rows into 7 million rows over 4,000 distinct operations (in our case), leverages the capabilities of the SQL Server engine much better. Again, a fair amount of the work is in the analysis of the two data sets, and this is done only once.
Another question I have to ask as well is whether you are aware that the MERGE command performs much better with indexes on both the source and target tables? I would like to refer you to the following link:
http://msdn.microsoft.com/en-us/library/cc879317(v=SQL.100).aspx
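For what it's worth, the shape of the single-pass MERGE we ended up with was roughly the following; the table and column names are made up for illustration:

-- Illustrative names only: dbo.StagingAll holds the whole operational window's changes.
-- One MERGE over the full staging set, instead of thousands of small chunks.
MERGE dbo.Target AS t
USING dbo.StagingAll AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Value     = s.Value,
               t.UpdatedAt = s.UpdatedAt
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Value, UpdatedAt)
    VALUES (s.Id, s.Value, s.UpdatedAt);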
From personal experience, the main problem with MERGE is that since it takes page locks it precludes any concurrency in your INSERTs directed at a table. So if you go down this road, it is fundamental that you batch all updates that will hit a table into a single writer.
For example: we had a table on which INSERT took a crazy 0.2 seconds per entry, most of this time seemingly being wasted on transaction latching, so we switched it over to using MERGE. Some quick tests showed that it allowed us to insert 256 entries in 0.4 seconds, or even 512 in 0.5 seconds. We tested this with load generators and all seemed to be fine, until it hit production and everything blocked to hell on the page locks, resulting in a much lower total throughput than with the individual INSERTs.
The solution was to not only batch the entries from a single producer into a MERGE operation, but also to batch the batches from all producers going to an individual DB into a single MERGE operation, through an additional level of queues (previously it was also a single connection per DB, but using MARS to interleave all the producers' calls to the stored procedure doing the actual MERGE transaction). This way we were able to handle many thousands of INSERTs per second without problems.
Having the NOLOCK hints on all of your front-end reads is an absolute must, always.
