After table partitioning, SELECT query performance gets slow - performance

I am using PostgreSQL 9.1 and I have a table with 36 columns and almost 10.5 crore (about 105 million) records, each with a timestamp. The table has a composite primary key (DEVICE_ID text and DT_DATETIME timestamp without time zone).
To improve query performance we have partitioned the table day-wise on the DT_DATETIME field. After partitioning, I see that data retrieval takes more time than it did on the unpartitioned table. I have turned on the constraint_exclusion parameter in the config file.
Is there any solution for this?
Let me explain a little further.
I have 45 days of GPS data in a table of about 40 GB. Every second we insert at least 27 new records (about 2.5 million records a day). To keep the table at a steady 45 days of data, we delete the 45th day's data every night. This causes problems with vacuuming the table because of locking. With a partitioned table we could simply drop the 45th day's child table.
So by partitioning we wanted to improve query performance as well as solve the locking problem. We tried pg_repack, but twice the system load factor increased to 21 and we had to reboot the server.
Ours is a 24x7 system, so there is no downtime.
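For context, day-wise partitioning in PostgreSQL 9.1 is implemented with table inheritance plus CHECK constraints, roughly like the sketch below (the parent table name and the dates are made up; only the column names follow the question):

-- Hypothetical parent/child setup: one child table per day.
CREATE TABLE gps_data_20140102 (
    CHECK (dt_datetime >= DATE '2014-01-02' AND dt_datetime < DATE '2014-01-03')
) INHERITS (gps_data);

-- With constraint_exclusion = partition, the planner skips child tables whose
-- CHECK constraint contradicts the query's WHERE clause.
SET constraint_exclusion = partition;

EXPLAIN SELECT *
FROM   gps_data
WHERE  device_id   = 'ABC123'
AND    dt_datetime >= DATE '2014-01-02'
AND    dt_datetime <  DATE '2014-01-03';

Exclusion only kicks in when DT_DATETIME is compared against constants; if queries filter through parameters, expressions on the column, or joins, the planner cannot skip any children and scans every child table and its indexes, which can easily end up slower than the unpartitioned table.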

Try using PgBouncer for connection management and memory management, or increase the RAM in your server.

Related

Truncating a table with many subpartitions is taking too long

We have a job that loads some tables every night from our source DB to a target DB; many of them are partitioned by range or list. Before loading a table we truncate it first, and for some reason this process takes too long for particular tables.
For instance, TABLE A has 62 million rows and is list-partitioned on column BRANCH_CODE, with 213 partitions. Truncating this table took 20 seconds.
TABLE B has 17 million rows and is range-partitioned on the DAY column with a monthly interval; every partition has 213 list subpartitions on BRANCH_CODE. So in this case there are 60 partitions and 12,780 subpartitions. Truncating this table took 15 minutes.
Is the reason for the long truncate the number of partitions? Or have we missed some table specs, or should we set specific storage parameters for the table?
Manually gathering fixed object and data dictionary statistics may improve the performance of metadata queries needed to support truncating 12,780 objects:
begin
    dbms_stats.gather_fixed_objects_stats;
    dbms_stats.gather_dictionary_stats;
end;
/
The above command may take many minutes to complete, but you generally only need to run it once after a significant change to the number of objects in your system. Adding 12,780 subpartitions can cause weird issues like this. (While you're investigating these issues, you might also want to check the space overhead associated with so many subpartitions. It's easy to waste many gigabytes of space when creating so many partitions.)
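If you want to quantify that space overhead, a rough query like the one below sums the space allocated to the table's segments (the schema and table names are placeholders; it needs access to DBA_SEGMENTS):

-- Illustrative only: space allocated to TABLE_B, broken down by segment type.
SELECT segment_type,
       COUNT(*)                    AS segment_count,
       ROUND(SUM(bytes)/1024/1024) AS size_mb
FROM   dba_segments
WHERE  owner        = 'MY_SCHEMA'   -- placeholder schema
AND    segment_name = 'TABLE_B'     -- placeholder table name
GROUP  BY segment_type;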

Vacuuming and Analyse: Drastic change in query cost

I have a Postgres 9.6 installation and I am running into a weird case: if I run the same query (one with multiple joins) again after 10 to 15 minutes, the query cost has increased by a few hundred, and it keeps increasing.
I understand what vacuuming and analysing do, but I am worried that the query cost starts increasing within a few minutes of running VACUUM and ANALYZE. I am afraid this might lead to future performance bottlenecks.
PS: I have two tables, one of which is heavily written (about 5 million records) and the other heavily updated (70 K records with PostGIS; this table mostly gets updates on its lat/lon and geom columns).
Does this mean I should have autovacuum run every few hours?
Make autovacuum aggressive, but if you think autovacuum is using too many resources (by looking at CPU and I/O usage) you can tweak the autovacuum_vacuum_cost_delay and autovacuum_vacuum_threshold parameters at the table level.
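A minimal sketch of those per-table settings (the table name and values below are only examples to tune for your workload):

-- Example values only: make autovacuum visit this heavily-updated table
-- sooner and with less cost-based throttling than the global defaults.
ALTER TABLE geo_points SET (
    autovacuum_vacuum_scale_factor = 0.02,  -- vacuum after ~2% of rows are dead
    autovacuum_vacuum_threshold    = 1000,
    autovacuum_vacuum_cost_delay   = 10     -- milliseconds
);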

How to append data to an existing Hive table without partitions

I have created a Hive table which contains historical stock data for the past 10 years. From now on I have to append data on a daily basis.
I thought of partitioning by date, but that leads to many partitions, approximately 3,000, plus a new partition for every new date; I think this is not feasible.
Can anyone suggest the best approach to store all the historical data in the table and append new data as it arrives?
As with every partitioned table, the decision on how to partition depends primarily on how you are going to query the table.
Another consideration is how much data you're going to have per partition, as partitions should not be too small. As an absolute minimum, each one should be at least as big as one HDFS block, since otherwise you end up with too many directories.
That said, I don't think 3,000 partitions would be a problem. At a previous job we had a huge table with one partition per hour; each hour was about 20 GB and we had 6 months of data, so about 4,000 partitions, and it worked just fine.
In our case, most people cared most about the last week and the last day.
I suggest that as a first step you research how the table is going to be used: will all 10 years be queried, or mostly just the most recent data?
As a second step, study how big the data is, consider whether it may grow with the new loads, and see how big each partition is going to be.
Once you've determined these two points you can make a decision: you could just use daily partitions (which could be fine, 3,000 partitions is not bad), or you could go weekly or monthly.
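If you settle on daily partitions, a minimal sketch of the table and a daily append might look like this (the table name, columns, staging table, and ORC format are assumptions, not anything from your setup):

-- Hypothetical daily-partitioned stock table.
CREATE TABLE stock_history (
    symbol      STRING,
    open_price  DOUBLE,
    close_price DOUBLE,
    volume      BIGINT
)
PARTITIONED BY (trade_date STRING)
STORED AS ORC;

-- Append one day's data into its own partition from a (hypothetical) staging table.
INSERT INTO TABLE stock_history PARTITION (trade_date = '2024-01-15')
SELECT symbol, open_price, close_price, volume
FROM staging_stock_daily;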
You can use this command:
LOAD DATA LOCAL INPATH '<FILE_PATH>' INTO TABLE <TABLE_NAME>;
It will create new files under the HDFS directory mapped to the table name. Even though this avoids having too many partitions, you will still run into the too-many-files issue.
Periodically, you need to do this:
1. Create a stage table.
2. Move the data by running the LOAD command from the target table into the stage table.
3. Run an INSERT into the target table, selecting from the stage table. It will then write the data with a number of files equal to the number of reducers.
4. Drop the stage table.
You can run this process at regular intervals (probably once a month).
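A rough HiveQL sketch of that compaction cycle, using INSERT ... SELECT for the move step (all table names below are placeholders, and the target is assumed to be unpartitioned):

-- 1. Create a stage table with the same layout as the target.
CREATE TABLE stock_data_stage LIKE stock_data;

-- 2. Copy the accumulated small files into the stage table.
INSERT OVERWRITE TABLE stock_data_stage SELECT * FROM stock_data;

-- 3. Rewrite the target from the stage table; because the insert runs
--    through reducers, the result is roughly one file per reducer.
INSERT OVERWRITE TABLE stock_data SELECT * FROM stock_data_stage;

-- 4. Drop the stage table.
DROP TABLE stock_data_stage;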

Oracle partitioned table is taking a long time to fetch

I have a partitioned table based on date in an Oracle DB, where each partition has crores (tens of millions) of records. The front-end application is built to search the data by a date range (meaning it scans through multiple partitions). What is the best approach to get the data in the quickest time?
You should create local indexes, which work per partition.
Normally we go for global indexes, which cover the whole table, while a local index is specific to a partition, which makes searching within a partition faster.
Check this link to see how local indexes work: http://docs.oracle.com/cd/E11882_01/server.112/e25523/partition.htm#i461446
If local indexes don't work, then query tuning might help. If that doesn't help, then you should look at redesigning the schema.
EDIT:
Having said all that, one basic check is to make sure your query is not scanning all partitions. You can do this by including the partition criterion (the date, in your case) in the WHERE clause.
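A minimal sketch of both suggestions; the table, column, and index names are made up for illustration and assume the table is partitioned by the date column:

-- Local (partition-aligned) index: one index segment per table partition.
CREATE INDEX txn_history_dt_lix ON txn_history (txn_date) LOCAL;

-- Putting the partition key in the WHERE clause lets Oracle prune to the
-- partitions covering the requested range instead of scanning them all.
SELECT *
FROM   txn_history
WHERE  txn_date >= DATE '2024-01-01'
AND    txn_date <  DATE '2024-01-08';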
Interval partitioning may help. It makes partition management much easier, which then makes it reasonable to have thousands of partitions instead of just dozens or hundreds.
For example, if the current table is partitioned by month, a query for a week will need to read a lot of extra data. But if the table is partitioned by day, then almost no extra data will be scanned.
-- Day-interval partitioned table: Oracle creates a new daily partition
-- automatically whenever a row with a new date arrives.
create table partition_test(a number primary key, b date)
partition by range (b) interval (interval '1' day)
(
    partition p1 values less than (date '2000-01-01')
);
But even if this reduces the data per partition from crores to lakhs (tens of millions to hundreds of thousands of rows), that's still a lot of data for an application. Local indexes, as #loki suggested, may help.

When is the right time to create Indexes in Oracle?

A brand new application with Oracle as the data store is going to be pushed into production. The database uses the CBO and I have identified some columns to index. I am expecting the total number of records in a particular table to reach 4 million after 6 months. After that, very few records will be added and there will not be any updates to the indexed columns; most of the updates will be on non-indexed columns.
Is it advisable to create the indexes now, or do I need to wait a couple of months?
If the table requires indexes, you will incur a lot of poor performance (full table scans plus actual I/O) once the number of rows grows beyond what can reasonably be kept in the cache. Assume that is 20,000 rows; we'll call it the magic number. You will hit 20,000 rows within a week of production, and after that, queries and updates on the table will grow progressively slower, on average, as more rows are added.
You are probably worried about the overhead of inserting new rows into indexed fields. That is a one-time hit, and you are trading it against dozens of slow queries and updates for as long as you delay adding the indexes.
The trade-off is largely in favor of adding the indexes right now, especially since we do not know what that magic number (20,000?) really is. It could be larger, or smaller.
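If you do decide to index up front, a minimal sketch looks like this (the table, column, and index names are placeholders):

-- Create the index before go-live and gather statistics so the CBO can
-- use it from day one.
CREATE INDEX orders_customer_ix ON orders (customer_id);

BEGIN
    dbms_stats.gather_table_stats(ownname => USER, tabname => 'ORDERS', cascade => TRUE);
END;
/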
