Very slow update on a relatively small table in PostgreSQL - performance

Well, I have the following table (info from pgAdmin):
CREATE TABLE comments_lemms
(
comment_id integer,
freq integer,
lemm_id integer,
bm25 real
)
WITH (
OIDS=FALSE
);
ALTER TABLE comments_lemms OWNER TO postgres;
-- Index: comments_lemms_comment_id_idx
-- DROP INDEX comments_lemms_comment_id_idx;
CREATE INDEX comments_lemms_comment_id_idx
ON comments_lemms
USING btree
(comment_id);
-- Index: comments_lemms_lemm_id_idx
-- DROP INDEX comments_lemms_lemm_id_idx;
CREATE INDEX comments_lemms_lemm_id_idx
ON comments_lemms
USING btree
(lemm_id);
And one more table:
CREATE TABLE comments
(
id serial NOT NULL,
nid integer,
userid integer,
timest timestamp without time zone,
lemm_length integer,
CONSTRAINT comments_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
ALTER TABLE comments OWNER TO postgres;
-- Index: comments_id_idx
-- DROP INDEX comments_id_idx;
CREATE INDEX comments_id_idx
ON comments
USING btree
(id);
-- Index: comments_nid_idx
-- DROP INDEX comments_nid_idx;
CREATE INDEX comments_nid_idx
ON comments
USING btree
(nid);
In comments_lemms there are 8 million entries; in comments, 270 thousand.
I'm performing the following SQL query:
update comments_lemms set bm25=(select lemm_length from comments where id=comment_id limit 1)
It runs for more than 20 minutes, and I stop it because pgAdmin looks like it's about to crash.
Is there any way to modify this query, the indexes, or anything else in my database to speed things up a bit? I have to run some similar queries in the future, and it's quite painful to wait more than 30 minutes for each one.

In comments_lemms there are 8 million entries; in comments, 270 thousand. I'm performing the following SQL query:
update comments_lemms set bm25=(select lemm_length from comments where id=comment_id limit 1)
In other words, you're making it go through 8M entries, and for each row you're doing a nested loop with an index lookup. PG won't rewrite/optimize it because of the limit 1 instruction.
Try this instead:
update comments_lemms set bm25 = comments.lemm_length
from comments
where comments.id = comments_lemms.comment_id;
It should do two seq scans and hash or merge join them together, then proceed with the update in one go.
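If you want to sanity-check that before running it on the full table, a quick look at the plan for the rewritten statement should already show the two sequential scans and the hash or merge join (a sketch; plain EXPLAIN only plans the update, it does not execute it):
EXPLAIN
update comments_lemms set bm25 = comments.lemm_length
from comments
where comments.id = comments_lemms.comment_id;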

Related

WITH Clause performance issue in Oracle 11g

Table myfirst3 has 4 columns and 1.2 million records.
Table mtl_object_genealogy has over 10 million records.
Running the code below takes a very long time. How can I tune it using the WITH clause?
WITH level1 as (
SELECT mln_parent.lot_number,
mln_parent.inventory_item_id,
gen.lot_num, --fg_lot
gen.segment1,
gen.rcv_date
FROM mtl_lot_numbers mln_parent,
(SELECT MOG1.parent_object_id,
p.segment1,
p.lot_num,
p.rcv_date
FROM mtl_object_genealogy MOG1 ,
myfirst3 p
START WITH MOG1.object_id = p.gen_object_id
AND (MOG1.end_date_active IS NULL OR MOG1.end_date_active > SYSDATE)
CONNECT BY nocycle PRIOR MOG1.parent_object_id = MOG1.object_id
AND (MOG1.end_date_active IS NULL OR MOG1.end_date_active > SYSDATE)
UNION all
SELECT p1.gen_object_id,
p1.segment1,
p1.lot_num,
p1.rcv_date
FROM myfirst3 p1 ) gen
WHERE mln_parent.gen_object_id = gen.parent_object_id )
select /*+ NO_CPU_COSTING */ *
from level1;
Execution plan (not reproduced here). The table definitions:
CREATE TABLE APPS.MYFIRST3
(
TO_ORGANIZATION_ID NUMBER,
LOT_NUM VARCHAR2(80 BYTE),
ITEM_ID NUMBER,
FROM_ORGANIZATION_ID NUMBER,
GEN_OBJECT_ID NUMBER,
SEGMENT1 VARCHAR2(40 BYTE),
RCV_DATE DATE
);
CREATE TABLE INV.MTL_OBJECT_GENEALOGY
(
OBJECT_ID NUMBER NOT NULL,
OBJECT_TYPE NUMBER NOT NULL,
PARENT_OBJECT_ID NUMBER NOT NULL,
START_DATE_ACTIVE DATE NOT NULL,
END_DATE_ACTIVE DATE,
GENEALOGY_ORIGIN NUMBER,
ORIGIN_TXN_ID NUMBER,
GENEALOGY_TYPE NUMBER
);
CREATE INDEX INV.MTL_OBJECT_GENEALOGY_N1 ON INV.MTL_OBJECT_GENEALOGY(OBJECT_ID);
CREATE INDEX INV.MTL_OBJECT_GENEALOGY_N2 ON INV.MTL_OBJECT_GENEALOGY(PARENT_OBJECT_ID);
Your explain plan shows some very big numbers. The optimizer reckons the final result set will be about 3,227,000,000,000 rows. Just returning that many rows will take some time.
All table accesses are full table scans. As you have big tables, that will eat time too.
As for improvements, it's pretty hard for us to understand the logic of your query. This is your data model, your business rules, your data. You haven't explained anything, so all we can do is guess.
Why are you using the WITH clause? You only use the level result set once, so just have a regular FROM clause.
Why are you using UNION ALL? That operation just duplicates the records retrieved from myfirst3 (all those values are already included as rows where MOG1.object_id = p.gen_object_id).
The MERGE JOIN CARTESIAN operation is interesting. Oracle uses it to implement transitive closure. It is an expensive operation but that's because treewalking a hierarchy is an expensive thing to do. It is unfortunate for you that you are generating all the parent-child relationships for a table with 27 million records. That's bad.
The full table scans aren't the problem. There are no filters on myfirst3, so obviously the database has to get all the records. If there is one parent for each myfirst3 record, that's about 10% of the contents of mtl_object_genealogy, so a full table scan would be efficient; but you're rolling up the entire hierarchy, so it's as though you're looking at a much greater chunk of the table.
Your indexes are irrelevant in the face of such numbers. What might help is a composite index on mtl_object_genealogy(OBJECT_ID, PARENT_OBJECT_ID, END_DATE_ACTIVE).
You want all the levels of PARENT_OBJECT_ID for the records in myfirst3. If you run this query often and mtl_object_genealogy is a slowly changing table you should consider materializing the transitive closure into a table which just has records for all the permutations of leaf records and parents.
To sum up:
Ditch the WITH clause
Drop the UNION ALL
Tune the tree-walk with a composite index (or materializing it)
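A rough sketch of what that advice could look like, reusing the names from the posted DDL and the original query (untested; treat it as an illustration, not a drop-in replacement):
-- composite index suggested above
CREATE INDEX INV.MTL_OBJECT_GENEALOGY_N3 ON INV.MTL_OBJECT_GENEALOGY (OBJECT_ID, PARENT_OBJECT_ID, END_DATE_ACTIVE);
-- the same query without the WITH clause and without the UNION ALL branch
SELECT mln_parent.lot_number,
mln_parent.inventory_item_id,
gen.lot_num,
gen.segment1,
gen.rcv_date
FROM mtl_lot_numbers mln_parent,
(SELECT MOG1.parent_object_id,
p.segment1,
p.lot_num,
p.rcv_date
FROM mtl_object_genealogy MOG1,
myfirst3 p
START WITH MOG1.object_id = p.gen_object_id
AND (MOG1.end_date_active IS NULL OR MOG1.end_date_active > SYSDATE)
CONNECT BY nocycle PRIOR MOG1.parent_object_id = MOG1.object_id
AND (MOG1.end_date_active IS NULL OR MOG1.end_date_active > SYSDATE)
) gen
WHERE mln_parent.gen_object_id = gen.parent_object_id;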

Accelerate SQLite Query

I'm currently learning SQLite (called by Python).
Following up on my previous question (Reorganising Data in SQLLIte), I want to store multiple time series (training data) in my database.
I have defined the following fields:
CREATE TABLE VARLIST
(
VarID INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT UNIQUE NOT NULL
)
CREATE TABLE DATAPOINTS
(
DataID INTEGER PRIMARY KEY,
timeID INTEGER,
VarID INTEGER,
value REAL
)
CREATE TABLE TIMESTAMPS
(
timeID INTEGER PRIMARY KEY AUTOINCREMENT,
TRAININGS_ID INT,
TRAINING_TIME_SECONDS FLOAT
)
VARLIST has 8 entries, TIMESTAMPS 1e5 entries and DATAPOINTS around 5e6.
When I now want to extract data for a given TrainingsID and VarID, I try it like:
SELECT
(SELECT TIMESTAMPS.TRAINING_TIME_SECONDS
FROM TIMESTAMPS
WHERE t.timeID = timeID) AS TRAINING_TIME_SECONDS,
(SELECT value
FROM DATAPOINTS
WHERE DATAPOINTS.timeID = t.timeID and DATAPOINTS.VarID = 2) as value
FROM
(SELECT timeID
FROM TIMESTAMPS
WHERE TRAININGS_ID = 96) as t;
The command EXPLAIN QUERY PLAN delivers:
0|0|0|SCAN TABLE TIMESTAMPS
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 1
1|0|0|SEARCH TABLE TIMESTAMPS USING INTEGER PRIMARY KEY (rowid=?)
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 2
2|0|0|SCAN TABLE DATAPOINTS
This basically works.
But there are two problems:
Minor problem: if there is a timeID for which no data for the requested VarID is available, I get a line with the value None.
I would prefer this line to be skipped.
Big problem: the search is incredibly slow (approx 5 minutes using http://sqlitebrowser.org/).
How do I best improve the performance?
Are there better ways to formulate the SELECT command, or should I modify the database structure itself?
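One alternative formulation worth trying (just a sketch against the schema above; the indexes shown further down matter more for speed) is a plain join, which also skips timestamps that have no matching data point:
SELECT TIMESTAMPS.TRAINING_TIME_SECONDS, DATAPOINTS.value
FROM TIMESTAMPS
JOIN DATAPOINTS ON DATAPOINTS.timeID = TIMESTAMPS.timeID
WHERE TIMESTAMPS.TRAININGS_ID = 96
AND DATAPOINTS.VarID = 2;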
OK, based on the hints I got, I could speed up the search enormously by applying the following indexes:
CREATE INDEX IF NOT EXISTS DP_Index on DATAPOINTS (VarID,timeID,DataID);
CREATE INDEX IF NOT EXISTS TS_Index on TIMESTAMPS(TRAININGS_ID,timeID);
The EXPLAIN QUERY PLAN output now reads as:
0|0|0|SEARCH TABLE TIMESTAMPS USING COVERING INDEX TS_Index (TRAININGS_ID=?)
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 1
1|0|0|SEARCH TABLE TIMESTAMPS USING INTEGER PRIMARY KEY (rowid=?)
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 2
2|0|0|SEARCH TABLE DATAPOINTS USING INDEX DP_Index (VarID=? AND timeID=?)
Thanks for your comments.

ORACLE - Partitioning with changing values

Assuming following table:
create table INVOICE(
INVOICE_ID NUMBER
,INVOICE_SK NUMBER
,INVOICE_AMOUNT NUMBER
,INVOICE_TEXT VARCHAR2(4000 Char)
,B2B_FLAG NUMBER -- 0 or 1
,ACTIVE NUMBER(1) -- 0 or 1
)
PARTITION BY LIST (ACTIVE)
SUBPARTITION BY LIST (B2B_FLAG)
( PARTITION p_active_1 values (1)
( SUBPARTITION sp_b2b_flag_11 VALUES (1)
, SUBPARTITION sp_b2b_flag_10 VALUES (0)
)
,
PARTITION p_active_0 values (0)
( SUBPARTITION sp_b2b_flag_01 VALUES (1)
, SUBPARTITION sp_b2b_flag_00 VALUES (0)
)
)
For performance reasons the table should get a "Composite List-List" partitioning, see http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin001.htm#i1006565.
The problematic point is that the ACTIVE flag will change frequently for a huge number of records, and sometimes the B2B_FLAG will too. Will Oracle automatically recognize the records for which the partitioning value has changed and move them to the appropriate partition, or do I have to call some kind of maintenance function in order to reorganize the partitions?
You need to enable row movement on the table or the update statement will fail with ORA-14402: updating partition key column would cause a partition change.
See the following testcase:
create table T_TESTPART
(
pk number(10),
part_key number(10)
)
partition by list (part_key) (
partition p01 values (1),
partition p02 values (2),
partition pdef values (default)
);
alter table T_TESTPART
add constraint pk_pk primary key (PK);
Now insert a row and try to update the partitioning value:
insert into t_testpart values (1,1);
update t_testpart set part_key = 2 where pk = 1;
You will now get the error mentioned above.
If you enable row movement, the same statement will work and Oracle will move the row to the other partition:
alter table t_testpart enable row movement;
update t_testpart set part_key = 2 where pk = 1;
I did not do any performance tests, but Oracle will probably delete the row from the first partition and insert it into the second partition. Consider this when using it at large scale.
In my own databases, I usually only use partitioning on columns that do not change.
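Applied to the INVOICE table from the question, the fix itself is a one-liner (sketch):
alter table INVOICE enable row movement; -- rows may now migrate when ACTIVE or B2B_FLAG changes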
Further reading:
http://www.dba-oracle.com/t_callan_oracle_row_movement.htm

T-SQL - wrong query execution plan behaviour

One of our queries degraded after generating load on the DB.
Our query is a join between 3 tables:
The Base table, which contains 10M rows.
The EventPerson table, which contains 5000 rows.
EventPerson788, which is empty.
It seems that the optimizer scans the index on EventPerson instead of seeking it. This is the script for replicating the issue:
--Create Tables
CREATE TABLE [dbo].[BASE](
[ID] [bigint] NOT NULL,
[IsActive] BIT,
PRIMARY KEY CLUSTERED ([ID] ASC)
)ON [PRIMARY]
GO
CREATE TABLE [dbo].[EventPerson](
[DUID] [bigint] NOT NULL,
[PersonInvolvedID] [bigint] NULL,
PRIMARY KEY CLUSTERED ([DUID] ASC)
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX [EventPerson_IDX] ON [dbo].[EventPerson]
(
[PersonInvolvedID] ASC
)
CREATE TABLE [dbo].[EventPerson788](
[EntryID] [bigint] NOT NULL,
[LinkedSuspectID] [bigint] NULL,
[sourceid] [bigint] NULL,
PRIMARY KEY CLUSTERED ([EntryID] ASC)
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[EventPerson788] WITH CHECK
ADD CONSTRAINT [FK7A34153D3720F84A]
FOREIGN KEY([sourceid]) REFERENCES [dbo].[EventPerson] ([DUID])
GO
ALTER TABLE [dbo].[EventPerson788] CHECK CONSTRAINT [FK7A34153D3720F84A]
GO
CREATE NONCLUSTERED INDEX [EventPerson788_IDX]
ON [dbo].[EventPerson788] ([LinkedSuspectID] ASC)
GO
--POPULATE BASE TABLE
DECLARE @I BIGINT=1
WHILE (@I<10000000)
BEGIN
begin transaction
INSERT INTO BASE(ID) VALUES(@I)
SET @I+=1
if (@I%10000=0 )
begin
commit;
end;
END
go
--POPULATE EventPerson TABLE
DECLARE @I BIGINT=1
WHILE (@I<5000)
BEGIN
BEGIN TRANSACTION
INSERT INTO EventPerson(DUID,PersonInvolvedID) SELECT @I,(SELECT TOP 1 ID FROM BASE ORDER BY NEWID())
SET @I+=1
IF(@I%10000=0 )
COMMIT TRANSACTION ;
END
GO
This is the query:
select
count(EventPerson.DUID)
from
EventPerson
inner loop join
Base on EventPerson.DUID = base.ID
left outer join
EventPerson788 on EventPerson.DUID = EventPerson788.sourceid
where
(EventPerson.PersonInvolvedID = 37909 or
EventPerson788.LinkedSuspectID = 37909)
AND BASE.IsActive = 1
Do you have any idea why the optimizer decides to use an index scan instead of an index seek?
Workarounds already tried:
Analyzed tables and built statistics.
Rebuilt indexes.
Tried the FORCESEEK hint.
None of the above persuaded the optimizer to run an index seek on EventPerson and a seek on the Base table.
Thanks for your help.
The scan is there because of the or condition and the outer join against EventPerson788.
Either it will return rows from EventPerson when EventPerson.PersonInvolvedID = 37909, or when there exist rows in EventPerson788 where EventPerson788.LinkedSuspectID = 37909. The last part means that every row in EventPerson has to be checked against the join.
The fact that EventPerson788 is empty cannot be used by the query optimizer, since the query plan is saved to be reused later, when there might be matching rows in EventPerson788.
Update:
You can rewrite your query using a union all instead of or to get a seek in EventPerson.
select count(EventPerson.DUID)
from
(
select EventPerson.DUID
from EventPerson
where EventPerson.PersonInvolvedID = 1556 and
not exists (select *
from EventPerson788
where EventPerson788.LinkedSuspectID = 1556)
union all
select EventPerson788.sourceid
from EventPerson788
where EventPerson788.LinkedSuspectID = 1556
) as EventPerson
inner join BASE
on EventPerson.DUID=base.ID
where
BASE.IsActive=1
Well, you're asking SQL Server to count the rows of the EventPerson table - so why do you expect a seek to be better than a scan here?
For a COUNT, the SQL Server optimizer will almost always use a scan - it needs to count the rows, after all - all of them... it will do a clustered index scan, if no other non-nullable columns are indexed.
If you have an index on a small, non-nullable column (e.g. an ID INT or something like that), it would probably do a scan on that index instead (less data to read to count all rows).
But in general: seek is great for selecting one or a few rows - but it sucks if you're dealing with all rows (like for a count)
You can easily observe this behavior if you're using the AdventureWorks sample database.
When doing a COUNT(*) on the Sales.SalesOrderDetail table, which has over 120,000 rows, like this:
SELECT COUNT(*) FROM Sales.SalesOrderDetail
then you'll get an index scan on IX_SalesOrderDetail_ProductID - it just doesn't pay off to do seeks on over 120,000 entries!
However, if you do the same operation on a smaller set of data, like this:
SELECT COUNT(*) FROM Sales.SalesOrderDetail
WHERE ProductID = 897
then you get back 2 rows out of all of them - and SQL Server will now use an index seek on that same index.

Why is this count query so slow?

Hi, I'm hosted on Heroku, running PostgreSQL 9.1.6 on their Ika plan (7.5 GB RAM). I have a table called cars. I need to do the following:
SELECT COUNT(*) FROM "cars" WHERE "cars"."reference_id" = 'toyota_hilux'
Now this takes an awful lot of time (64 seconds!):
Aggregate (cost=2849.52..2849.52 rows=1 width=0) (actual time=63388.390..63388.391 rows=1 loops=1)
-> Bitmap Heap Scan on cars (cost=24.76..2848.78 rows=1464 width=0) (actual time=1169.581..63387.361 rows=739 loops=1)
Recheck Cond: ((reference_id)::text = 'toyota_hilux'::text)
-> Bitmap Index Scan on index_cars_on_reference_id (cost=0.00..24.69 rows=1464 width=0) (actual time=547.530..547.530 rows=832 loops=1)
Index Cond: ((reference_id)::text = 'toyota_hilux'::text)
Total runtime: 64112.412 ms
A little background:
The table holds around 3.2m rows, and the column I'm trying to count on has the following setup:
reference_id character varying(50);
and index:
CREATE INDEX index_cars_on_reference_id
ON cars
USING btree
(reference_id COLLATE pg_catalog."default" );
What am I doing wrong? This performance is not what I would expect - or is it?
What @Satya claims in his comment is not quite true. In the presence of a matching index, the planner only chooses a full table scan if table statistics imply it would return more than around 5% (it depends) of the table, because it is then faster to scan the whole table.
As you can see from your own question, this is not the case for your query. It uses a Bitmap Index Scan followed by a Bitmap Heap Scan. Though I would have expected a plain index scan. (?)
I notice two more things in your explain output:
The first scan finds 832 rows, while the second reduces the count to 739. This indicates that you have many dead tuples in your index.
Check the execution time after each step with EXPLAIN ANALYZE and maybe add the results to your question:
First, rerun the query with EXPLAIN ANALYZE two or three times to populate the cache. What's the result of the last run compared to the first?
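For example, with the query from the question (note that the ANALYZE option actually executes the statement):
EXPLAIN ANALYZE
SELECT count(*) FROM cars WHERE reference_id = 'toyota_hilux';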
Next:
VACUUM ANALYZE cars;
Rerun.
If you have lots of write operations on the table, I would set a fill factor lower than 100. Like:
ALTER TABLE cars SET (fillfactor=90);
Use a lower value if your row size is big or you have a lot of write operations. Then:
VACUUM FULL ANALYZE cars;
This will take a while. Rerun.
Or, if you can afford to do this (and other important queries do not have contradicting requirements):
CLUSTER cars USING index_cars_on_reference_id;
This rewrites the table in the physical order of the index, which should make this kind of query much faster.
Normalize schema
If you need this to be really fast, create a table car_type with a serial primary key and reference it from the table cars. This will shrink the necessary index to a fraction of what it is now.
It goes without saying that you should make a backup before you try any of this.
CREATE TABLE car_type (
car_type_id serial PRIMARY KEY
, car_type text
);
INSERT INTO car_type (car_type)
SELECT DISTINCT car_type_id FROM cars ORDER BY car_type_id;
ANALYZE car_type;
CREATE UNIQUE INDEX car_type_uni_idx ON car_type (car_type); -- unique types
ALTER TABLE cars RENAME COLUMN car_type_id TO car_type; -- rename old col
ALTER TABLE cars ADD COLUMN car_type_id int; -- add new int col
UPDATE cars c
SET car_type_id = ct.car_type_id
FROM car_type ct
WHERE ct.car_type = c.car_type;
ALTER TABLE cars DROP COLUMN car_type; -- drop old varchar col
CREATE INDEX cars_car_type_id_idx ON cars (car_type_id);
ALTER TABLE cars
ADD CONSTRAINT cars_car_type_id_fkey FOREIGN KEY (car_type_id )
REFERENCES car_type (car_type_id) ON UPDATE CASCADE; -- add fk
VACUUM FULL ANALYZE cars;
Or, if you want to go all-out:
CLUSTER cars USING cars_car_type_id_idx;
Your query would now look like this:
SELECT count(*)
FROM cars
WHERE car_type_id = (SELECT car_type_id FROM car_type
WHERE car_type = 'toyota_hilux')
And should be even faster. Mainly because index and table are smaller now, but also because integer handling is faster than varchar handling. The gain will not be dramatic over the clustered table on the varchar column, though.
A welcome side effect: if you have to rename a type, it's a tiny UPDATE to one row now, not messing with the big table at all.
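For instance, renaming a type would be a single-row statement against the small lookup table (hypothetical new name, just for illustration):
UPDATE car_type
SET car_type = 'toyota_hilux_4wd' -- hypothetical new name
WHERE car_type = 'toyota_hilux';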
