My remote Postgres query seems to hang forever

I am running the following query on a remote Postgres instance, from a local client:
select * from matches_tb1 order by match_id desc limit 10;
matches_tb1 is a foreign table with a unique index on match_id. The query seems to hang forever. When I use EXPLAIN VERBOSE, there is no ORDER BY attached to the "Remote SQL". I guess the local server did not push the ORDER BY down to the remote server. How can I resolve this?
Attached are the EXPLAIN results:
explain verbose select match_id from matches_tb1 order by match_id desc limit 10;
QUERY PLAN
---------------------------------------------------------------------------------------------------
Limit  (cost=33972852.96..33972852.98 rows=10 width=8)
  Output: match_id
  ->  Sort  (cost=33972852.96..35261659.79 rows=515522734 width=8)
        Output: match_id
        Sort Key: matches_tb1.match_id DESC
        ->  Foreign Scan on public.matches_tb1  (cost=100.00..22832592.02 rows=515522734 width=8)
              Output: match_id
              Remote SQL: SELECT match_id FROM public.matches_tb1
(8 rows)

For the first query in your question:
select * from matches_tb1 order by match_id desc limit 10;
It appears from the EXPLAIN plan that Postgres is not using the B-tree index on match_id. This results in a very long-running query, because the database has to scan and sort the entire 500-million-row table just to find 10 records. As to why Postgres cannot use the index, the problem is select *. When the database reaches the leaf node of an index entry, it only finds a value for match_id. Since you are doing select *, it would then have to do a lookup into the table's heap to fetch the values of all the other columns (Postgres has no clustered indexes). If your table has low correlation, the optimizer will likely abandon the index altogether and just do a full scan of the table.
In contrast, consider one of your other queries which is executing quickly:
select match_id from matches_tb1 where match_id > 4164287140
order by match_id desc limit 10
In this case, the index on match_id can be used, because you are only selecting match_id. In addition, the restriction in the WHERE clause makes the index range even more selective.
So the resolution to your problem is to avoid select * with limit if you want the query to finish quickly. For example, if you only wanted, say, two columns col1 and col2 from your table, you could add those columns to the index so that it covers them (a sketch of such an index follows the query below). Then the following query should also be fast:
select match_id, col1, col2 from matches_tb1 order by match_id desc limit 10;
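If you control the remote server, a covering index there is one way to let that query run as an index-only scan. A minimal sketch, assuming PostgreSQL 11+ for the INCLUDE clause; col1 and col2 are the hypothetical columns from above:
-- Hedged sketch: run on the REMOTE server, against the underlying table
-- (a foreign table cannot be indexed locally). A B-tree on match_id that
-- also stores col1 and col2, so the top-10 query can be answered from
-- the index alone.
CREATE INDEX matches_tb1_match_id_cover_idx
    ON public.matches_tb1 (match_id DESC)
    INCLUDE (col1, col2);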

Related

Select max query returning all the rows in a table in Apache Hive

I am querying my data using this query:
SELECT date_col, max(rate) FROM crypto GROUP BY date_col;
I am expecting a single row, but it is returning all the rows in the table. What is the mistake in this query?
You'll get one row per date_col because you're grouping by it. If you just want the maximum rate, then SELECT max(rate) FROM crypto; is enough.
If you want to get the date_col for that record too then:
SELECT
date_col,
rate
FROM crypto
WHERE rate = (SELECT MAX(rate) FROM crypto)
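Note that this subquery version returns several rows if more than one date shares the maximum rate. A hedged alternative on the same crypto table, if you want exactly one row:
-- Pick a single top row even when rates tie (Hive supports ORDER BY ... LIMIT)
SELECT date_col, rate
FROM crypto
ORDER BY rate DESC
LIMIT 1;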

Why is Oracle using full table scan when it should use an index?

I'm doing some experimentation with query plans in Oracle, and I have the following table:
--create a table to use
create table SKEWED_DATA(
EMP_ID int,
DEPT int,
COL2 int,
CONSTRAINT SKEWED_DATA_PK PRIMARY KEY (EMP_ID)
);
--add an index on dept
create index SKEWED_DATA_INDEX1 on SKEWED_DATA(DEPT);
I then insert 1 million rows of data where 999,999 rows have dept id 1, and 1 row has dept id 99.
Before calculating statistics on the table, Oracle Autotrace shows that when running the following queries, it is using an index scan for both:
select AVG(COL2) from SKEWED_DATA D where DEPT = 1;
select AVG(COL2) from SKEWED_DATA D where DEPT = 99;
It's my understanding that it would be more efficient in this case to use a full table scan for dept id 1, and an index scan for dept id 99.
I then run the following command to generate statistics for the table:
execute DBMS_STATS.GATHER_TABLE_STATS ('HARRY','SKEWED_DATA');
Querying dba_tab_statistics and user_tab_col_statistics confirms that stats and histograms have been gathered.
Running Autotrace on the following queries now shows a full table scan for both!
select AVG(COL2) from SKEWED_DATA D where DEPT = 1;
select AVG(COL2) from SKEWED_DATA D where DEPT = 99;
My question is: why is Oracle using a full table scan for dept id 99 when there is only 1 row with this value?
UPDATE
I tried running the query for dept 99 with a hint to force Oracle to use the index, and whilst Autotrace believes it to be less efficient, the time it takes is 0.001 seconds, compared to 0.03 seconds when using the full table scan, thus proving (I think?) my theory that Oracle should be using the index in this instance.
select /*+ INDEX(D SKEWED_DATA_INDEX1) */ AVG(COL2) from SKEWED_DATA D where DEPT = 99;
OK, I think I might have solved it. When I had 999,999 rows with dept 1 and 1 row with dept 99, I inspected the number of histogram buckets by running the following query:
select COLUMN_NAME, HISTOGRAM, NUM_BUCKETS, NUM_DISTINCT from USER_TAB_COL_STATISTICS where TABLE_NAME = 'SKEWED_DATA';
This showed that there are 2 distinct values but only 1 bucket. If I change the stats gathering to this:
execute DBMS_STATS.GATHER_TABLE_STATS('HARRY','SKEWED_DATA',estimate_percent=>100);
It then correctly comes up with 2 buckets, and Autotrace shows the 'correct' execution plans. So I guess it's the extreme skew of my data that prevents Oracle from generating correct stats unless estimate_percent is massive.
Interestingly, with slightly less skewed data (say, 2-3% of all records having a dept id of 99), Oracle does treat it correctly even when I leave estimate_percent at its default.
So the moral of the story seems to be: if you have ridiculously skewed data like this and Oracle is not using the correct execution plan, try playing around with the estimate_percent parameter.
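A minimal sketch of that idea, assuming the same HARRY schema; the method_opt clause explicitly requests a histogram on DEPT (254 is the classic maximum bucket count):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'HARRY',
    tabname          => 'SKEWED_DATA',
    estimate_percent => 100,
    method_opt       => 'FOR COLUMNS DEPT SIZE 254'  -- histogram on DEPT
  );
END;
/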

optimize query with minus oracle

I want to optimize a query with MINUS that takes too much time; any help is appreciated.
I have two tables A and B,
Table A: ID, value
Table B: ID
I want all records from table A that are not in table B, showing the value. For that I had something like:
Select ID, value
FROM A
WHERE value > 70
MINUS
Select ID
FROM B;
This query is taking too long ... any tips on how to improve it?
Thank you for your attention.
Are ID and Value indexed?
The performance of MINUS and NOT EXISTS depends:
It really depends on a bunch of factors.
A MINUS will do a full table scan on both tables unless there is some criteria in the where clause of both queries that allows an index range scan. A MINUS also requires that both queries have the same number of columns, and that each column has the same data type as the corresponding column in the other query (or one convertible to the same type). A MINUS will return all rows from the first query where there is not an exact match, column for column, with the second query. A MINUS also requires an implicit sort of both queries.
NOT EXISTS will read the sub-query once for each row in the outer query. If the correlation field (you are running a correlated sub-query?) is an indexed field, then only an index scan is done.
The choice of which construct to use depends on the type of data you want to return, and also the relative sizes of the two tables/queries. If the outer table is small relative to the inner one, and the inner table is indexed (preferably a unique index but not required) on the correlation field, then NOT EXISTS will probably be faster since the index lookup will be pretty fast, and only executed relatively few times. If both tables are roughly the same size, then MINUS might be faster, particularly if you can live with only seeing the fields that you are comparing on.
Minus operator versus 'not exists' for faster SQL query - Oracle Community Forums
You could use NOT EXISTS like so:
SELECT a.ID, a.value
FROM A
WHERE a.value > 70
AND NOT EXISTS (
    SELECT b.ID
    FROM B
    WHERE b.ID = a.ID
);
EDIT: I've produced some dummy data and two test datasets to demonstrate the performance gains from indexing. Note: I did this in MySQL since I don't have Oracle on my MacBook.
Table A has 2600 records with 2 columns: ID, val.
ID is an auto-increment integer.
val is a varchar(255).
Table B has one column (an auto-increment ID, inserted in gaps of 3), but more records than table A.
You can reproduce this if you wish: Pastebin - SQL Dummy Data
Here is the query I will be using:
select a.id, a.val from tablea a
where length(a.val) > 3
and not exists(
select b.id from tableb b where b.id = a.id
);
Without Indexes, the runtime is 986ms with 1685 rows.
Now we add the indexes:
ALTER TABLE `tablea` ADD INDEX `id` (`id`);
ALTER TABLE `tableb` ADD INDEX `id` (`id`);
With Indexes, the runtime is 14ms with 1685 rows. That's 1.42% of the time it took without indexes!
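For completeness, a third common anti-join pattern is LEFT JOIN ... IS NULL; a hedged sketch against the A/B tables from the question (not benchmarked here):
SELECT a.ID, a.value
FROM A a
LEFT JOIN B b ON b.ID = a.ID  -- keep every A row, matching B where possible
WHERE a.value > 70
  AND b.ID IS NULL;           -- keep only A rows with no match in B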

T-SQL - wrong query execution plan behaviour

One of our queries degraded after generating load on the DB.
Our query is a join between 3 tables:
Base, which contains 10M rows.
EventPerson, which contains 5000 rows.
EventPerson788, which is empty.
It seems that the optimizer scans the index on EventPerson instead of seeking it. This is the script for replicating the issue:
--Create Tables
CREATE TABLE [dbo].[BASE](
[ID] [bigint] NOT NULL,
[IsActive] BIT,
PRIMARY KEY CLUSTERED ([ID] ASC)
)ON [PRIMARY]
GO
CREATE TABLE [dbo].[EventPerson](
[DUID] [bigint] NOT NULL,
[PersonInvolvedID] [bigint] NULL,
PRIMARY KEY CLUSTERED ([DUID] ASC)
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX [EventPerson_IDX] ON [dbo].[EventPerson]
(
[PersonInvolvedID] ASC
)
CREATE TABLE [dbo].[EventPerson788](
[EntryID] [bigint] NOT NULL,
[LinkedSuspectID] [bigint] NULL,
[sourceid] [bigint] NULL,
PRIMARY KEY CLUSTERED ([EntryID] ASC)
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[EventPerson788] WITH CHECK
ADD CONSTRAINT [FK7A34153D3720F84A]
FOREIGN KEY([sourceid]) REFERENCES [dbo].[EventPerson] ([DUID])
GO
ALTER TABLE [dbo].[EventPerson788] CHECK CONSTRAINT [FK7A34153D3720F84A]
GO
CREATE NONCLUSTERED INDEX [EventPerson788_IDX]
ON [dbo].[EventPerson788] ([LinkedSuspectID] ASC)
GO
--POPULATE BASE TABLE
DECLARE @I BIGINT=1
WHILE (@I<10000000)
BEGIN
begin transaction
INSERT INTO BASE(ID) VALUES(@I)
SET @I+=1
if (@I%10000=0 )
begin
commit;
end;
END
go
--POPULATE EventPerson TABLE
DECLARE @I BIGINT=1
WHILE (@I<5000)
BEGIN
BEGIN TRANSACTION
INSERT INTO EventPerson(DUID,PersonInvolvedID) VALUES(@I,(SELECT TOP 1 ID FROM BASE ORDER BY NEWID()))
SET @I+=1
IF(@I%10000=0 )
COMMIT TRANSACTION ;
END
GO
This is the query:
select
count(EventPerson.DUID)
from
EventPerson
inner loop join
Base on EventPerson.DUID = base.ID
left outer join
EventPerson788 on EventPerson.DUID = EventPerson788.sourceid
where
(EventPerson.PersonInvolvedID = 37909 or
EventPerson788.LinkedSuspectID = 37909)
AND BASE.IsActive = 1
Do you have any idea why the optimizer decides to use index scan instead of index seek?
Workarounds already tried:
Analyzed the tables and rebuilt statistics.
Rebuilt the indexes.
Tried the FORCESEEK hint.
None of these persuaded the optimizer to run an index seek on EventPerson and a seek on the Base table.
Thanks for your help.
The scan is there because of the or condition and the outer join against EventPerson788.
Either it will return rows from EventPerson when EventPerson.PersonInvolvedID = 37909, or when there exist rows in EventPerson788 where EventPerson788.LinkedSuspectID = 37909. The latter means that every row in EventPerson has to be checked against the join.
The fact that EventPerson788 is empty can not be used by the query optimizer since the query plan is saved to be reused later when there might be matching rows in EventPerson788.
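If you can afford a compile on every execution, one hedged workaround for the plan-reuse issue is OPTION (RECOMPILE), which discards the cached plan so the optimizer sees the current (empty) cardinality of EventPerson788. It may or may not yield a seek here, but it lifts the constraint described above:
select count(EventPerson.DUID)
from EventPerson
inner loop join Base on EventPerson.DUID = base.ID
left outer join EventPerson788 on EventPerson.DUID = EventPerson788.sourceid
where (EventPerson.PersonInvolvedID = 37909
       or EventPerson788.LinkedSuspectID = 37909)
  and BASE.IsActive = 1
option (recompile);  -- fresh plan per execution, never cached for reuse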
Update:
You can rewrite your query using a union all instead of or to get a seek in EventPerson.
select count(EventPerson.DUID)
from
(
select EventPerson.DUID
from EventPerson
where EventPerson.PersonInvolvedID = 1556 and
not exists (select *
from EventPerson788
where EventPerson788.LinkedSuspectID = 1556)
union all
select EventPerson788.sourceid
from EventPerson788
where EventPerson788.LinkedSuspectID = 1556
) as EventPerson
inner join BASE
on EventPerson.DUID=base.ID
where
BASE.IsActive=1
Well, you're asking SQL Server to count the rows of the EventPerson table - so why do you expect a seek to be better than a scan here?
For a COUNT, the SQL Server optimizer will almost always use a scan - it needs to count the rows, after all, all of them... it will do a clustered index scan if no other non-nullable columns are indexed.
If you have an index on a small, non-nullable column (e.g. an ID INT or something like that), it will probably scan that index instead (less data to read to count all rows).
But in general: a seek is great for selecting one or a few rows, but it sucks if you're dealing with all rows (like for a count).
You can easily observe this behavior if you're using the AdventureWorks sample database.
When doing a COUNT(*) on the Sales.SalesOrderDetail table, which has over 120,000 rows, like this:
SELECT COUNT(*) FROM Sales.SalesOrderDetail
then you'll get an index scan on IX_SalesOrderDetail_ProductID - it just doesn't pay off to do seeks on over 120,000 entries!
However, if you do the same operation on a smaller set of data, like this:
SELECT COUNT(*) FROM Sales.SalesOrderDetail
WHERE ProductID = 897
then you get back 2 rows out of all of them - and SQL Server will now use an index seek on that same index.

Why is this count query so slow?

Hi, I'm hosted on Heroku, running PostgreSQL 9.1.6 on their Ika plan (7.5 GB RAM). I have a table called cars. I need to do the following:
SELECT COUNT(*) FROM "cars" WHERE "cars"."reference_id" = 'toyota_hilux'
Now this takes an awful lot of time (64 sec!):
Aggregate  (cost=2849.52..2849.52 rows=1 width=0) (actual time=63388.390..63388.391 rows=1 loops=1)
  ->  Bitmap Heap Scan on cars  (cost=24.76..2848.78 rows=1464 width=0) (actual time=1169.581..63387.361 rows=739 loops=1)
        Recheck Cond: ((reference_id)::text = 'toyota_hilux'::text)
        ->  Bitmap Index Scan on index_cars_on_reference_id  (cost=0.00..24.69 rows=1464 width=0) (actual time=547.530..547.530 rows=832 loops=1)
              Index Cond: ((reference_id)::text = 'toyota_hilux'::text)
Total runtime: 64112.412 ms
A little background:
The table holds around 3.2m rows, and the column that I'm trying to count on has the following setup:
reference_id character varying(50);
and index:
CREATE INDEX index_cars_on_reference_id
ON cars
USING btree
(reference_id COLLATE pg_catalog."default" );
What am I doing wrong? Surely this is not the performance I should expect - or is it?
What @Satya claims in his comment is not quite true. In the presence of a matching index, the planner only chooses a full table scan if table statistics imply it would return more than around 5% of the table (the threshold varies), because it is then faster to scan the whole table.
As you see from your own question this is not the case for your query. It uses a Bitmap Index Scan followed by a Bitmap Heap Scan. Though I would have expected a plain index scan. (?)
I notice two more things in your explain output:
The first scan finds 832 rows, while the second reduces the count to 739. This indicates that you have many dead tuples in your index.
Check the execution time after each step with EXPLAIN ANALYZE and maybe add the results to your question:
First, rerun the query with EXPLAIN ANALYZE two or three times to populate the cache. What's the result of the last run compared to the first?
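For example (hedged; this just wraps the query from the question):
EXPLAIN ANALYZE
SELECT count(*) FROM cars WHERE reference_id = 'toyota_hilux';
-- run it two or three times; later runs reflect a warm cache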
Next:
VACUUM ANALYZE cars;
Rerun.
If you have lots of write operations on the table, I would set a fill factor lower than 100. Like:
ALTER TABLE cars SET (fillfactor=90);
Go lower if your rows are big or you have a lot of write operations. Then:
VACUUM FULL ANALYZE cars;
This will take a while. Rerun.
Or, if you can afford to do this (and other important queries do not have contradicting requirements):
CLUSTER cars USING index_cars_on_reference_id;
This rewrites the table in the physical order of the index, which should make this kind of query much faster.
Normalize schema
If you need this to be really fast, create a table car_type with a serial primary key and reference it from the table cars (in the sketch below, car_type stands in for your reference_id column). This will shrink the necessary index to a fraction of its current size.
Goes without saying that you make a backup before you try any of this.
CREATE TABLE car_type (  -- permanent table: the FK added below cannot reference a temp table
car_type_id serial PRIMARY KEY
, car_type text
);
INSERT INTO car_type (car_type)
SELECT DISTINCT car_type_id FROM cars ORDER BY car_type_id;
ANALYZE car_type;
CREATE UNIQUE INDEX car_type_uni_idx ON car_type (car_type); -- unique types
ALTER TABLE cars RENAME COLUMN car_type_id TO car_type; -- rename old col
ALTER TABLE cars ADD COLUMN car_type_id int; -- add new int col
UPDATE cars c
SET car_type_id = ct.car_type_id
FROM car_type ct
WHERE ct.car_type = c.car_type;
ALTER TABLE cars DROP COLUMN car_type; -- drop old varchar col
CREATE INDEX cars_car_type_id_idx ON cars (car_type_id);
ALTER TABLE cars
ADD CONSTRAINT cars_car_type_id_fkey FOREIGN KEY (car_type_id )
REFERENCES car_type (car_type_id) ON UPDATE CASCADE; -- add fk
VACUUM FULL ANALYZE cars;
Or, if you want to go all-out:
CLUSTER cars USING cars_car_type_id_idx;
Your query would now look like this:
SELECT count(*)
FROM cars
WHERE car_type_id = (SELECT car_type_id FROM car_type
WHERE car_type = 'toyota_hilux')
And should be even faster. Mainly because index and table are smaller now, but also because integer handling is faster than varchar handling. The gain will not be dramatic over the clustered table on the varchar column, though.
A welcome side effect: if you have to rename a type, it's a tiny UPDATE to one row now, not messing with the big table at all.
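A hedged illustration of that rename ('toyota_hilux_facelift' is a made-up new name):
UPDATE car_type
SET    car_type = 'toyota_hilux_facelift'  -- hypothetical new name
WHERE  car_type = 'toyota_hilux';
-- a one-row update: cars references car_type_id, which never changes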
