When joining across tables (as in the examples below), is there an efficiency difference between joining on the tables or joining subqueries containing only the needed columns?
In other words, is there a difference in efficiency between these two queries?
SELECT result
FROM result_tbl
JOIN test_tbl USING (test_id)
JOIN sample_tbl USING (sample_id)
JOIN (SELECT request_id
FROM request_tbl
WHERE request_status='A') USING(request_id)
vs
SELECT result
FROM (SELECT result, test_id FROM result_tbl)
JOIN (SELECT test_id, sample_id FROM test_tbl) USING(test_id)
JOIN (SELECT sample_id FROM sample_tbl) USING(sample_id)
JOIN (SELECT request_id
FROM request_tbl
WHERE request_status='A') USING(request_id)
The only way to find out for sure is to run both with tracing turned on and then look at the trace file. But in all probability they will be treated the same: the optimizer will merge all the inline views into the main statement and come up with the same query plan.
It doesn't matter. It may actually be WORSE since you are taking control away from the optimizer which generally knows best.
However, remember that if you are doing a JOIN and only selecting columns from one of the tables, it is QUITE OFTEN better to rewrite it as a series of EXISTS statements -- because that's what you really mean. JOINs (with some exceptions) return every matching row, which is more work for the database to do.
e.g.
SELECT t1.id1
FROM table1 t1
INNER JOIN table2 ON something = something
should almost always be
SELECT id1
FROM table1 t1
WHERE EXISTS( SELECT *
FROM table2
WHERE something = something )
For simple queries the optimizer may reduce the query plans into identical ones. Check it out on your DBMS.
Also this is a code smell and probably should be changed:
JOIN (SELECT request_id
FROM request_tbl
WHERE request_status='A')
to
SELECT result
FROM request
WHERE EXISTS(...)
AND request_status = 'A'
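Putting those two suggestions together, a full rewrite might look like the sketch below. It assumes test_tbl carries sample_id and sample_tbl carries request_id, which the question doesn't actually show:
SELECT r.result
FROM result_tbl r
JOIN test_tbl t   ON t.test_id   = r.test_id
JOIN sample_tbl s ON s.sample_id = t.sample_id
WHERE EXISTS (SELECT *
              FROM request_tbl q
              WHERE q.request_id     = s.request_id
              AND   q.request_status = 'A');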
No difference.
You can tell by running EXPLAIN PLAN on both those statements - Oracle knows that all you want is the "result" column, so it only does the minimum necessary to get the data it needs - you should find that the plans will be identical.
The Oracle optimiser does, sometimes, "materialize" a subquery (i.e. run the subquery and keep the results in memory for later reuse), but this is rare and only occurs when the optimiser believes this will result in a performance improvement; in any case, Oracle will do this "materialization" whether you specified the columns in the subqueries or not.
Obviously, if the only place the "result" column is stored is in the table blocks (along with the rest of the data), Oracle has to visit those blocks - but it will only keep the relevant info (the "result" column and other relevant columns, e.g. "test_id") in memory while processing the query.
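If you want to check this yourself, the comparison is minimal with standard Oracle tooling - run EXPLAIN PLAN for each version of the query and display the result (shown here for the first version only):
EXPLAIN PLAN FOR
SELECT result
FROM result_tbl
JOIN test_tbl USING (test_id)
JOIN sample_tbl USING (sample_id)
JOIN (SELECT request_id
      FROM request_tbl
      WHERE request_status = 'A') USING (request_id);

-- Show the plan just generated; repeat for the second version and compare.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);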
Related
I was working on a query optimization task. One of the suggested improvements was to use the WITH clause. I noticed that it did a very good job and led to a shorter execution time, but now I am not sure when I should use the WITH clause and whether there is any risk in using it.
Here is one of the queries that I am working on:
WITH MY_TABLE AS
( SELECT PROD_KY,
         sum(GROUPISPRIVATE) AS ISPRIVATE,
         sum(GROUPISSHARED)  AS ISSHARED
  FROM
  (
    SELECT GRP_PROD_CUSTOMER.PROD_KY,
           1 AS GROUPISPRIVATE,
           0 AS GROUPISSHARED
    FROM CUSTOMER
    JOIN GRP_CUSTOMER ON GRP_CUSTOMER.CUST_KY = CUSTOMER.CUST_KY
    JOIN GRP_PROD_CUSTOMER ON GRP_PROD_CUSTOMER.GRP_KY = GRP_CUSTOMER.GRP_KY
    GROUP BY GRP_PROD_CUSTOMER.PROD_KY
  )
  GROUP BY PROD_KY
)
SELECT * FROM MY_TABLE;
is there any risk of using it?
Yes. Oracle may decide to materialize the subquery, which means writing its result set to disk and then reading it back (although that may not mean quite that in 12cR2 or later). That unexpected I/O can be a performance hit. Not always, and usually we can trust the optimizer to make the correct choice. However, Oracle has provided us with hints to tell the optimizer how to handle the result set: /*+ materialize */ to, well, materialize it and /*+ inline */ to have it merged into the main query instead.
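For illustration, the hint goes inside the factored subquery itself; using the same hypothetical meh table as the examples further down, that looks like:
with cte as
( select /*+ materialize */ id, blah from meh )   -- or /*+ inline */ to force the opposite
select * from cte;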
I start with this potential downside because I think it's important to understand that the WITH clause is not a silver bullet and it won't improve every single query, and may even degrade performance. For instance I share the scepticism of the other commenters that the query you posted is in any way faster because you re-wrote it as a common table expression.
Generally, the use cases for the WITH clause are:
We want to use the result set from the subquery multiple times
with cte as
( select blah from meh )
select *
from t1
join t2 on t1.id = t2.id
where t1.col1 in ( select blah from cte )
and t2.col2 not in ( select blah from cte)
We want to build a cascade of subqueries:
with cte as
( select id, blah from meh )
, cte2 as
( select t2.*, cte.blah
from cte
join t2 on t2.id = cte.id)
, cte3 as
( select t3.*, cte2.*
from cte2
join t3 on t3.col2 = cte2.something )
….
This second approach is beguiling and can be useful for implementing complex business logic in pure SQL. But it can lead to a procedural mindset and lose the power of sets and joins. This too is a risk.
We want to use a recursive WITH clause. This allows us to replace Oracle's own CONNECT BY syntax with a more standard approach (see the sketch below). Find out more
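A minimal sketch of the recursive form, assuming the classic EMP table (EMPNO, MGR) purely for illustration:
with emp_tree (empno, mgr, lvl) as
( select empno, mgr, 1
  from emp
  where mgr is null                  -- anchor member: top of the hierarchy
  union all
  select e.empno, e.mgr, t.lvl + 1   -- recursive member: walk down one level
  from emp e
  join emp_tree t on t.empno = e.mgr
)
select * from emp_tree;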
In 12c and later we can write user-defined functions in the WITH clause. This is a powerful feature, especially for users who need to implement some logic in PL/SQL but only have SELECT access to the database. Find out more
For the record, I have seen some very successful and highly performant uses of the second type of WITH clause. However, I have also seen uses of WITH when it would have been just as easy to write an inline view. For instance, this is just using the WITH clause as syntactic sugar ...
with cte as
( select id, blah from meh )
select t2.*, cte.blah
from t2
join cte on cte.id = t2.id
… and would be clearer as ...
select t2.*, cte.blah
from t2
join ( select id, blah from meh ) cte on cte.id = t2.id
The WITH clause was introduced in Oracle to comply with the SQL-99 standard.
Its main purpose is to reduce complexity and repetitive code.
Let's say you need to find the average salary of a department (d1) and then fetch everyone in that department (d1) who earns more than that average.
This can make multiple references to the subquery more efficient and readable.
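A sketch of that example, assuming a hypothetical EMPLOYEES table with DEPT_ID and SALARY columns:
WITH dept_avg AS
( SELECT dept_id, AVG(salary) AS avg_sal
  FROM employees
  GROUP BY dept_id
)
SELECT e.*
FROM employees e
JOIN dept_avg d ON d.dept_id = e.dept_id
WHERE e.salary > d.avg_sal;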
The MATERIALIZE and INLINE optimizer hints can be used to influence the decision. The undocumented MATERIALIZE hint tells the optimizer to resolve the subquery as a global temporary table, while the INLINE hint tells it to process the query inline. The decision to use a hint depends purely on the logic you are implementing in the query.
Oracle 12c introduced the ability to declare PL/SQL functions and procedures in the WITH clause.
Refer to the Oracle documentation for details.
Cheers!!
Your query as posted doesn't really benefit from a WITH statement (aka common table expression, CTE).
Anyway, using the WITH clause brings several benefits:
The query is better readable (in my opinion)
You can use the same subquery several times in the main query. You can even cascade them.
Oracle can materialize the subquery, i.e. Oracle may create a temporary table and store the result of the subquery in it. This can give better performance.
The WITH clause may be processed as an inline view or resolved as a temporary table. The SQL WITH clause is very similar to the use of Global temporary tables. This technique is often used to improve query speed for complex subqueries and enables the Oracle optimizer to push the necessary predicates into the views.
The advantage of the latter is that repeated references to the subquery may be more efficient as the data is easily retrieved from the temporary table, rather than being requeried by each reference. You should assess the performance implications of the WITH clause on a case-by-case basis.
You can read more here:
http://www.dba-oracle.com/t_with_clause.htm
https://oracle-base.com/articles/misc/with-clause
One point to consider is that different RDBMSs handle the WITH clause - aka common table expressions (CTE), aka subquery factoring - differently:
Oracle may use a materialization or an inlining (as already explained in the answer provided by APC)
Postgres always materializes in releases up to 11 (so there a CTE is an optimization fence). In Postgres 12 the behaviour changes and becomes similar to Oracle's approach: https://info.crunchydata.com/blog/with-queries-present-future-common-table-expressions. You even get something that almost looks like a hint, though it is well known that Postgres does not use hints... (see the sketch after this list)
in SQL Server currently a CTE is always inlined, as explained in https://erikdarlingdata.com/2019/08/what-would-materialized-ctes-look-like-in-sql-server/
So depending on the RDBMS you use and its version your mileage may vary.
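For reference, the Postgres 12+ "almost a hint" mentioned above is the MATERIALIZED / NOT MATERIALIZED keyword; reusing the hypothetical meh table from earlier:
-- PostgreSQL 12+ only: explicitly request or suppress CTE materialization
with cte as not materialized
( select id, blah from meh )
select * from cte where id = 42;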
I'm having some difficulty with joining a view to another table. This is on an Oracle RAC system running 11.2
I'll try and give as much detail as possible without going into specific table structures as my company would not like that.
You all know how this works. "Hey, can you write some really ugly software to implement our crazy ideas?"
The idea was to make a view where the end user wouldn't know whether they were going after the new table or the old table. To that end, one of the tables is a parameter table that returns "ON" or "OFF" and is used in the CASE statements.
There are some not-too-difficult but nested CASE statements in the SELECT clause.
I have a view:
create view my_view as
select t1.a as a, t1.b as b, t1.c as c,
sum(case when t2.a = 'xx' then case when t3.a then ... ,
case when t2.a = 'xx' then case when t3.a then ... ,
from table1 t1
join table2 t2 on (t1.a = t2.a etc...)
full outer join t3 on (t1.a = t3.a etc...)
full outer join t4 on (t1.a = t4.a etc...)
group by t1.a, t1.b, t2.c, and all the ugly case statements...
Now, when I run the query
select * from my_view where a='xxx' and b='yyy' and c='zzz'
the query runs great and the cost is 10.
However, when I join this view with another table everything falls apart.
select * from my_table mt join my_view mv on (mt.a = mv.a and mt.b = mv.b and mt.c = mv.c) where ...
everything falls apart, with a cost through the roof.
What I think is happening is that the predicates are not getting pushed into the view. As such, the view is doing full table scans, joining everything to everything, and only then removing all the rows.
Every hint, tweak, or anything I've done doesn't appear to help.
When looking at the plan it looks like it has the predicates.
But this happens after everything is joined.
Sorry if this is cryptic but any help would be greatly appreciated.
Since the view has a GROUP BY, predicates cannot be pushed into the inner query.
Also, the aggregate functions wrap CASE expressions, which can make things even harder for the optimizer.
Oracle introduces optimizer enhancements with every version/release/patch, so it is hard to say what is supported in the version you're running. However, you can try:
See if removing the CASE expressions from the aggregate functions makes any difference
Otherwise, move the GROUP BY and the aggregate functions out of the view and into the outermost query (a rough sketch follows)
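Rough sketch of that last suggestion, with made-up names (table2, t2_x, my_view_base) since the real view definition isn't shown:
-- The view keeps only the joins; the aggregation moves to the outer query.
create or replace view my_view_base as
select t1.a, t1.b, t1.c, t2.x as t2_x
from table1 t1
join table2 t2 on t1.a = t2.a;

select a, b, c,
       sum(case when t2_x = 'xx' then 1 else 0 end) as xx_count
from my_view_base
where a = 'xxx' and b = 'yyy' and c = 'zzz'
group by a, b, c;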
After many keyboard indentations on my forehead I may have tricked Oracle into pushing the predicates. I don't know exactly why this works but simplifying things may have helped.
I changed all my ON clauses to USING clauses, so the join column names now match on both sides. For some other predicates that were constants, I added a WHERE clause to the view.
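Roughly what that change looked like, again with hypothetical column names:
-- ON (...) rewritten as USING (...), and the constant parameter
-- predicate moved into the view's own WHERE clause.
create or replace view my_view as
select a, t1.b, t1.c,
       sum(case when t2.flag = 'xx' then 1 else 0 end) as xx_count
from table1 t1
join table2 t2 using (a)
where t2.switch_flag = 'ON'
group by a, t1.b, t1.c;

select *
from my_table mt
join my_view mv using (a, b, c);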
The end result is I can now join this view with another table and the cost is reasonable and the plan shows that the predicates are being pushed.
Thank you to everybody who looked at this problem.
First, I looked at the SELECT statement in the Oracle docs.
I have a question about Oracle SELECT behaviour when my query contains SELECT, JOIN, and WHERE.
See below for details:
My sample table:
[ P_IMAGE_ID ]
IMAGE_ID (PK)
FILE_NAME
FILE_TYPE
...
...
[ P_IMG_TAG ]
IMG_TAG_ID (PK)
IMAGE_ID (FK)
TAG
...
...
My requirement is: get the distinct images whose tag is '70702'.
Method 1: Select -> Join -> Where -> Distinct
SELECT DISTINCT PID.IMAGE_ID
, PID.FILE_NAME
FROM P_IMAGE_ID PID
INNER JOIN P_IMG_TAG PTAG
ON PTAG.IMAGE_ID = PID.IMAGE_ID
WHERE PTAG.TAG = '70702';
I think the query behaviour should be:
join the tables -> apply the WHERE clause -> DISTINCT select
I use Oracle SQL developer to get the explain plan:
Method 1 cost 76.
Method 2: Select -> Where -> Where -> Distinct
SELECT DISTINCT PID.IMAGE_ID
, PID.FILE_NAME
FROM P_IMAGE_ID PID
WHERE PID.IMAGE_ID IN
(
SELECT PTAG.IMAGE_ID
FROM P_IMG_TAG PTAG
WHERE PTAG.TAG = '70702'
);
I think the second query's behaviour should be:
apply the inner WHERE clause -> apply the outer WHERE (IN) clause -> DISTINCT select
I use Oracle SQL developer to get the explain plan too:
Method 2 cost 76 too. Why?
I believed that applying the WHERE clause first, to reduce the work and avoid a table join, should perform better than the join query, but now that I have tested it I am confused: why are the costs of the two methods equal?
Or have I misunderstood something?
List of my question here:
Why are the costs of the two methods above equal?
If the subselect on TAG = '70702' returns thousands or millions of rows or more, is the table join approach better?
If the subselect on TAG = '70702' returns only a few rows, is the subselect approach better because it reduces the data to process?
When I use method 1 (Select -> Join -> Where -> Distinct), does the database really join the tables before applying the WHERE clause?
Someone told me that if I move the predicate TAG = '70702' into the join clause
(i.e. INNER JOIN P_IMG_TAG PTAG ON PTAG.IMAGE_ID = PID.IMAGE_ID AND PTAG.TAG = '70702'), performance may be better. Is that right?
I have read the topics "subselect vs outer join" and "subquery or inner join", but both are about SQL Server; I'm not sure the same applies to Oracle.
The DBMS takes your query and executes something. But it doesn't execute steps that correspond to SQL statement parts in the order they appear in an SQL statement.
Read about "relational query optimization", which could just as well be called "relational query implementation". Eg for Oracle.
Any language processor takes declarations and calls as input and implements the described behaviour in terms of internal data structures and operations, maybe through one or more levels of "intermediate code" running on a "virtual machine", eventually down to physical machines. But even just staying in the input language, SQL queries can be rearranged into other SQL queries that return the same value but perform significantly better under simple and general implementation assumptions. Just as you know that your question's queries always return the same thing for a given database, the DBMS can know. Part of how it knows is that there are many rules for taking a relational algebra expression and generating a different but same-valued expression. Certain rewrite rules apply under certain limited circumstances. There are rules that take into consideration SQL-level relational things like primary keys, unique columns, foreign keys and other constraints. Other rules use implementation-oriented SQL-level things like indexes and statistics. This is the "relational query rewriting" part of relational query optimization.
Even when two different but equivalent queries generate different plans, the cost can be similar because the plans are so similar. Here, for instance, both plans end with a UNIQUE operation (HASH UNIQUE or SORT UNIQUE) to produce the DISTINCT rows. (It would be interesting to know what the few top plans were for each of your queries. It is quite likely that those few are the same for both, but that the plan that is more directly derived from the particular input expression is the one that is offered when there's little difference.)
The way to get the DBMS to find good query plans is to write the most natural expression of a query that you can find.
I am new to Oracle and working with a fairly large database. I would like to perform a query that will select the desired columns, order by a certain column and also limit the results. According to everything I have read, the below query should be working but it is returning "ORA-00918: column ambiguously defined":
SELECT * FROM(SELECT * FROM EAI.EAI_EVENT_LOG e,
EAI.EAI_EVENT_LOG_MESSAGE e1 WHERE e.SOURCE_URL LIKE '%.XML'
ORDER BY e.REQUEST_DATE_TIME DESC) WHERE ROWNUM <= 20
Any suggestions would be greatly appreciated :D
The error message means your result set contains two columns with the same name. Each column in a query's projection needs to have a unique name. Presumably you have a column (or columns) with the same name in both EAI_EVENT_LOG and EAI_EVENT_LOG_MESSAGE.
You also want to join on that column. At the moment you are generating a cross join between the two tables. In other words, if you have a hundred records in EAI_EVENT_LOG and two hundred records in EAI_EVENT_LOG_MESSAGE, your result set will be twenty thousand records (without the rownum). This is probably not your intention.
"By switching to innerjoin, will that eliminate the error with the
current code?"
No, you'll still need to handle having two columns with the same name. Basically this comes from using SELECT * across multiple tables. SELECT * is bad practice. It's convenient, but it is always better to specify the exact columns you want in the query's projection. That way you can include (say) e.TRANSACTION_ID and exclude e1.TRANSACTION_ID, and avoid the ORA-00918 exception.
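A sketch of what that might look like - columns such as EVENT_ID and MESSAGE_TEXT are placeholders, since the real table definitions aren't shown:
SELECT *
FROM (SELECT e.EVENT_ID,
             e.SOURCE_URL,
             e.REQUEST_DATE_TIME,
             e1.MESSAGE_TEXT             -- list columns explicitly; no duplicate names
      FROM EAI.EAI_EVENT_LOG e
      JOIN EAI.EAI_EVENT_LOG_MESSAGE e1
        ON e1.EVENT_LOG_ID = e.EVENT_ID  -- hypothetical join key
      WHERE e.SOURCE_URL LIKE '%.XML'
      ORDER BY e.REQUEST_DATE_TIME DESC) s
WHERE ROWNUM <= 20;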
Maybe you have some columns with identical names in both the EAI_EVENT_LOG and EAI_EVENT_LOG_MESSAGE tables? Instead of SELECT *, list all the columns you want to select.
The other problem I see is that you are selecting from two tables but not joining them in the WHERE clause, hence the result set will be the cross product of those two tables.
You need to stop using SQL '89 implicit join syntax.
Not because it doesn't work, but because it is evil.
Right now you have a cross join, which in 99.9% of cases is not what you want.
Also, every sub-select needs to have its own alias.
SELECT * FROM
(SELECT e.*, e1.* FROM EAI.EAI_EVENT_LOG e
INNER JOIN EAI.EAI_EVENT_LOG_MESSAGE e1 on (......)
WHERE e.SOURCE_URL LIKE '%.XML'
ORDER BY e.REQUEST_DATE_TIME DESC) s WHERE ROWNUM <= 20
Please specify a join criterion on the dotted line.
Normally you do a join on a key field, e.g. ON (e.id = e1.event_id)
It's a bad idea to use SELECT *; it's better to specify exactly which fields you want:
SELECT e.field1 as customer_id
,e.field2 as customer_name
.....
This question already has answers here:
COUNT(*) vs. COUNT(1) vs. COUNT(pk): which is better? [duplicate]
I'm running a query like this in MSSQL2008:
select count(*)
from t1
inner join t2 on t1.id = t2.t1_id
inner join t3 on t1.id = t3.t1_id
Assume t1.id has a NOT NULL constraint. Since they're inner joins and t1.id can never be null, using count(t1.id) instead of count(*) should produce the exact same end result. My question is: Would the performance be the same?
I'm also wondering whether the joins could affect this. I realize that adding or removing a join will affect both performance and the length of the result set. Suppose that without changing the join pattern, you set count to target only one table. Would it make any difference? In other words, is there a difference between these two queries:
select count(*) from t1 inner join t2 on t1.id = t2.t1_id
select count(t1.*) from t1 inner join t2 on t1.id = t2.t1_id
COUNT(id) vs. COUNT(*) in MySQL answers this question for MySQL, but I couldn't find answers for MS-SQL specifically, and I can't find anything at all that takes the join factor into account.
NOTE: I tried to find this information on both Google and SO, but it was difficult to figure out how to word my search.
I tried a few SELECT COUNT(*) FROM MyTable vs. SELECT COUNT(SomeColumn) FROM MyTable comparisons with various table sizes, where SomeColumn is once the clustering key column, once covered by a non-clustered index, and once not indexed at all.
In all cases, with all table sizes (from 300,000 rows to 170 million rows), I never saw any difference in either speed or execution plan - in all cases, the COUNT is handled by a clustered index scan --> i.e. scanning the whole table, basically. If a non-clustered index is involved, then the scan is on that index - even when doing a SELECT COUNT(*)!
There doesn't seem to be any difference in speed or in how the rows are counted - to count them all, SQL Server just needs to scan the whole table - period.
Tests were done on SQL Server 2008 R2 Developer Edition
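If you want to repeat the comparison yourself, something along these lines works (MyTable and SomeColumn are placeholders):
-- Show I/O and timing statistics for each statement,
-- then also compare the actual execution plans.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM MyTable;
SELECT COUNT(SomeColumn) FROM MyTable;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;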
select count(*) will be slower as it attempts to fetch everything. Specifying a column (PK or any other indexed column) will speed up things as the query engine knows ahead of time what it is looking for. It'll also use an index as opposed to going against the table.