Join two tables in Oracle, concatenating results by id

I want to join two tables in Oracle and list the results concatenated if they have the same parent id in this way:
Table All
|id | other fields|service_id|
|------|-------------|----------|
| 827 |xxxxxxxx |null |
| 828 |xxxxxxxx |327 |
| 829 |xxxxxxxx |328 |
| 860 |xxxxxxxx |null |
| 861 |xxxxxxxx |326 |
Table Services
| id | parent_id |
| ---- | -------|
| 326 | 860 |
| 327 | 827 |
| 328 | 827 |
I want a query that returns this
|id | sub_id |
|------|---------|
| 827 | 828,829 |
| 828 | null |
| 829 | null |
| 860 | 861 |
| 861 | null |
Thanks a lot!

By joining the ALL table to SERVICES and then back to ALL via an alias, you can get the ID and sub IDs as a list of rows that then just needs to be combined from multiple rows into one column per ID.
This should get you the "raw data" that then needs to be aggregated...
TESTED: Rextester working example
NOTE: Since you have different "levels" of depth for your sub_id, I'm not really sure what you want, so 860 and 123 aren't included because they come from a completely different field than the source of 827.
SELECT A.ID as ID, A2.ID as Sub_ID
FROM ALL A
LEFT JOIN SERVICES S
  ON S.Parent_ID = A.ID
LEFT JOIN ALL A2
  ON A2.Service_ID = S.ID
Now... if we assume that you have a version of Oracle which supports LISTAGG:
SELECT A.ID as ID, LISTAGG(A2.ID, ',') WITHIN GROUP (ORDER BY A2.ID) as Sub_ID
FROM ALL A
LEFT JOIN SERVICES S
  ON S.Parent_ID = A.ID
LEFT JOIN ALL A2
  ON A2.Service_ID = S.ID
GROUP BY A.ID
Giving Us:
+----+-----+---------+
| | ID | SUB_ID |
+----+-----+---------+
| 1 | 827 | 828,829 | -- These are all.id's...
| 2 | 828 | NULL |
| 3 | 829 | NULL |
| 4 | 860 | NULL | --> Now why is 123 present as it's a service.id
| 5 | 861 | NULL |
+----+-----+---------+
**Note: ALL is a reserved word and either needs to be escaped (quoted), or, if your table name really isn't ALL, adjust accordingly.
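For example, if the table really is named ALL (which means it must have been created with a quoted identifier), the escaped form would look like this - just a sketch, and note that quoted names are case-sensitive:
SELECT A.ID, LISTAGG(A2.ID, ',') WITHIN GROUP (ORDER BY A2.ID) AS Sub_ID
FROM "ALL" A
LEFT JOIN SERVICES S ON S.Parent_ID = A.ID
LEFT JOIN "ALL" A2 ON A2.Service_ID = S.ID
GROUP BY A.ID;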
LISTAGG Docs

Related

Limit rows examined in Oracle

My table has millions of records. In this query below, can I make Oracle 12c examine the first X rows only instead of doing a full table scan?
The value of X, I imagine, should be Offset + Fetch Next, so in this case 15.
SELECT * FROM table OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY;
Thanks in advance
Edit 1
These are the tables involved and this is the actual query
Orders - This table has 113k records in my test DB (and over 8 million in the prod DB, as my original question mentioned)
--------------------------
| Id | SKUField1|SKUField2|
--------------------------
| 1 | Value1 | Value2 |
| 2 | Value2 | Value2 |
| 3 | Value1 | Value3 |
--------------------------
Products - This table has 2 million records in my test DB (the prod DB is similar)
---------------
| PId| SKU_NUM|
---------------
| 1 | Value1 |
| 2 | Value2 |
| 3 | Value3 |
---------------
Note that values of Orders.SKUField1 and Orders.SKUField2 come from the Products.SKU_NUM values
Actual Query:
SELECT /*+ gather_plan_statistics */ Id, PId, SKUField1, SKUField2, SKU_NUM
FROM Orders
LEFT JOIN (
    -- this inner query reduces size of Products from 2 million rows down to 1462 rows
    select * from Products where SKU_NUM in (
        select SKUField1 from Orders
    )
) p1 ON SKUField1 = p1.SKU_NUM
LEFT JOIN (
    -- this inner query reduces size of table B from 2 million rows down to 459 rows
    select * from Products where SKU_NUM in (
        select SKUField2 from Orders
    )
) p4 ON SKUField2 = p4.SKU_NUM
OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY
Execution Plan:
--------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Time | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 10 |00:00:00.06 | 8013 | | | |
|* 1 | VIEW | | 1 | 00:00:01 | 10 |00:00:00.06 | 8013 | | | |
|* 2 | WINDOW NOSORT STOPKEY | | 1 | 00:00:01 | 15 |00:00:00.06 | 8013 | 27M| 1904K| |
|* 3 | HASH JOIN RIGHT OUTER | | 1 | 00:00:01 | 15 |00:00:00.06 | 8013 | 1162K| 1162K| 1344K (0)|
| 4 | VIEW | | 1 | 00:00:01 | 1462 |00:00:00.04 | 6795 | | | |
| 5 | NESTED LOOPS | | 1 | 00:00:01 | 1462 |00:00:00.04 | 6795 | | | |
| 6 | NESTED LOOPS | | 1 | 00:00:01 | 1462 |00:00:00.04 | 5333 | | | |
| 7 | SORT UNIQUE | | 1 | 00:00:01 | 1469 |00:00:00.04 | 3010 | 80896 | 80896 |71680 (0)|
| 8 | TABLE ACCESS FULL | Orders | 1 | 00:00:01 | 113K|00:00:00.02 | 3010 | | | |
|* 9 | INDEX UNIQUE SCAN | UIX_Product_SKU_NUM | 1469 | 00:00:01 | 1462 |00:00:00.01 | 2323 | | | |
| 10 | TABLE ACCESS BY INDEX ROWID | Products | 1462 | 00:00:01 | 1462 |00:00:00.01 | 1462 | | | |
|* 11 | HASH JOIN RIGHT OUTER | | 1 | 00:00:01 | 15 |00:00:00.02 | 1218 | 1142K| 1142K| 1335K (0)|
| 12 | VIEW | | 1 | 00:00:01 | 459 |00:00:00.02 | 1213 | | | |
| 13 | NESTED LOOPS | | 1 | 00:00:01 | 459 |00:00:00.02 | 1213 | | | |
| 14 | NESTED LOOPS | | 1 | 00:00:01 | 459 |00:00:00.02 | 754 | | | |
| 15 | SORT UNIQUE | | 1 | 00:00:01 | 462 |00:00:00.02 | 377 | 24576 | 24576 |22528 (0)|
| 16 | INDEX FAST FULL SCAN | Orders_SKUField2_IDX6 | 1 | 00:00:01 | 113K|00:00:00.01 | 377 | | | |
|* 17 | INDEX UNIQUE SCAN | UIX_Product_SKU_NUM | 462 | 00:00:01 | 459 |00:00:00.01 | 377 | | | |
| 18 | TABLE ACCESS BY INDEX ROWID| Products | 459 | 00:00:01 | 459 |00:00:00.01 | 459 | | | |
| 19 | TABLE ACCESS FULL | Orders | 1 | 00:00:01 | 15 |00:00:00.01 | 5 | | | |
--------------------------------------------------------------------------------------------------------------------------------------------------
Hence, based on the "A-Rows" column values for row Ids 8 and 16 in the execution plan, it seems like there are full table scans on the Orders table (though row Id 16 at least seems to be using an index). So my question is: is it true that there is a full table scan on the Orders table even though I am using OFFSET/FETCH NEXT?
Although your FETCH clause may use a full table scan, Oracle will still only fetch the first X rows from the table.
In the following example, the "TABLE ACCESS FULL" operation does start to read the entire table, but it gets cut off part of the way through by the "WINDOW NOSORT STOPKEY" operation. Not all full table scans actually scan the full table. You would see similar behavior if your code ended with WHERE ROWNUM <= 50.
CREATE TABLE some_table AS SELECT * FROM all_objects;
EXPLAIN PLAN FOR SELECT * FROM some_table OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY;
SELECT * FROM TABLE(dbms_xplan.display);
Plan hash value: 2559837639
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 15 | 7410 | 2 (0)| 00:00:01 |
|* 1 | VIEW | | 15 | 7410 | 2 (0)| 00:00:01 |
|* 2 | WINDOW NOSORT STOPKEY| | 15 | 2010 | 2 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | SOME_TABLE | 15 | 2010 | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=15 AND
"from$_subquery$_002"."rowlimit_$$_rownumber">5)
2 - filter(ROW_NUMBER() OVER ( ORDER BY NULL )<=15)
The performance implications get more complicated if you also want to order the results. If that is the case, you may want to post the full query and execution plan.
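For illustration, a sketch against the same some_table created above (object_name is one of its columns, inherited from all_objects): with an ORDER BY, Oracle generally has to read and sort the candidate rows before the stopkey can discard any, unless an index already provides the requested order.
SELECT *
FROM some_table
ORDER BY object_name
OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY;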
(EDIT: 2022-09-25)
Yes, there is a full table scan on the ORDERS table happening on line 8 of the execution plan. As you mentioned, you can look at the "A-rows" column to tell what's really happening.
But the third full table scan of ORDERS, on line 19, is not a "full" full table scan. The operation "WINDOW NOSORT STOPKEY" stops that full table scan as soon as the 15 necessary rows are read. So the FETCH syntax is helping at least a little.
Applying a FETCH to a query does not mean that every single table will be limited. Although, in your query, it does seem like there ought to be a way to reduce the full table scans. Perhaps an index on SKUField1 would help?
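For example, something along these lines might let the probe of SKUField1 use an index instead of the full scan on plan line 8 (the index name is made up; verify the effect against the actual plan):
CREATE INDEX orders_skufield1_idx ON Orders (SKUField1);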
Since Oracle (before 12c) doesn't provide something like LIMIT or TOP, you can build it yourself like the following.
What is happening here: the inner query orders the rows, and the outer query keeps only the first 10 of them via ROWNUM; you can still use any other clauses like WHERE inside the inner query.
SELECT * FROM (
    SELECT * FROM Customers ORDER BY CustomerID
)
WHERE ROWNUM <= 10
The full article about this topic can be found at Oracle-Fetch.
I am using Online Oracle, so you can try it from your end; please let me know if you still have a problem.
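For completeness, on Oracle 12c and later (which the linked article also covers) the row-limiting clause expresses the same thing directly; a minimal sketch against the same hypothetical Customers table:
SELECT *
FROM Customers
ORDER BY CustomerID
FETCH FIRST 10 ROWS ONLY;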

Explanation of Oracle PARTITION BY vs GROUP BY for similar results

I have the following table (Marks):
firstname lastname Mark
------------------------------
arun prasanth 40
ann antony 45
sruthy abc 41
new abc 47
arun prasanth 45
arun prasanth 49
ann antony 49
And I would like to add a column that flags whether a record with specific columns occurs more than once. This is the desired result:
firstname lastname Mark MULTI_FLAG
----------------------------------------------
arun prasanth 40 1
ann antony 45 1
sruthy abc 41 0
new abc 47 0
arun prasanth 45 1
arun prasanth 49 1
ann antony 49 1
I can get the result with the following GROUP BY query:
SELECT M1.firstname
      ,M1.lastname
      ,M1.Mark
      ,M2.MULTI_COUNT
FROM Marks M1
JOIN (SELECT firstname, lastname, CASE WHEN COUNT(*) > 1 THEN 1 ELSE 0 END AS MULTI_COUNT
      FROM Marks
      GROUP BY firstname, lastname) M2
  ON M2.firstname = M1.firstname AND M2.lastname = M1.lastname;
Or by this much prettier PARTITION BY query:
SELECT
    firstname,
    lastname,
    CASE WHEN COUNT(*) OVER (PARTITION BY firstname, lastname) > 1
         THEN 1 ELSE 0 END AS MULTI_FLAG
FROM Marks
Running the GROUP BY query on a similar large table returned in:
34 m 56 s 595 ms
Running the PARTITION BY query on a similar large table returned in:
First run: 55 m 47 s 851 ms
Second run: 36 m 46 s 95 ms
I would be interested in knowing:
The best way to achieve my results
What accounts for the performance difference.
EDIT: How to read the query plan.
EDIT:
Oracle Version
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
"CORE 11.2.0.3.0 Production"
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
PARTITION BY Plan
PLAN_TABLE_OUTPUT
Plan hash value: 3822227444
---------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 668K| 90M| | 90429 (1)| 00:18:06 |
| 1 | WINDOW SORT | | 668K| 90M| 98M| 90429 (1)| 00:18:06 |
|* 2 | HASH JOIN RIGHT OUTER | | 668K| 90M| | 69340 (1)| 00:13:53 |
| 3 | TABLE ACCESS FULL | COUNTRY_REGION_MAPPINGS | 177 | 4779 | | 3 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | | | | | |
| 5 | NESTED LOOPS | | 377K| 41M| | 69335 (1)| 00:13:53 |
| 6 | MAT_VIEW ACCESS FULL | PROJINFO_MAX_ITER_MVW | 17713 | 328K| | 782 (1)| 00:00:10 |
|* 7 | INDEX RANGE SCAN | Q_CLIN_ASSUM_BYCOUN_PK | 1 | | | 3 (0)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID| Q_CLINICAL_ASSUM_BYCOUNTRY | 21 | 2016 | | 4 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access(UPPER("CRM"."COUNTRY"(+))=UPPER("QCAB"."TRIAL_COUNTRY"))
7 - access("PMIM"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "PMIM"."CONTRACTNUM"="QCAB"."CONTRACTNUM"
AND "PMIM"."ITERATION"="QCAB"."ITERATION")
filter(UPPER("QCAB"."SHEET_LOC") LIKE '%COUNTRY ASSUMPTIONS%' OR UPPER("QCAB"."SHEET_LOC") LIKE
'INPUT%')
GROUP BY Plan
PLAN_TABLE_OUTPUT
Plan hash value: 648231064
------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 912 | 2052K| | 226K (1)| 00:45:22 |
|* 1 | HASH JOIN | | 912 | 2052K| | 226K (1)| 00:45:22 |
| 2 | TABLE ACCESS FULL | COUNTRY_REGION_MAPPINGS | 177 | 4779 | | 3 (0)| 00:00:01 |
|* 3 | HASH JOIN | | 89667 | 194M| 45M| 226K (1)| 00:45:22 |
| 4 | NESTED LOOPS | | | | | | |
| 5 | NESTED LOOPS | | 377K| 41M| | 69335 (1)| 00:13:53 |
| 6 | MAT_VIEW ACCESS FULL | PROJINFO_MAX_ITER_MVW | 17713 | 328K| | 782 (1)| 00:00:10 |
|* 7 | INDEX RANGE SCAN | Q_CLIN_ASSUM_BYCOUN_PK | 1 | | | 3 (0)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID | Q_CLINICAL_ASSUM_BYCOUNTRY | 21 | 2016 | | 4 (0)| 00:00:01 |
| 9 | VIEW | | 668K| 1377M| | 86518 (1)| 00:17:19 |
| 10 | HASH GROUP BY | | 668K| 72M| 80M| 86518 (1)| 00:17:19 |
|* 11 | HASH JOIN RIGHT OUTER | | 668K| 72M| | 69340 (1)| 00:13:53 |
| 12 | TABLE ACCESS FULL | COUNTRY_REGION_MAPPINGS | 177 | 2478 | | 3 (0)| 00:00:01 |
| 13 | NESTED LOOPS | | | | | | |
| 14 | NESTED LOOPS | | 377K| 35M| | 69335 (1)| 00:13:53 |
| 15 | MAT_VIEW ACCESS FULL | PROJINFO_MAX_ITER_MVW | 17713 | 328K| | 782 (1)| 00:00:10 |
|* 16 | INDEX RANGE SCAN | Q_CLIN_ASSUM_BYCOUN_PK | 1 | | | 3 (0)| 00:00:01 |
| 17 | TABLE ACCESS BY INDEX ROWID| Q_CLINICAL_ASSUM_BYCOUNTRY | 21 | 1701 | | 4 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("R2"."TRIAL_COUNTRY_CD"="CRM"."COUNTRY_CD" AND
UPPER("CRM"."COUNTRY")=UPPER("QCAB"."TRIAL_COUNTRY"))
3 - access("R2"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "R2"."ITERATION"="QCAB"."ITERATION" AND
"R2"."CONTRACTNUM"="QCAB"."CONTRACTNUM" AND "R2"."ASSUMPTION"="QCAB"."ASSUMPTION")
7 - access("PMIM"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "PMIM"."CONTRACTNUM"="QCAB"."CONTRACTNUM" AND
"PMIM"."ITERATION"="QCAB"."ITERATION")
filter(UPPER("QCAB"."SHEET_LOC") LIKE '%COUNTRY ASSUMPTIONS%' OR UPPER("QCAB"."SHEET_LOC") LIKE 'INPUT%')
11 - access(UPPER("CRM"."COUNTRY"(+))=UPPER("QCAB"."TRIAL_COUNTRY"))
16 - access("PMIM"."OPPORTUNITYNUM"="QCAB"."OPPORTUNITYNUM" AND "PMIM"."CONTRACTNUM"="QCAB"."CONTRACTNUM" AND
"PMIM"."ITERATION"="QCAB"."ITERATION")
filter(UPPER("QCAB"."SHEET_LOC") LIKE '%COUNTRY ASSUMPTIONS%' OR UPPER("QCAB"."SHEET_LOC") LIKE 'INPUT%')
Typically you start with the analytic function count(*), which leads to compact SQL.
The drawback of this approach is that the data must be sorted (see the WINDOW SORT operation). The GROUP BY approach avoids the sorting, as HASH GROUP BY may be used, which can lead to better performance.
Your example is a bit more involved, as you do not use a table but a view that joins three tables - this join is performed twice, once for the GROUP BY and once for the detail data, which is of course not optimal.
So I'll start with the analytic function version of the query (possibly with a PARALLEL option).
If you want to try the GROUP BY approach, a lightweight version is possible:
1) group only the duplicated keys
2) make an OUTER JOIN to assign the MULTI_FLAG
Example with execution plan below - a simple test with your data:
with dups as (
    select firstname, lastname from tmp
    group by firstname, lastname
    having count(*) > 1)
select tmp.FIRSTNAME, tmp.LASTNAME, tmp.MARK,
       case when dups.firstname is not NULL then 1 else 0 end as MULTI_FLAG
from tmp
left outer join dups on tmp.firstname = dups.firstname and tmp.lastname = dups.lastname;
You still need to access your view twice, but the final join will be faster (especially if you have only a small number of duplicated keys).
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 105K| 26M| | 1673 (1)| 00:00:21 |
|* 1 | HASH JOIN RIGHT OUTER| | 105K| 26M| 11M| 1673 (1)| 00:00:21 |
| 2 | VIEW | | 105K| 10M| | 128 (4)| 00:00:02 |
|* 3 | FILTER | | | | | | |
| 4 | HASH GROUP BY | | 105K| 10M| | 128 (4)| 00:00:02 |
| 5 | TABLE ACCESS FULL| TMP | 105K| 10M| | 125 (1)| 00:00:02 |
| 6 | TABLE ACCESS FULL | TMP | 105K| 15M| | 125 (1)| 00:00:02 |
--------------------------------------------------------------------------------------

Explain Plan in Oracle - how do I detect the most efficient query?

Intro
I am writing a query to compare 2 different oracle database tables containing hashes.
I want to find which hashes from location 1, have not been migrated over to location 2.
I have three different queries, and have written explain plan for statements for them.
However, the results don't tell me that much.
Question
How do I find which is the most efficient and fastest?
Current Guess
My suspicion however is that the first query is the fastest since it makes a one-off use of the remote link. But this is just a guess that is not supported by actual results.
Code
--------------------------
EXPLAIN PLAN
SET statement_id = 'ex_plan1' FOR
select * from document doc
left outer join migrated_document#V2_PROD migrated
  on doc.hash = migrated.document_hash AND migrated.document_hash is null;
SELECT PLAN_TABLE_OUTPUT
FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, 'ex_plan1','BASIC'));
---------------------------------------------------
| Id | Operation | Name |
---------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | HASH JOIN RIGHT OUTER| |
| 2 | REMOTE | MIGRATED_DOCUMENT |
| 3 | TABLE ACCESS FULL | DOCUMENT |
---------------------------------------------------
--------------------------
EXPLAIN PLAN
SET statement_id = 'ex_plan2' FOR
select * from document doc
where not exists (
  select 1 from migrated_document#V2_PROD migrated
  where migrated.document_hash = doc.HASH
);
SELECT PLAN_TABLE_OUTPUT
FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, 'ex_plan2','BASIC'));
------------------------------------------------
| Id | Operation | Name |
------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | FILTER | |
| 2 | TABLE ACCESS FULL| DOCUMENT |
| 3 | REMOTE | MIGRATED_DOCUMENT |
------------------------------------------------
--------------------------
EXPLAIN PLAN
SET statement_id = 'ex_plan3' FOR
select * from document doc
where doc.hash not in (
  select migrated.document_hash from migrated_document#V2_PROD migrated
);
SELECT PLAN_TABLE_OUTPUT
FROM TABLE(DBMS_XPLAN.DISPLAY(NULL, 'ex_plan3','BASIC'));
------------------------------------------------
| Id | Operation | Name |
------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | NESTED LOOPS ANTI | |
| 2 | TABLE ACCESS FULL| DOCUMENT |
| 3 | REMOTE | MIGRATED_DOCUMENT |
------------------------------------------------
--------------------------
Update
I updated the explain plan statements to get more results.
Due to the remote operation... could something cost less but be slower?
If I am reading the data correctly it seems that option 2 is best.
But I still think option 1 is quicker.
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Inst |IN-OUT|
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 105K| 51M| 194 | | |
| 1 | HASH JOIN RIGHT OUTER| | 105K| 51M| 194 | | |
| 2 | REMOTE | MIGRATED_DOCUMENT | 1 | 275 | 2 | V2_MN~ | R->S |
| 3 | TABLE ACCESS FULL | DOCUMENT | 105K| 23M| 192 | | |
-------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Inst |IN-OUT|
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 105K| 23M| 104K| | |
| 1 | FILTER | | | | | | |
| 2 | TABLE ACCESS FULL| DOCUMENT | 105K| 23M| 192 | | |
| 3 | REMOTE | MIGRATED_DOCUMENT | 1 | 50 | 1 | V2_MN~ | R->S |
----------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Inst |IN-OUT|
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 105K| 29M| 526 | | |
| 1 | NESTED LOOPS ANTI | | 105K| 29M| 526 | | |
| 2 | TABLE ACCESS FULL| DOCUMENT | 105K| 23M| 192 | | |
| 3 | REMOTE | MIGRATED_DOCUMENT | 1 | 50 | 0 | V2_MN~ | R->S |
----------------------------------------------------------------------------------------
In a perfect world, you would choose the plan with the lowest cost. However, I do not think your first query does what you want. It looks to me like no rows will join; you should be using a filter predicate (WHERE) rather than a join predicate (ON).
Instead of
select * from document doc
left outer join migrated_document#V2_PROD migrated
on doc.hash = migrated.document_hash
AND migrated.document_hash is null
it should be
select * from document doc
left outer join migrated_document#V2_PROD migrated
on doc.hash = migrated.document_hash
WHERE migrated.document_hash is null

Auto increment without sequence

I have a SELECT statement that generates a set of values, and I then want to insert that set of values into another table. My concern is that inside the SELECT I am using one more SELECT clause ((select max(org_id)+1 from org)) where I'm trying to get the max value and increment it by one, but I'm not getting an incremented value; I get the same value for every row, as you can see in the column id_limit.
select abc,abc1,abc3,abc4,(select max(org_id)+1 from org) as id_limit from xyz
Current output:
-----------------------------------------------------------------
| abc | abc1 | abc3 | abc4 | id_limit |
----------------------------------------------------------------|
| BUSINESS_UNIT | 0 | 100 | London | 6 |
| BUSINESS_UNIT | 0 | 200 | Sydney | 6 |
| BUSINESS_UNIT | 0 | 300 | Kiev | 6 |
-----------------------------------------------------------------
This is the expected output:
-----------------------------------------------------------------
| abc | abc1 | abc3 | abc4 | id_limit |
----------------------------------------------------------------|
| BUSINESS_UNIT | 0 | 100 | London | 6 |
| BUSINESS_UNIT | 0 | 200 | Sydney | 7 |
| BUSINESS_UNIT | 0 | 300 | Kiev | 8 |
-----------------------------------------------------------------
Yes, in Oracle 12 you can use an identity column:
create table foo (
id number generated by default on null as identity
);
https://oracle-base.com/articles/12c/identity-columns-in-oracle-12cr1
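A quick usage sketch for the identity column above (the inserted values are only illustrative):
insert into foo (id) values (default); -- identity value is generated
insert into foo (id) values (null);    -- "on null" also triggers generation
insert into foo (id) values (100);     -- "by default" still allows explicit values
select id from foo;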
In previous versions you use sequence/trigger as explained here:
How to create id with AUTO_INCREMENT on Oracle?
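For reference, a minimal sketch of that pre-12c sequence/trigger pattern (the names foo_seq and foo_bi are made up; see the linked question for details):
create sequence foo_seq;

create or replace trigger foo_bi
before insert on foo
for each row
begin
  if :new.id is null then
    select foo_seq.nextval into :new.id from dual;
  end if;
end;
/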

INSERT trigger with INSERT INTO WHERE condition

I am working on a trigger which needs INSERT INTO with WHERE logic.
I have three tables.
Absence_table:
-----------------------------
| user_id | absence_reason |
-----------------------------
| 1234567 | 40 |
| 1234567 | 50 |
| 1213 | 40 |
| 1314 | 20 |
| 1111 | 20 |
-----------------------------
company_table:
-----------------------------
| user_id | company_id |
-----------------------------
| 1234567 | 10201 |
| 1213 | 10200 |
| 1314 | 10202 |
| 1111 | 10200 |
-----------------------------
employment_table:
--------------------------------------
| user_id | emp_type | emp_no |
--------------------------------------
| 1234567 | Int | 1 |
| 1213 | Int | 2 |
| 1314 | Int | 3 |
| 1111 | Ext | 4 |
--------------------------------------
and finally I have the table out, where data should go only for users who have emp_type = Int in employment_table and company_id = 10200 in company_table
out:
--------------------------------
| employee_id | absence_reason |
--------------------------------
| 1 | 40 |
| 1 | 50 |
| 2 | 40 |
| 3 | 20 |
--------------------------------
Here is my trigger:
CREATE OR REPLACE TRIGGER "INOUT"."ABSENCE_TRIGGER"
AFTER INSERT ON absence_table
FOR EACH ROW
DECLARE
BEGIN
CASE
WHEN INSERTING THEN
INSERT INTO out (absence_reason, employee_id)
VALUES (:NEW.absence_reason, (SELECT employee_id FROM employment_table WHERE user_id = :NEW.user_id)
WHERE user_id IN
(SELECT user_id FROM employment_table WHERE employment_type = 'INT')
AND user_id IN
(SELECT user_id FROM company_table WHERE company_id = '10200');
END CASE;
END absence_trigger;
It is obviously not working and I can't figure out what I should do to make it work. Any suggestions?
change the insert to this:
insert into out (absence_reason, employee_id)
select :NEW.absence_reason, e.emp_no
from employment_table e
inner join company_table c
  on c.user_id = e.user_id
where e.user_id = :NEW.user_id
  and e.emp_type = 'INT'
  and c.company_id = '10200';
which should work. Note you had emp_no in your sample structure yet employee_id in the trigger insert too; I've assumed emp_no is right. Also emp_type vs employment_type.
Finally, in your trigger you have company_id in quotes. Is it really a VARCHAR2? If so, OK; if not, don't use quotes.
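Putting it together, the whole trigger would then look roughly like this (a sketch using the object names from the question; the CASE WHEN INSERTING block is dropped because the trigger only fires on INSERT, and note the sample employment_table data stores 'Int', so match whatever case is actually in emp_type):
CREATE OR REPLACE TRIGGER "INOUT"."ABSENCE_TRIGGER"
AFTER INSERT ON absence_table
FOR EACH ROW
BEGIN
  INSERT INTO out (absence_reason, employee_id)
  SELECT :NEW.absence_reason, e.emp_no
  FROM employment_table e
  INNER JOIN company_table c
    ON c.user_id = e.user_id
  WHERE e.user_id = :NEW.user_id
    AND e.emp_type = 'Int'        -- case as shown in the sample data
    AND c.company_id = '10200';   -- keep the quotes only if company_id really is a VARCHAR2
END absence_trigger;
/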
The parentheses are not balanced. The one for values is not closed. This is the cause of your specific error, but #DazzaL's answer looks like the correct solution.
