first row vs next row vs rownum - Oracle

These three queries return the same result set but use two different techniques. Is there an advantage to using one over the other? The plans and execution times are in the same cost range.
Select *
from user_tab_columns
order by data_length desc, table_name, column_name
fetch first 5 rows only

Select *
from user_tab_columns
order by data_length desc, table_name, column_name
fetch next 5 rows only

select *
from (
Select *
from user_tab_columns
order by data_length desc, table_name, column_name
)
where rownum <= 5

The keywords FIRST and NEXT as used in the FETCH clause are perfect substitutes for each other and can be used interchangeably; this is stated clearly in the documentation. So you really only have two queries there, not three (the first two are identical).
The first query is easier to write and maintain than the last query. On the other hand, it is only available in Oracle 12.1 and later versions; in Oracle 11.2 and earlier, the only option is your last query.
The FETCH clause is also more flexible; for example, it allows you to specify WITH TIES (to include more than 5 rows if, say, rows 4, 5, 6 and 7 are all tied on the ORDER BY criteria), as in the sketch below.
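A minimal sketch of the WITH TIES variant, assuming Oracle 12.1+ and reusing the dictionary view from the question (this query is not part of the original answer):

-- Returns 5 rows, plus any further rows that tie with the 5th row
-- on the ORDER BY keys.
select *
from user_tab_columns
order by data_length desc, table_name, column_name
fetch first 5 rows with ties;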

Related

How to use 50 thousand IDs in a WHERE or JOIN clause in Oracle PL/SQL for a select query?

I have a list of 50 thousand receipt IDs (hard-coded values). I want to apply these 50 thousand IDs in a WHERE condition or a join operation. I used the WITH clause below to build a temporary result set that collects those 50 thousand IDs, then joined it in the query for filtering.
with temp_receiptIds(receiptId)
as
(
select 'M0000001' from dual
union
select 'M0000002' from dual
union
select 'M0000003' from dual
union
select 'M0000004' from dual
..
..
...
union
select 'M0049999' from dual
union
select 'M0050000' from dual
)
select sal.receiptId, prd.product_name, prd.product_price, sal.sales_date, sal.seller_name
from product prd
join sales sal on prd.product_id=sal.product_id
join temp_receiptIds tmp on tmp.receiptId=sal.receiptId
Whenever I run the above join query to extract data requested by business people, it takes about 8 minutes to fetch the results on the production server.
Is my approach correct? Is there a simpler approach that would perform better on the production server?
Please note that the production database is used by customers every second. Since it is so busy, can I run this query on it directly, or will it slow down the customer-facing website that calls this database every second? Correct answers would be greatly appreciated! Thanks
Why wouldn't you store those receipt IDs in a table?
create table receiptids as
with temp_receiptIds(receiptId)
as
(
select 'M0000001' from dual
union all --> "union ALL" instead of "union"
...
)
select * from temp_receiptids;
Index it:
create index i1recid on receiptids (receiptId);
See how that query now behaves.
If you - for some reason - can't do that, see whether UNION ALL within the CTE does any good. For 50,000 rows, it could make a difference.
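For example, a minimal sketch of the original query with the CTE switched to UNION ALL (table and column names as in the question; the long value list is elided):

with temp_receiptIds(receiptId) as
(
select 'M0000001' from dual union all
select 'M0000002' from dual union all
...
select 'M0050000' from dual
)
select sal.receiptId, prd.product_name, prd.product_price, sal.sales_date, sal.seller_name
from product prd
join sales sal on prd.product_id = sal.product_id
join temp_receiptIds tmp on tmp.receiptId = sal.receiptId;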

Strange Behaviour Related to ORA-01400 Error [duplicate]

Oracle 12 introduced a nice feature (which should have been there long ago, btw!): identity columns. So here's a script:
CREATE TABLE test (
a INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
b VARCHAR2(10)
);
-- Ok
INSERT INTO test (b) VALUES ('x');
-- Ok
INSERT INTO test (b)
SELECT 'y' FROM dual;
-- Fails
INSERT INTO test (b)
SELECT 'z' FROM dual UNION ALL SELECT 'zz' FROM DUAL;
The first two inserts run without issues, providing values of 1 and 2 for 'a'. But the third one fails with ORA-01400: cannot insert NULL into ("DEV"."TEST"."A"). Why did this happen? A bug? Nothing like this is mentioned in the documentation section about identity column restrictions. Or am I just doing something wrong?
I believe the query below works; I haven't tested it!
INSERT INTO Test (b)
SELECT * FROM
(
SELECT 'z' FROM dual
UNION ALL
SELECT 'zz' FROM dual
);
Not sure if it helps you in any way.
For GENERATED ALWAYS AS IDENTITY, Oracle internally uses a sequence, and the rules that apply to a general sequence apply here as well.
NEXTVAL is used to fetch the next available sequence value, and it is a pseudocolumn.
The below is from the Oracle documentation:
You cannot use CURRVAL and NEXTVAL in the following constructs:
A subquery in a DELETE, SELECT, or UPDATE statement
A query of a view or of a materialized view
A SELECT statement with the DISTINCT operator
A SELECT statement with a GROUP BY clause or ORDER BY clause
A SELECT statement that is combined with another SELECT statement with the UNION, INTERSECT, or MINUS set operator
The WHERE clause of a SELECT statement
DEFAULT value of a column in a CREATE TABLE or ALTER TABLE statement
The condition of a CHECK constraint
The subquery and set-operation rules above should answer your question.
As for the reason for the NULL: when a pseudocolumn (e.g. NEXTVAL) is used with a set operation or in any of the other constructs mentioned above, the output is NULL, because Oracle cannot evaluate it while combining multiple SELECTs.
Let us look at the query below:
select rownum from dual
union all
select rownum from dual
The result is:
ROWNUM
1
1

ORA-00907: missing right parenthesis when create varray

In my program there are a lot of situations where I need to get additional information about known IDs. So I have a list of IDs, which may be very long (for example, 100,000 elements).
How can I use this list and pass it to Oracle in a SQL query without using temp tables?
Now I try to use a collection:
CREATE TYPE TEST_VARRAY IS VARRAY(5000) OF NUMBER(18);
SELECT G.ID, G.NAME FROM ANY_TABLE G
WHERE G.ID IN
(
SELECT COLUMN_VALUE FROM TABLE(
NEW TEST_VARRAY
(0,1,2,3... and so on ...,995,996,997,998,999)
)
);
There are 1000 numbers. When I try to execute this query, the error ORA-00907: missing right parenthesis appears! But if I delete the first 0 (so we have 999 numbers), the SQL executes fine.
What is the problem here?
There is a limit in Oracle's IN clause:
A comma-delimited list of expressions can contain no more than 1000 expressions. A comma-delimited list of sets of expressions can contain any number of sets, but each set can contain no more than 1000 expressions.
See the Oracle SQL Language Reference section on expression lists.
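As an illustration of the "sets of expressions" wording, a common workaround (just a sketch, not part of the original answer) is to pair each value with a dummy constant, because the 1000-expression cap does not apply to a list of sets:

SELECT G.ID, G.NAME
FROM ANY_TABLE G
WHERE (G.ID, 0) IN (
(0, 0), (1, 0), (2, 0), /* ... and so on ... */ (998, 0), (999, 0)
);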
In my opinion, you are misusing collections; at least, I am not sure that something like what you did is a good idea.
As far as I understand, you generate this query before running it, so what is the problem with doing it like this?
with ids as (select /*+ materialize */ 1 id from dual union all
select 2 from dual union all
select 3 from dual union all
select 4 from dual union all
/* repeat with the ids you need */
select 9999 from dual)
select *
from yourTable, ids
where yourTable.id = ids.id;
And that's it! No limitations, pure SQL only. I have added the materialize hint to make sure the CTE does not become a performance problem, but I think it can be skipped.
No temporary tables, no collections, nothing to create and support. Just SQL.
If you move the ids out of the WITH clause and into the FROM clause, it will work in any RDBMS (I guess), as in the sketch below.
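A minimal sketch of that inline-view form (with a shortened list; yourTable and its id column are the question's placeholders):

select yourTable.*
from yourTable
join (select 1 id from dual union all
select 2 from dual union all
/* repeat with the ids you need */
select 9999 from dual) ids
on yourTable.id = ids.id;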

A fast query that selects the number of rows in each table

I want a query that selects the number of rows in each table, but the statistics are NOT up to date, so such a query will not be accurate:
select table_name, num_rows from user_tables
I want to cover several schemas, and each schema has a minimum of 500 tables, some of which contain a lot of columns. It would take me days if I wanted to update the statistics for them.
On the Ask Tom site, he suggests a function built around this query:
execute immediate 'select count(*) from ' || p_tname into l_columnValue;
Such a query with count(*) is really slow and will not give me fast results.
Is there a query that can give me the number of rows in each table in a fast way?
You said in a comment that you want to delete (drop?) empty tables. If you don't want an exact count but only want to know if a table is empty you can do a shortcut count:
select count(*) from table_name where rownum < 2;
The optimiser will stop when it reaches the first row - the execution plan shows a 'count stopkey' operation - so it will be fast. It will return zero for an empty table, and one for a table with any data - you have no idea how much data, but you don't seem to care.
You still have a slight race condition between the count and the drop, of course.
This seems like a very odd thing to want to do - either your application uses the table, in which case dropping it will break something even if it's empty; or it doesn't, in which case it shouldn't matter whether it has any (presumably redundant) data, and it can be dropped regardless. If you think there might be confusion, that sounds like your source (including DDL) control needs some work, maybe?
To check if either table in two schemas have a row, just count from both of them; either with a union:
select max(c) from (
select count(*) as c from schema1.table_name where rownum < 2
union all
select count(*) as c from schema2.table_name where rownum < 2
);
... or with greatest and two sub-selects, e.g.:
select greatest(
(select count(*) from schema1.table_name where rownum < 2),
(select count(*) from schema2.table_name where rownum < 2)
) from dual;
Either would return one if either table has any rows, and would only return zero if they were both empty.
Full disclosure: I had originally suggested a query that specifically counts a column that is (a) indexed and (b) not null. @AlexPoole and @JustinCave pointed out (please see their comments below) that Oracle will optimize a COUNT(*) to do this anyway. As such, this answer has been altered significantly.
There's a good explanation here for why User_Tables shouldn't be used for accurate row counts, even when statistics are up to date.
If your tables have indexes which can be used to speed up the count by doing an index scan rather than a table scan, Oracle will use them. This will make the counts faster, though not by any means instantaneous. That said, this is the only way I know to get an accurate count.
To check for empty (zero row) tables, please use the answer posted by Alex Poole.
You could make a table to hold the counts of each table. Then set a trigger to run on INSERT for each of the tables you're counting that updates the main table.
You'd also need to include a trigger for DELETE. A sketch of the idea is below.
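A minimal sketch for a single counted table (the counts table, trigger name and the orders table are made-up placeholders, not from the original answer):

create table row_counts (
  table_name varchar2(30) primary key,
  row_count  number not null
);

-- Seed the counter once, e.g.: insert into row_counts select 'ORDERS', count(*) from orders;
create or replace trigger trg_orders_rowcount
after insert or delete on orders
for each row
begin
  if inserting then
    update row_counts set row_count = row_count + 1 where table_name = 'ORDERS';
  else
    update row_counts set row_count = row_count - 1 where table_name = 'ORDERS';
  end if;
end;
/

Note that this serializes concurrent inserts and deletes on the counted table against a single row_counts row, so it trades write concurrency for an instant read.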

Does Oracle's ROWNUM build the whole table before it extract the rows you want?

I need to make a navigation panel that shows only a subset of a possibly large result set. This subset is the 20 records before and the 20 records after the current record set. As I navigate the results through the navigation panel, I'll apply a sliding-window design using ROWNUM to get the next subset. My question is: does Oracle's ROWNUM build the whole table before it extracts the rows you want, or is it intelligent enough to generate only the rows I need? I googled and couldn't find an explanation of this.
The pre-analytic-function method for doing this would be:
select col1, col2 from (
select col1, col2, rownum rn from (
select col1, col2 from the_table order by sort_column
)
where rownum <= 20
)
where rn > 10
The Oracle optimizer will recognize in this case that it only needs to get the top 20 rows to satisfy the inner query. It will likely have to look at all the rows (unless, say, the sort column is indexed in a way that lets it avoid the sort altogether) but it will not need to do a full sort of all the rows.
Your solution will not work (as Bob correctly pointed out) but you can use row_number() to do what you want:
SELECT col1,
col2
FROM (
SELECT col1,
col2,
row_number() over (order by some_column) as rn
FROM your_table
) t
WHERE rn BETWEEN 10 AND 20
Note that this solution has the added benefit that you can order the final result on a different criteria if you want to.
Edit: forgot to answer your initial question:
With the above solution, yes Oracle will have to build the full result in order to find out the correct numbering.
With 11g and above you might improve your query using the result cache.
Concerning the question's title.
See http://www.orafaq.com/wiki/ROWNUM and this in-depth explanation by Tom Kyte.
Concerning the question's goal.
This should be what you're looking for: Paging with Oracle
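In addition, on 12.1 and later the same sliding window can be written with the row-limiting clause discussed at the top of this page. A minimal sketch (the_table, sort_column and the selected columns are placeholders, not from the original answer):

-- Skip the first 20 rows, then return the next 20.
select col1, col2
from the_table
order by sort_column
offset 20 rows fetch next 20 rows only;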
I don't think your design is quite going to work out as you've planned. Oracle assigns values to ROWNUM in the order that they are produced by the query - the first row produced is assigned ROWNUM=1, the second is assigned ROWNUM=2, etc. Notice that in order to have ROWNUM=21 assigned the query must first return the first twenty rows and thus if you write a query which says
SELECT *
FROM MY_TABLE
WHERE ROWNUM >= 21 AND
ROWNUM <= 40
no rows will be returned because in order for there to be rows with ROWNUM >= 21 the query must first return all the rows with ROWNUM <= 20.
I hope this helps.
It's an old question but you should try this - http://www.inf.unideb.hu/~gabora/pagination/results.html
