I found this solution for selecting a random row from a table in Oracle. It actually sorts the rows in a random order, but you can fetch just the first row to get a random result.
SELECT *
FROM table
ORDER BY dbms_random.value;
I just don't understand how it works. After ORDER BY there should be a column used for sorting. I see that "dbms_random.value" returns a value between zero and one. Can this behavior be explained, or is it just how it works?
Thanks
You could also think of it like this:
SELECT col1, col2, dbms_random.value
FROM table
ORDER BY 3
In this example, the number 3 refers to the third column in the select list.
When you order by dbms_random.value, Oracle orders by the expression, not by a column. For every record, Oracle calculates a random number and then orders by that number.
It is similar to this:
select * from emp order by upper(ename);
You have an order by based on a function.
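To actually fetch a single random row, as the original question describes, you can wrap the randomly ordered query and keep only its first row. A minimal sketch:
SELECT *
FROM (
    SELECT * FROM emp ORDER BY dbms_random.value  -- shuffle the rows
)
WHERE ROWNUM = 1;                                 -- keep just one of them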
I have an order table (id_order, id_category) and a category table (id_category, description).
How can I insert more than one category for an order in an Oracle APEX app?
One option is to use a shuttle item; it'll let you easily move the desired categories from its left side to its right side. The result is a semi-colon separated list of values. For example, if you chose categories 1, 5 and 7, the result is 1;5;7 - so, in order to insert them properly into the order table, you'll first have to split them into rows.
How? Use apex_string.split:
SQL> select * from table(apex_string.split('1;5;7', ';'));
COLUMN_VALUE
--------------------------------------------------------------------------------
1
5
7
SQL>
All rows would share the same order_id, I presume (a sequence might be a good choice for its value).
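A minimal sketch of that insert, in case it helps; the table, sequence and page item names (order_table, order_seq, :P1_CATEGORIES) are assumptions, not taken from the question:
-- all object names below are illustrative assumptions
declare
    l_id_order  number;
begin
    l_id_order := order_seq.nextval;                 -- one order id shared by all rows

    insert into order_table (id_order, id_category)
    select l_id_order,
           to_number(column_value)                   -- each split value becomes a row
    from   table(apex_string.split(:P1_CATEGORIES, ';'));
end;
/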
I have the below query in Oracle, which returns duplicate rows; file_data is a BLOB column.
SELECT attachsysfilename, file_seq, version, file_size, lastupddttm, lastupdoprid, file_data
from PS_LP_EX_FILEATTCH
I want to apply a DISTINCT clause on top of it to get unique records, but I am unable to do so because of the BLOB column.
Can someone please help in this regard?
How can I use the Scalar subquery on file_data column to get the DISTINCT records from the table?
Assuming you have a primary key on the PS_LP_EX_FILEATTCH table's rows, you could try using a subquery that aggregates the related primary key:
select t.*, ps.file_data
from (
SELECT min(pk) my_id
,attachsysfilename
,file_seq
,version
,file_size
,lastupddttm
,lastupdoprid
from PS_LP_EX_FILEATTCH
group by attachsysfilename
,file_seq
,version
,file_size
,lastupddttm
,lastupdoprid
) t
inner join PS_LP_EX_FILEATTCH ps ON t.my_id = ps.pk
You could use a hash of the BLOB values and group by the hash along with all the other (the non-BLOB) columns, select one pk (or rowid, see discussion below) from each group, for example min(pk) or min(rowid), and then select the corresponding rows from the table.
For hashing you could use ora_hash, but that is only for school work. If this is a serious project, you probably need to use dbms_crypto.hash.
Whether this is a correct solution depends on the possibility of collisions when hashing the BLOB values. In Oracle 11.1 - 11.2 you can use SHA-1 hashes (160 bits); perhaps this is enough to distinguish between your BLOB values. In higher Oracle versions, longer hashes (up to 512 bits in my version, 12.2) are available. Obviously, the longer the hashes, the slower the query - but also the higher the likelihood that you won't incorrectly identify different BLOB values as "duplicates" due to collisions.
Other responders asked about or mentioned a primary key (pk) column or columns in your table. If you have one, you can use it instead of the rowid in my query below - but rowid should work OK for this. (Still, pk is preferred if your table has one.)
dbms_crypto.hash takes an integer argument (1, 2, 3, etc.) for the hashing algorithm to be used. These are defined as named constants in the package. Alas, in SQL you can't reference package constants; you need to find the values beforehand. (Or, in Oracle 12.1 or higher, you can do it on the fly, by including a function in a with clause - but let's keep it simple.)
So, to cover Oracle 11.1 and higher, I'll assume we want to use the SHA-1 algorithm. To find its integer value from the package, I can do this:
begin
dbms_output.put_line(dbms_crypto.hash_sh1);
end;
/
3
PL/SQL procedure successfully completed.
If your Oracle version is higher, you can check for the value of hash_sh256, for example; on my system, it's 4. Remember this number, since we will use it below.
The query is:
select {whatever columns you need, including the BLOB}
from {your table}
where rowid in (
select min(rowid)
from {your table}
group by {the non-BLOB columns},
dbms_crypto.hash({BLOB column}, 3)
)
;
Notice the number 3 used in the hash function - that's the value of dbms_crypto.hash_sh1, which we found earlier.
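Applied to the table from the question, the sketch would look something like this (using rowid, since no primary key was mentioned):
select attachsysfilename, file_seq, version, file_size,
       lastupddttm, lastupdoprid, file_data
from   ps_lp_ex_fileattch
where  rowid in (
           select min(rowid)
           from   ps_lp_ex_fileattch
           group by attachsysfilename, file_seq, version, file_size,
                    lastupddttm, lastupdoprid,
                    dbms_crypto.hash(file_data, 3)   -- 3 = dbms_crypto.hash_sh1
       );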
I used the below query to get the distinct rows, including the BLOB column.
select * from (
    select
        attachsysfilename,
        file_seq,
        version,
        lastupddttm,
        lastupdoprid,
        file_data,
        ROW_NUMBER() OVER (PARTITION BY attachsysfilename, file_seq, version, lastupddttm, lastupdoprid
                           ORDER BY attachsysfilename, file_seq, version, lastupddttm DESC, lastupdoprid) RNK
    from ps_lp_ex_fileattch a
) WHERE RNK = 1
I read that 'The ORDERED hint causes Oracle to join tables in the order in which they appear in the FROM clause.'
But does it also fetch the rows in specific order?
For example: if I have the ORDERED hint in a query involving the column emp_code, which has the values 'A', 'B' and 'C' (let's say more than two tables are joined to get emp_code):
Will the output always have a specific order of rows? For example, will 'A' always be the first row and 'C' the last? Does the hint decide the order of rows, and if so, how?
No. The only thing that controls the order of rows in the final result set is the use of the ORDER BY clause in the SELECT statement. Hints are to influence the access plan chosen by the optimizer, not ordering of the result set.
select emp_id,
emp_name
from emp
order by emp_id -- this is the only thing that controls the order of rows in the result set
;
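For completeness, a sketch combining the two (table and column names are assumptions): the ORDERED hint only influences the join order in the execution plan, while the ORDER BY alone guarantees the order of the rows returned.
SELECT /*+ ORDERED */        -- join emp before dept, following the FROM clause order
       e.emp_code,
       d.dept_name
FROM   emp  e
JOIN   dept d ON d.dept_id = e.dept_id
ORDER  BY e.emp_code;        -- only this guarantees 'A', 'B', 'C' ordering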
I want a query that selects the number of rows in each table
but the statistics are not up to date, so such a query will not be accurate:
select table_name, num_rows from user_tables
I want to query several schemas, and each schema has at least 500 tables, some of which contain a lot of columns. It would take me days if I tried to gather statistics on all of them.
On the Ask Tom site, a function is suggested that includes this query:
execute immediate 'select count(*)
    from ' || p_tname INTO l_columnValue;
Such a query with count(*) is really slow and will not give me fast results.
Is there a query that can tell me how many rows are in a table quickly?
You said in a comment that you want to delete (drop?) empty tables. If you don't want an exact count but only want to know if a table is empty you can do a shortcut count:
select count(*) from table_name where rownum < 2;
The optimiser will stop when it reaches the first row - the execution plan shows a 'count stopkey' operation - so it will be fast. It will return zero for an empty table, and one for a table with any data - you have no idea how much data, but you don't seem to care.
You still have a slight race condition between the count and the drop, of course.
This seems like a very odd thing to want to do - either your application uses the table, in which case dropping it will break something even if it's empty; or it doesn't, in which case it shouldn't matter whether it has any (presumably redundant) data, and it can be dropped regardless. If you think there might be confusion, that sounds like your source (including DDL) control needs some work, maybe?
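If you do decide to go ahead and drop empty tables, a rough PL/SQL sketch of the idea might look like this; it only reports candidates, and the actual DROP is deliberately left commented out:
set serveroutput on
declare
    l_cnt number;
begin
    for t in (select table_name from user_tables) loop
        execute immediate
            'select count(*) from "' || t.table_name || '" where rownum < 2'
            into l_cnt;
        if l_cnt = 0 then
            dbms_output.put_line('Empty table: ' || t.table_name);
            -- execute immediate 'drop table "' || t.table_name || '" purge';
        end if;
    end loop;
end;
/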
To check whether either of the tables in two schemas has any rows, just count from both of them; either with a union:
select max(c) from (
select count(*) as c from schema1.table_name where rownum < 2
union all
select count(*) as c from schema2.table_name where rownum < 2
);
... or with greatest and two sub-selects, e.g.:
select greatest(
(select count(*) from schema1.table_name where rownum < 2),
(select count(*) from schema2.table_name where rownum < 2)
) from dual;
Either would return one if either table has any rows, and would only return zero if they were both empty.
Full Disclosure: I had originally suggested a query that specifically counts a column that's (a) indexed and (b) not null. @AlexPoole and @JustinCave pointed out (please see their comments below) that Oracle will optimize a COUNT(*) to do this anyway. As such, this answer has been altered significantly.
There's a good explanation here for why User_Tables shouldn't be used for accurate row counts, even when statistics are up to date.
If your tables have indexes which can be used to speed up the count by doing an index scan rather than a table scan, Oracle will use them. This will make the counts faster, though not by any means instantaneous. That said, this is the only way I know to get an accurate count.
To check for empty (zero row) tables, please use the answer posted by Alex Poole.
You could make a table to hold the counts of each table. Then, set a trigger to run on INSERT for each of the tables you're counting that updates the main table.
You'd also need to include a trigger for DELETE.
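A rough sketch of that idea; all object names (row_counts, my_table) are made up for illustration, and bear in mind that a row-level trigger like this serializes concurrent changes to the counted table:
-- counter table plus a maintenance trigger; object names are illustrative only
create table row_counts (
    table_name varchar2(128) primary key,
    row_count  number default 0 not null
);

insert into row_counts (table_name, row_count) values ('MY_TABLE', 0);

create or replace trigger my_table_cnt_trg
after insert or delete on my_table
for each row
begin
    if inserting then
        update row_counts set row_count = row_count + 1 where table_name = 'MY_TABLE';
    else
        update row_counts set row_count = row_count - 1 where table_name = 'MY_TABLE';
    end if;
end;
/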
I need to make a navigation panel that shows only a subset of a possibly large result set. This subset is 20 records before and 20 records after the current position in the result set. As I navigate the results through the navigation panel, I'll apply a sliding-window design using ROWNUM to get the next subset. My question is: does Oracle's ROWNUM build the whole table before it extracts the rows you want? Or is it intelligent enough to only generate the rows I need? I googled and couldn't find an explanation of this.
The pre-analytic-function method for doing this would be:
select col1, col2 from (
select col1, col2, rownum rn from (
select col1, col2 from the_table order by sort_column
)
where rownum <= 20
)
where rn > 10
The Oracle optimizer will recognize in this case that it only needs to get the top 20 rows to satisfy the inner query. It will likely have to look at all the rows (unless, say, the sort column is indexed in a way that lets it avoid the sort altogether) but it will not need to do a full sort of all the rows.
Your solution will not work (as Bob correctly pointed out) but you can use row_number() to do what you want:
SELECT col1,
col2
FROM (
SELECT col1,
col2,
row_number() over (order by some_column) as rn
FROM your_table
) t
WHERE rn BETWEEN 10 AND 20
Note that this solution has the added benefit that you can order the final result on a different criteria if you want to.
Edit: forgot to answer your initial question:
With the above solution, yes Oracle will have to build the full result in order to find out the correct numbering.
With 11g and above you might improve your query by using the result cache.
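For example, a sketch adding the result cache hint to the query above (this only helps if the result cache is enabled on your instance and the data changes rarely):
SELECT /*+ RESULT_CACHE */ col1, col2
FROM (
    SELECT col1,
           col2,
           row_number() over (order by some_column) as rn
    FROM   your_table
) t
WHERE rn BETWEEN 10 AND 20;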
Concerning the question's title.
See http://www.orafaq.com/wiki/ROWNUM and this in-depth explanation by Tom Kyte.
Concerning the question's goal.
This should be what you're looking for: Paging with Oracle
I don't think your design is quite going to work out as you've planned. Oracle assigns values to ROWNUM in the order that they are produced by the query - the first row produced is assigned ROWNUM=1, the second is assigned ROWNUM=2, etc. Notice that in order to have ROWNUM=21 assigned the query must first return the first twenty rows and thus if you write a query which says
SELECT *
FROM MY_TABLE
WHERE ROWNUM >= 21 AND
ROWNUM <= 40
no rows will be returned because in order for there to be rows with ROWNUM >= 21 the query must first return all the rows with ROWNUM <= 20.
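The classic workaround is to let ROWNUM be assigned in an inner query and filter on its alias in the outer one; a minimal sketch, with the sort column assumed:
SELECT *
FROM (
    SELECT t.*, ROWNUM rn           -- ROWNUM is assigned here, after the sort
    FROM   (SELECT * FROM MY_TABLE ORDER BY some_column) t
    WHERE  ROWNUM <= 40             -- stop after the first 40 rows
)
WHERE rn >= 21;                     -- keep rows 21..40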
I hope this helps.
It's an old question but you should try this - http://www.inf.unideb.hu/~gabora/pagination/results.html