Getting ORA-00913 on really big IN clauses [duplicate] - oracle

This question already has answers here:
SQL IN Clause 1000 item limit
(5 answers)
How to pass values to IN operator dynamically?
(1 answer)
PL/SQL - Use "List" Variable in Where In Clause
(3 answers)
how to select a list of 10,000 unique ids from dual in oracle SQL
(4 answers)
Closed 2 years ago.
I am trying to build an SQL query with around 100,000 given values.
For example:
Given values = [1, 2, 4, 5, 30, ...]
And I want to select all rows whose ID matches one of the given values.
I tried it like this:
SELECT x FROM y WHERE
someOtherColumn = 'test'
AND (
id = 1
OR id = 2
OR id = 4
OR id = 5
OR id = 30
-- ...
);
And like this:
SELECT x FROM y WHERE
someOtherColumn = 'test'
AND (
id IN (1, 2, 4, 5, 30, ...) --- 1000 values per IN clause
OR id IN (...)
-- ...
);
Both give me the same error:
ORA-00913: too many values
Is there another way to do this?
This is not a 1000-item IN clause limit issue!

Where do the values for your IN clause originate? Load your values into a reference table, then select from that table in your IN clause.
create table ref_table (ref_value varchar2(20));
insert into ref_table values ('1');
insert into ref_table values ('2');
SELECT x FROM y WHERE
someOtherColumn = 'test'
AND id IN (select ref_value from ref_table);

Related

In oracle sql queries - Is it valid to use % in between a search string

For example
select * from tbl where msg like '%<CDT>5000%<DBT>1000%'
msg: <TXN1><CDT>5000<\CDT><\TXN1><something else><TXN2><DBT>1000<\DBT><\TXN2>
I am looking to extract column values, if it has CDT as 5000 and DBT as 1000
Title question is:
is it valid to use % in between a search string, for example
where msg like '%5000%1000%'
Yes, it is valid.
What would CDT and DBT be? Extract which column values?
I'm not sure, but I think what you want is to take the value of everything between <CDT> and </CDT>.
The following code does that
with raw_text(t) as( select '<TXN1><CDT>5000</CDT></TXN1>' from dual)
SELECT *
FROM raw_text
CROSS JOIN
XMLTABLE (
'//CDT'
PASSING XMLTYPE (raw_text.t)
COLUMNS CDT VARCHAR2 (1000) PATH './text()')
Let's assume that you have valid XML data (a single root element, matching opening and closing tags, and using /, not \, for the closing tags). For example:
CREATE TABLE tbl (id, msg) AS
SELECT 1, '<ROOT><TXN1><CDT>5000</CDT></TXN1><something /><TXN2><DBT>1000</DBT></TXN2></ROOT>' FROM DUAL UNION ALL
SELECT 2, '<ROOT><TXN1><CDT>50000</CDT></TXN1><something /><TXN2><DBT>1000</DBT></TXN2></ROOT>' FROM DUAL UNION ALL
SELECT 3, '<ROOT><TXN1><CDT>5000</CDT></TXN1><something /><TXN2><DBT>10000</DBT></TXN2></ROOT>' FROM DUAL UNION ALL
SELECT 4, '<ROOT><TXN2><DBT>1000</DBT></TXN2><TXN1><CDT>5000</CDT></TXN1><something /></ROOT>' FROM DUAL UNION ALL
SELECT 5, '<ROOT><TXN1><CDT note="match me too">5000</CDT></TXN1><something /><TXN2><DBT>1000</DBT></TXN2></ROOT>' FROM DUAL;
Here ids 1, 4 and 5 all have the values 5000 and 1000, but id 4 has the order of the transactions reversed in the XML (which is completely valid XML and does not change the data at all) and id 5 is the same as id 1 but with an added attribute on the CDT element. Id 2 has a CDT of 50000 instead of 5000 and id 3 has a DBT of 10000 instead of 1000.
Then, yes, you could use:
SELECT *
FROM tbl
WHERE msg LIKE '%<CDT>5000%<DBT>1000%'
But it would return rows 1, 2 and 3 and would not match row 4 or 5. This is probably not what you want.
You could eliminate rows 2 and 3 by matching the end tags as well:
SELECT *
FROM tbl
WHERE msg LIKE '%<CDT>5000</CDT>%<DBT>1000</DBT>%'
But that still does not match when the tags are reversed or when there are additional attributes.
If you want to match the values then you can use XMLEXISTS:
SELECT *
FROM tbl
WHERE XMLEXISTS('//CDT[text()=5000]' PASSING XMLTYPE(msg))
AND XMLEXISTS('//DBT[text()=1000]' PASSING XMLTYPE(msg))
Which outputs:
ID  MSG
 1  <ROOT><TXN1><CDT>5000</CDT></TXN1><something /><TXN2><DBT>1000</DBT></TXN2></ROOT>
 4  <ROOT><TXN2><DBT>1000</DBT></TXN2><TXN1><CDT>5000</CDT></TXN1><something /></ROOT>
 5  <ROOT><TXN1><CDT note="match me too">5000</CDT></TXN1><something /><TXN2><DBT>1000</DBT></TXN2></ROOT>
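If you also want the CDT and DBT values back as columns, rather than only filtering on them, an XMLTABLE along the same lines should work. This is only a sketch against the same sample table, and it assumes each message contains at most one CDT and one DBT element:
SELECT t.id, x.cdt, x.dbt
FROM tbl t
CROSS JOIN XMLTABLE(
  '/ROOT'
  PASSING XMLTYPE(t.msg)
  -- assumes at most one CDT and one DBT per message; multiple matches would raise an error
  COLUMNS cdt NUMBER PATH 'TXN1/CDT',
          dbt NUMBER PATH 'TXN2/DBT'
) x
WHERE x.cdt = 5000
AND x.dbt = 1000;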

Allow multiple values from SSRS in oracle

I have a query that gets contract_types 1 to 10. This query is being used in an SSRS report to filter out a larger dataset. I am using -1 for nulls and -2 for all.
I would like to know how we would allow multiple values - does Oracle concatenate the inputs together so '1,2,3' would be passed in? Say we select -1, 0, 1 in SSRS; how could we alter the bottom query to return the matching values?
My query to get ContractTypes:
SELECT
ContractType,
CASE WHEN ContractType = -2 THEN 'All'
WHEN ContractType = -1 THEN 'Null'
ELSE TO_CHAR(ContractType)
END AS DisplayFigure
FROM ContractTypes
which returns
ContractType DisplayFig
-1 Null
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
This currently only returns single values or all, not multiple values:
SELECT *
FROM Employee
WHERE NVL(CONTRACT_TYPE, -1) = :contract_type or :contract_type = -2
I'm assuming we want to do something like:
WHERE NVL(CONTRACT_TYPE, -1) IN (:contract_type)
But this doesn't seem to work.
Data in Employee
Name ContractType
Bob 1
Sue 0
Bill Null
Joe 2
In my report, I want to be able to select contract_type as -1 (null), 0, 1 using the 'allow multiple values' checkbox. At the moment, I can only select either 'all' using my -2 value, or single contract types.
My input would be: contract type = -1,1,2
My output would be Bill, Bob, Joe.
This is how I'm executing my code
I use SSRS with Oracle a lot so I see where you're coming from. Thankfully, they work pretty well together.
First make sure the parameter is set to allow multiple values. This adds a Select All option to your dropdown so you don't have to worry about adding a special case for "All". You'll want to make sure the dataset for the parameter has a row with -1 as the Value and a friendly description for the Label.
Next, the WHERE clause would be just as you mentioned:
WHERE NVL(CONTRACT_TYPE, -1) IN (:contract_type)
SSRS automatically populates the values. There is no XML or string manipulation needed. Keep in mind that this will not work with single-value parameters.
If for some reason this still doesn't work as expected in your environment, there is another workaround you can use which is more universal and works even with ODBC connections.
In the dataset parameter properties, use an expression like this to concatenate the values into a single, comma-separated string:
="," + Join(Parameters!Parameter.Value, ",") + ","
Then use an expression like this in your WHERE clause:
where :parameter like '%,' || Column || ',%'
Obviously, this is less efficient because it most likely won't be using an index, but it works.
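Put together on the Oracle side, the dataset query would look something like this. This is only a sketch; it assumes the report passes the delimited string into :contract_type and keeps the -1-for-null convention from the question:
SELECT *
FROM Employee
-- :contract_type is assumed to arrive as ',<v1>,<v2>,...,' from the SSRS expression above
WHERE :contract_type LIKE '%,' || NVL(CONTRACT_TYPE, -1) || ',%'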
I don't know SSRS, but - if I understood you correctly, you'll have to split that comma-separated values list into rows. Something like in this example:
SQL> select *
2 from dept
3 where deptno in (select regexp_substr('&&contract_type', '[^,]+', 1, level)
4 from dual
5 connect by level <= regexp_count('&&contract_type', ',') + 1
6 );
Enter value for contract_type: 10,20,40
DEPTNO DNAME LOC
---------- -------------------- --------------------
20 RESEARCH DALLAS
10 ACCOUNTING NEW YORK
40 OPERATIONS BOSTON
SQL>
Applied to your code:
select *
from employee
where nvl(contract_type, -1) in (select regexp_substr(:contract_type, '[^,]+', 1, level)
from dual
connect by level <= regexp_count(:contract_type, ',') + 1
)
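Since regexp_substr returns strings, you may prefer to avoid the implicit character-to-number conversion in the IN comparison by wrapping it in to_number; this is the same query with only that change:
select *
from employee
where nvl(contract_type, -1) in (select to_number(regexp_substr(:contract_type, '[^,]+', 1, level)) -- explicit conversion
from dual
connect by level <= regexp_count(:contract_type, ',') + 1
)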
If you have a comma-separated list of numbers and would like to split it, the below seems simple and easy to maintain.
select to_number(column_value) from xmltable(:val);
Inputs: 1,2,3,4
Output: 1, 2, 3, 4 (one value per row)
I guess I understood your problem. If I am correct, the below should solve it:
with inputs(Name, ContractType) as
(
select 'Bob', 1 from dual union all
select 'Sue', 0 from dual union all
select 'Bill', Null from dual union all
select 'Joe', 2 from dual
)
select *
from inputs
where decode(:ContractType,'-2',-2,nvl(ContractType,-1)) in (select to_number(column_value) from xmltable(:ContractType))
Inputs: -1,1,2
Output: Bob, Bill and Joe (Sue, whose contract type is 0, is filtered out)
Inputs: -2
Output: all four rows (with -2, the DECODE makes every row match)

Oracle rownum = 1 to select topmost row from the set fails [duplicate]

This question already has answers here:
Oracle SELECT TOP 10 records [duplicate]
(6 answers)
How do I do top 1 in Oracle? [duplicate]
(9 answers)
How do I limit the number of rows returned by an Oracle query after ordering?
(14 answers)
Fetch the rows which have the Max value for a column for each distinct value of another column
(35 answers)
Closed 5 years ago.
I need to select from two tables,
RATING_TABLE
RATING_TYPE RATING_PRIORITY
TITAN 1
PLATINUM(+) 1
PLATINUM 2
DIAMOND(+) 3
DIAMOND 3
GOLD 4
SILVER 4
RATING_STORAGE
RATING AMOUNT
SILVER 200
GOLD 510
DIAMOND 850
PLATINUM(+) 980
TITAN 5000
I want to select the rating from RATING_STORAGE table based on RATING_PRIORITY from RATING_TABLE.
I want to select the one row with the lowest rating priority. If two rating priorities are equal, I want to choose the one with the lowest amount.
So I used the query,
select s.rating,s.amount
from RATING_TABLE r, RATING_STORAGE s
where r.rating_type= s.rating_type
and rownum=1
order by r.rating_priority asc , s.amount asc ;
The sorting itself gives the correct order, but rownum=1 fails to return the topmost row.
Thanks in Advance.
You need to select after sorting is done, in your case:
select *
from (select s.rating
,s.amount
from rating_table r
,rating_storage s
where r.rating_type = s.rating_type
order by r.rating_priority asc
,s.amount asc)
where rownum = 1;
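On Oracle 12c or later, a row-limiting clause expresses the same thing without the nested query. A sketch, reusing the join condition from the question:
select s.rating, s.amount
from rating_table r
join rating_storage s on s.rating_type = r.rating_type
order by r.rating_priority, s.amount
fetch first 1 row only;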

What if the value of order field is the same for all the records [duplicate]

This question already has answers here:
Why does Oracle return specific sequence if 'orderby' values are identical?
(4 answers)
Closed 7 years ago.
All, Let's say the SQL looks like below.
Select a, b, c from table1 order by c
If all the rows in table1 have the same value in the field c, I want to know whether the result has the same order each time I execute the SQL.
Let's say data in the table1 looks like below.
a b c
-------------------------------------------
1 x1 2014-4-1
....
100 x100 2014-4-1
....
1000 x1000 2014-4-1
....
How does Oracle determine the row order when the ORDER BY values are the same?
Added
Will the sequence be random each time?
One simple answer is NO. There is no guarantee that ORDER BY on equal values will return the same sorted result every time. It might seem to be always stable; however, there are many reasons why it could change.
For example, the sorting on equal values might differ after:
Gathering statistics
Adding an index on the column
For example,
Let's say I have a table t:
SQL> SELECT * FROM t ORDER BY b;
A B
---------- ----------
1 1
2 1
3 2
4 2
5 3
6 3
6 rows selected.
Now create a second table with the same data, but with the rows for equal values of b stored in a different (random) order:
SQL> CREATE TABLE t1 AS SELECT * FROM t ORDER BY b, DBMS_RANDOM.VALUE;
Table created.
SQL> SELECT * FROM t1 ORDER BY b;
A B
---------- ----------
1 1
2 1
4 2
3 2
5 3
6 3
6 rows selected.
So, both tables contain the same data; however, ORDER BY on a column with equal values does not guarantee the same ordering.
The order need not be random (changing on every execution), but it is not guaranteed either (it can change at some point).
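If a repeatable order matters, the usual fix is to add a unique tie-breaker column to the ORDER BY; for example, assuming column a is unique:
Select a, b, c from table1 order by c, a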

How to put more than 1000 values into an Oracle IN clause [duplicate]

This question already has answers here:
SQL IN Clause 1000 item limit
(5 answers)
Closed 8 years ago.
Is there any way to get around the Oracle 10g limitation of 1000 items in a static IN clause? I have a comma-delimited list of many IDs that I want to use in an IN clause. Sometimes this list can exceed 1000 items, at which point Oracle throws an error. The query is similar to this...
select * from table1 where ID in (1,2,3,4,...,1001,1002,...)
Put the values in a temporary table and then do a select where id in (select id from temptable)
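A minimal sketch of that approach, assuming a global temporary table is acceptable (all names here are illustrative):
-- created once; rows are private to the session
CREATE GLOBAL TEMPORARY TABLE temp_ids (id NUMBER PRIMARY KEY) ON COMMIT PRESERVE ROWS;
-- load the ids from the application (ideally with array/batch binds), then:
SELECT *
FROM table1
WHERE ID IN (SELECT id FROM temp_ids);
A subquery inside IN is not subject to the 1000-item limit, so the length of the list no longer matters.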
The 1000-item limit applies only to lists of single values; lists of multi-column (tuple) expressions are not subject to it, so you can pad each value with a constant:
select column_X, ... from my_table
where ('magic', column_X ) in (
('magic', 1),
('magic', 2),
('magic', 3),
('magic', 4),
...
('magic', 99999)
) ...
I am almost sure you can split values across multiple INs using OR:
select * from table1 where ID in (1,2,3,4,...,1000) or
ID in (1001,1002,...,2000)
You may try to use the following form:
select * from table1 where ID in (1,2,3,4,...,1000)
union all
select * from table1 where ID in (1001,1002,...)
Where do you get the list of ids from in the first place? Since they are IDs in your database, did they come from some previous query?
When I have seen this in the past, it has been because:
a reference table is missing and the correct way would be to add the new table, put an attribute on that table and join to it
a list of ids is extracted from the database, and then used in a subsequent SQL statement (perhaps later or on another server or whatever). In this case, the answer is to never extract it from the database. Either store in a temporary table or just write one query.
I think there may be better ways to rework this code that just getting this SQL statement to work. If you provide more details you might get some ideas.
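For example, if the ids came from an earlier query, it is usually cleaner to fold that query back in rather than re-submit its results. A sketch only; source_table and its filter are placeholders for wherever the ids actually come from:
SELECT t.*
FROM table1 t
JOIN source_table s ON s.id = t.ID -- source_table is hypothetical
WHERE s.some_filter = 'whatever';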
Use ...from table(... :
create or replace type numbertype
as object
(nr number(20,10) )
/
create or replace type number_table
as table of numbertype
/
create or replace procedure tableselect
( p_numbers in number_table
, p_ref_result out sys_refcursor)
is
begin
open p_ref_result for
select *
from employees , (select /*+ cardinality(tab 10) */ tab.nr from table(p_numbers) tab) tbnrs
where id = tbnrs.nr;
end;
/
This is one of the rare cases where you need a hint, otherwise Oracle will not use the index on column id. One of the advantages of this approach is that Oracle doesn't need to hard parse the query again and again. Using a temporary table is most of the time slower.
edit 1 simplified the procedure (thanks to jimmyorr) + example
create or replace procedure tableselect
( p_numbers in number_table
, p_ref_result out sys_refcursor)
is
begin
open p_ref_result for
select /*+ cardinality(tab 10) */ emp.*
from employees emp
, table(p_numbers) tab
where tab.nr = id;
end;
/
Example:
set serveroutput on
create table employees ( id number(10),name varchar2(100));
insert into employees values (3,'Raymond');
insert into employees values (4,'Hans');
commit;
declare
l_number number_table := number_table();
l_sys_refcursor sys_refcursor;
l_employee employees%rowtype;
begin
l_number.extend;
l_number(1) := numbertype(3);
l_number.extend;
l_number(2) := numbertype(4);
tableselect(l_number, l_sys_refcursor);
loop
fetch l_sys_refcursor into l_employee;
exit when l_sys_refcursor%notfound;
dbms_output.put_line(l_employee.name);
end loop;
close l_sys_refcursor;
end;
/
This will output:
Raymond
Hans
I wound up here looking for a solution as well.
Depending on the high-end number of items you need to query against, and assuming your items are unique, you could split your query into batched queries of 1000 items each, and combine the results on your end instead (pseudocode here):
//remove dupes
items = items.RemoveDuplicates();
//how to break the items into 1000 item batches
batches = new batch list;
batch = new batch;
for (int i = 0; i < items.Count; i++)
{
if (batch.Count == 1000)
{
batches.Add(batch);
batch = new batch // start a fresh batch instead of clearing the one just added
}
batch.Add(items[i]);
if (i == items.Count - 1)
{
//add the final batch (it has < 1000 items).
batches.Add(batch);
}
}
// now go query the db for each batch
results = new results;
foreach(batch in batches)
{
results.Add(query(batch));
}
This may be a good trade-off in the scenario where you don't typically have over 1000 items - as having over 1000 items would be your "high end" edge-case scenario. For example, in the event that you have 1500 items, two queries of (1000, 500) wouldn't be so bad. This also assumes that each query isn't particularly expensive in its own right.
This wouldn't be appropriate if your typical number of expected items got to be much larger - say, in the 100000 range - requiring 100 queries. If so, then you should probably look more seriously into using the global temporary tables solution provided above as the most "correct" solution. Furthermore, if your items are not unique, you would need to resolve duplicate results in your batches as well.
Yes, this is a very weird situation in Oracle.
If you specify 2000 ids inside the IN clause, it will fail.
This fails:
select ...
where id in (1,2,....2000)
but if you simply put the 2000 ids in another table (a temp table for example), the query below will work:
select ...
where id in (select userId
from temptable_with_2000_ids )
Alternatively, you could split the records into groups of 1000 and execute them group by group.
Here is some Perl code that tries to work around the limit by creating an inline view and then selecting from it. The statement text is compressed by using rows of twelve items each instead of selecting each item from DUAL individually, then uncompressed by unioning together all columns. UNION or UNION ALL in decompression should make no difference here as it all goes inside an IN which will impose uniqueness before joining against it anyway, but in the compression, UNION ALL is used to prevent a lot of unnecessary comparing. As the data I'm filtering on are all whole numbers, quoting is not an issue.
#
# generate the innards of an IN expression with more than a thousand items
#
use English '-no_match_vars';
sub big_IN_list{
@_ < 13 and return join ', ',@_;
my $padding_required = (12 - (@_ % 12)) % 12;
# get first dozen and make length of @_ an even multiple of 12
my ($a,$b,$c,$d,$e,$f,$g,$h,$i,$j,$k,$l) = splice @_,0,12, ( ('NULL') x $padding_required );
my @dozens;
local $LIST_SEPARATOR = ', '; # how to join elements within each dozen
while(@_){
push @dozens, "SELECT @{[ splice @_,0,12 ]} FROM DUAL"
};
$LIST_SEPARATOR = "\n union all\n "; # how to join @dozens
return <<"EXP";
WITH t AS (
select $a A, $b B, $c C, $d D, $e E, $f F, $g G, $h H, $i I, $j J, $k K, $l L FROM DUAL
union all
@dozens
)
select A from t union select B from t union select C from t union
select D from t union select E from t union select F from t union
select G from t union select H from t union select I from t union
select J from t union select K from t union select L from t
EXP
}
One would use that like so:
my $bases_list_expr = big_IN_list(list_your_bases());
$dbh->do(<<"UPDATE");
update bases_table set belong_to = 'us'
where id in ($bases_list_expr)
UPDATE
Instead of using an IN clause, can you try using a JOIN with the other table that the IDs come from? That way we don't need to worry about the limit. Just a thought from my side.
Instead of SELECT * FROM table1 WHERE ID IN (1,2,3,4,...,1000);
Use this :
SELECT * FROM table1 WHERE ID IN (SELECT rownum AS ID FROM dual connect BY level <= 1000);
*Note that you need to be sure the ID does not refer to any other foreign IDs if there is such a dependency. To ensure that only existing IDs are used:
SELECT * FROM table1 WHERE ID IN (SELECT distinct(ID) FROM tablewhereidsareavailable);
Cheers
