Is there a faster/better way to run this TSQL WHILE loop? - performance

I'll start by just giving a brief explanation of my base data. I have a dataset where each row represents one work item. After this item is allocated to an employee, the employee will work the item. During this process, the item can hit a milestone called 'folder'. Each one of the work items is graded at the start of the process (Grade A, Grade B, etc), with better grades indicating a higher likelihood that the item makes it from allocation to folder.
What I am doing right now is running a test for a new way of allocating these work items. The items are routed to either someone in the test group or the control group. My code loops through the dataset to randomly select a percentage of the rows for each item Grade + Group combination. The output shows whether the test or control group is expected to produce more folders, broken out by item grade and overall. It also tallies, across iterations, how many times the test group versus the control group produced more folders.
My code works. It does what I need it to do. I just can't help but wonder if there is a better way to do this. Full disclosure: I'm not a DBA or a SQL expert, just an analyst trying to figure out if I can optimize my code or run it differently. My code went from being very long and explicit to its current form using nested WHILE loops. The performance didn't get any better, but I cut the length down drastically. If I set the code to 10,000 iterations, which is what I would prefer as the minimum, it takes over an hour to run, and my dataset is only going to grow, so it will take longer and longer. If anyone has any tips or suggestions on a different way to structure this so it runs faster (assuming that's possible), which would allow me to run the code more often (I work with impatient people lol), it would be greatly appreciated.
This is a sample of my data.
This is my code (The #leads temp table is what is in the linked dataset):
--Creating a new table to put the random sample of allocations into
DROP TABLE IF EXISTS #FINAL;
CREATE TABLE #FINAL (LeadGrade varchar(20),
[Group] varchar(20),
Allocations int,
Folders int,
Pass int);
--Creating a table of test condition group names for the nested loops
DROP TABLE IF EXISTS #group;
CREATE TABLE #group (groupnum int,
testgroup varchar(8));
INSERT INTO #group
VALUES (1, 'Test'),
(2, 'Control');
--Declaring all the necessary variables to run through the random sampling
DECLARE @GROUPNUM int = 1;
DECLARE @GRADENUM int = 1;
DECLARE @COUNTER int = 1;
DECLARE @ITERATIONS int = 1000;
--Top layer of the loop
--Checks how many times the inner loops have run. As long as it is less than the specified number of iterations, repeats the inner loops
WHILE @COUNTER <= @ITERATIONS
BEGIN
--Creates a temp table of the leads with their milestones, adding on a random number value to each row
--This random number will be different with each iteration due to the table being dropped and repopulated
DROP TABLE IF EXISTS #allleadsrandom;
SELECT l.*,
CHECKSUM(NEWID()) AS RandomNumber
INTO #allleadsrandom
FROM #leads l
ORDER BY CHECKSUM(NEWID()) DESC;
--2nd layer loop. This loop will change the LeadGrade on each pass thru from A to B to C to D
WHILE @GRADENUM <= 4
BEGIN
--3rd layer loop. This loop pulls data from the #allleadsrandom temp table.
--It filters on the lead grade from the outer loop and on the Group, based on whether this is the 1st (test) or 2nd (control) pass through this loop
WHILE @GROUPNUM <= 2
BEGIN
--Pulling the top # of rows from #allleadsrandom, based on the pct of leads in the data comprised of the current lead grade
DROP TABLE IF EXISTS #focus;
SELECT TOP (SELECT PctOfLeads * 10 FROM #leadgradepct WHERE GradeNum = @GRADENUM)
*
INTO #focus
FROM #allleadsrandom
WHERE LeadGrade = (SELECT LeadGrade FROM #leadgradepct WHERE gradenum = @GRADENUM)
AND [group] = (SELECT testgroup FROM #group WHERE groupnum = @GROUPNUM);
--Takes data from previous temp table and sums the number of allocations and folders
DROP TABLE IF EXISTS #loader;
SELECT LeadGrade,
[Group],
SUM(allocation) AS Allocations,
SUM(folder) AS Folders
INTO #loader
FROM #focus
GROUP BY LeadGrade,
[Group];
--Uses the loader temp table and inserts new row into the final temp table for analysis
INSERT INTO #FINAL
SELECT l.LeadGrade,
l.[Group],
l.Allocations,
l.Folders,
@COUNTER AS Pass
FROM #loader l;
--Advance the group condition from test to control. After the control group has run, this loop ends
SET @GROUPNUM = @GROUPNUM + 1;
END;
--Reset the Group from control back to test and advance the lead grade from A to B to C to D. After Grade D has run, this loop ends
SET @GROUPNUM = 1;
SET @GRADENUM = @GRADENUM + 1;
END;
--Reset the entire loop. Set Grade back to A and Group back to test. Add one to the counter. When the maximum number of iterations is reached, the loop ends.
SET @GROUPNUM = 1;
SET @GRADENUM = 1;
SET @COUNTER = @COUNTER + 1;
END;
---------------------------------------------------------------------------------------------------------------------------------------
--Takes all the iterations of the nested loops and creates one final table of information
--Gives a summary of allocations and average folder count broken out by lead grade and test group
DROP TABLE IF EXISTS #results;
SELECT LeadGrade,
[Group],
SUM(Allocations) / @ITERATIONS AS AvgAllocations,
SUM(Folders) / @ITERATIONS AS AvgFolders,
MAX(Pass) AS #OfIterations
INTO #results
FROM #FINAL
GROUP BY LeadGrade,
[Group]
ORDER BY LeadGrade ASC,
[Group] DESC;
SELECT *
FROM #results
ORDER BY LeadGrade ASC,
[Group] DESC;
SELECT [Group],
SUM([AvgFolders]) AS AvgExpectedFolders
FROM #results
GROUP BY [Group]
ORDER BY [Group] DESC;
-------------------------------------------------------------------------------------------------------------------------------------------
--Another loop to go through the #FINAL temp table and output
--a comparison of how many times the test group had more folders vs how many times the control group had more folders
DECLARE @TESTTALLY AS int = 0;
DECLARE @CONTROLTALLY AS int = 0;
DECLARE @PASS AS int = 1;
WHILE @PASS <= @ITERATIONS
BEGIN
--Get the total folders for the test group and the total folders for the Control group
DECLARE @TESTCOUNT AS int = (SELECT SUM(Folders) AS Folders
FROM #FINAL
WHERE Pass = @PASS
AND [Group] = 'Test'
GROUP BY [Group]);
DECLARE @CONTROLCOUNT AS int = (SELECT SUM(Folders) AS Folders
FROM #FINAL
WHERE Pass = @PASS
AND [Group] = 'Control'
GROUP BY [Group]);
--Compare the folders in the test to control group for a unique iteration, add a tally to the group with more folders
IF @TESTCOUNT > @CONTROLCOUNT
SET @TESTTALLY = @TESTTALLY + 1;
ELSE
SET @CONTROLTALLY = @CONTROLTALLY + 1;
SET @PASS = @PASS + 1;
END;
SELECT @TESTTALLY AS [Test Group More Folders],
@CONTROLTALLY AS [Control Group More Folders];
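For what it's worth, the usual fix for this kind of simulation is to eliminate the loops entirely and build every iteration's sample in one set-based pass. Below is a minimal sketch of that idea, assuming the #leads and #leadgradepct layouts implied by the code above (allocation, folder, LeadGrade, [Group], GradeNum, PctOfLeads); sys.all_objects is used only as a convenient row source for generating pass numbers, and #FINAL2 stands in for #FINAL so nothing above is clobbered. For a very large #leads table the CROSS JOIN gets big, so you may need to run the passes in batches (say 1,000 at a time).
DECLARE @ITERATIONS int = 10000; --(reuse the @ITERATIONS declared above if running in the same batch)
DROP TABLE IF EXISTS #FINAL2;
--Pass numbers 1..@ITERATIONS, generated without a loop
WITH passes AS (
    SELECT TOP (@ITERATIONS) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Pass
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
),
--Shuffle the leads independently for every pass/grade/group combination
ranked AS (
    SELECT p.Pass, l.LeadGrade, l.[Group], l.allocation, l.folder,
           ROW_NUMBER() OVER (PARTITION BY p.Pass, l.LeadGrade, l.[Group]
                              ORDER BY CHECKSUM(NEWID())) AS rn
    FROM #leads l
    CROSS JOIN passes p
)
--Keep the same per-grade sample size the TOP (...) query used, then aggregate
SELECT r.LeadGrade, r.[Group],
       SUM(r.allocation) AS Allocations,
       SUM(r.folder) AS Folders,
       r.Pass
INTO #FINAL2
FROM ranked r
JOIN #leadgradepct g ON g.LeadGrade = r.LeadGrade
WHERE r.rn <= g.PctOfLeads * 10
GROUP BY r.LeadGrade, r.[Group], r.Pass;
As a bonus, the ROW_NUMBER() makes the random pick explicit instead of relying on TOP without an ORDER BY, whose row order SQL Server does not guarantee. The tally loop at the end collapses the same way; the <= below keeps the original behavior of crediting the control group on ties:
--One aggregate over all passes instead of a WHILE loop
SELECT SUM(CASE WHEN TestFolders > ControlFolders THEN 1 ELSE 0 END) AS [Test Group More Folders],
       SUM(CASE WHEN TestFolders <= ControlFolders THEN 1 ELSE 0 END) AS [Control Group More Folders]
FROM (SELECT Pass,
             SUM(CASE WHEN [Group] = 'Test' THEN Folders ELSE 0 END) AS TestFolders,
             SUM(CASE WHEN [Group] = 'Control' THEN Folders ELSE 0 END) AS ControlFolders
      FROM #FINAL
      GROUP BY Pass) p;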

Related

Randomly select 10 subjects and retain all of their observations

I am stuck with the following problem in SAS. I have a dataset of this format:
The dataset consists of 500 IDs with a different number of observations per ID. I'm trying to randomly select 5 IDs and at the same time retain all of their observations. I built a random generator in the first place, saving a vector with 10 numbers in the interval [1,500]. However, it became clumsy when I tried to use this vector to select the IDs corresponding to the random numbers. To be clearer, I want my net result to be a dataset which includes all observations corresponding to IDs 1, 10, 43, 22, 67, or any other sequence of 5 numbers.
Any tip will be more than appreciated!
From your question, I assume you already have your 10 random numbers. If they are saved in a table/dataset, you can run a left join between them and your original dataset, by id. This will pull out all the original observations with the same id.
Let's say that your randomly selected numbers are saved in a table called "random_ids". Then, you can do:
proc sql;
create table want as
select distinct
t1.id,
t2.*
from random_ids as t1
left join have as t2 on t1.id = t2.id;
quit;
If your random numbers are not saved in a dataset, you may simply copy them to a where statement, like:
proc sql;
create table want as
select distinct
*
from have
where id in (1 10 43 22 67); /*here you put the ids you want*/
quit;
Best,
Proc SURVEYSELECT is your friend.
data have;
call streaminit(123);
do _n_ = 1 to 500;
id = rand('integer', 1e6);
do seq = 1 to rand('integer', 35);
output;
end;
end;
run;
proc surveyselect noprint data=have sampsize=5 out=want;
cluster id;
run;
proc sql noprint;
select count(distinct id) into :id_count trimmed from want;
%put NOTE: &=id_count;
If you don't have the procedure as part of your SAS license, you can do sample selection per the k/n algorithm. NOTE: the earliest archived post for k/n is a May 1996 SAS-L message, which has code based on a 1995 SAS Observations magazine article.
proc sql noprint;
select count(distinct id) into :N trimmed from have;
proc sort data=have;
by id;
data want_kn;
retain N &N k 5;
if _n_ = 1 then call streaminit(123);
keep = rand('uniform') < k / N;
if keep then k = k - 1;
do until (last.id);
set have;
by id;
if keep then output;
end;
if k = 0 then stop;
N = N - 1;
drop k N keep;
run;
proc sql noprint;
select count(distinct id) into :id_count trimmed from want_kn;
%put NOTE: &=id_count;

Sequential vs parallel solution

I will try to present my problem as simply as possible.
Assume that we have 3 tables in Oracle 11g.
Persons (person_id, name, surname, status, etc )
Actions (action_id, person_id, action_value, action_date, calculated_flag)
Calculations (calculation_id, person_id,computed_value,computed_date)
What I want is: for each person that meets certain criteria (let's say status=3), get the sum of action_values from the Actions table where calculated_flag=0 (something like: select sum(action_value) from Actions where calculated_flag=0 and person_id=current_id).
Then I shall use that sum in some kind of formula and update the Calculations table for that specific person_id.
update Calculations set computed_value=newvalue, computed_date=sysdate
where person_id=current_id
After that, the calculated_flag for the participating rows will be set to 1.
update Actions set calculated_flag=1
where calculated_flag=0 and person_id=current_id
Now this can be easily done sequentially, by creating a cursor that will run through Persons table and then execute each action needed for the specific person.
(I don't provide the code for the sequential solution as the above is just an example that resembles my real-world setup.)
The problem is that we are talking about a quite large amount of data, and the sequential approach seems like a waste of computational time.
It seems to me that this task could be performed in parallel for a number of person_ids.
So the question is:
Can this kind of task be performed using parallelization in PL/SQL?
What would the solution look like? That is, what special packages (e.g. DBMS_PARALLEL_EXECUTE), keywords (e.g. bulk collect), methods should be used and in what manner?
Also, should I have any concerns about partial failure of parallel updates?
Note that I am not quite familiar with parallel programming with PL/SQL.
Thanks.
Edit 1.
Here is my pseudocode for the sequential solution:
procedure sequential_solution is
cursor persons_of_interest is
select person_id from persons
where status = 3;
tempvalue number;
newvalue number;
begin
for person in persons_of_interest
loop
begin
savepoint personsp;
--step 1
select sum(action_value) into tempvalue
from actions
where calculated_flag = 0
and person_id = person.person_id;
newvalue := dosomemorecalculations(tempvalue);
--step 2
update calculations set computed_value = newvalue, computed_date = sysdate
where person_id = person.person_id;
--step 3
update actions set calculated_flag = 1
where calculated_flag = 0 and person_id = person.person_id;
--step 4 (didn't mention this step before - sorry)
insert into actions
( person_id, action_value, action_date, calculated_flag )
values
( person.person_id, 100, sysdate, 0 );
exception
when others then
rollback to personsp;
-- this call is defined with pragma AUTONOMOUS_TRANSACTION:
log_failure(person.person_id);
end;
end loop;
end;
Now, how would I speed up the above, either with FORALL and BULK COLLECT or with parallel programming, under the following constraints:
proper memory management (taking into consideration the large amount of data)
for a single person, if one part of the step sequence fails, all steps should be rolled back and the failure logged.
I can propose the following. Let's say you have 1,000,000 rows in the persons table, and you want to process 10,000 persons per iteration. You can do it this way:
declare
id_from persons.person_id%type;
id_to persons.person_id%type;
calc_date date := sysdate;
begin
for i in 1 .. 100 loop
id_from := (i - 1) * 10000;
id_to := i * 10000;
-- Updating Calculations table, errors are logged into err$_calculations table
merge into Calculations c
using (select p.person_id, sum(action_value) newvalue
from Actions a join persons p on p.person_id = a.person_id
where a.calculated_flag = 0
and p.status = 3
and p.person_id between id_from and id_to
group by p.person_id) s
on (s.person_id = c.person_id)
when matched then update
set c.computed_value = s.newvalue,
c.computed_date = calc_date
log errors into err$_calculations reject limit unlimited;
-- updating actions table only for those person_id which had no errors:
merge into actions a
using (select distinct p.person_id
from persons p join Calculations c on p.person_id = c.person_id
where c.computed_date = calc_date
and p.person_id between id_from and id_to) s
on (a.person_id = s.person_id)
when matched then update
set a.calculated_flag = 1;
-- inserting list of persons for who calculations were successful
insert into actions (person_id, action_value, action_date, calculated_flag)
select distinct p.person_id, 100, calc_date, 0
from persons p join Calculations c on p.person_id = c.person_id
where c.computed_date = calc_date
and p.person_id between id_from and id_to;
commit;
end loop;
end;
How it works:
You split the data in the persons table into chunks of about 10000 rows (this depends on gaps in the ID numbering; the maximum value of i * 10000 must be known to exceed the maximum person_id)
You make a calculation in the MERGE statement and update the Calculations table
The LOG ERRORS clause prevents the statement from failing on exceptions. If an error occurs, the offending row is not updated but is inserted into an error-logging table, and execution is not interrupted. To create this table, execute:
begin
DBMS_ERRLOG.CREATE_ERROR_LOG('CALCULATIONS');
end;
The table err$_calculations will be created. For more information about the DBMS_ERRLOG package, see the documentation.
The second MERGE statement sets calculated_flag = 1 only for rows where no errors occurred. The INSERT statement then inserts these rows into the actions table. These rows can be found simply by selecting from the Calculations table.
Also, I added the variables id_from and id_to to compute the ID range to update, and the variable calc_date to make sure that all rows updated in the first MERGE statement can be found later by date.
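For completeness, since the question mentions DBMS_PARALLEL_EXECUTE: here is a minimal sketch of how the same chunking could be handed to that package, assuming the per-chunk logic above is wrapped in a hypothetical procedure process_persons(p_from number, p_to number). The package splits PERSONS into person_id ranges and runs the statement in parallel scheduler jobs (this requires the CREATE JOB privilege).
begin
  dbms_parallel_execute.create_task('calc_task');
  -- chunk the persons table into person_id ranges of about 10000
  dbms_parallel_execute.create_chunks_by_number_col(
      task_name    => 'calc_task',
      table_owner  => user,
      table_name   => 'PERSONS',
      table_column => 'PERSON_ID',
      chunk_size   => 10000);
  -- :start_id and :end_id are bound by the package for each chunk
  dbms_parallel_execute.run_task(
      task_name      => 'calc_task',
      sql_stmt       => 'begin process_persons(:start_id, :end_id); end;',
      language_flag  => dbms_sql.native,
      parallel_level => 4);
  dbms_parallel_execute.drop_task('calc_task');
end;
/
Each chunk commits independently, so a partial failure leaves some chunks done and others not; the package records per-chunk status, which addresses the concern about partial failure of parallel updates.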

Using the data from a for loop as a table in the From in a Select

I am converting some code in Access over to Oracle, and one of the queries in Access uses a table that I am unable to use in Oracle. I am unable to create new tables, so I am trying to figure out a way to use the logic behind the table in the FROM section of my select.
The logic of the table is similar to:
FOR i = 1 To 100
number = number + 1
.AddNew
!tbl_number = number
NEXT i
I'm trying to convert this to oracle, and so far I have:
FOR i in 1 .. 100 LOOP
number := number + 1;
--This is where I am stuck; How do I simulate the table part
END LOOP;
I was thinking a cursor or a record would be the answer, but I can't seem to figure out how to implement that. In the end I basically want to have:
SELECT
table.number
FROM
(
--My for loop logic
) table
EDIT
The calculation is a bit more complicated; that was just an example. The values aren't actually sequential, and there isn't really a pattern to the rows.
EDIT
Here is a more complicated version of the for loop which is closer to what I'm actually doing:
FOR i in 1 .. 100 LOOP
number1 := number1 + 7;
number2 := (number2 + 8) / number1;
--This is where I am stuck; How do I simulate the table part
END LOOP;
You could use a recursive query (assuming you are on Oracle 11gR2 or later):
with example(idx, number1, number2) as (
-- Anchor Section
select 1
, 1 -- initial value
, 2 -- initial value
from dual
union all
-- Recursive Section
select prev.idx + 1
, prev.number1 + 7
, (prev.number2 + 8) / prev.number1
from example prev
where prev.idx < 100 -- The Guard
)
select * from example;
In the Anchor section, set all the values for your first record. Then in the Recursive section, set up the logic to determine the next record's values as a function of the prior record's values.
The Anchor section could select the initial values from some other table rather than being hard coded as in my example.
The recursive section needs to select from the named subquery (in this case example) but may also join to other tables as needed.
You need to generate a set with sequential integer numbers. Maybe you can use this (for Oracle 10g and above):
SELECT
LEVEL NUM
FROM
DUAL
CONNECT BY
LEVEL <= 100
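Either generator then drops into the FROM clause as an inline view, which is the shape the question asks for; for example:
SELECT t.num
FROM (SELECT LEVEL AS num
      FROM DUAL
      CONNECT BY LEVEL <= 100) t
WHERE t.num > 50;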

Sorting by value returned by a function in oracle

I have a function that returns a value representing the similarity between tracks. I want the returned result to be ordered by this value, but I cannot figure out how to do it. Here is what I have already tried:
CREATE OR REPLACE PROCEDURE proc_list_similar_tracks(frstTrack IN tracks.track_id%TYPE)
AS
sim number;
res tracks%rowtype;
chosenTrack tracks%rowtype;
BEGIN
select * into chosenTrack from tracks where track_id = frstTrack;
dbms_output.put_line('similarity between');
FOR res IN (select * from tracks WHERE ROWNUM <= 10)LOOP
SELECT * INTO sim FROM ( SELECT func_similarity(frstTrack, res.track_id) from dual order by sim) order by sim; -- this is where I am getting the value and where I am trying to order
dbms_output.put_line( chosenTrack.track_name || '(' ||frstTrack|| ') and ' || res.track_name || '(' ||res.track_id|| ') ---->' || sim);
END LOOP;
END proc_list_similar_tracks;
/
declare
begin
proc_list_similar_tracks(437830);
end;
/
No errors are given; the list is just presented unsorted. Is it not possible to order by a value that was returned by a function? If so, how do I accomplish something like this? Or am I just doing something horribly wrong?
Any help will be appreciated
In the interests of (over-)optimisation I would avoid ordering by a function if I could possibly avoid it; especially one that queries other tables. If you're querying a table you should be able to add that part to your current query, which enables you to use it normally.
However, let's look at your function:
There's no point using DBMS_OUTPUT for anything but debugging unless you're going to be there looking at exactly what is output every time the function is run; you could remove these lines.
The following is used only for a DBMS_OUTPUT and is therefore an unnecessary SELECT and can be removed:
select * into chosenTrack from tracks where track_id = frstTrack;
You're selecting a random 10 rows from the table TRACKS; why?
FOR res IN (select * from tracks WHERE ROWNUM <= 10)LOOP
Your ORDER BY, order by sim, is ordering by a non-existent column as the column SIM hasn't been declared within the scope of the SELECT
Your ORDER BY is asking for the least similar as the default sort order is ascending (this may be correct but it seems wrong?)
Your function is not a function, it's a procedure (one without an OUT parameter).
Your SELECT INTO is attempting to place multiple rows into a single-row variable.
Assuming your "function" is altered to provide the maximum similarity between the parameter and a random 10 TRACK_IDs it might look as follows:
create or replace function list_similar_tracks (
frstTrack in tracks.track_id%type
) return number is
sim number;
begin
select max(func_similarity(frstTrack, track_id)) into sim
from tracks
where rownum <= 10
;
return sim;
end list_similar_tracks;
/
However, the name of the function seems to preclude that this is what you're actually attempting to do.
From your comments, your question is actually:
I have the following code; how do I print the top 10 function results? The current results are returned unsorted.
declare
sim number;
begin
for res in ( select * from tracks ) loop
select * into sim
from ( select func_similarity(var1, var2)
from dual
order by sim
)
order by sim;
end loop;
end;
/
The problem with the above is firstly that you're ordering by the variable sim, which is NULL in the first instance but changes thereafter. However, the select from DUAL is only a single row, which means you're randomly ordering by a single row. This brings us back to my point at the top - use SQL where possible.
In this case you can simply SELECT from the table TRACKS and order by the function result. To do this you need to give the column created by your function result an alias (or order by the positional argument as already described in Emmanuel's answer).
For instance:
select func_similarity(var1, var2) as function_result
from dual
Putting this together the code becomes:
begin
for res in ( select *
from ( select func_similarity(variable, track_id) as f
from tracks
order by f desc
)
where rownum <= 10 ) loop
-- do something
end loop;
end;
/
You have a query using a function, let's say something like:
select t.field1, t.field2, ..., function1(t.field1), ...
from table1 t
where ...
Oracle supports order by clause with column indexes, i.e. if the field returned by the function is the nth one in the select (here, field1 is in position 1, field2 in position 2), you just have to add:
order by n
For instance:
select t.field1, function1(t.field1) c2
from table1 t
where ...
order by 2 /* 2 being the index of the column computed by the function */

How to put more than 1000 values into an Oracle IN clause [duplicate]

This question already has answers here:
SQL IN Clause 1000 item limit
(5 answers)
Closed 8 years ago.
Is there any way to get around the Oracle 10g limitation of 1000 items in a static IN clause? I have a comma-delimited list of IDs that I want to use in an IN clause. Sometimes this list can exceed 1000 items, at which point Oracle throws an error. The query is similar to this...
select * from table1 where ID in (1,2,3,4,...,1001,1002,...)
Put the values in a temporary table and then do a select where id in (select id from temptable)
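For example, a sketch assuming you are allowed to create a global temporary table once up front (the name temptable is just a placeholder):
-- one-time setup; rows vanish at commit
create global temporary table temptable (id number) on commit delete rows;
-- per run: load the ids (from the application, in batches), then query before committing
select * from table1 where id in (select id from temptable);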
select column_X, ... from my_table
where ('magic', column_X ) in (
('magic', 1),
('magic', 2),
('magic', 3),
('magic', 4),
...
('magic', 99999)
) ...
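(This works because the 1000-item limit of ORA-01795 applies only to lists of single expressions; multi-column lists of tuples such as the ('magic', n) pairs above are not subject to that cap.)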
I am almost sure you can split values across multiple INs using OR:
select * from table1 where ID in (1,2,3,4,...,1000) or
ID in (1001,1002,...,2000)
You may try to use the following form:
select * from table1 where ID in (1,2,3,4,...,1000)
union all
select * from table1 where ID in (1001,1002,...)
Where do you get the list of ids from in the first place? Since they are IDs in your database, did they come from some previous query?
When I have seen this in the past it has been because:-
a reference table is missing and the correct way would be to add the new table, put an attribute on that table and join to it
a list of ids is extracted from the database and then used in a subsequent SQL statement (perhaps later or on another server or whatever). In this case, the answer is to never extract it from the database. Either store it in a temporary table or just write one query.
I think there may be better ways to rework this code than just getting this SQL statement to work. If you provide more details you might get some ideas.
Use ... from table(...):
create or replace type numbertype
as object
(nr number(20,10) )
/
create or replace type number_table
as table of numbertype
/
create or replace procedure tableselect
( p_numbers in number_table
, p_ref_result out sys_refcursor)
is
begin
open p_ref_result for
select *
from employees , (select /*+ cardinality(tab 10) */ tab.nr from table(p_numbers) tab) tbnrs
where id = tbnrs.nr;
end;
/
This is one of the rare cases where you need a hint; otherwise Oracle will not use the index on column id. One of the advantages of this approach is that Oracle doesn't need to hard parse the query again and again. Using a temporary table is slower most of the time.
edit 1 simplified the procedure (thanks to jimmyorr) + example
create or replace procedure tableselect
( p_numbers in number_table
, p_ref_result out sys_refcursor)
is
begin
open p_ref_result for
select /*+ cardinality(tab 10) */ emp.*
from employees emp
, table(p_numbers) tab
where tab.nr = id;
end;
/
Example:
set serveroutput on
create table employees ( id number(10),name varchar2(100));
insert into employees values (3,'Raymond');
insert into employees values (4,'Hans');
commit;
declare
l_number number_table := number_table();
l_sys_refcursor sys_refcursor;
l_employee employees%rowtype;
begin
l_number.extend;
l_number(1) := numbertype(3);
l_number.extend;
l_number(2) := numbertype(4);
tableselect(l_number, l_sys_refcursor);
loop
fetch l_sys_refcursor into l_employee;
exit when l_sys_refcursor%notfound;
dbms_output.put_line(l_employee.name);
end loop;
close l_sys_refcursor;
end;
/
This will output:
Raymond
Hans
I wound up here looking for a solution as well.
Depending on the high-end number of items you need to query against, and assuming your items are unique, you could split your query into batches of 1000 items and combine the results on your end instead (pseudocode here):
//remove dupes
items = items.RemoveDuplicates();
//how to break the items into 1000 item batches
batches = new batch list;
batch = new batch;
for (int i = 0; i < items.Count; i++)
{
if (batch.Count == 1000)
{
batches.Add(batch);
batch = new batch; //start a fresh batch so the one just added is not cleared
}
batch.Add(items[i]);
if (i == items.Count - 1)
{
//add the final batch (it has < 1000 items).
batches.Add(batch);
}
}
// now go query the db for each batch
results = new results;
foreach(batch in batches)
{
results.Add(query(batch));
}
This may be a good trade-off in the scenario where you don't typically have over 1000 items, since having over 1000 items would be your "high end" edge-case scenario. For example, in the event that you have 1500 items, two queries of (1000, 500) wouldn't be so bad. This also assumes that each query isn't particularly expensive in its own right.
This wouldn't be appropriate if your typical number of expected items got to be much larger (say, in the 100000 range, requiring 100 queries). If so, then you should probably look more seriously into using the global temporary table solution provided above as the most "correct" solution. Furthermore, if your items are not unique, you would need to resolve duplicate results in your batches as well.
Yes, this is a very weird situation in Oracle.
If you specify 2000 ids inside the IN clause, it will fail.
This fails:
select ...
where id in (1,2,....2000)
But if you simply put the 2000 ids in another table (a temp table, for example), the query below will work:
select ...
where id in (select userId
from temptable_with_2000_ids )
So what you can actually do is split the records into groups of 1000 and execute them group by group.
Here is some Perl code that tries to work around the limit by creating an inline view and then selecting from it. The statement text is compressed by using rows of twelve items each instead of selecting each item from DUAL individually, then uncompressed by unioning together all columns. UNION or UNION ALL in decompression should make no difference here as it all goes inside an IN which will impose uniqueness before joining against it anyway, but in the compression, UNION ALL is used to prevent a lot of unnecessary comparing. As the data I'm filtering on are all whole numbers, quoting is not an issue.
#
# generate the innards of an IN expression with more than a thousand items
#
use English '-no_match_vars';
sub big_IN_list{
@_ < 13 and return join ', ', @_;
my $padding_required = (12 - (@_ % 12)) % 12;
# get first dozen and make length of @_ an even multiple of 12
my ($a,$b,$c,$d,$e,$f,$g,$h,$i,$j,$k,$l) = splice @_,0,12, ( ('NULL') x $padding_required );
my @dozens;
local $LIST_SEPARATOR = ', '; # how to join elements within each dozen
while(@_){
push @dozens, "SELECT @{[ splice @_,0,12 ]} FROM DUAL"
};
$LIST_SEPARATOR = "\n union all\n "; # how to join @dozens
return <<"EXP";
WITH t AS (
select $a A, $b B, $c C, $d D, $e E, $f F, $g G, $h H, $i I, $j J, $k K, $l L FROM DUAL
union all
@dozens
)
select A from t union select B from t union select C from t union
select D from t union select E from t union select F from t union
select G from t union select H from t union select I from t union
select J from t union select K from t union select L from t
EXP
}
One would use that like so:
my $bases_list_expr = big_IN_list(list_your_bases());
$dbh->do(<<"UPDATE");
update bases_table set belong_to = 'us'
where id in ($bases_list_expr)
UPDATE
Instead of using an IN clause, can you try using a JOIN with the other table that the ids come from? That way we don't need to worry about the limit. Just a thought from my side.
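A sketch of that suggestion, assuming the ids already sit in a table (id_source here is a placeholder name):
-- join against the table that already holds the ids, instead of listing them
select t.*
from table1 t
join id_source s on s.id = t.id;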
Instead of SELECT * FROM table1 WHERE ID IN (1,2,3,4,...,1000);
Use this :
SELECT * FROM table1 WHERE ID IN (SELECT rownum AS ID FROM dual connect BY level <= 1000);
*Note that you need to be sure the ID does not refer to any other foreign IDs if this is a dependency. To ensure that only existing ids are used:
SELECT * FROM table1 WHERE ID IN (SELECT distinct(ID) FROM tablewhereidsareavailable);
Cheers
