How to execute an INSERT that's in a SELECT statement? - oracle

For my DB course I have a table, lab4Central, whose columns are: productid, description and plantid.
The plant QRO has an id = 1000, an example: 12799, 'Product 12799', 1000,
and the plant SLP has an id = 2000, e.g.: 29665, 'Product 29665', 2000.
I have to add new records for 2 other plants: GDA and MTY.
For GDA the records are the same as those of plant QRO, but the productid has to be adjusted by +20000; the same for MTY but with the records of SLP, so at the end it will look like:
Plant GDA: 32799, 'Product 32799', 3000.
Plant MTY: 49665, 'Product 49665', 4000.
As you can see, for GDA the record is the same as the one in QRO but with another plantid, and we add 20000 to the productid; the same for MTY.
I wrote this, which gives me the correct values:
SELECT 'INSERT INTO LAB4CENTRAL VALUES('||(PRODUCTID+20000) || ',' || DESCRIPTION || ','|| 3000 ||');' FROM LAB4CENTRAL WHERE PLANTID=1000;
But it's just a SELECT, and I don't know how to execute the INSERT statement so that it inserts the data into the table.
Hope you can help me.

What you want is actually the opposite of what you wrote; an INSERT ... SELECT is probably what you are after.
INSERT INTO LAB4CENTRAL
SELECT ProductID + 20000, 'Product ' || (ProductID + 20000), 3000
FROM LAB4CENTRAL
WHERE PlantID = 1000;
That may need to be tweaked to fit your data, but the basic idea is to write a SELECT statement that gives you the result set that you then want to insert into the table.

Assuming you want to insert the SELECT result into the columns ProductID, Description and your_col_for_3000, you could use an INSERT ... SELECT:
INSERT INTO LAB4CENTRAL(ProductID, Description, your_col_for_3000)
SELECT ProductID + 20000, 'Product ' || (ProductID + 20000), 3000
FROM LAB4CENTRAL
WHERE PlantID = 1000;

It can very well be a trick to generate INSERT statements that are then executed manually, maybe in batches of circa 100 lines.
Otherwise one would do an INSERT ... SELECT statement; note that in Oracle this is allowed even when the SELECT reads from the same table you are inserting into, so generating statements is a convenience rather than a necessity here.
See also OUTPUT TO FILE or such (I do not know at this moment).
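If you do go the statement-generation route, note that the SELECT in the question emits the description without quotes, so the generated INSERTs would not parse. A version that wraps the string column in quotes (same table and columns as the question) would look like:

```sql
SELECT 'INSERT INTO LAB4CENTRAL VALUES ('
       || (PRODUCTID + 20000) || ', '''
       || DESCRIPTION || ''', 3000);'
FROM   LAB4CENTRAL
WHERE  PLANTID = 1000;
```

Each doubled single quote ('') inside the literal produces one quote in the output, so a row 12799, 'Product 12799' generates INSERT INTO LAB4CENTRAL VALUES (32799, 'Product 12799', 3000);.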

Related

oracle trigger exact fetch returns more than requested number of rows

I am trying to get the Quantity from the Transaction table: the quantity of sells and the quantity of buys, using Portfolio_Number, Stock_Code and Buy_Sell to identify the rows.
Transaction Table (Portfolio_Number, Transaction_Date,
Stock_Code, Exchange_Code, Broker_Number, Buy_Sell, Quantity, Price_Per_Share)
create or replace trigger TR_Q5
before insert on Transaction
for each row
declare
  V_quantityB number(7,0);
  V_quantityS number(7,0);
begin
  if :new.buy_sell = 'S' then
    select quantity
    into   V_quantityS
    from   transaction
    where  :new.portfolio_number = portfolio_number
    and    :new.stock_code = stock_code
    and    buy_sell = 'S';
    if V_quantityS >= 1 then
      Raise_Application_Error(-20020, 'not S');
    end if;
  end if;
end;
/
When I try to insert:
INSERT INTO Transaction
(Portfolio_Number, Transaction_Date, Stock_Code, Exchange_Code, Broker_Number, Buy_Sell, Quantity, Price_Per_Share)
values
(500, To_Date('09-Feb-2020 16:41:00', 'DD-Mon-YYYY HH24:MI:SS'), 'IBM', 'TSX', 4, 'S', 10000, 25.55 );
but it shows up the error
exact fetch returns more than requested number of rows
The error you mentioned is self-explanatory: the SELECT you wrote should return just 1 row, but it returns more than that. As you can't put several rows into a scalar number variable, you got the error.
What would fix it? For example, aggregation:
select sum(quantity)
into V_quantityS
...
or perhaps
select distinct quantity
or even
select quantity
...
where rownum = 1
However, beware: the trigger is created on the transaction table, and you are selecting from it at the same time, which leads to the mutating table error. What to do about that? Use a compound trigger.
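A minimal sketch of the compound-trigger approach, assuming the column names from the question (the exact business rule may need adjusting): the row section only records which keys were inserted, and the statement section, where the table is no longer mutating, does the SELECT.

```sql
CREATE OR REPLACE TRIGGER tr_q5
FOR INSERT ON transaction
COMPOUND TRIGGER
  -- remember which (portfolio, stock) pairs were inserted as sells
  TYPE t_key IS RECORD (
    portfolio_number transaction.portfolio_number%TYPE,
    stock_code       transaction.stock_code%TYPE);
  TYPE t_keys IS TABLE OF t_key;
  g_keys t_keys := t_keys();

  BEFORE EACH ROW IS
  BEGIN
    IF :new.buy_sell = 'S' THEN
      g_keys.EXTEND;
      g_keys(g_keys.LAST).portfolio_number := :new.portfolio_number;
      g_keys(g_keys.LAST).stock_code       := :new.stock_code;
    END IF;
  END BEFORE EACH ROW;

  AFTER STATEMENT IS
    v_total NUMBER;
  BEGIN
    -- at statement level the table is no longer mutating, so it can be queried
    FOR i IN 1 .. g_keys.COUNT LOOP
      SELECT NVL(SUM(quantity), 0)
      INTO   v_total
      FROM   transaction
      WHERE  portfolio_number = g_keys(i).portfolio_number
      AND    stock_code       = g_keys(i).stock_code
      AND    buy_sell         = 'S';
      IF v_total >= 1 THEN
        raise_application_error(-20020, 'not S');
      END IF;
    END LOOP;
  END AFTER STATEMENT;
END tr_q5;
/
```

Raising the error in the AFTER STATEMENT section still rolls back the triggering INSERT.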

Group by subquery

I have following query
Select id, name, add1 || ' ' ||add2 address,
case
when subId =1 then 'Maths'
else 'Science'
End,
nvl(col1, col2) sampleCol
From Student_tbl
Where department = 'Student'
I want to group this query by address.
I tried
Group by add1 ,add2 ,id, name, subId, col1, col2
and
Group by add1 || ' ' ||add2,id, name,
case
when subId =1 then 'Maths'
else 'Science'
End,
nvl(col1, col2)
Both GROUP BY versions return the same result. I am unsure which query is right.
Can anybody help me with this?
Always include in the GROUP BY all the columns (in the same form) that you mention in the SELECT statement, except aggregated columns. In your case I would prefer the second approach.
SELECT id
,NAME
,add1 || ' ' || add2 address
,CASE
WHEN subId = 1
THEN 'Maths'
ELSE 'Science'
END
,nvl(col1, col2) sampleCol
FROM Student_tbl
WHERE department = 'Student'
GROUP BY id
,NAME
,add1 || ' ' || add2
,CASE
WHEN subId = 1
THEN 'Maths'
ELSE 'Science'
END
,nvl(col1, col2)
I can't see any aggregated columns in your SELECT. If your select does not require aggregation then you can simply get rid of the GROUP BY. You can use DISTINCT instead in case of duplicate records in your result set.
The two queries will not necessarily give the same result. Which is correct depends on your requirement. Here is an example, using just the new ADDRESS column which you get by aggregating the input columns ADD1 and ADD2.
Suppose in one row you have ADD1 = 123 Main Street, Portland, and ADD2 = Oregon. Then in the output, ADDRESS = 123 Main Street, Portland, Oregon
In another row you have ADD1 = 123 Main Street, and ADD2 = Portland, Oregon. For this row, the resulting ADDRESS is the same.
If you group by ADDRESS the two output rows will land in the same group, but if you group by ADD1, ADD2 they will be in different groups. In this example it is likely you want to group by ADDRESS, but in other, similar-structure cases that wouldn't be what you want or need.
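To make that concrete, here is a small self-contained illustration (table and values invented for the example): the two concatenations produce the same string, so grouping by the concatenated ADDRESS merges rows that grouping by ADD1, ADD2 keeps apart.

```sql
WITH t (add1, add2) AS (
  SELECT '12 Main St, Apt' AS add1, '3'     AS add2 FROM dual UNION ALL
  SELECT '12 Main St,',             'Apt 3'         FROM dual
)
SELECT add1 || ' ' || add2 AS address, COUNT(*) AS cnt
FROM   t
GROUP  BY add1 || ' ' || add2;
-- one row: '12 Main St, Apt 3', cnt = 2
-- GROUP BY add1, add2 would instead return two groups of one row each
```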
After your last comment I think I finally understand what you are expecting from us.
Both queries are correct, basically, because in both approaches your GROUP BY clause lists all the non-aggregated fields present in your SELECT clause.
What you are doing is a bit strange here, because the GROUP BY contains the id; I guess this is the unique identifier of each row, so you are finally not grouping anything. You'll get as many rows as your table contains.
The reason why it returns the same results is purely data based; there might be scenarios where the 2 queries return different results.
In your case, if it returns the same results, it would mean that col1 is never NULL.

construct to be used in a for loop

I have sample Data like:
Table empdata:
Name Desig Sal
-------------------------
john staff 26000
sen owner 50000
smith assistant 10000
I want to print each of the columns like:
Current field value is : John
Current field value is : staff
Current field value is : 26000
Current field value is : sen
Current field value is : owner
Current field value is : 50000.. and so on
I am able to use a cursor to fetch the emp data:
cursor c1 is
select name, desig, sal from empdata;
but I want to iterate over the columns too. I have shown 3 columns here, but there are at least 30 columns in the actual data, and I would like to print each of the fields.
Please help.
Hi, you can use this kind of basic code:
begin
for i in (select * from emp where rownum < 5)
Loop
dbms_output.put_line('Current field value is: '||i.Emp_id);
dbms_output.put_line('Current field value is: '||i.emp_name);
end loop;
end;
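Hard-coding 30 dbms_output calls gets unwieldy. If the goal is to cover every column without naming each one, one option is dynamic SQL over the data dictionary. A sketch, assuming the table is called EMPDATA and every column can be converted with TO_CHAR; it runs one query per row and column, so it is for ad-hoc reporting, not performance:

```sql
DECLARE
  v_val VARCHAR2(4000);
BEGIN
  -- outer loop: one pass per row, identified by rowid
  FOR r IN (SELECT rowid AS rid FROM empdata) LOOP
    -- inner loop: every column of the table, in declaration order
    FOR c IN (SELECT column_name
              FROM   user_tab_columns
              WHERE  table_name = 'EMPDATA'
              ORDER  BY column_id) LOOP
      EXECUTE IMMEDIATE
        'SELECT TO_CHAR(' || c.column_name || ') FROM empdata WHERE rowid = :r'
        INTO v_val USING r.rid;
      dbms_output.put_line('Current field value is : ' || v_val);
    END LOOP;
  END LOOP;
END;
/
```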
If I understand you correctly, I think you're after something like:
select name,
desig,
sal,
(select approved from approval_table apv1 where apv1.data = emp.name) name_aprvd,
(select approved from approval_table apv2 where apv2.data = emp.desig) desig_aprvd,
(select approved from approval_table apv3 where apv3.data = emp.sal) sal_aprvd
from empdata emp;
Quite what you expect to do with the information once you've got it, I'm not sure. Maybe you return this as a cursor? Maybe you pass it into a procedure? I'm not sure, but hopefully you have enough information to sort out your requirement?

Oracle Trigger Subquery problem

CREATE OR REPLACE TRIGGER "DISC_CLIENT"
BEFORE INSERT ON "PURCHASE"
FOR EACH ROW
DECLARE
checkclient PURCHASE.CLIENTNO%TYPE;
BEGIN
SELECT Clientno INTO checkclient
FROM PURCHASE
GROUP BY ClientNo
HAVING SUM(Amount)=(SELECT MAX(SUM(Amount)) FROM PURCHASE GROUP BY Clientno);
IF :new.ClientNo = checkclient
new.Amount := (:old.Amount * 0.90);
END IF;
END;
/
I seem to be having a problem with this trigger. I know I can't use the WHEN() clause with subqueries, so I was hoping this would work, but it doesn't! Ideas, anyone? :/
Basically I'm trying to get this trigger to apply a discount to the amount value before inserting, if the client matches the top client! : )
There's a non-pretty but easy way round this: create a view and update that. You can then explicitly state all the columns in your trigger and put them in the table.
You'd also be much better off creating a 1 row, 2 column table, max_amount, and then inserting the maximum amount and clientno into that each time. You should also really have a discounted amount column in the purchase table, as you ought to know who you've given discounts to; the amount charged is then amount - discount. This gets around both the mutating table and being unable to update :new.amount, as well as making your queries much, much faster.
As it stands you don't actually apply a discount if the current transaction is the highest, only if the client has placed the previous highest, so I've written it like that.
create or replace view purchase_view as
select *
from purchase;
CREATE OR REPLACE TRIGGER TR_PURCHASE_INSERT
BEFORE INSERT ON PURCHASE_VIEW
FOR EACH ROW
DECLARE
checkclient max_amount.clientno%type;
checkamount max_amount.amount%type;
discount purchase.discount%type;
BEGIN
SELECT clientno, amount
INTO checkclient, checkamount
FROM max_amount;
IF :new.clientno = checkclient then
discount := 0.1 * :new.amount;
ELSIF :new.amount > checkamount then
update max_amount
set clientno = :new.clientno
, maxamount = :new.amount
;
END IF;
-- Don't specify columns so it breaks if you change
-- the table and not the trigger
insert into purchase
values ( :new.clientno
, :new.amount
, discount
, :new.other_column );
END TR_PURCHASE_INSERT;
/
As I remember, a trigger can't select from the table it's fired on.
Otherwise you'll get ORA-04091: table XXXX is mutating, trigger/function may not see it. Tom Kyte advises us not to put too much logic into triggers.
And if I understand your query, it should be like this:
SELECT Clientno INTO checkclient
FROM PURCHASE
GROUP BY ClientNo
HAVING SUM(Amount)=(select max (sum_amount) from (SELECT SUM(Amount) as sum_amount FROM PURCHASE GROUP BY Clientno));
This way it will return the client who spent the most money.
But I think it's better to do it this way:
select ClientNo
from (
select ClientNo, sum (Amount) as sum_amount
from PURCHASE
group by ClientNo
order by sum_amount desc)
where rownum = 1

How to put more than 1000 values into an Oracle IN clause [duplicate]

This question already has answers here:
SQL IN Clause 1000 item limit
(5 answers)
Closed 8 years ago.
Is there any way to get around the Oracle 10g limitation of 1000 items in a static IN clause? I have a comma-delimited list of many IDs that I want to use in an IN clause. Sometimes this list can exceed 1000 items, at which point Oracle throws an error. The query is similar to this...
select * from table1 where ID in (1,2,3,4,...,1001,1002,...)
Put the values in a temporary table and then do a select where id in (select id from temptable)
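A sketch of that approach, assuming a global temporary table you create once up front (table and column names invented here):

```sql
-- one-time setup: session-private contents, cleared on commit
CREATE GLOBAL TEMPORARY TABLE temp_ids (id NUMBER)
  ON COMMIT DELETE ROWS;

-- per query: load the ids from the application (batch inserts in practice),
-- then use a subquery instead of a literal IN list
INSERT INTO temp_ids (id) VALUES (1);
SELECT *
FROM   table1
WHERE  id IN (SELECT id FROM temp_ids);
```

The subquery form has no 1000-item limit, and the optimizer can join against temp_ids rather than parsing a huge literal list.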
The 1000-item limit applies only to lists of single scalar expressions; multi-column (tuple) IN lists are not subject to it, so pairing each value with a constant gets around the limit:
select column_X, ... from my_table
where ('magic', column_X ) in (
('magic', 1),
('magic', 2),
('magic', 3),
('magic', 4),
...
('magic', 99999)
) ...
I am almost sure you can split values across multiple INs using OR:
select * from table1 where ID in (1,2,3,4,...,1000) or
ID in (1001,1002,...,2000)
You may try to use the following form:
select * from table1 where ID in (1,2,3,4,...,1000)
union all
select * from table1 where ID in (1001,1002,...)
Where do you get the list of ids from in the first place? Since they are IDs in your database, did they come from some previous query?
When I have seen this in the past, it has been because:
- a reference table is missing; the correct way would be to add the new table, put an attribute on that table and join to it, or
- a list of ids is extracted from the database and then used in a subsequent SQL statement (perhaps later or on another server or whatever). In this case, the answer is to never extract it from the database: either store it in a temporary table or just write one query.
I think there may be better ways to rework this code that just getting this SQL statement to work. If you provide more details you might get some ideas.
Use ...from table(... :
create or replace type numbertype
as object
(nr number(20,10) )
/
create or replace type number_table
as table of numbertype
/
create or replace procedure tableselect
( p_numbers in number_table
, p_ref_result out sys_refcursor)
is
begin
open p_ref_result for
select *
from employees , (select /*+ cardinality(tab 10) */ tab.nr from table(p_numbers) tab) tbnrs
where id = tbnrs.nr;
end;
/
This is one of the rare cases where you need a hint, else Oracle will not use the index on column id. One of the advantages of this approach is that Oracle doesn't need to hard parse the query again and again. Using a temporary table is most of the time slower.
edit 1 simplified the procedure (thanks to jimmyorr) + example
create or replace procedure tableselect
( p_numbers in number_table
, p_ref_result out sys_refcursor)
is
begin
open p_ref_result for
select /*+ cardinality(tab 10) */ emp.*
from employees emp
, table(p_numbers) tab
where tab.nr = id;
end;
/
Example:
set serveroutput on
create table employees ( id number(10),name varchar2(100));
insert into employees values (3,'Raymond');
insert into employees values (4,'Hans');
commit;
declare
l_number number_table := number_table();
l_sys_refcursor sys_refcursor;
l_employee employees%rowtype;
begin
l_number.extend;
l_number(1) := numbertype(3);
l_number.extend;
l_number(2) := numbertype(4);
tableselect(l_number, l_sys_refcursor);
loop
fetch l_sys_refcursor into l_employee;
exit when l_sys_refcursor%notfound;
dbms_output.put_line(l_employee.name);
end loop;
close l_sys_refcursor;
end;
/
This will output:
Raymond
Hans
I wound up here looking for a solution as well.
Depending on the high-end number of items you need to query against, and assuming your items are unique, you could split your query into batches of 1000 items and combine the results on your end instead (pseudocode here):
//remove dupes
items = items.RemoveDuplicates();
//how to break the items into 1000 item batches
batches = new batch list;
batch = new batch;
for (int i = 0; i < items.Count; i++)
{
if (batch.Count == 1000)
{
batches.Add(batch);
batch = new batch; // don't Clear() here: that would also empty the batch just added to batches
}
batch.Add(items[i]);
if (i == items.Count - 1)
{
//add the final batch (it has < 1000 items).
batches.Add(batch);
}
}
// now go query the db for each batch
results = new results;
foreach(batch in batches)
{
results.Add(query(batch));
}
This may be a good trade-off in the scenario where you don't typically have over 1000 items, as having over 1000 items would be your "high end" edge-case scenario. For example, in the event that you have 1500 items, two queries of (1000, 500) wouldn't be so bad. This also assumes that each query isn't particularly expensive in its own right.
This wouldn't be appropriate if your typical number of expected items got to be much larger - say, in the 100000 range - requiring 100 queries. If so, then you should probably look more seriously into using the global temporary tables solution provided above as the most "correct" solution. Furthermore, if your items are not unique, you would need to resolve duplicate results in your batches as well.
Yes, a very weird situation for Oracle: if you specify 2000 ids inside the IN clause, it will fail.
this fails:
select ...
where id in (1,2,....2000)
but if you simply put the 2000 ids in another table (a temp table for example), it will work with the below query:
select ...
where id in (select userId
from temptable_with_2000_ids )
Alternatively, you could split the records into groups of 1000 and execute them group by group.
Here is some Perl code that tries to work around the limit by creating an inline view and then selecting from it. The statement text is compressed by using rows of twelve items each instead of selecting each item from DUAL individually, then uncompressed by unioning together all columns. UNION or UNION ALL in decompression should make no difference here as it all goes inside an IN which will impose uniqueness before joining against it anyway, but in the compression, UNION ALL is used to prevent a lot of unnecessary comparing. As the data I'm filtering on are all whole numbers, quoting is not an issue.
#
# generate the innards of an IN expression with more than a thousand items
#
use English '-no_match_vars';
sub big_IN_list{
@_ < 13 and return join ', ',@_;
my $padding_required = (12 - (@_ % 12)) % 12;
# get first dozen and make length of @_ an even multiple of 12
my ($a,$b,$c,$d,$e,$f,$g,$h,$i,$j,$k,$l) = splice @_,0,12, ( ('NULL') x $padding_required );
my @dozens;
local $LIST_SEPARATOR = ', '; # how to join elements within each dozen
while(@_){
push @dozens, "SELECT @{[ splice @_,0,12 ]} FROM DUAL"
};
$LIST_SEPARATOR = "\n union all\n "; # how to join @dozens
return <<"EXP";
WITH t AS (
select $a A, $b B, $c C, $d D, $e E, $f F, $g G, $h H, $i I, $j J, $k K, $l L FROM DUAL
union all
@dozens
)
select A from t union select B from t union select C from t union
select D from t union select E from t union select F from t union
select G from t union select H from t union select I from t union
select J from t union select K from t union select L from t
EXP
}
One would use that like so:
my $bases_list_expr = big_IN_list(list_your_bases());
$dbh->do(<<"UPDATE");
update bases_table set belong_to = 'us'
where id in ($bases_list_expr)
UPDATE
Instead of using an IN clause, can you try using a JOIN with the other table that supplies the ids? That way we don't need to worry about the limit. Just a thought from my side.
Instead of SELECT * FROM table1 WHERE ID IN (1,2,3,4,...,1000);
Use this :
SELECT * FROM table1 WHERE ID IN (SELECT rownum AS ID FROM dual connect BY level <= 1000);
*Note that you need to be sure the ID does not refer to any other foreign IDs if this is a dependency. To ensure only existing ids are used:
SELECT * FROM table1 WHERE ID IN (SELECT distinct(ID) FROM tablewhereidsareavailable);
Cheers
