Inserting a record in PL/SQL block using for loop is yielding error - oracle

I am creating a table of integers called 'integer_properties' with values from 1 to 1000. The table columns are:
integer, isPrime, isOdd, isEven, digitCount
I want to insert the records using a for loop. I tried the following, but I get the error: 'missing SELECT keyword'
BEGIN
for k in 1..1000
loop
insert into integer_properties(integer,
isPrime,
isEven,
isOdd,
digitCount)
values(k,null,null,null,null);
end loop;
END;
It is tedious to enter 1000 numbers with individual DML statements, so I am trying to insert the loop variable values into the integer column from a PL/SQL block. Is it possible to do that?

You can do it in a single query.
insert into integer_properties
select level, null, null, null, null
from dual
connect by level <= 1000;
commit;
LEVEL is a pseudocolumn used in hierarchical queries.
Your code seems fine otherwise, but using INTEGER as a column name is likely what is causing the problem, since INTEGER is a reserved word in Oracle. If you enclose it in double quotes, it will work. Even so, it is better to avoid reserved words when naming columns.
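For illustration, a minimal sketch of the quoted-identifier workaround (this assumes the table was created with the quoted column name "INTEGER"; quoted identifiers are case-sensitive, so the quotes must match the case used at creation):

```sql
BEGIN
  FOR k IN 1 .. 1000 LOOP
    -- "INTEGER" must be double-quoted because INTEGER is a reserved word
    INSERT INTO integer_properties ("INTEGER", isPrime, isEven, isOdd, digitCount)
    VALUES (k, NULL, NULL, NULL, NULL);
  END LOOP;
  COMMIT;
END;
/
```

Renaming the column to something like int_value avoids the quoting altogether.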


How to duplicate rows with a column change for a list of tables?

Given the following example:
BEGIN
FOR r IN (
SELECT * FROM table_one WHERE change_id = 0
) LOOP
r.change_id := -1;
INSERT INTO table_one VALUES r;
END LOOP;
END;
This inserts new rows into table_one with exactly the same content, except for the intended change of column change_id to the value -1. I don't have to list the columns inside the script, as I would in an INSERT INTO table_one (change_id, ...) SELECT -1, ... FROM table_one WHERE change_id = 0;
It works perfectly fine. But how can I modify this script to work with a list of tables? The internal structures of those tables differ, but all of them have the required change_id column.
Of course, the easiest solution would be to copy and paste this snippet x times and replace the fixed table name inside. But is there a way to work with a list of tables in an array?
My approach was like this:
DECLARE
TYPE tablenamearray IS VARRAY(30) OF VARCHAR2(30);
tablenames tablenamearray;
BEGIN
tablenames := tablenamearray('TABLE_ONE', 'TABLE_TWO', 'TABLE_THREE'); -- up to table 30...
FOR i IN tablenames.first..tablenames.last LOOP
/* Found no option to use tablenames(i) here with dynamic SQL */
END LOOP;
END;
Note: There is no technical primary key such as an id backed by a sequence. The primary key is built from three columns, including the change_id column.
You cannot create a SQL statement where the statement is not known at parse time. So, you cannot have a variable as a table name. What you're looking for is Dynamic SQL, which is a fairly complicated topic, but basically you're going to wind up building a SQL statement with DBMS_SQL or running a statement as a string with EXECUTE IMMEDIATE.
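As an illustration only (not from the original answer), here is a sketch of the EXECUTE IMMEDIATE approach. It builds each table's select list from the data dictionary, swapping change_id for the literal -1, so no column list has to be hard-coded per table; the table names are assumptions:

```sql
DECLARE
  TYPE tablenamearray IS VARRAY(30) OF VARCHAR2(30);
  tablenames tablenamearray := tablenamearray('TABLE_ONE', 'TABLE_TWO', 'TABLE_THREE');
  l_cols VARCHAR2(4000);
BEGIN
  FOR i IN tablenames.FIRST .. tablenames.LAST LOOP
    -- Build the select list from the data dictionary,
    -- replacing CHANGE_ID with the literal -1.
    SELECT LISTAGG(CASE WHEN column_name = 'CHANGE_ID'
                        THEN '-1'
                        ELSE column_name END, ', ')
             WITHIN GROUP (ORDER BY column_id)
      INTO l_cols
      FROM user_tab_columns
     WHERE table_name = tablenames(i);

    EXECUTE IMMEDIATE
      'INSERT INTO ' || tablenames(i) ||
      ' SELECT ' || l_cols ||
      ' FROM ' || tablenames(i) ||
      ' WHERE change_id = 0';
  END LOOP;
END;
/
```

Because the column list is generated per table, the same loop body works for tables with different structures, as long as each has a change_id column.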

oracle plsql chunking the rows

I have a select statement which returns 0 or more rows.
I'm trying to write a PL/SQL procedure with a cursor that produces XML output from all the rows returned, 100 rows at a time. I'm chunking 100 rows at a time based on a requirement.
So basically my program should follow below logic
cursor c1 is select id, code_id, code_desc from my_table order by id; -- returns some rows
if c1%notfound
then return; -- exit from procedure
else
loop
  grab the first 100 rows from the select and append them to a variable;
  update this variable into a clob field in a table;
  grab the next 100 rows and append them into a variable;
  update this variable into a clob field in another row of the table; see the table data below
  and so on
  grab the remaining rows, append them into a variable
  and print the variable;
until no data found;
exit;
I'm trying to do convert the output from select statement into xml text.
The output should look something like below:
TABLE: STG_XML_DATA
LOOP_NO(NUMBER), XML_TEXT(CLOB), ROWS_PROCESSED
1 <XML><ID>1</ID><ID>2</ID>..<ID>100</ID></XML> 100
2 <XML><ID>101</ID><ID>102</ID>..<ID>200</ID></XML> 200
3 <XML><ID>201</ID><ID>202</ID>..<ID>220</ID></XML> 20
Can someone please help
First of all, can you do this with a single INSERT ... SELECT statement that does what you want with reasonable performance? If you're doing a million rows, yes, breaking them up into chunks may be a good idea. But if it's 100, that might be your best bet.
For your actual question, you want to use BULK COLLECT into a collection variable and possibly FORALL. So your function is going to look something like this:
DECLARE
TYPE id_tt IS TABLE OF NUMBER;
TYPE desc_tt IS TABLE OF VARCHAR2(100);
l_ids id_tt;
l_code_ids id_tt;
l_code_descs desc_tt;
CURSOR c1 IS SELECT id, code_id, code_desc FROM my_table ORDER BY id;
BEGIN
OPEN c1;
LOOP
FETCH c1 BULK COLLECT INTO l_ids, l_code_ids, l_code_descs
LIMIT 100;
EXIT WHEN l_ids.COUNT = 0;
FORALL idx IN 1..l_ids.COUNT
INSERT [... some insert statement here ...]
[... maybe some other processing here...]
END LOOP;
CLOSE c1;
END;
What you absolutely do not want to do is fetch a row, process it, fetch another row, etc. SQL is a set-oriented language, so try to operate on sets. Every time you switch context from SQL to PL/SQL there is a cost and it can kill your performance.
See: http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52plsql-1709862.html
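To address the XML-chunking part of the question specifically, here is a sketch (not from the original answer) that combines BULK COLLECT ... LIMIT with simple string aggregation; the source table name my_table and the STG_XML_DATA column names are taken from the question and otherwise assumed:

```sql
DECLARE
  CURSOR c1 IS SELECT id FROM my_table ORDER BY id;
  TYPE id_tt IS TABLE OF NUMBER;
  l_ids     id_tt;
  l_xml     CLOB;
  l_loop_no PLS_INTEGER := 0;
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 BULK COLLECT INTO l_ids LIMIT 100;
    EXIT WHEN l_ids.COUNT = 0;
    l_loop_no := l_loop_no + 1;
    -- Build one <XML> document per chunk of up to 100 ids.
    l_xml := '<XML>';
    FOR i IN 1 .. l_ids.COUNT LOOP
      l_xml := l_xml || '<ID>' || l_ids(i) || '</ID>';
    END LOOP;
    l_xml := l_xml || '</XML>';
    INSERT INTO stg_xml_data (loop_no, xml_text, rows_processed)
    VALUES (l_loop_no, l_xml, l_ids.COUNT);
  END LOOP;
  CLOSE c1;
  COMMIT;
END;
/
```

The last chunk naturally contains fewer than 100 rows, which matches the final row in the sample output. For large documents, XMLELEMENT/XMLAGG would be a more robust way to build the XML, at the cost of more setup.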

Function results column names to be used in select statement

I have a function which returns column names, and I am trying to use the returned column name as part of my select statement, but my results come back as the column name itself instead of the values.
FUNCTION returning column name:
get_col_name(input1, input2)
Can I use this query to get the results of that column from the table:
SELECT GET_COL_NAME(input1,input2) FROM TABLE;
There are a few ways to run dynamic SQL directly inside a SQL statement. These techniques should be avoided since they are usually complicated, slow, and buggy. Before you do this try to find another way to solve the problem.
The below solution uses DBMS_XMLGEN.GETXML to produce XML from a dynamically created SQL statement, and then uses XML table processing to extract the value.
This is the simplest way to run dynamic SQL in SQL, and it only requires built-in packages. The main limitation is that the number and type of columns is still fixed. If you need a function that returns an unknown number of columns you'll need something more powerful, like the open source program Method4. But that level of dynamic code gets even more difficult and should only be used after careful consideration.
Sample schema
--drop table table1;
create table table1(a number, b number);
insert into table1 values(1, 2);
commit;
Function that returns column name
create or replace function get_col_name(input1 number, input2 number) return varchar2 is
begin
if input1 = 0 then
return 'a';
else
return 'b';
end if;
end;
/
Sample query and result
select dynamic_column
from
(
select xmltype(dbms_xmlgen.getxml('
select '||get_col_name(0,0)||' dynamic_column from table1'
)) xml_results
from dual
)
cross join
xmltable
(
'/ROWSET/ROW'
passing xml_results
columns dynamic_column varchar2(4000) path 'DYNAMIC_COLUMN'
);
DYNAMIC_COLUMN
--------------
1
If you change the inputs to the function the new value is 2 from column B. Use this SQL Fiddle to test the code.

Writing Procedure to enforce constraints + Testing

I need to enforce a constraint that a user cannot enter any more records after entering 5 records in a single month. Would it be advisable to write a trigger or a procedure for that? Or is there another way to set up the constraint?
Instead of writing a trigger I have opted to write a procedure for the constraint, but how do I check that the procedure is working?
Below is the procedure:
CREATE OR REPLACE PROCEDURE InsertReadingCheck
(
newReadNo In Int,
newReadValue In Int,
newReaderID In Int,
newMeterID In Int
)
AS
varRowCount Int;
BEGIN
Select Count(*) INTO varRowCount
From Reading
WHERE ReaderID = newReaderID
AND Trunc(ReadDate,'mm') = Trunc(Sysdate,'mm');
IF (varRowCount >= 5) THEN
BEGIN
DBMS_OUTPUT.PUT_LINE('*************************************************');
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE(' You are attempting to enter more than 5 Records ');
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('*************************************************');
ROLLBACK;
RETURN;
END;
ELSIF (varRowCount < 5) THEN
BEGIN
INSERT INTO Reading
VALUES(seqReadNo.NextVal, sysdate, newReadValue,
newReaderID, newMeterID);
COMMIT;
END;
END IF;
END;
Can anyone help me look through this?
This is the sort of thing that you should avoid putting in a trigger. Especially the ROLLBACK and the COMMIT. This seems extremely dangerous (and I'm not even sure whether it's possible). You might have other transactions that you wish to commit that you rollback or vice versa.
Also, by putting this in a trigger you are going to get the following error:
ORA-04091: table XXXX is mutating, trigger/function may not see it
There are ways round this but they're excessive and involve doing something funky in order to get round Oracle's insistence that you do the correct thing.
This is the perfect opportunity to use a stored procedure to insert data into your table. You can check the number of current records prior to doing the insert meaning that there is no need to do a ROLLBACK.
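To sketch that idea (an illustration, not the original poster's code): the procedure does the count check before the insert and raises an error instead of rolling back, leaving transaction control to the caller. The table and sequence names are taken from the question:

```sql
CREATE OR REPLACE PROCEDURE insert_reading_check
(
  newReadValue IN INT,
  newReaderID  IN INT,
  newMeterID   IN INT
)
AS
  varRowCount INT;
BEGIN
  SELECT COUNT(*)
    INTO varRowCount
    FROM Reading
   WHERE ReaderID = newReaderID
     AND TRUNC(ReadDate, 'mm') = TRUNC(SYSDATE, 'mm');

  IF varRowCount >= 5 THEN
    -- Signal the violation to the caller; no ROLLBACK needed
    RAISE_APPLICATION_ERROR(-20001,
      'Reader ' || newReaderID || ' already has 5 readings this month');
  END IF;

  INSERT INTO Reading
  VALUES (seqReadNo.NEXTVAL, SYSDATE, newReadValue, newReaderID, newMeterID);
END;
/
```

Note that this check is not safe under concurrent sessions (two sessions can both count 4 and insert), which is what the declarative inserts_check approach below addresses.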
It depends on your application: if the insert statement already appears in many places in your application, then a trigger is the better option.
This is a behavior constraint. It's a matter of opinion, but I would err on the side of keeping this kind of business logic OUT of your database. I would instead keep track of who added which records in the records table, and on what days/times. You can have a stored procedure to get this information, but then your code-behind should decide whether the user can see certain links (or functions) based on the data that's returned. Whether that means keeping the user from accessing the page(s) where they insert records, or giving them read-only views, is up to you.
One declarative way you could solve this problem that would obey all concurrency rules is to use a separate table to keep track of number of inserts per user per month:
create table inserts_check (
ReaderID integer not null,
month date not null,
number_of_inserts integer constraint max_number_of_inserts check (number_of_inserts <= 5),
primary key (ReaderID, month)
);
Then create a trigger on the table (or all tables) for which inserts should be capped at 5:
create trigger trg_count_inserts
after insert on <table>
for each row
begin
  merge into inserts_check t
  using (select :new.ReaderID as ReaderID,
                trunc(sysdate, 'MM') as month,
                1 as number_of_inserts
           from dual) s
  on (t.ReaderID = s.ReaderID and t.month = s.month)
  when matched then update set t.number_of_inserts = t.number_of_inserts + 1
  when not matched then insert (ReaderID, month, number_of_inserts)
    values (s.ReaderID, s.month, s.number_of_inserts);
end;
Once the user has made 5 inserts, the constraint max_number_of_inserts will fail.

plsql cursor via straight sql

I'm looking at the following two ways of retrieving a value and storing it later via an insert statement, i.e. via PL/SQL cursors or via direct SQL.
Is there any advantage to either approach? Or is there a more efficient approach?
Approach 1
Cursor system_date
Is
select sysdate as date_value from dual;
system_date_rec system_date%rowtype;
Open system_date;
Fetch system_date into system_date_rec;
Close system_date;
Insert into my_table(dateValue)
values(system_date_rec.date_value);
Approach 2
dateString varchar2(20);
Select sysdate into dateString from dual;
Insert into my_table(dateValue)
values(dateString);
How about approach 3:
Insert into my_table(dateValue)
values(sysdate);
or assuming you did actually need to do a select to get the data:
Insert into my_table(dateValue)
select dateValue from other_table where ...;
Regarding whether an explicit cursor or a SELECT INTO is preferable when one or the other is needed, I would go for the SELECT INTO because it is neater and safer if you expect the query to return exactly one row:
select some_value
into l_var
from other_table
where ...;
if l_var = 'A' then
do_something;
end if;
Now you will get an exception (NO_DATA_FOUND or TOO_MANY_ROWS) if the number of rows returned is not as expected. With the cursor you will just end up with l_var unchanged, or set to the value from the first matching row - which probably means you've got a bug but don't know it.
Each approach has its merits, but if it's one and only one value you are getting then I'd go with select ... into ..., as this is much simpler and will check that you have one and only one value.
Although Tony's approach is possibly preferable to both in the right circumstances.
If you also want to get the value back there is always the RETURNING clause of the insert statement.
my_date_value date;
...
INSERT into my_table(datevalue)
values (sysdate)
returning datevalue into my_date_value;
I'd agree with #Tony and #MikeyByCrikey that select ... into is generally preferable, not least - in my personal, subjective opinion - because it keeps the select and into together instead of having the select out of sight up in the declare section. Not really an issue if it's simple but you've suggested you're doing several big queries and manipulations, which implies a longish procedure.
Slightly off-topic, but if all the manipulations are to gather data for a single insert at the end, then rather than having lots of separate variables I'd consider declaring a single variable as a row type and updating the columns as appropriate:
declare
l_row my_table%ROWTYPE;
begin
select ... into l_row.column1;
select ... into l_row.column2;
if l_row.column2 = 'A' then
/* do something */
end if;
l_row.column3 := 'somevalue';
fetch ... into l_row.column4;
/* etc */
insert into my_table values l_row;
end;
