PL/SQL Bind Variable in an Insert Statement

VARIABLE dept_id NUMBER
SET AUTOPRINT ON
DECLARE
  max_dept  departments.department_id%TYPE;
  dept_name departments.department_name%TYPE := 'Revenue';
BEGIN
  SELECT MAX(department_id)
  INTO max_dept
  FROM departments;
  :dept_id := max_dept + 10;
  INSERT INTO departments (department_id, department_name, location_id)
  VALUES (:dept_id, dept_name, NULL);
END;
Returns error
Error report: ORA-01400: cannot insert NULL into
("HR"."DEPARTMENTS"."DEPARTMENT_ID") ORA-06512: at line 13
01400. 00000 - "cannot insert NULL into (%s)"
*Cause:

I'm going to suggest something quite different here. That approach is doomed to failure once your application gets out "in the wild".
Let's say your application is a huge success and now you have dozens of people all using it at the same time, and let's assume 1000 is currently the highest department number.
Now we have 20 people all doing, at roughly the same time:
SELECT MAX(department_id)
INTO max_dept
FROM departments;
They will all get 1000 as a result, and they will all then try to insert 1010 into the table. One of two things will then happen:
a) all except one of them will get an error due to a primary key violation, or
b) you will have multiple rows all with dept=1010.
Neither of these is great. This is why we have a thing called a sequence, which can guarantee to give you unique values. You just do:
create sequence DEPT_SEQ;
and then do your inserts:
INSERT INTO departments (department_id,department_name,location_id)
VALUES(dept_seq.nextval,dept_name,NULL);
There are even easier mechanisms (google for "oracle identity column"), but this hopefully explains the way forward and will save you from the problems with your current approach.
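For completeness, here is a hedged sketch of both ideas, reusing the table and sequence names above; the departments_demo table is purely illustrative. A RETURNING clause still captures the generated key into the original bind variable, and an identity column (Oracle 12c and later) removes the need for an explicit sequence:
-- capture the sequence-generated key into the bind variable from the question
INSERT INTO departments (department_id, department_name, location_id)
VALUES (dept_seq.nextval, 'Revenue', NULL)
RETURNING department_id INTO :dept_id;

-- Oracle 12c+: the database generates the key itself, no sequence to manage
CREATE TABLE departments_demo (
  department_id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  department_name VARCHAR2(30),
  location_id     NUMBER
);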

Related

Will a row still be written even if 'ORA-01438: value larger than specified precision allowed for this column' error occurs?

When I put in a number of a certain size and submit the form I get the ORA-01438 error. It seems all of the documents are written to the db but I can't be sure. Maybe the error is occurring in a POST that I am not privy to?
Additional context: the site gives a 500 error and I am forced to return to the home page to do anything else. When I try to do another form submission I get a Hibernate error stating:
org.hibernate.HibernateException: null index column for collection: com.collection.name.here
A quick test: table contains two columns; one is varchar2(2), another is number(2):
SQL> create table test (col_c varchar2(2), col_n number(2));
Table created.
Insert a string longer than 2 characters:
SQL> insert into test (col_c) values ('abc');
insert into test (col_c) values ('abc')
*
ERROR at line 1:
ORA-12899: value too large for column "SCOTT"."TEST"."COL_C" (actual: 3,
maximum: 2)
Insert a number:
SQL> insert into test (col_n) values (123);
insert into test (col_n) values (123)
*
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column
Both raised an error. You said that
It seems all of the documents are written to the db but I can't be sure
Well, check it! How? By selecting from a table.
SQL> select * from test;
no rows selected
SQL>
Nope, nothing was inserted, so I'd bet on the same outcome in your case. If the error was raised, your insert failed.
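As a side note, Oracle's statement-level atomicity means only the failed statement is rolled back; earlier successful statements in the same transaction survive. A quick sketch against the same test table:
insert into test (col_n) values (1);    -- succeeds
insert into test (col_n) values (123);  -- fails with ORA-01438
select * from test;                     -- the first row is still there; only
                                        -- the failed statement was rolled back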
As for the Hibernate issues, no idea; I don't know Hibernate.

how to make a trigger like primary key constraint?

I need to define a trigger which I want to apply to a column of a table. The trigger should restrict the user from inputting duplicate and null values. Or you can say, I need to know the logic of a primary key.
Just because you seem intent on seeing this fail, and not to take anything away from APC's points, this appears to work at first glance as long as it's a before trigger:
create table t42 (id number);
create trigger trig42
before insert or update on t42
for each row
declare
  c number;
begin
  if :new.id is null then
    raise_application_error(-20001, 'ID is null');
  end if;
  select count(*) into c from t42 where id = :new.id;
  if c > 0 then
    raise_application_error(-20002, 'ID is not unique');
  end if;
end;
/
It compiles and if you insert data you get the behaviour you seem to want:
insert into t42 values (1);
1 rows inserted.
insert into t42 values (1);
Error starting at line 20 in command:
insert into t42 values (1)
Error report:
SQL Error: ORA-20002: ID is not unique
ORA-06512: at "STACKOVERFLOW.TRIG42", line 9
ORA-04088: error during execution of trigger 'STACKOVERFLOW.TRIG42'
insert into t42 values (null);
Error starting at line 22 in command:
insert into t42 values (null)
Error report:
SQL Error: ORA-20001: ID is null
ORA-06512: at "STACKOVERFLOW.TRIG42", line 5
ORA-04088: error during execution of trigger 'STACKOVERFLOW.TRIG42'
select * from t42;
ID
----------
1
Which seems to do what you want. But not if you have more than one session. I haven't committed in this session; in another session I can do:
insert into t42 values (1);
1 row created.
select * from t42;
ID
----------
1
1 row selected.
Hmm, that's strange. Well, maybe it's deferred... let's commit them both:
commit;
select * from t42;
ID
----------
1
1
2 rows selected.
Oops. One session can't see another session's uncommitted data, so this will never work.
Also, the mutating table problem exhibits itself when we insert multiple rows in a single statement:
SQL> insert into t42 select level+1 from dual connect by level <= 5;
insert into t42 select level+1 from dual connect by level <= 5
*
ERROR at line 1:
ORA-04091: table STACKOVERFLOW.T42 is mutating, trigger/function may not see it
ORA-06512: at "STACKOVERFLOW.TRIG42", line 7
ORA-04088: error during execution of trigger 'STACKOVERFLOW.TRIG42'
SQL>
Double oops.
Even with an after trigger and a package to work around the mutating table issue, you'd still have this problem (I think), unless you lock the whole table for every insert or update. As APC said, the constraint is implemented deep in the bowels of the database, not at this level.
is it not possible to define a trigger, which checks the value before
insertion that it should not be null and unique as well?
Not when you have more than one session, no. And even within one session, unless you have an index on the column, the performance won't scale, as the count(*) will get progressively slower. And if you do have an index, well, why not make it a unique index in the first place?
Finally, from the trigger design guidelines:
Do not create triggers that duplicate database features.
For example, do not create a trigger to reject invalid data if you can
do the same with constraints (see "How Triggers and Constraints
Differ").
" i want to learn, how primary key is made(it is a trigger of course)"
There is no "of course" about it. A constraint is not a trigger. It is an internal process which uses an index and a lot of low level activity to enforce relational constraints in a reliable and efficient manner.
If you want to learn, the rules are quite straightforward: not null, uniqueness, serialization. So just try to implement a primary key in triggers. You'll find you can't (spoiler alert!) because of the "mutating table" problem. And if you don't understand what that means, well, there's a good topic to read about.
there is a question "is it not possible to define a trigger, which
checks the value before insertion that it should not be null and
unique as well? "
The answer to that question is: no. Well, you could code a trigger-based implementation, but like other "mutating table" workarounds it would require a package and AFTER statement triggers (so technically not before insertion).
But seriously, what would be the point? You won't learn anything about how primary keys actually work. And mutating tables almost always point to a poor data model, and that would certainly be the case here.
A primary key is not a trigger. It is a key: it identifies the whole row, which is why it must be unique (and implicitly not null). It is "primary" because it is the candidate key that is most appropriate, by your decision, to be the main reference key for your table. You can add it with ALTER TABLE your_table_name ADD CONSTRAINT PK_your_table_name PRIMARY KEY (your_key_column).
If you do not want to add a primary key like that (which is a bad idea), but want to add a unique index to that table: CREATE UNIQUE INDEX UQ_IX_your_table_your_column ON your_table_name (unique_column_name).
The NOT NULL constraint should be put on the column.
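Tying that back to the t42 example above, a minimal sketch of the declarative approach (the constraint name is illustrative, and it assumes a fresh t42 without the duplicate rows created earlier):
alter table t42 add constraint pk_t42 primary key (id);

insert into t42 values (null);
-- fails with ORA-01400: cannot insert NULL into the key column

insert into t42 values (1);
insert into t42 values (1);
-- the second insert fails with ORA-00001: unique constraint violated,
-- and the check holds across sessions and multi-row inserts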

plsql for inserting a table

declare
  dno number(4);
  dname varchar2(5);
  ddate date;
  dbasic number(10);
  djob varchar2(15);
  dcomm number(5);
  dept number(5);
  dmgr number(5);
begin
  select empno,ename,hiredate,sal,job1,comm,deptno,mgr
  into dno,dname,ddate,dbasic,djob,dcomm,dept,dmgr
  from emp
  where empno=&userno;
  if sql%rowcount>0
  then
    insert into newempl
    values(dno,dname,djob,dmgr,ddate,dbasic,dcomm,dept);
    dbms_output.put_line('records inserted into it');
    dbms_output.put_line(dno||' '||dname||' '||ddate||' '||dbasic);
  end if;
end;
Error report:
ORA-01858: a non-numeric character was found where a numeric was expected
ORA-06512: at line 19
01858. 00000 - "a non-numeric character was found where a numeric was expected"
*Cause: The input data to be converted using a date format model was
incorrect. The input data did not contain a number where a number was
required by the format model.
*Action: Fix the input data or the date format model to make sure the
elements match in number and type. Then retry the operation.
I do not understand what the error is.
From the error message it looks like you're inserting values into the wrong columns. Without seeing your table structure (from describe newempl, for example) this is a bit of a guess, but this statement:
insert into newempl
values(dno,dname,djob,dmgr,ddate,dbasic,dcomm,dept);
... is assuming that the columns in the newempl table are in a certain order, which may not be (and appears not to be) the case. More specifically, I think it's complaining about hiredate: you're implicitly putting the djob value in that column (assuming the new table looks like emp), and the djob value can't be converted into a date.
Update based on comment: from how you said you created the table, this is equivalent to:
insert into newempl(dno, dname, ddate, dbasic, djob, dcomm, dept, dmgr)
values(dno,dname,djob,dmgr,ddate,dbasic,dcomm,dept);
... so as you can see when it's laid out like that the columns are not aligned, and you are indeed trying to put your djob value into the ddate column, which won't work.
It is always safer to explicitly specify the columns, both to prevent problems with different ordering in different environments (though that shouldn't really happen with controlled code) and to prevent this breaking if a new column is added. Something like:
insert into newempl(empno,ename,job1,mgr,hiredate,sal,comm,deptno)
values(dno,dname,djob,dmgr,ddate,dbasic,dcomm,dept);
As an aside, when declaring your local variables you could specify them based on the table, for example dno emp.empno%TYPE. And as another aside, based on your comment, I'd recommend giving local variables different names from the table columns to avoid confusion.
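For illustration, a sketch of that %TYPE anchoring, using the variable and column names from the question:
declare
  dno    emp.empno%TYPE;     -- each variable inherits its column's exact
  dname  emp.ename%TYPE;     -- datatype, length, precision and scale
  ddate  emp.hiredate%TYPE;
  dbasic emp.sal%TYPE;       -- the remaining variables follow the same pattern
begin
  select empno, ename, hiredate, sal
  into dno, dname, ddate, dbasic
  from emp
  where empno = &userno;
end;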
As a_horse_with_no_name said, this can be done with a simple SQL insert, and even within a PL/SQL block it doesn't need separate select and insert statements; you could just do:
insert into newempl(empno,ename,job1,mgr,hiredate,sal,comm,deptno)
select empno,ename,job1,mgr,hiredate,sal,comm,deptno
from emp
where empno=&userno;
Unfortunately, none of this addresses the requirement that 'the employees who are managers must be inserted into new table', since you aren't doing anything with the mgr column. I don't think it would be constructive to do that part of the task for you at this point though, and I'm not sure where &userno fits into that.

Overcoming the restriction on bulk inserts over a database link

It appears as though there's an implementation restriction that forbids the use of forall .. insert in Oracle when used over a database link. This is a simple example to demonstrate:
connect schema/password@db1
create table tmp_ben_test (
a number
, b number
, c date
, constraint pk_tmp_ben_test primary key (a, b)
);
Table created.
connect schema/password@db2
Connected.
declare
  type r_test is record ( a number, b number, c date);
  type t__test is table of r_test index by binary_integer;
  t_test t__test;
  cursor c_test is
    select 1, level, sysdate
    from dual
    connect by level <= 10
    ;
begin
  open c_test;
  fetch c_test bulk collect into t_test;
  forall i in t_test.first .. t_test.last
    insert into tmp_ben_test@db1
    values t_test(i)
    ;
  close c_test;
end;
/
Very confusingly, this fails in 9i with the following error:
ERROR at line 1: ORA-01400: cannot insert NULL into
("SCHEMA"."TMP_BEN_TEST"."A") ORA-02063: preceding line from DB1
ORA-06512: at line 18
It was only after checking in 11g that I realised this was an implementation restriction:
ERROR at line 18: ORA-06550: line 18, column 4: PLS-00739: FORALL
INSERT/UPDATE/DELETE not supported on remote tables
The really obvious way round this is to change forall .. to:
for i in t_test.first .. t_test.last loop
  insert into tmp_ben_test@db1
  values t_test(i);
end loop;
but I'd rather keep it down to a single insert if at all possible. Tom Kyte suggests the use of a global temporary table. Inserting the data into a GTT and then over a DB link seems like massive overkill for a set of data that is already in a user-defined type.
Just to clarify, this example is extremely simplistic compared to what is actually happening. There is no way we would be able to do a simple insert into, and there is no way all the operations could be done in a GTT. Large parts of the code have to be done with user-defined types.
Is there another, simpler or less DMLy, way around this restriction?
What restrictions do you face on the remote database? If you can create objects there you have a workaround: on the remote database create the collection type and a procedure which takes the collection as a parameter and executes the FORALL statement.
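A hedged sketch of that idea, assuming you can create a package on db1, with names mirroring the example above; the key trick is that the collection type comes from the remote package spec and is referenced through the database link:
-- on db1, alongside the target table
create or replace package remote_ins as
  type r_test is record (a number, b number, c date);
  type t_test is table of r_test index by binary_integer;
  procedure bulk_ins (p_rows in t_test);
end remote_ins;
/
create or replace package body remote_ins as
  procedure bulk_ins (p_rows in t_test) is
  begin
    forall i in p_rows.first .. p_rows.last
      insert into tmp_ben_test values p_rows(i); -- local table, so FORALL is legal
  end bulk_ins;
end remote_ins;
/

-- on db2: fill a collection of the remote type, then make a single remote call
declare
  t_rows remote_ins.t_test@db1;
begin
  select 1, level, sysdate
  bulk collect into t_rows
  from dual
  connect by level <= 10;
  remote_ins.bulk_ins@db1(t_rows);
end;
/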
Alternatively, if you create the t__test/r_test types in db2 and then create public synonyms for them on db1, you should be able to call a procedure from db1 to db2, filling the collection and returning it to db1. Then you should be able to insert into your local table.
I'm assuming you would use packaged types and procedures in the real world, not anonymous blocks.
Also, it would not be the ideal solution for big datasets; there a GTT or similar would be better.

ORA-04091: table [blah] is mutating, trigger/function may not see it

I recently started working on a large complex application, and I've just been assigned a bug due to this error:
ORA-04091: table SCMA.TBL1 is mutating, trigger/function may not see it
ORA-06512: at "SCMA.TRG_T1_TBL1_COL1", line 4
ORA-04088: error during execution of trigger 'SCMA.TRG_T1_TBL1_COL1'
The trigger in question looks like
create or replace TRIGGER TRG_T1_TBL1_COL1
BEFORE INSERT OR UPDATE OF t1_appnt_evnt_id ON TBL1
FOR EACH ROW
WHEN (NEW.t1_prnt_t1_pk is not null)
DECLARE
  v_reassign_count number(20);
BEGIN
  select count(t1_pk) INTO v_reassign_count from TBL1
  where t1_appnt_evnt_id=:new.t1_appnt_evnt_id and t1_prnt_t1_pk is not null;
  IF (v_reassign_count > 0) THEN
    RAISE_APPLICATION_ERROR(-20013, 'Multiple reassignments not allowed');
  END IF;
END;
The table has a primary key "t1_pk", an "appointment event id"
t1_appnt_evnt_id and another column "t1_prnt_t1_pk" which may or may
not contain another row's t1_pk.
It appears the trigger is trying to make sure that nobody else with the
same t1_appnt_evnt_id has referred to the same row this row is referring to, if this one is referring to another row.
The comment on the bug report from the DBA says "remove the trigger, and perform the check in the code", but unfortunately they have a proprietary code generation framework layered on top of Hibernate, so I can't even figure out where it actually gets written out, so I'm hoping that there is a way to make this trigger work. Is there?
I think I disagree with your description of what the trigger is trying to
do. It looks to me like it is meant to enforce this business rule: For a
given value of t1_appnt_event, only one row can have a non-NULL value of
t1_prnt_t1_pk at a time. (It doesn't matter if they have the same value in the second column or not.)
Interestingly, it is defined for UPDATE OF t1_appnt_event but not for the other column, so I think someone could break the rule by updating the second column, unless there is a separate trigger for that column.
There might be a way you could create a function-based index that enforces this rule so you can get rid of the trigger entirely. I came up with one way but it requires some assumptions:
The table has a numeric primary key
The primary key and the t1_prnt_t1_pk are both always positive numbers
If these assumptions are true, you could create a function like this:
create or replace function f( a number, b number ) return number deterministic as
begin
  if a is null then return 0-b; else return a; end if;
end;
/
and an index like this:
CREATE UNIQUE INDEX my_index ON my_table
( t1_appnt_event, f( t1_prnt_t1_pk, primary_key_column) );
So rows where the t1_prnt_t1_pk column is NULL would appear in the index with the inverse of the primary key as the second value, so they would never conflict with each other. Rows where it is not NULL would use the actual (positive) value of the column. The only way you could get a constraint violation would be if two rows had the same non-NULL values in both columns.
This is perhaps overly "clever", but it might help you get around your problem.
Update from Paul Tomblin: I went with the update to the original idea that igor put in the comments:
CREATE UNIQUE INDEX cappec_ccip_uniq_idx
ON tbl1 (t1_appnt_event,
CASE WHEN t1_prnt_t1_pk IS NOT NULL THEN 1 ELSE t1_pk END);
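With either index in place, the uniqueness check happens inside the database engine, so it holds across sessions. A sketch of the behaviour, with illustrative column values:
-- first row with a non-NULL t1_prnt_t1_pk for event 123: indexed as (123, 1)
insert into tbl1 (t1_pk, t1_appnt_event, t1_prnt_t1_pk) values (10, 123, 456);
-- a second such row maps to (123, 1) again and fails with ORA-00001
insert into tbl1 (t1_pk, t1_appnt_event, t1_prnt_t1_pk) values (11, 123, 789);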
I agree with Dave that the desired result probably can and should be achieved using built-in constraints such as unique indexes (or unique constraints).
If you really need to get around the mutating table error, the usual way to do it is to create a package which contains a package-scoped variable that is a table of something that can be used to identify the changed rows (I think ROWID is possible; otherwise you have to use the PK. I don't use Oracle currently, so I can't test it). The FOR EACH ROW trigger then fills this variable with all rows that are modified by the statement, and then an AFTER statement trigger reads the rows and validates them.
Something like this (the syntax may well be off; I haven't worked with Oracle for a few years):
CREATE OR REPLACE PACKAGE trigger_pkg AS
  PROCEDURE before_stmt_trigger;
  PROCEDURE for_each_row_trigger(p_rowid IN ROWID);
  PROCEDURE after_stmt_trigger;
END trigger_pkg;
/
CREATE OR REPLACE PACKAGE BODY trigger_pkg AS
  TYPE rowid_tbl IS TABLE OF ROWID;
  modified_rows rowid_tbl;

  PROCEDURE before_stmt_trigger IS
  BEGIN
    modified_rows := rowid_tbl();  -- reset the list at the start of each statement
  END before_stmt_trigger;

  PROCEDURE for_each_row_trigger(p_rowid IN ROWID) IS
  BEGIN
    modified_rows.EXTEND;
    modified_rows(modified_rows.COUNT) := p_rowid;
  END for_each_row_trigger;

  PROCEDURE after_stmt_trigger IS
  BEGIN
    FOR i IN 1 .. modified_rows.COUNT LOOP
      SELECT ... INTO ... FROM the_table WHERE rowid = modified_rows(i);
      -- do whatever you want to
    END LOOP;
  END after_stmt_trigger;
END trigger_pkg;
/
CREATE OR REPLACE TRIGGER before_stmt_trigger BEFORE INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.before_stmt_trigger;
END;
/
CREATE OR REPLACE TRIGGER after_stmt_trigger AFTER INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.after_stmt_trigger;
END;
/
CREATE OR REPLACE TRIGGER for_each_row_trigger
BEFORE INSERT OR UPDATE ON mytable
FOR EACH ROW
WHEN (new.mycolumn IS NOT NULL)
BEGIN
  trigger_pkg.for_each_row_trigger(:new.rowid);
END;
/
With any trigger-based (or application-code-based) solution you need to
put in locking to prevent data corruption in a multi-user environment.
Even if your trigger worked, or was re-written to avoid the mutating table
issue, it would not prevent 2 users from simultaneously updating
t1_appnt_evnt_id to the same value on rows where t1_prnt_t1_pk is not
null. Assume there are currently no rows where t1_appnt_evnt_id=123 and
t1_prnt_t1_pk is not null:
Session 1> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =456;
/* OK, trigger sees count of 0 */
Session 2> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =789;
/* OK, trigger sees count of 0 because
session 1 hasn't committed yet */
Session 1> commit;
Session 2> commit;
You now have a corrupted database!
The way to avoid this (in trigger or application code) would be to lock
the parent row in the table referenced by t1_appnt_evnt_id=123 before performing the check:
select appe_id
into v_app_id
from parent_table
where appe_id = :new.t1_appnt_evnt_id
for update;
Now session 2's trigger must wait for session 1 to commit or rollback before it performs the check.
It would be much simpler and safer to implement Dave Costa's index!
Finally, I'm glad no one has suggested adding PRAGMA AUTONOMOUS_TRANSACTION to your trigger: this is often suggested on forums, and it works insofar as the mutating table issue goes away, but it makes the data integrity problem even worse! So just don't...
I had a similar error with Hibernate, and flushing the session by using
getHibernateTemplate().saveOrUpdate(o);
getHibernateTemplate().flush();
solved the problem for me. (I'm not posting my code block, as I was sure everything was written properly and should work, but it did not until I added the flush() call.) Maybe this can help someone.
