How can I use my generated triggers from the SQL*Plus console in Oracle Forms Builder? - oracle

I've created the following table:
create table Citizens_lic
(
No NUMBER(10) ,
ID NUMBER(10)
constraint Citizens_ID_pk_1 PRIMARY KEY,
F_Name VARCHAR2(32) ,
M_Name VARCHAR2(32) ,
L_Name VARCHAR2(32) ,
DOB DATE ,
POF VARCHAR2(32) ,
GENDER VARCHAR2(32) ,
Soc_status VARCHAR2(32) ,
work_status VARCHAR2(32) ,
ISS_DATE date ,
EXP_Date date
)
Then I generated some triggers
for the id, no, iss_date and exp_date columns
from the SQL*Plus command line, as shown in the following screenshots.
After that, all the triggers work fine from SQL*Plus: values for columns like id and no are generated automatically as soon as a row is inserted.
Now I want to show you where I got stuck!
I went to Oracle Forms Builder
and made this form,
then deleted the ID, No, Iss_date and Exp_date item boxes, because there's no need for them; each one should already be populated by a trigger.
Then I ran it:
unable to insert!
Now let's get to the bottom of this, can anyone help? :)

Part of the problem is that you have written four triggers for one event (well, strictly three triggers, because you have two scripts with the same trigger name, but I assume that is just a cut'n'paste blooper and you really intended the fourth script to create a trigger called citizens_lic_trigg_4). Another part of the problem is that you have two triggers populating :new.no and no trigger populating :new.id, which, being the primary key, must be populated.
Four triggers firing on insert causes four times as much overhead as one trigger firing. So it's better to have just one trigger, for performance reasons. But it also makes it easier to avoid the errors in your code, because scanning one script is simpler than scanning four. Particularly when you're just editing the cached statement (ed afiedt.buf) so you can't eyeball all four scripts (*).
So, a better implementation would be:
create or replace trigger citizens_lic_trigg
before insert on citizens_lic
for each row
begin
/* or maybe these two assignments should be the other way round??? */
:new.id := citizens_lic_seq_1.nextval;
:new.no := round(dbms_random.value(1000500000,1099999999));
:new.iss_date := sysdate;
:new.exp_date := sysdate + (365*5);
end;
(*) Unless you take a screenshot after each edit, as you have done here. But that's really inefficient: in the long run you will find it beneficial to have separate named files for each script, so you can save them in source control.
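For reference, the trigger assumes the sequence from your own scripts already exists; if not, a minimal sketch would be (the start value is illustrative):
create sequence citizens_lic_seq_1
start with 1
increment by 1;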

Two thoughts: both of your triggers _1 and _2 populate :new.no, and in your second screenshot you create trigger _3 twice. I'd say your problem is that you provide no value for your PK, the id column. HTH

Related

"Sequence generated" Is Not Shown to be in Order? Oracle forms

execute_query is applied to show all records; the sequence works fine, but the records are not in descending order.
Please help if there's any way to order this data block by the sequence-populated No column.
create table Citizens_lic
(
No NUMBER(10) ,
ID NUMBER(10)
constraint Citizens_ID_pk_1 PRIMARY KEY,
F_Name VARCHAR2(32) ,
M_Name VARCHAR2(32) ,
L_Name VARCHAR2(32) ,
DOB DATE ,
POB VARCHAR2(32) ,
GENDER VARCHAR2(32) ,
WORK_STATUS VARCHAR2(32) ,
Soc_status VARCHAR2(32) ,
ISS_DATE date ,
EXP_Date date
)
this is the table, here is the sequence:
CREATE SEQUENCE CITIZENS_LIC_NO_SEQ_1
START WITH 1
INCREMENT BY 1
here is the trigger:
CREATE OR REPLACE TRIGGER CITIZENS_LIC_NO_TRIGG_1
BEFORE INSERT ON CITIZENS_LIC
FOR EACH ROW
BEGIN
SELECT CITIZENS_LIC_NO_SEQ_1.NEXTVAL
INTO :new.NO
FROM DUAL;
END;
Try to add any value and it will work fine, but when you add values from Forms Builder, the order changes according to your mouse clicks.
Another matter: when I try to delete anything from the table,
the sequence's order is ruined and the deleted value disappears, along with its sequence number, forever!
Maybe a Forms trigger would help, but I don't know which one to use.
If you want to sort rows in a data block, open its Properties palette, find the ORDER BY property, and put whatever you want in there. In your case, it seems that it would be
order by no desc
When you execute a query in that data block, the result will be sorted by the no column in descending order.
As for deleting rows: of course the number will be lost. What did you expect? Sequences guarantee a unique, but not gapless, list of numbers. Mind the cache, too; you don't even have to delete any rows, as two consecutive sessions might produce gaps. That's how sequences work; I'd just accept it, if I were you.
If you want to create gapless numbers, you'll have to write your own code and it won't be as simple as you think. You'll have to pay attention to inserts (which is simple), updates and deletes. Once again: stick to sequences.
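Just to illustrate why it isn't simple: a gapless scheme usually means a counter table plus serialized access, something like this sketch (the table, trigger and counter names are made up, and every concurrent insert now waits on the lock):
create table no_counter
( name    varchar2(30) primary key,
  last_no number not null
);

insert into no_counter values ('CITIZENS_LIC_NO', 0);

create or replace trigger citizens_lic_no_gapless
before insert on citizens_lic
for each row
declare
  l_no number;
begin
  -- lock the counter row; concurrent inserts queue up here until commit
  select last_no + 1 into l_no
  from no_counter
  where name = 'CITIZENS_LIC_NO'
  for update;

  update no_counter
  set last_no = l_no
  where name = 'CITIZENS_LIC_NO';

  :new.no := l_no;
  -- and deletes will still leave gaps unless you renumber afterwards
end;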

Hung up on For In Loop and variable for external table loader

I have to load a number of files every day into our database system. My solution was to use a Java procedure to generate a table of all the files in the directory folder, then loop through each of them with the external table loader. I'm running into two hangups with this:
Declare
what_to_load VARCHAR2(255);
CURSOR folder_contents
IS
select filename
from database.DIR_LIST
where filename like 'DCOpenOrders_%'
and filename like '%.csv';
BEGIN
DELETE FROM database.DIR_LIST;
database.GET_DIR_LIST( 'directory_path_files_are_in' );
FOR each_record IN folder_contents
LOOP
what_to_load := each_record.filename;
EXECUTE IMMEDIATE 'DROP table database.my_table';
execute immediate 'CREATE table database.my_table
(Region VARCHAR2(10),
District VARCHAR2(10),
Originating_Store VARCHAR2(80),
Order_Date VARCHAR2(30),
Ship_Location VARCHAR2(10),
Orig_Ord_No VARCHAR2(30),
Field_G VARCHAR2(30),
Line_No VARCHAR2(10),
POS_UPC VARCHAR2(30),
Item_Descr VARCHAR2(80),
Ord_Qty VARCHAR2(10),
Line_Status VARCHAR2(30),
Report_Date VARCHAR2(30),
Ship_Type VARCHAR2(30),
ERR_FLAG VARCHAR2(10),
ERR_LOG VARCHAR2(800)
)
ORGANIZATION EXTERNAL
( type oracle_loader
default directory WORK_DIR
access parameters
( records delimited by NEWLINE
skip 1
fields terminated by '',''
optionally enclosed by ''"''
missing FIELD VALUES are NULL)
location ('''||each_record.filename||''')
)
reject limit unlimited';
Execute Immediate 'Grant All on database.my_table to USER';
-- merge statement goes here
End Loop;
commit;
end;
Again, the idea is that every time this runs it will get the new list of csv files into the dir_list table with the Java procedure get_dir_list; then, for every file name, I set the variable equal to it and use the variable in the external table loader to load up the file.
I'm running into problems:
EDIT: OK, after making the corrections below to the cursor row identification, I now hit a point where, on the second pass through, my cursor appears to be wrong or missing. The loop runs just fine if the only action is a put_line, but with an execute immediate statement in there, such as the Grant All, as soon as it completes one pass it throws ORA-08103 at the top of the loop and refuses to go on.
I'm also aware of an Ask Tom thread on this (https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:37593123416931) that says to use the alter table command. However, when I try that, it doesn't accept my attempt:
execute immediate 'alter table database.my_table location('''||filename||''')';
throws an error (plus I'd still need another loop there to put the name of the current file into the external loader).
Any suggestions or help? I should note that we are on Windows, not Unix (since most solutions people offer assume the latter), and I can't grab another program or module to do the job due to approval restrictions (since that seems to be another common solution).
Thanks!
For your first problem, your cursor loop variable is confusingly called filename. You are referring to that record directly, instead of to the column from the cursor. Changing the name slightly to make it a little clearer:
FOR filenames IN folder_contents
LOOP
what_to_load := filenames.filename;
The rest is less obvious, but it isn't going to be happy that you're dropping and recreating the table in the middle of a block that refers to it statically. You need to make all references dynamic:
execute immediate 'Grant All on database.my_table ...';
-- grant to who/what? and why?
And your merge will have to be dynamic too. At least unless you can get the alter table to work, but you haven't said what the problem is with that. Actually, from what you posted, that's the same cursor variable reference problem:
execute immediate 'alter table database.my_table location('''||filenames.filename||''')';
If you aren't dropping/creating the table in the block, and create it once statically and just alter it, then you can use a static merge - just the alter needs to be dynamic.
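A sketch of that shape, reusing names from your post (and assuming database.my_table was created once, statically, before this block ever runs):
DECLARE
  CURSOR folder_contents IS
    select filename
    from database.DIR_LIST
    where filename like 'DCOpenOrders_%'
    and filename like '%.csv';
BEGIN
  DELETE FROM database.DIR_LIST;
  database.GET_DIR_LIST( 'directory_path_files_are_in' );
  FOR filenames IN folder_contents LOOP
    -- only the ALTER needs to be dynamic; the table itself persists
    EXECUTE IMMEDIATE 'alter table database.my_table location ('''
      || filenames.filename || ''')';
    -- static merge statement goes here
  END LOOP;
END;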
A simpler approach might be to create the external table once, with a specific fixed name; loop through the list of real files; and for each of those in turn, rename or copy that to the fixed file name and perform the merge. Each time you query the external table it rereads the file anyway, so changing its contents in the background is OK. Dropping/recreating or even altering the table then wouldn't be necessary.
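A sketch of that fixed-name idea, assuming the external table was created once against a made-up file name current.csv in WORK_DIR (the copy could equally be done at OS level):
BEGIN
  FOR filenames IN ( select filename
                     from database.DIR_LIST
                     where filename like 'DCOpenOrders_%'
                     and filename like '%.csv' ) LOOP
    -- overwrite the fixed file the external table points at; the file
    -- is reread on every query of the external table
    UTL_FILE.FCOPY('WORK_DIR', filenames.filename, 'WORK_DIR', 'current.csv');
    -- static merge against database.my_table goes here
  END LOOP;
END;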
You could also, as that Ask Tom post mentions, supply all the file names to the external table at once, as they have the same structure, either with the drop/create or with the alter approach.

PL/SQL Stored Procedure create tables

I've been tasked with improving old PL/SQL and Oracle SQL legacy code. In all there are around 7000 lines of code! One aspect of the existing code that really surprises me is the previous coder needlessly created hundreds of lines of code by not writing any procedures or functions - instead the coder essentially repeats the same code throughout.
For example, in the existing code there are literally 40 or more repetitions of the following SQL:
CREATE TABLE tmp_clients
AS
SELECT * FROM live.clients;
CREATE TABLE tmp_customers
AS
SELECT * FROM live.customers;
CREATE TABLE tmp_suppliers
AS
SELECT * FROM live.suppliers WHERE type_id = 1;
and many, many more.....
I'm very new to writing in PL/SQL, though I have recently purchased the excellent book "Oracle PL/SQL programming" by Steven Feuerstein. However, as far as I can tell, I should be able to write a callable procedure such as:
procedure create_temp_table (new_table_nme in varchar2(60),
source_table in varchar2(60))
IS
s_query varchar2(100);
BEGIN
s_query := 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
execute immediate s_query;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -955 THEN
NULL;
ELSE
RAISE;
END IF;
END;
I would then simply call the procedure as follows:
create_temp_table('tmp.clients', 'live.clients');
create_temp_table('tmp.customers', 'live.customers');
Is my proposed approach reasonable given the problem as stated?
Are the datatypes in the procedure call reasonable, i.e. should varchar2(60) be used, or is it possible to force the source_table parameter to be a table name in the schema? What happens if the table name is more than 60 characters?
I want to be able to pass a third, non-required parameter in cases where the data has to be restricted in a trivial way, i.e. to deal with cases like "WHERE type_id = 1". How do I modify the procedure to include a parameter that is only used occasionally, and how would I modify the rest of the code? I would probably add some sort of IF/ELSE statement to check whether the third parameter is NULL and then construct s_query accordingly (see the sketch after my questions below).
How would I check that the table has actually been created successfully?
I want to trap for two other exceptions, namely
The new table (eg 'tmp.clients') already exists; and
The source table doesn't exist.
Does the EXCEPTION as written handle these cases?
More generally, from where can I obtain the SQL error codes and their meanings?
Any suggested improvements to the code would be gratefully received.
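For question 3, this is the kind of thing I have in mind (untested sketch):
procedure create_temp_table (new_table_nme in varchar2,
source_table in varchar2,
where_clause in varchar2 default null)
IS
s_query varchar2(500);
BEGIN
s_query := 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
IF where_clause IS NOT NULL THEN
s_query := s_query || ' WHERE ' || where_clause;
END IF;
execute immediate s_query;
END;
called as, for example:
create_temp_table('tmp.suppliers', 'live.suppliers', 'type_id = 1');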
You could get rid of a lot of code (gradually!) by using GLOBAL temporary tables.
Execute immediate is not a bad practice, but if there are other options then they should be used. Global temp tables are common where you want to extract and transform data but, once processed, don't need it anymore until the next load. Each session can only see the data it inserts, and less redo is generated than for a permanent table. You can index the data for faster querying if required.
Something like this
-- Create table
create global temporary table GT_CLIENTS
(
id NUMBER(10) not null,
Client_id NUMBER(10) not null,
modified_by_id NUMBER(10),
transaction_id NUMBER(10),
local_transaction_id VARCHAR2(30) not null,
last_modified_date_tz TIMESTAMP(6) WITH TIME ZONE not null
)
on commit preserve rows;
I recommend the on commit preserve rows option so that you can debug your procedure and see what went into the table.
Usage would be
INSERT INTO GT_CLIENTS
SELECT * FROM live.clients;
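Assuming you create matching GTT definitions for the other tables (gt_customers and gt_suppliers below are illustrative names), the repeated CREATE TABLE blocks from the question collapse to plain inserts on each run:
INSERT INTO gt_customers SELECT * FROM live.customers;
INSERT INTO gt_suppliers SELECT * FROM live.suppliers WHERE type_id = 1;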
If this is the route you want to take to minimize changes, note that the error for "source table does not exist" is -942, which you will want to stop for rather than continue, as your temp table would not have been created. Similarly, just continuing when you get an "object already exists" error will be problematic, as you will not have reloaded the table with the new data: the create failed, so the table still holds the data from the last run. So I would definitely do some more thinking about your exception handler.
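If you do keep the create_temp_table approach, here is a sketch of the body with a handler along those lines, using named exceptions (ORA-00955 = name already in use, ORA-00942 = table or view does not exist):
procedure create_temp_table (new_table_nme in varchar2,
source_table in varchar2)
IS
s_query varchar2(200);
e_name_in_use exception;
e_no_such_table exception;
pragma exception_init(e_name_in_use, -955);
pragma exception_init(e_no_such_table, -942);
BEGIN
s_query := 'CREATE TABLE ' || new_table_nme || ' AS SELECT * FROM ' || source_table;
execute immediate s_query;
EXCEPTION
WHEN e_name_in_use THEN
-- the table is left over from a previous run: drop and recreate
-- so you don't silently keep stale data
execute immediate 'DROP TABLE ' || new_table_nme;
execute immediate s_query;
WHEN e_no_such_table THEN
RAISE; -- source table missing: nothing was created, let the caller know
END;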
That said, I also concur that this is generally not the best way to do things. Creating and dropping objects in a multi-user environment is a disaster in the making, and seems a silly waste of resources when there are more appropriate options available.

create complex trigger on table

I have a table participants having structure as shown below:
Pid number
name varchar2(20)
version number
Whenever I insert any record into the participants table, version = 1 gets populated.
For example, if I insert pid = 1, name = 'Gaurav', then a record with version = 1 is created in the participants table.
Now my issue is with updates on the participants table.
Suppose I update name = 'Niharika' for pid = 1 in the participants table; then a new record with pid = 1, name = 'Niharika' and version = 2 needs to be created in the same table.
If I again update name = 'Rohan' for pid = 1, a new record with pid = 1, name = 'Rohan' and version = 3 needs to be created.
How can I achieve this? Clearly speaking, I need to get max(version) + 1 for the pid that is being updated.
I can achieve this using a view and an INSTEAD OF trigger to insert into the view, but I am not satisfied with that solution.
I have also created a compound trigger, but even that is not working for me, because inside the trigger I need an insert statement on that same table, and that gives me a recursive (mutating table) error.
You should really have two tables. Make one with the structure you described a "logging" table: it will keep the history of all the records. Have another table which is considered "current", which is the same but without the version column. Then, when inserts/updates occur on the "current" table's records, have a mechanism (a trigger, for example) SELECT FOR UPDATE the max(version) in the logging table, add one, and insert into the logging table. This way, you're not going to run into mutating-table errors or anything weird like that. There is a bit of serialization this way, but it's the closest to what you're trying to do.
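A minimal sketch of that design (participants_hist is an illustrative name for the logging table; here an insert-select computes max(version) + 1 in one statement instead of an explicit SELECT FOR UPDATE, with the primary key on (pid, version) backstopping any race):
CREATE TABLE participants_hist
( pid     NUMBER,
  name    VARCHAR2(20),
  version NUMBER,
  CONSTRAINT participants_hist_pk PRIMARY KEY (pid, version)
);

CREATE OR REPLACE TRIGGER participants_hist_trg
AFTER INSERT OR UPDATE OF name ON participants
FOR EACH ROW
BEGIN
  -- writes go to the logging table, not participants itself,
  -- so there is no mutating-table problem
  INSERT INTO participants_hist (pid, name, version)
  SELECT :new.pid, :new.name, NVL(MAX(version), 0) + 1
  FROM participants_hist
  WHERE pid = :new.pid;
END;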
Not usually recommended, but here's how you can do it anyway, with no extra logging table(s):
CREATE or REPLACE
TRIGGER part_upd
AFTER UPDATE of name
ON participants
FOR EACH ROW
DECLARE
retval BOOLEAN;
BEGIN
retval := insert_row(:old.pid,:new.name);
END part_upd;
The function-
CREATE or REPLACE
FUNCTION insert_row (pid1 number, name1 varchar2)
RETURN boolean
IS
PRAGMA autonomous_transaction;
BEGIN
INSERT INTO participants
SELECT pid1, name1, max(version)+1
FROM participants
WHERE pid = pid1;
COMMIT;
RETURN true;
END;
You'll have to fine-tune the trigger and function by adding logging and exception handling. Read more about autonomous_transaction.

History records, missing records, filling in the blanks

I have a table that contains a history of costs by location. These are updated on a monthly basis.
For example
Location1, $500, 01-JAN-2009
Location1, $650, 01-FEB-2009
Location1, $2000, 01-APR-2009
If I query for March 1, I want to return the value for Feb 1, since March 1 does not exist.
I've written a query using an Oracle analytic function, but it takes too much time. (It would be fine for a report, but we are using this to let the user see the data visually through the front end and switch dates; requerying takes too long, as the table has something like 1 million rows.)
So, the next thought I had was to simply update the table with the missing data. In the case above, I'd simply add in a record identical to 01-FEB-2009 except set the date to 01-MAR-2009.
I was wondering if you all had thoughts on how to best do this.
My plan had been to simply create a cursor for a location, fetch the first record, then fetch the next, and if the next record was not for the next month, insert a record for the missing month.
A little more information:
CREATE TABLE MAXIMO.FCIHIST_BY_MONTH
(
LOCATION VARCHAR2(8 BYTE),
PARKALPHA VARCHAR2(4 BYTE),
LO2 VARCHAR2(6 BYTE),
FLO3 VARCHAR2(1 BYTE),
REGION VARCHAR2(4 BYTE),
AVG_DEFCOST NUMBER,
AVG_CRV NUMBER,
FCIDATE DATE
)
And then here is the query I'm using (the system will pass in the date and the parkalpha). The table is approx 1 million rows and, again, while it takes a reasonable amount of time for a report, it takes way too long for an interactive display:
select location, avg_defcost, avg_crv, fcimonth, fciyear,fcidate from
(select location, avg_defcost, avg_crv, fcimonth, fciyear, fcidate,
max(fcidate) over (partition by location) my_max_date
from FCIHIST_BY_MONTH
where fcidate <='01-DEC-2008'
and parkalpha='SAAN'
)
where fcidate=my_max_date;
The best way to do this is to create a PL/SQL stored procedure that works backwards from the present and runs queries that fail to return data. Each month that it fails to return data it inserts a row for the missing data.
create or replace PROCEDURE fill_in_missing_data IS
cursor have_data_on_date is
select location, trunc(date_field) have_date
from the_table
group by location, trunc(date_field)
order by have_date desc
;
a_date date;
n_days_to_insert number;
BEGIN
a_date := trunc(sysdate);
for r1 in have_data_on_date loop
if r1.have_date < a_date then
-- insert dates in a loop
n_days_to_insert := a_date - r1.have_date; -- Might be off by 1, need to test.
for day_offset in 1 .. n_days_to_insert loop
-- insert missing day
insert into the_table ( location, the_date, amount )
values ( r1.location, a_date-day_offset, 0 );
end loop;
end if;
a_date := r1.have_date;
-- this is a little tricky - I am going to test this and update it in a few minutes
end loop;
END;
Filling in the missing data will (if you are careful) make the queries much simpler and run faster.
I would also add a flag to the table to indicate that a row is filled-in missing data, so that if
you need to remove it (or create a view without it) later, you can.
I have filled in missing data, and also filled in dummy data so that outer joins were not necessary, to improve query performance a number of times. It is not "clean" and "perfect", but I follow Leflar's #1 Law: "always go with what works."
You can create a job in Oracle that will automatically run at off-peak times to fill in the missing data. Take a look at: This question on stackoverflow about creating jobs.
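A sketch of creating such a job with DBMS_SCHEDULER (the job name and schedule are illustrative):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'FILL_IN_MISSING_DATA_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'FILL_IN_MISSING_DATA',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2', -- 2 a.m., off-peak
    enabled         => TRUE);
END;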
What is your precise use case underlying this request?
In every system I have worked on, if there is supposed to be a record for MARCH and there isn't a record for MARCH the users would like to know that fact. Apart from anything they might want to investigate why the MARCH record is missing.
Now if this is basically a performance issue, then you ought to tune the query. Or if it is a presentation issue (you want to generate a matrix of twelve rows, and that is difficult if a month doesn't have a record for some reason), then that is a different matter, with a variety of possible solutions.
But seriously, I think it is a bad practice for the database to invent replacements for missing records.
edit
I see from your recent comment on your question that it did turn out to be a performance issue: indexes fixed the problem. So I feel vindicated.
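For anyone hitting the same thing: given the posted WHERE clause, the index presumably looked something like this (a guess; the question doesn't show the actual index):
CREATE INDEX fcihist_parkalpha_fcidate_ix
ON maximo.fcihist_by_month (parkalpha, fcidate);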
