execute_query shows all the records and the sequence works fine, but the rows do not come back in descending order.
Please help if there is any way to order this data block by the sequence in the "No" column.
create table Citizens_lic
(
No NUMBER(10) ,
ID NUMBER(10)
constraint Citizens_ID_pk_1 PRIMARY KEY,
F_Name VARCHAR2(32) ,
M_Name VARCHAR2(32) ,
L_Name VARCHAR2(32) ,
DOB DATE ,
POB VARCHAR2(32) ,
GENDER VARCHAR2(32) ,
WORK_STATUS VARCHAR2(32) ,
Soc_status VARCHAR2(32) ,
ISS_DATE date ,
EXP_Date date
)
this is the table, here is the sequence:
CREATE SEQUENCE CITIZENS_LIC_NO_SEQ_1
START WITH 1
INCREMENT BY 1
here is the trigger:
CREATE OR REPLACE TRIGGER CITIZENS_LIC_NO_TRIGG_1
BEFORE INSERT ON CITIZENS_LIC
FOR EACH ROW
BEGIN
SELECT CITIZENS_LIC_NO_SEQ_1.NEXTVAL
INTO :new.NO
FROM DUAL;
END;
Try to add any value and it works fine, but when you add values from Forms Builder, the order changes according to your mouse clicks.
Another matter: when I try to delete anything from the table, the sequence ruins the order, and the deleted value disappears with its sequence number forever!
A Forms trigger might help, but I don't know which one is good to use.
If you want to sort rows in a data block, open its Property Palette, find the ORDER BY property and put whatever you want in there. In your case, it seems that it would be
order by no desc
When you execute a query in that data block, the result will be sorted by the no column in descending order.
As for deleting rows: of course the number will be lost. What did you expect? Sequences guarantee unique, but not gapless, numbers. Mind the sequence's cache, too; you don't even have to delete any rows, as two consecutive sessions might produce gaps. That's how sequences work; I'd just accept it, if I were you.
If you want to create gapless numbers, you'll have to write your own code and it won't be as simple as you think. You'll have to pay attention to inserts (which is simple), updates and deletes. Once again: stick to sequences.
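If you truly do need gapless numbers despite that advice, a minimal sketch of the usual counter-table approach follows. All names here are invented for illustration; note that it serializes concurrent inserts, because each transaction holds the counter row's lock until it commits or rolls back (a rollback releases the number, so no gap appears).

```sql
-- Hypothetical sketch, not from the original post: a counter table gives
-- gapless numbers at the cost of serializing concurrent inserts.
create table citizens_lic_counter (last_no number not null);
insert into citizens_lic_counter values (0);

create or replace function next_citizens_no return number is
  l_no number;
begin
  -- concurrent callers queue on this row lock until the holder commits
  update citizens_lic_counter
     set last_no = last_no + 1
  returning last_no into l_no;
  return l_no;
end;
/
```

A trigger could then assign :new.no := next_citizens_no; but for most applications the serialization cost is exactly why the advice above says to stick with sequences.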
I've created the following table:
create table Citizens_lic
(
No NUMBER(10) ,
ID NUMBER(10)
constraint Citizens_ID_pk_1 PRIMARY KEY,
F_Name VARCHAR2(32) ,
M_Name VARCHAR2(32) ,
L_Name VARCHAR2(32) ,
DOB DATE ,
POF VARCHAR2(32) ,
GENDER VARCHAR2(32) ,
Soc_status VARCHAR2(32) ,
work_status VARCHAR2(32) ,
ISS_DATE date ,
EXP_Date date
)
Then I generated some triggers
for id, no, iss_date and exp_date
from SQL*Plus, as shown in the attached screenshots.
After that, all the triggers work fine from SQL*Plus: values such as the id and no columns are generated automatically once a row is created.
Now I want to show you where I got stuck!
I went to Oracle Forms Builder
and made a form,
then deleted the ID, No, Iss_date and Exp_date item boxes, because there is no need for them; each one should already be generated by a trigger.
Then I ran it:
unable to insert!
Can anyone help me get past this? :)
Part of the problem is that you have written four triggers for one event (well, strictly three triggers, because you have two scripts with the same trigger name, but I assume that is just a cut'n'paste bloomer and you really intended the fourth script to create a trigger called citizens_lic_trigg_4). Another part of the problem is that you have two triggers populating :new.no and no trigger populating :new.id, which, being the primary key, must be populated.
Four triggers firing on insert causes four times as much overhead as one trigger firing. So it's better to have just one trigger, for performance reasons. But it also makes it easier to avoid the errors in your code, because scanning one script is simpler than scanning four. Particularly when you're just editing the cached statement (ed afiedt.buf) so you can't eyeball all four scripts (*).
So, a better implementation would be:
create or replace trigger citizens_lic_trigg
before insert on citizens_lic
for each row
begin
/* or maybe these two assignments should be the other way round??? */
:new.id := citizens_lic_seq_1.nextval;
:new.no := round(dbms_random.value(1000500000,1099999999));
:new.iss_date := sysdate;
:new.exp_date := sysdate + (365*5);
end;
(*) Unless you take a screenshot after each edit, as you have done here. But that's really inefficient: in the long run you will find it beneficial to have separate named files for each script, so you can save them in source control.
Two thoughts: both of your triggers _1 and _2 insert into :new.no, and in your second screenshot you create trigger _3 twice. I'd say your problem is that you provide no value for your PK, the id. HTH
Trying to implement a friendship table.
To explain what I have done till now, here is my DDL:
<!-- WORKING -- "relationship" - This table used to store the relationship between users -->
create table relationship(
relation_id number(8),
FromUserName varchar2(30),
ToUserName varchar2(30),
StatusId number,
SentTime timestamp,
constraint relationship_pk primary key(relation_id),
foreign key (FromUserName) references users(username),
foreign key (ToUserName) references users(username)
);
<!-- WORKING -- add the unique key to the 'relationship' table so that a user can send a request to another user only once -->
ALTER TABLE relationship
ADD CONSTRAINT relation_unique UNIQUE (FromUserName, ToUserName);
Here is an image to explain the problem.
My problem:
Have a look at the last two rows: the user kamlesh1 sends a request to jitu1, and jitu1 in turn sends a request to kamlesh1; when kamlesh1 accepts the request, the statusid changes to 1, and the same happens for kamlesh1's request to jitu1 when jitu1 accepts it.
I want to prevent this kind of duplication, i.e.
once a user has sent you a request, you cannot send a request back to him; you can only accept or reject his request.
I just couldn't think of a proper question title, so if you could help with that too...
Please help.
You could create a unique function-based index for this:
CREATE UNIQUE INDEX relation_unique ON relationship ( LEAST(FromUserName, ToUserName), GREATEST(FromUserName, ToUserName) );
A couple of side notes:
You don't need a NUMBER (38 digits of precision) to store a value that is either 0 or 1; NUMBER(1) should suffice.
Also, you probably don't need the granularity of TIMESTAMP for SentTime - a DATE should do the trick, and might make arithmetic a bit easier (DATE arithmetic is expressed in days, TIMESTAMP arithmetic in intervals).
Last, using CamelCase for column names in Oracle isn't a good idea, since Oracle object names aren't case-sensitive unless you enclose them in double quotes. If you were to inspect the data dictionary you would see your columns like this: FROMUSERNAME, TOUSERNAME. Much better to use column names like FROM_USERNAME and TO_USERNAME (or USERNAME_FROM and USERNAME_TO).
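To illustrate the effect of that function-based index (the values below are made up; the second insert should fail with ORA-00001 because LEAST/GREATEST map both orderings of the pair to the same index key):

```sql
-- succeeds: first request between this pair of users
insert into relationship (relation_id, fromusername, tousername, statusid, senttime)
values (1, 'jitu1', 'kamlesh1', 0, systimestamp);

-- fails with ORA-00001: the reversed pair hits the same (LEAST, GREATEST) key
insert into relationship (relation_id, fromusername, tousername, statusid, senttime)
values (2, 'kamlesh1', 'jitu1', 0, systimestamp);
```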
You should order the persons. Say, add
alter table relationship
add constraint relation_order_chk
check (fromusername < tousername);
Then, when inserting, do something like
create or replace procedure AddRelationship(p_from varchar2, p_to varchar2 ...) is
begin
insert into relationship (fromusername, tousername, ...)
values(least(p_from, p_to), greatest(p_from, p_to), ...);
end;
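With the check constraint above, every pair is stored in one canonical order, so a lookup for the relationship between any two users can be written without caring who sent the request first. A sketch (:p_a and :p_b are illustrative bind variables):

```sql
select *
from   relationship
where  fromusername = least(:p_a, :p_b)
and    tousername   = greatest(:p_a, :p_b);
```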
I have a problem with creating triggers on a table.
create table dwarfs (
name varchar2(20),
nickname varchar2(20),
note varchar2(20),
primary key (name,nickname)
);
Idea:
When someone wants to insert data without entering a name, the trigger should add a default name, for example "Dwarf1".
I created a trigger, but I get this message:
SQL Error: ORA-01400: cannot insert NULL into
01400. 00000 - "cannot insert NULL into (%s)"
create or replace trigger t_d
before insert or update on dwarfs
for each row
when (new.name=null or new.name= '')
declare
begin
:new.name:='Dwarf1';
end;
As #kodirko noted in his comment, the comparison (new.name=null or new.name='') will never work, because a comparison with NULL always returns NULL, not TRUE or FALSE. To determine if a column is NULL you need to use the special comparison construct IS NULL. Also note that because nickname is part of the primary key it also must never be NULL - so, when taken all together you might try rewriting your trigger as:
create or replace trigger t_d
before insert or update on dwarfs
for each row
when (new.name IS NULL OR NEW.NICKNAME IS NULL)
begin
IF :new.NAME IS NULL THEN
:new.NAME := 'Dwarf1';
END IF;
IF :new.NICKNAME IS NULL THEN
:new.NICKNAME := :new.NAME;
END IF;
end;
Share and enjoy.
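A quick illustrative check, assuming the rewritten trigger above is in place (the note value is arbitrary): both key columns should come back defaulted, because the trigger first sets name and then copies it into nickname.

```sql
insert into dwarfs (name, nickname, note) values (null, null, 'sleepy one');

select name, nickname from dwarfs;
-- expected: NAME = 'Dwarf1', NICKNAME = 'Dwarf1'
```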
This is a typical case of not seeing the forest for the trees. Yes, there is the technical detail of using = in a test for null. The main point, however is...
NEVER, EVER ASSIGN DEFAULT VALUES TO KEY FIELDS!!!
If a field is a key field, the tuple is explicitly unusable without it. If that data is missing, there is something very, very wrong and the remainder of the row should be prevented at all costs from being inserted into the database.
This does not, of course, apply to an Identity or auto generating value that is defined as the surrogate key. Surrogate keys are, by definition, completely independent of the entity data. (This points out a disadvantage of surrogate keys, but that is a different discussion.) This applies only to attribute fields that have been further identified as key fields.
If the value is missing and a default value is not supplied, any attempt to insert the row will generate an error. Which is exactly what you want to happen. Don't make it easy for the users to destroy the integrity of the database.
I am creating some records which have id, ts, ... So firstly I call a select to get the ts and id:
select SEQ_table.nextval, CURRENT_TIMESTAMP from dual
and then I call the insert:
insert into table ...id, ts ...
This works fine in 99% of cases, but sometimes under heavy load the order of the records is wrong: I need record.id < (record+1).id and record.ts < (record+1).ts, but this condition is not always met. How can I solve this problem? I am using an Oracle database.
You should not use the result of a sequence for ordering. This might look strange, but think about how sequences are cached, and think about RAC: every instance has its own sequence cache, and for performance you need big caches. Sequences are better described as random unique key generators that happen to work sequentially most of the time.
The timestamp format has a time resolution up to the microsecond level. When hardware becomes quicker and load increases, you may get multiple rows with the same timestamp. There is not much you can do about that until Oracle takes the resolution a step further again.
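If what actually matters is a stable, deterministic read order rather than strictly increasing ids, one hedged option is to sort on the timestamp and use the id only as a tie-breaker for rows sharing the same timestamp (my_table is a placeholder name):

```sql
-- deterministic ordering: timestamp first, sequence id breaks ties
select id, ts
from   my_table
order by ts, id;
```

This accepts that ids are not strictly time-ordered under load, per the caching discussion above, but readers always see the same order.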
Use an INSERT trigger to populate the id and ts columns.
create table sotest
(
id number,
ts timestamp
);
create sequence soseq;
CREATE OR REPLACE TRIGGER SOTEST_BI_TRIG BEFORE
INSERT ON SOTEST REFERENCING NEW AS NEW FOR EACH ROW
BEGIN
:new.id := soseq.nextval;
:new.ts := CURRENT_TIMESTAMP;
END;
/
PHIL#PHILL11G2 > insert into sotest values (NULL,NULL);
1 row created.
PHIL#PHILL11G2 > select * from sotest;
ID TS
---------- ----------------------------------
1 11-MAY-12 13.29.33.771515
PHIL#PHILL11G2 >
You should also pay attention to the other answer provided. Is id meant to be a meaningless primary key (it usually is in apps - it's just a key to join on)?
I have a table that contains a history of costs by location. These are updated on a monthly basis.
For example
Location1, $500, 01-JAN-2009
Location1, $650, 01-FEB-2009
Location1, $2000, 01-APR-2009
if I query for March 1, I want to return the value for Feb 1, since March 1 does not exist.
I've written a query using an Oracle analytic function, but it takes too much time (it would be fine for a report, but we are using this to let the user see the data visually through the front end and switch dates; requerying takes too long, as the table has something like 1 million rows).
So, the next thought I had was to simply update the table with the missing data. In the case above, I'd simply add in a record identical to 01-FEB-2009 except set the date to 01-MAR-2009.
I was wondering if you all had thoughts on how to best do this.
My plan had been to simply create a cursor for a location, fetch the first record, then fetch the next, and if the next record was not for the next month, insert a record for the missing month.
A little more information:
CREATE TABLE MAXIMO.FCIHIST_BY_MONTH
(
LOCATION VARCHAR2(8 BYTE),
PARKALPHA VARCHAR2(4 BYTE),
LO2 VARCHAR2(6 BYTE),
FLO3 VARCHAR2(1 BYTE),
REGION VARCHAR2(4 BYTE),
AVG_DEFCOST NUMBER,
AVG_CRV NUMBER,
FCIDATE DATE
)
And then the query I'm using (the system will pass in the date and the parkalpha). The table is approx 1 million rows, and, again, while it takes a reasonable amount of time for a report, it takes way too long for an interactive display
select location, avg_defcost, avg_crv, fcimonth, fciyear,fcidate from
(select location, avg_defcost, avg_crv, fcimonth, fciyear, fcidate,
max(fcidate) over (partition by location) my_max_date
from FCIHIST_BY_MONTH
where fcidate <='01-DEC-2008'
and parkalpha='SAAN'
)
where fcidate=my_max_date;
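One alternative worth benchmarking (a sketch only, not tested against this data; it returns just the columns present in the DDL above) is an aggregate with KEEP (DENSE_RANK LAST), which picks the latest row per location in a single pass instead of the analytic-plus-filter pattern:

```sql
select location,
       max(avg_defcost) keep (dense_rank last order by fcidate) as avg_defcost,
       max(avg_crv)     keep (dense_rank last order by fcidate) as avg_crv,
       max(fcidate)                                             as fcidate
from   fcihist_by_month
where  fcidate <= date '2008-12-01'
and    parkalpha = 'SAAN'
group by location;
```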
The best way to do this is to create a PL/SQL stored procedure that works backwards from the present and runs queries that fail to return data. Each month that it fails to return data it inserts a row for the missing data.
create or replace PROCEDURE fill_in_missing_data IS
cursor have_data_on_date is
select location, trunc(the_date) have_date
from the_table
group by location, trunc(the_date)
order by location, 2 desc
;
a_date date;
last_location the_table.location%type;
n_days_to_insert number;
BEGIN
for r1 in have_data_on_date loop
-- restart from today whenever we move on to a new location
if last_location is null or r1.location <> last_location then
a_date := trunc(sysdate);
last_location := r1.location;
end if;
if r1.have_date < a_date then
-- insert a row for each missing day strictly between r1.have_date and a_date
n_days_to_insert := a_date - r1.have_date - 1;
for day_offset in 1 .. n_days_to_insert loop
insert into the_table ( location, the_date, amount )
values ( r1.location, a_date - day_offset, 0 );
end loop;
end if;
a_date := r1.have_date;
end loop;
END;
Filling in the missing data will (if you are careful) make the queries much simpler, and they will run faster.
I would also add a flag to the table to indicate that a row is filled-in missing data, so that if you need to remove it (or create a view without it) later, you can.
I have filled in missing data, and also filled in dummy data so that outer joins were not necessary, to improve query performance a number of times. It is not "clean" and "perfect", but I follow Leflar's #1 Law: "always go with what works."
You can create a job in Oracle that will automatically run at off-peak times to fill in the missing data. Take a look at: This question on stackoverflow about creating jobs.
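As a sketch of that scheduling step, DBMS_SCHEDULER can run the procedure nightly; the job name and the 2 a.m. schedule below are arbitrary choices, assuming the fill-in procedure above has been created:

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'FILL_IN_MISSING_DATA_JOB',   -- arbitrary name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'FILL_IN_MISSING_DATA',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',        -- run daily at 02:00
    enabled         => TRUE);
end;
/
```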
What is your precise use case underlying this request?
In every system I have worked on, if there is supposed to be a record for MARCH and there isn't a record for MARCH the users would like to know that fact. Apart from anything they might want to investigate why the MARCH record is missing.
Now if this is basically a performance issue, then you ought to tune the query. Or if it is a presentation issue - you want to generate a matrix of twelve rows, and that is difficult if a month doesn't have a record for some reason - then that is a different matter, with a variety of possible solutions.
But seriously, I think it is a bad practice for the database to invent replacements for missing records.
edit
I see from your recent comment on your question that it did turn out to be a performance issue - indexes fixed the problem. So I feel vindicated.