Oracle: Function Parameter - how to implement?

I have a little problem and I want to ask for your help :)
To keep it simple: I'm using an Oracle database and I want to create a check constraint on one of my tables, like the one below.
Table XY
Attribute A || Attribute B || Attribute C
For attributes A and B the user can enter whatever he wants, and for attribute C I want to use a check constraint with a user-defined function which checks whether the combination of A and B is valid.
My problem is that I have no idea how to implement the input parameters for the function so I can run the check while the user creates the entry in the database.
In other words: the user has already entered 1 for A and 3 for B, and once he wants to add an entry for C I want to check (for example) whether A + B = 4.
I hope you can help me, because I'm going bananas right now x)
kind regards
alex
EDIT (copied from comment below)
I have 3 tables:
GROUP
GROUP_LIST
TM
GROUP contains the ID of a group; GROUP_LIST is the connection table between GROUP and TM, so in GROUP_LIST I connect a GROUP with a TM. It's possible to connect one GROUP with several TMs, for example:
GROUPID || TM
1 || 1
1 || 2
1 || 3
2 || 1
and so on. My real issue is the following: I want a check on the TM attribute which verifies whether the TM I'm trying to insert already exists for the same GROUPID. I hope it's clear now what my intention is ...

It seems rather pointless to require users to enter a 'correct' value for a particular column. Instead of defining C as a normal column I suggest defining it as a computed column, e.g.:
CREATE TABLE TBL (
  A NUMBER,
  B NUMBER,
  C AS (A + B)
);
In this way C will always be computed correctly.
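For illustration, a quick sketch of how such a table behaves (hypothetical values; note that Oracle rejects direct writes to a computed column):
-- Only A and B are supplied; Oracle computes C.
INSERT INTO TBL (A, B) VALUES (1, 3);
SELECT A, B, C FROM TBL;  -- returns 1, 3, 4
-- Writing C directly raises ORA-54013: INSERT operation disallowed on virtual columns.
INSERT INTO TBL (A, B, C) VALUES (1, 3, 5);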
Best of luck.
EDIT
Based on information from OP in the comment below, it appears that this answer doesn't address the real need, so I've added a second answer (below). I considered deleting this answer, but because it contains the comment from OP which adds important information (which I've since edited into the question), I decided to leave it in place but to forfeit the rep gain by making it Community Wiki.

If you use the out-of-line check constraint syntax you can define a condition over any of the columns of the table, something like:
ALTER TABLE xy ADD CONSTRAINT mymulticheck CHECK (A + B = C);
Note that there are many restrictions on this type of constraint; for example, you cannot call a user-defined function, and obviously the condition is fixed and identical for all rows.

Based on new information from OP it appears that the correct way to solve this issue would be to add a UNIQUE constraint to the GROUP_LIST table to ensure that the combination of GROUP_ID and TM is unique:
ALTER TABLE GROUP_LIST
ADD CONSTRAINT GROUP_LIST_UNIQUE_1
UNIQUE (GROUP_ID, TM);
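With the constraint in place Oracle enforces the rule for you; a quick sketch against the sample data from the question (column names as in the constraint above):
INSERT INTO GROUP_LIST (GROUP_ID, TM) VALUES (1, 4);  -- succeeds
INSERT INTO GROUP_LIST (GROUP_ID, TM) VALUES (1, 4);  -- fails with ORA-00001: unique constraint violated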
Best of luck.

Related

Get the last inserted row ID in trafodion

I want to get the row ID or record ID of the last inserted record in a table in Trafodion.
Example:
1 | John
2 | Michael
When executing an INSERT statement, I want it to return the created ID, i.e. 3.
Could anyone tell me how to do that in Trafodion, or is it not possible?
Are you using a sequence generator to generate unique ids for this table? Something like this:
create table idcol
  (a largeint generated always as identity not null,
   b int,
   primary key (a desc));
Either way, with or without a sequence generator, you could get the highest key with this statement:
select max(a) from idcol;
The problem is that this statement could be very inefficient. Trafodion has a built-in optimization to read the min of a key column, but it doesn't use the same optimization for the max value, because HBase didn't have a reverse scan until recently. We should make use of the reverse scan; please feel free to file a JIRA. To make this more efficient with the current code, I added a DESC to the primary key declaration. With a descending key, getting the max key will be very fast:
explain select max(a) from idcol;
However, having the data grow from higher to lower values might cause issues in HBase; I'm not sure whether this is a problem or not.
Here is yet another solution: Use the Trafodion feature that allows you to select the inserted data, showing you the inserted values right away:
select * from (insert into idcol(b) values (11),(12),(13)) t(a,b);
A                    B
-------------------- -----------
                   1          11
                   2          12
                   3          13

--- 3 row(s) selected.

How to update a column with the concatenation of two other columns in the same table

I have a table with 3 columns: a, b, and c. I want to know how to update the third column with the concatenation of the other two columns in each row.
before update
A  B  C
-------------
1  4
2  5
3  6
after update
A  B  C
-------------
1  4  1_4
2  5  2_5
3  6  3_6
How can I do this in Oracle?
Use the concatenation operator ||:
update mytable set
c = a || '_' || b
Or better, to avoid having to rerun this whenever rows are inserted or updated:
create view myview as
select t.*, a || '_' || b as c
from mytable t
Firstly, you are violating the rules of normalization, and you should rethink the design. If you have the values in the table columns, then to get a computed value all you need is a SELECT statement that fetches the result the way you want. Storing computed values is generally a bad idea and considered bad design.
Anyway,
Since you are on 11g, if you really want a computed column then I would suggest a VIRTUAL COLUMN rather than manually updating the column. There is a lot of overhead involved in an UPDATE statement; using a virtual column avoids much of it. You also completely get rid of the manual effort and the lines of code needed to do the update: Oracle does the job for you.
Of course, you will use the same concatenation expression in the virtual column clause.
Something like,
Column_c varchar2(50) GENERATED ALWAYS AS (column_a||'_'||column_b) VIRTUAL
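For context, a minimal sketch of a complete table definition using such a virtual column (table and column names are illustrative):
CREATE TABLE mytable (
  column_a VARCHAR2(20),
  column_b VARCHAR2(20),
  column_c VARCHAR2(50) GENERATED ALWAYS AS (column_a || '_' || column_b) VIRTUAL
);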
Note: there are certain restrictions on its use, so please refer to the documentation before implementing it. However, for the simple use case provided by OP, a virtual column is a straight fit.
Update: I did a small test and there were a few observations. Please read this question for a better understanding of how to implement my suggestion.

Oracle Query for finding non-repeated rows

I will try to put this question forward in a simple way. Consider a table with two columns (Name, Contact_No). The table can contain the same name with different contact numbers. All I want to know is which name is NOT repeated in the entire table; in other words, which name is unique in this table and appears only once. This is just an example; the actual scenario is quite different, but if anyone can help me with this example I'll be able to handle the actual scenario.
Here is an example
Name  Contact_No
----  ----------
A     123
A     124
B     125
C     126
C     127
All I want is to find B, which is not repeated in the entire table. Thanks.
You can simply do this:
SELECT name FROM tbl_name GROUP BY name HAVING COUNT(name) = 1
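Against the sample data above, this returns only the name that appears exactly once:
NAME
----
B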

ORACLE Mutating Table error in one trigger, but not another; Why?

Okay, I have two tables, ORDERS and ORDERLINES, which have essentially the same problem, with triggers on each to address the issue. The issue is that in addition to the PK (a field called RECID, unique at table level), there is another field, RECNO, which needs to be unique in relation to another field.
The tables are FK related as follows:
ORDERS.WAREHOUSEID > WAREHOUSES.CUSTOMERID > CUSTOMERS
and
ORDERLINES.ORDERID > ORDERS
On ORDERS and ORDERLINES I have BEFORE INSERT triggers to assign the realm-specific unique RECNO.
In ORDERS, RECNO needs to be unique within the realm of a CUSTOMERS record.
In ORDERLINES, RECNO needs to be unique within the realm of an ORDERS record.
The trigger on ORDERS works perfectly fine. When a new order is inserted, it is assigned the next unique RECNO within the customer it belongs to.
The trigger on ORDERLINES on the other hand, which should assign the next unique RECNO within the order it belongs to, throws the dreaded {ORA-04091: table ORDERLINES is mutating, trigger/function may not see it} exception.
Here is the trigger that works:
CREATE OR REPLACE TRIGGER ORDERS_BI
BEFORE INSERT ON ORDERS
FOR EACH ROW
DECLARE
  CUSTID WAREHOUSES.CUSTOMERID%TYPE;
BEGIN
  SELECT MIN(CUSTOMERID) INTO CUSTID FROM WAREHOUSES
   WHERE NVL(WARE_ID, '-') = NVL(:NEW.WAREHOUSEID, '-');

  SELECT NVL(MAX(RECNO), 0) + 1
    INTO :NEW.RECNO
    FROM deploy.ORDERS O
    LEFT JOIN deploy.WAREHOUSES W
      ON NVL(W.REC, '-') = NVL(O.WAREHOUSEID, '-')
   WHERE NVL(W.CUSTOMERID, '-') = NVL(CUSTID, '-');
END;
And here is the trigger that does NOT work:
CREATE OR REPLACE TRIGGER ORDERLINES_BI
BEFORE INSERT ON ORDERLINES
FOR EACH ROW
DECLARE
  nORDERID ORDERLINES.ORDERID%TYPE;
BEGIN
  SELECT MIN(ORDERID) INTO nORDERID FROM REVORDERS
   WHERE ORDERID = :NEW.ORDERID;

  SELECT NVL(MAX(RECNO), 0) + 1
    INTO :NEW.RECNO
    FROM deploy.ORDERLINES L
    LEFT JOIN deploy.ORDERS O
      ON O.ORDERID = L.ORDERID
   WHERE O.ORDERID = nORDERID;
END;
Can SOMEONE please explain WHY the first one works, and the second one doesn't?
And is there some way I can re-write the second to make it work?
I looked at your code first, rather than your explanation. My first thought was "this person is trying to fake a sequence." This obviously isn't the answer to your question but it's the reason you're getting into trouble in the first place.
The obvious solution when you're having problems faking sequences is to use a real one.
As Nicholas has already noted, ORA-04091 occurs when you try to read from the table on which the trigger is fired. There are various ways to avoid this, most of which involve doing something slightly funky. However, they don't address the root cause of the error, which is that you're doing something wrong. This error is normally indicative of one or both of two things:
You're putting far too much logic into a trigger
Your data-model is flawed.
The solution to the first is to move the logic to a package, which has the added benefit of removing a layer of obfuscation. The solution to the second is to normalise your database properly.
In your case, from what information you've provided, your data-model seems to be okay, though as I've said I disagree with the implementation.
This leaves you with four options to solve your problem, which I detail in the order I would try them:
Remove your triggers.
Replace your current logic with a sequence.
Move all your trigger logic into a procedure.
Hack around your error.
I'm not going to discuss point 3 as you can do that yourself. Nicholas has partially covered point 4 and I'm not going to advocate something I disagree with. This leaves points 1 and 2. You say
In ORDERS, RECNO needs to be unique within the realm of a CUSTOMERS
record.
This is not how you've implemented it. Your code makes RECNO consecutive within the realm of a CUSTOMERS record. The primary keys of both ORDERS and ORDERLINES are by definition unique within the realm of a CUSTOMERS record.
In itself, this implies that option 1 is best for you. Remove the triggers entirely; the primary keys of the table are already doing everything you need. This also invalidates option 2; if you add a sequence then it will basically be a separate primary key.
There is no reason I can think of that you would need an order's RECNO to be consecutive within each customer; why bother doing so?
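If you nevertheless want a generated RECNO, a minimal sketch of option 2 with an illustrative sequence name (remember a sequence is unique but gap-prone, so RECNO will not be consecutive per order):
CREATE SEQUENCE ORDERLINES_RECNO_SEQ;

CREATE OR REPLACE TRIGGER ORDERLINES_BI
BEFORE INSERT ON ORDERLINES
FOR EACH ROW
BEGIN
  -- No query against ORDERLINES, so ORA-04091 cannot be raised.
  SELECT ORDERLINES_RECNO_SEQ.NEXTVAL INTO :NEW.RECNO FROM DUAL;
END;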
You are getting that error because the second trigger is trying to read the table while it is being modified. This can also happen when a trigger on a parent table causes an insert into a child table referencing a foreign key.
As a quick workaround, create a view and use an INSTEAD OF trigger on it.
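A minimal sketch of that workaround (view and trigger names are illustrative, and the column list is abbreviated; the query against ORDERLINES is safe here because the triggering statement targets the view, not the table):
CREATE OR REPLACE VIEW ORDERLINES_V AS
SELECT * FROM ORDERLINES;

CREATE OR REPLACE TRIGGER ORDERLINES_V_IOI
INSTEAD OF INSERT ON ORDERLINES_V
FOR EACH ROW
DECLARE
  nRECNO ORDERLINES.RECNO%TYPE;
BEGIN
  SELECT NVL(MAX(RECNO), 0) + 1 INTO nRECNO
    FROM ORDERLINES
   WHERE ORDERID = :NEW.ORDERID;

  INSERT INTO ORDERLINES (ORDERID, RECNO /*, other columns */)
  VALUES (:NEW.ORDERID, nRECNO /*, ... */);
END;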
Also take a look at Tom's example of how to deal with mutating issues.
Besides, if you leave the second trigger as it is, any INSERT INTO your_table SELECT ... FROM ... statement will raise the mutating-table error. For example:
This insert will work:
insert into ORDERLINES (column1, column2, ..., columnN)
values (val1, val2, ..., valN);
But this one won't:
insert into ORDERLINES (column1, column2, ..., columnN)
select val1, val2, ..., valN from other_table;

How to protect a running column within Oracle/PostgreSQL (kind of MAX-result locking or something)

I need advice on the following situation with Oracle/PostgreSQL:
I have a db table with a "running counter" and would like to protect it in the following situation with two concurrent transactions:
T1: SELECT MAX(C) FROM TABLE WHERE CODE='xx';  -- C for new row: result + 1
T2: SELECT MAX(C) FROM TABLE WHERE CODE='xx';  -- C for new row: result + 1
T1: INSERT INTO TABLE ...
T2: INSERT INTO TABLE ...
So in both cases the column value for the INSERT is calculated from the old result plus one.
Given this, a running counter handled by the db would be fine, but that won't work here because:
the counter values of existing rows are sometimes changed
sometimes I'd like there to be multiple counter "value groups" (keyed by the CODE mentioned above): with different values of CODE the counters would be independent.
With some other databases this can be handled with the SERIALIZABLE isolation level, but at least with Oracle and PostgreSQL, although phantom reads are prevented, the table still ends up with two distinct rows with the same counter value. This seems to have to do with predicate locking, which locks "all the possible rows covered by the query"; some other DBs end up locking the whole table or something similar.
SELECT ... FOR UPDATE statements seem to be meant for other purposes, and don't even seem to work with the MAX() function.
Setting a UNIQUE constraint on the column would probably be the solution, but are there other ways to prevent this situation?
b.r. Touko
EDIT: One more option could probably be manual locking, even though it doesn't appear nice to me ...
Both Oracle and PostgreSQL support what are called sequences, which are a perfect fit for your problem. You can have a regular int column, but define one sequence per group, and do a single query like:
--PostgreSQL
insert into table (id, ... ) values (nextval(sequence_name_for_group_xx), ... )
--Oracle
insert into table (id, ... ) values (sequence_name_for_group_xx.nextval, ... )
Increments in sequences are atomic, so your problem just wouldn't exist. It's only a matter of creating the required sequences, one per group.
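Creating them is a one-liner per group on both databases (sequence names are illustrative):
-- works on both Oracle and PostgreSQL
CREATE SEQUENCE sequence_name_for_group_xx;
CREATE SEQUENCE sequence_name_for_group_yy;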
the counter values of existing rows are sometimes changed
You should put a unique constraint on that column if this would be a problem for your app. Doing so would guarantee that a transaction at SERIALIZABLE isolation level would abort if it tried to use the same id as another transaction.
One more option could probably be manual locking, even though it doesn't appear nice to me ...
Manual locking in this case is pretty easy: just take a SHARE UPDATE EXCLUSIVE or stronger lock on the table before selecting the maximum. This will kill concurrent performance, though.
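A PostgreSQL sketch of that pattern, assuming the table is called the_table (every writer must take the same lock before reading the maximum, which serializes them against each other):
BEGIN;
LOCK TABLE the_table IN SHARE UPDATE EXCLUSIVE MODE;
SELECT MAX(c) FROM the_table WHERE code = 'xx';
-- insert the new row with MAX + 1 here, then
COMMIT;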
sometimes I'd like there to be multiple counter "value groups" (keyed by the CODE mentioned above): with different values of CODE the counters would be independent.
This leads me to the Right Solution for this problem: sequences. Set up several sequences, one for each "value group" that should get IDs in its own range. See Section 9.15 of The Manual for the details of sequences and how to use them; it looks like they're a perfect fit for you. Sequences will never give the same value twice, but might skip values: if a transaction gets the value '2' from a sequence and aborts, the next transaction will get '3' rather than '2'.
The sequence answer is common, but might not be right. The viability of this solution depends on what you actually need. If what you semantically want is "some guaranteed to be unique number" then that is what a sequence is for. However, if what you want is to make sure that your value increases by exactly one on each insert (as you have asked), then DO NOT USE A SEQUENCE! I have run into this trap before myself. Sequences are not guaranteed to be sequential! They can skip numbers. Depending on what sort of optimizations you have configured, they can skip LOTS of numbers. Even if you have things configured just right so that you shouldn't skip any numbers, that is not guaranteed, and is not what sequences are for. So, you are only asking for trouble if you (mis)use them like that.
A slightly better solution is to bundle the select into the insert, like so:
INSERT INTO table(code, c, ...)
VALUES ('XX', (SELECT MAX(c) + 1 AS c FROM table WHERE code = 'XX'), ...);
(I haven't test run that query, but I'm pretty sure it should work. My apologies if it doesn't.) But, doing something like that reflects the semantic intent of what you are trying to do. However, this is inefficient, because you have to do a scan for MAX, and the inference I am taking from your sample is that you have a small number of code values relative to the size of the table, so you are going to do an expensive, full table scan on every insert. That isn't good. Also, this doesn't even get you the ACID guarantee you are looking for. The select is not transactionally tied to the insert. You can't "lock" the result of the MAX() function. So, you could still have two transactions running this query and they both do the sub-select and get the same max, both add one, and then both try to insert. It's a much smaller window, but you may still technically have a race condition here.
Ultimately, I would argue that you probably have the wrong data model if you are trying to increment on insert. You should insert with a unique key, most commonly a sequence value (at least as an easy surrogate key for any natural key). That gets the data safely inserted. Then, if you need a count of things, have one table that stores your counts.
CREATE TABLE code_counts (
  code  VARCHAR(2), -- or whatever
  count NUMBER
);
If you really want to store the code count of each item as it is inserted, the separate count table also allows you to do so correctly, transactionally, like so:
UPDATE code_counts SET count = count + 1 WHERE code = 'XX' RETURNING count INTO :count;
INSERT INTO table(code, c, ...) VALUES ('XX', :count, ...);
COMMIT;
The key is that the UPDATE locks the counter row and reserves that value for you. Then your INSERT uses that value, and all of that is committed as one transactional change. You have to do this in a transaction. Having a separate count table avoids the full table scan of doing SELECT MAX(...). In essence, this re-implements a sequence, but it also guarantees you sequential, ordered use.
Without knowing your whole problem domain and data model, it is hard to say, but abstracting your counts out to a separate table like this where you don't have to do a select max to get the right value is probably a good idea. Assuming, of course, that a count is what you really care about. If you are just doing logging or something where you want to make sure things are unique, then use a sequence, and a timestamp to sort by.
Note that I'm saying not to sort by a sequence either. Basically, never trust a sequence to be anything other than unique. Because when you get to caching sequence values on a multi-node system, your application might even consume them out of order.
This is why you should use the SERIAL datatype, which defers the lookup of C to the time of insert (which uses table locks, I presume). You would then not specify C; it would be generated automatically. If you need C for some intermediate calculation, you would need to save first, then read C, and finally update with the derived values.
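A PostgreSQL sketch of that approach (illustrative names; SERIAL is backed by an implicit sequence, so the gap caveats discussed above still apply):
CREATE TABLE the_table (
  c    SERIAL,  -- assigned automatically at insert time
  code TEXT
);

INSERT INTO the_table (code) VALUES ('xx') RETURNING c;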
Edit: Sorry, I didn't read your whole question. What about solving your other problems with normalization? Just create a second table for each specific type (for each x where A = 'x'), where you have another auto-increment. Manually edited sequences could be another column in the same table, using the generated sequence as a base (i.e. if pk = 34 you can have another column mypk = '34Changed').
You can create a sequential column by using a sequence as its default value:
First, you have to create the sequence counter:
CREATE SEQUENCE SEQ_TABLE_1 START WITH 1 INCREMENT BY 1;
So, you can use it as default value:
CREATE TABLE T (
  COD NUMERIC(10) DEFAULT NEXTVAL('SEQ_TABLE_1') NOT NULL,
  column1 ...,
  column2 ...
);
Now you don't need to worry about the sequence when inserting rows:
INSERT INTO T (collumn1, collumn2) VALUES (value1, value2);
Regards.
