I have run into a bump at my current company: they have an account and a member, and for some reason or another the two are stored in separate tables.
Right now both a member and an account can be registered. That's fine, except that a member and an account can end up with the same username. That is, as you all know, just wrong, especially since they use the username to log in to the same system, only with different functionality levels.
Right now we are doing the check at the application level, and we're wondering if it's possible to get the database to enforce uniqueness across the two columns, like a union of the two tables.
We can't set them up as primary or foreign keys at the moment, but that's for the future anyway; right now we're looking for a quick fix. Eventually I will probably merge the databases, add all members as new rows in the account table, and add a boolean IsMember column.
In general, I agree with the consensus opinion that it's better to fix the design than to kluge a fix using triggers. However, a properly implemented trigger-based solution is still probably better than your current situation.
If you're going to use triggers, the right way to do it is to:
Create a new table that will contain nothing but usernames, with a primary key enforcing uniqueness (this may, in fact, be a good candidate for an index-organized table).
Create before-insert triggers on both existing tables that add the new username to the new table. If the new username already exists, an error will be thrown, preventing the insert of both rows. Of course, the application will need to be able to handle this error gracefully (presumably it already can, for scenarios in which the new username already exists in the table it's being added to).
The wrong way to do this would be to make the trigger select from the other table, in order to verify uniqueness.
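A minimal sketch of the two steps above, assuming the two tables are called MEMBER and ACCOUNT and each has a USERNAME column (adjust names and lengths to your schema):
CREATE TABLE all_usernames (
  username VARCHAR2(100),
  CONSTRAINT all_usernames_pk PRIMARY KEY (username)
) ORGANIZATION INDEX;  -- index-organized, since the table is nothing but its key

CREATE OR REPLACE TRIGGER tr_member_username
BEFORE INSERT ON member
FOR EACH ROW
BEGIN
  -- fails with ORA-00001 if the username already exists in either table
  INSERT INTO all_usernames (username) VALUES (:new.username);
END;

CREATE OR REPLACE TRIGGER tr_account_username
BEFORE INSERT ON account
FOR EACH ROW
BEGIN
  INSERT INTO all_usernames (username) VALUES (:new.username);
END;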
You can add a trigger that enforces your requirement.
The recommended triggers tend to be really brittle with concurrent transactions.
What you can do (AFAIK) is to create a materialized view containing the union of the username columns from both tables and put a unique constraint on that column.
Make sure you do some performance tests though.
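For reference, a rough sketch of that materialized-view idea, assuming the two tables are MEMBER and ACCOUNT with a USERNAME column each, and assuming your Oracle version's fast-refresh restrictions allow an ON COMMIT, UNION ALL materialized view (check those restrictions before relying on this):
CREATE MATERIALIZED VIEW LOG ON member WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON account WITH ROWID;

CREATE MATERIALIZED VIEW mv_all_usernames
  REFRESH FAST ON COMMIT
AS
  SELECT 'M' AS src, ROWID AS rid, username FROM member
  UNION ALL
  SELECT 'A' AS src, ROWID AS rid, username FROM account;

-- the unique index is what enforces the rule: a duplicate username makes the
-- on-commit refresh fail, so the offending transaction cannot commit
CREATE UNIQUE INDEX mv_all_usernames_uq ON mv_all_usernames (username);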
Since you use a soft-delete pattern, a trigger could be used (on each table) as a temporary measure.
By inserting a disabled record into the other table, you will get a failure if a matching record already exists there.
Remember that this will not enforce the rule on existing data; only records that are inserted will be checked.
Something like this:
-- Insert into the accounts table too
CREATE OR REPLACE TRIGGER tr_member_chk
BEFORE INSERT ON members
FOR EACH ROW
BEGIN
  INSERT INTO accounts (name, id, etc, isenabled)
  VALUES (:new.name, :new.id, :new.etc, 0);
END;

-- Insert into the members table too
CREATE OR REPLACE TRIGGER tr_account_chk
BEFORE INSERT ON accounts
FOR EACH ROW
BEGIN
  INSERT INTO members (name, id, etc, isenabled)
  VALUES (:new.name, :new.id, :new.etc, 0);
END;
I'm learning Oracle and I had a problem. I have my "Chat" table:
CREATE TABLE chat (
  id_chat NUMBER,
  id_user NUMBER,
  start_chat DATE,
  end_chat DATE
);
Now, I created a trigger so that it does not allow me to enter a "Chat" if an old one was already registered with the same id_user. This is my trigger:
create or replace trigger distChat
before insert on Chat
for each row
begin
  if :new.id_user = :old.id_user then
    Raise_Application_Error(-20099, 'YOU CAN''T INSERT DUPLICATED DATA.');
  end if;
end distChat;
But it still allows me to insert a Chat with the same user code. Any help or recommendation to improve my trigger, which does not work?
Thank you.
Don't use a TRIGGER.
Two reasons:
Oracle provides a mechanism for preventing duplicates
Triggers are expensive, and another database object to maintain
Do this
ALTER table CHAT ADD CONSTRAINT xpk_chat PRIMARY KEY ( ID_CHAT );
I don't know your data model, but I think you want ID_CHAT to distinguish a chat. If you do this for ID_USER, then a user couldn't ever have more than one chat...and who would want to use that system? If I'm wrong, just change the column referenced in the ALTER command above.
Now your table will have a constraint that prevents duplicate values in the ID_CHAT column. This is called a PRIMARY KEY (Docs).
Additionally, you will have an INDEX, so querying your CHAT rows by their ID value could be quicker.
P.S. Your trigger isn't doing what you want it to do. If you were to do it with a trigger, you would need to raise the exception if :new.id_user in (select distinct id_user from chat)...so basically, if the ID being inserted was already found in the table, there would be an exception. The beauty of the PK constraint is that the database does this FOR YOU.
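If the business rule really is one chat per user, the declarative version of what that trigger attempts is simply a unique constraint; a minimal sketch, assuming id_user is the column that must not repeat:
ALTER TABLE chat ADD CONSTRAINT uk_chat_user UNIQUE (id_user);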
How do I write a trigger so that nobody is allowed to rent a movie if their unpaid balance exceeds 50 dollars?
What you have here is a cross-row table constraint - i.e. you can't just put a single Oracle CONSTRAINT on a column, as these can only look at data within a single row at a time.
Oracle has support for only two cross-row constraint types - uniqueness (e.g. primary keys and unique constraints) and referential integrity (foreign keys).
In your case, you'll have to hand-code the constraint yourself - and with that comes the responsibility to ensure that the constraint is not violated in the presence of multiple sessions, each of which cannot see data inserted/updated by other concurrent sessions (at least, until they commit).
A simplistic approach is to add a trigger that issues a query to count how many records conflict with the new record; but this won't work, because the trigger cannot see rows that have been inserted or updated by other sessions but not yet committed. So the trigger will sometimes let a member slip past the limit, as long as (for example) they get two cashiers to enter the data at separate terminals.
One way to get around this problem is to put some element of serialization in - e.g. the trigger would first request a lock on the member record (e.g. with a SELECT FOR UPDATE) before it's allowed to check the rentals; that way, if a 2nd session tries to insert rentals, it will wait until the first session does a commit or rollback.
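A minimal sketch of that serialization idea, assuming hypothetical MEMBERS (member_id, unpaid_balance) and RENTALS (rental_id, member_id) tables; locking the member row makes concurrent rentals for the same member queue up behind each other:
CREATE OR REPLACE TRIGGER trg_rental_balance_chk
BEFORE INSERT ON rentals
FOR EACH ROW
DECLARE
  v_balance members.unpaid_balance%TYPE;
BEGIN
  -- lock the member record first, so a second session inserting a rental
  -- for the same member waits until this transaction commits or rolls back
  SELECT unpaid_balance
    INTO v_balance
    FROM members
   WHERE member_id = :new.member_id
     FOR UPDATE;

  IF v_balance > 50 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Unpaid balance exceeds 50 dollars; rental not allowed.');
  END IF;
END;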
Another way around this problem is to use an aggregating Materialized View, which would be based on a query that is designed to find any rows that fail the test; the expectation is that the MV will be empty, and you put a table constraint on the MV such that if a row was ever to appear in the MV, the constraint would be violated. The effect of this is that any statement that tries to insert rows that violate the constraint will cause a constraint violation when the MV is refreshed.
Writing the query for this based on your design is left as an exercise for the reader :)
If you want to restrict something about your table data then you should have a look at Constraints and not Triggers.
Constraints ensure conditions on your table data, like in your example.
Triggers are fired when some action (i.e. INSERT, UPDATE, DELETE) takes place, and you can then do some work as a reaction to that action.
Let's say I have a query which fetches col1 after joining multiple tables. I want to insert the values of that col1 into a table on a remote DB, i.e. I would be using a dblink to do that.
Now, that col1 would be fetched from 4-5 different DBs. There is a chance that a value fetched from db1 would be in db2 as well. How can I avoid duplicates?
In my remote DB, I have made col1 a primary key, so when inserting, an error would be thrown if there is a duplicate key, failing the rest of the process, which I don't want. I was thinking about 2 approaches:
Write a PL/SQL script; for each value, determine if it already exists or not. If it doesn't, then insert it.
Write a PL/SQL script that inserts and catches the duplicate key exception. The exception would be ignored and it would keep inserting (it doesn't sound that good; sketched below).
Which approach would you prefer? Is there anything else I can do?
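A minimal sketch of that second idea, assuming a hypothetical remote table TARGET_TABLE(col1) reached over a db link named remotedb, with SOME_LOCAL_TABLE standing in for the multi-table join:
BEGIN
  FOR r IN (SELECT DISTINCT col1 FROM some_local_table) LOOP
    BEGIN
      INSERT INTO target_table@remotedb (col1) VALUES (r.col1);
    EXCEPTION
      WHEN DUP_VAL_ON_INDEX THEN
        NULL;  -- duplicate key: skip this value and keep going
    END;
  END LOOP;
END;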
I would use the MERGE statement and WHEN NOT MATCHED THEN INSERT.
The same merger could also update but it doesn't have to, just leave the update part out.
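A minimal sketch, assuming a hypothetical remote table TARGET_TABLE(col1) over a db link named remotedb and SOME_LOCAL_TABLE standing in for the multi-table join that produces col1; if your environment doesn't allow DML on remote tables from this side, the same statement can be run from the target database with the db link pointing the other way:
MERGE INTO target_table@remotedb t
USING (
  SELECT DISTINCT col1
  FROM some_local_table  -- stand-in for the join across the source tables
) s
ON (t.col1 = s.col1)
WHEN NOT MATCHED THEN
  INSERT (col1) VALUES (s.col1);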
The different databases can have duplicate primary keys, but that doesn't mean the records are duplicates. The actual data may be different in each case. Or the records may represent the same real-world thing but at different statuses; I don't know, you haven't provided enough explanation.
The point is, you need much more analysis of why duplicate records can exist and probably a more sophisticated approach to handling collisions. Do you need to take all records (in which case you need a synthetic key)? Or do you take only one instance (so how do you decide precedence)? Other scenarios may exist.
In any case, MERGE or PL/SQL loops are likely to be too crude a solution.
First off, I would suggest that your target database drive all of these inserts, because inserting/updating across a database link can create some locking issues and further complicate things, especially with multiple databases attempting to access and perform DML on the same table. However, if that isn't possible, the solutions below will work.
I would fix your primary key problem by including a look-up on the target table for each row.
INSERT INTO customer@dblink.oracle.com
  (emp_name,
   emp_id)
SELECT
  st.emp_name,
  st.emp_id --primary key
FROM
  source_table st
WHERE NOT EXISTS
  (SELECT 1
   FROM customer@dblink.oracle.com cust
   WHERE cust.emp_id = st.emp_id);
Again, I would not recommend DML transactions across database links unless absolutely necessary as you can sometimes have weird locking behavior.
A PL/SQL procedure or anonymous PL/SQL block could be used to create a bulk processing solution as follows:
CREATE OR REPLACE PROCEDURE send_unique_data
AS
  TYPE tab_cust IS TABLE OF customer@dblink.oracle.com%ROWTYPE
    INDEX BY PLS_INTEGER;
  t_records tab_cust;
BEGIN
  SELECT
    st.emp_name,
    st.emp_id --primary key
  BULK COLLECT
  INTO t_records
  FROM source_table st;

  FORALL i IN 1 .. t_records.COUNT SAVE EXCEPTIONS
    INSERT INTO customer@dblink.oracle.com
    VALUES t_records(i);
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE = -24381 THEN  -- ORA-24381: one or more rows failed in the FORALL
      FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
        -- e.g. log the duplicates instead of failing the whole load
        DBMS_OUTPUT.PUT_LINE('Row ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX || ': ' ||
                             SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
      END LOOP;
    ELSE
      RAISE;
    END IF;
END send_unique_data;
You can also use the system SQL%BULK_EXCEPTIONS collection in case you want to do anything with the records that threw exceptions (such as unique constraint violations). Be warned that this solution will cause the target table to suffer performance issues if there is a lot of duplicate data attempting to be inserted.
We have a web application (Grails) which we are going to sell licenses for based on the number of users. There is a table in the database (Oracle 10g) which holds users. Customers will host their own copy of the software and database. Can someone suggest strategies for limiting the number of records which are allowed to exist in the user table in a way which can't reasonably be subverted by the customer? Thanks.
You should at least consider avoiding all technical means here and instead insisting that your customer sign an SLSA with an audit provision, and then audit here and there.
All these technical means introduce risks of failure, ranging from flat-out crashes to mysterious performance problems. The more stealthy and devious the mechanism, the more stealthy and devious the bugs.
It will depend on your definition of "reasonably". If they're hosting the database, they'll always be able to allow more rows.
The simplest possible solution would be an AFTER STATEMENT trigger that counted the number of rows and threw an exception if too many rows had been inserted. They could, of course, drop or disable that trigger. On the other hand, your application could also query the data dictionary to verify that the trigger was present and enabled.
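For illustration, a minimal sketch of such a statement-level trigger, assuming a hypothetical APP_USERS table and a cap of 100 licensed users (statement-level triggers may query the table they are defined on, so there is no mutating-table problem here):
CREATE OR REPLACE TRIGGER trg_app_users_limit
AFTER INSERT ON app_users
DECLARE
  v_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO v_count FROM app_users;
  IF v_count > 100 THEN
    RAISE_APPLICATION_ERROR(-20100, 'User limit exceeded for this license.');
  END IF;
END;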
You could make it more difficult for them to remove the trigger by creating a DDL trigger that looked for statements that affected this trigger or the table in question and disallowed them. That would require that the attacker find and remove that trigger as well before they could remove the trigger on the table.
You could deliver a database job (DBMS_SCHEDULER or DBMS_JOB) that periodically ran, looked for the statement and DDL triggers, and re-created them if they were missing. The attacker could figure out that there was a database job recreating the objects and remove that job, then remove the DDL trigger, then remove the statement trigger. In this job, you could potentially send a notification back to you (via email, HTTP, or something else) alerting you to the issue, though that may be tricky from a networking standpoint: your customer's firewall may not allow outbound HTTP requests from the database server back to your servers.
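For the job part, a hedged sketch using DBMS_SCHEDULER, assuming a hypothetical procedure RECREATE_GUARD_TRIGGERS that checks the data dictionary and re-creates the missing triggers:
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'GUARD_TRIGGER_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN recreate_guard_triggers; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE);
END;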
If you have a license key that is being checked, you can embed the number of users allowed in that license key and bounce that against the number of rows in the table during the login process.
If the customer doesn't have access to modify the table definition, you could use a simple set of constraints on the table:
CREATE TABLE user_table
(id NUMBER PRIMARY KEY
,name VARCHAR2(100) NOT NULL
,rn NUMBER NOT NULL
,CONSTRAINT rn_check CHECK (rn = TRUNC(rn) AND rn BETWEEN 1 AND 30)
,CONSTRAINT rn_uk UNIQUE (rn)
);
Now, the column rn must take an integer value between 1 and 30, and duplicates are not allowed: thus, a maximum of 30 rows may be added.
I currently have 2 schemas, A and B.
B has a table, and A executes selects, inserts, and updates on it.
In our sql scripts, we have granted permissions to A so it can complete its tasks.
grant select on B.thetable to A
etc,etc
Now, table 'thetable' is dropped and another table is renamed to thetable at least once a day.
rename someothertable to thetable
After doing this, we get an error when A executes a select on B.thetable.
ORA-00942: table or view does not exist
Is it possible that after executing the drop + rename operations, grants are lost as well?
Do we have to assign permissions once again ?
Update
someothertable has no grants.
Update 2
The daily process that inserts data into 'thetable' executes a commit every N insertions, so we're not able to execute any rollback. That's why we use 2 tables.
Thanks in advance
Yes, once you drop the table, the grant is also dropped.
You could try to create a VIEW selecting from thetable and granting SELECT on that.
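A hedged sketch of that, assuming the swap keeps the name thetable in schema B; the view goes invalid when the table is dropped, but it recompiles on the next use, and its own grant to A survives the daily swap:
CREATE OR REPLACE VIEW B.thetable_v AS SELECT * FROM B.thetable;
GRANT SELECT ON B.thetable_v TO A;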
Your strategy of dropping a table regularly does not sound quite right to me though. Why do you have to do this?
EDIT
There are better ways than dropping the table every day.
Add another column to thetable that states if the row is valid.
Put an index on that column (or extend your existing index that you use to select from that table).
Add another condition to your queries to only consider "valid" rows or create a view to handle that.
When importing data, set the new rows to "new". Once the import is done, you can delete all "valid" rows and set the "new" rows to "valid" in a single transaction.
If the import fails, you can just rollback your transaction.
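A hedged sketch of those steps, assuming a hypothetical STATUS column holding 'VALID' or 'NEW':
ALTER TABLE B.thetable ADD (status VARCHAR2(10) DEFAULT 'VALID' NOT NULL);
CREATE INDEX thetable_status_idx ON B.thetable (status);

-- during the daily import: load rows with status = 'NEW', then swap in one transaction
DELETE FROM B.thetable WHERE status = 'VALID';
UPDATE B.thetable SET status = 'VALID' WHERE status = 'NEW';
COMMIT;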
Perhaps the process that renames the table should also execute a procedure that does your grants for you? You could even get fancy and query the dictionary for existing grants and apply those to the renamed table.
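For example, a minimal sketch of the swap job re-issuing the grant, assuming it runs as schema B and A needs the select/insert/update access mentioned above (the fancier version would read USER_TAB_PRIVS before the drop and replay each grant):
BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE thetable';
  EXECUTE IMMEDIATE 'RENAME someothertable TO thetable';
  EXECUTE IMMEDIATE 'GRANT SELECT, INSERT, UPDATE ON thetable TO A';
END;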
No:
"Oracle Database automatically transfers integrity constraints, indexes, and grants on the old object to the new object."
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9019.htm#SQLRF01608
You must have another problem.
Another approach would be to use a temporary table for the work you're doing. After all, it sounds like the data is transitory, at least in that table, and you wouldn't have to keep reapplying the grants each time you had a new set of data or created the new table.