I have two schemas:
my_user and sec_admin
sec_admin has the right to create policies on any table of my_user.
We have created a procedure inside a package, my_package.my_proc(), that generates and executes all the create policy statements.
So far so good, everything has been working fine.
Now we've realized that when my_user drops a table (and recreates it), the attached policy automatically gets dropped. One solution is to manually re-run my_package.my_proc() to regenerate the policy. But the drop and create table statements are issued by SAS developers who don't have access to Oracle SQL Developer. Therefore, the call to my_package.my_proc() should be automated, using a trigger,
so we've created a simple trigger on my_user:
create or replace
TRIGGER test_trig
AFTER CREATE ON SCHEMA
DECLARE
  v_object_type VARCHAR2(100);
BEGIN
  -- ora_dict_obj_type holds the type of the object the DDL statement acted on
  v_object_type := ora_dict_obj_type;
  IF UPPER(v_object_type) = 'TABLE' THEN
    SEC_ADMIN.my_package.my_proc();
  END IF;
END test_trig;
It seems that when we create a table, the trigger only partially works the first time. When we create a second table, the trigger seems to fire, but PF_OWNER in dba_policies is my_user instead of SEC_ADMIN, so my policy no longer works.
It seems that I have to run the procedure SEC_ADMIN.my_package.my_proc() as SEC_ADMIN as part of a solution but I don't know how.
I'm also open to alternative solutions if you have any. It seems like the problem is that whenever a table is dropped, the attached policy is dropped with it...
Thanks for your help
I am using sequences to create IDs, so while executing the insert stored procedure it creates a unique value for the ID. But after some time it loses the definition for the sequence.
I'm not sure why this happens again and again, or how to solve the problem.
I am using Oracle SQL Developer, and in the table edit properties there is an "Identity Column" setting.
The next step is setting up the trigger and sequence.
It was working fine for some time until this property reverted to its default; now it is not there anymore.
I still have the trigger and sequence objects in the schema and am able to set it up again, but it will break again later.
How can I avoid this problem in the future?
I think it is just a bug/limitation in your client software, Oracle SQL Developer. The "Identity Column" tab is a handy way to create the corresponding sequence and trigger, but it doesn't seem to recognise existing ones. I've just verified this on my own system and that's exactly what happens.
It makes sense, because adding a new sequence and trigger is a pretty straightforward task (all you need is a template), but displaying the current sequence is hard, given that a trigger can implement any conceivable logic. Surely it could be done, but the cost-benefit ratio probably left things this way.
In short, your app is not broken so nothing needs to be fixed on your side.
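For reference, here is a minimal sketch of that sequence-plus-trigger template; the table MY_TABLE and its numeric ID column are illustrative names, not objects from the original post:
CREATE SEQUENCE my_table_seq START WITH 1 INCREMENT BY 1;

CREATE OR REPLACE TRIGGER my_table_bir
BEFORE INSERT ON my_table
FOR EACH ROW
WHEN (new.id IS NULL)
BEGIN
  -- populate the ID from the sequence when the insert did not supply one
  SELECT my_table_seq.NEXTVAL INTO :new.id FROM dual;
END;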
This is what I received from IT support regarding the issue:
A few possibilities that might cause this:
1 - Another user with limited privileges might be editing the table using SQL Developer. In this case, if this user's privilege is not enough to obtain the sequence and/or trigger information from the database, the tool might leave the fields blank and disable it when table changes are saved.
2 - The objects are being changed or removed outside of SQL Developer, causing it to lose the information. In my tests I noticed that dropping the trigger and recreating it with the same name caused the identity property information to be lost on SQL Developer.
Even with the trigger enabled and working for inserts, it could not retrieve the information.
Then, if I run an ALTER TRIGGER to enable it (even though dba_triggers reports it as already enabled), SQL Developer will list the information again:
ALTER TRIGGER "AWS"."TABLE1_TRG" ENABLE;
So it looks like there are some issues with SQL Developer that are causing this behavior.
Next time it happens, please check whether the trigger still exists in the database and is enabled, using the query below:
select owner, trigger_name, TRIGGER_TYPE, TRIGGERING_EVENT, TABLE_OWNER, TABLE_NAME, STATUS
from dba_triggers
where trigger_name = 'ENTER_YOUR_TRG_NAME'; --Just change the trigger name in WHERE
I have created a synonym for a dblink.
create synonym dblink2 for dblink1
But when I query anything using the synonym instead of the dblink, I get a "connection description for remote database not found" error.
SELECT * FROM DUAL@DBLINK2
How do I query using the synonym? Edit: I know that it'll work if I create a view of the table using the dblink, but my requirement is as stated above.
Unfortunately creation of synonyms for dblinks is not supported. If you read the documentation on synonyms, you will find that the permitted objects for synonyms are only:
Use the CREATE SYNONYM statement to create a synonym, which is an
alternative name for a table, view, sequence, procedure, stored
function, package, materialized view, Java class schema object,
user-defined object type, or another synonym.
The reason why your second query fails is that the synonym you have created is not functioning correctly. It is not being validated at creation time, so you can create any sort of incorrect synonym like that. To verify, just test the following statement:
create synonym dblink3 for no_object_with_this_name;
You will still get a response like this:
Synonym DBLINK3 created.
But of course nothing will work via this synonym.
I don't see the point in creating a synonym for the dblink itself. Ideally you create the synonym for the remote table using the dblink.
CREATE DATABASE LINK my_db_link CONNECT TO user IDENTIFIED BY passwd USING 'alias';
CREATE SYNONYM my_table FOR remote_table@my_db_link;
Now, you could query the remote table using the synonym:
SELECT * FROM my_table;
I'm trying to think of the business issue that gets solved by putting a synonym on a db_link, and the only thing I can think of is that you need to deploy constant code that will be selecting from some_table@some_dblink, and although the table names are constant, different users may be looking across different db_links. Or you just want to be able to swap which db_link you are operating across with a simple synonym repoint.
Here's the problem: it can't be done that way. db_link synonyms are not allowed.
Your only solution is to have the code instead reference the tables by synonyms, and set private synonyms to point across the correct db_link. That way your code continues to "Select from REMOTE_TABLE1" and you can just flip which db_link you are getting that remote table from.
Is it a pain to have to set/reset 100+ private synonyms? Yep. But if it is something you need to do often, then bundle up a procedure that you pass the db_link name to, and have it cycle through and reset the synonyms for you.
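A minimal sketch of such a procedure, assuming every private synonym in the current schema that already points across a db link should be repointed at an object of the same name over the link you pass in (the procedure name is illustrative):
CREATE OR REPLACE PROCEDURE repoint_synonyms (p_db_link IN VARCHAR2) AS
BEGIN
  -- rebuild every private synonym that currently points across a db link
  FOR s IN (SELECT synonym_name, table_name
              FROM user_synonyms
             WHERE db_link IS NOT NULL)
  LOOP
    EXECUTE IMMEDIATE 'CREATE OR REPLACE SYNONYM ' || s.synonym_name ||
                      ' FOR ' || s.table_name || '@' || p_db_link;
  END LOOP;
END repoint_synonyms;
A single call such as EXEC repoint_synonyms('DBLINK1') then flips all of them in one go.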
While I understand that this question is 3+ years old, someone might be able to benefit from a different answer in the future.
Let's imagine that I have 4 databases, 2 for production and 2 for dev / testing.
Prod DBs: PRDAPP1DB1 and PRDAPP2DB1
Dev DBs: DEVAPP1DB1 and DEVAPP2DB1
The "APP2" databases are running procedures to extract and import data from the APP1 databases. In these procedures, there are various select statements, such as:
declare
iCount INTEGER;
begin
insert into tbl_impdata1
select sysdate, col1, col2, substr(col3,1,10), substr(col3,15,3)
from tbl1@dblink2; -- Where dblink2 points to DEVAPP1DB1
...
<more statements here>
...
EXCEPTION
<exception handling code here>
end;
That is okay for development, but dblink2 constantly needs to be changed to dblink1 when deploying the updated procedure to production.
As was pointed out, synonyms cannot be used for this purpose.
Instead, create the db links with the same name but different connection strings.
E.g. on production:
CREATE DATABASE LINK "MyDBLINK" USING 'PRDAPP1DB1';
And on dev:
CREATE DATABASE LINK "MyDBLINK" USING 'DEVAPP1DB1';
Then, in the procedures, change all "@dblink1" and "@dblink2" references to "@mydblink", and it should all be transparent from there.
If you are trying to have the DB link accessible to multiple schemas (users), the answer is to create a public db link.
Example:
CREATE PUBLIC DATABASE LINK dblink1 CONNECT TO user IDENTIFIED BY password USING 'tnsalias';
After that, any schema can issue:
SELECT * FROM TABLE@dblink1;
I get an error, 'ORA-04092: cannot COMMIT in a trigger', when trying to execute a DDL command in a simple Oracle AFTER UPDATE trigger. The trigger needs to create a public database link after a value in one column is updated. Here is the source:
create or replace
TRIGGER CreateLinkTrigger
after UPDATE of Year ON tableInit
for each row
DECLARE
add_link VARCHAR2(200);
BEGIN
IF :new.year = '2014'
then
add_link := q'{create public database link p2014 connect to test14 identified by temp using 'ora'}';
execute immediate add_link;
END IF;
END;
So, as you can see, I need to create a new public database link after a new year has been activated.
When I try to update the table 'tableInit' with a year value of '2014', I get the ORA-04092 error.
Is there any way to avoid this error, or another solution for this?
Thanks...
Creating a database link on the fly seems like an unusual thing to do; your schema should generally be static and stable. However, if you must, it would be simpler to wrap the update and the link creation in a procedure, or just issue two statements. Presumably whatever performs the update is fairly controlled anyway; otherwise you'd have to deal with multiple people triggering this multiple times, which would be even more of a mess.
You can probably make this work by adding PRAGMA autonomous_transaction; to your trigger, as demonstrated for a similar issue (creating a view rather than a link) in this answer, but I'm not in a position to test that at the moment.
create or replace
TRIGGER CreateLinkTrigger
after UPDATE of Year ON tableInit
for each row
DECLARE
add_link VARCHAR2(200);
PRAGMA autonomous_transaction;
BEGIN
...
You could also make the trigger submit an asynchronous job to perform the DDL, as described in this answer, and there's more of an example in this answer, where you'd change the job's anonymous block to do your execute immediate.
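A minimal sketch of that job-based variant, reusing the table and link text from the question; DBMS_JOB jobs only become runnable once the triggering transaction commits, so the DDL runs later in its own session and the ORA-04092 restriction doesn't apply (treat this as a starting point rather than tested code):
create or replace
TRIGGER CreateLinkTrigger
after UPDATE of Year ON tableInit
for each row
DECLARE
  v_job BINARY_INTEGER;
BEGIN
  IF :new.year = '2014' THEN
    -- submit an anonymous block that performs the DDL; it runs asynchronously
    -- in a separate session once this transaction commits
    DBMS_JOB.SUBMIT(
      job  => v_job,
      what => q'{begin execute immediate 'create public database link p2014 connect to test14 identified by temp using ''ora'''; end;}');
  END IF;
END;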
It would probably be better to just create the links for the next few years in advance, during a maintenance window, on a schedule, or from a procedure, rather than trying to tie a schema change to a data change.
When building a deploy script for a Visual Studio 2010 SQL database project, is there any way to instruct the process to use DROP and CREATE for stored procedures and other programmable objects instead of always ALTERing them?
As an example, if I change a stored procedure, the deploy script will generate something like this...
ALTER PROCEDURE MyProc
AS
SELECT yadda...
I want the deploy script to create something like this instead
IF OBJECT_ID('MyProc', 'P') IS NOT NULL
    DROP PROCEDURE MyProc;
GO
CREATE PROCEDURE MyProc
AS
SELECT yadda....
It would make version-controlled upgrade scripts a bit easier to manage, and deployed changes would perform better. Also, if this is not possible, a way to at least issue a RECOMPILE with the ALTER would help some.
This question asks something that seems similar, but I do not want this behavior for tables, just for the programmable objects.
I'm not familiar enough with database projects to give an answer about whether it's possible to do a DROP and CREATE. However, in general I find that CREATE and ALTER is better than DROP and CREATE.
By CREATE and ALTER I mean something like:
IF NOT EXISTS (SELECT 1 FROM INFORMATION_SCHEMA.ROUTINES
               WHERE ROUTINE_NAME = 'MyProc'
               AND ROUTINE_TYPE = 'PROCEDURE')
BEGIN
    -- CREATE PROC has to be the first statement in a batch so
    -- cannot appear within a conditional block. To get around
    -- this, make the statement a string and use sp_ExecuteSql.
    DECLARE @DummyCreateText NVARCHAR(100);
    SET @DummyCreateText = 'CREATE PROC dbo.MyProc AS SELECT 0;';
    EXEC sp_ExecuteSql @DummyCreateText;
END;
GO
ALTER PROCEDURE dbo.MyProc
AS
SELECT yadda...
The advantage of CREATE and ALTER over DROP and CREATE is that the stored proc is only created once. Once created it is never dropped so there is no chance of permissions getting dropped and not recreated.
In a perfect world the permissions on the stored proc would be applied via a database role, so it would be easy to reapply them after dropping and recreating the stored proc. In reality, however, I often find that after a few years other applications may start using the same stored proc, or well-meaning DBAs may apply new permissions for some reason. So I've found that DROP and CREATE tend to cause applications to break after a few years (and it's always worse when it's someone else's application that you know nothing about). CREATE and ALTER avoids these problems.
By the way, the dummy create statement, "CREATE PROC dbo.MyProc AS SELECT 0", works with any stored procedure. If the real stored procedure is going to have parameters or return a recordset with multiple columns, those can all be specified in the ALTER PROC statement. The CREATE PROC statement just has to create the simplest stored procedure possible. (Of course, the name of the stored proc in the CREATE PROC statement will need to change to match the name of your stored proc.)
We have an application written in Delphi 2010 which connects to a SQL Server database. Now we're in the process of migrating to Oracle. With SQL Server it was very easy to perform insert, update and delete right from a DBGrid connected to a stored procedure.
That's because stored procedures in SQL Server can easily act as a table, so you can do any operation on it, provided it returns the necessary columns within the result set. With Oracle I don't know how to do it. I connect a DBGrid to a DataSource whose dataset is a Stored Procedure object, but I can't edit the grid; only SELECT is possible.
What do I have to do to achieve this? I use the UniDAC component suite to connect to the Oracle database.
Oracle does not support such functionality. In other words, in Oracle you cannot edit the result set provided by a stored procedure, or include a stored procedure in an INSERT INTO <name>, UPDATE <name> or DELETE FROM <name> statement.
While it is traditional for SQL Server developers to "always" use stored procedures (for many reasons), it is not traditional for Oracle developers. But it is possible with Oracle too. Search for "REF CURSOR" to see how to fetch data using an SP, and use a normal or packaged (preferred) SP to post updates to the DB. These procedures will receive the old/new field values through arguments.
I cannot say precisely for UniDAC; I can speak for AnyDAC, but I would expect UniDAC to have similar functionality. To use an SP for posting updates you will need to use a TXxxUpdateSQL component.
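As a minimal sketch of such a packaged update procedure, assuming the EMPLOYEES table (with EMPLOYEEID and EMPLOYEENAME columns) used later in this thread; the package and procedure names are illustrative:
CREATE OR REPLACE PACKAGE employees_api AS
  PROCEDURE update_employee (p_employeeid   IN EMPLOYEES.EMPLOYEEID%TYPE,
                             p_employeename IN EMPLOYEES.EMPLOYEENAME%TYPE);
END employees_api;
/
CREATE OR REPLACE PACKAGE BODY employees_api AS
  -- receives the new field values as arguments and performs the actual DML
  PROCEDURE update_employee (p_employeeid   IN EMPLOYEES.EMPLOYEEID%TYPE,
                             p_employeename IN EMPLOYEES.EMPLOYEENAME%TYPE) IS
  BEGIN
    UPDATE EMPLOYEES
       SET EMPLOYEENAME = p_employeename
     WHERE EMPLOYEEID   = p_employeeid;
  END update_employee;
END employees_api;
/
The data access layer would then call employees_api.update_employee from its update SQL instead of posting a plain UPDATE statement.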
OK, here I'm answering my own question, though I can see very few people are dealing with Delphi recently. Let's say we have a stored proc in the Oracle database:
CREATE OR REPLACE PROCEDURE GET_EMPLOYEES
(V_CUR IN OUT SYS_REFCURSOR)
AS
BEGIN
OPEN V_CUR FOR SELECT * FROM EMPLOYEES;
END GET_EMPLOYEES;
Now, in Delphi you pick a stored procedure component (probably from the ODAC or UniDAC component suite). Set its StoredProcName to GET_EMPLOYEES. Then you can add all the fields that the procedure returns in a cursor. If you run the application and activate the stored procedure, you'll be able to see all the records. But if you try to insert, modify or delete anything, you'll fail to do so. Now, there's a very tricky thing. If you check, you'll see that the ReadOnly property of all fields is set to True. Even after you set them to False, nothing will change in the real database, although you can edit the DBGrid.
So, we've come to the main part. How did the old Delphi-SQL Server partnership work so that you could do any operation right from a DBGrid? Well, we must understand that there's no magic. If it's SQL, then SQL has only one way of INSERTING, UPDATING and DELETING records: with the appropriate SQL statements. With Delphi-SQL Server there seems to be an implicit SQL statement that we never paid attention to. But with Oracle, we have to provide our own statements for each operation.
If you use UniDAC or ODAC then there are SQLInsert, SQLUpdate and SQLDelete properties on a StoredProc object. If you want to insert a record through the DBGrid, then you should edit its SQLInsert property to
INSERT INTO EMPLOYEES VALUES(:EMPLOYEEID,:EMPLOYEENAME)
where the variables following ':' correspond to the fields of the stored procedure; they're simply bind variables. When updating and deleting, though, you'll need some unique value to represent a specific record. The primary key is one option (maybe the only option, as I haven't been able to figure out how to use ROWID for the same purpose). So the SQL statements for UPDATE and DELETE would be
DELETE FROM EMPLOYEES WHERE EMPLOYEEID=:EMPLOYEEID
and
UPDATE EMPLOYEES SET EMPLOYEENAME=:EMPLOYEENAME WHERE EMPLOYEEID=:EMPLOYEEID
P.S. I just found a way to use ROWID for the update and delete statements. In your stored procedure, if you select ROWID too and give it an alias, then you can construct your UPDATE and DELETE statements like this:
UPDATE EMPLOYEES SET EMPLOYEENAME=:EMPLOYEENAME,..... WHERE ROWID=:RECORD_ROWID
DELETE FROM EMPLOYEES WHERE ROWID=:RECORD_ROWID
In the preceding statements, RECORD_ROWID is the field name returned from the stored procedure as a result of aliasing ROWID. If you use :ROWID instead, you'll get an "ORA-01745: invalid host/bind variable name" error. This is because in a bind variable a colon cannot be followed by a reserved word, and ROWID is a reserved word.
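For illustration, the cursor query in the stored procedure above would then look something like this (a sketch reusing the same EMPLOYEES table):
CREATE OR REPLACE PROCEDURE GET_EMPLOYEES
(V_CUR IN OUT SYS_REFCURSOR)
AS
BEGIN
  -- expose ROWID under an alias so the client can bind :RECORD_ROWID
  OPEN V_CUR FOR
    SELECT e.*, e.ROWID AS RECORD_ROWID
      FROM EMPLOYEES e;
END GET_EMPLOYEES;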