How to track and modify an in-flight Oracle query before execution?

My application is sending the query below to Oracle.
SELECT * FROM EMPLOYEE WHERE DATE > (SYSDATE - 1) order by employee_id
I cannot change this query in the application. I am looking for a way for Oracle to intercept the query, rewrite it into the form below, and return the result.
SELECT * FROM EMPLOYEE WHERE DATE > (SYSDATE - 1) and Currently_employed = 'YES' order by employee_id
Thank you.
Subrat

If the application can connect to the database as a different user than the table owner, you could implement a Virtual Private Database (VPD) policy to modify the query in-flight. See here:
https://oracle-base.com/articles/8i/virtual-private-databases
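As a rough sketch of the VPD approach (the function and policy names here are made up; see the article above for the full details, and test before relying on it):

    -- Policy function: returns the predicate Oracle appends to queries.
    create or replace function emp_policy_fn
        (p_schema in varchar2, p_object in varchar2)
        return varchar2
    as
    begin
        return q'[currently_employed = 'YES']';
    end;
    /

    -- Attach the predicate to SELECTs against the table.
    begin
        dbms_rls.add_policy
        (
            object_schema   => 'DATA_OWNER',
            object_name     => 'EMPLOYEE',
            policy_name     => 'EMP_CURRENT_ONLY',
            policy_function => 'EMP_POLICY_FN',
            statement_types => 'SELECT'
        );
    end;
    /

With this in place, every SELECT against EMPLOYEE gets the extra predicate added transparently, with no change to the application's SQL text.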
You could also, if the application connects as a separate user, create a view as suggested by @ekochergin, and have a synonym "EMPLOYEE" in the application user's schema point to the view in the data owner's schema.
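A minimal sketch of that view-plus-synonym setup, assuming hypothetical schema names DATA_OWNER and APP_USER:

    -- In DATA_OWNER's schema: a view that adds the filter.
    create view employee_filtered as
        select * from employee where currently_employed = 'YES';
    grant select on employee_filtered to app_user;

    -- In APP_USER's schema: a synonym so the application's unchanged
    -- query "SELECT * FROM EMPLOYEE ..." resolves to the view.
    create synonym employee for data_owner.employee_filtered;

Name resolution checks the current schema first, so this only works because APP_USER has no table of its own called EMPLOYEE.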
If the application connects as the data owner, your options are much more limited. Renaming the table and replacing it with a view, as suggested, would be the easiest option. If you have a lot of money to invest in Oracle's Database Firewall appliance, you could also modify the query in-flight with a security policy there.

You might need to rename the employee table to something like "EMP_TABLE" and create a view named "EMPLOYEE" using
create view employee as select * from emp_table where currently_employed = 'YES';
Please test this on a test instance before implementing it in production.

Use the SQL Translation Framework if you only need to convert a small number of statements. If you need to modify many statements, look into the options described in the other answers, such as Virtual Private Database, views, or synonyms.
For this sample schema:
create table employee
(
    employee_id        number,
    hire_date          date,
    currently_employed varchar2(3)
);

insert into employee values(1, sysdate, 'NO');
insert into employee values(1, sysdate, 'YES');
commit;
Create the following translator profile and then create the specific translation:
begin
    dbms_sql_translator.create_profile('EMPLOYEE_TRANSLATOR_PROFILE');
    dbms_sql_translator.register_sql_translation
    (
        profile_name    => 'EMPLOYEE_TRANSLATOR_PROFILE',
        sql_text        => q'[SELECT * FROM EMPLOYEE WHERE HIRE_DATE > (SYSDATE - 1) order by employee_id]',
        translated_text => q'[SELECT * FROM EMPLOYEE WHERE HIRE_DATE > (SYSDATE - 1) and Currently_employed = 'YES' order by employee_id]'
    );
end;
/
The translation profile must be enabled in each session. Since you have no control over the application, you can create a logon trigger that will automatically run the commands to enable the translation:
--Logon trigger that enables the translation profile.
--I'm not sure why, but you must create this trigger as SYS.
create or replace trigger sys.logon_trg
after logon on database
--Add your username here:
when (user in ('JHELLER'))
begin
    execute immediate 'alter session set sql_translation_profile = jheller.employee_translator_profile';
    execute immediate q'[alter session set events = '10601 trace name context forever, level 32']';
end;
/
Now, when the application runs the original query that would normally return two rows, it will run the second query that only returns one row:
SQL> SELECT * FROM EMPLOYEE WHERE HIRE_DATE > (SYSDATE - 1) order by employee_id;

EMPLOYEE_ID HIRE_DATE CUR
----------- --------- ---
          1 12-FEB-21 YES
But be careful of tiny syntax changes that will prevent the translation. For example, if SELECT is changed to lowercase select, the query will not be replaced and will return two rows again:
SQL> select * FROM EMPLOYEE WHERE HIRE_DATE > (SYSDATE - 1) order by employee_id;

EMPLOYEE_ID HIRE_DATE CUR
----------- --------- ---
          1 12-FEB-21 YES
          1 12-FEB-21 NO

Oracle database: synchronizing data from one table to multiple related tables

I have a user table, which is quite simple:
create table user (
    user_id   int primary key,
    user_name varchar2(20)
)
And I built a couple of related tables associated with the user table, and each table has a user_id and a user_name.
So here comes the question: I happened to enter a record with the wrong name, and then linked all the related tables to this wrong record. If I want to correct the user table and at the same time synchronize user_name in all the related tables, how do I do that in a simple way? Also, I didn't set any constraints on these tables.
Edit:
Let me put that more clearly. I can query all users from the user table, and then I build a select element on the JSP page. This selector has two fields: user_id and user_name. First I recorded a man as '01', 'tam', and then I recorded another row in salary as 'tam', '$1300'. This was all wrong because the name should have been 'tom'. It's easy to change user or salary, but in our system there are over 40 tables linked to user. I know it's a bad idea, but it was designed that way by our DBA and it has already worked for a long time.
We'll start by making the problem explicit. The data model violates Third Normal Form: instead of relying on user_id to reference user_name, every table dependent on the user table duplicates the attribute. The consequence is that correcting a mistake in user_name means propagating that change to every table.
Furthermore, it seems that this application lacks a mechanism for correcting errors, or rather for propagating the correction to all the impacted tables. So, what to do?
Dynamic SQL and the data dictionary to the rescue:
declare
    l_id       user.user_id%type   := 1234;
    l_old_name user.user_name%type := 'Tam';
    l_new_name user.user_name%type := 'Tom';
begin
    for rec in ( select table_name from user_tab_cols where column_name = 'USER_ID'
                 intersect
                 select table_name from user_tab_cols where column_name = 'USER_NAME' )
    loop
        execute immediate 'update ' || rec.table_name ||
            ' set user_name = :1 where user_id = :2 and user_name = :3'
            using l_new_name, l_id, l_old_name;
    end loop;
    -- single commit at the end, so the correction is all-or-nothing
    commit;
end;
/
No guarantees about performance, as that depends on the data and indexing of each table.
"it already worked a long time"
Which makes me wonder how many data inconsistencies your system contains that you don't know about. Maybe your DBA needs to brush up on their data modelling skills.

How to use or create a temp table in Oracle

I am pretty new to Oracle.
I am stuck trying to achieve the following logic. I am creating a SQL script in Oracle that will help me generate a report. This script will run twice a day, so it shouldn't pick up the same rows when it runs the next time.
1) Run the query, save the result set, and store the Order Id in the temp table when the job runs at 11 AM.
2) Run the query a second time at 3 PM, check the temp table, and return the result set that's not in the temp table.
The following query will generate the result set, but I'm not sure how to create a temp table and validate against it on each run.
select
    rownum as LineNum,
    'New' as ActionCode,
    ORDER_ID,
    AmountType,
    trunc(sysdate),
    trunc(systime)
from crd.V_IVZ_T19 t19
where
    (t19.acct_cd in
        (select fc.child_acct_cd
         from cs_config fc
         where fc.parent_acct like 'G_TRI_RPT'))
    and t19.date >= trunc(sysdate)
    and t19.date <= trunc(sysdate);
Any help is much appreciated. I am also not sure how to get only the timestamp.
A temp table is not the right idea here, because a temp table does not keep its data for long (only for the session); you just need to create a normal table. Hope this helps:
--- Table for storing ORDER_ID for further checking; use the correct
--- datatype. You can also add a date column to expire old order_ids.
CREATE TABLE order_id_store (
    order_id NUMBER,
    end_date DATE
);

--- Fill the table for further checking
INSERT INTO order_id_store
SELECT ORDER_ID, trunc(sysdate)
FROM crd.V_IVZ_T19 t19
WHERE t19.order_id NOT IN (SELECT DISTINCT order_id FROM order_id_store)
  AND t19.date >= trunc(sysdate)
  AND t19.date <= trunc(sysdate);

--- Delete data that is no longer needed, e.g. older than 2 days:
DELETE FROM order_id_store WHERE end_date <= trunc(SYSDATE - 2);
COMMIT;
---- Report select that excludes already-stored data
SELECT
    rownum as LineNum,
    'New' as ActionCode,
    ORDER_ID,
    AmountType,
    trunc(sysdate),
    trunc(systime)
FROM crd.V_IVZ_T19 t19
WHERE
    (t19.acct_cd in
        (select fc.child_acct_cd
         from cs_config fc
         where fc.parent_acct like 'G_TRI_RPT'))
    AND t19.order_id NOT IN (SELECT DISTINCT order_id FROM order_id_store)
    AND t19.date >= trunc(sysdate)
    AND t19.date <= trunc(sysdate);
I'm not sure about your "t19.date >=" and "t19.date <=" conditions, since as written they cover only a single truncated date; correct them if that's not what you intend.
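To illustrate the point about temp tables: a global temporary table in Oracle keeps rows only per session (or per transaction), so an 11 AM job and a 3 PM job running in different sessions would never see each other's rows. A sketch (table name made up):

    -- Rows survive commits but vanish when the session ends,
    -- so this cannot carry state between two separate job runs.
    create global temporary table order_id_gtt
    (
        order_id NUMBER
    ) on commit preserve rows;

That is why a permanent table like order_id_store is the right tool here.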

ORA-02437: "primary key violated" - why can't I see duplicate ID in SQL Developer?

I would receive this error:
ORA-02437: cannot validate (%s.%s) - primary key violated
Cause: attempted to validate a primary key with duplicate values or null values
I found it was because I have a stored procedure that increments the ID, but it had failed to do so when it re-ran and hit an error related to one of my datatypes. I found I now had a duplicate ID in my database table. All this made sense, and I was able to rectify it easily with a DELETE FROM MyTable WHERE ID = x, where x was the offending duplicate ID. The problem is that the only way I was able to find the duplicated IDs in the first place was by running SELECT * FROM MyTable WHERE ID = x, where x was one greater than the last ID I could actually see. I found it by an educated guess. So:
Why can't I see these duplicate IDs when I open the table in Oracle SQL Developer? It only shows the last row before the duplicates. I don't think it is because of my primary key constraint, since the first line in my stored procedure removes it (and puts it back at the end, which is probably where I got my error), and it was not present when I looked at my table.
Is there some way to make these last inserted IDs visible, so in the future I wouldn't have to guess or assume that the duplicate IDs are "hiding" as one greater than the last ID in my table? There is a commit; in my stored procedure, so they should have appeared -- unless, of course, the procedure got hung up before it could run that line of code (highly probable).
Stored procedure that runs:
create or replace
PROCEDURE PRC_MYTABLE_INTAKE(
    EMPLOYEE_ID    IN NVARCHAR2
  , TITLE_POSITION IN NVARCHAR2
  , CREATED_DATE   IN DATE
  , LAST_MODIFIED  IN DATE
) AS
    myid  integer := 0;
    appid integer := 0;
BEGIN
    -- disable PK constraint so the ID can be updated
    EXECUTE IMMEDIATE 'ALTER TABLE MYTABLE DROP CONSTRAINT MYTABLE_PK';
    COMMIT;
    -- assign ID to myid
    SELECT ID INTO myid FROM MYTABLE WHERE ROWID IN (SELECT MAX(ROWID) FROM MYTABLE);
    -- increment
    myid := myid + 1;
    -- assign APPLICATION_ID to appid
    SELECT APPLICATION_ID INTO appid FROM MYTABLE WHERE ROWID IN (SELECT MAX(ROWID) FROM MYTABLE);
    -- increment
    appid := appid + 1;
    -- use these ids to insert with
    INSERT INTO MYTABLE (ID, APPLICATION_ID
      , EMPLOYEE_ID
      , TITLE_POSITION
      , CREATED_DATE
      , LAST_MODIFIED
    ) VALUES (myid, appid
      , EMPLOYEE_ID
      , TITLE_POSITION
      , CREATED_DATE
      , LAST_MODIFIED
    );
    COMMIT;
    -- re-enable the PK constraint
    EXECUTE IMMEDIATE 'ALTER TABLE MYTABLE ADD CONSTRAINT MYTABLE_PK PRIMARY KEY (ID)';
    COMMIT;
END;
Here's one problem:
SELECT ID
INTO myid
FROM MYTABLE
WHERE ROWID IN (SELECT MAX(ROWID) FROM MYTABLE)
There is no correlation between ID and ROWID, so you're not getting the maximum current ID; you're just getting the ID that happens to be on the row whose ROWID sorts highest, i.e. the row stored furthest into the table's data files.
The code you need is:
SELECT COALESCE(MAX(ID),0)
FROM MYTABLE;
Or better yet, just use a sequence.
No idea why you're dropping the PK either.
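A sequence-based version of the ID assignment might look like this sketch (the sequence names are made up, and the START WITH values would need to be above the current MAX of each column):

    -- One-time setup:
    create sequence mytable_id_seq start with 1000;
    create sequence mytable_app_id_seq start with 1000;

    -- In the procedure: no SELECT MAX, no dropping the constraint.
    INSERT INTO MYTABLE (ID, APPLICATION_ID, EMPLOYEE_ID,
                         TITLE_POSITION, CREATED_DATE, LAST_MODIFIED)
    VALUES (mytable_id_seq.nextval, mytable_app_id_seq.nextval,
            EMPLOYEE_ID, TITLE_POSITION, CREATED_DATE, LAST_MODIFIED);

Sequences are safe under concurrent sessions, so the procedure would no longer need to drop and re-create the primary key at all.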
Furthermore, when you issue the query:
SELECT APPLICATION_ID INTO appid ...
... that could be for a different row than the one you already got the ID for, because another session could have committed a change to the table in the meantime.
Of course another issue is that you can't run two instances of this procedure at the same time either.
For David Aldridge, since he wants to look at code instead of the real reason I posted my question, run this:
CREATE TABLE YOURSCHEMA.TESTING
(
TEST_ID NVARCHAR2(100) NOT NULL
, TEST_TYPE NVARCHAR2(100) NOT NULL
, CONSTRAINT TEST_PK PRIMARY KEY
(
TEST_ID
)
ENABLE
);
create or replace
PROCEDURE PRC_TESTING_INSERT(
    TEST_TYPE IN NVARCHAR2
) AS
    testid integer := 0;
BEGIN
    -- disable PK constraint so it can be updated
    EXECUTE IMMEDIATE 'ALTER TABLE TESTING DROP CONSTRAINT TEST_PK';
    COMMIT;
    -- assign TEST_ID to testid
    SELECT TEST_ID INTO testid FROM TESTING WHERE ROWID IN (SELECT MAX(ROWID) FROM TESTING);
    -- increment
    testid := testid + 1;
    -- use this id to insert with
    INSERT INTO TESTING (TEST_ID, TEST_TYPE) VALUES(testid, TEST_TYPE);
    COMMIT;
    -- re-enable the PK constraint
    EXECUTE IMMEDIATE 'ALTER TABLE TESTING ADD CONSTRAINT TEST_PK PRIMARY KEY (TEST_ID)';
    COMMIT;
END;
SET serveroutput on;
DECLARE
    test_type varchar(100);
BEGIN
    test_type := 'dude';
    YOURSCHEMA.PRC_TESTING_INSERT(test_type);
    -- to verify the variable got set and the procedure ran, you could do:
    --dbms_output.enable;
    --dbms_output.put_line(test_type);
END;
Now, because there is no data in the table, the stored procedure will fail with "no data found" (ORA-01403, reported with an ORA-06512 line reference). If you then try to run it again, you will get ORA-02443: cannot drop constraint - nonexistent constraint, because the EXECUTE IMMEDIATE 'ALTER TABLE TESTING DROP CONSTRAINT TEST_PK'; successfully dropped the constraint, and the procedure never reached the command at the end to re-add it. This is what made me think I needed the commits, but even without them, it still will not complete the whole procedure.
To prove that the procedure DOES run if given proper data, run this after creating the table but before creating/running the stored procedure:
INSERT INTO TESTING (TEST_ID, TEST_TYPE)
VALUES ('1', 'hi');
And if you run the proc from a new table (not one with its constraint dropped), it will run fine.
Since mathguy didn't post this as the answer, though I'll credit him for the information...
The answer to why I can't see the duplicates is that the COMMIT does not occur in the procedure when it fails due to a datatype mismatch (which we found was actually in the application code that sent the variables' values into this procedure, not in the stored procedure itself). (It's also why I'll mark down anyone who says you don't have to add so many COMMIT lines in this procedure.) The commands were run in the session of the user that started it - in my case, another session of the same DB user I was logged in with, but started from my application instead of my SQL Developer session. It also explains why I could do a COMMIT myself and yet it did not affect the application's session - I could not commit actions run from another session. Had I run a COMMIT as an OracleCommand and done an .ExecuteNonQuery on my OracleConnection right after the failure, within the catch block of my application, I would have seen the rows in SQL Developer without having to do a special query.
So, in short, the only way to see the items was a direct query with WHERE ID =, finding the last visible ID, incrementing it, and putting it in the query.

Declaring and using variables in PL/SQL

I am new to PL/SQL. I do not understand why I am getting the error "PLS-00428: an INTO clause is expected in this SELECT statement".
What I'm trying to accomplish is to create a variable c_limit and load its value. I then want to use that variable later to filter data.
Basically I am playing around in the demo DB to see what I can and can't do with PL/SQL.
The code worked up to the point where I added "select * from demo_orders where CUSTOMER_ID = custID;"
declare
    c_limit NUMBER(9,2);
    custID  INT;
BEGIN
    custID := 6;
    -- Save the credit limit
    select credit_limit INTO c_limit
    from demo_customers cust
    where customer_id = custID;

    select * from demo_orders where CUSTOMER_ID = custID;

    dbms_output.put_line(c_limit);
END;
If you use a SQL SELECT statement within an anonymous block (in PL/SQL, between the BEGIN and END keywords), you must select INTO something so that PL/SQL has a variable to hold the result of your query. It is important to note that if you are selecting multiple columns (which you are, with SELECT *), you must specify multiple variables, or a record, to receive the results.
for example:
SELECT 1
INTO v_dummy
FROM dual;
SELECT 1, 2
INTO v_dummy, v_dummy2
FROM dual;
It is also worth pointing out that if your SELECT * FROM ... returns multiple rows, PL/SQL will throw an error (TOO_MANY_ROWS). A SELECT INTO must return exactly one row of data.
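If you do want to process multiple rows, a cursor FOR loop avoids the problem entirely. A sketch against the demo schema (the demo_orders column names here are assumptions):

    BEGIN
        FOR rec IN (SELECT order_id, order_total
                    FROM   demo_orders
                    WHERE  customer_id = 6)
        LOOP
            dbms_output.put_line(rec.order_id || ': ' || rec.order_total);
        END LOOP;
    END;
    /

Each iteration fetches one row into rec, so no INTO clause is required.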
It looks like the error is from the second select query:
select * from demo_orders where CUSTOMER_ID = custID;
PL/SQL does not allow a standalone SELECT query inside a block. For reference:
http://pls-00428.ora-code.com/
You need to do something with the result of the second select query, such as selecting it into variables or looping over it.

How to find the elapsed time a user was logged into the database using a trigger

I am trying to calculate the total time a user was logged into the database using a trigger.
My table structure is shown below:
create table stats$user_log
(
    user_id         varchar2(30),
    session_id      number(8),
    host            varchar2(30),
    logon_day       date,
    logon_time      varchar2(10),
    logoff_day      date,
    logoff_time     varchar2(10),
    elapsed_minutes varchar2(32)
);
My trigger for logon is as follows:
create or replace trigger logon_audit_trigger
AFTER LOGON ON DATABASE
BEGIN
    insert into stats$user_log values(
        user,
        sys_context('USERENV','SESSIONID'),
        sys_context('USERENV','HOST'),
        sysdate,
        to_char(sysdate, 'hh24:mi:ss'),
        null,
        null,
        null
    );
END;
/
My trigger for logoff is as follows:
create or replace trigger logoff_audit_trigger
BEFORE LOGOFF ON DATABASE
BEGIN
    UPDATE stats$user_log
    set    logoff_day      = sysdate,
           logoff_time     = to_char(sysdate, 'hh24:mi:ss'),
           elapsed_minutes = round((logoff_day - logon_day)*1440,2)
    WHERE  sys_context('USERENV','SESSIONID') = session_id;
END;
/
When the user logs out, everything is captured except the elapsed_minutes column, which remains null.
Can anyone tell me where I'm going wrong? Thanks.
At the time you do the update, the logoff_day you refer to on the right-hand side of the set expression is still null, so the expression evaluates to null.
Any column values you refer to are the pre-update values; otherwise, changing the order in which the columns are assigned within the set clause would change how the update worked, which would at best be confusing. An update that sets a column based on its old value - e.g. set salary = salary * 1.1 - would be particularly problematic.
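This pre-update behaviour is easy to demonstrate with a sketch (hypothetical table t with columns a and b): because both assignments read the old values, a single UPDATE swaps the columns rather than copying one into the other.

    -- b's new value is a's OLD value, so this is a swap:
    update t set a = b, b = a;
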
You can refer to sysdate a second time instead:
logoff_day      = sysdate,
logoff_time     = to_char(sysdate, 'hh24:mi:ss'),
elapsed_minutes = round((sysdate - logon_day)*1440,2)
If session auditing is enabled, the database already does this for you - why build it yourself? Check dba_audit_session for the results. You might need to talk to your DBA / security staff to get access, but it might be worth it.
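As a sketch, assuming the AUDIT SESSION option is enabled and you can read the view:

    select username,
           timestamp   as logon_time,
           logoff_time,
           round((logoff_time - timestamp) * 1440, 2) as elapsed_minutes
    from   dba_audit_session
    order  by timestamp;

TIMESTAMP and LOGOFF_TIME are DATE columns, so subtracting them gives days; multiplying by 1440 converts to minutes, just like the trigger version.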