How do I count page hits or the number of visitors in an Oracle Forms (D2K) application?
Here's one option:
create a table, e.g.
create table hits (form_name varchar2(30), hits number);
create a stored procedure which will be used to maintain that table. It'll be an autonomous transaction procedure (so that you can commit without affecting the main transaction), and it'll lock the hits table (will that affect execution in a multi-user environment? It shouldn't; the operation is really fast).
create or replace procedure p_hits (par_form_name in varchar2)
is
    pragma autonomous_transaction;
begin
    -- Serialize access so concurrent sessions don't lose updates.
    lock table hits in exclusive mode;

    merge into hits h
        using (select par_form_name form_name from dual) x
        on (x.form_name = h.form_name)
        when matched then
            update set h.hits = h.hits + 1
        when not matched then
            insert (form_name, hits) values (par_form_name, 1);

    commit;
end p_hits;
/
call that procedure in the form's WHEN-NEW-FORM-INSTANCE form-level trigger:
p_hits(:system.current_form);
That would be all; query the hits table to see its contents.
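To sanity-check the procedure outside Forms, call it twice from SQL*Plus and query the table (form name and output are illustrative):

exec p_hits('MY_FORM')
exec p_hits('MY_FORM')

select * from hits;

FORM_NAME                        HITS
------------------------------ ------
MY_FORM                             2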
Related
I need to restrict DDL access to a table in Oracle, i.e. forbid dropping the table. How can I do this? I know I can create a DDL trigger at the database or schema level:
create table my_table (
    id         int primary key not null,
    first_val  int,
    second_val int
);

create trigger delete_disabling_trigger
before drop on database
begin
    if /* some condition */ then
        dbms_output.put_line('delete_disabling_trigger');
        -- User-defined error numbers must be in the range -20000 .. -20999.
        RAISE_APPLICATION_ERROR(-20000, 'Can''t drop this table');
    end if;
end;
If you need a table that no one will be able to drop, you can create the table in another schema. Grant SELECT, INSERT, UPDATE, and DELETE on it, and create a synonym for the user who works with the table.
Scenario: User X needs table T which can't be dropped.
Create user persist identified by password.
Create table persist.T ...
GRANT SELECT, INSERT, UPDATE, DELETE on persist.T to X;
create synonym X.T for persist.T;
In this scenario X can manipulate the data but can't change the structure or drop the table. Using a trigger seems a weird solution for blocking table drops.
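Spelled out, the scenario might look like this (the password and column list are placeholders; creating a synonym in X's schema requires the CREATE ANY SYNONYM privilege):

create user persist identified by password;

create table persist.t (id number primary key, val varchar2(100));

grant select, insert, update, delete on persist.t to x;

create synonym x.t for persist.t;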
EDIT
create or replace trigger delete_disabling_trigger
before drop on database
begin
    if ORA_DICT_OBJ_NAME = 'MY_TABLE' then -- and ORA_DICT_OBJ_OWNER = 'YOUR_SCHEMA'
        dbms_output.put_line('delete_disabling_trigger');
        RAISE_APPLICATION_ERROR(-20000, 'Can''t drop this table');
    end if;
end;
/
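A quick way to verify the trigger (the exact error stack may vary):

drop table my_table;
-- ORA-00604: error occurred at recursive SQL level 1
-- ORA-20000: Can't drop this table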
I have just implemented a trigger to stop the drop of 5 tables which are being dropped from 2 development schemas by some as yet unknown process. We register the attempt in a table with an autonomous transaction, and stop the drop.
I hope to identify the process soon and get rid of the trigger.
CREATE OR REPLACE TRIGGER whos_dropping_my_table
BEFORE DROP ON database
DECLARE
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    -- Log the attempt; the autonomous transaction lets the log row
    -- survive the error raised below.
    INSERT INTO some_table_I_prepared_earlier
    VALUES (SUBSTR(ora_sysevent, 1, 50),
            SUBSTR(ora_dict_obj_owner, 1, 50),
            SUBSTR(ora_dict_obj_name, 1, 50),
            SUBSTR(ora_dict_obj_type, 1, 50),
            UPPER(sys_context('USERENV', 'TERMINAL')),
            SYSDATE,
            UPPER(sys_context('USERENV', 'OS_USER')));
    COMMIT;

    -- Dictionary object names are stored in upper case.
    IF SUBSTR(ora_dict_obj_name, 1, 50) IN
       ('TABLE_1', 'TABLE_2', 'TABLE_3', 'TABLE_4', 'TABLE_5')
    THEN
        RAISE_APPLICATION_ERROR(num => -20998,
                                msg => 'Stop deleting my table, whoever you are');
    END IF;
END;
/
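For completeness, one possible shape for the audit table the trigger writes to; the table name comes from the trigger above, but the column names and widths are assumptions:

CREATE TABLE some_table_I_prepared_earlier (
    sys_event  VARCHAR2(50),
    obj_owner  VARCHAR2(50),
    obj_name   VARCHAR2(50),
    obj_type   VARCHAR2(50),
    terminal   VARCHAR2(100),
    drop_date  DATE,
    os_user    VARCHAR2(100)
);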
My terminology will be loose, but the point should be clear. I have built a procedure which merges data using a MERGE statement. Now my list of tables is growing, and I am at the point where I think I need a generic procedure: I'd just pass the source table name, the destination table name and the ON condition, and it would perform the merge.
This will make my code less complex to maintain; otherwise I have to write one procedure per table, which keeps each one simple but is still not compact.
Admittedly, linear code may be more efficient as an end product, but that is less relevant here.
Here is my general code.
SET DEFINE OFF;
PROMPT drop Procedure XXX_PROJECTS_MERGE;
DROP PROCEDURE CUSTOM.XXX_PROJECTS_MERGE;
PROMPT Procedure XXX_PROJECTS_MERGE;
/***************************************************************************************************
--
-- XXX_PROJECTS_MERGE (Procedure)
--
***************************************************************************************************/
CREATE OR REPLACE PROCEDURE CUSTOM.XXX_PROJECTS_MERGE (
    errbuf               OUT VARCHAR2,
    retcode              OUT NUMBER,
    x_Start_Period_Name  IN  VARCHAR2)
AS
    x_retcode    NUMBER := 0;
    x_errbuf     VARCHAR2 (200) := NULL;
    x_row_count  NUMBER := 0;
BEGIN
    -- Update or insert non-transactional tables --------------------------------
    -- Refreshing key project table ----------------------------------------------
    FND_FILE.PUT_LINE (FND_FILE.LOG, 'Starting Project Table Refresh Process');

    MERGE INTO CUSTOM.XXX_PROJECT GPRJ
        USING (SELECT ROWID,
                      PROJECT_ID,
                      -- ... set of columns ...
                 FROM PA.PA_PROJECTS_ALL
                WHERE ORG_ID = 21
                  AND LAST_UPDATE_DATE >=
                          (SELECT MAX (LAST_UPDATE_DATE)
                             FROM CUSTOM.XXX_PROJECT)) OPRJ
        ON (GPRJ.PROJECT_ID = OPRJ.PROJECT_ID AND GPRJ.ROWID = OPRJ.ROWID)
        WHEN MATCHED THEN
            UPDATE SET
                GPRJ.NAME = OPRJ.NAME
                -- ... the same set of columns, in update form ...
        WHEN NOT MATCHED THEN
            INSERT (/* set of columns */)
            VALUES (/* set of values as selected above */);

    -- Capture SQL%ROWCOUNT before the COMMIT; the commit resets it.
    x_row_count := SQL%ROWCOUNT;
    COMMIT;

    FND_FILE.PUT_LINE (
        FND_FILE.LOG,
        'Number of Rows Processed or Merged For XXX_PROJECT Table '
        || TO_CHAR (x_row_count));
END;
/
SHOW ERRORS;
Now I want the above to be a generic procedure to which I pass a different set of inputs, so it can run for any table.
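One way to do that is dynamic SQL: build the MERGE text from the names passed in and EXECUTE IMMEDIATE it. Below is a minimal sketch under my own naming; it assumes a single-column key and 11gR2 or later (LISTAGG, REGEXP_COUNT), it trusts its inputs (harden with DBMS_ASSERT against SQL injection in real code), and it leaves out the incremental LAST_UPDATE_DATE filter for brevity:

create or replace procedure generic_merge (
    p_src_table in varchar2,   -- source table, e.g. 'PA.PA_PROJECTS_ALL'
    p_dst_table in varchar2,   -- destination table, e.g. 'CUSTOM.XXX_PROJECT'
    p_key_col   in varchar2,   -- join column, e.g. 'PROJECT_ID'
    p_cols      in varchar2)   -- comma-separated non-key columns, e.g. 'NAME,SEGMENT1'
as
    v_set varchar2(4000);
    v_ins varchar2(4000);
    v_sql varchar2(32767);
begin
    -- Turn 'A,B' into 'd.A = s.A, d.B = s.B' (for UPDATE SET)
    -- and 's.A, s.B' (for the INSERT VALUES list).
    select listagg('d.' || trim(regexp_substr(p_cols, '[^,]+', 1, level))
                   || ' = s.' || trim(regexp_substr(p_cols, '[^,]+', 1, level)), ', ')
               within group (order by level),
           listagg('s.' || trim(regexp_substr(p_cols, '[^,]+', 1, level)), ', ')
               within group (order by level)
      into v_set, v_ins
      from dual
    connect by level <= regexp_count(p_cols, ',') + 1;

    v_sql := 'merge into ' || p_dst_table || ' d'
          || ' using (select * from ' || p_src_table || ') s'
          || ' on (d.' || p_key_col || ' = s.' || p_key_col || ')'
          || ' when matched then update set ' || v_set
          || ' when not matched then insert (' || p_key_col || ', ' || p_cols || ')'
          || ' values (s.' || p_key_col || ', ' || v_ins || ')';

    execute immediate v_sql;
end;
/

For the original example the call might be generic_merge('PA.PA_PROJECTS_ALL', 'CUSTOM.XXX_PROJECT', 'PROJECT_ID', 'NAME, ...'); the delta filter could be passed in as an extra predicate parameter.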
Is there a way to commit only the data inserted/updated on a table through a database link and not the data of the current session? Or are they considered one and the same?
For example:
INSERT INTO main_database.main_table (value1, value2)
VALUES (1, 2);

INSERT INTO database.table@database_link (value3, value4)
VALUES (3, 4);
And do a commit for only the database link table?
Background on why I would want to do this:
The main database is for (multiple) records while the database link is for (monetary) transactions (processed on a separate server). I want to update the records first to check to see if any of the constraints fail, but not commit the data until the transaction is complete. If the transaction fails, I want to rollback the records to save me the effort of deleting/undoing inserts/updates which could get messy.
I am assuming there is not but I am hoping that there is a way. Thanks in advance.
create or replace procedure proc_1 (i in number)
as
    pragma autonomous_transaction;
begin
    -- Autonomous: this commit affects only the remote insert,
    -- not the caller's transaction.
    insert into test_table@remote_sid (id, description)
    values (i, 'Description for ' || i);
    commit;
end;
/

create or replace procedure proc_base (i in number)
as
begin
    insert into local_tab (id) values (i);
    proc_1(i);
    -- Rolls back only the local insert; the remote row is already committed.
    rollback;
end;
/
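Running it shows the split (object names as in the sketch above; counts assumed):

exec proc_base(1)

select count(*) from local_tab;              -- 0: the local insert was rolled back
select count(*) from test_table@remote_sid;  -- 1: the remote insert was committed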
No.
If the insert is in PL/SQL then you can run the first insert in an autonomous transaction, but in the absence of other functionality that's little different from insert ...; commit; insert ...;
I'm wondering if I will miss any data if I replace a trigger while my Oracle database is in use. I created a toy example, and it seems like I won't, but one of my coworkers claims otherwise.
create table test_trigger (id number);
create table test_trigger_h (id number);
create sequence test_trigger_seq;
--/
create or replace trigger test_trigger_t after insert on test_trigger for each row
begin
    insert into test_trigger_h (id) values (:new.id);
end;
/
--/
begin
    for i in 1..100000 loop
        insert into test_trigger (id) values (test_trigger_seq.nextval);
    end loop;
end;
/
--/
begin
    for i in 1..10000 loop
        execute immediate 'create or replace trigger test_trigger_t after insert on test_trigger for each row begin insert into test_trigger_h (id) values (:new.id); end;';
    end loop;
end;
/
I ran the two loops at the same time:
select count(1) from test_trigger;

  COUNT(1)
----------
    100000

select count(1) from test_trigger_h;

  COUNT(1)
----------
    100000
CREATE OR REPLACE locks the table, so all the inserts will wait until it completes. Don't worry about missed inserts.
I think you might be going about testing this in the wrong way. Your insert statements won't take any time at all, so the replacement of the trigger can fit in through the gaps between inserts. At least, this is what I infer from the below.
If you change your test to ensure you have a long running SQL statement, e.g.
create table test_trigger (id number);
create table test_trigger_h (id number);
create sequence test_trigger_seq;
create or replace trigger test_trigger_t
after insert on test_trigger for each row
begin
    insert into test_trigger_h (id) values (:new.id);
end;
/
insert into test_trigger
select level
from dual
connect by level <= 1000000;
If you then try to replace the trigger in a separate session it will not occur until after the insert has completed.
Unfortunately, I can't find anything in the documentation to back me up; this is just behavior that I'm aware of.
The following question confirms that a trigger can be modified while the application is running. It takes a "library cache" lock, NOT a "data" lock, and Oracle handles it internally without you having to worry about it.
Check out the question raised by Ben: "Can a trigger be locked; how would one determine that it is?"
-- Run this from session 2:
select * from v$access where object = upper('test_trigger_t');
I have a query inside a function declared with RESULT_CACHE, so when the table is changed, my cache is invalidated and the function is executed again.
What I want is to implement a function that depends only on its input parameters and doesn't have any implicit dependencies (like tables, etc.).
Is it possible (without dynamic SQL)?
A function that depends only on its parameters can be declared DETERMINISTIC. The results of such a function will be cached in some cases. This thread on the OTN forums shows how deterministic function results get cached inside SQL statements.
As of 10gR2, the function results don't get cached across SQL statements, nor do they get cached in PL/SQL. Still, this caching feature can be useful if you call a function in a SELECT where it might get called lots of times.
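For illustration, a minimal DETERMINISTIC function (names are mine); within a single fetch, Oracle may call it only once per distinct argument value:

create or replace function f_double (p_n number) return number
    deterministic
is
begin
    -- Side effect only to observe how often the function really runs;
    -- a deterministic function shouldn't rely on side effects.
    dbms_output.put_line('executed for ' || p_n);
    return p_n * 2;
end;
/

-- 10 rows, but typically far fewer 'executed' lines:
select f_double(mod(level, 2)) from dual connect by level <= 10;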
I don't have an 11gR2 instance available right now, so I can't test the RESULT_CACHE feature, but have you considered declaring your function as relying on a fixed dummy table (a table that never gets updated, for instance)?
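That would look something like the sketch below (11gR1 syntax, where RELIES_ON still drove invalidation; the dummy table and names are mine). Note that on 11gR2, where dependencies are detected automatically, any table the function actually queries still invalidates the cache, so this only helps if the function body touches nothing else:

create table dummy_table (dummy char(1));  -- never updated

create or replace function f_fixed (p_a number) return varchar2
    result_cache relies_on (dummy_table)
is
begin
    return 'value for ' || p_a;  -- computed from parameters only
end;
/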
The correct answer is NO.
A solution in cases where things like result caches and materialized views won't work, because of invalidations or too much overhead, is the Oracle In-Memory Database Cache option; see "result caches ... what about heavily modified data". It's a really smart option, but not cheap.
If you use a database link it is possible to create a function result cache that will read from a table when a parameter changes but will not be invalidated when the table changes.
Obviously there are some issues with this approach; performance (even for a self-link), maintenance, the function may return the wrong result, everybody hates database links, etc.
Note that RELIES_ON is deprecated in 11gR2. Dependencies are automatically determined at run time; even dynamic SQL wouldn't help you here. But apparently this dependency tracking doesn't work over database links.
The script below demonstrates how this works. Remove "@myself" from the function to see how it normally works. Some of the code is based on this great article.
--For testing, create a package that will hold a counter.
create or replace package counter is
procedure reset;
procedure increment;
function get_counter return number;
end;
/
create or replace package body counter as
v_counter number := 0;
procedure reset is begin v_counter := 0; end;
procedure increment is begin v_counter := v_counter + 1; end;
function get_counter return number is begin return v_counter; end;
end;
/
--Create database link
create database link myself connect to <username> identified by "<password>"
using '<connect string>';
drop table test purge;
create table test(a number primary key, b varchar2(100));
insert into test values(1, 'old value1');
insert into test values(2, 'old value2');
commit;
--Cached function that references a table and keeps track of the number of executions.
drop function test_cache;
create or replace function test_cache(p_a number) return varchar2 result_cache is
    v_result varchar2(100);
begin
    counter.increment;
    select b into v_result from test@myself where a = p_a;
    return v_result;
end;
/
--Reset
begin
counter.reset;
end;
/
--Start with 0 calls
select counter.get_counter from dual;
--First result is 'old value1'; the function is only called once no matter how many times the query runs.
select test_cache(1) from dual;
select test_cache(1) from dual;
select test_cache(1) from dual;
select counter.get_counter from dual;
--Call for another parameter, counter only increments by 1.
select test_cache(2) from dual;
select test_cache(2) from dual;
select test_cache(2) from dual;
select counter.get_counter from dual;
--Now change the table. This normally would invalidate the cache.
update test set b = 'new value1' where a = 1;
update test set b = 'new value2' where a = 2;
commit;
--Table was changed, but old values are still used. Counter was not incremented.
select test_cache(1) from dual;
select test_cache(2) from dual;
select counter.get_counter from dual;
--The function is not dependent on the table.
SELECT ro.id AS result_cache_id
, ro.name AS result_name
, do.object_name
FROM v$result_cache_objects ro
, v$result_cache_dependency rd
, dba_objects do
WHERE ro.id = rd.result_id
AND rd.object_no = do.object_id;
Two options:
Don't query any table.
Implement your own cache - wrap the function in a package, and store the query results in a PL/SQL table in memory. The downside to this approach, however, is that the cache only works within a single session. Each session will maintain its own cache.
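A minimal sketch of that second option, reusing the TEST table from the script above (the package name and sizes are mine):

create or replace package cached_lookup is
    function get_value (p_a number) return varchar2;
end;
/

create or replace package body cached_lookup is
    -- Session-private cache: an associative array keyed by the parameter.
    type t_cache is table of varchar2(100) index by pls_integer;
    v_cache t_cache;

    function get_value (p_a number) return varchar2 is
    begin
        if not v_cache.exists(p_a) then
            -- Cache miss: query once, remember for the rest of the session.
            select b into v_cache(p_a) from test where a = p_a;
        end if;
        return v_cache(p_a);
    end;
end;
/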