Here's the code I'm working on:
begin
  DBMS_STATS.GATHER_TABLE_STATS(ownname          => 'appdata',
                                tabname          => 'TRANSACTIONS',
                                cascade          => true,
                                estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
                                method_opt       => 'for all indexed columns size 1',
                                granularity      => 'ALL',
                                degree           => 1);
end;
After executing the code, "PL/SQL procedure successfully completed" is displayed.
How can I view the statistics that DBMS_STATS gathered for this particular table?
You can see table-level information in DBA_TABLES:
SELECT *
FROM DBA_TABLES where table_name='TRANSACTIONS';
For example, the LAST_ANALYZED column shows when the table was last analyzed.
There is also column-by-column information in
SELECT * FROM all_tab_columns WHERE table_name = 'TRANSACTIONS';
where you can find the low value, high value, number of distinct values, and so on.
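Since the original call used cascade => true, statistics were gathered for the table's indexes as well. As a rough sketch (assuming the APPDATA owner from the question), the column-level and index-level statistics can be inspected like this:
SELECT column_name, num_distinct, num_nulls, histogram, last_analyzed
FROM   all_tab_col_statistics
WHERE  owner = 'APPDATA' AND table_name = 'TRANSACTIONS';

SELECT index_name, blevel, leaf_blocks, distinct_keys, last_analyzed
FROM   all_ind_statistics
WHERE  table_owner = 'APPDATA' AND table_name = 'TRANSACTIONS';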
I'm using Oracle 11gR2 and I have a problem when I try to insert a lot of rows into a table.
Here is my table :
CREATE TABLE ref_bic (
  id_bic     NUMBER(9),
  country_id NUMBER(9) NOT NULL,
  bank_code  VARCHAR(5),
  bic_code   VARCHAR(20),
  bank_name  VARCHAR(150) NOT NULL,
  CONSTRAINT pk_ref_bic PRIMARY KEY (id_bic)
);
And here is an example of what I'm inserting :
INSERT INTO REF_BIC (country_id, bank_code, bic_code, bank_name) VALUES (123, '123456', '12345', 'SOME BANK NAME');
Note that id_bic is self-generated.
Now here is my problem: I have more than 30k rows like this one to insert into my database the first time I create it, and every time it takes more than 30 minutes to insert all the data.
I have heard that I could use PARALLEL and APPEND to make the insert faster, and that the only requirement is to use
ALTER SESSION FORCE PARALLEL DML;
I tried it, but it doesn't seem to work:
INSERT /*+ APPEND PARALLEL(REF_BIC) */ INTO REF_BIC (country_id, bank_code, bic_code, bank_name) VALUES (123, '123456', '12345', 'SOME BANK NAME');
The hint is not interpreted, and if I remove the /* */ it just raises an error.
Now, it seems like a parallel insert needs to come from a subquery, like
INSERT /*+ APPEND PARALLEL(REF_BIC) */ INTO REF_BIC SELECT * FROM SOME_TABLE;
But I can't use a subquery, since I am creating my database for the first time and it is totally empty.
So here are my questions:
Does parallel insert work without a subquery?
And if not, how can I insert my 30k+ rows more quickly?
I have more than 30k rows like this one to insert in my database
Use a text editor or Excel to construct all of them into a single query joined by UNION ALL; I bet it will be much faster (with or without the parallel hint):
insert into REF_BIC (country_id, bank_code, bic_code, bank_name)
select 123, '123456', '12345', 'SOME BANK NAME' from dual union all
select 124, '123457', '12348', 'SECOND BANK NAME' from dual union all
..
..
First of all, use matching types: remove the single quotes around the bank_code and bic_code values.
Then just turn your INSERT statements into an anonymous block. Before the first one, put the line
BEGIN
After the last one, put the lines
commit;
end;
I had a similar situation with about 12k insert statements; after sandwiching them within an anonymous block, they finished within seconds.
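For illustration, a minimal sketch of such a script using the REF_BIC columns from the question (the values are placeholders):
BEGIN
  INSERT INTO ref_bic (country_id, bank_code, bic_code, bank_name)
  VALUES (123, '12345', '12345', 'SOME BANK NAME');
  INSERT INTO ref_bic (country_id, bank_code, bic_code, bank_name)
  VALUES (124, '12346', '12348', 'SECOND BANK NAME');
  -- ... the remaining ~30k INSERT statements ...
  COMMIT;
END;
/
Sending the whole script as one block avoids a client/server round trip per statement, which is usually where the time goes when 30k single-row inserts are run one at a time.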
Given this example (DUP_VAL_ON_INDEX Exception), is it possible to capture the values that violated the constraint so they may be logged?
Would the approach be the same if there are multiple violations generated by a bulk insert?
BEGIN
  -- want to capture '01' and '02'
  INSERT INTO Employee(ID)
  SELECT ID
  FROM (SELECT '01' ID FROM DUAL
        UNION
        SELECT '02' ID FROM DUAL);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    -- log values here
    DBMS_OUTPUT.PUT_LINE('Duplicate value on an index');
END;
Ideally, I would suggest using DML error logging. For example
Create the error log table
begin
  dbms_errlog.create_error_log( dml_table_name     => 'EMPLOYEE',
                                err_log_table_name => 'EMPLOYEE_ERR' );
end;
Use DML error logging
BEGIN
  insert into employee( id )
  select id
  from (select '01' id from dual
        union all
        select '02' from dual)
  log errors into employee_err
  reject limit unlimited;
END;
For every row that fails, this will log the data for the row into the EMPLOYEE_ERR table along with the exception. You can then query the error log table to see all the errors rather than getting just the first row that failed.
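To inspect the failures afterwards, query the error log table; the ORA_ERR_* columns are added automatically by DBMS_ERRLOG, e.g.:
SELECT ora_err_number$, ora_err_mesg$, id
FROM   employee_err;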
If creating the error log table isn't an option, you could move from SQL to PL/SQL with bulk operations. That will be slower but you could use the SAVE EXCEPTIONS clause of the FORALL statement to create a nested table of exceptions that you could then iterate over.
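A rough sketch of that approach, reusing the single-column EMPLOYEE table from the question (ORA-24381 is the exception raised once SAVE EXCEPTIONS has collected the failures):
DECLARE
  TYPE t_id_tab IS TABLE OF employee.id%TYPE;
  l_ids       t_id_tab := t_id_tab('01', '02');
  bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
BEGIN
  FORALL i IN 1 .. l_ids.COUNT SAVE EXCEPTIONS
    INSERT INTO employee (id) VALUES (l_ids(i));
EXCEPTION
  WHEN bulk_errors THEN
    -- iterate over the collected exceptions and log the offending values
    FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE(
        'Value ' || l_ids(SQL%BULK_EXCEPTIONS(i).ERROR_INDEX) ||
        ' failed with ORA-' || SQL%BULK_EXCEPTIONS(i).ERROR_CODE);
    END LOOP;
END;
/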
For anyone who would like to know more about this, please go through this link.
I want to create a trigger in Oracle 11g. The problem is that I want a trigger which runs every time a SELECT statement is executed. Is this possible, or is there another way to achieve the same result? This is the PL/SQL block:
CREATE TRIGGER time_check
BEFORE INSERT OR UPDATE OF users, passwd, last_login ON table
FOR EACH ROW
BEGIN
delete from table where last_login < sysdate - 30/1440;
END;
I'm trying to implement a table where I can store user data. I want to "flush" the rows which are older than one hour. Are there any alternatives for how I could implement this?
P.S. Can you tell me if this PL/SQL block is correct? Are there any mistakes?
BEGIN
  sys.dbms_scheduler.create_job(
    job_name        => '"ADMIN"."USERSESSIONFLUSH"',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin
                          -- Insert PL/SQL code here
                          delete from UserSessions where last_login < sysdate - 30/1440;
                        end;',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=2',
    start_date      => systimestamp at time zone 'Asia/Nicosia',
    job_class       => '"DEFAULT_JOB_CLASS"',
    comments        => 'Flushes expired user sessions',
    auto_drop       => FALSE,
    enabled         => FALSE);
  sys.dbms_scheduler.set_attribute( name => '"ADMIN"."USERSESSIONFLUSH"', attribute => 'job_priority', value => 5);
  sys.dbms_scheduler.set_attribute( name => '"ADMIN"."USERSESSIONFLUSH"', attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_FAILED_RUNS);
  sys.dbms_scheduler.enable( '"ADMIN"."USERSESSIONFLUSH"' );
END;
I'm not aware of a way to have a trigger on SELECT. From the documentation, the only statements you can trigger on are INSERT/DELETE/UPDATE (and some DDL).
For what you want to do, I would suggest a simpler solution: use the DBMS_SCHEDULER package to schedule a cleanup job every so often. It won't add overhead to your SELECT queries, so it should have less performance impact overall.
You'll find lots of examples in: Examples of Using the Scheduler
I'm trying to record DELETE statements in a certain table using Oracle's auditing features. I ran:
SQL> AUDIT DELETE TABLE BY TPMDBO BY ACCESS;
Audit succeeded.
I'm unclear whether this audits the deletion of the table itself (i.e., dropping the table), or whether it audits the deletion of one or more rows within any table (i.e., the DELETE command). If the latter, how do I limit this auditing to only a table called Foo? Thanks!
UPDATE:
SQL> show parameter audit
NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------
audit_file_dest                      string      /backup/audit
audit_sys_operations                 boolean     TRUE
audit_syslog_level                   string
audit_trail                          string      XML, EXTENDED
There is a newer feature called fine-grained auditing (FGA), which stores its log in SYS.FGA_LOG$ instead of SYS.AUD$. Here is the FGA manual.
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'FOO',
    policy_name     => 'my_policy',
    policy_owner    => 'SEC_MGR',
    enable          => TRUE,
    statement_types => 'DELETE',
    audit_condition => 'USER = ''myuser''',
    audit_trail     => DBMS_FGA.DB);
END;
/
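Once the policy is in place, the captured DELETE statements can be viewed through DBA_FGA_AUDIT_TRAIL (built on top of SYS.FGA_LOG$), for example:
SELECT db_user, timestamp, sql_text
FROM   dba_fga_audit_trail
WHERE  object_schema = 'HR' AND object_name = 'FOO'
ORDER BY timestamp;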
Yes, your original command should audit DELETE operations (not DROP) by this user on all tables. Examine the output of show parameter audit to see where the audit trail is being written.
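If you prefer to stay with standard auditing, object-level auditing can be restricted to a single table; for example, assuming FOO lives in the TPMDBO schema:
-- stop the broad statement-level auditing for the user
NOAUDIT DELETE TABLE BY tpmdbo;
-- audit only DELETEs against the one table of interest
AUDIT DELETE ON tpmdbo.foo BY ACCESS;
Note that with audit_trail set to XML, EXTENDED (as shown in the question), the records are written as XML files under audit_file_dest and should be readable through V$XML_AUDIT_TRAIL rather than the database audit trail.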
I have a table data1 with 2 fields: user_id and data_id. I have 2 indexes, one on user_id and one on data_id. They are non-unique indexes.
And a function:
FUNCTION user_filter(p_schema IN VARCHAR2,
p_object IN VARCHAR2) RETURN VARCHAR2 IS
BEGIN
RETURN 'user_id='||session_pkg.user_id;
END;
I register this function as an RLS policy on data1:
DBMS_RLS.ADD_POLICY(OBJECT_SCHEMA   => '',
                    OBJECT_NAME     => 'data1',
                    POLICY_NAME     => 'user_filter',
                    POLICY_FUNCTION => 'user_filter');
To get the best performance, do I have to create one more index like the following?
create index data3_idx on data1 (user_id, data_id);
Thanks,
In general it would be wasteful to have three indexes for two columns, (data_id), (user_id, data_id) and (user_id), since Oracle can use the compound index both for queries that filter on user_id and for queries that filter on both columns.
In your case the DBMS_RLS.ADD_POLICY procedure will add the filter user_id=XX to all requests on this object. This means that you could replace the index on data_id with a more efficient compound index.
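As a sketch (the index names here are hypothetical; substitute your actual index names):
-- the RLS predicate means every query on data_id effectively also filters on user_id,
-- so the single-column index on data_id can be replaced by a compound index
DROP INDEX data1_data_id_idx;
CREATE INDEX data1_user_data_idx ON data1 (user_id, data_id);
-- the standalone index on user_id then becomes largely redundant as well
DROP INDEX data1_user_id_idx;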