Oracle Database - Standard Edition - Version 12.1.0.2.0, platform Windows 64-bit
I'm getting an ORA-12033 error while trying to create a materialized view, even though all the necessary columns are already included in the mview log. When I try to do the same thing on 11g it works, but 12c throws the ORA error.
Please help me solve this problem or provide links that may help.
Here is the code:
CREATE MATERIALIZED VIEW LOG ON contract_details
with rowid (count, contract_id) including new values;
CREATE MATERIALIZED VIEW m_contract_sum
refresh fast on commit
as
select d.contract_id as contract_id,
count(*) as count_grp,
count(d.count) as cnt_count,
sum(d.count) as sm_count
from contract_details d
group by d.contract_id;
Thanks in advance!
There might be a problem with extended statistics, which are automatically created to gather stats on a group of columns.
You can verify this by executing the following query:
SELECT column_name, data_default, virtual_column, hidden_column
FROM dba_tab_cols
WHERE table_name = 'CONTRACT_DETAILS' AND hidden_column = 'YES';
If this is the case, you need to drop the stats (using dbms_stats.drop_extended_stats) before (re-)creating the materialized view.
e.g.
BEGIN
dbms_stats.drop_extended_stats('master', 'contract_details', '(x, y)');
END;
/
Afterwards you will probably want to gather the extended stats again. Note that dbms_stats.create_extended_stats is a function, so its return value must be assigned:
DECLARE
l_ext_name VARCHAR2(4000);
BEGIN
l_ext_name := dbms_stats.create_extended_stats('master', 'contract_details', '(x, y)');
END;
/
I have a problem that I'm finding hard to solve. I hope someone in this community can help.
On a daily basis I copy a table from one database (T_TAGS_REMOTE) to a table on another database (T_TAGS_LOCAL) through a DB link. For this I truncate the T_TAGS_LOCAL table first and then perform the insert.
The above task is run as a Linux job.
The problem comes when:
Sometimes T_TAGS_REMOTE on the remote database is not accessible, giving an ORA error
Sometimes T_TAGS_REMOTE does not have the complete data (i.e. the SYSDATE row count < the SYSDATE-1 row count)
Requirements:
Stop truncating and stop inserting when either of the above problems (1) or (2) is encountered.
My code:
DECLARE
old_records_count NUMBER;
BEGIN
-- capture the current row count before truncating (a SELECT in PL/SQL needs an INTO target)
SELECT COUNT(1) INTO old_records_count FROM T_TAGS_LOCAL;
EXECUTE IMMEDIATE 'TRUNCATE TABLE T_TAGS_LOCAL';
INSERT /*+ APPEND */ INTO T_TAGS_LOCAL SELECT * FROM AK.T_TAGS_REMOTE@NETCOOL;
COMMIT;
END;
/
Please suggest a better option for the table copy, or code to handle this problem.
I would not use the technique you are using; it will always generate issues. Instead, I think your use case fits replication using materialized views: a materialized view log on the source, and a materialized view over the dblink on the target.
You only need to decide on the refresh method. Because the materialized view selects over a database link, it has to be refreshed on demand (FAST ON COMMIT is not supported for remote tables); I guess your table is not very big anyway, as you are copying the whole table each and every day.
Example
In Source
SQL> create table t ( c1 number primary key, c2 number ) ;
Table created.
SQL> declare
begin
for i in 1 .. 100000
loop
insert into t values ( i , dbms_random.value ) ;
end loop;
commit ;
end;
/
PL/SQL procedure successfully completed.
SQL> create materialized view log on t with primary key ;
Materialized view log created.
SQL> select count(*) from t ;
COUNT(*)
----------
100000
In Target
SQL> create materialized view my_copy_of_t build immediate refresh fast on demand as
select * from your_source@your_db_link ;
-- To verify in target
SQL> select count(*) from my_copy_of_t ;
COUNT(*)
----------
100000
Now, we change source
SQL> insert into t values ( 100001 , dbms_random.value );
1 row inserted
SQL> commit ;
Commit completed.
In target, for refreshing
SQL> exec dbms_mview.refresh('MY_COPY_OF_T');
The only requirement for FAST REFRESH ON DEMAND is that you must have a materialized view log for each of the tables that are part of the Materialized View. In your case, as you are replicating a table, you only need a materialized view log on the source table.
A better option might be to use a materialized view. Instead of the truncate-and-insert you do now, you'd refresh it on demand using a database job scheduled via DBMS_JOB or DBMS_SCHEDULER.
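For example, a minimal sketch of such a scheduled refresh (the job name REFRESH_T_TAGS_JOB, the 02:00 schedule and the materialized view name MY_COPY_OF_T are placeholders for illustration, not part of the original setup):
BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'REFRESH_T_TAGS_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MY_COPY_OF_T'', method => ''F''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',  -- run the fast refresh once a day at 02:00
    enabled         => TRUE);
END;
/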
I've been trying to use 0xDBE as a replacement for pgAdmin + PL/SQL Developer + Aginity Workbench for Greenplum, but there is one bad thing about its introspection:
The IDE shows the wrong DDL for both Oracle and PostgreSQL (and Greenplum as well).
e.g. it shows this:
create VIEW LATENCIES (
TASK_NAME VARCHAR2(250),
DESTINATION_NAME VARCHAR2(200),
APPLIED DATE
);
instead of this:
create or replace view latencies_new as
select table_schema, destination_name, min(applied) as applied from (
select table_schema, table_name, destination_name, max(unload_start) as applied
from o2g_applies_full
where apply_id is not null
and unload_start > sysdate - 1
group by table_schema, table_name, destination_name
) group by table_schema, destination_name;
in Oracle RDBMS. The view and its underlying tables are in the same schema, which is the one chosen for sync in the DB options in DataGrip.
Therefore the visualisation diagrams don't work at all.
The same happens with Postgres/GP - it can't show the real DDL for external tables/views etc.
Is there any way to fix this? Maybe I should change drivers (I currently use the drivers downloaded from the JetBrains site)?
One can try to use the SQL Scripts → SQL Generator action to get the DDL.
I found the answer myself...
If you try to copy the DDL directly from the Database window (on the left), you only get the first piece of code shown in the original post, but if you open the View Editor and switch to the "DDL" tab, you will see the full DDL.
I'm running a query across a database link to a Sybase server from Oracle.
In its where clause is a restriction on a date column, and I want it tied to sysdate, so something like this:
select * from some_remote_view where some_numeric_key = 1 and
some_date > sysdate+2
The problem is that when I run an explain plan, only the condition some_numeric_key = 1 shows up in the actual SQL that gets remoted to the Sybase server. Oracle expects to apply the date filter on its own side.
This is causing a performance nightmare; I need that date filter remoted across to get this query running quickly.
Even if I try something like casting the sysdate to a character string like this:
to_char(sysdate-2,'YYYY-MM-DD')
It still does not remote it.
Is there anything I can do to get Oracle to remote this date filter across the db link to Sybase?
Doing integration between Oracle and other platforms I often run into this problem, not just with SYSDATE but with other non-standard functions as well.
There are two methods to work around the issue, the first being the most reliable in my experience.
First, you can create a view on the remote db with the filters you need; then on the Oracle side you just select from the new view without additional filters.
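A rough sketch of that approach (the view name recent_remote_rows and the Sybase date expression are my own illustrations, so adjust them to your actual table and retention window):
-- On the Sybase side: a view that applies the date filter locally
create view recent_remote_rows as
select *
from some_table
where some_date > dateadd(dd, 2, getdate());
-- On the Oracle side: only the remaining filter is needed, since the date
-- filter is already applied inside the remote view
select * from recent_remote_rows@your_sybase_link where some_numeric_key = 1;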
Second, if you are not allowed to create objects on the remote side, try using bind variables (of the correct data type!) in your Oracle SELECT statement, e.g.:
declare
v_some_date constant date := sysdate + 2;
begin
insert into oracle_table (...)
select ...
from remote_table#db_link t
where t.some_numeric_key = 1
and t.some_date > v_some_date;
commit;
end;
/
Description
I have an Oracle stored procedure that has been running for 7 or so years, both locally on development instances and on multiple client test and production instances running Oracle 8, then 9, then 10, and recently 11. It worked consistently until the upgrade to Oracle 11g. Basically, the procedure opens a reference cursor, updates a table, then completes. In 10g the cursor will contain the expected results, but in 11g the cursor will be empty. No DML or DDL changed after the upgrade to 11g. This behavior is consistent on every 10g or 11g instance I've tried (10.2.0.3, 10.2.0.4, 11.1.0.7, 11.2.0.1 - all running on Windows).
The specific code is much more complicated, but to explain the issue in a somewhat realistic overview: I have some data in a header table and a bunch of child tables that will be output to PDF. The header table has a boolean (NUMBER(1) where 0 is false and 1 is true) column indicating whether that data has been processed yet.
The view is limited to only show rows that have not been processed (the view also joins on some other tables, makes some inline queries and function calls, etc). So at the time the cursor is opened the view shows one or more rows; then, after the cursor is opened, an update statement runs to flip the flag in the header table, a commit is issued, and the procedure completes.
On 10g, the cursor opens, it contains the row, then the update statement flips the flag and running the procedure a second time would yield no data.
On 11g, the cursor never contains the row, it's as if the cursor does not open until after the update statement runs.
I'm concerned that something may have changed in 11g (hopefully a setting that can be configured) that might affect other procedures and other applications. What I'd like to know is whether anyone knows why the behavior is different between the two database versions and whether the issue can be resolved without code changes.
Update 1: I managed to track the issue down to a unique constraint. It seems that when the unique constraint is present in 11g the issue is reproducible 100% of the time regardless of whether I'm running the real world code against the actual objects or the following simple example.
Update 2: I was able to completely eliminate the view from the equation. I have updated the simple example to show the problem exists even when querying directly against the table.
Simple Example
CREATE TABLE tbl1
(
col1 VARCHAR2(10),
col2 NUMBER(1)
);
INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);
/* View is no longer required to demonstrate the problem
CREATE OR REPLACE VIEW vw1 (col1, col2)
AS
SELECT col1, col2
FROM tbl1
WHERE col2 = 0;
*/
CREATE OR REPLACE PACKAGE pkg1
AS
TYPE refWEB_CURSOR IS REF CURSOR;
PROCEDURE proc1 (crs OUT refWEB_CURSOR);
END pkg1;
/
CREATE OR REPLACE PACKAGE BODY pkg1
IS
PROCEDURE proc1 (crs OUT refWEB_CURSOR)
IS
BEGIN
OPEN crs FOR
SELECT col1
FROM tbl1
WHERE col1 = 'TEST1'
AND col2 = 0;
UPDATE tbl1
SET col2 = 1
WHERE col1 = 'TEST1';
COMMIT;
END proc1;
END pkg1;
/
Anonymous Block Demo
DECLARE
crs1 pkg1.refWEB_CURSOR;
TYPE rectype1 IS RECORD (
col1 tbl1.col1%TYPE
);
rec1 rectype1;
BEGIN
pkg1.proc1 ( crs1 );
DBMS_OUTPUT.PUT_LINE('begin first test');
LOOP
FETCH crs1
INTO rec1;
EXIT WHEN crs1%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(rec1.col1);
END LOOP;
DBMS_OUTPUT.PUT_LINE('end first test');
END;
/
/* After creating this index, the problem is seen */
CREATE UNIQUE INDEX unique_col1 ON tbl1 (col1);
/* Reset data to initial values */
TRUNCATE TABLE tbl1;
INSERT INTO tbl1 (col1, col2) VALUES ('TEST1', 0);
DECLARE
crs1 pkg1.refWEB_CURSOR;
TYPE rectype1 IS RECORD (
col1 tbl1.col1%TYPE
);
rec1 rectype1;
BEGIN
pkg1.proc1 ( crs1 );
DBMS_OUTPUT.PUT_LINE('begin second test');
LOOP
FETCH crs1
INTO rec1;
EXIT WHEN crs1%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(rec1.col1);
END LOOP;
DBMS_OUTPUT.PUT_LINE('end second test');
END;
/
Example of what the output on 10g would be:
begin first test
TEST1
end first test
begin second test
TEST1
end second test
Example of what the output on 11g would be:
begin first test
TEST1
end first test
begin second test
end second test
Clarification
I can't remove the COMMIT, because in the real-world scenario the procedure is called from a web application. When the data provider on the front end calls the procedure, it will issue an implicit COMMIT when disconnecting from the database anyway. So if I remove the COMMIT in the procedure then yes, the anonymous block demo would work, but the real-world scenario would not, because the COMMIT would still happen.
Question
Why is 11g behaving differently? Is there anything I can do other than re-write the code?
This appears to be a bug discovered fairly recently. Metalink Bug 10425196 describes the exact problem. Hopefully a patch will be released soon. For those of you who can't get past the Metalink wall, here are a few details:
Metalink
Bug 10425196: PL/SQL RETURNING REF CURSOR ACTS DIFFERENTLY ON 11.1.0.6 VS 10.2.0.5
Type: Defect
Severity: 2 - Severe Loss of Service
Status: Code Bug
Created: 22-Dec-2010
DIAGNOSTIC ANALYSIS from original case submission:
- 10.2.0.4 Windows Expected Behavior
- 10.2.0.5 Solaris Expected Behavior
- 11.1.0.6 Solaris Un-Expected Behavior
- 11.1.0.7 Windows Un-Expected Behavior
- 11.2.0.1 Solaris Un-Expected Behavior
- 11.2.0.2 Solaris Un-Expected Behavior
FURTHER DETAILS I can confirm:
- 10.2.0.3 Windows Expected Behavior
- 11.2.0.1 Windows Un-Expected Behavior
Additional Details
Changing the OPTIMIZER_FEATURES_ENABLE='10.2.0.4' parameter does not resolve the problem, so it seems to be related more to a design change in the 11g database engine than to an optimizer tweak.
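(For reference, a session-level test of that parameter would look something like the following; this is just a sketch of the test, not a fix, since it does not change the behavior.)
ALTER SESSION SET optimizer_features_enable = '10.2.0.4';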
Code Workaround
This appears to be a result of the use of the index when querying the table and not the act of updating the table and/or committing. Using my example above, here are two ways to ensure the query does not use the index. Both may affect the performance of the query.
Affecting the performance of the query might be temporarily acceptable until a patch is released, but I believe that using FLASHBACK, as @Edgar Chupit suggested, could affect the performance of the entire instance (or may not be available on some instances), so that option may not be acceptable for some. Either way, at this point in time code changes appear to be the only known workaround.
Method 1: Change your code to wrap the column in a function to prevent the unique index on this one column from being used. In my case this is acceptable because although the column is unique it will never contain lower case characters.
SELECT col1
FROM tbl1
WHERE UPPER(col1) = 'TEST1'
AND col2 = 0;
Method 2: Change your query to use a hint preventing the index from being used. You might expect the NO_INDEX(unique_col1) hint to work, but it does not. The RULE hint does not work either. You can use the FULL(tbl1) hint, but this is likely to slow down your query more than method 1.
SELECT /*+ FULL(tbl1) */ col1
FROM tbl1
WHERE col1 = 'TEST1'
AND col2 = 0;
Oracle's Response and Proposed Workaround
Oracle support has finally responded with the following Metalink update:
Oracle Support - July 20, 2011 5:51:19 AM GMT-07:00 [ODM Proposed Solution(s)]
Development has reported this will be a significant issue to fix and
has suggested that the following workaround be applied:
edit init.ora/spfile with the following undocumented parameter:
"_row_cr" = false
Oracle Support - July 20, 2011 5:49:20 AM GMT-07:00 [ODM Cause Justification]
Development has determined this to be a defect
Oracle Support - July 20, 2011 5:48:27 AM GMT-07:00 [ODM Cause Determination]
Cause has been traced to a row source cursor optimization
Oracle Support - July 20, 2011 5:47:27 AM GMT-07:00 [ODM Issue Verification]
Development has confirmed this to be an issue in 11.2.0.1
After some further correspondence it sounds as though this isn't being treated as a bug so much as a design decision moving forward:
Oracle Support - July 21, 2011 5:58:07 AM GMT-07:00 [ODM Proposed Solution Justif]
From 10.2.0.5 onward (which includes 11.2.0.2) we have an optimization called
ROW CR; it is only applicable to queries which use a unique index to
determine the row in the table.
A brief overview of this optimization is that we try to avoid rollbacks while
constructing a CR block if the present block has no uncommitted changes.
So the difference seen in 11.2.0.2 is because of this optimization. The
suggested workaround is to turn off this optimization so that things will
work exactly as they used to work in 10.2.0.4.
In our case, given our client environments and since it is isolated to a single stored procedure we will continue to use our code workaround to prevent any unknown instance-wide side effects from affecting other applications and users.
This is indeed a strange issue, thanks for sharing!
It really looks like a behavior change in Oracle starting with Oracle 11.1, and there is even a confirmed bug with a similar issue on Metalink (bug #10425196). Unfortunately, at the moment there is not much information available on Metalink on the subject matter, but I've also opened an SR with Oracle asking them to provide more information.
While at the moment I cannot give you an explanation of why it happens, or whether there is a (hidden) parameter that can revert this behavior back to the 10g style, I think I can provide you with a workaround. You can use Oracle's flashback query functionality to force Oracle to retrieve data as of the expected point in time.
If you change your code as follows:
OPEN crs FOR
SELECT col1
>>> FROM vw1 as of scn dbms_flashback.get_system_change_number
WHERE col1 = 'TEST1';
then the result should be the same as in 10g.
And here is a simplified version of the original test case:
SQL> drop table tbl1;
Table dropped
SQL> create table tbl1(col1 varchar2(10), col2 number);
Table created
SQL> create unique index tbl1_idx on tbl1(col1);
Index created
SQL> insert into tbl1(col1,col2) values('TEST1',0);
1 row inserted
SQL> DECLARE
2 cursor web_cursor is
3 SELECT col1
4 FROM tbl1
5 WHERE col2 = 0 and col1 = 'TEST1';
6
7 rec1 web_cursor%rowtype;
8 BEGIN
9 OPEN web_cursor;
10
11 UPDATE tbl1
12 SET col2 = 1
13 WHERE col1 = 'TEST1';
14
15 -- different result depending on commit!
16 commit;
17
18 DBMS_OUTPUT.PUT_LINE('Start');
19 LOOP
20 FETCH web_cursor
21 INTO rec1;
22
23 EXIT WHEN web_cursor%NOTFOUND;
24
25 DBMS_OUTPUT.PUT_LINE(rec1.col1);
26 END LOOP;
27 DBMS_OUTPUT.PUT_LINE('Finish');
28 END;
29 /
Start
Finish
PL/SQL procedure successfully completed
If you comment out the commit on line 16, then the output will be:
Start
TEST1
Finish
PL/SQL procedure successfully completed
From Metalink (aka Oracle Support)
Status bug 10425196 : 92 - Closed, Not a Bug
PROBLEM:
When calling a stored procedure that returns a REF CURSOR, different behavior
is seen in 10.2.0.5 and earlier vs 11.1.0.6 and later databases.
Sequence Of Events
Call stored procedure passing in a Ref Cursor
Open Ref Cursor against TableA
Update some data inside TableA from inside the stored procedure
COMMIT the update
Procedure execution ends returning Ref Cursor back to caller
10.2.0.5 and Earlier
The returned cursor does not see the updated data as it was opened prior to
the data being updated. This is the expected behavior.
11.1.0.6 and Later
The returned cursor sees the updated data and returns the updated data which
is different than the 10.2.0.5 and earlier behavior.
DIAGNOSTIC ANALYSIS:
10.2.0.4 Windows Expected Behavior
10.2.0.5 Solaris Expected Behavior
11.1.0.6 Solaris Un-Expected Behavior
11.1.0.7 Windows Un-Expected Behavior
11.2.0.1 Solaris Un-Expected Behavior
11.2.0.2 Solaris Un-Expected Behavior
RELATED BUGS:
None found.
If necessary, you can revert to the pre-10.2.0.5 behavior by setting the following startup parameter and restarting the database.
_row_cr = false
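For instance, on an spfile-based instance the change could be applied like this (only a sketch; _row_cr is an undocumented parameter, so confirm with Oracle Support before setting it):
-- requires a restart of the instance to take effect
ALTER SYSTEM SET "_row_cr" = FALSE SCOPE = SPFILE;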
When I navigate through the Oracle application with my application user and the right responsibility, I see the data.
I use the "record history" menu to see which table/view is used by the application.
So, I got PA_EXPEND_ITEMS_ADJUST2_V.
When I'm connected as the apps user in a SQL*Plus session,
SELECT * FROM PA_EXPEND_ITEMS_ADJUST2_V
gives me 0 rows.
I guess that something is misconfigured with the apps, but what?
How may I view the rows of PA_EXPEND_ITEMS_ADJUST2_V as the apps user in a SQL*Plus session?
How may I see the data in the Oracle view the way I see it through the application?
There is probably some row-level security happening here. Possibly based on views, possibly the built-in RLS/FGAC/VPD (or whatever acronym they give it in that version). That's where the database rewrites the query behind the scenes to add in filters.
Generally these are based on SYS_CONTEXT values.
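If you want to check whether VPD/RLS policies are involved, a query along these lines might help (a sketch only; the PA_% filter is just a guess at the relevant Projects objects):
-- list any row-level security policies defined on PA_% objects
SELECT object_owner, object_name, policy_name, enable
FROM dba_policies
WHERE object_name LIKE 'PA_%';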
In Oracle Applications you have to execute the APPS.FND_GLOBAL.apps_initialize procedure to have the same context in a SQL*Plus session. I use the following script to start a session:
SET SERVEROUTPUT ON
DECLARE
l_user_id NUMBER;
l_resp_id NUMBER;
l_app_id NUMBER;
l_resp_name VARCHAR2(100) := '<Name of your responsibility>';
l_login VARCHAR2(30) := '<USERLOGIN>';
BEGIN
SELECT user_id INTO l_user_id FROM fnd_user WHERE user_name = l_login;
SELECT application_id, responsibility_id
INTO l_app_id, l_resp_id
FROM fnd_responsibility_vl
WHERE responsibility_name = l_resp_name;
apps.fnd_global.apps_initialize(l_user_id, l_resp_id, l_app_id);
dbms_output.put_line('l_user_id = '||l_user_id);
dbms_output.put_line('l_resp_id = '||l_resp_id);
dbms_output.put_line('l_app_id = '||l_app_id);
END;
/
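After running this initialization block, re-query the view in the same session, e.g.:
SELECT COUNT(*) FROM PA_EXPEND_ITEMS_ADJUST2_V;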
You will need to log into Oracle with the same user (or a user with the same rights/roles) as the application is using.
You need to talk to your DBA.
Another possibility (apart from row-level security, which may be involved) is that the view is based on one or more global temporary tables - which means you won't see the data unless you query from within the same session that inserts it.
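A quick way to check that possibility (a sketch; the PA_% filter is only a guess at the underlying tables):
-- global temporary tables have TEMPORARY = 'Y' in the dictionary
SELECT owner, table_name, duration
FROM dba_tables
WHERE temporary = 'Y'
AND table_name LIKE 'PA_%';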
Or, perhaps, the app is deleting the data after it's finished with it ;)