Derby XSAI2 error on DB table that exists.... but doesn't

I'm not sure what is going on with my Derby database, but I seem to have tables that I can see from the ij interface...
ij> show tables in derbytest;
TABLE_SCHEM |TABLE_NAME |REMARKS
------------------------------------------------------------------------
DERBYTEST |DATATYPETEST |
DERBYTEST |LOCATION |
DERBYTEST |SUIVI |
Now I get the table's description...
ij> describe derbytest.datatypetest;
COLUMN_NAME |TYPE_NAME|DEC&|NUM&|COLUM&|COLUMN_DEF|CHAR_OCTE&|IS_NULL&
------------------------------------------------------------------------------
A_DATE |DATE |0 |10 |10 |NULL |NULL |NO
AN_INT |INTEGER |0 |10 |10 |NULL |NULL |YES
A_DECIMAL |DECIMAL |0 |10 |5 |NULL |NULL |YES
A_STRING |VARCHAR |NULL|NULL|20 |NULL |40 |YES
A_SWITCH |BOOLEAN |NULL|NULL|1 |NULL |NULL |YES
So I guess the table exists, but...
ij> select * from derbytest.datatypetest;
ERREUR XSAI2 : Le conglomérat (1 232) demandé n'existe pas. [In English: ERROR XSAI2: The conglomerate (1,232) requested does not exist.]
So a quick check to see if the problem is being caused by an 'empty' table..
ij> select * from derbytest.suivi;
OBS |DATE |TIME
-----------------------------------------------------------------------
which to me suggests not!
I'm not sure I fully understand the implication of the error message. I found this in the docs:
Table 36. Class XSAI: Store - access.protocol.interface SQLSTATE
Message Text XSAI2: The conglomerate () requested does not exist.
which isn't amazingly helpful!
I've had a look at the various API docs for the engine, language, testing and tools, but I don't know where to start looking. Any pointers would be helpful.
It may be related to how I am setting up the database, so some quick background.
I'm connecting to this test DB from a Java test class. It gathers info from another data source (Excel or flat file) and then drops it into this database (or that is the aim). I am only showing a small 'test' that I made to ensure my connection was working.
I have another schema in this file that has more tables, and they all have this same problem.
Have I not correctly closed a connection and lost data?
Have I somehow inadvertently deleted a data file that contained the missing 'conglomerate'?
Any help is greatly appreciated.
David.
PS: I have other test DBs that I haven't checked to see if they have the same problem.
I'm running Java 6 on XP.
Edit 1: Just checked the other test DB I am using; it contains no tables! I obviously cleaned up after myself. Now where did that cat go??

That is odd behavior, to be sure. I'm not sure what's wrong.
Did you create your table inside a transaction but not yet commit that transaction?
Did you create your table using an in-memory database, in which case it disappears when you close the database?
Did you create your database in one location on the disk, then later connect to a different location with 'create=true', in which case Derby creates a new blank database in the new location?
Did you create your database using one schema, then connect with a different schema?
The error message does suggest some internal damage to the table. The number in parentheses (1 232, i.e. conglomerate 1232) is a conglomerate number, which is also used to identify the conglomerate's filename in your filesystem. So you could look in the filesystem and match up the files that are there against the tables in your database (by selecting from sys.sysconglomerates).
You get a conglomerate for the table itself, plus additional conglomerates for each secondary index, both those created by CREATE INDEX and those created by implicit constraints such as UNIQUE or REFERENCES.
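For example, a query along these lines in ij lists each table with its conglomerate numbers (a sketch only; if I recall correctly, conglomerate number N corresponds to the file seg0/c<N in hex>.dat under the database directory, but verify against your own files):
-- Map each table to its conglomerates; ISINDEX distinguishes
-- the base table conglomerate from secondary index conglomerates.
SELECT t.tablename, c.conglomeratenumber, c.isindex
FROM sys.sysconglomerates c
JOIN sys.systables t ON c.tableid = t.tableid
ORDER BY t.tablename;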
If you suspect you have table damage, it's best to restore from a backup. Did you experience any system crashes, disk full events, etc., which might have indicated table damage?

Related

Oracle Materialized View Content-Based Refresh

How can I update each row with a specific value in a column from a Materialized View?
Example:
ID|VALUE|CLIENT
----------------
1 |A |00
2 |B |01
3 |C |00
After an update the table looks like:
ID|VALUE|CLIENT
----------------
1 |B |00
2 |D |01
3 |C |00
but the refresh shall only affect the rows of the specific client '00', so the MView shall look like:
ID|VALUE|CLIENT
----------------
1 |B |00
2 |B |01
3 |C |00
Is there any way to get that without replacing the MView with a table?
I never tried it and can't right now, as I only have 11g XE available, which doesn't support it, but you might: the keyword is partitioning (make sure your Oracle license is OK with it, as partitioning is an Enterprise Edition option).
The idea is: partition the MView on CLIENT so that you could refresh only the '00' partition.
Have a look at Partitioning and Materialized Views (this is the 11g documentation; if you use some other, find the appropriate documentation yourself).
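Untested sketch only (hypothetical names, and again: I can't try this on XE), but the DDL shape would be something like:
-- List-partition the MView on CLIENT so a refresh can target
-- just the partition holding client '00'.
CREATE MATERIALIZED VIEW mv_client
PARTITION BY LIST (client)
  (PARTITION p_00 VALUES ('00'),
   PARTITION p_01 VALUES ('01'))
AS
SELECT id, value, client
FROM   master_table;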

how to disable TOAD parallel slave processes in Oracle?

I have TOAD 12.5 going against multiple Oracle SIDs. On one of them, the first connection immediately opens a session with 4 parallel slave processes (which show up as additional sessions in v$session, freaking out the local DBA). And then each additional SQL Editor adds another 5.
On other SIDs this is not happening.
Is there a known way to disable this in TOAD? (So far nothing has worked.)
EDIT #1: Okay, this turned out not to be related to TOAD. Every session opened against that instance (even a blank connection) automatically creates 4 additional slave processes right away, which are only visible in gv$session for your own connection (which is why it looked like the other TOAD connections didn't have them).
I will keep this thread open for some time until I find out what the deal is with the worker processes.
FINAL EDIT: Finally found out that they force multiple threads for each statement at the instance level, so this has nothing to do with TOAD or clients.
This isn't "real" parallelism. Oracle uses small parallel queries for GV$ dynamic performance views on a Real Application Cluster (RAC). Oracle is currently a shared-everything architecture where every node can access all the data. The exception is dynamic performance views, since most activity only happens on a specific node and is tracked only on that node.
This behavior will only occur on some SIDs because GV$ only uses parallel queries if the database is clustered. The queries may consume parallel processes, but only one per node, and these queries usually do not use up many resources.
These queries should not normally be a problem. I can think of a few scenarios where they would look like a problem, but would not be the root problem:
PARALLEL_MAX_SERVERS too low. Parallel sessions should not be a scarce resource. A DBA is right to worry about run-away parallelism, but when organizations worry about such a small number of parallel sessions it's usually because they've created an artificial scarcity by shrinking PARALLEL_MAX_SERVERS. The default value for PARALLEL_MAX_SERVERS is usually PARALLEL_THREADS_PER_CPU * CPU_COUNT * concurrent_parallel_users * 5. If your server was purchased in this century there's no need to worry about a few extra sessions. (See the check queries after this list.)
Bad interconnect. RAC needs a really good network connection between nodes; 100Mbps Ethernet is not going to cut it, and the nodes will spend a lot of time communicating.
Bad dictionary or fixed object statistics. Data dictionary queries may be slow if dictionary or fixed object stats have never been gathered. If these queries are running for a long time, try gathering stats with: exec dbms_stats.gather_dictionary_stats; and exec dbms_stats.gather_fixed_object_stats;.
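To check whether parallel servers really are scarce (not from the original answer; these are standard views worth querying):
-- Configured ceiling for parallel execution servers.
SELECT name, value FROM v$parameter WHERE name = 'parallel_max_servers';
-- Current parallel execution server usage (in use, available, started).
SELECT * FROM v$px_process_sysstat;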
Below is a demonstration of GV$ using parallel queries. This will only work on a RAC database.
> explain plan for select * from v$process;
Explained.
> select * from table(dbms_xplan.display(format => 'basic'));
PLAN_TABLE_OUTPUT
--------------------------------------------------
Plan hash value: 4113158240
------------------------------------
| Id | Operation | Name |
------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | FIXED TABLE FULL| X$KSUPR |
------------------------------------
8 rows selected.
> explain plan for select * from gv$process;
Explained.
> select * from table(dbms_xplan.display(format => 'basic'));
PLAN_TABLE_OUTPUT
--------------------------------------------------
Plan hash value: 3900509504
-------------------------------------------
| Id | Operation | Name |
-------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | PX COORDINATOR | |
| 2 | PX SEND QC (RANDOM)| :TQ10000 |
| 3 | VIEW | GV$PROCESS |
| 4 | FIXED TABLE FULL | X$KSUPR |
-------------------------------------------
11 rows selected.
To minimize the number of sessions set the following options on the Oracle|Transaction page in Options.
Disable "Execute queries in threads"
Enable "Execute scripts in Toad session"
Set "Session for Explain Plan" to "Main Toad Session"

Counting Hits with a Structured Predicate containing LIKE in Oracle returns wrong results

I'm trying to use Oracle Text to perform a query searching for rows where the OS name starts with "AIX" and the document contains the substring 'XYZ'. Somehow this formulation of the query returns 0 results, even though if I break it up into separate parts there are clearly results:
SELECT
COUNT(*) AS cnt
FROM
package_master
WHERE
CONTAINS(doc,'%XYZ%',1)>0 AND UPPER(os) LIKE 'AIX%'
This returns 0 results.
But curiously if I modify it to:
SELECT
COUNT(*) AS cnt
FROM
package_master
WHERE
CONTAINS(doc,'%XYZ%',1)>0 AND UPPER(os)='AIX 6.1.0.0'
it returns results, but of course only those that pertain to AIX 6.1.0.0...
I'm using Oracle 11gR2.
Is it possible there is a bug in the Oracle Text package?
I guess I can break it into two INTERSECT queries and do a COUNT(*) of the results, but that complicates matters and seems to run for a long while... I would like to use the simple 'AND' form, if possible...
This works but runs for a long while and is unnecessarily complex:
SELECT count(*) FROM (
SELECT
host, package_name
FROM
package_master
WHERE
CONTAINS(doc,'%XYZ%',1)>0
INTERSECT
SELECT
host, package_name
FROM
package_master
WHERE
UPPER(os) LIKE 'AIX%'
)
Also note that if I try to do an EXPLAIN on the original query, it's as though the "LIKE" portion of the query is not executed at all...! This is rather bizarre:
Plan hash value: 1075233541
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 238 | 55 (0)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 238 | | |
|* 2 | DOMAIN INDEX | PACKAGE_MASTER_IDX7 | 100 | 23800 | 55 (0)| 00:00:01 |
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("CTXSYS"."CONTAINS"("DOC",'%XYZ%',1)>0)
filter(UPPER("OS") LIKE 'AIX%')
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit. NLS_COMP value is BINARY, NLS_SORT value is null. The table is only loaded once so it is not an issue with synching the index.
In a bizarre twist, I am no longer seeing this issue! I cannot reproduce the 0-result behavior, and now when I perform an EXPLAIN PLAN I see that the optimizer is working properly. Not much has changed. Maybe Oracle just needed a restart... I guess I will close out this question, even though there was no satisfactory reason/explanation as to how the issue resolved itself.
Actually the issue came back...
My counts are once again showing incorrect values...
The Oracle optimizer once again decided to ignore one of the conditions of the WHERE clause. I ran the EXPLAIN PLAN and confirmed that it was ignoring one half of the WHERE clause, which looks like a bug to me.
I decided to rewrite all the queries so that the CONTAINS() portion is in one place and the rest of the filtering is done in a separate place.
It appears to be holding up.
The new query format that I picked that seems to be working is:
WITH x AS (
SELECT
*
FROM
package_master_naught
WHERE
CONTAINS(p_n_c,'%XYZ%',1)>0
)
SELECT
COUNT(*) AS cnt
FROM
x
WHERE
UPPER(os) LIKE 'AIX%';

Materialized View fast refresh taking a long time

I have a large table that is replicated from Oracle 10.2.0.4 to an Oracle 9i database using MView replication over the network. The master table is about 50GB and 160M rows, and there are about 2-3M new or updated rows per day.
The master table has a materialized view log created using rowid.
The full refresh of the view works and takes about 5 hours, which we can live with.
However, the fast refresh is struggling to keep up. Oracle seems to require two queries against the mlog and master table to do the refresh; the first looks like this:
SELECT /*+ */
DISTINCT "A1"."M_ROW$$"
FROM "GENEVA_ADMIN"."MLOG$_BILLSUMMARY" "A1"
WHERE "A1"."M_ROW$$" <> ALL (SELECT "A2".ROWID
FROM "GENEVA_ADMIN"."BILLSUMMARY" "A2"
WHERE "A2".ROWID = "A1"."M_ROW$$")
AND "A1"."SNAPTIME$$" > :1
AND "A1"."DMLTYPE$$" <> 'I'
The current plan is:
---------------------------------------------------------------
| Id | Operation | Name |
---------------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | HASH UNIQUE | |
| 2 | FILTER | |
| 3 | TABLE ACCESS BY INDEX ROWID| MLOG$_BILLSUMMARY |
| 4 | INDEX RANGE SCAN | MLOG$_BILLSUMMARY_AK1 |
| 5 | TABLE ACCESS BY USER ROWID | BILLSUMMARY |
When there are 3M rows changed, this query literally runs forever - it's basically useless. However, if I rewrite it slightly and tell it to full scan the master table and the mlog table, it completes in 20 minutes.
The problem is that the above query is coming out of the innards of Oracle and I cannot change it. The problem is really the FILTER operation on line 2 - if I could get it to full scan both tables and do a hash join / anti-join, I am confident I could get it to complete quickly enough, but no recipe of hints I offer will get this query to stop using the FILTER operation - maybe it's not even valid. I can use hints to get it to full scan both tables, but the FILTER operation remains, and I understand it executes line 5 once for each row returned by line 3, which will be 2-3M rows.
Has anyone got any ideas on how to trick this query into the plan I want without changing the actual query, or better, any way of getting replication to take a more sensible plan for my table sizes?
Thanks,
Stephen.
As you wrote, the queries are part of an internal Oracle mechanism, so your tuning options are limited. The fast-refresh algorithm seems to behave differently in more recent versions; check Alberto Dell’Era’s analysis.
You could also look into SQL profiles (10g feature). With the package DBMS_SQLTUNE this should allow you to tune individual SQL statements.
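A minimal sketch of that route (the sql_id is a placeholder; look up the real one for the refresh statement in v$sql):
DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- Create and run a tuning task for the internal refresh query.
  l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'your_sql_id');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(l_task);
  -- Inspect DBMS_SQLTUNE.REPORT_TUNING_TASK(l_task); if a profile is
  -- recommended, accept it with DBMS_SQLTUNE.ACCEPT_SQL_PROFILE.
END;
/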
How do the estimated cardinalities look for the refresh query in comparison to the actual cardinalities? Maybe the MLOG$ table statistics are incorrect.
It might be better to have no statistics on the table and lock them in order to invoke dynamic sampling, which ought to give a reasonable estimation based on the multiple predicates in the query.
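A sketch of that idea, using the names from the question (delete the stats, then lock them so the regular stats job cannot put them back):
BEGIN
  -- With no stats and locked stats, the optimizer falls back to
  -- dynamic sampling of MLOG$_BILLSUMMARY at parse time.
  DBMS_STATS.DELETE_TABLE_STATS('GENEVA_ADMIN', 'MLOG$_BILLSUMMARY');
  DBMS_STATS.LOCK_TABLE_STATS('GENEVA_ADMIN', 'MLOG$_BILLSUMMARY');
END;
/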

Hidden features in Oracle

I enjoyed the answers and questions about hidden features in SQL Server.
What can you tell us about Oracle?
Hidden tables, inner workings of ..., secret stored procs, packages that have good utils...
Since Apex is now part of every Oracle database, these Apex utility functions are useful even if you aren't using Apex:
SQL> declare
2 v_array apex_application_global.vc_arr2;
3 v_string varchar2(2000);
4 begin
5
6 -- Convert delimited string to array
7 v_array := apex_util.string_to_table('alpha,beta,gamma,delta', ',');
8 for i in 1..v_array.count
9 loop
10 dbms_output.put_line(v_array(i));
11 end loop;
12
13 -- Convert array to delimited string
14 v_string := apex_util.table_to_string(v_array,'|');
15 dbms_output.put_line(v_string);
16 end;
17 /
alpha
beta
gamma
delta
alpha|beta|gamma|delta
PL/SQL procedure successfully completed.
"Full table scans are not always bad. Indexes are not always good."
An index-based access method is less efficient at reading rows than a full scan when you measure it in terms of rows accessed per unit of work (typically per logical read). However, many tools will interpret a full table scan as a sign of inefficiency.
Take an example where you are reading a few hundred invoices from an invoice table and looking up a payment method in a small lookup table. Using an index to probe the lookup table for every invoice probably means three or four logical I/Os per invoice. However, a full scan of the lookup table in preparation for a hash join from the invoice data would probably require only a couple of logical reads, and the hash join itself would complete in memory at almost no cost at all.
However, many tools would look at this and see "full table scan", and tell you to try to use an index. If you do so then you may have just de-tuned your code.
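As a concrete (hypothetical) illustration of the two shapes, compare the logical reads reported by autotrace for an indexed probe versus this hash join:
-- Full-scan the small lookup table once and hash-join it to the
-- invoices, instead of probing its index 3-4 logical reads at a time.
SELECT /*+ FULL(pm) USE_HASH(pm) */ i.invoice_id, pm.description
FROM   invoices i
JOIN   payment_methods pm ON pm.method_id = i.method_id
WHERE  i.invoice_date >= DATE '2008-12-01';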
Incidentally, over-reliance on indexes, as in the above example, causes the "Buffer Cache Hit Ratio" to rise. This is why the BCHR is mostly nonsense as a predictor of system efficiency.
The cardinality hint is mostly undocumented.
explain plan for
select /*+ cardinality(@inner 5000) */ *
from (select /*+ qb_name(inner) */ * from dual)
/
select * from table(dbms_xplan.display)
/
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5000 | 10000 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| DUAL | 1 | 2 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------
The Buffer Cache Hit Ratio is virtually meaningless as a predictor of system efficiency
You can view table data as of a previous time using Flashback Query, with certain limitations.
Select *
from my_table as of timestamp(timestamp '2008-12-01 15:21:13')
11g has a whole new feature set around preserving historical changes more robustly.
Frequent rebuilding of indexes is almost always a waste of time.
wm_concat works like the MySQL group_concat, but it is undocumented.
with data:
-car-     -maker-
Corvette  Chevy
Taurus    Ford
Impala    Chevy
Aveo      Chevy
select wm_concat(car) Cars, maker from cars
group by maker
gives you:
-Cars-                  -maker-
Corvette, Impala, Aveo  Chevy
Taurus                  Ford
The OVERLAPS predicate is undocumented.
http://oraclesponge.wordpress.com/2008/06/12/the-overlaps-predicate/
I just found out about the pseudo-column ORA_ROWSCN. If you don't set your table up for this, this pseudo-column gives you the block SCN. This could be really useful for the emergency "Oh crap, I have no auditing on this table and wonder if someone has changed the data since yesterday" moment.
But even better is if you create the table with ROWDEPENDENCIES on. That puts the SCN of the last change on every row. This will help you avoid a "lost edit" problem without having to include every column in your query.
IOW, when your app grabs a row for user modification, also select the ORA_ROWSCN. Then when you post the user's edits, include ORA_ROWSCN = v_rscn in addition to the unique key in the WHERE clause. If someone has touched the row since you grabbed it, aka a lost edit, the update will match zero rows since the ORA_ROWSCN will have changed.
So cool.
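A sketch of the flow (hypothetical table t created with ROWDEPENDENCIES; :id, :new_val and :v_rscn are bind variables):
-- 1. Read the row and remember its row-level SCN.
SELECT id, val, ORA_ROWSCN FROM t WHERE id = :id;
-- 2. Post the edit; zero rows updated means someone changed it meanwhile.
UPDATE t
SET    val = :new_val
WHERE  id  = :id
AND    ORA_ROWSCN = :v_rscn;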
If you get the value of PASSWORD column on DBA_USERS you can backup/restore passwords without knowing them:
ALTER USER xxx IDENTIFIED BY VALUES 'xxxx';
Bypass the buffer cache and read straight from disk using direct path reads.
alter session set "_serial_direct_read"=true;
It causes a tablespace checkpoint (9i) or fast object checkpoint (10g+), so be careful on busy OLTP systems.
More undocumented stuff at http://awads.net/wp/tag/undocumented/
Warning: Use at your own risk.
I don't know if this counts as hidden, but I was pretty happy when I saw this way of quickly seeing what happened with a SQL statement you are tuning:
SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM DUAL;
SELECT * FROM TABLE(dbms_xplan.display_cursor( NULL, NULL, 'RUNSTATS_LAST'));
PLAN_TABLE_OUTPUT
-----------------------------------------------------
SQL_ID 5z36y0tq909a8, child number 0
-------------------------------------
SELECT /*+ GATHER_PLAN_STATISTICS */ * FROM DUAL
Plan hash value: 272002086
---------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------
| 1 | TABLE ACCESS FULL| DUAL | 1 | 1 | 1 |00:00:00.02 | 3 | 2 |
---------------------------------------------------------------------------------------------
12 rows selected.
Where:
E-Rows is estimated rows.
A-Rows is actual rows.
A-Time is actual time.
Buffers is actual buffers.
Where the estimated plan varies from the actual execution by orders of magnitude, you know you have problems.
Not a hidden feature, but fine-grained access control (FGAC), also known as row-level security, is something I have used in the past and was impressed with the efficiency of its implementation. If you are looking for something that guarantees you can control the granularity of how rows are exposed to users with differing permissions - regardless of the application that is used to view the data (SQL*Plus as well as your web app) - then this is a gem.
The built-in full-text indexing is more widely documented, but still stands out because of its stability (just try running a full re-indexing of full-text-indexed columns on similar data samples on MS SQL and Oracle and you'll see the speed difference).
WITH Clause
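For instance (a minimal sketch against the classic SCOTT demo tables):
-- Factor a subquery out once and reference it by name below.
WITH dept_counts AS (
  SELECT deptno, COUNT(*) AS cnt
  FROM   emp
  GROUP  BY deptno
)
SELECT d.dname, c.cnt
FROM   dept d
JOIN   dept_counts c ON c.deptno = d.deptno;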
Snapshot tables. Also found in Oracle Lite, and extremely useful for rolling your own replication mechanism.
@Peter
You can actually bind a variable of type "Cursor" in TOAD, then use it in your statement and it will display the results in the result grid.
exec open :cur for select * from dual;
Q: How to call a stored with a cursor from TOAD?
A: Example, change to your cursor, packagename and stored proc name
declare
  l_cursor PCK_UTILS.typ_cursor;  -- "cursor" is a reserved word, so name the variable
begin
  PCK_UTILS.spc_get_encodedstring(
    'U',
    10000002,
    null,
    'none',
    l_cursor);
end;
The Model Clause (available for Oracle 10g and up)
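A small taste of it (sketch only; dual fabricates a single seed row):
-- Spreadsheet-style rules: derive the 2009 and 2010 cells from 2008.
SELECT yr, sales
FROM  (SELECT 2008 AS yr, 100 AS sales FROM dual)
MODEL
  DIMENSION BY (yr)
  MEASURES (sales)
  RULES (
    sales[2009] = sales[2008] * 1.1,
    sales[2010] = sales[2009] * 1.1
  )
ORDER BY yr;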
WM_CONCAT for string aggregation
Scalar subquery caching is one of the most surprising features in Oracle
-- my_function is NOT deterministic but it is cached!
select t.x, t.y, (select my_function(t.x) from dual)
from t
-- logically equivalent to this, uncached
select t.x, t.y, my_function(t.x) from t
The "caching" subquery above evaluates my_function(t.x) only once per unique value of t.x. If you have large partitions of the same t.x value, this will immensely speed up your queries, even if my_function is not declared DETERMINISTIC. Even if it was DETERMINISTIC, you can safe yourself a possibly expensive SQL -> PL/SQL context switch.
Of course, if my_function is not a deterministic function, then this can lead to wrong results, so be careful!
