We've noticed some weird types in the TOAD Schema Browser that seem to pop up randomly throughout the day on our database. We found them under Types -> Collection Types. They have names like these:
SYSTPHYP5bsxIC47gU0Z4MApeAw==
SYSTPHYP8cBHQYUDgU0Z4MApvyA==
SYSTPHYPwYo541RfgU0Z4MAqeTQ==
They seem to have randomly generated names, and we're pretty sure our application is not creating them. They are all TABLE OF NUMBER(20).
Does anyone have an explanation of what these types are for?
These are most likely related to use of the collect aggregate function. You can find some info on them here:
http://orasql.org/2012/04/28/a-funny-fact-about-collect/
It looks like in the past there was a bug (Bug 4033868, fixed in 11g) where these types were not cleaned up automatically.
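For context, here is a minimal sketch of how COLLECT gives rise to such types; the employees table and its columns are made up for illustration, and the exact circumstances under which Oracle generates a transient type vary by version:

-- An un-CASTed COLLECT makes Oracle generate a transient SYSTP...
-- collection type behind the scenes:
SELECT dept_id, COLLECT(emp_id) AS emp_ids
FROM employees
GROUP BY dept_id;

-- Casting the result to your own named type avoids relying on
-- the generated ones:
CREATE OR REPLACE TYPE num_tab AS TABLE OF NUMBER(20);
/
SELECT dept_id, CAST(COLLECT(emp_id) AS num_tab) AS emp_ids
FROM employees
GROUP BY dept_id;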
I have recently started using JetBrains DataGrip as a replacement for HeidiSQL.
My issue relates to the list of tables in the Database Explorer. We have a large database (700+ tables), and scrolling to find the table I'm looking for is rather cumbersome. For a table with a nice, long, specific name I can just start typing and get highlighting on the table names, which usually takes me right where I need to go.
But if I'm trying to get to the table user and we have lots of other tables with "user" in the name in various positions, the highlighting tool only takes me to the first table with "user" in the name, which is not the actual "user" table (and because we have tables with names like billingUsers, it doesn't scroll to anywhere near the actual user table that I'm looking for).
What other methods should I be using when I want to find a particular table in DataGrip? In HeidiSQL, there was the "filter tables" box which would filter the list of tables based on the search term, which got me much closer to my intended destination much more quickly. Does DataGrip have any sort of "quick filter" like this? Or is there some other tool or prompt I should use to go directly to this table instead?
During the writing of this question I came across the "double Shift" shortcut, which is helpful and closer to what I'm looking for -- though in my case I have several of our development environments configured in DataGrip, and it lists the table in every environment - whereas in the Database Explorer, I usually only have one environment expanded at a time (and only want to see results from that database, not the others).
StackOverflow suggested this question to me, which has the same issue as the double-Shift shortcut -- it shows results from all databases, not just the one I'm working in.
I am from the DataGrip team. We are currently working on this and expect the filtering functionality in 2022.3. For now, there are several workarounds (Shift+Shift is one of them). See them here:
https://youtrack.jetbrains.com/issue/DBE-3017/Show-only-filtered-items-when-typing-in-the-database-explorer
I tried to create a DDL script with some triggers. It's for a university submission, so multiple users share the DB and each gets their own schema.
However, with Tools -> Preferences -> Database -> PL/SQL Compiler -> PLScope identifiers set to "All", the process always crashed, while with it set to "None" everything was created just fine.
What exactly is the difference between those two settings?
PL/Scope is
a compiler-driven tool that collects data about identifiers in PL/SQL source code at program-unit compilation time and makes it available in static data dictionary views. The collected data includes information about identifier types, usages (declaration, definition, reference, call, assignment) and the location of each usage in the source code.
As for the identifiers themselves: you can query all_identifiers or user_identifiers; a DBA also has dba_identifiers at their disposal. It
displays information about the identifiers in the stored objects accessible to the current user.
Why does it fail when set to "All"? No idea. Try querying all_identifiers, compare the list with the one in the documentation, and see what they do. Maybe you'll find something useful.
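For reference, that preference maps to the plscope_settings compilation parameter, so you can reproduce both modes from plain SQL. A minimal sketch (the procedure name is made up):

-- compile with full identifier collection (the "All" setting)
ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL';

CREATE OR REPLACE PROCEDURE demo_proc IS
  l_counter NUMBER := 0;  -- a declaration plus an assignment, both recorded
BEGIN
  l_counter := l_counter + 1;  -- a reference and another assignment
END;
/

-- see what PL/Scope collected for it
SELECT name, type, usage, line, col
FROM user_identifiers
WHERE object_name = 'DEMO_PROC'
ORDER BY line, col;

-- the "None" setting is the equivalent of:
ALTER SESSION SET plscope_settings = 'IDENTIFIERS:NONE';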
Looking for a workaround for %ROWTYPE not including Oracle table invisible columns. I want invisible columns so as not to affect legacy code during a transition, while being able to use %ROWTYPE or similar in the new code to access all the columns. One thing I tried is creating a record type with the full table structure, but it does not seem to allow %TYPE references to individual columns, i.e.:
Type t_Inv_Test Is Record
(
  Test_Column_Vis Varchar2(20),  -- mirrors the visible column
  Test_Column_Inv Varchar2(20)   -- mirrors the invisible column
);
Cannot do:
Function Qry(p_Test_Val In t_Inv_Test.Test_Column_Vis%Type)
  Return t_Inv_Test.Test_Column_Inv%Type;
After looking at other invisible-column questions, I am also considering defining a view with all the columns and then using View%ROWTYPE, as sketched below.
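A minimal sketch of that view-based idea, assuming Oracle 12c+ (the table and view names are made up):

-- table with an invisible column: SELECT * and table%ROWTYPE skip it
CREATE TABLE inv_test
(
  test_column_vis VARCHAR2(20),
  test_column_inv VARCHAR2(20) INVISIBLE
);

-- a view that selects the invisible column explicitly exposes it again
CREATE OR REPLACE VIEW inv_test_full_v AS
  SELECT test_column_vis, test_column_inv
  FROM inv_test;

DECLARE
  l_row inv_test_full_v%ROWTYPE;  -- includes both columns
BEGIN
  l_row.test_column_inv := 'visible via the view';
END;
/

Note that %TYPE anchoring also works against the view's columns (e.g. inv_test_full_v.test_column_inv%TYPE), which sidesteps the record-type limitation above.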
What is the best way to do this?
Thanks
Joe
"I want invisible columns so as not to effect legacy code during a transition"
This sounds like a use case for Oracle Edition-Based Redefinition (EBR). EBR allows us to maintain two different versions of our data model in one live database. It is the sort of highly neat functionality which Oracle provides that justifies the license cost (discuss).
Anyway, you should definitely check it out before you embark on hand-rolling your own implementation of it. Find out more in the Oracle documentation.
"oracle tables are non-editionable objects."
Yes, there is only one version of the actual table. What EBR enables is the presentation of different projections of the table to different users. The idea is you define an edition before you add the column to the table. Users connecting using the old edition see the version of the table without the column; switch to the new edition and they see the column. Once you have migrated all your legacy apps to the new model you can retire the old edition.
This magic is achieved through views and triggers, pretty much as you propose doing, but with the guarantee of robustness which comes from using Oracle built-in functionality.
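A rough sketch of that flow, glossing over the one-time setup (enabling editions for the schema, grants, and so on); the table, column, and edition names are made up:

-- one-time: rename the real table and cover it with an editioning view
-- that keeps the old name and the old projection
ALTER TABLE users RENAME TO users_tab;
CREATE EDITIONING VIEW users AS
  SELECT user_id, user_name
  FROM users_tab;

-- the table itself is not editioned, so the new column is added once...
ALTER TABLE users_tab ADD (new_col VARCHAR2(20));

-- ...but only the new edition's view projects it
CREATE EDITION release_2;
ALTER SESSION SET EDITION = release_2;
CREATE OR REPLACE EDITIONING VIEW users AS
  SELECT user_id, user_name, new_col
  FROM users_tab;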
After running 'generateChangelog' on an Oracle database, the changelog file has the wrong type (or rather, simply a bad value) for some fields, independent of the driver used.
More precisely, some of the RAW columns are translated to STRING (which sounds okay), but values like "E01005C6842100020200000E10000000" are translated to "[B#433defed", which seems to be some blob-like entity (it looks like a Java byte array's default toString()). Also, these are the only data-related differences between the original database content and the backup.
When I try to restore the DB with 'update', these columns cause errors: "Unexpected error running Liquibase: *****: invalid hex number".
Is there any way forcing liquibase to save the problem columns "as-is", or anything else to overcome this situation? Or is it a bug?
I think more information is needed to be able to diagnose this. Ideally, if you suspect something may be a bug, you provide three things:
what steps you took (this would include the versions of things being used, relevant configuration, commands issued, etc.)
what the actual results were
what the expected results were
Right now we have some idea of 1 (ran generateChangelog on Oracle, then tried to run update) but we are missing things like what the structure of the Oracle database was, what version of Oracle/Liquibase, and what was the actual command issued. We have some idea of the actual results (columns that are of type RAW in Oracle are converted to STRING in the changelog, and it may be also converting the data in those columns to different values than you expect) and some idea of the expected results (you expected the RAW data to be saved in the changelog and then be able to re-deploy that change).
That being said, using Liquibase to back up and restore a database (especially one that has columns of type RAW/CLOB/BLOB) is probably not a good idea.
Liquibase is primarily aimed at helping manage change to the structure of a database and not so much with the data contained within it.
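That said, if you do need to move RAW values through a changelog, one possible workaround (a sketch, not a verified Liquibase recipe) is to keep them as hex text in a plain SQL changeset and let Oracle do the conversion. The table and column names here are made up; the hex literal is the value from the question:

-- HEXTORAW turns the hex representation back into RAW bytes,
-- and RAWTOHEX can produce that representation on export
INSERT INTO some_table (id, raw_col)
VALUES (1, HEXTORAW('E01005C6842100020200000E10000000'));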
I'm instantiating a client-side representation of an Oracle Schema data-model in custom Table/Column/Constraint/Index data structures, in C/C++ using OCI. For this, I'm selecting from:
all_tables
all_tab_comments
all_col_comments
all_cons_columns
all_constraints
etc...
And then I'm using OCI to describe all tables, for precise information about column types. This is working, but our CI testing farm often fails inside this schema data-model introspection code, because another test running in parallel creates/deletes tables in the middle of this series of queries and describe calls I'm making.
My question is thus: how can I introspect this schema atomically, such that another session does not concurrently change the very schema I'm introspecting?
Would using a read-only serializable transaction around the selects and describes be enough? I.e., does MVCC apply to Oracle's data dictionaries? What would be the likelihood of "snapshot too old" (ORA-01555) errors on such system dictionaries?
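For concreteness, this is the kind of wrapper I mean; whether it actually gives a consistent view of the dictionary is exactly the open question:

-- all queries inside a read-only transaction see the same snapshot,
-- taken at SET TRANSACTION time (for ordinary tables, at least)
SET TRANSACTION READ ONLY;
SELECT table_name FROM all_tables WHERE owner = USER;
SELECT constraint_name, table_name FROM all_constraints WHERE owner = USER;
COMMIT;  -- ends the read-only transaction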
If full atomicity is not possible, are there steps I could take to minimize the possibility of getting inconsistent / stale info?
I was thinking of maybe using left joins to reduce the number of queries, and/or replacing the OCIDescribeAny() calls with other dictionary accesses joined to the other tables, to get all table/column info in a single query each?
I'd appreciate some expert input on this concurrency issue. Thanks, --DD
This is a typical read-write conflict. Off the top of my head, I see two ways around it:
1. Use the dbms_lock package in both the "introspection" session and the "another test" session, so the two serialize against each other (see the sketch below).
2. Rewrite your introspection query so that it returns one big result set with everything you need. There are multiple ways to do that:
- use xmlagg and the like;
- use listagg and get one big string or CLOB;
- just use a bunch of UNIONs to get one result set, since a single SQL statement is guaranteed to be read-consistent.
Hope that helps.
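A minimal sketch of the dbms_lock idea (the lock name is made up, and the session needs EXECUTE on DBMS_LOCK); the DDL-heavy tests would request the same lock around their create/drop statements:

DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- both the introspection job and the tests must agree on this name
  DBMS_LOCK.ALLOCATE_UNIQUE(lockname => 'SCHEMA_INTROSPECTION',
                            lockhandle => l_handle);
  l_status := DBMS_LOCK.REQUEST(lockhandle => l_handle,
                                lockmode => DBMS_LOCK.X_MODE,
                                timeout => 60,
                                release_on_commit => FALSE);
  IF l_status NOT IN (0, 4) THEN  -- 0 = success, 4 = already owned
    RAISE_APPLICATION_ERROR(-20001, 'lock request failed: ' || l_status);
  END IF;

  -- ... run the introspection queries / OCIDescribeAny calls here ...

  l_status := DBMS_LOCK.RELEASE(lockhandle => l_handle);
END;
/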