SQL Developer: Difference between PLSCOPE_SETTINGS 'None' and 'All'? - oracle

I tried to create a DDL script with some triggers. It's for a university submission, so multiple users share the database and each gets their own schema.
However, with Tools -> Preferences -> Database -> PL/SQL Compiler -> PLScope identifiers set to "All", the process always crashed, while setting it to "None" made it run just fine.
What exactly is the difference between those two settings?

PL/Scope is
a compiler-driven tool that collects data about identifiers in PL/SQL source code at program-unit compilation time and makes it available in static data dictionary views. The collected data includes information about identifier types, usages (declaration, definition, reference, call, assignment) and the location of each usage in the source code.
As for those identifiers: you can query all_identifiers or user_identifiers; a DBA also has dba_identifiers at their disposal. The view
displays information about the identifiers in the stored objects accessible to the current user.
Why does it fail when set to "All"? No idea. Try querying all_identifiers and compare the list with the one in the documentation to see what the identifiers do. Maybe you'll find something useful.
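As a starting point, a session-level sketch (the trigger name MY_TRIGGER is a placeholder; yours will differ):

```sql
-- Enable identifier collection for this session, then recompile
-- the unit so PL/Scope data is gathered for it.
ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL';
ALTER TRIGGER my_trigger COMPILE;

-- List the identifiers PL/Scope collected for that object.
SELECT name, type, usage, line, col
FROM   user_identifiers
WHERE  object_name = 'MY_TRIGGER'
ORDER  BY line, col;
```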

Related

Invoking a select script through ODI (Oracle Data Integrator)

May I have your opinion on the queries below, please?
Option 1:
I have a select script handy which fetches data by joining many source tables and performs some transformations, such as aggregations (GROUP BY), data conversion, substrings, etc.
Can I invoke this script through an ODI mapping, so that the returned results (the transformed data) can be inserted into the target of the ODI mapping?
Option 2:
Convert the select script into an equivalent ODI mapping by using equivalent ODI transformations, functions, lookups, etc., and use the various tables (the tables in the join clauses) as sources of the mapping.
Basically, develop an ODI mapping that is equivalent to the provided select script, plus a target table to insert records into.
I need to know the pros and cons of both options (if option 1 is possible at all).
With option 1, is it still possible to track transformation errors, source-table join errors, WHERE-clause condition errors, etc. through ODI?
Will the log file for a mapping failure have details at as granular a level as option 2 offers?
Can I still enable Flow Control in the Knowledge Module and redirect select-script errors into the E$_ error tables provided by ODI?
Thanks,
Rajneesh
Option 1: ODI 12c includes that concept out of the box. On the physical tab of a mapping, click on the source node (datastore). Then, in the properties pane, there is a CUSTOM_TEMPLATE option under the "Extract Options" menu. This allows you to enter a custom SQL statement that will be used instead of the code generated by ODI.
However, it is probably less maintainable over time than option 2. SQL is less visual than mapping components. Also, if you need to bulk-change it, it will be trickier: changing a component in several mappings can be done with the SDK, whereas changing SQL code would require parsing it. You might indeed have less information in your Operator logs, as the SQL would be seen as just one block of code. It also wouldn't provide any lineage.
I believe using Flow Control would work but I haven't tested it.
Option 2 would take more time to complete but with that you would benefit from all the functionalities of ODI.
My own preference would be to occasionally use option 1 for really complex SQL queries but to use option 2 for most of the normal use cases.

Oracle SYSTPH* Type

We've noticed in the TOAD schema browser some weird types which seem to pop up randomly throughout the day on our database. We found them by using the Schema Browser in TOAD under Types -> Collection Types. They have names like these:
SYSTPHYP5bsxIC47gU0Z4MApeAw==
SYSTPHYP8cBHQYUDgU0Z4MApvyA==
SYSTPHYPwYo541RfgU0Z4MAqeTQ==
They seem to have randomly generated names, and we're pretty sure our application is not creating them. They are all TABLE OF NUMBER(20).
Does anyone have an explanation of what these types are for?
These are most likely related to use of the COLLECT aggregate function. You can find some information on them here:
http://orasql.org/2012/04/28/a-funny-fact-about-collect/
It looks like in the past there was a bug (Bug 4033868, fixed in 11g) where these types did not clean up after themselves.
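A rough illustration of how such types can appear, assuming the behavior described in the linked post (table and column names are placeholders, and the SYSTP name is system-generated, so yours will differ):

```sql
-- COLLECT without a CAST makes Oracle create a transient,
-- system-named collection type (SYSTPH...) behind the scenes.
SELECT COLLECT(order_id) FROM orders;

-- Casting to a named collection type keeps the type under your control.
CREATE TYPE number_tab AS TABLE OF NUMBER(20);
SELECT CAST(COLLECT(order_id) AS number_tab) FROM orders;
```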

Can full information about an Oracle schema data-model be selected atomically?

I'm instantiating a client-side representation of an Oracle Schema data-model in custom Table/Column/Constraint/Index data structures, in C/C++ using OCI. For this, I'm selecting from:
all_tables
all_tab_comments
all_col_comments
all_cons_columns
all_constraints
etc...
And then I'm using OCI to describe all tables, for precise information about column types. This works, but our CI testing farm often fails inside this schema data-model introspection code, because another test running in parallel creates/deletes tables in the middle of this series of queries and describe calls.
My question is thus: how can I introspect this schema atomically, such that another session does not concurrently change the very schema I'm introspecting?
Would using a read-only serializable transaction around the selects and describes be enough? I.e., does MVCC apply to Oracle's data dictionary? What would be the likelihood of "Snapshot Too Old" errors on such system dictionaries?
If full atomicity is not possible, are there steps I could take to minimize the possibility of getting inconsistent / stale info?
I was thinking maybe left-joins to reduce the number of queries, and/or replacing the OCIDescribeAny() calls with other dictionary accesses joined to other tables, to get all table/column info in a single query each?
I'd appreciate some expert input on this concurrency issue. Thanks, --DD
This is a typical read-write conflict. Off the top of my head, I see two ways around it:
Use the dbms_lock package in both the "introspection" code and the "another test".
Rewrite your introspection query so that it returns everything you need in one shot. There are multiple ways to do that:
use xmlagg and the like;
use listagg and get one big string or CLOB;
just use a bunch of UNIONs to get one result set, as a single statement is guaranteed to be read-consistent.
Hope that helps.
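The dbms_lock approach might be sketched like this; the lock name is an arbitrary placeholder, and both the introspection code and the table-creating tests would need to run the same request/release pattern:

```sql
-- Serialize schema introspection against concurrent DDL tests
-- using a named user lock (requires EXECUTE on DBMS_LOCK).
DECLARE
  l_handle VARCHAR2(128);
  l_result INTEGER;
BEGIN
  -- Map an agreed-upon lock name to a lock handle.
  DBMS_LOCK.ALLOCATE_UNIQUE('SCHEMA_INTROSPECTION', l_handle);

  -- Take the lock exclusively; blocks until the other session releases it.
  l_result := DBMS_LOCK.REQUEST(l_handle, DBMS_LOCK.X_MODE,
                                release_on_commit => FALSE);

  -- ... run the dictionary queries / OCIDescribeAny calls here ...

  l_result := DBMS_LOCK.RELEASE(l_handle);
END;
/
```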

Forcing Oracle to use Primary Key Index without using Hints

We have an application that generates some temporary tables and then processes the data. I don't really have control over the way the application creates these tables or over the subsequent queries involved. What we have noticed is that Oracle uses a full table scan instead of the index backing the primary key of the tables. If it used the primary key index, the process would run a whole lot faster.
Since I do not have control over the select queries generated by the application, I cannot use hints to force Oracle to use the primary key index. Is there any other setting I could change somewhere that would make Oracle use the primary key index for the temporary tables?
The two most common reasons for a query not using indexes are:
It's quicker to do a full table scan.
Poor statistics.
If your queries are selecting all of the table, or doing joins without mentioning a primary key in the WHERE clause, etc., chances are it's quicker to do a full scan. Without the query and indexes, and preferably an explain plan as well, it's impossible to tell for certain.
I would, however, recommend that you ask your DBA to re-gather (or, I hope not, gather for the first time) statistics on the table. Use dbms_stats.gather_table_stats with an estimate percentage of 25%+.
If the tables are re-created each time the application is run, then try to gather statistics after creation and primary key generation. If they are truncated and re-filled each time, then ask your DBA to rebuild them and the PK, and then gather statistics, as this could significantly improve query runtime.
With no control over anything I don't see how you can improve the query time any other way.
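The statistics-gathering step suggested above might look like this (the schema and table names are placeholders):

```sql
-- Gather statistics after the temporary tables are (re)created
-- and the primary key exists, so the optimizer can cost the index.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_SCHEMA',
    tabname          => 'TEMP_TABLE',
    estimate_percent => 25,
    cascade          => TRUE  -- also gather statistics on the indexes
  );
END;
/
```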
You can use hints without changing SQL by leveraging SQL Profiles. Wrap your hint(s) into a SQL Profile that takes effect for that particular SQL ID.
I understand you don't have control over the SQL; I have many apps where I encounter the same restriction. After checking the query structure and statistics as in Ben's post, and once you have proved that hinting the index improves performance, why not try a manually created SQL Profile?
Christian Antognini has a great paper here about SQL Profiles and creating them manually. The paper mentions that creating SQL Profiles manually is undocumented. I would agree it's undocumented, but that doesn't necessarily mean unsupported. I would say there is little documentation out there, but if you want proof that Oracle allows manual creation, check the API or look at the coe_xfr_sql_profile.sql file in the SQLT utility directory.
I also posted a cheatsheet on how to quickly manually create a SQL Profile here.
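A manual profile created through the API might be sketched like this; the SQL text, hint, and profile name are all placeholders, and (as noted above) this use of DBMS_SQLTUNE is undocumented, so verify it on your version first:

```sql
-- Attach an index hint to a specific statement without changing its SQL.
-- force_match => TRUE makes the profile match across differing literals.
BEGIN
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => 'SELECT * FROM temp_table WHERE id = :1',
    profile     => sqlprof_attr(
                     'INDEX(@"SEL$1" "TEMP_TABLE"@"SEL$1" ("TEMP_TABLE"."ID"))'),
    name        => 'manual_profile_temp_table',
    force_match => TRUE
  );
END;
/
```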

load data into text file from oracle database views

I want to load data from Oracle views into a text file. How can I achieve this in Oracle on a UNIX box? Please help me out, as this is already consuming a lot of time.
Your early response is highly appreciated!
As Thomas asked, we need to know what you are doing with the "flat file". For example, if you're loading it into a spreadsheet or doing some other processing that expects a defined format, then you need to use SQL*Plus and spool to a file. If you're looking to save a table (data + table definition) for moving it to another Oracle database, then EXP/IMP is the tool to use.
We generally describe the data retrieval process as "selecting" from a table/view, not "executing" a table/view.
If you have access to directories on the database server, and authority to create "Directory" objects in Oracle, then you have lots of options.
For example, you can use the UTL_FILE package (part of the PL/SQL built-ins) to read or write files at the operating system level.
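A minimal UTL_FILE sketch, assuming a DIRECTORY object named EXPORT_DIR already exists and the view and column names are placeholders:

```sql
-- Write the rows of a view to a flat file on the database server.
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'my_view.txt', 'w');
  FOR r IN (SELECT col1, col2 FROM my_view) LOOP
    UTL_FILE.PUT_LINE(l_file, r.col1 || ',' || r.col2);
  END LOOP;
  UTL_FILE.FCLOSE(l_file);
END;
/
```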
Or use the "external table" functionality to define objects that look like single tables to Oracle but are actually flat files at the OS level. Well documented in the Oracle docs.
Also, for one-time tasks, most of the tools for working with SQL and PL/SQL provide facilities for moving data to and from the database. In the Windows environment, Toad is good at that. So is Oracle's free SQL Developer, which runs on many platforms. You wouldn't want to use those for a process that runs every day, but they're fine for single moves. I've generally found these easier to use than SQL*Plus spooling, but that's a primitive version of the same functionality.
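The SQL*Plus spooling approach mentioned earlier might look like this (the view, columns, and output path are placeholders):

```sql
-- Spool a view's rows to a flat file from SQL*Plus.
SET PAGESIZE 0      -- no headers or page breaks
SET FEEDBACK OFF    -- no "n rows selected" footer
SET TRIMSPOOL ON    -- strip trailing blanks from each line
SPOOL /tmp/my_view.txt
SELECT col1 || ',' || col2 FROM my_view;
SPOOL OFF
```

This is easy to wrap in a UNIX shell script or cron job, which fits the "on a UNIX box" requirement in the question.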
As stated by others, we need to know a bit more about what you're trying to do.