Can Oracle allow a permanent alias for a table? - oracle

I was given an Oracle dump file for an existing system. The dump file contains the table PARTS, but when I look at the queries issued by the code, it mostly uses M_PARTS and only on one occasion uses PARTS. Does Oracle allow multiple names for a table?
Note that I am not talking about the query alias feature, i.e.
Select M_PARTS.*
from PARTS M_PARTS
I want to know whether there is a way to make a permanent alias in Oracle, where I just create a table PARTS and can refer to it as either PARTS or M_PARTS in my queries.

Kind of, as you can create synonyms:
CREATE SYNONYM PARTS FOR THE_SCHEMA.M_PARTS;
It is weird, however, that the dump file would be inconsistent that way. Are you sure it is the same table? How was the file created?

Yes, using synonyms.

Although a synonym would have been a solution, I found the actual script that builds the database, and it uses a materialized view instead of a synonym.
create materialized view M_Parts
tablespace USERS
refresh fast
as select * from Parts
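For reference, REFRESH FAST normally requires a materialized view log on the base table, so a script like that would usually be accompanied by something along these lines (a sketch, assuming Parts has a primary key):
create materialized view log on Parts;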

Related

Oracle user DB export command's scope (User/Schema level)?

I'm a total novice in terms of Oracle DB knowledge, and I'm trying to understand the IMPDP command and its scope.
Issue: Suppose there are 500 tables in a particular DB, and many of them (60%-70% or more) come through with zero records when we import the data into a fresh Oracle DB (we get the data from a vendor who hosts the DB). The doubt is: how can most of the tables in a DB have zero records (why were they created in the first place, then)? We're also assuming the vendor may be generating the .DMP files with a specific user who has no access to those tables, hence the zero counts. When we asked the vendor, they said that's not how Oracle works; they have provided a user export dump and said, "A schema is a collection of database objects owned by a specific user. Those objects include tables, indexes, views, functions, stored procedures, etc."
When asked about the zero-records issue, they said they're pulling the data correctly and have no idea why so many tables are empty. The SO community has great experts in Oracle DB; can anyone shed some light on the following:
1) What might be the issue?
2) Is our assumption correct (i.e., that the user doesn't have access to the tables that came through with zero records)?
3) What's the right way forward?
4) Anything else you want to add?
The vendor is correct: the utility used to generate the export, EXPDP (the complement of IMPDP), can create a full dump of all of the database objects owned by a specific user. However, the parameters used to generate the export can vary greatly, and it is absolutely possible for an export to contain no table data IF the EXPDP command/parameters used to create the export are specified that way. For example, let's imagine that someone wants to export a specific schema using the following command:
expdp [USER]@[DATABASE] schemas=test directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log query=TEST.TABLE:'"WHERE row_date>sysdate"'
While the export is being generated, every row in that specific table is evaluated against the WHERE condition. Only rows with a date in the future would be exported; anything dated on or before SYSDATE would be skipped. If a WHERE condition like that is applied to the entire export, you'll end up with tables that have 0 rows in the dump file.
That is just an example; it might also be the case that the tables really do have 0 rows. That is possible for a lot of reasons: perhaps it is an older schema whose tables were truncated at some point; perhaps that particular database isn't used often and rows were simply never added to those tables; maybe a developer or another DBA created a bunch of unnecessary tables and they were never dropped. There is a plethora of potential reasons for a schema to have empty tables, and it doesn't mean there is something wrong with the database or with the export file being generated. Applications and their technical requirements change all the time, and it's possible the schema simply wasn't cleaned up when those tables were no longer needed.
The first thing I would recommend:
Ask the vendor to provide record counts for each table in that schema from their end, for validation purposes. That will tell you whether the tables are empty in the source database; if they are empty there, they will be empty in your export. It is very simple to do and can be achieved with a query like the following, provided their table statistics are up to date:
select owner, table_name, num_rows, sample_size, last_analyzed from all_tables where owner = '[SCHEMA]';
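On the receiving side, the same kind of check can be run after the import so the two lists can be compared. A sketch (NUM_ROWS is only populated once statistics are gathered, and TEST is just the placeholder schema from the example above):
exec dbms_stats.gather_schema_stats('TEST')
select owner, table_name, num_rows, last_analyzed
  from all_tables
 where owner = 'TEST'
 order by table_name;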
If this is a big concern for you, you can always ask them to exclude those tables in the export with a command like:
expdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Or simply exclude them during your import with a command like:
impdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Either way should work, but be careful and ensure that there will be no issues from a constraint/child record perspective. You can also exclude the constraints. There are many ways to work around it.
IF THERE ARE INCONSISTENCIES BETWEEN THE COUNTS AND THE ROWS IMPORTED, I would recommend asking the vendor for the specific EXPDP command or parameter file that was used to generate the export. This will tell you whether the empty tables are being caused by a clause in the export command.
It's impossible to know if your assumption is correct without knowing more about the database the export is coming from or seeing the commands being used to generate the export. I would ask the vendor to verify record counts before assuming that it's a permission issue. Empty tables are created all the time.

Selection of a table (or view) in Oracle that I cannot find in TOAD

I am reverse-engineering an application which administers an Oracle database.
Everything is new to me (application + database)
There is a statement there somewhere, which is:
SELECT * FROM XXX@YYY (XXX is a word, YYY another word)
If I go into my database with TOAD, I can't find an 'XXX@YYY' table or view. Yet if I copy and paste the statement into TOAD's editor, I get results as if the table exists.
I know that some special symbols are allowed when naming Oracle objects. Is it possible that the '@' means something else here, though?
How can I find the table (or view)? Is it possible to get information through a statement, such as which schema 'XXX@YYY' belongs to, or whether it is a table or a view, so that I can track it down?
The database consists of many schemas, and there is a default one. Is it possible that XXX@YYY belongs to another schema rather than the default?
Please help me find the table.
The identifier behind the '@' is a database link. It is a way to access objects on a remote Oracle server. More info at http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_5005.htm#SQLRF01205
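To confirm that and track the object down from a query, the data dictionary helps; a sketch using the placeholder names XXX and YYY from the question:
-- Where does the database link YYY point, and who owns it?
select owner, db_link, username, host
  from all_db_links
 where db_link like 'YYY%';
-- Is XXX a table or a view on the remote side, and in which schema?
select owner, object_name, object_type
  from all_objects@YYY
 where object_name = 'XXX';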
In Toad/Oracle, XXX@YYY means object@database_link.
Look for the schema in your DB; there you will find the table.
Btw: I think it's better to use SCHEMA.TABLENAME.
If you have trouble finding the schema, go to View -> Toad Options, select Treeview as the Browser style, and it should then display all schemas.

Dynamic SQL*Loader control file

I have 20 temp tables that we constantly load data into and validate, and I have a control file for each table.
How can I have a single control file where only the table the data is loaded into changes?
Any suggestion?
Thanks in advance!
---Oracle info---
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit
Suggest you write your control file to load the data into a synonym rather than into a specific table. Begin each load run by redefining the synonym to point at the table you want.
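A minimal sketch of that approach, with hypothetical names (LOAD_TARGET as the synonym, TMP_PARTS as one of the 20 temp tables, and made-up columns):
-- Before each run, point the synonym at the table to be loaded:
create or replace synonym load_target for tmp_parts;
-- load_target.ctl: the one control file, reused for every run
LOAD DATA
INFILE 'load_target.dat'
TRUNCATE
INTO TABLE load_target
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(part_id, part_name, part_qty)
TRUNCATE is just one choice of load method; APPEND or REPLACE work the same way. Note that a single column list only works cleanly if the tables share the same layout; otherwise the field list would still have to change per table.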
Maybe you can use multiple INTO TABLE clauses and distinguish between them, somehow, with the WHEN clause.
Look here for more details
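For completeness, a sketch of that multiple INTO TABLE / WHEN idea, assuming every record carries a one-character type indicator in position 1 (the tables and columns are made up):
LOAD DATA
INFILE 'data.dat'
APPEND
INTO TABLE tmp_parts
  WHEN (1:1) = 'P'
  FIELDS TERMINATED BY ','
  (rec_type FILLER, part_id, part_name)
INTO TABLE tmp_orders
  WHEN (1:1) = 'O'
  FIELDS TERMINATED BY ','
  (rec_type FILLER POSITION(1), order_id, order_qty)
The POSITION(1) on the second clause resets the scan to the start of each record; without it, SQL*Loader would continue parsing from where the first INTO TABLE clause stopped.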

Script Oracle tables (DDL) with data insert statements into single/multiple sql files

I need to export the tables for a given schema into DDL scripts and INSERT statements, and have it scripted such that the order of dependencies/constraints is maintained.
I came across this article suggesting how to archive the database with data - http://www.dba-oracle.com/t_archiving_data_in_file_structures.htm - but I'm not sure whether the article is applicable to Oracle 10g/11g.
I have seen "export table with data" features in SQL Developer, Toad for Oracle, DreamCoder for Oracle, etc., but I would need to do this one table at a time and would still need to figure out the right order of script execution manually.
Are there any tools/scripts that can utilize oracle metadata and generate DDL script with data?
Note that some of the tables have CLOB datatype columns - so the tool/script would need to be able to handle these columns.
P.S. I need something similar to the "Generate Scripts" feature in SQL Server 2008, where one can specify the "script data" option and get back a self-sufficient script with DDL and data, generated in the order of the table constraints. Please see: http://www.kodyaz.com/articles/sql-server-script-data-with-generate-script-wizard.aspx
Thanks for your help!
Firstly, recognise that this isn't necessarily possible. A view can use a function in a package that also selects from the view. Another issue is that you might need to load data into tables and then apply constraints, even though this might be slower than the other way round.
In short, you will need to do some work here.
Work out the dependencies in your system. ALL_DEPENDENCIES is the primary mechanism.
Then use DBMS_METADATA.GET_DDL to extract the DDL statements. For small data volumes, I'd extract the constraints separately for applying after the data load.
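A rough sketch of the DBMS_METADATA part, run from SQL*Plus, with MYSCHEMA standing in for the schema to be scripted (the schema name and spooling are assumptions, not anything from the question):
set long 100000 pagesize 0
-- Keep a terminator on each statement and leave constraints out of the table DDL:
begin
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'CONSTRAINTS', false);
  dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'REF_CONSTRAINTS', false);
end;
/
-- Table DDL, one statement per table:
select dbms_metadata.get_ddl('TABLE', table_name, owner) from all_tables where owner = 'MYSCHEMA';
-- Foreign keys, extracted separately so they can be applied after the data load
-- (note: get_dependent_ddl raises an error for tables that have no such constraints):
select dbms_metadata.get_dependent_ddl('REF_CONSTRAINT', table_name, owner) from all_tables where owner = 'MYSCHEMA';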
In current versions you can create external tables to unload data from regular tables into OS files (and obviously go the other way round). But if you've got exotic datatypes (BLOB, RAW, XMLTYPEs, User Defined Types....) it will be more challenging.
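The external-table unload mentioned there looks roughly like this (a sketch; PARTS is a placeholder table and DATA_PUMP_DIR an existing directory object):
-- Unload the rows of PARTS into an OS file via the ORACLE_DATAPUMP access driver:
create table parts_unload
  organization external (
    type oracle_datapump
    default directory data_pump_dir
    location ('parts_unload.dmp')
  )
as select * from parts;
-- On the target side, an external table defined over that same file can be read
-- back with a plain INSERT ... SELECT into the real table.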
I suggest that you use Oracle's standard export and import (exp/imp) here; is there a reason why you won't consider it? Note that, in addition, you can use the INDEXFILE option on the import to write the SQL statements (unfortunately this doesn't include the inserts) to a file instead of actually executing them.

Oracle: is it possible to create a synonym for a schema?

Firstly
I am an oracle newbie, and I don't have a local oracle guru to help me.
Here is my problem / question
I have some SQL scripts which have to be released to a number of Oracle instances.
The scripts create stored procedures.
The schema in which the stored procedures are created is different from the schema which contains the tables from which the stored procedures are reading.
On the different instances, the schema containing the tables has different names.
Obviously, I do not want to have to edit the scripts to make them bespoke for different instances.
It has been suggested to me that the solution may be to set up synonyms.
Is it possible to define a synonym for the table schema on each instance, and use the synonym in my scripts?
Are there any other ways to make this work without editing the scripts every time?
Thank you for any help.
Yes, you can create a synonym for a schema, via a hidden parameter.
First check the hidden parameter:
select ksppinm, ksppstvl from x$ksppi a, x$ksppsv b where a.indx = b.indx and ksppinm like '%schema%synonym%';
Then enable it, restart, and verify:
ALTER SYSTEM SET "_enable_schema_synonyms" = true SCOPE=SPFILE;
STARTUP FORCE
show parameter synonym
Assuming you already have a schema named ORA...
CREATE SCHEMA SYNONYM ORASYN FOR ORA;    -- create synonym for the schema
CREATE TABLE ORASYN.TAB1(id number(10)); -- create a table through the schema synonym
More information here: https://dbaclass.com/article/how-to-create-synonym-for-a-schema/
It would help to know which version of Oracle you are on, but as of 10g: no, you can't make a synonym for a schema.
You can create synonyms for the tables, which would allow you to avoid specifying the schema in the scripts. But it means the synonyms have to be identical on every instance to be of any use...
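If you do go the per-table synonym route, the CREATE SYNONYM statements don't have to be written by hand; they can be generated from the data dictionary. A sketch, with DATA_OWNER standing in for the schema that owns the tables:
-- Run as the schema that owns the stored procedures; spool the output and execute it:
select 'CREATE OR REPLACE SYNONYM ' || table_name ||
       ' FOR DATA_OWNER.' || table_name || ';'
  from all_tables
 where owner = 'DATA_OWNER';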
The other option would be to replace the schema references with substitution variables, so that when the script runs the user is prompted for the schema names. I prefer this approach because it's less work. Here's an example that would work in SQL*Plus:
CREATE OR REPLACE VIEW &&schema1..vw_my_view AS
SELECT *
  FROM &&schema2..some_other_table;
The beauty of the double-ampersand (&&) substitution is that the person who runs the script is only prompted once for each variable, not every time the variable is encountered. So be careful about typos :)
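If you would rather not be prompted at all (for example in an automated release), the same scripts can be driven by DEFINE from a wrapper script; a sketch with made-up names and values:
-- wrapper.sql: set the values once, then run the release scripts
define schema1 = PROC_OWNER
define schema2 = DATA_OWNER
@create_views.sql
@create_procedures.sql
undefine schema1 schema2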
Yes, there is a hidden way to create a schema synonym.
There is a hidden parameter, _enable_schema_synonyms. It's false by default, but you can set it to true and then create the synonym.
