Oracle user DB export command's scope (User/Schema level)?

I'm a total novice in terms of Oracle DB knowledge. I'm trying to understand the IMPDP command and its scope.
Issue: Suppose there are 500 tables in a particular DB, and many of them (60% - 70% or more) come through with zero records when we import the data into a fresh Oracle DB (we receive the data from a vendor who hosts the DB). The doubt is: how can most of the tables in a DB have zero records (why were they created in the first place, then)? We're also assuming the vendor may be generating the .DMP files as a specific user who has no access to those tables, hence the 0 count. When we asked the vendor, they said that's not how Oracle works; they've provided a user export dump and said, "A schema is a collection of database objects owned by a specific user. Those objects include tables, indexes, views, functions, stored procedures, etc."
When asked about the zero records issue, they said they're pulling the data correctly and have no idea why so many tables are empty. The SO community has great Oracle DB experts; can anyone shed some light on:
1) What might be the issue?
2) Is our assumption correct (i.e., that the user doesn't have access to the tables which came through with zero records)?
3) What's the right way forward?
4) Anything else you want to add.

The vendor is correct - the utility used to generate the export, EXPDP (the complement to IMPDP), can create a full dump of all of the database objects owned by a specific user. However, the parameters used to generate the export can vary greatly, and it's absolutely possible for an export to not include table data IF the EXPDP command/parameters used to create the export are specified that way. For example, let's imagine that someone wants to export a specific schema using the following command:
expdp [USER]@[DATABASE] schemas=test directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log query=TEST.TABLE:'"WHERE row_date>sysdate"'
While the export is being generated, all of the rows in that specific table will be evaluated against the WHERE condition. Only rows with a date in the future will be exported; every row dated on or before sysdate will be skipped. If a where condition like that is applied to the entire export, you'll end up with tables that have 0 rows in the dump file.
That is just an example - it might also be the case that the tables really do have 0 rows. This is possible for a lot of reasons - perhaps it is an older schema with tables that were truncated at some point. Perhaps that particular database isn't used often, and the tables within the schema are empty because rows were never added to them. Maybe a developer or another DBA created a bunch of unnecessary tables and they simply were never dropped. There could be a plethora of potential reasons for a schema to have empty tables, and that doesn't mean there is something wrong with the database or the export file being generated. Applications and their technical requirements change all the time, and it's possible that the schema simply wasn't updated when those tables were no longer needed.
The first thing I would recommend is:
Ask the vendor to provide record counts of each table in that schema from their end for validation purposes. This will tell you whether the tables are empty in the source database; if they are empty there, they will be empty in your export. This is very simple and can be achieved with a query like select owner, table_name, num_rows, sample_size, last_analyzed from all_tables where owner=[SCHEMA]; provided that their table statistics are up to date.
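If the table statistics are stale, num_rows can be misleading, so exact counts may be worth generating instead. A minimal sketch of doing that with an anonymous PL/SQL block (the schema name TEST is just a placeholder to match the examples below):
set serveroutput on
declare
  l_count number;
begin
  for t in (select owner, table_name from all_tables where owner = 'TEST' order by table_name) loop
    execute immediate 'select count(*) from "' || t.owner || '"."' || t.table_name || '"' into l_count;
    dbms_output.put_line(t.table_name || ': ' || l_count);
  end loop;
end;
/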
If this is a big concern for you, you can always ask them to exclude those tables in the export with a command like:
expdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Or simply exclude them during your import with a command like:
impdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Either way should work, but be careful and ensure that there will be no issues from a constraint/child record perspective. You can also exclude the constraints. There are many ways to work around it.
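For reference, referential constraints can be excluded the same way; a hedged example combining both filters (the REF_CONSTRAINT exclusion is only needed if foreign keys point at the skipped tables):
impdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" exclude=REF_CONSTRAINT directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log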
IF THERE ARE INCONSISTENCIES BETWEEN THE COUNTS AND THE ROWS IMPORTED, I would recommend asking the vendor for the specific EXPDP command or parameter file that was used to generate the export. This will tell you whether the empty tables are being caused by a clause in the export command.
It's impossible to know whether your assumption is correct without knowing more about the database the export is coming from or seeing the commands being used to generate the export. I would ask the vendor to verify record counts before assuming that it's a permission issue. Empty tables are created all the time.

Related

Oracle Export/Import issues with Tablespace

I created a dump of a local oracle database like this:
expdp mydb/passwd -schemas=myschema -dumpfile=mydumpfile.dmp -logfile=oralog.log
I sent the dump to someone who is supposed to import it into his Oracle server. Now he tells me the import fails due to errors related to tablespaces (like "tablespace XYZ is not available"; the tablespace XYZ has no counterpart in his database). Besides, he asks me for some information about the tablespaces used in the dump.
Since I usually work with MySQL and have limited knowledge about these Oracle tablespace things, I would really appreciate some advice.
Use REMAP_TABLESPACE parameter.
For example,
REMAP_TABLESPACE=(source1:destination1,source2:destination1,source3:destination1,source4:destination1)
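For illustration, a hedged impdp invocation built around the dump from the question (the connection details and the tablespace names source_ts/USERS are assumptions that need to be adapted):
impdp system/passwd directory=DATA_PUMP_DIR dumpfile=mydumpfile.dmp logfile=imp.log schemas=myschema remap_tablespace=source_ts:USERS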
Go through the documentation about Data Pump Import. A small quote -
Multiple REMAP_TABLESPACE parameters can be specified, but no two can have the same source tablespace. The target schema must have sufficient quota in the target tablespace.
Note that use of the REMAP_TABLESPACE parameter is the only way to remap a tablespace in Data Pump Import. This is a simpler and cleaner method than the one provided in the original Import utility. That method was subject to many restrictions (including the number of tablespace subclauses) which sometimes resulted in the failure of some DDL commands.
By contrast, the Data Pump Import method of using the REMAP_TABLESPACE parameter works for all objects, including the user, and it works regardless of how many tablespace subclauses are in the DDL statement.

Do two users access the same database or different?

I installed Oracle on my system, so now orcl is the SID, which is the unique identifier of my database instance.
A starter DB was created as part of the installation. I created 2 users, user1 and user2, using the system account.
Using SQL Developer I am accessing the users; this shows me 2 different connections, each with all the database objects like tables, stored procedures, views, etc.
So:
1) When using these 2 users, am I accessing the same database? I am issuing all the DDL commands by logging in as user1 or user2; does all this data go into the same .dbf file?
2) The database instance can be connected to only one database, so does this essentially mean that every time I create a new database I need to make a configuration change so that an instance points to it?
In my experience with Oracle, the typical unit of division is a schema. Schemas in Oracle are used more like you would use databases in SQL Server or PostgreSQL. They represent both users and a logical separation of objects. Physical separation would usually be done using tablespaces. Tablespaces are a group of physical files where data is stored. Schemas can share or use different tablespaces. Having one tablespace per schema is uncommon; they usually share a few tablespaces or often even just one.
With that in mind, to answer your questions more directly,
1) Like in any other database, you can specify the schema the object belongs to:
CREATE TABLE MY_SCHEMA.TABLE_X ( X NUMBER )
If the schemas on two CREATE statements are different, then they will create different objects. What's different in Oracle is that the default schema changes for every user: the default schema is always the currently connected schema/user. So if you omit the schema, like so:
CREATE TABLE TABLE_X ( X NUMBER )
then the implied schema is the currently connected schema/user. So if I'm logged in as MY_SCHEMA, then the above is equivalent to the first example. When connecting as two different users, then the implied schema will be different and the DDL is not equivalent between the two users. So running the same statement would create two different objects if you do not specify a schema.
The two objects may be stored in the same physical file if they are in the same tablespace. (They are most likely in the USERS tablespace if you did not create one explicitly and did not specify a different default tablespace when creating the schemas.) Regardless, they are still two completely separate objects.
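To check where the two objects actually ended up, you can query the data dictionary; for example (TABLE_X being the example table from above):
select owner, table_name, tablespace_name
from all_tables
where table_name = 'TABLE_X';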
If you specify the schema explicitly like in the first example, then the DDL is equivalent regardless of who executes it (although permissions may prevent some users from executing it). So it would result in creating the object once, and attempting to create it a second time would result in an error unless you're using CREATE OR REPLACE or something similar.
2) I don't know the answer to this question, but as I said, in Oracle, the basic unit of separation is usually the schema, not a database. I believe the question you're asking is a large part of the reason why the schemas are used in the way they are. Having multiple actual databases on the same machine/instance is far more difficult in Oracle than in other databases (if not impossible), so it's much simpler to have a single database with many schemas.

Attempting to use SQL-Developer to analyze a system table dump created with 'exp'

I'm attempting to recover the data from a specific table that exists in a system table dump I performed earlier. I would like to append the rows existing in the dump to any rows that may exist in the active table. The problem is, it's likely that the name of the table in the dump is not the same as what exists in the database currently (they're dynamically created with a prefix of ARC_TREND_). In addition, I don't know the name of the table as it exists in the dump; I was hoping to use SQL Developer to analyze the dump file, as I can recognize the correct table by its columns and its existing rows.
While I'm going on blind faith that SQL Developer can work with my dump file, when attempting to open it I'm getting a Java heap OutOfMemory exception. I've adjusted the maximum heap size from 640m to 1024m in both sqldeveloper.bat and sqldeveloper.conf, but to no avail.
Can someone recommend a course of action for me to take to recover the data from a table which exists in an exp-created dump file? A graphical tool would be nice, but I'm no stranger to the command line. I need to analyze the tables that exist in the dump in order to pick the correct one out. Then I assume I can use imp TABLES= to bring it back into the active instance. It likely won't match the existing table name, so I will use SQL Developer to copy the rows from the imported table to the table where I need them to be.
The dump was taken from a Linux server running 10g, and will be imported to (the same server & database instance, upgraded) an 11g instance of the same database.
Thanks
Since you're referring to imp rather than impdp, I assume this wasn't exported with data pump. Either way, I doubt you'll get anything useful through SQL Developer.
Fortunately most of what you're trying to do is quite easy from the command line; just run imp with the INDEXFILE parameter, which will give you a text file containing all the table (commented out with REM) and index creation commands. From that you should be able to spot the table from its column names.
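A hedged example of that invocation (the connection details and file names are placeholders):
imp user/password file=mydump.dmp full=y indexfile=ddl_listing.sql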
You can't really see any row data though, so if there's more than one possible match you might need to import several tables and inspect the data in them in the database to see which one you really want.

Creating re-runnable Oracle DDL SQL Scripts

Our development team does all of their development on their local machines, databases included. When we make changes to schemas we save the SQL to a file that is then sent to the version control system (if there is a better practice for this, I'd be open to hearing about that as well).
When working on SQL Server we'd wrap our updates in "if exists" statements to make them re-runnable. I am now working on an Oracle 10g project and I can't find any Oracle functions that do the same thing. I was able to find this thread on dbaforums.org, but the answer there seems a bit kludgy.
I am assuming this is for some sort of automated build process, where you redo the build from scratch if something fails.
As Shannon pointed out, PL/SQL objects such as Procedures, functions and Packages have the "create or replace" option, so a second recompile/re-run would be ok. Grants should be fine too.
As for Table creations and DDLs, you could take one of the following approaches.
1) Do not add any drop commands to the scripts and ask your development team to come up with the revert-script for the individual modules.
So for each CREATE TABLE that they add to the build, they will add an equivalent "DROP TABLE ..." to a script, say "build_rollback.sql". If your build fails, you can run this script before running the build from scratch.
2) The second (and most frequently used) approach I have seen is to include the DROP TABLE just before the CREATE TABLE statement and then ignore the "table or view does not exist" errors in the build log. Something like:
DROP TABLE EMP;
CREATE TABLE EMP (
.......
.......
);
The thread you posted has a major flaw: in practice you create tables incrementally. E.g., your database already has 100 tables and you are adding 5 more as part of this release. The script spools the DROP/CREATE statements for all 100 tables and then executes them, which does not make a lot of sense (unless you are building your database for the first time).
An SQL*Plus script will continue past errors unless configured otherwise (for example with WHENEVER SQLERROR EXIT).
So you could have all of your scripts use:
DROP TABLE TABLE_1;
CREATE TABLE TABLE_1 (...
This is an option in PowerDesigner, I know.
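If the spurious "table or view does not exist" errors in the build log are a concern, a conditional drop can also be emulated with an anonymous PL/SQL block - a minimal sketch of that pattern:
BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE TABLE_1';
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE != -942 THEN -- ORA-00942: table or view does not exist
      RAISE;
    END IF;
END;
/
CREATE TABLE TABLE_1 ( X NUMBER );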
Another choice would be to write a PL/SQL script which scrubs a schema, iterating over all existing tables, views, packages, procedures, functions, sequences, and synonyms in the schema, issuing the proper DDL statement to drop them.
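A rough sketch of such a scrub script, iterating over the current user's objects (the object-type list and the blanket exception handler are simplifications you would want to tighten up):
BEGIN
  FOR obj IN (SELECT object_name, object_type
                FROM user_objects
               WHERE object_type IN ('TABLE','VIEW','PACKAGE','PROCEDURE','FUNCTION','SEQUENCE','SYNONYM'))
  LOOP
    BEGIN
      IF obj.object_type = 'TABLE' THEN
        EXECUTE IMMEDIATE 'DROP TABLE "' || obj.object_name || '" CASCADE CONSTRAINTS';
      ELSE
        EXECUTE IMMEDIATE 'DROP ' || obj.object_type || ' "' || obj.object_name || '"';
      END IF;
    EXCEPTION
      WHEN OTHERS THEN
        NULL; -- object may already be gone (e.g. dropped along with a parent); ignore and continue
    END;
  END LOOP;
END;
/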
I'd consider decomposing the SQL to create the database; one giant script containing everything for the schema sounds murderous to maintain in a shared environment. Dividing at a Schema / Object Type / Name level might be prudent, keeping fully dependent object types (like Tables and Indexes) together.

Script Oracle tables (DDL) with data insert statements into single/multiple sql files

I need to export the tables for a given schema into DDL scripts and INSERT statements - and have it scripted such that the order of dependencies/constraints is maintained.
I came across this article suggesting how to archive the database with data - http://www.dba-oracle.com/t_archiving_data_in_file_structures.htm - but I'm not sure whether the article is applicable to Oracle 10g/11g.
I have seen "export table with data" features in SQL Developer, Toad for Oracle, DreamCoder for Oracle, etc., but I would need to do this one table at a time, and would still need to figure out the right order of script execution manually.
Are there any tools/scripts that can utilize oracle metadata and generate DDL script with data?
Note that some of the tables have CLOB datatype columns - so the tool/script would need to be able to handle these columns.
P.S. I need something similar to the "Generate Scripts" feature in SQL Server 2008, where one can specify the "script data" option and get back a self-sufficient script with DDL and data, generated in the order of table constraints. Please see: http://www.kodyaz.com/articles/sql-server-script-data-with-generate-script-wizard.aspx
Thanks for your help!
Firstly, recognise that this isn't necessarily possible. A view can use a function in a package that also selects from the view. Another issue is that you might need to load data into tables and then apply constraints, even though this might be slower than the other way round.
In short, you will need to do some work here.
Work out the dependencies in your system. ALL_DEPENDENCIES is the primary mechanism.
Then use DBMS_METADATA.GET_DDL to extract the DDL statements. For small data volumes, I'd extract the constraints separately for applying after the data load.
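A minimal sketch of pulling table DDL this way (MYSCHEMA is a placeholder; constraints can be extracted separately with the 'CONSTRAINT' and 'REF_CONSTRAINT' object types):
select dbms_metadata.get_ddl('TABLE', table_name, owner)
from all_tables
where owner = 'MYSCHEMA';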
In current versions you can create external tables to unload data from regular tables into OS files (and obviously go the other way round). But if you've got exotic datatypes (BLOB, RAW, XMLTYPEs, User Defined Types....) it will be more challenging.
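For completeness, a hedged example of unloading a table to an OS file via an ORACLE_DATAPUMP external table (the directory object, table and file names are assumptions):
create table emp_unload
organization external (
  type oracle_datapump
  default directory data_pump_dir
  location ('emp_unload.dmp')
)
as select * from emp;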
I suggest that you use Oracle's standard export and import utilities (exp/imp) here - is there a reason why you won't consider them? Note in addition that you can use the INDEXFILE option on the import to write the SQL statements to a file instead of actually executing them (unfortunately this doesn't include the inserts).
