How do I export a table from one database to another database in Oracle (SQL*Plus)?

I have a table in one database and I want to export it, then import it into a new table in a different database.
Should I set up the table with the same fields in the second database, or is there a way to create an empty table so the import will work?

If you have a dblink established, a quick way to copy a table without intermediate files would be to execute this from the target database (the one where you want the new table to be copied):
create table my_new_table as
select *
from my_original_table@my_original_database
This presupposes the dblink, of course, and also that there is sufficient redo space to allow that much data to be copied in one fell swoop.
If not, you could also build the table this way and then do a bunch of insert into transactions to move the data in chunks.
If you only want the structure (your question sort of implied that, but I wasn't sure), you can always add a where clause that is never true, such as where 1 = 3, to copy only the structure.
This won't bring over constraints or indexes, but I'm not sure whether that matters for what you seek.
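Put together, a structure-only copy followed by chunked inserts might look roughly like this (the table and dblink names are the hypothetical ones from above, and some_key is an invented chunking column):
create table my_new_table as
select *
from my_original_table@my_original_database
where 1 = 3;   -- false condition: copies the column definitions, no rows
-- then move the data over in manageable chunks, committing as you go
insert into my_new_table
select *
from my_original_table@my_original_database
where some_key between 1 and 100000;
commit;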

Related

Oracle DB Create Table as Copy Vs. Merging Data into Empty Table

My question is related to Oracle DB performance, and ideally to finding the better of the following two approaches when creating a backup table:
Create a new table as a copy of an existing one
Merge data into an existing empty table (the two tables are identical)
If it is a small table, it doesn't matter - both will be fast. Though CTAS (create table as select) is probably the most usual way to create a "copy" of an existing table.
If the table is very large, I don't know how CTAS compares to merge; you should test it.
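For illustration, the two approaches being compared might look like this (table and column names are made up; id and col1 stand in for the real column list):
-- Option 1: CTAS - creates the copy and loads it in a single statement
create table my_table_copy as
select * from my_table;
-- Option 2: MERGE into an identical, pre-created empty table
merge into my_table_copy c
using my_table t
on (c.id = t.id)
when not matched then
  insert (c.id, c.col1)
  values (t.id, t.col1);
commit;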
However, a backup table? Are you sure that's the right way to back up a table? I'd rather think of a proper (RMAN) database backup, or - at least - an export (using Data Pump Export) into a file in the filesystem, which can then be stored elsewhere, e.g. on an external hard disk drive, a DVD or similar (does anyone use tapes any more? We do).
Because if the database breaks down, that "backup" table will be lost along with your "original" table.

Data Migration between 2 tablespaces

I need to migrate data from a table stored in tablespace (A) to a different table stored in tablespace (B).
The standard way of moving data is to use Data Pump: export from the source, import into the target.
This is the 12c version's documentation: https://docs.oracle.com/database/121/SUTIL/GUID-501A9908-BCC5-434C-8853-9A6096766B5A.htm#SUTIL2877 - have a look.
Additionally, depending on database version you use, there are the original export/import utilities you might (have to) use.
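A minimal Data Pump round trip might look something like this (run from the operating system prompt, not SQL*Plus; the user, database, schema and file names are placeholders):
expdp my_user@source_db schemas=my_schema directory=DATA_PUMP_DIR dumpfile=my_schema.dmp logfile=my_schema_exp.log
impdp my_user@target_db schemas=my_schema directory=DATA_PUMP_DIR dumpfile=my_schema.dmp logfile=my_schema_imp.log remap_tablespace=A:B
The remap_tablespace parameter is what lets the imported segments land in tablespace (B) instead of (A).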
[EDIT] Whoa? Two different "connections" became "tablespaces" (after you edited the question).
If it means that tables reside in the same database (but in different tablespaces), then a simple insert does the job, e.g.
insert into table_b select * from table_a
Tablespace isn't involved in that operation.

Oracle user DB export command's scope (User/Schema level)?

I'm a total novice in terms of Oracle DB knowledge, and I'm trying to understand the IMPDP command and its scope.
Issue: Suppose there are 500 tables in a particular DB, and many of them (60% - 70% or more) come through with zero records when we import the data into a fresh Oracle DB (we get the data from a vendor who has the DB). The doubt is: how can most of the tables have zero records in a DB (why were they created in the first place then)? We're also assuming that maybe the vendor is generating the .DMP files as a specific user who has no access to those tables, hence the 0 count. When we asked the vendor, they said that's not how Oracle works; they've provided a user export dump and said, "Schema is a collection of database objects owned by a specific user. Those objects include tables, indexes, views, functions, stored procedures, etc."
When asked about the zero records issue, they said they're pulling correctly and have no idea why so many tables are empty. The SO community has great experts in Oracle DB, so can anyone shed some light on:
1) What might be the issue?
2) Is our assumption correct (i.e., that the user doesn't have access to the tables which got zero records)?
3) What's the right way forward?
4) Anything else you want to add.
The vendor is correct - the utility used to generate the export, EXPDP (the complement to IMPDP), can create a full dump of all of the database objects of a specific user. However, the parameters used to generate the export can vary greatly, and it's absolutely possible for an export not to include table data if the EXPDP command/parameters used to create the export are specified in that way. For example, let's imagine that someone wants to export a specific schema using the following command:
expdp [USER]@[DATABASE] schemas=test directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log query=TEST.TABLE:'"WHERE row_date>sysdate"'
While the export is being generated, every row in that specific table is evaluated against the where condition: only rows whose date is in the future are exported, and rows dated on or before sysdate are skipped. If a where condition like that is applied to the entire export, you'll end up with tables that have 0 rows in the dump file.
That is just an example - it might also be the case that the tables really do have 0 rows. This is possible for a lot of reasons: perhaps it is an older schema with tables that were previously truncated. Perhaps that particular database isn't used often, and the tables within the schema are empty because rows were never added to them. Maybe a developer or another DBA created a bunch of unnecessary tables and they simply were never dropped. There is a plethora of potential reasons for a schema to have empty tables, and that doesn't mean there is something wrong with the database or the export file being generated. Applications and their technical requirements change all the time, and it's possible that the schema simply wasn't updated when those tables were no longer needed.
The first thing I would recommend is:
Ask the vendor to provide record counts of each table in that schema from their end for validation purposes. This will tell you if the tables are empty in the database; if they are empty in the database, they will be empty in your export. This is very simple and, provided that their table statistics are up to date, can be achieved with a query like:
select owner, table_name, num_rows, sample_size, last_analyzed from all_tables where owner = '[SCHEMA]';
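If the statistics are stale, exact counts can be taken instead; one common trick is to generate the count statements from the dictionary (again, [SCHEMA] is a placeholder):
select 'select count(*) from ' || owner || '.' || table_name || ';'
from all_tables
where owner = '[SCHEMA]';
Running the generated statements (or spooling them to a script) gives row counts that do not depend on when the tables were last analyzed.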
If this is a big concern for you, you can always ask them to exclude those tables in the export with a command like:
expdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Or simply exclude them during your import with a command like:
impdp [USER]@[DATABASE] schemas=test exclude=TABLE:"IN ('Table1', 'Table2')" directory=DATA_PUMP_DIR dumpfile=test.dmp logfile=test.log
Either way should work, but be careful and ensure that there will be no issues from a constraint/child record perspective. You can also exclude the constraints. There are many ways to work around it.
If there are inconsistencies between the counts and the rows imported, I would recommend asking the vendor for the specific EXPDP command or parameter file that was used to generate the export. That will tell you whether the empty tables are being caused by a clause in the export command.
It's impossible to know if your assumption is correct without knowing more about the database the export is coming from or seeing the commands being used to generate the export. I would ask the vendor to verify record counts before assuming that it's a permission issue. Empty tables are created all the time.

Will rowid stay unchanged if I export from one Oracle database and then import into another Oracle database?

I have an Oracle table, and I export the data from my Oracle server and then import the data into another Oracle server.
My question is: for every row in the table, will the rowid stay unchanged after importing into the other Oracle server?
I guess the answer is NO, but I have no idea how rowid is generated.
No, the row IDs will almost certainly change. Even within the same database, from the docs:
If you delete and reinsert a row with the Import and Export utilities, for example, then its rowid may change.
The row ID represents the location of the row within a block, within a data file, within a tablespace. (That documentation explains that more.) Even if the target database has the same tablespaces and data files, the import will load data into files and blocks as efficiently as it can, and will not make any attempt to preserve old row IDs - which it won't know anyway as they are not part of the exported data. Even if it could try, that would involve writing each row to a specific place on disk, which would slow things down quite a bit, and existing data in the target DB might already be using the same row ID.
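As a quick illustration of that "location" nature, the DBMS_ROWID package can decode a rowid into its physical components (the table name here is hypothetical):
select rowid,
       dbms_rowid.rowid_relative_fno(rowid) as file_no,
       dbms_rowid.rowid_block_number(rowid) as block_no,
       dbms_rowid.rowid_row_number(rowid)   as row_no
from   my_table
where  rownum <= 5;
Those file/block/row values are exactly what changes when the import writes the rows into different files and blocks on the target database.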
ROWID is a pseudocolumn, not part of the actual row, and it would be meaningless to include it in the exported data.
Although you can use the ROWID pseudocolumn in the SELECT and WHERE clause of a query, these pseudocolumn values are not actually stored in the database.
It isn't even necessarily unique.
Also, you shouldn't really be using it directly, except possibly within a single query/statement (here is one use) or maybe procedure, as they can change even within an existing database if Oracle decides it needs to reorganize things. That's partly why the documentation also says:
You should not use ROWID as the primary key of a table.

Oracle: Are grants removed when an object is dropped?

I currently have 2 schemas, A and B.
B has a table, and A executes selects, inserts and updates on it.
In our sql scripts, we have granted permissions to A so it can complete its tasks.
grant select on B.thetable to A
etc., etc.
Now, table 'thetable' is dropped and another table is renamed to thetable at least once a day.
rename someothertable to thetable
After doing this, we get an error when A executes a select on B.thetable.
ORA-00942: table or view does not exist
Is it possible that after executing the drop + rename operations, the grants are lost as well?
Do we have to assign permissions once again?
Update
someothertable has no grants.
Update 2
The daily process that inserts data into 'thetable' executes a commit every N insertions, so we're not able to execute any rollback. That's why we use 2 tables.
Thanks in advance
Yes, once you drop the table, the grant is also dropped.
You could try to create a VIEW selecting from thetable and granting SELECT on that.
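A minimal sketch of that view idea, using the names from the question:
create or replace view b.thetable_v as
select * from b.thetable;
grant select on b.thetable_v to a;
The view goes invalid when thetable is dropped, but it recompiles on the next access after the rename (assuming the replacement table has a compatible column list), and the grant on the view itself is not lost.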
Your strategy of dropping a table regularly does not sound quite right to me though. Why do you have to do this?
EDIT
There are better ways than dropping the table every day.
Add another column to thetable that states whether the row is valid.
Put an index on that column (or extend the existing index that you use to select from that table).
Add another condition to your queries to only consider "valid" rows, or create a view to handle that.
When importing data, set the new rows to "new". Once the import is done, you can delete all "valid" rows and set the "new" rows to "valid" in a single transaction, as sketched below.
If the import fails, you can just roll back that transaction.
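A minimal sketch of that approach, assuming a single status column and invented column/staging names:
alter table b.thetable add (status varchar2(10) default 'valid');
create index thetable_status_ix on b.thetable (status);
-- readers only see valid rows, e.g. through a view
create or replace view b.thetable_valid as
select * from b.thetable where status = 'valid';
-- daily load: bring the new data in flagged as 'new'
insert into b.thetable (col1, col2, status)
select col1, col2, 'new' from staging_table;
-- the swap happens in one transaction
delete from b.thetable where status = 'valid';
update b.thetable set status = 'valid' where status = 'new';
commit;   -- or rollback if anything went wrong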
Perhaps the process that renames the table should also execute a procedure that does your grants for you? You could even get fancy and query the dictionary for existing grants and apply those to the renamed table.
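A rough sketch of that dictionary idea: capture the grants on the current table before the daily drop, then re-issue them after the rename (this assumes a user that can read DBA_TAB_PRIVS; object names are the ones from the question):
declare
  cursor c is
    select grantee, privilege, grantable
    from   dba_tab_privs
    where  owner = 'B'
    and    table_name = 'THETABLE';
  type grant_list is table of c%rowtype;
  old_grants grant_list;
begin
  -- remember the grants before the table is dropped
  open c;
  fetch c bulk collect into old_grants;
  close c;
  -- ... drop b.thetable and rename the replacement table to thetable here ...
  -- re-apply the captured grants to the renamed table
  for i in 1 .. old_grants.count loop
    execute immediate 'grant ' || old_grants(i).privilege || ' on b.thetable to ' || old_grants(i).grantee
                      || case when old_grants(i).grantable = 'YES' then ' with grant option' end;
  end loop;
end;
/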
No:
"Oracle Database automatically transfers integrity constraints, indexes, and grants on the old object to the new object."
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9019.htm#SQLRF01608
You must have another problem.
Another approach would be to use a temporary table for the work you're doing. After all, it sounds like the data is transitory, at least in that table, and you wouldn't have to keep reapplying the grants each time you had a new set of data or created the new table.
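A minimal sketch of that idea, assuming the data only needs to live for the duration of the loading session (the table name and columns are invented):
create global temporary table b.staging_data (
  id    number,
  col1  varchar2(100)
) on commit preserve rows;
grant select, insert, update on b.staging_data to a;   -- granted once; the table is never dropped, so the grant sticks
One caveat worth checking: rows in a global temporary table are only visible to the session (or transaction) that inserted them, so this fits best when the loading and the reading happen in the same session or job.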
