Will rowid stay unchanged if I export from one Oracle database and then import into another?

I have an Oracle table, and I export the data from my Oracle server and then import the data into another Oracle server.
My question is: for every row in the table, will the rowid stay unchanged after importing into the other Oracle server?
I guess the answer is NO, but I have no idea how rowid is generated.

No, the row IDs will almost certainly change. Even within the same database, from the docs:
If you delete and reinsert a row with the Import and Export utilities, for example, then its rowid may change.
The row ID represents the location of the row within a block, within a data file, within a tablespace. (That documentation explains that more.) Even if the target database has the same tablespaces and data files, the import will load data into files and blocks as efficiently as it can, and will not make any attempt to preserve old row IDs - which it won't know anyway as they are not part of the exported data. Even if it could try, that would involve writing each row to a specific place on disk, which would slow things down quite a bit, and existing data in the target DB might already be using the same row ID.
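To make that concrete, here is a minimal sketch that decodes a rowid's physical components using the built-in DBMS_ROWID package (my_table is a placeholder name):

select rowid,
       dbms_rowid.rowid_object(rowid)       as object_id,   -- data object number
       dbms_rowid.rowid_relative_fno(rowid) as file_no,     -- relative data file number
       dbms_rowid.rowid_block_number(rowid) as block_no,    -- block within the file
       dbms_rowid.rowid_row_number(rowid)   as row_no       -- slot within the block
from   my_table
where  rownum <= 5;

Two databases will almost never place a given row in the same object, file, block and slot, which is why a rowid cannot survive an export/import.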
ROWID is a pseudocolumn, not part of the actual row, and it would be meaningless to include it in the exported data.
Although you can use the ROWID pseudocolumn in the SELECT and WHERE clause of a query, these pseudocolumn values are not actually stored in the database.
It isn't even necessarily unique.
Also, you shouldn't really be using it directly, except possibly within a single query/statement (here is one use) or maybe procedure, as row IDs can change even within an existing database if Oracle decides it needs to reorganize things. That's partly why the documentation also says:
You should not use ROWID as the primary key of a table.

Related

Data Migration between 2 tablespaces

I need to migrate data from a table stored in tablespace (A) to a different table stored in tablespace (B).
The standard way of moving data is to use Data Pump: export from the source, import into the target.
Here is the 12c version's documentation; have a look: https://docs.oracle.com/database/121/SUTIL/GUID-501A9908-BCC5-434C-8853-9A6096766B5A.htm#SUTIL2877
Additionally, depending on database version you use, there are the original export/import utilities you might (have to) use.
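As a rough sketch (the user, directory, dump file and tablespace names below are all placeholders, not your real ones), a Data Pump move could look like this, with impdp remapping the table and tablespace on the way in:

# export the source table
expdp scott/tiger directory=DATA_PUMP_DIR dumpfile=table_a.dmp tables=TABLE_A

# import it as a different table in a different tablespace
impdp scott/tiger directory=DATA_PUMP_DIR dumpfile=table_a.dmp remap_table=TABLE_A:TABLE_B remap_tablespace=TBS_A:TBS_B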
[EDIT] Whoa? Two different "connections" became "tablespaces" (after you edited the question).
If it means that tables reside in the same database (but in different tablespaces), then a simple insert does the job, e.g.
insert into table_b select * from table_a
The tablespace isn't involved in that operation.

How do I export a table from one database to another database in Oracle (SQL*Plus)

If I have one table in a database, and I want to export it, then import it into a new table in a different database?
Should I set up the table with the same fields in database two, or is there a way to create an empty table so the import will work?
If you have a dblink established, a quick way to copy a table without intermediate files would be to execute this from the target database (the one where you want the new table to be copied):
create table my_new_table as
select *
from my_original_table@my_original_database;
This presupposes the dblink, of course, and also that there is sufficient redo space to allow that much data to be copied in one fell swoop.
If not, you could also build the table this way and then do a bunch of insert into ... select statements to move the data in chunks.
If you only want the structure (your question sort of implied that, but I wasn't sure), you can always add a where 1 = 3 to copy only the structure.
This won't import constraints or indexes, but I'm not sure if that matters for what you seek.
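Putting those pieces together, a minimal sketch (reusing the placeholder names from above):

-- copy only the structure; where 1 = 3 matches no rows
create table my_new_table as
select * from my_original_table@my_original_database
where 1 = 3;

-- then move the data separately, in chunks if redo space is tight
insert into my_new_table
select * from my_original_table@my_original_database;
commit;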

How to update/insert a table without creating a new table (temporary or otherwise)

Background: My team has an ETL job that updates an aggregate table. Each row contains data for a particular date, but a row can and will get updated after its date (which means any row can contain data from multiple jobs). This ETL job missed some data for one day last week and now I need to backfill it.
Problem: I have the missing data, and what I was planning on doing was dumping that data into a temporary table and then merging it with the agg table. That way I can deal with whether the agg table already contains a row for that date (update) or whether a new row needs to be added (insert). But I don't have sufficient permissions to create a temp table, and I'd prefer not to involve the DBA.
Question: Can I get insert/update behavior without creating a temporary table? (This is Oracle SQL, by the way.)
Edit: The data is coming from a tsv file.
Why do you want to avoid involving the DBA? The DBA should have full knowledge of what's going on in the database, as they are ultimately responsible for the condition of the data within it. So you shouldn't be playing sneaky commando with them.
As you have a file of missing data, the easiest way to present it to the database is with an external table. This requires the creation of the table and probably a directory object as well. You will need the DBA's help with this task.
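For reference, a minimal sketch of what the DBA would need to create (the directory path, table name and columns are all assumptions about your TSV):

-- creating the directory object needs elevated privileges, hence the DBA
create directory etl_fix_dir as '/path/to/tsv/files';

create table missing_data_ext (
  row_date     varchar2(10),   -- kept as text; convert with to_date when reading
  metric_value varchar2(20)
)
organization external (
  type oracle_loader
  default directory etl_fix_dir
  access parameters (
    records delimited by newline
    fields terminated by X'9'  -- tab character
  )
  location ('missing_data.tsv')
);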
The only way to avoid creating database objects is to convert your TSV file into a series of DML statements. An IDE which supports regex and/or records macros will prove invaluable here. I like TextPad; other editors are available.
The DML statement for doing upserts in Oracle is the MERGE statement. The one thing you need to watch for is recency. Your missing data comes from last week. If a row exists it may have been added or amended in the intervening period. You must write your MERGE statement so it does not overwrite more recent data with the older stuff. Hopefully your table has useful metadata columns such as DATE_CREATED and LAST_UPDATED.
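A minimal sketch of such a MERGE, assuming hypothetical column names (row_date as the key, metric_value as the payload, plus the metadata columns mentioned above); each TSV line becomes one row in the USING clause:

merge into agg_table t
using (
  -- one row per TSV line; the literals here are made up
  select date '2024-01-05' as row_date, 42 as metric_value from dual
) s
on (t.row_date = s.row_date)
when matched then update
  set t.metric_value = s.metric_value,
      t.last_updated = sysdate
  -- crude recency guard: skip rows amended since the missing day
  where t.last_updated < s.row_date + 1
when not matched then insert
  (row_date, metric_value, date_created, last_updated)
  values (s.row_date, s.metric_value, sysdate, sysdate);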

Attempting to use SQL-Developer to analyze a system table dump created with 'exp'

I'm attempting to recover the data from a specific table that exists in a system table dump I performed earlier. I would like to append the rows existing in the dump to any rows that may exist in the active table. The problem is, it's likely that the name of the table in the dump is not the same as what exists in the database currently (they're dynamically created with a prefix of ARC_TREND_). In addition, I don't know the name of the table as it exists in the dump. I was hoping to use SQL Developer to analyze the dump file, as I can recognize the correct table by its columns and its existing rows.
While I'm going on blind faith that SQL Developer can work with my dump file, when attempting to open it I get a Java heap OutOfMemory exception. I've adjusted the maximum heap size from 640m to 1024m in both sqldeveloper.bat and sqldeveloper.conf, but to no avail.
Can someone recommend a course of action to recover the data from a table which exists in an exp-created dump file? A graphical tool would be nice, but I'm no stranger to the command line. I need to analyze the tables that exist in the dump in order to pick the correct one out. Then I assume I can use imp TABLES= to bring it back into the active instance. It likely won't match the existing table name, so I will use SQL Developer to copy the rows from the imported table to the table where I need them to be.
The dump was taken from a Linux server running 10g, and will be imported to (the same server & database instance, upgraded) an 11g instance of the same database.
Thanks
Since you're referring to imp rather than impdp, I assume this wasn't exported with data pump. Either way, I doubt you'll get anything useful through SQL Developer.
Fortunately most of what you're trying to do is quite easy from the command line; just run imp with the INDEXFILE parameter, which will give you a text file containing all the table (commented out with REM) and index creation commands. From that you should be able to spot the table from its column names.
You can't really see any row data though, so if there's more than one possible match you might need to import several tables and inspect the data in them in the database to see which one you really want.
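A hedged sketch of that workflow (the user names, file names and table name are placeholders):

# write all the DDL to a text file without importing any data
imp system/manager file=system_dump.dmp full=y indexfile=tables_ddl.sql

# once the right ARC_TREND_ table is spotted in tables_ddl.sql:
imp system/manager file=system_dump.dmp tables=ARC_TREND_XXXX fromuser=appuser touser=appuser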

Oracle ROWID as function/procedure parameter

I would just like to hear different opinions about using the ROWID type as an input parameter of a function or procedure.
I have normally used and seen primary keys used as input parameters, but are there any disadvantages to using ROWID as an input parameter? I think it's kind of simple, and selects are pretty quick if it's used in the WHERE clause.
For example:
FUNCTION get_row(p_rowid IN ROWID) RETURN TABLE%ROWTYPE IS...
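Fleshed out, such a function might look like this minimal sketch (my_table is a placeholder table name):

function get_row(p_rowid in rowid) return my_table%rowtype is
  l_row my_table%rowtype;
begin
  -- single-block access via the row's physical address
  select * into l_row
  from   my_table
  where  rowid = p_rowid;
  return l_row;
end get_row;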
From the Concepts Guide:
Physical rowids provide the fastest possible access to a row of a given table. They contain the physical address of a row (down to the specific block) and allow you to retrieve the row in a single block access. Oracle guarantees that as long as the row exists, its rowid does not change.
The main drawback of a ROWID is that while it is normally stable, it can change under some circumstances:
The table is rebuilt (ALTER TABLE MOVE...)
Export / Import obviously
Partitioned table with row movement enabled
A primary key identifies a row logically; you will always find the correct row, even after a delete+insert. A ROWID identifies the row physically and is not as persistent as a primary key.
You can safely use ROWID in a single SQL statement since Oracle will guarantee the result is coherent, for example to remove duplicates in a table. To be on the safe side, I would suggest you only use the ROWID across statements when you have a lock on the row (SELECT ... FOR UPDATE).
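For instance, the classic single-statement de-duplication idiom relies on rowid only within that one statement (my_table and the key columns are placeholders):

delete from my_table
where rowid not in (
  -- keep one arbitrary physical row per logical key
  select min(rowid)
  from   my_table
  group by key_col1, key_col2
);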
From a performance point of view, primary key access is a bit more expensive, but you will normally notice this only if you do a lot of single-row access. If performance is critical, though, you can usually get a greater benefit from set processing than from single-row processing with rowid. In particular, if there are a lot of roundtrips between the DB and the application, the cost of the row access will probably be negligible compared to the cost of the roundtrips.
