Column count in an export and import from source to destination - Oracle

Can anyone help me with the question below? Does the number of columns have to match between the source and target tables when exporting and importing data from source to destination using Data Pump in Oracle 11.1?
E.g. we are exporting sourcedb.tab (10 columns) and importing into targetdb.tab (11 columns).
Will this work, or will it give an error?

This should work, but I haven't tried it.
From the Oracle 11.2 documentation (I can't find the 11.1 version, but it is most likely the same):
When Data Pump detects that the source table and target table do not
match (the two tables do not have the same number of columns or the
target table has a column name that is not present in the source
table), it compares column names between the two tables. If the tables
have at least one column in common, then the data for the common
columns is imported into the table (assuming the datatypes are
compatible). The following restrictions apply:
This behavior is not supported for network imports.
The following types of columns cannot be dropped: object columns,
object attributes, nested table columns, and ref columns based on a
primary key.
Also note that you need to set the parameter TABLE_EXISTS_ACTION=APPEND (or TRUNCATE, which removes all existing data first). Otherwise, Data Pump will take the default value of SKIP and leave the table as it is.
11.2 Documentation of Data Pump Import
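As a concrete illustration of the TABLE_EXISTS_ACTION note above, a minimal impdp invocation could look like this (the connect string, directory object, dump file and table name are placeholders, not taken from the question):
impdp system@targetdb DIRECTORY=dp_dir DUMPFILE=tab.dmp TABLES=myschema.tab TABLE_EXISTS_ACTION=APPEND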

It won't work, as far as I can tell. The target table has to match the source table.
So, what can you do?
create a database link between those two databases and insert rows manually, e.g.
insert into target@db_link (col1, col2, ..., col10, col11)
select col1, col2, ..., col10, null
from source;
drop the 11th column from the target table, perform the import, and then alter the table to re-create the 11th column (sketched below)
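A minimal sketch of that second option, assuming the target table is targetschema.tab and the extra column is col11 VARCHAR2(100); the schema name, column name, and datatype are all placeholders:
-- drop the extra column so the target matches the source
alter table targetschema.tab drop column col11;

-- run the Data Pump import (with TABLE_EXISTS_ACTION=APPEND) at this point

-- re-create the extra column afterwards
alter table targetschema.tab add (col11 varchar2(100));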

Related

How can I merge two tables using ROWID in Oracle?

I know that ROWID is distinct for each row in different tables. But I am seeing somewhere that two tables are being merged using rowid, so I also tried it, but I am getting blank output.
I have a person table which looks like this:
scrowid is the column which contains the rowid, added as:
alter table ot.person
add scrowid VARCHAR2(200) PRIMARY KEY;
I populated this person table as:
insert into ot.person(id,name,age,scrowid)
select id,name, age,a.rowid from ot.per a;
After this I also created another table, ot.temp_person, by the same steps. Both tables have the same structure and datatypes. So I wanted to see them joined using an inner join, and I tried:
select * from ot.person p inner join ot.temp_person tp ON p.scrowid=tp.scrowid
I got an empty table as output.
Is there any possible way I can merge two tables using rowid, or have I forgotten some steps? If there is any way to join these two tables using rowid, please suggest it.
Define scrowid with the datatype ROWID or UROWID; then it may work (a sketch follows below).
However, in general the ROWID may change at any time unless you lock the record, so it would be a poor key for joining your tables.
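A minimal sketch of that suggestion, rebuilding the test table with a ROWID column (the id/name/age datatypes are assumptions, since the question does not show the full definition of ot.person):
-- same population step as in the question, but scrowid is a real ROWID column
create table ot.person2 (
  id      number,
  name    varchar2(100),
  age     number,
  scrowid rowid
);

insert into ot.person2 (id, name, age, scrowid)
select id, name, age, a.rowid
from   ot.per a;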
I think perhaps you misunderstood the merging of two tables via rowid, unless what you actually saw was a union, cross join, or full outer join. Any attempt to match rowids, regardless of how you define the column, is doomed to fail. This results from the rowid being an internal definition. Rowid is not just a datatype, it is an internal structure (that is an older version of the description, but Oracle doesn't link documentation versions). Its fields are basically:
- The data object number of the object
- The data block in the datafile in which the row resides
- The position of the row in the data block (first row is 0)
- The datafile in which the row resides (first file is 1). The file
number is relative to the tablespace.
So while it's possible for different tables to have the same rowid, it would be extremely unlikely, so an inner join on them will essentially always return an empty result. The query sketched below shows what a rowid actually encodes.
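A small illustration of the components listed above, using the standard DBMS_ROWID package (the table name ot.person is taken from the question):
select rowid,
       dbms_rowid.rowid_object(rowid)       as data_object_no,
       dbms_rowid.rowid_relative_fno(rowid) as relative_file_no,
       dbms_rowid.rowid_block_number(rowid) as block_no,
       dbms_rowid.rowid_row_number(rowid)   as row_no
from   ot.person
where  rownum <= 5;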

Hive not creating separate directories for skewed table

My Hive version is 1.2.1. I am trying to create a skewed table, but it clearly doesn't seem to be working. Here is my table creation script:
CREATE EXTERNAL TABLE IF NOT EXISTS mydb.mytable
(
country string,
payload string
)
PARTITIONED BY (year int,month int,day int,hour int)
SKEWED BY (country) on ('USA','Brazil') STORED AS DIRECTORIES
STORED AS TEXTFILE;
INSERT OVERWRITE TABLE mydb.mytable PARTITION(year = 2019, month = 10, day=05, hour=18)
SELECT country,payload FROM mydb.mysource;
The SELECT query returns names of countries and some associated string data (payload). So, based on the way I have specified skewing on the column 'country', I was expecting the INSERT statement to create separate directories for USA and Brazil (the SELECT query returns enough rows with country as USA and Brazil), but this clearly didn't happen. I see that Hive created a directory called HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME and all the values went into a single file in that directory. A skewed table is only supposed to send rows with non-skewed values (those not specified in the table creation statement) to the common directory (which is what HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME seems to be) and should create dedicated directories for the rows with skew values. But instead everything is going to the default directory and the other directories aren't even created. Do I have to toggle any Hive options to make this work?
It looks like an old bug that doesn't seem to be fixed yet: https://issues.apache.org/jira/browse/HIVE-13697. Internally, when Hive stores the skew values specified during table creation, they are converted to lower case before being written to the metastore. That's why the workaround for now is to convert the case in the SELECT statement so the rows go to the right bucket (see the sketch below). I tested this and it works this way.
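A minimal sketch of that workaround, reusing the INSERT from the question; lower-casing the country values makes them match the lower-cased skew values in the metastore (note that this also stores the data as 'usa' / 'brazil'):
INSERT OVERWRITE TABLE mydb.mytable PARTITION(year = 2019, month = 10, day=05, hour=18)
SELECT lower(country), payload FROM mydb.mysource;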

Data Migration between 2 table spaces

I need to migrate data from a table stored in tablespace (A) to a different table stored in tablespace (B).
The standard way of moving data is to use Data Pump: export from the source, import into the target.
This is the 12c version's documentation, have a look: https://docs.oracle.com/database/121/SUTIL/GUID-501A9908-BCC5-434C-8853-9A6096766B5A.htm#SUTIL2877
Additionally, depending on the database version you use, there are the original export/import utilities you might (have to) use.
[EDIT] Whoa? Two different "connections" became "tablespaces" (after you edited the question).
If it means that the tables reside in the same database (but in different tablespaces), then a simple insert does the job, e.g.
insert into table_b select * from table_a
The tablespace isn't involved in that operation.
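A minimal sketch of that same-database case, assuming the target table does not yet exist and should be created in a tablespace named tbs_b (the table and tablespace names are placeholders):
-- create the target table in tablespace B with the same structure, no rows yet
create table table_b tablespace tbs_b
as select * from table_a where 1 = 0;

-- copy the data with a plain insert; the tablespace plays no role here
insert into table_b
select * from table_a;

commit;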

Delphi XE7 TFDTable View RowID error

I'm converting a Delphi 5 / BDE application to Delphi XE7 / FireDAC. One of my forms has a TFDTable component that points to an Oracle view containing a group by clause in its create statement.
This used to work fine in the BDE application, but with FireDAC I'm getting this error.
ORA-01446: cannot select ROWID from, or sample, a view with DISTINCT,
GROUP BY, etc.
I understand the error I'm getting from Oracle, but I'm not selecting ROWID, FireDAC is! Is there a property in the TFDTable that I can set to prevent it from adding ROWID to the query? If not, how am I supposed to use this view?
FireDAC fetches ROWID because it tries to identify tuples in the result set for possible updates. To stop that, just enable the ReadOnly option, which will make the grouped view's result set read-only (properly so, as one cannot identify particular tuples for updating when they are grouped in a result set).
The SQL command is generated in the TFDPhysCommandGenerator.GenerateSelectTable method, if you want to know the source of this problem. A generic unique tuple identifier (ROWID for the Oracle DBMS) is appended to the select list depending on the ReadOnly property setting.
Include fiMeta in FetchOptions.Items.
TFDQuery, TFDTable, TFDMemTable, and TFDCommand automatically retrieve
the unique identifying columns (mkPrimaryKeyFields) for the main
(first) table in the SELECT ... FROM ... statements, when fiMeta is
included in FetchOptions.Items. Note:
mkPrimaryKeyFields querying may be time consuming;
the application may need to explicitly specify unique identifying columns, when FireDAC fails to determine them correctly.
To explicitly specify columns, exclude fiMeta from FetchOptions.Items,
and use one of the following options:
set UpdateOptions.KeyFields to a ';' separated list of column names;
include pfInKey into the corresponding TField.ProviderFlags property.
When the application creates persistent fields, then initially
TField.ProviderFlags will be set correctly. After that, automatic
field setup will not happen, when the DB structure or query is
changed. You should manually update ProviderFlags to adjust the column
list. Also, if the primary key consists of several fields, then all of
them must be included into persistent fields.
Row Identifying Columns
Alternatively, a row identifying column may be included into the
SELECT list. When FireDAC finds such columns, it will not retrieve
mkPrimaryKeyFields metadata and it will use this column. The supported
DBMSs are the following:
DBMS Row identifying column
Firebird DB_KEY
Informix ROWID
Interbase DB_KEY / RDB$DB_KEY
Oracle ROWID
PostgreSQL OID. The table must be created with OIDs.
SQLite ROWID
Source : http://docwiki.embarcadero.com/RADStudio/XE8/en/Unique_Identifying_Fields_%28FireDAC%29
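As an Oracle-flavored illustration of the row identifying column approach quoted above, ROWID can be exposed explicitly in the SELECT list of a query so that FireDAC does not need to retrieve mkPrimaryKeyFields metadata (my_table is a placeholder name; this does not help with the grouped view from the question, since ROWID cannot be selected from it):
-- expose the row identifying column explicitly
select t.rowid as rid, t.*
from   my_table t;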

Have dynamic columns in external tables

My requirement is that I have to use a single external table in a stored procedure for different text files which have different columns.
Can I use dynamic columns in external tables in Oracle 11g? Like this:
create table ext_table as select * from TBL_test
organization external (
type oracle_loader
default directory DATALOAD
access parameters(
records delimited by newline
fields terminated by '#'
missing field values are null
)
location ('APD.txt')
)
reject limit unlimited;
The set of columns that are defined for an external table, just like the set of columns defined for a regular table, must be known at the time the external table is defined. You can't decide at runtime that the table has 30 columns today and 35 columns tomorrow.
You could potentially define the external table with the maximum number of columns that any of the flat files will have, name the columns generically (e.g. col1 through col50), and then move the complexity of figuring out that column N of the external table is really a particular field into the ETL code (a sketch of this follows below). It's not obvious, though, why that would be more useful than creating the external table definition properly.
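A minimal sketch of that generic-columns approach; the table name ext_table_generic, the column count, and the VARCHAR2 datatypes are placeholders, while the directory, delimiter, and file name are taken from the question:
create table ext_table_generic (
  col1 varchar2(4000),
  col2 varchar2(4000),
  col3 varchar2(4000)
  -- ... extend up to the widest file you expect
)
organization external (
  type oracle_loader
  default directory DATALOAD
  access parameters (
    records delimited by newline
    fields terminated by '#'
    missing field values are null
  )
  location ('APD.txt')
)
reject limit unlimited;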
Why is there a requirement that you use a single external table definition to load many differently formatted files? That does not seem reasonable.
Can you drop and re-create the external table definition at runtime? Or does that violate the requirement for a single external table definition?
