Export bytea data from Postgres and import it as BLOB to Oracle DB

I have a table with a bytea column. I need to export this data and import it into a similar table in an Oracle DB (as BLOB, because Oracle has no bytea type).

You could use oracle_fdw: define the empty Oracle table as a foreign table in PostgreSQL and move the data with
INSERT INTO foreigntab SELECT * FROM localtab;
The performance won't be top notch, but it is simple.
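A rough sketch of what the oracle_fdw setup could look like (the connect string, credentials, and table/column names below are placeholders, not taken from the question):
CREATE EXTENSION oracle_fdw;
CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
  OPTIONS (dbserver '//oracle-host:1521/ORCL');
CREATE USER MAPPING FOR current_user SERVER oradb
  OPTIONS (user 'scott', password 'tiger');
-- bytea on the PostgreSQL side maps to BLOB on the Oracle side
CREATE FOREIGN TABLE foreigntab (
  id   integer,
  data bytea
) SERVER oradb OPTIONS (schema 'SCOTT', table 'TARGET_TABLE');
INSERT INTO foreigntab SELECT id, data FROM localtab;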

Related

Steps to restore one table from impdp in oracle 10g

I need to import only one table from a full backup (expdp) into a newly created table in the same database.
So can I import the table directly with a new name, or do I have to create a new table first with the same structure and then import?
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp TABLES=hr.employees REMAP_TABLE=hr.employees:emps
Another question: will REMAP_TABLE affect my already existing employees table, or will it only create a new table called emps and import the data of employees from the dump into it?
Update:
I found that there is no REMAP_TABLE in Oracle 10g, so can I use this method instead:
create user johny identified by 1234;
grant create session to johny;
impdp system/****** DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp LOGFILE=tb_imp.log TABLES='HR.employees' REMAP_SCHEMA=HR:johny
We need the table only temporarily, so I can drop johny later. Also, the above method will not affect the original employees table in the HR schema, right?
Oracle will import the table employees from your dump file and rename it to emps. If an employees table exists, it will not be affected; if the emps table already exists, the import will do nothing.
You can change the behavior when a table already exists by using the TABLE_EXISTS_ACTION parameter, specifying one of:
TRUNCATE => truncate the existing table and import the data from the dump
REPLACE => first replace the table with the definition from the dump and then import the data
APPEND => append the data to the table, leaving the existing data in place
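For illustration, an import that appends to an already existing table could look like this (directory, dump file, and table names are simply reused from the question as placeholders):
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=expschema.dmp TABLES=hr.employees TABLE_EXISTS_ACTION=APPEND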

what's SparkSQL SQL query to write into JDBC table?

This is about SQL queries in Spark.
For reading, we can read a JDBC table with
CREATE TEMPORARY TABLE jdbcTable
USING org.apache.spark.sql.jdbc
OPTIONS (dbtable ...);
For writing, what is the query to write the data to the remote JDBC table using SQL?
NOTE: I want it to be a SQL query.
Please provide the pure "SQL query" that can write to JDBC when using HiveContext.sql(...) of Spark SQL.
An INSERT OVERWRITE TABLE will write to your database using the JDBC connection:
DROP TABLE IF EXISTS jdbcTemp;
CREATE TABLE jdbcTemp
USING org.apache.spark.sql.jdbc
OPTIONS (...);
INSERT OVERWRITE TABLE jdbcTemp
SELECT * FROM my_spark_data;
DROP TABLE jdbcTemp;
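A minimal sketch of what the OPTIONS clause could contain (the JDBC URL, credentials embedded in it, and table name are placeholders, not values from the question):
CREATE TABLE jdbcTemp
USING org.apache.spark.sql.jdbc
OPTIONS (
  url     'jdbc:oracle:thin:scott/tiger@//dbhost:1521/ORCL',
  dbtable 'TARGET_SCHEMA.TARGET_TABLE'
);
INSERT OVERWRITE TABLE jdbcTemp
SELECT * FROM my_spark_data;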
You can write the DataFrame with JDBC as follows.
df.write.jdbc(url, "TEST.BASICCREATETEST", new Properties)
Yes, you can. If you want to save a dataframe into an existing table you can use
df.insertIntoJDBC(url, table, overwrite)
and if you want to create a new table to save this DataFrame, then you can use
df.createJDBCTable(url, table, allowExisting)

Convert ntext to clob

I have to copy data from one table to another, where one table is in Oracle and the other is in MSSQL Server. I want to copy the data from the MSSQL Server table into the Oracle table. The problem is that the MSSQL Server table has one column of data type ntext, while the destination column in the Oracle table is a CLOB.
When I use the query
insert into oracle.table select * from sqlserver.table#mssql;
I get the following error:
SQL Error: ORA-00997: illegal use of LONG datatype
Can anyone advise on this, please?
I tried it through a PL/SQL procedure and it worked: I created a cursor, fetched the values into variables declared as VARCHAR2, and then ran an EXECUTE IMMEDIATE for the INSERT INTO ... SELECT * FROM <TABLE_NAME>#MSSQL.
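As a rough illustration of that cursor-based workaround (the table and column names and the VARCHAR2 size are invented, and it assumes mssql is a database link to the SQL Server gateway, written here with the usual @dblink syntax), a row-by-row variant could look like:
DECLARE
  -- fetch the remote ntext column into a VARCHAR2 variable,
  -- then insert into the local CLOB column row by row
  CURSOR c_src IS
    SELECT id, text_col FROM source_table@mssql;
  v_id   NUMBER;
  v_text VARCHAR2(32767);
BEGIN
  OPEN c_src;
  LOOP
    FETCH c_src INTO v_id, v_text;
    EXIT WHEN c_src%NOTFOUND;
    INSERT INTO oracle_table (id, text_col) VALUES (v_id, v_text);
  END LOOP;
  CLOSE c_src;
  COMMIT;
END;
/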

Changing Storage Option for XMLType column in Oracle 11g

I am using an XMLType column in some of my Oracle database tables. Earlier (in 11.2.0.2) the default storage type was CLOB, so if I issued a query on the XMLType columns, I could see the content of the column as an XML string. But when I dropped and re-created all the tables and inserted some data, I could not get the content of the XMLType columns: it simply displays "XMLType" as the column value. I suspect the storage type has changed to binary XML, so I issued the following ALTER statement:
ALTER TABLE "MYSCHEMA"."SYSTEMPROP"
MODIFY ("XMLCOL")
XMLTYPE COLUMN "XMLCOL" STORE AS CLOB;
Please note that there is already some data present in the table. Even after I delete and insert a row, the content is shown as "XMLType". I am using the SQL Developer UI tool. Can anybody suggest a way to fix this issue?
Edit:
OK, now we have decided to store the XMLType column content as SECUREFILE binary XML. So we have a table like this:
CREATE TABLE XMYTYPETEST
(
ID NUMBER(8) NOT NULL,
VID NUMBER(4) NOT NULL,
UserName VARCHAR2(50),
DateModified TIMESTAMP(6),
Details XMLType
) XMLTYPE COLUMN Details STORE AS SECUREFILE BINARY XML;
Insert into XMYTYPETEST values(10001,1,'XXXX',sysdate,'<test><node1>BLOBTest</node1></test>');
Select * from XMYTYPETEST;
The XMLType column is displayed as "SYS.XMLType" in SQL Developer. So how do I get the content of the binary XML?
Edit:
SELECT x.ID, x.Vid, x.details.getCLOBVal() FROM XMYTYPETEST x WHERE x.ID = 100000;
The above query finally worked for me.
The underlying storage for XML data inside the Oracle database is either CLOB or binary XML, and it defaults to binary storage in 11g.
But irrespective of the storage, your queries on the XMLType column should yield consistent results.
>>>> So how to get the content of the binary XML?
The way to get the content of an XMLType column with queries does not change:
select xmlquery(...)
select xmlcast(xmlquery(...))
select extract(), extractValue(), ...
These are some of the ways data within the XML can be extracted.
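For example, against the XMYTYPETEST table above, a query along these lines should return the element text regardless of whether the storage is CLOB or binary XML (the XPath and the VARCHAR2 length are illustrative, not from the original post):
SELECT XMLCAST(
         XMLQUERY('/test/node1/text()'
                  PASSING t.Details
                  RETURNING CONTENT)
         AS VARCHAR2(100)) AS node1_value
FROM XMYTYPETEST t
WHERE t.ID = 10001;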
Hope this helps.

oracle - how to copy partitioned table to a new schema on new tablespace

I created a Data Pump export (Oracle 11g) from SCHEMA A of a partitioned table (tablespace TEST) using TABLES=MYPARTTBL:MYPART.
I created a new schema SCHEMA B and imported the dump of SCHEMA A's partitioned table with success, but it created the table using the same tablespace TEST.
What I need to do is import the partitioned table into a different tablespace, TEST_NEW.
What's a good way to do this, considering that I now have a copy of SCHEMA A's partitioned table in SCHEMA B?
Here are my export parfile parameters:
DIRECTORY=DW_PUMP
TABLES=MRA.FACT_USE:P_20111009
DUMPFILE=MRA.TBLPART-20111209.dmp
LOGFILE=MRA.TBLPART-20111209.log
Use the REMAP_TABLESPACE parameter when importing:
REMAP_TABLESPACE=TEST:TEST_NEW
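A sketch of a matching import parfile, reusing the names from the export parfile above (the REMAP_SCHEMA target and the log file name are assumptions; adjust them to your actual target schema):
DIRECTORY=DW_PUMP
DUMPFILE=MRA.TBLPART-20111209.dmp
LOGFILE=MRA.TBLPART-20111209.imp.log
TABLES=MRA.FACT_USE:P_20111009
REMAP_SCHEMA=MRA:SCHEMA_B
REMAP_TABLESPACE=TEST:TEST_NEW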
