I have 191 table and view creation scripts for an Oracle DB. I thought it would be more efficient to use the Execute SQL task in SSIS. I set up my Execute SQL task and Foreach Loop container, and I have a connection to my Oracle 11g database, but I keep getting an error when I run the task.
Error: 0xC002F210 at Execute SQL Task, Execute SQL Task: Executing the query "--------------------------------------------------..." failed with the following error: "ORA-00911: invalid character". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Task failed: Execute SQL Task
When I create a simple insert query and save it to a .sql file, the row gets inserted into the database, so I know there is no issue with my connection to the DB.
One of my SQL files looks like this:
--------------------------------------------------------
-- DDL for Table ACCOUNT
--------------------------------------------------------
CREATE TABLE "DBSCHEMA"."ACCOUNT"
( "MNT_IN_BO" CHAR(1 BYTE),
"ACCT_KEY" CHAR(34 BYTE),
"BRCH_MNM" CHAR(8 BYTE),
"CUS_SBB" CHAR(8 BYTE),
"CUS_MNM" CHAR(20 BYTE),
"ACC_TYPE" CHAR(10 BYTE),
"ACC_SEQNO" NUMBER,
"CATEGORY" CHAR(20 BYTE),
"CURRENCY" CHAR(3 BYTE),
"OTHER_CCY" CHAR(3 BYTE),
"CONTINGENT" CHAR(1 BYTE),
"INTRNAL" CHAR(1 BYTE),
"SHORTNAME" CHAR(15 BYTE),
"BO_ACCTNO" CHAR(34 BYTE),
"EXT_ACCTNO" CHAR(34 BYTE),
"IBAN" CHAR(34 BYTE),
"COUNTRY" CHAR(2 BYTE),
"DATEOPENED" DATE,
"DATECLOSED" DATE,
"DATEMAINT" DATE,
"DATE_DWNL" DATE,
"DESCR1" CHAR(35 BYTE),
"DESCR2" CHAR(35 BYTE),
"TYPEFLAG" NUMBER,
"TSTAMP" NUMBER
) SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 16384 NEXT 8724480 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM" ;
When I run the creation script against the DB directly, the table is created.
What could be the issue? And how do I get the task to ignore errors and move on to the next file while keeping a record of which files had errors?
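A common cause of ORA-00911 with the Execute SQL Task is that the OLE DB/ADO.NET provider passes the whole file to Oracle as a single statement, so the trailing semicolon after TABLESPACE "SYSTEM" reaches the server and is rejected as an invalid character. A quick way to test this theory is to run one trimmed script through the task with the terminator removed; a minimal sketch of such a test file is below, where ACCOUNT_TEST is a throwaway placeholder name, not one of the real tables:

-- ACCOUNT_TEST is a placeholder name used only for this experiment
CREATE TABLE "DBSCHEMA"."ACCOUNT_TEST"
( "MNT_IN_BO" CHAR(1 BYTE),
  "ACCT_KEY"  CHAR(34 BYTE),
  "ACC_SEQNO" NUMBER
) TABLESPACE "SYSTEM"

If that succeeds, stripping the trailing ";" from the generated scripts should let the Foreach Loop run them unchanged (if the provider still complains, also try removing the "--" comment header lines and retest). Skipping failed files while recording them is an SSIS configuration concern rather than SQL, for example by not letting the task's failure propagate and logging the current file-name variable in an OnError event handler, so it is not sketched here.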
I'm trying to rewrite the trigger so it doesn't raise a mutating table error (ORA-04091). The table and trigger definitions are as follows (the commented-out part of the trigger is what was giving the mutating table exception):
CREATE TABLE "COPYREAL"."PL_EDUCPLANS"
( "PLANID" NUMBER(10,0),
"STUDYFORM" NUMBER(2,0) NOT NULL ENABLE,
"SKILLCODE" NUMBER(2,0) NOT NULL ENABLE,
"YEARBEGIN" NUMBER(4,0) NOT NULL ENABLE,
"SPECCODE" VARCHAR2(10 CHAR) NOT NULL ENABLE,
"SEMESTERS" NUMBER(2,0) NOT NULL ENABLE,
"BIFURCATE_SEMESTER" NUMBER(2,0),
"CHAIRID" NUMBER(4,0),
"LESS10" NUMBER(1,0),
"CURATOR_CHAIR" NUMBER(4,0),
"SCHOOL_DISCS" NUMBER(1,0),
"MARKSYSTEMID" NUMBER(4,0) NOT NULL ENABLE,
"SKILL" VARCHAR2(50 CHAR),
"VKR_WEEKS" NUMBER(2,0),
"TOTALHOURS_GOS" NUMBER(5,0),
"NORM_LEARN_TIME" NUMBER(3,0),
"SKILL_ENG" VARCHAR2(50 CHAR),
"FIS_ITEM_UID" NUMBER(6,0),
"TOTALCOST" NUMBER(10,0),
"TOTALCOST_STR" VARCHAR2(800 CHAR),
"FORWP" NUMBER(1,0) DEFAULT 0 NOT NULL ENABLE,
"ISOLD" NUMBER(1,0),
"COMMENTARY" VARCHAR2(500 CHAR),
CONSTRAINT "PK_PL_EDUCPLANS" PRIMARY KEY ("PLANID")
USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "SYSTEM" ENABLE,
CONSTRAINT "FK_PL_EDUCPLANS_CHAIRID" FOREIGN KEY ("CHAIRID")
REFERENCES "COPYREAL"."RB_DEPARTMENTS" ("CODE") ENABLE,
CONSTRAINT "FK_CURATORCHAIR_DEP" FOREIGN KEY ("CURATOR_CHAIR")
REFERENCES "COPYREAL"."RB_DEPARTMENTS" ("CODE") ENABLE,
CONSTRAINT "FK_PL_EDUCPLANS_RB_SKILLS" FOREIGN KEY ("SKILLCODE")
REFERENCES "COPYREAL"."RB_SKILLS" ("CODE") ENABLE,
CONSTRAINT "FK_PL_EDUCPLANS_RB_SPECS" FOREIGN KEY ("SPECCODE")
REFERENCES "COPYREAL"."RB_SPECIALITY" ("CODE") ENABLE,
CONSTRAINT "FK_PL_EDUCPLANS_RB_STUDYFORMS" FOREIGN KEY ("STUDYFORM")
REFERENCES "COPYREAL"."RB_STUDYFORMS" ("CODE") ENABLE,
CONSTRAINT "FK_PLEDUCPLANS_SCMARKSYSTEMS" FOREIGN KEY ("MARKSYSTEMID")
REFERENCES "COPYREAL"."SC_MARKSYSTEMS" ("MARKSYSTEMID") ENABLE
);
The trigger is:
create or replace TRIGGER "COPYREAL".tr_pl_educplans
  before insert or update or delete on pl_educPlans
  for each row
DECLARE
  l_flag NUMBER(1);
begin
  IF :NEW.FORWP = 0 THEN
    if inserting or updating then
      delete from pl_processed_plans where planid = :new.planid;
      DELETE FROM PL_PROCESSED_PLANS_B WHERE PLANID = :NEW.PLANID;
      /* select case when exists(
                  SELECT 1
                  FROM PL_EDUCPLANS EP
                  where EP.FORWP=0 AND EP.YEARBEGIN=:NEW.YEARBEGIN AND EP.STUDYFORM=:NEW.STUDYFORM AND ep.SKILLCODE=:NEW.SKILLCODE and ep.SPECCODE=:NEW.SPECCODE
                ) then 1 else 0 end
           into l_flag from Dual;
         IF L_FLAG = 1 THEN
           RAISE_APPLICATION_ERROR(-20001, 'Plan exists!');
         end if;
      */
    end if;
    if deleting then
      delete from pl_processed_plans where planid = :old.planid;
      delete from pl_processed_plans_b where planid = :old.planid;
      delete from pl_plan_activities_mt where planid = :old.planid;
      delete from pl_plan_activities_b where planid = :old.planid;
    END IF;
  end if;
end;
Insert runs as it should (raising the "Plan exists!" exception when trying to insert a duplicate regular plan); the mutating table error appears once you run an update on the table (obviously you cannot select from the same table that fired the trigger in the first place).
Basically, what I am trying to achieve here is to enforce the following business logic: you cannot have two regular plans (a plan is considered regular when its forwp attribute is 0) with the same studyform, skillcode, speccode and yearbegin values, but you can have as many irregular plans (a plan is considered irregular when its forwp attribute is 1) with the same studyform, skillcode, speccode and yearbegin values as you want.
Before the introduction of the forwp attribute, this business logic was enforced by a unique constraint on (studyform, skillcode, speccode, yearbegin) on the pl_educplans table, and I'm not sure how to enforce the rule now.
I've read the suggestions given at https://oracle-base.com/articles/9i/mutating-table-exceptions but I'm not sure how to apply them to my case.
Is enforcing such a business rule even possible with a trigger, or should such a check be done at the application level?
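For what it's worth, one non-trigger option for this kind of "unique only when forwp = 0" rule is a function-based unique index, since rows whose index expressions are all NULL are not stored in the index at all. A sketch, assuming the column definitions above (the index name uq_regular_plan is made up):

CREATE UNIQUE INDEX uq_regular_plan ON pl_educplans (
  CASE WHEN forwp = 0 THEN studyform END,
  CASE WHEN forwp = 0 THEN skillcode END,
  CASE WHEN forwp = 0 THEN speccode  END,
  CASE WHEN forwp = 0 THEN yearbegin END
);
-- regular plans (forwp = 0) expose their real key values and therefore collide;
-- irregular plans produce an all-NULL key, are never indexed, and so may repeat freely

Because the rule is then enforced declaratively, the commented-out SELECT in the trigger (and the mutating table problem it causes) would no longer be needed.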
It is not clear where to put the SQL file in order for my H2 database to be initialized.
In my application-h2.properties file I have:
# H2
spring.h2.console.enabled=true
spring.h2.console.path=/h2
# Datasource
#spring.datasource.url=jdbc:h2:file:~/test
spring.datasource.url=jdbc:h2:mem:testdb;Mode=Oracle
spring.datasource.platform=h2
spring.datasource.username=user
spring.datasource.password=user
spring.jpa.hibernate.ddl-auto=none
spring.datasource.continue-on-error=true
spring.datasource.initialization-mode=always
spring.datasource.driver-class-name=org.h2.Driver
spring.profiles.active=h2
spring.jpa.database-platform=org.hibernate.dialect.Oracle10gDialect
My SQL file is pure Oracle SQL generated from SQL Developer. I tried to cut and paste it into the H2 console, but it didn't accept it. I am hoping this approach will work.
------------------------Update 1---------------------------
schema.sql
schema.sql]: CREATE SEQUENCE "foo"."ADDRESSID_SEQ" MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE; nested exception is org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement "CREATE SEQUENCE ""foo"".""ADDRESSID_SEQ"" MINVALUE[*] 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE"; SQL statement:
data.sql
data.sql]: CREATE SEQUENCE "foo"."ADDRESSID_SEQ" MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE; nested exception is org.h2.jdbc.JdbcSQLException: Syntax error in SQL statement "CREATE SEQUENCE ""foo"".""ADDRESSID_SEQ"" MINVALUE[*] 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE"; SQL statement:
CREATE SEQUENCE "foo"."ADDRESSID_SEQ" MINVALUE 1 MAXVALUE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE [42000-148]
Error
Error creating bean with name 'entityManagerFactory': Post-processing of FactoryBean's singleton object failed; nested exception is org.springframework.jdbc.datasource.init.ScriptStatementFailedException: Failed to execute SQL script statement #3
This is the same SQL file; I just renamed it. It contains CREATE and INSERT statements. However, whether I name it schema.sql or data.sql, it still fails on the third statement. It doesn't appear to fail when creating the user or the schema.
------------------Update 2----------------
CREATE USER foo identified by foo;
CREATE SCHEMA foo;
CREATE TABLE foo.ADDRESS
(ADDRESS_ID NUMBER(22,0),
CUSTOMER_ID NUMBER(*,0),
COMPANY_NAME VARCHAR2(100 BYTE),
ADDITIONAL_ADDRESS_INFO VARCHAR2(100 BYTE),
STREET VARCHAR2(100 BYTE),
ADDITIONAL_STREET_INFO VARCHAR2(100 BYTE),
HOUSE_NUMBER VARCHAR2(20 BYTE),
ZIP VARCHAR2(20 BYTE),
CITY VARCHAR2(50 BYTE),
STATE VARCHAR2(20 BYTE),
COUNTRY_CODE CHAR(2 BYTE),
PHONE VARCHAR2(20 BYTE),
CREATED_AT TIMESTAMP (6) DEFAULT CURRENT_TIMESTAMP,
MODIFIED_AT TIMESTAMP (6) DEFAULT CURRENT_TIMESTAMP,
VALIDATED_AT TIMESTAMP (6),
VALIDATION_RESULT VARCHAR2(100 CHAR)
) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
COMMENT ON COLUMN foo.ADDRESS.ADDRESS_ID IS 'primary key for table address';
COMMENT ON COLUMN foo.ADDRESS.CUSTOMER_ID IS 'foreign key for table customer';
COMMENT ON COLUMN foo.ADDRESS.CREATED_AT IS 'initially created at';
COMMENT ON COLUMN foo.ADDRESS.MODIFIED_AT IS 'date of last modification';
Error
Syntax error in SQL statement "CREATE TABLE FOO.ADDRESS (ADDRESS_ID NUMBER(22,0), CUSTOMER_ID NUMBER(*[*],0), COMPANY_NAME VARCHAR2(100 BYTE), ADDITIONAL_ADDRESS_INFO VARCHAR2(100 BYTE), STREET VARCHAR2(100 BYTE), ADDITIONAL_STREET_INFO VARCHAR2(100 BYTE), HOUSE_NUMBER VARCHAR2(20 BYTE), ZIP VARCHAR2(20 BYTE), CITY VARCHAR2(50 BYTE), STATE VARCHAR2(20 BYTE), COUNTRY_CODE CHAR(2 BYTE), PHONE VARCHAR2(20 BYTE), CREATED_AT TIMESTAMP (6) DEFAULT CURRENT_TIMESTAMP, MODIFIED_AT TIMESTAMP (6) DEFAULT CURRENT_TIMESTAMP, VALIDATED_AT TIMESTAMP (6), VALIDATION_RESULT VARCHAR2(100 CHAR) ) SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING COMMENT ON COLUMN FOO.ADDRESS.ADDRESS_ID IS 'primary key for table address' "; expected "long"; SQL statement:
You should name the file data.sql and save it in the src/main/resources folder. In that location the file will be detected and executed automatically. This applies if you want to keep the default schema generation (as defined by your @Entity annotated classes).
If you also want to create the schema manually, you can create the file schema.sql and put all the schema creation statements in it.
Edited answer after update 2:
You cannot use quoted identifiers such as
CREATE SEQUENCE "foo"."ADDRESSID_SEQ" ...
so try this instead (foo is the schema name):
CREATE SEQUENCE foo.ADDRESSID_SEQ START WITH 1 INCREMENT BY 1 MINVALUE 1 NOMAXVALUE NOCYCLE CACHE 20;
I don't know if you can use NOORDER.
Use BIGINT instead of NUMBER(...); see the ...expected "long"... part of the error.
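Putting those points together, an H2-friendly rewrite of the update 2 script might look roughly like the sketch below. It is only an approximation: the Oracle SEGMENT/STORAGE clauses and BYTE length qualifiers are dropped, NUMBER key columns are mapped to BIGINT, and every statement gets its own terminating semicolon; the CREATE USER line is omitted because H2 uses a different syntax (CREATE USER ... PASSWORD '...') and it is usually unnecessary for an in-memory test database.

CREATE SCHEMA foo;

CREATE SEQUENCE foo.ADDRESSID_SEQ START WITH 1 INCREMENT BY 1;

CREATE TABLE foo.ADDRESS (
  ADDRESS_ID              BIGINT,
  CUSTOMER_ID             BIGINT,
  COMPANY_NAME            VARCHAR(100),
  ADDITIONAL_ADDRESS_INFO VARCHAR(100),
  STREET                  VARCHAR(100),
  ADDITIONAL_STREET_INFO  VARCHAR(100),
  HOUSE_NUMBER            VARCHAR(20),
  ZIP                     VARCHAR(20),
  CITY                    VARCHAR(50),
  STATE                   VARCHAR(20),
  COUNTRY_CODE            CHAR(2),
  PHONE                   VARCHAR(20),
  CREATED_AT              TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  MODIFIED_AT             TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  VALIDATED_AT            TIMESTAMP,
  VALIDATION_RESULT       VARCHAR(100)
);

COMMENT ON COLUMN foo.ADDRESS.ADDRESS_ID IS 'primary key for table address';

The remaining COMMENT ON COLUMN statements follow the same pattern.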
Below are the sample table and file details for the question I asked, "Issue while executing stored procedure which consists both update and insert statements". These are the steps I follow before executing the procedure.
I get a file from the vendor which contains data in the following format.
6437,,01/01/2017,3483.92,,
14081,,01/01/2017,8444.23,,
I am loading this data into the table NMAC_PTMS_NOTEBK_SG. In the above file the first column is the asset.
I then update the table with an extra column named lse_id for that asset. The NMAC_PTMS_NOTEBK_SG table then contains data in the following format.
LSE_ID AST_ID PRPRTY_TAX_DDCTN_CD LIEN_DT ASES_PRT_1_AM ASES_PRT_2_AM
5868087 5049 Null 01-01-2017 3693.3 NULL
Now my procedure starts. The logic in the procedure needs to take the lse_id from NMAC_PTMS_NOTEBK_SG and compare it against the MJL table (here lse_id = app_lse_s). Below is the structure of the MJL table.
CREATE TABLE LPR_LP_TEST.MJL
(
APP_LSE_S CHAR(10 BYTE) NOT NULL,
DT_ENT_S TIMESTAMP(3) NOT NULL,
DT_FOL_S TIMESTAMP(3),
NOTE_TYPE_S CHAR(4 BYTE) NOT NULL,
PRCS_C CHAR(1 BYTE) NOT NULL,
PRIO_C CHAR(1 BYTE) NOT NULL,
FROM_S CHAR(3 BYTE) NOT NULL,
TO_S CHAR(3 BYTE) NOT NULL,
NOTE_TITLE_S VARCHAR2(41 BYTE) NOT NULL,
INFO_S VARCHAR2(4000 BYTE),
STAMP_L NUMBER(10) NOT NULL,
PRIVATE_C CHAR(1 BYTE),
LSE_ACC_C CHAR(1 BYTE),
COL_STAT_S CHAR(4 BYTE),
INFO1_S VARCHAR2(250 BYTE),
INFO2_S VARCHAR2(250 BYTE),
INFO3_S VARCHAR2(250 BYTE),
INFO4_S VARCHAR2(250 BYTE),
NTBK_RSN_S CHAR(4 BYTE)
)
TABLESPACE LPR_LP_TEST
PCTUSED 0
PCTFREE 25
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
CREATE UNIQUE INDEX LPR_LP_TEST.MJL_IDX0 ON LPR_LP_TEST.MJL
(APP_LSE_S, DT_ENT_S)
LOGGING
TABLESPACE LPR_LP_TEST
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
NOPARALLEL;
CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_AIUD"
AFTER INSERT OR UPDATE OR DELETE ON mjl
BEGIN
mpkg_trig_mjl.mp_mjl_aiud;
END mt_mjl_aiud;
/
CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_AIUDR"
AFTER INSERT OR UPDATE OR DELETE ON mjl FOR EACH ROW
BEGIN
mpkg_trig_mjl.mp_mjl_aiudr (INSERTING, UPDATING, DELETING,
:NEW.app_lse_s, :NEW.prcs_c, :NEW.note_type_s,
:OLD.app_lse_s, :OLD.prcs_c, :OLD.note_type_s);
END mt_mjl_aiudr;
/
CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_BIUD"
BEFORE INSERT OR UPDATE OR DELETE ON mjl
BEGIN
mpkg_trig_mjl.mp_mjl_biud;
END mt_mjl_biud;
/
CREATE OR REPLACE TRIGGER LPR_LP_TEST."MT_MJL_OBIUR"
BEFORE INSERT OR UPDATE ON mjl FOR EACH ROW
BEGIN
IF INSERTING THEN
:NEW.stamp_l := mpkg_util.mp_time_ticker;
ELSE
IF :OLD.stamp_l > 999999990 THEN
:NEW.stamp_l := 1;
ELSE
:NEW.stamp_l := :OLD.stamp_l + 1;
END IF;
END IF;
END mt_mjl_obiur;
/
Below is the procedure I am using, which you provided in a previous post, and it is almost working correctly for me.
CREATE OR REPLACE PROCEDURE LPR_LP_TEST.SP_PTMS_NOTES
(
p_app_lse_s IN mjl.app_lse_s%TYPE,
--p_dt_ent_s IN mjl.dt_ent_s%TYPE,
--p_note_type_s IN mjl.note_type_s%TYPE,
--p_prcs_c IN mjl.prcs_c%TYPE,
--p_prio_c IN mjl.prio_c%TYPE,
--p_note_title_s IN mjl.note_title_s%TYPE,
--p_info1_s IN mjl.info1_s%TYPE,
--p_info2_s IN mjl.info2_s%TYPE
)
AS
--v_rowcount_i number;
--v_lien_date mjl.info1_s%TYPE;
--v_lien_date NMAC_PTMS_NOTEBK_SG.LIEN_DT%TYPE;
--v_asst_amount mjl.info2_s%TYPE;
v_app_lse_s mjl.app_lse_s%TYPE;
BEGIN
v_app_lse_s := trim(p_app_lse_s);
-- I hope this dbms_output line is for temporary debug purposes only
-- and will be removed in the production version!
dbms_output.put_line(v_app_lse_s);
merge into mjl tgt
using (select lse_id app_lse_s,
sysdate dt_ent_s,
'SPPT' note_type_s,
'Y' prcs_c,
'1' prio_c,
'Property Tax Assessment' note_title_s,
lien_dt info1_s,
ases_prt_1_am info2_s
from nmac_ptms_notebk_sg
where lse_id = v_app_lse_s) src
on (trim(tgt.app_lse_s) = trim(src.app_lse_s))
-- and tgt.dt_ent_s = src.dt_ent_s)
when matched then
update set --tgt.dt_ent_s = src.dt_ent_s,
tgt.note_title_s = src.note_title_s,
tgt.info1_s = src.info1_s,
tgt.info2_s = src.info2_s
where --tgt.dt_ent_s != src.dt_ent_s
tgt.note_title_s != src.note_title_s
or tgt.info1_s != src.info1_s
or tgt.info2_s != src.info2_s
when not matched then
insert (tgt.app_lse_s,
tgt.dt_ent_s,
tgt.note_type_s,
tgt.prcs_c,
tgt.prio_c,
tgt.from_s,
tgt.to_s,
tgt.note_title_s,
tgt.info1_s,
tgt.info2_s)
values (src.app_lse_s,
src.dt_ent_s,
src.note_type_s,
src.prcs_c,
src.prio_c,
src.from_s,
src.to_s,
src.note_title_s,
src.info1_s,
src.info2_s);
commit;
end;
Now the logic should be: I need to pass the lse_id from the file which I have already saved to the procedure.
If the lse_id which I am passing matches the app_lse_s in the mjl table, then I need to update that row and some of the hardcoded fields, which I am doing correctly.
If the lse_id does not match, then I have to insert a new row for that lease with the hardcoded fields.
The issue which I am facing is the dt_ent_s in the mjl table is a unique constraint.
Please let me know if the above makes any sense to you.
"The issue which I am facing is the dt_ent_s in the mjl table is a unique constraint."
Actually it's not, it's part of a compound unique key. So really your ON clause should match on
on (tgt.app_lse_s = src.app_lse_s
and tgt.dt_ent_s = src.dt_ent_s)
Incidentally, the use of trim() in the ON clause is worrying, especially trim(tgt.app_lse_s). If you're inserting values with trailing or leading spaces your "unique key" will produce multiple hits when you trim them. You should trim the spaces when you load the data from the file and insert trimmed values in your table.
"ORA-00001: unique constraint (LPR_LP_TEST.MJL_IDX0) violated"
MJL_IDX0 must be a unique index. That means you need to include its columns in any consideration of unique records.
Clearly there is a difference between your straight INSERT logic and your MERGE INSERT logic. You need to compare the two statements and figure out what the difference is.
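As a starting point for that comparison, it can help to look at which (APP_LSE_S, DT_ENT_S) pairs already exist for the lease being merged before the procedure runs; a small diagnostic sketch, where :p_lse_id is just a placeholder bind variable:

-- list the existing notebook rows for one lease, ordered by the unique key columns
select app_lse_s, dt_ent_s, note_type_s, note_title_s
from   mjl
where  trim(app_lse_s) = trim(:p_lse_id)
order  by app_lse_s, dt_ent_s;

Comparing that output with what the MERGE tries to insert should make the colliding key visible.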
Hi, I have created a table containing a CLOB column:
CREATE TABLE MQ_GET_CLOB
(
SEQ_NO NUMBER NOT NULL,
MESSAGE_TEXT CLOB,
MESSAGE_LENGTH NUMBER NOT NULL,
STATUS VARCHAR2(15 BYTE)
)
TABLESPACE DATA_L1
LOB (MESSAGE_TEXT) STORE AS
(TABLESPACE DATA_L1
STORAGE (INITIAL 6144)
CHUNK 4000
NOCACHE LOGGING);
After creating the table, when I looked at the metadata, I found that the LOB storage clause had been skipped.
CREATE TABLE MQ_GET_CLOB
(
SEQ_NO NUMBER NOT NULL,
MESSAGE_TEXT CLOB,
MESSAGE_LENGTH NUMBER NOT NULL,
STATUS VARCHAR2(15 BYTE)
)
TABLESPACE DATA_L1
PCTUSED 40
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
Which leads to
ORA-01658: unable to create INITIAL extent for segment in tablespace SYSAUX
I am not sure why the table is using the SYSAUX tablespace.
Has anyone faced the same issue before? Is there something I have missed?
Thanks.
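For reference, the statements below show one way to confirm where the LOB segment actually landed and to relocate it afterwards; this is a sketch to be run as the table owner, not a diagnosis of why the storage clause was dropped in the first place.

-- where did the CLOB segment end up?
SELECT table_name, column_name, tablespace_name
FROM   user_lobs
WHERE  table_name = 'MQ_GET_CLOB';

-- move the LOB segment explicitly if it is not in DATA_L1
ALTER TABLE mq_get_clob MOVE LOB (message_text) STORE AS (TABLESPACE data_l1);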
I'm importing a database dump from one Oracle 10g installation into another. The source has a layout with several tablespaces. The target has one default tablespace for the user I'm importing the dump into.
Everything works fine for ordinary tables: they are relocated from their original tablespace to the user's default. The problem I'm facing is that several tables contain CLOBs with explicit storage directives, i.e. they name their storage tablespace. The imp command seems to be unable to relocate these CLOBs to the user's default tablespace.
Is there any hidden command line option for the imp command to relocate the CLOB storage to the user's default tablespace or even one named tablespace?
The Oracle error 959 message looks like this:
IMP-00017: following statement failed with ORACLE error 959:
"CREATE TABLE "IF_MDE_DATA_OUT" ("OID" NUMBER(10, 0) NOT NULL ENABLE, "CLIEN"
"T_OID" NUMBER(10, 0) NOT NULL ENABLE, "TS_CREATE" TIMESTAMP (6) NOT NULL EN"
"ABLE, "TS_UPDATE" TIMESTAMP (6) NOT NULL ENABLE, "OP_CREATE" VARCHAR2(30) N"
"OT NULL ENABLE, "OP_UPDATE" VARCHAR2(30) NOT NULL ENABLE, "IDENTIFIER" VARC"
"HAR2(50), "TRANSFERTYPE" VARCHAR2(20) NOT NULL ENABLE, "STORE" NUMBER(10, 0"
"), "DATUM" DATE, "STATE" NUMBER(3, 0) NOT NULL ENABLE, "DATA_OLD" LONG RAW,"
" "SUPPLIER" NUMBER(10, 0), "BUYER" NUMBER(10, 0), "GOODS_OUT_IDS" VARCHAR2("
"4000), "CUSTOM_FIELD" VARCHAR2(50), "DATA_ARCHIVE" BLOB, "DATA" BLOB) PCTF"
"REE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1"
" FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "DATA32M" LOGGING NOCOMP"
"RESS LOB ("DATA_ARCHIVE") STORE AS (TABLESPACE "DATA32M" ENABLE STORAGE IN"
" ROW CHUNK 8192 PCTVERSION 10 NOCACHE LOGGING STORAGE(INITIAL 65536 FREELI"
"STS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)) LOB ("DATA") STORE AS (TABLE"
"SPACE "DATA32M" ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10 NOCACHE LOGG"
"ING STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAUL"
"T))"
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'DATA32M' does not exist
You could pre-create the table using the storage parameters you need, and set the import to ignore errors.
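For the pre-create route, the DDL embedded in the IMP-00017 message can be reused with every TABLESPACE, STORAGE and LOB ... STORE AS clause stripped, so the table and its LOB segments all fall back to the importing user's default tablespace; a sketch reconstructed from that message (the same would be needed for each affected table):

CREATE TABLE "IF_MDE_DATA_OUT"
( "OID"           NUMBER(10,0) NOT NULL ENABLE,
  "CLIENT_OID"    NUMBER(10,0) NOT NULL ENABLE,
  "TS_CREATE"     TIMESTAMP(6) NOT NULL ENABLE,
  "TS_UPDATE"     TIMESTAMP(6) NOT NULL ENABLE,
  "OP_CREATE"     VARCHAR2(30) NOT NULL ENABLE,
  "OP_UPDATE"     VARCHAR2(30) NOT NULL ENABLE,
  "IDENTIFIER"    VARCHAR2(50),
  "TRANSFERTYPE"  VARCHAR2(20) NOT NULL ENABLE,
  "STORE"         NUMBER(10,0),
  "DATUM"         DATE,
  "STATE"         NUMBER(3,0)  NOT NULL ENABLE,
  "DATA_OLD"      LONG RAW,
  "SUPPLIER"      NUMBER(10,0),
  "BUYER"         NUMBER(10,0),
  "GOODS_OUT_IDS" VARCHAR2(4000),
  "CUSTOM_FIELD"  VARCHAR2(50),
  "DATA_ARCHIVE"  BLOB,
  "DATA"          BLOB
);
-- then rerun imp with IGNORE=Y so the rows are loaded into the pre-created table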
Like Karl, I recommend Data Pump, but use REMAP_TABLESPACE.
If you are using Data Pump dumps, you could try the REMAP_TABLESPACE option to correct the tablespace.
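If a Data Pump (expdp) dump of the source is available, the remap can also be driven from PL/SQL via the DBMS_DATAPUMP API instead of the impdp command line; a minimal sketch, where DUMP_DIR, export.dmp and the target tablespace USERS are all assumptions to adapt:

DECLARE
  l_handle NUMBER;
  l_state  VARCHAR2(30);
BEGIN
  -- create an import job that reads a Data Pump dump file
  l_handle := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'FULL');
  DBMS_DATAPUMP.ADD_FILE(handle    => l_handle,
                         filename  => 'export.dmp',  -- assumed dump file name
                         directory => 'DUMP_DIR');   -- assumed directory object
  -- rewrite every reference to DATA32M to the target tablespace
  DBMS_DATAPUMP.METADATA_REMAP(handle    => l_handle,
                               name      => 'REMAP_TABLESPACE',
                               old_value => 'DATA32M',
                               value     => 'USERS'); -- assumed target tablespace
  DBMS_DATAPUMP.START_JOB(l_handle);
  DBMS_DATAPUMP.WAIT_FOR_JOB(handle => l_handle, job_state => l_state);
END;
/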