Impdp with remap failing on POST_TABLE_ACTION still using old schema - oracle

I'm trying to import an Oracle dump while remapping the schema and tablespace:
impdp usr/pass \
EXCLUDE=table_statistics \
DIRECTORY=EXPDP \
REMAP_SCHEMA=ORG_USR:NEW_USR \
REMAP_TABLESPACE=ORG_TS:NEW_TS \
DUMPFILE=FILE.dmp \
PARALLEL=2 \
LOGFILE=FILE.imp.log
The job imports all tables and then starts processing object types:
Import: Release 12.1.0.2.0 - Production on Thu Aug 17 11:13:06 2017
[ ... the import remaps correctly ... ]
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "NEW_USR"."SOMETABLE" 112.4 MB 1402414 rows
. . imported "NEW_USR"."SOMEOTHERTABLE" 235.9 MB 955249 rows
. . imported "NEW_USR"."SOMETABLE3" 86.91 MB 440513 rows
[... everything works until ...]
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
Processing object type SCHEMA_EXPORT/TABLE/POST_TABLE_ACTION
ORA-39083: Object type POST_TABLE_ACTION failed to create with error:
ORA-01435: user does not exist
Failing sql is:
BEGIN
SYS.DBMS_SNAPSHOT_UTL.SYNC_UP_LOG('ORG_USR','TABLEA');
END;
ORA-39083: Object type POST_TABLE_ACTION failed to create with error:
ORA-01435: user does not exist
Failing sql is:
BEGIN
SYS.DBMS_SNAPSHOT_UTL.SYNC_UP_LOG('ORG_USR','TABLEB');
END;
ORA-39083: Object type POST_TABLE_ACTION failed to create with error:
ORA-01435: user does not exist
Failing sql is:
BEGIN
SYS.DBMS_SNAPSHOT_UTL.SYNC_UP_LOG('ORG_USR','TABLEC');
END;
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/TABLE/MATERIALIZED_VIEW_LOG
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 3 error(s) at Thu Aug 17 11:19:02 2017 elapsed 0 00:05:56
I did specify REMAP_SCHEMA and REMAP_TABLESPACE, and the import runs mostly correctly.
BUT at the end, while processing object type SCHEMA_EXPORT/TABLE/POST_TABLE_ACTION, it somehow fails because it is still using the OLD user.
Can someone tell me what is going wrong and how to fix it?

The solution was to import with POST_TABLE_ACTION excluded and then execute the failing statements manually, after changing the schema name to the correct value:
impdp usr/pass \
EXCLUDE=post_table_action \
DIRECTORY=EXPDP \
REMAP_SCHEMA=ORG_USR:NEW_USR \
REMAP_TABLESPACE=ORG_TS:NEW_TS \
DUMPFILE=FILE.dmp \
PARALLEL=2 \
LOGFILE=FILE.imp.log
and then run the statements manually with the new schema name:
BEGIN
  SYS.DBMS_SNAPSHOT_UTL.SYNC_UP_LOG('NEW_USR','TABLEA');
END;
/
BEGIN
  SYS.DBMS_SNAPSHOT_UTL.SYNC_UP_LOG('NEW_USR','TABLEB');
END;
/
BEGIN
  SYS.DBMS_SNAPSHOT_UTL.SYNC_UP_LOG('NEW_USR','TABLEC');
END;
/
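Since the POST_TABLE_ACTION statements here appear to be re-synchronising materialized view logs (the log shows MATERIALIZED_VIEW_LOG processing right afterwards), a quick sanity check after the manual step is to confirm the logs now exist under the remapped schema. A minimal query, assuming you can read the DBA views:
-- Confirm the MV logs ended up under the remapped schema
SELECT log_owner, master, log_table
  FROM dba_mview_logs
 WHERE log_owner = 'NEW_USR';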

Related

Call SQLcl script hosted in a private repo on GitHub

I can successfully do this:
sql my_connection_string @script1.sql
However, I am struggling to make this work, using a script hosted in a private repo.
sql my_connection_string https://my_private_repo_url/script1.sql
How can I make this work in a single line?
You've got it.
sql user/password@server:port/service @https://internet.com/script.sql
Here's a public SQL file, courtesy of Kris.
c:\sqlcl\22.4\sqlcl\bin>sql hr/oracle@localhost:1523/orclpdb1 @https://gist.githubusercontent.com/krisrice/68e23a2101fe10c8efc26371b8c59d5c/raw/e5a2512d4f74dc79d37e4328e36b81fba15d36a3/dbms_cloud_metric_sql_pkg.sql
SQLcl: Release 22.4 Production on Mon Feb 13 08:18:39 2023
Copyright (c) 1982, 2023, Oracle. All rights reserved.
Last Successful login time: Mon Feb 13 2023 08:18:20 -05:00
Connected to:
Oracle Database 23c Enterprise Edition Release 23.0.0.0.0 - Beta
Version 23.1.0.0.0
login.sql found in the CWD. DB access is restricted for login.sql.
Adjust the SQLPATH to include the path to enable full functionality.
Package OCI_METRICS compiled
Package Body OCI_METRICS compiled
LINE/COL ERROR
--------- -------------------------------------------------------------
0/0 PL/SQL: Compilation unit analysis terminated
12/10 PLS-00201: identifier 'DBMS_CLOUD_TYPES.RESP' must be declared
Errors: check compiler log
Error starting at line : 149 File @ https://gist.githubusercontent.com/krisrice/68e23a2101fe10c8efc26371b8c59d5c/raw/e5a2512d4f74dc79d37e4328e36b81fba15d36a3/dbms_cloud_metric_sql_pkg.sql
In command -
begin
oci_metrics.setup('OCI$RESOURCE_PRINCIPAL', 'us-phoenix-1');
for r in ( /* MAIN QUERY for metrics Should only need to adjust this section to adhere to this structure
Expected Columns:
NAMESPACE - The namespace of the metric : 'my_namespace'
COMPARTMENT_ID - The OCID of the compartment to post the metric : 'ocid1.compartment....'
RESOURCE_GROUP - The resource group of the metric : 'my_resource_group'
NAME - The name of the metric : 'my_metric_name'
DIMENSIONS - The dimensions of the metric in json format : '{"factioid":"this","other":"else"}'
VALUE - The numberic value of the metric : 123.45
*/
select 'a_namespace' namespace,
'ocid1.compartment.oc1..aaaaaaaacw2ft7eu33tlaoppsu6mck7qn2wsqefuixcjhza6xhhsbnhvjorq' compartment_id,
'sample_resource_group' resource_group,
'A_SAMPLE_METRIC' name,
'{"machine":"'|| machine ||'","username":"' ||username ||'"}' dimensions,
count(1) value
from v$session
group by username,machine
/* END MAIN QUERY for metrics */
) loop
oci_metrics.addMetric(r.namespace, r.compartment_id, r.resource_group, r.name, r.dimensions, r.value, systimestamp);
end loop;
-- send any remaining metrics
oci_metrics.sendBatch;
end;
Error report -
ORA-04063: package body "HR.OCI_METRICS" has errors
ORA-06508: PL/SQL: could not find program unit being called: "HR.OCI_METRICS"
ORA-06512: at line 2
04063. 00000 - "%s has errors"
*Cause: Attempt to execute a stored procedure or use a view that has
errors. For stored procedures, the problem could be syntax errors
or references to other, non-existent procedures. For views,
the problem could be a reference in the view's defining query to
a non-existent table.
Can also be a table which has references to non-existent or
inaccessible types.
*Action: Fix the errors and/or create referenced objects as necessary.
SQL>
1:0 ¦ HR ¦ localhost:1523/orclpdb1 ¦ viins ¦ None ¦ 00:00:00.347

How to check if I have the rights to create functions?

So I'm trying to create a stored function but I keep getting this error:
Error report -
ORA-00604: error occurred at recursive SQL level 1
ORA-20900: No access to modify schema
ORA-06512: at line 3
00604. 00000 - "error occurred at recursive SQL level %s"
*Cause: An error occurred while processing a recursive SQL statement
(a statement applying to internal dictionary tables).
*Action: If the situation described in the next error on the stack
can be corrected, do so; otherwise contact Oracle Support.
Fact is, 2-3 hours ago I created some functions and they worked like a charm. I can't really understand what is happening.
Code here (I don't know if this is relevant, as the function doesn't even get to compile):
CREATE OR REPLACE FUNCTION wsxsxfunct(x_data NUMBER)
  RETURN ECHIPE.id_echipa%TYPE
IS
  y ECHIPE.id_echipa%TYPE;
BEGIN
  SELECT ID_ECHIPA INTO y
    FROM ECHIPE E, PILOTI P, REZULTATE R, CURSE C
   WHERE P.ID_PILOT = R.ID_PILOT
     AND E.ID_ECHIPA = P.ECHIPA
     AND R.ID_CURSA = C.ID_CURSA
     AND EXTRACT(MONTH FROM C.DATA_CURSA) = x_data;
  RETURN y;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    dbms_output.put_line('Nu a avut loc nicio cursa in luna ' || x_data);
    RAISE_APPLICATION_ERROR(-20000, 'Nu exista angajati cu numele dat');
  WHEN TOO_MANY_ROWS THEN
    dbms_output.put_line('Au avut loc mai multe curse in luna ' || x_data);
    RAISE_APPLICATION_ERROR(-20001, 'Exista mai multi angajati cu numele dat');
  WHEN OTHERS THEN
    dbms_output.put_line('Trebuie sa apelezi functia cu un numar intre 1-12, reprezentand numarul lunii');
    RAISE_APPLICATION_ERROR(-20002, 'Alta eroare!');
END;
/
Ignore those messages in the code, thanks.
EDIT:
I tried the old code from the functions I created a few hours ago, and I'm still getting the same error report.
ORA-20900 (and, generally, errors between 20000 and 20999) are 'custom' errors: they come from a call to RAISE_APPLICATION_ERROR.
Thus this is not a (native) privileges issue. Most likely, an administrator has created a DDL trigger that blocks your attempt to create a function (and potentially other objects).
Speak to your DBA.
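For illustration, a DDL trigger along these lines (the trigger name and the allowed users are invented here) is the typical pattern that produces exactly this error:
-- Hypothetical DDL trigger of the kind an administrator might have installed
CREATE OR REPLACE TRIGGER block_schema_changes
  BEFORE CREATE OR ALTER OR DROP ON DATABASE
BEGIN
  IF ora_login_user NOT IN ('SYS', 'SYSTEM') THEN
    RAISE_APPLICATION_ERROR(-20900, 'No access to modify schema');
  END IF;
END;
/
If something like this exists, only the DBA can grant you an exception or adjust the trigger.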

Create New Rows from Oracle CLOB and Write to HDFS

In an Oracle database, I can read this table containing a CLOB type (note the newlines):
ID MY_CLOB
001 500,aaa,bbb
500,ccc,ddd
480,1,2,bad
500,eee,fff
002 777,0,0,bad
003 500,yyy,zzz
I need to process this and import it into an HDFS table, with a new row for each MY_CLOB line starting with "500,". In this case, the Hive table should look like:
ID C_1 C_2 C_3
001 500 aaa bbb
001 500 ccc ddd
001 500 eee fff
003 500 yyy zzz
This solution to my previous question succeeds in producing this on Oracle. But writing the result to HDFS with a Python driver is very slow, or never succeeds.
Following this solution, I've tested a similar regex + pyspark solution that might work for my purposes:
import re

import cx_Oracle
from pyspark.sql import Row

# ... query = """SELECT ID, MY_CLOB FROM oracle_table"""
# ... cx_oracle_results <--- fetchmany results (batches) from query

def clob_to_rows(row_id, clob_text):
    # One Row per CLOB line that starts with "500,"
    matches = re.findall(r"^(500),(.*),(.*)$", clob_text, re.MULTILINE)
    return [Row(ID=row_id, C_1=c1, C_2=c2, C_3=c3) for (c1, c2, c3) in matches]

# Process each batch of results and write to Hive as parquet
for batch in cx_oracle_results():
    # batch is like [(1, <cx_Oracle LOB>), (2, <cx_Oracle LOB>), (3, <cx_Oracle LOB>)]
    # lob.read() converts the cx_Oracle CLOB object to text,
    # e.g. [(1, "500,a,b\n500,c,d"), (2, "500,e,e"), (3, "500,z,y\n480,-1,-1")]
    rows = [(row_id, lob.read()) for (row_id, lob) in batch]
    df = sc.parallelize(rows) \
           .flatMap(lambda r: clob_to_rows(r[0], r[1])) \
           .toDF()
    df.write.mode("append").parquet("myschema.pfile")
But reading Oracle cursor results and feeding them into pyspark this way doesn't work well.
So I'm trying to run a sqoop job generated by another tool, importing the CLOB as text, in the hope that I can process the sqooped table into a new Hive table like the one above in reasonable time, perhaps with pyspark and a solution similar to the above.
Unfortunately, this sqoop job doesn't work.
sqoop import -Doraoop.timestamp.string=false -Doracle.sessionTimeZone=America/Chicago
-Doraoop.import.hint=" " -Doraoop.oracle.session.initialization.statements="alter session disable parallel query;"
-Dkite.hive.tmp.root=/user/hive/kite_tmp/wassadamo --verbose
--connect jdbc:oracle:thin:@ldap://connection/string/to/oracle
--num-mappers 8 --split-by date_column
--query "SELECT * FROM (
SELECT ID, MY_CLOB
FROM oracle_table
WHERE ROWNUM <= 1000
) WHERE \$CONDITIONS"
--create-hive-table --hive-import --hive-overwrite --hive-database my_db
--hive-table output_table --as-parquetfile --fields-terminated-by \|
--delete-target-dir --target-dir $HIVE_WAREHOUSE --map-column-java=MY_CLOB=String
--username wassadamo --password-file /user/wassadamo/.oracle_password
But I get an error (snippet below):
20/07/13 17:04:08 INFO mapreduce.Job: map 0% reduce 0%
20/07/13 17:05:08 INFO mapreduce.Job: Task Id : attempt_1594629724936_3157_m_000001_0, Status : FAILED
Error: java.io.IOException: SQLException in nextKeyValue
...
Caused by: java.sql.SQLDataException: ORA-01861: literal does not match format string
This seems to have been caused by mapping the CLOB column to string. I did this based on this answer.
How can I fix this? I'm open to a different pyspark solution as well.
Partial answer: the Oracle error seems to have been caused by
--split-by date_column
This date_column is an Oracle DATE type, and it turns out that doesn't work when sqooping from Oracle. It would be nice to be able to split on it, but splitting on ID (a VARCHAR2) seems to be working.
The issue of efficiently parsing the text MY_CLOB field and creating new rows for each line remains.
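One way to do that per-line split on the Hive side, once the CLOB has been imported as text, is a LATERAL VIEW explode over the line breaks. This is only a sketch in HiveQL: my_db.output_table and the column names follow the sqoop command above, and it assumes the line breaks survive the import as newline characters:
-- Expand each MY_CLOB into one row per "500,..." line
SELECT id,
       split(line, ',')[0] AS c_1,
       split(line, ',')[1] AS c_2,
       split(line, ',')[2] AS c_3
FROM my_db.output_table
LATERAL VIEW explode(split(my_clob, '\n')) t AS line
WHERE split(line, ',')[0] = '500';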

INSERT OUTPUT Oracle

In an SSIS package I am building, I need to capture the output of an UPDATE statement in Oracle, in order to send a warning email.
I have read the related question "Is there an Oracle equivalent to SQL Server's OUTPUT INSERTED.*?", but it doesn't give me a proper result set that I can capture through an Execute SQL Task.
DECLARE
  TYPE ra_InfoErrorMail IS RECORD (
    LFUKID     Crpdta.F580002.LFUKID%TYPE,
    LFAA10     Crpdta.F580002.LFAA10%TYPE,
    LFJOBDETLS Crpdta.F580002.LFJOBDETLS%TYPE
  );
  TYPE ta_InfoErrorMail IS TABLE OF ra_InfoErrorMail;
  t_InfoErrorMail ta_InfoErrorMail;
BEGIN
  UPDATE CRPDTA.F580002
     SET LFKY = 'ERROR'
   WHERE LFAA10 = 'MYPROJECT'
     AND LFUSER = 'MYUSER'
  RETURNING LFUKID, LFAA10, LFJOBDETLS BULK COLLECT INTO t_InfoErrorMail;
  --SELECT LFUKID, LFAA10, LFJOBDETLS FROM t_InfoErrorMail t; -- this doesn't work
END;
How do I get the whole t_InfoErrorMail into a neat, SSIS-capturable result set?
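For what it's worth, one possible shape for this is an implicit result set returned with DBMS_SQL.RETURN_RESULT (12c and later). The sketch below re-selects the updated rows instead of using RETURNING, and it assumes (unverified) that the SSIS Oracle provider can consume implicit result sets:
DECLARE
  rc SYS_REFCURSOR;
BEGIN
  UPDATE CRPDTA.F580002
     SET LFKY = 'ERROR'
   WHERE LFAA10 = 'MYPROJECT'
     AND LFUSER = 'MYUSER';

  -- Re-select the rows that were just flagged and hand them back to the client
  OPEN rc FOR
    SELECT LFUKID, LFAA10, LFJOBDETLS
      FROM CRPDTA.F580002
     WHERE LFKY = 'ERROR'
       AND LFAA10 = 'MYPROJECT'
       AND LFUSER = 'MYUSER';

  DBMS_SQL.RETURN_RESULT(rc);
END;
/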

Oracle - Selecting from remote database taken from PL/SQL parameter

I am creating a procedure to merge two tables across different database instances. The DB_FROM and the DB_TO are given as parameters to the procedure. Everything else is hardcoded.
PROCEDURE MERGE_TABLE_1(DB_FROM, DB_TO) AS
BEGIN
MERGE INTO TABLE_1#DB_TO DSTN
USING (SELECT * FROM TABLE_1#DB_FROM) SRC
ON (DSTN.ID = SRC.ID)
WHEN MATCHED THEN
WHEN NOT MATCHED THEN
END MERGE_TABLE_1
I get the error below when I attempt to compile:
Error(1): ORA-04052: error occurred when looking up remote object
TABLE_1#DB_TO ORA-00604: error occurred at recursive SQL level 1
ORA-02019: connection description for remote database not found
Nope, it will not work that way. You cannot use variables as table names, column names, or database link names in static SQL. You can achieve what you want using dynamic SQL:
EXECUTE IMMEDIATE
'MERGE INTO TABLE_1#' || DB_TO || ' DSTN
USING (SELECT * FROM TABLE_1#' || DB_FROM || ') SRC
ON (DSTN.ID = SRC.ID)
WHEN MATCHED THEN
WHEN NOT MATCHED THEN ...';
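For illustration only (the MERGE body above is elided, so the column names below are hypothetical), a complete version of that procedure might look like this. Because the statement text is only parsed at run time, the procedure compiles even though the db links cannot be resolved at compile time; note that identifiers such as db link names cannot be bind variables, so they have to be concatenated into the statement text:
CREATE OR REPLACE PROCEDURE MERGE_TABLE_1 (DB_FROM IN VARCHAR2, DB_TO IN VARCHAR2) AS
BEGIN
  EXECUTE IMMEDIATE
    'MERGE INTO TABLE_1@' || DB_TO || ' DSTN
     USING (SELECT * FROM TABLE_1@' || DB_FROM || ') SRC
     ON (DSTN.ID = SRC.ID)
     WHEN MATCHED THEN
       UPDATE SET DSTN.SOME_COL = SRC.SOME_COL          -- hypothetical column
     WHEN NOT MATCHED THEN
       INSERT (ID, SOME_COL) VALUES (SRC.ID, SRC.SOME_COL)';
END MERGE_TABLE_1;
/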
