I want to dump a database schema so I can later use it with DBIx::Class.
The connection itself is apparently OK; however, the loader complains about moniker clashes, which I've tried to resolve by using naming => { ALL => 'v8', force_ascii => 1 }:
perl -MDBIx::Class::Schema::Loader=make_schema_at,dump_to_dir:./lib -E "$|++;make_schema_at('EC::Schema', { debug => 1, naming => {ALL=>'v8', force_ascii=>1} }, [ 'dbi:Oracle:', 'XX/PP#TNS' ])"
Output (which ends with no content written to ./lib):
Bad table or view 'V_TRANSACTIONS', ignoring: DBIx::Class::Schema::Loader::DBI::_sth_for(): DBI Exception: DBD::Oracle::db prepare failed: ORA-04063: view "ss.vv" has errors (DBD ERROR: error possibly near <*> indicator at char 22 in 'SELECT * FROM ..
Unable to load schema - chosen moniker/class naming style results in moniker clashes. Change the naming style, or supply an explicit moniker_map: tables "ss"."AQ$_PRARAC_ASY_QUEUE_TABLE_S", "ss"."AQ$PRARAC_ASY_QUEUE_TABLE_S" reduced to the same source moniker 'AqDollarSignPraracAsyQueueTableS';
Any suggestions on how to resolve the clashes, or on another way to dump the schema, are welcome.
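For example, would an explicit moniker_map along these lines be a reasonable way out? (This is just a sketch: the monikers are invented, and in a multi-schema dump the keys may need schema qualification. Alternatively, Oracle's internal AQ$ queue tables could be skipped entirely with exclude.)

use DBIx::Class::Schema::Loader qw(make_schema_at);

make_schema_at(
    'EC::Schema',
    {
        debug          => 1,
        naming         => { ALL => 'v8', force_ascii => 1 },
        dump_directory => './lib',
        # invented monikers, just to make the two clashing AQ$ tables distinct
        moniker_map    => {
            'AQ$_PRARAC_ASY_QUEUE_TABLE_S' => 'AqPraracAsyQueueTableSInternal',
            'AQ$PRARAC_ASY_QUEUE_TABLE_S'  => 'AqPraracAsyQueueTableS',
        },
        # or skip Oracle's internal queue tables altogether:
        # exclude => qr/^AQ\$/,
    },
    [ 'dbi:Oracle:', 'XX/PP#TNS' ],
);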
I am trying to import an Oracle 11g dump file using the impdp utility, but while doing so I am facing, among other things, two major errors.
First, it shows the following error:
Processing object type DATABASE_EXPORT/TABLESPACE
ORA-39083: Object type TABLESPACE:"HIS_USER" failed to create with error:
ORA-01119: error in creating database file '/oracle/app/oracle/oradata/dwhrajdr1/his_user13.dbf'
ORA-27040: file create error, unable to create file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified.
To solve this, I created the tablespace with the same name, but now it reports that the 'HIS_USER' tablespace already exists.
Second, I am getting thousands of errors reporting that a user or role does not exist:
Failing sql is:
GRANT EXECUTE ANY ASSEMBLY TO "DSS"
ORA-39083: Object type SYSTEM_GRANT failed to create with error:
ORA-01917: user or role 'DSS' does not exist
Please suggest how to solve these errors!
How can I import the dump file without manually creating hundreds of users, roles, or tablespaces?
You can generate the SQL statements using impdp in the following way:
http://www.dba-oracle.com/t_convert_expdp_dmp_file_sql.htm
Then adjust the parameters accordingly.
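For example, something like this writes the DDL to a file instead of executing it, and REMAP_TABLESPACE/EXCLUDE can keep impdp from trying to create the missing tablespaces and grants (the directory object, file names, and target tablespace below are placeholders):

impdp system/password DIRECTORY=dp_dir DUMPFILE=his.dmp SQLFILE=his_ddl.sql
impdp system/password DIRECTORY=dp_dir DUMPFILE=his.dmp REMAP_TABLESPACE=HIS_USER:USERS EXCLUDE=GRANT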
I'm using the Strimzi Kafka operator with a Confluent cluster to build an Oracle-to-Kafka KafkaConnector using Confluent's JdbcSourceConnector.
The KafkaConnector spec:
# connector
connection.url: jdbc:oracle:thin:#HOST:PORT/SERVICE
connection.user: USER
connection.password: PASS
dialect.name: OracleDatabaseDialect
topic.prefix: test-topic-
mode: bulk
db.timezone: Europe/Madrid
table.whitelist: TEST_TABLE
But I get the following error in the strimzi-cluster-operator logs:
io.strimzi.operator.cluster.operator.assembly.ConnectRestException: PUT /connectors/confluent-cluster-int-20200706-02/config returned 400 (Bad Request): Connector configuration is invalid and contains the following 2 error(s):
Invalid value java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 43
for configuration Couldn't open connection to jdbc:oracle:thin:#***:1534/***
Invalid value java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 43
for configuration Couldn't open connection to jdbc:oracle:thin:#***:1534/***
You can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`
at io.strimzi.operator.cluster.operator.assembly.KafkaConnectApiImpl.lambda$null$2(KafkaConnectApi.java:208) ~[io.strimzi.cluster-operator-0.18.0.jar:0.18.0]
If I modify the source, I get a much more specific stack trace:
2020-07-14 09:33:46,636 ERROR SQLException (io.confluent.connect.jdbc.source.JdbcSourceConnectorConfig) [qtp742672280-21]
java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 43
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:509)
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:456)
at oracle.jdbc.driver.T4CTTIoer11.processError(T4CTTIoer11.java:451)
at oracle.jdbc.driver.T4CTTIfun.processError(T4CTTIfun.java:1123)
at oracle.jdbc.driver.T4CTTIoauthenticate.processError(T4CTTIoauthenticate.java:552)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:553)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:269)
at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:501)
at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:1292)
at oracle.jdbc.driver.T4CTTIoauthenticate.doOAUTH(T4CTTIoauthenticate.java:1025)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:747)
at oracle.jdbc.driver.PhysicalConnection.connect(PhysicalConnection.java:793)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:57)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:747)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:562)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at io.confluent.connect.jdbc.dialect.GenericDatabaseDialect.getConnection(GenericDatabaseDialect.java:223)
at io.confluent.connect.jdbc.source.JdbcSourceConnectorConfig$TableRecommender.validValues(JdbcSourceConnectorConfig.java:606)
at io.confluent.connect.jdbc.source.JdbcSourceConnectorConfig$CachingRecommender.validValues(JdbcSourceConnectorConfig.java:653)
at org.apache.kafka.common.config.ConfigDef.validate(ConfigDef.java:607)
at org.apache.kafka.common.config.ConfigDef.validate(ConfigDef.java:622)
at org.apache.kafka.common.config.ConfigDef.validate(ConfigDef.java:530)
at org.apache.kafka.common.config.ConfigDef.validateAll(ConfigDef.java:513)
at org.apache.kafka.common.config.ConfigDef.validate(ConfigDef.java:495)
at org.apache.kafka.connect.connector.Connector.validate(Connector.java:135)
Some things I already checked:
The KafkaConnect image is self-built, with the required add-ons for the JdbcSourceConnector and an Oracle driver.
Dockerfile
FROM strimzi/kafka:0.18.0-kafka-2.5.0
USER root:root
COPY ./kafka-connect-jdbc-5.4.0.jar /opt/kafka/plugins/
COPY ./ojdbc6.jar /opt/kafka/libs/
USER 1001
The KafkaConnect resource is deployed successfully, as the KafkaConnect topics are being populated on the Confluent cluster (connect-cluster-configs, connect-cluster-offsets, ...)
The Oracle driver seems to be loaded successfully: if I add a typo to the credentials or the connection string, the error is self-explanatory and makes sense. I also tried other versions of the Oracle driver.
Four months ago, this same config was working (both on a local Strimzi-deployed cluster and on Confluent). Now the local cluster still works fine, but the Confluent one fails with the described error.
I tried several upgraded, more recent versions of the Strimzi operator and the kafka-jdbc-connector.
(edit) As suggested on the Strimzi Slack, I tried the PUT /connector-plugins/JdbcSourceConnector/config/validate REST endpoint of KafkaConnect and got the same 2 errors, in the whitelist and blacklist fields (the call is sketched below, after the result).
Result:
"value": {
"name": "table.blacklist",
"value": "",
"recommended_values": [],
"errors": ["Invalid value java.sql.SQLException: ORA-00604: error occurred at recursive SQL level 1\nORA-06502: PL/SQL: numeric or value error: character string buffer too small\nORA-06512: at line 43\n for configuration Couldn't open connection to jdbc:oracle:thin:#***:1534/***"],
"visible": true
}
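For reference, the validate call was along these lines (the host and the masked connection values are placeholders):

curl -X PUT -H "Content-Type: application/json" \
  http://my-connect-cluster-connect-api:8083/connector-plugins/JdbcSourceConnector/config/validate \
  -d '{
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:oracle:thin:#HOST:1534/SERVICE",
    "connection.user": "USER",
    "connection.password": "PASS",
    "table.whitelist": "TEST_TABLE",
    "topic.prefix": "test-topic-",
    "mode": "bulk"
  }'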
(edit) I've tried leaving the whitelist field empty, and the error is the same. The database does not seem to have changed, and the same connection string works fine from a Spring Data application.
I'm out of ideas; any hint is welcome x)
I don't know the tools you use, but the error you got means this:
SQL> declare
2 l_var varchar2(1); -- note length, only 1 character
3 begin
4 l_var := 'Littlefoot'; -- I'm little, but can't fit
5 end;
6 /
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at line 4
SQL>
What to do?
SQL> declare
2 l_var varchar2(20); -- is that enough?
3 begin
4 l_var := 'Littlefoot';
5 end;
6 /
PL/SQL procedure successfully completed. --> Yes, it is!
SQL>
Therefore, check the sizes in your code. In your case the stack trace fails inside T4CTTIoauthenticate, i.e. during logon, so the PL/SQL that raises the error probably runs on the database side (in a logon trigger, for example) rather than in anything you deploy yourself.
Calling tensorflow_datasets.load('cycle_gan/apple2orange') works fine
but tensorflow_datasets.load('cycle_gan/vangogh2photo') gives me an error.
I've tried this on my desktop and laptop and both gave the same error message.
Here's the code I ran and the error message I got:
import tensorflow_datasets as tfds
dataset = tfds.load('cycle_gan/vangogh2photo',
                    data_dir='data', batch_size=1,
                    download=True, in_memory=False)
InvalidArgumentError: Failed to create a NewWriteableFile: data\downloads\extracted\ZIP.peop.eecs.berk.edu_taes_park_Cycl_data_vanNiw0c-cL4JRL2gjUnWYOr9woVN9V1peDW4GG0decqv8.zip.incomplete_bf327518b23f41ee9a3a469cc0b541ba\vangogh2photo\testB\2014-12-10 12:08:40.jpg : The filename, directory name, or volume label syntax is incorrect.
; Unknown error
Then it says:
During handling of the above exception, another exception occurred:
(traceback)
ExtractError: Error while extracting data\downloads\peop.eecs.berk.edu_taes_park_Cycl_data_vanNiw0c-cL4JRL2gjUnWYOr9woVN9V1peDW4GG0decqv8.zip to data\downloads\extracted\ZIP.peop.eecs.berk.edu_taes_park_Cycl_data_vanNiw0c-cL4JRL2gjUnWYOr9woVN9V1peDW4GG0decqv8.zip : Failed to create a NewWriteableFile: data\downloads\extracted\ZIP.peop.eecs.berk.edu_taes_park_Cycl_data_vanNiw0c-cL4JRL2gjUnWYOr9woVN9V1peDW4GG0decqv8.zip.incomplete_bf327518b23f41ee9a3a469cc0b541ba\vangogh2photo\testB\2014-12-10 12:08:40.jpg : The filename, directory name, or volume label syntax is incorrect.
; Unknown error
How do I fix this?
Which OS are you using?
There is an issue with some datasets on Windows when composing the URLs used to fetch the files, or the local paths where they are saved.
For the Oxford Pets III dataset, the link below provides the fix:
https://github.com/tensorflow/tensorflow/issues/31171#issuecomment-529169445
Perhaps something similar may apply here?
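If the root cause is the same, note that the failing file is testB/2014-12-10 12:08:40.jpg: the ':' characters are not legal in Windows filenames. Until that's fixed upstream, one workaround might be to download the archive manually and extract it with sanitized names. A rough sketch (the paths are placeholders, and this bypasses tfds's own download manager):

import pathlib
import re
import zipfile

def safe_name(name):
    # replace characters that Windows rejects in filenames, e.g. the ':' in '12:08:40.jpg'
    return re.sub(r'[<>:"|?*]', '-', name)

with zipfile.ZipFile('vangogh2photo.zip') as zf:
    for info in zf.infolist():
        if info.is_dir():
            continue
        out = pathlib.Path('data/extracted', safe_name(info.filename))
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_bytes(zf.read(info))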
I'm using pg_dump to back up and recreate a database's structure. To that end, I'm calling pg_dump -scf backup.sql for the backup, but it fails with the following error:
pg_dump: [archiver (db)] query failed: ERROR: Cannot support lock statement yet
pg_dump: [archiver (db)] query was: LOCK TABLE my_schema.my_table IN ACCESS SHARE MODE
I couldn't find any reference to this particular error anywhere. Is it possible to get around it?
Edit: for a little more context, I ran it in verbose mode, and this is what is displayed before the errors:
pg_dump: last built-in OID is 16383
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: reading schemas
pg_dump: reading user-defined tables
I'm trying to validate using libxml-ruby's DTD#validate, but I keep getting the following warnings:
Warning: failed to load external entity "xhtml-lat1.ent" at :29.
Warning: failed to load external entity "xhtml-symbol.ent" at :34.
Warning: failed to load external entity "xhtml-special.ent" at :39.
I wouldn't mind, except I use entities like &hellip; (…), which are defined in those files, causing my XHTML to appear to be invalid.
How do I tell the DTD about those extra files? I tried running from a directory containing the .dtd file and all of the .ent files, but that doesn't help.
Reading the release notes, I would suspect that you need to use either
XML.default_substitute_entities = true
or
XML.default_load_external_dtd = true
or both.
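Untested, but a minimal sketch might look like this (the file names are placeholders):

require 'xml'  # libxml-ruby

XML.default_load_external_dtd   = true
XML.default_substitute_entities = true

doc = XML::Document.file('page.xhtml')
dtd = XML::Dtd.new(File.read('xhtml1-strict.dtd'))
puts doc.validate(dtd)  # true if the document validates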