I have a CSV exported from a table in one CockroachDB database. When I try to import this CSV into another CockroachDB instance, I get the error:
could not parse "NULL" as type int
The failing column has type INT8 and is nullable. Is there a way around this?
To get this working, you can add WITH nullif = 'NULL' to the end of your IMPORT TABLE statement as documented here.
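For example, a minimal sketch (the table definition and file URL here are hypothetical):

IMPORT TABLE t (
    id INT8 PRIMARY KEY,
    val INT8
)
CSV DATA ('nodelocal://1/export.csv')
WITH nullif = 'NULL';

With nullif set, any bare NULL string in the CSV loads as SQL NULL instead of failing to parse as an INT8.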
It would be nice if this "just worked" in CockroachDB, and I just filed an issue in the CockroachDB GitHub repo.
There's a Hudi table written as Parquet files in S3 that I'm trying to query using Athena. At first it worked fine, but after I add a column and query it again I get this error:
GENERIC_INTERNAL_ERROR: Field new_test2 not found in log schema. Query cannot proceed! Derived Schema Fields:
Yet when I query the same table using spark.sql, it works fine. I don't know why this is happening; as far as I understand, Athena can handle schema changes, so why does it say the column doesn't exist?
Also, if I try to alter the table to add the column, I get a duplicate-column error, which means Athena can see it. The alter attempt is shown below.
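For reference, the alter was of this form (the table name is hypothetical; the column name comes from the error above):

ALTER TABLE my_hudi_table ADD COLUMNS (new_test2 string);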
PS: this error happens on Athena engine v3, but when I set the engine version to Automatic, it works fine.
I have a simple question: how exactly do you use the Oracle import tool to import a database with an option that automatically resizes column lengths, so each column fits the data before it is loaded?
To give an example: I have a table TABLE1 with a column called "comment" whose length is 250. I'm importing TABLE1 from a source database (in a Western character set) into a target database (in the AL32UTF8 character set), so some records' data will grow; e.g. one record's comment may grow from 250 to 260 bytes because of the character set conversion.
My question is: how do I import TABLE1 so that the target database automatically widens the "comment" field from 250 to the maximum data length of that field after the character set conversion grows the data, letting me import TABLE1 with no errors?
What's the import option or command line? Is there a way to know which columns cause data issues?
Thank you
Ideally, you would build your target table beforehand, with the column widths you need defined at that point. You would then tailor a SQL*Loader (sqlldr) control file to your input format.
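A minimal sketch, assuming a two-column, comma-delimited export (all names are hypothetical). Pre-creating the column with character-length semantics (CHAR rather than BYTE) makes the 250 limit count characters, so growth in byte length after conversion to a multi-byte character set no longer overflows it:

CREATE TABLE table1 (
    id        NUMBER,
    "COMMENT" VARCHAR2(250 CHAR)
);

-- table1.ctl: control file tailored to the input format
LOAD DATA
INFILE 'table1.dat'
INTO TABLE table1
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id, "COMMENT" CHAR(2000))

sqlldr userid=scott/tiger control=table1.ctl log=table1.log

Note that CHAR(2000) in the control file only sizes the input buffer for the field; the column width itself comes from the CREATE TABLE.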
I am importing a SQL Server table into Informatica as a Source. One of the columns in the SQL Server table has datatype DATE.
When I import it, it shows up as nvarchar in Informatica.
Can someone help me understand this?
Can you please provide more details? Here's a quick test I've done:
I've created a sample table on SQL Server 11 with date, datetime and smalldatetime columns:
I imported it as a Source into Informatica 9.5.1, and all three columns came through with the expected datatypes. Let me know what I can do to replicate the issue, or simply alter the imported Source to reflect the table structure.
I have a table (in a SQL Server DB) with columns defined as 'date'. When I imported it, they were converted to 'nvarchar(10)'. However, another column defined as 'datetime' imported as datetime properly.
It looks like there is an issue specifically with columns defined as 'date'.
This can also be caused by the particular driver you are using to import the table. I haven't tried it with SQL Server, but I have seen this issue with Oracle table imports.
To solve this problem, I had to import the table through the "DataDirect 7.1 SQL Server Wire Protocol" driver.
I am trying to import an Excel file into an Oracle table via SQL Developer. One of the Oracle columns is of type CLOB, and during the verification step of the import wizard, I get the following message in the information column: "Data Types CLOB, not supported for import." The data fields I am attempting to import for the CLOB column are empty. Does anybody have any idea what might be wrong? Thanks.
If it is not possible, how can I import/export CLOB data in Oracle?
You just need to use a more recent copy of SQL Developer. We now support importing from Excel into a CLOB field.
And then when it's over, check the data.
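For example, a quick sanity check along these lines (table and column names are hypothetical):

SELECT id, DBMS_LOB.GETLENGTH(clob_col) AS clob_len
FROM target_table
WHERE ROWNUM <= 10;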
I am trying to take a schema from an existing database and place it in a new database.
I've created dependent tablespaces for the data, and everything seems to work OK except that any tables with XMLTYPE columns error out and fail with the message below. The XMLTYPE columns are unvalidated CLOBs.
KUP-11007: conversion error loading table "SCHEMA"."TABLE_NAME"
ORA-01400: cannot insert NULL into (XML_COLUMN)
KUP-11009: data for row: XML_COLUMN : 0X''
Some investigation seemed to indicate that using TABLES=TABLE_NAME instead of SCHEMAS=SCHEMA_NAME would help, but I have had no such luck.
Note that there are no constraints on this column and that some data could indeed be null (though after the import I get 0 of my several million records).
The command I am using to initiate the datapump is:
impdp TABLES=SCHEMA.TABLE_NAME DIRECTORY=DATA_PUMP_DIR DUMPFILE=oracledpexport.dmp LOGFILE=LOGFILE.LOG TABLE_EXISTS_ACTION=REPLACE
We have been facing the same problem during the Oracle import process: the impdp process was not able to import tables containing XML data types.
The reason is a bug in the Oracle 11gR1 version.
The workaround is to use the legacy exp utility to create the dump instead of expdp.
For a permanent fix, we had to explicitly store the XMLType columns as CLOB.
Also, Oracle has confirmed that this issue has been fixed in 11gR2.
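A sketch of both pieces (user, file, and object names are hypothetical). First the workaround, exporting with the legacy exp utility:

exp scott/tiger FILE=oracledpexport.dmp TABLES=SCHEMA.TABLE_NAME LOG=exp.log

And the permanent fix, explicitly storing the XMLType column as a CLOB when (re)creating the table:

CREATE TABLE schema.table_name (
    id         NUMBER,
    xml_column XMLTYPE
)
XMLTYPE COLUMN xml_column STORE AS CLOB;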