Error 3737: "Name requires more than 30 bytes in LATIN internal form" - JDBC

I want to FastLoad into Teradata with JDBC.
I used PreparedStatements.
My table name is XXX_XXXX_XXXXXXXX_XXXXXXXX,
and none of my column names are longer than 30 characters either.
But I got this error, and I don't understand why.
Thanks.

Your table name is probably too long. If you check http://developer.teradata.com/doc/connectivity/jdbc/reference/current/jdbcug_chapter_2.html#BABIIEAG you'll find:
JDBC FastLoad creates two temporary error tables with the following naming convention: <destination table name>_ERR_1 and <destination table name>_ERR_2
and
The name of the destination table in the Teradata Database that is to be used by JDBC FastLoad CSV must not exceed 24 characters because of the name of the two error tables created by JDBC FastLoad CSV
If this were a standard FastLoad I would simply add ERRORTABLES and use my own error table names, but that does not seem to be available in JDBC FastLoad.
So your only option is to create the table with a shorter name, FastLoad it, and then submit a RENAME TABLE.
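A rough sketch of that sequence; the short name SHORT_TGT and the column definitions are placeholders, not your actual schema:
-- create the target under a short placeholder name
CREATE TABLE SHORT_TGT (col1 INTEGER, col2 VARCHAR(100));
-- ... run the JDBC FastLoad against SHORT_TGT ...
-- then give the loaded table its real (long) name
RENAME TABLE SHORT_TGT TO XXX_XXXX_XXXXXXXX_XXXXXXXX;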

Related

SQL LOADER Control File without fields

I'm working on a task to load a database table from a flat file. My database table has 60 columns.
Now, in the SQL*Loader control file, is it mandatory to mention all 60 fields?
Is there a way to tell SQL*Loader that all 60 columns should be treated as required without mentioning the fields in the control file?
Oracle 12c (and higher versions) offers express mode.
In a few words (quoting the document):
The SQL*Loader TABLE parameter triggers express mode. The value of the TABLE parameter is the name of the table that SQL*Loader will load. If TABLE is the only parameter specified, then SQL*Loader will do the following:
Looks for a data file in the current directory with the same name as the table being loaded that has an extension of ".dat". The upper/lower case used in the name of the data file is the same as the case for the table name specified in the TABLE parameter
Assumes the order of the fields in the data file matches the order of the columns in the table
Assumes the fields are terminated by commas, but there is no enclosure character
(...) order of the fields in the data file matches the order of the columns in the table. The following SQL*Loader command will load the table from the data file.
sqlldr userid=scott table=emp
Notice that no control file is used. After executing the SQL*Loader command, a SELECT from the table will return (...)
I guess that's what you're after.
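For illustration only: with the command above, SQL*Loader would look for a file named emp.dat in the current directory, with comma-separated fields in the same order as the emp table's columns, so the data would look roughly like these sample rows from the standard scott.emp demo data:
7369,SMITH,CLERK,7902,17-DEC-80,800,,20
7499,ALLEN,SALESMAN,7698,20-FEB-81,1600,300,30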

DBeaver: migrating a table between Oracle schemas truncates CLOB column

Using DBeaver, I'm trying to migrate a table from one Oracle instance to another. I just right-click the desired table, select Export Data and follow the wizard.
My problem is that the CLOB column is truncated. In the source database instance the max CLOB length is 6046297, but in the target it is 970823. The source has 340 records with a CLOB column value larger than 970823.
I've also just noticed that the source table has 24806 rows and the target has 12876. Looking at the table's sequence id, the max value is 70191 in the source and 58185 in the target. The source has 22716 records with an id less than 58185 and the target has 12876, so it isn't just truncation: DBeaver is not transferring about half of the records.
I'm connecting to Oracle with the JDBC driver. Is there a configuration in DBeaver, in the connection, or in the driver that would allow me to transfer this table? Or maybe I should just try another tool.

Cannot delete table starting with underscores

I have a table named "__refactorlog", created by importing all tables from an RDBMS source using Sqoop. Since this table does not contain business data, I'm now trying to delete it using:
DROP TABLE __refactorlog;
However, I get the following error:
FAILED: ParseException line 1:11 mismatched input ''__refactorlog'' expecting Identifier near 'table' in table name
I've checked that I'm in the correct database and that the table shows up normally with "show tables;", and I've tried qualifying it with the db name and even quoting the table name, but I always get the same error.
The table is located in HDFS under /user/hive/warehouse/databasename.db/__refactorlog and it contains a small part-m-00000 file. I've also been able to delete other non-business tables without any issues, but they all had alphanumeric names.
Any idea how to delete the table (especially the metadata; the HDFS files I can delete manually if needed)?
BTW, I'm using Hive 0.10 bundled in CDH 4.7.

Sqoop - Create empty Hive partitioned table based on schema of Oracle partitioned table

I have an Oracle table which has 80 columns and is partitioned on the state column. My requirement is to create a Hive table with a similar schema to the Oracle table, partitioned on state.
I tried using the Sqoop --create-hive-table option, but I keep getting this error:
ERROR sqoop.Sqoop: Got exception running Sqoop: java.lang.IllegalArgumentException: Partition key state cannot be a column to import.
I understand that in Hive the partition column should not be part of the table definition, but then how do I get around the issue?
I do not want to write the CREATE TABLE statements manually, as I have 50 such tables to import and would like to use Sqoop.
Any suggestions or ideas?
Thanks
There is a workaround for this.
Below is the procedure I follow:
On Oracle, run a query to get the schema for a table and store it in a file.
Move that file to Hadoop.
On Hadoop, create a shell script which constructs an HQL file.
That HQL file contains the Hive CREATE TABLE statement along with the columns. For this we can use the file from the first step (the Oracle schema file copied to Hadoop).
To run the script you just need to pass the Hive database name, table name, partition column name, path, etc., depending on your level of customization. At the end of the shell script add "hive -f <HQL filename>".
If everything is ready, each table creation takes just a couple of minutes. A sketch of the kind of HQL the script needs to produce is shown below.
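This is only an illustration of the generated HQL, with hypothetical database, table, and column names; the important detail is that the partition column (state) goes in PARTITIONED BY and is left out of the main column list:
CREATE TABLE my_db.my_table (
  id INT,
  name STRING
  -- ... the remaining non-partition columns ...
)
PARTITIONED BY (state STRING)
STORED AS TEXTFILE;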

Import most recent data from CSV to SQL Server with SSIS

Here's the deal; the issue isn't with getting the CSV into SQL Server, it's getting it to work how I want it... which I guess is always the issue :)
I have a CSV file with columns like DATE, TIME, BARCODE, etc. I use a derived column transformation to concatenate DATE and TIME into a DATETIME for the import into SQL Server, and I import all data into the database. The issue is that we only get a new .CSV file every 12 hours, and for example's sake we will say the .CSV is updated four times in a minute.
Given that we will run the job every 15 minutes, we will get a ton of overlapping data. I imagine I will use a variable, say LastCollectedTime, which can be pulled from my SQL database using MAX(READTIME). My problem is that I only want to collect rows with a ReadTime more recent than that variable.
Destination table structure:
ID, ReadTime, SubID, ...datacolumns..., LastModifiedTime where LastModifiedTime has a default value of GETDATE() on the last insert.
Any ideas? Remember, our ReadTime is a derived column; not sure if that matters.
Here is one approach that you can make use of:
Let's assume that your destination table in SQL Server is named BarcodeData.
Create a staging table (say BarcodeStaging) in your database that has the same column structure as your destination table BarcodeData; this staging table is where the CSV data will be imported.
In the SSIS package, add an Execute SQL Task before the Data Flow Task to truncate the staging table BarcodeStaging.
Import the CSV data into the staging table BarcodeStaging and not into the actual destination table.
Use the MERGE statement (I assume that you are using SQL Server 2008 or a higher version) to compare the staging table BarcodeStaging with the actual destination table BarcodeData, using the DateTime column as the join key. Where rows are unmatched, copy them from the staging table and insert them into the destination table; a sketch follows the link below.
Technet link to MERGE statement: http://technet.microsoft.com/en-us/library/bb510625.aspx
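A minimal sketch of that MERGE, assuming ReadTime is the join key; SubID and BarCode stand in for your actual data columns, and ID (assumed to be an identity column) and LastModifiedTime (with its GETDATE() default) are left for the table to fill in:
-- insert staging rows whose ReadTime does not yet exist in the destination
MERGE INTO dbo.BarcodeData AS target
USING dbo.BarcodeStaging AS source
    ON target.ReadTime = source.ReadTime
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ReadTime, SubID, BarCode)
    VALUES (source.ReadTime, source.SubID, source.BarCode);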
Hope that helps.
