Sqoop import converting TINYINT to BOOLEAN - hadoop

I am attempting to import a MySQL table of NFL play results into HDFS using Sqoop. I issued the following command to achieve this:
sqoop import \
--connect jdbc:mysql://127.0.0.1:3306/nfl \
--username <username> -P \
--table play
Unfortunately, there are columns of type TINYINT, which are being converted to booleans upon import. For instance, there is a 'quarter' column indicating which quarter of the game the play occurred in. The value in this column is converted to 'true' if the play occurred in the first quarter and 'false' otherwise.
In fact, I did a sqoop import-all-tables, importing the entire NFL database I have, and it behaves like this uniformly.
Is there a way around this, or perhaps some argument for import or import-all-tables that prevents this from happening?

Add tinyInt1isBit=false to your JDBC connection URL. Something like:
jdbc:mysql://127.0.0.1:3306/nfl?tinyInt1isBit=false
Another solution would be to explicitly override the column mapping for the TINYINT(1) column. For example, if the column name is foo, pass the following option to Sqoop during import: --map-column-hive foo=tinyint. For non-Hive imports to HDFS, use --map-column-java foo=Integer.
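Putting this together with the command from the question, a minimal sketch (same database, table, and credentials as above) would be:
sqoop import \
--connect 'jdbc:mysql://127.0.0.1:3306/nfl?tinyInt1isBit=false' \
--username <username> -P \
--table play
With tinyInt1isBit=false the MySQL JDBC driver reports TINYINT(1) columns as integers rather than bits, so 'quarter' should come through as 1-4 instead of true/false.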

Related

Import all the tables from an RDBMS using Sqoop

I am trying to import data from a test MySQL database to Hadoop using Sqoop. Some tables have a primary key and some do not.
$sqoop import-all-tables --connect jdbc:mysql://192.168.0.101/mysql -username test -P --warehouse-dir /home/user_all_tables
17/08/01 22:46:54 ERROR tool.ImportAllTablesTool: Error during import:
No primary key could be found for table general_log. Please specify
one with --split-by or perform a sequential import with '-m 1'.
Kindly suggest how to use --split-by on the sqoop command line.
For the import-all-tables tool to be useful, the following conditions must be met:
Each table must have a single-column primary key.
You must intend to import all columns of each table.
You must not intend to use non-default splitting column, nor impose any conditions via a WHERE clause.
The default behaviour does not work for tables without a primary key, which is why the import fails. Here I would suggest using the -m 1 option to restrict the import to one mapper only.
Sqoop command:
sqoop import-all-tables --connect jdbc:mysql://192.168.0.101/mysql --username test \
-P --warehouse-dir /home/user_all_tables -m 1
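If you need more than one mapper for a table that lacks a primary key, import that table on its own with --split-by pointing at a column whose values are reasonably evenly spread. A hedged sketch (thread_id is only an example column; pick one that suits your table):
sqoop import \
--connect jdbc:mysql://192.168.0.101/mysql \
--username test -P \
--table general_log \
--split-by thread_id \
--warehouse-dir /home/user_all_tables \
-m 4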

How can I import a column of type SDO_GEOMETRY from Oracle to HDFS with Sqoop?

ISSUE
I'm using Sqoop to fetch data from Oracle and put it into HDFS. Unlike other basic datatypes, SDO_GEOMETRY is meant for spatial data.
My Sqoop job fails while fetching the SDO_GEOMETRY datatype.
I need help importing the column Shape, with the SDO_GEOMETRY datatype, from Oracle to HDFS.
I have more than 1000 tables that use the SDO_GEOMETRY datatype; how can I handle the datatype in general while Sqoop imports run?
I have tried --map-column-java and --map-column-hive, but I still get the error.
Error:
ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Hive does not support the SQL type for column
SHAPE
SQOOP COMMAND
Below is the Sqoop command that I have:
sqoop import --connect 'jdbc:oracle:thin:XXXXX/xxxxx@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=xxxxxxx)(Port=1521))(CONNECT_DATA=(SID=xxxxx)))' -m 1 --create-hive-table --hive-import --fields-terminated-by '^' --null-string '\\\\N' --null-non-string '\\\\N' --hive-overwrite --hive-table PROD.PLAN1 --target-dir test/PLAN1 --table PROD.PLAN --map-column-hive SE_XAO_CAD_DATA=BINARY --map-column-java SHAPE=String --map-column-hive SHAPE=STRING --delete-target-dir
The default type mapping that Sqoop provides between relational databases and Hadoop does not work in your case; that is why the Sqoop job fails. You need to override the mapping, since geometry datatypes are not supported by Sqoop.
Use the parameter below in your Sqoop job.
Syntax: --map-column-java col1=javaDatatype,col2=javaDatatype,...
sqoop import \
...
--map-column-java columnNameforSDO_GEOMETRY=String
Since your column name is Shape:
--map-column-java Shape=String
Sqoop import to HDFS
Sqoop does not support all of the RDBMS datatypes.
If a particular datatype is not supported, you will get an error like:
No Java type for SQL type .....
Solution
Add --map-column-java to your sqoop command.
Syntax: --map-column-java col-name=java-type,...
For example, --map-column-java col1=String,col2=String
Sqoop import to HIVE
You need the same --map-column-java mentioned above.
By default, Sqoop supports these JDBC types and converts them to the corresponding Hive types:
INTEGER
SMALLINT
VARCHAR
CHAR
LONGVARCHAR
NVARCHAR
NCHAR
LONGNVARCHAR
DATE
TIME
TIMESTAMP
CLOB
NUMERIC
DECIMAL
FLOAT
DOUBLE
REAL
BIT
BOOLEAN
TINYINT
BIGINT
If your datatype is not in this list, you get an error like:
Hive does not support the SQL type for .....
Solution
You need to add --map-column-hive to your sqoop import command.
Syntax: --map-column-hive col-name=hive-type,...
For example, --map-column-hive col1=string,col2='varchar(100)'
Add --map-column-java SE_XAO_CAD_DATA=String,SHAPE=String --map-column-hive SE_XAO_CAD_DATA=BINARY,SHAPE=STRING to your command.
Don't use multiple --map-column-java or --map-column-hive options; combine all column mappings into a single occurrence of each.
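Applied to the command in the question, the combined mappings would look roughly like the sketch below (connection string and table names taken from the question, everything else unchanged):
sqoop import \
--connect 'jdbc:oracle:thin:XXXXX/xxxxx@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=xxxxxxx)(Port=1521))(CONNECT_DATA=(SID=xxxxx)))' \
-m 1 \
--table PROD.PLAN \
--hive-import --create-hive-table --hive-overwrite \
--hive-table PROD.PLAN1 \
--target-dir test/PLAN1 --delete-target-dir \
--fields-terminated-by '^' \
--null-string '\\\\N' --null-non-string '\\\\N' \
--map-column-java SE_XAO_CAD_DATA=String,SHAPE=String \
--map-column-hive SE_XAO_CAD_DATA=BINARY,SHAPE=STRING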
To import SDO_GEOMETRY from Oracle to Hive through Sqoop,
use the Sqoop free-form query option along with Oracle's SDO_UTIL.TO_GEOJSON and SDO_UTIL.TO_WKTGEOMETRY functions.
The Sqoop --query option lets us supply a SELECT statement so that we fetch only the required data from the table, and within that query we can include SDO_UTIL package functions such as TO_GEOJSON and TO_WKTGEOMETRY. It looks something like this:
sqoop import \
...
--query 'SELECT ID, NAME, \
SDO_UTIL.TO_GEOJSON(MYSHAPECOLUMN), \
SDO_UTIL.TO_WKTGEOMETRY(MYSHAPECOLUMN) \
FROM MYTABLE WHERE $CONDITIONS' \
...
This returns the SDO_GEOMETRY column as GeoJSON and WKT text, per the definitions of those functions, and the result can be loaded directly into Hive STRING columns without any other type mapping in the Sqoop command.
Choose GeoJSON or WKT as required; the approach can also be extended to the other spatial functions available.

What is the relevance of -m 1?

I am executing below sqoop command::=
sqoop import --connect 'jdbc:sqlserver://10.xxx.xxx.xx:1435;database=RRAM_Temp' --username DRRM_DATALOADER --password ****** --table T_VND --hive-import --hive-table amitesh_db.amit_hive_test --as-textfile --target-dir amitesh_test_hive -m 1
I have two queries::-
1) What is the relevance of -m 1? As far as I know, it is the number of mappers that I am assigning to the Sqoop job. If that is true, then the moment I assign -m 2, the execution starts throwing the error below:
ERROR tool.ImportTool: Error during import: No primary key could be found for table xxx. Please specify one with --split-by or perform a sequential import with '-m 1'
Now I am forced to revise my understanding; I see it has something to do with the database primary key. Can somebody explain the logic behind this?
2) I have told the above sqoop command to save the file in text file format. But when I go to the location suggested by the execution, I find tbl_name.jar. Why? If --as-textfile is the wrong syntax, then what is the right one? Or is there another location where I can find the file?
1) To set -m or --num-mappers to a value greater than 1, the table must either have a PRIMARY KEY or the sqoop command must be given a --split-by column. The Controlling Parallelism section of the Sqoop documentation explains the logic behind this.
2) The file format of the data imported into the Hive table amit_hive_test will be plain text (--as-textfile). Since this is a --hive-import, the data is first imported into the --target-dir and then loaded (LOAD DATA INPATH) into the Hive table. The resulting data ends up under the table's LOCATION, not in --target-dir; the tbl_name.jar you found is just the class Sqoop generated for the import, not your data.
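For completeness, a hedged sketch of the same import running with two mappers, assuming T_VND has a numeric column such as VND_ID to split on (the column name is a placeholder; substitute one from your table):
sqoop import \
--connect 'jdbc:sqlserver://10.xxx.xxx.xx:1435;database=RRAM_Temp' \
--username DRRM_DATALOADER -P \
--table T_VND \
--split-by VND_ID \
--num-mappers 2 \
--hive-import --hive-table amitesh_db.amit_hive_test \
--as-textfile --target-dir amitesh_test_hive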

Sqoop Hive table import: table datatype doesn't match the database

I am using Sqoop to import data from Oracle to Hive. It works fine, but it creates the Hive table with only two datatypes, String and Double. I want to use timestamp as the datatype for some columns.
How can I do it?
bin/sqoop import --table TEST_TABLE --connect jdbc:oracle:thin:@HOST:PORT:orcl --username USER1 -password password -hive-import --hive-home /user/lib/Hive/
In addition to the above answers, we may also have to observe when the error occurs, e.g.
In my case I had two types of columns that caused errors: json and binary.
For the json columns, the error came while the Java class was being generated, at the very beginning of the import process:
16/04/19 09:37:58 ERROR orm.ClassWriter: Cannot resolve SQL type
For the binary column, the error was thrown while importing into the Hive table (after the data had been imported into HDFS files):
16/04/19 09:51:22 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Hive does not support the SQL type for column featured_binary
To get rid of these two errors, I had to provide the following options:
--map-column-java column1_json=String,column2_json=String,featured_binary=String --map-column-hive column1_json=STRING,column2_json=STRING,featured_binary=STRING
In summary, we may have to provide the
--map-column-java
or
--map-column-hive
depending upon the failure.
You can use the parameter --map-column-hive to override the default mapping. This parameter expects a comma-separated list of key-value pairs (separated by =) specifying which column should be mapped to which type in Hive.
sqoop import \
...
--hive-import \
--map-column-hive id=STRING,price=DECIMAL
A new feature was added with SQOOP-2103 / Sqoop 1.4.5 that lets you specify the decimal precision with the --map-column-hive parameter. Example:
--map-column-hive 'TESTDOLLAR_AMT=DECIMAL(20%2C2)'
This syntax defines the field as a DECIMAL(20,2). The %2C is used as a comma, and the parameter needs to be in single quotes if you submit it from the bash shell.
I tried using DECIMAL with no modification and got DECIMAL(10,0) as the default.
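To come back to the original question about timestamps, a hedged sketch (CREATED_AT is a placeholder column name; use the actual column from TEST_TABLE) would be:
bin/sqoop import \
--connect jdbc:oracle:thin:@HOST:PORT:orcl \
--username USER1 -P \
--table TEST_TABLE \
--hive-import \
--map-column-hive CREATED_AT=timestamp
Note that the imported values must be in a format Hive can parse as a timestamp (yyyy-MM-dd HH:mm:ss[.fff]); otherwise they will read back as NULL.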

Encoding columns in Hive

I'm importing a table from MySQL to Hive using Sqoop. Some columns are latin1-encoded. Is there any way to do either of the following:
Set the encoding for those columns as latin1 in Hive. OR
Convert the columns to utf-8 while importing with sqoop?
The --default-character-set option sets the character set for the whole database, not for specific columns. I was not able to find a Sqoop parameter that converts table columns to UTF-8 on the fly; the columns are expected to have their types fixed up front.
$ sqoop import --connect jdbc:mysql://server.foo.com/db --table bar \
--direct -- --default-character-set=latin1
I believe you would need to convert the latin1 columns to UTF-8 in your MySQL database first, and then you can import with Sqoop. You can use the following script, which I found here, to convert all the columns to UTF-8.
mysql --database=dbname -B -N -e "SHOW TABLES" \
| awk '{print "ALTER TABLE", $1, "CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;"}' \
| mysql --database=dbname &
It turned out the problem was unrelated. The column works fine regardless of encoding... but the table's schema had changed in MySQL. I assumed that since I'm passing in the overwrite flag, Sqoop would recreate the table in Hive every time. Not so! The schema changes in MySQL didn't get transferred to Hive, so the data in the md5 column was actually data from a different column.
The "fix" we settled on was: before every Sqoop import, check for schema changes, and if there was a change, drop the table and re-import. This forces a schema update in Hive.
Edit: my original sqoop command was something like:
sqoop import --connect jdbc:mysql://HOST:PORT/DB --username USERNAME --password PASSWORD --table uploads --hive-table uploads --hive-import --hive-overwrite --split-by id --num-mappers 8 --hive-drop-import-delims --null-string '\\N' --null-non-string '\\N'
But now I manually issue a DROP TABLE uploads in Hive first if the schema changes.
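A minimal sketch of that workflow (assuming the hive CLI is on the path; the schema-change check itself is omitted):
hive -e 'DROP TABLE IF EXISTS uploads;'
sqoop import --connect jdbc:mysql://HOST:PORT/DB --username USERNAME --password PASSWORD \
--table uploads --hive-table uploads --hive-import --hive-overwrite \
--split-by id --num-mappers 8 --hive-drop-import-delims \
--null-string '\\N' --null-non-string '\\N'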
