Sqoop import Null string - hadoop

NULL values are displayed as '\N' when the Hive external table is queried.
Below is the Sqoop import script:
sqoop import -libjars /usr/lib/sqoop/lib/tdgssconfig.jar,/usr/lib/sqoop/lib/terajdbc4.jar -Dmapred.job.queue.name=xxxxxx \
--connect jdbc:teradata://xxx.xx.xxx.xx/DATABASE=$db,LOGMECH=LDAP --connection-manager org.apache.sqoop.teradata.TeradataConnManager \
--username $user --password $pwd --query "
select col1,col2,col3 from $db.xxx
where \$CONDITIONS" \
--null-string '\N' --null-non-string '\N' \
--fields-terminated-by '\t' --num-mappers 6 \
--split-by job_number \
--delete-target-dir \
--target-dir $hdfs_loc
Please advise what change should be made to the script so that NULLs are displayed as NULL when the external Hive table is queried.

Sathiyan - Below are my findings after many trials.
1. If the null-string property is not included during the Sqoop import, NULLs are stored as [blank for integer columns] and [blank for string columns] in HDFS.
2. If the Hive table on top of that HDFS data is queried, we see [NULL for integer columns] and [blank for string columns].
3. If the (--null-string '\N') property is included during the Sqoop import, NULLs are stored as ['\N' for both integer and string columns].
4. If the Hive table on top of that HDFS data is queried, we see [NULL for both integer and string columns, not '\N'].

In your Sqoop script you specified --null-string '\N' --null-non-string '\N', which means:
--null-string '\N' = the string to be written for a null value in string columns
--null-non-string '\N' = the string to be written for a null value in non-string columns
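Because Sqoop substitutes these values into its generated code, the Sqoop user guide recommends escaping the backslash itself so that the literal characters \N (Hive's default text representation of NULL) actually land in HDFS. A minimal sketch of the import with that change, leaving out the -libjars, queue and connection-manager settings from the original (whether the doubled backslash is needed can depend on the connection manager in use):
sqoop import \
--connect jdbc:teradata://xxx.xx.xxx.xx/DATABASE=$db,LOGMECH=LDAP \
--username $user --password $pwd \
--query "select col1,col2,col3 from $db.xxx where \$CONDITIONS" \
--null-string '\\N' --null-non-string '\\N' \
--fields-terminated-by '\t' --num-mappers 6 \
--split-by job_number \
--delete-target-dir \
--target-dir $hdfs_loc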

If any value is NULL in the table and we Sqoop that table, then Sqoop imports the NULL value as the string "null" in HDFS. That makes it a problem to use a NULL condition in our Hive queries.
For example, let's insert a NULL value into the MySQL table "cities":
mysql> insert into cities values(6,7,NULL);
By default, Sqoop will import the NULL value as the string "null" in HDFS.
Let's run the Sqoop import and see what happens:
sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username sqoop -P --table cities --hive-import --hive-overwrite --hive-table vikas.cities -m 1
http://deltafrog.com/how-to-handle-null-value-during-sqoop-import-export/
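To make the problem concrete: once the NULL arrives in HDFS as the literal string "null", a Hive NULL predicate no longer matches the row. A hypothetical check against the vikas.cities table from the example, assuming the third column is named city:
hive> select * from vikas.cities where city is null;   -- returns no rows
hive> select * from vikas.cities where city = 'null';  -- matches the row inserted above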

In the Sqoop import command, remove the --null-string and --null-non-string '\N' options;
by default Sqoop will write null for both string and non-string values.
I have tried --null-string '\N', --null-string '' and other options, but kept getting blanks and other issues.

Related

Export Sqoop fields with enclosure and/or delimiter character inside

I'm trying to export this type of data into PostgreSQL:
"WIFI:S:FIBRA-3;T:WPA;P:YOdfdgg4677;;";"2021-05-18 14:31:34"
"'":.56#!:&7:&":8";"2021-05-19 15:56:22"
but the first field is not recognized correctly, I think because of the double quotes.
The command that I'm using is:
export \
--connect $DB_JDBC_URL_MAIN \
--username=$DB_USER \
--password="$DB_PASSWORD" \
--table "$DB_SCHEMA.$DB_TABLE" \
--export-dir $EXPORT_DIR \
--input-lines-terminated-by '\n' \
--input-fields-terminated-by ';' \
--input-null-string 'N/A' \
--optionally-enclosed-by '\"' \
--escaped-by \\ \
I hope you can help me.
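For a sqoop export, the parsing of the files under --export-dir is controlled by the --input-* arguments rather than the output-formatting ones, so a sketch along those lines (assuming the export files really are semicolon-delimited, enclosed in double quotes and escaped with a backslash) might look like:
sqoop export \
--connect $DB_JDBC_URL_MAIN \
--username=$DB_USER \
--password="$DB_PASSWORD" \
--table "$DB_SCHEMA.$DB_TABLE" \
--export-dir $EXPORT_DIR \
--input-lines-terminated-by '\n' \
--input-fields-terminated-by ';' \
--input-null-string 'N/A' \
--input-optionally-enclosed-by '\"' \
--input-escaped-by '\\'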

Sqoop --escaped-by --optionally-enclosed-by

I have a requirement to import the data into a .csv file with comma (,) as the delimiter.
I am using the Sqoop options below:
--optionally-enclosed-by '\"'
--escaped-by '\\'
Below are the input data and the output I want.
Input: "foo
Output I want: ""foo
Output I am getting: "foo
Another example:
Input: foo"
Output I want: foo""
Output I am getting: foo"
How can I achieve the desired output?
Refer to the Sqoop User Guide, section 7.2.11 "Large Objects", for a better understanding of --enclosed-by, --escaped-by and --optionally-enclosed-by, with examples.
Based on the question, this is my understanding of the requirement:
--fields-terminated-by , since you need a file with a comma as the delimiter.
--optionally-enclosed-by '\"' will enclose only the fields whose data contains the delimiter comma (,).
--escaped-by \\ is used to escape the enclosing characters (double quotes in this case) if they are present in a data field that requires enclosing.
Example:
Input: Suppose the data in the source table looks like the following, with the respective columns. For representation, I use a pipe (|) as the delimiter.
Some string, with a comma.|1|2|3...
Another "string with quotes"|4|5|6...
Output of sqoop import --fields-terminated-by , --enclosed-by '\"' --escaped-by \\ ...
"Some string, with a comma.","1","2","3"...
"Another \"string with quotes\"","4","5","6"...
Explanation: All fields are terminated by a comma and all fields are enclosed in double quotes. If any field contains double quotes in its data, those quotes are escaped with a backslash (\), as in the second line.
Output of sqoop import --fields-terminated-by , --optionally-enclosed-by '\"' --escaped-by \\ ...
"Some string, with a comma.",1,2,3...
"Another \"string with quotes\"",4,5,6...
Explanation: All fields are terminated by a comma and only the fields containing a comma in their data are enclosed in double quotes. If any field contains double quotes in its data, those quotes are escaped with a backslash (\), and such a field is also enclosed, as in the second line.
For your scenario:
Input: Suppose the data in the source table looks like the following, with the respective columns. For representation, I use a pipe (|) as the delimiter.
"foo|bar"|1|2
foo"|3|4|"bar
Possible output of sqoop import --fields-terminated-by , --enclosed-by '\"' --escaped-by \\ ...
"\"foo","bar\"","1","2"
"foo\"","3","4","\"bar"
Possible output of sqoop import --fields-terminated-by , --optionally-enclosed-by '\"' --escaped-by \\ ...
"\"foo","bar\"",1,2
"foo\"",3,4,"\"bar"

How to export a Hive table into a CSV file including header?

I used this Hive query to export a table into a CSV file.
hive -f mysql.sql
where mysql.sql contains something like:
insert overwrite local directory '/LocalPath/'
row format delimited fields terminated by ','
select * from Mydatabase.Mytable limit 100;
cat /LocalPath/* > /LocalPath/table.csv
However, it does not include the table column names.
How do I also export the column names into the CSV? Something like show tablename?
You should add set hive.cli.print.header=true; before your select query to get the column names as the first row of your output. The output would then look like Mytable.col1, Mytable.col2 ....
If you don't want the table name with the column names, use set hive.resultset.use.unique.column.names=false;. The first row of your output would then look like col1, col2 ...
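In an interactive hive session, the two properties above are simply set before running the query, using the database and table names from the question:
hive> set hive.cli.print.header=true;
hive> set hive.resultset.use.unique.column.names=false;
hive> select * from Mydatabase.Mytable limit 100;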
Invoking the hive command line with the parameters suggested in the other answer here works for a plain select. So you can extract the column names and create the CSV to start with, as follows:
hive -S --hiveconf hive.cli.print.header=true --hiveconf hive.resultset.use.unique.column.names=false --database Mydatabase -e 'select * from Mytable limit 0;' > /LocalPath/table.csv
After that, run the actual data extraction part, but this time remember to append to the CSV:
cat /LocalPath/* >> /LocalPath/table.csv ## From your question with >> for append

Sqoop: using octal value(\0) as delimiter

Since I have special characters in one of the fields, I want to use a low-value character as the delimiter. Hive works fine with the delimiter (\0), but Sqoop fails with a NoSuchElementException. It looks like it is not detecting the delimiter as \0.
This is what my Hive and Sqoop scripts look like. Any help, please?
CREATE TABLE SCHEMA.test
(
name CHAR(20),
id int,
dte_report date
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\0'
LOCATION '/user/$USER/test';
sqoop-export \
-Dmapred.job.name="TEST" \
-Dorg.apache.sqoop.export.text.dump_data_on_error=true \
--options-file ${OPTION_FILE_LOCATION}\conn_mysql \
--export-dir /user/$USER/test \
--input-fields-terminated-by '\0' \
--input-lines-terminated-by '\n' \
--input-null-string '\\N' \
--input-null-non-string '\\N' \
--table MYSQL_TEST \
--validate \
--outdir /export/home/$USER/javalib
In the vi editor, the delimiter looks like '^#', and with od -c the delimiter is \0.
Setting the character set to UTF-8 in the MySQL connection string can resolve this issue:
mysql.url=jdbc:mysql://localhost:3306/nbs?useJvmCharsetConverters=false&useDynamicCharsetInfo=false&useUnicode=true&characterEncoding=UTF-8&characterSetResults=UTF-8&useEncoding=true
You should use \000 as the delimiter; it will generate that character as the delimiter.
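With that suggestion, only the field-delimiter option in the sqoop-export call changes; a trimmed sketch using the octal notation (the other options from the question stay as they are):
sqoop-export \
--export-dir /user/$USER/test \
--input-fields-terminated-by '\000' \
--input-lines-terminated-by '\n' \
--table MYSQL_TEST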

Exporting/ETL data from Hive to MySQL

I have a table in Hive called foo. I want to load data from the Hive table foo into the MySQL table bar. I want to do it using a Sqoop action in Oozie.
I am looking for an Oozie workflow.
--
My Hive table schema is:
hive> CREATE EXTERNAL TABLE IF NOT EXISTS foo (
id int, city string
)
row format delimited
fields terminated by '\t'
lines terminated by '\n'
LOCATION
'/user/cloudera/foo' ;
--
My MySQL table schema is:
mysql> create table bar (id int, city varchar(25));
--
I loaded local file foo to Hive table foo:
[cloudera@localhost ~]$ cat foo
1 a
4 b
The contents of file foo are tab-separated.
hive> load data local inpath '/home/cloudera/foo' into table foo;
--
The Sqoop command could be something like:
sqoop export --connect jdbc:mysql://ap1.abcxyz.net/test --username rio --password r3o% --table bar --export-dir /user/cloudera/foo --input-fields-terminated-by '\t' --input-lines-terminated-by '\n'
--
Thanks,
Rio
