Sqoop with SQL Server retrieving more records

Q: I want to import 5000 rows from SQL Server using Sqoop, but it gives me 20000 rows. I am using the query below:
sudo -E -u hdfs sqoop import --connect "jdbc:sqlserver://hostname;username=*****;password=*****;database=*****" --driver com.microsoft.sqlserver.jdbc.SQLServerDriver --query "select top 5000 * from Tb_Emp where \$CONDITIONS" --split-by EmpID -m 4 --target-dir /home/sqoop_SQLServeroutput
This retrieved 20000 records.
Every mapper is getting 5000 records. But if I do the same on MySQL, it gives 5000 records as expected:
sudo -E -u hdfs sqoop import --connect jdbc:mysql://hostname/<database_name> --username **** --password **** --query 'select * from Tb_Emp where $CONDITIONS limit 5000' --split-by EmpID -m 4 --target-dir /home/sqoop_MySqloutput
This retrieved 5000 records.
I don't know why this is happening.

Using the "top x" or "limit x" clauses do not make much sense with Sqoop as it can return different values on each query execution (there is no "order by"). Also in addition the clause will very likely confuse split generation, ending with not that easily deterministic outputs. Having said that I would recommend you to use only 1 mapper (-m 1 or --num-mappers 1) in case that you need to import predefined number of rows. Another solution would be to create temporary table with the required data on the MySQL/SQL Server side and import this whole temp table with Sqoop.

Related

Sqoop import from Teradata - No more room in database

I am new to Big Data. When I use Sqoop commands to import data from Teradata into my Hadoop cluster, I encounter a "No more room in database" error.
I am doing the following:
1. The data I am trying to pull into my Hadoop cluster is a view.
2. I have used the following sqoop command:
sqoop import --connect "jdbc:teradata://xxx.xxx.xxx.xxx/DATABASE=XY" \
-- username user1 \
-- password xyc
-- query "
SELECT * FROM TABLE1 WHERE .... AND \$CONDITIONS \
" \
--split-by ITEM_1 \
--delete-target-dir \
--target-dir /user/home/folder1 \
--as-avrodatafile;
I know that the default number of mappers is 4. Since I do not have a primary key on my view, I am using --split-by.
Using --num-mappers 1 works, but it takes a long time to port over roughly 36 GB of data, so I wanted to increase the number of mappers to 4 or more; however, I then get the "no more room" error. Does anyone know what's happening?

How to use sqoop validation?

Can you please help me with the points below?
I have an Oracle database with a huge number of records today, say 5 TB of data, so we can use the Sqoop validator framework: it will validate the data and import it into HDFS.
Then suppose tomorrow I receive new records on top of the above data. How can I import those new records (only the new records, into the existing directory) and validate them using the Sqoop validator framework?
In short, I need to know how to use the Sqoop validator framework when new records arrive and have to be imported into HDFS.
Please help me, team. Thanks.
Thank You,
Sipra
My understanding is that you need to check the Oracle database for new records before you start your delta process. I don't think you can validate based on the size of the records, but if you have an offset or a timestamp (TS) column, that will be helpful for validation.
How do I know if there are new records in Oracle since the last run/job/check?
You can do this with two sqoop import approaches; examples and explanations for both follow.
sqoop incremental
The following is an example of a Sqoop incremental import:
sqoop import --connect jdbc:mysql://localhost:3306/ydb --table yloc --username root -P --check-column rDate --incremental lastmodified --last-value 2014-01-25 --target-dir yloc/loc
This link explains it: https://www.tutorialspoint.com/sqoop/sqoop_import.html
sqoop import using query option
Here you basically use a where condition in the query and pull the data whose date or offset column is greater than the last received value.
Here is the syntax for it:
sqoop import \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--username retail_dba --password cloudera \
--query 'select * from sample_data where $CONDITIONS AND salary > 1000' \
--split-by salary \
--target-dir hdfs://quickstart.cloudera/user/cloudera/sqoop_new
Isolate the validation and import job
If you want to run the validation and the import job independently, Sqoop has another utility for this, sqoop eval. With it you can run a query against the RDBMS and point the output to a file or to a variable in your code, and use that for validation purposes however you want.
Syntax:
$ sqoop eval \
--connect jdbc:mysql://localhost/db \
--username root \
--query "SELECT * FROM employee LIMIT 3"
Explained here : https://www.tutorialspoint.com/sqoop/sqoop_eval.htm
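The answer above mentions pointing the sqoop eval output to a file or a variable; here is a minimal shell sketch of that idea (the table and check column are borrowed from the examples above and are only placeholders, the file path and date are assumptions, and the crude parsing assumes that only the count row of sqoop eval's boxed output contains digits):
# Count new rows on the RDBMS side and keep the result for later validation.
sqoop eval \
--connect jdbc:mysql://localhost/db \
--username root \
--query "SELECT COUNT(*) FROM employee WHERE rDate > '2014-01-25'" \
> /tmp/source_count.txt
# Extract the numeric count and reuse it, e.g. to decide whether to run the import.
SRC_COUNT=$(grep -oE '[0-9]+' /tmp/source_count.txt | tail -1)
echo "New rows on source since last run: ${SRC_COUNT}"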
validation parameter in sqoop
You can use this parameter to validate the row counts between what is imported/exported on the RDBMS side and what lands in HDFS:
--validate
More on that: https://sqoop.apache.org/docs/1.4.6/SqoopUserGuide.html#validation
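For instance, a rough sketch of an import with count validation (reusing the table and connection from the incremental example above; the target directory is just a placeholder) could look like:
sqoop import \
--connect jdbc:mysql://localhost:3306/ydb \
--username root -P \
--table yloc \
--target-dir yloc/loc_validated \
--validate
Per the Sqoop docs, validation is only supported for single-table imports and exports; it compares the source row count with the number of rows that arrived after the transfer.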

Error : sqoop to add records in hdfs

My scenario: I get 100 records daily into HDFS through Sqoop at a particular time. But yesterday I got only 50 records for that particular time, so today I need to get 50+100 records into HDFS through Sqoop for that particular time. Please help me. Thanks in advance.
To handle such a scenario, you need to add a where condition on time; it does not matter what the record count is.
You can use something like this in the sqoop import command, using the --query parameter:
sqoop import \
--connect jdbc:mysql://localhost:3306/sqoop \
--username sqoop \
--password sqoop \
--query "SELECT * FROM records WHERE recordTime BETWEEN '<datetime>' AND NOW() AND \$CONDITIONS" \
-m 1 \
--target-dir /user/hadoop/records
You need to modify the where condition as per your table schema.
Please refer to the Sqoop documentation for more details.
sqoop import --connect jdbc:mysql://localhost:3306/your_mysql_databasename --username root -P --query "SELECT * FROM records WHERE recordTime BETWEEN '<datetime>' AND NOW() AND \$CONDITIONS" --target-dir <where you want to store the data>
When Sqoop asks for the password, enter your MySQL password (e.g. my password is root).
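An alternative sketch uses the incremental lastmodified mode shown in the validation answer above; the connection details, table name, and check column recordTime are assumptions carried over from the example query. On each run Sqoop only pulls rows whose recordTime is at or after the stored last value, so a short day (the 50-record day) is simply caught up on the next run:
sqoop import \
--connect jdbc:mysql://localhost:3306/sqoop \
--username sqoop \
--password sqoop \
--table records \
--incremental lastmodified \
--check-column recordTime \
--last-value '<datetime>' \
--append \
--target-dir /user/hadoop/records \
-m 1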

Importing external table using multiple conditions sqoop

I would like to import some selected rows from an external table into an HDFS directory using Sqoop.
Below are the table rows in the MySQL database.
The column names are name, bank, salary, company:
Surender,HDFC,60000,CTS
Raja,AXIS,80000,TCS
Raj,HDFC,70000,TCS
Kumar,AXIS,70000,CTS
All I need is to have multiple where conditions in the sqoop command. How do I use multiple where conditions in a sqoop command?
sqoop import --connect jdbc:mysql://192.891.289.1/testing --username root -P
--query 'select * from records where salary>30000 and bank='HDFC' $CONDITIONS'
--target-dir '/user/cloudera/surender' -m 1
The above query returns an error: "Unknown column 'HDFC' in where clause".
The reason is you need to put "and" before $CONDITIONS. Instead of:
where salary>30000 and bank='HDFC' $CONDITIONS
Try using
where salary>30000 and bank='HDFC' and \$CONDITIONS
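Putting it together, a corrected full command along those lines might look like this (host, database, table, and target directory are taken from the question; the query is wrapped in double quotes so the single quotes around 'HDFC' survive, and $CONDITIONS is escaped so the shell does not expand it):
sqoop import --connect jdbc:mysql://192.891.289.1/testing --username root -P \
--query "select * from records where salary>30000 and bank='HDFC' and \$CONDITIONS" \
--target-dir '/user/cloudera/surender' -m 1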

Configuring Sqoop with Mysql?

I have successfully installed Sqoop; now the problem is how to use it with an RDBMS and how to load data from the RDBMS into HDFS using Sqoop.
Using Sqoop you can load data directly into Hive tables or store the data in a target directory in HDFS.
If you need to copy data from the RDBMS into a directory in HDFS:
sqoop import
--connect ConnectionString
--username username
--password Your_Database_Password {in case of no password, do not specify it}
--table tableName
--columns column_name(s) {in case you need only specific columns}
--target-dir '/tmp/myfolder'
--boundary-query 'Select min,max from table name'
--m 5 {set the number of mappers to 5}
--fields-terminated-by ',' {how you want your data to look in the target file}
Boundary query: This is something you can specify. If you do not specify it, then by default the boundary query is run as an inner query wrapped around your query, which adds up to a more complex query.
If you specify it explicitly, it runs as a normal query and hence performance is improved.
You may also want to restrict the number of rows, say based on the column ID; suppose you need data for IDs 1 to 1000. Then, using a boundary query and --split-by, you will be able to restrict your imported data (see the full-command sketch below).
--boundary-query "select 0,1000 from employee'
--split-by ID
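Put together, a sketch of a full command restricting the import to that ID range (connection string, credentials, and table name are placeholders in the style of the examples above) could be:
sqoop import \
--connect jdbc:mysql://<hostname>:<port>/<dbname> \
--username username \
--password Your_Database_Password \
--table employee \
--boundary-query "select 0,1000 from employee" \
--split-by ID \
--target-dir '/tmp/myfolder' \
-m 4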
Split-by: You use --split-by on a Sqoop import to specify the column on which the data should be split. By default, if you do not specify this, Sqoop picks the table's primary key as the split-by column.
Split-by divides the data among the mappers, each of which writes its own output files under the target directory; by default the number of mappers is 4.
This may seem unimportant, but if you have a composite primary key or no primary key at all, Sqoop cannot choose a split column and will error out.
Note: you will not face this issue if you set the number of mappers to 1. In that case no split-by condition is used, since there is only one mapper, so the query runs fine. This can be done using
--m 1
If you need to copy data from the RDBMS into a Hive table:
sqoop import
--connect ConnectionString
--username username
--password Your_Database_Password {in case of no password, do not specify it}
--table tableName
--boundary-query 'Select min,max from table name'
--m 5 {set the number of mappers to 5}
--hive-import
--hive-table serviceorderdb.productinfo
Running a query instead of importing the entire table:
sqoop import
--connect ConnectionString
--username username
--password Your_Database_Password
--query "select name from employees where name like '%s' and \$CONDITIONS"
--m 5 {set the number of mappers to 5}
--target-dir '/tmp/myfolder'
--fields-terminated-by ',' {how you want your data to look in the target file}
You may notice the extra $CONDITIONS token in the query. This is because this time you specified no table but an explicit query. When Sqoop runs, it looks for boundary conditions, which it does not find; it then looks for a table and a primary key to build the boundary query, which it will not find either. Hence we include $CONDITIONS as a placeholder so that Sqoop can substitute its own split conditions derived from the query result.
Checking whether your connection is set up properly: for this you can just call list-databases, and if you see your databases listed, then your connection is fine.
$ sqoop list-databases
--connect jdbc:mysql://localhost/
--username root
--password pwd
Connection String for Different Databases :
MYSQL: jdbc:mysql://<hostname>:<port>/<dbname>
jdbc:mysql://127.0.0.1:3306/test_database
Oracle: jdbc:oracle:thin:@//host_name:port_number/service_name
jdbc:oracle:thin:scott/tiger@//myhost:1521/myservicename
You may learn more about sqoop imports from : https://sqoop.apache.org/docs/1.4.1-incubating/SqoopUserGuide.html
By using the sqoop import command you can import data from an RDBMS into HDFS, Hive, and HBase.
sqoop import --connect jdbc:mysql://localhost:portnumber/DBName --username root --table emp --password root -m 1
With this command the data will be stored in HDFS.
Sample commands to run sqoop import (load data from RDBMS to HDFS):
Postgres
sqoop import --connect jdbc:postgresql://postgresHost/databaseName
--username username --password 123 --table tableName
MySQL
sqoop import --connect jdbc:mysql://mysqlHost/databaseName --username username --password 123 --table tableName
Oracle*
sqoop import --connect jdbc:oracle:thin:@oracleHost:1521/databaseName --username USERNAME --password 123 --table TABLENAME
SQL Server
sqoop import --connect 'jdbc:sqlserver://sqlserverhost:1433;database=dbname;username=<username>;password=<password>' --table tableName
*Sqoop won't find any columns in a table if you don't specify both the username and the table name in the correct case. Usually, specifying both in uppercase resolves the issue.
Read the Sqoop User's Guide: https://sqoop.apache.org/docs/1.4.5/SqoopUserGuide.html
I also recommend the Apache Sqoop Cookbook. You will learn how to use import and export tools, do incremental import jobs, save jobs, solve problems with jdbc drivers and much more. http://shop.oreilly.com/product/0636920029519.do
