Importing an external table using multiple conditions with Sqoop

I would like to import some selected rows from an external table into an HDFS directory using Sqoop.
Below are the table rows in the MySQL database.
The column names are name, bank, salary, company:
Surender,HDFC,60000,CTS
Raja,AXIS,80000,TCS
Raj,HDFC,70000,TCS
Kumar,AXIS,70000,CTS
All I need is to have multiple WHERE conditions in the Sqoop command. How can I do that?
sqoop import --connect jdbc:mysql://192.891.289.1/testing --username root -P
--query 'select * from records where salary>30000 and bank='HDFC' $CONDITIONS'
--target-dir '/user/cloudera/surender' -m 1
The above command returns an error: "Unknown column 'HDFC' in where clause".

There are two problems here. First, you need "and" before $CONDITIONS. Second, the query is wrapped in single quotes, so the shell strips the inner quotes around 'HDFC' and MySQL treats HDFC as a column name, which is where the "Unknown column" error comes from. Instead of:
where salary>30000 and bank='HDFC' $CONDITIONS
wrap the query in double quotes and use
where salary>30000 and bank='HDFC' and \$CONDITIONS
(the backslash stops the shell from expanding $CONDITIONS inside double quotes).
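With those two changes, a corrected version of the command from the question might look like this (same connection details and paths as in the question):
sqoop import --connect jdbc:mysql://192.891.289.1/testing --username root -P
--query "select * from records where salary > 30000 and bank = 'HDFC' and \$CONDITIONS"
--target-dir '/user/cloudera/surender' -m 1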

Related

Adding column in sqoop import

While importing data from SQL through Sqoop, is it possible to add a new column and insert a timestamp into that column?
Is it possible by any other way before getting the data into HDFS?
You may use the --query parameter of the sqoop command and add a SQL function to get the current timestamp in the query.
Example: import a stud table from MySQL having rollnum and name columns.
sqoop import --connect jdbc:mysql://localhost:3306/test --driver com.mysql.jdbc.Driver --username root --query 'select name, rollnum, current_timestamp from stud where $CONDITIONS' --target-dir '/tmp/stud1' --split-by rollnum
Note the current_timestamp MySQL function used in the query.

incremental "lastmodified" not working in sqoop

I'm trying to use Sqoop to perform an incremental import from a Teradata DB into Hive. Below is the command:
sqoop import --connect jdbc:teradata://xxx.xxx.x.xx/DATABASE=DBN --driver com.teradata.jdbc.TeraDriver --username userN --password pass --query "SELECT alias.colA, alias.call_date, alias.colB, alias.colC FROM tableName alias where \$CONDITIONS" --target-dir /apps/hive/warehouse/staging.db/tableName -m 26 --check-column call_date --incremental append --split-by alias.colA --last-value '2016-02-01'
The column call_date is of DATE type, values in the format 'YYYY-MM-DD'.
When I use 'append' for --incremental, everything works fine. But when I put 'lastmodified', the following error is thrown:
ERROR util.SqlTypeMap: It seems like you are looking up a column that does not
ERROR util.SqlTypeMap: exist in the table. Please ensure that you've specified
ERROR util.SqlTypeMap: correct column names in Sqoop options.
ERROR tool.ImportTool: Imported Failed: column not found: call_date
I'm using Sqoop 1.4.4.2.1 on HDP 2.1, while the Teradata DB is 14.10.
Any pointers will be helpful.
I think, in the case of a free-form query, you can perform the last-value check in the query itself, something like this:
"SELECT alias.colA, alias.call_date, alias.colB, alias.colC FROM tableName alias where call_date >'2016-02-01' and \$CONDITIONS" .
Reference (see the section "Incrementally Updating Data in Hive > 1. Ingest the data"):
https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_dataintegration/content/incrementally-updating-hive-table-with-sqoop-and-ext-table.html
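Putting that together with the command from the question (same connection details; the --incremental, --check-column and --last-value options are dropped because the filter now lives in the query itself), a sketch of the full import would be:
sqoop import --connect jdbc:teradata://xxx.xxx.x.xx/DATABASE=DBN --driver com.teradata.jdbc.TeraDriver --username userN --password pass --query "SELECT alias.colA, alias.call_date, alias.colB, alias.colC FROM tableName alias where call_date > '2016-02-01' and \$CONDITIONS" --target-dir /apps/hive/warehouse/staging.db/tableName -m 26 --split-by alias.colA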

Data disappears on Sqoop import from Oracle to Hive

I manage to connect to the DB and import data from Oracle to a file or to Hive.
But now, I would like to import data from a query into Hive using Sqoop on Oracle.
I previously used the following:
sqoop import --connect 'jdbc:oracle:thin:@server1:1521:ICIS' -P --username JAPHONIE --query 'SELECT * FROM CONTRACTS INNER JOIN CONTRACT_VERSIONS ON CV_CON_NUMBER = CON_NUMBER WHERE $CONDITIONS' --target-dir BOUH --split-by CON_NUMBER --where '1=1'
This one creates my data in my folder BOUH; there is no problem on this point.
But when I use the following:
sqoop import --connect 'jdbc:oracle:thin:@server1:1521:ICIS' -P --username JAPHONIE --query 'SELECT * FROM CONTRACTS INNER JOIN CONTRACT_VERSIONS ON CV_CON_NUMBER = CON_NUMBER WHERE $CONDITIONS' --target-dir BOUH --split-by CON_NUMBER --where '1=1' --hive-import --hive-table BOUH
My BOUH folder contains only _SUCCESS, no data, and the table in Hive is created but empty...
I don't understand where the problem comes from. I don't have any error message either...
Do you have any idea?
EDIT: I managed to load my table by first running the 2nd command, which creates the table without data, then deleting the empty folder and running the 1st command, which extracts the data properly... but I would like to do the same in one command...
The data you imported will be saved under /user/hive/warehouse because it is an internal (managed) Hive table; it won't be saved in the BOUH folder you mentioned in --target-dir, which is only used as a temporary staging directory during a --hive-import. Your script is correct and you should be able to see data in the Hive table. Since you say you cannot, please look at the /user/hive/warehouse folder once. If you still cannot see the data, please paste the Sqoop logs here.
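A quick way to check, assuming the default warehouse location and the default database (the exact path is an assumption; Hive usually lowercases the table directory name):
hdfs dfs -ls /user/hive/warehouse/bouh
hive -e "select count(*) from BOUH"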

Sqoop with SQL Server retrieving more records than expected

I want to import 5000 rows from SQL Server using Sqoop, but it's giving me 20000 rows. I am using the query below.
sudo -E -u hdfs sqoop import --connect "jdbc:sqlserver://hostname;username=*****;password=*****;database=*****" --driver com.microsoft.sqlserver.jdbc.SQLServerDriver --query "select top 5000 * from Tb_Emp where \$CONDITIONS" --split-by EmpID -m 4 --target-dir /home/sqoop_SQLServeroutput
It retrieved 20000 records.
Every mapper is getting 5000 records. But if I do this on MySQL, it gives 5000 records as expected.
sudo -E -u hdfs sqoop import --connect jdbc:mysql://hostname/<database_name> --username **** --password **** --query 'select * from Tb_Emp where $CONDITIONS limit 5000' --split-by EmpID -m 4 --target-dir /home/sqoop_MySqloutput
It retrieved 5000 records.
I don't know why this is happening.
Using the "top x" or "limit x" clauses do not make much sense with Sqoop as it can return different values on each query execution (there is no "order by"). Also in addition the clause will very likely confuse split generation, ending with not that easily deterministic outputs. Having said that I would recommend you to use only 1 mapper (-m 1 or --num-mappers 1) in case that you need to import predefined number of rows. Another solution would be to create temporary table with the required data on the MySQL/SQL Server side and import this whole temp table with Sqoop.

Configuring Sqoop with MySQL?

I have successfully installed Sqoop; now the problem is how to connect it to an RDBMS and how to load data from the RDBMS into HDFS using Sqoop.
Using Sqoop you can load data directly into Hive tables or store the data in some target directory in HDFS.
If you need to copy data from an RDBMS into some HDFS directory:
sqoop import
--connect ConnectionString
--username username
--password Your_Database_Password {in case there is no password, do not specify it}
--table tableName
--columns column_name(s) {in case you need only specific columns}
--target-dir '/tmp/myfolder'
--boundary-query 'select min(id), max(id) from tableName'
-m 5 {set number of mappers to 5}
--fields-terminated-by ',' {how you want your data delimited in the target file}
Boundary query: this is something you can specify. If you do not specify it, by default Sqoop runs its own min/max query against the split column, which adds up to a more complex query.
If you specify it explicitly, it runs as a plain query, and hence performance is improved.
Also, you may want to restrict the number of rows imported, say based on a column ID: suppose you need the data for IDs 1 to 1000. Then, using a boundary query together with --split-by, you will be able to restrict the imported data:
--boundary-query "select 0,1000 from employee'
--split-by ID
Split-by: you use --split-by on a Sqoop import to specify the column on which the data should be split across mappers. By default, if you do not specify it, Sqoop picks the table's primary key as the split-by column.
Split-by divides the rows among the mappers, which write them to separate part files in the target directory; by default the number of mappers is 4.
This may seem unimportant, but in case you have a composite primary key or no primary key at all, Sqoop cannot pick a split column and may error out.
Note: you will not face this issue if you set the number of mappers to 1. In that case no split-by condition is used, since there is only one mapper, so the query runs fine. This can be done using
-m 1
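For instance, a minimal sketch of an import for a table with no usable primary key (same placeholder connection string and table name as above) simply forces a single mapper:
sqoop import
--connect ConnectionString
--username username
--password Your_Database_Password
--table tableName
--target-dir '/tmp/myfolder'
-m 1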
If you need to copy data from an RDBMS into a Hive table:
sqoop import
--connect ConnectionString
--username username
--password Your_Database_Password {in case there is no password, do not specify it}
--table tableName
--boundary-query 'select min(id), max(id) from tableName'
-m 5 {set number of mappers to 5}
--hive-import
--hive-table serviceorderdb.productinfo
Running a query instead of importing the entire table:
sqoop import
--connect ConnectionString
--username username
--password Your_Database_Password
--query "select name from employees where name like '%s' and \$CONDITIONS"
-m 1 {a free-form query needs --split-by to run with more than one mapper}
--target-dir '/tmp/myfolder'
--fields-terminated-by ',' {how you want your data delimited in the target file}
You may notice the extra $CONDITIONS token in the query. It is needed because this time you specified no table but an explicit query: when Sqoop runs, it has no table or primary key from which to build its boundary query. $CONDITIONS is a placeholder that Sqoop replaces at runtime with the split condition for each mapper (derived from the boundary query over the --split-by column, or simply 1=1 when a single mapper is used).
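To illustrate how the placeholder is filled in (the ID column and the boundary values below are made up for this sketch), if the query were run with --split-by ID and two mappers, Sqoop would first run the boundary query and then substitute one range per mapper, roughly:
select min(ID), max(ID) from employees {boundary query; say it returns 1 and 1000}
select name from employees where name like '%s' and ( ID >= 1 ) and ( ID < 500 ) {mapper 1}
select name from employees where name like '%s' and ( ID >= 500 ) and ( ID <= 1000 ) {mapper 2}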
Checking if your connection is set up properly: for this you can just call list-databases, and if you see your databases listed, your connection is fine.
$ sqoop list-databases
--connect jdbc:mysql://localhost/
--username root
--password pwd
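Another quick sanity check, assuming the same placeholder connection details, is sqoop eval, which runs an arbitrary SQL statement against the database and prints the result:
$ sqoop eval
--connect jdbc:mysql://localhost/
--username root
--password pwd
--query "select 1"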
Connection strings for different databases:
MySQL: jdbc:mysql://<hostname>:<port>/<dbname>
jdbc:mysql://127.0.0.1:3306/test_database
Oracle: jdbc:oracle:thin:@//<host_name>:<port_number>/<service_name>
jdbc:oracle:thin:scott/tiger@//myhost:1521/myservicename
You may learn more about sqoop imports from : https://sqoop.apache.org/docs/1.4.1-incubating/SqoopUserGuide.html
By using the sqoop import command you can import data from an RDBMS into HDFS, Hive, or HBase.
sqoop import --connect jdbc:mysql://localhost:portnumber/DBName --username root --table emp --password root -m 1
With this command the data will be stored in HDFS, by default under a directory named after the table in your HDFS home directory (since no --target-dir is given).
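A quick way to verify the import, assuming that default location (the path below is an assumption; adjust it to your user and table):
hdfs dfs -ls /user/<your_user>/emp
hdfs dfs -cat /user/<your_user>/emp/part-m-00000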
Sample commands to run sqoop import (load data from RDBMS to HDFS):
Postgres
sqoop import --connect jdbc:postgresql://postgresHost/databaseName
--username username --password 123 --table tableName
MySQL
sqoop import --connect jdbc:mysql://mysqlHost/databaseName --username username --password 123 --table tableName
Oracle*
sqoop import --connect jdbc:oracle:thin:@oracleHost:1521/databaseName --username USERNAME --password 123 --table TABLENAME
SQL Server
sqoop import --connect 'jdbc:sqlserver://sqlserverhost:1433;database=dbname;username=<username>;password=<password>' --table tableName
*Sqoop won't find any columns in an Oracle table if you don't specify both the username and the table name in the correct case. Usually, specifying both in uppercase resolves the issue.
Read the Sqoop User's Guide: https://sqoop.apache.org/docs/1.4.5/SqoopUserGuide.html
I also recommend the Apache Sqoop Cookbook. You will learn how to use import and export tools, do incremental import jobs, save jobs, solve problems with jdbc drivers and much more. http://shop.oreilly.com/product/0636920029519.do
