Error Message when sqooping oracle table into hive

I am trying to work out how to fix the following error, which I keep getting when I use Sqoop to import a table from Oracle into Hive. I was able to sqoop another table this morning, but every attempt since then has failed. This is the error in the log file:
Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
I am not sure why I keep getting this error: without a Kerberos ticket I would not have been able to log in through PuTTY to run the script in the first place, so I am confused as to (a) why I am getting this error and (b) what I can do to fix it.
I would appreciate it if somebody could advise where I am going wrong.
Thanks in advance.
Part of the Error Message from Log File:
17/01/05 15:08:52 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.7.1
17/01/05 15:08:52 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/01/05 15:08:52 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
17/01/05 15:08:52 INFO manager.SqlManager: Using default fetchSize of 1000
17/01/05 15:08:52 INFO tool.CodeGenTool: Beginning code generation
17/01/05 15:08:53 INFO manager.OracleManager: Time zone has been set to GMT
17/01/05 15:08:53 INFO manager.SqlManager: Executing SQL statement: SELECT * FROM TABLE_NAME where SNAPSHOT_DATE_TIME >= '01-APR-16' and (1 = 0)
17/01/05 15:08:53 INFO manager.SqlManager: Executing SQL statement: SELECT * FROM TABLE_NAME where SNAPSHOT_DATE_TIME >= '01-APR-16' and (1 = 0)
17/01/05 15:08:53 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
Note: /tmp/sqoop-username/compile/ba4df230b0d18377522bbfe053ed3661/QueryResult.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/01/05 15:08:55 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-username/compile/ba4df230b0d18377522bbfe053ed3661/QueryResult.jar
17/01/05 15:08:55 INFO mapreduce.ImportJobBase: Beginning query import.
17/01/05 15:08:55 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/01/05 15:08:55 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/01/05 15:08:56 WARN security.UserGroupInformation: PriviledgedActionException as:username (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
17/01/05 15:08:56 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
....
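For context, a common first check in this situation is whether the shell session that runs the script still holds a valid ticket-granting ticket, since a TGT obtained at login can expire while the session stays open. The standard MIT Kerberos client commands for that are shown below; the principal is only a placeholder for whatever your cluster uses.
klist                     # list cached tickets and their expiry times
kinit username@REALM      # obtain a fresh TGT if the cache is empty or the ticket has expired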

Related

Importing data from Sybase IQ using Sqoop: Syntax error near 'Committed' on line 1

I am trying to import a table from Sybase IQ to a Hive table using the following sqoop command:
sqoop import \
--verbose \
--connect jdbc:sybase:Tds:server:port#?ServiceName=dbName \
--username username \
-P \
--driver com.sybase.jdbc4.jdbc.SybDriver \
--table myName.someTable \
--target-dir file:////tmp/HELLO/tej01
From this I get the following error thrown: Syntax error near 'Committed' on line 1
I've also tried escaping the "." in the table name as shown below, but I get the same error:
--table myName\".\"someTable \
Can someone please give me some direction as to how I can resolve this?
19/12/05 01:45:34 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-mapr-1803
19/12/05 01:45:34 DEBUG tool.BaseSqoopTool: Enabled debug logging.
19/12/05 01:45:35 DEBUG sqoop.ConnFactory: Loaded manager factory: org.apache.sqoop.manager.oracle.OraOopManagerFactory
19/12/05 01:45:35 DEBUG sqoop.ConnFactory: Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
19/12/05 01:45:35 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
19/12/05 01:45:35 INFO manager.SqlManager: Using default fetchSize of 1000
19/12/05 01:45:35 INFO tool.CodeGenTool: Beginning code generation
19/12/05 01:45:35 DEBUG manager.SqlManager: Execute getColumnInfoRawQuery : SELECT t.* FROM myName.someTable AS t WHERE 1=0
19/12/05 01:45:35 DEBUG manager.SqlManager: No connection paramenters specified. Using regular API for making connection.
19/12/05 01:45:35 ERROR manager.SqlManager: Error executing statement: com.sybase.jdbc4.jdbc.SybSQLException: SQL Anywhere Error -131: Syntax error near 'Committed' on line 1
com.sybase.jdbc4.jdbc.SybSQLException: SQL Anywhere Error -131: Syntax error near 'Committed' on line 1
at com.sybase.jdbc4.tds.Tds.processEed(Tds.java:4201)
at com.sybase.jdbc4.tds.Tds.nextResult(Tds.java:3318)
at com.sybase.jdbc4.jdbc.ResultGetter.nextResult(ResultGetter.java:78)
at com.sybase.jdbc4.jdbc.SybStatement.nextResult(SybStatement.java:302)
at com.sybase.jdbc4.jdbc.SybStatement.nextResult(SybStatement.java:284)
at com.sybase.jdbc4.jdbc.SybStatement.updateLoop(SybStatement.java:2762)
at com.sybase.jdbc4.jdbc.SybStatement.executeUpdate(SybStatement.java:2746)
at com.sybase.jdbc4.jdbc.SybPreparedStatement.executeUpdate(SybPreparedStatement.java:330)
at com.sybase.jdbc4.tds.Tds.setOption(Tds.java:2139)
at com.sybase.jdbc4.jdbc.SybConnection.setTransactionIsolation(SybConnection.java:2806)
at org.apache.sqoop.manager.SqlManager.makeConnection(SqlManager.java:907)
at org.apache.sqoop.manager.GenericJdbcManager.getConnection(GenericJdbcManager.java:52)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:760)
at org.apache.sqoop.manager.SqlManager.execute(SqlManager.java:783)
at org.apache.sqoop.manager.SqlManager.getColumnInfoForRawQuery(SqlManager.java:280)
at org.apache.sqoop.manager.SqlManager.getColumnTypesForRawQuery(SqlManager.java:251)
at org.apache.sqoop.manager.SqlManager.getColumnTypes(SqlManager.java:237)
at org.apache.sqoop.manager.ConnManager.getColumnTypes(ConnManager.java:300)
at org.apache.sqoop.orm.ClassWriter.getColumnTypes(ClassWriter.java:1846)
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1657)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:479)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:606)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
19/12/05 01:45:35 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: No columns to generate for ClassWriter
at org.apache.sqoop.orm.ClassWriter.generate(ClassWriter.java:1663)
at org.apache.sqoop.tool.CodeGenTool.generateORM(CodeGenTool.java:107)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:479)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:606)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
This seems to be a known bug (SAP KBA 2707539, "SAP Targeted CR").
I was getting this error with jconnect16.jar; it does not occur with jconn4.jar. Switching to version 4 of the driver fixed the issue for me.
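As a rough sketch of that fix (the jar location under $SQOOP_HOME/lib and the exact jar file names are assumptions about a typical setup, not from the thread): remove the jConnect 16 jar from Sqoop's classpath, add jConnect 4, and re-run the same import command unchanged.
rm $SQOOP_HOME/lib/jconnect16.jar           # hypothetical path to the old driver jar
cp /path/to/jconn4.jar $SQOOP_HOME/lib/     # make the jConnect 4 driver visible to Sqoop
# the original command keeps --driver com.sybase.jdbc4.jdbc.SybDriver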

sqoop import from oracle fails

I get an error while offloading an Oracle table to HDFS. Here is the command:
sqoop import -Dmapreduce.job.queuename=root.username \
--connect jdbc:oracle:thin:@//someExadataHostname/dbInstance \
--username user \
--password welcome1 \
--table TB_RECHARGE_DIM_APPLICATION \
--target-dir /data/in/sqm/dev/unprocessed/sqoop/oracle_db_exa_test \
--delete-target-dir \
--m 1
It throws an error:
Warning: /opt/cloudera/parcels/CDH-5.10.1-1.cdh5.10.1.p0.10/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/01/10 14:27:24 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.10.1
18/01/10 14:27:24 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/01/10 14:27:24 INFO teradata.TeradataManagerFactory: Loaded connector factory for 'Cloudera Connector Powered by Teradata' on version 1.5c5
18/01/10 14:27:25 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
18/01/10 14:27:25 INFO manager.SqlManager: Using default fetchSize of 1000
18/01/10 14:27:25 INFO tool.CodeGenTool: Beginning code generation
18/01/10 14:27:29 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:29 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM TB_RECHARGE_DIM_APPLICATION t WHERE 1=0
18/01/10 14:27:29 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
Note: /tmp/s/compile/926451c21b6a6623f9763b96c7afa503/TB_RECHARGE_DIM_APPLICATION.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/01/10 14:27:31 INFO orm.CompilationManager: Writing jar file: /tmp/compile/926451c21b6a6623f9763b96c7afa503/TB_RECHARGE_DIM_APPLICATION.jar
18/01/10 14:27:32 INFO tool.ImportTool: Destination directory /data/in/sqm/dev/unprocessed/sqoop/oracle_db_exa_test deleted.
18/01/10 14:27:32 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:34 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:34 INFO mapreduce.ImportJobBase: Beginning import of TB_RECHARGE_DIM_APPLICATION
18/01/10 14:27:34 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/01/10 14:27:34 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:34 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/01/10 14:27:34 INFO hdfs.DFSClient: Created token for username: HDFS_DELEGATION_TOKEN owner=username#company.CO.ID, renewer=yarn, realUser=, issueDate=1515569254366, maxDate=1516174054366, sequenceNumber=29920785, masterKeyId=849 on ha-hdfs:nameservice1
18/01/10 14:27:34 INFO security.TokenCache: Got dt for hdfs://nameservice1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (token for username: HDFS_DELEGATION_TOKEN owner=username#company.CO.ID, renewer=yarn, realUser=, issueDate=1515569254366, maxDate=1516174054366, sequenceNumber=29920785, masterKeyId=849)
18/01/10 14:28:10 WARN hdfs.DFSClient: Slow waitForAckedSeqno took 33367ms (threshold=30000ms). File being written: /user/username/.staging/job_1508590044386_4156415/libjars/commons-lang3-3.4.jar, block: BP-673686138-10.54.0.2-1453972538527:blk_3947617000_2874005894, Write pipeline datanodes: [DatanodeInfoWithStorage[10.54.1.110:50010,DS-bfb333fb-f63f-4c85-b60f-3ce0889fe16d,DISK], DatanodeInfoWithStorage[10.54.0.187:50010,DS-5c692f55-614c-4d33-9e83-0758d2d54555,DISK], DatanodeInfoWithStorage[10.54.0.183:50010,DS-8530593e-b498-455e-9aaa-b1a12c8ec3b2,DISK]]
18/01/10 14:28:13 INFO db.DBInputFormat: Using read commited transaction isolation
18/01/10 14:28:14 INFO mapreduce.JobSubmitter: number of splits:1
18/01/10 14:28:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1508590044386_4156415
18/01/10 14:28:14 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (token for username: HDFS_DELEGATION_TOKEN owner=username#company.CO.ID, renewer=yarn, realUser=, issueDate=1515569254366, maxDate=1516174054366, sequenceNumber=29920785, masterKeyId=849)
18/01/10 14:28:15 INFO impl.YarnClientImpl: Submitted application application_1508590044386_4156415
18/01/10 14:28:15 INFO mapreduce.Job: The url to track the job: https://host:8090/proxy/application_1508590044386_4156415/
18/01/10 14:28:15 INFO mapreduce.Job: Running job: job_1508590044386_4156415
18/01/10 14:28:28 INFO mapreduce.Job: Job job_1508590044386_4156415 running in uber mode : false
18/01/10 14:28:28 INFO mapreduce.Job: map 0% reduce 0%
18/01/10 14:29:38 INFO mapreduce.Job: Task Id : attempt_1508590044386_4156415_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:170)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:161)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:749)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.RuntimeException: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:223)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:168)
... 10 more
Caused by: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:673)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:715)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:385)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:30)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:564)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:216)
... 11 more
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:445)
at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:464)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:594)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:229)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1360)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:486)
... 19 more
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:162)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:411)
... 24 more
I am able to list tables using
sqoop list-tables --connect jdbc:oracle:thin:@//someExadataHost/dbInstance --username user --password pass
I don't know why the network cannot establish a connection, given that the job launches successfully (doesn't that mean Sqoop was able to connect and confirm the Oracle table exists?). The map task never finishes.
Any idea about this? Thank you.
There might be many reasons you are facing this issue:
The listener is not configured properly.
The listener process (service) is not running. Restart it with the lsnrctl start command, or on Windows by starting the listener service.
Also check the hostname in Oracle Net Manager and in the listener; both should be the same (path: Local -> Service Naming -> orcl).
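A minimal listener check on the database host, assuming the standard Oracle lsnrctl utility is on the PATH there, would be:
lsnrctl status    # shows whether the listener is up and which services it knows about
lsnrctl start     # starts it if it is down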
Hope this helps!!
Add --driver oracle.jdbc.driver.OracleDriver to the command line.
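For clarity, the command from the question with that option added would look roughly like this (everything else unchanged; whether it resolves the failure still depends on the worker nodes being able to reach the Exadata host over the network):
sqoop import -Dmapreduce.job.queuename=root.username \
--connect jdbc:oracle:thin:@//someExadataHostname/dbInstance \
--username user \
--password welcome1 \
--driver oracle.jdbc.driver.OracleDriver \
--table TB_RECHARGE_DIM_APPLICATION \
--target-dir /data/in/sqm/dev/unprocessed/sqoop/oracle_db_exa_test \
--delete-target-dir \
--m 1
Note that specifying --driver explicitly makes Sqoop fall back to its generic JDBC connection manager, so it will print a warning suggesting --connection-manager; that is expected.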

sqoop hive import job is running for long time and its Tracking URL is UNASSIGNED

I am running Hadoop on a single-node cluster and trying to import MySQL data into Hive tables with Sqoop.
The issue is that while executing the Sqoop job, the process takes a long time and no data is imported, as shown below:
Sqoop command:
sqoop job --create Hcustomers -- import --connect jdbc:mysql://localhost/retaildb --username root -P --table customers --check-column id --incremental append --last-value 0 --target-dir '/user/hive/retail/Hcustomers' -m 2;
After executing the Sqoop job:
chaithu#localhost:~$ sqoop job --exec Hcustomers;
Warning: /opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/12/20 22:47:27 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.1
Enter password:
17/12/20 22:47:32 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
17/12/20 22:47:32 INFO tool.CodeGenTool: Beginning code generation
Wed Dec 20 22:47:32 IST 2017 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
17/12/20 22:47:32 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `customers` AS t LIMIT 1
17/12/20 22:47:32 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `customers` AS t LIMIT 1
17/12/20 22:47:32 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
Note: /tmp/sqoop-chaithu/compile/fa134412efb9ef64f2cb5a5ebfd29956/customers.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/12/20 22:47:34 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-chaithu/compile/fa134412efb9ef64f2cb5a5ebfd29956/customers.jar
17/12/20 22:47:38 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`id`) FROM `customers`
17/12/20 22:47:38 INFO tool.ImportTool: Incremental import based on column `id`
17/12/20 22:47:38 INFO tool.ImportTool: Lower bound value: 0
17/12/20 22:47:38 INFO tool.ImportTool: Upper bound value: 11
17/12/20 22:47:38 WARN manager.MySQLManager: It looks like you are importing from mysql.
17/12/20 22:47:38 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
17/12/20 22:47:38 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
17/12/20 22:47:38 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
17/12/20 22:47:38 INFO mapreduce.ImportJobBase: Beginning import of customers
17/12/20 22:47:38 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/12/20 22:47:38 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/12/20 22:47:38 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
17/12/20 22:47:39 INFO mapreduce.JobSubmissionFiles: Permissions on staging directory /user/chaithu/.staging are incorrect: rwxrwx---. Fixing permissions to correct value rwx------
Wed Dec 20 22:47:45 IST 2017 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
17/12/20 22:47:45 INFO db.DBInputFormat: Using read commited transaction isolation
17/12/20 22:47:45 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `customers` WHERE ( `id` > 0 AND `id` <= 11 )
17/12/20 22:47:45 INFO db.IntegerSplitter: Split size: 5; Num splits: 2 from: 1 to: 11
17/12/20 22:47:45 INFO mapreduce.JobSubmitter: number of splits:2
17/12/20 22:47:46 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1513788823385_0002
17/12/20 22:47:46 INFO impl.YarnClientImpl: Submitted application application_1513788823385_0002
17/12/20 22:47:46 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1513788823385_0002/
17/12/20 22:47:46 INFO mapreduce.Job: Running job: job_1513788823385_0002
As shown above, the execution gets stuck and no data is imported (screenshot attached: Sqoop job status on the Hadoop job tracker).
I am only trying to import 11 rows from MySQL to Hive, so it should not take much time. What might be the issue?
Kindly suggest.
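A common first check when a submitted job sits with an unassigned tracking URL is to ask YARN directly whether the application ever leaves the ACCEPTED state (for example because the single node has no free container capacity); the application id below is the one printed in the log above:
yarn application -list                                      # shows state (ACCEPTED vs RUNNING) and queue
yarn application -status application_1513788823385_0002     # details for this specific job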

Sqoop running into local job runner mode

When I run Sqoop, I am not sure why it falls into local job runner mode and then says that I have provided an invalid jobtracker URL for LocalJobRunner. Can anyone tell me what's going on?
$ bin/sqoop import -jt myjobtracker:50070 --connect jdbc:mysql://mydbhost.com/mydata --username foo --password bar --as-parquetfile --table campaigns --target-dir hdfs://myhdfs:8020/user/myself/campaigns
14/08/20 21:04:50 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-SNAPSHOT
14/08/20 21:04:50 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/08/20 21:04:51 INFO manager.SqlManager: Using default fetchSize of 1000
14/08/20 21:04:51 INFO tool.CodeGenTool: Beginning code generation
14/08/20 21:04:51 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `campaigns` AS t LIMIT 1
14/08/20 21:04:51 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `campaigns` AS t LIMIT 1
14/08/20 21:04:51 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `campaigns` AS t LIMIT 1
14/08/20 21:04:51 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-myself/compile/6acdb40688239f19ddf86a1290ad6c64/campaigns.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/08/20 21:04:54 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-myself/compile/6acdb40688239f19ddf86a1290ad6c64/campaigns.jar
14/08/20 21:04:54 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/08/20 21:04:54 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/08/20 21:04:54 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/08/20 21:04:54 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/08/20 21:04:54 INFO mapreduce.ImportJobBase: Beginning import of campaigns
14/08/20 21:04:54 WARN conf.Configuration: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
14/08/20 21:04:54 WARN mapred.JobConf: The variable mapred.child.ulimit is no longer used.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/share/hbase/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
14/08/20 21:04:54 WARN conf.Configuration: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/08/20 21:04:56 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/08/20 21:04:56 INFO mapreduce.Cluster: Failed to use org.apache.hadoop.mapred.LocalClientProtocolProvider due to error: Invalid "mapreduce.jobtracker.address" configuration value for LocalJobRunner : "myjobtracker:50070"
14/08/20 21:04:56 ERROR security.UserGroupInformation: PriviledgedActionException as:myself (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
14/08/20 21:04:56 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:121)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:83)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:76)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1239)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1235)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1234)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1263)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1287)
at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:247)
at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:665)
at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:102)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:601)
at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Figured out the problem: I was running Sqoop 1.4.5 and pointing it at the latest hadoop 2.0.0-cdh4.4.0, which also includes the YARN stack; that is why it was complaining.
When I pointed Sqoop at hadoop-0.20/2.0.0-cdh4.4.0 (MR1, I think) it worked.
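As a sketch of what "pointing Sqoop at" an MR1 installation can look like (the install paths below are assumptions; Sqoop reads these environment variables to locate the Hadoop client libraries it runs against):
export HADOOP_COMMON_HOME=/usr/lib/hadoop                   # hypothetical common libraries
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce    # hypothetical MR1 client install
# then re-run the same sqoop import command as before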

WARN conf.HiveConf: hive-site.xml not found on CLASSPATH-TOS for Big Data

I'm a Hadoop newbie, and I'm discovering Talend Open Studio for Big Data.
I'm trying out the components related to Hive: tHiveConnection, etc.
When executing the job I get this error:
[statistics] connecting to socket on port 3556
[statistics] connected
13/04/18 14:08:52 WARN conf.HiveConf: hive-site.xml not found on CLASSPATH
13/04/18 14:08:52 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
13/04/18 14:08:52 INFO metastore.ObjectStore: ObjectStore, initialize called
13/04/18 14:08:52 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
13/04/18 14:08:54 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
13/04/18 14:08:54 INFO metastore.ObjectStore: Initialized ObjectStore
Hive history file=/tmp/admin.qlv/hive_job_log_admin.qlv_201304181408_152240551.txt
13/04/18 14:08:55 INFO exec.HiveHistory: Hive history file=/tmp/admin.qlv/hive_job_log_admin.qlv_201304181408_152240551.txt
13/04/18 14:08:56 INFO service.HiveServer: Putting temp output to file \tmp\admin.qlv\admin.qlv_2013041814085651067019081102470.pipeout
13/04/18 14:08:56 INFO service.HiveServer: Running the query: set hive.fetch.output.serde = org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
13/04/18 14:08:56 INFO service.HiveServer: Putting temp output to file \tmp\admin.qlv\admin.qlv_2013041814085651067019081102470.pipeout
13/04/18 14:08:56 INFO service.HiveServer: Running the query: SELECT pokes.num, pokes.val FROM pokes
13/04/18 14:08:56 INFO ql.Driver: <PERFLOG method=Driver.run>
13/04/18 14:08:56 INFO ql.Driver: <PERFLOG method=compile>
13/04/18 14:08:56 INFO parse.ParseDriver: Parsing command: SELECT pokes.num, pokes.val FROM pokes
13/04/18 14:08:56 INFO parse.ParseDriver: Parse Completed
13/04/18 14:08:56 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
13/04/18 14:08:56 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic Analysis
13/04/18 14:08:56 INFO parse.SemanticAnalyzer: Get metadata for source tables
13/04/18 14:08:56 INFO hive.metastore: Trying to connect to metastore with URI thrift://:8020
13/04/18 14:08:56 ERROR parse.SemanticAnalyzer: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch table pokes
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:897)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:831)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:954)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7524)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:909)
at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:191)
at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:187)
at org.apache.hadoop.hive.jdbc.HiveStatement.execute(HiveStatement.java:127)
at bigdataproject.requetehive_0_1.RequeteHive.tHiveRow_1Process(RequeteHive.java:433)
at bigdataproject.requetehive_0_1.RequeteHive.runJobInTOS(RequeteHive.java:660)
at bigdataproject.requetehive_0_1.RequeteHive.main(RequeteHive.java:516)
Caused by: java.lang.NullPointerException
at org.apache.thrift.transport.TSocket.open(TSocket.java:166)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openStore(HiveMetaStoreClient.java:264)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:197)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:157)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2093)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2103)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:889)
... 13 more
FAILED: Error in semantic analysis: Unable to fetch table pokes
13/04/18 14:08:56 ERROR ql.Driver: FAILED: Error in semantic analysis: Unable to fetch table pokes
org.apache.hadoop.hive.ql.parse.SemanticException: Unable to fetch table pokes
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1129)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7524)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:909)
at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:191)
at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:187)
at org.apache.hadoop.hive.jdbc.HiveStatement.execute(HiveStatement.java:127)
at bigdataproject.requetehive_0_1.RequeteHive.tHiveRow_1Process(RequeteHive.java:433)
at bigdataproject.requetehive_0_1.RequeteHive.runJobInTOS(RequeteHive.java:660)
at bigdataproject.requetehive_0_1.RequeteHive.main(RequeteHive.java:516)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch table pokes
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:897)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:831)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:954)
... 11 more
Caused by: java.lang.NullPointerException
at org.apache.thrift.transport.TSocket.open(TSocket.java:166)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openStore(HiveMetaStoreClient.java:264)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:197)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:157)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2093)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2103)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:889)
... 13 more
13/04/18 14:08:56 INFO ql.Driver: </PERFLOG method=compile start=1366290536085 end=1366290536467 duration=382>
13/04/18 14:08:56 INFO ql.Driver: <PERFLOG method=releaseLocks>
13/04/18 14:08:56 INFO ql.Driver: </PERFLOG method=releaseLocks start=1366290536467 end=1366290536467 duration=0>
Query returned non-zero code: 10, cause: FAILED: Error in semantic analysis: Unable to fetch table pokes
[statistics] disconnected
Do you have any idea about it?
Thanks.
Finally I found the solution for this issue. Though it is very simple, it took me a long time to find.
Make sure about the port: it should be 10000 for standalone mode.
Start the Hive Thrift server by entering "hive --service hiveserver".
See also http://raj-hadoop.blogspot.in/2013/11/talend-open-studio-warn-confhiveconf.html
That's it! You can play around with Talend now!
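Putting those two points together, a minimal sketch for the old HiveServer1 that Talend's Hive components of that era talk to (the host name below is a placeholder):
hive --service hiveserver      # starts the Thrift HiveServer, listening on port 10000 by default
# then point tHiveConnection at host <your-hive-host> and port 10000,
# i.e. the HiveServer1 JDBC URL jdbc:hive://<your-hive-host>:10000/default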
