Sqoop import from Oracle fails

I get an error while offloading an Oracle table to HDFS. Here is the command:
sqoop import -Dmapreduce.job.queuename=root.username \
--connect jdbc:oracle:thin:@//someExadataHostname/dbInstance \
--username user \
--password welcome1 \
--table TB_RECHARGE_DIM_APPLICATION \
--target-dir /data/in/sqm/dev/unprocessed/sqoop/oracle_db_exa_test \
--delete-target-dir \
--m 1
It throws this error:
Warning: /opt/cloudera/parcels/CDH-5.10.1-1.cdh5.10.1.p0.10/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/01/10 14:27:24 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.10.1
18/01/10 14:27:24 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/01/10 14:27:24 INFO teradata.TeradataManagerFactory: Loaded connector factory for 'Cloudera Connector Powered by Teradata' on version 1.5c5
18/01/10 14:27:25 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
18/01/10 14:27:25 INFO manager.SqlManager: Using default fetchSize of 1000
18/01/10 14:27:25 INFO tool.CodeGenTool: Beginning code generation
18/01/10 14:27:29 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:29 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM TB_RECHARGE_DIM_APPLICATION t WHERE 1=0
18/01/10 14:27:29 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
Note: /tmp/s/compile/926451c21b6a6623f9763b96c7afa503/TB_RECHARGE_DIM_APPLICATION.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/01/10 14:27:31 INFO orm.CompilationManager: Writing jar file: /tmp/compile/926451c21b6a6623f9763b96c7afa503/TB_RECHARGE_DIM_APPLICATION.jar
18/01/10 14:27:32 INFO tool.ImportTool: Destination directory /data/in/sqm/dev/unprocessed/sqoop/oracle_db_exa_test deleted.
18/01/10 14:27:32 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:34 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:34 INFO mapreduce.ImportJobBase: Beginning import of TB_RECHARGE_DIM_APPLICATION
18/01/10 14:27:34 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/01/10 14:27:34 INFO manager.OracleManager: Time zone has been set to GMT
18/01/10 14:27:34 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/01/10 14:27:34 INFO hdfs.DFSClient: Created token for username: HDFS_DELEGATION_TOKEN owner=username#company.CO.ID, renewer=yarn, realUser=, issueDate=1515569254366, maxDate=1516174054366, sequenceNumber=29920785, masterKeyId=849 on ha-hdfs:nameservice1
18/01/10 14:27:34 INFO security.TokenCache: Got dt for hdfs://nameservice1; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (token for username: HDFS_DELEGATION_TOKEN owner=username#company.CO.ID, renewer=yarn, realUser=, issueDate=1515569254366, maxDate=1516174054366, sequenceNumber=29920785, masterKeyId=849)
18/01/10 14:28:10 WARN hdfs.DFSClient: Slow waitForAckedSeqno took 33367ms (threshold=30000ms). File being written: /user/username/.staging/job_1508590044386_4156415/libjars/commons-lang3-3.4.jar, block: BP-673686138-10.54.0.2-1453972538527:blk_3947617000_2874005894, Write pipeline datanodes: [DatanodeInfoWithStorage[10.54.1.110:50010,DS-bfb333fb-f63f-4c85-b60f-3ce0889fe16d,DISK], DatanodeInfoWithStorage[10.54.0.187:50010,DS-5c692f55-614c-4d33-9e83-0758d2d54555,DISK], DatanodeInfoWithStorage[10.54.0.183:50010,DS-8530593e-b498-455e-9aaa-b1a12c8ec3b2,DISK]]
18/01/10 14:28:13 INFO db.DBInputFormat: Using read commited transaction isolation
18/01/10 14:28:14 INFO mapreduce.JobSubmitter: number of splits:1
18/01/10 14:28:14 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1508590044386_4156415
18/01/10 14:28:14 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:nameservice1, Ident: (token for username: HDFS_DELEGATION_TOKEN owner=username#company.CO.ID, renewer=yarn, realUser=, issueDate=1515569254366, maxDate=1516174054366, sequenceNumber=29920785, masterKeyId=849)
18/01/10 14:28:15 INFO impl.YarnClientImpl: Submitted application application_1508590044386_4156415
18/01/10 14:28:15 INFO mapreduce.Job: The url to track the job: https://host:8090/proxy/application_1508590044386_4156415/
18/01/10 14:28:15 INFO mapreduce.Job: Running job: job_1508590044386_4156415
18/01/10 14:28:28 INFO mapreduce.Job: Job job_1508590044386_4156415 running in uber mode : false
18/01/10 14:28:28 INFO mapreduce.Job: map 0% reduce 0%
18/01/10 14:29:38 INFO mapreduce.Job: Task Id : attempt_1508590044386_4156415_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:170)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:161)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:73)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:749)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.RuntimeException: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:223)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setDbConf(DBInputFormat.java:168)
... 10 more
Caused by: java.sql.SQLRecoverableException: IO Error: The Network Adapter could not establish the connection
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:673)
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:715)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:385)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:30)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:564)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:216)
... 11 more
Caused by: oracle.net.ns.NetException: The Network Adapter could not establish the connection
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:445)
at oracle.net.resolver.AddrResolution.resolveAndExecute(AddrResolution.java:464)
at oracle.net.ns.NSProtocol.establishConnection(NSProtocol.java:594)
at oracle.net.ns.NSProtocol.connect(NSProtocol.java:229)
at oracle.jdbc.driver.T4CConnection.connect(T4CConnection.java:1360)
at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:486)
... 19 more
Caused by: java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at oracle.net.nt.TcpNTAdapter.connect(TcpNTAdapter.java:162)
at oracle.net.nt.ConnOption.connect(ConnOption.java:133)
at oracle.net.nt.ConnStrategy.execute(ConnStrategy.java:411)
... 24 more
I am able to list the tables using:
sqoop list-tables --connect jdbc:oracle:thin:@//someExadataHost/dbInstance --username user --password pass
I don't understand why the network cannot establish the connection when the job itself launches successfully (doesn't that mean Sqoop can connect and sees that the Oracle table exists?). The map task never finishes.
Any idea about this? Thank you.

There might be many reasons you are facing this issue:
- The listener is not configured properly.
- The listener process (service) is not running. Restart it with the lsnrctl start command, or on Windows by starting the listener service.
- Check the hostname in Oracle Net Manager and in the listener; both should be the same. Path: Local -> Service Naming -> orcl.
Keep in mind that sqoop list-tables and the code-generation step run on the machine you launch Sqoop from, while the failing map task runs on a cluster worker node, so the worker nodes also need network access to the Oracle listener (see the quick checks below).
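For example, a quick way to verify both the listener and basic network reachability could look like this (a sketch; 1521 is only the default listener port, so substitute your actual port):
# On the database host: is the Oracle listener actually up?
lsnrctl status
# From a Hadoop worker node: can it reach the listener port at all?
nc -vz someExadataHostname 1521
If the second command times out from the worker nodes but works from your edge node, it is a firewall/routing issue rather than a Sqoop one.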
Hope this helps!!

Add --driver oracle.jdbc.driver.OracleDriver to the command line.
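Applied to the command from the question, that would look roughly like this (the connection details are the asker's placeholders, unchanged):
sqoop import -Dmapreduce.job.queuename=root.username \
--connect jdbc:oracle:thin:@//someExadataHostname/dbInstance \
--username user \
--password welcome1 \
--table TB_RECHARGE_DIM_APPLICATION \
--driver oracle.jdbc.driver.OracleDriver \
--target-dir /data/in/sqm/dev/unprocessed/sqoop/oracle_db_exa_test \
--delete-target-dir \
--m 1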

Related

Sqoop query execution Error With Success

While executing the sqoop query I get an error message, but after a few seconds the query executes successfully and the table records are also imported successfully from SQL Server to Hive. The number of records also matches between SQL Server and Hive.
I am unable to understand why the error is thrown. I am sharing the log information; can anyone explain the reason behind this? For security reasons I am hiding the IPs.
sqoop import --connect "jdbc:sqlserver://10.128.**.***:1433;database=COCO_Pilot" --username sa --password Passw0rd --table RO_Transaction --hive-import --create-hive-table --hive-table coco_pilot.ro_transaction --warehouse-dir /user/landing --hive-overwrite -m 1;
Warning: /usr/hdp/2.6.2.0-205/hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /usr/hdp/2.6.2.0-205/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/12/06 19:54:45 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6.2.6.2.0-205
17/12/06 19:54:45 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/12/06 19:54:45 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
17/12/06 19:54:45 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
17/12/06 19:54:45 INFO manager.SqlManager: Using default fetchSize of 1000
17/12/06 19:54:45 INFO tool.CodeGenTool: Beginning code generation
17/12/06 19:54:46 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM [RO_Transaction] AS t WHERE 1=0
17/12/06 19:54:46 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hdp/2.6.2.0-205/hadoop-mapreduce
Note: /tmp/sqoop-hdfs/compile/c592fdb7dc832a5adea6b13f299abeeb/RO_Transaction.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/12/06 19:54:48 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hdfs/compile/c592fdb7dc832a5adea6b13f299abeeb/RO_Transaction.jar
17/12/06 19:54:48 INFO mapreduce.ImportJobBase: Beginning import of RO_Transaction
17/12/06 19:54:49 INFO client.RMProxy: Connecting to ResourceManager at slave1.snads.com/10.20.30.5:8050
17/12/06 19:54:49 INFO client.AHSProxy: Connecting to Application History server at slave1.snads.com/10.20.30.5:10200
17/12/06 19:54:52 INFO db.DBInputFormat: Using read commited transaction isolation
17/12/06 19:54:52 INFO mapreduce.JobSubmitter: number of splits:1
17/12/06 19:54:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1512027414889_0102
17/12/06 19:54:53 INFO impl.YarnClientImpl: Submitted application application_1512027414889_0102
17/12/06 19:54:53 INFO mapreduce.Job: The url to track the job: http://slave1.snads.com:8088/proxy/application_1512027414889_0102/
17/12/06 19:54:53 INFO mapreduce.Job: Running job: job_1512027414889_0102
17/12/06 19:55:00 INFO mapreduce.Job: Job job_1512027414889_0102 running in uber mode : false
17/12/06 19:55:00 INFO mapreduce.Job: map 0% reduce 0%
17/12/06 19:55:19 INFO mapreduce.Job: Task Id : attempt_1512027414889_0102_m_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.RuntimeException: com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host 10.128.**.***, port 1433 has failed. Error: "connect timed out. Verify the connection properties, check that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port, and that no firewall is blocking TCP connections to the port.".
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:167)
at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:749)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
Caused by: java.lang.RuntimeException: com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host 10.128.**.***, port 1433 has failed. Error: "connect timed out. Verify the connection properties, check that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port, and that no firewall is blocking TCP connections to the port.".
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:220)
at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:165)
... 9 more
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host 10.128.**.***, port 1433 has failed. Error: "connect timed out. Verify the connection properties, check that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port, and that no firewall is blocking TCP connections to the port.".
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:170)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connectHelper(SQLServerConnection.java:1049)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.login(SQLServerConnection.java:833)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.connect(SQLServerConnection.java:716)
at com.microsoft.sqlserver.jdbc.SQLServerDriver.connect(SQLServerDriver.java:841)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:247)
at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
at org.apache.sqoop.mapreduce.db.DBInputFormat.getConnection(DBInputFormat.java:213)
... 10 more
17/12/06 19:55:40 INFO mapreduce.Job: map 100% reduce 0%
17/12/06 19:55:41 INFO mapreduce.Job: Job job_1512027414889_0102 completed successfully
17/12/06 19:55:41 INFO mapreduce.Job: Counters: 31
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=165681
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=87
HDFS: Number of bytes written=199193962
HDFS: Number of read operations=4
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Failed map tasks=1
Launched map tasks=2
Other local map tasks=2
Total time spent by all maps in occupied slots (ms)=475111
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=36547
Total vcore-milliseconds taken by all map tasks=36547
Total megabyte-milliseconds taken by all map tasks=486513664
Map-Reduce Framework
Map input records=1827459
Map output records=1827459
Input split bytes=87
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=136
CPU time spent (ms)=21330
Physical memory (bytes) snapshot=1431515136
Virtual memory (bytes) snapshot=13562888192
Total committed heap usage (bytes)=1369964544
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=199193962
17/12/06 19:55:41 INFO mapreduce.ImportJobBase: Transferred 189.9662 MB in 52.4188 seconds (3.624 MB/sec)
17/12/06 19:55:41 INFO mapreduce.ImportJobBase: Retrieved 1827459 records.
17/12/06 19:55:41 INFO mapreduce.ImportJobBase: Publishing Hive/Hcat import job data to Listeners
17/12/06 19:55:41 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM [RO_Transaction] AS t WHERE 1=0
17/12/06 19:55:42 WARN hive.TableDefWriter: Column id had to be cast to a less precise type in Hive
17/12/06 19:55:42 WARN hive.TableDefWriter: Column Transaction_Date had to be cast to a less precise type in Hive
17/12/06 19:55:42 WARN hive.TableDefWriter: Column Pump_No had to be cast to a less precise type in Hive
17/12/06 19:55:42 WARN hive.TableDefWriter: Column Nozzle_No had to be cast to a less precise type in Hive
17/12/06 19:55:42 WARN hive.TableDefWriter: Column product had to be cast to a less precise type in Hive
17/12/06 19:55:42 WARN hive.TableDefWriter: Column unit_price had to be cast to a less precise type in Hive
17/12/06 19:55:42 WARN hive.TableDefWriter: Column Volume had to be cast to a less precise type in Hive
17/12/06 19:55:42 WARN hive.TableDefWriter: Column Amount had to be cast to a less precise type in Hive
17/12/06 19:55:42 WARN hive.TableDefWriter: Column Start_Totlizer had to be cast to a less precise type in Hive
17/12/06 19:55:42 WARN hive.TableDefWriter: Column End_Totlizer had to be cast to a less precise type in Hive
17/12/06 19:55:42 INFO hive.HiveImport: Loading uploaded data into Hive
17/12/06 19:55:42 WARN conf.HiveConf: HiveConf of name hive.custom-extensions.root does not exist
17/12/06 19:55:42 WARN conf.HiveConf: HiveConf of name hive.custom-extensions.root does not exist
Logging initialized using configuration in jar:file:/usr/hdp/2.6.2.0-205/hive/lib/hive-common-1.2.1000.2.6.2.0-205.jar!/hive-log4j.properties
OK
Time taken: 2.146 seconds
Loading data to table coco_pilot.ro_transaction
Table coco_pilot.ro_transaction stats: [numFiles=1, numRows=0, totalSize=199193962, rawDataSize=0]
OK
Time taken: 0.725 seconds
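From the log, the first map attempt timed out connecting to SQL Server, failed, and the retried attempt succeeded, which is why the job still finishes with matching record counts. If the transient connect timeout is a concern, one thing to experiment with (an assumption, not a confirmed fix for this environment) is giving the SQL Server JDBC driver a longer login timeout via its loginTimeout connection property:
sqoop import --connect "jdbc:sqlserver://10.128.**.***:1433;database=COCO_Pilot;loginTimeout=60" \
--username sa --password Passw0rd --table RO_Transaction \
--hive-import --create-hive-table --hive-table coco_pilot.ro_transaction \
--warehouse-dir /user/landing --hive-overwrite -m 1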

sqoop query failing to import table

I am trying to execute the sqoop import below.
sqoop import --connect 'jdbc:sqlserver://server-IP;database=db_name' --username xxx --password xxx --table xxx --hive-import --hive-table amit_hive --target-dir /user/hive/amitesh123 -m 1
I have to import a DB table directly to the desired location. As far as I understand, the sqoop command-line syntax above is correct. But on executing it, I get the following error:
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=xxx, access=EXECUTE, inode="/user/hive/amitesh123":hive:hdfs:drwx
Somebody informed me that we also have to mention the Hive database name with the above sqoop command. Is that true? If yes, can someone help me with the parameter I have to use? As far as I know, we just need to mention --table to bring the table from the DB into a Hive table. Please suggest.
To test further, I created a new folder and gave it 777 rights, but I am still getting the same error. I have now added the Hive db.table name with --hive-table, so the new sqoop import is as follows:
sqoop import --connect 'jdbc:sqlserver://server-IP;database=db_name' --username xxx --password xxx --table xxx --hive-import --hive-table amitesh_db.amit_hive --target-dir /amitesh012345/amitesh -m 1
However, the permission denied error is still there...
INFO mapreduce.Job: Job job_1486315054135_2834 failed with state FAILED due to: Job setup failed : org.apache.hadoop.security.AccessControlException: Permission denied: user=xxx, access=WRITE, inode="/amitesh012345/amitesh/_temporary/1":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1704)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1687)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3890)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:983) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622) at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3002)
at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2970)
at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1061)
at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(FileOutputCommitter.java:305)
at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobSetup(CommitterEventHandler.java:254)
at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:234)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=at732615, access=WRITE, inode="/amitesh012345/amitesh/_temporary/1":hdfs:hdfs:drwxr-xr-x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:320)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1720)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1704)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1687)
at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3890)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:983)
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
at org.apache.hadoop.ipc.Client.call(Client.java:1412)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:55
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.mkdirs(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3000)
... 13 more
17/03/14 05:23:38 INFO mapreduce.Job: Counters: 2
Job Counters
Total time spent by all maps in occupied slots (ms)=0
Total time spent by all reduces in occupied slots (ms)=0
17/03/14 05:23:38 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
17/03/14 05:23:38 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 24.4698 seconds (0 bytes/sec)
17/03/14 05:23:38 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
17/03/14 05:23:38 INFO mapreduce.ImportJobBase: Retrieved 0 records.
17/03/14 05:23:38 ERROR tool.ImportTool: Error during import: Import job failed!
Second full stack trace:
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
17/03/14 05:38:02 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6_IBM_27
17/03/14 05:38:02 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/03/14 05:38:02 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
17/03/14 05:38:02 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
17/03/14 05:38:02 INFO manager.SqlManager: Using default fetchSize of 1000
17/03/14 05:38:02 INFO tool.CodeGenTool: Beginning code generation
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:path_to/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:path_to/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/03/14 05:38:03 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM [T_VND] AS t WHERE 1=0
17/03/14 05:38:03 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is path_to/hadoop
Note: path_to/T_VND.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/03/14 05:38:04 INFO orm.CompilationManager: Writing jar file: path_to/T_VND.jar
17/03/14 05:38:04 INFO mapreduce.ImportJobBase: Beginning import of T_VND
17/03/14 05:38:05 INFO impl.TimelineClientImpl: Timeline service address: http://xxxxxx/
17/03/14 05:38:05 INFO client.RMProxy: Connecting to ResourceManager at xxxxxx/server-IP:port
17/03/14 05:38:06 INFO db.DBInputFormat: Using read commited transaction isolation
17/03/14 05:38:07 INFO mapreduce.JobSubmitter: number of splits:1
17/03/14 05:38:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1486315054135_2836
17/03/14 05:38:07 INFO impl.YarnClientImpl: Submitted application application_1486315054135_2836
17/03/14 05:38:07 INFO mapreduce.Job: The url to track the job: http://xxxxxx/server-IP:port/proxy/application_1486315054135_2836/
17/03/14 05:38:07 INFO mapreduce.Job: Running job: job_1486315054135_2836
17/03/14 05:38:13 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=FAILED. Redirecting to job history server
17/03/14 05:38:13 INFO mapreduce.Job: Job job_1486315054135_2836 running in uber mode : false
17/03/14 05:38:13 INFO mapreduce.Job: map 0% reduce 100%
17/03/14 05:38:13 INFO mapreduce.Job: Job job_1486315054135_2836 failed with state FAILED due to:
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: The MapReduce job has already been retired. Performance
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: counters are unavailable. To get this information,
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: you will need to enable the completed job store on
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: the jobtracker with:
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: mapreduce.jobtracker.persist.jobstatus.active = true
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: mapreduce.jobtracker.persist.jobstatus.hours = 1
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: A jobtracker restart is required for these settings
17/03/14 05:38:13 INFO mapreduce.ImportJobBase: to take effect.
17/03/14 05:38:13 ERROR tool.ImportTool: Error during import: Import job failed!
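A hedged diagnostic sketch (the paths and the user name are taken from the stack traces above, and the chown assumes you can act as the HDFS superuser): the error says the job user lacks WRITE on the target directory, which is owned by hdfs:hdfs with drwxr-xr-x, so check the ownership and hand the directory to the user submitting the job rather than relying on 777 elsewhere:
# Who owns the target directory and its parent?
hdfs dfs -ls /amitesh012345
hdfs dfs -ls /amitesh012345/amitesh
# As the hdfs superuser, give ownership to the user running the sqoop job
sudo -u hdfs hdfs dfs -chown -R at732615 /amitesh012345/amitesh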

NumberFormatException for input string "1201" in Sqoop

I am getting the following error while exporting data from HDFS to MySQL using Sqoop. I created the employee table with bigint and varchar columns.
17/01/04 19:30:14 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
17/01/04 19:30:14 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/01/04 19:30:15 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
17/01/04 19:30:15 INFO tool.CodeGenTool: Beginning code generation
17/01/04 19:30:16 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
17/01/04 19:30:16 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
17/01/04 19:30:16 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/adithyan/hadoop_dir/hadoop-1.2.1
Note: /tmp/sqoop-adithyan/compile/0555b3b23ccf665b309ee88adde7936e/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/01/04 19:30:21 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-adithyan/compile/0555b3b23ccf665b309ee88adde7936e/employee.jar
17/01/04 19:30:21 INFO mapreduce.ExportJobBase: Beginning export of employee
17/01/04 19:30:28 INFO input.FileInputFormat: Total input paths to process : 1
17/01/04 19:30:28 INFO input.FileInputFormat: Total input paths to process : 1
17/01/04 19:30:29 INFO util.NativeCodeLoader: Loaded the native-hadoop library
17/01/04 19:30:29 WARN snappy.LoadSnappy: Snappy native library not loaded
17/01/04 19:30:31 INFO mapred.JobClient: Running job: job_201701041906_0005
17/01/04 19:30:32 INFO mapred.JobClient: map 0% reduce 0%
17/01/04 19:31:21 INFO mapred.JobClient: Task Id : attempt_201701041906_0005_m_000000_0, Status : FAILED
java.io.IOException: Can't export data, please check failed map task logs
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:112)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:364)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.RuntimeException: Can't parse input data: '1201'
at employee.__loadFromFields(employee.java:378)
at employee.parse(employee.java:306)
at org.apache.sqoop.mapreduce.TextExportMapper.map(TextExportMapper.java:83)
... 10 more
Caused by: java.lang.NumberFormatException: For input string: "1201"
at
The following is the data in HDFS that I am trying to export using Sqoop:
1201, gopal, manager, 50000, TP
1202, manisha, preader, 50000, TP
1203, kalil, php dev, 30000, AC
1204, prasanth, php dev, 30000, AC
1205, kranthi, admin, 20000, TP
1206, satish p, grp des, 20000, GR
I am using mysql-5.1.28.jar and hadoop-1.2.8.
Can somebody help me with this?
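Since "1201" itself looks like a perfectly valid number, a common culprit is an invisible character travelling with the first field (for example a UTF-8 byte order mark at the start of the file, or stray whitespace around the ", " separators). A hedged way to inspect the raw bytes, assuming you substitute the actual HDFS path of the export data (the path below is a placeholder):
# Dump the raw bytes of the first line; look for anything before the '1' of 1201
hdfs dfs -cat <hdfs-path-to-export-data> | head -1 | od -c | head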

Error Message when sqooping oracle table into hive

I was looking to find out how I can fix the following error message that I keep getting when I am sqooping a data table into oracle. I was able to sqoop another table this morning but every attempt after that has failed. The following is the error within the log file:
Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
I am not sure why I keep getting this error message: without a Kerberos ticket I would not even be able to log in through PuTTY to run the script that sqoops the table. Hence I am confused as to a) why I am getting this error and b) what I can do to fix it.
Would appreciate it if somebody could advise where I am going wrong.
Thanks in advance.
Part of the Error Message from Log File:
17/01/05 15:08:52 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.7.1
17/01/05 15:08:52 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
17/01/05 15:08:52 INFO oracle.OraOopManagerFactory: Data Connector for Oracle and Hadoop is disabled.
17/01/05 15:08:52 INFO manager.SqlManager: Using default fetchSize of 1000
17/01/05 15:08:52 INFO tool.CodeGenTool: Beginning code generation
17/01/05 15:08:53 INFO manager.OracleManager: Time zone has been set to GMT
17/01/05 15:08:53 INFO manager.SqlManager: Executing SQL statement: SELECT * FROM TABLE_NAME where SNAPSHOT_DATE_TIME >= '01-APR-16' and (1 = 0)
17/01/05 15:08:53 INFO manager.SqlManager: Executing SQL statement: SELECT * FROM TABLE_NAME where SNAPSHOT_DATE_TIME >= '01-APR-16' and (1 = 0)
17/01/05 15:08:53 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
Note: /tmp/sqoop-username/compile/ba4df230b0d18377522bbfe053ed3661/QueryResult.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
17/01/05 15:08:55 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-username/compile/ba4df230b0d18377522bbfe053ed3661/QueryResult.jar
17/01/05 15:08:55 INFO mapreduce.ImportJobBase: Beginning query import.
17/01/05 15:08:55 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/01/05 15:08:55 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/01/05 15:08:56 WARN security.UserGroupInformation: PriviledgedActionException as:username (auth:KERBEROS) cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
17/01/05 15:08:56 WARN ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
....
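Before digging further, it may be worth simply confirming that the shell running the sqoop job still holds a valid, unexpired ticket; a session that let you log in earlier can still have its ticket expire or be destroyed later. A minimal check/renew sequence (the principal below is a placeholder; use your own):
# Does the current session have a valid (non-expired) TGT?
klist
# If not, obtain a fresh one and re-run the sqoop job
kinit username@YOUR.REALM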

WARN conf.HiveConf: hive-site.xml not found on CLASSPATH - TOS for Big Data

I am a Hadoop newbie, and I am discovering Talend Open Studio for Big Data.
I am trying the components related to Hive: tHiveConnection etc.
When executing the job I get this error:
[statistics] connecting to socket on port 3556
[statistics] connected
13/04/18 14:08:52 WARN conf.HiveConf: hive-site.xml not found on CLASSPATH
13/04/18 14:08:52 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
13/04/18 14:08:52 INFO metastore.ObjectStore: ObjectStore, initialize called
13/04/18 14:08:52 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
13/04/18 14:08:54 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
13/04/18 14:08:54 INFO metastore.ObjectStore: Initialized ObjectStore
Hive history file=/tmp/admin.qlv/hive_job_log_admin.qlv_201304181408_152240551.txt
13/04/18 14:08:55 INFO exec.HiveHistory: Hive history file=/tmp/admin.qlv/hive_job_log_admin.qlv_201304181408_152240551.txt
13/04/18 14:08:56 INFO service.HiveServer: Putting temp output to file \tmp\admin.qlv\admin.qlv_2013041814085651067019081102470.pipeout
13/04/18 14:08:56 INFO service.HiveServer: Running the query: set hive.fetch.output.serde = org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
13/04/18 14:08:56 INFO service.HiveServer: Putting temp output to file \tmp\admin.qlv\admin.qlv_2013041814085651067019081102470.pipeout
13/04/18 14:08:56 INFO service.HiveServer: Running the query: SELECT pokes.num, pokes.val FROM pokes
13/04/18 14:08:56 INFO ql.Driver: <PERFLOG method=Driver.run>
13/04/18 14:08:56 INFO ql.Driver: <PERFLOG method=compile>
13/04/18 14:08:56 INFO parse.ParseDriver: Parsing command: SELECT pokes.num, pokes.val FROM pokes
13/04/18 14:08:56 INFO parse.ParseDriver: Parse Completed
13/04/18 14:08:56 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
13/04/18 14:08:56 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic Analysis
13/04/18 14:08:56 INFO parse.SemanticAnalyzer: Get metadata for source tables
13/04/18 14:08:56 INFO hive.metastore: Trying to connect to metastore with URI thrift://:8020
13/04/18 14:08:56 ERROR parse.SemanticAnalyzer: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch table pokes
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:897)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:831)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:954)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7524)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:909)
at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:191)
at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:187)
at org.apache.hadoop.hive.jdbc.HiveStatement.execute(HiveStatement.java:127)
at bigdataproject.requetehive_0_1.RequeteHive.tHiveRow_1Process(RequeteHive.java:433)
at bigdataproject.requetehive_0_1.RequeteHive.runJobInTOS(RequeteHive.java:660)
at bigdataproject.requetehive_0_1.RequeteHive.main(RequeteHive.java:516)
Caused by: java.lang.NullPointerException
at org.apache.thrift.transport.TSocket.open(TSocket.java:166)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openStore(HiveMetaStoreClient.java:264)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:197)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:157)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2093)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2103)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:889)
... 13 more
FAILED: Error in semantic analysis: Unable to fetch table pokes
13/04/18 14:08:56 ERROR ql.Driver: FAILED: Error in semantic analysis: Unable to fetch table pokes
org.apache.hadoop.hive.ql.parse.SemanticException: Unable to fetch table pokes
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:1129)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7524)
at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:909)
at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:191)
at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:187)
at org.apache.hadoop.hive.jdbc.HiveStatement.execute(HiveStatement.java:127)
at bigdataproject.requetehive_0_1.RequeteHive.tHiveRow_1Process(RequeteHive.java:433)
at bigdataproject.requetehive_0_1.RequeteHive.runJobInTOS(RequeteHive.java:660)
at bigdataproject.requetehive_0_1.RequeteHive.main(RequeteHive.java:516)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to fetch table pokes
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:897)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:831)
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:954)
... 11 more
Caused by: java.lang.NullPointerException
at org.apache.thrift.transport.TSocket.open(TSocket.java:166)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.openStore(HiveMetaStoreClient.java:264)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:197)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:157)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2093)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2103)
at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:889)
... 13 more
13/04/18 14:08:56 INFO ql.Driver: </PERFLOG method=compile start=1366290536085 end=1366290536467 duration=382>
13/04/18 14:08:56 INFO ql.Driver: <PERFLOG method=releaseLocks>
13/04/18 14:08:56 INFO ql.Driver: </PERFLOG method=releaseLocks start=1366290536467 end=1366290536467 duration=0>
Query returned non-zero code: 10, cause: FAILED: Error in semantic analysis: Unable to fetch table pokes
[statistics] disconnected
Do you have any idea about it?
Thanks.
Finally I found the solution for this issue. Though it is very simple, it took me a long time to find out.
Please make sure about the port: it should be 10000 for standalone mode.
Start the Hive Thrift Server by entering "hive --service hiveserver".
See also http://raj-hadoop.blogspot.in/2013/11/talend-open-studio-warn-confhiveconf.html
That's it! You can play around with Talend now!
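A quick way to double-check both points (a sketch assuming the standalone HiveServer1 setup described above; the netstat line is just a generic way to confirm something is listening on 10000):
# Start the old Hive Thrift server; by default it listens on port 10000
hive --service hiveserver &
# Confirm the port is actually open before pointing Talend at it
netstat -an | grep 10000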
