I'm trying to delete a tenant created using the TenantMgtAdminServiceStub. The Carbon logs show the following:
TID: [-1234] [] [2016-04-07 23:52:10,645] INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} - 'admin@carbon.super [-1234]' logged in at [2016-04-07 23:52:10,645-0500] {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil}
TID: [-1234] [] [2016-04-07 23:52:10,784] INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} - 'admin@carbon.super [-1234]' logged in at [2016-04-07 23:52:10,784-0500] {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil}
TID: [-1234] [] [2016-04-07 23:52:10,866] INFO {org.wso2.carbon.tenant.mgt.services.TenantMgtAdminService} - Starting Tenant Deletion process... {org.wso2.carbon.tenant.mgt.services.TenantMgtAdminService}
TID: [-1234] [] [2016-04-07 23:52:11,150] ERROR {org.wso2.carbon.tenant.mgt.services.TenantMgtAdminService} - Error in notifying tenant addition. {org.wso2.carbon.tenant.mgt.services.TenantMgtAdminService}
org.wso2.carbon.stratos.common.exception.StratosException: Error occurred while revoking the access tokens in tenant 10
at org.wso2.carbon.identity.oauth2.internal.OAuthTenantMgtListenerImpl.onPreDelete(OAuthTenantMgtListenerImpl.java:121)
at org.wso2.carbon.tenant.mgt.util.TenantMgtUtil.triggerPreTenantDelete(TenantMgtUtil.java:118)
at org.wso2.carbon.tenant.mgt.services.TenantMgtAdminService.notifyTenantDeletion(TenantMgtAdminService.java:123)
at org.wso2.carbon.tenant.mgt.services.TenantMgtAdminService.deleteTenant(TenantMgtAdminService.java:557)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
...
Caused by: org.wso2.carbon.identity.oauth2.IdentityOAuth2Exception: Error occurred while retrieving 'ACTIVE or EXPIRED' access tokens for user tenant id : 10
at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.getAccessTokensOfTenant(TokenMgtDAO.java:1192)
at org.wso2.carbon.identity.oauth2.internal.OAuthTenantMgtListenerImpl.onPreDelete(OAuthTenantMgtListenerImpl.java:87)
... 59 more
Caused by: org.h2.jdbc.JdbcSQLException: Ambiguous column name "USER_DOMAIN"; SQL statement:
SELECT ACCESS_TOKEN, REFRESH_TOKEN, TIME_CREATED, REFRESH_TOKEN_TIME_CREATED, VALIDITY_PERIOD, REFRESH_TOKEN_VALIDITY_PERIOD, USER_TYPE, TOKEN_SCOPE, ACCESS_TOKEN_TABLE.TOKEN_ID, AUTHZ_USER, USER_DOMAIN, CONSUMER_KEY FROM (SELECT AUTHZ_USER, USER_DOMAIN, CONSUMER_KEY_ID, TOKEN_ID, ACCESS_TOKEN, REFRESH_TOKEN, TIME_CREATED, REFRESH_TOKEN_TIME_CREATED, VALIDITY_PERIOD, REFRESH_TOKEN_VALIDITY_PERIOD, USER_TYPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TENANT_ID=? AND (TOKEN_STATE='ACTIVE' OR TOKEN_STATE='EXPIRED')) ACCESS_TOKEN_TABLE JOIN IDN_OAUTH_CONSUMER_APPS ON ID = CONSUMER_KEY_ID LEFT JOIN IDN_OAUTH2_ACCESS_TOKEN_SCOPE ON ACCESS_TOKEN_TABLE.TOKEN_ID = IDN_OAUTH2_ACCESS_TOKEN_SCOPE.TOKEN_ID [90059-140]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
at org.h2.message.DbException.get(DbException.java:167)
at org.h2.message.DbException.get(DbException.java:144)
at org.h2.expression.ExpressionColumn.mapColumn(ExpressionColumn.java:105)
at org.h2.expression.ExpressionColumn.mapColumns(ExpressionColumn.java:82)
at org.h2.command.dml.Select.init(Select.java:710)
at org.h2.command.Parser.parseSelect(Parser.java:1410)
at org.h2.command.Parser.parsePrepared(Parser.java:392)
...
at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109)
at org.apache.tomcat.jdbc.pool.DisposableConnectionFacade.invoke(DisposableConnectionFacade.java:80)
at com.sun.proxy.$Proxy12.prepareStatement(Unknown Source)
at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.getAccessTokensOfTenant(TokenMgtDAO.java:1149)
... 60 more
TID: [-1234] [] [2016-04-07 23:52:11,159] ERROR {org.wso2.carbon.tenant.mgt.services.TenantMgtAdminService} - Error deleting tenant with domain: juan7.com and tenant id: 10. {org.wso2.carbon.tenant.mgt.services.TenantMgtAdminService}
java.lang.Exception: Error in notifying tenant addition.
at org.wso2.carbon.tenant.mgt.services.TenantMgtAdminService.notifyTenantDeletion(TenantMgtAdminService.java:127)
at org.wso2.carbon.tenant.mgt.services.TenantMgtAdminService.deleteTenant(TenantMgtAdminService.java:557)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
...
Caused by: org.wso2.carbon.stratos.common.exception.StratosException: Error occurred while revoking the access tokens in tenant 10
at org.wso2.carbon.identity.oauth2.internal.OAuthTenantMgtListenerImpl.onPreDelete(OAuthTenantMgtListenerImpl.java:121)
at org.wso2.carbon.tenant.mgt.util.TenantMgtUtil.triggerPreTenantDelete(TenantMgtUtil.java:118)
at org.wso2.carbon.tenant.mgt.services.TenantMgtAdminService.notifyTenantDeletion(TenantMgtAdminService.java:123)
... 57 more
Caused by: org.wso2.carbon.identity.oauth2.IdentityOAuth2Exception: Error occurred while retrieving 'ACTIVE or EXPIRED' access tokens for user tenant id : 10
at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.getAccessTokensOfTenant(TokenMgtDAO.java:1192)
at org.wso2.carbon.identity.oauth2.internal.OAuthTenantMgtListenerImpl.onPreDelete(OAuthTenantMgtListenerImpl.java:87)
... 59 more
Caused by: org.h2.jdbc.JdbcSQLException: Ambiguous column name "USER_DOMAIN"; SQL statement:
SELECT ACCESS_TOKEN, REFRESH_TOKEN, TIME_CREATED, REFRESH_TOKEN_TIME_CREATED, VALIDITY_PERIOD, REFRESH_TOKEN_VALIDITY_PERIOD, USER_TYPE, TOKEN_SCOPE, ACCESS_TOKEN_TABLE.TOKEN_ID, AUTHZ_USER, USER_DOMAIN, CONSUMER_KEY FROM (SELECT AUTHZ_USER, USER_DOMAIN, CONSUMER_KEY_ID, TOKEN_ID, ACCESS_TOKEN, REFRESH_TOKEN, TIME_CREATED, REFRESH_TOKEN_TIME_CREATED, VALIDITY_PERIOD, REFRESH_TOKEN_VALIDITY_PERIOD, USER_TYPE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TENANT_ID=? AND (TOKEN_STATE='ACTIVE' OR TOKEN_STATE='EXPIRED')) ACCESS_TOKEN_TABLE JOIN IDN_OAUTH_CONSUMER_APPS ON ID = CONSUMER_KEY_ID LEFT JOIN IDN_OAUTH2_ACCESS_TOKEN_SCOPE ON ACCESS_TOKEN_TABLE.TOKEN_ID = IDN_OAUTH2_ACCESS_TOKEN_SCOPE.TOKEN_ID [90059-140]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
at org.h2.message.DbException.get(DbException.java:167)
at org.h2.message.DbException.get(DbException.java:144)
at org.h2.expression.ExpressionColumn.mapColumn(ExpressionColumn.java:105)
at org.h2.expression.ExpressionColumn.mapColumns(ExpressionColumn.java:82)
at org.h2.command.dml.Select.init(Select.java:710)
at org.h2.command.Parser.parseSelect(Parser.java:1410)
at org.h2.command.Parser.parsePrepared(Parser.java:392)
...
at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.getAccessTokensOfTenant(TokenMgtDAO.java:1149)
... 60 more
The code that I'm using to delete the tenant is pretty straightforward:
String cookie = getCookie();
String tenantMgtEndpoint = apiManagerConfiguration.getCarbonUrl() + "TenantMgtAdminService";
TenantMgtAdminServiceStub tenantMgtStub = new TenantMgtAdminServiceStub(tenantMgtEndpoint);
Options option = tenantMgtStub._getServiceClient().getOptions();
option.setManageSession(true);
option.setProperty(org.apache.axis2.transport.http.HTTPConstants.COOKIE_STRING, cookie);
tenantMgtStub.deleteTenant(domain);
Any help will be appreciated.
First you need to deactivate the tenant, and then delete it:
tenantMgtStub.deactivateTenant(domainName);
tenantMgtStub.deleteTenant(domainName);
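Putting the two calls together with the session setup from the question, a minimal sketch (assuming the same getCookie() helper, apiManagerConfiguration, and domain variable as in the question):
// Authenticate and reuse the admin session cookie on the stub.
String cookie = getCookie();
String tenantMgtEndpoint = apiManagerConfiguration.getCarbonUrl() + "TenantMgtAdminService";
TenantMgtAdminServiceStub tenantMgtStub = new TenantMgtAdminServiceStub(tenantMgtEndpoint);
Options option = tenantMgtStub._getServiceClient().getOptions();
option.setManageSession(true);
option.setProperty(org.apache.axis2.transport.http.HTTPConstants.COOKIE_STRING, cookie);
// Deactivate first, then delete.
tenantMgtStub.deactivateTenant(domain);
tenantMgtStub.deleteTenant(domain);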
Hope this helps.
I'm trying to use the 'DBCPConnectionPoolLookup' service in 'ExecuteGroovyScript' to dynamically query the required database, based on the 'database.name' attribute of the input flow file.
The processor successfully gets the corresponding 'DBCPConnectionPool' service for querying, but I'm getting the exception java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object. By contrast, if I use the 'DBCPConnectionPool' service directly, without the 'Lookup' service and without changing any configuration, it works fine.
I access the service as follows:
def clientDb = CTL.SQLLookupService.getConnection(flowFile.getAttributes())
Then use the 'clientDb' object to query as:
clientDb.rows(timseriesSqlCountQuery).eachWithIndex { row, idx ->numRowsTimeSeries= row.c}
I have tried increasing Max Wait Time and Max Total Connections to higher values in the 'DBCPConnectionPool' service; it does not help.
Please find below links to images of the code, error, and configuration:
Exception
Configuration of 'ExecuteGroovyScript'
Configuration of 'DBCPConnectionPool' service
Configuration of 'DBCPConnectionPoolLookup' service
Script Code
import org.apache.nifi.distributed.cache.client.Deserializer
import org.apache.nifi.distributed.cache.client.Serializer
import org.apache.nifi.distributed.cache.client.exception.DeserializationException
import org.apache.nifi.distributed.cache.client.exception.SerializationException
import groovy.sql.Sql
import java.time.*
try {
def flowFile = session.get()
def isBootstrap=flowFile."isBootstrap"
def timseriesSqlQuery='SELECT id FROM [dbo].[Points] where ([MappedToEquipment] = \'Mapped\' or PointStatus = \'Mapped\')'
def timseriesSqlCountQuery='SELECT count(id) as c FROM [dbo].[Points] where ([MappedToEquipment] = \'Mapped\' or PointStatus = \'Mapped\')'
def spaceSqlQuery='select id from (select id from dbo.organization union select id from dbo.facility union select id from dbo.building union select id from dbo.floor union select id from dbo.wing union select id from dbo.room union select id from dbo.systems) tmp'
def spaceSqlCountQuery='select count(id) as c from (select id from dbo.organization union select id from dbo.facility union select id from dbo.building union select id from dbo.floor union select id from dbo.wing union select id from dbo.room union select id from dbo.systems) tmp'
def cache = CTL.lastIngestTimeMap
def clientDb = CTL.SQLLookupService.getConnection(flowFile.getAttributes())//SQL.staticService
int numRowsTimeSeries=0
int numRowsSpace=0
clientDb.rows(timseriesSqlCountQuery).eachWithIndex { row, idx ->numRowsTimeSeries= row.c}
clientDb.rows(spaceSqlCountQuery).eachWithIndex { row, idx ->numRowsSpace= row.c}
} catch (e) {
    // Log and rethrow instead of silently swallowing the failure.
    log.error('Failed to query client database', e)
    throw e
}
Exception from NiFi logs:
2019-09-12 06:18:33,629 ERROR [Timer-Driven Process Thread-3] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=b435c079-ee6c-3c42-a6ea-020968267ecf] ExecuteGroovyScript[id=b435c079-ee6c-3c42-a6ea-020968267ecf] failed to process session due to java.lang.ClassCastException; Processor Administratively Yielded for 1 sec: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,629 WARN [Timer-Driven Process Thread-3] o.a.n.controller.tasks.ConnectableTask Administratively Yielding ExecuteGroovyScript[id=b435c079-ee6c-3c42-a6ea-020968267ecf] due to uncaught Exception: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,629 ERROR [Timer-Driven Process Thread-9] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=9b81ca15-93a5-3953-9f40-d0874cfe2531] ExecuteGroovyScript[id=9b81ca15-93a5-3953-9f40-d0874cfe2531] failed to process session due to java.lang.ClassCastException; Processor Administratively Yielded for 1 sec: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,629 WARN [Timer-Driven Process Thread-9] o.a.n.controller.tasks.ConnectableTask Administratively Yielding ExecuteGroovyScript[id=9b81ca15-93a5-3953-9f40-d0874cfe2531] due to uncaught Exception: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,708 ERROR [Timer-Driven Process Thread-10] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=a1ec4496-dca3-38ab-a47b-43d7ff95e40f] org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object: org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:308)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
at com.sun.proxy.$Proxy89.getConnection(Unknown Source)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onInitSQL(ExecuteGroovyScript.java:339)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onTrigger(ExecuteGroovyScript.java:439)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:142)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:305)
... 19 common frames omitted
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:451)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:365)
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:134)
... 21 common frames omitted
2019-09-12 06:18:33,708 ERROR [Timer-Driven Process Thread-2] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=54d1e251-88f2-33f3-0489-722879a802bd] org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object: org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:308)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
at com.sun.proxy.$Proxy89.getConnection(Unknown Source)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onInitSQL(ExecuteGroovyScript.java:339)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onTrigger(ExecuteGroovyScript.java:439)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:142)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:305)
... 19 common frames omitted
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:451)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:365)
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:134)
... 21 common frames omitted
Finally, after bringing NiFi down twice, I found the solution. The problem was in the code I was using: I queried the SQL table with the object returned by CTL.index.getConnection(flowFile.getAttributes()), but that object is actually a connection. Because of this, NiFi used up all available connections to SQL, so I kept getting the above error even after reverting from the 'Lookup' service to the plain 'DBCPConnectionPool' service. Restarting NiFi made it work again for a while.
The actual code to use in your script with the 'Lookup' service is:
def connectionObj = CTL.index.getConnection(flowFile.getAttributes())
def clientDb = new Sql(connectionObj)
Now use the 'clientDb' object to query your table
clientDb.rows(timseriesSqlCountQuery).eachWithIndex { row, idx ->numRowsTimeSeries= row.c}
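Since the root cause was leaked pooled connections, make sure the Sql wrapper is closed when you are done so the connection returns to the pool. A minimal sketch, assuming the same CTL.index lookup service and timseriesSqlCountQuery as above:
import groovy.sql.Sql

def flowFile = session.get()
if (!flowFile) return
// Look up the pooled connection for this flow file's attributes.
def connectionObj = CTL.index.getConnection(flowFile.getAttributes())
def clientDb = new Sql(connectionObj)
try {
    int numRowsTimeSeries = 0
    clientDb.rows(timseriesSqlCountQuery).eachWithIndex { row, idx -> numRowsTimeSeries = row.c }
} finally {
    // Closing the wrapper returns the connection to the pool; leaking it
    // is what eventually produces "Timeout waiting for idle object".
    clientDb.close()
}
session.transfer(flowFile, REL_SUCCESS)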
After a capacity expansion, Sentry (sentry-1.5.1+cdh5.15.2+470) goes down at 9 p.m. every day, as follows:
Caused by: javax.jdo.JDODataStoreException: Iteration request failed : SELECT 'org.apache.sentry.provider.db.service.model.MPath' AS NUCLEUS_TYPE,`A0`.`PATH_NAME`,`A0`.`PATH_ID`,`A0`.`AUTHZ_OBJ_ID` FROM `AUTHZ_PATH` `A0` WHERE EXISTS (SELECT 'org.apache.sentry.provider.db.service.model.MAuthzPathsMapping' AS NUCLEUS_TYPE,`A0_SUB`.`AUTHZ_OBJ_ID` AS `DN_DATASTOREID` FROM `AUTHZ_PATHS_MAPPING` `A0_SUB` WHERE `A0_SUB`.`AUTHZ_SNAPSHOT_ID` = ? AND `A0`.`AUTHZ_OBJ_ID` = `A0_SUB`.`AUTHZ_OBJ_ID`)
NestedThrowables:
java.sql.SQLException: No value specified for parameter 1
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:252)
at org.apache.sentry.provider.db.service.persistent.SentryStore.retrieveFullPathsImageCore(SentryStore.java:2542)
at org.apache.sentry.provider.db.service.persistent.SentryStore.access$2800(SentryStore.java:131)
at org.apache.sentry.provider.db.service.persistent.SentryStore$39.execute(SentryStore.java:2518)
at org.apache.sentry.provider.db.service.persistent.SentryStore$39.execute(SentryStore.java:2509)
at org.apache.sentry.provider.db.service.persistent.TransactionManager.executeTransaction(TransactionManager.java:123)
at org.apache.sentry.provider.db.service.persistent.SentryStore.retrieveFullPathsImageUpdate(SentryStore.java:2508)
at org.apache.sentry.hdfs.PathImageRetriever.retrieveFullImage(PathImageRetriever.java:52)
at org.apache.sentry.hdfs.PathImageRetriever.retrieveFullImage(PathImageRetriever.java:32)
at org.apache.sentry.hdfs.DBUpdateForwarder.retrieveFullImage(DBUpdateForwarder.java:141)
at org.apache.sentry.hdfs.DBUpdateForwarder.getAllUpdatesFrom(DBUpdateForwarder.java:92)
at org.apache.sentry.hdfs.SentryPlugin.getAllPathsUpdatesFrom(SentryPlugin.java:163)
at org.apache.sentry.hdfs.SentryHDFSServiceProcessor.getPathsUpdatesFrom(SentryHDFSServiceProcessor.java:130)
at org.apache.sentry.hdfs.SentryHDFSServiceProcessor.get_authz_updates(SentryHDFSServiceProcessor.java:68)
... 10 more
Caused by: java.sql.SQLException: No value specified for parameter 1
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:965)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:898)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:887)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:861)
at com.mysql.jdbc.PreparedStatement.checkAllParametersSet(PreparedStatement.java:2211)
Whenever I try to load a file from my Azure Data Lake Store into a Hive table using the command below,
hiveContext.sql("LOAD DATA INPATH 'adl://bienodad56872stgadlstemp.azuredatalakestore.net/Enriched/Nielsen/NielsenScantrack/Incremental_withoutRepartition/NLS_SYN_SCT.csv' OVERWRITE INTO TABLE sample.test03")
I get the error ApplicationMaster: User class threw exception: java.lang.reflect.InvocationTargetException
java.lang.reflect.InvocationTargetException
Whole error log:
17/07/05 05:45:48 INFO SparkSqlParser: Parsing command: CREATE TABLE IF NOT EXISTS sample.test03 ( GEO STRING,UPC STRING,WeekEnding STRING,BaseDollars INT,BaseDollars_AnyPromo INT,BaseDollars_Display INT,BaseDollars_FeatAndDisp INT,BaseDollars_FeatAndOrDisp INT,BaseDollars_Feature INT,BaseDollars_NoPromo INT,BaseDollars_TPR INT,BaseUnits INT,BaseUnits_AnyPromo INT,BaseUnits_Display INT,BaseUnits_EQ STRING,BaseUnits_EQ_AnyPromo STRING,BaseUnits_EQ_Display STRING,BaseUnits_EQ_FeatAndDisp STRING,BaseUnits_EQ_FeatAndOrDisp STRING,BaseUnits_EQ_Feature STRING,BaseUnits_EQ_NoPromo STRING,BaseUnits_EQ_TPR STRING,BaseUnits_FeatAndDisp INT,BaseUnits_FeatAndOrDisp INT,BaseUnits_Feature INT,BaseUnits_NoPromo INT,BaseUnits_TPR INT,Dollars INT,Dollars_AnyPromo INT,Dollars_Display INT,Dollars_FeatAndDisp INT,Dollars_FeatAndOrDisp INT,Dollars_Feature INT,Dollars_NoPromo INT,Dollars_TPR INT,PACV_Discount INT,PACV_DispWOFeat INT,PACV_FeatAndDisp INT,PACV_FeatWODisp INT,Units INT,Units_AnyPromo INT,Units_Display INT,Units_EQ INT,Units_EQ_AnyPromo STRING,Units_EQ_Display STRING,Units_EQ_FeatAndDisp STRING,Units_EQ_FeatAndOrDisp STRING,Units_EQ_Feature STRING,Units_EQ_NoPromo STRING,Units_EQ_TPR STRING,Units_FeatAndDisp INT,Units_FeatAndOrDisp INT,Units_Feature INT,Units_NoPromo INT,Units_TPR INT,ACV INT ) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' STORED AS TEXTFILE
17/07/05 05:45:49 INFO SparkSqlParser: Parsing command: LOAD DATA INPATH 'adl://bienodad56872stgadlstemp.azuredatalakestore.net/Enriched/Nielsen/NielsenScantrack/Incremental_withoutRepartition/NLS_SYN_SCT.csv' OVERWRITE INTO TABLE sample.test03
17/07/05 05:45:49 INFO SessionState: Could not get hdfsEncryptionShim, it is only applicable to hdfs filesystem.
17/07/05 05:45:49 INFO Hive: Replacing src:adl://bienodad56872stgadlstemp.azuredatalakestore.net/Enriched/Nielsen/NielsenScantrack/Incremental_withoutRepartition/NLS_SYN_SCT.csv, dest: wasb://bieno-da-d-56872-unilevercom-hdi-01@049bienobrunilevercomstg.blob.core.windows.net/hive/warehouse/sample.db/test03/NLS_SYN_SCT.csv, Status:false
17/07/05 05:45:49 ERROR ApplicationMaster: User class threw exception: java.lang.reflect.InvocationTargetException
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.sql.hive.client.Shim_v0_14.loadTable(HiveShim.scala:633)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply$mcV$sp(HiveClientImpl.scala:646)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply(HiveClientImpl.scala:646)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadTable$1.apply(HiveClientImpl.scala:646)
at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:280)
at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:269)
at org.apache.spark.sql.hive.client.HiveClientImpl.loadTable(HiveClientImpl.scala:645)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply$mcV$sp(HiveExternalCatalog.scala:248)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply(HiveExternalCatalog.scala:246)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadTable$1.apply(HiveExternalCatalog.scala:246)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:72)
at org.apache.spark.sql.hive.HiveExternalCatalog.loadTable(HiveExternalCatalog.scala:246)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.loadTable(SessionCatalog.scala:297)
at org.apache.spark.sql.execution.command.LoadDataCommand.run(tables.scala:335)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:115)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:136)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:133)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:114)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:86)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:86)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:186)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:167)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:682)
at com.accenture.Unilever.Nielsen.RestatementSample.Restatement(RestatementSample.scala:70)
at com.accenture.Unilever.StageToEnrich.RestatementLogic$.main(RestatementLogic.scala:36)
at com.accenture.Unilever.StageToEnrich.RestatementLogic.main(RestatementLogic.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:627)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error moving: adl://bienodad56872stgadlstemp.azuredatalakestore.net/Enriched/Nielsen/NielsenScantrack/Incremental_withoutRepartition/NLS_SYN_SCT.csv into: wasb://bieno-da-d-56872-unilevercom-hdi-01@049bienobrunilevercomstg.blob.core.windows.net/hive/warehouse/sample.db/test03/NLS_SYN_SCT.csv
at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:2919)
at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1640)
... 44 more
Caused by: java.io.IOException: Error moving: adl://bienodad56872stgadlstemp.azuredatalakestore.net/Enriched/Nielsen/NielsenScantrack/Incremental_withoutRepartition/NLS_SYN_SCT.csv into: wasb://bieno-da-d-56872-unilevercom-hdi-01@049bienobrunilevercomstg.blob.core.windows.net/hive/warehouse/sample.db/test03/NLS_SYN_SCT.csv
at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:2913)
... 45 more
I can execute the same code from the Hive shell, but from the Spark script I get this error. Is there any special JAR file I need to include? Any help will be appreciated.
When I use Hive with select * from table_name;, it works.
When I use select t.a from table_name t or select * from table_name where ..., the following error happens:
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1414949555870_118360, Tracking URL = N/A
Kill Command = /usr/local/hadoop-2.5.1/bin/hadoop job -kill job_1414949555870_118360
java.io.IOException: Job status not available
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:322)
at org.apache.hadoop.mapreduce.Job.getJobState(Job.java:347)
at org.apache.hadoop.mapred.JobClient$NetworkedJob.getJobState(JobClient.java:295)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.isJobPreparing(HadoopShimsSecure.java:104)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:242)
at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:541)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:431)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1485)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1263)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1091)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:921)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Ended Job = job_1414949555870_118360 with exception java.io.IOException(Job status not available )
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
So, what's wrong with Hive?
DEBUG info follows; it may be useful. Please help!
15/04/14 16:53:48 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@60cb201b
15/04/14 16:53:48 DEBUG mapred.ClientServiceDelegate: Failed to contact AM/History for job job_1414949555870_118441 retrying..
java.net.UnknownHostException: Invalid host name: local host is: (unknown); destination host is: "slave109":43759; java.net.UnknownHostException; For more details see: http://wiki.apache.org/hadoop/UnknownHost
Hive version: 0.13.1
Pig version: 0.13.0
I was trying to read Hive tables using Pig with the command below:
grunt> DATA = LOAD 'dev.profile' USING org.apache.hcatalog.pig.HCatLoader();
I get the below piece of log
2014-07-16 22:44:58,986 [main] WARN org.apache.hadoop.hive.conf.HiveConf - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
2014-07-16 22:44:59,037 [main] INFO hive.metastore - Trying to connect to metastore with URI thrift://localhost:10000
2014-07-16 22:44:59,057 [main] INFO hive.metastore - Connected to metastore.
2014-07-16 22:45:02,019 [main] WARN org.apache.hadoop.hive.conf.HiveConf - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
2014-07-16 22:45:02,166 [main] WARN org.apache.hadoop.hive.conf.HiveConf - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
When I describe the relation, the results come back as expected:
grunt> describe DATA
2014-07-16 22:46:42,189 [main] WARN org.apache.hadoop.hive.conf.HiveConf - DEPRECATED: hive.metastore.ds.retry.* no longer has any effect. Use hive.hmshandler.retry.* instead
DATA: {name: chararray,age: int,salary: int}
But when I dump the data, I get a SocketTimeoutException:
2014-07-16 22:47:25,146 [main] ERROR hive.log - Got exception: org.apache.thrift.transport.TTransportException java.net.SocketTimeoutException: Read timed out
org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_databases(ThriftHiveMetastore.java:600)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_databases(ThriftHiveMetastore.java:587)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:826)
at org.apache.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.isOpen(HiveClientCache.java:276)
at org.apache.hcatalog.common.HiveClientCache.get(HiveClientCache.java:146)
at org.apache.hcatalog.common.HCatUtil.getHiveClient(HCatUtil.java:548)
at org.apache.hcatalog.pig.PigHCatUtil.getHiveMetaClient(PigHCatUtil.java:158)
at org.apache.hcatalog.pig.PigHCatUtil.getTable(PigHCatUtil.java:200)
at org.apache.hcatalog.pig.HCatLoader.getSchema(HCatLoader.java:195)
at org.apache.pig.newplan.logical.relational.LOLoad.getSchemaFromMetaData(LOLoad.java:175)
at org.apache.pig.newplan.logical.relational.LOLoad.<init>(LOLoad.java:89)
at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:885)
at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3568)
at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1625)
at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1102)
at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:560)
at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:188)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1712)
at org.apache.pig.PigServer$Graph.access$000(PigServer.java:1420)
at org.apache.pig.PigServer.storeEx(PigServer.java:1004)
at org.apache.pig.PigServer.store(PigServer.java:974)
at org.apache.pig.PigServer.openIterator(PigServer.java:887)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:752)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:228)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:203)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:542)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
... 40 more
2014-07-16 22:47:25,148 [main] ERROR hive.log - Converting exception to MetaException
2014-07-16 22:47:25,151 [main] INFO hive.metastore - Trying to connect to metastore with URI thrift://localhost:10000
2014-07-16 22:47:25,152 [main] INFO hive.metastore - Connected to metastore.
2014-07-16 22:47:45,173 [main] ERROR org.apache.pig.PigServer - exception during parsing: Error during parsing. Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
Failed to parse: Can not retrieve schema from loader org.apache.hcatalog.pig.HCatLoader@1342464f
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:198)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1712)
at org.apache.pig.PigServer$Graph.access$000(PigServer.java:1420)
at org.apache.pig.PigServer.storeEx(PigServer.java:1004)
at org.apache.pig.PigServer.store(PigServer.java:974)
at org.apache.pig.PigServer.openIterator(PigServer.java:887)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:752)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:228)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:203)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:542)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.RuntimeException: Can not retrieve schema from loader org.apache.hcatalog.pig.HCatLoader@1342464f
at org.apache.pig.newplan.logical.relational.LOLoad.<init>(LOLoad.java:91)
at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:885)
at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3568)
at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1625)
at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1102)
at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:560)
at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:188)
... 17 more
Caused by: org.apache.pig.impl.logicalLayer.FrontendException: ERROR 2245: Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
at org.apache.pig.newplan.logical.relational.LOLoad.getSchemaFromMetaData(LOLoad.java:179)
at org.apache.pig.newplan.logical.relational.LOLoad.<init>(LOLoad.java:89)
... 24 more
Caused by: java.io.IOException: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
at org.apache.hcatalog.pig.PigHCatUtil.getTable(PigHCatUtil.java:205)
at org.apache.hcatalog.pig.HCatLoader.getSchema(HCatLoader.java:195)
at org.apache.pig.newplan.logical.relational.LOLoad.getSchemaFromMetaData(LOLoad.java:175)
... 25 more
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_table(ThriftHiveMetastore.java:1036)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_table(ThriftHiveMetastore.java:1022)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:997)
at org.apache.hcatalog.common.HCatUtil.getTable(HCatUtil.java:194)
at org.apache.hcatalog.pig.PigHCatUtil.getTable(PigHCatUtil.java:201)
... 27 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:152)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
... 37 more
2014-07-16 22:47:45,176 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 2245: Cannot get schema from loadFunc org.apache.hcatalog.pig.HCatLoader
Even though I am able to connect to the metastore, I am not able to retrieve the data. What could be the reason for the read failure?
At times the process also fails with java.lang.OutOfMemoryError: Java heap space.
Any help would be greatly appreciated.
Edit hive-site.xml.
Replace hive.metastore.ds.retry with hive.hmshandler.retry:
vim /usr/local/Cellar/hive/0.13.1/libexec/conf/hive-site.xml
:%s/hive.metastore.ds.retry/hive.hmshandler.retry/g
:wq
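After the substitution, the renamed properties in hive-site.xml should look something like this (values are illustrative, not tuned recommendations; hive.hmshandler.retry.attempts and hive.hmshandler.retry.interval are the settings the deprecation warning points at):
<!-- Retry the metastore handler a few times before giving up. -->
<property>
  <name>hive.hmshandler.retry.attempts</name>
  <value>5</value>
</property>
<!-- Milliseconds to wait between handler retries. -->
<property>
  <name>hive.hmshandler.retry.interval</name>
  <value>1000</value>
</property>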