MobileFirst 6.3.0 Getting UnavailableShardsException - elasticsearch

After redeploying the application because of a DB2 password update, the application could not be launched.
I found the following log:
[11/23/17 12:23:29:988 CST] 000000f1 DataReceiver E Error sending bulk request: java.lang.RuntimeException: failure in bulk execution:
[0]: index [worklight], type [app_activities], id [yRA1jtBzT9ScN0Hj-Fft0g], message [UnavailableShardsException[[worklight][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#e38c893c]]
[3]: index [worklight], type [devices], id [c2062eef-e266-4209-83d2-13d043ae2a9d], message [UnavailableShardsException[[worklight][2] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#34a9dfa3]]
[6]: index [worklight], type [devices], id [1ed55e4b-26e0-38ba-9f83-2b65d951722e], message [UnavailableShardsException[[worklight][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#e38c893c]]
[8]: index [worklight], type [devices], id [1ed55e4b-26e0-38ba-9f83-2b65d951722e], message [UnavailableShardsException[[worklight][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#e38c893c]]
[10]: index [worklight], type [app_activities], id [e4SxG701QwOwg7L5VztsTQ], message [UnavailableShardsException[[worklight][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#e38c893c]]
[12]: index [worklight], type [devices], id [c2062eef-e266-4209-83d2-13d043ae2a9d], message [UnavailableShardsException[[worklight][2] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#34a9dfa3]]
[13]: index [worklight], type [app_activities], id [wB2fqKAkT9-JAgfAqSPHZw], message [UnavailableShardsException[[worklight][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#e38c893c]]
at com.ibm.elasticsearch.servlet.DataReceiver.processData(DataReceiver.java:132)
at com.ibm.elasticsearch.servlet.DataReceiver.processDataLegacy(DataReceiver.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:613)
at org.apache.wink.server.internal.handlers.InvokeMethodHandler.handleRequest(InvokeMethodHandler.java:63)
at org.apache.wink.server.handlers.AbstractHandler.handleRequest(AbstractHandler.java:33)
at org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)
at org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)
at org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:67)
at org.apache.wink.server.internal.handlers.CreateInvocationParametersHandler.handleRequest(CreateInvocationParametersHandler.java:54)
I have not seen this error before.
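An UnavailableShardsException on a bulk request means that, within the one-minute timeout, none of the target shards of the worklight index had an active copy, which suggests the problem is in the embedded Analytics Elasticsearch store rather than in DB2 or the application itself. A minimal diagnostic sketch, assuming the Analytics Elasticsearch HTTP endpoint is reachable on localhost:9200 (adjust host and port to your installation):
# Overall cluster status; "red" means at least one primary shard is unassigned
curl -s 'http://localhost:9200/_cluster/health?pretty'
# Per-shard view of the worklight index; look for UNASSIGNED primaries
curl -s 'http://localhost:9200/_cat/shards/worklight?v'
If the worklight primaries show as unassigned, the bulk writes will keep timing out until the analytics node recovers those shards.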

Related

FAIL: Configuration for 'config' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active

I was trying to configure Open Distro Elasticsearch with my own certificates.
When I did a curl to esip:9200, the response was:
Open Distro Security not initialized.
Later, when I tried to run securityadmin.sh to initialize security, the error was like this:
Open Distro Security Admin v7
Will connect to localhost:9300 ... done
Connected as CN=master
Elasticsearch Version: 7.8.0
Open Distro Security Version: 1.9.0.0
Contacting elasticsearch cluster 'elasticsearch' ...
Clustername: elasticsearch
Clusterstate: RED
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
ERR: .opendistro_security index state is RED.
Populate config from /Users/jk/Desktop/ELK 7.9.0/opendistroforelasticsearch-1.9.0/plugins/opendistro_security/securityconfig
Will update '_doc/config' with plugins/opendistro_security/securityconfig/config.yml
FAIL: Configuration for 'config' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][config], source[n/a, actual length: [3.7kb], max length: 2kb]}] and a refresh]]
Will update '_doc/roles' with plugins/opendistro_security/securityconfig/roles.yml
FAIL: Configuration for 'roles' failed because of NodeClosedException[node closed {master}{FXhShYtXTIOatM7kb36ePQ}{sxggZ8ceRHu4maB_ARDaBQ}{192.168.0.108}{192.168.0.108:9300}{dmr}]
Will update '_doc/rolesmapping' with plugins/opendistro_security/securityconfig/roles_mapping.yml
WARNING: JAVA_HOME not set, will use /usr/bin/java
Open Distro Security Admin v7
Will connect to localhost:9300 ... done
Connected as CN=master
Elasticsearch Version: 7.8.0
Open Distro Security Version: 1.9.0.0
Contacting elasticsearch cluster 'elasticsearch' ...
Clustername: elasticsearch
Clusterstate: RED
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
ERR: .opendistro_security index state is RED.
Populate config from /Users/jk/Desktop/ELK 7.9.0/opendistroforelasticsearch-1.9.0/plugins/opendistro_security/securityconfig
Will update '_doc/config' with plugins/opendistro_security/securityconfig/config.yml
FAIL: Configuration for 'config' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][config], source[n/a, actual length: [3.7kb], max length: 2kb]}] and a refresh]]
Will update '_doc/roles' with plugins/opendistro_security/securityconfig/roles.yml
FAIL: Configuration for 'roles' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][roles], source[{"roles":"eyJfbWV0YSI6eyJ0eXBlIjoicm9sZXMiLCJjb25maWdfdmVyc2lvbiI6Mn0sImtpYmFuYV9yZWFkX29ubHkiOnsicmVzZXJ2ZWQiOnRydWV9LCJzZWN1cml0eV9yZXN0X2FwaV9hY2Nlc3MiOnsicmVzZXJ2ZWQiOnRydWV9LCJhbGVydGluZ192aWV3X2FsZXJ0cyI6eyJyZXNlcnZlZCI6dHJ1ZSwiaW5kZXhfcGVybWlzc2lvbnMiOlt7ImluZGV4X3BhdHRlcm5zIjpbIi5vcGVuZGlzdHJvLWFsZXJ0aW5nLWFsZXJ0KiJdLCJhbGxvd2VkX2FjdGlvbnMiOlsicmVhZCJdfV19LCJhbGVydGluZ19jcnVkX2FsZXJ0cyI6eyJyZXNlcnZlZCI6dHJ1ZSwiaW5kZXhfcGVybWlzc2lvbnMiOlt7ImluZGV4X3BhdHRlcm5zIjpbIi5vcGVuZGlzdHJvLWFsZXJ0aW5nLWFsZXJ0KiJdLCJhbGxvd2VkX2FjdGlvbnMiOlsiY3J1ZCJdfV19LCJhbGVydGluZ19mdWxsX2FjY2VzcyI6eyJyZXNlcnZlZCI6dHJ1ZSwiaW5kZXhfcGVybWlzc2lvbnMiOlt7ImluZGV4X3BhdHRlcm5zIjpbIi5vcGVuZGlzdHJvLWFsZXJ0aW5nLWNvbmZpZyIsIi5vcGVuZGlzdHJvLWFsZXJ0aW5nLWFsZXJ0KiJdLCJhbGxvd2VkX2FjdGlvbnMiOlsiY3J1ZCJdfV19fQ=="}]}] and a refresh]]
Will update '_doc/rolesmapping' with plugins/opendistro_security/securityconfig/roles_mapping.yml
FAIL: Configuration for 'rolesmapping' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][rolesmapping], source[{"rolesmapping":"eyJfbWV0YSI6eyJ0eXBlIjoicm9sZXNtYXBwaW5nIiwiY29uZmlnX3ZlcnNpb24iOjJ9LCJhbGxfYWNjZXNzIjp7InJlc2VydmVkIjpmYWxzZSwiYmFja2VuZF9yb2xlcyI6WyJhZG1pbiJdLCJkZXNjcmlwdGlvbiI6Ik1hcHMgYWRtaW4gdG8gYWxsX2FjY2VzcyJ9LCJvd25faW5kZXgiOnsicmVzZXJ2ZWQiOmZhbHNlLCJ1c2VycyI6WyIqIl0sImRlc2NyaXB0aW9uIjoiQWxsb3cgZnVsbCBhY2Nlc3MgdG8gYW4gaW5kZXggbmFtZWQgbGlrZSB0aGUgdXNlcm5hbWUifSwibG9nc3Rhc2giOnsicmVzZXJ2ZWQiOmZhbHNlLCJiYWNrZW5kX3JvbGVzIjpbImxvZ3N0YXNoIl19LCJraWJhbmFfdXNlciI6eyJyZXNlcnZlZCI6ZmFsc2UsImJhY2tlbmRfcm9sZXMiOlsia2liYW5hdXNlciJdLCJkZXNjcmlwdGlvbiI6Ik1hcHMga2liYW5hdXNlciB0byBraWJhbmFfdXNlciJ9LCJyZWFkYWxsIjp7InJlc2VydmVkIjpmYWxzZSwiYmFja2VuZF9yb2xlcyI6WyJyZWFkYWxsIl19LCJtYW5hZ2Vfc25hcHNob3RzIjp7InJlc2VydmVkIjpmYWxzZSwiYmFja2VuZF9yb2xlcyI6WyJzbmFwc2hvdHJlc3RvcmUiXX0sImtpYmFuYV9zZXJ2ZXIiOnsicmVzZXJ2ZWQiOnRydWUsInVzZXJzIjpbImtpYmFuYXNlcnZlciJdfX0="}]}] and a refresh]]
Will update '_doc/internalusers' with plugins/opendistro_security/securityconfig/internal_users.yml
FAIL: Configuration for 'internalusers' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][internalusers], source[{"internalusers":"eyJfbWV0YSI6eyJ0eXBlIjoiaW50ZXJuYWx1c2VycyIsImNvbmZpZ192ZXJzaW9uIjoyfSwiYWRtaW4iOnsiaGFzaCI6IiQyYSQxMiRWY0NEZ2gyTkRrMDdKR04wcmpHYk0uQWQ0MXFWUi9ZRkpjZ0hwMFVHbnM1SkR5bXYuLlRPRyIsInJlc2VydmVkIjp0cnVlLCJiYWNrZW5kX3JvbGVzIjpbImFkbWluIl0sImRlc2NyaXB0aW9uIjoiRGVtbyBhZG1pbiB1c2VyIn0sImtpYmFuYXNlcnZlciI6eyJoYXNoIjoiJDJhJDEyJDRBY2dBdDN4d09XYWRBNXM1YmxMNmV2MzlPWEROaG1PZXNFb28zM2VadHJxMk4wWXJVM0guIiwicmVzZXJ2ZWQiOnRydWUsImRlc2NyaXB0aW9uIjoiRGVtbyBraWJhbmFzZXJ2ZXIgdXNlciJ9LCJraWJhbmFybyI6eyJoYXNoIjoiJDJhJDEyJEpKU1hOZlRvd3o3VXU1dHRYZmVZcGVZRTBhckFDdmN3bFBCU3RCMUYuTUk3ZjBVOVo0REdDIiwicmVzZXJ2ZWQiOmZhbHNlLCJiYWNrZW5kX3JvbGVzIjpbImtpYmFuYXVzZXIiLCJyZWFkYWxsIl0sImF0dHJpYnV0ZXMiOnsiYXR0cmlidXRlMSI6InZhbHVlMSIsImF0dHJpYnV0ZTIiOiJ2YWx1ZTIiLCJhdHRyaWJ1dGUzIjoidmFsdWUzIn0sImRlc2NyaXB0aW9uIjoiRGVtbyBraWJhbmFybyB1c2VyIn0sImxvZ3N0YXNoIjp7Imhhc2giOiIkMmEkMTIkdTFTaFI0bDR1QlMzVXY1OVBhMnk1LjF1UXVaQnJadG1OZnFCM2lNLy5qTDBYb1Y5c2doUzIiLCJyZXNlcnZlZCI6ZmFsc2UsImJhY2tlbmRfcm9sZXMiOlsibG9nc3Rhc2giXSwiZGVzY3JpcHRpb24iOiJEZW1vIGxvZ3N0YXNoIHVzZXIifSwicmVhZGFsbCI6eyJoYXNoIjoiJDJhJDEyJGFlNHljd3p3dkx0Wnh3WjgyUm1pRXVuQmJJUGlBbUdaZHVCQWpLTjBUWGR3UUZ0Q3dBUnoyIiwicmVzZXJ2ZWQiOmZhbHNlLCJiYWNrZW5kX3JvbGVzIjpbInJlYWRhbGwiXSwiZGVzY3JpcHRpb24iOiJEZW1vIHJlYWRhbGwgdXNlciJ9LCJzbmFwc2hvdHJlc3RvcmUiOnsiaGFzaCI6IiQyeSQxMiREcHdtZXRIS3dnWW5vcmJnZHZPUkNlbnY0TkFLOGNQVWc4QUk2cHhMQ3VXZi9BTGMwLnY3VyIsInJlc2VydmVkIjpmYWxzZSwiYmFja2VuZF9yb2xlcyI6WyJzbmFwc2hvdHJlc3RvcmUiXSwiZGVzY3JpcHRpb24iOiJEZW1vIHNuYXBzaG90cmVzdG9yZSB1c2VyIn19"}]}] and a refresh]]
Will update '_doc/actiongroups' with plugins/opendistro_security/securityconfig/action_groups.yml
FAIL: Configuration for 'actiongroups' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][actiongroups], source[{"actiongroups":"eyJfbWV0YSI6eyJ0eXBlIjoiYWN0aW9uZ3JvdXBzIiwiY29uZmlnX3ZlcnNpb24iOjJ9fQ=="}]}] and a refresh]]
Will update '_doc/tenants' with plugins/opendistro_security/securityconfig/tenants.yml
FAIL: Configuration for 'tenants' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][tenants], source[{"tenants":"eyJfbWV0YSI6eyJ0eXBlIjoidGVuYW50cyIsImNvbmZpZ192ZXJzaW9uIjoyfSwiYWRtaW5fdGVuYW50Ijp7InJlc2VydmVkIjpmYWxzZSwiZGVzY3JpcHRpb24iOiJEZW1vIHRlbmFudCBmb3IgYWRtaW4gdXNlciJ9fQ=="}]}] and a refresh]]
Will update '_doc/nodesdn' with plugins/opendistro_security/securityconfig/nodes_dn.yml
FAIL: Configuration for 'nodesdn' failed because of UnavailableShardsException[[.opendistro_security][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[.opendistro_security][0]] containing [index {[.opendistro_security][_doc][nodesdn], source[{"nodesdn":"eyJfbWV0YSI6eyJ0eXBlIjoibm9kZXNkbiIsImNvbmZpZ192ZXJzaW9uIjoyfX0="}]}] and a refresh]]
ERR: cannot upload configuration, see errors above
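Since securityadmin.sh itself reports the cluster state as RED and the .opendistro_security primary shard as not active, the configuration upload cannot succeed until that shard is allocated. A hedged diagnostic sketch, assuming Elasticsearch 7.8 listening on https://localhost:9200 and an admin client certificate that the REST layer accepts (the certificate paths below are placeholders):
# Ask Elasticsearch why the .opendistro_security primary is not allocated
curl -sk --cert /path/to/admin.pem --key /path/to/admin-key.pem \
  -H 'Content-Type: application/json' \
  'https://localhost:9200/_cluster/allocation/explain?pretty' \
  -d '{"index": ".opendistro_security", "shard": 0, "primary": true}'
# Per-shard state of every index on the node
curl -sk --cert /path/to/admin.pem --key /path/to/admin-key.pem 'https://localhost:9200/_cat/shards?v'
Once the explain output shows why the primary cannot be assigned, fix that cause and re-run securityadmin.sh.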

Exception 'Cannot get a connection, pool error Timeout waiting for idle object' when using 'DBCPConnectionPoolLookup' service in Nifi

I'm trying to use the 'DBCPConnectionPoolLookup' service in 'ExecuteGroovyScript' to dynamically query the required database based on the 'database.name' attribute of the incoming flow file.
The processor successfully resolves the corresponding 'DBCPConnectionPool' service, but I'm getting the exception java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object. By contrast, if I use the 'DBCPConnectionPool' service directly, without the 'Lookup' service and without changing any configuration, it works fine.
I access the service as follows:
def clientDb = CTL.SQLLookupService.getConnection(flowFile.getAttributes())
Then I use the 'clientDb' object to query:
clientDb.rows(timseriesSqlCountQuery).eachWithIndex { row, idx ->numRowsTimeSeries= row.c}
I have tried increasing Max Wait Time and Max Total Connections to higher values in the 'DBCPConnectionPool' service, but it does not help.
Please find below links to images of the code, error, and configuration:
Exception
Configuration of 'ExecuteGroovyScript'
Configuration of 'DBCPConnectionPool' service
Configuration of 'DBCPConnectionPoolLookup' service
Script Code
import org.apache.nifi.distributed.cache.client.Deserializer
import org.apache.nifi.distributed.cache.client.Serializer
import org.apache.nifi.distributed.cache.client.exception.DeserializationException
import org.apache.nifi.distributed.cache.client.exception.SerializationException
import groovy.sql.Sql
import java.time.*
try {
def flowFile = session.get()
def isBootstrap=flowFile."isBootstrap"
def timseriesSqlQuery='SELECT id FROM [dbo].[Points] where ([MappedToEquipment] = \'Mapped\' or PointStatus = \'Mapped\')'
def timseriesSqlCountQuery='SELECT count(id) as c FROM [dbo].[Points] where ([MappedToEquipment] = \'Mapped\' or PointStatus = \'Mapped\')'
def spaceSqlQuery='select id from (select id from dbo.organization union select id from dbo.facility union select id from dbo.building union select id from dbo.floor union select id from dbo.wing union select id from dbo.room union select id from dbo.systems) tmp'
def spaceSqlCountQuery='select count(id) as c from (select id from dbo.organization union select id from dbo.facility union select id from dbo.building union select id from dbo.floor union select id from dbo.wing union select id from dbo.room union select id from dbo.systems) tmp'
def cache = CTL.lastIngestTimeMap
def clientDb = CTL.SQLLookupService.getConnection(flowFile.getAttributes())//SQL.staticService
int numRowsTimeSeries=0
int numRowsSpace=0
clientDb.rows(timseriesSqlCountQuery).eachWithIndex { row, idx ->numRowsTimeSeries= row.c}
clientDb.rows(spaceSqlCountQuery).eachWithIndex { row, idx ->numRowsSpace= row.c}
}
Exception from the NiFi logs:
2019-09-12 06:18:33,629 ERROR [Timer-Driven Process Thread-3] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=b435c079-ee6c-3c42-a6ea-020968267ecf] ExecuteGroovyScript[id=b435c079-ee6c-3c42-a6ea-020968267ecf] failed to process session due to java.lang.ClassCastException; Processor Administratively Yielded for 1 sec: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,629 WARN [Timer-Driven Process Thread-3] o.a.n.controller.tasks.ConnectableTask Administratively Yielding ExecuteGroovyScript[id=b435c079-ee6c-3c42-a6ea-020968267ecf] due to uncaught Exception: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,629 ERROR [Timer-Driven Process Thread-9] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=9b81ca15-93a5-3953-9f40-d0874cfe2531] ExecuteGroovyScript[id=9b81ca15-93a5-3953-9f40-d0874cfe2531] failed to process session due to java.lang.ClassCastException; Processor Administratively Yielded for 1 sec: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,629 WARN [Timer-Driven Process Thread-9] o.a.n.controller.tasks.ConnectableTask Administratively Yielding ExecuteGroovyScript[id=9b81ca15-93a5-3953-9f40-d0874cfe2531] due to uncaught Exception: java.lang.ClassCastException
java.lang.ClassCastException: null
2019-09-12 06:18:33,708 ERROR [Timer-Driven Process Thread-10] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=a1ec4496-dca3-38ab-a47b-43d7ff95e40f] org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object: org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:308)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
at com.sun.proxy.$Proxy89.getConnection(Unknown Source)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onInitSQL(ExecuteGroovyScript.java:339)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onTrigger(ExecuteGroovyScript.java:439)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:142)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:305)
... 19 common frames omitted
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:451)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:365)
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:134)
... 21 common frames omitted
2019-09-12 06:18:33,708 ERROR [Timer-Driven Process Thread-2] o.a.n.p.groovyx.ExecuteGroovyScript ExecuteGroovyScript[id=54d1e251-88f2-33f3-0489-722879a802bd] org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object: org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
org.apache.nifi.processor.exception.ProcessException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:308)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.GeneratedMethodAccessor106.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
at com.sun.proxy.$Proxy89.getConnection(Unknown Source)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onInitSQL(ExecuteGroovyScript.java:339)
at org.apache.nifi.processors.groovyx.ExecuteGroovyScript.onTrigger(ExecuteGroovyScript.java:439)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:142)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:305)
... 19 common frames omitted
Caused by: java.util.NoSuchElementException: Timeout waiting for idle object
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:451)
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:365)
at org.apache.commons.dbcp2.PoolingDataSource.getConnection(PoolingDataSource.java:134)
... 21 common frames omitted
Finally, after bringing NiFi down twice, I found the solution. The problem was in the code I was using: I used the object returned by CTL.index.getConnection(flowFile.getAttributes()) to query the SQL table directly, but that object is actually a connection. Because of this, NiFi used up all available connections to SQL, so even when I reverted to the 'DBCPConnectionPool' service instead of the 'Lookup' service I still got the above error, and it only worked again after restarting NiFi.
The correct code to use in your script with the 'Lookup' service is:
def connectionObj = CTL.index.getConnection(flowFile.getAttributes())
def clientDb = new Sql(connectionObj)
Now use the 'clientDb' object to query your table
clientDb.rows(timseriesSqlCountQuery).eachWithIndex { row, idx ->numRowsTimeSeries= row.c}

shard allocation failing after power outage elasticsearch

failed to recover from translog
[2016-08-25 11:23:12,410][WARN ][indices.cluster ] [Cobalt Man] [[fct_providers][4]] marking and sending shard failed due to [failed recovery]
[fct_providers][[fct_providers][4]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to recover from translog]; nested: EngineException[failed to recover from translog]; nested: ElasticsearchException[unexpected exception reading from translog snapshot of /root/elasticsearch-2.3.4/data/elasticsearch/nodes/0/indices/fct_providers/4/translog/translog-31.tlog]; nested: EOFException[read past EOF. pos [1047332] length: [4] end: [1047332]];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:250)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: [fct_providers][[fct_providers][4]] EngineCreationFailureException[failed to recover from translog]; nested: EngineException[failed to recover from translog]; nested: ElasticsearchException[unexpected exception reading from translog snapshot of /root/elasticsearch-2.3.4/data/elasticsearch/nodes/0/indices/fct_providers/4/translog/translog-31.tlog]; nested: EOFException[read past EOF. pos [1047332] length: [4] end: [1047332]];
at org.elasticsearch.index.engine.InternalEngine.(InternalEngine.java:177)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1509)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1493)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:966)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:938)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:241)
... 5 more
Caused by: [fct_providers][[fct_providers][4]] EngineException[failed to recover from translog]; nested: ElasticsearchException[unexpected exception reading from translog snapshot of /root/elasticsearch-2.3.4/data/elasticsearch/nodes/0/indices/fct_providers/4/translog/translog-31.tlog]; nested: EOFException[read past EOF. pos [1047332] length: [4] end: [1047332]];
at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:240)
at org.elasticsearch.index.engine.InternalEngine.(InternalEngine.java:174)
... 11 more
Caused by: ElasticsearchException[unexpected exception reading from translog snapshot of /root/elasticsearch-2.3.4/data/elasticsearch/nodes/0/indices/fct_providers/4/translog/translog-31.tlog]; nested: EOFException[read past EOF. pos [1047332] length: [4] end: [1047332]];
at org.elasticsearch.index.translog.TranslogReader.readSize(TranslogReader.java:102)
at org.elasticsearch.index.translog.TranslogReader.access$000(TranslogReader.java:46)
at org.elasticsearch.index.translog.TranslogReader$ReaderSnapshot.readOperation(TranslogReader.java:294)
at org.elasticsearch.index.translog.TranslogReader$ReaderSnapshot.next(TranslogReader.java:287)
at org.elasticsearch.index.translog.MultiSnapshot.next(MultiSnapshot.java:70)
at org.elasticsearch.index.shard.TranslogRecoveryPerformer.recoveryFromSnapshot(TranslogRecoveryPerformer.java:105)
at org.elasticsearch.index.shard.IndexShard$1.recoveryFromSnapshot(IndexShard.java:1578)
at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:238)
... 12 more
Caused by: java.io.EOFException: read past EOF. pos [1047332] length: [4] end: [1047332]
at org.elasticsearch.common.io.Channels.readFromFileChannelWithEofException(Channels.java:102)
at org.elasticsearch.index.translog.ImmutableTranslogReader.readBytes(ImmutableTranslogReader.java:84)
at org.elasticsearch.index.translog.TranslogReader.readSize(TranslogReader.java:91)
... 19 more
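The EOFException ("read past EOF") while reading translog-31.tlog suggests the translog was truncated by the power outage, so the shard cannot replay it during recovery. A diagnostic sketch for Elasticsearch 2.3, assuming the node is reachable on localhost:9200:
# Which shards of fct_providers are unassigned, and how far did recovery get?
curl -s 'http://localhost:9200/_cat/shards/fct_providers?v'
curl -s 'http://localhost:9200/_cat/recovery/fct_providers?v'
# Per-index health across the cluster
curl -s 'http://localhost:9200/_cluster/health?level=indices&pretty'
If the translog really is corrupt, the safest ways back are restoring the index from a snapshot or re-indexing the data for the affected shard.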

Nutch Elasticsearch Integration

I'm following this tutorial for setting up Nutch along with Elasticsearch. Whenever I try to index the data into ES, it returns an error. Here are the logs:
Command:
bin/nutch index elasticsearch -all
Logs when I set elastic.port to 9200 in conf/nutch-site.xml:
2016-05-05 13:22:49,903 INFO basic.BasicIndexingFilter - Maximum title length for indexing set to: 100
2016-05-05 13:22:49,904 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.basic.BasicIndexingFilter
2016-05-05 13:22:49,904 INFO anchor.AnchorIndexingFilter - Anchor deduplication is: off
2016-05-05 13:22:49,904 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.anchor.AnchorIndexingFilter
2016-05-05 13:22:49,905 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.metadata.MetadataIndexer
2016-05-05 13:22:49,906 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.more.MoreIndexingFilter
2016-05-05 13:22:49,961 INFO elastic.ElasticIndexWriter - Processing remaining requests [docs = 0, length = 0, total docs = 0]
2016-05-05 13:22:49,961 INFO elastic.ElasticIndexWriter - Processing to finalize last execute
2016-05-05 13:22:54,898 INFO client.transport - [Peggy Carter] failed to get node info for [#transport#-1][ubuntu][inet[localhost/127.0.0.1:9200]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9200]][cluster:monitor/nodes/info] request_id [1] timed out after [5000ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:366)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-05-05 13:22:55,682 INFO indexer.IndexWriters - Adding org.apache.nutch.indexwriter.elastic.ElasticIndexWriter
2016-05-05 13:22:55,683 INFO indexer.IndexingJob - Active IndexWriters :
ElasticIndexWriter
elastic.cluster : elastic prefix cluster
elastic.host : hostname
elastic.port : port (default 9300)
elastic.index : elastic index command
elastic.max.bulk.docs : elastic bulk index doc counts. (default 250)
elastic.max.bulk.size : elastic bulk index length. (default 2500500 ~2.5MB)
2016-05-05 13:22:55,711 INFO elasticsearch.plugins - [Adrian Toomes] loaded [], sites []
2016-05-05 13:23:00,763 INFO client.transport - [Adrian Toomes] failed to get node info for [#transport#-1][ubuntu][inet[localhost/127.0.0.1:9200]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9200]][cluster:monitor/nodes/info] request_id [0] timed out after [5000ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:366)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-05-05 13:23:00,766 INFO indexer.IndexingJob - IndexingJob: done.
Logs when the default port 9300 is used:
2016-05-05 13:58:44,584 INFO elasticsearch.plugins - [Mentallo] loaded [], sites []
2016-05-05 13:58:44,673 WARN transport.netty - [Mentallo] Message not fully read (response) for [0] handler future(org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler$1#3c80f1dd), error [true], resetting
2016-05-05 13:58:44,674 INFO client.transport - [Mentallo] failed to get node info for [#transport#-1][ubuntu][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:173)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:125)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.StreamCorruptedException: Unsupported version: 1
at org.elasticsearch.common.io.ThrowableObjectInputStream.readStreamHeader(ThrowableObjectInputStream.java:46)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:301)
at org.elasticsearch.common.io.ThrowableObjectInputStream.<init>(ThrowableObjectInputStream.java:38)
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:170)
... 23 more
2016-05-05 13:58:44,676 INFO indexer.IndexingJob - IndexingJob: done.
I've configured everything correctly and have looked at various threads as well, but to no avail. The Java version is the same for both ES and Nutch. Is there a bug here?
I'm using Nutch 2.3.1 and have tried both ES 1.4.4 and 2.3.2. I can see data in MongoDB, but I cannot index data into ES. Why?
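The two failure modes in the logs point at a client/server mismatch rather than a configuration typo: on port 9200 the TransportClient times out because that port speaks HTTP, not the binary transport protocol, and the "Unsupported version: 1" serialization error on port 9300 is what an Elasticsearch 1.x transport client typically produces when it talks to a 2.x node. A quick check, assuming Elasticsearch runs on localhost and a standard ant-built Nutch runtime layout (runtime/local/lib is an assumption about your build):
# Server version and cluster_name (cluster_name must match elastic.cluster in conf/nutch-site.xml)
curl -s 'http://localhost:9200/?pretty'
# Which Elasticsearch client jar the Nutch runtime actually bundles (path is an assumption)
ls runtime/local/lib | grep -i elasticsearch
If the bundled client is 1.x, indexing into ES 1.4.4 over port 9300 should work once elastic.cluster matches the server's cluster_name; indexing into 2.3.2 would need an index writer built against the 2.x client.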

Why does the ES cluster stop working until I delete the old index?

The ES documentation says that if we restart Node 1 and it still has copies of the old shards, it will try to reuse them, copying over from the primary shard only the files that have changed in the meantime.
So I did an experiment.
There are 5 nodes in my cluster. Primary shard 1 is stored on node 1, and replica shard 1 is stored on node 2. When I restart node 1 and node 2, primary shard 1's state becomes UNASSIGNED, and replica shard 1's state becomes UNASSIGNED too; the health of the cluster turns red and never becomes green again. The cluster stops working until I delete the old index.
Here is part of the master log.
[ERROR][marvel.agent ] [es10] background thread had an uncaught exception
ElasticsearchException[failed to flush exporter bulks]
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:104)
at org.elasticsearch.marvel.agent.exporter.ExportBulk.close(ExportBulk.java:53)
at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:201)
at java.lang.Thread.run(Thread.java:745)
Suppressed: ElasticsearchException[failed to flush [default_local] exporter bulk]; nested: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:
[8]: index [.marvel-es-data], type [cluster_info], id [nm4dj3ucSRGsdautV_GDDw], message [UnavailableShardsException[[.marvel-es-data][1] primary shard is not active Timeout: [1m], request: [shard bulk {[.marvel-es-data][1]}]]]];
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:106)
... 3 more
Caused by: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:
[8]: index [.marvel-es-data], type [cluster_info], id [nm4dj3ucSRGsdautV_GDDw], message [UnavailableShardsException[[.marvel-es-data][1] primary shard is not active Timeout: [1m], request: [shard bulk {[.marvel-es-data][1]}]]]]
at org.elasticsearch.marvel.agent.exporter.local.LocalBulk.flush(LocalBulk.java:114)
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:101)
... 3 more
[2016-02-19 12:53:18,769][ERROR][marvel.agent ] [es10] background thread had an uncaught exception
ElasticsearchException[failed to flush exporter bulks]
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:104)
at org.elasticsearch.marvel.agent.exporter.ExportBulk.close(ExportBulk.java:53)
at org.elasticsearch.marvel.agent.AgentService$ExportingWorker.run(AgentService.java:201)
at java.lang.Thread.run(Thread.java:745)
Suppressed: ElasticsearchException[failed to flush [default_local] exporter bulk]; nested: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:
[8]: index [.marvel-es-data], type [cluster_info], id [nm4dj3ucSRGsdautV_GDDw], message [UnavailableShardsException[[.marvel-es-data][1] primary shard is not active Timeout: [1m], request: [shard bulk {[.marvel-es-data][1]}]]]];
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:106)
... 3 more
Caused by: ElasticsearchException[failure in bulk execution, only the first 100 failures are printed:
[8]: index [.marvel-es-data], type [cluster_info], id [nm4dj3ucSRGsdautV_GDDw], message [UnavailableShardsException[[.marvel-es-data][1] primary shard is not active Timeout: [1m], request: [shard bulk {[.marvel-es-data][1]}]]]]
at org.elasticsearch.marvel.agent.exporter.local.LocalBulk.flush(LocalBulk.java:114)
at org.elasticsearch.marvel.agent.exporter.ExportBulk$Compound.flush(ExportBulk.java:101)
... 3 more
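When the node holding a primary and the node holding its only replica are restarted at the same time, the cluster can be left with no live copy of that shard, and every write to the affected index (including Marvel's own exports to .marvel-es-data above) fails with "primary shard is not active" until a copy is assigned again. A diagnostic sketch, assuming the cluster is reachable on localhost:9200:
# Shards that still have no assigned copy, and the nodes that have rejoined
curl -s 'http://localhost:9200/_cat/shards?v' | grep UNASSIGNED
curl -s 'http://localhost:9200/_cat/nodes?v'
If the restarted nodes have rejoined with their original data paths but the primaries stay UNASSIGNED, the allocation is worth investigating before deleting the index, since deleting it discards the data.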
