Kura - cannot uninstall deployment package remotely (DEPLOY-V2) - OSGi

I am new to Kura and have been trying to remotely uninstall a deployment package using Amit's MQTT application, but I am unable to do so. This is the request payload I send from the application:
dp.name=hello_osgi
job.id=12345891011L
dp.version=1.0.0
I get the following error on the response topic:
-- listing properties --
response.code=500
response.exception.message=java.lang.String cannot be cast to java.lang.Long,
response.exception.stack=java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Long
at org.eclipse.kura.core.deployment.uninstall.DeploymentPackageUninstallOptions.
<init>(DeploymentPackageUninstallOptions.java:38)
at org.eclipse.kura.core.deployment.CloudDeploymentHandlerV2.doExecUninstall(CloudDeploymentHandlerV2.java:594)
at org.eclipse.kura.core.deployment.CloudDeploymentHandlerV2.doExec(CloudDeploymentHandlerV2.java:343)
at org.eclipse.kura.cloud.MessageHandlerCallable.call(Cloudlet.java:270)
at org.eclipse.kura.cloud.MessageHandlerCallable.call(Cloudlet.java:1)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745) ,
response.code=500
Malformed uninstall request
Following is the Kura console output:
16:12:04,707 [MQTT Call: test-client] INFO CloudServiceImpl:440 - Message arrived on topic: $EDC/amir-kura/test-client/DEPLOY-V2/EXEC/uninstall
16:12:04,709 [pool-3-thread-2] ERROR CloudDeploymentHandlerV2:597 - Malformed uninstall request!
16:12:04,710 [pool-3-thread-2] INFO DataServiceImpl:441 - Storing message on topic :$EDC/#account-name/CLIENT_QED0U1F74NLHA7M0Q5KI606QAU/DEPLOY-V2/REPLY/REQUEST_OTFGFHBKFSCVOI156408A4SU26, priority: 1
16:12:04,733 [DataServiceImpl:Submit] INFO MqttDataTransport:512 - Publishing message on topic: $EDC/amir-kura/CLIENT_QED0U1F74NLHA7M0Q5KI606QAU/DEPLOY-V2/REPLY/REQUEST_OTFGFHBKFSCVOI156408A4SU26 with QoS: 0
16:12:04,745 [pool-3-thread-2] INFO DataServiceImpl:444 - Stored message on topic :$EDC/#account-name/CLIENT_QED0U1F74NLHA7M0Q5KI606QAU/DEPLOY-V2/REPLY/REQUEST_OTFGFHBKFSCVOI156408A4SU26, priority: 1
Is there some other way to send the request payload?

A quick look at DeploymentPackageUninstallOptions reveals that you are sending job.id as a String instead of a Long.
Instead of
String reqId = "12345891011L";
payload.addMetric("job.id", reqId); // metric value is a String -> ClassCastException
the code should do
long reqId = 12345891011L;
payload.addMetric("job.id", reqId); // autoboxed to Long, as the handler expects
Or better, use KuraUninstallPayload, which already implements all of these methods with the correct types.
I don't know Amit's MQTT utility, but I think you can set the type of the variable in some way (or change his application to set the correct type and then send a pull request).
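To make the fix concrete, here is a minimal sketch of building the whole uninstall request payload with correctly typed metrics (a sketch assuming the org.eclipse.kura.message.KuraPayload API; the metric values are the ones from the question):

import org.eclipse.kura.message.KuraPayload;

public class UninstallRequestExample {
    public static KuraPayload buildUninstallPayload() {
        KuraPayload payload = new KuraPayload();
        payload.addMetric("dp.name", "hello_osgi"); // a String is fine here
        payload.addMetric("dp.version", "1.0.0");   // a String is fine here
        payload.addMetric("job.id", 12345891011L);  // long literal, autoboxed to Long
        return payload;
    }
}

Publishing this payload to the same $EDC/.../DEPLOY-V2/EXEC/uninstall topic should no longer trigger the ClassCastException.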

Related

port out of range:-1 for WebSocket API via WSO2 API Manager

I am trying to run a WebSocket service via a WSO2 API Manager (as an API gateway). I had a working proof of concept with the gateway running against a service on my laptop (the gateway is on a server, but I ran the service in Eclipse to test it). Now I am trying to get it working against a service running on another server. If I call the URL that is configured as the endpoint in the API definition in the gateway, then it works. If I run via the gateway, then it doesn't. The wso2carbon.log shows:
TID: [-1] [] [2019-07-02 16:00:55,260] ERROR {org.apache.synapse.core.axis2.Axis2Sender} - Unexpected error during sending message out {org.apache.synapse.core.axis2.Axis2Sender}
java.lang.IllegalArgumentException: port out of range:-1
at java.net.InetSocketAddress.checkPort(InetSocketAddress.java:143)
at java.net.InetSocketAddress.<init>(InetSocketAddress.java:224)
at io.netty.bootstrap.Bootstrap.connect(Bootstrap.java:97)
at org.wso2.carbon.websocket.transport.WebsocketConnectionFactory.cacheNewConnection(WebsocketConnectionFactory.java:169)
at org.wso2.carbon.websocket.transport.WebsocketConnectionFactory.getChannelHandler(WebsocketConnectionFactory.java:79)
at org.wso2.carbon.websocket.transport.WebsocketTransportSender.sendMessage(WebsocketTransportSender.java:106)
at org.apache.axis2.transport.base.AbstractTransportSender.invoke(AbstractTransportSender.java:112)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutOnlyAxisOperationClient.executeImpl(OutOnlyAxisOperation.java:297)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.apache.synapse.core.axis2.Axis2FlexibleMEPClient.send(Axis2FlexibleMEPClient.java:592)
at org.apache.synapse.core.axis2.Axis2Sender.sendOn(Axis2Sender.java:83)
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.send(Axis2SynapseEnvironment.java:548)
at org.apache.synapse.endpoints.AbstractEndpoint.send(AbstractEndpoint.java:382)
at org.apache.synapse.endpoints.AddressEndpoint.send(AddressEndpoint.java:65)
at org.apache.synapse.mediators.builtin.SendMediator.mediate(SendMediator.java:121)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:97)
at org.apache.synapse.mediators.AbstractListMediator.mediate(AbstractListMediator.java:59)
at org.apache.synapse.mediators.base.SequenceMediator.mediate(SequenceMediator.java:158)
at org.apache.synapse.core.axis2.Axis2SynapseEnvironment.injectMessage(Axis2SynapseEnvironment.java:1005)
at org.wso2.carbon.inbound.endpoint.protocol.websocket.InboundWebsocketSourceHandler.injectToSequence(InboundWebsocketSourceHandler.java:469)
at org.wso2.carbon.inbound.endpoint.protocol.websocket.InboundWebsocketSourceHandler.handleHandshake(InboundWebsocketSourceHandler.java:182)
at org.wso2.carbon.inbound.endpoint.protocol.websocket.InboundWebsocketSourceHandler.channelRead(InboundWebsocketSourceHandler.java:131)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at org.wso2.carbon.apimgt.gateway.handlers.WebsocketInboundHandler.channelRead(WebsocketInboundHandler.java:125)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:147)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:244)
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:147)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:110)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
at java.lang.Thread.run(Thread.java:748)
TID: [-1] [] [2019-07-02 16:00:55,267] WARN {org.apache.synapse.core.axis2.Axis2SynapseEnvironment} - Executing fault handler due to exception encountered {org.apache.synapse.core.axis2.Axis2SynapseEnvironment}
TID: [-1] [] [2019-07-02 16:00:55,267] WARN {org.apache.synapse.endpoints.EndpointContext} - Endpoint : AnonymousEndpoint with address ws://redacted.example.com/notifications/v1 will be marked SUSPENDED as it failed {org.apache.synapse.endpoints.EndpointContext}
TID: [-1] [] [2019-07-02 16:00:55,267] WARN {org.apache.synapse.endpoints.EndpointContext} - Suspending endpoint : AnonymousEndpoint with address ws://redacted.example.com/notifications/v1 - last suspend duration was : 30000ms and current suspend duration is : 30000ms - Next retry after : Tue Jul 02 16:01:25 EEST 2019 {org.apache.synapse.endpoints.EndpointContext}
TID: [-1] [] [2019-07-02 16:00:55,267] INFO {org.apache.synapse.mediators.builtin.LogMediator} - STATUS = Executing default 'fault' sequence, ERROR_CODE = 0, ERROR_MESSAGE = Unexpected error during sending message out {org.apache.synapse.mediators.builtin.LogMediator}
TID: [-1] [] [2019-07-02 16:00:55,345] INFO {org.apache.synapse.mediators.builtin.LogMediator} - STATUS = Executing default 'fault' sequence, ERROR_CODE = 303001, ERROR_MESSAGE = Currently , Address endpoint : [ Name : AnonymousEndpoint ] [ State : SUSPENDED ] {org.apache.synapse.mediators.builtin.LogMediator}
Running v2.1 of WSO2 API Manager (yes, we are actively planning an upgrade, but I need to get it working on the current version if possible). Unfortunately, I am having problems repeating my initial PoC against my machine too. I think it's something in the gateway (although I am not aware of having changed anything). However, my IT department has changed which firewall we have on our local machines in the meantime, so I can't rule that out...
When using the WSS endpoint, we observed similar errors and were able to get rid of them with the following approach.
Please include the following parameter in the SecureWebSocketInboundEndpoint.xml file which resides in the <APIM_HOME>/repository/deployment/server/synapse-configs/default/inbound-endpoints directory.
<parameter name="wss.ssl.protocols">TLSv1.1,TLSv1.2</parameter>
Also, remove the following parameters (wss.ssl.trust.store.file and wss.ssl.trust.store.pass) from the same SecureWebSocketInboundEndpoint.xml file if they exist:
<parameter name="wss.ssl.trust.store.file">repository/resources/security/client-truststore.jks</parameter>
<parameter name="wss.ssl.trust.store.pass">wso2carbon</parameter>
Then use the sample WebSocket client to try out the WSS API. Change the variable carbonKeyStoreLocation to point to <API-M_HOME>/repository/resources/security/wso2carbon.jks. Note that the port for the WSS API is 8099.
You can download the WSS client from the official WSO2 documentation, under the WSS Support section (in the second step). [1] Further, change the access token, WebSocket endpoint, and carbonKeyStoreLocation to your own values to try out the scenario.
[1] https://docs.wso2.com/display/AM260/Create+a+WebSocket+API
When using the WS endpoint, you do not need to configure carbonKeyStoreLocation; you only need to include the correct access token and the correct WS endpoint. You can get the WS client from the same documentation. [1]
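If you want to test the WS flow without the WSO2 sample, below is a minimal sketch of such a client using the standard javax.websocket (JSR-356) API. It needs a JSR-356 implementation such as Tyrus on the classpath, and the gateway host, port (9099 is typically the WS port, vs. 8099 for WSS), and access token are placeholders you must replace:

import java.net.URI;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import javax.websocket.ClientEndpointConfig;
import javax.websocket.ContainerProvider;
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.MessageHandler;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

public class WsClientSketch {
    public static void main(String[] args) throws Exception {
        final String accessToken = "YOUR_ACCESS_TOKEN"; // placeholder
        // Send the OAuth token during the handshake, as the gateway expects
        ClientEndpointConfig config = ClientEndpointConfig.Builder.create()
                .configurator(new ClientEndpointConfig.Configurator() {
                    @Override
                    public void beforeRequest(Map<String, List<String>> headers) {
                        headers.put("Authorization",
                                Collections.singletonList("Bearer " + accessToken));
                    }
                }).build();
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        container.connectToServer(new Endpoint() {
            @Override
            public void onOpen(Session session, EndpointConfig cfg) {
                // Print anything the backend pushes to us
                session.addMessageHandler(String.class,
                        (MessageHandler.Whole<String>) message -> System.out.println(message));
            }
        }, config, new URI("ws://gateway.example.com:9099/notifications/v1")); // placeholder
        Thread.sleep(10000); // keep the JVM alive long enough to receive messages
    }
}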

YARN container launch failed

I am unable to run queries on Hive. The query fails just after launching the MapReduce operation (MAP 0% REDUCE 0%). I found the following error in the NodeManager logs:
2017-03-16 11:53:03,581 ERROR [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Container launch failed for container_1489041811986_0005_01_000002 : java.lang.IllegalArgumentException: Does not contain a valid host:port authority: slave_1:60805
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:213)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.newProxy(ContainerManagementProtocolProxy.java:258)
at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy$ContainerManagementProtocolProxyData.<init>(ContainerManagementProtocolProxy.java:244)
at org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy.getProxy(ContainerManagementProtocolProxy.java:129)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl.getCMProxy(ContainerLauncherImpl.java:409)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:138)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I guess it is not able to map the hostname slave_1 to its IP.
Any help will be appreciated.
Thanks.
I got the same error and, after struggling for several days, solved it with the following steps:
1. Open the file /etc/hosts.
2. Since your error message is "Does not contain a valid host:port authority: slave_1:60805", there should be an entry for "slave_1" in /etc/hosts, for example "127.0.0.1 slave_1" or "127.0.1.1 slave_1".
3. Remove the "_" (or "-") character from that hostname and try again; in your example, you can change it to "slave1".
In my case, I removed the "-" character from the hostname and then it worked.
Hope that it works for you.
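For context on why the underscore matters: underscores are not legal in DNS hostnames, and Hadoop builds the container address via Java's URI parsing, which silently returns a null host for such names; that null host is what produces the "Does not contain a valid host:port authority" message. A small sketch demonstrating this (hostnames taken from the question):

import java.net.URI;

public class HostnameCheck {
    public static void main(String[] args) throws Exception {
        // "_" is not allowed in a DNS hostname, so URI.getHost() returns null
        System.out.println(new URI("http://slave_1:60805").getHost()); // prints: null
        System.out.println(new URI("http://slave1:60805").getHost());  // prints: slave1
    }
}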

Use Flume to stream webpage data to HDFS

I have a 3-node cluster using the latest Cloudera parcels for version 5.9. The OS is CentOS 6.7 on all three nodes. I am using Flume for the first time.
My purpose is to stream webpage data into HDFS. However, this webpage is a third-party website (a news site in my case), so I don't know which port to use to connect.
curl and telnet worked on port 80, hence I used it, but I am getting an error.
My Flume.conf is:
tier1.sources = http-source
tier1.channels = mem-channel-1
tier1.sinks = hdfs-sink
tier1.sources.http-source.type = http
tier1.sources.http-source.handler = org.apache.flume.source.http.JSONHandler
tier1.sources.http-source.bind = 132.247.1.32
tier1.sources.http-source.port = 80
tier1.sources.http-source.channels = mem-channel-1
tier1.channels.mem-channel-1.type = memory
tier1.sinks.hdfs-sink.type = hdfs
tier1.sinks.hdfs-sink.channel = mem-channel-1
tier1.sinks.hdfs-sink.hdfs.path = /flume/events/%y-%m-%d/%H%M/%S
# Other properties are specific to each type of
# source, channel, or sink. In this case, we
# specify the capacity of the memory channel.
tier1.channels.mem-channel-1.capacity = 100
Error
2016-12-19 16:45:00,353 WARN org.mortbay.log: failed SelectChannelConnector#132.247.1.32:80: java.net.BindException: Cannot assign requested address
2016-12-19 16:45:00,353 WARN org.mortbay.log: failed Server#36772002: java.net.BindException: Cannot assign requested address
2016-12-19 16:45:00,353 ERROR org.apache.flume.source.http.HTTPSource: Error while starting HTTPSource. Exception follows.
java.net.BindException: Cannot assign requested address
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at org.mortbay.jetty.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:315)
2016-12-19 16:45:00,364 ERROR org.apache.flume.lifecycle.LifecycleSupervisor: Unable to start EventDrivenSourceRunner: { source:org.apache.flume.source.http.HTTPSource{name:http-source,state:IDLE} } - Exception follows.
java.lang.RuntimeException: java.net.BindException: Cannot assign requested address
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.Server.doStart(Server.java:235)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.apache.flume.source.http.HTTPSource.start(HTTPSource.java:207)
at org.apache.flume.source.EventDrivenSourceRunner.start(EventDrivenSourceRunner.java:44)
at org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:251)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Try changing the source config as below:
httpagent.sources.http-source.port = 80
httpagent.sources.http-source.bind = localhost
httpagent.sources.http-source.url = 132.247.1.32
Note: If 132.247.1.32 doesn't work, try giving the hostname.
The HTTP source for Flume provides a way to use GET/POST requests to send data into a Flume agent. The HTTP source does not go out and fetch data from websites for you; it sets up an HTTP server, waits for GET/POST requests, and accepts the data sent to that endpoint.
My recommendation would be to create a custom source that fetches the webpage you require, as sketched below.
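A minimal sketch of such a custom pull source (the class and property names are hypothetical; it assumes the standard Flume SDK AbstractSource/PollableSource API), which fetches a page and hands its HTML to the channel:

import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

import org.apache.flume.Context;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.PollableSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.source.AbstractSource;

// Hypothetical custom source: polls a web page and emits its HTML as Flume events
public class WebPageSource extends AbstractSource implements Configurable, PollableSource {

    private String pageUrl;

    @Override
    public void configure(Context context) {
        // e.g. tier1.sources.http-source.pageUrl = http://example.com/news
        pageUrl = context.getString("pageUrl", "http://example.com");
    }

    @Override
    public Status process() throws EventDeliveryException {
        try (Scanner scanner = new Scanner(new URL(pageUrl).openStream(),
                StandardCharsets.UTF_8.name())) {
            // Read the whole response body and forward it to the channel
            String body = scanner.useDelimiter("\\A").next();
            getChannelProcessor().processEvent(EventBuilder.withBody(body, StandardCharsets.UTF_8));
            return Status.READY;
        } catch (Exception e) {
            return Status.BACKOFF; // back off and retry on network errors
        }
    }

    // Required by PollableSource in Flume 1.7+; harmless extras on 1.6
    public long getBackOffSleepIncrement() { return 1000L; }
    public long getMaxBackOffSleepInterval() { return 5000L; }
}

You would then point the agent at it by setting tier1.sources.http-source.type to the fully qualified class name, after dropping the compiled JAR onto Flume's classpath.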

Getting ClusterBlockException while running queries using node client

My Elasticsearch cluster (version 2.0) is started and the node client is built successfully, but for some reason I'm getting the following error while running queries using the node client.
20:15:15.479 [Pool:entitytaskscheduler: Thread#1] DEBUG c.b.o.e.t.c.DataCollectorStatusUpdateTask - collectors updated due to agent reconnected:{}
ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];]
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:154)
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(ClusterBlocks.java:144)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.<init>(TransportSearchTypeAction.java:116)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.<init>(TransportSearchQueryThenFetchAction.java:73)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.<init>(TransportSearchQueryThenFetchAction.java:67)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:64)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:53)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:99)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:44)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59)
at com.hidden.ppp.management.dc.DataCollectorPollStatusDAOESImpl.findDCIdsUpdatedInTime(DataCollectorPollStatusDAOESImpl.java:151)
at com.hidden.ppp.engine.taskexecutor.cptaskexecs.DataCollectorStatusUpdateTask.execute(DataCollectorStatusUpdateTask.java:199)
at com.hidden.ppp.engine.taskexecutor.cptaskexecs.DataCollectorStatusUpdateTaskRunner.run(DataCollectorStatusUpdateTaskRunner.java:27)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
20:15:15.558 [Pool:entitytaskscheduler: Thread#1] WARN c.b.o.m.d.DataCollectorPollStatusDAOESImpl - blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
20:15:15.558 [Pool:entitytaskscheduler: Thread#1] DEBUG c.b.o.e.t.c.DataCollectorStatusUpdateTask - collectors for which polls updated after epoc time:1453128243336 - dcids: []
ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];]
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:154)
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(ClusterBlocks.java:144)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.<init>(TransportSearchTypeAction.java:116)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.<init>(TransportSearchQueryThenFetchAction.java:73)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.<init>(TransportSearchQueryThenFetchAction.java:67)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:64)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:53)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:99)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:44)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:85)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:59)
at com.hidden.ppp.management.dc.DataCollectorPollStatusDAOESImpl.findDCIdsNotUpdatedInTime(DataCollectorPollStatusDAOESImpl.java:182)
at com.hidden.ppp.engine.taskexecutor.cptaskexecs.DataCollectorStatusUpdateTask.execute(DataCollectorStatusUpdateTask.java:204)
at com.hidden.ppp.engine.taskexecutor.cptaskexecs.DataCollectorStatusUpdateTaskRunner.run(DataCollectorStatusUpdateTaskRunner.java:27)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
I've even disabled multicast as per this post; still no luck. Surprisingly, I can access Elasticsearch from Sense. Any clues on what is going wrong?
I faced the same error message and was not able to understand the problem at first. I was developing a node client Java application on my laptop, using an Elasticsearch data node on a remote server. For production use, I needed to deploy the Java application on this remote server.
I configured the Java application to talk to the local host only (being on the same host now):
elasticsearch.discovery.zen.ping.unicast.hosts=127.0.0.1
And got the same exception:
ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];]
Looking at the logs I also found this entry:
[WARN] [TP-Processor2] DiscoveryService.waitForInitialState -> [cerbera] waited for 30s and no initial state was set by the discovery
So basically, the question was: Why doesn't it find the Elasticsearch data node? I changed port ranges and also played with the multicast setting - without success.
Finally, I checked elasticsearch.yml and found that the data node was not listening on localhost (127.0.0.1) but on the Ethernet interface 192.168.1.2:
network.host: 192.168.1.2
http.port: 9200
The final change was simple: I just needed to reconfigure the node client to talk to the correct interface:
elasticsearch.discovery.zen.ping.unicast.hosts=192.168.1.2
Now my node client is talking to Elasticsearch via the correct interface. Job done.
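If you set these properties programmatically instead of via a properties file, a minimal sketch for an ES 2.x node client would look like this (the cluster name is an assumption; the unicast host is the interface found above):

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

public class NodeClientSketch {
    public static void main(String[] args) {
        Settings settings = Settings.settingsBuilder()
                .put("cluster.name", "my-cluster")                      // assumption: your cluster name
                .put("discovery.zen.ping.unicast.hosts", "192.168.1.2") // interface from elasticsearch.yml
                .build();
        Node node = NodeBuilder.nodeBuilder()
                .settings(settings)
                .client(true)   // join the cluster as a client node: no data, no master
                .node();
        Client client = node.client();
        // ... run searches with client here ...
        node.close();
    }
}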
I had the same problem (using k8s). I finally replaced my Elastic image and the issue was solved: I moved from 6.5.4-debian-9-r41 to 6.8.16-debian-10-r5 (using Bitnami images).
I know this is not the best answer, but I really tried the suggested answers and nothing worked for me, so my recommendation is to update to a newer version (Docker makes that easy).

WSO2 DAS 3.0.0 with API Manager 1.9.0 not working

I am trying to use DAS 3.0.0 as a replacement for BAM with WSO2 API Manager 1.9.0/1.9.1, with Oracle for WSO2AM_STATS_DB.
I am following http://blog.rukspot.com/2015/09/publishing-apim-runtime-statistics-to.html
I can see data in DAS's carbon dashboard in Data Explorer tables ORG_WSO2_APIMGT_STATISTICS_REQUEST and ORG_WSO2_APIMGT_STATISTICS_RESPONSE.
But the data is not stored in Oracle, so I am not able to see statistics in the API Manager publisher. It keeps saying "Data publishing is enabled. Generate some traffic to see statistics."
I am getting following error in log:
[2015-12-08 13:00:00,022] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: APIM_STAT_script for tenant id: -1234
[2015-12-08 13:00:00,037] INFO {org.wso2.carbon.analytics.spark.core.AnalyticsTask} - Executing the schedule task for: Throttle_script for tenant id: -1234
Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: org/xerial/snappy/SnappyInputStream
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:274)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:66)
at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:60)
at org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:62)
at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1291)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:874)
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:815)
at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1426)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1418)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.lang.ClassNotFoundException: org.xerial.snappy.SnappyInputStream cannot be found by spark-core_2.10_1.4.1.wso2v1
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:501)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:421)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:412)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 15 more
Am I missing something?
Can anyone please help me to figure out this issue?
Thanks in advance.
Move all the libraries (JARs) into your project's /WEB-INF/lib; everything under /WEB-INF/lib will then be on the classpath.
Use the snappy-java JAR file and it will work as you want.
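As a quick sanity check that the JAR is actually reachable, here is a small sketch that loads the missing class and prints which JAR it came from (plain Java; note this checks the flat classpath only, not the OSGi bundle wiring inside DAS):

public class SnappyCheck {
    public static void main(String[] args) throws Exception {
        // Load the class the stack trace could not find and show its origin
        Class<?> clazz = Class.forName("org.xerial.snappy.SnappyInputStream");
        System.out.println("Loaded from: "
                + clazz.getProtectionDomain().getCodeSource().getLocation());
    }
}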
