Set up a graylog2 server with elasticsearch in a vagrant machine

I'm trying to install a graylog2 server on my local dev machine and I'm running into problems with the Elasticsearch setup.
Elasticsearch is installed as a service on a Vagrant machine running on my dev machine, so it isn't listening on 127.0.0.1 but on 192.168.50.4 (the IP of the Vagrant machine). I have port 9200 forwarded from the Vagrant machine, but the graylog2 server seems unable to find it and stops running with:
ERROR: Could not successfully connect to ElasticSearch. Check that
your cluster state is not RED and that ElasticSearch is running
properly.
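For context, here is a minimal sketch of the Vagrant networking described so far (the box name is an assumption; the IP and forwarded port are the ones from the question):
# Vagrantfile (sketch)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64" # assumed box
  # the VM is reachable from the host at this private-network IP
  config.vm.network "private_network", ip: "192.168.50.4"
  # Elasticsearch's HTTP port, forwarded to the host
  config.vm.network "forwarded_port", guest: 9200, host: 9200
end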
Forwarding port 9300 from the Vagrant machine as well changed the error to:
Caused by: org.elasticsearch.common.netty.channel.ChannelException:
Failed to bind to: 0.0.0.0/0.0.0.0:9350
I tried this setting in the graylog conf file:
elasticsearch_network_host = 192.168.50.4
but that only changed the error to an exception about failing to bind:
Caused by: org.elasticsearch.common.netty.channel.ChannelException:
Failed to bind to: /192.168.50.4:9350 at
org.elasticsearch.common.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
So that didn't help either.
I'd be glad for any direction on what I'm doing wrong (whether in the Elasticsearch configuration, Vagrant, or graylog2).
Thanks!
Update: following the advice in the answer below, I changed the following config:
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.50.4:9300
I now get this error:
2014-06-16 23:04:34,946 WARN : org.elasticsearch.transport.netty - [graylog2-server] Message not fully read (response) for [6] handler org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$4@67bd250a, error [true], resetting
2014-06-16 23:04:36,451 WARN : org.elasticsearch.discovery.zen.ping.unicast - [graylog2-server] failed to send ping to [[#zen_unicast_1#][inet[/192.168.50.4:9300]]]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.InvalidClassException: failed to read class descriptor
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1603)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
It looks like graylog2 still fails to connect to Elasticsearch correctly.
Details (update): graylog2-server-0.20.2, Elasticsearch 1.1.0 (I think; I can replace it if that's the problem), Java OpenJDK 64-Bit, version "1.7.0_55".
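A quick way to check which Elasticsearch version the VM is actually running, since the binary transport protocol is version-sensitive between client and server:
# the root endpoint reports the server version as JSON
curl http://192.168.50.4:9200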
More updates (thanks @sheena): after downgrading Elasticsearch to 0.90.10 we got some progress, but it's still not working.
Here is the current log:
2014-06-17 13:27:16,394 INFO : org.graylog2.Main - Graylog2 0.20.2 starting up. (JRE: Oracle Corporation 1.7.0_55 on Linux 3.13.0-29-generic)
2014-06-17 13:27:16,475 INFO : org.graylog2.plugin.system.NodeId - Node ID: e7245f12-2e8b-4803-9e88-7529169b5a91
2014-06-17 13:27:16,670 INFO : org.graylog2.buffers.ProcessBuffer - Initialized ProcessBuffer with ring size <1024> and wait strategy <BlockingWaitStrategy>.
2014-06-17 13:27:16,692 INFO : org.graylog2.buffers.OutputBuffer - Initialized OutputBuffer with ring size <1024> and wait strategy <BlockingWaitStrategy>.
2014-06-17 13:27:16,964 DEBUG: com.ning.http.client.providers.netty.NettyAsyncHttpProvider - Number of application's worker threads is 8
2014-06-17 13:27:17,272 INFO : org.elasticsearch.node - [graylog2-server] version[0.90.10], pid[24419], build[0a5781f/2014-01-10T10:18:37Z]
2014-06-17 13:27:17,273 INFO : org.elasticsearch.node - [graylog2-server] initializing ...
2014-06-17 13:27:17,273 DEBUG: org.elasticsearch.node - [graylog2-server] using home [/home/alon/Downloads/graylog2-server-0.20.2], config [/home/alon/Downloads/graylog2-server-0.20.2/config], data [[/home/alon/Downloads/graylog2-server-0.20.2/data]], logs [/home/alon/Downloads/graylog2-server-0.20.2/logs], work [/home/alon/Downloads/graylog2-server-0.20.2/work], plugins [/home/alon/Downloads/graylog2-server-0.20.2/plugins]
2014-06-17 13:27:17,281 INFO : org.elasticsearch.plugins - [graylog2-server] loaded [], sites []
2014-06-17 13:27:17,320 DEBUG: org.elasticsearch.common.compress.lzf - using [UnsafeChunkDecoder] decoder
2014-06-17 13:27:18,655 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [generic], type [cached], keep_alive [30s]
2014-06-17 13:27:18,740 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [index], type [fixed], size [4], queue_size [200]
2014-06-17 13:27:18,744 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [bulk], type [fixed], size [4], queue_size [50]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [get], type [fixed], size [4], queue_size [1k]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [search], type [fixed], size [12], queue_size [1k]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [suggest], type [fixed], size [4], queue_size [1k]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [percolate], type [fixed], size [4], queue_size [1k]
2014-06-17 13:27:18,746 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
2014-06-17 13:27:18,747 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [flush], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,747 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [merge], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,747 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [refresh], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,748 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [warmer], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,748 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [snapshot], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,748 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [optimize], type [fixed], size [1], queue_size [null]
2014-06-17 13:27:18,768 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] using worker_count[8], port[9350], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/3/6/1/1], receive_predictor[512kb->512kb]
2014-06-17 13:27:18,784 DEBUG: org.elasticsearch.discovery.zen.ping.unicast - [graylog2-server] using initial hosts [192.168.50.4:9300], with concurrent_connects [10]
2014-06-17 13:27:18,787 DEBUG: org.elasticsearch.discovery.zen - [graylog2-server] using ping.timeout [3s], master_election.filter_client [true], master_election.filter_data [false]
2014-06-17 13:27:18,788 DEBUG: org.elasticsearch.discovery.zen.elect - [graylog2-server] using minimum_master_nodes [-1]
2014-06-17 13:27:18,790 DEBUG: org.elasticsearch.discovery.zen.fd - [graylog2-server] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
2014-06-17 13:27:18,801 DEBUG: org.elasticsearch.discovery.zen.fd - [graylog2-server] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
2014-06-17 13:27:18,845 DEBUG: org.elasticsearch.monitor.jvm - [graylog2-server] enabled [true], last_gc_enabled [false], interval [1s], gc_threshold [{old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}}]
2014-06-17 13:27:18,846 DEBUG: org.elasticsearch.monitor.os - [graylog2-server] Using probe [org.elasticsearch.monitor.os.JmxOsProbe@7b01e044] with refresh_interval [1s]
2014-06-17 13:27:18,849 DEBUG: org.elasticsearch.monitor.process - [graylog2-server] Using probe [org.elasticsearch.monitor.process.JmxProcessProbe@3103c203] with refresh_interval [1s]
2014-06-17 13:27:18,854 DEBUG: org.elasticsearch.monitor.jvm - [graylog2-server] Using refresh_interval [1s]
2014-06-17 13:27:18,854 DEBUG: org.elasticsearch.monitor.network - [graylog2-server] Using probe [org.elasticsearch.monitor.network.JmxNetworkProbe@1cc7580f] with refresh_interval [5s]
2014-06-17 13:27:18,857 DEBUG: org.elasticsearch.monitor.network - [graylog2-server] net_info
host [stox-alonisser]
vboxnet0 display_name [vboxnet0]
address [/fe80:0:0:0:800:27ff:fe00:0%4] [/192.168.50.1]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
wlan0 display_name [wlan0]
address [/fe80:0:0:0:e8b:fdff:fe62:dc9d%3] [/192.168.20.107]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [65536] multicast [false] ptp [false] loopback [true] up [true] virtual [false]
2014-06-17 13:27:18,858 DEBUG: org.elasticsearch.monitor.fs - [graylog2-server] Using probe [org.elasticsearch.monitor.fs.JmxFsProbe@2c8807d7] with refresh_interval [1s]
2014-06-17 13:27:19,196 DEBUG: org.elasticsearch.indices.store - [graylog2-server] using indices.store.throttle.type [MERGE], with index.store.throttle.max_bytes_per_sec [20mb]
2014-06-17 13:27:19,204 DEBUG: org.elasticsearch.cache.memory - [graylog2-server] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
2014-06-17 13:27:19,220 DEBUG: org.elasticsearch.script - [graylog2-server] using script cache with max_size [500], expire [null]
2014-06-17 13:27:19,234 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
2014-06-17 13:27:19,235 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2014-06-17 13:27:19,236 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster_concurrent_rebalance] with [2]
2014-06-17 13:27:19,243 DEBUG: org.elasticsearch.gateway.local - [graylog2-server] using initial_shards [quorum], list_timeout [30s]
2014-06-17 13:27:19,424 DEBUG: org.elasticsearch.indices.recovery - [graylog2-server] using max_bytes_per_sec[20mb], concurrent_streams [3], file_chunk_size [512kb], translog_size [512kb], translog_ops [1000], and compress [true]
2014-06-17 13:27:19,486 DEBUG: org.elasticsearch.indices.memory - [graylog2-server] using index_buffer_size [265.4mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
2014-06-17 13:27:19,487 DEBUG: org.elasticsearch.indices.cache.filter - [graylog2-server] using [node] weighted filter cache with size [20%], actual_size [530.8mb], expire [null], clean_interval [1m]
2014-06-17 13:27:19,489 DEBUG: org.elasticsearch.indices.fielddata.cache - [graylog2-server] using size [-1] [-1b], expire [null]
2014-06-17 13:27:19,507 DEBUG: org.elasticsearch.gateway.local.state.meta - [graylog2-server] using gateway.local.auto_import_dangled [YES], with gateway.local.dangling_timeout [2h]
2014-06-17 13:27:19,511 DEBUG: org.elasticsearch.bulk.udp - [graylog2-server] using enabled [false], host [null], port [9700-9800], bulk_actions [1000], bulk_size [5mb], flush_interval [5s], concurrent_requests [4]
2014-06-17 13:27:19,514 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
2014-06-17 13:27:19,514 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2014-06-17 13:27:19,515 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster_concurrent_rebalance] with [2]
2014-06-17 13:27:19,516 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
2014-06-17 13:27:19,516 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2014-06-17 13:27:19,516 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster_concurrent_rebalance] with [2]
2014-06-17 13:27:19,528 INFO : org.elasticsearch.node - [graylog2-server] initialized
2014-06-17 13:27:19,529 INFO : org.elasticsearch.node - [graylog2-server] starting ...
2014-06-17 13:27:19,552 DEBUG: org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Using select timeout of 500
2014-06-17 13:27:19,552 DEBUG: org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Epoll-bug workaround enabled = false
2014-06-17 13:27:19,618 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] Bound to address [/0:0:0:0:0:0:0:0:9350]
2014-06-17 13:27:19,622 INFO : org.elasticsearch.transport - [graylog2-server] bound_address {inet[/0:0:0:0:0:0:0:0:9350]}, publish_address {inet[/192.168.20.107:9350]}
2014-06-17 13:27:19,658 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] connected to node [[#zen_unicast_1#][inet[/192.168.50.4:9300]]]
2014-06-17 13:27:22,628 WARN : org.elasticsearch.discovery - [graylog2-server] waited for 3s and no initial state was set by the discovery
2014-06-17 13:27:22,628 INFO : org.elasticsearch.discovery - [graylog2-server] graylog2/vWsYLp5JQoOJMva0FZgRsA
2014-06-17 13:27:22,629 DEBUG: org.elasticsearch.gateway - [graylog2-server] can't wait on start for (possibly) reading state from gateway, will do it asynchronously
2014-06-17 13:27:22,629 INFO : org.elasticsearch.node - [graylog2-server] started
2014-06-17 13:27:22,642 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] disconnected from [[#zen_unicast_1#][inet[/192.168.50.4:9300]]]
2014-06-17 13:27:22,644 DEBUG: org.elasticsearch.discovery.zen - [graylog2-server] filtered ping responses: (filter_client[true], filter_data[false])
--> target [[Crimson Daffodil][vPHcWzoCQteDG19hofaayA][inet[/10.0.2.15:9300]]], master [[Crimson Daffodil][vPHcWzoCQteDG19hofaayA][inet[/10.0.2.15:9300]]]
2014-06-17 13:27:27,634 ERROR: org.graylog2.Main -

elasticsearch_network_host is not what you think: it configures the Elasticsearch client within graylog, not the Elasticsearch server you want to connect to. So graylog is trying to listen on 192.168.50.4, which isn't a valid IP address on the graylog system (your dev machine).
You most likely want to set these variables in the graylog2 config:
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.50.4:9300
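On the Elasticsearch side it is also worth checking which address the server publishes: in the last log above, the master advertises inet[/10.0.2.15:9300], the Vagrant NAT address, which the host cannot reach. A sketch of the relevant elasticsearch.yml lines inside the VM (the values are assumptions for this setup):
# /etc/elasticsearch/elasticsearch.yml (sketch, values assumed)
cluster.name: graylog2 # must match elasticsearch_cluster_name in graylog2.conf
network.host: 192.168.50.4 # publish the private-network IP instead of the NAT one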
Here is where I got stuck, but that was because I had elasticsearch 1.0 installed when I needed 0.90. I'll know more once my puppet/vagrant stack finishes re-provisioning. =)
EDIT: Mine is working now.

Related

Problem in Flink UI on Mesos cluster with two slave nodes

I have four physical nodes with Docker installed on each of them. I configured Mesos, Flink, Zookeeper, Hadoop and Marathon in Docker on each one. I previously had three nodes (one slave and two masters) on which I ran Flink via Marathon, and its UI came up without any problems. After that, I changed the cluster to two masters and two slaves. I added the JSON file below to Marathon; it ran, but the Flink UI was not shown on either slave node. The error follows.
{
"id": "flink",
"cmd": "/home/flink-1.7.2/bin/mesos-appmaster.sh -Djobmanager.heap.mb=1024 -Djobmanager.rpc.port=6123 -Drest.port=8081 -Dmesos.resourcemanager.tasks.mem=1024 -Dtaskmanager.heap.mb=1024 -Dtaskmanager.numberOfTaskSlots=2 -Dparallelism.default=2 -Dmesos.resourcemanager.tasks.cpus=1",
"cpus": 1.0,
"mem": 1024,
"instances": 2
}
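For reference, a definition like this is usually submitted through Marathon's REST API; a sketch (the Marathon host and port are assumptions):
curl -X POST http://10.32.0.2:8080/v2/apps -H "Content-Type: application/json" -d @flink.json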
Error:
Service temporarily unavailable due to an ongoing leader election. Please refresh
I cleared the Zookeeper contents with these commands:
/home/zookeeper-3.4.14/bin/zkCleanup.sh /var/lib/zookeeper/data/ -n 10
rm -rf /var/lib/zookeeper/data/version-2
rm /var/lib/zookeeper/data/zookeeper_server.pid
Also, I ran this command and deleted the Flink contents in Zookeeper:
/home/zookeeper-3.4.14/bin/zkCli.sh
delete /flink/default/leader/....
But one of the Flink UIs still has the problem.
I have configured Flink high availability like this:
high-availability: zookeeper
high-availability.storageDir: hdfs:///flink/ha/
high-availability.zookeeper.quorum: 0.0.0.0:2181,10.32.0.3:2181,10.32.0.4:2181,10.32.0.5:2181
fs.hdfs.hadoopconf: /opt/hadoop/etc/hadoop
fs.hdfs.hdfssite: /opt/hadoop/etc/hadoop/hdfs-site.xml
recovery.zookeeper.path.mesos-workers: /mesos-workers
env.java.home: /opt/java
mesos.master: 10.32.0.2:5050,10.32.0.3:5050
Because I used a Mesos cluster, I did not change anything else in flink-conf.yaml.
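As an aside, a ZooKeeper quorum list normally names a concrete, routable address for every peer rather than 0.0.0.0; a sketch of the line above with that fixed (using 10.32.0.2 for the first entry is an assumption):
high-availability.zookeeper.quorum: 10.32.0.2:2181,10.32.0.3:2181,10.32.0.4:2181,10.32.0.5:2181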
This is the part of the slave log that has the error:
- Remote connection to [null] failed with java.net.ConnectException:
Connection refused: localhost/127.0.0.1:37797
2019-07-03 07:22:42,922 WARN akka.remote.ReliableDeliverySupervisor
- Association with remote system [akka.tcp://flink@localhost:37797] has failed, address is now gated for [50] ms.
Reason: [Association failed with [akka.tcp://flink@localhost:37797]]
Caused by: [Connection refused: localhost/127.0.0.1:37797]
2019-07-03 07:22:43,003 WARN akka.remote.transport.netty.NettyTransport
- Remote connection to [null] failed with java.net.ConnectException:
Connection refused: localhost/127.0.0.1:37797
2019-07-03 07:22:43,004 WARN akka.remote.ReliableDeliverySupervisor
- Association with remote system [akka.tcp://flink@localhost:37797]
has failed, address is now gated for [50] ms.
Reason: [Association failed with [akka.tcp://flink@localhost:37797]]
Caused by: [Connection refused: localhost/127.0.0.1:37797]
2019-07-03 07:22:43,072 WARN akka.remote.transport.netty.NettyTransport
- Remote connection to [null] failed with java.net.ConnectException:
Connection refused: localhost/127.0.0.1:37797
2019-07-03 07:22:43,073 WARN akka.remote.ReliableDeliverySupervisor
- Association with remote system [akka.tcp://flink@localhost:37797]
has failed, address is now gated for [50] ms.
Reason: [Association failed with [akka.tcp://flink@localhost:37797]]
Caused by: [Connection refused: localhost/127.0.0.1:37797]
2019-07-03 07:23:45,891 WARN
org.apache.flink.runtime.webmonitor.retriever.impl.RpcGatewayRetriever
- Error while retrieving the leader gateway. Retrying to connect to
akka.tcp://flink@localhost:37797/user/dispatcher.
This is the Zookeeper log for the node whose Flink UI has the error:
2019-07-03 09:43:33,425 [myid:] - INFO [main:QuorumPeerConfig#136] - Reading configuration from: /home/zookeeper-3.4.14/bin/../conf/zoo.cfg
2019-07-03 09:43:33,434 [myid:] - INFO [main:QuorumPeer$QuorumServer#185] - Resolved hostname: 0.0.0.0 to address: /0.0.0.0
2019-07-03 09:43:33,435 [myid:] - INFO [main:QuorumPeer$QuorumServer#185] - Resolved hostname: 10.32.0.3 to address: /10.32.0.3
2019-07-03 09:43:33,435 [myid:] - INFO [main:QuorumPeer$QuorumServer#185] - Resolved hostname: 10.32.0.2 to address: /10.32.0.2
2019-07-03 09:43:33,435 [myid:] - INFO [main:QuorumPeer$QuorumServer#185] - Resolved hostname: 10.32.0.5 to address: /10.32.0.5
2019-07-03 09:43:33,435 [myid:] - WARN [main:QuorumPeerConfig#354] - Non-optimial configuration, consider an odd number of servers.
2019-07-03 09:43:33,436 [myid:] - INFO [main:QuorumPeerConfig#398] - Defaulting to majority quorums
2019-07-03 09:43:33,438 [myid:3] - INFO [main:DatadirCleanupManager#78] - autopurge.snapRetainCount set to 3
2019-07-03 09:43:33,438 [myid:3] - INFO [main:DatadirCleanupManager#79] - autopurge.purgeInterval set to 0
2019-07-03 09:43:33,438 [myid:3] - INFO [main:DatadirCleanupManager#101] - Purge task is not scheduled.
2019-07-03 09:43:33,445 [myid:3] - INFO [main:QuorumPeerMain#130] - Starting quorum peer
2019-07-03 09:43:33,450 [myid:3] - INFO [main:ServerCnxnFactory#117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
2019-07-03 09:43:33,452 [myid:3] - INFO [main:NIOServerCnxnFactory#89] - binding to port 0.0.0.0/0.0.0.0:2181
2019-07-03 09:43:33,458 [myid:3] - INFO [main:QuorumPeer#1159] - tickTime set to 2000
2019-07-03 09:43:33,458 [myid:3] - INFO [main:QuorumPeer#1205] - initLimit set to 10
2019-07-03 09:43:33,458 [myid:3] - INFO [main:QuorumPeer#1179] - minSessionTimeout set to -1
2019-07-03 09:43:33,459 [myid:3] - INFO [main:QuorumPeer#1190] - maxSessionTimeout set to -1
2019-07-03 09:43:33,464 [myid:3] - INFO [main:QuorumPeer#1470] - QuorumPeer communication is not secured!
2019-07-03 09:43:33,464 [myid:3] - INFO [main:QuorumPeer#1499] - quorum.cnxn.threads.size set to 20
2019-07-03 09:43:33,465 [myid:3] - INFO [main:QuorumPeer#669] - currentEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2019-07-03 09:43:33,519 [myid:3] - INFO [main:QuorumPeer#684] - acceptedEpoch not found! Creating with a reasonable default of 0. This should only happen when you are upgrading your installation
2019-07-03 09:43:33,566 [myid:3] - INFO [ListenerThread:QuorumCnxManager$Listener#736] - My election bind port: /0.0.0.0:3888
2019-07-03 09:43:33,574 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:QuorumPeer#910] - LOOKING
2019-07-03 09:43:33,575 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:FastLeaderElection#813] - New election. My id = 3, proposed zxid=0x0
2019-07-03 09:43:33,581 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LOOKING (n.state), 1 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,581 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LEADING (n.state), 1 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,581 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 3 (n.sid), 0x0 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,582 [myid:3] - INFO [WorkerSender[myid=3]:QuorumCnxManager#347] - Have smaller server identifier, so dropping the connection: (4, 3)
2019-07-03 09:43:33,583 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LOOKING (n.state), 3 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,583 [myid:3] - INFO [WorkerSender[myid=3]:QuorumCnxManager#347] - Have smaller server identifier, so dropping the connection: (4, 3)
2019-07-03 09:43:33,583 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LEADING (n.state), 1 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,584 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LOOKING (n.state), 2 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,585 [myid:3] - INFO [/0.0.0.0:3888:QuorumCnxManager$Listener#743] - Received connection request /10.32.0.5:42182
2019-07-03 09:43:33,585 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), FOLLOWING (n.state), 2 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,585 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), FOLLOWING (n.state), 2 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,587 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), LOOKING (n.state), 4 (n.sid), 0x2 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,587 [myid:3] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker#1025] - Connection broken for id 4, my id = 3, error =
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:1010)
2019-07-03 09:43:33,589 [myid:3] - WARN [RecvWorker:4:QuorumCnxManager$RecvWorker#1028] - Interrupting SendWorker
2019-07-03 09:43:33,588 [myid:3] - INFO [/0.0.0.0:3888:QuorumCnxManager$Listener#743] - Received connection request /10.32.0.5:42184
2019-07-03 09:43:33,589 [myid:3] - WARN [SendWorker:4:QuorumCnxManager$SendWorker#941] - Interrupted while waiting for message on queue
java.lang.InterruptedException
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1094)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:74)
at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:929)
2019-07-03 09:43:33,589 [myid:3] - WARN [SendWorker:4:QuorumCnxManager$SendWorker#951] - Send worker leaving thread
2019-07-03 09:43:33,590 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), FOLLOWING (n.state), 4 (n.sid), 0x3 (n.peerEpoch) LOOKING (my state)
2019-07-03 09:43:33,590 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:QuorumPeer#980] - FOLLOWING
2019-07-03 09:43:33,591 [myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#595] - Notification: 1 (message format version), 1 (n.leader), 0x200000004 (n.zxid), 0x5 (n.round), FOLLOWING (n.state), 4 (n.sid), 0x3 (n.peerEpoch) FOLLOWING (my state)
2019-07-03 09:43:33,593 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Learner#86] - TCP NoDelay set to: true
2019-07-03 09:43:33,597 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:zookeeper.version=3.4.14-4c25d480e66aadd371de8bd2fd8da255ac140bcf, built on 03/06/2019 16:18 GMT
2019-07-03 09:43:33,597 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:host.name=629a802d822d
2019-07-03 09:43:33,597 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.version=1.8.0_191
2019-07-03 09:43:33,597 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.vendor=Oracle Corporation
2019-07-03 09:43:33,597 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.class.path=/home/zookeeper-3.4.14/bin/../zookeeper-server/target/classes:/home/zookeeper-3.4.14/bin/../build/classes:/home/zookeeper-3.4.14/bin/../zookeeper-server/target/lib/*.jar:/home/zookeeper-3.4.14/bin/../build/lib/*.jar:/home/zookeeper-3.4.14/bin/../lib/slf4j-log4j12-1.7.25.jar:/home/zookeeper-3.4.14/bin/../lib/slf4j-api-1.7.25.jar:/home/zookeeper-3.4.14/bin/../lib/netty-3.10.6.Final.jar:/home/zookeeper-3.4.14/bin/../lib/log4j-1.2.17.jar:/home/zookeeper-3.4.14/bin/../lib/jline-0.9.94.jar:/home/zookeeper-3.4.14/bin/../lib/audience-annotations-0.5.0.jar:/home/zookeeper-3.4.14/bin/../zookeeper-3.4.14.jar:/home/zookeeper-3.4.14/bin/../zookeeper-server/src/main/resources/lib/*.jar:/home/zookeeper-3.4.14/bin/../conf:
2019-07-03 09:43:33,598 [myid:3] - INFO
[QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.io.tmpdir=/tmp
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:java.compiler=<NA>
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:os.name=Linux
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:os.arch=amd64
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:os.version=4.18.0-21-generic
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:user.name=root
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:user.home=/root
2019-07-03 09:43:33,598 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Environment#100] - Server environment:user.dir=/
2019-07-03 09:43:33,599 [myid:3] - INFO
[QuorumPeer[myid=3]/0.0.0.0:2181:ZooKeeperServer#174] - Created server with tickTime 2000 minSessionTimeout 4000 maxSessionTimeout 40000 datadir /var/lib/zookeeper/data/version-2 snapdir /var/lib/zookeeper/data/version-2
2019-07-03 09:43:33,600 [myid:3] - INFO
[QuorumPeer[myid=3]/0.0.0.0:2181:Follower#65] - FOLLOWING - LEADER ELECTION TOOK - 25
2019-07-03 09:43:33,601 [myid:3] - INFO
[QuorumPeer[myid=3]/0.0.0.0:2181:QuorumPeer$QuorumServer#185] - Resolved hostname: 10.32.0.2 to address: /10.32.0.2
2019-07-03 09:43:33,637 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:Learner#336] - Getting a snapshot from leader 0x300000000
2019-07-03 09:43:33,644 [myid:3] - INFO [QuorumPeer[myid=3]/0.0.0.0:2181:FileTxnSnapLog#301] - Snapshotting: 0x300000000 to /var/lib/zookeeper/data/version-2/snapshot.300000000
2019-07-03 09:44:24,320 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] - Accepted socket connection from /150.20.11.157:55744
2019-07-03 09:44:24,324 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /150.20.11.157:55744
2019-07-03 09:44:24,327 [myid:3] - WARN
[QuorumPeer[myid=3]/0.0.0.0:2181:Follower#119] - Got zxid 0x300000001 expected 0x1
2019-07-03 09:44:24,327 [myid:3] - INFO [SyncThread:3:FileTxnLog#216] - Creating new log file: log.300000001
2019-07-03 09:44:24,384 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694] - Established session 0x300393be5860000 with negotiated timeout 10000 for client /150.20.11.157:55744
2019-07-03 09:44:24,892 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] - Accepted socket connection from /150.20.11.157:55746
2019-07-03 09:44:24,892 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /150.20.11.157:55746
2019-07-03 09:44:24,908 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694] - Established session 0x300393be5860001 with negotiated timeout 10000 for client /150.20.11.157:55746
2019-07-03 09:44:26,410 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] - Accepted socket connection from /150.20.11.157:55748
2019-07-03 09:44:26,411 [myid:3] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#903] - Connection request from old client /150.20.11.157:55748; will be dropped if server is in r-o mode
2019-07-03 09:44:26,411 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /150.20.11.157:55748
2019-07-03 09:44:26,422 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694] - Established session 0x300393be5860002 with negotiated timeout 10000 for client /150.20.11.157:55748
2019-07-03 09:45:41,553 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1056] - Closed socket connection for client /150.20.11.157:55746 which had sessionid 0x300393be5860001
2019-07-03 09:45:41,567 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1056] - Closed socket connection for client /150.20.11.157:55744 which had sessionid 0x300393be5860000
2019-07-03 09:45:41,597 [myid:3] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#376] - Unable to read additional data from client sessionid 0x300393be5860002, likely client has closed socket
2019-07-03 09:45:41,597 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1056] - Closed socket connection for client /150.20.11.157:55748 which had sessionid 0x300393be5860002
2019-07-03 09:46:20,896 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] - Accepted socket connection from /10.32.0.5:45998
2019-07-03 09:46:20,901 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /10.32.0.5:45998
2019-07-03 09:46:20,916 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694] - Established session 0x300393be5860003 with negotiated timeout 40000 for client /10.32.0.5:45998
2019-07-03 09:46:43,827 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] - Accepted socket connection from /150.20.11.157:55864
2019-07-03 09:46:43,830 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /150.20.11.157:55864
2019-07-03 09:46:43,856 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694] - Established session 0x300393be5860004 with negotiated timeout 10000 for client /150.20.11.157:55864
2019-07-03 09:46:44,336 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory#222] -
Accepted socket connection from /150.20.11.157:55866
2019-07-03 09:46:44,336 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer#949] - Client attempting to establish new session at /150.20.11.157:55866
2019-07-03 09:46:44,348 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer#694]
- Established session 0x300393be5860005 with negotiated timeout 10000 for client /150.20.11.157:55866
Would you please guide me on how to use both Mesos slaves to run the Flink platform?
Any help would be really appreciated.

Nutch Elasticsearch Integration

I'm following this tutorial for setting up Nutch along with Elasticsearch. Whenever I try to index the data into ES, it returns an error. Here are the logs.
Command:
bin/nutch index elasticsearch -all
Logs when I add elastic.port (9200) in conf/nutch-site.xml:
2016-05-05 13:22:49,903 INFO basic.BasicIndexingFilter - Maximum title length for indexing set to: 100
2016-05-05 13:22:49,904 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.basic.BasicIndexingFilter
2016-05-05 13:22:49,904 INFO anchor.AnchorIndexingFilter - Anchor deduplication is: off
2016-05-05 13:22:49,904 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.anchor.AnchorIndexingFilter
2016-05-05 13:22:49,905 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.metadata.MetadataIndexer
2016-05-05 13:22:49,906 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.more.MoreIndexingFilter
2016-05-05 13:22:49,961 INFO elastic.ElasticIndexWriter - Processing remaining requests [docs = 0, length = 0, total docs = 0]
2016-05-05 13:22:49,961 INFO elastic.ElasticIndexWriter - Processing to finalize last execute
2016-05-05 13:22:54,898 INFO client.transport - [Peggy Carter] failed to get node info for [#transport#-1][ubuntu][inet[localhost/127.0.0.1:9200]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9200]][cluster:monitor/nodes/info] request_id [1] timed out after [5000ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:366)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-05-05 13:22:55,682 INFO indexer.IndexWriters - Adding org.apache.nutch.indexwriter.elastic.ElasticIndexWriter
2016-05-05 13:22:55,683 INFO indexer.IndexingJob - Active IndexWriters :
ElasticIndexWriter
elastic.cluster : elastic prefix cluster
elastic.host : hostname
elastic.port : port (default 9300)
elastic.index : elastic index command
elastic.max.bulk.docs : elastic bulk index doc counts. (default 250)
elastic.max.bulk.size : elastic bulk index length. (default 2500500 ~2.5MB)
2016-05-05 13:22:55,711 INFO elasticsearch.plugins - [Adrian Toomes] loaded [], sites []
2016-05-05 13:23:00,763 INFO client.transport - [Adrian Toomes] failed to get node info for [#transport#-1][ubuntu][inet[localhost/127.0.0.1:9200]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9200]][cluster:monitor/nodes/info] request_id [0] timed out after [5000ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:366)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-05-05 13:23:00,766 INFO indexer.IndexingJob - IndexingJob: done.
Logs when the default port 9300 is used:
2016-05-05 13:58:44,584 INFO elasticsearch.plugins - [Mentallo] loaded [], sites []
2016-05-05 13:58:44,673 WARN transport.netty - [Mentallo] Message not fully read (response) for [0] handler future(org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler$1@3c80f1dd), error [true], resetting
2016-05-05 13:58:44,674 INFO client.transport - [Mentallo] failed to get node info for [#transport#-1][ubuntu][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:173)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:125)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.StreamCorruptedException: Unsupported version: 1
at org.elasticsearch.common.io.ThrowableObjectInputStream.readStreamHeader(ThrowableObjectInputStream.java:46)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:301)
at org.elasticsearch.common.io.ThrowableObjectInputStream.<init>(ThrowableObjectInputStream.java:38)
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:170)
... 23 more
2016-05-05 13:58:44,676 INFO indexer.IndexingJob - IndexingJob: done.
I've configured everything correctly as far as I can tell, and I've had a look at various threads as well, but to no avail. The Java version is also the same for both ES and Nutch. Is there a bug in here?
I'm using Nutch 2.3.1 and have tried with both ES 1.4.4 and 2.3.2. I can see data in Mongo, but I cannot index data into ES. Why?
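For reference, the transport client is configured in conf/nutch-site.xml; a sketch using the property names from the indexer's help output above (the values are assumptions):
<!-- conf/nutch-site.xml (sketch): the TransportClient speaks the binary
     transport protocol on port 9300, not the HTTP port 9200 -->
<property>
  <name>elastic.host</name>
  <value>localhost</value>
</property>
<property>
  <name>elastic.port</name>
  <value>9300</value>
</property>
<property>
  <name>elastic.cluster</name>
  <value>elasticsearch</value> <!-- must match cluster.name on the ES server -->
</property>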

Flume: Kafka channel and HDFS sink get "unable to deliver event" error

I want to try the new Flafka flow: use only a Kafka channel to transfer data to an HDFS sink. I first tried a Kafka channel with a logger sink, which is easier to monitor. My configuration file is:
# Name the components on this agent
a1.sinks = sink1
a1.channels = channel1
a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.channel1.brokerList = localhost:9093,localhost:9094
a1.channels.channel1.topic = par4
a1.channels.channel1.zookeeperConnect = localhost:2181
a1.channels.channel1.parseAsFlumeEvent = false
a1.channels.cnannel1.kafka.consumer.timeout.ms = 1000000
a1.sinks.sink1.channel = channel1
a1.sinks.sink1.type = logger
I set up Zookeeper and two brokers locally using the above port numbers, and I have a producer client continuously pushing messages to Kafka.
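For reference, the agent is started with the standard flume-ng runner; a sketch (the config file name matches the log below):
bin/flume-ng agent --conf conf --conf-file conf/example.conf --name a1 -Dflume.root.logger=INFO,console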
I got the following messages:
2015-07-02 20:22:37,619 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
2015-07-02 20:22:37,623 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:conf/example.conf
2015-07-02 20:22:37,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:sink1
2015-07-02 20:22:37,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:1017)] Processing:sink1
2015-07-02 20:22:37,629 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty(FlumeConfiguration.java:931)] Added sinks: sink1 Agent: a1
2015-07-02 20:22:37,633 (conf-file-poller-0) [WARN - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.validateSources(FlumeConfiguration.java:508)] Agent configuration for 'a1' has no sources.
2015-07-02 20:22:37,635 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:141)] Post-validation flume configuration contains configuration for agents: [a1]
2015-07-02 20:22:37,635 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:145)] Creating channels
2015-07-02 20:22:37,639 (conf-file-poller-0) [INFO - org.apache.flume.channel.DefaultChannelFactory.create(DefaultChannelFactory.java:42)] Creating instance of channel channel1 type org.apache.flume.channel.kafka.KafkaChannel
2015-07-02 20:22:37,650 (conf-file-poller-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.configure(KafkaChannel.java:168)] Group ID was not specified. Using flume as the group id.
2015-07-02 20:22:37,658 (conf-file-poller-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.configure(KafkaChannel.java:188)] {metadata.broker.list=localhost:9093,localhost:9094, request.required.acks=-1, group.id=flume, zookeeper.connect=localhost:2181, consumer.timeout.ms=100, auto.commit.enable=false}
2015-07-02 20:22:37,665 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.loadChannels(AbstractConfigurationProvider.java:200)] Created channel channel1
2015-07-02 20:22:37,666 (conf-file-poller-0) [INFO - org.apache.flume.sink.DefaultSinkFactory.create(DefaultSinkFactory.java:42)] Creating instance of sink: sink1, type: logger
2015-07-02 20:22:37,669 (conf-file-poller-0) [INFO - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:114)] Channel channel1 connected to [sink1]
2015-07-02 20:22:37,674 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{} sinkRunners:{sink1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@3362ba9e counterGroup:{ name:null counters:{} } }} channels:{channel1=org.apache.flume.channel.kafka.KafkaChannel{name: channel1}} }
2015-07-02 20:22:37,675 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:145)] Starting Channel channel1
2015-07-02 20:22:37,677 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.start(KafkaChannel.java:96)] Starting Kafka Channel: channel1
2015-07-02 20:22:37,885 (lifecycleSupervisor-1-0) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Verifying properties
2015-07-02 20:22:37,903 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property auto.commit.enable is not valid
2015-07-02 20:22:37,903 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property consumer.timeout.ms is not valid
2015-07-02 20:22:37,903 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property group.id is not valid
2015-07-02 20:22:37,904 (lifecycleSupervisor-1-0) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property metadata.broker.list is overridden to localhost:9093,localhost:9094
2015-07-02 20:22:37,904 (lifecycleSupervisor-1-0) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property request.required.acks is overridden to -1
2015-07-02 20:22:37,904 (lifecycleSupervisor-1-0) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property zookeeper.connect is not valid
2015-07-02 20:22:37,929 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.channel.kafka.KafkaChannel.start(KafkaChannel.java:99)] Topic = par4
2015-07-02 20:22:37,929 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register(MonitoredCounterGroup.java:120)] Monitored counter group for type: CHANNEL, name: channel1: Successfully registered new MBean.
2015-07-02 20:22:37,930 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start(MonitoredCounterGroup.java:96)] Component type: CHANNEL, name: channel1 started
2015-07-02 20:22:37,930 (conf-file-poller-0) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:173)] Starting Sink sink1
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Verifying properties
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property auto.commit.enable is overridden to false
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property consumer.timeout.ms is overridden to 100
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property group.id is overridden to flume
2015-07-02 20:22:37,939 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property metadata.broker.list is not valid
2015-07-02 20:22:37,940 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - kafka.utils.Logging$class.warn(Logging.scala:83)] Property request.required.acks is not valid
2015-07-02 20:22:37,942 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property zookeeper.connect is overridden to localhost:2181
2015-07-02 20:22:37,951 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] [flume_MACC02PHH5LG3QC-1435893757951-c4c69fb7], Connecting to zookeeper instance at localhost:2181
2015-07-02 20:22:37,952 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)] Unable to deliver event. Exception follows.
java.lang.IllegalStateException: close() called when transaction is OPEN - you must either commit or rollback first
at com.google.common.base.Preconditions.checkState(Preconditions.java:172)
at org.apache.flume.channel.BasicTransactionSemantics.close(BasicTransactionSemantics.java:179)
at org.apache.flume.sink.LoggerSink.process(LoggerSink.java:105)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
at java.lang.Thread.run(Thread.java:745)
^C2015-07-02 20:22:39,497 (agent-shutdown-hook) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.stop(LifecycleSupervisor.java:79)] Stopping lifecycle supervisor 12
2015-07-02 20:22:39,499 (agent-shutdown-hook) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Shutting down producer
2015-07-02 20:22:39,499 (agent-shutdown-hook) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Closing all sync producers
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:150)] Component type: CHANNEL, name: channel1 stopped
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:156)] Shutdown Metric for type: CHANNEL, name: channel1. channel.start.time == 1435893757930
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:162)] Shutdown Metric for type: CHANNEL, name: channel1. channel.stop.time == 1435893759501
2015-07-02 20:22:39,501 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.capacity == 0
2015-07-02 20:22:39,502 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.current.size == 0
2015-07-02 20:22:39,502 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.put.attempt == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.put.success == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.take.attempt == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.event.take.success == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.kafka.commit.time == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.kafka.event.get.time == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.kafka.event.send.time == 0
2015-07-02 20:22:39,504 (agent-shutdown-hook) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.stop(MonitoredCounterGroup.java:178)] Shutdown Metric for type: CHANNEL, name: channel1. channel.rollback.count == 0
2015-07-02 20:22:39,505 (agent-shutdown-hook) [INFO - org.apache.flume.channel.kafka.KafkaChannel.stop(KafkaChannel.java:123)] Kafka channel channel1 stopped. Metrics: CHANNEL:channel1{channel.event.put.attempt=0, channel.event.put.success=0, channel.kafka.event.get.time=0, channel.current.size=0, channel.event.take.attempt=0, channel.event.take.success=0, channel.kafka.event.send.time=0, channel.capacity=0, channel.kafka.commit.time=0, channel.rollback.count=0}
2015-07-02 20:22:39,505 (agent-shutdown-hook) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.stop(PollingPropertiesFileConfigurationProvider.java:83)] Configuration provider stopping
I don't understand why I get this "Unable to deliver event" error. (I also tried setting up an HDFS sink, which gives me the same error.)
I also don't understand why my consumer.timeout.ms setting did not take effect.
Looking for help, thanks!
Based on the answer from the community, this question can be solved by following the two JIRA issues below; a sketch of the channel configuration in question follows them.
https://issues.apache.org/jira/browse/FLUME-2734
https://issues.apache.org/jira/browse/FLUME-2735
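For reference, here is roughly what the channel configuration in question looks like (a sketch only: the agent name a1 is hypothetical, and the assumption is that on the patched Flume releases the kafka.-prefixed keys are passed through to the Kafka consumer rather than being overridden):
a1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.channel1.brokerList = localhost:9092
a1.channels.channel1.zookeeperConnect = localhost:2181
a1.channels.channel1.kafka.consumer.timeout.ms = 100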

Graylog2 - Startup fail. Address already in use

I am trying to install graylog2. I have installed OpenJDK 7, and I have also installed elasticsearch and mongodb using apt on Ubuntu 14.04.
I am new to both graylog and elasticsearch and just want to do a trial installation to try them out. I also searched similar questions and tried their suggestions, but none of them worked in my case.
I have followed the installation instructions on graylog.org, but when I try to start the graylog2 server I get the following error:
2015-02-12 03:19:36,216 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexerClusterCheckerThread] periodical in [0s], polling every [30s].
2015-02-12 03:19:36,222 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.GarbageCollectionWarningThread] periodical, running forever.
2015-02-12 03:19:36,225 INFO : org.graylog2.periodical.IndexerClusterCheckerThread - Indexer not fully initialized yet. Skipping periodic cluster check.
2015-02-12 03:19:36,229 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ThroughputCounterManagerThread] periodical in [0s], polling every [1s].
2015-02-12 03:19:36,280 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.DeadLetterThread] periodical, running forever.
2015-02-12 03:19:36,295 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ClusterHealthCheckThread] periodical in [0s], polling every [20s].
2015-02-12 03:19:36,299 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.InputCacheWorkerThread] periodical, running forever.
2015-02-12 03:19:36,334 DEBUG: org.graylog2.periodical.ClusterHealthCheckThread - No input running in cluster!
2015-02-12 03:19:36,368 DEBUG: org.graylog2.caches.DiskJournalCache - Committing output-cache (entries 0)
2015-02-12 03:19:36,383 DEBUG: org.graylog2.caches.DiskJournalCache - Committing input-cache (entries 0)
2015-02-12 03:19:36,885 ERROR: com.google.common.util.concurrent.ServiceManager - Service IndexerSetupService [FAILED] has failed in the STARTING state.
org.elasticsearch.transport.BindTransportException: Failed to bind to [9300]
at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:396)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:90)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
at org.elasticsearch.node.internal.InternalNode.start(InternalNode.java:242)
at org.graylog2.initializers.IndexerSetupService.startUp(IndexerSetupService.java:101)
at com.google.common.util.concurrent.AbstractIdleService$2$1.run(AbstractIdleService.java:54)
at com.google.common.util.concurrent.Callables$3.run(Callables.java:95)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.common.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:9300
at org.elasticsearch.common.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at org.elasticsearch.transport.netty.NettyTransport$3.onPortNumber(NettyTransport.java:387)
at org.elasticsearch.common.transport.PortsRange.iterate(PortsRange.java:58)
at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:383)
... 8 more
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
Elasticsearch shows the following status:
{
"cluster_name" : "graylog2",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0
}
The following are the changes I made to elasticsearch.yml
cluster.name: graylog2
network.bind_host: 127.0.0.1
network.host: 127.0.0.1
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1", MYSYS IP]
and graylog2.conf
is_master = true
password_secret = changed
root_password_sha2 = changed
elasticsearch_max_docs_per_index = 20000000
elasticsearch_shards = 1
elasticsearch_replicas = 0
elasticsearch_cluster_name = graylog2
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = IP_ARR:9300
mongodb_useauth = false
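Before starting graylog2 against this, one quick sanity check is to confirm that the standalone ES node is reachable and that its cluster name matches the config above; a minimal sketch in Python, assuming ES answers on 127.0.0.1:9200:
# Minimal sanity check (a sketch; assumes ES answers on 127.0.0.1:9200)
import json
from urllib.request import urlopen

health = json.load(urlopen("http://127.0.0.1:9200/_cluster/health"))
print(health["cluster_name"], health["status"])  # expect: graylog2 green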
I tried killing the process on port 9300 and starting graylog again, but I got the following error:
2015-02-12 04:01:24,976 INFO : org.elasticsearch.transport - [graylog2-server] bound_address {inet[/127.0.0.1:9300]}, publish_address {inet[/127.0.0.1:9300]}
2015-02-12 04:01:25,227 INFO : org.elasticsearch.discovery - [graylog2-server] graylog2/LGkZJDz1SoeENKj6Rr0e8w
2015-02-12 04:01:25,252 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] processing [update local node]: execute
2015-02-12 04:01:25,253 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] cluster state updated, version [0], source [update local node]
2015-02-12 04:01:25,259 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] set local cluster state to version 0
2015-02-12 04:01:25,259 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] processing [update local node]: done applying updated cluster_state (version: 0)
2015-02-12 04:01:25,325 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0x82f30fa7]], closing connection
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:127)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:574)
.......
2015-02-12 04:01:28,536 DEBUG: org.elasticsearch.action.admin.cluster.health - [graylog2-server] no known master node, scheduling a retry
2015-02-12 04:01:28,564 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] disconnected from [[graylog2-server][LGkZJDz1SoeENKj6Rr0e8w][ubuntu-greylog-9945][inet[/127.0.0.1:9300]]{client=true, data=false, master=false}]
2015-02-12 04:01:28,573 DEBUG: org.elasticsearch.discovery.zen - [graylog2-server] filtered ping responses: (filter_client[true], filter_data[false]) {none}
2015-02-12 04:01:28,590 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0xe27feaff]], closing connection
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:127)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
Can you please point out what I am doing wrong here and what I am missing?
If ES and graylog2 are running on the same server, try deleting (or commenting out) this in elasticsearch.yml:
#transport.tcp.port: 9300
and adding (or uncommenting) this in graylog2.conf:
elasticsearch_transport_tcp_port = 9350
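The underlying clash: graylog2-server runs an embedded Elasticsearch client node of its own (visible as "graylog2-server" in the logs above), and that node needs its own transport port. If the standalone ES already holds 9300 on the same host, the embedded node must bind somewhere else, which is what moving it to 9350 accomplishes. A quick way to survey the ports involved (a sketch in Python; assumes IPv4 localhost):
# Quick port survey for the clash described above (a sketch; assumes IPv4 localhost)
import socket

def in_use(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0  # 0 means something is listening

for port in (9200, 9300, 9350):
    print(port, "in use" if in_use(port) else "free")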

ElasticSearch 0.17.9 - 400 Bad Request on Windows Server 2008 R2 x64

I'm attempting to get ElasticSearch going. I downloaded ES 0.17.9 as well as elasticsearch-head. I'm using JDK 1.7.0.
I start the ES server with no errors, and using es-head I can successfully connect to it (http://localhost:9200). Cluster info, node info, and all that overview stuff work fine. However, when I go to "Any Request" and try to do a PUT, it fails with a "400 Bad Request". The request is to http://localhost:9200/ with path PAFRetail/indextypes/1 and data: {"test":"test"}
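For what it's worth, the same request can be reproduced outside es-head to rule out the plugin; a minimal sketch in Python, assuming the default local node:
# Reproducing the failing PUT outside es-head (a sketch; assumes the default local node)
import json
from urllib.request import Request, urlopen
from urllib.error import HTTPError

req = Request("http://localhost:9200/PAFRetail/indextypes/1",
              data=json.dumps({"test": "test"}).encode(),
              method="PUT")
try:
    print(urlopen(req).status)
except HTTPError as e:
    print(e.code, e.read())  # the server's 400 and its error body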
Cluster info/node info looks like this:
{
cluster_name: elasticsearch
nodes: {
uQT1ZhL_SnedSdGOvUL0mQ: {
name: Aardwolf
indices: {
size: 0b
size_in_bytes: 0
docs: {
num_docs: 0
}
cache: {
field_evictions: 0
field_size: 0b
field_size_in_bytes: 0
filter_count: 0
filter_evictions: 0
filter_size: 0b
filter_size_in_bytes: 0
}
merges: {
current: 0
total: 0
total_time: 0s
total_time_in_millis: 0
}
}
os: {
timestamp: 1319478468512
uptime: 271 hours, 18 minutes and 54 seconds
uptime_in_millis: 976734000
load_average: [ ]
cpu: {
sys: 1
user: 3
idle: 94
}
mem: {
free: 2.7gb
free_in_bytes: 2921869312
used: 5.2gb
used_in_bytes: 5662928896
free_percent: 47
used_percent: 52
actual_free: 3.7gb
actual_free_in_bytes: 4035424256
actual_used: 4.2gb
actual_used_in_bytes: 4549373952
}
swap: {
used: 5.3gb
used_in_bytes: 5776781312
free: 18.2gb
free_in_bytes: 19583340544
}
}
process: {
timestamp: 1319478468517
open_file_descriptors: -1
cpu: {
percent: 0
sys: 592 milliseconds
sys_in_millis: 592
user: 3 seconds and 213 milliseconds
user_in_millis: 3213
total: 3 seconds and 805 milliseconds
total_in_millis: 3805
}
mem: {
resident: 133.5mb
resident_in_bytes: 140062720
share: -1b
share_in_bytes: -1
total_virtual: 1.3gb
total_virtual_in_bytes: 1421873152
}
}
jvm: {
timestamp: 1319478468521
uptime: 11 minutes, 48 seconds and 84 milliseconds
uptime_in_millis: 708084
mem: {
heap_used: 35.7mb
heap_used_in_bytes: 37529328
heap_committed: 247.5mb
heap_committed_in_bytes: 259522560
non_heap_used: 21.2mb
non_heap_used_in_bytes: 22253880
non_heap_committed: 23.1mb
non_heap_committed_in_bytes: 24313856
}
threads: {
count: 31
peak_count: 33
}
gc: {
collection_count: 1
collection_time: 36 milliseconds
collection_time_in_millis: 36
collectors: {
ParNew: {
collection_count: 1
collection_time: 36 milliseconds
collection_time_in_millis: 36
}
ConcurrentMarkSweep: {
collection_count: 0
collection_time: 0 milliseconds
collection_time_in_millis: 0
}
}
}
}
network: {
tcp: {
active_opens: 31054
passive_opens: 2985
curr_estab: 70
in_segs: 3066040425
out_segs: 3067744188
retrans_segs: 7917
estab_resets: 2809
attempt_fails: 48
in_errs: 0
out_rsts: 3474
}
}
transport: {
server_open: 7
}
http: {
server_open: 1
}
}
}
}
I also set up debug logging and get this:
E:\ElasticSearch\elasticsearch-0.17.9\bin>"C:\Program Files\Java\jdk1.7.0\bin\java"
  -Xms256m -Xmx1g -Xss128k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
  -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1
  -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly
  -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des-foreground=yes
  -Des.path.home="E:\ElasticSearch\elasticsearch-0.17.9"
  -cp ";E:\ElasticSearch\elasticsearch-0.17.9/lib/*;E:\ElasticSearch\elasticsearch-0.17.9/lib/sigar/*"
  "org.elasticsearch.bootstrap.ElasticSearch"
[2011-10-24 10:36:00,812][INFO ][node ] [Aardwolf]
{elasticsearch/0.17.9}[46508]: initializing ...
[2011-10-24 10:36:00,820][INFO ][plugins ] [Aardwolf]
loaded [], sites []
[2011-10-24 10:36:01,643][DEBUG][cache.memory ] [Aardwolf]
using bytebuffer cache with small_buffer_size [1kb], large_buffer_size
[1mb], small_cache_size [10mb], large_cache_size [500mb],
direct [true]
[2011-10-24 10:36:01,657][DEBUG][threadpool ] [Aardwolf]
creating thread_pool [cached], type [cached], keep_alive [30s]
[2011-10-24 10:36:01,659][DEBUG][threadpool ] [Aardwolf]
creating thread_pool [index], type [cached], keep_alive [5m]
[2011-10-24 10:36:01,659][DEBUG][threadpool ] [Aardwolf]
creating thread_pool [search], type [cached], keep_alive [5m]
[2011-10-24 10:36:01,660][DEBUG][threadpool ] [Aardwolf]
creating thread_pool [percolate], type [cached], keep_alive [5m]
[2011-10-24 10:36:01,661][DEBUG][threadpool ] [Aardwolf]
creating thread_pool [management], type [scaling], min [1], size [20],
keep_alive [5m]
[2011-10-24 10:36:01,663][DEBUG][threadpool ] [Aardwolf]
creating thread_pool [merge], type [scaling], min [1], size [20],
keep_alive [5m]
[2011-10-24 10:36:01,664][DEBUG][threadpool ] [Aardwolf]
creating thread_pool [snapshot], type [scaling], min [1], size [40],
keep_alive [5m]
[2011-10-24 10:36:01,671][DEBUG][transport.netty ] [Aardwolf]
using worker_count[16], port[9300-9400], bind_host[null],
publish_host[null], compress[false], connect_timeout[30s],
connections_per_node[2/4/1]
[2011-10-24 10:36:01,682][DEBUG][discovery.zen.ping.multicast]
[Aardwolf] using group [224.2.2.4], with port [54328], ttl [3], and
address [null]
[2011-10-24 10:36:01,687][DEBUG][discovery.zen.ping.unicast]
[Aardwolf] using initial hosts [], with concurrent_connects [10]
[2011-10-24 10:36:01,688][DEBUG][discovery.zen ] [Aardwolf]
using ping.timeout [3s]
[2011-10-24 10:36:01,690][DEBUG][discovery.zen.fd ] [Aardwolf]
[master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2011-10-24 10:36:01,693][DEBUG][discovery.zen.fd ] [Aardwolf]
[node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
[2011-10-24 10:36:01,720][DEBUG][env ] [Aardwolf]
using node location [E:\ElasticSearch\elasticsearch-0.17.9\data
\elasticsearch\nodes\0], local_node_id [0]
[2011-10-24 10:36:01,908][DEBUG][cluster.routing.allocation]
[Aardwolf] using node_concurrent_recoveries [2],
node_initial_primaries_recoveries [4]
[2011-10-24 10:36:01,909][DEBUG][cluster.routing.allocation]
[Aardwolf] using [allow_rebalance] with [indices_all_active]
[2011-10-24 10:36:01,909][DEBUG][cluster.routing.allocation]
[Aardwolf] using [cluster_concurrent_rebalance] with [2]
[2011-10-24 10:36:01,912][DEBUG][gateway.local ] [Aardwolf]
using initial_shards [quorum], list_timeout [30s]
[2011-10-24 10:36:01,936][DEBUG][monitor.jvm ] [Aardwolf]
enabled [false], last_gc_enabled [false], interval [1s], gc_threshold
[5s]
[2011-10-24 10:36:02,461][DEBUG][monitor.os ] [Aardwolf]
Using probe [org.elasticsearch.monitor.os.SigarOsProbe#60e70884] with
refresh_interval [1s]
[2011-10-24 10:36:02,504][DEBUG][monitor.process ] [Aardwolf]
Using probe
[org.elasticsearch.monitor.process.SigarProcessProbe#49acd265] with
refresh_interval [1s]
[2011-10-24 10:36:02,507][DEBUG][monitor.jvm ] [Aardwolf]
Using refresh_interval [1s]
[2011-10-24 10:36:02,508][DEBUG][monitor.network ] [Aardwolf]
Using probe
[org.elasticsearch.monitor.network.SigarNetworkProbe#5c0273e1] with
refresh_interval [5s]
[2011-10-24 10:36:02,689][DEBUG][monitor.network ] [Aardwolf]
net_info
host [Apollo]
lo display_name [Software Loopback Interface 1]
address [/127.0.0.1] [/0:0:0:0:0:0:0:1]
mtu [-1] multicast [true] ptp [false] loopback [true]
up [true] virtual [false]
net0 display_name [WAN Miniport (PPTP)]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
net1 display_name [WAN Miniport (SSTP)]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
net2 display_name [WAN Miniport (L2TP)]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
eth0 display_name [WAN Miniport (Network Monitor)]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
eth1 display_name [WAN Miniport (IP)]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
eth2 display_name [WAN Miniport (IPv6)]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
ppp0 display_name [WAN Miniport (PPPOE)]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
ppp1 display_name [RAS Async Adapter]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
eth3 display_name [Intel(R) PRO/1000 PT Dual Port Network
Connection]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
eth4 display_name [Intel(R) PRO/1000 PT Dual Port Network
Connection #2]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
eth5 display_name [Intel(R) PRO/1000 EB Network Connection with I/O
Acceleration]
address [/10.44.0.16] [/fe80:0:0:0:5071:1913:9d81:8cee%12]
mtu [1500] multicast [true] ptp [false] loopback
[false] up [true] virtual [false]
eth6 display_name [Intel(R) PRO/1000 EB Network Connection with I/O
Acceleration #2]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
net3 display_name [Microsoft ISATAP Adapter]
address [/fe80:0:0:0:0:5efe:a2c:10%14]
mtu [1280] multicast [false] ptp [true] loopback
[false] up [false] virtual [false]
net4 display_name [Teredo Tunneling Pseudo-Interface]
address [/2001:0:4137:9e76:1c93:19dc:f5d3:ffef] [/fe80:0:0:0:1c93:19dc:f5d3:ffef%15]
mtu [1280] multicast [false] ptp [true] loopback
[false] up [true] virtual [false]
net5 display_name [WAN Miniport (IKEv2)]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
eth7 display_name [Intel(R) PRO/1000 EB Network Connection with I/O
Acceleration-QoS Packet Scheduler-0000]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
eth8 display_name [Intel(R) PRO/1000 EB Network Connection with I/O
Acceleration-WFP LightWeight Filter-0000]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
eth9 display_name [WAN Miniport (Network Monitor)-QoS Packet
Scheduler-0000]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
eth10 display_name [WAN Miniport (IP)-QoS Packet Scheduler-0000]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
eth11 display_name [WAN Miniport (IPv6)-QoS Packet Scheduler-0000]
address
mtu [-1] multicast [true] ptp [false] loopback [false]
up [false] virtual [false]
[2011-10-24 10:36:02,823][DEBUG][http.netty ] [Aardwolf]
using max_chunk_size[8kb], max_header_size[8kb],
max_initial_line_length[4kb], max_content_length[100mb]
[2011-10-24 10:36:02,828][DEBUG][index.shard.recovery ] [Aardwolf]
using concurrent_streams [5], file_chunk_size [100kb], translog_size
[100kb], translog_ops [1000], and compress [true]
[2011-10-24 10:36:02,831][DEBUG][indices.memory ] [Aardwolf]
using index_buffer_size [98.9mb], with min_shard_index_buffer_size
[4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
[2011-10-24 10:36:02,841][DEBUG][indices.cache.filter ] [Aardwolf]
using [node] filter cache with size [197.9mb]
[2011-10-24 10:36:02,892][INFO ][node ] [Aardwolf]
{elasticsearch/0.17.9}[46508]: initialized
[2011-10-24 10:36:02,893][INFO ][node ] [Aardwolf]
{elasticsearch/0.17.9}[46508]: starting ...
[2011-10-24 10:36:02,913][DEBUG]
[netty.channel.socket.nio.NioProviderMetadata] Using the autodetected
NIO constraint level: 0
[2011-10-24 10:36:02,963][DEBUG][transport.netty ] [Aardwolf]
Bound to address [/0:0:0:0:0:0:0:0:9300]
[2011-10-24 10:36:03,020][INFO ][transport ] [Aardwolf]
bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.44.0.16:9300]}
[2011-10-24 10:36:06,183][DEBUG][discovery.zen ] [Aardwolf]
ping responses: {none}
[2011-10-24 10:36:06,187][DEBUG][cluster.service ] [Aardwolf]
processing [zen-disco-join (elected_as_master)]: execute
[2011-10-24 10:36:06,188][DEBUG][cluster.service ] [Aardwolf]
cluster state updated, version [1], source [zen-disco-join
(elected_as_master)]
[2011-10-24 10:36:06,190][INFO ][cluster.service ] [Aardwolf]
new_master [Aardwolf][uQT1ZhL_SnedSdGOvUL0mQ][inet[/10.44.0.16:9300]],
reason: zen-disco-join (elected_as_master)
[2011-10-24 10:36:06,223][DEBUG][transport.netty ] [Aardwolf]
Connected to node [[Aardwolf][uQT1ZhL_SnedSdGOvUL0mQ][inet[/10.44.0.16:9300]]]
[2011-10-24 10:36:06,228][DEBUG][river.cluster ] [Aardwolf]
processing [reroute_rivers_node_changed]: execute
[2011-10-24 10:36:06,228][INFO ][discovery ] [Aardwolf]
elasticsearch/uQT1ZhL_SnedSdGOvUL0mQ
[2011-10-24 10:36:06,228][DEBUG][cluster.service ] [Aardwolf]
processing [zen-disco-join (elected_as_master)]: done applying updated
cluster_state
[2011-10-24 10:36:06,230][DEBUG][river.cluster ] [Aardwolf]
processing [reroute_rivers_node_changed]: no change in cluster_state
[2011-10-24 10:36:06,243][DEBUG][gateway.local ] [Aardwolf]
elected state from [[Aardwolf][uQT1ZhL_SnedSdGOvUL0mQ][inet[/10.44.0.16:9300]]]
[2011-10-24 10:36:06,244][DEBUG][cluster.service ] [Aardwolf]
processing [local-gateway-elected-state]: execute
[2011-10-24 10:36:06,247][DEBUG][cluster.service ] [Aardwolf]
cluster state updated, version [8], source [local-gateway-elected-state]
[2011-10-24 10:36:06,248][DEBUG][river.cluster ] [Aardwolf]
processing [reroute_rivers_node_changed]: execute
[2011-10-24 10:36:06,248][DEBUG][river.cluster ] [Aardwolf]
processing [reroute_rivers_node_changed]: no change in cluster_state
[2011-10-24 10:36:06,252][INFO ][gateway ] [Aardwolf]
recovered [0] indices into cluster_state
[2011-10-24 10:36:06,253][DEBUG][cluster.service ] [Aardwolf]
processing [local-gateway-elected-state]: done applying updated
cluster_state
[2011-10-24 10:36:06,311][INFO ][http ] [Aardwolf]
bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/10.44.0.16:9200]}
[2011-10-24 10:36:06,312][INFO ][node ] [Aardwolf]
{elasticsearch/0.17.9}[46508]: started
[2011-10-24 10:36:16,227][DEBUG][cluster.service ] [Aardwolf]
processing [routing-table-updater]: execute
[2011-10-24 10:36:16,228][DEBUG][cluster.service ] [Aardwolf]
processing [routing-table-updater]: no change in cluster_state
[2011-10-24 10:36:16,811][DEBUG][cluster.service ] [Aardwolf]
processing [create-index [PAFRetail], cause [auto(index api)]]:
execute
[2011-10-24 10:36:16,812][DEBUG][cluster.service ] [Aardwolf]
processing [create-index [PAFRetail], cause [auto(index api)]]: no
change in cluster_state
I notice there are no "action" logs, so it seems like it's dying earlier... any ideas?
Thanks!
-Tim
Figured it out - apparently ElasticSearch doesn't allow uppercase letters in index names. As soon as I changed "PAFRetail" to "pafretail", everything worked fine.
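To double-check, the same document goes in cleanly under a lowercase name (a sketch; assumes a local node on the default port):
# Same document indexed under a lowercase name (a sketch; assumes a local node on 9200)
import json
from urllib.request import Request, urlopen

req = Request("http://localhost:9200/pafretail/indextypes/1",
              data=json.dumps({"test": "test"}).encode(),
              method="PUT")
print(urlopen(req).read())  # the node acknowledges the indexed document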
Thanks for looking!
-Tim

Resources