Graylog2 - Startup fails: "Address already in use" - Elasticsearch

I am trying to install graylog2. I have installed OpenJDK 7, and I have also installed Elasticsearch and MongoDB using apt on Ubuntu 14.04.
I am new to both Graylog and Elasticsearch; I just want to do a trial installation and try them out. I did search for similar questions and tried their suggestions, but none of them worked in my case.
I followed the installation instructions on graylog.org, but when I try to start the graylog2 server I get the following error:
2015-02-12 03:19:36,216 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.IndexerClusterCheckerThread] periodical in [0s], polling every [30s].
2015-02-12 03:19:36,222 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.GarbageCollectionWarningThread] periodical, running forever.
2015-02-12 03:19:36,225 INFO : org.graylog2.periodical.IndexerClusterCheckerThread - Indexer not fully initialized yet. Skipping periodic cluster check.
2015-02-12 03:19:36,229 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ThroughputCounterManagerThread] periodical in [0s], polling every [1s].
2015-02-12 03:19:36,280 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.DeadLetterThread] periodical, running forever.
2015-02-12 03:19:36,295 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.ClusterHealthCheckThread] periodical in [0s], polling every [20s].
2015-02-12 03:19:36,299 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog2.periodical.InputCacheWorkerThread] periodical, running forever.
2015-02-12 03:19:36,334 DEBUG: org.graylog2.periodical.ClusterHealthCheckThread - No input running in cluster!
2015-02-12 03:19:36,368 DEBUG: org.graylog2.caches.DiskJournalCache - Committing output-cache (entries 0)
2015-02-12 03:19:36,383 DEBUG: org.graylog2.caches.DiskJournalCache - Committing input-cache (entries 0)
2015-02-12 03:19:36,885 ERROR: com.google.common.util.concurrent.ServiceManager - Service IndexerSetupService [FAILED] has failed in the STARTING state.
org.elasticsearch.transport.BindTransportException: Failed to bind to [9300]
at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:396)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
at org.elasticsearch.transport.TransportService.doStart(TransportService.java:90)
at org.elasticsearch.common.component.AbstractLifecycleComponent.start(AbstractLifecycleComponent.java:85)
at org.elasticsearch.node.internal.InternalNode.start(InternalNode.java:242)
at org.graylog2.initializers.IndexerSetupService.startUp(IndexerSetupService.java:101)
at com.google.common.util.concurrent.AbstractIdleService$2$1.run(AbstractIdleService.java:54)
at com.google.common.util.concurrent.Callables$3.run(Callables.java:95)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.common.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:9300
at org.elasticsearch.common.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at org.elasticsearch.transport.netty.NettyTransport$3.onPortNumber(NettyTransport.java:387)
at org.elasticsearch.common.transport.PortsRange.iterate(PortsRange.java:58)
at org.elasticsearch.transport.netty.NettyTransport.doStart(NettyTransport.java:383)
... 8 more
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:372)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:296)
at org.elasticsearch.common.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
Elasticsearch shows the following cluster status:
{
"cluster_name" : "graylog2",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0
}
These are the changes I made to elasticsearch.yml:
cluster.name: graylog2
network.bind_host: 127.0.0.1
network.host: 127.0.0.1
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1", MYSYS IP]
and to graylog2.conf:
is_master = true
password_secret = changed
root_password_sha2 = changed
elasticsearch_max_docs_per_index = 20000000
elasticsearch_shards = 1
elasticsearch_replicas = 0
elasticsearch_cluster_name = graylog2
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = IP_ARR:9300
mongodb_useauth = false
I tried killing the process listening on port 9300 and starting Graylog again, but then I got the following error:
2015-02-12 04:01:24,976 INFO : org.elasticsearch.transport - [graylog2-server] bound_address {inet[/127.0.0.1:9300]}, publish_address {inet[/127.0.0.1:9300]}
2015-02-12 04:01:25,227 INFO : org.elasticsearch.discovery - [graylog2-server] graylog2/LGkZJDz1SoeENKj6Rr0e8w
2015-02-12 04:01:25,252 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] processing [update local node]: execute
2015-02-12 04:01:25,253 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] cluster state updated, version [0], source [update local node]
2015-02-12 04:01:25,259 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] set local cluster state to version 0
2015-02-12 04:01:25,259 DEBUG: org.elasticsearch.cluster.service - [graylog2-server] processing [update local node]: done applying updated cluster_state (version: 0)
2015-02-12 04:01:25,325 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0x82f30fa7]], closing connection
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:127)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:574)
.......
2015-02-12 04:01:28,536 DEBUG: org.elasticsearch.action.admin.cluster.health - [graylog2-server] no known master node, scheduling a retry
2015-02-12 04:01:28,564 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] disconnected from [[graylog2-server][LGkZJDz1SoeENKj6Rr0e8w][ubuntu-greylog-9945][inet[/127.0.0.1:9300]]{client=true, data=false, master=false}]
2015-02-12 04:01:28,573 DEBUG: org.elasticsearch.discovery.zen - [graylog2-server] filtered ping responses: (filter_client[true], filter_data[false]) {none}
2015-02-12 04:01:28,590 WARN : org.elasticsearch.transport.netty - [graylog2-server] exception caught on transport layer [[id: 0xe27feaff]], closing connection
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:127)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.connect(NioClientSocketPipelineSink.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientSocketPipelineSink.eventSunk(NioClientSocketPipelineSink.java:70)
Can you please point out what I am doing wrong here and what I am missing?

If ES and graylog2 are running on the same server, try deleting (or commenting out) this line in elasticsearch.yml:
#transport.tcp.port: 9300
and adding (or uncommenting) this line in graylog2.conf (a minimal sketch of both files follows below):
elasticsearch_transport_tcp_port = 9350
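For reference, a minimal sketch of the relevant parts of both files after that change (the file locations are assumptions for a typical apt install, and 9350 is just the example port from the suggestion above):
# /etc/elasticsearch/elasticsearch.yml - the standalone Elasticsearch server keeps its default transport port 9300
cluster.name: graylog2
network.host: 127.0.0.1
# transport.tcp.port: 9300    (left commented out so ES binds its default)
# graylog2.conf - Graylog2's embedded Elasticsearch client node binds a different port, so it no longer collides with the server on 9300
elasticsearch_cluster_name = graylog2
elasticsearch_transport_tcp_port = 9350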

Related

Websocket connection error with HiveMQ 2.1.0 + Paho javascript mqttws31.js

Hi, I am using HiveMQ (on Windows) too and have an issue!
The websocket is open and I receive some information:
2017-01-19 11:05:27,065 INFO - Starting HiveMQ Server
2017-01-19 11:05:27,070 INFO - HiveMQ version: 3.2.1
2017-01-19 11:05:27,074 INFO - HiveMQ home directory: C:\hivemq-3.2.1
2017-01-19 11:05:27,115 INFO - Log Configuration was overridden by C:\hivemq-3.2.1\conf\logback.xml
2017-01-19 11:05:31,533 INFO - Loaded Plugin HiveMQ JMX Metrics Reporting Plugin - v3.0.0
2017-01-19 11:05:31,534 INFO - Loaded Plugin HiveMQ JVM Metrics Plugin - v3.1.0
2017-01-19 11:05:31,535 INFO - Loaded Plugin HiveMQ MQTT Message Log Plugin - v3.0.0
2017-01-19 11:05:31,551 INFO - JMX Metrics Reporting started.
2017-01-19 11:05:31,574 INFO - Starting TCP listener on address 127.0.0.1 and port 1883
2017-01-19 11:05:31,701 INFO - Starting Websocket listener on address 127.0.0.1 and port 9001
2017-01-19 11:05:31,705 INFO - Started TCP Listener on address 127.0.0.1 and on port 1883
2017-01-19 11:05:31,706 INFO - Started Websocket Listener on address 127.0.0.1 and on port 9001
2017-01-19 11:05:31,707 INFO - Started HiveMQ in 4637ms
2017-01-19 11:05:31,708 INFO - No valid license file found. Using evaluation license, restricted to 25 connections.
2017-01-19 11:05:46,058 INFO - Client mosq/#IL\R8\7_1OBQj3hs# connected
2017-01-19 11:05:46,138 INFO - Subscribe from client mosq/#IL\R8\7_1OBQj3hs# received: domoticz/in QoS: 0
2017-01-19 11:05:49,867 INFO - Client mosq/#IL\R8\7_1OBQj3hs# sent a message to topic "domoticz/out": "{
"Battery" : 100,
"RSSI" : 7,
"description" : "",
"dtype" : "Temp + Humidity",
"id" : "62721",
"idx" : 3,
"name" : "bureau",
"nvalue" : 0,
"stype" : "THGN122/123, THGN132, THGR122/228/238/268",
"svalue1" : "19.0",
"svalue2" : "34",
"svalue3" : "2",
"unit" : 1
}
" (QoS: 0, retained: false)
2017-01-19 11:06:05,761 INFO - Client mosq/#IL\R8\7_1OBQj3hs# sent a message to topic "domoticz/out": "{
"Battery" : 100,
"RSSI" : 7,
"description" : "",
"dtype" : "Temp + Humidity",`enter code here
`
but I always have this issue in the Chrome console:
WebSocket connection to 'ws://127.0.0.1:9001/' failed: Connection closed before receiving a handshake response
k._doConnect # mqttws31-min.js:36
k._disconnected # mqttws31-min.js:54
k._on_socket_error # mqttws31-min.js:51
(anonymous) # mqttws31-min.js:19e
I am not a specialist, please help.
According to the Eclipse Paho wiki:
The path portion of the URL specified on the MQTT connect should be "mqtt". For instance ws://m2m.eclipse.org:800/mqtt. "mqtt" should be the default, with the option for an alternative to be configured/specified.
However, the default path used by the Paho JavaScript client is "/ws", while HiveMQ uses the default configuration of "/mqtt" for the websocket path.
Possible solutions are:
Change the path in the client to "/mqtt":
client = new Paho.MQTT.Client("127.0.0.1", Number(9001), "/mqtt", "clientId");
Change the path in the HiveMQ config to "/ws":
<websocket-listener>
...
<path>/ws</path>
...
</websocket-listener>
Regards,
Florian, from the HiveMQ Team.

Apache Storm supervisor routinely shutting down worker

I made a topology in Apache Storm (0.9.6) with kafka-storm and ZooKeeper (3.4.6)
(3 ZooKeeper nodes and 3 supervisor nodes, running 3 topologies).
I then added 2 more Storm/ZooKeeper nodes and changed the topology.workers configuration from 3 to 5.
But since adding the 2 nodes, the Storm supervisor routinely shuts down workers. Checked with the iostat command, read and write throughput is under 1 MB.
The supervisor log shows entries like the following.
2016-10-19T15:07:38.904+0900 b.s.d.supervisor [INFO] Shutting down and clearing state for id ee13ada9-641e-463a-9be5-f3ed66fdb8f3. Current supervisor time: 1476857258. State: :timed-out, Heartbeat: #backtype.storm.daemon.common.WorkerHeartbeat{:time-secs 1476857226, :storm-id "top3-17-1476839721", :executors #{[36 36] [6 6] [11 11] [16 16] [21 21] [26 26] [31 31] [-1 -1] [1 1]}, :port 6701}
2016-10-19T15:07:38.905+0900 b.s.d.supervisor [INFO] Shutting down b278933f-f9c7-4189-b615-1d70c7988f17:ee13ada9-641e-463a-9be5-f3ed66fdb8f3
2016-10-19T15:07:38.907+0900 b.s.util [INFO] Error when trying to kill 9306. Process is probably already dead.
2016-10-19T15:07:44.948+0900 b.s.d.supervisor [INFO] Shutting down and clearing state for id d6df820a-7c29-4bff-a606-9e8e36fafab2. Current supervisor time: 1476857264. State: :disallowed, Heartbeat: #backtype.storm.daemon.common.WorkerHeartbeat{:time-secs 1476857264, :storm-id "top3-17-1476839721", :executors #{[-1 -1]}, :port 6701}
2016-10-19T15:07:44.949+0900 b.s.d.supervisor [INFO] Shutting down b278933f-f9c7-4189-b615-1d70c7988f17:d6df820a-7c29-4bff-a606-9e8e36fafab2
2016-10-19T15:07:45.954+0900 b.s.util [INFO] Error when trying to kill 11171. Process is probably already dead.
2016-10-19T15:07:45.954+0900 b.s.d.supervisor [INFO] Shut down b278933f-f9c7-4189-b615-1d70c7988f17:d6df820a-7c29-4bff-a606-9e8e36fafab2
And the zookeeper.out log shows entries like the following (the xxx IP address is another Storm/ZooKeeper node's address):
2016-09-20 02:31:06,031 [myid:5] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1007] - Closed socket connection for client /xxx.xxx.xxx.xxx:39426 which had sessionid 0x5574372bbf00004
2016-09-20 02:31:08,116 [myid:5] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x5574372bbf0000a, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)
I don't know why the workers shut down routinely. How can I fix it? Is something wrong here?
My ZooKeeper and Storm configuration is below.
zoo.cfg (same on all nodes):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/log/zkdata/1
clientPort=2181
server.1=storm01:2888:3888
server.2=storm02:2888:3888
server.3=storm03:2888:3888
server.4=storm04:2888:3888
server.5=storm05:2888:3888
autopurge.purgeInterval=1
storm.yaml:
storm.zookeeper.servers:
- "storm01"
- "storm02"
- "storm03"
- "storm04"
- "storm05"
storm.zookeeper.port: 2181
zookeeper.multiple.setup:
follower.port:2888
election.port:3888
nimbus.host: "storm01"
storm.supervisor.hosts:
- "storm01"
- "storm02"
- "storm03"
- "storm04"
- "storm05"
supervisor.slots.ports:
- 6700
- 6701
- 6702
- 6703
- 6704
storm.local.dir: /log/storm-data
worker.childopts: "-Xmx5120m -Djava.net.preferIPv4Stack=true"
topology.workers: 5
storm.log.dir: /log/storm-log

Nutch Elasticsearch Integration

I'm following this tutorial for setting up Nutch along with Elasticsearch. Whenever I try to index the data into ES, it returns an error. Following are the logs.
Command:
bin/nutch index elasticsearch -all
Logs when I set elastic.port (9200) in conf/nutch-site.xml:
2016-05-05 13:22:49,903 INFO basic.BasicIndexingFilter - Maximum title length for indexing set to: 100
2016-05-05 13:22:49,904 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.basic.BasicIndexingFilter
2016-05-05 13:22:49,904 INFO anchor.AnchorIndexingFilter - Anchor deduplication is: off
2016-05-05 13:22:49,904 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.anchor.AnchorIndexingFilter
2016-05-05 13:22:49,905 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.metadata.MetadataIndexer
2016-05-05 13:22:49,906 INFO indexer.IndexingFilters - Adding org.apache.nutch.indexer.more.MoreIndexingFilter
2016-05-05 13:22:49,961 INFO elastic.ElasticIndexWriter - Processing remaining requests [docs = 0, length = 0, total docs = 0]
2016-05-05 13:22:49,961 INFO elastic.ElasticIndexWriter - Processing to finalize last execute
2016-05-05 13:22:54,898 INFO client.transport - [Peggy Carter] failed to get node info for [#transport#-1][ubuntu][inet[localhost/127.0.0.1:9200]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9200]][cluster:monitor/nodes/info] request_id [1] timed out after [5000ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:366)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-05-05 13:22:55,682 INFO indexer.IndexWriters - Adding org.apache.nutch.indexwriter.elastic.ElasticIndexWriter
2016-05-05 13:22:55,683 INFO indexer.IndexingJob - Active IndexWriters :
ElasticIndexWriter
elastic.cluster : elastic prefix cluster
elastic.host : hostname
elastic.port : port (default 9300)
elastic.index : elastic index command
elastic.max.bulk.docs : elastic bulk index doc counts. (default 250)
elastic.max.bulk.size : elastic bulk index length. (default 2500500 ~2.5MB)
2016-05-05 13:22:55,711 INFO elasticsearch.plugins - [Adrian Toomes] loaded [], sites []
2016-05-05 13:23:00,763 INFO client.transport - [Adrian Toomes] failed to get node info for [#transport#-1][ubuntu][inet[localhost/127.0.0.1:9200]], disconnecting...
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[localhost/127.0.0.1:9200]][cluster:monitor/nodes/info] request_id [0] timed out after [5000ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:366)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-05-05 13:23:00,766 INFO indexer.IndexingJob - IndexingJob: done.
Logs when the default port 9300 is used:
2016-05-05 13:58:44,584 INFO elasticsearch.plugins - [Mentallo] loaded [], sites []
2016-05-05 13:58:44,673 WARN transport.netty - [Mentallo] Message not fully read (response) for [0] handler future(org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler$1#3c80f1dd), error [true], resetting
2016-05-05 13:58:44,674 INFO client.transport - [Mentallo] failed to get node info for [#transport#-1][ubuntu][inet[localhost/127.0.0.1:9300]], disconnecting...
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:173)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:125)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.StreamCorruptedException: Unsupported version: 1
at org.elasticsearch.common.io.ThrowableObjectInputStream.readStreamHeader(ThrowableObjectInputStream.java:46)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:301)
at org.elasticsearch.common.io.ThrowableObjectInputStream.<init>(ThrowableObjectInputStream.java:38)
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:170)
... 23 more
2016-05-05 13:58:44,676 INFO indexer.IndexingJob - IndexingJob: done.
I've configured everything correctly as far as I can tell, and I've looked at various threads as well, but to no avail. The Java version is the same for both ES and the JVM running Nutch. Is there a bug here?
I'm using Nutch 2.3.1 and have tried with both ES 1.4.4 and 2.3.2. I can see data in Mongo, but I cannot index data in ES. Why?

Errors reading data from a 1 GB file in local cluster mode with Apache Storm

Hi, I'm using Storm in local cluster mode for development.
I ran a simple example that contains a spout and two bolts; the example counts words from a log file.
Code example URL:
http://kaviddiss.com/2013/05/17/how-to-get-started-with-storm-framework-in-5-minutes/
The code works perfectly with small log files (7.3 MB), but when I try to run a big log file (100 MB-1000 MB) I get exceptions.
I set a long delay before the cluster goes down.
Am I missing some configuration options here?
Exceptions:
11326 [Thread-6] INFO backtype.storm.daemon.supervisor - Launching worker with assignment #backtype.storm.daemon.supervisor.LocalAssignment{:storm-id "HelloStorm-1-1403522378", :executors ([3 3] [ 4 4] [2 2] [1 1])} for this supervisor 868aff95-7b63-44d1-ad55-2dd07d9c7ba2 on port 1024 with id df052251-45ec-4bc3-a486-c2bf11a8a0fa
11336 [Thread-6] INFO backtype.storm.daemon.worker - Launching worker for HelloStorm-1-1403522378 on 868aff95-7b63-44d1-ad55-2dd07d9c7ba2:1024 with id df052251-45ec-4bc3-a486-c2bf11a8a0fa and conf {"dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.secs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.ma x.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.kryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper. session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.interval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/local/li b:/opt/local/lib:/usr/lib", "topology.executor.send.buffer.size" 1024, "storm.local.dir" "/var/tmp//77d5cd63-9539-44a4-892a-9e91553987df", "storm.messaging.netty.buffer_size" 5242880, "supervisor.w orker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.sha red.thread.pool.size" 4, "nimbus.host" "localhost", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "storm.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_t hreads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.transfer.buffer.size" 1024, "topology.worker.childopts " nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "supervisor.monitor. frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topology.spout.w ait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "nimbus.thrift.max_buffer_size" 1048576, "topology.max.spout.pending" nil, "storm.zookeeper.retry.interval" 1000, "topology.sleep.spout. 
wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1024 1025 1026), "topology.debug" false, "nimbus.task.launch.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx256m", "nimbus.thrift.port" 6627, "topol ogy.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "topology.disruptor.wait.strategy" "com.lm ax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serialization.DefaultKryoFactory", "drpc.invoc ations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransportPlugin", "topology.state.synchroniz ation.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" "backtype.storm.messaging.netty.Context", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topolog y.optimize" true, "topology.max.task.parallelism" nil}
11337 [Thread-6] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
11344 [Thread-6-EventThread] INFO backtype.storm.zookeeper - Zookeeper state update: :connected:none
11358 [Thread-6] INFO com.netflix.curator.framework.imps.CuratorFrameworkImpl - Starting
11611 [Thread-6] INFO backtype.storm.daemon.executor - Loading executor line-reader-spout:[2 2]
11618 [Thread-6] INFO backtype.storm.daemon.executor - Loaded executor tasks line-reader-spout:[2 2]
11632 [Thread-16-line-reader-spout] INFO backtype.storm.daemon.executor - Opening spout line-reader-spout:(2)
Start Time: 18512885554479686
11634 [Thread-16-line-reader-spout] INFO backtype.storm.daemon.executor - Opened spout line-reader-spout:(2)
11636 [Thread-16-line-reader-spout] INFO backtype.storm.daemon.executor - Activating spout line-reader-spout:(2)
11638 [Thread-6] INFO backtype.storm.daemon.executor - Finished loading executor line-reader-spout:[2 2]
11677 [Thread-6] INFO backtype.storm.daemon.executor - Loading executor word-counter:[3 3]
11721 [Thread-6] INFO backtype.storm.daemon.executor - Loaded executor tasks word-counter:[3 3]
11725 [Thread-6] INFO backtype.storm.daemon.executor - Finished loading executor word-counter:[3 3]
11733 [Thread-6] INFO backtype.storm.daemon.executor - Loading executor word-spitter:[4 4]
11735 [Thread-6] INFO backtype.storm.daemon.executor - Loaded executor tasks word-spitter:[4 4]
11737 [Thread-6] INFO backtype.storm.daemon.executor - Finished loading executor word-spitter:[4 4]
11746 [Thread-6] INFO backtype.storm.daemon.executor - Loading executor __system:[-1 -1]
11747 [Thread-6] INFO backtype.storm.daemon.executor - Loaded executor tasks __system:[-1 -1]
11748 [Thread-6] INFO backtype.storm.daemon.executor - Finished loading executor __system:[-1 -1]
11761 [Thread-6] INFO backtype.storm.daemon.executor - Loading executor __acker:[1 1]
11765 [Thread-6] INFO backtype.storm.daemon.executor - Loaded executor tasks __acker:[1 1]
11767 [Thread-6] INFO backtype.storm.daemon.executor - Timeouts disabled for executor __acker:[1 1]
11768 [Thread-6] INFO backtype.storm.daemon.executor - Finished loading executor __acker:[1 1]
11768 [Thread-6] INFO backtype.storm.daemon.worker - Launching receive-thread for 868aff95-7b63-44d1-ad55-2dd07d9c7ba2:1024
11786 [Thread-6] INFO backtype.storm.daemon.worker - Worker has topology config {"storm.id" "HelloStorm-1-1403522378", "dev.zookeeper.path" "/tmp/dev-storm-zookeeper", "topology.tick.tuple.freq.se cs" nil, "topology.builtin.metrics.bucket.size.secs" 60, "topology.fall.back.on.java.serialization" true, "topology.max.error.report.per.interval" 5, "zmq.linger.millis" 0, "topology.skip.missing.k ryo.registrations" true, "storm.messaging.netty.client_worker_threads" 1, "ui.childopts" "-Xmx768m", "storm.zookeeper.session.timeout" 20000, "nimbus.reassign" true, "topology.trident.batch.emit.in terval.millis" 50, "nimbus.monitor.freq.secs" 10, "logviewer.childopts" "-Xmx128m", "java.library.path" "/usr/local/lib:/opt/local/lib:/usr/lib", "topology.executor.send.buffer.size" 1024, "storm.l ocal.dir" "/var/tmp//77d5cd63-9539-44a4-892a-9e91553987df", "storm.messaging.netty.buffer_size" 5242880, "supervisor.worker.start.timeout.secs" 120, "topology.enable.message.timeouts" true, "inputF ile" "test_log.log", "nimbus.cleanup.inbox.freq.secs" 600, "nimbus.inbox.jar.expiration.secs" 3600, "drpc.worker.threads" 64, "topology.worker.shared.thread.pool.size" 4, "nimbus.host" "localhost", "storm.messaging.netty.min_wait_ms" 100, "storm.zookeeper.port" 2000, "transactional.zookeeper.port" nil, "topology.executor.receive.buffer.size" 1024, "transactional.zookeeper.servers" nil, "stor m.zookeeper.root" "/storm", "storm.zookeeper.retry.intervalceiling.millis" 30000, "supervisor.enable" true, "storm.messaging.netty.server_worker_threads" 1, "storm.zookeeper.servers" ["localhost"], "transactional.zookeeper.root" "/transactional", "topology.acker.executors" nil, "topology.kryo.decorators" (), "topology.name" "HelloStorm", "topology.transfer.buffer.size" 1024, "topology.worker .childopts" nil, "drpc.queue.size" 128, "worker.childopts" "-Xmx768m", "supervisor.heartbeat.frequency.secs" 5, "topology.error.throttle.interval.secs" 10, "zmq.hwm" 0, "drpc.port" 3772, "superviso r.monitor.frequency.secs" 3, "drpc.childopts" "-Xmx768m", "topology.receiver.buffer.size" 8, "task.heartbeat.frequency.secs" 3, "topology.tasks" nil, "storm.messaging.netty.max_retries" 30, "topolo gy.spout.wait.strategy" "backtype.storm.spout.SleepSpoutWaitStrategy", "nimbus.thrift.max_buffer_size" 1048576, "topology.max.spout.pending" 1, "storm.zookeeper.retry.interval" 1000, "topology.slee p.spout.wait.strategy.time.ms" 1, "nimbus.topology.validator" "backtype.storm.nimbus.DefaultTopologyValidator", "supervisor.slots.ports" (1024 1025 1026), "topology.debug" false, "nimbus.task.launc h.secs" 120, "nimbus.supervisor.timeout.secs" 60, "topology.kryo.register" nil, "topology.message.timeout.secs" 30, "task.refresh.poll.secs" 10, "topology.workers" 1, "supervisor.childopts" "-Xmx25 6m", "nimbus.thrift.port" 6627, "topology.stats.sample.rate" 0.05, "worker.heartbeat.frequency.secs" 1, "topology.tuple.serializer" "backtype.storm.serialization.types.ListDelegateSerializer", "top ology.disruptor.wait.strategy" "com.lmax.disruptor.BlockingWaitStrategy", "nimbus.task.timeout.secs" 30, "storm.zookeeper.connection.timeout" 15000, "topology.kryo.factory" "backtype.storm.serializ ation.DefaultKryoFactory", "drpc.invocations.port" 3773, "logviewer.port" 8000, "zmq.threads" 1, "storm.zookeeper.retry.times" 5, "storm.thrift.transport" "backtype.storm.security.auth.SimpleTransp ortPlugin", "topology.state.synchronization.timeout.secs" 60, "supervisor.worker.timeout.secs" 30, "nimbus.file.copy.expiration.secs" 600, "storm.messaging.transport" 
"backtype.storm.messaging.nett y.Context", "logviewer.appender.name" "A1", "storm.messaging.netty.max_wait_ms" 1000, "drpc.request.timeout.secs" 600, "storm.local.mode.zmq" false, "ui.port" 8080, "nimbus.childopts" "-Xmx1024m", "storm.cluster.mode" "local", "topology.optimize" true, "topology.max.task.parallelism" nil}
11786 [Thread-6] INFO backtype.storm.daemon.worker - Worker df052251-45ec-4bc3-a486-c2bf11a8a0fa for storm HelloStorm-1-1403522378 on 868aff95-7b63-44d1-ad55-2dd07d9c7ba2:1024 has finished loading
11801 [Thread-18-word-counter] INFO backtype.storm.daemon.executor - Preparing bolt word-counter:(3)
11821 [Thread-18-word-counter] INFO backtype.storm.daemon.executor - Prepared bolt word-counter:(3)
11823 [Thread-20-word-spitter] INFO backtype.storm.daemon.executor - Preparing bolt word-spitter:(4)
11825 [Thread-20-word-spitter] INFO backtype.storm.daemon.executor - Prepared bolt word-spitter:(4)
11838 [Thread-24-__acker] INFO backtype.storm.daemon.executor - Preparing bolt __acker:(1)
11840 [Thread-22-__system] INFO backtype.storm.daemon.executor - Preparing bolt __system:(-1)
11854 [Thread-24-__acker] INFO backtype.storm.daemon.executor - Prepared bolt __acker:(1)
12173 [Thread-22-__system] INFO backtype.storm.daemon.executor - Prepared bolt __system:(-1)
112055 [main-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
112058 [main-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
112058 [Thread-6-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
112058 [Thread-6-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
121441 [main-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
121442 [main-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
121442 [main-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
121442 [main-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
121443 [main-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
121443 [main-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
121443 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
121444 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
134654 [main-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: SUSPENDED
134655 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
134655 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
134656 [main-EventThread] WARN com.netflix.curator.ConnectionState - Session expired event received
134656 [main-EventThread] WARN backtype.storm.cluster - Received event :disconnected::none: with disconnected Zookeeper.
134656 [main-EventThread] WARN com.netflix.curator.ConnectionState - Session expired event received
134657 [main-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: LOST
134657 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
134657 [main-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: LOST
139931 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
149745 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
149745 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
149746 [main-EventThread] WARN com.netflix.curator.ConnectionState - Session expired event received
149746 [main-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: LOST
149747 [main-EventThread] WARN backtype.storm.cluster - Received event :expired::none: with disconnected Zookeeper.
149747 [main-EventThread] WARN com.netflix.curator.ConnectionState - Session expired event received
149747 [main-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: LOST
149747 [main-EventThread] WARN backtype.storm.cluster - Received event :expired::none: with disconnected Zookeeper.
158929 [main-EventThread] WARN backtype.storm.cluster - Received event :expired::none: with disconnected Zookeeper.
158931 [main-EventThread] WARN backtype.storm.cluster - Received event :expired::none: with disconnected Zookeeper.
158931 [Thread-6-EventThread] WARN com.netflix.curator.ConnectionState - Session expired event received
158931 [Thread-6-EventThread] INFO com.netflix.curator.framework.state.ConnectionStateManager - State change: LOST
158931 [Thread-6-EventThread] WARN backtype.storm.cluster - Received event :expired::none: with disconnected Zookeeper.
158932 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
158933 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
176934 [ConnectionStateManager-0] WARN com.netflix.curator.framework.state.ConnectionStateManager - There are no ConnectionStateListeners registered.
357333 [CuratorFramework-5] ERROR com.netflix.curator.ConnectionState - Connection timed out
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at com.netflix.curator.ConnectionState.getZooKeeper(ConnectionState.java:72) ~[curator-client-1.0.1.jar:na]
at com.netflix.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:74) [curator-client-1.0.1.jar:na]
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.getZooKeeper(CuratorFrameworkImpl.java:353) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.BackgroundSyncImpl.performBackgroundOperation(BackgroundSyncImpl.java:39) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.OperationAndData.callPerformBackgroundOperation(OperationAndData.java:40) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:547) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.CuratorFrameworkImpl.access$200(CuratorFrameworkImpl.java:50) [curator-framework-1.0.1.jar:na]
at com.netflix.curator.framework.imps.CuratorFrameworkImpl$2.call(CuratorFrameworkImpl.java:177) [curator-framework-1.0.1.jar:na]
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) [na:1.6.0_65]
at java.util.concurrent.FutureTask.run(FutureTask.java:138) [na:1.6.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895) [na:1.6.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918) [na:1.6.0_65]
at java.lang.Thread.run(Thread.java:680) [na:1.6.0_65]
[Update]
I got a new exception when running a 70 MB file:
622366 [CuratorFramework-9] ERROR com.netflix.curator.framework.imps.CuratorFrameworkImpl - Background exception was not retry-able or retry gave up
java.lang.OutOfMemoryError: GC overhead limit exceeded
The problem seems to be exactly as described: you've loaded more data into memory than your JVM can support. I assume this is happening in the spout. For very large files you'll need to break up the processing, either by splitting the files in advance or by streaming the file in instead of trying to load the whole thing into memory; a sketch of the streaming approach is below.
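As a minimal sketch of that streaming approach (assuming Storm 0.9.x and the "inputFile" config key that appears in the worker config dump above; the class and field names here are hypothetical, not taken from the linked example), the spout reads one line per nextTuple() call instead of loading the file up front:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Map;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

// Hypothetical spout that streams the log file one line at a time instead of
// reading it fully into memory, so heap usage stays flat regardless of file size.
public class StreamingLineReaderSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    private BufferedReader reader;

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        try {
            // "inputFile" mirrors the topology config key visible in the log above;
            // adjust to however your topology passes the path.
            reader = new BufferedReader(new FileReader((String) conf.get("inputFile")));
        } catch (IOException e) {
            throw new RuntimeException("Could not open input file", e);
        }
    }

    @Override
    public void nextTuple() {
        try {
            // Only one line is held in memory at a time.
            String line = reader.readLine();
            if (line != null) {
                collector.emit(new Values(line));
            }
        } catch (IOException e) {
            throw new RuntimeException("Error reading input file", e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("line"));
    }

    @Override
    public void close() {
        try {
            reader.close();
        } catch (IOException e) {
            // Ignore errors on shutdown.
        }
    }
}
The key difference from loading the whole file is that each nextTuple() call pulls only the next line, so the spout's memory footprint does not grow with the file size.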

Setting up a graylog2 server with Elasticsearch in a Vagrant machine

I'm trying to install the graylog2 server on my local dev machine and am encountering problems with the Elasticsearch setup.
My Elasticsearch is installed as a service on a Vagrant machine running on my dev machine, so Elasticsearch isn't listening on 127.0.0.1 but on 192.168.50.4 (the IP of the Vagrant machine). I have port 9200 forwarded from the Vagrant machine, but the graylog2 server seems to fail to find it and stops running with:
ERROR: Could not successfully connect to ElasticSearch. Check that
your cluster state is not RED and that ElasticSearch is running
properly.
Forwarding port 9300 from the Vagrant machine as well changed the error to:
Caused by: org.elasticsearch.common.netty.channel.ChannelException:
Failed to bind to: 0.0.0.0/0.0.0.0:9350
I tried this setting in the graylog conf file:
elasticsearch_network_host =192.168.50.4
but that only changes the error to an exception about failing to bind:
Caused by: org.elasticsearch.common.netty.channel.ChannelException:
Failed to bind to: /192.168.50.4:9350 at
org.elasticsearch.common.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
But that didn't help.
I'd be glad for any direction on what I am doing wrong (with either the Elasticsearch configuration, the Vagrant setup, or graylog2).
Thanks!
Update: following the advice in the answer below, I changed the following config:
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.50.4:9300
I now get this error:
2014-06-16 23:04:34,946 WARN : org.elasticsearch.transport.netty - [graylog2-server] Message not fully read (response) for [6] handler org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$4#67bd250a, error [true], resetting
2014-06-16 23:04:36,451 WARN : org.elasticsearch.discovery.zen.ping.unicast - [graylog2-server] failed to send ping to [[#zen_unicast_1#][inet[/192.168.50.4:9300]]]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.InvalidClassException: failed to read class descriptor
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1603)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
It looks like graylog2 still fails to connect to Elasticsearch correctly.
Details (update): graylog2-server-0.20.2, Elasticsearch 1.1.0 (I think; I can replace it if that's the problem), Java: OpenJDK 64-Bit, version "1.7.0_55".
More updates (thanks #sheena): when downgrading Elasticsearch to version 0.90.10 we got some progress, but it is still not working.
Here is the current log:
2014-06-17 13:27:16,394 INFO : org.graylog2.Main - Graylog2 0.20.2 starting up. (JRE: Oracle Corporation 1.7.0_55 on Linux 3.13.0-29-generic)
2014-06-17 13:27:16,475 INFO : org.graylog2.plugin.system.NodeId - Node ID: e7245f12-2e8b-4803-9e88-7529169b5a91
2014-06-17 13:27:16,670 INFO : org.graylog2.buffers.ProcessBuffer - Initialized ProcessBuffer with ring size <1024> and wait strategy <BlockingWaitStrategy>.
2014-06-17 13:27:16,692 INFO : org.graylog2.buffers.OutputBuffer - Initialized OutputBuffer with ring size <1024> and wait strategy <BlockingWaitStrategy>.
2014-06-17 13:27:16,964 DEBUG: com.ning.http.client.providers.netty.NettyAsyncHttpProvider - Number of application's worker threads is 8
2014-06-17 13:27:17,272 INFO : org.elasticsearch.node - [graylog2-server] version[0.90.10], pid[24419], build[0a5781f/2014-01-10T10:18:37Z]
2014-06-17 13:27:17,273 INFO : org.elasticsearch.node - [graylog2-server] initializing ...
2014-06-17 13:27:17,273 DEBUG: org.elasticsearch.node - [graylog2-server] using home [/home/alon/Downloads/graylog2-server-0.20.2], config [/home/alon/Downloads/graylog2-server-0.20.2/config], data [[/home/alon/Downloads/graylog2-server-0.20.2/data]], logs [/home/alon/Downloads/graylog2-server-0.20.2/logs], work [/home/alon/Downloads/graylog2-server-0.20.2/work], plugins [/home/alon/Downloads/graylog2-server-0.20.2/plugins]
2014-06-17 13:27:17,281 INFO : org.elasticsearch.plugins - [graylog2-server] loaded [], sites []
2014-06-17 13:27:17,320 DEBUG: org.elasticsearch.common.compress.lzf - using [UnsafeChunkDecoder] decoder
2014-06-17 13:27:18,655 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [generic], type [cached], keep_alive [30s]
2014-06-17 13:27:18,740 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [index], type [fixed], size [4], queue_size [200]
2014-06-17 13:27:18,744 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [bulk], type [fixed], size [4], queue_size [50]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [get], type [fixed], size [4], queue_size [1k]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [search], type [fixed], size [12], queue_size [1k]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [suggest], type [fixed], size [4], queue_size [1k]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [percolate], type [fixed], size [4], queue_size [1k]
2014-06-17 13:27:18,746 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
2014-06-17 13:27:18,747 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [flush], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,747 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [merge], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,747 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [refresh], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,748 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [warmer], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,748 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [snapshot], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,748 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [optimize], type [fixed], size [1], queue_size [null]
2014-06-17 13:27:18,768 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] using worker_count[8], port[9350], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/3/6/1/1], receive_predictor[512kb->512kb]
2014-06-17 13:27:18,784 DEBUG: org.elasticsearch.discovery.zen.ping.unicast - [graylog2-server] using initial hosts [192.168.50.4:9300], with concurrent_connects [10]
2014-06-17 13:27:18,787 DEBUG: org.elasticsearch.discovery.zen - [graylog2-server] using ping.timeout [3s], master_election.filter_client [true], master_election.filter_data [false]
2014-06-17 13:27:18,788 DEBUG: org.elasticsearch.discovery.zen.elect - [graylog2-server] using minimum_master_nodes [-1]
2014-06-17 13:27:18,790 DEBUG: org.elasticsearch.discovery.zen.fd - [graylog2-server] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
2014-06-17 13:27:18,801 DEBUG: org.elasticsearch.discovery.zen.fd - [graylog2-server] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
2014-06-17 13:27:18,845 DEBUG: org.elasticsearch.monitor.jvm - [graylog2-server] enabled [true], last_gc_enabled [false], interval [1s], gc_threshold [{old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}}]
2014-06-17 13:27:18,846 DEBUG: org.elasticsearch.monitor.os - [graylog2-server] Using probe [org.elasticsearch.monitor.os.JmxOsProbe#7b01e044] with refresh_interval [1s]
2014-06-17 13:27:18,849 DEBUG: org.elasticsearch.monitor.process - [graylog2-server] Using probe [org.elasticsearch.monitor.process.JmxProcessProbe#3103c203] with refresh_interval [1s]
2014-06-17 13:27:18,854 DEBUG: org.elasticsearch.monitor.jvm - [graylog2-server] Using refresh_interval [1s]
2014-06-17 13:27:18,854 DEBUG: org.elasticsearch.monitor.network - [graylog2-server] Using probe [org.elasticsearch.monitor.network.JmxNetworkProbe#1cc7580f] with refresh_interval [5s]
2014-06-17 13:27:18,857 DEBUG: org.elasticsearch.monitor.network - [graylog2-server] net_info
host [stox-alonisser]
vboxnet0 display_name [vboxnet0]
address [/fe80:0:0:0:800:27ff:fe00:0%4] [/192.168.50.1]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
wlan0 display_name [wlan0]
address [/fe80:0:0:0:e8b:fdff:fe62:dc9d%3] [/192.168.20.107]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [65536] multicast [false] ptp [false] loopback [true] up [true] virtual [false]
2014-06-17 13:27:18,858 DEBUG: org.elasticsearch.monitor.fs - [graylog2-server] Using probe [org.elasticsearch.monitor.fs.JmxFsProbe#2c8807d7] with refresh_interval [1s]
2014-06-17 13:27:19,196 DEBUG: org.elasticsearch.indices.store - [graylog2-server] using indices.store.throttle.type [MERGE], with index.store.throttle.max_bytes_per_sec [20mb]
2014-06-17 13:27:19,204 DEBUG: org.elasticsearch.cache.memory - [graylog2-server] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
2014-06-17 13:27:19,220 DEBUG: org.elasticsearch.script - [graylog2-server] using script cache with max_size [500], expire [null]
2014-06-17 13:27:19,234 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
2014-06-17 13:27:19,235 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2014-06-17 13:27:19,236 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster_concurrent_rebalance] with [2]
2014-06-17 13:27:19,243 DEBUG: org.elasticsearch.gateway.local - [graylog2-server] using initial_shards [quorum], list_timeout [30s]
2014-06-17 13:27:19,424 DEBUG: org.elasticsearch.indices.recovery - [graylog2-server] using max_bytes_per_sec[20mb], concurrent_streams [3], file_chunk_size [512kb], translog_size [512kb], translog_ops [1000], and compress [true]
2014-06-17 13:27:19,486 DEBUG: org.elasticsearch.indices.memory - [graylog2-server] using index_buffer_size [265.4mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
2014-06-17 13:27:19,487 DEBUG: org.elasticsearch.indices.cache.filter - [graylog2-server] using [node] weighted filter cache with size [20%], actual_size [530.8mb], expire [null], clean_interval [1m]
2014-06-17 13:27:19,489 DEBUG: org.elasticsearch.indices.fielddata.cache - [graylog2-server] using size [-1] [-1b], expire [null]
2014-06-17 13:27:19,507 DEBUG: org.elasticsearch.gateway.local.state.meta - [graylog2-server] using gateway.local.auto_import_dangled [YES], with gateway.local.dangling_timeout [2h]
2014-06-17 13:27:19,511 DEBUG: org.elasticsearch.bulk.udp - [graylog2-server] using enabled [false], host [null], port [9700-9800], bulk_actions [1000], bulk_size [5mb], flush_interval [5s], concurrent_requests [4]
2014-06-17 13:27:19,514 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
2014-06-17 13:27:19,514 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2014-06-17 13:27:19,515 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster_concurrent_rebalance] with [2]
2014-06-17 13:27:19,516 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
2014-06-17 13:27:19,516 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2014-06-17 13:27:19,516 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster_concurrent_rebalance] with [2]
2014-06-17 13:27:19,528 INFO : org.elasticsearch.node - [graylog2-server] initialized
2014-06-17 13:27:19,529 INFO : org.elasticsearch.node - [graylog2-server] starting ...
2014-06-17 13:27:19,552 DEBUG: org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Using select timeout of 500
2014-06-17 13:27:19,552 DEBUG: org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Epoll-bug workaround enabled = false
2014-06-17 13:27:19,618 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] Bound to address [/0:0:0:0:0:0:0:0:9350]
2014-06-17 13:27:19,622 INFO : org.elasticsearch.transport - [graylog2-server] bound_address {inet[/0:0:0:0:0:0:0:0:9350]}, publish_address {inet[/192.168.20.107:9350]}
2014-06-17 13:27:19,658 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] connected to node [[#zen_unicast_1#][inet[/192.168.50.4:9300]]]
2014-06-17 13:27:22,628 WARN : org.elasticsearch.discovery - [graylog2-server] waited for 3s and no initial state was set by the discovery
2014-06-17 13:27:22,628 INFO : org.elasticsearch.discovery - [graylog2-server] graylog2/vWsYLp5JQoOJMva0FZgRsA
2014-06-17 13:27:22,629 DEBUG: org.elasticsearch.gateway - [graylog2-server] can't wait on start for (possibly) reading state from gateway, will do it asynchronously
2014-06-17 13:27:22,629 INFO : org.elasticsearch.node - [graylog2-server] started
2014-06-17 13:27:22,642 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] disconnected from [[#zen_unicast_1#][inet[/192.168.50.4:9300]]]
2014-06-17 13:27:22,644 DEBUG: org.elasticsearch.discovery.zen - [graylog2-server] filtered ping responses: (filter_client[true], filter_data[false])
--> target [[Crimson Daffodil][vPHcWzoCQteDG19hofaayA][inet[/10.0.2.15:9300]]], master [[Crimson Daffodil][vPHcWzoCQteDG19hofaayA][inet[/10.0.2.15:9300]]]
2014-06-17 13:27:27,634 ERROR: org.graylog2.Main -
elasticsearch_network_host is not what you think it is. It refers to the Elasticsearch client within Graylog, not the Elasticsearch server you want to connect to. So Graylog is trying to listen on 192.168.50.4, which isn't a valid IP address on the Graylog system (your dev machine).
You most likely want to set these variables in the graylog2 config:
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.50.4:9300
This is where I got stuck, but that was because I had Elasticsearch 1.0 installed when I needed 0.90. I'll know more once my Puppet/Vagrant stack finishes re-provisioning. =)
EDIT: Mine is working now.
