Motorola Razr V3i modem detection error with Kannel

Please help me. I am getting this error when running the bearerbox:
[root@localhost sbin]# ./bearerbox -v 1 /usr/local/smskannel.conf
2012-04-30 11:56:28 [13417] [0] INFO: Debug_lvl = 1, log_file = <none>, log_lvl = 0
2012-04-30 11:56:28 [13417] [0] WARNING: DLR: using default 'internal' for storage type.
2012-04-30 11:56:28 [13417] [0] INFO: DLR using storage type: internal
2012-04-30 11:56:28 [13417] [0] INFO: HTTP: Opening server at port 13003.
2012-04-30 11:56:28 [13417] [0] INFO: BOXC: 'smsbox-max-pending' not set, using default (100).
2012-04-30 11:56:28 [13417] [0] INFO: Set SMS resend frequency to 60 seconds.
2012-04-30 11:56:28 [13417] [0] INFO: SMS resend retry set to unlimited.
2012-04-30 11:56:28 [13417] [0] INFO: DLR rerouting for smsc id <FAKE> disabled.
2012-04-30 11:56:28 [13417] [0] INFO: DLR rerouting for smsc id <(null)> disabled.
2012-04-30 11:56:28 [13417] [0] INFO: AT2[/dev/ttyACM0]: configuration doesn't show modemtype. will autodetect
2012-04-30 11:56:28 [13417] [0] INFO: ----------------------------------------
2012-04-30 11:56:28 [13417] [0] INFO: Kannel bearerbox II version 1.4.3 starting
2012-04-30 11:56:28 [13417] [7] INFO: AT2[/dev/ttyACM0]: opening device
2012-04-30 11:56:28 [13417] [0] INFO: MAIN: Start-up done, entering mainloop
2012-04-30 11:56:31 [13417] [7] INFO: AT2[/dev/ttyACM0]: speed set to 115200
2012-04-30 11:56:33 [13417] [7] INFO: AT2[/dev/ttyACM0]: Closing device
2012-04-30 11:56:33 [13417] [7] INFO: AT2[/dev/ttyACM0]: detect speed is 115200
2012-04-30 11:56:33 [13417] [7] INFO: AT2[/dev/ttyACM0]: opening device
2012-04-30 11:56:34 [13417] [7] INFO: AT2[/dev/ttyACM0]: speed set to 115200
2012-04-30 11:56:36 [13417] [7] PANIC: AT2[/dev/ttyACM0]: Cannot detect modem and generic not found
2012-04-30 11:56:36 [13417] [7] PANIC: ./bearerbox(gw_panic+0xc2) [0x80cc0e2]
2012-04-30 11:56:36 [13417] [7] PANIC: ./bearerbox [0x806ca62]
2012-04-30 11:56:36 [13417] [7] PANIC: ./bearerbox [0x806d5f1]
2012-04-30 11:56:36 [13417] [7] PANIC: ./bearerbox [0x80c2971]
2012-04-30 11:56:36 [13417] [7] PANIC: /lib/libpthread.so.0 [0xb9649b]
2012-04-30 11:56:36 [13417] [7] PANIC: /lib/libc.so.6(clone+0x5e) [0xaed42e]
I am using a Motorola Razr V3i connected via USB cable on Red Hat. The device is detected in /dev as ttyACM0.
This is my smskannel.conf:
group = core
admin-port = 13003
smsbox-port = 13004
admin-password = bar
#status-password = foo
#admin-deny-ip = ""
#admin-allow-ip = ""
#log-file = "/tmp/kannel.log"
#log-level = 0
box-deny-ip = "*.*.*.*"
box-allow-ip = "127.0.0.1"
#unified-prefix = "+923,0092,0;+,00"
#access-log = "/tmp/access.log"
#store-file = "kannel.store"
#ssl-server-cert-file = "cert.pem"
#ssl-server-key-file = "key.pem"
#ssl-certkey-file = "mycertandprivkeyfile.pem"
#---------------------------------------------
# SMSC CONNECTIONS
group = smsc
smsc = fake
smsc-id = FAKE
port = 10000
connect-allow-ip = 127.0.0.1
#this is for Motorola Razer V3i
group = smsc
smsc = at
modemtype = auto
device = /dev/ttyACM0
my-number = 00923478847037
sms-center = 00923455000010
connect-allow-ip = 127.0.0.1
log-level = 0
#-----------------Modem Group------------
group = modems
id = Motorola
name = "Motorola"
init-string = "AT+C=1"
need-sleep = true
enable-mms = true
speed = 115200
message-storage = "SM"
#---------------------------------------------
# SMSBOX SETUP
#
group = smsbox
bearerbox-host = 127.0.0.1
sendsms-port = 13013
global-sender = 13013
#sendsms-chars = "0123456789 +-"
#log-file = "/tmp/smsbox.log"
#log-level = 0
#access-log = "/tmp/access.log"
#---------------------------------------------
# SEND-SMS USERS
group = sendsms-user
username = tester
password = foobar
#user-deny-ip = ""
#user-allow-ip = ""
#---------------------------------------------
# SERVICES
group = sms-service
keyword = nop
text = "You asked nothing and I did it!"
group = sms-service
keyword = default
text = "No service specified"
Please help me.

Try this init-string:
init-string = "AT+CMEE=2;+CNMI=3,1,0,0,0".
It worked for my Motorola Razr2 V8: I am able to send and receive messages. Someone else originally suggested this solution, and it worked for his Razr V3 as well.
For a complete list of AT commands and explanations, I found this reference very useful.
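For reference, here is a minimal sketch of how the smsc group and the modems group could be tied together with that init-string. The group id, name, speed and message-storage values below are assumptions carried over from the question's config; adjust them to your modem:
# modemtype in the smsc group must match the id of the modems group
group = smsc
smsc = at
modemtype = motorola_razr
device = /dev/ttyACM0
my-number = 00923478847037
sms-center = 00923455000010

group = modems
id = motorola_razr
name = "Motorola Razr"
init-string = "AT+CMEE=2;+CNMI=3,1,0,0,0"
speed = 115200
message-storage = "SM"
Once the bearerbox and smsbox start cleanly, a quick test is the sendsms HTTP interface with the credentials from the config, e.g. http://localhost:13013/cgi-bin/sendsms?username=tester&password=foobar&to=00923001234567&text=test (the destination number here is a placeholder).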

Related

Kafka stream app failing to fetch offsets for partition

I created a Kafka cluster with 3 brokers and the following details (see the command sketch after this list):
Created 3 topics, each one with replication factor=3 and partitions=2.
Created 2 producers each one writing to one of the topics.
Created a Streams application to process messages from 2 topics and write to the 3rd topic.
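For context, the topics were created roughly like this (a sketch; the ZooKeeper address and exact script path are assumptions based on a typical setup from that Kafka era):
# create each topic with replication factor 3 and 2 partitions
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 2 --topic initial11_topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 2 --topic initial12_topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 2 --topic final11_topic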
It was all running fine until now, but I suddenly started getting the following warning when starting the Streams application:
[WARN ] 2018-06-08 21:16:49.188 [Stream3-4f7403ad-aba6-4d34-885d-60114fc9fcff-StreamThread-1] org.apache.kafka.clients.consumer.internals.Fetcher [Consumer clientId=Stream3-4f7403ad-aba6-4d34-885d-60114fc9fcff-StreamThread-1-restore-consumer, groupId=] Attempt to fetch offsets for partition Stream3-KSTREAM-OUTEROTHER-0000000005-store-changelog-0 failed due to: Disk error when trying to access log file on the disk.
Because of this warning, the Streams application is not processing anything from the 2 topics.
I tried the following things:
Stopped all brokers, deleted kafka-logs directory for each broker and restarted the brokers. It didn't solve the issue.
Stopped zookeeper and all brokers, deleted zookeeper logs as well as kafka-logs for each broker, restarted zookeeper and brokers and created the topics again. This too didn't solve the issue.
I am not able to find anything related to this error in the official docs or on the web. Does anyone have an idea why I am suddenly getting this error?
EDIT:
Out of the 3 brokers, 2 brokers (broker-0 and broker-2) continuously emit these logs:
Broker-0 logs:
[2018-06-09 02:03:08,750] INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Retrying leaderEpoch request for partition initial11_topic-1 as the leader reported an error: NOT_LEADER_FOR_PARTITION (kafka.server.ReplicaFetcherThread)
[2018-06-09 02:03:08,750] INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Retrying leaderEpoch request for partition initial12_topic-0 as the leader reported an error: NOT_LEADER_FOR_PARTITION (kafka.server.ReplicaFetcherThread)
Broker-2 logs:
[2018-06-09 02:04:46,889] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Retrying leaderEpoch request for partition initial11_topic-1 as the leader reported an error: NOT_LEADER_FOR_PARTITION (kafka.server.ReplicaFetcherThread)
[2018-06-09 02:04:46,889] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Retrying leaderEpoch request for partition initial12_topic-0 as the leader reported an error: NOT_LEADER_FOR_PARTITION (kafka.server.ReplicaFetcherThread)
Broker-1 shows following logs:
[2018-06-09 01:21:26,689] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-06-09 01:31:26,689] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2018-06-09 01:39:44,667] ERROR [KafkaApi-1] Number of alive brokers '0' does not meet the required replication factor '1' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
[2018-06-09 01:41:26,689] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
I again stopped ZooKeeper and the brokers, deleted their logs, and restarted. As soon as I create the topics again, I start getting the above logs.
Topic details:
[zk: localhost:2181(CONNECTED) 3] get /brokers/topics/initial11_topic
{"version":1,"partitions":{"1":[1,0,2],"0":[0,2,1]}}
cZxid = 0x53
ctime = Sat Jun 09 01:25:42 EDT 2018
mZxid = 0x53
mtime = Sat Jun 09 01:25:42 EDT 2018
pZxid = 0x54
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 52
numChildren = 1
[zk: localhost:2181(CONNECTED) 4] get /brokers/topics/initial12_topic
{"version":1,"partitions":{"1":[2,1,0],"0":[1,0,2]}}
cZxid = 0x61
ctime = Sat Jun 09 01:25:47 EDT 2018
mZxid = 0x61
mtime = Sat Jun 09 01:25:47 EDT 2018
pZxid = 0x62
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 52
numChildren = 1
[zk: localhost:2181(CONNECTED) 5] get /brokers/topics/final11_topic
{"version":1,"partitions":{"1":[0,1,2],"0":[2,0,1]}}
cZxid = 0x48
ctime = Sat Jun 09 01:25:32 EDT 2018
mZxid = 0x48
mtime = Sat Jun 09 01:25:32 EDT 2018
pZxid = 0x4a
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 52
numChildren = 1
Any clue?
I found the issue. It was due to the following incorrect config in server.properties of broker-1:
advertised.listeners=PLAINTEXT://10.23.152.109:9094
The port in advertised.listeners had mistakenly been changed to the same port as broker-2's advertised.listeners.
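For illustration, the corrected configs would look something like the sketch below; each broker needs its own unique advertised listener. The ports for broker-0 and broker-1 are assumptions (only broker-2's 9094 and the host IP appear above):
# server.properties on broker-0 (sketch)
broker.id=0
advertised.listeners=PLAINTEXT://10.23.152.109:9092

# server.properties on broker-1 (sketch)
broker.id=1
advertised.listeners=PLAINTEXT://10.23.152.109:9093

# server.properties on broker-2 (sketch)
broker.id=2
advertised.listeners=PLAINTEXT://10.23.152.109:9094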

API not syncing files to Icinga2 satellite

I have a master and a satellite communicating over the internet. I cannot get the files from the master to sync to the satellite. I am looking under /var/lib/icinga2/api, and there is no zones file.
My master zones file is as follows -
object Zone "master" {
endpoints = [ "master1" ]
}
object Endpoint "master1" {
host = "192.168.1.69"
port = "5665"
}
object Zone "Zone-Test" {
endpoints = [ "test-satellite-a" ]
}
object Endpoint "test-satellite-a" {
host = "51.52.53.54"
port = "5665"
}
object Zone "global-templates" {
global = true
}
The zones on the satellite are as follows -
object Endpoint "master1" {
host = "41.42.43.44"
port = "5665"
}
object Zone "master" {
endpoints = [ "master1" ]
}
object Endpoint NodeName {
}
object Zone ZoneName {
endpoints = [ NodeName ]
parent = "master"
}
object Zone "global-templates" {
global = true
}
When I run service icinga2 status, I get the following -
Nov 24 19:35:17 master1 icinga2[21599]: [2017-11-24 19:34:17 +0000] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 2, rate: 5.35/s (321/min 808/5min 808/15min);
Nov 24 19:35:17 master1 icinga2[21599]: [2017-11-24 19:34:17 +0000] information/ApiListener: New client connection for identity 'test-satellite-a' from [51.52.53.54]:37376
Nov 24 19:35:17 master1 icinga2[21599]: [2017-11-24 19:34:17 +0000] warning/ApiListener: No data received on new API connection for identity 'test-satellite-a'. Ensure that the remote endpoints are properly configured in a cluster setup.
Nov 24 19:35:17 master1 icinga2[21599]: Context:
Nov 24 19:35:17 master1 icinga2[21599]: (0) Handling new API client connection
Nov 24 19:35:17 master1 icinga2[21599]: [2017-11-24 19:34:27 +0000] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 8, rate: 5.5/s (330/min 835/5min 835/15min);
Nov 24 19:35:17 master1 icinga2[21599]: [2017-11-24 19:34:37 +0000] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 2, rate: 5.5/s (330/min 890/5min 890/15min);
Nov 24 19:35:17 master1 icinga2[21599]: [2017-11-24 19:34:47 +0000] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 2, rate: 5.33333/s (320/min 1025/5min 1025/15min);
Nov 24 19:35:17 master1 icinga2[21599]: [2017-11-24 19:35:07 +0000] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 6, rate: 5.5/s (330/min 1091/5min 1091/15min);
Nov 24 19:35:17 master1 icinga2[21599]: [2017-11-24 19:35:17 +0000] information/WorkQueue: #7 (IdoMysqlConnection, ido-mysql) items: 8, rate: 5.46667/s (328/min 1134/5min 1134/15min);
Any ideas what is going wrong here?
Have you tried adding the following to zones.conf?
object Zone "director-global" {
global = true
}
This defines a global zone for the Icinga Director. It is required to sync configuration commands, templates, apply rules, etc. to satellites and clients. All nodes require the same configuration and must have accept_config enabled in the api feature.
The host and port settings need to be configured either on the master or on the satellite.
Since it's over the Internet, make sure there is no reachability issue.
Also, I assume you have added some config for the satellite zone; only configs for the global zone and the satellite zone get synced to the satellite.
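As a rough sketch, enabling config acceptance on the satellite looks like this in the api feature (the file path and the accept_commands line are assumptions; adjust to your installation):
/* /etc/icinga2/features-enabled/api.conf on the satellite (sketch) */
object ApiListener "api" {
  accept_config = true
  accept_commands = true
}
After changing it, restart icinga2 on the satellite and check whether /var/lib/icinga2/api/zones gets populated.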

MobileFirst 6.3.0 Getting UnavailableShardsException

After redeploying the application because of a DB2 password update, the application failed to launch.
I found the following in the log:
[11/23/17 12:23:29:988 CST] 000000f1 DataReceiver E Error sending bulk request: java.lang.RuntimeException: failure in bulk execution:
[0]: index [worklight], type [app_activities], id [yRA1jtBzT9ScN0Hj-Fft0g], message [UnavailableShardsException[[worklight][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#e38c893c]]
[3]: index [worklight], type [devices], id [c2062eef-e266-4209-83d2-13d043ae2a9d], message [UnavailableShardsException[[worklight][2] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#34a9dfa3]]
[6]: index [worklight], type [devices], id [1ed55e4b-26e0-38ba-9f83-2b65d951722e], message [UnavailableShardsException[[worklight][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#e38c893c]]
[8]: index [worklight], type [devices], id [1ed55e4b-26e0-38ba-9f83-2b65d951722e], message [UnavailableShardsException[[worklight][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#e38c893c]]
[10]: index [worklight], type [app_activities], id [e4SxG701QwOwg7L5VztsTQ], message [UnavailableShardsException[[worklight][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#e38c893c]]
[12]: index [worklight], type [devices], id [c2062eef-e266-4209-83d2-13d043ae2a9d], message [UnavailableShardsException[[worklight][2] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#34a9dfa3]]
[13]: index [worklight], type [app_activities], id [wB2fqKAkT9-JAgfAqSPHZw], message [UnavailableShardsException[[worklight][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest#e38c893c]]
at com.ibm.elasticsearch.servlet.DataReceiver.processData(DataReceiver.java:132)
at com.ibm.elasticsearch.servlet.DataReceiver.processDataLegacy(DataReceiver.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:88)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:613)
at org.apache.wink.server.internal.handlers.InvokeMethodHandler.handleRequest(InvokeMethodHandler.java:63)
at org.apache.wink.server.handlers.AbstractHandler.handleRequest(AbstractHandler.java:33)
at org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:26)
at org.apache.wink.server.handlers.RequestHandlersChain.handle(RequestHandlersChain.java:22)
at org.apache.wink.server.handlers.AbstractHandlersChain.doChain(AbstractHandlersChain.java:67)
at org.apache.wink.server.internal.handlers.CreateInvocationParametersHandler.handleRequest(CreateInvocationParametersHandler.java:54)
I have not seen this error before.

Getting unknown server error while initializing GSS security context

I am working on a Cocoa OS X app; the Mac is bound to Active Directory and logged in as an Active Directory user.
I am getting an unknown server error while initializing the GSS security context with a PrincipalName provided by the server:
gss_init_sec_context
major: unknown routine error
minor: Server (krbtgt/LOCAL@XYZ.LOCAL) unknown while looking up 'WIN- ****$/xyz.local@LOCAL' (cached result, timeout in 167 sec)
klist -5 displays this in the terminal:
Credentials cache: API:DD4CC511-7BE2-4267-9923-6C8ABCD9297D
Principal: user@XYZ.LOCAL
Issued Expires Principal
Nov 5 17:21:23 2015 Nov 6 03:21:23 2015 krbtgt/XYZ.LOCAL@XYZ.LOCAL
Because of this error, I changed the krb5.conf file like this:
[libdefaults]
default_realm = XYZ.LOCAL
renewable = true
forwardable = true
ticket_lifetime = 20d
renew_lifetime = 1d
default_tgs_enctypes = aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5
default_tkt_enctypes = aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5
[domain_realm]
xyz.com = XYZ.LOCAL
.xyz.com = XYZ.LOCAL

Setup a graylog2 server with elasticsearch in a vagrant machine

I'm trying to install a Graylog2 server on my local dev machine and I'm running into problems with the Elasticsearch setup.
Elasticsearch is installed as a service on a Vagrant machine running on my dev machine, so it isn't listening on 127.0.0.1 but on 192.168.50.4 (the IP of the Vagrant machine). I have port 9200 forwarded from the Vagrant machine, but the graylog2-server doesn't seem to find it and stops with:
ERROR: Could not successfully connect to ElasticSearch. Check that
your cluster state is not RED and that ElasticSearch is running
properly.
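Before digging into Graylog itself, it is worth confirming from the dev machine that the cluster really is reachable and not RED; a quick check against the Vagrant machine's IP from above:
# check Elasticsearch cluster health from the dev machine
curl 'http://192.168.50.4:9200/_cluster/health?pretty'
# expect "status" : "green" (or "yellow" on a single node), not "red"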
Adding port 9300 forwarded from the vagrant machine changed the error to:
Caused by: org.elasticsearch.common.netty.channel.ChannelException:
Failed to bind to: 0.0.0.0/0.0.0.0:9350
I tried this setting in the Graylog config file:
elasticsearch_network_host = 192.168.50.4
but that only changed the error to an exception failing to bind:
Caused by: org.elasticsearch.common.netty.channel.ChannelException:
Failed to bind to: /192.168.50.4:9350 at
org.elasticsearch.common.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
So that didn't help either.
I'd be glad for any direction on what I am doing wrong (with the Elasticsearch configuration, Vagrant, or Graylog2).
Thanks!
Update: following the advice in the answer below, I changed the following config:
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.50.4:9300
I now get this error:
2014-06-16 23:04:34,946 WARN : org.elasticsearch.transport.netty - [graylog2-server] Message not fully read (response) for [6] handler org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$4#67bd250a, error [true], resetting
2014-06-16 23:04:36,451 WARN : org.elasticsearch.discovery.zen.ping.unicast - [graylog2-server] failed to send ping to [[#zen_unicast_1#][inet[/192.168.50.4:9300]]]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize exception response from stream
at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:169)
at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.io.InvalidClassException: failed to read class descriptor
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1603)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
It looks like Graylog2 still fails to connect to Elasticsearch correctly.
Details (update): graylog2-server-0.20.2, Elasticsearch 1.1.0 (I think) - I can replace it if that's the problem. Java: OpenJDK 64-Bit, version "1.7.0_55".
More updates (thanks @sheena): after downgrading Elasticsearch to 0.90.10 we got some progress, but it is still not working.
Here is the current log:
2014-06-17 13:27:16,394 INFO : org.graylog2.Main - Graylog2 0.20.2 starting up. (JRE: Oracle Corporation 1.7.0_55 on Linux 3.13.0-29-generic)
2014-06-17 13:27:16,475 INFO : org.graylog2.plugin.system.NodeId - Node ID: e7245f12-2e8b-4803-9e88-7529169b5a91
2014-06-17 13:27:16,670 INFO : org.graylog2.buffers.ProcessBuffer - Initialized ProcessBuffer with ring size <1024> and wait strategy <BlockingWaitStrategy>.
2014-06-17 13:27:16,692 INFO : org.graylog2.buffers.OutputBuffer - Initialized OutputBuffer with ring size <1024> and wait strategy <BlockingWaitStrategy>.
2014-06-17 13:27:16,964 DEBUG: com.ning.http.client.providers.netty.NettyAsyncHttpProvider - Number of application's worker threads is 8
2014-06-17 13:27:17,272 INFO : org.elasticsearch.node - [graylog2-server] version[0.90.10], pid[24419], build[0a5781f/2014-01-10T10:18:37Z]
2014-06-17 13:27:17,273 INFO : org.elasticsearch.node - [graylog2-server] initializing ...
2014-06-17 13:27:17,273 DEBUG: org.elasticsearch.node - [graylog2-server] using home [/home/alon/Downloads/graylog2-server-0.20.2], config [/home/alon/Downloads/graylog2-server-0.20.2/config], data [[/home/alon/Downloads/graylog2-server-0.20.2/data]], logs [/home/alon/Downloads/graylog2-server-0.20.2/logs], work [/home/alon/Downloads/graylog2-server-0.20.2/work], plugins [/home/alon/Downloads/graylog2-server-0.20.2/plugins]
2014-06-17 13:27:17,281 INFO : org.elasticsearch.plugins - [graylog2-server] loaded [], sites []
2014-06-17 13:27:17,320 DEBUG: org.elasticsearch.common.compress.lzf - using [UnsafeChunkDecoder] decoder
2014-06-17 13:27:18,655 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [generic], type [cached], keep_alive [30s]
2014-06-17 13:27:18,740 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [index], type [fixed], size [4], queue_size [200]
2014-06-17 13:27:18,744 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [bulk], type [fixed], size [4], queue_size [50]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [get], type [fixed], size [4], queue_size [1k]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [search], type [fixed], size [12], queue_size [1k]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [suggest], type [fixed], size [4], queue_size [1k]
2014-06-17 13:27:18,745 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [percolate], type [fixed], size [4], queue_size [1k]
2014-06-17 13:27:18,746 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]
2014-06-17 13:27:18,747 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [flush], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,747 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [merge], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,747 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [refresh], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,748 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [warmer], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,748 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [snapshot], type [scaling], min [1], size [2], keep_alive [5m]
2014-06-17 13:27:18,748 DEBUG: org.elasticsearch.threadpool - [graylog2-server] creating thread_pool [optimize], type [fixed], size [1], queue_size [null]
2014-06-17 13:27:18,768 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] using worker_count[8], port[9350], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/3/6/1/1], receive_predictor[512kb->512kb]
2014-06-17 13:27:18,784 DEBUG: org.elasticsearch.discovery.zen.ping.unicast - [graylog2-server] using initial hosts [192.168.50.4:9300], with concurrent_connects [10]
2014-06-17 13:27:18,787 DEBUG: org.elasticsearch.discovery.zen - [graylog2-server] using ping.timeout [3s], master_election.filter_client [true], master_election.filter_data [false]
2014-06-17 13:27:18,788 DEBUG: org.elasticsearch.discovery.zen.elect - [graylog2-server] using minimum_master_nodes [-1]
2014-06-17 13:27:18,790 DEBUG: org.elasticsearch.discovery.zen.fd - [graylog2-server] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
2014-06-17 13:27:18,801 DEBUG: org.elasticsearch.discovery.zen.fd - [graylog2-server] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]
2014-06-17 13:27:18,845 DEBUG: org.elasticsearch.monitor.jvm - [graylog2-server] enabled [true], last_gc_enabled [false], interval [1s], gc_threshold [{old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}}]
2014-06-17 13:27:18,846 DEBUG: org.elasticsearch.monitor.os - [graylog2-server] Using probe [org.elasticsearch.monitor.os.JmxOsProbe#7b01e044] with refresh_interval [1s]
2014-06-17 13:27:18,849 DEBUG: org.elasticsearch.monitor.process - [graylog2-server] Using probe [org.elasticsearch.monitor.process.JmxProcessProbe#3103c203] with refresh_interval [1s]
2014-06-17 13:27:18,854 DEBUG: org.elasticsearch.monitor.jvm - [graylog2-server] Using refresh_interval [1s]
2014-06-17 13:27:18,854 DEBUG: org.elasticsearch.monitor.network - [graylog2-server] Using probe [org.elasticsearch.monitor.network.JmxNetworkProbe#1cc7580f] with refresh_interval [5s]
2014-06-17 13:27:18,857 DEBUG: org.elasticsearch.monitor.network - [graylog2-server] net_info
host [stox-alonisser]
vboxnet0 display_name [vboxnet0]
address [/fe80:0:0:0:800:27ff:fe00:0%4] [/192.168.50.1]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
wlan0 display_name [wlan0]
address [/fe80:0:0:0:e8b:fdff:fe62:dc9d%3] [/192.168.20.107]
mtu [1500] multicast [true] ptp [false] loopback [false] up [true] virtual [false]
lo display_name [lo]
address [/0:0:0:0:0:0:0:1%1] [/127.0.0.1]
mtu [65536] multicast [false] ptp [false] loopback [true] up [true] virtual [false]
2014-06-17 13:27:18,858 DEBUG: org.elasticsearch.monitor.fs - [graylog2-server] Using probe [org.elasticsearch.monitor.fs.JmxFsProbe#2c8807d7] with refresh_interval [1s]
2014-06-17 13:27:19,196 DEBUG: org.elasticsearch.indices.store - [graylog2-server] using indices.store.throttle.type [MERGE], with index.store.throttle.max_bytes_per_sec [20mb]
2014-06-17 13:27:19,204 DEBUG: org.elasticsearch.cache.memory - [graylog2-server] using bytebuffer cache with small_buffer_size [1kb], large_buffer_size [1mb], small_cache_size [10mb], large_cache_size [500mb], direct [true]
2014-06-17 13:27:19,220 DEBUG: org.elasticsearch.script - [graylog2-server] using script cache with max_size [500], expire [null]
2014-06-17 13:27:19,234 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
2014-06-17 13:27:19,235 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2014-06-17 13:27:19,236 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster_concurrent_rebalance] with [2]
2014-06-17 13:27:19,243 DEBUG: org.elasticsearch.gateway.local - [graylog2-server] using initial_shards [quorum], list_timeout [30s]
2014-06-17 13:27:19,424 DEBUG: org.elasticsearch.indices.recovery - [graylog2-server] using max_bytes_per_sec[20mb], concurrent_streams [3], file_chunk_size [512kb], translog_size [512kb], translog_ops [1000], and compress [true]
2014-06-17 13:27:19,486 DEBUG: org.elasticsearch.indices.memory - [graylog2-server] using index_buffer_size [265.4mb], with min_shard_index_buffer_size [4mb], max_shard_index_buffer_size [512mb], shard_inactive_time [30m]
2014-06-17 13:27:19,487 DEBUG: org.elasticsearch.indices.cache.filter - [graylog2-server] using [node] weighted filter cache with size [20%], actual_size [530.8mb], expire [null], clean_interval [1m]
2014-06-17 13:27:19,489 DEBUG: org.elasticsearch.indices.fielddata.cache - [graylog2-server] using size [-1] [-1b], expire [null]
2014-06-17 13:27:19,507 DEBUG: org.elasticsearch.gateway.local.state.meta - [graylog2-server] using gateway.local.auto_import_dangled [YES], with gateway.local.dangling_timeout [2h]
2014-06-17 13:27:19,511 DEBUG: org.elasticsearch.bulk.udp - [graylog2-server] using enabled [false], host [null], port [9700-9800], bulk_actions [1000], bulk_size [5mb], flush_interval [5s], concurrent_requests [4]
2014-06-17 13:27:19,514 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
2014-06-17 13:27:19,514 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2014-06-17 13:27:19,515 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster_concurrent_rebalance] with [2]
2014-06-17 13:27:19,516 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]
2014-06-17 13:27:19,516 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]
2014-06-17 13:27:19,516 DEBUG: org.elasticsearch.cluster.routing.allocation.decider - [graylog2-server] using [cluster_concurrent_rebalance] with [2]
2014-06-17 13:27:19,528 INFO : org.elasticsearch.node - [graylog2-server] initialized
2014-06-17 13:27:19,529 INFO : org.elasticsearch.node - [graylog2-server] starting ...
2014-06-17 13:27:19,552 DEBUG: org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Using select timeout of 500
2014-06-17 13:27:19,552 DEBUG: org.elasticsearch.netty.channel.socket.nio.SelectorUtil - Epoll-bug workaround enabled = false
2014-06-17 13:27:19,618 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] Bound to address [/0:0:0:0:0:0:0:0:9350]
2014-06-17 13:27:19,622 INFO : org.elasticsearch.transport - [graylog2-server] bound_address {inet[/0:0:0:0:0:0:0:0:9350]}, publish_address {inet[/192.168.20.107:9350]}
2014-06-17 13:27:19,658 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] connected to node [[#zen_unicast_1#][inet[/192.168.50.4:9300]]]
2014-06-17 13:27:22,628 WARN : org.elasticsearch.discovery - [graylog2-server] waited for 3s and no initial state was set by the discovery
2014-06-17 13:27:22,628 INFO : org.elasticsearch.discovery - [graylog2-server] graylog2/vWsYLp5JQoOJMva0FZgRsA
2014-06-17 13:27:22,629 DEBUG: org.elasticsearch.gateway - [graylog2-server] can't wait on start for (possibly) reading state from gateway, will do it asynchronously
2014-06-17 13:27:22,629 INFO : org.elasticsearch.node - [graylog2-server] started
2014-06-17 13:27:22,642 DEBUG: org.elasticsearch.transport.netty - [graylog2-server] disconnected from [[#zen_unicast_1#][inet[/192.168.50.4:9300]]]
2014-06-17 13:27:22,644 DEBUG: org.elasticsearch.discovery.zen - [graylog2-server] filtered ping responses: (filter_client[true], filter_data[false])
--> target [[Crimson Daffodil][vPHcWzoCQteDG19hofaayA][inet[/10.0.2.15:9300]]], master [[Crimson Daffodil][vPHcWzoCQteDG19hofaayA][inet[/10.0.2.15:9300]]]
2014-06-17 13:27:27,634 ERROR: org.graylog2.Main -
elasticsearch_network_host is not what you think. It is about the Elasticsearch client within Graylog, not the Elasticsearch server you want to connect to. So Graylog is trying to listen on 192.168.50.4, which isn't a valid IP address on the Graylog system (your dev machine).
You most likely want to set these variables in graylog2 config:
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.50.4:9300
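For the unicast discovery to work, both sides also have to agree on the cluster name and the transport port; a minimal sketch, assuming the default cluster name (the file locations and the "graylog2" name are assumptions, adjust to your install):
# graylog2.conf on the dev machine (sketch)
elasticsearch_cluster_name = graylog2
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.50.4:9300

# elasticsearch.yml on the Vagrant machine (sketch)
cluster.name: graylog2
network.host: 192.168.50.4
transport.tcp.port: 9300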
Here is where I got stuck, but that was because I had Elasticsearch 1.0 installed when I needed 0.90. I'll know more once my Puppet/Vagrant stack finishes re-provisioning. =)
EDIT: Mine is working now.
