Graylog cannot connect to Elasticsearch in Kubernetes cluster

I deployed Graylog on a Kubernetes cluster and everything was working fine, until I decided to add an environment variable and update the graylog deployment.
Now, some things stopped working. I can see that all inputs are running and they are accepting messages:
However, if I try to see the received messages, it returns a 500 error with the following message:
The docs say that the Graylog container needs a service called elasticsearch:
docker run --link some-mongo:mongo --link some-elasticsearch:elasticsearch -p 9000:9000 -e GRAYLOG_WEB_ENDPOINT_URI="http://127.0.0.1:9000/api" -d graylog2/server
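In Kubernetes terms, that --link alias corresponds to a Service named elasticsearch in the same namespace, which I do have; a minimal sketch of such a Service (the app label is an assumption about how the Elasticsearch pods are labelled):
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  selector:
    app: elasticsearch   # assumed label on the Elasticsearch pods
  ports:
    - port: 9200
      targetPort: 9200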
And if I attach to the graylog pod and curl elasticsearch:9200, I see a successful result:
{
  "name" : "Vixen",
  "cluster_name" : "graylog",
  "cluster_uuid" : "TkZtckzGTnSu3JjERQNf4g",
  "version" : {
    "number" : "2.4.4",
    "build_hash" : "fcbb46dfd45562a9cf00c604b30849a6dec6b017",
    "build_timestamp" : "2017-01-03T11:33:16Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.2"
  },
  "tagline" : "You Know, for Search"
}
But the graylog logs say that it is trying to connect to localhost:
Again, everything was working until today. Why is it trying to connect to localhost instead of the elasticsearch service?

Looks like it was a version problem. I downgraded the graylog container to the previous stable version, 2.2.3-1, and it started working again.
My guess is that when I updated the images today, it pulled the latest version, which broke some things.

You may want to try adding elasticsearch_hosts to graylog.conf:
https://github.com/Graylog2/graylog2-server/blob/master/misc/graylog.conf
at line 172
# List of Elasticsearch hosts Graylog should connect to.
# Need to be specified as a comma-separated list of valid URIs for the http ports of your elasticsearch nodes.
# If one or more of your elasticsearch hosts require authentication, include the credentials in each node URI that
# requires authentication.
#
# Default: http://127.0.0.1:9200
#elasticsearch_hosts = http://node1:9200,http://user:password@node2:19200
You can create your own graylog.conf with this setting, add it to your Dockerfile, and build the image with it.
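Since this question is about Kubernetes, a simpler alternative may be to set the value through the deployment itself; the official Graylog Docker images map GRAYLOG_-prefixed environment variables onto graylog.conf settings, so something along these lines should work (a sketch; the environment-variable convention and the elasticsearch service name are assumptions taken from the image docs and the question):
# Fragment of the Graylog deployment's pod spec: point Graylog at the
# Kubernetes service "elasticsearch" via an environment variable.
spec:
  containers:
    - name: graylog
      image: graylog2/server
      env:
        - name: GRAYLOG_ELASTICSEARCH_HOSTS
          value: "http://elasticsearch:9200"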

Actually, Graylog switched to the HTTP API in Graylog 2.3, so the method of connecting to the Elasticsearch cluster has changed. You can now just provide the addresses of the ES nodes instead of zen_ping_unicast_hosts. This is the commit that changed this setting: https://github.com/Graylog2/graylog2-server/commit/4213a2257429b6a0803ab1b52c39a6a35fbde889.
This also makes it possible to connect to the AWS Elasticsearch service, which was not possible earlier. See this discussion thread for more insight: https://github.com/Graylog2/graylog2-server/issues/1473
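For a Kubernetes setup like the one in the question, that means pointing elasticsearch_hosts at the HTTP port of the Elasticsearch service instead of localhost, for example (service name taken from the question):
elasticsearch_hosts = http://elasticsearch:9200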

Related

Elasticsearch is not running in browser

I have downloaded Elasticsearch 8.1 on my Ubuntu machine. After successful installation, when I execute
curl -u elastic https://127.0.0.1:9200 -k
It shows the expected Elasticsearch response. But when I hit http://127.0.0.1:9200/ or http://localhost:9200 in my browser, it returns
After installation, I added network.host: 127.0.0.1 to elasticsearch.yml
Can anybody help me understand why it is not working in the browser?
I am using Ubuntu 20 and following this doc.
As of version 8.0, Elasticsearch security is turned on by default and SSL/TLS is required for HTTP communications.
You can disable HTTP security if you want, but that's discouraged.
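If you just need plain HTTP for local testing, the usual approach is to turn off TLS on the HTTP layer in elasticsearch.yml; a minimal sketch (for a local test setup only, since it weakens the default security configuration):
# elasticsearch.yml -- disable TLS for the HTTP layer only
xpack.security.http.ssl.enabled: false
After restarting Elasticsearch, http://localhost:9200 responds over plain HTTP; note that basic authentication is still required unless you disable security entirely with xpack.security.enabled: false.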
I am using the Windows platform, but the steps are the same. When you run elasticsearch.bat in cmd,
use the secure HTTPS URL for Elasticsearch: https://localhost:9200/
For the username and password, scroll through the cmd window running Elasticsearch; the generated credentials are printed there.
After logging in to Elasticsearch, hurray!
Thanks. But
the best solution is to use the Docker image of the ELK stack, which is easier than downloading Elasticsearch, Logstash and Kibana separately and running them on the local machine.

Multiple elastic instances on same host

I'm attempting to test Elasticsearch replication by installing multiple Elasticsearch instances on the same host.
I've created an additional Elasticsearch configuration file and set the following config property:
http.port: 9500
The other Elasticsearch configuration file contains the default value:
http.port: 9200
I attempt to start Elasticsearch using:
./bin/elasticsearch -Ees.config=./config/elasticsearch.yml
but receive the error:
uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: unknown setting [es.config] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
How can I run two Elasticsearch instances on the same host?
Is there an alternative to the es.config parameter?
ES_PATH_CONF=/path/to/my/config ./bin/elasticsearch
This is the way to do it according to the documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html (depending on the version you are using, it might differ).
I would recommend using a Docker setup for this endeavour, as described in the official Elasticsearch documentation: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
As @Alkis Kalogeris already stated, I would also recommend using docker/docker-compose. You would just need to expose a different port on your localhost.
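A minimal docker-compose sketch of that idea, using the two ports from the question (the image tag is only an example; for an actual replication test the two nodes would additionally need cluster discovery settings):
# docker-compose.yml: two independent single-node Elasticsearch instances on one host
services:
  es1:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
  es2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9500:9200"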

plugin:elasticsearch service not available in kibana

In order to use the Sense plugin, I have some problems integrating Kibana with Elasticsearch. The installation itself went well: Elasticsearch and Kibana are installed properly on my machine.
When I run this command:
cd elasticsearch/bin/elasticsearch.bat
and then I go to http://localhost:9200/,
I get a success message.
When I run this command:
cd kibana/bin/kibana.bat
and then I go to http://localhost:5601/app/sense
I get a notification that
plugin:elasticsearch is not available.
This is proof that my Elasticsearch is already running:
This is my kibana.yml:
This is my elastic.yml:
What's going wrong?

Changing hostname breaks Rabbitmq when running on Kubernetes

I'm trying to run RabbitMQ using Kubernetes on AWS. I'm using the official RabbitMQ docker container. Each time the pod restarts, the rabbitmq container gets a new hostname. I've set up a service (of type LoadBalancer) for the pod with a resolvable DNS name.
But when I use an EBS volume to make the rabbit config/messages/queues persistent between restarts, it breaks with:
exception exit: {{failed_to_cluster_with,
['rabbitmq@rabbitmq-deployment-2901855891-nord3'],
"Mnesia could not connect to any nodes."},
{rabbit,start,[normal,[]]}}
in function application_master:init/4 (application_master.erl, line 134)
rabbitmq-deployment-2901855891-nord3 is the hostname of the previous rabbitmq container. It is almost as if Mnesia saved the old hostname :-/
The container's info looks like this:
Starting broker...
=INFO REPORT==== 25-Apr-2016::12:42:42 ===
node : rabbitmq@rabbitmq-deployment-2770204827-cboj8
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash : XXXXXXXXXXXXXXXX
log : tty
sasl log : tty
database dir : /var/lib/rabbitmq/mnesia/rabbitmq
I'm only able to set the first part of the node name to rabbitmq using the RABBITMQ_NODENAME environment variable.
Setting RABBITMQ_NODENAME to a resolvable DNS name breaks with:
Can't set short node name!\nPlease check your configuration\n"
Setting RABBITMQ_USE_LONGNAME to true breaks with:
Can't set long node name!\nPlease check your configuration\n"
Update:
Setting RABBITMQ_NODENAME to rabbitmq@localhost works, but that negates any possibility of clustering instances.
Starting broker...
=INFO REPORT==== 26-Apr-2016::11:53:19 ===
node : rabbitmq@localhost
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash : 9WtXr5XgK4KXE/soTc6Lag==
log : tty
sasl log : tty
database dir : /var/lib/rabbitmq/mnesia/rabbitmq@localhost
Setting RABBITMQ_NODENAME to the service name, in this case rabbitmq-service, like so: rabbitmq@rabbitmq-service, also works, since Kubernetes service names are internally resolvable via DNS.
Starting broker...
=INFO REPORT==== 26-Apr-2016::11:53:19 ===
node : rabbitmq@rabbitmq-service
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.config
cookie hash : 9WtXr5XgK4KXE/soTc6Lag==
log : tty
sasl log : tty
database dir : /var/lib/rabbitmq/mnesia/rabbitmq@rabbitmq-service
Is this the right way though? Will I still be able to cluster multiple instances if the node names are the same?
The idea is to use a different 'service' and 'deployment' for each of the nodes you want to create.
As you said, you have to create a custom NODENAME for each, i.e.:
RABBITMQ_NODENAME=rabbit@rabbitmq-1
Also, rabbitmq-1, rabbitmq-2 and rabbitmq-3 have to be resolvable from each node. For that you can use kube-dns. The /etc/resolv.conf will look like:
search rmq.svc.cluster.local
and /etc/hosts must contain:
127.0.0.1 rabbitmq-1 # or rabbitmq-2 on node 2...
The services are there to create a stable network identity for each node:
rabbitmq-1.svc.cluster.local
rabbitmq-2.svc.cluster.local
rabbitmq-3.svc.cluster.local
The different deployment resources will allow you to mount a different volume on each node.
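For illustration, a sketch of one such per-node pair; the namespace rmq and the app/node labels are assumptions chosen to match the DNS names above, and the volume claim name is hypothetical:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-1
  namespace: rmq
spec:
  selector:
    app: rabbitmq
    node: rabbitmq-1
  ports:
    - name: amqp
      port: 5672
    - name: clustering
      port: 25672
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq-1
  namespace: rmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
      node: rabbitmq-1
  template:
    metadata:
      labels:
        app: rabbitmq
        node: rabbitmq-1
    spec:
      containers:
        - name: rabbitmq
          image: rabbitmq:3
          env:
            - name: RABBITMQ_NODENAME
              value: rabbit@rabbitmq-1
          volumeMounts:
            - name: data
              mountPath: /var/lib/rabbitmq
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: rabbitmq-1-data   # one volume per node; claim name is hypothetical
Repeat this pair as rabbitmq-2 and rabbitmq-3, each with its own volume.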
I'm working on a deployment tool to simplify those actions.
I've done a demo of how I scale and deploy rabbitmq from 1 to 3 nodes on kubernetes:
https://asciinema.org/a/2ktj7kr2d2m3w25xrpz7mjkbu?speed=1.5
More generally, the complexity you're facing when deploying a clustered application is addressed in the 'PetSet proposal': https://github.com/kubernetes/kubernetes/pull/18016
In addition to the first reply by @ant31:
Kubernetes now allows setting up a hostname, e.g. in YAML:
template:
  metadata:
    annotations:
      "pod.beta.kubernetes.io/hostname": rabbit-rc1
See https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns, section "A Records and hostname Based on Pod Annotations - A Beta Feature in Kubernetes v1.2".
It seems that the whole configuration survives multiple restarts or re-schedules. I've not set up a cluster yet; however, I'm going to follow the tutorial for MongoDB, see https://www.mongodb.com/blog/post/running-mongodb-as-a-microservice-with-docker-and-kubernetes
The approach will probably be almost the same from a Kubernetes point of view.

Elasticsearch is working at port 9200 but Kibana is not working

Hello, I am starting to work with Kibana and Elasticsearch. I am able to run Elasticsearch at port 9200, but Kibana is not running at port 5601. The following two images are given for clarification.
Kibana is not running and the browser shows that the page is not available.
Kibana doesn't support spaces in the folder name. Your folder name is
GA Works
Remove the space between those two words; Kibana will then run without errors and you will be able to access it at
http://localhost:5601
You can rename the folder to
GA_Works
Have you
a) set elasticsearch_url to point at your Elasticsearch instance in the file kibana/config/kibana.yml?
b) run ./bin/kibana (or bin\kibana.bat on Windows) after setting the above config?
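For reference, with Kibana 4.x/5.x the relevant line in kibana.yml looks like this (the URL assumes Elasticsearch runs on the same machine on the default port):
# kibana.yml: point Kibana at the local Elasticsearch instance
elasticsearch_url: "http://localhost:9200"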
If you tried all of the above and it still doesn't work, make sure that the Kibana process is running first. I found that /etc/init.d/kibana4_init doesn't start the process. If that is the case, then try /opt/kibana/bin/kibana.
I also made kibana user:group the owner of the folder/files.
