I'm trying to configure an Elasticsearch data source for Grafana. I have both running locally in Docker, both at version 7.2.0. For Grafana I provide the ES URL as http://localhost:9200, plus the index name, time field, and ES version. All other parameters are left at their default values.
When I save the config, I see the following in the Grafana logs:
t=2021-02-14T14:55:58+0000 lvl=eror msg="Data proxy error" logger=data-proxy-log userId=1 orgId=1 uname=admin path=/api/datasources/proxy/1/<index>/_mapping remote_addr=172.17.0.1 referer="http://localhost:3000/datasources/edit/1/?utm_source=grafana_gettingstarted" error="http: proxy error: dial tcp 127.0.0.1:9200: connect: connection refused"
t=2021-02-14T14:55:58+0000 lvl=info msg="Request Completed" logger=context userId=1 orgId=1 uname=admin method=GET path=/api/datasources/proxy/1/<index>/_mapping status=502 remote_addr=172.17.0.1 time_ms=1 size=0 referer="http://localhost:3000/datasources/edit/1/?utm_source=grafana_gettingstarted"
I can't work out why Grafana tries to fetch the mapping from that address, or how to configure it correctly.
By the way, a request to http://localhost:9200/<index>/_mapping from my host returns the correct mapping.
According to the Grafana documentation on data source configuration, the "URL needs to be accessible from the Grafana backend/server". With the default proxy access mode, the request is made from inside the Grafana container, where localhost refers to the container itself, not to your host machine. So try replacing "http://localhost:9200" with "http://elasticsearch:9200". I had the same issue before, and replacing it this way worked for me. :)
Plus: "elasticsearch" is the default name of the Elasticsearch container (in case you are running with Docker), which is why that hostname resolves.
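In case it helps, here is a minimal compose sketch that puts both containers on the same network so the service name resolves; the image tags come from the versions in the question, everything else is illustrative:

version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
    environment:
      - discovery.type=single-node   # single-node dev setup, no cluster bootstrap
    ports:
      - "9200:9200"                  # still reachable from the host for curl tests
  grafana:
    image: grafana/grafana:7.2.0
    ports:
      - "3000:3000"
    depends_on:
      - elasticsearch

With this layout, the data source URL in Grafana is http://elasticsearch:9200.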
Related
We have deployed Elasticsearch (8.3) on Kubernetes, and an ingress is defined for Elasticsearch as https://elasticsearch.url.com/es. But when I use that URL to connect with the Python elasticsearch package, I get the error below:
Note: I have tried adding the port number (https://elasticsearch.url.com:9200/es/), but it still did not work.
ValueError: URL must include a 'scheme', 'host', and 'port' component (ie 'https://localhost:9200')
I am using the code below to connect:
from elasticsearch import Elasticsearch

client = Elasticsearch(
    ["https://elasticsearch.url.com/es/"],
    http_auth=('username', 'password')
)
Kindly help me figure out how to resolve this.
The client expects something like https://elasticsearch.url.com:9200/, as anything after the last / is treated as a path or action of some sort, e.g. _search or an index name, for Elasticsearch to act on based on that context.
You will likely need to remove the trailing es part of the URL; then you can use https://elasticsearch.url.com:80/ (assuming ingress port 80 redirects to port 9200 for Elasticsearch).
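A sketch of the corrected client call; the port is an assumption, so substitute whatever your ingress actually listens on:

from elasticsearch import Elasticsearch

# Scheme, host, and port, with no trailing path; port 80 is an assumption
# based on the ingress setup described above.
client = Elasticsearch(
    ["https://elasticsearch.url.com:80"],
    http_auth=('username', 'password')
)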
Yesterday I set up a dedicated single monitoring node following this guide.
I managed to fire up the new monitoring node with the same ES 6.6.0 version as the cluster, then added these lines to the elasticsearch.yml file on all ES cluster nodes:
xpack.monitoring.exporters:
  id1:
    type: http
    host: ["http://monitoring-node-ip-here:9200"]
Then I restarted all nodes and Kibana (which actually runs on one of the ES cluster nodes).
Now I can see today's monitoring data indices being sent to the new external monitoring node, but Kibana shows a "You need to make some adjustments" message when accessing the "Monitoring" section:
We checked the `cluster defaults` settings for `xpack.monitoring.exporters`, and found the reason: `Remote exporters indicate a possible misconfiguration: id1`. Check that the intended exporters are enabled for sending statistics to the monitoring cluster, and that the monitoring cluster host matches the `xpack.monitoring.elasticsearch` setting in `kibana.yml` to see monitoring data in this instance of Kibana.
I have already checked that all nodes can ping each other. Also, I don't have X-Pack security enabled, so I haven't created any additional "remote_monitor" user.
I followed the error message and tried to add xpack.monitoring.elasticsearch to the kibana.yml file, but I ended up with the following error:
FATAL ValidationError: child "xpack" fails because [child "monitoring" fails because [child
"elasticsearch" fails because ["url" is not allowed]]]
I hope someone can help me figure out what's wrong.
EDIT #1
Solved: the problem was that monitoring collection had not been disabled on the monitoring cluster itself:
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": false
  }
}
Additionally, I had made a mistake in the kibana.yml configuration:
xpack.monitoring.elasticsearch should have been xpack.monitoring.elasticsearch.hosts
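For anyone else hitting this, the corrected line in kibana.yml looks like the following (the host value comes from the exporter config above):

# kibana.yml on the Kibana instance that should read from the monitoring cluster
xpack.monitoring.elasticsearch.hosts: ["http://monitoring-node-ip-here:9200"]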
I had exactly the same problem, but the root cause was something different. Have a look here:
Okay, I used to have the same problem: my Kibana did not show monitoring graphs, even though the monitoring index .monitoring-es-* was available.
The root of the problem in my case was that my master nodes did not expose the :9200 HTTP socket to the LAN. This was my config on the master nodes:
...
transport.host: [ "192.168.7.190" ]
transport.port: 9300
http.port: 9200
http.host: [ "127.0.0.1" ]
...
As you can see, the HTTP socket was available only from within the host.
I didn't want anyone making HTTP requests to the masters from the LAN, because there seemed to be no point in doing so.
However, as I understand it, Kibana does not only read data from the monitoring index .monitoring-es-*; it also makes some requests directly to the masters to gather information.
That is exactly why Kibana did not show anything about monitoring.
After I changed one line in the config on the master nodes to
http.host: [ "192.168.0.190", "127.0.0.1" ]
Kibana immediately started to show the monitoring graphs. I recreated this experiment several times, and now everything is working.
I also want to point out that, even though everything is fine now, my monitoring index .monitoring-es-* still does NOT have "cluster_stats" documents.
So if your Kibana does not show monitoring graphs, I suggest you:
check that the index .monitoring-es-* exists
check that your master nodes can serve HTTP requests from the LAN (see the curl checks below)
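A quick way to run both checks from another machine on the LAN; the IP is the master node's address from my config above, so substitute your own:

# List the monitoring indices, if any exist.
curl 'http://192.168.7.190:9200/_cat/indices/.monitoring-es-*'
# Confirm the master answers HTTP at all.
curl 'http://192.168.7.190:9200/_cluster/health?pretty'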
There is a problem with the Graphite Docker images I am trying to run on my PC. The containers start up gracefully, but I'm not able to send any message that shows up under the "Metrics" tab. Mounting volumes doesn't help either. The default storage-schemas.conf should accept all messages.
The message used for testing is:
echo "test.bash.stats 42 $(date +%s)" | nc localhost 2003
Moreover, most of the time (but not always) sending the message above gets a "400 Bad Request" error in response.
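For what it's worth, one way to check whether the datapoint actually reached carbon is the render API; this assumes the container maps the Graphite web UI to port 80, as the first image's docs suggest:

# Fetch the last 5 minutes of the test series as JSON; an empty result
# means carbon never stored the point.
curl 'http://localhost/render?target=test.bash.stats&format=json&from=-5min'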
The following images have been tested:
https://hub.docker.com/r/hopsoft/graphite-statsd/
https://hub.docker.com/r/kamon/grafana_graphite/
Any ideas? Is there something additional I am missing in the configuration?
Apart from the issue explained above, there is a question related to exporting Spring Boot metrics to Graphite over StatsD.
As described at http://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-metrics.html in section 49.8.3, "Example: Export to Statsd", the only requirements are to add com.timgroup:java-statsd-client as a dependency and to set the spring.metrics.export.statsd.host property.
Unfortunately, nothing is sent to Graphite (the Docker image running on my local PC: https://github.com/kamon-io/docker-grafana-graphite). I have checked the network with Wireshark (udp.port=8125). Is there maybe something missing in my Spring Boot metrics setup?
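For reference, this is my understanding of the two pieces the cited section calls for; the version number and the prefix property value are illustrative, not taken from my actual project:

<!-- pom.xml: StatsD client that Spring Boot 1.x detects on the classpath -->
<dependency>
    <groupId>com.timgroup</groupId>
    <artifactId>java-statsd-client</artifactId>
    <version>3.1.0</version> <!-- illustrative version -->
</dependency>

# application.properties
spring.metrics.export.statsd.host=localhost
spring.metrics.export.statsd.port=8125
spring.metrics.export.statsd.prefix=myapp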
Using td-agent to forward my logs to a log aggregate node running Kibana/Elasticsearch/td-agent, I have my forwarders' config set up like this:
<match mytag.**>
  type forward
  flush_interval 10s
  <server>
    host myserver.com
    port 24224
  </server>
</match>
My log aggregate node is mapped via DNS to myserver.com
I configured everything, and logs were collecting on my aggregate node just fine. Then I decided to spin up a new aggregate node to test a different configuration, and I changed my DNS to point myserver.com at this new node instead.
I can access the new Kibana instance on the new node via DNS just fine, but my forwarders all seem to be having trouble connecting. The td-agent logs on the forwarders show:
2015-12-24 16:11:26 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2015-12-24 16:20:30 +0000 error_class="RuntimeError" error="no nodes are available" plugin_id="object:3fd1a993acf0"
The "no nodes are available" portion tells me that it can't connect to the forward server. I gave it some time but had the same result each time td-agent retried. I did a restart of td-agent and everything connected just fine.
Do I really need to restart td-agent on every server that is forwarding in order to connect to the new aggregate node? I was really hoping that td-agent could just use the DNS to dynamically shift.
Is there any way to do this? Maybe I need a load balancer to handle the swap?
The expire_dns_cache parameter may help:
http://docs.fluentd.org/articles/out_forward#expirednscache
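By default, out_forward resolves the server hostname once and caches the result, which is why the forwarders kept trying the old node until they were restarted. A sketch of the forwarder config with the cache expiry added; the 60-second value is an arbitrary example:

<match mytag.**>
  type forward
  flush_interval 10s
  # Re-resolve myserver.com every 60 seconds instead of caching the address forever.
  expire_dns_cache 60
  <server>
    host myserver.com
    port 24224
  </server>
</match>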
Kibana is unable to load data from Elasticsearch. I can see the log below in Elasticsearch. I am using Elasticsearch version 1.4.2. Is this something related to load? Could anyone please help me?
[2015-11-05 22:39:58,505][DEBUG][action.bulk ] [Oddball] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
Elasticsearch by default runs at http://localhost:9200.
Make sure you have the proper URL in kibana.yml:
# Kibana is served by a back end server. This controls which port to use.
port: 5601
# The host to bind the server to.
#host: example.com
# The Elasticsearch instance to use for all your queries.
elasticsearch_url: "http://localhost:9200"
Also, in the Elasticsearch config elasticsearch.yml, provide the cluster name and http.cors.allow-origin:
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: elasticsearch
http.cors.allow-origin: "/.*/"
I was able to solve this by setting up a new node for Elasticsearch and clearing the unassigned shards by setting the replica count to 0.
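The replica change is an index settings update along these lines; the index name is a placeholder:

PUT /my-index/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}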