I'm currently trying out Elasticsearch. I've already taken the first steps and installed it following the official instructions. But in my initial attempts, I get this output that I don't understand. See the text below:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.
ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
77kydX8WQaI6V5RXnEDB
ℹ️  HTTP CA certificate SHA-256 fingerprint:
b40aedc367164cf6d0f6a09ea2d6e6b258e41b94bbcd43db8f268238d590d472
ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
eyJ2ZXIiOiI4LjUuMyIsImFkciI6WyIxOTIuMTY4LjEuMTc5OjkyMDAiXSwiZmdyIjoiYjQwYWVkYzM2NzE2NGNmNmQwZjZhMDllYTJkNmU2YjI1OGU0MWI5NGJiY2Q0M2RiOGYyNjgyMzhkNTkwZDQ3MiIsImtleSI6IkFNZFBUNFVCenk4T19Mcy1lNmFvOmFoWnBnM1AtU3JDcEdlTXFPLS00M0EifQ==
ℹ️  Configure other nodes to join this cluster:
• On this node:
  ⁃ Create an enrollment token with `bin/elasticsearch-create-enrollment-token -s node`.
  ⁃ Uncomment the transport.host setting at the end of config/elasticsearch.yml.
  ⁃ Restart Elasticsearch.
• On other nodes:
  ⁃ Start Elasticsearch with `bin/elasticsearch --enrollment-token <token>`, using the enrollment token that you generated.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
-----------------------------------------------------------------------
and this message when I try to reach Elasticsearch on port 9200:
[2022-12-26T17:55:47,141][WARN ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [DESKTOP-VLBL46H] received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/[0:0:0:0:0:0:0:1]:9200, remoteAddress=/[0:0:0:0:0:0:0:1]:60933}
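For context, the warning means the node is now serving HTTPS on port 9200 and something sent it plain HTTP. A minimal sketch of a request that should get through, assuming a default 8.x install where the generated CA sits at config/certs/http_ca.crt, and using the elastic password printed above:
curl --cacert config/certs/http_ca.crt -u elastic:77kydX8WQaI6V5RXnEDB https://localhost:9200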
I am using the http_poller input plugin, which is scheduled every 15 minutes. Based on the http_poller API response, I need to execute an Elasticsearch query.
For that I am using the Elasticsearch filter plugin. It runs fine the first time, but from the second run onwards it throws the error below:
[2022-05-09T11:34:46,738][WARN ][logstash.filters.elasticsearch][logs][9c5fb8a0078cad1be396fedd387eb8680d72086b85be9efe15e6893ce2e73332] Failed to query elasticsearch for previous event {:index=>"logs-xx-prod_xx", :error=>"Read timed out"}
It also throws the error below for the Elasticsearch output plugin from the second run onwards:
[2022-05-09T11:35:17,236][WARN ][logstash.outputs.elasticsearch][logs][8850a096b09c55eca7744c74cb4821d3f6e42a3e87a464228013b22ea1f0d576] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [https://elastic:xxxxxx#test.westeurope.azure.elastic-cloud.com:9243/][Manticore::SocketException] Connection reset by peer: socket write error {:url=>https://elastic:xxxxxx#test.westeurope.azure.elastic-cloud.com:9243/, :error_message=>"Elasticsearch Unreachable: [https://elastic:xxxxxx#test.westeurope.azure.elastic-cloud.com:9243/][Manticore::SocketException] Connection reset by peer: socket write error", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2022-05-09T11:35:17,236][ERROR][logstash.outputs.elasticsearch][logs][8850a096b09c55eca7744c74cb4821d3f6e42a3e87a464228013b22ea1f0d576] Attempted to send a bulk request but Elasticsearch appears to be unreachable or down {:message=>"Elasticsearch Unreachable: [https://elastic:xxxxxx#test.westeurope.azure.elastic-cloud.com:9243/][Manticore::SocketException] Connection reset by peer: socket write error", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :will_retry_in_seconds=>2}
[2022-05-09T11:35:19,236][ERROR][logstash.outputs.elasticsearch][logs][8850a096b09c55eca7744c74cb4821d3f6e42a3e87a464228013b22ea1f0d576] Attempted to send a bulk request but there are no living connections in the pool (perhaps Elasticsearch is unreachable or down?) {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError, :will_retry_in_seconds=>4}
[2022-05-09T11:35:19,377][WARN ][logstash.outputs.elasticsearch][logs] Restored connection to ES instance {:url=>"https://elastic:xxxxxx#test.westeurope.azure.elastic-cloud.com:9243/"}
I have configured the Logstash pipeline from Kibana using the centralized pipeline management of ES 7.16.
I have tried the configurations below, but none of them seems to work (a rough sketch of the output block follows the list):
Changed the pipeline batch size to 100, then 50, then 25.
Set pipeline workers to 1.
Set validate_after_inactivity to 0, and tried different values as well, in the Elasticsearch output plugin.
Tried various timeout values like 100, 180, 200, 600, etc.
Previously I was setting a custom document ID using the document_id param; that is also disabled now.
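For reference, the output block with the settings above would look roughly like this (host, index, and credentials are placeholders taken from the logs, not my exact config):
output {
  elasticsearch {
    hosts => ["https://test.westeurope.azure.elastic-cloud.com:9243"]
    user => "elastic"
    password => "xxxxxx"
    index => "logs-xx-prod_xx"
    timeout => 180                       # one of the values tried
    validate_after_inactivity => 0
    # document_id => "%{some_id_field}"  # disabled while troubleshooting
  }
}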
One strange behavior I have noticed is that the document count in the ES index still increases even after the above errors.
Also, there seems to be no option to set a timeout in the Elasticsearch filter plugin; when I tried to set one, it threw an error saying the "timeout param is not supported".
I'm trying to configure an Elasticsearch data source for Grafana. I have them both running locally in Docker, both at version 7.2.0. For Grafana I provide the ES URL as http://localhost:9200, the index name, the time field, and the ES version. All other parameters keep their default values.
When I save my config, I see the following in the Grafana logs:
t=2021-02-14T14:55:58+0000 lvl=eror msg="Data proxy error" logger=data-proxy-log userId=1 orgId=1 uname=admin path=/api/datasources/proxy/1/<index>/_mapping remote_addr=172.17.0.1 referer="http://localhost:3000/datasources/edit/1/?utm_source=grafana_gettingstarted" error="http: proxy error: dial tcp 127.0.0.1:9200: connect: connection refused"
t=2021-02-14T14:55:58+0000 lvl=info msg="Request Completed" logger=context userId=1 orgId=1 uname=admin method=GET path=/api/datasources/proxy/1/<index>/_mapping status=502 remote_addr=172.17.0.1 time_ms=1 size=0 referer="http://localhost:3000/datasources/edit/1/?utm_source=grafana_gettingstarted"
I can't figure out why Grafana tries to get the mapping from some unknown IP, or how to configure it properly.
By the way, a request to http://localhost:9200/<index>/_mapping returns the correct mapping.
According to the Grafana documentation about configuration,
the "URL needs to be accessible from the Grafana backend/server", so try replacing "http://localhost:9200" with "http://elasticsearch:9200" instead. I had the same issue before, and this change fixed it for me :)
Plus: "elasticsearch" is the default name of the Elasticsearch container (in case you are running it with Docker), so that is the reason for the name.
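In case it helps, a minimal docker-compose sketch of this setup (image tags and service names are illustrative; the key point is that both containers share a network, so Grafana can reach Elasticsearch by its service name):
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  grafana:
    image: grafana/grafana:7.2.0
    ports:
      - "3000:3000"
    depends_on:
      - elasticsearch
With this, the data source URL in Grafana would be http://elasticsearch:9200.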
I'm using DefaultMarkLogicDatabaseClientService 1.9.1.3-incubator in NiFi 1.11.4. MarkLogic 10.0-4 is running on AWS and has an app server where SSL is configured at the AWS level.
How do I configure the DefaultMarkLogicDatabaseClientService to use HTTPS without needing an SSL Context Service?
Details:
Before SSL was set up, the DefaultMarkLogicDatabaseClientService was able to connect. Once SSL was set up, I'd get this error:
PutMarkLogic[id=bbb8f3c3-7d83-3fb7-454f-9da7d64fa3f6] Failed to properly initialize Processor. If still scheduled to run, NiFi will attempt to initialize and run the Processor again after the 'Administrative Yield Duration' has elapsed. Failure is due to com.marklogic.client.MarkLogicIOException: java.io.IOException: unexpected end of stream on Connection{my-host:8010, proxy=DIRECT hostAddress=my-host/my-IP:8010 cipherSuite=none protocol=http/1.1}: com.marklogic.client.MarkLogicIOException: java.io.IOException: unexpected end of stream on Connection{my-host:8010, proxy=DIRECT hostAddress=my-ost/my-IP:8010 cipherSuite=none protocol=http/1.1}
Okay, it seems the connection fails because it uses plain HTTP against a server that now requires HTTPS. I see that the service can be configured to use an SSL Context Service, but I'm not looking to do client authentication. (Setting this up requires a truststore or keystore.)
If I replace the PutMarkLogic processor that uses the DefaultMarkLogicDatabaseClientService with an InvokeHTTP processor, I can specify the full URL, including "https://", without needing an SSL Context Service (but then I don't get the batching that I get with PutMarkLogic). I'd like to simply tell the MarkLogic service to use HTTPS.
Creating an SSLContextService with the truststore populated (containing the public certificate of the MarkLogic server) and no keystore populated should work in this situation.
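A rough sketch of building such a truststore, assuming the host and port from the error above (file names and passwords are placeholders):
# Grab the server's public certificate
openssl s_client -connect my-host:8010 -showcerts </dev/null | openssl x509 -outform PEM > marklogic.crt
# Import it into a truststore that the SSLContextService can point at (leave the keystore properties empty)
keytool -importcert -noprompt -alias marklogic -file marklogic.crt -keystore marklogic-truststore.jks -storepass changeit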
Yesterday I set up a dedicated single monitoring node following this guide.
I managed to fire up the new monitoring node with the same ES version (6.6.0) as the cluster, then added these lines to the elasticsearch.yml file on all ES cluster nodes:
xpack.monitoring.exporters:
  id1:
    type: http
    host: ["http://monitoring-node-ip-here:9200"]
Then I restarted all nodes and Kibana (which is actually running on one of the nodes of the ES cluster).
Now I can see today's monitoring data indices being sent to the new external monitoring node, but Kibana shows a "You need to make some adjustments" message when I access the "Monitoring" section:
We checked the `cluster defaults` settings for `xpack.monitoring.exporters`, and found the
reason: `Remote exporters indicate a possible misconfiguration: id1`
Check that the intended exporters are enabled for sending statistics to the monitoring cluster,
and that the monitoring cluster host matches the `xpack.monitoring.elasticsearch` setting in
`kibana.yml` to see monitoring data in this instance of Kibana.
I already checked that all nodes can ping each other. Also, I don't have X-Pack security, so I haven't created any additional "remote_monitor" user.
I followed the error message and tried to add xpack.monitoring.elasticsearch to the kibana.yml file, but I ended up with the following error:
FATAL ValidationError: child "xpack" fails because [child "monitoring" fails because [child
"elasticsearch" fails because ["url" is not allowed]]]
I hope someone can help me figure out what's wrong.
EDIT #1
Solved: the problem was due to monitoring collection not being disabled in the monitoring cluster:
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": false
  }
}
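A quick check from the same Dev Tools console (the persistent section of the response should now show xpack.monitoring.collection.enabled set to false):
GET _cluster/settings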
Additionally, I made a mistake in the kibana.yml configuration:
xpack.monitoring.elasticsearch should have been xpack.monitoring.elasticsearch.hosts.
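A sketch of the corrected kibana.yml line (the host is a placeholder for the monitoring node):
xpack.monitoring.elasticsearch.hosts: ["http://monitoring-node-ip-here:9200"]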
I had exactly the same problem, but the root cause was something different. Here, have a look:
Okay, I used to have the same problem.
My Kibana did not show monitoring graphs; however, I had the monitoring index .monitoring-es-* available.
The root of the problem in my case was that my master nodes did not have the :9200 HTTP socket available from the LAN. That is, my config on the master nodes was:
...
transport.host: [ "192.168.7.190" ]
transport.port: 9300
http.port: 9200
http.host: [ "127.0.0.1" ]
...
As you can see, the HTTP socket is available only from within the host.
I didn't want anyone making HTTP requests to the masters from the LAN, because there is no point in doing that.
However, as I understand it, Kibana does not only read data from the monitoring index .monitoring-es-*,
but also makes some requests directly to the masters to get some information.
That was exactly why Kibana did not show anything about monitoring.
After I changed one line in the config on the master nodes to
http.host: [ "192.168.0.190", "127.0.0.1" ]
Kibana immediately started to show monitoring graphs.
I recreated this experiment several times. Now everything is working.
Also, I want to underline that even though everything is fine now, my monitoring index .monitoring-es-*
does NOT have "cluster_stats" documents.
So if your Kibana does not show monitoring graphs, I suggest the following (quick sketches of both checks follow this list):
check whether the index .monitoring-es-* exists
check whether your master nodes can serve HTTP requests from the LAN
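A couple of quick checks as curl sketches (the hostname is a placeholder; adjust the port if you changed the default):
curl 'http://your-master-node:9200'                                  # should answer from the LAN, not only from localhost
curl 'http://your-master-node:9200/_cat/indices/.monitoring-es-*?v'  # the monitoring index should be listed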
I am using ES 5.0, Kibana 5.0 alpha4, and various Beats to send data to ES. Everything runs fine and smoothly at first. However, after a day, the Beats suddenly cannot send data to ES. I have been using different Beats, including Winlogbeat, Metricbeat, etc. All of them suddenly stop working.
The error is shown below:
2016/07/16 15:22:15.259659 single.go:130: INFO Connecting error publishing events (retrying): 401 Unauthorized
2016/07/16 15:22:15.259695 single.go:145: INFO send fail
What would be the issue?
You have enabled authentication on the Elasticsearch server, such as Shield (Security).
Put this in the Beats configuration:
elasticsearch:
  username: `Your username here`
  password: `Your password here`
  protocol: `Choose your protocol here (http or https)`
  hosts: `Array of Elasticsearch hosts like ["elasticsearch.example.com:9200"]`
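For example, a filled-in sketch (hostname and credentials are placeholders; in Beats 5.x and later this section is usually written as output.elasticsearch in the beat's YAML file):
output.elasticsearch:
  hosts: ["elasticsearch.example.com:9200"]
  protocol: "https"
  username: "beats_writer"
  password: "changeme"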