Elasticsearch with Spring Boot not working - elasticsearch

In my Spring Boot project:
gradle:
dependencies {
    compile('org.springframework.boot:spring-boot-starter-data-elasticsearch')
    compile('io.searchbox:jest:2.0.3')
    runtime('net.java.dev.jna:jna')
}
config.yml:
spring:
  data:
    elasticsearch:
      cluster-nodes: 10.19.132.207:9300
      cluster-name: es
  elasticsearch:
    jest:
      uris: http://10.19.132.207:9200
      read-timeout: 10000
And my Elasticsearch config:
cluster.name: es
node.name: node-1
network.host: 0.0.0.0
transport.tcp.port: 9300
http.port: 9200
When I try to save data to Elasticsearch, the console prints:
Caused by: org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{10.19.132.207}{10.19.132.207:9300}]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:326) ~[elasticsearch-2.4.4.jar:2.4.4]
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:223) ~[elasticsearch-2.4.4.jar:2.4.4]
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55) ~[elasticsearch-2.4.4.jar:2.4.4]
And the Elasticsearch log prints:
java.lang.IllegalStateException: Received message from unsupported version: [2.0.0] minimal compatible version is: [5.0.0]
at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1323) ~[elasticsearch-5.2.2.jar:5.2.2]
at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[transport-netty4-5.2.2.jar:5.2.2]
How can I resolve this problem?

At first glance, it appears that neither 'spring-boot-starter-data-elasticsearch' nor 'jest 2.0.3' supports Elasticsearch 5: your stack traces show a 2.4.4 transport client being rejected by a 5.2.2 server as an unsupported version. I'd try downgrading your Elasticsearch instance to 2.4.4 and see if that works.
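Alternatively, if downgrading the server is not an option, the client side could be brought up to Elasticsearch 5 instead. This is a sketch, not a verified fix — it assumes a Spring Boot/Spring Data Elasticsearch combination that actually supports ES 5 (Boot 1.x's does not) and a Jest release built against ES 5:

// build.gradle — sketch: align the client libraries with a 5.2.2 server.
// Override the Elasticsearch version that Spring Boot's dependency
// management pulls in (only helps if Spring Data Elasticsearch supports it).
ext['elasticsearch.version'] = '5.2.2'

dependencies {
    compile('org.springframework.boot:spring-boot-starter-data-elasticsearch')
    // Jest 5.x targets Elasticsearch 5; 2.0.3 targets 2.x
    compile('io.searchbox:jest:5.3.3')
    runtime('net.java.dev.jna:jna')
}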

Related

Elasticsearch stays in initializing stage, without actually working or throwing an error

I am using Elasticsearch version 6.4.3, and it was working as expected until this morning.
When I run elasticsearch.exe, it stays in initializing state, without throwing any errors.
elasticsearch.log
[2022-06-07T12:17:24,504][INFO ][o.e.n.Node ] [JO01HQESKAVM02] initializing ...
[2022-06-07T12:17:24,770][INFO ][o.e.e.NodeEnvironment ] [JO01HQESKAVM02] using [1] data paths, mounts [[(C:)]], net usable_space [136.2gb], net total_space [499.4gb], types [NTFS]
[2022-06-07T12:17:24,770][INFO ][o.e.e.NodeEnvironment ] [JO01HQESKAVM02] heap size [15.9gb], compressed ordinary object pointers [true]
elasticsearch.yml
bootstrap.memory_lock: false
cluster.name: elasticsearch
http.port: 9200
network.host: 0.0.0.0
node.data: true
node.ingest: true
node.master: true
node.max_local_storage_nodes: 1
node.name: JO01HQESKAVM02
path.data: C:\ELK\Elasticsearch\data
path.logs: C:\ELK\Elasticsearch\logs
transport.tcp.port: 9300
path.repo: ["/mount/backups", "/mount/longterm_backups"]
xpack.license.self_generated.type: basic
xpack.security.enabled: false
Any help?
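One way to see which step the node is actually stuck on (a diagnostic sketch, not a fix; logger.* entries in elasticsearch.yml are the standard way to raise log verbosity) is to turn up logging and start the node again:

# elasticsearch.yml — diagnostic sketch: raise verbosity so the phase
# the node hangs in shows up in elasticsearch.log
logger.org.elasticsearch: debug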

Elasticsearch Sink Kafka Connector fails with ConnectionClosedException

We are running a Confluent ElasticsearchSinkConnector on a dedicated K8s Kafka Connect cluster; everything seems to be working well and records appear in our Elasticsearch cluster.
Once in a while we get an unrecoverable error, which fails the task(s) and requires a manual restart of the connector(s).
There is not much detail regarding the error:
Caused by: org.apache.kafka.connect.errors.ConnectException: Bulk request failed
due to
Caused by: org.apache.http.ConnectionClosedException: Connection is closed
We are running with the following configurations:
Class: io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
Config:
batch.size: 1000
behavior.on.malformed.documents: warn
behavior.on.null.values: delete
connection.compression: true
connection.password: my-password
connection.timeout.ms: 30000
connection.url: https://es-http.com:9200
connection.username: elastic
errors.log.enable: true
errors.log.include.messages: true
errors.tolerance: all
key.converter: org.apache.kafka.connect.storage.StringConverter
read.timeout.ms: 30000
retry.backoff.ms: 60000
schema.ignore: true
Topics: my-topic
Transforms: ExtractField
transforms.ExtractField.field: metadata
transforms.ExtractField.type: org.apache.kafka.connect.transforms.ExtractField$Value
value.converter: org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable: false
Tasks Max: 10
We are running a 3-node Elasticsearch cluster from this image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0, not sure if it is relevant.
There are no extra logs on either the Elasticsearch cluster or the Kafka Connect cluster.
Any suggestions?
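One avenue worth trying (a sketch, not a verified fix) is to give the connector more retry headroom, so a transiently closed connection is retried instead of failing the task. max.retries is a standard ElasticsearchSinkConnector setting alongside the retry.backoff.ms already in use; the values below are illustrative:

max.retries: 10          # more bulk-request retry attempts before the task is failed
retry.backoff.ms: 5000   # shorter wait between attempts than the current 60000
read.timeout.ms: 60000   # more time for slow bulk responses before the client gives up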

Unable to run logstash with file configuration on docker

I am trying to run the ELK stack using Docker. Unfortunately, the Logstash container is not running, and I am unable to find the exact reason why it's failing.
Here is my docker-compose file:
version: '3.7'
services:
  elasticsearch:
    image: elasticsearch:7.9.2
    ports:
      - '9200:9200'
    networks:
      - elk
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
  logstash:
    image: logstash:7.9.2
    ports:
      - '5000:5000'
    networks:
      - elk
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
logstash.yml
---
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
## X-Pack security credentials
#
xpack.monitoring.enabled: true
#xpack.monitoring.elasticsearch.username: elastic
#xpack.monitoring.elasticsearch.password: changeme
logstash.conf
input {
  file {
    path => "C:\Users\User1\Downloads\library-mgmt-system-logs\user-service\user-service.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "library-mgmt-system-logstash-index"
    ecs_compatibility => disabled
  }
}
logstash shutdown logs:
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby280139731768845147jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2021-08-01T08:42:44,135][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +indy +jit [linux-x86_64]"}
[2021-08-01T08:42:44,172][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2021-08-01T08:42:44,184][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2021-08-01T08:42:44,578][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"b15dc5df-3deb-4698-aa37-e114a733bfa9", :path=>"/usr/share/logstash/data/uuid"}
[2021-08-01T08:42:45,186][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
Please configure Metricbeat to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[2021-08-01T08:42:46,007][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2021-08-01T08:42:46,306][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2021-08-01T08:42:46,394][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2021-08-01T08:42:46,399][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-08-01T08:42:46,642][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2021-08-01T08:42:46,644][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2021-08-01T08:42:48,382][INFO ][org.reflections.Reflections] Reflections took 32 ms to scan 1 urls, producing 22 keys and 45 values
[2021-08-01T08:42:48,706][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2021-08-01T08:42:48,706][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2021-08-01T08:42:48,725][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2021-08-01T08:42:48,725][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2021-08-01T08:42:48,736][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2021-08-01T08:42:48,736][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
[2021-08-01T08:42:48,736][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-08-01T08:42:48,736][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-08-01T08:42:48,785][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
[2021-08-01T08:42:48,788][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2021-08-01T08:42:48,793][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2021-08-01T08:42:48,833][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-08-01T08:42:48,879][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0xb20b7c7@/usr/share/logstash/logstash-core/lib/logstash/pipelines_registry.rb:141 run>"}
[2021-08-01T08:42:48,888][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x62ff495a@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:122 run>"}
[2021-08-01T08:42:48,901][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2021-08-01T08:42:48,931][INFO ][logstash.outputs.elasticsearch][main] Installing elasticsearch template to _template/logstash
[2021-08-01T08:42:49,686][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.81}
[2021-08-01T08:42:49,688][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.8}
[2021-08-01T08:42:49,730][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2021-08-01T08:42:50,840][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
[2021-08-01T08:42:51,147][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2021-08-01T08:42:53,108][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2021-08-01T08:42:53,162][INFO ][logstash.runner ] Logstash shut down.
I resolved this issue. Please refer to the updated files below:
docker-compose.yaml
  logstash:
    image: logstash:7.13.4
    ports:
      - '5000:5000'
    networks:
      - elk
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
      - type: bind
        source: C:/Users/Rupesh_Patil/Desktop/logstash-data
        target: /usr/share/logs/
        read_only: true
    depends_on:
      - elasticsearch
logstash.conf
input {
  file {
    type => "user"
    path => "/usr/share/logs/user-service/user-service.log"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "library-mgmt-system-logstash-index"
    ecs_compatibility => disabled
  }
}
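For what it's worth, the decisive change is the extra bind mount: the Windows directory containing the logs is mounted into the container at /usr/share/logs/, and the file input now reads that container-side path instead of the original C:\ host path, which never existed inside the container.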

EFK with Searchguard

I have installed an EFK stack to collect the nginx access log.
With a fresh install I'm able to send data from Fluentd to Elasticsearch without any problem. However, I then installed Searchguard to implement authentication on Elasticsearch and Kibana, and I'm now able to log in to Kibana and Elasticsearch with Searchguard's demo user credentials.
My problem is that Fluentd is unable to connect to Elasticsearch. In the td-agent log I'm getting the following messages:
2018-07-19 15:20:34 +0600 [warn]: #0 failed to flush the buffer. retry_time=5 next_retry_seconds=2018-07-19 15:20:34 +0600 chunk="57156af05dd7bbc43d0b1323fddb2cd0" error_class=Fluent::Plugin::ElasticsearchOutput::ConnectionFailure error="Can not reach Elasticsearch cluster ({:host=>\"<elasticsearch-ip>\", :port=>9200, :scheme=>\"http\", :user=>\"logstash\", :password=>\"obfuscated\"})!"
Here is my Fluentd config
<source>
  @type forward
</source>
<match user_count.**>
  @type copy
  <store>
    @type elasticsearch
    host https://<elasticsearch-ip>
    port 9200
    ssl_verify false
    scheme https
    user "logstash"
    password "<logstash-password>"
    index_name "custom_user_count"
    include_tag_key true
    tag_key "custom_user_count"
    logstash_format true
    logstash_prefix "custom_user_count"
    type_name "custom_user_count"
    utc_index false
    <buffer>
      flush_interval 2s
    </buffer>
  </store>
</match>
sg_roles.yml:
sg_logstash:
  cluster:
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS
    - indices:admin/template/get
    - indices:admin/template/put
  indices:
    'custom*':
      '*':
        - CRUD
        - CREATE_INDEX
    'logstash-*':
      '*':
        - CRUD
        - CREATE_INDEX
    '*beat*':
      '*':
        - CRUD
        - CREATE_INDEX
Can anyone help me on this?
It seemed td-agent was using TLSv1 by default.
I added ssl_version TLSv1_2 to the config and now it's working.
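For reference, a sketch of where that option sits in the store block (fluent-plugin-elasticsearch accepts ssl_version):

<store>
  @type elasticsearch
  host https://<elasticsearch-ip>
  port 9200
  scheme https
  ssl_verify false
  ssl_version TLSv1_2   # force TLS 1.2; td-agent defaulted to TLSv1
  ...
</store>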

Not able to set up Elasticsearch cluster. Getting RemoteTransportException

I am trying to establish an Elasticsearch cluster with 3 nodes.
My node settings:
cluster.name: ControlElasticSearch
node.name: VinodNode
node.master: true
node.data: true
network.host: 132.186.102.84
discovery.zen.ping.unicast.hosts: ["132.186.102.49","132.186.189.127","132.186.102.84"]
discovery.zen.minimum_master_nodes: 2
When I restart my Elasticsearch, I get a RemoteTransportException.
[2017-05-29T17:33:58,374][INFO ][o.e.d.z.ZenDiscovery ] [VinodNode] failed to send join request to master
[{DivyaNode}{LKhUGAEXTzO-ZcglRYC_yQ}{YO0qeSh8QMWclmYcSCNyMg}{132.186.102.49}{132.186.102.49:9300}],
reason [RemoteTransportException[[DivyaNode][132.186.102.49:9300][internal:discovery/zen/join]];
nested: ConnectTransportException[[VinodNode][127.0.0.1:9300] connect_timeout[30s]];
nested: IOException[Connection refused: no further information: 127.0.0.1/127.0.0.1:9300];
nested: IOException[Connection refused: no further information]; ]
May I please know what might be the reason? If I just work with a single node, it works; when I try to set it up as a cluster, I face this issue.
I am able to ping the machines, and all the machines are on the same network.
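Worth noting, as a hypothesis drawn from the log rather than a confirmed fix: the master rejects the join because it tries to connect back to [VinodNode][127.0.0.1:9300] — a loopback address — so at least one node is still advertising 127.0.0.1. A sketch of the relevant settings, which need to be in effect (and the node restarted) on every machine; IPs are illustrative:

# elasticsearch.yml — sketch for each node: bind to and publish the
# machine's own LAN address so peers don't dial back on loopback
network.host: 132.186.102.49
# or keep the existing bind address and only change what is advertised:
# network.publish_host: 132.186.102.49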
