"None of the configured nodes are available" issue with Spring Boot - Elasticsearch

Hi friends, I am developing a Spring Boot project with Elasticsearch. I have set up Elasticsearch on my local machine and installed the Head plugin. My Elasticsearch setup is correct and shows a green status.
My application-dev.yml file in my project is as follows:
server:
    port: 8080

liquibase:
    context: dev

spring:
    profiles:
        active: dev
    datasource:
        dataSourceClassName: org.h2.jdbcx.JdbcDataSource
        url: jdbc:h2:mem:jhipster;DB_CLOSE_DELAY=-1
        databaseName:
        serverName:
        username:
        password:
    jpa:
        database-platform: com.aquevix.demo.domain.util.FixedH2Dialect
        database: H2
        openInView: false
        show_sql: true
        generate-ddl: false
        hibernate:
            ddl-auto: none
            naming-strategy: org.hibernate.cfg.EJB3NamingStrategy
        properties:
            hibernate.cache.use_second_level_cache: true
            hibernate.cache.use_query_cache: false
            hibernate.generate_statistics: true
            hibernate.cache.region.factory_class: org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory
    data:
        elasticsearch:
            cluster-name: elasticsearch
            cluster-nodes: localhost:9200
    messages:
        cache-seconds: 1
    thymeleaf:
        mode: XHTML
        cache: false
    activemq:
        broker-url: tcp://localhost:61616

metrics:
    jmx.enabled: true
    spark:
        enabled: false
        host: localhost
        port: 9999
    graphite:
        enabled: false
        host: localhost
        port: 2003
        prefix: TestApollo

cache:
    timeToLiveSeconds: 3600
    ehcache:
        maxBytesLocalHeap: 16M
The Elasticsearch service is running on my machine. When I try to save an entity, my code first saves it in MySQL and then in Elasticsearch using an Elasticsearch repository, but saving the entity to Elasticsearch throws this error:
Hibernate: insert into EMPLOYEE (id, rollno) values (null, ?)
[ERROR] com.aquevix.demo.aop.logging.LoggingAspect - Exception in com.aquevix.demo.web.rest.EmployeeResource.create() with cause = null
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:298) ~[elasticsearch-1.3.2.jar:na]
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:214) ~[elasticsearch-1.3.2.jar:na]
at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:105) ~[elasticsearch-1.3.2.jar:na]
at org.elasticsearch.client.support.AbstractClient.index(AbstractClient.java:94) ~[elasticsearch-1.3.2.jar:na]
at org.elasticsearch.client.transport.TransportClient.index(TransportClient.java:331) ~[elasticsearch-1.3.2.jar:na]
at org.elasticsearch.action.index.IndexRequestBuilder.doExecute(IndexRequestBuilder.java:313) ~[elasticsearch-1.3.2.jar:na]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91) ~[elasticsearch-1.3.2.jar:na]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65) ~[elasticsearch-1.3.2.jar:na]
at org.springframework.data.elasticsearch.core.ElasticsearchTemplate.index(ElasticsearchTemplate.java:431) ~[spring-data-elasticsearch-1.1.3.RELEASE.jar:na]
at org.springframework.data.elasticsearch.repository.support.AbstractElasticsearchRepository.save(AbstractElasticsearchRepository.java:138) ~[spring-data-elasticsearch-1.1.3.RELEASE.jar:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_51]
I have also used port 9300 instead of 9200, but nothing is working. I have tried everything but could not find a solution. Please help!

I have found the solution: ES 2.0 was not working correctly, so I re-installed ES 1.7.3 and now it is working in my case. Complete details here!

I had the same problem as you, and I am also using JHipster. As mentioned, one possible solution is to downgrade your Elasticsearch instance, but if you don't want to downgrade it, here is what worked for me:
Update Spring Boot to the latest version (> 1.4.0.RC1).
Configure ElasticsearchTemplate manually instead of relying on autoconfiguration.
If you need more information, please have a look at this post:
http://ignaciosuay.com/how-to-connect-spring-boot-to-elasticsearch-2-x-x/

I encountered this error, and for me, the reason was that I was using the incorrect cluster name.
Steps to troubleshoot this error:
1. Make sure that Spring Data Elasticsearch is compatible with the Elasticsearch version that you intend to use. There is a table in the project's README that maps Spring Data Elasticsearch versions to Elasticsearch versions:
https://github.com/spring-projects/spring-data-elasticsearch#quick-start
In my case, I am using Spring Data Elasticsearch 3.0.7. According to the table, I need to use Elasticsearch 5.5.0, but I have found that Spring Data Elasticsearch 3.0.7 appears to be compatible with Elasticsearch 5.6.x as well.
2. Make sure that the spring.data.elasticsearch.cluster-nodes property specifies the port your Elasticsearch cluster uses for communication over the native Elasticsearch transport protocol.
By default, Elasticsearch listens on two ports, 9200 and 9300. Port 9200 is for communication using the RESTful API. Port 9300 is for communication using the transport protocol:
https://www.elastic.co/guide/en/elasticsearch/guide/current/_talking_to_elasticsearch.html
The Java client that Spring Data Elasticsearch uses expects to communicate over the transport protocol (9300 by default); see the combined sketch after these steps.
3. Make sure that the spring.data.elasticsearch.cluster-name property specifies the correct cluster name.
If you do not specifically set this property, then the default is "elasticsearch".
You can look up the Elasticsearch cluster name using the RESTful API:
curl -XGET 'http://localhost:9200/?pretty'
This command will print something similar to:
{
  "name" : "XXXXXXX",
  "cluster_name" : "some_cluster_name",
  "cluster_uuid" : "XXXXXXXXXXXXXXXXXXXXXX",
  "version" : {
    "number" : "5.6.10",
    "build_hash" : "b727a60",
    "build_date" : "2018-06-06T15:48:34.860Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
Make sure to set the value of the spring.data.elasticsearch.cluster-name property to the same string shown for "cluster_name".
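Putting the port and cluster-name checks together, a minimal sketch of the relevant properties for a local, default-port setup might look like this (some_cluster_name is just the example value from the curl output above; use whatever your cluster actually reports):
spring:
    data:
        elasticsearch:
            cluster-name: some_cluster_name
            cluster-nodes: localhost:9300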

You seem to be using JHipster (a wonderful toolset, if I may add), which uses
org.springframework.boot:spring-boot-starter-data-elasticsearch: -> 1.3.3.RELEASE
This only works with Elasticsearch below 2.0, so just install Elasticsearch 1.7.3 and run your code.

Related

Automated Setup of Kibana and Elasticsearch with Filebeat Module in Elastic Cloud for Kubernetes (ECK)

I'm trying out the K8s Operator (a.k.a. ECK) and so far, so good.
However, I'm wondering what the right pattern is for, say, configuring Kibana and Elasticsearch with the Apache module.
I know I can do it ad hoc with:
filebeat setup --modules apache2 --strict.perms=false \
--dashboards --pipelines --template \
-E setup.kibana.host="${KIBANA_URL}"
But what's the automated way to do it? I see some docs for the Kibana dashboard portion of it but what about the rest (pipelines, etc.)?
Note: At some point, I may end up actually running a beat for the K8s cluster, but I'm not at that stage yet. At the moment, I just want to set Elasticsearch/Kibana up with the Apache module additions so that external Apache services' Filebeats can get ingested/displayed properly.
FYI, I'm on version 6.8 of the Elastic stack for now.
You can try autodiscover using a label-based approach.
Config:
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.default_config.enabled: "false"
      templates:
        - condition.contains:
            kubernetes.labels.app: "apache"
          config:
            - module: apache
              access:
                enabled: true
                var.paths: ["/path/to/log/apache/access.log*"]
              error:
                enabled: true
                var.paths: ["/path/to/log/apache/error.log*"]
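For reference, a hypothetical workload label that the condition.contains rule above would match (any pod whose app label contains "apache"):
# hypothetical pod/deployment labels matched by the template above
metadata:
  labels:
    app: apache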

Use Cloud Foundry VCAP env variables in Spring's application.yml

In the application.yml of my Spring Boot app, I want to configure the Actuator metrics to push to my Elastic server.
metrics:
  enable:
    all: false
    diskspace: true
    jvm: true
    mycustomstuff: true
  export:
    elastic:
      enabled: true
      host: https://${vcap.services.my-cloud-logging.credentials.Elasticsearch-endpoint}
      password: ${vcap.services.my-cloud-logging.credentials.Elasticsearch-password}
      user-name: ${vcap.services.my-cloud-logging.credentials.Elasticsearch-username}
      auto-create-index: false
      index: metrics
But Micrometer keeps failing when sending metrics because my variable is not expanded properly:
Illegal character in authority at index 8: https://${vcap.services.my-cloud-logging.credentials.Elasticsearch-endpoint}/metrics-2021-10/_bulk"
I checked at runtime that all the variables have the correct values. They are created by CloudFoundryVcapEnvironmentPostProcessor.
Actually, I have the feeling the problem is caused by the concatenation of "https://" with the variable.
This is also confirmed by this unanswered question, where the OP wants to prepend "jdbc:" to a VCAP variable.
Using double quotes, as in host: "https://${vcap.services.my-cloud-logging.credentials.Elasticsearch-endpoint}", didn't help either.

Filebeat's GCP module keeps getting a hash config error

I am currently trying to forward GCP's Cloud Logging to Filebeat, to be forwarded on to Elasticsearch, following these docs, with the GCP module settings on Filebeat configured according to these docs.
Currently I am only trying to forward audit logs, so my gcp.yml module configuration is as follows:
- module: gcp
  vpcflow:
    enabled: false
    var.project_id: my-gcp-project-id
    var.topic: gcp-vpc-flowlogs
    var.subscription_name: filebeat-gcp-vpc-flowlogs-sub
    var.credentials_file: ${path.config}/gcp-service-account-xyz.json
    #var.internal_networks: [ "private" ]
  firewall:
    enabled: false
    var.project_id: my-gcp-project-id
    var.topic: gcp-vpc-firewall
    var.subscription_name: filebeat-gcp-firewall-sub
    var.credentials_file: ${path.config}/gcp-service-account-xyz.json
    #var.internal_networks: [ "private" ]
  audit:
    enabled: true
    var.project_id: <my prod name>
    var.topic: sample_topic
    var.subscription_name: filebeat-gcp-audit
    var.credentials_file: ${path.config}/<something>.<something>
When I run sudo filebeat setup, I keep getting this error:
2021-05-21T09:02:25.232Z ERROR cfgfile/reload.go:258 Error loading configuration files: 1 error: Unable to hash given config: missing field accessing '0.firewall' (source:'/etc/filebeat/modules.d/gcp.yml')
I can start the service, but I don't see any logs forwarded from GCP's Cloud Logging Pub/Sub topic to Elasticsearch.
Help or tips on best practices would also be appreciated.
Update
If I follow the docs here instead, I get the same error, but for audit rather than firewall.

ReadOnly Rest plugin giving Authentication Exception

I am using the ReadonlyREST plugin to secure Elasticsearch and Kibana, but once I added the following to my readonlyrest.yml, Kibana started giving me "Authentication Exception". What could be the reason for that?
kibana.yml
elasticsearch.username: "kibana"
elasticsearch.password: "kibana123"
readonlyrest.yml
readonlyrest:
    enable: true
    response_if_req_forbidden: Access denied!!!
    access_control_rules:

    - name: "Accept all requests from localhost"
      type: allow
      hosts: [XXX.XX.XXX.XXX]

    - name: "::Kibana server::"
      auth_key: kibana:kibana123
      type: allow

    - name: "::Kibana user::"
      auth_key: kibana:kibana123
      type: allow
      kibana_access: rw
      indices: [".kibana*","log-*"]
My Kibana and Elasticsearch are hosted on the same server; could that be the reason?
Another question: if I want to make my Elasticsearch server accessible only through a particular host, can I put that host in the first section of access_control_rules, as shown in readonlyrest.yml?
Elastic version: 6.2.3
Log error: I don't remember it exactly, but it was [ACL] Forbidden and it showed false for all three access control rules.

Why does functional testing create new records on my Redis query cache?

I've just enabled query caching on my Symfony application using the following configuration:
Doctrine cache config
doctrine_cache:
    providers:
        cache:
            namespace: '%cache_namespace%'
            chain:
                providers:
                    - array_cache
                    - redis_cache
                    - file_cache
        redis_cache:
            namespace: '%cache_namespace%'
            predis:
                host: "%redis_host%"
                port: "%redis_port%"
                password: "%redis_password%"
                timeout: "%redis_timeout%"
        array_cache:
            namespace: '%cache_namespace%'
            array: ~
        file_cache:
            namespace: '%cache_namespace%'
            file_system:
                directory: "%kernel.cache_dir%/application"
Doctrine ORM config
orm:
    auto_generate_proxy_classes: "%kernel.debug%"
    entity_managers:
        an_entity_manager:
            connection: connection
            mappings:
                AppBundle: ~
            naming_strategy: doctrine.orm.naming_strategy.underscore
            metadata_cache_driver:
                type: service
                id: "doctrine_cache.providers.cache"
            query_cache_driver:
                type: service
                id: "doctrine_cache.providers.cache"
            result_cache_driver:
                type: service
                id: "doctrine_cache.providers.cache"
I also have functional tests that populate a local SQLite database instead of the real one. What I'm seeing is the following:
Every time I run my tests, I see the Redis cache creating new keys, even for identical records. I'm guessing this must be because the database gets re-created before every test is executed, and the contents of the newly created rows don't matter as far as caching is concerned, but I can't be sure.
Does anyone know if this is expected behaviour?
You should disable Redis (and all unnecessary external dependencies) in your test environment. This can be done by overriding your configuration in the test environment so that only the in-memory cache is used (see the sketch below).
To read more about environments and configuration, have a look at the Symfony docs: http://symfony.com/doc/current/book/configuration.html#environment-configuration
