Docker: Elasticsearch is stopping when I dockerize my service (Spring Boot)

Elasticsearch, which is running in Docker, gets stopped when I run my service in Docker.
If I run my service (a Spring Boot app) first and then run Elasticsearch in Docker, Elasticsearch never starts in its container. Any idea about this?
Is the log below the problem?
{
  "type": "server",
  "timestamp": "2019-12-11T10:25:39,589Z",
  "level": "DEBUG",
  "component": "o.e.a.s.TransportSearchAction",
  "cluster.name": "docker-cluster",
  "node.name": "179c5890f49c",
  "message": "All shards failed for phase: [query]",
  "cluster.uuid": "myyvQb8oS9qFjcCL4B41Sg",
  "node.id": "RVuYJJ_NQ3uygvgq3cM-dg",
  "stacktrace": [
    "org.elasticsearch.ElasticsearchException$1: Fielddata is disabled on text fields by default. Set fielddata=true on [type] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
  ]
}
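A failed [query] phase like this is a search error and would not normally bring the node down by itself; if the container exits, its own logs (docker logs <container>) are the place to look. But if these query failures are what break your service, the message already points at the fix. A minimal sketch, assuming a hypothetical index name my-index and that [type] really is a text field you aggregate on — either enable fielddata on it, or (cheaper) map it as keyword instead:
PUT my-index/_mapping
{
  "properties": {
    "type": {
      "type": "text",
      "fielddata": true
    }
  }
}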

Related

Why are Elasticsearch and Kibana generating garbage data ceaselessly?

I just installed Elasticsearch and Kibana, with X-Pack enabled. I never put any data into Elasticsearch, but when I use Kibana to monitor Elasticsearch I see the amount of data increasing automatically. I have to stop Elasticsearch and Kibana to make it stop. Why does this happen, and how can I fix it?
PS: The Elasticsearch and Kibana version is 7.11.1.
The increasing number can be seen under Kibana > Stack Monitoring > Elasticsearch > Overview > Data.
The highlighted part went from 2M to 314M, and I don't know what caused it.
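The growth described is consistent with what Stack Monitoring's own collection writes into its .monitoring-* indices, which keep growing for as long as collection is enabled. If that is the source, collection can be switched off with a dynamic cluster setting; a sketch (verify it fits your setup before applying):
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": false
  }
}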

Is there a way to retrieve data from a second Elasticsearch instance in Kibana?

I already have an ELK stack. Is it possible to retrieve data from a second Elasticsearch instance in Kibana?
When you say "second Elasticsearch instance", I assume you mean a second cluster. For this you can use Cross Cluster Search (CCS), which you will first need to configure in Elasticsearch:
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "your_remote_cluster": {
          "seeds": [
            "<dns-name>:9300"
          ]
        }
      }
    }
  }
}
Then point Kibana at the Elasticsearch cluster on which you configured the remote cluster (the one where you ran the PUT _cluster/settings). And finally add the right index pattern in Kibana (https://www.elastic.co/guide/en/kibana/current/management-cross-cluster-search.html) with your_remote_cluster:<pattern> (your_remote_cluster is the name you configured in the PUT).
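Once the remote cluster is configured, cross-cluster requests simply prefix the index with the cluster alias. For example (the logs-* pattern is hypothetical):
GET your_remote_cluster:logs-*/_search
{
  "query": {
    "match_all": {}
  }
}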
PS: If you are after an HA setup where one Kibana instance can talk to multiple Elasticsearch nodes in the same cluster, use the elasticsearch.hosts setting added in 6.6.
A link to the Kibana configuration is below. Kibana is only able to target one cluster, so you can list different URLs, but with a very important restriction:
https://www.elastic.co/guide/en/kibana/current/settings.html
elasticsearch.hosts:
Default: "http://localhost:9200" The URLs of the Elasticsearch instances to use for all your queries. All nodes listed here must be on the same cluster.

Fielddata is disabled on text fields by default. Set fielddata=true

I am running Elastic Stack v7.2.0 on Kubernetes, and I am getting this error in Elasticsearch while accessing the Metricbeat dashboard:
Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [host.name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.
My question is: if I have manually added the template and the dashboards, why am I getting this error? With the template added, shouldn't the required mapping go into the index? I saw some answers which suggested applying the required mapping explicitly, but in that case, which index name should I give for the mapping, since a new Metricbeat index is created daily with each new date? How will this explicit mapping persist across all the Metricbeat indices created each day?
PUT /what-should-be-the-index-name/_mapping
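The standard way to make a mapping apply to every index that a daily pattern creates is an index template rather than a per-index _mapping call; each new metricbeat-* index picks the template up at creation time (existing indices are unaffected). A sketch using the legacy template syntax of the 7.x line (the template name is hypothetical):
PUT _template/metricbeat-host-name
{
  "index_patterns": ["metricbeat-*"],
  "mappings": {
    "properties": {
      "host.name": {
        "type": "keyword"
      }
    }
  }
}
Note that Metricbeat's own template already maps host.name as keyword, so hitting this error at all suggests that template was not applied when the index was created.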

Elasticsearch configuration using NLog

I'm using NLog to write logs to Elasticsearch, which works just fine. The problem is that aggregations don't work, because fielddata on the property I try to aggregate on is set to false by default. The error I get reads as follows:
illegal_argument_exception Reason: "Fielddata is disabled on text
fields by default. Set fielddata=true on [level] in order to load
fielddata in memory by uninverting the inverted index
Since the index is created by NLog, I would like it to map certain properties in a way that allows them to be aggregated later. Is it possible to configure NLog so that the error is gone and aggregations start working?
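Rather than configuring NLog itself, one option is an Elasticsearch index template that maps level as keyword in whatever index NLog creates, making it aggregatable; this sidesteps NLog configuration entirely. A sketch, assuming the target writes to indices matching logstash-* (adjust the pattern to your NLog target's actual index name):
PUT _template/nlog-level-keyword
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "level": {
        "type": "keyword"
      }
    }
  }
}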

When will elasticsearch try to recover its indexes?

I am trying to fix an issue related to Elasticsearch in our production environment which is not always reproducible. I am using Elasticsearch 1.5 (yet to upgrade).
The issue is that while trying to create an index I get an IndexAlreadyExistsException, because the call
client.admin().indices().prepareExists(index).get().isExists();
returns false while Elasticsearch is in recovery mode; when I then try to create the index, I get that exception.
Below are a few links to issues which say that Elasticsearch returns false while recovering indices:
8945
8105
As I am not able to reproduce the issue reliably, I am not able to test my fix, which is to check the cluster health first before checking isExists().
My question is: when will Elasticsearch start recovery?
You can use the prepareHealth() admin method in order to wait for the cluster to reach a given status before doing your index maintenance operations:
ClusterHealthResponse health = client.admin().cluster().prepareHealth(index)
        .setWaitForGreenStatus()
        .get();
Or you can also wait for the whole cluster to get green:
ClusterHealthResponse health = client.admin().cluster().prepareHealth()
        .setWaitForGreenStatus()
        .get();
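Putting both pieces together, a minimal sketch against the 1.x transport client API (client and index are assumed to exist in your code) that waits out recovery before the existence check:
import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.common.unit.TimeValue;

// Wait for the cluster to leave recovery so prepareExists() no longer
// returns a false negative, then create the index only if it is missing.
ClusterHealthResponse health = client.admin().cluster().prepareHealth()
        .setWaitForYellowStatus()                   // yellow suffices to read index metadata
        .setTimeout(TimeValue.timeValueSeconds(30)) // do not block forever if recovery stalls
        .get();
// health.isTimedOut() tells you whether the wait gave up before the status was reached

boolean exists = client.admin().indices().prepareExists(index).get().isExists();
if (!exists) {
    client.admin().indices().prepareCreate(index).get();
}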
