How to be notified when an Elasticsearch index has changed [duplicate] - elasticsearch

I am using Elasticsearch, and I am building a client (using the Java Client API) to export logs indexed via Logstash.
I would like to be able to be notified (by adding a listener somewhere) when a new document is indexed (= a new log line has been added) instead of querying the last X documents.
Is it possible?

This is what you're looking for: https://github.com/ForgeRock/es-change-feed-plugin
Using this plugin, you can subscribe to a websocket channel to receive indexing/deletion events as they happen. It has some limitations, though.
Back in the day, it was possible to install river plugins to stream documents into ES. The river feature has been removed, but the plugin above is like a "reverse river", where outside clients are notified by ES as documents get indexed.
Very useful, and seemingly up to date with ES 6.x.
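For example, here is a minimal listener sketch using the JDK 11 websocket client; the host, port and path below are placeholders, so check the plugin's README for the channel it actually exposes and the shape of its messages:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class ChangeFeedListener {
    public static void main(String[] args) throws InterruptedException {
        // Placeholder endpoint: check the plugin's README for the actual host, port and path.
        URI feed = URI.create("ws://localhost:9400/ws/_changes");

        HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(feed, new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
                        // Each message is a JSON description of an index/update/delete event.
                        System.out.println("change event: " + data);
                        ws.request(1); // ask for the next message
                        return null;
                    }
                })
                .join();

        Thread.currentThread().join(); // keep the JVM alive so events keep arriving
    }
}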
UPDATE (April 14th, 2019):
According to what was said at Elastic{ON} Zurich 2019, at some point in the 7.x series, there will be a Changes API that will provide index changes notifications (document creation, update, deletion and more).
UPDATE (July 22nd, 2022):
ES 8.x is out and the Changes API is still nowhere in sight ... Good to know, though, that it's still open at least.

Related

Elasticsearch : Is there a way to get an alert when a new agent joins the fleet?

When a new agent joins the fleet, I want to make a few notes in my application. Is it possible to get a notification whenever a new agent joins the fleet?
I checked Elasticsearch Watcher but haven't found a solution so far.
TL;DR
There is no built-in mechanism to perform such a task.
Nonetheless it is possible to build one, but it will be a bit of a hack. You can either:
Use the .fleet-agents system index, which won't be accessible in the 8.x releases, or
Use the Kibana agent API, which is still experimental (see the sketch below).
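For the second option, here is a rough polling sketch against the (experimental) Kibana Fleet agents endpoint. The Kibana URL, the credentials and the assumption that the JSON response carries a "total" field are all things to verify against your own deployment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FleetAgentPoller {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        // Placeholder credentials and Kibana URL; adjust for your deployment.
        String auth = Base64.getEncoder().encodeToString("elastic:changeme".getBytes());
        Pattern totalField = Pattern.compile("\"total\":(\\d+)");

        int lastTotal = -1;
        while (true) {
            HttpRequest req = HttpRequest.newBuilder(URI.create("http://localhost:5601/api/fleet/agents"))
                    .header("kbn-xsrf", "true")
                    .header("Authorization", "Basic " + auth)
                    .GET()
                    .build();
            String body = http.send(req, HttpResponse.BodyHandlers.ofString()).body();

            // Crude change detection: pull the "total" field out of the JSON response.
            // A real implementation should parse the JSON and diff agent ids instead.
            Matcher m = totalField.matcher(body);
            int total = m.find() ? Integer.parseInt(m.group(1)) : -1;
            if (lastTotal >= 0 && total > lastTotal) {
                System.out.println("New agent(s) joined the fleet, total is now " + total);
            }
            lastTotal = total;

            Thread.sleep(30_000); // poll every 30 seconds
        }
    }
}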

Spring Boot with spring-data-elastic connecting to Elasticsearch 7.4.0 on AWS server

I have 2 questions:
Can I run spring-data-elasticsearch v4.0.1.RELEASE (with org.elasticsearch:elasticsearch 7.6.2) with the ES client running on 7.4.0? If not, what combination can I use for a 7.4.0 client? We are migrating to AWS and I need to use the 7.4.0 version of the client.
I have a parent/child relationship (configured as a join datatype field). Could somebody please provide documentation or explain how to use either ElasticsearchRestTemplate or ElasticsearchOperations to correctly insert/update both parent and child records?
Thank you.
Best regards,
Robert
ad 1): from the Elasticsearch documentation I can't at the moment find anything in the breaking changes sections that would prevent using a 7.4.0 client library, but that does not mean there aren't any. Recently there was a breaking change in the Java classes (from 7.7 to 7.8) and I got this information:
our compatibility focus is on the HTTP APIs and we don’t offer any guarantees on the code itself. There’s more background here: https://github.com/elastic/elasticsearch/issues/22707#issuecomment-274163711
So I'd say: write a small test app with the corresponding libraries, start a local ES 7.4, and test it.
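A rough sketch of such a smoke test, assuming the spring-data-elasticsearch 4.0.x client setup (ClientConfiguration / RestClients) and an Elasticsearch 7.4.0 node on localhost:9200; the index and entity names are made up:

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.client.ClientConfiguration;
import org.springframework.data.elasticsearch.client.RestClients;
import org.springframework.data.elasticsearch.core.ElasticsearchOperations;
import org.springframework.data.elasticsearch.core.ElasticsearchRestTemplate;

public class CompatibilitySmokeTest {

    @Document(indexName = "compat-test")
    static class Sample {
        @Id String id;
        String text;
        Sample() { }
        Sample(String id, String text) { this.id = id; this.text = text; }
    }

    public static void main(String[] args) {
        // Connects to a locally running Elasticsearch 7.4.0 on the default port 9200.
        ElasticsearchOperations operations = new ElasticsearchRestTemplate(
                RestClients.create(ClientConfiguration.localhost()).rest());

        // Index a trivial document and read it back; incompatibilities between the
        // client jars and the 7.4.0 server typically surface right here.
        operations.save(new Sample("1", "hello"));
        Sample found = operations.get("1", Sample.class);
        System.out.println(found != null ? "round trip OK" : "document not found");
    }
}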
ad 2): adding the join-type mapping and implementing the corresponding inserts etc. is currently being worked on and will hopefully be available in version 4.1.
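Until that lands, one possible workaround (not the spring-data way, just a sketch) is to do the parent/child writes through the underlying RestHighLevelClient and set the routing yourself. The index name, the join field name ("relation") and the relation names below are made-up examples, and your mapping has to declare the join field accordingly:

import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;

public class JoinFieldWorkaround {
    public static void main(String[] args) throws Exception {
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {

            // Parent document: the join field ("relation" here) only names the relation.
            IndexRequest parent = new IndexRequest("orders")
                    .id("order-1")
                    .source("{\"name\":\"order-1\",\"relation\":{\"name\":\"order\"}}", XContentType.JSON);
            client.index(parent, RequestOptions.DEFAULT);

            // Child document: must carry the parent id in the join field and be routed
            // to the parent's shard so both documents end up together.
            IndexRequest child = new IndexRequest("orders")
                    .id("item-1")
                    .routing("order-1")
                    .source("{\"sku\":\"abc\",\"relation\":{\"name\":\"item\",\"parent\":\"order-1\"}}", XContentType.JSON);
            client.index(child, RequestOptions.DEFAULT);
        }
    }
}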

ELK - Removing old logs viewable in Kibana

I have managed to process log files using the ELK kit and I can now see my logs on Kibana.
I have scoured the internet and can't seem to find a way to remove all the old logs, viewable in Kibana, from months ago (well, an explanation that I understand). I just want to clear out Kibana and start afresh by loading new logs and having them be the only ones displayed. Does anyone know how I would do that?
Note: Even if I remove all the Index Patterns (in the Management section), the processed logs are still there.
Context: I have been looking at using ELK to analyse testing logs at work. For that reason, I am using Elasticsearch, Kibana and Logstash v5.4, and I am unable to download a newer version due to company restrictions.
Any help would be much appreciated!
Kibana screenshot displaying logs
Update:
I've typed "GET /_cat/indices/*?v&s=index" into the Dev Tools>Console and got a list of indices.
I initially used the "DELETE" function and it didn't appear to be working. However, after restarting everything, it worked the second time and I was able to remove all the existing indices, which subsequently removed all logs being displayed in Kibana.
SUCCESS!
Kibana is just the visualization part of the Elastic Stack; your data is stored in Elasticsearch, and to get rid of it you need to delete your index.
Version 5.4 is very old and has already passed its EOL date; it does not have any UI to delete indices, so you will need to use the Elasticsearch REST API to delete them.
You can do it from Kibana: click on Dev Tools, then first list your indices using the cat indices endpoint.
GET "/_cat/indices?v&s=index&pretty"
After that, use the delete index API endpoint to delete your index.
DELETE /name-of-your-index
On newer versions you can do it using the Index Management UI; you should try to talk to your company about getting a newer version.
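If you'd rather script the cleanup than type it into the Dev Tools console, the low-level Java REST client available for the 5.x line can issue the same calls; a rough sketch, where the index name is just an example:

import org.apache.http.HttpHost;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class DeleteOldIndices {
    public static void main(String[] args) throws Exception {
        RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();
        try {
            // The index name is just an example; list yours first with GET /_cat/indices?v&s=index
            Response resp = client.performRequest("DELETE", "/logstash-2017.01.01");
            System.out.println(resp.getStatusLine());
        } finally {
            client.close();
        }
    }
}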

Quality profile weirdness (active/inactive rules) after sonarqube upgrade 6.3.1

I have upgraded the SonarQube server from 6.2 to 6.3.1 and since then I see weird behaviour regarding the quality profile (it might have occurred before; it is only now that I see it).
When I click on the Quality Profile SonarWay (Java) I see
so it seems that all rules are inactive.
When I click Activate More I see the following
so it looks like some rules are active (I assume due to the "Deactivate" option).
But switching to "active" in the left bar under Quality Profile results in this
so clearly, no rules are active.
What is the second image showing then? What does "Deactivate" mean if the rule is inactive?
How could it happen that suddenly all rules seem to be deactivated?
This specific behaviour is a common symptom of a corrupted Elasticsearch index (no longer in sync with the SonarQube database).
Solution
Rebuild the SonarQube ElasticSearch index:
stop your SonarQube server
delete the Elasticsearch index at sonar_install_dir/data/es
start your SonarQube server
(reminder: ElasticSearch is a search engine used by SonarQube to index issues, rules etc. so that it can access this data rapidly without having to query the database all the time, see SonarQube Architecture)
Root-cause
Why did that happen? A common case is an Elasticsearch index not being properly rebuilt after upgrading and/or changing the database. Here's a typical scenario: you first start SonarQube on the embedded H2 database, experiment a bit with it, then plug it into a full-fledged database. If the Elasticsearch index does not get scratched/rebuilt in between, the index becomes corrupted, as the database/dataset it used to be in sync with has just changed all of a sudden.
FYI there's an improvement planned to handle this more gracefully: SONAR-5681.
Note: independently of the above solution, do not treat an Elasticsearch index rebuild as a lightweight operation to be performed regularly. SonarQube self-manages its Elasticsearch index, so any issue must be investigated first.

Searching during Pentaho ElasticSearch update runs

We are using ES to index ~1.5 million records from the database. To populate the index we are using the Pentaho ES component, which is set to “Overwrite if exists” (a run takes ~15 min). Also, individual indexed documents can be retrieved, updated or deleted via Java services.
The question is: what will ES return during a full Pentaho update run? For example, we have 1.5 million indexed documents with version = 1. The next update will change this version to 2. If we request a document while Pentaho is updating it, will we receive the old version of it? Will the service be unavailable for that particular document? Also, if we receive an old version, will the new version be available immediately after the update, or only once the full batch is updated (the Pentaho component sends rows in batches of 5k)?
Pentaho - 4.4
ElasticSearch - 0.19.4
Lucene - 3.6.0
You will receive the old version of a document if the new one isn't committed yet. The service will continue to be available.
The new versions will become available depending on the refresh_interval setting in Elasticsearch, which defaults to 1s.
It's possible that Pentaho tweaks the refresh_interval during the data load. If that's the case, you'll have to wait until Pentaho calls the refresh API directly or resets the parameter.
You might simply start the run and then check the refresh_interval setting via:
curl -XGET "http://my-es-server:9200/my-index-name/_settings"
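If you need the freshly loaded documents to be searchable right away, you could also trigger a refresh explicitly once the run finishes. A rough sketch using the Java TransportClient of that era follows; the class names and calls should be double-checked against the 0.19.4 javadocs:

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class ManualRefresh {
    public static void main(String[] args) {
        // 0.19-era TransportClient setup (verify against your exact client version).
        Client client = new TransportClient()
                .addTransportAddress(new InetSocketTransportAddress("my-es-server", 9300));

        // Force a refresh so documents indexed by the Pentaho run become searchable
        // immediately instead of waiting for the next refresh_interval tick.
        client.admin().indices().prepareRefresh("my-index-name").execute().actionGet();

        client.close();
    }
}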
