Elasticsearch: search by part of a word

Could you help me? I have an ELK cluster (version 5), and through Kibana I run a query for part of a word using a wildcard, for example examp*, but nothing is found. If I search for the whole word example, everything is found. I also have a second ELK cluster where searching by part of a word with a wildcard works correctly. I don't understand what the difference in settings between these two clusters is.
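One common cause of this difference is a mapping mismatch between the clusters: a wildcard is matched against the indexed tokens, so a field that is analyzed differently on one cluster (or mapped as not-analyzed on one and analyzed on the other) makes examp* hit on one cluster and miss on the other. A minimal sketch of the explicit query-DSL form, assuming a field named message (the field name is an assumption, not from the question):

```python
import json

# Hypothetical field name "message"; a wildcard query runs against the
# *indexed tokens*, so the field's analyzer decides whether "examp*"
# can match a document containing "example".
wildcard_query = {
    "query": {
        "wildcard": {
            "message": "examp*"
        }
    }
}

# This is the body you would POST to /<index>/_search on each cluster;
# comparing GET /<index>/_mapping between the clusters is the way to
# spot the analyzer difference.
print(json.dumps(wildcard_query))
```

Running the same explicit query against both clusters, and diffing the mappings, usually narrows the problem down faster than searching from the Kibana bar.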

Related

How to remove stop words while searching using Lucene query in Kibana

While searching using a Lucene query in Kibana for the phrase "METHOD FOR FABRICATING OPTICAL", I don't want "FOR" to be considered while finding matches, as it is a stop word.
Q. How do I ignore stop words in this specific case in Kibana?
Q. How do I ignore all stop words while writing a Lucene query in Kibana?
Kibana screenshot: searching for "METHOD FOR FABRICATING OPTICAL"
Since you're searching in Kibana manually, you can always remove the stop words yourself :-)
Another way is to go to Stack Management > Advanced Settings and modify the "Query string options" setting by adding the stop analyzer to the options.
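For reference, the advanced setting meant here is the one Kibana applies to every query-string search (named query:queryString:options in the versions I've seen; treat the exact name as an assumption for your version). Its default enables wildcard analysis, and the stop analyzer can be added alongside it:

```json
{ "analyze_wildcard": true, "analyzer": "stop" }
```

With the stop analyzer applied, common English stop words such as "FOR" are dropped from the query terms before matching.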

Connecting Elasticsearch to Kibana

I am trying to display the iris data in Kibana by connecting to Elasticsearch and creating an index called "iris" from R. I am taking the following steps:
Execute elasticsearch 5.5.3 batch file (localhost:9200 displays results on web)
Run the following code in R (connects and displays the iris search result successfully)
library(elasticsearchr)
es <- elastic("http://localhost:9200", "iris", "data")
es %index% iris
for_everything <- query('{
  "match_all": {}
}')
es %search% for_everything
Run the Kibana 5.5.3 batch file (I checked the yml file, which says #elasticsearch.url: "http://localhost:9200")
However, Kibana can't search the index "iris" as shown below:
I tried running the Logstash 5.5.3 batch file before step 3, but it generated an error message in the command prompt and closed. Another weird thing is that I don't see any index created on localhost:9200 in the browser, while searching for the index in R shows results. Plus, below is the message I get when I start in step 1.
FYI, result of http://localhost:9200/_cat/indices
This is a snapshot of my kibana management > index pattern page.
You should add an index pattern to Kibana via Management -> Kibana -> Index Patterns.
At the moment you are searching for the word "iris" against none of your indices. I also think you must change your search phrase.
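Before creating the index pattern, it is worth confirming that the iris index actually exists on the cluster by looking at GET /_cat/indices. A small sketch of checking that output programmatically, assuming the default plain-text _cat layout where the index name is the third whitespace-separated column (health, status, index, ...):

```python
def index_names(cat_indices_text):
    """Extract index names from `GET /_cat/indices` plain-text output.

    Assumes the default column layout: the index name is the third
    whitespace-separated column on each line.
    """
    names = []
    for line in cat_indices_text.splitlines():
        cols = line.split()
        if len(cols) >= 3:
            names.append(cols[2])
    return names

# Illustrative sample resembling /_cat/indices output (not real cluster data)
sample = (
    "yellow open .kibana 1 1 2 0 10kb 10kb\n"
    "yellow open iris    5 1 150 0 30kb 30kb\n"
)
print("iris" in index_names(sample))  # → True
```

If "iris" is absent from the real output, the R indexing step did not reach the cluster Kibana is pointed at, which would explain the empty search.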

How many nodes Elasticsearch creates on a single machine by default

I have an ELK setup with Elasticsearch version 2.3.x. I wanted to know how many nodes it should create by default. I have noticed a weird situation: when I restarted Elasticsearch, it started with 3 nodes, and there are multiple folders in the data path (/var/lib/elasticsearch/0, 1, 2, 3). But when I restarted it again, it came up with only one node. I want to know how it determines the number of nodes.
# curl -s -XGET "http://localhost:9200/_cat/nodes?v"
Because of this, many shards are showing as unassigned due to the lack of nodes.
Your single Elasticsearch installation on one machine is one node; if you want more, you need to run multiple Elasticsearch instances. See here for further details.
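One plausible explanation for the numbered data folders (0, 1, 2, 3), offered as an assumption rather than a certain diagnosis: Elasticsearch 2.x allows several node processes to share one data path, and each running instance claims the next numbered subfolder, so extra folders appear if extra instances were started (for example while a previous one was still shutting down). The 2.x node.max_local_storage_nodes setting in elasticsearch.yml can be pinned to 1 to forbid this (verify the setting name for your exact version):

```yaml
# elasticsearch.yml - allow at most one node process to use this data path
node.max_local_storage_nodes: 1
```

With this in place, a second accidental instance fails to start instead of silently becoming an extra node with its own empty data folder.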

Elasticsearch - search by IP (regex)

I have Kibana and ES. I have many indexes. I am using message field in ElasticSearch. My goal is to mask all IP addresses, which I already do using Logstash.
Now, given the fact that there are many different indexes and also different log types, I would like to run either a Kibana or an ES query for any occurrence of an IP, just in case I missed any of them. I would like to do the same for the email format as well.
Question is, how can I run IP/email regex search on ElasticSearch or Kibana?
Message field is string type, and is indexed.
I have found what I was looking for. In my case this approach is valid, since I do not care about performance. This was just a test to make sure I don't 'leak' information.
The answer: Elasticsearch's regexp query.
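A sketch of the kind of regexp query meant here, assuming the message field and a deliberately loose IPv4 pattern (Elasticsearch regexp queries are anchored and run against the indexed tokens, so whether a logged IP is matchable depends on how message was analyzed):

```python
import re

# Loose IPv4 pattern: it accepts out-of-range octets like 999, which is
# usually acceptable for "did anything slip through the masking?" audits.
IPV4 = r"[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}"

# Body for POST /<index>/_search (the field name "message" is an assumption)
regexp_query = {
    "query": {
        "regexp": {
            "message": IPV4
        }
    }
}

# Sanity-check the pattern locally before sending it to Elasticsearch
print(bool(re.fullmatch(IPV4, "192.168.0.1")))  # → True
print(bool(re.fullmatch(IPV4, "not-an-ip")))    # → False
```

An analogous (and similarly loose) pattern can be written for emails; as the answer notes, regexp queries scan terms and are slow, which is fine for a one-off audit but not for routine queries.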

Elasticsearch indexes but does not store documents

I'm having trouble storing documents in a 3-node Elasticsearch cluster that was previously able to store documents. I use the Java API to send bulks of documents to Elasticsearch, which are accepted (no failure in the BulkResponse object) AND Elasticsearch shows heavy indexing activity. However, the number of documents does not increase, and I assume that none of them are stored.
I've looked into Elasticsearch logs (of all three nodes) but I see no errors or warnings.
Note: I've had to restart two nodes previously, but search/query is working perfectly. (The count in the image starts at ~17:00 because I installed the Marvel plugin at that time.)
What can I do to solve or debug the problem?
Sorry for this point-blank code blindness on my part! I forgot to advance the cursor when reading from MongoDB and therefore re-inserted the same 1000 documents into Elasticsearch thousands of times!
Learning: if this problem occurs, check that you are selecting the correct documents in your database and that those documents are not already stored in ES.
Side note on Marvel: it would be great if this could be surfaced somehow, e.g. by a chart of "updated documents" (I rechecked and could not find one).
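The bug pattern behind this is worth sketching generically: if the read offset is never advanced between batches, every bulk request re-sends the same first page, so documents are endlessly updated in place and the count never grows. A minimal, database-free Python sketch of the broken vs. fixed pagination loop (the MongoDB-specific cursor handling is abstracted into a list slice):

```python
def read_batches(docs, batch_size, advance_offset=True):
    """Yield batches of documents, optionally "forgetting" to advance the offset.

    With advance_offset=False this reproduces the bug described above:
    the same first page is fetched over and over (capped here for the demo).
    """
    offset = 0
    fetches = 0
    while offset < len(docs) and fetches < 5:  # cap keeps the buggy run finite
        batch = docs[offset:offset + batch_size]
        if not batch:
            break
        yield batch
        fetches += 1
        if advance_offset:
            offset += len(batch)  # the fix: skip past what was already read

docs = list(range(10))
fixed = [d for batch in read_batches(docs, 4) for d in batch]
buggy = [d for batch in read_batches(docs, 4, advance_offset=False) for d in batch]
print(fixed)                    # every document appears exactly once
print(buggy[:4] == buggy[4:8])  # → True: the same page repeats
```

Since Elasticsearch treats an insert with an existing ID as an update, the buggy loop produces exactly the observed symptom: accepted bulks, heavy indexing activity, flat document count.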