Elasticsearch query string shows documents that do not have the specified key - elasticsearch

In Elasticsearch 6.7.1, I am using a query string to fetch some documents. After executing the query string, in addition to the expected documents, it also returns documents that do not have the key the data is being filtered on.
This was not the case when I was using Elasticsearch 6.4.2, and the official site has no information about this change.
My query looks like -
"* AND ( properties.foreignKeys.referenceTableId :(file_datatypes) OR properties.primaryKeyMetadata.referenceTables :(file_datatypes) )".
In the JSON it also shows documents that have properties.foreignKeys: null and properties.primaryKeyMetadata: null.
Any update on this would be helpful.
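For reference, this is roughly how such a query string would be sent as a query_string query; a minimal sketch in which only the query text comes from the question, while the index name and the surrounding request body are assumptions:
# index name "my_index" is an assumption
GET /my_index/_search
{
  "query": {
    "query_string": {
      "query": "* AND ( properties.foreignKeys.referenceTableId:(file_datatypes) OR properties.primaryKeyMetadata.referenceTables:(file_datatypes) )"
    }
  }
}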

Related

Kibana - update default search query

I am new to Elasticsearch and Kibana. In Kibana, while trying to fetch Elasticsearch documents as JSON, by default a bsearch query is executed with a wildcard field search, as below:
fields: [
  {field: "*", include_unmapped: "true"},
  {field: "timestamp", format: "date_time"}
]
This in turn returns all the document values as arrays under the fields section. I need to turn off requesting fields in the search query; having the _source metadata in my JSON is enough.
How can I update the default query that Kibana sends? Thanks in advance.
Installed Elasticsearch version: 7.17.3
In Advanced Settings, you can turn on "Read fields from source" instead of using the fields API, but that option is soon going to be deprecated.
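If you prefer to change it programmatically rather than through the UI, a minimal sketch using the Kibana advanced-settings API; the setting key searchFieldsFromSource is an assumption based on the UI label, so verify the exact key on the Advanced Settings page first:
# the key name below is an assumption; check Advanced Settings for the exact id
POST /api/kibana/settings
{
  "changes": { "searchFieldsFromSource": true }
}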

JanusGraph with Elasticsearch index is not working

I have added a mixed index in JanusGraph to support full-text search with Elasticsearch.
My mixed index looks like this:
myindex = mgmt.buildIndex("myesindex", Vertex.class)
.addKey("name", Mapping.TEXTSTRING.asParameter())
.addKey("sabindex", Mapping.TEXTSTRING.asParameter())
.buildMixedIndex("search");
I am able to load data into Elasticsearch engine.
Also I am able to execute the query successfully.
The issue I am facing is when I run this query:
g.V().has('code','abc').valueMap()
==>{str=[some text], code=[abc], sab=[sab], sabindex=[sabindex], name=[[some tex]]}
I am getting the result successfully, but when I try to search with name and code:
g.V().has('name', textContains('some text')).has('code','abc').valueMap()
The code field is also indexed (composite index).
At that point I get no results, even though the data is present in the graph and in Elasticsearch.
In another scenario, the same query with a different name and code works successfully. I have also rebuilt the graph multiple times but have not gotten positive results.
The first query shows the value is name=[[some tex]]. It is missing the final t in text, so that explains why the query isn't matching on some text.
If you instead do textContains('some tex'), you would get the same result as the first query. Using the profile() step would show that myindex was utilized.
See this gist, which recreates the scenario.

Term aggregation using template in Grafana with Elasticsearch as data source

I have docs in Elasticsearch with different field names, e.g. a, b, c, d...
I want to use templating in Grafana to run a terms aggregation in such a way that I get the values of a field, e.g. i.
I'm trying to use this query:
{"find":"terms","field":"i","size":25}
but it does not return any values.
I know that there are some values as I query the same docs with Sense.
I have Grafana v 4.6.2 and Elasticsearch v 2.3.4
The field I wanted has a "-" in the string. ES treats it as a separator; this was the reason for the error.
Changing the field's mapping to "not analyzed" should help.
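A minimal mapping sketch for Elasticsearch 2.x; the field name i comes from the question, while the index and type names are assumptions. Since an existing field's mapping can't be changed in place, this would go into a new index, followed by reindexing the data:
# index and type names are assumptions
PUT /my_index
{
  "mappings": {
    "my_type": {
      "properties": {
        "i": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}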

How to retrieve all document ids matching a search, in elastic search?

I'm working on a simple side project, and have a tech stack that involves both a SQL database and ElasticSearch. I only have ElasticSearch because I assumed that as my project grows, my full text searching would be most efficiently performed by ES. My ES schema is very simple - documents that I insert into ES have 2 fields, one being the id and the other being the field with the body of text to search. The id being inserted into ES corresponds to that document's primary key id from the SQL database.
insert record into SQL -> insert record into ES using PK from SQL
Searching would be the reverse of that. Query ES and grab all the matching ids, and then turn around and use those ids to get records from SQL.
search ES can get all PK ids -> use those ids to get documents from SQL
The problem that I am facing though, is that ES can only return documents in a paginated manner. This is a problem because I also have a WHERE clause on my SQL query, beyond just the ids. My SQL query might look like this ...
SELECT * FROM foo WHERE id IN (1,2,3,4,5) AND bar != 'baz'
Well, with ES paginating the results, my WHERE clause will always only be querying a subset of the full results from ES. Even if I utilize ES' skip and take, I'm still only querying SQL using a subset of document ids.
Is there a way to get Elasticsearch to return the entire list of matching document ids? I realize this limitation is there to keep me from shooting myself in the foot, because doing this across all shards and many, many documents is not efficient. Is there no way, though?
After putting in some hours on this project, I've only now realized that I've poorly engineered this, unless I can get all of these ids from ES. One alternative implementation I've thought of would be to also store in ES the things that I'm filtering on in SQL. A problem there is that I'd have to update the ES document every time I update the document in SQL, which would require a pretty big rewrite of some of my data access code. I could scrap Elasticsearch altogether and just perform searching in Postgres, for now, until I can think of a better way to structure this.
Elasticsearch does not support returning every single document that matches your query, because that would overload the system. Instead, use the scroll API in Elasticsearch; it's like a cursor in a database.
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/scan-scroll.html
For more examples, refer to the GitHub repo: https://github.com/sidharthancr/elasticsearch-java-client
Hope it helps.
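A minimal scroll sketch; the index name, field name, query, and page size below are assumptions, only the scroll mechanism itself comes from the answer:
# open a scroll context and fetch the first page; each hit carries its _id
POST /my_index/_search?scroll=1m
{
  "size": 1000,
  "_source": false,
  "query": { "match": { "body": "search text" } }
}
# keep requesting pages with the _scroll_id returned by the previous call until no hits are left
POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<_scroll_id from the previous response>"
}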
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-fields.html
Please have a look at the Elasticsearch documentation above, where you can specify that only particular fields are returned from the matching documents.
Hope this resolves your problem.
{
  "fields" : ["user", "postDate"],
  "query" : {
    "term" : { "user" : "kimchy" }
  }
}

Grouping documents based on Name and Lat, Long in Elasticsearch

I want to group documents based on Name and Lat, Long in Elasticsearch. I explored the aggregations API, but it gives only a count for a specific criterion, not the actual documents. Is there a way we can do this in Elasticsearch?
You could use nested aggregations - something like aggregating by name, then by _id - and then use a second query to fetch the documents by those ids.
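A minimal sketch of that approach; the index name and the name.keyword sub-field are assumptions (and note that aggregating on _id is discouraged on recent Elasticsearch versions):
# group by name, then by document id; the documents themselves are fetched in a second query
POST /my_index/_search
{
  "size": 0,
  "aggs": {
    "by_name": {
      "terms": { "field": "name.keyword" },
      "aggs": {
        "by_id": {
          "terms": { "field": "_id" }
        }
      }
    }
  }
}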
