Debugging Elasticsearch

I'm using Tire and Elasticsearch. The service has started and is listening on port 9200. However, it was returning two errors:
"org.elasticsearch.search.SearchParseException: [countries][0]: from[-1],size[-1]: Parse Failure [Failed to parse source [{"query":{"query_string":{"query":"name:"}}}]]"
and
"Caused by: org.apache.lucene.queryParser.ParseException: Cannot parse 'name:': Encountered "<EOF>" at line 1, column 5."
So I reinstalled Elasticsearch and the service container, and the service starts fine.
Now, when I search using Tire, I get no results where results should appear, and I don't receive any error messages.
Does anybody have any idea how I might find out what is wrong, let alone fix it?

First of all, you don't usually need to reindex anything. It depends on how you installed and configured Elasticsearch, but when you install and upgrade with Homebrew, for example, the data are persisted safely.
Second, there's no need to reinstall anything. The error you're seeing means just what it says on the tin: a SearchParseException, i.e. your query is invalid:
{"query":{"query_string":{"query":"name:"}}}
Notice that you didn't pass any query string for the name qualifier. You have to pass something, e.g.:
{"query":{"query_string":{"query":"name:foo"}}}
or, in Ruby terms:
Tire.search('test') { query { string "name:hey" } }
See this update to the Railscasts episode on Tire for an example of how to catch errors due to incorrect Lucene queries.
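More generally, when Tire silently returns no results, it helps to run the same query against Elasticsearch directly and inspect the raw response. A minimal sketch with curl, assuming the default http://localhost:9200 and the countries index from the error message above:
curl -s 'http://localhost:9200/countries/_search?pretty' -H 'Content-Type: application/json' -d '{"query":{"query_string":{"query":"name:foo"}}}'
If the JSON response contains hits, the problem is on the Tire side; if it contains an error, the query body itself is at fault.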

Related

How to cast a field in Elasticsearch pipelines / Painless script

I have an application which logs level as an integer. I am using Filebeat to send the logs to ES. I have set level as a string in the ES index, which works for most of the applications. But when Filebeat receives an integer, indexing fails, of course, with:
"type":"illegal_argument_exception","reason":"field [level] of type [java.lang.Integer] cannot be cast to [java.lang.String]"
In my document: "level":30
I added a script processor step to my ingest pipeline, but I can't manage to make it work: either I get a compilation error, or the script somehow fails and nothing at all gets indexed.
Some very basic script I tried:
if (doc['level'].value == 30) {
  doc['level'].value = 'info';
}
Any idea on how to handle this in ES pipelines?
Regards
The best way is to transform the data before sending it to ES.
You can use a processor in Filebeat to transform your data:
https://www.elastic.co/guide/en/beats/filebeat/current/defining-processors.html
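For example, a sketch of a convert processor in filebeat.yml that coerces level to a string before it is shipped (the field name is taken from the question; adjust it to your setup, and note that convert requires a reasonably recent Filebeat):
processors:
  - convert:
      fields:
        - {from: "level", type: "string"}
      ignore_missing: true
      fail_on_error: false
As for the ingest pipeline attempt: script processors in an ingest pipeline address the document through ctx (e.g. ctx.level) rather than doc[...], which only exists in search-time scripts; that mismatch is a common reason such scripts fail to compile.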

How to Fix Document Not Found errors with find

I have a collection of Person documents, stored in a legacy MongoDB server (2.4) and accessed with the Mongoid gem via the Ruby MongoDB driver.
If I perform a
Person.where(email: 'some.existing.email@server.tld').first
I get a result (let's assume I store the id in a variable called "the_very_same_id_obtained_above")
If I perform a
Person.find(the_very_same_id_obtained_above)
I get a
Mongoid::Errors::DocumentNotFound
exception
If I use the JavaScript syntax to perform the query, the result is found:
Person.where("this._id == #{the_very_same_id_obtained_above}").first # this works!
I'm currently trying to migrate the data to a newer version. At the moment I'm running mongorestore against Amazon DocumentDB (MongoDB 3.6 compatible) to test, and the issue remains.
One thing I noticed is that those object ids are peculiar:
5ce24b1169902e72c9739ff6 this works anyway
59de48f53137ec054b000004 this requires the trick
The run of zeroes toward the end of the id seems to be highly correlated with the problem (I have no idea of the reason).
That's the default:
# Raise an error when performing a #find and the document is not found.
# (default: true)
raise_not_found_error: true
Source: https://docs.mongodb.com/mongoid/current/tutorials/mongoid-configuration/#anatomy-of-a-mongoid-config
If this doesn't answer your question, it's very likely that the find method is overridden somewhere in your code!
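If you'd rather have find return nil instead of raising, you can turn the option off in mongoid.yml. A minimal sketch; the exact nesting varies a little between Mongoid versions:
development:
  options:
    raise_not_found_error: false
Keep in mind this only silences the exception; it doesn't explain why the document isn't matched by id in the first place.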

Kibana 4 gives an error "Discover: An error occurred with your request. Reset your inputs and try again"

Kibana 4 gives the error "Discover: An error occurred with your request. Reset your inputs and try again" 80% of the time when I try to sort by a numeric field. It works fine when sorting by any other field. Did anyone get this issue?
I had this when I had added a sequence number in Logstash to the index (because several log lines could be added in the same millisecond, causing the sort not to show the ordering correctly).
If you open up the Firefox debugger and view the console, it will show you more information related to the error. In my case:
java.lang.Long cannot be cast to org.apache.lucene.util.BytesRef
I added
{ "unmapped_type": "number" }
into the advanced settings under sort:options. It returns sorted data correctly but appears to throw a yellow warning.
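That cast error usually means the same field is mapped as a string in some indices and as a number in others. You can check the mapping across an index pattern with the field-mapping API; a sketch, where logstash-* and sequence are placeholders for your own index pattern and field:
curl 'http://localhost:9200/logstash-*/_mapping/field/sequence?pretty'
If different indices report different types for the field, that conflict is the likely culprit, and reindexing with a consistent mapping fixes it properly.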
Yes, I had that issue, and opening the browser's JavaScript console helped me see that a non-JSON document was the problem. Apparently you can store non-JSON in Elasticsearch (at least I can with 1.6.2), and that creates problems for Kibana.
So: open the browser's console and look for "error parsing body" or something similar. You should also get the faulty string; use it to identify the culprit document.

Using a function query from Solr

I'm trying to calculate the tf*idf of a term in my index.
Following Yonik's post at http://yonik.com/posts/solr-relevancy-function-queries/ I tried:
http://localhost:8080/solr/select/?fl=score,id&defType=func&q=mul(tf(texto_completo,bug),idf(texto,bug))
(where texto_completo is the field and 'bug' is the term), without much success. The response was:
error 400: The request sent by the client was syntactically incorrect (null).
I went ahead and looked at this answer (https://stackoverflow.com/a/13477887) and tried a simpler function query:
http://localhost:8080/solr/select/?q={!func}docFreq(texto_completo,bug)
And yet, I got the same error.
What is my syntax lacking to work properly?
For the query that isn't working:
q={!func}docFreq(texto_completo,bug)
use all lower-case docfreq:
q={!func}docfreq(texto_completo,bug)
I just tried:
q={!func}mul(tf(name,movie),idf(name,movie))
in Solr 4.2.1 and it is working fine. My field is name (a text type) and the term I am looking for is movie.
UPDATE: You need at least Solr 4.0 to use these relevance functions. See http://wiki.apache.org/solr/FunctionQuery#Relevance_Functions
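Putting the pieces together, a corrected version of the original URL might look like the following (assuming texto_completo is the intended field for idf as well as tf; the original passed texto to idf, and referencing a nonexistent field will also make the query fail):
http://localhost:8080/solr/select/?fl=score,id&q={!func}mul(tf(texto_completo,bug),idf(texto_completo,bug))
Note that curly braces in URLs may need percent-encoding (%7B and %7D) depending on the servlet container.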

Splunk-client (with Nokogiri) giving Undefined Namespace Prefix

I'm using splunk-client to extract results from splunk. Here's the code:
query = "sourcetype=collection #{order_id}"
search = @splunk_client.search(query)
search.wait
The search is happening fine, and it seems like I'm doing everything according to the example (https://github.com/cbrito/splunk-client), but I get this error on the search.wait line:
Undefined namespace prefix: //s:key[@name='isDone']
Any ideas what could be going wrong? Running these commands in irb works fine. Is there some sort of blocking issue?
There is currently very little error checking within the gem itself. The reason for the error is that wait looks for the status of the isDone key to change to true.
Since your credentials were not set up properly in the first place, the gem created a search object with an invalid session. The search does not fail immediately, because enough of a response comes back from Splunk that Nokogiri can process it into an object, just without a Splunk search sid.
In the future I should probably raise an exception when a proper sid is not returned, to avoid confusion.
Source: I wrote the gem.
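Until the gem does that, a guard along these lines can surface the problem earlier. A minimal sketch; the sid accessor is hypothetical and may be named differently on the job object, depending on the gem version:
search = splunk_client.search(query)
# Hypothetical accessor: verify Splunk actually returned a search id
if search.sid.to_s.empty?
  raise "No search sid returned -- check your Splunk credentials"
end
search.wait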
I found out the issue: the Splunk client wasn't authenticating properly, so search was actually a broken SplunkJob object (with a nil username and authentication key). It's strange that no error was raised until the wait call, but upon inspecting the search object, one of its fields showed that the object was malformed.
