I'm using Solr with Sunspot (Ruby) and, due to other constraints, I have to use the Lucene parser instead of the DisMax parser. I need to be able to search the username and first_name fields at the same time.
If I were using DisMax I could specify qf="username+first_name", but with the Lucene parser I can only set df (default field), and it will not let me specify more than one field.
How can I search multiple fields using the lucene parser?
Update: the answer is to just build the query in the q parameter:
adjust_solr_params do |params|
  params[:defType] = "lucene"
  params[:q] = "username:\"#{params[:q]}\" OR first_name:\"#{params[:q]}\""
end
You can use copyField directives in your schema to create a "catch-all" field from all the fields you want to search on. You then set df to that field.
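For example, with Solr's Schema API this could look roughly like the following (a sketch only; the core name, field name, and field type here are illustrative, and the same effect can be achieved with copyField entries in schema.xml):

POST /solr/mycore/schema
{
  "add-field": {
    "name": "text_all",
    "type": "text_general",
    "indexed": true,
    "stored": false,
    "multiValued": true
  },
  "add-copy-field": { "source": "username", "dest": "text_all" },
  "add-copy-field": { "source": "first_name", "dest": "text_all" }
}

After reindexing, setting df=text_all makes plain Lucene queries search both source fields.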
To expand on Karussell's comment, the default field is just that: the default. You can explicitly specify as many fields as you want; the default only comes into play when you don't specify one.
So a query like username:foo first_name:bar will find documents matching either field (with the default OR operator), while username:foo AND first_name:bar requires both a username of "foo" and a first_name of "bar".
Related
I have a preexisting index that contains field mappings and is currently being queried by many applications. I would like to add additional ways for the data to be queried, specifically to support full-text search via analysis. Multi-fields seemed like the obvious way to do this, but I found that adding new multi-fields actually changes the existing query behavior.
For example, I have an "id" field that is a keyword. Applications are already using this field to query on. After I add a new multi-field, like "txt" (using the standard analyzer), new documents can be found by querying with just a partial value match. Values for "id" look like this: "123-abc" so now a query with just "abc" will match when querying against the "id" field. This is not how it worked previously (the keyword only field would require the entire value "123-abc").
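For context, the mapping described here would look roughly like this (a sketch; the index name is illustrative):

PUT my_index
{
  "mappings": {
    "properties": {
      "id": {
        "type": "keyword",
        "fields": {
          "txt": {
            "type": "text",
            "analyzer": "standard"
          }
        }
      }
    }
  }
}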
Ideally, the top-level "id" field would be keyword only, and if a "full text" search was required, the query would need to specify "id.txt". So my question is... is there a way to disable multi-fields and require that the query explicitly set a sub field when needed?
My only other thought on how to solve this, was to use copy_to so that these fields are completely distinct... but that is a bit more work and there are many many fields to deal with that would require this.
I would like to create an SQL query on a text field (not keyword), for example the "name" field, and send that query to the Elasticsearch server.
My problem is that I need to use the standard SQL language (not the MATCH and QUERY operators, which are specific to Elasticsearch SQL) on text fields.
When I tried to use the JDBC driver, or the high-level Java client, with the LIKE operator, I got the following error:
"No keyword/multi-field defined exact matches for [name]; define one or use MATCH/QUERY instead"
I also tried to use Elasticsearch's translate API, but even there I couldn't use the LIKE operator on text fields, only on keyword fields.
Does anyone have a solution for me? I want to use the LIKE operator on text fields instead of the full-text operators that are unique to Elasticsearch SQL.
Please check this documentation. It clearly states that this is not possible:
One significant difference between LIKE/RLIKE and the full-text search predicates is that the former act on exact fields while the latter also work on analyzed fields. If the field used with LIKE/RLIKE doesn't have an exact not-normalized sub-field (of keyword type) Elasticsearch SQL will not be able to run the query. If the field is either exact or has an exact sub-field, it will use it as is, or it will automatically use the exact sub-field even if it wasn't explicitly specified in the statement.
If you still want to use a text field then you need to enable a multi-field as mentioned here, or you can try enabling fielddata on the text field, but I am not sure whether that will work with SQL or not.
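In practice that means giving the text field an exact keyword sub-field, along these lines (a sketch; the index name is illustrative):

PUT my_index
{
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword" }
        }
      }
    }
  }
}

With that mapping, a statement such as SELECT name FROM my_index WHERE name LIKE '123%' can run, because Elasticsearch SQL automatically falls back to the exact name.keyword sub-field.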
I have a use case which is a bit similar to the ES example for dynamic_template, where I want certain strings to be analyzed and others not.
My document fields don't have such a convention and the decision is made based on an external schema. So currently my flow is:
I grab the input document from the DB
I grab the appropriate schema (same database, currently using Logstash for import)
I adjust the name in the document accordingly (using logstash's ruby mutator):
if not analyzed I don't change the name
if analyzed I change it to ORIGINALNAME_analyzed
This handles the analyzed/not_analyzed problem thanks to the dynamic_template I set, but now the user doesn't know which fields are analyzed, so there's no easy way for them to write queries because they don't know what the field is called.
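For reference, a dynamic_templates setup following that naming convention might look roughly like this (a sketch; on older Elasticsearch versions the mappings would use string with index: not_analyzed instead of text/keyword):

PUT my_index
{
  "mappings": {
    "dynamic_templates": [
      {
        "analyzed_strings": {
          "match": "*_analyzed",
          "match_mapping_type": "string",
          "mapping": { "type": "text" }
        }
      },
      {
        "raw_strings": {
          "match_mapping_type": "string",
          "mapping": { "type": "keyword" }
        }
      }
    ]
  }
}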
I wanted to use field name aliases, but apparently ES doesn't support them. Are there any other mechanisms I'm missing that I could use here, like renaming fields after indexing, or something else?
For example, this ancient thread mentions that field.sub.name can be queried as just name, but I'm guessing this changed when they disallowed . in field names some time ago, since I cannot get it to work.
Let the user only create queries with the original name. I believe you have some code that converts this user query into an Elasticsearch query. When converting, instead of using only the field name provided by the user, use both field names, ORIGINALNAME as well as ORIGINALNAME_analyzed. If you are using a match query, convert it to a multi_match. If you are using a term query, convert it to a bool should query. I guess you get where I am going with this.
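For example, a user query on a field called title (an illustrative name) could be expanded like this, so whichever of the two variants actually exists gets searched:

{
  "query": {
    "multi_match": {
      "query": "user input here",
      "fields": ["title", "title_analyzed"]
    }
  }
}

and a term query on the same field could become:

{
  "query": {
    "bool": {
      "should": [
        { "term": { "title": "user input here" } },
        { "term": { "title_analyzed": "user input here" } }
      ]
    }
  }
}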
Elasticsearch won't mind if a field does not exist. This can be a problem if there is already a field with _analyzed appended to its original name, but with some tricks that can be fixed too.
As I understand it, Elasticsearch searches the magic _all field by default. The problem with this seems to be that if a field uses a different index analyzer, the analyzed data from this field is not searched.
I've had success with searching on the fields ['domain', '_all'] but I really need to avoid having to manually specify each field which was analyzed differently. I see fields supports wildcards, but seemingly not '*' on its own. I could do a*, b*, c*, d*, etc., but this seems a tad inefficient.
The special field _all is discontinued and copy_to can be used instead, as per the official documentation. This approach allows one to create a field (managed by Elasticsearch) into which data from other fields is copied, to mimic an _all search.
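A rough sketch of such a copy_to mapping (the index name and the title field are illustrative; domain is taken from the question above):

PUT my_index
{
  "mappings": {
    "properties": {
      "all_text": { "type": "text" },
      "domain": { "type": "text", "copy_to": "all_text" },
      "title": { "type": "text", "copy_to": "all_text" }
    }
  }
}

Queries can then simply target all_text, much as they used to target _all.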
However, there is an alternative approach: use multi_match and provide wildcard field names as part of the query. This works much like the earlier mechanism of searching the _all field:
{"multi_match":{"query":"java","fields":["*"]}}]}}
I'm integrating Elasticsearch into an asset tracking application. When I set up the mapping initially, I envisioned the 'brand' field being a single-term field like 'Hitachi' or 'Ford'. Instead, I'm finding that the brand field in the actual data contains multiple terms like "MB 7 A/B", "B-7" or even "Brush Bull BB72X".
I have an autocomplete component set up now that I configured to do autocomplete against an edgeNGram field and perform the actual search against an nGram field. It's completely useless the way I set it up, because users expect the search results to be restricted to what the autocomplete matches.
Any suggestions on the best way to set up my mapping to support autocomplete and subsequent searches against a multi-term field like this? I'm considering a terms query against a keyword field, or possibly a match query with 'and' as the operator. I also have to deal with hyphens, as in "B-7".
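As a sketch of the match-with-and idea mentioned above (the index name is illustrative), this restricts hits to documents whose brand field contains every term the user typed:

GET my_index/_search
{
  "query": {
    "match": {
      "brand": {
        "query": "Brush Bull BB72X",
        "operator": "and"
      }
    }
  }
}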
You can use the phrase suggester; the general suggesters guide is here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-suggesters.html
The phrase suggester guide is here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-suggesters-phrase.html