Magento SOLR doesn't return results

I am trying to integrate SOLR with Magento on my development machine. We are upgrading Magento and I want to test whether SOLR still works after the upgrade.
I am able to feed SOLR, and the stats say it has documents. In the SOLR admin, when I put in *:* as the query string, I do get the list of documents. But when I search for "maria mosters", for example, no results are returned.
I have tried SOLR 1.4.1 (which we run in production) and 3.4.0.
My schema.xml: http://pastebin.com/3a2J99re

Thank you for your replies. I finally found the answer for my case.
I found it by checking the query string that SOLR was logging. For example:
127.0.0.1 - - [28/09/2011:09:05:34 +0000] "GET /solr/select?sort=score+desc&fl=id&spellcheck=true&spellcheck.count=2&qt=magento_nl&spellcheck.collate=true&spellcheck.dictionary=magento_spell_nl&spellcheck.extendedResults=true&fq=visibility%3A4+AND+store_id%3A1&version=1.2&wt=json&json.nl=map&q=%28maria+mosterd%29&start=0&rows=1 HTTP/1.0" 400 1405
When I requested this query the first time, it said that the field visibility was unknown. Apparently this field was added by Magento in the upgraded release. I added the field to the config and ran the query again. Now it said that the dictionary magento_spell_nl did not exist.
What happened?
The new Magento has an option called "Enable Search Suggestions". In my previous Magento version this option didn't exist, so these spellcheck parameters were not added to the query string.
When I turned this setting off, I was able to use my exact copy of the production server.
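For reference, the field I had to add to schema.xml looked roughly like the line below. This is only a sketch: the type attribute must match a fieldType name defined in your own schema, and Magento may expect different indexed/stored settings.
<!-- field that Magento filters on via fq=visibility:... -->
<field name="visibility" type="int" indexed="true" stored="true"/>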

*:*
would work, as it matches all documents across all fields.
Searching for "maria mosters" is going to search the default field if you are using the standard request handler.
The default search field set in the schema is fulltext, and I don't see any copyField directives populating it.
So are you sure that field is populated?
If you are using a custom request handler via the qt parameter, are the proper fields included in it?
Sharing your solrconfig.xml and the full query might help others help you further.

It looks like the issue is that your schema defines the fulltext field as the default search field, but you are not populating that field. I would recommend either setting the default field to another field that you are populating, or specifying the field you want to search against when you execute your query, for example: text_en:"maria monsters"
Please also see the SolrQuerySyntax page on the Solr Wiki for more details.
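If you do want the default fulltext field populated, a copyField directive in schema.xml is the usual mechanism. A minimal sketch, assuming fulltext is the default search field and text_en holds the searchable product text (adjust the source field names to whatever your schema actually indexes):
<!-- copy the searchable product text into the default search field -->
<copyField source="text_en" dest="fulltext"/>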

Related

elasticsearch unknown setting index.include_type_name

I'm in a really weird situation: I need to create indices in Elasticsearch that contain typeless fields. I have a Rails application that sends data to my Elasticsearch every second. About my architecture: I run the Elastic Stack on Docker on an Ubuntu server and use a socket to send data to ELK, and all of it is the latest version.
In my Rails application the user can choose a data type for each field, but the issue happens when the user wants to change the data type of a field right after it is created; Logstash returns this error:
error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [field] of type [long] in document with id '5e760cac-cafc-4fd0-9e45-1c650967ccd4'. Preview of field's value: '2022-01-18T08:06:30'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"For input string: \"2022-01-18T08:06:30\
I found the dead letter queue plugin to save the wrong input on my server. After that I thought that if I could index documents without any type the problem would be solved, so I started googling and found "Removal of mapping types" in the Elasticsearch documentation. I followed the instructions described in the tutorials and got the following error:
unknown setting [index.include_type_name] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
Even when I put "include_type_name" in the request I send to Elasticsearch, nothing changes. I have the latest version of Elasticsearch.
I thought maybe it would help to edit the default Elasticsearch template, but nothing changed. Could you please help me with what I should do?
As already mentioned in the comments, Elasticsearch does not support changing the data type of a field without a reindex or creating a new index.
For example, if a field is mapped as a numeric field like integer and the user wants to index a string value into it, Elasticsearch will return a mapping error.
You would need to change the mapping of the index and reindex it, or create an entirely new index using the new mapping.
None of this is done automatically by Elasticsearch; you would need to deal with it in your application. You could catch the error and implement some logic to create a new index with the new mapping, but this can also lead to other problems, such as having too many indices in the cluster, and query errors when a query spans indices that have the same field mapped with different types.
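As a rough sketch of the reindex approach (the index names and the mapping below are hypothetical, not taken from your setup), you would create a new index with the corrected mapping and copy the data over with the _reindex API:
# new index with the corrected mapping for the problematic field
PUT my-index-v2
{
  "mappings": {
    "properties": {
      "field": { "type": "date" }
    }
  }
}

# copy the documents from the old index into the new one
POST _reindex
{
  "source": { "index": "my-index" },
  "dest":   { "index": "my-index-v2" }
}
After the reindex you would point your application (or an alias) at the new index.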
One Elasticsearch feature that could help you in some way is runtime fields. With runtime fields you can query a field that has a specific mapping as if it had a different mapping.
For example, if you have a field that holds date values but was wrongly mapped as a keyword or text field, you could use a runtime field to query it as if it were a date field.
But again, this requires you to implement logic to build those runtime fields, and it can also lead to other problems: not all data types are available to runtime fields, and runtime fields can impact performance.
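A minimal sketch of a search-time runtime field (the index name, field name, and Painless parsing are assumptions for illustration): query a keyword field that actually holds ISO date strings as if it were a date field:
GET my-index/_search
{
  "runtime_mappings": {
    "created_at_as_date": {
      "type": "date",
      "script": "emit(Instant.parse(doc['created_at'].value + 'Z').toEpochMilli())"
    }
  },
  "query": {
    "range": { "created_at_as_date": { "gte": "2022-01-01" } }
  }
}
Note that runtime_mappings in search requests needs a reasonably recent 7.x (or later) release, and the script runs at query time, which is where the performance cost mentioned above comes from.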
Another feature that could help you is multi-fields; this, I think, is the closest you get to having a field with multiple data types.
Using multi-fields you could have a field named date with the date type and also a field named date.keyword with the keyword type. You could likewise have a field named code with the keyword type and a field named code.int with the integer type. You would also need to use the ignore_malformed setting in the mapping so Elasticsearch does not reject the entire document in case of a mapping error, only the field with the wrong value.
Just keep in mind that when you use multi-fields you will have a different field for each mapping; for example, date is one field and date.keyword is another, which will increase storage usage.
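A minimal mapping sketch for the code / code.int example above (the index name is hypothetical):
PUT my-index
{
  "mappings": {
    "properties": {
      "code": {
        "type": "keyword",
        "fields": {
          "int": {
            "type": "integer",
            "ignore_malformed": true
          }
        }
      }
    }
  }
}
With this mapping, a document whose code value is not a valid integer is still indexed; only the code.int sub-field is skipped for that document.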
But again, none of this is done automatically; it needs logic in your application. Elasticsearch does not allow you to change the mapping of an existing field, so if your application needs this, you will have to implement something that works within these limitations of Elasticsearch.

Elasticsearch Jest update a whole document

I have an Elasticsearch server which I'm accessing from a Java server using the Jest client, and I was looking for the best way to update multiple fields of a document each time.
I have looked at the documentation so far, and I have found that there are two ways of doing it:
Partial update via a script: I don't think it is suitable for multi-field updates (because I don't know which fields were modified).
Whole document update: via re-indexing the whole document.
My question is: how can I update the whole document, knowing that Jest only provides update via a script?
Is the best way to delete the document and index the updated version?
I already answered this in the GitHub issue you also opened, but again:
You should use the second way you linked (whole document update). There is no special API for it; it's just a regular index request, so you can do it simply by sending your Index request against the id of the document you want to update.
For example, assuming you have the document below already indexed in Elasticsearch within index people, type food, id 9:
{"user": "kramer", "fav_food": "jello"}
Then you would do:
// Assumes a configured JestClient named `client`; client.execute(...) can throw IOException.
// import io.searchbox.client.JestResult;
// import io.searchbox.core.Index;
String source = "{\"user\": \"kramer\", \"fav_food\": \"pizza\"}";
JestResult result = client.execute(
        new Index.Builder(source)
                .index("people")
                .type("food")
                .id("9")   // Jest expects the document id as a String
                .build()
);

How to do "where not exists" type filtering in Kibana/ELK?

I am using ELK to create dashboards from my log files. I have a log file with entries that contain an id value and a "success"/"failure" value, displaying whether an operation with a given id succeeded or failed. Each operation/id can fail an unlimited number of times and succeed at most once. In my Kibana dashboard I want to display the count of log entries with a "failure" value for each operation id, but I want to filter out cases where a "success" log entry for the id exists. i.e. I am only interested in operations that never succeeded. Any hints for tricks that would achieve this?
This is easy in the Kibana 5 search bar. Just add a filter:
!(_exists_:"your_variable")
You can toggle the filter, or write the inverse query as
_exists_:"your_variable"
In Kibana 4 and Kibana 3 you can use this query, which is now deprecated:
_missing_:"your_variable"
NOTE: In Elasticsearch 7.x, Kibana now has a pull down to select KQL or Lucene style queries in the search bar. Be mindful that syntax such as _exists_:FIELD is a Lucene syntax and you need to set the pulldown accordingly.
In newer ELK versions (I think after Elasticsearch 6) you should use field:* to check whether the field exists, and NOT field:* to check whether it is missing.
Elasticsearch reference:
https://www.elastic.co/guide/en/elasticsearch/reference/6.5/query-dsl-query-string-query.html#_wildcards
! (_exists_:NAME) was not working for me. I used the suggestion from:
https://discuss.elastic.co/t/kibana-5-0-0--missing--is-not-working-anymore/64336
NOT _exists_:NAME
UPDATE: The problem I faced is that the ES query syntax forbids spaces after negation operators. Use one of:
NOT _exists_:FIELD
!_exists_:FIELD
-_exists_:FIELD
Check tutorial: https://www.timroes.de/2016/05/29/elasticsearch-kibana-queries-in-depth-tutorial/
In newer versions of Kibana the default query language is now KQL (Kibana Query Language), not Lucene, so most answers here are outdated. The query to check whether a field exists is the following:
your_variable:*
and to answer your question, you can just negate that:
not your_variable:*
You can find more documentation here: https://www.elastic.co/guide/en/kibana/7.15/kuery-query.html
You can also toggle back to Lucene by clicking the button inside the search field, but in my opinion the new language is way easier to use.
One option would be to create your own query for this criterion in Kibana, and then have the panel that does the counting use that query.
value:failure
More information here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl-query-string-query.html#query-string-syntax

Elasticsearch not searching some fields

I have just updated a website; the update adds new fields to Elasticsearch.
In my dev environment it all works fine, but on the live site the new fields are not being found.
E.g. I have added a new field with the value: 1
However, when adding a filtered query of
{"field":1}
It does not find any matching results.
When I look in the documents, I can see docs with the field set to 1.
Would the reason for this be that the new field was added after the mapping was set? I am not all that familiar with Elasticsearch, so I am not really sure where to start looking to fix it.
Any help would be appreciated.
Update:
Querying from the URL shows nothing either:
_search/?pretty=true&size=50&q=field1:*
However, there is another field that was added at the same time which I can search on.
I can see field1 in the result set, but it just won't allow me to search on it.
The only difference I see in the mapping is that the field that is working is set to type:long, whereas the one that is not working is set to type:string.
Is it a length issue on the ngram? What is your "min_gram" setting?
When you check on your index settings like this:
GET <host>/<index_name>/_settings
Does it work when you filter for a two-digit value?
Are all the field values one digit?
It's OK to add a field after the mapping was set. Elasticsearch will guess the mapping for you (in fact, it's one of its selling features: no need to define the mapping, just throw the data at it).
There are a few things that can go wrong:
Verify that data is actually in the index. To do that, just navigate to the _search URL with no parameters; you should see the field if it is indexed.
Look at your mapping. Could it be that the field is explicitly set not to be indexed? (See the sketch after this list.)
Another possibility is that your query is wrong (but that is unlikely, since you're saying it works in the development environment).
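A quick way to compare the two environments, using the same URL style as your example (your_index and field1 are placeholders for your own names): pull the mapping on both and check how field1 is typed, then try querying the field explicitly.
your_index/_mapping?pretty=true
your_index/_search?pretty=true&q=field1:1
If the live mapping shows field1 as an analyzed string while dev shows it as long, that difference in mapping is the most likely cause.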

Magento 1.3.x Quick Search problem

We have built a website, http://www.goshopping.pk/ (sorry, had to post the link as it's important for this question).
The quick search is not working as it should. For example, search for "Nokia" and you will get all sorts of results. Search for "Dell" and you get the same results. However, searching for exact matches like "nokia 6600" or "Intel Core 2 DUO" or "Dell Inspiron" works perfectly fine.
We have rebuilt the search index, emptied the cache, etc., but it has no effect. What are we missing?
Help appreciated. Thanks!
One quick tip I normally give people is to remove the description from quick search results in Catalog > Manage Attributes > Attributes.
Obviously the description contains all sorts of words and can dilute search results. See if that improves anything.
Also, in Configuration > Catalog I normally change the Search Type to Fulltext for more accurate results.
Based on the suggestion from Adam, we were able to resolve this. Here is what we did, if anyone needs it for future reference:
We had about 400 attributes defined, and a lot of them had been set by our client to be included in quick search. We manually ran a query via phpMyAdmin against the eav_attribute table and updated ALL records to have is_searchable=0 (a sketch of the queries is below).
We then manually set the title and description records in the eav_attribute table back to is_searchable=1.
We rebuilt the search index via the Magento admin and all was good.
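A rough sketch of those queries (the attribute codes are assumptions; check your own attribute_code values and back up the table before running anything like this):
-- turn quick search off for every attribute
UPDATE eav_attribute SET is_searchable = 0;
-- turn it back on only for the product name/title and description attributes
UPDATE eav_attribute SET is_searchable = 1 WHERE attribute_code IN ('name', 'description');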
Best,
K
