Unable to set max_clause_count on Elasticsearch 2.4.5 - Magento

I have set up Elasticsearch for Magento 2 Enterprise Edition. When trying to load a category page, I get this error:
(Elasticsearch\Common\Exceptions\ServerErrorResponseException): too_many_clauses: maxClauseCount is set to 1024
When I try to update elasticsearch.yml with index.query.bool.max_clause_count: 10024, as suggested at http://devdocs.magento.com/guides/v2.1/config-guide/elasticsearch/es-overview.html, Elasticsearch does not start.
I have read that setting maxClauseCount was deprecated in Elasticsearch 5 (https://www.elastic.co/guide/en/elasticsearch/reference/5.0/breaking_50_settings_changes.html#_search_settings), but I am using version 2.4.5.
How do I go about setting maxClauseCount?
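For reference, a minimal sketch of what that node-level setting should look like in elasticsearch.yml on a 2.x node (the value 10024 comes from the Magento docs page linked above; the idea that a YAML syntax slip is what keeps the node from starting is only an assumption worth checking, since YAML requires a space after the colon and no stray indentation):
# elasticsearch.yml (Elasticsearch 2.x, node-level setting; restart required)
# note: exactly one space after the colon, no leading indentation
index.query.bool.max_clause_count: 10024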

Related

SQL Error when querying any tables/views on a Databricks cluster via DBeaver

I am able to connect to the cluster, browse its Hive catalog, and see tables/views and columns/datatypes.
Running a simple SELECT statement from a view over a parquet file produces this error and no other results:
SQL Error [500540] [HY000]: [Databricks][DatabricksJDBCDriver](500540) Error caught in BackgroundFetcher. Foreground thread ID: 180. Background thread ID: 223. Error caught: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available.
Standard Databricks cluster:
Standard_DS3_v2
JDBC URL:
jdbc:databricks://<redacted>.1.azuredatabricks.net:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/<redacted>/<redacted>;AuthMech=3;UID=token;PWD=<redacted>
Advanced Options Spark Config:
spark.databricks.cluster.profile singleNode
spark.databricks.io.directoryCommit.createSuccessFile false
spark.master local[*, 4]
spark.driver.extraJavaOptions -Dio.netty.tryReflectionSetAccessible=true
spark.hadoop.fs.azure.account.key.<redacted>.blob.core.windows.net <redacted>
spark.executor.extraJavaOptions -Dio.netty.tryReflectionSetAccessible=true
parquet.enable.summary-metadata false
My local machine:
DBeaver version: 22.1.2.202207091909
macOS version (M1 chip): Monterey 12.4
Java version:
java --version
openjdk 18.0.1 2022-04-19
OpenJDK Runtime Environment Homebrew (build 18.0.1+0)
OpenJDK 64-Bit Server VM Homebrew (build 18.0.1+0, mixed mode, sharing)
I am able to do the following with no errors (Databricks default test dataset):
CREATE TABLE diamonds USING CSV OPTIONS (path "/databricks-datasets/Rdatasets/data-001/csv/ggplot2/diamonds.csv", header "true");
When I run select color from diamonds; or select * from diamonds; I get this:
SQL Error [500618] [HY000]: [Databricks][DatabricksJDBCDriver](500618) Error occured while deserializing arrow data: sun.misc.Unsafe or java.nio.DirectByteBuffer.<init>(long, int) not available
Hence, any select query on any object (a parquet file or anything else) causes the error described above.
What could be the problem? Any recommendations on how to resolve this error? Why am I able to connect and see the metadata of the schemas/tables/views/columns, but not query or view the data?
P.S. I followed this guide exactly: https://learn.microsoft.com/en-us/azure/databricks/dev-tools/dbeaver#step-3-connect-dbeaver-to-your-azure-databricks-databases
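The stack trace points at Apache Arrow result-set deserialization being blocked on newer JVMs (the sun.misc.Unsafe / DirectByteBuffer access it mentions), and you are on OpenJDK 18; metadata browsing does not go through Arrow, which would explain why the catalog is visible while every query fails. A hedged sketch of one possible workaround, assuming DBeaver picks up JVM options from its dbeaver.ini (the file path and flags below are assumptions to verify against your install, not a confirmed fix):
# dbeaver.ini — add below the -vmargs line
# (on macOS typically /Applications/DBeaver.app/Contents/Eclipse/dbeaver.ini)
-Dio.netty.tryReflectionSetAccessible=true
--add-opens=java.base/java.nio=ALL-UNNAMED
Running DBeaver on a Java 11 runtime instead of 18 is another commonly suggested route.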

FSCrawler 2.3 with Elasticsearch 5.5 getting error "String index out of range"

I have Elasticsearch 5.5 with X-Pack working without any issue.
But when I try to use FSCrawler 2.3 on a folder, I get this error:
WARN [f.p.e.c.f.FsCrawlerImpl]
Error while crawling c:/tmp/es: String index out of range: -1
What am I doing wrong?
Try using escaped backslashes in the url of your _settings.json, like "C:\\tmp\\es".
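A minimal sketch of where that goes, assuming FSCrawler defaults (the settings file lives at ~/.fscrawler/<job_name>/_settings.json, and the job name below is a placeholder):
{
  "name": "my_job",
  "fs": {
    "url": "C:\\tmp\\es"
  }
}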

Logstash 5 alpha4 to Elasticsearch 5 alpha4 communication error

Elasticsearch 5 is secured with X-Pack security and hooked up to LDAP, which is working fine. The user even has admin rights in role_mapping.
The Logstash 5 output configuration is as below:
output {
  elasticsearch {
    hosts    => ['localhost:9200']
    user     => 'gaurav#gmail.com'
    password => 'pwd'
  }
}
I am getting the error below, because of which Logstash is not able to pass data to Elasticsearch:
{:timestamp=>"2016-07-14T16:32:29.592000+0530",
:message=>"Encountered an unexpected error submitting a bulk request! Will retry.",
:error_message=>"undefined method `code' for #",
:class=>"NoMethodError",
:backtrace=>[
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:217:in `safe_bulk'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:105:in `submit'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:72:in `retrying_submit'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:23:in `multi_receive'",
"org/jruby/RubyArray.java:1653:in `each_slice'",
"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-4.1.1-java/lib/logstash/outputs/elasticsearch/common.rb:22:in `multi_receive'",
"/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:136:in `threadsafe_multi_receive'",
"/usr/share/logstash/logstash-core/lib/logstash/output_
I think I may have figured it out. I am using the logstash:5.1.1-alpine Docker image. As far as I can tell, it comes with the logstash-output-elasticsearch plugin v4.5.0, which seems to have this bug. Forcing an update of that plugin to the latest (6.2) fixed this issue.
My Dockerfile is now:
FROM logstash:5.1.1-alpine
RUN $LOGSTASH_PATH/logstash-plugin install --version 6.2.0 logstash-output-elasticsearch
With the updated plugin, I no longer see this error.
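If you want to confirm which plugin version an image actually ships with, before and after the change, a sketch using the same logstash-plugin binary as the Dockerfile above:
# run inside the container / image
$LOGSTASH_PATH/logstash-plugin list --verbose logstash-output-elasticsearch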

How to enable dynamic scripting in Elasticsearch with its gem?

I'm getting the error nested: ScriptException[dynamic scripting for [groovy] disabled] because of this aggregation I'm making:
agg :category_aggregation do
  {
    terms: {
      script: "doc['categories.id'].value + '|' + doc['categories.name'].value",
      size: 30
    }
  }
end
I'm using the official elasticsearch gem and also tried chewy, but couldn't find how to enable dynamic scripting anywhere.
Elasticsearch version on my OS X machine: 1.5.2, installed with Homebrew.
Dynamic scripting can only be enabled from the elasticsearch.yml configuration file in your ES cluster.
Add this to the file on every node and restart your cluster:
script.disable_dynamic: false
UPDATE
Since you've installed ES via Homebrew, you can find the elasticsearch.yml file in /usr/local/Cellar/elasticsearch/1.5.2/config.
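To check that the setting took effect after the restart, a quick sketch against the REST API that reuses the aggregation from the question (the index name your_index is a placeholder):
curl -XPOST 'localhost:9200/your_index/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "category_aggregation": {
      "terms": {
        "script": "doc[\"categories.id\"].value + \"|\" + doc[\"categories.name\"].value",
        "size": 30
      }
    }
  }
}'
If dynamic scripting is still disabled, the same ScriptException comes back in the response body.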

Magento Upgrade 1.3 to 1.4: PEAR Error "Invalid Argument foreach"

Hello, I am trying to upgrade Magento 1.3 to 1.4 using this guide: http://astrio.net/blog/magento-upgrade-guide/
I tried the command ./pear upgrade -f magento-core/Mage_All_Latest-stable
I got this error:
Notice: Array to string conversion in PEAR/REST/10.php on line 85
PHP Notice: Array to string conversion in /home/www/sss/staging.mysite.net/public/downloader/pearlib/php/PEAR/REST/10.php on line 85
So I tried the suggestions from "Magento upgrade PEAR error" (specifically the command ./pear channel-update connect.magentocommerce.com/core), but that gives me this error:
Updating channel "connect.magentocommerce.com/core"
Channel "connect.magentocommerce.com/core" is not responding over http://, failed with message: File http://connect.magentocommerce.com:80/core/channel.xml not valid (received: HTTP/1.1 404 Not Found
)
Trying channel "connect.magentocommerce.com/core" over https:// instead
Cannot retrieve channel.xml for channel "connect.magentocommerce.com/core" (File https://connect.magentocommerce.com:443/core/channel.xml not valid (received: HTTP/1.1 404 Not Found
))
Ideas?
You waited a bit too long to upgrade if you want to use Magento Connect to do it.
Upgrading from 1.4 to a later version through Magento Connect requires moving to the 1.5 PEAR Connect package, which switches from ./pear to ./mage. There is no equivalent path for 1.3; you cannot use Connect to upgrade from 1.3 to 1.4.
You will need to apply the Magento 1.4 upgrade manually, by downloading the full package from the Magento CE download page under the Release Archives for the 1.4 version you wish to upgrade to.
Of course, test it thoroughly on a staging server, as you will run into template issues, database upgrade issues, and so on.
