Elasticsearch 2.2.1 won't start when I add the line
script.disable_dynamic: false
to my elasticsearch.yml file as shown here.
What could be causing this?
Check the instructions for the appropriate version; the link you provided is for version 1.6. For Elasticsearch 2.x, use the fine-grained script settings instead:
script.inline: true
script.indexed: true
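After adding those lines to elasticsearch.yml, restart the node and check that it comes up. A minimal check, assuming a default service-managed install listening on port 9200:

sudo service elasticsearch restart
curl -XGET 'http://localhost:9200/'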
I'm trying to get GrumPHP to work with a small Laravel 9 project but php-cs-fixer is being pulled from the wrong location and I can't seem to find how to change this.
Error from GrumPHP:
phpcsfixer
==========
PHP needs to be a minimum version of PHP 7.1.0 and maximum version of PHP 7.4.*.
You can fix errors by running the following command:
'/windir/f/wamp64/vendor/bin//php-cs-fixer' '--config=./home/testuser/php-cs-config.php_cs' '--verbose' 'fix'
Seems like an easy fix, so I updated php-cs-fixer and followed the upgrade guide to get to v3 (currently sitting on 3.10). But I can also see that '/windir/f/wamp64/vendor/bin//php-cs-fixer' is not the correct directory for php-cs-fixer; the actual bin folder is located in WSL, not the Windows directory, so I included a GRUMPHP_BIN_DIR in the grumphp.yml, but still no luck.
grumphp.yml
grumphp:
  environment:
    variables:
      GRUMPHP_PROJECT_DIR: "."
      GRUMPHP_BIN_DIR: "./home/testuser/tools/vendor/bin/"
    paths:
      - './home/plustime/tools'
  tasks:
    phpcsfixer:
      config: "./home/testuser/php-cs-config.php_cs"
      allow_risky: ~
      cache_file: ~
      rules: []
      using_cache: ~
      config_contains_finder: true
      verbose: true
      diff: false
      triggered_by: ['php']
I can't seem to find much about this or anything in the docs, so any help would be appreciated.
This ended up coming down to altering how WSL constructs the environment, to stop WSL from building Windows paths into the Linux distribution's PATH.
The answer was found here:
How to remove the Win10's PATH from WSL
Quick rundown:
On the WSL instance, open the file /etc/wsl.conf with something like:
sudo nano /etc/wsl.conf
Add the following lines to the bottom of the file:
[interop]
appendWindowsPath = false
Mine looked like this when it was finished:
# Enable extra metadata options by default
[automount]
enabled = true
root = /windir/
options = "metadata,umask=22,fmask=11"
mountFsTab = false
# Enable DNS – even though these are turned on by default, we'll specify here just to be explicit.
[network]
generateHosts = true
generateResolvConf = true
[interop]
appendWindowsPath = false
Then shut down the WSL instance from your Windows terminal and restart it:
wsl --shutdown
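Back inside WSL you can confirm that the Windows directories are no longer on the PATH and that the WSL copy of php-cs-fixer is picked up. A quick check, assuming php-cs-fixer lives under the vendor/bin path from the config above:

echo $PATH | tr ':' '\n' | grep -i wamp    # should print nothing
which php-cs-fixer                         # should resolve to the WSL vendor/bin path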
GrumPHP is now using the correct php-cs-fixer.
I have a GKE cluster with SonarQube deployed.
Also, we added Istio and changed the working path from http://IP_ADDRESS to http://IP_ADDRESS/sonarqube/.
Now we get an error because SonarQube tries to find general files at http://IP_ADDRESS, but it should look at http://IP_ADDRESS/sonarqube/.
We use https://github.com/helm/charts/tree/master/stable/sonarqube for deployment.
How can I change the working path, and which value should I change?
Please help.
Added:
livenessProbe:
  sonarWebContext: /sonarqube/
readinessProbe:
  sonarWebContext: /sonarqube/
extraEnv:
  sonar.web.context: /sonarqube
Now it works.
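For completeness, a sketch of how those values might be applied, assuming the chart was installed as a release named sonarqube from the stable repo and the overrides live in a values.yaml file:

helm upgrade sonarqube stable/sonarqube -f values.yaml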
You can set this property: sonar.web.host.
I'm adding the gosec linter to golangci-lint and everything is covered except the following:
exec.Command(params[0], params[1:]...)
I know that I can disable this lint but I don't want to do it. Is there a way to fix the code to satisfy this lint?
The error is:
G204: Subprocess launched with function call as argument or cmd arguments
Instead of disabling the linter you could exclude the specific line with an annotation:
exec.Command(params[0], params[1:]...) //nolint:gosec
If you want to disable only this check, you can:
exec.Command(params[0], params[1:]...) // #nosec G204
Hardcode the command call. There are no other options, as far as I can see.
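A minimal sketch of that idea: validate the requested command against a hypothetical whitelist before calling exec.Command. Whether gosec still flags the call depends on the version, so the annotation from above is kept as a fallback; none of the names below come from the question.

package main

import (
	"fmt"
	"os/exec"
)

// allowed is a hypothetical whitelist of commands this program may run.
var allowed = map[string]bool{"ls": true, "git": true}

// runWhitelisted refuses anything outside the whitelist before shelling out.
func runWhitelisted(params []string) error {
	if len(params) == 0 || !allowed[params[0]] {
		return fmt.Errorf("command %q is not allowed", params)
	}
	cmd := exec.Command(params[0], params[1:]...) // #nosec G204 -- validated against the whitelist above
	return cmd.Run()
}

func main() {
	if err := runWhitelisted([]string{"ls", "-la"}); err != nil {
		fmt.Println(err)
	}
}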
Update: starting from version 1.40, gosec options are customizable; see the example config .golangci.example.yml in the https://github.com/golangci/golangci-lint repository.
linters-settings:
  gosec:
    # To select a subset of rules to run.
    # Available rules: https://github.com/securego/gosec#available-rules
    includes:
      - G401
      - G306
      - G101
    # To specify a set of rules to explicitly exclude.
    # Available rules: https://github.com/securego/gosec#available-rules
    excludes:
      - G204
    # To specify the configuration of rules.
    # The configuration of rules is not fully documented by gosec:
    # https://github.com/securego/gosec#configuration
    # https://github.com/securego/gosec/blob/569328eade2ccbad4ce2d0f21ee158ab5356a5cf/rules/rulelist.go#L60-L102
    config:
      G306: "0600"
      G101:
        pattern: "(?i)example"
        ignore_entropy: false
        entropy_threshold: "80.0"
        per_char_threshold: "3.0"
        truncate: "32"
I have a pretty big terms query in Elasticsearch, so I get:
too_many_clauses: maxClauseCount is set to 1024
I tried increasing it in elasticsearch.yml with:
index:
  query:
    bool:
      max_clause_count: 10240
and via
curl -XPUT "http://localhost:9200/plastic/_settings" -d '{ "index" : { "max_clause_count" : 10000 } }'
but nothing worked. My index is named plastic.
In Elasticsearch 5, index.query.bool.max_clause_count has been deprecated/removed.
Add indices.query.bool.max_clause_count: n to your elasticsearch.yml file instead (where n is the new maximum number of clauses).
NOTE: Here is a link to the documentation.
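For the value from the question, that would be a single line in elasticsearch.yml on every node, followed by a restart:

indices.query.bool.max_clause_count: 10240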
Background
NOTE: The advice given in this answer only applies to versions of Elasticsearch below 5.5. The method described references a property that was eventually removed in 5.5.
Search Settings
The setting index.query.bool.max_clause_count has been removed. In order to set the maximum number of boolean clauses indices.query.bool.max_clause_count should be used instead.
Ref - Breaking changes in 5.0 » Settings changes
Original Answer
Add the following:
index.query.bool.max_clause_count: 10240
to the elasticsearch.yml file on each node of the cluster, then restart the nodes (any change to the config file requires a restart).
This can also be achieved by updating the configuration inside the elasticsearch.yml file (inside the config folder of the Elasticsearch installation):
indices.query.bool.max_clause_count: 4096
If you're looking to make changes to your elasticsearch.yml file in a cloud deployment, see Add Elasticsearch user settings for the steps to follow and the changes you're allowed to make. Note that there is a limit to the settings you can change in a cloud deployment.
If you're looking to make changes to your local cluster, you can make the change directly in your elasticsearch.yml file, located at "elasticsearch-x.x.x/config/elasticsearch.yml", by simply adding a line with the required setting. For example, to change indices.query.bool.max_clause_count from its default value of 1024 to 4096, add the line indices.query.bool.max_clause_count: 4096 to your elasticsearch.yml file.
Remember that indices.query.bool.max_clause_count is 1024 by default; the elasticsearch.yml file does not explicitly contain a line stating this value, yet the cluster still falls back to 1024. So the only way to change it is to explicitly add the line "indices.query.bool.max_clause_count: 4096". By including this line with your preferred value in your elasticsearch.yml file, you have modified indices.query.bool.max_clause_count for your cluster.
After adding this line and saving the changes to your elasticsearch.yml file, start Elasticsearch and then Kibana (if you're interacting with Elasticsearch through Kibana). You can verify your setting by running GET /_cluster/settings/?include_defaults in Kibana, or curl -XGET "http://localhost:9200/_cluster/settings/?include_defaults" on the command line, then look for max_clause_count in the output to verify the value of indices.query.bool.max_clause_count.
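A slightly more targeted check from the command line, assuming a local node on port 9200, uses flat_settings and filters the output down to the relevant key:

curl -s "http://localhost:9200/_cluster/settings?include_defaults=true&flat_settings=true" | grep max_clause_count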
I have an elasticsearch cluster (ELK) and some nodes sending logs to the logstash using filebeat. All the servers in my environment are CentOS 6.5.
The filebeat.yml file in each server is enforced by a Puppet module (both my production and test servers got the same configuration).
I want to have a field in each document which tells if it came from a production/test server.
I wanted to generate a dynamic custom field in every document which indicates the environment (production/test) using filebeat.yml file.
In order to work this out, I thought of running a command which returns the environment (it is possible to know the environment through Facter) and adding it under an "environment" custom field in the filebeat.yml file, but I couldn't find any way of doing so.
Is it possible to run a command through filebeat.yml?
Is there any other way to achieve my goal?
In your filebeat.yml:
filebeat:
  prospectors:
    -
      paths:
        - /path/to/my/folder
      input_type: log
      # Optional additional fields. These fields can be freely picked
      # to add additional information to the crawled log files
      fields:
        mycustomvar: production
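Since the question mentions that filebeat.yml is managed by Puppet and the environment is known through Facter, one way to make the field dynamic is to interpolate a fact in the Puppet template that generates the file. This is only a sketch: my_environment is a hypothetical custom fact, not something Filebeat or Puppet ships with.

# filebeat.yml.erb (hypothetical Puppet ERB template excerpt)
fields:
  # @my_environment is a custom Facter fact returning "production" or "test"
  environment: <%= @my_environment %>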
In filebeat 7.2.0 I use the following syntax:
processors:
  - add_fields:
      target: ''
      fields:
        mycustomfieldname: customfieldvalue
Note: target: '' means that mycustomfieldname is a top-level field.
official 7.2 docs
Yes, you can add fields to the document through Filebeat.
The official doc shows you how.