1000 max shards reached. I would like to increase the limit, or clear the existing shards and start again. I have 5 servers I am monitoring - elasticsearch

I tried to increase the shard limit with this... but to no avail:
curl -XPUT 'http://206.189.196.214:9200/_cluster/settings -H 'Content-type: application/json' --data-binary $'{"transient":{"cluster.max_shards_per_node":5100}}'
I have a typo in the above ... it returned the error below:
"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"invalid
version format: -H CONTENT-TYPE:
HTTP/1.1"}],"type":"illegal_argument_exception","reason":"invalid
version format: -H CONTENT-TYPE: HTTP/1.1"},"status":400}curl: (3)
[globbing] nested brace in column 44
Please advise. Elasticsearch is running, Zabbix is running, Logstash is running; everything seems happy, but I have hit the limit at 1000/1000 shards.

It would be a better option to set this limit in your elasticsearch.yml file, because transient settings are lost if you restart your cluster. That said, your request should look something like this:
curl -XPUT "http://elasticsearch_host:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"transient": {
"cluster.routing.allocation.total_shards_per_node": 5100
}
}'
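If you want the change to survive a full cluster restart without editing elasticsearch.yml, the same API also accepts a persistent block instead of transient, along these lines:
curl -XPUT "http://elasticsearch_host:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"persistent": {
"cluster.max_shards_per_node": 5100
}
}'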

Related

Elasticsearch read only user

I wanted to add a read-only user to my cluster; my app prefixes all its indexes with myapp_.
Following https://www.elastic.co/blog/user-impersonation-with-x-pack-integrating-third-party-auth-with-kibana (what a strange title for the only actually usable blog post on this...) I first added a role with
curl -XPOST "$ELASTIC_URL:9200/_xpack/security/role/name_of_readonly_role" \
-H 'Content-Type: application/json' \
-d'{"indices":[{"names":"myapp_*","privileges":["read"]}]}'
and then added it to a user:
curl -XPOST $ELASTIC_URL:9200/_xpack/security/user/name_of_user \
-H 'Content-Type: application/json' \
-d'{"roles":["name_of_readonly_role"],"password":"some_password"}'
but when opening $ELASTIC_URL:9200 I got
action [cluster:monitor/main] is unauthorized for user
what's next?
There's a complete dearth of examples for this as far as I can see. To fix this problem, the role command needs to be re-run with -d'{"cluster":["monitor"], "indices":[{"names":"myapp_*","privileges":["read"]}]}' (the same curl command works for creating or updating roles). This seems to leak the names of all indexes, but not much else, and I was fine with that. Even that is not enough for some apps, such as the ElasticSearch Head browser extension; for those I needed to add the index-level monitor privilege as well: -d'{"cluster":["monitor"], "indices":[{"names":"myapp_*","privileges":["read", "monitor"]}]}'. Role changes are automatically applied to users.
I still have no idea what the "/main" relates to in the error message, but this works.
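For reference, the full role definition that ended up working (using the same hypothetical name_of_readonly_role and myapp_* names as above), assembled into a single command:
curl -XPOST "$ELASTIC_URL:9200/_xpack/security/role/name_of_readonly_role" \
-H 'Content-Type: application/json' \
-d'{"cluster":["monitor"],"indices":[{"names":"myapp_*","privileges":["read","monitor"]}]}'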

ElasticSearch: Content-Type header application/x-www-form-urlencoded is not supported

So I had this error after typing this command:
curl -XPUT localhost:9200/_bulk --data-binary #movies_elastic.json. I found an answer here: ElasticSearch - Content-Type header [application/x-www-form-urlencoded] is not supported,
which suggested adding the -H option, but I didn't get where exactly to add it.
Please keep in mind I'm new to ELK.
Thanks.
Assuming everything else with your command is correct, it's a command-line option to curl (note that the file should be referenced with @, not #). Like this:
curl -XPUT localhost:9200/_bulk -H'Content-Type: application/json' --data-binary @movies_elastic.json
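If the request still fails after adding the header, check that the file is actually in bulk format: the _bulk endpoint expects newline-delimited JSON, with an action line before each document and a trailing newline. A minimal sketch of what movies_elastic.json might look like on Elasticsearch 7+ (the index name and fields here are just illustrative; older versions also need a _type in the action line):
{"index":{"_index":"movies"}}
{"title":"The Matrix","year":1999}
{"index":{"_index":"movies"}}
{"title":"Inception","year":2010}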

TransportError(403, u'cluster_block_exception', u'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')

When I try to store anything in Elasticsearch, an error says:
TransportError(403, u'cluster_block_exception', u'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')
I have already inserted about 200 million documents into my index, but I have no idea why this error is happening.
I've tried:
curl -u elastic:changeme -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"persistent":{"cluster.blocks.read_only":false}}'
As mentioned here:
ElasticSearch entered "read only" mode, node cannot be altered
And the result is:
{"acknowledged":true,"persistent":{"cluster":{"blocks":{"read_only":"false"}}},"transient":{}}
But nothing changed. What should I do?
Try GET yourindex/_settings; this will show the settings of yourindex. If read_only_allow_delete is true, then try:
PUT /<yourindex>/_settings
{
"index.blocks.read_only_allow_delete": null
}
That got my issue fixed.
Please refer to the ES config guide for more detail.
The curl command for this is:
curl -X PUT "localhost:9200/twitter/_settings?pretty" -H 'Content-Type: application/json' -d '
{
"index.blocks.read_only_allow_delete": null
}'
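If you are not sure which indices still carry the block, one quick way to list the setting across all indices (using the standard filter_path query parameter and the same localhost:9200 host as above) is:
curl -s "localhost:9200/_all/_settings?filter_path=*.settings.index.blocks.read_only_allow_delete&pretty"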
Last month I was facing the same problem; you can run this as a curl command, or the equivalent in Kibana Dev Tools:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
I hope it helps
I had faced the same issue when my disk space was full. These are the steps that I took:
1- Increase the disk space.
2- Update the index read-only mode; see the following curl request:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
This happens because of Elasticsearch's default disk watermark, which is usually 95% of the disk size.
This happens when Elasticsearch thinks the disk is running low on space so it puts itself into read-only mode.
By default Elasticsearch's decision is based on the percentage of disk space that's free, so on big disks this can happen even if you have many gigabytes of free space.
The flood stage watermark is 95% by default, so on a 1TB drive you need at least 50GB of free space or Elasticsearch will put itself into read-only mode.
For docs about the flood stage watermark see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/disk-allocator.html.
Quoted from part of this answer
One solution is to disable it entirely (I found this useful in my local and CI setups). To do that, run these two commands:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
Chiming in later, as I just encountered the problem myself. I went through the following steps.
1) Deleted older indexes to free up space immediately - this brought me to around 23% free.
2) Updated the index read-only mode.
I still had the same issue. I checked the Dev Console to see which indices might still be locked, and none were. I restarted the cluster and had the same issue.
Finally, under Index Management, I selected the indexes with ILM lifecycle issues and chose to reapply the ILM step. I had to do that a couple of times to clear them all out, but it worked.
The problem may be a disk space problem. I had this issue even though I had freed up a lot of disk space, so finally I deleted the data folder and it worked: sudo rm -rf /usr/share/elasticsearch/data/
This solved the issue:
PUT _settings
{
"index": {
"blocks": {
"read_only_allow_delete": "false"
}
}
}

DynamoDB connection: strange behaviour

I have created an Amazon DynamoDB database in a Docker container using this request:
curl -X POST http://192.168.99.100:8000/ -H 'accept-encoding: identity' -H 'authorization: AWS4-HMAC-SHA256 Credential=key/20170515/us-east-1/execute-api/aws4_request, SignedHeaders=accept-encoding;content-length;content-type;host;x-amz-date;x-amz-target, Signature=f2f21c6263ad5380aaa' -H 'cache-control: no-cache' -H 'content-type: application/json' -H 'x-amz-date: 20170515T151032Z' -H 'x-amz-target: DynamoDB_20120810.CreateTable' -d '{"AttributeDefinitions": [{"AttributeName": "userId","AttributeType": "S"}],"TableName": "User","KeySchema": [{"AttributeName": "userId","KeyType": "HASH"}],"ProvisionedThroughput": {"ReadCapacityUnits": 1,"WriteCapacityUnits": 1}}'
When I list the tables using a curl command like that:
curl -X POST http://192.168.99.100:8000/ -H 'authorization: AWS4-HMAC-SHA256 Credential=key/20170515/us-east-1/execute-api/aws4_request, SignedHeaders=accept-encoding;content-length;content-type;host;x-amz-date;x-amz-target' -H 'cache-control: no-cache' -H 'content-type: application/json' -H 'x-amz-date: 20170515T151032Z' -H 'x-amz-target: DynamoDB_20120810.ListTables' -d '{}'
All works fine. I get the list of the tables:
{"TableNames":["UserTable1","User", "TestTable]}
The problem is that when I connect to this database using RazorSQL there are no tables in it. I have the same problem with my Spring Boot application; it raises an exception:
Cannot do operations on a non-existent table (Service: AmazonDynamoDBv2; Status Code: 400;
Would you have any ideas about this strange behaviour?
When using DynamoDB locally, you should be aware of the following:
If you use the -sharedDb option, DynamoDB creates a single database file named shared-local-instance.db. Every program that connects to DynamoDB accesses this file. If you delete the file, you lose any data you have stored in it.
If you omit -sharedDb, the database file is named myaccesskeyid_region.db, with the AWS access key ID and region as they appear in your application configuration. If you delete the file, you lose any data you have stored in it.
So, make sure you're passing -sharedDb.
Those who are using the official DynamoDB Local Docker image can do that like this:
docker run -p 8000:8000 amazon/dynamodb-local -jar DynamoDBLocal.jar -inMemory -sharedDb
The original ENTRYPOINT and CMD used by the image can be seen in docker inspect amazon/dynamodb-local output and are:
"Entrypoint": [
"java"
]
"Cmd": [
"-jar",
"DynamoDBLocal.jar",
"-inMemory"
]
So we basically copied them and added -sharedDb.
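If you also want the data to survive container restarts, a variation on the same idea (the host folder and the /home/dynamodblocal/data path here are assumptions, not anything the image requires) is to drop -inMemory and point -dbPath at a mounted volume:
docker run -p 8000:8000 -v "$PWD/dynamodb-data:/home/dynamodblocal/data" amazon/dynamodb-local -jar DynamoDBLocal.jar -sharedDb -dbPath /home/dynamodblocal/data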

elasticsearch xpack - Can’t change default password

I installed X-Pack for Elasticsearch and tried to change the password as it says here:
https://www.elastic.co/guide/en/x-pack/current/security-getting-started.html
Running this:
curl -XPUT -u elastic localhost:9200/_xpack/security/user/elastic/_password -H Content-Type: application/json -d {"password" : "elasticpassword"}
is getting me:
{"error":{"root_cause":[{"type":"json_parse_exception","reason":"Unexpected character ('p' (code 112)): was expecting double-quote to start field name\n
at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput#4a142671; line: 1, column: 3]"}],"type":"json_parse_exception","reason":"Unexpected character ('p' (code 112)): was expecting double-quote to start field name\n
at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput#4a142671; line: 1, column: 3]"},"status":500}curl: (6) Could not resolve host: application
curl: (3) Bad URL, colon is first character
curl: (3) [globbing] unmatched close brace/bracket in column 16
or, running this:
curl -XPUT -u elastic "localhost:9200/_xpack/security/user/elastic/_password" -H "Content-Type: application/json -d {"password" : "elasticpassword"}
gets me this:
{"error":{"root_cause":[{"type":"not_x_content_exception","reason":"Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"}],"type":"not_x_content_exception","reason":"Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"},"status":500}
I can't seem to get it right, no matter what combination of " or ' I use. Please help!
Thanks
This is a general problem for Windows users. You can use the Sense Chrome plugin or Kibana Dev Tools instead.
curl -XPUT -u elastic localhost:9200/_xpack/security/user/elastic/_password -H "Content-Type: application/json" -d "{\"password\" : \"elasticpassword\"}"
Please check the elastic developer response about this situation:
https://discuss.elastic.co/t/index-a-new-document/35281/8
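Another way to sidestep the shell quoting problems entirely is to put the JSON body in a file and let curl read it with @ (the password.json filename here is just an example):
curl -XPUT -u elastic "localhost:9200/_xpack/security/user/elastic/_password" -H "Content-Type: application/json" -d @password.json
where password.json contains {"password": "elasticpassword"}.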

Resources