Why is there unequal CPU usage as an elasticsearch cluster scales? - amazon-ec2

I have a 15 node elasticsearch cluster and am indexing a lot of documents. The documents are of the form { "message": "some sentences" }. When I had a 9 node cluster, I could get CPU utilization up to 80% on all of them; when I turned it into a 15 node cluster, I get 90% CPU usage on 4 nodes and only ~50% on the rest.
The specification of the cluster is:
15 nodes, c4.2xlarge EC2 instances
15 shards, no replicas
There is a load balancer in front of all the instances, and the instances are accessed through it.
Marvel is running and is used to monitor the cluster
Refresh interval 1s
I could index 50k docs/sec on 9 nodes and only 70k docs/sec on 15 nodes. Shouldn't I be able to do more?

I'm not yet an expert on scalability and load balancing in ES, but here are some things to consider:
Load balancing is native to ES, so putting an external load balancer in front can actually work against the built-in balancing. It's a bit like having a speed limiter on your car but also braking manually: the limiter should already do the job, and it is prevented from doing it properly when you add "manual regulation". Have you tried dropping your load balancer and relying on the native load balancing to see how it fares (see the sketch after this list)?
While more servers / shards give you more CPU / computation power overall, it also forces you to go through multiple shards every time you write/read a document, so if 1 shard can do N computations, M shards won't actually be able to do M*N computations.
Having 15 shards is probably overkill in a lot of cases.
Having 15 shards but no replication is weird/bad, since if any of your 15 servers goes down, you won't be able to access your whole index.
You can actually run multiple nodes on a single server.
What is your index size in terms of storage?
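For example, the official clients can be pointed at several nodes directly and will spread requests across them; a minimal sketch with the Python elasticsearch client (host names and the index name are placeholders, the document shape is taken from the question, and the sniffing options may differ between client versions):

```python
from elasticsearch import Elasticsearch, helpers

# Hypothetical node addresses: list several (or all) data nodes so the
# client can spread requests itself instead of going through an external
# load balancer.
es = Elasticsearch(
    ["http://es-node-1:9200", "http://es-node-2:9200", "http://es-node-3:9200"],
    sniff_on_start=True,            # discover the rest of the cluster's nodes
    sniff_on_connection_fail=True,  # drop unreachable nodes from the rotation
)

# Bulk-index documents of the form {"message": "..."}; requests are spread
# over the configured/sniffed nodes.
docs = ({"_index": "messages", "_source": {"message": "some sentences"}}
        for _ in range(1000))
helpers.bulk(es, docs)
```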


Controlling where shards are allocated

My setup:
two zones: fast and slow, with 5 nodes each.
fast nodes have ephemeral storage, whereas the slow nodes are NFS based.
Running Elasticsearch OSS v7.7.1. (I have no control over the version)
I have the following cluster setting: cluster.routing.allocation.awareness.attributes: zone
My index has 2 replicas, so 3 shard instances (1x primary, 2x replica)
I am trying to ensure the following:
1 of the 3 shard instances to be located in zone fast.
2 of the 3 shard instances to be located in zone slow (because it has persistent storage)
Queries to be run on the shard in zone fast where available.
Inserts to only return as written once they have been replicated.
Is this setup possible?
Link to a related question: How do I control where my primary and replica shards are located?
EDIT to add extra information:
Both fast and slow nodes run on a PaaS offering where we are not in control of hardware restarts, meaning there can technically be non-graceful shutdowns/restarts at any point.
I'm worried about unflushed data and/or index corruption, so I am looking to keep multiple replicas on the NFS-backed slow-zone nodes to reduce the likelihood of data loss, despite the fact that this will "overload" the slow zone with redundant data.
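For reference, a minimal sketch of the settings involved in the setup described above, using the Python elasticsearch client; node names and the index name are placeholders, and note that plain zone awareness spreads the three copies across the two zones but does not by itself guarantee which zone ends up holding two of them:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://coordinating-node:9200"])  # placeholder address

# Each node declares its zone in elasticsearch.yml, e.g.
#   node.attr.zone: fast    # or: slow

# The awareness setting from the question, applied as a dynamic cluster setting.
es.cluster.put_settings(body={
    "persistent": {"cluster.routing.allocation.awareness.attributes": "zone"}
})

# 1 primary + 2 replicas = 3 shard copies; awareness spreads them across
# the zones, but it does not pin exactly two copies to "slow".
es.indices.create(index="my-index", body={
    "settings": {"number_of_shards": 1, "number_of_replicas": 2}
})

# Require all 3 copies to be active before the write goes ahead; replication
# to the in-sync copies happens before the write is acknowledged.
es.index(index="my-index", body={"field": "value"}, wait_for_active_shards=3)

# Prefer the fast-zone nodes for searches when they are available
# ("fast-node-1"/"fast-node-2" are hypothetical node names).
es.search(index="my-index",
          body={"query": {"match_all": {}}},
          preference="_prefer_nodes:fast-node-1,fast-node-2")
```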

High CPU usage on elasticsearch nodes

We have been using a 3-node Elasticsearch (v7.6) cluster running in Docker containers. I have been experiencing very high CPU usage on 2 nodes (97%) and moderate CPU load on the other node (55%). The hardware used is m5.xlarge servers.
There are 5 indices with 6 shards and 1 replica. Update operations take around 10 seconds even for updating a single field; it is a similar case with deletes. However, querying is quite fast. Is this because of the high CPU load?
2 out of the 5 indices continuously undergo update and write operations as they consume from a Kafka stream. The sizes of the indices are 15GB, 2GB, and the rest are around 100MB.
You need to provide more information to find the root cause:
Are all the ES nodes running in different Docker containers on the same host, or on different hosts?
Do you have resource limits on your ES Docker containers?
How much heap is allocated to ES, and is it 50% of the host machine's RAM?
Do the nodes with high CPU hold the 2 write-heavy indices you mentioned?
What is the refresh interval of the indices which receive high indexing requests?
What are the segment sizes of your 15GB index? Use https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-segments.html to get this info (see the sketch after this list).
What have you debugged so far, and is there any interesting info you want to share to help find the issue?
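For example, most of these numbers can be pulled in one go; a rough sketch with the Python elasticsearch client (the index name and address are placeholders):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # adjust to your cluster address

# CPU and heap per node: confirms which containers are actually hot.
print(es.cat.nodes(v=True, h="name,node.role,cpu,heap.percent,ram.percent"))

# Segment counts/sizes of the write-heavy indices (the cat-segments API
# linked above); many tiny segments usually mean merge pressure.
print(es.cat.segments(index="my-write-heavy-index", v=True))

# Current refresh interval of the indices receiving heavy updates.
print(es.indices.get_settings(index="my-write-heavy-index",
                              name="index.refresh_interval",
                              include_defaults=True))
```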

Indexing multiple indexes in elastic search at the same time

I am using Logstash for ETL purposes and have 3 indexes in Elasticsearch. Can I insert documents into my 3 indexes through 3 different Logstash processes at the same time to improve parallelization, or should I insert documents into 1 index at a time?
My Elasticsearch cluster configuration looks like:
3 data nodes - 64 GB RAM, SSD disk
1 client node - 8 GB RAM
Shards - 20
Replica - 1
Thanks
As always, it depends. The distribution concept of Elasticsearch is based on shards. Since the shards of an index live on different nodes, you are automatically spreading the load.
However, if Logstash is your bottleneck, you might gain performance from running multiple processes, though whether running multiple LS processes on a single machine will have a positive impact is doubtful.
Short answer: Parallelising over 3 indexes won't make much sense, but if Logstash is your bottleneck, it might make sense to run those in parallel (on different machines).
PS: The biggest performance improvement generally is batching requests together, but Logstash does that by default.
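To illustrate the batching point: a single bulk request can already mix documents destined for different indices, so splitting by index buys little; a rough sketch with the Python elasticsearch client (host and index names are placeholders):

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://client-node:9200"])  # e.g. your client/coordinating node

# One bulk request can carry documents for different indices, so running
# "one process per index" is not required for indexing throughput.
actions = [
    {"_index": "index-a", "_source": {"field": "doc for index a"}},
    {"_index": "index-b", "_source": {"field": "doc for index b"}},
    {"_index": "index-c", "_source": {"field": "doc for index c"}},
]
helpers.bulk(es, actions, chunk_size=1000)  # batching is where most of the gain is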

Optimizing Bulk Indexing in elasticsearch

We have an Elasticsearch cluster of 3 nodes with the following configuration:
CPU cores: 36
Memory (GB): 244.0
Disk (GB): 48000
IO performance: very high
The machines are in 3 different zones, namely eu-west-1c, eu-west-1a, and eu-west-1b.
Each Elasticsearch instance is allocated 30GB of heap space.
We are using the above cluster for running aggregations only. The cluster has a replication factor of 1, all the string fields are not analyzed, and doc_values is true for all fields.
We are pumping data into this cluster by running 6 instances of Logstash in parallel (with a batch size of 1000).
When more instances of Logstash are started one by one, the nodes of the Elasticsearch cluster start throwing out-of-memory errors.
What optimizations could speed up the bulk indexing rate on the cluster? Will having the cluster's nodes in the same zone increase bulk indexing throughput? Will adding more nodes to the cluster help?
A couple of steps taken so far:
Increased the bulk queue size from 50 to 1000
Increased the refresh interval from 1 second to 2 minutes
Changed segment merge throttling to none (https://www.elastic.co/guide/en/elasticsearch/guide/current/indexing-performance.html)
We cannot set the replication factor to 0 because of the inconsistency involved if one of the nodes goes down.
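A rough sketch of applying two of the steps above from an indexing script, using the Python elasticsearch client (the index name and address are placeholders); moderate bulk chunks with retries tend to put less pressure on the heap than very large queued-up batches:

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://es-node:9200"])  # placeholder address

# Apply the longer refresh interval from the list above as a dynamic setting
# on the index being bulk-loaded ("events" is a placeholder name).
es.indices.put_settings(index="events", body={"index": {"refresh_interval": "2m"}})

# Stream documents in moderate chunks, retrying on rejections, rather than
# relying on a very large bulk queue to absorb the load.
docs = ({"_index": "events", "_source": {"value": i}} for i in range(100000))
for ok, item in helpers.streaming_bulk(es, docs, chunk_size=1000,
                                       max_retries=3, raise_on_error=False):
    if not ok:
        print("failed:", item)
```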

Elasticsearch cluster

I have an ES cluster with 5 machines.
One of the machines is always using more resources than the others. For instance, right now I see that the average load is 7% CPU and 65% memory,
but node4 is strange because it is using 30% CPU and 86% memory.
The machines are totally the same and the configuration is the same; the only difference is that node4 is a data-only node. And when I compare node4 with the others in Marvel, they are doing almost the same tasks.
Any suggestions on how to debug and see why it is using more than the others?
PS. The reason I care is that my cluster has died a few times because of node4. I made some improvements in the app, but I still want to understand what is going on with node4.
Two things about your cluster:
This is wrong: "all requests are sent to master (node1, node2)"! You should send the requests in round-robin fashion to all the nodes holding data, otherwise you'll have nodes that simply do more work than others.
You are wasting memory and overall resources by having a lot of small shards... You should consider moving to 1 primary and 1 replica for your indices. The default (5 primaries, 1 replica) is too much. Your indices are way too small to have 5 shards.
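A minimal sketch of both points with the Python elasticsearch client (host names and the index name are placeholders); the Python client round-robins over the hosts it is given by default:

```python
from elasticsearch import Elasticsearch

# List all data-holding nodes so requests are spread instead of always
# hitting node1/node2.
es = Elasticsearch([
    "http://node1:9200", "http://node2:9200", "http://node3:9200",
    "http://node4:9200", "http://node5:9200",
])

# Small indices rarely need the old default of 5 primaries; 1 primary and
# 1 replica keeps the shard count (and per-shard overhead) down.
es.indices.create(index="my-small-index", body={
    "settings": {"number_of_shards": 1, "number_of_replicas": 1}
})
```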
