Horizontal scaling of a Logstash and Elasticsearch based solution - ruby

We have a Logstash and Elasticsearch-based solution to analyze network protocols, and I am trying to benchmark the whole solution. When I increase the number of nodes for the client/Logstash side and for Elasticsearch, I am not seeing the expected scaling in performance.
We have two protocols (protocol A and protocol B), and separate Logstash pipelines process each of them. Each protocol has two Logstash pipelines (pipeline 1 and pipeline 2), i.e. four Logstash pipelines in total.
Pipeline Details:
I am decoding the packets related to the protocols and keeping them in two separate folders. Logstash pipeline 1 monitors a folder for new files (ndjson; each file contains thousands of JSON objects). For every event in pipeline 1, I query an Elasticsearch index to get some metadata. I am using Ruby and some other filters to process the events, roughly as sketched below. The processed events are then ingested into an Elasticsearch index. Each event contains a session ID; every event is tagged with its session ID before ingesting, and the unique session IDs are indexed in a separate index.
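Since the actual configuration isn't included here, pipeline 1 is shaped roughly like this minimal sketch (folder, index and field names are hypothetical, and the real Ruby filter is of course much larger):
input {
  file {
    path => "/data/protocol-a/*.ndjson"   # hypothetical decoded-packets folder
    mode => "read"
    codec => "json"                       # one JSON object per line
  }
}
filter {
  elasticsearch {                         # per-event metadata lookup
    hosts => ["http://es-node:9200"]
    index => "metadata"                   # hypothetical metadata index
    query => "meta_key:%{[meta_key]}"
    fields => { "meta_value" => "meta_value" }
  }
  ruby {                                  # ~200-500 lines of protocol logic in reality
    code => "event.set('session_id', event.get('session_id').to_s)"
  }
}
output {
  elasticsearch {
    hosts => ["http://es-node:9200"]
    index => "protocol-a-events-active"   # hypothetical; indices are toggled as described below
  }
}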
Logstash pipeline 2 reads from the index where the session IDs are stored. For each session ID/tag, it fetches all the messages belonging to that session (searching by tag), stitches them together, and ingests the resulting session into a final index. While forming each session it queries an intermediate index for some metadata, and after the session is formed it updates the corresponding document in that index. A rough sketch of pipeline 2 is below.
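Again with hypothetical names and the stitching logic reduced to a placeholder, pipeline 2 looks roughly like this:
input {
  elasticsearch {
    hosts => ["http://es-node:9200"]
    index => "protocol-a-session-ids"     # hypothetical index of unique session IDs
    query => '{ "query": { "match_all": {} } }'
  }
}
filter {
  elasticsearch {                         # fetch metadata/messages for this session
    hosts => ["http://es-node:9200"]
    index => "protocol-a-events-active"   # the index pipeline 1 wrote to previously
    query => "session_id:%{[session_id]}"
    fields => { "payload" => "session_payload" }
  }
  ruby {                                  # stitching logic lives here in reality
    code => "event.set('stitched', true)"
  }
}
output {
  elasticsearch {
    hosts => ["http://es-node:9200"]
    index => "protocol-a-sessions"        # hypothetical final index
  }
}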
Setup Details:
4 machines, each with:
16 cores
30 GB RAM
NVMe SSD
Note:
For all tests, the Elasticsearch JVM heap is configured to 15 GB (50% of total system memory), and the Logstash JVM heap is configured to 11 GB for each pipeline (a total of 22 GB per machine, as I am running two pipelines, pipeline 1 and pipeline 2, on one machine).
We do a good amount of processing in Ruby for each pipeline: around 200 and 150 lines of Ruby code for pipelines 1 and 2 of protocol A, and around 500 and 100 lines for pipelines 1 and 2 of protocol B.
At any point in time, pipeline 1 is writing to an index other than the one pipeline 2 is reading from. Once pipeline 2 has read all the data in the index it is working on, it exits. I then clean up all data from the index pipeline 2 just processed and restart pipeline 2, which now reads from the index pipeline 1 was previously writing to, while pipeline 1 switches to writing to the index pipeline 2 was previously reading. This toggling of indices happens every time pipeline 2 restarts.
I ran several benchmarks, fine-tuning the workers and batch size and modifying the JVM heap size for both Elasticsearch and Logstash, but didn't see any improvement in performance.
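For reference, the knobs tuned here map to settings like these per Logstash instance (the values are examples only, not recommendations):
# logstash.yml
pipeline.workers: 16
pipeline.batch.size: 1000
pipeline.batch.delay: 50
# jvm.options (matching the 11 GB heap mentioned above)
-Xms11g
-Xmx11g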
When I benchmark the solution with 2 nodes (1 for ES and 1 for the Logstash pipelines), I get 6k TPS for protocol A and 7k TPS for protocol B. But when I benchmark both protocols together with 4 nodes (a 2-node ES cluster, 1 node for the protocol A pipelines and 1 node for the protocol B pipelines), I get only 9.5k combined TPS (4.5k TPS for protocol A and 5k for protocol B).
Am I doing anything wrong that is affecting the performance of Logstash/Elasticsearch? Are there any other parameters to fine-tune?
Any suggestions or feedback are appreciated.
Best Regards,
Rakhesh Kumbi

Related

Elasticsearch and Fluentd optimisation for log cluster

We are using Elasticsearch and Fluentd for our central logging platform. Below are our config details:
Elasticsearch Cluster:
Master nodes: 64 GB RAM, 8 CPUs, 9 instances
Data nodes: 64 GB RAM, 8 CPUs, 40 instances
Coordinator nodes: 64 GB RAM, 8 CPUs, 20 instances
Fluentd: at any given time we have around 1,000+ Fluentd instances writing logs to the Elasticsearch coordinator nodes.
On a daily basis we create around 700-800 indices, which add up to about 4K shards per day, and we keep at most 40K shards in the cluster.
We started facing performance issues on the Fluentd side, where Fluentd instances fail to write logs. Common issues are:
1. read time out
2. request time out
3. {"time":"2021-07-02","level":"warn","message":"failed to flush the buffer. retry_time=9 next_retry_seconds=2021-07-02 07:23:08 265795215088800420057/274877906944000000000 +0000 chunk=\"5c61e5fa4909c276a58b2efd158b832d\" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error=\"could not push logs to Elasticsearch cluster ({:host=>\\\"logs-es-data.internal.tech\\\", :port=>9200, :scheme=>\\\"http\\\"}): [429] {\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"circuit_breaking_exception\\\",\\\"reason\\\":\\\"[parent] Data too large, data for [<http_request>] would be [32274168710/30gb], which is larger than the limit of [31621696716/29.4gb], real usage: [32268504992/30gb], new bytes reserved: [5663718/5.4mb], usages [request=0/0b, fielddata=0/0b, in_flight_requests=17598408008/16.3gb, model_inference=0/0b, accounting=0/0b]\\\",\\\"bytes_wanted\\\":32274168710,\\\"bytes_limit\\\":31621696716,\\\"durability\\\":\\\"TRANSIENT\\\"}],\\\"type\\\":\\\"circuit_breaking_exception\\\",\\\"reason\\\":\\\"[parent] Data too large, data for [<http_request>] would be [32274168710/30gb], which is larger than the limit of [31621696716/29.4gb], real usage: [32268504992/30gb], new bytes reserved: [5663718/5.4mb], usages [request=0/0b, fielddata=0/0b, in_flight_requests=17598408008/16.3gb, model_inference=0/0b, accounting=0/0b]\\\",\\\"bytes_wanted\\\":32274168710,\\\"bytes_limit\\\":31621696716,\\\"durability\\\":\\\"TRANSIENT\\\"},\\\"status\\\":429}\"","worker_id":0}
Looking for guidance on how we can optimise our logs cluster.
Well, by the looks of it, you have exhausted your parent circuit breaker limit of 95% of heap memory.
The error you mentioned is covered in the Elasticsearch docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/fix-common-cluster-issues.html#diagnose-circuit-breaker-errors
That page also lists a few steps you can take to reduce JVM memory pressure, which can help avoid this error.
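To see how close each node is to the parent breaker limit and which breakers are tripping, the node stats and cat nodes APIs are useful, e.g.:
GET _nodes/stats/breaker
GET _cat/nodes?v=true&h=name,heap.percent,heap.max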
You can also try increasing this limit to 98% using the dynamic cluster settings API:
PUT /_cluster/settings
{
  "persistent": {
    "indices.breaker.total.limit": "98%"
  }
}
But I would suggest this be performance tested before applying in production.
Since the heap is already at 30 GB, with over 16 GB of it held by in-flight requests, a more reliable solution would be to increase your log scrapers' flush frequency so that they make more frequent posts to ES with smaller-sized data blocks.
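On the Fluentd side that would mean shrinking the buffer chunks and flushing more often in the elasticsearch output, along these lines (values are illustrative only):
<match **>
  @type elasticsearch
  host logs-es-data.internal.tech
  port 9200
  <buffer>
    chunk_limit_size 8MB       # smaller chunks => smaller bulk requests
    flush_interval 5s          # flush more frequently
    flush_thread_count 2
  </buffer>
</match>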

High CPU usage on Elasticsearch nodes

We have been using a 3-node Elasticsearch (7.6) cluster running in Docker containers. I have been experiencing very high CPU usage on 2 nodes (97%) and moderate CPU load on the other node (55%). The hardware used is m5.xlarge servers.
There are 5 indices with 6 shards and 1 replica. Update operations take around 10 seconds even when updating a single field, and it is similar with deletes; however, querying is quite fast. Is this because of the high CPU load?
2 out of the 5 indices continuously undergo update and write operations as they consume from a Kafka stream. The sizes of these indices are 15 GB and 2 GB, and the rest are around 100 MB.
You need to provide more information to find the root cause:
Are all the ES nodes running in separate Docker containers on the same host, or on different hosts?
Do you have resource limits on your ES Docker containers?
How much heap is assigned to ES, and is it 50% of the host machine's RAM?
Do the nodes with high CPU hold the 2 write-heavy indices you mentioned?
What is the refresh interval of the indices that receive the high indexing rate?
What are the segment counts and sizes of your 15 GB index? Use https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-segments.html to get this info (see the example after this list).
What have you debugged so far, and is there any other interesting info you can share to help find the issue?
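For example, the segment, refresh-interval and CPU information can be pulled like this (index name hypothetical):
GET _cat/segments/my-index?v
GET my-index/_settings
GET _nodes/hot_threads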

Indexing multiple indices in Elasticsearch at the same time

I am using Logstash for ETL purposes and have 3 indices in Elasticsearch. Can I insert documents into my 3 indices through 3 different Logstash processes at the same time to improve parallelization, or should I insert documents into 1 index at a time?
My Elasticsearch cluster configuration looks like:
3 data nodes - 64 GB RAM, SSD disk
1 client node - 8 GB RAM
Shards - 20
Replicas - 1
Thanks
As always it depends. The distribution concept of Elasticsearch is based on shards. Since the shards of an index live on different nodes, you are automatically spreading the load.
However, if Logstash is your bottleneck, you might gain performance from running multiple processes, though whether running multiple Logstash processes on a single machine will have a positive impact is doubtful.
Short answer: Parallelising over 3 indexes won't make much sense, but if Logstash is your bottleneck, it might make sense to run those in parallel (on different machines).
PS: The biggest performance improvement generally is batching requests together, but Logstash does that by default.

Optimizing bulk indexing in Elasticsearch

We have an Elasticsearch cluster of 3 nodes with the following configuration:
CPU cores: 36
Memory: 244 GB
Disk: 48,000 GB
IO performance: very high
The machines are in 3 different availability zones, namely eu-west-1c, eu-west-1a and eu-west-1b.
Each Elasticsearch instance is allocated 30 GB of heap space.
We are using the above cluster for running aggregations only. The cluster has a replication factor of 1, all string fields are not_analyzed, and doc_values is true for all fields.
We are pumping data into this cluster by running 6 instances of Logstash in parallel (with a batch size of 1000).
When more instances of Logstash are started one by one, the nodes of the Elasticsearch cluster start throwing out-of-memory errors.
What optimizations could speed up the bulk indexing rate on the cluster? Would placing the cluster's nodes in the same zone improve bulk indexing? Would adding more nodes to the cluster help?
A couple of steps taken so far:
Increased the bulk queue size from 50 to 1000
Increased the refresh interval from 1 second to 2 minutes (see the settings sketch below)
Changed segment merge throttling to none (https://www.elastic.co/guide/en/elasticsearch/guide/current/indexing-performance.html)
We cannot set the replication factor to 0 because of the inconsistency involved if one of the nodes goes down.
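For reference, on the 1.x-era versions the linked guide covers, the refresh-interval and merge-throttling changes above correspond to settings like the following (index name hypothetical), with the bulk queue size set via threadpool.bulk.queue_size in elasticsearch.yml:
PUT /my-index/_settings
{
  "index": { "refresh_interval": "120s" }
}

PUT /_cluster/settings
{
  "transient": { "indices.store.throttle.type": "none" }
}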

Elasticsearch 1.5.2: high JVM heap on 2 nodes even without bulk indexing

We have been facing multiple downtimes recently, especially after a few hours of bulk indexing. To avoid further downtimes, we disabled bulk indexing temporarily and added another node. Now the downtimes have stopped, but two out of 6 nodes permanently remain at JVM heap > 80%.
We currently have a 6-node cluster (previously 5), each node being an EC2 c3.2xlarge with 16 GB RAM and 8 GB of JVM heap, all master+data. We're using Elasticsearch 1.5.2, which has known issues such as OOM thrown on the merge thread (https://issues.apache.org/jira/browse/LUCENE-6670), and we hit that regularly.
There are two major indices used frequently for search and autosuggest, with doc count/size as follows:
health status index pri rep docs.count docs.deleted store.size
green open aggregations 5 0 16507117 3653185 46.2gb
green open index_v10 5 0 3445495 693572 44.8gb
Ideally, we keep at least one replica for each index, but our last attempt to add a replica with 5 nodes resulted in OOM errors and a full heap, so we turned it back to 0.
We also had two bulk update jobs running between 12 and 6 AM, each updating about 3 million docs with 2-3 fields on a daily basis. They were scheduled at 1:30 AM and 4:30 AM, each sending bulk requests of 100 docs (about 12 KB in size) to the bulk API via a bash script with a 0.25 s sleep between requests to avoid too many parallel requests. When we started the bulk updates, we had at most 2 million docs to update daily, but the doc count almost doubled in a short span (to 3.8 million) and we started seeing search response time spikes, mostly between 4 and 6 AM and sometimes even later. Our average search response time also increased from 60-70 ms to 150+ ms. A week ago, the master left due to a ping timeout, and soon after that we received a shard-failed error for one index. On investigating further, we found that this specific shard's data was inaccessible. To restore availability of the data, we restarted the node and reindexed the data.
However, the node downtime happened many more times, and each time shards went into an UNASSIGNED or INITIALIZING state. We finally deleted the index and started fresh, but heavy indexing again brought OutOfMemory errors and node downtime, with the same shard issue and data loss. To avoid further downtimes, we stopped all bulk jobs and reindexed the data at a very slow rate.
We also added one more node to distribute the load. Yet we currently have 3 nodes with the JVM heap constantly above 75%, 2 of them always above 80%. We have noticed that the number and size of segments are relatively high on these nodes (about 5 GB), but running optimize on these indices would risk increasing the heap again, with a probability of downtime.
Another important point to note is that our Tomcat apps hit only 3 of the nodes (for normal search and indexing), and mostly one of the other nodes was used for bulk indexing. Thus, one of the three query+indexing nodes, plus the node used for bulk indexing, have relatively high heap.
There are the following known issues with our configuration and indexing approach, which we are planning to fix:
Bulk indexing hits only one node, increasing its heap and causing slightly higher GC pauses.
mlockall is set to false.
A snapshot would be needed to revert the index in such cases; we were still in the planning phase when this incident happened.
We can merge the 2 bulk jobs into one, to avoid too many indexing requests being queued at the same time.
We can call the optimize API at regular intervals in the bulk indexing script to avoid too many segments piling up (see the example below).
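On 1.5.x, that optimize call would look roughly like this (the max_num_segments value is illustrative):
POST /index_v10/_optimize?max_num_segments=5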
elasticsearch.yml (only relevant and enabled settings shown):
master: true
index.number_of_shards: 5
index.number_of_replicas: 2
path.conf: /etc/elasticsearch
path.data: /data
transport.tcp.port: 9300
transport.tcp.compress: false
http.port: 9200
http.enabled: true
gateway.type: local
gateway.recover_after_nodes: 2
gateway.recover_after_time: 5m
gateway.expected_nodes: 3
discovery.zen.minimum_master_nodes: 4 # Now that we have 6 nodes
discovery.zen.ping.timeout: 3s
discovery.zen.ping.multicast.enabled: false
Node stats:
Pastebin link
Hot threads:
Pastebin link
If I understand correctly, you have 6 servers, each of them running one Elasticsearch node.
What I would do is run more than one node on each server and separate the roles: nodes that act as clients, nodes that act as data nodes, and nodes that act as masters. I think you can have two nodes on each server:
3 servers: data + client
3 servers: data + master
The client nodes and the master nodes need less RAM. The configuration will be more complex, but it should work better.
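In 1.x terms, the role split would be set per node in elasticsearch.yml, roughly like this sketch:
# data node (one on every server)
node.master: false
node.data: true

# dedicated master node (on the 3 "data + master" servers)
node.master: true
node.data: false

# client node (on the 3 "data + client" servers)
node.master: false
node.data: false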
