Indexing very slow in Elasticsearch

I can't push the indexing rate above 10,000 events/second no matter what I do. Each Logstash instance receives around 13,000 events per second from Kafka; I am running 3 Logstash instances on different machines, all reading from the same Kafka topic.
I have set up an ELK cluster with 3 Logstash instances reading data from Kafka and sending it to my Elasticsearch cluster.
The cluster contains 3 Logstash instances, 3 Elasticsearch master nodes, 3 Elasticsearch client nodes and 50 Elasticsearch data nodes.
Logstash 2.0.4
Elastic Search 5.0.2
Kibana 5.0.2
All are Citrix VMs with the same configuration:
Red Hat Linux 7
Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 6 cores
32 GB RAM
2 TB spinning media
Logstash config file:
output {
  elasticsearch {
    hosts => ["dataNode1:9200", "dataNode2:9200", ..., "dataNode50:9200"]
    index => "logstash-applogs-%{+YYYY.MM.dd}-1"
    workers => 6
    user => "uname"
    password => "pwd"
  }
}
Elasticsearch data node's elasticsearch.yml file:
cluster.name: my-cluster-name
node.name: node46-data-46
node.master: false
node.data: true
bootstrap.memory_lock: true
path.data: /apps/dataES1/data
path.logs: /apps/dataES1/logs
discovery.zen.ping.unicast.hosts: ["master1","master2","master3"]
network.host: hostname
http.port: 9200
The only change that I made in my jvm.options file is:
-Xms15g
-Xmx15g
System config changes that I did are as follows:
vm.max_map_count=262144
and in /etc/security/limits.conf I added :
elastic soft nofile 65536
elastic hard nofile 65536
elastic soft memlock unlimited
elastic hard memlock unlimited
elastic soft nproc 65536
elastic hard nproc unlimited
Indexing Rate
One of the active data node:
$ sudo iotop -o
Total DISK READ : 0.00 B/s | Total DISK WRITE : 243.29 K/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 357.09 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
5199 be/3 root 0.00 B/s 3.92 K/s 0.00 % 1.05 % [jbd2/xvdb1-8]
14079 be/4 elkadmin 0.00 B/s 51.01 K/s 0.00 % 0.53 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13936 be/4 elkadmin 0.00 B/s 51.01 K/s 0.00 % 0.39 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13857 be/4 elkadmin 0.00 B/s 58.86 K/s 0.00 % 0.34 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13960 be/4 elkadmin 0.00 B/s 35.32 K/s 0.00 % 0.33 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13964 be/4 elkadmin 0.00 B/s 31.39 K/s 0.00 % 0.27 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
14078 be/4 elkadmin 0.00 B/s 11.77 K/s 0.00 % 0.00 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
Index Details :
index shard prirep state docs store
logstash-applogs-2017.01.23-3 11 r STARTED 30528186 35gb
logstash-applogs-2017.01.23-3 11 p STARTED 30528186 30.3gb
logstash-applogs-2017.01.23-3 9 p STARTED 30530585 35.2gb
logstash-applogs-2017.01.23-3 9 r STARTED 30530585 30.5gb
logstash-applogs-2017.01.23-3 1 r STARTED 30526639 30.4gb
logstash-applogs-2017.01.23-3 1 p STARTED 30526668 30.5gb
logstash-applogs-2017.01.23-3 14 p STARTED 30539209 35.5gb
logstash-applogs-2017.01.23-3 14 r STARTED 30539209 35gb
logstash-applogs-2017.01.23-3 12 p STARTED 30536132 30.3gb
logstash-applogs-2017.01.23-3 12 r STARTED 30536132 30.3gb
logstash-applogs-2017.01.23-3 15 p STARTED 30528216 30.4gb
logstash-applogs-2017.01.23-3 15 r STARTED 30528216 30.4gb
logstash-applogs-2017.01.23-3 19 r STARTED 30533725 35.3gb
logstash-applogs-2017.01.23-3 19 p STARTED 30533725 36.4gb
logstash-applogs-2017.01.23-3 18 r STARTED 30525190 30.2gb
logstash-applogs-2017.01.23-3 18 p STARTED 30525190 30.3gb
logstash-applogs-2017.01.23-3 8 p STARTED 30526785 35.8gb
logstash-applogs-2017.01.23-3 8 r STARTED 30526785 35.3gb
logstash-applogs-2017.01.23-3 3 p STARTED 30526960 30.4gb
logstash-applogs-2017.01.23-3 3 r STARTED 30526960 30.2gb
logstash-applogs-2017.01.23-3 5 p STARTED 30522469 35.3gb
logstash-applogs-2017.01.23-3 5 r STARTED 30522469 30.8gb
logstash-applogs-2017.01.23-3 6 p STARTED 30539580 30.9gb
logstash-applogs-2017.01.23-3 6 r STARTED 30539580 30.3gb
logstash-applogs-2017.01.23-3 7 p STARTED 30535488 30.3gb
logstash-applogs-2017.01.23-3 7 r STARTED 30535488 30.4gb
logstash-applogs-2017.01.23-3 2 p STARTED 30524575 35.2gb
logstash-applogs-2017.01.23-3 2 r STARTED 30524575 35.3gb
logstash-applogs-2017.01.23-3 10 p STARTED 30537232 30.4gb
logstash-applogs-2017.01.23-3 10 r STARTED 30537232 30.4gb
logstash-applogs-2017.01.23-3 16 p STARTED 30530098 30.3gb
logstash-applogs-2017.01.23-3 16 r STARTED 30530098 30.3gb
logstash-applogs-2017.01.23-3 4 r STARTED 30529877 30.2gb
logstash-applogs-2017.01.23-3 4 p STARTED 30529877 30.2gb
logstash-applogs-2017.01.23-3 17 r STARTED 30528132 30.2gb
logstash-applogs-2017.01.23-3 17 p STARTED 30528132 30.4gb
logstash-applogs-2017.01.23-3 13 r STARTED 30521873 30.3gb
logstash-applogs-2017.01.23-3 13 p STARTED 30521873 30.4gb
logstash-applogs-2017.01.23-3 0 r STARTED 30520172 30.4gb
logstash-applogs-2017.01.23-3 0 p STARTED 30520172 30.5gb
I tested the incoming data by dumping it from Logstash into a file: I got a 290 MB file with 377,822 lines in 30 seconds. So Kafka is not the issue; at any given time my 3 Logstash servers receive about 35,000 events per second in total, but Elasticsearch indexes at most 10,000 events per second.
Can someone please help me with this issue?
Edit: I tried sending requests in batches of the default 125, then 500, 1000 and 10000, but I still didn't get any improvement in indexing speed.
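Besides the Logstash batch size, indexing throughput is often gated by the index refresh interval on the Elasticsearch side. A hedged sketch only (the index pattern and credentials are taken from the question; the host and the 30s value are examples, not a confirmed fix for this cluster):

```shell
# Lengthen the refresh interval (default 1s) so Lucene segments are published
# less often during heavy indexing; set it back afterwards if near-real-time
# search matters. Requires a reachable cluster node.
curl -u uname:pwd -XPUT "http://dataNode1:9200/logstash-applogs-*/_settings" -d '
{
  "index": { "refresh_interval": "30s" }
}'
```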

I improved the indexing rate by moving the data nodes to larger machines.
Data node: a VMware virtual machine with the following configuration:
14 CPUs @ 2.60GHz
64 GB RAM, 31 GB dedicated to Elasticsearch.
The fastest disk available to me was SAN over Fibre Channel, as I couldn't get any SSDs or local disks.
I achieved a maximum indexing rate of 100,000 events per second. Each document is around 2 to 5 KB.

Related

What does "l" mean in Elasticsearch node status?

Below is the node status of my Elasticsearch cluster (please look at the node.role column):
[root@manager]# curl -XGET http://192.168.6.51:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.6.54 20 97 0 0.00 0.00 0.00 dim - siem03.arif.local
192.168.6.51 34 55 0 0.16 0.06 0.01 l - siem00.arif.local
192.168.6.52 15 97 0 0.00 0.00 0.00 dim * siem01.arif.local
192.168.6.53 14 97 0 0.00 0.00 0.00 dim - siem02.arif.local
From Elasticsearch Documentation,
node.role, r, role, nodeRole
(Default) Roles of the node. Returned values include m (master-eligible node), d (data node), i (ingest node), and - (coordinating node only).
So, from the above output, dim means data + master + ingest node, which is absolutely correct. But I configured the host siem00.arif.local as a coordinating-only node, yet it shows l, which is not a value described in the documentation.
So what does it mean? It was just - before, but after an update (which I pushed to each of the nodes) it now shows l in node.role.
UPDATE:
All the other nodes except the coordinating node were one version behind. I have now updated all of the nodes to exactly the same version. Now it works, and here is the output:
[root@manager]# curl -XGET http://192.168.6.51:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.6.53 9 79 2 0.00 0.20 0.19 dilm * siem02.arif.local
192.168.6.52 13 78 2 0.18 0.24 0.20 dilm - siem01.arif.local
192.168.6.51 33 49 1 0.02 0.21 0.20 l - siem00.arif.local
192.168.6.54 12 77 4 0.02 0.19 0.17 dilm - siem03.arif.local
The current version is:
[root@manager]# rpm -qa | grep elasticsearch
elasticsearch-7.4.0-1.x86_64
The built-in roles are indeed d, m, i and -, but any plugin is free to define new roles if needed. There's another one called v for voting-only nodes.
The l role is for Machine Learning nodes (i.e. those with node.ml: true) as can be seen in the source code of MachineLearning.java in the MachineLearning plugin.
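Since the stray role letter here turned out to be a version-skew artifact, a quick way to check roles and versions together is the _cat API. A sketch using the host from the question (column names per the _cat/nodes documentation):

```shell
# One row per node: name, Elasticsearch version, advertised roles, and the
# elected-master marker. Mixed versions can render roles inconsistently.
curl -s "http://192.168.6.51:9200/_cat/nodes?v&h=name,version,node.role,master"
```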

Elasticsearch - Replicas unassigned after reopening index (INDEX_REOPENED error)

I closed my index and reopened it, and now my replica shards can't be assigned.
curl -s -XGET localhost:9201/_cat/shards?h=index,shard,prirep,state,unassigned.reason | grep UNASSIGNED
2018.03.27-team-logs 2 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 5 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 3 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 4 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 1 r UNASSIGNED INDEX_REOPENED
2018.03.27-team-logs 0 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 2 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 5 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 3 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 4 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 1 r UNASSIGNED INDEX_REOPENED
2018.03.28-team-logs 0 r UNASSIGNED INDEX_REOPENED
Could anybody explain what this error means and how to solve it? Before I closed the index, everything worked fine. I configured 6 shards and 1 replica, running Elasticsearch 6.2.
EDIT:
Output of curl -XGET "localhost:9201/_cat/shards":
2018.03.29-team-logs 1 r STARTED 1739969 206.2mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.29-team-logs 1 p STARTED 1739969 173mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.29-team-logs 2 p STARTED 1739414 169.9mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.29-team-logs 2 r STARTED 1739414 176.3mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.29-team-logs 4 p STARTED 1740185 186mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.29-team-logs 4 r STARTED 1740185 169.4mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.29-team-logs 5 r STARTED 1739660 164.3mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.29-team-logs 5 p STARTED 1739660 180.1mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.29-team-logs 3 p STARTED 1740606 171.2mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.29-team-logs 3 r STARTED 1740606 173.4mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.29-team-logs 0 r STARTED 1740166 169.7mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.29-team-logs 0 p STARTED 1740166 187mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.28-team-logs 1 p STARTED 2075020 194.2mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.28-team-logs 1 r UNASSIGNED
2018.03.28-team-logs 2 p STARTED 2076268 194.9mb 10.206.46.247 elk-es-data-hot-1.platform.osdc1.mall.local
2018.03.28-team-logs 2 r UNASSIGNED
2018.03.28-team-logs 4 p STARTED 2073906 194.9mb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
2018.03.28-team-logs 4 r UNASSIGNED
2018.03.28-team-logs 5 p STARTED 2072921 195mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.28-team-logs 5 r UNASSIGNED
2018.03.28-team-logs 3 p STARTED 2074579 194.1mb 10.206.46.246 elk-es-data-hot-2.platform.osdc1.mall.local
2018.03.28-team-logs 3 r UNASSIGNED
2018.03.28-team-logs 0 p STARTED 2073349 193.9mb 10.207.46.248 elk-es-data-hot-2.platform.osdc2.mall.local
2018.03.28-team-logs 0 r UNASSIGNED
2018.03.27-team-logs 1 p STARTED 356769 33.5mb 10.207.46.246 elk-es-data-warm-1.platform.osdc2.mall.local
2018.03.27-team-logs 1 r UNASSIGNED
2018.03.27-team-logs 2 p STARTED 356798 33.6mb 10.206.46.244 elk-es-data-warm-2.platform.osdc1.mall.local
2018.03.27-team-logs 2 r UNASSIGNED
2018.03.27-team-logs 4 p STARTED 356747 33.7mb 10.207.46.246 elk-es-data-warm-1.platform.osdc2.mall.local
2018.03.27-team-logs 4 r UNASSIGNED
2018.03.27-team-logs 5 p STARTED 357399 33.8mb 10.207.46.245 elk-es-data-warm-2.platform.osdc2.mall.local
2018.03.27-team-logs 5 r UNASSIGNED
2018.03.27-team-logs 3 p STARTED 357957 33.7mb 10.206.46.245 elk-es-data-warm-1.platform.osdc1.mall.local
2018.03.27-team-logs 3 r UNASSIGNED
2018.03.27-team-logs 0 p STARTED 356357 33.4mb 10.207.46.245 elk-es-data-warm-2.platform.osdc2.mall.local
2018.03.27-team-logs 0 r UNASSIGNED
.kibana 0 p STARTED 2 12.3kb 10.207.46.247 elk-es-data-hot-1.platform.osdc2.mall.local
.kibana 0 r UNASSIGNED
Output of curl -XGET "localhost:9201/_cat/nodes":
10.207.46.248 8 82 0 0.07 0.08 0.11 d - elk-es-data-hot-2
10.206.46.245 9 64 0 0.04 0.11 0.08 d - elk-es-data-warm-1
10.207.46.249 11 90 0 0.00 0.01 0.05 m * elk-es-master-2
10.207.46.245 9 64 0 0.00 0.01 0.05 d - elk-es-data-warm-2
10.206.46.247 12 82 0 0.00 0.06 0.08 d - elk-es-data-hot-1
10.206.46.244 10 64 0 0.08 0.04 0.05 d - elk-es-data-warm-2
10.207.46.243 5 86 0 0.00 0.01 0.05 d - elk-kibana
10.206.46.248 10 92 1 0.04 0.18 0.24 m - elk-es-master-1
10.206.46.246 6 82 0 0.02 0.07 0.09 d - elk-es-data-hot-2
10.207.46.247 9 82 0 0.06 0.06 0.05 d - elk-es-data-hot-1
10.206.46.241 6 91 0 0.00 0.02 0.05 m - master-test
10.206.46.242 8 89 0 0.00 0.02 0.05 d - es-kibana
10.207.46.246 8 64 0 0.00 0.02 0.05 d - elk-es-data-warm-1
It is expected behaviour.
Elasticsearch will not put a primary and its replica shard on the same
node. You will need at least 2 nodes to have 1 replica.
You can simply set replicas to 0:
PUT */_settings
{
"index" : {
"number_of_replicas" : 0
}
}
UPDATE:
After running the following request:
GET /_cluster/allocation/explain?pretty
the response can be seen here:
https://pastebin.com/1ag1Z7jL
"explanation" : "there are too many copies of the shard allocated to
nodes with attribute [datacenter], there are [2] total configured
shard copies for this shard id and [3] total attribute values,
expected the allocated shard count per attribute [2] to be less than
or equal to the upper bound of the required number of shards per
attribute [1]"
Probably you have a zone (allocation awareness) setting in use. Elasticsearch will avoid putting a primary and its replica shard in the same zone:
https://www.elastic.co/guide/en/elasticsearch/reference/current/allocation-awareness.html
With ordinary awareness, if one zone lost contact with the other zone,
Elasticsearch would assign all of the missing replica shards to a
single zone. But in this example, this sudden extra load would cause
the hardware in the remaining zone to be overloaded.
Forced awareness solves this problem by NEVER allowing copies of the
same shard to be allocated to the same zone.
For example, let's say we have an awareness attribute called zone, and
we know we are going to have two zones, zone1 and zone2. Here is how
we can force awareness on a node:
cluster.routing.allocation.awareness.force.zone.values: zone1,zone2
cluster.routing.allocation.awareness.attributes: zone

Aerospike - No improvements in latency on moving to in-memory cluster from on-disk cluster

To begin with, we had an Aerospike cluster of 5 nodes of the i2.2xlarge type in AWS, which our production fleet of around 200 servers was using to store/retrieve data. The Aerospike config of the cluster was as follows -
service {
user root
group root
paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
pidfile /var/run/aerospike/asd.pid
service-threads 8
transaction-queues 8
transaction-threads-per-queue 4
fabric-workers 8
transaction-pending-limit 100
proto-fd-max 25000
}
logging {
# Log file must be an absolute path.
file /var/log/aerospike/aerospike.log {
context any info
}
}
network {
service {
address any
port 3000
}
heartbeat {
mode mesh
port 3002 # Heartbeat port for this node.
# List one or more other nodes, one ip-address & port per line:
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
# mesh-seed-address-port <IP> 3002
interval 250
timeout 10
}
fabric {
port 3001
}
info {
port 3003
}
}
namespace FC {
replication-factor 2
memory-size 7G
default-ttl 30d # 30 days, use 0 to never expire/evict.
high-water-disk-pct 80 # How full may the disk become before the server begins eviction
high-water-memory-pct 70 # Evict non-zero TTL data if capacity exceeds # 70% of 15GB
stop-writes-pct 90 # Stop writes if capacity exceeds 90% of 15GB
storage-engine device {
device /dev/xvdb1
write-block-size 256K
}
}
It was properly handling the traffic for the namespace "FC", with latencies within 14 ms, as shown in the following graph plotted with Graphite.
However, on turning on another namespace with much higher traffic on the same cluster, it started giving a lot of timeouts and higher latencies as we scaled up the number of servers using the same 5-node cluster (increasing the number of servers step by step from 20 to 40 to 60), with the following namespace configuration -
namespace HEAVYNAMESPACE {
replication-factor 2
memory-size 35G
default-ttl 30d # 30 days, use 0 to never expire/evict.
high-water-disk-pct 80 # How full may the disk become before the server begins eviction
high-water-memory-pct 70 # Evict non-zero TTL data if capacity exceeds # 70% of 35GB
stop-writes-pct 90 # Stop writes if capacity exceeds 90% of 35GB
storage-engine device {
device /dev/xvdb8
write-block-size 256K
}
}
Following were the observations -
----FC Namespace----
20 - servers, 6k Write TPS, 16K Read TPS
set latency = 10ms
set timeouts = 1
get latency = 15ms
get timeouts = 3
40 - servers, 12k Write TPS, 17K Read TPS
set latency = 12ms
set timeouts = 1
get latency = 20ms
get timeouts = 5
60 - servers, 17k Write TPS, 18K Read TPS
set latency = 25ms
set timeouts = 5
get latency = 30ms
get timeouts = 10-50 (fluctuating)
----HEAVYNAMESPACE----
20 - del servers, 6k Write TPS, 16K Read TPS
set latency = 7ms
set timeouts = 1
get latency = 5ms
get timeouts = 0
no of keys = 47 million x 2
disk usage = 121 gb
ram usage = 5.62 gb
40 - del servers, 12k Write TPS, 17K Read TPS
set latency = 15ms
set timeouts = 5
get latency = 12ms
get timeouts = 2
60 - del servers, 17k Write TPS, 18K Read TPS
set latency = 25ms
set timeouts = 25-75 (fluctuating)
get latency = 25ms
get timeouts = 2-15 (fluctuating)
* Set latency refers to the latency in setting Aerospike cache keys, and get latency similarly to the latency in getting keys.
We had to turn off the namespace "HEAVYNAMESPACE" after reaching 60 servers.
We then started a fresh POC with a cluster of r3.4xlarge AWS instances (details here: https://aws.amazon.com/ec2/instance-types/), the key difference in the Aerospike configuration being the use of memory only for storage, hoping it would give better performance. Here is the aerospike.conf file -
service {
user root
group root
paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
pidfile /var/run/aerospike/asd.pid
service-threads 16
transaction-queues 16
transaction-threads-per-queue 4
proto-fd-max 15000
}
logging {
# Log file must be an absolute path.
file /var/log/aerospike/aerospike.log {
context any info
}
}
network {
service {
address any
port 3000
}
heartbeat {
mode mesh
port 3002 # Heartbeat port for this node.
# List one or more other nodes, one ip-address & port per line:
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
interval 250
timeout 10
}
fabric {
port 3001
}
info {
port 3003
}
}
namespace FC {
replication-factor 2
memory-size 30G
storage-engine memory
default-ttl 30d # 30 days, use 0 to never expire/evict.
high-water-memory-pct 80 # Evict non-zero TTL data if capacity exceeds # 70% of 15GB
stop-writes-pct 90 # Stop writes if capacity exceeds 90% of 15GB
}
We began with the FC namespace only and decided to go ahead with HEAVYNAMESPACE only if we saw significant improvements with FC, but we didn't. Here are the current observations with different combinations of node count and server count -
Current stats
Observation Point 1 - 4 nodes serving 130 servers.
Point 2 - 5 nodes serving 80 servers.
Point 3 - 5 nodes serving 100 servers.
These observation points are highlighted in the graphs below -
Get latency -
Set successes (giving a measure of the load handled by the cluster) -
We also observed that -
Total memory usage across cluster is 5.52 GB of 144 GB. Node-wise memory usage is ~ 1.10 GB out of 28.90 GB.
There were no observed write failures yet.
There were occasional get/set timeouts which looked fine.
No evicted objects.
Conclusion
We are not seeing the improvements we had expected from the memory-only configuration. We would like some pointers for scaling up at the same cost -
- by tweaking the Aerospike configuration
- or by using a more suitable AWS instance type (even if that leads to cost cutting).
Update
Output of the top command on one of the Aerospike servers, showing si (pointed out by @Sunil in his answer) -
$ top
top - 08:02:21 up 188 days, 48 min, 1 user, load average: 0.07, 0.07, 0.02
Tasks: 179 total, 1 running, 178 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 0.1%sy, 0.0%ni, 99.4%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 125904196k total, 2726964k used, 123177232k free, 148612k buffers
Swap: 0k total, 0k used, 0k free, 445968k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
63421 root 20 0 5217m 1.6g 4340 S 6.3 1.3 461:08.83 asd
If I am not wrong, si appears to be 0.2%. I checked the same on all the nodes of the cluster: it is 0.2% on one and 0.1% on the other three.
Also, here is the output of the network stats on the same node -
$ sar -n DEV 10 10
Linux 4.4.30-32.54.amzn1.x86_64 (ip-10-111-215-72) 07/10/17 _x86_64_ (16 CPU)
08:09:16 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:26 lo 12.20 12.20 5.61 5.61 0.00 0.00 0.00 0.00
08:09:26 eth0 2763.60 1471.60 299.24 233.08 0.00 0.00 0.00 0.00
08:09:26 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:36 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:09:36 eth0 2772.60 1474.50 300.08 233.48 0.00 0.00 0.00 0.00
08:09:36 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:46 lo 17.90 17.90 15.21 15.21 0.00 0.00 0.00 0.00
08:09:46 eth0 2802.80 1491.90 304.63 245.33 0.00 0.00 0.00 0.00
08:09:46 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:56 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:09:56 eth0 2805.20 1494.30 304.37 237.51 0.00 0.00 0.00 0.00
08:09:56 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:06 lo 9.40 9.40 5.05 5.05 0.00 0.00 0.00 0.00
08:10:06 eth0 3144.10 1702.30 342.54 255.34 0.00 0.00 0.00 0.00
08:10:06 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:16 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:10:16 eth0 2862.70 1522.20 310.15 238.32 0.00 0.00 0.00 0.00
08:10:16 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:26 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:10:26 eth0 2738.40 1453.80 295.85 231.47 0.00 0.00 0.00 0.00
08:10:26 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:36 lo 11.79 11.79 5.59 5.59 0.00 0.00 0.00 0.00
08:10:36 eth0 2758.14 1464.14 297.59 231.47 0.00 0.00 0.00 0.00
08:10:36 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:46 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:10:46 eth0 3100.40 1811.30 328.31 289.92 0.00 0.00 0.00 0.00
08:10:46 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:56 lo 9.40 9.40 5.05 5.05 0.00 0.00 0.00 0.00
08:10:56 eth0 2753.40 1460.80 297.15 231.98 0.00 0.00 0.00 0.00
Average: IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
Average: lo 12.07 12.07 6.45 6.45 0.00 0.00 0.00 0.00
Average: eth0 2850.12 1534.68 307.99 242.79 0.00 0.00 0.00 0.00
From the above, the total number of packets handled per second is 2850.12 + 1534.68 = 4384.8 (sum of rxpck/s and txpck/s), which is well within the 250K packets per second mentioned in the Amazon EC2 deployment guide on the Aerospike site, referenced in @RonenBotzer's answer.
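That packet-rate arithmetic can be checked mechanically from the sar "Average" line, e.g.:

```shell
# Sum rxpck/s and txpck/s (fields $3 and $4) from the eth0 average line
# copied from the sar output above.
avg="Average: eth0 2850.12 1534.68 307.99 242.79 0.00 0.00 0.00 0.00"
echo "$avg" | awk '{ printf "%.2f\n", $3 + $4 }'   # prints 4384.80
```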
Update 2
I ran the asadm command followed by show latency on one of the nodes of the cluster, and from the output it appears that there is no latency beyond 1 ms for either reads or writes -
Admin> show latency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~read Latency~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node Time Ops/Sec >1Ms >8Ms >64Ms
. Span . . . .
ip-10-111-215-72.ec2.internal:3000 11:35:01->11:35:11 1242.1 0.0 0.0 0.0
ip-10-13-215-20.ec2.internal:3000 11:34:57->11:35:07 1297.5 0.0 0.0 0.0
ip-10-150-147-167.ec2.internal:3000 11:35:04->11:35:14 1147.7 0.0 0.0 0.0
ip-10-165-168-246.ec2.internal:3000 11:34:59->11:35:09 1342.2 0.0 0.0 0.0
ip-10-233-158-213.ec2.internal:3000 11:35:00->11:35:10 1218.0 0.0 0.0 0.0
Number of rows: 5
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~write Latency~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node Time Ops/Sec >1Ms >8Ms >64Ms
. Span . . . .
ip-10-111-215-72.ec2.internal:3000 11:35:01->11:35:11 33.0 0.0 0.0 0.0
ip-10-13-215-20.ec2.internal:3000 11:34:57->11:35:07 37.2 0.0 0.0 0.0
ip-10-150-147-167.ec2.internal:3000 11:35:04->11:35:14 36.4 0.0 0.0 0.0
ip-10-165-168-246.ec2.internal:3000 11:34:59->11:35:09 36.9 0.0 0.0 0.0
ip-10-233-158-213.ec2.internal:3000 11:35:00->11:35:10 33.9 0.0 0.0 0.0
Number of rows: 5
Aerospike has several modes for storage that you can configure:
Data in memory with no persistence
Data in memory, persisted to disk
Data on SSD, primary index in memory (AKA Hybrid Memory architecture)
In-Memory Optimizations
Release 3.11 and release 3.12 of
Aerospike include several big performance improvements for in-memory namespaces.
Among these are a change to how partitions are represented, from a single red-black tree to sprigs (many sub-trees). The new config parameters partition-tree-sprigs and partition-tree-locks should be used appropriately. In your case, as r3.4xlarge instances have 122G of DRAM, you can afford the 311M of overhead associated with setting partition-tree-sprigs to the max value of 4096.
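As a sketch of where that setting lives (the namespace stanza, per the Aerospike configuration reference; the value is the one suggested above, and the namespace name is the one from the question):

```
namespace FC {
    ...
    partition-tree-sprigs 4096
}
```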
You should also consider the auto-pin=cpu setting. This option requires Linux kernel >= 3.19, which is part of Ubuntu >= 15.04 (but not many other distros yet).
Clustering Improvements
The recent releases 3.13 and 3.14 include a rewrite of the cluster manager. In general you should consider using the latest version, but I'm pointing out the aspects that will directly affect your performance.
EC2 Networking and Aerospike
You don't show the latency numbers of the cluster itself, so I suspect the problem is with the networking, rather than the nodes.
Older instance family types, such as the r3, c3, i2, come with ENIs - NICs which have a single transmit/receive queue. The software interrupts of cores accessing this queue may become a bottleneck as the number of CPUs increases, all of which need to wait for their turn to use the NIC. There's a knowledge base article in the Aerospike community discussion forum on using multiple ENIs with Aerospike to get around the limited performance capacity of the single ENI you initially get with such an instance. The Amazon EC2 deployment guide on the Aerospike site talks about using RPS to maximize TPS when you're in an instance that uses ENIs.
Alternatively, you should consider moving to the newer instances (r4, i3, etc) which come with multiqueue ENAs. These do not require RPS, and support higher TPS without adding extra cards. They also happen to have better chipsets, and cost significantly less than their older siblings (r4 is roughly 30% cheaper than r3, i3 is about 1/3 the price of the i2).
Your title is misleading; please consider changing it. You moved from on-disk to in-memory.
mem+disk means data is both on disk and in memory (using data-in-memory true).
My best guess is that one CPU is bottlenecked doing network I/O.
You can take a look at the top output and check the si (software interrupts) column.
If one CPU is showing a much higher value than the others,
the simplest thing you can try is RPS (Receive Packet Steering):
echo f|sudo tee /sys/class/net/eth0/queues/rx-0/rps_cpus
Once you confirm that it's a network bottleneck,
you can try ENA as suggested by @Ronen.
Going into detail:
when you had 15 ms latency with only FC, presumably the TPS was low.
But when you added high load on HEAVYNAMESPACE in production,
the latency kept increasing as you added more client nodes and hence TPS.
Similarly, in your POC the latency increased with client nodes.
The latency is under 15 ms even with 130 servers, which is partly good.
I am not sure I understood your set_success graph; I assume it's in kTPS.
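The f written to rps_cpus above is a CPU bitmask (CPUs 0-3). A small sketch of how such a mask can be derived for a different CPU count (the sysfs path is the one from the answer; writing it requires root):

```shell
# Build an RPS cpu mask covering the first N CPUs; N=4 reproduces the 'f'
# used above ((1 << 4) - 1 = 15 = 0xf).
N=4
mask=$(printf '%x' $(( (1 << N) - 1 )))
echo "$mask"   # prints f
# To apply it (root needed; eth0 / rx-0 as in the answer):
# echo "$mask" | sudo tee /sys/class/net/eth0/queues/rx-0/rps_cpus
```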
Update:
After looking at the server-side latency histogram, it looks like the server is doing fine.
Most likely it is a client issue. Check CPU and network on the client machine(s).

DSE - Cassandra : Commit Log Disk Impact on Performances

I'm running a DSE 4.6.5 cluster (Cassandra 2.0.14.352).
Following DataStax's guidelines, on every machine I separated the data directory from the commitlog/saved-caches directories:
data is on blazing fast drives
commit log and saved caches are on the system drives: 2 HDDs in RAID 1
Monitoring disks with OpsCenter while performing intensive writes, I see no issue with the former; however, I see the queue size on the latter (commit log) averaging around 300 to 400, with spikes up to 700 requests. Of course, latency is also fairly high on these drives.
Is this affecting the performance of my cluster?
Would you recommend putting the commit log and saved caches on an SSD, separate from the system disks?
Thanks.
Edit - Adding tpstats from one of nodes :
[root@dbc4 ~]# nodetool tpstats
Pool Name Active Pending Completed Blocked All time blocked
ReadStage 0 0 15938 0 0
RequestResponseStage 0 0 154745533 0 0
MutationStage 1 0 306973172 0 0
ReadRepairStage 0 0 253 0 0
ReplicateOnWriteStage 0 0 0 0 0
GossipStage 0 0 340298 0 0
CacheCleanupExecutor 0 0 0 0 0
MigrationStage 0 0 0 0 0
MemoryMeter 1 1 36284 0 0
FlushWriter 0 0 23419 0 996
ValidationExecutor 0 0 0 0 0
InternalResponseStage 0 0 0 0 0
AntiEntropyStage 0 0 0 0 0
MemtablePostFlusher 0 0 27007 0 0
MiscStage 0 0 0 0 0
PendingRangeCalculator 0 0 7 0 0
CompactionExecutor 8 10 7400 0 0
commitlog_archiver 0 0 0 0 0
HintedHandoff 0 1 222 0 0
Message type Dropped
RANGE_SLICE 0
READ_REPAIR 0
PAGED_RANGE 0
BINARY 0
READ 0
MUTATION 49547
_TRACE 0
REQUEST_RESPONSE 0
COUNTER_MUTATION 0
Edit 2 - sar output :
04:10:02 AM CPU %user %nice %system %iowait %steal %idle
04:10:02 PM all 22.25 26.33 1.93 0.48 0.00 49.02
04:20:01 PM all 23.23 26.19 1.90 0.49 0.00 48.19
04:30:01 PM all 23.71 26.44 1.90 0.49 0.00 47.45
04:40:01 PM all 23.89 26.22 1.86 0.47 0.00 47.55
04:50:01 PM all 23.58 26.13 1.88 0.53 0.00 47.88
Average: all 21.60 26.12 1.71 0.56 0.00 50.01
Monitoring disks with OpsCenter while performing intensive writes, I see no issue with the first,
Cassandra persists writes in memory (the memtable) and in the commit log (on disk).
When a memtable grows past a size threshold, or when you trigger it manually, Cassandra flushes it to disk.
To make sure your setup can handle your workload, try manually flushing all memtables on a node:
nodetool flush
Or just a specific keyspace/table with:
nodetool flush [keyspace] [columnfamily]
At the same time, monitor your disk I/O.
If you see high I/O wait, you can either spread the workload by adding more nodes, or switch the data drives to better ones with higher throughput.
Keep an eye on dropped mutations (which can come from other nodes sending writes/hints) and on blocked FlushWriters.
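For the "monitor your disk I/O" step, iostat from the sysstat package is a simple option; a sketch (the interval/count are examples, and device names will differ per machine):

```shell
# Force a flush in the background, then watch extended device stats once a
# second for 30 samples; await (latency) and %util are the columns to watch.
nodetool flush &
iostat -x 1 30
```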
I see the queue size from the later (commit log) averaging around 300 to 400 with spikes up to 700 requests.
These will probably be your writes to the commit log.
Is your hardware serving anything else? Is it software RAID? Do you have swap disabled?
Cassandra works best alone :) So yes, put at least the commit log on a separate (it can be smaller) disk.

windbg memory leak investigation - missing heap memory

I am investigating a slow memory leak in a Windows application using WinDbg.
!heap -s gives the following output:
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-------------------------------------------------------------------------------------
00000023d62c0000 08000002 1182680 1169996 1181900 15759 2769 78 3 2b63 LFH
00000023d4830000 08008000 64 4 64 2 1 1 0 0
00000023d6290000 08001002 1860 404 1080 43 7 2 0 0 LFH
00000023d6dd0000 08001002 32828 32768 32828 32765 33 1 0 0
External fragmentation 99 % (33 free blocks)
00000023d8fb0000 08001000 16384 2420 16384 2412 5 5 0 3355
External fragmentation 99 % (5 free blocks)
00000023da780000 08001002 60 8 60 5 2 1 0 0
-------------------------------------------------------------------------------------
This shows that the heap with address 00000023d62c0000 has over a gigabyte of reserved memory.
Next I ran the command !heap -stat -h 00000023d62c0000
heap # 00000023d62c0000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
30 19b1 - 4d130 (13.81)
20 1d72 - 3ae40 (10.55)
ccf 40 - 333c0 (9.18)
478 8c - 271a0 (7.01)
27158 1 - 27158 (7.00)
40 80f - 203c0 (5.78)
410 79 - 1eb90 (5.50)
68 43a - 1b790 (4.92)
16000 1 - 16000 (3.94)
50 39e - 12160 (3.24)
11000 1 - 11000 (3.05)
308 54 - fea0 (2.85)
60 28e - f540 (2.75)
8018 1 - 8018 (1.43)
80 f2 - 7900 (1.36)
1000 5 - 5000 (0.90)
70 ac - 4b40 (0.84)
4048 1 - 4048 (0.72)
100 3e - 3e00 (0.69)
48 c9 - 3888 (0.63)
If I add up the total sizes of the heap blocks from the above command (4d130 + 3ae40 + ...), I get only a few megabytes of allocated memory.
Am I missing something here? How can I find which blocks are consuming the gigabyte of allocated heap memory?
I believe that !heap -stat is broken for 64-bit dumps, at least large ones. I have instead used DebugDiag 1.2 for hunting memory leaks in 64-bit processes.
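Before switching tools entirely, a couple of other WinDbg angles can help cross-check (commands from the standard debugger extension set; the size filter value 0x30 is just the largest bucket from the stats above). !address -summary breaks down where the committed memory actually lives by region type, and !heap -flt s <size> lists the busy allocations of one size bucket so individual blocks can be inspected:

```
!address -summary
!heap -flt s 0x30
```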
