Ceph HA between 2 datacenters - high availability

I have 2 datacenters running Ceph with 12 OSDs (DC1: 3 OSDs x 2 nodes, DC2: 3 OSDs x 2 nodes) and 1 pool with a replicated size of 2.
The crush map:
ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 2.00000 root default
-105 1.00000 datacenter 1
-102 1.00000 host f200pr03
4 ssd 1.00000 osd.4 up 1.00000 1.00000
7 ssd 1.00000 osd.7 up 1.00000 1.00000
10 ssd 1.00000 osd.10 up 1.00000 1.00000
-103 1.00000 host f200pr04
5 ssd 1.00000 osd.5 up 1.00000 1.00000
8 ssd 1.00000 osd.8 up 1.00000 1.00000
11 ssd 1.00000 osd.11 up 1.00000 1.00000
-104 1.00000 datacenter 2
-100 1.00000 host f200pr01
0 ssd 1.00000 osd.0 up 0.70007 1.00000
1 ssd 1.00000 osd.1 up 0.70007 1.00000
2 ssd 1.00000 osd.2 up 1.00000 1.00000
-101 1.00000 host f200pr02
3 ssd 1.00000 osd.3 up 0.70007 1.00000
6 ssd 1.00000 osd.6 up 0.70007 1.00000
9 ssd 1.00000 osd.9 up 1.00000 1.00000
And the pool has this CRUSH rule applied:
# rules
rule replicated_rule {
id 3
type replicated
min_size 1
max_size 5
step take default
step chooseleaf firstn 2 type datacenter
step emit
}
When I shut down datacenter 2, datacenter 1 enters an inconsistent state and "ceph status" fails with the message "Cluster connection aborted".
In ceph.conf I have the following configuration:
[global]
fsid = 48abdb31-95db-48d7-b2aa-835be95bfe3c
mon_initial_members = f200pr01, f200pr02, f200pr03, f200pr04
mon_host = 10.20.230.241,10.20.230.242,10.20.230.243,10.20.230.244
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
mon_clock_drift_allowed = .15
mon_clock_drift_warn_backoff = 30
osd_journal_size = 10000
public_network = 10.20.230.0/24
What do I need to change to add datacenter HA to my environment? At the moment it only survives the shutdown of 1 of the 4 nodes.
Regards,

First, don't use pools with replicated size 2; it's a really bad idea and will lead to problems sooner or later. Since Ceph Pacific there's a stretch mode available, which you had to set up manually prior to Pacific.
In your case the cluster is unavailable because you have two MONs per DC, and if one DC fails the two remaining MONs can't form a quorum. You need a fifth MON in a different datacenter to have a resilient cluster. As for replication with two DCs, it's recommended to either use a pool size of 4 or erasure-coded pools, although your cluster is a little too small for the latter.
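The quorum arithmetic behind this can be sketched in a few lines (plain Python; `has_quorum` is just an illustrative helper, not part of any Ceph API):

```python
def has_quorum(mons_up, mons_total):
    """Ceph MONs use a Paxos variant: a strict majority of all MONs must be up."""
    return mons_up > mons_total // 2

# Current layout: 4 MONs, 2 per DC. Losing one DC leaves 2 of 4 -> no majority.
print(has_quorum(2, 4))  # False: the surviving DC cannot serve I/O

# With a 5th MON outside both DCs, losing a DC still leaves 3 of 5 MONs up.
print(has_quorum(3, 5))  # True: the cluster stays available
```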

Finding the transfer matrix of a system

I have the following system.
It represents a system of 4 known inputs with 12 known outputs.
What methods can I use to find the transfer matrix? Can I use a neural network or something like that, or is it only possible with matrix algebra?
Any help would be appreciated.
Thanks in advance.
No need to use a neural network; matrix algebra is enough!
Your question can be formulated as an optimization problem, i.e., minimize f(T) = norm(y - T*x) given y and x. If you have sufficient data pairs (x, y), then you can solve for T.
Another easy way is to use the generalized inverse of a matrix to solve for the transfer matrix T, i.e., T = Y*ginv(X). Here is an example in R:
library(MASS)
Y <- matrix(1:36,nrow = 9)
X <- matrix(1:16,nrow = 4)
T <- Y %*% ginv(X)
where
> X
[,1] [,2] [,3] [,4]
[1,] 1 5 9 13
[2,] 2 6 10 14
[3,] 3 7 11 15
[4,] 4 8 12 16
> Y
[,1] [,2] [,3] [,4]
[1,] 1 10 19 28
[2,] 2 11 20 29
[3,] 3 12 21 30
[4,] 4 13 22 31
[5,] 5 14 23 32
[6,] 6 15 24 33
[7,] 7 16 25 34
[8,] 8 17 26 35
[9,] 9 18 27 36
and the transfer T is solved as
> T
[,1] [,2] [,3] [,4]
[1,] 1.95 1.025 0.1 -0.825
[2,] 1.65 0.925 0.2 -0.525
[3,] 1.35 0.825 0.3 -0.225
[4,] 1.05 0.725 0.4 0.075
[5,] 0.75 0.625 0.5 0.375
[6,] 0.45 0.525 0.6 0.675
[7,] 0.15 0.425 0.7 0.975
[8,] -0.15 0.325 0.8 1.275
[9,] -0.45 0.225 0.9 1.575
To verify the obtained T, you can use
> norm(Y - T%*%X,"2")
[1] 1.178746e-13
which is close to 0, indicating that the obtained T is valid.
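For comparison, the same computation in Python using NumPy's Moore-Penrose pseudoinverse (the same generalized inverse that MASS::ginv computes) recovers the same T:

```python
import numpy as np

# Column-major fill, matching R's matrix(1:36, nrow = 9) and matrix(1:16, nrow = 4)
Y = np.arange(1, 37).reshape(9, 4, order="F")
X = np.arange(1, 17).reshape(4, 4, order="F")

T = Y @ np.linalg.pinv(X)             # T = Y * ginv(X)
residual = np.linalg.norm(Y - T @ X)  # essentially 0, so T reproduces the data
```

Since the Moore-Penrose pseudoinverse is unique, this T matches the R output above (e.g. T[1,1] = 1.95).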

What does "l" mean in Elasticsearch node status?

Below is the node status of my Elasticsearch cluster (please look at the node.role column):
[root@manager]# curl -XGET http://192.168.6.51:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.6.54 20 97 0 0.00 0.00 0.00 dim - siem03.arif.local
192.168.6.51 34 55 0 0.16 0.06 0.01 l - siem00.arif.local
192.168.6.52 15 97 0 0.00 0.00 0.00 dim * siem01.arif.local
192.168.6.53 14 97 0 0.00 0.00 0.00 dim - siem02.arif.local
From Elasticsearch Documentation,
node.role, r, role, nodeRole
(Default) Roles of the node. Returned values include m (master-eligible node), d (data node), i (ingest node), and - (coordinating node only).
So, from the above output, dim means Data + Master + Ingest node, which is absolutely correct. But I configured the host siem00.arif.local as a coordinating node, and it showed l, which is not an option described by the documentation.
So what does it mean? It was just - before, but after an update (which I pushed to each of the nodes) it no longer shows - and instead shows l in node.role.
UPDATE:
All the other nodes except the coordinating node were 1 version back. Now I have updated all of the nodes with exact same version. Now it works and here is the output,
[root@manager]# curl -XGET http://192.168.6.51:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.6.53 9 79 2 0.00 0.20 0.19 dilm * siem02.arif.local
192.168.6.52 13 78 2 0.18 0.24 0.20 dilm - siem01.arif.local
192.168.6.51 33 49 1 0.02 0.21 0.20 l - siem00.arif.local
192.168.6.54 12 77 4 0.02 0.19 0.17 dilm - siem03.arif.local
Current Version is :
[root@manager]# rpm -qa | grep elasticsearch
elasticsearch-7.4.0-1.x86_64
The built-in roles are indeed d, m, i and -, but any plugin is free to define new roles if needed. There's another one called v for voting-only nodes.
The l role is for Machine Learning nodes (i.e. those with node.ml: true) as can be seen in the source code of MachineLearning.java in the MachineLearning plugin.
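Putting the documented letters together with the extras mentioned here, a _cat/nodes role string can be decoded with a small lookup (an illustrative sketch, not part of any Elasticsearch client):

```python
ROLE_CODES = {
    "d": "data",
    "i": "ingest",
    "m": "master-eligible",
    "l": "machine learning",
    "v": "voting-only",
}

def decode_roles(node_role):
    """Translate a node.role string from _cat/nodes, e.g. 'dilm' or '-'."""
    if node_role == "-":
        return ["coordinating only"]
    return [ROLE_CODES[c] for c in node_role]

print(decode_roles("dilm"))  # ['data', 'ingest', 'machine learning', 'master-eligible']
print(decode_roles("l"))     # ['machine learning']
```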

Aerospike - No improvements in latency on moving to in-memory cluster from on-disk cluster

To begin with, we had an Aerospike cluster of 5 i2.2xlarge nodes in AWS, which our production fleet of around 200 servers was using to store/retrieve data. The Aerospike config of the cluster was as follows:
service {
user root
group root
paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
pidfile /var/run/aerospike/asd.pid
service-threads 8
transaction-queues 8
transaction-threads-per-queue 4
fabric-workers 8
transaction-pending-limit 100
proto-fd-max 25000
}
logging {
# Log file must be an absolute path.
file /var/log/aerospike/aerospike.log {
context any info
}
}
network {
service {
address any
port 3000
}
heartbeat {
mode mesh
port 3002 # Heartbeat port for this node.
# List one or more other nodes, one ip-address & port per line:
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
# mesh-seed-address-port <IP> 3002
interval 250
timeout 10
}
fabric {
port 3001
}
info {
port 3003
}
}
namespace FC {
replication-factor 2
memory-size 7G
default-ttl 30d # 30 days, use 0 to never expire/evict.
high-water-disk-pct 80 # How full may the disk become before the server begins eviction
high-water-memory-pct 70 # Evict non-zero TTL data if capacity exceeds 70% of the memory-size
stop-writes-pct 90 # Stop writes if capacity exceeds 90% of the memory-size
storage-engine device {
device /dev/xvdb1
write-block-size 256K
}
}
It was properly handling the traffic for the namespace "FC", with latencies within 14 ms, as shown in the following graph plotted using Graphite.
However, on turning on another namespace with much higher traffic on the same cluster, it started to give a lot of timeouts and higher latencies as we scaled up the number of servers using the same 5-node cluster (increasing the number of servers step by step from 20 to 40 to 60), with the following namespace configuration:
namespace HEAVYNAMESPACE {
replication-factor 2
memory-size 35G
default-ttl 30d # 30 days, use 0 to never expire/evict.
high-water-disk-pct 80 # How full may the disk become before the server begins eviction
high-water-memory-pct 70 # Evict non-zero TTL data if capacity exceeds # 70% of 35GB
stop-writes-pct 90 # Stop writes if capacity exceeds 90% of 35GB
storage-engine device {
device /dev/xvdb8
write-block-size 256K
}
}
Following were the observations -
----FC Namespace----
20 - servers, 6k Write TPS, 16K Read TPS
set latency = 10ms
set timeouts = 1
get latency = 15ms
get timeouts = 3
40 - servers, 12k Write TPS, 17K Read TPS
set latency = 12ms
set timeouts = 1
get latency = 20ms
get timeouts = 5
60 - servers, 17k Write TPS, 18K Read TPS
set latency = 25ms
set timeouts = 5
get latency = 30ms
get timeouts = 10-50 (fluctuating)
----HEAVYNAMESPACE----
20 - del servers, 6k Write TPS, 16K Read TPS
set latency = 7ms
set timeouts = 1
get latency = 5ms
get timeouts = 0
no of keys = 47 million x 2
disk usage = 121 gb
ram usage = 5.62 gb
40 - del servers, 12k Write TPS, 17K Read TPS
set latency = 15ms
set timeouts = 5
get latency = 12ms
get timeouts = 2
60 - del servers, 17k Write TPS, 18K Read TPS
set latency = 25ms
set timeouts = 25-75 (fluctuating)
get latency = 25ms
get timeouts = 2-15 (fluctuating)
* Set latency refers to the latency in setting Aerospike cache keys; get latency similarly refers to getting keys.
We had to turn off the namespace "HEAVYNAMESPACE" after reaching 60 servers.
We then started a fresh POC with a cluster of r3.4xlarge AWS instances (find details here: https://aws.amazon.com/ec2/instance-types/), with the key difference in the Aerospike configuration being the use of memory only for caching, hoping that it would give better performance. Here is the aerospike.conf file:
service {
user root
group root
paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
pidfile /var/run/aerospike/asd.pid
service-threads 16
transaction-queues 16
transaction-threads-per-queue 4
proto-fd-max 15000
}
logging {
# Log file must be an absolute path.
file /var/log/aerospike/aerospike.log {
context any info
}
}
network {
service {
address any
port 3000
}
heartbeat {
mode mesh
port 3002 # Heartbeat port for this node.
# List one or more other nodes, one ip-address & port per line:
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
interval 250
timeout 10
}
fabric {
port 3001
}
info {
port 3003
}
}
namespace FC {
replication-factor 2
memory-size 30G
storage-engine memory
default-ttl 30d # 30 days, use 0 to never expire/evict.
high-water-memory-pct 80 # Evict non-zero TTL data if capacity exceeds 80% of the memory-size
stop-writes-pct 90 # Stop writes if capacity exceeds 90% of the memory-size
}
We began with the FC namespace only, and decided to go ahead with the HEAVYNAMESPACE only if we saw significant improvements with the FC namespace, but we didn't. Here are the current observations with different combinations of node count and server count -
Current stats
Observation Point 1 - 4 nodes serving 130 servers.
Point 2 - 5 nodes serving 80 servers.
Point 3 - 5 nodes serving 100 servers.
These observation points are highlighted in the graphs below -
Get latency -
Set successes (giving a measure of the load handled by the cluster) -
We also observed that -
Total memory usage across cluster is 5.52 GB of 144 GB. Node-wise memory usage is ~ 1.10 GB out of 28.90 GB.
There were no observed write failures yet.
There were occasional get/set timeouts which looked fine.
No evicted objects.
Conclusion
We are not seeing the improvements we had expected from the memory-only configuration. We would like some pointers on how to scale up at the same cost:
- by tweaking the Aerospike configuration
- or by using a more suitable AWS instance type (even if that would lead to cost cutting).
Update
Output of the top command on one of the Aerospike servers, to show SI (pointed out by @Sunil in his answer):
$ top
top - 08:02:21 up 188 days, 48 min, 1 user, load average: 0.07, 0.07, 0.02
Tasks: 179 total, 1 running, 178 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 0.1%sy, 0.0%ni, 99.4%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 125904196k total, 2726964k used, 123177232k free, 148612k buffers
Swap: 0k total, 0k used, 0k free, 445968k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
63421 root 20 0 5217m 1.6g 4340 S 6.3 1.3 461:08.83 asd
If I am not wrong, the SI appears to be 0.2%. I checked the same on all the nodes of the cluster: it is 0.2% on one and 0.1% on the other three.
Also, here is the output of the network stats on the same node:
$ sar -n DEV 10 10
Linux 4.4.30-32.54.amzn1.x86_64 (ip-10-111-215-72) 07/10/17 _x86_64_ (16 CPU)
08:09:16 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:26 lo 12.20 12.20 5.61 5.61 0.00 0.00 0.00 0.00
08:09:26 eth0 2763.60 1471.60 299.24 233.08 0.00 0.00 0.00 0.00
08:09:26 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:36 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:09:36 eth0 2772.60 1474.50 300.08 233.48 0.00 0.00 0.00 0.00
08:09:36 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:46 lo 17.90 17.90 15.21 15.21 0.00 0.00 0.00 0.00
08:09:46 eth0 2802.80 1491.90 304.63 245.33 0.00 0.00 0.00 0.00
08:09:46 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:56 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:09:56 eth0 2805.20 1494.30 304.37 237.51 0.00 0.00 0.00 0.00
08:09:56 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:06 lo 9.40 9.40 5.05 5.05 0.00 0.00 0.00 0.00
08:10:06 eth0 3144.10 1702.30 342.54 255.34 0.00 0.00 0.00 0.00
08:10:06 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:16 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:10:16 eth0 2862.70 1522.20 310.15 238.32 0.00 0.00 0.00 0.00
08:10:16 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:26 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:10:26 eth0 2738.40 1453.80 295.85 231.47 0.00 0.00 0.00 0.00
08:10:26 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:36 lo 11.79 11.79 5.59 5.59 0.00 0.00 0.00 0.00
08:10:36 eth0 2758.14 1464.14 297.59 231.47 0.00 0.00 0.00 0.00
08:10:36 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:46 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:10:46 eth0 3100.40 1811.30 328.31 289.92 0.00 0.00 0.00 0.00
08:10:46 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:56 lo 9.40 9.40 5.05 5.05 0.00 0.00 0.00 0.00
08:10:56 eth0 2753.40 1460.80 297.15 231.98 0.00 0.00 0.00 0.00
Average: IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
Average: lo 12.07 12.07 6.45 6.45 0.00 0.00 0.00 0.00
Average: eth0 2850.12 1534.68 307.99 242.79 0.00 0.00 0.00 0.00
From the above, the total number of packets handled per second should be 2850.12 + 1534.68 = 4384.8 (sum of rxpck/s and txpck/s), which is well within the 250K packets per second mentioned in the Amazon EC2 deployment guide on the Aerospike site, referred to in @RonenBotzer's answer.
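As a quick sanity check on that arithmetic (both averages taken from the sar output above):

```python
rx_pps = 2850.12  # average rxpck/s on eth0
tx_pps = 1534.68  # average txpck/s on eth0

total_pps = rx_pps + tx_pps
print(round(total_pps, 2))  # 4384.8 packets/s, far below the ~250K pps single-ENI ceiling
```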
Update 2
I ran the asadm command followed by show latency on one of the nodes of the cluster, and from the output it appears that there is no latency beyond 1 ms for either reads or writes:
Admin> show latency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~read Latency~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node Time Ops/Sec >1Ms >8Ms >64Ms
. Span . . . .
ip-10-111-215-72.ec2.internal:3000 11:35:01->11:35:11 1242.1 0.0 0.0 0.0
ip-10-13-215-20.ec2.internal:3000 11:34:57->11:35:07 1297.5 0.0 0.0 0.0
ip-10-150-147-167.ec2.internal:3000 11:35:04->11:35:14 1147.7 0.0 0.0 0.0
ip-10-165-168-246.ec2.internal:3000 11:34:59->11:35:09 1342.2 0.0 0.0 0.0
ip-10-233-158-213.ec2.internal:3000 11:35:00->11:35:10 1218.0 0.0 0.0 0.0
Number of rows: 5
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~write Latency~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node Time Ops/Sec >1Ms >8Ms >64Ms
. Span . . . .
ip-10-111-215-72.ec2.internal:3000 11:35:01->11:35:11 33.0 0.0 0.0 0.0
ip-10-13-215-20.ec2.internal:3000 11:34:57->11:35:07 37.2 0.0 0.0 0.0
ip-10-150-147-167.ec2.internal:3000 11:35:04->11:35:14 36.4 0.0 0.0 0.0
ip-10-165-168-246.ec2.internal:3000 11:34:59->11:35:09 36.9 0.0 0.0 0.0
ip-10-233-158-213.ec2.internal:3000 11:35:00->11:35:10 33.9 0.0 0.0 0.0
Number of rows: 5
Aerospike has several modes for storage that you can configure:
Data in memory with no persistence
Data in memory, persisted to disk
Data on SSD, primary index in memory (AKA Hybrid Memory architecture)
In-Memory Optimizations
Releases 3.11 and 3.12 of Aerospike include several big performance improvements for in-memory namespaces.
Among these are a change to how partitions are represented, from a single red-black tree to sprigs (many sub-trees). The new config parameters partition-tree-sprigs and partition-tree-locks should be used appropriately. In your case, as r3.4xlarge instances have 122G of DRAM, you can afford the 311M of overhead associated with setting partition-tree-sprigs to the max value of 4096.
You should also consider the auto-pin=cpu setting. This option does require a Linux kernel >= 3.19, which is part of Ubuntu >= 15.04 (but not many other distributions yet).
Clustering Improvements
The recent releases 3.13 and 3.14 include a rewrite of the cluster manager. In general you should consider using the latest version, but I'm pointing out the aspects that will directly affect your performance.
EC2 Networking and Aerospike
You don't show the latency numbers of the cluster itself, so I suspect the problem is with the networking, rather than the nodes.
Older instance family types, such as the r3, c3, i2, come with ENIs - NICs which have a single transmit/receive queue. The software interrupts of cores accessing this queue may become a bottleneck as the number of CPUs increases, all of which need to wait for their turn to use the NIC. There's a knowledge base article in the Aerospike community discussion forum on using multiple ENIs with Aerospike to get around the limited performance capacity of the single ENI you initially get with such an instance. The Amazon EC2 deployment guide on the Aerospike site talks about using RPS to maximize TPS when you're in an instance that uses ENIs.
Alternatively, you should consider moving to the newer instances (r4, i3, etc) which come with multiqueue ENAs. These do not require RPS, and support higher TPS without adding extra cards. They also happen to have better chipsets, and cost significantly less than their older siblings (r4 is roughly 30% cheaper than r3, i3 is about 1/3 the price of the i2).
Your title is misleading. Please consider changing it. You moved from on-disk to in-memory.
mem+disk means the data is both on disk and in memory (using data-in-memory=true).
My best guess is that one CPU is bottlenecking on network I/O.
You can take a look at the top output and check the si (software interrupts) column.
If one CPU shows a much higher si than the others, the simplest thing you can try is RPS (Receive Packet Steering):
echo f | sudo tee /sys/class/net/eth0/queues/rx-0/rps_cpus
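The value written to rps_cpus is a hexadecimal CPU bitmask: f is binary 1111, i.e. CPUs 0-3 are allowed to process received packets. A small sketch (illustrative helper, not a real tool) of how such a mask is formed:

```python
def rps_cpu_mask(cpus):
    """Build the hex bitmask string that /sys/class/net/*/queues/rx-*/rps_cpus expects."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu  # set the bit for each CPU allowed to handle RX packets
    return format(mask, "x")

print(rps_cpu_mask([0, 1, 2, 3]))  # 'f'    -> the value used in the echo command
print(rps_cpu_mask(range(16)))     # 'ffff' -> all 16 vCPUs of an r3.4xlarge
```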
Once you confirm that it's a network bottleneck, you can try ENA as suggested by @Ronen.
Going into details: when you had 15 ms latency with only FC, I assume the TPS was low. But when you added the high load of HEAVYNAMESPACE in production, the latency kept increasing as you added more client nodes and hence TPS. Similarly, in your POC the latency increased with the number of client nodes. The latency being under 15 ms even with 130 servers is partly good.
I am not sure I understood your set_success graph; I am assuming it's in kTPS.
Update:
After looking at the server-side latency histogram, it looks like the server is doing fine.
Most likely it is a client issue. Check CPU and network on the client machine(s).

Indexing very slow in elasticsearch

I can't increase the indexing rate above 10,000 events/second no matter what I do. I am getting around 13,000 events per second from Kafka in a single Logstash instance. I am running 3 Logstash instances on different machines, reading data from the same Kafka topic.
I have set up an ELK cluster with 3 Logstash instances reading data from Kafka and sending it to my Elastic cluster.
My cluster contains 3 Logstash, 3 Elastic master nodes, 3 Elastic client nodes and 50 Elastic data nodes.
Logstash 2.0.4
Elastic Search 5.0.2
Kibana 5.0.2
All are Citrix VMs with the same configuration:
Red Hat Linux 7
Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 6 cores
32 GB RAM
2 TB spinning media
Logstash Config file :
output {
elasticsearch {
hosts => ["dataNode1:9200","dataNode2:9200","dataNode3:9200" upto "**dataNode50**:9200"]
index => "logstash-applogs-%{+YYYY.MM.dd}-1"
workers => 6
user => "uname"
password => "pwd"
}
}
Elasticsearch data node's elasticsearch.yml file:
cluster.name: my-cluster-name
node.name: node46-data-46
node.master: false
node.data: true
bootstrap.memory_lock: true
path.data: /apps/dataES1/data
path.logs: /apps/dataES1/logs
discovery.zen.ping.unicast.hosts: ["master1","master2","master3"]
network.host: hostname
http.port: 9200
The only change that I made in my jvm.options file is:
-Xms15g
-Xmx15g
System config changes that I did are as follows:
vm.max_map_count=262144
and in /etc/security/limits.conf I added :
elastic soft nofile 65536
elastic hard nofile 65536
elastic soft memlock unlimited
elastic hard memlock unlimited
elastic soft nproc 65536
elastic hard nproc unlimited
Indexing Rate
One of the active data node:
$ sudo iotop -o
Total DISK READ : 0.00 B/s | Total DISK WRITE : 243.29 K/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 357.09 K/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
5199 be/3 root 0.00 B/s 3.92 K/s 0.00 % 1.05 % [jbd2/xvdb1-8]
14079 be/4 elkadmin 0.00 B/s 51.01 K/s 0.00 % 0.53 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13936 be/4 elkadmin 0.00 B/s 51.01 K/s 0.00 % 0.39 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13857 be/4 elkadmin 0.00 B/s 58.86 K/s 0.00 % 0.34 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13960 be/4 elkadmin 0.00 B/s 35.32 K/s 0.00 % 0.33 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
13964 be/4 elkadmin 0.00 B/s 31.39 K/s 0.00 % 0.27 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
14078 be/4 elkadmin 0.00 B/s 11.77 K/s 0.00 % 0.00 % java -Xms15g -Xmx15g -XX:+UseConcMarkSweepGC -XX:CMSIni~h-5.0.2/lib/* org.elasticsearch.bootstrap.Elasticsearch
Index Details :
index shard prirep state docs store
logstash-applogs-2017.01.23-3 11 r STARTED 30528186 35gb
logstash-applogs-2017.01.23-3 11 p STARTED 30528186 30.3gb
logstash-applogs-2017.01.23-3 9 p STARTED 30530585 35.2gb
logstash-applogs-2017.01.23-3 9 r STARTED 30530585 30.5gb
logstash-applogs-2017.01.23-3 1 r STARTED 30526639 30.4gb
logstash-applogs-2017.01.23-3 1 p STARTED 30526668 30.5gb
logstash-applogs-2017.01.23-3 14 p STARTED 30539209 35.5gb
logstash-applogs-2017.01.23-3 14 r STARTED 30539209 35gb
logstash-applogs-2017.01.23-3 12 p STARTED 30536132 30.3gb
logstash-applogs-2017.01.23-3 12 r STARTED 30536132 30.3gb
logstash-applogs-2017.01.23-3 15 p STARTED 30528216 30.4gb
logstash-applogs-2017.01.23-3 15 r STARTED 30528216 30.4gb
logstash-applogs-2017.01.23-3 19 r STARTED 30533725 35.3gb
logstash-applogs-2017.01.23-3 19 p STARTED 30533725 36.4gb
logstash-applogs-2017.01.23-3 18 r STARTED 30525190 30.2gb
logstash-applogs-2017.01.23-3 18 p STARTED 30525190 30.3gb
logstash-applogs-2017.01.23-3 8 p STARTED 30526785 35.8gb
logstash-applogs-2017.01.23-3 8 r STARTED 30526785 35.3gb
logstash-applogs-2017.01.23-3 3 p STARTED 30526960 30.4gb
logstash-applogs-2017.01.23-3 3 r STARTED 30526960 30.2gb
logstash-applogs-2017.01.23-3 5 p STARTED 30522469 35.3gb
logstash-applogs-2017.01.23-3 5 r STARTED 30522469 30.8gb
logstash-applogs-2017.01.23-3 6 p STARTED 30539580 30.9gb
logstash-applogs-2017.01.23-3 6 r STARTED 30539580 30.3gb
logstash-applogs-2017.01.23-3 7 p STARTED 30535488 30.3gb
logstash-applogs-2017.01.23-3 7 r STARTED 30535488 30.4gb
logstash-applogs-2017.01.23-3 2 p STARTED 30524575 35.2gb
logstash-applogs-2017.01.23-3 2 r STARTED 30524575 35.3gb
logstash-applogs-2017.01.23-3 10 p STARTED 30537232 30.4gb
logstash-applogs-2017.01.23-3 10 r STARTED 30537232 30.4gb
logstash-applogs-2017.01.23-3 16 p STARTED 30530098 30.3gb
logstash-applogs-2017.01.23-3 16 r STARTED 30530098 30.3gb
logstash-applogs-2017.01.23-3 4 r STARTED 30529877 30.2gb
logstash-applogs-2017.01.23-3 4 p STARTED 30529877 30.2gb
logstash-applogs-2017.01.23-3 17 r STARTED 30528132 30.2gb
logstash-applogs-2017.01.23-3 17 p STARTED 30528132 30.4gb
logstash-applogs-2017.01.23-3 13 r STARTED 30521873 30.3gb
logstash-applogs-2017.01.23-3 13 p STARTED 30521873 30.4gb
logstash-applogs-2017.01.23-3 0 r STARTED 30520172 30.4gb
logstash-applogs-2017.01.23-3 0 p STARTED 30520172 30.5gb
I tested the incoming data in Logstash by dumping it to a file: I got a file of 290 MB with 377,822 lines in 30 seconds. So there is no issue on the Kafka side, as at any given time I am receiving around 35,000 events per second across my 3 Logstash servers, but my Elasticsearch is only able to index a maximum of 10,000 events per second.
Can someone please help me with this issue?
Edit: I tried sending the requests in batches of the default 125, then 500, 1000, and 10000, but I still didn't get any improvement in the indexing speed.
I improved the indexing rate by moving to larger machines for the data nodes.
Data node: a VMware virtual machine with the following config:
14 CPUs @ 2.60GHz
64 GB RAM, 31 GB dedicated to Elasticsearch.
The fastest disk available to me was a Fibre Channel SAN, as I couldn't get any SSDs or local disks.
I achieved a maximum indexing rate of 100,000 events per second. Each document is around 2 to 5 KB.

windbg memory leak investigation - missing heap memory

I am investigating a slow memory leak in a Windows application using WinDbg.
!heap -s gives the following output:
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-------------------------------------------------------------------------------------
00000023d62c0000 08000002 1182680 1169996 1181900 15759 2769 78 3 2b63 LFH
00000023d4830000 08008000 64 4 64 2 1 1 0 0
00000023d6290000 08001002 1860 404 1080 43 7 2 0 0 LFH
00000023d6dd0000 08001002 32828 32768 32828 32765 33 1 0 0
External fragmentation 99 % (33 free blocks)
00000023d8fb0000 08001000 16384 2420 16384 2412 5 5 0 3355
External fragmentation 99 % (5 free blocks)
00000023da780000 08001002 60 8 60 5 2 1 0 0
-------------------------------------------------------------------------------------
This shows that the heap with address 00000023d62c0000 has over a gigabyte of reserved memory.
Next I ran the command !heap -stat -h 00000023d62c0000
heap # 00000023d62c0000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
30 19b1 - 4d130 (13.81)
20 1d72 - 3ae40 (10.55)
ccf 40 - 333c0 (9.18)
478 8c - 271a0 (7.01)
27158 1 - 27158 (7.00)
40 80f - 203c0 (5.78)
410 79 - 1eb90 (5.50)
68 43a - 1b790 (4.92)
16000 1 - 16000 (3.94)
50 39e - 12160 (3.24)
11000 1 - 11000 (3.05)
308 54 - fea0 (2.85)
60 28e - f540 (2.75)
8018 1 - 8018 (1.43)
80 f2 - 7900 (1.36)
1000 5 - 5000 (0.90)
70 ac - 4b40 (0.84)
4048 1 - 4048 (0.72)
100 3e - 3e00 (0.69)
48 c9 - 3888 (0.63)
If I add up the total size of the heap blocks from the above command (4d130 + 3ae40 + ...) I get a few megabytes of allocated memory.
Am I missing something here? How can I find which blocks are consuming the gigabyte of allocated heap memory?
I believe that !heap -stat is broken for 64-bit dumps, at least large ones. I have instead used DebugDiag 1.2 for hunting memory leaks on 64-bit.
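For what it's worth, the question's arithmetic checks out: summing the hex total column of the !heap -stat output above (only the top 20 buckets, per max-display: 20) gives roughly 2 MB, nowhere near the ~1.1 GB of committed memory that !heap -s reports for that heap:

```python
# The `total` column (hex) from the !heap -stat -h output above
totals = """4d130 3ae40 333c0 271a0 27158 203c0 1eb90 1b790 16000 12160
11000 fea0 f540 8018 7900 5000 4b40 4048 3e00 3888""".split()

total_bytes = sum(int(t, 16) for t in totals)
print(total_bytes)  # 1969520 bytes, i.e. about 1.9 MiB
```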
