Does cAdvisor use the binary or decimal system when reporting byte metrics?

I am using cAdvisor v0.44.0 to expose container metrics, which are collected by Prometheus and visualized in Grafana. One of the cAdvisor metrics is container_network_receive_bytes_total. Let's say its value is 554325454. Should bytes reported by cAdvisor be converted to megabytes with the decimal or the binary approach?
554325454 bytes = 554.325454 MB (decimal, 1 MB = 10^6 bytes)
554325454 bytes = 528.6459484 MiB (binary, 1 MiB = 2^20 bytes)
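cAdvisor, like most Prometheus exporters, reports the raw byte count; which divisor to use is purely a display choice (Grafana offers both a "bytes (IEC)" unit using the binary divisor and a "bytes (SI)" unit using the decimal one). A quick sketch of the two conversions:

```python
raw = 554_325_454  # sample value of container_network_receive_bytes_total

mb = raw / 10**6   # decimal megabytes (SI):  1 MB  = 1 000 000 bytes
mib = raw / 2**20  # binary mebibytes (IEC):  1 MiB = 1 048 576 bytes

print(f"{mb:.6f} MB")   # 554.325454 MB
print(f"{mib:.6f} MiB") # 528.645948 MiB
```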

Related

Elasticsearch Ram recommendation

I'm deploying an Elasticsearch cluster ingesting roughly 40 GB a day with a time-to-live of 365 days. Write speed would be around 50 msgs/sec. Reads would be mostly driven by user dashboards, so the read frequency won't be high. What would be the best hardware requirements for this amount of data? How many master and data nodes would be required in this situation?
Obviously, you should choose the hardware based on the search/index rate. 50 msgs/sec is very low for Elasticsearch. You have 14.6 TB of data in total, which should be at most 85% of your total disk (based on the 85% watermark); this means you need about 17 TB of disk. I think you can use one server with 128 GB RAM, at least a 10-core CPU, and 17 TB of disk, or two servers with half of this config: one server as both master and data node, and the other as a data-only node.
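The capacity arithmetic behind that answer, as a rough sketch (the 85% figure corresponds to Elasticsearch's default low disk watermark; replication and index overhead are ignored here):

```python
daily_gb = 40       # ingest per day
ttl_days = 365      # retention
watermark = 0.85    # data should fill at most ~85% of disk

total_tb = daily_gb * ttl_days / 1000  # total raw data, in TB
disk_tb = total_tb / watermark         # disk needed to stay under the watermark

print(f"data: {total_tb:.1f} TB, disk: {disk_tb:.1f} TB")  # data: 14.6 TB, disk: 17.2 TB
```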

Kilograms-to-grams conversion processor in Elasticsearch

There is a standard processor for converting human-readable byte sizes (e.g. gigabytes) into bytes: https://www.elastic.co/guide/en/elasticsearch/reference/current/bytes-processor.html
We have a field weight whose values can be 3kg, 1200g, ...
Is there a similar processor for grams? Or which processor can be used to achieve the same functionality?
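As far as I know there is no built-in grams processor; a script processor in an ingest pipeline could normalize the field instead. The parsing logic is sketched here in Python with a hypothetical to_grams helper (the name and the supported units are assumptions for illustration):

```python
import re

def to_grams(value: str) -> int:
    """Parse weight strings like '3kg' or '1200g' into grams."""
    match = re.fullmatch(r"(\d+(?:\.\d+)?)\s*(kg|g)", value.strip())
    if match is None:
        raise ValueError(f"unparsable weight: {value!r}")
    number, unit = float(match.group(1)), match.group(2)
    grams = number * 1000 if unit == "kg" else number
    return int(grams)

print(to_grams("3kg"))    # 3000
print(to_grams("1200g"))  # 1200
```

The same regex-and-multiply logic could be ported to a Painless script inside an ingest pipeline to run at index time.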

How is storage being used in Elasticsearch 2.x?

I am using Elasticsearch 2.x. I have 9 nodes in total on m4.4xlarge instances and I want to downsize to fewer nodes.
Total docs I need to store: 2 million
Per-doc size: 30 KB
With these stats I believe Elasticsearch would need:
30 KB * 2,000,000 docs = 60,000,000 KB = 60,000 MB ≈ 60 GB
However, when I have indexed all the docs, I see 500 GB of data.
I am confused as to how my index has grown this much.
Could someone please give me some insights to work from?
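The estimate in the question checks out arithmetically; a sketch, assuming decimal units (note it ignores everything that adds to on-disk size in practice, such as replicas and index overhead):

```python
docs = 2_000_000  # total documents
doc_kb = 30       # size per document, in KB

total_kb = docs * doc_kb         # 60,000,000 KB
total_gb = total_kb / 1_000_000  # decimal conversion: KB -> GB

print(f"{total_gb:.0f} GB")  # 60 GB
```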

Maximum size of a string metric in SonarQube

Is there any limit on the size of a metric whose data type is string in SonarQube?
4000 characters is the maximum size of string measures.

Elasticsearch and Logstash Performance Tuning

On a single node running Elasticsearch alongside Logstash, we tested parsing 20 MB and 200 MB files into Elasticsearch on different AWS instance types, i.e. medium, large, and xlarge.
Environment Details: medium instance, 3.75 GB RAM, 1 core, Storage: 4 GB SSD, 64-bit, Network Performance: Moderate
Instance running with : Logstash, Elastic search
Scenario 1
**With default settings**
Result :
20 MB logfile: 23 mins, 175 events/sec
200 MB logfile: 3 hrs 3 mins, 175 events/sec
Added the following to settings:
Java heap size : 2GB
bootstrap.mlockall: true
indices.fielddata.cache.size: "30%"
indices.cache.filter.size: "30%"
index.translog.flush_threshold_ops: 50000
indices.memory.index_buffer_size: 50%
# Search thread pool
threadpool.search.type: fixed
threadpool.search.size: 20
threadpool.search.queue_size: 100
**With added settings**
Result:
20 MB logfile: 22 mins, 180 events/sec
200 MB logfile: 3 hrs 7 mins, 180 events/sec
Scenario 2
Environment Details: r3.large, 15.25 GB RAM, 2 cores, Storage: 32 GB SSD, 64-bit, Network Performance: Moderate
Instance running with : Logstash, Elastic search
**With default settings**
Result :
20 MB logfile: 7 mins, 750 events/sec
200 MB logfile: 65 mins, 800 events/sec
Added the following to settings:
Java heap size: 7gb
other parameters same as above
**With added settings**
Result:
20 MB logfile: 7 mins, 800 events/sec
200 MB logfile: 55 mins, 800 events/sec
Scenario 3
Environment Details: r3.xlarge (R3 High-Memory Extra Large), 30.5 GB RAM, 4 cores, Storage: 32 GB SSD, 64-bit, Network Performance: Moderate
Instance running with : Logstash, Elastic search
**With default settings**
Result:
20 MB logfile: 7 mins, 1200 events/sec
200 MB logfile: 34 mins, 1200 events/sec
Added the following to settings:
Java heap size: 15gb
other parameters same as above
**With added settings**
Result:
20 MB logfile: 7 mins, 1200 events/sec
200 MB logfile: 34 mins, 1200 events/sec
I wanted to know:
What is the benchmark for this performance?
Does the performance meet the benchmark, or is it below the benchmark?
Why, even after I increased the Elasticsearch JVM heap, am I not able to see a difference?
How do I monitor Logstash and improve its performance?
I appreciate any help on this, as I am new to Logstash and Elasticsearch.
I think this situation is related to the fact that Logstash uses fixed-size queues (the Logstash event processing pipeline).
Logstash sets the size of each queue to 20. This means a maximum of 20 events can be pending for the next stage. The small queue sizes mean that Logstash simply blocks and stalls safely when there’s a heavy load or temporary pipeline problems. The alternatives would be to either have an unlimited queue or drop messages when there’s a problem. An unlimited queue can grow unbounded and eventually exceed memory, causing a crash that loses all of the queued messages.
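The blocking behavior described above can be illustrated with a bounded queue. This is a generic backpressure sketch, not Logstash's actual implementation:

```python
import queue
import threading
import time

q = queue.Queue(maxsize=20)  # like Logstash's fixed-size inter-stage queue
processed = []

def slow_consumer():
    while True:
        event = q.get()
        time.sleep(0.001)  # simulate a slow filter/output stage
        processed.append(event)
        q.task_done()

threading.Thread(target=slow_consumer, daemon=True).start()

# The producer blocks on put() whenever 20 events are already pending,
# applying backpressure instead of letting memory grow without bound.
for i in range(100):
    q.put({"event": i})  # blocks while the queue is full

q.join()
print(f"processed {len(processed)} events")  # processed 100 events
```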
I think what you should try is to increase the worker count with the '-w' flag.
On the other hand, many people say that Logstash should be scaled horizontally, rather than by adding more cores and GB of RAM (How to improve Logstash performance).
You have set the Java heap size correctly with respect to your total memory, but I don't think you are utilizing it properly. I hope you have an idea of what the fielddata size is: the default is 60% of the heap size, and you are reducing it to 30%.
I don't know why you are doing this; my perception might be wrong for your use case, but it is a good habit to allocate indices.fielddata.cache.size: "70%" or even 75%. With this setting, however, you must also set something like indices.breaker.total.limit: "80%" to avoid OutOfMemory (OOM) exceptions. You can check Limiting Memory Usage for further details.
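To make the answer's heap split concrete, here is the arithmetic for the scenario 2 heap; the 60% default is as stated in the answer, and the 70%/80% figures are the answer's suggestion, not Elasticsearch defaults:

```python
heap_gb = 7.0  # scenario 2 JVM heap

fielddata_default = 0.60 * heap_gb  # default fielddata ceiling, per the answer
fielddata_answer = 0.70 * heap_gb   # indices.fielddata.cache.size: "70%"
breaker_answer = 0.80 * heap_gb     # indices.breaker.total.limit: "80%"

print(f"default fielddata cache: {fielddata_default:.1f} GB")    # 4.2 GB
print(f"suggested fielddata cache: {fielddata_answer:.1f} GB")   # 4.9 GB
print(f"suggested breaker limit: {breaker_answer:.1f} GB")       # 5.6 GB
```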