I want to calculate Elasticsearch's indexing rate myself. In the stats API there is an index_time_in_millis field. What is the meaning of that field?
You're close. I believe you can calculate the index rate by doing the following:
1. Sample index_total and index_time_in_millis at two points in time.
2. Subtract the index_total of the first sample from the index_total of the second sample.
3. Subtract the index_time_in_millis of the first sample from the index_time_in_millis of the second sample, then divide the result by 1000 to get seconds.
Finally, divide the number from step 2 by the number from step 3. This gives you the number of documents indexed per second.
Formula: rate = (index_total₂ − index_total₁) / ((index_time_in_millis₂ − index_time_in_millis₁) / 1000)
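As a rough illustration, here is a minimal Python sketch of this sampling approach, assuming a cluster reachable at http://localhost:9200 and the requests library; the URL and sample period are placeholders:

```python
import time
import requests

# Hypothetical local cluster; adjust the URL for your environment.
STATS_URL = "http://localhost:9200/_stats/indexing"

def sample_indexing_stats():
    # The _stats API nests cluster-wide totals under _all.total.indexing.
    indexing = requests.get(STATS_URL).json()["_all"]["total"]["indexing"]
    return indexing["index_total"], indexing["index_time_in_millis"]

docs_1, millis_1 = sample_indexing_stats()
time.sleep(60)  # sample period; any interval works
docs_2, millis_2 = sample_indexing_stats()

# Docs indexed per second of indexing time, per the formula above.
rate = (docs_2 - docs_1) / ((millis_2 - millis_1) / 1000)
print(f"indexing rate: {rate:.1f} docs/sec")
```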
In Prometheus I want to calculate the total average of this specific power metric (dell_hw_chassis_power_reading) within a period of time. For example, the last 2 hours.
I currently am using this query below:
sum(avg_over_time(dell_hw_chassis_power_reading[2h])) by (node,instance)
This query seems to only give me the latest value within the 2-hour interval specified in the query. It does not give me the total average for all scrapes of that metric within those 2 hours.
The query avg_over_time(dell_hw_chassis_power_reading[2h]) returns the average of the raw samples stored in Prometheus over the last 2 hours. If Prometheus contains multiple time series with the name dell_hw_chassis_power_reading, then the query returns an independent average for each matching time series.
If you want to calculate the average over all the time series with the name dell_hw_chassis_power_reading, then the following query must be used:
sum(sum_over_time(dell_hw_chassis_power_reading[2h]))
/
sum(count_over_time(dell_hw_chassis_power_reading[2h]))
If you need the average grouped by some label or set of labels, then the following query must be used:
sum(sum_over_time(dell_hw_chassis_power_reading[2h])) by (labels)
/
sum(count_over_time(dell_hw_chassis_power_reading[2h])) by (labels)
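For illustration, a minimal Python sketch that runs the combined query against Prometheus's /api/v1/query endpoint; the server address is an assumption:

```python
import requests

PROMETHEUS_URL = "http://localhost:9090/api/v1/query"  # assumed address

# Average over all series named dell_hw_chassis_power_reading,
# computed as total sum of raw samples / total number of samples.
query = (
    "sum(sum_over_time(dell_hw_chassis_power_reading[2h])) / "
    "sum(count_over_time(dell_hw_chassis_power_reading[2h]))"
)

response = requests.get(PROMETHEUS_URL, params={"query": query}).json()
for result in response["data"]["result"]:
    # Each result value is a [unix_timestamp, "value"] pair.
    print(result["metric"], result["value"][1])
```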
How can I estimate the time needed to read an article, like the docs.asp.net website does?
At the top of every article, it says you need xxx minutes to read it. I think they are using an algorithm to estimate that time.
How can I do that?
Thanks in advance
The average reading speed is about 250-300 words per minute. Once you know this, you just need to do the following (a short sketch follows the list):
Get the article word count.
Divide this number by 275 (more or less).
Round the result to get an integer number of minutes.
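A minimal Python sketch of these three steps; the 275 words-per-minute figure is the rough average from above, and the regex word count is a simplification:

```python
import math
import re

WORDS_PER_MINUTE = 275  # assumed average reading speed

def estimated_read_time(text: str) -> int:
    # Count words with a simple word-character match.
    word_count = len(re.findall(r"\w+", text))
    # Round up so short articles still report at least one minute.
    return max(1, math.ceil(word_count / WORDS_PER_MINUTE))

print(estimated_read_time("word " * 938))  # -> 4 (minutes)
```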
According to a study conducted in 2012, the average reading speed of an adult for text in English is: 228±30 words, 313±38 syllables, and 987±118 characters per minute.
You can therefore calculate an average time to read a particular article by counting one of these factors and dividing by that average speed. Syllables per minute is probably the most accurate, but for computers, words and characters are easier to count.
Study Citation:
Trauzettel-Klosinski, S.; Dietz, K.; and the IReST Study Group. "Standardized Assessment of Reading Performance: The New International Reading Speed Texts IReST." Investigative Ophthalmology & Visual Science, August 2012, Vol. 53, 5452-5461.
There's a nice solution on how to get the estimated read time of any article or blog post: https://stackoverflow.com/a/63820743/12490386
Here's an easy one:
Divide your total word count by 200.
You'll get a decimal number; for a 938-word article, that's 938 / 200 = 4.69. The whole-number part is your minutes: in this case, 4.
Take the fractional part and multiply it by 60 to get your seconds: 0.69 × 60 = 41.4, which rounds to 41 seconds.
The result? 938 words = a 4 minute, 41 second read.
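A minimal Python sketch of this minutes-and-seconds arithmetic, using the 200 words-per-minute figure assumed above:

```python
WORDS_PER_MINUTE = 200  # the reading speed this answer assumes

def read_time(word_count: int) -> tuple[int, int]:
    # Whole minutes, then the leftover words converted to seconds.
    minutes, leftover = divmod(word_count, WORDS_PER_MINUTE)
    seconds = round(leftover / WORDS_PER_MINUTE * 60)
    return minutes, seconds

print(read_time(938))  # -> (4, 41)
```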
We have a counter metric in one of our microservices which pushes data to DataDog. I want to display the total count for a given time frame, and also the count per day (X axis would have the date and Y axis the count). How do we achieve this?
I tried using sum by and diff with the Query Value representation. It gives the total count for the given time frame, but I would like a bar graph with the date on the X axis and the count on the Y axis. Is this possible in DataDog?
It seems like there are 2 main questions here:
1. Display the total count for a given time frame.
2. Display the count per day.
I think the rollup method is going to be your friend for both questions.
For #1, you need to pass in the time frame you want a total over: sum:<metric_name>.rollup(sum, <time_frame>). The single value can be displayed using the Query Value visualization.
For #2, the Datadog docs say you can get metrics per day when
graphed using a day-long rollup with .rollup(avg, 86400)
So this would look something like sum:<metric_name>.rollup(sum, 86400), which can be displayed as a Timeseries with bars.
I'm creating an app to monitor water quality. The temperature data is updated every 2 minutes to a Firebase Realtime Database. The app has two requirements:
1) It should alert the user when the temperature exceeds 33 degrees or drops below 23 degrees - this part is done.
2) It should alert the user when there is a big temperature fluctuation, analysing the data every 30 minutes - this part I'm confused about.
I don't know what algorithm to use to detect a big temperature fluctuation over a period of time and alert the user. Can someone help me with this?
For a period of 30 minutes, your app would give you 15 values.
If you want to detect a big change in this data, you can implement the following method:
Calculate the mean and the standard deviation of the values.
Subtract the data you have from the mean and then take the absolute value of the result.
Check whether the absolute value is greater than one standard deviation; if it is, you have a big fluctuation.
See this example for a better understanding (a code sketch follows it):
Let's suppose you have these values over 10 minutes:
25,27,24,35,28
First Step:
Mean ≈ 27.8 (rounded to 27 in the arithmetic below)
One standard deviation ≈ 3.8
Second Step: Absolute(Data - Mean)
abs(25-27) = 2
abs(27-27) = 0
abs(24-27) = 3
abs(35-27) = 8
abs(28-27) = 1
Third Step:
Check whether any of the subtractions is greater than the standard deviation.
abs(35-27) gives 8, which is greater than 3.8.
So, there is a big fluctuation. If all the subtracted results are less than the standard deviation, then there is no big fluctuation.
You can further improve the result by using two or three standard deviations instead of one.
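Here is a minimal Python sketch of the method above, using the population standard deviation and the example readings; num_sigmas is the knob the previous paragraph suggests raising to two or three:

```python
import statistics

def big_fluctuations(readings, num_sigmas=1.0):
    # Flag readings that deviate from the mean by more than
    # num_sigmas standard deviations (population std dev).
    mean = statistics.fmean(readings)
    sigma = statistics.pstdev(readings)
    return [x for x in readings if abs(x - mean) > num_sigmas * sigma]

samples = [25, 27, 24, 35, 28]  # the example readings above
print(big_fluctuations(samples))  # -> [35]
```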
Start by defining what you mean by fluctuation.
You don't say what temperature scale you're using. Fahrenheit, Celsius, Rankine, or Kelvin?
Your sampling rate is a new data value every two minutes. Do you define fluctuation as the absolute value of the difference between the last point and current value? That's defensible.
If the maximum allowable absolute value is some multiple of your 33 − 23 = 10 degree range, you're in business.
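A minimal sketch of this consecutive-difference definition in Python; the max_delta threshold of 5 degrees is an arbitrary assumption you would tune:

```python
def fluctuation_alerts(readings, max_delta=5.0):
    # max_delta is an assumed threshold in the same units as the readings;
    # alert whenever two consecutive samples differ by more than max_delta.
    return [
        (prev, curr)
        for prev, curr in zip(readings, readings[1:])
        if abs(curr - prev) > max_delta
    ]

print(fluctuation_alerts([25, 27, 24, 35, 28]))  # -> [(24, 35), (35, 28)]
```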
When I execute my test plan in JMeter for 10, 50, 100... virtual users with a ramp-up period of 30 seconds and a loop count of 1, the Average response time I see is not what I get when I calculate Average Time = (Min Time + Max Time) / 2.
Please check my attached image for the differences in Average time.
Can anyone please suggest how I should interpret this?
Thanks in Advance.
Average: This is the average elapsed time of a set of results. It is the arithmetic mean of all the samples' response times.
The following equation shows how the average value (μ) is calculated:
μ = (1/n) · Σ xᵢ  (sum over i = 1…n)
An important thing to understand is that the mean value can be very misleading, as it does not show you how close (or far) your values are from the average. The main thing you should focus on is the standard deviation.
The standard deviation (σ) measures the mean distance of the values to their average (μ). In other words, it gives us a good idea of the dispersion or variability of the measures to their mean value.
The following equation shows how the standard deviation (σ) is calculated:
σ = √( (1/n) · Σ (xᵢ − μ)² )  (sum over i = 1…n)
So interpreting the standard deviation is wise, as the mean value could be the same for very different distributions of response times. If the deviation is low compared to the mean value, it indicates that your measures are not dispersed (mostly close to the mean) and that the mean value is significant.
Min - The lowest elapsed time (response time) for the samples with the same label.
Max - The longest elapsed time (response time) for the samples with the same label.
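To see why (Min + Max) / 2 differs from the Average, here is a small Python sketch with made-up response times: one slow outlier pulls the mid-range far away from the arithmetic mean, and the standard deviation exposes the dispersion:

```python
import statistics

# Made-up response times (ms) for one label; one slow outlier.
response_times = [120, 130, 125, 900, 135]

mean = statistics.fmean(response_times)        # 282.0 -> JMeter's "Average"
sigma = statistics.pstdev(response_times)      # ~309.0 -> "Std. Dev."
midrange = (min(response_times) + max(response_times)) / 2  # 510.0

print(f"mean = {mean:.1f} ms, std dev = {sigma:.1f} ms, (min+max)/2 = {midrange:.1f} ms")
```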
For further detail, you can go through the JMeter documentation and this blog. It will really help you understand the concepts.