Multiple samples
I have been able to calculate the throughput with a single sample accurately, but with multiple samples I am not able to get the value that is shown in the picture.
According to the JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So you need to measure your test duration and then divide the number of requests by the test duration.
The exact implementation of all the calculated metrics lives in the Calculator class; the function which returns the throughput looks like:
private double getRatePerSecond(long value) {
    if (elapsedTime > 0) {
        return value / ((double) elapsedTime / 1000); // 1000 = millisecs/sec
    }
    return 0.0;
}
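For illustration, here is the same formula applied outside JMeter; the timestamps and request count below are made up:

public class ThroughputExample {
    public static void main(String[] args) {
        long firstSampleStartMs = 0;    // start of the first sample
        long lastSampleEndMs = 50_000;  // end of the last sample (gaps included)
        int numberOfRequests = 100;

        double elapsedSeconds = (lastSampleEndMs - firstSampleStartMs) / 1000.0;
        double throughput = numberOfRequests / elapsedSeconds;
        System.out.println(throughput + " requests/second"); // prints 2.0
    }
}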
You might also be interested in the What is the Relationship Between Users and Hits Per Second? article.
Consider a small monitoring device which displays the average temperature every 10 seconds.
timestamp value
20190304000000 62.7
20190304000010 62.5
20190304000020 62.8
....
....
How can I calculate and update the average without increasing the memory footprint? That is, storing the whole data set (persistently or in memory) is not possible.
In addition to the other answers, you might want to use an IIR filter to get an exponential moving average: a filter that applies weighting factors which decrease exponentially, so the latest values have more impact than older ones:
newAverage = OldAverage * (1-alpha) + NewValue * alpha
where alpha is a small value like 0.1 that is related to the decay time constant.
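For illustration, a minimal sketch of this filter (the readings and names are made up):

public class EmaExample {
    public static void main(String[] args) {
        double alpha = 0.1;     // weighting factor, as above
        double average = 62.7;  // seed with the first reading
        double[] readings = {62.5, 62.8, 62.6};
        for (double value : readings) {
            average = average * (1 - alpha) + value * alpha; // IIR update
            System.out.println("EMA: " + average);
        }
    }
}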
Keep a total sum, and a count of the number of temperatures recorded. Then divide the sum by the count every time you report the answer, to avoid compounding floating point errors.
from itertools import count

temperature_sum = 0
for temperature_count in count(1):
    temperature_sum += read_from_sensor()
    print("Average: {}".format(temperature_sum / temperature_count))
We will need two variables:
#include <iostream>

float ReadFromSensor(); // sensor read, not included

int main() {
    int count = 0;      // number of readings so far
    float average = 0;  // running average

    while (true) {
        float temperature = ReadFromSensor();
        count++;
        // Fold the new reading into the running average.
        average = (average * (count - 1) + temperature) / count;
        std::cout << "average: " << average << std::endl;
    }
}
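An equivalent way to do the same bookkeeping is to update the average in place, which avoids both a growing sum and the average * count product; a minimal Java sketch with made-up readings:

public class RunningMean {
    public static void main(String[] args) {
        long count = 0;
        double average = 0.0;
        double[] readings = {62.7, 62.5, 62.8}; // made-up sensor values
        for (double value : readings) {
            count++;
            average += (value - average) / count; // same mean, no growing sum
            System.out.println("Average: " + average);
        }
    }
}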
I am trying to implement an algorithm for backgammon similar to td-gammon as described here.
As described in the paper, the initial version of td-gammon used only the raw board encoding in the feature space which created a good playing agent, but to get a world-class agent you need to add some pre-computed features associated with good play. One of the most important features turns out to be the blot exposure.
Blot exposure is defined here as:
For a given blot, the number of rolls out of 36 which would allow the opponent to hit the blot. The total blot exposure is the number of rolls out of 36 which would allow the opponent to hit any blot. Blot exposure depends on: (a) the locations of all enemy men in front of the blot; (b) the number and location of blocking points between the blot and the enemy men and (c) the number of enemy men on the bar, and the rolls which allow them to re-enter the board, since men on the bar must re-enter before blots can be hit.
I have tried various approaches to compute this feature efficiently but my computation is still too slow and I am not sure how to speed it up.
Keep in mind that the td-gammon approach evaluates every possible board position for a given dice roll, so on each turn, for every player's dice roll, you would need to calculate this feature for every possible board position.
Some rough numbers: assuming there are approximately 30 board positions per turn and an average game lasts 50 turns, running 1,000,000 game simulations takes (x * 30 * 50 * 1,000,000) / (1000 * 60 * 60 * 24) days, where x is the number of milliseconds to compute the feature. Putting x = 0.7 we get approximately 12 days to simulate 1,000,000 games.
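Spelling that arithmetic out as a quick sanity check:

public class TimingEstimate {
    public static void main(String[] args) {
        double msPerFeature = 0.7;                            // x in the formula
        double totalMs = msPerFeature * 30 * 50 * 1_000_000;  // positions * turns * games
        double days = totalMs / (1000.0 * 60 * 60 * 24);      // ms -> days
        System.out.println(days + " days");                   // ≈ 12.15
    }
}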
I don't really know if that's reasonable timing but I feel there must be a significantly faster approach.
So here's what I've tried:
Approach 1 (By dice roll)
For every one of the 21 possible dice rolls, recursively check to see if a hit occurs. Here's the main workhorse for this procedure:
private bool HitBlot(int[] dieValues, Checker.Color checkerColor, ref int depth)
{
    Moves legalMovesOfDie = new Moves();
    if (depth < dieValues.Length)
    {
        legalMovesOfDie = LegalMovesOfDie(dieValues[depth], checkerColor);
    }
    if (depth == dieValues.Length || legalMovesOfDie.Count == 0)
    {
        return false;
    }
    bool hitBlot = false;
    foreach (Move m in legalMovesOfDie.List)
    {
        if (m.HitChecker == true)
        {
            return true;
        }
        board.ApplyMove(m);
        depth++;
        hitBlot = HitBlot(dieValues, checkerColor, ref depth);
        board.UnapplyMove(m);
        depth--;
        if (hitBlot == true)
        {
            break;
        }
    }
    return hitBlot;
}
What this function does is take as input an array of dice values (e.g. if the player rolls 1,1 the array would be [1,1,1,1]). The function then recursively checks to see if there is a hit and if so exits with true. The function LegalMovesOfDie computes the legal moves for that particular die value.
Approach 2 (By blot)
With this approach I first find all the blots, and then for each blot I loop through every possible dice value and see if a hit occurs. The function is optimized so that once a dice value registers a hit I don't use it again for the next blot. It is also optimized to only consider moves that are in front of the blot. My code:
public int BlotExposure2(Checker.Color checkerColor)
{
    if (DegreeOfContact() == 0 || CountBlots(checkerColor) == 0)
    {
        return 0;
    }
    List<Dice> unusedDice = Dice.GetAllDice();
    List<int> blotPositions = BlotPositions(checkerColor);
    int count = 0;
    for (int i = 0; i < blotPositions.Count; i++)
    {
        int blotPosition = blotPositions[i];
        for (int j = unusedDice.Count - 1; j >= 0; j--)
        {
            Dice dice = unusedDice[j];
            Transitions transitions = new Transitions(this, dice);
            bool hitBlot = transitions.HitBlot2(checkerColor, blotPosition);
            if (hitBlot == true)
            {
                unusedDice.Remove(dice);
                if (dice.ValuesEqual())
                {
                    count = count + 1;
                }
                else
                {
                    count = count + 2;
                }
            }
        }
    }
    return count;
}
The method transitions.HitBlot2 takes a blotPosition parameter, which ensures that the only moves considered are those in front of the blot.
Both of these implementations were very slow and when I used a profiler I discovered that the recursion was the cause, so I then tried refactoring these as follows:
To use for loops instead of recursion (ugly code, but it's much faster)
To use Parallel.ForEach so that instead of checking one dice value at a time I check them in parallel (a rough sketch of this idea is below)
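The parallel version in the post uses C#'s Parallel.ForEach; purely as an illustration of checking the 21 distinct dice values in parallel, here is a rough Java sketch, where hitsAnyBlot is a hypothetical stand-in for the real hit test:

import java.util.ArrayList;
import java.util.List;

public class ParallelRolls {
    public static void main(String[] args) {
        // Enumerate the 21 distinct rolls (die1 <= die2).
        List<int[]> rolls = new ArrayList<>();
        for (int a = 1; a <= 6; a++) {
            for (int b = a; b <= 6; b++) {
                rolls.add(new int[] { a, b });
            }
        }

        // Doubles account for 1 of the 36 permutations, non-doubles for 2.
        int exposure = rolls.parallelStream()
                .mapToInt(r -> hitsAnyBlot(r[0], r[1]) ? (r[0] == r[1] ? 1 : 2) : 0)
                .sum();
        System.out.println("Blot exposure: " + exposure + "/36");
    }

    // Hypothetical stand-in for the poster's HitBlot / HitBlot2 logic.
    static boolean hitsAnyBlot(int die1, int die2) {
        return (die1 + die2) % 5 == 0;
    }
}

Note that the real board state is mutated in place by ApplyMove/UnapplyMove, so each parallel worker would need its own copy of the board.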
Here are the average timing results of my runs for 50,000 computations of the feature (note the timings for each approach were measured on the same data):
Approach 1 using recursion: 2.28 ms per computation
Approach 2 using recursion: 1.1 ms per computation
Approach 1 using for loops: 1.02 ms per computation
Approach 2 using for loops: 0.57 ms per computation
Approach 1 using Parallel.ForEach: 0.75 ms per computation
Approach 2 using Parallel.ForEach: 0.75 ms per computation
I've found the timings to be quite volatile (maybe dependent on the random initialization of the neural network weights), but around 0.7 ms seems achievable, which if you recall leads to 12 days of training for 1,000,000 games.
My questions are: does anyone know if this timing is reasonable? Is there a faster algorithm I am not aware of that could reduce the training time?
One last piece of info: I'm running on a fairly new machine, an Intel Core(TM) i7-5500U CPU @ 2.40 GHz.
Any more info required please let me know and I will provide.
Thanks,
Ofir
Yes, calculating these features makes for really hairy code. Look at the GNU Backgammon code: find eval.c and look at lines 1008 to 1267. Yes, it's 260 lines of code. That code calculates the number of rolls that hit at least one checker, and also the number of rolls that hit at least two checkers. As you can see, the code is hairy.
If you find a better way to calculate this, please post your results. To improve I think you have to look at the board representation. Can you represent the board in a different way that makes this calculation faster?
I have a problem figuring out how the Total value for throughput in the Aggregate Report is calculated.
Does anybody know the algorithm for this value?
Based on the JMeter documentation, for a single call it is calculated as: number of executions / time of execution.
The problem is that the Total value for throughput isn't the total number of executions divided by the total time of the test. It is calculated in a smarter way, and I am looking for the algorithm of this smarter way :).
As per The Load Reports guide:
Throughput is measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0. When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is saved as 0.5.
As per the JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
As per the Calculator class from JMeter's source:
/**
 * Throughput in bytes / second
 *
 * @return throughput in bytes/second
 */
public double getBytesPerSecond() {
    if (elapsedTime > 0) {
        return bytes / ((double) elapsedTime / 1000); // 1000 = millisecs/sec
    }
    return 0.0;
}
/**
 * Throughput in kilobytes / second
 *
 * @return Throughput in kilobytes / second
 */
public double getKBPerSecond() {
    return getBytesPerSecond() / 1024; // 1024=bytes per kb
}
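So, for the Total row, the "smart way" is simply: the total number of samples divided by the span from the start of the earliest sample to the end of the latest one, idle gaps included. A minimal sketch with made-up timestamps:

public class TotalThroughput {
    public static void main(String[] args) {
        // Made-up {start, end} timestamps in milliseconds for three samples.
        long[][] samples = { {0, 200}, {100, 350}, {5_000, 5_400} };

        long firstStart = Long.MAX_VALUE, lastEnd = Long.MIN_VALUE;
        for (long[] s : samples) {
            firstStart = Math.min(firstStart, s[0]);
            lastEnd = Math.max(lastEnd, s[1]);
        }
        // The idle gap between samples counts, per the glossary.
        double throughput = samples.length / ((lastEnd - firstStart) / 1000.0);
        System.out.println(throughput + " requests/second"); // 3 / 5.4 s ≈ 0.56
    }
}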
I was trying to generate a random number in a CAPL program (similar to the C language) using timers.
Say I have a timer x and I start it:
/**** Timer start ****/
on start
{
    setTimer(x, 20000); // setting the timer for 20 secs
}
Now I need a random number only between 300 ms and 20 s, with a resolution of 500 ms.
CAPL has an inbuilt function called random() to do this.
I used it like:
int random(int x);
Now how can I make sure that I get a random value only with a resolution of 500 ms?
Any suggestions?
How about
y = random(40);
TestWaitForTimeout(300 + y * 500);
y gets a random value between 0 and 39, corresponding to 0-19.5 seconds with 500 ms resolution. Then you add 300 ms to the total timeout. The resulting timeout will be between 300 ms and 19.8 s, in 500 ms steps.
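A quick check of the endpoints of that mapping (plain Java arithmetic, not CAPL):

public class TimeoutCheck {
    public static void main(String[] args) {
        // timeout = 300 + y * 500 for y in [0, 39], as in the answer above
        System.out.println("min = " + (300 + 0 * 500) + " ms");  // 300 ms
        System.out.println("max = " + (300 + 39 * 500) + " ms"); // 19800 ms
    }
}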
I was able to generate random numbers by writing a test function as below.
The random function generates a random number between 0 and n-1.
As far as resolution is concerned, the library function random() doesn't allow varying the resolution.
testfunction Random_No ()
{
    dword y;
    y = random(20000); // random wait between 0 and 19999 ms
    TestWaitForTimeout(y);
}
I am programming in Java and I have come across a problem I could use some help with. Basically, I need the user to enter how many times they expect a certain event to happen in a certain amount of time. The event takes a certain amount of time to complete as well. With all that said, I need to use a random number generator to decide whether or not the event should happen, based on the expected value.
Here's an example. Say the event takes 2 seconds to complete. The user says they want 100 seconds total and they expect the event to happen 25 times. Right now this is what I have, where units is the total units of time and expectedLanding is how many times they would like the event to take place:
double isLandingProb = units / expectedLanding;
double isLanding = isLandingProb * random.nextDouble();
if (isLanding >= isLandingProb / 2) {
    // do event here
}
This solution isn't working, and I'm having trouble thinking of something that would work.
Try this:
double isLandingProb = someProbability;
double isLanding = random.nextDouble();
if (isLanding <= isLandingProb) {
    // do event here
}
For example, if your probability is .25 (1 out of 4), and nextDouble returns a random number between 0 and 1, then your nextDouble needs to be less than (or equal to) .25 to achieve a landing.
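A quick simulation (trial count chosen arbitrarily) shows the event fires at roughly the desired rate:

import java.util.Random;

public class LandingSim {
    public static void main(String[] args) {
        Random random = new Random();
        double isLandingProb = 0.25; // 1 out of 4, as in the example above
        int trials = 1_000_000;
        int hits = 0;
        for (int i = 0; i < trials; i++) {
            if (random.nextDouble() <= isLandingProb) {
                hits++;
            }
        }
        System.out.println((double) hits / trials); // ≈ 0.25
    }
}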
Given an event that takes x seconds to run, but that you want to run on average once every y seconds, it needs to execute with probability x/y each time an x-second slot comes around. The expected number of seconds the event spends running in any y-second window is then x, i.e. one event on average.
int totalSeconds = 100; // e.g. the user wants 100 seconds total...
int totalTimes = 25;    // ...and expects 25 events
double eventTime = 2.0; // each event takes 2 seconds

double secondsPerEvent = 1.0d * totalSeconds / totalTimes;
if (eventTime > secondsPerEvent) throw new Exception("Impossible to satisfy");
double eventProbability = eventTime / secondsPerEvent;
if (random.nextDouble() < eventProbability) {
    // do event
}