How to interpret sinfo CPU load (%O)?

sinfo --format "%O" gives the CPU load of the nodes.
Is this an average over a specific time period?
And how is this value related to the load averages (1m, 5m, 15m) reported by the uptime command?
Thanks

Yes, it returns the 5-minute load average.
SLURM uses sysinfo() to measure the CPU load (I am using SLURM 15.08.5).
In the SLURM source code, the following lines compute the CPU load value:
float shift_float = (float) (1 << SI_LOAD_SHIFT);
if (sysinfo(&info) < 0) {
    *cpu_load = 0;
    return errno;
}
*cpu_load = (info.loads[1] / shift_float) * 100.0;
From the sysinfo man page:
unsigned long loads[3]; /* 1, 5, and 15 minute load averages */
info.loads[1] is the 5-minute average; sysinfo() returns the same kernel load averages that /proc/loadavg exposes.
SI_LOAD_SHIFT is used because the kernel stores these load averages as fixed-point integers scaled by 2^SI_LOAD_SHIFT, so dividing by (1 << SI_LOAD_SHIFT) recovers the familiar floating-point values.
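For reference, here is a minimal standalone sketch of the same conversion (not SLURM code; it assumes a Linux box and falls back to 16 for SI_LOAD_SHIFT, its usual value, if the headers do not expose it):

#include <stdio.h>
#include <sys/sysinfo.h>

#ifndef SI_LOAD_SHIFT
#define SI_LOAD_SHIFT 16   /* fixed-point shift the kernel applies to loads[] */
#endif

int main(void)
{
    struct sysinfo info;
    if (sysinfo(&info) < 0) {
        perror("sysinfo");
        return 1;
    }
    float shift_float = (float) (1 << SI_LOAD_SHIFT);
    printf("1 min : %.2f\n", info.loads[0] / shift_float);
    printf("5 min : %.2f\n", info.loads[1] / shift_float);
    printf("15 min: %.2f\n", info.loads[2] / shift_float);
    /* The value SLURM stores, i.e. the 5-minute average scaled by 100: */
    printf("CPU_LOAD: %.0f\n", (info.loads[1] / shift_float) * 100.0);
    return 0;
}

Compiling and running this next to uptime should show the same three figures, up to rounding.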

Related

Convert TMediaPlayer->Duration to min:sec (FMX)

I'm working with the TMediaPlayer1 control in an FMX app using C++Builder 10.2 (Version 25.0.29899.2631). The code below runs fine in Win32 and gives the expected result after loading an mp3 file that is 35 minutes, 16 seconds long.
When I run this same code targeting iOS, I get the following error:
[bcciosarm64 Error] Unit1.cpp(337): use of overloaded operator '/' is ambiguous (with operand types 'Fmx::Media::TMediaTime' and 'int')
Here is my code that takes TMediaPlayer1->Duration and converts it to min:sec:
UnicodeString S = System::Ioutils::TPath::Combine(System::Ioutils::TPath::GetDocumentsPath(),"43506.mp3");
if (FileExists(S)) {
MediaPlayer1->FileName = S;
int sec = MediaPlayer1->Duration / 10000000; // <-- this is the problem line
int min = sec / 60;
sec = sec - (60 * min);
lblEndTime->Text = IntToStr(min) + ":" + IntToStr(sec);
}
How should I be doing that division?
UPDATE 1: I fumbled around and figured out how to see the values with the code below. When I run on Win32 I get 21169987500 for the Duration (35 min, 16 seconds) and 10000000 for MediaTimeScale, both correct. When I run on iOS I get 0 for Duration and 10000000 for MediaTimeScale. But if I start the audio playing (e.g. MediaPlayer1->Play();) first and THEN run those two ShowMessage calls, I get the correct result for Duration.
MediaPlayer1->FileName = S; // load the mp3
ShowMessage(IntToStr((__int64) Form1->MediaPlayer1->Media->Duration));
ShowMessage(IntToStr((__int64) MediaTimeScale));
It looks like the Duration does not get set on iOS until the audio actually starts playing. I tried a 5-second delay after setting MediaPlayer1->FileName, but that doesn't work. I tried MediaPlayer1->Play(); followed by MediaPlayer1->Stop();, but that didn't work either.
Why isn't Duration set when the FileName is assigned? I'd like to show the Duration before the user ever starts playing the audio.
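For reference, a minimal sketch of the min:sec conversion built on the explicit __int64 cast already used in the ShowMessage calls above (it is only an assumption here that the cast also resolves the ambiguous-operator error on iOS; MediaTimeScale is the 10000000 ticks-per-second constant shown in the update):

// Sketch only: cast the TMediaTime to a plain __int64 first, so the
// division below is ordinary integer division rather than the
// ambiguous TMediaTime operator/ reported by the iOS compiler.
__int64 durationTicks = (__int64) MediaPlayer1->Duration;  // 100 ns ticks
__int64 totalSec = durationTicks / MediaTimeScale;         // MediaTimeScale == 10000000
int min = (int)(totalSec / 60);
int sec = (int)(totalSec % 60);
lblEndTime->Text = IntToStr(min) + ":" + IntToStr(sec);    // e.g. 21169987500 ticks -> "35:16"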

Whenever I run the JMeter test with fewer than 10 Thread Groups, "Throughput" always shows numbers in "Minutes"

When I execute a test in JMeter with fewer than 10 Thread Groups, the Throughput column in the Summary Report shows the result in minutes.
Can anyone please help me?
As per the RateRenderer class source:
String unit = "sec";
if (rate < 1.0) {
    rate *= 60.0;
    unit = "min";
}
if (rate < 1.0) {
    rate *= 60.0;
    unit = "hour";
}
setText(formatter.format(rate) + "/" + unit);
So:
If the throughput is 1 or more, the time unit is "sec" (seconds)
If the throughput is less than 1, it is multiplied by 60 and the time unit is set to "min" (minutes)
If, after converting to minutes, it is still less than 1, it is multiplied by 60 again and the time unit is set to "hour"
If you need the throughput in hits per second from a per-minute value, just divide the value by 60.
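For example, a raw rate of 0.5 requests/second is below 1, so the Summary Report displays it as 30.0/min; dividing 30 by 60 gives back the 0.5/sec figure.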
Other options are:
Patch the RateRenderer class and comment out the two "if" clauses above
Use an external 3rd-party tool like BM.Sense for JMeter results analysis

JMeter: Summary Report: Throughput

Is the total throughput shown in the last row of the Summary Report correct? I'm using JMeter 2.11.
I find it difficult to reproduce the displayed figure by hand.
I followed the formula (x/sec): number of requests / total response time required (in sec),
or 1 / average total response time (sec).
For example: 50 requests with an average response time of 2000 ms each gives throughput = 50/(50*2) = 0.5/sec.
But JMeter shows a value different from 0.5/sec (or 30/min).
Can someone help me here?
I also had a similar assumption, but this is the formula JMeter uses to calculate throughput:
endTime = lastSampleStartTime + lastSampleLoadTime
startTime = firstSampleStartTime
conversion = unit time conversion value
Throughput = Numrequests / ((endTime - startTime)*conversion)
(I got this a few months back from the answer below:)
Calculating throughput from Jmeter jtl log file
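To make the formula concrete, here is a small sketch (in C++; the Sample struct, the helper name, and the assumption that samples are ordered by start time are illustrative, not JMeter code). Note that the denominator is the wall-clock span from the first sample's start to the last sample's end, not the sum of the response times, so with concurrent threads the reported throughput is much higher than the 0.5/sec hand calculation above.

#include <vector>

struct Sample {
    long long startTimeMs;   // sample start timestamp (ms), as recorded in a .jtl file
    long long loadTimeMs;    // sample elapsed/load time (ms)
};

// Throughput = Numrequests / ((endTime - startTime) * conversion)
//   endTime   = last sample's start time + its load time
//   startTime = first sample's start time
//   conversion here turns milliseconds into seconds.
double throughputPerSec(const std::vector<Sample>& samples) {
    if (samples.empty()) return 0.0;
    long long startTime = samples.front().startTimeMs;
    long long endTime   = samples.back().startTimeMs + samples.back().loadTimeMs;
    double spanSec = (endTime - startTime) / 1000.0;
    return spanSec > 0.0 ? samples.size() / spanSec : 0.0;
}

For instance, if those 50 requests were issued from 10 parallel threads and the whole run spanned roughly 10 seconds of wall-clock time (numbers made up for illustration), the report would show about 50/10 = 5.0/sec rather than 0.5/sec.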

HyperTable: Loading data using Mutators Vs. LOAD DATA INFILE

I am starting a discussion which, I hope, will become the one place to compare loading data with mutators vs. loading from a flat file via 'LOAD DATA INFILE'.
I have been unable to get any real performance out of mutators (using batch sizes of 1000, 10000, 100K, et cetera).
My project involved loading close to 400 million rows of social media data into HyperTable to be used for real-time analytics. It took me close to 3 days to load just 1 million rows of data (code sample below). Each row is approximately 32 bytes. So, in order to avoid taking 2-3 weeks to load this much data, I prepared a flat file with the rows and used the LOAD DATA INFILE method instead. The performance gain was amazing: with this method, the loading rate was 368336 cells/sec.
See below for actual snapshots of the LOAD DATA INFILE runs:
hypertable> LOAD DATA INFILE "/data/tmp/users.dat" INTO TABLE users;
Loading 7,113,154,337 bytes of input data...
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Load complete.
Elapsed time: 508.07 s
Avg key size: 8.92 bytes
Total cells: 218976067
Throughput: 430998.80 cells/s
Resends: 2210404
hypertable> LOAD DATA INFILE "/data/tmp/graph.dat" INTO TABLE graph;
Loading 12,693,476,187 bytes of input data...
0% 10 20 30 40 50 60 70 80 90 100%
|----|----|----|----|----|----|----|----|----|----|
***************************************************
Load complete.
Elapsed time: 1189.71 s
Avg key size: 17.48 bytes
Total cells: 437952134
Throughput: 368118.13 cells/s
Resends: 1483209
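(For reference, the reported throughput is simply total cells divided by elapsed time, e.g. 437,952,134 cells / 1,189.71 s ≈ 368,117 cells/s for the graph table, which matches the printed 368,118.13 up to rounding of the elapsed time.)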
Why is the performance difference between the two methods so vast? What's the best way to improve mutator performance? Sample mutator code is below:
my $batch_size = 1000000; # or 1000 or 10000 make no substantial difference
my $ignore_unknown_cfs = 2;
my $ht = new Hypertable::ThriftClient($master, $port);
my $ns = $ht->namespace_open($namespace);
my $users_mutator = $ht->mutator_open($ns, 'users', $ignore_unknown_cfs, 10);
my $graph_mutator = $ht->mutator_open($ns, 'graph', $ignore_unknown_cfs, 10);
my $key = new Hypertable::ThriftGen::Key({ row => $row, column_family => $cf, column_qualifier => $cq });
my $cell = new Hypertable::ThriftGen::Cell({ key => $key, value => $val });
$ht->mutator_set_cell($users_mutator, $cell);   # one cell per call
$ht->mutator_flush($users_mutator);             # flushed after every insert
I would appreciate any input on this; I don't have a tremendous amount of HyperTable experience.
Thanks.
If it's taking three days to load one million rows, then you're probably calling flush() after every row insert, which is not the right thing to do. Before I describe how to fix that: your mutator_open() arguments aren't quite right. You don't need to specify ignore_unknown_cfs, and you should supply 0 for the flush_interval, something like this:
my $users_mutator = $ht->mutator_open($ns, 'users', 0, 0);
my $graph_mutator = $ht->mutator_open($ns, 'graph', 0, 0);
You should only call mutator_flush() if you would like to checkpoint how much of the input data has been consumed. A successful call to mutator_flush() means that all data that has been inserted on that mutator has durably made it into the database. If you're not checkpointing how much of the input data has been consumed, then there is no need to call mutator_flush(), since it will get flushed automatically when you close the mutator.
The next performance problem with your code that I see is that you're using mutator_set_cell(). You should use either mutator_set_cells() or mutator_set_cells_as_arrays() since each method call is a round-trip to the ThriftBroker, which is expensive. By using the mutator_set_cells_* methods, you amortize that round-trip over many cells. The mutator_set_cells_as_arrays() method can be more efficient for languages where object construction overhead is large in comparison to native datatypes (e.g. string). I'm not sure about Perl, but you might want to give that a try to see if it boosts performance.
Also, be sure to call mutator_close() when you're finished with the mutator.
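For illustration, a rough Perl sketch of the batched approach described above (the buffer size, the next_record() helper, and the assumption that mutator_set_cells() takes the mutator handle plus an array reference of cells are mine, not from the answer):

# Sketch: buffer cells and send them in batches so each ThriftBroker
# round-trip carries many cells instead of one.
my $users_mutator = $ht->mutator_open($ns, 'users', 0, 0);

my @buffer;
my $batch_size = 10000;                                # arbitrary; tune for your data

while (my ($row, $cf, $cq, $val) = next_record()) {    # next_record() is a stand-in for your input loop
    my $key  = new Hypertable::ThriftGen::Key({ row => $row, column_family => $cf, column_qualifier => $cq });
    my $cell = new Hypertable::ThriftGen::Cell({ key => $key, value => $val });
    push @buffer, $cell;
    if (@buffer >= $batch_size) {
        $ht->mutator_set_cells($users_mutator, \@buffer);
        @buffer = ();
    }
}
$ht->mutator_set_cells($users_mutator, \@buffer) if @buffer;
$ht->mutator_close($users_mutator);                    # remaining cells are flushed on close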

Getting surprising elapsed time in windows and linux

I have written a function which is platform independent and works nicely on Windows as well as Linux. I wanted to check the execution time of that function. I am using QueryPerformanceCounter to calculate the execution time on Windows and gettimeofday on Linux.
The problem is that on Windows the execution time is 60 milliseconds while on Linux it shows 4 ms. That's a huge difference between them. Can anybody suggest what might have gone wrong? Or if anybody knows some other APIs better than these for calculating elapsed time, please let me know.
Here is the code I have written using gettimeofday:
#include <sys/time.h>   // gettimeofday, timersub
#include <iostream>
using namespace std;

int main()
{
    timeval start_time;
    timeval end_time;
    gettimeofday(&start_time, NULL);
    function_invoke(........);
    gettimeofday(&end_time, NULL);
    timeval res;
    timersub(&end_time, &start_time, &res);   // timersub(a, b, res) computes a - b
    cout << "function_invoke took seconds = " << res.tv_sec << endl;
    cout << "function_invoke took microsec = " << res.tv_usec << endl;
    return 0;
}
OUTPUT :
function_invoke took seconds = 0
function_invoke took microsec = 4673 ( 4.673 milliseconds )
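As a side note, one portable alternative (not from the question, just a sketch) is std::chrono::steady_clock, which provides a monotonic clock on both Windows and Linux and avoids mixing two different timing APIs:

#include <chrono>
#include <iostream>

void function_invoke();   // the function under test, defined elsewhere (as in the snippet above)

int main()
{
    auto start = std::chrono::steady_clock::now();
    function_invoke();
    auto end = std::chrono::steady_clock::now();

    // duration_cast truncates to whole microseconds, mirroring the gettimeofday output above.
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
    std::cout << "function_invoke took microsec = " << us << std::endl;
    return 0;
}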
