I am trying to measure the runtime of an MPI program. I am running it with different numbers of processes to compare the runtimes later.
After calling MPI_Init() I call MPI_Wtime() and save its value in a variable.
Before calling MPI_Finalize() I call MPI_Wtime() again and save its value in another variable.
After MPI_Finalize() I subtract the two values.
My program looks like this:
#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[]){
    double start, end;
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    start = MPI_Wtime();
    //Do something.......
    end = MPI_Wtime();
    MPI_Finalize();
    if(rank==0){
        printf("Runtime=%d\n", end-start);
    }
    return 0;
}
This works. The problem is just the results I get. Unfortunately they are not in seconds. Running the program with one process prints a number like 41291472, and running it with 64 processes prints 1978567624.
These values are not seconds! How can I convert them to seconds?
This should do the trick:
printf("Runtime=%f\n", end-start);
Your start and end variables are doubles, so the result of end - start is also a double. You need to use the correct format specifier in printf, which is %f for floating-point values. Using %d interprets the argument as an integer, so you get garbage.
Mort's answer is correct. I only wanted to add that MPI_Wtime does not return an integer count of elapsed seconds. It returns a floating point value -- you know this because you declared start and end as double, but maybe you forgot when you set up your printf specifier.
I'm trying to understand what should be the correct behavior of C++11
std::get_time() when the input data is "shorter" than expected by the format
string. For example, what the following program should print:
#include <ctime>
#include <iomanip>
#include <sstream>
#include <iostream>
int main (int argc, char* argv[])
{
using namespace std;
tm t {};
istringstream is ("2016");
is >> get_time (&t, "%Y %d");
cout << "eof: " << is.eof () << endl
<< "fail: " << is.fail () << endl;
}
Note that get_time()
behavior is described in terms of
std::time_get<CharT,InputIt>::get().
Based on the latter (see 1c paragraph) I would expect both eofbit and failbit
to be set and so the program to print:
eof: 1
fail: 1
However, for all the major Standard C++ Library implementations (libstdc++
10.2.1, libc++ 11.0.0, and MSVC 16.8) it prints:
eof: 1
fail: 0
Interestingly, that for MSVC before 16.8 it prints:
eof: 1
fail: 1
But the commit "std::get_time should not fail when format is longer than the stream"
suggests that this was fixed deliberately.
Could someone clarify if (and why) the mentioned standard libraries behave correctly and, if that's the case, how it is supposed to detect that the format string was not fully used.
I cannot explain with 100% accuracy, but I can try to explain why the function behaves as observed.
I suspect that the eofbit is not set in your case, which is 1c, because case 1b takes precedence:
b) There was a parsing error (err != std::ios_base::goodbit)
According to the eofbit section of https://en.cppreference.com/w/cpp/io/ios_base/iostate , one of the situations in which the eofbit is set is when
The std::get_time I/O manipulator and any of the std::time_get parsing functions: time_get::get, time_get::get_time, time_get::get_date etc., if the end of the stream is reached before the last character needed to parse the expected date/time value was processed.
The same source for the failbit says:
The time input manipulator std::get_time (technically, time_get::get it calls), if the input cannot be unambiguously parsed as a time value according to the given format string.
So my guess is that when the input is 2000, get tries to read it using operator>>(std::string&), hits the end of the stream, and sets the eofbit. This satisfies condition 1b, so condition 1c cannot be applied.
If the function expects a year and the input is shorter than 4 digits, e.g. 200, or if it contains a space after the year, e.g. "2000 ", or contains more than 4 digits, e.g. 20001, the function sets failbit. However, if the input is a 4-digit number starting with zeros, e.g. 0005, the function returns eofbit == 1, failbit == 0. This is in accordance with the specification of the %Y format specifier:
parses full year as a 4 digit decimal number, leading zeroes permitted but not required
So I hope this explains why condition 1c is sometimes not taken into account. We can detect that the format string has not been fully used in the usual way, by testing the good() member function. I believe telling the difference between the function setting failbit or not is of very little practical importance. I also believe the standard is imprecise here, but if we assume that the user is interested in the value of good(), this lack of precision has no practical relevance.
It is also possible that the value of failbit in the case you consider is implementation-defined: an implementation could try and read exactly 4 characters to satisfy the %Y format specifier, in which case the eofbit would not be set. But this is only my guess.
EDIT
Look at this modification of your program:
int main (int argc, char* argv[])
{
using namespace std;
tm t {};
istringstream is ("2016");
// is >> get_time (&t, "%Y %d");
std::string s;
is >> s;
cout << "eof: " << is.eof () << endl
<< "fail: " << is.fail () << endl;
}
I replaced get_time with reading into a std::string, but the behavior did not change! The string was read to its end, so failbit could not be set; however, the read hit the end of the stream, so the eofbit was set!
eof: 1
fail: 0
What I'm saying is that a similar phenomenon can take place inside get_time and then the stream's state is propagated up to the result of get_time.
Ok, it seems that all the mentioned implementations behave according to the
C++11 standard.
Here is my understanding of what happens in the above program.
std::get_time() does all the preparations and calls std::time_get<CharT,InputIt>::get().
Since the first format string character is '%', the get() function calls
do_get() at the first iteration of the parsing loop.
do_get() reads "2016" while processing the %Y specifier and fills the
respective field in the time object. Besides that, it sets eofbit according to
the standard, since "the end of the input stream is reached after reading a
character". This makes the get() function bail out of the loop after the
do_get() call due to condition 1b (see get() for details), with only eofbit set for the stream. Note
that the format part that follows %Y is fully ignored.
But if we, for example, change the input stream from "2016" to "2016 " (append the space character), then do_get() doesn't set eofbit, get() reads/matches the spaces in the stream and format after the do_get() call, and then bails out due to 1c condition with both eofbit and failbit set.
Generally reading with std::get_time() seems to succeed (failbit is not set)
when either format string is fully matched against the stream (which may still
have some data in it) or if the end of the stream is reached after a
conversion specifier was successfully applied (with the rest of the format
string ignored).
I am attempting to loop a command in VxWorks at around 6 Hz. I cannot compile code for the target in question, so I have to use existing VxWorks shell commands.
I have tried:
repeat(1000,functionX,param1,param2,param3)
This works well at repeating the command 1000 times but won't give me the frequency I require.
As a compromise I looked at:
period()
as this is capable of giving me 1 Hz calls on the function (which might be acceptable), however I cannot work out how to enter the required parameters into functionX.
I have tried both:
period(1,functionX,param1,param2,param3)
and
period(1,functionX(param1,param2,param3))
with no luck.
Any ideas on how to achieve the 6 Hz rate for functionX would be great, but if that is not possible without compiling some code then I will settle for a way of getting the period command to work with parameters in the function I am calling.
Repeat and period have the same signatures, but the interpretation of the first parameter is different. So if you can call repeat successfully then you can also call period successfully.
int period
(
int secs, /* period in seconds */
FUNCPTR func, /* function to call repeatedly */
int arg1, /* first of eight args to pass to func */
int arg2,
int arg3,
int arg4,
int arg5,
int arg6,
int arg7,
int arg8
)
int repeat
(
int n, /* no. of times to call func (0=forever) */
FUNCPTR func, /* function to call repeatedly */
int arg1, /* first of eight args to pass to func */
int arg2,
int arg3,
int arg4,
int arg5,
int arg6,
int arg7,
int arg8
)
For repeat the first parameter is the number of times to call the function, and for period the first parameter is the period in seconds.
So period is really too slow for you, and repeat is too fast, though you could use tickGet to make it work. What you really want is a VxWorks watchdog. Look up wdCreate() and wdStart() in your VxWorks docs, but be aware that your watchdog handler will be called from an ISR, so standard ISR precautions apply (i.e. you will need a task to do the real work, pending on a msgQ or a semaphore that your watchdog handler gives).
Actually, now that I think about it, I believe repeat and period also call the handler from an ISR, so technically the same restrictions apply there as well.
I am using Alien for Lua to reference the WaitForSingleObject function in the Windows Kernel32.dll.
I am pretty new to Windows programming, so the question I have is about the following #defined variables referenced by the WaitForSingleObject documentation:
If dwMilliseconds is INFINITE, the function will return only when the object is signaled.
What is the INFINITE value? I would naturally assume it to be -1, but I cannot find this to be documented anywhere.
Also, the documentation's table of return values gives them in hexadecimal, but I am confused as to why they have an L character after the last digit. Could this be something as simple as marking it as a long literal?
The reason I ask is because Lua uses a Number data type, so I am not sure if I should be checking for this return value via Hex digits (0-F) or decimal digits (0-9)?
The thought crossed my mind to just open a C++ application and print out these values, so I did just that:
#include <windows.h>
#include <process.h>
#include <iostream>
int main()
{
std::cout << INFINITE << "\n";
std::cout << WAIT_OBJECT_0 << "\n";
std::cout << WAIT_ABANDONED << "\n";
std::cout << WAIT_TIMEOUT << "\n";
std::cout << WAIT_FAILED << "\n";
system("pause");
return 0;
}
The final Lua results based off my findings is:
local INFINITE = 4294967295
local WAIT_OBJECT_0 = 0
local WAIT_ABANDONED = 128
local WAIT_TIMEOUT = 258
local WAIT_FAILED = 4294967295
I tried to Google for the same information. Eventually, I found this Q&A.
I found two sources with: #define INFINITE 0xFFFFFFFF
https://github.com/tpn/winsdk-10/blob/master/Include/10.0.10240.0/um/WinBase.h#L704
https://github.com/Alexpux/mingw-w64/blob/master/mingw-w64-tools/widl/include/winbase.h#L365
For function WaitForSingleObject, parameter dwMilliseconds has type DWORD.
From here: https://learn.microsoft.com/en-us/windows/win32/winprog/windows-data-types
I can see: DWORD A 32-bit unsigned integer.
Thus, #RemyLebeau's comment above looks reasonable & valid:
`4294967295` is the same as `-1` when interpreted as a signed integer type instead.
In short: ((DWORD) -1) == INFINITE
A last comment: ironically, this "infinite" feels eerily similar to the Boeing 787 problem where the plane needed to be rebooted once every 51 days.
I can't find any clear indication of if/when the 64-bit value returned by QueryPerformanceCounter() gets reset, or overflows and resets back to zero. Hopefully it never overflows because the 64 bits gives space for decades worth of counting at gigahertz rates. However... is there anything other than a computer restart that will reset it?
Empirically, QPC is reset at system startup.
Note that you should not depend on this behavior, since Microsoft do not explicitly state what the "zero point" is for QPC, merely that it is a monotonically increasing value (mod 2^64) that can be used for high precision timing.
Hence they are quite within their rights to modify its behavior at any time. They could, for example, make it return values matching the FILETIME values produced by a call to GetSystemTimeAsFileTime(), with the same resolution, a 100 ns tick rate. Under these circumstances it would never reset. At least not in your or my lifetime.
That said, the following program when run on Windows 10 [Version 6.3.16299] produces pairs of identical values that are the system uptime in seconds.
#include <windows.h>
#include <mmsystem.h> /* timeGetTime() */
#pragma comment(lib, "winmm.lib")
#include <iostream>
int main()
{
LARGE_INTEGER performanceCount;
LARGE_INTEGER performanceFrequency;
QueryPerformanceFrequency(&performanceFrequency);
for (;;)
{
QueryPerformanceCounter(&performanceCount);
DWORD const systemTicks = timeGetTime();
DWORD const systemSeconds = systemTicks / 1000;
__int64 const performanceSeconds = performanceCount.QuadPart / performanceFrequency.QuadPart;
std::cout << systemSeconds << " " << performanceSeconds << std::endl;
Sleep(1000);
}
return 0;
}
Standard disclaimers apply, your actual mileage may vary, etc. etc. etc.
It seems that some Windows running inside VirtualBox may reset QueryPerformanceCounter every 20 minutes or so: see here.
QPC has become more reliable over the years, but for better portability a lower-resolution timer such as GetTickCount64 should be used.
I'm trying to get a simple piece of code I found on a website to work in VC++ 2010 on Windows Vista 64-bit:
#include "stdafx.h"
#include <windows.h>
int _tmain(int argc, _TCHAR* argv[])
{
DWORD dResult;
BOOL result;
char oldWallPaper[MAX_PATH];
result = SystemParametersInfo(SPI_GETDESKWALLPAPER, sizeof(oldWallPaper)-1, oldWallPaper, 0);
fprintf(stderr, "Current desktop background is %s\n", oldWallPaper);
return 0;
}
it does compile, but when I run it, I always get this error:
Run-Time Check Failure #2 - Stack around the variable 'oldWallPaper' was corrupted.
I'm not sure what is going wrong, but I noticed, that the value of oldWallPaper looks something like "C\0:\0\0U\0s\0e\0r\0s[...]" -- I'm wondering where all the \0s come from.
A friend of mine compiled it on windows xp 32 (also VC++ 2010) and is able to run it without problems
any clues/hints/opinions?
thanks
The documentation isn't very clear. The returned string is a WCHAR string, two bytes per character rather than one, so you need to allocate twice as much space or you get a buffer overrun. Try:
BOOL result;
WCHAR oldWallPaper[MAX_PATH + 1];
result = SystemParametersInfo(SPI_GETDESKWALLPAPER,
    MAX_PATH /* buffer size in characters, not _tcslen of an uninitialized buffer */,
    oldWallPaper, 0);
See also:
http://msdn.microsoft.com/en-us/library/ms724947(VS.85).aspx
http://msdn.microsoft.com/en-us/library/ms235631(VS.80).aspx (string conversion)
Every Windows function has 2 versions:
SystemParametersInfoA() // Ascii
SystemParametersInfoW() // Unicode
The version ending in W is the wide-character (i.e. Unicode) version of the function. All the \0's you are seeing are because every character you're getting back is a wide character: 16 bits (2 bytes) per character, and for ASCII text the second byte happens to be 0. So you need to store the result in a wchar_t array, and use wprintf instead of printf.
wchar_t oldWallPaper[MAX_PATH];
result = SystemParametersInfo(SPI_GETDESKWALLPAPER, MAX_PATH-1, oldWallPaper, 0);
wprintf( L"Current desktop background is %s\n", oldWallPaper );
So you can use the A version SystemParametersInfoA() if you are hell-bent on not using Unicode. For the record you should always try to use Unicode, however.
Usually SystemParametersInfo() is a macro that evaluates to the W version, if UNICODE is defined on your system.