Is there a way to determine the time taken from power on to Windows starting?

I would like to be able to tell how long it takes to get from power on to Windows starting.
Is there a way of determining this retrospectively (i.e. once Windows has started)?
Does the BIOS/CMOS hold a last boot time?
Would it be possible to tell from RDTSC how long the machine has been running and subtract the Windows boot time?

You might try BootTimer or BootRacer to see if either of them will do what you want.
I don't believe you can determine this after Windows has started. I'm not aware of any BIOS that stores the last boot time. But on any modern machine, if the time between power on and calling the OS boot loader (essentially the time it takes to run the POST routines) is longer than a few seconds, something is wrong.
Are you trying to do this programmatically, to get an accurate measure of how long the machine has been online and usable? The inaccuracy resulting from the few seconds that POST takes doesn't seem like it would make a significant difference. If you're timing for benchmarking or optimization purposes, either of these two utilities should work for you.

Get the time since power on from GetTickCount(). Then get the timestamp of a file Windows touches at boot (windows\bootstat.dat, for example). Code is below; on my machine it reports 16 seconds, which sounds about right.
#include <stdio.h>
#include <windows.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <time.h>

int main()
{
    struct __stat64 st;

    // bootstat.dat is touched by Windows early in boot, so its modification
    // time approximates the moment Windows started.
    _stat64("c:\\windows\\bootstat.dat", &st);

    // time(NULL) - GetTickCount()/1000 approximates the power-on time;
    // the difference is the time spent before Windows (POST, boot loader).
    printf("%lld\n", (long long)(st.st_mtime - (time(NULL) - GetTickCount() / 1000)));
    return 0;
}
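Note that GetTickCount() wraps around after roughly 49.7 days of uptime, so on a long-running machine the subtraction can go wrong. On Vista and later you could use GetTickCount64() instead; a minimal sketch of that variation:
#include <stdio.h>
#include <windows.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <time.h>

int main()
{
    struct __stat64 st;
    _stat64("c:\\windows\\bootstat.dat", &st);

    // GetTickCount64() (Vista and later) does not wrap after 49.7 days.
    long long powerOnTime = (long long)time(NULL) - (long long)(GetTickCount64() / 1000);
    printf("%lld\n", (long long)st.st_mtime - powerOnTime);
    return 0;
}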

Related

Test if the program uses MPI (distributed) correctly?

How do I check that a program is using MPI when it runs? Specifically, how can I verify the program is running on multiple processors? Also, how can I figure out if my program is correctly running across multiple nodes?
I am assuming you're trying to figure out which processor/host each MPI process is running on.
You can use the MPI_Get_processor_name function to print the processor name.
Here is what your code will look like.
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, max_len;
    char processorname[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Returns the name of the host/node this process is running on. */
    MPI_Get_processor_name(processorname, &max_len);

    printf("Hello world! I am process number: %d on processor %s\n", rank, processorname);

    MPI_Finalize();
    return 0;
}
To compile the program, use mpicc -o hello_world hello_world.c.
To run it, use mpirun -np 4 -f machinefile ./hello_world.
This will run the program as 4 processes on the processors listed in your machinefile.
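For reference, a machinefile is just a plain-text list of the hosts to launch processes on, one per line (the hostnames below are placeholders, and the exact format can vary slightly between MPI implementations):
node01
node02
node03
node04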
You didn't tell us what you are actually looking for; your question is unclear and ambiguous, and it would be great if you could improve it. That being said, I guess you would like to know whether your processes are actually executed by distinct CPU cores.
First of all, Pooja Nilangekar explained a method to verify the distribution across a network. Within a single node, it most likely depends on the system you are running on. On Linux, you could for example make use of the /proc filesystem and check the status of the current process in /proc/self/. This pseudo-filesystem offers a file called stat, which contains a processor field showing the CPU id this process was last run on. You might also check /proc/self/status for the CPUs the process is allowed to run on; MPI or your scheduler may put restrictions on this for each process. Together with the node information from Pooja Nilangekar's answer, you can thereby obtain the running information for each process.
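If you would rather have each process report this itself, here is a minimal sketch of that idea (it assumes Linux; sched_getcpu() from glibc is a shortcut for reading the processor field of /proc/self/stat described above):
#define _GNU_SOURCE
#include <mpi.h>
#include <sched.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);

    /* sched_getcpu() returns the core this thread is currently running on. */
    printf("rank %d on host %s, core %d\n", rank, host, sched_getcpu());

    MPI_Finalize();
    return 0;
}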
If you cannot modify the sources to have each process report where it is running, I think the easiest way to see which cores are utilized is top; maybe also have a look at this blog on How do I find out Linux CPU utilization?, which also mentions mpstat and sar.

Why is usleep not working at boot time?

I have a daemon that launchd runs at system boot (OS X). I need to delay startup of my daemon by 3-5 seconds, yet the following code executes instantly at boot, but delays properly well after boot:
#include <unistd.h>
...
printf("Before delay\n");
unsigned int delay = 3000000;
while( (delay=usleep(delay)) > 0)
{
;
}
printf("After delay\n");
If I run it by hand after the system has started, it delays correctly. If I let launchd start it at boot the console log shows that there is no delay between Before delay and After delay - they are executed in the same second.
If I could get launchd to execute my daemon after a delay after boot that would be fine as well, but my reading suggests that this isn't possible (perhaps I'm wrong?).
Otherwise, I need to understand why usleep isn't working, and what I can do to fix it, or what delay I might be able to use instead that works that early in the boot process.
First things first. Put in some extra code to also print out the current time, rather than relying on launchd to do it.
It's possible that the different flushing behaviour of standard output may be coming into play.
If standard output can be determined to be an interactive device (such as when running it from the command line), it is line buffered - you'll get the "before" line flushed before the delay.
Otherwise, it's fully buffered, so the flush may not happen until the program exits (or until you fill the buffer, which may be 4K for example). That means launchd may see both lines come out together, after the delay.
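(As an aside, if buffering turns out to be the culprit, it can be ruled out by forcing line buffering or flushing explicitly; a minimal sketch:)
#include <stdio.h>

int main(void)
{
    /* Line-buffer stdout even when it is not a terminal (e.g. when started by launchd). */
    setvbuf(stdout, NULL, _IOLBF, 0);

    printf("Before delay\n");
    fflush(stdout);   /* or simply flush after each message */
    return 0;
}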
Getting the C code to timestamp the lines will tell you if this is the problem, something like:
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main (void) {
    printf("%ld: Before delay\n", (long)time(0));
    unsigned int delay = 3000000;
    while( (delay = usleep(delay)) > 0)
        ;
    printf("%ld: After delay\n", (long)time(0));
    return 0;
}
To see why the buffering may be a problem, consider running that program above as follows:
pax> ./testprog | while read; do echo $(date): $REPLY; done
Tue Jan 31 12:59:24 WAST 2012: 1327985961: Before delay
Tue Jan 31 12:59:24 WAST 2012: 1327985964: After delay
You can see that, because the buffering causes both lines to appear to the while loop when the program exits, they get the same timestamp of 12:59:24 despite the fact they were generated three seconds apart within the program.
In fact, if you change it as follows:
pax> ./testprog | while read; do echo $(date) $REPLY; sleep 10 ; done
Tue Jan 31 13:03:17 WAST 2012 1327986194: Before delay
Tue Jan 31 13:03:27 WAST 2012 1327986197: After delay
you can see that the time seen by the "surrounding" program (the while loop or, in your case, launchd) is totally disconnected from the time inside the program itself.
Secondly, usleep is a function that can fail! And it can fail by returning -1, which is very much not greater than zero.
That means, if it fails, your delay will be effectively nothing.
The Single UNIX Specification states, for usleep:
On successful completion, usleep() returns 0. Otherwise, it returns -1 and sets errno to indicate the error.
The usleep() function may fail if: [EINVAL]: The time interval specified 1,000,000 or more microseconds.
That's certainly the case with your code although it would be hard to explain why it works after boot and not before.
Interestingly, the Mac OSX docs don't list EINVAL but they do allow for EINTR if the sleep is interrupted externally. So again, something you should check.
You can check those possibilities with something like:
#include <stdio.h>
#include <time.h>
#include <errno.h>
#include <unistd.h>
int main (void) {
printf("%d: Before delay\n", time(0));
unsigned int delay = 3000000;
while( (delay=usleep(delay)) > 0);
printf("%d: After delay\n", time(0));
printf("Delay became %d, errno is %d\n", delay, errno);
}
One other thing I've just noticed: from your code, you seem to be assuming that usleep returns the number of microseconds unslept (remaining) and that you can loop until it's all done, but that behaviour is not borne out by the man pages.
I know that nanosleep does this (by updating the passed structure to contain the remaining time rather than returning it) but usleep only returns 0 or -1.
The sleep function acts in that manner, returning the number of seconds yet to go. Perhaps you might look into using that function instead, if possible.
In any case, I would still run that (last) code segment above just so you can ascertain what the actual problem is.
According to the old POSIX.1 standard, and as documented in the OSX manual page, usleep returns 0 on success and -1 on error.
If you get an error, it's most likely EINTR (the only error documented in the OSX manual page), meaning it has been interrupted by a signal. You'd better check errno to be certain, though. As a side note, the Linux manual page states that you can get EINVAL too in some cases:
usec is not smaller than 1000000. (On systems where that is considered an error.)
As another side note, usleep has been obsoleted in the latest POSIX.1 standard, in favor of nanosleep.
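If you do switch to nanosleep, here is a minimal sketch of the retry loop (it restarts the sleep with the remaining time whenever a signal interrupts it):
#include <errno.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec req = { 3, 0 };   /* 3 seconds */
    struct timespec rem;

    /* nanosleep fills 'rem' with the unslept time when interrupted by a signal. */
    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
        req = rem;

    printf("slept the full delay\n");
    return 0;
}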

How to make pthread_cond_timedwait() robust against system clock manipulations?

Consider the following source code, which is fully POSIX compliant:
#include <stdio.h>
#include <limits.h>
#include <stdint.h>
#include <stdlib.h>
#include <pthread.h>
#include <time.h>      /* gmtime, strftime */
#include <unistd.h>    /* sleep */
#include <sys/time.h>  /* gettimeofday */

int main (int argc, char ** argv) {
    pthread_cond_t c;
    pthread_mutex_t m;
    char printTime[UCHAR_MAX];

    pthread_mutex_init(&m, NULL);
    pthread_cond_init(&c, NULL);

    for (;;) {
        struct tm * tm;
        struct timeval tv;
        struct timespec ts;

        gettimeofday(&tv, NULL);
        printf("sleep (%ld)\n", (long)tv.tv_sec);
        sleep(3);

        tm = gmtime(&tv.tv_sec);
        strftime(printTime, UCHAR_MAX, "%Y-%m-%d %H:%M:%S", tm);
        printf("%s (%ld)\n", printTime, (long)tv.tv_sec);

        /* Wait until 5 seconds after the time sampled above. */
        ts.tv_sec = tv.tv_sec + 5;
        ts.tv_nsec = tv.tv_usec * 1000;

        pthread_mutex_lock(&m);
        pthread_cond_timedwait(&c, &m, &ts);
        pthread_mutex_unlock(&m);
    }
    return 0;
}
It prints the current system date every 5 seconds; however, it sleeps for 3 seconds between getting the current system time (gettimeofday) and the condition wait (pthread_cond_timedwait).
Right after it prints "sleep (...)", try setting the system clock two days into the past. What happens? Well, instead of waiting 2 more seconds on the condition as it usually does, pthread_cond_timedwait now waits for two days and 2 seconds.
How do I fix that?
How can I write POSIX compliant code, that does not break when the user manipulates the system clock?
Please keep in mind that the system clock might change even without user interaction (e.g. an NTP client might update the clock automatically once a day). Setting the clock into the future is no problem: it only causes the sleep to wake up early, which is usually harmless and easy to detect and handle. Setting the clock into the past (e.g. because it was running ahead and NTP corrected it), however, can cause a big problem.
PS:
Neither pthread_condattr_setclock() nor CLOCK_MONOTONIC exists on my system. They are mandatory in the POSIX 2008 specification (part of "Base"), but most systems still only follow the POSIX 2004 specification as of today, and in POSIX 2004 these two were optional (Advanced Realtime Extension).
Interesting, I've not encountered that behaviour before but, then again, I'm not in the habit of mucking about with my system time that much :-)
Assuming you're doing that for a valid reason, one possible (though kludgy) solution is to have another thread whose sole purpose is to periodically kick the condition variable to wake up any threads so affected.
In other words, something like:
while (1) {
    sleep (10);
    pthread_cond_signal (&condVar);
}
Your code that's waiting for the condition variable to be kicked should be checking its predicate anyway (to take care of spurious wakeups) so this shouldn't have any real detrimental effect on the functionality.
It's a slight performance hit but once every ten seconds shouldn't be too much of a problem. It's only really meant to take care of the situations where (for whatever reason) your timed wait will be waiting a long time.
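For reference, the waiting side of that pattern re-checks its predicate after every wakeup; a minimal sketch (the predicate and variable names are illustrative, not from your code):
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>
#include <time.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static bool work_ready = false;   /* the predicate; set by the producer */

void wait_for_work(const struct timespec *deadline)
{
    pthread_mutex_lock(&m);
    /* Spurious wakeups and "kicks" from the helper thread just cause
       another check of the predicate rather than a false positive. */
    while (!work_ready)
        if (pthread_cond_timedwait(&c, &m, deadline) == ETIMEDOUT)
            break;
    pthread_mutex_unlock(&m);
}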
Another possibility is to re-engineer your application so that you don't need timed waits at all.
In situations where threads need to be woken for some reason, it's invariably by another thread which is perfectly capable of kicking a condition variable to wake one (or broadcasting to wake the lot of them).
This is very similar to the kicking thread I mentioned above but more as an integral part of your architecture than a bolt-on.
You can defend your code against this problem. One easy way is to have one thread whose sole purpose is to watch the system clock. You keep a global linked list of condition variables, and if the clock watcher thread sees a system clock jump, it broadcasts every condition variable on the list. Then, you simply wrap pthread_cond_init and pthread_cond_destroy with code that adds/removes the condition variable to/from the global linked list. Protect the linked list with a mutex.
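A minimal sketch of that idea (the names are illustrative, not from any real library): keep a mutex-protected list of condition variables and broadcast them all whenever the wall clock is seen to jump backwards. The matching destroy wrapper, which unlinks the entry, is omitted for brevity.
#include <errno.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

struct cv_node {
    pthread_cond_t *cv;
    struct cv_node *next;
};

static struct cv_node *cv_list = NULL;
static pthread_mutex_t cv_list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Wrapper for pthread_cond_init: registers the condition variable on the global list. */
int tracked_cond_init(pthread_cond_t *cv, const pthread_condattr_t *attr)
{
    struct cv_node *n = malloc(sizeof *n);
    if (n == NULL)
        return ENOMEM;
    n->cv = cv;
    pthread_mutex_lock(&cv_list_lock);
    n->next = cv_list;
    cv_list = n;
    pthread_mutex_unlock(&cv_list_lock);
    return pthread_cond_init(cv, attr);
}

/* Clock-watcher thread (start it once with pthread_create): if the wall clock
   moves backwards, wake every waiter so it can recompute its absolute timeout. */
void *clock_watcher(void *arg)
{
    struct timeval last, now;
    (void)arg;
    gettimeofday(&last, NULL);
    for (;;) {
        sleep(1);
        gettimeofday(&now, NULL);
        if (now.tv_sec < last.tv_sec) {   /* clock jumped into the past */
            pthread_mutex_lock(&cv_list_lock);
            for (struct cv_node *n = cv_list; n != NULL; n = n->next)
                pthread_cond_broadcast(n->cv);
            pthread_mutex_unlock(&cv_list_lock);
        }
        last = now;
    }
    return NULL;
}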

Console textgame.exe works on Windows 7, not on Vista… why?

Hey, so I've made a text game using the PDCurses library and Microsoft operating system tools. Here are my includes; see below for further explanation:
#include <iostream>
#include <time.h> // or "ctime"
#include <stdio.h> // for
#include <cstdlib>
#include <Windows.h>
#include <conio.h>
#include <curses.h>
#include <algorithm>
#include <string>
#include <vector>
#include <sstream>
#include <ctime>
#include <myStopwatch.h> // for keeping times
#include <myMath.h> // numb_digits() and digit_val();
myStopwatch.h and myMath.h include:
#include <stdio.h>
#include <math.h>
#include <tchar.h>
So I've tested the game (which includes a folder containing the .exe and pdcurses.dll) on my computer running Windows 7 and it works great. However, when running it on another computer with Vista or older, my game comes up but ends almost instantaneously due to the loss of all the player's lives... how could this be?
If you would like to see the full source code, go to this Link
Thanks!
In the main game loop, you are not initializing the coll variable before passing it to theScreen.check_collision(). If the player is in no danger, then that function does not update this value. Back in the main loop you don't check the return value from check_collision(), and the program is now making decisions based on whatever uninitialized value was in that variable. Welcome to the wide world of Undefined Behavior.
It is likely the difference you're seeing on different OS's is due to the way the different heap managers initialize memory pages. Even if your player survives for a while, after the first collision, that memory location now holds 'X', which is then never cleared, and while the result is still "undefined", on most architectures, this will result in registering a new collision on each iteration, explaining why your "lives" are vanishing so quickly.
Two things you need to do to fix this:
All code paths through check_collision must write to the 'buff' out parameter. The easiest way to do this is initialize it to 0 in the first line of the function. (Alternatively, if it's intended as an in/out param, then you need to initialize it in the main loop before calling check_collision() )
Make your decision based on the return value of check_collision(), rather than the out parameter. (Or, if that return value really is not important, change the return type of the function to void)
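To illustrate the shape of that fix, a hypothetical sketch (not your actual function):
/* Hypothetical sketch of the fix: every path writes the out parameter,
   and callers branch on the return value. */
int check_collision(char *buff)
{
    *buff = 0;               /* initialize the out parameter on every path */
    int hit = 0;             /* real collision detection would set this */
    if (hit)
        *buff = 'X';
    return hit;              /* callers should test this, not the out parameter */
}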
Line 23 in string_lines is missing a comma at the end. I don't think this is your whole issue, but it can't be good either.
You didn't say whether you recompiled it separately under each OS (Vista, etc.), and if you did recompile, whether the same version of the compiler was used.
Windows 7 shipped with the Visual C++ 2008 runtime.
Windows Vista shipped with the Visual C++ 2005 runtime.
XP shipped with the Visual C++ 6.0 runtime.
Since you compiled the application in Visual Studio 2010, more than likely it was not compiled to target older operating systems.
Try installing the latest runtimes on the machine you are testing with and if it works after doing that, you know to recompile your project to support older operating systems.
http://www.microsoft.com/download/en/details.aspx?id=5555 (x86)
http://www.microsoft.com/download/en/details.aspx?id=14632 (x64)

GetIpAddrTable() leaks memory. How to resolve that?

On my Windows 7 box, this simple program causes the memory use of the application to creep up continuously, with no upper bound. I've stripped out everything non-essential, and it seems clear that the culprit is the Microsoft Iphlpapi function "GetIpAddrTable()". On each call, it leaks some memory. In a loop (e.g. checking for changes to the network interface list), it is unsustainable. There seems to be no async notification API which could do this job, so now I'm faced with possibly having to isolate this logic into a separate process and recycle the process periodically -- an ugly solution.
Any ideas?
// IphlpLeak.cpp - demonstrates that GetIpAddrTable leaks memory internally: run this and watch
// the memory use of the app climb up continuously with no upper bound.
#include <stdio.h>
#include <windows.h>
#include <assert.h>
#include <Iphlpapi.h>
#pragma comment(lib, "Iphlpapi.lib")

void testLeak() {
    static unsigned char buf[16384];
    DWORD dwSize(sizeof(buf));
    if (GetIpAddrTable((PMIB_IPADDRTABLE)buf, &dwSize, false) == ERROR_INSUFFICIENT_BUFFER)
    {
        assert(0); // we never hit this branch.
        return;
    }
}

int main(int argc, char* argv[]) {
    for ( int i = 0; true; i++ ) {
        testLeak();
        printf("i=%d\n", i);
        Sleep(1000);
    }
    return 0;
}
@Stabledog:
I ran your example, unmodified, for 24 hours but did not observe the program's Commit Size increasing indefinitely. It always stayed below 1024 kilobytes. This was on Windows 7 (32-bit, without Service Pack 1).
Just for the sake of completeness, what happens to memory usage if you comment out the entire if block and the sleep? If there's no leak there, then I would suggest you're correct as to what's causing it.
Worst case, report it to MS and see if they can fix it - you have a nice simple test case to work from which is more than what I see in most bug reports.
Another thing you may want to try is to check the return code against NO_ERROR rather than a specific error condition. If you get back a different error than ERROR_INSUFFICIENT_BUFFER, that may be related to the leak:
DWORD dwRetVal = GetIpAddrTable((PMIB_IPADDRTABLE)buf, &dwSize, false);
if (dwRetVal != NO_ERROR) {
    printf("ERROR: %lu\n", dwRetVal);
}
I've been all over this issue now: it appears that there is no acknowledgment from Microsoft on the matter, but even a trivial application grows without bounds on Windows 7 (not XP, though) when calling any of the APIs which retrieve the local IP addresses.
So the way I solved it -- for now -- was to launch a separate instance of my app with a special command-line switch that tells it "retrieve the IP addresses and print them to stdout". I scrape stdout in the parent app, the child exits and the leak problem is resolved.
But it wins "dang ugly solution to an annoying problem", at best.
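A rough sketch of that workaround (the executable name and the --dump-ips switch are made up for illustration): the parent reads the child's stdout over a pipe, and whatever GetIpAddrTable leaked dies with the child process.
#include <stdio.h>

int main(void)
{
    char line[256];

    /* _popen runs the child and exposes its stdout as a FILE*. */
    FILE *child = _popen("myapp.exe --dump-ips", "rt");
    if (child == NULL)
        return 1;

    while (fgets(line, sizeof line, child) != NULL)
        printf("address from child: %s", line);

    _pclose(child);   /* the child exits, taking the leaked memory with it */
    return 0;
}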
