What does C "Sleep" function (capital "S") do on a Mac? - xcode

Note the capital "S" in Sleep. Sleep with a capital "S" is a standard function on Windows that sleeps for a given number of milliseconds. On Mac OS X, there is no such symbol. However, the Xcode linking environment seems to find something to link it to. What is it?

Well, it's a very old Carbon function (in the CoreServices / OSServices framework) that puts the computer to sleep. I can't find any documentation for it.

sleep(unsigned int) is a function from the UNIX system that Mac OS X runs on, known as Darwin.
Here is the man page for sleep.
Essentially, it is a C call that suspends the calling thread for the given number of seconds.
Alternatively, you can use usleep(unsigned int), which sleeps for the given number of microseconds; one second is 1000 * 1000 microseconds, and one millisecond is 1000 microseconds.
Here is the man page for usleep.
Both of these functions give you direct access to the underlying C calls that a C/C++ developer would normally use.
Here is an equivalent code example:
NSTimeInterval sleepTime = 2.0; //Time interval is a double containing fractions of seconds
[NSThread sleepForTimeInterval:sleepTime]; // This will sleep for 2 seconds
sleep((int)sleepTime); // This will also sleep for 2 seconds
If you wish to have more granularity, you will need usleep(unsigned int), which gives you a much more precise interval.
NSTimeInterval sleepTime = 0.2; // This is essentially 2 tenths of a second
[NSThread sleepForTimeInterval:sleepTime]; // This will sleep for 2 tenths of a second;
usleep((unsigned int)(sleepTime * 1000 * 1000)); // This will also sleep for 2 tenths of a second
I hope that helps

The equivalent to sleep should be
[NSThread sleepForTimeInterval:5.0];
However, this is in seconds. To use milliseconds, I think you have to use usleep(num * 1000), where num is the number of milliseconds.
But I don't know what Sleep(...) does.
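A minimal sketch of that idea: a Windows-style millisecond sleep built on usleep(). The name SleepMillis is mine, not a system API, and note that usleep() takes microseconds:

#include <unistd.h>

/* Hypothetical shim, not the Windows API: sleeps for the given
   number of milliseconds on top of usleep(). */
static void SleepMillis(unsigned int ms)
{
    usleep(ms * 1000); /* 1 millisecond = 1000 microseconds */
}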

On the Mac, under OS X, there is no such symbol.
I don't think there is such a symbol in Classic Mac OS either - I even looked in my ancient copy of THINK Reference.
I would also be surprised to find a Sleep (with a capital S) function in C, since many people name C functions using all lower case.
Were you prompted to ask the question because you're getting a link error?

There is usleep().
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* A worker function is needed for the example to compile */
void *threadFunc(void *arg) { printf("thread: %s\n", (const char *)arg); return NULL; }

int main(void)
{
    pthread_t pth; int i = 0;
    pthread_create(&pth, NULL, threadFunc, "foo");
    while (i < 100)
    {
        usleep(1); /* sleep for 1 microsecond */
        printf("main is running...\n");
        ++i;
    }
    printf("main waiting for thread to terminate...\n");
    pthread_join(pth, NULL);
    return 0;
}

Are you using any other libraries in your project?
I'm getting compile errors with both Cocoa and Carbon using Apple's template projects; however, I notice that sleep functions (using that definition) are a feature in both the SDL and SFML cross-platform libraries, and perhaps many others.
Have you tried your code example in a template project that uses only Apple's libraries?
It could be that Sleep() is a function in something else you are linking to.

Related

How to measure total time spent in a function?

I have a utility function that I suspect is eating up a large portion of my application's execution time. Using Time Profiler to look at the call stack, this function takes up a large portion of the execution time of any function from which it is called. However, since this utility function is called from many different sources, I am having trouble determining if, overall, this is the best use of my optimization time.
How can I look at total time spent in this function during program execution, regardless of who called it?
For clarity, I want to combine the selected entries with all other calls to that function into a single entry.
What does the trick for me is ticking "Invert Call Tree". It sorts the "leaf" functions in the call tree in order of those that accumulate the most time, and lets you see what calls them.
The checkbox can be found in the right panel, called "Display Settings" (if hidden: ⌘2 or View->Inspectors->Show Display Settings).
I am not aware of an Instruments-based solution, but here is something you can do from code. I hope somebody provides an Instruments solution, but until then, here is something to get you going.
#include <time.h>

// Have this as a global variable to track time taken by the culprit function.
// Note: clock() measures CPU time consumed by the process, not wall-clock time.
static double time_consumed = 0;

void myTimeConsumingFunction(){
    // Add these lines in the function
    clock_t start, end;
    start = clock();
    // Main body of the function taking up time
    end = clock();
    // Add this at the bottom and keep accumulating time spent across all calls
    time_consumed += (double)(end - start) / CLOCKS_PER_SEC;
}
// At termination / end of program, log time_consumed.
To see the totals for a particular function, follow these steps:
Profile your program with Time Profiler
Find and select any mention of the function of interest in the Call Tree view (you can use Edit->Find)
Summon the context menu over the selected function and 'Focus on calls made by ' (Or use Instrument->Call Tree Data Mining->Focus on Calls Made By )
If your program is multi-threaded and you want a total across all threads, make sure 'Separate by Thread' is not checked.
I can offer the makings of the answer you're looking for but haven't got this working within Instruments yet...
Instruments uses dtrace under the hood. dtrace allows you to respond to events in your program such as a function being entered or returned from. The response to each event can be scripted.
You can create a custom instrument with scripting in Instruments.
Here is a noddy shell script that launches dtrace outside of Instruments and records the time spent in a certain function.
#!/bin/sh
dtrace -c <yourprogram> -n '
unsigned long long totalTime;
self uint64_t lastEntry;

dtrace:::BEGIN
{
    totalTime = 0;
}

pid$target:<yourprogram>:*<yourfunction>*:entry
{
    self->lastEntry = vtimestamp;
}

pid$target:<yourprogram>:*<yourfunction>*:return
{
    totalTime = totalTime + (vtimestamp - self->lastEntry);
    /*#timeByThread[tid] = sum(vtimestamp - self->lastEntry);*/
}

dtrace:::END
{
    printf( "\n\nTotal time %dms\n" , totalTime/1000000 );
}
'
What I haven't figured out yet is how to transfer this into instruments and get the results to appear in a useful way in the GUI.
I think you can call system("time ls"); twice and it will just work for you. The output will be printed in the debug console.

Best way to measure elapsed time in Scheme

I have some kind of "main loop" using GLUT. I'd like to be able to measure how much time it takes to render a frame; the time used to render a frame might be used for other calculations. The time function isn't adequate:
(time (procedure))
I found out that there is a function called current-time. I had to import some package to get it.
(define ct (current-time))
Which defines ct as a time object. Unfortunately, I couldn't find any arithmetic packages for dates in Scheme. I saw that in Racket there is something called current-inexact-milliseconds, which is exactly what I'm looking for because it has sub-millisecond precision.
Using the time object, there is a way to convert it to nanoseconds using
(time->nanoseconds ct)
This lets me do something like this
(let ((newTime (current-time)))
  (block)
  (print (- (time->nanoseconds newTime) (time->nanoseconds oldTime)))
  (set! oldTime newTime))
This seemed good enough for me, except that for some reason it was printing things like this:
0
10000
0
0
10000
0
10000
I'm rendering things using OpenGL, and I find it hard to believe that some rendering loops take 0 nanoseconds, or that each loop is stable enough to always take exactly the same number of nanoseconds.
Actually, your results are not so surprising, because you have to consider the limited timer resolution of each system. There are limits that depend, in general, on the processor and on the OS; these cannot count as accurately as we might expect, even though a quartz oscillator can reach and exceed a period of one nanosecond. You are also limited by the accuracy and resolution of the functions you used. I had a look at the documentation of Chicken Scheme, but there is nothing similar to Racket's (current-inexact-milliseconds) → real?.
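To see that limit directly, here is a minimal C sketch that asks the OS what resolution it claims for its monotonic clock; clock_getres() is POSIX, though note it only arrived on the Mac in macOS 10.12:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res;

    /* Ask the OS for the advertised resolution of the monotonic clock. */
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        printf("monotonic clock resolution: %ld ns\n", (long)res.tv_nsec);
    return 0;
}

If this prints 1000 ns or more, a 0/10000 pattern like the one above is exactly what you would expect.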
After digging around, I came up with the solution: write it in C and bind it to Scheme using bindings.
(require-extension bind)
(bind-rename "getTime" "current-microseconds")
(bind* #<<EOF
uint64_t getTime();
#ifndef CHICKEN
#include <sys/time.h>
uint64_t getTime() {
    struct timeval tim;
    gettimeofday(&tim, NULL);
    return 1000000 * tim.tv_sec + tim.tv_usec;
}
#endif
EOF
)
Unfortunately, this solution isn't the best because it is Chicken Scheme only. It could be implemented as a library, but a library wrapping a single function that doesn't exist in any other Scheme doesn't make much sense.
Since nanoseconds don't actually make much sense here after all, I took microseconds instead.
Note the trick here: the prototype is declared above so the function gets wrapped, and the #ifndef CHICKEN guard prevents the include from being parsed by bind. When the file is compiled by GCC, it is built with the include and the function definition.
CHICKEN has current-milliseconds: http://api.call-cc.org/doc/library/current-milliseconds

Determine the time taken for my code to execute in MFC

I need a way to find out the amount of time taken by a function, and by a section of my code inside a function, to execute.
Does Visual Studio provide any mechanism for doing this, or is it possible to do so from the program using MFC functions? I am new to MFC, so I am not sure how this can be done. I thought this should be a pretty straightforward operation, but I cannot find any examples of how it may be done either.
A quick way, but quite imprecise, is by using GetTickCount():
DWORD time1 = GetTickCount();
// Code to profile
DWORD time2 = GetTickCount();
DWORD timeElapsed = time2-time1;
The problem is GetTickCount() uses the system timer, which has a typical resolution of 10 - 15 ms. So it is only useful with long computations.
It can't tell the difference between a function that takes 2 ms to run and one that takes 9 ms. But if you are in the seconds range, it may well be enough.
If you need more resolution, you can use the performance counter, as RedEye explains.
Or you can try a profiler (maybe this is what you were looking for?). See this question.
There may be better ways, but I do it like this:
// At the start of the function
LARGE_INTEGER lStart;
QueryPerformanceCounter(&lStart);
LARGE_INTEGER lFreq;
QueryPerformanceFrequency(&lFreq);
// At the end of the function
LARGE_INTEGER lEnd;
QueryPerformanceCounter(&lEnd);
TRACE("FunctionName t = %dms\n", (1000*(lEnd.LowPart - lStart.LowPart))/lFreq.LowPart);
I use this method quite a lot for optimising graphics code, finding time taken for screen updates etc. There are other methods of doing the same or similar, but this is quick and simple.
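If you time code in several places, it may be worth a small helper. A minimal sketch; elapsed_ms is my own name, not a Win32 or MFC API:

#include <windows.h>

// Hypothetical helper: converts two QueryPerformanceCounter readings
// into elapsed milliseconds using the counter frequency.
static double elapsed_ms(LARGE_INTEGER start, LARGE_INTEGER end)
{
    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);  // counts per second
    return 1000.0 * (double)(end.QuadPart - start.QuadPart) / (double)freq.QuadPart;
}

Using QuadPart rather than LowPart avoids the wrap-around you would otherwise get from the 32-bit low word on longer intervals.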

Luaj os.time() returns milliseconds

os.time() in Luaj returns the time in milliseconds, but according to the Lua documentation, it should return the time in seconds.
Is this a bug in Luaj?
And can you suggest a workaround that will work with both Luaj (for Java) and real Lua (C/C++)? I have to use the same Lua source for both applications (I can't simply divide by 1000, as the two return different time scales).
Example from my Lua file:
local start = os.time()
while(true) do
    print(os.time() - start)
end
In C++, I received this output:
1
1
1
...(1 second passed)
2
2
2
In Java (using Luaj), I got:
1
...(terminated in Eclipse as fast as my finger could)
659
659
659
659
FYI, I tried this on Windows.
Yes, there's a bug in Luaj.
The implementation just returns System.currentTimeMillis() when you call os.time(). It should really return something like (long)(System.currentTimeMillis()/1000.)
It's also worth pointing out that the os.date and os.time handling in Luaj is almost completely missing. I would recommend that you assume they've not been implemented yet.
Lua manual about os.time():
The returned value is a number, whose meaning depends on your system. In POSIX, Windows, and some other systems, this number counts the number of seconds since some given start time (the "epoch"). In other systems, the meaning is not specified, and the number returned by time can be used only as an argument to os.date and os.difftime.
So, any Lua implementation could freely change the meaning of os.time() value.
It appears you've already confirmed that it's a bug in LuaJ; as for the workaround, you can replace os.time() with your own version:
if (runningunderluaj) then
    local ostime = os.time
    os.time = function(...) return ostime(...)/1000 end
end
where runningunderluaj can check for some global variable that is only set under luaj. If that's not available, you can probably come up with your own check by comparing the results from calls to os.clock and os.time that measure time difference:
local s = os.clock()
local t = os.time()
while true do
    if os.clock()-s > 0.1 then break end
end
-- (at least) 100ms has passed
local runningunderluaj = os.time() - t > 1
Note: It's possible that os.clock() is "broken" as well. I don't have access to luaj to test this...
In luaj-3.0-beta2, this has been fixed to return time in seconds.
This was a bug in all versions of luaj up to and including luaj-3.0-beta1.

What is the correct way to prevent sleep on OS X? [duplicate]

Possible Duplicate:
How to programmatically prevent a Mac from going to sleep?
What is the correct method on the current version of OS X (10.7) for preventing sleep while an application or process is running?
In particular, is IOCancelPowerChange still (or did it ever) serve this purpose? I call IOCancelPowerChange in response to kIOMessageCanSystemSleep, but that doesn't do the trick.
Essentially the same question as the first part of this one has been asked before, but the documentation it points to is quite old and the answer was never accepted.
IOCancelPowerChange continues to work, but only for idle-triggered sleep; it will not work for sleep triggered from the Sleep menu item, requested programmatically, or caused by a push of the power button.
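For context, here is a sketch of where IOCancelPowerChange fits, loosely following Listing 1 of Q&A1340 (abridged; the IORegisterForSystemPower registration and run-loop wiring are omitted):

#include <IOKit/pwr_mgt/IOPMLib.h>
#include <IOKit/IOMessage.h>

static io_connect_t root_port; // obtained from IORegisterForSystemPower()

void MySleepCallBack(void *refCon, io_service_t service,
                     natural_t messageType, void *messageArgument)
{
    switch (messageType)
    {
        case kIOMessageCanSystemSleep:
            // Idle sleep asks permission first, so it can be vetoed...
            IOCancelPowerChange(root_port, (long)messageArgument);
            break;
        case kIOMessageSystemWillSleep:
            // ...but forced sleep does not ask; it can only be acknowledged.
            IOAllowPowerChange(root_port, (long)messageArgument);
            break;
    }
}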
Apple's Q&A1340 answers the question "Q: How can my application get notified when the computer is going to sleep or waking from sleep? How do I prevent sleep?"
Listing 2 of Q&A1340:
#import <IOKit/pwr_mgt/IOPMLib.h>
// kIOPMAssertionTypeNoDisplaySleep prevents display sleep,
// kIOPMAssertionTypeNoIdleSleep prevents idle sleep
//reasonForActivity is a descriptive string used by the system whenever it needs
// to tell the user why the system is not sleeping. For example,
// "Mail Compacting Mailboxes" would be a useful string.
// NOTE: IOPMAssertionCreateWithName limits the string to 128 characters.
CFStringRef reasonForActivity = CFSTR("Describe Activity Type");
IOPMAssertionID assertionID;
IOReturn success = IOPMAssertionCreateWithName(kIOPMAssertionTypeNoDisplaySleep,
    kIOPMAssertionLevelOn, reasonForActivity, &assertionID);
if (success == kIOReturnSuccess)
{
    // Add the work you need to do without
    // the system sleeping here.
    success = IOPMAssertionRelease(assertionID);
    // The system will be able to sleep again.
}
Note that you can only stop idle time sleep, not sleep triggered by the user.
For applications supporting Mac OS X 10.6 and later, use the new IOPMAssertion family of functions. These functions allow other applications and utilities to see your application's desire not to sleep; this is critical to working seamlessly with third party power management software.
You could call UpdateSystemActivity(OverallAct) every 30 seconds to prevent the display from sleeping.
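A minimal sketch of that approach, assuming the legacy Carbon call UpdateSystemActivity() from CoreServices (long deprecated; the helper name and the endless loop are illustrative only):

#include <CoreServices/CoreServices.h>
#include <unistd.h>

// Illustrative helper: resets the system idle timer every 30 seconds
// so the machine is never considered idle.
static void keepSystemAwake(void)
{
    for (;;)
    {
        UpdateSystemActivity(OverallAct); // legacy Carbon call
        sleep(30);
    }
}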
