Time Comparison in D

I'm trying to build a Pomodoro timer app in D. I'm used to Ruby, and I would like to do basic time comparisons.
I tried using something like
auto startTime = Clock.currTime();
And then comparing by grabbing the current time and comparing:
do{
    // bla bla stuff
    auto nowTime = Clock.currTime();
}while(nowTime <= (startTime + dur!"minute"(25));
However, missing-method and type errors ensue. Any ideas?

In addition to CyberShadow's answer which does indeed tell you how to fix your code, I would point out that this particular approach is not the best approach for a timer. Aside from the fact that there's a good chance that a condition variable would make more sense (depending on what you're really doing), Clock.currTime is the wrong function to be using.
Clock.currTime returns the time using the real-time clock, whereas timing is generally more accurate with a monotonic clock. With clocks other than a monotonic clock, the time can be affected by adjustments to the clock (e.g. the system clock gets adjusted by a few minutes by the NTP daemon). A monotonic clock, however, always moves forward at a steady rate, even if the system clock is adjusted. So it's not very useful for getting the time of day, but it's perfect for timing things. For that, you'd want to do something more like this:
auto endTime = Clock.currSystemTick + to!TickDuration(dur!"minutes"(25));
do
{
    // bla bla stuff
} while(Clock.currSystemTick < endTime);
So, you end up dealing with core.time.TickDuration instead of std.datetime.SysTime. As long as you don't need the actual time of day and are just using this for timing purposes, then this approach is better.

You're missing a )
Variables declared inside a while scope are not visible to the while condition - you need to move the nowTime declaration outside of the do ... while block.
It should be dur!"minutes", not "minute".
With these fixes, the code compiles fine for me.
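Putting those fixes together, a corrected version of the original loop might look like this (a sketch, with nowTime hoisted out of the loop body so the condition can see it):

import std.datetime : Clock;
import core.time : dur;

auto startTime = Clock.currTime();
auto nowTime = startTime; // declared outside the do ... while body
do
{
    // bla bla stuff
    nowTime = Clock.currTime();
} while(nowTime <= startTime + dur!"minutes"(25)); // "minutes", parens balanced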

Related

How to obtain a kernel timestamp for an interrupt?

I have an event from the realtime world, which generates an interrupt. I need to register this event to one of the Linux kernel timescales, like CLOCK_MONOTONIC or CLOCK_REALTIME, with the goal of establishing when the event occurred in real calendar time. What is the currently recommended way to do this? My Google search found some patches submitted back in 2011 to support it, but the interrupt-handling code has been heavily revised since then and I don't see a reference to timestamps anymore.
For my intended application the accuracy requirements are low (1 ms). Still, I would like to know how to do this properly. I should think it's possible to get into the microsecond range, if one can exclude the possibility of higher-priority interrupts.
If you need only low precision, you could get away with reading jiffies.
However, if CONFIG_HZ is less than 1000, you will not even get 1 ms resolution.
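For instance, a minimal sketch (jiffies and jiffies_to_msecs() are standard kernel helpers; the function and variable names here are made up):

#include <linux/jiffies.h>

static unsigned long event_stamp; /* set when the event fires */

/* sketch: record the coarse time of the event */
static void note_event(void)
{
    event_stamp = jiffies;
}

/* sketch: milliseconds elapsed since the event */
static unsigned int ms_since_event(void)
{
    return jiffies_to_msecs(jiffies - event_stamp);
}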
For a high-resolution timestamp, see how firewire-cdev.c does it:
struct timespec ts;
switch (clk_id) {
case CLOCK_REALTIME:      getnstimeofday(&ts);   break;
case CLOCK_MONOTONIC:     ktime_get_ts(&ts);     break;
case CLOCK_MONOTONIC_RAW: getrawmonotonic(&ts);  break;
}
If I understood your needs right, you may use the getnstimeofday() function for this purpose.
If you need a high-precision monotonic clock value (which is usually a good idea), you should look at the ktime_get_ts() function (defined in linux/ktime.h). getnstimeofday(), suggested in the other answer, returns the "wall" time, which may actually appear to go backward on occasion, resulting in unexpected behavior for some applications.
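A minimal sketch of taking the monotonic timestamp inside a handler (the handler name and device details are hypothetical):

#include <linux/interrupt.h>
#include <linux/ktime.h>

static struct timespec event_ts;

/* sketch: grab a monotonic timestamp as early as possible in the handler */
static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    ktime_get_ts(&event_ts);  /* monotonic; immune to clock adjustments */
    /* ... service the device ... */
    return IRQ_HANDLED;
}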

Recreating bugs in cocos2d iphone

I guess someone must have asked a similar question before, but here goes.
It would be useful to be able to record games so that if a bug happened during play, the recording could be replayed later against a fixed build to confirm whether the bug is fixed. I am using Box2D as well, and from what I remember Box2D is not really deterministic, but being able to recreate most of the state from the first run would be OK in many cases. Recreating the same random values would require restoring the same seed/time, etc., I assume. Any insight?
I have been fiddling with calabash-ios with varying success. I know it's possible to record plays and play them back later; I just assume it wouldn't recreate random values.
A quick look at the Box2D FAQ suggests Box2D is deterministic enough:
For the same input, and same binary, Box2D will reproduce any simulation. Box2D does not use any random numbers nor base any computation on random events (such as timers, etc).
However, people often want more stringent determinism. People often want to know if Box2D can produce identical results on different binaries and on different platforms. The answer is no. The reason for this answer has to do with how floating point math is implemented in many compilers and processors. I recommend reading this article if you are curious: http://www.yosefk.com/blog/consistency-how-to-defeat-the-purpose-of-ieee-floating-point.html
If you encapsulate the input state the player gives to the world each time step (e.g. in a POD struct), then it's pretty straightforward to write it to a file. For example, suppose you have input state like:
struct inputStruct {
    bool someButtonPressed;
    bool someOtherKeyPressed;
    float accelerometerZ;
    // ... etc
};
Then you can do something like this each time step:
inputStruct currentState;
currentState.someButtonPressed = ...; // fill from live user input

if ( recording ) {
    fwrite( &currentState, sizeof(inputStruct), 1, file );
}
else if ( replaying ) {
    inputStruct tmpState;
    int readCount = fread( &tmpState, sizeof(inputStruct), 1, file );
    if ( readCount == 1 )
        currentState = tmpState; // overwrite live input
}

applyState( currentState ); // apply forces, game logic from input
world->Step( ... );         // step the Box2D world
Please excuse the C++ centric code :~) No doubt there are equivalent ways to do it with Objective-C.
This method lets you regain live control when the input from the file runs out. 'file' is a FILE* that you would have to open in the appropriate mode ("rb" or "wb") when the level was loaded. If the bug you're chasing causes a crash, you might need to fflush after writing to make sure the input state actually gets written before the crash.
As you have noted, this is highly unlikely to work across different platforms. You should not assume that the replay file will reproduce the same result on anything other than the device that recorded it (which should be fine for debugging purposes).
As for random values, you'll need to ensure that anything using random values that may affect the Box2D world goes through a deterministic random generator which is not shared with other code, and you'll need to record the seed that was used for each replay. You might like to use one of the many implementations of the Mersenne Twister found at http://en.wikipedia.org/wiki/Mersenne_twister
When I say 'not shared', suppose you also use the MT algorithm to generate random directions for particles, purely for rendering purposes - you would not want to use the same generator instance for that as you do for physics-related randomizations.
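As a minimal sketch of a dedicated, seeded generator for physics-only randomness (the names here are invented, and this uses the standard C++11 std::mt19937 rather than a hand-rolled Mersenne Twister):

#include <cstdint>
#include <random>

// Sketch: one generator instance reserved for physics;
// record 'seed' alongside the input replay file.
struct PhysicsRandom {
    std::mt19937 engine;
    explicit PhysicsRandom(uint32_t seed) : engine(seed) {}
    float range(float lo, float hi) {
        return std::uniform_real_distribution<float>(lo, hi)(engine);
    }
};

// A separate instance (or plain rand()) can serve rendering-only effects,
// so cosmetic randomness never perturbs the physics replay.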

What's the most efficient way to ignore code in lua?

I have a chunk of Lua code that I'd like to be able to (selectively) ignore. I don't have the option of not reading it in, and sometimes I'd like it to be processed, sometimes not, so I can't just comment it out (that is, there's a whole bunch of blocks of code and I either have the option of reading none of them or reading all of them). I came up with two ways to implement this (there may well be more; I'm very much a beginner): either enclose the code in a function and then call or not call the function (and once I'm sure I'm past the point where I would call the function, I can set it to nil to free up the memory), or enclose the code in an if ... end block. The former has slight advantages, in that there are several of these blocks and it makes it easier for one block to load another even if the main program didn't request it, but the latter seems the more efficient. However, not knowing much, I don't know if the efficiency saving is worth it.
So how much more efficient is:
if false then
-- a few hundred lines
end
than
throwaway = function ()
-- a few hundred lines
end
throwaway = nil -- to ensure that both methods leave me in the same state after garbage collection
?
If it depends a lot on the Lua implementation, how big would the "few hundred lines" need to be to reliably spot the difference, and what sort of stuff should it include to best test it (the main use of the blocks is to define a load of possibly useful functions)?
Lua isn't smart enough to discard the compiled code for the function, so you're not going to save any memory.
In terms of speed, you're talking about a difference of nanoseconds which happens once per program execution. It harms your efficiency to worry about this; it has virtually no relevance to actual performance. Write the code that expresses your intent most clearly, without trying to be clever. If you run into performance issues, they will be a million miles away from this decision.
If you want to save memory, which is understandable on a mobile platform, you could put your conditional code in its own module and never load it at all if not needed (if your framework supports it; e.g. MOAI does, Corona doesn't).
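A minimal sketch of that, assuming the optional code lives in a file extras.lua on the module path (the flag name is made up):

-- Load the optional module only when it is actually wanted;
-- require() compiles and caches it on first use.
local extras
if needExtras then
    extras = require("extras")
end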
If there is really a lot of unused code, you can define it as a collection of strings and loadstring() them when needed. Storing functions as strings will reduce the initial compile time; however, for most functions the string representation probably takes up more memory than its compiled form, and what you save when compiling is probably not significant before a few thousand lines... Just saying.
If you put this code in a table, you could compile it transparently through a metatable for minimal performance impact on repeated calls.
Example code:
local code_uncompiled = {
    f = [=[
        local x, y = ...;
        return x+y;
    ]=]
};

code = setmetatable({}, {
    __index = function(self, k)
        self[k] = assert(loadstring(code_uncompiled[k]));
        return self[k];
    end
});

local ff = code.f; -- code of f gets compiled here
ff = code.f;       -- no compilation here
for i=1, 1000 do
    print( ff(2*i, -i) );     -- no compilation here either
    print( code.f(2*i, -i) ); -- no compile either, but table access (slower)
end
The beauty of it is that this compiles as needed and you don't really have to waste another thought on it, it's just like storing a function in a table and allows for a lot of flexibility.
Another advantage of this solution is that when the amount of dynamically loaded code gets out of hand, you could transparently change it to load code from external files on demand through the __index function of the metatable. Also, you can mix compiled and uncompiled code by populating the "code" table with "real" functions.
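For instance, a sketch of the on-demand file variant (the chunks/ directory layout is hypothetical):

-- Compile chunks from disk on first use; cache them in the table afterwards.
local code_from_disk = setmetatable({}, {
    __index = function(self, k)
        local chunk = assert(loadfile("chunks/" .. k .. ".lua"))
        self[k] = chunk
        return chunk
    end
});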
Try the one that makes the code more legible to you first. If it runs fast enough on your target machine, use that.
If it doesn't run fast enough, try the other one.
Lua can ignore multiple lines with a block comment:
function dostuff()
    blabla()  -- placeholder statements
    faaaaa()
    --[[
    ignore this
    and this
    maybe this
    this as well
    ]]--
end

How far should I go to avoid internal getters/setters within a class

I have more of a "how much is too much" question. I have a Java class that defines several getters/setters for use by external classes (about 30 altogether). However, the Java class itself requires the use of these variables as well in some cases.
I understand the concept of using member fields instead of the getter methods within a class, but the getters in this case perform a function (unmasking an integer to be specific) to create the value to be returned.
So from a performance and memory reduction perspective, for the few calls within the class that need those values, I'm curious if I should...
a. Just call the getter
b. Do the unmasking wherever I need the values throughout the class, just like the getter
c. Create variables to hold those values, load them up by calling all the getters on startup, and use those within the class (30 or so integers may not be a serious memory risk, but I would also need to add code to keep them updated if a user sets new values, since the stored value is updated and masked).
Any thoughts are appreciated!
A. Just call the getter.
From a performance and memory-reduction perspective, there is really little to no impact in constantly re-using the same functions. That's what code reuse is all about.
From a high level execution/performance view, we do something like this:
code: myGetter()
program : push the program state (very few cycles)
program : jump to mygetter (1 clock cycle)
program : execute mygetter (don't know your code but probably very few cycles)
program : save the result ( 1 clock cycle)
program : pop the program state ( very few cycles )
program : continue to next line of code ( 1 clock cycle)
In performance, the golden rule of thumb is to spend your time optimizing what really makes a difference. For all general purposes, disk I/O takes up the most time and resources.
Hope this helps!
a) Call the getter - as you pointed out it's the right and clean way in your case.
b) and c) would be premature optimization and most likely do more harm than good (unless you REALLY know that this particular spot will be a hot spot in your code AND your JIT-compiler will not be able to optimize it for you).
If you really hit performance problems at some point, then profile the application and optimize only hot spots manually.
Don't Repeat Yourself is the guiding principle here. Trying to save a function call by repeating the same unmasking code throughout a class is a recipe for disaster. I would just call the getter within the class.
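As a sketch of what that looks like (the class, mask, and field names here are invented for illustration):

// Sketch: the getter unmasks on demand; internal code reuses it (DRY).
public class Settings {
    private static final int VALUE_MASK = 0x00FF;
    private int packed; // masked storage

    public int getValue() {
        return packed & VALUE_MASK; // unmask on every call
    }

    void logValue() {
        // internal use: call the getter rather than repeating the mask logic
        System.out.println("value=" + getValue());
    }
}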

Is there a way to find out the current count of a win32 semaphore?

I'm looking for a way with no side effects.
Ideally, the following code would do the trick:
long currentCount = 0;
::ReleaseSemaphore(h, 0, &currentCount);
But unfortunately 0 is not allowed as the value of lReleaseCount, so the call returns FALSE.
If you want that value for external monitoring (as you suggest in your comment), then either use the previous-count value returned by a call to ReleaseSemaphore(), or, IMHO a better solution, implement your own 'interlocked' counter in addition to your semaphore. You then have your monitoring count and can access it in any way you like. Just don't use it as a way of seeing if you can 'enter' the semaphore...
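A minimal sketch of that shadow counter (assuming the semaphore handle h is created elsewhere; the counter is for monitoring only, and its value is necessarily approximate at any instant):

#include <windows.h>

// Updated alongside every wait/release; initialize to the semaphore's
// initial count. Read it freely for monitoring, never for synchronization.
volatile LONG g_monitorCount = 0;

void Acquire(HANDLE h) {
    WaitForSingleObject(h, INFINITE);
    InterlockedDecrement(&g_monitorCount);
}

void Release(HANDLE h) {
    ReleaseSemaphore(h, 1, NULL);
    InterlockedIncrement(&g_monitorCount);
}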
As Chris rightly says, you can't obtain the current count as it is potentially always changing.
This might be a little too late but I think NtQuerySemaphore() is probably what you want to take a look at.
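NtQuerySemaphore() is undocumented and could change between Windows versions, so treat the following as a sketch; the prototype matches what NT internals references describe, and the function must be resolved from ntdll.dll at run time:

#include <windows.h>

typedef struct _SEMAPHORE_BASIC_INFORMATION {
    ULONG CurrentCount;
    ULONG MaximumCount;
} SEMAPHORE_BASIC_INFORMATION;

typedef LONG (WINAPI *NtQuerySemaphore_t)(
    HANDLE SemaphoreHandle,
    ULONG SemaphoreInformationClass, // 0 = SemaphoreBasicInformation
    PVOID SemaphoreInformation,
    ULONG SemaphoreInformationLength,
    PULONG ReturnLength);

// Returns the count at the instant of the query, or -1 on failure.
LONG QuerySemaphoreCount(HANDLE h)
{
    NtQuerySemaphore_t fn = (NtQuerySemaphore_t)
        GetProcAddress(GetModuleHandle(TEXT("ntdll.dll")), "NtQuerySemaphore");
    SEMAPHORE_BASIC_INFORMATION info;
    if (fn && fn(h, 0, &info, sizeof(info), NULL) >= 0)
        return (LONG)info.CurrentCount;
    return -1;
}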
There is no such thing as a "current count" of a Win32 semaphore - which is why you can't get it.
I mean, patently, at some point of time the count on a semaphore will be some value, but from the point of view of a thread, unless it takes action to increase or decrease the semaphore count, another thread might make any answer retrieved entirely invalid the moment it is computed.
It is for this reason that windows api synchronization functions do not let you take the previous lock count without a side effect. The side effect guarantees that you have a valid window of opportunity to actually use the value in a meaningful way.
The obvious "work around" would be to do something like
LONG count = 0;
if( WAIT_OBJECT_0 == WaitForSingleObject(hSemaphore, 0L) )
{
    // Semaphore's count was at least one; put the token back and
    // read the previous count via ReleaseSemaphore.
    ReleaseSemaphore(hSemaphore, 1, &count);
}
Why is this better? I'm not sure. But perhaps there is a possibility of doing something meaningful between waiting and releasing that would have been a race condition if ReleaseSemaphore was allowed to release 0.
The sysinternals tool Process Explorer can display the internals of win32 handles, including semaphores and their current/max counts. Good enough for debugging but not so useful for automated monitoring.
If Process Explorer can do it, you probably can too ... but it will probably require deep knowledge of windows internals.