Flowplayer getTime as a float - flowplayer

So, the Flowplayer documentation says there are two ways to get the current play time of a Player object: getTime() and getStatus().time. Unfortunately, both of those return ints (in seconds), and I am creating a screenshot and need the current timestamp in tenths of a second (hundredths would be even better). Is it possible to get a more accurate value (ideally in JS; I can manage AS if necessary, but that is annoying)?

No, apparently there is no way to retrieve that information.

Related

gSOAP - is there a way to specify the time for soap_wsse_add_UsernameTokenDigest?

I'm trying to use gSOAP to talk to a network camera that supports ONVIF, and I need a way to specify the time that soap_wsse_add_UsernameTokenDigest uses when it hashes the password.
At the moment I'm unable to ensure that both the camera and my client have proper NTP time sync. Therefore, I'd like to take the approach used by tools like python-onvif and simply apply an offset to the time used in generating the UsernameToken. The camera's date/time can be retrieved without authentication, so computing such an offset is trivial.
My problem is that I can't see any way to get soap_wsse_add_UsernameTokenDigest to use anything except the current time when it computes the password hash.
Is there any way to change what time soap_wsse_add_UsernameTokenDigest uses, short of changing the system clock?
And a look at the source code for soap_wsse_add_UsernameTokenDigest answers the question: NO, there's no way to specify that time or an offset because it simply calls time(NULL) directly.
So my options are to modify soap_wsse_add_UsernameTokenDigest, compute the hashes myself and call soap_wsse_add_UsernameTokenText, or find some way to ensure time sync.
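For reference, the password digest defined by the WS-Security UsernameToken Profile is Base64(SHA-1(nonce + created + password)). Below is a rough sketch of computing that digest with an adjustable clock offset, assuming OpenSSL is available; this is not gSOAP's actual internals, and wiring the resulting nonce/created/digest values into the outgoing security header is left out (it would still need the wsse plugin or hand-built headers).

    /* Rough sketch, assuming OpenSSL; not gSOAP's actual code.
     * digest = Base64( SHA-1( nonce-bytes + created-string + password ) ) */
    #include <openssl/evp.h>
    #include <openssl/sha.h>
    #include <string.h>
    #include <time.h>

    void wsse_digest_with_offset(const unsigned char *nonce, size_t nonce_len,
                                 const char *password, long clock_offset_sec,
                                 char created[32], char digest_b64[64])
    {
        time_t t = time(NULL) + clock_offset_sec;   /* apply the camera/client offset */
        struct tm tm_utc;
        gmtime_r(&t, &tm_utc);
        strftime(created, 32, "%Y-%m-%dT%H:%M:%SZ", &tm_utc);

        unsigned char sha[SHA_DIGEST_LENGTH];
        SHA_CTX ctx;
        SHA1_Init(&ctx);
        SHA1_Update(&ctx, nonce, nonce_len);          /* raw (decoded) nonce bytes */
        SHA1_Update(&ctx, created, strlen(created));
        SHA1_Update(&ctx, password, strlen(password));
        SHA1_Final(sha, &ctx);

        /* 20 SHA-1 bytes -> 28 Base64 characters plus NUL terminator */
        EVP_EncodeBlock((unsigned char *)digest_b64, sha, SHA_DIGEST_LENGTH);
    }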

How to obtain a kernel timestamp for an interrupt?

I have an event from the realtime world, which generates an interrupt. I need to register this event to one of the Linux kernel timescales, like CLOCK_MONOTONIC or CLOCK_REALTIME, with the goal of establishing when the event occurred in real calendar time. What is the currently recommended way to do this? My google search found some patches submitted back in 2011 to support it, but the interrupt-handling code has been heavily revised since then and I don't see a reference to timestamps anymore.
For my intended application the accuracy requirements are low (1 ms). Still, I would like to know how to do this properly. I should think it's possible to get into the microsecond range, if one can exclude the possibility of higher-priority interrupts.
If you need only low precision, you could get away with reading jiffies.
However, if CONFIG_HZ is less than 1000, you will not even get 1 ms resolution.
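As a minimal sketch of the jiffies approach (kernel C; the helper names are made up):

    #include <linux/jiffies.h>

    static unsigned long event_stamp;        /* jiffies at the moment of the event */

    static void record_event(void)           /* call this from the interrupt handler */
    {
        event_stamp = jiffies;
    }

    static unsigned int event_age_ms(void)   /* elapsed ms; resolution limited by CONFIG_HZ */
    {
        return jiffies_to_msecs(jiffies - event_stamp);
    }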
For a high-resolution timestamp, see how firewire-cdev.c does it:
    case CLOCK_REALTIME:      getnstimeofday(&ts);  break;
    case CLOCK_MONOTONIC:     ktime_get_ts(&ts);    break;
    case CLOCK_MONOTONIC_RAW: getrawmonotonic(&ts); break;
If I understood your needs right, you can use the getnstimeofday() function for this purpose.
If you need a high-precision monotonic clock value (which is usually a good idea), you should look at the ktime_get_ts() function (defined in linux/ktime.h). getnstimeofday(), suggested in the other answer, returns the "wall" time, which may actually appear to go backwards on occasion, resulting in unexpected behavior for some applications.
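A minimal sketch of taking the timestamp inside the interrupt handler (kernel C; the handler and variable names are made up, and newer kernels replace ktime_get_ts() with ktime_get_ts64()):

    #include <linux/interrupt.h>
    #include <linux/ktime.h>
    #include <linux/time.h>

    static struct timespec event_ts;          /* when the interrupt was seen */

    static irqreturn_t event_irq_handler(int irq, void *dev_id)
    {
        /* Monotonic clock: not affected by settimeofday()/NTP wall-clock jumps. */
        ktime_get_ts(&event_ts);

        /* ... acknowledge the hardware and do the real work ... */
        return IRQ_HANDLED;
    }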

Time profiler between a certain range of time?

I wonder how I can set up the Time Profiler instrument to show me the calls made during a certain period of time. I don't want it to show me all the calls over the entire run.
Is this possible?
I've been trying with flags but I don't see anything to change.
Basically I want to focus on a certain peak.
Option-drag on the timeline in Instruments to include only results from that time range. Simple as that, really.
There is no way (that I'm aware of anyway) to trigger arbitrary flags in Instruments from user code. I've come up with a couple of alternatives.
The simplest one I've found is to put a call to sleep(1) right before and right after the stuff I want to look at; this means I can easily identify a period of total idle right before and after the zone of interest. Crude, but effective.
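For illustration, the crude marker approach looks like this (the function under investigation is hypothetical):

    #include <unistd.h>

    void do_interesting_work(void);    /* hypothetical code being profiled */

    void profile_zone_of_interest(void)
    {
        sleep(1);                      /* idle marker: start of the zone */
        do_interesting_work();
        sleep(1);                      /* idle marker: end of the zone */
    }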
The other alternative is that you can use Instruments' custom instrument mechanism to instrument certain calls. This can, similarly, give you other items on the timeline that you can use for reference. These can be challenging to create and get just right, so most often I just use the cruder method described above.
HTH
Not sure when this changed, but in Xcode 11, Option-dragging only performs a zoom; it doesn't change the range of reported data. Cmd-dragging does nothing. The key is to just drag across the range, but do it in the content region of the Time Profiler plot area, not in the "time bar" at the top.

How to prevent time-based cheats on a time-based simulation game?

In the iPhone game "Tiny Tower", I'm guessing it uses some kind of simulation based on the time elapsed between the last play session and the current time, because you can set the clock forward and you will get the benefit of the fake elapsed time span.
Is there an algorithm that I can use to prevent this sort of thing? (Or at least make it difficult enough for the average user to pull off!)
Edit: thanks, I understand that, despite my wording, there's no way to fully protect things stored on the client side, but I want to make cheating at least more difficult than simply changing the time!
The GameCube had a way to do this, so it must be possible.
Is there an event triggered when the iPhone's time is changed? If so, you can react to it.
Another solution is to require the game to be online when it is launched; that way you can check the time against a remote server.
You could also check whether you get an event on phone login or wake-up and react to it, saving the time at that moment in your DB. That would give you the last unmodified time.
A last possible trick is to check a file you know will be modified by an action that happens before the time change (such as login), and look at its "last modification" date (see the stat() sketch after this answer).
You can investigate in the GPS direction as well. A GPS receiver needs to be synchronised with the satellites it contacts, so it must keep track of time in some way, and maybe there is an API for that.
Unfortunately you are on an iPhone, which means your possibilities are limited, since applications have very few rights and are sandboxed.
EDIT:
Just thought of this: can you create events in the iPhone calendar and check whether they have been triggered? You could set a fake meeting or something for every day. Not clean, but creative.
EDIT 2: Can you schedule code for iOS to execute in, say, 60 minutes? If so, set such a timer, pass it the time you expect it to be when the code runs, then when the code actually runs, compare the two and inform your program.
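As a sketch of the file-modification trick mentioned above (plain C; the path would be whatever file your login action touches):

    #include <sys/stat.h>
    #include <time.h>

    /* Returns the file's last-modification time, or (time_t)-1 on error.
     * If the file was written before the user changed the clock, this gives
     * an independent reference point. */
    time_t last_modified(const char *path)
    {
        struct stat st;
        if (stat(path, &st) != 0)
            return (time_t)-1;
        return st.st_mtime;
    }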
One way to prevent it is to monitor the passage of time by checking login timestamps in a database. It doesn't matter if the client's iPhone's time is off; the database on your end will still know how long it's been since the last login.
I think that if you have internet access you can take the time from a server.
A second solution: you can record the datetime, and every time you see a big difference between the recorded datetime and the running datetime, you know there might be a problem.
It's not elegant, I know.
You can also record a small amount of datetimes at which the application started and check the difference against the running datetime.
Also, you can tie each "Activity" to a "Datetime" so the "Updates" (levels, etc.) can't be claimed again.
Because the system datetime can be changed by the user, there is always potential for this kind of hack.
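A minimal sketch of that recorded-datetime check (plain C; the plausibility threshold is up to you):

    #include <time.h>

    /* Returns 1 if the elapsed time since the last saved datetime looks plausible,
     * 0 if the clock went backwards or jumped implausibly far forward. */
    int elapsed_time_trusted(time_t last_saved, time_t now, double max_plausible_sec)
    {
        double elapsed = difftime(now, last_saved);
        if (elapsed < 0.0)                  /* clock moved backwards */
            return 0;
        if (elapsed > max_plausible_sec)    /* suspiciously large jump forward */
            return 0;
        return 1;
    }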
Call a web service to get the time rather than relying on the phone. There are several places you could get the time from (Google is your friend), or you can create one yourself and use the local time of the machine the service runs on.
You could also use Network Time Protocol (NTP) servers to get a consistent time.
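For reference, a rough SNTP query in C (POSIX sockets; "pool.ntp.org" is just an example host, and timeout handling is omitted for brevity):

    #include <arpa/inet.h>
    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    #define NTP_UNIX_OFFSET 2208988800UL   /* seconds between the 1900 and 1970 epochs */

    time_t sntp_time(const char *host)     /* e.g. "pool.ntp.org" */
    {
        unsigned char pkt[48] = { 0x1B };  /* LI=0, VN=3, Mode=3 (client request) */
        struct addrinfo hints = { .ai_socktype = SOCK_DGRAM }, *res;
        if (getaddrinfo(host, "123", &hints, &res) != 0)
            return (time_t)-1;

        time_t result = (time_t)-1;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd >= 0 &&
            sendto(fd, pkt, sizeof pkt, 0, res->ai_addr, res->ai_addrlen) == (ssize_t)sizeof pkt &&
            recv(fd, pkt, sizeof pkt, 0) == (ssize_t)sizeof pkt) {
            /* Transmit timestamp: big-endian seconds field at byte offset 40. */
            unsigned long secs = ((unsigned long)pkt[40] << 24) |
                                 ((unsigned long)pkt[41] << 16) |
                                 ((unsigned long)pkt[42] << 8)  |
                                  (unsigned long)pkt[43];
            result = (time_t)(secs - NTP_UNIX_OFFSET);
        }
        if (fd >= 0)
            close(fd);
        freeaddrinfo(res);
        return result;
    }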

Is there a generally acceptable definition of (soft) realtime delays?

I'm trying to find a benchmark for how long users are willing to wait for a response from a remote service. In my case the response is for very useful but not business critical validation of data entry. I guess that there must have been some work done in the HCI space on this.
If you know of a generally accepted definition for soft realtime responses then great but I'd also appreciate your well reasoned thoughts.
Chris
US DOD MIL-STD 1472-F Human Engineering Standard has the most widely accepted requirements for maximum allowed response time (from Table XXII, page 196, times in seconds):
Key Response (Key depression until positive response, e.g., "click"): 0.1
Key Print (Key depression until appearance of character): 0.2
Page Turn (End of request until first few lines are visible): 1.0
Page Scan (End of request until text begins to scroll): 0.5
XY Entry (From selection of field until visual verification): 0.2
Function (From selection of command until response): 2.0
Pointing (From input of point to display point): 0.2
Sketching (From input of point to display of line): 0.2
Local Update (Change to image using local data base, e.g., new menu list): 0.5
Host Update (from display buffer): 2.0
File Update (Change where data is at host in readily accessible form): 10.0
Inquiry - Simple (e.g., a scale change of existing image): 2.0
Inquiry - Complex (Image update requires an access to a host file): 10.0
Error Feedback (From command until display of a commonly used message): 2.0
As you can see, acceptable response time depends on what response the user is waiting for. For something like a pulldown menu appearing, it's 0.5 seconds max. For a full page load in a browser, you want something to appear in 1.0 s to 2.0 s and the full page loaded in 10.0 s. In all the above, shorter response times are better. Only in bizarre circumstances will users object to a 0.001s response time.
In any case, if the response time will be greater than 0.5 s, you need to provide feedback such as a throbber or hourglass sprite. If the response time is at least 5-15 s (depending on which standard you use), provide a progress bar. With a progress bar, very long response times (on the order of minutes or even hours) may be acceptable as long as you frame it for the user as a "batch" process rather than an interactive program. It's much better for the user to make all input and wait an hour than to make input on four occasions, waiting 15 minutes after each.
The above list has the accepted standards. How long your users are willing to wait (e.g., before giving up) essentially boils down to the user making a cost-benefit analysis. Is what I'm going to get worth the wait? What are my sunk costs? Is there an alternative (e.g., another web site) that can do it better? Can I do other things while I wait to make the most of my time? However, whatever users are willing to tolerate, you can bet they'll resent delays greater than the standards above.
Human reaction time seems to be around 200 ms; anything in that range will be perceived as instantaneous. That sort of number is hard to achieve, especially in an application that gets information from remote services.
If you take a look at Google's search suggestion box, the lag there is minimal, well under a second. It's astoundingly fast, and really remarkable for a web application. This is really nice for Google's users, but it's bad news for you. These days, users expect most applications to react with the same sort of speed and efficiency; anything slower is considered rather laggy. However, it's worth noting that people's patience usually varies with the complexity of the task at hand. A simple form submit should never take much time, but something like uploading photos is expected to take a while.
My feeling is this: go with your gut. If your application is fairly simple then you should try to get the wait/load time down to less than a second. If you can't, then your best bet is to add an indicator so the user knows that some computations are being done in the background. This can be in the form of a small animation or a progress bar.
Unfortunately, the answer to this question is not typically a well-defined number. Users' expectations vary widely and can change depending on what it is you're talking about.
As computers continue to become more ubiquitous and we (the consumers) continue to have growing expectations of speed, remote services, websites, and even applications will need to continue to respond more quickly. Generally speaking, you want everything to be as fast as possible.
With this said, I would look at what your remote service is for. Since you said the response is "very useful", to me that means it will probably get used frequently. People tend to use what is useful. If that's the case, I would look for ways to make that remote service respond quickly.
Of course, there is also the caveat that you don't want to start optimizing before the service is written. What is the current response time? What is the context in which this will be used? Those factors will do a lot to determine the longest users are willing to wait for the service.
You might want to search for "SLA" or "Service Level Agreement". Those are the documents in a web business that make guarantees as to how long data will take to get back to the user, whether it's an HTML document or a web service call.
