What is the function of "millis()" in this code snippet? - arduino-uno

What is the function of millis() in this code snippet?
if (millis() > timer) {
    timer = millis() + 5000;
    ether.browseUrl(PSTR("/demo/"), "aphorisms.php", website, response_callback);
}

The "timer" variable bumps the current cumulative time, as measured by millis(), and sets a value of 5 seconds greater. This snippet will reside in a larger loop, and whenever the time since the last iteration exceeds 5 seconds, will execute the subsequent statement and bump the timer again. Else, the snippet simply passes thru.
If you wanted to do something every 5 seconds, or whatever interval you choose, this is a simple way to do that. Of course, that interval may be elongated, depending on other code in the loop.
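For reference, here is a minimal sketch of the same pattern, with Serial.println() standing in for the ether.browseUrl() call; it uses the subtraction form millis() - lastRun >= interval, which also behaves correctly when millis() rolls over after roughly 49 days:

unsigned long lastRun = 0;              // time of the last trigger, in ms
const unsigned long interval = 5000;    // 5 seconds

void setup() {
    Serial.begin(9600);
}

void loop() {
    // Runs on every pass through loop(); only the guarded block is rate-limited.
    if (millis() - lastRun >= interval) {
        lastRun = millis();
        Serial.println("5 seconds elapsed");   // placeholder for the real periodic work
    }
    // other non-blocking work can go here
}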

Related

Golang time.Ticker to tick on clock times

I am working on a Go program that requires me to run a certain function at (fairly) exact clock times (for example, every 5 minutes, but specifically at 3:00, 3:05, 3:10, etc., not just every 5 minutes after the start of the program).
Before coming here and requesting your help, I tried implementing a ticker that does that, and even though it seems to work OK-ish, it feels a little dirty/hacky and it's not super exact (it's only fractions of a millisecond off, but I'm wondering whether there's reason to believe that discrepancy grows over time).
My current implementation is below. What I'm really asking is: is there a better solution to achieve what I'm trying to achieve (one I can have a little more confidence in)?
type ScheduledTicker struct {
    C chan time.Time
}

// NewScheduledTicker returns a ticker that ticks on defined intervals after the hour
// For example, a ticker with an interval of 5 minutes and an offset of 0 will tick at 0:00:00, 0:05:00 ... 23:55:00
// Using the same interval, but an offset of 2 minutes will tick at 0:02:00, 0:07:00 ... 23:57
func NewScheduledTicker(interval time.Duration, offset time.Duration) *ScheduledTicker {
    s := &ScheduledTicker{
        C: make(chan time.Time),
    }
    go func() {
        now := time.Now()
        // Figure out when the first tick should happen
        firstTick := now.Truncate(interval).Add(interval).Add(offset)
        // Block until the first tick
        <-time.After(firstTick.Sub(now))
        t := time.NewTicker(interval)
        // Send initial tick
        s.C <- firstTick
        for {
            // Forward ticks from the native time.Ticker to the ScheduledTicker channel
            s.C <- <-t.C
        }
    }()
    return s
}
Most timer APIs across all platforms work in terms of system time rather than wall-clock time. What you are expressing is a wall-clock interval.
As the other answer mentioned, there are open-source packages available. A quick Google search for "Golang Wall Clock ticker" yields interesting results.
Another thing to consider: on Windows there are "scheduled tasks" and on Linux there are "cron jobs" that will do the wall-clock wakeup for you. Consider using those if all your program does between intervals is sleep until it is time to do the needed work.
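For example, a crontab entry along these lines (the program path here is made up) runs at :00, :05, :10 and so on of every hour, which matches the "3:00, 3:05, 3:10" requirement:

*/5 * * * * /usr/local/bin/myprogram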
But if you build it yourself...
Trying to get things done on wall-clock intervals is complicated by machines going to sleep when laptop lids close (which suspends system time) and by skew between the system and wall clocks. And sometimes users like to change their PC's clock: you could wake up, poll time.Now, and discover you're at yesterday! This is unlikely to happen on servers running in the cloud, but it's a real thing on personal devices.
On my product team, when we really want wall-clock time, or need to do something on intervals that span more than an hour, we wake up at a more frequent interval to see whether "it's time yet". For example, if there's something we want to execute every 12 hours, we might wake up and poll the time every hour. (We use C++ where I work instead of Go.)
Otherwise, my general algorithm for a 5-minute interval would be to sleep (or set a system timer) for 1 minute or shorter. After every return from time.Sleep, check the current time (time.Now()) to see whether it is at or after the next expected interval time; if so, post your channel event and then recompute the next wake-up time. You can even change the granularity of your sleep if you wake up spuriously early.
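A rough sketch of that poll-and-check loop (in C++, since that is what we use, with a 5-minute interval, a deliberately coarse 1-second poll, and a placeholder action):

#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using namespace std::chrono;
    const auto interval = minutes(5);
    // Round the current wall-clock time down to a whole interval, then step to the next boundary.
    auto next = time_point_cast<minutes>(system_clock::now());
    next += interval - minutes(next.time_since_epoch().count() % interval.count());
    for (;;) {
        std::this_thread::sleep_for(seconds(1));     // short poll rather than one long sleep
        auto now = system_clock::now();              // re-read the clock every time we wake
        if (now >= next) {
            std::cout << "tick\n";                   // post the event / do the periodic work here
            do { next += interval; } while (next <= now);   // recompute; resync if we slept past several boundaries
        }
    }
}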
But be careful! The Go Time value contains both a wall-clock reading and a monotonic clock reading; this includes the result returned by time.Now(). Time.Add(), Time.Sub(), and some of the other Time comparison functions work on the monotonic reading first before falling back to the wall clock. You can strip the monotonic reading out of the time.Now() result by doing this:
func GetWallclockNow() time.Time {
    var t time.Time = time.Now()
    return time.Date(t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second(), t.Nanosecond(), t.Location())
}
Then subsequent operations like Add and After will be in wall clock space.

How to properly implement waiting of async computations?

I have a little trouble and am asking for a hint. I am on the Windows platform, doing calculations in the following manner:
int input = 0;
int output; // junk bytes here
while (true) {
    async_enqueue_upload(input);    // completes instantly, but transfer will take 10us
    async_enqueue_calculate();      // completes instantly, but computation will take 80us
    async_enqueue_download(output); // completes instantly, but transfer will take 10us
    sync_wait_finish();             // must wait while output is fully calculated, and there is no junk
    input = process(output);        // I cannot launch the next step without doing this on the host.
}
My question is about the sync_wait_finish() part. I must wait for all devices to finish, so I can combine all the results, process the data, and upload a new portion that is based on the previous computation step. I need to synchronize data between each step, so I can't parallelize the steps. I know this is not the most performant arrangement. So let's proceed to the question.
I have two ways of checking completion within wait_finish(). The first is to put the thread to sleep between polls until the work completes:
while (!is_completed())
    Sleep(1);
It has very low performance, because the actual calculation takes, say, 100 us, while the minimal Windows scheduler timestep is 1 ms, which gives an unacceptable ~10x drop in performance.
The second way is to check for completion in an empty infinite loop:
while (!is_completed())
    {} // do_nothing();
It recovers the ~10x in computation performance, but it is also an unsuitable solution, because it keeps a full CPU core busy doing absolutely useless work. How can I make the CPU "sleep" for exactly the time I need? (Each step has an equal amount of work.)
How is this case usually solved, when the calculation time is too large for an active spin-wait but too small compared to the scheduler timestep? A related sub-question: how do I do that on Linux?
Fortunately, I succeeded in finding an answer on my own. In short: I should use Linux for that.
My investigation shows the following. On Windows there is a hidden function in ntdll, NtDelayExecution(). It is not exposed through the SDK, but it can be loaded in the following manner:
static int(__stdcall *NtDelayExecution)(BOOL Alertable, PLARGE_INTEGER DelayInterval) = (int(__stdcall*)(BOOL, PLARGE_INTEGER)) GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtDelayExecution");
It allows sleep intervals to be specified in 100 ns units. However, even that did not work well, as shown in the following benchmark:
SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS); // requires Admin privileges
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
uint64_t hpf = qpf(); // QueryPerformanceFrequency()
uint64_t s0 = qpc();  // QueryPerformanceCounter()
uint64_t n = 0;
while (1) {
    sleep_precise(1); // NtDelayExecution with DelayInterval = -1, i.e. one relative 100-nanosecond interval
    auto s1 = qpc();
    n++;
    auto passed = s1 - s0;
    if (passed >= hpf) {
        std::cout << "freq=" << (n * hpf / passed) << " hz\n";
        s0 = s1;
        n = 0;
    }
}
That yields something less than a 2000 Hz loop rate, and the result varies from run to run. That led me to the Windows thread-switching scheduler, which is simply not suited for real-time tasks, with its minimum interval of 0.5 ms (plus overhead). By the way, does anyone know how to tune that value?
Next came the Linux question: what can it do? So I built a tiny custom 4.14 kernel with Buildroot and ran the same benchmark code there. I replaced qpc() to return clock_gettime() data using the CLOCK_MONOTONIC clock, qpf() to simply return the number of nanoseconds in a second, and sleep_precise() to call clock_nanosleep(). I failed to find out what the difference between CLOCK_MONOTONIC and CLOCK_REALTIME is.
I was quite surprised to get a whopping 18.4 kHz frequency right out of the box, and it was quite stable. Testing several intervals, I found I could set the loop to almost any frequency up to 18.4 kHz, but also that the actual measured wait time is about 1.6 times what I asked for. For example, if I ask to sleep for 100 us it actually sleeps for ~160 us, giving a ~6.25 kHz frequency. Nothing else is running on the system, just the kernel, BusyBox and this test. I am not an experienced Linux user, and I am still wondering how I can tune this to be more real-time and deterministic. Can I push that maximum frequency even higher?
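For reference, the Linux-side replacements described above look roughly like this (a sketch rather than my exact code; ns_now(), ns_per_sec() and sleep_precise() are the stand-ins for qpc(), qpf() and the NtDelayExecution call):

#include <cstdint>
#include <iostream>
#include <time.h>

static uint64_t ns_now() {                       // replacement for qpc()
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return uint64_t(ts.tv_sec) * 1000000000ull + ts.tv_nsec;
}

static uint64_t ns_per_sec() { return 1000000000ull; }   // replacement for qpf()

static void sleep_precise(uint64_t ns) {         // replacement for the NtDelayExecution sleep
    timespec req{ time_t(ns / 1000000000ull), long(ns % 1000000000ull) };
    clock_nanosleep(CLOCK_MONOTONIC, 0, &req, nullptr);  // flags = 0 means a relative sleep
}

int main() {
    uint64_t hpf = ns_per_sec();
    uint64_t s0 = ns_now();
    uint64_t n = 0;
    while (true) {
        sleep_precise(100);                      // ask for a 100 ns sleep, analogous to one 100 ns tick on Windows
        uint64_t s1 = ns_now();
        n++;
        uint64_t passed = s1 - s0;
        if (passed >= hpf) {
            std::cout << "freq=" << (n * hpf / passed) << " hz\n";
            s0 = s1;
            n = 0;
        }
    }
}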

Understanding Delta time in LWJGL

I've started looking into LWJGL and I'm particularly having trouble understanding how delta time works. I have browsed other questions and websites related to this, but it is still a confusing topic to wrap my head around. It would be great if someone here could help me out, so please bear with me.
I understand that the delta time for 60 FPS would be 16, and around double that if the frame rate is 30. I don't understand how this is calculated. Is it the time between frames? Sorry for the noobish question.
private long getTime() {
    return (Sys.getTime() * 1000) / Sys.getTimerResolution();
}

private int getDelta() {
    long currentTime = getTime();
    int delta = (int) (currentTime - lastTime);
    lastTime = getTime();
    return delta;
}
As opiop65 already said, the delta time is simply the time elapsed between the beginning of your last frame and the beginning of your current frame.
How does it work?
Delta time can be in any kind of unit: nanoseconds, milliseconds (usually the standard) or seconds. As you said, delta time is 16 when the game is running at 60 FPS and 32 when it runs at 30 FPS. As for the why, it's simple: in order for a game to run at 60 frames per second it has to produce a frame every 1000/60 (= 16.666667) milliseconds, and if it is running at 30 frames per second it has to produce a frame every 1000/30 (= 33.333333) milliseconds.
But why do we use delta time?
We use delta time because we want to make movement, and all sorts of other things, time-dependent rather than frame-dependent. Let's say you want one of your game's characters to move 1 unit horizontally per second. How do you do that? Obviously, you can't just add 1 to the character's X position every update, because it would get moved 1*x times per second, where x is your FPS (assuming you update the character every frame). That would mean that somebody running the game at 1 FPS sees the character move 1 unit per second, while somebody running at 5000 FPS sees it move 5000 units per second. Of course, that is unacceptable.
One could say: move the character 1/16.6667 units on every update. But then again, somebody at 1 FPS moves 1/16.6667 units per second, as opposed to the person running at 5000 FPS, who moves 5000*(1/16.6667) units per second.
Yes, you can enable V-Sync, but what if somebody has a 120 Hz monitor (or even higher) rather than a 60 Hz one?
Yes, you can lock the frame rate, but your players wouldn't be too happy about that. Also, that wouldn't stop the character from slowing down when the game drops below 60 FPS. So what now?
Delta time to the rescue!
All you have to do is just to move your character 1*delta on every update.
Delta time is small when the game runs at a high FPS and large when it runs at a low FPS. That means the character of someone running at a high FPS moves in smaller steps but more frequently, while the character of someone running at a low FPS moves in larger steps less frequently, and in the end they cover equal distances over the same time.
Please note that it does matter what unit you use when multiplying with the delta time:
If you use milliseconds, then at 60 FPS your delta would be 16.6667, ending up with 1*16.6667 = 16.6667 units of movement every frame. However, if you measure your delta time in seconds, then at 60 FPS your delta would be 0.016667, meaning your character would move 0.016667 units every frame.
This is not something you should worry about, just keep it in mind.
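To make that concrete, the heart of a delta-timed update is a single multiplication. Here is a tiny, self-contained illustration (plain C++ with std::chrono rather than LWJGL; speed, x and the 16 ms sleep that fakes a ~60 FPS frame are all made up for the example):

#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    double x = 0.0;              // position in units
    const double speed = 1.0;    // 1 unit per second, whatever the frame rate
    auto last = clock::now();
    for (int frame = 0; frame < 5; ++frame) {
        std::this_thread::sleep_for(std::chrono::milliseconds(16));         // stand-in for rendering a frame
        auto now = clock::now();
        double delta = std::chrono::duration<double>(now - last).count();   // seconds since the previous frame
        last = now;
        x += speed * delta;      // time-dependent, not frame-dependent movement
        std::cout << "delta=" << delta << " s, x=" << x << "\n";
    }
}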
Delta time is simply the time it takes for one frame to "dispose" of itself and the next one to display on the screen. It's basically the time between frames, as you put it. From Google:
Mathematics. an incremental change in a variable.
Let's pick apart your code.
return (Sys.getTime() * 1000) / Sys.getTimerResolution();
This line simply returns the current time in (I believe) milliseconds.
long currentTime = getTime();
int delta = (int)(currentTime - lastTime);
lastTime = getTime();
return delta;
The first line simply gets the current time. The second line then calculates delta by subtracting lastTime (the time when the last frame was displayed) from currentTime (the time when the current frame is displayed). Then lastTime is set to currentTime, which is when the current frame is displayed. It's really simple when you think about it: it's just the change in time between frames.

What's the best way to keep a Ruby process running forever?

I have to run a file.rb that performs a micro-task (inserting a query into a database) every second.
I used a for loop over (1..10^9), but I got a CPU usage exceeded alert! So what's the best way to avoid wasting all the CPU?
The simplest way to run forever is just to loop:
loop do
  run_db_insert
  sleep 1
end
If it's important that you maintain a 1 Hz rate, note that the DB insert takes some amount of time, so the one-second sleep means each cycle takes (DB time + 1) seconds and you will steadily fall behind. If the DB interaction reliably takes less than a second, you can modify the sleep so that you wake at the next one-second boundary:
loop do
  run_db_insert
  sleep(Time.now.to_f.ceil - Time.now.to_f)
end
A simple loop with a sleep should do the job.
while true
  # do stuff here
  sleep 1 # wait one second
end

Measuring execution time of selected loops

I want to measure the running times of selected loops in a C program so as to see what percentage of the total execution time of the program (on Linux) is spent in these loops. I should be able to specify the loops for which the performance should be measured. I have tried out several tools (VTune, HPCToolkit, OProfile) in the last few days and none of them seem to do this. They all find the performance bottlenecks and just show the time for those. That's because these tools only record times above a threshold (~1 ms), so if a loop takes less time than that, its execution time won't be reported.
The basic block counting feature of gprof depends on a feature of older compilers that is no longer supported.
I could manually write a simple timer using gettimeofday or something like that, but in some cases it won't give accurate results. For example:
for (i = 0; i < 1000; ++i)
{
    for (j = 0; j < N; ++j)
    {
        // do some work here
    }
}
Now, I want to measure the total time spent in the inner loop, so I will have to put a call to gettimeofday inside the first loop. gettimeofday itself will then get called 1000 times, which introduces its own overhead, and the result will be inaccurate.
Unless you have an in-circuit emulator or break-out box around your CPU, there's no such thing as timing a single loop iteration or a single instruction. You need to bulk up your test runs to something that takes at least several seconds each in order to reduce error due to other things going on in the CPU, the OS, etc.
If you're wanting to find out exactly how much time a particular loop takes to execute, and it takes less than, say, 1 second to execute, you're going to need to artificially increase the number of iterations in order to get a number that is above the "noise floor". You can then take that number and divide it by the number of artificially inflated iterations to get a figure that represents how long one pass through your target loop will take.
If you're wanting to compare the performance of different loop styles or techniques, the same thing holds: you're going to need to increase the number of iterations or passes through your test code in order to get a measurement in which what you're interested in dominates the time slice you're measuring.
This is true whether you're measuring performance using sub-millisecond high performance counters provided by the CPU, the system date time clock, or a wall clock to measure the elapsed time of your test.
Otherwise, you're just measuring white noise.
Typically, if you want to measure the time spent in the inner loop, you put the timing calls outside the outer loop and then divide by the (outer) loop count. That assumes, of course, that the inner loop's time is relatively constant for any j.
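Applied to the question's example, that might look like the sketch below (using clock_gettime with CLOCK_MONOTONIC, which has much finer granularity than gettimeofday; the inner-loop body is a placeholder):

#include <stdio.h>
#include <time.h>

static double seconds_between(const struct timespec *a, const struct timespec *b) {
    return (b->tv_sec - a->tv_sec) + (b->tv_nsec - a->tv_nsec) / 1e9;
}

int main(void) {
    enum { OUTER = 1000, N = 100000 };
    volatile double sink = 0.0;                /* keeps the work from being optimized away */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);       /* one timer call before the outer loop */
    for (int i = 0; i < OUTER; ++i)
        for (int j = 0; j < N; ++j)
            sink += j * 0.5;                   /* stand-in for the real inner-loop work */
    clock_gettime(CLOCK_MONOTONIC, &t1);       /* one timer call after the outer loop */

    double total = seconds_between(&t0, &t1);
    printf("total %.6f s, per outer iteration %.9f s\n", total, total / OUTER);
    return 0;
}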
Any profiling instructions incur their own overhead, but presumably the overhead is the same regardless of where they're inserted, so "it all comes out in the wash." Presumably you're looking for spots where there are considerable differences between the runtimes of the two compared processes, in which case a pair of timing calls like this won't be an issue (you need one at the "end" too, to get the time delta), since one routine will be 2x or more as costly as the other.
Most platforms offer some sort of higher-resolution timer, too, although the one we use here is hidden behind an API so that the "client" code is cross-platform. I'm sure with a little looking you can turn one up. Even then, there's little likelihood that you'll get better than 1 ms accuracy, so it's preferable to run the code several times in a row and time the whole run (then divide by the loop count, natch).
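If C++ is an option, std::chrono::steady_clock is one portable way to wrap such a timer these days; a minimal sketch of that kind of wrapper:

#include <chrono>

// A minimal cross-platform high-resolution timer wrapper (a sketch; in practice
// this would sit behind whatever timing API your project already exposes).
class HiResTimer {
public:
    void start() { begin_ = std::chrono::steady_clock::now(); }
    double elapsed_seconds() const {
        return std::chrono::duration<double>(std::chrono::steady_clock::now() - begin_).count();
    }
private:
    std::chrono::steady_clock::time_point begin_ = std::chrono::steady_clock::now();
};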
I'm glad you're looking for percentage, because that's easy to get. Just get it running. If it runs quickly, put an outer loop around it so it takes a good long time. That won't affect the percentages. While it's running, get stackshots. You can do this with Ctrl-Break in gdb, or you can use pstack or lsstack. Just look to see what percentage of stackshots display the code you care about.
Suppose the loops take some fraction of time, like 0.2 (20%) and you take N=20 samples. Then the number of samples that should show them will average 20 * 0.2 = 4, and the standard deviation of the number of samples will be sqrt(20 * 0.2 * 0.8) = sqrt(3.2) = 1.8, so if you want more precision, take more samples. (I personally think precision is overrated.)
