Daylight Savings and sleep

I have a C application that sleeps for 1 minute between tasks. How will this be affected when Daylight Savings Time ends? At 3:00am, when the clocks go back to 2:00am, will sleep suspend execution for 1 minute or 1 hour and 1 minute?
Example:
while (1)
{
    sleep(60);
    do_something();
}

No, your code will not be affected when Daylight Saving Time ends. No problem.
When you use the sleep function, your program suspends execution for the amount of time you specify as the parameter; the system's wall-clock time does not affect the function's behaviour, because sleep counts elapsed seconds rather than watching the clock.
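As a quick sanity check, here is a minimal C sketch of the same loop body: time() counts seconds since the epoch, which is not shifted by time zones or DST, so the printed difference stays at about 60 whether or not the clocks "go back" during the sleep.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    time_t before = time(NULL);   /* seconds since the epoch; DST does not change this */
    sleep(60);                    /* suspends execution for ~60 elapsed seconds */
    time_t after = time(NULL);
    printf("slept for about %ld seconds\n", (long)(after - before));
    return 0;
}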

Related

Golang time.Ticker to tick on clock times

I am working on a Go program and it requires me to run a certain function at (fairly) exact clock times (for example, every 5 minutes, but specifically at 3:00, 3:05, 3:10, etc., not just every 5 minutes after the start of the program).
Before coming here and requesting your help, I tried implementing a ticker that does that, and even though it seems to work ok-ish, it feels a little dirty/hacky and it's not super exact (it's only fractions of milliseconds off, but I'm wondering if there's reason to believe that discrepancy increases over time).
My current implementation is below, and what I'm really asking is, is there a better solution to achieve what I'm trying to achieve (and that I can have a little more confidence in)?
type ScheduledTicker struct {
    C chan time.Time
}

// NewScheduledTicker returns a ticker that ticks on defined intervals after the hour
// For example, a ticker with an interval of 5 minutes and an offset of 0 will tick at 0:00:00, 0:05:00 ... 23:55:00
// Using the same interval, but an offset of 2 minutes will tick at 0:02:00, 0:07:00 ... 23:57:00
func NewScheduledTicker(interval time.Duration, offset time.Duration) *ScheduledTicker {
    s := &ScheduledTicker{
        C: make(chan time.Time),
    }
    go func() {
        now := time.Now()
        // Figure out when the first tick should happen
        firstTick := now.Truncate(interval).Add(interval).Add(offset)
        // Block until the first tick
        <-time.After(firstTick.Sub(now))
        t := time.NewTicker(interval)
        // Send initial tick
        s.C <- firstTick
        for {
            // Forward ticks from the native time.Ticker to the ScheduledTicker channel
            s.C <- <-t.C
        }
    }()
    return s
}
Most timer APIs across all platforms work in terms of system (elapsed) time instead of wall-clock time. What you are asking for is a wall-clock interval.
As the other answer expressed, there are open source packages available. A quick Google search for "Golang Wall Clock ticker" yields interesting results.
Another thing to consider: on Windows there are "scheduled tasks" and on Linux there are "cronjobs" that will do the wall-clock wakeup interval for you. Consider using those if all your program is going to do is sleep/tick between intervals before doing the needed work.
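For example, if all that is needed is "run this program every 5 clock minutes", a crontab entry along these lines (the path is just a placeholder) fires at :00, :05, :10, ... of every hour and leaves the clock alignment to the OS:

*/5 * * * * /usr/local/bin/mytask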
But if you build it yourself...
Trying to get things done on wall clock intervals is complicated by desktop PCs going to sleep when laptop lids close (suspending system time) and clock skew between system and wall clocks. And sometimes users like to change their PC's clocks - you could wake up and poll time.Now and discover you're at yesterday! This is unlikely to happen on servers running in the cloud, but a real thing on personal devices.
On my product team, when we really want wall-clock time or need to do something on intervals that span more than an hour, we'll wake up at a more frequent interval to see if "it's time yet". For example, if there's something we want to execute every 12 hours, we might wake up and poll the time every hour. (We use C++ where I work instead of Go.)
Otherwise, my general algorithm for a 5 minute interval would be to sleep (or set a system timer) for 1 minute or shorter. After every return from time.Sleep, check the current time (time.Now()) to see if it is at or after the next expected interval time. Post your channel event and then recompute the next wake-up time. You can even change the granularity of your sleep time if you wake up spuriously early.
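Here is a rough C-style sketch of that polling loop (the same shape maps directly onto time.Sleep and time.Now in Go); the 5-minute interval, the 20-second nap, and do_work are illustrative placeholders:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* stand-in for the periodic work */
static void do_work(void) { puts("tick"); }

int main(void)
{
    const time_t interval_s = 5 * 60;   /* 5-minute wall-clock interval (illustrative) */
    /* next wall-clock time that is a whole multiple of the interval */
    time_t next = (time(NULL) / interval_s + 1) * interval_s;
    for (;;) {
        sleep(20);                      /* short nap; this sets the wake-up granularity */
        time_t now = time(NULL);        /* re-read the clock after every nap */
        if (now >= next) {              /* at or past the deadline? */
            do_work();
            next = (now / interval_s + 1) * interval_s;   /* recompute the next deadline */
        }
    }
}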
But be careful! The golang Time value contains both a wall-clock and a monotonic clock reading. This includes the result returned by time.Now(). Time.Add, Time.Sub, and the other Time comparison functions use the monotonic reading first before falling back to wall-clock time. You can strip the monotonic reading out of the time.Now() result by doing this:
func GetWallclockNow() time.Time {
    var t time.Time = time.Now()
    return time.Date(t.Year(), t.Month(), t.Day(), t.Hour(), t.Minute(), t.Second(), t.Nanosecond(), t.Location())
}
Then subsequent operations like Add and After will be in wall clock space.

What is the function of "millis()" in this code snippet?

What is the function of millis() in this code snippet?
if (millis() > timer) {
    timer = millis() + 5000;
    ether.browseUrl(PSTR("/demo/"), "aphorisms.php", website, response_callback);
}
The "timer" variable holds a deadline in the same milliseconds-since-startup units that millis() returns. Whenever the current millis() value passes that deadline, the code pushes the deadline 5 seconds (5000 ms) into the future and executes the ether.browseUrl() call; otherwise the snippet simply falls through. Since this snippet lives inside a larger loop, the net effect is that the call runs at most about once every 5 seconds.
If you wanted to do something every 5 seconds, or whatever interval you choose, this is a simple way to do that. Of course, that interval may be elongated, depending on other code in the loop.
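For context, here is the same pattern as a complete Arduino-style sketch; doSomething() is a hypothetical placeholder, and the subtraction form of the test (now - lastRun) keeps working when millis() rolls over after roughly 49.7 days, which the millis() > timer form does not.

const unsigned long INTERVAL_MS = 5000;   // 5 seconds
unsigned long lastRun = 0;

void doSomething() {
  // periodic work goes here
}

void setup() {
  // nothing needed for this example
}

void loop() {
  unsigned long now = millis();
  if (now - lastRun >= INTERVAL_MS) {   // true roughly once every 5 seconds
    lastRun = now;
    doSomething();
  }
  // the rest of loop() keeps running on every pass; nothing blocks
}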

Is QueryPerformanceCounter counter process specific?

https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx
Imagine that I measure some part of my code (20 ms).
Context switching happened, and my thread was displaced by another thread which executed for 20 ms.
Then I received a quantum of time back from the scheduler and performed some calculations for 1 ms.
If I calculate the elapsed time, what will I get? 41 ms or 21 ms?
If I calculate the elapsed time, what will I get? 41 ms or 21 ms?
QueryPerformanceCounter reports elapsed wall-clock time, not the CPU time consumed by your thread, so the 20 ms during which your thread was switched out is included. The answer will be 41 ms.
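For reference, a minimal sketch of the usual QueryPerformanceCounter measurement pattern (the Sleep call just stands in for whatever work is being measured); whatever happens to the thread in between, the difference between the two counter reads divided by the frequency is elapsed real time.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, stop;
    QueryPerformanceFrequency(&freq);   /* counter ticks per second */
    QueryPerformanceCounter(&start);

    Sleep(20);                          /* stand-in for the measured work */

    QueryPerformanceCounter(&stop);
    double elapsed_ms = (double)(stop.QuadPart - start.QuadPart) * 1000.0 / (double)freq.QuadPart;
    printf("elapsed: %.3f ms\n", elapsed_ms);   /* wall-clock elapsed, including time switched out */
    return 0;
}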

Parallel-ForkManager, DBI. Faster than before forking, but still too slow

I have a very simple task on updating database.
my $pm = new Parallel::ForkManager(15);
for my $line (@lines) {
    my $pid = $pm->start and next;
    my $dbh2 = $dbh->clone();
    my $sth2 = $dbh2->prepare("update db1 set field1=? where field2 =?");
    my ($field1, $field2) = very_slow_subroutine();
    $sth2->execute($field1, $field2);
    $pm->finish;
}
$pm->wait_all_children;
I could just use $dbh2->do, but I doubt that is the reason for the slowness.
What is interesting is that it starts these 15 processes (or however many I specify) very fast, but right after that it slows down drastically. It is still noticeably faster than without forking, but I would expect more...
Edit:
very_slow_subroutine is a sub which gets an answer from a web service. The service can take anywhere from a fraction of a second to several seconds before timing out. I have to ask dozens of thousands of times... that is the reason I would like to fork.
And if this matters -- I am on Linux.
Parallel::ForkManager doesn't magically make things faster, it just lets you run your code multiple times and at the same time. In order to get the benefit out of it, you have to design your code for parallelism.
Think of it this way. It takes you 10 minutes to get to the store, shop, load your car, come back, and unload it. You need to get 5 loads. You alone can do it in 50 minutes. That is working in serial. 10 minutes * 5 trips one after the other = 50 minutes.
Let's say you get four friends to help. You all start off for the store at the same time. There's still 5 trips, and they still take 10 minutes, but because you did it in parallel the total time is only 10 minutes.
But it will never take less than 10 minutes, no matter how many trips you have to make or how many friends you get to help. That is why the process starts up fast, everybody gets into their cars and drives off to the store, but then nothing happens for a while because it still takes 10 minutes for everyone to do their job.
Same thing here. Your loop body takes X time to run. If you iterate through it Y times, it will take X * Y real world human time to run. If you run it in parallel Y times, ideally it will take just X time to run. Each parallel worker must still execute the full body of the loop taking X time.
In order to speed things up further, you have to break up the big bottleneck of very_slow_subroutine and make that work in parallel. Your SQL is so simple that it isn't the problem; very_slow_subroutine is where you should focus your optimization and parallelism efforts.
Let's say the store is really close, it's only a 1 minute drive (this is your SQL UPDATE), but shopping, loading and unloading takes 9 minutes (this is very_slow_subroutine). What if instead you have 5 cars and 15 friends? You load 3 people into each car. Driving to and from the store will take the same time, but now three people are working together to do the shopping, loading and unloading, taking only 4 minutes. Now each trip takes 5 minutes instead of 10.
This represents redesigning very_slow_subroutine to do its work in parallel. If it's just a big loop, you can put more workers on that loop. If it's a series of slow operations, you will have to redesign it to take advantage of parallel execution.
If you use too many workers you can clog up the system; it depends on what the bottleneck is. If it's CPU bound and you have 2 CPU cores, you'll probably see performance gains up to 3 to 5 workers ((cores * 2) + 1 is a good rule of thumb), and after that performance will drop off as the CPU spends more time switching between processes than doing work. If the bottleneck is IO, or an external service as is often the case with database and network calls, you can see great efficiencies throwing many workers at the problem. While one process is waiting around for a disk or network operation, the others can be using your CPU.
Whether parallelism can help depends on where your bottleneck is. If your CPU with 4 cores is the bottleneck, forking 4 processes might cause things to complete in about 1/4th the time under the best-case scenario, but spawning 15 processes is not going to improve things much more.
If, more likely, your bottleneck is in I/O, starting 15 processes that compete for the same I/O is not going to help much, although in cases where you have tons of memory to use as file cache, some improvement might be possible.
To explore the limits on your system, consider the following program:
#!/usr/bin/env perl
use strict;
use warnings;
use Parallel::ForkManager;
run(@ARGV);

sub run {
    my $count = @_ ? $_[0] : 2;
    my $pm = Parallel::ForkManager->new($count);
    for (1 .. 20) {
        $pm->start and next;
        sleep 1;
        $pm->finish;
    }
    $pm->wait_all_children;
}
My ancient laptop has a single CPU with 2 cores. Let's see what I get:
TimeThis : Command Line : perl sleeper.pl 1
TimeThis : Elapsed Time : 00:00:20.735
TimeThis : Command Line : perl sleeper.pl 2
TimeThis : Elapsed Time : 00:00:06.578
TimeThis : Command Line : perl sleeper.pl 4
TimeThis : Elapsed Time : 00:00:04.578
TimeThis : Command Line : perl sleeper.pl 8
TimeThis : Elapsed Time : 00:00:03.546
TimeThis : Command Line : perl sleeper.pl 16
TimeThis : Elapsed Time : 00:00:02.562
TimeThis : Command Line : perl sleeper.pl 20
TimeThis : Elapsed Time : 00:00:02.563
So, running with a max of 20 processes gives me a total run time of just over 2.5 seconds for sleeping one second 20 times.
On the other hand, with just one process, sleeping one second 20 times took just over 20 seconds. That is a huge improvement, but it also indicates a management overhead of more than 150% when you have 20 processes each sleeping for one second.
This is in the nature of parallel programming. There are a lot of formal treatments out there on what you can expect, but Amdahl's Law is required reading.
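As a quick reference, Amdahl's Law says that if a fraction p of the total work can run in parallel across N workers, the best possible speedup is

speedup = 1 / ((1 - p) + p / N)

so if, say, 90% of the run time is parallelizable waiting on the web service (p = 0.9), even an unlimited number of workers can never make the whole job more than 10 times faster; the serial 10% sets the floor.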

What's the best way to keep a Ruby process running forever?

I have to run a file.rb that performs a micro-task (inserting a query into a database) every second.
I have used a for loop (1..10^9), but I got a CPU usage exceeded alert! So what's the best way to not waste all the CPU?
The simplest way to run forever is just to loop
loop do
  run_db_insert
  sleep 1
end
If it's important that you maintain a 1 Hz rate, note that the DB insert takes some amount of time, so the one-second sleep means each cycle takes dbtime+1 second and you will steadily fall behind. If the DB interaction is reliably less than a second, you can modify the sleep to adjust for the next one-second interval.
loop do
  run_db_insert
  sleep(Time.now.to_f.ceil - Time.now.to_f)
end
A simple loop with a sleep call should do the job.
while true
  # do stuff here
  sleep 1 # wait one second
end
