I am a noob in Go, but I would like to change source code that writes data into a database every minute so that it writes every second. I am having trouble figuring out what Tick does in the code. config.Samplerate is an integer equal to 1, and "every minute" here means every 60 seconds.
What is this tick all about, and what does the <-tick at the end of the loop do, combined with the counter i?
i := 0
tick := time.Tick(time.Duration(1000/config.Samplerate) * time.Millisecond)

for {
    // Restart the accumulator loop every 60 seconds.
    if i > (60*config.Samplerate - 1) {
        i = 0
        // some code here
    }
    // some code there
    <-tick
    i++
}
tick is a channel in Go. If you look at the docs, time.Tick returns a channel and sends a value on it once per time interval, where the interval is time.Duration(1000/config.Samplerate) * time.Millisecond in your code, i.e. one second when Samplerate is 1. <-tick simply blocks until the next value arrives, so it waits for that interval to pass.
i keeps track of how many ticks (seconds) have passed: every time the channel ticks, you add one to i. The if statement checks when a full minute has passed.
So, the code inside the if statement fires every 60 seconds, while the code right under the if block fires every second.
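If all you want is the database write once per second instead of once per minute, the simplest change is to move the write out of the if block (which only fires every 60 ticks) into the loop body below it. As a minimal sketch (not the original program; writeToDB is a hypothetical placeholder for the database write), you could also drop the counter entirely and use a ticker:

ticker := time.NewTicker(time.Second) // or keep time.Duration(1000/config.Samplerate)*time.Millisecond
defer ticker.Stop()

for range ticker.C {
    writeToDB() // hypothetical placeholder; runs once per second
}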
I am learning the leaky bucket algorithm and want to get my hands dirty by writing some simple code with Redis plus a Go HTTP server.
When I searched here with the keywords redis, leaky, and bucket, there were many similar questions, as shown in [1], which is nice. However, after going through those threads and the wiki [2], I find I still have trouble understanding the entire logic. I suppose there is something I do not understand and am not even aware of. So I would like to rephrase it here; please correct me if I get it wrong.
The pseudo code:
key := "ip address, token or anything that can be the representative of a client"
redis_queue_size := 5
interval_between_each_request := 7
request := obtain_http_request_from_somewhere()
if check_current_queue_size() < redis_queue_size:
if is_queue_empty()
add_request_to_the_queue() // zadd "ip1" now() now() // now() is something like seconds, milliseconds or nanoseconds e.g. t = 1
process_request(request)
else
now := get_current_time()
// add_request_to_... retrieves the first element in the queue
// compute the expected timestamp to execute the request and its current time
// e.g. zadd "ip1" <time of the first elment in the queue + interval_between_each_request> now
add_request_to_redis_queue_with_timestamp(now, interval_between_each_request) // e.g. zadd "ip" <timestamp as score> <timestamp a request is allowed to be executed>
// Below function check_the_time_left...() will check how many time left at which the current request need to wait.
// For instance, the first request stored in the queue with the command
// zadd "ip1" 1 1 // t = 1
// and the second request arrives at t = 4 but it is allowed t be executed at t = 8
// zadd "ip1" 8 4 // where 4 := now, 8 := 1 + interval_between_each_request
// so the N will be 4
N := check_the_time_left_for_the_current_request_to_execute(now, interval_between_each_request)
sleep(N) // now the request wait for 4 seconds before processing the request
process_request(http_request_obj)
else
return // discard request
I understand the part where, when the queue is full, subsequent requests are discarded. However, I think I may misunderstand how, when the queue is not full, the incoming requests are reshaped so that they execute at a fixed rate.
I appreciate any suggestions.
[1]. https://stackoverflow.com/search?q=redis+leaky+bucket+&s=aa2eaa93-a6ba-4e31-9a83-68f791c5756e
[2]. https://en.wikipedia.org/wiki/Leaky_bucket#As_a_queue
If this is for simple rate-limiting, the sliding-window approach using a sorted set is what we see implemented by most Redis users: https://github.com/Redislabs-Solution-Architects/RateLimitingExample/blob/sliding_window/app.py
If you are set on a leaky bucket, you might consider using a Redis stream per consumerID (apiToken/IP address etc.) as follows (a rough Go sketch appears after the link below):
a request comes in for consumerID
XADD requests-[consumerID] MAXLEN [BUCKET SIZE]
spawn a goroutine for that consumerID if one isn't already running
get the current time
if XLEN of requests-[consumerID] is 0, exit the goroutine
XREAD COUNT [number_of_requests_per_period] BLOCK [time period - 1 ms] STREAMS requests-[consumerID]
get the current time and sleep for the remainder of the time period
https://redis.io/commands#stream details how streams work
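Here is a rough Go sketch of those steps. Assumptions not in the answer above: the go-redis v9 client (github.com/redis/go-redis/v9), stream names of the form requests-<consumerID>, a bucket size of 5 and a 7-second period taken from the question, and the actual request processing left as a stub.

package leakybucket

import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

const (
    bucketSize = 5               // the question's redis_queue_size
    period     = 7 * time.Second // the question's interval_between_each_request
)

// Enqueue corresponds to: XADD requests-<consumerID> MAXLEN <bucketSize> * ...
// Note that MAXLEN trims the oldest entries once the stream is full.
func Enqueue(ctx context.Context, rdb *redis.Client, consumerID, payload string) error {
    return rdb.XAdd(ctx, &redis.XAddArgs{
        Stream: "requests-" + consumerID,
        MaxLen: bucketSize,
        Values: map[string]interface{}{"payload": payload},
    }).Err()
}

// Drain is the per-consumer goroutine: handle at most one request per period
// and exit once the stream is empty.
func Drain(ctx context.Context, rdb *redis.Client, consumerID string) {
    stream := "requests-" + consumerID
    for {
        if n, err := rdb.XLen(ctx, stream).Result(); err != nil || n == 0 {
            return // nothing queued; exit the goroutine
        }
        start := time.Now()
        res, err := rdb.XRead(ctx, &redis.XReadArgs{
            Streams: []string{stream, "0"},
            Count:   1,
            Block:   period - time.Millisecond,
        }).Result()
        if err == nil {
            for _, s := range res {
                for _, msg := range s.Messages {
                    // ... process msg.Values here ...
                    rdb.XDel(ctx, stream, msg.ID)
                }
            }
        }
        if rest := period - time.Since(start); rest > 0 {
            time.Sleep(rest) // hold the one-request-per-period rate
        }
    }
}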
There are several ways you can implement a leaky bucket, but there need to be two separate parts to the process: one that puts things in the bucket, and another that removes them at a set interval if there is anything to remove.
You can use a separate goroutine that consumes the messages at a set interval. This simplifies your code, since one code path only has to look at the queue size and drop packets, while the other code path just consumes whatever is there.
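As a purely in-process illustration of that split (not Redis-backed; the 5-slot bucket and the 7-second interval reuse the question's numbers, and a request is just a string here), a bounded channel plus a ticker-driven consumer goroutine could look like this:

package main

import (
    "fmt"
    "time"
)

func main() {
    queue := make(chan string, 5) // the "bucket": at most 5 queued requests

    // Leak side: process at most one queued request every 7 seconds.
    go func() {
        ticker := time.NewTicker(7 * time.Second)
        defer ticker.Stop()
        for range ticker.C {
            select {
            case req := <-queue:
                fmt.Println("processing", req)
            default:
                // nothing queued in this interval
            }
        }
    }()

    // Fill side: enqueue if there is room, otherwise discard the request.
    for i := 0; ; i++ {
        req := fmt.Sprintf("request-%d", i)
        select {
        case queue <- req:
        default:
            fmt.Println("bucket full, dropping", req)
        }
        time.Sleep(time.Second) // stand-in for incoming HTTP requests
    }
}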
I am on the first chapter of The Go Programming Language (Addison-Wesley Professional Computing Series), and the 3rd exercise in the book asks me to measure code performance using the time package.
So, I came up with the following code.
start := time.Now()
var s, sep string
for i := 1; i < len(os.Args); i++ {
    s += sep + os.Args[i]
    sep = " "
}
fmt.Print(s)
fmt.Printf("\nTook %.2fs \n", time.Since(start).Seconds())
fmt.Println("------------------------------------------------")
start2 := time.Now()
fmt.Print(strings.Join(os.Args[1:], " "))
fmt.Printf("\nTook %.2fs", time.Since(start2).Seconds())
When I ran this code on Windows and Mac, it always returned 0.00 seconds. I added a pause in my code to check whether it's correct, and it seems fine. What I don't understand is why it always returns 0.00.
There is very little code between your start times and the time.Since() calls: in the first example just a few string concatenations and an fmt.Print() call, in the second example just a single fmt.Print() call. Your computer executes these very quickly.
So quickly, in fact, that the result is most likely less than a millisecond. And you print the elapsed time using the %.2f verb, which rounds the seconds to 2 fractional digits. That means if the elapsed time is less than 0.005 sec, it will be rounded down to 0. This is why you see 0.00s printed.
If you change the format to %0.12f, you will see something like:
Took 0.000027348000s
Took 0.000003772000s
Also note that the time.Duration value returned by time.Since() implements fmt.Stringer, and it "formats" itself intelligently to a unit that is more meaningful. So you may print it as-is.
For example if you print it like this:
fmt.Println("Took", time.Since(start))
fmt.Println("Took", time.Since(start2))
You will see an output something like this:
Took 18.608µs
Took 2.873µs
Also note that if you want to measure the performance of some code, you should use Go's built-in testing and benchmarking facilities, namely the testing package. For details, see Order of the code and performance.
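For instance, a minimal benchmark sketch (the file name, package name and sample arguments are made up for illustration) comparing the two approaches with the testing package could look like the following; run it with go test -bench=. and compare the reported ns/op:

// echo_test.go
package main

import (
    "strings"
    "testing"
)

var args = []string{"one", "two", "three", "four"}

func BenchmarkConcat(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var s, sep string
        for _, a := range args {
            s += sep + a
            sep = " "
        }
    }
}

func BenchmarkJoin(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = strings.Join(args, " ")
    }
}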
I have to simulate a scenario with an RSU that has limited processing capacity; it can only process a limited number of messages per time unit (say 1 second).
I tried to set a counter in the RSU application. The counter is incremented each time the RSU receives a message and decremented after processing it. Here is what I have done:
void RSUApp::onBSM(BasicSafetyMessage* bsm)
{
    if (msgCount >= capacity)
    {
        // drop msg
        this->getParentModule()->bubble("capacity limit");
        return;
    }

    msgCount++;
    // process message here
    msgCount--;
}
It seems useless: I tested it with a capacity limit of 1 and two vehicles sending messages at the same time, and the RSU processed both, although it should process one and drop the other.
Can anyone help me with this?
At the beginning of the onBSM method the counter is incremented, your logic gets executed, and finally the counter gets decremented. All those steps happen at once, i.e. in a single step of the simulation.
This is the reason why you don't see any effect.
What you probably want is for a certain number of messages to be processed within a certain time interval (e.g. 500 ms). It could look roughly like this (untested):
if (simTime() <= intervalEnd && msgCount >= capacity)
{
    this->getParentModule()->bubble("capacity limit");
    return;
} else if (simTime() > intervalEnd) {
    intervalEnd = simTime() + YOURINTERVAL;
    msgCount = 0;
}
......
The variable YOURINTERVAL would be the amount of time you would like to consider as the interval for your capacity.
You can use self-messaging with scheduleAt(simTime() + delay, yourmessage);
The delay will simulate the required processing time.
Hi, I can't seem to get my head around the correct way to do time arithmetic in Go.
I have a time "object" later initialized to Now() and stored.
insertTime time.Time
Later, I need to see if the item is older than 15 minutes.
How do i do this?
Do I need to create a Duration of 15 Minutes add it to the current time and compare? If so, how do I do that?
func (Time) After will be helpful, I believe. Schematically:
when := time.Now()
...
if time.Now().After(when.Add(15*time.Minute)) {
    // Conditionally process something if at least 15 minutes elapsed
}
Instead of a variable, when could be a field of some struct, for example.
Alternative approach:
deadline := time.Now().Add(15*time.Minute)
...
if time.Now().After(deadline) {
    // Conditionally process something if at least 15 minutes elapsed
}
I prefer the latter version personally.
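As an aside that neither snippet above uses, the same check is often written with time.Since, which reads naturally as "older than 15 minutes" (insertTime being the stored value from the question):

if time.Since(insertTime) > 15*time.Minute {
    // insertTime is more than 15 minutes old
}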
Suppose I want to run a task once per hour, but at a variable time during the hour. It doesn't have to be truly random; I just don't want to do it at the top of the hour every hour, for example. And I want to do it once per hour only.
This eliminates several obvious approaches, such as sleeping a random amount of time between 30 and 90 minutes, then sleeping again. It would be possible (and pretty likely) for the task to run several times in a row with a sleep of little more than 30 minutes.
The approach I'm thinking about looks like this: every hour, hash the Unix timestamp of the hour, and mod the result by 3600. Add the result to the Unix timestamp of the hour, and that's the moment when the task should run. In pseudocode:
while now = clock.tick; do
    // now = a unix timestamp
    hour = now - now % 3600;
    hash = md5sum(hour);
    the_time = hour + hash % 3600;
    if now == the_time; then
        do_the_work();
    end
end
I'm sure this will meet my requirements, but I thought it would be fun to throw this question out and see what ideas other people have!
For the next hour to do work in, just pick a random minute within that hour.
That is, pick a random time within the next interval to do the work in; this might be the same interval (hour) as the current one if work has carried over from the previous interval.
The "time to sleep" is simply the time until then. In a carry-over situation this can mean executing "immediately", if the randomly chosen time is already in the past: this ensures that a random time is picked each hour, unless the work takes more than an hour.
Don't make it more complex than it has to be - there is no reason to hash or otherwise muck with randomness here. This is how "Enterprise" solutions like SharePoint Timers (with an hourly schedule) work.
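In Go, a minimal sketch of this "random minute within the next hour" idea (doTheWork is a placeholder; the hour boundary comes from Truncate, which works on UTC wall-clock time) might be:

package main

import (
    "math/rand"
    "time"
)

func main() {
    for {
        // Next top of the hour, plus a random offset of 0-3599 seconds into it.
        nextHour := time.Now().Truncate(time.Hour).Add(time.Hour)
        runAt := nextHour.Add(time.Duration(rand.Intn(3600)) * time.Second)
        time.Sleep(time.Until(runAt))
        doTheWork()
    }
}

// doTheWork stands in for the real hourly task.
func doTheWork() {}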
Schedule your task (with cron or the like) to run at the top of every hour.
At the beginning of your task, sleep for a random amount of time, from 0 to (60 - (the estimated running time of your task + a fudge factor)) minutes.
If you don't want your task to run twice simultaneously, you can use a pid file. The task can check - after sleeping - for this file and wait for the currently running task to finish before starting again.
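A rough Go version of that approach (the path /tmp/mytask.pid, the 50-minute window and doTheWork are illustrative assumptions; for brevity this variant skips the hour instead of waiting when a previous run is still active):

package main

import (
    "math/rand"
    "os"
    "time"
)

func main() {
    // Launched at the top of every hour by cron.
    // Sleep 0-50 minutes, leaving headroom for the task plus a fudge factor.
    time.Sleep(time.Duration(rand.Intn(50)) * time.Minute)

    // Crude pid-file guard: O_EXCL fails if the previous run is still active.
    f, err := os.OpenFile("/tmp/mytask.pid", os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
    if err != nil {
        return // previous run still in progress; skip this hour
    }
    defer os.Remove("/tmp/mytask.pid")
    defer f.Close()

    doTheWork()
}

// doTheWork stands in for the real task.
func doTheWork() {}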
I've deployed my suggested solution and it is working very well. For example, once per minute I sample some information from a process I'm monitoring, but I do it at a variable time within the minute. I created a method on a Timestamp type, called RandomlyWithin, as follows, in Go:
// Timestamp is assumed to be an integer number of seconds, md5hasher a
// package-level hash.Hash (e.g. md5.New()), and binary is encoding/binary.
func (t Timestamp) RandomlyWithin(dur Timestamp, entropy ...uint32) Timestamp {
    // Start of the current interval (e.g. the top of the minute or hour).
    intervalStart := t - t%dur
    toHash := uint32(intervalStart)
    if len(entropy) > 0 {
        toHash += entropy[0]
    }
    // Hash the interval start (as big-endian bytes) to get a deterministic
    // pseudo-random offset within the interval.
    md5hasher.Reset()
    md5hasher.Write([]byte{
        uint8(toHash >> 24 & 255),
        uint8(toHash >> 16 & 255),
        uint8(toHash >> 8 & 255),
        uint8(toHash & 255)})
    randomNum := binary.BigEndian.Uint32(md5hasher.Sum(nil)[0:4])
    result := intervalStart + Timestamp(randomNum)%dur
    return result
}