Absolute difference between time.Duration - go

I have two variables of type time.Duration and I need to find the absolute difference between them.
For example:
v1 = 1 sec and v2 = 10 sec: the difference is 9 sec.
v1 = 10 sec and v2 = 1 sec: the difference is also 9 sec.
The variables can also contain hours, minutes and so on.
How can I do this in Go?

Try this:
package main
import (
    "fmt"
    "time"
)
func main() {
    a := 1 * time.Second
    b := 10 * time.Second
    c := absDiff(a, b)
    fmt.Println(c) // 9s
    fmt.Println(absDiff(b, a)) // 9s
}
func absDiff(a, b time.Duration) time.Duration {
    if a >= b {
        return a - b
    }
    return b - a
}
This is another form:
package main
import (
    "fmt"
    "time"
)
func main() {
    a := 1 * time.Second
    b := 10 * time.Second
    c := abs(a - b)
    fmt.Println(c) // 9s
}
func abs(a time.Duration) time.Duration {
    if a >= 0 {
        return a
    }
    return -a
}

Go 1.19
The Abs method was added to time.Duration in the standard library, so you can use that instead of rolling your own:
func main() {
    d1 := time.Second * 10
    d2 := time.Second
    sub1 := d1 - d2
    sub2 := d2 - d1
    fmt.Println(sub1.Abs()) // 9s
    fmt.Println(sub2.Abs()) // 9s
}

An elegant solution is to use a bit shift (>>):
func absDuration(d time.Duration) time.Duration {
    s := d >> 63       // 0 if d >= 0, -1 (all bits set) if d < 0
    return (d ^ s) - s // identity for d >= 0; flips the bits and adds 1 for d < 0
}
In Go, >> on a signed integer is an arithmetic shift (it copies the sign bit), so d >> 63 yields a mask of all zeros for non-negative values and all ones for negative values.
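A minimal runnable check of the trick (the example values below are my own):
package main
import (
    "fmt"
    "time"
)
func absDuration(d time.Duration) time.Duration {
    s := d >> 63
    return (d ^ s) - s
}
func main() {
    d := 1*time.Second - 10*time.Second // -9s
    fmt.Println(absDuration(d))         // 9s
    fmt.Println(absDuration(-d))        // 9s
}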

Related

Attempt to parallelize not fast enough

I read about Go's concurrency model and about the difference between concurrency and parallelism. To test parallel execution, I wrote the following program.
package main
import (
    "fmt"
    "runtime"
    "time"
)
const count = 1e8
var buffer [count]int
func main() {
    fmt.Println("GOMAXPROCS: ", runtime.GOMAXPROCS(0))
    // Initialise with dummy value
    for i := 0; i < count; i++ {
        buffer[i] = 3
    }
    // Sequential operation
    now := time.Now()
    worker(0, count-1)
    fmt.Println("sequential operation: ", time.Since(now))
    // Attempt to parallelize
    ch := make(chan int, 1)
    now = time.Now()
    go func() {
        worker(0, (count/2)-1)
        ch <- 1
    }()
    worker(count/2, count-1)
    <-ch
    fmt.Println("parallel operation: ", time.Since(now))
}
func worker(start int, end int) {
    for i := start; i <= end; i++ {
        task(i)
    }
}
func task(index int) {
    buffer[index] = 2 * buffer[index]
}
But the problem is: the results are not very pleasing.
GOMAXPROCS: 8
sequential operation: 206.85ms
parallel operation: 169.028ms
Using a goroutine does speed things up but not enough. I expected it to be closer to being twice as fast. What is wrong with my code and/or understanding? And how can I get closer to being twice as fast?
Parallelization is powerful, but its benefit is hard to see with such a small amount of work per element; doubling an array entry does almost no arithmetic, so the loop is largely bound by memory access and goroutine overhead. Here is some sample code with a heavier computational load, where the difference is much larger:
package main
import (
    "fmt"
    "math"
    "runtime"
    "time"
)
func calctest(nCPU int) {
    fmt.Println("Routines:", nCPU)
    ch := make(chan float64, nCPU)
    startTime := time.Now()
    a := 0.0
    b := 1.0
    n := 100000.0
    deltax := (b - a) / n
    stepPerCPU := n / float64(nCPU)
    for start := 0.0; start < n; {
        stop := start + stepPerCPU
        go f(start, stop, a, deltax, ch)
        start = stop
    }
    integral := 0.0
    for i := 0; i < nCPU; i++ {
        integral += <-ch
    }
    fmt.Println(time.Now().Sub(startTime))
    fmt.Println(deltax * integral)
}
func f(start, stop, a, deltax float64, ch chan float64) {
    result := 0.0
    for i := start; i < stop; i++ {
        result += math.Sqrt(a + deltax*(i+0.5))
    }
    ch <- result
}
func main() {
    nCPU := runtime.NumCPU()
    calctest(nCPU)
    fmt.Println("")
    calctest(1)
}
This is the result I get:
Routines: 8
853.181µs
Routines: 1
2.031358ms
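For comparison, here is a sketch of my own (not from the answer above) applying the same split-per-core idea back to the original buffer-doubling workload; since each element only gets one multiply per memory access, the speedup will still be limited by memory bandwidth:
package main
import (
    "fmt"
    "runtime"
    "sync"
    "time"
)
const count = 1e8
var buffer [count]int
func main() {
    for i := 0; i < count; i++ {
        buffer[i] = 3
    }
    nCPU := runtime.NumCPU()
    chunk := count / nCPU
    var wg sync.WaitGroup
    now := time.Now()
    for c := 0; c < nCPU; c++ {
        start := c * chunk
        end := start + chunk
        if c == nCPU-1 {
            end = count // the last chunk absorbs any remainder
        }
        wg.Add(1)
        go func(start, end int) {
            defer wg.Done()
            for i := start; i < end; i++ {
                buffer[i] = 2 * buffer[i]
            }
        }(start, end)
    }
    wg.Wait()
    fmt.Println("parallel operation:", time.Since(now))
}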

Sleep by fraction of integer

I am trying to write a program that pauses for random intervals of time that are decimal numbers.
Here is the program that is not working:
package main
import (
    "fmt"
    "math/rand"
    "time"
)
var test int
func main() {
    intervalGenerate()
}
func intervalGenerate() {
    var randint float64
    rand.Seed(time.Now().UnixNano())
    randInterval := randFloats(3, 7, 1)
    randint = randInterval[0]
    duration := time.Duration(randint) * time.Second
    fmt.Println("Sleeping for", duration)
    time.Sleep(duration)
    fmt.Println("Resuming")
}
func randFloats(min, max float64, n int) []float64 {
    res := make([]float64, n)
    for i := range res {
        res[i] = min + rand.Float64()*(max-min)
    }
    return res
}
Currently a random number between 3 and 7, including a fractional part, is generated, but the program always sleeps for a whole number of seconds: the conversion time.Duration(randint) truncates the float.
From what I understand, this fails because Sleep takes a Duration, which is an int64:
func Sleep(d Duration)
Is there a way to sleep a program for fractions of a second?
Go Playground:
https://play.golang.org/p/z-dnDBnUfxr
Use time.Millisecond, time.Microsecond, or time.Nanosecond depending on the level of granularity you need.
// sleep for 2.5 seconds
milliseconds := 2500
time.Sleep(time.Duration(milliseconds) * time.Millisecond)
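If you already have the interval as a float64 number of seconds, as in the question's randFloats, you can also multiply before converting so the fractional part survives; a minimal sketch (the example value is mine):
package main
import (
    "fmt"
    "time"
)
func main() {
    randint := 3.72 // e.g. a value returned by randFloats(3, 7, 1)
    duration := time.Duration(randint * float64(time.Second))
    fmt.Println("Sleeping for", duration) // Sleeping for 3.72s
    time.Sleep(duration)
    fmt.Println("Resuming")
}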

goroutine value return order [duplicate]

Why does the following code always print 2, 1 and not 1, 2?
func test(x int, c chan int) {
    c <- x
}
func main() {
    c := make(chan int)
    go test(1, c)
    go test(2, c)
    x, y := <-c, <-c // receive from c
    fmt.Println(x, y)
}
If you want to know what the order is, let your program include ordering information.
This example uses a function closure to generate a sequence.
The channel carries a struct of two numbers, one of which is the sequence number.
The sequence incrementer is safe across goroutines because a mutex guards the counter.
package main
import (
    "fmt"
    "sync"
)
type value_with_order struct {
    v int
    order int
}
var (
    mu sync.Mutex
)
func orgami(x int, c chan value_with_order, f func() int) {
    v := new(value_with_order)
    v.v = x
    v.order = f()
    c <- *v
}
func seq() func() int {
    i := 0
    return func() int {
        mu.Lock()
        defer mu.Unlock()
        i++
        return i
    }
}
func main() {
    c := make(chan value_with_order)
    sequencer := seq()
    for n := 0; n < 10; n++ {
        go orgami(1, c, sequencer)
        go orgami(2, c, sequencer)
        go orgami(3, c, sequencer)
    }
    received := 0
    for q := range c {
        fmt.Printf("%v\n", q)
        received++
        if received == 30 {
            close(c)
        }
    }
}
Here is a second version, where the sequencer is called from the main loop so that the sequence numbers reflect the order in which the goroutines are launched:
package main
import (
    "fmt"
    "sync"
)
type value_with_order struct {
    v int
    order int
}
var (
    mu sync.Mutex
)
func orgami(x int, c chan value_with_order, seqno int) {
    v := new(value_with_order)
    v.v = x
    v.order = seqno
    c <- *v
}
func seq() func() int {
    i := 0
    return func() int {
        mu.Lock()
        defer mu.Unlock()
        i++
        return i
    }
}
func main() {
    c := make(chan value_with_order)
    sequencer := seq()
    for n := 0; n < 10; n++ {
        go orgami(1, c, sequencer())
        go orgami(2, c, sequencer())
        go orgami(3, c, sequencer())
    }
    received := 0
    for q := range c {
        fmt.Printf("%v\n", q)
        received++
        if received == 30 {
            close(c)
        }
    }
}
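If the only goal is to read the results back in send order, another option (my own sketch, not part of the answer above) is to have each goroutine write into its own slot of a slice and print the slice once everything has finished:
package main
import (
    "fmt"
    "sync"
)
func main() {
    const n = 5
    results := make([]int, n)
    var wg sync.WaitGroup
    for i := 0; i < n; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            results[i] = i * 10 // each goroutine writes only its own index
        }(i)
    }
    wg.Wait()
    fmt.Println(results) // [0 10 20 30 40], order fixed by the slice index
}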

Concurrent integral calculation in Golang

I tried to compute the integral concurrently, but my program ended up being slower than computing the integral with a normal for loop. What am I doing wrong?
package main
import (
    "fmt"
    "math"
    "sync"
    "time"
)
type Result struct {
    result float64
    lock sync.RWMutex
}
var wg sync.WaitGroup
var result Result
func main() {
    now := time.Now()
    a := 0.0
    b := 1.0
    n := 100000.0
    deltax := (b - a) / n
    wg.Add(int(n))
    for i := 0.0; i < n; i++ {
        go f(a, deltax, i)
    }
    wg.Wait()
    fmt.Println(deltax * result.result)
    fmt.Println(time.Now().Sub(now))
}
func f(a float64, deltax float64, i float64) {
    fx := math.Sqrt(a + deltax*(i+0.5))
    result.lock.Lock()
    result.result += fx
    result.lock.Unlock()
    wg.Done()
}
3- For a performance gain, divide the work per CPU core instead of sharing a sync.RWMutex lock:
Using channels and runtime.NumCPU() gives roughly a 30x speedup: this version takes 2ms on 2 cores and 993µs on 8 cores, while your sample code takes 61ms on 2 cores and 40ms on 8 cores.
See this working sample code and its outputs:
package main
import (
    "fmt"
    "math"
    "runtime"
    "time"
)
func main() {
    nCPU := runtime.NumCPU()
    fmt.Println("nCPU =", nCPU)
    ch := make(chan float64, nCPU)
    startTime := time.Now()
    a := 0.0
    b := 1.0
    n := 100000.0
    deltax := (b - a) / n
    stepPerCPU := n / float64(nCPU)
    for start := 0.0; start < n; {
        stop := start + stepPerCPU
        go f(start, stop, a, deltax, ch)
        start = stop
    }
    integral := 0.0
    for i := 0; i < nCPU; i++ {
        integral += <-ch
    }
    fmt.Println(time.Now().Sub(startTime))
    fmt.Println(deltax * integral)
}
func f(start, stop, a, deltax float64, ch chan float64) {
    result := 0.0
    for i := start; i < stop; i++ {
        result += math.Sqrt(a + deltax*(i+0.5))
    }
    ch <- result
}
Output on 2 Cores:
nCPU = 2
2.0001ms
0.6666666685900485
Output on 8 Cores:
nCPU = 8
993µs
0.6666666685900456
Your sample code, Output on 2 Cores:
0.6666666685900424
61.0035ms
Your sample code, Output on 8 Cores:
0.6666666685900415
40.9964ms
2- For good benchmark statistics, use a large number of samples (a big n):
As you can see below, with n := 10000000.0 the concurrent version takes 110ms on 2 cores, while on the same CPU a single goroutine takes 215ms.
With n := 10000000.0 and a single goroutine, see this working sample code:
package main
import (
    "fmt"
    "math"
    "time"
)
func main() {
    now := time.Now()
    a := 0.0
    b := 1.0
    n := 10000000.0
    deltax := (b - a) / n
    result := 0.0
    for i := 0.0; i < n; i++ {
        result += math.Sqrt(a + deltax*(i+0.5))
    }
    fmt.Println(time.Now().Sub(now))
    fmt.Println(deltax * result)
}
Output:
215.0123ms
0.6666666666685884
With n := 10000000.0 and 2 goroutines, see this working sample code:
package main
import (
    "fmt"
    "math"
    "runtime"
    "time"
)
func main() {
    nCPU := runtime.NumCPU()
    fmt.Println("nCPU =", nCPU)
    ch := make(chan float64, nCPU)
    startTime := time.Now()
    a := 0.0
    b := 1.0
    n := 10000000.0
    deltax := (b - a) / n
    stepPerCPU := n / float64(nCPU)
    for start := 0.0; start < n; {
        stop := start + stepPerCPU
        go f(start, stop, a, deltax, ch)
        start = stop
    }
    integral := 0.0
    for i := 0; i < nCPU; i++ {
        integral += <-ch
    }
    fmt.Println(time.Now().Sub(startTime))
    fmt.Println(deltax * integral)
}
func f(start, stop, a, deltax float64, ch chan float64) {
    result := 0.0
    for i := start; i < stop; i++ {
        result += math.Sqrt(a + deltax*(i+0.5))
    }
    ch <- result
}
Output:
nCPU = 2
110.0063ms
0.6666666666686073
1- There is an optimum number of goroutines; beyond that point, adding more goroutines no longer reduces the program's execution time:
On a 2-core CPU, with the following code, the results are:
nCPU: 1, 2, 4, 8, 16
Time: 2.1601236s, 1.1220642s, 1.1060633s, 1.1140637s, 1.1380651s
The drop from nCPU=1 to nCPU=2 is large, but beyond that the gain is small, so nCPU=2 is the optimum for this sample code on a 2-core CPU, and using nCPU := runtime.NumCPU() is enough here.
package main
import (
    "fmt"
    "math"
    "time"
)
func main() {
    nCPU := 2 // 2.1601236s#1 1.1220642s#2 1.1060633s#4 1.1140637s#8 1.1380651s#16
    fmt.Println("nCPU =", nCPU)
    ch := make(chan float64, nCPU)
    startTime := time.Now()
    a := 0.0
    b := 1.0
    n := 100000000.0
    deltax := (b - a) / n
    stepPerCPU := n / float64(nCPU)
    for start := 0.0; start < n; {
        stop := start + stepPerCPU
        go f(start, stop, a, deltax, ch)
        start = stop
    }
    integral := 0.0
    for i := 0; i < nCPU; i++ {
        integral += <-ch
    }
    fmt.Println(time.Now().Sub(startTime))
    fmt.Println(deltax * integral)
}
func f(start, stop, a, deltax float64, ch chan float64) {
    result := 0.0
    for i := start; i < stop; i++ {
        result += math.Sqrt(a + deltax*(i+0.5))
    }
    ch <- result
}
Unless the work done inside each goroutine takes much longer than the combined overhead of switching contexts, carrying out the task, and updating a shared value under a mutex, it will be faster to do it serially.
Take a look at a slightly modified version. All I've done is add a delay of 1 microsecond in the f() function.
package main
import (
    "fmt"
    "math"
    "sync"
    "time"
)
type Result struct {
    result float64
    lock sync.RWMutex
}
var wg sync.WaitGroup
var result Result
func main() {
    fmt.Println("concurrent")
    concurrent()
    result.result = 0
    fmt.Println("serial")
    serial()
}
func concurrent() {
    now := time.Now()
    a := 0.0
    b := 1.0
    n := 100000.0
    deltax := (b - a) / n
    wg.Add(int(n))
    for i := 0.0; i < n; i++ {
        go f(a, deltax, i, true)
    }
    wg.Wait()
    fmt.Println(deltax * result.result)
    fmt.Println(time.Now().Sub(now))
}
func serial() {
    now := time.Now()
    a := 0.0
    b := 1.0
    n := 100000.0
    deltax := (b - a) / n
    for i := 0.0; i < n; i++ {
        f(a, deltax, i, false)
    }
    fmt.Println(deltax * result.result)
    fmt.Println(time.Now().Sub(now))
}
func f(a, deltax, i float64, concurrent bool) {
    time.Sleep(1 * time.Microsecond)
    fx := math.Sqrt(a + deltax*(i+0.5))
    if concurrent {
        result.lock.Lock()
        result.result += fx
        result.lock.Unlock()
        wg.Done()
    } else {
        result.result += fx
    }
}
With the delay, the result was as follows (the concurrent version is much faster):
concurrent
0.6666666685900424
624.914165ms
serial
0.6666666685900422
5.609195767s
Without the delay:
concurrent
0.6666666685900428
50.771275ms
serial
0.6666666685900422
749.166µs
As you can see, the longer it takes to complete a task, the more sense it makes to do it concurrently, if possible.

Go big.Int factorial with recursion

I am trying to implement this bit of code:
func factorial(x int) (result int) {
    if x == 0 {
        result = 1;
    } else {
        result = x * factorial(x - 1);
    }
    return;
}
as a big.Int so as to make it effective for larger values of x.
The following returns 0 for fmt.Println(factorial(r)), but the factorial of 7 should be 5040.
Any ideas on what I am doing wrong?
package main
import "fmt"
import "math/big"
func main() {
    fmt.Println("Hello, playground")
    //n := big.NewInt(40)
    r := big.NewInt(7)
    fmt.Println(factorial(r))
}
func factorial(n *big.Int) (result *big.Int) {
    //fmt.Println("n = ", n)
    b := big.NewInt(0)
    c := big.NewInt(1)
    if n.Cmp(b) == -1 {
        result = big.NewInt(1)
    }
    if n.Cmp(b) == 0 {
        result = big.NewInt(1)
    } else {
        // return n * factorial(n - 1);
        fmt.Println("n = ", n)
        result = n.Mul(n, factorial(n.Sub(n, c)))
    }
    return result
}
This code on go playground: http://play.golang.org/p/yNlioSdxi4
The math/big package has func (z *Int) MulRange(a, b int64) *Int. When called with the first parameter set to 1, it returns b!:
package main
import (
    "fmt"
    "math/big"
)
func main() {
    x := new(big.Int)
    x.MulRange(1, 10)
    fmt.Println(x)
}
Will produce
3628800
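If you want it in the same shape as a factorial function, a small wrapper is enough (the wrapper name and signature are mine; MulRange itself is the standard-library call):
func factorial(n int64) *big.Int {
    return new(big.Int).MulRange(1, n) // product of 1..n; returns 1 for n < 1
}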
In your int version, every int is a distinct value. But in your big.Int version, you're actually sharing big.Int values. So when you write
result = n.Mul(n, factorial(n.Sub(n, c)))
the expression n.Sub(n, c) stores n-1 back into n, and every recursive call keeps subtracting from that same shared n, so by the time n.Mul(n, ...) is evaluated, n has reached 0 and you're basically doing 0 * 1, which gives 0 as a result.
Remember, big.Int operations don't just return their value, they also store it into the receiver. This is why you see the repetition in expressions like n.Mul(n, c), i.e. why n appears again as the first parameter. You could also write result.Mul(n, c) and you'd get the same value back, but it would be stored in result instead of n.
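A tiny illustration of that receiver behaviour (the values here are my own):
n := big.NewInt(7)
one := big.NewInt(1)
m := n.Sub(n, one) // n now holds 6, and m is the same *big.Int as n
fmt.Println(n, m)  // 6 6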
Here is your code rewritten to avoid this problem:
func factorial(n *big.Int) (result *big.Int) {
    //fmt.Println("n = ", n)
    b := big.NewInt(0)
    c := big.NewInt(1)
    if n.Cmp(b) == -1 {
        result = big.NewInt(1)
    }
    if n.Cmp(b) == 0 {
        result = big.NewInt(1)
    } else {
        // return n * factorial(n - 1);
        fmt.Println("n = ", n)
        result = new(big.Int)
        result.Set(n)
        result.Mul(result, factorial(n.Sub(n, c)))
    }
    return
}
And here is a slightly more cleaned-up/optimized version (I tried to remove extraneous allocations of big.Ints): http://play.golang.org/p/feacvk4P4O
For example,
package main
import (
    "fmt"
    "math/big"
)
func factorial(x *big.Int) *big.Int {
    n := big.NewInt(1)
    if x.Cmp(big.NewInt(0)) == 0 {
        return n
    }
    return n.Mul(x, factorial(n.Sub(x, n)))
}
func main() {
    r := big.NewInt(7)
    fmt.Println(factorial(r))
}
Output:
5040
Non-recursive version:
func FactorialBig(n uint64) (r *big.Int) {
    //fmt.Println("n = ", n)
    one, bn := big.NewInt(1), new(big.Int).SetUint64(n)
    r = big.NewInt(1)
    if bn.Cmp(one) <= 0 {
        return
    }
    for i := big.NewInt(2); i.Cmp(bn) <= 0; i.Add(i, one) {
        r.Mul(r, i)
    }
    return
}
playground
