Is there a way to make this golang code shorter?
func MergeSlices(s1 []float32, s2 []int32) []int {
    var slice []int
    for i := range s1 {
        slice = append(slice, int(s1[i]))
    }
    for i := range s2 {
        slice = append(slice, int(s2[i]))
    }
    return slice
}
You can't eliminate the loops that convert each element to int individually, because you can't convert whole slices between different element types. For an explanation, see this question: Type converting slices of interfaces in go
The most you can do is use a named result type and a for range loop with two iteration values, where you omit the first (the index) by assigning it to the blank identifier and use the second, which is the value:
func MergeSlices(s1 []float32, s2 []int32) (s []int) {
    for _, v := range s1 {
        s = append(s, int(v))
    }
    for _, v := range s2 {
        s = append(s, int(v))
    }
    return
}
But know that your code is fine as it is. My version is not something to always follow; it only answers your question of how to make the code shorter. If you want to improve your code, you could start by looking at its performance, or even refactor it so you don't end up needing to merge slices of different types in the first place.
Your code should be correct, maintainable, readable, and reasonably efficient. Note that shortness of code is not one of the important goals. For good reason, Stack Exchange has another site for Code Golf questions: Programming Puzzles & Code Golf.
Your code could be improved; it's inefficient. For example, merging two slices of length 256:
BenchmarkMergeSlices 200000 8350 ns/op 8184 B/op 10 allocs/op
Here's a more efficient (and longer) version:
BenchmarkMergeSlices 300000 4420 ns/op 4096 B/op 1 allocs/op
func MergeSlices(s1 []float32, s2 []int32) []int {
    slice := make([]int, 0, len(s1)+len(s2))
    for i := range s1 {
        slice = append(slice, int(s1[i]))
    }
    for i := range s2 {
        slice = append(slice, int(s2[i]))
    }
    return slice
}
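For reference, output in that format would come from a benchmark along these lines (a minimal sketch placed in a _test.go file of the same package; the slice sizes and setup are assumptions, not taken from the original answer):
package main

import "testing"

var (
    s1 = make([]float32, 256)
    s2 = make([]int32, 256)
)

// BenchmarkMergeSlices measures one merge of two 256-element slices per iteration.
func BenchmarkMergeSlices(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = MergeSlices(s1, s2)
    }
}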
Use the Go Code Review Comments for Named Result Parameters. For example: "Don't name result parameters just to avoid declaring a var inside the function; that trades off a minor implementation brevity at the cost of unnecessary API verbosity. Clarity of docs is always more important than saving a line or two in your function."
var s1 []int
var s2 []int
newSlice := append(s1, s2...)
(Note that this only compiles when both slices have the same element type; for the question's []float32 and []int32, the per-element conversion is still needed.)
The code can't get any shorter, but that's a goal of dubious value to begin with; it's not overly verbose as-is. You can, however, likely improve performance by eliminating the intermediate allocations. Every time you call append, if the target slice doesn't have enough space, it expands it, guessing at the necessary size since you haven't told it how much space it will need.
The simplest fix is to presize your target slice (replace var slice []int with slice := make([]int, 0, len(s1)+len(s2))); that way the appends never have to expand it. Setting the second parameter to 0 is important: it sets the length to zero and the capacity to the total size needed, so that your appends work as expected.
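To illustrate why the length argument matters (a small sketch, not part of the original answer):
// with length 0 and capacity 3, append fills the slice from the start
a := make([]int, 0, 3)
a = append(a, 1) // a == [1]

// with length 3, append adds after the three existing zero elements
b := make([]int, 3)
b = append(b, 1) // b == [0 0 0 1]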
Once you've presized it though, you can get rid of the appends entirely, and directly set each index:
func MergeSlices(s1 []float32, s2 []int32) []int {
    slice := make([]int, len(s1)+len(s2))
    for i, v := range s1 {
        slice[i] = int(v)
    }
    for i, v := range s2 {
        slice[i+len(s1)] = int(v)
    }
    return slice
}
Playground link
I need to use temporary slices in a function that will be called many times. Their capacity will never exceed 10, but their lengths vary. Just for example, maybe 80% of them have size 1, 10% have size 3, and 10% have size 10.
I can think of an example function like the following:
func getDataFromDb(s []string) []string {
    tmpSlice := make([]string, 0, 10)
    for _, v := range s {
        if check(v) {
            tmpSlice = append(tmpSlice, v)
        }
    }
    ......
    return searchDb(tmpSlice)
}
So should I do var tmpSlice []string, tmpSlice := make([]string, 0, 0), tmpSlice := make([]string, 0, 5), or tmpSlice := make([]string, 0, 10)? Or do you have any other suggestions?
The fastest option is code that doesn't allocate on the heap at all.
Create variables that are allocated on the stack and do not escape (pass variables by value, otherwise they will escape).
You can check for escapes by building with -gcflags "-m -l".
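For example, a command along these lines prints the compiler's escape-analysis decisions for the current package (a sketch; the package path is an assumption):
go build -gcflags "-m -l" .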
Here is an example showing that if we substitute the slice with an array and pass it by value, the result is fast code with no heap allocation.
package main

import "testing"

func BenchmarkAllocation(b *testing.B) {
    b.Run("Slice", func(b2 *testing.B) {
        for i := 0; i < b2.N; i++ {
            _ = getDataFromDbSlice([]string{"one", "two"})
        }
    })
    b.Run("Array", func(b2 *testing.B) {
        for i := 0; i < b2.N; i++ {
            _ = getDataFromDbArray([]string{"one", "two"})
        }
    })
}

type DbQuery [10]string
type DbQueryResult [10]string

func getDataFromDbArray(s []string) DbQueryResult {
    q := DbQuery{}
    return processQueryArray(q)
}

func processQueryArray(q DbQuery) DbQueryResult {
    return (DbQueryResult)(q)
}

func getDataFromDbSlice(s []string) []string {
    tmpArray := make([]string, 0, 10)
    return processQuerySlice(tmpArray)
}

func processQuerySlice(q []string) []string {
    return q
}
Running the benchmark with -benchmem gives these results:
BenchmarkAllocation/Slice-6 30000000 51.8 ns/op 160 B/op 1 allocs/op
BenchmarkAllocation/Array-6 100000000 15.7 ns/op 0 B/op 0 allocs/op
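A run along these lines produces output in that format (the test file layout is assumed):
go test -bench . -benchmem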
This answer assumes that searchDB does not retain a reference to the slice passed to it. It seems unlikely that the function retains a reference given the variable and function names.
These options have the same memory and performance characteristics:
var tmpSlice []string
tmpSlice := []string{}
tmpSlice := make([]string, 0)
tmpSlice := make([]string, 0, 0)
None of them allocate memory until the first append operation. If these are your only options, then pick one of the first two because they are easier to read.
This option will have the best performance:
tmpSlice := make([]string, 0, 10)
This ensures that the backing array for the slice is allocated once. There will be no reallocations of the backing array as values are appended.
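A small sketch (not from the original answer) showing that the capacity never changes while appending up to 10 elements:
tmpSlice := make([]string, 0, 10)
for i := 0; i < 10; i++ {
    tmpSlice = append(tmpSlice, "x")
}
// len(tmpSlice) == 10, cap(tmpSlice) == 10: the backing array was allocated exactly once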
If searchDB's argument does not escape, then the one allocation for the backing array will be made on the stack. This is the best possible performance. You can find out if the argument escapes by building with the -gcflags "-m -l" option.
Given that getDataFromDb invokes a database operation, any performance difference between the options will be in the noise. It's more important to write clear and simple code than to optimize this.
I would probably go with var tmpSlice []string over tmpSlice := make([]string, 0, 10), because with the former there's no need to understand where the value 10 came from.
I would do
var tmpSlice []string
This gives you an empty string slice that you can append to as needed.
Unless the slice gets big and you know its size beforehand, I wouldn't pre-allocate memory for it.
I have two variables with big numbers set as strings:
var numA = "340282366920938463463374607431768211456"
var numB = "17014118346046923173168730371588410572"
I want to be able to add and subtract these kinds of large string numbers in Go.
I know I need to use math/big but I still can not for the life of me figure out how, so any example help will be greatly appreciated!
You may use big.NewInt() to create a new big.Int value initialized with an int64 value. It returns you a pointer (*big.Int). Alternatively you could simply use the builtin new() function to allocate a big.Int value which will be 0 like this: new(big.Int), or since big.Int is a struct type, a simple composite literal would also do: &big.Int{}.
Once you have a value, you may use Int.SetString() to parse and set a number given as string. You can pass the base of the string number, and it also returns you a bool value indicating if parsing succeeded.
Then you may use Int.Add() and Int.Sub() to calculate the sum and difference of 2 big.Int numbers. Note that Add() and Sub() write the result into the receiver whose method you call, so if you need the numbers (operands) unchanged, use another big.Int value to calculate and store the result.
See this example:
package main

import (
    "fmt"
    "math/big"
)

func main() {
    numA := "340282366920938463463374607431768211456"
    numB := "17014118346046923173168730371588410572"

    ba, bb := big.NewInt(0), big.NewInt(0)
    if _, ok := ba.SetString(numA, 10); !ok {
        panic("invalid numA")
    }
    if _, ok := bb.SetString(numB, 10); !ok {
        panic("invalid numB")
    }

    sum := big.NewInt(0).Add(ba, bb)
    fmt.Println("a + b =", sum)

    diff := big.NewInt(0).Sub(ba, bb)
    fmt.Println("a - b =", diff)
}
Output (try it on the Go Playground):
a + b = 357296485266985386636543337803356622028
a - b = 323268248574891540290205877060179800884
I wanted to implement time-based slots for holding data using Go slices. I managed to come up with a program like the one below, and it works. But I have a few questions regarding garbage collection and the general performance of this program. Does this program guarantee garbage collection of items once the slice is set to nil? And while shuffling slices, I hope this program does not do any deep copying.
type DataSlots struct {
    slotDuration  int //in milliseconds
    slots         [][]interface{}
    totalDuration int //in milliseconds
}

func New(slotDur int, totalDur int) *DataSlots {
    dat := &DataSlots{slotDuration: slotDur,
        totalDuration: totalDur}
    n := totalDur / slotDur
    dat.slots = make([][]interface{}, n)
    for i := 0; i < n; i++ {
        dat.slots[i] = make([]interface{}, 0)
    }
    go dat.manageSlots()
    return dat
}

func (self *DataSlots) addData(data interface{}) {
    self.slots[0] = append(self.slots[0], data)
}

// This should be a go routine
func (self *DataSlots) manageSlots() {
    n := self.totalDuration / self.slotDuration
    for {
        time.Sleep(time.Duration(self.slotDuration) * time.Millisecond)
        for i := n - 1; i > 0; i-- {
            self.slots[i] = self.slots[i-1]
        }
        self.slots[0] = nil
    }
}
I removed critical section handling in this snippet to make it concise.
Once your slice is set to nil, any values contained in the slice are available for garbage collection, provided that the underlying array isn't shared with another slice.
Since there are no slice operations in your program, you never have multiple references to the same array, nor are you leaving data in any inaccessible portions of the underlying array.
What you need to be careful of is when you're using slice operations:
a := []int{1, 2, 3, 4}
b := a[1:3]
a = nil
// the values 1 and 4 can't be collected, because they are
// still contained in b's underlying array
c := []int{1, 2, 3, 4}
c = append(c[1:2], 5)
// c is now []int{2, 5}, but again the values 1 and 4 are
// still in the underlying array. The 4 may be overwritten
// by a later append, but the 1 is inaccessible and won't
// be collected until the underlying array is copied.
While append does copy values when the capacity of the slice is insufficient, only the values contained in the slice are copied. There is no deep copy of any of the values.
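A small sketch (not from the original answer) illustrating the shallow copy: when append grows a slice of pointers, the pointer values are copied into the new backing array, but what they point to is still shared.
a := []*int{new(int)}
b := append(a[:len(a):len(a)], new(int)) // full slice expression forces a new backing array
*b[0] = 42
fmt.Println(*a[0]) // 42: a and b have different arrays but share the pointed-to int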
In Go, what is the difference between var s []int and s := make([]int, 0)?
I find that both works, but which one is better?
Simple declaration
var s []int
does not allocate memory, and s is nil, while
s := make([]int, 0)
allocates memory, and s points to a slice with 0 elements.
Usually, the first one is more idiomatic if you don't know the exact size of your use case.
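A short sketch (not from the original answer) showing the observable difference:
var s []int         // nil slice, no allocation
t := make([]int, 0) // non-nil, zero-length slice

fmt.Println(s == nil, t == nil) // true false
fmt.Println(len(s), len(t))     // 0 0
fmt.Println(append(s, 1), append(t, 1)) // [1] [1]: both can be appended to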
In addition to fabriziom's answer, you can see more examples at "Go Slices: usage and internals", where a use for []int is mentioned:
Since the zero value of a slice (nil) acts like a zero-length slice, you can declare a slice variable and then append to it in a loop:
// Filter returns a new slice holding only
// the elements of s that satisfy fn()
func Filter(s []int, fn func(int) bool) []int {
    var p []int // == nil
    for _, v := range s {
        if fn(v) {
            p = append(p, v)
        }
    }
    return p
}
It means that, to append to a slice, you don't have to allocate memory first: the nil slice p []int is enough as a slice to add to.
Just found a difference. If you use
var list []MyObjects
and then you encode the output as JSON, you get null.
list := make([]MyObjects, 0)
results in [] as expected.
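A minimal sketch demonstrating the difference (using []int for brevity instead of the MyObjects type above):
package main

import (
    "encoding/json"
    "fmt"
)

func main() {
    var nilSlice []int
    emptySlice := make([]int, 0)

    a, _ := json.Marshal(nilSlice)
    b, _ := json.Marshal(emptySlice)
    fmt.Println(string(a)) // null
    fmt.Println(string(b)) // []
}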
A slightly more complete example (one more argument to make()):
slice := make([]int, 2, 5)
fmt.Printf("length: %d - capacity %d - content: %d", len(slice), cap(slice), slice)
Out:
length: 2 - capacity 5 - content: [0 0]
Or with a dynamic type of slice:
slice := make([]interface{}, 2, 5)
fmt.Printf("length: %d - capacity %d - content: %d", len(slice), cap(slice), slice)
Out:
length: 2 - capacity 5 - content: [<nil> <nil>]
The code snippet below is the library implementation of the Push method for a priority queue. I am wondering why the line with the code a = a[0 : n+1] does not throw an out-of-bounds error.
func (pq *PriorityQueue) Push(x interface{}) {
    // Push and Pop use pointer receivers because they modify the slice's length,
    // not just its contents.
    // To simplify indexing expressions in these methods, we save a copy of the
    // slice object. We could instead write (*pq)[i].
    a := *pq
    n := len(a)
    a = a[0 : n+1]
    item := x.(*Item)
    item.index = n
    a[n] = item
    *pq = a
}
A slice is not an array; it is a view onto an existing array. The slice in question is backed by an array larger than itself. When you take a slice of an existing slice, you're actually slicing the underlying array, but the indexes referenced are relative to the source slice.
That's a mouthful. Let's prove this in the following way: we'll create a slice of zero length, but we'll force the underlying array to be larger. When creating a slice with make, the third parameter will set the size of the underlying array. The expression make([]int, 0, 2) will allocate an array of size 2, but it evaluates to a size-zero slice.
package main

import "fmt"

func main() {
    // create a zero-width slice over an initial array of size 2
    a := make([]int, 0, 2)
    fmt.Println(a)

    // expand the slice. Since we're not beyond the size of the initial
    // array, this isn't out of bounds.
    a = a[0 : len(a)+1]
    a[0] = 1
    fmt.Println(a)
    fmt.Println(a[0 : len(a)+1])
}
See here. You can use the cap built-in function to get the size of the array that backs a given slice.
The specific code that you asked about loops over cap(pq) in the calling context (container/heap/example_test.go line 90). If you modify the code at the call site and attempt to push another item into the queue, it will panic like you expect. I ... probably wouldn't suggest writing code like this. Although the code in the standard library executes, I would be very sour if I found that in my codebase. It's generally safer to use the append built-in.
Because it works in a specific example program. Here are the important parts from the original/full example source:
const nItem = 10
and
pq := make(PriorityQueue, 0, nItem)
and
for i := 0; i < cap(pq); i++ {
    item := &Item{
        value:    values[i],
        priority: priorities[i],
    }
    heap.Push(&pq, item)
}
Is it an example from container/heap? If yes, then it doesn't throw an exception because the capacity is big enough (see how the Push method is used). If you change the example to push more items than the capacity, then it will panic.
It does in general; it doesn't in the container/heap example. Here's the general fix I already gave you some time ago.
func (pq *PriorityQueue) Push(x interface{}) {
    a := *pq
    n := len(a)
    item := x.(*Item)
    item.index = n
    a = append(a, item)
    *pq = a
}