If you look at the image package here http://golang.org/src/pkg/image/image.go you can see that the implementation of Opaque() for every image does the same thing, differing only in the pixel-specific logic.
Is there a reason for this? Would any general solution be less efficient? Is it just an oversight? Is there some limitation (I cannot see one) to the type system that would make a polymorphic [was: generic] approach difficult?
[edit] The kind of solution I was thinking of (which does not need generics in the Java sense) would be like:
type ColorPredicate func(c image.Color) bool

func AllPixels(p *image.Image, q ColorPredicate) bool {
    var r = p.Bounds()
    if r.Empty() {
        return true
    }
    for y := r.Min.Y; y < r.Max.Y; y++ {
        for x := r.Min.X; x < r.Max.X; x++ {
            if !q(p.At(x, y)) {
                return false
            }
        }
    }
    return true
}
but I am having trouble getting that to compile (I'm still very new to Go; it compiles when I pass an image, but not an image pointer!).
Is that too hard to optimise? (you would need to have function inlining, but then wouldn't any type checking be pulled out of the loop?). Also, I now realise I shouldn't have used the word "generic" earlier - I meant it only in a generic (ha) way.
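(For what it's worth, the compile trouble is most likely the pointer: image.Image is an interface, so it is normally passed by value rather than as *image.Image. A minimal compiling sketch against the current library layout, where the colour type lives in image/color rather than the old image.Color:)

package main

import (
    "fmt"
    "image"
    "image/color"
)

type ColorPredicate func(c color.Color) bool

func AllPixels(p image.Image, q ColorPredicate) bool {
    r := p.Bounds()
    for y := r.Min.Y; y < r.Max.Y; y++ {
        for x := r.Min.X; x < r.Max.X; x++ {
            if !q(p.At(x, y)) {
                return false
            }
        }
    }
    return true
}

func main() {
    img := image.NewRGBA(image.Rect(0, 0, 2, 2))
    opaque := AllPixels(img, func(c color.Color) bool {
        _, _, _, a := c.RGBA()
        return a == 0xffff
    })
    fmt.Println(opaque) // false: a freshly allocated RGBA image is zeroed, so alpha is 0
}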
There is a limitation to the type system which prevents a general solution (or at least makes it very inefficient).
For example, the bodies of RGBA.Opaque and NRGBA.Opaque are identical, so you'd think that they could be factored out into a third function with a signature something like this:
func opaque(pix []Color, stride int, rect Rectangle) bool
You'd like to call that function this way:
func (p *RGBA) Opaque() bool {
    return opaque([]Color(p.Pix), p.Stride, p.Rect)
}
But you can't. p.Pix can't be converted to []Color because those types have different in-memory representations and the spec forbids it. We could allocate a new slice, convert each individual element of p.Pix, and pass that, but that would be very inefficient.
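Spelled out, that inefficient workaround would look something like this (a sketch only, inside the image package, using the hypothetical opaque([]Color, int, Rectangle) helper from above):

func (p *RGBA) Opaque() bool {
    // Copy and box every pixel into the Color interface just so the shared
    // helper can be called; this allocates a whole new slice per call.
    cs := make([]Color, len(p.Pix))
    for i, c := range p.Pix {
        cs[i] = c
    }
    return opaque(cs, p.Stride, p.Rect)
}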
Observe that RGBAColor and NRGBAColor have the exact same structure. Maybe we could factor out the function for just those two types, since the in-memory representation of the pixel slices is exactly the same:
func opaque(pix []RGBAColor, stride int, rect Rectangle) bool
func (p *NRGBA) Opaque() bool {
    return opaque([]RGBAColor(p.Pix), p.Stride, p.Rect)
}
Alas, again this isn't allowed. This seems to be more of a spec/language issue than a technical one. I'm sure this has come up on the mailing list before, but I can't find a good discussion of it.
This seems like an area where generics would come in handy, but there's no solution for generics in Go yet.
The Go FAQ addresses this directly:

Why does Go not have generic types?

Generics may well be added at some point. We don't feel an urgency for them, although we understand some programmers do.

Generics are convenient but they come at a cost in complexity in the type system and run-time. We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it. Meanwhile, Go's built-in maps and slices, plus the ability to use the empty interface to construct containers (with explicit unboxing) mean in many cases it is possible to write code that does what generics would enable, if less smoothly.

This remains an open issue.
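As a small illustration of that last point, here is a sketch (mine, not from the FAQ) of an interface{}-based container with explicit unboxing:

package main

import "fmt"

// Container holds values of any type behind the empty interface.
type Container struct {
    items []interface{}
}

func (c *Container) Put(v interface{}) { c.items = append(c.items, v) }

// Get removes and returns the most recently added value; it panics when empty.
func (c *Container) Get() interface{} {
    v := c.items[len(c.items)-1]
    c.items = c.items[:len(c.items)-1]
    return v
}

func main() {
    var c Container
    c.Put(42)
    n := c.Get().(int) // explicit unboxing via a type assertion
    fmt.Println(n + 1) // 43
}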
// reflect/value.go
func ValueOf(i interface{}) Value {
    if i == nil {
        return Value{}
    }

    // TODO: Maybe allow contents of a Value to live on the stack.
    // For now we make the contents always escape to the heap. It
    // makes life easier in a few places (see chanrecv/mapassign
    // comment below).
    escapes(i)
The code above is from reflect/value.go in the Go source, and the comment above the escapes(i) call says that each time we call ValueOf, i escapes to the heap. Why is that? In particular, how should the sentence "It makes life easier in a few places" be understood?
I am still learning Go, so I can't describe this in much detail; that's why this is a community-wiki answer. But here is what the excerpted note says (the note above the chanrecv function):
Note: some of the noescape annotations below are technically a lie,
but safe in the context of this package. Functions like chansend
and mapassign don't escape the referent, but may escape anything
the referent points to (they do shallow copies of the referent).
It is safe in this package because the referent may only point
to something a Value may point to, and that is always in the heap
(due to the escapes() call in ValueOf).
Also see:
// Dummy annotation marking that the value x escapes,
// for use in cases where the reflect code is so clever that
// the compiler cannot follow.
func escapes(x interface{}) {
    if dummy.b {
        dummy.x = x
    }
}

var dummy struct {
    b bool
    x interface{}
}
I hope this will be helpful.
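One way to see the effect yourself (my own sketch, not from the reflect sources): compile the program below with go build -gcflags=-m and, at least on the Go versions this question refers to, the escape analysis output should report that x is moved to the heap because of the interface{} conversion feeding ValueOf.

package main

import (
    "fmt"
    "reflect"
)

func main() {
    x := 42
    // x is boxed into an interface{} to call ValueOf; the escapes() call
    // inside ValueOf forces that boxed copy onto the heap.
    v := reflect.ValueOf(x)
    fmt.Println(v.Int()) // 42
}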
Code I'm exploring:
type Stack struct {
    length int
    values []int
}

func (s *Stack) Push(value int) {
    // ...
}

func (s *Stack) Pop() int {
    // ...
}

func (s *Stack) Length() int {
    return s.length
}
Methods Push and Pop change the length field of the Stack struct, and I wanted to hide this field from other packages to prevent code like stack.length = ... (manually changing the length). But I still needed to be able to read the field, so I added a getter method, Length.
And my question is:
Shouldn't stack.Length() be slower than stack.length, because it is a function call? I have learned a bit of assembler and I know how many operations a program has to perform to call a function. Have I understood this correctly: by adding the getter method stack.Length() I protect users of my library from bad usage, but the cost is the program's performance? This actually concerns not only Go.
Shouldn't stack.Length() be slower than stack.length, because it is a function call?
Objection! Assumes facts not in evidence.
Specifically:
Why do you think it is a function call? It looks like one, but actual Go compilers will often expand the code in line.
Why do you think a function call is slower than inline code? When measuring actual programs on actual computers, sometimes function calls are faster than inline code. It turns out the crucial part is usually whether the instructions being executed, and their operands, are already in the appropriate CPU caches. Sometimes, expanding functions inline makes the program run more slowly.
The compiler should do the inline expansion unless it makes the program run more slowly. How good the compiler is at pre- or post-detecting such slowdowns, if present, is a separate issue. In this particular case, given the function definition, the compiler is almost certain to just expand the function in line, as accessing stack.length will likely be one instruction, and calling a function will be one instruction, and deciding the tradeoff here will be easy.
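If you would rather measure than argue, here is a small benchmark sketch (my own; it assumes the Stack type above lives in a package named stack, and that this benchmark file sits in the same package so it can read the unexported field):

package stack

import "testing"

var sink int // assigned in the loops so the compiler cannot discard the work

func BenchmarkLengthMethod(b *testing.B) {
    s := &Stack{length: 10}
    for i := 0; i < b.N; i++ {
        sink = s.Length() // the getter; almost certainly inlined to a field load
    }
}

func BenchmarkLengthField(b *testing.B) {
    s := &Stack{length: 10}
    for i := 0; i < b.N; i++ {
        sink = s.length // direct field access for comparison
    }
}

Running go test -bench=. should show no meaningful difference between the two, and go build -gcflags=-m on the package should print that (*Stack).Length can be inlined.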
I was reading an answer to a Stack Overflow question and tried to modify the function history to take anything implementing IntoIterator whose items can be turned into a reference to some type with certain traits (Debug in this case).
If I remove V: ?Sized from the function definition, the Rust compiler complains that it doesn't know the size of str at compile time.
use std::fmt::Debug;
pub fn history<I: IntoIterator, V: ?Sized>(i: I) where I::Item: AsRef<V>, V: Debug {
    for s in i {
        println!("{:?}", s.as_ref());
    }
}

fn main() {
    history::<_, str>(&["st", "t", "u"]);
}
I don't understand why the compiler shows an error in the first place, and I'm not sure why the program works properly if I kind of cheat with V: ?Sized.
I kind of cheat with V: ?Sized
It isn't cheating. All generic arguments are assumed to be Sized by default. This default is there because it's the most common case - without it, nearly every type parameter would have to be annotated with : Sized.
In your case, V is only ever accessed by reference, so it doesn't need to be Sized. Relaxing the Sized constraint makes your function as general as possible, allowing it to be used with the most possible types.
The type str is unsized, so this is not just about generalisation, you actually need to relax the default Sized constraint to be able to use your function with str.
I've always found the package.New() syntax in go rather awkward to work with.
The suggestion is that if a package holds only a single type, you use package.New() to create an instance; if multiple types exist, you use package.NewBlah().
http://golang.org/doc/effective_go.html#package-names
However, this approach falls down if you have an existing package with a New() API: adding another exported type to the package breaks the API, because New() must now be renamed to NewFoo(). Now you have to go and change everything that uses New(), which is deeply irritating.
...and I'm just discontent with the aesthetic of writing this:
import "other"
import "bar"
import "foo"
o := other.New() // <-- Weird, what type am I getting? No idea.
x := bar.New()
y := foo.NewFoo() // <-- Awkward, makes constructor naming look inconsistent
z := foo.NewBar()
So, recently I've been using this pattern instead:
x := foo.Foo{}.New() // <-- Immediately obvious I'm getting a Foo
y := foo.Bar{}.New() // <-- Only an additional 3 characters on NewBar{}
o := other.Foo{}.New() // <-- Consistent across all packages, no breakage on update
Where the module is defined something like this:
package foo
type Foo struct {
    x int
}

func (s Foo) New() *Foo {
    // Normal init stuff here
    return &s // <-- Edit: notice the single instance is returned
}

type Bar struct {
}

func (Bar) New() *Bar {
    return &Bar{} // <-- Edit: Bad, results in double alloc. Not like this.
}
Godoc seems to work fine with it, and it seems more obvious and consistent to me, without additional verbosity.
So, question: Is there any tangible downside to this?
Yes, it has a downside. This approach may generate unnecessary garbage - depending on how good the optimization of a specific Go compiler implementation is.
It's not terribly idiomatic and, if done badly, may create excess garbage, as you note. Essentially you are just creating an Init method for your object. I don't use a lot of constructors myself, tending to prefer valid zero values for my objects and only using a constructor when that doesn't hold true.
In your case I think I'd just stop calling the method New and instead call it Init or Setup to better reflect what it does. That would avoid giving people the wrong idea about its behaviour.
Edit:
I should have been more detailed here. Calling the method Init or Setup and then using it on a zero value would better reflect to the consumer what is going on, e.g.:
f := &foo.Foo{}
f.Init()
This avoids the excess garbage and gives you an initializer method as you describe.
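For example, a sketch of that Init style (mine, reusing the Foo type from the question; the field value is just a placeholder):

package foo

type Foo struct {
    x int
}

// Init fills in an already-allocated Foo and returns it so the call can be chained.
func (f *Foo) Init() *Foo {
    f.x = 1 // stand-in for the "normal init stuff"
    return f
}

A consumer then writes f := &foo.Foo{} followed by f.Init(), or chains it as f := (&foo.Foo{}).Init(); either way there is a single allocation and an explicit, named initialization step.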
I am trying to familiarize myself with Go and was trying to implement a search function, but looking through the docs for the container types, none of the built-in types implements a contains method. Am I missing something? If not, how do I go about testing for membership? Do I have to implement my own method, or do I have to iterate through all the elements? If so, what is the rationale behind omitting this elementary method from the container types?
The standard library's container types require you to do type assertions when pulling elements out. The containers themselves have no way of testing for membership because they don't know the types they're containing and have no way of comparing them.
Ric Szopa's skip list implementation might be what you're looking for. It has a Set type which implements a Contains method.
https://github.com/ryszard/goskiplist
I've been using it in production and am quite happy with it.
Maps are a built-in type which does have a "contains" construct, although it takes the form of the "comma ok" index expression rather than a method:
http://play.golang.org/p/ddpmiskxqS
package main
import (
    "fmt"
)

func main() {
    a := map[string]string{"foo": "bar"}

    _, k := a["asd"]
    fmt.Println(k)

    _, k = a["foo"]
    fmt.Println(k)
}
With the container/list package, you write your own loop to search for things. The reasoning for not providing this in the package is probably, as Dystroy said, that it would hide an O(n) operation.
You can't add a method, so you just write a loop.
for e := l.Front(); e != nil; e = e.Next() {
    data := e.Value.(dataType) // type assertion
    if /* test on data */ {
        // do something
        break
    }
}
It's simple enough and the O(n) complexity is obvious.
In your review of data structures supplied with Go that support searching, don't miss the sort package. Functions there allow a slice to be sorted in O(n log(n)) and then binary searched in O(log(n)) time.
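For instance, a small self-contained sketch using sort.Ints and sort.SearchInts for membership testing on an int slice:

package main

import (
    "fmt"
    "sort"
)

func main() {
    s := []int{5, 2, 9, 1, 7}
    sort.Ints(s) // O(n log n)

    // sort.SearchInts performs a binary search in O(log n); it returns the
    // index where x would be inserted, so check that x is actually present.
    x := 7
    i := sort.SearchInts(s, x)
    fmt.Println(i < len(s) && s[i] == x) // true
}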
Finally, as Daniel suggested, consider third-party packages. There are some popular and mature packages for container types.