Create constants visible across packages, accessible directly - go

I would like to define my Error Codes in a package models.
error.go
package models
const (
	EOK = iota
	EFAILED
)
How can I use them in another package without referring to them as models.EOK? I would like to use them directly as EOK, since these codes would be common across all packages.
Is it the right way to do it? Any better alternatives?

To answer your core question
You can use the dot import syntax to import the exported symbols of another package directly into the importing file's scope (godoc):
import . "models"
This way you can refer to the EOK constant directly, without prefixing it with models.
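For illustration, a minimal sketch (the worker package is invented; the "models" import path is reused from above):

package worker

import . "models" // dot import: the exported names of models enter this file's scope

// status refers to EOK directly, with no models. prefix.
func status() int {
	return EOK
}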
However, I'd strongly advise against doing so, as it produces rather unreadable code; see below.
General/style advice
Don't use an unprefixed import path like models. This is considered bad style, as it will easily clobber other packages' paths. Even for small projects that are used only internally, use something like myname/models. see goblog
Regarding your question about error generation, there are functions for generating error values, e.g. errors.New (godoc) and fmt.Errorf (godoc).
For a general introduction to Go and error handling, see goblog.
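A small sketch of both functions in use (the ErrNotFound value and the Find function are invented for illustration):

package models

import (
	"errors"
	"fmt"
)

// ErrNotFound is a fixed, reusable error value.
var ErrNotFound = errors.New("models: record not found")

// Find returns a formatted error for an invalid key,
// and the fixed ErrNotFound value otherwise.
func Find(key string) (string, error) {
	if key == "" {
		return "", fmt.Errorf("models: invalid key %q", key)
	}
	return "", ErrNotFound
}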

W.r.t. the initial question, use a compact package name, for example err.
Choosing an approach to propagating errors and generating error messages depends on the scale and complexity of the application. The error style you show, using an int and then a function to decode it, is quite C-ish.
That style was partly caused by:
the lack of multiple return values (unlike Go),
the need for a simple type that is easily propagated, and
the fact that such a code is translated to text by a separate function (unlike Go's error interface), so that the local-language strings can be changed.
For small apps with simple error strings, I put the package's error strings at the head of a package file and just return them, maybe using errors.New(...), or fmt.Errorf if the string needs to be completed using some data.
That 'int' style of error reporting doesn't offer anything as flexible as Go's error interface. The error interface lets us build information-rich error structures, so we can return useful information, not just an int value or a string.
An implication is that different packages can yield different concrete types, each implementing the error interface. We don't need to agree on a single concrete error type across an entire set of packages. So error is an interface which can be propagated as easily as an int, yet the concrete type behind it can be much richer than an int. Error generation (implementing error) can be as centralised or as distributed as we need, unlike strerror()-style functions, which can be awkward to extend.
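As a sketch of that flexibility (the package and field names are invented for illustration):

package parse

import "fmt"

// ParseError carries structured information, not just an int or a string.
type ParseError struct {
	File string
	Line int
	Msg  string
}

// Error makes *ParseError satisfy the built-in error interface.
func (e *ParseError) Error() string {
	return fmt.Sprintf("%s:%d: %s", e.File, e.Line, e.Msg)
}

A caller that only propagates the failure sees a plain error; a caller that cares can type-assert to *ParseError and read the fields.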

Related

golang code organization: where should I put custom error types that are only relevant to one function?

I have just started working on my first golang project and really like the idea of returning custom error types from functions and using type assertions in the calling code to check for specific errors. I find this solution cleaner than always comparing error messages.
My only question is: where do you best put these custom error types?
Say a number of custom error types are only used (returned) by one utility function: should they go in the same package as the function? Should I group them somehow? Or maybe there's a better way of doing this kind of thing...
"Same package" would be my initial thought. There may be cases where having them in a different package would make sense, but only if they're legitimately "the same error" returned from functions in multiple packages, with none of those packages being the logical owner.

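A sketch of the "same package" arrangement described above, with hypothetical names:

package textutil

import "fmt"

// LengthError lives in the same package as Truncate,
// the only function that returns it.
type LengthError struct {
	Max, Got int
}

func (e *LengthError) Error() string {
	return fmt.Sprintf("value too long: %d > %d", e.Got, e.Max)
}

// Truncate rejects strings longer than max.
func Truncate(s string, max int) (string, error) {
	if len(s) > max {
		return "", &LengthError{Max: max, Got: len(s)}
	}
	return s, nil
}

The calling code can then check for the specific error with a type assertion, e.g. if le, ok := err.(*textutil.LengthError); ok { ... }, and inspect le.Max and le.Got instead of parsing a message string.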
Finding functions that return a specific type

Perhaps this is a silly question, but is there a way to find all functions (in the standard library or GOPATH) that return a specific type?
For example there are many functions that take io.Writer as an argument. Now I want to know how to create an io.Writer and there are many ways to do so. But how can I find all the ways easily without guessing at packages and looking through all the methods to find those that return an io.Writer (or whatever other type I'm after)?
Edit: I should extend my question to also cover finding types that implement a specific interface. Sticking with the io.Writer example (which admittedly was a bad example for the original question), it would be good to have a way to find any types that implement the Writer interface, since those types would be valid arguments for a function that takes an io.Writer, and would thus answer the original question when the type is an interface.
Maybe not the best way, but take a look at the search field at the top of the official golang.org website. If you search for "Writer":
http://golang.org/search?q=Writer
You get many results, grouped by categories:
Types
Package-level declarations
Local declarations and uses
Textual occurrences
Also note that io.Writer is an interface, and we all know how Go handles interfaces: there is no declaration of intent; a type implicitly implements an interface simply by declaring the methods the interface requires. This means you won't be able to find many examples where an io.Writer is created and returned, because a type might be named something entirely different and still be an io.Writer.
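As a minimal illustration of why a name-based search misses implementors (countingWriter is an invented type):

package main

import (
	"fmt"
	"io"
)

// countingWriter never mentions io.Writer, yet it is one:
// it declares Write with the right signature.
type countingWriter struct{ n int }

func (w *countingWriter) Write(p []byte) (int, error) {
	w.n += len(p)
	return len(p), nil
}

func main() {
	var w io.Writer = &countingWriter{} // compiles: the method set matches
	fmt.Fprint(w, "hello")
	fmt.Println(w.(*countingWriter).n) // prints 5
}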
Things get a little easier if you look for a non-interface type for example bytes.Buffer.
Also, in the documentation of the type's declaring package, the Index section groups the functions and methods of the package by type, so you will find the functions and methods of that same package which return the type you're looking for right under the type's entry/link in the Index section.
Also note that you can check the dependencies of packages at godoc.org. For example, you can see which packages import the io package, which may be a good starting point for further examples (this would be exhausting in the case of package io, because it is so general that at the moment 23410 packages import it).
In my day-to-day coding I am personally rarely in need of finding functions that return, say, an int16 and an error (a function can return multiple values in Go, you know).
For the second part of your question there exists a wonderful command, implements, written by Dominik Honnef: go get honnef.co/go/implements
After discovering a type that satisfies your conditions, you can assume the constructor for the type (something like func NewTheType() TheType) will be found just after the TheType declaration in the source code and the docs. It's a proven Go convention.
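Sketched with the placeholder names from the previous paragraph (the package name and field are invented):

package mylib

// TheType is a placeholder type standing in for whatever you found.
type TheType struct {
	buf []byte
}

// NewTheType sits directly below the type declaration, so it appears
// right under TheType's entry in the godoc Index.
func NewTheType() TheType {
	return TheType{buf: make([]byte, 0, 64)}
}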

What is the best way to manage a large quantity of constants

I am currently working on a very complex program that processes rows from an input table and has a huge number of possible outcomes for each record. Because of this I have a very large number of constants defined for the outcome messages. There is one success message for the record, but a multitude of possible warnings and errors.
My first thought was to define all of my constants for these messages at the package body level, but then I decided to move each constant to the procedure where it is used. I'm now second-guessing that decision and thinking of moving everything back to the package body level. What is the best way to define this many constants? Ease of maintainability is my ultimate goal for this program, since it is so complex.
I think this is a matter of taste. In my application I put all error codes into an error package. All main and commonly used constants I put into a separate package (without a package body).
Again, a matter of taste, but I tend to put a list of named constants at the package-spec level rather than in the package body, so that they can be referenced by any portion of the application. If I ever want to change the error code that c_err_for_specific_reason_x uses, there is a single place to do so.
If I wanted to hide the codes and put them within the body, I would have a get_error_code(p_get_error_name varchar) function that did the translation, based on your passing a valid constant name.
I've done both on different projects, but tend towards the list over the function most times. I tend to use the function if it's a table-driven source of data.
It ... wait for it ... depends!
Since you currently define your constants in the package body, you don't need them to be publicly accessible outside the package. So defining them in the spec really doesn't buy you anything.
Here is the rule I follow: define constants within the smallest scope needed. So if a constant is used only within one procedure, define it in that procedure. If it is used within more than one procedure, define it in the body. If it is used elsewhere by code in other packages (or non-packaged SPs) but only when using a particular package, define it in the spec of that package. If it is used by other code for general use, put it in a separate spec of such general constants.
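This thread is about PL/SQL, but the same smallest-scope rule carries over to Go; a sketch with invented names:

package outcomes

import "errors"

// Needed by other packages: exported, package level.
const MsgSuccess = "record processed successfully"

// Shared by several functions in this package only: unexported.
const msgRetryLimit = "retry limit reached"

func validate(rec string) error {
	// Used only inside validate: declared at the smallest scope.
	const msgEmpty = "record is empty"
	if rec == "" {
		return errors.New(msgEmpty)
	}
	return nil
}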

When writing a single package meant to be used as a command, which is idiomatic: name all identifiers as private or name all identifiers as public?

In Go, public names start with an upper case letter and private names start with a lower case letter.
I'm writing a program that is not a library and is a single package. Is there any Go idiom that stipulates whether my identifiers should be all public or all private? I do not plan on using this package as a library or as something that should be imported from another Go program.
I can't think of any reason why I'd want a mixture. It "feels" like going all private is the proper choice.
I don't think I got any concrete answer, but Nate was closest with telling me to think of "exporting vs non-exporting" instead of "public and private".
This leads me to believe that not exporting anything is the best approach. In the worst case scenario, if I end up importing code from my application in another package, I will have to rethink what should be exported and what shouldn't be. Which is a good thing IMO.
If you are attempting to adjust your mindset to be more Go-idiomatic, you should stop thinking of variables, functions, and methods as public or private. The more accurate terms are exported and not exported. It definitely has a more C-like feel to it.
As others have stated exporting really isn't needed for application program code. If for organizational reasons you decide to break your program up into packages, you could use sub-packages. At work we've decided to do just this. We have:
projectgopath/src/projectname
projectname/subcomponent1
projectname/subcomponent2
So far I am really liking this structure. It aids in separation of concerns, but does not go to the extent of making a package outside of the main project. The intent is clear. The sub-package's intended use is for this program only...
The new go build and go install commands seem to deal very well with it. We group components together in packages and expose only the necessary bits via exports.
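A sketch of "expose only the necessary bits" within the layout above (the names come from the directory listing and are placeholders):

// projectname/subcomponent1/subcomponent1.go
package subcomponent1

import "strings"

// Process is the only exported name: the one bit main actually needs.
func Process(s string) string {
	return normalize(s)
}

// normalize stays unexported; it is an internal detail of this sub-package.
func normalize(s string) string {
	return strings.TrimSpace(strings.ToLower(s))
}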
In the described situation both approaches are equally valid, so it's more or less a matter of personal preference. In my case I'm using camelCase identifiers for package main, mostly out of habit.
A lot of my go files started their life in isolated commands and were moved to packages as they could be reused by a few commands around the same topic.
I think you should make private everything that couldn't possibly be called from elsewhere (supposing one day you make it an importable package), and make public the big functions that can be understood from elsewhere (if any), as well as struct fields when they are orthogonal (I mean when a change to the value of one field doesn't break the consistency of the struct value).

Is there any downside to redundant qualifiers? Any benefit?

For example, referencing something as System.Data.Datagrid as opposed to just Datagrid. Please provide examples and explanation. Thanks.
The benefit is that you don't need to add an import for everything you use, especially if it's the only thing you use from a particular namespace; it also prevents collisions.
The downside, of course, is that the code balloons out in size and gets harder to read the more you use specific qualifiers.
Personally I tend to use imports for most things unless I know for sure I will only be using something from a particular namespace once or twice, so it won't impact the readability of my code.
You're being very explicit about the type you're referencing, and that is a benefit. At the same time, though, you're giving up code clarity, which clearly is a downside in my case, as I want code to be readable and understandable. I go for the short version unless I have a conflict between namespaces which can only be solved by explicitly referencing the classes, or unless I make an alias for it with the using keyword:
using Datagrid = System.Data.Datagrid;
Actually, the full path is global::System.Data.DataGrid. The point of using a more qualified path is to avoid having to use additional using statements, especially if the introduction of another using will cause problems with type resolution. Fully qualified identifiers exist so that you can be explicit when you need to be, but if the class's namespace is clear, then the DataGrid version is clearer to many.
I generally use the shortest form available in order to keep the code as clean and readable as possible. That's what using directives are for, after all, and tooltips in the VS editor give you instant detail on the provenance of a type.
I also tend to use a namespace tag for RCWs in a COM interop layer, to call out those variables explicitly in the code (they may need special attention regarding lifecycle and collection), e.g.
using _Interop = Some.Interop.Namespace;
In terms of performance there is no upside/downside. Everything is resolved at compile time and the generated MSIL is identical whether you use fully-qualified names or not.
The reason its use is prevalent in the .NET world is auto-generated code, such as designer markup. In that case it is better to fully qualify names like class names because of possible conflicts with other classes you may have in your code.
If you have a tool like ReSharper, it will actually tell you which fully qualified references are unnecessary (e.g. by graying them out) so you can lop them off. If you frequently cut and paste code across your various code bases, fully qualifying names would be a must. (Then again, why would you want to cut and paste all the time? It's a bad form of code reuse!)
I don't think there is really a downside, just readability versus actual time spent coding. In general, if you don't have namespaces with ambiguous objects, I don't think it's really needed. Another thing to consider is the level of use. If you have one method that uses reflection and you are all right with typing System.Reflection 10 times, then it's not a big deal, but if you plan on using a namespace a lot, then I would recommend a using directive.
Depending on your situation, extra qualifiers will generate a warning (if this is what you mean by redundant). If you then treat warnings as errors, that's a pretty serious downside.
I've run into this with GCC for example.
struct A {
    int A::b; // warning: extra qualification
};
