I have a sample helloworld.proto file and use Python. I don't understand what this option keyword has to do with the compilation phase:
syntax = "proto3";
package services.helloworld;
option go_package = "github.com/xyz/api/go/services/helloworld";
To a Python user? Probably not a lot. Options are parsed into the DSL object model (FileDescriptorSet) and can be used by whatever tool is processing the schema. The "go" processor presumably uses that option to determine a package/namespace/etc. The Python processor, on the other hand, probably isn't remotely interested. There is no "py" equivalent, so I presume it isn't needed for Python. As for what it does, from descriptor.proto:
// Sets the Go package where structs generated from this .proto will be
// placed. If omitted, the Go package will be derived from the following:
// - The basename of the package import path, if provided.
// - Otherwise, the package statement in the .proto file, if present.
// - Otherwise, the basename of the .proto file, without extension.
optional string go_package = 11;
Different options do different things; descriptor.proto is usually the best source for what built-in options exist (and what they do). Custom options can also be defined by third-party tools.
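To make that concrete, here is a minimal sketch of what a Go consumer of the generated code would look like; the import path comes straight from go_package, while Python code simply imports the generated module and never sees this option. The helloworldpb alias and the HelloRequest message are assumptions for illustration, not taken from the original .proto.

package main

import (
	// The import path is taken from the go_package option, not from the
	// location of the .proto file. The alias and message are assumed names.
	helloworldpb "github.com/xyz/api/go/services/helloworld"
)

func main() {
	req := &helloworldpb.HelloRequest{} // hypothetical message from the proto
	_ = req
}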
After parsing a file and removing some functions, is it possible to also remove any now-unused imports before writing the new file?
I recently had a need for something similar when writing a code generator tool. Here's an outline of my solution:
golang.org/x/tools/imports exports a func Process that will take a Go source file (as []byte), and automatically "fixes" imports: remove unused imports and add unlisted imports that are referenced in the source file. The goimports command is based on this.
After editing my AST (e.g. removing some functions), I first print the AST to a Go source file string with go/format. Note that this does not write to a disk file; it produces the string form of the AST in memory.
Next I use imports.Process to "fix" the imports of the file. It is important to pass the true file path of the Go source file to Process, even if it does not exist on disk yet. The result of this call is a "fixed" Go source file as a []byte "string"; i.e. any unused imports are removed.
Now I perform a "diff" of the imports between the original AST and this "fixed" source file. It is easier to do the diff by comparing ASTs, so I first parse the "fixed" source file with parser.ParseFile from go/parser, passing the true file path of the Go source file and an empty token.NewFileSet(). Again, the Go source file does not have to exist on disk.
The AST diff simply compares the []*ast.ImportSpec slice (field ast.File.Imports) between each of the two file-root AST nodes. (My implementation uses a map to index each []*ast.ImportSpec first, making it easier to cross-check without a double-nested loop.)
Finally, now that I have a list of *ast.ImportSpec that have been removed (unused), I use golang.org/x/tools/go/ast/astutil to rewrite the original AST, using the cursor to remove these known *ast.ImportSpec nodes when visited.
An aside: the imports package cited above has an "internal" package that will provide a FixImports function that essentially provides the raw "diff" derived manually above. Unfortunately we can't use it because it is marked as internal.
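For reference, here is a condensed sketch of that pipeline. The package name, function name, and surrounding wiring are mine, but the x/tools calls (imports.Process, parser.ParseFile, astutil.Apply) are the real APIs described above.

// prune_imports.go: a condensed sketch of the steps above; the package name
// and helper wiring are illustrative.
package gen

import (
	"bytes"
	"go/ast"
	"go/format"
	"go/parser"
	"go/token"

	"golang.org/x/tools/go/ast/astutil"
	"golang.org/x/tools/imports"
)

// pruneUnusedImports drops imports from file that imports.Process deems unused.
// filename must be the true path of the source file, even if it is not on disk.
func pruneUnusedImports(fset *token.FileSet, file *ast.File, filename string) error {
	// 1. Print the edited AST to an in-memory Go source string.
	var buf bytes.Buffer
	if err := format.Node(&buf, fset, file); err != nil {
		return err
	}

	// 2. Let the goimports machinery "fix" the imports.
	fixed, err := imports.Process(filename, buf.Bytes(), nil)
	if err != nil {
		return err
	}

	// 3. Re-parse the fixed source so the import lists can be compared as ASTs.
	fixedFile, err := parser.ParseFile(token.NewFileSet(), filename, fixed, parser.ImportsOnly)
	if err != nil {
		return err
	}

	// 4. Index the surviving import paths (a map avoids a nested loop).
	kept := make(map[string]bool, len(fixedFile.Imports))
	for _, spec := range fixedFile.Imports {
		kept[spec.Path.Value] = true
	}

	// 5. Walk the original AST with a cursor and delete ImportSpecs that did
	//    not survive the fix.
	astutil.Apply(file, func(c *astutil.Cursor) bool {
		if spec, ok := c.Node().(*ast.ImportSpec); ok && !kept[spec.Path.Value] {
			c.Delete()
			return false // nothing to visit under a deleted import
		}
		return true
	}, nil)
	return nil
}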
I am developing a Go package which is a little bit complex, so I want to organize the source code into multiple directories.
However, I don't want the users of the package to have to use overly long import paths. Anyway, the internal structure of the package isn't their concern.
Thus, my package structure looks so:
subDir1
subSubDir1
subSubDir2
subDir2
subSubDir3
...and so on. All of them have exported functions.
I would like to avoid that my users have to import
import (
    "mypackage/subDir1"
    "mypackage/subDir1/subSubDir2"
)
...and so on.
I just want users who need an exported function from my package to be able to access all of them by simply importing mypackage.
I tried declaring package mypackage in all of the .go files, so I had source files in different directories but with the same package declaration.
In this case, the problem I ran into was that I simply couldn't import multiple directories belonging to the same package. It said:
./src1.go:6:15: error: redefinition of ‘mypackage’
"mypackage/mysubdir1"
^
./src1.go:4:10: note: previous definition of ‘mypackage’ was here
"mypackage"
^
./src1.go:5:15: error: redefinition of ‘mypackage’
"mypackage/mysubdir2"
^
./src1.go:4:10: note: previous definition of ‘mypackage’ was here
"mypackage"
^
Is it somehow possible?
You should not do this in any case, as the language spec allows a compiler implementation to reject such constructs. Quoting from Spec: Package clause:
A set of files sharing the same PackageName form the implementation of a package. An implementation may require that all source files for a package inhabit the same directory.
Instead "structure" your file names to mimic the folder structure; e.g. instead of files of
foo/foo1.go
foo/bar/bar1.go
foo/bar/bar2.go
You could simply use:
foo/foo1.go
foo/bar-bar1.go
foo/bar-bar2.go
Also, if your package is so big that you would need multiple folders just to "host" the files of the package implementation, you should really consider not implementing it as a single package, but breaking it into multiple packages.
Also note that Go 1.5 introduced internal packages. If you create a special internal subfolder inside your package folder, you may create any number of subpackages inside it (even using multiple levels). Your package will be able to import and use them (or, to be more precise, all packages rooted at your package folder will), but no one outside will be able to do so; it would be a compile-time error.
E.g. you may create a foo package, have a foo/foo.go file, and a foo/internal/bar package. foo will be able to import foo/internal/bar, but e.g. boo won't. foo/baz will also be able to import and use foo/internal/bar because it's rooted at foo/.
So you may use internal packages to break down your big package into smaller ones, effectively grouping your source files into multiple folders. The only thing you have to pay attention to is to put everything your package wants to export into the package itself and not into the internal packages (as those are not importable / visible from the "outside").
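As a minimal sketch of that layout (the module path example.com/foo and the bar.Process function are made up for illustration, not taken from the question):

// foo/internal/bar/bar.go: implementation detail; only packages rooted at
// foo/ may import it.
package bar

func Process(s string) string { return s + " (processed)" }

// foo/foo.go: the public surface that re-exports what should be visible.
package foo

import "example.com/foo/internal/bar"

// Process is the exported entry point; callers only ever import "example.com/foo".
func Process(s string) string {
	return bar.Process(s)
}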
Inside your package's source code, you have to differentiate the source directories using renamed imports. You can declare the same package mypackage in all of your source files (even if they are in different directories).
However, when you import them, you should give an individual name to each directory. In your source file src1.go, import the other directories this way:
import (
    "mypackage"
    submodule1 "mypackage/mySubDir"
)
And you will be able to reach the API defined in "mypackage" as mypackage.AnyThing(), and the API defined in mySubDir as submodule1.AnyThing().
The external world (i.e. the users of your package) will see all exported entities under mypackage.AnyThing().
Avoid namespace collisions, and use understandable, intuitive names, as in the example.
Yes, this is doable without any problems: just invoke the Go compiler by hand, that is, not via the go tool.
But the best advice is: Don't do that. It's ugly and unnecessarily complicated. Just design your package properly.
Addendum (because the real intention of this answer seems to get lost sometimes, maybe because the irony is too subtle): Don't do that!! This is an incredibly stupid idea! Stop fighting the tools! Everybody will rightfully hate you if you do that! Nobody will understand your code or be able to compile it! Just because something is doable in theory doesn't mean it is a sensible idea in any way. Not even for "learning purposes"! You probably don't even know how to invoke the Go compiler by hand, and if you figure it out, it will be a major PITA.
I'm generating a Go file (to include constants such as the build version, etc.) so that the constants can be used in other packages. I have created a small tool that will create the file with go generate, but am trying to think of an appropriate name so that:
It is obvious that it is generated, so if it is missing (on build) the user then knows to run go generate
And I can then add the file to the .gitignore
My first guess is something like version_GENERATED.go
Any conventions I should be aware of or better suggestions?
Having a suffix like _GENERATED in the file name doesn't convey any information while the file is missing, as the compiler will just give you "unrelated" errors like "undefined: xxx" (the compiler won't guess that if the identifier existed, it would be in version_GENERATED.go).
For example, the stringer generator generates files named type_string.go, where type is replaced with the name of the type it is generated for.
So I think simply following the general guidelines for file names is enough, except maybe use _gen or _generated suffix. Or if your tool is public and used by others too, then use the name of the tool as the suffix (like stringer does).
If you do want the user to get a talkative error message in case your generator is yet to be run, your generator may generate an exported constant whose name is talkative if included in an error message, like:
const MustRunStringerGenerator = 0
And in your program refer to it like:
var _ = MustRunStringerGenerator // Test if stringer has been run
If stringer has not yet been run, you'll see an error message:
undefined: MustRunStringerGenerator
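Putting that together, a minimal sketch of what the generated file could look like (the buildinfo package name, file name, and Version value are placeholders, not an established convention):

// version_generated.go: emitted by your go generate tool.
package buildinfo

// MustRunVersionGenerator exists only so hand-written code can reference it;
// its name turns a missing-file error into an actionable message.
const MustRunVersionGenerator = 0

// Version is the constant the tool actually generates.
const Version = "v1.2.3"

Hand-written code that uses the constants can then include var _ = buildinfo.MustRunVersionGenerator, so a forgotten go generate surfaces as the talkative undefined: buildinfo.MustRunVersionGenerator rather than only undefined: buildinfo.Version.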
I have some questions on package naming for external Go libraries.
I am interested in whether using generic names like "text" is considered good practice. Bearing in mind that I cannot declare a "nested package" and that the library I am building deals with text processing, is it OK to have the package named "text", or should I stick to the library name as the package name too?
I am building a set of libraries (different projects) and I want to combine them under the same package. Is this also problematic? I am new to the Go community and am still not sure whether package pollution is a problem or not (I do not see a problem as long as I import only a few packages in my code).
The reference on that naming topic is "blog: Package names"
It includes:
Avoid unnecessary package name collisions.
While packages in different directories may have the same name, packages that are frequently used together should have distinct names. This reduces confusion and the need for local renaming in client code. For the same reason, avoid using the same name as popular standard packages like io or http.
Check also your package publishing practice, as it will help disambiguate your "text" package from others.
As illustrated in "An Introduction to Programming in Go / Packages":
math is the name of a package that is part of Go's standard distribution, but since Go packages can be hierarchical we are safe to use the same name for our package. (The real math package is just math, ours is golang-book/chapter11/math)
When we import our math library we use its full name (import "golang-book/chapter11/math"), but inside of the math.go file we only use the last part of the name (package math).
We also only use the short name math when we reference functions from our library. If we wanted to use both libraries in the same program Go allows us to use an alias:
package main

import (
	"fmt"
	m "golang-book/chapter11/math"
)

func main() {
	xs := []float64{1, 2, 3, 4}
	avg := m.Average(xs)
	fmt.Println(avg)
}
m is the alias.
As mentioned in the comments by elithrar, Dave Cheney has some additional tips:
In other languages it is quite common to ensure your package has a unique namespace by prefixing it with your company name, say com.sun.misc.Unsafe.
If everyone only writes packages corresponding to domains that they control, then there is little possibility of a collision.
In Go, the convention is to include the location of the source code in the package's import path, i.e.
$GOPATH/src/github.com/golang/glog
This is not required by the language, it is just a feature of go get.
I am trying to add some support for the D programming language to my vim config. For autocompletion I need to detect the packages that are included. This is not exactly hard to do in the simple case:
import std.stdio;
import std.conv;
My config:
set include=^\\s*import
set includeexpr=substitute(v:fname,'\\.','/','g')
Works great.
However, imports can have more complicated format, for example:
package import std.container, std.stdio = io, std.conv;
I was not able to find a simple way to parse this with include and includeexpr.
Also, there is a second problem: imports can have different access modifiers, like public and private. Vim scans included files recursively, so import statements from included files are parsed too. But I need to distinguish between the file I am currently working on and the files that are scanned automatically: in the current file all imports should be detected, but in other files only public import statements should add more files to the search.
Thanks for the help.
Update
It's a shame if this cannot be done without a full parser. Essentially, I only need two things:
the ability to return an array from includeexpr instead of a single file name
the ability to distinguish between includes in the current file and in other files
I think the only way to do it reliably is to use a complete parser and semantic analyzer. D Completion Daemon (https://github.com/Hackerpilot/DCD/tree/master/editors/vim) has a vim plugin and is not very resource-hungry.
Vim's include mechanism and 'includeexpr' are heavily influenced by the C programming language and only work for single files. You cannot return a list of filenames, so it won't be possible to support D's complex include mechanism with Vim. Use an IDE that is fully tailored to support the programming language, not a general-purpose text editor.