After parsing a file and removing some functions, is it possible to also remove any now-unused imports before writing the new file?
I recently had a need for something similar when writing a code generator tool. Here's an outline of my solution:
golang.org/x/tools/imports exports a Process function that takes a Go source file (as a []byte) and automatically "fixes" its imports: it removes unused imports and adds missing imports that are referenced in the source. The goimports command is based on this.
After editing my AST (e.g. removing some functions), I first print the AST to a Go source file string with go/format. Note that this does not write to a disk file; it produces the string form of the AST in memory.
Next I use imports.Process to "fix" the imports of the file. It is important to pass the true file path of the Go source file to Process, even if it does not exist on disk yet. The result of this call is a "fixed" Go source file as a []byte "string"; i.e. any unused imports are removed.
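A minimal sketch of these two steps (the helper function and its name are mine, for illustration; fset and file are assumed to be the token.FileSet and *ast.File from the original parse):

import (
    "bytes"
    "go/ast"
    "go/format"
    "go/token"
    "log"

    "golang.org/x/tools/imports"
)

// fixSource prints the edited AST to an in-memory source file, then
// "fixes" its imports. path is the true path of the file, even if
// nothing exists there on disk yet.
func fixSource(fset *token.FileSet, file *ast.File, path string) []byte {
    var buf bytes.Buffer
    if err := format.Node(&buf, fset, file); err != nil {
        log.Fatal(err)
    }
    fixed, err := imports.Process(path, buf.Bytes(), nil)
    if err != nil {
        log.Fatal(err)
    }
    return fixed
}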
Now I perform a "diff" of the imports between the original AST and this "fixed" source file. The diff is easier to perform between two ASTs, so I first parse the "fixed" source with parser.ParseFile, passing the true file path of the Go source file and a fresh token.NewFileSet(). Again, the Go source file does not have to exist on disk.
The AST diff simply compares the []*ast.ImportSpec slices (field ast.File.Imports) of the two file-root AST nodes. (My implementation uses a map to index each []*ast.ImportSpec first, making it easier to cross-check without a double-nested loop.)
Finally, now that I have a list of *ast.ImportSpec that have been removed (unused), I use golang.org/x/tools/go/ast/astutil to rewrite the original AST, using the cursor to remove these known *ast.ImportSpec nodes when visited.
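A hedged sketch of the diff-and-rewrite, continuing from the fixSource helper above (keying the map by quoted import path ignores renamed imports; a fuller version would include the import name):

import (
    "go/ast"
    "go/parser"
    "go/token"
    "log"

    "golang.org/x/tools/go/ast/astutil"
)

// removeUnusedImports deletes from the original AST any import that
// imports.Process dropped from the "fixed" source.
func removeUnusedImports(file *ast.File, path string, fixed []byte) {
    fixedFile, err := parser.ParseFile(token.NewFileSet(), path, fixed, parser.ParseComments)
    if err != nil {
        log.Fatal(err)
    }

    // Index the surviving imports by path to avoid a double-nested loop.
    kept := make(map[string]bool)
    for _, spec := range fixedFile.Imports {
        kept[spec.Path.Value] = true
    }

    // Visit the original AST and delete the import specs that were not kept.
    astutil.Apply(file, func(c *astutil.Cursor) bool {
        if spec, ok := c.Node().(*ast.ImportSpec); ok && !kept[spec.Path.Value] {
            c.Delete()
        }
        return true
    }, nil)
}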
An aside: the imports package cited above has an "internal" package that provides a FixImports function, which yields essentially the raw "diff" derived manually above. Unfortunately, we can't use it because it is marked internal.
I have a Go project in JetBrains GoLand where all files are runnable yet independent of each other.
But to make every file runnable, I need to make them all package main.
And I have several "Vertex" types defined in other files, and GoLand complains about it.
But it is still runnable; that's purely a complaint from GoLand.
Question -
Is there a better way to organize the files?
If not, is there a way to turn off the complaint from Goland?
Working with multiple files that declare the main() function in the same directory is not recommended in general, mainly due to problems similar to yours.
However, there are several ways to solve this.
You can use build constraints, also known as build tags, to separate the binaries at build time. When using them, the IDE will also need to be adjusted via Settings/Preferences | Build Tags & Vendoring. And, depending on how you build your application, you might also need to adjust the build command to add the corresponding tags to it.
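For example, a minimal sketch (the tag name command1 is an arbitrary choice for illustration):

//go:build command1

package main

// This file is only compiled when the tag is supplied,
// e.g.: go build -tags command1
func main() {
    // ...
}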
The other option, which I'd recommend in this case, is to move each main() defining file into a structure such as this:
/repository_root
    /cmd
        /command1
            command1.go (file holds the `main()` func)
        /command2
            command2.go (file holds the `main()` func)
        /command3
            command3.go (file holds the `main()` func)
    /some
        /package
            some_file.go
            some_other_file.go
    ....
    some_other_file.go
As an example of this layout, you can have a look at Delve, which uses a similar structure, but only has a single "command" in the cmd folder.
Lastly, sometimes it's possible to remove the duplication by moving it to a common file which holds the data type. But this isn't always ideal, and it can make the build command more complex, since you need to specify all the files that should be included in the build.
Edit:
You can read more about how to organize your Go packages/applications in these articles:
https://rakyll.org/style-packages/
https://medium.com/@benbjohnson/standard-package-layout-7cdbc8391fc1#.ds38va3pp
https://peter.bourgon.org/go-best-practices-2016/#repository-structure
To understand more about the design philosophy for Go packages: https://www.goinggo.net/2017/02/design-philosophy-on-packaging.html
I am developing a Go package which is a little bit complex, and thus I want to organize the source code into multiple directories.
However, I don't want the users of the package to have to use overly long imports. In any case, the internal structure of the package isn't their concern.
Thus, my package structure looks so:
subDir1
    subSubDir1
    subSubDir2
subDir2
    subSubDir3
...and so on. All of them have exported functions.
I would like to avoid making my users import
import (
    "mypackage/subDir1"
    "mypackage/subDir1/subSubDir2"
)
...and so on.
I just want that, if they want to use an exported function from my package, they can access all of them by simply importing mypackage.
I tried declaring package mypackage in all of the .go files, so I had source files in different directories but with the same package declaration.
In this case, the problem I ran into was that I simply couldn't import multiple directories belonging to the same package. It said:
./src1.go:6:15: error: redefinition of ‘mypackage’
"mypackage/mysubdir1"
^
./src1.go:4:10: note: previous definition of ‘mypackage’ was here
"mypackage"
^
./src1.go:5:15: error: redefinition of ‘mypackage’
"mypackage/mysubdir2"
^
./src1.go:4:10: note: previous definition of ‘mypackage’ was here
"mypackage"
^
Is this somehow possible?
You should not do this in any case, as the language spec allows a compiler implementation to reject such constructs. Quoting from Spec: Package clause:
A set of files sharing the same PackageName form the implementation of a package. An implementation may require that all source files for a package inhabit the same directory.
Instead "structure" your file names to mimic the folder structure; e.g. instead of files of
foo/foo1.go
foo/bar/bar1.go
foo/bar/bar2.go
You could simply use:
foo/foo1.go
foo/bar-bar1.go
foo/bar-bar2.go
Also, if your package is so big that you would need multiple folders just to "host" the files of the package implementation, you should really consider not implementing it as a single package but breaking it into multiple packages.
Also note that Go 1.5 introduced internal packages. If you create a special internal subfolder inside your package folder, you may create any number of subpackages inside it (even using multiple levels). Your package will be able to import and use them (or, to be more precise, all packages rooted at your package folder will be able to), but no one outside will; it would be a compile-time error.
E.g. you may create a foo package, have a foo/foo.go file, and a foo/internal/bar package. foo will be able to import foo/internal/bar, but e.g. boo won't. foo/baz will also be able to import and use foo/internal/bar because it's rooted at foo/.
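As a sketch of that layout (reusing the foo/bar names from above):

foo/
    foo.go (package foo; may import foo/internal/bar)
    baz/
        baz.go (package baz; may also import foo/internal/bar)
    internal/
        bar/
            bar.go (package bar; importable only by packages rooted at foo/)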
So you may use internal packages to break down your big package into smaller ones, effectively grouping your source files into multiple folders. The only thing you have to pay attention to is to put everything your package wants to export into the package itself and not into the internal packages (as those are not importable / visible from the "outside").
Inside your package source code, you have to differentiate your source directories with renamed imports. You can declare the same package mypackage in all of your source files (even if they are in different directories).
However, when you import them, you should give an individual name to each directory. In your source file src1.go, import the other directories this way:
import (
    "mypackage"
    submodule1 "mypackage/mySubDir"
)
And you will be able to reach the API defined in "mypackage" as mypackage.AnyThing(), and the API defined in mySubDir as submodule1.AnyThing().
The external world (i.e. the users of your package) will see all exported entities as mypackage.AnyThing().
Avoid namespace collisions, and use understandable, intuitive naming, as in the example.
Yes, this is doable without any problems: just invoke the Go compiler by hand, that is, not via the go tool.
But the best advice is: Don't do that. It's ugly and unnecessarily complicated. Just design your package properly.
Addendum (because the real intention of this answer seems to get lost sometimes, maybe because the irony is too subtle): Don't do that!! This is an incredibly stupid idea! Stop fighting the tools! Everybody will rightfully hate you if you do that! Nobody will understand your code or be able to compile it! Just because something is doable in theory doesn't mean it is a sensible idea in any way. Not even for "learning purposes"! You probably don't even know how to invoke the Go compiler by hand, and if you figure it out, it will be a major pain.
I am trying to add some support for the D programming language to my Vim config. For autocompletion, I need to detect packages that are imported. This is not exactly hard to do in the simple case:
import std.stdio;
import std.conv;
My config:
set include=^\\s*import
set includeexpr=substitute(v:fname,'\\.','/','g')
Works great.
However, imports can have more complicated format, for example:
package import std.container, std.stdio = io, std.conv;
I was not able to find a simple way to parse this with include and includeexpr.
Also, there is a second problem: imports can have different access modifiers, like public and private. Vim scans included files recursively, so import statements from included files are parsed too. But I need to distinguish between the file I am working on now and the files that are scanned automatically: in the current file all imports should be detected, but in other files only public import statements should add more files to the search.
Thanks for the help.
Update
It's a shame if this cannot be done without a full parser. Essentially, I only need two things:
the ability to return an array from includeexpr instead of one file name
the ability to distinguish between includes in the current file and in other files
I think the only way to do this reliably is to use a complete parser and semantic analyzer. The D Completion Daemon (https://github.com/Hackerpilot/DCD/tree/master/editors/vim) has a Vim plugin and is not very resource-hungry.
Vim's include mechanism and 'includeexpr' are heavily influenced by the C programming language and only work for single files. You cannot return a list of filenames, so it won't be possible to support D's complex include mechanism with Vim. Use an IDE that is fully tailored to support the programming language, not a general-purpose text editor.
Does a program compiled with the -g flag have its source code available for gdb to list even if the source files are unavailable? Also, when you set breakpoints at a line in a program with a complicated multi-source-file structure, do you need the names of the source files?
OP's 1st Question:
Does a program compiled with the -g flag have its source code available for gdb to list even if the source files are unavailable?
No. If there is no path to the sources, then you will not see the source.
OP's 2nd Question:
[...] when you set breakpoints at a line in a program with a complicated multi-source-file structure, do you need the names of the source files?
Not always. There are a few ways of setting breakpoints; the only two I remember are breaking on a line or breaking on a function. If you want to break on the first line of a function, use
break functionname
If the function lives in a module, use
break __modulename_MOD_functionname
The modulename and functionname should be lowercase, no matter how you've declared them in the code. Note the two underscores before the module name. If you are not sure, use nm on the executable to find out what the symbol is.
If you have the source code available and you are using a graphical environment, try ddd. It stops me swearing and takes a lot of guesswork out of gdb. If the source is available, it will show up straight away.
I'm using io/ioutil to read a small text file:
fileBytes, err := ioutil.ReadFile("/absolute/path/to/file.txt")
And that works fine, but this isn't exactly portable. In my case, the files I want to open are in my GOPATH, for example:
/Users/matt/Dev/go/src/github.com/mholt/mypackage/data/file.txt
Since the data folder rides right alongside the source code, I'd love to just specify the relative path:
data/file.txt
But then I get this error:
panic: open data/file.txt: no such file or directory
How can I open files using their relative path, especially if they live alongside my Go code?
(Note that my question is specifically about opening files relative to the GOPATH. Opening files using any relative path in Go is as easy as giving the relative path instead of an absolute path; files are opened relative to the process's current working directory. In my case, I want to open files relative to where the binary was compiled. In hindsight, this is a bad design decision.)
Hmm... the path/filepath package has Abs() which does what I need (so far) though it's a bit inconvenient:
absPath, _ := filepath.Abs("../mypackage/data/file.txt")
Then I use absPath to load the file and it works fine.
Note that, in my case, the data files are in a package separate from the main package from which I'm running the program. If it was all in the same package, I'd remove the leading ../mypackage/. Since this path is obviously relative, different programs will have different structures and need this adjusted accordingly.
If there's a better way to use external resources with a Go program and keep it portable, feel free to contribute another answer.
This seems to work pretty well:

import (
    "io/ioutil"
    "os"
)

pwd, _ := os.Getwd()
txt, _ := ioutil.ReadFile(pwd + "/path/to/file.txt")
I wrote gobundle to solve exactly this problem. It generates Go source code from data files, which you then compile into your binary. You can then access the file data through a VFS-like layer. It's completely portable, supports adding entire file trees, compression, etc.
The downside is that you need an intermediate step to build the Go files from the source data. I usually use make for this.
Here's how you'd iterate over all files in a bundle, reading the bytes:
for _, name := range bundle.Files() {
    r, _ := bundle.Open(name)
    b, _ := ioutil.ReadAll(r)
    fmt.Printf("file %s has length %d\n", name, len(b))
}
You can see a real example of its use in my GeoIP package. The Makefile generates the code, and geoip.go uses the VFS.
Starting from Go 1.16, you can use the embed package. This allows you to embed the files in the running go program.
Given the file structure:
-- main.go
-- data
     \- file.txt
You can reference the file using a go:embed directive:
package main

import (
    "embed"
    "fmt"
)

//go:embed data/file.txt
var content embed.FS

func main() {
    text, _ := content.ReadFile("data/file.txt")
    fmt.Println(string(text))
}
This program will run successfully regardless of where it is executed from. This is useful when the program could be invoked from multiple different locations, for instance, from a test directory.
I think Alec Thomas has provided The Answer, but in my experience it isn't foolproof. One problem I had with compiling resources into the binary is that compiling may require a lot of memory depending on the size of your assets. If they're small, then it's probably nothing to worry about. In my particular scenario, a 1MB font file was causing compilation to require somewhere around 1GB of memory to compile. It was a problem because I wanted it to be go gettable on a Raspberry Pi. This was with Go 1.0; things may have improved in Go 1.1.
So in that particular case, I opted to just use the go/build package to find the source directory of the program based on the import path. Of course, this requires that your targets have a GOPATH set up and that the source is available, so it isn't an ideal solution in all cases.
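A minimal sketch of that approach (the import path is the one from the question above; build.Import with the build.FindOnly mode resolves an import path to its source directory under GOPATH):

import (
    "go/build"
    "log"
    "path/filepath"
)

// dataFile locates a data file relative to the package's source directory.
func dataFile() string {
    pkg, err := build.Import("github.com/mholt/mypackage", "", build.FindOnly)
    if err != nil {
        log.Fatal(err)
    }
    return filepath.Join(pkg.Dir, "data", "file.txt")
}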