I have a Go project in JetBrains GoLand where all files are runnable yet independent of each other.
But to make each of them runnable, I need to declare them as package main.
I also have "Vertex" defined elsewhere, in another file, and GoLand complains about it.
The code is still runnable, though; it's purely a complaint from GoLand.
Questions:
Is there a better way to organize the files?
If not, is there a way to turn off this complaint in GoLand?
Keeping multiple files that each declare a main() function in the same directory is generally not recommended, mainly because of problems like yours.
However, there are several ways to solve this.
You can use build constraints, also known as build tags, to separate the binaries at build time. When using them, the IDE will also need to be adjusted using Settings/Preferences | Build Tags & Vendoring. And, depending on how you build your application, you might also need to adjust the build command to pass the corresponding tags.
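As a minimal sketch of this approach (the tag name and file name here are hypothetical), each runnable file would carry its own constraint:

// command1.go
// +build command1

package main

import "fmt"

func main() {
    fmt.Println("running command1")
}

You would then build a single binary with go build -tags command1; files constrained to other tags are excluded from that build. (Go 1.17 and later also accept the //go:build command1 spelling.)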
The other option, which I'd recommend in this case, is to move each main()-defining file into a structure like this:
/repository_root
    /cmd
        /command1
            command1.go (file holds the `main()` func)
        /command2
            command2.go (file holds the `main()` func)
        /command3
            command3.go (file holds the `main()` func)
    /some
        /package
            some_file.go
            some_other_file.go
    ....
    some_other_file.go
As an example of this layout, you can have a look at Delve, which uses a similar structure, but only has a single "command" in the cmd folder.
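For instance, here is a sketch of command1.go under this layout, assuming a hypothetical module path example.com/myproject and a geometry package that holds the Vertex type from the question:

// cmd/command1/command1.go
package main

import (
    "fmt"

    "example.com/myproject/geometry"
)

func main() {
    v := geometry.Vertex{X: 1, Y: 2}
    fmt.Println(v)
}

// geometry/vertex.go
package geometry

// Vertex is defined once and shared by every command.
type Vertex struct {
    X, Y float64
}

Each command then builds independently, e.g. go build ./cmd/command1, while shared types live in ordinary packages that every command imports.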
Lastly, it's sometimes possible to remove the duplication by moving the shared data type into a common file, but that's not always ideal and can make the build command more complex, since you'd need to specify all the files that should be included in the build.
Edit:
These articles explain how to organize your Go packages and applications:
https://rakyll.org/style-packages/
https://medium.com/@benbjohnson/standard-package-layout-7cdbc8391fc1#.ds38va3pp
https://peter.bourgon.org/go-best-practices-2016/#repository-structure
To understand more about the design philosophy for Go packages: https://www.goinggo.net/2017/02/design-philosophy-on-packaging.html
Related
I am developing a Go package which is a little bit complex, and thus I want to organize the source code into multiple directories.
However, I don't want the users of the package to have to use overly long imports; the internal structure of the package isn't their concern anyway.
Thus, my package structure looks like this:
subDir1
    subSubDir1
    subSubDir2
subDir2
    subSubDir3
...and so on. All of them contain exported functions.
I would like to avoid forcing my users to write imports like
import (
    "mypackage/subDir1"
    "mypackage/subDir1/subSubDir2"
)
...and so on.
I want users who need any exported function from my package to get all of them simply by importing mypackage.
I tried declaring package mypackage in all of the .go files, so I had source files in different directories but with the same package declaration.
The problem I ran into was that I simply couldn't import multiple directories belonging to the same package. The compiler said:
./src1.go:6:15: error: redefinition of ‘mypackage’
"mypackage/mysubdir1"
^
./src1.go:4:10: note: previous definition of ‘mypackage’ was here
"mypackage"
^
./src1.go:5:15: error: redefinition of ‘mypackage’
"mypackage/mysubdir2"
^
./src1.go:4:10: note: previous definition of ‘mypackage’ was here
"mypackage"
^
Is it somehow possible?
You should not do this in any case, as the language spec allows a compiler implementation to reject such constructs. Quoting from Spec: Package clause:
A set of files sharing the same PackageName form the implementation of a package. An implementation may require that all source files for a package inhabit the same directory.
Instead "structure" your file names to mimic the folder structure; e.g. instead of files of
foo/foo1.go
foo/bar/bar1.go
foo/bar/bar2.go
You could simply use:
foo/foo1.go
foo/bar-bar1.go
foo/bar-bar2.go
Also, if your package is so big that you would need multiple folders just to host the files of the package implementation, you should really consider not implementing it as a single package, but breaking it into multiple packages.
Also note that Go 1.5 introduced internal packages. If you create a special internal subfolder inside your package folder, you may create any number of subpackages inside it (even using multiple levels). Your package (and, to be more precise, all packages rooted at your package folder) will be able to import and use them, but no one outside will be able to do so; it would be a compile-time error.
E.g. you may create a foo package, have a foo/foo.go file, and foo/internal/bar package. foo will be able to import foo/internal/bar, but e.g. boo won't. Also foo/baz will also be able to import and use foo/internal/bar because it's rooted at foo/.
So you may use internal packages to break down your big package into smaller ones, effectively grouping your source files into multiple folders. The only thing you have to pay attention to is to put everything your package wants to export into the package itself and not into the internal packages (as those are not importable / visible from the outside).
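A minimal sketch of that layout, using the names from the example (Exported and Helper are made-up functions, and the import path assumes foo sits at the root of your import path):

// foo/foo.go
package foo

import "foo/internal/bar"

// Exported is part of foo's public API and may freely use internal packages.
func Exported() string {
    return bar.Helper()
}

// foo/internal/bar/bar.go
package bar

// Helper is importable only by packages rooted at foo/.
func Helper() string {
    return "from internal/bar"
}

A package outside foo/ (like the boo package above) that tried to import "foo/internal/bar" would fail to compile.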
Inside your package source code, you have to differentiate your source directories with renamed imports. You can declare the same package mypackage in all of your source files (even if they are in different directories).
However, when you import them, you should give individual names to the directories. In your source file src1.go, import the other directories this way:
import (
    "mypackage"
    submodule1 "mypackage/mySubDir"
)
You will then be able to reach the API defined in "mypackage" as mypackage.AnyThing(), and the API defined in mySubDir as submodule1.AnyThing().
The external world (i.e. the users of your package) will see all exported entities as mypackage.AnyThing().
Avoid namespace collisions, and use more understandable, intuitive naming than in this example.
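For completeness, a sketch of how such a renamed import reads from the caller's side (AnyThing is the placeholder name used in this answer, and the caveat from the spec quoted above still applies to this whole same-package-in-multiple-directories setup):

package main

import (
    "mypackage"
    submodule1 "mypackage/mySubDir"
)

func main() {
    mypackage.AnyThing()
    submodule1.AnyThing()
}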
Yes, this is doable without any problems; just invoke the Go compiler by hand, that is, not via the go tool.
But the best advice is: Don't do that. It's ugly and unnecessarily complicated. Just design your package properly.
Addendum (because the real intention of this answer seems to get lost sometimes, maybe because irony is too subtle): Don't do that!! This is an incredibly stupid idea! Stop fighting the tools! Everybody will rightfully hate you if you do that! Nobody will understand your code or be able to compile it! Just because something is doable in theory doesn't mean it is a sensible idea in any way. Not even for "learning purposes"! You probably don't even know how to invoke the Go compiler by hand, and if you figure it out, it will be a major PITA.
I have a #define ONB in a C file which (with several #ifndef...#endifs) changes many aspects of a program's behavior. Now I want to change the project makefile (or even better, Makefile.am) so that if ONB is defined and some other options are set accordingly, it runs some special commands.
I searched the web, but all I found was checking for environment variables... So is there a way to do this? Or must I change the C code to check environment variables instead? (I'd prefer not to change the code because it is a really big project and I don't know everything about it.)
Questions: my reputation is insufficient to ask in comments, so I will have to ask here:
How and when is the define added to the target in the first place?
Do you essentially want a way to query the binaries after compilation to determine whether a particular define was used?
It would be helpful if you could give a concrete example, i.e. what are the special commands you want run, and what .c and .h files are involved?
Possible solution: depending on what you need, you could use LLVM tools to generate and examine the AST of your code to see if a define is used. But this seems a little like over-engineering.
Possible solution: you could also use #includes to pull in .c or header files and have a conditional error generated, or compile (to a .o); then, if the compile fails, you know whether it is defined. But this has its own issues, depending on how things are set up in your makefile.
OK, so I'm getting into kernel module development, and the guides all pretty much use the same basic makefile that contains this line:
make -C /lib/modules/`uname -r`/build M=$(PWD) modules
So my questions are:
Why is a makefile calling make? That seems recursive.
What is the M for? I can't find a -M flag for make in any of the man pages.
Recursive use of make is a common technique for introducing modularity into your build process. For example, in your particular case, you could support a new architecture by putting the relevant component in a folder whose name matches the uname -r output for that architecture, and you wouldn't have to change the master makefile at all. Another example, if you make one component modular, it makes it much easier to reuse in another project without making large changes to the new project's master makefile.
Just like it can be helpful to separate your code into files, modules, and classes (the latter for languages other than C, obviously), it can be helpful to separate your build process into separate modules. It's just a form of organization to make managing your projects easier. You might group related functionality into separate libraries, or plugins, and build them separately. Different individuals or teams could work on the separate components without all of them needing write access to the master makefile. You might want to build your components separately so that you can test them separately.
It's not impossible to do all of these things without recursive use of make, of course, but it's one common way of organizing things. Even if you don't use make recursively, you're still going to end up with a bunch of different component "makefiles" on a large project - they'll just be imported or included into the master makefile, rather than standing alone by themselves and being run via separate invocations of make.
Creating and maintaining a single makefile for a very large project is not a trivial matter. However, as the article Recursive make considered harmful describes, recursive use of make is not without its own problems, either.
As for your M, that's just overriding a variable at the command line. Somewhere in the makefile(s) the variable M will be used, and if you specify its value at the command line in this way, then the value you specify will override any other assignments to that variable that may occur in the makefile(s).
I know this might be controversial or not very broad but I'm going to try to be very specific and relate to other questions.
OK, so when I write a Go program, what should I be thinking about in terms of how to organize my project? (E.g. should I think, "OK, I'm going to have controllers of some sort, so I should have a controllers subdirectory to handle that"?)
How should I structure a package?
For example, in the current program I'm working on, I'm trying to make a SteamBot using this package.
But while I'm writing it, I don't know if I should split certain methods into their own files, e.g. I'd have something like
func (t *tradeBot) acceptTrade() {}
func (t *tradeBot) declineTrade() {}
func (t *tradeBot) monitorTrade() {}
func (t *tradeBot) sendTrade() {}
Each method is going to have quite a bit of code, so should I move each method into its own file, or just have one file with 3000 lines of code in it?
Also, should I use global variables so that I can set a variable once and then use it in multiple functions, or is this bad and I should pass the variables as arguments?
Also should I order my files like:
package
imports
constants
variables
functions
methods
Or do I just put things where I need them?
The first place to look for inspiration is the Go standard library. The Go standard library is a pretty good guide of what "good" Go code should look like. A library isn't quite the same as an application, but it's definitely a good introduction.
In general, you would not break up every method into its own file. Go tends to prefer larger files that cover a topic. 3000 lines seems quite large, though. Even the net/http server.go is only 2200 lines, and it's pretty big.
Global mutable variables are just as bad in Go as in any language. Since Go relies so heavily on concurrent programming, global mutable variables are quite horrible. Don't do that. One common exception is sync structures like sync.Pool or sync.Once, which sometimes are package global, but are also designed to be accessed concurrently. There are also sometimes "Default" versions of structures, like http.DefaultClient, but you can still pass explicit ones to functions. Again, look through the Go standard library and see what is common and what is rare.
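As a sketch of that "Default" pattern (the greet package and its names are hypothetical, modeled on http.DefaultClient and http.Get):

package greet

import "fmt"

// Greeter carries its state explicitly instead of hiding it in mutable globals.
type Greeter struct {
    Prefix string
}

func (g *Greeter) Greet(name string) string {
    return fmt.Sprintf("%s, %s!", g.Prefix, name)
}

// DefaultGreeter is a package-level convenience instance, like http.DefaultClient.
// Callers that need different behavior construct and pass their own Greeter.
var DefaultGreeter = &Greeter{Prefix: "Hello"}

// Greet uses the default instance, mirroring how http.Get uses http.DefaultClient.
func Greet(name string) string {
    return DefaultGreeter.Greet(name)
}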
Just a few tips that you hopefully find useful:
Organize code into multiple files and packages by features, not by layers. This becomes more important the bigger your application becomes. package controllers with one or two controllers is probably ok, but putting hundreds of unrelated controllers in the same package doesn't make any sense. The following article explains it very well: http://www.javapractices.com/topic/TopicAction.do?Id=205
Global variables sometimes make code easier to write; however, they should be used with care. I think unexported global variables for things like loggers, debug flags, etc. are OK.
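For instance, unexported package-level declarations along these lines (a hypothetical snippet) are usually considered acceptable:

package mybot

import (
    "flag"
    "log"
    "os"
)

var (
    // logger and debug are unexported, so only this package can touch them.
    logger = log.New(os.Stderr, "mybot: ", log.LstdFlags)
    debug  = flag.Bool("debug", false, "enable verbose logging")
)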
I am working on a project that uses the CMake build system. Out of the box, CMake provides a nice framework for generating a single executable from a set of C/C++ code: the function create_test_sourcelist. It generates a C/C++ dispatcher with a single main entry point which calls the other C/C++ code.
Therefore I have a bunch of C/C++ files with a function signature such as: int TestFunctionality1(int argc, char *argv[]), which I'd like to keep as-is, unless of course it means even more work.
How can I keep this system in place and start using BOOST_CHECK? I could not figure out how to specify that the actual entry point is not called int main(int argc, char *argv[]).
My goal is to have a framework for integration with Jenkins; since the project already uses Boost, I believe this should be doable without rewriting the existing CMake test suite and changing all tests into independent main functions.
Unfortunately, it seems there is no straightforward and clean way to do that.
On one side, the only useful thing create_test_sourcelist does is generate a test driver: a pretty simple and naive C/C++ translation unit (with no ability to hack or extend it) based on ${cmake-prefix}/share/cmake/Templates/TestDriver.cxx.in (and there is no way to select some other template).
On the other side, Boost UTF offers its own test runner (the equivalent of a test driver in CMake's terminology), but every variant of it (static, dynamic, single-header) contains a definition of main() in some way or another (even in the case of an external test runner).
…so you end up with two main() functions and no ability to choose a single one.
Digging a little into the sources of create_test_sourcelist, I really wonder why they implemented it as a built-in command and not as an ordinary (external) CMake module: it does nothing special that couldn't be implemented in the CMake language. This command is really dumb: it doesn't check that the required functions actually exist (you'll get compile errors if something is wrong), and there is no way to flexibly customize the output source file at all. All it does is strip paths and extensions from a list of source files and substitute them into the mentioned template using an ordinary configure_file()…
So, personally, I see no reason to use it at all. That is why I've done the same (but better ;) job in the module mentioned in the comment above.
If you still want to use that command: the generated test driver is completely useless if you want to use Boost UTF. You need to provide your own initialization function anyway (and it is not main()), where you can manually register your test cases into a master test suite (or organize your tests into some more complex tree). In that case there is absolutely no reason to use create_test_sourcelist! All you can get from it is a list of sources that needs to be given to add_executable()… but that is much easier to do with set()… This command can't even help you with the list of test functions to call (a list of filenames without extensions, actually); it is used internally and not exported. Do you still want to use that command?