So I have the following files:
/src/baseService.thrift
/baseTypes.thrift
/baseSecurity.thrift
I want all of these thrift definitions to be built into one library. The top of each file looks like this:
baseService.thrift
==================
namespace java foo.bar
namespace cpp foo.bar
namespace js foo.bar
namespace go foo.bar
import "baseTypes.thrift"
baseTypes.thrift
================
namespace java foo.bar
namespace cpp foo.bar
namespace js foo.bar
namespace go foo.bar
baseSecurity.thrift
===================
namespace java foo.bar
namespace cpp foo.bar
namespace js foo.bar
namespace go foo.bar
import "baseTypes.thrift"
The problem is, how do I build all of these into one lib package? It works fine for java/cpp/js, but when I try to build for go it's a no go.
With thrift, you can't do a thrift --gen baz *.thrift; you have to do the files one at a time. For each language, we just run:
for f in *.thrift; do
  thrift -o myGenDir --gen go "$f"
done
(substituting the appropriate --gen option for each language)
For Python this is fine, because it puts every generated file in its own dir based on the filename [i.e. foo/bar/{filename}/ttypes.py]. For Java, it dumps all of the files into foo/bar/, but every class name is unique. For cpp, it dumps everything into the gen dir, but uniquely named per thrift file [so {filename}.h, {filename}.cpp]. For Go, however, it dumps everything into foo/bar like so:
/foo/bar/constants.go
/foo/bar/service.go
/foo/bar/service-remote/
/foo/bar/baz/ [for anything that has a namespace of foo.bar.baz]
/foo/bar/ttypes.go
The problem is that ttypes.go and (presumably) constants.go get overwritten by whatever is generated last in the for loop. Is there a way around this? It works for the other languages, so it seems like an oversight for Go. What am I missing? We've got lots of Thrift files with lots of stuff in them, and I'd rather not have to combine everything at the same package level into one thrift file.
The problem is that ttypes.go and (presumably) constants.go get overwritten by whatever is generated last in the for loop.
Yes, that's true.
Is there a way around this?
The most portable (cross-language) recommendation is not to do this. Instead:
put different IDL files into different namespaces
put everything belonging to one namespace into exactly one IDL file (sketched below)
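For example, a sketch of that layout using the question's own header style (the basetypes/baseservice sub-namespace names are invented for illustration; the java/cpp/js namespaces would be split the same way):
baseTypes.thrift
================
namespace go foo.bar.basetypes
baseService.thrift
==================
namespace go foo.bar.baseservice
include "baseTypes.thrift"
Since the Go generator maps each namespace to its own output directory (the question's own /foo/bar/baz/ example shows this), each file's ttypes.go and constants.go then land in separate packages and nothing is overwritten.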
The Thrift compiler offers a few switches for Go that may help you at least partially (you can list all available options for all languages by typing thrift --help at the command prompt):
go (Go):
package_prefix= Package prefix for generated files.
thrift_import= Override thrift package import path (default:git.apache.org/thrift.git/lib/go/thrift)
package= Package name (default: inferred from thrift file name)
These options are used like this:
thrift --gen go:package=mypack,package_prefix=myprefix
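For example, a sketch reusing the loop from the question, giving each IDL file its own Go package name derived from the file name (my own convention; whether this fully avoids the ttypes.go collision depends on your Thrift version, so treat it as a starting point):
for f in *.thrift; do
  thrift -o myGenDir --gen go:package="$(basename "$f" .thrift)" "$f"
done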
It works for the other languages, so it seems like an oversight for Go.
That might be your impression, but I'd recommend against relying on it if you are interested in cross-language compatibility. The behaviour is essentially the same with the other languages. Just as an example: I recently fixed (or rather, worked around) a problem with the Erlang tests, where I had to fight exactly this same issue.
Had the same problem recently. Putting every IDL file in a different namespace just doesn't work: the code looks bad, there are different namespaces everywhere that you have to remember, and it's annoying to add/remove namespaces for every little thing.
I only define a single namespace, so I came up with this: objects live in different files, but they're written as if they were in a single file. So there are no includes, no cross-file references, and no namespace declarations in every file; I put the namespaces in a separate file. Then my script joins everything into one big thrift file and compiles it. It does require you to put everything in the right order, but it works for the languages I need: Go, C#, and Java worked fine.
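A minimal sketch of that join-and-compile step (all file names are hypothetical, and the concatenation order matters because Thrift needs types declared before they are used):
cat namespaces.thrift types.thrift security.thrift service.thrift > combined.thrift
thrift -o myGenDir --gen go combined.thrift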
To me it also looks like an oversight; there is no reason for it to be like that just for Go. Maybe someday I will send a merge request with behaviour that better matches the other languages.
Related
I have a Go project in JetBrains GoLand where all files are runnable yet independent of each other.
But to make each one runnable, I need to declare them all as package main.
And I have several "Vertex" definitions spread across the other files, which GoLand complains about.
The code still runs; it's purely a complaint from GoLand.
Question:
Is there a better way to organize the files?
If not, is there a way to turn off the complaint from GoLand?
Working with multiple files that declare the main() function in the same directory is not recommended in general, mainly due to problems similar to yours.
However, there are several ways to solve this.
You can use build constraints, also known as build tags, to separate the binaries at build time. When using them, the IDE will also need to be adjusted via Settings/Preferences | Build Tags & Vendoring. And, depending on how you build your application, you might also need to adjust the build command to add the corresponding tags.
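A minimal sketch of the build-constraint approach (the command1 tag and the file contents are hypothetical):
//go:build command1

package main

import "fmt"

// With the constraint above, this main() is only compiled when the
// "command1" tag is set, so it no longer clashes with the main()
// functions in the sibling files.
func main() {
	fmt.Println("running command1")
}
You would then build with go build -tags command1, and give each of the other runnable files its own tag.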
The other option, which I'd recommend in this case, is to move each main()-defining file into a structure such as this:
/repository_root
/cmd
/command1
command1.go (file holds the `main()` func)
/command2
command2.go (file holds the `main()` func)
/command3
command3.go (file holds the `main()` func)
/some
/package
some_file.go
some_other_file.go
....
some_other_file.go
As an example of this layout, you can have a look at Delve, which uses a similar structure, but only has a single "command" in the cmd folder.
Lastly, sometimes it's possible to remove the duplication and move it to a common file which holds the data type, but it's not always ideal and can make the build command more complex, since you need to specify all the files that should be included in the build process.
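To illustrate that last point with hypothetical file names (vertex.go holding the shared Vertex type):
go run command1.go vertex.go   # file-based: every shared file must be listed explicitly
go run ./cmd/command1          # package-based layout above: just name the package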
Edit:
You can read more on how to organize your Go packages/applications below.
These articles will explain how to organize your Go packages:
https://rakyll.org/style-packages/
https://medium.com/@benbjohnson/standard-package-layout-7cdbc8391fc1#.ds38va3pp
https://peter.bourgon.org/go-best-practices-2016/#repository-structure
To understand more about the design philosophy for Go packages: https://www.goinggo.net/2017/02/design-philosophy-on-packaging.html
I am developing a Go package which is a little bit complex, and thus I want to organize the source code into multiple directories.
However, I don't want the users of the package to have to write overly long imports. Anyway, the internal structure of the package isn't their concern.
My package structure looks like this:
subDir1
subSubDir1
subSubDir2
subDir2
subSubDir3
...and so on. All of them contain exported identifiers.
I would like to avoid making my users write imports like
import (
	"mypackage/subDir1"
	"mypackage/subDir1/subSubDir2"
)
...and so on.
What I want is this: if users need an exported function from my package, they should be able to access all of them simply by importing mypackage.
I tried declaring package mypackage in all of the .go files, so I had source files in different directories but with the same package declaration.
The problem I ran into was that I simply couldn't import multiple directories belonging to the same package. It said:
./src1.go:6:15: error: redefinition of ‘mypackage’
"mypackage/mysubdir1"
^
./src1.go:4:10: note: previous definition of ‘mypackage’ was here
"mypackage"
^
./src1.go:5:15: error: redefinition of ‘mypackage’
"mypackage/mysubdir2"
^
./src1.go:4:10: note: previous definition of ‘mypackage’ was here
"mypackage"
^
Is it somehow possible?
You should not do this in any case, as the language spec allows a compiler implementation to reject such constructs. Quoting from Spec: Package clause:
A set of files sharing the same PackageName form the implementation of a package. An implementation may require that all source files for a package inhabit the same directory.
Instead "structure" your file names to mimic the folder structure; e.g. instead of files of
foo/foo1.go
foo/bar/bar1.go
foo/bar/bar2.go
You could simply use:
foo/foo1.go
foo/bar-bar1.go
foo/bar-bar2.go
Also, if your package is so big that you need multiple folders just to "host" the files of the package implementation, you should really consider not implementing it as a single package but breaking it into multiple packages.
Also note that Go 1.5 introduced internal packages. If you create a special internal subfolder inside your package folder, you may create any number of subpackages inside it (even using multiple levels). Your package will be able to import and use them (or, to be more precise, all packages rooted at your package folder will), but nobody outside will be able to; it would be a compile-time error.
E.g. you may create a foo package, have a foo/foo.go file, and a foo/internal/bar package. foo will be able to import foo/internal/bar, but e.g. boo won't. foo/baz will also be able to import and use foo/internal/bar, because it's rooted at foo/.
So you may use internal packages to break down your big package into smaller ones, effectively grouping your source files into multiple folders. The only thing you have to pay attention to is to put everything your package wants to export into the package itself and not into the internal packages (as those are not importable / visible from the "outside").
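A minimal sketch of such a layout (the module path example.com/foo and all identifiers are hypothetical):
// foo/internal/bar/bar.go
// (assuming foo/go.mod declares: module example.com/foo)
package bar

// Helper is importable by any package rooted at foo/, but not from outside.
func Helper() string { return "from internal/bar" }

// foo/foo.go
package foo

import "example.com/foo/internal/bar"

// Exported is part of foo's public API and may freely use the internal package.
func Exported() string { return bar.Helper() }
A package outside foo/ that tries to import example.com/foo/internal/bar fails at compile time with a "use of internal package ... not allowed" error.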
Inside your package source code, you have to differentiate the source directories by renamed imports. You can declare the same package mypackage in all of your source files (even if they are in different directories).
However, when you import them, you should give individual names to the directories. In your source file src1.go, import the other directories this way:
import (
	"mypackage"
	submodule1 "mypackage/mySubDir"
)
And you will be able to reach the API defined in "mypackage" as mypackage.AnyThing(), and the API defined in mySubDir as submodule1.AnyThing().
The external world (i.e. the users of your package) will see all exported entities as mypackage.AnyThing().
Avoid namespace collisions, and use understandable, intuitive naming, as in the example.
Yes, this is doable without any problems: just invoke the Go compiler by hand, that is, not via the go tool.
But the best advice is: Don't do that. It's ugly and unnecessarily complicated. Just design your package properly.
Addendum (because the real intention of this answer seems to get lost sometimes, maybe because the irony is too subtle): Don't do that!! This is an incredibly stupid idea! Stop fighting the tools! Everybody will rightfully hate you if you do that! Nobody will understand your code or be able to compile it! Just because something is doable in theory doesn't mean it is a sensible idea in any way. Not even for "learning purposes"! You probably don't even know how to invoke the Go compiler by hand, and if you figure it out, it will be a major pain.
I started learning Lisp and am looking for an efficient way to manage my personal libraries.
So I thought it would be useful to compile my library into a single fasl file (containing both the package information and the actual implementation) that I can afterwards load with (load "lib.fasl") to include the library. The problem is that the library consists of multiple .lisp files, let's say foo.lisp and bar.lisp.
I got as far as compiling them separately using (compile-file "foo.lisp") and (compile-file "bar.lisp"), respectively.
Obviously it would be rather messy to have to LOAD every file of the library (i.e. foo.fasl and bar.fasl) manually whenever I want to use it, so I am looking for something like
(link "foo.fasl" "bar.fasl" :output "lib.fasl")
or
(compile-file "foo.lisp" "bar.lisp" :output "lib.fasl")
to produce a single lib.fasl, which I can then LOAD.
I don't want to use core files, because I want to be able to combine my libraries flexibly (which would require creating a separate core file for every possible combination of libraries).
I searched both the SBCL user manual for Lisp functions that do this and the SBCL man page for CLI functionality, but I wasn't able to find anything.
I would prefer a solution using SBCL, but I will take anything else too.
Thanks in advance!
IIRC for SBCL you just concatenate the FASL files into one file.
ASDF 3 has a way to build a single FASL file out of a system, or out of a system with all its dependencies (see compile-bundle-op and monolithic-compile-bundle-op).
Its portable library uiop also provides a function combine-fasls, which supports multiple CL implementations.
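A short sketch of both routes (:my-system and the file names are hypothetical; treat this as a starting point rather than a recipe):
;; ASDF 3: compile a whole system into one bundle fasl
(asdf:operate 'asdf:compile-bundle-op :my-system)
;; the output path can be queried with
(asdf:output-files 'asdf:compile-bundle-op :my-system)

;; uiop: combine already-compiled fasls portably
(uiop:combine-fasls (list #p"foo.fasl" #p"bar.fasl") #p"lib.fasl")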
Use cat:
$ cat f1.fasl f2.fasl .... > mypackage.fasl
Note that the more common way is to create images.
You might also want to explore ASDF.
I am working on a project that uses the CMake build system. By default, CMake provides a nice framework for generating a single executable from a set of C/C++ code; the CMake function is called create_test_sourcelist. What it does is generate a C/C++ dispatcher with a single main entry point which calls other C/C++ code.
Therefore I have a bunch of C/C++ files with a function signature such as int TestFunctionality1(int argc, char *argv[]), which I'd like to keep as-is, unless of course it means even more work.
How can I keep this system in place and start using BOOST_CHECK? I could not figure out how to specify that the actual main entry point is not called int main(int argc, char *argv[]).
My goal is to have a framework for integration with Jenkins; since the project already uses Boost, I believe this should be doable without rewriting the existing CMake test suite and changing all the tests into independent main functions.
Unfortunately, it seems there is no straightforward and clean way to do that.
On one side, the only useful function of create_test_sourcelist is to generate a test driver: a pretty simple, naive C/C++ translation unit (with little ability to hack or extend it) based on ${cmake-prefix}/share/cmake/Templates/TestDriver.cxx.in (and there is no way to select some other template).
On the other side, Boost UTF offers its own test runner (which is equal to a test driver in CMake's terminology), but every variant of it (static, dynamic, single-header) contains a definition of main() one way or another (even in the case of an external test runner).
…so you end up with two main() functions and no ability to choose a single one.
Digging a little into the sources of create_test_sourcelist, I really wonder why they implemented it as a command and not as an ordinary (external) CMake module: it doesn't do anything special that couldn't be implemented using the CMake language. The command is really simplistic: it doesn't check that the required functions actually exist (you'll get compile errors if something is wrong), and there is no way to flexibly customize the output source file at all. All it does is strip paths and extensions from a list of source files and substitute them into the mentioned template using an ordinary configure_file()…
So, personally, I see no reason to use it at all. That is why I've done the same (but better ;) job in the module mentioned in the comment above.
If you still want to use that command: the generated test driver is completely useless if you want to use Boost UTF. You need to provide your own initialization function anyway (and it is not main()), where you can manually register your test cases into a master test suite (or organize your tests into some more complex tree). In that case there is absolutely no reason to use create_test_sourcelist! All you can get from it is a list of sources to pass to add_executable()… but that is much easier to do with set()… The command can't even help you with the list of test functions to call (actually a list of file names without extensions; it is used internally and not exported). Do you still want to use that command?
So, in Project AB I have FileA.fs and FileB.fs. FileB uses definitions from FileA, and both FileA and FileB use definitions from ProjectZ (written in C#).
In FileA.FS, I have:
#if COMPILED
namespace ProjectAB
#else
#I "bin\debug"
#r "ProjectZ.dll"
#endif
...which works how it's supposed to -- I can run the whole file in F#-Interactive and it's great.
In FileB.fs, my header is:
#if COMPILED
module ProjectAB.ModuleB
#else
#load "FileA.fs"
#I "bin\debug"
#r "ProjectZ.dll"
#endif
But when I run this (from FileB), I get the error:
FileA.fs(6,1): error FS0222: Files in libraries or multiple-file applications must begin with a namespace or module declaration, e.g. 'namespace SomeNamespace.SubNamespace' or 'module SomeNamespace.SomeModule'. Only the last source file of an application may omit such a declaration.
According to the fsi.exe reference, the #load directive "Reads a source file, compiles it, and runs it". But it seems like it must be doing so without the COMPILED symbol defined, because it doesn't see the "namespace ProjectAB" declaration.
How can I set up my headers so that I can run either file in F#-interactive?
Edit: Per latkin's answer below, I created a script as the last file in the project, _TestScript.fsx. I removed all the conditional compilation directives from the other files and put this at the top of the .fsx file:
#if INTERACTIVE
#I "bin\debug"
#r "ProjectZ.dll"
#load "FileA.fs"
#load "FileB.fs"
#endif
When I run this in the interactive window, it correctly loads ProjectZ, FileA, and FileB for me to access.
However, in _TestScript.fsx I get red squiggly lines and no IntelliSense for any of the functions/types from the referenced files (including the open statements).
Is there something else I need to set up in the script file to make IntelliSense work? (The answer might be pretty basic, since I have not used .fsx files before.)
I don't think there is a way to do this smoothly. A few things to consider:
INTERACTIVE is always defined when you are being processed by fsi.exe, whether you are a .fsx, .fs, #load'ed, whatever. COMPILED is similarly always defined when you are being processed by fsc.exe. I can see how the quoted phrase from the docs maybe doesn't make this totally crystal clear.
You can only declare namespaces in fsi from a #load'ed file
So if you want your file to declare a namespace and also to work as a single file in the interactive, the namespace has to be #ifdef'ed out. But that also means the namespace will be #ifdef'ed out when the file is #load'ed...
You might be able to work around this by conditionally declaring it as a module, not a namespace. Or perhaps creating additional, more granular defines. It will be tricky.
Trying to get source files to work properly both as part of a compiled library and as single-file scripts is not easy, and I don't think the tooling was designed with this scenario in mind. More common is to have all of your library files behave purely as library files, and then use dedicated standalone scripts which #load the .fs files they need. This keeps the driving code and the library code separate, and things fit together more cleanly.