The task I'm using looks something like this:
require "rake/packagetask"

Rake::PackageTask.new("deploy", "0.1.2") do |p|
  p.need_zip = true
  p.package_files.include("build/**/*")
end
This generates a deploy-0.1.2.zip file. I would like to generate a different package for each folder in the build directory, for example:
build/
├── en/
├── es/
├── de/
└── fr/
This should generate deploy-en-0.1.2.zip, deploy-es-0.1.2.zip, deploy-de-0.1.2.zip, and deploy-fr-0.1.2.zip.
The recommended approach is to handle locales with i18n, not to build locale-specific packages: ship one package and determine the locale at runtime if needed, or fall back to the locale of the workstation or server, depending on how the code is executed (CLI, library, etc.).
Otherwise, you could create n clones of your project, one per language, and put a Rakefile in the root folder that explicitly or recursively calls the rake packaging target for each gem build. You would need n language-specific gemspecs (one per language), and you would also need a copy target in your Rakefile to centralize the packages, because each language-specific gem folder builds its own pkg folder.
Personally I think it's not a good idea: you would have to maintain n similar copies of the same application.
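That said, if you do want one package per locale, a minimal sketch of the idea (assuming the standard Rake::PackageTask API and that the build/<locale> folders already exist when the Rakefile is loaded):

require "rake/packagetask"

# One package task per locale folder under build/ (hypothetical layout).
Dir.glob("build/*").select { |d| File.directory?(d) }.each do |dir|
  locale = File.basename(dir) # e.g. "en", "es", "de", "fr"
  Rake::PackageTask.new("deploy-#{locale}", "0.1.2") do |p|
    p.need_zip = true
    p.package_files.include("#{dir}/**/*")
  end
end

Running rake package would then produce deploy-en-0.1.2.zip and friends, since each PackageTask hooks into the package task.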
I successfully used rules_go to build a gRPC service:
go_proto_library(
    name = "processor_go_proto",
    compilers = ["@io_bazel_rules_go//proto:go_grpc"],
    importpath = "/path/to/proto/package",
    proto = ":processor_proto",
    deps = ["//services/shared/proto/common:common_go_proto"],
)
However, I'm not sure how to import the resulting file in VSCode. The generated file is nested under bazel-bin and under the original proto file path, so to import it, it seems like I would need to write out the entire path (including the bazel-bin part) to the generated Go file. To my understanding, there doesn't seem to be a way to instruct VSCode to look under certain folders that only contain Go packages/files; everything seems to need a go.mod file. This makes it quite difficult to develop in.
For clarity, my directory structure looks something like this:
WORKSPACE
bazel-bin
  - path
    - to
      - generated_Go_file.go
go.mod
go.sum
proto
  - path
    - to
      - gRPC_proto.proto
main.go
main.go should use the generated_Go_file.go.
Is there a way around this?
I don't use Bazel and so cannot help with the Bazel configuration. It's likely there is a way to specify the generated code's location so that you can revise this to reflect your preference.
The layout you outline for the generated code is workable, though, and a common pattern. Often the generated proto|gRPC code is placed in a module's gen subdirectory.
This is somewhat similar to vendoring, where your code incorporates what may often be a 3rd-party's stubs (client|server). The stubs must reflect the proto(s) package(s) and, when these are 3rd-party, using gen or bazel-bin provides a way to keep potentially multiple namespaces discrete.
You're correct that the import in main.go could (!) be prefixed with the module name from go.mod (its first line) followed by the folder path to the generated code. This is standard Go packaging and treats the generated code in a similar way to vendored modules.
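For example (hypothetical module and folder names), if go.mod begins with module example.com/app, the import in main.go could look like:

import (
    // hypothetical: module path from go.mod + folder path to the generated code
    pb "example.com/app/bazel-bin/path/to"
)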
Another approach is to use|place the generated code in a different module.
For code generated from 3rd-party protos, this may be preferable and the generated code may be provided by the 3rd-party in a module that you can go get or add to your go.mod.
An example of this approach is Google's Well-Known Types. The proto sources are bundled with protoc (lib directory) and, when protoc compiles sources that reference any of these, the generated Go code includes imports that reference a Google-hosted location of the generated code (!) for these types (google.golang.org/protobuf/types/known).
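For example, a .proto file that uses google.protobuf.Timestamp compiles to Go code carrying an import of the hosted package:

import "google.golang.org/protobuf/types/known/timestamppb"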
Alternatively, you can replicate this behavior without having to use an external repo. The bazel-bin folder must be outside of the current module. Each distinct module in bazel-bin would need its own go.mod file. In your code's go.mod file, you would then reference the module locations (one or more) in a require block. You don't need to publish the modules to an external repo; you can simply add a replace ( name => path/to/module ) directive to provide a local reference.
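A sketch of that wiring, with hypothetical module names and paths:

// go.mod of your code's module
module example.com/app

go 1.18

// the generated code lives in its own module with its own go.mod
require example.com/gen v0.0.0

// local reference instead of publishing example.com/gen to a repo
replace example.com/gen => ../bazel-bin/path/to/module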
As the title says, is it possible to generate code (with something like //go:generate) after a module dependency is downloaded?
Specifically, let's say there's a repo example.com/protobuf containing some .proto files and a few .sh scripts for generating bindings in different languages, plus a go.mod so it can be used as a dependency from go like so:
module example.com/application

go 1.18

require (
    example.com/protobuf v1.0.0
)
However, the generated .go files are not included in this repo; they have to be generated using one of the .sh scripts. So if you try this, you get an error like module example.com/protobuf@latest found (v1.0.0), but does not contain package example.com/protobuf/foo
Is there a way around this without resorting to eg. git submodules?
No, this is not possible, for obvious security reasons: it would mean that downloading a dependency could execute arbitrary code on your machine.
I am developing multiple microservices, which require different modules (these shall be available like modules on GitHub, but private).
My first tests with Go were all located within one package, which gets quite messy after some time
I'm coming from the Java side of programming - with loads of packages - which keep stuff clear and clean.
(This also works with public modules like on github.com/xyz/module1 github.com/xyz/module2 github.com/xyz/module3)
I just need this for private modules - how can I do so?
This is what I have tried so far:
Directory of my go sources:
C:\my\path\to\go\src\
In this dir I have multiple subdirs containing modules (actually more than listed here)
my-module1
my-module2
my-module3
For each folder I called go mod init, but I get the message that
package my-module1/util is not in GOROOT (c:\go\src\my-module1\util)
which is obviously right, as my private libraries reside in C:\my\path\to\go\src\
Importing packages from GitHub with go get ... works without trouble (those packages are downloaded and copied to c:\go\src).
Working with all files in one folder works, but it is not the desired solution (I need to create multiple microservices, so I want to be able to create different projects with custom executables and/or tests).
What am I doing wrong?
If more information is needed I will provide it - just let me know what you need.
NOTE: packages without a Go file in package main cannot be installed via go install. This system looks pretty confusing to me, as modules cannot be found...
Finally I created a multi-module setup: I have a root folder (let's call it root) which contains a go.mod file.
This defines the root module; it requires all my submodules and replaces their absolute paths with relative ones.
root
  * aa
  * ab
  * bc
The root go.mod is defined as:
module root

go 1.15

require (
    root/aa v0.0.0
    root/ab v0.0.0
    root/bc v0.0.0
)

replace (
    root/aa => ./aa
    root/ab => ./ab
    root/bc => ./bc
)
All the submodules have a very simple go.mod (identical for all of them, as long as you do not have any dependencies):
module root/aa
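With the replace directives in place, code in the root module can import packages from the submodules directly. A minimal sketch (hypothetical package and function names):

package main

import (
    "fmt"

    "root/aa/util" // hypothetical package inside the aa submodule
)

func main() {
    fmt.Println(util.Greeting()) // hypothetical function exported by root/aa/util
}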
Facebook's Pysa tool looks useful. In the Pysa tutorial exercises, they refer to files provided in the pyre-check repository, using a relative path to reach outside of the exercise directory.
https://github.com/facebook/pyre-check/blob/master/pysa_tutorial/exercise1/.pyre_configuration
{
  "source_directories": ["."],
  "taint_models_path": ["."],
  "search_path": [
    "../../stubs/"
  ],
  "exclude": [
    ".*/integration_test/.*"
  ]
}
There are stubs for Django provided in the pyre-check repository. If I know the path where pyre-check is installed, I can hard-code it in my .pyre_configuration and get something working, but another developer may install pyre-check differently.
Is there a better way to refer to these provided stubs or should I copy them to the repository I'm working on?
Many projects have a standard development environment, allowing for hard coded paths in the .pyre_configuration file. These will usually point into the venv, or some other standard install location for dependencies.
For projects without a standard development environment, you could try incorporating pyre init into your setup scripts. pyre init will set up a fresh .pyre_configuration file with paths that correspond to the current install of pyre. For additional configuration you want to add on top of the generated .pyre_configuration file (such as a pointer to local taint models), you can hand-write a .pyre_configuration.local, which will act as an overlay and overwrite/add to the content of .pyre_configuration.
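For instance, a hand-written .pyre_configuration.local overlay (hypothetical paths) can be as small as:

{
  "source_directories": ["."],
  "taint_models_path": ["./taint_models"]
}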
Pyre-check looks for the stubs in the directory specified by the typeshed directive in the configuration file.
The easiest way is to move the Django stubs provided in the pyre-check repository to the typeshed directory that is inside the pyre-check directory.
For example, if you have installed pyre-check to the ~/.local/lib directory, move the django directory from ~/.local/lib/pyre_check/stubs to ~/.local/lib/pyre_check/typeshed/third_party/2and3/ and make sure your .pyre_configuration file looks like this:
{
  "source_directories": ["~/myproject"],
  "taint_models_path": "~/myproject/taint",
  "typeshed": "~/.local/lib/pyre_check/typeshed"
}
In this case, your Django stubs directory will be ~/.local/lib/pyre_check/typeshed/third_party/2and3/django.
Pyre-check uses the following algorithm to traverse across the typeshed directory:
If it contains a third_party subdirectory, it uses a legacy method: it enters just the two subdirectories stdlib and third_party, and there it skips any subdirectory whose name starts with 2, except 2and3, looking for modules in the remaining subdirectories, e.g. in third_party/2and3/.
Otherwise, it enters the subdirectories stubs and stdlib, and looks for modules there, e.g. in stubs/, but not in stubs/2and3/.
That's why specifying multiple paths may be perplexing and confusing; the easiest way is to set the typeshed directory to ~/.local/lib/pyre_check/typeshed/ and move django to third_party/2and3, so it ends up at ~/.local/lib/pyre_check/typeshed/third_party/2and3/django.
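The resulting layout would then be (paths from the example above):

~/.local/lib/pyre_check/typeshed/
  stdlib/
  third_party/
    2and3/
      django/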
Also, don't forget to copy the .pysa files that you need into the taint_models_path directory. Don't point it at Pyre-check's own directory; create your own new directory and copy only those files that are relevant to you.
I'm starting to learn Ruby. I'm also a day-to-day C++ dev.
For C++ projects I usually go with the following dir structure:
/
-/bin <- built binaries
-/build <- build time temporary object (eg. .obj, cmake intermediates)
-/doc <- manuals and/or Doxygen docs
-/src
--/module-1
--/module-2
-- non module specific sources, like main.cpp
- IDE project files (.sln), etc.
What dir layout for Ruby (non-Rails, non-Merb) would you suggest to keep it clean, simple and maintainable?
As of 2011, it is common to use jeweler instead of newgem as the latter is effectively abandoned.
Bundler includes the necessary infrastructure to generate a gem:
$ bundle gem --coc --mit --test=minitest --exe spider
Creating gem 'spider'...
MIT License enabled in config
Code of conduct enabled in config
create spider/Gemfile
create spider/lib/spider.rb
create spider/lib/spider/version.rb
create spider/spider.gemspec
create spider/Rakefile
create spider/README.md
create spider/bin/console
create spider/bin/setup
create spider/.gitignore
create spider/.travis.yml
create spider/test/test_helper.rb
create spider/test/spider_test.rb
create spider/LICENSE.txt
create spider/CODE_OF_CONDUCT.md
create spider/exe/spider
Initializing git repo in /Users/francois/Projects/spider
Gem 'spider' was successfully created. For more information on making a RubyGem visit https://bundler.io/guides/creating_gem.html
Then, in lib/, you create modules as needed:
lib/
  spider/
    base.rb
  crawler/
    base.rb
  spider.rb

with lib/spider.rb pulling in the submodules:

require "spider/base"
require "crawler/base"
Read the manual page for bundle gem for details on the --coc, --exe and --mit options.
The core structure of a standard Ruby project is basically:
lib/
  foo.rb
  foo/
share/
  foo/
test/
  helper.rb
  test_foo.rb
HISTORY.md (or CHANGELOG.md)
LICENSE.txt
README.md
foo.gemspec
The share/ directory is rare and is sometimes called data/ instead. It is for general-purpose non-Ruby files. Most projects don't need it, and even when they do, everything is often just kept in lib/, though that is probably not best practice.
The test/ directory might be called spec/ if BDD is being used instead of TDD, though you might also see features/ if Cucumber is used, or demo/ if QED is used.
These days foo.gemspec can just be .gemspec, especially if it is not manually maintained.
If your project has command line executables, then add:
bin/
  foo
man/
  foo.1
  foo.1.md or foo.1.ronn
In addition, most Ruby projects have:
Gemfile
Rakefile
The Gemfile is for using Bundler, and the Rakefile is for the Rake build tool. But there are other options if you would like to use different tools.
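A minimal sketch of the pair, assuming Minitest and a gemspec in the project root:

# Gemfile
source "https://rubygems.org"
gemspec

# Rakefile
require "rake/testtask"

Rake::TestTask.new(:test) do |t|
  t.libs << "test"
  t.test_files = FileList["test/test_*.rb"]
end

task default: :test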
A few other not-so-uncommon files:
VERSION
MANIFEST
The VERSION file just contains the current version number. And the MANIFEST (or Manifest.txt) contains a list of files to be included in the project's package file(s) (e.g. gem package).
What else you might see, but usage is sporadic:
config/
doc/ (or docs/)
script/
log/
pkg/
task/ (or tasks/)
vendor/
web/ (or site/)
Where:
- config/ contains various configuration files
- doc/ contains either generated documentation (e.g. RDoc) or sometimes manually maintained documentation
- script/ contains shell scripts for use by the project
- log/ contains generated project logs, e.g. test coverage reports
- pkg/ holds generated package files, e.g. foo-1.0.0.gem
- task/ could hold various task files such as foo.rake or foo.watchr
- vendor/ contains copies of other projects, e.g. git submodules
- web/ contains the project's website files
Then some tool specific files that are also relatively common:
.document
.gitignore
.yardopts
.travis.yml
They are fairly self-explanatory.
Finally, I will add that I personally add a .index file and a var/ directory to build that file (search for "Rubyworks Indexer" for more about that) and often have a work directory, something like:
work/
  NOTES.md
  consider/
  reference/
  sandbox/
Just sort of a scrapyard for development purposes.
@Dentharg: your "include one to include all sub-parts" is a common pattern. Like anything, it has its advantages (easy to get the things you want) and its disadvantages (the many includes can pollute namespaces and you have no control over them). Your pattern looks like this:
- src/
    some_ruby_file.rb:
      require 'spider'
      Spider.do_something
+ doc/
- lib/
    - spider/
    spider.rb:
      $: << File.expand_path(File.dirname(__FILE__))
      module Spider
        # anything that needs to be done before including submodules
      end
      require 'spider/some_helper'
      require 'spider/some/other_helper'
      ...
I might recommend this to allow a little more control:
- src/
    some_ruby_file.rb:
      require 'spider'
      Spider.include_all
      Spider.do_something
+ doc/
- lib/
    - spider/
    spider.rb:
      $: << File.expand_path(File.dirname(__FILE__))
      module Spider
        def self.include_all
          require 'spider/some_helper'
          require 'spider/some/other_helper'
          ...
        end
      end
Why not just use the same layout? Normally you won't need build/ because there's no compilation step, but the rest seems OK to me.
I'm not sure what you mean by a module, but if it's just a single class, a separate folder wouldn't be necessary. If there's more than one file, you normally write a module-1.rb file (at the same level as the module-1 folder) that does nothing more than require everything in module-1/.
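For example, a module-1.rb sitting next to the module-1/ folder (hypothetical names) could be just:

# lib/module-1.rb -- require every Ruby file under module-1/
Dir[File.join(__dir__, "module-1", "*.rb")].sort.each { |file| require file }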
Oh, and I would suggest using Rake for the management tasks (instead of make).
I would stick to something similar to what you are familiar with: there's no point being a stranger in your own project directory. :-)
Typical things I always have are lib|src, bin, test.
(I dislike these monster generators: the first thing I want to do with a new project is get some code down, not write a README, docs, etc.!)
So I went with newgem.
I removed all unnecessary RubyForge/gem stuff (hoe, setup, etc.), created a git repo, and imported the project into NetBeans. It all took 20 minutes and everything's green.
That even gave me a basic rake task for spec files.
Thank you all.