Example Scenario
I have an AWS S3 bucket with lots of objects which a couple of my applications need to access. The applications use some information available to them to form the S3 object name, fetch the object, and run a transformation on the object data before using it for further application-specific processing.
I would like to create a module that holds the logic for forming the object name, fetching the object from S3, and running the transformation on the object data, so that I won't be duplicating these functions in multiple places.
In this scenario, should I add the AWS SDK as a dependency in the module? Keep in mind that the applications might have to use the AWS SDK for something else specific to that application, or they might not require it at all.
In general, what is the best way to solve problems like this, i.e. where to add the dependency? And how to manage different versions?
If your code has to access packages from the AWS SDK then yes, you have no choice but to add it as a dependency. If it doesn't, and the logic is generic or you can abstract it away from the AWS SDK, then you don't need the dependency (and in fact the Go tooling, such as go mod tidy, will remove an unused dependency from go.mod if you add it).
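For illustration, here is a minimal sketch of that abstraction; the package, interface, and function names (objstore, ObjectFetcher, BuildKey, GetTransformed) are hypothetical and the key format is just an example. The core package stays free of the AWS SDK; only a small sub-package that implements ObjectFetcher with the real S3 client would need to require it.

package objstore

import "context"

// ObjectFetcher abstracts "give me the raw bytes for this key" so the
// shared logic does not depend on the AWS SDK directly.
type ObjectFetcher interface {
    Fetch(ctx context.Context, bucket, key string) ([]byte, error)
}

// BuildKey holds the shared logic for forming the S3 object name.
func BuildKey(appID, date string) string {
    return appID + "/" + date + ".json"
}

// GetTransformed forms the key, fetches the object, and applies the
// shared transformation to its data.
func GetTransformed(ctx context.Context, f ObjectFetcher, bucket, appID, date string) ([]byte, error) {
    raw, err := f.Fetch(ctx, bucket, BuildKey(appID, date))
    if err != nil {
        return nil, err
    }
    return transform(raw), nil
}

// transform is where the shared transformation on the object data lives.
func transform(b []byte) []byte {
    return b
}

Applications that never touch S3 can depend on this core package alone; only the S3-backed implementation of ObjectFetcher pulls in the AWS SDK.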
In this scenario should I add AWS SDK as a dependency in the module? Keep in mind that the applications might have to use AWS SDK for something else specific to that application or they might not require it at all.
Yes: if any package from your module depends on the AWS SDK, the Go modules system will add the AWS SDK as a dependency of your module. There is nothing special you need to do in your module.
Try this recipe with Go 1.11 or higher (and make sure to work outside of GOPATH):
Write your module like this:
Tree:
moduledir/packagedir1
moduledir/packagedir2
Initialize the module:
Recipe:
cd moduledir
go mod init moduledir ;# or go mod init github.com/user/moduledir
Build module packages:
Recipe:
go install ./packagedir1
go install ./packagedir2
Module things are supposed to automagically work!
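As a rough illustration (the module path and version below are placeholders), once a package in the module imports the AWS SDK and you build it (or run go mod tidy), go.mod ends up looking something like this:

module github.com/user/moduledir

go 1.12

require github.com/aws/aws-sdk-go v1.25.0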
In general what is the best way to solve problems like this, i.e. where to add the dependency? And how to manage different versions?
The modules system automatically manages dependencies for your module and records them in the go.mod and go.sum files. If you need to override some dependency, use the 'go get' command for that. For instance, see this question: How to point Go module dependency in go.mod to a latest commit in a repo?
You can also find much information on Modules here: https://github.com/golang/go/wiki/Modules
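For example (the module path and versions are placeholders), go get can pin a dependency to a tagged release, a branch, or a specific commit, updating go.mod accordingly:

go get github.com/some/dependency@v1.2.3     # a tagged release
go get github.com/some/dependency@master     # the latest commit on a branch
go get github.com/some/dependency@e3702bed2  # a specific commit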
Related
Hard to put into a single sentence, but here is the situation. I am developing a Go package, and my intention is for it to be go-gettable. The core of the package provides some "central" functionality, an HTTP middleware, and I need several "adapter" packages to support some of the most popular Go HTTP frameworks.
Each adapter's responsibility is to get the required information from the HTTP request and then consume a core service where the logic resides, so the main logic lives in a single place while each adapter acts like, well, an adapter.
The first approach I can think of is to have both the core and the adapters as part of the same module, but this would add a lot of unnecessary dependencies to the importing project. For instance, if you import the package to support framework A, the package will indirectly pull in all the dependencies required by the adapters for the other frameworks, even when they are not used.
The approach I am considering is to have several modules in the same repository: a core module and a separate module for each adapter. Each adapter module would then import the core module:
|- core
|  |- go.mod
|
|- adapter1
|  |- go.mod
|
|- adapter2
   |- go.mod
This way, the adapter 1 module can be imported and will only carry its own dependencies and those of the core module, leaving adapter 2's dependencies out of the picture.
I got this structure to work locally: I can successfully import adapter 1 or adapter 2 from another Go project using a replace directive in go.mod. But when I push the changes to the git repo and try to import directly from there, I can't get go mod to download the latest version/tag of each package, not even by explicitly providing the version tag I want to use. It keeps downloading an older version and complaining about some missing code (that exists only in the latest version).
I followed this guide on sourcing multiple modules from a single repository, but an important distinction is that in my case I am sourcing a module that references another module in the same repo, while the example in the guide shows how to source the modules independently.
So my question is: is it at all possible to source a Go module that references another Go module in the same repo?
Would it be a better approach to have my "core" module in a separate repository, and then an "adapters" repo/module with a package for each adapter?
The purpose of having it all in the same repo is to make development easier, but it is complicating version control a lot.
Any advice will be greatly appreciated. If you need me to clarify something, I would gladly do so. Thanks in advance.
Consider that go install (with a version suffix) would not be possible with a replace directive in your go.mod (see issue 44840).
That would result in the error message:
The go.mod file for the module providing named packages contains one or more replace directives.
It must not contain directives that would cause it to be interpreted differently than if it were the main module.
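That is, a versioned install such as the following (the path is a placeholder and would have to point at a main package) is rejected as long as the module's go.mod carries a replace directive:

go install github.com/user/middleware/adapter1/cmd/example@latest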
So one module per repository is preferable; you can then group your repositories into one parent Git repository (through submodules, each one following a branch for easy updates) for convenience.
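A minimal sketch of that grouping, assuming the core and each adapter live in their own repositories (all names and URLs below are placeholders):

# a parent "workspace" repository, used only for convenience; it is not a Go module
git init middleware-workspace
cd middleware-workspace
git submodule add -b main https://github.com/user/middleware-core core
git submodule add -b main https://github.com/user/middleware-adapter1 adapter1
git submodule add -b main https://github.com/user/middleware-adapter2 adapter2

# later, pull the tracked branch of every submodule
git submodule update --remote --merge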
I am trying to create my first monorepo in Go. The project structure looks as follows:
As you can see in the picture, the monoplay folder is the root.
The pb folder contains the generated gRPC code that I would like to consume in the srv_boo/main.go and srv_foo/main.go files.
The question is: how do I consume the generated gRPC code from the pb folder in the srv_boo/main.go and srv_foo/main.go files?
Is the folder structure correct?
I would also like to deploy the services individually.
Is https://bazel.build/ maybe the solution?
Having the entire repository be one Go module will help with this, i.e. only one go.mod file, located in the "monoplay" root folder.
Then the services can reference the generated Go files using "github.com/*/monoplay/pb/*" imports.
Since there is only one go.mod file, this also centralizes dependency management for the entire repository, if that is something you want.
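For illustration (the module path and message type below are placeholders), with a single go.mod at the root declaring "module github.com/user/monoplay", srv_foo/main.go imports the generated code by its path under the module root:

package main

import (
    "fmt"

    pb "github.com/user/monoplay/pb" // generated gRPC/protobuf code
)

func main() {
    // HelloRequest stands in for whatever message types pb actually generates
    req := &pb.HelloRequest{}
    fmt.Println(req)
}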
Other alternatives:
Using "go mod edit":
https://go.dev/ref/mod#go-mod-edit
Or, as DazWilkin suggests, use "go_package" in proto files together with "go-grpc_opt" and "go_opt".
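A rough sketch of that last alternative (the import path is a placeholder): declare the Go import path in each .proto file and pass matching options when generating the code, so the generated files land in pb with the right package path.

// in pb/hello.proto
option go_package = "github.com/user/monoplay/pb";

protoc --go_out=. --go_opt=paths=source_relative \
       --go-grpc_out=. --go-grpc_opt=paths=source_relative \
       pb/hello.proto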
I use the single module approach and recommend it.
If the repository will contain a lot of code and building everything (including container images) becomes cumbersome and takes too long, then look into Bazel.
If I use Bazel to build my protobuf-dependent Go serverless functions, Bazel will make the protobuf-generated Go code available at the import path that I specify.
Google Cloud Functions for Go requires one to use Go modules.
How can I add the dummy import path created by Bazel to my go.mod file? The function deploy to Google Cloud fails because the dummy import cannot be resolved. (Google Cloud requires me to upload my Go source; AWS Lambda would allow me to upload a binary, which would work fine.)
I'm guessing I'll have to either go with AWS Lambda, use serverless containers, or write a genrule that copies the outputs of the proto-generated code into my source directory, but I'd like to avoid that ugliness.
I work at Google on Go and Google Cloud Functions.
I see a few options for using Cloud Functions:
Publish the generated code publicly. You may not want to do this for a variety of reasons.
Copy the generated code into your source directory. This is the easiest. When you deploy your function, the current directory gets zipped up and sent to be built. We don't copy any dependencies from outside your current directory. If you do this, you can import the generated code by having the package path be prefixed by the module path of your directory.
Use vendoring. If you run go mod vendor and have it grab your generated code (at whatever path you choose), it will create a vendor directory with all of your dependencies. The Cloud Functions builder prefers go.mod over vendor, though, so you would have to .gcloudignore the go.mod and go.sum files to make sure they don't get uploaded when you deploy your code. https://cloud.google.com/functions/docs/writing/specifying-dependencies-go has more information.
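A minimal sketch of that vendoring option, under the assumptions above:

go mod vendor    # copies all dependencies, including the generated code, into ./vendor

# .gcloudignore (same syntax as .gitignore) -- keep these out of the upload
# so the builder falls back to the vendor directory:
go.mod
go.sum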
Fabric chaincode requires separate folders for each chaincode for deployment.
For example, chaincode_1 will need to be in a chaincode_1 folder with all of its dependencies (vendor), util/library functions, and chaincode_1.go, and the same goes for chaincode_2.
My question is how to organize the util/library folder if it has functions that I want to use across chaincodes. Fabric chaincode deployment does not allow it, I think, so the util folder ends up replicated/redundant in each chaincode folder.
You could put all of your shared dependencies, including your own util package, in a separate repository and then vendor them via dep. They would still be replicated per chaincode, but it might be easier to manage them via dep ensure rather than having to manually copy them.
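A rough sketch of what that could look like with dep (the repository path and version are placeholders):

# chaincode_1/Gopkg.toml
[[constraint]]
  name = "github.com/user/chaincode-utils"
  version = "1.0.0"

Running dep ensure inside each chaincode folder then populates its vendor directory with the shared util package.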
I am looking for any best practices and/or recommendations on how best to manage releases of custom modules in a production environment running on the Spring-XD platform.
Specifically, if I have a custom module foo-1.0.0 deployed into a farm of XD containers and I wish to rev it to version foo-1.1.0, what are my alternatives? I gather the following might work (from looking at other questions and the docs):
Assuming a shared filesystem/directory for each server/container, the custom module jar can be replaced and the container will pick up the new version without a need to restart the server. Will this work? Does this mean the jar name needs to stay the same, or will it also work with versioned jar names?
Maintain duplicate/mirrored container environments so that one set of containers can be updated by properly removing the streams/jobs/modules and then bringing the environment back up with the updated module version, etc. (though this is expensive from a hardware perspective); basically doing a rolling upgrade of sorts.
Any other ways?
An ancillary question might be how easy it is to expose the version of the custom module being used by a given container.
Any thoughts would be appreciated.
Thanks,
Mark