We have 2 repos relevant to this question:
gitlab.com/company/product/team/service
gitlab.com/company/product/team/api
They both have go.mod files. The api repo is dead simple. It contains a dozen or so TypeScript files that go through a process to generate mirror-image Go structs. We use these in integration and unit tests, sort of like a stylesheet, so we can test the structure of responses from service. The api module has zero imports. No fmt, time, nothing. Here's api's go.mod file:
module gitlab.com/company/product/team/api/v2
go 1.16
The latest tag on its master branch is v2.68.0. Service's go.mod has this require line for api:
require (
...
gitlab.com/company/product/team/api/v2 v2.68.0
...
)
And it Works on My Machine(TM). When I run go get, go mod tidy, go mod download, go mod vendor, etc., the correct version gets downloaded. Not so in Docker. To work with our CI/CD pipeline and eventually get deployed in k8s, we build a Docker image to house the code. The RUN go mod vendor in service's Dockerfile gives us an error when it tries to import the api (adding a verbose flag does not produce any more detail here):
go: gitlab.com/company/product/team/api/v2@v2.68.0: reading gitlab.com/company/product/team/api/team/api/go.mod at revision team/api/v2.68.0: unknown revision team/api/v2.68.0
Note the repeated folders in the ../team/api/team/api/go.mod URL.
Our Dockerfile is loaded with an SSH key that's registered with GitLab, the necessary entries in ~/.ssh/known_hosts are present, there's a ~/.netrc file with a GitLab access token that has the read_api permission, GOPRIVATE=gitlab.com/company/*, GO111MODULE=on, and when it runs go mod vendor it has no issues with any of the rest of our gitlab.com/company/product/team/* imports. These settings mirror my local machine, where go get works.
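For reference, the relevant part of the Dockerfile looks roughly like this (a sketch only; the base image, token variable, login, and paths are placeholders rather than our real values):

FROM golang:1.16
ARG GITLAB_TOKEN
ENV GO111MODULE=on GOPRIVATE=gitlab.com/company/*
# .netrc so the go command can authenticate its HTTPS requests to gitlab.com
RUN printf 'machine gitlab.com\nlogin ci-bot\npassword %s\n' "$GITLAB_TOKEN" > /root/.netrc
WORKDIR /src
COPY . .
RUN go mod vendor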
Why, for this one repo, would go mod vendor decide to repeat folders when attempting to import?
Turns out the answer isn't anything that SO users would have had much insight into. The .netrc token in service's Dockerfile is an access token that only works in the service project. It was created by opening the settings of the service repo and generating the access token there, instead of using a service account that has access to all the necessary repos. When go uses it to reach the api repo, the token grants no access, and go has trouble asking GitLab which folder in the URL is the repo root, because GitLab then claims every folder in the path is a repo. If you're using subgroups in GitLab and running go get, you know what I'm talking about.
It worked on my machine because my .netrc has my personal token, and I have enough access for this to never be an issue.
We fixed it by updating the .netrc file in the Dockerfile to use a token generated by a service account that has API access to all the relevant repos.
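In other words, the only change was which token sits in the .netrc; something along these lines (logins and token values are placeholders):

Before (project access token created in the service repo's settings):
machine gitlab.com login project-bot password glpat-project-scoped-token
After (token from a service account that can read every gitlab.com/company/product/team/* repo):
machine gitlab.com login svc-go-modules password glpat-service-account-token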
I was about to delete the question, but resources discussing this problem are few and far between.
Related
Background
At my company, we use Bitbucket to host our git repos. All traffic to the server flows through a custom, non-standard port. Cloning one of our repos looks something like git clone ssh://git@stash.company.com:9999/repo/path/name.git.
The problem
I would like to create Go modules hosted on this server and managed by go mod; however, the fact that traffic has to flow through port 9999 makes this very difficult, because go mod operates on the standard ports and doesn't seem to provide a way to customise ports for different modules.
My question
Is it possible to use go mod to manage Go modules hosted on a private git server with a non-standard port?
Attempted solutions
Vendoring
This seems to be the closest to offering a solution. First I go mod vendor the Go application that wants to use these Go modules, then I git submodule the Go module into the vendor/ directory. This works perfectly up to the point that a module needs to be updated or added: go mod tidy keeps failing to download or update the other Go modules because it cannot access the "git URL" of the custom Go module, even when the -e flag is set.
Editing .gitconfig
Editing the .gitconfig to replace the URL without the port with the URL with the port is a solution that will work, but it is a very dirty hack. Firstly, these edits will have to be made for every new module and by every individual developer. Secondly, this might break other git processes when working on these repositories.
The go tool uses git under the hood, so you'd want to configure git in your environment to use an alternate URL. Something like:
git config --global url."ssh://git@stash.company.com:9999/".insteadOf "https://stash.company.com"
Though, as I recall, Bitbucket/Stash sometimes adds an extra path segment to its URLs, so you might need to do something like this:
git config --global url."ssh://git@stash.company.com:9999/".insteadOf "https://stash.company.com/scm/"
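In addition (and as noted below), you'll usually want to tell the go command not to route these modules through the public proxy and checksum database, since those can't reach your private host. For example, assuming everything private lives under stash.company.com:

go env -w GOPRIVATE=stash.company.com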
ADDITIONAL EDIT
User bcmills mentioned below that you can also serve the go-import metadata over HTTPS, and use whatever vanity URL you like, provided you control the domain resolution. This can be done with varying degrees of sophistication, from a simple nginx rule to static content generators, dedicated vanity services, or even running your own module proxy with Athens.
This still doesn't completely solve the problem of build environment configuration, however, since you'll want the user to set GOPRIVATE or GOPROXY or both, depending on your configuration.
Also, if your chosen domain is potentially globally resolvable, you might want to consider registering it anyway to keep it from being registered by a potentially-malicious third party.
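To make the go-import idea a bit more concrete, here is a rough sketch (go.company.com is a made-up vanity host; the repo URL reuses the example from the question). The vanity host just has to answer GET requests for the module path with ?go-get=1 using a page carrying a meta tag of the form "import-prefix VCS repo-root":

<!-- served for https://go.company.com/name?go-get=1 -->
<meta name="go-import" content="go.company.com/name git ssh://git@stash.company.com:9999/repo/path/name.git">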
For one of our repositories we set "Custom CI configuration path" inside GitLab to a remote gitlab-ci.yml. We want to do this to prevent developers from changing the gitlab-ci.yml file (since protected files are only available in EE Premium and up). But aside from that purpose, the Custom CI configuration path feature should still work for merge requests.
Being in repo
group1/repo1
we set
.gitlab-ci.yml#group1/repo1-ci
The repo1-ci repository exists, and CI works correctly when we push to the configured branches, etc.
For Merge Request functionality GitLab tells us:
Detached merge request pipeline #123 failed for ...
Project group1/repo1-ci not found or access denied!
We added the developers to the repo1-ci repo with the Developer role so they could read the files. It does not help. In any case, our expectation is that the pipeline is not run with user permissions, so it should simply find the gitlab-ci.yml file.
Any ideas on this?
So our expectations were right, and it seems we had to add one important thing to our considerations:
If a user interacts in the GitLab UI with the Merge Request features and you are using "Custom CI configuration path" for your gitlab-ci.yml file, please ensure
this user needs at least read permissions to that remote file, even if you moved it to another repo on purpose (e.g. use enhanced file protection in PREMIUM/ULTIMATE or push/merge protect the branches for the Developer role)
the user's running session has actually picked up this permission change
The last part is what failed for our users, as it only started working one day later. It seems they just continued working from their open merge request page, and GitLab checks accessibility from within that session (using a cookie, token, or something similar that had not been updated with the new access to the remote repo/file).
It works!
I am using GitLab to host my repository and want to push my code to EC2 whenever a commit is made on GitLab. The GitLab CI/CD documentation states that I have to add a .gitlab-ci.yml file at the root directory of my repo. This is a problem for me because I want the project repo to contain only code, and not any configuration-related info like build and deploy steps. Also, anyone who clones the repo would then have access to the location where my code is pushed/deployed on EC2. Is there any workaround for this problem?
You'll need to use a gitlab-ci.yml file to deploy your application. The file provides instructions and a pipeline "infrastructure" which, if properly configured, will build, test and automatically deploy your code.
If you are worried about leaking credentials, you should use the built-in CI/CD variables to mask your important bits, like a "$SERVERNAME" or "$DB_PASSWORD" for instance.
Lastly, you can use the power of .gitignore to avoid publishing credentials or other sensitive bits to your project's servers or instances.
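A minimal sketch of what such a pipeline could look like (the job, variable names and script are illustrative; the variables would be defined as masked CI/CD variables in the project settings, not committed to the repo):

deploy_to_ec2:
  stage: deploy
  script:
    - scp -r ./build "$DEPLOY_USER@$EC2_HOST:$DEPLOY_PATH"
  only:
    - master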
Our organization has a locally running instance of Artifactory, and also a local instance of Bitbucket. We are trying to get them to play well together so that Artifactory can serve up our private PHP packages right out of Bitbucket.
Specifically, we'd like to create a Composer Remote Repository in Artifactory that serves up our private PHP packages, where those packages are sourced from git repositories on our local Bitbucket server.
Note that we'd rather not create and upload our own package zip files for each new package version, as suggested here. Ideally, we just want to be able to commit changes to a PHP package in BitBucket, tag those changes as a new package version, and have that new version be automatically picked up and served by Artifactory.
The Artifactory Composer documentation suggests that this is possible:
A Composer remote repository in Artifactory can proxy packagist.org and other Artifactory Composer repositories for index files, and version control systems such as GitHub or BitBucket, or local Composer repositories in other Artifactory instances for binaries.
We've spent a lot of time trying to make this work, but haven't been able to. The remote repository that we create always remains empty, no matter what we do. Can anyone offer an example to help, or even just confirm that what we're attempting isn't possible?
For reference, we've been trying to find the right settings to put into this setup page:
Thanks!
Artifactory won't download and pack the sources for you; it expects to find binary artifacts.
The mention of source control in the documentation refers to downloading archives from source control systems, either uploaded there as archives (don't do that) or packed by the source control system on download request (which is what you are looking for).
You can use this REST API to download automatically generated zips from BitBucket. If you can configure the composer client to look for the packages in the right place, you're all set.
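For what it's worth, the kind of request involved looks roughly like this (host, project key, repo slug and tag are placeholders, and the exact archive endpoint depends on your Bitbucket Server version):

curl -u user:password -o mypackage-1.2.3.zip \
  "https://bitbucket.company.local/rest/api/latest/projects/PROJ/repos/mypackage/archive?at=refs/tags/1.2.3&format=zip"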
Revel models are defined under the models package, so in order to import them one must use the full repo path relative to the $GOPATH/src folder, which in this case is project/app/models; this results in
import PROJECTNAME/app/models
So far, so good, if you're using your app name as the folder name on your local dev machine and have dev+prod environments only.
Heroku's docs recommend using multiple apps for different environments (e.g. for staging), with the same repository and distinct origins.
This is where the problem starts: since the staging environment resides on an alternative app name (let's say PROJECTNAME_STAGING), its sources are stored under PROJECTNAME_STAGING, but the actual code still says import PROJECTNAME/app/models instead of import PROJECTNAME_STAGING/app/models, so the compile fails, etc.
Is there any way to manage multiple environments with a single local repo and multiple origins using Revel's Heroku buildpack? Or is a feature needed in the buildpack that is yet to be implemented?
In addition, there is a possible issue with the .godir file, which is required to be versioned and to contain the git path to the app; how does the multi-environment duality affect this file?
The solution was simple enough:
The buildpack uses the string in .godir both as the argument to revel run and as the directory name under GOPATH/src. My .godir file had a git.heroku.com/<APPNAME>.git format; instead I just used the APPNAME format.
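Concretely, the change to the contents of .godir was just from the remote-style path to the bare app name:

Before: git.heroku.com/<APPNAME>.git
After:  APPNAME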