Background
At my company, we use Bitbucket to host our git repos. All traffic to the server flows through a custom, non-standard port. Cloning from our repos looks something like git clone ssh://git@stash.company.com:9999/repo/path/name.git.
The problem
I would like to create Go modules hosted on this server and managed by go mod. However, the fact that traffic has to flow through port 9999 makes this very difficult, because go mod assumes the standard ports and doesn't seem to provide a way to customise the port per module.
My question
Is it possible to use go mod to manage Go modules hosted on a private git server with a non-standard port?
Attempted solutions
Vendoring
This seems to be the closest to offering a solution. First I go mod vendor the Go application that wants to use these Go modules, then I git submodule the Go module into the vendor/ directory. This works perfectly up to the point where a module needs to be updated or added: go mod tidy keeps failing to download or update the other Go modules because it cannot access the "git URL" of the custom Go module, even when the -e flag is set.
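For reference, the attempted flow looked roughly like this (the repo path and the vendor path are illustrative):

go mod vendor
git submodule add ssh://git@stash.company.com:9999/repo/path/name.git vendor/stash.company.com/repo/path/name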
Editing .gitconfig
Editing the .gitconfig to replace the URL without the port with the URL that includes the port will work, but it is a very dirty hack. Firstly, these edits have to be made for every new module, and by every individual developer. Secondly, this might break other git processes when working on these repositories.
The go tool uses git under the hood, so you'll want to configure git in your environment to use an alternate URL. Something like:
git config --global url."ssh://git@stash.company.com:9999/".insteadOf "https://stash.company.com"
Though I recall that Bitbucket/Stash sometimes adds an extra path suffix for reasons I don't remember, so you might need something like this:
git config --global url."ssh://git@stash.company.com:9999/".insteadOf "https://stash.company.com/scm/"
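You can check that the rewrite is being picked up without involving Go at all; a git ls-remote against the https form (the repo path here is invented) should transparently go over SSH on port 9999:

git ls-remote https://stash.company.com/scm/repo/path/name.git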
ADDITIONAL EDIT
user bcmills mentioned below that you can also serve the go-import metadata over HTTPS and use whatever vanity URL you like, provided you control the domain resolution. This can be done with varying degrees of sophistication, from a simple nginx rule, to static content generators, dedicated vanity services, or even running your own module proxy with Athens.
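In the simplest variant, the vanity server only has to answer go's ?go-get=1 discovery requests with a go-import meta tag that points at the real SSH URL. A minimal sketch, reusing the host and path from the question:

<meta name="go-import" content="stash.company.com/repo/path/name git ssh://git@stash.company.com:9999/repo/path/name.git">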
This still doesn't completely solve the problem of build environment configuration, however, since you'll want the user to set GOPRIVATE or GOPROXY or both, depending on your configuration.
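For example, assuming all private modules live under stash.company.com, each developer (and the CI environment) might run:

go env -w GOPRIVATE=stash.company.com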
Also, if your chosen domain is potentially globally resolvable, you might want to consider registering it anyway to keep it from being registered by a potentially-malicious third party.
Related
We have 2 repos relevant to this question:
gitlab.com/company/product/team/service
gitlab.com/company/product/team/api
They both have go.mod files. The api repo is dead simple. It contains a dozen or so TypeScript files that go through a process to generate mirror-image Go structs. We use these in integration and unit tests, sort of like a stylesheet, so we can test the structure of responses from service. Api has 0 imports. No fmt, time, nothing. Here's api's go.mod file:
module gitlab.com/company/product/team/api/v2
go 1.16
The latest tag on its master branch is v2.68.0. Service's go.mod has this require line for api:
require (
...
gitlab.com/company/product/team/api/v2 v2.68.0
...
)
And it Works on My Machine(TM). When I run go get, go mod tidy, go mod download, go mod vendor, etc, the correct version gets downloaded. Not so in docker. To work with our CI/CD pipeline and eventually get deployed in k8s, we build a docker image to house the code. The RUN go mod vendor in service's Dockerfile is giving us an error when it tries to import the api (adding a verbose flag does not result in more details here):
go: gitlab.com/company/product/team/api/v2@v2.68.0: reading gitlab.com/company/product/team/api/team/api/go.mod at revision team/api/v2.68.0: unknown revision team/api/v2.68.0
Note the repeated folders in the ../team/api/team/api/go.mod URL.
Our Dockerfile is loaded with an SSH key that's registered with gitlab, the necessary entries in ~/.ssh/known_hosts are present, there's a ~/.netrc file with a gitlab access token that has the read_api permission, GOPRIVATE=gitlab.com/company/*, and GO111MODULE=on. When it runs go mod vendor, it doesn't have issues with any of the rest of our gitlab.com/company/product/team/* imports. These mirror my local machine, where go get works.
Why, for this one repo, would go mod vendor decide to repeat folders when attempting to import?
Turns out the answer isn't anything that SO users would have had much insight into. The .netrc token we have in service's Dockerfile is an access token that only works in the service project. It was created by opening the settings of the service repo and creating the access token there, instead of using a service account that has access to all the necessary repos. When go uses it to access the api repo, the token no longer grants access, and go has trouble asking gitlab which folder in the URL is the repo root, because gitlab erroneously answers as if every folder in the path were a repo. If you're using subgroups in gitlab and running go get, you know what I'm talking about.
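You can watch this discovery step yourself: go probes the import path with a ?go-get=1 request and expects the response's go-import metadata to name the real repo root, so something like the following (run with the same credentials the build uses) exposes the misleading answers:

curl 'https://gitlab.com/company/product/team/api?go-get=1'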
It worked on my machine because my .netrc has my token and I have enough access to not be an issue.
We fixed it by updating the .netrc file in the Dockerfile to use a token generated by a service account that has API access to all relevant repos.
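For reference, the working ~/.netrc baked into the image looks something like this (the login and token are placeholders for the service account's credentials):

machine gitlab.com
login service-account-username
password service-account-read-api-token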
I was about to delete the question, but resources talking about this problem are few and far between.
I'm trying to setup Mercurial on developer workstations so that they can pull from each other.
I don't want to push.
I know each workstation needs to run
hg serve
The format of the pull command is
hg pull ssh:[SOURCE]
What I'm having problem with is defining SOURCE, and any other permission issues.
I would believe that SOURCE ends with the name of the repository being pulled from. What I don't know is how to form the host name. Can I use IPs instead?
What permission issues do I need to look out for?
SOURCE == //<hostname>/<repository>
All developers or test stations are running Windows 7 or Windows XP.
I have searched for this answer and have come up empty. I did look at all the questions suggested by SO as I typed this question.
This is probably a simple Windows concept, but I'm not an expert in simple Windows concepts. :)
The hg help urls output has these examples:
Valid URLs are of the form:
local/filesystem/path[#revision]
file://local/filesystem/path[#revision]
http://[user[:pass]@]host[:port]/[path][#revision]
https://[user[:pass]@]host[:port]/[path][#revision]
ssh://[user@]host[:port]/[path][#revision]
and a lot of info about what can be used for each component (host can be anything that your DNS resolver resolves, or an IPv4 or IPv6 address). I believe that on Windows systems UNC paths count as well.
Also, you appear to have some confusion about when you can use ssh. You can use ssh:// URLs to access repositories on the file systems of machines that are running SSH servers. If they're running hg serve, then you can access them using the http:// URL that hg serve prints when you start it. hg serve is usually used for quick "here, grab this from me and see if you can tell me what I'm doing wrong" situations rather than for all-the-time sharing.
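A minimal sketch of that workflow, assuming the serving workstation is reachable at 192.168.0.10 (an invented address) and uses hg serve's default port of 8000:

hg serve                            (on the workstation being pulled from)
hg pull http://192.168.0.10:8000/   (on the workstation doing the pulling)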
As far as I am aware, in order to use the maven-release-plugin, you have to drop in an scm section into your POM file. E.g.:
<scm>
<connection>scm:hg:ssh://hg@bitbucket.org/my_account/my_project</connection>
<developerConnection>scm:hg:ssh://hg@bitbucket.org/my_account/my_project</developerConnection>
<url>ssh://hg@bitbucket.org/my_account/my_project</url>
<tag>HEAD</tag>
</scm>
I understand this data is used to determine what to tag and where to push changes. But isn't this information already available if you have the code cloned/checked-out? I'm struggling a bit with the concept that I need to tell maven what code it needs to tag when it could, at least in theory, just ask Git/HG/SVN/CVS what code it's dealing with. I suspect I'm missing something in the details, but I'm not sure what. Could the maven-release-plugin code be changed to remove this as a requirement, or at least make auto-detection the default? If not, could someone provide some context on why that wouldn't work?
For one thing, GIT and Subversion can have different SCM URIs for read-write and read-only access.
This is what the different <connection> and <developerConnection> URIs are supposed to capture. The first is a URI that is guaranteed read access. The second is a URI that is guaranteed write access.
Very often from a checked out repository, it is not possible to infer the canonical URIs.
For example, I might check out the Subversion repository in-house via the svn: protocol and the IP address of the server, but external contributors would need to use https:// with the hostname.
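For instance (hostnames invented), the POM can capture both routes explicitly, where neither URI could be inferred from a checkout made via the other:

<scm>
  <connection>scm:svn:https://svn.example.com/repo/trunk</connection>
  <developerConnection>scm:svn:svn://10.0.0.5/repo/trunk</developerConnection>
</scm>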
Or even with GIT repositories, on Github you have different URIs for different access mechanisms, e.g.
https://github.com/stephenc/eaio-uuid.git (read-write using Username / Password or OAuth)
git@github.com:stephenc/eaio-uuid.git (read-write using SSH private key Identification)
git://github.com/stephenc/eaio-uuid.git (anonymous read only)
Never mind that you may have checked out git://github.com/zznate/eaio-uuid.git or cloned a local checkout; in other words, your local git repository may think that "upstream" is ../eaio-uuid-from-nate and not git@github.com:stephenc/eaio-uuid.git.
I agree that for some SCM tools you could auto-detect... for example, if you know the source is checked out from, e.g., AccuRev, you should be OK assuming its details... until you hit the Subversion or GIT or CVS or other code module checked out into the AccuRev workspace (true story) so that the tag being pulled in could be updated.
So in short, the detection code would have to be damn sure you were not using two SCM systems at the same time before it could tell which is the master SCM... and the other SCM may not even leave marker files on disk to sniff out (AccuRev, for example, doesn't... hence why I've picked on it).
The only safe way is to require the pom to define, at least the SCM system, and for those SCM systems where the URI cannot be reliably inferred (think CVS, Subversion, GIT, HG, in fact most of them) require the URI to be specified.
I made a Sinatra app that will be hosted on Heroku, and the source will be up on GitHub. The problem is that I have a file with API keys that is currently in .gitignore. Is there a way that I can push my repo to Heroku with the key file but exclude the file when pushing to GitHub?
Thanks in advance!
It is possible to maintain a separate branch just for deployment, but it takes much discipline to maintain it properly (a concrete sketch follows these steps):
Add a commit to a production branch that adds the config file (git add -f to bypass your excludes).
To update your production branch, merge other branches (e.g. master) into it.
However, you must then never merge your production branch into anything else, or start branches based on any “production commit” (one whose ancestry includes your “add the keys” commit).
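The flow might look like this, assuming the ignored file is config/keys.yml and the deployment branch is named production (both names invented):

git checkout -b production
git add -f config/keys.yml
git commit -m "Add API keys (deploy-only branch, never merge back)"
git push heroku production:master

Later, to deploy new work from master:

git checkout production
git merge master
git push heroku production:master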
An easier path is to adopt Heroku’s custom of using environment variables to communicate your secret values to your instances. See the docs on Configuration and Config Vars:
heroku config:add KEY1=foobar KEY2=frobozz
Then access the values via ENV['KEY1'] and ENV['KEY2'] in your initialization code or wherever you need them. To support your non-Heroku deployments, you could either define the same environment variables or fall back to reading your existing config files if the environment variables do not exist.
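A sketch of that fallback in the app's initialization code; the file name and key names are invented:

require 'yaml'

if ENV['KEY1'] && ENV['KEY2']
  KEYS = { 'key1' => ENV['KEY1'], 'key2' => ENV['KEY2'] }
else
  KEYS = YAML.load_file('config/keys.yml')  # local, git-ignored file
end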
The Figaro gem provides a good way to manage this issue. It basically simulates Heroku's environment variable approach locally, and makes it easy to keep your keys in sync between your development environment and Heroku.
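With Figaro, the keys live in a git-ignored config/application.yml, for example (values invented):

KEY1: foobar
KEY2: frobozz

and the gem can mirror them to Heroku's config vars with figaro heroku:set -e production, so both environments read the same names from ENV.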
Our team consists of 3 people and we want to use Mercurial for the version control of our code.
The problem is that we don't have a common server that we all have access to.
Is there a way to make one of the users the host of the repository, so that the others can connect to him to work on the code?
We're all using Windows 7, if it matters.
Because Mercurial is a distributed version control system, you don't have to have a central server; you can clone, push, and pull between one another.
But you could look at creating a central repository on Bitbucket, at no cost for up to 5 users.
Yes, just run hg serve on that host & directory. If you have IP access you'll be able to work with it. You'll need to set the web.allow_push option to * to enable remote pushes.
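A minimal sketch; note that pushing over plain HTTP also requires turning off hg's SSL requirement:

hg serve --config web.allow_push=* --config web.push_ssl=false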
Another option is to run hg serve on all the workstations and only pull from one another, never push.