When I was using Arcanist and landing some code, I found that my terminal does not interpret the color escape sequences. The <ESC>[1;32m9912da1<ESC>[m is supposed to render as 9912da1 or something similar. Is this a problem with my terminal? Other people at my work don't see this problem.
Landing current branch 'some-branch'.
TARGET Landing onto "master", selected by following tracking branches upstream to the closest remote.
REMOTE Using remote "origin", selected by following tracking branches upstream to the closest remote.
FETCH Fetching origin/master...
These commits will be landed:
- <ESC>[1;32m9912da1<ESC>[m some commit
- <ESC>[1;32m687f799<ESC>[m some other commit
If you are using Arcanist on Windows with Git Bash, this should fix it:
# add the following function to your .bash_profile
function arc() {
    command arc --ansi "$@" | cat
}
For detailed instructions see: https://thomas-barthelemy.github.io/2015/04/23/phabricator-arcanist-gitbash/
I am trying to run some code which is here, on GitHub:
https://github.com/dolthub/dolthub-etl-jobs/tree/master/loaders/nvd
Once I've cloned the repo I run the run.sh script and it fails with the below:
./run.sh
1 synchronisation error:
unexpected http response from "https://nvd.nist.gov/feeds/json/cve/1.0/nvdcve-1.0-2002.meta" ("404 Not Found"): ""
cloning https://doltremoteapi.dolthub.com/Liquidata/NVD
For this to have a chance of working I need to change wherever this is referenced:
https://nvd.nist.gov/feeds/json/cve/1.0/nvdcve-1.0-20XX.meta
to:
https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-20XX.meta
The old reference is no longer valid.
However, it's impossible to see where the script is actually calling that URL from. I probably don't know nearly enough about how Go and GitHub hang together.
If I could figure it out, maybe I could just pull down the code and edit it manually once it was on my host or even create a fork with the new URL in it.
I want to find where the URL is actually coming from when I call run.sh (which errors out almost immediately), and make a change that points at the valid one.
TL;DR: Replace 4 with 6 on line 44 of main.go.
The go.mod from https://github.com/dolthub/dolthub-etl-jobs/tree/master/loaders/nvd
requires github.com/facebookincubator/nvdtools but replaces it with github.com/liquidata-inc/nvdtools, which redirects to github.com/dolthub/nvdtools, an archived repo(!) (cf. https://github.com/dolthub/dolthub-etl-jobs/blob/d858a2433f68d72dc643e26085a5a0c44edbb85c/loaders/nvd/go.mod#L5-L7).
Supported CVE feeds of dolthub/nvdtools are defined here: https://github.com/dolthub/nvdtools/blob/e67111c0fff487cc15cd2ba32668141622cf9c63/providers/nvd/cve.go#L44-L53
cve10jsonGz is 4, cve11jsonGz is 6
main.go sets the CVE feed here: https://github.com/dolthub/dolthub-etl-jobs/blob/d858a2433f68d72dc643e26085a5a0c44edbb85c/loaders/nvd/main.go#L43.
Change it from 4 to 6.
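I haven't looked at the surrounding code, so the following is only a sketch of what that one-line change amounts to (the variable name is made up; the real line is the main.go#L43 link above):
// was: feed := 4  (cve10jsonGz, the retired 1.0 JSON feed)
feed := 6 // cve11jsonGz, the current 1.1 JSON feed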
I didn't test the rest of run.sh, but at least the problem you're mentioning in your question should be solved.
I've followed this for several of our projects. It works wonderfully, except for grabbing the latest git tag. For example, if I have tags 1,2,3,4,5,6,7,8,9,10, Capifony will try to deploy tag 9 because it sees that as the latest tag using the code provided in that how-to.
How can I change the following line to always get the latest tag?
set :branch, `git tag`.split("\n").last
The output of git tag is alphabetical. How 'bout git tag | sort -n?
Alternatively you could perform a numeric sort on the result of the split before grabbing the last entry.
git tag --sort=version:refname will sort this kind of tag properly.
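Putting that together, the deploy line could become (untested sketch; --sort requires a reasonably recent Git):
set :branch, `git tag --sort=version:refname`.split("\n").last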
Suppose you have a repository at github.com/someone/repo and you fork it to github.com/you/repo. You want to use your fork instead of the main repo, so you do a
go get github.com/you/repo
Now all the import paths in this repo will be "broken", meaning, if there are multiple packages in the repository that reference each other via absolute URLs, they will reference the source, not the fork.
Is there a better way than cloning it manually into the right path?
git clone git@github.com:you/repo.git $GOPATH/src/github.com/someone/repo
If you are using Go modules, you can use the replace directive.
The replace directive allows you to supply another import path that might
be another module located in VCS (GitHub or elsewhere), or on your
local filesystem with a relative or absolute file path. The new import
path from the replace directive is used without needing to update the
import paths in the actual source code.
So you could do the following in your go.mod file:
module some-project
go 1.12
require (
github.com/someone/repo v1.20.0
)
replace github.com/someone/repo => github.com/you/repo v3.2.1
where v3.2.1 is a tag on your repo. This can also be done through the CLI:
go mod edit -replace="github.com/someone/repo@v0.0.0=github.com/you/repo@v1.1.1"
To handle pull requests
fork a repository github.com/someone/repo to github.com/you/repo
download original code: go get github.com/someone/repo
be there: cd "$(go env GOPATH)/src"/github.com/someone/repo
enable uploading to your fork: git remote add myfork https://github.com/you/repo.git
upload your changes to your repo: git push myfork
http://blog.campoy.cat/2014/03/github-and-go-forking-pull-requests-and.html
To use a package in your project
https://github.com/golang/go/wiki/PackageManagementTools
One way to solve it is the one suggested by Ivan Rave and http://blog.campoy.cat/2014/03/github-and-go-forking-pull-requests-and.html -- the way of forking.
Another is to work around the golang behavior: when you go get, golang lays out your directories under the same name as in the repository URI, and this is where the trouble begins.
If, instead, you issue your own git clone, you can clone your repository onto your filesystem on a path named after the original repository.
Assuming the original repository is at github.com/awesome-org/tool and you fork it to github.com/awesome-you/tool, you can:
cd $GOPATH
mkdir -p {src,bin,pkg}
mkdir -p src/github.com/awesome-org/
cd src/github.com/awesome-org/
git clone git@github.com:awesome-you/tool.git # OR: git clone https://github.com/awesome-you/tool.git
cd tool/
go get ./...
golang is perfectly happy to continue with this repository and doesn't actually care that some upper directory has the name awesome-org while the git remote is awesome-you. All imports for awesome-org are resolved via the directory you have just created, which is your local working set.
In more length, please see my blog post: Forking Golang repositories on GitHub and managing the import path
edit: fixed directory path
If your fork is only temporary (i.e. you intend that it be merged) then just do your development in situ, e.g. in $GOPATH/src/launchpad.net/goamz.
You then use the features of the version control system (eg git remote) to make the upstream repository your repository rather than the original one.
It makes it harder for other people to use your repository with go get but much easier for it to be integrated upstream.
In fact I have a repository for goamz at lp:~nick-craig-wood/goamz/goamz which I develop for in exactly that way. Maybe the author will merge it one day!
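For a git-hosted project, the remote juggling might look like this (repository paths are just examples):
cd $GOPATH/src/github.com/original/project
git remote rename origin upstream   # keep the original repo around as "upstream"
git remote add origin git@github.com:you/project.git
git push -u origin master           # your fork now backs the in-situ checkout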
Here's a way that works for everyone:
Use github to fork to "my/repo" (just an example):
go get github.com/my/repo
cd ~/go/src/github.com/my/repo
git branch enhancement
rm -rf .
go get github.com/golang/tools/cmd/gomvpkg/…
gomvpkg <<oldrepo>> ~/go/src/github.com/my/repo
git commit
Repeat each time when you make the code better:
git commit
git checkout enhancement
git cherry-pick <<commit_id>>
git checkout master
Why? This lets you have your repo that any go get works with. It also lets you maintain & enhance a branch that's good for a pull request. It doesn't bloat git with "vendor", it preserves history, and build tools can make sense of it.
Instead of cloning to a specific location, you can clone wherever you want.
Then, you can run a command like this, to have Go refer to the local version:
go mod edit -replace github.com/owner/repo=../repo
https://golang.org/cmd/go#hdr-Module_maintenance
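That command simply records a line like this in your go.mod (the path is relative to your module root; adjust as needed):
replace github.com/owner/repo => ../repo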
The answer to this is that if you fork a repo with multiple packages you will need to rename all the relevant import paths. This is largely a good thing since you've forked all of those packages and the import paths should reflect this.
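For example (subpkg is a placeholder for whatever packages the repo actually contains), every absolute import changes from the upstream path to your fork's path:
import "github.com/someone/repo/subpkg"   // before
import "github.com/you/repo/subpkg"       // after forking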
Use vendoring and submodules together
Fork the lib on github (go-mssqldb in this case)
Add a submodule which clones your fork into your vendor folder but has the path of the upstream repo
Update your import statements in your source code to point to the vendor folder (not including the vendor/ prefix). E.g. vendor/bob/lib => import "bob/lib"
E.g.
cd ~/go/src/github.com/myproj
mygithubuser=timabell
upstreamgithubuser=denisenkom
librepo=go-mssqldb
git submodule add "git@github.com:$mygithubuser/$librepo" "vendor/$upstreamgithubuser/$librepo"
Why
This solves all the problems I've heard about and come across while trying to figure this out myself.
Internal package refs in the lib now work because the path is unchanged from upstream
A fresh checkout of your project works because the submodule system gets it from your fork at the right commit but in the upstream folder path
You don't have to manually hack the paths or mess with the go tooling.
More info
https://git-scm.com/book/en/v2/Git-Tools-Submodules
How do I fix the error message "use of an internal package not allowed" when go getting a golang package?
https://github.com/denisenkom/go-mssqldb/issues/406
https://github.com/golang/go/wiki/PackageManagementTools#go15vendorexperiment
The modern answer (go 1.15 and higher, at least).
go mod init github.com/theirs/repo
Pass an explicit init argument that is the ORIGINAL package name. If you don't include the repo name, it will assume the one in your GOPATH. But when you use Go modules, they no longer care where they are on disk, or where git actually pulls dependencies from.
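A minimal sketch of that workflow, reusing the theirs/repo naming from above (your fork can live anywhere on disk):
git clone https://github.com/yours/repo
cd repo
# keep the upstream import path so the repo's internal imports still resolve
go mod init github.com/theirs/repo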
To automate this process, I wrote a small script. You can find more details on my blog about adding a command like "gofork" to your bash.
function gofork() {
  if [ $# -ne 2 ] || [ -z "$1" ] || [ -z "$2" ]; then
    echo 'Usage: gofork yourFork originalModule'
    echo 'Example: gofork github.com/YourName/go-contrib github.com/heirko/go-contrib'
    return
  fi
  echo "Go get fork $1 and replace $2 in GOPATH: $GOPATH"
  # fetch both the fork and the original module into GOPATH
  go get "$1"
  go get "$2"
  currentDir=$PWD
  # remember each checkout's origin remote
  cd "$GOPATH/src/$1"
  remote1=$(git config --get remote.origin.url)
  cd "$GOPATH/src/$2"
  remote2=$(git config --get remote.origin.url)
  cd "$currentDir"
  # replace the original checkout with the fork, keeping the original import path
  rm -rf "$GOPATH/src/$2"
  mv "$GOPATH/src/$1" "$GOPATH/src/$2"
  cd "$GOPATH/src/$2"
  git remote add their "$remote2"
  echo "Now in $GOPATH/src/$2 origin remote is $remote1"
  echo "And in $GOPATH/src/$2 their remote is $remote2"
  cd "$currentDir"
}
export -f gofork
You can use the command go get -f to fetch a forked repo.
In your Gopkg.toml file, add the block below:
[[constraint]]
name = "github.com/globalsign/mgo"
branch = "master"
source = "github.com/myfork/project2"
So it will use the fork github.com/myfork/project2 in place of github.com/globalsign/mgo.
I know the slug compiler removes the .git directory when creating a heroku slug, but is there any way to configure Heroku so that I can access the currently running git commit number from within my scripts?
I'd like to be able to have a small link on my sinatra app (run within Heroku) which says "running version e72fb274a0" (or something similar). How can I retrieve this, or force the slug compiler to add it to an environment variable?
PROGRESS:
I reckon the best way to do this is to make a custom buildpack which writes the git commit version number to the heroku slug before the .git directory is deleted.
I've tried to do this (see my fork of the ruby buildpack) but the line I've added – line 23 – doesn't seem to be doing the job. Heroku sees & uses the new buildpack, but doesn't seem to write the file to the slug.
Anyone have any idea why my custom buildpack isn't working as expected?
Thanks,
JP
A couple of options...
SOURCE_VERSION environment variable (build-time)
Since 1st April 2015, there's a SOURCE_VERSION environment variable available to builds running on Heroku. For git-pushed builds, this is the git commit SHA-1 of the source being built:
https://devcenter.heroku.com/changelog-items/630
(thanks to @srtech for pointing that out!)
An example of me using that variable in a build - if you look at the HTML served by the deployed app, you'll see the commit id is coming through in an HTML comment near the very bottom: https://gu-who.herokuapp.com/
/etc/heroku/dyno metadata file (run-time)
Heroku have beta functionality to write out a /etc/heroku/dyno metadata file onto your running dyno. If you email support you can probably get added to the beta. Here's a place where Heroku themselves are using it:
https://github.com/heroku/fix/blob/6c8ab7a/lib/heroku_dyno_metadata.rb
The contents look like this:
{
  "dyno": {
    "physical_id": "161bfad9-9e83-40b7-b385-78305db2f168",
    "size": 1,
    "name": "run.7145"
  },
  "app": {
    "id": null
  },
  "release": {
    "id": 50,
    "commit": "2c3a0b24069af49b3de35b8e8c26765c1dba9ff0",
    "description": null
  }
}
...so release.commit is the field you're after. I used to use this method until the SOURCE_VERSION variable became available.
In 2018 this is what you want:
https://devcenter.heroku.com/articles/dyno-metadata
heroku labs:enable runtime-dyno-metadata -a <app name>
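Once that lab is enabled, the running dyno gets the commit as an environment variable (HEROKU_SLUG_COMMIT, if I recall the dyno-metadata variable names correctly), so a Sinatra handler along these lines could show it:
require 'sinatra'

get '/version' do
  # HEROKU_SLUG_COMMIT is provided by the runtime-dyno-metadata feature
  "running version #{ENV.fetch('HEROKU_SLUG_COMMIT', 'unknown')[0, 10]}"
end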
You can run a script before deploying that stores this information (maybe in a YAML file).
Use backticks: a = `ls` (note that this is the backtick ` character, not the apostrophe ').
The a variable will contain the result of that shell command, so you can do
git = `git log`
and then extract the information you want and store it.
So you will be able to retrieve it later.
Did this help?
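As a small sketch of that idea, using git rev-parse HEAD rather than parsing the full git log (the file name is arbitrary):
require 'yaml'

commit = `git rev-parse HEAD`.strip
File.write('revision.yml', { 'commit' => commit }.to_yaml)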
I am writing a program in Ruby that necessitates downloading the most current version of my team's software from SVN upon start up.
The checkout function (from the Ruby SVN bindings) is what I believe I want to use, because an update would not ADD any files that do not exist on my machine's local "trunk" workspace. A checkout statement would both update files that do not match to HEAD, and it would download ones that don't exist at all. Effectively, after running a fully recursive checkout, I would hope to have an exact copy of the most recent SVN repository.
According to this API, a checkout statement basically takes the following:
an exact SVN URL
a local root project directory
a revision (I would be using the string 'HEAD')
recursive (integer 1 or 0)
a pool object (I cannot determine what this is for exactly, but I don't think it affects me)
Here's what I wrote, inside a block that iterates for each file in the SVN repository:
if status != NORMAL #any file that changed or is 'missing'
ctx.checkout(status.entry.url, ROOTDIR, 'HEAD', 0, nil) #update abnormal file to HEAD
end
As a test, I erased a directory from my local workspace, and attempted to restore it with this command. It runs through until it reaches one of the missing files, at which point it raises an error:
`svn_client_checkout3': subversion/libsvn_fs_fs/tree.c:663: Svn::Error::FsNotFound: File not found: revision 0, path '/trunk/project-gadfly/SocketServer/DiscoveryServer.cpp' (Svn::Error::FsNotFound)
I do not understand why this error would be raised, because I thought that a checkout statement would see that the directory (i.e. file) does not exist locally and then create it. Perhaps I am doing something wrong?
Looking back on what I've written, I think all of this was a long-winded way of asking the following simple question: How do I get the most current version of the SVN repository onto my local hard drive with an SVN Ruby command?
Thanks in advance,
Elwood Hopkins
I don't know about the Ruby-specific part of the question, but it's clear that you asked the SVN API to check out "status.entry.url" at revision 0, which of course doesn't exist here.
It's also strange that you looked into the Perl documentation when writing in Ruby. I would recommend that you look at the Subversion sources instead.
Here's the Ruby method declaration:
http://svn.apache.org/repos/asf/subversion/branches/1.7.x/subversion/bindings/swig/ruby/svn/client.rb
def checkout(url, path, revision=nil, peg_rev=nil,
             depth=nil, ignore_externals=false,
             allow_unver_obstruction=false)
  revision ||= "HEAD"
  Client.checkout3(url, path, peg_rev, revision, depth,
                   ignore_externals, allow_unver_obstruction,
                   self)
end
So as you can see, you've specified 0 as the peg revision. But you should specify HEAD instead.
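A corrected call following the signature above would then be something like this (or just omit the trailing arguments and let the defaults apply):
ctx.checkout(status.entry.url, ROOTDIR, 'HEAD', 'HEAD')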
As for pools: they are part of SVN's memory management. Here's the explanation: http://subversion.apache.org/docs/community-guide/conventions.html#apr-pools