I was looking at https://yarnpkg.com/lang/en/docs/cli/policies/ and it seems like a good idea to use it to let my team easily stay on the same yarn version. However, yarn policies set-version 1.18 downloads the full yarn release into .yarn/releases (a 4.5 MB JS file) and sets a config entry in the repo's .yarnrc file.
It feels weird to check this 4.5 MB yarn executable into version control, but if I don't, my colleagues won't be able to run yarn, because the entry in the .yarnrc will point at a file that doesn't exist on their system, and it isn't magically downloaded...
So, is it best practice to check the .yarn/releases folder into version control?
Yes, your assumption is correct: you check the .yarnrc file and the actual yarn JS file into source control.
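For reference, the entry that yarn policies set-version writes to .yarnrc looks roughly like this (the exact release filename depends on the version you pin):

```
yarn-path ".yarn/releases/yarn-1.18.0.js"
```

With both the .yarnrc entry and the file under .yarn/releases committed, the yarn installation on a colleague's machine transparently delegates to the pinned release.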
Currently I am working on an API which uses Serverless Framework with Go.
I'm using the Serverless-offline plugin for local testing.
This API depends on a few other repositories (which I also maintain), which I import using the go.mod file.
However I am having a hard time refining my developer workflow.
Currently, if I want to make changes in a repository that this API depends on, I have to alter the project's go.mod to include replace directives for testing, and then manually change it back before deploying to production.
Basically I'm looking for a way to include replace directives, which only get applied during local development. How has everyone else dealt with this problem?
Bonus question: Is there any way to run serverless-offline in Docker? I'm finding that running serverless-offline on bare metal causes inconsistencies between different developers' environments.
You can run go commands with an alternate go.mod file with the -modfile option:
From Build commands:
The -modfile=file.mod flag instructs the go command to read (and
possibly write) an alternate file instead of go.mod in the module root
directory. The file’s name must end with .mod. A file named go.mod
must still be present in order to determine the module root directory,
but it is not accessed. When -modfile is specified, an alternate
go.sum file is also used: its path is derived from the -modfile flag
by trimming the .mod extension and appending .sum.
Create a local.go.mod file with the necessary replace directives for development, and then build with it, for example:
go build -modfile=local.go.mod ./...
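A possible workflow, assuming Go 1.14+ (which introduced -modfile) and an illustrative dependency example.com/shared/lib checked out in a sibling directory:

```shell
# Derive a development mod/sum pair from the production files.
cp go.mod local.go.mod
cp go.sum local.go.sum   # -modfile=local.go.mod implies local.go.sum

# Add the replace directive only to the local copy.
go mod edit -replace=example.com/shared/lib=../shared-lib local.go.mod

# Build and test against the local checkout; go.mod stays untouched.
go build -modfile=local.go.mod ./...
go test -modfile=local.go.mod ./...
```

Since go.mod is never modified, the production deployment path needs no changes; adding local.go.mod and local.go.sum to .gitignore keeps the development overrides out of the repository.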
From the official GitLab documentation, it seems that a CI/CD stage can only be skipped based on whether a file has changed. Is it possible to skip a step based on whether a file or folder exists on the machine being deployed to?
The case is that it is common to use a package management tool (e.g. composer in PHP or npm in Node.js). Currently the rule checks whether the respective configuration file has changed (e.g. composer.json for composer or package.json for npm) to decide whether to run the install step (i.e. composer install or npm install). However, although it seldom happens, when a new machine is used for deployment, the CI/CD pipeline crashes because the install step is skipped.
Currently the problem is solved by manually triggering the install step, but is it possible to auto-detect whether the file exists on the hosting machine, to determine whether the install step should be run?
I don't think there is a way to do what you describe. The only/except feature in GitLab runs on the GitLab server, before any jobs are created. To know whether a file already exists on a server, a job needs to be created and assigned to a GitLab runner.
They have this issue, where it looks like it will be possible to start a job and then determine that it should be skipped by using exit codes:
https://gitlab.com/gitlab-org/gitlab/issues/16733
In the meantime, you can make your install job always run, and use your build scripts to check for the file. From the pipeline view in GitLab you will always see your install step, but the actual job can simply do nothing if the file already exists, and then you don't need to remember to run the job manually.
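The file-existence guard itself can live in the job's script section; a minimal sketch in plain shell (the directory name and the commented-out install command are assumptions - adapt them to your package manager):

```shell
# Guard to put inside the install job's script section.
# DEP_DIR is the directory the install step produces (vendor/ for
# composer, node_modules/ for npm) - adjust it to your project.
DEP_DIR="${DEP_DIR:-vendor}"
if [ -d "$DEP_DIR" ]; then
  echo "$DEP_DIR already exists - skipping install"
else
  echo "$DEP_DIR missing - running install"
  # composer install    # or: npm ci
fi
```

Because the guard exits successfully in both branches, the pipeline stays green whether or not the install work actually runs.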
My Problem
Elastic Beats is an open source project for log shippers written in Go. It features several log outputs, including console, Elasticsearch and Redis. I would like to add an output of my own - to AWS Kinesis.
I have cloned the repo to ~/github/beats, and tried building it:
$ cd filebeat; go build main.go
However, it failed due to a missing library which is a part of the project:
main.go:6:2: cannot find package "github.com/elastic/beats/filebeat/cmd" in any of:
/usr/local/go/src/github.com/elastic/beats/filebeat/cmd (from $GOROOT)
/Users/adam/go/src/github.com/elastic/beats/filebeat/cmd (from $GOPATH)
A directory of the project is dependent on a package from the same repo, but instead of looking one directory up the hierarchy it looks in the GOPATH.
So, go get github.com/elastic/beats/filebeat/cmd fetched the code, and now go build main.go works. Changing the code in my GOPATH is reflected in these builds.
This leaves me with a structural inconvenience. Some of my code is in my working directory, and some of it is in my GOPATH, included by the code in my working directory.
I would like to have all my code in a single directory for various reasons, not the least being keeping everything under version control.
What Have I Tried
Mostly searching for the problem. I am quite new to Go, so I might have missed the correct terminology.
My Question
What is the right way to edit the code of an imported library in Go?
One of the recommended ways to work with others' packages is:
Get the sources of the original package:
go get github.com/elastic/beats
As a result you will clone project's git repository to the folder
$GOPATH/src/github.com/elastic/beats
Make some fixes, compile the code, fix, compile... When you run go install, the package will be compiled and installed on your system. When you need to merge updates from the original repository, you can git pull them.
Everything is OK. What's next? How to share your work with others?
Fork project on github, suppose it will be github.com/username/beats
Add this fork as another remote, mycopy (or any other name you like), to your local repository:
git remote add mycopy https://github.com/username/beats.git
When all is done you can push updated sources to your repo on github
git push mycopy
and then open a pull request against the original repository. This way you can share your work with others and keep your changes in sync with upstream.
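Put together, the whole round trip looks roughly like this (username/beats is a placeholder for your own fork):

```shell
# Work directly in the GOPATH checkout created by go get
cd "$GOPATH/src/github.com/elastic/beats"

# ...edit, build, test...

# Publish the changes to your fork, keeping upstream as "origin"
git remote add mycopy https://github.com/username/beats.git
git push mycopy

# Merge upstream updates when needed
git pull origin master
```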
Previous answers to this question are obsolete for projects that use Go Modules.
For projects that use Go Modules, one may use the following command to replace an imported library (e.g. example.com/imported/module) with a local module (e.g. ./local/module):
go mod edit -replace=example.com/imported/module=./local/module
Or by adding the following line into the go.mod file:
replace example.com/imported/module => ./local/module
Reference Docs: https://golang.org/doc/modules/managing-dependencies#unpublished
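In context, a go.mod with such a replace directive might look like this (module paths and versions are illustrative):

```
module example.com/myapp

go 1.16

require example.com/imported/module v1.2.3

replace example.com/imported/module => ./local/module
```

For a directory replacement like this, the local directory must itself contain a go.mod file declaring example.com/imported/module as its module path.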
A project working copy should be checked out into $GOPATH/src/package/import/path - for example, this project should be checked out into /Users/adam/go/src/github.com/elastic/beats. With the project in the correct location, the go tooling will be able to operate on it normally; otherwise, it will not be able to resolve imports correctly. See go help gopath for more info.
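Concretely, for the checkout described in the question, moving the clone into place might look like this (paths taken from the question's error message):

```shell
mkdir -p "$GOPATH/src/github.com/elastic"
mv ~/github/beats "$GOPATH/src/github.com/elastic/beats"
cd "$GOPATH/src/github.com/elastic/beats/filebeat"
go build main.go
```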
I'm new to this repository. I already installed it and it is working fine on Ubuntu 14.04. Now I want to personalize it, and I've found everywhere that to avoid losing your customizations, you should place them in [dspace-source]/dspace/modules/xmlui/src/main/webapp/themes (I'm choosing xmlui since that is the interface I'm using, and themes because that is the only customization I want to make for now), and then run mvn package from [dspace-source]/dspace to apply the changes to the installation directory ([dspace]). I have done this, but the new theme I created doesn't appear in the installation directory. Should I do an ant update after the mvn package? Am I missing something from the documentation?
Thanks for the help!
You are correct. mvn package will build the code in dspace-source/target. ant update will copy the code from dspace-source/target to your installation directory. The Maven build is generic and does not know your configuration settings; the Ant task reads your configuration settings (which contain the install path).
After running ant update, you should restart tomcat.
Because the Maven/Ant cycle can take some time, I will occasionally make changes to uncompiled files (XSL, JS, CSS) in the source tree and then copy them directly into the install tree.
Beware of making changes directly in the install tree, since they are easily overwritten by the ant command.
The cocoon layer of XMLUI does cache some files. If you make a change and it does not seem to take effect, sign in with an admin login and go to Administrative->Java Console->Clear Cache to force a change to be reloaded.
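The full rebuild-and-deploy cycle described above can be sketched as follows ([dspace-source] is a placeholder, and the Tomcat restart command depends on your setup):

```shell
cd [dspace-source]/dspace
mvn package                    # builds into [dspace-source]/dspace/target

cd target/dspace-installer
ant update                     # copies the build into [dspace] using your config

sudo service tomcat7 restart   # restart tomcat so the changes are picked up
```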
I want to make some changes to Hadoop HDFS according to a published paper. After that I just need to build HDFS and get it running. How can I do that?
Refer to the following Hadoop documentation:
http://wiki.apache.org/hadoop/HowToContribute
This assumes you build on Linux. If you use a different OS, you may need to do some extra steps; for details see this - I've never done this on non-Linux myself.
Install Git, Java (JDK), Maven and Protocol Buffers (version 2.5+ is required).
Clone https://github.com/apache/hadoop-common.git by typing something like this in your command line:
git clone https://github.com/apache/hadoop-common.git
Note: you may want to use a particular branch corresponding to the version of HDFS you're looking to build. To list all branches, type git branch -a. Then to switch to branch 2.3, for example, type:
git checkout --track origin/branch-2.3
If you did everything correctly, you should see a message about tracking the remote branch you've selected.
Make whatever changes you need to make in HDFS; the code lives under hadoop-hdfs-project.
Compile the project by running the following from the root of your tree:
mvn install -DskipTests
This will take some time the first time you do it, but will be a lot quicker during re-runs.
Your final jars will be placed into directories like hadoop-hdfs-project/hadoop-hdfs/target (this is accurate for at least 2.3, but it might have been different in older versions, and it may change in the future).
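Condensed, the steps above look like this (branch-2.3 is just an example; pick the branch matching the HDFS version you target):

```shell
git clone https://github.com/apache/hadoop-common.git
cd hadoop-common
git branch -a                          # list available branches
git checkout --track origin/branch-2.3

# ...make your HDFS changes under hadoop-hdfs-project/...

mvn install -DskipTests                # first run is slow; re-runs are faster
ls hadoop-hdfs-project/hadoop-hdfs/target/*.jar
```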