Should the `.bundle` directory be added to version control? - ruby

Recently I moved some of my Ruby project dependencies (like rubocop) to the development group so that they are not installed in production environments.
In those projects I now see a new file, .bundle/config, with content like this:
---
BUNDLE_WITH: "development"
Or
---
BUNDLE_WITHOUT: "development"
I think I can safely add this file (and the .bundle folder) to .gitignore, but to be sure: is there any best practice for that?
I could not find any useful info on the $PROJECT_ROOT/.bundle directory.

TL;DR
Don't store the .bundle directory in source control. It's intended to be a local cache of certain bundler settings and flags, rather than something shared between all project contributors.
Analysis
There are some arguments for and against storing your Gemfile.lock in source control, but the contents of the .bundle directory are not intended to be shared across multiple users/machines within a project. The only potential use case for tracking .bundle/config is to remember certain flags across runs in a production or testing branch, but that behavior is deprecated anyway.
Pragmatically, storing the .bundle directory may lead to unintended flag usage by your fellow developers, which is likely to mean unnecessary debugging effort and potentially surprising behavior for project contributors. A Rake task or setup/deployment script is a better option for handling Bundler flags, especially as this flag-remembering behavior is deprecated and is scheduled to go away in Bundler 3.
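For example, a deploy or setup script can apply the flags explicitly instead of relying on a committed .bundle/config. A minimal sketch, assuming Bundler 2.1+ for the bundle config set syntax (the group name is just an example):
# deploy-time setup: record the skip-list in the machine-local .bundle/config
# (which stays untracked), then install
bundle config set --local without 'development'
bundle install
# on older Bundler versions the equivalent (now-deprecated) remembered flag was:
# bundle install --without development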

Related

What is the difference between installed packages and GOCACHE?

I always build with the -i flag to install packages, and .a files are installed in the GOPATH/pkg directory.
The GOCACHE directory shown by go env GOCACHE seems to store cache files as well.
What's the difference between them?
And should both of them be kept if I want to make builds faster?
TL;DR: The cache folder is internal to the go tool, its workings should be opaque to the user, and its purpose is to speed up builds and tests. For example, if you use a version control system (such as git) and switch between branches or versions, GOPATH/pkg may only contain the package files of one version. The go cache folder may contain (partially) compiled packages of multiple branches and versions, speeding up future builds when you switch between branches and versions.
The cache folder was introduced in Go 1.10:
The go build command now maintains a cache of recently built packages, separate from the installed packages in $GOROOT/pkg or $GOPATH/pkg. The effect of the cache should be to speed builds that do not explicitly install packages or when switching between different copies of source code (for example, when changing back and forth between different branches in a version control system). The old advice to add the -i flag for speed, as in go build -i or go test -i, is no longer necessary: builds run just as fast without -i. For more details, see go help cache.
So you don't need to use -i anymore for fast builds.
Some quotes from the output of go help cache:
The go command caches build outputs for reuse in future builds.
The default location for cache data is a subdirectory named go-build
in the standard user cache directory for the current operating system.
Setting the GOCACHE environment variable overrides this default,
and running 'go env GOCACHE' prints the current cache directory.
The go command periodically deletes cached data that has not been
used recently. Running 'go clean -cache' deletes all cached data.
The build cache correctly accounts for changes to Go source files,
compilers, compiler options, and so on: cleaning the cache explicitly
should not be necessary in typical use. However, the build cache
does not detect changes to C libraries imported with cgo.
If you have made changes to the C libraries on your system, you
will need to clean the cache explicitly or else use the -a build flag
(see 'go help build') to force rebuilding of packages that
depend on the updated C libraries.
The go command also caches successful package test results.
See 'go help test' for details. Running 'go clean -testcache' removes
all cached test results (but not cached build results).
The cache folder is also used to store test results, so in some circumstances, the cached results may be presented without running the tests again.
Your question is self-answering:
$ ls $(go env GOCACHE)
$ cat $(go env GOCACHE)/README
and
$ ls $(go env GOPATH)/pkg
As you can see, there is nothing similar between them:
GOPATH/pkg - compiled packages that remain static between builds. Those files are not actually "cache" files.
GOCACHE - a collection of build artifacts that constantly changes between builds.
A more elaborate answer could be obtained by examining the source of go build.
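For reference, the cache-related commands quoted above, gathered in one place (all are standard go tool commands):
go env GOCACHE        # print where the build cache lives
go clean -cache       # delete all cached build data (rarely needed; the cache is managed automatically)
go clean -testcache   # delete cached test results only, keeping cached build outputs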

How to enforce bundle install location

I come from a Python and JavaScript background.
When developing a JavaScript project, dependencies are installed in a node_modules directory in the project root.
When developing a Python project, virtualenvwrapper is typically used. In this case dependencies are installed in a virtual environment, which is located in ~/.virtualenvs/<project_name> by default.
Now I need to use a Ruby tool for a project. The tool that appears most promising for a setup similar to the one described above is bundler.
However, the default installation location for bundler is system-wide. I consider this to be harmful.
For one of my systems, it will prompt for a password, at which point I can still abort.
However, on my other system I can write into the global Ruby installation (I'm using a Homebrew-installed Ruby there), so bundle will just install dependencies globally.
I know I can specify the installation location by adding --path, but this is easy to forget.
One way to enforce an installation path is by committing .bundle/config. It would just have to contain this:
---
BUNDLE_PATH: "."
However, some googling around shows that it's not advised to commit this file.
What is the recommended way to prevent accidental global installations using bundler?
Who's to say it will be accidental? It really depends on what context you're talking about here. I have my Ruby set up so that bundle install works without requiring sudo; it's all done through rbenv automatically. The same is true with rvm if done as a user-level install.
When it comes to deploying apps and you want to make sure it's deployed correctly, that's where tools like Capistrano come into play: Create a deployment script that will apply the correct procedure every time.
Checking in a .bundle/config is really rude from a dev perspective, just like checking in any other user-specific preferences you might have. It causes no end of conflict with other team members.
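If the underlying goal is just to keep gems out of the system Ruby without committing user-specific settings, each developer (or a documented setup script) can write a local, untracked config instead. A sketch, assuming Bundler 2.1+ for the bundle config set syntax:
# install this project's gems into ./vendor/bundle instead of the global gem dir;
# this writes BUNDLE_PATH into the local .bundle/config, which stays out of git
bundle config set --local path 'vendor/bundle'
bundle install
# keep the machine-specific artifacts untracked
echo '.bundle/' >> .gitignore
echo 'vendor/bundle/' >> .gitignore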

Should I commit the yarn.lock file and what is it for?

Yarn creates a yarn.lock file after you perform a yarn install.
Should this be committed to the repository or ignored? What is it for?
Yes, you should check it in, see Migrating from npm
What is it for?
The npm client installs dependencies into the node_modules directory non-deterministically. This means that, based on the order in which dependencies are installed, the structure of a node_modules directory could be different from one person to another. These differences can cause "works on my machine" bugs that take a long time to hunt down.
Yarn resolves these issues around versioning and non-determinism by using lock files and an install algorithm that is deterministic and reliable. These lock files lock the installed dependencies to a specific version and ensure that every install results in the exact same file structure in node_modules across all machines.
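For a sense of what gets locked: a yarn.lock (v1 format) entry records the resolved version, the tarball URL, and a checksum. An illustrative entry (hashes elided):
left-pad@^1.3.0:
  version "1.3.0"
  resolved "https://registry.yarnpkg.com/left-pad/-/left-pad-1.3.0.tgz#..."
  integrity sha512-...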
Depends on what your project is:
Is your project an application? Then: Yes
Is your project a library? If so: No
A more elaborate description of this can be found in this GitHub issue, where one of the creators of Yarn says, for example:
The package.json describes the intended versions desired by the original author, while yarn.lock describes the last-known-good configuration for a given application.
Only the yarn.lock file of the top-level project will be used. So unless one's project will be used standalone and not be installed into another project, there's no use in committing any yarn.lock file; instead it will always be up to the package.json file to convey what versions of dependencies the project expects.
I see these are two separate questions in one. Let me answer both.
Should you commit the file into repo?
Yes. As mentioned in ckuijjer's answer, the Migration Guide recommends including this file in the repo. Read on to understand why you need to do it.
What is yarn.lock?
It is a file that stores the exact dependency versions for your project together with checksums for each package. This is yarn's way to provide consistency for your dependencies.
To understand why this file is needed, you first need to understand the problem with the original npm package.json. When you install a package, npm stores a range of allowed revisions of a dependency instead of a specific revision (semver). npm then tries to fetch the latest version of the dependency within the specified range (i.e. non-breaking patch updates). There are two problems with this approach.
Dependency authors might release patch version updates while in fact introducing a breaking change that will affect your project.
Two developers running npm install at different times may get different sets of dependencies, which may make a bug impossible to reproduce in two seemingly identical environments. This might cause build stability issues for CI servers, for example.
Yarn, on the other hand, takes the route of maximum predictability. It creates a yarn.lock file to save the exact dependency versions. With that file in place, yarn will use the versions stored in yarn.lock instead of resolving versions from package.json. This strategy guarantees that none of the issues described above happen.
yarn.lock is similar to npm-shrinkwrap.json that can be created by npm shrinkwrap command. Check this answer explaining the differences between these two files.
You should:
add it to the repository and commit it
use yarn install --frozen-lockfile and NOT yarn install as a default both locally and on CI build servers.
(I opened a ticket on yarn's issue tracker to make a case to make frozen-lockfile default behavior, see #4147).
Beware NOT to set the frozen-lockfile flag in the .yarnrc file, as that would prevent you from being able to sync the package.json and yarn.lock files. See the related yarn issue on GitHub.
yarn install may mutate your yarn.lock unexpectedly, making yarn's claims of repeatable builds null and void. You should only use yarn install to initialize a yarn.lock and to update it.
Also, especially in larger teams, you may get a lot of noise around changes to the yarn.lock file only because a developer was setting up their local project.
For further information, read up on my answer about npm's package-lock.json, as that applies here as well.
This was also recently made clear in the docs for yarn install:
yarn install
Install all the dependencies listed within package.json
in the local node_modules folder.
The yarn.lock file is utilized as follows:
If yarn.lock is present and is enough to satisfy all the dependencies
listed in package.json, the exact versions recorded in yarn.lock are
installed, and yarn.lock will be unchanged. Yarn will not check for
newer versions.
If yarn.lock is absent, or is not enough to satisfy
all the dependencies listed in package.json (for example, if you
manually add a dependency to package.json), Yarn looks for the newest
versions available that satisfy the constraints in package.json. The
results are written to yarn.lock.
If you want to ensure yarn.lock is not updated, use --frozen-lockfile.
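In practice the recommendation above boils down to using different commands for different intents; a short sketch:
# on CI and for day-to-day installs: fail instead of silently rewriting yarn.lock
yarn install --frozen-lockfile   # Yarn 2+ renamed this flag to --immutable
# only when you deliberately want to change dependencies (and the lock file):
yarn add lodash                  # or: yarn upgrade lodash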
From my experience I would say yes, we should commit the yarn.lock file. It will ensure that, when other people use your project, they get the same dependencies your project expects.
From the docs:
When you run either yarn or yarn add , Yarn will generate a yarn.lock file within the root directory of your package. You don’t need to read or understand this file - just check it into source control. When other people start using Yarn instead of npm, the yarn.lock file will ensure that they get precisely the same dependencies as you have.
One argument could be that we can achieve the same thing by pinning exact versions in package.json (removing the ^ ranges). Yes, we can, but in general the majority of npm packages come with the ^ notation, and we would have to change the notation manually to ensure a static dependency version. If you use yarn.lock, it will programmatically ensure the correct version.
Also as Eric Elliott said here
Don’t .gitignore yarn.lock. It is there to ensure deterministic dependency resolution to avoid “works on my machine” bugs.
Not to play the devil's advocate, but I have slowly (over the years) come around to the idea that you should NOT commit the lock files.
I know every bit of documentation they have says that you should. But what good can it possibly do?! And the downsides far outweigh the benefits, in my opinion.
Basically, I have spent countless hours debugging issues that have eventually been solved by deleting lock files. For example, the lock files can contain information about which package registry to use, and in an enterprise environment where different users access different registries, it's a recipe for disaster.
Additionally, the lock files can really mess up your dependency tree. Because yarn and npm create a complex tree and keep external modules of different versions in different places (e.g. in the node_modules folder within a module inside the top node_modules folder of your app), if you update dependencies frequently, it can create a real mess. Again, I have spent tons of time trying to figure out why an old version of a module was still being used in a dependency even though the module version had been updated, only to find that deleting the lock file and the node_modules folder solved all the hard-to-diagnose problems.
I even have shell aliases now that delete the lock files (and sometimes node_modules folders as well!) before running yarn or npm.
Just the other side of the coin, I guess, but blindly following this dogma can cost you...
I'd guess yes, since Yarn versions its own yarn.lock file:
https://github.com/yarnpkg/yarn
It's used for deterministic package dependency resolution.
Yes! yarn.lock must be checked in so any developer who installs the dependencies gets the exact same output! With the npm that was available in Oct 2016, for instance, you could have a patch version (say 1.2.0) installed locally, while a new developer running a fresh install might get a different version (1.2.1).
Yes, you should commit it. For more about the yarn.lock file, refer to the official docs here.

Where does COMPASS expect to find the config.rb (if there are several)?

I'm introducing Compass to a project where we have different development branches of an SVN-versioned web project checked out in different folders for development. So we'll need several copies of config.rb: the config.rb should be in trunk and thus in every branch we check out from SVN. How can I switch from "watching" the SASS directory in one branch checked out on my computer to watching another SASS directory in another checked-out branch?
Since the directory names of the checked-out branches change with every new branch, having just one config.rb for Compass outside of / at the root of all branch directories is not an optimal solution, since then we'd have to update every local config.rb on every developer's computer with every new branch checked out...
Do I simply start "compass watch" in every branch's styles directory when working within that branch? Will there then be several Compass instances running, watching all the different branches?
I didn't find any answer to this problem on the internet, so I hope to find one here. Any idea welcome!
Cheers, Roman.
You can specify the location of the configuration file with the -c command-line argument: compass watch -c custom/folder/config.rb. But be careful: all the paths set by the configuration variables (sass_dir, css_dir, etc.) are resolved relative to the directory from which the compass command is launched. For example, you can cd into a branch and point to the configuration file present in the trunk.
However, the easiest way would be to have one configuration file per branch and start the compilation in each of them, as sketched below.
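For example (paths are illustrative), either run one watcher per checked-out branch, each picking up that branch's own config.rb from the current directory, or point a single invocation at an explicit config file:
# one watcher per branch, each using the config.rb in that working copy
cd ~/work/branch-a && compass watch
cd ~/work/branch-b && compass watch    # in a second terminal
# or point at an explicit configuration file
compass watch -c ../trunk/config.rb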

Should Gemfile.lock be included in .gitignore?

I'm sort of new to bundler and the files it generates. I have a copy of a git repo from GitHub that is being contributed to by many people, so I was surprised to find that bundler created a file that didn't exist in the repo and wasn't in the .gitignore list.
Since I have forked it, I know adding it to the repo won't break anything for the main repo, but if I do a pull request, will it cause a problem?
Should Gemfile.lock be included in the repository?
Update for 2022 from TrinitronX
Fast-forward to 2021: the Bundler docs [web archive] now say to commit the Gemfile.lock inside a gem... ¯\_(ツ)_/¯ I guess it makes sense for developers and ease of use when starting on a project. However, CI jobs now need to be sure to remove any stray Gemfile.lock files to test against other versions.
Legacy answer ~2010
Assuming you're not writing a rubygem, Gemfile.lock should be in your repository. It's used as a snapshot of all your required gems and their dependencies. This way bundler doesn't have to recalculate all the gem dependencies each time you deploy, etc.
From cowboycoded's comment below:
If you are working on a gem, then DO NOT check in your Gemfile.lock. If you are working on a Rails app, then DO check in your Gemfile.lock.
Here's a nice article explaining what the lock file is.
The real problem happens when you are working on an open-source Rails app that needs to have a configurable database adapter. I'm developing the Rails 3 branch of Fat Free CRM.
My preference is postgres, but we want the default database to be mysql2.
In this case, Gemfile.lock still needs to be checked in with the default set of gems, but I need to ignore changes that I have made to it on my machine. To accomplish this, I run:
git update-index --assume-unchanged Gemfile.lock
and to reverse:
git update-index --no-assume-unchanged Gemfile.lock
It is also useful to include something like the following code in your Gemfile. This loads the appropriate database adapter gem, based on your database.yml.
# Loads the database adapter gem based on config/database.yml (Default: mysql2)
# -----------------------------------------------------------------------------
db_gems = { "mysql2"     => ["mysql2", ">= 0.2.6"],
            "postgresql" => ["pg", ">= 0.9.0"],
            "sqlite3"    => ["sqlite3"] }

adapter = if File.exists?(db_config = File.join(File.dirname(__FILE__), "config", "database.yml"))
  db = YAML.load_file(db_config)
  # Fetch the first configured adapter from config/database.yml
  (db["production"] || db["development"] || db["test"])["adapter"]
else
  "mysql2"
end

gem *db_gems[adapter]
# -----------------------------------------------------------------------------
I can't say if this is an established best practice or not, but it works well for me.
My workmates and I have different Gemfile.lock files, because we use different platforms (Windows and Mac), and our server is Linux.
We decided to remove Gemfile.lock from the repo and to keep a Gemfile.lock.server in the git repo instead, just like database.yml. Then, before deploying to the server, we copy Gemfile.lock.server to Gemfile.lock on the server using a cap deploy hook.
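The command such a deploy hook runs on the server could be as simple as the following sketch, run from the release directory (the --deployment flag is how older Bundler versions enforce installing strictly from the lock file):
# swap in the server-specific lock file, then install exactly what it pins
cp Gemfile.lock.server Gemfile.lock
bundle install --deployment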
Agreeing with r-dub, keep it in source control, but to me, the real benefit is this:
Collaboration in identical environments (disregarding the Windows and Linux/Mac differences). Before Gemfile.lock, the next person to install the project might see all kinds of confusing errors and blame themselves, when really they were just the lucky one who got the next version of some gem that broke existing dependencies.
Worse, this happened on the servers, which got untested versions unless you were disciplined and installed exact versions. Gemfile.lock makes this explicit, and it will explicitly tell you when your versions are different.
Note: remember to group gems, e.g. as :development and :test.
Simple answer in the year 2021:
Gemfile.lock should be in version control, also for Ruby gems. The accepted answer is now 11 years old.
Some reasoning here (cherry-picked from comments):
#josevalim https://github.com/heartcombo/devise/pull/3147#issuecomment-52193788
The Gemfile.lock should stay in the repository because contributors and developers should be able to fork the project and run it using versions that are guaranteed to work.
#rafaelfranca https://github.com/rails/rails/pull/18951#issuecomment-74888396
I don't think it is a good idea to ignore the lock file even for plugins.
This means that a "git clone; bundle; rake test" sequence is not guaranteed to pass, because one of your dozens of dependencies may have been upgraded and broken your code. Also, as #chancancode said, it makes it a lot harder to bisect.
Also Rails has Gemfile.lock in git:
https://github.com/rails/rails/commit/0ad6d27643057f2eccfe8351409a75a6d1bbb9d0
The Bundler docs address this question as well:
ORIGINAL: http://gembundler.com/v1.3/rationale.html
EDIT: http://web.archive.org/web/20160309170442/http://bundler.io/v1.3/rationale.html
See the section called "Checking Your Code into Version Control":
After developing your application for a while, check in the
application together with the Gemfile and Gemfile.lock snapshot. Now,
your repository has a record of the exact versions of all of the gems
that you used the last time you know for sure that the application
worked. Keep in mind that while your Gemfile lists only three gems
(with varying degrees of version strictness), your application depends
on dozens of gems, once you take into consideration all of the
implicit requirements of the gems you depend on.
This is important: the Gemfile.lock makes your application a single
package of both your own code and the third-party code it ran the last
time you know for sure that everything worked. Specifying exact
versions of the third-party code you depend on in your Gemfile would
not provide the same guarantee, because gems usually declare a range
of versions for their dependencies.
The next time you run bundle install on the same machine, bundler will
see that it already has all of the dependencies you need, and skip the
installation process.
Do not check in the .bundle directory, or any of the files inside it.
Those files are specific to each particular machine, and are used to
persist installation options between runs of the bundle install
command.
If you have run bundle pack, the gems (although not the git gems)
required by your bundle will be downloaded into vendor/cache. Bundler
can run without connecting to the internet (or the RubyGems server) if
all the gems you need are present in that folder and checked in to
your source control. This is an optional step, and not recommended,
due to the increase in size of your source control repository.
No Gemfile.lock means:
new contributors cannot run tests because weird things fail, so they won't contribute or will get failing PRs ... a bad first experience.
you cannot go back to an x-year-old project and fix a bug without having to update/rewrite the project if you have lost your local Gemfile.lock
-> Always check in Gemfile.lock; make Travis delete it if you want to be extra thorough (a sketch of that CI step follows below) https://grosser.it/2015/08/14/check-in-your-gemfile-lock/
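A sketch of that "extra thorough" CI step for a gem; the point is simply to throw away the checked-in lock so the newest allowed versions of all dependencies get resolved and tested (the rake task name is just an example):
# in the gem's CI job
rm -f Gemfile.lock
bundle install
bundle exec rake test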
A little late to the party, but the answers still took me time and some outside reading to understand this problem, so I want to summarize what I have found out about Gemfile.lock.
When you are building a Rails app, you are using certain versions of gems on your local machine. If you want to avoid errors in production and in other branches, you have to use that one Gemfile.lock file everywhere and tell Bundler to rebundle the gems every time it changes.
If Gemfile.lock has changed on your production machine and Git doesn't let you git pull, run git reset --hard to discard that file change and then git pull again.
The other answers here are correct: Yes, your Ruby app (not your Ruby gem) should include Gemfile.lock in the repo. To expand on why it should do this, read on:
I was under the mistaken notion that each env (development, test, staging, prod...) did its own bundle install to build its own Gemfile.lock. My assumption was based on the fact that Gemfile.lock does not contain any grouping data, such as :test, :prod, etc. This assumption was wrong, as I found out in a painful local problem.
Upon closer investigation, I was confused why my Jenkins build showed fetching a particular gem (ffaker, FWIW) successfully, but when the app loaded and required ffaker, it said file not found. WTF?
A little more investigation and experimenting showed what the two files do:
First it uses Gemfile.lock to go fetch all the gems, even those that won't be used in this particular env. Then it uses Gemfile to choose which of those fetched gems to actually use in this env.
So, even though it fetched the gem in the first step based on Gemfile.lock, it did NOT include it in my :test environment, based on the groups in Gemfile.
The fix (in my case) was to move gem 'ffaker' from the :development group to the main group, so all env's could use it. (Or, add it only to :development, :test, as appropriate)

Resources