If you publish the source for a Ruby gem to github.com, is the Gemfile.lock supposed to be included?
This guy has strong opinions.
http://yehudakatz.com/2010/12/16/clarifying-the-roles-of-the-gemspec-and-gemfile/
Namely:
You should include your Gemfile.lock in version control if you're developing an application
You should not include your Gemfile.lock in version control if you're developing a gem
I'm not sure I'm convinced yet. I think that keeping the Gemfile.lock in my version control is good. But I think it is too much for that file to be included for others' use; the Gemfile is enough for others to install. I think Gemfile.lock is for development, not deployment, contrary to the previously expressed opinion.
No, the Gemfile.lock file is an aid for deployment, not for development.
It is used to recreate an exact replica of your environment on another system, something that is not (usually) required for development.
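For example (a minimal sketch; the gem name and versions are hypothetical):

# Gemfile -- the loose requirement you edit by hand
gem 'rails', '~> 6.1'

# Gemfile.lock -- generated by `bundle install` and checked into version
# control. It records the exact resolution, e.g. rails (6.1.7), so that
# running `bundle install` on the other system installs precisely 6.1.7,
# not whatever 6.1.x happens to be newest that day.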
Related
I am working on a server migration and upgrade, and I don't code in Ruby at all.
Is there an easy way for me to scan / review the Gemfile / installed dependencies to check which ones are outdated or unpatched?
The code references at least a hundred dependencies, and I am not sure which are no longer the latest stable version.
You can try bundle with the outdated command, using the --strict parameter to make sure it lists only compatible gems:
bundle outdated --strict
But like I said in my comment above, you usually want to know what you are doing if you plan to upgrade any gem. Any changes to the API or functionality of a gem may break part of, or the entire, codebase. Make sure you have working backups.
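If you only care about a handful of critical gems, bundle outdated also accepts gem names as arguments (the gem name below is just an example):

bundle outdated nokogiri --strict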
Isn't the Gemfile.lock a hack used to perpetuate bad practices in dependency version control?
I.e., shouldn't developers set the dependency version ranges strictly in the Gemfile?
For example if my Gemfile says that I depend on gem A version 1.0.1 or versions [1.0-2.0), why would I need the .lock?
No, Gemfile.lock makes a lot of sense and is crucial to the concept of automatically picking gem versions. As a developer, you do not need to bother about exact version numbers. You can say "give me whatever version of gem X fits all other versions of all other gems" (by just saying gem 'xyz' without any further information). Or you can tell it to stay within the bugfixing line of an older version of a gem (gem 'xyz', '~> 2.3.0') or whatever.
By adding the exact version in Gemfile.lock you then make sure that the versions stay consistent for all developers (and environments). You make the act of upgrading to a newer version of a gem a conscious (and well-documented) choice instead of a random part of your build/deploy process.
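A minimal sketch of the two layers (gem names and versions are hypothetical):

# Gemfile -- loose, human-maintained requirements
gem 'xyz'               # any version that satisfies all the other gems
gem 'abc', '~> 2.3.0'   # >= 2.3.0 and < 2.4.0: stay on the bugfix line

# Gemfile.lock then records the exact resolution (say xyz 1.4.2), and only
# an explicit `bundle update xyz` moves it -- a deliberate, reviewable change.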
why would I need the .lock?
To install exactly the same versions as everyone else on the team, or to install in production the same versions that you use in development.
It might happen that a new version of some gem is released while you were collecting sign-offs for your release. You'd better be sure you install/load exactly the versions that you developed and tested with.
I maintain a gem with dependencies that are stored in a Gemfile, for example:
gem 'foo', '~> 1.5'
gem 'bar', '~> 2.0.5'
Thanks to pessimistic version constraints, Bundler will by default install the latest 1.x version of foo, but it can compromise on a lower version if my gem is used in conjunction with another that requires (for example) foo = 1.6.2.
Question: Is there a simple way to get bundler to install all of the minimum versions of my dependencies (in this case, foo =1.5.0 and bar =2.0.5) so that I can test whether, after I write some new functionality, my gem will still work in combination with other environments that use those lower versions?
Or, is the only way for me to manually reinstall all of the minimum versions and then run my tests?
Because we decided to use RubyGems' Requirement class, there isn't a way to specify the lowest version. I vaguely recall an automated testing tool to help you iterate over the dependency versions you want to test against, but it's extremely hard to automate because there is an exponential number of possible version combinations. I suggest creating a second Gemfile with the oldest versions you want to test against, and using BUNDLE_GEMFILE to run against that Gemfile in an additional CI build.
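A minimal sketch of that second Gemfile (the file name is hypothetical; the versions follow the example above):

# Gemfile.min -- pin the oldest versions you claim to support
source 'https://rubygems.org'
gem 'foo', '1.5.0'
gem 'bar', '2.0.5'

# In the extra CI build (shell):
#   BUNDLE_GEMFILE=Gemfile.min bundle install
#   BUNDLE_GEMFILE=Gemfile.min bundle exec rake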
I saw your question in IRC... from my understanding, there's no way to do it without changing your Gemfile. Sorry. :(
https://github.com/carlhuda/bundler/blob/master/lib/bundler/cli.rb for reference
I have an application with many optional components, all with their own complex dependencies. For example, some deployments might want to use LDAP functionality and will need to load ldap-related gems. But many will not, and those that don't should not have to install ldap-related gems.
How can I use Bundler to load these dependencies depending on which components users (deployers) have enabled?
I don't want to force deployers to manually edit their Gemfiles. It has to be possible to enable/disable components via the application's UI.
Just including every possible dependency in the Gemfile is not ideal. Some of the rarely used components require a lot of complicated native compilation. Another solution might be to have the application edit its own Gemfile. But this is kind of awkward and would likely require a restart every time components are changed.
Is there a way in Bundler to dynamically load gems at runtime? If not, are there alternatives that provide something like Bundler's sandboxing but allow for dynamic loading?
You could provide multiple Gemfiles and use bundle install --gemfile to use the specific gemfile and only install the Gems you need for that deployment.
In your application you could then use Bundler.setup with the appropriate groups of the previously installed Gemfile to load just the appropriate gems.
Sure, that's not a nice and easy way, but it should give you the functionality you want; a rough sketch follows below.
See: Bundler Setup and bundle install in the Bundler documentation.
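A rough sketch of how that could look (the group name, gem, and config check are hypothetical):

# Gemfile -- isolate the optional component in a group
group :ldap do
  gem 'net-ldap'
end

# Install for a deployment that doesn't need LDAP (shell):
#   bundle install --without ldap

# At application boot, only set up the groups the deployer enabled:
require 'bundler'
groups = [:default]
groups << :ldap if AppConfig.ldap_enabled?   # hypothetical config check
Bundler.setup(*groups)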
In this question, I mentioned my assumption that rubyforge gems are more official, authoritative, and stable than github forks. One of the people replying to my question said that my assumption might not be accurate.
What have you observed? Do people use GitHub to release early and release often, only putting stable releases on RubyForge, or do people release less often on RubyForge for other reasons (e.g., RubyForge being more of a hassle)?
Update: This question is a little moot now. Github gems are defunct, and rubyforge gems are going to be moved to rubygems.org.
No difference as far as I can tell.
There is a huge range in quality/stability of gems from both sources. Some are rock solid, others are pre-alpha quality.
It really depends on the gem project itself.
Having said that though, the GitHub model does lend itself to more rapid turnaround on issues. It's much easier to fork a project, fix a bug, and submit it back to be included in the original source. So at least on the popular projects, bugs get fixed more quickly. Maybe that helps projects mature faster, but I don't know.
What I noticed is a decrease in the quality of gems released via GitHub compared with the overall quality of gems at RubyForge.
IMHO there are at least two major explanations for this behavior:
--
Prior to GitHub, 99% of Rubyists were Subversion-dependent. Say what you want about Subversion, but it is definitely easier to use than Git, and everybody was aware of the trunk/tags/branches layout. Then people started to move to Git. Only a very limited slice of Subversion users picked up Git with the level of knowledge it requires, and what I noticed is that people started to forget about tags.
Once upon a time there were tags. In Subversion, people used to cut new releases from specific tags, so you could easily tell which version you had installed and which was the stable branch.
Nowadays I see tons of libraries perpetually under development in the Git master branch. No tags, no stable branches. In general, when libraries were released via RubyForge there was a higher level of attention to the deployment step.
--
GitHub made the publishing step painless: you can publish a new gem simply by pushing the gemspec into your repository.
In my opinion, this simplicity can lead to lower quality. More less-skilled developers started to distribute gems, because it's as easy as generating a new project with Jeweler (or a similar library) and pushing a Git repository. They didn't know much about release management, backward compatibility, version bumps, or release maintenance.
I often came across unfinished libraries packaged as gems just because the developer forgot to remove the .gemspec file. Each commit caused a new gem to be built, with no apparent coherence or consistency.
I'm absolutely in favor of the "release often" practice, but only when it makes sense. Git provides excellent branch support; you don't need to clutter the master branch with tons of unrelated commits, releasing unfinished pieces of code that you call libraries.
--
Last but not least, what I probably hate the most is the unlimited duplication of the same gem. When RubyForge was the unchallenged gem source, it was quite easy to find and install a new project.
IMHO, GitHub introduced an unnecessary layer of complexity. First, you have gems available both via RubyForge as mygem and via GitHub as username-mygem. You often need to spend time figuring out which gem is the most up to date and holds the master development.
Furthermore, some popular gems were no longer updated on RubyForge, and many people continued to use them just because RubyGems doesn't notify you about new versions. It's easy to understand why: if you installed coolgem release 1.2.4 and the same library is now available as superuser-coolgem (release 2.0), RubyGems is not clever enough to tell you a new update is available.
--
Now it's time for a disclaimer.
I'm not saying GitHub users produce crappy gems compared with RubyForge users. I'm a GitHub user, and before that I was a RubyForge user as well. Thousands of gems successfully migrated from RubyForge to GitHub without leaving the end user in "which one?" limbo.
The best example is Rails, but I can mention many other gems, including (but not limited to) Capistrano, Hpricot, and RedCloth. All those libraries are now hosted on GitHub, and if you look at them carefully you can easily recognize the same level of quality as before.
Last but not least, all those libraries continue to be released via RubyForge as the master source, so you don't need to reconfigure your environment to figure out whether to install rails-rails or rails.
Also, the end user is not affected by development decisions. Take Capistrano, for instance. A couple of months ago Jamis announced the end of his commitment to its development. The community took charge of the development and moved the master repository from jamis/capistrano to capistrano/capistrano. What would have happened if the gem had been released as jamis-capistrano? All the users would have had to switch to the new gem and the new repository, with a lot of hassle.
This scenario never arose because RubyForge was, and continues to be, the main Capistrano delivery hub.
--
In conclusion, I have unfortunately noticed an overall decrease in gem quality, mainly caused by more people approaching Ruby and RubyGems without the necessary level of knowledge. The same applies to a large number of Rails plugins.
GitHub cannot be labelled as the culprit. When complex things become easier and more people approach them without the underlying knowledge, it's normal for quality to decrease, because complexity is a natural selection process.
Anyway, there's still an excellent level of quality in the Ruby community. It's amazing to see how committed Ruby developers are to unit testing and other professional programming habits.
Probably less stable and slightly more up to date :)
To finally answer your question: both of the resources you mentioned (RubyForge, GitHub) are now obsolete, since Gemcutter is the new and only place for Ruby gems.
Gemcutter Is The New Official Default RubyGem Host:
http://www.rubyinside.com/gemcutter-is-the-new-official-default-rubygem-host-2659.html