Why should one care about specifying a gem version at all, if Bundler detects the Ruby version and manages to get the latest release that matches it? If I'm personally not fond of a newer version, I would disable incrementing with ~> 1.4.4, and in other cases I'd let Bundler manage things by putting the gem name into the Gemfile without any version argument.
The approach you are suggesting - start with the latest version and pin if problems are experienced - works fine for projects that are 1) actively maintained and 2) tolerant of breakage.
Now imagine you have to deliver this project to a customer who will then run it for a year or longer, and you won't be there to support it. In that case, simply getting the latest release of all dependencies is not necessarily the best strategy. Maybe you would proactively specify major versions of all your important dependencies instead, or potentially even lock to minor versions, which gives more stability at the cost of missing security updates and bug fixes.
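As a rough sketch (the gem names and version numbers below are made up for illustration), that strategy could look like this in a Gemfile:

    source 'https://rubygems.org'

    # Pessimistic constraint on the major line: "~> 4.2" allows 4.3, 4.9, ...
    # but never 5.0, so you keep getting fixes without surprise major upgrades.
    gem 'rails', '~> 4.2'

    # Locking to a minor line is stricter: "~> 1.6.0" only allows 1.6.x patches.
    gem 'nokogiri', '~> 1.6.0'

    # No constraint at all: Bundler resolves whatever fits, and only the
    # Gemfile.lock keeps the result repeatable between installs.
    gem 'rake'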
At the company where I am working, there is a huge codebase currently running on Ruby 1.6.8 (2002), which I am tasked with updating to the latest possible version.
There is already some documentation explaining how to update the code to Ruby 1.9.3, but even that says little about the changes that 2.x introduced.
That documentation is also not ideal for me, since it isn't extensive enough. Is there a website where I can find the changelogs for every Ruby version?
The changelogs for each version are listed here: https://www.ruby-lang.org/en/downloads/releases/
Isn't the Gemfile.lock a hack used to perpetuate bad practices in dependency version control?
I.e. Shouldn't developers set the dependency version ranges strictly in the Gemfile?
For example if my Gemfile says that I depend on gem A version 1.0.1 or versions [1.0-2.0), why would I need the .lock?
No, Gemfile.lock makes a lot of sense and is crucial to the concept of automatically picking gem versions. As a developer, you do not need to worry about exact version numbers. You can say "give me whatever version of gem X fits with all the other gems" (by just saying gem 'xyz' without any further information), or you can tell it to stay within the bugfix line of an older version of a gem (gem 'xyz', '~> 2.3.0'), or whatever.
By adding the exact version in Gemfile.lock you then make sure that the versions stay consistent for all developers (and environments). You make the act of upgrading to a newer version of a gem a conscious (and well-documented) choice instead of a random part of your build/deploy process.
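As an illustration (the names and version numbers here are invented), the Gemfile stays at the level of intent while the generated lock records the exact resolution:

    # Gemfile - loose, intent-level constraints
    gem 'rack', '~> 2.0'   # stay on the 2.x line
    gem 'nokogiri'         # any version that resolves

    # Gemfile.lock (written by "bundle install") then pins the outcome, e.g.:
    #   rack (2.0.7)
    #   nokogiri (1.10.4)
    # Everyone installing against this lock gets exactly these versions
    # until someone deliberately runs "bundle update".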
why would I need the .lock?
To install exactly the same versions as everyone else on the team, or to install in production the same versions that you use in development.
It might happen that a new version of some gem is released while you were collecting sign-offs for your release. You had better be sure you install and load exactly the versions that you developed and tested with.
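A minimal sketch of the runtime side of this, assuming the application uses Bundler (the required gem below is just a placeholder):

    # Restricts the load path to exactly the versions recorded in
    # Gemfile.lock, so the app cannot silently load a newer gem that
    # happens to be installed on the same machine.
    require 'bundler/setup'

    # From here on, requires resolve against the locked versions only.
    require 'nokogiri'   # placeholder dependency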
I'm using RVM to install Ruby on a production server. Listing the known rubies gives me
[ruby-]2.2.0
[ruby-]2.2-head
Is it safe to use 2.2-head in production, or is it better to rely on 2.2.0?
Better to use 2.2.0 in production, as it is stable.
Every month or two a stable release of RVM is created; it includes a minor version increase. Between releases, only bug fixes and Ruby version updates are added to it, with a teeny version bump. Normal development and major changes continue on the master branch; to install that, use the head version. It's important to try the head version before reporting errors, as those could already be fixed.
In this question, I mentioned my assumption that rubyforge gems are more official, authoritative, and stable than github forks. One of the people replying to my question said that my assumption might not be accurate.
What have you observed? Do people use github to release early and release often, only putting stable releases on rubyforge, or do people release less often on rubyforge for other reasons (eg rubyforge being more of a hassle)?
Update: This question is a little moot now. Github gems are defunct, and rubyforge gems are going to be moved to rubygems.org.
No difference as far as I can tell.
There is a huge range in quality/stability of gems from both sources. Some are rock solid, others are pre-alpha quality.
It really depends on the gem project itself.
Having said that, the GitHub model does lend itself to a more rapid turnaround on issues. It's much easier to fork a project, fix a bug, and submit it back to be included in the original source, so at least on the popular projects, bugs get fixed more quickly. Maybe that helps projects mature faster, but I don't know.
What I noticed is a decrease in the quality of gems released via GitHub compared with the overall quality of gems on RubyForge.
IMHO there are at least two major explanations for this behavior:
--
Prior to GitHub, 99% of Rubyists were Subversion-dependent. Say what you want about Subversion, but it is definitely easier to use than Git, and everybody was aware of the trunk/tags/branches layout. Then people started to move to Git. Only a very limited slice of Subversion users picked up Git with the level of knowledge it requires, and what I noticed is that people started to forget about tags.
Once upon a time there were tags. In Subversion, people used to cut new releases from specific tags, so you could easily tell which version you had installed and which branch was stable.
Nowadays I see tons of libraries permanently under development in the Git master branch. No tags, no stable branches. In general, when libraries were released via RubyForge, there was a higher level of attention to the deployment step.
--
GitHub makes the publishing step no longer a hassle: you can easily publish a new gem simply by pushing the gemspec into your repository.
In my opinion, this simplicity can lead to lower quality. More less-skilled developers started to distribute gems, because it's as easy as generating a new project with Jeweler (or a similar library) and pushing a Git repository. They often didn't know much about release management, backward compatibility, version bumps, or release maintenance.
I often came across unfinished libraries packaged as gems just because the developer forgot to remove the .gemspec file. Each commit caused a new gem to be built, with no apparent coherence or consistency.
I'm absolutely in favor of the "release often" practice, but only when it makes sense. Git provides excellent branch support; you don't need to clutter the master branch with tons of unrelated commits, releasing unfinished pieces of code that you call libraries.
--
Last but not least, what I probably hate the most is the unlimited duplication of the same gem. When RubyForge was the unchallenged gem source, it was quite easy to find and install a new project.
IMHO, GitHub introduced an unnecessary layer of complexity. First, you have gems available both via RubyForge as mygem and via GitHub as username-mygem. You often need to spend time figuring out which gem is the most up to date and holds the master development.
Furthermore, some popular gems were no longer updated on RubyForge, and many people continue to use them just because RubyGems doesn't notify you about new versions. Easy to understand: if you installed coolgem release 1.2.4 and the same library is now available as superuser-coolgem (release 2.0), RubyGems is not clever enough to tell you a new update is available.
--
Now it's time for a disclaimer.
I'm not saying GitHub users produce crappy gems compared with RubyForge. I'm a GitHub user, and before that I was a RubyForge user as well. Thousands of gems successfully migrated from RubyForge to GitHub without leaving the end user in the "which one" limbo.
The best example is Rails, but I could mention many other gems, including (but not limited to) Capistrano, Hpricot, RedCloth... All those libraries are now hosted on GitHub, and if you look at them carefully you can easily recognize the same level of quality as before.
Last but not least, all those libraries continue to be released via RubyForge as the master source, so you don't need to reconfigure your environment or work out whether to install rails-rails or rails.
Also, the end user is not affected by development decisions. Take Capistrano, for instance. A couple of months ago Jamis announced the end of his commitment to its development. The community took charge of development and moved the master repository from jamis/capistrano to capistrano/capistrano. What would have happened if the gem had been released as jamis-capistrano? All the users would have had to switch to the new gem and the new repository, with a lot of hassle.
This scenario never arose, because RubyForge was and continues to be the main Capistrano delivery hub.
--
In conclusion, I have unfortunately noticed an overall decrease in gem quality, mainly caused by more people approaching Ruby and RubyGems without the necessary level of knowledge. The same applies to a large number of Rails plugins.
GitHub cannot be labelled as the culprit. When complex things become easier and more people approach them without the underlying knowledge, it's normal for quality to decrease, because complexity is a natural selection process.
Anyway, there's still an excellent level of quality in the Ruby community. It's amazing to see how Ruby developers are committed to unit testing and other professional programming habits.
Probably less stable and slightly more up to date :)
-r
To answer your question finally: both of the resources you mentioned (RubyForge, GitHub) are now obsolete, since Gemcutter is the new and only place for RubyGems.
Gemcutter Is The New Official Default RubyGem Host:
http://www.rubyinside.com/gemcutter-is-the-new-official-default-rubygem-host-2659.html