The Ideal
Using rvm, it would be awesome to be able to have multiple Rubies on one webserver, and through some sort of server configuration, be able to assign Ruby versions to different Rails/Sinatra/etc apps on a per-project basis.
I am aware, from rvm's documentation, that Passenger only works with one Ruby at a time. :(
The Compromise
Failing that, it would be nice to at least be able to concoct a way to be able to assign projects to a Ruby 1.8 or a Ruby 1.9 interpreter. I've read that using Nginx as a reverse proxy allows running Apache and Nginx on the same box. Would it then be possible to have Apache+Passenger using one Ruby, and Nginx+Passenger using a different one? Maybe use something other than Passenger with Nginx?
Am I Barking Up the Wrong Tree?
Am I missing a good solution to this issue? Am I walking into a nightmare configuration situation? Is what I want even viable, or is it necessary to run another box to run a separate Ruby version?
Check this post: Phusion Passenger & running multiple Ruby versions from the official Phusion blog. It solves the problem by using Passenger Standalone behind a reverse proxy.
You could use Thin or Unicorn with Nginx. You could then write a god script or some other startup script to set the Ruby version per project (or simply start Thin/Unicorn manually).
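As a rough sketch of the god approach, each watched app can be started through an rvm wrapper for the Ruby it needs; the app name, paths, Ruby version, and port below are all hypothetical:

```ruby
# god config sketch -- app name, paths, Ruby version and port are made up
God.watch do |w|
  w.name     = "legacy_app_thin"
  w.dir      = "/var/www/legacy_app"
  # an rvm wrapper pins this process to a specific interpreter (1.8.7 here)
  w.start    = "/usr/local/rvm/wrappers/ruby-1.8.7-p352/thin start -d -p 3000 -P tmp/pids/thin.pid"
  w.stop     = "/usr/local/rvm/wrappers/ruby-1.8.7-p352/thin stop -P tmp/pids/thin.pid"
  w.pid_file = "/var/www/legacy_app/tmp/pids/thin.pid"
  w.keepalive
end
```

A second watch pointing at a 1.9 wrapper on another port would cover the other apps, with Nginx proxying requests to each port.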
Use multiple small virtual machines?
What about keeping all your projects (or at least those on one server) on the same ruby version? Be it using an old version or upgrading old projects?
I think this way you have the fewest problems.
The compromise is possible. You can do this a variety of ways, but probably the easiest is a reverse proxy combined with as many chrooted ruby+webserver+rails installs as you find convenient.
If you're using Mac OS X, you may want to check out POW!, which can handle multiple rubies. There's a linux alternative called hoof, which isn't as developed but is getting close.
Related
I have a VPS and usually I write Ruby scripts for daily tasks. Sometimes I want to use the same scripts / methods on my home machine too. How should I share and reuse the code I've already written between the two machines? Should I write a gem and install it on both machines? Or is there a way to use the "load" method to load Ruby modules from an HTTP or maybe NFS share? HTTP would be preferable, like in JavaScript / HTML, I think; however the "load" method doesn't seem to work with an HTTP URL.
I think using github or some other source control software would be the most appropriate idea.
Sharing code via HTTP or NFS seems very weird to me. I can think of problems with loading from external sources with regard to reliability and security.
I would prefer a Gem or at least a git repo that I can check out when I need the code on a different machine.
A version control service like GitHub or Bitbucket is perfect for this. You'll basically have a central repository (on their server) where you'll store your code. Suppose computer A makes a change and "pushes" it to the server; now you can easily let computers B, C, etc. know that a change was made, and they can update the code that's there.
Ruby comes with a powerful package management system called RubyGems that was created for exactly this purpose.
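For example, a gem can be little more than a gemspec wrapping your lib/ scripts. A minimal sketch (the gem name, summary, and file layout here are made up):

```ruby
# my_tools.gemspec -- hypothetical name and layout, just to illustrate the shape
Gem::Specification.new do |s|
  s.name    = "my_tools"
  s.version = "0.1.0"
  s.summary = "Personal scripts for daily tasks"
  s.authors = ["Your Name"]
  s.files   = Dir["lib/**/*.rb"]
end
```

Build it with `gem build my_tools.gemspec`, install the resulting .gem on both machines, and `require "my_tools"` from any script.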
My tool of choice for sharing scripts between various machines is Dropbox - a cheap and quick solution. For me it works perfectly.
I have been trying to use delayed_job for about an hour on Windows with no success. Before going off and trying the next candidates, I decided it would be wiser to ask about other experiences of background job processing under Windows. Has anyone used something successfully with Rails 3?
EDIT: to win the bounty, please list, if any, gems for starting background jobs that work under Windows.
There's the Backgroundjob gem, but it looks pretty old (the latest version was released in 2008). It should support Windows; here's the quote from the README:
platform independent (including winblows [sic])
There are a few forks though, maybe you will find one that you can use without much pain.
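If the fork you pick keeps the original bj API, submitting a job looks roughly like this (the job script paths are made up; verify against the fork's own README):

```ruby
# sketch based on the original bj README -- check the docs of the fork you actually use
Bj.submit "ruby ./jobs/send_newsletter.rb"
Bj.submit "./script/runner ./jobs/cleanup.rb"
```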
I'm probably dreaming here, but am wondering if there's any possibility of completely embedding a minimal CouchDB engine within a Windows application, such that the app can be run without requiring installation (of CouchDB/Erlang) on the user's computer.
I already provide this slimmed-down / bundled ability - check https://github.com/dch/couchdb/downloads, and specifically the lean bundle (16 MiB of Erlang plus all the CouchDB love) at https://github.com/downloads/dch/couchdb/couchdb-1.1.0+COUCHDB-1152_otp_R14B03_lean.7z
There are some brief notes on bundling and embedding CouchDB on Windows at wiki.apache.org/couchdb/Quirks_on_Windows, including how to hide the Erlang window at startup (erl.exe -detached).
Ask on the CouchDB user mailing list if you want more info or help while you have a crack at this.
While not a code solution, you could use one of the bundling applications that can package your program and its supporting files into a single executable. One example would be BoxedApp.
Why bother? It is so easy to install Erlang on Windows. Just bundle up the whole thing, including the erl.exe binary, and have your installer unzip it into a folder. The only thing you would need to change would be the batch files, or better yet, discard them and write your own batch file to start up CouchDB. Also, it is a good idea to use a different port than either the normal Erlang port or the usual CouchDB port, and maybe even get Erlang to use localhost as its "shortname".
The CouchDB wiki does provide at least a few tips for Integrating CouchDB into your Windows Applications. YMMV; from what I can tell it's more or less just tips on creating a relocatable build. You'll likely want to generate a solid random admin user/password into the local.ini file during the install process and set up proper permissions on all created databases (to protect against any potential cross-site scripting vulnerabilities), in addition to ensuring the socket binding only happens on the default localhost interface.
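A Ruby sketch of that install-time step (the install path is hypothetical; CouchDB hashes the plaintext [admins] password on its next start):

```ruby
# append a randomly generated admin and localhost-only binding to the bundled local.ini
require "securerandom"

local_ini = "C:/Program Files/MyApp/couchdb/etc/couchdb/local.ini"  # hypothetical path
password  = SecureRandom.hex(16)

File.open(local_ini, "a") do |f|
  f.puts "[admins]"
  f.puts "admin = #{password}"          # hashed by CouchDB on first startup
  f.puts "[httpd]"
  f.puts "bind_address = 127.0.0.1"     # listen on localhost only
end

# the app will need the credentials later, e.g. written to its own config file
File.open("C:/Program Files/MyApp/couch_credentials", "w") { |f| f.write "admin:#{password}" }
```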
RVM is great for developing on your local machine. But is it safe on a production machine?
I built RVM for production and added the developer 'niceties' later on.
If you would like more information read the documentation on the website and come talk to me in #rvm on irc.freenode.net sometime during the day EDT most days.
Since RVM is just a fancy way of downloading, isolating and switching between existing Ruby implementations, I'd say that it's as production ready as whatever ruby implementation you're currently running it with.
Essentially, all RVM does is point your path at a specific Ruby implementation. This is exactly what happens when you use your *nix distribution's Ruby implementation. The only real difference is that your path will be re-written so that when you run ruby -v it will run a ruby from your current user's .rvm directory instead of a global system directory like /usr/local/bin.
I'd go even further and say that using RVM is a better solution than using what generally gets installed in a *nix distro because it makes it easy to sandbox the specific ruby implementation on a per-user basis. RVM also makes it possible to attempt switching rubies (i.e. from 1.8.7 to 1.9.2) on your production app while keeping a solid rollback strategy in place if something doesn't work quite right. It also makes it easier to keep old applications running on one version of Ruby, while switching new apps to more current versions.
I disagree, especially if you're using any kind of automated production process (puppet, chef, fog, etc) and you have more than one or two machines.
We've had issues where version X of RVM worked in a completely different way to version Y of RVM (different default Rubygems versions, different default gemset configs, complete changeup of how system wide install works), breaking our automated provisioning process.
Not an issue if you're developing and on hand to tune things, a killer if you have an unattended scripted / puppet install. We worked around these issues by locking to a particular RVM version, but I remember having a conversation with Wayne where he discouraged this. If we kept using RVM in prod, we were going to actually package it into a series of .debs (one for the install, one for each Ruby).
The way that .rvmrc prompts by default and can only be overridden in the homedir ~/.rvmrc (and not the system-wide one) was also unhelpful.
I actually like the way that RVM will change up and do things this way in development - nothing sucks more than being held back by backward compatibility. This approach, however, cost us some time (and pulled hairs) in production/staging/uat/test.
RVM is apparently a reasonable production tool
You know, I once made a similar rvm is a development tool comment and was informed that rvm was originally a production tool.
So, RVM will make your production environment more complex, which is bad, but it makes it more isolated and compartmentalized, what the language people would call modular, and that's good.
In the end, as long as you test your deployments, I don't see how a static configuration of any kind could be, all by itself, "unsafe".
It all depends on how you are installing RVM: single-user or multi-user. Installing RVM system-wide can cause a lot of mess while switching between different Rubies. Better to opt for single-user; beyond that, RVM does a good job at what it's meant to do.
I guess there's two parts to this question:
Is RVM intended to be for production machines, as opposed to development machines?
Is RVM reliable enough software to be used on production machines?
For (1), Wayne E. Seguin has stated that it's intended to be used on production machines. There's no point in disputing his intent.
For (2), I'm not so sure. Is it appropriate to use software that has a new version number every couple of days on a production machine? Also, RVM once deleted my entire ~/ruby directory. To Wayne's credit, when I told him about it, he fixed it that night, but that doesn't exactly say "production ready" to me.
Edit: I've just read about bumblebee's deletion of /usr, and I'll just say - it could have been worse! LOL.
I've been using RVM on a production webserver for over a year now with zero problems. I've kept it pretty up-to-date, running rvm get head frequently. Zero issues, ever. :)
Yes, I've used rvm on production machines and also set up puppet modules to install rvm as the default system ruby along with gemsets, etc.
If you run multiple apps on a single server, rvm can help you keep all your apps gemsets (and ruby versions) totally separate. However, if you are running only a single app on a server, there may not be as much benefit to having rvm installed.
I've pretty much used RVM on all my production servers running Rails apps! RVM has not let me down.
What are the advantages and disadvantages of using the built-in Apache for local web development on Mac OS X, specifically 10.6 Snow Leopard?
Instead of using the built-in Apache, I know that options such as MAMP and XAMPP exist. However, for some reason I just haven't wrapped my head around the benefits or potential pitfalls with using the built-in Apache versus using a MAMP/XAMPP-based (or other) solution.
Is the advantage of a MAMP/XAMPP-based solution simply ease of configuration?
When not using the built-in Apache are there other benefits besides ease of configuration? For instance, is there a benefit similar to using virtualenv to avoid tainting a pristine Python install?
If you're only developing static webpages and don't need PHP or MySQL, then why not use the built-in Apache with something like virtualhost-sh or VirtualHostX to ease configuration?
Configuration and Usage Considerations
I am interested in using virtual hosts in order to simultaneously develop multiple websites
I use git for version control and have a tendency to store source files in ~/development instead of ~/Sites (this probably isn't material, but thought I'd mention it)
Related Research
The answers to the SuperUser Question What is the best Apache PHP Setup for a Mac Developer talk about different MAMP, XAMPP, and roll your own solutions
Advantages:
It's already there, you don't have to install anything
If all you are serving is plain .html files, then it's fine.
Disadvantages:
You can't update it
(Well you shouldn't. You can, it just feels hacky modifying stock system components).
If you wanted to enable PHP/MySQL etc later on you will be changing things in paths on the system that may break between OS updates.
If this is your primary OS, you are now running extra daemons (PHP/MySQL/Apache) in the background that eat up CPU cycles.
Overall though I wouldn't do it. MAMP's daemons are easy to start/stop and your changes are confined to MAMP. If you mess something up or need to quickly get different sites running with different settings, it's kinda easier to blast things away in MAMP and start again (not that MAMP is without its hassles).
If you don't want to use MAMP, I'd suggest getting a dedicated Linux box (or using a Linux virtual machine) to do this on, having been down the OS X Apache path before. It's not pretty. OS X's built-in stuff might seem easier at first, but it's inflexible, and eventually as your requirements grow you'll wish you hadn't done it.
Update:
I would recommend going with XAMPP over MAMP. It has better performance and is updated more often. Plus XAMPP is cross-platform and open source :)
I've used the stock Apache 1.x in previous versions of OS X for both local development and production web sites and have never had a problem with system updates breaking anything. I've never done anything extremely fancy, but have had plenty of vhosts, regular and reverse proxies, PHP, Python and Perl CGIs, custom cgi-bin locations, custom logs, etc, without issues. It has always worked exactly as I expect Apache to work.
This has continued to be the case with Apache 2 under 10.6. So for local development and low-key production stuff, I'd trust it.
I've had the same experience with the stock Apache installs on OS X Server, with the exception that using the provided GUI tools to edit the httpd.conf files has always been a total disaster. They simply never worked for me, overwrote previous changes, or outright crashed.