Multiple homebrew installations - not recommended but best solution? - osx-mountain-lion

I'm on OS X 10.8.3 and use Homebrew. I was wondering if it would be possible to have multiple instances installed. I have the default /usr/local/ install but would like to have a completely separate version where I can test packages and different installs without screwing up my basic install in /usr/local/, which works great.
I read the wiki and all that about installing into ~/(.)homebrew and then symlinking "brew" to /usr/local/ but I imagine that would conflict with my current install in that directory.
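For reference, a minimal sketch of what that alternate-prefix approach looks like, with the symlink step skipped so /usr/local is never touched (the repository URL and paths are only illustrative):

    # rough sketch: clone Homebrew into its own prefix and call that brew directly
    git clone https://github.com/Homebrew/brew.git ~/homebrew
    export PATH="$HOME/homebrew/bin:$PATH"   # only in shells where the test install should win
    ~/homebrew/bin/brew install wget         # kegs land under ~/homebrew/Cellar, not /usr/local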
I don't mind, and realize I will have to change my projects to work with the different Homebrew path, but if anybody has done this and has some suggestions I would greatly appreciate it.
There is still no great way to clean up dependencies for formulae, IMO, although progress has definitely been made. I know it's not recommended, but I would like to have a "test" Homebrew install where I can install all kinds of temporary packages needed for just one or two quick tasks and then remove them. I appreciate any and all advice, so if you have anything that might be helpful, please do let me know.
Cheers and thank you SO community!

I'd recommend using a virtual machine as a testbed, instead of an alternate homebrew installation path. A VM setup has several advantages:
Homebrew can still live in its preferred, supported path.
You can see exactly what your formula changes will do to a "normal" installation.
You can still use broken, non-relocatable formulae safely.
Files are guaranteed not to leak out of your alternate path and clobber the main Homebrew install or other stuff, like they can if you have errors in your formulae.
You can use VM snapshots to provide a clean baseline, and revert to them to discard any changes or mistakes.
You can make a bunch of alternate setups instead of just one or two, and they're all independent.
You can have a minimal install to isolate variables and reduce interactions.
You can even run them with other OS versions.
You can mess around with scary low-level stuff like kernel extensions without fear.
You can hand off an entire machine to a collaborator so they can see exactly what context you're testing in.
It's good practice for learning how to automate and modularize machine setup and configuration and will make your code & deployment process better.
Etc etc
I use VMs for all my Homebrew development work and it's great; I'll never do deployment-oriented work without VMs again. Parallels is good, totally worth the money IMHO, or you can pick up VirtualBox for free.
To get your development code between instances, you can use git or other version control to shuffle them around, or mount virtual shared drives, or both.
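For example, a rough sketch of the git route, assuming an ssh-reachable VM (user, hostname, and paths are made up):

    # create a bare repo inside the VM once, then push work-in-progress branches to it
    ssh dev@test-vm.local 'git init --bare ~/homebrew-work.git'
    git remote add testvm dev@test-vm.local:homebrew-work.git
    git push testvm my-formula-branch   # then check the branch out inside the VM and test there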

Related

What are acceptable changes to make to a user's computer during make install?

I have been experimenting with make and makefiles. I have a shared library that I'm now installing into ${PREFIX}/lib instead of hard-coded /usr/lib, but now it can only be found by other projects if ldconfig has been run with the proper directory since the workstation was started. That is not acceptable behavior.
The solution I found to this particular problem is to add (a resolved) ${PREFIX}/lib to /etc/ld.so.conf, either directly or by adding a new .conf file in /etc/ld.so.conf.d. This is easy enough for me to do on my own system, and I know how to write makefile rules so an end-user won't have to solve this problem, but I am uneasy with the makefile solution. Since I can't find any online suggestions to do this, I'm concerned that it may be taking too much liberty with the end-user's computer.
That introduces my general question: what am I allowed to change on an end-user's computer during sudo make install? Being root during install gives me almost unlimited abilities to make software changes, but obviously some things should be out-of-bounds. Where can I find guidelines about what well-behaved installers can or should not do?
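For what it's worth, the ld.so.conf.d approach described above boils down to install steps like these (the library name and prefix are hypothetical, and this is exactly the kind of system-wide change being asked about):

    # minimal sketch of an install step that registers ${PREFIX}/lib with the loader (run as root)
    PREFIX=/usr/local
    install -m 755 libexample.so "$PREFIX/lib/"
    echo "$PREFIX/lib" > /etc/ld.so.conf.d/example.conf   # make the directory known system-wide
    ldconfig                                              # rebuild the loader cache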

Does the Debian packaging system and RVM play well together?

Update
Forgive my ignorance: I think I asked him about rvm when I really meant RubyGems. And I think he's thinking of RubyGems because there does seem to be some controversy over it, at least there was in the past: http://stakeventures.com/articles/2008/12/04/rubygem-is-from-mars-aptget-is-from-venus. So please s/rvm/RubyGems/g for the question below.
End update
My server admin is a little wary of using rvm on Debian. Here's what he says:
Unfortunately, the whole rvm system doesn't interact properly with a packaging system like Debian, and it's a nightmare to deploy when you do the deployment at different times. [You can easily end up with different versions of modules on different systems, etc., and you have to deal with rvm stomping all over the Debian packaging system.]
I think what he's saying here is that we are going to be running the app across multiple servers and if we upgrade one server, it's going to cause serious problems for us.
Is there a way to address his concerns?
RVM in no way, shape, or form 'stomps' on the Debian package system. RVM installs either into $HOME/.rvm for a single user, or into /usr/local/rvm (using the 'rvm' group, to which members must be added), which is the normal place for third-party, non-mission-critical applications, headers, and libraries.
RVM came into existence because of package managers. They were forever screwing up the dependencies between rubies and gems, were behind the times in getting security updates pushed out immediately, made it painful to install and manage multiple rubies on the same box without playing symlink games, and made deployments to multiple machines with disparate deployment requirements a nightmare.
RVM solved all of that in a fairly seamless manner, with a specific eye on ensuring not only the security of the install and of the users that use it, but also that the package manager was in no way involved. This ensures that the package management tools and their databases of installed packages don't suddenly get whacked.
I got involved as a user, and then as a developer on the RVM Project because it solved the dilemma so well and so elegantly.
RubyGems on its own does not allow good control of gem versions, but together with Bundler it allows much better control of versions than apt-get does.
You need to read up on Bundler: it allows you to specify loose dependencies in the Gemfile, while the strict, resolved ones are recorded in Gemfile.lock.
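A tiny, hedged illustration (gem names and versions are only examples):

    # Gemfile holds the loose constraints, e.g.:
    #   source 'https://rubygems.org'
    #   gem 'rails', '~> 3.2'   # any 3.2.x release
    #   gem 'pg', '0.17.1'      # strict pin
    bundle install        # resolves them and records the exact versions in Gemfile.lock
    bundle exec rake -T   # commands then run against the locked versions only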
As for his concern that Ruby is a moving target: yes, Ruby is updated every few months, and all users should always update to the latest patch level.
Ruby is a lot different from most packages (except maybe OpenSSL): the Ruby team updates releases with patches, which allows security efforts to be focused in one place. This goes against the conservative approach of package managers, where a version is picked and only security patches are applied to it; as stable as that sounds, it spreads the security effort across multiple teams and slows down the whole open source community. Operating system maintainers do not want to accept the fact that someone does part of the work for them and that they could trust that someone with it.
As for the repeatability of the process, your admin is showing a lot of ignorance: RVM allows you to lock versions, which goes against the Ruby approach explained above. So the simplest way to lock everything is to lock RVM to one version:
rvm get 1.15.14
But if locking RVM is required, the preferred way is to lock it to a minor version, where compatibility is kept but updates are still provided:
rvm get latest-1.15
RVM does not keep these versions going for very long, but any time there are concerns about the current stable release's stability, we keep the previous version updated so you can decide which one to use.
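You can also pin things per project; one hedged example, with a made-up ruby version and gemset name:

    # RVM reads .rvmrc on `cd`, so each project can insist on its own ruby and gemset
    rvm install ruby-1.9.3-p448
    echo 'rvm use ruby-1.9.3-p448@myapp --create' > .rvmrc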
@deryldoucette also explained a lot in his answer; I tried not to re-explain things.

Is it possible to do version control on my computer system files themselves?

Here's my problem. I have OS X Lion and I do web development, but I have no real comprehension of what I'm doing when I'm using brew, pear, and the terminal in general. I am working on leveling up, but I still have to work in the meantime. That's why I very often mess up my system files (I just tried to install PHPUnit, it didn't work, so I deleted other pear directories, it still didn't work, and now I've ended up with a mess).
It would feel better and relieve a lot of stress to know I can revert my changes when I mess up. So my question is: can I set up version control like git on all my computer's files themselves, so that before any big change I can save the state of my computer? Or is there any other way to get that same result?
I think creating different users for my Mac is not enough, because I want to build up my system and add things to it, so it doesn't really help. And I'm not sure, but isn't Time Machine made just to recover some files, not to revert my system to some previous state? Or can it do that?
Help would be greatly greatly appreciated, thanks!
Seems to me you need to use a VM.
Take snapshots and work without worries. If you mess up, you just revert to your last known-good snapshot.
You can do this - you can version control anything - but I wouldn't recommend it (at least not with Git/SVN/etc.; perhaps there's some software designed for this purpose that I'm unaware of).
You'd be tracking version changes for tons of files: temporary files, settings files, binaries, etc. Files would be changing all the time and you'd need to stay on top of commits and so forth. Instead I'd recommend just copying folders (as backups), making changes, verifying your changes work, then deleting the backups.
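Something as simple as this is usually enough (the paths are just examples):

    cp -Rp /usr/local/pear /usr/local/pear.bak   # snapshot the directory before touching it
    # ...make your changes, verify everything still works...
    rm -rf /usr/local/pear.bak                   # it worked: throw the backup away
    # rm -rf /usr/local/pear && mv /usr/local/pear.bak /usr/local/pear   # it didn't: roll back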
It's very easy to overuse version control.
Having an external drive with Time Machine, and allowing it to sync often, will let you revert certain parts (or all) of the file system to a certain date.
Since you're on OS X, I'd suggest Time Machine - it is better suited to what you want to do than source control versioning. TM is pretty decent at backups, but there are other solutions if this one doesn't fit your needs.
EDIT: as commented by @dstarh, brew isolates everything it installs and uses symbolic links when needed. So use it whenever you can; it leaves your system clean. There are instructions on how to uninstall a piece of software, and in the worst of cases you could look at the source of your software's formula and find out what to delete.
Long story short: yes, you could, but there are far easier and less painful ways to do this.
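If Time Machine is the route you take, OS X also ships a command-line front end for it, so a backup before a risky change can be scripted (the disk and path names below are just examples):

    sudo tmutil setdestination /Volumes/BackupDrive   # point Time Machine at the external drive
    tmutil startbackup                                # kick off a backup before the risky change
    tmutil listbackups                                # list the snapshots you can restore from
    # sudo tmutil restore <snapshot path>/usr/local/pear /usr/local/pear   # pull one directory back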

Winlibre - An Aptitude-Synaptic for Windows. Would that be useful?

Last year, in the 2009 GSoC, I participated with an organization called Winlibre. The basic idea is a project similar to Aptitude (or apt-get), with a GUI like Synaptic, but for Windows and (initially) holding only open source software. The project was just OK; we finished what we considered a good starting point, but unfortunately, due to the developers' other occupations, the project has been idle almost since GSoC finished. Now I have some energy, time, and interest to try to continue this development. The project was divided into 3 parts: a repository server (which I worked on, and which was going to store and serve packages and files), a package creator for developers, and the main app, which is the apt-get equivalent and its GUI.
I have been thinking about the project, and the first question that came to my mind is: is this project actually useful for developers and Windows users? Keep in mind that the idea is to solve dependency problems and install packages "cleanly". I'm not a Windows developer, just a casual user, so I really don't have a lot of experience with how things are handled there, but as far as I have seen, all installers handle those dependencies themselves. Would Windows developers be willing to switch from installers to a package-based way of handling installations of open source software? Or is it OK to just create packages for already-existing installers?
The packages concept is basically the same as .deb or .rpm files.
I still have some other questions, but basically I would like to make sure that it's useful in some way to users and Windows developers, and whether developers would find this project interesting. If you have any questions, feedback, suggestions, or criticisms, please don't hesitate to post them.
Thanks!!
Be sure to research previous efforts on this. Google turns up several similar/relevant efforts:
http://en.wikipedia.org/wiki/Package_management_system#Microsoft_Windows
http://windows-get.sourceforge.net
http://pina.plasmite.com
IIRC there was an rpm port for Windows at some point.
Also, I think there was some guy (who used to work at MS) in the news recently who is basically starting up a very similar project. I can't find a link to it now.
But anyway, yeah, it would be awesome if there was such a standard tool and repository.
I can only speak for myself, but obviously I could definitely make use of such a tool as I found your post through googling! ;)
My two use cases for this tool would be the following:
1. I generally avoid re-installing my system for as long as possible (in fact I manage to do so only when switching to a reasonable (not each and every) new version of Windows every few years, or when setting up new computers). But I'd still like my software to be up to date. I neither want to go to all the web pages and check manually whether there are compatibility issues with, say, the new version of Doxygen, Graphviz, and the latest version of MiKTeX, nor do I want to navigate to the download pages and run the setups all by myself. I just want to schedule ONE SINGLE (!) tool, which checks whether there are new updates and updates those applications that are not in conflict with any other application's version.
2. If it unavoidably happens that I have to re-install my system, I don't want to have to fetch the new setups either (and check compatibility). I don't even want to wait for one setup to finish before starting the next one; I just want to tick off the tools I need, or even better, simply load my "WinApt XML" batch installation file, which fetches the installers and handles the setups sequentially all by itself.
I don't know enough about the architecture of .deb or .rpm, but IMHO the most reasonable approach would be to maintain a DB with only the names, versions, dependencies, and download locations of the different versions. I mean, most of the tools available for Windows provide .msi packages anyway, which (I guess) is the application itself plus some custom installation properties (I'm really not sure how scripting is handled, but I know that creating an MSI in Visual Studio gives very limited abilities to create custom installation steps, and I can only imagine this is due to limitations of the MSI format).
I guess a GUI will be mandatory for Windows users ;) but I personally would prefer the additional ability to handle the setups with the console.
Well, I like the idea and would love to hear about that (or such a) tool in the future.
Cheers
Check out NSIS. It's an open source installer builder for Windows (it produces .exe installers rather than MSIs). You might be able to use it as part of your package creation software.
http://nsis.sourceforge.net/Main_Page
For the ALT.NET tool/lib stack there have been some efforts in this direction: Horn Get.
However, its usability in a real-world project has been the subject of this SO question.

What's the best way to maintain two development OS X machines?

I am moving off a laptop as my primary development machine and onto an iMac. I plan to maintain an older laptop for development use on business trips and other working travels, as well as occasional coffee shop visits.
Would love to hear any pro tips out there for maintaining two dev environments while minimizing the necessary fiddling. Details would be great! I generally work with Ruby, Rails, MySQL, git, and numerous Ruby gems.
Thanks in advance for all ideas you can share!
I've found that using something like MacPorts can ease the pain of maintaining n different systems - and not only maintaining them, but keeping the environments fairly close (same versions of the same packages).
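For example, something along these lines keeps two installs in step (a rough sketch; the file name is made up):

    sudo port selfupdate                                   # same ports tree revision on both machines
    port echo requested | cut -d ' ' -f 1 > myports.txt    # record what you explicitly installed here
    # then on the other machine:
    # xargs -n1 sudo port install < myports.txt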
And if you can't guarantee access to your VCS, then you can do something like host a "local" VCS on your iMac that you can then commit to from your laptop.
A distributed VCS like Git is perfect for this, even if you use a different VCS for your main code on the Mac.
That way you can use the distributed features to commit code to your laptop and sync everything up easily both before you leave, and when you get home.
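A rough sketch of that setup (user, hostname, and repo path are made up):

    ssh you@imac.local 'git init --bare ~/repos/myapp.git'   # the "local" VCS living on the iMac
    git remote add imac you@imac.local:repos/myapp.git       # on the laptop
    git push imac master                                     # sync up before you leave
    git pull imac master                                     # and again when you're back on the LAN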
It sounds like you want to make sure the code in the 2 environments is in synch. Create a sandbox from your version control system and keep everything in the VCS. If you can't guarantee access to your VCS server, keep the latest on a USB stick and carry it with you.
