How can I update Perl on Windows without losing modules?

At work I'm using Perl 5.8.0 on Windows.
When I first put Perl on, I went to CPAN, downloaded all the sources, made a few changes (in the .MAK file(?) to support threads, or things like that), and did nmake / nmake test / nmake install. Then, bit by bit, I've downloaded individual modules from CPAN and done the nmake dance.
So, I'd like to upgrade to a more recent version, but the new one must not break any existing scripts. Notably, all the modules I've installed and that my scripts use must also be available in the new version.
What's the most reliable (and easiest) way to update my current version, ensuring that everything I've done with the nmake dance will still be there after updating?

As others noted, start by installing the new perl in a separate place. I have several perls installed, each completely separate from all of the others.
To do that, you'll have to configure and compile the sources yourself. When you run configure, you'll get a chance to specify the installation directory. I gave detailed instructions for this in "Compiling My Own Perl" in the Spring 2008 issue of The Perl Review. There's also an Item in Effective Perl Programming that shows you how to do it.
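Since the build here is nmake on Windows, the place to set the install location is the pair of macros near the top of win32\Makefile; a minimal sketch, with an illustrative path:
# in win32\Makefile, before running nmake / nmake test / nmake install:
INST_DRV = C:
INST_TOP = $(INST_DRV)\perl510
Everything then lands under C:\perl510, leaving the existing installation untouched.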
Now, go back to your original distribution and run cpan -a to create an autobundle file. This is a Pod document that lists all of the extra stuff you've installed, and CPAN.pm understands how to use that to reinstall everything.
To install things in the new perl, use that perl's path to start CPAN.pm and install the autobundle file you created. CPAN.pm will get the right installation paths from that perl's configuration.
Watch the output to make sure things go well. This process won't install the same versions of the modules, but the latest versions.
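A minimal sketch of the round trip (the bundle name is illustrative; cpan -a prints the name of the file it actually writes):
cpan -a
C:\perl510\bin\perl -MCPAN -e "install Bundle::Snapshot_2009_09_01_00"
The second command must be run with the new perl's binary, so that CPAN.pm picks up the new installation paths.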
As for Strawberry Perl, there's a "portable" version you can install somewhere besides the default location. That way you could have the new perl on removable media. You can test it anywhere you like without disturbing the local installation. I don't think that's quite ready for general use though. The Berrybrew tool might help you manage that.
Good luck, :)

I would seriously consider looking at using Strawberry Perl.

You can install a second version of Perl in a different location. You'll have to re-install any non-core modules into the new version. In general, different versions of Perl are not binary compatible, which could be an issue if you have any program-specific libraries that utilize XS components. Pure Perl modules shouldn't be affected.

If you stay within the 5.8 track, all installed modules that contain XS (binary) extensions will continue to work, as binary compatibility is guaranteed within the same 5.8 series. If you moved to 5.10 then you would have to recompile any modules that contain XS components.
All you need to do is ensure that the new build lists the previous include directories in its @INC array (which is used to look for modules).
By the sounds of it, I think you're on Windows, in which case the current @INC paths can be viewed with
perl -le "print for @INC"
Make sure you install your new Perl version into another directory. It will happily coexist
with the previous version, and this will allow you to choose which Perl installation gets used; it's just a question of getting your PATH order sorted out. As soon as a Perl interpreter starts up, it knows where to look for the rest of its modules.
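For example (install path illustrative), putting the new perl first on the PATH for the current cmd session:
set PATH=C:\perl510\bin;%PATH%
perl -v
Swap the order, or drop the new entry, to fall back to the old interpreter.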
Strawberry Perl is probably the nicest distribution on Windows these days for rolling your own.

I think the answer to this involves virtualisation of some kind:
Set up an exact copy of your current live machine. Upgrade Perl, using the same directory locations and structures as you're using at the moment.
Go through your scripts testing them on the new image.
Once you're happy, flip the switch.
The thinking behind this is that there's probably all sorts of subtle dependencies and assumptions you haven't thought of. While unlikely, the latest version of a particular module (possibly even a core module, although that's even more unlikely) might have a subtle difference compared to the one you were using. Unless you've exhaustively gone through your entire codebase, there's quite possibly a particular module that's required only under certain circumstances.
You can try to spot this by building a list of all your scripts - a list that you should have anyway, by dint of all your code being under version control (you are using version control, e.g. Subversion, yes?) - and iterating through it, running perl -c on each script; a minimal sketch of such a checker follows. That sort of automated test is invaluable: you can set it running, go away for a coffee or whatever, and come back to check whether everything worked. The first few times you'll probably find an obscure module that you'd forgotten about, which is fine: the whole point of automating this is so that you don't have to do the drudge-work of checking every single script.
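A sketch of that checker, assuming a scripts.txt file (one path per line) that you generate from your version-control manifest:
# check_all.pl - run "perl -c" over every script in a list
use strict;
use warnings;

open my $list, '<', 'scripts.txt' or die "Can't open scripts.txt: $!";
while ( my $script = <$list> ) {
    chomp $script;
    next unless length $script;
    # $^X is the perl running this checker, so invoking the checker
    # with the new perl also compile-checks against the new perl
    my $status = system( $^X, '-c', $script );
    print "FAILED: $script\n" if $status != 0;
}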

When I did it I installed the newer one into a separate directory. There's a bit of added confusion running two versions, but it definitely helps make sure everything's working first, and provides a quick way of switching back to the old one in a pinch. I also set up Apache to run two separate services, so I could monkey around with the newer Perl in one service without touching the production one on the old Perl.
It's probably a lot wiser, in hindsight, to install on a separate computer, and do your testing there. Record every configuration change you need to make.
I am not sure about building it yourself—I always just used prepackaged binaries for Windows.
I'm not sure I understand exactly what you're asking. Do you have a list of changes you made to the 5.8 makefile? Or is the question how to obtain such a list? Are you also asking how to find out which packages above the base install you've obtained from CPAN? Are you also asking how to test that your custom changes won't break those packages if you get them from CPAN again?

Why don't you use ActivePerl and its "ppm" tool to (re)install modules?
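For example (module name illustrative):
ppm install DBI
ppm pulls prebuilt binary packages from ActiveState's repository, which avoids the nmake dance entirely, though the repository can lag behind CPAN.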

Related

Dependency resolution when make/compile errors occur building from source code

Very often we need to install software from its source code. Most of the time I just run "make world" or "make all" and it works like a charm. But other times we see make errors, and we need to install other packages in order to let the make go through. This is a particular problem when compiling low-level systems, such as a Linux kernel or the Xen hypervisor.
I had one such experience with Xen 3.4. Maybe it is documented in some obscure corner of the documentation, but it depends on udev-125 to work properly. The weird thing is that it functions well most of the time when the udev version is 160+; it only breaks in certain cases! It took me a few MONTHS to find out it was because of the wrong udev version!
To make developers' lives easier, when source code builds successfully on one machine, is there a tool to record the list of packages and versions on that machine? Such a 'snapshot' could be shipped with the source code as well, so that when someone hits a make error they at least have a known-good 'snapshot' for reference.
Is there such a tool already?
If your software depends on a specific version of a dependency, you should write a check for your configure script/CMake file/etc. that tests the version of the dependency and bails out if the wrong version is found.
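A minimal sketch of such a guard for a hand-rolled shell configure script, using the udev example from the question (udevadm --version is an assumption; very old udev releases exposed the version differently):
required=125
found=$(udevadm --version 2>/dev/null)
if [ "$found" != "$required" ]; then
    echo "configure: error: udev $required required (found: ${found:-none})" >&2
    exit 1
fi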
Comparing the output of config.log (a file created by a configure script) can also help diagnose problems like you encountered.

Run Compass/SASS with a different version

I am trying to find a nice solution for working on two different Compass projects. One is based on Compass using Blueprint (an older version), and the other is based on the Susy grid (a newer version).
Currently, I have to reinstall the right version for the watch process.
Is it possible to run compile with a specified version? It would be great if it is also possible to run a watch process with a specified version.
Running it as
compass _0.10.5_ compile
will do what you want. (Where you put in the desired version in place of 0.10.5, obviously.)
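The same underscore trick (a RubyGems feature for selecting a specific installed version of a gem's executable) should also work for the watch process:
compass _0.10.5_ watch
Both gem versions need to be installed for this to work.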
The tool you're looking for is probably rvm, which allows you to have different versions of Ruby/gems installed and easily switch between them.
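A rough sketch of that workflow with rvm gemsets (version numbers illustrative):
rvm install 1.8.7
rvm gemset create oldcompass
rvm use 1.8.7@oldcompass
gem install compass --version 0.10.5
Each gemset keeps its own copy of the gems, so the two projects can't trample each other.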
Perhaps there's a simpler way.
If you can use something like CodeKit or LiveReload, those tools allow you to use embedded Sass libraries or define your own.
That way you could use the built-in libraries for one project and your custom ones for the others.
RVM, suggested above, also works I believe, but I've never tried it myself.

Would using external perl in a Cocoa application be considered bad practice?

I have some perl applications & modules that I use for a number of tasks.
I would like to bundle these up and place a Cocoa wrapper around them, so I can distribute them to other people.
Assuming I can force use of the bundled OS X perl and include the modules I need inside my application, are there any real problems with doing this? I really do not want to re-implement everything I have already done.
There is nothing wrong with using the system-provided tools, including Perl. Things to consider:
The version of Perl changes from release to release. So if you need to support 10.4-10.7, you wind up with very diverse versions of Perl to support.
It can be tricky to include your own versions of modules, particularly if those modules rely on other modules, and most especially if those modules include compiled code.
Occasionally users mess with their system Perl more than you would like. In particular, they might install new modules or upgrade existing modules. Ideally you can say "don't do that," but it can create problems.
My team has had nightmares dealing with Net::SSL on different versions of OS X. We finally have removed Perl entirely from our code base due to the headaches of managing all the different versions of Perl and of Perl modules that might be in the system libraries.
But if you keep your dependencies simple, then there's no problem using the system Perl.
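If you do bundle your own modules, one simple way to make sure your copies win is to prepend the bundled directory to @INC at startup; a minimal sketch, with an illustrative Resources/perl-lib path inside the app bundle:
use FindBin;                                    # core module: finds this script's directory
use lib "$FindBin::Bin/../Resources/perl-lib";  # search the bundled modules first
This only helps for pure-Perl modules; anything with compiled XS parts still has to match the perl you run it against.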
As long as you employ good coding practices, it should be fine. I've never used Perl in a Cocoa application myself. You might want to check out CamelBones: http://sourceforge.net/projects/camelbones/
It's a Perl/Cocoa bridge. I've never used it myself, but it may help.

Manipulating source packages from Hackage: how to easily deploy to several Windows boxes?

Recently, when I have found good source packages for GHC 6.12/6.10 on Hackage, I've been forced to make some minor or major changes to the cabal files to get those packages to work under Windows.
Besides forking and merging my fixes on GitHub, what seems to be the best way / good-enough practice to take these modified builds to a couple of other Windows boxes that have only a basic Haskell Platform installed?
I would prefer to work with cabal-install somehow, because that is what one normally uses.
Should one put the modified build dirs on a shared/networked dir and mount it from the targeted Windows box?
Say something like this:
On machine 'prepare':
cabal fetch foo
cabal unpack foo
cd foo
edit .cabal and .hs
cabal configure
cabal build
On machine 'useanddevelopnormal':
cd machinepreparemount
cd foo
cabal install
The Yackage tool allows you to run a local Hackage-compatible server easily. You could deploy your modified versions on Yackage, add the Yackage repo to your repository list and then use cabal install as usual.
Using github is certainly "good enough", although if it seems to be a regular operation that you do in order to get something working on Windows, you may want to mention it on the development mailing lists for GHC, or at least on haskell-cafe. If this procedure is minor enough, it may need to be incorporated into general builds.
Working with cabal-install is definitely suggested. However you distribute your personal fixes is a private matter, and not something for others to control.
In principle it is possible to make local Hackage archives and point cabal-install at them. However, currently we do not have very good tools for producing the archive format. If you have the time, it's a matter of getting the right directory layout and using tar to make the index.
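A rough sketch of the layout (package name illustrative; details vary between cabal-install versions):
foo/1.0/foo-1.0.tar.gz
foo/1.0/foo.cabal
tar -cf 00-index.tar foo/1.0/foo.cabal
Then point cabal-install at that directory; older cabal-install versions read a local-repo: entry in the config file for exactly this purpose.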

Any existing pure PHP "make" tools?

Let me elaborate on the question...
I have a custom CMS (built on CodeIgniter, FTW) that includes many different types of modules.
Every time we have a new project come through the door, it is a variation and amalgamation of a few of the existing modules.
Sometimes a project comes through with requirements that are not satisfied by the existing modules, in that case I will write a new module...
All the modules are separated out into folders and the code is version-controlled using Git. Every module has its own Model, View, Controller, SQL and JavaScript files. All the dependencies are also separated and nicely foldered...
The next step for me is to create some sort of installer script that will take me through the "scaffolding" process step by step, allowing me to choose from the existing modules. A glorified "makefile", if you will...
Rather than rolling my own, does anyone know of any such thing out in the wild?
I know of Apache ANT (java), what I need is something in pure PHP with very low or no dependencies...
I would like something as simple as running a git pull and then php make.php
Thanks.
The "Ant-like" alternative I am aware of in PHP land is phing it is written in PHP and it will allow you to perform several tasks for packaging, deploying and testing your web applications. The documentation is a great starting point if you want to hit the ground running.
It is can also be extended to define new tasks if needed (examples and explanations are provided in the documentation)
Reading through the doco it appears to be possible to install Phing without PEAR as documented here you would have to correctly setup the environment on each machine you wish to use Phing on. I can not confirm this method though as I use PEAR for all my installs.
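For illustration, here is a minimal Phing build file (build.xml) that copies one module into a project; all names and paths are made up:
<?xml version="1.0" encoding="UTF-8"?>
<project name="cms-scaffold" default="build" basedir=".">
    <target name="build">
        <!-- copy one module's folder into the application -->
        <echo msg="Installing the news module..." />
        <copy todir="./application/modules/news">
            <fileset dir="./modules/news" />
        </copy>
    </target>
</project>
Running phing in the same directory executes the default target.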
