RPM Hotfix/Patch Approach - installation

We run a system on CentOS 5.5 and install our software using a single RPM that contains everything. When we need to apply a hotfix or patch, the current practice is simply to copy a tarball onto the machine and untar it over the installation.
I'm trying to develop a trackable, repeatable system for applying hotfixes and patches, but I'm a little unsure as to what role RPM plays in this process.
From what I understand, if we bump the version number and reinstall, then even with just one file changed RPM will replace the whole lot. That requires us to be absolutely sure no one has put another hotfix on the system that we're not aware of, because it will be overwritten.
Is it possible to make an RPM that contains JUST the new files and apply that on top of an existing RPM? How would that affect subsequent upgrades of the system?

The problem with that approach is locally changing files installed by the RPM and then forgetting about them.
If you use this as a hotfix procedure, right after the hotfix you should build the new rpm and then deploy that new rpm.
If people are allowed to put hotfix after hotfix on top of a machine without ever committing it to the normal process, you're inviting disaster.
One thing you can do to remind yourself that you have hotfixes lying around is to run
rpm -V <your-rpm-name>
(the long form is rpm --verify). This will print a list of files that have been changed since the RPM was installed. That way you at least know which files were patched and should be taken into account.
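Its output looks something like this (the paths here are made up); each non-dot character is a failed check, e.g. S for file size, 5 for MD5 checksum, T for modification time, and the c marks a config file:
S.5....T  c /etc/myapp/myapp.conf
..5....T    /opt/myapp/bin/server
A clean, unmodified install prints nothing, which makes this easy to script into an audit.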

The point of RPM is to have repeatable, verifiable installs. You should generate a new RPM with the update rolled into it (via a patch or new upstream source).
Splitting up your monolithic package would allow you to upgrade parts individually.
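A minimal sketch of what that split might look like in the spec file (the package names and paths are hypothetical); each subpackage gets its own file list and can be shipped and upgraded on its own:

Name:    myapp
Version: 1.1
Release: 1
Summary: Example application split into subpackages
License: Proprietary

%description
Umbrella spec for the myapp subpackages.

%package core
Summary: Core binaries
%description core
Core binaries for myapp.

%package web
Summary: Web front end
Requires: %{name}-core = %{version}-%{release}
%description web
Web front end for myapp.

%files core
/opt/myapp/bin/*

%files web
/opt/myapp/web/*

A hotfix to the web tier then becomes rpm -Uvh myapp-web-1.1-1.x86_64.rpm, leaving myapp-core untouched.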

You can use deltarpm (but I don't recommend it; see below).
Given the old RPM and the new RPM on the build machine, you can generate a delta using the deltarpm tool.
Using the deltarpm tool on the box where the software is installed, you can then automagically upgrade the old RPM to the new one (and downgrade it back if needed).
I don't like it because if any (non-config) file provided by the old RPM has been changed on disk, you will not be able to apply the fix. Moreover, deltarpm is not a production-ready tool. You have been warned.
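For reference, the round trip looks roughly like this (the file names are hypothetical); applydeltarpm reconstructs the full new RPM from the delta plus the files already on disk, and the result is then installed normally:
makedeltarpm myapp-1.0-1.x86_64.rpm myapp-1.1-1.x86_64.rpm myapp-1.0_1.1.drpm
applydeltarpm myapp-1.0_1.1.drpm myapp-1.1-1.x86_64.rpm
rpm -Uvh myapp-1.1-1.x86_64.rpm
The reconstruction step is exactly where the problem above bites: if an on-disk file no longer matches the old RPM, applydeltarpm cannot rebuild the new one.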
As an alternative to deltarpm, I would advise splitting your software into a few smaller RPMs and delivering a subset of new RPMs as a hotfix. This is the most common approach.

Related

Install package in separate area for read-only Anaconda Linux install

At work we have a central, read-only Linux Anaconda installation, and several projects need library packages for their individual project members.
Is there a way to conda install packages in a writable area set aside for each project?
Our Linux servers are also not directly web connected, but we can transfer data from a Windows machine that is. Is there a way for the Windows conda to download data for our Linux install in such a way that I can transfer the downloaded files to Linux and then finish the install on Linux, with the Linux conda not needing a direct web connection?
Thanks in advance :-)
The best answer to this question is a bit oblique: the Anaconda Distribution is designed for a single user on a single system with unrestricted access to the Internet. Any other use is considered "off label" and YMMV, though there are no license restrictions in place preventing you from trying to use it as you see fit. Anaconda Enterprise is the commercial product that is specifically designed for multi-user, server-deployed Anaconda with firewall restrictions. Security, governance, indemnification, support, collaboration, etc. etc. Check out https://www.continuum.io/ for more details.
But there are "work around" ways to achieve what you want, albeit complicated ones. For it to be reliable, reproducible, and maintainable you're going to end up reimplementing a lot of what is in Anaconda Enterprsie. Here are some tips:
Check out the "conda in multi-user environments" documentation
Check out the "Centralized Anaconda installation" documentation
Regular user alice for project foo can do:
conda create -p /nfs/project/foo/envs/custompython --offline anaconda
conda activate /nfs/project/foo/envs/custompython
conda install pkg1 pkg2 pkg3
You're going to run into ownership/permission issues. If you have sensible umask values, then when alice's colleague bob tries to update pkg2 in the foo project, he'll discover that he can't unlink the files alice wrote there. There is stuff you can do (as the IT admin) with chown, or alice can do with chmod, but it's all a bit of a bother, and there are lots of ways you can paralyze a conda environment because it expects "writability" to be binary for a particular environment. There is a long history in the conda GH issue tracker of people (myself included) shooting themselves in the foot by starting a conda env setup with one account and then making mods with another account that bork out halfway through, leaving everything inconsistent.
Be careful about .condarc files. My advice: avoid them everywhere but in the base Anaconda installation (say, inside /opt/anaconda/.condarc). All sorts of weird stuff can happen when multiple overlaying .condarc files come together (the docs reference above discusses this).
People can create their own environments in an "offline" mode so long as the packages specified in those new environments (and their dependencies) are a subset of the packages available in the base environment (or subsequently added to the package cache), taking into account versions as well, of course.
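For example, something like this succeeds only if python, numpy, and all their dependencies are already present in the base installation or package cache (the environment name is made up):
conda create -n projenv --offline python numpy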
You can download packages using your online Windows machine by grabbing them from repo.continuum.io and from anaconda.org. Make sure you download them for the right platform. But the challenge: you need to download a set of packages that will satisfy the dependencies of the package you want to install. There isn't a super easy way to get that information when you're offline.
Once you drop new packages into the Linux system's package cache, be sure to re-run conda index.
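One way to wire this up is as a small local channel rather than the cache proper (the paths and package file name here are assumptions; conda index ships with conda-build):
mkdir -p /opt/local-channel/linux-64
cp pkg1-1.0-py27_0.tar.bz2 /opt/local-channel/linux-64/
conda index /opt/local-channel/linux-64
conda install -c file:///opt/local-channel pkg1
This keeps dependency resolution in play, unlike the direct-tarball install warned about below.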
Beware installing packages directly from their tarballs: this will not pick up any dependencies and does what is called a "force" install. So doing conda install /path/to/conda/pkg-ver.tar.bz2 is actually most similar to doing conda install --force --no-deps pkg=ver (though not identical, to be sure). --force means the install will happen NO MATTER WHAT, even if it will break your environment (violate existing package dependencies), and --no-deps means you won't get any of the dependencies of pkg installed.

How to install a Chocolatey package completely offline?

I need to install software on Windows clients that are completely offline. That means they have no Internet access.
An example. Let's say I want to install Paint.Net. I go to a reference machine (with INet) and install Paint.Net with Chocolatey.
choco install paint.net -y
After the install is finished I have the software installed and two artifacts:
the package file "paint.net.nupkg" in %ChocolateyInstall%/lib/paint.net, and
the installer file "paint.net.4.0.6.install.zip" in %Temp%\chocolatey.
I now put these two files on a USB stick. Then I go to the offline machine, plug in the USB stick and want to install the package.
Is it possible to install the software without modifying the package? I am aware that inside the nupkg file there is a tools/chocolateyInstall.ps1 file with a $url variable defined. But I want to install the package without changing the package content or modifying the URL by hand.
I played around with the parameters --cache and --source but with little to no luck.
I have seen that this kind of question has been asked before, but never (to my knowledge) with the intent to run the installer file from the stick too (and not only the package file). So I hope this is not a duplicate.
Caching Downloads - Not Deterministic
While there are ways to put the original nupkg (the one with the version in its file name, not the one in the packages directory; use the download link on the left side of the package's page on the Chocolatey community package repository) and the download cache onto a USB stick, it is not deterministic that this will always work. You can also override the cache location so that the folder is somewhere other than TEMP. See choco config, choco config -h, and choco config set cacheLocation c:\some\location to do this.
Create Your Own Packages - Better
For packages you need offline, you can manage your own packages and embed the software right into the package. This is the desired approach when you want to manage software offline, as most things on the community repository are subject to copyright law and distribution rights (which is why they don't simply embed the software they represent).
Creating and working with your own packages is very secure, reliable, and repeatable (and can be completely offline), but it does take time. If you are doing this just for yourself, that could cancel out any time savings you get as a consumer using Chocolatey and the community repository.
Internalized Packages - Best
The best thing you can do here is a process called internalizing, where you download and extract the package, download all of the resources and embed them in the package (or put them somewhere local/UNC share), edit the scripts to use those embedded/local resources and recompile the package.
This allows you to take advantage of existing package logic without the issue of the internet.
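A rough manual version of that workflow, using paint.net from the question as the example (the local paths are assumptions, and the download/unzip/edit steps are done by hand or with your own tooling): download the nupkg from the community repository and unzip it (a nupkg is just a zip archive); fetch the installer that $url in tools\chocolateyInstall.ps1 points at and put it in the package's tools folder or on a local/UNC share; edit the script to use that local copy; then recompile and install from a local source:
choco pack paint.net.nuspec
choco install paint.net --source="c:\local-packages" -y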
For more details see Recompiling Packages and Package Internalizer - Automatically Recompile Packages.
NOTE: We are thinking of offering the ability to auto-recompile with the Chocolatey Pro edition, and not just the Business edition.
Organization Use of Chocolatey
Most organizations using Chocolatey are doing some combination of creating packages and recompiling packages, because they need absolute trust and control over those packages when being used in production scenarios.

Merge module install failing during major upgrade

I have an InstallShield InstallScript MSI project that contains the FLEXnet Connect without Software Manager merge module. The version of this product is 6.0.32. I created a second installer for version 6.1 that also contains the FLEXnet Connect without Software Manager merge module. When I perform a major upgrade on a system that contains the 6.0.32 version I get a message in the MSI log stating:
Disallowing installation of component: {FF970098-B748-427B-B946-AA8E1A1F82AD} since the same component with higher versioned keyfile exists
The component is referencing the isusweb.dll file located in the FLEXnet Connect folder.
It looks like this check occurs prior to the 6.0.32 product being removed. The install proceeds to remove the 6.0.32 product, which removes isusweb.dll. During the 6.1 install the isusweb.dll is not put back because of the component version check.
The upgrade succeeds. When I attempt to run the application from a shortcut it verifies the components. Since the isusweb.dll is missing the MSI attempts a repair, then cannot find the MSI and does not allow the application to open.
Is there some way to get the merge module to always overwrite?
This sounds suspiciously like this bug:
http://support.microsoft.com/kb/905238/en-us
I've come across this bug, and you do see that log message when RemoveExistingProducts is scheduled early in the install. Windows Installer decides not to install the file based on the higher version being present, but doesn't re-evaluate that decision after REP removes it. Then a repair restores it when you use a shortcut. The bug should apply only to files in the GAC or SxS, so it's a bit puzzling that you're seeing it here.
If you can schedule REP at the end of the transaction sequence (InstallExecute, then RemoveExistingProducts, then InstallFinalize), that should fix it. It might be worth a try, provided all the other effects of the move are acceptable.
Merge modules don't get installed, they get merged; product MSIs get installed. One of the problems with using third-party merge modules is that if they have a bug, there isn't much you can do about it.
I'd consider creating an MSI solely for the purpose of encapsulating this MSM. Then I'd create a setup prereq or suite installer to install this MSI apart from your product MSI.
You have got two really good answers already, but to try and synthesize:
It really sounds like a buggy merge module. Phil suggests to fix your REP placement in the InstallExecuteSequence to work around the bug. Chris suggests to put the faulty merge module in its own setup. I agree with both and think you should follow both suggestions:
Remove the merge module from your main setup.
Create a new setup and add the faulty merge module and ensure the right REP sequencing.
For the REP fix to work, your component referencing must be 100% correct - now and in the future. Creating a separate setup eliminates this as a problem by containing the buggy module inside its own MSI. This will help you avoid re-activating the bug by mistake or by a changed design in the future - and the latter is always a real possibility.
As Chris says: a merge module isn't delivered, it is merged. An updated merge module may be available for all I know, but even then it is wise to contain it. Especially when you are dealing with the GAC (Global Assembly Cache).
Another solution I applied when I encountered this bug was to update the "Version" column of the File table in the merge module using Orca. Set it to the maximum, 65535.65535.65535.65535; this forces the upgrade to always install the DLL from the merge module.

Update Magento to specific version (not latest)

I want to update a Magento 1.4.2.0 installation to 1.6.x.y instead of the latest version, 1.7.x.y.
There are many articles on how to update a Magento installation to the latest version, but that's not what I want. There are some forum threads where people ask how to update to a specific version, but none of them offers a solution.
It seems like it is only possible to unpack the tar.gz of the specific version, but it is not possible to use the command line tool, i.e.
./mage config-set preferred_state stable
./mage upgrade-all --force
Is there a way to use the command line tool to update to a specific version?
There are two different ways of handling this. 1) Manually...
From my experience, what you end up doing is scrapping Magento Connect (which is often the source of all evil once you've had it do partial upgrades two or three times in a row), downloading the whole package from the download archive (all versions are available there on the Release Archives tab), unzipping it to a directory, and then either copying the files into your Magento root directory on the server, or ftp/scp uploading them from a remote workstation to the server's Magento root directory.
If you're serious about running Magento, you will have a development server that you do this to several times to find out where all the upgrade breakages are so you can weed out busted templates, detect forgotten core modifications, curse third party modules that don't survive, etc. It's really important to do this if you're depending on that e-commerce site for your income as intense suckage occurs when you aren't ready and sink the live site.
If you've modularized all your module overrides, created your own skin folders and custom template, or used a well-written template from a developer, it truly is just a matter of dumping the new version's files on top of the old version's files and overwriting everything (only after disabling all Magento caching and the compiler, if you were using it, and manually deleting everything under var/cache).
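In shell terms, the core of that step looks something like this (the paths and archive name are placeholders; take the backups mentioned below first):
cd /var/www/magento
rm -rf var/cache/*
tar xzf /tmp/magento-1.6.x.y.tar.gz -C /tmp
cp -a /tmp/magento/. /var/www/magento/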
If, however, you've modified any of the files you are overwriting, you are in a world of hurt because you didn't do things properly.
Also, you have to deal with upgrading third party modules to work with the new version.
Then, before committing to the live site, back up all Magento application files and do a database dump.
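For example (the database name, credentials, and paths are made up):
mysqldump -u dbuser -p magento_db > magento_db_backup.sql
tar czf magento_files_backup.tgz /var/www/magento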
2) Or use the command line tool as follows...
Since the original question was, "can you use the command line tool?" yes you can. Once you have the file saved from the download archive, use the following:
./mage install-file /home/login-name/path-to-download-file/magento-1.5.x.x.tgz
I've also used this on various module packages to inspect their contents. The mage command also has a download-only mode: download the package file, inspect the contents, and if you like what it does, install it.
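If I remember the syntax correctly, the download-only step looks something like this (the channel alias and package name are placeholders):
./mage download community Some_Module_Name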

MSI diff on packages to create patches

I am looking at the best options for creating patches for our clients. They don't want a full-blown installer for minor patches (from 1.0 to 1.1), since they would then need to do a full regression test on the system.
I wanted to know whether there is a tool (paid or not) that will take two MSI builds, do a diff/compare, and output a usable patch installer.
Most of the time it will be updates to assemblies (modified or new), but it may require some custom script (in C# if possible).
They don't want a full blown installer when doing minor patches... since they will need to do a full regression on the system.
This statement is based on some faulty logic, and whoever uttered it is blowing smoke out the proverbial. There is every bit as much risk from a patch install as there is from a full installer - if the testers have no faith in your build/release process, then both need to be tested fully. The MSI is just the packaging; a full install or a patch install can both change the whole system. If the testers want to come up with the argument that "with the patch, the file abc.dll has not changed so we don't have to test the functionality in it", then you can argue that that thinking is incorrect - if the code using abc.dll has changed, then abc.dll may exhibit different behaviour.
IOW, my argument is that a patch install or full install both carry the same level of risk, and both should be tested to the same level. To minimize the amount of retesting required you need to build trust and certainty with your release process - an automated build/release process and an auditable source control system should do that for you.
In any case, I agree with the answer from Christopher - tools such as InstallShield can be used to create a single MSI that is either a full install, if you haven't already got the product on your machine, or switches to upgrade mode if it detects an item with the same product code and a lesser version number already installed. Having said that, it can be incredibly difficult to get that upgrade to work correctly.
InstallShield can do this provided:
You follow strict compliance with the component rules and have a working minor upgrade servicing story.
You are incrementally building your assemblies so as not to make the system think all of your files changed.
There are other ways to accomplish this but it takes quite a bit of plumbing to be honest.
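As one concrete illustration of that plumbing, here is roughly what it looks like with the WiX toolset instead of InstallShield, assuming both builds were produced by WiX with their .wixpdb files kept, and that patch.wxs is your own patch authoring containing a PatchBaseline with Id "RTM":
torch.exe -p -xi 1.0\product.wixpdb 1.1\product.wixpdb -out diff.wixmst
candle.exe patch.wxs
light.exe patch.wixobj -out patch.wixmsp
pyro.exe patch.wixmsp -t RTM diff.wixmst -out patch.msp
The resulting patch.msp contains only the changed resources and applies on top of the installed 1.0 product.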
To be honest, I've heard your "we have to test it all" story many times and I've never bought the argument. Usually they want to piecemeal their baseline and then stick their head in the sand about what the true test surface is. Usually their real problem is one of SCM / Build / Release discipline, not whether the installer is a Major Upgrade, Minor Upgrade or Patch. (IMO)
