As a guy who frequently switches between QA, build and operations, I keep running into the issue of what to do about operating system updates on the build server. The tension is the same on Windows, Linux, macOS or any other o/s that can update itself over the internet:
The QA team wants to keep the build server exactly as it is from the beginning of the product release cycle to the end, since installing updates could destabilize the server and would mean that successive builds aren't made against the same baseline.
The ops team wants the software to be deployed on a system with all the latest security patches; this can mean that the software isn't deployed on exactly the same version of the o/s that it was built on.
I usually mitigate this by taking release candidate builds and installing them on a test server with a completely up-to-date o/s, repeating the automated tests that are run on the build server, and doing some additional system-level testing to make sure everything looks good before deployment (roughly the automation sketched below). However, this seems inefficient to me; does anyone have a better way?
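A minimal sketch of the kind of automation I mean, assuming a reachable, fully patched test host and an installer artifact (the host name, paths and commands are all hypothetical):

    #!/usr/bin/env python3
    """Deploy a release-candidate build to a fully patched test host and
    rerun the automated suite there. Host, paths and commands are hypothetical."""
    import subprocess
    import sys

    TEST_HOST = "patched-test-server"          # hypothetical, fully updated host
    INSTALLER = "build/output/myapp-rc1.run"   # hypothetical RC artifact
    REMOTE_DIR = "/tmp/rc-verify"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def main():
        # Copy the release candidate to the up-to-date test machine.
        run(["ssh", TEST_HOST, "mkdir", "-p", REMOTE_DIR])
        run(["scp", INSTALLER, f"{TEST_HOST}:{REMOTE_DIR}/"])
        # Install it and rerun the same automated tests the build server ran.
        run(["ssh", TEST_HOST, f"cd {REMOTE_DIR} && sh myapp-rc1.run --unattended"])
        run(["ssh", TEST_HOST, f"cd {REMOTE_DIR} && ./run-automated-tests.sh"])

    if __name__ == "__main__":
        try:
            main()
        except subprocess.CalledProcessError as exc:
            sys.exit(f"verification failed: {exc}")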
Personally I don't think you have much of an issue here - just apply the latest updates to the build server. The main reasons I say this are:
it is highly unlikely that your code or any of the dependencies on the build server are so tightly coupled to the OS version that installing regular updates is going to affect anything, let alone break it. There can be minor differences in window messages and the like between Windows versions, but those are few and far between, and are usually quite well documented out there on the interwebs. If you are using managed technology stacks like WPF/Silverlight or ASP.NET, and mostly even WinForms, then you will be isolated from these changes - they should only affect you if you are doing hardcore stuff with the WinAPI directly to create your windows or draw your buttons.
it is good practice to always engineer your product against the latest version of the OS, because you need to encourage your customers to install those updates too - IOW, you should never be in the position of telling a client not to install update xyz because your application will not run against it, especially if that update is a critical security patch
testing for differences between OS versions should be done by the QA team and should be independent of what is on the build server
you do not want your build server to get into such a state that it has been so isolated from the company update process that when you finally do apply everything it barfs and spits molten silicon everywhere. IOW, the longer you wait to update, the higher the risk of something going wrong, and going wrong catastrophically. Small, frequent, incremental updates are lower risk than mass updates once per decade :)
The build server updates you do have to be cautious about are third-party control or library updates - they can frequently contain breaking changes or considerably altered behavior. They really should be scheduled, and followed up by a round of testing looking for any changes; one cheap way to catch unscheduled changes is sketched below.
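A sketch of that idea, not tied to any particular component technology: keep a checksum manifest of the third-party binaries on the build server and diff against it before each build, so an unscheduled change is caught immediately (the directory and manifest paths are hypothetical):

    #!/usr/bin/env python3
    """Detect unscheduled changes to third-party binaries by comparing
    file hashes against a saved manifest. Paths are hypothetical."""
    import hashlib
    import json
    from pathlib import Path

    COMPONENT_DIR = Path("C:/BuildDeps")          # hypothetical third-party dir
    MANIFEST = Path("C:/BuildDeps/manifest.json") # hypothetical manifest file

    def snapshot():
        return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
                for p in sorted(COMPONENT_DIR.rglob("*.dll"))}

    current = snapshot()
    if MANIFEST.exists():
        saved = json.loads(MANIFEST.read_text())
        changed = {f for f in current if saved.get(f) != current[f]}
        missing = set(saved) - set(current)
        if changed or missing:
            print("Third-party components changed since last scheduled update:")
            for f in sorted(changed | missing):
                print("  ", f)
    else:
        MANIFEST.write_text(json.dumps(current, indent=2))
        print("Baseline manifest written.")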
Virtualize!
Using something like VMware Server you can script the launch and suspend of virtual machines. So you can script: resume the VM, SSH in to launch the build, copy the artifacts out, suspend the VM, repeat. (I say this, but I abandoned my work on this. Still, I was making progress at the time.)
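For example, driving VMware's vmrun command-line tool (the .vmx path, SSH alias and build commands below are hypothetical):

    #!/usr/bin/env python3
    """Resume a build VM, run the build over SSH, copy artifacts out,
    then suspend the VM again. Paths and host names are hypothetical."""
    import subprocess

    VMX = "/vms/build-server/build-server.vmx"  # hypothetical VM config
    GUEST = "build-vm"                          # hypothetical SSH alias for the guest

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Power on (or resume) the VM headlessly.
    run(["vmrun", "start", VMX, "nogui"])
    try:
        # Kick off the build inside the guest and pull the artifacts out.
        run(["ssh", GUEST, "cd /src/myapp && make clean all"])
        run(["scp", f"{GUEST}:/src/myapp/dist/*", "./artifacts/"])
    finally:
        # Suspend so the next run starts from the same known state.
        run(["vmrun", "suspend", VMX])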
Also, you can trust your OS vendors. Can't you?
They have an interest in compatibility. If you build on Windows XP it is almost certain to work on XP SP3 and Vista and Windows 7.
If you build on RedHat Enterprise 5, it had better work on 5.1, 5.2, 5.3, 5.4, etc.
In my experience this has worked out OK so far, and I recommend building on your lowest-patch OS versions. With the Linux stuff in particular, I have found newer releases linking against more recent libraries that are not available on older versions.
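You can make that linkage visible: on Linux, a binary's dynamic symbol table records the glibc version each imported symbol requires. A minimal check (the binary path is hypothetical):

    #!/usr/bin/env python3
    """Report the highest GLIBC symbol version a Linux binary requires,
    i.e. the minimum glibc needed to run it. Binary path is hypothetical."""
    import re
    import subprocess

    BINARY = "./dist/myapp"  # hypothetical build output

    out = subprocess.check_output(["objdump", "-T", BINARY], text=True)
    versions = set(re.findall(r"GLIBC_(\d+\.\d+(?:\.\d+)?)", out))
    if versions:
        newest = max(versions, key=lambda v: [int(x) for x in v.split(".")])
        print(f"Requires glibc >= {newest} (symbol versions seen: {sorted(versions)})")
    else:
        print("No versioned glibc symbols found (static binary?)")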
Of course it doesn't hurt to test your code on a copy of the deployment server. It all depends on how certain you want to be.
Take the build server off the network; that way you do not need to worry about installing security updates. Only load the source from CD, thumb drive or whatever other means.
Plug it back in at the end of your release cycle and then let all the updates take place.
Well, for the most stable process, I would have two build servers - one with the initial configuration, one with the updated configuration - and two autotest servers split the same way. Use virtualization to do this effectively and scriptably.
I would like to know if Informatica can be used on a Windows system. If so, what are the prerequisites?
Both the previous answers are wrong.
Windows 10 is supported for installing the client tools only.
For exact details on which Windows Server versions are supported, please log on (after an initial sign-up if you haven't already done so) to the Informatica Network at https://network.informatica.com . There is a section named Product Availability Matrices; there you will find, for each PowerCenter version, the so-called Product Availability Matrix (PAM) indicating which Windows versions are supported for server installation and which for client installation. You need both, and you can install both on the same Windows server system.
I won't reopen this ugly ancient flame war here. Suffice it to say that some people have managed to install the server part on Windows 10, but very few ever made it work reliably (in most cases the installation seems to work but doesn't, at the latest after the next system restart). I wouldn't waste a single second trying to do so; it's not worth the time.
I'm working with an old (Delphi 2010) app with a number of very specific components that have to be installed, some compiled from source. It's a pain to set up, is what I'm getting at.
Currently, I have it on a Windows 10 machine, but I haven't upgraded Windows 10 in quite some time. If I upgrade Windows, it breaks the debugger (and I haven't been able to fix that so I've downgraded Windows).
I'm trying to find a way to move the Delphi environment without having to go through all the setup steps again - for example, by making a VM out of the machine. Or, if I do have to go through the steps again, I want to do it only one more time, in such a way that I can recreate the environment at the push of a button afterwards. (There are a lot of things I need to try in order to upgrade the app itself, but many of those strategies will break the environment for me.)
Any strategies?
My team is developing a desktop application (mixed C++/Tcl) that is used in a client-server setup. Currently it is Windows-only, but soon we will need to port it to Linux. CruiseControl.NET builds it every night from the source code in SVN and packages it into an NSIS installer, but we have no automated tests to run.
It is nearly impossible to add any unit tests, but integration testing of the application is easy, because it is already heavily script-based.
The main task is to install the app on 3 PCs, configure it (that involves copying some files around), run it, monitor for a possible crash, wait till integration testing is done, collect a summary, and send emails. It could be done with a bunch of custom PowerShell scripts, but:

- In the future we will want to add more features and more testing, and what used to be a simple script soon blows up (as usual), so I want to minimize custom scripting; if I do need to script something, I prefer bash/Cygwin (I am not familiar with Python or Ruby).
- I want a web dashboard that reports current progress and, if something failed, shows the logs.
- I need some supervisor that will monitor the app under test and report if it hangs or crashes.
- We will need to test it on Linux as well.
- Ideally I would like to orchestrate some test steps between the PCs (e.g. run test X on PC1 and test Y on PC2 in parallel, wait till they both finish, then run test Z on PC1, while monitoring that nothing crashes on PC2, etc.).
So, I am looking for a COTS tool or set of tools that will help me do this without a steep learning curve. Ideally for free, but if something is really good and fairly priced, my company may purchase a license.
The process should be triggered from CruiseControl.NET when the NSIS installer is ready, and then perform everything described above. Basically, it should at least allow remote installation of software and running custom scripts, and it should have a web dashboard.
Apparently, configuration management tools like Chef could be used, but so far none of them supports running its server component on Windows, only the nodes. I would like to avoid setting up a Linux VM just for that, although I can do it if I have no other choice. Also, Chef seems a bit overkill - good for 10k machines, but I have only 3... maybe 5 in the future. And I am particularly curious about the chances of orchestrating a distributed test.
Most of the similar questions here on StackOverflow and elsewhere on the internet are about web apps, Java containers, Maven, etc., and there are just so many tools and plugins for those tools to evaluate.
Thanks in advance.
Install ccnet on your test machines. Have those ccnet projects listen to a file that gets edited when a new installer is ready. Have the test machines install that new installer and run the tests. There you go. ccnet sends emails, so there's your basic reporting.
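ccnet's filesystem source control covers the "listen to a file" part for you, but if you want to see how little is involved, a hand-rolled polling watcher boils down to this (a sketch with hypothetical paths and commands):

    #!/usr/bin/env python3
    """Poll an installer drop file; when it changes, install and run tests.
    Paths and commands are hypothetical - ccnet can do this step for you."""
    import subprocess
    import time
    from pathlib import Path

    DROP_FILE = Path(r"\\buildserver\drops\latest-installer.txt")  # hypothetical
    last_seen = None

    while True:
        stamp = DROP_FILE.stat().st_mtime if DROP_FILE.exists() else None
        if stamp is not None and stamp != last_seen:
            last_seen = stamp
            installer = DROP_FILE.read_text().strip()  # file names the new installer
            subprocess.run([installer, "/S"], check=True)       # silent NSIS install
            subprocess.run(["run-integration-tests.cmd"], check=True)
        time.sleep(30)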
Have the test results reported into a database via web services using gSOAP (that's what we did). For Linux you can run the Java CruiseControl if you must. Write a gSOAP-enabled test controller program to report the test results from the test machines - a little C++ app will do. Then write a website (we use ASP.NET) to query the database (PostgreSQL) and show results. Have the test machines auto-update themselves via SVN to get the latest changes to the configuration. Use NAnt - it is far superior to using ccnet alone to run tasks, and it works through ccnet. Use XML, XSL and CSS with ccnet to make the test emails contain the information you want (new passes, new failures, SVN differences against the code base, etc.).
Our latest development is putting a big TV in the kitchen with a summary of test results so people can know more readily what they broke!
The first thing I'd get working is a test machine listening for the new installer, installing it, running some basic tests and emailing the results back. Put the ccnet and nant configuration in version control and get that auto updating on the test machine so you don't have to log into every test machine and do an update every time you make a change.
This is hugely broad and pretty close to opinion-based. Chef can handle steps like deploying the application to the test machines, but it isn't a GUI test framework, so you would need something else to handle that. Jenkins supports distributing tests to Windows hosts, so that seems like a good choice on that side of things, but it isn't great at multi-node tests or orchestration between them. I suspect you'll need to write most of this yourself given the requirements - see the sketch below for the orchestration part.
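The home-grown orchestration need not be large. A sketch of the "test X on PC1 and test Y on PC2 in parallel, then test Z on PC1" flow, assuming the PCs are reachable over SSH (host names and test commands are hypothetical):

    #!/usr/bin/env python3
    """Run test X on PC1 and test Y on PC2 in parallel, wait for both,
    then run test Z on PC1. Hosts and commands are hypothetical."""
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def remote(host, command):
        """Run a command on a test PC over SSH and return its exit code."""
        return subprocess.run(["ssh", host, command]).returncode

    with ThreadPoolExecutor() as pool:
        fx = pool.submit(remote, "pc1", "run-test.cmd X")
        fy = pool.submit(remote, "pc2", "run-test.cmd Y")
        results = {"X@pc1": fx.result(), "Y@pc2": fy.result()}

    if any(results.values()):
        raise SystemExit(f"parallel phase failed: {results}")

    # Both finished cleanly - run the dependent step.
    if remote("pc1", "run-test.cmd Z"):
        raise SystemExit("test Z failed on pc1")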
I am working on a VB6 application which has many executables and ActiveX DLLs.
These need to be updated on client machines to the latest version every once in a while, which I currently ask users to do manually.
Can you please suggest a way to update them automatically from files made available online?
Thanks.
Windows Installer has features supporting Patching and Upgrades. Using those techniques you can create various levels of "upgrade" packages.
Your application would need a separate "update" utility that is spawned when the user approves updating, perhaps in response to a prompt your program raises after checking for new versions.
This updater would check the current version against the remote site's catalog of updates to pick the appropriate package, download it to a temporary location, start Windows Installer to process the package (or packages - sometimes you might need to run several Installer passes), and clean up the temp location. Then you might offer to restart the updated application, or on some occasions need to reboot.
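Stripped to its core, that check-download-install loop looks something like this - a sketch in Python purely for illustration (a real VB6 deployment would ship a small compiled helper), with hypothetical URLs and package names:

    #!/usr/bin/env python3
    """Check a remote catalog for a newer version, download the MSI to a
    temp dir, hand it to Windows Installer, clean up. URLs are hypothetical."""
    import json
    import subprocess
    import tempfile
    import urllib.request
    from pathlib import Path

    CATALOG_URL = "https://example.com/myapp/updates.json"  # hypothetical
    INSTALLED_VERSION = (1, 2, 0)

    with urllib.request.urlopen(CATALOG_URL) as resp:
        catalog = json.load(resp)

    latest = tuple(int(x) for x in catalog["version"].split("."))
    if latest > INSTALLED_VERSION:
        with tempfile.TemporaryDirectory() as tmp:
            pkg = Path(tmp) / "update.msi"
            urllib.request.urlretrieve(catalog["package_url"], pkg)
            # /i = install/upgrade, /qb = basic unattended UI
            subprocess.run(["msiexec", "/i", str(pkg), "/qb"], check=True)
        print("Updated; offer to restart the application here.")
    else:
        print("Already up to date.")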
This updater would be a fancy form of the common "installation bootstrapper." As you can tell it needs some "smarts" in order to tell what package or packages to download and install in what sequence, when it needs to request rebooting, etc. This would probably be based on a downloaded "rules script" it obtains as part of selecting a valid update option.
After all, sometimes you can just apply a minor upgrade or patch upgrade, sometimes you need a more complete install or entire reinstall.
If your needs are extremely simple (just an EXE and maybe a few DLLs and OCXs - preferably using reg-free COM) you may not need to go to these lengths. However, when you start adding in other considerations like multiple programs, data directory creation and security settings, possibly running a settings-file conversion or even a database conversion, DCOM, firewall and other configuration, database drivers or providers, etc., things get complicated quickly. Too complicated for simple snatch-and-grab updating.
And admin rights/UAC issues are a factor so you'll probably have to deal with privilege elevation.
None of this is trivial stuff. There are people who do little more than construct and test such deployment systems as their entire job.
If you use something like Inno Setup to install the application, then an update is simply a matter of running that periodically.
You can either detect there is a new version available by checking a web site/local server, or just prompt to run the update after X days.
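The version check itself can be tiny. A sketch, with a hypothetical version URL and installer name (Inno Setup installers accept /SILENT for unattended runs):

    #!/usr/bin/env python3
    """Compare the installed version against one published on a web server;
    if newer, download and run the Inno Setup installer silently.
    URL and file names are hypothetical."""
    import subprocess
    import tempfile
    import urllib.request
    from pathlib import Path

    VERSION_URL = "https://example.com/myapp/latest-version.txt"  # hypothetical
    INSTALLER_URL = "https://example.com/myapp/myapp-setup.exe"   # hypothetical
    INSTALLED = "1.4.2"

    def as_tuple(v):
        return tuple(int(x) for x in v.strip().split("."))

    with urllib.request.urlopen(VERSION_URL) as resp:
        latest = resp.read().decode().strip()

    if as_tuple(latest) > as_tuple(INSTALLED):
        setup = Path(tempfile.gettempdir()) / "myapp-setup.exe"
        urllib.request.urlretrieve(INSTALLER_URL, setup)
        # /SILENT shows only a progress bar; /VERYSILENT shows nothing.
        subprocess.run([str(setup), "/SILENT"], check=True)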
If I want to authenticate Windows accounts against AD when a user browses to an Apache-hosted site on a Linux server, here are the usual suspects:
- mod_ntlm (which I used in the distant past) - last updated in 2003
- mod_auth_ntlm_winbind - last updated in 04/2007
- mod_auth_kerb - last updated in 12/2008
No luck getting any of those to work with a recent, fully patched Windows 2000 AD server.
Do you have any clues as to a recipe that does work?
-Peter
-- UPDATE
my current build environment is this:
OS: Ubuntu Lucid
Apache 2.2.14 (from repos)
the auth modules I recompiled from source.
Did you just try to drop binary modules onto an existing apache binary, or did you rebuild Apache and the modules from source on your system?
The last time I did this (admittedly 3+ years ago), I found a combination of Apache+mod_ntlm that worked, but I ended up using a less-than-current version of Apache, in order to match the version of mod_ntlm that I found. My conclusion at the time was that if I wanted current, I was going to have to rebuild Apache and mod_ntlm from source, and I didn't have the time to do that.
Unfortunately, that was two jobs ago, and I don't have access to the configuration details.
LDAP. Active Directory should speak the LDAP protocol well enough (although I believe Novell's eDirectory sticks to the spec better) that you can use LDAP authentication setups to communicate with it. It'll be a lot easier than fussing around with the Windows-centric NTLM garbage.
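Before wiring up Apache, it's worth confirming the AD server accepts a plain LDAP bind at all. A minimal sanity check using the third-party ldap3 Python package (server and credentials are hypothetical):

    #!/usr/bin/env python3
    """Sanity-check that Active Directory accepts a simple LDAP bind.
    Server, user and password are hypothetical. Requires: pip install ldap3"""
    from ldap3 import Server, Connection, ALL

    server = Server("dc01.example.com", get_info=ALL)  # hypothetical domain controller
    # AD accepts userPrincipalName (user@domain) for simple binds.
    conn = Connection(server, user="jdoe@example.com", password="secret")

    if conn.bind():
        print("Bind OK - Apache's LDAP auth should work against this server.")
        conn.unbind()
    else:
        print("Bind failed:", conn.result)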
See this site for an example:
http://www.jejik.com/articles/2007/06/apache_and_subversion_authentication_with_microsoft_active_directory/
The other, likely costly, option is to invest in an identity manager product. Novell, Sun (now Oracle), and IBM all make one. I suspect that, unless you're designing something for a mid-size corporate project, you won't need these. But they are an option to consider.