Changing the environment in Hudson so that it stays for the whole build - Windows

How can I execute a batch file, or just a few (e.g. two) commands, in a Hudson job (running on Windows XP, currently not as a service, though that may change) so that the resulting environment stays in place for the whole build?
I need to do this because I have to change the current directory with 'cd' (we use relative paths in our project) and 'set' some environment variables for msbuild.
Thank you in advance.

Not sure why you need to get out of the service realm. My understanding so far was that Hudson starts a new environment for every job, so that the jobs don't interfere with each other. So as long as you don't use commands that affect other environments (e.g. subst), you will be fine with adding an "Execute Windows batch command" build step.
If your service runs with the wrong permissions, you have two options: either run the service under a different user than the local system account, or call the runas command. If for whatever reason you still need to contain changes to certain parts of your job, you can always call cmd to create a new environment.
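Note that within a single "Execute Windows batch command" step, cd and set changes persist for every subsequent line of that step (but not across separate build steps), so you can put the directory change, the variables and the msbuild call into one step. A minimal sketch, in which all paths, variable names and the solution file are only illustrative (%WORKSPACE% is the job workspace variable provided by Hudson):
rem everything below runs in the same cmd process, so cd and set stick
cd /d %WORKSPACE%\build\scripts
set Configuration=Release
set DEPLOY_DIR=C:\deploy
msbuild MySolution.sln /p:Configuration=%Configuration%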

Related

How do I set custom bash prompt / env variables / aliases in a way that the user can restore their previous settings at any point?

So I have written a tool that generates an offline replica of another system. It allows you to run commands typically run online, offline, by specifying a series of .mgmt and .sock files in the command.
However, I want users to be able to enter these commands as if they were on the live system. Therefore, I had it generate a script that can be sourced to set the environment variables and aliases necessary to allow the user to enter commands easily.
There are a few issues this creates that I want to work around, and I am curious whether there is a standard best practice for doing this.
I want the bash prompt to change (or at least be appended to) when the user sources the new variables, so it is clear they are running commands on the offline replica. I can do this by setting $PS1. However, I also want a 'deactivate' script that restores the user's previous environment. How do I undo the prompt change and restore the previous one?
When they source my script, environment variables that may have been previously set are changed. I have the script store any previous values as OLD_<env_variable_name> and generate a second deactivate script that restores them, removes the aliases and (eventually) resets the bash prompt. Is this the best way to do this, or is there a much simpler method I may be missing?
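For what it's worth, a common pattern (virtualenv's activate script works this way) is to have the sourced script remember the old values and define a deactivate shell function that puts everything back, so no second script is needed. A minimal sketch of that pattern, with made-up variable, alias and command names:
# activate.sh - meant to be sourced; names below are only illustrative
export OLD_PS1="$PS1"
export OLD_REPLICA_MGMT="$REPLICA_MGMT"
export PS1="(offline) $PS1"
export REPLICA_MGMT=/tmp/replica/replica.mgmt
alias status='replica-status --mgmt "$REPLICA_MGMT"'

deactivate () {
    export PS1="$OLD_PS1"
    export REPLICA_MGMT="$OLD_REPLICA_MGMT"
    unset OLD_PS1 OLD_REPLICA_MGMT
    unalias status 2>/dev/null
    unset -f deactivate
}
(The sketch does not distinguish variables that were unset from ones that were empty; handling that needs a little more bookkeeping.)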

Can an environment variable be shared between 2 different shell types?

I came into an environment where, when users log into our system, they get csh by default. We also have an automation login (let's call it "autologin") that also gets csh by default.
This login is used to execute (via its crontab) all of our scripts (50+) used to send and receive files with our vendors. The results of these individual file transmissions are used to feed a dashboard for each transmission.
The dashboard simply has a light for each file transmission (green light if the last file transmission was successful and red if it failed). This success/fail status is set (in a SQL Server database) from the scripts, using a tsql -H connection.
We are currently using SQL Server 2008, but are upgrading to 2016, so I need to change our 50+ scripts' tsql connection from sql_2008 to sql_2016. I had the idea of using an environment variable (let's say AUTOSQL) that the scripts could use.
I could then change all of the 50+ scripts to reference AUTOSQL, instead of sql_2008, and then set the environment variable to sql_2008/sql_2016/and whatever we upgrade to in the future. As I previously mentioned, all users log in with csh as the default shell. The problem I've encountered is all of the shell scripts are written in bash.
How can I set up an environment variable for the bash (our automation) scripts to use, so when we upgrade in the future, I simply have to change the value of one environment variable, instead of changes to 50+ scripts? Thank you
Environment variables are an operating system feature that is "application agnostic". In Unix-like environments, any kind of program can pass environment variables to any of its child processes.
The real issue here is that the fifty scripts are run by cron from a crontab file. This means that they will not inherit the AUTOSQL variable, even if it is exported by the csh login script.
See:
Where can I set environment variables that crontab will use?
Also, on the ServerFault StackExchange:
https://serverfault.com/questions/337631/crontab-execution-doesnt-have-the-same-environment-variables-as-executing-user
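One workaround, assuming a Vixie-style cron, is to define the variable at the top of the crontab itself, since assignments of the form NAME=value in a crontab are passed to every job in that file. A sketch (the value and script path are only examples):
# in autologin's crontab (crontab -e)
AUTOSQL=sql_2016
15 * * * * /home/autologin/scripts/send_vendor_files.sh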
It's great to see someone simplifying and consolidating their scripts.
If all these scripts are executed in cron (by root), /root is the first place I'd look.
Step 1:
Choice A.) Set and export AUTOSQL in root's .profile
Choice B.) Set and export AUTOSQL in root's .bashrc
Choice C.) Set and export AUTOSQL in root's (whatever you wanna call the file)
export AUTOSQL='sql_{year}'
Step 2:
Make sure you source this file at the top of your scripts. From now on, you can add environment switches at will to this file, since all of your scripts will source it.
. /root/.{bashrc || profile || whatever you decide to name the file}
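Concretely, that might look like this (the file name and server alias are only examples):
# /root/autosql.env
export AUTOSQL='sql_2016'

# at the top of each of the 50+ bash scripts
. /root/autosql.env
tsql -H "$AUTOSQL" ...    # wherever sql_2008 used to be hard-coded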
Hope this helped! Again, the decision is yours.

Jenkins Build Never Finishing

I have a Jenkins master/slave set up which has been working quite happily, running Oracle imports on some Linux boxes.
I have just added a new slave node and tried to run our existing database import job on this new node. This job consists of three subprojects: the first runs some execute-shell steps, copying files and changing permissions, and currently completes successfully; the second runs an execute shell that ends with an Oracle impdp. The impdp completes (the DB exists and ps -ef no longer shows impdp running), but the Jenkins subproject never finishes. The UI just sits there with the clock whirring around.
I've tried adding an echo after the impdp, and this also executes correctly, but the subproject still never finishes.
If I add a Post-Build email notification, it is not sent.
The third subproject is never reached.
What could be the cause of this and how do I debug what is happening?
In our case, the jobs would declare "Finished: SUCCESS", but then continue with some unknown Jenkins business for another 10 or 20 minutes. After turning on more detailed logging, we found it was related to the ill-named LogRotator.
We have thousands of old builds and are deleting the artifacts for those older than a certain number of days. Because of the way old builds are handled, Jenkins searches the entire list of old builds even though they have already had their artifacts removed.
There is an issue related to this that has since been fixed: https://issues.jenkins-ci.org/browse/JENKINS-22607
As of right now I do not see it in a release, but if you have this issue, the temporary workaround is to turn off the deletion.
This turned out to be something horrible :-)
After finishing the work, Jenkins tries to kill all processes it spawned. To identify them, it goes through all processes in the OS, reads /proc/<pid>/environ (this is a Linux box), which contains each process's environment variables, and compares them with the environment it sets for Jenkins processes.
The problem was that there was one particular Oracle process running on our DB server for which reading /proc/<pid>/environ would just hang forever, which is where the Jenkins code got stuck.
I have no idea why it was getting stuck like this and nor did our DBA. We restarted it and now it works.
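If you suspect the same problem, one rough way to find the offending process is to read every environ file with a timeout and see which one never returns (a sketch, assuming the GNU coreutils timeout command is available):
# exit status 124 means the read timed out rather than failing outright
for p in /proc/[0-9]*/environ; do
    timeout 2 cat "$p" > /dev/null 2>&1
    [ "$?" -eq 124 ] && echo "reading $p timed out"
done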
You can add set -x to the top of your shell scripts to see which commands are actually executed. That way you should be able to see easily from the output which command is blocking.

Jenkins Timeout because of long script execution

I have some issues with Jenkins running a PowerShell script. Long story short: the script takes about 8x longer to execute through Jenkins than when it is run manually on the server (slave), where it takes just a few minutes.
I'm wondering why.
The script contains functions which invoke commands like & msbuild.exe or & svn commit. I found out that the script hangs on the lines where the aforementioned commands are executed. The result is that Jenkins times out because the script takes so long. I could raise the timeout threshold in the Jenkins job configuration, but I don't think that is the solution to the problem.
There are no error outputs or any other information about why it takes so long, and I have no further ideas about the reason. Maybe one of you could tell me how Jenkins internally invokes those commands.
This is what Jenkins does (Windows batch plugin):
powershell -File %WORKSPACE%\ScriptHead\DeployOrRelease.ps1
I had created my own PowerShell CI service before I found that Jenkins supports this with its own plugin. But in my implementation, and in my current job configs, we follow a simple segregation principle: more separation is better. I found that my CI service works better when the work is separated into different steps (and in case of error it is a lot easier to do root cause analysis). The single responsibility principle is also helpful here, so in Jenkins we have pre-, post-, build and email steps as separate scripts.
About msbuild.exe: as far as I remember, in my case there were issues related to operations on file system paths. When the script was divided into separate functions we got better performance (with additional checks of parameters).
Use "divide and conquer" technique. You have two choices: modify your script so that will display what is doing and how much it takes for every step. Second option is to make smaller scripts to perform actions like:
get the code source,
compile/build the application,
run the test,
create a package,
send the package,
archive the logs
send notification.
The most problematic is usually the first step: getting the source code from Git, SVN, Mercurial or whatever you have as a version control system. Make sure this step is not embedded in your script.
During the job run, Jenkins captures the output and uses AJAX to display the result in your browser. In the script, make sure you flush standard output after every step (or every few steps). Some languages buffer standard output, so you only see the results at the end.
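A rough sketch of doing both in the PowerShell script, timing each external call and writing a line to the console as soon as it finishes (the solution file and commit message are invented):
# each Write-Output line shows up in the Jenkins console as soon as the call returns
$sw = [System.Diagnostics.Stopwatch]::StartNew()
& msbuild.exe MySolution.sln /m
Write-Output ("msbuild exit code {0}, {1:n0}s elapsed" -f $LASTEXITCODE, $sw.Elapsed.TotalSeconds)

$sw = [System.Diagnostics.Stopwatch]::StartNew()
& svn.exe commit -m "CI commit" --non-interactive
Write-Output ("svn commit exit code {0}, {1:n0}s elapsed" -f $LASTEXITCODE, $sw.Elapsed.TotalSeconds)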
Also, you can create log files that are helpful for archiving and verifying activity status for older runs. From my experience, using Jenkins with more than 10 steps requires you to create a specialized application that can run multiple steps, like Robot Framework.

What is the Cloud-Init equivalent for Windows?

It seems that the stock bootstrapping process is a bit lacking on Windows.
Linux has cloud-init which will install packages, store files, and run a bash script from user data.
Windows has EC2Config, but there is currently no support for running a cmd or PowerShell script when the system is "ready"--meaning that all the initial reboots are completed.
There seem to be third party options. For example RightScale has the RightLink agent which performs this function.
Are there open source options available?
Are there any plans to add this feature to Ec2Config?
Do I have to build this my self?
Am I missing something?
It appears that EC2Config on the Amazon-provided AMIs now supports "User Data Scripts" as of the 11-April-2012 updates.
The documentation has not yet been updated, so it's hard to tell if it supports PowerShell or just cmd.exe scripts. I've posted a question on the AWS forums to try and get some more detail, and will update here when I learn more.
UPDATE: It looks like cmd.exe batch syntax is supported, which can in turn invoke PowerShell. There's a new version of the EC2Config documentation included on the AMI. Quoting from it:
[EC2Config] will read in the user data specified for the instance and then check if it contain the tags <script> and </script>. If it finds both then it will take the information between those two tags and save it to a batch file located in the Settings folder of this application. It will then execute the batch file during the start of an instance.
The batch file will only be created and executed on the first launch of an instance after a sysprep. If you want to have the batch file created and executed again set the Ec2HandleUserdata plugin state to Enabled.
UPDATE 2: My interpretation is confirmed by Shon from the AWS Team
UPDATE 3: And as of the May-2012 AMIs, PowerShell is supported using the <powershell/> tag.
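For example, user data along these lines runs once on the first boot after sysprep (the commands themselves are only an illustration):
<powershell>
# write a marker file so you can tell the bootstrap script ran
New-Item -ItemType Directory -Path C:\bootstrap -Force
Add-Content -Path C:\bootstrap\firstboot.log -Value "user data ran at $(Get-Date)"
</powershell>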
Cloudbase.it have open-sourced a Python Windows service they call cloudbase-init, which supports the ConfigDrive and HTTP datasources.
http://www.cloudbase.it/cloud-init-for-windows-instances/
github here
https://github.com/stackforge/cloudbase-init/
I had to build one myself; however, it was very easy. I just made a service that reads the user data when it starts up and executes it as a PowerShell script.
To get around the issue of not knowing when to start the service, I just set the service start type to "delayed-auto" and that seemed to fix the problem. Depending on what you need to do to the system, that may or may not work for you; in my case that was all I had to do.
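The heart of such a service can be very small. A sketch of what it might run at start-up, where the service name and file path are made up (169.254.169.254 is the standard EC2 metadata endpoint):
# fetch the raw user data and run it as a PowerShell script
$userData = (New-Object System.Net.WebClient).DownloadString('http://169.254.169.254/latest/user-data')
Set-Content -Path "$env:TEMP\userdata.ps1" -Value $userData
powershell.exe -ExecutionPolicy Bypass -File "$env:TEMP\userdata.ps1"

# give the wrapper service a delayed start so it runs after the boot-time services
sc.exe config MyUserDataService start= delayed-auto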
I added a new CodePlex project that already has this tool built for Windows. Looking forward to some feedback.
http://cloudinitnet.codeplex.com/
We had to build it ourselves; we did it with a custom service and built our own AMIs. There's no provision currently within EC2Config to do it.
Even better, there is no easy way to determine when the instance is "ready". We had to do it by tailing the logfile of EC2Config.
I've recently found nssm (at nssm.cc), which easily wraps a simple batch file (or pretty much anything else) as a service. You can then use sc config service1 depend= service0 to force the batch file to be run at a particular point in the service initialization sequence. I am using it between EC2Config and SQL Express to create a folder on D:, for instance. You'll have to use the Services tool to make it run as Network Service, and change the AppExit property to Ignore using regedit, but it works once you get it all in place.
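For example (the service and batch-file names are invented, and the EC2Config and SQL Express service names may differ on your AMI):
nssm install CreateDataFolder "C:\scripts\create-d-folder.bat"
sc config CreateDataFolder depend= Ec2Config
rem depend= replaces the target service's existing dependency list, so check it first with: sc qc MSSQL$SQLEXPRESS
sc config MSSQL$SQLEXPRESS depend= CreateDataFolder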
