I have multiple existing projects which build fine. They run MSBuild on a Windows agent that runs as a Windows service.
I wanted to create a single project that builds them all in a particular order and collects the artifacts from all of them, so I decided to try creating a pipeline. When I run it, it gets to the first build statement and then just hangs there; no error, it just says "Scheduling Project:.." and the little wheel spins forever. The job it's trying to start normally finishes in a few seconds.
stage('job1') {
    node('windows') {
        build job: 'job1', quietPeriod: 0, wait: true
    }
}
I have to kill the build manually; it never starts the job.
OK, I managed to rearrange things so that the pipeline itself runs on the master while the individual jobs and artifact copies run one at a time on the slave, and it's working now. The hang was apparently a deadlock: the build step waits for job1 to finish, but job1 needed the same windows executor that my node('windows') block was already holding.
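In case it helps anyone else, here is roughly what I ended up with: the build steps no longer sit inside a node('windows') block, so the downstream jobs are free to take the Windows executor, and only the artifact collection holds the node. job2 stands in for the rest of my jobs, and the copyArtifacts step assumes the Copy Artifact plugin is installed.

stage('job1') {
    // runs on a lightweight executor; does not occupy the windows node
    build job: 'job1', quietPeriod: 0, wait: true
}
stage('job2') {
    build job: 'job2', quietPeriod: 0, wait: true
}
stage('collect') {
    node('windows') {
        // requires the Copy Artifact plugin
        copyArtifacts projectName: 'job1'
        copyArtifacts projectName: 'job2'
    }
}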
I have a build running in TeamCity, with only one build step: launching a BAT file. TeamCity sometimes kills my build with a (double) keyboard interrupt, and I have no idea why. The output at the end of the build is like this:
Running build failed.
Error:
NUnit test failed (7).
Starting BuildFailureTarget: recover
Uninstalling service under test..Terminate batch job (Y/N)?
^C
Process exited with code -1073741510
This build runs some integration tests via NUnit, after installing a Windows service along with a SQL database. If any of the tests fail, the build script (which uses FAKE, F#'s make tool) runs some cleanup: it uninstalls the service and tears down the database. It's the same cleanup code that runs when the build passes; only the target name (recover) is different. TeamCity seems to kill the build only when some tests have failed. I should note that the "Uninstalling service under test" message comes from a subprocess that is running the uninstaller. This still happens even if we turn off several failure conditions so that the build (spuriously) passes after several tests fail (we are not using Java, so we assume that failure condition is irrelevant).
I can't figure out why TeamCity is killing my build before it is done. How do I figure out what would cause TeamCity to issue this interrupt?
It seems that TeamCity does this if it detects dangling processes (I'm not sure how to be more precise than that). What was happening in our case was that a third-party library threw an exception while we were running a subprocess, before the code that stopped that process had run. The exception was handled, and the cleanup triggered by the exception would have resulted in the process getting shut down anyway (through another means), but before that cleanup finished, TeamCity killed our build, which ironically meant that the process never did get shut down.
Our solution was to catch the exception and ensure the original shutdown code was called before failing the build. Ultimately we were not able to get more clarity from the TeamCity side on what was happening; we found the bug by careful analysis of our own code. It does seem, however, that this happens when the standard cleanup logic for subprocesses fails.
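To make the shape of the fix concrete, here is a minimal sketch; it is Groovy rather than our actual FAKE/F# script, and ping is just a hypothetical stand-in for the uninstaller subprocess:

// the cleanup step that threw, reduced to its essence (hypothetical)
def runCleanupSteps = { throw new RuntimeException('third-party library failure') }
// hypothetical stand-in for the long-running uninstaller subprocess
def proc = ['ping', '-n', '60', '127.0.0.1'].execute()
try {
    runCleanupSteps()
} catch (RuntimeException e) {
    proc.destroy()   // stop the child ourselves...
    proc.waitFor()   // ...and wait until it is really gone,
    throw e          // and only then let the build fail
}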
I have a GitLab runner installed on one of my test servers.
I want to build and deploy my app on every commit.
The server is a Windows server and the app is a .NET Core 1.1 app.
My build script works fine, but eventually it runs dotnet MyApp.dll, which of course makes the pipeline stop and wait until that process finishes (and, of course, my app won't finish; I want it to keep running).
I tried running start dotnet MyApp.dll, but that still doesn't work, as GitLab's runner won't stop running until all of its child processes exit.
I am certain I'm using GitLab CI in a non-idiomatic way, but I fail to understand how to deploy locally correctly.
Any suggestions?
Windows doesn't offer any easy way to disown a process, and you probably don't want to be stuck stopping the old process by hand on your next deploy. What you should do is use SRVANY.EXE to create a service out of your application, and then use GitLab CI to stop the service, replace the files, and start it again. It's been a while since I used Windows, so I'm sorry, but I can't provide the exact commands to run.
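From memory, the recipe is roughly this (untested; the service name, paths, and the dotnet location are placeholders to adapt):

rem One-time setup: wrap the app in a service via SRVANY
sc create MyAppSvc binPath= "C:\tools\srvany.exe" start= auto
rem SRVANY reads the actual command line from these registry values
reg add "HKLM\SYSTEM\CurrentControlSet\Services\MyAppSvc\Parameters" /v Application /t REG_SZ /d "C:\Program Files\dotnet\dotnet.exe"
reg add "HKLM\SYSTEM\CurrentControlSet\Services\MyAppSvc\Parameters" /v AppParameters /t REG_SZ /d "C:\apps\MyApp\MyApp.dll"

rem The deploy job in .gitlab-ci.yml then just does:
sc stop MyAppSvc
xcopy /y /e /i publish C:\apps\MyApp
sc start MyAppSvc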
I'm using Packer to create AWS AMIs for deployment. I build a couple in parallel for different types of AMIs (application server, worker server) and provision them using Ansible.
However, if one of the build processes fails, I want to halt the entire build process for all parallel builds. Is there a way to accomplish this with Packer?
No.
(Unless you write some strange script that runs last in all builds and waits for all the others, or times out. But you are better off with a cleanup script that checks the result of the packer build and deregisters the AMIs if one of the builds failed.)
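For what it's worth, such a cleanup script could look roughly like this; it assumes you add Packer's manifest post-processor (which records each build's AMI as region:ami-id in packer-manifest.json) and that jq and the AWS CLI are available:

packer build template.json
if [ $? -ne 0 ] && [ -f packer-manifest.json ]; then
  # the run failed part-way: deregister whatever AMIs were produced
  for artifact in $(jq -r '.builds[].artifact_id' packer-manifest.json); do
    region=${artifact%%:*}
    ami=${artifact##*:}
    # note: deregistering leaves the backing EBS snapshots in place
    aws ec2 deregister-image --region "$region" --image-id "$ami"
  done
fi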
We are using TFS/VS 2010 to run Selenium tests, which are scheduled through the TFS build controller. After the build and tests are finished, I would like to rerun the failed tests from that build.
Currently I am doing this with a Windows scheduled task that executes a batch file, which calls a PowerShell script that gets the latest build version (and its failed tests), runs them with MSTest, and finally publishes the results back to the build.
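For reference, the rerun-and-publish part of the script boils down to standard MSTest switches, roughly like this (the test container, test names, build name, and collection URL are placeholders):

rem rerun the failed tests into a fresh results file
mstest /testcontainer:SeleniumTests.dll /test:FailedTest1 /test:FailedTest2 /resultsfile:rerun.trx
rem publish the results back against the original build
mstest /publish:http://tfsserver:8080/tfs/DefaultCollection /publishbuild:"MyBuild_20100101.1" /teamproject:MyProject /publishresultsfile:rerun.trx /platform:"Any CPU" /flavor:Release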
I just want this to happen without a Windows scheduled task; it is too fickle. I believe I need to edit ProcessTemplate.xaml and add an InvokeProcess activity to achieve this, I just can't find much on it.
Thanks in advance!
I have two builds running in TeamCity. One deploys the DB; the other builds an app and then runs it against the DB.
My problem is that I don't want the DB build to start while the second one is running.
So I need the database build not to trigger, or to wait, until the other is finished?
You can create a snapshot dependency on the build (B1) that you would like to have wait for the other build (B2), assuming B1 needs to start only after B2 is complete.
A better way is to create one build configuration with two build steps. Then set the limit on the number of simultaneously running builds for that configuration to one, and you are done.
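If you keep your project settings in TeamCity's Kotlin DSL (available in newer TeamCity versions), the snapshot-dependency variant looks roughly like this; DbDeploy and AppBuild are hypothetical names for your two configurations:

import jetbrains.buildServer.configs.kotlin.v2019_2.*

object DbDeploy : BuildType({
    name = "DB deploy"
})

object AppBuild : BuildType({
    name = "App build and run"
    dependencies {
        // don't start until the DB build has finished successfully
        snapshot(DbDeploy) {
            onDependencyFailure = FailureAction.FAIL_TO_START
        }
    }
})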