I haven't been able to find an easy way to configure Jenkins with a CakePHP project on my localhost to implement Continuous Integration properly.
I would appreciate it if someone could supply an easy-to-understand tutorial, from configuring Jenkins through to running the CakePHP unit tests.
Thanks
As said by #xgalvin, getting all the dependencies running on Windows is a messy and error-prone task. You'd be better off with what he suggested or with a Linux-based server, whether a virtual machine or not. Either way, definitely use the Jenkins PHP template. I've personally done this a couple of times; the first time it was a bit of a hassle, but it is not very hard to do. All you need is basic knowledge of Linux/bash/PHP/Jenkins and some time.
I am interested to know if someone has explored using Jenkins only as a backend tool, with a better web-based UI on top of it to start builds and report job details.
Jenkins is really amazing at what it does, and with Pipeline it actually does a lot of things that a modern build system might need. However, I am really not happy with the UI it gives users; it is very dull and not very intuitive. I was hoping someone had explored developing their own UI to show the different jobs configured in Jenkins, take inputs from users, run the jobs, and show the logs in a more intuitive way.
For now, I have found that the Blue Ocean Jenkins plugin is the best way to get a much improved UI for Jenkins.
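If you do want to build a custom front end, Jenkins also exposes a JSON REST API that such a UI could call directly. As a rough sketch (the server URL, user name, and API token below are placeholders, and basic-auth with an API token is assumed to be enabled on your instance), listing the configured jobs looks roughly like this:

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    class JenkinsJobLister
    {
        static async Task Main()
        {
            // Placeholder server URL and credentials - substitute your own.
            var baseUrl = "http://localhost:8080";
            var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("user:api-token"));

            using var client = new HttpClient();
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", token);

            // /api/json returns the configured jobs; the "tree" parameter
            // limits the response to the fields a dashboard would need.
            var json = await client.GetStringAsync(
                $"{baseUrl}/api/json?tree=jobs[name,color,url]");
            Console.WriteLine(json);

            // Builds can be triggered with a POST to {baseUrl}/job/<name>/build
            // (Jenkins may also require a CSRF crumb, omitted here for brevity).
        }
    }

The same API exposes per-build details and console output, so a custom dashboard can poll it for job status and logs.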
I am aware of the process to install WAS 8.5.5.x and 9.0.x versions using IM response file(s), but I would like to know best practices and recommendations for performing WAS installation and upgrades on more than one server, to avoid manual errors and reduce time.
I am open to using Ansible, Puppet, or any other orchestration tool as well, but would like to know the possible options if we are not allowed to use these tools.
The ultimate goal is to automate most of the setup/upgrade steps, if not all of them, since we are dealing with a bunch of servers.
Thanks
Assuming you are referring to WebSphere Application Server traditional, take a look at the approaches described here, https://www.ibm.com/support/knowledgecenter/SSEQTP_9.0.0/com.ibm.websphere.installation.base.doc/ae/tins_enterprise_install.html, especially if you are working with larger scale deployments.
Consider creating master images and distributing them in a swinging-profile-type setup. This makes it easier and faster to install and apply updates, since you only need to create the images once and can distribute them many times. You also get consistency across systems.
You can then automate with your preferred automation technology.
We use Ansible; it is simple and effective.
True, you must of course develop a playbook that can do all of this.
I'm currently experimenting with FoundationDB in a .Net WebApi 2 project. The WebApi controller performs a simple getrange against the FoundationDB cluster, and everything works fine ... if I run the project just once.
The second time I run it, I get the dreaded api_version_already_set error, and the only way to get everything up and running again is to restart IIS. I've found this similar question, and the only "solution" proposed in the answer is to run a process per App Domain, which isn't really ideal.
I have also tried this hack used in the .Net library, but all it does is switch the api_version_already_set error to network_already_setup or broken_promise.
Has anybody else found a better solution?
PS: To work around this temporarily, I'm running the WebApi as self-host, which seems to solve the problem, but it makes using FoundationDB in conjunction with WebApi annoying outside of a test environment.
This issue is still present in version 5.x, for the same reason. The network thread can only be created (and shut down) once per process, so any host that uses multiple Application Domains per process will not work. There does not seem to be any incentive to solve this issue (which mostly impacts managed platforms like .NET, and maybe Java).
Fortunately, with ASP.NET Core and web hosts like Kestrel (out of process, does not use AppDomains), this issue will become moot.
This can still cause issues with unit test runners that attempt to cache the process between runs. You need to disable this caching feature for them to run reliably.
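To illustrate the Kestrel point above: with the ASP.NET Core minimal hosting model the whole app runs in a single process with no AppDomains, so one-time setup in Program.cs executes exactly once. A minimal sketch (the endpoint is a placeholder, and the actual FoundationDB call is only indicated in a comment because it depends on which .NET binding you use):

    // Program.cs - ASP.NET Core app hosted by Kestrel (one process, no AppDomains)
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // Process-wide, one-time setup goes here and runs exactly once, e.g.
    // selecting the FoundationDB API version and starting the network thread
    // through whichever .NET binding you use (call omitted on purpose).

    app.MapGet("/range", () => Results.Ok("getrange result would go here"));

    app.Run();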
I'm building some scripts for automatically setting up a developer's machine so everyone has an identical setup & configuration.
One thing in particular I want to automate is the configuration in IIS7. We have a bunch of web apps which need to be hosted locally, and ideally I would like them all set up automatically. Does anyone know of a sensible way to do this?
A little bit of Microsoft.Web.Administration plus a bit of LINQPad and you're laughing.
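As a rough sketch of what that looks like (the pool name, site name, port, and path below are placeholders; the code needs to run elevated and references the Microsoft.Web.Administration assembly that ships with IIS):

    using Microsoft.Web.Administration;

    class ProvisionSite
    {
        static void Main()
        {
            // Placeholder names, binding, and path - adjust per web app.
            const string poolName = "MyAppPool";
            const string siteName = "MyApp";

            using var manager = new ServerManager();

            if (manager.ApplicationPools[poolName] == null)
                manager.ApplicationPools.Add(poolName);

            if (manager.Sites[siteName] == null)
            {
                var site = manager.Sites.Add(siteName, "http", "*:8081:", @"C:\Sites\MyApp");
                site.Applications["/"].ApplicationPoolName = poolName;
            }

            manager.CommitChanges(); // persists to applicationHost.config
        }
    }

Loop that over the list of web apps your team hosts locally and every developer ends up with the same IIS configuration.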
It's fairly easy with the PowerShell snap-in.
Edit: The MSDN docs are pretty whack as usual, but if you rummage around enough you can eventually find all the commands available to you.
Is a CI server required for continuous integration?
In order to facilitate continuous integration you need to automate the build, distribution, and deploy processes. Each of these steps is possible without any specialized CI server. Coordinating these activities can be done through file notifications and other low-level mechanisms; however, a database-driven backend (a CI server) coordinating these steps greatly enhances the reliability, scalability, and maintainability of your systems.
You don't need a dedicated server, but a build machine of some kind is invaluable; otherwise there is no single central place where the code is always being built and tested. Although you can mimic this effect using a developer machine, there's the risk of overlap with the code that is being changed on that machine.
BTW I use Hudson, which is pretty lightweight; it doesn't need much to get going.
It's important to use a dedicated machine so that you get independent verification, without corruption.
For small projects, it can be a pretty basic machine, so don't let hardware costs get you down. You probably have an old machine in a closet that is good enough.
You can also avoid dedicated hardware by using a virtual machine. Best bet is to find a server that is doing something else but is underloaded, and put the VM on it.
Before I ever heard the term "continuous integration" (this was back in 2002 or 2003), I wrote a nightly build script that connected to CVS, grabbed a clean copy of the main project and the five smaller sub-projects, built all the jars via Ant, then built and redeployed a WAR file via a second Ant script that used the Tomcat Ant tasks.
It ran via cron at 7pm and sent an email with a bunch of attached output files. We used it for the entire 7 months of the project, and it stayed in use for the next 20 months of maintenance and improvements.
It worked fine, but I would prefer Hudson over bash scripts, cron, and Ant.
A separate machine is really necessary if you have more than one developer on the project.
If you're using the .NET technology stack, here are some pointers:
CruiseControl.Net is fairly lightweight. That's what we use. You could probably run it on your development machine without too much trouble.
You don't need to install or run Visual Studio unless you have Visual Studio Setup Projects. Instead, you can use a free command line build tool called MSBuild.