Can you host a bitbucket pipeline internally? - continuous-integration

We are currently using Bitbucket Cloud to host our Grails app repository. We want to set up some pipelines to do things like run unit tests and make sure the app compiles before a branch can be merged to master.
I know this can be done fairly easily by letting Bitbucket host the pipeline and committing a well-written pipeline file. The problem is that our app is very large: even on brand-new MacBook Pros it takes 20 minutes to compile, and on some older ones it can take 2 hours or more. Thankfully, Grails only recompiles files that have changed since the last compilation, but that doesn't help a Bitbucket pipeline that works from a fresh clone of the app every time it runs.
My idea was to host the pipeline internally, so that the app is already checked out and the pipeline just switches to the desired branch and builds from there. This could still take a while when switching between two heavily diverged branches, but it beats compiling from scratch every time.
I can't seem to find any documentation on hosting a pipeline internally with Bitbucket Cloud. Does anyone know whether this is possible and, if so, where it is documented?
A solution to the long-compilation problem itself, using Bitbucket-hosted pipelines, would also be acceptable.

A few weeks ago, self-hosted runners were made available as a public beta. Here are the details: https://community.atlassian.com/t5/Bitbucket-Pipelines-articles/Bitbucket-Pipelines-Runners-is-now-in-open-beta/ba-p/1691022
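For what it's worth, routing a step to a self-hosted runner is just a matter of labels in bitbucket-pipelines.yml. A minimal sketch, assuming a Linux runner and a Gradle-based Grails build (the ./gradlew task is illustrative):

pipelines:
  default:
    - step:
        name: Compile and test on our own hardware
        runs-on:
          - self.hosted   # route this step to a registered runner instead of Atlassian's cloud
          - linux
        script:
          - ./gradlew check   # hypothetical compile-and-test task for the Grails app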
Additionally, if you're looking to retain some of your files from one build to the next, to save doing the same work over and over again, have a look at caches: https://support.atlassian.com/bitbucket-cloud/docs/cache-dependencies/ There are some built-in ones that you could use, but you can define your own custom ones as well. Essentially, a cache is just a way of preserving the contents of a directory for a future build.
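As a sketch of how a custom cache might preserve Grails' incremental-compilation state (the cache name and the build directory path are assumptions):

definitions:
  caches:
    grails-build: build    # custom cache: keep the compiled output between builds
pipelines:
  default:
    - step:
        caches:
          - gradle         # built-in cache for downloaded dependencies in ~/.gradle
          - grails-build   # the custom cache defined above
        script:
          - ./gradlew check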

Related

How do I duplicate my Heroku app to add to a pipeline?

I'm beginning to understand how Heroku works, but haven't yet used a pipeline. I have an app I'm working on that is near its first production version. I'd like to begin using pipelines.
But I don't understand how to begin. What do I need to do to make a copy of the current app and have that copy be in the development stage and make another copy for the staging stage? Do I fork my git repository twice and add each one?
I'm trying to take this one step at a time. I don't need GitHub integration yet. This is a small project and will not have any pull requests for quite some time, if ever. I'm only interested in the ability to develop, stage and release in the three stages offered by Heroku.
While pipelines do use multiple apps, they should use the same Git repository with different remotes. Heroku's help page helped me understand the process: link the repository to each app under a different remote name, and then push to the remote I'm currently working on.
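As a rough sketch of that workflow with the Heroku CLI (the app and pipeline names here are hypothetical):

# One repository, three apps, three remotes
heroku create myapp-dev --remote dev
heroku create myapp-staging --remote staging
heroku create myapp-prod --remote prod

# Group the apps into a pipeline, one app per stage
heroku pipelines:create myapp --app myapp-prod --stage production
heroku pipelines:add myapp --app myapp-staging --stage staging
heroku pipelines:add myapp --app myapp-dev --stage development

# Deploy by pushing to whichever remote you're working against
git push dev master

# Later, promote a vetted build up the pipeline instead of rebuilding it
heroku pipelines:promote --remote staging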

SVN Post-Commit to Update Working Copy when Working Copy is on a Network Drive

I work for a fairly new web development company and we are currently testing subversion installations to implement a versioning system. One of the features we need the versioning system to perform is to update the development server with an edited file once it has been committed.
We would like to maintain one server for all of our SVN repositories, even though, due to system requirements, we need to maintain several separate development servers. I understand that the updates are fairly simple when the development server resides in the same location as SVN, but that is just not possible for us. So, we need to map separate network drives to the SVN server for each development server.
Here is my working copy test directory, as referenced in the post-commit.bat file:
SET WORKING_COPY=Z:\testweb
This, however, results in an error...
post-commit hook failed (exit code 1) with output: svn: Error resolving case of 'Z:\testweb'
I'm sure this is because the server doesn't run as my user and therefore doesn't have the share I need mapped to Z: - I just have no idea how to work around this. Can anyone help?
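For what it's worth, that error usually comes down to drive mappings being per-user: the account the SVN server runs under never sees your Z: mapping. Referencing the share by its UNC path (and granting that account rights on the share) sidesteps the problem. A minimal sketch, with \\devserver\testweb standing in for the real share:

REM post-commit.bat -- sketch; the UNC path and svn.exe location are assumptions
SET WORKING_COPY=\\devserver\testweb
"C:\Program Files\Subversion\bin\svn.exe" update "%WORKING_COPY%"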
UPDATE: The more I look into these issues, the more it appears that the real solution is to use a CI server to accomplish what I'm attempting. I am currently looking into TeamCity and what it might do for us.
Don't do this through a post-commit hook. If you ever manage to get the hook to succeed, you'll be making the person who did the commit wait until the update is complete. Instead, I recommend that you use Jenkins, which is a continuous build engine.
It is possible that you don't have anything to build. After all, if you're using PHP or JavaScript, there's nothing to compile. However, you can still use Jenkins to do the update for you.
I can't get into the nitty-gritty detail here, but one of the things you can do with Jenkins is redefine its working directory. You do this by clicking the Advanced button when you define a job; it will ask where you want the working directory. In this case, you can specify your server's working directory.
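In today's Jenkins the same idea can be written down in a Jenkinsfile. This is only a sketch; the node label and document root are assumptions:

pipeline {
    agent {
        node {
            label 'webserver'
            customWorkspace '/var/www/testweb'   // check out directly into the server's docroot
        }
    }
    stages {
        stage('Update') {
            steps {
                checkout scm   // pull the freshly committed files into the custom workspace
            }
        }
    }
}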
One of the things you can do with Jenkins is have it automatically run tests, or do a somewhat smoother update. For example, you might have to restart your web server when you change a few files, or you might need to make sure that if you're changing 100 files, they all get changed at once so your server isn't left in an unstable state. You could use Jenkins for this too. And if there are any problems, you can have Jenkins email the person responsible for the server that the update failed.
Jenkins is easy to set up and use. You can download it and have it running in 10 minutes. Setting up a job might take another 15 minutes, even if you had never seen Jenkins before and had no idea how it works.

Distributed Revision Control with automatic synchronization or Eclipse plugin better than FileSync?

I have what I hope is not a unique situation...
...and I'm looking for suggestions.
I am looking for a better synchronization plugin for Eclipse than FileSync
-or-
I am looking for a distributed (preferably) version control system that will allow me and the other developers on my team to work with local files and have that repository automatically upload changes and revision history to our development box
-or-
A combination of the two.
Most revision control applications I've tried cater more to the compiled-code workflow, where you only check in when you have a compilable code base, and that makes sense to me. We, however, are working with ColdFusion pages on a remote development server, which complicates the process of check-ins, quick updates, and debugging. Now, I don't necessarily want to check in every time I want to test code (because that would be a nightmare...), but it would be nice to have something that tracks changes throughout the day and logs those changes in revision control automatically (the dev would state their intention in a dialog on opening the project?) while keeping the files on the development server in sync with all the programmers' machines. It would be awesome if it tracked changes locally and did one auto check-in per day (at some scheduled time, preferably as a background process), but I've not seen anything like this.
Currently we are shoe-horned into using Serena PVCS (mainly because they have free licenses) and it's not a very fast solution when we all work in different states, our development server is in a state none of us work in, and the repository is in yet another state. (No control over this!) It normally takes Eclipse 10-15 minutes to synchronize ~500 files with the PVCS server, and check-ins are "Eclipse-lockingly" slow (i.e., when checking in, forget using Eclipse for anything else).
I would like to have a workflow process that manages all our workfiles locally, synchronizes those changes to a remote development server and pulls down any changes that happen to be up there. I do not count on having any/many conflicting merges during this because we all work on different parts of the same site. However, it may happen.
I have played around with Bazaar, and this is what made me think about having a distributed revision system, but I'd like to have that auto-merge with the remote repository (the development server in this case) and I did not find a way to do that when local files were updated. I will have to admit I have not looked into Git or Mercurial much and was hoping that someone could share their experience with me on feature sets or solutions if one of these other options will work.
To give some back story, this came about when one of our developers started using FileSync in Eclipse and overwrote all our changes, because the Eclipse FileSync plugin is only one-way: from the dev box to the server. The boss asked why we weren't checking in all the time... we blamed the speed... and I got tasked with finding a solution.
Also, a centralized solution like SVN was already turned down (because we have Serena and a crew of people who are supposed to manage it... but I've been waiting two days for even a response to an issue log I submitted about the speed problem, so if we can self-manage a solution [thus distributed, and why I looked at Bazaar], that would be awesome).
A DVCS like Git or Mercurial would definitely be a sensible choice, especially for:
distributed development
distributed repos (including one dedicated for those checks of yours)
That notion of a dedicated repo is not a new one and has been used before (e.g., a local repo used for testing before pushing to a remote repo), but it can easily be adapted for the kind of automatic pushing you are looking for.
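A minimal sketch of the "one scheduled auto check-in per day" idea on top of Git (the dev remote, paths, and schedule are assumptions):

#!/bin/sh
# auto-checkin.sh -- commit and push the day's work to the dev server's repo
cd /path/to/working/copy || exit 1
git add -A
# Only create a commit if something actually changed
git diff --cached --quiet || git commit -m "Automatic check-in: $(date +%F)"
git push dev HEAD
# cron entry to run it every weekday at 18:00:
#   0 18 * * 1-5  /path/to/auto-checkin.sh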
For strong Eclipse integration, I would go with Git (even though EGit is not fully baked yet): all Eclipse projects (for the development of Eclipse itself) are, or soon will be, in Git repositories.
Eclipse is committed to replacing its current native CVS integration with a complete native Git integration.

How to develop code on multiple computers when using a CI server

I currently do a lot of my work on my laptop on the train to and from work most days, but it is nowhere near as good or as easy to use as my main machine at home. My issue is how to keep the files in sync. I use Subversion, but also have a CI server that rebuilds on each check-in. There is a question here that deals with multiple machines and says to use source control. But what if there is a CI server? That would lead to broken builds.
My first thought (which seems a bit daft) was to use two sets of source control, or at least two repositories for the same code: one for the CI server and one to transfer between machines. I'm not sure you can even do that with SVN. I was just looking at Mercurial after Joel's blog, but I'm not sure that would work either, as you still have the issue that the central repository you push to is the one the CI server pulls from.
I guess the crux is: how do you share code that is still buggy or doesn't compile, so you can switch machines, without breaking the build?
Thanks
You're actually on the right track with the multiple repository idea, but Subversion doesn't natively support this sort of thing. Look into using a so-called DVCS (Distributed Version Control System) such as Git or Mercurial, where you can have multiple repositories and easily share changes between them.
You can then push changes to the central server from either your desktop or your laptop machine, and your source control system will sort it all out for you. Changes that are not yet complete can be done on another branch, so if one morning on the train you write some good code and some bad code, you can push the good code up to the repository used by the CI server, and the bad code over to your desktop development machine, and carry on from the point you left off.
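A sketch of that flow with Git (the remote and branch names are hypothetical: 'desktop' is a remote pointing at your home machine, 'origin' is the central repository the CI server watches):

# On the train: commit everything, working or not, on a scratch branch
git checkout -b wip
git commit -am "WIP: does not compile yet"
git push desktop wip          # hand the unfinished work to the home machine

# At home: pick up where you left off
git checkout wip

# Once it builds and the tests pass, let the CI server see it
git checkout master
git merge wip
git push origin master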
What about branches? Develop on one branch and only reintegrate after you've finished a piece of development?

What does CruiseControl (or any other CI tool) give you beyond well-written (n)Ant?

We have a large collection of nAnt scripts that build our various products. They almost all have the following structure:
1. Erase old working copy.
2. Check out complete fresh copy from version control.
3. Increment build number in appropriate file (custom nAnt task).
4. Run static analysis (StyleCop, Perl scripts).
5. Build solution using Visual Studio - ends up with MSI output.
6. Run unit tests (nUnit, JSUnit).
7. Run static analysis (FxCop).
8. Zip up deliverables (MSI, readme, etc.) into a well-named package.
9. Put this zip package onto a server share.
10. Email results to team.
From our research, it seems that CruiseControl(.NET?)/Hudson/BuildBot would only add the trigger that causes the build (which at the moment is double-clicking the nAnt script over Remote Desktop) and a status dashboard.
Are we missing anything else significant?
The question is subjective, and thus so is my answer.
In the projects I've automated before, CruiseControl was used essentially for that one purpose: so we didn't have to remote into the build machine and trigger builds. The CI part is that CruiseControl will monitor the repository for you, triggering builds at the intervals you define.
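For a concrete picture, a trimmed ccnet.config for that setup might look like this (the server URL, paths, and project name are placeholders; the existing nAnt script stays in charge of the actual build):

<cruisecontrol>
  <project name="MyProduct">
    <sourcecontrol type="svn">
      <trunkUrl>http://svnserver/repo/trunk</trunkUrl>
      <workingDirectory>C:\builds\MyProduct</workingDirectory>
    </sourcecontrol>
    <triggers>
      <intervalTrigger seconds="60" />   <!-- poll the repository every minute -->
    </triggers>
    <tasks>
      <nant>
        <executable>C:\nant\bin\nant.exe</executable>
        <buildFile>MyProduct.build</buildFile>   <!-- the existing script, untouched -->
      </nant>
    </tasks>
    <publishers>
      <xmllogger />   <!-- feeds the web dashboard -->
    </publishers>
  </project>
</cruisecontrol>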
It also gave us the dashboard, from which we could trigger releases or go back and examine logs and artefacts from past builds.
For us that was enough benefit to implement CruiseControl. Perhaps it doesn't "seem" like much until you've finished it and a month later realized you haven't had to touch your build system because it's off silently and thanklessly doing its thing for you.
A Continuous Integration server such as Hudson would do steps 1, 2, 3, 9 and 10 for you, so that you don't have to implement them yourself. If you've already got that working, it's maybe not a huge improvement for your current project, but it makes things simpler for subsequent projects. It would also, as you mention, take care of when to trigger the build.
Hudson will also chart various trends over time, such as test coverage, build time, and static analysis results. You can also have more sophisticated notifications than just e-mail if you choose.
The most important thing it gives you is visual feedback (the bigger the screen, the better). When you have one machine dedicated to displaying build results, visible to all team members, it acts as a catalyst: people see that something is wrong and fix it.
If you have something like that standing where your boss can see it and can ask you, "Hey Wilkinson, why is this screen red?", won't you fix your build faster?
They all look much the same; pick whichever you think fits your needs, just get one set up and running.
