I've just started learning and using NativeScript; everything seems alright and I'm making progress, but I haven't been able to find a solution for continuous delivery (on a push to the master branch in Bitbucket, I want to automatically build and distribute the app).
I've tried App Center from Microsoft (I used it before for Xamarin) but could not make it work.
I use Bitrise (https://www.bitrise.io/). It integrates with Bitbucket for CI and also offers scheduled builds. There are free plans available. It takes a little config to get working with NativeScript, but it has been worth it so far.
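For reference, the NativeScript part of my setup is little more than a script step that installs the CLI and runs the build. The workflow below is a trimmed-down sketch of a bitrise.yml, not my exact config: step versions are omitted, the steps are standard ones from the Bitrise step library, and the tns command, stack, and signing setup will depend on your project.

format_version: "11"
default_step_lib_source: https://github.com/bitrise-io/bitrise-steplib.git
workflows:
  primary:
    steps:
      - activate-ssh-key: {}
      - git-clone: {}
      - script:
          title: Build NativeScript app
          inputs:
            - content: |
                #!/usr/bin/env bash
                set -ex
                npm install -g nativescript   # provides the tns CLI
                npm install
                tns build android --release   # adjust platform and flags for your project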
I've searched for multiple solutions. I've retried with VS App Center; there were an example or two on Google (e.g. https://github.com/skhye05/nativescript-app-center), but they did not work for me.
I guess I'll close the ticket for now.
We are currently using bitbucket cloud to host our grails-app repository. We want to set up some pipelines to do things like run unit tests and make sure the app compiles before being able to merge a branch to master.
I know this can pretty easily be done by letting Bitbucket host the pipeline and committing a well-written pipeline file. However, there is a problem: our app is very large, and even on brand-new MacBook Pros it takes 20 minutes to compile; on some older machines it can take 2 hours or more. Grails, thankfully, only recompiles files that have changed since the last compilation. However, that can't be used in a Bitbucket pipeline that works off a fresh clone of the app every time it runs.
My idea was to set up a pipeline that runs internally for us, so it already has the app checked out and only needs to switch to the desired branch and build from there. This could still take time when switching between two heavily diverged branches, but it's better than compiling from scratch every time.
I can't seem to find any documentation on hosting a pipeline internally with Bitbucket Cloud. Does anyone know if this is possible and, if so, where the documentation for it is?
It would also be acceptable to find a solution to the long-compilation problem itself with Bitbucket-hosted pipelines.
A few weeks ago, self-hosted runners were made available as a public beta. Here are the details: https://community.atlassian.com/t5/Bitbucket-Pipelines-articles/Bitbucket-Pipelines-Runners-is-now-in-open-beta/ba-p/1691022
Additionally, if you're looking to retain some of your files from one build to the next to save doing the same work over and over again, have a look at caches: https://support.atlassian.com/bitbucket-cloud/docs/cache-dependencies/. There are some built-in ones that you can use, but you can define your own custom ones as well. Essentially it's just a way of preserving the contents of a directory for a future build.
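To give a rough idea of how the two combine, something along these lines in bitbucket-pipelines.yml would run the step on one of your own runners and carry both the Gradle dependency cache and a custom cache of the build output between runs. Treat it as a sketch: the custom cache name, the cached directory, and the Gradle tasks are placeholders for whatever your Grails project actually uses.

definitions:
  caches:
    grails-build: build            # custom cache: keep compiled output so Grails can compile incrementally
pipelines:
  pull-requests:
    '**':
      - step:
          name: Compile and test
          runs-on:
            - self.hosted          # run on your own registered runner
            - linux
          caches:
            - gradle               # built-in cache for ~/.gradle/caches
            - grails-build
          script:
            - ./gradlew classes test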
I'm somewhat new to maintaining separate production vs development builds of an app.
I want to have my current build deployed to Heroku so I can easily get it in front of people for critique, but I'd also like to run a local version so I can make changes and see them quickly on the fly.
With my app on Heroku, every time I make a change I have to push to GitHub and then hit the deploy button. This takes a relatively long time compared to just launching the app on localhost and refreshing the browser page to see how the changes came out. That's fine if you have made a ton of changes and you know they all work as expected, but it's horrible for trying incremental changes, as you can imagine.
I know this is sort of a newbie question, but how can I have the best of both worlds?
The only way to achieve something like this is with review apps.
Instead of doing a git push to Heroku, you will need to enable GitHub Sync. You will then be able to deploy either through the Heroku dashboard or automatically whenever a push is made to master.
Review apps will automatically create and configure a test app for each pull request.
Then, when you wish to do QA with other people, you just have to give them the address of that review app, where the new code is deployed, instead of the main one.
I tried the continuous integration tools Travis CI, CircleCI, and Codeship, but found that none of them provide support documentation for Phabricator. Does anyone have ideas about how to do continuous integration (CI) with Phabricator?
I have done an integration with Travis-CI by adding post-diff and land hooks to Phabricator that push diffs as branches to GitHub, where Travis looks for branch updates. As far as I know, Travis-CI integrates only with GitHub, so if your main repo is there and Phabricator is pointing to it, it can be done.
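On the Travis side, all that approach really needs is for the pushed diff branches to get built. A .travis.yml branch whitelist along these lines is a sketch of that; the branch naming pattern and the build command are assumptions on my part, so match them to whatever your hooks actually push and however your project builds:

branches:
  only:
    - master
    # hypothetical naming; use whatever pattern your post-diff hook pushes
    - '/^phabricator\/diff\/.*$/'
script:
  - ./run-tests.sh   # placeholder for your project's build/test command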
If you want to take this approach, the place to start is with creating your own ArcanistConfiguration and overriding didRunWorkflow. See also how to create a new library. The API documentation is pretty good, but I had to go through some trial and error to get what I wanted. The Phabricator people are probably happy to answer questions.
You can also look into the Phabricator Conduit method differential.createcomment to script messages onto diffs, like so:
arc call-conduit --conduit-uri="https://my.phabricator.com/" --arcrc-file="robot.arcrc" \
differential.createcomment <<EOF
{"revision_id":"1234","message":"Yer build done failed"}
EOF
Where robot.arcrc is an arcrc file with the credentials to push messages, and 1234 is the revision number. You would have to use the conduit API to get the revision number.
So, I think the answer is that you may have to build your own custom solution depending on which CI tool you want to integrate with. And here's a discussion of Travis support for Phabricator.
Edit: Here's traphic, an example of extending Arcanist to push diffs to branches on GitHub on arc diff and remove them on arc land. As Travis-CI looks for updates from GitHub, it will build your diffs.
Side note: This is mostly a brain dump. I know good answers have more code examples and links are frowned on, but the question was pretty open ended and was looking for pointers, so I'm trying to be helpful.
If you are using Jenkins, the nice guys at Uber created a Jenkins Phabricator Differential plugin that makes it possible to really clean up your job configs (if you've already set them up using the links from zerodiff's post).
Might be worth noting that Phabricator's tool to do continuous integration (i.e. Harbormaster) is currently under development.
You can find this in a table on their comparison page. See http://phabricator.org/comparison/
I was referred to Hudson today.
I have heard about continuous integration before, but I have no idea what the heck a CI server is.
Hudson is really easy to install on Ubuntu, and within a few minutes I managed to set up an instance of it.
But I don't quite understand the workflow of a CI server, or how I am supposed to use it.
Please tell me if you have experience with CI; thanks in advance.
Edit:
I am currently using Mercurial as my SCM, and I wonder what the right way to use it with Hudson is.
I have installed Hudson's Mercurial plugin and created a new job with a local repository. When I commit to the repository, the Hudson job builds with the latest version of my source code.
If I were using a remote repository instead, what would the workflow be like?
Is it something like the following?
Set up a Hudson job with the repository
Developer makes a local clone of the repository
Developer commits and pushes changes
The remote repository is updated with the incoming changesets
Hudson runs a build
There may be something I have misunderstood entirely; please help me point it out.
Continuous Integration is the process of "integrating software" continuously i.e. as frequently as possible (ultimately after each set of changes) to avoid any big-bang integration and all subsequent problems by getting immediate feedback.
To implement Continuous Integration, you first need to automate the build of your software (where build of course means compiling the sources and packaging them, but also compiling the tests, running the tests, running quality checks, etc., anything that will help to get feedback on the health of your code). Then you need to trigger the build on the latest version of the sources on a particular event (a change in the repository, a scheduled time), to generate reports, and to send notifications upon failure (by mail, Twitter, etc.).
And this is precisely the responsibility of a CI engine: offering trigger mechanisms, being able to get the latest version of the sources, running the build, generating and publishing reports, sending notifications. CI engines do implement this.
And because running a build is CPU and Disk intensive, CI engines usually run on a dedicated machine (or even a farm of machines if you want to build lots of projects).
Back to your question now. Once you've got Hudson running, configure it (Manage Hudson > Configure System): set up the JDK, build tools, etc. Then set up a Hudson job and follow the steps: configure the location of the source repository, the build tool, the trigger, a notification channel, and you're done (you can do more complex things, but that's a start).
For more details on the setup, check:
The official Use Hudson guide for more details. << START HERE
Continuous Integration with Hudson - Tutorial.
Spot defects early with Continuous Integration.
Martin Fowler's overview of continuous integration is one of the canonical references. In my opinion, using automation to make sure your code base is healthy is one of the most useful things that you can set up.
Update: Sorry that I didn't have much time earlier to expand on my reply. @Pascal_Thivent is right that in order to effectively use CI, you need to be able to automate your builds, tests, etc. CI is actually a good forcing function for this. For me, it's one of those little warning flags if I start to think that it would be too painful to put a build into Hudson. It means that something is not quite right.
What I like about Hudson is that it's flexible enough to accommodate different workflows. We use it for both builds / unit tests and releases. And it eliminates a lot of the worry about certain release procedures only working in one person's environment.
What I don't like about Hudson is that it is occasionally unstable when new builds break plugins. I've had a couple of upgrades (2 out of 10 or so) go bad because of incompatibilities. I do two things now:
I never upgrade my team's Hudson server to the latest and greatest right away. I generally only upgrade when there are significant new features, or bug fixes.
I now have a basic Hudson instance set up with all my plugins on a virtual machine with some dummy builds that I fire up to test out any new upgrades before doing it on the public server.
I have a few Visual Studio Solutions/Projects that are being worked on in my company, which now require a scheme for automatic nightly builds. Such a scheme needs to be able to check the latest versions from SVN, build the solutions, create the appropriate downloadable files (including installers, documentation, etc.), send e-mails to the developers upon errors and all sorts of other nifty things. What tool, or tool-set, should I use for this?
I used to use FinalBuilder a few years ago and I liked that a lot but I'm not sure if they support such features as nightly-builds and email messages.
At my work we use CCNET, but with builds on check-in more than nightly - although it's easily configured for either or both.
You can very easily set up unit tests, FxCop analysis, and a slew of other tools to run on every check-in as well.
I would also advise checking out TeamCity as an option, because it has a free version, and the reporting and setup are reportedly much simpler (it does look nice to me). It does have a limit of somewhere around 20 team members/projects before it hits a pay-for tier.
That said, we started with CCNET, have grown several products too large for TeamCity's free version, and are very happy with what we have.
Features that help with CCNET include:
XML-based configuration - you can usually copy and paste most of what you need.
More or less, you'll be able to plug your treesurgeon script in as your build script and point CCNET at it as an executable task to run the compilation.
Lots of documentation, and it's very easy to set up NUnit, NCover, FxCop, etc.
A taskbar app that will let you know the status of your projects at any time; it can also fire off an email or keep an RSS feed with the same information.
But I'd definitely go with running a CI build on every check-in - for the most part we run the unit tests before checking in, but we let the CCNET server handle any applications/assemblies that depend on the assembly being checked in, so they get rebuilt and retested on every check-in.
Given that CCNET is free and takes very little time to set up, I'd highly recommend just going for it and seeing if it suits you, then expanding from there.
(There's another thread here where I posted pretty much the same/with a few alterations - but some of the other comments may help too! Automated Builds)
Edit to add: You can easily set up your own deployment scheme for CCNET, and there are a tonne of blog posts out there to assist. Email notifications can be set up fairly granularly: on all successes, on all failures, when the status changes from success to fail, etc. There's also built-in RSS, and you could even set up your own notifiers for other systems.
FinalBuilder does support emailing, and just executing FinalBuilder each night will give you nightly builds. You don't really need other software for that if you don't want it.
You could also use CCNet to trigger a build when needed and have it execute FinalBuilder for the build. You can then decide if FinalBuilder or CCNet should email.
Finally, FinalBuilder has a Server version, which is sorta like CCNet in that it is a continuous integration tool built around FinalBuilder. See http://www.finalbuilder.com/finalbuilder-server.aspx
Of course the biggest advantage of CCNet is that it is free and open source.
Although it costs money, I highly recommend Visual Build. It works with MSBuild and older tools like Visual Basic. It is scriptable and can do everything from making installers to simple continuous integration.
We just started using Hudson here at the office.
It's free and open source, and it has a very user-friendly UI. Plus, there are tons of options and plugins available.
I was up and running in a matter of minutes after installing it. All the other devs here are loving it.
All in all, it's a very elegant solution for continuous integration or nightly builds.
I've recently started using CruiseControl.NET (http://confluence.public.thoughtworks.org/display/CCNET/Welcome+to+CruiseControl.NET). It works reasonably well, although configuration could be easier. CruiseControl.NET is free and open source, and seems to integrate with most standard tools, although I've personally only used it with CVS, SVN, NUnit and MSBuild.
Luntbuild
Supports a wide variety of source control and build systems. Very customizable. Open Source. Setup takes some time, but it's not too horrible.
Buildbot is open source and very powerful too. You should take a look at it.
Cascade supports doing a build on every single change committed to the repository.
I would not recommend doing only nightly builds -- that's a pretty long window where a build break can slip in before it's reported.