I am using Drone as my CI/CD tool, and I'm facing an issue with it. My team consists of developers and testers (who are also developers), and the test team wants to put a specific branch in the staging environment to test everything before merging it. The issue is: how do I do that?
Drone has many configuration options that can be found in the documentation. I am looking for something that lets my team open a job, pick a branch from some sort of dropdown, and run the job against that branch. This is easy to do in Jenkins. Is there any way to do that in Drone?
Thanks for any help.
Drone is very git-commit driven, which took some getting used to when we transitioned away from Jenkins.
You can probably do what you described with promotions. Promotions let you re-run a previous build with specific parameters. This lets you run different pipeline steps, or even completely different pipelines, depending on the promotion target specified.
I think a combination of promotions and triggers will get you where you need to go. Here is the documentation on triggers.
So I would create a pipeline that is triggered by promotion to a 'staging' target. You can also ask more questions in the Drone community Slack.
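A minimal sketch of what that could look like, assuming Drone 1.x YAML; the image and deploy script are placeholders for whatever your staging deployment actually needs:

```yaml
kind: pipeline
type: docker
name: deploy-staging

steps:
  - name: deploy
    image: alpine:3.19                # placeholder image
    commands:
      - ./scripts/deploy.sh staging   # hypothetical deploy script

trigger:
  event:
    - promote                         # only runs when a build is promoted...
  target:
    - staging                         # ...to the 'staging' target
```

A tester can then promote any build (and therefore any branch's commit) from the Drone UI, or with the CLI (drone build promote <repo> <build-number> staging). That is roughly the dropdown-style workflow you're describing, only keyed on builds instead of branches.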
Hey guys, I work with a team of four people, and only two of us really push changes to the master branch. As a result, we communicate regularly to make sure we are not working on the same file of the solution, to avoid merge conflicts. Definitely not best practice.
However, we will be bringing in more people to work with us, and it has been proposed that we switch to a continuous integration and continuous deployment environment. I understand the concept of a CI/CD environment, but creating such an environment will be the responsibility of another team.
My task, however, is coming up with a document for developers on how we should program going forward. Every time I Google something like "CI/CD best practices" I get results about building the CI/CD environment, but I am interested in the best practices developers must follow when programming in a CI/CD environment, as opposed to building one.
I am aware of the obvious ones: "commit your changes often, document your code, use good names". So I am trying to track down any publications or sources (from Microsoft, Amazon, etc.) that mention best practices that are not so obvious. Could anyone point me in the right direction?
Please and thanks!
From my experience, the best workflow is to restrict access to the master branch: no one should be allowed to push to it directly.
All changes should be made through PRs and pass all the tests defined in your CI pipeline.
That way your master branch always contains working code, which you can deploy directly to a staging environment. Once the code on master is acceptable, you can cut a release and have your CD pipeline deploy it to production.
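To make that concrete, here is a rough sketch of such a gate, written as Drone-style YAML only because Drone comes up elsewhere in these threads; any CI tool with branch and PR triggers can express the same thing. Image names and scripts are placeholders:

```yaml
kind: pipeline
type: docker
name: default

steps:
  - name: test
    image: node:20                     # placeholder build image
    commands:
      - npm ci
      - npm test                       # every PR must pass this before merging

  - name: deploy-staging
    image: alpine:3.19                 # placeholder
    commands:
      - ./scripts/deploy.sh staging    # hypothetical script
    when:
      branch:
        - master
      event:
        - push                         # runs only after a PR is merged to master

  - name: deploy-production
    image: alpine:3.19
    commands:
      - ./scripts/deploy.sh production
    when:
      event:
        - tag                          # runs only when you cut a release tag
```

Combined with a branch-protection rule that requires the test step to pass before merging, this keeps master deployable at all times.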
We have a project on Snap CI, but since it's going away we have to switch to another tool.
GoCD is our favorite, but there is an important feature I am not sure it supports: branch tracking.
Snap CI supports branch tracking, which is currently enabled in our project: 'This repository has automatic branch tracking enabled for all branches starting with ***'.
I tried to set up GoCD to do exactly this, but I couldn't find a way to achieve that behavior. The only thing I found was the feature branch / pull request plugin.
Do you know whether such a feature is supported, or how I would have to configure the FB/PR plugin?
Thanks!
We wanted to do something similar and were faced with the same problem. In the end, we couldn't get direct branch tracking, so we wrote a GoCD API client that creates pipelines from a template, with the branch of the Git material set to a parameter (e.g. #branch). The client is run manually when branches are created, but it could quite easily be adapted to run from hooks to automate it.
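For what it's worth, here is a rough sketch of what such a client can look like in Python. The endpoint, Accept header version, and JSON field names are from memory and should be checked against the pipeline config API docs for your GoCD version; the group, template, and repository names are made up:

```python
"""Create a per-branch GoCD pipeline from an existing template.

Sketch only: endpoint, Accept header version, and payload field names are
assumptions -- verify them against your GoCD server's pipeline config API.
"""
import sys

import requests

GOCD_API = "https://gocd.example.com/go/api/admin/pipelines"  # hypothetical server
ACCEPT = "application/vnd.go.cd.v11+json"                     # adjust to your version


def create_branch_pipeline(branch: str, auth: tuple[str, str]) -> None:
    name = "myapp-" + branch.replace("/", "-")   # pipeline names can't contain '/'
    payload = {
        "group": "feature-branches",             # made-up pipeline group
        "pipeline": {
            "name": name,
            "template": "myapp-template",        # existing template whose jobs use #{branch}
            "parameters": [{"name": "branch", "value": branch}],
            "materials": [{
                "type": "git",
                "attributes": {
                    "url": "https://example.com/myapp.git",  # placeholder repo
                    "branch": branch,
                },
            }],
        },
    }
    resp = requests.post(GOCD_API, json=payload, headers={"Accept": ACCEPT}, auth=auth)
    resp.raise_for_status()


if __name__ == "__main__":
    # e.g. python create_pipeline.py feature/login  (run manually or from a git hook)
    create_branch_pipeline(sys.argv[1], auth=("admin", "secret"))  # placeholder credentials
```

Wiring this up to a post-receive hook on the Git server would get you most of the way to Snap-style branch tracking.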
I was referred to Hudson today.
I have heard about continuous integration before, but I have no idea what the heck a CI server is.
Hudson is really easy to install on Ubuntu, and within several minutes I managed to set up an instance of it.
But I don't quite understand the workflow of a CI server, or how I am supposed to use it.
Please tell me if you have experience with CI. Thanks in advance.
Edit:
I am currently using Mercurial as my SCM, and I wonder what the right way to use it with Hudson is.
I have installed the Mercurial plugin for Hudson and created a new job with a local repository. When I commit to the repository, the Hudson job builds the latest version of my source code.
If I use a remote repository instead, what is the workflow like?
Is it something like the following?
Set up a Hudson job with the repository
Developer makes a local clone of the repository
Developer commits and pushes changes
The remote repository is updated with the incoming changeset
Run a Hudson build
I may have misunderstood something entirely; please help me point it out.
Continuous Integration is the process of integrating software continuously, i.e. as frequently as possible (ultimately after each set of changes), to avoid big-bang integration and all the problems that follow from it, by getting immediate feedback.
To implement Continuous Integration, you first need to automate the build of your software (where "build" means compiling the sources and packaging them, of course, but also compiling the tests, running the tests, running quality checks, etc.: anything that helps you get feedback on the health of your code). Then you need to trigger the build of the latest version of the sources on a particular event (a change in the repository, a scheduled time), to generate reports, and to send notifications upon failure (by mail, Twitter, etc.).
And this is precisely the responsibility of a CI engine: offering trigger mechanisms, fetching the latest version of the sources, running the build, generating and publishing reports, and sending notifications.
And because running a build is CPU- and disk-intensive, CI engines usually run on a dedicated machine (or even a farm of machines if you want to build lots of projects).
Back to your question now. Once you've got Hudson running, configure it (Manage Hudson > Configure System): set up the JDK, the build tools, etc. Then set up a Hudson job and follow the steps: configure the location of the source repository, the build tool, the trigger, and a notification channel, and you're done (you can do more complex things, but that's a start).
For more details on the setup, check:
The official Use Hudson guide. << START HERE
Continuous Integration with Hudson - Tutorial.
Spot defects early with Continuous Integration.
Martin Fowler's overview of continuous integration is one of the canonical references. In my opinion, using automation to make sure your code base is healthy is one of the most useful things that you can set up.
Update: Sorry that I didn't have much time earlier to expand on my reply. @Pascal_Thivent is right that in order to use CI effectively, you need to be able to automate your builds, tests, etc. CI is actually a good forcing function for this. For me, it's one of those little warning flags if I start to think that it would be too painful to put a build into Hudson: it means that something is not quite right.
What I like about Hudson is that it's flexible enough to accommodate different workflows. We use it for both builds / unit tests and releases. And it eliminates a lot of the worry about certain release procedures only working in one person's environment.
What I don't like about Hudson is that it is occasionally unstable when new releases break plugins. I've had a couple of upgrades (2 out of 10 or so) go bad because of incompatibilities. I do two things now:
I never upgrade my team's Hudson server to the latest and greatest right away. I generally only upgrade when there are significant new features or bug fixes.
I now have a basic Hudson instance, with all my plugins, set up on a virtual machine with some dummy builds that I fire up to test out any new upgrades before applying them to the public server.
We have a large collection of nAnt scripts that build our various products. They almost all have the following structure:
1. Erase old working copy.
2. Check out complete fresh copy from version control.
3. Increment build number in appropriate file (custom nAnt task).
4. Run static analysis (StyleCop, Perl scripts).
5. Build solution using Visual Studio - ends up with MSI output.
6. Run unit tests (nUnit, JSUnit).
7. Run static analysis (FxCop).
8. Zip up deliverables (MSI, readme, etc.) into well-named package.
9. Put this zip package onto a server share.
10. Email results to team.
From our research, it seems that CruiseControl(.net?)/Hudson/BuildBot would only add the trigger that causes the build (which at the moment is someone double-clicking the nAnt script over Remote Desktop) and a status dashboard.
Are we missing anything else significant?
The question is subjective, and thus so is my answer.
In the projects I've automated before, CruiseControl was used essentially for that one purpose: so we didn't have to remote into the build machine and trigger builds. The CI part is that CruiseControl will monitor the repository for you, triggering builds at the intervals you define.
It also gave us the dashboard from which we could trigger releases, or go back to examine the logs and artefacts of past builds.
For us that was enough benefit to implement CruiseControl. Perhaps it doesn't "seem" like much until you've finished it and, a month later, realize you haven't had to touch your build system because it's been off silently and thanklessly doing its thing for you.
A Continuous Integration server such as Hudson would do steps 1, 2, 3, 9 and 10 for you so that you don't have to implement them yourself. If you've already got it working, that's maybe not a huge improvement for your current project, but it makes things simpler for subsequent projects. It would also, as you mention, take care of when to trigger the build.
Hudson will also chart various trends over time, such as test coverage, build time, static analysis results. You can also have more sophisticated notifications than just e-mail if you choose.
The most important thing it gives you is visual feedback (the bigger the screen, the better). When you have one machine dedicated to displaying build results, visible to all team members, it works like a catalyst: people see that something is wrong and fix it.
If you have something like that standing in a place where your boss can see it and ask you, "Hey Wilkinson, why is this screen red?", won't you fix your build faster?
They all look much the same; you can pick whichever you think fits your needs, just have one set up and running.
These questions are for TeamCity users only
1) Is it possible to configure TeamCity to extract build artifact information based on your own regular expressions? This is exactly what Pulse does.
2) Does TeamCity integrate with any task/bug-tracking tools, like JIRA?
3) This question is for people who run static code analyzers only. A tool like PC-Lint/Visual Lint can generate XML reports. Can TeamCity be configured to parse these artifacts and generate a build failure?
4) I'm currently evaluating TeamCity, and their community forum doesn't seem to be very active. For those who pay for support, how is JetBrains' support? Is it good? Atlassian's seems to be much better.
TeamCity lets you collect build artifacts using Ant-based patterns. You can specify multiple patterns and set a target directory for each pattern. Read more at http://www.jetbrains.net/confluence/display/TCD4/Build+Artifact
There is an integration which allows an issue reference such as RF-3432 in a commit comment to be linked to the corresponding JIRA issue. More advanced integration may appear in the next release of TeamCity. Read more at http://www.jetbrains.net/confluence/display/TCD4/Mapping+External+Links+in+Comments
Only with a custom plugin. Or your build process can emit a specific "echo" message that changes the build status and description (see the sketch after this answer).
OK, I'm a JetBrainer. We may not respond immediately, but we strive to answer forum questions ASAP. Paid customers also have e-mail support.
Hope this helps,
KIR
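To make the "echo message" suggestion from point 3 concrete: TeamCity watches a build step's stdout for service messages, so the step only has to print one to fail the build. A minimal sketch in Python, assuming a hypothetical lint-report.xml whose layout you would adapt to your PC-Lint/Visual Lint output:

```python
"""Fail a TeamCity build from a lint XML report via a service message.

The report path and XML layout are assumptions; only the service-message
format itself is TeamCity's.
"""
import xml.etree.ElementTree as ET


def tc_escape(text: str) -> str:
    # Service messages require |, ', [, ], and newlines to be escaped
    for ch, esc in (("|", "||"), ("'", "|'"), ("[", "|["), ("]", "|]"), ("\n", "|n")):
        text = text.replace(ch, esc)
    return text


issues = ET.parse("lint-report.xml").getroot().findall(".//issue")  # assumed report layout
if issues:
    message = "%d lint issue(s) found" % len(issues)
    # TeamCity reads this from stdout and marks the build as failed
    print("##teamcity[buildProblem description='%s']" % tc_escape(message))
```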
Disclaimer: I don't work for JetBrains! But I've worked with Pulse and TeamCity in my current job.
Build Artifacts: Yes, TeamCity will export artifacts that remain after a build. You can define Ant-style wildcard patterns to match files (the default pattern matches any files left in the root build directory). These files can be seen from the project view against each individual build.
You can also use special service commands in a build script to export artifacts immediately along the way; I do this for a code complexity tool that generates XML files, for which I've also defined a custom graph.
Bug Tracking: I don't have experience with this, but KIR pointed out some alternatives.
XML Parsing: You can control this with Ant. I included a third-party tool called andariel in my build that can run XPath queries across XML documents, then used service messages to export the result (in this case, a count of methods exceeding a complexity limit) to be displayed in a custom graph.
I believe you could also publish the artifacts, provide TeamCity with an XSL stylesheet to render the XML, and create an additional tab in your build results to display it (however, I have not done this).
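If you would rather avoid an extra Ant dependency, the same XPath-and-service-message idea is only a few lines of Python. The report name, XML layout, and statistic key below are assumptions; only the service-message format is TeamCity's:

```python
"""Count methods over a complexity limit and publish the number to TeamCity.

The 'complexMethods' key can then back a custom graph in the build results.
"""
import xml.etree.ElementTree as ET

LIMIT = 10
root = ET.parse("complexity-report.xml").getroot()
# Assumed layout: <method name="..." complexity="12"/> elements anywhere in the report
count = sum(1 for m in root.iter("method") if int(m.get("complexity", "0")) > LIMIT)

# TeamCity picks this statistic up from stdout
print("##teamcity[buildStatisticValue key='complexMethods' value='%d']" % count)
```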
Tech Support: I've found the community forums to be pretty good; most questions I've asked were answered within a day or two by both civilians and JetBrains employees, and I was using the free 'Professional' version.
I can only imagine that email support will be just as good if not better!
I am a little confused by this question, because the way I use TeamCity (and, I guess, the way it was designed to be used) is to let the build script, not TeamCity, remain the owner of the build logic.
In other words, if you need TeamCity to do something cool, just add that cool stuff to your build script, either using an existing task in your build system or writing one yourself.
TeamCity supports NAnt, MSBuild, Ant and, I am sure, any other build platform you can install on the build agents.
The only integration I want TeamCity or any other CI platform to have is source control integration with my choice of SCM. The rest of the integration should be controlled by my build script. That way, I configure TeamCity only once at the beginning of each project and then never touch it again. The build script, in contrast, can change from version to version.
So the indirect answer to your question is: yes, theoretically, through the build script.
Hope this helps.