Continuous Delivery - Handling in-between SVN commits [closed] - continuous-integration

Suppose a deployment pipeline is running: SVN tagging and the development version change are in progress. At that moment a developer commits his changes, so there is a chance that the CI server releases the newly committed, untested changes to production, or that some other conflict occurs. How can I handle this situation? Do I need to lock the entire trunk until the build pipeline completes, or is there another workaround?

If I understand correctly, you assume the following steps:
after a commit, the build server checks out the current trunk (let's say revision A),
performs the build,
executes some tests,
tags the trunk if the tests are successful,
and deploys to production (still only if the tests are successful).
The "crazy" developer commits between step 3 and 4 and thus creates revision B. Now you assume that the build server will again check out the latest revision (which would be revision B). This behaviour could indeed cause some trouble.
However, the build server should do all the steps based on a specific revision, which is not a problem in common setups. E.g. Jenkins usually has a check-out step at the beginning of the job. If there is a tagging step at the end, you usually do not want Jenkins to blindly tag the current trunk (causing the problem you describe), but instead to tag the revision that is checked out in Jenkins' workspace.
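For illustration, here is a minimal sketch of such a tagging step (the repository URL is a placeholder; SVN_REVISION and BUILD_NUMBER are the environment variables Jenkins exposes for Subversion-based builds):

    # Tag exactly the revision that was built, not whatever trunk
    # currently points at; the URL@REV peg syntax pins the source.
    svn copy "http://svn.example.com/repo/trunk@${SVN_REVISION}" \
             "http://svn.example.com/repo/tags/build-${BUILD_NUMBER}" \
             -m "Tag build ${BUILD_NUMBER} at revision ${SVN_REVISION}"

This way, the commit that created revision B is simply ignored by the running pipeline and gets picked up by the next one.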
Additionally, please consider that there should be at least some manual approval step before anything gets deployed automatically to production. This is usually mentioned in the context of continuous delivery as far as I can see.
The key to continuous delivery is, IMHO, that you are able to deploy the current version of your source code at any time at the push of a button. It does not mean that every commit gets deployed automatically.

Related

Can SonarQube perform retrospective analysis of past commits?

I am considering introducing a code quality tool for our team's projects.
SonarQube seems to be a good choice. I haven't figured out the ideal workflow yet (we use SVN and Maven projects and have a Jenkins server running the tests on every commit).
Aside from the importance of being able to analyse the quality of the current commit, historical evolution is also very interesting.
Given that we already have a few years of commits, is it possible, when setting up the project, to request a retrospective analysis of those commits, or will SonarQube only work for the commits from the day it is installed onwards?
SonarQube only displays data uploaded by scanners. You can check out any commit (read more here: How to checkout a specific Subversion revision from the command line?) and then execute a scanner (a short sketch follows the list below). Which scanner you use depends on your build tool:
Ant
Gradle
Maven
MSBuild
Other
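For example, a minimal sketch for a Maven project might look like this (the URL, revision, and server address are placeholders):

    # Check out one specific revision and analyze it with the Maven scanner
    svn checkout -r 12345 "http://svn.example.com/repo/trunk" checkout-r12345
    cd checkout-r12345
    mvn sonar:sonar -Dsonar.host.url=http://localhost:9000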
The analysis result will be pushed to the SonarQube server. Unfortunately, it is always treated as the latest version of the application, so you cannot "insert" an analysis of some old commit into the project history. But do you really need that? Scanners always analyze all sources. If somebody added some code three years ago and nobody deleted it, then it will still be visible on the server. If the code was deleted, then you shouldn't spend time analyzing something that doesn't exist anymore. That's why SonarQube always shows the latest state of the project.
You can read a good blog post written by Fabrice Bellingrad (April 06, 2016): Stop planning; fix the leak!
Read more about SonarQube Architecture and Integration.
Yes, this is possible using the sonar.projectDate analysis parameter. Its purpose is precisely what you are asking for.
Quote from the docs:
Assign a date to the analysis. This parameter is only useful when you need to retroactively create the history of a not-analyzed-before project. The format is yyyy-MM-dd, for example: 2010-12-01. Since you cannot perform an analysis dated prior to the most recent one in the database, you must recreate your project history in chronological order, oldest first.
You could, for example, check out your last 10 version tags in chronological order (oldest first!) and, for each tag, run the analysis with sonar.projectDate set to the date the tag was created.
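A rough sketch of that replay loop for a Maven project (tag names, dates, and the repository URL are placeholders, not taken from the question):

    # Oldest first: sonar.projectDate must increase from one analysis to the next
    for entry in "v1.0:2014-03-01" "v1.1:2014-09-15" "v2.0:2015-02-20"; do
      tag="${entry%%:*}"; date="${entry##*:}"    # split "tag:date"
      svn checkout "http://svn.example.com/repo/tags/${tag}" "src-${tag}"
      (cd "src-${tag}" && mvn sonar:sonar -Dsonar.projectDate="${date}")
    done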

How to use Git tools nested in VS2013 properly? [closed]

I've got several questions about the Git functionality built into VS2013 (I didn't install any third-party Git tools).
If I fork a project on the GitHub website as the server side, and I clone the project onto my local disk and start editing, deleting, and adding files, then:
1) If I "commit" all the changes, will these changes impact my forked project on the server side?
2) If I then "Push" all the changes after commit, will these changes impact my forked project on the server side?
3) If I then "Push" all the changes after commit, will this also send a pull request to the original project where I forked so that the original author knows I've sent some changes and maybe he/she can do merges?
4) If I click "Pull", and if the original has some changes (differing from mine), will my fork project be synchronized with the latest version from the original one?
5) If I click "Fetch", what will happen? what's the most difference between "Fetch" and "Pull" in VS2013?
1) If I "commit" all the changes, will these changes impact my forked project on the server side?
No, when you do a commit locally this affects the local branch, not the remote one.
2) If I then "Push" all the changes after commit, will these changes impact my forked project on the server side?
Yes, pushing your branch is how you "sync" the local and remote versions.
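In plain Git terms (these are the commands the VS2013 buttons drive; origin and master are just the usual default names):

    git commit -am "my change"    # question 1: recorded in the local branch only
    git push origin master        # question 2: now the fork on GitHub is updated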
3) If I then "Push" all the changes after commit, will this also send a pull request to the original project where I forked so that the original author knows I've sent some changes and maybe he/she can do merges?
No, pushing only updates your fork itself. To let the original author know, you have to explicitly open a pull request on GitHub; a push alone does not create one.
4) If I click "Pull", and if the original has some changes (differing from mine), will my fork project be synchronized with the latest version from the original one?
What happens depends on your pull strategy: it could trigger a merge or a rebase, depending on your settings. In either case, yes, you will "sync" with the remote branch.
5) If I click "Fetch", what will happen? And what is the main difference between "Fetch" and "Pull" in VS2013?
Doing a fetch tells Git to bring the changes from the remote into your local repository, but it will not merge or rebase them into any branch. Here is a simple equation to remember:
git pull = git fetch + git merge/rebase
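Spelled out with the usual default names (origin and master are assumptions here):

    git fetch origin              # update remote-tracking branches; working tree untouched
    git merge origin/master      # integrate the fetched changes into the current branch
    # the one-step equivalent:
    git pull origin master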

Is it possible to use the maven-release-plugin with a specific revision?

I am thinking about a deployment pipeline using SVN, Jenkins and Maven. At the moment I'm stuck at the point where I usually would call mvn release:perform on a working copy.
When thinking in deployment pipelines, I want to create a pipeline where every commit could be used to release a software to test/production. Let's say I have 5 builds, and I decide to release build 3 (with revision 3) to production. There will already be 2 new commits to trunk (which is now at revision 5).
Is it possible to use the maven-release-plugin to checkout/build/tag/commit a release at revision 3? When the maven-release-plugin finishes the release it usually commits the modified POMs to trunk.
I'm happy about any kind of information or advice here, so feel free to point me to books (like http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912), blog posts, Jenkins documentation... Maybe I'm completely on the wrong track.
By default, the release plugin creates the release based on the contents of your working copy; it just ensures that you don't have any uncommitted content before doing so. AFAIK it doesn't force an update of the sources, as that's usually the job of the continuous integration system (Jenkins in your case). So whatever is checked out by Jenkins will be released.
What you're trying to do sounds more like a configuration change on the Jenkins side, pointing it to the right revision.
On the other hand, if the POM files are modified as part of the release but have been changed in SVN in the meantime, you will run into a conflict when Maven wants to check in the modified POM files. That's a situation that might happen, depending on how far back you want to go with the release.
Based on this, it might make more sense to always create a branch before doing a release. So you would create a branch based on revision 3 and then create your release in that branch. This way, you wouldn't run into issues with committing resources that have changed in more recent revisions.
Creating the branch and checking it out could probably be automated through Jenkins and Maven as well.
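A rough sketch of that approach (repository URL, revision number, and branch name are placeholders):

    # Create the branch server-side from the pinned revision
    svn copy -r 3 "http://svn.example.com/repo/trunk" \
                  "http://svn.example.com/repo/branches/release-from-r3" \
                  -m "Release branch cut at revision 3"
    # Run the release from the branch; the plugin's POM commits then
    # land on the branch instead of the trunk that has moved on
    svn checkout "http://svn.example.com/repo/branches/release-from-r3" release-work
    cd release-work
    mvn release:prepare release:perform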
As far as I have tested it, it is not possible.
More explicitly, as nwinler said, when you release, Maven tries to commit the modified POM. But if you release from an older revision than the current one, SVN will complain that your sources are not up to date, so it won't work... as far as I know.
You may want to read up on build promotion. I haven't found any docs clear enough to point to (in the few minutes spent writing this message).

storing internal artifacts for how long? [closed]

I am interested in knowing what other teams are doing about limiting internal artifact storage.
So, how long is an internal artifact stored in Artifactory?
Sonatype (people behind Maven and Nexus) published a blog article on this issue:
http://www.sonatype.com/people/2012/01/releases-are-forever/
The vast majority of files published into our Maven repository are snapshot releases. Both Nexus and Artifactory have functionality for periodically purging old snapshots (useful for keeping disk usage under control).
It's the management of release builds that becomes the issue. In my opinion this falls into a couple of categories
Not every release is used
During QA some releases are rejected, this means it makes sense to publish these into a temporary "staging" repository, prior to full release.
I call these "release candidates" and Nexus Professional has functionality to manage these for me. (I assume artifactory also supports staging)
Not every release is needed
Sonatype's blog addresses this point. Applications in production rarely need to roll back to a version older than 6 months, and applications in your Maven repository are unlikely to be used as dependencies in a third-party build, so it calls into question the need for continued storage.
Deleting such artifacts remains a judgement call.
We store every released artifact, and I'm pretty sure you should do that too unless you have a really strong reason not to. We limit snapshots to just the last one per artifact, and only keep that if there's no corresponding release version; however, we ensure that every snapshot lives for at least 3 days. It's easy to set this up in Nexus, and AFAIR it's more or less its default snapshot policy.

Is Continuous Integration important for a solo developer? [closed]

I've never used CI tools before, but from what I've read, I'm not sure it would provide any benefit to a solo developer that isn't writing code every day.
First - what benefits does CI provide to any project?
Second - who should use CI? Does it benefit all developers?
The basic concept of CI is that you have a system that builds the code and runs automated tests every time someone makes a commit to the version control system. These tests would include unit and functional tests, or even behavior-driven tests.
The benefit is that you know - immediately - when someone has broken the build.
This means either:
A. They committed code that prevents compilation, which would screw anyone up.
B. They committed code that broke some tests, which means either that they introduced a bug that needs to be fixed, or that the tests need to be updated to reflect the change in the code.
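To make the concept concrete, here is a toy polling loop (not a real CI server; the repository URL and build command are placeholders) that captures the essence of what a CI system does:

    last_built=""
    while true; do
      latest=$(svn info --show-item revision "http://svn.example.com/repo/trunk")
      if [ "$latest" != "$last_built" ]; then
        rm -rf work && svn checkout -q "http://svn.example.com/repo/trunk" work
        (cd work && mvn -q clean verify) \
          && echo "r$latest: build OK" \
          || echo "r$latest: BUILD BROKEN"    # the immediate feedback CI provides
        last_built="$latest"
      fi
      sleep 60    # poll the repository once a minute
    done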
If you are a solo developer, CI isn't quite as useful if you are in a good habit of running your tests before a commit, which is what you should be doing. That being said, you could develop a bad habit of letting the CI do your tests for you.
As a solo programmer, it mainly comes down to discipline. Using CI is a useful skill to have, but you want to avoid developing any bad habits that wouldn't translate to a team environment.
As other people have noted, CI does have advantages for a solo developer. But the question you have to ask yourself is; is it worth the overhead? If you're like me, it will probably take an hour or two to set up a CI system for a project, just because I'll have to allocate a server, set up all the networking, and install the software. Remember that the CI system will only be saving you a few seconds at a time. For a solo developer, these times aren't likely to add up to more than the time it took to do the CI setup.
However, if you've never set up a CI system before, I recommend doing it just for the sake of learning how to do it. It doesn't take so long that it isn't worth the learning experience.
The benefit of CI lies in the ability to discover early when a check in has broken the build. You can also run your suite of automated tests against the build, as well as run any kind of tools to give you metrics and such.
Obviously, this is very valuable when you have a team of committers, not all of whom are diligent about checking for breaking changes. As a solo developer, it is not quite as valuable. Presumably, you run your unit tests, and maybe even integration tests. However, I have seen a number of occasions where a developer forgets to check in one file out of a set.
The CI build can also be thought of as your "release" build. The environment should be stable, and unaffected by whatever development gizmo you just add to your machine. It should allow you to always reproduce a build.
This can be valuable if you add a new dependency to your project, and forget to setup the release build environment to take that into account.
If you need to support multiple compilers then it's handy to have a CI build system do all of that whilst you just develop in one IDE. My code builds with VC6 through VS2008 in x86, with additional x64 builds on VS2005 and VS2008, so that's 7 builds per project, per configuration... Having a CI system means that I can develop in one IDE and let the CI system prove that all of the compilers that I support still build.
Likewise, if you are building libs that are used by multiple projects then CI will make sure they work with ALL of the projects rather than just the one that you're working with right now...
The truth is that continuous integration makes the most sense in teams. A single developer can also get some advantages; you must decide for yourself whether they are enough to justify the time you invest in setting up a CI system.
If you forget to check in some needed file, the repository contains a broken version, even if everything works on your machine. CI would detect that case.
If your CI server runs on a different machine, it can reveal hidden dependencies on your build environment. That means the build and all tests may work on your dev box, while on another machine some dependencies aren't fulfilled and the build breaks.
Daily builds can indicate that your older software doesn't work with the newest upgrade of the OS/compiler/library...
If your CI system keeps an archive of build artifacts, you can easily get a distribution of an older version of your software.
Some CI systems have a nice interface showing metrics about your builds, with links to automatically generated documentation and the like.
We use our CI system to do Release builds (as well as the usual automatic "on-commit" builds).
Being able to click a button that kicks off a Release build that steps through all the processes to release a setup is:
fast (I can go straight on with other things, and it runs on a separate machine so it is not slowing me down);
repeatable (it doesn't forget anything, including copying the setup to the release folder and notifying everyone who needs to know);
dependable (no mistakes, unlike a human!).
In an Agile environment, where you expect to be delivering working software every 2-4 weeks, this is definitely worth having, even in a team of 1.
CI benefits a solo developer in the sense that you're aware if you forgot to check something in (because the build will be broken). The integration value of it is diminished when there are no other developers, though.

Resources