Liquibase: tracking changelogs with Maven

The project
We are ~50 developers plus DevOps staff and run ~30 Oracle 12c EE instances. We introduced Liquibase in 2018.
We use the Liquibase Maven plugin version 3.8.8. Changelogs are stored in numerous Maven projects, which are committed to Subversion in the usual trunk/tags/branches structure.
The Goal
We want to ease provisioning of new database instances with release versions matching the respective environments. A typical use case is setting up a fresh database in the integration test environment.
One would start with an empty database schema and apply changelogs up to a certain version.
Unfortunately, changelogs that were applied to a schema are often stored in different Maven projects. This makes them hard to locate.
Liquibase does NOT store the actual changeset contents (the concrete DDL) in the DATABASECHANGELOG table. If it did, that would solve the problem.
In search of a solution, I first used Maven to store the changelog's SVN revision in the DATABASECHANGELOG table when liquibase:update was executed.
Retrieving changelogs based on the revision number was error-prone.
I have spent a week now trying to find a robust solution, googled for hours and built several test cases (with adapted parent and concrete POMs, partly using the Maven SCM plugin and such), but without luck. Initially, I planned to use liquibase:tag to store file path + revision, but this works only if all changesets are in one single changelog file, which is not the case.
Of course, it is desirable to have all changelogs stored in ONE location,
but this is not always possible. For example, scripts that require DBA privileges must be committed to separate Maven projects.
I need a strong reference between each changeset and the corresponding changelog file, or the changelog must be stored directly in the DATABASECHANGELOG.
With our current setup, "database versioning" with Liquibase is not possible. There is theoretical
traceability, but it is up to the users to somehow locate the original changelogs in a huge mess of 100+ separate Maven projects.
Question 1: Is it possible to store the actual changelog content for each changeset into the DATABASECHANGELOG?
Question 2: If not, how can one preserve a reference between a DATABASECHANGELOG entry and the originating changelog file?
(Also, what happens when a changelog file is accidentally deleted from Subversion? The DATABASECHANGELOG would only tell me the date and time of the change, some details and a file name - pretty useless, because the actual file would be gone and there would be no way to restore the actual DDL. To prevent such a scenario, I would back up all changelog files. But for that, the DATABASECHANGELOG metadata is insufficient, as Liquibase does not track SVN revisions and file paths.)
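For reference, the workaround of storing the SVN revision per changeset could look roughly like this. This is only a sketch (file names, ids and paths are hypothetical): it relies on SVN keyword expansion to fill in the revision, and note that the <comment> text becomes part of the changeset checksum, so the value must be fixed at commit time rather than vary per build:

```xml
<!-- changelog.xml (sketch). The <comment> text ends up in the COMMENTS
     column of DATABASECHANGELOG, giving each row a back-reference to the
     originating file and its revision. -->
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">

    <changeSet id="42" author="team">
        <!-- $Revision$ is expanded by SVN keyword substitution, enabled via:
             svn propset svn:keywords "Revision" changelog.xml -->
        <comment>project-a/src/main/liquibase/changelog.xml @ $Revision$</comment>
        <createTable tableName="demo">
            <column name="id" type="number"/>
        </createTable>
    </changeSet>
</databaseChangeLog>
```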

One option would be to combine the various SVN repositories into a new one using SVN externals, and then create a new changelog file.
You can map URLs (SVN tags/branches/revisions) into a folder without copying by using SVN externals: http://svnbook.red-bean.com/en/1.7/svn.advanced.externals.html.
Hope it helps.
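As a sketch of that approach (all URLs and paths here are hypothetical): an aggregator working copy can pull the changelog folders of several projects together via an svn:externals property, optionally pinned to specific revisions:

```
# externals.txt - one external per line, new (SVN >= 1.5) format:
#   [-rREV] URL LOCALPATH
-r1234 https://svn.example.com/repo/project-a/trunk/db changelogs/project-a
https://svn.example.com/repo/project-b/tags/2.1/db changelogs/project-b

# apply the definition to the aggregator working copy and fetch:
svn propset svn:externals -F externals.txt .
svn update
```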

Related

Is it possible for sonarqube to store the rules in git along with the source code of the project?

In other words, is it possible to version rules for different branches of the project?
Is it possible to store a profile in Git?
Yes. You can export a profile to a file using the Back up feature. (Note that rule customizations aren't included.)
Is it possible to have SonarQube store profiles in Git?
No. During analysis your SCM will be queried for blame information, but no part of the SonarQube ecosystem performs commits.
Is it possible to version your profiles and apply different versions to different branches of a project?
Yes, but why would you want to?
There's no point in re-analyzing branches that aren't changing. This implies that if you improve/tighten your standards, you don't want to apply those higher standards to the development that's going on in your branches.
"But new rules would raise all kinds of new issues in our branches & I don't want to have to address them because we're not doing new development there, just fixes." I can hear you saying.
Sure, but if you're using a recent version of SonarQube (>= 6.3), then those new issues will be raised with a date that matches the blame date of the line, i.e. they won't be raised as leak-period issues, but as old issues. Since you're presumably keeping the leak period clean on your branches, this should all work out.
But to answer your question, there's really no need to store old profiles in SCM; just peel off copies in SonarQube and assign them to the appropriate branch projects.
But if you insist on doing this via backups checked in to SCM be aware that you'll need to reconstitute (re-import) your profiles into SonarQube prior to analysis. You can't just point an analysis at a file containing a list of rules and expect it to work.
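If you do go the backup route, the export/import round trip can be scripted against the web service API. This is only a sketch - the server URL, credentials and profile name are hypothetical, and the exact endpoint parameter names vary between SonarQube versions, so verify them against your server's web API documentation:

```
# export the profile and commit the resulting file to Git
curl -u admin:admin -o myprofile.xml \
  "https://sonar.example.com/api/qualityprofiles/backup?language=java&qualityProfile=MyProfile"

# before analyzing a branch, restore the matching backup into the server
curl -u admin:admin -X POST \
  --form backup=@myprofile.xml \
  "https://sonar.example.com/api/qualityprofiles/restore"
```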

How to clone/duplicate an existing project in SonarQube

I have one project in SonarQube with some history and some Confirmed issues, and I need to split this project because there are two versions of the source code, but I need the history and issue changes in both projects. How can I do this? Is it possible to somehow clone or duplicate an existing project under a different name?
Cloning or duplicating a project is not supported.
You can use the Time Machine functionality of SonarQube to "recreate" the past analyses of the project under another name, but it won't recreate the history of changes on issues.

Maven publishing artefacts to remote repository and using $release in the artefact version

Wondering how people manage their project artefacts through an environment lifecycle of, say, DEV - AQA - CQA - RELEASE, and whether there are best practices to follow.
I use a Jenkins build server to build my projects (code checkout then maven build). My artefacts all have version 1.0.0-SNAPSHOT and are published to a local .m2 repo on the build server. There are also Jenkins jobs that rebuild the DEV system (on the same server) using those artefacts. The project build is automated whenever someone checks in code. The DEV build is automated on a nightly basis.
At some point, my lead developer determines that our project is fit to go to AQA (the first level of testing environment on a different server).
For this I need to mark the artefacts as version 1.0.0-1 and publish to a remote AQA repository (it's actually a Nexus repo).
The Maven deploy plugin sounds like the right approach, but how do I change the version number to be effectively 1.0.0-$release (where $release is just an incrementing number starting from 1)? Would Maven/Nexus be able to manage the value of $release, or would I need a simple properties file in my project to store/update the last used $release.
Furthermore, someone tests in AQA and determines the build is fit to move on to CQA (the second testing environment). This is 'promote to CQA'. So my requirement is to copy the artefact from the AQA Nexus repo and publish it to the CQA Nexus repo.
Likewise, after CQA, there'd be a 'promote to RELEASE' job too.
I think the version value remains unchanged during the 'promote' phases. I'd expect the AQA repo to see all versions 1-50, but CQA only 25 and 50, then RELEASE only 50, for example.
I can find loads of info about Maven plugins/goals/phases, but very little about a prescriptive method for how or where to use them outside of the immediate development environment.
Any suggestions gratefully received.
Staging/promoting is out of scope for Maven. Once deployed/uploaded to a remote repository, that system is responsible for the completion of the release cycle. Read this chapter about staging: http://books.sonatype.com/nexus-book/reference/staging.html if you use Nexus.
Build numbers are just that: build numbers. They are not promotion/staging numbers.
You should come up with another means of tracking your promotions, because otherwise one might get confused in "knowing" that build 10.1.4-2 is the same as 10.1.4-6. Certainly, all of the Maven related software will see those two builds as different builds.
In addition, if a person "grabs" the wrong copy of the build, the way you are managing staging within your build number will increase confusion. If you don't kill all of the 10.1.4-2 builds, someone might get a copy without realizing that the build has been promoted to 10.1.4-6. This means that for the "last" staging number to be the most likely one to be grabbed, you must do two things (which are impossible in combination):
Remove all the old staging numbers, updating them to the new ones.
Ensure that no copy of an old staging number escaped the update.
Since people can generally copy files without being tracked, or said files might not be reachable at the time of "update", or the timing of reaching all the files cannot be simultaneous, such a system is doomed to fail.
Instead, I recommend (if you must track by file), placing the same file in different "staging directories". This defines release gateways by whether the file exists in a certain directory, and makes it clear that it is the same file that is going through the entire process. In addition, it becomes easy to have various stages of verification poll their respective directories (and you can write Jenkins tasks to promote from one directory to another, if you really wish).
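The directory-based gateway described above can be sketched in a few lines of shell (all paths and file names are hypothetical). The key point is that the artifact is copied, never renamed, so the same file identifies itself at every stage:

```shell
#!/bin/sh
# Minimal sketch of directory-based promotion: the presence of the file
# in a stage directory marks the gate it has passed.
set -e

ARTIFACT="myapp-1.0.0-42.jar"
STAGES="dev aqa cqa release"
mkdir -p $STAGES                # one directory per promotion gate
: > "dev/$ARTIFACT"             # pretend the build dropped it in dev

# promote ARTIFACT FROM_STAGE TO_STAGE
promote() {
  cp "$2/$1" "$3/$1"            # copy, never move: history stays visible
}

promote "$ARTIFACT" dev aqa     # e.g. a Jenkins 'promote to AQA' job
promote "$ARTIFACT" aqa cqa     # e.g. a Jenkins 'promote to CQA' job
```

A verification job for each stage then simply polls its own directory.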

Is it possible to use the maven-release-plugin with a specific revision?

I am thinking about a deployment pipeline using SVN, Jenkins and Maven. At the moment I'm stuck at the point where I usually would call mvn release:perform on a working copy.
When thinking in deployment pipelines, I want to create a pipeline where every commit could be used to release a software to test/production. Let's say I have 5 builds, and I decide to release build 3 (with revision 3) to production. There will already be 2 new commits to trunk (which is now at revision 5).
Is it possible to use the maven-release-plugin to checkout/build/tag/commit a release at revision 3? When the maven-release-plugin finishes the release it usually commits the modified POMs to trunk.
I'm happy about any kind of information or advice here, so feel free to point me to books (like http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912), blog posts, Jenkins documentation... Maybe I'm completely on the wrong track.
By default, the release plugin creates the release based on the contents of your working copy, it just ensures that you don't have any uncommitted content before doing so. AFAIK it doesn't force an update of the sources, as that's usually the job of the Continuous Integration system (Jenkins in your case). So whatever is checked out by Jenkins will be released.
What you're trying to do sounds more like a configuration change on the Jenkins side, pointing it to the right revision.
On the other hand, if the POM files are modified as part of the release but have been changed in SVN in the meantime, you will run into a conflict when Maven wants to check in the modified POM files. That's a situation that might happen, depending on how far back you want to go with the release.
Based on this, it might make more sense to always create a branch before doing a release. So you would create a branch based on revision 3 and then create your release in that branch. This way, you wouldn't run into issues with committing resources that have changed in more recent revisions.
Creating the branch and checking it out could probably be automated through Jenkins and Maven as well.
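The branch-first approach could be sketched roughly like this (repository URLs and the branch name are hypothetical): cut the branch at exactly the revision you want to ship, then run the release plugin inside a checkout of that branch, so its POM commits never collide with newer trunk revisions:

```
# create a branch from revision 3 of trunk, server-side
svn copy -r 3 https://svn.example.com/repo/trunk \
              https://svn.example.com/repo/branches/release-1.0 \
              -m "Release branch cut at r3"

# check out the branch and release from there
svn checkout https://svn.example.com/repo/branches/release-1.0 release-1.0
cd release-1.0
mvn release:prepare release:perform
```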
As far as I tested it, it is not possible.
More explicitly, as nwinler said, when you release, Maven tries to commit the modified POM. But if it's an older revision than the current one, SVN will complain that your sources are not up to date. So it won't work... as far as I know.
You may want to read docs about promotion builds. I didn't find any clear enough to point to (in the few minutes spent writing this message).

How to take backup of StarTeam project

I have a project repository on a StarTeam server.
I need to take regular backups of it.
How can I achieve this?
The StarTeam backup steps are given in Appendix C of "The StarTeam Administrator's Guide.pdf".
It depends on what you mean by backing up the project. If you mean backing up the entire repository, then StarTeam makes this really easy: you just need a snapshot of the DB and a full copy of the repository files (the full steps are documented). However, if you mean backing up a specific project in the repository, and ONLY that project, with all history intact, then this is not currently possible - or at least it is a major challenge.
StarTeam used to have an ability to Import/Export projects but they discontinued support and development of that tool years ago. If you wish to back up a single Project independent of the rest of the server, then this is still possible, and useful in the case where you want to split the repository into a separately managed repository. Here is how to do that:
Create a duplicate repository including all of the repository files.
Delete everything from the clone except for the project(s) that you want to split off. Note that in StarTeam 2011 the project delete was broken, so you may need to do this with a direct SQL query that marks the projects/views as deleted. Contact Support if you run into problems deleting manually, especially if you have a large repository.
Once your clone has been pruned of unnecessary projects, run the Online Purge tool until all projects and respective files have been removed from the DB and the Vault.
You can now change what you need to change on the new repository, such as the users, groups, security, etc. without affecting the first repository.
Once you have validated the new repository is working properly, you can then run a similar process on the first repository to get rid of the projects that were split off.
Another potential use for this is if you had reached end of life for a project and you wanted to keep it offline and backed up but wanted it to be restorable with full history on demand (for regulatory purposes, etc.) while being freed up to remove it from the active repository so you can make other projects run faster. Though this is probably best done in batches of projects as the process is currently quite labor intensive to perform.
