Rally Subversion Connector not robust at finding artifacts - ruby

Some of our engineers are finding that the Rally-Subversion connector does not do a very good job of finding artifacts in the commit message, for example when they are followed by a colon (e.g. DE2222:).
I took a look at the connector 3.7 code and found that it first splits the message into words, but the splitting is done like this:
words = message.gsub(/(\.|,|;)/, ' ').split(' ')
Is there any reason it could not be done like this instead:
words = message.split(/\W+/)
This seems like it will be much more robust and I'm having trouble thinking of a downside.
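For example (a made-up commit message, just to illustrate the difference):

message = "Fixed login bug. DE2222: updated tests,TA111"

# Current 3.7 behavior: only . , ; become spaces, so the colon stays attached
message.gsub(/(\.|,|;)/, ' ').split(' ')
# => ["Fixed", "login", "bug", "DE2222:", "updated", "tests", "TA111"]

# Proposed: split on any run of non-word characters
message.split(/\W+/)
# => ["Fixed", "login", "bug", "DE2222", "updated", "tests", "TA111"]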
Any reason we should not make this change?
If not, could this update also be made in the next release of the connector?

As the SCM connector source code is open, there's really no reason you shouldn't make a change to the commit message artifact "detection" regex if you find it more effective.
As a heads-up, Rally's new generation of SCM connectors (we're calling them "VCS" connectors for Version Control System connectors) will no longer utilize a post-commit hook, but instead will run at a scheduled interval and will collect commit events from the SVN log. These collected events will then be posted to Rally as Changesets.
The new VCS connectors will not parse the logs for commit messages to translate into artifact state changes, so implementing that type of functionality will ultimately require a customer extension to the connector code anyway.


Why is Jenkins.get().getRootUrl() not available when generating DSL?

I'm debugging a problem with atlassian-bitbucket-server-integration-plugin. The behavior occurs when generating a multi-branch pipeline job, which requires a Bitbucket webhook. The plugin works fine when creating the pipeline job from the Jenkins UI. However, when using DSL to create an equivalent job, the plugin errors out attempting to create the webhook.
I've tracked this down to a line in RetryingWebhookHandler:
String jenkinsUrl = jenkinsProvider.get().getRootUrl();
if (isBlank(jenkinsUrl)) {
    throw new IllegalArgumentException("Invalid Jenkins base url. Actual - " + jenkinsUrl);
}
The jenkinsUrl is used as the target for the webhook. When the pipeline job is created from the UI, the jenkinsUrl is set as expected. When the pipeline job is created by my DSL in a freestyle job, the jenkinsUrl is always null. As a result, the webhook can't be created and the job fails.
I've tried various alternative ways to get the Jenkins root URL, such as static references like Jenkins.get().getRootUrl() and JenkinsLocationConfiguration.get().getUrl(). However, all values come up empty. It seems like the Jenkins context is not available at this point.
I'd like to submit a PR to fix this behavior in the plugin, but I can't come up with anything workable. I am looking for suggestions about the root cause and potential workarounds. For instance:
Is there something specific about the way my freestyle job is executed that could cause this?
Is there anything specific to the way jobs are generated from DSL that could cause this?
Is there another mechanism I should be looking at to get the root URL from configuration, which might work better?
Is it possible that this behavior points to a misconfiguration in my Jenkins instance?
If needed, I can share the DSL I'm using to generate the job, but I don't think it's relevant. By commenting out the webhook code that fails, I've confirmed that the DSL generates a job with the correct config.xml underneath. So, the only problem is how to get the right configuration to the plugin so it can set up the webhook.
It turns out that this behavior was caused by a partial misconfiguration of Jenkins.
While debugging problems with broken build links in Bitbucket (pointing me at unconfigured-jenkins-location instead of the real Jenkins URL), I discovered a yellow warning message on the front page of Jenkins which I had missed before, telling me that the root server URL was not set:
Jenkins root URL is empty but is required for the proper operation of many Jenkins features like email notifications, PR status update, and environment variables such as BUILD_URL.
Please provide an accurate value in Jenkins configuration.
This error message had a link to Manage Jenkins > Configure System > Jenkins Location. The correct Jenkins URL actually was set there (I had already double-checked this), but the system admin email address in the same section was not set. When I added a valid email address, the yellow warning went away.
This change fixed both the broken build URL in Bitbucket and the problems with my DSL. So, even though it doesn't make much sense, it seems the missing system admin email address was the root cause of this behavior.

Sonar API: what is the purpose of analysisId?

I'm following the guide to control job status based on the Sonar report: https://docs.sonarqube.org/display/SONARQUBE53/Breaking+the+CI+Build
There, it is explained that you get a taskId, and when the task is completed you retrieve an analysisId that can be used to get the quality gate info using /api/qualitygates/project_status?analysisId=
I would have expected this analysisId to persist and return the same report over time.
That does not seem to be the case. In my experience, the project_status API always returns the latest valid report, and past analyses are no longer kept.
Here is the protocol I used to demonstrate this:
Trigger a first analysis, which gives me a first report:
api/qualitygates/project_status?analysisId=AWEnFPG63R-cEOOz4bmK
with a status ERROR and coverage = 80%
Then I trigger a second analysis, which gives me another id:
api/qualitygates/project_status?analysisId=AWEnHBj53R-cEOOz4bny
with a status OK and coverage = 90%
So now, if I call back the first analysisId (api/qualitygates/project_status?analysisId=AWEnFPG63R-cEOOz4bmK), the report has changed and is the same as the last one.
Can someone explain the concept of analysisId? Because this is not really an identifier of an analysis here.
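For reference, the flow I'm describing looks roughly like this (a Ruby sketch; the server URL and task id are placeholders, the endpoints are the ones from the guide):

require 'net/http'
require 'json'

base    = 'https://sonar.example.com'   # placeholder server URL
task_id = 'AWEn...'                     # ceTaskId from report-task.txt (placeholder)

# 1. Poll the background task until report processing finishes
task = {}
loop do
  task = JSON.parse(Net::HTTP.get(URI("#{base}/api/ce/task?id=#{task_id}")))['task']
  break if %w[SUCCESS FAILED CANCELED].include?(task['status'])
  sleep 2
end

# 2. Fetch the quality gate using the analysisId of the finished task
resp = JSON.parse(Net::HTTP.get(URI(
  "#{base}/api/qualitygates/project_status?analysisId=#{task['analysisId']}")))
puts resp.dig('projectStatus', 'status')   # "OK" or "ERROR"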
The link you provide in your question is to an archived, rather old version of the documentation. Since your comment reveals that you are on a current (6.7.1) version of SonarQube, you'll benefit from using the current documentation.
In current versions, Webhooks allow you to notify external systems once analysis report processing is complete. The SonarQube Scanner for Jenkins makes it very easy to use webhooks in a pipeline, but even if you're not using Jenkins pipelines, you should still use webhooks instead of trying to retrieve this all manually. As shown in the docs (linked earlier) the webhook payload includes analysis timestamp, project name and key, and quality gate status.
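If you want something concrete to start from, a minimal receiver might look like this Ruby sketch (the payload fields follow the webhook docs; the port and path are arbitrary):

require 'webrick'
require 'json'

server = WEBrick::HTTPServer.new(Port: 8080)
server.mount_proc '/sonar-webhook' do |req, res|
  payload = JSON.parse(req.body)
  gate = payload.dig('qualityGate', 'status')   # e.g. "OK" or "ERROR"
  puts "#{payload['analysedAt']} #{payload.dig('project', 'key')}: quality gate #{gate}"
  # React here (e.g. fail the build) instead of polling project_status manually.
  res.status = 204
end
trap('INT') { server.shutdown }
server.start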

What to take into consideration when adding a new Kibana 5 data source?

I'm attempting to add Solr as a new data source for Kibana 5 to read from. Is all it takes simply adding a new plugin to the source code, or are there other areas I should take into consideration?
You make it sound very simple: "only add a new plugin". I think this will be very hard, since Elasticsearch and its query DSL are baked into Kibana very deeply.
Lucidworks tried to fork Kibana twice:
https://github.com/lucidworks/silk — no commits since February 2016
https://github.com/lucidworks/banana — no commits since January 2017
You can probably take a look at their commits to get an idea, but this will be a lot of work.

Is there a version control feature in Oracle BI Answers for a single Analysis?

I built an Analysis that displayed Results, error free. All is well.
Then, I added some filters to existing criteria sets. I also copied an existing criteria set, pasted it, and modified its filters. When I try to display results, I see a View Display Error.
I’d like to revert to that earlier functional version of the analysis, hopefully without manually undoing all of the filter and criteria changes I made since then.
If you’ve seen a feature like this, I’d like to hear about it!
Micah-
Great question. There have been many times in the past when we wished we had some simple SCM on the Oracle BI Web Catalog. There is currently no "out of the box" source control for the web catalog, but some simple workarounds do exist.
If you have server-side access to where the web catalog lives, you can start with the following approach.
Oracle BI Web Catalog Version Control Using GIT Server Side with CRON Job:
1. Make a backup of your web catalog!
2. Create a Git repository in the web catalog base directory, where the root dir and root.atr file exist.
3. Initially commit everything. (git add -A; git commit -a -m "initial commit"; git push)
4. Set up a cron job to run a script hourly, minutely, etc. that tells Git to auto-commit any adds/deletes/modifications to your Git repository. (git add -A; git commit -a -m "auto_commit_$(date +"%Y-%m-%d_%T")"; git push)
Here are the issues with this approach:
- If the cron runs hourly and an Analysis changes 3 times within the hour, you'll be missing some versions.
- No actual user-submitted commit messages.
- Object details such as the object's pretty "Name" (caption), Description (user-populated in the Save dialog), ACLs, and custom properties are stored in a binary file format. These files have the .atr extension. The good news, though, is that the actual object definition is stored in a plain-text XML file (without the .atr).
Take this as a baseline, and build upon it. Here is how you could step it up!
- Use incron or other inotify-based file monitoring, such as the Ruby-based guard (see the sketch after this list). Using this approach you could commit nearly instantly any time a user saves an object and the BI server updates the file system.
- Along with inotify, you could leverage the BI SOAP API to retrieve the actual object details such as the Description. This would allow you to create meaningful commit messages. Or, parse the binary .atr file and pull the info out. Here are some good links to learn more about Web Cat ATR files: Link (Keep in mind these links discuss OBI 10g. The binary format for 11g has changed slightly.)
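To make the guard idea concrete, a Guardfile along these lines could auto-commit on every file event (a sketch assuming the guard-shell plugin, run from the web catalog root):

# Guardfile — auto-commit web catalog changes as the BI server writes them
guard :shell do
  watch(%r{.*}) do |m|
    # m[0] is the changed path; commit it immediately with a timestamped message
    system(%Q{git add -A && git commit -m "auto: #{m[0]} changed at #{Time.now}" && git push})
  end
end

Start it with bundle exec guard in the repository directory.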

SVN hooks for Windows

Can I write a command in SVN hooks for Windows that automatically relocates some folders to another location in the repository?
The hook must run on the server.
For example: a user commits files in his working copy (C:\svnworkingcopy\dev).
On the server, a hook runs and automatically relocates or copies these files into another folder of the repository (https://svnserver/onlyread), where this user has read-only permission.
Thanks!
svn switch --relocate a user's working copy with a hook script? It looks like you are confusing the terms. Nevertheless, I advise you to check the following warning in the SVNBook:
While hook scripts can do almost anything, there is one dimension in which hook script authors should show restraint: do not modify a commit transaction using hook scripts. While it might be tempting to use hook scripts to automatically correct errors, shortcomings, or policy violations present in the files being committed, doing so can cause problems. Subversion keeps client-side caches of certain bits of repository data, and if you change a commit transaction in this way, those caches become indetectably stale. This inconsistency can lead to surprising and unexpected behavior. Instead of modifying the transaction, you should simply validate the transaction in the pre-commit hook and reject the commit if it does not meet the desired requirements. As a bonus, your users will learn the value of careful, compliance-minded work habits.
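Following that advice, a validating (rather than transaction-modifying) pre-commit hook could be as simple as this Ruby sketch — the read-only path and the rule are hypothetical, and on Windows you'd call it from pre-commit.bat, passing through the repository path and transaction name:

# pre_commit.rb — validate the transaction, don't modify it
repos, txn = ARGV
changed = `svnlook changed -t #{txn} "#{repos}"`
# Reject commits that touch the read-only area (hypothetical path)
if changed.lines.any? { |l| l.split(' ', 2).last.to_s.start_with?('onlyread/') }
  warn 'Files under onlyread/ are read-only; this commit is rejected.'
  exit 1
end
exit 0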
