How to enforce conventional commit messages in git (Bitbucket) - bash

I'm looking for a way to reject commits whose messages don't follow the correct format.
I intend to use the following convention:
https://www.conventionalcommits.org/en/v1.0.0/
I found out there is a folder with bash scripts which may be the key to that solution: .git/hooks/
However, I'm not sure how to write the script to enforce the format on the commit messages.
Will edit accordingly, thank you in advance.

There is a list of Tools in the documentation.
My suggestion would be to:
Pick a tool.
Run it from the command line, and see if you can get it to check commit messages.
Put the commands you use to check commit messages in a git hook.
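For example, here is a minimal commit-msg hook sketch in bash (the type list and regex are assumptions to tune to your convention; this is not a full Conventional Commits parser):

#!/usr/bin/env bash
# .git/hooks/commit-msg -- make it executable: chmod +x .git/hooks/commit-msg
# Minimal check: the first line must look like "type(optional-scope)!: description".
msg_file="$1"
first_line=$(head -n 1 "$msg_file")
pattern='^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)(\([a-z0-9-]+\))?!?: .+'

if ! grep -Eq "$pattern" <<< "$first_line"; then
  echo "ERROR: commit message does not follow Conventional Commits:" >&2
  echo "  $first_line" >&2
  echo "Expected something like: feat(parser): add ability to parse arrays" >&2
  exit 1
fi

Keep in mind that .git/hooks is local to each clone, so this only helps on developer machines; to enforce the format on the Bitbucket server itself you'd need a server-side hook or a CI check.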

Related

How to change or specify a DVC experiment name?

How do I change the name of the experiment?
Tried: dvc exp run -n to name the experiment, then git to push to GitHub. However, the experiment name is still the SHA.
Expected: the experiment name displayed on the Iterative Studio interface.
Actually happened: the GitHub SHA value instead of the name.
There are two types of experiments in the DVC ecosystem that we need to distinguish, and there are a few different approaches to naming them.
The first is what we sometimes call "ephemeral" experiments: those that do not create commits in your Git history unless you explicitly say so. They are described in this Get Started section. For each of these experiments a name is auto-generated (e.g. angry-upas in one of the examples from the doc), or you can use dvc exp run -n ... to pass a particular name.
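For example (the experiment name here is made up):

# run the pipeline as an ephemeral experiment with an explicit name
dvc exp run -n my-tuned-params
# list experiments to confirm the name was applied
dvc exp show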
Another way to create those experiments (and send them to Studio) is to use the DVC logger (DVCLive), e.g. as described here. Those experiments will be visible in Studio with auto-generated names (or the name that was provided when they were created).
Now, we have another type of experiment: a commit. Something that someone decided to make persistent and share with the team via a PR and/or a regular commit. Those are the ones presented in the image shared in the question.
Since they are regular commits, the regular Git rules apply to them: they have a hash, they have descriptions, and, most relevant to this discussion, they can have tags. All this information is reflected in the Studio UI.
E.g., see this public example-get-started repo.
I think in your case tags would be the most natural way to rename those. Maybe we can introduce a way to push the experiment name as a Git tag along with the experiment when it's being saved. WDYT?
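For instance, plain Git tagging already works today (the tag name and SHA are placeholders):

# tag the commit that represents the persistent experiment
git tag -a exp-baseline -m "baseline experiment" <commit-sha>
# push the tag so it shows up for the team (and in Studio)
git push origin exp-baseline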
Let me know if that answers your question.

Is there a version control feature in Oracle BI Answers for a single Analysis?

I built an Analysis that displayed Results, error free. All is well.
Then, I added some filters to existing criteria sets. I also copied an existing criteria set, pasted it, and modified its filters. When I try to display results, I see a View Display Error.
I'd like to revert to the earlier, functional version of the analysis, hopefully without manually undoing all of the filter and criteria changes I've made since then.
If you’ve seen a feature like this, I’d like to hear about it!
Micah-
Great question. There have been many times in the past when we wished we had some simple SCM on the Oracle BI Web Catalog. There is currently no "out of the box" source control for the web catalog, but some simple workarounds do exist.
If you have server-side access to where the web catalog lives, you can start with the following approach.
Oracle BI Web Catalog version control using Git server side with a CRON job:
Make a backup of your web catalog!
Create a Git repository in the web cat base directory, where the root dir and root.atr file exist.
Initially commit everything. (git add -A; git commit -a -m "initial commit"; git push)
Set up a CRON job to run a script hourly, minutely, etc. that tells Git to auto-commit any adds/deletes/modifications to your repository. (git add -A; git commit -a -m "auto_commit_$(date +"%Y-%m-%d_%T")"; git push) A full script sketch follows below.
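Put together, the auto-commit script could look something like this sketch (the paths are assumptions for illustration):

#!/usr/bin/env bash
# /usr/local/bin/webcat_autocommit.sh -- hypothetical location
# crontab entry (hourly): 0 * * * * /usr/local/bin/webcat_autocommit.sh
cd /path/to/webcat || exit 1   # assumed web catalog base directory
git add -A
# only commit if something actually changed, to keep the history clean
if ! git diff --cached --quiet; then
  git commit -m "auto_commit_$(date +"%Y-%m-%d_%T")"
  git push
fi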
Here are the issues with this approach:
If the CRON runs hourly and an Analysis changes 3 times in the hour, you'll be missing some versions in there.
No actual user-submitted commit messages.
Object details such as the object's pretty "Name" (caption), Description (user-populated in the Save dialog), ACLs, and custom object properties are stored in a binary file format. These files have the .atr extension. The good news, though, is that the actual object definition is stored as plain-text XML in a separate file (without the .atr extension).
Take this as a baseline, and build upon it. Here is how you could step it up!
Use incron or another inotify-based file monitor, such as the Ruby-based guard. Using this approach you could commit nearly instantly any time a user saves an object and the BI server updates the file system.
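A rough sketch of that idea using inotifywait from inotify-tools (an alternative to incron; the catalog path is an assumption):

#!/usr/bin/env bash
# Watch the web catalog recursively and commit on every change.
WEBCAT=/path/to/webcat   # assumed web catalog base directory

inotifywait -m -r -e close_write,create,delete,move --format '%w%f' "$WEBCAT" |
while read -r changed; do
  cd "$WEBCAT" || exit 1
  git add -A
  git diff --cached --quiet || { git commit -m "auto_commit: $changed"; git push; }
done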
Along with inotify, you could leverage the BI SOAP API to retrieve actual object details such as the Description. This would allow you to create meaningful commit messages. Or parse the binary .atr file and pull the info out. Here are some good links to learn more about Web Cat ATR files: Link (keep in mind these links discuss OBI 10g; the binary format for 11g has changed slightly).

Rally Subversion Connector not robust at finding artifacts

Some of our engineers are finding that the Rally-Subversion connector does not do a very good job of finding artifacts in the commit message, for example if they are followed by a colon (e.g. DE2222:)
I took a look at the connector 3.7 code and found that it first splits the message into words, but the splitting is done like this:
words = message.gsub(/(\.|,|;)/, ' ').split(' ')
Is there any reason it could not be done like this instead:
words = message.split(/\W+/)
This seems like it will be much more robust and I'm having trouble thinking of a downside.
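A quick demonstration from the shell shows the difference on a colon-suffixed artifact (the commit message is made up):

ruby -e 'msg = "Fixed DE2222: null check"
p msg.gsub(/(\.|,|;)/, " ").split(" ")   # => ["Fixed", "DE2222:", "null", "check"]
p msg.split(/\W+/)                       # => ["Fixed", "DE2222", "null", "check"]'

With the original expression the colon stays attached to DE2222, so the artifact lookup misses it; splitting on \W+ strips it.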
Any reason we should not make this change?
If not, could this update please also be made in the next release of the connector as well?
As the SCM connector source code is open, there's really no reason you shouldn't make a change to the commit message artifact "detection" regex, if you find it more efficient.
As a heads-up, Rally's new generation of SCM connectors (we're calling them "VCS" connectors for Version Control System connectors) will no longer utilize a post-commit hook, but instead will run at a scheduled interval and will collect commit events from the SVN log. These collected events will then be posted to Rally as Changesets.
The new VCS connectors will not parse the logs for commit messages to translate into artifact state changes - so ultimately implementing that type of functionality will end up needing a customer-extension to the connector code in the long run anyway.

SVN hooks for Windows

Can I write a command in SVN hooks for Windows that automatically relocates some folders to another location in the repository?
The hook must run on the server.
For example: a user commits files in his working copy (C:\svnworkingcopy\dev).
On the server a hook will run and automatically relocate or copy these files into another folder of the repository (https://svnserver/onlyread), where this user has read-only permission.
Thanks!
svn switch --relocate a user's working copy with a hook script? It looks like you are confusing the terms. Nevertheless, I advise you to check the following warning in the SVNBook:
While hook scripts can do almost anything, there is one dimension in which hook script authors should show restraint: do not modify a commit transaction using hook scripts. While it might be tempting to use hook scripts to automatically correct errors, shortcomings, or policy violations present in the files being committed, doing so can cause problems. Subversion keeps client-side caches of certain bits of repository data, and if you change a commit transaction in this way, those caches become indetectably stale. This inconsistency can lead to surprising and unexpected behavior. Instead of modifying the transaction, you should simply validate the transaction in the pre-commit hook and reject the commit if it does not meet the desired requirements. As a bonus, your users will learn the value of careful, compliance-minded work habits.
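Following that advice, a pre-commit hook would validate and reject rather than move anything. A sketch for a Unix server using svnlook (on Windows the same logic would live in pre-commit.bat; the onlyread/ path policy is an assumption for illustration):

#!/usr/bin/env bash
# pre-commit hook: validate the transaction instead of rewriting it.
REPOS="$1"
TXN="$2"

# Assumed policy: reject direct commits into the read-only area.
if svnlook changed -t "$TXN" "$REPOS" | awk '{print $2}' | grep -q '^onlyread/'; then
  echo "Commits into onlyread/ are not allowed; that area is maintained server-side." >&2
  exit 1
fi
exit 0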

How to trigger a hudson job by another job which is in a different hudson

I have job A in Hudson A and Job B in Hudson B. I want to trigger job A by Job B.
In your job A configuration, check the Trigger builds remotely (e.g., from scripts) checkbox and provide a token.
The help text there shows you the URL you can call to trigger a build from remote scripts (e.g., from a shell script in Hudson job B).
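For example, a shell step in job B could call the trigger URL like this (host, job name, and token are placeholders):

# trigger job A remotely using the token configured on job A
wget -q -O /dev/null "http://hudsonA:8080/job/jobA/build?token=MY_TOKEN"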
However, that would trigger job A no matter what the result of job B is.
Morechilli's answer is probably the best solution.
I haven't used Hudson but I would guess your simplest approach would be to use the URL trigger:
http://wiki.hudson-ci.org/display/HUDSON/URL+Change+Trigger
I think there is a latest build url that could be used for this.
In the latest versions of Hudson, the lastSuccessfulBuild/ HTML page will contain the elapsed time since it was built, which will be different for each call. This causes the URL Change Trigger to spin.
One fix is to use the xml, json, or python APIs to request only a subset of the information. Using the 'tree' request parameter, the following URL will return an XML document containing only the build number of the last successful build.
http://SERVER:PORT/job/JOBNAME/lastSuccessfulBuild/api/xml?tree=number
Using this URL restored the behavior I expected from the URL Change Trigger.
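If you wanted to replicate that check from a script instead of the plugin, a rough sketch (server, port, and job name are placeholders):

# poll the last successful build number and react only when it changes
url="http://SERVER:PORT/job/JOBNAME/lastSuccessfulBuild/api/xml?tree=number"
last=""
while true; do
  current=$(wget -q -O - "$url")
  if [ -n "$last" ] && [ "$current" != "$last" ]; then
    echo "new successful build detected: $current"
    # kick off whatever should follow here
  fi
  last="$current"
  sleep 60
done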
Personally, I find the easiest way to do this is to watch the build timestamp:
PROJECT_NAME/lastSuccessfulBuild/buildTimestamp
I'm using wget to trigger the build:
wget --post-data 'it-just-needs-to-be-a-POST-request' \
     --auth-no-challenge --http-user=myuser --http-password=mypassword \
     http://jenkins.xx.xx/xxx/job/A/build?delay=0sec
There are other ways to trigger a build; see the REST and other APIs of Jenkins.
But this works great on Unix.
