ClearCase CQ restriction for a specific stream - clearcase-ucm

I am working with ClearCase/ClearQuest, where I have to create the CQs (defect records) for the developers. Each defect has to be delivered to the old streams as well as the current one, so for each defect I have to create 3 CQs for a single developer. Say I have three streams:
8.0_dev
9.0_dev
10.0_dev
So I create the same defect CQ for the above three streams. The problem is that developers don't care to check which stream a CQ belongs to: one commits code in the 8.0_dev branch using the CQ of 10.0_dev, and that creates a mess for me when building the release notes. I want to restrict each commit to the CQ assigned to its stream: ClearCase should give an error if the CQ assigned to 8.0_dev is used for a commit in any other stream; it must be usable for commits in 8.0_dev and nowhere else.
Please advise me on how to implement this.

One possible lead is a preop trigger on the deliver operation (created with a cleartool mktrtype similar to the one I mentioned in "clearcase rebase permission to specific person").
In your script implementing that check (and called by the trigger), display the ClearCase environment variables which are available, and see if one of them mentions a UCM activity named after a CQ you created for another stream. That would mean the currently set activity is not the right one, and you can exit with a non-zero status to abort the deliver.
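A minimal sketch of that idea follows, assuming a UCM trigger on the deliver_start operation and assuming the activity and stream show up as CLEARCASE_ACTIVITY and CLEARCASE_STREAM (both variable names and the CQ naming convention are assumptions: dump the environment on your own site first and adjust):

  cleartool mktrtype -ucmobject -all -preop deliver_start \
      -execunix '/usr/local/triggers/check_cq_stream.sh' \
      -c "Reject delivers whose CQ activity does not match the stream" \
      CHECK_CQ_STREAM

  #!/bin/sh
  # check_cq_stream.sh - sketch only. Log what ClearCase really sets first:
  env | grep '^CLEARCASE_' >> /tmp/cq_trigger_env.log
  # Assumed convention: the CQ/activity name embeds the release (CQ10.0-1234)
  # and must match the stream name (10.0_dev).
  rel=$(echo "$CLEARCASE_ACTIVITY" | sed 's/^CQ\([0-9.]*\)-.*/\1/')
  case "$CLEARCASE_STREAM" in
      *"${rel}_dev"*) exit 0 ;;   # CQ release matches the stream: allow
      *) echo "CQ $CLEARCASE_ACTIVITY must not be used in stream $CLEARCASE_STREAM" >&2
         exit 1 ;;                # non-zero exit aborts the deliver
  esac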

Terraform and OCI : "The existing Db System with ID <OCID> has a conflicting state of UPDATING" when creating multiple databases

I am trying to create 30 databases (oci_database_database resource) under 5 existing db_homes. All of these resources are under a single DB System.
When applying my code, a first database is successfully created; then, when terraform attempts to create the second one, I get the following error message: "Error: Service error: IncorrectState. The existing Db System with ID <OCID> has a conflicting state of UPDATING", which causes the execution to stop.
If I re-apply my code, the second database is created, then I get the same error again when terraform attempts to create the third one.
I am assuming I get this message because terraform starts creating the next database as soon as the first one is created, while the DB System status is not up to date yet (still 'UPDATING' instead of 'AVAILABLE').
A good way for the OCI provider to avoid this issue would be to consider a database creation complete only when the creation itself has finished AND the associated db home and db system statuses are back to 'AVAILABLE'.
Any suggestion on how to address the issue I am encountering?
Feel free to ask if you need any additional information.
Thank you.
As mentioned above, it looks like you have opened a ticket regarding this via GitHub. What you are experiencing should not happen, as terraform should retry after seeing the error. As per your GitHub post, the person helping you needs your log with timestamps so they can better troubleshoot. At this stage I would recommend following up there and sharing the requested info.
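In the meantime, a possible stopgap (a sketch only; it assumes the databases are declared with count under a hypothetical resource name oci_database_database.db and that the OCI CLI is configured) is to serialize the creations yourself and wait for the DB System to come back to 'AVAILABLE' between targeted applies:

  #!/bin/sh
  # Hypothetical workaround: create the 30 databases one at a time.
  # DB_SYSTEM_OCID and the resource name "db" are assumptions.
  for i in $(seq 0 29); do
    terraform apply -auto-approve -target="oci_database_database.db[$i]" || exit 1
    # Poll until the DB System leaves the UPDATING state.
    until [ "$(oci db system get --db-system-id "$DB_SYSTEM_OCID" \
               --query 'data."lifecycle-state"' --raw-output)" = "AVAILABLE" ]; do
      sleep 60
    done
  done

Targeted applies are intended for exceptional situations, so treat this purely as a temporary measure until the provider handles the retry itself.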

ClearCase UCM. Unable to deliver

I created a new development stream out of an Integration stream. On the dev stream I created a dynamic view and an activity. I then added a new directory to an existing directory using clearfsimport.
e.g. cd to the parent directory where I want to add the new dir, then run:
clearfsimport -recurse -follow -nsetevent -c "adding new version" ~/newdir .
When it's all done, I try to deliver the activity using the ClearCase Project Explorer. This throws an error like so:
"Merge Manager: warning: Element xxxx is not visible in view <Integration view name>
... ... ...
If this element should be visible, cancel this operation, fix the problem, and re-run the operation"
I have been doing this every week for months now and never had an issue. I'm really not sure what I am missing here or how to fix it. If it helps, the mastership of the Integration stream was transferred from a remote replica to ours. All my previous delivers were done on the remote replica, but now I have complete mastership over the integration stream.
It depends on the nature of the target view (the one in which you are doing the deliver, the one associated with the integration stream): snapshot or dynamic.
You would need to check if the parent folder of xxxx is correctly checked out (or if you can check it out).
A cleartool ls in that parent folder of xxxx in the target (integration) view can help to test if everything seems OK.
If you are using a snapshot view, a cleartool update before the deliver can help too.
Since the integration stream mastership has just been transferred from a remote replica to your site, I would suggest the following:
1) Make sure you have created a view associated with the integration stream on your site.
2) Make sure that the default delivery view is set to your integration view. You might need to reset it using the command
cleartool deliver -reset -to your-integration-view
The command above has to be launched from your development view.
I also suggest you have a look at the cleartool deliver command.
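Put together, the checks above look like this (a sketch: the view tag, VOB path and element name are placeholders to replace with your own):

  # In the integration (target) view: is the parent folder of xxxx healthy?
  cleartool ls /vobs/myvob/parent_of_xxxx
  # Snapshot integration view only: bring it up to date before delivering.
  cleartool update
  # From the development view: point the deliver at your integration view.
  cleartool deliver -reset -to your-integration-view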

Is there a version control feature in Oracle BI Answers for a single Analysis?

I built an Analysis that displayed Results, error free. All was well.
Then I added some filters to existing criteria sets. I also copied an existing criteria set, pasted it, and modified its filters. When I try to display results, I see a View Display Error.
I'd like to revert to that earlier, functional version of the analysis, hopefully without manually undoing all of the filter & criteria changes I made since then.
If you've seen a feature like this, I'd like to hear about it!
Micah-
Great question. There are many times in the past when we wished we had some simple SCM on the Oracle BI Web Catalog. There is currently no "out of the box" source control for the web catalog, but some simple work-arounds do exist.
If you have server-side access to where the web catalog lives, you can start with the following approach.
Oracle BI Web Catalog Version Control Using GIT Server Side with a CRON Job:
1) Make a backup of your web catalog!
2) Create a GIT repository in the web cat base directory where the root dir and root.atr file exist.
3) Initially commit everything: git add -A; git commit -a -m "initial commit"; git push
4) Set up a CRON job to run a script hourly, minutely, etc. that tells GIT to auto-commit any adds/deletes/modifications to your GIT repository (a cron-ready sketch follows below): git add -A; git commit -a -m "auto_commit_$(date +"%Y-%m-%d_%T")"; git push
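As a sketch (the web catalog path is an assumption to adjust), the auto-commit script and its crontab entry could look like this:

  #!/bin/sh
  # autocommit.sh - snapshot any catalog changes into GIT.
  # If nothing changed, git commit simply reports "nothing to commit".
  cd /path/to/OracleBI/web/catalog || exit 1
  git add -A
  git commit -a -m "auto_commit_$(date +"%Y-%m-%d_%T")"
  git push

  # crontab entry: run the snapshot at the top of every hour
  0 * * * * /path/to/autocommit.sh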
Here are the issues with this approach:
1) If the CRON runs hourly and an Analysis changes 3 times in that hour, you'll be missing some versions in there.
2) No actual user-submitted commit messages.
Object details such as the object's pretty "Name" (caption), Description (user-populated on the Save dialog), ACLs, and object custom properties are stored in a binary file format; these files have the .atr extension. The good news, though, is that the actual object definition is stored in a plain-text XML file (without the .atr).
Take this as a baseline, and build upon it. Here is how you could step it up!
Use incron or other inotify-based file monitoring such as the ruby-based guard. Using this approach you could commit nearly instantly any time a user saves an object and the BI server updates the file system (see the incrontab sketch after this list).
Along with inotify, you could leverage the BI SOAP API to retrieve the actual object details such as the Description. This would allow you to create meaningful commit messages. Or, parse the binary .atr file and pull the info out. Here are some good links to learn more about Web Cat ATR files: Link (keep in mind these links are discussing OBI 10g; the binary format for 11g has changed slightly).
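For the incron route, a minimal incrontab sketch (paths are assumptions, and incron watches a single directory, so covering the whole catalog tree recursively needs extra entries or a tool like guard); it reuses the autocommit.sh script from above:

  # incrontab -e: re-commit whenever something is written, created or deleted
  /path/to/OracleBI/web/catalog IN_CLOSE_WRITE,IN_CREATE,IN_DELETE /path/to/autocommit.sh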

Rally Subversion Connector not robust at finding artifacts

Some of our engineers are finding that the Rally-Subversion connector does not do a very good job of finding artifacts in the commit message, for example if they are followed by a colon (e.g. DE2222:)
I took a look at the connector 3.7 code, and found that they first split the message into words, but that splitting is done like this:
words = message.gsub(/(\.|,|;)/, ' ').split(' ')
Is there any reason this would not be done like this:
words = message.split(/\W+/)
This seems like it will be much more robust and I'm having trouble thinking of a downside.
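To illustrate with a hypothetical commit message (both one-liners just exercise the two expressions quoted above):

  $ ruby -e 'p "Fixed DE2222: crash".gsub(/(\.|,|;)/, " ").split(" ")'
  ["Fixed", "DE2222:", "crash"]
  $ ruby -e 'p "Fixed DE2222: crash".split(/\W+/)'
  ["Fixed", "DE2222", "crash"]

The current code leaves the trailing colon attached, so DE2222 is never matched; the \W+ split strips it.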
Any reason we should not make this change?
If not, could this update please also be made in the next release of the connector as well?
As the SCM connector source code is open, there's really no reason you shouldn't make a change to the commit message artifact "detection" regex, if you find it more efficient.
As a heads-up, Rally's new generation of SCM connectors (we're calling them "VCS" connectors for Version Control System connectors) will no longer utilize a post-commit hook, but instead will run at a scheduled interval and will collect commit events from the SVN log. These collected events will then be posted to Rally as Changesets.
The new VCS connectors will not parse the logs for commit messages to translate into artifact state changes, so ultimately, implementing that type of functionality will require a customer extension to the connector code in the long run anyway.

SVN hooks for Windows

Can I write a command in SVN hooks for Windows that automatically relocates some folders to another location in the repository?
The hook must run on the server.
For example: a user commits files in his working copy (C:\svnworkingcopy\dev).
On the server a hook will run and automatically relocate or copy these files into another folder of the repository (https://svnserver/onlyread), where this user has read-only permission.
Thanks!
svn switch --relocate a user's working copy with a hook script? It looks like you are confusing the terms. Nevertheless, I advise you to check the following warning in the SVNBook:
While hook scripts can do almost anything, there is one dimension in which hook script authors should show restraint: do not modify a commit transaction using hook scripts. While it might be tempting to use hook scripts to automatically correct errors, shortcomings, or policy violations present in the files being committed, doing so can cause problems. Subversion keeps client-side caches of certain bits of repository data, and if you change a commit transaction in this way, those caches become indetectably stale. This inconsistency can lead to surprising and unexpected behavior. Instead of modifying the transaction, you should simply validate the transaction in the pre-commit hook and reject the commit if it does not meet the desired requirements. As a bonus, your users will learn the value of careful, compliance-minded work habits.
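Following that advice, a pre-commit hook should validate rather than rewrite. A sketch (Unix shell shown for brevity; on Windows the same svnlook calls would go into pre-commit.bat, and the onlyread/ path is an assumption):

  #!/bin/sh
  # pre-commit: reject direct commits into the read-only area instead of
  # trying to move content around inside the transaction.
  REPOS="$1"
  TXN="$2"
  if svnlook changed -t "$TXN" "$REPOS" | grep -q " onlyread/"; then
    echo "Commits into onlyread/ are not allowed; it is maintained server-side." >&2
    exit 1
  fi
  exit 0

Copying the committed files into https://svnserver/onlyread afterwards belongs in a post-commit hook (for example a script doing svn copy of the just-committed paths), which reads the finished revision and does not modify the transaction.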
