Unable to perform operation "make branch" in replica <replica name> of VOB <vobname> - clearcase-ucm

We recently changed mastership of a stream from one site (inh) to another (ies). Things were fine until the following error appeared.
Now a delivery from a child branch to the moved branch results in an error. Not all merges are problematic; only certain directories (as far as I can tell) fail to merge.
Unable to perform operation "make branch" in replica "interfaces_src_ies" of VOB "\interfaces_src".
Master replica of branch type "project_subset_QPE-5060" is "interfaces_src.inh".
There is no candidate version which can be checked out.
Unable to check out "M:\dyn_project_subset\interfaces_src\src\java\src\chs\cof\project".
How can I fix this? How can I change the mastership of branch type "project_subset_QPE-5060" to interfaces_src.ies?

That should mean, as detailed in IBM technote swg21142784, that the mastership transfer was incomplete.
That can happen when a file was checked out at the time of the transfer.
Make sure there are no checked-out files (on either site), and try to transfer the mastership again (even if it says it is already transferred).
Or, as described in the technote, try to create the branch on the other site and create a synchronization packet from the mastering site using multitool syncreplica -export, so that the site where the element creation is going to happen receives the mkbranch operation.
You can see that kind of operation in IBM technote swg21118471.
On Windows, this setting can also help prevent this situation:
cleardlg.exe / Options / Operations tab / Advanced Options:
When creating an element in a replicated VOB,
make current replica the master of all newly created branches.
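If it comes to transferring the branch type yourself, the sequence is typically along these lines (a sketch based on the names in your error message; chmaster must run at the site that currently masters the type, here inh):
cleartool describe brtype:project_subset_QPE-5060@\interfaces_src
multitool chmaster replica:interfaces_src.ies@\interfaces_src brtype:project_subset_QPE-5060@\interfaces_src
multitool syncreplica -export -fship replica:interfaces_src.ies@\interfaces_src
The describe confirms which replica masters the type, chmaster reassigns it to interfaces_src.ies, and the syncreplica export ships that change so the ies site actually picks it up.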

I also had this exact issue when trying to check out a file to modify.
I was able to create a view, but when I tried to check out a file it kept complaining:
Error checking out '<file>'.
Unable to perform operation "make branch" in replica "<branch>" of VOB "<vob>".
Master replica of branch type "<type>" is "<X>"
Unable to check out "<file>"
This was fixed by changing the ClearCase Registry Server to the correct host and then re-creating the view.
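To double-check which registry server a client is pointing at before re-creating the view, cleartool hostinfo -l prints it along with the client's other settings; on Windows the host itself is changed through the ClearCase Control Panel (Registry tab):
cleartool hostinfo -l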

Related

Clearcase UCM. Unable to deliver

I created a new development stream out of an Integration stream. On the dev stream I created a dynamic view and an activity. I then added a new directory to an existing directory using clearfsimport.
e.g., cd to the parent directory where I want to add the new dir, then run:
clearfsimport -recurse -follow -nsetevent -c "adding new version" ~/newdir .
When it's all done I try to deliver the activity using clearcase project explorer. This throws an error like so:
"Merge Manager: warning: Element xxxx is not visible in view <Integration view name>
... ... ...
If this element should be visible cancel this operation, fix the problem, and re-run the operation"
I have been doing this every week for months now and never had an issue. I'm really not sure what I'm missing here or how to fix it. If it helps, the mastership of the Integration stream was transferred from a remote replica to ours. All my previous deliveries were done on the remote replica, but now I have complete mastership over the integration stream.
It depends on the nature of the target view (the one in which you are doing the deliver, the one associated with the integration stream): snapshot or dynamic.
You would need to check if the parent folder of xxxx is correctly checked out (or if you can check it out).
A cleartool ls in the parent folder of 'xxxx' in the target (integration) view can help to check that everything looks OK.
If you are using a snapshot view, a cleartool update before the deliver can help too.
Since the integration stream mastership has just been transferred from a remote replica to your site, I would suggest the following:
1) Make sure you have created a view associated with the integration stream on your site.
2) Make sure that the default delivery view is set to your integration view. You might need to reset it using the command
cleartool deliver -reset -to your-integration-view
The command above has to be launched from your development view.
I suggest you have a look at the cleartool deliver command. A combined check sequence is sketched below.
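Put together, the checks look like this (view and path names are placeholders; the update step only applies if the integration view is a snapshot view):
cleartool ls (run in the parent folder of xxxx, inside the integration view)
cleartool update (snapshot integration view only)
cleartool deliver -reset -to my_integration_view (run from the development view)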

How to recover master from a backup on moosefs-ce-2.0

My MooseFS version is moosefs-ce-2.0, installed on Debian 6 with an ext3 filesystem. There is one master, one metalogger, and some chunkservers. When my master is down, how do I recover the master from the metalogger? The documentation moosefs.org provides is outdated; I can't find more detailed information in it. Alternatively, how do I configure multi-master on moosefs-ce-2.0?
It is described in the documentation. You can find it here: MooseFS Documentation. Paragraph 4.2 (page 19) of the MooseFS User's Manual, "Master metadata restore from metaloggers", says:
4.2 Master metadata restore from metaloggers
In MooseFS Community Edition basic configuration there can be only one master and several metaloggers. If for some reason you lose all metadata files and changelogs from the master server, you can use data from a metalogger to restore your data. To start the recovery, you first need to transfer all data stored on the metalogger in /var/lib/mfs to the master's metadata folder. Files on the metalogger will have an ml prefix prepended to the filenames. After all files are copied, you need to create the metadata.mfs file from the changelogs and metadata.mfs.back files. To do this, use the command mfsmaster -a. Mfsmaster starts to build the new metadata file and starts the mfsmaster process.
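As a rough command sketch of those steps (assuming default /var/lib/mfs paths on both hosts, the mfsmaster daemon stopped, and 2.0-style filenames; check what your metalogger actually wrote before renaming):
scp metalogger:/var/lib/mfs/* /var/lib/mfs/
cd /var/lib/mfs
for f in *_ml*; do mv "$f" "${f/_ml/}"; done   # drop the ml marker, e.g. metadata_ml.mfs.back -> metadata.mfs.back
mfsmaster -a
mfsmaster -a replays the changelogs against the newest metadata backup, writes a fresh metadata.mfs, and then starts the master process.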

SVN commit doesn't complete

When I commit files in SVN, I often hit a situation where, after transmitting all the files, svn hangs and eventually times out with the error svn: E175012: Connection timed out.
This seems to happen when I am uploading more than say 20 files.
I believe this happens after all of the files have been transferred to the server, as either no new periods are being added after "Transmitting file data" in the console, or all of the files are listed as sent in Tortoise. Also, if I then do an update from the repository, I get merges for all of the files I've just tried to commit (or, more annoyingly, a ton of conflicts to resolve), and when I then go to commit again there is nothing to commit - presumably meaning all of the files were successfully transmitted the first time.
What could be causing this? It seems like the client is waiting for an 'all done' message from the server that never arrives back at my PC.
Our set up is TortoiseSVN 1.8.2 on the client and VisualSVN Server 2.7 on the server.
I've checked for error messages in VisualSVN's event log on the server and there aren't any. This happens on both the office network and over VPN, and whether working on Wi-Fi or a wired connection.
Check whether any post-commit hook script is processing your commits.
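If there is one, a quick way to confirm it is the culprit is to log timestamps around its work (a sketch for VisualSVN, whose per-repository hooks live in the repository's hooks folder as post-commit.cmd; the log path is made up):
echo %date% %time% post-commit start rev %2 >> C:\Temp\svn-hook.log
rem ... existing hook commands ...
echo %date% %time% post-commit end rev %2 >> C:\Temp\svn-hook.log
The client's commit does not finish until the post-commit hook exits, so a slow or hung hook looks exactly like a commit that transmits everything and then times out.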

Is there a version control feature in Oracle BI Answers for a single Analysis?

I built an Analysis that displayed Results, error free. All is well.
Then, I added some filters to existing criteria sets. I also copied an existing criteria set, pasted it, and modified its filters. When I try to display results, I see a View Display Error.
I'd like to revert to that earlier, functional version of the analysis, hopefully without manually undoing all of the filter and criteria changes I've made since then.
If you’ve seen a feature like this, I’d like to hear about it!
Micah-
Great question. There are many times in the past when we wished we had some simple SCM on the Oracle BI Web Catalog. There is currently no "out of the box" source control for the web catalog, but some simple work-arounds do exist.
If you have access server side where the web catalog lives you can start with the following approach.
Oracle BI Web Catalog Version Control Using GIT Server Side with CRON Job:
1) Make a backup of your web catalog!
2) Create a GIT repository in the web catalog base directory, where the root dir and root.atr file exist.
3) Commit everything initially. ( git add -A; git commit -a -m "initial commit"; git push )
4) Set up a CRON job to run a script hourly, minutely, etc. that tells GIT to auto-commit any adds/deletes/modifications to your GIT repository, as sketched after this list. ( git add -A; git commit -a -m "auto_commit_$(date +"%Y-%m-%d_%T")"; git push )
Here are the issues with this approach:
If the CRON runs hourly, and an Analysis changes 3 times in the hour, you'll be missing some versions in there.
No actual user-submitted commit messages.
Object details such as the object's pretty "Name" (caption), Description (populated by the user in the Save dialog), ACLs, and custom properties are stored in a binary file format. These files have the .atr extension. The good news, though, is that the actual object definition is stored in a plain-text XML file (the one without the .atr extension).
Take this as a baseline, and build upon it. Here is how you could step it up!
Use incron or other inotify-based file monitoring, such as the ruby-based guard. Using this approach you could commit nearly instantly any time a user saves an object and the BI server updates the file system. An example incrontab entry follows below.
Along with inotify, you could leverage the BI SOAP API to retrieve the actual object details, such as the Description. This would allow you to create meaningful commit messages. Or, parse the binary .atr file and pull the info out. Here are some good links to learn more about Web Cat ATR files: Link (Keep in mind these links discuss OBI 10g. The binary format for 11g has changed slightly.)
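For the incron route, the incrontab entry itself is small (the watched path and script name are made up for the example):
/app/oracle/biee/webcat IN_CLOSE_WRITE,IN_CREATE,IN_DELETE,IN_MOVED_TO /usr/local/bin/webcat_autocommit.sh
where webcat_autocommit.sh runs the same git add/commit/push sequence as the cron script. Keep in mind incron watches a single directory rather than a whole tree, so a deep catalog needs one entry per directory or a recursive watcher such as the ruby guard mentioned above.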

SVN hooks for Windows

Can I write a command in SVN hooks for Windows that automatically relocates some folders to another location in the repository?
The hook must run on the server.
For example: a user commits files from his working copy (C:\svnworkingcopy\dev).
On the server, a hook runs and automatically relocates or copies these files into another folder of the repository (https://svnserver/onlyread), where this user has read-only permission.
Thanks!
svn switch --relocate a user's working copy with a hook script? It looks like you are confusing the terms. Nevertheless, I advise you to check the following warning in the SVNBook:
While hook scripts can do almost anything, there is one dimension in which hook script authors should show restraint: do not modify a commit transaction using hook scripts. While it might be tempting to use hook scripts to automatically correct errors, shortcomings, or policy violations present in the files being committed, doing so can cause problems. Subversion keeps client-side caches of certain bits of repository data, and if you change a commit transaction in this way, those caches become indetectably stale. This inconsistency can lead to surprising and unexpected behavior. Instead of modifying the transaction, you should simply validate the transaction in the pre-commit hook and reject the commit if it does not meet the desired requirements. As a bonus, your users will learn the value of careful, compliance-minded work habits.
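In that spirit, a pre-commit hook for your case would validate rather than relocate: reject any commit that touches the read-only area and let users commit to the right place themselves. A minimal sketch as a Windows pre-commit.cmd (the onlyread path and the message are made up for the example):
@echo off
rem %1 = repository path, %2 = transaction name (passed in by Subversion)
set REPOS=%1
set TXN=%2
rem list the paths this transaction changes and look for the protected area
svnlook changed -t %TXN% %REPOS% | findstr /c:"onlyread/" >nul
if not errorlevel 1 (
  echo Changes under onlyread/ are not allowed from this account. 1>&2
  exit /b 1
)
exit /b 0
Anything the hook writes to stderr before exiting non-zero is relayed to the committing user, so the message above shows up in their client.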
