I created a new development stream out of an Integration stream. On the dev stream I created a dynamic view and an activity. I then added a new directory to an existing directory using clearfsimport.
e.g., cd to the parent directory where I want to add the new dir, then run:
clearfsimport -recurse -follow -nsetevent -c "adding new version" ~/newdir .
When it's all done I try to deliver the activity using ClearCase Project Explorer. This throws an error like so:
"Merge Manager: warning: Element xxxx is not visible in view <Integration view name>
... ... ...
If this element should be visible cancel this operation, fix the problem, and re-run the operation"
I have been doing this every week for months now and never had an issue. I'm really not sure what I'm missing here or how to fix it. If it helps, the mastership of the Integration stream was transferred from a remote replica to ours. All my previous deliveries were on the remote replica. But now I have complete mastership over the Integration stream.
It depends on the nature of the target view (the one in which you are doing the deliver, the one associated with the integration stream): snapshot or dynamic.
You would need to check if the parent folder of xxxx is correctly checked out (or if you can check it out).
A cleartool ls in that parent folder of 'xxxx' in the target (integration) view can help to test if everything seems OK.
If you are using a snapshot view, a cleartool update before the deliver can help too.
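For example, a quick check from the integration view might look like this (view tag, VOB and paths below are hypothetical):
cleartool ls /view/int_view/vobs/myvob/parent_dir_of_xxxx
cleartool checkout -nc /view/int_view/vobs/myvob/parent_dir_of_xxxx
and, if the integration view is a snapshot view, refresh it from within the view root before delivering:
cleartool update .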
Since the integration stream mastership has just been transferred from a remote replica to your site, I would suggest the following:
1) Make sure you have created a view associated with the integration stream on your site
2) Make sure that the default delivery view is set to your integration view. You might need to reset it using the command
cleartool deliver -reset -to your-integration-view
The command above has to be launched from your development view.
I suggest you have a look at the cleartool deliver command
I have an Azure blob container with data which I have not uploaded myself. The data is not locally on my computer.
Is it possible to use dvc to download the data to my computer when I haven’t uploaded the data with dvc? Is it possible with dvc import-url?
I have tried using dvc pull, but can only get it to work if I already have the data locally on the computer and have used dvc add and dvc push.
And if I do it that way, then the folders on Azure are not human-readable. Is it possible to upload them in a human-readable format?
If it is not possible is there then another way to download data automatically from azure?
I'll build up on @Shcheklein's great answer - specifically on the 'external dependencies' proposal - and focus on your last question, i.e. "another way to download data automatically from Azure".
Assumptions
Let's assume the following:
We're using a DVC pipeline, specified in an existing dvc.yaml file. The first stage in the current pipeline is called prepare.
Our data is stored on some Azure blob storage container, in a folder named dataset/. This folder follows a structure of sub-folders that we'd like to keep intact.
The Azure blob storage container has been configured in our DVC environment as a DVC 'data remote', with name myazure (more info about DVC 'data remotes' here)
High-level idea
One possibility is to start the DVC pipeline by synchronizing a local dataset/ folder with the dataset/ folder on the remote container.
This can be achieved with a command-line tool called azcopy, which is available for Windows, Linux and macOS.
As recommended here, it is a good idea to add azcopy to your account or system path, so that you can call this application from any directory on your system.
The high-level idea is:
Add an initial update_dataset stage to the DVC pipeline that checks if changes have been made in the remote dataset/ directory (i.e., file additions, modifications or removals).
If changes are detected, the update_dataset stage shall use the azcopy sync [src] [dst] command to apply the changes on the Azure blob storage container (the [src]) to the local dataset/ folder (the [dst]).
Add a dependency between update_dataset and the subsequent DVC pipeline stage prepare, using a 'dummy' file. This file should be added to (a) the outputs of the update_dataset stage; and (b) the dependencies of the prepare stage.
Implementation
This procedure has been tested on Windows 10.
Add a simple update_dataset stage to the DVC pipeline by running:
$ dvc stage add -n update_dataset -d remote://myazure/dataset/ -o .dataset_updated azcopy sync \"https://[account].blob.core.windows.net/[container]/dataset?[sas token]\" \"dataset/\" --delete-destination=\"true\"
Notice how we specify the 'dummy' file .dataset_updated as an output of the stage.
Edit the dvc.yaml file directly to modify the command of the update_dataset stage. After the modifications, the command shall (a) create the .dataset_updated file after the azcopy command - touch .dataset_updated - and (b) write the current date and time to the .dataset_updated file to guarantee uniqueness between different update events - echo %date%-%time% > .dataset_updated.
stages:
update_dataset:
cmd: azcopy sync "https://[account].blob.core.windows.net/[container]/dataset?[sas token]" "dataset/" --delete-destination="true" && touch .dataset_updated && echo %date%-%time% > .dataset_updated # updated command
deps:
- remote://myazure/dataset/
outs:
- .dataset_updated
...
I recommend editing the dvc.yaml file directly to modify the command, as I wasn't able to come up with a complete dvc stage add command that took care of everything in one go.
This is due to the use of multiple commands chained by &&, special characters in the Azure connection string, and the echo expression that needs to be evaluated dynamically.
To make the prepare stage depend on the .dataset_updated file, edit the dvc.yaml file directly to add the new dependency, e.g.:
stages:
prepare:
cmd: <some command>
deps:
- .dataset_updated # add new dependency here
- ... # all other dependencies
...
Finally, you can test different scenarios on your remote side - e.g., adding, modifying or deleting files - and check what happens when you run the DVC pipeline up till the prepare stage:
$ dvc repro prepare
Notes
The solution presented above is very similar to the example given in DVC's external dependencies documentation.
Instead of the az copy command, it uses azcopy sync.
The advantage of azcopy sync is that it only applies the differences between your local and remote folders, instead of 'blindly' downloading everything from the remote side when differences are detected.
This example relies on a full connection string with an SAS token, but you can probably do without it if you configure azcopy with your credentials or fetch the appropriate values from environment variables.
When defining the DVC pipeline stage, I've intentionally left out an output dependency with the local dataset/ folder - i.e. the -o dataset part - as it was causing the azcopy command to fail. I think this is because DVC automatically clears the folders specified as output dependencies when you reproduce a stage.
When defining the azcopy command, I've included the --delete-destination="true" option. This allows synchronization of deleted files, i.e. files are deleted on your local dataset folder if deleted on the Azure container.
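As a sketch of the credential-based alternative from the note above (whether this works depends on the roles configured on your Azure storage account), you could log azcopy in once and then drop the SAS token from the URL:
azcopy login
azcopy sync "https://[account].blob.core.windows.net/[container]/dataset" "dataset/" --delete-destination="true"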
Please bear with me, since you have a lot of questions. The answer needs a bit of structure and background to be useful. Or skip to the very end to find some new ways of addressing "Is it possible to upload them in a human-readable format?" :). Anyway, please let me know if that solves your problem, and in general it would be great to have a better (high-level) description of what you are trying to accomplish in the end.
You are right that by default DVC structures its remote in a content-addressable way (which makes it non human-readable). There are pros and cons to this: it's easy to deduplicate data, it's easy to enforce immutability and make sure that no one can touch it directly and remove something, directory names in projects stay connected to the actual project and keep their meaning, etc.
Some materials on this: Versioning Data and Models, my answer on how DVC structures its data, and the upcoming Data Management User Guide section (still WIP).
That said, it's clear there are downsides to this approach, especially when it comes to managing a lot of objects in the cloud (e.g. millions of images, etc). To name a few concerns that I see a lot as a pattern:
Data has been created (and is being updated) by someone else: some ETL process, a third-party tool, etc. We need to keep that format.
A third-party tool expects the data in a "human-readable" layout. It doesn't integrate with DVC and so cannot access the data indirectly via Git (one example: Label Studio needs direct links to S3).
It's not practical to move all of the data into DVC; it doesn't make sense to instantiate all the files at once as one directory. Users need slices, usually based on some annotations (metadata), etc.
So, DVC has multiple features to deal with data in its own original layout:
dvc import-url - it'll download objects, cache them, and by default push them (dvc push) to the remote again to guarantee reproducibility (this can be changed). This command creates a special .dvc file that is used to detect changes in the cloud and see if DVC needs to download something again. It should cover the case of "download data automatically from Azure".
dvc get-url - this is more or less wget, rclone, aws s3 cp, etc. with multi-cloud support. It just downloads objects.
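For illustration, a rough sketch of both commands against an Azure container (the container and folder names are hypothetical):
dvc import-url azure://mycontainer/dataset data/dataset
dvc get-url azure://mycontainer/dataset data/dataset
The first one leaves a data/dataset.dvc file behind that you can later refresh with dvc update; the second one is a plain one-off download.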
A slightly more advanced option (if you use DVC pipelines):
Similar to import-url but for DVC pipelines - external dependencies
The third (new) option is in beta phase. It's called "cloud versioning", and essentially it tries to keep the storage human-readable while still benefiting from using .dvc files in Git if you need them to reference an exact version of the data.
Cloud Versioning with DVC (it's WIP as I write this; if the PR is merged you can find it in the docs).
The document summarizes well the approach:
DVC supports the use of cloud object versioning for cases where users prefer to retain their original filenames and directory hierarchy in remote storage, in exchange for losing the de-duplication and performance benefits of content-addressable storage. When cloud versioning is enabled, DVC will store files in the remote according to their original directory location and filenames. Different versions of a file will then be stored as separate versions of the corresponding object in cloud storage.
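For reference, enabling it currently looks roughly like this (the option name may still change while the feature is in beta, and the Azure container needs blob versioning enabled; myazure is the remote name assumed earlier):
dvc remote modify myazure version_aware true
dvc push
After that, pushed files keep their original names and folder structure in the container instead of the content-addressable layout.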
Our Jenkins job downloads some code from a Perforce server, using a pre-defined workspace. It
sometimes fails with the following error message:
Client 'xxxx' can only be used from host 'yyyy'.
When I look at the workspace ("client" is an obsolete name for workspace), I see that its settings don't mention host yyyy at all.
I suspect that people (or unknown scripts) change the workspace's settings, do some work and then change them back. If a Jenkins job is scheduled to run during that time, it fails.
How can I determine if I guessed correctly? Are there any logs on the Perforce server which report workspace changes? Maybe some server setting to record all changes to workspaces?
Workspace settings look like something I should be able to track and/or revert using version control; is this really the case?
First and foremost, you should set the locked option on the client if you don't want anyone else messing with it (and set its Owner to be the user who runs the Jenkins job, and ensure that this user is password-protected so that nobody else can impersonate Jenkins).
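A minimal sketch of doing that from the command line (the client name jenkins-ws is hypothetical; you can equally run p4 client jenkins-ws and edit the Owner and Options fields by hand):
p4 client -o jenkins-ws | sed 's/ unlocked/ locked/' | p4 client -i
Once locked, only the owner (or an admin using p4 client -f) can modify or delete the spec.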
To track changes to client specs, you can set up a spec depot (just create a depot with Type: spec). This will cause every spec update to be saved in that depot as a revision of a text file, e.g. client xxxx will correspond to a text file called //spec/client/xxxx. You can run normal commands like p4 annotate on that file to see its change history, and you can pipe old versions of the file into the current client spec by doing, e.g.:
p4 print -q //spec/client/xxxx | p4 client -i
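If you don't already have a spec depot, creating one is a one-time admin step; a minimal sketch (spec is the conventional depot name):
p4 depot spec
then set Type: to spec in the form that opens and save it. From that point on, every spec change (clients, labels, etc.) is archived under //spec/...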
But again, first and foremost, persistent clients that automation depends on should simply be locked so that they can't be sabotaged (intentionally or unwittingly) by other users.
I am working with ClearCase/ClearQuest, where I have to create the CQ records for defects for the developers. Each defect has to be delivered to the old streams as well as the current stream, so for each defect I have to create 3 CQs for a single developer. Say I have three streams:
8.0_dev
9.0_dev
10.0_dev
So I create the same defect CQ for the above three streams. Now the problem is that the developer doesn't care to check which stream a CQ belongs to. He commits code in the 8.0_dev branch using the CQ of 10.0_dev, and that creates a mess for me when I create the release notes. I want to restrict commits to the respective CQ assigned to the stream: ClearCase should give an error if the CQ assigned to 8.0_dev is used for committing in any other stream; it must be usable only for commits in 8.0_dev and nowhere else.
Please advise me on how to implement this.
One possible lead is a preop trigger on the deliver operation (with a cleartool mktrtype similar to the one I mentioned in "clearcase rebase permission to specific person").
In your script implementing that check (and called by the trigger), display the ClearCase environment variables which are available, and see if one of them mentions the UCM activity named after the CQ you created. If that activity doesn't match the target stream, the currently set activity is not the right one, and you can exit with a non-zero status to block the operation.
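As a rough sketch of the trigger type creation (the trigger name, script paths and the exact check are assumptions to adapt; the script just has to end with a non-zero exit status to block the deliver):
cleartool mktrtype -ucmobject -all -preop deliver_start -execunix "/usr/local/triggers/check_cq_stream.sh" -execwin "ccperl \\server\triggers\check_cq_stream.pl" -c "reject delivers whose CQ activity does not match the target stream" CHECK_CQ_STREAM
The script itself would compare the activity/stream information exposed through the CLEARCASE_* environment variables against your CQ naming convention.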
We recently changed the mastership of a stream from one site (inh) to another (ies). Things were fine until the following error appeared.
Now a delivery from a child branch to the "moved" branch results in an error. Not all merges are problematic; only certain directories (or so I think) are not merging.
Unable to perform operation "make branch" in replica "interfaces_src_ies" of VOB "\interfaces_src".
Master replica of branch type "project_subset_QPE-5060" is "interfaces_src.inh".
There is no candidate version which can be checked out.
Unable to check out "M:\dyn_project_subset\interfaces_src\src\java\src\chs\cof\project".
How can I fix this? How can I change the mastership of branch type "project_subset_QPE-5060" to interfaces_src.ies?
That should mean, as detailed in the IBM technote swg21142784, that the mastership transfer was incomplete.
That can happen when there was a checked out file at the time of the transfer.
Make sure there are no checked-out files (on either site), and try to transfer the mastership again (even if it says it is already transferred).
Or, as described in the technote, try and create the branch on the other site, and create a synchronization packet from the mastering site using multitool syncreplica -export so the site where the element creation is going to happen receives the mkbranch operation.
You see that kind of operation in IBM technote swg21118471.
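A minimal sketch of that export from the current mastering site (replica and VOB names are taken from your error message; the shipping options depend on your MultiSite configuration):
multitool syncreplica -export -fship replica:interfaces_src.ies@\interfaces_src
and, on the receiving site, if packets are not imported automatically:
multitool syncreplica -import -receive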
On Windows, this setting can also help prevent this situation:
cleardlg.exe/options/Operations tab/Advanced Options:
When creating an element in a replicated VOB,
make current replica the master of all newly created branches.
I also had this exact issue when trying to check out a file to modify it.
I was able to create a view, but when I tried to checkout a file it kept complaining about:
Error checking out '<file>'.
Unable to perform operation "make branch" in replica "<branch>" of VOB "<vob>".
Master replica of branch type "<type>" is "<X>"
Unable to check out "<file>"
This was fixed by changing the ClearCase Registry Server to the correct host, and then re-creating the View.
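If you want to check which registry host a client is currently pointing at before making that change, a quick way is:
cleartool hostinfo -long
which lists, among other things, the registry host and region in use.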
I built an Analysis that displayed Results, error free. All is well.
Then, I added some filters to existing criteria sets. I also copied an existing criteria set, pasted it, and modified its filters. When I try to display results, I see a View Display Error.
I’d like to revert to that earlier functional version of the analysis, hopefully without manually undoing all of the filter & criteria changes I made since then.
If you’ve seen a feature like this, I’d like to hear about it!
Micah-
Great question. There are many times in the past when we wished we had some simple SCM on the Oracle BI Web Catalog. There is currently no "out of the box" source control for the web catalog, but some simple work-arounds do exist.
If you have server-side access to where the web catalog lives, you can start with the following approach.
Oracle BI Web Catalog Version Control Using GIT Server Side with CRON Job:
1) Make a backup of your web catalog!
2) Create a GIT repository in the web catalog base directory, where the root dir and root.atr file exist.
3) Do an initial commit of everything. ( git add -A; git commit -a -m "initial commit"; git push )
4) Set up a CRON job to run a script hourly, minutely, etc. that will tell GIT to auto-commit any adds/deletes/modifications to your GIT repository. ( git add -A; git commit -a -m "auto_commit_$(date +"%Y-%m-%d_%T")"; git push )
Here are the issues with this approach:
If the CRON runs hourly and an Analysis changes 3 times within the hour, you'll be missing some versions.
No actual user-submitted commit messages.
Object details such as the object's pretty "Name" (caption), Description (user-populated in the Save dialog), ACLs, and custom properties are stored in a binary file format. These files have the .atr extension. The good news, though, is that the actual object definition is stored in a plain-text XML file (without the .atr).
Take this as a baseline, and build upon it. Here is how you could step it up!
Use incron or another inotify-based file-monitoring tool, such as the ruby-based guard. Using this approach you could commit nearly instantly any time a user saves an object and the BI server updates the file system (a rough sketch follows at the end of this answer).
Along with inotify, you could leverage the BI SOAP API to retrieve the actual object details such as Description. This would allow you to create meaningful commit messages. Or, parse the binary .atr file and pull the info out. Here are some good links to learn more about Web Cat ATR files: Link (keep in mind these links are discussing OBI 10g; the binary format for 11g has changed slightly.)
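For the inotify-based idea above, a rough sketch using inotifywait from the inotify-tools package (again, /path/to/webcat is hypothetical):
inotifywait -m -r -e close_write,create,delete,moved_to,moved_from /path/to/webcat | while read -r dir event file; do
    git -C /path/to/webcat add -A
    git -C /path/to/webcat commit -m "auto: $event $dir$file $(date +%F_%T)" && git -C /path/to/webcat push
done
Each save by a user then triggers a near-immediate commit; the SOAP API lookup for a friendlier commit message could be added inside the loop.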