UCM: How to exclude files from a ClearCase deliver - clearcase-ucm

We have several Activities that have versions of files associated with them that we do not want to deliver to the Integration Stream because they're not needed (they were only needed during troubleshooting in the local environment).
However, even though the ClearQuest records are in the Closed state, they continue to show up as candidates during a delivery. How can we prevent this?
I'd rather not have to create a new element type "never merge" and execute chtype commands on each file we don't want to merge.

See "ClearCase : Making new baseline with old baseline activities": if some of those activities were previously delivered (Dev to Int), then all of the Activities on Dev are now linked by a "timeline", which make them all candidate for deliver.
One solution is to findmerge each activity individually (a non-UCM merge).
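A rough sketch of what that per-activity merge could look like (the activity selector, PVOB path, element path, and branch name below are placeholders, and the exact options may need adjusting for your site):

# List the change set of one troubleshooting activity in the project VOB
cleartool lsactivity -fmt "%[versions]Cp\n" activity:my_tshoot_act@/vobs/my_pvob

# From a view on the integration stream, merge each listed element from the
# development branch with a plain merge (no deliver involved)
cleartool findmerge path/to/element -fversion .../my_dev_stream/LATEST -merge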
But a better long-term solution is to make sure that any troubleshooting activity is done in a sub-stream instead of being mixed in with the other activities on the Dev stream.

Related

Alter 'status' request interval of CloudBuild submit

I'm trying to set up CI/CD for a mono repository using Google Cloud Build. We have a single Cloud Build trigger that starts a build on a new commit; it does some general steps and then starts a build for every (micro)service in the mono repository using gcloud builds submit.
This, however, means that if 4 or 5 people push code to the repository at roughly the same time, we can have around 50-70 concurrent builds running in Cloud Build, which in itself isn't an issue for us. The only issue is that when this happens, the following error pops up:
{
  "code": 429,
  "message": "Quota exceeded for quota metric 'Build and Operation Get requests' and limit 'Build and Operation Get requests per minute' of service 'cloudbuild.googleapis.com' for consumer 'project_number:<PROJECT_NUMBER>'.",
  "status": "RESOURCE_EXHAUSTED",
  "details": [{
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "reason": "RATE_LIMIT_EXCEEDED",
    "domain": "googleapis.com",
    "metadata": {
      "service": "cloudbuild.googleapis.com",
      "consumer": "projects/<PROJECT_NUMBER>",
      "quota_limit": "GetRequestsPerMinutePerProject",
      "quota_metric": "cloudbuild.googleapis.com/get_requests"
    }
  }]
}
In other words: we are running into quota limits. The quota only allows us to make 900 operational requests per minute.
We already tried switching to private pools in the hope that the above quota limit only applied when you don't use private pools, but unfortunately we still hit the quota.
Now I am trying to find out whether I can decrease the number of these operational requests.
A possible solution might be related to how I am using gcloud builds submit. When you run gcloud builds submit, it starts a new build, waits for the build to finish, and shows the output of the build. To achieve this, I presume that gcloud makes a request every few seconds to find out what the status of the build is. I suspect that these 'status' requests are why my Cloud Build quota limit is reached, which is why I'm trying to see how I can lower the number of these requests per minute.
One option is to simply decrease the number of builds running in parallel, which is unfortunately not possible in my situation: if I execute them sequentially, it simply takes more time than is acceptable.
Another option would be to increase the time between such 'status' requests. However, on this page I unfortunately did not find a CLI flag to alter this.
Note: I did find the --async flag, but that does NOT help me, since I still want the process to wait until the build has succeeded. I also found --suppress-logs, which does NOT help either, since those requests presumably don't go to Cloud Build but to the GCS bucket where the logs are stored.
The only option left that I can think of is to start my builds with the --async flag and then manually poll whether the build has succeeded, at a longer interval. However, that feels like a lot of manual work for which I would need to write and maintain some bash scripts, so it isn't a path I would like to take unless really necessary.
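Sketched out, that fallback would look roughly like the following (the config path, the build ID capture via --format, the status values I check for, and the 30-second interval are all assumptions on my side):

# Start the build without streaming, capturing only the build ID
BUILD_ID=$(gcloud builds submit --async --config=cloudbuild.yaml --format='value(id)' .)

# Poll the status far less often than gcloud's own streaming does
while true; do
  STATUS=$(gcloud builds describe "$BUILD_ID" --format='value(status)')
  case "$STATUS" in
    SUCCESS) exit 0 ;;
    FAILURE|INTERNAL_ERROR|TIMEOUT|CANCELLED|EXPIRED) exit 1 ;;
    *) sleep 30 ;;  # QUEUED / WORKING: wait before checking again
  esac
done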
Does anyone know of another way of achieving this?
If 4 or 5 people push code to the repository
This shouldn't happen. The reason it shouldn't happen is that you should use the "push" trigger on the main branch, not on a development branch.
What do I mean by this?
I mean that building should occur on the main branch, which corresponds to the joint effort of those five users and a responsible party in charge of unifying their changes.
So, really, your users should be pushing to the development branch, and pushes to main should be reserved for things that need to be built.
How can we work around this if we're only allowed one branch or are required to have updates visible on one branch?
My recommendation would be to use the tag filter, i.e. filter the pushes by tag, as mentioned in the documentation. That way only the pushes from the person in charge of merging the changes will be built (assuming that this person pushes the tag you've set).
TL;DR
Don't create push triggers for Cloud Build on a branch multiple people are working on. Either create the trigger with a tag filter or have separate development and main branches (people work on dev, builds are only made from pushes to main).
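For the tag-filter route, creating such a trigger could look something like this (the repository details, tag pattern, and config file name are placeholders, not taken from your setup):

gcloud builds triggers create github \
  --repo-owner=my-org --repo-name=my-mono-repo \
  --tag-pattern='^release-.*$' \
  --build-config=cloudbuild.yaml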

VAADIN: Size of UI.access() push queue

I would like to monitor my pushes to the clients with the famous UI.access() ... sequence on the server side.
Background is that I have to propagate lots of pushes to my clients and I want to make sure nothing gets queued up.
The only thing I found is the client RPCQueue, which has a size(), but I have no idea whether that is the right thing to look at, nor how to access it.
Thanks for any hint.
Gerry
If you want to know the size of the queue of tasks that have been enqueued using UI.access but not yet run, then you can use VaadinSession.getPendingAccessQueue.
This will, however, not give the full picture since it doesn't cover changes that have been applied to the server-side state (i.e. the UI.access task has already been executed) but not yet sent to the client. Those types of changes are tracked in a couple of different places depending on the type of change and the Vaadin version you're using.
For this kind of use case, it might be good to use the built-in beforeClientResponse functionality to apply your own changes as late as possible instead of applying changes eagerly.
With Vaadin versions up to 8, you do this by overriding the beforeClientResponse method in your component or extension class. You need to use markAsDirty() to ensure that beforeClientResponse will eventually be run for that instance.
With Vaadin 10 and newer, there's instead a UI.beforeClientResponse to which you give a callback that will be run once, at an appropriate time, by the framework.
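As a minimal sketch (assuming Vaadin 10+/Flow; the class and method names here are made up for illustration), both ideas could look like this:

import com.vaadin.flow.component.UI;
import com.vaadin.flow.component.html.Span;
import com.vaadin.flow.server.VaadinSession;

public class PushMonitor {

    // Rough size of the UI.access() backlog for the session owning this UI.
    // It only counts tasks enqueued via access() that have not yet run; it
    // does not include server-side changes still waiting to be pushed.
    public static int pendingAccessTasks(UI ui) {
        VaadinSession session = ui.getSession();
        return session == null ? 0 : session.getPendingAccessQueue().size();
    }

    // Defer the actual change until just before the response/push is written,
    // instead of applying it eagerly inside every access() task.
    public static void updateLazily(UI ui, Span label, String value) {
        ui.access(() ->
                ui.beforeClientResponse(label, context -> label.setText(value)));
    }
}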

Bulk publish option (Publish inc. Subnodes) with cache in magnolia

I am using Magnolia Enterprise Standard version 5.3. We have the Publish and Publish incl. Subnodes options for different apps. Can someone please tell me how the cache works when we publish a tree structure? That is, does it publish each node one by one and flush the public cache after each node, or does it publish the whole tree first and then flush the public cache?
Actually, I want to apply a wait time for bulk publish; before that, I want to understand the cache's role while we publish the tree structure.
Can we add a wait time for bulk publish?
I am not talking about multisite cache things.
It depends on how you configured the cache, or rather the flush policy (or, actually, the observer that triggers the flush policy). IIRC, by default it is configured such that when an event ("something was published") arrives, it will wait and collect all other incoming activations that arrive within one second. If nothing arrives for one second after the last event, the event with the aggregated messages is passed on to the flush policy. If, on the other hand, events keep arriving, observation will keep collecting and aggregating them for a maximum of 4 seconds before reacting and flushing the cache. (I hope 1 second and 4 seconds are the correct intervals; it has been a couple of years since I last dug into that area, so it might have changed slightly since.)
In EE you also have the possibility to configure other caching policies: you can have a dual cache where one is always pre-heated with the new content before the other is flushed, or you can write a completely custom policy that suits your needs.

Workflow Waiting Forever

I have a workflow that runs when an entity is created; it creates two other entities and puts them on a queue. It then waits until each entity's status reason is set to done, after which it continues.
Basically two teams will work an order and then it will continue processing after both teams are done.
Most of the time it works. However, sometimes it waits forever. I'll re-activate and re-resolve the other tasks, but it just never wakes up.
What can I do? The workflows aren't really powerful enough for me to have it poll with a timeout (there are no loops). I'd like to avoid on-change plugins for these other entities to get workflow behavior all scattered about.
Edit:
Restarting the CRM services (not sure which did it, I restarted them all) allowed the workflow to resume. However, I'd still like to know how to make this more reliable.
I had the same problem (and a lot more) with workflows in CRM 2011 and decided not to use them (except for very special purposes).
The main reason is their very limited error handling. Another reason is that it is inconvenient to put them under source control. Other reasons: workflows cannot run offline, and user impersonation is also not supported. For a comparison, look here: http://goo.gl/9ht1QJ
Use plugins instead of workflows, then you have full control.
But keep in mind that plugins (unlike workflows) are not designed for long running tasks.
So they have a default maximum execution time of 120 seconds and are not stateful/persisted. But in most cases (and I think also in your case) that is not a problem.
Just change your eventing a little bit:
Implement and register a plugin step for "entity is created": it creates the two other entities and puts them on a queue.
Implement and register another step for "entity's status reason is set to done": query for the other entity and check its status; if both are done, continue processing.
If you really do not want to use plugins for your business logic, you can consider implementing a plugin which restarts/resumes faulted workflows.
But that's not a very nice solution.

What strategies work to update long-running processes in SOA

In SOA practice, what strategies work better (or work at all) to update long-running processes (in particular for Oracle BPEL)? For example, a process may involve several human steps, which by their nature are time consuming. SOA suites support starting new instances on the new version of a process while already-running instances continue executing on the old one. But what do we do if the orchestration logic needs to be updated and applied to already-running instances? Let's assume we no longer want purchase orders to pass management approval, and we would like this change applied to all orders, even those being executed.
You cannot change the business process for anything which is in flight. Changes can only be applied to new processes. This is not a technical limitation, it is just common sense. Apart from anything else, it would confuse audit trails and regulatory compliance.
If you have so catastrophically mis-designed a process - "we forgot to include management approval for orders!" *facepalm* - all you can do is shut off the server and clean up any half-completed processes. But that would be a really drastic step to take.
So the only strategy which is going to work is review and acceptance testing.
