How to cancel a delivery in ClearCase - clearcase-ucm

I want to cancel (undo) a delivery in ClearCase for my Dev stream, but it gives the error below:
"unable to cancel delivery because another operation is trying to complete it"
What could be the possible cause, and how can I resolve it?

Unless you are using an old ClearCase 7.0 (which has a fix to avoid that error), this can happen when the deliver was started twice.
The exact error message is:
cleartool deliver -cancel
Cancel deliver
FROM: stream "<source-stream>"
TO: stream "<target-stream>"
Using target view: "<target-view>".
Are you sure you want to cancel this deliver operation? [no] yes
cleartool: Error: Unable to cancel delivery because another operation is trying to complete.
cleartool: Error: Unable to cancel deliver.
("trying to complete", no "it" at the end)
Attempting to start a deliver twice in the Windows GUI results in a stuck deliver operation.
This is applicable when using UCM with ClearQuest (CQ) integration and having the ClearQuest policy, Transition To Complete After Delivery, enabled.
The CQ policy, Transition To Complete After Delivery, tries to transition the activities to complete, but cannot find any.
This causes the deliver -complete to fail.
More generally, check your OS processes to determine whether another process is keeping a handle open that would prevent the cancellation.
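For reference, a typical command-line sequence for inspecting and then cancelling the stuck deliver looks like this (a sketch only: the stream and PVOB names are placeholders, and the -reset option, which re-associates the deliver with its target view before cancelling, varies by release, so check cleartool man deliver first):

cleartool deliver -status -stream stream:Dev_Stream@\MyPVOB
cleartool deliver -cancel -stream stream:Dev_Stream@\MyPVOB
cleartool deliver -cancel -reset -to <target-view-tag>

If the cancel still fails, close any ClearCase GUI (Project Explorer, version tree browser) or ClearQuest process that might be holding a handle on the integration view, then retry.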

Related

Azure Event Hub connection is closed error

I see an error like the following while using Azure Event Hub to send event messages. However, the metrics in the Azure portal show that the event messages are being delivered to the event hub, so I'm puzzled about what this error message means.
According to the Azure documentation (https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-amqp-troubleshoot): "You see the following error on the AMQP connection when all links in the connection have been closed because there was no activity (idle) and a new link hasn't been created in 5 minutes."
The doc also said "You can avoid them by making send/receive calls on the connection/link, which automatically recreates the connection/link."
What should be done about this error message? Although the event messages are being sent, I worry that there may be some hidden issue.
" Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'. TrackingId:00000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T17:32:00', info=null} "
I found that if I call the close() method on the EventHubProducerClient (following the sample code in https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-java-get-started-send), this error no longer appears. However, doing that means a new EventHubProducerClient has to be created every time an event needs to be sent. I'm not sure whether that would create other problems (such as the time required to create each new EventHubProducerClient, and memory consumption), as there can be many events to send.
In another search, I found in "How to configure Producer.close() in Eventhub" that it is recommended to close the producer client after using it.
However, if the message above is not actually an error, whether or not to close the client may not matter.
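If reusing a single producer is the concern, here is a minimal sketch of the reuse-then-close-at-shutdown pattern. It uses the .NET Azure.Messaging.EventHubs client to match the other C# examples on this page; the connection string and hub name are placeholders, and the same idea applies to the Java EventHubProducerClient the question refers to: keep one client alive, let each send re-open any idle AMQP link, and close only when the application stops.

using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class Sender
{
    // One producer for the lifetime of the process; the "connection was inactive"
    // trace is harmless because the next send transparently re-creates the link.
    private static readonly EventHubProducerClient producer =
        new EventHubProducerClient("<connection-string>", "<event-hub-name>");

    static async Task Main()
    {
        using EventDataBatch batch = await producer.CreateBatchAsync();
        batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("hello")));
        await producer.SendAsync(batch);

        // Close once, at shutdown, rather than after every send.
        await producer.DisposeAsync();
    }
}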

How to resolve "EWS could not contact the appropriate CAS server for this request"

I have an application that is creating StreamingSubscription (using EWS managed API) for many hundreds of room resource mailboxes in EXO, and I'm trying to make the code tolerant of a subscription going "bad" and needing to be re-created. Here's the behavior I'm seeing at the moment.
I first divide up the mailboxes into groups according to best practices, and then within each group:
I create a StreamingSubscription for each mailbox
I add all the subscriptions to a connection and open the connection
Some time passes, and the OnSubscriptionError event fires for one subscription. At this point I find that the subscription in question is no longer in the connection's CurrentSubscriptions collection, but I'm able to identify which mailbox it was originally for.
I then flag that mailbox so that the code will try to re-create its subscription.
When the code tries to re-create the failed subscription, this error is thrown:
Request failed because EWS could not contact the appropriate CAS server for this request.
Thereafter, my code tries again once per minute to create that subscription, and that same error is thrown each time. This continues for as long as I allow it to run.
If I then stop my Windows service and start it again, all the subscriptions are created successfully, including that failed one.
Here's my question. Why is it able to successfully create the subscription after stopping and re-starting the service, but can't re-create it after the OnSubscriptionError?
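For illustration, here is a minimal sketch of what the re-subscribe path can look like with the EWS Managed API. This is not the poster's code: the service URL, credentials, folder choice, and the X-AnchorMailbox affinity header are assumptions, but anchoring each request to the target mailbox is one of the usual suspects when Exchange Online answers with "could not contact the appropriate CAS server", since a stale affinity can route the subscribe call to the wrong backend.

using System;
using Microsoft.Exchange.WebServices.Data;

class SubscriptionHelper
{
    // Hypothetical helper that rebuilds a streaming subscription for one mailbox.
    public static StreamingSubscription Resubscribe(string mailbox)
    {
        var service = new ExchangeService(ExchangeVersion.Exchange2013_SP1)
        {
            Url = new Uri("https://outlook.office365.com/EWS/Exchange.asmx"),
            Credentials = new WebCredentials("svc-account@contoso.com", "password"),
            ImpersonatedUserId = new ImpersonatedUserId(ConnectingIdType.SmtpAddress, mailbox)
        };

        // Tie the request to the mailbox so Exchange Online routes it to the
        // right backend instead of reusing a stale affinity.
        service.HttpHeaders["X-AnchorMailbox"] = mailbox;

        return service.SubscribeToStreamingNotifications(
            new FolderId[] { WellKnownFolderName.Calendar },
            EventType.Created, EventType.Modified, EventType.Deleted);
    }
}

The returned subscription can then be added back to the existing StreamingSubscriptionConnection with AddSubscription before the connection is (re)opened.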

OSB failing to process message that doesn't exist in queue

My instance of OSB is attempting to process a message from a JMS queue that doesn't exist - I believe it has already been processed and removed - but my current concern is the multiple failures per second as it keeps trying. The error logs are now useless, as they're flooded with failures for this one particular message.
I have rebooted the managed servers and the admin server, but each time it immediately reattempts to process the same message. I believe this is having knock-on effects on performance, and I have had to remove all logs as the file system keeps overflowing.
Where is this "currently processing" message being picked up from, and how can I progress this so that it will not keep trying to reprocess this?
As far as I understand, a problematic message is continuously failing to process within a JMS queue. There are two important actions:
Identify the root cause of the failure. Helping with this depends on the error message; if you provide error details, I may be able to offer suggestions.
Protect the environment with the JMS queue configuration for delivery failure, such as "Expiration Policy", "Redelivery Limit", "Error Destination", etc. (a descriptor sketch follows below).
Please check the Oracle documentation for these configurations.
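As an illustration of the second point (not taken from the poster's environment), the corresponding settings in a WebLogic JMS module descriptor look roughly like this; the queue and error-destination names are placeholders, and the same values can be set in the Admin Console on the queue's Delivery Failure and Overrides pages:

<!-- sketch of a JMS module descriptor fragment; all names are placeholders -->
<queue name="OsbRequestQueue">
  <delivery-params-overrides>
    <!-- wait 30 seconds between redelivery attempts -->
    <redelivery-delay>30000</redelivery-delay>
  </delivery-params-overrides>
  <delivery-failure-params>
    <!-- give up after 3 failed deliveries and redirect to the error queue -->
    <redelivery-limit>3</redelivery-limit>
    <expiration-policy>Redirect</expiration-policy>
    <error-destination>OsbErrorQueue</error-destination>
  </delivery-failure-params>
  <jndi-name>jms/OsbRequestQueue</jndi-name>
</queue>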

How do I achieve a redelivery delay in azure service bus with amqp using rhea

I'm using rhea in a nodejs application to send messages around over Azure Service Bus using AMQP. My problem is as follows:
Sometimes a message processing attempt can fail because of something that is out of our hands. For instance, a call to some API could fail because a service is down. At that point we unlock the message so it can be picked up at a later time or by another instance. After a certain amount of retries (when delivery-count has hit a certain max) it just ends up in DLQ.
What I want to achieve is that between each delivery attempt there is an increasing pause so the X amount of retries don't just occur in rapid succession until the max is hit. This way I can give whatever is causing the failure some time to come back up if it's just a matter of waiting for some service to become available again. If that doesn't work the message can go to DLQ anyway.
Is there some setting in azure service bus that will achieve this or will I have to program this into my own application?
If you explicitly want to delay processing, you can enqueue a new message with ScheduledEnqueueTime set to a later delivery time (the message.Clone() function can help in creating the cloned message). You also have the ability to call message.Defer(); the message will then not be delivered again until you call Receive(sequenceId) for that specific message at a later time.
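For example, with the older .NET BrokeredMessage API that this answer refers to, the clone-and-reschedule pattern could look roughly like the sketch below. The "RetryCount" property name and the exponential backoff formula are illustrative only, and with rhea the equivalent should be setting the x-opt-scheduled-enqueue-time message annotation on the copy before sending it back to the queue.

using System;
using Microsoft.ServiceBus.Messaging;

class DelayedRetry
{
    static void Main()
    {
        var client = QueueClient.CreateFromConnectionString("<connection-string>", "<queue-name>");
        var options = new OnMessageOptions { AutoComplete = false };

        client.OnMessage(message =>
        {
            try
            {
                // ... process the message ...
                message.Complete();
            }
            catch (Exception)
            {
                // Re-enqueue a copy with an increasing delay instead of abandoning,
                // so retries are spaced out rather than happening back to back.
                int attempt = message.Properties.ContainsKey("RetryCount")
                    ? (int)message.Properties["RetryCount"]
                    : 0;

                BrokeredMessage retry = message.Clone();
                retry.Properties["RetryCount"] = attempt + 1;
                retry.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(Math.Pow(2, attempt + 1));

                client.Send(retry);
                message.Complete();   // remove the original; the scheduled copy carries the retry
            }
        }, options);

        Console.ReadLine();   // keep the message pump running
    }
}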

Azure Queue delayed message

I'm seeing some strange behaviour with Azure queue messages in a production deployment:
Some of the messages in the queues appear with a big delay - minutes, sometimes as much as 10 minutes.
Before you ask about setting delayTimeout when we put a message on the queue - we do not set delayTimeout for these messages, so a message should appear almost immediately after it is placed in the queue.
At those moments we do not have a big load, so my instances have no workload and are able to process messages quickly, but the messages just don't appear.
Our service processes millions of messages per month; we have been able to identify 10-50 messages processed with a very big delay, and because of that we fail our SLA in front of our customers.
Does anyone have any idea what the reason could be?
How can we overcome it?
Has anyone faced similar issues?
Some general ideas for troubleshooting:
Are you certain that the message was queued up for processing - i.e., the queue.AddMessage operation returned successfully and only then did the 10-minute wait begin - meaning you can rule out any client-side retry policies, etc., as the cause of the problem.
Is there any chance that the time calculation could be subject to some kind of clock skew problem? E.g., if one of the worker roles pulling messages has its clock out of sync with the other worker roles, you could see this.
Is it possible that, in the situations where the message appears to be delayed, a worker role responsible for pulling the messages is actually failing or crashing? If the client calls GetMessage but does not respond with an appropriate acknowledgement within the time specified by the invisibilityTimeout setting, the message will become visible again, because the Queue service assumes the client did not process the message. You can tell whether this is a contributing factor by looking at the dequeue count on the messages that are taking longer (a small sketch for checking this follows after this answer). More information can be found here: http://msdn.microsoft.com/en-us/library/dd179474.aspx.
Is it possible that the number of workers you have pulling items from the queue is insufficient at certain times of the day and the delays are simply caused by the queue being populated faster than you can pull messages from the queue.
Have you enabled logging for queues and then looked to see if you can find the specific operations (look at E2ELatency and ServerLatency)?
http://blogs.msdn.com/b/windowsazurestorage/archive/tags/analytics+2d00+logging+_2600_amp_3b00_+metrics/. You should also enable client logging and try to determine if the client is having connectivity problems and the retry logic is possibly kicking in.
And finally if none of these appear to help can you please send me the server logs (and ideally the client side logs as well) along with your account information (no passwords) to JAHOGG at Microsoft dot com.
Jason
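As a concrete way to check the dequeue count mentioned above, here is a small sketch using the classic WindowsAzure.Storage client; the connection string and queue name are placeholders:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class QueueInspector
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("myqueue");

        // Retrieve a batch of messages (they become invisible for 30 seconds).
        // DequeueCount > 1 means an earlier receiver got the message but never
        // deleted it, so it reappeared after the invisibility timeout expired;
        // InsertionTime shows when it was originally queued.
        foreach (CloudQueueMessage msg in queue.GetMessages(32, TimeSpan.FromSeconds(30)))
        {
            Console.WriteLine($"{msg.Id} inserted={msg.InsertionTime} dequeueCount={msg.DequeueCount}");
        }
    }
}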
Azure Service Bus has a property on the BrokeredMessage class called ScheduledEnqueueTimeUtc; it allows you to set a time at which the message becomes available in the queue (effectively creating a delay).
Are you sure that your code is not setting this property? That might be the cause of the delay.
You can find more info on this at this url: https://www.amido.com/azure-service-bus-how-to-delay-a-message-being-sent-to-the-queue/
If you are using WebJobs to process messages from the queue, it can be due to WebJobs configuration.
From an MSDN forum post by pranav rastogi:
Starting with 0.4.0-beta, the (WebJobs) SDK implements a random exponential back-off algorithm. As a result of this if there are no messages on the queue, the SDK will back off and start polling less frequently.
The following setting allows you to configure this behavior.
MaxPollingInterval: when a queue remains empty, this is the longest period of time to wait before checking for a message again. The default is 10 minutes.
static void Main()
{
    // Poll at least once a minute even when the queue has been empty for a while,
    // instead of letting the back-off grow to the 10-minute default.
    JobHostConfiguration config = new JobHostConfiguration();
    config.Queues.MaxPollingInterval = TimeSpan.FromMinutes(1);

    JobHost host = new JobHost(config);
    host.RunAndBlock();
}
