Heartbeat status NiFi flow - apache-nifi

I want to create a scheduled heartbeat status process in a websocket flow. This is the flow I'm trying to achieve: connect to websocket -> first message carries the heartbeat interval -> parse the interval value -> start a scheduled processor that sends a heartbeat over the websocket connection -> on the same websocket connection, wait for other messages to arrive and parse them.
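Outside NiFi terms, the heartbeat step amounts to parsing the interval out of the first message and scheduling a periodic send. A minimal Java sketch of that idea (the `interval=N` message format and the `sendHeartbeat` stand-in are assumptions, not anything NiFi-specific):

```java
import java.util.concurrent.*;

public class HeartbeatScheduler {
    // Parse the interval (seconds) out of the first message;
    // the "interval=N" format here is an assumption.
    static long parseIntervalSeconds(String firstMessage) {
        return Long.parseLong(firstMessage.replaceAll("[^0-9]", ""));
    }

    public static void main(String[] args) {
        long interval = parseIntervalSeconds("interval=30");
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Stand-in for sending a heartbeat frame over the websocket session.
        Runnable sendHeartbeat = () -> System.out.println("heartbeat");
        scheduler.scheduleAtFixedRate(sendHeartbeat, interval, interval, TimeUnit.SECONDS);
        // ... meanwhile the same connection keeps receiving and parsing other messages.
        // Shutdown here just so the sketch exits; a real flow keeps the scheduler running.
        scheduler.shutdown();
    }
}
```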
I have a flow that currently looks like this
The heartbeat statusing process group looks like this.
Does this look right?
How should the retry flow file handling in the heartbeat statusing group look? It doesn't seem to be working: it keeps queuing retries but never routes through the retries_exceeded relationship.
Will the heartbeat statusing process group run in its own thread?
How can I pass the websocket session to the heartbeat statusing process group?

Related

Akka Websocket complete, but client still connected

(Akka 2.6.5, akka-http 10.1.12)
I have a server/client websocket setup using Source.queue and Sink.actorRef on each side of the connection.
It turns out my system has a rather critical and unexpected flaw (found in production no less):
Sink actor fails and terminates (Dead letters are logged)
The sink actor is sent the stream-failure message (configured in the Sink.actorRef construction); this is also logged to dead letters, since the actor is indeed dead.
So we have a finished websocket stream, right? That's what the "Half-closed websockets" documentation would suggest (although, I just noticed, the heading id is actually "half-closed-client-websockets...").
What happens instead is... nothing. The connected client stays connected - there's no complete message or failure.
Is there some configuration I need to actively tell Akka to fully close the HTTP connection on failures like this?
Testing
I reproduced the issue in integrated testing:
Establish connection
Sleep for 70 seconds (just to ensure keep-alives are configured/working properly)
Send a message from server
Ensure receipt on client
Kill server actor sink (and see same Stream Failure -> dead letters as above)
Wait for client to acknowledge completion (100 seconds) - either:
If I did nothing -> Timeout
If I sent message from client to server before waiting for completion:
After 60s: Aborting tcp connection ... because of upstream failure: TcpIdleTimeoutException
Stream failed sent to client sink.
Notes
I've deliberately not included code at this stage because I'm trying to understand the technology properly: either I've found a bug, or I have a fundamental misunderstanding of how the websockets are meant to work (and fail). If you think I should include code, you'll need to convince me how it would help produce an answer.
In production, this failure to close meant that the websocket client was waiting for data for 12 hours (it wasn't attempting to send messages at the time)

Which timeout has TIBCO EMS while waiting for acknowledge?

We are developing a solution using TIBCO-EMS, and we have a question about its behaviour.
When connecting in CLIENT_ACKNOWLEDGE mode, the client acknowledges each received message. We'd like to know how long TIBCO waits for the acknowledgement, and whether this time is configurable by the system admin.
By default, the EMS server waits forever for the acknowledgement of the message.
As long as the session is still alive, the transaction will not be discarded and the server waits for an acknowledgement or rollback.
There is, however, a server setting, disconnect_non_acking_consumers, that disconnects the client if there are more pending (unacknowledged) messages than the queue limits allow storing (maxbytes, maxmsgs). In this case, the server sends a connection reset to get rid of the client.
Sadly, the documentation doesn't state this explicitly; the only public record I found was a knowledge base entry: https://support.tibco.com/s/article/Tibco-KnowledgeArticle-Article-33925
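If you do want misbehaving clients dropped, the setting goes in the EMS server configuration. A sketch of the relevant tibemsd.conf line (the exact syntax may vary by EMS version, so treat this as an assumption to verify against your install):

```
# tibemsd.conf: drop consumers whose unacknowledged backlog
# exceeds the destination limits (maxbytes / maxmsgs)
disconnect_non_acking_consumers = enabled
```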

Nats.io QueueSubscribe behavior on timeout

I'm evaluating NATS for migrating an existing message-based system.
I did not find documentation about message timeout exceptions and overload handling.
For Example:
After a subscriber has been chosen, is it aware of the timeout set by the publisher? Is it possible to signal that additional time is needed?
If the elected subscriber knows that some DBMS connection is down and it cannot complete, could it bounce the message,
so that the NATS server picks another subscriber and re-delivers the same message?
Ciao
Diego
For your first question: it seems you are trying to publish a request message with a timeout (using nc.Request). If so, the timeout is managed by the client: it publishes the request message and creates a subscription on the reply subject. If that subscription doesn't receive a message within the timeout, the client notifies you of the timeout condition and unsubscribes from the reply subject.
On your second question: are you using a queue group? A queue group in NATS is a subscription that specifies a queue group name, and all subscriptions sharing the same queue group name are treated specially by the server. The server selects one of the queue-group subscriptions to send each message to, rotating between them as messages arrive. However, the server's responsibility is simply to deliver the message.
To do what you describe, implement the functionality as request/reply with a timeout and a max message count of 1. If no response arrives within the timeout, your client can resend the request after some delay or perform some other recovery logic. The reply message becomes your 'protocol' for knowing that the request was handled properly.

Note that this gets into the design of your messaging architecture. For example, the timeout can fire after the request recipient has received and handled the message, but before the client or server was able to publish the response. In that case the request sender can't tell the difference and will eventually republish. This hints that such interactions need to make requests idempotent to prevent duplicate side effects.
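The republish-after-timeout scenario above can be sketched in Java with a simulated handler standing in for the NATS round trip (`handleRequest`, the request-ID scheme, and the dedup set are illustrative assumptions, not part of the NATS API):

```java
import java.util.*;

public class IdempotentRetry {
    // Tracks request IDs already processed, so a republished
    // request does not repeat its side effect.
    static final Set<String> processed = new HashSet<>();
    static int sideEffects = 0;

    // Stand-in for the request recipient; in a real system this
    // would be the queue-group subscriber's message handler.
    static String handleRequest(String requestId, String payload) {
        if (processed.contains(requestId)) {
            return "duplicate";      // already handled: skip the side effect
        }
        processed.add(requestId);
        sideEffects++;               // the side effect happens exactly once
        return "ok";
    }

    public static void main(String[] args) {
        String id = UUID.randomUUID().toString();
        // First attempt: the handler runs, but imagine the reply is lost
        // and the client's timeout fires, so it republishes the request.
        handleRequest(id, "create-order");
        String second = handleRequest(id, "create-order");
        System.out.println("second attempt: " + second + ", sideEffects=" + sideEffects);
    }
}
```

The request ID is what makes the retry safe: the sender can republish freely, and the recipient collapses duplicates.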

How to use Report action in OSB proxy service to record retry attempts

I want to record the retry attempts of a proxy service in OSB using report action.
I have created a JMS transport proxy service that picks messages from an IN_QUEUE and routes them to a business service, which pushes the message to an OUT_QUEUE and reports the status (success or failure).
However, if there is an error while processing, the proxy service should retry 5 times before failing. To achieve this, I configured the routing options with a retry count of 5, and it works well.
All I want now is to record the retry attempts (using report action) of the proxy service. Please suggest me how to do this.
Logging the retry attempts of a business service is difficult, since retries are handled outside the scope of the proxy. About the closest you can come is to set up an SLA alert to notify you when the bizref fails, but that doesn't trigger on every message, only if errors are detected during the aggregation interval.
Logging the retry attempts of the proxy is a lot easier, especially since it's a JMS proxy. Failed processing puts the message back on the queue (with XA-enabled resources; you may want to enable Same Transaction For Response), and each retry increments a counter in the JMS transport header, which the proxy can extract and use to decide whether to report.
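Expressed as plain Java logic, the decision the proxy makes from that delivery counter might look like the following (the `classify` helper and the threshold of 5 are illustrative, mirroring the retry count above; `JMSXDeliveryCount` is the standard JMS property, which starts at 1 on the first delivery):

```java
public class RetryReporter {
    static final int MAX_RETRIES = 5;

    // JMSXDeliveryCount is 1 on the first delivery, so retry N
    // of the sequence shows up as deliveryCount N + 1.
    static String classify(int deliveryCount) {
        if (deliveryCount <= 1) return "first-attempt";
        if (deliveryCount <= MAX_RETRIES + 1) return "retry-" + (deliveryCount - 1);
        return "retries-exhausted";
    }

    public static void main(String[] args) {
        for (int count = 1; count <= 7; count++) {
            System.out.println(count + " -> " + classify(count));
        }
    }
}
```

In the proxy itself you would feed the extracted header value into a branch like this and invoke the Report action only on the retry cases.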
Just remember that unless you set QoS to Best Effort on the publishes/reports, the publishes themselves will be rolled back if a failure happens, which is probably not what you want.

Message send timeout for producer sending message to ActiveMQ broker

Is there a way to set a timeout for sending a message to the broker?
I want to send large messages to an ActiveMQ broker, but I don't want it to take forever, so I was planning to set a timeout on sends.
You can set connection.sendTimeout=<ms> in the URI when connecting to the broker.
The official documentation for sendTimeout says:

Time to wait on Message Sends for a Response, default value of zero indicates to wait forever. Waiting forever allows the broker to have flow control over messages coming from this client if it is a fast producer or there is no consumer such that the broker would run out of memory if it did not slow down the producer. Does not affect Stomp clients as the sends are ack'd by the broker. (Since ActiveMQ-CPP 2.2.1)
The documentation is here: https://activemq.apache.org/components/cms/configuring
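For example, a connection URI with the timeout set (a sketch: the host/port and the 3000 ms value are placeholders to adapt):

```
tcp://localhost:61616?connection.sendTimeout=3000
```

With this URI the client gives up on a send after 3 seconds instead of blocking indefinitely.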
Hope this helps, and good luck!
