An HTTP REST request reached the HTTPInput node and was processed in a Compute node. It then failed in the HTTPReply node with the error below:
**catch exception and rethrowing error: Handle reused after reply sent EVHT replyIdentifier**
Can anyone please help me to fix this issue? Thanks in advance.
The documentation chapter *Configuring message flows to process timeouts* explains timeout handling.
Specifically for your problem, you can use the CHANGEIDENTIFIERTIMEOUT function like this:
DECLARE replyIdentifier BLOB InputLocalEnvironment.Destination.HTTP.RequestIdentifier;
IF replyIdentifier IS NOT NULL AND NOT CHANGEIDENTIFIERTIMEOUT(replyIdentifier, 1) THEN
    -- The identifier's timeout could not be changed, so the reply has
    -- already timed out: do not propagate to the HTTPReply node
    RETURN FALSE;
ELSE
    -- The flow did not time out, so we can send the reply
    RETURN TRUE;
END IF;
Below is a service with a set of three goroutines that process messages from Kafka:
Channel-1 and Channel-2 are unbuffered data channels in Go; a channel acts as a queuing mechanism.
Goroutine-1 reads a message from a Kafka topic, validates it, and puts its payload on Channel-1.
Goroutine-2 reads from Channel-1, processes the payload, and puts the processed payload on Channel-2.
Goroutine-3 reads from Channel-2, wraps the processed payload in an HTTP request, and sends it (using an HTTP client) to another service.
The flaw in the above flow: in our case, processing fails either because of a bad network connection between the services or because the remote service is not ready to accept HTTP requests from Goroutine-3 (HTTP client timeout). When that happens, the service loses the message (already read from the Kafka topic).
Goroutine-1 currently consumes messages from Kafka without sending an acknowledgement back to Kafka (to confirm that a specific message has been processed successfully by Goroutine-3).
Correctness is preferred over performance.
How to ensure that every message is processed successfully?
E.g., add feedback from Goroutine-3 to Goroutine-1 through a new Channel-3. Goroutine-1 will block until it gets an acknowledgement on Channel-3:
// in goroutine 1
channel1 <- data
select {
case <-channel3: // wait for the ack from goroutine 3
case <-ctx.Done(): // or something else to prevent a deadlock
}

...

// in goroutine 3
data := <-channel2
for {
    // retry until the send succeeds
    if err := sendData(data); err == nil {
        break
    }
}
channel3 <- struct{}{} // ack back to goroutine 1
To ensure correctness you need to commit (= acknowledge) the message after processing has finished successfully.
For the cases when processing doesn't finish successfully, you generally need to implement a retry mechanism yourself.
The details are specific to your use case, but generally you publish the message to a dedicated Kafka retry topic (that you create), add a sleep, and process the message again. If the processing still fails after x attempts, you send the message to a DLQ (= dead letter queue).
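For illustration, here is a minimal sketch of the commit-after-success part. It assumes the segmentio/kafka-go client (any client with manual commits works the same way), and process() is a hypothetical stand-in for the whole Channel-1 -> Channel-2 -> HTTP pipeline:

package main

import (
    "context"
    "log"
    "time"

    "github.com/segmentio/kafka-go"
)

// process stands in for the real pipeline (validate -> transform ->
// HTTP POST); it must only return nil once the HTTP call succeeded.
func process(ctx context.Context, payload []byte) error {
    return nil
}

func main() {
    ctx := context.Background()

    // With a GroupID set, FetchMessage does not auto-commit:
    // the offset only advances when CommitMessages is called.
    r := kafka.NewReader(kafka.ReaderConfig{
        Brokers: []string{"localhost:9092"},
        GroupID: "my-service",
        Topic:   "incoming",
    })
    defer r.Close()

    for {
        msg, err := r.FetchMessage(ctx) // read without committing
        if err != nil {
            log.Fatal(err)
        }
        if err := process(ctx, msg.Value); err != nil {
            // Not committed, so the message is redelivered after a
            // restart/rebalance; a retry topic + DLQ would go here.
            log.Printf("processing failed, not committing: %v", err)
            time.Sleep(time.Second)
            continue
        }
        // Commit (= acknowledge) only after successful processing.
        if err := r.CommitMessages(ctx, msg); err != nil {
            log.Printf("commit failed: %v", err)
        }
    }
}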
You can read more here:
https://eng.uber.com/reliable-reprocessing/
https://www.confluent.io/blog/error-handling-patterns-in-kafka/
I have a simple process, which starts with HandleHttpRequest and then goes to PublishKafka. On success I return response code 200; on failure I want to return a response with code 500 and the error message in the content. I have tried many ways, but I still don't know how to do it. Is there any way?
UPDATE:
When PublishKafka fails with some error message, I need to send that error message in the HTTP response. I don't know how to get the error message; there is no appropriate attribute in the flowfile. I wanted to put the message in the flowfile content. One possible resolution is to use the nifi-api or bulletins, but maybe there is an easier way to do it.
I have a WTX map which puts a message on WMQ queue "Q1". Another application reads the message from "Q1", processes it, and places the response on the queue specified in the "ReplyToQ" field of the MQ header information.
I am not able to find a command parameter to set the ReplyToQ on the message the WTX map is placing on "Q1".
Any thoughts?
Thanks for taking the time to look at this question and helping out!
The ReplyToQueue is carried in the message header, so you must get the message with the header (-HDR) and parse it from there on your input card.
Here's the doc about the adapter's command:
http://pic.dhe.ibm.com/infocenter/wtxdoc/v8r4m1/index.jsp
You should have a type tree in the Examples to read the message with a header.
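As an aside, the ReplyToQ is just a field of the MQMD message descriptor that the putting application fills in; the payload itself is untouched. A minimal sketch of that idea, using the ibm-messaging/mq-golang client rather than WTX (the queue manager and queue names are made up):

package main

import (
    "log"

    "github.com/ibm-messaging/mq-golang/v5/ibmmq"
)

func main() {
    // Connect to a hypothetical queue manager QM1.
    qMgr, err := ibmmq.Conn("QM1")
    if err != nil {
        log.Fatal(err)
    }
    defer qMgr.Disc()

    // Open Q1 for output.
    od := ibmmq.NewMQOD()
    od.ObjectType = ibmmq.MQOT_Q
    od.ObjectName = "Q1"
    qObj, err := qMgr.Open(od, ibmmq.MQOO_OUTPUT)
    if err != nil {
        log.Fatal(err)
    }
    defer qObj.Close(0)

    // The ReplyToQ travels in the MQMD header, not in the payload.
    md := ibmmq.NewMQMD()
    md.MsgType = ibmmq.MQMT_REQUEST
    md.ReplyToQ = "REPLY.Q"

    pmo := ibmmq.NewMQPMO()
    if err := qObj.Put(md, pmo, []byte("payload")); err != nil {
        log.Fatal(err)
    }
}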
Regards,
Bruno.
I have a rabbitmq queue full of requests, and I want to send the requests as HTTP GETs asynchronously, without waiting for each request's response. I'm confused about whether it is better to use threads or just EM. The way I'm using it at the moment is something like the following, but it would be great to know if there is a better implementation with better performance, since this is a very crucial part of the program:
AMQP.start(:host => "localhost") do |connection|
  queue = MQ.queue("some_queue")
  queue.subscribe do |body|
    EventMachine::HttpRequest.new('http://localhost:9292/faye').post :body => {:message => body.to_json}
  end
end
With the code above, will the system wait for each request to finish before starting the next one? If there are any tips here, I would highly appreciate them.
HTTP is a request/response protocol, so you have to wait for the replies. If you want to simulate an async environment, you could have a thread pool and pass each request to a thread, which waits for the reply and then goes back into the pool until the next request. You would either give the thread a callback function to invoke when the reply arrives, or immediately return a future reply object, which lets you put off waiting for the reply until you actually need the reply data.
The other way is to have a pool of processes, each of which processes a request, waits for the reply, and so on.
In both cases, the pool has to be big enough, or else you will still end up waiting some of the time.
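To make the pool idea concrete, here is a rough sketch of the pattern in Go (purely illustrative; the pool size, URL, and request count are made up):

package main

import (
    "fmt"
    "net/http"
    "sync"
)

func main() {
    urls := make(chan string) // stands in for the rabbitmq queue
    var wg sync.WaitGroup

    // A pool of 10 workers; each blocks on its own reply, so up to
    // 10 requests are in flight at once while the rest queue up.
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for url := range urls {
                resp, err := http.Get(url)
                if err != nil {
                    fmt.Println("request failed:", err)
                    continue
                }
                resp.Body.Close()
                fmt.Println(url, resp.Status)
            }
        }()
    }

    // Feed 100 requests through the pool, then wait for the workers.
    for i := 0; i < 100; i++ {
        urls <- fmt.Sprintf("http://localhost:9292/faye?m=%d", i)
    }
    close(urls)
    wg.Wait()
}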
A previous question asked whether changing one line of code implemented persistent SSL connections. After seeing that question's responses, and checking the dearth of SSL documentation, the following appear to be true:
For the server, a persistent connection simply means doing repeated request/response exchanges between SSL_accept() and SSL_set_shutdown().
According to this page, the client has to indicate how many requests there will be by sending the appropriate "Content-Length:" header or by using an agreed-upon terminating request.
However, there's no guarantee the client will send what it's supposed to. Therefore, it would seem a server using blocking sockets can hang indefinitely in SSL_read() while waiting for additional requests that never arrive. (SSL_CTX_set_timeout() doesn't appear to cause a subsequent SSL_read() to exit early, so it's not clear how to do the timed-out connections described on this Wikipedia page if sockets are blocking.)
Apparently, a server can indicate it won't do keep-alive by returning a "Connection: Close" header with its response, so I've ended up with the following code, which should at least always do a single request/response per connection correctly:
while TRUE do
begin  // wait for incoming TCP connection
  if notzero(listen(listen_socket, 100)) then continue;  // listen failed
  client_len := SizeOf(sa_cli);
  sock := accept(listen_socket, @sa_cli, @client_len);  // create socket for connection
  if sock = INVALID_SOCKET then continue;  // accept failed
  ssl := SSL_new(ctx);  // TCP connection ready, create ssl structure
  if assigned(ssl) then
  begin
    SSL_set_fd(ssl, sock);  // assign socket to ssl structure
    if SSL_accept(ssl) = 1 then  // handshake worked
    begin
      request := '';
      repeat  // gather request
        bytesin := SSL_read(ssl, buffer, sizeof(buffer)-1);
        if bytesin > 0 then
        begin
          buffer[bytesin] := #0;  // null-terminate what was read
          request := request + buffer;
        end;
      until SSL_pending(ssl) <= 0;
      if notempty(request) then
      begin  // decide on response, avoid keep-alive
        response := 'HTTP/1.0 200 OK'#13#10'Connection: Close'#13#10 + etc;
        SSL_write(ssl, pchar(response)^, length(response));
      end;  // else read empty or failed
    end;  // else handshake failed
    SSL_set_shutdown(ssl, SSL_SENT_SHUTDOWN or SSL_RECEIVED_SHUTDOWN);
    CloseSocket(sock);
    SSL_free(ssl);
  end;  // else ssl creation failed
end;  // infinite while
Two questions:
(1) Since SSL_accept() must succeed before SSL_read() is reached, is it true that SSL_read() can never hang waiting for the first request?
(2) How should this code be modified to do timed-out persistent/keep-alive SSL connections with blocking sockets (if that's even possible)?
To quote this letter, "The only way to ensure that indefinite blocking is avoided is to use nonblocking I/O." So, I guess I'll give up trying to time out blocked SSL_read()s.
(1) If the client connects but does not send a request (a DoS attack, for instance), then SSL_read() will hang.
(2) Try calling setsockopt() with SO_RCVTIMEO on the accepted SOCKET to set a read timeout on it (see the sketch below).
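For reference, the SO_RCVTIMEO call might look like the sketch below, written with Go's syscall package for a Unix socket fd (the fd value and the 30-second timeout are placeholders; the equivalent setsockopt call works the same way from Pascal or C):

package main

import (
    "log"
    "syscall"
)

// setReadTimeout puts a receive timeout on an accepted socket fd, so a
// blocking recv (and therefore a blocking SSL_read over that fd)
// returns an error instead of hanging forever. Unix-only sketch.
func setReadTimeout(fd int, seconds int64) error {
    tv := syscall.Timeval{Sec: seconds}
    return syscall.SetsockoptTimeval(fd, syscall.SOL_SOCKET, syscall.SO_RCVTIMEO, &tv)
}

func main() {
    fd := 3 // assume this fd came from accept(2)
    if err := setReadTimeout(fd, 30); err != nil {
        log.Fatal(err)
    }
}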