Long polling using RxJS

I want to create a long-polling client for a web service using RxJS.
The target endpoint supports blocking requests and sends an x-index header to the client, whose value represents the current state of the endpoint. This x-index value is sent in subsequent requests as a query parameter, so the endpoint responds only when x-index changes or when the request times out.
1. --> client sends the first request to the server
2. <-- server immediately responds with an x-index header
3. --> client sends a blocking request with the value of x-index as a parameter
4. <-- the request is suspended until the state changes or until a timeout, then the server sends the response
5. if x-index has changed, pass the data to the subscriber and repeat from step 3
I don't know how to create that loop of server requests with the changing x-index parameter. Can anybody help, please?
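A minimal sketch of such a loop, assuming a fetch-based helper and a hypothetical /poll endpoint, could use RxJS expand so that each response's x-index parameterises the next blocking request:

import { defer, from, Observable } from 'rxjs';
import { expand, filter, map } from 'rxjs/operators';

interface PollResult {
  index: string;   // value of the x-index response header
  body: unknown;   // response payload (null for the bootstrap request)
}

// Hypothetical helper: one request to the endpoint, blocking when an index is passed.
function poll(url: string, index?: string): Observable<PollResult> {
  const target = index ? `${url}?x-index=${encodeURIComponent(index)}` : url;
  return from(
    fetch(target).then(async res => ({
      index: res.headers.get('x-index') ?? '',
      body: index ? await res.json() : null,
    }))
  );
}

// The first request only fetches the initial x-index; expand() then keeps issuing
// blocking requests, each one parameterised with the index it just received.
function longPoll(url: string): Observable<unknown> {
  return defer(() => poll(url)).pipe(
    expand(last => poll(url, last.index)),   // "repeat from step 3" forever
    filter(result => result.body !== null),  // drop the initial, non-blocking call
    map(result => result.body)
  );
}

// Usage: longPoll('/poll').subscribe(data => console.log(data));

Error handling and retry/backoff on the inner request are left out to keep the loop itself visible.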

Related

How to invalidate client-cache?

If an application has client-side caching, and data changes on the server side, how does the client come to know about it so that it can invalidate the cache?
If the server sends "Cache-Control: max-age=100" in the response headers for the first request that fetches data, the client saves the response data in its local cache store.
If the client sends the same request within those 100 seconds, the response is served from the local cache store and no request is sent to the server.
Once more than 100 seconds have passed and the same request is sent again, the locally cached data has expired, so the request reaches the server. If the server recognizes the request and determines that the resource has not been modified, it does nothing and returns a response with status 304 (Not Modified).
Seeing this status, the client renews the validity period of the expired data, and requests sent within the next 100 seconds are again served from the cache.
This flow is the client-side cache invalidation mechanism.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching
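A minimal server-side sketch of that flow, assuming Node's built-in http module and an ETag as the validator the server uses to "recognize" the request (the answer above leaves that part implicit):

import { createServer } from 'http';
import { createHash } from 'crypto';

// Hypothetical resource whose representation may change on the server side.
let data = { value: 42 };

createServer((req, res) => {
  const body = JSON.stringify(data);
  // Validator derived from the current representation.
  const etag = `"${createHash('sha1').update(body).digest('hex')}"`;

  // Revalidation: once its cached copy expires, the client sends If-None-Match.
  if (req.headers['if-none-match'] === etag) {
    // Nothing changed: 304 lets the client reuse and re-freshen its cached copy.
    res.writeHead(304, { 'Cache-Control': 'max-age=100', ETag: etag });
    res.end();
    return;
  }

  // First request or changed data: full 200 response with caching headers.
  res.writeHead(200, {
    'Content-Type': 'application/json',
    'Cache-Control': 'max-age=100',
    ETag: etag,
  });
  res.end(body);
}).listen(8080);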

Is it guaranteed that the subscription for the response is established before the request is processed in a temp-topic based request/response scheme?

I would like to implement a request/response pattern with the following parts:
Server: Spring Boot with ActiveMQ
Client: JavaScript with stompjs over websocket
I want behaviour like an HTTP request: send a request, get a corresponding response.
I try to do this with temporary channels. I send three messages. The steps are:
SUBSCRIBE to a temporary channel
SUBSCRIBE
id:mysub
destination:/temp-queue/example
SEND the request and include a reply-to header with the subscribed channel of Step 1
destination:/queue/PO.REQUEST
reply-to:/temp-queue/example
Get the Response Message from the Server
MESSAGE
subscription:foo
reply-to:/queue/temp.default.23.example
destination:/queue/PO.REQUEST
reply-to:/temp-queue/example
But now (as the client sends messages asynchronously) I'm not sure whether Step 1 is complete on the server, and whether the server is therefore ready to send the response to the queue, when the request of Step 2 arrives at the server.
Is it possible that the server finishes Step 2 before finishing Step 1, and so sends the response to nowhere? Or does ActiveMQ ensure that messages 1 and 2 received from the client are processed and finished in the correct order?
Can any race condition between messages 1 and 2 happen?
Thank you very much!
Any STOMP frame that your client sends can carry a receipt request that makes the processing of that frame synchronous. So if you want to ensure that the SUBSCRIBE is complete before doing the SEND, attach a receipt header to the SUBSCRIBE frame and wait for the spec-mandated RECEIPT frame before going on to do the SEND; this ensures that your subscription is set up prior to any other processing.
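A sketch of that receipt handshake with the @stomp/stompjs client (the watchForReceipt helper and the broker URL are assumptions; the destinations are taken from the question):

import { Client, IMessage } from '@stomp/stompjs';

const client = new Client({ brokerURL: 'ws://localhost:61614/stomp' }); // hypothetical broker URL

client.onConnect = () => {
  const receiptId = 'sub-receipt-1';

  // Ask the broker to confirm the SUBSCRIBE with a RECEIPT frame.
  client.watchForReceipt(receiptId, () => {
    // Only now is the subscription guaranteed to be in place,
    // so the request cannot race ahead of it.
    client.publish({
      destination: '/queue/PO.REQUEST',
      headers: { 'reply-to': '/temp-queue/example' },
      body: 'my request',
    });
  });

  // SUBSCRIBE with a receipt header so the broker must acknowledge it.
  client.subscribe(
    '/temp-queue/example',
    (message: IMessage) => console.log('response:', message.body),
    { id: 'mysub', receipt: receiptId }
  );
};

client.activate();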

One way request, no response

Normally JMeter sends an HTTP request and waits for the response to measure the response time, and then sends the next request.
I have a situation where an HTTP endpoint has been created on an embedded software device, but it is one-way traffic only: I send an HTTP request, and the embedded device doesn't send a response. This is intentional and how it should work.
Is it possible for JMeter not to wait for the response and just send the next request?
A way to do it is to set a response timeout on the HTTP Request sampler and add an assertion that ignores the resulting timeout:
Set a 1-second response timeout on the HTTP Request.
Add a Response Assertion that ignores the status and expects a timeout.
But this is not very clean.

Spring Integration: Send response to client http inbound gateway

I have an HTTP inbound gateway which needs to receive the request, validate it, and then immediately send a response to the client. After the response is sent back, my SI flow needs to continue with further processing. The response should be sent to the client as soon as the validation is complete; sending it shouldn't wait until my entire processing is complete. How can I trigger the SI flow to continue with further processing once the response is sent? What is the appropriate SI component for this scenario?
If the response is simply a 200 OK, use an inbound channel adapter (not a gateway) and make the first channel after the validation an ExecutorChannel. Then, as soon as the message is handed off to the executor, the response will be sent.
If you need a custom reply, use a gateway and make the first channel after the validation a publish-subscribe channel (with an executor); construct the reply in one consumer of that channel and process the request in another.

JSF simultaneous ajax calls

Is it possible with JSF to make ajax calls that will execute simultaneously (not waiting for previous calls to finish before starting a new one)?
No, they are explicitly queued by specification, without any exception. See chapter 13.3.2 of the JSF 2 specification:
13.3.2 Ajax Request Queueing
All Ajax requests must be put into a client side request queue before they are sent to the server to ensure Ajax requests are processed in the order they are sent. The request that has been waiting in the queue the longest is the next request to be sent. After a request is sent, the Ajax request callback function must remove the request from the queue (also known as dequeuing). If the request completed successfully, it must be removed from the queue. If there was an error, the client must be notified, but the request must still be removed from the queue so the next request can be sent. The next request (the oldest request in the queue) must be sent. Refer to the jsf.ajax.request JavaScript documentation for more specifics about the Ajax request queue.
This is done to ensure thread safety of, among others, the view-scoped beans on the server side.
To prevent problems with the so-called view state of the page or some forms, AJAX requests are serialized.
JSF-Extensions (https://www.intersult.com/wiki/page/JSF%20Ext) gives you the option to parallelize AJAX requests. Just set the JavaScript variable jsf.ajaxQueue to a value other than the default of 1. But if you don't manually lock out duplicate requests from within the same form or rendering of the same region, you will get errors.
This is how you activate parallel requests:
<script type="text/javascript">
    if (window.jsf) {
        jsf.ajaxQueue = 2; // allow two AJAX requests in flight at once
    }
</script>
For example, you can parallelize the server-side rendering of a page with <e:async>. Most applications do not need parallel requests, because they run fine when strictly serialized.
