In the Majordomo pattern, a section of code in the worker looks like this:
mdwrk session ("tcp://localhost:5555", sourceStr.c_str(), verbose);

zmsg *reply = 0;
while (1) {
    zmsg *request = session.recv (reply);
    if (request == 0) {
        break;              // Worker was interrupted
    }
    //reply = request;      // Echo is complex... :-)
    reply = new zmsg(sourceStr.c_str());
}
To my worker, the request from the client is an order to be sent to an exchange. I am trying to wrap my head around how, after I send the order to the exchange and get a message back (Insert, Pending, New, etc.), I can stuff the contents of the FIX response into zmsg *reply.
The FIX message comes back asynchronously, so I won't be able to say
reply = FIXResponse;
How is this resolved?
I think the Majordomo protocol is meant to handle synchronous requests and is not really appropriate here.
Just came across one of your other questions, and see that you have multiple sources for these replies. You could PUSH them all into a single stable PULL socket? (And then republish if appropriate. If the volume is low, you could even get away with durable subscribers for reliability.)
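A rough sketch of that idea with plain ZeroMQ sockets (this uses cppzmq directly rather than the guide's mdwrk/zmsg classes, and the endpoint tcp://*:5558 is just a placeholder): every worker PUSHes each asynchronous FIX response to one stable PULL collector as soon as it arrives.

#include <zmq.hpp>
#include <cstring>
#include <string>
#include <iostream>

//  Worker side: call this from the FIX callback whenever a response arrives.
//  The PUSH socket is created once and connected to the stable collector endpoint.
void push_fix_response (zmq::socket_t &push, const std::string &fixResponse) {
    zmq::message_t out (fixResponse.size ());
    memcpy (out.data (), fixResponse.data (), fixResponse.size ());
    push.send (out);
}

//  Collector side: a single stable PULL socket gathering replies from all workers.
int main () {
    zmq::context_t context (1);
    zmq::socket_t collector (context, ZMQ_PULL);
    collector.bind ("tcp://*:5558");

    while (true) {
        zmq::message_t msg;
        collector.recv (&msg);                  //  one FIX response per message
        std::string fix (static_cast<char *> (msg.data ()), msg.size ());
        std::cout << "FIX response: " << fix << std::endl;
        //  ... republish to subscribers here if appropriate
    }
}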
I am using Akka in a Play controller and performing ask() on an actor named publish; internally the publish actor performs ask on multiple actors and passes along a reference to the sender. The controller needs to wait for the responses from multiple actors and build a list of them.
Please find the code below, but this code only waits for one response and then terminates. Please suggest a fix.
// Performs ask to publish actor
Source<Object, NotUsed> inAsk =
    Source.fromFuture(ask(publishActor, service.getOfferVerifyRequest(request).getPayloadData(), 1000));

final Sink<String, CompletionStage<String>> sink = Sink.head();

final Flow<Object, String, NotUsed> f3 = Flow.of(Object.class).map(elem -> {
    log.info("Data in Graph is " + elem.toString());
    return elem.toString();
});

RunnableGraph<CompletionStage<String>> result = RunnableGraph.fromGraph(
    GraphDSL.create(
        sink, (builder, out) -> {
            final Outlet<Object> source = builder.add(inAsk).out();
            builder
                .from(source)
                .via(builder.add(f3))
                .to(out); // to() expects a SinkShape
            return ClosedShape.getInstance();
        }
    ));

ActorMaterializer mat = ActorMaterializer.create(aSystem);
CompletionStage<String> fin = result.run(mat);
fin.toCompletableFuture().thenApply(a -> {
    log.info("Data is " + a);
    return true;
});
log.info("COMPLETED CONTROLLER ");
If you have several responses, ask won't cut it; it is only for a single request-response, where the response ends up in a Future/CompletionStage.
There are a few different strategies to wait for all answers:
One is to create an intermediate actor whose only job is to collect all answers and then, when all partial responses have arrived, respond to the original requestor; that way you can use ask to get a single aggregate response back.
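A rough sketch of that aggregator (class and method names here are mine, not from your code; a production version should also set a receive timeout so the ask doesn't hang if one target never replies):

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;
import java.util.ArrayList;
import java.util.List;

// Collects a fixed number of replies, then answers the original requestor
// (for an ask, that is the temporary actor behind the returned Future).
public class ReplyAggregator extends AbstractActor {

    public static Props props(int expected, ActorRef requestor) {
        return Props.create(ReplyAggregator.class, () -> new ReplyAggregator(expected, requestor));
    }

    private final int expected;
    private final ActorRef requestor;
    private final List<Object> replies = new ArrayList<>();

    public ReplyAggregator(int expected, ActorRef requestor) {
        this.expected = expected;
        this.requestor = requestor;
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .matchAny(reply -> {
                replies.add(reply);
                if (replies.size() == expected) {
                    requestor.tell(new ArrayList<>(replies), getSelf());
                    getContext().stop(getSelf());
                }
            })
            .build();
    }
}

// Inside the publish actor: fan out with the aggregator as the sender,
// so the downstream actors reply to it instead of to the publish actor.
//   ActorRef aggregator = getContext().actorOf(ReplyAggregator.props(targets.size(), getSender()));
//   targets.forEach(target -> target.tell(request, aggregator));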
Another option would be to use Source.actorRef to get an ActorRef that you could use as the sender together with tell (and skip using ask). Inside the stream you would then take elements until some criterion is met (time has passed or enough elements have been seen). You may have to add an operator to mimic the ask response timeout to make sure the stream fails if the actor never responds.
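A sketch of that second approach (the buffer size, the one-second window and the use of Sink.seq() are my assumptions; Pair is akka.japi.Pair and Duration is scala.concurrent.duration.Duration):

// Materialize an ActorRef that feeds the stream, and use it as the sender of a tell.
Pair<ActorRef, CompletionStage<List<Object>>> pair =
    Source.<Object>actorRef(100, OverflowStrategy.dropNew())      // buffer up to 100 replies
        .takeWithin(Duration.create(1, TimeUnit.SECONDS))         // collect for at most 1 second
        .toMat(Sink.seq(), Keep.both())
        .run(mat);

publishActor.tell(service.getOfferVerifyRequest(request).getPayloadData(), pair.first());

CompletionStage<List<Object>> allReplies = pair.second();         // whatever arrived within the window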
There are some other issues with the code you shared. One is creating a materializer on each request: materializers have a lifecycle and will fill up your heap over time, so you should instead have a materializer injected by Play.
With the given logic there is no need whatsoever to use the GraphDSL; it is only needed for complex streams with multiple inputs and outputs or cycles. You should be able to compose operators using the Flow API alone (see for example https://doc.akka.io/docs/akka/current/stream/stream-flows-and-basics.html#defining-and-running-streams ).
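For the single-response case your snippet covers, the whole graph collapses to something like this (same names as in your code; mat would ideally be the injected materializer, as noted above):

CompletionStage<String> fin =
    Source.fromFuture(ask(publishActor, service.getOfferVerifyRequest(request).getPayloadData(), 1000))
        .map(elem -> {
            log.info("Data in Graph is " + elem.toString());
            return elem.toString();
        })
        .runWith(Sink.<String>head(), mat);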
I'm trying to return an Observable that is created asynchronously in a callback:
const mkAsync = (observer, delay) =>
  setTimeout(() => Observable.of('some result').subscribe(observer), delay)

const create = arg => {
  const ret = new Subject()
  mkAsync(ret, arg)
  return ret
}
Therefore I use a Subject as a unicast proxy which is subscribed to the underlying Observable in the callback.
The problem I have with this solution is that when I unsubscribe from the Subject's subscription, the unsubscribe isn't forwarded to the underlying Observable. It looks like I need some kind of refcounting to make the Subject unsubscribe when there are no more subscribers, but I wasn't able to figure it out when using it in this kind of imperative callback style.
I have to keep mkAsync returning void and am looking for an alternative implementation.
Is that the right way to do it? Is there an alternative solution to using a Subject?
How do I make sure that the created Observable is cancelled (unsubscribe is called on the Subscription) when the Subject is unsubscribed from?
This is a pretty broad question and it's hard to tell what you are trying to achieve with this. I have two ideas:
The first thing is that the refCount() operator exists only on the ConnectableObservable class, which is what multicast (or publish) returns when you don't pass any selector function. See the implementation for more details: https://github.com/ReactiveX/rxjs/blob/5.5.11/src/operators/multicast.ts
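A sketch of that, in the RxJS 5 style of your snippet (the delay just stands in for the real async work):

// publish() multicasts through an internal Subject; refCount() connects when the
// first subscriber arrives and unsubscribes from the source when the last one leaves.
const shared = Observable.of('some result')
  .delay(1000)
  .publish()
  .refCount()

const sub = shared.subscribe(x => console.log(x))
sub.unsubscribe() // last subscriber gone, so the underlying subscription is torn down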
The second issue I can think of is that you're doing basically this:
const ret = new Subject()
Observable.of(...).subscribe(ret);
The problem with this is that .of emits its next items immediately and then sends the complete notification. Subjects have internal state, and when a Subject receives the complete notification it marks itself as stopped and will never emit anything again.
I suspect that's what's happening to you. Even though you return the Subject instance with return ret and later (probably) subscribe to it, you still won't receive anything because this Subject has already received the complete notification.
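For illustration, with a synchronous source like Observable.of the late subscriber misses everything:

const ret = new Subject()
Observable.of('some result').subscribe(ret) // 'some result' flows through, then complete stops the Subject

ret.subscribe(
  x => console.log('next:', x),    // never called, the Subject is already stopped
  err => console.log('error:', err),
  () => console.log('complete')    // late subscribers only get the completion
)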
In my frontend I have an input field that sends an AJAX request on every character typed (using Vue.js) to get real-time filtering (I can't use a Vue filter because of pagination).
Everything works smoothly in my test environment, but could this lead to performance issues with (a larger amount of) real data, and if so, what can I do to prevent it?
Is it problematic?
Yes.
The client will send a lot of requests. Depending on the network connection and browser, this could lead to a perceptible feeling of lag for the client.
The server will receive a lot of requests, potentially leading to degraded performance for all clients, and extra usage of resources on the server side.
Responses to requests have a higher chance of arriving out of order. If you send requests very fast, this is more likely to become apparent (e.g. displaying autocomplete results for "ab" when the user has already typed "abc").
Overall, it's bad practice mostly because it's not necessary to do that many requests.
How to fix it?
As J. B. mentioned in his answer, debouncing is the way to go.
The debounce function (copied below) ensures that a wrapped function only runs once it has stopped being called for X milliseconds. Concretely, it allows you to send a request as soon as the user hasn't typed anything for, say, 200ms.
Here's a complete example (try typing text very fast in the input):
function debounce(func, wait, immediate) {
    var timeout;
    return function() {
        var context = this, args = arguments;
        var later = function() {
            timeout = null;
            if (!immediate) func.apply(context, args);
        };
        var callNow = immediate && !timeout;
        clearTimeout(timeout);
        timeout = setTimeout(later, wait);
        if (callNow) func.apply(context, args);
    };
}

var sendAjaxRequest = function(inputText) {
    // do your ajax request here
    console.log("sent via ajax: " + inputText);
};
var sendAjaxRequestDebounced = debounce(sendAjaxRequest, 200, false); // 200ms

var el = document.getElementById("my-input");
el.onkeyup = function(evt) {
    // user pressed a key
    console.log("typed: " + this.value);
    sendAjaxRequestDebounced(this.value);
};

<input type="text" id="my-input">
For more details on how the debounce function works, see this question
I actually discuss this exact scenario in my Vue.js training course. In short, you may want to wait until the user clicks a button or something of that nature before sending the request. Another approach to consider is the lazy modifier, which delays updating the bound value until the change event is fired.
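As an example of that second option, the lazy modifier makes v-model sync on change (typically when the field loses focus) instead of on every keystroke; searchText and fetchResults below are placeholder names:

<input type="text" v-model.lazy="searchText">

// in the component definition:
watch: {
    searchText: function (value) {
        this.fetchResults(value); // runs once per change event, not per keystroke
    }
}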
It's hard to know the correct approach without knowing more about the goals of the app. Still, the options listed above are two options to consider.
I hope this helps.
The mechanism I was searching for is called debouncing.
I used this approach in the application.
What I'm trying to accomplish is to read a message from one of two sockets, whichever receives it first. As far as I understand, polling (zmq_poll) is the right thing to do (as demonstrated by mspoller in the guide). Here is a small pseudo-code snippet:
TimeSpan timeout = TimeSpan.FromMilliseconds(50);

using (var receiver1 = new ZSocket(ZContext.Current, ZSocketType.DEALER))
using (var receiver2 = new ZSocket(ZContext.Current, ZSocketType.PAIR))
{
    receiver1.Bind("tcp://someaddress");

    // Note that PAIR socket is inproc:
    receiver2.Connect("inproc://otheraddress");

    var poll = ZPollItem.CreateReceiver();

    ZError error;
    ZMessage msg;

    while (true)
    {
        if (receiver1.PollIn(poll, out msg, out error, timeout))
        {
            // ...
        }

        if (receiver2.PollIn(poll, out msg, out error, timeout))
        {
            // ...
        }
    }
}
As you can see, it is exactly the same implementation as mspoller in the guide.
In my case receiver2 (the PAIR socket) should receive a large number of messages. In fact I've created a test in which the number of messages sent to it is always greater than the number it is capable of receiving (at least with the implementation shown above).
I ran the test for 2 seconds, and I was very surprised by the results:
Number of messages sent to receiver2: 180 (by "sent" I mean that they are handed out to another PAIR socket not shown in the previous snippet);
Number of messages received by receiver2: 21 ??? Only 21 messages in 2 seconds??? 10 messages per second???
Then I tried different timeout values and found out that the timeout significantly influences the number of messages received. The duration (2 seconds) and the number of messages sent (180) remain the same. The results are:
timeout value of 200 milliseconds - number of messages received drops to 10 (5 per second);
timeout value of 10 milliseconds - number of messages received rises to 120 (60 per second).
The results tell me that polling simply does not work. If polling worked properly, as far as I understand the mechanism, the timeout should not have any influence in this scenario. Whether we set the timeout to 1 hour or to 5 milliseconds, since there are always messages waiting to be received there's nothing to wait for, so the loop should run at the same speed.
My other big concern is that even with a very small timeout value receiver2 is not capable of receiving all 180 messages. I'm struggling to reach a receive rate of 100 messages per second, even though I've chosen ZeroMQ, which should be very fast (benchmarks mention numbers like 6 million messages per second).
So my question is obvious: am I doing something wrong here? Is there a better way to implement polling?
By browsing the clrzmq4 code I've noticed that it's also possible to call the PollIn method on an enumeration of sockets (ZPollItems.cs, line 151), but I haven't found any example anywhere!
Can this be the right approach? Any documentation anywhere?
Thanks
I've found the problem and the solution. Instead of using the PollIn method on each socket separately, we should use the PollIn method on an array of sockets. Obviously the example from the guide is hugely misleading. Here's the correct approach:
TimeSpan timeout = TimeSpan.FromMilliseconds(50);

using (var receiver1 = new ZSocket(ZContext.Current, ZSocketType.DEALER))
using (var receiver2 = new ZSocket(ZContext.Current, ZSocketType.PAIR))
{
    receiver1.Bind("tcp://someaddress");
    receiver2.Connect("inproc://otheraddress");

    // We should "remember" the order of sockets within the array
    // because the order of messages in the received array will correspond to it.
    ZSocket[] sockets = { receiver1, receiver2 };

    // Note that we should use two ZPollItem instances:
    ZPollItem[] pollItems = { ZPollItem.CreateReceiver(), ZPollItem.CreateReceiver() };

    ZError error;
    ZMessage[] msg;

    while (true)
    {
        if (sockets.PollIn(pollItems, out msg, out error, timeout))
        {
            if (msg[0] != null)
            {
                // A message was received from receiver1
            }
            if (msg[1] != null)
            {
                // A message was received from receiver2
            }
        }
    }
}
Now receiver2 reaches 15,000 received messages per second, regardless of the timeout value and regardless of the number of messages received by receiver1.
UPDATE: Folks from clrzmq4 have acknowledged this issue, so probably the example will be corrected soon.
I used the TCP protocol to handle requests from the client, and I found that some of the content is missing when using the 'send' function. The code is as follows:
_stprintf(cData, "[%s]", send_back);
memset(send_back, 0, sizeof(cData));
int send_count;
if ((send_count = send(service_sock, cData, _tcslen(cData), 0)) != SOCKET_ERROR) {
    fwrite(cData, sizeof(char), _tcslen(cData), hFile);
    fflush(hFile);
    g_log->print_log("%c%c%c%c", cData[0], cData[1], cData[2], cData[send_count-1]);
    g_log->print_log("buffer len is :%d , send %d bytes", _tcslen(cData), send_count);
    fclose(hFile);
    memset(cData, 0, sizeof(cData));
    return true;
}
The send function always succeeds, the value of _tcslen(cData) is equal to send_count, and cData[send_count-1] is ']'.
But when I use Wireshark (a capture tool) to capture the packets sent out by the socket, I find that some content is always missing, including the ']' character. The content is encapsulated as JSON, so the ']' is important. The total size of each send is 8900 bytes. But when I change the number of request items per send from 100 to 50, nothing is missing, and the size of the data sent back is about 4000 bytes.
I do not know why this happens.
From my log file, I am sure the array named 'cData' contains the full content, so why is the content in the packets captured by Wireshark incomplete?
Seeing that you're using TCP, it already looks wrong.
First off, TCP is a stream protocol, which is not suited for one-time (especially small) packets, but its benefits far outweigh just using UDP instead.
Keep in mind that in the case of TCP you are not in control; you can only make sure that your requests are handled correctly, as the actual communication is done by the Winsock library.
Always remember that the send function's len parameter is NOT a requirement; it's a hint about how big your buffer is and how much you would like to send in one go. send may send less than you asked for, and how often that happens depends on a lot of factors: on the loopback device it will probably never happen, meaning send will actually send what you requested, while on a real network it may send everything in one go maybe 90% of the time, or even less often.
You have to make sure you send as much as you want, i.e. check the return value and call send again if it didn't send as much as you wanted, and do the same on the other side with recv: call recv until you get as much data as you expect. This method only works if you know exactly how much data you want to send over the network.
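On the receiving side, a matching loop might look like this (just a sketch; I'm calling it xrecv to mirror xsend below, and it assumes the receiver already knows len, for example from a length prefix sent first):

// Keep calling recv until exactly len bytes have arrived (or the connection fails).
int xrecv(SOCKET s, char* buf, int len)
{
    int total = 0;
    while (total < len)
    {
        int result = recv(s, buf + total, len - total, 0);
        if (result == 0)
            return 0;            /* connection closed by the peer */
        if (result == SOCKET_ERROR)
            return SOCKET_ERROR; /* real error */
        total += result;
    }
    return total;
}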
As for the loss of data: TCP, I would say, almost always delivers the data, assuming that you checked the return value of send. If there were a network problem, like loss of data, you would see TCP retransmit packets.
For your way of sending data, something like this is more suitable; it makes sure you really send the amount of data you want:
xint xsend(SOCKET s, const char* buf, xint len)
{
    xint lastSize;
    xint result;

    if (len == 0 || s == (SOCKET)NULL || buf == (const char*)NULL)
        return SOCKET_ERROR;

    lastSize = 0;
    result = 0;

    do
    {
        result = send(s, buf + lastSize, len - lastSize, 0);
        if (result == 0)
            return 0;
        if (result == SOCKET_ERROR)
            return SOCKET_ERROR;
        if (result == len)
            return len;
        if (result > len)
        {
            xlog(1, "xsend : socket sent too much data [ %i of %i ]", result, len);
            return SOCKET_ERROR;
        }
        lastSize += result;
        if (lastSize > len)
        {
            xlog(1, "xsend : socket sent too much data ( overall ) [ %i of %i ]", result, len);
            return SOCKET_ERROR;
        }
        if (lastSize == len)
            return len;
    }
    while (1);

    xlog(2, "failed to do xsend");
    return SOCKET_ERROR;
}
This code is just a copy-paste from one of my projects; xlog is a simple logging function, you can figure it out.