Boost: calling a method from outside of a class

Let's see how simple of a question I can ask. I have:
void TCPClient::test(const boost::system::error_code& ErrorCode)
{
// Anything can be here
}
and I would like to call it from another class. I have a global boost::thread_group that creates a thread
clientThreadGroup->create_thread(boost::bind(&TCPClient::test,client, /* this is where I need help */));
but I am uncertain how to call test, or whether this is even the correct way.
As background on the overall project: I am creating a TCP connection between a client and a server, and I have a method "send" (in another class) that will be called whenever data needs to be sent. My current goal is to be able to call test (which currently contains async_send) so that it sends the information through the socket that has already been set up. However, I am open to other implementation ideas, and I will probably work on a producer/consumer model if this proves too difficult.
I can use either approach for this project, but I will later have to implement listen to receive control packets from the server, so if there is any advice on which method to use, I would greatly appreciate it.

boost::system::error_code err;
clientThreadGroup->create_thread(boost::bind(&TCPClient::test,client, err));
This works for me. I don't know whether the error_code will actually report an error if something goes wrong, so if someone wants to correct me there, I would appreciate it (if just for experience's sake).
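For completeness, here is a minimal, self-contained sketch of the pattern (a default-constructed error_code simply means "no error", and boost::bind copies it by value into the functor):

#include <boost/bind.hpp>
#include <boost/system/error_code.hpp>
#include <boost/thread.hpp>
#include <iostream>

class TCPClient {
public:
    void test(const boost::system::error_code& ec) {
        // A default-constructed error_code evaluates to false ("no error").
        std::cout << "test called, error: " << ec.message() << std::endl;
    }
};

int main() {
    TCPClient client;
    boost::thread_group clientThreadGroup;
    boost::system::error_code err; // copied by value into the bound functor
    clientThreadGroup.create_thread(boost::bind(&TCPClient::test, &client, err));
    clientThreadGroup.join_all();
    return 0;
}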

Allow-listing IP addresses using `call.cancel()` from within `EventListener.dnsEnd()` in OkHttp

I am overriding the dnsEnd() function in EventListener:
@Override
public void dnsEnd(Call call, String domainName, List<InetAddress> inetAddressList) {
    inetAddressList.forEach(address -> {
        logger.debug("checking if url ({}) is in allowlist", address.toString());
        if (!allowlist.contains(address)) {
            call.cancel();
        }
    });
}
I know the documentation says not to alter call parameters, etc.:
"All event methods must execute fast, without external locking, cannot throw exceptions, attempt to mutate the event parameters, or be re-entrant back into the client. Any IO - writing to files or network should be done asynchronously."
But since I don't care about the call if it is trying to reach an address outside the allowlist, I fail to see the issue with this implementation.
I want to know if anyone has experience with this, and why it may be an issue?
I tested this and it seems to work fine.
This is fine and safe. Probably the strangest consequence is that the canceled event will be triggered by the thread that is already processing the DNS event.
But cancelling is not the best way to constrain permitted IP addresses to a list. You can instead implement the Dns interface. Your implementation should delegate to Dns.SYSTEM and then filter its results against your allowlist. That way you don't have to worry about races on cancellation.
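For illustration, a minimal sketch of that Dns approach might look like this (the AllowlistDns class name and the allowlist set are assumptions carried over from the question's code):

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;
import okhttp3.Dns;

class AllowlistDns implements Dns {
    private final Set<InetAddress> allowlist;

    AllowlistDns(Set<InetAddress> allowlist) {
        this.allowlist = allowlist;
    }

    @Override
    public List<InetAddress> lookup(String hostname) throws UnknownHostException {
        // Delegate to the system resolver, then keep only allowlisted addresses.
        List<InetAddress> filtered = Dns.SYSTEM.lookup(hostname).stream()
                .filter(allowlist::contains)
                .collect(Collectors.toList());
        if (filtered.isEmpty()) {
            throw new UnknownHostException("no allowlisted address for " + hostname);
        }
        return filtered;
    }
}

It would then be installed with new OkHttpClient.Builder().dns(new AllowlistDns(allowlist)).build(); a lookup that yields no allowlisted address fails fast with UnknownHostException instead of racing a cancellation.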

Spring Boot Webflux/Netty - Detect closed connection

I've been working with spring-boot 2.0.0.RC1 using the webflux starter (spring-boot-starter-webflux). I created a simple controller that returns an infinite Flux. I would like the Publisher to do its work only if there is a client (Subscriber). Let's say I have a controller like this one:
@RestController
public class Demo {
    @GetMapping(value = "/")
    public Flux<String> getEvents(){
        return Flux.create((FluxSink<String> sink) -> {
            while (!sink.isCancelled()) {
                // TODO e.g. fetch data from somewhere
                sink.next("DATA");
            }
            sink.complete();
        }).doFinally(signal -> System.out.println("END"));
    }
}
Now, when I run that code and access the endpoint http://localhost:8080/ with Chrome, I can see the data. However, once I close the browser, the while loop continues, since no cancel event has been fired. How can I terminate/cancel the streaming as soon as I close the browser?
From this answer I quote:
"Currently with HTTP, the exact backpressure information is not transmitted over the network, since the HTTP protocol doesn't support this. This can change if we use a different wire protocol."
I assume that, since backpressure is not supported by the HTTP protocol, it means that no cancel request will be made either.
Investigating a little further by analyzing the network traffic showed that the browser sends a TCP FIN as soon as I close it. Is there a way to configure Netty (or something else) so that a half-closed connection triggers a cancel event on the publisher, making the while loop stop?
Or do I have to write my own adapter similar to org.springframework.http.server.reactive.ServletHttpHandlerAdapter where I implement my own Subscriber?
Thanks for any help.
EDIT:
An IOException will be raised on the attempt to write data to the socket if there is no client. As you can see in the stack trace.
But that's not good enough, since it might take a while before the next chunk of data is ready to send, so it takes just as long to detect that the client is gone. As pointed out in Brian Clozel's answer, it is a known issue in Reactor Netty. I tried to use Tomcat instead by adding the dependency to the POM.xml, like this:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
Although it replaces Netty with Tomcat, it does not seem to behave reactively, since the browser does not show any data. However, there is no warning/info/exception in the console. Is spring-boot-starter-webflux as of this version (2.0.0.RC1) supposed to work together with Tomcat?
Since this is a known issue (see Brian Clozel's answer), I ended up using one Flux to fetch my real data and another one to implement some sort of ping/heartbeat mechanism, merging the two with Flux.merge().
Here you can see a simplified version of my solution:
@RestController
public class Demo {

    public interface Notification{}

    public static class MyData implements Notification{
        …
        public boolean isEmpty(){…}
    }

    @GetMapping(value = "/", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<ServerSentEvent<? extends Notification>> getNotificationStream() {
        return Flux.merge(getEventMessageStream(), getHeartbeatStream());
    }

    private Flux<ServerSentEvent<Notification>> getHeartbeatStream() {
        return Flux.interval(Duration.ofSeconds(2))
                .map(i -> ServerSentEvent.<Notification>builder().event("ping").build())
                .doFinally(signalType -> System.out.println("END"));
    }

    private Flux<ServerSentEvent<MyData>> getEventMessageStream() {
        return Flux.interval(Duration.ofSeconds(30))
                .map(i -> {
                    // TODO e.g. fetch data from somewhere,
                    // if there is no data return an empty object
                    return data;
                })
                .filter(data -> !data.isEmpty())
                .map(data -> ServerSentEvent
                        .builder(data)
                        .event("message").build());
    }
}
I wrap everything up as ServerSentEvent<? extends Notification>. Notification is just a marker interface. I use the event field from the ServerSentEvent class to distinguish between data and ping events. Since the heartbeat Flux sends events constantly and at short intervals, the time it takes to detect that the client is gone is at most the length of that interval. Remember, I need that because it might take a while before I get some real data to send, and as a result it might otherwise also take a while to detect that the client is gone. This way, the client's absence is detected as soon as a ping (or possibly a message event) can't be sent.
One last note on the marker interface, which I called Notification. It is not really necessary, but it adds some type safety. Without it, we could write Flux<ServerSentEvent<?>> instead of Flux<ServerSentEvent<? extends Notification>> as the return type of getNotificationStream(), or make getHeartbeatStream() return Flux<ServerSentEvent<MyData>>. However, that would allow any object to be sent, which I don't want. As a consequence, I added the interface.
I'm not sure why it behaves like this, but I suspect it is because of the choice of generation operator. I think using the following would work:
return Flux.interval(Duration.ofMillis(500))
        .map(input -> {
            return "DATA";
        });
According to Reactor's reference documentation, you're probably hitting the key difference between generate and push (I believe a quite similar approach using generate would probably work as well).
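To illustrate, a generate-based sketch of the original endpoint could look like this (generate is demand-driven: the callback is invoked once per requested element, so nothing keeps spinning after the subscriber cancels):

@GetMapping(value = "/")
public Flux<String> getEvents() {
    // The lambda emits exactly one element per downstream request,
    // so it simply stops being invoked once the subscriber cancels.
    return Flux.<String>generate(sink -> sink.next("DATA"))
            .doFinally(signal -> System.out.println("END"));
}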
My comment was referring to the backpressure information (how many elements a Subscriber is willing to accept), but the success/error information is communicated over the network.
Depending on your choice of web server (Reactor Netty, Tomcat, Jetty, etc), closing the client connection might result in:
a cancel signal being received on the server side (I think this is supported by Netty)
an error signal being received by the server when it tries to write to a connection that has been closed (I believe the Servlet spec does not provide that callback, so we're missing the cancel information).
In short: you don't need to do anything special, it should be supported already, but your Flux implementation might be the actual problem here.
Update: this is a known issue in Reactor Netty

Differences between io.to(), io.in(), and socket.to() for emitting to all clients in a room

The Socket.io documentation seems to specify a few ways to emit an event to all connected clients in a room. They are as follows:
io.to(), as found in the first example here: https://socket.io/docs/server-api/#socket-join-room-callback
io.in(), as found in the emit cheatsheet, found here: https://socket.io/docs/emit-cheatsheet/
socket.to(), as found here: https://socket.io/docs/server-api/#socket-to-room
Other than the examples linked above, neither io.to() nor io.in() is listed anywhere else in the documentation. What do these methods do exactly, and where can I find more information on them?
socket.to() can be used inside the io.on('connection', callback) event, like so:
io.on('connection', function(socket){
    // to one room
    socket.to('others').emit('an event', { some: 'data' });
    // to multiple rooms
    socket.to('room1').to('room2').emit('hello');
});
However, this does not make sense, as the socket object passed into this callback represents a connected client. How can the incoming socket object be used to broadcast to all other connected sockets, as shown in the above example?
Definitive explanations of the above are appreciated.
However, this does not make sense, as the socket object passed into this callback represents a connected client.
If you trace into those calls in a debugger, you can see what is going on.
First off, socket.to() creates a property on the socket named _rooms, which is an array of room names. You can see the whole code in context here in the GitHub repository, but here's the relevant portion for .to():
Socket.prototype.to =
Socket.prototype.in = function(name){
    if (!~this._rooms.indexOf(name)) this._rooms.push(name);
    return this;
};
Each successive call to .to() just adds an additional room to the array.
Then, socket.emit() checks to see if the _rooms property exists and if it does, it calls this.adapter.broadcast(...) which grabs the adapter and tells it to broadcast this message to all sockets on that adapter except the current one. The whole code for socket.emit() is here on Github. The particular broadcast part of the code is this:
if (this._rooms.length || this.flags.broadcast) {
    this.adapter.broadcast(packet, {
        except: [this.id],
        rooms: this._rooms,
        flags: this.flags
    });
} else {
    // dispatch packet
    this.packet(packet, this.flags);
}
How can the incoming socket object be used to broadcast to all other connected sockets, as shown in the above example?
Each socket contains a reference to the adapter, and the adapter has a list of all sockets on that adapter. So, it's possible to get from the socket to the adapter, and from there to all the other sockets.
I would agree that this is a bit of an odd overloading of functionality, but that's how they do it. I'm guessing they wanted to give people access to broadcast functionality when all you had a reference to was an individual socket.
FYI, the only way to really answer these kinds of undocumented questions yourself is by looking at the code, and that is certainly one of the huge advantages of using open-source libraries. I find that the quickest way to get to the right source is to step into the method of interest in the debugger. Fire up the debugger, set a breakpoint in your code, then step into the function of choice and it will show you the relevant source code immediately. You can then further step through that function if you want to see which path it takes.
For anyone coming across this question like me, here is the link to the docs for explanation:
https://socket.io/docs/v3/rooms/index.html

Uploading a file using post() method of QNetworkAccessManager

I'm having some trouble with a Qt application; specifically with the QNetworkAccessManager class. I'm attempting to perform a simple HTTP upload of a binary file using the post() method of the QNetworkAccessManager. The documentation states that I can give a pointer to a QIODevice to post(), and that the class will transmit the data found in the QIODevice. This suggests to me that I ought to be able to give post() a pointer to a QFile. For example:
QFile compressedFile("temp");
compressedFile.open(QIODevice::ReadOnly);
netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload") ), &compressedFile);
What seems to happen on the Windows system where I'm developing this is that my Qt application pushes the data from the QFile, but then doesn't complete the request; it seems to be sitting there waiting for more data to show up from the file. The post request isn't "closed" until I manually kill the application, at which point the whole file shows up at my server end.
From some debugging and research, I think this is happening because the read() operation of QFile doesn't return -1 when you reach the end of the file. I think that QNetworkAccessManager is trying to read from the QIODevice until it gets a -1 from read(), at which point it assumes there is no more data and closes the request. If it keeps getting a return code of zero from read(), QNetworkAccessManager assumes that there might be more data coming, and so it keeps waiting for that hypothetical data.
I've confirmed with some test code that the read() operation of QFile just returns zero after you've read to the end of the file. This seems to be incompatible with the way that the post() method of QNetworkAccessManager expects a QIODevice to behave. My questions are:
Is this some sort of limitation with the way that QFile works under Windows?
Is there some other way I should be using either QFile or QNetworkAccessManager to push a file via post()?
Is this not going to work at all, and will I have to find some other way to upload my file?
Any suggestions or hints would be appreciated.
Update: It turns out that I had two different problems: one on the client side and one on the server side. On the client side, I had to ensure that my QFile object stayed around for the duration of the network transaction. The post() method of QNetworkAccessManager returns immediately but isn't actually finished immediately. You need to attach a slot to the finished() signal of QNetworkAccessManager to determine when the POST is actually finished. In my case it was easy enough to keep the QFile around more or less permanently, but I also attached a slot to the finished() signal in order to check for error responses from the server.
I attached the signal to the slot like this:
connect(&netManager, SIGNAL(finished(QNetworkReply*) ), this, SLOT(postFinished(QNetworkReply*) ) );
When it was time to send my file, I wrote the post code like this (note that compressedFile is a member of my class and so does not go out of scope after this code):
compressedFile.open(QIODevice::ReadOnly);
netManager.post(QNetworkRequest(QUrl(httpDestination.getCString() ) ), &compressedFile);
The finished(QNetworkReply*) signal from QNetworkAccessManager triggers my postFinished(QNetworkReply*) method. When this happens, it's safe for me to close compressedFile and to delete the data file represented by compressedFile. For debugging purposes I also added a few printf() statements to confirm that the transaction is complete:
void CL_QtLogCompressor::postFinished(QNetworkReply* reply)
{
QByteArray response = reply->readAll();
printf("response: %s\n", response.data() );
printf("reply error %d\n", reply->error() );
reply->deleteLater();
compressedFile.close();
compressedFile.remove();
}
Since compressedFile isn't closed immediately and doesn't go out of scope, the QNetworkAccessManager is able to take as much time as it likes to transmit my file. Eventually the transaction is complete and my postFinished() method gets called.
My other problem (which also contributed to the behavior I was seeing where the transaction never completed) was that the Python code for my web server wasn't fielding the POST correctly, but that's outside the scope of my original Qt question.
You're creating compressedFile on the stack, and passing a pointer to it to your QNetworkRequest (and ultimately your QNetworkAccessManager). As soon as you leave the method you're in, compressedFile is going out of scope. I'm surprised it's not crashing on you, though the behavior is undefined.
You need to create the QFile on the heap:
QFile *compressedFile = new QFile("temp");
You will of course need to keep track of it and then delete it once the post has completed, or set it as the child of the QNetworkReply so that it gets destroyed when the reply gets destroyed later:
QFile *compressedFile = new QFile("temp");
compressedFile->open(QIODevice::ReadOnly);
QNetworkReply *reply = netManager.post(QNetworkRequest(QUrl("http://mywebsite.com/upload") ), compressedFile);
compressedFile->setParent(reply);
You can also schedule automatic deletion of a heap-allocated file using signals/slots:
QFile* compressedFile = new QFile(...);
QNetworkReply* reply = netManager.post(...);
// This is where the trick is
connect(reply, SIGNAL(finished()), reply, SLOT(deleteLater()));
connect(reply, SIGNAL(destroyed()), compressedFile, SLOT(deleteLater()));
IMHO, it is much more localized and encapsulated than having to keep around your file in the outer class.
Note that you must remove the first connect() if you have your own postFinished(QNetworkReply*) slot; in that case, don't forget to call reply->deleteLater() inside it for the above to work.

How to know when a web page is loaded when using QtWebKit?

Both QWebFrame and QWebPage have a void loadFinished(bool ok) signal, which can be used to detect when a web page has completely loaded. The problem is that some web pages load content asynchronously (Ajax). How can I know when the page is completely loaded in this case?
I haven't actually done this, but I think you may be able to achieve your solution using QNetworkAccessManager.
You can get the QNetworkAccessManager from your QWebPage using the networkAccessManager() function. QNetworkAccessManager has a finished(QNetworkReply *reply) signal, which is fired whenever a request made by the QWebPage instance finishes.
The finished signal gives you a QNetworkReply instance, from which you can get a copy of the original request made, in order to identify the request.
So, create a slot to attach to the finished signal, use the passed-in QNetworkReply's methods to figure out which file has just finished downloading and if it's your Ajax request, do whatever processing you need to do.
My only caveat is that I've never done this before, so I'm not 100% sure that it would work.
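Still, a sketch of that first approach might look like the following (the MyWindow class, the page member, and the "ajax" URL check are assumptions for illustration):

// Somewhere during setup, e.g. in the constructor:
connect(page->networkAccessManager(),
        SIGNAL(finished(QNetworkReply*)),
        this, SLOT(onResourceFinished(QNetworkReply*)));

// The slot inspects each finished request to spot the Ajax call:
void MyWindow::onResourceFinished(QNetworkReply *reply)
{
    if (reply->url().toString().contains("ajax")) {
        // The Ajax response of interest has finished downloading.
    }
}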
Another alternative might be to use QWebFrame's methods to insert objects into the page's object model and also insert some JavaScript which then notifies your object when the Ajax request is complete. This is a slightly hackier way of doing it, but should definitely work.
EDIT:
The second option seems better to me. The workflow is as follows:
Attach a slot to the QWebFrame::javaScriptWindowObjectCleared() signal. At that point, call QWebFrame::evaluateJavaScript() to add code similar to the following:
window.onload = function() {
    // page has fully loaded
};
Put whatever code you need in that function. You might want to add a QObject to the page via QWebFrame::addToJavaScriptWindowObject() and then call a function on that object. This code will only execute when the page is fully loaded.
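As a rough sketch of that workflow (the LoadWatcher and MyWidget names and the view and watcher members are invented for illustration):

// A QObject exposed to the page; public slots are callable from page JavaScript.
class LoadWatcher : public QObject
{
    Q_OBJECT
public slots:
    void pageFullyLoaded() { qDebug() << "window.onload fired"; }
};

// Re-register the object whenever the page's window object is cleared:
connect(view->page()->mainFrame(), SIGNAL(javaScriptWindowObjectCleared()),
        this, SLOT(attachWatcher()));

void MyWidget::attachWatcher()
{
    QWebFrame *frame = view->page()->mainFrame();
    frame->addToJavaScriptWindowObject("loadWatcher", &watcher);
    frame->evaluateJavaScript(
        "window.onload = function() { loadWatcher.pageFullyLoaded(); };");
}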
Hopefully this answers the question!
To check for the load of a specific element, you can use a QTimer. Something like this in Python:
@pyqtSlot()
def on_webView_loadFinished(self):
    self.tObject = QTimer()
    self.tObject.setInterval(1000)
    self.tObject.setSingleShot(True)
    self.tObject.timeout.connect(self.on_tObject_timeout)
    self.tObject.start()

@pyqtSlot()
def on_tObject_timeout(self):
    dElement = self.webView.page().currentFrame().documentElement()
    element = dElement.findFirst("css selector")
    if element.isNull():
        self.tObject.start()
    else:
        print("Page loaded")
When your initial HTML/images/etc. finish loading, that's it: the page is completely loaded. This fact doesn't change if you then use some JavaScript to fetch extra data, page views, or whatever after the fact.
That said, what I suspect you want to do here is expose a QtScript object/interface to your view that you can invoke from your page's script, effectively providing a "callback" into your C++ once you've decided (from the page script) that the page has "completely loaded".
Hope this helps give you a direction to try...
The OP thought it was due to delayed AJAX requests, but there could be another reason, one that also explains why a very short time delay fixes the problem: there is a bug that causes the described behaviour:
https://bugreports.qt-project.org/browse/QTBUG-37377
To work around this problem, the loadFinished() signal must be connected using a queued connection.
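In other words, pass Qt::QueuedConnection as the connection type (a sketch, assuming a QWebPage named page and a slot onLoadFinished(bool) on the receiver):

connect(page, SIGNAL(loadFinished(bool)),
        this, SLOT(onLoadFinished(bool)),
        Qt::QueuedConnection);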
