I am writing an application to send a small file (~2 KB) from a Netty server to a client over WebSocket.
To test whether the file is sent successfully, I ran the following test.
A client connects to the server.
On the client machine, set a rule to drop all packets coming from the server.
The server sends a file to the client.
Check the result of the "ChannelFuture" on the server.
In this test, I got true from "future.isSuccess()" and "future.isDone()" immediately when sending a ~2 KB file, even though the client cannot receive it.
I repeated the test with larger files and found that if the file is larger than ~7 KB, the "ChannelFuture" waits for feedback from the transmission. This is the result I expected.
I am using Netty 3.6.1, and my application is built based on "org.jboss.netty.example.http.websocketx.server".
Here is part of my code:
ChannelBuffer cb = ChannelBuffers.copiedBuffer(myfile_byteArray);
ChannelFuture result = ctx.getChannel().write( new BinaryWebSocketFrame( cb ) );
result.addListener(new ChannelFutureListener() {
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            System.err.println("future.isSuccess()");
        }
        if (future.isDone()) {
            System.err.println("future.isDone()");
        }
        if (future.isCancelled()) {
            System.err.println("future.isCancelled()");
        }
    }
});
Does anyone know how I can get the "ChannelFuture" to work correctly for small files?
Many thanks in advance!
The ChannelFuture will only be notified once the data could be written out to the remote peer. So if it is notified of success, the other peer received the data without a problem. This is true for data of any size.
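For example, here is a minimal sketch (Netty 3.x, reusing the write future named result from the question; the else branch is the only addition) that distinguishes a successful write from a failed one by inspecting the failure cause:

result.addListener(new ChannelFutureListener() {
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            // per the above: the data could be written out to the remote peer
            System.err.println("write completed successfully");
        } else if (future.getCause() != null) {
            // the reason the write failed, e.g. a closed or reset connection
            future.getCause().printStackTrace();
        }
    }
});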
In Laravel, we have an API endpoint that may take a few minutes. It processes an input in batches and returns a response when all batches are processed. Pseudo-code below.
Sometimes it takes too long for the user, so the user navigates away and the connection is killed client-side. However, the backend processing still continues until the backend tries to return the response and fails with a broken pipe error.
To save resources, we're looking for a way to check, after each batch, whether the client is still connected, using a function like check_if_client_is_still_connected() below. If not, an error is raised and processing stops. Is there a way to achieve this?
function myAPIEndpoint($all_batches) {
    $result = [];
    foreach ($all_batches as $batch) {
        $batch_result = do_something_long($batch);
        $result = array_merge($result, $batch_result);
        check_if_client_is_still_connected();
    }
    return $result;
}
PS: I know async tasks or WebSockets could be more appropriate for long requests, but we have good reasons to use a standard HTTP endpoint for this.
I have a question about Spring Reactive WebClient...
A few days ago I decided to play with the new reactive stuff in Spring Framework, and I made a small project for scraping data, purely for personal purposes (making multiple requests to one web page and combining the results).
I started using the new reactive WebClient for making requests, but the problem I found is that the client does not emit a response for every request. Sounds strange. Here is what I did for fetching data:
private Mono<String> fetchData(String uri) {
    return this.client
            .get()
            .uri(uri)
            .header("X-Fsign", "SW9D1eZo")
            .retrieve()
            .bodyToMono(String.class)
            .timeout(Duration.ofSeconds(35))
            .log("category", Level.ALL, SignalType.ON_ERROR, SignalType.ON_COMPLETE, SignalType.CANCEL, SignalType.REQUEST);
}
And the function that calls fetchData:
public Mono<List<Stat>> fetch() {
    return fetchData(URL)
            .map(this::extractUrls)
            .doOnNext(System.out::println)
            .doOnNext(s -> System.out.println("all ids are " + s.size()))
            .flatMapIterable(q -> q)
            .map(s -> s.substring(7, 15))
            .map(s -> "http://d.flashscore.com/x/feed/d_hh_" + s + "_en_1") // list of N-length urls
            .flatMap(this::fetchData)
            .map(this::extractHeadToHead)
            .collectList();
}
and the subscriber:
FlashScoreService bean = ctx.getBean(FlashScoreService.class);
bean.fetch().subscribe(s -> {
    System.out.println("finished !!! " + s.size()); // expecting same N-length list size
}, Throwable::printStackTrace);
The problem appears if I make a few more requests, say more than 100.
I don't get responses for all of them; no error is thrown and no error response code is returned, and the subscribe method is invoked with a list size different from the number of requests.
The requests I make are based on a List of Strings (URLs), and after all responses are emitted I should receive all of them as a list, because I'm using collectList(). When I execute 100 requests, I expect to receive a list of 100 responses, but I'm actually receiving sometimes 100, sometimes 96, etc. Maybe something fails silently.
This is easily reproducible; here is my GitHub project link.
Sample output:
all ids are 176
finished !!! 171
Please give me suggestions on how to debug this or what I'm doing wrong. Any help is appreciated.
Update:
For example, if I pass 126 URLs, the log shows:
onNext(ReactorClientHttpResponse{request=[GET/some_url],status=200}) is called 121 times. Maybe here is the problem.
onComplete() is called 126 times, which is exactly the length of the passed list of URLs,
but how is it possible for some of the requests to complete without calling onNext() or onError()? (success and error in Mono)
I think the problem is not in the WebClient but somewhere else (the environment, or the server blocking the request), but then I would expect to see some error in the log.
PS: Thanks for the help!
This is a tricky one. Debugging the actual HTTP frames received, it seems we're really not getting responses for some requests. Debugging a little more with Wireshark, it looks like the remote server is requesting the end of the connection with a FIN, ACK TCP packet and that the client acknowledges it. The problem is this connection is still taken from the pool to send another GET request after the first FIN, ACK TCP packet.
Maybe the remote server is closing connections after they've served a number of requests; in any case it's perfectly legal behavior. Note that I'm not reproducing this consistently.
Workaround
You can disable connection pooling on the client; this will be slower and apparently doesn't trigger this issue. For that, use the following:
this.client = WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(new Consumer<HttpClientOptions.Builder>() {
            @Override
            public void accept(HttpClientOptions.Builder builder) {
                builder.disablePool();
            }
        }))
        .build();
Underlying issue
The root problem is that the HTTP client should not onComplete when the TCP connection is closed without sending a response. Or better, the HTTP client should not reuse a connection while it's being closed. I'll report back here when I know more.
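In the meantime, here is a minimal sketch (reusing the fetchData method from the question; the switchIfEmpty line is the only addition) that at least makes the problem visible by turning a response that completes without a body into an explicit error instead of a silently shorter list:

private Mono<String> fetchData(String uri) {
    return this.client
            .get()
            .uri(uri)
            .header("X-Fsign", "SW9D1eZo")
            .retrieve()
            .bodyToMono(String.class)
            .timeout(Duration.ofSeconds(35))
            // assumption: an empty completion here always means a dropped response
            .switchIfEmpty(Mono.error(new IllegalStateException("No response received for " + uri)));
}

With flatMap, such an error cancels the whole pipeline and reaches your subscriber's error callback; if you would rather keep the other results, you could log and fall back with onErrorResume instead.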
I have started to learn WebSockets. It is a must-learn technology these days.
But I am curious to learn more about it. My basic question is: how many WebSocket connections can be created on the client side?
My typical application has an HTML-based UI, and on the server I have REST-based services. I need to track whether
a session timeout has happened or not
the connection to the server has been lost or not (a kind of polling program to check whether the connection is alive or not).
So I am creating two WebSocket objects on the client, with a different URL for each.
Have I implemented this correctly?
Basically, the browser closes the old WebSocket connection once you open a new connection to the SAME URL (ws://127.0.0.1:8080/WebSocket-context-root/getResource). You can use a small hack like "ws://127.0.0.1:8080/WebSocket-context-root/getResource/" + k, where k is any number or any random string. On the server side, just ignore the path variable k.
In this way you can open many connections at the same time. The browser's restriction on the maximum number of connections per domain does not apply here (tested on Firefox). I tried a maximum of 25 parallel connections.
You can use websocket.readyState to check the status of the WebSocket connection.
The onclose event of the WebSocket provides a reason code for the closed connection.
Use the code below to test the number of active connections.
var x = 0;
var intervalID = setInterval(function () {
    var websocket = new WebSocket("ws://127.0.0.1:8080/WebSocketApi/web/chat/" + x);
    websocket.onopen = function (evt) {
        console.log('open');
    };
    websocket.onmessage = function (evt) {
        console.log('msg');
    };
    websocket.onclose = function (evt) {
        console.log('closed');
    };
    if (++x === 15) {
        window.clearInterval(intervalID);
    }
}, 1);
I'm working on CSV import in my Node.js-based web app.
Most of the given CSV files have tens of thousands of records, and importing takes several minutes.
So until the import finishes, I want to show users a "Currently importing..." message.
What I want to create is similar to GitHub's forking screen. After you press the fork button at the top right of a repo, it shows the message "Forking / It should only take a few seconds." until the fork finishes.
In addition, I would ideally like to add a progress bar to indicate the percentage of processed records.
My current implementation is:
Client sends a request with CSV data.
Server processes the received CSVs and inserts records into the DB.
Server responds 200 if the CSV is valid.
But with this implementation, users cannot see the current status, and sometimes the socket even hangs up.
I'm considering the following reimplementation:
Client sends a request with CSV data.
Server responds 200 to tell the client that the CSV has been received.
Server starts to process the received CSVs and insert records into the DB.
However, I have no idea:
how the client knows that the import is done
how the client knows when an error occurs during CSV processing or DB insertion
How can I implement the server side?
Thanks in advance ;)
You need to use socket.io here to keep track of the progress. As soon as you receive the CSV, your client can connect to the socket.
Server:
io.on('connection', function (socket) {
    console.log('CONNECTED');
    socket.join('progressSession');
});
You can periodically emit a progress event to let the client know how many records have been processed. (I hope you're processing the records asynchronously, or can at least run some other code in between.)
io.sockets.in('progressSession').emit('progress', noOfRecords);
And the client can listen for the progress event and show it to the user:
var socket = io.connect('http://localhost:9000');
socket.on('progress', function (status) {
    console.log(status);
    // show status to the user
});
Comment if you need any more clarity.
Send the request as you do, return the status immediately to confirm or reject the CSV's validity, and finish the response. Then use something like http://socket.io/ to send updates to the client.
I have a straightforward SignalR setup: OWIN-hosted .NET server and JavaScript client (both at v2.1.1). The client uses SignalR to synchronize its copy of an ordered event stream maintained in an Rx ReplaySubject on the server. When a client connects, it provides a startAfter query parameter that is used to initialize an IObserver against the ReplaySubject, and this observer then sends each event in the observed sequence to the client. Each event has a sequence number, and the client can tell, based on the event sequence number, if any event is missing in the sequence. (Which would be a serious problem in this application.)
The problem is that the client regularly receives only portions of the event sequence. In fact, there is a regular pattern to this. For every 250 events there is a large gap. So for example, each test shows that the first gap was from somewhere between 70 and 80 to 250. Why always 250? And from there on, the "skip-to" point is always in intervals of 250; e.g., a gap from 263 to 500, then one from 511 to 750, etc.. I have to assume that this is some kind of default buffer size.
Also, the first time a client connects to the server it always receives the entire sequence just fine. It's the subsequent connections that exhibit the regular skipping problem. So it seems like it's a server-side problem, and not a client problem at all.
I then added some checks to the server to ensure that the IObserver for each client is seeing all of the events in the correct order. It is. So it seems almost certain that the problem is on the SignalR server side and has nothing to do with Rx.
And finally, I checked to see if the dropped messages were perhaps just being delivered out of order (which I could live with, although I assumed SignalR provides an ordered-delivery guarantee). They are not - the messages just disappear into a void.
If it helps, I'm currently running locally, with IIS Express on Win 8.1 x64 and testing on IE Developer Channel as well as Chrome 36. The connection is using WebSockets. I couldn't find any reference to 250 as a special quantity in either the SignalR source (client or server) or the Rx.Net source.
Any suggestions on troubleshooting? I'd love to find a stable solution before I start building a complicated workaround.
Here's the relevant server-side code:
public class AllEventsReplaySource
{
    private readonly IHubConnectionContext<dynamic> clients;
    private readonly ReplaySubject<dynamic> allEvents;

    private AllEventsReplaySource(IHubConnectionContext<dynamic> clients)
    {
        this.clients = clients;
        this.allEvents = new ReplaySubject<dynamic>();
        // (Not shown: code that generates the input to the ReplaySubject.)
    }

    public void SubscribeClient(string connectionId, int startAfter)
    {
        this.allEvents.Skip(startAfter).Subscribe(e =>
        {
            // (Not shown: code that verifies no skips are occurring at this point for a client.)
            clients.Client(connectionId).notifyEvent(e);
        });
    }

    private readonly static Lazy<AllEventsReplaySource> instance =
        new Lazy<AllEventsReplaySource>(() => new AllEventsReplaySource(
            GlobalHost.ConnectionManager.GetHubContext<AllEventsReplayHub>().Clients));

    public static AllEventsReplaySource Instance
    {
        get { return instance.Value; }
    }
}

[HubName("allEventsReplayHub")]
public class AllEventsReplayHub : Hub
{
    private readonly AllEventsReplaySource source;

    public AllEventsReplayHub()
        : this(AllEventsReplaySource.Instance)
    { }

    public AllEventsReplayHub(AllEventsReplaySource source)
    {
        this.source = source;
    }

    public override Task OnConnected()
    {
        var previousSequenceNumber = Int32.Parse(Context.QueryString["startAfter"]);
        var connectionId = this.Context.ConnectionId;
        AllEventsReplaySource.Instance.SubscribeClient(connectionId, previousSequenceNumber);
        return base.OnConnected();
    }
}
The issue you are experiencing seems consistent with a message buffer overflow. When SignalR releases messages from its buffer, it does so in 250 message fragments by default.
SignalR will buffer at least the last 1000 messages sent to a given connectionId. This means that when you send the 1251st message, the first 250 get dereferenced by the buffer. This explains why when a client first connects to the server, it receives the entire sequence of messages. You have to send at least 1251 messages to a given client before the buffer will drop fragments. Again, this is all assuming default settings.
While you could increase the DefaultMessageBufferSize, that probably will not fix your root problem. It seems that you are trying to send messages faster than the server can send them to the client. If you do that continuously, you will run out of buffer space no matter the size.
It's more common to reduce the DefaultMessageBufferSize rather than increase it, since the buffers can consume a lot of memory, especially if you are sending a lot of large unique messages to many different clients.
Your best bet to avoid overrunning the buffer is to have the client send an ACK at least every 1000 messages. Given this, it might be possible to avoid sending over 1000 unACKed messages thereby avoiding this problem altogether.
By the way, you can take a look at SignalR's message buffer implementation yourself if you feel so inclined. Note that the capacity constructor argument is the DefaultMessageBufferSize.