Netty channel().read() on channel without any data currently present - websocket

I'm trying to figure out how to implement back-pressure in Netty. I found that setting autoRead=false and calling ctx.channel().read() explicitly might work, but I'm not sure how it works and was not able to find more details.
Specifically, I was wondering what happens when read() is called. Are the semantics that Netty will try to read from the underlying channel (let's say a WebSocket connection)? What if there is no data ready to be read? Will the read succeed without reading any data? Or am I guaranteed that if I call read() once, it will keep trying to read until some data is available?
Thanks

read() basically just tells the Channel to read the next chunk of data when there is something ready, which may happen immediately or at some point later. So yes, you are guaranteed that if you call read(), it will read at some point.

Related

How does the remote peer handle data sent by an RDMA write operation

I have difficulty understanding how and in which cases RDMA operations are used.
Let's say we have a server and a client. The client writes data via rdma-write to the memory region of the server. Since the server doesn't get any notification that data arrived during the client-side rdma-write operation (without immediate), I wonder now:
How can the server access this data if it doesn't even know that it got some, let alone where it is located (in the memory region)?
In my research I only found examples and explanations describing how to send/read data via rdma-write/read, but none explained, for example, how to actually make use of the received data.
The server's CPU needs to be notified about the data's arrival in a separate message before accessing it, either via a subsequent RDMA write with immediate operation, a send operation, or an atomic operation.

How to find out the destination of a golang channel

I am taking over maintenance of a multi-file golang program and now trying to understand the code flow. One feature of golang is the use of channels for sending values to another part of the code base. This feature can make tracing and understanding the code flow difficult, as the execution will resume at the receiving end of the channel, which may well be in a different file and may have a different name.
When reading through the code, I can see where data is being sent to a channel, but I do not see an intuitive or easy way to figure out where it is being received.
Is there a way in golang to find out where (as in filename:linenum) data sent through a channel is received?
No, because multiple places can receive from the same channel, and multiple instances of the same function can be receiving from different channels. Your best bet is to follow the channel itself around - look at where it's created, then what it gets passed to, and find what is receiving from it that way.
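To make that concrete, here is a minimal, self-contained sketch (all names invented for illustration) of the pattern you end up tracing: the channel is created in one place, handed to a sender and a receiver, and the receive site can only be found by following the channel value around.

    package main

    import "fmt"

    // producer sends values into whatever channel it was handed.
    func producer(out chan<- int) {
        for i := 0; i < 3; i++ {
            out <- i
        }
        close(out)
    }

    // consumer is "the other end"; in a multi-file program this could
    // live elsewhere under a different parameter name, so the only way
    // to find it is to follow where the channel value is passed.
    func consumer(in <-chan int, done chan<- struct{}) {
        for v := range in {
            fmt.Println("received:", v)
        }
        done <- struct{}{}
    }

    func main() {
        ch := make(chan int)    // 1. channel created here
        done := make(chan struct{})
        go consumer(ch, done)   // 2. passed to the receiver
        go producer(ch)         // 3. passed to the sender
        <-done                  // wait for the consumer to drain ch
    }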

How to communicate with an external system

I'm trying to write logic (a JS script) to communicate with an external system. As far as I understand, the logic will be executed on all endorsing peers.
In that case, how can I avoid duplicating the operation on the external system? For example, how do I increment a value in an external database? If I write logic to increment the value in JS, I think the value will be incremented by every endorsing peer.
I'd appreciate any comments.
Firstly, the only way you can currently interact with external systems is the experimental post API. This allows your Transaction Processor function to HTTP POST data to an external system and then process the response.
Documentation here:
https://hyperledger.github.io/composer/integrating/call-out.html
You are correct in stating that if you have 4 peers, then the chain code container for each peer will run your logic, so you'd expect to see 4 calls to your HTTP service. This is required because each peer node is independent and Fabric must achieve consensus across the peers.
The external functions should therefore (ideally) be side-effect free "pure" functions (idempotent), meaning that for a given set of input parameters you always get the same set of output results.
Clearly a function that returns an incrementing integer doesn't fit this description! You probably need to rethink how you are structuring your problem to make it compatible with a decentralised blockchain-based approach.
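To make the pure-versus-incrementing distinction concrete, here is a small illustration (written in Go purely for illustration; Composer transaction logic itself is JavaScript): the first function is safe for every endorsing peer to evaluate independently, while the second is not, because each call mutates external state.

    package main

    import "fmt"

    // discountedPrice is pure: the same inputs always produce the same
    // output, so four peers evaluating it independently will all agree.
    func discountedPrice(price, percent float64) float64 {
        return price * (1 - percent/100)
    }

    // nextID is neither pure nor idempotent: every call mutates shared
    // state, so four endorsing peers would bump the counter four times.
    var counter int

    func nextID() int {
        counter++
        return counter
    }

    func main() {
        fmt.Println(discountedPrice(100, 10)) // 90 every time
        fmt.Println(nextID(), nextID())       // 1 then 2: depends on how often it was called
    }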

Understanding goroutines for web API

Just starting out with Go and hoping to create a simple Web API. I'm looking into using Gorilla mux (http://www.gorillatoolkit.org/pkg/mux) to handle web requests.
I'm not sure how best to use Go's concurrency features to handle the requests. Did I read somewhere that the main function is actually a goroutine, or should I dispatch each request to a goroutine as it is received? Apologies if I'm "way off".
Assuming you're using Go's http.ListenAndServe to serve your HTTP requests, the documentation clearly states that each incoming connection is handled by a separate goroutine for you: http://golang.org/pkg/net/http/#Server.Serve
You would usually call ListenAndServe from your main function.
Gorilla mux is simply a package for more flexible routing of requests to your handlers than the http.DefaultServeMux. It doesn't actually handle the incoming connection or request; it simply relays it to your handler.
I highly suggest you read a bit of the documentation, specifically this guide https://golang.org/doc/articles/wiki/#tmp_3 on writing web applications.
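For context, here is a minimal sketch of the setup described above (route and handler names are made up): gorilla/mux only decides which handler a request goes to, while http.ListenAndServe accepts connections and runs each one on its own goroutine.

    package main

    import (
        "fmt"
        "net/http"

        "github.com/gorilla/mux"
    )

    // helloHandler is a hypothetical handler; by the time it runs, the
    // request is already being served on its own goroutine by net/http.
    func helloHandler(w http.ResponseWriter, r *http.Request) {
        vars := mux.Vars(r)
        fmt.Fprintf(w, "hello, %s\n", vars["name"])
    }

    func main() {
        r := mux.NewRouter()
        r.HandleFunc("/hello/{name}", helloHandler).Methods("GET")

        // ListenAndServe blocks here; it spawns a goroutine per incoming
        // connection, so no manual concurrency handling is needed.
        if err := http.ListenAndServe(":8080", r); err != nil {
            panic(err)
        }
    }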
I'm providing an answer even though I voted to close for being too broad.
Anyway, none of that is really necessary. You're overthinking it. If you haven't read this, it looks like a decent tutorial: http://thenewstack.io/make-a-restful-json-api-go/
You can really just set up routes like you would with most typical REST frameworks and let the web server/framework worry about concurrency at the request-handling level. You would only employ goroutines to generate the response of a request, say if you needed to aggregate data from 10 files that are all in a folder. Contrived example, but this is where you would spin off one goroutine per file, aggregate all the information by reading off a channel in a non-blocking select, and then return the result. You can expect all entry points to your code to be called in an asynchronous, non-blocking fashion, if that makes sense.
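A rough sketch of that contrived example, with an invented route, folder name, and handler (and a plain blocking receive per expected result instead of a select, to keep it short): one goroutine per file, results collected over a channel, and the aggregate written as the response.

    package main

    import (
        "fmt"
        "net/http"
        "os"
        "path/filepath"
        "strings"
    )

    // aggregateHandler fans out one goroutine per file in a hypothetical
    // "data" folder and collects the contents over a channel before
    // writing the combined response.
    func aggregateHandler(w http.ResponseWriter, r *http.Request) {
        paths, err := filepath.Glob("data/*.txt")
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }

        results := make(chan string, len(paths))
        for _, p := range paths {
            go func(path string) {
                b, err := os.ReadFile(path)
                if err != nil {
                    results <- fmt.Sprintf("error reading %s: %v", path, err)
                    return
                }
                results <- string(b)
            }(p)
        }

        // Collect exactly one result per spawned goroutine, then respond.
        var parts []string
        for range paths {
            parts = append(parts, <-results)
        }
        fmt.Fprint(w, strings.Join(parts, "\n"))
    }

    func main() {
        http.HandleFunc("/aggregate", aggregateHandler)
        http.ListenAndServe(":8080", nil)
    }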

Some data is lost when I use NSURLConnection to get data asynchronously

I process the data and do some UI work based on it in the -(void)connection:didReceiveData: method (I use a delegate as the callback), and I find that the UI work is never completely finished. Maybe the UI thread is still busy when the data is received, so some data is lost. You may suggest handling the data in -(void)connectionDidFinishLoading: instead, but that would cause other problems.
You've correctly suggested you need to process the received data in connectionDidFinishLoading:.
Before that, you need to collect all the received data (e.g. into an NSMutableData instance). Append the received data each time didReceiveData: is called (it may be called multiple times before loading finishes).
The reason some data was lost is all about the method -rangeOfData:options:range: and the fact that I used it incorrectly. BTW, I think this method is very weird: the options parameter accepts only one of two values, NSDataSearchBackwards and NSDataSearchAnchored. Why is there no "NSDataSearchForwards" or something like that?
