I have a basic server made with ws-rs that holds a list of connections. When a connection closes, I'd like to be able to tell which connection it is and remove it from the list.
I'd like to achieve something like this:
extern crate ws; // 0.9.1
use ws::{listen, CloseCode, Handler, Sender};
struct Connection {
ip: String,
}
struct MyHandler {
out: Sender,
connections: Vec<Connection>,
}
impl Handler for MyHandler {
fn on_close(&mut self, code: CloseCode, reason: &str) {
// here I'd like to identify the connection
// but there doesn't seem to be any interface for it
}
}
fn main() {
listen("127.0.0.1:8001", |out| MyHandler {
out,
connections: Vec::new(),
})
.unwrap();
}
I feel like it's pretty basic to want to keep a list of connections, and I'm missing something obvious. I can't seem to find any resources online about this.
I found the answer over at r/rust.
This question is a case of googling the wrong question. I had misunderstood the usage of Handler: I assumed there was just one Handler (per thread), but a Handler is meant to represent a single connection. Keeping a list of Handlers is therefore the same as keeping a list of connections, each of which is identifiable by the Sender instance created for it.
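To make that concrete, here is a minimal sketch of my own (not taken from the r/rust answer) of keeping a shared list of connections, assuming ws 0.9, where Sender is Clone and exposes token() to identify its connection; the Arc<Mutex<…>> is just one way to let every Handler reach the same Vec:
extern crate ws; // 0.9.1

use std::sync::{Arc, Mutex};
use ws::{listen, CloseCode, Handler, Sender};

struct MyHandler {
    out: Sender,
    // shared with every other handler; one entry per open connection
    connections: Arc<Mutex<Vec<Sender>>>,
}

impl Handler for MyHandler {
    fn on_close(&mut self, _code: CloseCode, _reason: &str) {
        // `self.out` identifies this connection, so drop it from the shared list
        let my_token = self.out.token();
        self.connections.lock().unwrap().retain(|s| s.token() != my_token);
    }
}

fn main() {
    let connections: Arc<Mutex<Vec<Sender>>> = Arc::new(Mutex::new(Vec::new()));
    listen("127.0.0.1:8001", move |out| {
        // every new connection gets its own Handler and its own Sender
        connections.lock().unwrap().push(out.clone());
        MyHandler {
            out,
            connections: Arc::clone(&connections),
        }
    })
    .unwrap();
}
If the event loop stays on a single thread, an Rc<RefCell<…>> would work just as well; the Arc<Mutex<…>> form simply avoids depending on that detail.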
In actix-web, it is possible to serve a file by returning the following from a handler:
HttpResponse::Ok().streaming(file)
But here, file must implement the Stream<Item = Result<Bytes, E>> trait. The File type from the async_std crate does not implement it, so I created a wrapper that does:
struct FileStreamer {
file: File,
}
impl Stream for FileStreamer {
type Item = Result<Bytes, std::io::Error>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let mut buf = [0; 1024];
self.file.read(&mut buf).poll_unpin(cx).map(|r| {
r.map(|n| {
if n == 0 {
None
} else {
Some(Bytes::copy_from_slice(&buf[0..n]))
}
})
.transpose()
})
}
}
It works, but there is a problem: for every call to read we create a new instance of Bytes, which is a dynamically allocated buffer.
Is this the most efficient way to serve a file in actix-web?
It also seems to me that choosing the right buffer size is especially critical here: a buffer that is too small causes repeated syscalls, while one that is too large causes slow memory allocation and may not even be used entirely.
Am I right to consider recurring dynamic allocation as a performance issue?
PS: The file in question is not static; it is subject to modification and deletion, so controlling the reading process is necessary.
From the actix-web documentation:
actix-web can send the file in question based on a path; this example takes a dynamic path from the URL. I feel you are overthinking the problem of streaming a file.
use actix_files::NamedFile;
use actix_web::{HttpRequest, Result};
use std::path::PathBuf;
async fn index(req: HttpRequest) -> Result<NamedFile> {
let path: PathBuf = req.match_info().query("filename").parse().unwrap();
Ok(NamedFile::open(path)?)
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
use actix_web::{web, App, HttpServer};
HttpServer::new(|| App::new().route("/{filename:.*}", web::get().to(index)))
.bind("127.0.0.1:8080")?
.run()
.await
}
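Regarding the PS (the file may be modified or deleted, so you want control over the reading): if you do want to stream it yourself rather than use NamedFile, you don't have to hand-roll the Stream. Here is a sketch of one alternative, assuming actix-web 4 on the tokio runtime plus the tokio-util crate (with its io feature), whose ReaderStream turns any AsyncRead into a Stream<Item = io::Result<Bytes>>; the /download route and the data.bin path are made up for the example:
use actix_web::{get, App, Error, HttpResponse, HttpServer};
use tokio_util::io::ReaderStream;

#[get("/download")]
async fn download() -> Result<HttpResponse, Error> {
    // tokio::fs::File implements AsyncRead, so ReaderStream can wrap it directly
    let file = tokio::fs::File::open("data.bin").await?;

    // chunk size chosen in one place: each poll reads up to 64 KiB into a fresh Bytes
    let stream = ReaderStream::with_capacity(file, 64 * 1024);

    Ok(HttpResponse::Ok().streaming(stream))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(download))
        .bind("127.0.0.1:8080")?
        .run()
        .await
}
This still allocates one Bytes per chunk, which is normally cheap next to the I/O itself, and the buffer-size trade-off you describe is then tuned in a single with_capacity call.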
Assuming I want to return an instance of a "stateful" component to a user, what is the typical way to clean up/join background work within that instance? And are there any patterns to avoid viral propagation of explicit cleanup functions all the way to the root code?
For example, let's assume I am returning a database client to the user. Inside this client, a loop periodically polls the server for updates. Any time this client sits inside an ownership DAG (say, as a member variable of another struct, or in a list held by another struct), requiring an explicit Close() bubbles up virally through the call stack, because each upward link in the DAG needs a Close() of its own, all the way to the function that owns the root instance (e.g. main() has to call Close() on the root server instance, which therefore needs its own Close() implementation to clean up the background work behind it, and so on). Something like the below:
type DbClient struct { ... }
func Cleanup(client DbClient) { ... }
type Component struct {
client DbClient
...
}
func Cleanup(component Component) { ... }
type Server struct {
component Component
...
}
func Cleanup(server Server) { ... }
Is there any other way to handle these cases? Or is an explicit Close() function the recommendation for such stateful components?
I guess the problem you mentioned is that "each upwards link in the DAG will require a Close()", all the way to the function that owns the root instance.
Go has a struct embedding feature, and Go favors composition over inheritance.
There's an important way in which embedding differs from subclassing. When we embed a type, the methods of that type become methods of the outer type, but when they are invoked the receiver of the method is the inner type, not the outer one.
package main
import "fmt"
type DbClient struct{}
func (client *DbClient) Cleanup() {
fmt.Println("Closed called on client")
}
type Component struct {
*DbClient
}
type Server struct {
*Component
}
func main() {
client := DbClient{}
component := Component{&client}
server := Server{&component}
server.Cleanup()
}
I'd like to extend http.Server with graceful shutdown and some other gadgets that I would share across my HTTP services. Currently my code looks more or less like this:
type MyServer struct {
server *http.Server
// ...
}
func (s *MyServer) ListenAndServe() {
// Create listener and pass to s.server.Serve()
}
This works great, but it requires exposing all the necessary methods and variables of http.Server manually.
Wrapping most of the methods wouldn't be a big problem, but I can't find a sensible way to expose http.Server.ListenAndServeTLS without copying its implementation from the source. The last line of that method is srv.Serve(tlsListener), and I'd love to provide my own Serve method so that the net.Listener can be modified before it is passed to http.Server.Serve.
I started to pencil in my wrapper with simply:
type MyServer struct {
http.Server
}
func (s *MyServer) Serve(l net.Listener) {
// Wrap l with MyListener, pass to s.Server.Serve()
}
but obviously neither the embedded ListenAndServe nor ListenAndServeTLS will start using my implementation of Serve, and I'd like to ask them to... Is there any way I can tackle the problem, or does the design of http.Server effectively prevent me from solving this?
Hacks welcome: even if I don't use them in production, I'll gain some knowledge.
The embedded http.Server's ListenAndServe* methods will never dispatch to a Serve you define on the outer type; method promotion only works one way. Going the other way around does work: define ListenAndServe* on your own type and call the embedded server's Serve from there:
type MyServer struct {
http.Server
// ...
}
func (s *MyServer) ListenAndServe() error {
	ln, err := net.Listen("tcp", s.Addr) // s.Addr is promoted from the embedded http.Server
	if err != nil {
		return err
	}
	return s.Server.Serve(ln) // wrap ln (e.g. with MyListener) before this call if needed
}

func (s *MyServer) ListenAndServeTLS(certFile, keyFile string) error {
	ln, err := net.Listen("tcp", s.Addr)
	if err != nil {
		return err
	}
	return s.Server.ServeTLS(ln, certFile, keyFile) // ServeTLS does the TLS setup on top of ln
}
Is there any possible way to access the socket handle inside a Boost.Asio async completion handler? I looked at the Boost.Asio placeholders, but there is no placeholder that stores the socket handle.
You can just arrange for it yourself, the same way you would outside Boost or Asio.
To adapt a function that takes extra arguments (e.g. a socket) so that it exposes a void() signature, you can use bind:
int foo(std::string const& s, int);
std::function<void()> adapted = std::bind(foo, "hello world", 42);
So, usually you'd have code similar to this:
boost::asio::async_connect(socket_.lowest_layer(), endpoint_iterator,
boost::bind(&client::handle_connect, this, boost::asio::placeholders::error));
Note that by using bind and this, we've bound a member function as the completion handler:
struct client
{
// ....
void handle_connect(boost::system::error_code err)
{
// you can just use `this->socket_` here
// ...
}
};
This implies that in handle_connect we can just use the socket_ member variable.
However, if you want to make things complicated, you can use free functions as well:
boost::asio::async_connect(socket_.lowest_layer(), endpoint_iterator,
boost::bind(&free_handle_connect, boost::ref(socket_), boost::asio::placeholders::error));
Now the corresponding free handler function looks like this:
static void free_handle_connect(
    boost::asio::ip::tcp::socket& socket_,
    const boost::system::error_code& err)
{
    // using `socket_` as it was passed in via boost::ref
    auto fd = socket_.native_handle(); // the underlying OS socket handle
}
I was going over the Boost.Asio examples and I am wondering why there isn't a simple client/server example that prints a string on the server and then returns a response to the client.
I tried to modify the echo server but I can't really figure out what I'm doing at all.
Can anyone find me a template of a client and a template of a server?
I would like to eventually create a server/client application that receives binary data and just returns an acknowledgment back to the client that the data is received.
EDIT:
void handle_read(const boost::system::error_code& error,
size_t bytes_transferred) // from the server
{
if (!error)
{
boost::asio::async_write(socket_,
boost::asio::buffer("ACK", bytes_transferred),
boost::bind(&session::handle_write, this,
boost::asio::placeholders::error));
}
else
{
delete this;
}
}
This returns only 'A' to the client.
Also in data_ I get a lot of weird symbols after the response itself.
Those are my problems.
EDIT 2:
Ok so the main problem is with the client.
size_t reply_length = boost::asio::read(s,
boost::asio::buffer(reply, request_length));
Since it's an echo server, the 'ACK' will only appear when the request length is more than 3 characters.
How do I overcome this?
I tried changing request_length to 4 but that only makes the client wait and not do anything at all.
Eventually I found out that the problem resides in this bit of code in the server:
void handle_read(const boost::system::error_code& error,
size_t bytes_transferred) // from the server
{
if (!error)
{
boost::asio::async_write(socket_,
boost::asio::buffer("ACK", 4), // replaced bytes_transferred with the length of my message
boost::bind(&session::handle_write, this,
boost::asio::placeholders::error));
}
else
{
delete this;
}
}
And in the client:
size_t reply_length = boost::asio::read(s,
boost::asio::buffer(reply, 4)); // replaced request_length with the length of the custom message.
The echo client/server is the simple example. What areas are you having trouble with? The client should be fairly straightforward since it uses the blocking APIs. The server is slightly more complex since it uses the asynchronous APIs with callbacks. When you boil it down to the core concepts (session, server, io_service) it's fairly easy to understand.