I am trying out the still-unstable async-await syntax in nightly Rust 1.38 with futures-preview = "0.3.0-alpha.16" and runtime = "0.3.0-alpha.6". It feels really cool, but the docs are still scarce and I got stuck.
To go a bit beyond the basic examples, I would like to create an app that:
Accepts TCP connections on a given port;
Broadcasts all the data received from any connection to all active connections.
Existing docs and examples got me this far:
#![feature(async_await)]
#![feature(async_closure)]

use futures::{
    prelude::*,
    select,
    future::select_all,
    io::{ReadHalf, WriteHalf, Read},
};
use runtime::net::{TcpListener, TcpStream};
use std::io;

async fn read_stream(mut reader: ReadHalf<TcpStream>) -> (ReadHalf<TcpStream>, io::Result<Box<[u8]>>) {
    let mut buffer: Vec<u8> = vec![0; 1024];
    match reader.read(&mut buffer).await {
        Ok(len) => {
            buffer.truncate(len);
            (reader, Ok(buffer.into_boxed_slice()))
        },
        Err(err) => (reader, Err(err)),
    }
}
#[runtime::main]
async fn main() -> std::io::Result<()> {
    let mut listener = TcpListener::bind("127.0.0.1:8080")?;
    println!("Listening on {}", listener.local_addr()?);

    let mut incoming = listener.incoming().fuse();
    let mut writers: Vec<WriteHalf<TcpStream>> = vec![];
    let mut reads = vec![];

    loop {
        select! {
            maybe_stream = incoming.select_next_some() => {
                let (mut reader, writer) = maybe_stream?.split();
                writers.push(writer);
                reads.push(read_stream(reader).fuse());
            },
            maybe_read = select_all(reads.iter()) => {
                match maybe_read {
                    (reader, Ok(data)) => {
                        for writer in writers {
                            writer.write_all(data).await.ok(); // Ignore errors here
                        }
                        reads.push(read_stream(reader).fuse());
                    },
                    (reader, Err(err)) => {
                        let reader_addr = reader.peer_addr().unwrap();
                        writers.retain(|writer| writer.peer_addr().unwrap() != reader_addr);
                    },
                }
            }
        }
    }
}
This fails with:
error: recursion limit reached while expanding the macro `$crate::dispatch`
--> src/main.rs:36:9
|
36 | / select! {
37 | | maybe_stream = incoming.select_next_some() => {
38 | | let (mut reader, writer) = maybe_stream?.split();
39 | | writers.push(writer);
... |
55 | | }
56 | | }
| |_________^
|
= help: consider adding a `#![recursion_limit="128"]` attribute to your crate
= note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)
This is very confusing. Maybe I am using select_all() the wrong way? Any help in making it work is appreciated!
For completeness, my Cargo.toml:
[package]
name = "async-test"
version = "0.1.0"
authors = ["xxx"]
edition = "2018"
[dependencies]
runtime = "0.3.0-alpha.6"
futures-preview = { version = "=0.3.0-alpha.16", features = ["async-await", "nightly"] }
In case someone is following along: I finally hacked it together. This code works:
#![feature(async_await)]
#![feature(async_closure)]
#![recursion_limit = "128"]

use futures::{
    prelude::*,
    select,
    stream,
    io::ReadHalf,
    channel::{
        oneshot,
        mpsc::{unbounded, UnboundedSender},
    },
};
use runtime::net::{TcpListener, TcpStream};
use std::{
    io,
    net::SocketAddr,
    collections::HashMap,
};
async fn read_stream(
    addr: SocketAddr,
    drop: oneshot::Receiver<()>,
    mut reader: ReadHalf<TcpStream>,
    sender: UnboundedSender<(SocketAddr, io::Result<Box<[u8]>>)>,
) {
    let mut drop = drop.fuse();
    loop {
        let mut buffer: Vec<u8> = vec![0; 1024];
        select! {
            result = reader.read(&mut buffer).fuse() => {
                match result {
                    Ok(len) => {
                        buffer.truncate(len);
                        sender.unbounded_send((addr, Ok(buffer.into_boxed_slice())))
                            .expect("Channel error");
                        if len == 0 {
                            return;
                        }
                    },
                    Err(err) => {
                        sender.unbounded_send((addr, Err(err))).expect("Channel error");
                        return;
                    }
                }
            },
            _ = drop => {
                return;
            },
        }
    }
}
enum Event {
    Connection(io::Result<TcpStream>),
    Message(SocketAddr, io::Result<Box<[u8]>>),
}
#[runtime::main]
async fn main() -> std::io::Result<()> {
    let mut listener = TcpListener::bind("127.0.0.1:8080")?;
    eprintln!("Listening on {}", listener.local_addr()?);

    let mut writers = HashMap::new();
    let (sender, receiver) = unbounded();

    let connections = listener.incoming().map(|maybe_stream| Event::Connection(maybe_stream));
    let messages = receiver.map(|(addr, maybe_message)| Event::Message(addr, maybe_message));
    let mut events = stream::select(connections, messages);

    loop {
        match events.next().await {
            Some(Event::Connection(Ok(stream))) => {
                let addr = stream.peer_addr().unwrap();
                eprintln!("New connection from {}", addr);

                let (reader, writer) = stream.split();
                let (drop_sender, drop_receiver) = oneshot::channel();
                writers.insert(addr, (writer, drop_sender));
                runtime::spawn(read_stream(addr, drop_receiver, reader, sender.clone()));
            },
            Some(Event::Message(addr, Ok(message))) => {
                if message.len() == 0 {
                    eprintln!("Connection closed by client: {}", addr);
                    writers.remove(&addr);
                    continue;
                }
                eprintln!("Received {} bytes from {}", message.len(), addr);
                if &*message == b"quit\n" {
                    eprintln!("Dropping client {}", addr);
                    writers.remove(&addr);
                    continue;
                }
                for (&other_addr, (writer, _)) in &mut writers {
                    if addr != other_addr {
                        writer.write_all(&message).await.ok(); // Ignore errors
                    }
                }
            },
            Some(Event::Message(addr, Err(err))) => {
                eprintln!("Error reading from {}: {}", addr, err);
                writers.remove(&addr);
            },
            _ => panic!("Event error"),
        }
    }
}
I use a channel and spawn a reading task for each client. Special care had to be taken to ensure that readers get dropped together with their writers: this is why the oneshot future is used. When the oneshot::Sender is dropped, the oneshot::Receiver future resolves to a canceled state, which serves as the notification telling a reading task that it is time to halt. To demonstrate that it works, we drop a client as soon as we receive a "quit" message.
Sadly, there is a (seemingly useless) warning about the unused JoinHandle returned by the runtime::spawn call, and I don't really know how to eliminate it.
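If the warning is just the usual unused-result lint on the returned handle (an assumption on my part; I have not checked how runtime annotates its JoinHandle), explicitly discarding the handle should silence it:

let _ = runtime::spawn(read_stream(addr, drop_receiver, reader, sender.clone())); // explicitly discard the JoinHandle we never await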
Related
I'm writing an application using the fltk library.
The architecture of the application involves a global state that stores which page (viewer) to display now and stores that viewer's data.
There are buttons that send a message to update and change the global state, indicating which viewer to display next.
#[derive(Clone, Copy)]
enum Message {
    Update,
}

#[derive(Clone, Copy)]
struct GlobalState {
    viewer: Viewer,
}

impl GlobalState {
    fn set_viewer(&mut self, v: Viewer) {
        self.viewer = v;
    }
}

#[derive(Clone, Copy)]
enum Viewer {
    Page1,
    Page2,
}
fn main() {
    let mut gs = GlobalState { viewer: Viewer::Page1 };
    let app = app::App::default().with_scheme(app::Scheme::Gtk);
    let (s, r) = app::channel::<Message>();

    let mut wind = Window::default().with_size(800, 600);
    left_side(&mut gs);
    let mut col = Column::new(155, 70, 800 - 150, 600 - 65, None);
    s.send(Message::Update);
    col.end();
    wind.end();
    wind.show();

    while app.wait() {
        if let Some(msg) = r.recv() {
            match msg {
                Message::Update => {
                    match gs.viewer {
                        Viewer::Page1 => {
                            col.clear();
                            let view = view_ohm();
                            col.add(&view);
                        },
                        Viewer::Page2 => {
                            col.clear();
                            let view = view_ohm();
                            col.add(&view);
                        },
                        _ => ()
                    }
                }
                _ => (),
            }
        }
    }
}
fn left_side(gs: &mut GlobalState) {
    let btn_width = 130;
    let btn_height = 30;
    let (s, r) = app::channel::<Message>();

    let mut grp = Group::default().with_size(65, 600);
    let mut col = Pack::default()
        .with_size(btn_width, 600)
        .center_of_parent()
        .with_type(PackType::Vertical);
    col.set_spacing(2);

    let mut btn = Button::new(0, 0, btn_width, btn_height, "Page 1");
    btn.emit(s, Message::Update);
    btn.set_callback(|_| {
        gs.set_viewer(Viewer::Page1)
    });

    let mut btn = Button::new(0, 0, btn_width, btn_height, "Page 2");
    btn.emit(s, Message::Update);
    btn.set_callback(|_| {
        gs.set_viewer(Viewer::Page2)
    });

    col.end();
    grp.end();
}
Questions:
1. The code doesn't compile; it fails with this error:
error[E0521]: borrowed data escapes outside of function
--> src/main.rs:89:5
|
74 | fn left_side(gs: &mut GlobalState) {
| -- - let's call the lifetime of this reference `'1`
| |
| `gs` is a reference that is only valid in the function body
...
89 | / btn.set_callback(|_| {
90 | | gs.set_viewer(Viewer::Page1)
91 | | });
| | ^
| | |
| |______`gs` escapes the function body here
| argument requires that `'1` must outlive `'static`
error[E0524]: two closures require unique access to `*gs` at the same time
--> src/main.rs:95:22
|
89 | btn.set_callback(|_| {
| - --- first closure is constructed here
| _____|
| |
90 | | gs.set_viewer(Viewer::Page1)
| | -- first borrow occurs due to use of `*gs` in closure
91 | | });
| |______- argument requires that `*gs` is borrowed for `'static`
...
95 | btn.set_callback(|_| {
| ^^^ second closure is constructed here
96 | gs.set_viewer(Viewer::Page2)
| -- second borrow occurs due to use of `*gs` in closure
2. Is my application architecture workable, or is there a better one? The application has several pages (one page per viewer), and each viewer has its own global state with the data stored in it and the logic that processes this data.
3. What is the best way to store viewer data in the global state?
This code wraps your GlobalState in an Rc<RefCell<GlobalState>>, so each callback can own its own cheaply cloned handle to the shared state and mutate it through the RefCell at runtime:
use std::{cell::RefCell, rc::Rc}; // needed for the shared, mutable state

#[derive(Clone, Copy)]
enum Message {
    Update,
}

#[derive(Clone, Copy)]
struct GlobalState {
    viewer: Viewer,
}

impl GlobalState {
    fn set_viewer(&mut self, v: Viewer) {
        self.viewer = v;
    }
}

#[derive(Clone, Copy)]
enum Viewer {
    Page1,
    Page2,
}
fn main() {
    let gs = Rc::new(RefCell::new(GlobalState { viewer: Viewer::Page1 }));
    let app = app::App::default().with_scheme(app::Scheme::Gtk);
    let (s, r) = app::channel::<Message>();

    let mut wind = Window::default().with_size(800, 600);
    left_side(gs.clone());
    let mut col = Column::new(155, 70, 800 - 150, 600 - 65, None);
    col.end();
    wind.end();
    wind.show();

    while app.wait() {
        if let Some(msg) = r.recv() {
            match msg {
                Message::Update => {
                    let gs = gs.borrow();
                    match gs.viewer {
                        Viewer::Page1 => {
                            col.clear();
                            let view = view_ohm();
                            col.add(&view);
                        },
                        Viewer::Page2 => {
                            col.clear();
                            let view = view_ohm();
                            col.add(&view);
                        },
                        _ => ()
                    }
                }
                _ => (),
            }
        }
    }
}
fn left_side(gs: Rc<RefCell<GlobalState>>) {
    let btn_width = 130;
    let btn_height = 30;
    let s = app::Sender::get();

    let mut grp = Group::default().with_size(65, 600);
    let mut col = Pack::default()
        .with_size(btn_width, 600)
        .center_of_parent()
        .with_type(PackType::Vertical);
    col.set_spacing(2);

    let mut btn = Button::new(0, 0, btn_width, btn_height, "Page 1");
    btn.set_callback({
        let gs = gs.clone();
        move |_| {
            gs.borrow_mut().set_viewer(Viewer::Page1);
            s.send(Message::Update);
        }
    });

    let mut btn = Button::new(0, 0, btn_width, btn_height, "Page 2");
    btn.set_callback(move |_| {
        gs.borrow_mut().set_viewer(Viewer::Page2);
        s.send(Message::Update);
    });

    col.end();
    grp.end();
}
How do I create a writer function for the tunnel program below? The code below is a sample program that creates a Windows tunnel interface. I want to write a function that writes (or sends) packets to another server's IP address. The GitHub link for the full code and its dependencies is given below.
https://github.com/nulldotblack/wintun/blob/main/examples/basic.rs
use log::*;
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};

static RUNNING: AtomicBool = AtomicBool::new(true);

fn main() {
    env_logger::init();

    let wintun = unsafe { wintun::load_from_path("examples/wintun/bin/amd64/wintun.dll") }
        .expect("Failed to load wintun dll");

    let version = wintun::get_running_driver_version(&wintun);
    info!("Using wintun version: {:?}", version);

    let adapter = match wintun::Adapter::open(&wintun, "Demo") {
        Ok(a) => a,
        Err(_) => wintun::Adapter::create(&wintun, "Example", "Demo", None)
            .expect("Failed to create wintun adapter!"),
    };

    let version = wintun::get_running_driver_version(&wintun).unwrap();
    info!("Using wintun version: {:?}", version);

    let session = Arc::new(adapter.start_session(wintun::MAX_RING_CAPACITY).unwrap());

    let reader_session = session.clone();
    let reader = std::thread::spawn(move || {
        while RUNNING.load(Ordering::Relaxed) {
            match reader_session.receive_blocking() {
                Ok(packet) => {
                    let bytes = packet.bytes();
                    println!(
                        "Read packet size {} bytes. Header data: {:?}",
                        bytes.len(),
                        &bytes[0..(20.min(bytes.len()))]
                    );
                }
                Err(_) => println!("Got error while reading packet"),
            }
        }
    });

    println!("Press enter to stop session");
    let mut line = String::new();
    let _ = std::io::stdin().read_line(&mut line);

    println!("Shutting down session");
    RUNNING.store(false, Ordering::Relaxed);
    session.shutdown();
    let _ = reader.join();
    println!("Shutdown complete");
}
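A possible starting point, mirroring the reader thread above, is a writer thread that injects packets into the tunnel through the same shared session. This is only a sketch: it assumes the wintun crate's allocate_send_packet / send_packet API, and building a complete, valid IP packet addressed to the remote server (headers, checksums, routing) is deliberately left out.

let writer_session = session.clone();
let writer = std::thread::spawn(move || {
    while RUNNING.load(Ordering::Relaxed) {
        // Placeholder: `outgoing` must hold a complete, valid IP packet (header + payload)
        // destined for the other server; constructing it is outside the scope of this sketch.
        let outgoing: Vec<u8> = Vec::new();
        match writer_session.allocate_send_packet(outgoing.len() as u16) {
            Ok(mut packet) => {
                packet.bytes_mut().copy_from_slice(&outgoing);
                writer_session.send_packet(packet); // hands the buffer back to the driver
            }
            Err(_) => println!("Got error while allocating packet"),
        }
        std::thread::sleep(std::time::Duration::from_millis(1000));
    }
});

You would then join this thread alongside the reader before calling session.shutdown().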
Take for example this code:
#[payable]
pub fn add_liquidity(&mut self, min_liquidity: u128, max_tokens: u128) -> U128 {
    let deposit = env::attached_deposit();
    let contract_near_balance = env::account_balance();
    let user_address = env::predecessor_account_id();
    let contract_address = env::current_account_id();

    let token_amount = max_tokens;
    println!("{}", token_amount);
    let initial_liquidity = contract_near_balance;
    println!("initial liquidity{}", initial_liquidity);
    self.uni_totalsupply = initial_liquidity;

    let balance_option = self.uni_balances.get(&user_address);
    match balance_option {
        Some(_) => {
            self.uni_balances.insert(&user_address, &initial_liquidity);
        }
        None => {
            self.uni_balances.insert(&user_address, &initial_liquidity);
        }
    }

    Promise::new(self.avrit_token_id.clone()).function_call(
        b"transfer_from".to_vec(),
        json!({"owner_id": contract_address, "new_owner_id": "avrit.testnet", "amount": U128(token_amount)})
            .to_string()
            .as_bytes()
            .to_vec(),
        DEPOSIT,
        env::prepaid_gas() - GAS_FOR_SWAP,
    );

    initial_liquidity.into()
}
Even if the promise fails, will it still set uni_balances in storage? How can I make the transaction atomic?
Contract calls are not atomic. The way to make a chain of promises atomic is to use a then callback, which is called after the initial promise. In the callback function you can check the success of the previous promise, like here:
pub fn check_promise(&mut self) {
    match env::promise_result(0) {
        PromiseResult::Successful(_) => {
            env::log(b"Check_promise successful");
            self.checked_promise = true;
        }
        _ => panic!("Promise with index 0 failed"),
    };
}
At this point you can make a state change that is final and could only happen if the whole transaction was successful.
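Applied to add_liquidity, that means deferring the uni_balances write to the callback. A rough sketch, assuming the same near-sdk version and constants used above (the callback name on_transfer_complete and the GAS_FOR_CALLBACK constant are made up here for illustration):

Promise::new(self.avrit_token_id.clone())
    .function_call(
        b"transfer_from".to_vec(),
        json!({"owner_id": contract_address, "new_owner_id": "avrit.testnet", "amount": U128(token_amount)})
            .to_string()
            .as_bytes()
            .to_vec(),
        DEPOSIT,
        env::prepaid_gas() - GAS_FOR_SWAP - GAS_FOR_CALLBACK,
    )
    .then(Promise::new(env::current_account_id()).function_call(
        b"on_transfer_complete".to_vec(), // callback on this contract, in the spirit of check_promise
        json!({"user_address": user_address, "amount": U128(initial_liquidity)})
            .to_string()
            .as_bytes()
            .to_vec(),
        0,
        GAS_FOR_CALLBACK,
    ));

Inside on_transfer_complete you match env::promise_result(0) exactly as in check_promise and only insert into uni_balances on PromiseResult::Successful, so the storage change cannot outlive a failed transfer.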
I'm implementing a BLE protocol between a central (iPhone) and peripheral (custom device). The protocol works as follows:
1. central connects to peripheral and sets up notification
2. peripheral sends data on notification characteristic
3. central processes data and sends response on separate characteristic
4. peripheral sends additional data on notification characteristic
5. central processes data and disconnects.
I'm attempting to implement this in a clean way using RxBluetoothKit. It currently works, but I'd like to solve the following challenges:
What is the best way to cleanly disconnect in step 5? I'm hoping not to have to dispose the overall observable, but rather just have it 'complete'. I'm currently using 'takeUntil', but I'm not sure that's the best way.
How can I let the notification clean up gracefully prior to disconnect? With my current code, I receive an 'API MISUSE can only accept commands while in the connected state' warning, because I believe the notification is cleaning up while the disconnect is occurring.
Thanks.
enum TestPeripheralService: String, ServiceIdentifier {
    case main = "CED916FA-6692-4A12-87D5-6F2764762B23"

    var uuid: CBUUID { return CBUUID(string: self.rawValue) }
}

enum TestPeripheralCharacteristic: String, CharacteristicIdentifier {
    case writer = "CED927B4-6692-4A12-87D5-6F2764762B2A"
    case reader = "CED9D5D8-6692-4A12-87D5-6F2764762B2A"

    var uuid: CBUUID { return CBUUID(string: self.rawValue) }
    var service: ServiceIdentifier { return TestPeripheralService.main }
}

fileprivate lazy var centralManager: CentralManager = {
    RxBluetoothKitLog.setLogLevel(.verbose)
    return CentralManager(queue: .main)
}()

func executeConnectionAndHandshake() {
    let disconnectSubject = PublishSubject<Bool>.init()
    var peripheral: Peripheral?
    var packetNum = 0

    _ = centralManager
        .observeState()
        .startWith(centralManager.state)
        .filter { $0 == .poweredOn }
        .flatMap { _ in self.centralManager.scanForPeripherals(withServices: [TestPeripheralService.main.uuid]) }
        .flatMap { $0.peripheral.establishConnection().takeUntil(disconnectSubject) }
        .do(onNext: { peripheral = $0 })
        .flatMap { $0.discoverServices([TestPeripheralService.main.uuid]) }
        .flatMap { $0[0].discoverCharacteristics(nil) }
        .flatMap { _ in
            Observable<Bool>.create { event in
                let disposables = CompositeDisposable()
                let readSubject = PublishSubject<Data>.init()

                _ = disposables.insert(peripheral!.observeValueUpdateAndSetNotification(for: TestPeripheralCharacteristic.reader)
                    .subscribe(onNext: {
                        packetNum += 1
                        let packet = $0.value!
                        if (packetNum <= 1) {
                            readSubject.onNext(packet)
                        } else {
                            event.onNext(true)
                            event.onCompleted()
                        }
                    }, onError: { event.onError($0) })
                )

                _ = disposables.insert(readSubject
                    .flatMapLatest { data -> Single<Characteristic> in
                        var writeData = Data(capacity: 300)
                        for _ in 0..<300 {
                            writeData.append(0xFF)
                        }
                        return peripheral!.writeValue(writeData, for: TestPeripheralCharacteristic.writer, type: .withResponse)
                    }
                    .subscribe(onError: { event.onError($0) })
                )

                return Disposables.create {
                    disposables.dispose()
                }
            }
            .do(onCompleted: { disconnectSubject.onNext(true) })
        }
        .subscribe(onError: { print($0) },
                   onCompleted: { print("Connection and handshake completed") })
}
Chat server which fails to compile:
use std::io::{TcpListener, TcpStream};
use std::io::{Acceptor, Listener};

enum StreamOrSlice {
    Strm(TcpStream),
    Slc(uint, [u8, ..1024]),
}

fn main() {
    let listener = TcpListener::bind("127.0.0.1", 5555);

    // bind the listener to the specified address
    let mut acceptor = listener.listen();

    let (tx, rx) = channel();

    spawn(proc() {
        let mut streams: Vec<TcpStream> = Vec::new();
        loop {
            let rxd: StreamOrSlice = rx.recv();
            match rxd {
                Strm(stream) => {
                    streams.push(stream);
                }
                Slc(len, buf) => {
                    for stream in streams.iter_mut() {
                        let _ = stream.write(buf.slice(0, len));
                    }
                }
            }
        }
    });

    // accept connections and process them, spawning a new task for each one
    for stream in acceptor.incoming() {
        match stream {
            Err(e) => { /* connection failed */ }
            Ok(mut stream) => {
                // connection succeeded
                tx.send(Strm(stream.clone()));
                let tx2 = tx.clone();
                spawn(proc() {
                    let mut buf: [u8, ..1024] = [0, ..1024];
                    loop {
                        let len = stream.read(buf);
                        tx2.send(Slc(len.unwrap(), buf));
                    }
                })
            }
        }
    }
}
The error is:
Compiling chat v0.1.0 (file:///home/chris/rust/chat)
task 'rustc' has overflowed its stack
Could not compile `chat`.
Is this fixable in code, or is it a compiler bug?
Note: @Levans, thanks for your help this evening.
The compiler has crashed. You are not even writing your own macros. This is 100% a compiler bug. Report it.