Why can I not load the page from the Actix documentation sample? - windows

I'm learning the Actix framework. The documentation has the sample:
use actix_rt::System;
use actix_web::{web, App, HttpResponse, HttpServer};
use std::sync::mpsc;
use std::thread;

#[actix_rt::main]
async fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let sys = System::new("http-server");

        let srv = HttpServer::new(|| App::new().route("/", web::get().to(|| HttpResponse::Ok())))
            .bind("127.0.0.1:8088")?
            .shutdown_timeout(60) // <- Set shutdown timeout to 60 seconds
            .run();

        let _ = tx.send(srv);
        sys.run()
    });

    let srv = rx.recv().unwrap();

    // pause accepting new connections
    srv.pause().await;
    // resume accepting new connections
    srv.resume().await;
    // stop server
    srv.stop(true).await;
}
The code compiles without any errors, but I can't open the page in my browser.
What have I missed, and why does the page not open in my browser?

That part of the documentation is an example of how to control a server running in a previously created thread: you can pause it, resume it, and stop it gracefully. These lines perform those three actions, and at the end the server is stopped.
let srv = rx.recv().unwrap();
// pause accepting new connections
srv.pause().await;
// resume accepting new connections
srv.resume().await;
// stop server
srv.stop(true).await;
That makes this example a server that shuts itself off at the end of the snippet. One small change makes the snippet run indefinitely:
let srv = rx.recv().unwrap();
// wait for any incoming connections
srv.await;
which is not something I'd recommend. There are other examples, particularly at the actix/examples repository, that would likely be more appropriate to get you started on how to structure an actix server.
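For comparison, a minimal sketch of a server that simply keeps running until it is shut down (assuming actix-web 2.x with actix-rt, matching the imports above; this is a sketch, not the documentation's example):
use actix_web::{web, App, HttpResponse, HttpServer};

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    // Bind and run the server, then await it; the await keeps main alive
    // until the server shuts down (for example on Ctrl+C).
    HttpServer::new(|| App::new().route("/", web::get().to(|| HttpResponse::Ok())))
        .bind("127.0.0.1:8088")?
        .run()
        .await
}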

Related

How to do an HTTP2 request with H2 using the new async-await syntax in Rust?

The problem I run into is that I tried to convert the H2 Akamai example into code using Rust's new async/await syntax.
I have been able to produce the following code, but it hangs on let response = response.compat().await; without me being able to understand why.
#![feature(async_await)]

use tokio::net::TcpStream;
use std::sync::Arc;
use webpki::DNSNameRef;
use futures::compat::Future01CompatExt;
use futures::future::{FutureExt, TryFutureExt};
use h2::client;
use rustls::ClientConfig;
use tokio_rustls::ClientConfigExt;
use rustls::Session;
use std::net::ToSocketAddrs;
use hyper::{Method, Request};

pub fn setup_config() -> Arc<ClientConfig>
{
    std::sync::Arc::new({
        let mut c = rustls::ClientConfig::new();
        c.root_store
            .add_server_trust_anchors(&webpki_roots::TLS_SERVER_ROOTS);
        c.alpn_protocols.push("h2".to_owned());
        c
    })
}

pub async fn worker()
{
    // Set the address to run our socket on.
    let address = "http2.akamai.com:443"
        .to_socket_addrs()
        .unwrap()
        .next()
        .unwrap();
    let config = setup_config();
    let dns_name = DNSNameRef::try_from_ascii_str("http2.akamai.com").unwrap();

    // Open a TCP connection.
    let tcp = TcpStream::connect(&address).compat().await.unwrap();
    let tls = config.connect_async(dns_name, tcp).compat().await.unwrap();

    let (_, session) = tls.get_ref();
    let negotiated_protocol = session.get_alpn_protocol();
    assert_eq!(Some("h2"), negotiated_protocol.as_ref().map(|x| &**x));

    let res = client::handshake(tls).compat().await;
    let (client, h2) = res.unwrap();
    println!("Test5");

    let request = Request::builder()
        .method(Method::GET)
        .uri("https://http2.akamai.com/")
        .body(())
        .unwrap();
    println!("Test6");

    let (response, x) = client.ready().compat().await.unwrap().send_request(request, true).unwrap();
    println!("Test7");

    let response = response.compat().await;
    println!("Test8");
}

fn main()
{
    // Call our `worker` function, which returns a future.
    // As with every `async fn`, for `worker` to do anything,
    // the returned future needs to be run. Additionally,
    // we need to convert the returned future from a futures 0.3 future into a
    // futures 0.1 future.
    let futures_03_future = worker();
    let futures_01_future = futures_03_future.unit_error().boxed().compat();

    // Finally, we can run the future to completion using the `run` function
    // provided by Tokio.
    tokio::run(futures_01_future);
}
Cargo.toml:
[dependencies]
# The latest version of the "futures" library, which has lots of utilities
# for writing async code. Enable the "compat" feature to include the
# functions for using futures 0.3 and async/await with the Hyper library,
# which use futures 0.1.
futures-preview = { version = "=0.3.0-alpha.16", features = ["compat"] }
# Hyper is an asynchronous HTTP library. We'll use it to power our HTTP
# server and to make HTTP requests.
hyper = "0.12.9"
# Tokio
tokio = "0.1.22"
h2 = "0.1.26"
# RustTLS
rustls = "0.12"
tokio-rustls = "0.5.0"
webpki = "0.18"
webpki-roots = "0.14"
Output:
Test5
Test6
Test7
I hope you're able to help me figure out why it hangs on this request.
EDIT: I checked Wireshark as well; the HTTP/2 connection is opened, but the request inside the connection is never sent. I still don't understand why.
It turned out I had forgotten to spawn the connection handling onto its own task as well:
tokio::spawn(h2.map_err(|_| panic!("connection failed")));
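In context, that spawn belongs right after the handshake in worker(), before the request is built and sent; a sketch (it assumes the futures 0.1 Future trait is in scope so that map_err is available on the h2 connection future):
let res = client::handshake(tls).compat().await;
let (client, h2) = res.unwrap();

// Drive the HTTP/2 connection on its own task. Without a task polling `h2`,
// the request frames below are never written to the socket, and
// `response.compat().await` hangs forever.
tokio::spawn(h2.map_err(|_| panic!("connection failed")));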
For more information see:
https://github.com/hyperium/h2/issues/390
https://github.com/hyperium/h2/issues/391

How to terminate all threads reading from Pipe (NSPipe) related to Process (NSTask)?

I am writing a macOS/Cocoa app that monitors a remote log file using a common recipe: launch a Process (formerly NSTask) instance on a background thread and read the process's stdout via a Pipe (formerly NSPipe), as listed below:
class LogTail {
    var process : Process? = nil

    func dolog() {
        //
        // Run ssh fred@foo.org /usr/bin/tail -f /var/log.system.log
        // on a background thread and monitor its stdout.
        //
        let processQueue = DispatchQueue.global(qos: .background)
        processQueue.async {
            //
            // Create process and associated command.
            //
            let process = Process()
            self.process = process
            process.launchPath = "/usr/bin/ssh"
            process.arguments = ["fred@foo.org",
                                 "/usr/bin/tail", "-f",
                                 "/var/log.system.log"]
            process.environment = [ ... ]
            //
            // Create pipe to read stdout of command as data is available
            //
            let pipe = Pipe()
            process.standardOutput = pipe
            let outHandle = pipe.fileHandleForReading
            outHandle.readabilityHandler = { pipe in
                if let string = String(data: pipe.availableData,
                                       encoding: .utf8) {
                    // write string to NSTextView on main thread
                }
            }
            //
            // Launch process and block background thread
            // until process complete.
            //
            process.launch()
            process.waitUntilExit()
            //
            // What do I do here to make sure all related
            // threads terminate?
            //
            outHandle.closeFile()              // XXX
            outHandle.readabilityHandler = nil // XXX
        }
    }
}
Everything works just dandy, but when the process quits (killed via process.terminate) I notice (via Xcode's Debug Navigator and the Console app) that there are multiple threads consuming 180% or more of the CPU!?!
Where is this CPU leak coming from?
I threw in outHandle.closeFile() (see the code marked XXX above) and that reduced the CPU usage down to just a few percent, but the threads still existed! What am I doing wrong, or how do I make sure all the related threads terminate? (I prefer graceful termination, i.e., the thread bodies finish executing.)
Someone posted a similar question almost 5 years ago!
UPDATE:
The documentation for NSFileHandle's readabilityHandler says:
To stop reading the file or socket, set the value of this property to
nil. Doing so cancels the dispatch source and cleans up the file
handle’s structures appropriately.
so setting outHandle.readabilityHandler = nil seems to solve the problem too.
Even though I have seemingly solved the problem, I really don't understand where this massive CPU leak comes from -- very mysterious.
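For what it's worth, one way to make that cleanup explicit is to hang it off the process's termination handler rather than the lines after waitUntilExit() (a sketch using the same process/outHandle names as the question; Process.terminationHandler is standard Foundation API, but wiring it up this way is my assumption, not something from the original post):
process.terminationHandler = { _ in
    // Setting the handler to nil cancels the dispatch source behind the
    // file handle, which is what lets its worker threads wind down.
    outHandle.readabilityHandler = nil
    outHandle.closeFile()
}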

Copas looping issue while connecting to the server

I am new to Lua programming and trying to implement lua-websocket on OpenWrt as a client. Here is the library.
I was trying to use the Copas client, but the issue is that the script stops listening to the server after executing once (i.e. connecting to the server, receiving a message, sending a message). I want the script to always listen to the server without any timeout or script halt.
Below is the script
local copas = require'copas'
local websocket = require'websocket'
local json = require('json')

local client = require 'websocket.client'.new()

local ok,err = client:connect('ws://192.168.1.250:8080')
if not ok then
    print('could not connect',err)
end

local ap_mac = { command = 'subscribe', channel = 'test' }
local ok = client:send(json.encode(ap_mac))
if ok then
    print('msg sent')
else
    print('connection closed')
end

local message,opcode = client:receive()
if message then
    print('msg',message,opcode)
else
    print('connection closed')
end

local replymessage = { command = 'message', message = 'TEST' }
local ok = client:send(json.encode(replymessage))
if ok then
    print('msg sent')
else
    print('connection closed')
end

copas.loop()
Here copas.loop() is not working.
On OpenWrt I have Lua 5.1 installed.
Short answer: you are not using Copas correctly.
In detail: copas.loop() does nothing, because you have created neither a Copas server nor a Copas thread. Check the Copas documentation.
The send and receive calls in your script are performed outside Copas, because they are not within a copas.addthread(function () ... end). You also create a websocket client that is not a Copas one, but a synchronous one (the default). Check the lua-websocket documentation and its examples.
The solution:
local copas = require'copas'
local websocket = require'websocket'
local json = require'cjson'

local function loop (client)
    while client.state == "OPEN" do
        local message, opcode = client:receive()
        ... -- handle message
        local replymessage = { command = 'message', message = 'TEST' }
        local ok, err = client:send(json.encode(replymessage))
        ... -- check ok, err
    end
end

local function init ()
    local client = websocket.client.copas ()
    local ok,err = client:connect('ws://192.168.1.250:8080')
    ... -- check ok, err
    local ap_mac = { command = 'subscribe', channel = 'test' }
    ok, err = client:send(json.encode(ap_mac))
    ... -- check ok, err
    copas.addthread (function ()
        loop (client)
    end)
end

copas.addthread (init)
copas.loop()
The init function instantiates a client for Copas. It also starts the main loop in a Copas thread that waits for incoming messages as long as the connection is open.
Before starting the Copas loop, do not forget to add a Copas thread for the init function.

ZMQ c# binding HWM not working

HWM does not seem to work in clrzmq 2.2.5.
Here's my code
private static ulong hwm = 50;

static void testMQ()
{
    var _Context = new Context(1);
    var pubSock = _Context.Socket(SocketType.PUB);
    pubSock.HWM = hwm;
    pubSock.Bind("tcp://*:9999");
    new Thread(testSub).Start();
    Thread.Sleep(1000); // client connect
    int i = 0;
    while (true)
    {
        pubSock.Send(i.ToString(), Encoding.ASCII);
        Debug.WriteLine(pubSock.Backlog + "/" + i++);
    }
}

static void testSub()
{
    var _ZmqCtx = new Context(1);
    var subSock = _ZmqCtx.Socket(SocketType.SUB);
    subSock.HWM = 500;
    subSock.Identity = new ASCIIEncoding().GetBytes("bla");
    subSock.Connect("tcp://127.0.0.1:9999");
    Debug.WriteLine("connected");
    subSock.Subscribe("", Encoding.ASCII);
    while (true)
    {
        Debug.WriteLine("r:" + subSock.Recv(Encoding.ASCII));
        Thread.Sleep(10);
    }
}
Output:
'quickies.vshost.exe' (Managed (v4.0.30319)):
Loaded 'B:\sdev\MSenseWS\GoogleImporter\bin\Debug\clrzmq.dll', Symbols loaded.
connected
r:0
100/0
100/1
100/2
[...]
100/13
r:1
100/14
[...]
100/2988
100/2989
100/2990
100/2991
100/2992
100/2993
100/2994
100/2995
100/2996
r:179
100/2997
100/2998
Expected behavior: pubSock.Send blocks after 500 messages are queued.
Experienced behavior: pubSock.Send does not block and keeps sending until an out-of-memory exception from native code (clrzmq.dll) is thrown.
Also: why is Backlog always 100?
Thanks for your insights,
Armin
Edit: PUSH/PULL sockets produce the same result.
Resolution:
- The error was on my side: I was expecting the HWM to be the number of outstanding messages that the client(s) have not yet committed (received), while in fact the HWM is the number of messages that are buffered and queued for sending over the network.
In my case I had a client that could not process messages fast enough, so buffer space was allocated until memory ran out.
To solve this problem I found that setting HWM and SWAP on the client socket works, as messages are queued to a large swap file by ZMQ and are successively processed by the application.
Ah, so I'm guessing you have the subscriber thread sleep, but that doesn't mean the underlying ZMQ socket threads also sleep. Therefore the subscriber will continue to take messages off the publisher queue. In other words, using Thread.Sleep() is probably not a good enough way to simulate limited network connectivity or other issues that you would expect to make the HWM kick in.

C++/Win. Not getting FD_CLOSE

I have an asynchronous socket; I call connect(), and WSAGetLastError() returns WSAEWOULDBLOCK, as expected. So I start a "receiving/reading" thread and subscribe an event to FD_READ and FD_CLOSE.
The story is: connect() will consistently fail, since the server is not up and running. My understanding is that my receiving thread should get FD_CLOSE soon, and that I then need to follow up with cleanup.
That does not happen. How soon should I receive FD_CLOSE? Is this the proper approach? Is there any other way to find out that connect() failed? Should I ever receive FD_CLOSE if the socket isn't connected?
I do start my receiving thread and subscribe the event after a successful call to DoConnect(), and I am afraid a race condition prevents me from getting FD_CLOSE.
Here is some code:
int RecvSocketThread::WaitForData()
{
    int retVal = 0;
    while (!retVal)
    {
        // sockets to pool can be added on other threads.
        // please validate that all of them in the pool are connected
        // before doing any reading on them
        retVal = DoWaitForData();
    }
}

int RecvSocketThread::DoWaitForData()
{
    // before waiting for incoming data, check if all sockets are connected
    WaitForPendingConnection_DoForAllSocketsInThePool();

    // other routine to read (FD_READ) or react to FD_CLOSE
    // create array of events (one per socket) and wait
}

void RecvSocketThread::WaitForPendingConnection_DoForAllSocketsInThePool()
{
    // create array and set it for events associated with pending connect sockets
    HANDLE* EventArray = NULL;
    int counter = 0;
    EventArray = new HANDLE[m_RecvSocketInfoPool.size()];

    // add those events whose associated socket is still not connected
    // and wait for FD_WRITE and FD_CLOSE. At the end of this function
    // don't forget to switch them to FD_READ and FD_CLOSE
    while (it != m_RecvSocketInfoPool.end())
    {
        RecvSocketInfo* recvSocketInfo = it->second;
        if (!IsEventSet(recvSocketInfo->m_Connected, &retVal2))
        {
            ::WSAEventSelect(recvSocketInfo->m_WorkerSocket, recvSocketInfo->m_Event, FD_WRITE | FD_CLOSE);
            EventArray[counter++] = recvSocketInfo->m_Event;
        }
        ++it;
    }

    if (counter)
    {
        DWORD indexSignaled = WaitForMultipleObjects(counter, EventArray, WaitAtLeastOneEvent, INFINITE);
        // no matter what happens later, Wait never returns for the socket that failed to connect
        if (WAIT_OBJECT_0 <= indexSignaled &&
            indexSignaled < (WAIT_OBJECT_0 + counter))
        {
            it = m_RecvSocketInfoPool.begin();
            while (it != m_RecvSocketInfoPool.end())
            {
                RecvSocketInfo* recvSocketInfo = it->second;
                if (IsEventSet(recvSocketInfo->m_Event, NULL))
                {
                    rc = WSAEnumNetworkEvents(recvSocketInfo->m_WorkerSocket,
                                              recvSocketInfo->m_Event, &networkEvents);

                    // Check recvSocketInfo->m_Event using WSAEnumNetworkEvents
                    // for FD_CLOSE using FD_CLOSE_BIT
                    if ((networkEvents.lNetworkEvents & FD_CLOSE))
                    {
                        recvSocketInfo->m_FD_CLOSE_Recieved = 1;
                        *retVal = networkEvents.iErrorCode[FD_CLOSE_BIT];
                    }

                    if ((networkEvents.lNetworkEvents & FD_WRITE))
                    {
                        WSASetEvent(recvSocketInfo->m_Connected);
                        *retVal = networkEvents.iErrorCode[FD_WRITE_BIT];
                    }
                }
                ++it;
            }
        }

        // if error - DoClean; if FD_WRITE (socket is writable), check m_Connected
        // before doing any sending
    }
}
You will not receive an FD_CLOSE notification if connect() fails. You must subscribe to FD_CONNECT to detect that. This is clearly stated in the connect() documentation:
With a nonblocking socket, the connection attempt cannot be completed
immediately. In this case, connect will return SOCKET_ERROR, and
WSAGetLastError will return WSAEWOULDBLOCK. In this case, there are
three possible scenarios:
•Use the select function to determine the completion of the
connection request by checking to see if the socket is writeable.
•If the application is using WSAAsyncSelect to indicate interest in
connection events, then the application will receive an FD_CONNECT
notification indicating that the connect operation is complete
(successfully or not).
•If the application is using WSAEventSelect to indicate interest in
connection events, then the associated event object will be signaled
indicating that the connect operation is complete (successfully or
not).
With WSAAsyncSelect, the result code of connect() will be in the message's HIWORD(lParam) value when LOWORD(lParam) is FD_CONNECT; with WSAEventSelect, it will be in the iErrorCode[FD_CONNECT_BIT] entry filled in by WSAEnumNetworkEvents. If the result code is 0, connect() was successful; otherwise it will be a WinSock error code.
If you call connect() and get a would-block notification, you have to write more code to monitor for connect() completion (success or failure) via one of the three methods described in the documentation quoted above.
I think I need to start the receiving thread once the socket handle is created, but before connect() is called. It is too late to create it after connect() has been called on an asynchronous socket.
For a synchronous socket, those two calls, createsocket() and connect(), were just two consecutive lines. That does not work for a non-blocking socket.
In this case, at the beginning of the receiving thread I need to check for FD_CONNECT and/or FD_WRITE in order to be informed of the connect attempt's status.
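A minimal sketch of that check (not from the original post; it assumes a single non-blocking socket s and an event from WSACreateEvent, and uses the same WSAEventSelect/WSAEnumNetworkEvents calls as the code above):
// Register for connect and close notifications around the connect() call.
WSAEVENT hEvent = WSACreateEvent();
WSAEventSelect(s, hEvent, FD_CONNECT | FD_CLOSE);

// ... connect(s, ...) returned SOCKET_ERROR with WSAEWOULDBLOCK ...

// Wait for the event to be signaled, then ask which network events fired.
WaitForSingleObject(hEvent, INFINITE);

WSANETWORKEVENTS networkEvents;
if (WSAEnumNetworkEvents(s, hEvent, &networkEvents) == 0 &&
    (networkEvents.lNetworkEvents & FD_CONNECT))
{
    int err = networkEvents.iErrorCode[FD_CONNECT_BIT];
    if (err == 0)
    {
        // connect() succeeded; switch to FD_READ | FD_CLOSE and start reading.
        WSAEventSelect(s, hEvent, FD_READ | FD_CLOSE);
    }
    else
    {
        // connect() failed (e.g. WSAECONNREFUSED); clean up this socket.
    }
}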

Resources