Resumed GCD socket source won't fire - macOS

I can't seem to accept TCP connections using Grand Central Dispatch sources (dispatch_source_t), and I can't understand what I could possibly be doing wrong. I've gone through a lot of material (Apple's own documentation, Mike Ash's blog, etc.) and I'm still nowhere close.
Of course, I could simply use a third-party library (there are plenty on GitHub), but I'm trying to learn from this. Thank you.
I know the server is listening on the port, since telnet 127.0.0.1 6666 times out (but never actually connects), whereas any other port fails immediately. The piece of code where I would accept() connections is never run.
class MyServer {
    func startServing(#port: UInt16 = 6666) {
        // Create a listening socket on TCP/IPv4.
        println("\nStarting TCP server.\nCreating a listening socket.")
        let listeningSocket = socket(AF_INET, SOCK_STREAM, 0 /*IPPROTO_TCP*/)
        if listeningSocket == -1 {
            println("Failed!")
            return
        }
        // Prepare a socket address.
        var no_sig_pipe: Int32 = 1
        setsockopt(listeningSocket, SOL_SOCKET, SO_NOSIGPIPE, &no_sig_pipe, socklen_t(sizeof(Int32)))
        // Working around Swift's strict initialization policies.
        var addr = sockaddr_in( sin_len: __uint8_t(sizeof(sockaddr_in))
                              , sin_family: sa_family_t(AF_INET)
                              , sin_port: CFSwapInt16HostToBig(port)
                              , sin_addr: in_addr(s_addr: inet_addr("0.0.0.0"))
                              , sin_zero: (0, 0, 0, 0, 0, 0, 0, 0) )
        var sock_addr = sockaddr( sa_len: 0
                                , sa_family: 0
                                , sa_data: (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) )
        memcpy(&sock_addr, &addr, UInt(sizeof(sockaddr_in)))
        // bind() the socket to the address.
        println("Binding socket to \(port).")
        if bind(listeningSocket, &sock_addr, socklen_t(sizeof(sockaddr_in))) == POSIXSocketError {
            println("Failed!")
            close(listeningSocket)
            return
        }
        // If we still have a working socket at this point...
        if listeningSocket >= 0 {
            println("Creating GCD Source.")
            connectionSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, UInt(listeningSocket), 0, dispatch_get_global_queue(0, 0) /*serverQueue*/)
            if connectionSource == .None {
                println("Failed!")
                close(listeningSocket)
                return
            }
            let myself = self
            dispatch_source_set_event_handler(connectionSource, {
                // *** This NEVER gets called!! ***
                dispatch_async(dispatch_get_main_queue(), { println("GCD Src fired.") })
                println("GCD Source triggered.")
                myself.acceptConnections(listeningSocket)
            })
            dispatch_resume(connectionSource)
        }
    } // end func
} // end class

It looks like you just forgot to call listen() after bind(). From Mike Ash's blog:
With the socket bound, the next step is to tell the system to listen
on it. This is done with, you guessed it, the listen function. It
takes two parameters: the socket to operate on, and the desired length
of the queue used for listening. This queue length tells the system
how many incoming connections you want it to sit on while trying to
hand those connections off to your program. Unless you have a good
reason to use something else, passing the SOMAXCONN constant gives you a safe,
large value.
So, after your bind call and error checking, and before you create your dispatch source, add something like:
if listen(listeningSocket, SOMAXCONN) != 0 {
    println("Listen Failed!")
    close(listeningSocket)
    return
}
I cobbled together a quick test with your code and the above change, and it fires off a slew of GCD Src fired. and GCD Source triggered. messages when a connection to the socket is made.
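For reference, here is a minimal, self-contained sketch of the same accept pattern written against GCD's C API, which is what the Swift code above calls into (compiles with clang++ on macOS, where blocks are enabled by default; the port mirrors the question and error handling is abbreviated):

// socket -> bind -> listen -> dispatch read source -> accept
#include <arpa/inet.h>
#include <dispatch/dispatch.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(6666);
    addr.sin_addr.s_addr = INADDR_ANY;
    if (bind(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) != 0 ||
        listen(fd, SOMAXCONN) != 0) {   // without listen() the source never fires
        perror("bind/listen");
        return 1;
    }
    dispatch_source_t source = dispatch_source_create(
        DISPATCH_SOURCE_TYPE_READ, fd, 0,
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
    dispatch_source_set_event_handler(source, ^{
        // For a listening socket, dispatch_source_get_data() holds the
        // number of pending connections; accept one per event.
        int client = accept(fd, nullptr, nullptr);
        if (client >= 0) {
            printf("accepted fd %d\n", client);
            close(client);
        }
    });
    dispatch_resume(source);
    dispatch_main(); // park the main thread in GCD
}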

Related

How to receive messages via wifi while running main program in ESP32?

I've incorporated multiple features I want in a microcontroller program (ESP32 WROOM-32) and need some advice on the best way to keep the program running and receive messages while it is running.
Current code:
//includes and declarations
setup()
{
    //set up wifi, server
}

main(){
    WiFiClient client = server.available();
    byte new_command[40];

    if (client) // If client object is created, a connection is set up
    {
        Serial.println("New wifi Client.");
        String currentLine = ""; //Used to print messages
        while (client.connected())
        {
            recv_byte = client.read();
            new_command = read_incoming(&recv_byte, client); //Returns received command and checks the format. If invalid, returns a zero array
            if (new_command[0] != 0) //None of the valid messages start with zero
            {
                execute_command(new_command);
                //new_command is set to zero
            }
        } //end of while loop
    } //end of if
}
The downside of this is that the ESP32 waits until the command has finished executing before it is ready to receive a new message. It is desired that the ESP32 receive commands, store them, and execute them at its own pace. I am planning to change the current code to receive messages while the code is running, as follows:
main()
{
    WiFiClient client = server.available();
    byte new_command[40];
    int command_count = 0;
    byte command_array[50][40];

    if (command_count != 0)
    {
        execute_command(command_array[0]);
        //Decrement command_count
        //Shift all commands in command_array by 1 row above
        //Set last executed command to zero
    }
} //end of main loop

def message_interrupt(int recv_byte, WiFiClient& running_client)
{
    If (running_client.connected())
    {
        recv_byte = running_client.read();
        new_command = read_incoming(&recv_byte, running_client); //Returns received command and check for format. If invalid, returns a 0 array
        //add new command to command_array after last command
        //increment command_count
    }
}
Which interrupt do I use to receive the message and update command_array? https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-guides/wifi.html doesn't mention any receive/transmit events. I couldn't find any receive/transmit interrupt either, or maybe I searched for the wrong term.
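One non-interrupt alternative (a sketch only, assuming the standard Arduino-ESP32 WiFi API; read_incoming() and execute_command() are the question's own helpers, with assumed signatures, and the port is made up) is to poll client.available() on each pass of loop(), so reads never block and queued commands execute at the loop's own pace:

#include <WiFi.h>
#include <string.h>

byte* read_incoming(int* recv_byte, WiFiClient& client); // question's helper (assumed signature)
void execute_command(byte* cmd);                         // question's helper

WiFiServer server(80);   // port assumed; setup() starts WiFi and calls server.begin()
WiFiClient client;

const int MAX_CMDS = 50;
byte command_array[MAX_CMDS][40];
int command_count = 0;

void loop()
{
    if (!client || !client.connected())
        client = server.available();   // pick up a new client if none is active

    // Drain bytes already received; available() never blocks.
    while (client && client.available() > 0 && command_count < MAX_CMDS)
    {
        int recv_byte = client.read();
        byte* new_command = read_incoming(&recv_byte, client);
        if (new_command[0] != 0)        // valid commands never start with zero
            memcpy(command_array[command_count++], new_command, 40);
    }

    // Execute at most one queued command per pass so receiving stays responsive.
    if (command_count > 0)
    {
        execute_command(command_array[0]);
        command_count--;
        memmove(command_array[0], command_array[1], command_count * 40);
    }
}

As far as the Arduino-ESP32 API exposes, there is no user-visible per-byte receive interrupt to hook: the WiFi stack runs in its own FreeRTOS task and buffers incoming data until you read it, so polling available() is the usual way to stay responsive.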

Using IRPs for I/O on device object returned by IoGetDeviceObjectPointer()

Can one use IoCallDriver() with an IRP created by IoBuildAsynchronousFsdRequest() on a device object returned by IoGetDeviceObjectPointer()? What I have currently fails with a blue screen (BSOD) 0x7E (unhandled exception), which, when caught, shows an access violation (0xC0000005). The same code worked when the device was stacked (using the device object returned by IoAttachDeviceToDeviceStack()).
So what I have is roughly the following:
status = IoGetDeviceObjectPointer(&device_name, FILE_ALL_ACCESS, &FileObject, &windows_device);
if (!NT_SUCCESS(status)) {
    return -1;
}
offset.QuadPart = 0;
newIrp = IoBuildAsynchronousFsdRequest(io, windows_device, buffer, 4096, &offset, &io_stat);
if (newIrp == NULL) {
    return -1;
}
IoSetCompletionRoutine(newIrp, DrbdIoCompletion, bio, TRUE, TRUE, TRUE);
status = ObReferenceObjectByPointer(newIrp->Tail.Overlay.Thread, THREAD_ALL_ACCESS, NULL, KernelMode);
if (!NT_SUCCESS(status)) {
    return -1;
}
status = IoCallDriver(bio->bi_bdev->windows_device, newIrp);
if (!NT_SUCCESS(status)) {
    return -1;
}
return 0;
device_name is \Device\HarddiskVolume7, which exists according to WinObj.exe.
buffer has enough space and is readable/writable. offset and io_stat are on the stack (I also tried the heap; it didn't help). When catching the (SEH) exception it doesn't blue screen, but it shows an access violation as the reason for the exception. io is IRP_MJ_READ.
Am I missing something obvious? Is it in general better to use IRPs than the ZwCreateFile / ZwReadFile / ZwWriteFile API (which would be an option, but isn't that slower)? I also tried a ZwCreateFile to hold an extra reference, but that didn't help either.
Thanks for any insights.
You make at least two critical errors in this code.
First: which file are you trying to read (or write)? The one behind FileObject, you say? But how is the file system driver that handles this request supposed to know that? You never pass a file object to newIrp. Look at IoBuildAsynchronousFsdRequest: it has no file object parameter (and it is impossible to get a file object from a device object, only the other way around, because multiple files can be open on a device), so this API cannot fill it in for you. You must set it up yourself:
PIO_STACK_LOCATION irpSp = IoGetNextIrpStackLocation(newIrp);
irpSp->FileObject = FileObject;
I guess the crash happens exactly when the file system tries to access the FileObject from the IRP, which is 0 in your case. Also read the docs for IRP_MJ_READ: IrpSp->FileObject is a "pointer to the file object that is associated with DeviceObject".
Second: you pass the local variables io_stat (and offset) to IoBuildAsynchronousFsdRequest. io_stat must remain valid until newIrp is completed, because the I/O subsystem writes the final result to it when the operation completes. But you do not wait in the function for the request to complete (when STATUS_PENDING is returned); you just return. So if the operation completes asynchronously, the I/O subsystem later writes to &io_stat, which becomes an arbitrary stack address the moment you return. You can either check for STATUS_PENDING and wait in that case (giving you an effectively synchronous request; it would be more logical to use IoBuildSynchronousFsdRequest then), or allocate io_stat not on the stack but, say, inside the object that corresponds to the file (in which case you cannot have more than a single I/O request in flight per object at a time). Or, if you want genuinely asynchronous I/O, you can use the following trick: newIrp->UserIosb = &newIrp->IoStatus. That way the iosb is valid for exactly as long as newIrp is, and you check the actual operation status in DrbdIoCompletion.
Also, can you explain (not to me, to yourself) this line:
status = ObReferenceObjectByPointer(newIrp->Tail.Overlay.Thread, THREAD_ALL_ACCESS, NULL, KernelMode);
Who dereferences the thread, where, and what is the point of this?
Can one use ...
We can use anything, on one condition: we understand what we are doing and deeply understand the system internals.
Is it in general better to use IRPs than the ZwCreateFile / ZwReadFile
/ ZwWriteFile API
For performance, yes, it is better, but it requires more code, and more complex code, than the API calls, and it requires more knowledge. Also, if you know that the previous mode is kernel mode, you can use NtCreateFile, NtWriteFile and NtReadFile; these are of course still a bit slow (the file object must be referenced by handle on every call) but faster than the Zw versions.
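For comparison, the Zw-based synchronous read path mentioned above could look roughly like this (a sketch only: kernel mode, reusing the question's volume name and 4096-byte buffer; note that FILE_NO_INTERMEDIATE_BUFFERING requires a sector-aligned buffer, offset, and length):

// Sketch: open by name and read synchronously via the Zw API.
UNICODE_STRING name = RTL_CONSTANT_STRING(L"\\Device\\HarddiskVolume7");
OBJECT_ATTRIBUTES oa;
InitializeObjectAttributes(&oa, &name, OBJ_CASE_INSENSITIVE | OBJ_KERNEL_HANDLE, NULL, NULL);

HANDLE hFile;
IO_STATUS_BLOCK iosb;
NTSTATUS status = ZwCreateFile(&hFile, FILE_READ_DATA | SYNCHRONIZE, &oa, &iosb,
                               NULL, 0, FILE_SHARE_VALID_FLAGS, FILE_OPEN,
                               FILE_SYNCHRONOUS_IO_NONALERT | FILE_NO_INTERMEDIATE_BUFFERING,
                               NULL, 0);
if (NT_SUCCESS(status)) {
    LARGE_INTEGER offset = {};
    status = ZwReadFile(hFile, NULL, NULL, NULL, &iosb,
                        buffer, 4096, &offset, NULL); // blocks until the read completes
    ZwClose(hFile);
}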
Just wanted to add that the ObReferenceObjectByPointer is needed
because the IRP references the current thread, which may exit before
the request is completed. It is dereferenced in the completion
routine. Also, as a hint: the completion routine must return
STATUS_MORE_PROCESSING_REQUIRED if it frees the IRP (took me several
days to figure that out).
Here you again make several mistakes. As I understand it, your completion routine does this:
IoFreeIrp(Irp);
return StopCompletion;
But simply calling IoFreeIrp here is an error: a resource leak. I advise you to check (DbgPrint) Irp->MdlAddress at this point. If you read data from a file system object and the request completes asynchronously, the file system always allocates an MDL so the user buffer can be accessed in arbitrary context. Now the question: who frees this MDL? IoFreeIrp simply frees the IRP's memory, nothing more. Do you free the MDL yourself? I doubt it. An IRP is a complex object which internally holds many resources, so it is not enough to free its memory; you must also run its "destructor", and that destructor is IofCompleteRequest. By returning StopCompletion (STATUS_MORE_PROCESSING_REQUIRED) you interrupt this destructor at the very beginning, so you must later call IofCompleteRequest again to let the IRP (and its resources) be destroyed correctly.
About referencing Tail.Overlay.Thread: what you are doing makes no sense.
It is dereferenced in the Completion Routine.
But IofCompleteRequest accesses Tail.Overlay.Thread after it calls your completion routine (and only if you do not return StopCompletion). So your reference/dereference pair is pointless: you drop the reference too early, before the system actually accesses the thread.
Also, if you return StopCompletion and never call IofCompleteRequest again for this IRP, the system does not access Tail.Overlay.Thread at all, and you do not need to reference it in that case.
And there is one more reason why referencing the thread is pointless. The system accesses Tail.Overlay.Thread only to queue an APC to it, which runs the final part of IRP destruction (IopCompleteRequest) in the original thread's context. This is really needed only for user-mode IRP requests, where the buffers and the iosb live in user mode and are valid only in the context of the original process (thread). But if the thread has terminated, the KeInsertQueueApc call fails - the system does not let you queue an APC to a dead thread - so IopCompleteRequest is never called and the resources are never freed.
So you either dereference Tail.Overlay.Thread too early, or you do not need to reference it at all; and referencing a dead thread does not help anyway. In every case, what you are doing is an error.
You could try the following here:
PETHREAD Thread = Irp->Tail.Overlay.Thread;
IofCompleteRequest(Irp, IO_NO_INCREMENT); // here Thread will be referenced
ObfDereferenceObject(Thread);
return StopCompletion;
A second call to IofCompleteRequest causes the I/O manager to resume completing the IRP; this is where it accesses Tail.Overlay.Thread and queues the APC to it. You then call ObfDereferenceObject(Thread) only after the system has accessed the thread, and return StopCompletion to break out of the first IofCompleteRequest call. This looks correct, but if the thread has already terminated it is still an error, as explained above, because KeInsertQueueApc fails. As an extended test, call IofCallDriver from a separate thread and let that thread exit immediately, then run the following in the completion routine:
PETHREAD Thread = Irp->Tail.Overlay.Thread;
if (PsIsThreadTerminating(Thread))
{
    DbgPrint("ThreadTerminating\n");
    if (PKAPC Apc = (PKAPC)ExAllocatePool(NonPagedPool, sizeof(KAPC)))
    {
        KeInitializeApc(Apc, Thread, 0, KernelRoutine, 0, 0, KernelMode, 0);
        if (!KeInsertQueueApc(Apc, 0, 0, IO_NO_INCREMENT))
        {
            DbgPrint("!KeInsertQueueApc\n");
            ExFreePool(Apc);
        }
    }
}
PMDL MdlAddress = Irp->MdlAddress;
IofCompleteRequest(Irp, IO_NO_INCREMENT);
ObfDereferenceObject(Thread);
if (MdlAddress == Irp->MdlAddress)
{
    // IopCompleteRequest was not called because KeInsertQueueApc failed
    DbgPrint("!!!!!!!!!!!\n");
    IoFreeMdl(MdlAddress);
    IoFreeIrp(Irp);
}
return StopCompletion;
//---------------
VOID KernelRoutine (PKAPC Apc, PKNORMAL_ROUTINE*, PVOID*, PVOID*, PVOID*)
{
    DbgPrint("KernelRoutine(%p)\n", Apc);
    ExFreePool(Apc);
}
You should get the following debug output:
ThreadTerminating
!KeInsertQueueApc
!!!!!!!!!!!
and KernelRoutine is never called (just like IopCompleteRequest): no print from it.
So what is the correct solution? It is of course not documented anywhere, but it follows from a deep understanding of the internals. You do not need to reference the original thread at all; you need to do this:
Irp->Tail.Overlay.Thread = KeGetCurrentThread();
return ContinueCompletion;
You can safely change Tail.Overlay.Thread if you hold no pointers that are valid only in the original process context. That is true for kernel-mode requests: all your buffers are in kernel mode and valid in any context. And of course you should not break the IRP's destruction but let it continue, so the MDL and all the other IRP resources are freed correctly; in the end the system calls IoFreeIrp for you.
And again about the iosb pointer: as I said, passing the address of a local variable is an error if you return from the function before the IRP is completed (and the iosb written). If you break the IRP's destruction, the iosb is of course never accessed, but in that case it is much better to pass a 0 pointer as the iosb. (If you later change something and the iosb pointer does get accessed after all, you get the worst kind of error: arbitrary memory corruption with unpredictable effects, and such crashes are very, very hard to investigate.) And if you have a completion routine, you do not need a separate iosb at all: in the completion routine you have the IRP and can access its internal iosb directly, so what would you need another one for? The best solution is therefore:
Irp->UserIosb = &Irp->IoStatus;
A full, correct example of reading a file asynchronously:
NTSTATUS DemoCompletion (PDEVICE_OBJECT /*DeviceObject*/, PIRP Irp, BIO* bio)
{
    DbgPrint("DemoCompletion(p=%x mdl=%p)\n", Irp->PendingReturned, Irp->MdlAddress);
    bio->CheckResult(Irp->IoStatus.Status, Irp->IoStatus.Information);
    bio->Release();
    Irp->Tail.Overlay.Thread = KeGetCurrentThread();
    return ContinueCompletion;
}

VOID DoTest (PVOID buf)
{
    PFILE_OBJECT FileObject;
    NTSTATUS status;
    UNICODE_STRING ObjectName = RTL_CONSTANT_STRING(L"\\Device\\HarddiskVolume2");
    OBJECT_ATTRIBUTES oa = { sizeof(oa), 0, &ObjectName, OBJ_CASE_INSENSITIVE };

    if (0 <= (status = GetDeviceObjectPointer(&oa, &FileObject)))
    {
        status = STATUS_INSUFFICIENT_RESOURCES;
        if (BIO* bio = new BIO(FileObject))
        {
            if (buf = bio->AllocBuffer(PAGE_SIZE))
            {
                LARGE_INTEGER ByteOffset = {};
                PDEVICE_OBJECT DeviceObject = IoGetRelatedDeviceObject(FileObject);
                if (PIRP Irp = IoBuildAsynchronousFsdRequest(IRP_MJ_READ, DeviceObject, buf, PAGE_SIZE, &ByteOffset, 0))
                {
                    Irp->UserIosb = &Irp->IoStatus;
                    Irp->Tail.Overlay.Thread = 0;
                    PIO_STACK_LOCATION IrpSp = IoGetNextIrpStackLocation(Irp);
                    IrpSp->FileObject = FileObject;
                    bio->AddRef();
                    IrpSp->CompletionRoutine = (PIO_COMPLETION_ROUTINE)DemoCompletion;
                    IrpSp->Context = bio;
                    IrpSp->Control = SL_INVOKE_ON_CANCEL|SL_INVOKE_ON_ERROR|SL_INVOKE_ON_SUCCESS;
                    status = IofCallDriver(DeviceObject, Irp);
                }
            }
            bio->Release();
        }
        ObfDereferenceObject(FileObject);
    }
    DbgPrint("DoTest=%x\n", status);
}

struct BIO
{
    PVOID Buffer;
    PFILE_OBJECT FileObject;
    LONG dwRef;

    void AddRef()
    {
        InterlockedIncrement(&dwRef);
    }

    void Release()
    {
        if (!InterlockedDecrement(&dwRef))
        {
            delete this;
        }
    }

    void* operator new(size_t cb)
    {
        return ExAllocatePool(PagedPool, cb);
    }

    void operator delete(void* p)
    {
        ExFreePool(p);
    }

    BIO(PFILE_OBJECT FileObject) : FileObject(FileObject), Buffer(0), dwRef(1)
    {
        DbgPrint("%s<%p>(%p)\n", __FUNCTION__, this, FileObject);
        ObfReferenceObject(FileObject);
    }

    ~BIO()
    {
        if (Buffer)
        {
            ExFreePool(Buffer);
        }
        ObfDereferenceObject(FileObject);
        DbgPrint("%s<%p>(%p)\n", __FUNCTION__, this, FileObject);
    }

    PVOID AllocBuffer(ULONG NumberOfBytes)
    {
        return Buffer = ExAllocatePool(PagedPool, NumberOfBytes);
    }

    void CheckResult(NTSTATUS status, ULONG_PTR Information)
    {
        DbgPrint("CheckResult:status = %x, info = %p\n", status, Information);
        if (0 <= status)
        {
            if (ULONG_PTR cb = min(16, Information))
            {
                char buf[64], *sz = buf;
                PBYTE pb = (PBYTE)Buffer;
                do sz += sprintf(sz, "%02x ", *pb++); while (--cb);
                sz[-1] = '\n';
                DbgPrint(buf);
            }
        }
    }
};

NTSTATUS GetDeviceObjectPointer(POBJECT_ATTRIBUTES poa, PFILE_OBJECT *FileObject)
{
    HANDLE hFile;
    IO_STATUS_BLOCK iosb;
    NTSTATUS status = IoCreateFile(&hFile, FILE_READ_DATA, poa, &iosb, 0, 0,
        FILE_SHARE_VALID_FLAGS, FILE_OPEN, FILE_NO_INTERMEDIATE_BUFFERING, 0, 0, CreateFileTypeNone, 0, 0);
    if (0 <= status)
    {
        status = ObReferenceObjectByHandle(hFile, 0, *IoFileObjectType, KernelMode, (void**)FileObject, 0);
        NtClose(hFile);
    }
    return status;
}
And the output:
BIO::BIO<FFFFC000024D4870>(FFFFE00001BAAB70)
DoTest=103
DemoCompletion(p=1 mdl=FFFFE0000200EE70)
CheckResult:status = 0, info = 0000000000001000
eb 52 90 4e 54 46 53 20 20 20 20 00 02 08 00 00
BIO::~BIO<FFFFC000024D4870>(FFFFE00001BAAB70)
The bytes eb 52 90 followed by 4e 54 46 53 ("NTFS") are the start of an NTFS boot sector, so the read worked.

Duplex named pipe hangs on a certain write

I have a C++ pipe server app and a C# pipe client app communicating via Windows named pipe (duplex, message mode, wait/blocking in separate read thread).
It all works fine (both sending and receiving data via the pipe) until I try to write to the pipe from the client in response to a form's TextChanged event. When I do this, the client hangs on the pipe write call (or on the flush call if AutoFlush is off). Breaking into the server app reveals that it is likewise waiting on the pipe ReadFile call and not returning.
I tried running the client write on another thread -- same result.
I suspect some sort of deadlock or race condition but can't see where... I don't think I'm writing to the pipe from two places simultaneously.
Update 1: I tried pipes in byte mode instead of message mode - same lockup.
Update 2: Strangely, if (and only if) I pump lots of data from the server to the client, it cures the lockup!?
Server code:
DWORD ReadMsg(char* aBuff, int aBuffLen, int& aBytesRead)
{
    DWORD byteCount;
    if (ReadFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
    {
        aBytesRead = (int)byteCount;
        aBuff[byteCount] = 0;
        return ERROR_SUCCESS;
    }
    return GetLastError();
}

DWORD SendMsg(const char* aBuff, unsigned int aBuffLen)
{
    DWORD byteCount;
    if (WriteFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
    {
        return ERROR_SUCCESS;
    }
    mClientConnected = false;
    return GetLastError();
}

DWORD CommsThread()
{
    while (1)
    {
        std::string fullPipeName = std::string("\\\\.\\pipe\\") + mPipeName;
        mPipe = CreateNamedPipeA(fullPipeName.c_str(),
                                 PIPE_ACCESS_DUPLEX,
                                 PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                                 PIPE_UNLIMITED_INSTANCES,
                                 KTxBuffSize, // output buffer size
                                 KRxBuffSize, // input buffer size
                                 5000,        // client time-out ms
                                 NULL);       // no security attribute
        if (mPipe == INVALID_HANDLE_VALUE)
            return 1;
        mClientConnected = ConnectNamedPipe(mPipe, NULL) ? TRUE : (GetLastError() == ERROR_PIPE_CONNECTED);
        if (!mClientConnected)
            return 1;

        char rxBuff[KRxBuffSize+1];
        DWORD error = 0;
        while (mClientConnected)
        {
            Sleep(1);
            int bytesRead = 0;
            error = ReadMsg(rxBuff, KRxBuffSize, bytesRead);
            if (error == ERROR_SUCCESS)
            {
                rxBuff[bytesRead] = 0; // terminate string.
                if (mMsgCallback && bytesRead > 0)
                    mMsgCallback(rxBuff, bytesRead, mCallbackContext);
            }
            else
            {
                mClientConnected = false;
            }
        }
        Close();
        Sleep(1000);
    }
    return 0;
}
Client code:
public void Start(string aPipeName)
{
    mPipeName = aPipeName;
    mPipeStream = new NamedPipeClientStream(".", mPipeName, PipeDirection.InOut, PipeOptions.None);
    Console.Write("Attempting to connect to pipe...");
    mPipeStream.Connect();
    Console.WriteLine("Connected to pipe '{0}' ({1} server instances open)", mPipeName, mPipeStream.NumberOfServerInstances);
    mPipeStream.ReadMode = PipeTransmissionMode.Message;
    mPipeWriter = new StreamWriter(mPipeStream);
    mPipeWriter.AutoFlush = true;
    mReadThread = new Thread(new ThreadStart(ReadThread));
    mReadThread.IsBackground = true;
    mReadThread.Start();
    if (mConnectionEventCallback != null)
    {
        mConnectionEventCallback(true);
    }
}

private void ReadThread()
{
    byte[] buffer = new byte[1024 * 400];
    while (true)
    {
        int len = 0;
        do
        {
            len += mPipeStream.Read(buffer, len, buffer.Length);
        } while (len > 0 && !mPipeStream.IsMessageComplete);
        if (len == 0)
        {
            OnPipeBroken();
            return;
        }
        if (mMessageCallback != null)
        {
            mMessageCallback(buffer, len);
        }
        Thread.Sleep(1);
    }
}

public void Write(string aMsg)
{
    try
    {
        mPipeWriter.Write(aMsg);
        mPipeWriter.Flush();
    }
    catch (Exception)
    {
        OnPipeBroken();
    }
}
If you are using separate threads, you will be unable to read from the pipe at the same time you write to it. For example, if you are doing a blocking read from the pipe and then issue a blocking write from a different thread, the write call will wait/block until the read call has completed; in many cases, if this behavior is unexpected, your program becomes deadlocked.
I have not tested overlapped I/O, but it MAY be able to resolve this issue. However, if you are determined to use synchronous calls, the models below may help you solve the problem.
Master/Slave
You could implement a master/slave model in which either the client or the server is the master and the other end only responds, which is generally what you will find in the MSDN examples.
In some cases you may find this problematic when the slave periodically needs to send data to the master. You must either use an external signaling mechanism (outside the pipe), have the master periodically query/poll the slave, or swap the roles so that the client is the master and the server is the slave.
Writer/Reader
You could use a writer/reader model with two different pipes. However, you must somehow associate those two pipes if you have multiple clients, since each pipe will have a different handle. You could do this by having the client send a unique identifier value on connecting to each pipe, which then lets the server associate the two pipes, as sketched below. This identifier could be the current system time or a globally or locally unique identifier.
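A sketch of that pairing step (hypothetical names, assuming each client sends the same identifier as its first message on both pipes):

#include <windows.h>
#include <map>
#include <string>

struct ClientPipes { HANDLE toClient = INVALID_HANDLE_VALUE; HANDLE fromClient = INVALID_HANDLE_VALUE; };
std::map<std::string, ClientPipes> gClients; // keyed by the client-chosen ID

// Called once per newly connected pipe instance, after reading the client's
// ID as the first message on that pipe.
void RegisterPipe(const std::string& clientId, HANDLE pipe, bool clientReadsFromIt)
{
    ClientPipes& c = gClients[clientId];  // the first pipe creates the entry
    (clientReadsFromIt ? c.toClient : c.fromClient) = pipe;
    // Once both handles are set, the pair is complete and can be handed to
    // this client's reader/writer threads.
}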
Threads
If you are determined to use the synchronous API, you can combine threads with the master/slave model so that you are not blocked while waiting for a message on the slave side. You will, however, want to lock the reader after it reads a message (or reaches the end of a series of messages), write the response (as the slave should), and finally unlock the reader. Lock and unlock the reader using mechanisms that put the thread to sleep, as these are most efficient.
Security Problem With TCP
The biggest drawback of going with TCP instead of named pipes is security. A TCP stream contains no security natively, so if security is a concern you will have to implement it yourself, and you risk creating a security hole since you would be handling authentication yourself. A named pipe can provide security if you set its parameters properly. And to state it again clearly: security is no simple matter, and generally you will want to use existing facilities that have been designed to provide it.
I think you may be running into problems with named pipes' message mode. In this mode, each write to the kernel pipe handle constitutes one message. This doesn't necessarily correspond to what your application regards as a message, and a [pipe] message may be bigger than your read buffer.
This means that your pipe-reading code needs two loops: the inner one reading until the current [named pipe] message has been completely received, and the outer one looping until your [application-level] message has been received.
Your C# client code does have a correct inner loop, reading again if IsMessageComplete is false:
do
{
    len += mPipeStream.Read(buffer, len, buffer.Length);
} while (len>0 && !mPipeStream.IsMessageComplete);
Your C++ server code doesn't have such a loop; the equivalent at the Win32 API level is testing for the ERROR_MORE_DATA return code, as sketched below.
My guess is that somehow this leads to the client waiting for the server to read on one pipe instance, whilst the server is waiting for the client to write on another pipe instance.
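For illustration, ReadMsg reworked with such a loop might look like this (a sketch only, keeping the question's member names; a production version would grow the buffer instead of giving up):

DWORD ReadMsg(char* aBuff, int aBuffLen, int& aBytesRead)
{
    aBytesRead = 0;
    for (;;)
    {
        DWORD byteCount = 0;
        BOOL ok = ReadFile(mPipe, aBuff + aBytesRead, aBuffLen - aBytesRead, &byteCount, NULL);
        aBytesRead += (int)byteCount;
        if (ok)
        {
            aBuff[aBytesRead] = 0;   // caller's buffer is KRxBuffSize+1 bytes
            return ERROR_SUCCESS;    // whole message received
        }
        DWORD error = GetLastError();
        if (error != ERROR_MORE_DATA)
            return error;            // genuine failure (or pipe closed)
        if (aBytesRead >= aBuffLen)
            return ERROR_MORE_DATA;  // message larger than the whole buffer
    }
}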
It seems to me that what you are trying to do will not work as expected. Some time ago I tried to do something that looked like your code and got similar results: the pipe just hung,
and it was difficult to establish what had gone wrong.
I would rather suggest using the client in a very simple way:
CreateFile
Write request
Read answer
Close pipe.
If you want two-way communication with clients that are also able to receive unrequested data from the server, you should
rather implement two servers. This is the workaround I used: here you can find the sources.

In Win32, is there a way to test if a socket is non-blocking?

In Win32, is there a way to test if a socket is non-blocking?
Under POSIX systems, I'd do something like the following:
int is_non_blocking(int sock_fd) {
    int flags = fcntl(sock_fd, F_GETFL, 0);
    return flags & O_NONBLOCK;
}
However, Windows sockets don't support fcntl(). The non-blocking mode is set using ioctl with FIONBIO, but there doesn't appear to be a way to get the current non-blocking mode using ioctl.
Is there some other call on Windows that I can use to determine if the socket is currently in non-blocking mode?
A slightly longer answer would be: No, but you will usually know whether or not it is, because it is relatively well-defined.
All sockets are blocking unless you explicitly ioctlsocket() them with FIONBIO or hand them to either WSAAsyncSelect or WSAEventSelect. The latter two functions "secretly" change the socket to non-blocking.
Since you know whether you have called one of those 3 functions, even though you cannot query the status, it is still known. The obvious exception is if that socket comes from some 3rd party library of which you don't know what exactly it has been doing to the socket.
Sidenote: Funnily, a socket can be blocking and overlapped at the same time, which does not immediately seem intuitive, but it kind of makes sense because they come from opposite paradigms (readiness vs completion).
Previously, you could call WSAIsBlocking to determine this. If you are managing legacy code, this may still be an option.
Otherwise, you could write a simple abstraction layer over the socket API. Since all sockets are blocking by default, you could maintain an internal flag and force all socket ops through your API so you always know the state.
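A minimal sketch of such an abstraction layer (hypothetical class name; the cached flag stays authoritative because every mode change is forced through the wrapper):

#include <winsock2.h>

class TrackedSocket
{
public:
    explicit TrackedSocket(SOCKET s) : mSock(s), mBlocking(true) {} // winsock default

    bool SetBlocking(bool blocking)
    {
        u_long nonBlocking = blocking ? 0 : 1;
        if (ioctlsocket(mSock, FIONBIO, &nonBlocking) != NO_ERROR)
            return false;            // mode unchanged, flag stays valid
        mBlocking = blocking;
        return true;
    }

    bool IsBlocking() const { return mBlocking; } // no syscall needed

private:
    SOCKET mSock;
    bool   mBlocking;
};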
Here is a cross-platform snippet to set/get the blocking mode, although it doesn't do exactly what you want:
/// #author Stephen Dunn
/// #date 10/12/15
bool set_blocking_mode(const int &socket, bool is_blocking)
{
bool ret = true;
#ifdef WIN32
/// #note windows sockets are created in blocking mode by default
// currently on windows, there is no easy way to obtain the socket's current blocking mode since WSAIsBlocking was deprecated
u_long flags = is_blocking ? 0 : 1;
ret = NO_ERROR == ioctlsocket(socket, FIONBIO, &flags);
#else
const int flags = fcntl(socket, F_GETFL, 0);
if ((flags & O_NONBLOCK) && !is_blocking) { info("set_blocking_mode(): socket was already in non-blocking mode"); return ret; }
if (!(flags & O_NONBLOCK) && is_blocking) { info("set_blocking_mode(): socket was already in blocking mode"); return ret; }
ret = 0 == fcntl(socket, F_SETFL, is_blocking ? flags ^ O_NONBLOCK : flags | O_NONBLOCK);
#endif
return ret;
}
I agree with the accepted answer: there is no official way to determine the blocking state of a socket on Windows. If you get a socket from a third party (say you are a TLS library and receive the socket from the upper layer), you cannot decide whether it is in blocking state or not.
Despite this, I have a working, unofficial and limited solution to the problem which has worked for me for a long time.
I attempt to read 0 bytes from the socket. If it is a blocking socket this returns 0; if it is a non-blocking one it returns -1 and GetLastError() equals WSAEWOULDBLOCK.
int IsBlocking(SOCKET s)
{
    int r = 0;
    unsigned char b[1];
    r = recv(s, (char*)b, 0, 0);
    if (r == 0)
        return 1;
    else if (r == -1 && GetLastError() == WSAEWOULDBLOCK)
        return 0;
    return -1; /* In case it is a connection socket (TCP) and it is not in a connected state you will get here 10060 */
}
Caveats:
Works with UDP sockets
Works with connected TCP sockets
Doesn't work with unconnected TCP sockets

Monitoring UDP socket in glib(mm) eats up CPU time

I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock, and I use a glibmm IOChannel to connect it to the application's main loop. The socket is read with recvfrom.
My problem is: this setup eats 25% CPU time on a 3 GHz workstation. Can somebody tell me why?
The application is idle in this case, and if I remove the UDP code, CPU usage drops to almost zero. As the application has to perform some CPU-intensive tasks, I can imagine better ways to spend that 25%.
Here are some code excerpts: (sorry for the printf's ;) )
/* bind */
void UDPInterface::bindToPort(unsigned short port)
{
    struct sockaddr_in target;
    WSADATA wsaData;

    target.sin_family = AF_INET;
    target.sin_port = htons(port);
    target.sin_addr.s_addr = 0;

    if (WSAStartup(0x0202, &wsaData))
    {
        printf("WSAStartup failed!\n");
        exit(0); // :)
        WSACleanup();
    }

    sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock == INVALID_SOCKET)
    {
        printf("invalid socket!\n");
        exit(0);
    }

    if (bind(sock, (struct sockaddr*)&target, sizeof(struct sockaddr_in)) == SOCKET_ERROR)
    {
        printf("failed to bind to port!\n");
        exit(0);
    }

    printf("[UDPInterface::bindToPort] listening on port %i\n", port);
}
/* read */
bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
{
    recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
    /* process packet... */
    return true; // keep the source connected (the bool handler must return a value)
}

/* glibmm connect */
Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
Glib::signal_io().connect(sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN);
I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode, and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU usage? It's not clear to me.
Whether or not I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since the CPU is used up before any packet arrives and the read handler gets invoked. Also, the glibmm docs state that it's OK to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()).
I've tried compiling the program with -pg and creating a per-function CPU usage report with gprof. This wasn't useful, because the time is not spent in my program but in some external glib/glibmm DLL.
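One thing worth trying (a sketch only, assuming a C++11-capable toolchain; Glib::Dispatcher is glibmm's documented way to wake the main loop from a worker thread) is to bypass the IOChannel entirely: read the socket with a blocking recvfrom() on a dedicated thread and notify the main loop per packet. Whether this removes the 25% load would need measuring, but it takes the nonblocking polling path out of the picture:

#include <glibmm.h>
#include <thread>
#include <winsock2.h>

class UDPReader
{
public:
    explicit UDPReader(SOCKET s) : mSock(s)
    {
        // Construct/connect in the main-loop thread, as Glib::Dispatcher requires.
        mDispatcher.connect(sigc::mem_fun(*this, &UDPReader::onPacket));
        mThread = std::thread([this] { readLoop(); });
    }

private:
    void readLoop()
    {
        char buf[2048];
        for (;;)
        {
            int n = recvfrom(mSock, buf, sizeof(buf), 0, NULL, NULL); // blocks, no busy-wait
            if (n < 0)
                break;          // socket closed or error: end the thread
            // (a real implementation would push the payload onto a locked queue here)
            mDispatcher.emit(); // wake the GTK main loop
        }
    }

    void onPacket()
    {
        // Runs in the main loop: pop the queue and process the packet.
    }

    SOCKET mSock;
    Glib::Dispatcher mDispatcher;
    std::thread mThread;
};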
