I have a multi-threaded gSOAP service running with HTTP keep-alive enabled. How can I gracefully shut down the service while clients are still connected?
A similar question was asked in gSoap: how to gracefully shutdown the webservice application?, but the answers do not cover the HTTP keep-alive aspect: the soap_serve function simply does not return until the keep-alive session has been closed by the client. Thus, step 2 in the accepted answer will block until the client decides to close the connection (or the receive timeout expires, but a short timeout would break the desired keep-alive behaviour here).
The examples from the gSOAP documentation suffer from the same problem.
What I have tried so far is to call soap_done() from the main thread for all soap structs that are stuck in a soap_serve call, to interrupt the connections waiting for keep-alive requests. This works most of the time, but crashes under rare conditions (possibly a race condition), so it is not a solution for me.
I just ran into the very same problem and I think I've got a solution for you.
As you said, the problem is that gSOAP hangs in soap_serve. This happens because gSOAP generates an internal loop that keeps waiting for further keep-alive requests until either the client closes the connection or a server-side timeout occurs.
What I did was take the soap_serve function from the automatically generated service stub. Here is the original soap_serve function so that you can find it in your service stub file:
SOAP_FMAC5 int SOAP_FMAC6 soap_serve(struct soap *soap)
{
#ifndef WITH_FASTCGI
    unsigned int k = soap->max_keep_alive;
#endif
    do
    {
#ifdef WITH_FASTCGI
        if (FCGI_Accept() < 0)
        {
            soap->error = SOAP_EOF;
            return soap_send_fault(soap);
        }
#endif
        soap_begin(soap);
#ifndef WITH_FASTCGI
        if (soap->max_keep_alive > 0 && !--k)
            soap->keep_alive = 0;
#endif
        if (soap_begin_recv(soap))
        {
            if (soap->error < SOAP_STOP)
            {
#ifdef WITH_FASTCGI
                soap_send_fault(soap);
#else
                return soap_send_fault(soap);
#endif
            }
            soap_closesock(soap);
            continue;
        }
        if (soap_envelope_begin_in(soap)
         || soap_recv_header(soap)
         || soap_body_begin_in(soap)
         || soap_serve_request(soap)
         || (soap->fserveloop && soap->fserveloop(soap)))
        {
#ifdef WITH_FASTCGI
            soap_send_fault(soap);
#else
            return soap_send_fault(soap);
#endif
        }
#ifdef WITH_FASTCGI
        soap_destroy(soap);
        soap_end(soap);
    } while (1);
#else
    } while (soap->keep_alive);
#endif
    return SOAP_OK;
}
You should extract the body of this function and replace your old soap_serve(mySoap) call inside your thread (the thread that performs the requests and hangs because of the keep-alive) with the following:
unsigned int k = mySoap->max_keep_alive; // same counter the generated soap_serve uses
do
{
    if (Server::mustShutdown()) {
        break;
    }
    soap_begin(mySoap);
    // If we reached the max_keep_alive we'll exit
    if (mySoap->max_keep_alive > 0 && !--k)
        mySoap->keep_alive = 0;
    if (soap_begin_recv(mySoap))
    {
        if (mySoap->error < SOAP_STOP)
        {
            soap_send_fault(mySoap);
            break;
        }
        soap_closesock(mySoap);
        continue;
    }
    if (soap_envelope_begin_in(mySoap)
     || soap_recv_header(mySoap)
     || soap_body_begin_in(mySoap)
     || soap_serve_request(mySoap)
     || (mySoap->fserveloop && mySoap->fserveloop(mySoap)))
    {
        soap_send_fault(mySoap);
        break;
    }
} while (mySoap->keep_alive);
Note the following:
Server::mustShutdown() acts as a flag that will be set to true (externally) to end all the threads. When you want to stop the server from handling new requests, this function returns true and the loop ends (a minimal sketch of such a flag is shown after these notes).
I've removed the WITH_FASTCGI ifdefs; they are not relevant here.
When you close the connection like this, any clients still connected to the server will raise an exception. Clients written in C#, for instance, will throw "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server", which makes perfect sense here.
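Server::mustShutdown() is not a gSOAP facility; it is just a name used in this answer. A minimal sketch, assuming a static class wrapping an std::atomic<bool>:

#include <atomic>

// Hypothetical helper used by the loop above; the name Server::mustShutdown()
// is an assumption of this answer, not part of gSOAP.
class Server
{
public:
    static bool mustShutdown()    { return sShutdown.load(); }
    static void requestShutdown() { sShutdown.store(true); }
private:
    static std::atomic<bool> sShutdown;
};

std::atomic<bool> Server::sShutdown{false};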
But we are not done yet: as AudioComplex pointed out, the system still sits waiting for requests in soap_begin_recv. But I've got a solution for that too ;)
Each of the threads in the connection-handling pool creates a copy of the main soap context (via soap_copy); these threads are the ones that actually handle the requests. I store each of these contexts as an element of an array that lives in the main connection-handling thread.
When the main connection-handling thread (the one that serves the requests) is terminated, it goes through all the soap contexts and closes each connection "manually" by using:
for (int i = 0; i < soaps.size(); ++i) {
    soaps[i]->fclose(soaps[i]);
}
This will force the soap_serve loop to finish. It actually stops the internal loop near line 921 of stdsoap2.cpp:
r = select((int)soap->socket + 1, &fd, NULL, &fd, &timeout);
It is not the cleanest solution (haven't found a cleaner one) but it will definitely stop the service.
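For completeness, here is a rough sketch of how the worker contexts might be created and registered so that the close loop above has something to iterate over. The names (soaps, soapsMutex, workerLoop) and the surrounding structure are assumptions of this answer, not gSOAP API:

#include <mutex>
#include <thread>
#include <vector>
#include "soapH.h"   // generated gSOAP header (name assumed)

std::vector<struct soap*> soaps;  // worker contexts, closed on shutdown
std::mutex soapsMutex;            // guards the vector

void workerLoop(struct soap* workerSoap);  // runs the modified serve loop shown earlier

// Simplified accept loop for the main connection-handling thread.
// mainSoap is assumed to be already bound and listening (soap_bind).
void acceptLoop(struct soap* mainSoap)
{
    while (!Server::mustShutdown())
    {
        if (!soap_valid_socket(soap_accept(mainSoap)))
            continue;
        struct soap* workerSoap = soap_copy(mainSoap);  // per-connection context
        {
            std::lock_guard<std::mutex> lock(soapsMutex);
            soaps.push_back(workerSoap);
        }
        std::thread(workerLoop, workerSoap).detach();
    }
    // Force the soap_begin_recv calls that are still blocked to return.
    std::lock_guard<std::mutex> lock(soapsMutex);
    for (size_t i = 0; i < soaps.size(); ++i)
        soaps[i]->fclose(soaps[i]);
}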
You don't need to change the loop in soap_serve; just return an error code from the implementation of your service operations:
return Server::mustShutdown() ? SOAP_SVR_FAULT : SOAP_OK;
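As a sketch, inside a hypothetical generated operation (ns__ping and its signature are made up for illustration):

// Returning a fault once shutdown has been requested makes soap_serve send a
// fault for the current request and return instead of looping for keep-alive.
int ns__ping(struct soap* soap, int* result)
{
    if (Server::mustShutdown())
        return SOAP_SVR_FAULT;
    *result = 42;
    return SOAP_OK;
}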
Related
UWP (or "Metro") apps in Windows 8/10 can be suspended when they are not in the foreground. Apps in this state continue to exist but no longer consume CPU time. It looks like this change was introduced to improve performance on low-power/storage devices like tablets and phones.
What is the most elegant and simple method to detect a process in this state?
I can see 2 possible solutions at the moment:
Call NtQuerySystemInformation() and then enumerate each process and each thread. A process is "suspended" if all of its threads are in the suspended state. This approach requires a lot of code and, critically, NtQuerySystemInformation() is only semi-documented and could be removed in a future OS. NtQueryInformationProcess() may also offer a similar solution with the same problem.
Call GetProcessTimes() and record the counters for each process. Wait some longish time (minutes) and check again. If the process counters haven't changed, assume the process is suspended. I admit this is a hack, but it could work if the time period is long enough (a rough sketch of the idea follows).
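To make option 2 concrete, here is my rough sketch (the helper name and interval are made up; this is a heuristic only, since an idle-but-running process also shows no change):

#include <windows.h>

// Returns true if the process consumed no CPU time between two samples taken
// intervalMs apart. Illustration of the polling idea, not a reliable test.
bool SeemsSuspended(HANDLE hProcess, DWORD intervalMs)
{
    FILETIME createTime, exitTime, kernel1, user1, kernel2, user2;
    if (!GetProcessTimes(hProcess, &createTime, &exitTime, &kernel1, &user1))
        return false;
    Sleep(intervalMs);
    if (!GetProcessTimes(hProcess, &createTime, &exitTime, &kernel2, &user2))
        return false;
    return CompareFileTime(&kernel1, &kernel2) == 0 &&
           CompareFileTime(&user1, &user2) == 0;
}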
Is there a more elegant way?
For this there is PROCESS_EXTENDED_BASIC_INFORMATION - the meaning of its flags is described in this answer. You need the IsFrozen flag. So you need to open the process with PROCESS_QUERY_LIMITED_INFORMATION access (to do this for all processes you will need SE_DEBUG_PRIVILEGE enabled in your token) and call NtQueryInformationProcess with ProcessBasicInformation, passing a PROCESS_EXTENDED_BASIC_INFORMATION as the output buffer. To enumerate all processes we can use NtQuerySystemInformation with SystemProcessInformation. Of course it is also possible to use CreateToolhelp32Snapshot + Process32First + Process32Next, but that API is much less efficient compared to a direct call to NtQuerySystemInformation.
It is also possible to enumerate all threads in a process and check each thread's state and, if the state is a wait, the wait reason. This is very easy because all of this information is already returned by the single call to NtQuerySystemInformation with SystemProcessInformation, so we do not need to open any processes for it. Usually both ways give the same result for suspended/frozen processes, but using IsFrozen is the most correct solution.
void PrintSuspended()
{
    BOOLEAN b;
    RtlAdjustPrivilege(SE_DEBUG_PRIVILEGE, TRUE, FALSE, &b);
    ULONG cb = 0x1000;
    NTSTATUS status;
    do
    {
        status = STATUS_INSUFFICIENT_RESOURCES;
        if (PBYTE buf = new BYTE[cb])
        {
            if (0 <= (status = NtQuerySystemInformation(SystemProcessInformation, buf, cb, &cb)))
            {
                union {
                    PBYTE pb;
                    SYSTEM_PROCESS_INFORMATION* spi;
                };
                pb = buf;
                ULONG NextEntryOffset = 0;
                do
                {
                    pb += NextEntryOffset;
                    if (!spi->UniqueProcessId)
                    {
                        continue;
                    }
                    if (HANDLE hProcess = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE,
                        (ULONG)(ULONG_PTR)spi->UniqueProcessId))
                    {
                        PROCESS_EXTENDED_BASIC_INFORMATION pebi;
                        if (0 <= NtQueryInformationProcess(hProcess, ProcessBasicInformation, &pebi, sizeof(pebi), 0) &&
                            pebi.Size >= sizeof(pebi))
                        {
                            if (pebi.IsFrozen)
                            {
                                DbgPrint("f:%x %wZ\n", spi->UniqueProcessId, spi->ImageName);
                            }
                        }
                        CloseHandle(hProcess);
                    }
                    if (ULONG NumberOfThreads = spi->NumberOfThreads)
                    {
                        SYSTEM_THREAD_INFORMATION* TH = spi->TH;
                        do
                        {
                            if (TH->ThreadState != StateWait || TH->WaitReason != Suspended)
                            {
                                break;
                            }
                        } while (TH++, --NumberOfThreads);
                        if (!NumberOfThreads)
                        {
                            DbgPrint("s:%x %wZ\n", spi->UniqueProcessId, spi->ImageName);
                        }
                    }
                } while (NextEntryOffset = spi->NextEntryOffset);
            }
            delete [] buf;
        }
    } while (status == STATUS_INFO_LENGTH_MISMATCH);
}
I thought that if I didn't call ev_loop_fork in the child, then the watchers in the child wouldn't be triggered.
This is my code; I build the ev_loop with the EVBACKEND_EPOLL and EVFLAG_NOENV flags,
so there is no EVFLAG_FORKCHECK flag.
Then I commented out the ev_loop_fork call in the child.
If everything went as expected, I thought the child would not trigger the timeout callback function.
But actually, the output is something like this:
$ 4980 fork 4981
$ time out at 4980
$ time out at 4981
It seems that the watcher is still triggered in the child; it behaves the same as calling ev_loop_fork.
So what is the problem? Thank you.
#include <ev.h>
#include <stdio.h>
#include <unistd.h>

void timeout_cb(EV_P_ ev_timer *w, int revents)
{
    printf("time out at %d\n", getpid());
    ev_break(EV_A_ EVBREAK_ONE);
}

int main()
{
    int ret;
    ev_timer timeout_watcher;
    struct ev_loop *loop = ev_default_loop(EVBACKEND_EPOLL | EVFLAG_NOENV);
    ev_timer_init(&timeout_watcher, timeout_cb, 5.5, 0.);
    ev_timer_start(loop, &timeout_watcher);
    ret = fork();
    if (ret > 0)
        printf("%d fork %d\n", getpid(), ret);
    else if (ret == 0)
    {
        //ev_loop_fork(EV_DEFAULT);
    }
    else
        return -1;
    ev_run(loop, 0);
    return 0;
}
The libev manual does not say that an event loop will be stopped after a fork. All it says is that, to be sure the event loop works properly in the child, you need to call ev_loop_fork(). What actually happens depends on the backend.
And technically, timers are even more resilient against forks in most backends: select(), poll(), epoll() and kqueue all allow a timeout value to be specified, after which these functions return even if no event occurred. libev uses this feature to trigger timeouts when they are supposed to be triggered, so there is no need to re-register any file descriptors for timeouts to work.
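So if the intent is that the child should not run the inherited timer, the child has to handle that explicitly; ev_loop_fork() only makes the loop usable after the fork, it does not remove watchers. A minimal sketch, based on the question's program (the decision to stop the timer in the child is my assumption of the intended behaviour):

#include <ev.h>
#include <stdio.h>
#include <unistd.h>

static void timeout_cb(EV_P_ ev_timer *w, int revents)
{
    printf("time out at %d\n", getpid());
    ev_break(EV_A_ EVBREAK_ONE);
}

int main()
{
    struct ev_loop *loop = ev_default_loop(EVBACKEND_EPOLL | EVFLAG_NOENV);
    ev_timer timeout_watcher;
    ev_timer_init(&timeout_watcher, timeout_cb, 5.5, 0.);
    ev_timer_start(loop, &timeout_watcher);

    pid_t pid = fork();
    if (pid == 0)
    {
        ev_loop_fork(EV_DEFAULT);              // make the default loop usable after fork
        ev_timer_stop(loop, &timeout_watcher); // only if the child should not time out
    }
    else if (pid < 0)
        return -1;

    ev_run(loop, 0);   // parent prints the timeout; child returns immediately
    return 0;
}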
I have a C++ pipe server app and a C# pipe client app communicating via Windows named pipe (duplex, message mode, wait/blocking in separate read thread).
It all works fine (both sending and receiving data via the pipe) until I try and write to the pipe from the client in response to a forms 'textchanged' event. When I do this, the client hangs on the pipe write call (or flush call if autoflush is off). Breaking into the server app reveals it's also waiting on the pipe ReadFile call and not returning.
I tried running the client write on another thread -- same result.
Suspect some sort of deadlock or race condition but can't see where... don't think I'm writing to the pipe simultaneously.
Update1: tried pipes in byte mode instead of message mode - same lockup.
Update2: Strangely, if (and only if) I pump lots of data from the server to the client, it cures the lockup!?
Server code:
DWORD ReadMsg(char* aBuff, int aBuffLen, int& aBytesRead)
{
    DWORD byteCount;
    if (ReadFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
    {
        aBytesRead = (int)byteCount;
        aBuff[byteCount] = 0;
        return ERROR_SUCCESS;
    }
    return GetLastError();
}

DWORD SendMsg(const char* aBuff, unsigned int aBuffLen)
{
    DWORD byteCount;
    if (WriteFile(mPipe, aBuff, aBuffLen, &byteCount, NULL))
    {
        return ERROR_SUCCESS;
    }
    mClientConnected = false;
    return GetLastError();
}

DWORD CommsThread()
{
    while (1)
    {
        std::string fullPipeName = std::string("\\\\.\\pipe\\") + mPipeName;

        mPipe = CreateNamedPipeA(fullPipeName.c_str(),
                                 PIPE_ACCESS_DUPLEX,
                                 PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                                 PIPE_UNLIMITED_INSTANCES,
                                 KTxBuffSize, // output buffer size
                                 KRxBuffSize, // input buffer size
                                 5000,        // client time-out ms
                                 NULL);       // no security attribute
        if (mPipe == INVALID_HANDLE_VALUE)
            return 1;

        mClientConnected = ConnectNamedPipe(mPipe, NULL) ? TRUE : (GetLastError() == ERROR_PIPE_CONNECTED);
        if (!mClientConnected)
            return 1;

        char rxBuff[KRxBuffSize+1];
        DWORD error = 0;
        while (mClientConnected)
        {
            Sleep(1);
            int bytesRead = 0;
            error = ReadMsg(rxBuff, KRxBuffSize, bytesRead);
            if (error == ERROR_SUCCESS)
            {
                rxBuff[bytesRead] = 0; // terminate string.
                if (mMsgCallback && bytesRead > 0)
                    mMsgCallback(rxBuff, bytesRead, mCallbackContext);
            }
            else
            {
                mClientConnected = false;
            }
        }
        Close();
        Sleep(1000);
    }
    return 0;
}
Client code:
public void Start(string aPipeName)
{
    mPipeName = aPipeName;
    mPipeStream = new NamedPipeClientStream(".", mPipeName, PipeDirection.InOut, PipeOptions.None);

    Console.Write("Attempting to connect to pipe...");
    mPipeStream.Connect();
    Console.WriteLine("Connected to pipe '{0}' ({1} server instances open)", mPipeName, mPipeStream.NumberOfServerInstances);

    mPipeStream.ReadMode = PipeTransmissionMode.Message;
    mPipeWriter = new StreamWriter(mPipeStream);
    mPipeWriter.AutoFlush = true;

    mReadThread = new Thread(new ThreadStart(ReadThread));
    mReadThread.IsBackground = true;
    mReadThread.Start();

    if (mConnectionEventCallback != null)
    {
        mConnectionEventCallback(true);
    }
}

private void ReadThread()
{
    byte[] buffer = new byte[1024 * 400];
    while (true)
    {
        int len = 0;
        do
        {
            len += mPipeStream.Read(buffer, len, buffer.Length);
        } while (len > 0 && !mPipeStream.IsMessageComplete);

        if (len == 0)
        {
            OnPipeBroken();
            return;
        }
        if (mMessageCallback != null)
        {
            mMessageCallback(buffer, len);
        }
        Thread.Sleep(1);
    }
}

public void Write(string aMsg)
{
    try
    {
        mPipeWriter.Write(aMsg);
        mPipeWriter.Flush();
    }
    catch (Exception)
    {
        OnPipeBroken();
    }
}
If you are using separate threads, you will be unable to read from the pipe at the same time you write to it. For example, if you are doing a blocking read from the pipe, then a subsequent blocking write (from a different thread) will wait/block until the read call has completed; in many cases, if this is unexpected behavior, your program will become deadlocked.
I have not tested overlapped I/O, but it MAY be able to resolve this issue. However, if you are determined to use synchronous calls, then the following models may help you solve the problem.
Master/Slave
You could implement a master/slave model in which the client or the server is the master and the other end only responds, which is generally what you will find in the MSDN examples.
In some cases you may find this problematic if the slave periodically needs to send data to the master. You must either use an external signaling mechanism (outside of the pipe), have the master periodically query/poll the slave, or swap the roles so that the client is the master and the server is the slave.
Writer/Reader
You could use a writer/reader model where you use two different pipes. However, you must associate those two pipes somehow if you have multiple clients, since each pipe will have a different handle. You could do this by having the client send a unique identifier value on connection to each pipe, which then lets the server associate the two pipes. This number could be the current system time or even a unique identifier that is global or local (a sketch of this handshake follows).
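As a rough client-side illustration of that handshake (the pipe names, the helper, and the use of a GUID are my assumptions, not part of the question's code):

#include <windows.h>
#include <objbase.h>   // CoCreateGuid

// Open both pipes (assumed to be created duplex by the server) and send the
// same GUID as the first message on each, so the server can pair the two
// handles that belong to this client.
bool ConnectPipePair(HANDLE& toServer, HANDLE& fromServer)
{
    GUID id;
    if (FAILED(CoCreateGuid(&id)))
        return false;

    toServer   = CreateFileA("\\\\.\\pipe\\MyApp.Requests", GENERIC_READ | GENERIC_WRITE,
                             0, NULL, OPEN_EXISTING, 0, NULL);
    fromServer = CreateFileA("\\\\.\\pipe\\MyApp.Events", GENERIC_READ | GENERIC_WRITE,
                             0, NULL, OPEN_EXISTING, 0, NULL);
    if (toServer == INVALID_HANDLE_VALUE || fromServer == INVALID_HANDLE_VALUE)
        return false;

    DWORD written = 0;
    return WriteFile(toServer,   &id, sizeof(id), &written, NULL) &&
           WriteFile(fromServer, &id, sizeof(id), &written, NULL);
}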
Threads
If you are determined to use the synchronous API, you can use threads with the master/slave model if you do not want to be blocked while waiting for a message on the slave side. You will, however, want to lock the reader after it reads a message (or reaches the end of a series of messages), then write the response (as the slave should), and finally unlock the reader. You can lock and unlock the reader using locking mechanisms that put the thread to sleep, as these are most efficient.
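A minimal sketch of that pause/resume idea using a condition variable (the names are made up and it is not tied to the pipe code above):

#include <condition_variable>
#include <mutex>

// The reader thread parks itself after delivering a message and is woken
// again once the response has been written, so reads and writes never overlap.
class ReaderGate
{
public:
    void pauseReader()   // called by the reader after it has read a message
    {
        std::unique_lock<std::mutex> lock(mMutex);
        mPaused = true;
        mCond.wait(lock, [this] { return !mPaused; });  // sleeps, no busy wait
    }

    void resumeReader()  // called by the writer once the response has been sent
    {
        {
            std::lock_guard<std::mutex> lock(mMutex);
            mPaused = false;
        }
        mCond.notify_one();
    }

private:
    std::mutex mMutex;
    std::condition_variable mCond;
    bool mPaused = false;
};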
Security Problem With TCP
The biggest possible problem of going with TCP instead of named pipes is the loss of security. A TCP stream does not natively provide any security, so if security is a concern you will have to implement it yourself, and you risk creating a security hole since you would have to handle authentication yourself. A named pipe can provide security if you set the parameters properly. To state it again more clearly: security is no simple matter, and generally you will want to use existing facilities that have been designed to provide it.
I think you may be running into problems with named pipes' message mode. In this mode, each write to the kernel pipe handle constitutes a message. This doesn't necessarily correspond to what your application regards as a message, and a message may be bigger than your read buffer.
This means that your pipe reading code needs two loops: the inner one reading until the current [named pipe] message has been completely received, and the outer one looping until your [application level] message has been received.
Your C# client code does have a correct inner loop, reading again if IsMessageComplete is false:
do
{
    len += mPipeStream.Read(buffer, len, buffer.Length);
} while (len > 0 && !mPipeStream.IsMessageComplete);
Your C++ server code doesn't have such a loop; the equivalent at the Win32 API level is testing for the return code ERROR_MORE_DATA (a sketch is shown below).
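For illustration, here is a minimal sketch of such an inner loop on the Win32 side (the buffer size, names, and reduced error handling are my own simplifications, not code from the question):

#include <windows.h>
#include <string>

// Reads one complete named-pipe message in message mode, looping while
// ReadFile reports ERROR_MORE_DATA because the buffer was too small.
bool ReadWholeMessage(HANDLE pipe, std::string& outMsg)
{
    outMsg.clear();
    char chunk[512];
    for (;;)
    {
        DWORD bytesRead = 0;
        BOOL ok = ReadFile(pipe, chunk, sizeof(chunk), &bytesRead, NULL);
        outMsg.append(chunk, bytesRead);
        if (ok)
            return true;                   // whole message received
        if (GetLastError() != ERROR_MORE_DATA)
            return false;                  // real error / pipe broken
        // ERROR_MORE_DATA: part of the message was read; keep looping.
    }
}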
My guess is that somehow this is leading to the client waiting for the server to read on one pipe instance, whilst the server is waiting for the client to write on another pipe instance.
It seems to me that what you are trying to do will not work as expected. Some time ago I was trying to do something that looked like your code and got similar results: the pipe just hung, and it was difficult to establish what had gone wrong.
I would rather suggest using the client in a very simple way:
CreateFile
Write request
Read answer
Close pipe.
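As a rough sketch of that request/response pattern (the pipe name and the helper are made up for illustration):

#include <windows.h>
#include <string>

// One request/response round trip: open the pipe, write the request,
// read the answer, close the handle. No long-lived connection to deadlock.
bool SendRequest(const std::string& request, std::string& reply)
{
    HANDLE pipe = CreateFileA("\\\\.\\pipe\\MyAppPipe", GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_EXISTING, 0, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return false;

    DWORD written = 0;
    bool ok = WriteFile(pipe, request.data(), (DWORD)request.size(), &written, NULL) != 0;

    char buffer[512];
    DWORD bytesRead = 0;
    if (ok && ReadFile(pipe, buffer, sizeof(buffer), &bytesRead, NULL))
        reply.assign(buffer, bytesRead);
    else
        ok = false;

    CloseHandle(pipe);
    return ok;
}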
If you want two-way communication, with clients that are also able to receive unrequested data from the server, you should rather implement two servers. This was the workaround I used: here you can find the sources.
In Win32, is there a way to test if a socket is non-blocking?
Under POSIX systems, I'd do something like the following:
int is_non_blocking(int sock_fd) {
    int flags = fcntl(sock_fd, F_GETFL, 0);
    return flags & O_NONBLOCK;
}
However, Windows sockets don't support fcntl(). The non-blocking mode is set using ioctl with FIONBIO, but there doesn't appear to be a way to get the current non-blocking mode using ioctl.
Is there some other call on Windows that I can use to determine if the socket is currently in non-blocking mode?
A slightly longer answer would be: No, but you will usually know whether or not it is, because it is relatively well-defined.
All sockets are blocking unless you explicitly ioctlsocket() them with FIONBIO or hand them to either WSAAsyncSelect or WSAEventSelect. The latter two functions "secretly" change the socket to non-blocking.
Since you know whether you have called one of those 3 functions, even though you cannot query the status, it is still known. The obvious exception is if that socket comes from some 3rd party library of which you don't know what exactly it has been doing to the socket.
Sidenote: Funnily, a socket can be blocking and overlapped at the same time, which does not immediately seem intuitive, but it kind of makes sense because they come from opposite paradigms (readiness vs completion).
Previously, you could call WSAIsBlocking to determine this. If you are managing legacy code, this may still be an option.
Otherwise, you could write a simple abstraction layer over the socket API. Since all sockets are blocking by default, you could maintain an internal flag and force all socket ops through your API so you always know the state.
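A minimal sketch of such a wrapper (the class and method names are made up for illustration):

#include <winsock2.h>

// Tracks the blocking mode itself, since Winsock cannot be queried for it.
class TrackedSocket
{
public:
    explicit TrackedSocket(SOCKET s) : mSocket(s), mBlocking(true) {} // sockets start blocking

    bool setBlocking(bool blocking)
    {
        u_long nonBlocking = blocking ? 0 : 1;
        if (ioctlsocket(mSocket, FIONBIO, &nonBlocking) != NO_ERROR)
            return false;
        mBlocking = blocking;   // remember what we set
        return true;
    }

    bool isBlocking() const { return mBlocking; }

private:
    SOCKET mSocket;
    bool   mBlocking;
};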
Here is a cross-platform snippet to set/get the blocking mode, although it doesn't do exactly what you want:
/// #author Stephen Dunn
/// #date 10/12/15
bool set_blocking_mode(const int &socket, bool is_blocking)
{
    bool ret = true;
#ifdef WIN32
    /// #note windows sockets are created in blocking mode by default
    // currently on windows, there is no easy way to obtain the socket's current blocking mode since WSAIsBlocking was deprecated
    u_long flags = is_blocking ? 0 : 1;
    ret = NO_ERROR == ioctlsocket(socket, FIONBIO, &flags);
#else
    const int flags = fcntl(socket, F_GETFL, 0);
    if ((flags & O_NONBLOCK) && !is_blocking) { info("set_blocking_mode(): socket was already in non-blocking mode"); return ret; }
    if (!(flags & O_NONBLOCK) && is_blocking) { info("set_blocking_mode(): socket was already in blocking mode"); return ret; }
    ret = 0 == fcntl(socket, F_SETFL, is_blocking ? flags ^ O_NONBLOCK : flags | O_NONBLOCK);
#endif
    return ret;
}
I agree with the accepted answer: there is no official way to determine the blocking state of a socket on Windows. In case you get a socket from a third party (say you are a TLS library and you get the socket from an upper layer), you cannot tell whether it is in blocking state or not.
Despite this, I have a working, unofficial and limited solution for the problem which has worked for me for a long time.
I attempt to read 0 bytes from the socket. If it is a blocking socket it will return 0; if it is a non-blocking socket it will return -1 and GetLastError() equals WSAEWOULDBLOCK.
int IsBlocking(SOCKET s)
{
    int r = 0;
    unsigned char b[1];
    r = recv(s, (char *)b, 0, 0);
    if (r == 0)
        return 1;
    else if (r == -1 && GetLastError() == WSAEWOULDBLOCK)
        return 0;
    return -1; /* In case it is a connection socket (TCP) and it is not in connected state you will get here 10060 */
}
Caveats:
Works with UDP sockets
Works with connected TCP sockets
Doesn't work with unconnected TCP sockets
I am attempting to use boost::asio to read and write from a device on a serial port. Both boost::asio::read() and boost::asio::serial_port::read_some() block when there is nothing to read. Instead I would like to detect this condition and write a command to the port to kick-start the device.
How can I detect that no data is available?
If necessary I can do everything asynchronously, I would just rather avoid the extra complexity if I can.
You have a couple of options, actually. You can either use the serial port's built-in async_read_some function, or you can use the stand-alone function boost::asio::async_read (or async_read_some).
You'll still run into the situation where you are effectively "blocked", since neither of these will call the callback unless (1) data has been read or (2) an error occurs. To get around this, you'll want to use a deadline_timer object to set a timeout. If the timeout fires first, no data was available. Otherwise, you will have read data.
The added complexity isn't really all that bad. You'll end up with two callbacks with similar behavior. If either the "read" or the "timeout" callback fires with an error, you know it's the race loser. If either one fires without an error, then you know it's the race winner (and you should cancel the other call). In the place where you would have had your blocking call to read_some, you will now have a call to io_svc.run(). Your function will still block as before when it calls run, but this time you control the duration.
Here's an example:
void foo()
{
    io_service io_svc;
    serial_port ser_port(io_svc, "your string here");
    deadline_timer timeout(io_svc);
    unsigned char my_buffer[1];
    bool data_available = false;

    ser_port.async_read_some(boost::asio::buffer(my_buffer),
        boost::bind(&read_callback, boost::ref(data_available), boost::ref(timeout),
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));

    timeout.expires_from_now(boost::posix_time::milliseconds(<<your_timeout_here>>));
    timeout.async_wait(boost::bind(&wait_callback, boost::ref(ser_port),
                                   boost::asio::placeholders::error));

    io_svc.run();  // will block until async callbacks are finished

    if (!data_available)
    {
        kick_start_the_device();
    }
}

void read_callback(bool& data_available, deadline_timer& timeout, const boost::system::error_code& error, std::size_t bytes_transferred)
{
    if (error || !bytes_transferred)
    {
        // No data was read!
        data_available = false;
        return;
    }
    timeout.cancel();  // will cause wait_callback to fire with an error
    data_available = true;
}

void wait_callback(serial_port& ser_port, const boost::system::error_code& error)
{
    if (error)
    {
        // Data was read and this timeout was canceled
        return;
    }
    ser_port.cancel();  // will cause read_callback to fire with an error
}
That should get you started with only a few tweaks here and there to suit your specific needs. I hope this helps!
Another note: No extra threads were necessary to handle callbacks. Everything is handled within the call to run(). Not sure if you were already aware of this...
It's actually a lot simpler than the answers here imply, and you can do it synchronously:
Suppose your blocking read was something like this:
size_t len = socket.receive_from(boost::asio::buffer(recv_buf), sender_endpoint);
Then you replace it with
socket.non_blocking(true);
size_t len = 0;
boost::system::error_code error = boost::asio::error::would_block;
while (error == boost::asio::error::would_block)
{
    // do other things here like go and make coffee
    len = socket.receive_from(boost::asio::buffer(recv_buf), sender_endpoint, 0, error);
}
std::cout.write(recv_buf.data(), len);
You use the alternative overloaded form of receive_from, which almost all the send/receive methods have. It unfortunately takes a flags argument, but 0 seems to work fine.
You have to use the free-function asio::async_read.