I have a requirement where my server (Windows C++ using OpenSSL) will listen to 'n' clients and respond asynchronously based on each client's request. For this, I am planning to use the select() API call. But it seems that OpenSSL doesn't work with select(). So I am wondering whether there is any other method through which I can achieve this functionality.
Any help on this is very much appreciated.
OpenSSL works with select(), but the trick is knowing WHEN to use select(). For instance, traditional non-blocking socket logic when reading data is to call select() first and then call recv() when select() says there is data to read. That does NOT work with the traditional OpenSSL API! You need to call SSL_read() first and then call select() to wait for readability only when SSL_read() reports an SSL_ERROR_WANT_READ error. In other words, it is the difference between wait-for-ready-then-act vs act-then-wait-for-ready, respectively. And there is the possibility that SSL_read() can report an SSL_ERROR_WANT_WRITE error, in which case you have to call select() to check for writability instead. Yes, a read action can trigger a write action!
Similar consideration is needed for SSL_write() and SSL_ERROR_WANT_WRITE for writing, and SSL_ERROR_WANT_READ for reading. Yes, a write action can trigger a read action!
You really cannot graft traditional OpenSSL on top of an existing non-SSL socket implementation, at least not without extra work. It can be done (I've done it), but it is not easy. Traditional OpenSSL has its own logistics that are almost backwards from traditional socket logic.
If you have an existing socket implementation and just want to add SSL/TLS to it without big headaches, you have two choices:
Use OpenSSL's BIO API instead. Create two memory BIO pairs, one for input, and one for output.
Switch to Microsoft's Crypto/SChannel API (or another third party library that supports push models).
Either approach allows you to use your own socket I/O. When receiving encrypted data, read the socket data however you want and push it into the crypto engine, and when it spits out decrypted data then process as needed. When sending unencrypted data, push it into the crypto engine, and when it spits out encrypted data then send it to the socket however you want. This leaves you in full control of the socket. Using the traditional OpenSSL API, OpenSSL takes over control of the socket.
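As a concrete illustration of that push model, here is a Python sketch using the stdlib ssl module's MemoryBIO, which is Python's binding for OpenSSL memory BIOs. Certificate verification is disabled purely to keep the sketch short; the variable names are mine.

```python
import ssl

# Two memory BIOs carry ciphertext between our code and the TLS engine;
# the engine never touches a socket.
incoming = ssl.MemoryBIO()   # network -> engine (encrypted bytes we received)
outgoing = ssl.MemoryBIO()   # engine -> network (encrypted bytes to send)

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False        # sketch only: no peer verification
ctx.verify_mode = ssl.CERT_NONE
tls = ctx.wrap_bio(incoming, outgoing)

try:
    tls.do_handshake()
except ssl.SSLWantReadError:
    # Engine wants the server's reply; meanwhile it has queued its first
    # flight (the ClientHello) in `outgoing` for US to send on OUR socket.
    pass

client_hello = outgoing.read()    # send these bytes however you like
# Later: incoming.write(bytes_from_socket); tls.do_handshake() / tls.read()
```

Note how the engine never blocks and never owns a socket: you feed ciphertext in with incoming.write() and drain ciphertext out with outgoing.read(), which is exactly the full-control arrangement described above.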
What I need to happen with a VB6 application I maintain is the following.
Establish a connection to a known address and port over an Ethernet network.
Send a request
Wait for a response.
I tried using WinSock and Winsock replacements, but they all rely in one form or another on the messaging loop inherent in Windows applications. I don't know enough about the Winsock API to implement the above algorithm in VB6 (or in any other language).
My application is VB6 CAD/CAM software that controls metal cutting machines over a dedicated Ethernet network. The software has been maintained for 20 years and we have developed several drivers for different types of motion controllers. To date the API for these motion controllers consists of:
Opening a connection to the hardware
Sending a request to the hardware (for example Position of an axis)
Waiting for a response (usually occurs in milliseconds).
Some of these controllers work over an Ethernet network, but until now I never had to interact with the ports directly. I used the company-supplied libraries to handle things. They work in the way I mentioned above and throw a timeout error if a response doesn't arrive within some defined time.
The problem with Winsock is that I have to insert DoEvents to get it to respond. This causes havoc with how we handle multi-tasking in our VB6 application. Replacements like CSocketMaster use subclassing, which also causes havoc with our multi-tasking.
So I would appreciate any help on how to use the Winsock API, or a third-party DLL, to do what I outlined above. I wouldn't be asking if I hadn't seen other motion controllers do what I want to do.
Check out the VbAsyncSocket repo on GitHub for a pure VB6 asynchronous sockets implementation (using the WSAAsyncSelect API to post event notifications for sockets).
Contrary to its name, the class does support SyncSendArray and SyncReceiveArray methods for synchronous operations -- without DoEvents but with timeouts.
In the same repo there is a handy contributed class, cWinSockRequest, that is very similar to the WinHttpRequest object baked into the OS. This helper class will be very familiar to you if you have experience with JSON/XML (generally RESTful services over http/https), but here for accessing services/devices over plain tcp/udp sockets.
Another option would be to use the contributed cTlsClient class, which can connect to a host/device over tcp (no udp here) and provides ReadText/WriteText and ReadArray/WriteArray (synchronous) methods. The added benefit here is that the class supports both plain unencrypted sockets and SSL-encrypted channels if need be.
We are using these classes to (synchronously) access ESC/POS printers from our LOB applications. Most POS printers also provide serial (USB-to-COM) links too, so we are abstracting our access with connector classes -- SyncWaitForEvent over async sockets and WaitForMultipleObjects on overlapped ReadFile/WriteFile APIs (oh, the irony).
I think it is rare for it to be appropriate to do networking synchronously; however, this isn't networking in the traditional sense. This is a wire from a PC to a controller, like a string between two cans. In this case, with a large old program, the most appropriate approach is the one that works the best and is the easiest to maintain. < /end2cents >
If VB6 + Winsock isn't working out for you, writing this in .NET and building it into a COM visible DLL for your VB6 program will fit the bill.
The example below will get you started. If you do more than the occasional call to this, it will be slow as it opens and closes the connection on each call. It should be easy to expand it to allow for reusing an open connection for back and forth communication between the PC and controller. Just be careful that you don't create a memory leak!
/// <summary>
/// Sends a message to the specified host:port, and waits for a response
/// </summary>
public string SendAndReceive(string host, int port, string messageToSend, int millisecondTimeout)
{
    try
    {
        using (var client = new TcpClient())
        {
            client.SendTimeout = client.ReceiveTimeout = millisecondTimeout;
            // Perform connection
            client.Connect(host, port);
            if (client.Connected)
            {
                using (var stream = client.GetStream())
                {
                    // Convert the message to a byte array
                    var toSend = Encoding.ASCII.GetBytes(messageToSend);
                    // Send the message
                    stream.Write(toSend, 0, toSend.Length);
                    // Get a response, noting how many bytes actually arrived
                    var response = new byte[client.ReceiveBufferSize];
                    int bytesRead = stream.Read(response, 0, response.Length);
                    return Encoding.ASCII.GetString(response, 0, bytesRead);
                }
            }
            else
            {
                return null;
            }
        }
    }
    catch
    {
        return null;
    }
}
As it turns out, the answer involved implementing Allen's suggestion.
The specific problem was communicating between two devices involved in the control of a piece of machinery. The device that did the motion control was acting as a server, while the PC providing the motion data was the client.
The Ethernet network was being used in lieu of a proprietary bus interface or an RS-232/422 serial interface. So many of the considerations involved in serving up data over the widespread internet were not a factor. The network consisted of known devices residing at fixed IPs listening on specific ports.
After talking to people who make other motion controllers, the logic for the client turned out to be surprisingly simple:
Send the Data
Wait in a loop for the response, breaking out if it takes too long.
Handle any errors in the connection.
On the server side, we were lucky in that we had control over the software running on the motion controller. So there the communication loop was designed to be as fast as possible. One key point was to keep all data below 512 bytes so it all fits in a single packet. They also took great care in prioritizing the communication handler and the data structure so it can send out a response in tens of microseconds.
Another point was that in this application of dedicated clients and servers, UDP is preferred over TCP, as operating systems (particularly Windows) are in the habit of shutting down idle TCP connections unexpectedly.
Because the client software is slowly transitioning over to the .NET framework, that was another factor for implementing Allen's idea. The library discussed by @wqw worked as well.
Your problem is that you are using the Winsock control incorrectly. That probably stems from a flaw in your interaction model.
Don't "send and wait" because blocking like that is your big mistake. There is no "waiting" anyway unless you think sitting in a buzz loop is waiting.
Instead, send your request and exit from that event handler. All of your code is contained in event handlers; that's how it works. Then, as DataArrival events are raised, you append each newly arrived fragment to a stream buffer and scan the assembled stream for a complete response. Then go ahead and process the response.
Handle timeouts using a Timer control that you enable after sending. When you have assembled a complete response, disable the Timer. If the interval elapses and the Timer event is raised, do your error processing there.
You seem to have a particularly chatty protocol so you shouldn't have to do anything else. For example you can just clear your stream buffer after processing a complete response, since there can't be anything else left in there anyway.
Forget "multitasking" and avoid DoEvents() calls like the plague they are.
This is very simple stuff.
I have a TIdCmdTCPClient which receives commands terminated by LF from a TCP server (written in C) into command handlers and accordingly updates a UI using TIdNotify. All is fine, except that sometimes I need to talk to the server in the traditional way using WriteLn and ReadLn. If I try to do so, there are problems: the UI freezes, subsequent commands arrive late, etc.
Is there a specific way to make the WriteLn/ReadLn pair work with TIdCmdTCPClient the way it works with TIdTCPClient?
Please provide more information about the protocol you are implementing. You can certainly issue additional WriteLn() and ReadLn() calls while you are inside of a command handler event, as long as that is what the server is expecting you to do. But if you need to call ReadLn() out-of-band then you are going to conflict with TIdCmdTCPClient's internal reading.
In a KEXT, I am listening for file close via a vnode or file scope listener. For certain (very few) files, I need to send the file path to my system daemon, which does some processing (this has to happen in the daemon) and returns the result back to the KEXT. The file close call needs to block until I get a response from the daemon. Based on the result I need to do some operation in the close call and return the close call successfully. There is a lot of discussion on KEXT communication on the forum, but it is not conclusive and appears to be very old (from around 2002). On Windows this requirement can be handled by the FltSendMessage(...) API; I am looking for an equivalent on the Mac.
Here is what I have looked at and want to summarize my understanding:
Mach messages: Provide a very good way of doing bidirectional communication, using sender and reply ports with a queueing mechanism. However, the Mach message APIs (e.g. mach_msg, mach_port_allocate, bootstrap_look_up) don't appear to be KPIs. The mach_msg_send_from_kernel API can be used, but that alone will not help with bidirectional communication. Is my understanding right?
IOUserClient: This appears to be more about communicating from user space to a KEXT and then having some callbacks from the KEXT. I did not find a way to initiate communication from the KEXT to the daemon and then wait for a result from the daemon. Am I missing something?
Sockets: This could be the last option, since I would have to implement the entire bidirectional communication channel from KEXT to daemon.
ioctl/sysctl: I don't know much about them. From what I have read, they are not a recommended option, especially for bidirectional communication.
RPC-Mig: Again, I don't know much about it. It looks complicated from what I have seen. Not sure if this is a recommended way.
KUNCUserNotification: This appears to just provide a notification to the user from the KEXT. It does not meet my requirement.
The supported platform is 10.5 onwards. So, looking at the requirement, can someone suggest and provide some pointers on this topic?
Thanks in advance.
The pattern I've used for that process is to have the user-space process initiate a socket connection to the KEXT; the KEXT creates a new thread to handle messages over that socket and sleeps the thread. When the KEXT detects an event it needs to respond to, it wakes the messaging thread and uses the existing socket to send data to the daemon. On receiving a response, control is passed back to the requesting thread to decide whether to veto the operation.
I don't know of any single resource that will describe that whole pattern completely, but the relevant KPIs are discussed in Mac OS X Internals (which seems old, but the KPIs haven't changed much since it was written) and OS X and iOS Kernel Programming (which I was a tech reviewer on).
For what it's worth, autofs uses what I assume you mean by "RPC-Mig", so it's not too complicated (MIG is used to describe the RPC calls, and the stub code it generates handles calling the appropriate Mach-message sending and receiving code; there are special options to generate kernel-mode stubs).
However, it doesn't need to do any lookups, as automountd (the user-mode daemon to which the autofs kext sends messages) has a "host special port" assigned to it. Doing the lookups to find an arbitrary service would be harder.
If you want to use the socket established with ctl_register() on the kext side, then beware: communication from kext to user space (via ctl_enqueuedata()) works OK. However, the opposite direction is buggy on 10.5.x and 10.6.x.
After about 70,000 or 80,000 send() calls with SOCK_DGRAM in the PF_SYSTEM domain, the complete network stack breaks, with disastrous consequences for the whole system (hard power-off is the only way out). This has been fixed in 10.7.0. I work around it by using setsockopt() in our project for the direction from user space to kext, as we only send very small data (just to allow/disallow some operation).
I am trying to handle SSL error scenarios where, for example, SSL async_handshake() is taking too long.
After some time (say 20 sec) I want to close this connection (lowest_layer().close()).
I pass a shared_ptr with the connection object as a parameter to async_handshake(), so the object still exists; eventually the handshake handler is invoked and the object gets destroyed.
But I'm still getting sporadic crashes! It looks like after close() SSL is still trying to read or operate on the read buffer.
So, the basic question: is it safe to hard-close() an SSL connection?
Any ideas?
Typically the method I've used to stop outstanding asynchronous operations on a socket is socket::cancel, as described in the documentation. Their handlers will be invoked with asio::error::operation_aborted as the error parameter, which you'll need to handle somehow.
That said, I don't see a problem using close instead of cancel. Though it is difficult to offer much help or advice without some code to analyze.
Note that some Windows platforms have problems when canceling outstanding asynchronous operations. The documentation has suggestions for portable cancelation if your application needs to support Windows.
Whenever I bash into OpenSSL on Windows or Mac, I always make my own memory BIOs and link them up to the platform's message-based (asynchronous, non-blocking) socket implementation (WSAAsyncSelect on Windows; CFSocket on Mac).
"Secure programming with the OpenSSL API", hosted on ibm.com, seems to be the best reference on implementing OpenSSL -- but it implements a very simple blocking connection.
Is there a standard way to set up and use OpenSSL with non-blocking sockets -- such that calls to SSL_read, for example, will not block if there is no data?
SSL_read() (and the other SSL functions) work fine if the underlying socket is set non-blocking. If insufficient data is available, it will return a value less than zero; SSL_get_error() called on the return value will return SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE, indicating what SSL is waiting for.
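For illustration, the same behavior is visible from Python's stdlib ssl module, which is a thin wrapper over OpenSSL: on a non-blocking socket, the WANT_READ condition surfaces as an SSLWantReadError exception. The loopback socket setup and disabled verification here are just to make the sketch self-contained; no real server is involved.

```python
import socket
import ssl

# A connected loopback pair stands in for a real client/server link.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.socket()
client.connect(listener.getsockname())
server_side, _ = listener.accept()

client.setblocking(False)                      # non-blocking underlying socket
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False                     # sketch only: no verification
ctx.verify_mode = ssl.CERT_NONE
tls = ctx.wrap_socket(client, do_handshake_on_connect=False)

try:
    tls.do_handshake()                         # sends ClientHello, then tries
    want_read = False                          # to read the server's reply...
except ssl.SSLWantReadError:
    want_read = True                           # ...which isn't there yet
```

At this point the correct move is to select() for readability on the socket and retry the handshake, exactly as described in the first answer above.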
Using BIO_set_nbio() with either BIO_new_socket() or BIO_new_connect()/BIO_new_accept() is probably less code than creating memory BIOs. I'm not sure if there's anything more standard than that. The docs explain this in more detail:
http://www.openssl.org/docs/crypto/BIO_s_connect.html