DecryptMessage (Schannel). Dealing with empty output buffers - windows

I'm trying to use Schannel SSPI to send/receive data over SSL connection, using sockets.
I have some questions on DecryptMessage()
1) MSDN says that sometimes the application will receive data from the remote party and successfully decrypt it using DecryptMessage(), but the output data buffer will be empty. This is normal and the application must be able to deal with it. (As I understand it, "empty" means SecBuffer::cbBuffer == 0.)
How should I deal with it? I'm trying to create a (secure) srecv() function, a replacement for the winsock recv() function, so I cannot just return 0: the calling application would think the remote party has closed the connection. Should I try to receive another encrypted block from the connection and try to decrypt it?
2) And another question. After successfully decrypting data with DecryptMessage (return value = SEC_E_OK), I'm trying to find a SECBUFFER_DATA type buffer in the output buffers.
PSecBuffer pDataBuf = NULL;
for (int i = 1; i < 4; ++i) { // should I always start with 1?
    if (NULL == pDataBuf && SECBUFFER_DATA == buffers[i].BufferType) {
        pDataBuf = buffers + i;
    }
}
What if I don't find a data buffer? Should I treat that as an error, or should I again try to receive another encrypted block and decrypt it? (I've seen several examples: in one they retried the receive, in another they reported an error.)

It appears that you are attempting to replicate the blocking recv() function for your Schannel secure socket implementation. In that case you have no choice but to return 0. An alternative would be to pass a callback to your implementation and only invoke it when SecBuffer::cbBuffer > 0.
Yes, always start at index 1 when checking the remaining buffers. Your for loop is missing a check for SECBUFFER_EXTRA. If there is extra data and its cbBuffer size is > 0, you need to call DecryptMessage() again with the extra data placed in index 0. If your srecv() is blocking and you don't implement a callback function (for decrypted data handed to the application layer), then you will have to append the results of DecryptMessage() for each SECBUFFER_DATA received in the loop before returning the aggregate to the calling application.
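To make that concrete, here is a minimal sketch of such a decrypt loop in C. It assumes an established security context ctx (CtxtHandle) and that recvBuf/recvLen already hold ciphertext read from the socket; out and append() are hypothetical helpers, and the incomplete-message and error paths are only stubbed:

#define SECURITY_WIN32
#include <windows.h>
#include <security.h>

SecBuffer buffers[4];
SecBufferDesc desc = { SECBUFFER_VERSION, 4, buffers };
char *cur = recvBuf;                              /* assumed: ciphertext from the socket */
ULONG curLen = recvLen;
for (;;) {
    buffers[0].BufferType = SECBUFFER_DATA;       /* [0] holds the ciphertext on input */
    buffers[0].pvBuffer = cur;
    buffers[0].cbBuffer = curLen;
    buffers[1].BufferType = SECBUFFER_EMPTY;
    buffers[2].BufferType = SECBUFFER_EMPTY;
    buffers[3].BufferType = SECBUFFER_EMPTY;
    SECURITY_STATUS ss = DecryptMessage(&ctx, &desc, 0, NULL);
    if (ss == SEC_E_INCOMPLETE_MESSAGE) break;    /* need more ciphertext from the socket */
    if (ss != SEC_E_OK) break;                    /* handle errors / SEC_I_CONTEXT_EXPIRED */
    PSecBuffer data = NULL, extra = NULL;
    for (int i = 1; i < 4; ++i) {                 /* [0] was consumed; scan 1..3 */
        if (!data && buffers[i].BufferType == SECBUFFER_DATA) data = &buffers[i];
        if (!extra && buffers[i].BufferType == SECBUFFER_EXTRA) extra = &buffers[i];
    }
    if (data && data->cbBuffer > 0)               /* may legitimately be absent or empty */
        append(out, data->pvBuffer, data->cbBuffer);  /* hypothetical helper */
    if (extra && extra->cbBuffer > 0) {           /* leftover ciphertext: decrypt it too */
        cur = (char *)extra->pvBuffer;
        curLen = extra->cbBuffer;
        continue;
    }
    break;                                        /* no extra data: done */
}

A missing or zero-length SECBUFFER_DATA here is the "empty output" case from question 1: it is not an error, there is simply nothing to hand to the caller yet.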

Related

Trying to send a FIX api message to ctrader server using Ruby but receiving no response

Trying to see if I can get a response from ctrader server.
Getting no response and seems to hang at "s.recv(1024)". So not sure what could be going wrong here. I have limited experience with sockets and network coding.
I have checked my login credentials and all seems ok.
Note: I am aware of many FIX engines that are available for this purpose, but I wanted to try this on my own.
ctrader FIX guides
require 'socket'
hostname = "h51.p.ctrader.com"
port = 5201
#constructing a fix message to see what ctrader server returns
#8=FIX.4.4|9=123|35=A|49=demo.ctrader.*******|56=cServer|57=QUOTE|50=QUOTE|34=1|52=20220127-16:49:31|98=0|108=30|553=********|554=*******|10=155|
fix_message = "8=FIX.4.4|9=#{bodylengthsum}|" + bodylength + "10=#{checksumcalc}|"
s = TCPSocket.new(hostname, port)
s.send(fix_message.force_encoding("ASCII"),0)
print fix_message
puts s.recv(1024)
s.close
Sockets are blocking on read by default. When you call recv, that call will block if no data is available.
The fact that your recv call is not returning anything is an indication that the server did not send you any reply at all; the call is blocking while waiting for incoming data.
If you used read instead, the call would block until all the requested data has been received.
So calling recv(1024) will block until 1 or more bytes are available.
Calling read(1024) will block until all 1024 bytes have been received.
Note that you cannot rely on a single recv call to return a full message, even if the sender sent you everything you need. Multiple recv calls may be required to construct the full message.
Also note that the FIX protocol gives the msg length at the start of each message. So after you get enough data to see the msg length, you could call read to ensure you get the rest.
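In C, that "read the rest" loop looks roughly like this (a sketch; the same logic translates directly to Ruby's recv):

#include <sys/socket.h>

/* Keep calling recv() until exactly len bytes have arrived (or the peer closes). */
int recv_all(int fd, char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0) return -1;   /* error, or the peer closed the connection */
        got += (size_t)n;
    }
    return 0;
}

For FIX, you would first receive enough bytes to parse the BodyLength (tag 9) field, then use a loop like this to collect the remainder of the message plus the checksum.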
If you do not want your recv or read calls to block when no data (or incomplete data) is available, then you need to use non-blocking IO for your reads instead. This is a complex topic which you need to research, but it is often used when you don't want to block and need to read arbitrary-length messages. You can look here for some tips.
Another option would be to use something like EventMachine instead, which makes it easier to deal with sockets in situations like this, without having to worry about blocking in your code.

Two-way-binding for golang structs

TLDR: Can I register callback functions in golang to get notified if a struct member is changed?
I would like to create a simple two-way-binding between a go server and an angular client. The communication is done via websockets.
Example:
Go:
type SharedType struct {
    A int
    B string
}
sharedType := &SharedType{}
...
sharedType.A = 52
JavaScript:
var sharedType = {A: 0, B: ""};
...
sharedType.A = 52;
Idea:
In both cases, after modifying the values, I want to trigger a custom callback function, send a message via the websocket, and update the value on the client/server side accordingly.
The sent message should only state which value changed (the key / index) and what the new value is. It should also support nested types (structs, that contain other structs) without the need of transmitting everything.
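For example, a change notification for the assignments shown above might be as small as this (a purely hypothetical wire format; the field names are illustrative):
{"path": "A", "value": 52}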
On the client side (angular), I can detect changes of JavaScript objects by registering a callback function.
On the server side (golang), I could create my own map[] and slice[] implementations to trigger callbacks every time a member is modified (see the Cabinet class in this example: https://appliedgo.net/generics/).
Within these callback-functions, I could then send the modified data to the other side, so two-way binding would be possible for maps and slices.
My Question:
I would like to avoid things like
sharedType.A = 52
sharedType.MemberChanged("A")
// or:
sharedType.Set("A", 52) //.. which is equivalent to map[], just with a predifined set of allowed keys
Is there any way in golang to get informed if a struct member is modified? Or is there any other, generic way for easy two-way binding without huge amounts of boiler-plate code?
No, it's not possible.
But the real question is: how do you propose to handle all that magic in your Go program?
Suppose what you'd like to have were indeed possible.
Now an innocent assignment
v.A = 42
would, among other things, trigger sending data over a websocket connection to the client.
Now what happens if the connection is closed (client disconnected) and the sending fails?
What happens if sending fails to complete before a deadline is reached?
OK, suppose you get it at least partially right and actual modification of the local field happens only if sending succeeds.
Still, how should sending errors be handled?
Say, what should happen if the third assignment in
v.A = 42
v.B = "foo"
v.C = 1e10-23
fails?
You could try using server-sent events (SSE) to push realtime data to the frontend, while sending a single POST request with your changes. That way you can monitor on the backend and send data every second.

Libevent does not echo properly when there is a delay

Based on the following code, I built a version of an echo server, but with a threaded delay. This was built because I've noticed that upon initial connection, my first send is sent back to the client, but the client does not receive it until a second send. My real-world use case is that I need to send messages to the server, do a lot of processing, and then send the result back... say 10-30 seconds later (could be hours in some cases).
http://www.wangafu.net/~nickm/libevent-book/Ref8_listener.html
So here is my code. For brevity's sake, I have only included the libevent-related code; not the threading code or other stuff. When debugging, a new connection is set up, the string buffer is filled properly, and debugging reveals that the writes go successfully.
http://pastebin.com/g02S2RTi
But I only receive the echo from the send-before-last. I send numbers from the client to validate this: when I send a 1 from the client, I receive nothing from the server via echo... even though the server is definitely writing to the buffer using evbuffer_add (I have also tried this using bufferevent_write_buffer).
From the client, when I send a 2, I then receive the 1 from the previous send. It's like my writes are being cached... I have turned off Nagle.
So, my question is: Does libevent cache sends using the following method?
evbuffer_add( outputBuffer, buffer, length );
Is there a way to flush this cache? Is there some other method to mark the cache as finished or complete? Can I force a send? It never sends on its own... I have even put in delays. Replacing evbuffer_add with "send" works perfectly every time.
Most likely you are affected by the Nagle algorithm: basically, it buffers outgoing data before sending it to the network. Take a look at this article: TCP/IP options for high-performance data transmission.
Here is an example how to disable buffering:
int flag = 1;
int result = setsockopt(sock,           /* socket affected */
                        IPPROTO_TCP,    /* set option at TCP level */
                        TCP_NODELAY,    /* name of option */
                        (char *) &flag, /* the cast is historical cruft */
                        sizeof(int));   /* length of option value */
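If the socket is wrapped in a libevent bufferevent, you would apply this to the underlying descriptor. A sketch, assuming libevent 2.x (bufferevent_getfd() returns the socket behind the bufferevent):

#include <event2/bufferevent.h>
#include <event2/util.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static void disable_nagle(struct bufferevent *bev)
{
    int flag = 1;
    evutil_socket_t fd = bufferevent_getfd(bev);  /* socket behind the bufferevent */
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, (char *)&flag, sizeof(flag));
}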

sendto() dgrams do not block for ENOBUFS on OSX

This is more of an observation, and also a request for suggestions about the best way to handle this scenario.
I have two threads: one just pumps in data, and another receives the data and does a lot of work before sending it to another socket. The two threads are connected via a domain socket. The protocol used here is UDP. I did not want to use TCP because it is stream-based, which means that if there is little space in the queue, my data gets split and sent. This is bad, as I am sending data that must not be split. Hence I used DGRAM. Interestingly, when the send thread overwhelms the recv thread by pumping in so much data, at some point the domain socket buffer fills up and sendto() returns ENOBUFS. I was of the opinion that, should this happen, sendto() would block until the buffer is available. That would be my desired behaviour. However, this does not seem to be the case. I solve this problem in a rather weird way.
CPU Yield method
If I get ENOBUFS, I do a sched_yield(), as there is no pthread_yield() on OSX. After that I try to send again, and if that fails I keep doing the same until the datagram is accepted. This is bad, as I am wasting CPU cycles doing something useless. I would love it if sendto() blocked.
Sleep method
I tried to solve the same issue using sleep(1) instead of sched_yield(), but this is of no use, as sleep() would put my whole process to sleep instead of just the send thread.
Neither of these works for me, and I am running out of options. Can someone suggest the best way to handle this issue? Are there some clever tricks I am not aware of that can reduce the unnecessary CPU cycles? By the way, what the man page says about sendto() is wrong, based on this discussion: http://lists.freebsd.org/pipermail/freebsd-hackers/2004-January/005385.html
The UDP code in the kernel:
The udp_output function in /sys/netinet/udp_usrreq.c seems clear:
/*
 * Calculate data length and get a mbuf
 * for UDP and IP headers.
 */
M_PREPEND(m, sizeof(struct udpiphdr), M_DONTWAIT);
if (m == 0) {
    error = ENOBUFS;
    if (addr)
        splx(s);
    goto release;
}
I'm not sure why sendto() isn't blocking for you... but you might try calling this function before each call to sendto():
#include <stdio.h>
#include <sys/select.h>

// Won't return until there is space available on the socket for writing
void WaitUntilSocketIsReadyForWrite(int socketFD)
{
    fd_set writeSet;
    FD_ZERO(&writeSet);
    FD_SET(socketFD, &writeSet);
    if (select(socketFD + 1, NULL, &writeSet, NULL, NULL) < 0) perror("select");
}
Btw how big are the packets that you are trying to send?
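For what it's worth, a retry loop built on the helper above might look like this (a sketch; error handling kept minimal). Note that select() reporting the socket as writable is not a hard guarantee that ENOBUFS cannot recur, which is why the loop keeps retrying:

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>

/* Keep retrying a datagram send, waiting for buffer space instead of spinning. */
void send_or_wait(int fd, const void *buf, size_t len,
                  const struct sockaddr *addr, socklen_t addrlen)
{
    for (;;) {
        if (sendto(fd, buf, len, 0, addr, addrlen) >= 0) return;  /* accepted */
        if (errno != ENOBUFS) { perror("sendto"); return; }       /* real error */
        WaitUntilSocketIsReadyForWrite(fd);  /* queue full: block in select() */
    }
}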
sendto() on OS X is really nonblocking (that is what the M_DONTWAIT flag is for).
I suggest you use a stream-based connection and just receive the whole data on the other side by using the MSG_WAITALL flag of the recv function. If your data has a strict structure, then it is simple: just pass the correct size to recv. If not, first send a fixed-size control packet with the size of the next chunk of data, and then the data itself. On the receiver side, you wait for the fixed-size control packet and then for data of the size given in the control packet.
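A sketch of the receiving side of that control-packet scheme, assuming a connected stream socket and a 4-byte length header sent in network byte order (error handling kept minimal):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/socket.h>

/* Receive one length-prefixed message: fixed-size header first, then the payload. */
char *recv_frame(int fd, uint32_t *out_len)
{
    uint32_t len;
    if (recv(fd, &len, sizeof(len), MSG_WAITALL) != (ssize_t)sizeof(len))
        return NULL;
    len = ntohl(len);                       /* header was sent in network byte order */
    char *buf = malloc(len);
    if (buf == NULL) return NULL;
    if (recv(fd, buf, len, MSG_WAITALL) != (ssize_t)len) {
        free(buf);
        return NULL;
    }
    *out_len = len;
    return buf;
}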

WinSock recv() timeout: setsockopt()-set value + half a second?

I am writing a cross-platform library which, among other things, provides a socket interface, and while running my unit-test suite, I noticed something strange with regard to timeouts set via setsockopt(): on Windows, a blocking recv() call seems to consistently return about half a second (500 ms) later than specified via the SO_RCVTIMEO option.
Is there any explanation for this in the docs I missed? Searching the web, I was only able to find a single other reference to the problem; could somebody who owns »Windows Sockets Network Programming« by Bob Quinn and Dave Shute look up page 466 for me? Unfortunately, I can only run my tests on Windows Server 2008 R2 right now. Does the same strange behavior exist on other Windows versions as well?
From Network Programming for Microsoft Windows by Jones and Ohlund:
SO_RCVTIMEO
optval type: int
Get/Set: both
Winsock version: 1+
Description: gets or sets the timeout value (in milliseconds) associated with receiving data on the socket.
The SO_RCVTIMEO option sets the receive timeout value on a blocking socket. The timeout value is an integer in milliseconds that indicates how long a Winsock receive function should block when attempting to receive data. If you need to use the SO_RCVTIMEO option and you use the WSASocket function to create the socket, you must specify WSA_FLAG_OVERLAPPED as part of WSASocket's dwFlags parameter. Subsequent calls to any Winsock receive function (such as recv, recvfrom, WSARecv, or WSARecvFrom) block only for the amount of time specified. If no data arrives within that time, the call fails with the error 10060 (WSAETIMEDOUT). If the receive operation does time out, the socket is in an indeterminate state and should not be used.
For performance reasons, this option was disabled in Windows CE 2.1. If you attempt to set this option, it is silently ignored and no failure returns. Previous versions of Windows CE do implement this option.
I'd think the crucial information in this is:
If you need to use the SO_RCVTIMEO option and you use the WSASocket function to create the socket, you must specify WSA_FLAG_OVERLAPPED as part of WSASocket's dwFlags parameter.
I hope this is still useful :)
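For reference, setting the option itself looks like this on Windows, where the value is an int/DWORD in milliseconds (a sketch; error checking elided):

#include <winsock2.h>

DWORD timeout_ms = 2000;  /* intended receive timeout: 2000 ms */
setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
           (const char *)&timeout_ms, sizeof(timeout_ms));

With the behavior described in the question, a blocking recv() on this socket would then return roughly 500 ms later than timeout_ms.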
I am having the same problem. I am going to use
patchedTimeout = max(unpatchedTimeout - 500, 1)
Tested this with unpatchedTimeout == 850.
Your problem is not in the recv function's timeout!
If your application has a while loop to check and receive, just put an if statement that checks the last index of the receive buffer for a '\0' character, to see whether the received string has ended or not.
Typically, while recv is still receiving, its return value is the size of the received data; that size can be used to index the buffer array.
do {
    result = recv(s, buf, len, 0);
    if (result > 0 && buf[result - 1] == '\0') {
        break;
    }
} while (result > 0);
