Error with read and write in client-server programming

I'm working on a C program in Linux and need to use client-server programming. I used read and write and it worked fine, but after using more than 20 reads and writes in both the server and the client, it has stopped working; that is, I don't receive any output for that line. I don't understand what the problem is, because I am using the very same lines. On the server side:
bzero(&hl,200);
read(a,hl,50*sizeof(char));
printf("%s",hl);
On the client side:
bzero(&hl,200);
strcpy(hl,"hello");
write(a,hl,50*sizeof(char));
printf("%s",hl);
Also, I am not able to get the return value and print it. When I ran this on Debian, I got the return value and was able to print it. Now I am on Ubuntu (at home) and it does not print the return value, with no error either. Does it have anything to do with the OS?
Please help me figure out the problem.
UPDATED:
In server,
int c;
s=read(a,&c,sizeof(int));
printf("choice: %d",c);
In client,
scanf("%d",&ch);
s=write(sd,&ch,sizeof(int));
Both have size 4, but I get a garbage value when I print the choice.

You throw away the return value of read, so you have no idea how many bytes you've read. What if it's less than 50?
Change:
read(a,hl,50*sizeof(char));
To:
int readBytes = 0;
do
{
    int r = read(a, hl + readBytes, 50 - readBytes);
    if (r <= 0)
        return; // or however you want to handle an error
    readBytes += r;
} while (readBytes < 50);
That will ensure you actually read 50 bytes.
You are imagining that TCP somehow "glues" those 50 bytes together into a message. But the system has no idea that those 50 bytes are a message; only your code does, so it is your code's job to glue them back together. TCP does not preserve application message boundaries.
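The same caveat applies in the other direction: write() may also transfer fewer bytes than you asked for, so a robust sender loops as well. A minimal sketch, assuming a connected socket descriptor fd and a caller-supplied buffer (names are illustrative, not from the question):
#include <unistd.h>
#include <stddef.h>

/* Write exactly len bytes to fd, looping over short writes.
   Returns 0 on success, -1 on error. */
static int write_all(int fd, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len)
    {
        ssize_t w = write(fd, buf + sent, len - sent);
        if (w <= 0)
            return -1; /* error, or nothing could be written */
        sent += (size_t)w;
    }
    return 0;
}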

Related

Data Sorting method for TCP server

I currently have a working TCP connection between Unity (client) and an Android app (server). Unity handles sending my controller's joystick data to the server.
I have 2 joysticks on my controller, and both of them return a Vector2 output (x, y); in total I will receive 4 float values if I use both at the same time.
My current method is to individually convert each value into a byte array and send them to the server one by one. Below is my code to send the left joystick's x and y values from the client to the server:
float fData_Lx = LeftJoy.GetAxis(SteamVR_Input_Sources.Any).x;
float fData_Ly = LeftJoy.GetAxis(SteamVR_Input_Sources.Any).y;
byte[] clientMessageAsByteArray_1 = BitConverter.GetBytes(fData_Lx);
byte[] clientMessageAsByteArray_2 = BitConverter.GetBytes(fData_Ly);
Array.Reverse(clientMessageAsByteArray_1); // Flip - Reverse from little Endian to Big Endian (C# -> Java)
Array.Reverse(clientMessageAsByteArray_2);
// Write byte array to socketConnection stream.
stream.Write(clientMessageAsByteArray_1, 0, clientMessageAsByteArray_1.Length);
stream.Write(clientMessageAsByteArray_2, 0, clientMessageAsByteArray_2.Length);
Server code to receive the data:
in = new DataInputStream(new BufferedInputStream(socket.getInputStream()));
in.read(cData);
float f = ByteBuffer.wrap(cData).order(ByteOrder.BIG_ENDIAN).getFloat(); //Use to read single Data
System.out.println("Data: " + f);
Therefore, the Server will receive 2 floats at the same time and it won't know which float is the x and which float is the y.
I want to ask if there is a solution to sort these floats on the server side so that it can understand these float values. I'm planning to send 4 floats at the same time, as I am going to use both of my joysticks.
Really appreciate your help (Please don't be mad at me, I know my logic to receive the data is really bad)
if there is a solution to sort these floats at the Server side so that it can understand these float values
-> No!
The server only knows that it received a byte[]; if you didn't tell it that it is actually a float, it wouldn't even know that fact.
However this is TCP. So actually you can rely on two facts:
Everything that is sent in a certain order is received in the exact same order.
There is an individual TCP socket for each of your clients.
These two should be enough to exactly identify your input:
Simply make sure the server reads not one but two consecutive float values (first x, then y; that is exactly the order your client sends them in). As long as you haven't received both, your input is not valid anyway, so wait until you have definitely received x and y.
On the server side you already know which of the existing client sockets you received the data from, so once you have read the two floats you know exactly who sent these values and which client you have to apply them to.
Is there a special reason why you reverse the arrays on the client side and then tell the server to receive them in big-endian? Why not simply tell the server to receive little-endian and, on the client side, only reverse the array inside if (!BitConverter.IsLittleEndian)?
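To make the byte layout concrete: the receiver has to pull exactly 8 bytes off the stream (two 4-byte big-endian floats, x first, then y) before it has one valid sample; on the Java side, DataInputStream.readFloat() already reads one big-endian float for you. The following is only a rough sketch of that receiving logic in plain C, with every name invented for illustration:
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* Read exactly len bytes from fd, looping over short reads. */
static int read_exact(int fd, unsigned char *buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t r = read(fd, buf + got, len - got);
        if (r <= 0)
            return -1;
        got += (size_t)r;
    }
    return 0;
}

/* Receive one joystick sample: two consecutive big-endian IEEE-754
   floats, x first and then y, exactly as the client sends them. */
static int read_joystick(int fd, float *x, float *y)
{
    unsigned char raw[8];
    uint32_t bits_x, bits_y;
    if (read_exact(fd, raw, sizeof raw) != 0)
        return -1;
    bits_x = ((uint32_t)raw[0] << 24) | ((uint32_t)raw[1] << 16) |
             ((uint32_t)raw[2] << 8)  |  (uint32_t)raw[3];
    bits_y = ((uint32_t)raw[4] << 24) | ((uint32_t)raw[5] << 16) |
             ((uint32_t)raw[6] << 8)  |  (uint32_t)raw[7];
    memcpy(x, &bits_x, sizeof *x);   /* reinterpret the bit patterns */
    memcpy(y, &bits_y, sizeof *y);
    return 0;
}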

How to set up a ZeroMQ request-reply between a c# and python application

I'm trying to communicate between a C# (5.0) and a Python (3.9) application via ZeroMQ. For .NET I'm using NetMQ, and for Python, PyZMQ.
I have no trouble letting two applications communicate as long as they are in the same language:
c# app to c# app;
python -> python;
java -> java,
but trouble starts when I try to connect between different languages.
java -> c# and reverse works fine as well [edited]
I do not get any errors, but it does not work either.
I first tried the PUB-SUB archetype pattern, but as that didn't work, I tried REQ-REP, so some remnants of the "PUB-SUB" version can still be found in the code.
My Python code looks like this:
def run(monitor: bool):
    loop_counter: int = 0
    context = zmq.Context()
    # socket = context.socket(zmq.PUB)
    # socket.bind("tcp://*:5557")
    socket = context.socket(zmq.REP)
    socket.connect("tcp://localhost:5557")
    if monitor:
        print("Connecting")
    # 0 = Longest version, 1 = shorter version, 2 = shortest version
    length_version: int = 0
    print("Ready and waiting for incoming requests ...")
    while True:
        message = socket.recv()
        if monitor:
            print("Received message:", message)
        if message == "long":
            length_version = 0
        elif message == "middle":
            length_version = 1
        else:
            length_version = 2
        sys_info = get_system_info(length_version)
        """if not length_version == 2:
            length_version = 2
            loop_counter += 1
            if loop_counter == 15:
                length_version = 1
            if loop_counter > 30:
                loop_counter = 0
                length_version = 0"""
        if monitor:
            print(sys_info)
        json_string = json.dumps(sys_info)
        print(json_string)
        socket.send_string(json_string)
My C# code:
static void Main(string[] args)
{
    //using (var requestSocket = new RequestSocket(">tcp://localhost:5557"))
    using (var requestSocket = new RequestSocket("tcp://localhost:5557"))
    {
        while (true)
        {
            Console.WriteLine($"Running the server ...");
            string msg = "short";
            requestSocket.SendFrame(msg);
            var message = requestSocket.ReceiveFrameString();
            Console.WriteLine($"requestSocket : Received '{message}'");
            //Console.ReadLine();
            Thread.Sleep(1_000);
        }
    }
}
Given when your problems started, maybe it is a question of versions.
I have been running a program fine for a long time, communicating between Windows/C# with NetMQ 4.0.0.207 on one side and Ubuntu/Python with zeromq 4.3.1 and pyzmq 18.1.0 on the other.
I just tried updating to the same NetMQ version but with the newer zeromq 4.3.3 and pyzmq 20.0.0, and there is a problem/bug somewhere; it doesn't run well anymore.
So your code doesn't look bad; maybe it's a software version issue. Try NetMQ 4.0.0.207 on the C# side and zeromq 4.3.1 with pyzmq 18.1.0 on the Python side.
Q : "How to set up a ZeroMQ request-reply between a c# and python application"
The problem starts with a misunderstanding of how the REQ/REP archetype works.
Your code uses the blocking form of the .recv() method, so you leave yourself hanging, forever and unsalvageably, whenever the REQ/REP two-step runs into trouble (no due care was taken to prevent this infinite live-lock).
Rather, start using the .poll() method to test for the presence or absence of a message on the local AccessNode side of the queue. This leaves you able to decide, statefully, what to do next depending on whether a message is present yet, and to keep the mandatory, API-defined sequence that "zips" successful chains of REQ-side .send()-.recv()-.send()-.recv()-... calls with REP-side .recv()-.send()-.recv()-.send()-... calls, because the REQ/REP archetype works as a distributed finite-state automaton (dFSA) that can easily deadlock itself if the "remote" side does not comply with the local side's expectations.
Code that works in a non-blocking, .poll()-based mode avoids falling into these traps, as you can handle each of these unwanted circumstances while still in control of the code-execution paths (something a call to a blocking-mode method, in the blind belief that it will return at some future point in time, if ever, simply cannot do).
Q.E.D.
If in doubt, one may use the PUSH/PULL archetype, as the PUB/SUB archetype may run into problems with non-matching subscriptions (topic-list management being another, version-dependent detail).
There ought to be no other problem with any of the language bindings, provided they expose all the documented ZeroMQ API features without creating any "shortcuts". Some cases have been seen where a language-specific binding took "another" direction for PUB/SUB, transforming a plain message into a multi-part message by putting the topic into the first frame and the payload into the other. That is an example of a binding not compatible with the ZeroMQ API, where cross-language or mismatched binding-version problems are sure to appear.
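As a rough illustration of that polling idea, here is a minimal sketch using the plain C libzmq API (pyzmq exposes the same mechanism through zmq.Poller, and NetMQ offers comparable non-blocking receive facilities); the endpoint, timeout and reply are only placeholders, and error handling is omitted:
#include <zmq.h>
#include <string.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *rep = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(rep, "tcp://*:5557");            /* bind on one side; peers connect to it */

    for (;;) {
        zmq_pollitem_t items[] = { { rep, 0, ZMQ_POLLIN, 0 } };
        int rc = zmq_poll(items, 1, 100);     /* wait at most 100 ms */
        if (rc > 0 && (items[0].revents & ZMQ_POLLIN)) {
            char buf[256] = { 0 };
            zmq_recv(rep, buf, sizeof(buf) - 1, 0);
            zmq_send(rep, "reply", 5, 0);     /* REP must answer before the next recv */
        }
        /* else: nothing arrived yet -- check shutdown flags, log, retry, ... */
    }
}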
Your port numbers do not match: the Python code uses 55557 and the C# code uses 5557.
I might be late, but this same thing happened to me. I have a python Subscriber using pyzmq and a C# Publisher using NetMQ.
After a few hours, it occurred to me that I needed to give the Publisher some time to connect. So a simple System.Threading.Thread.Sleep(500); after the Connect/Bind did the trick.

DecryptMessage (Schannel). Dealing with empty output buffers

I'm trying to use Schannel SSPI to send and receive data over an SSL connection, using sockets.
I have some questions about DecryptMessage():
1) MSDN says that sometimes the application will receive data from the remote party, then successfully decrypt it using DecryptMessage() but the output data buffer will be empty. This is normal and the application must be able to deal with it. (As I understand, "empty" means SecBuffer::cbBuffer==0)
How should I deal with it? I'm trying to create a (secure) srecv() function, a replacement for the Winsock recv() function, so I cannot just return 0, because the calling application will think that the remote party has closed the connection. Should I try to receive another encrypted block from the connection and try to decrypt it?
2) And another question. After successfully decrypting data with DecryptMessage (return value = SEC_E_OK), I'm trying to find a SECBUFFER_DATA type buffer in the output buffers.
PSecBuffer pDataBuf = NULL;
for (int i = 1; i < 4; ++i) { // should I always start with 1?
    if (NULL == pDataBuf && SECBUFFER_DATA == buffers[i].BufferType) {
        pDataBuf = buffers + i;
    }
}
What if I don't find a data buffer? Should I consider it an error, or should I again try to receive an encrypted block and decrypt it? (I saw several examples: in one of them they retried receiving data, in another they reported an error.)
It appears that you are attempting to replicate a blocking recv function for your Schannel secure socket implementation. In that case, you have no choice but to return 0. An alternative would be to pass a callback to your implementation and only call it when SecBuffer.cbBuffer > 0.
Yes, always start at index 1 when checking the remaining buffers. Your for loop is missing a check for SECBUFFER_EXTRA. If there is extra data and its cbBuffer size is > 0, then you need to call DecryptMessage again with the extra data placed into index 0. If your srecv is blocking and you don't implement a callback function (for passing decrypted data to the application layer), then you will have to append the results of DecryptMessage for each SECBUFFER_DATA received in the loop before returning the aggregate to the calling application.
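For what it's worth, here is a minimal sketch of that buffer scan, including the SECBUFFER_EXTRA check (plain Win32 C; the surrounding DecryptMessage call and receive loop are omitted, and the helper name is invented):
#define SECURITY_WIN32
#include <windows.h>
#include <security.h>

/* Scan the four output buffers after a successful DecryptMessage() call,
   looking for decrypted application data and for leftover ciphertext
   that belongs to the next record. */
static void scan_decrypt_output(SecBuffer buffers[4],
                                PSecBuffer *ppData, PSecBuffer *ppExtra)
{
    *ppData = NULL;
    *ppExtra = NULL;
    for (int i = 1; i < 4; ++i) { /* index 0 held the input record */
        if (*ppData == NULL && buffers[i].BufferType == SECBUFFER_DATA)
            *ppData = &buffers[i];
        if (*ppExtra == NULL && buffers[i].BufferType == SECBUFFER_EXTRA)
            *ppExtra = &buffers[i];
    }
    /* If *ppData is NULL or its cbBuffer is 0, this record carried no
       application data: receive and decrypt the next one.
       If *ppExtra is non-NULL, move its contents to the front of the
       receive buffer and call DecryptMessage on it again. */
}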

WinSock recv() timeout: setsockopt()-set value + half a second?

I am writing a cross-platform library which, among other things, provides a socket interface, and while running my unit-test suite, I noticed something strange with regard to timeouts set via setsockopt(): On Windows, a blocking recv() call seems to consistently return about half a second (500 ms) later than specified via the SO_RCVTIMEO option.
Is there any explanation for this in the docs that I missed? Searching the web, I was only able to find a single other reference to the problem – could somebody who owns »Windows Sockets Network Programming« by Bob Quinn and Dave Shute look up page 466 for me? Unfortunately, I can only run my tests on Windows Server 2008 R2 right now; does the same strange behavior exist on other Windows versions as well?
From Network Programming for Microsoft Windows by Jones and Ohlund:
SO_RCVTIMEO
Type: int
Get/Set: Both
Winsock Version: 1+
Description: Gets or sets the timeout value (in milliseconds) associated with receiving data on the socket.
The SO_RCVTIMEO option sets the receive timeout value on a blocking socket. The timeout value is an integer in milliseconds that indicates how long a Winsock receive function should block when attempting to receive data. If you need to use the SO_RCVTIMEO option and you use the WSASocket function to create the socket, you must specify WSA_FLAG_OVERLAPPED as part of WSASocket's dwFlags parameter. Subsequent calls to any Winsock receive function (such as recv, recvfrom, WSARecv, or WSARecvFrom) block only for the amount of time specified. If no data arrives within that time, the call fails with the error 10060 (WSAETIMEDOUT). If the receive operation does time out, the socket is in an indeterminate state and should not be used.
For performance reasons, this option was disabled in Windows CE 2.1. If you attempt to set this option, it is silently ignored and no failure returns. Previous versions of Windows CE do implement this option.
I'd think the crucial information in this is:
If you need to use the SO_RCVTIMEO option and you use the WSASocket function to create the socket, you must specify WSA_FLAG_OVERLAPPED as part of WSASocket's dwFlags parameter.
I hope this is still useful :)
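For context, this is roughly how the timeout in question gets applied; a minimal sketch assuming an already-created Winsock SOCKET (note that on Windows the option value is a DWORD in milliseconds, whereas on POSIX systems it is a struct timeval):
#include <winsock2.h>

/* Apply a receive timeout (in milliseconds) to a blocking socket.
   Returns 0 on success, SOCKET_ERROR on failure. */
static int set_recv_timeout(SOCKET s, DWORD timeout_ms)
{
    return setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
                      (const char *)&timeout_ms, sizeof(timeout_ms));
}
/* A subsequent blocking recv() should fail with WSAETIMEDOUT (10060)
   if no data arrives within roughly timeout_ms milliseconds. */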
I am having the same problem. I am going to use:
patchedTimeout = max(unpatchedTimeout - 500, 1)
Tested this with unpatchedTimeout == 850.
Your problem is not in the recv function's timeout!
If your application has a while loop to check and receive, just put an if statement in it that checks the last received byte for the '\0' character, to see whether the incoming string has ended or not.
Typically, while recv is still receiving, its return value is the size of the received data, so it can be used to index the buffer.
do {
    result = recv(s, buf, len, 0);
    if (result > 0 && buf[result - 1] == '\0') {
        break;
    }
} while (result > 0);

Sending Large Data > 1 MB through Windows Sockets viz using the Send function

I am looking to send a large message (> 1 MB) through the Windows Sockets send API. Is there an efficient way to do this? I do not want to loop and send the data in chunks. I have read somewhere that you can increase the socket buffer size and that could help. Could anyone please elaborate on this? Any help is appreciated.
You should, and in fact must, loop to send the data in chunks.
As explained in Beej's networking guide:
"send() returns the number of bytes actually sent out—this might be less than the number you told it to send! See, sometimes you tell it to send a whole gob of data and it just can't handle it. It'll fire off as much of the data as it can, and trust you to send the rest later."
This implies that even if you set the packet size to 1 MB, the send() function may not send all of it, and you are forced to loop until the total number of bytes sent by your calls to send() equals the number of bytes you are trying to send. In fact, the greater the size of the packet, the more likely it is that send() will not send it all.
Aside from all that, you don't want to send 1MB packets because if they get lost, you will have to transmit the entire 1MB packet again, whereas if you lost a 1K packet, retransmitting it is not a big deal.
In summary, you will have to loop your send() calls, and the receiver will even have to loop their recv() calls too. You will likely need to prepend a small header to each packet to tell the receiver how many bytes are being sent so the receiver can loop the appropriate number of times.
I suggest you take a look at Beej's network guide for more detailed info about send() and recv() and how to deal with this problem. It can be found at http://beej.us/guide/bgnet/output/print/bgnet_USLetter.pdf
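To illustrate that small-header idea on the receiving side, here is a rough sketch in plain C with Winsock types (the 4-byte network-order length prefix and all function names are just one possible convention, not something the send API requires):
#include <stdint.h>
#include <winsock2.h>

/* Receive exactly len bytes from s, looping over short reads.
   Returns 0 on success, -1 on error or if the peer closed early. */
static int recv_all(SOCKET s, char *buf, int len)
{
    int got = 0;
    while (got < len) {
        int r = recv(s, buf + got, len - got, 0);
        if (r <= 0)
            return -1;
        got += r;
    }
    return 0;
}

/* Receive one length-prefixed message: a 4-byte network-order length,
   then that many bytes of payload (the caller supplies the buffer). */
static int recv_message(SOCKET s, char *buf, int bufsize, int *msglen)
{
    uint32_t netlen;
    if (recv_all(s, (char *)&netlen, sizeof(netlen)) != 0)
        return -1;
    *msglen = (int)ntohl(netlen);
    if (*msglen < 0 || *msglen > bufsize)
        return -1;  /* message does not fit in the caller's buffer */
    return recv_all(s, buf, *msglen);
}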
Why don't you want to send it in chunks?
That's the way to do it in 99% of the cases.
What makes you think that sending in chunks is inefficient? The OS is likely to chunk large "send" calls anyway, and may coalesce small ones.
Likewise on the receiving side the client should be looping anyway as there's no guarantee of getting all the data in one go.
The Windows Sockets subsystem is not obliged to send the whole buffer you provide anyway. You can't force it, since some network-level protocols have an upper limit on the packet size.
As a practical matter, you can actually allocate a large buffer and send it in one call using Winsock. If you are not messing with socket buffer sizes, the buffer will generally be copied into kernel mode for sending anyway.
There is a theoretical possibility that it will return without sending everything, however, so you really should loop for correctness. The chunks you send should be large, though (64 KB or in that ballpark), to avoid repeated kernel transitions.
If you want to do a loop after all, you can use this C++ code:
#include <winsock2.h>
#include <string>
#include <cstring>

#define DEFAULT_BUFLEN 1452

int SendStr(const SOCKET &ConnectSocket, const std::string &str, int strlen)
{
    char sndbuf[DEFAULT_BUFLEN];
    int sndbuflen = DEFAULT_BUFLEN;
    int iResult;
    int count = 0;
    int len;
    while (count < strlen) {
        // Copy the next chunk (at most DEFAULT_BUFLEN bytes) into the send buffer
        len = min(strlen - count, sndbuflen);
        memcpy(sndbuf, str.data() + count, len);
        // Send the chunk; iResult is the number of bytes actually sent
        iResult = send(ConnectSocket, sndbuf, len, 0);
        if (iResult == SOCKET_ERROR) {
            throw WSAGetLastError();
        }
        if (iResult > 0) {
            count += iResult;
        } else {
            break;
        }
    }
    return count;
}
