I'm implementing a voice chat server for my Virtual Class e-learning application for Windows, which makes use of the Remote Desktop API.
So far I've been compressing the voice with Opus, and I've tested various options:
Passing the voice through the RDP virtual channel. This works, but it creates a lot of lag despite creating the channel with CHANNEL_PRIORITY_HI.
Using my own TCP (or UDP) voice server. For this option I have been wondering what the best implementation method would be.
Currently I'm forwarding each received UDP datagram to all other clients (later on I will do server-side mixing).
The problem with my current UDP voice server is that it has lag even within the same PC: one server and four clients connected, two of them with open mics, for example.
I get audible lag with this setup:
void VoiceServer(int port)
{
    XSOCKET Y = make_shared<XSOCKET>(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (!Y->Bind(port))
        return;

    auto VoiceServer2 = [&]()
    {
        OPUSBUFF o;
        char d[200] = { 0 };         // sender address (sockaddr storage)
        map<int, vector<char>> udps; // RDP ID -> sender address
                                     // NOTE: declared inside the lambda, so every
                                     // receiving thread keeps its own private copy
        for (;;)
        {
            // get a datagram
            int sle = sizeof(sockaddr_in6);
            int r = recvfrom(*Y, o.d, 4000, 0, (sockaddr*)d, &sle);
            if (r <= 0)
                break;

            // a MESSAGE is a header, opus data follows
            MESSAGE* m = (MESSAGE*)o.d;

            // have we received data from this client already?
            // m->arb holds the RDP ID of the user
            if (udps.find(m->arb) == udps.end())
            {
                vector<char>& uu = udps[m->arb];
                uu.resize(sle);
                memcpy(uu.data(), d, sle);
            }

            for (auto& att2 : aatts) // attendee list
            {
                long lxid = 0;
                att2->get_Id(&lxid);
#ifndef _DEBUG
                if (lxid == m->arb) // don't echo back to the sender
                    continue;
#endif
                // skip attendees whose address we haven't seen yet;
                // operator[] would otherwise insert an empty entry and
                // sendto() would be called with a zero-length address
                auto it = udps.find(lxid);
                if (it == udps.end())
                    continue;
                const vector<char>& uud = it->second;
                sendto(*Y, o.d + sizeof(MESSAGE), r - (int)sizeof(MESSAGE), 0,
                       (sockaddr*)uud.data(), (int)uud.size());
            }
        }
    };

    // 10 threads receiving (9 detached plus this one)
    for (int i = 0; i < 9; i++)
    {
        std::thread t(VoiceServer2);
        t.detach();
    }
    VoiceServer2();
}
Each client runs a VoiceServer thread:
void VoiceServer()
{
    char b[4000] = { 0 };
    vector<char> d2;
    for (;;)
    {
        int r = recvfrom(Socket, b, 4000, 0, 0, 0);
        if (r <= 0)
            break;
        d2.resize(r);
        memcpy(d2.data(), b, r);
        if (audioin && wout)
            audioin->push(d2); // this pushes the buffer to a waveOut writing class
        SetEvent(hPlayEvent);
    }
}
Is this because I'm testing on the same machine? With a TeamSpeak client I had set up in the past there was no lag whatsoever.
Thanks for your opinion.
From the Winsock sendto() documentation:
For message-oriented sockets, care must be taken not to exceed the maximum packet size of the underlying subnets, which can be obtained by using getsockopt to retrieve the value of socket option SO_MAX_MSG_SIZE. If the data is too long to pass atomically through the underlying protocol, the error WSAEMSGSIZE is returned and no data is transmitted.
A typical IPv4 header is 20 bytes and the UDP header is 8 bytes, so the theoretical limit (on Windows) for the maximum size of a UDP packet is 65507 bytes (0xFFFF - 20 - 8 = 65507). But is it actually a good idea to send such a large packet? If the packet size is too large, the network stack has to fragment it at the IP layer. Fragmentation consumes extra bandwidth and causes delay.
The MTU (maximum transmission unit) is actually a property of the link-layer protocol. An Ethernet II frame has the structure DMAC+SMAC+Type+Data+CRC; due to the electrical limitations of Ethernet transmission, each frame must be at least 64 bytes and cannot exceed 1518 bytes. Frames smaller or larger than these limits are treated as errors. Since the largest Ethernet II frame is 1518 bytes, subtracting the 14-byte frame header and the 4-byte CRC trailer leaves 1500 bytes for the data field. That is the MTU.
With an MTU of 1500 bytes, the maximum UDP payload that avoids IP-layer fragmentation is 1500 - IP header (20 bytes) - UDP header (8 bytes) = 1472 bytes. However, since the standard MTU on the Internet is 576 bytes, it is recommended to keep the UDP payload of each sendto/recvfrom within 576 - 8 - 20 = 548 bytes when programming UDP over the Internet.
So keep each send/receive small and send more datagrams instead of one large one.
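For completeness, a minimal Winsock sketch (untested; shown only to illustrate the getsockopt call quoted above):

#include <winsock2.h>
#include <cstdio>
#pragma comment(lib, "ws2_32.lib")

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;
    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    if (s == INVALID_SOCKET) return 1;

    // Ask the stack for the largest datagram it will accept on this socket.
    unsigned int maxMsg = 0;
    int len = sizeof(maxMsg);
    if (getsockopt(s, SOL_SOCKET, SO_MAX_MSG_SIZE, (char*)&maxMsg, &len) == 0)
        printf("SO_MAX_MSG_SIZE: %u bytes\n", maxMsg);
    // For voice, stay well below the MTU anyway: a single 20 ms Opus frame
    // is typically far smaller than the 548-byte guideline above.

    closesocket(s);
    WSACleanup();
    return 0;
}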
I'm running into an initial LIDAR connection issue when simultaneously connecting 4 Slamtec RPLIDAR A3 units using MATLAB
with the provided interface library found here: https://github.com/ENSTABretagneRobotics/Hardware-MATLAB
The issue is that I have to retry the connection on at least one of the LIDARs before it connects.
Which LIDAR it is can also vary; that is, all but one LIDAR connects the first time.
One time it could be the LIDAR on one COM port, another time it could be the LIDAR on another COM port.
This is the way it is set up right now.
Basically MATLAB loads the provided interface library, hardwarex.dll. That exposes some library methods to be used by MATLAB.
The method to connect the LIDAR does the following:
Opens the RS232 port
Sets port options
Gets some info and health statuses from the LIDAR
Sets the motor PWM to zero (stops the LIDAR motor)
Uses the express scan mode option
Somewhere in here the communication errors out.
Using a serial sniffer I was able to see that the LIDAR errors out after the following message to the LIDAR:
a5 f0 02 ff 03 ab a5 25 a5 82 05 00 00 00 00 00 22
Which I tracked to the following library methods, in that order
SetMotorPWMRequestRPLIDAR()
CheckMotorControlSupportRequestRPLIDAR()
StopRequestRPLIDAR()
StartExpressScanRequestRPLIDAR() <-- Error here
To which the LIDAR responds with:
a5 5a 54 00 00 40 82
Whereas a successful connection response from the LIDAR is much longer.
Things I've tried
Draining (forcing out all written data) the write buffer with the interface library's DrainComputerRS232Port() method before and/or after any write to the LIDAR.
Setting the TX/write OS FIFO buffer to FILE_FLAG_NO_BUFFERING (i.e. for WriteFile()).
Changing the hardware FIFO buffer from max (16) to min (1).
Using MATLAB's serial() command to flush any input or output buffers prior to loading the library or trying the connections.
This is the system and settings I am working with
Lidar (x4):
Slamtec RPLIDAR A3
Firmware 1.26
Connected via USB (no USB hub used)
No other COM port devices connected
Computer
OS: Windows 10 Pro - Build 1903
CPU: Intel Xeon 3.00Ghz
RAM: 64 GB
HD: SSD - 512GB NVMe
Serial Port Settings
Baud Rate: 256000
Timeout: 1000
Software
MATLAB R2018b (9.5.0)
I've been banging my head on the wall with this. Any help is much much appreciated!
I'm going to answer my own question. Anyone interested in a more detailed discussion, please refer to the issue posted on the MATLAB RPLIDAR repo:
https://github.com/ENSTABretagneRobotics/Hardware-MATLAB/issues/2
As I mentioned, when debugging, the error seemed to happen in ConnectRPLIDAR() --> StartExpressScanRequestRPLIDAR(), specifically here:
// Receive the first data response (2 data responses needed for angles computation...).
memset(pRPLIDAR->esdata_prev, 0, sizeof(pRPLIDAR->esdata_prev));
if (ReadAllRS232Port(&pRPLIDAR->RS232Port, pRPLIDAR->esdata_prev, sizeof(pRPLIDAR->esdata_prev)) != EXIT_SUCCESS)
{
    // Failure
    printf("A RPLIDAR is not responding correctly. \n");
    return EXIT_FAILURE;
}
What seemed to happen before that is that after the command was sent out in WriteAllRS232Port(), sometimes ReadAllRS232Port() would not read a response; esdata_prev would contain nothing.
We tried implementing an mSleep(500) delay before that second ReadAllRS232Port(), and it seemed to help (my guess is that the LIDAR was slow to respond), but the issue did not get resolved fully.
The following is what made it work every time with 4 lidars:
inline int StartExpressScanRequestRPLIDAR(RPLIDAR* pRPLIDAR)
{
    unsigned char reqbuf[] = { START_FLAG1_RPLIDAR,EXPRESS_SCAN_REQUEST_RPLIDAR,0x05,0,0,0,0,0,0x22 };
    unsigned char descbuf[7];
    unsigned char sync = 0;
    unsigned char ChkSum = 0;

    // Send request to output/tx OS FIFO buffer for the port
    if (WriteAllRS232Port(&pRPLIDAR->RS232Port, reqbuf, sizeof(reqbuf)) != EXIT_SUCCESS)
    {
        printf("Error writing data to a RPLIDAR. \n");
        return EXIT_FAILURE;
    }

    // Receive the response descriptor.
    memset(descbuf, 0, sizeof(descbuf)); // Clear the descriptor buffer
    if (ReadAllRS232Port(&pRPLIDAR->RS232Port, descbuf, sizeof(descbuf)) != EXIT_SUCCESS)
    {
        printf("A RPLIDAR is not responding correctly. \n");
        return EXIT_FAILURE;
    }

    // Quick check of the response descriptor.
    if ((descbuf[2] != 0x54) || (descbuf[5] != 0x40) || (descbuf[6] != MEASUREMENT_CAPSULED_RESPONSE_RPLIDAR))
    {
        printf("A RPLIDAR is not responding correctly. \n");
        return EXIT_FAILURE;
    }

    // Keep anticipating a port read buffer for 1.5 seconds
    int timeout = 1500;
    // Check it every 5 ms
    // Note on the checking period value:
    //   Waiting on 82 bytes in the lidar payload
    //   10 bits per byte for the serial communication
    //   820 bits / 256000 baud = 0.0032 s = 3.2 ms
    int checkingperiod = 5;
    RS232PORT* pRS232Port = &pRPLIDAR->RS232Port;
    int i;
    int count = 0;

    // Wait for something to show up on the input buffer on the port
    if (!WaitForRS232Port(&pRPLIDAR->RS232Port, timeout, checkingperiod))
    {
        // Success - something is there.
        // If anything is on the input buffer, wait until there is enough.
        count = 0;
        for (i = 0; i < 50; i++)
        {
            // Check the input FIFO buffer on the port
            GetFIFOComputerRS232Port(pRS232Port->hDev, &count);
            // Check if there is enough to get a full payload read
            if (count >= sizeof(pRPLIDAR->esdata_prev))
            {
                // There is enough, stop waiting
                break;
            }
            else
            {
                // Not enough, wait a little
                mSleep(checkingperiod);
            }
        }
    }
    else
    {
        // Failure - after waiting for an input buffer, it wasn't there
        printf("[StartExpressScanRequestRPLIDAR] : Failed to detect response on the input FIFO buffer. \n");
        return EXIT_FAILURE;
    }

    // Receive the first data response (2 data responses needed for angles computation...).
    memset(pRPLIDAR->esdata_prev, 0, sizeof(pRPLIDAR->esdata_prev));
    if (ReadAllRS232Port(&pRPLIDAR->RS232Port, pRPLIDAR->esdata_prev, sizeof(pRPLIDAR->esdata_prev)) != EXIT_SUCCESS)
    {
        // Failure
        printf("A RPLIDAR is not responding correctly. \n");
        return EXIT_FAILURE;
    }

    // Analyze the first data response.
    sync = (pRPLIDAR->esdata_prev[0] & 0xF0) | (pRPLIDAR->esdata_prev[1] >> 4);
    if (sync != START_FLAG1_RPLIDAR)
    {
        printf("A RPLIDAR is not responding correctly : Bad sync1 or sync2. \n");
        return EXIT_FAILURE;
    }

    ChkSum = (pRPLIDAR->esdata_prev[1] << 4) | (pRPLIDAR->esdata_prev[0] & 0x0F);
    // Force ComputeChecksumRPLIDAR() to compute until the last byte...
    if (ChkSum != ComputeChecksumRPLIDAR(pRPLIDAR->esdata_prev + 2, sizeof(pRPLIDAR->esdata_prev) - 1))
    {
        printf("A RPLIDAR is not responding correctly : Bad ChkSum. \n");
        return EXIT_FAILURE;
    }

    return EXIT_SUCCESS;
}
So in the above code, we wait up to 1.5 s for the OS read FIFO buffer to show something, checking every 5 ms (WaitForRS232Port()). If anything shows up, we then wait until there is enough data for a full payload read (GetFIFOComputerRS232Port()).
I'm not sure if it made a difference, but we also removed the OS write FIFO buffering by changing the CreateFile flags argument from 0 to FILE_FLAG_NO_BUFFERING:
File: OSComputerRS232Port.h
...
hDev = CreateFile(
    tstr,
    GENERIC_READ|GENERIC_WRITE,
    0,                      // Must be opened with exclusive-access.
    NULL,                   // No security attributes.
    OPEN_EXISTING,          // Must use OPEN_EXISTING.
    FILE_FLAG_NO_BUFFERING, // Not overlapped I/O. Should use FILE_FLAG_WRITE_THROUGH and maybe also FILE_FLAG_NO_BUFFERING?
    NULL                    // hTemplate must be NULL for comm devices.
);
...
In order to design our API/messages, I've made some preliminary tests with our data:
Protobuf V3 Message:
message TcpGraphes {
    uint32 flowId = 1;
    repeated uint64 curTcpWinSizeUl = 2; // max 3600 elements
    repeated uint64 curTcpWinSizeDl = 3; // max 3600 elements
    repeated uint64 retransUl = 4;       // max 3600 elements
    repeated uint64 retransDl = 5;       // max 3600 elements
    repeated uint32 rtt = 6;             // max 3600 elements
}
The message is built as a multipart message in order to add filter functionality for the client.
Tested with 10 Python clients: 5 running on the same PC (localhost), 5 running on an external PC.
The protocol used was TCP. About 200 messages were sent every second.
Results:
Local clients are working: they get every message.
Remote clients are missing some messages (throughput seems to be limited by the server to 1 Mbit/s per client).
Server code (C++):
// zeroMQ init
zmq_ctx = zmq_ctx_new();
zmq_pub_sock = zmq_socket(zmq_ctx, ZMQ_PUB);
zmq_bind(zmq_pub_sock, "tcp://*:5559");
every second, about 200 messages are sent in a loop:
std::string serStrg;
tcpG.SerializeToString(&serStrg);
// first part identifier: [flowId]tcpAnalysis.TcpGraphes
std::stringstream id;
id << It->second->first << tcpG.GetTypeName();
zmq_send(zmq_pub_sock, id.str().c_str(), id.str().length(), ZMQ_SNDMORE);
zmq_send(zmq_pub_sock, serStrg.c_str(), serStrg.length(), 0);
Client code (python):
ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, '')
sub.connect('tcp://x.x.x.x:5559')
print ("Waiting for data...")
while True:
    message = sub.recv() # first part (filter part, e.g. "134tcpAnalysis.TcpGraphes")
    print ("Got some data:", message)
    message = sub.recv() # second part (protobuf bin)
We have looked at the PCAP and the server doesn't use the full available bandwidth; I can add new subscribers or remove existing ones, and every remote subscriber still gets "only" 1 Mbit/s.
I've tested an iperf3 TCP connection between the two PCs and I reach 60 Mbit/s.
The PC that runs the Python clients has about 30% CPU load.
I've minimized the console where the clients are running in order to avoid the printouts, but it has no effect.
Is this normal behavior for the TCP transport layer (PUB/SUB pattern)? Does it mean I should use the EPGM protocol?
Config:
Windows XP for the server
Windows 7 for the Python remote clients
zmq version 4.0.4
A performance motivated interest ?
Ok, let's first use the resources a bit more adequately :
// //////////////////////////////////////////////////////
// zeroMQ init
// //////////////////////////////////////////////////////
zmq_ctx = zmq_ctx_new();
int aRetCODE = zmq_ctx_set( zmq_ctx, ZMQ_IO_THREADS, 10 );
assert( 0 == aRetCODE );
zmq_pub_sock = zmq_socket( zmq_ctx, ZMQ_PUB );
uint64_t aAFFINITY = 1023;  // zmq_setsockopt() takes a pointer to the value and its size
aRetCODE = zmq_setsockopt( zmq_pub_sock, ZMQ_AFFINITY, &aAFFINITY, sizeof( aAFFINITY ) );
//
// >>> print ( "[{0: >16b}]".format( 2**10 - 1 ) ).replace( " ", "." )
// [......1111111111]
//        ||||||||||
//        |||||||||+---- IO-thread 0
//        ||||||||+----- IO-thread 1
//        |......+------ IO-thread 2
//        ::  :  :
//        |+------------ IO-thread 8
//        +------------- IO-thread 9
//
// API-defined AFFINITY-mapping
Non-windows platforms with a more recent API can touch also scheduler details and tweak O/S-side priorities even better.
Networking ?
Ok, let's first use the resources a bit more adequately :
int aToS = <_a_HIGH_PRIORITY_ToS#_>;  // placeholder value kept from the original
aRetCODE = zmq_setsockopt( zmq_pub_sock, ZMQ_TOS, &aToS, sizeof( aToS ) );
Converting the whole infrastructure into epgm:// ?
Well, if one wishes to experiment and gets warranted resources for doing that E2E.
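If one does go down the epgm:// road, a hypothetical sketch of the PUB-side change could look as follows (the interface name and multicast group are placeholders, and libzmq must have been built with PGM support):

int aRATE = 10000;  // [kbit/s] PGM rate-limit; the ZMQ_RATE default is only 100
aRetCODE = zmq_setsockopt( zmq_pub_sock, ZMQ_RATE, &aRATE, sizeof( aRATE ) );
zmq_bind( zmq_pub_sock, "epgm://eth0;239.192.0.1:5559" );
// subscribers then zmq_connect() to the same epgm:// address

ZMQ_RATE is set before the bind so that it applies when the PGM connection is established.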
socketCAN connection: read() not fast enough
Hello,
I use the socket() connection for my CAN communication.
fd = socket(PF_CAN, SOCK_RAW, CAN_RAW);
I'm using 2 threads: one periodic 1 ms RT thread to send data and one thread to read the incoming messages. The read function looks like:
void readCan0Socket(void){
    int receivedBytes = 0;
    do
    {
        // set GPIO pin low
        receivedBytes = read(fd,
                             &receiveCanFrame[recvBufferWritePosition],
                             sizeof(struct can_frame));
        // reset GPIO pin high
        if (receivedBytes != 0)
        {
            if (receivedBytes == sizeof(struct can_frame))
            {
                recvBufferWritePosition++;
                if (recvBufferWritePosition == CAN_MAX_RECEIVE_BUFFER_LENGTH)
                {
                    recvBufferWritePosition = 0;
                }
            }
            receivedBytes = 0;
        }
    } while (1);
}
The socket is configured in blocking mode, so the read function blocks until a message arrives. The current implementation is working, but when I measure the time between reading a message and the next waiting state of the read function (see the set/reset GPIO comments) the time varies between 30 us (the mean value) and more than 200 us. A value greater than 200 us means (CAN has a baud rate of 1000 kbit/s) that frames are missed while read() is handling the previous message. The read() function must be ready again within 134 us.
How can I accelerate my implementation? I tried to use two threads separated by mutexes (lock before the read() function and unlock after a message reception), but this didn't solve my problem.
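Not part of the original post, but one common way to cut the per-frame syscall overhead on a Linux socketCAN raw socket like the fd above is to fetch several frames per system call with recvmmsg(). A hedged sketch (the batch size and function name are made up for this example):

#define _GNU_SOURCE
#include <sys/socket.h>
#include <linux/can.h>
#include <string.h>

#define BATCH 16

/* Blocks until at least one frame is available, then returns up to BATCH
 * frames in a single syscall; the return value is the number of frames. */
int readCanBatch(int fd, struct can_frame *frames)
{
    struct mmsghdr msgs[BATCH];
    struct iovec iovecs[BATCH];
    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < BATCH; i++) {
        iovecs[i].iov_base = &frames[i];
        iovecs[i].iov_len  = sizeof(struct can_frame);
        msgs[i].msg_hdr.msg_iov    = &iovecs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }
    return recvmmsg(fd, msgs, BATCH, 0, NULL);
}

Enlarging SO_RCVBUF on the socket also gives the reader thread more slack before frames are dropped.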
I've got two systems, both running Windows 7. The source is 192.168.0.87, the target is 192.168.0.22, they are both connected to a small switch on my desk.
The source is transmitting a burst of 100 UDP packets to the target with this program -
#include <iostream>
#include <vector>
using namespace std;
#include <winsock2.h>
int main()
{
    // It's Windows, we need this.
    WSAData wsaData;
    int wres = WSAStartup(MAKEWORD(2,2), &wsaData);
    if (wres != 0) { exit(1); }

    SOCKET s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s == INVALID_SOCKET) { exit(1); } // SOCKET is unsigned, compare against INVALID_SOCKET

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(0);
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { exit(3); }

    int max = 100;

    // build all the packets to send
    typedef vector<unsigned char> ByteArray;
    vector<ByteArray> v;
    v.reserve(max);
    for (int i = 0; i < max; i++) {
        ByteArray bytes(150 + (i % 25), 'a' + (i % 26));
        v.push_back(bytes);
    }

    // send all the packets out, one right after the other.
    addr.sin_addr.s_addr = htonl(0xC0A80016); // 192.168.0.22
    addr.sin_port = htons(24105);
    for (int i = 0; i < max; ++i) {
        if (sendto(s, (const char *)v[i].data(), (int)v[i].size(), 0,
                   (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            cout << "i: " << i << " error: " << WSAGetLastError(); // errno is not set by Winsock
        }
    }
    closesocket(s);
    cout << "Complete!" << endl;
}
Now, on first run I get massive losses of UDP packets (often only 1 will get through!).
On subsequent runs, all 100 make it through.
If I wait for 2 minutes or so, and run again, I'm back to losing most of the packets.
Reception on the target system is done using Wireshark.
I also ran Wireshark at the same time on the source system, and found exactly the same trace as on the target in all cases.
That means that the packets are getting lost on the source machine, rather than being lost in the switch or on the wire.
I also tried running Sysinternals Process Monitor, and found that indeed all 100 sendto calls do result in appropriate winsock calls, but not necessarily in packets on the wire.
As near as I can tell (using arp -a), in all cases the target's IP is in the source's arp cache.
Can anyone tell me why Windows is so inconsistent in how it treats these packets? I get that in my actual application I've just got to rate limit my sends a bit, but I'd like to understand why it works sometimes and not others.
Oh yes, and I also tried swapping the systems for send and receive, with no change in behavior.
Most probably the client is overrunning the UDP send buffer, perhaps while the ARP protocol is running to get the target MAC address. You say that you lose datagrams on the first run and again after waiting 2 minutes or more, which is consistent with the ARP cache entry expiring in between. Why don't you check with Wireshark what happens in that first run (whether ARP frames are sent/received)?
If that is the problem, you could apply one of these 2 alternatives:
1- Before running, make sure the ARP entry is there.
2- Send the first datagram, wait 1 sec or less, then send the burst (see the sketch below).
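For illustration, a minimal sketch of alternative 2, reusing the names (s, addr, v, max) from the program in the question; the 50 ms pause is a guessed value, not a measured one:

// Prime the ARP cache with a throwaway datagram, give resolution a moment
// to complete, then send the real burst back-to-back.
sendto(s, "x", 1, 0, (struct sockaddr *)&addr, sizeof(addr)); // dummy packet
Sleep(50); // from <windows.h>; a few tens of ms is plenty for ARP on a LAN
for (int i = 0; i < max; ++i) {
    sendto(s, (const char *)v[i].data(), (int)v[i].size(), 0,
           (struct sockaddr *)&addr, sizeof(addr));
}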
I got an RS232 signal capture device, and it is working great.
I need some help making sense of the data. Basically, we bought it because we are dealing with a late-80s machine controller that uses serial communication. We had little luck despite knowing the port parameters.
From the data I dumped, the machine control is using the break signal as part of its protocol. I am having trouble duplicating it using VB and MSComm. I know how to toggle the break signal on and off, but I am not sure what I am supposed to be doing with it. Am I supposed to leave it on for each byte of data I send, or send a byte of data and then toggle it?
Also, I am confused about how I am supposed to receive any data from the controller. Do I toggle a flag when the break is turned on and then read the input when it is turned off?
Michael Burr's description of the way break works is accurate. Often, "break" signals are sent for significantly longer than one character time.
These days, "Break" is infrequently used in serial comms, but the most common use is as a 'cheap' way of providing packet synchronization. "Break" may be sent before a packet starts, to alert the receiver that a new packet is on the way (and allow it to reset buffers, etc.) or at the end of a packet, to signal that no more data is expected. It's a kind of 'meta-character' in that it allows you to keep the full range of 8 or 7-bit values for packet contents, and not worry about how start or end of packet are delineated.
To send a break, typically you call SetCommBreak, wait an appropriate period (say, around 2 milliseconds at 9600 baud) then call ClearCommBreak. During this time you can't be sending anything else, of course.
So, assuming that the protocol requires 'break' at the start of the packet, I'd do this (sorry for pseudocode):-
procedure SendPacket(CommPort port, Packet packet)
{
    SetCommBreak(port)
    Sleep(2); // 2 milliseconds - assuming 9600 baud. Pro-rata for others
    ClearCommBreak(port)
    foreach(char in packet)
        SendChar(port, char)
}
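For reference, here is roughly what that pseudocode could look like against the real Win32 API. This is an untested sketch; SendPacketWithBreak and hPort are names invented for this example, with hPort assumed to be a handle opened via CreateFile on the COM port:

#include <windows.h>

// Send a ~2 ms break, then the packet bytes (sketch, not production code).
bool SendPacketWithBreak(HANDLE hPort, const char *data, DWORD len)
{
    if (!SetCommBreak(hPort))   // start holding the line in the 'space' state
        return false;
    Sleep(2);                   // ~2 ms at 9600 baud; pro-rata for other rates
    if (!ClearCommBreak(hPort)) // release the break condition
        return false;
    DWORD written = 0;
    return WriteFile(hPort, data, len, &written, NULL) && written == len;
}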
Pseudocode for a receiver is more difficult, because you have to make a load of assumptions about the incoming packet format and the API calls used to receive breaks. I'll write in C this time, and assume the existence of an imaginary function. WaitCommEvent is probably the key to handling incoming Breaks.
bool ReadCharOrBreak(char *ch); // return TRUE if break, FALSE if ch contains received char
We'll also assume fixed-length 100 byte packets with "break" sent before each packet.
void ReadAndProcessPackets()
{
    char buff[100];
    int count = 0;

    while (true)
    {
        char ch;
        if (ReadCharOrBreak(&ch))
            count = 0;      // start of packet - reset count
        else
        {
            if (count < 100)
            {
                buff[count++] = ch;
                if (count == 100)
                    ProcessPacket(buff);
            }
            else
                Error("too many bytes rx'd without break");
        }
    }
}
WARNING - totally untested, but should give you the idea...
For an example of a protocol using Break, check out the DMX-512 stage lighting protocol.
The start of a packet is signified by a Break followed by a "mark" (a logical one) known as the "Mark After Break" (MAB). The break signals the end of one packet and the start of the next. It causes the receivers to start reception. After the break, up to 513 slots are sent.
A break signal is an invalid character. When the RS-232 line is idle, the voltage is in the 'mark' (or '1') state (which is -12 volts, if I remember right). When a character is sent, the protocol toggles the line to the 'space' (or '0') state for one bit time (the start bit), then toggles the signal as appropriate for the data bits and any parity bit. It then holds the line in the idle/mark (or '1') state for a number of bit times defined by the stop-bits setting, which is typically configurable (usually 1 stop bit in my experience).
Since there is always some period of time when the line is in the mark state between data characters, the start of a character can always be recognized. This also means that the longest period of time the line can stay in the space state during valid data is:
1 start bit + however many data bits + a parity bit (if any)
A break signal is defined as holding the line in the space state for longer than that period of time - no valid data byte can do that, so the break 'character' isn't really a character. It's a special signal. For example, with 8 data bits and no parity at 9600 baud, that is 9 bit times, or roughly 0.94 ms, so a space condition noticeably longer than a millisecond can only be a break.
As far as when you need to issue a break signal depends entirely on the protocol being used.
'Break' was intended for when the line synchronization got totally mixed up.
"Am I supposed to leave it on for each byte of data I send, or send a byte of data and then toggle it?"
Try sending a nice long 'break' signal (500 ms?) then wait a bit (50 ms?) then send your data.
Sending a break can be achieved by:
lowering the bit-rate
sending 0x00, which will appear as a break
changing the bit-rate back
During the break it won't be possible to receive data, since the bit-rate is not correct (see the sketch below).
I used this for LIN bus communication, which has one master sending a break and then 0x55 as sync.
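A rough Win32 sketch of that trick (untested; SendBreakByBaudSwitch is a name invented here, and hPort is assumed to be a COM-port handle from CreateFile). At half the nominal rate, the 0x00 byte holds the line in the space state for about 18 nominal bit times, which a receiver at the real rate can only interpret as a break:

#include <windows.h>

bool SendBreakByBaudSwitch(HANDLE hPort, DWORD normalBaud)
{
    DCB dcb = { sizeof(DCB) };
    if (!GetCommState(hPort, &dcb)) return false;
    dcb.BaudRate = normalBaud / 2;   // half rate: 0x00 lasts twice as long
    if (!SetCommState(hPort, &dcb)) return false;
    unsigned char zero = 0x00;
    DWORD written = 0;
    WriteFile(hPort, &zero, 1, &written, NULL);
    FlushFileBuffers(hPort);         // make sure the byte actually goes out
    dcb.BaudRate = normalBaud;       // restore the real bit-rate
    return SetCommState(hPort, &dcb);
}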
I helped out a friend and implemented the COM break in C#.
It can be used as an extension method:
System.IO.Ports.SerialPort myPort = new System.IO.Ports.SerialPort("COM1");
//... serial communications code
myPort.SendCOMMbreak(5); //sends a break for roughly 5ms
Or with a direct call:
RS232_Com_Break.UseWinAPItoSendBreak("COM1", 5); //acquires the serial port and sends a break for roughly 5ms
Here is the complete code for the supporting class:
using System;
using System.IO.Ports;
using System.Runtime.InteropServices;
using System.Threading;

public static class RS232_Com_Break
{
    /*===================================================================================
     * EXTENSION METHOD TO HOLD THE VOLTAGE ON A COMM PORT HIGH FOR X(ms),
     * for talking to old devices that need an RS232 "break" signal sent.
     *
     * 1/5/23 Blue Mosaic Software ~mwr
     *
     * Use: System.IO.Ports.SerialPort myPort = new System.IO.Ports.SerialPort("COM1");
     *      ... serial communications code
     *      myPort.SendCOMMbreak(5); //sends a break for roughly 5ms
     *
     * Or:  RS232_Com_Break.UseWinAPItoSendBreak("COM1", 5); //acquires serial port and sends a break for roughly 5ms
     * ==================================================================================*/

    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAsAttribute(UnmanagedType.Bool)]
    static extern bool SetCommBreak([InAttribute] IntPtr fileHandle);

    [DllImport("kernel32.dll")]
    static extern bool ClearCommBreak(IntPtr hFile);

    public const short FILE_ATTRIBUTE_NORMAL = 0x80;
    public const short INVALID_HANDLE_VALUE = -1;
    public const uint GENERIC_READ = 0x80000000;
    public const uint GENERIC_WRITE = 0x40000000;
    public const uint CREATE_NEW = 1;
    public const uint CREATE_ALWAYS = 2;
    public const uint OPEN_EXISTING = 3;

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateFile(string lpFileName, uint dwDesiredAccess,
        uint dwShareMode, IntPtr lpSecurityAttributes, uint dwCreationDisposition,
        uint dwFlagsAndAttributes, IntPtr hTemplateFile);

    [DllImport("kernel32.dll")]
    public static extern bool CloseHandle(IntPtr hObject);

    /// <summary>
    /// Extension method to reset a serial port. Holds the port high (+5vdc) for the duration requested.
    /// Should work with an open or closed port.
    /// </summary>
    /// <param name="oPort">The port to send the break on</param>
    /// <param name="milliseconds">Break duration in milliseconds</param>
    public static void SendCOMMbreak(this SerialPort oPort, int milliseconds)
    {
        // I think it takes 1 ms to set and clear the break.
        int iPauseLength = milliseconds > 1 ? milliseconds - 1 : 1;

        // A little management to ensure the port is free.
        if (oPort.IsOpen)
        {
            oPort.Close();
            UseWinAPItoSendBreak(oPort.PortName, iPauseLength);
            oPort.Open();
        }
        else
        {
            UseWinAPItoSendBreak(oPort.PortName, iPauseLength);
        }
    }

    /// <summary>
    /// All the Windows API calls are here, so this would be the code to use if
    /// not using the .NET System.IO.Ports.SerialPort object.
    /// </summary>
    /// <param name="sPortName">The name of the COM port</param>
    /// <param name="iPauseLengthMS">Pause time in milliseconds</param>
    public static void UseWinAPItoSendBreak(string sPortName, int iPauseLengthMS)
    {
        // Get a handle to the port. Once we have done this successfully, it is unavailable elsewhere.
        IntPtr port_ptr = CreateFile(sPortName, GENERIC_WRITE, 0, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero);

        if (port_ptr.ToInt64() == INVALID_HANDLE_VALUE) // -1 = oops (ToInt64 is safe on 64-bit, unlike ToInt32)
        {
            // Did not get a handle. Ask the framework to marshal the win32 error code to an exception.
            Marshal.ThrowExceptionForHR(Marshal.GetHRForLastWin32Error());
        }
        else
        {
            SetCommBreak(port_ptr);
            Thread.Sleep(iPauseLengthMS);
            ClearCommBreak(port_ptr);
            // Close only happens if a handle was acquired.
            CloseHandle(port_ptr);
        }
    }
}
Not really an SO question, but let me dredge up stuff from my long-past (1980s, in fact) days as a comms programmer. You normally send a break by holding all bits low or high (depending on your comms hardware). So to cause a break, either send the value 0x00 repeatedly for about half a second, or the value 0xFF.
You should be able to see the data that the port is sending. You'll need a null-modem cable, a computer with a serial port (or a serial-USB dongle) and a terminal program (such as HyperTerminal on Windows -- not included in Vista). If you configure your terminal program adequately (correct speed, number of data bits, the correct start-stop setting, and the correct port) all the data will be shown on screen.
Sometimes it is required to hit the Enter key to start seeing the data. You can toggle the terminal program's settings during the test to see if anything changes ("noise" to data).