I need to get a list of the interfaces on my local machine, along with their IP addresses, MACs and a set of QoS measurements (delay, jitter, error rate, loss rate, bandwidth)...
I'm writing a kernel module to read this information from the local network devices. So far I've extracted everything mentioned above except for jitter and bandwidth...
I'm using Linux kernel 2.6.35.
It depends on what you mean by bandwidth. In most cases all you get from the PHY is something better called bit rate. I suspect you actually need some measure of the available bandwidth at a higher layer, which you can't get without doing active or passive measurements, e.g. sending ICMP echo-like probe packets and examining the replies. You should also make clear which two points in the network (both the actual endpoints and the communication layer) you want to measure the available bandwidth between.
As for jitter, you also need to do some kind of measurement, basically in the same way as above.
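For example, here is a minimal user-space sketch (the function name and the sample values are mine, not from the question) of how probe results could be turned into a jitter figure, using an RFC 3550-style smoothed estimator over consecutive delay samples:

#include <cmath>
#include <cstdio>
#include <vector>

// Smoothed jitter estimate from consecutive delay samples, following the
// RFC 3550 estimator J += (|D| - J) / 16.
double estimate_jitter(const std::vector<double>& delays_ms)
{
    double jitter = 0.0;
    for (std::size_t i = 1; i < delays_ms.size(); ++i) {
        double d = std::fabs(delays_ms[i] - delays_ms[i - 1]);
        jitter += (d - jitter) / 16.0;
    }
    return jitter;
}

int main()
{
    // Delay samples (ms) gathered from ICMP echo or similar probe packets.
    std::vector<double> delays_ms = {20.1, 19.8, 22.5, 21.0, 20.7};
    std::printf("estimated jitter: %.3f ms\n", estimate_jitter(delays_ms));
}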
I know this is an old post, but you could at least get jitter by inspecting the RTCP packets, if they're available. They come in on the RTP port + 1 and accompany any RTP stream, as far as I've seen. A lot of information can be obtained from RTCP, but for your purposes the basic sender/receiver report blocks would do it.
Just check out this link for the details of the protocol, but you can get the jitter pretty easily from an RTCP packet.
Depending on what you're using the RTP stream for, there are also a lot of other resources, like the VoIP Metrics Report Block in the Extended Report (https://www.rfc-editor.org/rfc/rfc3611#page-25).
EDIT:
As per Artem's request, here is a basic flow of how you might do it:
An RTP stream is started on, say, port 16400 (the drivers/mechanism needed for this to happen are most likely already in place).
Tell the kernel to start listening on port 16401 (one above your RTP stream's port) as well; this is where the RTCP packets will start coming in.
As the RTCP packets come in, send them wherever you want to handle them (e.g. to user space if you want to parse them there).
Parse the packets for the desired data. I'm not aware of a particular library to do this, but it's pretty easy to just point a struct at the payload (in C) and dereference it, watching out for endianness; a rough sketch follows below.
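A rough C++ sketch of that idea, assuming the buffer holds a plain RTCP receiver report (RR) laid out as in RFC 3550, section 6.4 (the struct and function names are mine):

#include <arpa/inet.h>   // ntohl(); on Windows include <winsock2.h> instead
#include <cstdint>
#include <cstring>

// One RTCP report block (24 bytes on the wire), as defined in RFC 3550.
struct RtcpReportBlock {
    uint32_t ssrc;         // source this report is about
    uint32_t lost;         // fraction lost (8 bits) + cumulative lost (24 bits)
    uint32_t highest_seq;  // extended highest sequence number received
    uint32_t jitter;       // interarrival jitter, in RTP timestamp units
    uint32_t lsr;          // last SR timestamp
    uint32_t dlsr;         // delay since last SR
};

// Returns the jitter reported in the first report block of an RR packet.
// In an RR the report blocks start at offset 8 (4-byte common header plus
// the 4-byte SSRC of the sender); an SR has 20 more bytes of sender info first.
uint32_t first_block_jitter(const uint8_t *pkt)
{
    RtcpReportBlock block;
    std::memcpy(&block, pkt + 8, sizeof block);  // memcpy avoids unaligned reads
    return ntohl(block.jitter);                  // the wire format is big-endian
}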
I have a ROS node that gets image frames from a camera sensor and publishes image messages to a topic of type sensor_msgs::Image. I run a ros2 executable which deploys the node. I notice that the camera sensor provides frames at 30 fps, but the frame rate reported by "ros2 topic hz" is comparatively low, around 10 Hz. I verified this using the output of "ros2 topic echo", where only around 10 messages were published with the same "sec" (second) value.
So, it seems that a large overhead is involved in the topic publishing mechanism.
Most likely, entire image frames are being copied, which is causing the low fps. I would like to confirm whether this is indeed the case, that is, does ROS 2 copy the entire message while publishing to a topic? And if so, what are the workarounds? It seems that using intra-process communication (using components) might be one. But note that I am only deploying one node and publishing messages to a topic from it; there is no second node consuming those messages yet.
Regards
I can think of a couple of reasons why ros2 topic hz is reporting a lower frequency than expected.
There are known performance issues with Python publishers and large data (like images). Improvements have been made, but the issues still exist in older versions of ROS 2 (Galactic or earlier) (see the related GitHub issue). I don't know if these issues affect Python subscriptions, but I imagine there is some overhead in converting from C to Python (which is what ros2 topic hz is doing). You could try subscribing with a C++ node and see if that makes any difference.
The underlying robot middleware (RMW) may also be a source of increased latency. There have been various documented issues with sending large data. You can check out this documentation on tuning the middleware for your use case: https://docs.ros.org/en/rolling/How-To-Guides/DDS-tuning.html
To take advantage of intra-process communication, I recommend writing your publisher and subscriber nodes in C++ as components, which have the flexibility of being run in their own process or loaded into a common process (letting them pass data around via pointers); a rough sketch follows below. You can also configure the RMW to use shared memory (agnostic to how you're using ROS), but I won't get into that here since it depends on which RMW you are using.
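Something along these lines (a minimal sketch only; the node and topic names are made up, and in practice you would register the class with rclcpp_components instead of writing your own main):

#include <chrono>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <sensor_msgs/msg/image.hpp>

// Publishing a std::unique_ptr lets rclcpp pass ownership of the message to
// intra-process subscribers instead of copying the image data.
class CameraPublisher : public rclcpp::Node
{
public:
  explicit CameraPublisher(const rclcpp::NodeOptions & options)
  : Node("camera_publisher", options)
  {
    pub_ = create_publisher<sensor_msgs::msg::Image>("image", 10);
    timer_ = create_wall_timer(std::chrono::milliseconds(33),
                               [this]() { publish_frame(); });
  }

private:
  void publish_frame()
  {
    auto msg = std::make_unique<sensor_msgs::msg::Image>();
    // ... fill msg->data from the camera driver here ...
    pub_->publish(std::move(msg));  // no copy when intra-process comms is on
  }

  rclcpp::Publisher<sensor_msgs::msg::Image>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<CameraPublisher>(
      rclcpp::NodeOptions().use_intra_process_comms(true)));
  rclcpp::shutdown();
  return 0;
}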
You can try using the usb_cam package to get the camera feed.
This package is written in C++ and uses the sensor data QoS profile, so you should get the best speed.
installation:
sudo apt-get install ros-<ros2-distro>-usb-cam
ros2 run usb_cam usb_cam_node_exe
ros2 run image_transport republish compressed raw --ros-args --remap in/compressed:=image_raw/compressed --remap out:=image_raw/uncompressed
You can then echo the topic image_raw/uncompressed.
Link attached:
https://index.ros.org/r/usb_cam/
I have to model a BitTorrent network, so there are a number of nodes connected to each other. Each node has a download speed, say 600 KB/s, and an upload speed, say 130 KB/s.
The problem is: how can I model this in OMNeT++? In the NED file I created the network this way, where A and B are nodes:
A.mygate$o++ --> {something} --> B.mygate$i++
B.mygate$o++ --> {something} --> A.mygate$i++
where mygate is an inout gate, and $i and $o are the input and output half-channels. The {something} must carry a speed, but:
if I set a speed on the first line of code, it is the upload speed of A but also the download speed of B. This is normal, because if I download from a slow server I get a slow download. So how can I model the download speed of a peer in OMNeT++? I cannot work this out. Should I say "allow k simultaneous downloads until I reach the download speed", or is that a bad approach? Can someone suggest the right approach, and whether a built-in OMNeT++ module for this already exists? I have read the manual but it is a bit confusing. Thanks for every reply.
I suggest taking a look at OverSim, which is a peer-to-peer network simulator built on top of the INET framework, the framework for simulating Internet-related protocols in OMNeT++.
Generally each host should have a queue at the link layer, and the interface should not put further packets out on the line while a transmission is still in progress (how long that takes is determined by the data rate of the line and the length of the packet). Once the line becomes idle, the next packet can be sent out. This is how the data rate of the actual channel limits the traffic.
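If you did want to roll that yourself, the pattern looks roughly like this (a sketch against the OMNeT++ 5.x API; the module and gate names are made up):

#include <omnetpp.h>

using namespace omnetpp;

// Queue outgoing packets and only send the next one once the datarate
// channel attached to the "out" gate has finished the previous transmission.
class QueuedSender : public cSimpleModule
{
  protected:
    cPacketQueue queue;
    cMessage *endTxTimer = nullptr;

    virtual void initialize() override { endTxTimer = new cMessage("endTx"); }

    virtual void handleMessage(cMessage *msg) override {
        if (msg != endTxTimer)
            queue.insert(check_and_cast<cPacket *>(msg));  // packet to transmit
        trySend();
    }

    void trySend() {
        if (queue.isEmpty() || endTxTimer->isScheduled())
            return;                                  // nothing to do, or already waiting
        cChannel *ch = gate("out")->getTransmissionChannel();
        simtime_t finish = ch->getTransmissionFinishTime();
        if (finish > simTime()) {
            scheduleAt(finish, endTxTimer);          // line busy: wait until it clears
        }
        else {
            send(queue.pop(), "out");
            // wake up when this transmission ends, in case more packets are queued
            scheduleAt(gate("out")->getTransmissionChannel()->getTransmissionFinishTime(),
                       endTxTimer);
        }
    }

  public:
    virtual ~QueuedSender() { cancelAndDelete(endTxTimer); }
};

Define_Module(QueuedSender);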
If you don't want to implement this from scratch (and there is no reason to), take a look at the INET framework. You would drop in your hosts and connect their PPP interfaces with the asymmetric connections you proposed in your question. The PPP interface in StandardHost does the queueing for you, so you only have to add some applications that generate your traffic and you are set.
Still, I would take a look at OverSim, as it gives an even higher-level abstraction on top of INET (though I have no experience with it myself).
A professional video camera is sending me packets over UDP and RTP, which contain MJPEG data in the YUV422 pixel format (RFC 2435). Using the DatagramSocket and DatagramPacket classes I am able to receive the packets. Now I am looking for an efficient way to get from:
approx 80 * RTP_socket.receive(rtpPacket) ---> 1 jpg-File on my Harddisk (with 25 fps)
Otherwise I will pretty soon be losing relatively many packets, since the packets are sent continuously by the camera over UDP (losing a packet once in a while is not the worst thing, as I don't need every frame).
Right now I am using a ByteBuffer to store the packet payloads sequentially (with the header cut off, using put(..., int offset, ...)) until I get the final packet of a frame.
But unfortunately, it seems like I need to use the ImageIO.write function in order to get the necessary JPEG header, correct? Because it cannot handle a ByteBuffer directly...
If I do some post-processing of the image in another thread (not implemented yet), would a DirectByteBuffer make sense?
I hope you understood what I am asking :). If not, please don't hesitate to ask.
Thanks a lot
You can port this C# implementation, which achieves over 100 FPS, quite easily :)
https://net7mma.codeplex.com/SourceControl/latest#Rtp/RFC2435Frame.cs
I am the author and if you need porting help let me know!
I have quite a bewildering problem.
I'm using a big C++ library for handling a proprietary protocol over UDP on Windows XP/7. It listens on one port throughout the run of the program and waits for connections from distant peers.
Most of the time this works well. However, due to some problems I'd experienced, I decided to add a simple debug print directly after the call to WSARecvFrom (the Win32 function the library uses to receive datagrams from the socket of interest and to tell which IP and port they came from).
Strangely enough, in some cases I've discovered that packets are dropped at the OS level (i.e. I see them in Wireshark, they have the right dst port, all checksums are correct, but they never appear in the debug prints I planted in the code).
Now, I'm fully aware of the fact (which people tend to mention a bit too often) that "UDP doesn't guarantee delivery", but this is not relevant here, as the packets are received by the machine: I see them in Wireshark.
Also, I'm familiar with OS buffers and the potential to fill up, but here comes the weird part...
I've done some research trying to find out exactly which packets are dropped. What I've discovered is that all dropped packets share two things in common (though some, but definitely not most, of the packets that aren't dropped share these as well):
They are small. Many of the packets in the protocol are large, close to the MTU, but all packets that are dropped are under 100 bytes (gross).
They are always one of two kinds: a SYN equivalent (i.e. the first packet a peer sends us in order to initiate communications) or a FIN equivalent (i.e. a packet a peer sends when it is no longer interested in talking to us).
Can either of these two qualities affect the OS buffers and cause packets to be randomly (or, even more interestingly, selectively) dropped?
Any light shed on this strange issue would be very appreciated.
Many thanks.
EDIT (24/10/12):
I think I may have missed an important detail. It seems that the packets dropped before arrival share something else in common: they (and, I'm starting to believe, only they) are sent to the server by "new" peers, i.e. peers it hasn't tried to contact before.
For example, if a SYN-equivalent packet arrives from a peer* we've never seen before, it will not be seen by WSARecvFrom. However, if we have sent a SYN-equivalent packet to that peer ourselves (even if it didn't reply at the time), and it now sends us a SYN equivalent, we will see it.
(*) I'm not sure whether this means a peer we haven't seen before (i.e. ip:port) or just a port we haven't seen before.
Does this help?
Is this some kind of WinSock option I've never heard of? (as I stated above, the code is not mine, so it may be using socket options I'm not aware of)
Thanks again!
The OS has a fixed-size buffer for data that has arrived at your socket but hasn't yet been read by you. When this buffer is exhausted, it starts to discard data. Debug logging can exacerbate this by slowing the rate at which you pull data from the socket, increasing the chance of overflows.
If this is the problem, you can at least reduce how often it happens by requesting a larger receive buffer.
You can check the size of your socket's recv buffer using
int recvBufSize = 0;
int optLen = sizeof(recvBufSize);
int err = getsockopt(socket, SOL_SOCKET, SO_RCVBUF,
                     (char*)&recvBufSize, &optLen);  // last argument is int* on Winsock
and you can set it to a larger size using
int recvBufSize = /* usage specific size */;
int err = setsockopt(socket, SOL_SOCKET, SO_RCVBUF,
(const char*)&recvBufSize, sizeof(recvBufSize));
If you still see data being received by the OS but not delivered to your socket client, you could think about different approaches to logging, e.g.:
Log to a RAM buffer and only print it occasionally (at whatever batch size you profile to be most efficient).
Log from a low-priority thread, either accepting that the memory requirements for this will be unpredictable or adding code to discard data from the log's buffer when it gets full. A rough sketch combining both ideas follows below.
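Something along these lines (only a sketch; the class name, flush interval, and size limit are arbitrary choices of mine):

#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <string>
#include <thread>

// The receive path only appends to an in-memory buffer; a separate thread
// drains it to stderr so logging never blocks the socket loop for long.
class RamLogger {
public:
    RamLogger() : worker_([this] { drain(); }) {}

    ~RamLogger() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    void log(std::string line) {
        std::lock_guard<std::mutex> lk(m_);
        if (buffer_.size() < kMaxLines)      // drop instead of growing unbounded
            buffer_.push_back(std::move(line));
    }

private:
    void drain() {
        std::unique_lock<std::mutex> lk(m_);
        while (!done_ || !buffer_.empty()) {
            cv_.wait_for(lk, std::chrono::milliseconds(250));
            std::deque<std::string> batch;
            batch.swap(buffer_);
            lk.unlock();                     // do the slow I/O without the lock
            for (const auto& s : batch)
                std::fprintf(stderr, "%s\n", s.c_str());
            lk.lock();
        }
    }

    static constexpr std::size_t kMaxLines = 4096;
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::string> buffer_;
    bool done_ = false;
    std::thread worker_;
};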
I had a very similar issue. After confirming that the receive buffer wasn't causing the drops, I learned that it was because I had the receive timeout set too low, at 1 ms. Setting the socket to non-blocking and not setting a receive timeout at all fixed the issue for me.
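For reference, making a Winsock socket non-blocking looks roughly like this (a sketch; error handling kept minimal):

#include <winsock2.h>

// Put the socket into non-blocking mode instead of using a short SO_RCVTIMEO.
bool make_non_blocking(SOCKET s)
{
    u_long mode = 1;                        // 1 = non-blocking, 0 = blocking
    return ioctlsocket(s, FIONBIO, &mode) == 0;
}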
Turn off the Windows Firewall.
Does that fix it? If so, you can likely turn the firewall back on and just add a rule for your program.
That's my most logical guess based on what you said here in your update:
It seems that the packets dropped before arrival share something else in common: They (and I'm starting to believe, only they) are sent to the server by "new" peers, i.e. peers that it hasn't tried to contact before.
I faced the same kind of problem on Red Hat Linux as well.
It turned out to be a routing issue.
The root cause is as follows:
It is true that the UDP packet is able to reach the destination machine (it can be seen in Wireshark).
But no route back to the source is found, so no reply can be seen in Wireshark.
On some OSes you can see the request packet in Wireshark, but the OS does not actually deliver the packet to the socket (you can see the socket in netstat -nap).
In these cases, always check with ping (ping <dest ip> -I <source ip>).
I am using Ruby to test a C# networking application which uses sockets. I open the connection with #socket = TCPSocket.new(IP, PORT) and it works, until the text I want to send is longer than 1024 characters. Then Ruby splits the message into two parts; C++ and C# send the same message as one packet, so the C# application doesn't expect to have to join the parts.
The messages never get longer than approx. 2000 chars. Is there a way to set the packet size for TCPSocket?
EDIT:
All of your answers are correct, but after reading a lot of ruby socket questions here on SO I found the solution:
socket.send(msg,0x4)
does not split the message. The option for direct send makes the difference.
I don't know if this works over the internet, but it works in my test lab.
Thx for the help.
TCP is a stream protocol. It does not care about application-level "messages". TCP may theoretically send your 1024 bytes in one packet or in 1024 packets.
That said, keep in mind that the Ethernet MTU is 1500 bytes. Factor in the IP header, which is normally 20 bytes, and the TCP header, which is at least 20 bytes, and your 2000-char message will have to be sent in at least two packets. TCP also does flow control, which might be relevant to the issue. The best way to find out what's going on on the wire is to use tcpdump or Wireshark.
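Because of that, the receiving side has to reassemble messages itself, e.g. with a length prefix. A minimal sketch of the receiving side in C++ (POSIX sockets; the 4-byte big-endian length prefix is my own assumed framing convention, not something Ruby or C# impose):

#include <arpa/inet.h>    // ntohl()
#include <cstdint>
#include <string>
#include <sys/socket.h>   // recv(); on Windows use <winsock2.h>

// Read exactly `len` bytes, looping because one recv() may return less.
static bool recv_all(int fd, char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = recv(fd, buf, len, 0);
        if (n <= 0)
            return false;                 // error or peer closed the connection
        buf += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

// The sender writes a 4-byte big-endian length followed by the message body;
// the receiver rebuilds the full message here regardless of how TCP split it.
static bool recv_message(int fd, std::string &out)
{
    uint32_t len_be = 0;
    if (!recv_all(fd, reinterpret_cast<char *>(&len_be), sizeof len_be))
        return false;
    out.resize(ntohl(len_be));
    return out.empty() || recv_all(fd, &out[0], out.size());
}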
The number of packets required to transmit your data should have very little effect on the stream in practice. What you might be encountering is a buffering problem in your implementation.
A socket should only be written to when it's in a "writeable" state, otherwise you risk overflowing the output buffer and causing the connection to be dropped by your networking stack.
Since a TCP/IP socket functions as a simple stream, where data goes in and comes out in order, the effect of packet fragmentation should be mostly irrelevant except for extremely time-sensitive applications.
Make sure you flush your output buffer when writing to the socket or you may have some data left waiting to transmit:
#socket.write(my_data)
#socket.flush