I have a ROS node that gets image frames from a camera sensor and publishes image messages to a topic of type sensor_msgs/msg/Image. I run a ros2 executable which deploys the node. I notice that the camera sensor provides frames at 30 fps, but the frame rate returned by "ros2 topic hz" is quite low in comparison, around 10 Hz. I verified this using the output of "ros2 topic echo", in which only around 10 messages were published with the same "sec" (second) value.
So, it seems that a large overhead is involved in the topic publishing mechanism.
Most likely, entire image frames are being copied, which is causing the low fps. I would like to confirm whether this is indeed the case: does ROS 2 copy the entire message while publishing to a topic? And if so, what are the workarounds? It seems that using intra-process communication (via components) might be one. But note that I am only deploying one node and publishing messages to a topic from it; that is to say, there is no second node consuming those messages yet.
I can think of a couple of reasons why ros2 topic hz is reporting a lower frequency than expected.
There are known performance issues with Python publishers and large data (like images). Improvements have been made, but the issues still exist in older versions of ROS 2 (Galactic or earlier; see the related GitHub issue). I don't know if these issues affect Python subscriptions, but I imagine there is some overhead in converting from C to Python (which is what ros2 topic hz is doing). You could try subscribing with a C++ node and see if that makes any difference, e.g. with the sketch below.
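This is only a sketch of such a measuring node, not a definitive implementation; the topic name image_raw is an assumption, so substitute your own:

#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/image.hpp"

// Counts incoming images and logs the rate once per second,
// taking the Python tooling out of the measurement entirely.
class HzCheck : public rclcpp::Node {
public:
  HzCheck() : Node("hz_check") {
    sub_ = create_subscription<sensor_msgs::msg::Image>(
        "image_raw", rclcpp::SensorDataQoS(),
        [this](sensor_msgs::msg::Image::ConstSharedPtr) {
          ++count_;
          auto now = std::chrono::steady_clock::now();
          if (now - window_start_ >= std::chrono::seconds(1)) {
            RCLCPP_INFO(get_logger(), "rate: %zu Hz", count_);
            count_ = 0;
            window_start_ = now;
          }
        });
  }

private:
  rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr sub_;
  std::size_t count_{0};
  std::chrono::steady_clock::time_point window_start_{std::chrono::steady_clock::now()};
};

int main(int argc, char ** argv) {
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<HzCheck>());
  rclcpp::shutdown();
  return 0;
}

The sensor-data QoS profile is used here because cameras typically publish best-effort; a reliable publisher still matches a best-effort subscription, so this should receive in either case.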
The underlying ROS middleware (RMW) may also be a source of increased latency. There have been various documented issues with sending large data. You can check out this documentation on tuning the middleware for your use case: https://docs.ros.org/en/rolling/How-To-Guides/DDS-tuning.html
To take advantage of intra-process communication, I recommend writing your publisher and subscriber nodes in C++ as components, which have the flexibility of being run in their own process or loaded into a common process (letting them pass data around as pointers); a sketch follows. You can also configure the RMW to use shared memory (agnostic to how you're using ROS), but I won't get into that here since it depends on which RMW you are using.
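This sketch shows the mechanism with two plain nodes composed by hand; node and topic names are illustrative, and with actual components you would get the same effect by loading them into one component container:

#include <chrono>
#include <memory>
#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/image.hpp"

int main(int argc, char ** argv) {
  using namespace std::chrono_literals;
  rclcpp::init(argc, argv);

  // Both nodes opt in to intra-process communication.
  auto options = rclcpp::NodeOptions().use_intra_process_comms(true);
  auto pub_node = std::make_shared<rclcpp::Node>("image_publisher", options);
  auto sub_node = std::make_shared<rclcpp::Node>("image_consumer", options);

  auto pub = pub_node->create_publisher<sensor_msgs::msg::Image>("image_raw", 10);
  auto sub = sub_node->create_subscription<sensor_msgs::msg::Image>(
      "image_raw", 10,
      [](sensor_msgs::msg::Image::UniquePtr) { /* received without a copy */ });

  // Publishing a unique_ptr lets rclcpp move the message to the
  // subscriber instead of serializing and copying it.
  auto timer = pub_node->create_wall_timer(33ms, [pub]() {
    pub->publish(std::make_unique<sensor_msgs::msg::Image>());
  });

  // The nodes must share one process; here, one executor.
  rclcpp::executors::SingleThreadedExecutor exec;
  exec.add_node(pub_node);
  exec.add_node(sub_node);
  exec.spin();
  rclcpp::shutdown();
  return 0;
}

Keep in mind that intra-process communication only pays off once there actually is a subscriber in the same process; with no subscriber at all, the publish call should already be cheap.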
You can try using the usb_cam package to get the camera feed.
This package is written in C++ and publishes with the sensor-data QoS profile, so you should get the best speed.
Installation:
sudo apt-get install ros-<ros2-distro>-usb-cam
ros2 run usb_cam usb_cam_node_exe
ros2 run image_transport republish compressed raw --ros-args --remap in/compressed:=image_raw/compressed --remap out:=image_raw/uncompressed
You can then echo the topic image_raw/uncompressed.
Package link: https://index.ros.org/r/usb_cam/
Related
I am using ZeroMQ to communicate between multiple services. I ran into an issue where I was sending responses but they were never getting to the caller. I did a lot of debugging and couldn't figure out what was happening. I eventually reduced the message size (I was returning the results of a query) and the messages started coming in. Then I increased the memory size of my JVM and the original messages started coming back.
This leads me to believe that the messages were too big to fit into memory and ZeroMQ just dropped them. My question is, how can I properly debug this? Does ZeroMQ output any logs or memory dumps?
I am using the Java version of ZeroMQ.
Q : "...how can I properly debug this?"
Well, if one is aware of the native ZeroMQ API settings, at least of both the buffering "mechanics" and the pair of { SNDHWM | RCVHWM } hard cut-off limits, one can do some trial-and-error testing to fine-tune these parameters; the sketch below shows the knobs.
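A minimal sketch via the native C API (shown from C++; this is an illustration rather than your exact setup, and the Java bindings expose the same options through setters along the lines of setSndHWM()/setRcvHWM()):

#include <zmq.h>
#include <cstdint>

int main() {
    void *ctx = zmq_ctx_new();
    void *sock = zmq_socket(ctx, ZMQ_PUSH);

    // Queue up to 100k outgoing messages before the HWM cut-off kicks in.
    int hwm = 100000;
    zmq_setsockopt(sock, ZMQ_SNDHWM, &hwm, sizeof(hwm));

    // Refuse any single message larger than 1 GiB instead of eating RAM.
    int64_t maxmsg = 1024LL * 1024 * 1024;
    zmq_setsockopt(sock, ZMQ_MAXMSGSIZE, &maxmsg, sizeof(maxmsg));

    zmq_bind(sock, "tcp://*:5555");
    // ... zmq_send() / zmq_recv() as usual ...
    zmq_close(sock);
    zmq_ctx_term(ctx);
    return 0;
}

What happens at the HWM depends on the socket archetype: PUB sockets silently drop, while PUSH and REQ block, so silently vanishing messages are consistent with the documented behaviour of some archetypes under back-pressure.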
Q : "Does ZeroMQ output any logs or memory dumps?"
Well, no, the native ZeroMQ has knowingly never attempted to do so. The key priority of the ZeroMQ concept is almost linearly scalable performance, and the Zen-of-Zero reflecting that excludes any operation that does not support achieving it within minimalistic low-latency envelopes.
Yet, newer versions of the native API provide a tool called a socket-monitor. That may help you write your own internal socket-event analyser, should you be in such a need; a sketch is shown below.
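A rough sketch of wiring one up via the native API (again shown from C++; event constants come from zmq.h, and each event arrives on an inproc PAIR socket as two frames, a 16-bit event id plus a 32-bit value, then the affected endpoint):

#include <zmq.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    void *ctx = zmq_ctx_new();
    void *sock = zmq_socket(ctx, ZMQ_PUSH);

    // Ask libzmq to publish all lifecycle events for this socket.
    zmq_socket_monitor(sock, "inproc://monitor.push", ZMQ_EVENT_ALL);

    // Read the events from a PAIR socket connected to the monitor endpoint.
    void *mon = zmq_socket(ctx, ZMQ_PAIR);
    zmq_connect(mon, "inproc://monitor.push");

    zmq_bind(sock, "tcp://*:5555");

    while (true) {
        zmq_msg_t frame;
        zmq_msg_init(&frame);
        if (zmq_msg_recv(&frame, mon, 0) == -1) break;

        uint16_t event;
        std::memcpy(&event, zmq_msg_data(&frame), sizeof(event));
        zmq_msg_close(&frame);

        zmq_msg_init(&frame);  // second frame carries the endpoint string
        zmq_msg_recv(&frame, mon, 0);
        std::printf("event 0x%04x on %.*s\n", event,
                    (int)zmq_msg_size(&frame), (char *)zmq_msg_data(&frame));
        zmq_msg_close(&frame);
    }
    return 0;
}

Watching for events such as ZMQ_EVENT_DISCONNECTED is often enough to see exactly when, and on which endpoint, a peer went away.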
My 11+ years with ZeroMQ have never gotten me into an unsolvable corner. Best to get the needed insight into the Context()-instance and Socket()-instance parameters, which configure the L3, protocol-dependent and O/S-related buffering attributes (some of which may not be present in the Java bindings, yet the native API shows at its best all the possible and tweakable parameters of the ZeroMQ data-pumping engines).
Simply put, I need to take results from a DAQ and display them visually in a UI (no interaction needed) that gets its information updated in real time. The DAQ I am using has a "utility" to plug into LabVIEW, so it seems that the easiest way is to grab this data from LabVIEW and then transmit it to some UI using one of these methods.
I am using Windows 10 (although I could boot to Ubuntu); I'm just not sure what UI application would be best or easiest to use.
You can use this National Instruments tool for DAQ UI visualization. As it is native, it should be quite straightforward to use.
You may want to use the DAQExpress VI in LabVIEW as @MateoRandwolf suggested. The neat thing about it is that it almost creates your first program automatically, apart from the configuration of your NI modules.
There are just two things missing:
a waveform chart, and
a write to a TDMS file
Here is a snippet of a simple program doing this (the stop button is important to actually close the TDMS file before aborting the program).
If you really want to stream the data to a different device, I suggest using TCP/IP; a sketch of the receiving side is below. There are good examples in the documentation from which you can start (Help > Find Examples... > Search tab). If you cannot accept the roughly 40 ms of buffering that TCP/IP entails (because of handshaking etc.), have a look at UDP.
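The receiving side can be written in almost anything; here is a minimal POSIX-sockets sketch in C++. The host, port, and framing are assumptions that must match whatever your LabVIEW writer sends (LabVIEW flattens numeric data big-endian by default, hence the byte swap):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(6340);                        // assumed port
    inet_pton(AF_INET, "192.168.0.10", &addr.sin_addr); // assumed DAQ host
    if (connect(fd, (sockaddr *)&addr, sizeof(addr)) != 0) return 1;

    // Assumed framing: a bare stream of 8-byte doubles.
    // (A real reader must also cope with partial reads; glossed over here.)
    uint64_t be;
    while (read(fd, &be, sizeof(be)) == (ssize_t)sizeof(be)) {
        uint64_t host = __builtin_bswap64(be);  // big-endian wire -> little-endian host
        double sample;
        std::memcpy(&sample, &host, sizeof(sample));
        std::printf("%f\n", sample);            // hand off to your UI instead
    }
    close(fd);
    return 0;
}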
You can use Dewesoft's DAQ systems, which have a dual-mode capability. They use dual data buses (EtherCAT and USB): USB for high-speed buffered data storage to the PC's SSD, and the EtherCAT bus for a low-latency real-time stream to any 3rd-party EtherCAT master.
The DAQ systems are also capable of visualising data in real time on the display using various pre-built visual displays like recorders, XY graphs, 3D graphs, oscilloscopes, FFTs, GPS, video, and numerous others.
On Mac OS X, I have a process which produces JSON objects, and another intermittent process which should consume them. The producer and consumer processes are independent of each other. Objects will be produced no more often than every 5 seconds, and will typically be several hundred bytes, but may range up into megabytes sometimes. The objects should be communicated first-in-first-out. The consumer may or may not be running when the producer is producing, and may or may not read objects immediately.
My boneheaded solution is:
Create a directory.
Producer writes each JSON object to a text file, names it with a serial number.
When Consumer launches, it reads and then deletes files in serial-number order, and while it is running, uses FSEvents to watch this directory for new files arriving.
Is there any easier or better way to do this?
The modern way to do this, as of Lion, is to use XPC. Unfortunately, there's no good documentation of it; there's a broad overview in the Daemons and Services guide and a primitive HeaderDoc-generated reference, but the best way to get introduced to it is to watch the session about it from last year's WWDC sessions.
With XPC, you won't have to worry about keeping serial numbers serial, having to contend for a spinning disk, or whether there's enough disk space. Indeed, you don't even have to generate and parse JSON data at all, since XPC's communication mechanism is built around JSON-esque/plist-esque container and value objects.
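To give a flavor, here is a rough sketch of the producer side using the XPC C API (callable from C++ with clang's blocks support; the service name is made up, and the consumer would be a launchd service claiming it):

#include <xpc/xpc.h>

int main() {
    // Connect to the consumer's advertised Mach service (name is hypothetical).
    xpc_connection_t conn = xpc_connection_create_mach_service(
        "com.example.json-consumer", NULL, 0);

    xpc_connection_set_event_handler(conn, ^(xpc_object_t event) {
        // Handle errors such as XPC_ERROR_CONNECTION_INTERRUPTED here.
        (void)event;
    });
    xpc_connection_resume(conn);

    // No JSON at all: XPC dictionaries already carry typed values.
    xpc_object_t msg = xpc_dictionary_create(NULL, NULL, 0);
    xpc_dictionary_set_string(msg, "kind", "result");
    xpc_dictionary_set_int64(msg, "serial", 1);
    xpc_connection_send_message(conn, msg);
    xpc_release(msg);

    return 0;
}

One caveat for your intermittent consumer: the consumer side would need to be a launchd-managed service so that launchd can spawn it on demand when messages arrive, rather than a plain app that may simply not be running.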
Assuming you want the consumer to see the old files, this is the way it's been done since the beginning of time - loathsome though it may be.
There are lots of higher-tech things that look cleaner, but honestly, they just tend to add complexity and/or deployment infrastructure that adds hassle. What you suggest works, it works well, and it's easy to write and maintain. You might need some kind of sentinel files to track what you are doing for crash recovery, but that's probably about it.
Hell, most people would just poll with a sleep 5. At least you are all up in the fsevents.
Now if it was acceptable to lose the events generated when the listener wasn't around, and perf was paramount, it could get more interesting. :)
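If you do stick with the directory approach, the one refinement worth making is to write each object to a temporary name and then rename it into place, so the consumer never sees a half-written file. A sketch of the producer side (the paths and naming scheme are illustrative):

#include <cstdint>
#include <cstdio>
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Write one JSON object into the queue directory under a zero-padded
// serial number, making it visible to the consumer only when complete.
void enqueue(const fs::path &dir, uint64_t serial, const std::string &json) {
    char name[32];
    std::snprintf(name, sizeof(name), "%012llu.json",
                  (unsigned long long)serial);

    fs::path tmp = dir / (std::string(name) + ".tmp");
    {
        std::ofstream out(tmp, std::ios::binary);
        out << json;
    }                                // stream closed before the rename
    fs::rename(tmp, dir / name);     // atomic on the same filesystem
}

Zero-padding the serial keeps lexical order equal to numeric order, so the consumer can simply list the directory, sort by name, process, and delete.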
I need to get a list of interfaces on my local machine, along with their IP addresses, MACs, and a set of QoS measurements (delay, jitter, error rate, loss rate, bandwidth)...
I'm writing a kernel module to read this information from local network devices. So far I've extracted everything mentioned above except for jitter and bandwidth...
I'm using linux kernel 2.6.35
It depends on what you mean by bandwidth. In most cases, all you get from the PHY is something better called the bit rate. I suspect you actually need some kind of information on the available bandwidth at a higher layer, which you can't get without doing active or passive measurements, e.g. sending ICMP echo-like probe packets and investigating the replies. You should also make clear which two points in the network (both the actual endpoints and the communication layer) you want to measure available bandwidth between.
As for jitter, you also need to do some kind of measurement, basically in the same way as above; once you have per-packet timings, the standard estimator sketched below does the rest.
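The usual definition is the RFC 3550 interarrival-jitter estimator. Given arrival times and the timestamps carried in the probe packets (in the same clock units, whatever you choose), it reduces to a few lines:

#include <cmath>

// RFC 3550 (section 6.4.1) interarrival jitter:
// J(i) = J(i-1) + (|D(i-1, i)| - J(i-1)) / 16
struct JitterEstimator {
    double jitter = 0.0;
    double last_transit = 0.0;
    bool have_last = false;

    void on_packet(double arrival, double sent_timestamp) {
        double transit = arrival - sent_timestamp;  // one-way transit (plus clock offset)
        if (have_last) {
            double d = std::fabs(transit - last_transit);
            jitter += (d - jitter) / 16.0;          // smoothed, as per the RFC
        }
        last_transit = transit;
        have_last = true;
    }
};

The constant clock offset between the two hosts cancels out of the transit difference, which is why this works without synchronized clocks.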
I know this is an old post, but you could at least get jitter by inspecting the RTCP packets, if they're available. They come in on the RTP port + 1 and accompany any RTP stream, as far as I've seen. A lot of information can be gotten from RTCP, but for your purposes just the basic source description would do it.
Just check out this link for the details of the protocol, but you can get the jitter pretty easily from an RTCP packet.
Depending on what you're using the RTP stream for, there are also a lot of other resources, like the VoIP Metrics Report Block in the Extended Report (https://www.rfc-editor.org/rfc/rfc3611#page-25).
As per Artem's request, here is a basic flow of how you might do it:
An RTP stream is started on, say, port 16400 (the needed drivers/mechanism for this are most likely already in place).
Tell the kernel to start listening on port 16401 (one above your RTP stream's port) as well; this is where the RTCP packets will start coming in.
As the RTCP packets come in, send them wherever you want to handle them (e.g., up to userspace if you want to parse them there).
Parse the packets for the desired data. I'm not aware of a particular library to do this, but it's pretty easy to just point a struct at the buffer (in C) and dereference it, watching out for endianness. A sketch follows.
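As a concrete illustration, here is a hedged userspace sketch (C++ rather than kernel C) that pulls the interarrival-jitter field out of an RTCP Receiver Report, with the field offsets taken from RFC 3550:

#include <arpa/inet.h>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Extract interarrival jitter from the first report block of an RTCP
// Receiver Report (packet type 201). Returns false if buf is not one.
bool parse_rr_jitter(const uint8_t *buf, size_t len, uint32_t *jitter_out) {
    if (len < 32) return false;              // header + SSRC + one report block
    uint8_t version = buf[0] >> 6;           // must be RTP/RTCP version 2
    uint8_t report_count = buf[0] & 0x1f;    // number of report blocks
    uint8_t packet_type = buf[1];            // 201 = Receiver Report
    if (version != 2 || packet_type != 201 || report_count == 0) return false;

    // Report blocks start at offset 8 (after the header and reporter SSRC);
    // interarrival jitter is the 4 bytes at offset 12 within a block.
    uint32_t jitter_be;
    std::memcpy(&jitter_be, buf + 8 + 12, sizeof(jitter_be));
    *jitter_out = ntohl(jitter_be);          // expressed in RTP timestamp units
    return true;
}

Note the result is in RTP timestamp units, so divide by the stream's clock rate (e.g. 8000 Hz for typical telephony audio) to get seconds.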
The VideoPlayer (and possibly also VideoDisplay) component is capable of somehow automatically picking the best-quality video from the list it's given. An example is here:
http://help.adobe.com/en_US/FlashPlatform/beta/reference/actionscript/3/spark/components/mediaClasses/DynamicStreamingVideoItem.html#includeExamplesSummary
I cannot find the answers to the questions below.
Assuming that the server streaming the recorded videos is capable of switching among renditions of the same video at different bit rates and streaming them from any point within their timelines:
Is the bandwidth test/calculation within this component done only before the video starts playing, at which point it picks the best video source and never uses the others? Or does it execute its bandwidth tests continuously or periodically, and accordingly switch between video sources during playback?
Does it support setting the video source through code, and can its automatic switching between video sources be turned off (in case I want to offer this functionality to the user as a button/dropdown or similar)? I know that the preferred video source can be set, but that only means that that source will be tested/attempted first.
What other media servers can be used with this component, besides the one provided by Adobe, to achieve automatic and manual switching between different qualities of the same video?
Obviously, I'd like to create a player that is smart enough to switch automatically between different qualities of video, and that will also accept manual instructions about which source to play, both without interrupting the playback, or at least without restarting it (minor interruptions are acceptable). Also, playback needs to be able to start at any given point within the video once enough data has been buffered (of course), but most importantly, I want to be able to start playback beyond what's already buffered. A note or two about fast-forwarding wouldn't hurt either, if anyone knows anything.