NIC teaming and video streaming - Windows

I am currently building a client PC running Windows 10 that receives a video stream over IP. Because of the nature of the system I would like redundancy, so I created an aggregated connection. I did that with the PowerShell cmdlet New-NetLbfoTeam using default settings (dynamic load balancing with switch-independent teaming mode; I tried LACP too) on two identical onboard NICs. Windows then created a new teamed connection and I can access the network without any problem.
I am using only one switch in this setup.
Now to the problem: when both NICs are connected, there is a lot of block noise in my stream, and in some cases no video whatsoever, just a black screen. I have tried playing the stream with several tools (VLC and others) with no change. When I disconnect the cable from one of the NICs, there are no problems at all. I would like to keep the network redundancy without these issues, so I am looking for tips on fine-tuning the NIC teaming.
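For reference, the team was created roughly as shown below, and one variant I still intend to test is demoting one member to standby, so that only a single NIC carries traffic at a time and the other takes over purely on link failure (the adapter names are placeholders; standby members are only supported in switch-independent mode):

# original team: switch independent with dynamic load balancing
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
# failover-only variant: keep the second member as hot standby
Set-NetLbfoTeamMember -Name "NIC2" -Team "Team1" -AdministrativeMode Standby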
Any help is appreciated.

Related

Recreating display output from X11 Stream

I have two computers which are used to control an industrial plant. One of them controls the plant, the other is used as a failsafe. They are directly connected over Ethernet, and the "inactive" one just mirrors the display of the main controller.
I captured the network traffic between the two, and when I open it in Wireshark I see it is all X11 traffic. It includes the initial connection request and also all the "draw calls" in plain text.
I now want to "replay" this captured stream and recreate the screen content from it. Is there any program available which can do so, ideally directly from the Wireshark capture file?
My thoughts so far:
I can easily replay the network data itself and send it to some socket, but the communication is specific to the session, e.g. some commands refer to handle values set up earlier. It's unlikely a new session would work with the same values, so I can't just pipe it into some program.
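For example, I can already pull the raw byte stream out of the capture with tshark (assuming the X11 session is TCP stream 0; the actual index has to be checked in Wireshark):

$ tshark -r capture.pcap -q -z follow,tcp,raw,0 > x11-stream.txt

but replaying those bytes verbatim runs into exactly the stale-handle problem described above.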
What you see from your connection is only your own connection's requests plus the events relevant to the windows you created (or other clients' windows where your connection sets an event mask), so quite a lot is lost. I'm not aware of any program that can reconstruct a best-possible version of the screen from a single client's traffic, but a 100% accurate copy of the screen is certainly not achievable, and even the best possible model will be far from the real screen (unless your connection periodically polls for the backing-store content of each mapped window).

How can I capture microphone data and route it to a virtual microphone device?

Recently, I wanted to get my hands dirty with Core Audio, so I started working on a simple desktop app that applies effects (e.g. echo) to the microphone data in real time, so that the processed data can then be used in communication apps (e.g. Skype, Zoom, etc.).
To do that, I figured I have to create a virtual microphone in order to send the processed (effects applied) data to communication apps. For example, the user would select this new virtual microphone device as the Input Device in a Zoom call so that the other users in the call hear her with her voice processed.
My main concern is that I need to find a way to "route" the voice data captured from the physical microphone (e.g. the built-in mic) to the virtual microphone. I've spent some time reading the book "Learning Core Audio" by Adamson and Avila; in Chapter 8 the authors explain how to write an app that a) uses an AUHAL to capture data from the system's default input device and b) sends the data to the system's default output using an AUGraph. Following this example, I figured that I also need to create an app that captures the microphone data while it's running.
So, what I've done so far:
I've created the virtual microphone, for which I followed the NullAudio driver example from Apple.
I've created the app that captures the microphone data.
I'm certain that both of the above "modules" work as expected independently, since I've tested them in various ways. The only missing piece now is how to "connect" the physical mic with the virtual mic: I need to connect the output of the physical microphone to the input of the virtual microphone.
So, my questions are:
Is this something trivial that can be achieved using the AUGraph approach, as described in the book? Should I just find the correct way to configure the graph in order to achieve this connection between the two devices?
The only related thread I found is this, where the author states that the routing is done by
sending this audio data to the driver via a socket connection, so other apps that request audio from our virtual mic in fact get this audio from a user-space application that listens to the mic at the same time (so it should be active)
but I'm not quite sure how to even start implementing something like that.
The whole process I followed for capturing data from the microphone seems quite long, and I was wondering whether there is a more streamlined way to do it. The book is from 2012, with some corrections made in 2014. Has Core Audio changed dramatically since then, so that this can be achieved more easily with just a few lines of code?
I think you'll get more results by searching for the term "play through" instead of "routing".
The Adamson / Avila book has an ideal play-through example that, unfortunately for you, only works when both input and output are handled by the same device (e.g. the built-in hardware on most Mac laptops and iPhone/iPad devices).
Note that there is another audio device concept called "playthru" (see kAudioDevicePropertyPlayThru and related properties) which seems to be a form of routing internal to a single device. I wish it were a property that let you set a forwarding device, but alas, no.
Some informal documentation on this: https://lists.apple.com/archives/coreaudio-api/2005/Aug/msg00250.html
I've never tried it, but you should be able to connect input to output on an AUGraph like this. AUGraph is, however, deprecated in favour of AVAudioEngine, which, last time I checked, did not handle non-default input/output devices well.
I instead manually copy buffers from the input device to the output device via a ring buffer (TPCircularBuffer works well). The devil is in the detail, and much of the work is deciding on what properties you want and their consequences. Some common and conflicting example properties:
minimal lag
minimal dropouts
no time distortion
In my case, if output is lagging too much behind input, I brutally dump everything bar 1 or 2 buffers. There is some dated Apple sample code called CAPlayThrough which elegantly speeds up the output stream. You should definitely check this out.
And if you find a simpler way, please tell me!
Update
I found a simpler way:
create an AVCaptureSession that captures from your mic
add an AVCaptureAudioPreviewOutput that references your virtual device
When routing from microphone to headphones, it sounded like it had a few hundred milliseconds' lag, but if AVCaptureAudioPreviewOutput and your virtual device handle timestamps properly, that lag may not matter.

Low latency/high performance network (ethernet) messaging

Background
I want to create a test application to test the network performance of different systems. To do this, I plan to have the machine under test send Ethernet frames over a private (otherwise idle) network to another machine (or device) that simply receives the message and sends it back. The sending application will record the total round-trip time (among other things).
The purpose of the tests is to see how a particular system (OS + components etc.) performs when it comes to network traffic. This is illustrated as Machine A in the picture below. Note that I'm not interested in the performance of the networking infrastructure (switches, cables etc.); I'm trying to test the performance of network traffic inside Machine A (i.e. from when it hits the network card until it reaches user space).
We will (try to) measure all kinds of things: the total round trip of the message, but also things like the interrupt latency of Machine A, general driver overhead, etc. Machine A will be a real-time system. To support these tests, I need a separate machine that can bounce back messages and otherwise add network stimuli to the system under test. This separate machine is Machine B in the picture below, and it is what this question is about.
My problem
I want to develop an application that can receive and return these messages with as consistent (and preferably low) latency as possible. I'm hoping for latencies that are consistent to within a few microseconds at least. For simplicity, I'd like to do this on a general-purpose OS like Windows or Linux, but I'm open to other suggestions. There will be no other load (CPU or otherwise) on the machine besides the operating system and my test application.
I've thought of the following approaches:
A normal application running in user space with a high priority
A thread running in kernel space to avoid the userspace/kernelspace transitions
An off-the-shelf device that already does this (I haven't found one, though)
Questions
Are there any other approaches, or perhaps frameworks, that already do this? What else do I need to think about to achieve consistent and low latency? Which approach is recommended?
You mentioned that you want to test the internal performance of Machine A, but "need a separate machine"; yet, you don't want to test network infrastructure performance.
You know much more about your requirements than I do; however, if I were testing network performance inside Machine A, I would set up my test like this:
There are a couple of reasons for this:
You can use an Ethernet loopback cable to simulate the "pong" function performed by Machine B
Eliminating transit through infrastructure you don't care about is almost always a better solution when measuring performance
If you use this test method, be sure to note these points:
Ethernet performs a signal-to-noise test on the copper before it sets up a link. If you make your loopback bends too tight, you could introduce extra latency if Ethernet decides to fall back to a lower speed because of the kinks in the cable. There is no minimum length for copper Ethernet cabling.
As you're probably aware, the combination of NIC, driver version and OS can have a significant effect on intra-host latency. I work for a network equipment manufacturer, and one of the guys in the office used to work as an applications engineer for SolarFlare. He claims that many of the Wall Street trading systems use SolarFlare's NICs because of the low latency SolarFlare engineers into their products; he also said SolarFlare's drivers give you user-space access to the NIC buffers. Caveat: third-hand info that I cannot verify myself.
If you loop the frames back to Machine A, set the source and destination MAC addresses to the burned-in address of the NIC
Even if you need to receive a modified "pong" frame from Machine B, you could still use this topology and simply rewrite packet fields on the receive side of your code in Machine A. Put as many (or as few) instrumentation points as you like in Machine A's "modules" to compare frame timestamps.
FYI:
The embedded systems I mentioned in my comments on your question are for measuring latency of network infrastructure, not end hosts. This is the best method I can think of for instrumenting host latency.
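If you do end up implementing Machine B's "pong" in software, a minimal user-space echo can be stood up with socat for functional testing (a sketch only: the PIPE address simply writes every received byte back to the same connection, and a stock general-purpose OS will not give microsecond-consistent turnaround without tuning such as interrupt affinity and CPU isolation):

$ socat TCP4-LISTEN:5555,reuseaddr,fork PIPE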
As an off-the-shelf solution, I would suggest taking a look at Solace, Tibco and AMQP. These are all enterprise messaging frameworks used extensively in trading applications. AMQP is open source and capable of handling throughputs of up to 100,000 messages per second. I am not sure of the latencies of the other frameworks. There are Java and C++ implementations of the AMQP message router; the C++ one, of course, delivers higher performance.
Edit: I've just heard of a new product called UltraMessaging, which can provide 7,000,000 messages per second throughput with Java, C++ or C# clients. Crikey.

Simulating Slow Internet Connection [closed]

I know this is kind of an odd question, since I usually develop applications on the "assumption" that all users have a slow internet connection. But does anybody think there is a way to programmatically simulate a slow internet connection, so I can "see" how an application performs under various "connection speeds"?
I'm not worried about which language is used. And I'm not looking for code samples or anything, just interested in the logic behind it.
Starting with Chrome 38 you can do this without any plugins. Just open the developer tools (inspect element, or the F12 hotkey) and click "toggle device mode".
Among many other features, it allows you to simulate a specific internet connection type (3G, GPRS).
P.S. For people trying to limit the upload speed: sadly, at the current time it is not possible.
P.S.2: These days you do not need to toggle anything; the throttling panel is available right from the Network panel.
Note that by clicking on "No throttling" you can create your own custom throttling options.
If you're running Windows, Fiddler is a great tool. It has a setting to simulate modem speed, and for anyone who wants more control there is a plugin to add latency to each request.
I prefer using a tool like this to putting latency code in my application, as it is a much more realistic simulation, and it spares me from designing or coding the actual bits. The best code is code I don't have to write.
ADDED: This article at Pavel Donchev's blog on Software Technologies shows how to create custom simulated speeds: Limiting your Internet connection speed with Fiddler.
Google recommends:
Network Link Conditioner on OSX
Clumsy on Windows
Dummynet on Linux
On Linux machines you can use wondershaper:
$ sudo apt-get install wondershaper
$ sudo wondershaper {interface} {down} {up}
where {down} and {up} are bandwidths in kbps.
So, for example, if you want to limit the bandwidth of interface eth1 to 256kbps downlink and 128kbps uplink:
$ sudo wondershaper eth1 256 128
To clear the limit,
$ sudo wondershaper clear eth1
I have been using http://www.netlimiter.com/ and it works very well. It not only limits the speed of individual processes but also shows the actual transfer rates.
There are TCP proxies out there, like iprelay and Sloppy, that do bandwidth shaping to simulate slow connections. You can also do bandwidth shaping and simulate packet loss using IP filtering tools like ipfw and iptables.
You can try Dummynet; it can simulate queue and bandwidth limitations, delays, packet losses, and multipath effects.
Use a web debugging proxy with throttling features, like Charles or Fiddler.
You'll find them useful for web development in general. The major difference is that Charles is shareware, whereas Fiddler is free.
Also, for simulating a slow connection on some *nixes, you can try using ipfw. More information is provided by Ben Newman's answer on this Quora question
You can use NetEm (Network Emulation) as a proxy server to emulate many network characteristics (speed, delay, packet loss, etc.). It controls the networking using the iproute2 package and is enabled in the kernel of most Linux distributions.
It is controlled by the tc command-line application (from the iproute2 package), but there are also some web GUIs for NetEm, for example PHPnetemGUI2.
The advantage is that, as I wrote, it can emulate not only different network speeds but also packet loss, duplication and/or corruption, and random or fixed delay, so apart from slow connections you can also emulate various poorly performing networks and transmission errors.
For your application it's absolutely transparent: you can configure the operating system to use NetEm as a proxy so that all connections from that machine are routed through it, or you can configure only a specific application to use it.
I have been using it to test the performance of an Android app on various emulated poor-performance networks.
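To give a flavour of the tc syntax (eth0 below is a placeholder for your interface):

# add 100ms delay with 20ms jitter and 1% packet loss to outgoing traffic
$ sudo tc qdisc add dev eth0 root netem delay 100ms 20ms loss 1%
# inspect, then remove the emulation
$ tc qdisc show dev eth0
$ sudo tc qdisc del dev eth0 root

Note that netem attached to the root qdisc shapes egress traffic only; to emulate inbound impairments you have to redirect ingress traffic through an ifb device first.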
Use a tool like TCPMon. It can fake a slow connection.
Basically, you send it exactly the same request, it forwards that same request to the real server, and then it throttles the response back to you, releasing only the configured number of bytes at a time.
For Linux, the following list of papers might be useful:
A Comparative Study of Network Link Emulators (2009)
KauNet: A Versatile and Flexible Emulation System (2009)
Dummynet Revisited (2010)
Measuring Accuracy and Performance of Network Emulators (2015)
Personally, whilst Dummynet is good, I find NetEm to be the most versatile for my use cases; I'm usually interested in the effect of delays rather than bandwidth (e.g. WiFi connection issues), and it's super easy to emulate random packet loss/corruption, etc. It's also very accessible and free (unlike the hardware-based Linktropy).
On a side note, for Windows, Clumsy is awesome. I would also add that (regarding websites) browser throttling is not an accurate method for emulating real-life network issues (I think "TKK" commented on a few of the reasons why above).
Hope this helps someone!
One common case, shaping a single TCP connection, can actually be assembled from two pairs of socat and cpipe in UNIX fashion like this:
socat TCP-LISTEN:5555,reuseaddr,reuseport,fork SYSTEM:'cpipe -ngr -b 1 -s 10 | socat - "TCP:localhost:5000" | cpipe -ngr -b 1 -s 300'
This simulates a connection with a bandwidth of approximately 300kB/s from your service at :5000 and approximately 10kB/s towards it, listening on :5555 for incoming connections. Caveat: note that this is per connection, so each individual TCP connection gets this amount.
Explanation:
The outer (left) socat listens, with the given options, on :5555 as a forking server. The first cpipe command in the SYSTEM:... option then throttles the data that went into socket :5555 (and comes out of the first, outer socat) to at most 10kB/s. That data is then forwarded using another socat, which connects to localhost:5000 (where the service you want to slow down should be listening). Data from localhost:5000 is then put into the right cpipe command, which (with the given values) throttles it to about 300kB/s.
The option -ngr to cpipe is important. It causes cpipe to read non-greedily from its input file-descriptor. Otherwise, you might get stuck with data in the buffers not being forwarded and waiting for a reply.
Using the more common buffer tool instead of cpipe is likely possible as well.
(Credits: This is based on the "double-tee" recipe by Christophe Loor from the socat documentation)
Mac OS X since 10.10 has an app called Murus Firewall, which acts as a GUI to pf, the replacement for ipfw.
It works very well for system-wide or domain-specific throttling. I was just able to use it to slide my download speed between 300Kbps and 30Mbps to test how a streaming video player adjusts.
Updating this (9 years after it was asked) as the answer I was looking for wasn't mentioned:
Firefox also has presets for throttling connection speeds. Find them in the Network Monitor tab of the developer tools. Default is 'No throttling'.
Slowest is GPRS (Download speed: 50 Kbps, Upload speed: 20 Kbps, Minimum latency (ms): 500), ranging through 'good' and 'regular' 2G, 3G and 4G to DSL and WiFi (Download speed: 30Mbps, Upload speed: 15Mbps, Minimum latency (ms): 2).
More in the Dev Tools docs.
There is also another tool called WIPFW - http://wipfw.sourceforge.net/
It's a bit old school, but you can use it to simulate a slower connection. It's Windows-based, and it allows the administrator to monitor, for example, how much traffic the router is getting from a certain machine, or how much WWW traffic it is forwarding.
There is a simple and practical way to do it, without any application or code. Just connect to the internet using a mobile hotspot. Keep moving the hotspot (phone) away from the connected device to simulate slower networks. 😉

Testing file transfer speed across LAN/WAN

Is there a utility for Windows that allows you to test different aspects of file transfer operations across a LAN or a WAN?
Example...
How long does it take to move a file of a known size (500 MB or 1 GB) from Server A (on-site) to Server B (on-site) or to Server C (off-site satellite location)?
D-ITG will allow you to test many aspects of your links. It does not necessarily allow you to transfer a file directly, but it allows you to control almost all aspects of the transmission of data across the wire.
If all you are interested in is the bulk transfer time (and not all the nitty-gritty details), you could just use a basic FTP application and time the transfer.
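For example, curl can fetch a file over FTP and print the elapsed time and average rate in one go (the host and path below are made up; use -o /dev/null instead of NUL on non-Windows machines):

curl -o NUL -w "%{time_total} s, %{speed_download} bytes/s\n" ftp://ServerC/testfiles/1GB.bin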
Probably nothing you've not already figured out. You could get some coarse-grained metrics using a batch file to coordinate:
start monitoring
copy file
stop monitoring
Copy file might just be initiating a file copy between two nodes on the LAN, or it might initiate an FTP copy between two nodes on the WAN.
Monitoring could be as basic as writing the current time to the console or to a file, or as complex as adding performance-counter metrics from the network adapters on the two machines.
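For instance, the whole start/copy/stop sequence collapses into a one-liner in PowerShell (the UNC paths are placeholders):

# time a copy from Server A to Server B; the result's TotalSeconds property holds the elapsed time
Measure-Command { Copy-Item '\\ServerA\share\test-1GB.bin' '\\ServerB\share\' }

Dividing the file size by TotalSeconds gives throughput; repeat a few runs to average out caching effects.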
A commercial WAN emulator would also give you the information you're looking for. I've used the Shunra appliance successfully in the past. It's pretty expensive, so I'd really only recommend it if critical business success rides on understanding how application behavior could change based on network conditions, and if it's something you can incorporate into regular testing activities.
