What is the role of X11 in x11vnc?

I understand that the RFB protocol is used for remote display. x11vnc uses the RFB protocol so that any (RFB-based) VNC viewer can view the display.
Question:
Let's assume I have a framebuffer, /dev/fb0 for example. I could just write and run an application that reads from the framebuffer and serves it over the RFB protocol. In that case, how does x11vnc differ from such an application?
Also, x11vnc itself provides an option to use a raw framebuffer. What is the difference between using and not using this option?

x11vnc uses X11 requests to get your screen updates: via the Composite/Damage extensions when available, or by issuing GetImage requests at intervals and diffing the result against a local copy. You want to know not only the current image of the screen at any point in time, but also when it changed and which area was affected. Also, with x11vnc you can track an individual window instead of the whole screen; there is a fair number of X11 features on top of a plain RFB server.
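To make the polling path concrete, here is a compressed sketch of the "GetImage at intervals and diff" idea (my own illustration, not x11vnc's actual code, and ignoring the Damage extension, which would instead deliver events describing the changed regions). It assumes Xlib and polls the root window:

```c
/* Sketch of GetImage polling + diffing (illustrative only, not x11vnc code).
 * Build with: cc poll.c -lX11 */
#include <X11/Xlib.h>
#include <X11/Xutil.h>   /* XDestroyImage */
#include <string.h>
#include <unistd.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    Window root = DefaultRootWindow(dpy);
    int w = DisplayWidth(dpy, DefaultScreen(dpy));
    int h = DisplayHeight(dpy, DefaultScreen(dpy));

    XImage *prev = NULL;
    for (;;) {
        XImage *cur = XGetImage(dpy, root, 0, 0, w, h, AllPlanes, ZPixmap);
        if (prev) {
            /* Diff row by row: the changed rows are what an RFB server
             * would encode and push to viewers as update rectangles. */
            for (int y = 0; y < h; y++) {
                if (memcmp(prev->data + y * prev->bytes_per_line,
                           cur->data  + y * cur->bytes_per_line,
                           (size_t)cur->bytes_per_line) != 0) {
                    /* row y is dirty -> remember it for the next RFB update */
                }
            }
            XDestroyImage(prev);
        }
        prev = cur;
        usleep(100 * 1000);   /* poll ~10 times per second */
    }
}
```

Real servers avoid full-screen GetImage calls (they scan smaller regions and use the Damage extension when present), but the update-detection idea is the same.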


Recreating display output from X11 Stream

I have two computers which are used to control an industrial plant. One of them controls the plant; the other is used as a failsafe. They are directly connected over Ethernet, and the "inactive" one just mirrors the display of the main controller.
I captured the network traffic between the two, and when I open it in Wireshark I see it's all X11 traffic. It includes the initial connection request and also all the "draw calls" in plain text.
I now want to "replay" this captured stream and recreate the screen content from it. Is there any program available which can do so? Ideally it would work directly from the Wireshark capture file.
My thoughts so far:
I can easily replay the network data itself and send it to some socket, but the communication is specific to the session; e.g. some commands refer to specific handle values set up earlier. It's unlikely that a new session would work with the same values, so I can't just pipe it into some program.
What you see from your connection is only your own connection's requests, plus events relevant to the windows you created (or other clients' windows on which your connection sets an event mask), so quite a lot is lost. I'm not aware of any program that can reconstruct the best possible version of the screen from a single client's traffic, but it's certainly not possible to obtain a 100% accurate copy of the screen, and the best possible model will be far from the real screen (unless your connection periodically polls the backing-store content of each mapped window).

How can I capture microphone data and route it to a virtual microphone device?

Recently, I wanted to get my hands dirty with Core Audio, so I started working on a simple desktop app that applies effects (e.g. echo) to the microphone data in real time, so that the processed data can then be used in communication apps (e.g. Skype, Zoom, etc.).
To do that, I figured I have to create a virtual microphone in order to send the processed (effects-applied) data to communication apps. For example, the user would need to select this new virtual microphone device as the input device in a Zoom call so that the other users in the call can hear her voice with the effects applied.
My main concern is that I need to find a way to "route" the voice data captured from the physical microphone (e.g. the built-in mic) to the virtual microphone. I've spent some time reading the book "Learning Core Audio" by Adamson and Avila, and in Chapter 8 the authors explain how to write an app that a) uses an AUHAL to capture data from the system's default input device and b) then sends the data to the system's default output using an AUGraph. Following this example, I figured that I also need to create an app that captures the microphone data, but only while it is running.
So, what I've done so far:
I've created the virtual microphone, for which I followed the NullAudio driver example from Apple.
I've created the app that captures the microphone data.
I'm certain that both of the above "modules" work as expected independently, since I've tested them in various ways. The only missing piece now is how to "connect" the physical mic to the virtual mic: I need to connect the output of the physical microphone to the input of the virtual microphone.
So, my questions are:
Is this something trivial that can be achieved using the AUGraph approach, as described in the book? Should I just find the correct way to configure the graph in order to achieve this connection between the two devices?
The only related thread I found is this, where the author states that the routing is done by
sending this audio data to the driver via a socket connection, so other apps that request audio from our virtual mic in fact get this audio from a user-space application that listens to the mic at the same time (so it should be active)
but I'm not quite sure how to even start implementing something like that.
The whole process I went through for capturing data from the microphone seems quite long, and I was wondering whether there's a more optimal way to do it. The book seems to be from 2012, with some corrections made in 2014. Has Core Audio changed dramatically since then, so that this process can now be achieved more easily with just a few lines of code?
I think you'll get more results by searching for the term "play through" instead of "routing".
The Adamson/Avila book has an ideal play-through example that, unfortunately for you, only works when both input and output are handled by the same device (e.g. the built-in hardware on most Mac laptops and iPhone/iPad devices).
Note that there is another audio device concept called "playthru" (see kAudioDevicePropertyPlayThru and related properties) which seems to be a form of routing internal to a single device. I wish it were a property that let you set a forwarding device, but alas, no.
Some informal doco on this: https://lists.apple.com/archives/coreaudio-api/2005/Aug/msg00250.html
I've never tried it, but you should be able to connect input to output on an AUGraph like this. AUGraph is, however, deprecated in favour of AVAudioEngine, which, last time I checked, did not handle non-default input/output devices well.
I instead manually copy buffers from the input device to the output device via a ring buffer (TPCircularBuffer works well; a sketch follows below). The devil is in the details, and much of the work is deciding which properties you want and what their consequences are. Some common and conflicting example properties:
minimal lag
minimal dropouts
no time distortion
In my case, if output is lagging too much behind input, I brutally dump everything bar 1 or 2 buffers. There is some dated Apple sample code called CAPlayThrough which elegantly speeds up the output stream. You should definitely check this out.
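For what it's worth, here is a compressed sketch of that ring-buffer copy (my own illustration, not code from the book or from Apple). It assumes an input AUHAL unit and an output unit already configured with matching, interleaved stream formats, a pre-allocated AudioBufferList, and the TPCircularBuffer API (TPCircularBufferProduceBytes / TPCircularBufferTail / TPCircularBufferConsume); the struct and callback names are made up, and error handling is omitted:

```c
/* Sketch of ring-buffer play-through (not production code). */
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>
#include "TPCircularBuffer.h"   /* https://github.com/michaeltyson/TPCircularBuffer */

typedef struct {
    AudioUnit        inputUnit;   /* AUHAL capturing the physical mic        */
    TPCircularBuffer ring;        /* FIFO shared between the two callbacks   */
    AudioBufferList *scratch;     /* pre-allocated list for AudioUnitRender  */
} PlayThrough;                    /* hypothetical helper struct */

/* Input callback: pull freshly captured frames, push them into the ring. */
static OSStatus InputProc(void *inRefCon, AudioUnitRenderActionFlags *flags,
                          const AudioTimeStamp *ts, UInt32 bus,
                          UInt32 frames, AudioBufferList *unused) {
    PlayThrough *pt = (PlayThrough *)inRefCon;
    OSStatus err = AudioUnitRender(pt->inputUnit, flags, ts, bus, frames, pt->scratch);
    if (err == noErr)
        TPCircularBufferProduceBytes(&pt->ring,
                                     pt->scratch->mBuffers[0].mData,
                                     pt->scratch->mBuffers[0].mDataByteSize);
    return err;
}

/* Output callback: feed the device from the ring; pad with silence on underrun. */
static OSStatus OutputProc(void *inRefCon, AudioUnitRenderActionFlags *flags,
                           const AudioTimeStamp *ts, UInt32 bus,
                           UInt32 frames, AudioBufferList *ioData) {
    PlayThrough *pt = (PlayThrough *)inRefCon;
    UInt32 wanted = ioData->mBuffers[0].mDataByteSize;
    uint32_t avail = 0;           /* may be int32_t in older TPCircularBuffer versions */
    void *src = TPCircularBufferTail(&pt->ring, &avail);
    UInt32 n = avail < wanted ? avail : wanted;
    memset(ioData->mBuffers[0].mData, 0, wanted);   /* silence padding */
    if (src && n > 0) {
        memcpy(ioData->mBuffers[0].mData, src, n);
        TPCircularBufferConsume(&pt->ring, n);
    }
    return noErr;
}
```

The "pad with silence on underrun" branch is exactly where the lag/dropout/time-distortion trade-offs listed above get decided.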
And if you find a simpler way, please tell me!
Update
I found a simpler way:
create an AVCaptureSession that captures from your mic
add an AVCaptureAudioPreviewOutput that references your virtual device
When routing from microphone to headphones, it sounded like it had a few hundred milliseconds' lag, but if AVCaptureAudioPreviewOutput and your virtual device handle timestamps properly, that lag may not matter.

Connect EASession after entering an iBeacon region

I wonder if the background time that an app receives when entering an iBeacon region can be used for opening a session with an external accessory object. Can the app then also keep running in the background while the session remains open?
This is of course assuming that the external-accessory background mode is enabled.
While I have not tried this myself, I do not see any reason why it wouldn't work. You would need to have a legitimate reason in Apple's eyes to use the external-accessory background mode in order to get your app approved.
Assuming you have this, as soon as the app sees an iBeacon in the background, the app gets about 5 seconds of run time, which should generally be sufficient to establish a connection to an external accessory. If the connection is made, and the app is exchanging data with it regularly, the external-accessory background mode should keep it going in the background indefinitely, so long as the connection to the external accessory stays open. So long as this is true, the app will probably be able to get ranging updates in the background indefinitely, too.
It doesn't take a huge leap of logic to see that you could build a device that behaves as an external accessory and an iBeacon simultaneously, and use this to get around the usual iBeacon background rules. That said, you'd need to be careful that Apple doesn't think this is illegitimate, or the app could get rejected quickly. To avoid this rejection, you probably would need to be providing some benefit to the end-user via that external accessory channel.

Drop selected packets at the link layer

I need to dump some incoming packets and then prevent them from going up the stack, so that applications won't process them.
Now, tcpdump works at layer 2, right? So ideally I should find some tool that I'd use right after tcpdump that drops selected packets. The filter I apply in tcpdump and the filter used to drop packets would be the same.
Anything that already does this?
Now, tcpdump works at layer 2, right? So ideally I should find some tool that I'd use right after tcpdump that drops selected packets.
Tcpdump captures from a network at the link layer, yes. However, "captures", in this case, means "passively taps into the network, getting copies of all packets received and sent". It does not tap into the network in a fashion that allows it to prevent those packets from being processed by the network stack. Think of it as being similar to tapping a phone line - whoever's tapping the line can listen to the conversation, but they can't prevent somebody on one side of the conversation from hearing what the person on the other side says.
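To illustrate the "copies only" point: a capture program built on libpcap (the library tcpdump uses) looks roughly like the sketch below; the interface name and filter expression are just examples. Nothing in it can return a verdict on a packet, it merely observes:

```c
/* Passive capture sketch with libpcap: we only ever see copies of packets;
 * nothing here can stop the kernel from delivering them to applications.
 * Build with: cc tap.c -lpcap */
#include <pcap/pcap.h>
#include <stdio.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *hdr,
                      const u_char *bytes) {
    (void)user; (void)bytes;
    printf("saw a copy of a %u-byte packet\n", hdr->len);
}

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);   /* example interface */
    if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }

    struct bpf_program prog;
    pcap_compile(p, &prog, "udp port 9999", 1, PCAP_NETMASK_UNKNOWN); /* example filter */
    pcap_setfilter(p, &prog);

    pcap_loop(p, -1, on_packet, NULL);   /* observe forever; never drops anything */
    pcap_close(p);
    return 0;
}
```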
Anything that already does this?
There might be, but the mechanism that it would use to do so is probably going to be very dependent on the operating system it's running on. What operating system is the machine on which you need to trap the packets running?
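As an illustration of how OS-specific such a tool has to be: if the machine happens to run Linux (an assumption; you haven't said), one common mechanism is to divert the matching packets into an NFQUEUE with an iptables rule and have a small user-space program issue DROP verdicts via libnetfilter_queue. Note that this hooks in at the netfilter/IP layer rather than strictly at the link layer; the queue number and filter rule below are examples:

```c
/* Sketch (assumption: Linux) of dropping selected packets with libnetfilter_queue.
 * The kernel diverts matching packets to queue 0, and this program returns a
 * DROP verdict, so they never reach the application. Example rule:
 *   iptables -I INPUT -p udp --dport 9999 -j NFQUEUE --queue-num 0
 * Build with: cc drop.c -lnetfilter_queue */
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>
#include <linux/netfilter.h>                       /* NF_ACCEPT, NF_DROP */
#include <libnetfilter_queue/libnetfilter_queue.h>

static int on_packet(struct nfq_q_handle *qh, struct nfgenmsg *msg,
                     struct nfq_data *pkt, void *data) {
    (void)msg; (void)data;
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(pkt);
    uint32_t id = ph ? ntohl(ph->packet_id) : 0;

    unsigned char *payload;
    int len = nfq_get_payload(pkt, &payload);
    (void)len;  /* ...inspect/dump payload here, much like tcpdump would... */

    return nfq_set_verdict(qh, id, NF_DROP, 0, NULL);   /* or NF_ACCEPT to pass */
}

int main(void) {
    struct nfq_handle *h = nfq_open();
    nfq_unbind_pf(h, AF_INET);                /* no-ops on recent kernels */
    nfq_bind_pf(h, AF_INET);
    struct nfq_q_handle *qh = nfq_create_queue(h, 0, &on_packet, NULL);
    nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);   /* copy full packet to us */

    char buf[4096];
    int fd = nfq_fd(h), n;
    while ((n = recv(fd, buf, sizeof buf, 0)) >= 0)
        nfq_handle_packet(h, buf, n);

    nfq_destroy_queue(qh);
    nfq_close(h);
    return 0;
}
```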

Simulating Slow Internet Connection

I know this is kind of an odd question, since I usually develop applications based on the "assumption" that all users have a slow internet connection. But does anybody know of a way to programmatically simulate a slow internet connection, so I can "see" how an application performs under various "connection speeds"?
I'm not worried about which language is used. And I'm not looking for code samples or anything, just interested in the logic behind it.
Starting with Chrome 38 you can do this without any plugins. Just open the developer tools (Inspect Element, or the F12 hotkey) and click "Toggle device mode".
Among many other features, it allows you to simulate a specific internet connection (3G, GPRS).
P.S. For people who are trying to limit the upload speed: sadly, at the current time this is not possible.
P.S. 2: You no longer need to toggle anything; the throttling panel is available right from the Network panel.
Note that by clicking on "No throttling" you can create your own custom throttling options.
If you're running Windows, Fiddler is a great tool. It has a setting to simulate modem speeds, and for someone who wants more control there is a plugin to add latency to each request.
I prefer using a tool like this to putting latency code in my application, as it is a much more realistic simulation, and it saves me from designing or coding the actual bits. The best code is code I don't have to write.
ADDED: This article at Pavel Donchev's blog on Software Technologies shows how to create custom simulated speeds: Limiting your Internet connection speed with Fiddler.
Google recommends:
Network Link Conditioner on OSX
Clumsy on Windows
Dummynet on Linux
On Linux machines you can use wondershaper:
apt-get install wondershaper
$ sudo wondershaper {interface} {down} {up}
The {down} and {up} values are bandwidth in kbps.
So, for example, if you want to limit the bandwidth of interface eth1 to 256 kbps downlink and 128 kbps uplink:
$ sudo wondershaper eth1 256 128
To clear the limit,
$ sudo wondershaper clear eth1
I was using http://www.netlimiter.com/ and it works very well. Not only does it limit the speed of individual processes, it also shows the actual transfer rates.
There are TCP proxies out there, like iprelay and Sloppy, that do bandwidth shaping to simulate slow connections. You can also do bandwidth shaping and simulate packet loss using IP filtering tools like ipfw and iptables.
You can try Dummynet; it can simulate queue and bandwidth limitations, delays, packet losses, and multipath effects.
Use a web debugging proxy with throttling features, like Charles or Fiddler.
You'll find them useful for web development in general. The major difference is that Charles is shareware, whereas Fiddler is free.
Also, for simulating a slow connection on some *nixes, you can try using ipfw. More information is provided by Ben Newman's answer on this Quora question.
You can use NetEm (Network Emulation) as a proxy server to emulate many network characteristics (speed, delay, packet loss, etc.). It controls the networking using the iproute2 package, and it's enabled in the kernel of most Linux distributions.
It is controlled by the tc command-line application (from the iproute2 package), but there are also some web interface GUIs for NetEm, for example PHPnetemGUI2.
The advantage is that, as I wrote, it can emulate not only different network speeds but also, for example, packet loss, duplication and/or corruption, random or fixed delay, etc., so apart from slow connections you can also emulate various poorly performing networks and transmission errors.
For your application it's absolutely transparent: you can configure the operating system to use NetEm as a proxy server, so that all connections from that machine are routed through it, or you can configure only a specific application to use that proxy.
I have been using it to test the performance of an Android app on various emulated poor-performance networks.
Use a tool like TCPMon. It can fake a slow connection.
Basically, you send it the exact same request, it forwards that request to the real server, and it then delays the response, releasing only the configured number of bytes at a time.
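Since you said you're mainly interested in the logic, here is a minimal single-connection sketch of that idea (my own illustration, not TCPMon's code): a forwarder that relays bytes between the client and the real server and sleeps after each response chunk so the downstream rate stays near a target. The ports, rate, and helper name are made up, and error handling is omitted:

```c
/* Minimal single-connection throttling forwarder (illustrative sketch only).
 * Listens on LISTEN_PORT, relays to UPSTREAM_HOST:UPSTREAM_PORT, and caps
 * the server->client direction by sleeping after each chunk. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#define LISTEN_PORT        5555         /* hypothetical: where the app connects */
#define UPSTREAM_HOST      "127.0.0.1"  /* hypothetical: the real server        */
#define UPSTREAM_PORT      8080
#define RATE_BYTES_PER_SEC 16384        /* ~16 kB/s "slow" link                 */
#define CHUNK              1024

/* Relay one chunk; sleep afterwards if this direction is throttled.
 * Returns 0 when the source side has closed. */
static int pump(int from, int to, int throttled) {
    char buf[CHUNK];
    ssize_t n = read(from, buf, sizeof buf);
    if (n <= 0) return 0;
    write(to, buf, (size_t)n);
    if (throttled)
        usleep((useconds_t)(1000000L * n / RATE_BYTES_PER_SEC));
    return 1;
}

int main(void) {
    /* Accept one client. */
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in la = {0};
    la.sin_family = AF_INET;
    la.sin_port = htons(LISTEN_PORT);
    la.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(ls, (struct sockaddr *)&la, sizeof la);
    listen(ls, 1);
    int client = accept(ls, NULL, NULL);

    /* Connect to the real server. */
    int server = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in ua = {0};
    ua.sin_family = AF_INET;
    ua.sin_port = htons(UPSTREAM_PORT);
    inet_pton(AF_INET, UPSTREAM_HOST, &ua.sin_addr);
    connect(server, (struct sockaddr *)&ua, sizeof ua);

    /* Relay both directions; only throttle the response path. */
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(client, &fds);
        FD_SET(server, &fds);
        int maxfd = client > server ? client : server;
        if (select(maxfd + 1, &fds, NULL, NULL, NULL) < 0) break;
        if (FD_ISSET(client, &fds) && !pump(client, server, 0)) break;
        if (FD_ISSET(server, &fds) && !pump(server, client, 1)) break;
    }
    return 0;
}
```

A real tool adds concurrency, latency injection, and separate up/down limits, but the throttling core is just "forward a chunk, then sleep long enough that the average rate matches the target".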
For Linux, the following list of papers might be useful:
A Comparative Study of Network Link Emulators (2009)
KauNet: A Versatile and Flexible Emulation System (2009)
Dummynet Revisited (2010)
Measuring Accuracy and Performance of Network Emulators (2015)
Personally, whilst Dummynet is good, I find NetEm to be the most versatile for my use-cases; I'm usually interested in the effect of delays, rather than bandwidth (i.e. WiFi connection issues), and it's super-easy to emulate random packet loss/corruption, etc. It's also very accessible, and free (unlike the hardware-based Linktropy).
On a side-note, for Windows, Clumsy is awesome. I would also like to add that (regarding websites) browser throttling is not an accurate method for emulating real-life network issues (I think "TKK" commented on a few of the reasons why above).
Hope this helps someone!
One common case, shaping a single TCP connection, can actually be assembled from two pairs of socat and cpipe in UNIX fashion, like this:
socat TCP-LISTEN:5555,reuseaddr,reuseport,fork SYSTEM:'cpipe -ngr -b 1 -s 10 | socat - "TCP:localhost:5000" | cpipe -ngr -b 1 -s 300'
This listens on :5555 for incoming connections and simulates a link with a bandwidth of approximately 300 kB/s from your service at :5000 and approximately 10 kB/s towards it. Caveat: note that this is per connection, so each individual TCP connection gets this amount.
Explanation:
The outer (left) socat listens with the given options on :5555 as a forking server. The first cpipe command in the SYSTEM:... option then throttles the data that went into socket :5555 (and comes out of the first, outer socat) to at most 10 kB/s. That data is then forwarded using another socat, which connects to localhost:5000 (where the service you want to slow down should be listening). Data from localhost:5000 is then put into the right cpipe command, which (with the given values) throttles it to about 300 kB/s.
The option -ngr to cpipe is important. It causes cpipe to read non-greedily from its input file-descriptor. Otherwise, you might get stuck with data in the buffers not being forwarded and waiting for a reply.
Using the more common buffer tool instead of cpipe is likely possible as well.
(Credits: This is based on the "double-tee" recipe by Christophe Loor from the socat documentation)
Since 10.10, Mac OS X uses pf as the replacement for ipfw, and there is an app called Murus Firewall that acts as a GUI for it.
It works very well for system-wide or domain-specific throttling. I was just able to use it to slide my download speed between 300Kbps and 30Mbps to test how a streaming video player adjusts.
Updating this (9 years after it was asked) as the answer I was looking for wasn't mentioned:
Firefox also has presets for throttling connection speeds. Find them in the Network Monitor tab of the developer tools. Default is 'No throttling'.
Slowest is GPRS (Download speed: 50 Kbps, Upload speed: 20 Kbps, Minimum latency (ms): 500), ranging through 'good' and 'regular' 2G, 3G and 4G to DSL and WiFi (Download speed: 30Mbps, Upload speed: 15Mbps, Minimum latency (ms): 2).
More in the Dev Tools docs.
There is also another tool called WIPFW - http://wipfw.sourceforge.net/
It's a bit old school, but you can use it to simulate a slower connection. It's Windows based, and the tool allows the administrator to monitor how much traffic the router is getting from a certain machine, or how much WWW traffic it is forwarding, for example.
There is a simple and practical way to do it, without any application or code. Just connect to the internet using a mobile hotspot. Keep moving the hotspot (phone) away from the connected device to simulate slower networks. 😉
