Sorry about the length; it's kinda necessary.
Introduction
I'm developing remote desktop software (just for fun) in C# 4.0 for Windows Vista/7. I've gotten past the basic obstacles: I have a robust UDP messaging system, a relatively clean program design, a mirror driver (the free DFMirage mirror driver from DemoForge) up and running, and I've implemented NAT traversal for all NAT types except Symmetric NATs (common in corporate firewall situations).
Regarding screen transfer/sharing, thanks to the mirror driver, I'm automatically notified of changed screen regions and I can simply marshal the mirror driver's ever-changing screen bitmap to my own bitmap. Then I compress the screen region as a PNG and send it off from the server to my client. Things are looking pretty good, but it's not fast enough. It's just as slow as VNC (btw, I don't use the VNC protocol, just a custom amateur protocol).
From the slowest remote desktop software to the fastest, the list usually begins at all the VNC-like implementations, then climbs up to Microsoft Windows Remote Desktop...and then...TeamViewer. Not quite sure about CrossLoop or LogMeIn - I haven't used them - but TeamViewer is insanely fast. It's quite literally live. I ran a tree command in Command Prompt and it updated with about 20 ms of delay. I can browse the web just a few milliseconds slower than on my laptop. Scrolling code vertically in Visual Studio lags by about 50 ms. Think about how robust TeamViewer's screen-transfer solution must be to accomplish all this.
At their worst, VNC implementations use poll-based hooks for detecting screen changes and brute-force screen capturing/comparing. At their best, they use a mirror driver like DFMirage. That's the level I'm at. And they use something called the RFB protocol.
Microsoft Windows Remote Desktop apparently goes one step further than VNC. I heard, somewhere on Stack Overflow, that Windows Remote Desktop doesn't send screen bitmaps, but actual drawing commands. That's quite brilliant, because it can send tiny instructions instead of pixels ("draw this rectangle at this coordinate and color it with this gradient"). Remote Desktop really is pretty fast, and it's the standard way of working from home. And it uses something called the RDP protocol.
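To see why that's such a win, compare sizes: a full raw frame is megabytes, while a drawing command serializes to a handful of bytes. A toy illustration in C# (my own made-up opcode and layout, not the actual RDP wire format):

```csharp
using System.IO;

// Toy example: a "fill rectangle" drawing command, serialized the way a
// command-based protocol might do it. 13 bytes instead of megabytes of pixels.
struct FillRect
{
    public short X, Y, Width, Height;
    public int ArgbColor;

    public byte[] Serialize()
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write((byte)0x01);          // opcode: FillRect (invented)
            w.Write(X); w.Write(Y);       // 2 bytes each
            w.Write(Width); w.Write(Height);
            w.Write(ArgbColor);           // 4 bytes
            return ms.ToArray();          // 1 + 4*2 + 4 = 13 bytes total
        }
    }
}
```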
Now, TeamViewer is a complete mystery to me. Apparently they released their source code for Version 2 (TeamViewer is at Version 7 as of February 2012). People have read it and said that Version 2 is useless - that it's just a few improvements over VNC with automatic NAT traversal.
But Version 7...it's ridiculously fast now. I mean, it's actually faster than Windows Remote Desktop. I've streamed DirectX 3D games with TeamViewer (at 1 fps, but Windows Remote Desktop doesn't even allow DirectX to run).
By the way, TeamViewer does all this without a mirror driver. There is an option to install one, and it gets just a bit faster.
The Question
My question is: how is TeamViewer so fast? It doesn't seem possible. At 1920 by 1080 resolution and even 24-bit depth (16-bit depth would be noticeably ugly), that's still 6,220,800 bytes per raw frame. Even using libjpeg-turbo (one of the fastest JPEG compression libraries, used by large corporations) and being extremely generous by assuming it compresses each frame down to 30 KB, the result would still take time to route through TeamViewer's servers (TeamViewer bypasses corporate Symmetric NATs by simply proxying traffic through its servers), and the compression itself takes time: high-quality JPEG compression of a full 1920 by 1080 screenshot takes 175 milliseconds on my machine, and that number goes up if the host runs an Atom processor. I simply don't understand how TeamViewer has optimized its screen transfer so well.

Again: small images might be highly compressed, but take at least tens of milliseconds to compress; large images take no time to compress, but take a long time to get through. Somehow, TeamViewer completes this entire process at roughly 20-25 frames per second. I've used a network monitor, and TeamViewer stays lagless at speeds of 500 Kbps and 1 Mbps (VNC software lags for a few seconds at those transfer rates). During my tree Command Prompt test, TeamViewer was receiving inbound data at a rate of 1 Mbps and still rendering 5-6 fps. VNC and Remote Desktop don't do that. So, how?
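To make the arithmetic explicit (a back-of-envelope check using only the numbers above):

```
1920 × 1080 px × 3 B/px        = 6,220,800 B   ≈ 5.9 MiB per raw frame
6,220,800 B/frame × 25 fps     ≈ 148 MiB/s     of raw pixel data
30 KB/frame × 25 fps           = 750 KB/s      ≈ 6 Mbit/s even as generous JPEG
1 Mbit/s observed ÷ 25 fps     ≈ 5 KB/frame    actual per-frame budget
```

Even the generous full-frame JPEG estimate overshoots the observed bandwidth several times over, so whatever TeamViewer sends, it cannot be full frames.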
The answers will be somewhat complicated and intricate, so please don't post your $0.02 if you're only going to say it's because they use UDP instead of TCP (would you believe they actually use TCP just as successfully, though?).
I'm hoping there's a TeamViewer developer somewhere here on StackOverflow.
Potential Answers
Will update this once people reply.
My thoughts are, first of all, that TeamViewer has very fine network control. For example, they split large packets to just under the MTU size and never waste a trip. They probably have all sorts of fancy hooks to detect screen changes along with extremely fast XOR image comparisons.
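For a concrete idea of the "fast XOR image comparison" part, here is a minimal sketch (C#; the 64×64 tile size and 32-bpp layout are my own choices, nothing TeamViewer-specific) that finds dirty tiles by comparing two raw frames eight bytes at a time:

```csharp
using System;
using System.Collections.Generic;

static class DirtyTiles
{
    // Frame buffers are raw 32-bpp bitmaps: width * height * 4 bytes.
    // Returns indices of 64x64-pixel tiles whose contents changed.
    public static List<int> Diff(byte[] prev, byte[] curr, int width, int height)
    {
        const int tile = 64;
        var dirty = new List<int>();
        int tilesX = (width + tile - 1) / tile;
        int tilesY = (height + tile - 1) / tile;

        for (int ty = 0; ty < tilesY; ty++)
            for (int tx = 0; tx < tilesX; tx++)
                if (TileChanged(prev, curr, width, tx * tile, ty * tile, tile, height))
                    dirty.Add(ty * tilesX + tx);
        return dirty;
    }

    static bool TileChanged(byte[] a, byte[] b, int width,
                            int x0, int y0, int tile, int height)
    {
        int xEnd = Math.Min(x0 + tile, width);
        int yEnd = Math.Min(y0 + tile, height);
        for (int y = y0; y < yEnd; y++)
        {
            int i = (y * width + x0) * 4;
            int end = i + (xEnd - x0) * 4;
            // Compare 64-bit words first (a real XOR diff would also tell
            // you which bits changed), then any leftover bytes.
            for (; i + 8 <= end; i += 8)
                if (BitConverter.ToUInt64(a, i) != BitConverter.ToUInt64(b, i))
                    return true;
            for (; i < end; i++)
                if (a[i] != b[i]) return true;
        }
        return false;
    }
}
```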
The most fundamental thing here is probably that you don't want to transmit static images, but only changes to the images, which is essentially analogous to a video stream.
My best guess is some very efficient (and heavily specialized and optimized) motion compensation algorithm, because most of the actual change in generic desktop usage is linear movement of elements (scrolling text, moving windows, etc., as opposed to transformation of elements).
The DirectX 3D performance of 1 FPS seems to confirm my guess to some extent.
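To make that guess concrete: even without full block matching, pure vertical scrolling is cheap to detect. A sketch (C#, my own illustration, not TeamViewer's actual algorithm): hash every row of the previous and current frames, then find the vertical shift at which the most row hashes line up. The receiver can reuse the shifted rows, and only the newly exposed strip needs encoding.

```csharp
static class ScrollDetect
{
    // One cheap FNV-1a hash per row of a raw 32-bpp frame.
    public static ulong[] HashRows(byte[] frame, int width, int height)
    {
        var hashes = new ulong[height];
        int stride = width * 4;
        for (int y = 0; y < height; y++)
        {
            ulong h = 14695981039346656037UL;           // FNV offset basis
            for (int i = y * stride; i < (y + 1) * stride; i++)
                h = (h ^ frame[i]) * 1099511628211UL;   // FNV prime
            hashes[y] = h;
        }
        return hashes;
    }

    // Returns the vertical offset (in rows) that best maps prev onto curr;
    // 0 means nothing beats "no movement".
    public static int BestScroll(ulong[] prev, ulong[] curr, int maxShift)
    {
        int bestShift = 0, bestScore = Score(prev, curr, 0);
        for (int s = -maxShift; s <= maxShift; s++)
        {
            if (s == 0) continue;
            int score = Score(prev, curr, s);
            if (score > bestScore) { bestScore = score; bestShift = s; }
        }
        return bestShift;
    }

    // Counts rows that match when prev is shifted down by 'shift' rows.
    static int Score(ulong[] prev, ulong[] curr, int shift)
    {
        int matches = 0;
        for (int y = 0; y < curr.Length; y++)
        {
            int src = y - shift;
            if (src >= 0 && src < prev.Length && prev[src] == curr[y])
                matches++;
        }
        return matches;
    }
}
```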
"would take time to route through TeamViewer's servers (TeamViewer bypasses corporate Symmetric NATs by simply proxying traffic through their servers)"
You'll find that TeamViewer rarely needs to relay traffic through its own servers. TeamViewer penetrates NATs, and networks complicated by NAT, using NAT traversal (I think it's UDP hole punching, like Google's libjingle).
They do use their own servers as a middle-man for the handshake and connection set-up, but most of the time the relationship between client and server will be P2P (best case, when the handshake is successful). If NAT traversal fails, then TeamViewer will indeed relay traffic through its own servers.
I've only ever seen it do this when a client has been behind double-NAT, though.
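For reference, the core of UDP hole punching is small. A sketch in C# (the rendezvous server's message format here is entirely invented for illustration): both peers learn each other's public endpoint from the rendezvous server, then fire packets at each other so that each NAT creates an outbound mapping the other peer's packets can ride in through.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

static class HolePunch
{
    public static void Punch(string rendezvousHost, int rendezvousPort, string sessionId)
    {
        var udp = new UdpClient(0);   // bind an ephemeral local port

        // 1. Register with the rendezvous server; it records our public
        //    IP:port (as rewritten by our NAT) and tells us the peer's.
        var server = new IPEndPoint(Dns.GetHostAddresses(rendezvousHost)[0], rendezvousPort);
        byte[] hello = Encoding.UTF8.GetBytes("REGISTER " + sessionId);
        udp.Send(hello, hello.Length, server);

        var from = new IPEndPoint(IPAddress.Any, 0);
        byte[] reply = udp.Receive(ref from);          // "PEER ip:port" (made-up protocol)
        string[] parts = Encoding.UTF8.GetString(reply).Split(' ')[1].Split(':');
        var peer = new IPEndPoint(IPAddress.Parse(parts[0]), int.Parse(parts[1]));

        // 2. Both sides now send to each other simultaneously. The first few
        //    packets may die at the remote NAT, but they open our own NAT's
        //    mapping so the peer's packets can get in.
        byte[] punch = Encoding.UTF8.GetBytes("PUNCH");
        for (int i = 0; i < 5; i++)
            udp.Send(punch, punch.Length, peer);

        udp.Client.ReceiveTimeout = 3000;
        byte[] fromPeer = udp.Receive(ref from);       // success: direct P2P path is open
        Console.WriteLine("Hole punched, peer said: " + Encoding.UTF8.GetString(fromPeer));
    }
}
```

This works against most NAT types because the mapping is created by outbound traffic; it is exactly Symmetric NATs (which allocate a new mapping per destination) that defeat it, which is why those fall back to relaying.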
It sounds indeed like video streaming more than image streaming, as someone suggested.
JPEG/PNG compression isn't targeted for these types of speeds, so forget them.
Imagine having a recording codec on your system that can realtime record an incoming video stream (your screen). A bit like Fraps perhaps. Then imagine a video playback codec on the other side (the remote client).
As hard-disk (HD) recorders can do it (record live and even play back live from the same disk), so should you, in the end. The disk surely can't deliver images quicker than you can read your display, so that isn't the bottleneck. The bottleneck is the video codecs. You'll find the encoder much more of a problem than the decoder, as decoders are mostly freely available.
I'm not saying it's simple; I myself have used DirectShow to encode a video file, and it's not realtime by far. But given the right codec I'm convinced it can work.
My random guess is: TeamViewer uses the x264 codec under a commercial license (otherwise TeamViewer would have to release their source code). At some point (more than five years ago), I recall the main developer of x264 wrote an article about the improvements he made for low-delay encoding (if you let the encoder delay by a few frames, it can compress better), plus he mentioned some other improvements that were relevant for TeamViewer-like use. In that post he mentioned playing Quake over the video stream with no noticeable issues. Back then I was fairly sure who the sponsor of those improvements was, as TeamViewer was pretty much the only option at that time. x264 is an open-source implementation of the H.264 video codec, and an insanely good one - arguably the best - and at the same time extremely well optimized. Most likely it's that implementation quality that gets TeamViewer much better results at lower CPU load. AnyDesk and Chrome Remote Desktop use libvpx, which isn't as good as x264 (optimization- and quality-wise).
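If the x264 guess is right, the relevant knobs are public today: that low-delay work shipped as x264's zerolatency tuning. A rough example of the low-latency direction (these are real x264 options, but not TeamViewer's actual configuration, which is unknown):

```
x264 --preset ultrafast --tune zerolatency --intra-refresh \
     --bitrate 1000 --vbv-maxrate 1000 --vbv-bufsize 200 \
     -o screen.264 captured_screen.y4m
```

--tune zerolatency drops B-frames and frame lookahead so no frame ever waits on a later one, and --intra-refresh replaces big periodic keyframes with a column of intra blocks that sweeps across successive P-frames, keeping every frame roughly the same size; both trade compression ratio for latency.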
However, I don't think TeamViewer can beat Microsoft's RDP. To me RDP is the best, though it only works between Windows PCs, or from Mac to Windows. TeamViewer works even from mobiles.
Update: the article was written in January 2010, so that work was done roughly ten years ago. Also, I made a mistake: he played Call of Duty, not Quake. If my guess is correct, TeamViewer had already been using that work for three years when you posted your question.
Read the blog post on the Web Archive: x264: the best low-latency video streaming platform in the world. When I read the article back in 2010, I was sure that the startup "which has requested not to be named" that the author mentions was TeamViewer.
Oddly, in my experience TeamViewer is not faster/more responsive than VNC, only easier to set up. I have a couple of Windows boxen that I VNC into over OpenVPN (so there is another overhead layer), on cheap cable (512 kbps up), and I find a properly set up TightVNC to be much more responsive than TeamViewer to the same boxen. RDP is (naturally) even more so, since in large part it sends GUI drawing commands instead of bitmap tiles.
Which brings us to:
Why are you not using VNC? There is a plethora of open-source solutions, and Tight is probably at the top of its game right now. Advanced VNC implementations use lossy compression, which seems to achieve better results than your choice of PNG. Also, IIRC, the rest of the payload is squashed using zlib. Both Tight and UltraVNC have very optimized algorithms, especially for Windows. On top of that, Tight is open source.
If Windows boxen are your primary target, RDP may be a better option, and it has an open-source implementation (rdesktop).
If *nix boxen are your primary target, NX may be a better option, and it has an open-source implementation (FreeNX, albeit not as optimized as NoMachine's proprietary product).
If JPEG compression is a performance issue for your algorithm, I'm pretty sure image comparison would still take away some performance. I'd bet they use best-case compression for every specific situation, i.e. lossy for large frames, something quick-and-dirty, internal, and lossless for smaller ones, comparing bits of images and sending only diffs of a sort, plus a bunch of other optimization tricks.
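A sketch of that "best-case compression per situation" idea (C#; the thresholds are invented, and real implementations like Tight's encoders are considerably smarter): count the distinct colors in a tile, then route flat or text-like tiles to a lossless path and photo-like tiles to a lossy one.

```csharp
using System.Collections.Generic;

enum TileCodec { SolidFill, Lossless, Lossy }

static class CodecPicker
{
    // pixels: 32-bpp tile data. Thresholds are illustrative only.
    public static TileCodec Pick(uint[] pixels)
    {
        var distinct = new HashSet<uint>();
        foreach (uint p in pixels)
        {
            distinct.Add(p);
            if (distinct.Count > 256) break;       // photo-like, stop counting
        }

        if (distinct.Count == 1)
            return TileCodec.SolidFill;            // send one color + a rect
        if (distinct.Count <= 256)
            return TileCodec.Lossless;             // UI/text: palette + zlib
        return TileCodec.Lossy;                    // gradients/photos: JPEG
    }
}
```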
And a lot of those tricks must be present in Tight > 2.0, since, again, in my experience it beats the hell out of TeamViewer performance-wise. YMMV.
Also, the choice of a JIT-compiled runtime over something like C++ might take a slice off your performance edge, especially on memory-constrained machines (a lot of performance tuning goes down the toilet when Windows starts using the pagefile intensively). And you will need memory to keep previous image states for internal comparison, on top of what DFMirage gives you.
Related
I need to figure out whether porting an application to Mac OS X (not iOS) is feasible. I wrote some code for the Mac around 20 years ago, but what I'm looking at now is completely different and may require a complete rewrite, which I cannot afford. After googling for some time, I found a variety of APIs that appear and get deprecated so often that I feel completely lost.
The application draws by copying small fragments of bitmaps to the window. This is accomplished with BitBlt() on Windows or XCopyArea() on X11. In both cases, the source is stored in video memory, so copying is really fast: 500K copies per second on a decent card, possibly more. On the Mac, there used to be a CopyBits() function which did the same, but it is now deprecated. I found CGContextDrawImage(), which looks like it's getting deprecated too, but it copies from user memory and can only copy the whole image (not fragments). Is there any way to accomplish bitmap copying at a decent speed?
I see everything is 64-bit. I would want to keep the application 32-bit for a number of reasons. 32-bit applications still seem to be supported, but given the fast pace of deprecation, Apple may stop supporting them at any time. Is this a correct assessment?
Software distribution: I cannot find any information on this. It looks like you need to be a member of the Apple Developer Program to be able to install your software on users' computers. Is this true? In some other places I have read that any software must undergo Apple approval. Is this correct?
Thank you for your help.
So much has changed in the past twenty years that it may indeed be quite difficult to port your app directly to modern OS X. You may be better served by taking the general design concept and application objectives and creating a fresh implementation using up-to-date software technology.
Your drawing system might be much easier to do with modern APIs, but the first step is deciding which framework to use. Invest some time in reading the documentation and watching the many videos available on the Developer website. A logical place to start is Getting Started with Graphics & Animation, but you may also wish to explore Metal Programming Guide and SpriteKit.
The notion of 64-bit vs 32-bit is irrelevant. All Mac computers run 64-bit code.
If you don't purchase a Developer program membership, you can still create an unsigned application with Xcode. It can be installed on another user's computer, but they'll need to specifically change the setting in System Preferences -> Security to "Allow apps downloaded from: Anywhere".
The WWDC videos are very useful in understanding the concepts and benefits of advancements in these frameworks made over the past few years.
After some investigation, it appears that OS X graphics is completely different from the others: the screen is treated as a target for vector graphics, not bitmaps.
When the user changes the screen resolution, the resolution doesn't really change (as it would on Linux or Windows); it remains native, and only the scale at which the vector graphics are rendered changes. Consequently, it's perfectly possible to set the screen "resolution" higher than the native one: you just see everything rendered smaller.
When you take a screenshot, the system simply renders everything to an off-screen bitmap (which can be any size), so you can get nice smooth screenshots at any size.
As everything is vector-based, applications that use bitmap graphics are at a huge disadvantage. It is very hard to get at native pixels without a lot of overhead, and worse yet, an application that uses native pixels will behave strangely, because it won't scale when the user changes the screen resolution. It will also have problems when screenshots are taken. Is it possible to make it work? I guess I won't find out until I fork over $2K for a MacBook Pro and try it.
32-bit apps seem to be supported, and I don't think there's an intent to drop that support.
As for code distribution, my Thawte Authenticode certificate is supposed to work on OS X as well, so I probably don't need to become a member of the Apple Developer Program to distribute software; but again, there's no definitive answer to that until I try.
I work for a performing arts institution and have been asked to look into incorporating wearable technology into accessibility for our patrons. I am interested in finding out more about the use of SmartEyeglass for supertitles (a.k.a. subtitles) in live or pre-recorded performance. Is it possible to program several glasses to show their users the same supertitles at the same time? How does this programming process work? Can several pairs of SmartEyeglasses connect to the same host device?
Any information is very much appreciated. I look forward to hearing from you!
Your question is overly broad and liable to be closed as such, but I'll bite:
The documentation for the SDK is available here: https://developer.sony.com/develop/wearables/smarteyeglass-sdk/api-overview/ - it describes itself as being based on Android's. The content of the wearable display is defined in a "card" (an Android UI concept: https://developer.android.com/training/material/lists-cards.html ) and the software runs locally on the glasses.
Subtitles for prerecorded and pre-scripted live performances could be stored in a file format like .srt ( http://www.matroska.org/technical/specs/subtitles/srt.html ), which is easy to work with and already has a large ecosystem around it, such as freely available tools to create the files and software libraries to read them.
Building such a system seems simple, then: each performance has an .srt file stored on a web server somewhere. The user selects the performance somehow, and your software reads the .srt file and displays text on the Card based on the current timecode, through until the end of the script (see the sketch below).
...this approach has the advantage of keeping server-side requirements to a minimum (just a static webserver will do).
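A sketch of the client-side logic (C# for illustration; the actual SmartEyeglass SDK is Android/Java, and the .srt grammar here is simplified): parse the cues once, then on each tick show whichever cue spans the current timecode.

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

class Cue { public TimeSpan Start, End; public string Text; }

static class Srt
{
    // Blocks are separated by blank lines:
    //   1
    //   00:00:01,000 --> 00:00:04,200
    //   text line(s)
    public static List<Cue> Parse(string path)
    {
        var cues = new List<Cue>();
        foreach (var block in File.ReadAllText(path).Split(
                     new[] { "\r\n\r\n", "\n\n" }, StringSplitOptions.RemoveEmptyEntries))
        {
            var lines = block.Split('\n');
            if (lines.Length < 3) continue;
            var times = lines[1].Split(new[] { " --> " }, StringSplitOptions.None);
            cues.Add(new Cue
            {
                Start = ParseTime(times[0]),
                End   = ParseTime(times[1]),
                Text  = string.Join("\n", lines, 2, lines.Length - 2).Trim()
            });
        }
        return cues;
    }

    static TimeSpan ParseTime(string t)            // "00:00:01,000"
    {
        return TimeSpan.ParseExact(t.Trim(), @"hh\:mm\:ss\,fff",
                                   CultureInfo.InvariantCulture);
    }

    // Called once per tick with the time elapsed since the performance began.
    public static string CueAt(List<Cue> cues, TimeSpan now)
    {
        var c = cues.Find(x => x.Start <= now && now < x.End);
        return c == null ? "" : c.Text;
    }
}
```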
If you have more complex requirements, such as live transcription or support for interruptions and unscripted events, then you'd have to write a custom server which pushes "live" subtitles to the glasses, presumably over TCP. This would drain the device's battery, as the Wi-Fi radio would be active for much longer. An alternative might be to consider Bluetooth, but I don't know how you'd build a system that could handle 100+ simultaneous long-range Bluetooth connections.
A compromise is to use .srt files, but have the glasses poll the server every 30 seconds or so to check for any unscripted events. How you handle this is up to you.
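That polling compromise could be as small as this (C# again for consistency; the endpoint name and response shape are invented):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

static class EventPoller
{
    static readonly HttpClient http = new HttpClient();

    // Polls a (hypothetical) unscripted-events endpoint every 30 s; any text
    // returned is displayed immediately, overriding the scripted .srt cues.
    public static async Task PollLoop(string baseUrl, Action<string> display)
    {
        while (true)
        {
            string text = await http.GetStringAsync(baseUrl + "/unscripted-events");
            if (!string.IsNullOrWhiteSpace(text))
                display(text);
            await Task.Delay(TimeSpan.FromSeconds(30));
        }
    }
}
```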
(As an aside, this looks like a fun project - please contact me if you're looking to hire someone to build it :D)
Each phone can host only one SmartEyeglass, so you would need a separate host phone for each SmartEyeglass.
Recently I've been wondering whether it's possible to create an application that would take some CPU load from one computer and hand it to another PC connected to the same network, for example. I have two laptops, and one is much weaker than the other. When I'm playing a game, the first laptop overheats quickly :P and I would like the better laptop to take over some of the work, execute the calculations, and return the results, so that the weaker laptop wouldn't overheat so quickly.
Is this possible to code in C/C++, with use of the WinAPI or something? Or maybe there is already an application which would let me achieve this goal?
Not as simply as you're probably hoping, no. Some old games may well be playable over Remote Desktop, but that's probably not what you're looking for.
Still, there are some options, with a bit of tweaking, that can be used to play a game remotely, for example: http://www.tomshardware.co.uk/forum/id-1638643/tutorial-create-onlive-remote-streaming-setup.html
On a multi-tier application, I need to simulate various TCP/IP errors to test some reconnection code. Does anyone know of any tools (Windows based) I can use for this purpose? Thanks.
Scapy allows you to control every aspect of the packets, and randomly modify ("fuzz") the ones you don't want to control. If you're a command-line kind of guy, it's a great tool.
Try netwox (formerly lcrzoex). If it won't do it, it can't be done; it contains more than 200 tools.
On FreeBSD, the best tool, by far, is dummynet, "a tool originally designed for testing networking protocols, and since then used for a variety of applications including bandwidth management. It simulates/enforces queue and bandwidth limitations, delays, packet losses, and multipath effects."
On Linux, you will have to use netem. (It seems there is now a port of dummynet but I never tried it.)
More details (in French) in my article.
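For reference, netem is driven through tc; a typical setup and teardown look like this (standard netem options; Linux-only, so it helps only if you can route your test traffic through a Linux box):

```
# 100 ms delay with 20 ms jitter, 1% packet loss, 0.5% duplication
tc qdisc add dev eth0 root netem delay 100ms 20ms loss 1% duplicate 0.5%

# remove it again
tc qdisc del dev eth0 root
```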
Clumsy is a good tool for TCP error simulation on Windows. It can simulate (copy-pasted from link above):
- Lag: hold the packets for a short period of time to emulate network lagging.
- Drop: randomly discard packets.
- Throttle: block traffic for a given time frame, then send it in a single batch.
- Duplicate: send cloned packets right after the original one.
- Out of order: rearrange the order of packets.
- Tamper: nudge bits of the packets' content.
No tools that I'm aware of, but most TCP errors can be emulated by a custom LSP filter. This article can get you started writing one.
OnLive is a cloud computing solution for gaming. It offers streaming of high-end games to any PC, regardless of its hardware. I wonder how it works: sending raw HD-resolution image and audio data seems unlikely. Would relatively simple compression, like JPEG and MP3/Ogg, do the trick?
Have you read this article? Excerpts thereof:
It's essentially the gaming version of cloud computing - everything is computed, rendered and housed online. In its simplest description, your controller inputs are uploaded, a high-end server takes your inputs and plays the game, and then a video stream of the output is sent back to your computer. Think of it as something like Youtube or Hulu for games.
The service works with pretty much any Windows or Mac machine as a small browser plug-in. Optionally, you will also be able to purchase a small device, called the OnLive MicroConsole, that you can hook directly into your TV via HDMI, though if your computer supports video output to your TV, you can just do it that way instead. Of course, you can also just play on your computer's display if you don't want to pipe it out to your living room set.
[...]
OnLive has worked diligently to overcome lag issues. The first step in this was creating a video compression algorithm that was as quick as possible.
It's basically games-over-VNC. Obviously they use video compression; of what sort, I'm not sure. The two obvious alternatives would seem to be something fairly computationally lightweight, such as Motion JPEG or even MPEG-2, running on the same server that's running the game, or something more computationally intensive but compact, such as H.264, running on dedicated hardware.
Personally, if I were designing the service, I'd go for the latter: it allows you to have better compression without massively upgrading all your servers, for the cost of a relatively inexpensive codec chip. And because the video stream is smaller, you can attract people with connections that would have been marginal or too slow under a poorer codec.
This is what I understood: it is a thin-client gaming solution. Unlike gaming consoles such as the Wii, Xbox, or PlayStation, no CPU/GPU or any other processing is needed on the player's side. The game is streamed from a monster server via the internet, much like a remote desktop (RDP) terminal session but with HD graphics. Controls (inputs) are sent to the server, and graphics are sent back. It can be played on a Mac or PC via a web-browser add-in, or on a TV with a small unit that connects to the server. It requires a 5 Mbps connection for HD and 1.5 Mbps for SD. Almost all game titles will be available or ported to this platform. No need to buy a console or a game, and no need for a high-end gaming PC; just a broadband connection (which, of course, should be high-end).
I think they are using something like an HDMI H.264 encoder in order to stream video directly from an HDMI audio/video output.
Something like this HDMI encoder or this real-time H.264 encoder.
You can also use a frame-grabber card like this: http://www.epiphan.com/products/frame-grabbers/vga2ethernet/
There is also one more solution now. If you have a recent Nvidia graphics card, you can get the benefits of hardware-accelerated capture without the extra hardware. It's called "GameStream". You can buy one of the Nvidia devices supporting the protocol, or you can download an open-source app called "Moonlight": http://moonlight-stream.com