Performance of Node-Webkit Desktop Applications

I need to create a desktop application that has to handle some animations and a bunch of logic.
I thought about creating it with node-webkit, which I have never used before.
Has anybody here already written desktop apps this way and compared the performance to something coded in C++?

So there are two parts to this question:
1) Speed comparison of JavaScript executing under V8 (node-webkit) vs C++ compiled into native code
On most computationally intensive tasks, you'd expect a 3x to 10x slowdown in execution (depending on the benchmark). An example can be found at http://benchmarksgame.alioth.debian.org/u64/benchmark.php?test=all&lang=v8&lang2=gpp ; if you want more examples, search for other V8 benchmarks.
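If you want a rough figure for your own workload rather than a published benchmark, you can time the same compute-bound kernel in both environments. Below is a minimal C sketch; the loop bound and the naive trial-division test are arbitrary choices for illustration, not any standard benchmark, and a near line-for-line JavaScript port run under node gives you the other data point.

```c
/* naive_primes.c -- count primes below N by trial division.
 * Compile: gcc -O2 naive_primes.c -o naive_primes
 * Port the same loop to JavaScript and run it under node to
 * compare wall-clock times on your machine. */
#include <stdio.h>
#include <time.h>

static int is_prime(long n)
{
    if (n < 2) return 0;
    for (long d = 2; d * d <= n; ++d) {
        if (n % d == 0) return 0;
    }
    return 1;
}

int main(void)
{
    const long N = 2000000L;   /* arbitrary workload size */
    long count = 0;
    clock_t start = clock();

    for (long i = 2; i < N; ++i) {
        count += is_prime(i);
    }

    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("%ld primes below %ld in %.2f s\n", count, N, elapsed);
    return 0;
}
```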
2) Speed comparison of browser-based UI toolkits (based on the DOM and CSS, and perhaps WebGL) as rendered with Chrome's engine, vs whatever desktop UI toolkit (for example, Qt, wxWidgets, etc.) and/or 3D rendering API (DirectX, OpenGL, or the various wrappers around them) that you'd be using with C++.
This unfortunately is rather difficult to benchmark, as there are tons of different UI toolkits out there, each with differing performance characteristics for each type of animation or widget that you might use (depending on how they were implemented). If you're doing 3D rendering and want to compare JavaScript+WebGL on Chrome to C++ with DirectX, see https://www.scirra.com/blog/58/html5-2d-gaming-performance-analysis for an example benchmark (their figures indicate a ~5x slowdown); if you want more examples, search for performance benchmarks comparing WebGL to OpenGL and DirectX.
Generally speaking, well-implemented C++ should execute faster than JavaScript running under node-webkit, simply because there are fewer layers of abstraction between it and the hardware. That said, unless you're building an exceptionally computationally intensive application, the difference will likely not be visible on a modern desktop, and you should focus more on ease of development than on performance.
Using node-webkit also gives you the advantage of the countless UI libraries built for browsers, which will likely accelerate your development, especially if you already have experience in frontend web app development. There are also advantages in terms of portability: unless you use a cross-platform UI toolkit like Qt with C++, you will need platform-specific UI code, whereas with node-webkit you get cross-platform portability for free.

Related

How do these two platforms compare in performance, namely staff-wsf and Wt?

How do these two C++ web service frameworks compare in performance, namely staff-wsf and Wt (witty)?
I did not perform any benchmarking, but you can get an idea from the following:
Although implemented in C++, Wt's main focus or novelty is not its performance, but its focus on developing maintainable applications and its extensive library of built-in widgets. But because it is popular and widely used in embedded systems, you will find that performance and foot-print has been optimized too, by virtue of a no-nonsense API, thoughtful architecture, and C++ …
(given in the Wt webtoolkit tutorial)

Prototyping and simulating embedded software on Windows

I am looking for tools and techniques for prototyping (virtual prototyping), simulation, and testing of deeply embedded C code on desktop Windows, including building realistic embedded front panels consisting of buttons, LEDs, and LCD displays (both segmented and graphic).
I'm specifically interested in a possibly low-level approach, using pure C code and the raw Win32 API rather than MFC, .NET/C#, wxWidgets, or Qt. I'd also like to use free development tools, such as Visual C++ Express with the Platform SDK and ResEdit for editing resources.
I'm looking for code examples to render graphic LCDs (from monochrome to 24-bit color) with efficient pixel-level interface, multi-segment LCDs, and owner-drawn buttons that respond both to "depressed" and "released" events.
I am surprised that my original question triggered so many misunderstandings and adverse comments. The strategy of developing deeply embedded C code on one machine (e.g., a PC) and running it on another (the embedded microcontroller) is called "dual targeting" and is really quite common. For example, developing and testing deeply embedded code on the PC is the cornerstone of the recent book "Test-Driven Development for Embedded C" by James Grenning.
Avoiding Target Hardware Bottleneck with Dual Targeting
Please note that dual targeting does not mean that the embedded device has anything to do with the PC. Nor does it mean that the simulation must be cycle-exact with the embedded target CPU.
Dual targeting simply means that from day one, your embedded code (typically in C) is designed to run on at least two platforms: the final target hardware and your PC. All you really need for this is two C compilers: one for the PC and another for the embedded device.
However, the dual targeting strategy does require a specific way of designing the embedded software, such that any target hardware dependencies are handled through a well-defined interface often called the Board Support Package (BSP). This interface has at least two implementations: one for the actual target and one for the PC, for example running Windows. With such an interface in place, the bulk of the embedded code can remain completely unaware of which BSP implementation it is linked to, and so it can be developed quickly on the PC, but can also run on the target hardware without any changes.
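To make the idea concrete, here is a minimal sketch of what such a BSP boundary might look like in C; the header and function names are invented for illustration and are not taken from any particular project. The application code includes only this header, and each platform supplies its own implementation of it.

```c
/* bsp.h -- the only hardware-facing interface the application sees.
 * (Illustrative sketch; a real BSP would be considerably larger.) */
#ifndef BSP_H
#define BSP_H

#include <stdint.h>

void bsp_init(void);                            /* bring up clocks, GPIO, timers, ... */
void bsp_led_set(uint8_t led, int on);          /* drive an LED (or tint a GUI widget) */
int  bsp_button_pressed(uint8_t button);        /* poll a button (or a mouse click) */
void bsp_lcd_pixel(int x, int y, uint32_t rgb); /* set one pixel of the display */

#endif /* BSP_H */
```

The firmware build links this header against a bsp_target.c that pokes the real registers, while the PC build links a bsp_win32.c that forwards the same calls to window messages and GDI drawing; the rest of the application code is identical in both builds.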
While some embedded programmers view dual targeting as a self-inflicted burden, the more experienced developers generally agree that paying attention to the boundaries between software and hardware is actually beneficial, because it results in more modular, more portable, and more maintainable software with a much longer useful lifetime. The investment in dual targeting also has an immediate payback in the vastly accelerated compile-run-debug cycle, which is much faster and more productive on the powerful PC than on the much slower, resource-constrained deeply embedded target with limited visibility into the running code.
Front Panel Win32 GUI Toolkit
When developing embedded code for devices with non-trivial user interfaces, one often runs into the problem of representing the embedded front panels as GUI elements on the PC. The problem is so common that I'm really surprised that nobody here could recommend an existing library or an open source project which would provide a simple C-only interface to the basic elements, such as LCDs, buttons, and LEDs. This is really not that complicated, yet it seems that every embedded developer has to re-invent this wheel over and over again.
So, to help embedded developers interested in prototyping embedded devices on Windows, I have created a "Front Panel Win32 GUI Toolkit" and have posted it online under the GPL open source license (see http://www.state-machine.com/win32). This toolkit relies only on the raw Win32 API in C and currently provides the following elements:
Dot-matrix display for efficient, pixel-addressable displays such as graphical LCDs, OLEDs, etc., with up to 24-bit color.
Segment display for segmented displays such as segment LCDs and segment LEDs, with generic, custom bitmaps for the segments.
Owner-drawn buttons with custom “depressed” and “released” bitmaps, capable of generating separate events when depressed and when released.
The toolkit comes with an example and an App Note (see http://www.state-machine.com/win32/AN_Win32-GUI.pdf), showing how to handle input from the owner-drawn buttons, regular buttons, keyboard, and the mouse. You can also view an animated demo at http://www.state-machine.com/win32/front_panel.html.
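For readers who have not done owner-drawn controls before, the raw Win32 mechanism behind the "depressed"/"released" behavior is the BS_OWNERDRAW style plus the WM_DRAWITEM message. The following is a minimal, self-contained sketch of that mechanism only; it is not the toolkit's actual code or API, and it fills the button with solid colors where the toolkit would blit the custom bitmaps.

```c
/* ownerdraw_button.c -- minimal sketch of a "depressed"/"released"
 * owner-drawn button in raw Win32 (illustrative only).
 * Compile with e.g.:  cl ownerdraw_button.c user32.lib gdi32.lib  */
#include <windows.h>

#define IDB_PANEL_BUTTON 100

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_CREATE:
        /* BS_OWNERDRAW makes Windows send WM_DRAWITEM instead of painting */
        CreateWindow(TEXT("BUTTON"), TEXT(""),
                     WS_CHILD | WS_VISIBLE | BS_OWNERDRAW,
                     20, 20, 80, 40,
                     hwnd, (HMENU)IDB_PANEL_BUTTON,
                     ((LPCREATESTRUCT)lParam)->hInstance, NULL);
        return 0;

    case WM_DRAWITEM: {
        LPDRAWITEMSTRUCT dis = (LPDRAWITEMSTRUCT)lParam;
        if (dis->CtlID == IDB_PANEL_BUTTON) {
            /* a real front panel would BitBlt "depressed"/"released"
             * bitmaps here; we just fill with two different colors */
            HBRUSH br = CreateSolidBrush(
                (dis->itemState & ODS_SELECTED) ? RGB(180, 0, 0)
                                                : RGB(0, 180, 0));
            FillRect(dis->hDC, &dis->rcItem, br);
            DeleteObject(br);
            return TRUE;
        }
        break;
    }

    case WM_COMMAND:
        /* BN_CLICKED arrives on release; the "depressed" state is
         * visible via ODS_SELECTED in WM_DRAWITEM above */
        if (LOWORD(wParam) == IDB_PANEL_BUTTON) {
            MessageBeep(MB_OK);
        }
        return 0;

    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int show)
{
    WNDCLASS wc = {0};
    MSG msg;

    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    wc.lpszClassName = TEXT("FrontPanelDemo");
    RegisterClass(&wc);

    CreateWindow(TEXT("FrontPanelDemo"), TEXT("Owner-drawn button sketch"),
                 WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                 CW_USEDEFAULT, CW_USEDEFAULT, 240, 140,
                 NULL, NULL, hInst, NULL);

    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}
```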
Regarding the size and complexity of the "Front Panel Win32 GUI Toolkit", the implementation of the aforementioned GUI elements takes only about 250 lines of C. The example with all sources of input and a lot of comments amounts to some 300 lines of C. The toolkit has been tested with the free Visual C++ Express 2010 (with the Express Edition Platform SDK) and the free ResEdit resource editor.
Enjoy!
The appliances you mention in your comment clarification to the question will never be using a Windows PC, so low-level Windows programming is not a requirement in that case. In fact, I'd say it's undesirable. Prototyping is about speed. It's about how fast you can put something together to show potential investors or upper management or some other decision maker.
You wouldn't want to spend the extra time with low-level C and the Win32 API until the project requirements were fleshed out enough that you knew that was an absolute requirement for the final project deliverables (perhaps a server/PC monitoring tool?). Until then you want speed of development. Lucky for you, the industry has tools for rapid prototyping and development of hardware like you describe.
My Preference for Prototyping with Embedded Development
As for my opinion as a developer, I like the .NET Micro Framework (.NETMF), simply because I'm already a Microsoft .NET developer and can transfer a lot of my existing skills. Therefore I prototype with a FEZ microcontroller using C# under Visual C# Express 2010 (free, as you required). It's fast and easy, and you are working on the core of your project in minutes.
If your experience as a developer is different, you may look for a microcontroller which is programmed using BASIC, Java, or some other language to help with the speed of development by reusing your core skill set.
Addressing your Question Bounty Comments
Astonishingly large portions of the embedded software can be developed on the desktop computer as opposed to on the deeply embedded target. This avoidance of the "target system bottleneck" can potentially improve productivity by an order of magnitude, if done right. However, to develop embedded software on the desktop, one needs to simulate the UI components, such as displays (both segmented and increasingly graphical), LEDs, knobs, and buttons. I'm looking for such UI components written in plain Win32 API in C for easy integration with embedded code to be developed and tested on the desktop Windows.
I did embedded development full-time professionally for well over 4 years, as well as many years surrounding that part-time. While what you said above is somewhat true, it will not save you time or money, which is why everyone is confused about the motivation for this strategy. We spent years trying to put out a Windows emulator for this company's hardware devices that would theoretically save time for prototyping. It was always a pain, and we spent many more hours of work trying to emulate the experience than if we had just gone straight from sketched UI drawing specs to real development. The emulator lagged behind hardware development and often wouldn't support the latest features until 6 months or more after the hardware was released. It was a lot of extra work for very little value.
You will spend more of your time developing non-reusable Win32 platform code and hardware emulation components than actually writing the code for the core project itself. This only ever makes sense for hardware vendors who provide such an emulator as a 'value add' tool for potential 3rd-party developers, but it does not make sense for prototyping new hardware designs.
Modern development environments like Visual C# Express 2010 with a FEZ microcontroller can compile, push the project output to the microcontroller, and then begin debugging just as fast or faster than you could compile and run a low-level Windows app in C emulating LCDs or LEDs or switches, etc. So your comment, "improve productivity by an order of magnitude", is simply no longer true with modern tools. (It may have been prior to the last 10 years or so.)
If you really, truly just want to simulate the embedded hardware visually on a PC, use something like Adobe Flash to mock up a UI. But don't duplicate code by coding for Windows when the final device you are prototyping won't be running Windows (maybe it will be, but you didn't say that). Use the fastest, most reliable prototyping tools available today, which are unequivocally not low-level C and the Win32 API!
Maybe use StackExchange for Electronics?
Because this is a development-oriented site, discussion about the merits of specific embedded hardware isn't really relevant. If you decide to refocus on using microcontroller electronics for prototyping (Arduino, FEZ, Propeller, Basic Stamp, Pololu, etc.), you might ask for electronics hardware advice on the Electronics Stack Exchange. I will say that most of those platforms are designed to facilitate the prototyping of LCDs, LEDs, buttons, and interfaces as you outlined. You can usually assemble a few pre-built modules in a matter of minutes and be ready to start coding your project. Huge time savings can be had here.
You are asking for too much; you need to take a look at Proteus.
http://www.labcenter.com/products/vsm_overview.cfm
As Mahmoud said, you may find your solution among the prototyping examples in Proteus Professional. It is one of the popular packages for prototyping, simulation, and coding; you can download Proteus Professional for free and check their manual.
Best of luck

Doing native GUI with Ruby

I'd like to develop a desktop app with Ruby. However, I'd like to have a native GUI on every platform (as opposed to a cross-platform GUI Toolkit that looks consistently awful across all platforms).
I expect to have to do different GUIs for each platform (as it's not just looks but also behaviors and idioms that differ), but I wonder what my options are. In particular, is there a clean way to separate the frontend and backend and bind the data properly?
Target Platforms are Windows (Vista & 7, XP is a Bonus), Mac OS X (Cocoa) and Linux (GTK? Qt? No idea).
The Ruby language has excellent Qt library bindings and your scripts will be cross-platform.
Two Kinds of Cross-Platform
It turns out there are two kinds of cross-platform UI toolkits.
One kind draws its own controls, and, like you said, looks equally bad on all platforms. Even worse: it looks out-of-place on all except one.
But there is another kind that just provides a harmonized interface to the native widgets. The best example of this kind of toolkit is SWT [1]. It looks, and in fact is, approximately fully native on each platform, yet it has but a single API.
So you shouldn't simply rule out all cross-platform toolkits, just rule out the ones that fake the native UI.
Develop the Wrapper Interface
There is a second way. If your program's interface with the user can be directed through a relatively narrow interface, you can simply develop to that interface and then implement the bottom part of it for each platform you want to support. Yes, you have to rewrite one module, but all the other modules stay exactly the same and you get native widgets. You also get the smallest possible executable without lots of bloat.
Perhaps most importantly, you don't have a complex and opaque software layer between your code and the native windowing system. You will probably save as much time debugging as you spend writing the extra module for your first port.
[1] I know my Java examples won't help you much unless you are using JRuby, but SWT vs Swing is a really pure example of the right-vs-wrong (IMHO) UI toolkit divide.
wxWidgets claims to use the native interface on Windows, OS X, Linux, and UNIX through one API.
Coworkers who have used it in the past enjoyed it well enough, but I've not used it myself.

Haskell UI framework?

Is there, by chance, an emerging Haskell UI framework for Windows?
I recently took up looking over the language, and from what I see, it would be great for little "one-off" applications (elaborate scripts).
However, without a good UI framework I can't see it getting in under the smoke and mirrors of the more obvious contenders.
I've read that there are many frameworks, but none are full-featured.
I'm just wondering if this is something that's on the rise, or is it simply too difficult to get enough developers going in the same direction with one?
The two main frameworks are wxHaskell and Gtk2Hs. Both of these have been used for real work. From what I know, my preference would be Gtk2Hs because it handles resources properly (i.e. it uses the GC). wxHaskell requires the programmer to release widgets once they are no longer required, so you can get all the classic memory leaks and stale-pointer screw-ups with it.
The problem with both is that everything is in the IO monad. This reflects the fact that they are comparatively thin wrappers around existing GUI libraries for imperative languages. Of course this means you are no worse off than you would be writing a GUI in an imperative language, but you are hardly much better off either.
There are some interesting experimental libraries to be found on Hackage, including Grapefruit and Conal Elliott's "Tangible Values" ideas in GuiTV. Both of these try for a more declarative approach.
(Disclaimer: I am the wxHaskell maintainer)
Both wxHaskell and Gtk2Hs are more or less complete. That's to say, both wrap a great deal of the functionality provided by their underlying libraries. They also both, as mentioned earlier, require a rather 'imperative' style of programming in the IO monad.
There have been many discussions on the relative merits of each. I would say that wxHaskell is the easier of the two to get working, especially on Windows, as it can be installed via cabal (see http://www.haskell.org/haskellwiki/WxHaskell/Install#On_Windows)
The FRP frameworks (Grapefruit and others) provide a more 'functional' style of programming, at the cost of having much reduced widget coverage. I have the feeling that this is still an open research area, and not really ready for 'prime time'.
In practice, I've never had resource management issues with wxHaskell, although I agree that it's possible, and is an area handled better by Gtk2Hs, which uses reference counting in the underlying library.
For completeness, I should also mention that a Qt binding (QtHaskell?) also exists - it is relatively young, but apparently reasonably complete.
I rather feel that the Haskell community, small as it is, would do well to fix on one GUI framework, but accept the difficulty of this (e.g. licensing, support for all OS platforms etc.).
You can also use wxWidgets (I mean the C++ library) with Haskell. Here is an example: https://bitbucket.org/afiskon/hs-a-star-gui/src Such an approach has some advantages over wxHaskell: 1. you can use UI generators (Code::Blocks, wxFormBuilder); 2. your application takes less disk space; 3. you can use all the features of wxWidgets.
It should also be noted that the last version of wxHaskell uses wxWidgets 2.9, which probably will never be ported to Debian: http://bugs.debian.org/cgi-bin/bugreport.cgi?msg=16;bug=613431

Favorite graphical subsystem to program in

Ok, this is an interesting question to ask because everyone has a say in it.
What is your favorite library for programming GUIs, and which language do you program it in? Give a short reason why (e.g. Gtk, Qt, Windows, etc.). Just an FYI, this includes any scripting language that you program a GUI in: Python, Perl, etc.
Frankly I've always done Gtk in C, but I'm starting to warm up to Qt in C++ with the new KDE. I've never been a big fan of Windows programming.
ChrisW stated that I did not give a reason for Gtk/Qt, so here goes. I started with Gtk because when I started programming GUIs I was working on Linux and there was more Gtk information available. I started using Qt when I started working more in KDE, but really the move to Qt was based on trying to move to C++ and learn more languages. I've never been a fan of basic Windows programming, but I do enjoy a little DirectX now and then :P
Recently I had the opportunity to work with both wxWidgets and Qt, while some time ago I wrote some small programs using FLTK and Gtk. My conclusion is that widget libraries tend to be very similar; each one has its strengths and its quirks.
Instead of advocating a specific library, then, I would like to advocate the use of high level languages in GUI programming: the development cycle is way faster and GUI programs are rarely CPU bound, so the performance hit is rarely a problem.
If a GUI program has to perform some intense computations, just develop a core library in C or C++, but keep the interface in Python or whatever other interpreted language.
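As a hedged illustration of that split (the library and function names here are made up for the example): keep the compute-heavy routine in a small C library, build it as a shared object, and let the Python (or other interpreted) GUI call into it, for instance via ctypes, while all the widget code stays in the high-level language.

```c
/* corelib.c -- tiny example of a compute core kept in C.
 * Build as a shared library, e.g.:
 *   gcc -O2 -shared -fPIC corelib.c -o libcorelib.so
 * A Python GUI can then load it with ctypes and call mean_of()
 * while keeping all of its widget code in Python. */
#include <stddef.h>

double mean_of(const double *values, size_t count)
{
    double sum = 0.0;
    if (count == 0) {
        return 0.0;
    }
    for (size_t i = 0; i < count; ++i) {
        sum += values[i];
    }
    return sum / (double)count;
}
```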
People like to bash Swing for being old, slow, and ugly, but it's just not true. Swing is mature, is faster than ever on Java SE 6u10, looks decent enough, and is tolerable to program. Above all, I've found Java + Swing to be the most trouble-free cross-platform combination. It also works remarkably seamlessly with Jython (Python on the JVM).
SWT could also be an option, but so far I've been happy with Swing.
I realise you're focusing on application GUIs, but if you want a quick, powerful, and fun way to visualize anything on your computer, you can't go past Processing.
From the site:
Processing is an open source programming language and environment for people who want to program images, animation, and interactions. It is used by students, artists, designers, researchers, and hobbyists for learning, prototyping, and production. It is created to teach fundamentals of computer programming within a visual context and to serve as a software sketchbook and professional production tool. Processing is an alternative to proprietary software tools in the same domain.
WPF in particular, and XAML in all its reincarnations (WPF, Silverlight, Moonlight).
C# on top of .Net 3.5/Mono: $0
Visual Studio Express/MonoDevelop: $0
Being able to tell the designer "make my program pretty" and continue coding features: priceless.
I liked writing to video memory under DOS: for an animated game (i.e. an Asteroids clone), that was as fast (performant) as I knew how to do it (certainly faster than using the BIOS API).
This is really a somewhat subjective question, so there is no best or correct answer to it. The following is based on my (limited) experience:
I personally like wxWidgets with PLT Scheme for writing simple but flexible GUIs. There are much more advanced toolkits, but I usually do not need their features. wxWidgets is flexible and the Scheme interface follows Scheme traditions of being powerful with a relatively simple structure. I like the fact that wxWidgets is portable, and yet tries not to actually draw its own widgets, but can use native or common toolkits of the environment it is used under. It is written in C++ but I never used its C++ interface.
That is not to say that in my opinion Scheme will generally be the optimal language to write your application in. In fact there are many kinds of applications I would not write in Scheme, even though I like the language. But regarding the GUI programming part, that is my favourite because of its straightforwardness, and the way that a functional language like Scheme goes well with declarative-style GUI programming.
Of course you will not have the same level of control when using that as when having your program involved in every stage of the window construction and input reaction, by using an event loop (such as with Win32API or Xt/Intrinsics). But that is not always convenient and often unnecessary, and seems to become decreasingly common.
Note: The wxWindows toolkit was renamed wxWidgets, but my installation of a rather recent version of PLT Scheme still comes with the older wxWindows. I am not sure whether there is an updated package of wxWidgets available or if it is going to be included in a future version of PLT Scheme.
Qt4 without question for me. Now that it has an LGPL license it makes sense for all kinds of applications that previously weren't possible. Additionally, it changes C++ in ways that dramatically improve the experience of using the language. (Things like a foreach and forever loop, atomic operations on integers, and memory management)
Gtk is the primary window-drawing graphical subsystem I have experience working with (and is therefore my favorite XD).
As far as general graphics subsystems go, however, OpenGL (typically in combination with GLUT) has been an easy and productive ride for me. Regrettably I have little DirectX experience to compare to, though :S
For writing souped-up versions of standard Windows components, I loved Borland's VCL, and am very pleased with .NET.

Resources