On wasm3's Performance Benchmark
https://github.com/wasm3/wasm3/blob/main/docs/Performance.md
Under the Fibonacci Test, there is an entry called
wasm3 on V8 (Emscripten 1.38, node v13.0.1)
What does that even mean here?
Best guess: wasm3 is a C program (that just so happens to be a Wasm interpreter), and as such can be compiled to Wasm and then executed on a Wasm engine. For example, wasm3-compiled-to-Wasm could be run on wasm3-compiled-to-native. Or it could be run on V8.
Whether compiling a Wasm interpreter to Wasm and running it on another Wasm engine is ever useful (for anything other than demonstrating that it's possible) is a different question :-)
If you want to know for sure, you should probably ask this question on the wasm3 issue tracker.
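If you want to try the "interpreter compiled to Wasm, running on another Wasm engine" idea yourself, here is a minimal sketch in Go using the wazero runtime as the outer engine (standing in for V8 in the benchmark entry). The wasm3.wasm file is a hypothetical WASI build of wasm3, and fib.wasm is an assumed guest module; both names are placeholders.

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

func main() {
	ctx := context.Background()

	// Hypothetical: wasm3 itself compiled to a WASI module.
	interp, err := os.ReadFile("wasm3.wasm")
	if err != nil {
		log.Fatal(err)
	}

	// wazero is the "outer" Wasm engine here, playing the role V8 plays
	// in the benchmark entry.
	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)

	// The wasm3 module needs WASI imports for stdio and file access.
	wasi_snapshot_preview1.MustInstantiate(ctx, r)

	cfg := wazero.NewModuleConfig().
		WithStdout(os.Stdout).
		WithStderr(os.Stderr).
		// Mount the current directory so the inner interpreter can read
		// the (hypothetical) guest module it is asked to run.
		WithFSConfig(wazero.NewFSConfig().WithDirMount(".", "/")).
		WithArgs("wasm3", "/fib.wasm")

	// Instantiating a WASI command module runs its _start, i.e. wasm3's main.
	if _, err := r.InstantiateWithConfig(ctx, interp, cfg); err != nil {
		log.Fatal(err)
	}
}
```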
Related
I wanted to see if I could play Exile 2 from web.archive.org and found that I need to install it first, which takes ages. Given that I'm basically emulating an x86 machine on an x86 computer, that DOSBox supports dynarec (dynamic recompilation), and that contemporary browsers JIT JavaScript code (and Emscripten generates asm.js, which should be rather easy to JIT), what makes it all so slow? In other words, what could be the bottleneck?
DOSBox is compiled using the Emterpreter, which makes it slower than a pure asm.js build:
The Emterpreter is an option that compiles asm.js output from Emscripten into a binary bytecode. It also generates an interpreter ("Emscripten interpreter", hence Emterpreter) capable of executing that bytecode. This lets you compile your project, or parts of your project, into bytecode that will be interpreted, as opposed to asm.js that will be executed directly by the JavaScript engine.
The second reason is that dynamic recompilation is not available yet in the Emscripten port of DOSBox. It would be a lot of work to make it possible to generate asm.js code on the fly.
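To make concrete why the extra interpreter layer hurts: bytecode execution pays a dispatch cost on every instruction, on top of whatever the host JS engine is already doing. Here is a toy stack-machine sketch in Go; it has nothing to do with the actual Emterpreter format, it only shows the general shape of such an interpreter.

```go
package main

import "fmt"

// Toy opcodes; a real bytecode format (Emterpreter, Wasm, ...) is far richer.
const (
	opPush  = iota // push the next word as a constant
	opAdd          // pop two values, push their sum
	opPrint        // pop and print
	opHalt
)

// run interprets the bytecode: one loop iteration (fetch, decode, branch)
// per instruction. This per-instruction overhead is what direct asm.js or
// native execution avoids.
func run(code []int) {
	var stack []int
	for pc := 0; ; {
		switch code[pc] {
		case opPush:
			stack = append(stack, code[pc+1])
			pc += 2
		case opAdd:
			n := len(stack)
			stack = append(stack[:n-2], stack[n-2]+stack[n-1])
			pc++
		case opPrint:
			fmt.Println(stack[len(stack)-1])
			stack = stack[:len(stack)-1]
			pc++
		case opHalt:
			return
		}
	}
}

func main() {
	// Equivalent to the single direct statement: print(2 + 3)
	run([]int{opPush, 2, opPush, 3, opAdd, opPrint, opHalt})
}
```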
I'm trying to move from TypeScript to Dart. TypeScript compiles almost immediately - Dart takes more than 5 seconds to compile a Hello World program! Am I missing something? Are there any possible ways to improve that?
This is usually not much of an issue when developing with Dart, because Dartium, a Chromium derivative, executes Dart directly.
Building to JavaScript is only necessary for testing compatibility with other browsers and for deployment.
pub serve, a Dart development web server, does Dart-to-JS compilation on the fly with a lot of caching, which usually improves transpilation times for reloads (after some warm-up time) if you need JS during development with non-Dartium browsers.
TypeScript is a typed superset of JavaScript that compiles to plain JavaScript.
So it (ts compiler) translates from a higher level programming language to a lower level programming language.
Dart is an open-source, scalable programming language, with robust libraries and runtimes.
So it (dart2js compiler) is a source-to-source compiler (transpiler) that takes the source code of a program written in one programming language as its input and produces the equivalent source code in another programming language.
I think that explains everything.
In a tutorial I've encountered a new concept (for me) that I never thought was possible. Actually, I thought that compilation was an entirely pre-run-time process. This is the phrase from the tutorial: "Compile time occurs before link time (when the output of one or more compiled files are joined together) and runtime (when a program is executed). In some programming languages it may be necessary for some compilation and linking to occur at runtime".
My questions are:
Are pre-run-time compilation and linking absolutely different from run-time compilation and linking? If yes, please explain the main differences.
How are code sections that need to be compiled (linked) during run-time marked, and where is that information kept? (This may differ from language to language; if possible, please give a specific example.)
Thank you very much for your time!
Runtime compilation
The best (most well known) example I'm personally aware of is the just-in-time compilation used by Java. As you might know, Java code is compiled into bytecode, which can be interpreted by the Java Virtual Machine. It's therefore different from, say, C++, which is first fully preprocessed, compiled (and linked) into an executable that can be run directly by the OS without any virtual machine.
The Java bytecode is instead interpreted by the VM, which maps it to processor-specific instructions. That said, the JVM does JIT compilation, which takes that bytecode and compiles it (during runtime) into machine code. Here we arrive at your second question. Even in Java it can depend on which JVM you are using, but basically there are pieces of code called hotspots: the pieces of code that are run frequently and which might be compiled so the application's performance improves. This is done during runtime because the normal compiler does not have (or might not have) all the necessary data to make a proper judgement about which pieces of code are in fact run frequently.

Therefore JIT requires some kind of runtime statistics gathering, which is done in parallel to the program execution by the JVM. What kind of statistics are gathered and what can be optimised (compiled at runtime) depends on the implementation; you obviously cannot do everything a normal compiler would do, due to memory and time constraints, so usually only a limited set of optimisations is supported in runtime compilation (which partly answers the first question: you don't compile everything). You can try looking for such information, but in my experience it's usually badly documented and hard to find (at least when it comes to official sources, as opposed to presentations and blogs).
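As a rough illustration of the statistics-gathering part, here is a toy sketch in Go (not how HotSpot itself is implemented): the runtime simply counts invocations and, past some made-up threshold, marks a function as "hot", i.e. a candidate for JIT compilation.

```go
package main

import "fmt"

// Invocation counters: the kind of per-method statistic a JIT-capable
// runtime gathers while the program executes.
var callCounts = map[string]int{}

const hotThreshold = 10_000 // made-up threshold; real VMs tune this

// profile wraps a function and records how often it is called. Once the
// counter crosses the threshold, a real VM would hand the method to the
// JIT compiler; here we just report it.
func profile(name string, fn func(int) int) func(int) int {
	return func(x int) int {
		callCounts[name]++
		if callCounts[name] == hotThreshold {
			fmt.Printf("%s is hot after %d calls -> would be JIT-compiled\n", name, hotThreshold)
		}
		return fn(x)
	}
}

func main() {
	square := profile("square", func(x int) int { return x * x })

	sum := 0
	for i := 0; i < 20_000; i++ { // the "hot loop"
		sum += square(i)
	}
	fmt.Println("sum:", sum)
}
```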
Runtime linking
The linker is a different pair of shoes. We cannot use the Java example anymore, since Java doesn't really have a linker like C or C++ does (instead it has a class loader, which takes care of loading files and putting it all together).
Usually linking is performed by a linker after the compilation step (static linking). This has pros (no dependencies) and cons (a higher memory footprint, since we cannot use a shared library, and when the library version changes you need to recompile your sources).
Runtime linking (dynamic/late linking) is actually performed by the OS, and it's the OS linker's job to first load shared libraries and then attach them to a running process. Furthermore, there are also different types of dynamic linking: explicit and implicit. This has the benefits of not having to recompile the source when the library version changes and of library sharing, but also drawbacks: what if different programs use the same library but require different versions (look up DLL hell)? So yes, those two concepts are also quite different.
Again, how it's all done and how it's decided what should be linked (and how) is OS-specific; for instance, Microsoft has the dynamic-link library concept.
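For a concrete taste of explicit runtime linking, here is a minimal sketch using Go's plugin package, which wraps the OS loader (dlopen) on Linux and macOS. The greeter.so file and its Greet symbol are assumptions made up for the example.

```go
package main

import (
	"fmt"
	"log"
	"plugin"
)

func main() {
	// Explicit dynamic linking: ask the OS loader (via Go's plugin package,
	// which uses dlopen under the hood) to load a shared library at runtime.
	p, err := plugin.Open("greeter.so") // hypothetical, built with: go build -buildmode=plugin
	if err != nil {
		log.Fatal(err)
	}

	// Resolve a symbol by name: the runtime analogue of what the static
	// linker does at build time.
	sym, err := p.Lookup("Greet")
	if err != nil {
		log.Fatal(err)
	}

	greet, ok := sym.(func(string) string) // the symbol's assumed signature
	if !ok {
		log.Fatal("Greet has an unexpected type")
	}
	fmt.Println(greet("world"))
}
```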
Can Go be used as a scripting language within an application? I can't find any information about this: is there a dynamic-link library version which could be interfaced from a Windows application, with standard methods such as Compile() and Execute(), and features such as callbacks, variable sharing, etc.?
This might sound strange at first, but go with me on this: I think it would be a perfect candidate for a scripting language because its compile time is so fast... hear me out...
Most scripting languages are interpreted, and so they do not require (or in some cases even provide) compilation. However, compiled languages are safer in general because they can catch certain errors at compile time, which is better than, for example, catching a syntax error at runtime.
With Go, the compile time is so speedy that whatever program is running your Go code (e.g. a web server) could hypothetically compile the code on-demand if the code has changed, and otherwise use the compiled version.
Actually, if you check out Google App Engine and download their dev web server for Go (https://developers.google.com/appengine/) you'll notice that their web server does exactly this. If you run through their Hello World tutorial for Go, you'll notice that if you make changes to your code you won't need to recompile the Go code in order for the changes to take effect.
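Here is a hedged sketch of that idea in Go itself (not how the App Engine dev server is actually implemented): before serving a request, shell out to go build if the source is newer than the cached binary, otherwise reuse the binary. The ./app and ./app.bin paths are placeholders.

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

// buildIfStale recompiles srcDir into binPath only when a source file is
// newer than the existing binary; otherwise the cached binary is reused.
// A toy sketch of "compile on demand", not any real dev server's logic.
func buildIfStale(srcDir, binPath string) error {
	binInfo, err := os.Stat(binPath)
	rebuild := err != nil // no binary yet
	if !rebuild {
		entries, err := os.ReadDir(srcDir)
		if err != nil {
			return err
		}
		for _, e := range entries {
			info, err := e.Info()
			if err != nil {
				return err
			}
			if info.ModTime().After(binInfo.ModTime()) {
				rebuild = true
				break
			}
		}
	}
	if rebuild {
		cmd := exec.Command("go", "build", "-o", binPath, srcDir)
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}
	return nil
}

func main() {
	// Hypothetical layout: ./app holds the handler code, ./app.bin is the cache.
	if err := buildIfStale("./app", "./app.bin"); err != nil {
		log.Fatal(err)
	}
	// ...then run (or re-exec) ./app.bin to serve the request...
}
```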
Go is not a scripting language. Because Go is designed for fast compilation, there have been some attempts to use it as a scripting language. For example,
gorun
GoNow
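With gorun, for instance, a plain Go file can be run like a script. A minimal sketch (it assumes gorun is installed and on PATH; as I understand it, gorun strips the leading #! line before compiling and caches the resulting binary):

```go
#!/usr/bin/env gorun
// hello.go - mark it executable (chmod +x hello.go) and run it as ./hello.go.
// gorun compiles it on first use and reuses the cached binary afterwards.

package main

import "fmt"

func main() {
	fmt.Println("hello from a Go 'script'")
}
```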
In theory (and perhaps somewhere out there without me knowing), Go can be used as a scripting language. Just note that it makes about as much sense as using, e.g., C as a scripting language.
No. Go code cannot be used within a non-Go application unless Go is responsible for starting up the whole app.
Compiling a program to bytecode instead of native code enables a certain level of portability, so long as a fitting virtual machine exists.
But I'm kinda wondering, why delay the compilation? Why not simply compile the byte code when installing an application?
And if that is done, why not adopt it to languages that directly compile to native code? Compile them to an intermediate format, distribute a "JIT" compiler with the installer and compile it on the target machine.
The only thing I can think of is runtime optimization. That's about the only major thing that can't be done at installation time. Thoughts?
Often it is precompiled. Consider, for example, precompiling .NET code with NGEN.
One reason for not precompiling everything would be extensibility. Consider those languages which allow use of reflection to load additional code at runtime.
Some JIT compilers (Java HotSpot, for example) use type-feedback-based inlining. They track which types are actually used in the program, and inline function calls based on the assumption that what they saw earlier is what they will see later. For this to work, they need to run the program through a number of iterations of its "hot loop" to learn which types are actually used.
This optimization is totally unavailable at install time.
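A toy illustration of the "type feedback" part, again sketched in Go (not how HotSpot records it): the runtime counts which concrete types actually reach a call site; if one type dominates, a JIT could speculatively inline that type's method and keep a slow fallback path for the rest.

```go
package main

import "fmt"

type Shape interface{ Area() float64 }

type Circle struct{ R float64 }
type Square struct{ S float64 }

func (c Circle) Area() float64 { return 3.14159 * c.R * c.R }
func (s Square) Area() float64 { return s.S * s.S }

// seen counts the concrete types observed at one call site: the kind of
// profile a type-feedback JIT gathers while the program runs.
var seen = map[string]int{}

func totalArea(shapes []Shape) float64 {
	sum := 0.0
	for _, s := range shapes {
		seen[fmt.Sprintf("%T", s)]++ // record the dynamic type
		sum += s.Area()              // a JIT would speculate on the dominant type here
	}
	return sum
}

func main() {
	shapes := make([]Shape, 0, 1001)
	for i := 0; i < 1000; i++ {
		shapes = append(shapes, Circle{R: 1})
	}
	shapes = append(shapes, Square{S: 2}) // the rare case the speculation must still handle

	fmt.Println(totalArea(shapes), seen)
	// With feedback like map[main.Circle:1000 main.Square:1], a JIT would
	// inline Circle.Area at this call site and keep a slow path for the rest.
}
```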
The bytecode has been compiled just as well as the C++ code has been compiled.
Also, the JIT compilers, i.e. the .NET and Java runtimes, are massive and dynamic; and you can't foresee in a program which parts the app will use, so you need the entire runtime.
Also one has to realize that a language targeted to a virtual machine has very different design goals than a language targeted to bare metal.
Take C++ vs. Java.
C++ wouldn't work on a VM; in particular, a lot of the C++ language design is geared towards RAII.
Java wouldn't work on bare metal for so many reasons; primitive types, for one.
EDIT: As delnan correctly points out, JIT and similar technologies, though hugely beneficial to bytecode performance, would likely not be available at install time. Also, compiling for a VM is very different from compiling to native code.