Can Rust and CyberDWARF be used together?

Emscripten provides CyberDWARF, a JS API for inspecting variables at runtime from the console. Is it possible to use CyberDWARF when compiling Rust to asm.js or Wasm?
Source maps are possible, but it would be nice to be able to inspect the variables that appear in the original source.

Not CyberDWARF, but you can use LLDB with Wasmtime: https://hacks.mozilla.org/2019/09/debugging-webassembly-outside-of-the-browser/
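For concreteness, a minimal sketch of that workflow for a Rust binary crate, assuming the wasm32-wasi target and Wasmtime's -g debug-info flag (the crate name myapp is a placeholder, and the flag spelling may differ in newer Wasmtime releases, so check wasmtime --help):

rustup target add wasm32-wasi
cargo build --target wasm32-wasi
lldb -- wasmtime -g target/wasm32-wasi/debug/myapp.wasm

Inside LLDB you should then be able to set breakpoints on the original Rust function names and inspect locals, since the DWARF data is carried through to the JIT-compiled code.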

Related

Is there a way to call C code from a Substrate project?

It's trivial to call C from Rust normally; is there a way to do so from a Substrate project? I can't find anything online saying whether or not it's possible.
I have a C library I'd like to use as part of a Substrate project, and I was wondering whether it would be possible to use it without rewriting it.
Many FFI libraries are no_std compatible, which is the only hard requirement for being used in Substrate runtimes. Check out the list here and look for that tag for options to move forward:
https://lib.rs/development-tools/ffi
If you are using this library outside the runtime, you should be able to use any Rust library.

How can I inspect x86/x64 code generated by V8 from WebAssembly?

https://webassembly.studio/ allows inspection of WebAssembly (WASM) files and the corresponding SpiderMonkey-generated x86 code. I'd like to similarly inspect instructions generated by V8's WASM compilers (Liftoff and TurboFan).
I'm entirely unfamiliar with V8's codebase and API (though I have compiled and linked it and followed some tutorials). There seems to be a v8::CompiledWasmModule class available, but it does not appear to expose the x86/x64 instructions generated by either Liftoff or TurboFan.
WebAssembly - adding a new opcode describes the process of adding a WASM opcode to V8. Seemingly appropriate functions for WASM compilation/execution are available in the classes mentioned there. However, these seem rather deeply layered within the V8 codebase and would be difficult to reach were I to link V8 as a library. I'm also unsure whether they correspond to Liftoff or TurboFan.
Could anybody familiar with the V8 codebase give me some pointers on how to access Liftoff's and/or TurboFan's WebAssembly compilation output, so as to obtain the x86/x64 code?
To inspect generated code, you can run the d8 shell with the --print-wasm-code flag. You'll need either a debug build, or a release build with the v8_enable_disassembler = true GN arg.
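For reference, a typical end-to-end invocation looks roughly like this (harness.js stands for a small script that instantiates your .wasm module; directory names and flag spellings can vary between V8 versions):

gn gen out/x64.release --args='is_debug=false v8_enable_disassembler=true'
ninja -C out/x64.release d8
out/x64.release/d8 --print-wasm-code harness.js

To look at one tier in isolation you can additionally pass something like --no-wasm-tier-up (keep only Liftoff code) or --no-liftoff (compile straight to TurboFan); again, the exact flag names depend on the V8 version.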
There's no existing way to retrieve generated code via V8's API; so if that's what you want, then you'd have to add it. Keep in mind that V8 is not designed to be a standalone compiler, which means generated code assumes that it's going to run "inside V8", so if you wanted to use it for anything else, you'd have to make significant modifications.

Building an fftw2 application with fftw3 libraries

I'm trying to build an old application that depends on FFTW. It was written against FFTW2, and I currently have FFTW3 (specifically 3.3.8). It fails to link because of undefined references to fftw_create_plan and fftw_one. Indeed, my FFTW library no longer exports those symbols; the following returns nothing:
readelf -s /usr/lib/libfftw3.so | grep 'fftw_create_plan\|fftw_one'
It looks like the API has changed significantly since the code was written. Is there a compatibility layer I can use, or should I just go learn FFTW3's new interface?
You cannot link FFTW2 code against FFTW3 libraries, as you are coming to realize yourself. There is also no complete compatibility layer between the two, because the APIs really are not compatible.
Having said that, you may of course still link your code against the FFTW2 libraries; they can still be obtained. Why is that not an option?
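If you do keep the code on the FFTW2 API, the legacy 2.1.5 release installs its own libraries (libfftw/librfftw, plus sfftw/srfftw single-precision variants), so the two versions can coexist and only the link line differs. A sketch, with library names from a typical FFTW2 build (they may differ on your distribution):

gcc -o oldapp oldapp.c -lrfftw -lfftw -lm    # legacy FFTW2 API: fftw_create_plan, fftw_one
gcc -o newapp newapp.c -lfftw3 -lm           # FFTW3 API: fftw_plan_dft_1d, fftw_execute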

How To Use libGLESv2.so in Your Program

I am accustomed to linking against libGL.so on most Linux distributions, using either Mesa's implementation or NVIDIA's. However, I would really like to limit myself to OpenGL ES 2.X functionality, so I am attempting to link against and use libGLESv2.so. I see, though, that the glX functions are not present in libGLESv2.so's dynamic section:
nm --dynamic /usr/lib64/nvidia/libGLESv2.so | grep glX
Attempting to link against libGLESv2.so also results in undefined references to glX functions.
This leads me to my question: what is the correct way to "GetProcAddress" while dynamically linking against libGLESv2.so? And how do you construct the appropriate context without glX?
I'm not sure how easy this will be for you to figure out or how relevant it is to your application, but in pi3d we get the drawing context using libEGL.so (or libegl.dll from ANGLE on Windows).
This is where the dynamic libraries are found and loaded:
https://github.com/tipam/pi3d/blob/master/pi3d/constants/__init__.py
and this is where the surface is created and attached to the GLESv2 functions:
https://github.com/tipam/pi3d/blob/master/pi3d/util/DisplayOpenGL.py
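The same split applies in C: the glX-style duties (display, context and surface setup, plus GetProcAddress) move to libEGL, while libGLESv2 only provides the GL ES entry points, so you link against both. eglGetProcAddress plays the role of glXGetProcAddress, and the eglGetDisplay / eglInitialize / eglChooseConfig / eglCreateContext / eglMakeCurrent sequence replaces the glX context setup. A quick way to confirm this on your system (the library path and pkg-config module names below are typical, not universal):

nm --dynamic /usr/lib64/libEGL.so | grep -E 'eglGetProcAddress|eglCreateContext'
gcc demo.c $(pkg-config --cflags --libs egl glesv2) -o demo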

Using Three.js as Closure library?

Looking at Three.js's build script, I see they use the Google Closure compiler. I have a small script that uses Three.js. Can I use Google Closure to compile my script with Three.js as a library (instead of prepending Three.min.js or including it via an HTML script tag), so that the final output JavaScript is much smaller?
I'm asking because I don't see any goog.provide calls in the Three.js source.
It looks like the build command does not specify a compilation level. That means it is using the default SIMPLE_OPTIMIZATIONS.
If that is indeed true, then no, you probably cannot include the source as a library as it is not compatible with ADVANCED_OPTIMIZATIONS.
One way to do this is to prepend the three.js script to the compiler output using whatever shell or build system you are using.
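As a rough sketch of that approach (file names here are placeholders; closure-compiler.jar is the standalone compiler jar):

java -jar closure-compiler.jar --compilation_level SIMPLE_OPTIMIZATIONS --js app.js --js_output_file app.min.js
cat three.min.js app.min.js > bundle.js

Your own script gets minified, while Three.js stays as the prebuilt three.min.js rather than being run through the compiler again.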
