Is there any validation layer that validates Vulkan 1.1 compliance? Is there any convenient way to ensure that I'm not using Vulkan 1.2+ features?
Update
Here is a related GitHub thread: https://github.com/KhronosGroup/MoltenVK/issues/1533
It seems there are potentially serious downsides to using Vulkan via MoltenVK rather than Metal directly (the overhead of translating Vulkan calls to Metal).
If the application sets the apiVersion member of VkApplicationInfo to 1.1, the VK_LAYER_KHRONOS_validation layer will issue messages about any 1.2 usage.
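For example, a minimal sketch of that setup (the function name and instance creation details are illustrative; the relevant pieces are apiVersion and the layer name):

```cpp
#include <vulkan/vulkan.h>

// Minimal sketch: declare Vulkan 1.1 as the target API version and enable the
// Khronos validation layer, so that any use of 1.2+ functionality gets flagged.
VkInstance createInstanceTargeting11()
{
    VkApplicationInfo appInfo{};
    appInfo.sType      = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.apiVersion = VK_API_VERSION_1_1;   // the key field for validation

    const char* layers[] = { "VK_LAYER_KHRONOS_validation" };

    VkInstanceCreateInfo createInfo{};
    createInfo.sType               = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo    = &appInfo;
    createInfo.enabledLayerCount   = 1;
    createInfo.ppEnabledLayerNames = layers;

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&createInfo, nullptr, &instance);   // error handling omitted
    return instance;
}
```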
Since you've mentioned MoltenVK, you may also want to look up information about the portability subset extension in the Vulkan specification. In particular, I think you need to enable that extension if you are going to use MoltenVK and the extension provides the ability to query for non-conformant behavior.
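As a rough sketch of what that looks like at device-creation time (the queue setup is illustrative, and in the headers I've seen the portability-subset types sit behind VK_ENABLE_BETA_EXTENSIONS):

```cpp
// Sketch: if the implementation advertises VK_KHR_portability_subset (as
// MoltenVK does), the spec requires enabling it at device creation, and its
// feature struct lets you query which behavior may be non-conformant.
#define VK_ENABLE_BETA_EXTENSIONS   // exposes the portability-subset types
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

bool createDeviceWithPortabilitySubset(VkPhysicalDevice gpu, VkDevice* outDevice)
{
    // Check whether the device exposes the portability subset extension.
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> props(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, props.data());

    bool hasPortability = false;
    for (const VkExtensionProperties& p : props)
        if (std::strcmp(p.extensionName, "VK_KHR_portability_subset") == 0)
            hasPortability = true;

    // Query which (possibly non-conformant) features are actually supported.
    VkPhysicalDevicePortabilitySubsetFeaturesKHR portabilityFeatures{};
    portabilityFeatures.sType =
        VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PORTABILITY_SUBSET_FEATURES_KHR;
    VkPhysicalDeviceFeatures2 features2{};
    features2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2;
    features2.pNext = hasPortability ? &portabilityFeatures : nullptr;
    vkGetPhysicalDeviceFeatures2(gpu, &features2);

    const char* extensions[] = { "VK_KHR_portability_subset" };

    VkDeviceQueueCreateInfo queueInfo{};        // single queue, family 0, for brevity
    queueInfo.sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    queueInfo.queueCount       = 1;
    const float priority       = 1.0f;
    queueInfo.pQueuePriorities = &priority;

    VkDeviceCreateInfo deviceInfo{};
    deviceInfo.sType                   = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    deviceInfo.queueCreateInfoCount    = 1;
    deviceInfo.pQueueCreateInfos       = &queueInfo;
    deviceInfo.enabledExtensionCount   = hasPortability ? 1u : 0u;
    deviceInfo.ppEnabledExtensionNames = hasPortability ? extensions : nullptr;

    return vkCreateDevice(gpu, &deviceInfo, nullptr, outDevice) == VK_SUCCESS;
}
```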
Related
I am trying to make use of some ES 3.1 features, and it's unclear if this is supported:
I notice that there is an OpenGL ES 3.1 header in the emscripten repository which defines some of the functions I'm looking for, and I can include them successfully in my project. However, they are not available when I try to link:
error: undefined symbol: glDispatchCompute (referenced by top-level compiled C/C++ code)
warning: _glDispatchCompute may need to be added to EXPORTED_FUNCTIONS if it arrives from a system library
The documentation says that OpenGL ES3 is supported if I specify -s FULL_ES3=1 (which I am doing).
Since there are headers for it, is this functionality available? If so, how do I enable support for it? (Does it require manually loading extensions or enabling experimental support in emscripten, for example?)
The first thing to realize is that browsers currently implement only WebGL 1.0 (based on OpenGL ES 2.0) and WebGL 2.0 (based on OpenGL ES 3.0). The Emscripten SDK can therefore only expose the features available in WebGL 2.0 plus extensions, and, unfortunately, compute shaders are not exposed in WebGL in any form (and there are no longer any plans to add this functionality in the future).
By the way, WebGL 2.0 support was added to Safari 15 (iOS 15) only this year (2021), so even OpenGL ES 3.0 functionality will not work on all devices.
I notice that there is an OpenGL ES 3.1 header in the emscripten
The extra <GLES3/gl3*.h> headers are there not because Emscripten implements everything in them, but because this is a common way of distributing the OpenGL ES 3.x headers, which existing applications may rely on to conditionally load / use OpenGL ES 3.x functions at runtime, when they are available.
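As a sketch of that conditional pattern on native platforms (function resolution via eglGetProcAddress is an assumption; any GLES loader works the same way), the lookup would simply come back empty under Emscripten/WebGL:

```cpp
// Sketch: resolve the ES 3.1 entry point at runtime instead of linking it, so
// the binary still loads where only ES 3.0 (or WebGL 2.0) is available.
#include <GLES3/gl3.h>
#include <EGL/egl.h>
#include <cstdio>

typedef void (*DispatchComputeFn)(GLuint, GLuint, GLuint);

void dispatchComputeIfAvailable(GLuint x, GLuint y, GLuint z)
{
    // eglGetProcAddress returns a usable pointer only if the underlying
    // implementation actually provides the function.
    DispatchComputeFn dispatch =
        (DispatchComputeFn)eglGetProcAddress("glDispatchCompute");
    if (dispatch != nullptr)
        dispatch(x, y, z);                       // real ES 3.1 context
    else
        std::printf("Compute shaders not available in this context\n");
}
```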
The documentation says that OpenGL ES3 is supported if I specify -s FULL_ES3=1 (which I am doing).
I think that documentation is more or less clear about what FULL_ES3=1 adds:
To enable OpenGL ES 3.0 emulation, specify the emcc option -s FULL_ES3=1 when linking the final executable (.js/.html) of the project.
This adds emulation for mapping memory blocks to client side memory.
There is no mention of OpenGL ES 3.1 here, nor any reason to expect miracles from Emscripten: I can barely imagine a reasonable hardware-accelerated way to emulate features like compute shaders on top of OpenGL ES 3.0, and software emulation would certainly be of no use to most applications.
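To make the quoted sentence concrete, the "mapping memory blocks to client side memory" that FULL_ES3=1 does cover is buffer mapping. A rough sketch (the function name is illustrative; my understanding is that Emscripten backs the mapping with a client-side copy, since WebGL 2.0 has no MapBufferRange of its own):

```cpp
// Example of the ES 3.0 call that -s FULL_ES3=1 emulation is for.
// Build roughly as: emcc main.cpp -s FULL_ES3=1 -o app.html
#include <GLES3/gl3.h>
#include <cstddef>
#include <cstring>

void uploadViaMapping(GLuint buffer, const float* data, size_t count)
{
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(float), nullptr, GL_DYNAMIC_DRAW);

    // Emulated under FULL_ES3: returns a client-side pointer to write into.
    void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, count * sizeof(float),
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    if (ptr != nullptr) {
        std::memcpy(ptr, data, count * sizeof(float));
        glUnmapBuffer(GL_ARRAY_BUFFER);   // hands the data back to WebGL
    }
}
```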
WebGPU 1.0, promised for mid-2022, is more capable than WebGL 2.0. WebGPU developers already anticipate that the native WebGL 2.0 implementation in browsers could one day be replaced by a WebAssembly module implementing this legacy API on top of WebGPU, though that is a very distant prospect. The same approach could, at least in theory, also bring OpenGL ES 3.1/3.2 features to the Emscripten SDK, if anyone is still interested in working on it by then.
Google state that:
Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data
I searched for an explicit list of platforms and/or operating systems officially supported by Protocol Buffers but I couldn't find it. Ironically the closest thing I found was the following information in the Wikipedia page:
Operating system: Any
Platforms: Cross-platform
Is it safe to say that Protocol Buffers support any platform/OS?
Operating system is going to be any mainstream OS. If you're running something esoteric, you might get the same problems that you get with anything else.
Platform is similar; Google offers support for a range of platforms, and a much wider list is provided by community-owned projects. A list is here: https://github.com/google/protobuf/blob/master/docs/third_party.md
Ultimately, the wire specification is documented and doesn't depend on OS or platform, so in the worst case, if you're using a custom language on a custom OS, you could still implement your own decoder, as long as that language has some way of handling arbitrary binary data or can interop with one of the other prebuilt libraries.
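To illustrate how small that "worst case" really is: the wire format is built from base-128 varints and (field number, wire type) tags, so a minimal decoder sketch needs nothing platform-specific (the example bytes are the canonical one from the protobuf encoding docs):

```cpp
// Minimal sketch of the protobuf wire format's building block: base-128
// varints. Each field starts with a varint tag = (field_number << 3) | wire_type.
#include <cstdint>
#include <cstdio>
#include <vector>

// Decode one varint starting at 'pos'; advances 'pos' past it.
uint64_t decodeVarint(const std::vector<uint8_t>& buf, size_t& pos)
{
    uint64_t result = 0;
    int shift = 0;
    while (pos < buf.size()) {
        uint8_t byte = buf[pos++];
        result |= static_cast<uint64_t>(byte & 0x7F) << shift;
        if ((byte & 0x80) == 0)          // high bit clear => last byte
            break;
        shift += 7;
    }
    return result;
}

int main()
{
    // 0x08 0x96 0x01 encodes field number 1, wire type 0 (varint), value 150.
    std::vector<uint8_t> buf = {0x08, 0x96, 0x01};
    size_t pos = 0;

    uint64_t tag         = decodeVarint(buf, pos);
    uint64_t fieldNumber = tag >> 3;
    uint64_t wireType    = tag & 0x7;
    uint64_t value       = decodeVarint(buf, pos);

    std::printf("field=%llu wiretype=%llu value=%llu\n",
                (unsigned long long)fieldNumber,
                (unsigned long long)wireType,
                (unsigned long long)value);
    return 0;
}
```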
We are currently in the process of upgrading the MongoDB C# driver. There used to be "GridFS" functionality for saving large BSON documents in chunks. It looks like version 2.0 doesn't have that feature.
We would like to know whether it is still in scope, and when we can expect this feature to be available.
We would much appreciate a response.
You can track the feature here: https://jira.mongodb.org/browse/CSHARP-1191.
It is largely implemented, but we are waiting for the specification to be finalized. It will ship with the 2.1 version of the driver.
I'm trying to build OpenSceneGraph 3.2 for the Ubuntu armhf architecture, but I'm getting a compile error about a symbol not being found. The symbol in question is glReadBuffer. I looked at the GLES2/gl2.h header, and indeed, that symbol is not there. However, the symbol is present in GLES3/gl3.h, and documentation online suggests that the function was added in OpenGL ES 3.0. However, I did find a function named glReadBufferNV in GLES2/gl2ext.h (which is not #include'd in the source files).
I'm wondering whether glReadBufferNV can be used instead of glReadBuffer, and what the possible side effects might be. I suspect that the NV stands for Nvidia, and that it is an Nvidia-only implementation. Is this correct? If so, is there any way to get glReadBuffer in OpenGL ES 2.0 (I am under the impression that OpenSceneGraph can be built against OpenGL ES 2.0)?
Edit: As it turned out, the code that builds this portion of OpenSceneGraph was excluded when building with OpenGL ES or OpenGL 3.0. However, I'm still interested in what's special about glReadBufferNV.
As your research suggests, glReadBuffer was added to ES in 3.0; it is not defined in 2.0. Prior to that, as per the header file you found, an extension defined glReadBufferNV: specifically, the NV_read_buffer extension.
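In practice, the usual fallback looks roughly like the sketch below (resolving the entry points through eglGetProcAddress is an assumption; a static call to glReadBuffer on a real ES 3.0 build works just as well):

```cpp
// Sketch: use core glReadBuffer on an ES 3.0 context, or the NV_read_buffer
// extension entry point on ES 2.0 if the driver advertises it.
#include <GLES2/gl2.h>
#include <EGL/egl.h>
#include <cstring>

typedef void (*ReadBufferFn)(GLenum mode);

void selectReadBuffer(GLenum mode, bool contextIsES3)
{
    if (contextIsES3) {
        // Core in ES 3.0+.
        ReadBufferFn readBuffer = (ReadBufferFn)eglGetProcAddress("glReadBuffer");
        if (readBuffer) { readBuffer(mode); return; }
    }

    // ES 2.0 path: only valid if GL_NV_read_buffer is advertised.
    const char* exts = (const char*)glGetString(GL_EXTENSIONS);
    if (exts && std::strstr(exts, "GL_NV_read_buffer") != nullptr) {
        ReadBufferFn readBufferNV = (ReadBufferFn)eglGetProcAddress("glReadBufferNV");
        if (readBufferNV) readBufferNV(mode);
    }
}
```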
So what happened is that something wasn't in the spec, but Nvidia (at least) thought it would be useful, so they implemented it as an OpenGL extension, which was subsequently discussed at Khronos, had its edge cases and ambiguities resolved, and eventually made its way into the core spec.
That's generally how GL development proceeds: extensions come along to provide functionality that's not yet in the main library, they're discussed and refined and adopted into the core spec if appropriate.
Comparing the official specification for glReadBuffer with the extension documentation, the extension has a few ties into other extensions that you wouldn't expect to make it into the core spec (e.g. COLOR_ATTACHMENTi_NV is supported as a source), but see resolved issue 7:
Version 6 of this specification isn't compatible with OpenGL ES 3.0.
For contexts without a back buffer, this extension makes FRONT the
default read buffer. ES 3.0 instead calls it BACK.
How can this be harmonized?
RESOLVED: Update the specification to match ES 3.0 behavior. This
introduces a backwards incompatibility, but few applications are
expected to be affected. In the EGL ecosystem where ES 2.0 is
prevalent, only pixmaps have no backbuffer and their usage remains
limited.
So the extension has retroactively been modified to bring it into line with what was agreed for the core spec.
Firefox provides the XPCOM API for writing add-on classes in C++ and allowing third-party web applications to access them. I'm wondering: is there any other way of achieving the same benefits (i.e. writing an add-on in C++ and providing a JavaScript interface, so that any JavaScript app can use this interface and, through it, the C++ class functionality)?
Yes, it is possible to write XPCOM components in C++ (or Javascript for that matter) and expose them to websites. You would register them in the JavaScript-global-property, JavaScript-global-constructor, etc. category or categories.
This stuff is generally not documented very well, if at all. If you'd like to know more about it, then read the code (e.g. on mxr).
But doing so is strongly discouraged for a lot of reasons:
It is very hard to ensure that such things are actually secure and to make them work reliably.
It is hard to control which websites may actually use the new API: JavaScript-global-property and friends will be available to all websites!
Your add-on would be introducing new names into the global javascript namespace, which might clash with web site code. (Because of this, many new WebAPIs aren't introduced directly into the global namespace, but are made a property of navigator).
For these reasons, the addons.mozilla.org site will accept add-ons using this mechanism only as an exception to the rule, and only when the author demonstrates that the use is required and that there is no real alternative.
MDN has a couple of articles regarding webpage-to-extension communication and vice versa you could use instead, e.g. "Interaction between privileged and non-privileged pages".
Shipping binary XPCOM components within extensions is also discouraged:
You'd need to re-compile your binaries against each Gecko version (every six weeks), because binary components have been version-tagged since Firefox 4.
Binary APIs/ABIs between (point) releases are not as stable as you would hope. In fact, APIs/ABIs were inadvertently broken late in the game quite often, causing lots of unnecessary crashes for a lot of users, e.g. bug 828296.
The platform developers officially said binary components in extensions are a hassle.
Instead, you should try to use JavaScript where possible, and/or create a normal binary library (.so, .dll, .dylib) with a C API/ABI and use js-ctypes. You could implement your XPCOM components in JavaScript, which would then call into your library via js-ctypes.
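As a sketch of that last suggestion (the names and the hashing logic are purely illustrative), the native side is just a plain C-ABI export that privileged add-on JavaScript can bind with js-ctypes:

```cpp
// Build e.g. as: g++ -shared -fPIC mylib.cpp -o libmylib.so
#include <cstddef>

extern "C" {

// A C-callable entry point; js-ctypes could declare it roughly as
//   lib.declare("myaddon_hash", ctypes.default_abi, ctypes.uint32_t,
//               ctypes.char.ptr)
// from privileged add-on JavaScript.
unsigned int myaddon_hash(const char* text)
{
    unsigned int h = 2166136261u;                 // FNV-1a, just as a demo
    for (size_t i = 0; text != nullptr && text[i] != '\0'; ++i) {
        h ^= static_cast<unsigned char>(text[i]);
        h *= 16777619u;
    }
    return h;
}

} // extern "C"
```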
AFAIK most add-ons have switched to js-ctypes by now.