How can I render my Viewport3DX into a DX11 texture? - helix-3d-toolkit

I have a 3D engine, and I'd like to be able to embed Helix-based content into it (like draw on a wall).
I know it is possible to use RenderTargetBitmap to render any WPF element, but that would introduce too many levels of indirection (especially considering WPF renders through DirectX 9), and there is no official way to access the image other than copying it to CPU memory first.
Ideally, I'd like to supply Viewport3DX with my own target color and depth buffers.

You can get the back buffer from Viewport3DX.RenderHost.RenderBuffer.BackBuffer.Resource as global::SharpDX.Direct3D11.Texture2D.
You can hook up the Viewport3DX.RenderHost.Rendered event and copy the buffer data onto your own texture.
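For illustration, here is a minimal C# sketch of that approach (assuming HelixToolkit.Wpf.SharpDX and SharpDX; the exact RenderHost/RenderBuffer member names and the Rendered handler signature may differ between Helix versions, and myTargetTexture stands for a texture you created yourself with a matching description):

    // Sketch only: copy Helix's back buffer into your own DX11 texture after each frame.
    viewport.RenderHost.Rendered += (sender, args) =>
    {
        var backBuffer = viewport.RenderHost.RenderBuffer.BackBuffer.Resource
                         as global::SharpDX.Direct3D11.Texture2D;
        if (backBuffer == null)
            return;

        // myTargetTexture is a hypothetical Texture2D of the same size/format,
        // created on the same D3D11 device as the back buffer.
        var context = backBuffer.Device.ImmediateContext;
        context.CopyResource(backBuffer, myTargetTexture);
    };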

Related

How to render Direct3D content into ID2D1HwndRenderTarget?

In my app I render my Direct3D content to the window, as recommended, using a swap chain in flip sequential mode (IDXGISwapChain1, DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL), in a window without a redirection bitmap (WS_EX_NOREDIRECTIONBITMAP).
However, for the purpose of smooth window resizing (without lag and flicker), I want to intercept live-resize enter/exit events and temporarily switch rendering to an ID2D1HwndRenderTarget bound to the window's redirection surface, which seems to offer the smoothest possible resize.
The question is - how to render the Direct3D content to ID2D1HwndRenderTarget?
The problem is that the Direct3D rendering and the ID2D1HwndRenderTarget obviously belong to different devices, and I can't find a way to make them interoperate.
Here are the ideas that come to mind:
A. Somehow assign ID2D1HwndRenderTarget to be the output frame buffer for 3D rendering. This would be the best case scenario, because it would minimize buffer copying.
1. Somehow obtain an ID3D11Texture2D or at least an IDXGISurface from the ID2D1HwndRenderTarget.
2. Create a render target view with ID3D11Device::CreateRenderTargetView from the DXGI surface / Direct3D texture.
3. Set the render target with ID3D11DeviceContext::OMSetRenderTargets.
4. Render directly to the ID2D1HwndRenderTarget.
The problem is in step 1. ID2D1HwndRenderTarget and its ancestor ID2D1RenderTarget expose fairly sparse interfaces and don't seem to provide this capability. Is there a way to obtain a DXGI or D3D surface from the ID2D1HwndRenderTarget?
B. Create an off-screen Direct3D frame buffer texture, render the 3D content there and then copy it to the window render target.
1. Create an off-screen texture with ID3D11Device::CreateTexture2D.
2. Create an ID2D1Device using D2D1CreateDevice (from the same IDXGIDevice as my ID3D11Device).
3. Create an ID2D1DeviceContext using ID2D1Device::CreateDeviceContext.
4. Create an ID2D1Bitmap using ID2D1DeviceContext::CreateBitmapFromDxgiSurface.
5. Draw the bitmap using ID2D1HwndRenderTarget::DrawBitmap.
Unfortunately, I get the error message "An operation failed because a device-dependent resource is associated with the wrong ID2D1Device (resource domain)". Obviously, the resource comes from a different ID2D1Device. But how do I draw a texture bitmap from one Direct2D device onto another?
C. Out of desperation, I tried to map the contents of my frame buffer to CPU memory using IDXGISurface::Map. However, this is tricky because it requires the D3D11_USAGE_STAGING and D3D11_CPU_ACCESS_READ flags when creating the texture, which then seems to make it impossible to use that texture as an output frame buffer (or am I missing something?). And this technique will most likely be very slow anyway, because it involves syncing between CPU and GPU and copying the whole texture at least twice.
Has anyone ever succeeded with the task of rendering 3D content to a ID2D1HwndRenderTarget? Please share your experience. Thanks!

AnimatedSprite vs AnimatedImage in QML

In QML I have multiple ways of including animations. Among others, there are
AnimatedImage
AnimatedSprite
which both seem to serve similar purposes. With the right tools, it is quite easy to transform a sprite sheet into an animated GIF or MNG file, which could be handled by an AnimatedImage. The other way around is not much harder.
In the documentation of Sprite they say:
The sprite engine internally copies and cuts up images to fit in an easier to read internal format, which leads to some graphics memory limitations. Because it requires all the sprites for a single engine to be in the same texture, attempting to load many different animations can run into texture memory limits on embedded devices. In these situations, a warning will be output to the console containing the maximum texture size.
On the other hand, AnimatedImage usually caches the individual frames, especially when the animation is set to loop (which might also run up against the maximum texture size?).
I know that Sprite has a fancy state machine and related machinery, but AnimatedSprite seems to be stripped of this.
As producing content for either of them is the same amount of work, I want to know whether one of them is superior in any use case, or whether their use cases and performance are entirely the same and the choice is just a matter of taste.
Actually, I did not find a single reference that mentioned both in the same context...

Augmented reality like zookazam

What algorithms are used for augmented reality like zookazam?
I think it analyzes the image and finds planes by contrast, but I don't know how.
What topics should I read before starting with an app like this?
[Prologue]
This is an extremely broad topic and mostly off topic in its current state. I re-edited your question, but to make it answerable within the rules/possibilities of this site,
you should specify more closely what your augmented reality:
should do
adding 2D/3D objects with known mesh ...
changing light conditions
adding/removing body parts/clothes/hairs ...
it is a good idea to provide some example image (sketch) of the input/output you want to achieve.
what input it has
video, static image, 2D, stereo, 3D. For pure 2D input, specify what conditions/markers/illumination/LASER patterns you have to help the reconstruction.
What will be in the input image? An empty room, persons, specific objects, etc.?
specify target platform
many algorithms are limited by memory size/bandwidth, CPU power, special HW capabilities, etc., so it is a good idea to add a tag for your platform. The OS and language are also good to add.
[How augmented reality works]
acquire input image
if you are connecting to some device like a camera, you need to use its driver/framework (or some common API it supports) to obtain the image. This task is OS dependent. My favorite way on Windows is to use the VFW (Video for Windows) API.
I would start with some static file(s) instead, to ease the debugging and incremental building process (you do not need to wait for the camera and so on with each build). When your app is ready for live video, switch back to the camera...
reconstruct the scene into 3D mesh
if you use 3D cameras like Kinect then this step is not necessary. Otherwise you need to distinguish the objects by some segmentation process, usually based on edge detection or color homogeneity.
The quality of the 3D mesh depends on what you want to achieve and on your input. For example, if you want realistic shadows and lighting, then you need a very good mesh. If the camera is fixed in some room, you can predefine the mesh manually (hard-code it) and compute just the objects in view. Object detection/segmentation can also be done very simply by subtracting the empty-room image from the current view image, so that the pixels with a big difference are the objects (a small sketch of this follows at the end of this step).
you can also use planes instead of a real 3D mesh, as you suggested in the OP, but then you can forget about the more realistic quality of effects like lighting, shadows, intersections... If you assume the objects are standing upright, then you can use room metrics to obtain the distance from the camera. see:
selection criteria for different projections
estimate measure of photographed things
For pure 2D input you can also use the illumination to estimate the 3D mesh, see:
Turn any 2D image into 3D printable sculpture with code
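To illustrate the empty-room subtraction mentioned above, here is a minimal C# sketch (purely illustrative: raw BGRA byte arrays are assumed, and a real implementation would add smoothing/morphology on top):

    using System;

    static class Segmentation
    {
        // Marks pixels whose color differs strongly from the empty-room frame.
        // background and current are BGRA byte arrays of the same size.
        public static bool[] ByDifference(byte[] background, byte[] current,
                                          int width, int height, int threshold = 40)
        {
            var mask = new bool[width * height];
            for (int i = 0; i < width * height; i++)
            {
                int o = i * 4; // 4 bytes per pixel: B, G, R, A
                int diff = Math.Abs(current[o]     - background[o])      // B
                         + Math.Abs(current[o + 1] - background[o + 1])  // G
                         + Math.Abs(current[o + 2] - background[o + 2]); // R
                mask[i] = diff > threshold; // big difference => foreground object
            }
            return mask;
        }
    }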
render
Just render the scene back to some image/video/screen with the added/removed features. If you are not changing the light conditions too much, you can also use the original image and render directly into it. Shadows can be achieved by darkening the pixels... For better results, the illumination/shadows/spots/etc. are usually filtered out from the original image first and then added back directly by rendering instead. see
White balance (Color Suppression) Formula?
Enhancing dynamic range and normalizing illumination
The rendering process itself is also platform dependent (unless you are doing it with low-level graphics in memory). You can use things like GDI, DX, OpenGL, ... see:
Graphics rendering
You also need camera parameters for rendering, like:
Transformation of 3D objects related to vanishing points and horizon line
[Basic topics to google/read]
2D
DIP digital image processing
Image Segmentation
3D
Vector math
Homogeneous coordinates
3D scene reconstruction
3D graphics
normal shading
platform dependent
image acquisition
rendering

XNA Texture loading speed (for extra large Texture sizes)

[Skip to the bottom for the question only]
While developing my XNA game I ran into another horrible XNA limitation: Texture2D-s (at least on my PC) can't have dimensions larger than 2048*2048. No problem, I quickly wrote my custom texture class, which uses a [System.Drawing.] Bitmap by default, eventually splits the texture into smaller Texture2D-s, and displays them as appropriate.
When I made this change I also had to update the method that loads the textures. When loading the Texture2D-s in the old version I used Texture2D.FromStream(), which worked pretty well, but XNA can't even seem to store/load textures above the limit, so if I tried to load/store a, say, 4092*2048 PNG file, I ended up with a 2048*2048 Texture2D in my app. Therefore I switched to loading the images using [System.Drawing.] Image.FromFile and then casting to a Bitmap, as that doesn't seem to have any such limitation. (Later converting this Bitmap to a list of Texture2D-s.)
The problem is that loading the textures this way is noticeably slower, because now even those images that are under the 2048*2048 limit are loaded as a Bitmap and then converted to a Texture2D. So I am actually looking for a way to check an image file's dimensions (width; height) before even loading it into my application. If it is under the texture limit, I can load it straight into a Texture2D without loading it into a Bitmap first and converting it into a single-element Texture2D list.
Is there any (clean and possibly very quick) way to get the dimensions of an image file without loading the whole file into the application? And if there is, is it even worth using? I guess the slowest part here is opening/seeking the file (probably hardware-bound, when it comes to HDDs) rather than streaming its contents into the application.
Do you need to support arbitrarily large textures? If not, switching to the HiDef profile will get you support for textures as large as 4096x4096.
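For reference, switching profiles is a one-line change in the Game constructor (XNA 4.0; "graphics" is simply whatever your GraphicsDeviceManager field is called):

    // Must be set before the graphics device is created (e.g. in the Game constructor).
    graphics = new GraphicsDeviceManager(this);
    graphics.GraphicsProfile = GraphicsProfile.HiDef; // raises the max texture size to 4096x4096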
If you do need to stick with your current technique, you might want to check out this answer regarding how to read image sizes without loading the entire file.
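As a sketch of that idea for PNG files specifically (the dimensions live in the fixed-size IHDR chunk right after the 8-byte signature, so no pixel data needs to be read; other formats such as BMP or JPEG need their own header parsing):

    using System.IO;

    static class PngSize
    {
        // Reads width/height from a PNG's IHDR chunk without loading the pixel data.
        // Layout: 8-byte signature, 4-byte chunk length, 4-byte "IHDR",
        // then big-endian 4-byte width and 4-byte height.
        public static bool TryGetDimensions(string path, out int width, out int height)
        {
            width = height = 0;
            using (var stream = File.OpenRead(path))
            {
                var header = new byte[24];
                if (stream.Read(header, 0, 24) < 24 ||
                    header[1] != (byte)'P' || header[2] != (byte)'N' || header[3] != (byte)'G')
                    return false; // not a PNG (or truncated)

                width  = (header[16] << 24) | (header[17] << 16) | (header[18] << 8) | header[19];
                height = (header[20] << 24) | (header[21] << 16) | (header[22] << 8) | header[23];
                return true;
            }
        }
    }

If the reported size is within the 2048*2048 limit, the file can then go straight through Texture2D.FromStream(); only the over-sized images need the Bitmap route.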

How to work with pixels using Direct2D

Could somebody provide an example of an efficient way to work with pixels using Direct2D?
For example, how can I swap all green pixels (RGB = 0x00FF00) with red pixels (RGB = 0xFF0000) on a render target? What is the standard approach? Is it possible to use ID2D1HwndRenderTarget for that? Here I assume some kind of hardware acceleration is used. Should I create a different object for direct pixel manipulation?
Using DirectDraw I would use the BltFast method on the IDirectDrawSurface7 with a logical operation. Is there something similar in Direct2D?
Another task is to generate complex images dynamically, where each point's location and color is the result of a mathematical function. For the sake of an example, let's simplify everything and draw Y = X ^ 2. How do I do that with Direct2D? Ultimately I'm going to need to draw complex functions, but a simple example for Y = X ^ 2 would be enough.
First, it helps to think of ID2D1Bitmap as a "device bitmap". It may or may not live in local, CPU-addressable memory, and it doesn't give you any convenient (or at least fast) way to read/write the pixels from the CPU side of the bus. So approaching from that angle is probably the wrong approach.
What I think you want is a regular WIC bitmap, IWICBitmap, which you can create with IWICImagingFactory::CreateBitmap(). From there you can call Lock() to get at the buffer, and then read/write using pointers and do whatever you want. Then, when you need to draw it on-screen with Direct2D, use ID2D1RenderTarget::CreateBitmap() to create a new device bitmap, or ID2D1Bitmap::CopyFromMemory() to update an existing device bitmap. You can also render into an IWICBitmap by making use of ID2D1Factory::CreateWicBitmapRenderTarget() (not hardware accelerated).
You will not get hardware acceleration for these types of operations. The updated Direct2D in Win8 (should also be available for Win7 eventually) has some spiffy stuff for this but it's rather complex looking.
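To make the CPU-side route concrete, here is a rough SharpDX-flavoured sketch (names such as renderTarget and pixels are assumptions, and BGRA is just one possible pixel layout) that swaps pure green for pure red in a managed buffer and uploads it with CopyFromMemory():

    using SharpDX;
    using SharpDX.Direct2D1;
    using SharpDX.DXGI;

    static class PixelSwap
    {
        // renderTarget: any existing render target (e.g. the ID2D1HwndRenderTarget);
        // pixels: BGRA bytes filled on the CPU (for instance read out of an IWICBitmap).
        public static SharpDX.Direct2D1.Bitmap SwapGreenForRed(
            RenderTarget renderTarget, byte[] pixels, int width, int height)
        {
            for (int i = 0; i < pixels.Length; i += 4)
            {
                // BGRA layout: replace pure green (00FF00) with pure red (FF0000).
                if (pixels[i] == 0x00 && pixels[i + 1] == 0xFF && pixels[i + 2] == 0x00)
                {
                    pixels[i + 1] = 0x00; // G
                    pixels[i + 2] = 0xFF; // R
                }
            }

            var bitmap = new SharpDX.Direct2D1.Bitmap(
                renderTarget,
                new Size2(width, height),
                new BitmapProperties(new SharpDX.Direct2D1.PixelFormat(
                    Format.B8G8R8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Ignore)));

            bitmap.CopyFromMemory(pixels, width * 4); // pitch = bytes per row
            return bitmap; // draw it with DrawBitmap() between BeginDraw()/EndDraw()
        }
    }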
Rick's answer talks about the methods you can use if you don't care about losing hardware acceleration. I'm focusing on how to accomplish this using a substantial amount of GPU acceleration.
In order to keep your rendering hardware accelerated and to get the best performance, you are going to want to switch from ID2DHwndRenderTarget to using the newer ID2DDevice and ID2DDeviceContext interfaces. It honestly doesn't add that much more logic to your code and the performance benefits are substantial. It also works on Windows 7 with the Platform Update. To summarize the process:
In order to keep your rendering hardware accelerated and to get the best performance, you are going to want to switch from ID2D1HwndRenderTarget to the newer ID2D1Device and ID2D1DeviceContext interfaces. It honestly doesn't add that much more logic to your code, and the performance benefits are substantial. It also works on Windows 7 with the Platform Update. To summarize the process (a rough SharpDX-style sketch follows the list):
1. Create a DXGI factory when you create your D2D factory.
2. Create a D3D11 device and a D2D device to match.
3. Create a swap chain using your DXGI factory and the D3D device.
4. Ask the swap chain for its back buffer and wrap it in a D2D bitmap.
5. Render like before, between calls to BeginDraw() and EndDraw(). Remember to unbind the back buffer and destroy the D2D bitmap wrapping it!
6. Call Present() on the swap chain to see the results.
7. Repeat from 4.
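As a rough SharpDX-style sketch of those steps (the window handle hwnd is a placeholder, error handling and disposal are omitted, and exact constructor overloads may differ between SharpDX versions):

    using SharpDX.Direct2D1;
    using SharpDX.Direct3D;
    using SharpDX.Direct3D11;
    using SharpDX.DXGI;

    // 1-2. D3D11 device (BGRA support is required for D2D interop) and a matching D2D device.
    var d3dDevice = new SharpDX.Direct3D11.Device(
        DriverType.Hardware, DeviceCreationFlags.BgraSupport);
    var dxgiDevice = d3dDevice.QueryInterface<SharpDX.DXGI.Device>();
    var d2dDevice = new SharpDX.Direct2D1.Device(dxgiDevice);
    var d2dContext = new SharpDX.Direct2D1.DeviceContext(d2dDevice, DeviceContextOptions.None);

    // 3. Swap chain for the window (hwnd is the HWND you render into).
    var dxgiFactory = dxgiDevice.Adapter.GetParent<SharpDX.DXGI.Factory2>();
    var swapDesc = new SwapChainDescription1
    {
        Width = 0, Height = 0,                    // 0 = use the window's client size
        Format = Format.B8G8R8A8_UNorm,
        SampleDescription = new SampleDescription(1, 0),
        Usage = Usage.RenderTargetOutput,
        BufferCount = 2,
        SwapEffect = SwapEffect.FlipSequential,
    };
    var swapChain = new SwapChain1(dxgiFactory, d3dDevice, hwnd, ref swapDesc);

    // 4. Wrap the back buffer in a D2D bitmap and make it the context's target.
    var backBuffer = swapChain.GetBackBuffer<Surface>(0);
    var targetBitmap = new Bitmap1(d2dContext, backBuffer, new BitmapProperties1(
        new SharpDX.Direct2D1.PixelFormat(Format.B8G8R8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Premultiplied),
        96, 96, BitmapOptions.Target | BitmapOptions.CannotDraw));
    d2dContext.Target = targetBitmap;

    // 5-6. Render as before, then present.
    d2dContext.BeginDraw();
    // ... draw calls / DrawImage(effect) go here ...
    d2dContext.EndDraw();
    swapChain.Present(1, PresentFlags.None);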
Once you've done that, you have unlocked a number of possible solutions. Probably the simplest and most performant way to solve your exact problem (swapping color channels) is to use the color matrix effect, as one of the other answers mentioned. It's important to recognize that you need the newer ID2D1DeviceContext interface rather than ID2D1HwndRenderTarget to get this, however. There are lots of other effects that can do more complicated operations if you so choose. Here are some of the most useful ones for simple pixel manipulation:
Color matrix effect
Arithmetic operation
Blend operation
For generally solving the problem of manipulating the pixels directly without dropping hardware acceleration or doing tons of copying, there are two options. The first is to write a pixel shader and wrap it in a completely custom D2D effect. It's more work than just getting the pixel buffer on the CPU and doing old-fashioned bit mashing, but doing it all on the GPU is substantially faster. The D2D effects framework also makes it super simple to reuse your effect for other purposes, combine it with other effects, etc.
For those times when you absolutely have to do CPU pixel manipulation but still want a substantial degree of acceleration, you can manage your own mappable D3D11 textures. For example, you can use staging textures if you want to asynchronously manipulate your texture resources from the CPU. There is another answer that goes into more detail. See ID3D11Texture2D for more information.
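A rough SharpDX-style sketch of that staging-texture route (device and gpuTexture stand for your existing D3D11 device and the default-usage texture you rendered into):

    using SharpDX.Direct3D11;

    // Copy a GPU-only texture into a CPU-readable staging texture and map it.
    var desc = gpuTexture.Description;
    desc.Usage = ResourceUsage.Staging;
    desc.BindFlags = BindFlags.None;
    desc.CpuAccessFlags = CpuAccessFlags.Read;
    desc.OptionFlags = ResourceOptionFlags.None;

    using (var staging = new Texture2D(device, desc))
    {
        var context = device.ImmediateContext;
        context.CopyResource(gpuTexture, staging);                         // GPU -> GPU copy
        var box = context.MapSubresource(staging, 0, MapMode.Read, MapFlags.None);
        // box.DataPointer / box.RowPitch now expose the pixels for CPU-side processing.
        context.UnmapSubresource(staging, 0);
    }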
The specific issue of swapping all green pixels with red pixels can be addressed via ID2D1Effect as of Windows 8 and Platform Update for Windows 7.
More specifically, Color matrix effect.
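As a hedged sketch of what that looks like through the SharpDX wrappers (d2dContext and sourceBitmap are assumed to exist, and the type/property names follow SharpDX 3.x, so they may differ in other versions). Note that the color matrix swaps the red and green channels of the whole image, which is a simplification of matching only the pure-green pixels:

    using SharpDX.Direct2D1;
    using SharpDX.Mathematics.Interop;

    // Output = [R G B A 1] * M, so M21 routes input G to output R and M12 routes input R to output G.
    var colorMatrix = new SharpDX.Direct2D1.Effects.ColorMatrix(d2dContext);
    colorMatrix.SetInput(0, sourceBitmap, true);
    colorMatrix.Matrix = new RawMatrix5x4
    {
        M12 = 1f, // output G <- input R
        M21 = 1f, // output R <- input G
        M33 = 1f, // blue passes through
        M44 = 1f  // alpha passes through
    };

    d2dContext.BeginDraw();
    d2dContext.DrawImage(colorMatrix.Output);
    d2dContext.EndDraw();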