I am trying to get a D3D capture using the ID3D11DeviceContext::Map() function.
But the output is flipped and rotated.
Is this the default behavior? What is the simplest/most efficient way to fix this?
D3D11_MAPPED_SUBRESOURCE desc;
hr = context->Map(pRes, subres, D3D11_MAP_READ_WRITE, 0, &desc);
// use desc.pData
context->Unmap(pRes, subres);
Thanks
Your code looks fine. More than likely you're simply interpreting the resulting data incorrectly. Texture data starts at the top-left and goes right, then down; note also that successive rows in desc.pData are separated by desc.RowPitch bytes, which may be larger than width * bytes-per-pixel. For example, the layout of a 4x2 texture looks like this:
[0][1][2][3]
[4][5][6][7]
Alternatively, the source texture may indeed be flipped and rotated, and is simply being corrected elsewhere in the pipeline (e.g. by rotating in the vertex shader).
I am wondering how I can use animated shapes inside a movieclip that acts as a mask.
In my Animate CC canvas file I have an instance (stripeMask) that should mask the instance below it, called mapAnim.
stripeMask contains shapes that animate in.
So when the function maskIn is called, the playhead should move to the first frame inside the stripeMask clip (the one after frame 0) and animate the mask like so:
function maskIn() {
    // mask animation to reveal the image below
    stripeMask.gotoAndPlay(1);
}
I love Animate CC and it works great, but there is a real need for more complex, animated masks, and that isn't easy to achieve unless I am missing something here.
Thanks!
Currently you can only use a Shape as a mask, not a Container or MovieClip.
If you want to do something more complex, you can use something like AlphaMaskFilter, but it has to be cached, and then updated every time the mask OR the content updates:
something.filters = [new createjs.AlphaMaskFilter(stripeMask.cacheCanvas)];
something.cache(0, 0, w, h);
// On change (of the mask or the content):
something.updateCache(); // Re-caches
The source of the AlphaMaskFilter must be an image, so you can either point to a Bitmap image, or a cacheCanvas of a mask clip you have also cached. Note that if the mask changes, the cache has to be updated as well.
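A minimal sketch of the full wiring (w, h, stage, stripeMask, and mapAnim are the names from the question; updating both caches on every tick is an assumption about how your mask animates):
stripeMask.cache(0, 0, w, h); // gives the mask a cacheCanvas to use as the filter source
mapAnim.filters = [new createjs.AlphaMaskFilter(stripeMask.cacheCanvas)];
mapAnim.cache(0, 0, w, h);
createjs.Ticker.on("tick", function () {
    stripeMask.updateCache(); // redraw the animating mask
    mapAnim.updateCache();    // reapply the filter with the updated mask
    stage.update();
});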
This is admittedly not a fantastic solution, and we are working on other options.
I am facing a performance drop issue. I did some research and it seems that the remap function takes too much time. The image is VGA resolution, but the interesting area is only about 1/4 of it. Therefore, I want to use remap() only for this region and end up with an image about 1/4 of the VGA area.
[Image: the input frame in VGA resolution; the green rect is the trackableArea Rect]
[Image: the desired output, but in VGA]
Generated by:
remap(originalCornersSamples[i], rview, map1, map2, INTER_NEAREST);
When I try to run remap only on the specific area:
remap(frame_bgr, rview, map1(trackableArea), map2(trackableArea), INTER_NEAREST);
I get, as expected, the desired image, but stretched, at the resolution of the trackableArea rect.
map1 and map2 were generated from getPerspectiveTransform to extract only the TV screen from the input image. The trackableArea is the Rect shown above (green lines).
Any ideas how to make this possible, or what the remap() call should look like?
Answering my own question :) This helped:
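// resize the full-size map1 down to the trackableArea dimensions
// (rather than cropping it), then crop map2 to the same region: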
resize(map1, modified, Size(trackableArea.width, trackableArea.height), 0, 0, INTER_CUBIC);
remap(frame_bgr, rview, modified, map2(trackableArea), INTER_NEAREST);
I am trying to port this (http://madebyevan.com/webgl-water/) over to THREE. I think I'm getting close (just want the simulation for now, don't care about caustics/refraction yet). I'd like to get it working with shaders for the GPU boost.
Here's my current THREE setup using shaders: http://jsfiddle.net/EqLL9/2/
(the second smaller plane is for debugging what's currently in the WebGLRenderTarget)
What I'm struggling with is reading data back from the WebGLRenderTarget (rtTexture in my example). In the example you'll see the 4 vertices surrounding the center point are displaced upwards. This is correct (after 1 simulation step) as it starts with the center point being the only point of displacement.
If I could read the data back from the rtTexture and update the data texture (buf1) each frame, then the simulation should properly animate. How does one read the data directly from a WebGLRenderTarget? All the examples demonstrate how to send data TO the target (render to it), not read FROM it. Or am I doing it all wrong? Something's telling me I'll have to work with multiple textures and somehow swap back and forth similar to how Evan did it.
TL;DR: How can I copy data from a WebGLRenderTarget to a DataTexture after a call like this:
// render to rtTexture
renderer.render( sceneRTT, cameraRTT, rtTexture, true );
EDIT: May have found the solution at http://jsfiddle.net/gero3/UyGD8/9/
Will investigate and report back.
OK, I figured out how to read the data back using native WebGL calls:
// Render first scene into texture
renderer.render( sceneRTT, cameraRTT, rtTexture, true );
// read render texture into buffer
var gl = renderer.getContext();
gl.readPixels( 0, 0, simRes, simRes, gl.RGBA, gl.UNSIGNED_BYTE, buf1.image.data );
buf1.needsUpdate = true;
The simulation now animates. However, it doesn't seem to be functioning properly (probably a dumb error I'm overlooking). It seems that the height values are never being damped and I'm not sure why. The data from buf1 is used in the fragment shader, which calculates the new height (red in RGBA), damps the value (multiplies by 0.99), then renders it to a texture. I then read this updated data from the texture back into buf1.
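For reference, a minimal sketch of the loop described above, assuming the names from the fiddle (sceneRTT, cameraRTT, rtTexture, simRes, buf1) and that the visible scene and camera are called scene and camera:
function step() {
    // run one simulation step into the render target
    renderer.render(sceneRTT, cameraRTT, rtTexture, true);
    // copy the result back into the data texture for the next step
    var gl = renderer.getContext();
    gl.readPixels(0, 0, simRes, simRes, gl.RGBA, gl.UNSIGNED_BYTE, buf1.image.data);
    buf1.needsUpdate = true;
    // draw the visible scene using the updated heights
    renderer.render(scene, camera);
    requestAnimationFrame(step);
}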
Here's the latest fiddle: http://jsfiddle.net/EqLL9/3/
I'll keep this updated as I progress along.
EDIT: Works great now. Just got normals implemented, and now working on environment reflection and refraction (again, purely through shaders). http://relicweb.com/webgl/rt.html
I've added some text to my scene with THREE.TextGeometry, and the text seems to be stuck in whichever xy plane I place it in. Is there any way to have it adjust so it is always in plane with the screen, readable for the user?
Try
mesh.lookAt( camera.position );
The local z-axis of the mesh should then point toward the camera.
To make the text always face the screen (rather than the camera):
mesh.quaternion.copy(camera.quaternion);
Note that this differs from mesh.lookAt(camera.position), since lookAt results in a different orientation depending on where on the screen the object is located.
Object3D.onBeforeRender can be used to update the orientation on each frame. It might be useful to disable frustum culling with Object3D.frustumCulled = false; to ensure that the callback is always triggered.
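A minimal sketch (assuming mesh is the text mesh and a render loop is already running):
mesh.frustumCulled = false; // make sure onBeforeRender fires even when off-screen
mesh.onBeforeRender = function (renderer, scene, camera) {
    // keep the text parallel to the screen on every frame
    mesh.quaternion.copy(camera.quaternion);
};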
Please see this thread for details.
To summarize, given the following circumstances:
gl = canvas.getContext('experimental-webgl');
gl.clearColor(0, 0, 0, 1);
gl.colorMask(1, 1, 1, 0);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.enable(gl.BLEND);
...and a standard render loop:
function doRender() {
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    // render stuff, and request another frame
    requestAnimationFrame(doRender);
}
...then I would like to know what the expected output should theoretically be.
In actuality, I'm seeing that the first frame renders as if there were no color mask, and the second (and subsequent) frames render the entire canvas opaque white.
Note that it doesn't matter what the alpha level is set to: the second frame is always immediately, completely white (including areas that were not rendered to), even if the rendered alpha values are extremely low.
The Question: what is the expected result of the above operations on the first, second, and subsequent frames? Also, is what I am experiencing the expected result, or due to some bug in the GL driver or WebGL implementation? And finally, if it is the expected result, why? What is actually happening on the video card to produce this result?
System details: Chrome / Firefox (both) on a MacBook Pro / GeForce 320M / Snow Leopard.
WebGL automatically clears the drawing buffer on each frame unless you tell it not to.
Try:
gl = canvas.getContext('experimental-webgl', {
    preserveDrawingBuffer: true
});
That's potentially slower than letting it clear, though, since the browser may have to copy the drawing buffer each frame: it needs one copy to composite with the rest of the page while you draw new stuff into the other. So it's probably better to call gl.clear inside your render loop. Of course, if the effect you were going for was to continually blend stuff into the drawing buffer, then you either need to tell it to be preserved as in the example above, or you need to draw into a texture or renderbuffer using a framebuffer and then render that to the drawing buffer.
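If you go the framebuffer route, a minimal sketch looks something like this (fbTexture and fb are illustrative names; the actual draw calls are your own):
var fbTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, fbTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, canvas.width, canvas.height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, fbTexture, 0);
// each frame: blend into the framebuffer; its contents persist across frames
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
// ...draw your blended stuff here...
// then show the accumulated texture on the canvas
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
// ...draw a full-screen quad sampling fbTexture here...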