I'm learning DX11 from Introduction_to_3D_Game_Programming_with_Directx_11.
Everything is OK without MSAA. When I enable it, my .fx and C++ code no longer work correctly.
Has anyone experienced this too, and how do you deal with this situation?
Code before:
Texture2D gTexture1;
float4 BLEND_PS(VertexOut_SV pin) :SV_TARGET
{
float4 texColor = float4(0.0f, 0.0f, 0.0f, 0.0f);
texColor = gTexture1.Sample(SamAnisotropic, pin.Tex);
return texColor;
}
Because I can't bind a texture created with MSAA to a Texture2D, I keep MSAA on all the time.
Code after:
Texture2DMS<float4> gTexture1;
float4 BLEND_PS(VertexOut_SV pin) :SV_TARGET
{
float4 texColor = float4(0.0f, 0.0f, 0.0f, 0.0f);
texColor = gTexture1.Load(int2(pin.Tex.x*1400, pin.Tex.y*900), 0);
return texColor;
}
But texColor is not the pixel I want. How do I sample an SRV with MSAA?
How do I convert a UAV without MSAA into an SRV with MSAA?
And how do I enable and disable MSAA in the C++ game code, together with the corresponding HLSL code?
Do I have to keep a different HLSL shader for each case?
For 'standard' MSAA use, you do the following (a minimal code sketch follows the list):
When creating your swap chain and render target view, set DXGI_SWAP_CHAIN_DESC.SampleDesc.Count or DXGI_SWAP_CHAIN_DESC1.SampleDesc.Count to 2, 4, 8, etc.
When creating your depth buffer/stencil, you need to use the same sample count for D3D11_TEXTURE2D_DESC.SampleDesc.Count.
When creating your render target view, you need to use D3D11_RTV_DIMENSION_TEXTURE2DMS (or pass nullptr for the view description so it matches the resource exactly)
When creating your depth buffer/stencil view, you need to use D3D11_DSV_DIMENSION_TEXTURE2DMS (or pass nullptr for the view description so it matches the resource exactly)
When rendering, you need to use a rasterizer state with D3D11_RASTERIZER_DESC.MultisampleEnable set to TRUE.
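As a minimal sketch of these steps (assuming an existing device, a window-sized width/height, and a 4x sample count already validated as described below; all variable names here are placeholders):
// MSAA depth buffer: the sample count must match the swap chain / render target.
D3D11_TEXTURE2D_DESC depthDesc = {};
depthDesc.Width = width;
depthDesc.Height = height;
depthDesc.MipLevels = 1;
depthDesc.ArraySize = 1;
depthDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthDesc.SampleDesc.Count = 4;
depthDesc.Usage = D3D11_USAGE_DEFAULT;
depthDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
Microsoft::WRL::ComPtr<ID3D11Texture2D> depthStencil;
device->CreateTexture2D(&depthDesc, nullptr, depthStencil.GetAddressOf());
// Passing nullptr for the view description makes the view match the resource,
// so it picks up the TEXTURE2DMS dimension automatically.
Microsoft::WRL::ComPtr<ID3D11DepthStencilView> depthStencilView;
device->CreateDepthStencilView(depthStencil.Get(), nullptr,
    depthStencilView.GetAddressOf());
// Rasterizer state with multisampling enabled.
CD3D11_RASTERIZER_DESC rastDesc(CD3D11_DEFAULT);
rastDesc.MultisampleEnable = TRUE;
Microsoft::WRL::ComPtr<ID3D11RasterizerState> msaaRasterState;
device->CreateRasterizerState(&rastDesc, msaaRasterState.GetAddressOf());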
See also the Simple rendering tutorial for DirectX Tool Kit
Sample count
Depending on the Direct3D feature level, some MSAA sample counts are required for particular render target formats. You can use CheckFormatSupport to verify that a render target format supports MSAA:
UINT formatSupport = 0;
if (FAILED(device->CheckFormatSupport(m_backBufferFormat, &formatSupport)))
{
throw std::exception("CheckFormatSupport");
}
UINT flags = D3D11_FORMAT_SUPPORT_MULTISAMPLE_RESOLVE
| D3D11_FORMAT_SUPPORT_MULTISAMPLE_RENDERTARGET;
if ( (formatSupport & flags) != flags )
{
// error
}
You then use CheckMultisampleQualityLevels to verify that the sample count is supported. This code finds the highest supported MSAA sample count for a particular format:
for (m_sampleCount = D3D11_MAX_MULTISAMPLE_SAMPLE_COUNT;
m_sampleCount > 1; m_sampleCount--)
{
UINT levels = 0;
if (FAILED(device->CheckMultisampleQualityLevels(m_backBufferFormat,
m_sampleCount, &levels)))
continue;
if ( levels > 0)
break;
}
if (m_sampleCount < 2)
{
// error
}
You can also validate the depth/stencil format you want to use supports D3D11_FORMAT_SUPPORT_DEPTH_STENCIL | D3D11_FORMAT_SUPPORT_MULTISAMPLE_RENDERTARGET.
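The same pattern as above works for that check (m_depthBufferFormat here is a placeholder for whatever depth format you picked, e.g. DXGI_FORMAT_D24_UNORM_S8_UINT):
UINT dsSupport = 0;
if (FAILED(device->CheckFormatSupport(m_depthBufferFormat, &dsSupport)))
{
throw std::exception("CheckFormatSupport");
}
UINT dsFlags = D3D11_FORMAT_SUPPORT_DEPTH_STENCIL
| D3D11_FORMAT_SUPPORT_MULTISAMPLE_RENDERTARGET;
if ((dsSupport & dsFlags) != dsFlags)
{
// error
}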
Flip Style modes
The technique above only works for the older "bit-blt" style flip modes DXGI_SWAP_EFFECT_DISCARD or DXGI_SWAP_EFFECT_SEQUENTIAL. For UWP and DirectX 12 you are required to use DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL or DXGI_SWAP_EFFECT_FLIP_DISCARD which will fail if you attempt to create a back buffer with a SampleCount > 1.
In this case, you create the backbuffer with a SampleCount of 1, and create your own MSAA render target 2D texture. You have your Render Target View point to your MSAA render target, and before you Present you call ResolveSubresource from your MSAA render target to the backbuffer. This is exactly the same thing that DXGI did for you 'behind the scenes' with the older flip models.
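A minimal sketch of that resolve step, assuming msaaRenderTarget is your SampleCount > 1 texture, backBuffer is the SampleCount == 1 swap chain buffer, and both use the same resolvable format:
// Resolve the MSAA render target into the single-sample back buffer,
// then present as usual.
context->ResolveSubresource(backBuffer.Get(), 0,
    msaaRenderTarget.Get(), 0,
    DXGI_FORMAT_B8G8R8A8_UNORM);
swapChain->Present(1, 0);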
For gamma-correct rendering (i.e. when you use a backbuffer format ending in _SRGB), the newer flip styles require that you use the non-SRGB equivalent for the backbuffer format or the swap chain creation will fail. You set the SRGB format on the render target view instead.
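For example, a sketch of that last point (the format names are just an example; backBuffer and renderTargetView are placeholders):
// Swap chain created with DXGI_FORMAT_B8G8R8A8_UNORM; gamma correction
// is applied through the SRGB view format instead.
CD3D11_RENDER_TARGET_VIEW_DESC rtvDesc(D3D11_RTV_DIMENSION_TEXTURE2D,
    DXGI_FORMAT_B8G8R8A8_UNORM_SRGB);
device->CreateRenderTargetView(backBuffer.Get(), &rtvDesc,
    renderTargetView.ReleaseAndGetAddressOf());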
I'd like to use custom drawing within a Gtk::Layout. I'm using the C++ bindings for Gtk3 (GTKmm 3.14.0), and I have embedded widgets placed on the "canvas", on top of my custom drawing. Basically this works just fine.
Now the problem is related to scrolling. Gtk::Layout can be placed into a Gtk::ScrolledWindow, and when the scrollable area is set to something larger than the visible allocation, scrollbars will show up. Unfortunately, those scrollbars influence only the placement of the embedded widgets, while my custom drawing remains at a fixed position within the window.
This means, both the Gtk::Allocation and the cairo context seem to be related to precisely the visible area, not to the extended virtual "canvas". I could work around that problem by accessing the adjustments from the scrollbars and then translate the cairo context accordingly...
My question is:
is this the proper way to handle such a scrollable drawing?
or is there some way to let the framework do this work for me?
Judging from the source code of gtk+3.0-3.14.5 (which is in Debian/Stable), the Gtk::Layout does nothing to adjust the drawing context. It just invokes the inherited draw() function from GtkWidget. On the other hand, Gtk::Layout is a full-blown container (it inherits from Gtk::Container), and it is scrollable, which together means that it handles gtk_layout_size_allocate() by passing a suitable allocation (screen area) to each of the embedded child widgets -- and in this respect it does handle the moving and clipping related to scrolling the virtual canvas (calls gdk_window_move_resize()).
Thus, if we want to combine the embedded child widgets with custom drawing, we need to bridge this discrepancy manually. This is quite easy actually: all we need to do is look into the Gtk::Adjustment objects corresponding to the scrollbars, because the value of these adjustments is precisely the upper left corner of the visible viewport. Now, if we want our custom drawing to use absolute canvas coordinates, we just have to translate() the given Cairo context. Beware: it is important to save() the state and to restore() it to pristine state when done, otherwise those translations will accumulate.
Here is some example code to demonstrate this custom drawing:
we derive a custom container class called Canvas from Gtk::Layout
we override the on_draw() handler, because only at that point have all size allocations for the embedded child widgets been processed
Layering: child widgets are always drawn in the order they have been added to the Gtk::Layout container. Any custom drawing done before invoking the inherited on_draw() function will be below those widgets; any drawing done afterwards will happen on top of them.
if necessary, we can use the foreach(callback) mechanism to visit all child widgets to find out their current position and extension
void
Canvas::determineExtension()
{
if (not recalcExtension_) return;
uint extH=20, extV=20;
Gtk::Container::ForeachSlot callback
= [&](Gtk::Widget& chld)
{
auto alloc = chld.get_allocation();
uint x = alloc.get_x();
uint y = alloc.get_y();
x += alloc.get_width();
y += alloc.get_height();
extH = std::max (extH, x);
extV = std::max (extV, y);
};
foreach(callback);
recalcExtension_ = false;
set_size (extH, extV); // define extension of the virtual canvas
}
bool
Canvas::on_draw(Cairo::RefPtr<Cairo::Context> const& cox)
{
if (shallDraw_)
{
uint extH, extV;
determineExtension();
get_size (extH, extV);
auto adjH = get_hadjustment();
auto adjV = get_vadjustment();
double offH = adjH->get_value();
double offV = adjV->get_value();
cox->save();
cox->translate(-offH, -offV);
// draw red diagonal line
cox->set_source_rgb(0.8, 0.0, 0.0);
cox->set_line_width (10.0);
cox->move_to(0, 0);
cox->line_to(extH, extV);
cox->stroke();
cox->restore();
// cause child widgets to be redrawn
bool event_is_handled = Gtk::Layout::on_draw(cox);
// any drawing which follows happens on top of child widgets...
cox->save();
cox->translate(-offH, -offV);
cox->set_source_rgb(0.2, 0.4, 0.9);
cox->set_line_width (2.0);
cox->rectangle(0,0, extH, extV);
cox->stroke();
cox->restore();
return event_is_handled;
}
else
return Gtk::Layout::on_draw(cox);
}
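To tie it together, a minimal usage sketch (assuming the Canvas class above; the child widget and coordinates are just placeholders):
Gtk::ScrolledWindow scroller;
Canvas canvas;
Gtk::Button button("embedded child");
scroller.add(canvas);          // Gtk::Layout is scrollable, so it can be added directly
canvas.put(button, 400, 300);  // place a child widget at absolute canvas coordinates
scroller.show_all();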
I have a small OpenCL kernel that writes to a shared GL texture. I have separated different stages of the computation into several functions. Every function gets a pointer to the final color and passes it along if needed. If you look at the code fragment, you will see a line marked "UNREACHABLE". For some reason it does get executed. Whatever color I put in there appears in the final image. How is that possible?
If I duplicate the same code block right below, it does not happen for the duplicate; only for the first one. :(
To make things even funnier, if I change the code above (e.g. add another multiplication), the UNREACHABLE line gets executed at random.
Therefore my questions: Is this a compiler bug? Am I exhausting a certain memory or register budget that I should be aware of? Are OpenCL compilers buggy in general?
void sample(float4 *color) {
...
float4 r_color = get_color(...);
float factor = r_color.w + (*color).w - 1.0f;
r_color = r_color * ((r_color.w - factor) / r_color.w);
*color += r_color;
if(color->w >= 1.0f) {
if(color->w <= 0.0f) {
(*color) = (float4)(0.0f, 0.0f, 0.0f, 1.0f); //UNREACHABLE?
return;
}
}
...
}
...
__kernel void render(
__write_only image2d_t output_buffer,
int width,
int height
) {
uint screen_x = get_global_id(0);
uint screen_y = get_global_id(1);
float4 color = (float4)(0.0f, 0.0f, 0.0f, 0.0f);
sample(&color);
write_imagef(output_buffer, (int2)(screen_x, screen_y), color);
}
My Platform:
Apple
Intel(R) Core(TM) i5-2415M CPU @ 2.30GHz
selected device has extensions: cl_APPLE_SetMemObjectDestructor cl_APPLE_ContextLoggingFunctions cl_APPLE_clut cl_APPLE_query_kernel_names cl_APPLE_gl_sharing cl_khr_gl_event cl_khr_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_image2d_from_buffer cl_APPLE_fp64_basic_ops cl_APPLE_fixed_alpha_channel_orders cl_APPLE_biased_fixed_point_image_formats cl_APPLE_command_queue_priority
[Edit]
After observing the values I get during the calculation, I am thinking that r_color.w being exactly 0.0f after get_color may cause the problem (the division by r_color.w would then produce an Inf or NaN). I am still looking for a definitive statement on whether comparing a NaN is undefined, always true, or something else.
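For reference, here is a minimal host-side C++ sketch (not the kernel itself) of the strict IEEE 754 rule, which OpenCL follows unless you compile with -cl-fast-relaxed-math (that flag lets the compiler assume NaN never occurs): every ordered comparison involving a NaN evaluates to false.
#include <cmath>
#include <cstdio>

int main()
{
    float w = std::nanf("");  // quiet NaN, e.g. what 0.0f / 0.0f produces
    std::printf("%d %d %d\n", w >= 1.0f, w <= 0.0f, w == w);  // prints: 0 0 0
}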
Also "Is opencl known to generate corrupt code?" has the invisible postfix "or what am I missing".
I used to work with embedded systems where the vendor would provide their own proprietary compilers, which in turn were known to break your code. So I want to get this off the table if possible. I suspect that clang would not do that. But you never know.
I am developing an OS X app that uses custom Core Image filters to attain a particular effect: Set one image's luminance as another image's alpha channel. There are filters to use an image as a mask for another but they require a third -background- image; I need to output an image with transparent parts, no background set.
As explained in Apple's documentation, I wrote the kernel code and tested it in QuartzComposer; it works as expected.
The kernel code is:
kernel vec4 setMask(sampler src, sampler mask)
{
vec4 color = sample(src, samplerCoord(src));
vec4 alpha = sample(mask, samplerCoord(mask));
color.a = alpha.r;
// (mask image is grayscale; any channel colour will do)
return color;
}
But when I try to use the filter from my code (either packaging it as an image unit or directly from the app source), the output image turns out to have the following 'undefined'(?) extent:
extent CGRect origin=(x=-8.988465674311579E+307, y=-8.988465674311579E+307) size=(width=1.797693134862316E+308, height=1.797693134862316E+308)
and further processing (convert to NSImage bitmap representation, write to file, etc.) fails. The filter itself loads perfectly (not nil) and the output image it produces isn't nil either, just has an invalid rect.
EDIT: Also, I copied the exported image unit (plugin), to both /Library/Graphics/Image Units and ~/Library/Graphics/Image Units, so that it appears in QuartzComposer's Patch Library, but when I connect it to the source images and Billboard renderer, nothing is drawn (transparent background).
Am I missing something?
EDIT: Looks like I assumed too much about the default behaviour of -[CIFilter apply:].
My filter subclass code's -outputImage implementation was this:
- (CIImage*) outputImage
{
CISampler* src = [CISampler samplerWithImage:inputImage];
CISampler* mask = [CISampler samplerWithImage:inputMaskImage];
return [self apply:setMaskKernel, src, mask, nil];
}
So I tried and changed it to this:
- (CIImage*) outputImage
{
CISampler* src = [CISampler samplerWithImage:inputImage];
CISampler* mask = [CISampler samplerWithImage:inputMaskImage];
CGRect extent = [inputImage extent];
NSDictionary* options = @{ kCIApplyOptionExtent: @[@(extent.origin.x),
                                                   @(extent.origin.y),
                                                   @(extent.size.width),
                                                   @(extent.size.height)],
                           kCIApplyOptionDefinition: @[@(extent.origin.x),
                                                       @(extent.origin.y),
                                                       @(extent.size.width),
                                                       @(extent.size.height)]
                           };
return [self apply:setMaskKernel arguments:@[src, mask] options:options];
}
...and now it works!
How are you drawing it? And what does your CIFilter code look like? You'll need to provide a kCIApplyOptionDefinition most likely when you call apply: in outputImage.
Alternatively, you can also change how you are drawing the image, using CIContext's drawImage:inRect:fromRect:.
I need my sprite to transition from one color to another and on and on... like a blue tint, then green, then purple. But I cannot find any good actions for that, and I am wondering: should I use animations, or is there a built-in action for this?
You can use the CCTintTo action to change the color of the sprite:
[sprite runAction:[CCTintTo actionWithDuration:2 red:255 green:0 blue:0]];
Since I saw several questions about replacing pixel colours in sprites, and I didn't see any good solution (all the solutions only tint the colour, and none of them can change an array of colours without forcing you into creating multiple image layers which construct the final image you want, i.e. one layer for pants, another for shoes, another for the shirt, another for hair colour... and it goes on - note that they do have their advantages, like the ability to use accurate gradients), here is mine.
My solution allows you to change an array of colours, meaning you can have a single image with known colours (you don't want any gradients in this layer, only colours whose values you KNOW - PS: this only applies to colours you intend to change, other pixels can have any colour you want).
If you need gradients over the colours you change, create an additional image with only the shading and place it as a child of the sprite.
Also be aware that I am super-new to cocos2d/x (3 days), and that this code is written for cocos2d-x but can be ported to cocos2d easily.
Also note that I didn't test it on Android, only on iOS; I am not sure how capable Android's official GCC is and how it will deal with the way I allocate _srcC and _dstC, but again, this is easily portable.
So here it goes:
cocos2d::CCSprite * spriteWithReplacedColors( const char * imgfilename, cocos2d::ccColor3B * srcColors, cocos2d::ccColor3B * dstColors, int numColors )
{
CCSprite *theSprite = NULL;
CCImage *theImage = new CCImage;
if( theImage->initWithImageFile( imgfilename ) )
{
//make a color array which is easier to work with
unsigned long _srcC [ numColors ];
unsigned long _dstC [ numColors ];
for( int c=0; c<numColors; c++ )
{
_srcC[c] = (srcColors[c].r << 0) | (srcColors[c].g << 8) | (srcColors[c].b << 16);
_dstC[c] = (dstColors[c].r << 0) | (dstColors[c].g << 8) | (dstColors[c].b << 16);
}
unsigned char * rawData = theImage->getData();
int width = theImage->getWidth();
int height = theImage->getHeight();
//replace the colors that need replacing
unsigned int * b = (unsigned int *) rawData;
for( int pixel=0; pixel<width*height; pixel++ )
{
register unsigned int p = *b;
for( int c=0; c<numColors; c++ )
{
if( (p&0x00FFFFFF) == _srcC[c] )
{
*b = (p&0xFF000000) | _dstC[c];
break;
}
}
b++;
}
CCTexture2D *theTexture = new CCTexture2D();
if( theTexture->initWithData(rawData, kCCTexture2DPixelFormat_RGBA8888, width, height, CCSizeMake(width, height)) )
{
theSprite = CCSprite::spriteWithTexture(theTexture);
}
theTexture->release();
}
theImage->release();
return theSprite;
}
to use it just do the following:
ccColor3B src[] = { ccc3( 255,255,255 ), ccc3( 0, 0, 255 ) };
ccColor3B dst[] = { ccc3( 77,255,77 ), ccc3( 255, 0, 0 ) };
//will change all whites to greens, and all blues to reds.
CCSprite * pSprite = spriteWithReplacedColors( "character_template.png", src, dst, sizeof(src)/sizeof(src[0]) );
Of course, if you need speed, you could create an extension for the sprite that creates a pixel shader to do this HW-accelerated at render time ;)
BTW: this solution might cause some artefacts on the edges in some cases, so you can create a large image and scale it down, letting GL minimise the artefacts.
You can also create "fix" layers with black outlines to hide the artefacts and place them on top, etc.
Also make sure you don't use these 'key' colours in the parts of the image where you don't want the pixels changed.
Also keep in mind that the alpha channel is not changed, and that if you use basic images with pure red/green/blue colours only, you can optimise this function to eliminate all edge artefacts automatically (avoiding, in many cases, the need for an additional shading layer) and do other cool stuff (multiplexing several images into a single bitmap - remember palette animation?).
enjoy ;)
In case someone wants this for cocos2d-x, here is the code (TintTo::create takes a float duration in seconds plus the target red, green, and blue values):
somesprite->runAction(TintTo::create(2.0f, 255, 0, 0));
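To get the blue → green → purple cycle from the question, one way is chaining tints (a sketch with cocos2d-x v3 class names; durations and colour values are just examples):
auto blue   = TintTo::create(1.0f, 0, 0, 255);    // tint to blue over one second
auto green  = TintTo::create(1.0f, 0, 255, 0);
auto purple = TintTo::create(1.0f, 128, 0, 128);
somesprite->runAction(RepeatForever::create(
    Sequence::create(blue, green, purple, nullptr)));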
Using freeglut on Windows 7 Home Ultimate with an ATI Mobility Radeon 5650 video card.
code snippet:
void ResizeFunction(int width, int height)
{
glViewport(0, 0, width, height);
}
void RenderFunction()
{
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//...drawing code based on some flag, I draw a triangle or a rectangle
//the flag is toggled on pressing 't' or 'T' key
glutSwapBuffers(); //double buffering is enabled
glutPostRedisplay();
}
void KeyboardFunction(unsigned char key, int x, int y)
{
switch(key)
{
case 't':
case 'T':
{
flag = !flag;
glutPostRedisplay();
break;
}
default:
break;
}
}
Problem: the first time, the triangle or the rectangle is drawn covering the entire window. But if I partially cover the glut window with another window (say, a notepad window) and then uncover it, then when I subsequently toggle, the object is drawn only in the previously covered portion of the glut window. If I re-size the glut window, drawing works correctly as before.
Any help will be appreciated.
regards,
fs
GLUT only redraws the screen when you tell it to or when it decides to. That is, if you don't do anything in the window, the scene is not redrawn. Advantage of this: less CPU/GPU usage. Disadvantage: only good for non-animated applications.
If you want to constantly update the screen (which is what applications with lots of animation, games for example, do), you can use glutIdleFunc:
http://www.opengl.org/resources/libraries/glut/spec3/node63.html
That is, at the beginning of the program, when you set all the callback functions for glut, you also write:
glutIdleFunc(RenderFunction);
This way, when glut is idle, it keeps calling your render function.
If you want to render slower than possible (for example with a fixed frame rate), you can use a timer instead. Note that glutTimerFunc expects a callback taking an int value, so you register a small timer function rather than the render function itself:
void TimerFunction(int value)
{
glutPostRedisplay(); // ask GLUT to call your display function
glutTimerFunc(YOUR_DELAY_IN_MS, TimerFunction, 0); // re-arm the timer for the next frame
}
and instead of glutIdleFunc(RenderFunction);, you write
glutTimerFunc(YOUR_DELAY_IN_MS, TimerFunction, 0);
once during setup to kick things off; the timer function then keeps re-registering itself for its next run.
As a side note, I suggest using SDL instead of glut.