How to write to specific outputs on a shader in google-filament

I'm setting up a render target that has two outputs, one of type RGBA and the other of type R8. Inside the Filament material definition file, how do I specify the output for each target? In OpenGL it would look like:
out vec4 accum;
out float reveal;
void material(inout MaterialInputs material) {
    prepareMaterial(material);
    // prepare material inputs
    accum = material.baseColor;
    reveal = material.baseColor.a;
}
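For reference, the host-side half of this is creating the render target with two color attachments. Below is a minimal sketch assuming Filament's C++ RenderTarget::Builder and its COLOR0/COLOR1 attachment points (check filament/RenderTarget.h for your version); the helper name is hypothetical. On the material side, Filament's Materials documentation describes declaring additional outputs in the material file, which the fragment code then writes to.

#include <filament/Engine.h>
#include <filament/RenderTarget.h>
#include <filament/Texture.h>

// Hypothetical helper: builds a render target with an rgba accumulation
// attachment and an r8 reveal attachment. Assumes the AttachmentPoint
// names from filament/RenderTarget.h; both textures must have been
// created with COLOR_ATTACHMENT usage.
filament::RenderTarget* createAccumRevealTarget(filament::Engine& engine,
                                                filament::Texture* accum,
                                                filament::Texture* reveal) {
    using filament::RenderTarget;
    return RenderTarget::Builder()
        .texture(RenderTarget::AttachmentPoint::COLOR0, accum)   // output 0: rgba
        .texture(RenderTarget::AttachmentPoint::COLOR1, reveal)  // output 1: r8
        .build(engine);
}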

Related

Shader inputs to registers mapping

I have a compiled pixel shader 4.0 (I don't have the source code for it) with the following in its input signature:
// Name      Index  Mask  Register
// TEXCOORD    4    xyz      4
// TEXCOORD    8      w      4
There’re are other input slots, I’ve only pasted the interesting lines of the signature.
As you see, the signature says both TEXCOORD4 and TEXCOORD8 input values go to v4 input register, the former one to xyz fields, the latter one to w field.
MSDN says the type of TEXCOORD[n] input is float4.
Which component of TEXCOORD8 input value goes to the v4.w field, x or w?
I don't think that mapping is specific to a particular component - the HLSL compiler is just attempting to map the inputs to a more efficient layout. I'm fairly certain the input signature for the shader is going to look something like:
float4 myPixelShader(float3 input_f3 : TEXCOORD4, float input_scalar : TEXCOORD8)
{
    // ...
}
If you're looking to create a vertex shader that feeds this particular pixel shader, you could just mirror that layout in your output, something like:
struct MyInterpolants
{
    float3 something : TEXCOORD4;
    float something_scalar : TEXCOORD8;
};

MyInterpolants MyVertexShader(...)
{
    MyInterpolants o;
    float3 calc;
    float4 some_var;
    // Some code here that does something and fills elements of some_var and calc
    o.something = calc;
    o.something_scalar = some_var.z;
    return o;
}
should map just fine. It doesn't really matter which element of a local variable is mapped to that output, as long as you're passing a scalar and the signatures between the pixel shader input and vertex shader outputs match. So you could pass x, y, z or w to that scalar output and it'll work out the same.

create a spiral UI bar in Unity

Is it possible to create a bar like this one in Unity?
The default components only seem to support primitive shapes: I can have a rectangular bar or a radial bar.
A spiral bar would need access to a path, because the fill color has to know how to move along the sprite.
I would use a custom shader with a sprite containing the "fill information" as the alpha channel.
On the image below, you will see your original sprite, and another one with a gradient alpha (sorry, I'm not an expert with Photoshop).
You can download the Unity shaders from their website, pick UI-Default.shader inside DefaultResourcesExtra/UI, and tweak it a little so it fills the sprite according to the sprite's alpha value.
Something like this (not tested):
Shader "UI/FillAlpha"
{
Properties
{
[PerRendererData] _MainTex ("Sprite Texture", 2D) = "white" {}
_Color ("Tint", Color) = (1,1,1,1)
_FillAmount ("Fill amount", Float) = 1
// ...
}
SubShader
{
// ...
sampler2D _MainTex;
fixed _FillAmount;
fixed4 frag(v2f IN) : SV_Target
{
half4 color = (tex2D(_MainTex, IN.texcoord) + _TextureSampleAdd) * IN.color;
color.a = color.a > _FillAmount ? 1 : 0 ;
color.a *= UnityGet2DClipping(IN.worldPosition.xy, _ClipRect);
// ...
return color;
}
ENDCG
}
}
}
Then you will be able to change the fill at runtime from a script using myMaterial.SetFloat("_FillAmount", value).
You can use the Line Renderer component to create your bar.
See documentation and examples here:
https://docs.unity3d.com/Manual/class-LineRenderer.html
https://docs.unity3d.com/352/Documentation/Components/class-LineRenderer.html
Note that the Line Renderer works in 3D space.

How to sample an SRV when MSAA x4 is enabled? (DirectX 11)

I'm learning DX11 from Introduction to 3D Game Programming with DirectX 11.
Everything is OK without MSAA. When I enable it, my .fx and C++ code no longer work correctly.
Has anyone else experienced this, and how do you deal with this situation?
Code before:
Texture2D gTexture1;

float4 BLEND_PS(VertexOut_SV pin) : SV_TARGET
{
    float4 texColor = float4(0.0f, 0.0f, 0.0f, 0.0f);
    texColor = gTexture1.Sample(SamAnisotropic, pin.Tex);
    return texColor;
}
Because I can't bind a texture created with MSAA as a plain Texture2D, I keep MSAA on all the time and switched to Texture2DMS.
Code after:
Texture2DMS<float4> gTexture1;
float4 BLEND_PS(VertexOut_SV pin) :SV_TARGET
{
float4 texColor = float4(0.0f, 0.0f, 0.0f, 0.0f);
texColor = gTexture1.Load(int2(pin.Tex.x*1400, pin.Tex.y*900), 0);
return texColor;
}
But texColor is not the pixel I want. How do I sample an SRV with MSAA?
How do I convert a UAV without MSAA into an SRV with MSAA?
And how do I enable and disable MSAA in the C++ game code together with the corresponding HLSL?
Do I have to keep a different HLSL shader for each case?
For 'standard' MSAA use, you do the following (a minimal sketch of the depth/stencil step appears below the list):
When creating your swap chain and render target view, set DXGI_SWAP_CHAIN_DESC.SampleDesc.Count or DXGI_SWAP_CHAIN_DESC1.SampleDesc.Count to 2, 4, 8, etc.
When creating your depth buffer/stencil, you need to use the same sample count for D3D11_TEXTURE2D_DESC.SampleDesc.Count.
When creating your render target view, you need to use D3D11_RTV_DIMENSION_TEXTURE2DMS (or pass nullptr for the view description so it matches the resource exactly)
When creating your depth buffer/stencil view, you need to use D3D11_DSV_DIMENSION_TEXTURE2DMS (or pass nullptr for the view description so it matches the resource exactly)
When rendering, you need to use a rasterizer state with D3D11_RASTERIZER_DESC.MultisampleEnable set to TRUE.
See also the Simple rendering tutorial for DirectX Tool Kit
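As a minimal sketch of the depth/stencil step from that checklist (illustrative names; device, width, height, and sampleCount are assumed to exist already):

// Depth/stencil buffer whose SampleDesc matches the MSAA back buffer.
D3D11_TEXTURE2D_DESC depthDesc = {};
depthDesc.Width = width;
depthDesc.Height = height;
depthDesc.MipLevels = 1;
depthDesc.ArraySize = 1;
depthDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthDesc.SampleDesc.Count = sampleCount; // must match the render target
depthDesc.Usage = D3D11_USAGE_DEFAULT;
depthDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;

Microsoft::WRL::ComPtr<ID3D11Texture2D> depthBuffer;
device->CreateTexture2D(&depthDesc, nullptr, depthBuffer.GetAddressOf());

// The view must use the MSAA dimension (or pass nullptr, as noted above).
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = depthDesc.Format;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DMS;

Microsoft::WRL::ComPtr<ID3D11DepthStencilView> dsv;
device->CreateDepthStencilView(depthBuffer.Get(), &dsvDesc, dsv.GetAddressOf());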
Sample count
Depending on the Direct3D feature level, some MSAA sample counts are required for particular render target formats. You can use CheckFormatSupport to verify that a render target format supports MSAA:
UINT formatSupport = 0;
if (FAILED(device->CheckFormatSupport(m_backBufferFormat, &formatSupport)))
{
    throw std::exception("CheckFormatSupport");
}

UINT flags = D3D11_FORMAT_SUPPORT_MULTISAMPLE_RESOLVE
    | D3D11_FORMAT_SUPPORT_MULTISAMPLE_RENDERTARGET;
if ((formatSupport & flags) != flags)
{
    // error
}
You then use CheckMultisampleQualityLevels to verify that the sample count is supported. This code finds the highest supported MSAA sample count for a particular format:
for (m_sampleCount = D3D11_MAX_MULTISAMPLE_SAMPLE_COUNT;
     m_sampleCount > 1; m_sampleCount--)
{
    UINT levels = 0;
    if (FAILED(device->CheckMultisampleQualityLevels(m_backBufferFormat,
                                                     m_sampleCount, &levels)))
        continue;

    if (levels > 0)
        break;
}

if (m_sampleCount < 2)
{
    // error
}
You can also validate the depth/stencil format you want to use supports D3D11_FORMAT_SUPPORT_DEPTH_STENCIL | D3D11_FORMAT_SUPPORT_MULTISAMPLE_RENDERTARGET.
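The same kind of check works for the depth/stencil format; for example, with a typical depth format:

UINT dsSupport = 0;
if (FAILED(device->CheckFormatSupport(DXGI_FORMAT_D24_UNORM_S8_UINT, &dsSupport)))
{
    // error
}

UINT dsFlags = D3D11_FORMAT_SUPPORT_DEPTH_STENCIL
    | D3D11_FORMAT_SUPPORT_MULTISAMPLE_RENDERTARGET;
if ((dsSupport & dsFlags) != dsFlags)
{
    // error
}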
Flip Style modes
The technique above only works for the older "bit-blt" style flip modes DXGI_SWAP_EFFECT_DISCARD or DXGI_SWAP_EFFECT_SEQUENTIAL. For UWP and DirectX 12 you are required to use DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL or DXGI_SWAP_EFFECT_FLIP_DISCARD, which will fail if you attempt to create a back buffer with a SampleCount > 1.
In this case, you create the back buffer with a SampleCount of 1 and create your own MSAA render target 2D texture. You point your render target view at the MSAA render target, and before you Present you call ResolveSubresource from the MSAA render target to the back buffer. This is exactly what DXGI did for you 'behind the scenes' with the older flip models.
For gamma-correct rendering (aka when you use a backbuffer format ending in _SRGB), the newer flip styles require that you use the non-SRGB equivalent for the backbuffer format or the swapchain create will fail. You set the SRGB format on the render target view instead.
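A sketch of that flip-model path, with illustrative names (msaaRenderTarget, msaaRTV, msaaDSV, backBuffer, and swapChain are assumed to have been created already):

// Render the scene into the explicit MSAA target...
context->OMSetRenderTargets(1, msaaRTV.GetAddressOf(), msaaDSV.Get());
// ... draw calls ...

// ...then resolve into the single-sample swap chain buffer and present.
context->ResolveSubresource(backBuffer.Get(), 0,
                            msaaRenderTarget.Get(), 0,
                            DXGI_FORMAT_B8G8R8A8_UNORM);
swapChain->Present(1, 0);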

Lights and shading with a custom fragmentShader

I'm creating a sphere and adding distortion to it; that's working fine.
When I look at the wireframe it's like this,
and with the wireframe turned off it looks like this.
As you can see there is no shading, and the distortion isn't visible when the wireframe is turned off.
What I'm looking for is what to place in my custom fragmentShader.
I used this:
// calc the dot product and clamp
// 0 -> 1 rather than -1 -> 1
vec3 light = vec3(0.5,0.2,1.0);
// ensure it's normalized
light = normalize(light);
// calculate the dot product of
// the light to the vertex normal
float dProd = max(0.0, dot(vNormal, light));
// feed into our frag colour
gl_FragColor = vec4(dProd, dProd, dProd, 1.0);
But that just creates a very ugly false light.
Any ideas anybody?
Thanks in advance,
Wezy
If you want to use Three.js' lights (and I can't think of a reason why not), you need to include the corresponding shader chunks:
Take a look at how the WebGLRenderer composes its shaders in THREE.ShaderLib.
Pick a material there that is close to what you want and copy its definition (the uniforms and both shaders) to your code, renaming it to something custom.
Delete the chunks you don't need, and add your custom code in the appropriate places.

OpenGL ES 2.x: Bind both `GL_TEXTURE_2D` and `GL_TEXTURE_CUBE_MAP` in the same texture image unit?

What happens if you bind (different textures) to both GL_TEXTURE_2D and GL_TEXTURE_CUBE_MAP in the same texture image unit?
For example, suppose I bind one texture to GL_TEXTURE0's GL_TEXTURE_2D target and another texture to the same texture unit's GL_TEXTURE_CUBE_MAP target. Can I then have two uniform variables, one a sampler2D and the other a samplerCube and set both to 0 (to refer to GL_TEXTURE0)?
I suspect the answer is "no" (or that the result is undefined) but I haven't found anything in the spec that specifically prohibits using multiple texture targets in the same texture image unit.
I haven't found anything that describes whether you can bind a 2D texture and a cube map texture in the same texture unit, but I guess this is perfectly possible. It makes sense to allow it, since all texture modification functions require you to specify the texture target to operate on anyway.
But the OpenGL ES 2 spec explicitly disallows using both at the same time in a shader, as chapter 2.10 says:
It is not allowed to have variables of different sampler types
pointing to the same texture image unit within a program object. This
situation can only be detected at the next rendering command issued,
and an INVALID_OPERATION error will then be generated.
So you cannot use both a sampler2D and a samplerCube referring to the same texture unit as a way to stretch your implementation's texture unit limits.
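To illustrate the difference, a small C-style sketch against the GLES2 headers (tex2d, texCube, and program are assumed to exist; the uniform names are made up):

#include <GLES2/gl2.h>

// Binding two different textures to the two targets of unit 0 is legal:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex2d);
glBindTexture(GL_TEXTURE_CUBE_MAP, texCube);

// But pointing a sampler2D and a samplerCube at the same unit is not:
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "uDiffuse"), 0); // sampler2D
glUniform1i(glGetUniformLocation(program, "uEnvMap"), 0);  // samplerCube

glDrawArrays(GL_TRIANGLES, 0, 3);
// glGetError() now reports GL_INVALID_OPERATION, per chapter 2.10.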
In Chrome, I get an error when trying such an operation. Note that WebGL is stricter than ES 2.0 here: a texture object is permanently tied to the first target it was bound to, so the second bindTexture call below already fails with INVALID_OPERATION (1282), leaving the cube map binding null.
// Case 1: bind the same texture to both targets; the second bind fails.
var gl = document.getElementById("canv00").getContext("webgl");
const texture = gl.createTexture()
gl.bindTexture(gl.TEXTURE_2D, texture)
gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture)
gl.getParameter(gl.TEXTURE_BINDING_2D) // texture
gl.getParameter(gl.TEXTURE_BINDING_CUBE_MAP) // null
gl.getError() // returns 1282 (INVALID_OPERATION)

// Case 2: 2D binding only; no error.
var gl = document.getElementById("canv00").getContext("webgl");
const texture = gl.createTexture()
gl.bindTexture(gl.TEXTURE_2D, texture)
// gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture)
gl.getParameter(gl.TEXTURE_BINDING_2D) // texture
gl.getParameter(gl.TEXTURE_BINDING_CUBE_MAP) // null
gl.getError() // no error

// Case 3: cube map binding only; no error.
var gl = document.getElementById("canv00").getContext("webgl");
const texture = gl.createTexture()
// gl.bindTexture(gl.TEXTURE_2D, texture)
gl.bindTexture(gl.TEXTURE_CUBE_MAP, texture)
gl.getParameter(gl.TEXTURE_BINDING_2D) // null
gl.getParameter(gl.TEXTURE_BINDING_CUBE_MAP) // texture
gl.getError() // no error
