Easier way to use Texture2D array in shader - directx-11

I want a simpler way to use a Texture2D array in a pixel shader. That is:
PS:
Texture2D map[3] : register(t0);
if (input.index == 0)
    color = map[input.index].Sample(samp, input.uv);
else if (input.index == 1)
    color = map[input.index].Sample(samp, input.uv);
else
    color = map[input.index].Sample(samp, input.uv);
But that gives me a compile error, so I use this instead:
PS:
if (input.index == 0)
    color = map[0].Sample(samp, input.uv);
else if (input.index == 1)
    color = map[1].Sample(samp, input.uv);
else
    color = map[2].Sample(samp, input.uv);
Is this the best way to use an array of Texture2D?
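One commonly suggested alternative is a Texture2DArray, which accepts a runtime slice index directly. A minimal sketch, assuming the three textures can be uploaded as slices of a single array texture on the CPU side (PSInput, samp, and the register assignments are illustrative):

// Sketch: a Texture2DArray takes the slice index at runtime
// in the third component of the sampling coordinate.
Texture2DArray maps : register(t0);
SamplerState samp : register(s0);

float4 PS(PSInput input) : SV_Target
{
    return maps.Sample(samp, float3(input.uv, input.index));
}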

Related

How to set a custom (calculated) color according to each pixel value (R,G,B) using KONVA

I am trying to convert the color of each pixel using the Konva image class.
The conversion method is as follows:
var avg = (R + G + B) / 3;
if (avg > 200) {
    pixelColorR = 255;
    pixelColorG = 255;
    pixelColorB = 255;
} else {
    pixelColorR = 200;
    pixelColorG = 200;
    pixelColorB = 200;
}
So I'd like to turn an RGB image into a two-tone grayscale image, like the following.
(before-and-after example images omitted)
But I can't figure out how to read the color of each pixel using the Konva class, or how to set custom colors.
Can anyone help me?
Thank you.
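A minimal sketch of one possible approach, assuming Konva's custom-filter API (a filter is a function that receives the cached node's ImageData; `image` here is a hypothetical, already-loaded Konva.Image):

// Sketch: a custom Konva filter applying the threshold rule above.
function twoTone(imageData) {
    var d = imageData.data; // flat RGBA byte array
    for (var i = 0; i < d.length; i += 4) {
        var avg = (d[i] + d[i + 1] + d[i + 2]) / 3;
        var v = avg > 200 ? 255 : 200;
        d[i] = d[i + 1] = d[i + 2] = v;
    }
}
image.cache();             // filters require the node to be cached
image.filters([twoTone]);
image.getLayer().draw();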

How to disable Center Color in polygon with multicolored gradient?

I am trying to build a polygon using only two colors for all vertices. But the GDI+ library automatically inserts a white center color, blending it over the whole figure. I would like to disable the center color rather than work around it with SetCenterColor() from the PathGradientBrush class; shifting the default position far away with SetCenterPoint() is very inelegant. Is that possible?
Thanks
A sample follows:
CMyGDIPlus gdi(this); // use your class instead
using namespace Gdiplus;
Graphics & graphics = gdi.GetGraphics();
graphics.SetSmoothingMode(SmoothingModeNone);

Gdiplus::Rect gRect;
graphics.GetVisibleClipBounds(&gRect);

int i;
int colorSize = 4;
GraphicsPath path;
Point arrPoint[4];
Color arrColors[4];

arrPoint[0].X = gRect.GetLeft();
arrPoint[0].Y = gRect.GetTop();
arrPoint[1].X = gRect.GetRight();
arrPoint[1].Y = gRect.GetTop() + 100;
arrPoint[2].X = gRect.GetRight();
arrPoint[2].Y = gRect.GetBottom();
arrPoint[3].X = gRect.GetLeft();
arrPoint[3].Y = gRect.GetBottom() - 100;

for(i = 0; i < colorSize; i++)
{
    if(i < 2)
        arrColors[i].SetFromCOLORREF(RGB(0, 128, 0)); // green
    else
        arrColors[i].SetFromCOLORREF(RGB(0, 0, 192)); // blue
}

path.AddLines(arrPoint, 4);
PathGradientBrush pathBrush(&path);
pathBrush.SetSurroundColors(arrColors, &colorSize);
pathBrush.SetGammaCorrection(TRUE);
graphics.FillPath(&pathBrush, &path);
You only need to calculate the color value of the center point yourself: average the surround colors channel by channel (e.g. (r1 + r2) / 2). This works better for lightening/darkening colors and creating gradients.
Refer: Algorithm for Additive Color Mixing for RGB Values
Add: pathBrush.SetCenterColor(Color(0, 64, 96)); // the average of green (0,128,0) and blue (0,0,192)
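As a more general version, here is a small sketch that averages all the surround colors (the AverageColor helper is illustrative, not part of GDI+):

// Sketch: average the surround colors channel by channel to get a
// neutral center color. AverageColor is a hypothetical helper.
Gdiplus::Color AverageColor(const Gdiplus::Color* colors, int count)
{
    int r = 0, g = 0, b = 0;
    for(int i = 0; i < count; i++)
    {
        r += colors[i].GetR();
        g += colors[i].GetG();
        b += colors[i].GetB();
    }
    return Gdiplus::Color(BYTE(r / count), BYTE(g / count), BYTE(b / count));
}

// usage:
pathBrush.SetCenterColor(AverageColor(arrColors, colorSize));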

Showing an image and replacing it with another image if something happens

I use color-tracking code in Processing.
What I want (example):
If red is detected, show image 1
If green is detected, show image 2
If blue is detected, show image 3
The problem: once the last color has been detected and the last image is shown, tracking the first color again does not bring the first image to the front (I can't see it).
The whole code:
import processing.video.*;
//import hypermedia.net.*;

PImage img;
PImage img2;
PImage img3;
Capture video;

final int TOLERANCE = 20;

float XRc = 0;  // XY coordinate of the center of the first target
float YRc = 0;
float XRh = 0;  // XY coordinate of the center of the second target
float YRh = 0;
float XRc2 = 0; // XY coordinate of the center of the third target
float YRc2 = 0;
float XRh2 = 0; // XY coordinate of the center of the fourth target
float YRh2 = 0;

int ii = 0; // mouse click counter

color trackColor;  // the first color is the center of the robot
color trackColor2; // the second color is the head of the robot
color trackColor3; // the first color is the center of robot 2
color trackColor4; // the first color is the center of robot 2

void setup() {
  img = loadImage("IMG_4700.JPG");
  img2 = loadImage("2.JPG");
  img3 = loadImage("3.JPG");
  size(800, 800);
  video = new Capture(this, 640, 480);
  video.start();
  trackColor = color(94, 164, 126);
  trackColor2 = color(60, 110, 194);
  trackColor3 = color(197, 76, 64);
  trackColor4 = color(255, 0, 0);
  smooth();
}

void draw() {
  background(0);
  if (video.available()) {
    video.read();
  }
  video.loadPixels();
  image(video, 0, 0);

  float r2 = red(trackColor);
  float g2 = green(trackColor);
  float b2 = blue(trackColor);
  float r3 = red(trackColor2);
  float g3 = green(trackColor2);
  float b3 = blue(trackColor2);
  float r4 = red(trackColor3);
  float g4 = green(trackColor3);
  float b4 = blue(trackColor3);
  float r5 = red(trackColor4);
  float g5 = green(trackColor4);
  float b5 = blue(trackColor4);

  int somme_x = 0, somme_y = 0; // for computing the barycenters
  int compteur = 0;
  int somme_x2 = 0, somme_y2 = 0;
  int compteur2 = 0;
  int somme_x3 = 0, somme_y3 = 0;
  int compteur3 = 0;
  int somme_x4 = 0, somme_y4 = 0;
  int compteur4 = 0;

  for (int x = 0; x < video.width; x++) {
    for (int y = 0; y < video.height; y++) {
      int currentLoc = x + y * video.width;
      color currentColor = video.pixels[currentLoc];
      float r1 = red(currentColor);
      float g1 = green(currentColor);
      float b1 = blue(currentColor);

      if (dist(r1, g1, b1, r2, g2, b2) < TOLERANCE) {
        somme_x += x;
        somme_y += y;
        compteur++;
      } else if (compteur > 0) {
        XRc = somme_x / compteur;
        YRc = somme_y / compteur;
      }

      if (dist(r1, g1, b1, r3, g3, b3) < TOLERANCE) {
        somme_x2 += x;
        somme_y2 += y;
        compteur2++;
      } else if (compteur2 > 0) {
        XRh = somme_x2 / compteur2;
        YRh = somme_y2 / compteur2;
      }

      if (dist(r1, g1, b1, r4, g4, b4) < TOLERANCE) {
        somme_x3 += x;
        somme_y3 += y;
        compteur3++;
      } else if (compteur3 > 0) {
        XRc2 = somme_x3 / compteur3;
        YRc2 = somme_y3 / compteur3;
      }

      if (dist(r1, g1, b1, r5, g5, b5) < TOLERANCE) {
        somme_x4 += x;
        somme_y4 += y;
        compteur4++;
      } else if (compteur4 > 0) {
        XRh2 = somme_x4 / compteur4;
        YRh2 = somme_y4 / compteur4;
      }
    }
  }

  // track the color and show images
  boolean c1 = false;
  boolean c2 = false;
  boolean c3 = false;

  if (XRc != 0 || YRc != 0) { // color green detected
    c1 = true;
    c2 = false;
    c3 = false;
  }
  if (XRh != 0 || YRh != 0) { // color blue detected
    c2 = true;
    c1 = false;
    c3 = false;
  }
  if (XRc2 != 0 || YRc2 != 0) { // color red detected
    c3 = true;
    c1 = false;
    c2 = false;
  }

  if (c1 == true) {
    image(img, 0, 0); // show image 1
  } else if (c2 == true) {
    image(img2, 0, 0); // show image 2
  } else if (c3 == true) {
    image(img3, 0, 0); // show image 3
  }
}
The important snippet:
// detect color and show images
boolean c1 = false;
boolean c2 = false;
boolean c3 = false;

if (XRc != 0 || YRc != 0) { // color green detected
  c1 = true;
  c2 = false;
  c3 = false;
}
if (XRh != 0 || YRh != 0) { // color blue detected
  c2 = true;
  c1 = false;
  c3 = false;
}
if (XRc2 != 0 || YRc2 != 0) { // color red detected
  c3 = true;
  c1 = false;
  c2 = false;
}

if (c1 == true) {
  image(img, 0, 0); // show image 1
} else if (c2 == true) {
  image(img2, 0, 0); // show image 2
} else if (c3 == true) {
  image(img3, 0, 0); // show image 3
}
Screenshots:
First object is tracked and image is shown
Second object is tracked and image is shown
Third object is tracked and image is shown
My Problem:
(the first object should be tracked and the first image should be shown)
There are a few things that could be improved.
In terms of efficiency, a couple of minor suggestions:
You could pre-compute the RGB components of the colours you want to track once in setup(), rather than many times per second in draw() (e.g. float r2 = red(trackColor); etc.).
You could use a flat 1D loop over video.pixels[] rather than a nested loop. One minor disadvantage is that you'd need to compute the x,y position from the pixel index; since you only need to display an image and it doesn't seem to matter where, you might not even need x,y (e.g. for(int currentLoc = 0; currentLoc < video.pixels.length; currentLoc++), as sketched below).
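A minimal sketch of that flat loop (r2, g2, b2, compteur and TOLERANCE refer to the names in the question's code):

// Sketch: single loop over all pixels; x,y are only computed if needed.
for (int currentLoc = 0; currentLoc < video.pixels.length; currentLoc++) {
  color currentColor = video.pixels[currentLoc];
  if (dist(red(currentColor), green(currentColor), blue(currentColor), r2, g2, b2) < TOLERANCE) {
    compteur++;
    // int x = currentLoc % video.width; // only if the position matters
    // int y = currentLoc / video.width;
  }
}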
In terms of the algorithm itself:
You are using a single threshold value (TOLERANCE). This cuts off everything below that value, which is OK, but it doesn't exclude a whole range of other colours that might mess with your counters. I recommend using a range (e.g. MIN_TOLERANCE, MAX_TOLERANCE).
You are using the R,G,B colour space. R,G,B colours don't mix together as we'd expect. A more perceptual colour space behaves as you'd expect (e.g. orange will be closer to red and to yellow). For that you would need to convert from RGB to CIE XYZ, then to CIE L*a*b*, compute the euclidean distance between two colours there, then convert the result back to RGB if you need to display it. You can find an example here. It's in openFrameworks (C++), but you should be able to see the similarities to Processing and port it.
There is another option: the HSB colour space. More on that below.
You could draw some visualisation of how your code is segmenting the image. That makes it much faster to see which values work better and which don't.
I recommend trying the OpenCV for Processing library, which uses a more modern OpenCV under the hood and provides more functionality and great examples. One of them is particularly useful for you: HueRangeSelection.
Give it a go. Notice you can move the mouse to shift the range, and if you hold a key pressed you can widen the range. For example, here's a quick demo of it with your images (the HSB range-threshold result is displayed smaller in the bottom-right corner):
From my own experience, I'd recommend not using shiny/reflective materials (e.g. the coke can). You can see in the image above that the segmentation isn't as good as on the green and blue objects with flatter colours. Because the can is reflective, it will appear to have different colours not only with global lighting changes but also with its position/rotation and with objects close to it. It's a pain to cater to all of these.
Also, to take the HueRange example further you can (see the sketch after this list):
Apply a morphological filter (for example erode(), then dilate()) to remove some of the noise (smaller white pixel patches). At this point you can count the number of white pixels per colour range and decide which image to display.
Find contours on the filtered image, which you can use to determine the x, y, width, height of the region that falls within the hue range you want to track.
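A rough sketch of those steps, from memory of the gab.opencv API (the method names and the low/high hue bounds are assumptions; check the library's bundled HueRangeSelection example before relying on them):

// Sketch: hue-range segmentation plus morphological cleanup.
// low/high are hypothetical hue bounds for the colour you track.
opencv.useColor(HSB);
opencv.setGray(opencv.getH().clone()); // work on the hue channel
opencv.inRange(low, high);             // keep pixels whose hue is in [low, high]
opencv.erode();                        // remove small noisy patches...
opencv.dilate();                       // ...then restore the surviving blobs
ArrayList<Contour> contours = opencv.findContours(true, true);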
Good luck and most importantly have fun!
Hummm... Without running the code, I'd bet the problem is that you rely on the coordinates (XRc and its siblings) being zero to choose which image to use. They are all initialized to 0, so the first run goes fine, but... you never reset them to zero, do you? So after they have all been changed once by detecting the three colors, your test becomes useless. Perhaps you can reset them all to zero whenever a color is detected.
And maybe you don't need the booleans at all...
What do you think of this?
PSEUDO
// global
PImage[] imgs = new PImage[3];
int imageToDisplay = 0;

// ...all the other stuff...

if (XRc != 0 || YRc != 0) { // color green detected
  // not sure this will work, but the idea is something like this:
  XRh = YRh = XRc2 = YRc2 = 0;
  imageToDisplay = 0;
}
if (XRh != 0 || YRh != 0) { // color blue detected
  XRc = YRc = XRc2 = YRc2 = 0;
  imageToDisplay = 1;
}
if (XRc2 != 0 || YRc2 != 0) { // color red detected
  XRh = YRh = XRc = YRc = 0;
  imageToDisplay = 2;
}

// at the appropriate time...
image(imgs[imageToDisplay], x, y);

How to keep calling a function till it returns a different output than before? [duplicate]

This question already has answers here:
Randomly choosing an item from a Swift array without repeating
(6 answers)
Closed 6 years ago.
I have a function that randomly outputs an SKColor.
func getRandomColor() -> SKColor {
    let randomaval = arc4random_uniform(4)
    var color = SKColor()
    switch randomaval {
    case 0:
        color = redColor
    case 1:
        color = greenColor
    case 2:
        color = blueColor
    case 3:
        color = yellowColor
    default:
        break
    }
    return color
}
When two bodies collide I call this function to change colors:
aball.color = getRandomColor()
if aball.color == redColor && getRandomColor() == redColor {
    aball.color = getRandomColor() // to set the color to something other than red
    aball.colorBlendFactor = 1.0
}
What I want: when I say aball.color = getRandomColor(), if getRandomColor() returns redColor again, it needs to run the if statement again until the function returns something other than redColor. Most of the time, when my if condition is true, it returns redColor again, and I can't understand how to avoid that. I basically want a different color to be returned every time getRandomColor is called. How do I accomplish that?
How about repeatedly calling your getRandomColor() function until the result is something other than the ball's current color?
var newColor = aball.color // declared outside the loop so it is visible in the condition and after
repeat {
    newColor = getRandomColor()
} while newColor == aball.color // re-roll while it matches the current color
aball.color = newColor
Alternatively, you could re-write getRandomColor to accept a parameter of a color it shouldn't return and then call your amended function with aball.color:
func getRandomColor(notColor: SKColor) -> SKColor {
    var color = SKColor()
    repeat {
        let randomaval = arc4random_uniform(4)
        switch randomaval {
        case 0:
            color = redColor
        case 1:
            color = greenColor
        case 2:
            color = blueColor
        case 3:
            color = yellowColor
        default:
            break
        }
    } while color == notColor // keep trying until we get a different color
    return color
}
Then, when you want to change the color of the ball:
aball.color = getRandomColor(notColor: aball.color)
Another approach is to have getRandomColor track its own last result. In your initialisation declare:
var lastRandomColor = SKColor()
then
func getRandomColor() -> SKColor {
    var color = SKColor()
    repeat {
        let randomaval = arc4random_uniform(4)
        switch randomaval {
        case 0:
            color = redColor
        case 1:
            color = greenColor
        case 2:
            color = blueColor
        case 3:
            color = yellowColor
        default:
            break
        }
    } while color == lastRandomColor // re-roll until it differs from last time
    lastRandomColor = color
    return color
}
then just use:
aball.color = getRandomColor()
I prefer the 2nd approach, where you pass a color that you don't want returned, as it gives you more control: at certain times you might want to exclude colors other than the ball's own, e.g. not blue if it's in the sky. You could even pass an array of colours that shouldn't be returned.
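A quick sketch of that last idea (assuming redColor etc. are defined elsewhere, as in the question). Filtering the palette first also avoids the retry loop entirely:

func getRandomColor(excluding excluded: [SKColor]) -> SKColor {
    let palette = [redColor, greenColor, blueColor, yellowColor]
    // Drop every excluded color, then pick uniformly from what remains.
    // Assumes at least one color survives the filter.
    let allowed = palette.filter { !excluded.contains($0) }
    let index = Int(arc4random_uniform(UInt32(allowed.count)))
    return allowed[index]
}

// usage:
aball.color = getRandomColor(excluding: [aball.color])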

Path Tracing Shadowing Error

I really don't know what else to do to fix this problem. I have written a path tracer using explicit light sampling in C++, and I keep getting these weird, really black shadows which I know are wrong. I have done everything I can to fix it, but I still keep getting them, even at higher sample counts. What am I doing wrong? Below is an image of the scene.
And the main Radiance code:
RGB Radiance(Ray PixRay, std::vector<Primitive*> sceneObjects, int depth,
             std::vector<AreaLight> AreaLights, unsigned short *XI, int E)
{
    int MaxDepth = 10;
    if(depth > MaxDepth) return RGB();

    double nearest_t = INFINITY;
    Primitive* nearestObject = NULL;
    for(int i = 0; i < sceneObjects.size(); i++)
    {
        double root = sceneObjects[i]->intersect(PixRay);
        if(root > 0)
        {
            if(root < nearest_t)
            {
                nearest_t = root;
                nearestObject = sceneObjects[i];
            }
        }
    }

    RGB EstimatedRadiance;
    if(nearestObject)
    {
        EstimatedRadiance = nearestObject->getEmission() * E;
        Point intersectPoint = nearestObject->intersectPoint(PixRay, nearest_t);
        Vector intersectNormal = nearestObject->surfacePointNormal(intersectPoint).Normalize();

        // Diffuse
        if(nearestObject->getBRDF().Type == 1)
        {
            for(int x = 0; x < AreaLights.size(); x++)
            {
                Point pointOnTriangle = RandomPointOnTriangle(AreaLights[x].shape, XI);
                Vector pointOnTriangleNormal = AreaLights[x].shape.surfacePointNormal(pointOnTriangle).Normalize();
                Vector LightDistance = (pointOnTriangle - intersectPoint).Normalize();
                // Geometric term
                RGB Geometric_Term = GeometricTerm(intersectPoint, pointOnTriangle, sceneObjects);
                // Lambertian BRDF
                RGB LambertianBRDF = nearestObject->getColor() * (1. / M_PI);
                // Emitted light power
                RGB Emission = AreaLights[x].emission;
                double MagnitudeOfXandY = (pointOnTriangle - intersectPoint).Magnitude() *
                                          (pointOnTriangle - intersectPoint).Magnitude();
                RGB DirectLight = Emission * LambertianBRDF * Dot(intersectNormal, -LightDistance) *
                                  Dot(pointOnTriangleNormal, LightDistance) * (1. / MagnitudeOfXandY) *
                                  AreaLights[x].shape.Area() * Geometric_Term;
                EstimatedRadiance = EstimatedRadiance + DirectLight;
            }

            Vector diffDir = CosWeightedRandHemiDirection(intersectNormal, XI);
            Ray diffRay = Ray(intersectPoint, diffDir);
            EstimatedRadiance = EstimatedRadiance +
                (Radiance(diffRay, sceneObjects, depth + 1, AreaLights, XI, 0) *
                 nearestObject->getColor() * (1. / M_PI) * M_PI);
        }
        // Mirror
        else if(nearestObject->getBRDF().Type == 2)
        {
            Vector reflDir = PixRay.d - intersectNormal * 2 * Dot(intersectNormal, PixRay.d);
            Ray reflRay = Ray(intersectPoint, reflDir);
            return nearestObject->getColor() * Radiance(reflRay, sceneObjects, depth + 1, AreaLights, XI, 0);
        }
    }
    return EstimatedRadiance;
}
I haven't debugged your code, so there may be any number of bugs of course, but I can give you some tips. First, go look at smallpt and see what it does that you don't. It's tiny but still quite easy to read.
From the look of it, there are issues with the sampling and/or gamma correction. The easiest one is gamma: when converting RGB intensity in the range 0..1 to RGB in the range 0..255, remember to always gamma-correct. Use a gamma of 2.2:
R = r^(1.0/gamma)
G = g^(1.0/gamma)
B = b^(1.0/gamma)
Having the wrong gamma will make any path traced image look bad.
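A minimal sketch of that conversion (the function name is illustrative):

#include <algorithm>
#include <cmath>

// Sketch: map a linear radiance value in [0,1] to a display byte in
// [0,255], clamping first and applying gamma 2.2.
inline int ToDisplayByte(double linear)
{
    const double gamma = 2.2;
    double clamped = std::min(std::max(linear, 0.0), 1.0);
    return int(std::pow(clamped, 1.0 / gamma) * 255.0 + 0.5);
}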
Second: sampling. It's not obvious from the code how the sampling is weighted. I'm only familiar with path tracing using Russian roulette sampling. With RR the radiance estimator basically works like so:
if (depth > MaxDepth)
    return RGB();

RGB color = mat.Emission;

// Russian roulette: continue the path with probability pContinue and
// weight surviving rays by 1/pContinue to keep the estimate unbiased.
float pContinue = material.Albedo();
if (Rand.Next() > pContinue)
    return color;
float survival = 1.0f / pContinue;

color += DirectIllumination(sceneIntersection);
color += Radiance(sceneIntersection, depth + 1) * survival;
RR is basically a way of terminating rays at random while still maintaining an unbiased estimate of the true radiance. Since it adds a weight to the indirect term, and the shadow and the bottoms of the spheres are only indirectly lit, I'd suspect that has something to do with it (if it isn't just the gamma).
