Brightness Overlay Changing instantly from min to max alpha instead of Incrementing - unityscript

I'm trying to make a brightness overlay that the user can change with a slider. I'm using a panel (its color is black) for the overlay and editing its alpha with the slider. The slider has a min value of 0 and a max of 150, but the slider only has to be at 1 or greater for the alpha of the overlay to be at full. When I print the alpha to the console it reports 1, yet the overlay's alpha is already at its maximum (check the gif if there's confusion). How do I set the alpha of an overlay through script using a slider?
Scripts
Gif of what is happening
Research:
Brightness slider?
Changing Alpha Material
Changing Current Alpha

Looked at your code and found the problem.
Things to understand:
Color.a/Alpha minimum is 0.0f.
Color.a/Alpha maximum is 1.0f.
Color.a/Alpha is a float, not an int.
So change your public void ChangeBrightness(int brightness) to public void ChangeBrightness(float brightness).
On your Slider, make sure that Min Value = 0 and Max Value = 1, and that Whole Numbers is not selected.
Right now the value from the slider is being truncated to a whole number by the int parameter in your function, so any slider value of 1 or more pushes the alpha straight to its maximum. That's why that weird problem is happening.

Related

How to convert a screen coordinate into a translation for a projection matrix?

(More info at the end.)
I am trying to render a small picture-in-picture display over my scene. The PiP is just a smaller texture, but it is intended to reveal secret objects in the scene when it is placed over them.
To do this, I want to render my scene, then render the SAME scene on the smaller texture, but with the exact same positioning as the main scene. The intended result would be something like this:
My problem is... I cannot get the scene on the smaller texture to match up 1:1. I keep trying various kludges, but ultimately I suspect that I need to do something to the projection matrix to pan it over to the location of the frame. I can get it to zoom correctly...just can't get it to pan.
Can anyone suggest what I need to do to my projection matrix to render my scene 1:1 (but panned by x,y) onto a smaller texture?
The data I have:
Resolution of the full-screen framebuffer
Resolution of the smaller texture
XY coordinate where I want to draw the smaller texture as an overlay sprite
The world/view/projection matrices from the original full-screen scene
The viewport from the original full-screen scene
(Edit)
Here is the function I use to produce the 3D camera:
void Make3DCamera(Vector theCameraPos, Vector theLookAt, Vector theUpVector, float theFOV, Point theRez, Matrix& theViewMatrix,Matrix& theProjectionMatrix)
{
    Matrix aCombinedViewMatrix;
    Matrix aViewMatrix;
    aCombinedViewMatrix.Scale(1, 1, -1);
    theCameraPos.mZ *= -1;
    theLookAt.mZ *= -1;
    theUpVector.mZ *= -1;
    aCombinedViewMatrix.Translate(-theCameraPos);
    Vector aLookAtVector = theLookAt - theCameraPos;
    Vector aSideVector = theUpVector.Cross(aLookAtVector);
    theUpVector = aLookAtVector.Cross(aSideVector);
    aLookAtVector.Normalize();
    aSideVector.Normalize();
    theUpVector.Normalize();
    aViewMatrix.mData.m[0][0] = -aSideVector.mX;
    aViewMatrix.mData.m[1][0] = -aSideVector.mY;
    aViewMatrix.mData.m[2][0] = -aSideVector.mZ;
    aViewMatrix.mData.m[3][0] = 0;
    aViewMatrix.mData.m[0][1] = -theUpVector.mX;
    aViewMatrix.mData.m[1][1] = -theUpVector.mY;
    aViewMatrix.mData.m[2][1] = -theUpVector.mZ;
    aViewMatrix.mData.m[3][1] = 0;
    aViewMatrix.mData.m[0][2] = aLookAtVector.mX;
    aViewMatrix.mData.m[1][2] = aLookAtVector.mY;
    aViewMatrix.mData.m[2][2] = aLookAtVector.mZ;
    aViewMatrix.mData.m[3][2] = 0;
    aViewMatrix.mData.m[0][3] = 0;
    aViewMatrix.mData.m[1][3] = 0;
    aViewMatrix.mData.m[2][3] = 0;
    aViewMatrix.mData.m[3][3] = 1;
    if (gG.mRenderToSprite) aViewMatrix.Scale(1, -1, 1);
    aCombinedViewMatrix *= aViewMatrix;

    // Projection Matrix
    float aAspect = (float) theRez.mX / (float) theRez.mY;
    float aNear = gG.mZRange.mData1;
    float aFar = gG.mZRange.mData2;
    float aWidth = gMath.Cos(theFOV / 2.0f);
    float aHeight = gMath.Cos(theFOV / 2.0f);
    if (aAspect > 1.0) aWidth /= aAspect;
    else aHeight *= aAspect;
    float s = gMath.Sin(theFOV / 2.0f);
    float d = 1.0f - aNear / aFar;
    Matrix aPerspectiveMatrix;
    aPerspectiveMatrix.mData.m[0][0] = aWidth;
    aPerspectiveMatrix.mData.m[1][0] = 0;
    aPerspectiveMatrix.mData.m[2][0] = gG.m3DOffset.mX / theRez.mX / 2;
    aPerspectiveMatrix.mData.m[3][0] = 0;
    aPerspectiveMatrix.mData.m[0][1] = 0;
    aPerspectiveMatrix.mData.m[1][1] = aHeight;
    aPerspectiveMatrix.mData.m[2][1] = gG.m3DOffset.mY / theRez.mY / 2;
    aPerspectiveMatrix.mData.m[3][1] = 0;
    aPerspectiveMatrix.mData.m[0][2] = 0;
    aPerspectiveMatrix.mData.m[1][2] = 0;
    aPerspectiveMatrix.mData.m[2][2] = s / d;
    aPerspectiveMatrix.mData.m[3][2] = -(s * aNear / d);
    aPerspectiveMatrix.mData.m[0][3] = 0;
    aPerspectiveMatrix.mData.m[1][3] = 0;
    aPerspectiveMatrix.mData.m[2][3] = s;
    aPerspectiveMatrix.mData.m[3][3] = 0;
    theViewMatrix = aCombinedViewMatrix;
    theProjectionMatrix = aPerspectiveMatrix;
}
Edit to add more information:
Just playing and tweaking numbers, I have come to a "close" result. However, the "close" result requires multiplication by some kludge numbers that I don't understand.
Here's what I'm doing to the perspective matrix to produce my close result:
//Before calling Make3DCamera, adjusting FOV:
aFOV*=smallerTexture.HeightF()/normalRenderSize.HeightF(); // Zoom it
aFOV*=1.02f; // <- WTH is this?
//Then, to pan the camera over to the x/y position I want, I do:
Matrix aPM=GetCurrentProjectionMatrix();
float aX=(screenX-normalRenderSize.WidthF()/2.0f)/2.0f;
float aY=(screenY-normalRenderSize.HeightF()/2.0f)/2.0f;
aX*=1.07f; // <- WTH is this?
aY*=1.07f; // <- WTH is this?
aPM.mData.m[2][0]=-aX/normalRenderSize.HeightF();
aPM.mData.m[2][1]=-aY/normalRenderSize.HeightF();
SetCurrentProjectionMatrix(aPM);
When I do this, my new picture is VERY close... but not exactly perfect-- the small render tends to drift away from "center" the further the "magic window" is from the center. Without the kludge number, the drift away from center with the magic window is very pronounced.
The kludge numbers 1.02f for zoom and 1.07 for pan reduce the inaccuracies and drift to a fraction of a pixel, but those numbers must be a ratio from somewhere, right? They work at ANY RESOLUTION, though -- so I have a 1280x800 screen and a 256x256 magic window texture... if I change the screen to 1024x768, it all still works.
Where the heck are these numbers coming from?
If you don't care about sub-optimal performance (i.e., drawing the whole scene twice) and if you don't need the smaller scene in a texture, an easy way to obtain the overlay with pixel perfect precision is:
Set up main scene (model/view/projection matrices, etc.) and draw it as you are now.
Use glScissor to set the rectangle for the overlay. glScissor takes the screen-space x, y, width, and height and discards anything outside that rectangle. It looks like you have those four data items already, so you should be good to go.
Call glEnable(GL_SCISSOR_TEST) to actually turn on the test.
Set the shader variables (if you're using shaders) for drawing the greyscale scene/hidden objects/etc. You still use the same view and projection matrices that you used for the main scene.
Draw the greyscale scene/hidden objects/etc.
Call glDisable(GL_SCISSOR_TEST) so you won't be scissoring at the start of the next frame.
Draw the red overlay border, if desired.
Now, if you actually need the overlay in its own texture for some reason, this probably won't be adequate...it could be made to work either with framebuffer objects and/or pixel readback, but this would be less efficient.
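Put together, the steps above might look like the following sketch in plain OpenGL (drawScene, drawHiddenObjects, drawOverlayBorder and the overlayX/overlayY/overlayW/overlayH rectangle are placeholders for your own code, not anything from the original post):
void renderFrame()
{
    // 1. render the normal full-screen scene with your usual matrices
    drawScene();

    // 2./3. restrict all further drawing to the overlay rectangle
    //       (window coordinates, origin at the bottom-left)
    glScissor(overlayX, overlayY, overlayW, overlayH);
    glEnable(GL_SCISSOR_TEST);

    // 4./5. same view/projection matrices, different shader state
    drawHiddenObjects();

    // 6. stop scissoring so the next frame starts clean
    glDisable(GL_SCISSOR_TEST);

    // 7. optional red border around the overlay
    drawOverlayBorder();
}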
Most people completely overcomplicate such issues. There is absolutely no magic to applying transformations after applying the projection matrix.
If you have a projection matrix P (and I'm assuming default OpenGL conventions here where P is constructed in a way that the vector is post-multiplied to the matrix, so for an eye space vector v_eye, we get v_clip = P * v_eye), you can simply pre-multiply some other translate and scale transforms to cut out any region of interest.
Assume you have a viewport of size w_view * h_view pixels, and you want to find a projection matrix which renders only a tile w_tile * h_tile pixels, beginning at pixel location (x_tile, y_tile) (again, assuming default GL conventions here, window space origin is bottom left, so y_tile is measured from the bottom). Also note that the _tile coordinates are to be interpreted relative to the viewport; in the typical case, that would start at (0,0) and have the size of your full framebuffer, but this is by no means required nor assumed here.
Since after applying the projection matrix we are in clip space, we need to transform our coordinates from window space pixels to clip space. Note that clip space is a 4D homogeneous space, but we can use any w value we like (except 0) to represent any point (as a point in the 3D space we care about forms a line in the 4D space we work in), so let's just use w=1 for simplicity's sake.
The view volume in clip space is denoted by the [-w,w] range, so in the w=1 hyperplane, it is [-1,1]. Converting our tile into this space yields:
x_clip = 2 * (x_tile / w_view) -1
y_clip = 2 * (y_tile / h_view) -1
w_clip = 2 * (w_tile / w_view)
h_clip = 2 * (h_tile / h_view)
(w_clip and h_clip are sizes, i.e. differences of two converted coordinates, so the -1 offsets cancel out.)
We now just need to translate the objects such that the center of the tile is moved to the center of the view volume, which by definition is the origin, and scale the w_clip * h_clip sized region to the full [-1,1] extent in each dimension.
That means:
T = translate(-(x_clip + 0.5*w_clip), -(y_clip + 0.5 *h_clip), 0)
S = scale(2.0/w_clip, 2.0/h_clip, 1.0)
We can now create the modified projection matrix P' as P' = S * T * P, and that's all there is. Rendering with P' instead of P will render exactly the region of your tile to whatever viewport you are using, so for it to be pixel-exact with respect to your original viewport, you must now render with a viewport which is also w_tile * h_tile pixels big.
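For illustration only, here is a small sketch of that construction. It assumes the GLM library for the matrix math (the answer itself does not require it; any 4x4 matrix type works the same way), with the same names as above:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build P' = S * T * P for the tile (x_tile, y_tile, w_tile, h_tile),
// given the original projection P and the viewport size w_view x h_view.
glm::mat4 tileProjection(const glm::mat4 &P,
                         float x_tile, float y_tile,
                         float w_tile, float h_tile,
                         float w_view, float h_view)
{
    // window-space tile -> clip space on the w = 1 hyperplane
    float x_clip = 2.0f * (x_tile / w_view) - 1.0f;
    float y_clip = 2.0f * (y_tile / h_view) - 1.0f;
    float w_clip = 2.0f * (w_tile / w_view);
    float h_clip = 2.0f * (h_tile / h_view);

    // move the tile's center to the origin...
    glm::mat4 T = glm::translate(glm::mat4(1.0f),
        glm::vec3(-(x_clip + 0.5f * w_clip), -(y_clip + 0.5f * h_clip), 0.0f));
    // ...and scale the tile up to the full [-1, 1] range
    glm::mat4 S = glm::scale(glm::mat4(1.0f),
        glm::vec3(2.0f / w_clip, 2.0f / h_clip, 1.0f));

    return S * T * P;
}
Render with the returned matrix into a w_tile x h_tile viewport (e.g. glViewport(0, 0, w_tile, h_tile)) to get the pixel-exact tile.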
Note that there is also another approach: the viewport is not clamped against the framebuffer you're rendering to, and it is actually valid to provide negative values for x and y. If your framebuffer for rendering your tile into is exactly w_tile * h_tile pixels, you could simply set glViewport(-x_tile, -y_tile, w_view, h_view) and render with the unmodified projection matrix P instead.

How can I solve for current opacity in this animation?

I'm trying to do a simple fade in/out animation in Lua.
I feel like these variables should be enough to solve for the alpha/opacity I want to set the box to every frame, but I'm having a lot of trouble with the fade out, since alpha = targetAlpha * animationPos always returns 0 when the target alpha is 0.
All of these variables are decimal values between 0-1, representing alpha or %time completed.
targetAlpha - The alpha value at the end of animation.
initialAlpha - The alpha the box started at when the animation initialized.
animationPos - The current position (%time completed) of the animation
currentAlpha - Current alpha of the box.
Maybe I'm just super fried today, but I've been trying what feels like a billion combinations of these vars to find the equation that works, with no luck.
Any help is appreciated!
What you want is a linear interpolation, which takes two values a and b, and an interpolation value f between 0 and 1.
function lerp(a, b, f)
    return a * (1 - f) + b * f
end
And now you can just interpolate between the initial and target alpha using your current animation progress:
alpha = lerp(initialAlpha, targetAlpha, animationPos)
For your fade out, targetAlpha is 0, so this reduces to initialAlpha * (1 - animationPos); multiplying by targetAlpha alone always gives 0, which is exactly the problem you were running into.

Calculate source RGBA given two samples composited over black and white backgrounds

Explanation
I have a semi-transparent color of unknown value.
I have a sample of this unknown color composited over a black background and another sample over a white background.
How do I find the RGBA value of the unknown color?
Example
Note: RGB values of composites are calculated using formulas from the Wikipedia article on alpha compositing
Composite over black:
rgb(103.5, 32.25, 169.5)
Composite over white:
rgb(167.25, 96, 233.25)
Calculated value of unknown color will be:
rgba(138, 43, 226, 0.75)
What I've Read
Manually alpha blending an RGBA pixel with an RGB pixel
Calculate source RGBA value from overlay
It took some experimentation, but I think I figured it out.
Subtracting any one of the color components of the black composite from the same component of the white composite gives you the complement of the original color's alpha (scaled to 255), e.g.:
A_original = 1 - ((R_white_composite - R_black_composite) / 255) // in %, 0.0 to 1.0
It should yield the same value whether you use the R, G, or B component. Now that you have the original alpha, recovering the original components is as easy as:
R_original = R_black_composite / A_original
G_original = G_black_composite / A_original
B_original = B_black_composite / A_original
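As a quick sanity check, here is a small sketch of those formulas in C++ (the struct and function names are made up; the fully transparent case A_original = 0 is left unhandled, since the source color is unrecoverable there):
#include <cstdio>

struct RGBA { double r, g, b, a; };

// Recover the original color from the same pixel composited over black
// (rB, gB, bB) and over white (rW, gW, bW), all channels in 0..255.
RGBA recoverSource(double rB, double gB, double bB,
                   double rW, double gW, double bW)
{
    RGBA out;
    // The white composite adds 255 * (1 - A) to every channel,
    // so any channel difference yields the alpha.
    out.a = 1.0 - (rW - rB) / 255.0;
    // Over black, each channel is simply original * alpha.
    out.r = rB / out.a;
    out.g = gB / out.a;
    out.b = bB / out.a;
    return out;
}

int main()
{
    // Values from the example above; expect rgba(138, 43, 226, 0.75).
    RGBA c = recoverSource(103.5, 32.25, 169.5, 167.25, 96.0, 233.25);
    std::printf("rgba(%.0f, %.0f, %.0f, %.2f)\n", c.r, c.g, c.b, c.a);
    return 0;
}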

Color tint and temperature

I have found a lot of topics on color tint and temperature, but so far I have not seen any definitive solution, which is the reason I am creating this post. My apologies for that.
I am interested in adjusting color temperature and tint in images from RGB values, somewhat similar to the iPhoto application on iOS, where they can be adjusted with a slider bar from left to right.
From what I have found, temperature and tint are orthogonal properties, where temperature is adjusted along a blue (left; cool colors) to yellow (right; warm colors) axis and tint along a green (left) to magenta (right) axis.
How do I adjust them using formulas on RGB values, i.e., what is the underlying implementation of the color temperature and tint slider bars?
I can convert to HSV space and rotate the hue channel towards those (blue, yellow, green, magenta) angles, but how do I do that in a systematic fashion, similar to the slider bar implementation, changing gradually from a low level (middle of the slider) to a high level (left/right ends of the slider)?
Thanks!
You should try using HSL instead of HSV. HSL separates saturation from the hue, and luminosity has a very definite range when it comes to mathematical calculation.
In HSL, to add tint you move the L factor between 50 and 100, and to add shade you vary the L factor between 0 and 50. Saturation in HSL also controls the tone directly, unlike in HSV.
For temperature you have to devise your own strategy for shifting the color between red and blue, but one golden hint I can give you is that every pure RGB hue has one of its three components at zero, a second fixed at 255, and the third varying by a factor of 255/60.
Hope this helps.
Whereas color temperature is a physical quantity, its expression in terms of RGB values is not trivial. If all you need is a pair of orthogonal axes in the RGB colorspace for the visual adjustment of white balance, they can be defined with relative ease in such a way as to resemble the true color temperature and its derivative, the tint.
Let us name our RGB temperature BY—for the balance between blue and yellow, and our RGB tint GR—for the balance between green and red. Now, these functions must satisfy the following obvious requirements:
They shall not depend on brightness, i.e. they shall be invariant to multiplication of all the RGB components by the same factor:
BY(r,g,b) = BY(kr, kg, kb),
GR(r,g,b) = GR(kr, kg, kb).
They shall be zero for neutral gray:
BY(r,r,r) = 0,
GR(r,r,r) = 0.
They shall belong to the same range, symmetrical around the zero point. I will use [-1..+1].
Any combination of BY and GR shall define a valid color.
Now, one of the ways to define them could be:
BY = (r + g - 2b)/(r + g + 2b),
GR = (r - g)/(r + g),
so that each pair of BY and GR determines a specific proportion
r : g : b = (1 + BY)(1 + GR) : (1 + BY)(1 - GR) : (1 - BY).
The following image shows the colors of maximum brightness on our BY-GR plane. BY is directed right, GR down, and the neutral point (0,0) is at the center:
Proper adjustment of white balance consists of multiplication of the linear RGB values by individual factors:
r_new = wb_r * r_old
g_new = wb_g * g_old
b_new = wb_b * b_old
It happens to work on gamma-compressed RGB too, but not so well on sRGB because of the piece-wise definition of its transfer function; still, the distortion will be small and often unnoticeable. If you want a perfect adjustment, however, make sure to work in linear RGB.
Once a BY-GR pair is chosen and the corresponding RGB proportion calculated, only one degree of freedom remains—the overall multiplier (see req. 1). Choose it so that no pixels become clipped.
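To make the recipe concrete, here is a short sketch (illustrative names only) that turns a (BY, GR) pair from two [-1..+1] sliders into an RGB triple using the proportion above, with the free overall multiplier chosen so the largest channel lands at 255 and nothing clips:
#include <algorithm>

// Convert the BY (blue-yellow) and GR (green-red) balances, both in
// [-1, +1], into an RGB color of maximum brightness without clipping.
void byGrToRgb(double BY, double GR, double &r, double &g, double &b)
{
    // the proportion r : g : b from the formulas above
    r = (1.0 + BY) * (1.0 + GR);
    g = (1.0 + BY) * (1.0 - GR);
    b = 1.0 - BY;

    // requirement 1 leaves the overall multiplier free; pick it so the
    // brightest channel hits 255
    double maxc = std::max(r, std::max(g, b));
    double k = (maxc > 0.0) ? 255.0 / maxc : 0.0;
    r *= k;
    g *= k;
    b *= k;
}
BY = 0, GR = 0 yields neutral white (255, 255, 255), while BY = -1 collapses to pure blue, matching the plane described above.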

How to shift pixels of a pixmap efficient in Qt4

I have implemented a marquee text widget using Qt4. I paint the text content onto a pixmap first, and then paint a portion of this pixmap onto a paint device by calling painter.drawTiledPixmap(offsetX, offsetY, myPixmap).
My expectation is that Qt will fill the whole marquee text rectangle with content from myPixmap.
Is there an even faster way to shift all existing content to the left by 1 px and then fill the newly exposed 1 px wide and N px high area with content from myPixmap?
Well, this is a trick I used to do with slower hardware back in the old days. Basically, the image buffer is allocated twice as wide as needed, with 1 extra line at the beginning. Build the image in the left half of the buffer, then draw the image repeatedly while advancing the start pointer 1 pixel at a time through the buffer.
#include <QByteArray>
#include <QImage>
#include <QString>

int w = 200;
int h = 100;
int rowBytes = w * sizeof(QRgb) * 2;                    // each buffer line is twice the image width
QByteArray buffer(rowBytes * (h + 1), 0xFF);            // 1 more line than the height
uchar *p = (uchar*)buffer.data() + rowBytes;            // start drawing the image content at the 2nd line
QImage image(p, w, h, rowBytes, QImage::Format_RGB32);  // 1st line is used as padding at the start of the scroll
image.fill(qRgb(255, 0, 0));                            // well, do something to the image
p = image.bits() - rowBytes / 2;                        // start scrolling at the middle of the 1st (blank) line
for (int i = 0; i < w; ++i, p += sizeof(QRgb)) {
    QImage scroll(p, w, h, rowBytes, QImage::Format_RGB32); // scroll 1 pixel at a time
    scroll.save(QString("%1.png").arg(i));
}
I am not sure this will be any faster than just changing the offset of the image and drawing it straight. Today's hardware is really powerful, which renders a lot of old tricks useless. But it's fun to play with obscure tricks. :)
Greetings,
one possibility to achieve this would be to:
Create a QGraphicsScene + View and put the pixmap on that twice (as QGraphicsPixmapItem), so they are right next to each other.
Size the view to fit the size of the (one) pixmap.
Then, instead of repainting the pixmap, you simply reposition the view's viewport, moving from one pixmap to the next.
Jump back at the end to create the loop.
This may or may not be faster (in terms of performance) - I have not tested it. But may be worth a try, if only for the sake of experiment.
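Here is a rough sketch of that idea (untested; myPixmap stands in for your marquee pixmap): the pixmap is added to the scene twice, and every animation step just nudges the view's horizontal scroll bar and wraps it back to the start.
#include <QGraphicsPixmapItem>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QScrollBar>

// scene with the same pixmap twice, side by side
QGraphicsScene *scene = new QGraphicsScene;
scene->addPixmap(myPixmap);                          // original at (0, 0)
QGraphicsPixmapItem *copy = scene->addPixmap(myPixmap);
copy->setPos(myPixmap.width(), 0);                   // copy starts where the original ends

// view sized to one pixmap, scroll bars hidden
QGraphicsView *view = new QGraphicsView(scene);
view->setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
view->setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
view->setFixedSize(myPixmap.width(), myPixmap.height());

// per animation step (e.g. from a QTimer):
int x = view->horizontalScrollBar()->value() + 1;
if (x >= myPixmap.width())
    x = 0;                                           // jump back to create the loop
view->horizontalScrollBar()->setValue(x);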
Your approach is probably one of the fastest ones since you use low level painting methods. You can implement an intermediate approach between low level painting and the QGraphicsScene option: using a scroll area containing a label.
Here is a sample of code that creates a new scroll area containing a text label. You can scroll the label automatically using a QTimer to trigger the scrolling effect, which gives you a nice marquee widget.
QScrollArea *scrollArea = new QScrollArea();
// ensure that scroll bars never show
scrollArea->setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
scrollArea->setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
QLabel *label = new QLabel("your scrolling text");
scrollArea->setWidget(label);
// resize the scroll area: 50 px wide, with a height matching its content
scrollArea->resize(50, label->sizeHint().height());
label->show(); // optional if the scroll area is not yet visible
The text label inside the scroll area can be moved to the left by one pixel at a time via QScrollArea::scrollContentsBy(int dx, int dy) with a dx parameter equal to -1 (in practice you drive it by changing the horizontal scroll bar's value, since scrollContentsBy() is a protected handler).
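If it helps, one way to drive that scrolling from a QTimer looks roughly like this (a sketch only; the Marquee class name, the 30 ms tick and the wrap-around are my own choices, not part of the original answer):
#include <QLabel>
#include <QScrollArea>
#include <QScrollBar>
#include <QTimer>
#include <QWidget>

class Marquee : public QWidget
{
    Q_OBJECT
public:
    explicit Marquee(const QString &text, QWidget *parent = 0)
        : QWidget(parent)
    {
        scrollArea = new QScrollArea(this);
        scrollArea->setVerticalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
        scrollArea->setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
        QLabel *label = new QLabel(text);
        scrollArea->setWidget(label);
        scrollArea->resize(50, label->sizeHint().height());

        QTimer *timer = new QTimer(this);
        connect(timer, SIGNAL(timeout()), this, SLOT(scrollOneStep()));
        timer->start(30); // roughly 33 steps per second
    }

private slots:
    void scrollOneStep()
    {
        QScrollBar *bar = scrollArea->horizontalScrollBar();
        int next = bar->value() + 1;
        if (next > bar->maximum())
            next = 0; // wrap around and restart the marquee
        bar->setValue(next);
    }

private:
    QScrollArea *scrollArea;
};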
Why not just do it on a pixel-by-pixel basis? Thanks to the way caches work, you can walk each row writing every pixel into the one before it, all the way to the end, and then fill the final column by reading from your other image.
It's then pretty easy to SIMD-optimise as well, though you start getting into per-platform optimisations at that point.
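For what it's worth, a minimal sketch of that row-by-row shift (assuming 32-bit QImage buffers such as Format_RGB32; the function name and the srcColumn parameter are purely illustrative):
#include <QImage>
#include <cstring>

// Shift every scanline of 'canvas' left by one pixel, then fill the newly
// exposed right-hand column from column 'srcColumn' of 'source'.
void shiftLeftByOnePixel(QImage &canvas, const QImage &source, int srcColumn)
{
    const int bpp = 4; // bytes per pixel for Format_RGB32
    const int lastPixel = (canvas.width() - 1) * bpp;
    for (int y = 0; y < canvas.height(); ++y) {
        uchar *line = canvas.scanLine(y);
        // move pixels 1..w-1 into positions 0..w-2
        std::memmove(line, line + bpp, lastPixel);
        // copy one pixel from the source image into the freed column
        std::memcpy(line + lastPixel, source.scanLine(y) + srcColumn * bpp, bpp);
    }
}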
