This has been driving me crazy for the past couple of days.
I'm animating a spritesheet, and it works out fine on my 96px x 384px texture with this code:
glBegin(GL_QUADS);
glTexCoord2f((frameCount*24.0f)/imgWidth, (row*24.0f)/imgHeight); glVertex3f(0+x, 0+y, -0.001f*(y+32));
glTexCoord2f((frameCount*24.0f)/imgWidth, ((row+1)*24.0f)/imgHeight); glVertex3f(0+x, 32+y, -0.001f*(y+32));
glTexCoord2f(((frameCount+1)*24.0f)/imgWidth, ((row+1)*24.0f)/imgHeight); glVertex3f(32+x, 32+y, -0.001f*(y+32));
glTexCoord2f(((frameCount+1)*24.0f)/imgWidth, (row*24.0f)/imgHeight); glVertex3f(32+x, 0+y, -0.001f*(y+32));
glEnd();
The problem is that when I load in a 32px x 32px texture, it looks wrong. I suspect that the number 24.0f should change according to the texture size, but I can't figure out how.
Second question: how does this method affect performance, and are there better ways of doing it?
The texture coordinate for the x-axis (width or u value) should be:
frameCount * (frameWidth / imgWidth)
with frameWidth being the width of each frame in your texture and imgWidth being the total width of the texture.
The texture coordinate for the y-axis (height, or v value) should be:
row * (frameHeight / imgHeight)
with frameHeight being the height of each frame in your texture and imgHeight being the total height of the texture (your code selects the frame row with row, so that is the multiplier here; frameHeight and imgHeight may even be the same if each frame is as tall as the whole texture - or that's what I'm assuming by looking at your code).
If you want the code to be more efficient, you can precompute the multiplications that are repeated for each quad. So you can probably precompute:
float widthFraction = frameWidth / (float)imgWidth;
float heightFraction = frameHeight / (float)imgHeight;
(The casts guard against integer division truncating the fractions to zero.)
The same applies for the vertex coordinate calculations, by the way.
Over hundreds of thousands of vertices, this will definitely speed the computations up a bit, but you should compare the two methods to see how much.
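To make the formulas above concrete, here is a small sketch of the per-frame UV arithmetic (Python used purely for illustration; the function name and parameters are hypothetical, not from the original code):

```python
def frame_uvs(frame_count, row, frame_w, frame_h, img_w, img_h):
    """Return (u0, v0, u1, v1) texture coordinates for one sprite frame."""
    # Precompute the fractions once; float() guards against integer division.
    width_fraction = float(frame_w) / img_w
    height_fraction = float(frame_h) / img_h
    u0 = frame_count * width_fraction
    v0 = row * height_fraction
    return u0, v0, u0 + width_fraction, v0 + height_fraction

# 24x24 frames in the 96x384 sheet from the question:
# frame 1 of row 0 spans u = 0.25..0.5 and v = 0..0.0625.
print(frame_uvs(1, 0, 24, 24, 96, 384))
```

With a 32x32 texture holding a single 32x32 frame, both fractions become 1.0, which is why the hard-coded 24.0f breaks.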
(More info at the end.)
I am trying to render a small picture-in-picture display over my scene. The PiP is just a smaller texture, but it is intended to reveal secret objects in the scene when it is placed over them.
To do this, I want to render my scene, then render the SAME scene on the smaller texture, but with the exact same positioning as the main scene. The intended result would be something like this:
My problem is that I cannot get the scene on the smaller texture to match up 1:1. I keep trying various kludges, but ultimately I suspect I need to do something to the projection matrix to pan it over to the location of the frame. I can get it to zoom correctly; I just can't get it to pan.
Can anyone suggest what I need to do to my projection matrix to render my scene 1:1 (but panned by x,y) onto a smaller texture?
The data I have:
Resolution of the full-screen framebuffer
Resolution of the smaller texture
XY coordinate where I want to draw the smaller texture as an overlay sprite
The world/view/projection matrices from the original full-screen scene
The viewport from the original full-screen scene
(Edit)
Here is the function I use to produce the 3D camera:
void Make3DCamera(Vector theCameraPos, Vector theLookAt, Vector theUpVector, float theFOV, Point theRez, Matrix& theViewMatrix, Matrix& theProjectionMatrix)
{
    Matrix aCombinedViewMatrix;
    Matrix aViewMatrix;

    aCombinedViewMatrix.Scale(1,1,-1);
    theCameraPos.mZ*=-1;
    theLookAt.mZ*=-1;
    theUpVector.mZ*=-1;
    aCombinedViewMatrix.Translate(-theCameraPos);

    Vector aLookAtVector=theLookAt-theCameraPos;
    Vector aSideVector=theUpVector.Cross(aLookAtVector);
    theUpVector=aLookAtVector.Cross(aSideVector);
    aLookAtVector.Normalize();
    aSideVector.Normalize();
    theUpVector.Normalize();

    aViewMatrix.mData.m[0][0] = -aSideVector.mX;
    aViewMatrix.mData.m[1][0] = -aSideVector.mY;
    aViewMatrix.mData.m[2][0] = -aSideVector.mZ;
    aViewMatrix.mData.m[3][0] = 0;
    aViewMatrix.mData.m[0][1] = -theUpVector.mX;
    aViewMatrix.mData.m[1][1] = -theUpVector.mY;
    aViewMatrix.mData.m[2][1] = -theUpVector.mZ;
    aViewMatrix.mData.m[3][1] = 0;
    aViewMatrix.mData.m[0][2] = aLookAtVector.mX;
    aViewMatrix.mData.m[1][2] = aLookAtVector.mY;
    aViewMatrix.mData.m[2][2] = aLookAtVector.mZ;
    aViewMatrix.mData.m[3][2] = 0;
    aViewMatrix.mData.m[0][3] = 0;
    aViewMatrix.mData.m[1][3] = 0;
    aViewMatrix.mData.m[2][3] = 0;
    aViewMatrix.mData.m[3][3] = 1;

    if (gG.mRenderToSprite) aViewMatrix.Scale(1,-1,1);
    aCombinedViewMatrix*=aViewMatrix;

    // Projection Matrix
    float aAspect = (float) theRez.mX / (float) theRez.mY;
    float aNear = gG.mZRange.mData1;
    float aFar = gG.mZRange.mData2;
    float aWidth = gMath.Cos(theFOV / 2.0f);
    float aHeight = gMath.Cos(theFOV / 2.0f);
    if (aAspect > 1.0) aWidth /= aAspect;
    else aHeight *= aAspect;
    float s = gMath.Sin(theFOV / 2.0f);
    float d = 1.0f - aNear / aFar;

    Matrix aPerspectiveMatrix;
    aPerspectiveMatrix.mData.m[0][0] = aWidth;
    aPerspectiveMatrix.mData.m[1][0] = 0;
    aPerspectiveMatrix.mData.m[2][0] = gG.m3DOffset.mX/theRez.mX/2;
    aPerspectiveMatrix.mData.m[3][0] = 0;
    aPerspectiveMatrix.mData.m[0][1] = 0;
    aPerspectiveMatrix.mData.m[1][1] = aHeight;
    aPerspectiveMatrix.mData.m[2][1] = gG.m3DOffset.mY/theRez.mY/2;
    aPerspectiveMatrix.mData.m[3][1] = 0;
    aPerspectiveMatrix.mData.m[0][2] = 0;
    aPerspectiveMatrix.mData.m[1][2] = 0;
    aPerspectiveMatrix.mData.m[2][2] = s / d;
    aPerspectiveMatrix.mData.m[3][2] = -(s * aNear / d);
    aPerspectiveMatrix.mData.m[0][3] = 0;
    aPerspectiveMatrix.mData.m[1][3] = 0;
    aPerspectiveMatrix.mData.m[2][3] = s;
    aPerspectiveMatrix.mData.m[3][3] = 0;

    theViewMatrix=aCombinedViewMatrix;
    theProjectionMatrix=aPerspectiveMatrix;
}
Edit to add more information:
Just playing and tweaking numbers, I have come to a "close" result. However, the "close" result requires multiplication by some kludge numbers that I don't understand.
Here's what I'm doing to the perspective matrix to produce my close result:
//Before calling Make3DCamera, adjusting FOV:
aFOV*=smallerTexture.HeightF()/normalRenderSize.HeightF(); // Zoom it
aFOV*=1.02f; // <- WTH is this?
//Then, to pan the camera over to the x/y position I want, I do:
Matrix aPM=GetCurrentProjectionMatrix();
float aX=(screenX-normalRenderSize.WidthF()/2.0f)/2.0f;
float aY=(screenY-normalRenderSize.HeightF()/2.0f)/2.0f;
aX*=1.07f; // <- WTH is this?
aY*=1.07f; // <- WTH is this?
aPM.mData.m[2][0]=-aX/normalRenderSize.HeightF();
aPM.mData.m[2][1]=-aY/normalRenderSize.HeightF();
SetCurrentProjectionMatrix(aPM);
When I do this, my new picture is VERY close... but not exactly perfect-- the small render tends to drift away from "center" the further the "magic window" is from the center. Without the kludge number, the drift away from center with the magic window is very pronounced.
The kludge numbers 1.02f for zoom and 1.07f for pan reduce the inaccuracies and drift to a fraction of a pixel, but those numbers must be a ratio from somewhere, right? They work at ANY resolution, though: I can have a 1280x800 screen and a 256x256 magic window texture, and if I change the screen to 1024x768, it all still works.
Where the heck are these numbers coming from?
If you don't care about sub-optimal performance (i.e., drawing the whole scene twice) and if you don't need the smaller scene in a texture, an easy way to obtain the overlay with pixel perfect precision is:
Set up main scene (model/view/projection matrices, etc.) and draw it as you are now.
Use glScissor to set the rectangle for the overlay. glScissor takes the screen-space x, y, width, and height and discards anything outside that rectangle. It looks like you have those four data items already, so you should be good to go.
Call glEnable(GL_SCISSOR_TEST) to actually turn on the test.
Set the shader variables (if you're using shaders) for drawing the greyscale scene/hidden objects/etc. You still use the same view and projection matrices that you used for the main scene.
Draw the greyscale scene/hidden objects/etc.
Call glDisable(GL_SCISSOR_TEST) so you won't be scissoring at the start of the next frame.
Draw the red overlay border, if desired.
Now, if you actually need the overlay in its own texture for some reason, this probably won't be adequate. It could be made to work with framebuffer objects and/or pixel readback, but that would be less efficient.
Most people completely overcomplicate such issues. There is absolutely no magic to applying transformations after applying the projection matrix.
If you have a projection matrix P (and I'm assuming default OpenGL conventions here where P is constructed in a way that the vector is post-multiplied to the matrix, so for an eye space vector v_eye, we get v_clip = P * v_eye), you can simply pre-multiply some other translate and scale transforms to cut out any region of interest.
Assume you have a viewport of size w_view * h_view pixels, and you want to find a projection matrix which renders only a tile of w_tile * h_tile pixels, beginning at pixel location (x_tile, y_tile) (again assuming default GL conventions, where the window-space origin is at the bottom left, so y_tile is measured from the bottom). Also note that the _tile coordinates are to be interpreted relative to the viewport; in the typical case that viewport starts at (0,0) and has the size of your full framebuffer, but this is by no means required or assumed here.
Since after applying the projection matrix we are in clip space, we need to transform our coordinates from window space pixels to clip space. Note that clip space is a 4D homogeneous space, but we can use any w value we like (except 0) to represent any point (as a point in the 3D space we care about forms a line in the 4D space we work in), so let's just use w=1 for simplicity's sake.
The view volume in clip space is denoted by the [-w,w] range, so in the w=1 hyperplane, it is [-1,1]. Converting our tile into this space yields:
x_clip = 2 * (x_tile / w_view) -1
y_clip = 2 * (y_tile / h_view) -1
w_clip = 2 * (w_tile / w_view)
h_clip = 2 * (h_tile / h_view)
(Sizes are differences of positions, so the -1 offset cancels out for the last two.)
We now just need to translate the objects such that the center of the tile is moved to the center of the view volume, which by definition is the origin, and scale the w_clip * h_clip sized region to the full [-1,1] extent in each dimension.
That means:
T = translate(-(x_clip + 0.5 * w_clip), -(y_clip + 0.5 * h_clip), 0)
S = scale(2.0/w_clip, 2.0/h_clip, 1.0)
We can now create the modified projection matrix P' as P' = S * T * P, and that's all there is. Rendering with P' instead of P will render exactly the region of your tile to whatever viewport you are using, so for it to be pixel-exact with respect to your original viewport, you must now render with a viewport which is also w_tile * h_tile pixels big.
Note that there is also another approach: the viewport is not clamped against the framebuffer you're rendering to. It is actually valid to provide negative values for x and y. If the framebuffer you render your tile into is exactly w_tile * h_tile pixels, you could simply set glViewport(-x_tile, -y_tile, w_view, h_view) and render with the unmodified projection matrix P instead.
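The derivation above can be checked numerically. The sketch below (Python for illustration only; tile_matrices is a hypothetical helper, not part of any API) computes the per-axis translate and scale factors of T and S and verifies that the tile edges land exactly on the [-1, 1] clip-space boundary:

```python
def tile_matrices(x_tile, y_tile, w_tile, h_tile, w_view, h_view):
    """Return (tx, ty, sx, sy) so that clip' = (clip + t) * s per axis,
    i.e. the combined S * T from the answer, restricted to x and y."""
    x_clip = 2.0 * x_tile / w_view - 1.0
    y_clip = 2.0 * y_tile / h_view - 1.0
    w_clip = 2.0 * w_tile / w_view   # sizes transform without the -1 offset
    h_clip = 2.0 * h_tile / h_view
    tx = -(x_clip + 0.5 * w_clip)
    ty = -(y_clip + 0.5 * h_clip)
    sx = 2.0 / w_clip
    sy = 2.0 / h_clip
    return tx, ty, sx, sy

# A 256x256 tile at pixel (100, 50) in a 1280x800 viewport:
tx, ty, sx, sy = tile_matrices(100, 50, 256, 256, 1280, 800)

# The tile's left and right edges map to -1 and +1 after translate+scale.
left = 2.0 * 100 / 1280 - 1.0
right = 2.0 * (100 + 256) / 1280 - 1.0
print((left + tx) * sx, (right + tx) * sx)
```

The same check works for the y axis; rendering with P' = S * T * P into a w_tile * h_tile viewport then reproduces exactly that pixel region of the original image.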
I'm developing an animation application with 2D virtual camera. The camera viewport can be positioned and scaled in the keyframes and is then interpolated to render the final animation. I'm looking for the best way to interpolate the camera's parameters of x,y position, scale so that objects in the scene transformed by the camera change size at a constant rate and so that all objects travel in a straight line.
The transform matrix for rendering the scene from the point of view of the camera is calculated from the position and scale as follows, where DimX and DimY are the dimensions of the scene image, and Pos and Scale are the position and scale of the camera (the variables I want to interpolate):
LCen := PointF(DimX*0.5, DimY*0.5);
CamTransformInv := TMatrix.CreateTranslation(-(Pos.X + LCen.X), -(Pos.Y + LCen.Y));
LScaleInv := 1 / Scale;
CamTransformInv := CamTransformInv * TMatrix.CreateScaling(LScaleInv, LScaleInv);
CamTransformInv := CamTransformInv * TMatrix.CreateTranslation(LCen.X, LCen.Y);
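As a sanity check of that chain of transforms, here is a hedged scalar re-implementation in Python (the function name is made up; it mirrors the translate, scale, translate sequence of the TMatrix code rather than any real library):

```python
def cam_transform_inv(point, pos, scale, dim_x, dim_y):
    """Apply the camera's inverse transform to a scene point:
    translate by -(Pos + Center), scale by 1/Scale, translate back by Center."""
    cen_x, cen_y = dim_x * 0.5, dim_y * 0.5
    x = point[0] - (pos[0] + cen_x)
    y = point[1] - (pos[1] + cen_y)
    inv = 1.0 / scale
    x, y = x * inv, y * inv
    return x + cen_x, y + cen_y

# The scene point the camera is centred on always lands at the image centre,
# regardless of the zoom: here (120+320, 80+240) maps to (320, 240).
print(cam_transform_inv((120 + 320, 80 + 240), (120, 80), 2.0, 640, 480))
```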
Here's an animation created by linearly interpolating the scale and position. The black line extends from the center of the viewport in the first position to the center of the viewport in the second position. You can see the effect of it appearing to speed up as it zooms in, which I'd like to avoid. On the plus side, all objects in the scene move in a straight line. I've made the animation loop to make the acceleration effect more obvious.
So I modified my code to linearly interpolate the Ln of the scale and take Exp of the result. This results in an exponential interpolation with the scale change slowing down as it zooms in which looks good since objects in the scene then grow at a constant rate. This makes sense because objects in the scene get multiplied by scale whereas objects get added by position, so interpolation of scale has to be multiplicative. This is achieved by taking log before interpolating. Position is still linear as before. The problem now is that parts of the image to the sides of the line move in a curve. It doesn't look right (see the top of the tower).
It occurred to me that the problem is that I'm interpolating the scale non-linearly and the position linearly. If I made the position decelerate in the same way the scale decelerates, it would look correct. However, I can't think how this would be computed, as the position and scale are coupled in a complex way. If there's no scale change then the position should change linearly, but the greater the scale change, the greater the non-linearity of the position should be.
So is there a standard way of doing this?
This has been answered for me elsewhere. Here is my working code.
CamT.Scale := Exp(LinearInterpolate(Ln(Cam1.Scale), Ln(Cam2.Scale), k));
if abs(Cam2.Scale - Cam1.Scale) > 0.001 then begin
r := Cam2.Scale / Cam1.Scale;
w := (Power(r, k) - 1) / (r - 1);
CamT.Pos.X := LinearInterpolate(Cam1.Pos.X, Cam2.Pos.X, w);
CamT.Pos.Y := LinearInterpolate(Cam1.Pos.Y, Cam2.Pos.Y, w);
end else begin
CamT.Pos.X := LinearInterpolate(Cam1.Pos.X, Cam2.Pos.X, k);
CamT.Pos.Y := LinearInterpolate(Cam1.Pos.Y, Cam2.Pos.Y, k);
end;
And the resulting zoom.
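For reference, the same scheme outside Delphi (Python purely for illustration; the function name is hypothetical): the scale is interpolated geometrically, and the position parameter is warped by (r^k - 1)/(r - 1) so that scene points travel in straight lines at a perceptually constant zoom rate.

```python
from math import exp, log

def interp_camera(s1, s2, p1, p2, k):
    """Interpolate camera scale geometrically, and warp the position
    parameter to match, for k in [0, 1]."""
    scale = exp(log(s1) + (log(s2) - log(s1)) * k)
    if abs(s2 - s1) > 0.001:
        r = s2 / s1
        w = (r ** k - 1.0) / (r - 1.0)   # warped position parameter
    else:
        w = k                            # pure pan: plain linear motion
    pos = (p1[0] + (p2[0] - p1[0]) * w,
           p1[1] + (p2[1] - p1[1]) * w)
    return scale, pos

# Endpoints are reproduced exactly; halfway, the zoom is the geometric
# mean of the two scales (here 2.0 between scales 1.0 and 4.0).
print(interp_camera(1.0, 4.0, (0.0, 0.0), (100.0, 0.0), 0.5))
```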
What value fed to strokeWeight() will give a stroke width of one pixel regardless of the current scale() setting?
I think strokeWeight(0) should work. Here is an example:
void setup() {
  size(100,100);
  noFill();
  scale(10);

  // 1st square, stroke will be 10 pixels
  translate(3,3);
  strokeWeight(1);
  beginShape();
  vertex(-1.0, -1.0);
  vertex(-1.0, 1.0);
  vertex( 1.0, 1.0);
  vertex( 1.0, -1.0);
  endShape(CLOSE);

  // 2nd square, stroke will be 1 pixel
  translate(3,3);
  strokeWeight(0);
  beginShape();
  vertex(-1.0, -1.0);
  vertex(-1.0, 1.0);
  vertex( 1.0, 1.0);
  vertex( 1.0, -1.0);
  endShape(CLOSE);
}
Kevin did offer a couple of good approaches.
Your question doesn't make it clear what level of comfort you have with the language. My assumption (and I could be wrong) is that the layers approach isn't clear because you might not have used PGraphics before.
However, this option Kevin provided is simple and straight forward:
multiplying the coordinates manually
Notice that most drawing functions take not only the coordinates, but also the dimensions?
Don't use scale(), but keep track of a multiplier floating point variable that you use for the shape dimensions. Manually scale the dimensions of each shape:
void draw(){
  //map mouseX to a scale between 10% and 300%
  float scale = map(constrain(mouseX,0,width),0,width,0.1,3.0);
  background(255);
  //scale the shape dimensions, without using scale()
  ellipse(50,50, 30 * scale, 30 * scale);
}
You can run this as a demo below:
function setup(){
  createCanvas(100,100);
}
function draw(){
  //map mouseX to a scale between 10% and 300%
  var scale = map(constrain(mouseX,0,width),0,width,0.1,3.0);
  background(200);
  //scale the shape dimensions, without using scale()
  ellipse(50,50, 30 * scale, 30 * scale);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.7/p5.min.js"></script>
Another answer is in the question itself: what value would you feed to strokeWeight()? If scale() is making the stroke bigger, but you want to keep its appearance the same, you need a smaller stroke weight as the scale increases: the thickness is inversely proportional to the scale:
void draw(){
  //map mouseX to a scale between 10% and 300%
  float scale = map(constrain(mouseX,0,width),0,width,0.1,3.0);
  background(255);
  translate(50,50);
  scale(scale);
  strokeWeight(1/scale);
  //scaled shape, same appearing stroke, just smaller in value as scale increases
  ellipse(0,0, 30, 30);
}
You can run this below:
function setup(){
  createCanvas(100,100);
}
function draw(){
  //map mouseX to a scale between 10% and 300%
  var scaleValue = map(constrain(mouseX,0,width),0,width,0.1,3.0);
  background(240);
  translate(50,50);
  scale(scaleValue);
  strokeWeight(1/scaleValue);
  //scale the shape dimensions, without using scale()
  ellipse(0,0, 30, 30);
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/0.5.7/p5.min.js"></script>
Kevin was patient, not only answering your question but also your comments, and was generous with his time. You need to be patient enough to carefully read and understand the answers provided. Try it on your own, then come back with specific questions for clarification if needed. It's the best way to learn.
Simply asking "how do I do this?" without showing what you've tried and what your thinking behind the problem is, expecting a snippet to copy/paste, will not get you very far, and that is not what Stack Overflow is about.
You'll have far more to gain by learning, using the available documentation, and especially thinking about the problem on your own first. You might not crack the problem on the first go (I know I certainly don't), but reasoning about it and viewing it from different angles will get your gears going.
Always be patient; it will serve you well in the long run, regardless of the situation.
Update: Perhaps by
What value fed to strokeWeight() will give a stroke width of one pixel regardless of the current scale() setting?
you mean: how can you draw without anti-aliasing?
If so, you can disable smoothing by calling noSmooth(); once in setup(). Try it with the example code above.
None.
The whole point of scale() is that it, well, scales everything.
You might want to draw things in layers: draw one scaled layer, and one unscaled layer that contains the single-pixel-width lines. Then combine those layers.
That won't work if you need your layers to be mixed, such as an unscaled line on top of a scaled shape, on top of another scaled line. In that case you'll just have to unscale before drawing your lines, then scale again to draw your shapes.
I have some values that I need to plot into a 2D HTML5 <canvas>. All values are in the range [-1, +1] so I decided to set a transformation (scale + displacement) on the canvas 2D-context before drawing:
var scale = Math.min(canvas.width, canvas.height) / 2;
ctx.setTransform(scale, 0, 0, scale, canvas.width / 2, canvas.height / 2);
Each value is drawn using the arc method, but since I want a fixed arc-radius (no matter what scaling is used) I'm dividing the radius with the current scale value:
ctx.arc(value.X, value.Y, 2 / scale, 0, 2 * Math.PI, false);
Now, a canvas of size 200 x 200 will result in a scale factor of 100, which in turn results in an arc radius of 0.02. Unfortunately, it seems that values like 0.2 or 0.02 don't make any difference to the resulting arc radius; only the stroke thickness changes.
You can see this behavior in the JsFiddle. Is this a bug or am I doing something wrong?
The issue is that after scaling by a huge factor, your lines now have a lineWidth far too big to be drawn correctly with stroke.
Just adjust lineWidth to 1/scale before drawing, and all will work fine.
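A quick numeric sketch of why this works (Python used only to show the arithmetic; the helper name is made up): everything drawn under the transform is multiplied by scale on its way to the screen, so the arc radius of 2/scale always comes out at 2 device pixels, while lineWidth is multiplied by the same factor.

```python
def on_screen_sizes(canvas_w, canvas_h, line_width):
    """Device-pixel sizes of an arc drawn under the scale transform
    with radius 2/scale and the given lineWidth (user-space units)."""
    scale = min(canvas_w, canvas_h) / 2
    radius = (2 / scale) * scale   # division and transform cancel: ~2 px
    stroke = line_width * scale    # grows with the canvas!
    return radius, stroke

# Default lineWidth of 1 on a 200x200 canvas: a 2px dot under a 100px stroke.
print(on_screen_sizes(200, 200, 1))
# Setting lineWidth = 1/scale keeps the stroke at one device pixel.
print(on_screen_sizes(200, 200, 1 / 100))
```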
Using a shader I'm trying to color a plane so it replicates the pixels on a texture. The texture is 32x32 pixels and the plane is also sized 32x32 in space coordinates.
Does anyone know how I would inspect the first pixel on the texture, then use it to color the first square (1x1) on the plane?
Generated texture example: (First pixel is red on purpose)
This code, using a vec2 with coordinates (0,0), doesn't work as I expected. I assumed the color at (0,0) would be red, but it's not; it's green:
vec4 color = texture2D(texture, vec2(0, 0));
I guess there's something I'm missing, or not understanding, about texture2D, as (0,0) doesn't appear to be the last pixel either.
If anyone could help me out, it would be greatly appreciated. Thanks.
EDIT:
Thanks for the comments and answers! Using this code, it's working now:
// Flip the texture vertically
vec3 verpos2 = verpos.xyz * vec3(1.0, 1.0, -1.0);
// Calculate the pixel coordinates the fragment belongs to
float pixX = floor(verpos2.x - floor(verpos2.x / 32.0) * 32.0);
float pixZ = floor(verpos2.z - floor(verpos2.z / 32.0) * 32.0);
float texX = (pixX + 0.5) / 32.0;
float texZ = (pixZ + 0.5) / 32.0;
gl_FragColor = texture2D(texture, vec2(texX, texZ));
That said, I'm having an issue with jagged lines on the edges of each "block". Looks to me like my math is off and it's confused about what color the sides should be, because I didn't have this problem when using only vertex colors. Can anyone see where I've gone wrong or how it could be done better?
Thanks again!
Yes... as Ben Pious mentioned in a comment, remember that WebGL displays 0,0 in lower left.
Also, for indexing into your textures, try to sample from the "middle" of each pixel. On a 32x32 source texture, to get pixel (0,0) you'd want:
texture2D(theSampler, vec2(0.5/32.0, 0.5/32.0));
Or more generally,
texture2D(theSampler, vec2((xPixelIndex + 0.5) / width, (yPixelIndex + 0.5) / height));
This only applies if you're explicitly accessing texture pixels; if you're getting values interpolated and passed through from the vertex shader (say, for a (-1,-1)-to-(1,1) square, passing varying vec2((x+1)/2, (y+1)/2) to the fragment shader), this "middle of each pixel" offset is already reflected in your varying value.
But it's probably just the Y-up like Ben says. :)
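The texel-centre rule above is easy to verify numerically. In this hedged sketch (Python purely for illustration; both helper names are made up), sampling at (i + 0.5)/size stays strictly inside texel i, so a nearest-neighbour lookup recovers the original pixel index:

```python
from math import floor

def texel_center(i, j, width, height):
    """UV coordinates of the centre of pixel (i, j) in a width x height texture."""
    return (i + 0.5) / width, (j + 0.5) / height

def uv_to_pixel(u, v, width, height):
    """Recover the pixel indices a UV pair falls in (nearest-neighbour lookup)."""
    return floor(u * width), floor(v * height)

# Pixel (0, 0) of a 32x32 texture is sampled at (0.015625, 0.015625),
# safely inside the texel, so filtering never blends with a neighbour.
u, v = texel_center(0, 0, 32, 32)
print(u, v, uv_to_pixel(u, v, 32, 32))
```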