Simple 2D UV mapping problem in OpenGL ES - opengl-es

I'm fairly new to low-level OpenGL coding and I have a problem that perhaps I'm just too tired to find a solution to. I'm sure the answer is simple and I'm doing something utterly stupid, but here goes...
I have a 1024 x 512 texture, and I'm rendering a 2D quad that is meant to use a portion of that texture. The quad is the same size as the texture portion I'm trying to render, e.g. 100 x 100 between vertices to render a 100 x 100 texture portion.
The texture pixels start at 0 (s) and 104 (t), assuming zero-based pixel coords, so the UV (st) is calculated as 0 / 1024 and 104 / 512, up to the extents of 100 / 1024 and 204 / 512.
However, when I render the quad, I get a line from the pixel row at 103 (t), which is part of a different section of the image, so it really stands out. I can get around the problem by setting the UV to 0 / 1024 and 105 / 512, but this seems very wrong to me.
I've been searching for a while and can't find what I'm doing wrong. I've tried GL_LINEAR, GL_NEAREST, clamping, etc. in the texture parameters, to no avail. Can someone please point out my error?

That is because texture coordinates don't address pixels, so your assumption that "the UV(st) is being calculated as 0 / 1024 and 104 / 512 to the extents of 100 / 1024 and 204 / 512" is wrong!
This is kind of a FAQ; I recently answered it in OpenGL Texture Coordinates in Pixel Space.
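In short, texture coordinates address positions in the continuous [0, 1] range over the whole texture, and a coordinate that lands exactly on a texel edge (such as 104 / 512) is blended half-and-half with the neighbouring texel under GL_LINEAR (and is a rounding toss-up under GL_NEAREST), which is where the stray line from row 103 comes from. A common fix is to aim at texel centers, i.e. offset the pixel indices by 0.5 before dividing. A minimal sketch of that arithmetic in plain Python (not the linked answer's code, just the calculation):

def texel_center_uv(first_px, last_px, tex_size):
    # Texel i spans [i / size, (i + 1) / size); its center is (i + 0.5) / size.
    # Sampling at centers avoids bleeding from the adjacent texel under GL_LINEAR.
    return (first_px + 0.5) / tex_size, (last_px + 0.5) / tex_size

# The question's case: a 100-texel-tall strip covering texels 104..203 of a 512-px texture.
print(texel_center_uv(104, 203, 512))   # (0.2041..., 0.3975...) rather than 104/512 .. 204/512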

Related

What is the best value for Xft.dpi in .Xresources

From Arch Wiki:
For Xft.dpi, using integer multiples of 96 usually works best, e.g. 192 for 200% scaling.
I know that 200%, 300%, ... scaling is the best case, because every logical pixel is replaced by an integer number of physical pixels, so we never have to display 1.5 pixels.
But what if I don't have a 4k monitor, and instead have, for example, a 2.5k (2560x1440) monitor, or a monitor with some non-standard resolution or aspect ratio? In that case doubling the scale factor is too much.
I have only two ideas (both worked out in the sketch below):
Scale by 1.25, 1.5, or 1.75, so that 16x16 and 32x32 objects are still scaled to whole pixel sizes.
Set it to (vertical_pixels*horizontal_pixels)/(1920*1080)*96, so objects end up about the same size as on a normal display.
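For concreteness, here is what the two ideas give on a 2560x1440 panel (plain Python arithmetic; the panel size is just the example from the question):

width, height = 2560, 1440

# Idea 1: fractional multiples of 96 that keep 16x16 / 32x32 elements on whole pixels.
for factor in (1.25, 1.5, 1.75):
    print(f"{factor}x -> Xft.dpi {96 * factor:g}, 16 px icon -> {16 * factor:g} px")

# Idea 2: scale by the pixel-count ratio against a 1920x1080 display.
dpi = (width * height) / (1920 * 1080) * 96
print(f"pixel-count ratio -> Xft.dpi {dpi:.0f}")   # ~171 for 2560x1440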

Align feature map with ego motion (problem of zooming ratio )

I want to align the feature map using ego motion, as mentioned in the paper "An LSTM Approach to Temporal 3D Object Detection in LiDAR Point Clouds".
I use VoxelNet as the backbone, which shrinks the image by a factor of 8. The size of my voxel is 0.1 m x 0.1 m x 0.2 m (height).
So given an input bird's-eye-view image of size 1408 x 1024, the extracted feature map is 176 x 128, shrunk by a factor of 8.
The ego translation of the car between the "images" (point clouds, actually) is 1 meter in both the x and y direction. Am I right to shift the feature map by 1.25 pixels?
1m/0.1m = 10 # meter to pixel
10/8 = 1.25 # shrink ratio of the network
However, through experiments, I found the feature maps align better if I shift the feature map by only 1/32 of a pixel for the 1 meter translation in the real world.
P.S. I am using the function torch.nn.functional.affine_grid to perform the translation, which takes a 2x3 affine matrix as input.
It was caused by the function torch.nn.functional.affine_grid I used.
I didn't fully understand this function before using it.
These vivid images are very helpful for showing what the function actually does (in comparison to affine transformations in NumPy).
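The crucial detail is that affine_grid and grid_sample work in normalized coordinates: the sampling grid spans [-1, 1] over the full width and height of the tensor, so a translation has to be divided by half the feature-map size before it goes into the 2x3 matrix, rather than being given in pixels. A rough sketch of that conversion (a hypothetical helper, not the original code; the sign flips depending on whether you move the content or the sampling grid):

import torch
import torch.nn.functional as F

def translate_feature_map(fmap, dx_px, dy_px):
    # fmap: (N, C, H, W); dx_px / dy_px: desired shift of the content in pixels.
    n, c, h, w = fmap.shape
    # The grid spans [-1, 1] over the whole width/height, so dx pixels correspond
    # to 2 * dx / W in grid units.  grid_sample reads the input at the grid
    # positions, so moving the content by +dx means sampling at -dx.
    tx = -2.0 * dx_px / w
    ty = -2.0 * dy_px / h
    theta = torch.tensor([[1.0, 0.0, tx],
                          [0.0, 1.0, ty]], dtype=fmap.dtype, device=fmap.device)
    theta = theta.unsqueeze(0).repeat(n, 1, 1)
    grid = F.affine_grid(theta, fmap.shape, align_corners=False)
    return F.grid_sample(fmap, grid, align_corners=False)

# e.g. the 1.25-pixel shift from the question, on a 176 x 128 feature map:
# shifted = translate_feature_map(fmap, 1.25, 1.25)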

How does memory usage in browsers work for images - can I do one large sprite?

I currently display 115 (!) different sponsor icons at the bottom of many web pages on my website. They're lazy-loaded, but even so, that's quite a lot.
At present, these icons are loaded separately, and are sized 75x50 (or x2 or x3, depending on the screen of the device).
I'm toying with the idea of making them all into one sprite, rather than 115 separate files. That would mean, instead of lots of tiny little files, I'd have one large PNG or WEBP file instead. The way I'm considering doing it would mean the smallest file would be 8,625 pixels across; and the x3 version would be 25,875 pixels across, which seems like a really very large image (albeit only 225 px high).
Will an image of this pixel size cause a browser to choke?
Is a sprite the right way to achieve a faster-loading page here, or is there something else I should be considering?
115 icons at 75 pixels wide will indeed work out to a very wide 8625-pixel image that is only 50 px high...
but you don't have to use a low (50 px), very wide (8625 px) strip.
You can make a more sensibly shaped rectangular image with a grid of icons... say, 12 rows of 10 icons per row:
10 x 75 = 750 px, plus 5 px of spacing between icons, roughly 800 px wide;
12 x 50 = 600 px, plus 5 px of spacing between rows, roughly 660 px tall.
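As a quick check of the resulting sheet sizes under that layout (plain Python arithmetic; the 10-per-row grid and 5 px gap are the assumptions above, scaled up for the x2 and x3 asset variants):

def sprite_size(icon_w, icon_h, count, per_row, gap):
    rows = -(-count // per_row)                  # ceiling division: 115 icons -> 12 rows of 10
    width = per_row * icon_w + (per_row - 1) * gap
    height = rows * icon_h + (rows - 1) * gap
    return width, height

for scale in (1, 2, 3):                          # 1x, 2x and 3x variants
    w, h = sprite_size(75 * scale, 50 * scale, 115, per_row=10, gap=5 * scale)
    print(f"{scale}x sprite: {w} x {h} px")
# 1x sprite: 795 x 655 px
# 2x sprite: 1590 x 1310 px
# 3x sprite: 2385 x 1965 px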

Strange Texture array buffer limitations

Within a WebGL fragment shader I'm using a texture generated from an array of 32-bit values, but it yields errors when going above a resolution of 7000x7000 px, which is far below the maximum texture resolution for my GPU (16384x16384 px). gl.UNSIGNED_BYTE works without issue at higher resolutions, but not when changed to gl.FLOAT. Is this a known limitation when dealing with floats? Are there workarounds? Any input is much appreciated.
My texture parameters:
gl.texImage2D(gl.TEXTURE_2D, 0, gl.ALPHA, 8192, 8192, 0, gl.ALPHA, gl.FLOAT, Z_pixels)
7000 * 7000 pixels * 4 bytes per float * 4 channels ≈ 784 megabytes of memory. Perhaps that exceeds your graphics card's memory capacity?
Per MSDN (https://msdn.microsoft.com/en-us/library/dn302435(v=vs.85).aspx), "[gl.FLOAT] creates 128 bit-per-pixel textures instead of 32 bit-per-pixel for the image", so it's possible that gl.ALPHA will still use 128 bits per pixel.
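To put numbers on that (plain Python; this assumes the worst case of 4 channels at 32 bits each even for a gl.ALPHA texture, as the MSDN note suggests, which may not hold on every driver):

def texture_bytes(width, height, bytes_per_channel, channels=4):
    return width * height * bytes_per_channel * channels

for size, kind, bpc in [(7000, "FLOAT", 4), (7000, "UNSIGNED_BYTE", 1), (8192, "FLOAT", 4)]:
    mb = texture_bytes(size, size, bpc) / 1e6
    print(f"{size} x {size} {kind}: ~{mb:,.0f} MB")
# 7000 x 7000 FLOAT: ~784 MB
# 7000 x 7000 UNSIGNED_BYTE: ~196 MB
# 8192 x 8192 FLOAT: ~1,074 MB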

What does "Pixels per foot" mean in digital images?

When reading about the resolution of a digital image at the following link, http://www.rideau-info.com/photos/whatis.html, I got confused by the following paragraph:
If the field of view is 20 feet across, a 3 megapixel camera will be resolving that view at 102 pixels per foot. If that same shot was taken with an 18 Mp camera it would be resolving that view at 259 pixels per foot, 2.5 times more resolution than a 3 Mp camera.
How does the author arrive at the figures of 102 pixels per foot and 259 pixels per foot?
A 3 MP camera, in that article, is 2048 pixels wide x 1536 high. Think of the 2048 pixels across as 2048 boxes laid out in a straight line. Now, if you were to divide them equally amongst 20 sections (20 feet of field of view), you would get ~102 boxes per section. Hence the figure of 102 pixels per foot. The same reasoning applies to the 18 MP camera, which is 5184 wide x 3456 high: 5184 divided by 20 is ~259.
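The same arithmetic as a short sketch (the pixel widths are the ones quoted in the article):

field_of_view_ft = 20
for name, width_px in [("3 MP (2048 x 1536)", 2048), ("18 MP (5184 x 3456)", 5184)]:
    print(f"{name}: {width_px / field_of_view_ft:.0f} pixels per foot")
# 3 MP (2048 x 1536): 102 pixels per foot
# 18 MP (5184 x 3456): 259 pixels per foot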
