I have a raw array of 12 floats which contains an affine transformation matrix, and I would like to map it to an Affine3f object.
The input array actually stores the parameters in row-major order, while Affine3f stores them in column-major format, if I am correct.
Is there a nice, Eigen-recommended way to map such an array to an Affine3f object?
You can either cast it to a Transform<float,3,AffineCompact,RowMajor>, which is ugly, or copy the coefficients using a Map:
AffineCompact3f A;
A = Map<Matrix<float,3,4,RowMajor> >(data);
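For example, here is a minimal self-contained sketch (the 12 values below are made up for illustration, stored row by row). Assigning the same Map into a plain Affine3f should also work, since Transform accepts a 3x4 matrix and fills in the implicit bottom row:
#include <Eigen/Geometry>
#include <iostream>

int main() {
    // Hypothetical input: 12 floats in row-major order (3 rows x 4 columns).
    float data[12] = {1, 0, 0, 10,
                      0, 1, 0, 20,
                      0, 0, 1, 30};

    // View the raw buffer as a 3x4 row-major matrix and copy the coefficients.
    Eigen::AffineCompact3f A;
    A = Eigen::Map<Eigen::Matrix<float, 3, 4, Eigen::RowMajor> >(data);

    std::cout << A.matrix() << "\n";                   // 3x4 compact storage
    std::cout << A.translation().transpose() << "\n";  // 10 20 30
}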
What is the correct syntax for passing a variable number of MTLTextures as an array to a fragment shader?
This StackOverflow Question: "How to use texture2d_array array in metal?" mentions the use of:
array<texture2d<half>, 5>
However, this requires specifying the size of the array. In Metal Shading Language Specification.pdf (Sec. 2.11) they also demonstrate this type. However, they also refer to array_ref, but it's not clear to me how to use it, or if it's even allowed as a parameter type for a fragment shader, given this statement:
"The array_ref type cannot be passed as an argument to graphics
and kernel functions."
What I'm currently doing is just declaring the parameter as:
fragment float4 fragmentShader(RasterizerData in [[ stage_in ]],
                               sampler s [[ sampler(0) ]],
                               const array<texture2d<half>, 128> textures [[ texture(0) ]]) {
    const half4 c = textures[in.textureIndex].sample(s, in.coords);
    return float4(c);
}
I use 128 since that is the limit on fragment textures. In any render pass I might use between 1 and n textures, where I ensure that n does not exceed 128. That seems to work for me, but am I Doing It Wrong?
My use-case is drawing a 2D plane that is sub-divided into a bunch of tiles. Each tile's content is sampled from a designated texture in the array based on a pre-computed texture index. The textures are set using setFragmentTexture:atIndex in the correct order at the start of the render pass. The texture index is passed from the vertex shader to the fragment shader.
You should consider an array texture instead of a texture array. That is, a texture whose type is MTLTextureType2DArray. You use the arrayLength property of the texture descriptor to specify how many 2-D slices the array texture contains.
To populate the texture, you specify which slice you're writing to with methods such as -replaceRegion:... or -copyFrom{Buffer,Texture}:...toTexture:....
In a shader, you can specify which element to sample or read from using the array parameter.
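For instance, a hedged sketch of what the fragment function from the question could look like with an array texture (reusing the question's RasterizerData, coords and textureIndex, and assuming the index is convertible to uint):
fragment float4 fragmentShader(RasterizerData in [[ stage_in ]],
                               sampler s [[ sampler(0) ]],
                               texture2d_array<half> textures [[ texture(0) ]]) {
    // The third argument of sample() selects the slice of the array texture.
    const half4 c = textures.sample(s, in.coords, uint(in.textureIndex));
    return float4(c);
}
On the CPU side you would create the texture from an MTLTextureDescriptor with textureType set to MTLTextureType2DArray and arrayLength set to the number of tiles, then bind that single texture once with setFragmentTexture:atIndex:.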
I don't know OpenGL ES, but I must use .obj 3D models in my Android app.
In an .obj file I can find vertices, texcoords and normals, but there are no indices; instead there are face elements.
Can anyone clearly explain how to obtain indices from an .obj file?
Possibly a little hard to write clearly because what OBJ considers a vertex is not what OpenGL considers a vertex. Let's find out...
An OBJ file establishes lists of obj-vertices (v), texture coordinates (vt) and normals (vn). You probably don't ever want to hand these to OpenGL directly (but skip to the end for the caveat). They're just a way for your loading code to establish the meaning of v1, vt3, etc.
The only place that OpenGL-vertices are specified is within the f statement. E.g. v1/vt1/vn1 means "the OpenGL vertex whose location, texture coordinate and normal are the ones specified back in those lists".
So a workable solution to loading is, in pseudocode:
instantiate an empty hash map from v/vt/vn triples to opengl-vertex indices, an empty opengl-vertex list, and an empty list of indices for later submission to glDrawElements;
for each triple in the OBJ file:
look into the hash map to determine whether it is already in the opengl-vertex list and, if so, get the index and add it to your elements list;
if not, then assign the next available index to the triple (so, this is just an incrementing number), put that into the hash map and the elements list, and combine the triple's position, texture coordinate and normal into a new entry in the opengl-vertex list.
You can try to do better than that by attempting to minimise the potential highly random access it implies to your opengl-vertex list at drawing time, but don't prematurely optimise.
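As an illustration, here is a rough C++ sketch of that loop. It assumes the v/vt/vn lists and the 1-based face triples have already been parsed; all of the names and the Vertex layout are invented for the example:
#include <array>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct Vertex { float px, py, pz, u, v, nx, ny, nz; };

// positions/texcoords/normals are the v, vt and vn lists parsed earlier;
// faceTriples holds every v/vt/vn triple from the f statements, in order.
void buildBuffers(const std::vector<std::array<float, 3>>& positions,
                  const std::vector<std::array<float, 2>>& texcoords,
                  const std::vector<std::array<float, 3>>& normals,
                  const std::vector<std::array<int, 3>>& faceTriples,
                  std::vector<Vertex>& glVertices,   // upload with glBufferData
                  std::vector<uint16_t>& glIndices)  // hand to glDrawElements
{
    std::unordered_map<std::string, uint16_t> seen;  // "v/vt/vn" -> opengl-vertex index
    for (const auto& t : faceTriples) {
        const std::string key = std::to_string(t[0]) + "/" +
                                std::to_string(t[1]) + "/" +
                                std::to_string(t[2]);
        const auto it = seen.find(key);
        if (it != seen.end()) {
            glIndices.push_back(it->second);      // triple seen before: reuse its index
        } else {
            const uint16_t index = static_cast<uint16_t>(glVertices.size());
            const auto& p = positions[t[0] - 1];  // OBJ indices are 1-based
            const auto& uv = texcoords[t[1] - 1];
            const auto& n = normals[t[2] - 1];
            glVertices.push_back({p[0], p[1], p[2], uv[0], uv[1], n[0], n[1], n[2]});
            seen.emplace(key, index);
            glIndices.push_back(index);
        }
    }
}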
Caveat:
If your GPU supports vertex texture fetch (i.e. texture sampling within the vertex shader) then you could just supply the triples directly to OpenGL, having accumulated the obj-vertices, etc, into texture maps, and do the indirect lookup in your vertex shader. With vertex texture fetch, textures really just become random access 2d arrays. However, many Android GPUs don't support vertex texture fetch (even if they support ES 3 which ostensibly makes it a requirement, as it allows an implementation to specify that it supports a maximum of zero samplers).
In THREE you can specify a DataTexture with a given data type and format. My shader pipeline normalizes the raw values based on a few user-controlled uniforms.
In the case of a Float32Array, it is very simple:
data = new Float32Array(...)
texture = new THREE.DataTexture(data, columns, rows, THREE.LuminanceFormat, THREE.FloatType)
And, in the shader, the sampled (swizzled) values are the raw, non-normalized values. However, if I use:
data = new Uint8Array(...)
texture = new THREE.DataTexture(data, columns, rows, THREE.LuminanceFormat, THREE.UnsignedByteType);
Then the texture values are normalized to between 0.0 and 1.0 as an input to the pipeline, which is not what I was expecting. Is there a way to prevent this behavior?
Here is an example jsfiddle demonstrating a quick test of what is unexpected (at least for me): http://jsfiddle.net/VsWb9/3796/
three.js r.71
For future reference, this is not currently possible in WebGL. It requires the use of GL_RED_INTEGER and the unsupported usampler2D.
This comment about the internalformat parameter of textures also describes the issue with the internal formats in GL:
For that matter, the format of GL_LUMINANCE says that you're passing either floating-point data or normalized integer data (the type says that it's normalized integer data). Of course, since there's no GL_LUMINANCE_INTEGER (which is how you say that you're passing integer data, to be used with integer internal formats), you can't really use luminance data like this.
Use GL_RED_INTEGER for the format and GL_R8UI for the internal format if you really want 8-bit unsigned integers in your texture. Note that integer texture support requires OpenGL 3.x-class hardware.
That being said, you cannot use sampler2D with an integer texture. If you are using a texture that uses an unsigned integer texture format, you must use usampler2D.
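For reference only, here is a rough sketch of what that quoted advice corresponds to in desktop OpenGL 3.x (so not something three.js r71 / WebGL can do); the names data, columns and rows are taken from the question, and a GL context plus function loader are assumed to exist:
static const char *fragSrc =
    "#version 330 core\n"
    "uniform usampler2D map;              // integer textures are read via usampler2D\n"
    "in vec2 vUv;\n"
    "out vec4 color;\n"
    "void main() {\n"
    "    uint raw = texture(map, vUv).r;  // raw 0..255 value, no normalization\n"
    "    color = vec4(vec3(float(raw) / 255.0), 1.0);\n"
    "}\n";

void uploadUnnormalizedBytes(const unsigned char *data, int columns, int rows)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Integer textures are only complete with NEAREST filtering. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    /* GL_R8UI internal format + GL_RED_INTEGER client format keep the bytes as integers. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, columns, rows, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_BYTE, data);
}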
I want to apply the RenderTransform property to an ellipse, but it gives an error.
I write it as:
Point p = new Point(100,100);
e1.RenderTransform= p;
(Error is :Cannot implicitly convert type 'Windows.Foundation.Point' to 'Windows.UI.Xaml.Media.Transform')
Please help, as I've a deadline to meet.
A Point is not a Transform. Typically you would use a TranslateTransform for simple repositioning, or a CompositeTransform for the standard scale, skew, rotate and translate transforms. There are also separate transforms for rotation, scale and skew, a group transform that lets you combine multiple transforms, and a generic matrix transform for which you need to generate and multiply matrices yourself to get the expected transformation, but which gives you the most freedom, especially when manipulating objects on screen that have already been transformed.
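For example, a minimal sketch of repositioning the ellipse by (100, 100) with a TranslateTransform instead of a Point (e1 is the ellipse from the question):
// Move the ellipse 100 units right and 100 units down.
e1.RenderTransform = new TranslateTransform { X = 100, Y = 100 };

// Or, if you also expect to scale, rotate or skew later, use a CompositeTransform:
e1.RenderTransform = new CompositeTransform { TranslateX = 100, TranslateY = 100 };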
I am quite new to MATLAB/Octave. I loaded an image using Octave's imread() function. I tried to perform a multiplication operation on the matrix but got the following error:
binary operator `*' not implemented for `uint8 matrix' by `matrix' operations
Is there another way to input the image?
I=imread('...');
I=rgb2gray(I);
I=double(I);
% you can perform the multiplication operation now
This usually means that you are trying to multiply arrays of different data types. The easiest thing to do here is to convert the uint8 image into double. You can either use the double() function, which will just cast the values, or use im2double(), which will also normalize the values to be between 0 and 1.
If you're trying to multiply two images (I'm guessing that's what you're trying to do, since the error is for multiplying matrices and not a matrix by a scalar), you should use the immultiply function, which handles the different classes for you.
Simply use immultiply(imgA, imgB) and never again worry about what class imgA and imgB are. Note that you'll need the image package installed and loaded.
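A quick Octave sketch of both approaches (the file names are placeholders):
pkg load image                       % immultiply and im2double come from the image package

A = imread('imageA.png');            % uint8
B = imread('imageB.png');            % uint8

C1 = immultiply(A, B);               % element-wise product; the uint8 class is handled for you
C2 = im2double(A) .* im2double(B);   % or convert first; values are scaled to [0, 1]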