VertexBuffer GetData VertexPositionNormalTexture - XNA 4.0

I have a VertexBuffer with 648 VertexPositionNormalTexture elements: 27 cubes, each holding 24 vertices.
If I want to access the vertices for my first cube I can write:
int startIndex = 0;
VertexPositionNormalTexture[] vertices = new VertexPositionNormalTexture[24];
vertexBuffer.GetData<VertexPositionNormalTexture>(vertices, startIndex, 24);
The problem arises when I want to access my 9th cube (24 * 9 = 216). I have to write:
int startIndex = 216;
VertexPositionNormalTexture[] vertices = new VertexPositionNormalTexture[startIndex + 24];
vertexBuffer.GetData<VertexPositionNormalTexture>(vertices, startIndex, 24);
I have to create 192 extra slots just to access my 24 elements, because GetData copies data to the same index it reads it from. How do I get it to write my 24 elements into a correctly sized array?
All classes, structs and functions are from the XNA Framework 4.0.

Why do you need to use GetData at all?
Keep a reference to your vertex array and work with that array... not with the vertexBuffer...
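If you do need to read the data back, XNA 4.0's VertexBuffer also has a GetData overload that takes a byte offset into the buffer, so startIndex can stay relative to the destination array. A minimal sketch (cubeIndex is illustrative):
int cubeIndex = 9; // which cube to read back
int stride = VertexPositionNormalTexture.VertexDeclaration.VertexStride;
int offsetInBytes = cubeIndex * 24 * stride; // skip the preceding cubes in the buffer
VertexPositionNormalTexture[] vertices = new VertexPositionNormalTexture[24];
// startIndex (0) and elementCount (24) now refer only to the destination array
vertexBuffer.GetData<VertexPositionNormalTexture>(offsetInBytes, vertices, 0, 24, stride);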

Related

Three.js - How to fill Float32Array with equal number of points for different geometries

I have a project that loads various models (.obj, but it could be anything) and generates particles from the geometry positions using Float32Arrays.
Given the geometries of each model are completely different, this causes the number of particles to change depending on which model is used.
The code I'm using to populate the buffer attribute is below:
import { Mesh, Float32BufferAttribute } from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';
const dataSize = 1024;
const modelLoader = new OBJLoader();
const modelObject = await modelLoader.loadAsync('/path/to/model.obj');
const positionData = new Float32Array(dataSize * dataSize * 3);
const modelChildren = modelObject.children as Mesh[];
const bufferPositions = modelChildren
.filter(({ isMesh }) => isMesh)
.map(({ geometry: { attributes } }) => attributes.position.array as Float32Array);
const combinedBuffer = concatFloat32Arrays(bufferPositions); // merge Float32's
for (let index = 0, length = positionData.length; index < length; index += 3) {
positionData[index] = combinedBuffer[index];
positionData[index + 1] = combinedBuffer[index + 1];
positionData[index + 2] = combinedBuffer[index + 2];
}
return new Float32BufferAttribute(positionData, 3);
A portion of the positionData array is empty (i.e. 0), obviously because combinedBuffer[index] is undefined.
Can anyone point me in the right direction?
I basically want an equal number of particles for each geometry, regardless of the model's geometry complexity.
You normally handle this use case by allocating a large enough buffer and then using BufferGeometry.setDrawRange() to decide which part of the data you want to draw. The values of vertices outside the draw range don't matter with this approach.
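For example, a minimal sketch of that approach, reusing positionData and combinedBuffer from the question (setAttribute is the current three.js API; older releases called it addAttribute):
import { BufferGeometry, Float32BufferAttribute } from 'three';

const geometry = new BufferGeometry();
geometry.setAttribute('position', new Float32BufferAttribute(positionData, 3));
// Only the vertices actually copied from the model get drawn; the
// zero-filled remainder of the oversized buffer is never rendered.
geometry.setDrawRange(0, combinedBuffer.length / 3);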

HEALPix with texture UV mapping

I found an implementation of the HEALPix algorithm; here is the documentation.
And the output looks very nice.
The following images show the latitude/longitude conversion to HEALPix areas.
The x-axis goes from 0 to 2 * pi. The y-axis goes from 0 to pi. The grey value encodes the HEALPix pixel ID.
Nside = 1
Nside = 2
Nside = 4
Nside = 8
The different grey values are the IDs of the textures I have to use. That means each HEALPix pixel represents one texture. The missing part is the UV mapping within each HEALPix pixel, as shown below:
nSide = 1 with UV mapping
Right now I am using the function:
void ang2pix_ring( const long nside, double theta, double phi, long *ipix)
which gives me the correct texture ID, but I have no idea how to calculate the UV mapping within each HEALPix pixel.
Is there a way to calculate all four corners of a HEALPix pixel in lat/lon coordinates? Or, even better, a direct calculation of the UV coordinates?
BTW: I am using the RING scheme, but if the NESTED scheme is simpler to calculate with, I would change to that.
After a lot of research I came to a solution for this problem:
First of all, I changed the scheme to NESTED. With the NESTED scheme and a very high nSide value (8192), the value returned by
void ang2pix_nest( const long nside, double theta, double phi, long *ipix)
(the NESTED counterpart of ang2pix_ring) is a long from which the UV coordinates can be read out in the following way:
Bits 26 through 30 represent level 0 (just the 12 base HEALPix pixels).
At higher levels, the bits from 30 down to 26 - (level * 2) represent the HEALPix pixel.
The remaining bits, from 26 - (level * 2) - 1 down to bit 1, encode the UV texture coordinates in the following way:
taking every second bit, the odd bits packed together form the U coordinate and the even bits packed together form the V coordinate.
To normalize these UV coordinates, the packed values need to be divided by pow(2, (26 - level * 2) / 2).
Code says more than 1000 words:
unsigned long ignoreEverySecondBit(unsigned long value, bool odd, unsigned int countBits)
{
    // Packs every second bit of `value` into a contiguous result:
    // odd == true collects bits 1, 3, 5, ...; odd == false collects bits 2, 4, 6, ...
    unsigned long result = 0;
    unsigned long mask = odd ? 0b1 : 0b10;
    countBits = countBits / 2;
    for (unsigned int i = 0; i < countBits; ++i)
    {
        if ((value & mask) != 0)
        {
            result |= 1UL << i; // set bit i directly instead of adding std::pow(2, i)
        }
        mask = mask << 2;
    }
    return result;
}
// Calculate the HEALPix values:
latLonToHealPixNESTED(nSide, theta, phi, &pix);
result.level = level;
result.texture = pix >> (26 - level * 2); // top bits: the texture ID
result.u = static_cast<float>(ignoreEverySecondBit(pix, true, 26 - level * 2));  // odd bits
result.v = static_cast<float>(ignoreEverySecondBit(pix, false, 26 - level * 2)); // even bits
// Normalize the packed UV values to [0, 1):
result.u = result.u / pow(2, (26 - level * 2) / 2);
result.v = result.v / pow(2, (26 - level * 2) / 2);
And of course a few images to show the results. The blue channel represents the texture ID, the red channel the U coordinate, and the green channel the V coordinate:
Level 0
Level 1
Level 2
Level 3
Level 4
I hope this solution will help others too.

How to get vertices of obj model object in Three.JS?

After loading a .obj model in Three.js I am unable to find the vertex data, which I need in order to apply collision detection as suggested by this answer:
var loader = new THREE.OBJLoader();
loader.load('models/wall.obj', function ( object ) {
object.traverse( function ( node ) {
if ( node.isMesh ) {
console.log(node);
}
});
scene.add( object );
});
In the mesh there is geometry.attributes.position.array, but I am unable to find "vertices" anywhere in the object.
Right now I am trying to convert the position.array data to vertices, but the code below is not working. This answer points out the problem correctly, but I am unable to use it to solve the issue:
var tempVertex = new THREE.Vector3();
// set tempVertex based on information from mesh.geometry.attributes.position
mesh.localToWorld(tempVertex);
// tempVertex is converted from local coordinates into world coordinates,
// which is its "after mesh transformation" position
geometry.attributes.position.array IS the vertices. Every three values make up one vertex. You will also want to look at the index property (geometry.index), because that is a list of indices into the position array, defining the vertices that make up a shape. In the case of a Mesh defined as individual triangles, every three indices make up one triangle. (Tri-strip data is slightly different, but the concept of referencing vertex values by the index is the same.)
You could alternatively use the attribute convenience functions:
BufferAttribute.getX
BufferAttribute.getY
BufferAttribute.getZ
Note that these functions index directly into the attribute array, so to honor the index buffer you first look the vertex number up in geometry.index. If you want the first vertex of the first triangle (index entry 0):
let pos = geometry.attributes.position;
let idx = geometry.index;
let vertex = new THREE.Vector3( pos.getX(idx.getX(0)), pos.getY(idx.getX(0)), pos.getZ(idx.getX(0)) );
This is equivalent to:
let pos = geometry.attributes.position.array;
let idx = geometry.index.array;
let size = geometry.attributes.position.itemSize;
let vertex = new THREE.Vector3( pos[(idx[0] * size) + 0], pos[(idx[0] * size) + 1], pos[(idx[0] * size) + 2] );
Once you have your vertex, you can use mesh.localToWorld to convert the point to world space.
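Putting the pieces together, a minimal sketch (assuming an indexed geometry; a vertex shared by several triangles will appear once per index entry):
mesh.updateMatrixWorld(); // make sure matrixWorld is current before converting
var worldVertices = [];
var pos = mesh.geometry.attributes.position;
var idx = mesh.geometry.index;
for (var i = 0; i < idx.count; i++) {
    var vi = idx.getX(i); // the vertex referenced by the i-th index entry
    var vertex = new THREE.Vector3(pos.getX(vi), pos.getY(vi), pos.getZ(vi));
    mesh.localToWorld(vertex); // converts the point to world space in place
    worldVertices.push(vertex);
}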

efficiently calculate locations for rectangles in a unit grid

I'm working on a specific layout algorithm to display photos in a unit-based grid. The desired behaviour is to have every photo placed in the next available space, line by line.
Since there could easily be a thousand photos whose positions need to be calculated at once, efficiency is very important.
Has this problem maybe been solved with an existing algorithm already?
If not, how can I approach it to be as efficient as possible?
Edit
Regarding the positioning:
What I'm basically doing right now is iterating every line of the grid cell by cell until I find room to fit the element. That's why 4 is placed next to 2.
How about keeping a list of next available row by width? Initially the next-available-row list looks like:
(0,0,0,0,0)
When you've added the first photo, it looks like
(0,0,0,0,1)
Then
(0,0,0,2,2)
Then
(0,0,0,3,3)
Then
(1,1,1,4,4)
And the final photo doesn't change the list.
This could be efficient because you're only maintaining a small list and updating a little bit at each iteration (versus searching the entire space every time). It gets a little complicated: there could be a situation (with a tall photo) where the nominal next available row doesn't work, and then you could fall back to the existing approach. But overall I think this should save a fair amount of time, at the cost of a little added complexity.
Update
In response to @matteok's request for a coordinateForPhoto(width, height) method:
Let's say I called that array "nextAvailableRowByWidth".
public Coordinate coordinateForPhoto(int width, int height) {
    int rowIndex = nextAvailableRowByWidth[width - 1]; // because arrays are zero-indexed
    int[] row = space[rowIndex];
    int column = findConsecutiveEmptySpace(width, row);
    for (int i = 1; i < height; i++) {
        if (!consecutiveEmptySpaceExists(width, space[rowIndex + i], column)) {
            return null;
            // return and fall back on the slow method, starting at rowIndex
        }
    }
    // now either you broke out and are solving some other way,
    // or your starting point is rowIndex, column. Done.
    return new Coordinate(rowIndex, column);
}
Update #2
In response to @matteok's request for how to update the nextAvailableRowByWidth array:
OK, so you've just placed a new photo of height H and width W at row R. Any elements in the array which are less than R don't change (because this change didn't affect their row, so if there were 3 consecutive spaces available in the row before placing the photo, there are still 3 consecutive spaces available in it after). Every element which is in the range (R, R+H) needs to be checked, because it might have been affected. Let's postulate a method maxConsecutiveBlocksInRow() - because that's easy to write, right?
public void updateAvailableAfterPlacing(int W, int H, int R) {
for (int i = 0; i < nextAvailableRowByWidth.length; i++) {
if (nextAvailableRowByWidth[i] < R) {
continue;
}
int r = R;
while (maxConsecutiveBlocksInRow(r) < i + 1) {
r++;
}
nextAvailableRowByWidth[i] = r;
}
}
I think that should do it.
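For completeness, a minimal sketch of the postulated maxConsecutiveBlocksInRow(), assuming space[r][c] == 0 marks an empty cell:
private int maxConsecutiveBlocksInRow(int r) {
    int best = 0;
    int run = 0;
    for (int c = 0; c < space[r].length; c++) {
        if (space[r][c] == 0) {
            run++; // extend the current run of empty cells
            best = Math.max(best, run);
        } else {
            run = 0; // the run is broken by an occupied cell
        }
    }
    return best;
}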
How about a matrix (your example would be 5x9) where each cell has a value representing its distance from the top left corner (for instance (row + 1) * (column + 1); the +1 is only necessary if your first row and column are 0)? In this matrix you look for the area which has the lowest value (summing up the values of the empty cells).
A 2nd matrix (or a 3rd dimension of the first matrix) stores the status of each cell.
edit:
int[][] grid = new int[9][5];
int[] filledRows = new int[9];
int photoWidth = 2;
int photoHeight = 1;
int emptyRowCounter = 0;
for (int i = 0; i < grid.length; i++) {
    for (int m = 0; m < filledRows.length; m++) {
        // skip rows already known to be completely filled
        if (filledRows[m] - (photoHeight - 1) > i || filledRows[m] + (photoHeight - 1) < i) {
            for (int j = 0; j < grid[i].length; j++) {
                if (grid[i][j] == 0) {
                    boolean photoFits = true;
                    for (int k = 0; k < photoWidth; k++) {
                        for (int l = 0; l < photoHeight; l++) {
                            if (grid[i + l][j + k] != 0) {
                                photoFits = false;
                            }
                        }
                    }
                    if (photoFits) {
                        // place photo at i, j
                    }
                } else {
                    emptyRowCounter++;
                }
            }
            if (emptyRowCounter == 5) {
                filledRows[i] = 1;
            }
            emptyRowCounter = 0;
        }
    }
}
In the gif you have above, it turned out nicely that there was a photo (5) that could fit into the gap under (1) and to the left of (2). My intuition suggests we want to avoid creating gaps like that. Here is an idea that should avoid these gaps.
Maintain a list of "open regions", where an open region has a int leftBoundary, an int topBoundary, and an optional int bottomBoundary. The first open region is just the whole grid (leftBoundary:0, topBoundary: 0, bottom: null).
Sort the photos by height, breaking ties by width.
Until you have placed all photos:
Choose the tallest photo (in case of ties, choose the widest of the tallest photos). Find the first open region it can fit in (such that grid.Width - region.leftBoundary >= photo.Width). Place the photo at the top left of this region. When you place this photo, it may span the entire width or height of the region.
If it spans both the width and the height of the region, the region is filled! Remove this region from the list of open regions.
If it spans the width, but not the height, add the photo's height to the topBoundary of the region.
If it spans the height, but not the width, add the photo's width to the leftBoundary of the region.
If it does not span the height or width of the region, we are going to conceptually divide this region into two: one region will cover the space directly to the right of this photo (call it rightRegion), and the other will cover the space below the photo (call it belowRegion).
rightRegion = {
leftBoundary = parentRegion.leftBoundary + photo.width,
topBoundary = parentRegion.topBoundary,
bottomBoundary = parentRegion.topBoundary + photo.height
}
belowRegion = {
leftBoundary = 0,
topBoundary = parentRegion.topBoundary + photo.height,
bottomBoundary = parentRegion.bottomBoundary
}
Replace the current region in the list of open regions with rightRegion, and insert belowRegion directly after rightRegion.
You can visualize how this algorithm would work on your example: First, it would sort the photos: (2,3,4,1,5).
It considers 2, which fits into the first region (the whole grid). When it places 2 at the top left, it splits that region into the space directly to the right of 2, and the space below 2.
Then, it considers 3. It considers the open regions in turn. The first open region is to the right of 2. 3 fits there, so that's where it goes. It spans the width of the region, so the region's topBoundary gets adjusted downward.
Then, it considers 4. It again fits in the first open region, so it places 4 there. 4 spans the height of the region, so the region's leftBoundary gets adjusted rightward.
Then, 1 gets put in the 1x1 gap to the right of 4, filling its region. Finally, 5 gets put just below 2.
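A hedged Java sketch of the whole scheme, in the spirit of the earlier coordinateForPhoto example (Region and Photo are hypothetical helper classes; Photo carries int width, height, x, y; requires java.util.List, ArrayList and Comparator):
class Region {
    int leftBoundary;
    int topBoundary;
    Integer bottomBoundary; // null = open all the way to the bottom of the grid

    Region(int left, int top, Integer bottom) {
        leftBoundary = left;
        topBoundary = top;
        bottomBoundary = bottom;
    }
}

void placeAll(List<Photo> photos, int gridWidth, int gridHeight) {
    List<Region> open = new ArrayList<>();
    open.add(new Region(0, 0, null)); // the whole grid
    // tallest first, ties broken by width
    photos.sort(Comparator.comparingInt((Photo p) -> p.height)
                          .thenComparingInt((Photo p) -> p.width)
                          .reversed());
    for (Photo photo : photos) {
        for (int i = 0; i < open.size(); i++) {
            Region r = open.get(i);
            int bottom = (r.bottomBoundary == null) ? gridHeight : r.bottomBoundary;
            int regionWidth = gridWidth - r.leftBoundary;
            int regionHeight = bottom - r.topBoundary;
            if (photo.width > regionWidth || photo.height > regionHeight) {
                continue; // doesn't fit here, try the next open region
            }
            photo.x = r.leftBoundary; // place at the region's top left
            photo.y = r.topBoundary;
            if (photo.width == regionWidth && photo.height == regionHeight) {
                open.remove(i); // the region is filled
            } else if (photo.width == regionWidth) {
                r.topBoundary += photo.height; // spans the width
            } else if (photo.height == regionHeight) {
                r.leftBoundary += photo.width; // spans the height
            } else {
                // split into the space right of the photo and the space below it
                Region right = new Region(r.leftBoundary + photo.width,
                                          r.topBoundary,
                                          r.topBoundary + photo.height);
                Region below = new Region(0,
                                          r.topBoundary + photo.height,
                                          r.bottomBoundary);
                open.set(i, right);
                open.add(i + 1, below);
            }
            break; // photo placed, move on to the next one
        }
    }
}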

d3.geo buffer around a feature

Is it possible to draw a buffer (another feature) around a geographic feature in d3.js at a fixed distance (kilometers or miles)?
For instance, how would I draw a path around a point that extends 25 miles from that point in every direction? I've tried using d3.geo.circle and passing a fraction of a degree (25 miles / approximately 69 miles per degree of latitude, i.e. 25 / 69), but I realize that although d3.geo.circle handles the reprojection of degrees, it does not account for the differing lengths of longitudinal degrees.
buffer = d3.geo.circle().angle(25/69).origin(function(x, y) { return [x, y]; })
I'm borrowing from this:
http://bl.ocks.org/mbostock/5628479
Update:
It looks like what I'd like to do is create a geodesic buffer.
AFAIK doing this in D3 itself would be quite painful, as it's not really meant to create/modify features. I would recommend doing this as a preprocessing step in a GIS program such as QGIS, which allows you to buffer features in a number of ways. From there, you can export as GeoJSON and use with D3 in the usual way.
I was able to create a buffer around a point by drawing a path from a series of destination points given a distance and bearing from a start point.
See http://www.movable-type.co.uk/scripts/latlong.html for JavaScript implementation
Like this:
function drawBuffer(lat, long, distance){
var intervals = 18;
var intervalAngle = (360 / intervals);
var pointsData = [];
for(var i = 0; i < intervals; i++){
pointsData.push(getDestinationPoint(lat, long, i * intervalAngle, distance)); // See link
}
// Draw path using pointsData;
}
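getDestinationPoint is not shown above; a minimal sketch based on the destination-point formula from the movable-type page linked above (distance here is in kilometers, matching the 6371 km Earth radius; swap in 3959 for miles):
function getDestinationPoint(lat, long, bearing, distance) {
    var R = 6371; // mean Earth radius in km
    var d = distance / R; // angular distance on the sphere
    var brng = bearing * Math.PI / 180;
    var lat1 = lat * Math.PI / 180;
    var lon1 = long * Math.PI / 180;
    var lat2 = Math.asin(Math.sin(lat1) * Math.cos(d) +
                         Math.cos(lat1) * Math.sin(d) * Math.cos(brng));
    var lon2 = lon1 + Math.atan2(Math.sin(brng) * Math.sin(d) * Math.cos(lat1),
                                 Math.cos(d) - Math.sin(lat1) * Math.sin(lat2));
    // return [longitude, latitude] in degrees, the order d3.geo expects
    return [lon2 * 180 / Math.PI, lat2 * 180 / Math.PI];
}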
