Mesh Simplification with Assimp and OpenMesh - algorithm

A few days ago, I asked a question about how to use edge collapse with Assimp. Smoothing the obj and removing duplicated vertices in 3D software solved the basic problems that kept edge collapse from working. I mean it works in the sense that the mesh can now be simplified by MeshLab, like this:
It looks good in MeshLab, but I then tried it in my engine, which uses Assimp and OpenMesh. The problem is that Assimp imports duplicated vertices and indices, which leaves some halfedges without an opposite pair (is this what is called non-manifold?).
Here is a snapshot of the result using OpenMesh's quadric decimation:
To pin down the problem, I ran the same path without decimation and converted the OpenMesh data structure straight back. Everything works fine as expected (I mean the result without decimation).
The code I use to decimate the mesh:
Loader::BasicData Loader::TestEdgeCollapse(float vertices[], int vertexLength, int indices[], int indexLength, float texCoords[], int texCoordLength, float normals[], int normalLength)
{
    // Mesh type
    typedef OpenMesh::TriMesh_ArrayKernelT<> OPMesh;
    // Decimater type
    typedef OpenMesh::Decimater::DecimaterT< OPMesh > OPDecimater;
    // Decimation Module Handle type
    typedef OpenMesh::Decimater::ModQuadricT< OPMesh >::Handle HModQuadric;

    OPMesh mesh;
    // request the per-vertex properties before writing them
    if (texCoords != nullptr)
        mesh.request_vertex_texcoords2D();
    if (normals != nullptr)
        mesh.request_vertex_normals();

    std::vector<OPMesh::VertexHandle> vhandles;
    int iteration = 0;
    for (int i = 0; i < vertexLength; i += 3)
    {
        vhandles.push_back(mesh.add_vertex(OpenMesh::Vec3f(vertices[i], vertices[i + 1], vertices[i + 2])));
        if (texCoords != nullptr)
            mesh.set_texcoord2D(vhandles.back(), OpenMesh::Vec2f(texCoords[iteration * 2], texCoords[iteration * 2 + 1]));
        if (normals != nullptr)
            mesh.set_normal(vhandles.back(), OpenMesh::Vec3f(normals[i], normals[i + 1], normals[i + 2]));
        iteration++;
    }

    // build the triangle faces from the index buffer
    for (int i = 0; i < indexLength; i += 3)
        mesh.add_face(vhandles[indices[i]], vhandles[indices[i + 1]], vhandles[indices[i + 2]]);

    // set up the quadric decimater
    OPDecimater decimater(mesh);
    HModQuadric hModQuadric;
    decimater.add(hModQuadric);
    decimater.module(hModQuadric).unset_max_err();
    decimater.initialize();
    //decimater.decimate(); // without this, everything is fine as expected.
    mesh.garbage_collection();

    // allocate the output buffers from the (possibly reduced) mesh size
    int verticesSize = mesh.n_vertices() * 3;
    float* newVertices = new float[verticesSize];
    int indicesSize = mesh.n_faces() * 3;
    int* newIndices = new int[indicesSize];
    float* newTexCoords = nullptr;
    int texCoordSize = mesh.n_vertices() * 2;
    if (mesh.has_vertex_texcoords2D())
        newTexCoords = new float[texCoordSize];
    float* newNormals = nullptr;
    int normalSize = mesh.n_vertices() * 3;
    if (mesh.has_vertex_normals())
        newNormals = new float[normalSize];

    // copy the vertex data back out
    Loader::BasicData data;
    int index = 0;
    for (auto v_it = mesh.vertices_begin(); v_it != mesh.vertices_end(); ++v_it)
    {
        OpenMesh::Vec3f &point = mesh.point(*v_it);
        newVertices[index * 3] = point[0];
        newVertices[index * 3 + 1] = point[1];
        newVertices[index * 3 + 2] = point[2];
        if (mesh.has_vertex_texcoords2D())
        {
            auto &tex = mesh.texcoord2D(*v_it);
            newTexCoords[index * 2] = tex[0];
            newTexCoords[index * 2 + 1] = tex[1];
        }
        if (mesh.has_vertex_normals())
        {
            auto &normal = mesh.normal(*v_it);
            newNormals[index * 3] = normal[0];
            newNormals[index * 3 + 1] = normal[1];
            newNormals[index * 3 + 2] = normal[2];
        }
        index++;
    }

    // rebuild the index buffer from the faces
    index = 0;
    for (auto f_it = mesh.faces_begin(); f_it != mesh.faces_end(); ++f_it)
        for (auto fv_it = mesh.fv_ccwiter(*f_it); fv_it.is_valid(); ++fv_it)
            newIndices[index++] = fv_it->idx();

    data.Indices = newIndices;
    data.IndicesLength = indicesSize;
    data.Vertices = newVertices;
    data.VerticesLength = verticesSize;
    data.TexCoords = nullptr;
    data.TexCoordLength = -1;
    data.Normals = nullptr;
    data.NormalLength = -1;
    if (mesh.has_vertex_texcoords2D())
    {
        data.TexCoords = newTexCoords;
        data.TexCoordLength = texCoordSize;
    }
    if (mesh.has_vertex_normals())
    {
        data.Normals = newNormals;
        data.NormalLength = normalSize;
    }
    return data;
}
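As an aside, once decimation works, it helps to give it an explicit target instead of letting it run to completion. A minimal sketch of what I mean, assuming the same typedefs and mesh as above (the 50% target is just an example value; as far as I know, decimate_to is the DecimaterT call that collapses edges until roughly the requested vertex count remains):

// Sketch: same setup as above, but decimating toward an explicit target.
OPDecimater decimater2(mesh);
HModQuadric hModQuadric2;
decimater2.add(hModQuadric2);
decimater2.module(hModQuadric2).unset_max_err();
decimater2.initialize();
size_t target = mesh.n_vertices() / 2;            // keep roughly half the vertices (example value)
size_t collapses = decimater2.decimate_to(target); // stops near the target count
mesh.garbage_collection();                        // compact the arrays after the collapses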
I also provide the tree obj I tested, and the face data generated by Assimp (fetched from the Visual Studio debugger), which shows the problem: some of the indices cannot find their opposite pair.

After a few weeks of thinking about this and failing, I had hoped for some academic/mathematical solution to automatically generate these decimated meshes, but for now I am looking for a simpler way to implement it. What I am able to do is change the structure so that multiple objects (file.obj) are loaded into a single custom object (class obj), and switch objects when needed. The benefit of this is that I control what is presented and can ignore the algorithm problem altogether.
By the way, I list some obstacles that pushed me back to the simple way.
Assimp's unique indices and vertices: nothing wrong per se, but the algorithm has no way to build the adjacency half-edge structure from them (a possible workaround is sketched after this list).
OpenMesh can read object files (*.obj) directly with its read_mesh function, but the documentation has few examples and it is hard to use in my engine.
Writing a custom 3D model importer for arbitrary formats is hard.
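For anyone who still wants the algorithmic route: the usual workaround for the first obstacle is to weld vertices by position before building the half-edge structure, so vertices that were duplicated for UV seams or normals collapse back into one topological vertex and opposite halfedges can pair up. (I believe Assimp's aiProcess_JoinIdenticalVertices does not help here, since it only merges vertices that are identical in every attribute.) A minimal sketch using the same variables as the loader above; the eps tolerance and the quantization scheme are my own assumptions:

#include <cmath>
#include <cstdint>
#include <map>
#include <tuple>

// Position-only welding: quantize each position so nearly identical
// floats land in the same bucket, and reuse the handle if seen before.
std::vector<int> remap(vertexLength / 3);   // old vertex index -> welded index
std::vector<OPMesh::VertexHandle> welded;
std::map<std::tuple<int64_t, int64_t, int64_t>, int> bucket;
const float eps = 1e-5f;                    // weld tolerance (assumed)
for (int i = 0; i < vertexLength; i += 3)
{
    auto key = std::make_tuple((int64_t)std::llround(vertices[i]     / eps),
                               (int64_t)std::llround(vertices[i + 1] / eps),
                               (int64_t)std::llround(vertices[i + 2] / eps));
    auto it = bucket.find(key);
    if (it == bucket.end())
    {
        welded.push_back(mesh.add_vertex(
            OpenMesh::Vec3f(vertices[i], vertices[i + 1], vertices[i + 2])));
        it = bucket.emplace(key, (int)welded.size() - 1).first;
    }
    remap[i / 3] = it->second;
}
// faces are then built with welded[remap[indices[k]]] so that shared
// edges reference the same vertices and opposite halfedges exist

The cost is that per-vertex texture coordinates from the seams are lost; they would have to live on halfedges instead (OpenMesh has halfedge texcoords for this).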
In conclusion, there are two ways to make level of detail work in an engine: one is to use a mesh simplification algorithm with extra testing to ensure quality; the other is to simply switch between 3D models made in 3D software. The latter is not automatic, but it is stable. I use the second method, and I show the result here :)
However, this is not a real solution to my question, so I won't mark it as the answer.

Related

Paper cut algorithm

I want to create a function to determine the maximum number of pieces of a given paper size that can be cut from a parent sheet.
The formula above is still not optimal: it produces at most 32 pieces per sheet.
I want it like below.
This seems to be a very difficult problem to solve optimally. See http://lagrange.ime.usp.br/~lobato/packing/ for a discussion of a 2008 paper claiming that the problem is believed (but not proven) to be NP-hard. The researchers found some approximation algorithms and implemented them on that website.
The following solution uses Top-Down Dynamic Programming to find optimal solutions to this problem. I am providing this solution in C#, which shouldn't be too hard to convert into the language of your choice (or whatever style of pseudocode you prefer). I have tested this solution on your specific example and it completes in less than a second (I'm not sure how much less than a second).
It should be noted that this solution assumes that only guillotine cuts are allowed. This is a common restriction for real-world 2D Stock-Cutting applications and it greatly simplifies the solution complexity. However, CS, Math and other programming problems often allow all types of cutting, so in that case this solution would not necessarily find the optimal solution (but it would still provide a better heuristic answer than your current formula).
First, we need a value-structure to represent the size of the starting stock, the desired rectangle(s), and the pieces cut from the stock (this needs to be a value type because it will be used as the key to our memoization cache and other collections, and we need to compare the actual values rather than an object reference address):
public struct Vector2D
{
    public int X;
    public int Y;

    public Vector2D(int x, int y)
    {
        X = x;
        Y = y;
    }
}
Here is the main method to be called. Note that all values need to be integers; for the specific case above this just means multiplying everything by 100. These methods require integers but are otherwise scale-invariant, so multiplying by 100 or 1000 or whatever won't affect performance (just make sure the values don't overflow an int).
public int SolveMaxCount1R(Vector2D Parent, Vector2D Item)
{
    // make a list to hold both the item size and its rotation
    List<Vector2D> itemSizes = new List<Vector2D>();
    itemSizes.Add(Item);
    if (Item.X != Item.Y)
    {
        itemSizes.Add(new Vector2D(Item.Y, Item.X));
    }
    int solution = SolveGeneralMaxCount(Parent, itemSizes.ToArray());
    return solution;
}
Here is an example of how you would call this method with your parameter values. In this case I have assumed that all of the solution methods are part of a class called SolverClass:
SolverClass solver = new SolverClass();
int count = solver.SolveMaxCount1R(new Vector2D(2500, 3800), new Vector2D(425, 550));
//(all units are in tenths of a millimeter to make everything integers)
The main method calls a general solver method for this type of problem (that is not restricted to just one size rectangle and its rotation):
public int SolveGeneralMaxCount(Vector2D Parent, Vector2D[] ItemSizes)
{
    // determine the maximum x and y scaling factors using GCDs (Greatest
    // Common Divisor)
    List<int> xValues = new List<int>();
    List<int> yValues = new List<int>();
    foreach (Vector2D size in ItemSizes)
    {
        xValues.Add(size.X);
        yValues.Add(size.Y);
    }
    xValues.Add(Parent.X);
    yValues.Add(Parent.Y);
    int xScale = NaturalNumbers.GCD(xValues);
    int yScale = NaturalNumbers.GCD(yValues);

    // rescale our parameters
    Vector2D parent = new Vector2D(Parent.X / xScale, Parent.Y / yScale);
    var baseShapes = new Dictionary<Vector2D, Vector2D>();
    foreach (var size in ItemSizes)
    {
        var reducedSize = new Vector2D(size.X / xScale, size.Y / yScale);
        baseShapes.Add(reducedSize, reducedSize);
    }

    // determine the minimum values that an allowed item shape can fit into
    _xMin = int.MaxValue;
    _yMin = int.MaxValue;
    foreach (var size in baseShapes.Keys)
    {
        if (size.X < _xMin) _xMin = size.X;
        if (size.Y < _yMin) _yMin = size.Y;
    }

    // create the memoization cache for shapes
    Dictionary<Vector2D, SizeCount> shapesCache = new Dictionary<Vector2D, SizeCount>();

    // find the solution pattern with the most finished items
    int best = solveGMC(shapesCache, baseShapes, parent);
    return best;
}
private int _xMin;
private int _yMin;
The general solution method calls a recursive worker method that does most of the actual work.
private int solveGMC(
    Dictionary<Vector2D, SizeCount> shapeCache,
    Dictionary<Vector2D, Vector2D> baseShapes,
    Vector2D sheet)
{
    // have we already solved this size?
    if (shapeCache.ContainsKey(sheet)) return shapeCache[sheet].ItemCount;

    SizeCount item = new SizeCount(sheet, 0);
    if ((sheet.X < _xMin) || (sheet.Y < _yMin))
    {
        // if it's too small in either dimension then this is a scrap piece
        item.ItemCount = 0;
    }
    else // try every way of cutting this sheet (guillotine cuts only)
    {
        int child0;
        int child1;

        // try every size of horizontal guillotine cut
        for (int c = sheet.X / 2; c > 0; c--)
        {
            child0 = solveGMC(shapeCache, baseShapes, new Vector2D(c, sheet.Y));
            child1 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X - c, sheet.Y));
            if (child0 + child1 > item.ItemCount)
            {
                item.ItemCount = child0 + child1;
            }
        }

        // try every size of vertical guillotine cut
        for (int c = sheet.Y / 2; c > 0; c--)
        {
            child0 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X, c));
            child1 = solveGMC(shapeCache, baseShapes, new Vector2D(sheet.X, sheet.Y - c));
            if (child0 + child1 > item.ItemCount)
            {
                item.ItemCount = child0 + child1;
            }
        }

        // if no children returned finished items, then the sheet is
        // either scrap or a finished item itself
        if (item.ItemCount == 0)
        {
            if (baseShapes.ContainsKey(item.Size))
            {
                item.ItemCount = 1;
            }
            else
            {
                item.ItemCount = 0;
            }
        }
    }

    // add the item to the cache before we return it
    shapeCache.Add(item.Size, item);
    return item.ItemCount;
}
Finally, the general solution method uses a GCD function to rescale the dimensions to achieve scale-invariance. This is implemented in a static class called NaturalNumbers. I have included the relevant parts of this class below:
static class NaturalNumbers
{
    /// <summary>
    /// Returns the Greatest Common Divisor of two natural numbers.
    /// Returns Zero if either number is Zero,
    /// Returns One if either number is One and both numbers are >Zero
    /// </summary>
    public static int GCD(int a, int b)
    {
        if ((a == 0) || (b == 0)) return 0;
        if (a >= b)
            return gcd_(a, b);
        else
            return gcd_(b, a);
    }

    /// <summary>
    /// Returns the Greatest Common Divisor of a list of natural numbers.
    /// (Note: will run fastest if the list is in ascending order)
    /// </summary>
    public static int GCD(IEnumerable<int> numbers)
    {
        // parameter checks
        if (numbers == null || numbers.Count() == 0) return 0;
        int g = numbers.First();
        if (g <= 1) return g;

        int i = 0;
        foreach (int n in numbers)
        {
            if (i == 0)
                g = n;
            else
                g = GCD(n, g);
            if (g <= 1) return g;
            i++;
        }
        return g;
    }

    // Euclidean method with Euclidean Division,
    // From: https://en.wikipedia.org/wiki/Euclidean_algorithm
    private static int gcd_(int a, int b)
    {
        while (b != 0)
        {
            int t = b;
            b = (a % b);
            a = t;
        }
        return a;
    }
}
Please let me know of any problems or questions you might have with this solution.
Oops, forgot that I was also using this class:
public class SizeCount
{
    public Vector2D Size;
    public int ItemCount;

    public SizeCount(Vector2D itemSize, int itemCount)
    {
        Size = itemSize;
        ItemCount = itemCount;
    }
}
As I mentioned in the comments, it would actually be pretty easy to factor this class out of the code, but it's still in there right now.

Copy an Eigen matrix of vectors

I have:
A (a matrix of vectors, each of length depth) is 5x5 (5 rows and 5 cols).
depth = 3 (the length of the vector in each cell of matrix A).
B (a matrix of single values) is 75 x Any (5*5*3 rows and Any cols).
x_size_kernel = 5.
block_idx is the column index; here, for example, it is 0 (only the first column of matrix B).
The task in this simple, strict example is to copy all vectors of matrix A into one column (the first) of matrix B.
This is how I currently solve the problem (a concrete example with precise data):
auto depth = 3;                          // length of the vector in each cell of A
auto x_size_kernel = 5, y_size_kernel = 5;

Eigen::MatrixXf B;
B = Eigen::MatrixXf(x_size_kernel * y_size_kernel * depth, 100).setZero();

Eigen::Matrix<Eigen::VectorXf, Eigen::Dynamic, Eigen::Dynamic> A;
A.resize(5, 5);
for (auto yy = 0; yy < A.rows(); yy++) {
    for (auto xx = 0; xx < A.cols(); xx++) {
        A(yy, xx).resize(depth);
    }
}

auto block_idx = 0;
// and here is the whole copy for one column of matrix B
for (auto my = 0; my < x_size_kernel; my++) {
    for (auto mx = 0; mx < x_size_kernel; mx++) {
        // add the next cell's vector into the column block
        B.col(block_idx).
            segment(mx * depth + my * x_size_kernel * depth, depth).noalias() =
                A(my, mx);
    }
}
But the above code is very slow, so I need faster code. Maybe somebody knows how to copy the data this way in a single Eigen pass.
Thank you for helping.
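One direction that should give the single-pass copy (a sketch under my own assumptions; A_flat and k are names I introduce): if the kernel can be stored as one depth x (k*k) matrix instead of a matrix of heap-allocated vectors, Eigen's column-major storage makes the whole block contiguous, and the copy into a column of B collapses into a single assignment through an Eigen::Map:

#include <Eigen/Dense>

const int k = 5, depth = 3;
// column (my * k + mx) of A_flat plays the role of the old A(my, mx)
Eigen::MatrixXf A_flat(depth, k * k);
Eigen::MatrixXf B = Eigen::MatrixXf::Zero(k * k * depth, 100);
int block_idx = 0;
// one contiguous copy: the Map views A_flat's storage as a 75-element vector
B.col(block_idx) = Eigen::Map<const Eigen::VectorXf>(A_flat.data(), A_flat.size());

The element order matches the original loop, since the old offset mx * depth + my * x_size_kernel * depth is exactly where column my * k + mx of A_flat lands in memory.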

Cuckoo Search with Levy Flight in Java

I'm trying to learn the concepts behind the cuckoo search algorithm. Honestly, I'm learning cuckoo search from Java source code. We know that cuckoo search uses a Levy distribution for the random walk. I have Java source code that implements cuckoo search optimization, but I doubt that it actually uses a Levy distribution. Can anyone help me inspect whether my source code uses a Levy distribution or not?
This is the method that implements the random walk:
public CSSolution randomWalk(OptimizationProblem prob, String distribution) {
    int n = prob.getNumVar(); // 2

    // creates a neighborhood of size 1 times the scaling factor
    double distanceSquared = Math.pow(rand.nextDouble() *
            prob.getScalingFactor(), 2);
    System.out.println("distance Squared : " + distanceSquared);

    // creates an ArrayList from 0 to n-1 (for indexing purposes only)
    ArrayList<Integer> varIndices = new ArrayList<Integer>(n);
    for (int i = 0; i < n; i++) {
        varIndices.add(i, i);
    }

    ArrayList<Double> vars = this.getVars();
    CSSolution newSol = new CSSolution(this.numVars);
    newSol.initializeWithNull();
    ArrayList<Double> newVars = newSol.getVars();

    for (int i = 0; i < n; i++) {
        /* Chooses a random variable index from the indices
         * of the remaining/unwalked variables. */
        int index = rand.nextInt(varIndices.size());
        // Finds the variable value that this index corresponds to.
        int varIndex = varIndices.get(index);
        // System.out.println("varIndicesSize:"+varIndices.size()+" index: "+index+" varIndex: "+varIndex);
        double curVar = vars.get(varIndex);

        // use the requested distribution to generate a random double [0,1)
        // (note: strings must be compared with equals(), not ==)
        double r;
        if (distribution.equals("weibull")) {
            r = weibull.random(1.5, 1, new uniform());
        } else if (distribution.equals("levy")) {
            r = 0.0; // placeholder only -- this is NOT a Levy sample
        } else {
            r = rand.nextDouble();
        }

        // alters this variable coefficient by adding a random step between (-distance, distance)
        double distance = Math.sqrt(distanceSquared);
        System.out.println("distance : " + distance + " distance Squared : " + distanceSquared);
        double varStep = r * distance * 2 - distance;
        double newVar = curVar + varStep;
        // System.out.println("x"+varIndex+" : "+curVar+" to "+newVar);
        newVars.set(varIndex, newVar);

        // removes the variable that has already been visited
        varIndices.remove(index);
        // updates the remaining distance for the next loop iteration
        distanceSquared -= Math.pow(varStep, 2);
    }
    // System.out.println("");
    return newSol;
}
source code : https://github.com/cloudrave/Optimizer
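To answer the actual question: no, this is not a Levy distribution. The levy branch above returns a constant 0.0, so the walk degenerates to a fixed step of -distance. A common way to draw a Levy-stable step is Mantegna's algorithm; here is a minimal sketch of the formula (in C++ rather than Java for brevity, since the formula itself is the point; beta = 1.5 is the value usually quoted in cuckoo search papers):

#include <cmath>
#include <random>

// Mantegna's algorithm: step = u / |v|^(1/beta), with
// u ~ N(0, sigma^2), v ~ N(0, 1) and sigma derived from beta.
double levyStep(std::mt19937 &rng, double beta = 1.5)
{
    const double pi = 3.14159265358979323846;
    double sigma = std::pow(
        std::tgamma(1.0 + beta) * std::sin(pi * beta / 2.0) /
            (std::tgamma((1.0 + beta) / 2.0) * beta *
             std::pow(2.0, (beta - 1.0) / 2.0)),
        1.0 / beta);
    std::normal_distribution<double> normal(0.0, 1.0);
    double u = normal(rng) * sigma;  // u ~ N(0, sigma^2)
    double v = normal(rng);          // v ~ N(0, 1)
    return u / std::pow(std::fabs(v), 1.0 / beta);
}

In the Java method above, the equivalent change would be to replace r = 0.0 with such a sample; since java.lang.Math has no gamma function, a library such as Apache Commons Math would be needed for the sigma computation.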

Inserting zeros between the elements of a vector with high performance and speed (preferably using the STL)

I have extracted raster data from a GeoTIFF image using RasterIO of the GDAL library. Since the image shown by OpenGL needs a width and height that are both multiples of 4, I use this code after extracting the data.
The first switch block evaluates the remainder of RasterXSize (the width) divided by 4; if it is 1, for example, we should add 3 columns, meaning 3 zeros at the end of each row. This is done by the code:
for (int i = 1; i <= RasterYSize; i++)
    pRasterData.insert(pRasterData.begin() + i*RasterXSize*depthOfPixel + (i-1)*3, 3, 0);
and the second switch block evaluates the remainder of RasterYSize (the height) divided by 4; if it is 1, for example, we simply add 3 rows to the end of the data, which is done by this code:
pRasterData.insert(pRasterData.end(),3*RasterXSize,0);
This is the whole code that I have used for extracting the data and preparing it to be displayed by OpenGL:
void FilesWorkFlow::ReadRasterData(GDALDataset* poDataset)
{
    RasterXSize = poDataset->GetRasterXSize();
    RasterYSize = poDataset->GetRasterYSize();
    RasterCount = poDataset->GetRasterCount();

    CPLErr error = CE_None;
    GDALRasterBand *poRasterBand;
    poRasterBand = poDataset->GetRasterBand(1);
    eType = poRasterBand->GetRasterDataType();
    BytesPerPixel = GDALGetDataTypeSize(eType) / 8;
    depthOfPixel = RasterCount * BytesPerPixel;

    pRasterData.resize(RasterXSize * RasterYSize * RasterCount * BytesPerPixel);
    error = poDataset->RasterIO(GF_Read, 0, 0, RasterXSize, RasterYSize,
                                &pRasterData[0], RasterXSize, RasterYSize,
                                eType, RasterCount, 0, 0, 0, 0);

    // pad the width up to the next multiple of 4 by inserting zero
    // columns at the end of every row
    int modRasterXSize = RasterXSize % 4;
    switch (modRasterXSize)
    {
    case 1:
    {
        for (int i = 1; i <= RasterYSize; i++)
            pRasterData.insert(pRasterData.begin() + i*RasterXSize*depthOfPixel + (i-1)*3, 3, 0);
        RasterXSize = RasterXSize + 3;
        break;
    }
    case 2:
    {
        for (int i = 1; i <= RasterYSize; i++)
            pRasterData.insert(pRasterData.begin() + i*RasterXSize*depthOfPixel + (i-1)*2, 2, 0);
        RasterXSize = RasterXSize + 2;
        break;
    }
    case 3:
    {
        for (int i = 1; i <= RasterYSize; i++)
            pRasterData.insert(pRasterData.begin() + i*RasterXSize*depthOfPixel + (i-1)*1, 1, 0);
        RasterXSize = RasterXSize + 1;
        break;
    }
    }

    // pad the height up to the next multiple of 4 by appending zero rows
    int modRasterYSize = RasterYSize % 4;
    switch (modRasterYSize)
    {
    case 1:
    {
        pRasterData.insert(pRasterData.end(), 3*RasterXSize, 0);
        RasterYSize = RasterYSize + 3;
        break;
    }
    case 2:
    {
        pRasterData.insert(pRasterData.end(), 2*RasterXSize, 0);
        RasterYSize = RasterYSize + 2;
        break;
    }
    case 3:
    {
        pRasterData.insert(pRasterData.end(), 1*RasterXSize, 0);
        RasterYSize = RasterYSize + 1;
        break;
    }
    }
}
The first switch block is where my code gets slow: since I am working with a 16997*15931 image, the program takes a long time to run through that for loop.
Note that pRasterData is a member variable of the class FilesWorkFlow. Because of the problems I had passing this variable to the COpenGLControl class (written by Brett Fowle on CodeGuru and used in my project with slight changes), I decided to use vector<unsigned char> instead of unsigned char*.
Now I am wondering: is there any way to implement this part of the code faster using vectors?
Is there any way to insert zeros into certain parts of a vector without for loops that waste so much time?
Something like std::transform? I don't know!
Remember that I'm using MFC in Visual Studio 2010, and I'd prefer to use the STL, but if you have other suggestions besides vectors or the STL, I'd be glad to hear them.
It is slow because the members of the vector get moved multiple times. Think about the members in the last row of your image: they all have to be moved once for every row of the image. It would be faster to create a whole new image, copying just the pixels you need from the original image and adding zeros where appropriate.
Here's an example:
#include <algorithm>
#include <cassert>
#include <vector>

void padColumns(
    std::vector<unsigned char> &old_image,
    size_t old_width,
    size_t new_width)
{
    size_t height = old_image.size() / old_width;
    assert(old_image.size() == old_width * height);

    std::vector<unsigned char> new_image(new_width * height);
    for (size_t row = 0; row != height; ++row) {
        // copy the original row into the wider row...
        std::copy(
            old_image.begin() + row*old_width,
            old_image.begin() + row*old_width + old_width,
            new_image.begin() + row*new_width);
        // ...and zero-fill the extra columns at its end
        std::fill(
            new_image.begin() + row*new_width + old_width,
            new_image.begin() + row*new_width + new_width,
            0);
    }
    old_image = new_image;
}
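Calling it from ReadRasterData would look something like this (a usage sketch under my assumptions; note that padColumns works on byte rows, so both widths must be multiplied by depthOfPixel, while the row padding at the bottom can keep using the existing single insert at the end):

// columns of padding needed to reach a multiple of 4
int padPixels = (4 - RasterXSize % 4) % 4;
if (padPixels > 0)
{
    padColumns(pRasterData,
               (size_t)RasterXSize * depthOfPixel,                // old row width in bytes
               (size_t)(RasterXSize + padPixels) * depthOfPixel); // new row width in bytes
    RasterXSize += padPixels;
}

This turns the per-row insert, which shifts the tail of a vector of hundreds of millions of elements once per row, into a single allocation and one linear pass.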

Rearranging data for quadtree/octree

I am working on implementing a voxel octree raycaster, and the only thing left is rearranging the data to populate the leaf level of the octree, so that the data can then be averaged to build the lower levels of the tree.
I'm thinking in 2D (quadtree) initially for convenience. I have the data ordered as on the left of the drawing, and I am currently able to rearrange it as on the right. The example is 8x8.
However, I realized that I need to order the data in node order, like in the drawing below:
In other words, I want to go from an array where the data correspond to indices like this:
[0 1 2 3 4 5 6 7 8 9 ... 63]
to an array that would have the data in this order:
[0 1 4 5 16 17 20 21 2 3 ... 63]
for an 8x8 quadtree example.
I can't figure out how to do it. My main problem is dealing with an arbitrary tree size. I could probably hard-code a set of nested loops if I knew the size beforehand, but that is obviously not a great or elegant solution. I'm thinking there might be a recursive way to achieve it.
This is my quick-and-dirty sketch for sorting the data in the way described in the first picture. It works by keeping track of four positions in the original data and stepping them forward as the new array gets filled. As far as I have been able to tell, this works fine, but it is not extendable to my needs:
int w = 8;
int[] before = new int[w*w*w];
int[] after = new int[w*w*w];
for (int i = 0; i < w*w*w; i++) {
    before[i] = i;
}

int toFill = 0;
int front = 0;
int back = w;
int frontZ = w*w;
int backZ = w*w + w;

do {
    for (int i = 0; i < w/2; i++) {
        for (int j = 0; j < w/2; j++) {
            after[toFill++] = front++;
            after[toFill++] = front++;
            after[toFill++] = back++;
            after[toFill++] = back++;
            after[toFill++] = frontZ++;
            after[toFill++] = frontZ++;
            after[toFill++] = backZ++;
            after[toFill++] = backZ++;
        }
        front += w;
        back += w;
        frontZ += w;
        backZ += w;
    }
    front += w*w;
    back += w*w;
    frontZ += w*w;
    backZ += w*w;
} while (toFill < w*w*w);

for (int i = 0; i < w*w*w; i++) {
    println("after " + i + " " + after[i]);
}
For the problem I stated, phs hinted that it is called a Z-order curve. Thanks to that, I found this question: Z-order-curve coordinates, where an algorithm for the 2D case is presented. I tried it, and it works.
Here are a few implementations of the 3D case as well: How to compute a 3D Morton number (interleave the bits of 3 ints)
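For completeness, the bit-interleaving itself is short. Here is a sketch of the straightforward, non-optimized 2D version (the links above give the faster magic-number variants; depending on which axis you want in the even bits, you may need to swap x and y):

#include <cstdint>

// Morton/Z-order index: bit i of x goes to output bit 2i,
// bit i of y goes to output bit 2i + 1.
uint64_t mortonEncode2D(uint32_t x, uint32_t y)
{
    uint64_t code = 0;
    for (int i = 0; i < 32; ++i)
    {
        code |= (uint64_t)((x >> i) & 1u) << (2 * i);
        code |= (uint64_t)((y >> i) & 1u) << (2 * i + 1);
    }
    return code;
}

Walking the cells in increasing mortonEncode2D(x, y) order visits them quadrant by quadrant, recursively, which is exactly the node order needed for the quadtree leaves.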
