I'm trying to write a custom preconditioner, but I don't understand Eigen's preconditioner concept yet. My current state looks like this, but something is obviously wrong...
class NullSpaceProjector: public Eigen::IdentityPreconditioner
{
public:
Eigen::MatrixXd null_space_1;
Eigen::MatrixXd null_space_2;
template<typename Rhs>
inline Rhs solve(Rhs& b) const {
return b - this->null_space_1 * (this->null_space_2 * b);
}
void setNullSpace(Eigen::MatrixXd null_space) {
// build the projector factors: null_space_1 = N (N^T N)^-1, null_space_2 = N^T,
// so that solve(b) returns b - N (N^T N)^-1 N^T b
this->null_space_1 = null_space * ((null_space.transpose() * null_space).inverse());
this->null_space_2 = null_space.transpose();
}
};
I want to set some null-space information and remove it from the rhs in every CG step. Somehow I have the feeling this is not possible with the preconditioner concept; at least I can't find the right way to get it implemented.
References:
- solving a singular matrix
- https://bitbucket.org/eigen/eigen/src/1a24287c6c133b46f8929cf5a4550e270ab66025/Eigen/src/IterativeLinearSolvers/BasicPreconditioners.h?at=default&fileviewer=file-view-default#BasicPreconditioners.h-185
further information:
the nullspace is constructed this way:
Eigen::MatrixXd LscmRelax::get_nullspace()
{
Eigen::MatrixXd null_space;
null_space.setZero(this->flat_vertices.size() * 2, 3);
for (int i=0; i<this->flat_vertices.cols(); i++)
{
null_space(i * 2, 0) = 1; // x-translation
null_space(i * 2 + 1, 1) = 1; // y-translation
null_space(i * 2, 2) = - this->flat_vertices(1, i); // z-rotation
null_space(i * 2 + 1, 2) = this->flat_vertices(0, i); // z-rotation
}
return null_space;
}
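As a sanity check on the projector formula (a hypothetical standalone snippet, not part of the original code): applying b -> b - N (N^T N)^-1 N^T b to the columns of N itself should give numerically zero, since the projector removes exactly the span of the nullspace basis.
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXd N = Eigen::MatrixXd::Random(10, 3);  // stand-in nullspace basis
    Eigen::MatrixXd N1 = N * (N.transpose() * N).inverse();
    Eigen::MatrixXd N2 = N.transpose();
    Eigen::MatrixXd PN = N - N1 * (N2 * N);  // apply the projector to N itself
    std::cout << "max |P N| = " << PN.cwiseAbs().maxCoeff() << "\n";  // ~1e-15
}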
So the question is maybe not so much related to Eigen but more a basic C++ question:
How can I manipulate the rhs in a const function, by a projection that itself depends on the rhs? I guess the const concept doesn't fit my needs here.
But removing the 'const' from the function results in errors like this:
.../include/eigen3/Eigen/src/IterativeLinearSolvers/ConjugateGradient.h:63:5: error: passing ‘const lscmrelax::NullSpaceProjector’ as ‘this’ argument of ‘Eigen::VectorXd lscmrelax::NullSpaceProjector::solve(const VectorXd&)’ discards qualifiers [-fpermissive]
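For what it's worth, the const error alone can be fixed without abandoning const: the projection never has to mutate b, so solve() can stay a const member, take its argument by const reference, and return the projected copy. A minimal sketch under that assumption (it keeps the projector formula as-is):
#include <Eigen/Dense>
#include <Eigen/IterativeLinearSolvers>

class NullSpaceProjector : public Eigen::IdentityPreconditioner
{
public:
    Eigen::MatrixXd null_space_1;
    Eigen::MatrixXd null_space_2;

    // CG hands solve() a const rhs, so the method must stay const and must
    // not take a non-const reference; it returns the projected copy instead
    // of modifying b in place.
    template<typename Rhs>
    inline Rhs solve(const Rhs& b) const {
        return b - null_space_1 * (null_space_2 * b);
    }

    void setNullSpace(const Eigen::MatrixXd& null_space) {
        null_space_1 = null_space * (null_space.transpose() * null_space).inverse();
        null_space_2 = null_space.transpose();
    }
};

// hypothetical usage with ConjugateGradient:
//   Eigen::ConjugateGradient<Eigen::SparseMatrix<double>,
//                            Eigen::Lower | Eigen::Upper,
//                            NullSpaceProjector> cg;
//   cg.preconditioner().setNullSpace(N);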
I came across a function a colleague had written that accepted an initializer list of std::vectors. I have simplified the code for demonstration:
int sum(const std::initializer_list<std::vector<int>> list)
{
int tot = 0;
for (auto &v : list)
{
tot += v.size();
}
return tot;
}
Such a function allows you to call it like this, with curly braces for the initializer list:
std::vector<int> v1(50, 1);
std::vector<int> v2(75, 2);
int sum1 = sum({ v1, v2 });
That looks neat, but doesn't this involve copying the vectors to create the initializer list? Wouldn't it be more efficient to have a function that takes a vector of vectors? That would involve less copying, since you can move the vectors. Something like this:
int sum(const std::vector<std::vector<int>> &list)
{
int tot = 0;
for (auto &v : list)
{
tot += v.size();
}
return tot;
}
std::vector<std::vector<int>> vlist;
vlist.reserve(2);
vlist.push_back(std::move(v1));
vlist.push_back(std::move(v2));
int sum2 = sum(vlist);
Passing an initializer list could be useful for scalar types like int and float, but I think it should be avoided for types like std::vector, to avoid unnecessary copying. Is it best to use std::initializer_list only for constructors, as it was intended?
That looks neat but doesn't this involve copying the vectors to create the initializer list?
Yes, that is correct.
Wouldn't it be more efficient to have a function that takes a vector or vectors?
If you are willing to move the contents of v1 and v2 into a std::vector<std::vector<int>>, you can do the same thing when using std::initializer_list too.
std::vector<int> v1(50, 1);
std::vector<int> v2(75, 2);
int sum1 = sum({ std::move(v1), std::move(v2) });
In other words, you can use either approach to get the same effect.
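As a small standalone demonstration (hypothetical, just to make the point concrete): the braced list move-constructs its backing array from v1 and v2, so no element-wise copy of the data takes place, and the source vectors are typically left empty afterwards.
#include <initializer_list>
#include <iostream>
#include <utility>
#include <vector>

int sum(const std::initializer_list<std::vector<int>> list) {
    int tot = 0;
    for (auto& v : list)
        tot += static_cast<int>(v.size());
    return tot;
}

int main() {
    std::vector<int> v1(50, 1);
    std::vector<int> v2(75, 2);
    // the vectors are moved into the initializer_list's backing array
    int sum1 = sum({ std::move(v1), std::move(v2) });
    std::cout << sum1 << "\n";                            // 125
    std::cout << v1.size() << " " << v2.size() << "\n";   // typically "0 0" (moved-from)
}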
A few days ago I asked a question about how to use edge collapse with Assimp. Smoothing the obj and removing duplicated vertices in modeling software solved the basic problems that kept edge collapse from working; I mean it works because the mesh can be simplified by MeshLab like this:
It looks good in MeshLab, but I then do it in my engine, which uses Assimp and OpenMesh. The problem is that Assimp imports the vertices and indices as-is, which can leave a halfedge without its opposite pair (is this what is called non-manifold?).
The result snapshot uses OpenMesh's quadric decimation:
To pin down the problem, I run it without decimation and convert the OpenMesh data structure back directly. Everything works fine as expected (I mean the result without decimation).
The code that I used to decimate the mesh:
Loader::BasicData Loader::TestEdgeCollapse(float vertices[], int vertexLength, int indices[], int indexLength, float texCoords[], int texCoordLength, float normals[], int normalLength)
{
// Mesh type
typedef OpenMesh::TriMesh_ArrayKernelT<> OPMesh;
// Decimater type
typedef OpenMesh::Decimater::DecimaterT< OPMesh > OPDecimater;
// Decimation Module Handle type
typedef OpenMesh::Decimater::ModQuadricT< OPMesh >::Handle HModQuadric;
OPMesh mesh;
// the optional per-vertex attributes must be requested before set_texcoord2D/set_normal can be used
if (texCoords != nullptr)
mesh.request_vertex_texcoords2D();
if (normals != nullptr)
mesh.request_vertex_normals();
std::vector<OPMesh::VertexHandle> vhandles;
int iteration = 0;
for (int i = 0; i < vertexLength; i += 3)
{
vhandles.push_back(mesh.add_vertex(OpenMesh::Vec3f(vertices[i], vertices[i + 1], vertices[i + 2])));
if (texCoords != nullptr)
mesh.set_texcoord2D(vhandles.back(),OpenMesh::Vec2f(texCoords[iteration * 2], texCoords[iteration * 2 + 1]));
if (normals != nullptr)
mesh.set_normal(vhandles.back(), OpenMesh::Vec3f(normals[i], normals[i + 1], normals[i + 2]));
iteration++;
}
for (int i = 0; i < indexLength; i += 3)
mesh.add_face(vhandles[indices[i]], vhandles[indices[i + 1]], vhandles[indices[i + 2]]);
OPDecimater decimater(mesh);
HModQuadric hModQuadric;
decimater.add(hModQuadric);
decimater.module(hModQuadric).unset_max_err();
decimater.initialize();
//decimater.decimate(); // without this, everything is fine, as expected
mesh.garbage_collection();
int verticesSize = mesh.n_vertices() * 3;
float* newVertices = new float[verticesSize];
int indicesSize = mesh.n_faces() * 3;
int* newIndices = new int[indicesSize];
float* newTexCoords = nullptr;
int texCoordSize = mesh.n_vertices() * 2;
if(mesh.has_vertex_texcoords2D())
newTexCoords = new float[texCoordSize];
float* newNormals = nullptr;
int normalSize = mesh.n_vertices() * 3;
if(mesh.has_vertex_normals())
newNormals = new float[normalSize];
Loader::BasicData data;
int index = 0;
for (auto v_it = mesh.vertices_begin(); v_it != mesh.vertices_end(); ++v_it)
{
OpenMesh::Vec3f &point = mesh.point(*v_it);
newVertices[index * 3] = point[0];
newVertices[index * 3 + 1] = point[1];
newVertices[index * 3 + 2] = point[2];
if (mesh.has_vertex_texcoords2D())
{
auto &tex = mesh.texcoord2D(*v_it);
newTexCoords[index * 2] = tex[0];
newTexCoords[index * 2 + 1] = tex[1];
}
if (mesh.has_vertex_normals())
{
auto &normal = mesh.normal(*v_it);
newNormals[index * 3] = normal[0];
newNormals[index * 3 + 1] = normal[1];
newNormals[index * 3 + 2] = normal[2];
}
index++;
}
index = 0;
for (auto f_it = mesh.faces_begin(); f_it != mesh.faces_end(); ++f_it)
for (auto fv_it = mesh.fv_ccwiter(*f_it); fv_it.is_valid(); ++fv_it)
{
int id = fv_it->idx();
newIndices[index] = id;
index++;
}
data.Indices = newIndices;
data.IndicesLength = indicesSize;
data.Vertices = newVertices;
data.VerticesLength = verticesSize;
data.TexCoords = nullptr;
data.TexCoordLength = -1;
data.Normals = nullptr;
data.NormalLength = -1;
if (mesh.has_vertex_texcoords2D())
{
data.TexCoords = newTexCoords;
data.TexCoordLength = texCoordSize;
}
if (mesh.has_vertex_normals())
{
data.Normals = newNormals;
data.NormalLength = normalSize;
}
return data;
}
I also provide the tree obj I tested and the face data generated by Assimp, fetched from the Visual Studio debugger, which shows the problem: some of the indices cannot find their index pair.
After a few weeks of thinking about this and failing, I thought I wanted some academic/mathematical solution for automatically generating these decimated meshes, but now I'm trying to find a simple way to implement this. What I am able to do is change the structure to load multiple objects (file.obj) into a single custom object (class obj) and switch between them when needed. The benefit of this is that I can manage what should be presented and ignore any algorithmic problem.
By the way, I list some obstacles that pushed me back to the simple way:
Assimp's unique per-face indices and vertices: there is nothing wrong with them as such, but they give the algorithm no way to build the adjacency half-edge structure (a possible welding workaround is sketched after this list).
OpenMesh can read the object file (*.obj) directly with its read_mesh function, but the documentation lacks examples and it is hard to use in my engine.
Writing a custom 3D model importer for any format is hard.
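Since the root cause is that Assimp hands over per-face duplicated vertices, one workaround, besides Assimp's own aiProcess_JoinIdenticalVertices post-processing flag, is to weld vertices by position before building the OpenMesh half-edge structure. A minimal sketch with a hypothetical weldVertices helper; it merges only bit-identical positions and, for brevity, ignores texcoords and normals:
#include <cstddef>
#include <map>
#include <tuple>
#include <vector>

// Merge positionally identical vertices and remap the index buffer so that
// shared edges actually share vertices (and halfedges can find their pairs).
void weldVertices(std::vector<float>& vertices, std::vector<int>& indices)
{
    std::map<std::tuple<float, float, float>, int> seen;  // position -> new index
    std::vector<float> welded;
    std::vector<int> remap(vertices.size() / 3);

    for (std::size_t i = 0; i + 2 < vertices.size(); i += 3) {
        auto key = std::make_tuple(vertices[i], vertices[i + 1], vertices[i + 2]);
        auto it = seen.find(key);
        if (it == seen.end()) {
            int newIndex = static_cast<int>(welded.size() / 3);
            seen.emplace(key, newIndex);
            welded.push_back(vertices[i]);
            welded.push_back(vertices[i + 1]);
            welded.push_back(vertices[i + 2]);
            remap[i / 3] = newIndex;
        } else {
            remap[i / 3] = it->second;
        }
    }
    for (int& idx : indices)
        idx = remap[idx];
    vertices.swap(welded);
}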
In conclusion, there are two ways to make level of detail work in an engine: one is using a mesh simplification algorithm plus extra testing to ensure quality; the other is just switching between 3D models made in 3D software, which is not automatic but is stable. I use the second method, and I show the result here :)
However, this is not a real solution to my question, so I won't accept my own post as the answer.
I'm attempting to implement the variant of parallel radix sort described in http://arxiv.org/pdf/1008.2849v2.pdf (Algorithm 2), but my C++ implementation (for 4 digits in base 10) contains a bug that I'm unable to locate.
For debugging purposes I'm using no parallelism, but the code should still sort correctly.
For instance, the line arr.at(i) = item accesses an index outside the vector's bounds for the following input:
std::vector<int> v = {4612, 4598};
radix_sort2(v);
My implementation is as follows:
#include <set>
#include <array>
#include <vector>
void radix_sort2(std::vector<int>& arr) {
std::array<std::set<int>, 10> buckets3;
for (const int item : arr) {
int d = item / 1000;
buckets3.at(d).insert(item);
}
//Prefix sum
std::array<int, 10> outputIndices;
outputIndices.at(0) = 0;
for (int i = 1; i < 10; ++i) {
outputIndices.at(i) = outputIndices.at(i - 1) +
buckets3.at(i - 1).size();
}
for (const auto& bucket3 : buckets3) {
std::array<std::set<int>, 10> buckets0, buckets1;
std::array<int, 10> histogram2 = {};
for (const int item : bucket3) {
int d = item % 10;
buckets0.at(d).insert(item);
}
for (const auto& bucket0 : buckets0) {
for (const int item : bucket0) {
int d = (item / 10) % 10;
buckets1.at(d).insert(item);
int d2 = (item / 100) % 10;
++histogram2.at(d2);
}
}
for (const auto& bucket1 : buckets1) {
for (const int item : bucket1) {
int d = (item / 100) % 10;
int i = outputIndices.at(d) + histogram2.at(d);
++histogram2.at(d);
arr.at(i) = item;
}
}
}
}
Can anyone spot my mistake?
I took a look at the paper you linked. You haven't made any mistakes, none that I can see. In fact, by my estimation, you corrected a mistake in the algorithm.
I wrote out the algorithm and ended up with exactly the same problem you did. After reviewing Algorithm 2, either I woefully misunderstand how it is supposed to work, or it is flawed. There are at least a couple of problems with the algorithm, specifically revolving around outputIndices and histogram2.
Looking at the algorithm, the final index of an item is determined by the counting sort stored in outputIndices (let's ignore the histogram for now).
If you had an initial array of numbers {0100, 0103, 0102, 0101}, the prefix sum of that would be 4.
As far as I can determine, the algorithm gives no indication that the result should be lagged by 1. That said, in order for the algorithm to work the way they intend, it does have to be lagged, so, moving on.
Now the prefix sums are 0, 4, 4, .... The algorithm doesn't use the MSD as the index into the outputIndices array; it uses "MSD - 1". So, taking 1 as the index into the array, the starting index for the first item, even without the histogram, is 4: outside the array on the first try.
outputIndices is built from the MSD, so it makes sense for it to be accessed by the MSD.
Further, even if you tweak the algorithm to correctly use the MSD as the index into outputIndices, it still won't sort correctly. With your initial inputs (swapped) {4598, 4612}, they will stay in that order; they are sorted (locally) as if they were 2-digit numbers. If you extend the input with numbers not starting with 4, those will be globally sorted, but the local sort is never finished.
According to the paper the goal is to use the histogram to do that, but I don't see that happening.
Ultimately, I'm assuming what you want is an algorithm that works as described. I've modified the algorithm, keeping with the paper's overall stated goal of using the MSD for a global sort and handling the remaining digits by reverse LSD.
I don't think these changes should have any impact on your desire to parallelize the function.
void radix_sort2(std::vector<int>& arr)
{
std::array<std::vector<int>, 10> buckets3;
for (const int item : arr)
{
int d = item / 1000;
buckets3.at(d).push_back(item);
}
//Prefix sum
std::array<int, 10> outputIndices;
outputIndices.at(0) = 0;
for (int i = 1; i < 10; ++i)
{
outputIndices.at(i) = outputIndices.at(i - 1) + buckets3.at(i - 1).size();
}
for (const auto& bucket3 : buckets3)
{
if (bucket3.size() <= 0)
continue;
std::array<std::vector<int>, 10> buckets0, buckets1, buckets2;
for (const int item : bucket3)
buckets0.at(item % 10).push_back(item);
for (const auto& bucket0 : buckets0)
for (const int item : bucket0)
buckets1.at((item / 10) % 10).push_back(item);
for (const auto& bucket1 : buckets1)
for (const int item : bucket1)
buckets2.at((item / 100) % 10).push_back(item);
int count = 0;
for (const auto& bucket2 : buckets2)
{
for (const int item : bucket2)
{
int d = (item / 1000) % 10;
int i = outputIndices.at(d) + count;
++count;
arr.at(i) = item;
}
}
}
}
For extensibility, it would probably make sense to create a helper function that does the local sorting. You should be able to extend it to handle numbers with any number of digits that way.
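One possible shape for that refactoring, as a sketch assuming non-negative base-10 inputs (the helper name lsd_pass and the function radix_sort_n are mine, not from the paper):
#include <array>
#include <cstddef>
#include <vector>

// One stable LSD counting pass: bucket `items` by the digit selected by `divisor`.
static std::vector<int> lsd_pass(const std::vector<int>& items, int divisor)
{
    std::array<std::vector<int>, 10> buckets;
    for (const int item : items)
        buckets.at((item / divisor) % 10).push_back(item);
    std::vector<int> out;
    out.reserve(items.size());
    for (const auto& bucket : buckets)
        out.insert(out.end(), bucket.begin(), bucket.end());
    return out;
}

// Sort `arr`, assuming every element has at most `digits` base-10 digits:
// global distribution by the MSD, then stable LSD passes on the rest.
void radix_sort_n(std::vector<int>& arr, int digits)
{
    int msd_div = 1;
    for (int i = 1; i < digits; ++i)
        msd_div *= 10;                          // e.g. 1000 for 4 digits

    std::array<std::vector<int>, 10> msd_buckets;
    for (const int item : arr)
        msd_buckets.at(item / msd_div).push_back(item);

    std::array<int, 10> outputIndices;
    outputIndices.at(0) = 0;
    for (int i = 1; i < 10; ++i)
        outputIndices.at(i) = outputIndices.at(i - 1)
            + static_cast<int>(msd_buckets.at(i - 1).size());

    for (int d = 0; d < 10; ++d) {
        std::vector<int> local = msd_buckets.at(d);
        for (int div = 1; div < msd_div; div *= 10)   // remaining digits, LSD order
            local = lsd_pass(local, div);
        for (std::size_t k = 0; k < local.size(); ++k)
            arr.at(outputIndices.at(d) + static_cast<int>(k)) = local[k];
    }
}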
I'm trying to create a class that calculates its variance from a vector<float>. It should do this by using its previously calculated this->mean in diffSquaredSum. I'm trying to call the method diffSquaredSum inside accumulate, but I have no idea what the magic syntax is.
What is the correct syntax to use the diffSquaredSum class method as the op argument to accumulate in setVariance?
float diffSquaredSum(float sum, float f) {
// requires call to setMean
float diff = f - this->mean;
float diff_square = pow(diff,2);
return sum + diff_square;
}
void setVariance(vector<float>& values) {
size_t n = values.size();
double sum = accumulate(
values.begin(), values.end(), 0,
bind(this::diffSquaredSum));
this->variance = sum / n;
}
double sum = std::accumulate(
values.begin(),
values.end(),
0.f,
[&](float sum, float x){ return diffSquaredSum(sum,x);}
);
bind is only rarely useful. Prefer lambdas; they are easier to write and read.
You could instead get fancy with binding, but why?
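For completeness, the bind spelling would look roughly like this, assuming the two methods live in some class (here hypothetically called Stats):
#include <functional>
#include <numeric>
#include <vector>

class Stats {  // hypothetical enclosing class, for illustration only
public:
    float mean = 0.f;
    float variance = 0.f;

    float diffSquaredSum(float sum, float f) {
        float diff = f - mean;   // requires a prior call that sets mean
        return sum + diff * diff;
    }

    void setVariance(std::vector<float>& values) {
        using namespace std::placeholders;
        double sum = std::accumulate(
            values.begin(), values.end(), 0.f,
            std::bind(&Stats::diffSquaredSum, this, _1, _2));
        variance = static_cast<float>(sum / values.size());
    }
};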
I'm learning the D language. My first try is a simple 2D vector that I can add, subtract, dot product, etc...
I have this error when I try to compile:
Error :
Error: (Vector2d __ctmp1245 = D4math2v28Vector2d6_initZ;
, __ctmp1245).this(this._x / l,this._y / l) is not mutable
Note: the error is related to Vector2d.dir().
The code is:
import std.math;
public struct Vector2d {
private const real _x;
private const real _y;
this(in real x, in real y) {
_x = x; _y = y;
}
// Basic Properties ***************************************
@property const real X () { return _x; }
@property const real Y () { return _y; }
@property const real length () { return sqrt(_x*_x + _y*_y); }
// Operations ***************************************
/**
* Define Equality
*/
const bool opEquals(ref const Vector2d rhs) {
return approxEqual(_x, rhs._x) && approxEqual(_y, rhs._y);
}
/**
* Define unary operators + and - (+v)
*/
ref Vector2d opUnary(string op)() const
if (op == "+" || op == "-")
{
return Vector2d(mixin(op~"_x"), mixin(op~"_y"));
}
/**
* Define binary operator + and - (v1 + v2)
*/
ref Vector2d opBinary(string op) (ref const Vector2d rhs)
if (op == "+" || op == "-")
{
return Vector2d(mixin("_x" ~ op ~ "rhs._x"),mixin("_y" ~ op ~ "rhs._y"));
}
/**
* Scalar multiplication & division (v * 7)
*/
ref Vector2d opBinary(string op) (ref const real rhs) const
if (op == "*" || op == "/")
{
return Vector2d(mixin("_x" ~ op ~ "rhs"),mixin("_y" ~ op ~ "rhs"));
}
/**
* Dot Product (v1 * v2)
*/
ref real opBinary(string op) (ref const Vector2d rhs) const
if (op == "*") {
return _x*rhs._x + _y*rhs._y;
}
/**
* Obtain the director vector of this vector.
*/
ref Vector2d dir() const {
auto l = this.length();
return Vector2d(_x / l, _y /l);
}
/**
* Obtains the projection of this vector over other vector
* Params:
* b = Vector over project this vector
*/
ref Vector2d projectOnTo(in Vector2d b) const {
return b.dir() * (this * b.dir());
}
}
I don't understand why I get this error. Plus, I tried changing the type qualifiers, without success.
I even get the same error if I try this:
ref Vector2d dir() const {
auto l = this.length();
return Vector2d(2,3);
}
EDIT:
I tried removing "const" from the attributes and removing "ref" from the dot product (I got a hint that it was not an lvalue). The error is now this:
src/math/v2.d(82): Error: this.opBinary(b.dir()) is not an lvalue
Line 82 is:
return b.dir() * (this * b.dir());
Self-answer:
I fixed the last error by changing projectOnTo to this:
ref Vector2d projectOnTo(in Vector2d b) const {
auto a = this * b.dir();
return b.dir() * a;
}
Plus, I ran a unit test and it looks like Vector2d is doing OK.
So now I know that I can't use immutable variables as struct attributes, but I don't understand why.
Drop the const qualifiers from the fields:
public struct Vector2d {
private real _x;
private real _y;
this(in real x, in real y) {
_x = x; _y = y;
}
//...
The const is unnecessary, and it will block you from doing a simple assignment:
Vector2d v;
v = Vector2d(0,0);
this won't compile
And do remember that generic functions (the opBinary and opUnary) are only checked for syntax when they are parsed, not for correctness of the code (your opUnary returns a Vector!T, but the generic type T is never declared, and thus undefined, yet this passes compilation...).
Edit: I've made my own struct with operator overloads, and except for opCmp and opEquals I don't use ref const for parameters, just const (in works just as well for this).
Edit 2: I've found it easiest to understand structs as groups of variables that can be declared, assigned, and used at the same time, with a few extra functions defined around the group of vars.
So Vector2d v; in your original code would then be translated as const real v__x; const real v__y; and the assignment v = Vector2d(0,0); would then be (after inlining the constructor) translated as v__x = 0; v__y = 0;
but as v__x is declared const, this won't be allowed.
I've tried to rewrite the body of dir() as
Vector2d v = Vector2d(0, 0);
return v;
and then saw a nice error message:
vector2D.d(67): Error: variable vector2D.Vector2d.dir.v cannot modify struct with immutable members
vector2D.d(67): Error: escaping reference to local variable v
I don't understand how a variable could modify a struct, but this gave me the hint: why did you define _x and _y as const? I've removed the const from their declarations, and it has compiled :)