I'm trying to do neighbour processing on the GPU with HLSL, and I'm wondering if there is a way to load an array of neighbour samples at once rather than just one sample, so that I can use matrix math instead of for loops.
My current implementation, using the SampleLevel function, is something like:
float3 pixel = inputTex.SampleLevel(sampleState, uv + uvOffset, 0.0, 0.0);
Instead, I'd like to load more than one sample at a time, but I haven't found an API for that. Or if my approach for this is totally wrong, please let me know how else to go about utilizing vectorization and matrix math in HLSL. Thanks for any advice and have a great day!
As far as I know, the most you can get from a single texture access is a float4.
Your options are (assuming you only need 4 neighbouring values):
Calling Load 4 times (requires an integer texel coordinate, not a UV).
Calling Sample 4 times with a point sampler (requires offsetting the UV by 0.5/textureSize, so your samples would be (u - offset, v - offset), (u + offset, v - offset), (u - offset, v + offset), (u + offset, v + offset)).
If your texture is single channel (or if you only need the red channel), you can use Gather - https://learn.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-to-gather
I'm writing Cg shaders for advanced lighting calculations for a Unity-based game. Sometimes it is necessary to sum all the components of a vector. There are two ways to do it:
Just write something like:
float sum = v.x + v.y + v.z;
Or do something like:
float sum = dot(v,float3(1,1,1));
I am really curious about which is faster and which looks better style-wise.
It's obvious that if we asked the same question about CPU calculations, the first, simple way would be much better, because:
a) There is no need to allocate another float3(1,1,1) vector
b) There is no need to multiply every component of the original vector "v" by 1.
But since we do it in shader code, which runs on the GPU, I believe there is some great hardware optimization for the dot product function, and maybe the allocation of float3(1,1,1) will be translated into no allocation at all.
float4 _someVector;
void surf (Input IN, inout SurfaceOutputStandard o){
float sum = _someVector.x + _someVector.y + _someVector.z + _someVector.w;
// VS
float sum2 = dot(_someVector, float4(1,1,1,1));
}
Check this link.
Vec3 Dot has a cost of 3 cycles, while Scalar Add has a cost of 1.
Thus, on almost all platforms (AMD and NVIDIA):
float sum = v.x + v.y + v.z; has a cost of 2
float sum = dot(v,float3(1,1,1)); has a cost of 3
The first implementation should be faster.
Implementation of the Dot product in cg: https://developer.download.nvidia.com/cg/dot.html
IMHO the difference is immeasurable in 98% of cases, but the first one should be faster, because multiplication is a "more expensive" operation.
I'm trying to compare the performance in Halide of a 2-pass, separable box filter against an integral-image-based approach, to gain a better understanding of Halide scheduling. I cannot find any examples of integral image creation in Halide where the integral image Func is used in the definition of a subsequent Func.
ImageParam input(type_of<uint8_t>(), 3, "image 1");
Var x("x"), y("y"), c("c"), xi("xi"), yi("yi");
Func ip("ip");
ip(x, y, c) = cast<float>(BoundaryConditions::repeat_edge(input)(x, y, c));
Param<int> radius("radius", 15, 1, 50);
RDom imageDomain(input);
RDom r(-radius, radius, -radius, radius);
// Make an integral image
Func integralImage = ip;
integralImage(x, imageDomain.y, c) += integralImage(x, imageDomain.y - 1, c);
integralImage(imageDomain.x, y, c) += integralImage(imageDomain.x - 1, y, c);
integralImage.compute_root(); // Come up with a better schedule for this
// Apply box filter to integral image
Func outputImage;
outputImage(x,y,c) = integralImage(x+radius,y+radius,c)
+ integralImage(x-radius,y-radius,c)
- integralImage(x-radius,y+radius,c)
- integralImage(x+radius,y-radius,c);
Expr normFactor = (2*radius+1) * (2*radius+1);
outputImage(x,y,c) = outputImage(x,y,c) / normFactor;
Func result("result");
result(x,y,c) = cast<uint8_t>(outputImage(x,y,c));
result.parallel(y).vectorize(x, 8);
I did find the following code in the tests:
https://github.com/halide/Halide/blob/master/test/correctness/multi_pass_reduction.cpp
But this example uses realize to compute the integral image as a buffer over a fixed domain and doesn't consume the definition of integral image as a function in the definition of a subsequent function.
When I run this code, I observe that:
The computation of the integral image is extremely slow (it drops my pipeline to 0 fps).
I get an incorrect answer. I feel like I must somehow be misdefining my integral image.
I also have a related question, how would one best schedule the computation of an integral image in this type of scenario in Halide?
My problem was in the definition of my integral image. If I change my implementation to the standard one-pass definition of the integral image, I get the expected behavior:
Func integralImage;
integralImage(x,y,c) = 0.0f; // Pure definition
// intImDom is a reduction domain spanning the input image (cf. imageDomain above)
integralImage(intImDom.x,intImDom.y,c) = ip(intImDom.x,intImDom.y,c)
+ integralImage(intImDom.x-1,intImDom.y,c)
+ integralImage(intImDom.x,intImDom.y-1,c)
- integralImage(intImDom.x-1,intImDom.y-1,c);
integralImage.compute_root();
I still have remaining questions about the most efficient algorithm/schedule in Halide for computing an integral image, but I'll re-post that as a more specific question, as my current post was kind of open-ended.
As an aside, there is a second problem in the code above in that padding of the input image is not handled correctly.
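As a rough sketch of one way to deal with that (my own addition, not from the original post): clamp the integral-image lookups to the valid region and use the standard inclusive-sum indexing, assuming ImageParam::width()/height() give the input extents:

Expr xMax = input.width() - 1, yMax = input.height() - 1;
Expr x0 = clamp(x - radius - 1, 0, xMax);
Expr x1 = clamp(x + radius, 0, xMax);
Expr y0 = clamp(y - radius - 1, 0, yMax);
Expr y1 = clamp(y + radius, 0, yMax);
outputImage(x, y, c) = integralImage(x1, y1, c) - integralImage(x0, y1, c)
                     - integralImage(x1, y0, c) + integralImage(x0, y0, c);

Note that with clamping, windows near the border cover fewer source pixels, so dividing by (2*radius+1)^2 is no longer exact there.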
I have two sets of 3D points (original and reconstructed) and correspondence information about the pairs - which point from one set represents which one in the other. I need to find the 3D translation and scaling factor which transforms the reconstructed set so that the sum of squared distances is smallest (rotation would be nice too, but the points are rotated similarly, so this is not the main priority and may be omitted for the sake of simplicity and speed). And so my question is - is this solved and available somewhere on the Internet? Personally, I would use the least squares method, but I don't have much time (and although I'm somewhat good at math, I don't use it often, so it would be better for me to avoid it), so I would like to use someone else's solution if it exists. I'd prefer a solution in C++, for example using OpenCV, but the algorithm alone is good enough.
If there is no such solution, I will calculate it myself; I don't want to bother you too much.
SOLUTION: (from your answers)
For me it's the Kabsch algorithm;
Base info: http://en.wikipedia.org/wiki/Kabsch_algorithm
General solution: http://nghiaho.com/?page_id=671
STILL NOT SOLVED:
I also need scale. The scale values from the SVD are not understandable to me; when I need a scale of about 1-4 for all axes (my estimate), the SVD scale is about [2000, 200, 20], which is not helping at all.
Since you are already using the Kabsch algorithm, just have a look at Umeyama's paper, which extends it to get the scale. All you need to do is get the standard deviation of your points and calculate the scale as:
(1/sigma^2)*trace(D*S)
where D is the diagonal matrix in SVD decomposition in the rotation estimation and S is either identity matrix or [1 1 -1] diagonal matrix, depending on the sign of determinant of UV (which Kabsch uses to correct reflections into proper rotations). So if you have [2000, 200, 20], multiply the last element by +-1 (depending on the sign of determinant of UV), sum them and divide by the standard deviation of your points to get scale.
You can recycle the following code, which is using the Eigen library:
typedef Eigen::Matrix<double, 3, 1, Eigen::DontAlign> Vector3d_U; // microsoft's 32-bit compiler can't put Eigen::Vector3d inside a std::vector. for other compilers or for 64-bit, feel free to replace this by Eigen::Vector3d
/**
* @brief rigidly aligns two sets of poses
*
* This calculates such a relative pose <tt>R, t</tt>, such that:
*
* @code
* _TyVector v_pose = R * r_vertices[i] + t;
* double f_error = (r_tar_vertices[i] - v_pose).squaredNorm();
* @endcode
*
* The sum of squared errors in <tt>f_error</tt> for each <tt>i</tt> is minimized.
*
* @param[in] r_vertices is a set of vertices to be aligned
* @param[in] r_tar_vertices is a set of vertices to align to
*
* @return Returns a relative pose that rigidly aligns the two given sets of poses.
*
* @note This requires the two sets of poses to have the corresponding vertices stored under the same index.
*/
static std::pair<Eigen::Matrix3d, Eigen::Vector3d> t_Align_Points(
const std::vector<Vector3d_U> &r_vertices, const std::vector<Vector3d_U> &r_tar_vertices)
{
_ASSERTE(r_tar_vertices.size() == r_vertices.size());
const size_t n = r_vertices.size();
Eigen::Vector3d v_center_tar3 = Eigen::Vector3d::Zero(), v_center3 = Eigen::Vector3d::Zero();
for(size_t i = 0; i < n; ++ i) {
v_center_tar3 += r_tar_vertices[i];
v_center3 += r_vertices[i];
}
v_center_tar3 /= double(n);
v_center3 /= double(n);
// calculate centers of positions, potentially extend to 3D
double f_sd2_tar = 0, f_sd2 = 0; // only one of those is really needed
Eigen::Matrix3d t_cov = Eigen::Matrix3d::Zero();
for(size_t i = 0; i < n; ++ i) {
Eigen::Vector3d v_vert_i_tar = r_tar_vertices[i] - v_center_tar3;
Eigen::Vector3d v_vert_i = r_vertices[i] - v_center3;
// get both vertices
f_sd2 += v_vert_i.squaredNorm();
f_sd2_tar += v_vert_i_tar.squaredNorm();
// accumulate squared standard deviation (only one of those is really needed)
t_cov.noalias() += v_vert_i * v_vert_i_tar.transpose();
// accumulate covariance
}
// calculate the covariance matrix
Eigen::JacobiSVD<Eigen::Matrix3d> svd(t_cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
// calculate the SVD
Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
// compute the rotation
double f_det = R.determinant();
Eigen::Vector3d e(1, 1, (f_det < 0)? -1 : 1);
// calculate determinant of V*U^T to disambiguate rotation sign
if(f_det < 0)
R.noalias() = svd.matrixV() * e.asDiagonal() * svd.matrixU().transpose();
// recompute the rotation part if the determinant was negative
R = Eigen::Quaterniond(R).normalized().toRotationMatrix();
// renormalize the rotation (not needed but gives slightly more orthogonal transformations)
double f_scale = svd.singularValues().dot(e) / f_sd2_tar;
double f_inv_scale = svd.singularValues().dot(e) / f_sd2; // only one of those is needed
// calculate the scale
R *= f_inv_scale;
// apply scale
Eigen::Vector3d t = v_center_tar3 - (R * v_center3); // R needs to contain scale here, otherwise the translation is wrong
// want to align center with ground truth
return std::make_pair(R, t); // or put it in a single 4x4 matrix if you like
}
For 3D points the problem is known as the Absolute Orientation problem. A C++ implementation is available from Eigen http://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60 and the paper is at http://web.stanford.edu/class/cs273/refs/umeyama.pdf
You can use it via OpenCV by converting the matrices to Eigen with cv::cv2eigen() calls.
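As a minimal sketch of calling it (my own example, assuming the point sets are stored as std::vector<Eigen::Vector3d> with corresponding points at the same index):

#include <Eigen/Geometry> // Eigen::umeyama
#include <vector>

// Aligns "reconstructed" to "original". The returned 4x4 homogeneous matrix
// contains the rotation, the translation and a uniform scale.
Eigen::Matrix4d AlignWithUmeyama(const std::vector<Eigen::Vector3d> &reconstructed,
                                 const std::vector<Eigen::Vector3d> &original)
{
    const Eigen::Index n = static_cast<Eigen::Index>(reconstructed.size());
    Eigen::Matrix3Xd src(3, n), dst(3, n);
    for (Eigen::Index i = 0; i < n; ++i) {
        src.col(i) = reconstructed[i];
        dst.col(i) = original[i];
    }
    // Least-squares similarity transform such that dst = T * src (in homogeneous coordinates)
    return Eigen::umeyama(src, dst, /*with_scaling=*/true);
}

The uniform scale can be read back as the cube root of the determinant of the top-left 3x3 block of the returned matrix.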
Start by translating both sets of points so that their centroids coincide with the origin of the coordinate system. The translation vector is just the difference between these centroids.
Now we have two sets of coordinates represented as matrices P and Q. One set of points may be obtained from the other one by applying some linear operator (which performs both scaling and rotation). This operator is represented by a 3x3 matrix X:
P * X = Q
To find proper scale/rotation we just need to solve this matrix equation, find X, then decompose it into several matrices, each representing some scaling or rotation.
A simple (but probably not numerically stable) way to solve it is to multiply both sides of the equation by the transposed matrix P (to get rid of the non-square matrices), then multiply both sides by the inverse of P^T * P:
P^T * P * X = P^T * Q
X = (P^T * P)^-1 * P^T * Q
Applying Singular value decomposition to matrix X gives two rotation matrices and a matrix with scale factors:
X = U * S * V
Here S is a diagonal matrix with scale factors (one scale for each coordinate), and U and V are rotation matrices: one properly rotates the points so that they may be scaled along the coordinate axes, the other rotates them once more to align their orientation with the second set of points.
Example (2D points are used for simplicity):
P =  1  2        Q =   7.5391   4.3455
     2  3             12.9796   5.8897
    -2  1             -4.5847   5.3159
    -1 -6            -15.9340 -15.5511
After solving the equation:
X =  3.3417 -1.2573
     2.0987  2.8014
After SVD decomposition:
U = -0.7317 -0.6816
    -0.6816  0.7317
S =  4  0
     0  3
V = -0.9689 -0.2474
    -0.2474  0.9689
Here SVD has properly reconstructed all manipulations I performed on matrix P to get matrix Q: rotate by the angle 0.75, scale X axis by 4, scale Y axis by 3, rotate by the angle -0.25.
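As an illustration (my own, not part of the original answer), the same steps can be reproduced with Eigen; the signs in U and V may differ from the values above, since an SVD is only unique up to paired sign flips:

#include <Eigen/Dense>
#include <iostream>

int main() {
    // One point per row, taken from the example above.
    Eigen::Matrix<double, 4, 2> P, Q;
    P <<  1,  2,
          2,  3,
         -2,  1,
         -1, -6;
    Q <<   7.5391,   4.3455,
          12.9796,   5.8897,
          -4.5847,   5.3159,
         -15.9340, -15.5511;

    // X = (P^T * P)^-1 * P^T * Q, the least-squares solution of P * X = Q.
    Eigen::Matrix2d X = (P.transpose() * P).inverse() * P.transpose() * Q;

    // Decompose X into rotation * scale * rotation via SVD.
    Eigen::JacobiSVD<Eigen::Matrix2d> svd(X, Eigen::ComputeFullU | Eigen::ComputeFullV);
    std::cout << "X =\n" << X << "\n"
              << "U =\n" << svd.matrixU() << "\n"
              << "S = " << svd.singularValues().transpose() << "\n"
              << "V =\n" << svd.matrixV() << "\n";
    return 0;
}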
If sets of points are scaled uniformly (scale factor is equal by each axis), this procedure may be significantly simplified.
Just use the Kabsch algorithm to get the translation/rotation values. Then apply that translation and rotation (the centroids should coincide with the origin of the coordinate system). Then, for each pair of points (and for each coordinate), estimate a linear regression. The linear regression coefficient is exactly the scale factor.
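A small sketch of that idea (my own; it pools all coordinates into one regression through the origin rather than fitting one per coordinate), assuming both sets are already centred and rotated into alignment:

#include <Eigen/Dense>
#include <vector>

// Regression-through-the-origin scale estimate: sum(p_i . q_i) / sum(|p_i|^2),
// where p_i are the (centred, rotated) source points and q_i the target points.
double EstimateScale(const std::vector<Eigen::Vector3d> &centred_src,
                     const std::vector<Eigen::Vector3d> &centred_dst)
{
    double num = 0.0, den = 0.0;
    for (size_t i = 0; i < centred_src.size(); ++i) {
        num += centred_src[i].dot(centred_dst[i]);
        den += centred_src[i].squaredNorm();
    }
    return num / den;
}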
A good explanation: Finding optimal rotation and translation between corresponding 3D points
The code is in MATLAB but it's trivial to convert to OpenCV using the cv::SVD function.
You might want to try ICP (Iterative closest point).
Given two sets of 3d points, it will tell you the transformation (rotation + translation) to go from the first set to the second one.
If you're interested in a lightweight C++ implementation, try libicp.
Good luck!
The general transformation, as well as the scale, can be retrieved via Procrustes Analysis. It works by superimposing the objects on top of each other and estimating the transformation from that setting. It has been used in the context of ICP many times. In fact, your preference, the Kabsch algorithm, is a special case of this.
Moreover, Horn's alignment algorithm (based on quaternions) also finds a very good solution, while being quite efficient. A Matlab implementation is also available.
Scale can be inferred without SVD if your points are uniformly scaled in all directions (I could not make sense of SVD's scale matrix either). Here is how I solved the same problem:
Measure distances of each point to other points in the point cloud to get a 2d table of distances, where entry at (i,j) is norm(point_i-point_j). Do the same thing for the other point cloud, so you get two tables -- one for original and the other for reconstructed points.
Divide all values in one table by the corresponding values in the other table. Because the points correspond to each other, the distances do too. Ideally, the resulting table has all values being equal to each other, and this is the scale.
The median value of the divisions should be pretty close to the scale you are looking for. The mean value is also close, but I chose median just to exclude outliers.
Now you can use the scale value to scale all the reconstructed points and then proceed to estimating the rotation.
Tip: If there are too many points in the point clouds to find distances between all of them, then a smaller subset of distances will work too, as long as it is the same subset for both point clouds. Ideally, just one distance pair would work if there is no measurement noise, e.g. when one point cloud is directly derived from the other by just rotating it.
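A quick sketch of this approach (my own illustration, using Eigen and the median of all pairwise distance ratios; it assumes the clouds are the same size and have at least two points):

#include <Eigen/Dense>
#include <algorithm>
#include <vector>

// Returns the scale that maps the reconstructed cloud onto the original one.
double EstimateScaleFromDistances(const std::vector<Eigen::Vector3d> &original,
                                  const std::vector<Eigen::Vector3d> &reconstructed)
{
    std::vector<double> ratios;
    for (size_t i = 0; i < original.size(); ++i) {
        for (size_t j = i + 1; j < original.size(); ++j) {
            double d_rec = (reconstructed[i] - reconstructed[j]).norm();
            if (d_rec > 0.0)
                ratios.push_back((original[i] - original[j]).norm() / d_rec);
        }
    }
    // Median of the ratios (robust to outliers, as described above).
    std::nth_element(ratios.begin(), ratios.begin() + ratios.size() / 2, ratios.end());
    return ratios[ratios.size() / 2];
}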
You can also use the ScaleRatio ICP proposed by BaoweiLin.
The code can be found on GitHub.
I have a point that moves (in one dimension), and I need it to move smoothly. So I think that its velocity has to be a continuous function, and I need to control the acceleration and then calculate its velocity and position.
The algorithm doesn't seem something obvious to me, but I guess this must be a common problem, I just can't find the solution.
Notes:
The final destination of the object may change while it's moving and the movement needs to be smooth anyway.
I guess that a naive implementation would produce bouncing, and I need to avoid that.
This is a perfect candidate for using a "critically damped spring".
Conceptually you attach the point to the target point with a spring, or piece of elastic. The spring is damped so that you get no 'bouncing'. You can control how fast the system reacts by changing a constant called the "SpringConstant". This is essentially how strong the piece of elastic is.
Basically you apply two forces to the position, then integrate this over time. The first force is that applied by the spring, Fs = SpringConstant * DistanceToTarget. The second is the damping force, Fd = -CurrentVelocity * 2 * sqrt( SpringConstant ).
The CurrentVelocity forms part of the state of the system, and can be initialised to zero.
In each step, you multiply the sum of these two forces by the time step. This gives you the change in the value of CurrentVelocity. Multiply this by the time step again and it gives you the displacement.
Add this to the actual position of the point.
In C++ code:
#include <cmath>

static const float SPRING_CONSTANT = 5.0f; // see the note below about choosing this value

float CriticallyDampedSpring( float a_Target,
float a_Current,
float & a_Velocity,
float a_TimeStep )
{
float currentToTarget = a_Target - a_Current;
float springForce = currentToTarget * SPRING_CONSTANT;
float dampingForce = -a_Velocity * 2 * sqrt( SPRING_CONSTANT );
float force = springForce + dampingForce;
a_Velocity += force * a_TimeStep;
float displacement = a_Velocity * a_TimeStep;
return a_Current + displacement;
}
In systems I was working with, a value of around 5 was a good starting point for experimenting with the spring constant. Setting it too high will result in too fast a reaction; setting it too low, the point will react too slowly.
Note, you might be best to make a class that keeps the velocity state rather than have to pass it into the function over and over.
I hope this is helpful, good luck :)
EDIT: In case it's useful for others, it's easy to apply this to 2 or 3 dimensions. In this case you can just apply the CriticallyDampedSpring independently once for each dimension. Depending on the motion you want you might find it better to work in polar coordinates (for 2D), or spherical coordinates (for 3D).
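For example, a sketch of the 3D case (my own addition; "Vec3" is a hypothetical plain struct, and CriticallyDampedSpring is the 1D function above):

struct Vec3 { float x, y, z; };

Vec3 CriticallyDampedSpring3D( const Vec3 & a_Target,
                               const Vec3 & a_Current,
                               Vec3 & a_Velocity,
                               float a_TimeStep )
{
    Vec3 result;
    // Apply the 1D spring independently per axis.
    result.x = CriticallyDampedSpring( a_Target.x, a_Current.x, a_Velocity.x, a_TimeStep );
    result.y = CriticallyDampedSpring( a_Target.y, a_Current.y, a_Velocity.y, a_TimeStep );
    result.z = CriticallyDampedSpring( a_Target.z, a_Current.z, a_Velocity.z, a_TimeStep );
    return result;
}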
I'd do something like Alex Deem's answer for trajectory planning, but with limits on force and velocity:
In pseudocode:
xtarget: target position
vtarget: target velocity*
x: object position
v: object velocity
dt: timestep
F = Ki * (xtarget-x) + Kp * (vtarget-v);
F = clipMagnitude(F, Fmax);
v = v + F * dt;
v = clipMagnitude(v, vmax);
x = x + v * dt;
clipMagnitude(y, ymax):
r = magnitude(y) / ymax
if (r <= 1)
return y;
else
return y * (1/r);
where Ki and Kp are tuning constants, Fmax and vmax are maximum force and velocity. This should work for 1-D, 2-D, or 3-D situations (magnitude(y) = abs(y) in 1-D, otherwise use vector magnitude).
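A rough C++ translation of the pseudocode (my own sketch for the 1-D case; the struct and function names are made up):

#include <cmath>

struct PointState { double x = 0.0; double v = 0.0; }; // position and velocity

double clipMagnitude(double y, double ymax)
{
    double r = std::fabs(y) / ymax;
    return (r <= 1.0) ? y : y / r;
}

void Step(PointState &s, double xtarget, double vtarget, double dt,
          double Ki, double Kp, double Fmax, double vmax)
{
    double F = Ki * (xtarget - s.x) + Kp * (vtarget - s.v);
    F = clipMagnitude(F, Fmax);              // limit the force
    s.v = clipMagnitude(s.v + F * dt, vmax); // integrate and limit the velocity
    s.x += s.v * dt;                         // integrate the position
}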
It's not quite clear exactly what you're after, but I'm going to assume the following:
There is some maximum acceleration;
You want the object to have stopped moving when it reaches the destination;
Unlike velocity, you do not require acceleration to be continuous.
Let A be the maximum acceleration (by which I mean the acceleration is always between -A and A).
The equation you want is
v_f^2 = v_i^2 + 2 a d
where v_f = 0 is the final velocity, v_i is the initial (current) velocity, and d is the distance to the destination (when you switch from acceleration A to acceleration -A -- that is, from speeding up to slowing down; here I'm assuming d is positive).
Solving:
d = v_i^2 / (2A)
is the distance. (The negatives cancel).
If the current distance remaining is greater than d, speed up as quickly as possible. Otherwise, begin slowing down.
Let's say you update the object's position every t_step seconds. Then:
new_position = old_position + old_velocity * t_step + (1/2)a(t_step)^2
new_velocity = old_velocity + a * t_step.
If the destination is between new_position and old_position (i.e., the object reached its destination in between updates), simply set new_position = destination.
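Here is a rough 1-D sketch of that logic (my own, not part of the answer; it ignores edge cases such as an initial velocity pointing away from the destination):

#include <cmath>

struct MotionState { double position = 0.0; double velocity = 0.0; };

void Update(MotionState &s, double destination, double A, double t_step)
{
    double remaining = destination - s.position;
    double direction = (remaining >= 0.0) ? 1.0 : -1.0;
    double stopping_distance = (s.velocity * s.velocity) / (2.0 * A); // d = v_i^2 / (2A)

    // Speed up toward the destination until within stopping distance, then slow down.
    double a = (std::fabs(remaining) > stopping_distance) ? direction * A : -direction * A;

    double old_position = s.position;
    s.position += s.velocity * t_step + 0.5 * a * t_step * t_step;
    s.velocity += a * t_step;

    // If the destination was passed between updates, snap to it (and stop).
    if ((destination - old_position) * (destination - s.position) <= 0.0) {
        s.position = destination;
        s.velocity = 0.0;
    }
}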
You need an easing formula, which you would call at a set interval, passing in the time elapsed, the start point, the end point, and the duration you want the animation to take.
Doing time-based calculations will account for slow clients and other random hiccups. Since it calculates based on the time elapsed vs. the time in which it has to complete, it will account for slow intervals between calls when returning how far along your point should be in the animation.
The jquery.easing plugin has a ton of easing functions you can look at:
http://gsgd.co.uk/sandbox/jquery/easing/
I've found it best to pass in 0 and 1 as my start and end points; since it will return a floating-point value between the two, you can easily apply it to the real value you are modifying using multiplication.
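The same idea in C++ (my own sketch; a cubic ease-in-out is picked as an arbitrary example, mapping elapsed/duration to a value in [0, 1]):

#include <algorithm>
#include <cmath>

// Returns a value in [0, 1] describing how far along the animation should be.
double EaseInOutCubic(double elapsed, double duration)
{
    double t = std::min(std::max(elapsed / duration, 0.0), 1.0);
    return (t < 0.5) ? 4.0 * t * t * t
                     : 1.0 - std::pow(-2.0 * t + 2.0, 3.0) / 2.0;
}

// Usage: position = start + (end - start) * EaseInOutCubic(elapsed, duration);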
Standard scaling uses the center of an image as the pivot point and is uniform in all dimensions. I'd like to figure out a way to scale an image from an arbitrary pivot point, such that points closer to the pivot point scale less than points further away from it.
Well, I don't know what framework/library you're using but you can think of it as:
translation to make your pivot point the center point
standard scaling
opposite translation to make the center point the original pivot point
Translation and scaling are isomorphisms, so you can represent them as matrices (using homogeneous coordinates). Each transformation is a matrix, and you can multiply them to find the combined transformation matrix. So:
T = translation
S = scaling
T' = the opposite translation
If you apply T.x, x being a point vector, it gives you the new coordinates. The same goes for S.x.
So if you want to do those operations you have to compute: T'.(S.(T.x))
Matrix multiplication is associative, so it's the same as (T'.S.T).x
If you are using a framework apply three operations (or combine operations and apply).
If you are using crude math... go the matrix way :)
PS: If you are doing this by hand: I know that if you are scaling, you will want to find the coordinates of the original point given a transformed point. So you can iterate over the resulting points (each of the pixels) and see which coordinates (or point in between) from the original image you have to use. In that case what you need is the inverse matrix. So instead of using S you want to use S^(-1). If you know that you want to apply T'.S.T, you can find this resulting matrix and then find (T'.S.T)^(-1). Then you have your inverse matrix to find the original points given the resulting points.
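For illustration, a small Eigen sketch of that composition for a 2D image (my own; it uses 3x3 homogeneous matrices and moves the pivot to the origin rather than to the image center, which gives the same net effect):

#include <Eigen/Dense>

// Builds the combined transform T' * S * T for scaling about an arbitrary pivot.
Eigen::Matrix3d PivotScaleMatrix(const Eigen::Vector2d &pivot, double scale)
{
    Eigen::Matrix3d T = Eigen::Matrix3d::Identity();   // move the pivot to the origin
    T.block<2,1>(0,2) = -pivot;

    Eigen::Matrix3d S = Eigen::Matrix3d::Identity();   // standard scaling about the origin
    S(0,0) = scale;
    S(1,1) = scale;

    Eigen::Matrix3d Tp = Eigen::Matrix3d::Identity();  // move the origin back to the pivot
    Tp.block<2,1>(0,2) = pivot;

    return Tp * S * T;
}

// For resampling by hand, as described above, map destination pixels back through the inverse:
// Eigen::Matrix3d inv = PivotScaleMatrix(pivot, scale).inverse();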
This is an oversimplification, but should help you get started. For one, since standard resampling is uniform, there isn't really a concept of a pivot-point. If anything, they usually just start from a corner, as it's easier to run the for loops that way.
Generally, the algorithm is something like this pseudo-code:
function resample (srcImg, dstSize) {
dstImg = makeImage(dstSize)
for (r = 0; r < dstSize.height; ++r) {
for (c = 0; c < dstSize.width; ++c) {
// getResampleLoc returns float coordinate
resampleLoc = getResampleLoc(c, r, dstImg.size, srcImg.size)
color = getColor(srcImg, resampleLoc)
dstImg.setColor(c, r, color)
}
}
return dstImg
}
For uniform resampling, getResampleLoc is just a simple scale of x and y from the dstImg size to the srcImg size. It returns float coordinates, which are passed to getColor. The implementation of getColor is what determines the various resampling algorithms. Basically, it blends the pixels surrounding the coordinate in some ratio. In reality, there are optimizations that can be done to make information generated inside of getColor shared between calls, but don't worry about that.
For you, you would need something like:
function resample (srcImg, dstSize, pivotPt) {
dstImg = makeImage(dstSize)
for (r = 0; r < dstSize.height; ++r) {
for (c = 0; c < dstSize.width; ++c) {
// getResampleLoc returns float coordinate
resampleLoc = getResampleLoc(c, r, dstImg.size, srcImg.size, pivotPt)
color = getColor(srcImg, resampleLoc)
dstImg.setColor(c, r, color)
}
}
return dstImg
}
And then you just need to implement getResampleLoc to take pivotPt into account. Probably the simplest thing is to log-scale the distance to the edge.
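As a rough sketch (my own; it warps the offset from the pivot with a power curve rather than the log-of-distance-to-edge idea, and all names are illustrative), a pivot-aware getResampleLoc might look like:

#include <cmath>

struct Float2 { float x, y; };

// Maps a destination pixel (c, r) back to a source location. On top of the uniform
// mapping, the offset from the pivot is warped with a power curve, so the effective
// local scale factor varies with distance from the pivot; tune "strength" (and the
// curve itself) to get the falloff you want.
Float2 getResampleLoc(int c, int r, Float2 dstSize, Float2 srcSize, Float2 pivotPt)
{
    // Uniform part: map destination coordinates into source coordinates.
    float sx = (c + 0.5f) * srcSize.x / dstSize.x;
    float sy = (r + 0.5f) * srcSize.y / dstSize.y;

    // Non-uniform part: warp the offset from the pivot.
    float dx = sx - pivotPt.x;
    float dy = sy - pivotPt.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist > 0.0f) {
        float maxDist = std::sqrt(srcSize.x * srcSize.x + srcSize.y * srcSize.y);
        float strength = 1.5f;                                   // tuning constant
        float warped = maxDist * std::pow(dist / maxDist, strength);
        float k = warped / dist;
        sx = pivotPt.x + dx * k;
        sy = pivotPt.y + dy * k;
    }
    return Float2{sx, sy};
}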