I know my question is not really a programming question, but it came out of a programming need. Does anyone happen to know the convolution matrix for a diagonal motion blur? A 3x3, 4x4, or 5x5 kernel would all be fine.
Thanks,
This is 5x5:
0.22222 0.27778 0.22222 0.05556 0.00000
0.27778 0.44444 0.44444 0.22222 0.05556
0.22222 0.44444 0.55556 0.44444 0.22222
0.05556 0.22222 0.44444 0.44444 0.27778
0.00000 0.05556 0.22222 0.27778 0.22222
I basically drew a diagonal line, and then blurred it a little.
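In case it helps, here is a minimal Python sketch of the same idea (draw a diagonal line, blur it slightly), assuming NumPy and SciPy are available; note that I also normalize the kernel so the overall image brightness is preserved:

import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def diagonal_motion_blur_kernel(size=5, sigma=0.7):
    # a diagonal line of ones, softened with a small Gaussian blur
    kernel = gaussian_filter(np.eye(size), sigma)
    # normalize so the kernel sums to 1 and doesn't brighten/darken the image
    return kernel / kernel.sum()

# usage on a 2-D grayscale image (a NumPy array):
# blurred = convolve(image, diagonal_motion_blur_kernel(5), mode='reflect')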
I have a stationary camera, which is 640mm above the ground and is tilted slightly forward (about the x-axis?) roughly 30 degrees, so it's looking slightly down towards a flat ground plane.
My goal is to determine the distance from the camera to any small object it detects on the ground. For example, if the camera detects an object at pixel [314, 203], I would like to know where on the ground that object is in world coordinates [x, y, z] (with y = 0), and the distance to it.
I've drawn a diagram to better visualize the problem:
[camera plane diagram]
I have my rotation matrix and translation vector and also my intrinsic matrix, but I'm not sure 1) if the rotation matrix and translation vector are correct given the information/diagram above, and 2) how to proceed with figuring out a mathematical formula for finding distance and real world location. Here is what I have so far:
Rotation matrix R (generated here https://www.andre-gaschler.com/rotationconverter/) from the orientation [-30, 0, 0] (degrees)
R =
[ 1.0000000, 0.0000000, 0.0000000;
0.0000000, 0.8660254, -0.5000000;
0.0000000, 0.5000000, 0.8660254 ]
Camera is 640mm above ground plane
t =
[0, 640, 0]
Intrinsic matrix from calibration information given by camera
fx=349.595, fy=349.505, cx=328.875, cy=178.204
K =
[ 349.595, 0.0000, 328.875;
0.0000, 349.505, 178.204;
0.0000, 0.0000, 1.000 ]
I also have these distortion parameters; I'm unsure what to do with them or whether they are relevant to K:
k1=-0.170901, k2=0.0255027, k3=-9.30328e-11, p1=0.000117187, p2=6.42836e-05
I get about this far and then I'm lost. Any help would be much appreciated.
Also, sorry if this is a lot of information or if it's confusing in any way; I'm very much a beginner when it comes to projection matrices.
UPDATE:
After some more research and testing on my own, I found a formula that seems to give a somewhat decent approximation. Given a pixel [x, y], I find (I think) the direction vector from the camera origin through that pixel with:
dir_x = (x - cx) / fx
dir_y = (y - cy) / fy
dir_z = 1
I then multiply this by the rotation matrix R, which gives me the ray direction in world coordinates. I then divide my camera height (640 mm) by the y-value of that vector, which (I think) gives me the distance to the specified pixel in the real world. After some testing and measuring by hand, this seems to be an adequate method for finding the distance, but I'm not sure if I'm missing steps for accuracy or if I'm actually doing this completely wrong.
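For reference, here is a minimal Python sketch of what I'm doing, using the R, t, and K values above and ignoring the distortion parameters (the sign of the scale factor assumes the world y-axis points up; flip it if yours points down):

import numpy as np

# intrinsics from the calibration above
fx, fy, cx, cy = 349.595, 349.505, 328.875, 178.204

# camera rotated about the x-axis, mounted 640 mm above the ground plane (y = 0)
R = np.array([[1.0, 0.0,        0.0      ],
              [0.0, 0.8660254, -0.5      ],
              [0.0, 0.5,        0.8660254]])
cam_pos = np.array([0.0, 640.0, 0.0])  # translation t, in mm

def pixel_to_ground(u, v):
    # viewing-ray direction in camera coordinates
    d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # rotate the ray into world coordinates
    d_world = R @ d_cam
    # scale the ray so it intersects the ground plane y = 0
    s = -cam_pos[1] / d_world[1]
    ground_point = cam_pos + s * d_world
    distance = np.linalg.norm(ground_point - cam_pos)
    return ground_point, distance

point, dist = pixel_to_ground(314, 203)  # the example pixel from above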
Again, any insight is greatly appreciated.
I know that there is set_origin_pose to shift a pose in X/Y/Z.
But I was not able to rotate a pose about its own X, Y, or Z axis. I can't simply add an angle to the pose's angle values, because they refer to the camera's coordinate system.
How can a pose be rotated?
Solved by converting the pose to a homogeneous 3D matrix (hom_mat3d), rotating that matrix with hom_mat3d_rotate_local, and then converting it back to a pose:
* shift base pose
set_origin_pose (CalculationPose, X1, 0, Y1, CalculationPose)
disp_3d_coord_system (3600, CameraParam, CalculationPose, 0.1)
* rotate base pose
pose_to_hom_mat3d (CalculationPose, CalculationMat)
hom_mat3d_rotate_local (CalculationMat, -AngleRad, 'y', CalculationMatRotated)
hom_mat3d_to_pose (CalculationMatRotated, CalculationPose)
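The underlying operation is just composing the pose's homogeneous transformation matrix with a rotation on the right-hand side, so the rotation happens in the pose's own (local) frame. A minimal numpy sketch of the same idea, outside of HALCON:

import numpy as np

def rotate_local_y(hom_mat, angle_rad):
    # rotation about the pose's own y-axis: multiplying on the right applies
    # the rotation in the local frame (multiplying on the left would rotate
    # about the reference/camera frame instead)
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    r_y = np.array([[  c, 0.0,   s, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [ -s, 0.0,   c, 0.0],
                    [0.0, 0.0, 0.0, 1.0]])
    return hom_mat @ r_y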
I am working with hyperspheres and would like to use healpy to create the three angles
ksi, theta, phi for a hypersphere at a given nside.
How do I do that considering that these are the four coordinates of this hypersphere:
u1 = sin(ksi) * sin(theta) * cos(phi)
u2 = sin(ksi) * sin(theta) * sin(phi)
u3 = sin(ksi) * cos(theta)
u4 = cos(ksi)
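To make it concrete, here is a minimal Python sketch of what I have in mind: taking theta and phi from an ordinary HEALPix grid via healpy.pix2ang and picking values of ksi separately (the evenly spaced ksi below is just a placeholder, not a proper pixelisation of the hypersphere):

import numpy as np
import healpy as hp

nside = 16
npix = hp.nside2npix(nside)
theta, phi = hp.pix2ang(nside, np.arange(npix))  # angles on the ordinary 2-sphere

ksi = np.linspace(0.0, np.pi, 10)  # placeholder sampling of the third angle
k = ksi[3]

# Cartesian coordinates on the unit 3-sphere for this value of ksi
u1 = np.sin(k) * np.sin(theta) * np.cos(phi)
u2 = np.sin(k) * np.sin(theta) * np.sin(phi)
u3 = np.sin(k) * np.cos(theta)
u4 = np.cos(k) * np.ones_like(theta)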
Thanks,
Marco
I'm trying to implement a simple physically-accurate raytracer. I have it working in grayscale (so with light intensities) but I'm struggling with colors.
How do I calculate the interaction between colored (non-white) light and the surface color? Say the light color is rgb(1.0, 0.9, 0.8) and the surface is rgb(0.8, 0.9, 1.0).
In a very basic manner:
Let's assume you've chosen the Phong shading model, or you've chosen not to do any specific shading.
1. You need the scene's ambient coefficient (a coefficient that describes the overall intensity of the colors in the scene); let's say it's 0.3. Multiply the object's color by that coefficient.
2. Then you calculate the Phong shading model, or you just take the color of the object without any special shading model.
3. Then you calculate the color of the next object, if your reflection vector hits any, again starting from step 1 (recursive).
4. Sum all of the results.
Code:
Color3 trace(..)
{
    ...
    // ambient term: the object's color scaled by the scene's ambient coefficient
    Color3 ambient = object.color * 0.3;
    // direct term: the Phong shading result, or just object.color if no shading model is used
    Color3 phong = phongModel(..);
    // recursive term: the color returned by tracing the reflection ray (if it hits anything)
    Color3 reflection = trace(..);
    // sum of all contributions
    return ambient + phong + reflection;
}
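As for the colored-light part of the question: the usual approach is to multiply the light color and the surface color component-wise (per channel), so each channel of the light is attenuated by the surface's reflectance in that channel. A minimal sketch of the arithmetic (in Python, just for illustration):

def shade(light_rgb, surface_rgb):
    # per-channel product of light color and surface reflectance
    return tuple(l * s for l, s in zip(light_rgb, surface_rgb))

print(shade((1.0, 0.9, 0.8), (0.8, 0.9, 1.0)))  # roughly (0.8, 0.81, 0.8)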
I am getting
Matrix3.getInverse(): can't invert matrix, determinant is 0 three.js 3976
error when I am trying to scale a cube object.
var object = new THREE.Mesh(geometry, material);
var xScale = 0.1;
object.scale.x = object.scale.y = object.scale.z = xScale;
Could someone help me out with this?
Matrix3.getInverse(): can't invert matrix, determinant is 0 usually happens when scale.x, scale.y, or scale.z is 0. Make sure you're not scaling the object to 0.
I think you may be trying to use a Matrix3 where a Matrix4 is required. At least in r61 of the three.js library, the line you refer to is pulling from the matrix array beyond index 8 (i.e. a 16-element matrix vs. a 9-element one).
If you need any advice beyond that, provide some code and a description of what you're trying to achieve with the inverse matrix. Good luck!