I am trying to use ArUco markers and estimate the pose of single markers. Sometimes I get weirdly large values, such as:
Marker ID 2 : [-1.11133e+06, -918896, 3.3727e+06] , [-3.22862e+08, 4.49601e+08, -5.05835e+08]
Has anyone experienced this problem?
Setting the useExtrinsicGuess flag to true caused this problem. Now I am using solvePnP directly with CV_ITERATIVE. The pose values are not perfectly stable or consistent, but they are still better than before. There is still an occasional Z-axis flip.
I've been following some tutorials to make this simple sandbox with a test .glb file.
https://codesandbox.io/s/zen-black-et9cs?file=/src/App.js
Everything seems to work except the shadows. I can't find any missing castShadow/receiveShadow/shadowMap declarations anywhere... just not sure what I'm missing.
Thanks if you can point to my mistake!
Increasing the shadow map size to such a high value is not a good approach, since it is bad for performance.
Instead, decrease the frustum of the shadow camera. Use the following values:
shadow-camera-near={0.1}
shadow-camera-far={20}
shadow-camera-left={-10}
shadow-camera-right={10}
shadow-camera-top={10}
shadow-camera-bottom={-10}
Keep in mind that you can visually debug and thus better optimize the shadow camera by using THREE.CameraHelper.
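For reference, here is a minimal react-three-fiber sketch of how those props attach to the shadow-casting light. The light position and map size here are placeholder values of mine, not taken from the sandbox:

```jsx
<directionalLight
  castShadow
  position={[5, 10, 5]}
  shadow-mapSize-width={1024}
  shadow-mapSize-height={1024}
  // Tight frustum around the scene => better use of the shadow map's resolution.
  shadow-camera-near={0.1}
  shadow-camera-far={20}
  shadow-camera-left={-10}
  shadow-camera-right={10}
  shadow-camera-top={10}
  shadow-camera-bottom={-10}
/>
```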
I figured it out: it was shadow-mapSize-width and height. 1024 wasn't enough; I had to bump that number much higher (it works at 10240). Not sure why this is the case, perhaps my imported model is at a different scale or something. But it works now!
I have InstancedBufferGeometry working in my scene. However, some of the instances are mirrors of the source, so they have a negative scale to represent the geometry.
This flips the winding order of those instances, and they look wrong due to back-face culling (which I want to keep).
I'm fully aware of the limitations within this approach, but I was wondering if there was a way to tackle this that I may have not come across yet? Maybe some trick in the shader to specify which ones are front face and which are back face? I don't recall this being possible though...
Or should I be doing two separate loads? (Which will duplicate the draw calls)
I'm loading a lot of different geometries (which are all instanced) so trying to make sure I get the best performance possible.
Thanks!
Ant
[EDIT: Added a little more detail]
It would help if you provided an example. As far as I understand your question, the simple answer is: no, you can't do that.
As far as I'm aware, primitives are culled before they reach the fragment shader, meaning it's not in your control. If you want to use negative scaling and make sure that surfaces are still visible, enable rendering of both faces (front and back, i.e. THREE.DoubleSide).
Alternatively, you might be okay with simply rotating objects and sticking to positive scale. If you have to have mirroring, though, you're out of luck there.
One more idea: use two instanced objects, one with the normal geometry and one with a mirrored copy, with the winding order and normals fixed up in the mirrored geometry.
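The mirrored-copy idea can be sketched in plain JavaScript. This assumes an indexed triangle mesh with flat [x, y, z, ...] position/normal arrays; the function name mirrorGeometryX is mine, not a three.js API, but the same operations apply to BufferGeometry attributes:

```javascript
// Build a mirrored (across the YZ plane) copy of an indexed triangle mesh so it
// can be drawn with positive scale and correct winding under back-face culling.
function mirrorGeometryX(positions, normals, indices) {
  const p = positions.slice();
  const n = normals.slice();
  // Negate the x component of every position and normal.
  for (let i = 0; i < p.length; i += 3) {
    p[i] = -p[i];
    n[i] = -n[i];
  }
  // Mirroring flips the winding order; swap two indices per triangle
  // to restore counter-clockwise front faces.
  const idx = indices.slice();
  for (let t = 0; t < idx.length; t += 3) {
    const tmp = idx[t + 1];
    idx[t + 1] = idx[t + 2];
    idx[t + 2] = tmp;
  }
  return { positions: p, normals: n, indices: idx };
}
```

This does cost a second draw call per geometry (one for the normal instances, one for the mirrored ones), but only doubles them rather than breaking instancing entirely.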
I am using the built-in radon function from the Image Processing Toolbox in MATLAB. Until today, I had been using some custom functions that gave me the results I expected. Particularly, I am developing a mathematical model that retrieves the projections of a Point Spread Function (PSF) in several directions (the baseline is 0/45/90/135 degrees).
I have prepared a really simple example that shows the problem I am experiencing:
I = zeros(1000,1000);
I(250:750, 250:750) = 1;   % centered square
theta = [0 45 90 135];
[R,xp] = radon(I,theta);
figure; plot(R); legend('0°','45°','90°','135°');
If you run the example, you will see that the plots for 45°/135° (the diagonals) show a saw-tooth artifact along the curve. At first I thought it had to do with the sampling grid I am using (an even number of points), but with a grid with an odd number of points the problem remains. I do not quite understand this result, since the Radon transform is just a cumulative integral along several directions, so I should not get this saw-tooth pattern.
I am really confused about the result. Has anybody experienced the same problem?
Thanks in advance.
It is an aliasing artifact that appears when you use a simple forward projector, which I believe is what the radon() function implements. To remove this artifact, you need to increase the number of samples (radon() probably uses the same number of samples as the phantom; you might want to increase that to at least double the phantom sampling), or implement a better forward projector, such as a distance-driven projector, which is used in GE's CT image reconstruction software.
I have a problem finding defects at the edge of a circular object. It's hard to describe, so here is a picture which may help a bit. I am trying to find the red marked areas, such as those shown below:
I already tried matching with templates vision.TemplateMatcher(), but this only works well for the picture I made the template of.
I tried matching with vision.CascadeObjectDetector(), trained with 150 images, and got fewer than 5% correct results.
I also tried matching with detectSURFFeatures() and then matchFeatures(), but this only works on quite similar defects (it fails when the edges are not closed).
Since the defects are close to half a circle, I tried to find them with imfindcircles(), but it returns many candidate results. When I take the one with the highest metric I sometimes get the right one, but not even close to 30% of the time.
Does anyone have an idea what I could try in order to find more than at least 50% of the defects?
If someone has an idea and wants to try something, I have added another picture.
Since I am new I can only add two pictures, but if you need more I can provide them.
Are you trying to detect rough edges like that on smooth binary overlays like the ones you provided? E.g., are you making a program whose input is a black image with lots of rough-edged circles that it is then supposed to detect, i.e. sudden rough discontinuities in an otherwise very smooth region?
If that assumption is valid, then this may be solvable via classical signal processing. In my opinion, plot a graph of the intensity along a line between any two points, one outside and one inside the circle. It should look like:
.. continuous constant ... continuous constant .. continuous constant.. DISCONTINUOUS VARYING!! DISCONTINUOUS VARYING!! DISCONTINUOUS VARYING!! ... continuous constant .. continuous constant..
Write your own function to detect these discontinuities.
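Such a function can be a few lines once the intensity profile is sampled into a 1-D array. This is a minimal sketch under that assumption; the function name and threshold choice are mine:

```javascript
// Flag indices in a 1-D intensity profile where the jump between neighbouring
// samples exceeds a threshold, i.e. the "DISCONTINUOUS VARYING" stretches.
function findDiscontinuities(profile, threshold) {
  const hits = [];
  for (let i = 1; i < profile.length; i++) {
    if (Math.abs(profile[i] - profile[i - 1]) > threshold) {
      hits.push(i);
    }
  }
  return hits;
}
```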
OR
Gradient: the rate of change of a quantity with respect to a distance measure.
Use the very famous Sobel (gradient) filter.
Apply the X-axis version of the filter and check the result; if it gives you something detectable, use it, then do the same for the Y-axis version of the filter.
In case you're wondering: if you're using MATLAB, you just need to take the readily available 3x3 kernel (seen almost everywhere on the internet) and plug it into the imfilter function, or use the built-in implementation, edge(image,'sobel') (if you have the required toolbox).
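To make the filter concrete, here is a plain-JavaScript sketch of that 3x3 Sobel X convolution on a grayscale image stored as an array of rows (imfilter/edge do this, and much more, for you; the helper names here are my own):

```javascript
// Sobel X kernel: responds strongly to horizontal intensity changes
// (i.e. vertical edges).
const SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]];

// Naive 3x3 convolution; border pixels are left at 0 for brevity.
function convolve3x3(img, kernel) {
  const h = img.length, w = img[0].length;
  const out = Array.from({ length: h }, () => new Array(w).fill(0));
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      let sum = 0;
      for (let ky = -1; ky <= 1; ky++) {
        for (let kx = -1; kx <= 1; kx++) {
          sum += kernel[ky + 1][kx + 1] * img[y + ky][x + kx];
        }
      }
      out[y][x] = sum;
    }
  }
  return out;
}
```

Transposing SOBEL_X gives the Y-axis version; large magnitudes in either output mark candidate edge pixels.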
I'm new to d3 and just getting the hang of it. My requirement is pretty simple: I want the x-axis to be log-scaled to the base of some decimal number. The default log scale has a base of 10, and scouring the reference API and the web hasn't yielded a way of changing the base. Maybe I'm missing something basic about d3, but I can't seem to get past this obstacle. Ideally, shouldn't there be a log.base() similar to the pow.exponent() for the power scale?
d3.scale.log().base(2) seems to work fine. (As Adrien Be points out.)
There isn't such a function (although it wouldn't be too hard to add one). Your best bet is to write your own function that applies the log transformation in the base you specify and then passes the result on to a normal linear scale to get the final value.
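That workaround can be sketched in a few lines; makeLogScale is a hypothetical helper of mine, not a d3 API, and it only covers the forward mapping (no ticks, no invert). Modern d3 also provides d3.scaleLog().base(b) directly:

```javascript
// Map x from a [d0, d1] domain onto a [r0, r1] range on a log scale of the
// given base: take the log in that base, then interpolate linearly.
function makeLogScale(base, domain, range) {
  const log = (x) => Math.log(x) / Math.log(base);
  const d0 = log(domain[0]), d1 = log(domain[1]);
  const [r0, r1] = range;
  return (x) => r0 + ((log(x) - d0) / (d1 - d0)) * (r1 - r0);
}
```

For example, with base 2 and domain [1, 8], the value 4 lands two thirds of the way along the range, since log2(4) = 2 out of log2(8) = 3.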