I’m learning how to use SphericalPlot3D. I’m using the following statement:
SphericalPlot3D[Cos[θ],{θ,0,π},{ɸ,0,2 π}]
I was expecting to get a cosine curve along the y-axis rotated around the same axis, more or less like an hourglass. What I get is a sphere. What do I have to specify in SphericalPlot3D in order to get the rotated vertical cosine?
When I use the following statement:
SphericalPlot3D[Cos[2 θ],{θ,0,π},{ɸ,0,2 π}]
I would expect to get two hourglasses, one standing on the other along the y-axis. I get something different. Where am I going wrong?
Thanks/Mikael
I guess what you want is a surface of revolution: a cosine shape rotated about one axis to get an hourglass. That is more of a cylindrical plot. A spherical plot more or less bends the top and bottom together to form a kind of sphere.
My solution for your problem would be:
RevolutionPlot3D[{Cos[t], t}, {t, 0, π}, {ɸ, 0, 2 π}]
which gives the following plot:
This may be a bit late, but a friend of mine just had the same problem so I thought I'd document the solution somewhere.
The confusion comes from the way in which Mathematica defines θ and φ. In most cases they're defined with θ as the azimuthal angle in the x-y plane and φ as the polar angle measured from the z-axis.
However, Mathematica actually defines θ as what you think should be φ, and φ as what you think should be θ. The following screenshot, taken from the documentation page for SphericalPlot3D, explains these definitions.
A simple fix to this is to swap the way you define the variables when calling SphericalPlot3D. So instead of writing
SphericalPlot3D[Cos[2 θ],{θ,0,π},{ɸ,0,2 π}]
You would want to write:
SphericalPlot3D[Cos[2 θ],{ɸ,0,2 π},{θ,0,π}]
I'm trying to replicate a real-world camera within Three.js, where I have the camera's distortion specified as parameters for a "plumb bob" model. In particular I have P, K, R and D as specified here:
If I understand everything correctly, I want to set up Three to do the translation from "X" on the right to the "input image" in the top left. One approach I'm considering is making a Three.js Camera of my own and setting the projectionMatrix to ... something involving K and D. Does this make sense? What exactly would I set it to, considering that K and D are specified as one-dimensional arrays of length 9 and 5 respectively? I'm a bit lost on how to combine all the numbers :(
I notice in this answer that there are complicated things necessary to render straight lines as curved, the way they would be with certain camera distortions (like a fish-eye lens). I do not need that for my purposes if it is more complicated; simply rendering each vertex in the correct spot is sufficient.
This document shows the step by step derivation of the camera matrix (Matlab reference).
See: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html
So, yes: you can calculate the matrix using this procedure and use it to map a real-world 3D point to the 2D output image.
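If it helps to see the bookkeeping spelled out, here is a minimal sketch (Python/NumPy rather than JavaScript, and assuming the ROS/OpenCV plumb-bob convention: the 9-element K is the 3x3 intrinsic matrix in row-major order, D = [k1, k2, p1, p2, k3], and R plus a translation t stand in for the full extrinsics) of how one 3D point ends up at a pixel:

import numpy as np

def project_point(X_world, R, t, K, D):
    # Hypothetical helper for illustration: maps a 3D point to pixel
    # coordinates with the plumb-bob model under the assumptions above.
    K = np.asarray(K, dtype=float).reshape(3, 3)
    k1, k2, p1, p2, k3 = D

    # 1. world -> camera coordinates
    Xc = np.asarray(R, dtype=float).reshape(3, 3) @ np.asarray(X_world, dtype=float) + np.asarray(t, dtype=float)

    # 2. perspective division to normalised image coordinates
    xn, yn = Xc[0] / Xc[2], Xc[1] / Xc[2]

    # 3. radial + tangential distortion of the normalised coordinates
    r2 = xn * xn + yn * yn
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = radial * xn + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn * xn)
    yd = radial * yn + p1 * (r2 + 2 * yn * yn) + 2 * p2 * xn * yn

    # 4. intrinsics: focal lengths and principal point give pixel coordinates
    u = K[0, 0] * xd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return u, v

Since steps 2 and 3 are nonlinear, the distortion part cannot be folded into a single 4x4 projectionMatrix in Three.js; it would have to be applied per vertex (for instance in a custom vertex shader) or dropped when the coefficients are small.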
Given a shape in a logical image, I am trying to extract the field of view from any point inside the shape in MATLAB:
I tried something that tests every line going through the point, but it takes a really long time. (I hope to do it for every point of the shape, or at least every point of its contour, which is quite a few points.)
I think a faster method would be to work iteratively by expanding a disk from the considered point, but I am not sure how to do it.
How can I find this field of view in an efficient way?
Any ideas or solution would be appreciated, thanks.
Here is a possible approach (the principle behind the function I wrote, available on Matlab Central):
I created this test image and an arbitrary point of view:
testscene=zeros(500);
testscene(80:120,80:120)=1;
testscene(200:250,400:450)=1;
testscene(380:450,200:270)=1;
viewpoint=[250, 300];
imsize=size(testscene); % checks the size of the image
It looks like this (the circle marks the view point I chose):
The next line computes the longest distance to the edge of the image from the viewpoint:
maxdist=max([norm(viewpoint), norm(viewpoint-[1 imsize(2)]), norm(viewpoint-[imsize(1) 1]), norm(viewpoint-imsize)]);
angles=1:360; % use smaller increment to increase resolution
Then generate a set of points uniformly distributed around the viewpoint:
endpoints=bsxfun(@plus, maxdist*[cosd(angles)' sind(angles)'], viewpoint);
intersec=zeros(numel(angles),2); % preallocate the intersection coordinates
for k=1:numel(angles)
    % sample the image along the line from the viewpoint to the k-th endpoint
    [CX,CY,C] = improfile(testscene,[viewpoint(1), endpoints(k,1)],[viewpoint(2), endpoints(k,2)]);
    idx=find(C); % first nonzero sample: an obstacle or the edge of the image
    intersec(k,:)=[CX(idx(1)), CY(idx(1))];
end
What this does is draw a line from the viewpoint in each direction specified in the array angles and look for the position of the intersection with an obstacle or the edge of the image.
This should help visualizing the process:
Finally, let's use the built-in roipoly function to create a binary mask from a set of coordinates:
FieldofView = roipoly(testscene,intersec(:,1),intersec(:,2));
Here is how it looks (obstacles in white, visible field in gray, viewpoint in red):
I'm working on a plugin for 3ds Max. In this plugin, I export the geometry information into a .rib file which can be rendered by a RenderMan renderer. When I export a NURBS curve's data into the .rib file, described by RiBasis and RiCurve, I use the RtBsplineBasis in RiBasis, but I get the wrong result: the rendered curve is shorter than the result of 3ds Max's renderer. When I repeat the first and the last control vertex, the curve is long enough, but its shape is a little different. Can anyone tell me why I get the wrong result, or what RiBasis means? How can I get the correct RiBasis? Thank you very much!
RiCurve draws a cubic spline. The control points do not uniquely determine the curve; you also need the basis, which is expressed as a 4x4 matrix -- one matrix gives the coefficients you need for a B-spline, another for Bezier, another for Catmull-Rom, and so on, and of course you can also supply the matrix yourself for some kind of hybrid interpolant that isn't quite one of the standard three or four. The basis determines the character of the spline -- whether the curve is guaranteed to go through the control points or merely approximates them, the degree of continuity, the "tension", and so on.
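To make that concrete, here is a small numeric sketch (Python rather than the RenderMan C API, assuming the usual power-basis convention p(t) = [t^3 t^2 t 1] . B . [P0 P1 P2 P3]^T) showing how the same four control vertices behave under the Bezier and B-spline bases:

import numpy as np

# Standard cubic basis matrices in the power-basis convention above
BEZIER = np.array([[-1,  3, -3, 1],
                   [ 3, -6,  3, 0],
                   [-3,  3,  0, 0],
                   [ 1,  0,  0, 0]], dtype=float)

BSPLINE = np.array([[-1,  3, -3, 1],
                    [ 3, -6,  3, 0],
                    [-3,  0,  3, 0],
                    [ 1,  4,  1, 0]], dtype=float) / 6.0

def spline_point(basis, P, t):
    # Evaluate one cubic segment at parameter t in [0, 1];
    # P is a 4x2 array with one control vertex per row.
    T = np.array([t**3, t**2, t, 1.0])
    return T @ basis @ P

P = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
print(spline_point(BEZIER, P, 0.0))   # [0. 0.]       -- Bezier starts exactly on P0
print(spline_point(BSPLINE, P, 0.0))  # ~[1.   1.67]  -- B-spline starts at (P0 + 4*P1 + P2)/6

That last value is why a B-spline can look "shorter" than expected: the segment begins and ends inside the control hull rather than on the first and last control vertices, and duplicating those vertices pulls the curve back toward them at the cost of slightly changing the shape.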
There is a great discussion in one of the appendices of "The RenderMan Companion," including numeric examples of how different basis matrices affect the interpolation.
It sounds like you requested a B-spline basis, which is approximating (not interpolating) and continuous in both 1st and 2nd derivatives. Maybe that's not what you had in mind. It's hard to tell, since you didn't describe the properties of the spline that you were hoping for.
As an aside, approximating an arbitrary NURBS curve with a nonrational cubic is not always going to give you an exact match. Something else to keep in mind.
My code to calculate the minimum translation vector using the Separating Axis Theorem works perfectly well, except when one of the polygons is completely contained by another polygon. I have scoured the internet for the solution to this problem and everyone just seems to ignore it ( http://www.codezealot.org/archives/55#sat-contain talks about this, but doesn't give a full solution...)
The picture below is a screenshot from my program illustrating the problem. The translucent blue triangle is the position of the rectangle before the MTV is applied, and the other triangle is with the MTV applied.
It seems to me that the link you shared does give a solution to this. In your MTV calculation, you have to test for complete containment in a projection and change the calculations accordingly. (The pseudocode is in reference to figure 9 on that page.) Perhaps if you post your code, we can comment on why it isn't working.
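For what it's worth, here is a sketch of the per-axis check in the spirit of that article's pseudocode (Python, with illustrative names; not taken from your code): when one projected interval completely contains the other, the plain overlap underestimates the push distance, so the smaller gap between the interval ends is added.

def axis_overlap(a_min, a_max, b_min, b_max):
    # Overlap of the two shapes' projections on one candidate axis,
    # handling the containment case (cf. figure 9 in the linked article).
    overlap = min(a_max, b_max) - max(a_min, b_min)
    if overlap < 0:
        return None  # separating axis found: no collision at all

    # One interval completely inside the other: add the smaller distance
    # between the interval ends, otherwise the MTV comes out too short.
    if (a_min <= b_min and a_max >= b_max) or (b_min <= a_min and b_max >= a_max):
        overlap += min(abs(a_min - b_min), abs(a_max - b_max))
    return overlap

The rest of the MTV calculation stays the same: keep the axis with the smallest such overlap and orient it so it points from one shape toward the other.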
I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image distortion code but I'm struggling with finding out what will create similar effects. The closest reference I could find was the iWarp filter in Gimp but the code for that isn't commented at all.
I've also looked at places like ImageMagick, but they don't have anything in this area.
Any pointers or a description of algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic; I'm not sure how much you know about gfx programming or even what techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the r/g channels. Then the output colour of a pixel would be:
Texture inputImage
Texture distortionMap
colour(x,y) = inputImage(x + distortionMap(x, y).R, y + distortionMap(x, y).G)
(To tell the truth this isn't quite right: using the colours as offsets directly means you can only represent positive vectors. It's simple enough to subtract 0.5 so that you can represent negative vectors.)
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any image would generate a distortion of some kind, obviously; working out a proper liquify effect is quite complex and I'll leave it to someone more qualified).
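As a rough CPU-side illustration of the same idea (Python/NumPy rather than HLSL, with signed floating-point offsets so the 0.5 bias isn't needed; nearest-neighbour sampling keeps the sketch short where a real implementation would interpolate bilinearly):

import numpy as np

def apply_distortion(image, dx, dy):
    # Warp `image` (H x W, or H x W x C) by per-pixel offsets dx, dy (each H x W):
    # output pixel (x, y) is read from the input at (x + dx, y + dy),
    # i.e. the same lookup as colour(x,y) = inputImage(x + map.R, y + map.G).
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + dx).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + dy).astype(int), 0, h - 1)
    return image[src_y, src_x]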
I think Liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
Now when the user clicks on a location and moves the mouse, he's changing the grid locations.
The new grid is again projected into the 2D viewable space of the user.
Check this tutorial about a way to implement the Liquify filter with JavaScript. Basically, in the tutorial, the effect is achieved by transforming the pixel's Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
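For a rough idea of what that looks like outside the browser, here is a sketch in Python/NumPy (not the tutorial's JavaScript; the function and parameter names are made up for illustration). It remaps each pixel's distance to a chosen centre with a square root inside a given radius and leaves everything else untouched:

import numpy as np

def radial_sqrt_warp(image, cx, cy, radius):
    # Build polar coordinates (r, alpha) of every pixel around (cx, cy).
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy)
    alpha = np.arctan2(dy, dx)

    # Inside the affected circle, replace the normalised radius with its sqrt.
    rn = np.clip(r / radius, 0.0, 1.0)
    r_src = np.where(r < radius, np.sqrt(rn) * radius, r)

    # Back to Cartesian: where each output pixel reads from (nearest-neighbour).
    src_x = np.clip(np.round(cx + r_src * np.cos(alpha)).astype(int), 0, w - 1)
    src_y = np.clip(np.round(cy + r_src * np.sin(alpha)).astype(int), 0, h - 1)
    return image[src_y, src_x]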