Solving an equation in terms of unknown constants in Wolfram Mathematica

I want to solve the following equation and get an expression for x in terms of the unknown constants alpha and beta. Does anyone know how to solve this in MATLAB or Mathematica?
Thanks.
Here's my one-line attempt in Wolfram Mathematica:
Assuming[alpha > beta > 0, Solve[Cos[alpha*Cos[x]] + Cos[beta*Cos[x]] - 1.96 == 0, x]]

Since it doesn't appear simple to get an analytic solution, perhaps a graphic showing the behavior might provide some insight about what to do next.
ListPointPlot3D[
 Reap[
   Do[
    (* sample random (alpha, beta, x) and keep the near-solutions *)
    {alpha, beta, x} = RandomReal[{0, 2 Pi}, 3];
    If[alpha > beta,
     err = Norm[Cos[alpha*Cos[x]] + Cos[beta*Cos[x]] - 1.96];
     If[err < .01, Sow[{alpha, beta, x}]]],
    {10^6}]][[2, 1]],
 ViewPoint -> {0, -2., 0}]
Once that displays on your monitor, you can either adjust the numbers inside that ViewPoint option, or place your mouse inside the graphic, press and hold the left mouse button, and drag to rotate the image.
That graphic seems to show that the solutions lie within a fairly well-defined region.
Once you have looked at this, you might bump the range of the random numbers up to {0, 4 Pi}, because it looks like there is more interesting behavior for larger values of alpha and beta.
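If, on the other hand, specific numeric values of alpha and beta are acceptable, you can solve for x directly with FindRoot. A minimal sketch (alpha = 2.5, beta = 1.5 and the starting point x = 1 are arbitrary choices of mine):
(* arbitrary example values for alpha and beta; {x, 1} is just a starting guess *)
With[{alpha = 2.5, beta = 1.5},
 FindRoot[Cos[alpha Cos[x]] + Cos[beta Cos[x]] - 1.96 == 0, {x, 1}]]
FindRoot returns one root near the starting guess; scanning several starting points would pick up the other solutions visible in the plot.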

Related

How to use SphericalPlot3D in Mathematica

I’m learning how to use SphericalPlot3D. I’m using the following statement:
SphericalPlot3D[Cos[θ], {θ, 0, π}, {φ, 0, 2 π}]
I was expecting to get a cosine curve along the y-axis rotated around that same axis, more or less like an hourglass. What I get is a sphere. What do I have to specify in SphericalPlot3D in order to get the rotated vertical cosine?
When I use the following statement:
SphericalPlot3D[Cos[2 θ], {θ, 0, π}, {φ, 0, 2 π}]
I would expect to get two hourglasses, one standing on the other along the y-axis. I get something different. Where am I going wrong?
Thanks/Mikael
I guess what you wanted is a surface of revolution: a cosine shape rotated about one axis to form an hourglass. That is more of a cylindrical plot. A spherical plot more or less bends the top and bottom together to form a kind of sphere.
My solution for your problem would be:
RevolutionPlot3D[{Cos[t], t}, {t, 0, π}, {φ, 0, 2 π}]
which gives the following plot:
This may be a bit late, but a friend of mine just had the same problem so I thought I'd document the solution somewhere.
The confusion comes from the way in which Mathematica defines θ and φ. In most textbook conventions, θ is the azimuthal angle in the x-y plane and φ is the polar angle measured from the z-axis. However, Mathematica defines θ as what you would think should be φ (the polar angle from the z-axis) and φ as what you would think should be θ (the azimuth). The documentation page for SphericalPlot3D has a diagram explaining these definitions.
A simple fix is to swap the two variable specifications when calling SphericalPlot3D. So instead of writing
SphericalPlot3D[Cos[2 θ], {θ, 0, π}, {φ, 0, 2 π}]
you would write:
SphericalPlot3D[Cos[2 θ], {φ, 0, 2 π}, {θ, 0, π}]
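A quick way to convince yourself which slot is which (a small check of my own, not from the original answer): restrict the first variable to a narrow range and note that the resulting patch hugs the z-axis, so the first slot is the polar angle measured from that axis.
(* a cap around the +z axis: the first slot (θ) is the polar angle from the z-axis,
   the second slot (φ) sweeps the azimuth *)
SphericalPlot3D[1, {θ, 0, π/4}, {φ, 0, 2 π}]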

MATLAB, algorithm for free surface detection in bubbly flow

I am trying to figure out an algorithm for detecting the free surface from a PIV image (see attached). The major problem is that in the flow under consideration, gas bubbles are injected into the fluid; these rise due to buoyancy and tend to sit on top of the surface. I don't want these to be mistaken for the free surface (I actually want the 'second' edge underneath them), and I'm struggling to figure out how to build that into the algorithm.
Ideally, I want an array of x and y values representing coordinates of the free surface (like a continuous, smooth curve).
My initial approach was to scan the picture left to right, one column at a time: find an edge, move to the next column, and so on. That works reasonably well, but fails as soon as the bubbles appear and my 'edge' splits in two. So I am wondering if there is some more sophisticated way of going about it.
If anybody has any expertise in the area of image processing/edge detection, any advice would be greatly appreciated.
Typical PIV image
Desired outcome
I think you can actually solve the problem by using morphological methods.
A = imread('./MATLAB/ZBhAM.jpg');
figure;
subplot 131;
imshow(A);                 % original image
subplot 132;
B = double(A(:,:,1));      % first channel as a grayscale image
B = B/255;
B = im2bw(B, 0.1);         % fixed global threshold (picked by hand)
imshow(B);
subplot 133;
st = strel('diamond', 5);  % structuring element for the morphological opening
B = imerode(B, st);        % remove small white regions (bubbles, particles)
B = imdilate(B, st);       % restore the size of the remaining regions
imshow(B);
This gives the following result:
As you can see, this approach is not perfect, mostly because I picked an arbitrary value for the threshold in im2bw; if you use an adaptive threshold for the different columns of your image, you should get something better.
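For example, a rough per-column sketch might look like this (my own untested variant; the mean-plus-half-a-standard-deviation threshold is an arbitrary starting point to tune, and the clean-up step reuses the same structuring element as above):
A = imread('./MATLAB/ZBhAM.jpg');
G = double(A(:,:,1))/255;            % first channel as grayscale in [0,1]
B = false(size(G));
for c = 1:size(G,2)
    col = G(:,c);
    B(:,c) = col > mean(col) + 0.5*std(col);   % column-specific threshold
end
st = strel('diamond', 5);
B = imdilate(imerode(B, st), st);    % same morphological clean-up as above
imshow(B);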
Otherwise, try to work on your lighting.

Accurate (and fast) angle matching

For a hobby project I'm attempting to align photos and create 3D pictures. I basically have two cameras on a rig that I use to take pictures, and I then try to automatically align the images so that you get a 3D side-by-side (SBS) image.
They are high resolution images, which means a lot of pixels to process. Because I'm not really patient with computers, I want things to go fast.
Originally I worked with code based on image stitching and feature extraction. In practice I found these algorithms too inaccurate and too slow; the main reason is that you have different levels of depth here, so you cannot do a one-to-one match of features. Most of the code already works fine, including vertical alignment.
For this question, you can assume that differing ISO/exposure levels, color correction and vertical alignment of the images are all taken care of.
What is still missing is a good algorithm for correcting the angle of the pictures. I noticed that left-right pictures usually vary a small number of degrees (think +/- 1.2 degrees difference) in angle, which is enough to get a slight headache. As a human you can easily spot this by looking at sharp differences in color and lining them up.
The irony here is that as a human you spot immediately whether it's correct or not, but somehow I'm not able to teach this to a machine. :-)
I've experimented with edge detectors, the Hough transform and a large variety of home-brew algorithms, but so far found all of them both too slow and too inaccurate for my purposes. I've also attempted iteratively aligning vertically while changing the angle slightly, so far without any luck.
Please note: Accuracy is perhaps more important than speed here.
I've added an example image here. It's actually both a left and right eye, alpha-blended. If you look closely, you can see the lamp at the top having two ellipses, and you can see how the chairs don't exactly line up at the top. It might seem negligible, but at full-screen resolution on a projector you will easily see the difference. This also shows the level of accuracy that is required; it's quite a lot.
The shift in 'x' direction will give the 3D effect. Basically, if the shift is 0, it's on the screen, if it's <0 it's behind the screen and if it's >0 it's in front of the screen. This also makes matching harder, since you're not looking for a 'stitch'.
Basically the two cameras 'look' in the same direction (perpendicular, as in the second picture here: http://www.triplespark.net/render/stereo/create.html ).
The difference originates from the camera being on a slightly different angle. This means the rotation is uniform throughout the picture.
I once used the following amateur approach.
Assume that the second image has a rotation plus vertical shift mismatch. This means that we need to apply some transform to the second image, which can be expressed in matrix form as
x' = a*x + b*y + c
y' = d*x + e*y + f
that is, every pixel that has coordinates (x,y) on the second image, should be moved to a position (x',y') to compensate for this rotation and vertical shift.
We have a strict requirement that a=e, b=-d and d*d+e*e=1 so that it is indeed rotation+shift, no zoom or slanting etc. Also this notation allows for horizontal shift too, but this is easy to fix after angle+vertical shift correction.
Now select several common features on both images (I did the selection by hand, as just 5-10 seemed enough; you could try to apply some automatic feature detection mechanism). Assume the i-th feature has coordinates (x1[i], y1[i]) on the first image and (x2[i], y2[i]) on the second. We expect that after our transformation the features have y-coordinates as equal as possible, that is, we want (ideally)
y1[i]=y2'[i]=d*x2[i]+e*y2[i]+f
Having enough (>=3) features, we can determine d, e and f from this requirement. In fact, if you have more than 3 features, you will most probably not be able to find a common d, e and f that fits them all exactly, but you can apply the least-squares method to find d, e and f that make y2' as close to y1 as possible. You can also account for the requirement that d*d+e*e=1 while finding d, e and f, though as far as I remember, I got acceptable results even without accounting for it.
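To make the least-squares step concrete, here is a minimal Wolfram Language sketch with made-up feature coordinates (the numbers are invented, and I leave out the d*d+e*e=1 constraint, as discussed above):
(* matched features: (x2, y2) on the second image, y1 on the first *)
x2 = {120., 480., 900., 1400., 1850.};
y2 = {210., 160., 405., 330., 270.};
y1 = {205., 152., 399., 322., 261.};
(* solve y1 ≈ d*x2 + e*y2 + f in the least-squares sense *)
{d, e, f} = LeastSquares[Transpose[{x2, y2, ConstantArray[1., Length[x2]]}], y1];
angle = ArcTan[e, d]  (* rotation angle implied by a = e, b = -d *)
(* optionally renormalize so that d^2 + e^2 == 1: {d, e} = {d, e}/Norm[{d, e}] *)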
After you have determined d, e and f, you have the requirement a=e and b=-d. This leaves only c unknown, which is horizontal shift. If you know what the horizontal shift should be, you can find c from there. I used the background (clouds on a landscape, for example) to get c.
When you know all the parameters, you can do one pass on the image and correct it. You might also want to apply some anti-aliasing, but that's a different question.
Note also that you can in a similar way introduce quadratic correction to the formulas to account for additional distortions the camera usually has.
However, that's just a simple algorithm I came up with when I faced the same problem some time ago. I did not do much research, so I'll be glad to hear if there is a better or well-established approach, or even ready-made software.

Create a "visual noise matrix" in Mathematica

In order to avoid "retinal persistence" after the presentation of a stimulus, I need to create a visual noise mask.
This is for a screen with dimensions, in pixels, of 1280 * 960.
I believe I could randomly (uniformly) assign a gray shade to each pixel, but my attempts have failed so far.
Thank you for your attention.
Just noticed:
RandomImage[1, {1280, 960}]
New in Mathematica 8, apparently...
Damn, at last a question on Stack Overflow I could have answered and I was too late... :)
Oh well, here's an alternative solution...
ImageEffect[Image[Table[{0.5, 0.5, 0.5}, {i, 1, 960}, {j, 1, 1280}] ], "GaussianNoise"]
Probably got too many colours in it?
ImageEffect also works on greyscale images.
ImageEffect[Image[Table[0.5, {400}, {600}]], "GaussianNoise"]
Did you try looking in the help docs? One of the first examples for Image should have done it.
Image@RandomReal[1, {960, 1280}]
You can specify a different range of values:
Image@RandomReal[{0.4, 1}, {400, 600}]
Others have already shown you ways of creating a random image. In case you were designing your application to use up the full screen (or based on the current screen's dimensions), you might find it convenient to not hard code the values, but to capture the screen size programmatically. Here's an example showing how:
screenSize = Last /@ ("FullScreenArea" /.
    Flatten@SystemInformation["Devices", "ScreenInformation"]);
RandomImage[1, screenSize]

Possible to put equation's expression near its graphic representation?

Is it possible that when I Plot a function in Mathematica, it will automatically put its equation (e.g. y = 2x), or even some other text, near the curve?
At first glance I don't find any option for it, but if there is one I'd like to know.
Thanks
Using Mathematica 6 or higher, I often use Tooltip to help me identify plot curves:
Plot[Tooltip[Sin[x]], {x, 0, 8 Pi}]
Alas, this is only useful when using the graph interactively since you must hover the mouse cursor over the curve. It doesn't work so well on paper or on a static image.
You can use the Epilog option to manually place some text on the plot, as in this example:
Plot[
Sin[x], {x, 0, 8 Pi},
Epilog -> Text["My Text", Offset[{32, 0}, {14, Sin[14]}]]
]
Tweak the arguments of Offset to taste.
This works if you do not mind manual placement. Automatic placement poses some challenges, depending upon the kinds of functions that you wish to plot. But if you know something of the general characteristics of the functions of interest, you could write a function that calculates nice looking values for the Offset arguments. For example, if I knew I was going to plot lots of exponential decline functions, I might define something like the function myPlot in this example:
SetAttributes[myPlot, HoldAll]
myPlot[function_, {var_, min_, max_}] :=
Plot[
function, {var, min, max},
Epilog -> Text[function, Offset[{40, 0}, {var, function} /. var -> min + (max - min)/20]],
PlotRange -> All, AxesOrigin -> {0, 0}
]
... where the arguments to Offset are computed automatically using some arbitrary constants that work reasonably well for these kinds of plots:
Manipulate[
 myPlot[1000 E^(-d t), {t, 0, 100}],
 {d, 0.01, .2}
]
Since all of these options are programmable, the sky's the limit as to how much sophistication you could code up for the label placement. Of course, such programming drifts farther and farther away from the ideal of a built-in option to Plot that just magically drops some text next to the function. Mathematica 8 or 9, maybe :)
One way to do this, which automatically associates the expression with the style used to plot it, is to use the PlotLegends standard add-on package. The output doesn't look very good by default; I recommend setting the LegendShadow -> None option and using Style on the expressions you stick in the legend to make them look better. Also, loading the package inflicts some funny redefinitions on Plot and related functions which can break some other things if you're not careful.
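If it helps, the basic usage looks roughly like this, as far as I recall the legacy package (treat the exact option names as an assumption to verify against the package documentation):
(* legacy PlotLegends standard add-on; option names from memory *)
Needs["PlotLegends`"]
Plot[{Sin[x], Cos[x]}, {x, 0, 2 Pi},
 PlotLegend -> {Style[Sin[x], 14], Style[Cos[x], 14]},
 LegendShadow -> None]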
"Near its equation" is the problem. This isn't an easy problem to solve, and it becomes somewhat impossible when you start getting "busy" graphs with overlapping plots and so on.
I don't have a good example to show, but usually I'll define a "labelling function" that takes the same input as the function being plotted, which places a dot on the graph and writes some text nearby. This has the advantage of being able to easily vary the location of the text but still have it tied to the function.
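For what it's worth, here is a minimal sketch of that idea (my own interpretation; the offset values and the placement at x = 14 are arbitrary):
(* place a dot on the curve at x0 and write a label slightly above and to the right *)
label[f_, x0_, text_] := {PointSize[Medium], Point[{x0, f[x0]}],
  Text[text, Offset[{10, 10}, {x0, f[x0]}]]};
Plot[Sin[x], {x, 0, 8 Pi}, Epilog -> label[Sin, 14, "y = sin(x)"]]
Because the labelling function takes the plotted function itself as an argument, moving the label only requires changing x0; the dot and the text stay tied to the curve.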
