The stress intensity at crack tips is commonly described in units of MPa Sqrt[m]. This is an awkward unit, and Mathematica prefers to return answers in Sqrt[J] Sqrt[MPa]/m, for which the numerical value is 1000 times larger.
This can be confirmed with:
Quantity[1, (Sqrt["Joules"] Sqrt["Megapascals"])/("Meters")]/ Quantity[1, "Megapascals" Sqrt["Meters"]]
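For reference, the factor of 1000 can also be checked by hand by expanding the joule as Pa·m³:

$$\frac{\sqrt{\mathrm{J}}\,\sqrt{\mathrm{MPa}}}{\mathrm{m}} = \frac{\sqrt{\mathrm{Pa}\,\mathrm{m}^3}\,\sqrt{\mathrm{MPa}}}{\mathrm{m}} = \sqrt{\mathrm{Pa}}\,\sqrt{\mathrm{MPa}}\,\sqrt{\mathrm{m}} = \sqrt{10^{-6}\,\mathrm{MPa}}\,\sqrt{\mathrm{MPa}}\,\sqrt{\mathrm{m}} = 10^{-3}\,\mathrm{MPa}\,\sqrt{\mathrm{m}}$$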
The most obvious solution:
UnitConvert[Quantity[1, (Sqrt["Joules"]*Sqrt["Megapascals"])/
"Meters"], "MPa m^0.5"]
just returns the input. I would like an output in the form:
Quantity[0.001, ("Megapascals" Sqrt["Meters"] )]
Any suggestions?
How about
u = UnitConvert[
Quantity[1, (Sqrt["Joules"]*Sqrt["Megapascals"])/"Meters"],
"Megapascals" Sqrt["Meters"]]
InputForm @ u
(* Quantity[1/1000, "Megapascals"*Sqrt["Meters"]] *)
I need a little help with my code for curve fitting some data.
I have the following data:
'''
x_data=[0.0, 0.006702200711821348, 0.012673613376102217, 0.01805805116486128, 0.02296065262674275, 0.027460615301376282,
0.03161908492177514, 0.03548425629114566, 0.03909479074665314, 0.06168416627459879, 0.06395092768264225,
0.0952415360565632, 0.0964823380829502, 0.11590819258911032, 0.11676250975220677, 0.18973251809768016,
0.1899603458289615, 0.2585011532435637, 0.2586068948029052, 0.40046782450999047, 0.40067753715444315]
y_data=[0.005278154532534359, 0.004670803439961002, 0.004188802888597246, 0.003796976494876385, 0.003472183813732432,
0.0031985782141146, 0.002964943046115825, 0.0027631157936632137, 0.0025870148284089897, 0.001713418196416643,
0.0016440241050665323, 0.0009291243501697267, 0.0009083385934116964, 0.0006374601714823219, 0.0006276132323039056,
0.00016900738921547616, 0.00016834735819595378, 7.829234957755694e-05, 7.828353274888779e-05, 0.00015519569743801753,
0.00015533437619227267]
'''
I know that the data can be fitted using the following mathematical model:
'''
def model(x, a, b, c):
    return (a*b)/(b*x + 1) + 3*c*x**2
'''
I am trying to obtain the calibrated a, b, c coefficients of the model, so that I obtain the following result (the calibrated model is plotted in red and the data sample in blue):
My code to achieve that result is:
'''
import numpy as np
from scipy.optimize import curve_fit

# fit the model to the data; no initial guesses are supplied
popt, _pcov = curve_fit(model, x_data, y_data, maxfev=100000)

# evaluate the fitted model on a dense grid for plotting
x_sample = np.linspace(0, 0.5, 1000)
y_sample = model(x_sample, *popt)
'''
If I plot the predicted data based on the fitted coefficients (in green), I get this result:
For some reason I get coefficients that produce a result I know is wrong. Does anyone know how to solve this issue?
Your model y = (a*b)/(b*x + 1) + 3*c*x**2 does not appear very satisfying. Instead of the hyperbolic term, an exponential term seems better suited to the shape of the data. That is why the proposed model is:
y=A * exp(B * x) + C * x**2
The method used to compute approximate values of the parameters A, B, C, together with the details of the numerical calculus, was shown in attached figures.
Note:
The parabolic term appears under-represented. This is because there are not enough points at large x compared to the many points at small x.
The method used above is explained in https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales. The method isn't iterative and doesn't need initial "guessed" values. The accuracy is not good when there are few points, due to the numerical integration (the computation of the Sk).
If necessary, this can be improved by post-processing with non-linear regression, starting from the above approximate values of the parameters.
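For instance, a minimal sketch of that post-processing step with scipy (the exponential model is the one proposed above; the starting values in p0 are illustrative placeholders standing in for the integral-method estimates, which are not reproduced here):
'''
import numpy as np
from scipy.optimize import curve_fit

def model_exp(x, A, B, C):
    # proposed model: an exponential term plus a parabolic term
    return A * np.exp(B * x) + C * x**2

# p0 stands in for the approximate A, B, C from the integral method;
# these particular values are guesses for illustration only
p0 = [5e-3, -20.0, 1e-3]
popt, pcov = curve_fit(model_exp, x_data, y_data, p0=p0)
'''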
An even better model is made of two exponentials.
In Stata, after running the xthtaylor command, the command
matrix regtab = r(table)
yields an empty matrix. I think this is because of the multilevel structure of this command's output.
Being new to Stata, I haven't found how to fix this. The purpose here is to extract the coefficients and standard errors to add them to another output (as is done in the accepted solution of How do I create a table with both plain and robust standard errors?).
To expand on Nick's point: matrix regtab = r(table) gives you an empty matrix, because xthtaylor doesn't put anything into r(table).
To see this run the following example:
clear all // empties r(table) and everything else
webuse psidextract
* the example regression from `help xthtaylor`
xthtaylor lwage wks south smsa ms exp exp2 occ ind union fem blk ed, endog(exp exp2 occ ind union ed) constant(fem blk ed)
return list doesn't have anything in r(table), but ereturn list will show you that you have access to the coefficients through e(b) and the variance-covariance matrix through e(V).
You can assign these to their own matrices as follows:
matrix betas = e(b)
matrix varcovar = e(V)
Then you can use matrix commands (see help matrix) to manipulate these matrices.
As you discovered, ereturn display creates r(table) which appears quite convenient for your use. It's worth taking a look at help return for more information about the differences between the contents of return list and ereturn list.
I have a problem with the eigenvectors that Julia gives me when I calculate the eigenvectors of a matrix of the type:
[-3.454373366796186+1.0*im -0.25350955594231006 0.08482455312233446 0.5677952929872186 0.8512642461184345 -3.3973836853171955
-0.25350955594231006 -4.188304472566067 -0.7536261600953561 -0.2208291476393107 -0.9576102121737481 0.7295909738153196
0.08482455312233446 -0.7536261600953561 -4.145281297093087 0.40094370842599164 -0.3177721876030173 -1.1267847565490017
0.5677952929872186 -0.2208291476393107 0.40094370842599164 -2.561932209885087 0.40874651002530255 -0.5972057181377701
0.8512642461184345 -0.9576102121737481 -0.3177721876030173 0.40874651002530255 -4.22394564475772 -0.6957268391716376
-3.3973836853171955 0.7295909738153196 -1.1267847565490017 -0.5972057181377701 -0.6957268391716376 -3.4158987954939084+1.0*im]
(The matrix should be Hermitian, except for the elements (1,1) and (6,6).)
Its eigenvectors are: Real part
[[-0.60946085 0.66877065 -0.10826958 -0.253947 0.30520429 0.02194697]
[ 0.20102357 -0.07276538 0.60248336 -0.07765244 0.71609468 -0.24683536]
[-0.18741272 0.21271718 0.48641162 0.11191183 -0.52801356 -0.62029698]
[-0.26210071 -0.0094668 -0.07383844 0.91999668 0.22550855 -0.0102918 ]
[-0.23182113 -0.02787858 0.61634939 0.03726956 -0.20443225 0.72225431]
[ 0.64708605 0.70447722 0.04021026 0.22014373 -0.06068686 0.16822489]]
Imaginary part
[[ 0.00680416 0.01172969 0.0036139 -0.00816376 0.02468384 -0.05604585]
[ 0.04974942 0.00719276 -0.01608118 0.09895638 0. -0.01326765]
[-0.04007749 -0.06932898 0.01283773 -0.06201991 -0.01329243 0.00324368]
[-0.07372251 0.00715689 0.0038056 0. -0.09608138 0.01970827]
[-0.04798741 -0.00062382 0. -0.07323346 0.03896021 0. ]
[ 0. 0. 0.03589898 0.04052119 -0.08599638 -0.00702559]]
Obviously there's a dependence on the imaginary part, otherwise the zeros in every imaginary part of the eigenvectors would not appear. I know this in part because I did the calculation in Mathematica and it doesn't give me zeroes.
How do I get rid of this behaviour?
By way of extending Colin's exploration (and my comments on it), here is a function which might help transform the results from Julia/Matlab into the Mathematica results:
matlab2mathematica(m) = m/Diagonal(vec(m[end,:]))
It simply uses the freedom to choose any multiple of an eigenvector and still span the same space.
On the matrix in the OP this gives:
# m2 is the matrix from OP
real(matlab2mathematica(m2)) =
6x6 Array{Float64,2}:
-0.941854 0.949315 -1.45368 -1.12235 -1.86352 0.144124
0.31066 -0.10329 8.13901 -0.261148 -3.92277 -1.46145
-0.289626 0.30195 6.89 0.441542 2.99565 -3.68169
-0.405048 -0.0134381 -0.974822 4.04212 -0.489495 -0.0659565
-0.358254 -0.0395734 8.52958 0.104523 0.817448 4.28591
1.0 1.0 1.0 1.0 1.0 1.0
imag(matlab2mathematica(m2)) =
6x6 Array{Float64,2}:
-0.0105151 -0.0166502 -1.38769 -0.169504 -2.23397 0.327141
-0.0768822 -0.0102101 7.66628 -0.497577 -5.55877 0.139903
0.0619353 0.098412 5.832 0.362998 4.02595 0.134477
0.11393 -0.0101592 -0.964946 0.744021 -2.27687 -0.1144
0.0741592 0.000885503 7.61505 0.351901 1.80035 -0.178993
-0.0 -0.0 -5.55112e-17 -0.0 5.55112e-17 -0.0
This is probably what Mathematica gives. Is it?
UPDATE: Given lack of clarification from OP, I'm going to mark this question as an exact duplicate and vote-to-close.
You state: "obviously there's a dependence on the imaginary part, otherwise the zeros in every imaginary part of the eigenvectors would not appear."
I'm not sure what that means.
However, all the numbers you provide in the question look normal and correct to me, i.e. typical behaviour.
Remember that eigenvectors are unique only up to scaling, so any piece of software needs to choose a rule for how to scale the output of an eigenvector function. Mathematica uses a different rule from most other pieces of software, and this has confused many users in the past. For example, if you have Matlab, you'll notice that it provides exactly the output you describe in the question. So Julia behaves like Matlab in this instance, and not like Mathematica.
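For illustration, here is a minimal numpy sketch of that convention difference (numpy, like Julia and Matlab, wraps LAPACK and returns unit-norm eigenvector columns; the rescaling mirrors the matlab2mathematica helper above, and only serves to make the outputs of different tools comparable):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # any diagonalizable matrix will do

# LAPACK-based eig returns eigenvectors as unit-norm columns
w, v = np.linalg.eig(A)

# rescale each column so its last component equals 1, which makes
# results from different tools directly comparable
v_rescaled = v / v[-1, :]
print(v_rescaled)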
Come to think of it, I've answered this question before in relation to Matlab/Mathematica. See here. I think this question is a duplicate, but might wait for a response from you before marking it as such. It is possible I have misunderstood what you want.
I have a function like this:
float_as_thousands_str_with_precision(value, precision)
If I use it like this:
float_as_thousands_str_with_precision(volts, 1)
float_as_thousands_str_with_precision(amps, 2)
float_as_thousands_str_with_precision(watts, 2)
Are those 1/2s magic numbers?
Yes, they are magic numbers. It's obvious that the numbers 1 and 2 specify precision in the code sample but not why. Why do you need amps and watts to be more precise than volts at that point?
Also, avoiding magic numbers allows you to centralize code changes rather than having to scour the code for the literal number 2 when your precision needs to change.
I would propose something like:
HIGH_PRECISION = 3;
MED_PRECISION = 2;
LOW_PRECISION = 1;
And your client code would look like:
float_as_thousands_str_with_precision(volts, LOW_PRECISION )
float_as_thousands_str_with_precision(amps, MED_PRECISION )
float_as_thousands_str_with_precision(watts, MED_PRECISION )
Then, if in the future you do something like this:
HIGH_PRECISION = 6;
MED_PRECISION = 4;
LOW_PRECISION = 2;
All you do is change the constants...
But to try and answer the question in the OP title:
IMO the only numbers that can truly be used and not be considered "magic" are -1, 0 and 1 when used in iteration, testing lengths and sizes and many mathematical operations. Some examples where using constants would actually obfuscate code:
for (int i=0; i<someCollection.Length; i++) {...}
if (someCollection.Length == 0) {...}
if (someCollection.Length < 1) {...}
int MyRidiculousSignReversalFunction(int i) {return i * -1;}
Those are all pretty obvious examples: e.g. start at the first element and increment by one, test whether a collection is empty, and sign reversal... ridiculous, but it works as an example. Now replace all of the -1, 0 and 1 values with 2:
for (int i=2; i<50; i+=2) {...}
if (someCollection.Length == 2) {...}
if (someCollection.Length < 2) {...}
int MyRidiculousDoublingFunction(int i) {return i * 2;}
Now you have to start asking yourself: why am I starting iteration on the 3rd element and checking every other one? What's so special about the number 50? What's so special about a collection with two elements? The doubler example actually makes sense here, but you can see that the non -1, 0, 1 values of 2 and 50 immediately become magic, because there's obviously something special in what they're doing and we have no idea why.
No, they aren't.
A magic number in that context would be a number that has an unexplained meaning. In your case, it specifies the precision, which is clearly visible.
A magic number would be something like:
int calculateFoo(int input)
{
    return 0x3557 * input;
}
You should be aware that the phrase "magic number" has multiple meanings. In this case, it specifies a number in source code, that is unexplainable by the surroundings. There are other cases where the phrase is used, for example in a file header, identifying it as a file of a certain type.
A literal numeral IS NOT a magic number when:
it is used one time, in one place, with very clear purpose based on its context
it is used with such common frequency and within such a limited context as to be widely accepted as not magic (e.g. the +1 or -1 in loops that people so frequently accept as being not magic).
some people accept the +1 of a zero offset as not magic. I do not. When I see variable + 1 I still want to know why, and ZERO_OFFSET cannot be mistaken.
As for the example scenario of:
float_as_thousands_str_with_precision(volts, 1)
And the proposed
float_as_thousands_str_with_precision(volts, HIGH_PRECISION)
The 1 is magic if that function call for volts with 1 is going to be used repeatedly for the same purpose. Then sure, it's "magic", but not because the meaning is unclear; rather because you simply have multiple occurrences.
Paul's answer focused on the "unexplained meaning" part, thinking HIGH_PRECISION = 3 explained the purpose. IMO, HIGH_PRECISION offers no more explanation or value than something like PRECISION_THREE or THREE or 3. Of course 3 is higher than 1, but it still doesn't explain WHY higher precision was needed, or why there's a difference in precision. The numerals offer every bit as much intent and clarity as the proposed labels.
Why is there a need for varying precision in the first place? As an engineering guy, I can assume there are three possible reasons: (a) a true engineering justification that the measurement itself is only valid to X precision, so the display should reflect that, (b) there's only enough display space for X precision, or (c) the viewer won't care about anything higher than X precision even if it's available.
Those are complex reasons, difficult to capture in a constant label, and are probably better served by a comment (to explain why something is being done).
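To sketch what that might look like (the engineering rationale below is invented purely for illustration, as is the stand-in implementation of the OP's function):

# Hypothetical rationale: the voltmeter is only rated to +/-0.1 V, so one
# decimal place is all the measurement supports; amps and watts are shown
# to two decimals to fill the available display field.
VOLTS_PRECISION = 1
AMPS_PRECISION = 2

def float_as_thousands_str_with_precision(value, precision):
    # stand-in for the OP's function: thousands separator plus precision
    return f"{value:,.{precision}f}"

print(float_as_thousands_str_with_precision(1234.5678, VOLTS_PRECISION))  # 1,234.6
print(float_as_thousands_str_with_precision(1234.5678, AMPS_PRECISION))   # 1,234.57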
IF the use of those functions were in one place, and one place only, I would not consider the numerals magic. The intent is clear.
For reference:
A literal numeral IS magic when
"Unique values with unexplained meaning or multiple occurrences which
could (preferably) be replaced with named constants." http://en.wikipedia.org/wiki/Magic_number_%28programming%29 (3rd bullet)
I would like to pass the parameter values in meters or kilometers (both possible) and get the result in meters/second.
I've tried to do this in the following example:
u = 3.986*10^14 Meter^3/Second^2;
v[r_, a_] := Sqrt[u (2/r - 1/a)];
Convert[r, Meter];
Convert[a, Meter];
If I try to use the defined function and conversion:
a = 24503 Kilo Meter;
s = 10198.5 Meter/Second;
r = 6620 Kilo Meter;
Solve[v[r, x] == s, x]
The function returns the following:
{x -> (3310. Kilo Meter^3)/(Meter^2 - 0.000863701 Kilo Meter^2)}
which is not a user-friendly format.
Anyway I would like to define a and r in meters or kilometers and get the result s in meters/second (Meter/Second).
I would be very thankful if any of you could correct the given function definition and the other statements in order to get the wanted result.
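For reference, this is the vis-viva relation, and solving it for a by hand (writing μ for the u defined above) shows what Solve is computing, and where the mix of Meter and Kilo Meter enters:

$$v = \sqrt{\mu\left(\frac{2}{r} - \frac{1}{a}\right)} \quad\Longrightarrow\quad a = \frac{1}{\dfrac{2}{r} - \dfrac{v^2}{\mu}} = \frac{\mu r}{2\mu - r v^2}$$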
Here's one way of doing it, where you use the fact that Solve returns a list of rules to substitute a value for x into v[r, x], and then use Convert, which will do the necessary simplification of the resulting algebraic expression as well:
With[{rule = First@Solve[v[r, x] == s, x]
   (* Solve always returns a list of rules, because algebraic
      equations may have multiple solutions. *)},
 Convert[v[r, x] /. rule, Meter/Second]]
This will return (10198.5 Meter)/Second as your answer.
You just need to tell Mathematica to simplify the expression assuming that the units are positive, which is the reason why it doesn't do the simplifications itself. So, something like
SimplifyWithUnits[expr_, units_List] := Simplify[expr, (# > 0) & /@ units];
So if you get that ugly thing, you then just type %~SimplifyWithUnits~{Meter} or whatever.