I am trying to simplify the following using Mathematica.
simplify[iota * arctan(-iota*x)]
This simply echoes the expression back instead of evaluating it.
I want it to be simplified and expressed in terms of hyperbolic function.
Mathematica is case sensitive and uses only square brackets as function argument delimiters. Also, the imaginary unit is I. Fixing all of that, it works:
Simplify[I ArcTan[-I x]]
ArcTanh[x]
In fact you don't even need the Simplify: ArcTan[-I x] evaluates to -I ArcTanh[x] on its own.
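As a quick sanity check that relies on the same auto-evaluation, comparing the two sides should return True directly:

I ArcTan[-I x] == ArcTanh[x]
True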
I'm writing a boolean Reverse Polish Notation parser and evaluator. When I want to evaluate a double-negation, like !!A, the corresponding RPN is !A!, according to the shunting-yard algorithm. However, when I try to run the evaluation algorithm from Wikipedia it fails, as, when the first ! is found, there is no value to apply the operator to, as expected.
However, if I write the expression as !(!A), it translates to A!! in RPN, which is what I would need.
Is it a problem with the conversion to RPN, or with the evaluation? I could always fix it by enforcing parentheses around each negation, but that doesn't seem like an elegant solution...
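For what it's worth, this points at the conversion rather than the evaluation: if the shunting-yard loop pops operators of equal precedence (left-associative behaviour), the second ! pops the first one before any operand has been output, which is exactly how !A! arises. Treating ! as right-associative fixes it. A minimal Python sketch of that idea (the token set and precedence values are made up for illustration):

PREC = {'!': 3, '&': 2, '|': 1}
RIGHT_ASSOC = {'!'}

def to_rpn(tokens):
    out, stack = [], []
    for tok in tokens:
        if tok.isalpha():                  # operand
            out.append(tok)
        elif tok == '(':
            stack.append(tok)
        elif tok == ')':
            while stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()                    # discard the '('
        else:                              # operator
            while (stack and stack[-1] != '(' and
                   (PREC[stack[-1]] > PREC[tok] or
                    (PREC[stack[-1]] == PREC[tok] and tok not in RIGHT_ASSOC))):
                out.append(stack.pop())
            stack.append(tok)
    while stack:
        out.append(stack.pop())
    return out

print(to_rpn(list('!!A')))   # ['A', '!', '!'], not ['!', 'A', '!']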
I have to write a program that tests whether two algebraic expressions are equivalent. It should follow MDAS precedence and parenthesis grouping. To handle precedence, I'm thinking I should implement an infix-to-postfix notation converter for these expressions. But that alone doesn't let me conclude their equivalence.
The program should look like this:
User Input: a*(a+b) = a*a + a*b
Output : Equivalent
For this problem I'm not allowed to use Computer Algebraic Systems or any external libraries. Please don't post the actual code if you have one, I just need an idea to work this problem out.
If you are not allowed to evaluate the expressions, you will have to parse them out into expression trees.
After that, I would get rid of all parentheses by distributing multiplication/division over the members, so a(b - c) becomes a*b - a*c.
Then convert all expressions back to strings, making sure all members are alphabetically sorted (a*b, not b*a), remove all spaces, and compare the strings.
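As a concrete sketch of that idea (no CAS involved): once everything is distributed, each expression is just a sum of monomials, so you can represent it as a map from a sorted tuple of variable names to a coefficient and compare the maps. Parsing is omitted here; the two sides of the example input are built by hand, and all names are illustrative:

from collections import defaultdict

def var(name):                       # the polynomial consisting of one variable
    return {(name,): 1}

def add(p, q):                       # sum of two polynomials
    r = defaultdict(int)
    for poly in (p, q):
        for mono, coef in poly.items():
            r[mono] += coef
    return {m: c for m, c in r.items() if c != 0}

def mul(p, q):                       # product, distributing term by term
    r = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            r[tuple(sorted(m1 + m2))] += c1 * c2   # sorting makes a*b == b*a
    return {m: c for m, c in r.items() if c != 0}

a, b = var('a'), var('b')
lhs = mul(a, add(a, b))              # a*(a+b)
rhs = add(mul(a, a), mul(a, b))      # a*a + a*b
print("Equivalent" if lhs == rhs else "Not equivalent")

Sorting the variables inside each monomial is exactly the "a*b, not b*a" normalization, and the map makes the order of the added terms irrelevant.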
Here's an idea:
You need to build an expression tree first, because it's a very natural representation of an expression.
Then you may need to simplify it by opening brackets and so on, using the associative and distributive algebraic properties.
Then you'll have to compare the trees. That's not obvious, because you need to take care of branch permutations in commutative operations: e.g. you can sort the branches and then compare them for equality. You also need to keep in mind possible renaming of parameters, i.e. a + b needs to be considered equal to x + y.
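A minimal sketch of the branch-sorting step, on nested-list trees of the form [op, left, right]; note it does not flatten associativity, so a+(b+c) versus (a+b)+c still needs the bracket-opening step first:

COMMUTATIVE = {'+', '*'}

def canonical(node):
    if not isinstance(node, list):
        return node                      # a leaf: variable or constant
    op, left, right = node
    left, right = canonical(left), canonical(right)
    if op in COMMUTATIVE and repr(left) > repr(right):
        left, right = right, left        # order the branches deterministically
    return [op, left, right]

print(canonical(['+', 'b', 'a']) == canonical(['+', 'a', 'b']))   # True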
I'm writing an input file for OTTER that is very simple:
set(auto).
formula_list(usable).
all x y ([Nipah(x) & Encephalitis(y)] -> Causes(x,y)).
exists x y (Nipah(x) & Encephalitis(y)).
end_of_list.
I get this output for the search:
given clause #1: (wt=2) 2 [] Nipah($c2).
given clause #2: (wt=2) 2 [] Encephalitis($c1).
search stopped because sos empty
Why won't OTTER infer Causes($c2,$c1)?
EDIT:
I removed the square brackets from [Nipah(x) & Encephalitis(y)] and it worked. Why does this matter?
I'd answer with a question: Why did you use square brackets in the first place?
Look into the Otter manual, Section 4.3, List Notation. Square brackets are used for lists; it's syntactic sugar that is expanded into special terms. In your case, it expanded to something like
all x y ($cons(Nipah(x) & Encephalitis(y), $nil) -> Causes(x,y)).
Why won't OTTER infer Causes($c2,$c1)?
Note that the resolution calculus is not complete in the sense that every formula entailed by a given theory can be inferred by the calculus. That would actually be highly undesirable! Instead, resolution is only refutationally complete, meaning that if a given theory is contradictory, then resolution will find a proof of the empty clause. So even if a clause C is a logical consequence of a set of clauses T, it doesn't mean that the resolution calculus can derive C from T. In your case, the fact that Causes($c2,$c1) follows from the input doesn't mean Otter has to derive it.
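If you do want Otter to derive it, the usual move is to also give it the negated goal, so that the refutation it searches for corresponds to a proof of Causes. An untested sketch in the same syntax as your input (the negated exists is the only addition):

set(auto).
formula_list(usable).
all x y ((Nipah(x) & Encephalitis(y)) -> Causes(x,y)).
exists x y (Nipah(x) & Encephalitis(y)).
-(exists x y Causes(x,y)).
end_of_list.

Resolution can then derive Causes($c2,$c1) and resolve it against the clause obtained from the negated goal, producing the empty clause.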
I'm still very new to prolog, and am trying to wrap my head around why math constraints don't seem to work the same way logical ones do.
It seems like there's enough information to solve this:
f(A, B) :- A = (B xor 2).
But when I try f(C, 3), I get back C = 3 xor 2, which isn't very helpful. Even less useful is the fact that it simply can't find a solution if the inputs are reversed. Using is instead of = makes the example input return the correct answer, but the reverse refuses to even attempt anything.
From my earlier experimentation, it seems that I could write a predicate that did this logically on the binary representation without trouble, and it would in fact go both ways. What makes the math different?
For reference, my first attempt at solving my problem looks like this:
f(Input, Output) :-
    A is Input xor (Input >> 11),
    B is A xor ((A >> 7) /\ 2636928640),
    C is B xor ((B << 15) /\ 4022730752),
    Output is C xor (C >> 18).
This works fine going from input to output, but not the other way around. If I switch the is to =, it produces a long logical sequence with values substituted but can't find a numerical solution.
I'm using swi-prolog which has xor built in, but it could just as easily be defined. I was hoping to be able to use prolog to work this function in both directions, and really don't want to have to implement the logical behaviors by hand. Any suggestions about how I might reformulate the problem are welcome.
Pure Prolog is not supposed to handle math. The basic algorithm that drives Prolog, unify and backtrack on failure, doesn't mention arithmetic operators at all. Most Prolog implementations add arithmetic as an ugly hack into their bytecode.
The reason for this is that arithmetic functions do not act the same way as functors: they cannot be unified in the same way, and not every function is guaranteed to work for each combination of ground and unbound arguments. For example, the algorithm for raising X to the power of Y is not symmetric to finding the Yth root of X. If all arithmetic functions were symmetric, encryption and cryptography wouldn't work!
That said, here are the missing facts about Prolog operators:
First, '=' is not "equals" in Prolog, but "unify". The goal X = Y op Z, where op is an operator, unifies X with the term 'op'(Y,Z). It has nothing to do with arithmetic equality or assignment.
Second, is, the ugly math hack, is not guaranteed to be reversible. The goal X is Expr, where Expr is an arithmetic expression, first evaluates the expression and then tries to unify the result with X. It won't always work for each combination of numbers and variables; check your Prolog library's documentation.
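To see the difference concretely, in SWI-Prolog:

?- X = 1 + 2.
X = 1+2.

?- X is 1 + 2.
X = 3.

?- 3 is X + 2.
ERROR: Arguments are not sufficiently instantiated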
To summarize:
Writing reversible mathematical functions requires the mathematical knowledge and algorithms that make the function reversible; Prolog won't do that magic for you.
If you're looking for smart equation solving, you might want to check out Prolog constraint-solving libraries for finite and continuous domains. They're not the same thing as reversible math, but they are somewhat smarter than Prolog's naive arithmetic operators.
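For example, for integers there is library(clpfd) in SWI-Prolog: its #=/2 states a relation between two expressions instead of evaluating one side, and, if I read the library documentation correctly, its expression syntax includes bitwise xor. A sketch of the earlier clause (renamed g/2 here to avoid clashing with the definitions below):

:- use_module(library(clpfd)).

g(A, B) :-
    A #= B xor 2.    % a constraint over integers, not an evaluation

With this, g(X, 3) should bind X = 1 by propagation; in the other direction, if propagation alone leaves a residual constraint, restricting the domain and labeling, e.g. g(1, B), B in 0..7, label([B]), forces a concrete answer.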
If you want to compare the results of evaluating expressions, you should use the operator (=:=)/2, or, when checking for apartness, the operator (=\=)/2.
These operators also work for bitwise operations, since bitwise operations work on integers, and integers are numbers. Both operators are part of the ISO core standard. For the following clause:
f(A, B) :- A =:= (B xor 2).
I get the following runs in SWI-Prolog, Jekejeke Prolog, etc.:
Welcome to SWI-Prolog (Multi-threaded, 64 bits, Version 7.3.31)
Copyright (c) 1990-2016 University of Amsterdam, VU Amsterdam
?- f(100, 102).
true.
?- f(102, 100).
true.
?- f(100, 101).
false.
If you want a more declarative way of handling bits, you can use a SAT solver integrated into Prolog. A good SAT solver should also support limited or unlimited bit vectors, but I can't currently tell what's available here and what the restrictions would be.
See for example this question here:
Prolog SAT Solver
I would like to know if there is a straightforward algorithm for rearranging simple symbolic algebraic expressions. Ideally I would like to be able to rewrite any such expression with one variable alone on the left hand side. For example, given the input:
m = (x + y) / 2
... I would like to be able to ask about x in terms of m and y, or y in terms of x and m, and get these:
x = 2*m - y
y = 2*m - x
Of course we've all done this algorithm on paper for years. But I was wondering if there was a name for it. It seems simple enough but if somebody has already cataloged the various "gotchas" it would make life easier.
For my purposes I won't need it to handle quadratics.
(And yes, CAS systems do this, and yes I know I could just use them as a library. I would like to avoid such a dependency in my application. I really would just like to know if there are named algorithms for approaching this problem.)
What you want is an equation-solving algorithm (or several). But I bet this is a huge topic. In the general case, there may be:
a system of equations, not just one
non-linear equations, so that additional algorithms such as equation factorization are needed
knowledge of how to invert functions; for example, to solve
sin(x) + 10 = z for x, we invert sin(), which is arcsin(). (Not every function is invertible!)
finally, some equations may be hard to solve even for a CAS, such as
sin(x) + x = y, solved for x.
The hard answer: your best bet is to take the source code of some CAS. For example, you can look at the Maxima CAS source code, which is written in Lisp, and find the code responsible for equation solving.
The easy answer: if all you need is to solve equations that are linear and composed only of the basic operators + - * /, then you already know the answer. Use the good old paper method: think of what rules we used on paper, and rewrite those rules as a symbolic algorithm that manipulates the equation string.
Good luck!
It seems like what you're interested in doing is maintaining a system of linear equations and then, at any time, solving for one variable in terms of all the others. If you encode the relationships as a matrix, you can reduce the matrix to some nice form (for example, reduced row echelon form) to get the "simplest" dependencies amongst the variables (for some nice definition of "simplest"). Once you have the data in this form, you can read off all the dependencies by taking any row that has a nonzero entry for the variable in question and normalizing it so that the variable has coefficient one; a small sketch follows after the caveats below.
A note: in general, you won't always get a unique solution for each variable. For example, given the trivial equations
x = y
x = z
Then solving for z could yield either "z = x" or "z = y," depending on how much simplification you want. Or alternatively, in a setup like
x = 2y + 3w
x = 9z
Returning a value for x could hand back either expression, or half their sum, or a whole bunch of other things that are all technically true but not necessarily useful. I'm not sure how you'd handle this, but depending on the form of your equations you can probably find a way to handle it.
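A small sketch of that pipeline with exact rational arithmetic; rows are coefficient vectors over the variables plus a trailing constant column, and the variable order m, x, y is an arbitrary choice for the example:

from fractions import Fraction

def rref(rows):
    rows = [[Fraction(x) for x in r] for r in rows]
    pivot = 0
    for col in range(len(rows[0]) - 1):          # last column holds constants
        src = next((i for i in range(pivot, len(rows)) if rows[i][col]), None)
        if src is None:
            continue                             # no pivot in this column
        rows[pivot], rows[src] = rows[src], rows[pivot]
        pv = rows[pivot][col]
        rows[pivot] = [x / pv for x in rows[pivot]]
        for i, r in enumerate(rows):
            if i != pivot and r[col]:
                rows[i] = [u - r[col] * v for u, v in zip(r, rows[pivot])]
        pivot += 1
    return rows

# m = (x + y) / 2 rewritten as 2m - x - y = 0; columns: m, x, y, constant.
print(rref([[2, -1, -1, 0]]))
# The pivot row reads m - x/2 - y/2 = 0; renormalizing the same row for the
# x column (divide by the x coefficient) gives x = 2*m - y, as in the question.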
Convert your expression into a data structure: a tree, written here in prefix (Polish) notation. Your tree is made up of nodes; each node has an operation, a left and a right. Each of the left and right can be a symbol (e.g. "x") or another node. For example:
(x + (a + b))
Would become:
(+ x (+ a b))
Or in JSON:
["+", "x", ["+", "a", "b"]]
Your original expression m = (x + y) / 2 would look like this:
m = ["/", ["+", "x", "y"], "2"]
One of your desired expressions (solving for x) looks like this:
x = ["-", ["*", "m", "2"], "y"]
Can you see that the tree of expressions has been turned inside out and each operator has been reversed? The "-" is the reverse of the "+" and now wraps the "*", which is the reverse of the "/". The:
["+", "x", "y"]
Becomes:
["-", (something), "y"]
Where (something) is a reversal of the outer expression, built recursively. To explain the process: a) recursively descend the expression tree until you find the node containing the symbol you want to solve for, b) make a new node containing the reverse of this node's operation, c) replace the symbol you want to solve for with the reversed outer expression, doing this recursively as you progress back outwards.
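Here is a minimal Python sketch of that recursion on the nested-list trees above, assuming the target symbol occurs exactly once and only + - * / appear:

INVERSE = {'+': '-', '-': '+', '*': '/', '/': '*'}

def contains(node, sym):
    return node == sym or (isinstance(node, list) and
                           (contains(node[1], sym) or contains(node[2], sym)))

def solve(lhs, rhs, sym):
    # Peel operations off lhs until only sym remains, mirroring each one
    # onto rhs; note a-b and a/b keep their own operator when sym is in b.
    while lhs != sym:
        op, a, b = lhs
        if contains(a, sym):
            lhs, rhs = a, [INVERSE[op], rhs, b]   # a op b = r  ->  a = r inv-op b
        elif op in '+*':
            lhs, rhs = b, [INVERSE[op], rhs, a]   # a + b = r  ->  b = r - a
        else:
            lhs, rhs = b, [op, a, rhs]            # a - b = r  ->  b = a - r
    return rhs

eq = ['/', ['+', 'x', 'y'], '2']                  # m = (x + y) / 2
print(solve(eq, 'm', 'x'))   # ['-', ['*', 'm', '2'], 'y'], i.e. x = 2*m - y
print(solve(eq, 'm', 'y'))   # ['-', ['*', 'm', '2'], 'x'], i.e. y = 2*m - x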
There are various simple ways in which the initial equation can be modified, and performing the correct modifications in the correct order will lead to the correct solution. So how about seeing this as a search, or even a pathfinding, problem?