Solution for the integral of x/(x-6) dx

I was trying to solve the integral of x/(x-6) dx, and I used the substitution u = x-6, so x = u+6.
In the end I got the answer x + 6ln|x-6| - 6 + C; however, the given answer is x + 6ln|x-6| + C, without the -6. Can someone help me understand why this is the case?

You shouldn't really be asking this here, but I'll answer anyway. A constant term "eats" any other constants. Think about it: +C just denotes "plus any constant", and -6 + C means "any constant minus 6", which is still... "any constant". So the +C effectively absorbs anything that is added to or subtracted from it.
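For concreteness, here is the substitution written out step by step, so you can see exactly where the -6 disappears into the constant:
$$\int \frac{x}{x-6}\,dx = \int \frac{u+6}{u}\,du = \int \left(1 + \frac{6}{u}\right)du = u + 6\ln|u| + C$$
$$= (x-6) + 6\ln|x-6| + C = x + 6\ln|x-6| + (C-6),$$
and since C - 6 is just another arbitrary constant, both forms of the answer are the same.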

Related

Algorithm Design Manual solution to 1-8

I'm currently reading through The Algorithm Design Manual by Steven S. Skiena. Some of the concepts in the book I haven't used in almost 7 years. Even while I was in college it was difficult for me to understand how some of my classmates came up with some of these proofs. Now, I'm completely stuck on one of the exercises. Please help.
Will you please answer this question and explain how you came up with what to use for your base case, and why each step of the proof is valid and correct? I know this might be asking a lot, but I really need help understanding how to do these.
Thank you in advance!
Proofs of Correctness
Question:
1-8. Prove the correctness of the following algorithm for evaluating a polynomial.
$$P(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$$
function horner(A, x)
    p = A_n
    for i from n-1 down to 0
        p = p*x + A_i
    return p
btw, off topic: Sorry guys, I'm not sure how to correctly add the mathematical formatting for the formula. I tried by adding '$' around each section. Not sure why that isn't working.
https://cs.stackexchange.com/ is probably better for this. Also I'm pretty sure that $$ formatting only works on some StackExchange sites. But anyways, think about what this algorithm is doing at each step.
We start with p = A_n.
Then we take p = p*x + A_{n-1}. So what is this doing? We now have p = x*A_n + A_{n-1}.
I'll try one more step. p = p*x + A_{n-2}, so now p = (x^2)*A_n + x*A_{n-1} + A_{n-2} (here x^2 means x to the power 2, of course).
You should be able to take it from here.
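If it helps, here is the same loop as a small, runnable Python sketch. The list layout A = [a_0, ..., a_n] and the invariant comment are my own additions, not Skiena's:

def horner(A, x):
    """Evaluate a_n*x^n + ... + a_1*x + a_0, where A = [a_0, a_1, ..., a_n]."""
    p = A[-1]                              # p = a_n
    for i in range(len(A) - 2, -1, -1):    # i = n-1 down to 0
        p = p * x + A[i]
        # invariant here: p = a_n*x^(n-i) + ... + a_(i+1)*x + a_i
    return p

print(horner([5, 0, 3], 2))                # 3*2^2 + 0*2 + 5 = 17

The invariant in the comment is exactly what an induction proof carries: it holds after the initial assignment (take i = n), each pass through the loop preserves it, and at i = 0 it reads p = P(x).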

How to find inverse function in Wolfram Mathematica?

The question seems too easy to answer, however it is not, since I have to deal with functions that do not have closed forms (or I don't know how to find them). For example, I would like to find inverse functions for:
y == x Tan[x]
and
y == a x + b Tan[x].
Thus far, I have used the Newton-Raphson method for the inverse transformations. It works fine, but requires iterations. I just wonder whether there is a way to prove whether or not a better solution exists. I've tried Wolfram Mathematica to find a solution, but since I'm a beginner, I have had no luck getting anything meaningful.
Seems it can't be done.
Solve[y == x Tan[x], x]
Solve::nsmet: This system cannot be solved with the methods available to Solve.
InverseFunction[# Tan[#] &]
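If a symbolic inverse isn't available, you can always fall back on a numerical inverse, which is essentially what you are already doing with Newton-Raphson. A minimal sketch in Python/SciPy (my own, not Mathematica), restricted by assumption to the branch 0 <= x < pi/2 and y > 0 where x*tan(x) is monotone:

import numpy as np
from scipy.optimize import brentq

def inverse_x_tanx(y):
    """Solve y == x*tan(x) for x on the branch 0 <= x < pi/2 (assumes y > 0)."""
    f = lambda x: x * np.tan(x) - y
    return brentq(f, 0.0, np.pi / 2 - 1e-9)   # f changes sign on this bracket

print(inverse_x_tanx(1.0 * np.tan(1.0)))       # ~1.0, recovering x = 1

The same idea works for y == a x + b Tan[x], as long as you choose a bracket containing exactly one root.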

Failure of Simplify in Mathematica

My question is about an apparent failure of Mathematica's function "FullSimplify" to simplify an easy algebraic expression.
This is the expression that I ask Mathematica to evaluate:
FullSimplify[Re[a^(I*b)] - Re[a^(-I*b)], Element[a, Reals] && a > 0 && Element[b, Reals]]
This should give the result 0. Instead Mathematica only restates my expression:
Re[a^(-I b) (-1 + a^(2 I b))]
Replacing a and b by actual numbers solves the problem.
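(For what it's worth, the identity is easy to verify by hand under the stated assumptions a > 0 and b real:
$$a^{ib} = e^{ib\ln a} \;\Rightarrow\; \operatorname{Re}\!\left[a^{ib}\right] = \cos(b\ln a) = \operatorname{Re}\!\left[a^{-ib}\right],$$
so the difference really should simplify to 0.)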
What could be the cause of this? How can I use FullSimplify (and Simplify, Expand, Integrate, and so on) effectively?
I read that the order of variables could play a role here, but I couldn't wrap my head around it.
I tried to check for similar problems on the website as well, but I couldn't find any answer that could explain this phenomenon.
Thanks in advance for your support.

Prover9 hints not being used

I'm running some Lattice proofs through Prover9/Mace4. I'm using a non-standard axiomatization of the lattice join operation, from which it is not immediately obvious that the join is commutative, associative and idempotent. (I can get Prover9 to prove that it is -- eventually.)
I know that Prover9 looks for those properties to help it search faster. I've tried putting those properties in the Additional Input section (I'm running the GUI version 0.5), with
formulas(hints).
x v y = y v x.
% etc
end_of_list.
Q1: Is this the way to get it to look at hints?
Q2: Is there a good place to look for help on speeding up proofs/tips and tricks?
(If I can get this to work, there are further operators I'd like to give hints for.)
For ref, my axioms are (bi-lattice with only one primitive operation):
x ^ y = y ^ x. % lattice meet
x ^ x = x.
(x ^ y) ^ z = x ^ (y ^ z).
x ^ (x v y) = x. % standard absorption for join
x ^ z = x & y ^ z = y <-> z ^ (x v y) = (x v y).
% non-standard absorption
(EDIT after DougS's answer posted.)
Wow! Thank you. Orders-of-magnitude speed-up.
Some follow-on q's if I might ...
Q3: The generated hints seem to include all of the initial axioms plus the goal -- is that what I should expect? (Presumably hence your comment about not needing all of the hints. I've certainly experienced that removing axioms makes a proof go faster.)
Q4: What if I add hints that (as it turns out) aren't justified by the axioms? Are they ignored?
Q5: What if I add hints that contradict the axioms? (From a few trials, this doesn't make Prover9 mis-infer.)
Q6: For a proof (attempt) that times out, is there any way to retrieve the formulas inferred so far and recycle them for hints to speed up the next attempt? (I have a feeling in my waters that this would drag in some sort of fallacy, despite what I've seen re Q3 and Q4.)
Q3: Yes, you should expect the axiom(s) and the goal(s) to be included as hints. Both of them can be useful. I more meant that a hint like "$F" doesn't seem to add much to me, and that hints also lead you down a particular path first, which can make it easier or harder to find shorter proofs. However, if you just want a faster proof, then using all of the suggested hints is probably the way to go.
Q4: Hints do NOT need to be deducible from the axioms.
Q5: Hints can contradict the axioms, sure.
The manual says "A derived clause matches a hint if it subsumes the hint.
...
In short, the default value of the hints_part parameter says to select clauses that match hints (lightest first) whenever any are available."
"Clause C subsumes clause D if the variables of C can be instantiated in such a way that it becomes a subclause of D. If C subsumes D, then D can be discarded, because it is weaker than or equivalent to C. (There are some proof procedures that require retention of subsumed clauses.)"
So let's say that you put
1. x ^((y^z) V v)=x V y as a hint.
Then if Prover9 generates
2. x ^ ((x^x) V v)=x V x
x ^ ((x^x) V v)=x V x will get selected whenever it's available, since it matches the hint.
This explanation isn't complete, because I'm not exactly sure how "subclause" gets defined.
Still, instead of generating formulas only with the original axioms and whatever procedure Prover9 normally uses, formulas that match hints get put at the front of the queue. This can speed the program up, and from what I've read about some other problems, many difficult problems basically wouldn't have gotten proved automatically if it weren't for things like hints, weighting, and other strategies.
Q6: I'm not sure which formulas you're referring to. In Prover9, of course, you can click on "show output" and look through the dozens of formulas it has generated. You could also set lemmas you think are useful as additional goals, and then use Prooftrans to generate hints from those lemmas for the next run. Or you could use the steps of the proofs of those lemmas as hints for the next run. There's no fallacy in the reasoning if you use steps of those proofs as hints, or the hints suggested by Prooftrans, because hints don't actually add any assumptions to the initial set. As far as I understand it (roughly), the hint mechanism works by changing the search procedure: the program still has to deduce something that matches a hint, and only then does the matching clause get preferred.
Q1: Yes, that should work for hints. But, to test it better, take the proof you have, use the "reformat" option, and check the "hints" part. Then copy and paste all of those hints into your "formulas(hints)." list. (Well, you don't necessarily need them all, and using only some of them might lead to a shorter proof if one exists, but I digress.) Then run the proof again, and if it behaves like my proofs in propositional calculi with hints do, you'll get the proof "in an instant".
Just in case... you'll need to click on the "additional input" tab, and put your hint list there.
Q2: For strategies, the Prover9 manual has useful information on weighting, hints, and semantic guidance (I haven't tried semantic guidance). You might also want to see Bob Veroff's page (some of his work was done in OTTER, but the programs are similar). There is also useful information in Larry Wos's notebooks, as well as in Dr. Wos's published work, though all of Wos's recent work has been done using OTTER (again, the programs are similar).

Iterative solving for unknowns in a fluids problem

I am a mechanical engineer with a computer science question. This is an example of what the equations I'm working with are like:
x = √((y-z)×2/r)
z = f×(L/D)×(x/2g)
f = something crazy with x in it
etc…(there are more equations with x in it)
The situation is this:
I need r to find x, but I need x to find z. I also need x to find f, which is part of finding z. So I guess a value for x, and then I use that value to find r and f. Then I go back and use the values I found for r and f to find x. I keep doing this until the guess and the calculated value are the same.
My question is:
How do I get the computer to do this? I've been using Mathcad, but an example in another language like C++ is fine.
The very first thing you should do when faced with an iterative algorithm is to write down on paper the sequence that will result from your idea:
Eg.:
x_0 = ..., f_0 = ..., r_0 = ...
x_1 = ..., f_1 = ..., r_1 = ...
...
x_n = ..., f_n = ..., r_n = ...
Now, you have an idea of what you should implement (even if you don't know how). If you don't manage to find a closed form expression for one of the x_i, r_i or whatever_i, you will need to solve one dimensional equations numerically. This will imply more work.
Now, for the implementation part, if you never wrote a program, you should seriously ask someone live who can help you (or hire an intern and have him write the code). We cannot help you beginning from scratch with, eg. C programming, but we are willing to help you with specific problems which should arise when you write the program.
Please note that your algorithm is not guaranteed to converge, even if you strongly think there is a unique solution. Solving nonlinear equations is a difficult subject.
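To make that concrete, here is a minimal sketch of such a fixed-point loop in Python; the cosine update is only a stand-in for "one full pass through your equations" (compute f and z from the current x, then a new x):

import math

def fixed_point(g, x0, tol=1e-8, max_iter=200):
    """Repeat x <- g(x) until two successive guesses agree, or give up."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge; try another starting guess or method")

print(fixed_point(math.cos, 1.0))   # ~0.739, the fixed point of cos(x)

Note the iteration cap and the explicit failure: as said above, this kind of loop is not guaranteed to converge.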
It appears that Mathcad has many abstractions for iterative algorithms, without the need to implement them directly in a "lower level" language. Perhaps this question is better suited for the Mathcad forums at:
http://communities.ptc.com/index.jspa
If you are using Mathcad, it has the functionality built in. It is called solve block.
Start with the keyword "given"
Given
define the guess values for all unknowns
x:=2
f:=3
r:=2
...
define your constraints
x = √((y-z)×2/r)
z = f×(L/D)×(x/2g)
f = something crazy with x in it
etc…(there are more equations with x in it)
calculate the solution
find(x, y, z, r, ...)=
Check Mathcad help or Quicksheets for examples of the exact syntax.
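Outside Mathcad, the same "guess values plus constraints" pattern can be handed to a general nonlinear solver. A rough Python/SciPy sketch, with made-up parameter values and a placeholder friction correlation rather than your actual equations:

import numpy as np
from scipy.optimize import fsolve

y, r, L, D, g = 10.0, 2.0, 5.0, 0.1, 9.81      # known parameters (placeholder values)

def friction(x):
    return 0.02 + 0.01 / (1.0 + x)             # stand-in for "something crazy with x in it"

def residuals(unknowns):
    x, z, f = unknowns
    return [x - np.sqrt((y - z) * 2.0 / r),    # x = sqrt((y - z)*2/r)
            z - f * (L / D) * (x / (2.0 * g)), # z = f*(L/D)*(x/(2g))
            f - friction(x)]                   # friction correlation

x_sol, z_sol, f_sol = fsolve(residuals, [3.0, 0.2, 0.02])   # guess values, as in the solve block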
The simple answer to your question is this pseudo-code:
X = startingX;
lastF = Infinity;
F = 0;
tolerance = 1e-10;
while ((lastF - F)^2 > tolerance)
{
    lastF = F;
    X = ?;
    R = ?;
    F = FunctionOf(X,R);
}
This may not do what you expect at all. It may give a valid but nonsense answer or it may loop endlessly between alternate wrong answers.
This is standard substitution to convergence. There are more advanced techniques like DIIS but I'm not sure you want to go there. I found this article while figuring out if I want to go there.
In general, it really pays to think about how you can transform your problem into an easier problem.
In my experience it is better to pose your problem as a univariate bounded root-finding problem and use Brent's method if you can.
Failing that, the next option is multivariate minimization with something like BFGS.
Iterative schemes are painful, but they become easier to handle once you think of them as X2 = f(X1), where X is the vector of unknowns and you're trying to reduce the difference between X1 and X2.
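As an illustration of that reformulation, again using a toy cosine update in place of the real equations: once the loop is written as X2 = f(X1), "X2 equals X1" becomes a root-finding problem, which a bracketing solver handles with guaranteed convergence given a valid bracket:

from math import cos
from scipy.optimize import brentq

update = cos                             # stand-in for one pass through the equations
residual = lambda x: update(x) - x       # zero exactly when the guess reproduces itself
print(brentq(residual, 0.0, 1.5))        # ~0.739; 0.0 and 1.5 bracket the root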
As the commenters have noted, the mathematical aspects of your question are beyond the scope of the help you can expect here, and are even beyond the help you could be offered based on the detail you posted.
However, I think that even if you understood the mathematics thoroughly there are computer science aspects to your question that should be addressed.
When you write your code, try to organize it into functions that depend only on the parameters you pass in to a subroutine. So write a subroutine that takes in values for y, z, and r and returns x. Make another that takes in f, L, D, and G and returns z. Now you have testable routines that you can check to make sure they are computing correctly. Check the input values inside the routines - for instance, in computing x you will get a divide-by-zero error if you pass in 0 for r. Think about how you want to handle this.
If you are going to solve this problem iteratively, you will need a method that decides, based on the results of one iteration, what the values for the next iteration will be. This should also be encapsulated in a subroutine. Now, if you are using a language that allows only one value to be returned from a subroutine (which is the case in the most common computation languages: C, C++, Java, C#), you need to package up all your variables into some kind of data structure to return them. You could use an array of reals or doubles, but it would be nicer to make an object, so that you can reference the variables by name and not by position (less chance of error).
Another aspect of iteration is knowing when to stop. Certainly you'll do so when you get a solution that converges. Make this decision into another subroutine. Now when you need to change the convergence criteria there is only one place in the code to go to. But you need to consider other reasons for stopping - what do you do if your solution starts diverging instead of converging? How many iterations will you allow the run to go before giving up?
Another aspect of iteration on a computer is round-off error. Mathematically, 10^40/10^38 is 100. Mathematically, 10^20 + 1 > 10^20. These statements are not true in most computations. Your calculations may need to take this into account or you will end up with numbers that are garbage. This is an example of a cross-cutting concern that does not lend itself to encapsulation in a subroutine.
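A rough Python sketch of that structure (the names and numbers are illustrative, with the two equations taken from the question):

from dataclasses import dataclass
import math

@dataclass
class State:
    x: float
    z: float
    f: float

def compute_x(y, z, r):
    if r == 0:
        raise ValueError("r must be nonzero")   # decide how you want to handle this
    return math.sqrt((y - z) * 2.0 / r)

def compute_z(f, L, D, x, g):
    return f * (L / D) * (x / (2.0 * g))

def converged(old, new, tol=1e-8):
    """The one place that decides when to stop iterating."""
    return (abs(new.x - old.x) < tol and
            abs(new.z - old.z) < tol and
            abs(new.f - old.f) < tol)

# one update step followed by a convergence check against the previous state
old = State(x=3.0, z=0.2, f=0.02)
new = State(x=compute_x(10.0, old.z, 2.0),
            z=compute_z(old.f, 5.0, 0.1, old.x, 9.81),
            f=old.f)
print(converged(old, new))   # False: keep iterating

Each routine is independently testable, and the State object keeps the unknowns together by name rather than by position.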
I would suggest that you go look at the Python language, and the pythonxy.com extensions. There are people in the associated forums that would be a good resource for helping you learn how to do iterative solving of a system of equations.
