Inequality solving using Prolog

I am working on solving inequality problems using Prolog. I have found some code that solves inequalities of the form ax+b>=0.
The code I have used is as follows:
:- use_module(library(clpr)).

dec_inc(Left, Right) :-
    copy_term(Left-Right, Copyleft-Copyright),
    tell_cs(Copyleft),
    max(Copyright, Right, Leq),
    tell_cs(Leq).

max([], [], []).
max([E =< _ | Ps], [_ =< P1 | P1s], [K =< P1 | Ls]) :-
    sup(E, K),
    max(Ps, P1s, Ls).

tell_cs([]).
tell_cs([C | Cs]) :-
    { C },
    tell_cs(Cs).
For example, when I enter {2*X+2>=5}. it gives the correct answer, {X>=1.5}.
But if I enter an inequality with a fraction, like {(X+3)/(3*X+1)>=1}., it just gives back {1-(3+X)/(1+3.0*X)=<0.0}.
How can I solve this type of inequality (questions which include fractions) to get the final answer?
Please help me.
If there is any learning material I can refer to, please let me know.

The library(clpr) documentation advises that it deals with nonlinear constraints only passively (it delays them until they become linear), so you're out of luck. You need a more sophisticated algebra system.
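For this particular constraint you can see by hand what a complete solver would have to do. The case split below is only an illustration of the required reasoning; library(clpr) will not perform it for you:

if 3*X+1 > 0, then (X+3)/(3*X+1) >= 1 is equivalent to X+3 >= 3*X+1, i.e. X =< 1, so this branch gives -1/3 < X =< 1;
if 3*X+1 < 0, the inequality flips to X+3 =< 3*X+1, i.e. X >= 1, which contradicts X < -1/3, so this branch is empty.

The final answer is therefore -1/3 < X =< 1, but getting there automatically needs a system that does sign case analysis or real quantifier elimination, which is exactly the "more sophisticated algebra system" mentioned above.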

Related

ODE implementation of the Maxwell-Stefan equation

I am trying to solve the Maxwell-Stefan equation over a membrane to get the transient mole fraction distribution across the membrane thickness 'z'. Somehow I am not able to code it using ODE45; more precisely, I am not able to write the system in the form ODE45 expects. It would be really great if someone could help me with the basic syntax and function setup. The equation I am trying to solve is
$$\frac{dy_i}{dz} = \frac{1}{c\,D_{i,j}}\left[\,y_i\,(N_i+N_j) - N_i\,\right]$$
where c is the concentration and D_{i,j} is the binary diffusion coefficient.
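To show how the right-hand side can be handed to a generic initial-value solver, here is a minimal sketch in Python using scipy.integrate.solve_ivp instead of MATLAB's ODE45. Every numeric value in it (c, D_ij, N_i, N_j, the membrane thickness and the inlet mole fraction) is a placeholder assumption, not data from the question.

    # Sketch only: integrate dy_i/dz = (y_i*(N_i + N_j) - N_i) / (c*D_ij)
    # across the membrane, with made-up constant coefficients.
    from scipy.integrate import solve_ivp

    c = 40.0       # total molar concentration, mol/m^3 (assumed)
    D_ij = 1e-9    # binary diffusion coefficient, m^2/s (assumed)
    N_i = 1e-6     # molar flux of component i, mol/(m^2 s) (assumed)
    N_j = 5e-7     # molar flux of component j, mol/(m^2 s) (assumed)

    def rhs(z, y):
        # y[0] is the mole fraction y_i at position z
        return [(y[0] * (N_i + N_j) - N_i) / (c * D_ij)]

    thickness = 1e-4                               # membrane thickness, m (assumed)
    sol = solve_ivp(rhs, (0.0, thickness), [0.9])  # y_i = 0.9 at z = 0 (assumed)
    print(sol.y[0][-1])                            # mole fraction at z = thickness

The same structure carries over to ODE45: one function that takes (z, y) and returns dy/dz, plus the integration interval and the value of y_i at z = 0.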

What is the relationship between LCS and string similarity?

I wanted to know how similar two strings were, and I found a tool on the following page:
https://www.tools4noobs.com/online_tools/string_similarity/
and it says that this tool is based on the article:
"An O(ND) Difference Algorithm and its Variations"
available on:
http://www.xmailserver.org/diff2.pdf
I have read the article, but I have some doubts about how they programmed that tool. For example, the authors say that it is based on the C library of GNU diff and analyze.c; maybe that refers to this:
https://www.gnu.org/software/diffutils/
and this:
https://github.com/masukomi/dwdiff-annotated/blob/master/src/diff/analyze.c
The problem I have is understanding the relation to the article. From what I read, the article presents an algorithm for finding the LCS (longest common subsequence) between a pair of strings, using a modification of the dynamic programming algorithm normally used to solve this problem. The modification is the use of a shortest-path algorithm to find the LCS with the minimum number of modifications.
At this point I am lost, because I do not know how the authors of the tool I first mentioned used the LCS to find how similar two sequences are. Also, they have put in a limit value of 0.4; what does that mean? Can anybody help me with this, or have I misunderstood the article?
Thanks
I think the description on the string similarity tool is not being entirely honest, because I'm pretty sure it has been implemented using the Perl module String::Similarity. The similarity score is normalised to a value between 0 and 1, and as the module page describes, the limit value can be used to abort the comparison early if the similarity falls below it.
If you download the Perl module and expand it, you can read the C source of the algorithm, in the file called fstrcmp.c, which says that it is "Derived from GNU diff 2.7, analyze.c et al.".
The connection between the LCS and string similarity is simply that those characters that are not in the LCS are precisely the characters you would need to add, delete or substitute in order to convert the first string to the second, and the number of these differing characters is usually used as the difference score, as in the Levenshtein Distance.
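To make the LCS-to-similarity connection concrete, here is a small Python illustration. It is not the tool's actual implementation and not necessarily the exact normalisation String::Similarity uses; 2*LCS/(len(a)+len(b)) is simply one common way to turn the shared-character count into a score between 0 and 1.

    # Illustration only: LCS length by dynamic programming, then a 0..1 score.
    def lcs_length(a: str, b: str) -> int:
        # prev[j] holds the LCS length of the processed prefix of a and b[:j]
        prev = [0] * (len(b) + 1)
        for ch_a in a:
            curr = [0] * (len(b) + 1)
            for j, ch_b in enumerate(b, start=1):
                if ch_a == ch_b:
                    curr[j] = prev[j - 1] + 1
                else:
                    curr[j] = max(prev[j], curr[j - 1])
            prev = curr
        return prev[len(b)]

    def similarity(a: str, b: str) -> float:
        if not a and not b:
            return 1.0
        return 2.0 * lcs_length(a, b) / (len(a) + len(b))

    print(similarity("kitten", "sitting"))  # about 0.615

With a threshold like the 0.4 mentioned in the question, one would simply treat any pair scoring below it as "not similar" (or, as the answer notes for String::Similarity, abort the comparison early once the score can no longer reach the limit).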

Algorithm design manual solution to 1-8

I'm currently reading through The Algorithm Design Manual by Steven S. Skiena. Some of the concepts in the book I haven't used in almost 7 years. Even while I was in college it was difficult for me to understand how some of my classmates came up with some of these proofs. Now, I'm completely stuck on one of the exercises. Please help.
Will you please answer this question and explain how you came up with your base case, and why each step of the proof is valid and correct? I know this might be asking a lot, but I really need help understanding how to do these.
Thank you in advance!
Proofs of Correctness
Question:
1-8. Prove the correctness of the following algorithm for evaluating a polynomial.
$$P(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1 x + a_0$$
function horner(A, x)
    p = A_n
    for i from n-1 down to 0
        p = p * x + A_i
    return p
btw, off topic: Sorry guys, I'm not sure how to correctly add the mathematical formatting for the formula. I tried by adding '$' around each section. Not sure why that isn't working.
https://cs.stackexchange.com/ is probably better for this. Also I'm pretty sure that $$ formatting only works on some StackExchange sites. But anyways, think about what this algorithm is doing at each step.
We start with p = A_n.
Then we take p = p*x + A_{n-1}. So what is this doing? We now have p = x*A_n + A_{n-1}.
I'll try one more step. p = p*x + A_{n-2}, so now p = (x^2)*A_n + x*A_{n-1} + A_{n-2} (here x^2 means x to the power 2, of course).
You should be able to take it from here.
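If it helps to see the loop in action, here is a direct Python transcription of the pseudocode checked against straightforward evaluation. This is only an executable restatement (the coefficient list and test point are arbitrary), not the proof the exercise asks for.

    # Horner evaluation of P(x) = A[n]*x^n + ... + A[1]*x + A[0]
    def horner(A, x):
        # A[i] is the coefficient of x^i, so start from the highest index
        p = A[-1]
        for i in range(len(A) - 2, -1, -1):
            p = p * x + A[i]
        return p

    def direct(A, x):
        return sum(a * x**i for i, a in enumerate(A))

    coeffs = [5, -2, 0, 3]                              # 3x^3 - 2x + 5
    assert horner(coeffs, 4) == direct(coeffs, 4) == 189

The pattern shown in the steps above (after processing index i, p already equals the higher-order part of the polynomial evaluated at x) is exactly what an induction proof would formalise.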

Diophantine analysis in Maxima

I have defined an extended Euclidean algorithm in Maxima as
ext_euclid(a, b) := block(
    [x, y, d, x_old, y_old, d_old],
    if b = 0 then return([1, 0, a])
    else ([x_old, y_old, d_old]: ext_euclid(b, mod(a, b)),
          [x, y, d]: [y_old, x_old - quotient(a, b)*y_old, d_old],
          return([x, y, d])));
in order to solve linear Diophantine equations of the form ax+by=c where gcd(a,b)=1. However, for ax-by=c the algorithm returns -1 for the gcd, even though gcd(a,-b) is 1 as before. Is my algorithm wrong, or can it be used for ax-by=c?
Iain
I don't quite understand what the problem is. Can you please give two examples, one in which the result matches what you expected and one in which it doesn't (and please say what your expected result is in that case).
EDIT: OP replies: "to solve 5x+7y = 23, ext_euclid(5,7) returns [3,-2,1], where gcd(5,7) is 1; however, for 5x-7y = 23, ext_euclid(5,-7) returns [-3,1,-1], but gcd(5,-7) is still 1, so this fails Bezout's identity ax+by = gcd(a,b)"
Also if you are trying to implement a particular statement of the algorithm, can you please link to it or copy it here.
OP replies: "code at http://mvngu.wordpress.com/2009/08/25/elementary-number-theory-using-maxima/"
One possible thing to look at: does the mod function behave as you expect?
OP replies: "mod(5,7) is, mod(5,-7) is -2"

LightsOut game solving method "reduced echelon". Does it always give the correct result?

I am studying the algorithm given here, and somewhere it is claimed that it is efficient and always gives the correct result.
But when I try to run the algorithm, it does not give me a correct or efficient output for the following patterns.
For a 5 x 5 grid, where (n) is the light number and 0/1 says whether that light is on or off (1 = ON, 0 = OFF):
(1)1 (2)0 (3)0 (4)0 (5)0
(6)0 (7)1 (8)0 (9)0 (10)0
(11)0 (12)0 (13)1 (14)0 (15)0
(16)0 (17)0 (18)0 (19)1 (20)0
(21)0 (22)0 (23)0 (24)0 (25)1
The output should be 1,7,13,19,25 (pressing these lights will turn the full grid OFF), but what I am getting instead is 3,5,6,7,8,10,13,16,18,19,20,21,23.
For some patterns it does give me the correct output, as below:
(1)0 (2)0 (3)0 (4)0 (5)1
(6)0 (7)0 (8)0 (9)1 (10)0
(11)0 (12)0 (13)1 (14)0 (15)0
(16)0 (17)1 (18)0 (19)0 (20)0
(21)1 (22)0 (23)0 (24)0 (25)0
The output should be 5,9,13,17,21, and the algorithm gives me the correct result.
If somebody needs the code, let me know and I can post it.
Can somebody please let me know whether this method will always give a correct as well as efficient result?
(I'm the author of the code you linked to.) To the best of my knowledge, the code is correct (and I'm sure that the high-level algorithm of using Gaussian elimination over GF(2) is correct). The solution it produces is guaranteed to solve the puzzle, though it's not necessarily the minimal number of button presses. The "efficiency" I was referring to in the writeup refers to the time complexity of solving the puzzle overall (it can solve a Lights Out grid in polynomial time, as opposed to the exponential-time brute-force solution of trying all possible combinations) rather than to the "efficiency" of the generated solution.
I actually don't know any efficient algorithms for finding a solution requiring the minimum number of button presses. Let me know if you find one!
Hope this helps!
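Since the answer describes the method as Gaussian elimination over GF(2), here is a minimal self-contained sketch of that general approach in Python. It is my own illustration rather than the linked code, so all names are made up; it returns one valid set of button presses (not necessarily the minimal one), or None for an unsolvable grid.

    # Lights Out via Gaussian elimination over GF(2): solve A * presses = state.
    import numpy as np

    N = 5  # 5x5 grid

    def press_matrix(n=N):
        # Column k marks which lights toggle when button k is pressed
        # (the button itself plus its orthogonal neighbours).
        A = np.zeros((n * n, n * n), dtype=np.uint8)
        for r in range(n):
            for c in range(n):
                k = r * n + c
                for rr, cc in [(r, c), (r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]:
                    if 0 <= rr < n and 0 <= cc < n:
                        A[rr * n + cc, k] = 1
        return A

    def solve_lights_out(state):
        # state: list of N*N ints, 1 = light on. Returns 1-based button numbers.
        n2 = N * N
        M = np.hstack([press_matrix(), np.array(state, np.uint8).reshape(-1, 1)])
        pivot_cols, row = [], 0
        for col in range(n2):
            pivot = next((r for r in range(row, n2) if M[r, col]), None)
            if pivot is None:
                continue                       # free column, leave its variable at 0
            M[[row, pivot]] = M[[pivot, row]]  # swap the pivot row into place
            for r in range(n2):
                if r != row and M[r, col]:
                    M[r] ^= M[row]             # XOR = addition mod 2
            pivot_cols.append(col)
            row += 1
        if any(M[r, n2] and not M[r, :n2].any() for r in range(row, n2)):
            return None                        # inconsistent system: grid has no solution
        presses = np.zeros(n2, np.uint8)
        for r, col in enumerate(pivot_cols):
            presses[col] = M[r, n2]
        return [i + 1 for i in range(n2) if presses[i]]

    # The diagonal pattern from the question:
    grid = [0] * 25
    for i in (0, 6, 12, 18, 24):
        grid[i] = 1
    print(solve_lights_out(grid))  # one valid solution, possibly not the shortest

Because the 5x5 press matrix is singular over GF(2), a solvable grid has several valid solutions, which is why a correct solver can legitimately return a different (and longer) button list than the one you expected.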

Resources