Solving a simple linear equation - algorithm

Suppose I needed to solve the following equation,
ax + by = c
where a, b, and c are known values and x and y are integers between 0 and 10 (inclusive).
Other than the trivial solution of,
for (x = 0; x <= 10; x++)
    for (y = 0; y <= 10; y++)
        if (a * x + b * y == c)
            printf("%d %d\n", x, y);
... is there any way to find all solutions for this independent system efficiently?

In your case, since x and y only take values between 0 and 10, a brute-force algorithm may be the best option, as it takes less time to implement.
However, if you have to find all pairs of integer solutions (x, y) in a larger range, you really should apply the right mathematical tool to tackle this problem.
You are trying to solve a linear Diophantine equation, and it is well known that an integer solution exists if and only if the greatest common divisor d of a and b divides c.
If no solution exists, you are done. Otherwise, first apply the extended Euclidean algorithm to find a particular solution (x0, y0) of the equation ax + by = d.
By Bézout's identity, all other integer solutions are of the form
x = x0 + (b/d)k,  y = y0 - (a/d)k
where k is an arbitrary integer.
But since we are interested in solutions of ax + by = c, we have to scale our pair (x0, y0) by a factor of c / d first.
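Here is a minimal sketch of that approach in C++, assuming a and b are positive (the function names extendedGcd and solveInRange are my own, and the hard-coded 0..10 bounds match the question). It finds a particular solution with the extended Euclidean algorithm, scales it by c/d, and then steps k over just the values that keep x in range:
#include <cmath>
#include <cstdio>

// Returns g = gcd(a, b) and sets x, y so that a*x + b*y == g.
long extendedGcd(long a, long b, long &x, long &y) {
    if (b == 0) { x = 1; y = 0; return a; }
    long x1, y1;
    long g = extendedGcd(b, a % b, x1, y1);
    x = y1;
    y = x1 - (a / b) * y1;
    return g;
}

// Prints every pair (x, y) with 0 <= x, y <= 10 satisfying a*x + b*y == c.
void solveInRange(long a, long b, long c) {
    long x0, y0;
    long d = extendedGcd(a, b, x0, y0);
    if (c % d != 0) return;               // no integer solutions exist at all
    x0 *= c / d;                          // scale to a particular solution of a*x + b*y = c
    y0 *= c / d;
    long dx = b / d, dy = a / d;          // step sizes from the general solution
    // Keep x = x0 + dx*k inside [0, 10]; dx != 0 because b > 0.
    double k1 = (0.0 - x0) / dx, k2 = (10.0 - x0) / dx;
    long kmin = (long)std::ceil(k1 < k2 ? k1 : k2);
    long kmax = (long)std::floor(k1 < k2 ? k2 : k1);
    for (long k = kmin; k <= kmax; ++k) {
        long x = x0 + dx * k;
        long y = y0 - dy * k;
        if (0 <= y && y <= 10)
            std::printf("%ld %ld\n", x, y);
    }
}

int main() {
    solveInRange(3, 5, 25);   // prints every solution of 3x + 5y = 25 with x, y in [0, 10]
}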

You only need to loop through x, then calculate y. (x, y) is a solution if y is an integer between 0 and 10.
In C:
for (int x = 0; x <= 10; ++x) {
    double y = (double)(c - a * x) / b;
    // If y is (close to) an integer between 0 and 10, then (x, y) is a solution
    int yi = (int)floor(y + 0.5);            // nearest integer to y
    int isInteger = fabs(y - yi) < 0.001;
    if (isInteger && 0 <= yi && yi <= 10) {
        printf("%d %d\n", x, yi);
    }
}

You could avoid the second for loop by checking directly if (c-a*x)/b is an integer.
EDIT: My code is less clean than I had hoped, due to some careless oversights on my part pointed out in the comments, but it is still faster than nested for loops.
int y, by;
for (x = 0; x <= 10; x++) {
    by = c - a * x;              // this is b*y
    if (b == 0) {                // check for the special case of b == 0
        if (by == 0) {
            printf("%d and any value for y\n", x);
        }
    } else {                     // b != 0 case
        y = by / b;
        if (by % b == 0 && 0 <= y && y <= 10) {  // is y an integer between 0 and 10?
            printf("%d %d\n", x, y);
        }
    }
}

Related

Connected component labeling with diagonal connections using union-find

I'm trying to develop a modification of the connected component algorithm I found as an answer to this question: Connected Component Labelling.
Basically, I have 2D and 3D matrices consisting of 0s and 1s. My problem is to find connected regions of 1s, labeling each region separately. The matrices can be very large (5e4-by-5e4 elements in 2D and 1000^3 elements in 3D), so I need something that doesn't strain the stack memory and is fast enough to repeat several times over the course of a simulation.
The most upvoted answer to that question, using depth-first search, gives a stack overflow error (as noted in a comment). I have been trying to use the union-find algorithm suggested by another user.
The original code (by user Dukeling) works very well for large 2-d matrices, but I want to have diagonal connections between elements. Here's my code, with the example input I am trying to use:
#include <iostream>
#include <stdio.h>
#include <stdlib.h>

const int w = 8, h = 8;

int input[w][h] = {{1,0,0,0,1,0,0,1},
                   {1,1,0,1,1,1,1,0},
                   {0,1,0,0,0,0,0,1},
                   {1,1,1,1,0,1,0,1},
                   {0,0,0,0,0,0,1,0},
                   {0,0,1,0,0,1,0,0},
                   {0,1,0,0,1,1,1,0},
                   {1,0,1,1,0,1,0,1}};

int component[w*h];

void doUnion(int a, int b)
{
    // get the root component of a and b, and set the one's parent to the other
    while (component[a] != a)
        a = component[a];
    while (component[b] != b)
        b = component[b];
    component[b] = a;
}

void unionCoords(int x, int y, int x2, int y2)
{
    if (y2 < h && x2 < w && input[x][y] && input[x2][y2] && y2 > 0 && x2 > 0)
        doUnion(x*h + y, x2*h + y2);
}

int main()
{
    int i, j;
    for (i = 0; i < w*h; i++)
        component[i] = i;
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            unionCoords(x, y, x+1, y);
            unionCoords(x, y, x, y+1);
            unionCoords(x, y, x+1, y+1);
            unionCoords(x, y, x-1, y+1);
            unionCoords(x, y, x+1, y-1);
            unionCoords(x, y, x-1, y-1);
        }
    // print the array
    for (int x = 0; x < w; x++)
    {
        for (int y = 0; y < h; y++)
        {
            if (input[x][y] == 0)
            {
                printf("%4d ", input[x][y]);
                continue;
            }
            int c = x*h + y;
            while (component[c] != c) c = component[c];
            printf("%4d ", component[c]);
        }
        printf("\n");
    }
}
As you can see, I added 4 calls for diagonal connectivity between elements. Is this a valid modification of the union-find algorithm? I searched Google and Stack Overflow in particular, but I can't find any example of diagonal connectivity. In addition, I want to extend this to 3 dimensions, so I would need to add 26 checks. Will this approach scale well? I mean, the code seems to work for my case, but sometimes I randomly get an unlabeled isolated element. I don't want to integrate it with my code only to discover a bug months later.
Thanks.
There is nothing wrong with your approach using the union-find algorithm. Union-find runs on any graph. For each node it examines, it checks the connected nodes to determine whether they are in the same subset. Your approach appears to be doing just that, checking the 8 adjacent nodes of any observed node. The union-find algorithm has nothing to do with the dimensionality of your grid; you can extend the approach to 3D or any dimension, as long as your graph corresponds correctly to that dimension. If you are experiencing errors with this, you can post an example of that error, or check out Code Review: https://codereview.stackexchange.com/.
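For the 3D extension, rather than writing out all 26 unionCoords-style calls by hand, you could loop over the neighbour offsets. Below is a minimal sketch in the same style as the code above; the names (D, H, W, input3d, comp3d, unionCells) are mine and not from the original code, and comp3d[i] = i must be initialised before use, just like component[] in the 2D version:
// Hypothetical 3D version: union cell (x,y,z) with each of its up-to-26 neighbours.
const int D = 4, H = 4, W = 4;      // example grid dimensions
int input3d[D][H][W];               // fill with your 0/1 data
int comp3d[D * H * W];              // initialise comp3d[i] = i before calling

int find3d(int a) {                 // find the root of a cell's component
    while (comp3d[a] != a) a = comp3d[a];
    return a;
}

void unionCells(int a, int b) {     // same idea as doUnion in the question
    comp3d[find3d(b)] = find3d(a);
}

void unionAllNeighbours(int x, int y, int z) {
    if (!input3d[x][y][z]) return;
    for (int dx = -1; dx <= 1; dx++)
        for (int dy = -1; dy <= 1; dy++)
            for (int dz = -1; dz <= 1; dz++) {
                if (dx == 0 && dy == 0 && dz == 0) continue;   // skip the cell itself
                int x2 = x + dx, y2 = y + dy, z2 = z + dz;
                if (x2 < 0 || x2 >= D || y2 < 0 || y2 >= H ||
                    z2 < 0 || z2 >= W) continue;               // stay inside the grid
                if (input3d[x2][y2][z2])
                    unionCells((x * H + y) * W + z, (x2 * H + y2) * W + z2);
            }
}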

Given the exp() function, how to implement the ln() function?

I came across this question when I was in a CS job interview. I have no idea about it, let alone how to implement the code...
Could I get some tips?
P.S. exp() is the function y = e^x and ln() is y = ln(x)
You can find the value by binary searching for the answer. This is possible because log X is a monotonically increasing function.
[plot of log X omitted; courtesy of WolframAlpha]
For example, if the value whose logarithm we have to calculate (call it X) is greater than 1, then start with an assumption of answer = X. Compute e^answer and check whether it is greater than or smaller than X. Based on that, you can refine your limits. The search stops when you have come within a suitable tolerance of the answer.
#include <math.h>

#define ERROR 1e-9

double ln(double X) {              // assumes X >= 1; named ln to avoid clashing with log()
    double lo = 0;                 // ln(X) >= 0 when X >= 1
    double hi = X;                 // ln(X) < X when X >= 1
    while (1) {
        double mid = (lo + hi) / 2;
        double val = exp(mid);
        if (val > X) {
            hi = mid;
        }
        if (val < X) {
            lo = mid;
        }
        if (fabs(val - X) < ERROR) {
            return mid;
        }
    }
}
Similarly, if the value of X is smaller than 1, you can reduce this case to the one we have already considered, i.e. when X is greater than 1. For example, if X = 0.04, then
log 0.04 = log(4/100)
         = (log 4) - (log 100)
If X is positive, then the logarithm can be found using Newton's method:
X_{0} = 0
X_{n+1} = X_{n} - (exp(X_{n}) - X) / exp(X_{n})
Convergence is very fast.
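A minimal sketch of that iteration (the function name newtonLn and the 1e-12 tolerance are my own choices; only exp() is assumed available). Very large X should be range-reduced first, as the next answer describes, to avoid overflowing exp():
#include <cmath>

// Newton's method for ln(X): iterate x <- x - (exp(x) - X) / exp(x).
double newtonLn(double X) {
    if (X <= 0) return std::nan("");      // ln is undefined here
    double x = 0.0;                        // X_0 = 0
    for (int i = 0; i < 100; ++i) {        // cap the iterations just in case
        double e = std::exp(x);
        double next = x - (e - X) / e;
        if (std::fabs(next - x) < 1e-12)   // converged
            return next;
        x = next;
    }
    return x;
}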
Adapting this answer to first scale X into the range [0, e]. A few things we know about ln(x): ln(x) is only defined for 0 < x; ln(1) = 0; the result can be any number from -infinity to +infinity; ln(x^a) = a * ln(x), in particular ln(x^(-1)) = -ln(x); and ln(X/e) = ln(X) - ln(e), so ln(X) = ln(X/e) + 1.
#include <cmath>

const double E = exp(1);
const double error = 1e-9;

double ln(double X) {
    if (X <= 0) return NAN;     // ln is only defined for X > 0
    // use recursion to get into the range [1, e]
    if (X < 1) {
        return -ln(1 / X);
    }
    if (X > E) {
        return ln(X / E) + 1;
    }
    // X is now between 1 and e,
    // so ln(X) is between 0 and 1
    double lo = 0;
    double hi = 1;
    while (true) {
        double mid = (lo + hi) / 2;
        double val = exp(mid);
        if (val > X) {
            hi = mid;
        }
        if (val < X) {
            lo = mid;
        }
        if (fabs(val - X) < error) {
            return mid;
        }
    }
}
If you look at the actual implementations of mathematical functions in standard libraries, they do quite a lot of prescaling work to narrow the input ranges, probably more aggressively than is done here.

Algorithm to view a larger container on a small screen

I need a mathematical algorithm (or not), simple (or not, too).
It is as follows:
I have two numbers a and b, and need to find c, the number closest to b (from below) such that "a % c == 0".
If "a % b == 0", then c == b.
Why is that?
My screen is x pixels wide, and a container has y pixels, such that y > x.
I want to calculate how much I have to scroll each time so that I can see my whole container on my screen without wasting space.
I will necessarily have to scroll to view it.
I just need to know how far I need to scroll, given my screen and container sizes, and how many scrolls it takes to view my entire container.
Could you help with this? (Java code)
int a = 2000;
int b = 300;
int c = 0;
for (int i = b; i > 0; i--) {
    if ((a % i) == 0) {
        c = i;
        break;
    }
}
The result will be in c.
The problem asks: given a and b, find the largest c such that
c <= b
c*k = a for some positive integer k
The first constraint puts a lower bound on k, and maximizing c is equivalent to minimizing k under the second constraint.
The lower bound on k follows from
a = c*k <= b*k
so k >= a/b, i.e. k >= ceil(a/b). Therefore we just look for the smallest such k that is a divisor of a, e.g.
int largestDivisorAtMost(int a, int b) {             // largest c <= b with a % c == 0
    if (b > a) return a;                             // a itself is the largest divisor of a
    for (int k = (a + b - 1) / b; k <= a; ++k) {     // start at ceil(a/b)
        if (a % k == 0) {
            return a / k;
        }
    }
    return 1;                                        // not reached: k == a always divides a
}

What's the algorithm used to solve a linear Diophantine equation: ax + by = c

I'm looking for integer solutions here. I know it has infinitely many solutions derived from a first pair of solutions, provided gcd(a,b) | c. However, how can we find that first pair of solutions? Is there an algorithm to solve this problem?
Thanks,
Chan
Note that there isn't always a solution. In fact, there's only a solution if c is a multiple of gcd(a, b).
That said, you can use the extended euclidean algorithm for this.
I prefer to use the recursive algorithm; in pseudocode:
function extended_gcd(a, b)
    if a mod b = 0
        return {0, 1}
    else
        {x, y} := extended_gcd(b, a mod b)
        return {y, x - (y * (a div b))}
And here is a C++ function that implements it, assuming c = gcd(a, b):
int ExtendedGcd(int a, int b, int &x, int &y)
{
    if (a % b == 0)
    {
        x = 0;
        y = 1;
        return b;
    }
    int newx, newy;
    int ret = ExtendedGcd(b, a % b, newx, newy);
    x = newy;
    y = newx - newy * (a / b);
    return ret;
}
Now if you have c = k*gcd(a, b) with k > 0, the function above gives you a solution of
ax + by = gcd(a, b)    (1)
and multiplying that solution by k gives a solution of
ax + by = k*gcd(a, b) = c    (2)
So just find your solution for (1) and multiply x and y by k.
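For example, a small driver along these lines (the wrapper name SolveDiophantine is mine; it reuses the ExtendedGcd function above and assumes b != 0):
#include <cstdio>

// Find one integer solution of a*x + b*y = c, or report that none exists.
bool SolveDiophantine(int a, int b, int c, int &x, int &y)
{
    int x0, y0;
    int g = ExtendedGcd(a, b, x0, y0);    // a*x0 + b*y0 == g
    if (c % g != 0) return false;         // no integer solution
    int k = c / g;
    x = x0 * k;                           // scale the particular solution
    y = y0 * k;
    return true;
}

int main()
{
    int x, y;
    if (SolveDiophantine(6, 10, 8, x, y))
        std::printf("x = %d, y = %d\n", x, y);   // one solution of 6x + 10y = 8
    else
        std::printf("no integer solution\n");
}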

Check if one integer is an integer power of another

This is an interview question: "Given 2 integers x and y, check if x is an integer power of y" (e.g. for x = 8 and y = 2 the answer is "true", and for x = 10 and y = 2 "false").
The obvious solution is: int n = y; while (n < x) n *= y; return n == x;
Now I am thinking about how to improve it.
Of course, I can check some special cases: e.g. both x and y should be either odd or even numbers, i.e. we can check the least significant bit of x and y. However I wonder if I can improve the core algorithm itself.
You'd do better to repeatedly divide y into x. The first time you get a non-zero remainder you know x is not an integer power of y.
while (x%y == 0) x = x / y
return x == 1
This deals with your odd/even point on the first iteration.
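Wrapped up as a complete function, that idea might look like this (the name isIntegerPower is mine; it assumes x >= 1 and y >= 2, so smaller or negative values would need extra special-casing):
// Repeated division: x is an integer power of y iff dividing out y repeatedly leaves 1.
bool isIntegerPower(long x, long y) {
    while (x % y == 0)
        x /= y;
    return x == 1;
}
For example, isIntegerPower(8, 2) returns true and isIntegerPower(10, 2) returns false.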
It means log_y(x) should be an integer. No loop needed; it's O(1) time.
public class PowerTest {

    public static boolean isPower(int x, int y) {
        double d = Math.log(Math.abs(x)) / Math.log(Math.abs(y));
        if ((x > 0 && y > 0) || (x < 0 && y < 0)) {
            if (d == (int) d) {
                return true;
            } else {
                return false;
            }
        } else if (x > 0 && y < 0) {
            if ((int) d % 2 == 0) {
                return true;
            } else {
                return false;
            }
        } else {
            return false;
        }
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        System.out.println(isPower(-32, -2));
        System.out.println(isPower(2, 8));
        System.out.println(isPower(8, 12));
        System.out.println(isPower(9, 9));
        System.out.println(isPower(-16, 2));
        System.out.println(isPower(-8, -2));
        System.out.println(isPower(16, -2));
        System.out.println(isPower(8, -2));
    }
}
This looks for the exponent in O(log N) steps:
#define MAX_POWERS 100

int is_power(unsigned long x, unsigned long y) {
    int i;
    unsigned long powers[MAX_POWERS];
    unsigned long last;
    last = powers[0] = y;
    for (i = 1; last < x; i++) {
        last *= last;      // note that last * last can overflow here!
        powers[i] = last;
    }
    while (x >= y) {
        unsigned long top = powers[--i];
        if (x >= top) {
            unsigned long x1 = x / top;
            if (x1 * top != x) return 0;
            x = x1;
        }
    }
    return (x == 1);
}
Negative numbers are not handled by this code, but it can be done easily with some conditional code for the i = 1 case.
This looks to be pretty fast for positive numbers, as it finds the lower and upper limits for the desired exponent and then applies binary search.
#include <iostream>
#include <cmath>
using namespace std;

// x is the value being tested, y is the base.
bool isIntegerPower(int x, int y)
{
    int low = 0, high = 1;
    int exp = 1;
    int val = y;
    // Loop by doubling the exponent to find low and high exponents
    // between which the required exponent lies.
    while (1)
    {
        val = pow((double)y, exp);
        if (val == x)
            return true;
        else if (val > x)
            break;
        low = exp;
        exp = exp * 2;
        high = exp;
    }
    // Use binary search to find the actual integer exponent if it exists.
    // Otherwise, return false as there is no integer power.
    int mid = (low + high) / 2;
    while (low < high)
    {
        val = pow((double)y, mid);
        if (val > x)
        {
            high = mid - 1;
        }
        else if (val == x)
        {
            return true;
        }
        else if (val < x)
        {
            low = mid + 1;
        }
        mid = (low + high) / 2;
    }
    return false;
}

int main()
{
    cout << isIntegerPower(1024, 2);
}
int main()
{
cout<<isIntegerPower(1024,2);
}
double a = 8;
double b = 64;
double n = Math.log(b) / Math.log(a);
double e = Math.ceil(n);
if ((n / e) == 1) {
    System.out.println("true");
} else {
    System.out.println("false");
}
I would implement the function like so:
bool IsWholeNumberPower(int x, int y)
{
    double power = log(x) / log(y);
    return floor(power) == power;
}
This shouldn't need a check within a delta, as is common with floating-point comparisons, since we're checking whole numbers.
On second thoughts, don't do this. It does not work for negative x and/or y. Note that all other log-based answers presented right now are also broken in exactly the same manner.
The following is a fast general solution (in Java):
static boolean isPow(int x, int y) {
    int logyx = (int)(Math.log(x) / Math.log(y));
    return pow(y, logyx) == x || pow(y, logyx + 1) == x;
}
Where pow() is an integer exponentiation function such as the following in Java:
static int pow(int a, int b) {
    return (int)Math.pow(a, b);
}
(This works due to the following guarantee provided by Math.pow: "If both arguments are integers, then the result is exactly equal to the mathematical result of raising the first argument to the power of the second argument...")
The reason to go with logarithms instead of repeated division is performance: while log is slower than division, it is slower by a small fixed multiple. At the same time it does remove the need for a loop and therefore gives you a constant-time algorithm.
In cases where y is 2, there is a quick approach that avoids the need for a loop. This approach can be extended to cases where y is some larger power of 2.
If x is a power of 2, the binary representation of x has a single set bit. There is a fairly simple bit-fiddling algorithm for counting the bits in an integer in O(log n) time where n is the bit-width of an integer. Many processors also have specialised instructions that can handle this as a single operation, about as fast as (for example) an integer negation.
To extend the approach, though, first take a slightly different approach to checking for a single bit. First determine the position of the least significant bit. Again, there is a simple bit-fiddling algorithm, and many processors have fast specialised instructions.
If this bit is the only bit, then (1 << pos) == x. The advantage here is that if you're testing for a power of 4, you can test for pos % 2 == 0 (the single bit is at an even position). More generally, for a power of 2^m you can test for pos % m == 0.
In principle, you could do something similar for testing for powers of 3 and powers of powers of 3. The problem is that you'd need a machine that works in base 3, which is a tad unlikely. You can certainly test any value x to see if its representation in base y has a single non-zero digit, but you'd be doing more work than you're already doing. The above exploits the fact that computers work in binary.
Probably not worth doing in the real world, though.
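As a concrete sketch of that idea for powers of 4, using portable bit operations instead of the specialised instructions mentioned above (the function name is mine):
#include <cstdint>

// x is a power of 4 exactly when it has a single set bit and that bit is at an even position.
bool isPowerOfFour(std::uint32_t x) {
    if (x == 0 || (x & (x - 1)) != 0)
        return false;            // zero, or more than one set bit
    int pos = 0;
    while ((x >> pos) != 1)
        ++pos;                   // position of the single set bit
    return pos % 2 == 0;         // even position => power of 4
}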
Here is a Python version which puts together the ideas of @salva and @Axn, modified to not generate any numbers greater than those given, and which uses only simple storage (read: "no lists") by repeatedly paring away at the number of interest:
def perfect_base(b, n):
    """Returns True if integer n can be expressed as b**e for some
    positive integer e, else False."""
    assert b > 1 and n >= b and int(n) == n and int(b) == b
    # parity check
    if not b % 2:
        if n % 2:
            return False  # b,n is even,odd
        if b == 2:
            return n & (n - 1) == 0
        if not b & (b - 1) and n & (n - 1):
            return False  # b == 2**m but n != 2**M
    elif not n % 2:
        return False  # b,n is odd,even
    while n >= b:
        d = b
        while d <= n:
            n, r = divmod(n, d)
            if r:
                return False
            d *= d
    return n == 1
Previous answers are correct; I liked Paul's answer the best. It's simple and clean.
Here is the Java implementation of what he suggested:
public static boolean isPowerOfaNumber(int baseOrg, int powerOrg) {
    double base = baseOrg;
    double power = powerOrg;
    while (base % power == 0)
        base = base / power;
    // return true if base equals 1
    return base == 1;
}
In case the number is too large, use the log function to reduce the time complexity:
import math

base = int(input("Enter the base number: "))
for i in range(base, int(input("Enter the end of range: ")) + 1):
    if math.log(i) / math.log(base) % 1 == 0:
        print(i)
If you have access to the largest power of y that fits inside the required datatype, this is a really slick way of solving this problem.
Let's say, for our case, y == 3. So we would need to check whether x is a power of 3.
Given that we need to check if an integer x is a power of 3, let us start thinking about this problem in terms of what information is already at hand.
1162261467 is the largest power of 3 that can fit into a Java int:
1162261467 = 3^19
The given x can be expressed as [(a power of 3) + (some n)]. It is fairly elementary to prove that if n is 0 (which happens iff x is a power of 3), then 1162261467 % x == 0.
So, to check if a given integer x is a power of three, check whether x > 0 && 1162261467 % x == 0.
Generalizing: to check if a given integer x is a power of a given integer y, check whether x > 0 && Y % x == 0, where Y is the largest power of y that fits into the integer datatype. (Note that this divisibility test is only valid when y is prime; for composite y, a divisor of Y need not be a power of y.)
The general idea is that if A is some power of Y, A can be expressed as B / Y^a, where a is some integer and A < B. The exact same principle applies for A > B. The A = B case is elementary.
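A sketch of the check described above, for y = 3 (1162261467 is 3^19, the largest power of 3 that fits in a signed 32-bit int):
// x is a power of 3 iff it is positive and divides the largest power of 3 in an int.
bool isPowerOfThree(int x) {
    return x > 0 && 1162261467 % x == 0;
}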
I found this solution:
#include <cmath>

// Check if A can be expressed as an integer power of another integer (A == a^p for some integers a, p > 1)
int isPower(int A)
{
    double p;
    if (A == 1)
        return 1;
    for (int a = 2; a <= sqrt(A); ++a)
    {
        p = log(A) / log(a);
        if (p - int(p) < 0.000000001)
            return 1;
    }
    return 0;
}
