Algorithm for checking transitivity of a relation?

I need to check whether a relation is transitive or not.
Would you please suggest some algorithm to check the transitivity of a relation?
I am storing the relation as a boolean matrix: there is a 1 if the elements are related, otherwise 0, like in graphs.
Thanks.

A much simpler algorithm than my Map/Set version (deleted), now with a boolean matrix. Maybe this is easier to understand, even if you don't know Java?
public class Trans
{
    final static int SIZE = 4;

    static boolean isTransitive(boolean[][] function)
    {
        for (int i = 0; i < SIZE; i++)
        {
            for (int j = 0; j < SIZE; j++)
            {
                if (function[i][j])
                {
                    for (int k = 0; k < SIZE; k++)
                    {
                        if (function[j][k] && !function[i][k])
                        {
                            return false;
                        }
                    }
                }
            }
        }
        return true;
    }

    public static void main(String[] args)
    {
        boolean[][] function = new boolean[SIZE][SIZE];
        function[0][1] = true;
        function[1][2] = true;
        function[0][2] = true;
        function[0][3] = true;
        function[1][3] = true;
        System.out.println(isTransitive(function));
    }
}

Even though this totally sounds like homework...
You'd need to store your relations so that you can look them up by the antecedent very quickly. Then you can discover transitive relations of the form A->B->C, add them to the same storage, and keep going to look up A->B->C->D, and so on...
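As a sketch of that idea with the boolean-matrix representation from the question: a Warshall-style pass discovers all derived pairs (the transitive closure), and the relation is transitive exactly when the closure adds nothing new. The class and method names here are my own:

```java
public class TransitiveClosure {
    // Returns true if the relation (given as an adjacency matrix) is already
    // transitively closed, i.e. computing the closure adds no new pairs.
    static boolean isTransitive(boolean[][] rel) {
        int n = rel.length;
        boolean[][] closure = new boolean[n][n];
        for (int i = 0; i < n; i++)
            closure[i] = rel[i].clone();
        // Warshall's algorithm: allow paths through intermediate vertex k
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (closure[i][k] && closure[k][j])
                        closure[i][j] = true;
        // Transitive iff the closure equals the original relation
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (closure[i][j] != rel[i][j])
                    return false;
        return true;
    }

    public static void main(String[] args) {
        boolean[][] rel = new boolean[3][3];
        rel[0][1] = true;
        rel[1][2] = true;
        System.out.println(isTransitive(rel)); // false: 0->2 is missing
    }
}
```

This is O(n^3) like the triple loop above, but it also tells you which pairs are missing if you want to repair the relation rather than just reject it.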

Topological sorting may be the right direction. The relation is transitive if there are no loops in its directed-graph representation. If you care about speed, graph algorithms are probably the way to go.


Argument of varying types?

How do I make an argument of varying types?
I want to do m.add(5) or m.add(float[][]). How would I do that?
void add(? n) {
    for (int i = 0; i < cols; i++) {
        for (int j = 0; j < rows; j++) {
            data[i][j] += n;
        }
    }
}
You're looking for something called method overloading. You can google that for a ton of results, but basically you'd want to define the function twice:
void add(float n){
    // do the thing
}
void add(float[][] n){
    // do the thing
}
In theory you could also take an Object parameter and then use the instanceof keyword to figure out what type was actually passed in, but that's a hackier approach.
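For illustration, a minimal self-contained sketch of the overloading approach. The Matrix class, its field names (rows, cols, data), and the sizes are hypothetical, loosely matching the snippet in the question:

```java
public class Matrix {
    // Hypothetical fields matching the snippet in the question
    int rows = 2, cols = 2;
    float[][] data = new float[2][2];

    // Overload 1: add a scalar to every cell (an int argument widens to float)
    void add(float n) {
        for (int i = 0; i < cols; i++)
            for (int j = 0; j < rows; j++)
                data[i][j] += n;
    }

    // Overload 2: add another matrix element-wise
    void add(float[][] n) {
        for (int i = 0; i < cols; i++)
            for (int j = 0; j < rows; j++)
                data[i][j] += n[i][j];
    }

    public static void main(String[] args) {
        Matrix m = new Matrix();
        m.add(5);                              // resolves to add(float)
        m.add(new float[][]{{1, 2}, {3, 4}});  // resolves to add(float[][])
        System.out.println(m.data[0][0]);      // 6.0
    }
}
```

The compiler picks the overload from the argument's static type at the call site, which is why this needs no instanceof checks at runtime.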

Merge two text input files, each line of the files one after the other. See example

I was trying to solve, using Java 8 streams, a problem that I have already solved using a simple for loop. However, I have no idea how to do this with streams.
The Problem is :
File1 :
1,sdfasfsf
2,sdfhfghrt
3,hdfxcgyjs
File2 :
10,xhgdfgxgf
11,hcvcnhfjh
12,sdfgasasdfa
13,ghdhtfhdsdf
Output should be like
1,sdfasfsf
10,xhgdfgxgf
2,sdfhfghrt
11,hcvcnhfjh
3,hdfxcgyjs
12,sdfgasasdfa
13,ghdhtfhdsdf
I already have this basically working,
The core logic is :
List<String> left = readFile(lhs);
List<String> right = readFile(rhs);
int leftSize = left.size();
int rightSize = right.size();
int size = leftSize > rightSize ? leftSize : rightSize;
for (int i = 0; i < size; i++) {
    if (i < leftSize) {
        merged.add(left.get(i));
    }
    if (i < rightSize) {
        merged.add(right.get(i));
    }
}
MergeInputs.java
UnitTest
Input files are in src/test/resources/com/linux/test/merge/list of the same repo (only allowed to post two links)
However, I boasted I could do this easily using streams and now I am not sure if this can even be done.
Help is really appreciated.
You may simplify your operation to have fewer conditionals per element:
int leftSize = left.size(), rightSize = right.size(), min = Math.min(leftSize, rightSize);
List<String> merged = new ArrayList<>(leftSize + rightSize);
for (int i = 0; i < min; i++) {
    merged.add(left.get(i));
    merged.add(right.get(i));
}
if (leftSize != rightSize) {
    merged.addAll(
        (leftSize < rightSize ? right : left).subList(min, Math.max(leftSize, rightSize)));
}
Then, you may replace the first part by a stream operation:
int leftSize = left.size(), rightSize = right.size(), min = Math.min(leftSize, rightSize);
List<String> merged = IntStream.range(0, min)
    .mapToObj(i -> Stream.of(left.get(i), right.get(i)))
    .flatMap(Function.identity())
    .collect(Collectors.toCollection(ArrayList::new));
if (leftSize != rightSize) {
    merged.addAll(
        (leftSize < rightSize ? right : left).subList(min, Math.max(leftSize, rightSize)));
}
But it isn’t really simpler than the loop variant. The loop variant may even be more efficient due to its presized list.
Incorporating both operations into one stream operation would be even more complicated (and probably even less efficient).
The code logic should be like this:
int leftSize = left.size();
int rightSize = right.size();
int minSize = Math.min(leftSize, rightSize);
for (int i = 0; i < minSize; i++) {
    merged.add(left.get(i));
    merged.add(right.get(i));
}
// adding remaining elements
merged.addAll(
    minSize < leftSize ? left.subList(minSize, leftSize)
                       : right.subList(minSize, rightSize)
);
Another option is toggling between the two lists through an Iterator, for example:
toggle(left, right).forEachRemaining(merged::add);
// OR using a stream instead
List<String> merged = Stream.generate(toggle(left, right)::next)
    .limit(left.size() + right.size())
    .collect(Collectors.toList());
The toggle method is as below:
<T> Iterator<? extends T> toggle(List<T> left, List<T> right) {
    return new Iterator<T>() {
        private final int RIGHT = 1;
        private final int LEFT = 0;
        int cursor = -1;
        Iterator<T>[] pair = arrayOf(left.iterator(), right.iterator());

        @SafeVarargs
        private final Iterator<T>[] arrayOf(Iterator<T>... iterators) {
            return iterators;
        }

        @Override
        public boolean hasNext() {
            for (Iterator<T> each : pair) {
                if (each.hasNext()) {
                    return true;
                }
            }
            return false;
        }

        @Override
        public T next() {
            return pair[cursor = next(cursor)].next();
        }

        private int next(int cursor) {
            cursor = pair[LEFT].hasNext() ? pair[RIGHT].hasNext() ? cursor : RIGHT : LEFT;
            return (cursor + 1) % pair.length;
        }
    };
}

Checking for bipartite-ness in a large graph, made up of several disconnected graphs?

I was doing a problem on SPOJ SPOJ:BUGLIFE
It required me to check whether the graph was bipartite or not. I know the method for a single connected graph, but for a combination of disconnected graphs, my method gives a Time Limit Exceeded error.
Here's my approach: Breadth-First Search, using circular queues, with the graph implemented as adjacency lists.
Method -> choose a source, and if that source vertex is unvisited, start a Breadth-First Search from it. If I find a conflict in the BFS, I abort the whole thing; else I move on to another unvisited source.
How can I make this faster or better?
P.S. I am new to Graph Theory, so please explain in detail.
The following implementation (C++ version) is fast enough when tested on very large datasets (edges > 1000). Hope it helps.
struct NODE
{
    int color;                  // -1 = unvisited, 0/1 = the two groups
    vector<int> neigh_list;
};

bool checkAllNodesVisited(NODE *graph, int numNodes, int & index);

bool checkBigraph(NODE * graph, int numNodes)
{
    int start = 0;
    do
    {
        queue<int> Myqueue;
        Myqueue.push(start);
        graph[start].color = 0;
        while(!Myqueue.empty())
        {
            int gid = Myqueue.front();
            for(int i=0; i<graph[gid].neigh_list.size(); i++)
            {
                int neighid = graph[gid].neigh_list[i];
                if(graph[neighid].color == -1)
                {
                    graph[neighid].color = (graph[gid].color+1)%2; // assign to the other group
                    Myqueue.push(neighid);
                }
                else
                {
                    if(graph[neighid].color == graph[gid].color) // trouble pair in the same group
                        return false;
                }
            }
            Myqueue.pop();
        }
    } while (!checkAllNodesVisited(graph, numNodes, start)); // make sure all nodes are visited,
                                                             // to handle several separated graphs, IMPORTANT!!!
    return true;
}

bool checkAllNodesVisited(NODE *graph, int numNodes, int & index)
{
    for (int i=0; i<numNodes; i++)
    {
        if (graph[i].color == -1)
        {
            index = i;
            return false;
        }
    }
    return true;
}
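Since the rest of this thread is in Java, here is the same idea translated as a sketch, with my own class and method names: an adjacency-list graph, a 0/1/2 color array, and an outer loop that restarts BFS from every unvisited vertex so disconnected components are all covered.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

public class Bipartite {
    // BFS two-coloring; restarts from every unvisited vertex so that
    // all disconnected components are checked.
    static boolean isBipartite(List<List<Integer>> adj) {
        int n = adj.size();
        int[] color = new int[n];            // 0 = unvisited, 1 or 2 = the two sides
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        for (int start = 0; start < n; start++) {
            if (color[start] != 0) continue; // already handled by an earlier BFS
            color[start] = 1;
            queue.add(start);
            while (!queue.isEmpty()) {
                int u = queue.poll();
                for (int v : adj.get(u)) {
                    if (color[v] == 0) {
                        color[v] = 3 - color[u]; // opposite side of u
                        queue.add(v);
                    } else if (color[v] == color[u]) {
                        return false;            // odd cycle: not bipartite
                    }
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Two components: a 4-cycle (bipartite) and an isolated edge
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 6; i++) adj.add(new ArrayList<>());
        int[][] edges = {{0, 1}, {1, 2}, {2, 3}, {3, 0}, {4, 5}};
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        System.out.println(isBipartite(adj)); // true
    }
}
```

Each vertex and edge is processed once, so the whole check is O(V + E); if this still exceeds the time limit, the bottleneck is usually input parsing rather than the BFS itself.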

Need to make program paralleling Sieve of Eratosthenes algorithm in java using arrays

We were assigned to write a Java program that parallels the Sieve of Eratosthenes algorithm. I have tried several times, every which way I know of, but haven't been able to get it right. I'm supposed to fill arrays with the prime numbers less than the number that was input. Here is the code I have. Can someone please help me double-check the program and/or figure out why I am getting this error? Any help is appreciated.
import java.text.DecimalFormat;
import java.util.Scanner;
import java.util.Arrays;

public class Lab6st
{
    static int MAX = 100;
    static int i;
    static int k;
    static int intArray;
    static int isPrime;

    public static void main(String args[])
    {
        System.out.println("\nLAB12 100 Point Version");
        Scanner input = new Scanner(System.in);
        boolean primes[] = new boolean[MAX];
        computePrimes(primes);
        displayPrimes(primes);
        Arrays.fill(primes,true);
    }

    public static void computePrimes(boolean primes[])
    {
        System.out.println("\nCOMPUTING PRIME NUMBERS");
        for (int i = 1; i < MAX; i++);
        {
            for (i=1; i < MAX; i++ );
            for (k=2; k<i; k++){
                int n = i%k;
                if (n==0)
                {
                    break;
                }
            }
            if (i==k);
            {
                primes[i] = true;
            }
        }
    }

    public static void displayPrimes(boolean primes[])
    {
        System.out.println("\n\nPRIMES BETWEEN 1 AND "+ primes.length);
        for (int isPrime = 0; isPrime < MAX; isPrime++);
        if (primes[isPrime] == true);
        System.out.println(Arrays.asList(primes));
    }
}
Your algorithm is not the Sieve of Eratosthenes; it's a poor implementation of trial division (the modulo operator gives it away). My essay Programming with Prime Numbers describes the Sieve of Eratosthenes algorithm in detail, discusses the very common error that you made, and includes this implementation in Java:
public static LinkedList<Integer> sieve(int n)
{
    BitSet b = new BitSet(n);
    LinkedList<Integer> ps = new LinkedList<>();
    b.set(0, n);
    for (int p = 2; p < n; p++)
    {
        if (b.get(p))
        {
            ps.add(p);
            for (int i = p + p; i < n; i += p)
            {
                b.clear(i);
            }
        }
    }
    return ps;
}
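To try that method out, here is a self-contained wrapper around it; the class name and the main method are mine, the sieve body is the same:

```java
import java.util.BitSet;
import java.util.LinkedList;

public class SieveDemo {
    public static LinkedList<Integer> sieve(int n) {
        BitSet b = new BitSet(n);
        LinkedList<Integer> ps = new LinkedList<>();
        b.set(0, n);                    // assume every number is prime...
        for (int p = 2; p < n; p++) {
            if (b.get(p)) {             // ...keep each survivor as a prime
                ps.add(p);
                for (int i = p + p; i < n; i += p) {
                    b.clear(i);         // ...and strike out its multiples
                }
            }
        }
        return ps;
    }

    public static void main(String[] args) {
        System.out.println(sieve(30)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    }
}
```

Note the contrast with trial division: the sieve never divides anything, it only strikes out multiples, which is what makes it fast.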

An even and sorted distribution problem

I have a given number of boxes in a specific order and a number of weights in a specific order. The weights may differ (i.e. one may weigh 1 kg, another 2 kg, etc.).
I want to put the weights into the boxes so that they are as evenly distributed as possible, weight-wise. I must take the weights in the order they are given, and I must fill the boxes in the order they are given. That is, if I put a weight in box n+1, I cannot put a weight in box n, and I cannot put weight m+1 in a box until I've first put weight m in a box.
I need to find an algorithm that solves this problem for any number of boxes and any set of weights.
A few tests in C# with xUnit (Distribute is the method that should solve the problem):
[Fact]
public void ReturnsCorrectNumberOfBoxes()
{
    int[] populatedColumns = Distribute(new int[0], 4);
    Assert.Equal<int>(4, populatedColumns.Length);
}

[Fact]
public void Test1()
{
    int[] weights = new int[] { 1, 1, 1, 1 };
    int[] boxes = Distribute(weights, 4);
    Assert.Equal<int>(weights[0], boxes[0]);
    Assert.Equal<int>(weights[1], boxes[1]);
    Assert.Equal<int>(weights[2], boxes[2]);
    Assert.Equal<int>(weights[3], boxes[3]);
}

[Fact]
public void Test2()
{
    int[] weights = new int[] { 1, 1, 17, 1, 1 };
    int[] boxes = Distribute(weights, 4);
    Assert.Equal<int>(2, boxes[0]);
    Assert.Equal<int>(17, boxes[1]);
    Assert.Equal<int>(1, boxes[2]);
    Assert.Equal<int>(1, boxes[3]);
}

[Fact]
public void Test3()
{
    int[] weights = new int[] { 5, 4, 6, 1, 5 };
    int[] boxes = Distribute(weights, 4);
    Assert.Equal<int>(5, boxes[0]);
    Assert.Equal<int>(4, boxes[1]);
    Assert.Equal<int>(6, boxes[2]);
    Assert.Equal<int>(6, boxes[3]);
}
Any help is greatly appreciated!
See the solution below.
Cheers,
Maras
public static int[] Distribute(int[] weights, int boxesNo)
{
    if (weights.Length == 0)
    {
        return new int[boxesNo];
    }
    double average = weights.Average();
    int[] distribution = new int[weights.Length];
    for (int i = 0; i < distribution.Length; i++)
    {
        distribution[i] = 0;
    }
    double avDeviation = double.MaxValue;
    List<int> bestResult = new List<int>(boxesNo);
    while (true)
    {
        List<int> result = new List<int>(boxesNo);
        for (int i = 0; i < boxesNo; i++)
        {
            result.Add(0);
        }
        for (int i = 0; i < weights.Length; i++)
        {
            result[distribution[i]] += weights[i];
        }
        double tmpAvDeviation = 0;
        for (int i = 0; i < boxesNo; i++)
        {
            tmpAvDeviation += Math.Pow(Math.Abs(average - result[i]), 2);
        }
        if (tmpAvDeviation < avDeviation)
        {
            bestResult = result;
            avDeviation = tmpAvDeviation;
        }
        if (distribution[weights.Length - 1] < boxesNo - 1)
        {
            distribution[weights.Length - 1]++;
        }
        else
        {
            int index = weights.Length - 1;
            while (distribution[index] == boxesNo - 1)
            {
                index--;
                if (index == -1)
                {
                    return bestResult.ToArray();
                }
            }
            distribution[index]++;
            for (int i = index; i < weights.Length; i++)
            {
                distribution[i] = distribution[index];
            }
        }
    }
}
Second try: I think the A* (pronounced "A star") algorithm would work well here, even if it would consume a lot of memory. You are guaranteed to get an optimal answer, if one exists.
Each "node" you are searching is a possible combination of weights in boxes. The first node should be any weight you pick at random, put into a box. I would recommend picking new weights randomly as well.
Unfortunately, A* is complex enough that I don't have time to explain it here. It is easy enough to understand by reading on your own, but mapping it to this problem as I described above will be more difficult. Please post back questions on that if you choose this route.
