Here is the code challenge:
A restaurant has m tables, and table i can seat a_i people. n teams come to eat, and team i has b_i people.
For each team you can choose to serve it or turn it away. If you do not serve a team, the cost is b_i * x. If you do serve it, one table might not seat everyone, so the team may need to sit at separate tables,
and the cost will be (k-1) * y, where k is the number of tables the team is split across.
A table can only be used by one team. Please calculate the least cost.
Example:
m n x y
5 2 5 3
a[] = {4,5,1,1,1}
b[] = {7,3}
output: 6
Explanation:
The second team sits at the first table, and the first team sits at the 2nd, 3rd, and 4th tables; the cost is (3-1)*3 = 6.
I have tried many times. At first I thought this was a dynamic programming problem, but I couldn't derive a state transition function, and the example doesn't seem to exhibit optimal substructure. I then tried to solve it by recursion, enumerating all the possibilities, but I couldn't make that work.
There is no known general and efficient solution to this problem, not even a pseudo-polynomial one of the kind dynamic programming typically gives.
That can be seen by reducing a special case of the 3-partition problem to this one. Given 3*m numbers summing to m*T with each number of size between T/4 and T/2, we set x=1, y=4, make the numbers into our tables, and have m groups of size T to seat. If there is a 3-partition of the set, the optimal solution will be to seat every group at 3 tables. (Why? Because no 2 tables are big enough to seat a group. So a 3-partition seats all groups optimally, and any seating of that cost is a 3-partition.) Therefore a solver for your problem is able to solve this special case of the 3-partition problem.
But this special case is strongly NP-complete, which means that not only is it NP-complete, it doesn't even have a pseudo-polynomial solution of the kind dynamic programming would give us (unless P = NP).
I would personally solve this problem in practice by viewing it as an A* search, with a priority given by the pair (cost, -groups_left). Even with the trivial heuristic of 0 for the remaining cost you can solve greedily-solvable instances quickly, and with some work on a smart heuristic this will likely perform well in practice.
That said, there will be pathological cases where it takes exponential time and memory to finish.
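A minimal sketch of that idea, using best-first search with a zero heuristic (i.e. Dijkstra) over states of (undecided teams, free tables); the names and structure here are my own, not from the original answer:

```python
import heapq
from itertools import combinations

def min_cost(tables, teams, x, y):
    # State: (number of undecided teams, sorted tuple of free table sizes).
    # Among equal costs, the tuple ordering prefers states with fewer
    # teams left, matching the (cost, -groups_left) priority above.
    n = len(teams)
    start = tuple(sorted(tables))
    heap = [(0, n, start)]
    best = {(n, start): 0}
    while heap:
        cost, left, free = heapq.heappop(heap)
        if cost > best.get((left, free), float("inf")):
            continue  # stale heap entry
        if left == 0:
            return cost  # cheapest complete plan pops first
        i = n - left  # index of the next undecided team

        def push(c, rest):
            key = (left - 1, rest)
            if c < best.get(key, float("inf")):
                best[key] = c
                heapq.heappush(heap, (c, left - 1, rest))

        push(cost + teams[i] * x, free)  # turn team i away: b_i * x
        for k in range(1, len(free) + 1):  # or seat it across k tables
            seen = set()
            for idx in combinations(range(len(free)), k):
                chosen = tuple(free[j] for j in idx)
                if chosen in seen or sum(chosen) < teams[i]:
                    continue
                seen.add(chosen)  # skip duplicate table-size sets
                rest = tuple(v for j, v in enumerate(free) if j not in idx)
                push(cost + (k - 1) * y, rest)
    return 0

print(min_cost([4, 5, 1, 1, 1], [7, 3], 5, 3))  # 6, matching the example
```

This is exact but exponential in the worst case, as the answer above warns; the priority queue just makes typical instances cheap to explore.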
I tried to solve it as follows:
make all permutations of the tables
calculate all combinations, mixing the team and table order sequences
sort the results by cost, ascending
I hope this helps.
import java.util.* ;
public class TableAllotment {
static class Table {
int idx, numOfSeat ;
Table(int idx, int numOfSeat) {
this.idx = idx ;
this.numOfSeat = numOfSeat ;
}
public String toString() {
return String.format("t(%d:%d)", idx+1, numOfSeat) ;
}
}
static class Team implements Cloneable, Comparable<Team> {
int idx, numOfPerson ;
ArrayList<Table> tables ;
Team(int idx, int numOfPerson) {
this.idx = idx ;
this.numOfPerson = numOfPerson ;
tables = new ArrayList<>() ;
}
@Override protected Team clone() {
return new Team(idx, numOfPerson) ;
}
@Override public int compareTo(Team team) {
return toString().compareTo(team.toString()) ;
}
public String toString() {
return String.format("g(%d:%d),%s", idx+1, numOfPerson, tables) ;
}
}
static class Allotment implements Comparable<Allotment> {
TreeSet<Team> teams ;
int score ;
Allotment() {
teams = new TreeSet<>() ;
}
@Override public int compareTo(Allotment team) {
return toString().compareTo(team.toString()) ;
}
public String toString() {
return String.format("%6d, %s", score, teams) ;
}
}
TableAllotment(Integer[] tables, Integer[] teams, int x, int y) {
var tableList = new ArrayList<Table>() ; // initialize Table list with num of seat
for (int i = 0 ; i < tables.length ; i++) {
tableList.add(new Table(i, tables[i])) ;
}
var tableAnyOrderSeq = new ArrayList<ArrayList<Table>>() ; // make random table sequence
genAllOrderSeq(tableList, 0, tableAnyOrderSeq) ;
var teamList = new ArrayList<Team>() ; // initialize Team list with num of person
for (int i = 0 ; i < teams.length ; i++) {
teamList.add(new Team(i, teams[i])) ;
}
var results = new TreeSet<Allotment>() ;
calcByOrder(tableAnyOrderSeq, teamList, x, y, results) ;
System.out.println("Team arrival order by input :") ;
for (Allotment allotment : results) System.out.println(allotment) ;
// var teamAnyOrderSeq = new ArrayList<ArrayList<Team>>() ; // make random Team order sequence
// genAllOrderSeq(teamList, 0, teamAnyOrderSeq) ;
// results = new TreeSet<Allotment>() ;
// calcByRandomOrder(tableAnyOrderSeq, teamAnyOrderSeq, x, y, results) ;
// System.out.println("Team arrival order by random :") ;
// for (Allotment allotment : results) System.out.println(allotment) ;
}
static void calcByOrder(List<ArrayList<Table>> tablesAnyOrder, ArrayList<Team> teams,
int x, int y, TreeSet<Allotment> results) {
for (int i = 0 ; i < tablesAnyOrder.size(); i++) {
var tables = tablesAnyOrder.get(i) ;
var remainPerson = -1 ;
var teamIdx = -1 ;
var result = new Allotment() ;
Team team = new Team(-1,-1) ; // dummy initialize
for (int j = 0 ; j < tables.size(); j++) {
if (remainPerson <= 0) { // new Team
if (++teamIdx >= teams.size()) break ;
team = teams.get(teamIdx).clone() ;
remainPerson = team.numOfPerson ;
}
remainPerson -= tables.get(j).numOfSeat ;
team.tables.add(tables.get(j)) ;
if (remainPerson <= 0) {
Collections.sort(team.tables, (t1, t2) -> t1.idx==t2.idx ? 0 : t1.idx>t2.idx ? 1 : -1) ;
result.teams.add(team) ;
}
}
if (result.teams.size() > 0) {
for (Team team_ : result.teams) { // calc served score
result.score += (team_.tables.size() - 1) * y ;
}
for (int k = 0 ; k < teams.size() ; k++) { // find not served teams
var isServed = false ;
for (Team team__ : result.teams) {
if (teams.get(k).idx == team__.idx) {
isServed = true ;
break ;
}
}
if (!isServed) result.score += teams.get(k).numOfPerson * x ; // calc not served score
}
results.add(result) ;
}
}
}
// static void calcByRandomOrder(List<ArrayList<Table>> tablesAnyOrder, List<ArrayList<Team>> teamAnyOrder,
// int x, int y, TreeSet<Allotment> results) {
// for (int i = 0 ; i < teamAnyOrder.size(); i++) {
// var teams = teamAnyOrder.get(i) ;
// calcByOrder(tablesAnyOrder, teams, x, y, results) ;
// }
// }
static <T> void genAllOrderSeq(List<T> array, int k, List<ArrayList<T>> results) { // generate all order sequence
for (int i = k; i < array.size(); i++) {
Collections.swap(array, i, k) ;
genAllOrderSeq(array, k+1, results) ;
Collections.swap(array, k, i) ;
}
if (k == array.size() -1) results.add(new ArrayList<T>(array)) ;
}
public static void main(String[] args) {
new TableAllotment(new Integer[]{4,5,1,1,1}, new Integer[]{7, 3}, 5, 3) ;
}
}
Result :
Team arrival order by input :
6, [g(1:7),[t(2:5), t(3:1), t(4:1)], g(2:3),[t(1:4)]]
6, [g(1:7),[t(2:5), t(3:1), t(5:1)], g(2:3),[t(1:4)]]
6, [g(1:7),[t(2:5), t(4:1), t(5:1)], g(2:3),[t(1:4)]]
9, [g(1:7),[t(1:4), t(2:5)], g(2:3),[t(3:1), t(4:1), t(5:1)]]
9, [g(1:7),[t(1:4), t(3:1), t(4:1), t(5:1)], g(2:3),[t(2:5)]]
9, [g(1:7),[t(2:5), t(3:1), t(4:1), t(5:1)], g(2:3),[t(1:4)]]
9, [g(1:7),[t(2:5), t(3:1), t(4:1)], g(2:3),[t(1:4), t(5:1)]]
9, [g(1:7),[t(2:5), t(3:1), t(5:1)], g(2:3),[t(1:4), t(4:1)]]
9, [g(1:7),[t(2:5), t(4:1), t(5:1)], g(2:3),[t(1:4), t(3:1)]]
21, [g(1:7),[t(1:4), t(2:5), t(3:1)]]
21, [g(1:7),[t(1:4), t(2:5), t(4:1)]]
21, [g(1:7),[t(1:4), t(2:5), t(5:1)]]
24, [g(1:7),[t(1:4), t(2:5), t(3:1), t(4:1)]]
24, [g(1:7),[t(1:4), t(2:5), t(3:1), t(5:1)]]
24, [g(1:7),[t(1:4), t(2:5), t(4:1), t(5:1)]]
The results are sorted by cost, ascending.
The first group can sit at the 2nd, 3rd, and 4th tables, or at the 2nd, 3rd, and 5th, or at the 2nd, 4th, and 5th; the second group sits at the first table. The cost is 6 in each case.
playground : https://www.sololearn.com/compiler-playground/cGSZq2NIbRxm
I'm having trouble determining the most efficient way of doing this in Dart.
I have two lists that are in sorted descending order:
List<int> messages = [10, 5, 4, 1];
List<int> newMessages = [5, 3, 2];
How can I add newMessages to messages so that messages now looks like
messages = [10, 5, 5, 4, 3, 2, 1];
If both lists are long and use the default list implementation, it may be more efficient to create a new list based on the two existing lists. The reason is that inserting an element inside an existing list requires all elements after the insertion index to be moved forward. Also, when the list grows, it needs to allocate a bigger backing list and move all elements into it.
If we instead create a new list, we can tell Dart exactly what size this list is going to be, and we avoid moving elements:
void main() {
List<int> messages = [10, 5, 4, 1];
List<int> newMessages = [5, 3, 2];
// The compare argument is given since both lists are sorted in reverse order
print(newSortedListBasedOnTwoAlreadySortedLists<int>(
messages, newMessages, (a, b) => b.compareTo(a)));
// [10, 5, 5, 4, 3, 2, 1]
}
List<E> newSortedListBasedOnTwoAlreadySortedLists<E>(
List<E> l1,
List<E> l2, [
int Function(E a, E b)? compare,
]) {
Iterator<E> i1 = l1.iterator;
Iterator<E> i2 = l2.iterator;
if (!i1.moveNext()) {
if (!i2.moveNext()) {
return [];
} else {
return l2.toList();
}
}
if (!i2.moveNext()) {
return l1.toList();
}
bool i1alive = true;
bool i2alive = true;
return List.generate(l1.length + l2.length, (_) {
if (i1alive && i2alive) {
E v1 = i1.current;
E v2 = i2.current;
int compareResult = (compare == null)
? Comparable.compare(v1 as Comparable, v2 as Comparable)
: compare(v1, v2);
if (compareResult > 0) {
i2alive = i2.moveNext();
return v2;
} else {
i1alive = i1.moveNext();
return v1;
}
} else if (i1alive) {
E v1 = i1.current;
i1alive = i1.moveNext();
return v1;
} else {
E v2 = i2.current;
i2alive = i2.moveNext();
return v2;
}
});
}
Note: The method could in theory take two Iterable as argument as long as we are sure that a call to .length does not have any negative consequences like e.g. need to iterate over the full structure (with e.g. mappings). To prevent this issue, I ended up declaring the method to take List as arguments since we know for sure that .length is not problematic here.
This sounds like you need to merge the two lists.
As stated elsewhere, it's more efficient to create a new list than to move elements around inside the existing lists.
The merge can be written fairly simply:
/// Merges two sorted lists.
///
/// The lists must be ordered in increasing order according to [compare].
///
/// Returns a new list containing the elements of both [first] and [second]
/// in increasing order according to [compare].
List<T> merge<T>(List<T> first, List<T> second, int Function(T, T) compare) {
var result = <T>[];
var i = 0;
var j = 0;
while (i < first.length && j < second.length) {
var a = first[i];
var b = second[j];
if (compare(a, b) <= 0) {
result.add(a);
i++;
} else {
result.add(b);
j++;
}
}
while (i < first.length) {
result.add(first[i++]);
}
while (j < second.length) {
result.add(second[j++]);
}
return result;
}
(In this case, the lists are descending, so they'll need a compare function which reverses the order, like (a, b) => b.compareTo(a))
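For what it's worth, the same single-pass merge with a reversed order is available out of the box in some standard libraries; for example, Python's `heapq.merge` (the code above is the equivalent hand-rolled version):

```python
import heapq

messages = [10, 5, 4, 1]
new_messages = [5, 3, 2]

# heapq.merge lazily merges already-sorted inputs; reverse=True matches
# the descending order of both lists.
merged = list(heapq.merge(messages, new_messages, reverse=True))
print(merged)  # [10, 5, 5, 4, 3, 2, 1]
```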
You can use binary search to insert all new messages one by one in sorted order. Each search is O(log n), though each insert still shifts the tail of the list, so this works best when newMessages is short.
void main() {
List<int> messages = [10, 5, 4, 1];
List<int> newMessages = [5, 3, 2];
for (final newMessage in newMessages) {
final index = binarySearchIndex(messages, newMessage);
messages.insert(index, newMessage);
}
print(messages); // [10, 5, 5, 4, 3, 2, 1]
}
int binarySearchIndex(
List<int> numList,
int value, [
int? preferredMinIndex,
int? preferredMaxIndex,
]) {
final minIndex = preferredMinIndex ?? 0;
final maxIndex = preferredMaxIndex ?? numList.length - 1;
final middleIndex = ((maxIndex - minIndex) / 2).floor() + minIndex;
final comparator = numList[middleIndex];
if (middleIndex == minIndex) {
return comparator > value ? maxIndex : minIndex;
}
return comparator > value ?
binarySearchIndex(numList, value, middleIndex, maxIndex):
binarySearchIndex(numList, value, minIndex, middleIndex);
}
Given a store of 3-tuples where:
All elements are numeric, e.g. (1, 3, 4), (1300, 3, 15), (1300, 3, 15), …
Tuples are removed and added frequently
At any time the store is typically under 100,000 elements
All Tuples are available in memory
The application is interactive requiring 100s of searches per second.
What are the most efficient algorithms/data structures to perform wild card (*) searches such as:
(1, *, 6) (3601, *, *) (*, 1935, *)
The aim is to have a Linda-like tuple space, but at the application level.
Well, there are only 8 possible arrangements of wildcards, so you can easily construct 6 multi-maps and a set to serve as indices: one for each arrangement of wildcards in the query. You don't need an 8th index because the query (*,*,*) trivially returns all tuples. The set is for tuples with no wildcards; only a membership test is needed in this case.
A multimap takes a key to a set. In your example, e.g., the query (1,*,6) would consult the multimap for queries of the form (X,*,Y), which takes key <X,Y> to the set of all tuples with X in the first position and Y in third. In this case, X=1 and Y=6.
With any reasonable hash-based multimap implementation, lookups ought to be very fast. Several hundred per second ought to be easy, and several thousand per second doable (with, e.g., a contemporary x86 CPU).
Insertions and deletions require updating the maps and set. Again this ought to be reasonably fast, though not as fast as lookups of course. Again several hundred per second ought to be doable.
With only ~10^5 tuples, this approach ought to be fine for memory as well. You can save a bit of space with tricks, e.g. keeping a single copy of each tuple in an array and storing indices in the map/set to represent both key and value. Manage array slots with a free list.
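A sketch of that array-plus-free-list trick (hypothetical names; the maps and set would then store these integer slot indices instead of tuple objects):

```python
class TupleStore:
    """Single copy of each tuple in a flat array; indices stand in for
    the tuples elsewhere. A free list recycles slots vacated by deletes."""

    def __init__(self):
        self.slots = []  # tuple storage
        self.free = []   # indices of vacated slots

    def alloc(self, t):
        if self.free:
            i = self.free.pop()  # reuse a vacated slot
            self.slots[i] = t
        else:
            i = len(self.slots)  # grow the array
            self.slots.append(t)
        return i

    def release(self, i):
        self.slots[i] = None
        self.free.append(i)
```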
To make this concrete, here is pseudocode. I'm going to use angle brackets <a,b,c> for tuples to avoid too many parens:
# Definitions
For a query Q <k2,k1,k0> where each of k_i is either * or an integer,
Let I(Q) be a 3-digit binary number b2|b1|b0 where
b_i=0 if k_i is * and 1 if k_i is an integer.
Let N(i) be the number of 1's in the binary representation of i
Let M(i) be a multimap taking a tuple with N(i) elements to a set
of tuples with 3 elements.
Let t be a 3 element tuple. Then T(t,i) returns a new tuple with
only the elements of t in positions where i has a 1. For example
T(<1,2,3>,0) = <> and T(<1,2,3>,6) = <2,3>
Note that function T works fine on query tuples with wildcards.
# Algorithm to insert tuple T into the database:
fun insert(t)
for i = 0 to 7
add the entry T(t,i)->t to M(i)
# Algorithm to delete tuple T from the database:
fun delete(t)
for i = 0 to 7
delete the entry T(t,i)->t from M(i)
# Query algorithm
fun query(Q)
let i = I(Q)
return M(i).lookup(T(Q, i)) # lookup failure returns empty set
Note that for simplicity, I've not shown the "optimizations" for M(0) and M(7). For M(0), the algorithm above would create a multimap taking the empty tuple to the set of all 3-tuples in the database. You can avoid this merely by treating i=0 as a special case. Similarly M(7) would take each tuple to a set containing only itself.
An "optimized" version:
fun insert(t)
for i = 1 to 6
add the entry T(t,i)->t to M(i)
add t to set S
fun delete(t)
for i = 1 to 6
delete the entry T(t,i)->t from M(i)
remove t from set S
fun query(Q)
let i = I(Q)
if i = 0, return S
elsif i = 7 return if Q\in S { Q } else {}
else return M(i).lookup(T(Q, i))
Addition
For fun, a Java implementation:
package hacking;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Random;
import java.util.Scanner;
import java.util.Set;
public class Hacking {
public static void main(String [] args) {
TupleDatabase db = new TupleDatabase();
int n = 200000;
long start = System.nanoTime();
for (int i = 0; i < n; ++i) {
db.insert(db.randomTriple());
}
long stop = System.nanoTime();
double elapsedSec = (stop - start) * 1e-9;
System.out.println("Inserted " + n + " tuples in " + elapsedSec
+ " seconds (" + (elapsedSec / n * 1000.0) + "ms per insert).");
Scanner in = new Scanner(System.in);
for (;;) {
System.out.print("Query: ");
int a = in.nextInt();
int b = in.nextInt();
int c = in.nextInt();
System.out.println(db.query(new Tuple(a, b, c)));
}
}
}
class Tuple {
static final int [] N_ONES = new int[] { 0, 1, 1, 2, 1, 2, 2, 3 };
static final int STAR = -1;
final int [] vals;
Tuple(int a, int b, int c) {
vals = new int[] { a, b, c };
}
Tuple(Tuple t, int code) {
vals = new int[N_ONES[code]];
int m = 0;
for (int k = 0; k < 3; ++k) {
if (((1 << k) & code) > 0) {
vals[m++] = t.vals[k];
}
}
}
@Override
public boolean equals(Object other) {
if (other instanceof Tuple) {
Tuple triple = (Tuple) other;
return Arrays.equals(this.vals, triple.vals);
}
return false;
}
@Override
public int hashCode() {
return Arrays.hashCode(this.vals);
}
@Override
public String toString() {
return Arrays.toString(vals);
}
int code() {
int c = 0;
for (int k = 0; k < 3; k++) {
if (vals[k] != STAR) {
c |= (1 << k);
}
}
return c;
}
Set<Tuple> setOf() {
Set<Tuple> s = new HashSet<>();
s.add(this);
return s;
}
}
class Multimap extends HashMap<Tuple, Set<Tuple>> {
@Override
public Set<Tuple> get(Object key) {
Set<Tuple> r = super.get(key);
return r == null ? Collections.<Tuple>emptySet() : r;
}
void put(Tuple key, Tuple value) {
if (containsKey(key)) {
super.get(key).add(value);
} else {
super.put(key, value.setOf());
}
}
void remove(Tuple key, Tuple value) {
Set<Tuple> set = super.get(key);
set.remove(value);
if (set.isEmpty()) {
super.remove(key);
}
}
}
class TupleDatabase {
final Set<Tuple> set;
final Multimap [] maps;
TupleDatabase() {
set = new HashSet<>();
maps = new Multimap[7];
for (int i = 1; i < 7; i++) {
maps[i] = new Multimap();
}
}
void insert(Tuple t) {
set.add(t);
for (int i = 1; i < 7; i++) {
maps[i].put(new Tuple(t, i), t);
}
}
void delete(Tuple t) {
set.remove(t);
for (int i = 1; i < 7; i++) {
maps[i].remove(new Tuple(t, i), t);
}
}
Set<Tuple> query(Tuple q) {
int c = q.code();
switch (c) {
case 0: return set;
case 7: return set.contains(q) ? q.setOf() : Collections.<Tuple>emptySet();
default: return maps[c].get(new Tuple(q, c));
}
}
Random gen = new Random();
int randPositive() {
return gen.nextInt(1000);
}
Tuple randomTriple() {
return new Tuple(randPositive(), randPositive(), randPositive());
}
}
Some output:
Inserted 200000 tuples in 2.981607358 seconds (0.014908036790000002ms per insert).
Query: -1 -1 -1
[[504, 296, 987], [500, 446, 184], [499, 482, 16], [488, 823, 40], ...
Query: 500 446 -1
[[500, 446, 184], [500, 446, 762]]
Query: -1 -1 500
[[297, 56, 500], [848, 185, 500], [556, 351, 500], [779, 986, 500], [935, 279, 500], ...
If you think of the tuples like an IP address, then a radix tree (trie) type structure might work; radix trees are used for IP route lookup.
Another way may be to compute a bit hash for each tuple and use bit operations (OR, AND) on the hashes for quick discovery.
You are given as input an unsorted array of n distinct numbers, where n is a power of 2. Give an algorithm that identifies the second-largest number in the array, and that uses at most n+log₂(n)−2 comparisons.
Start with comparing elements of the n element array in odd and even positions and determining largest element of each pair. This step requires n/2 comparisons. Now you've got only n/2 elements. Continue pairwise comparisons to get n/4, n/8, ... elements. Stop when the largest element is found. This step requires a total of n/2 + n/4 + n/8 + ... + 1 = n-1 comparisons.
During previous step, the largest element was immediately compared with log₂(n) other elements. You can determine the largest of these elements in log₂(n)-1 comparisons. That would be the second-largest number in the array.
Example: array of 8 numbers [10,9,5,4,11,100,120,110].
Comparisons on level 1: [10,9] ->10 [5,4]-> 5, [11,100]->100 , [120,110]-->120.
Comparisons on level 2: [10,5] ->10 [100,120]->120.
Comparisons on level 3: [10,120]->120.
Maximum is 120. It was immediately compared with: 10 (on level 3), 100 (on level 2), 110 (on level 1).
Step 2 should find the maximum of 10, 100, and 110. Which is 110. That's the second largest element.
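A compact sketch of the two steps above (assumes distinct values and n a power of two, as in the problem statement):

```python
def second_largest(a):
    # Pass 1: a pairwise tournament finds the max in n-1 comparisons,
    # recording, for each element, which elements it beat directly.
    beaten = {v: [] for v in a}
    round_ = list(a)
    while len(round_) > 1:
        nxt = []
        for i in range(0, len(round_), 2):
            if round_[i] > round_[i + 1]:
                w, l = round_[i], round_[i + 1]
            else:
                w, l = round_[i + 1], round_[i]
            beaten[w].append(l)
            nxt.append(w)
        round_ = nxt
    # Pass 2: the runner-up must have lost directly to the champion,
    # so scan the champion's log2(n) victims.
    return max(beaten[round_[0]])

print(second_largest([10, 9, 5, 4, 11, 100, 120, 110]))  # 110
```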
sly s's answer is derived from this paper, but he didn't explain the algorithm, which means someone stumbling across this question has to read the whole paper, and his code isn't very sleek either. I'll give the crux of the algorithm from the aforementioned paper, complete with complexity analysis, and also provide a Python implementation.
Basically, we do two passes:
Find the max, and keep track of which elements the max was compared to.
Find the max among the elements the max was compared to; the result is the second largest element.
As an example, suppose 12 is the largest number in the array and was compared to 3, 1, 11, and 10 in the first pass. In the second pass, we find the largest among {3, 1, 11, 10}, which is 11, the second largest number in the original array.
Time Complexity:
All elements must be looked at, therefore, n - 1 comparisons for pass 1.
Since we divide the problem into two halves each time, there are at most log₂n recursive calls, for each of which, the comparisons sequence grows by at most one; the size of the comparisons sequence is thus at most log₂n, therefore, log₂n - 1 comparisons for pass 2.
Total number of comparisons <= (n - 1) + (log₂n - 1) = n + log₂n - 2
from typing import MutableSequence, Sequence, Tuple

def second_largest(nums: Sequence[int]) -> int:
def _max(lo: int, hi: int, seq: Sequence[int]) -> Tuple[int, MutableSequence[int]]:
if lo >= hi:
return seq[lo], []
mid = lo + (hi - lo) // 2
x, a = _max(lo, mid, seq)
y, b = _max(mid + 1, hi, seq)
if x > y:
a.append(y)
return x, a
b.append(x)
return y, b
comparisons = _max(0, len(nums) - 1, nums)[1]
return _max(0, len(comparisons) - 1, comparisons)[0]
The first run for the given example is as follows:
lo=0, hi=1, mid=0, x=10, a=[], y=4, b=[]
lo=0, hi=2, mid=1, x=10, a=[4], y=5, b=[]
lo=3, hi=4, mid=3, x=8, a=[], y=7, b=[]
lo=3, hi=5, mid=4, x=8, a=[7], y=2, b=[]
lo=0, hi=5, mid=2, x=10, a=[4, 5], y=8, b=[7, 2]
lo=6, hi=7, mid=6, x=12, a=[], y=3, b=[]
lo=6, hi=8, mid=7, x=12, a=[3], y=1, b=[]
lo=9, hi=10, mid=9, x=6, a=[], y=9, b=[]
lo=9, hi=11, mid=10, x=9, a=[6], y=11, b=[]
lo=6, hi=11, mid=8, x=12, a=[3, 1], y=11, b=[9]
lo=0, hi=11, mid=5, x=10, a=[4, 5, 8], y=12, b=[3, 1, 11]
Things to note:
There are exactly n - 1=11 comparisons for n=12.
From the last line, y=12 wins over x=10, and the next pass starts with the sequence [3, 1, 11, 10], which has ⌈log₂(12)⌉ = 4 elements and will require 3 comparisons to find the maximum.
I have implemented in Java the algorithm answered by @Evgeny Kluev. The total comparisons are n+log₂(n)−2. There is also a good reference:
Alexander Dekhtyar: CSC 349: Design and Analysis of Algorithms. This is similar to the top-voted algorithm.
import java.util.Arrays;

public class op1 {
private static int findSecondRecursive(int n, int[] A){
int[] firstCompared = findMaxTournament(0, n-1, A); //n-1 comparisons;
int[] secondCompared = findMaxTournament(2, firstCompared[0]-1, firstCompared); //log2(n)-1 comparisons.
//Total comparisons: n+log2(n)-2;
return secondCompared[1];
}
private static int[] findMaxTournament(int low, int high, int[] A){
if(low == high){
int[] compared = new int[2];
compared[0] = 2;
compared[1] = A[low];
return compared;
}
int[] compared1 = findMaxTournament(low, (low+high)/2, A);
int[] compared2 = findMaxTournament((low+high)/2+1, high, A);
if(compared1[1] > compared2[1]){
int k = compared1[0] + 1;
int[] newcompared1 = new int[k];
System.arraycopy(compared1, 0, newcompared1, 0, compared1[0]);
newcompared1[0] = k;
newcompared1[k-1] = compared2[1];
return newcompared1;
}
int k = compared2[0] + 1;
int[] newcompared2 = new int[k];
System.arraycopy(compared2, 0, newcompared2, 0, compared2[0]);
newcompared2[0] = k;
newcompared2[k-1] = compared1[1];
return newcompared2;
}
private static void printarray(int[] a){
for(int i:a){
System.out.print(i + " ");
}
System.out.println();
}
public static void main(String[] args) {
//Demo.
System.out.println("Original array: ");
int[] A = {10,4,5,8,7,2,12,3,1,6,9,11};
printarray(A);
int secondMax = findSecondRecursive(A.length,A);
Arrays.sort(A);
System.out.println("Sorted array(for check use): ");
printarray(A);
System.out.println("Second largest number in A: " + secondMax);
}
}
The problem is:
In comparison level 1, the algorithm needs to remember all the array elements because the largest is not yet known; then the same in the second level, and finally the third. Keeping track of these elements via assignments adds extra work, and once the largest is known you also have to walk its comparison history back. As a result, it will not be significantly faster than the simple 2n-2 comparison algorithm. Moreover, because the code is more complicated, you also need to factor in the potential debugging time.
For example, in PHP, the running time for a comparison vs. a value assignment is roughly: comparison 11-19 vs. value assignment 16.
I shall give an example for better understanding:
example 1 :
>12 56 98 12 76 34 97 23
>>(12 56) (98 12) (76 34) (97 23)
>>> 56 98 76 97
>>>> (56 98) (76 97)
>>>>> 98 97
>>>>>> 98
The largest element is 98
Now compare among the elements that lost directly to the largest element 98: 97 will be the second largest.
A divide-and-conquer implementation:
public class Test {
public static void main(String...args){
int arr[] = new int[]{1,2,2,3,3,4,9,5, 100 , 101, 1, 2, 1000, 102, 2,2,2};
System.out.println(getMax(arr, 0, 16));
}
public static Holder getMax(int[] arr, int start, int end){
if (start == end)
return new Holder(arr[start], Integer.MIN_VALUE);
else {
int mid = ( start + end ) / 2;
Holder l = getMax(arr, start, mid);
Holder r = getMax(arr, mid + 1, end);
if (l.compareTo(r) > 0 )
return new Holder(l.high(), r.high() > l.low() ? r.high() : l.low());
else
return new Holder(r.high(), l.high() > r.low() ? l.high(): r.low());
}
}
static class Holder implements Comparable<Holder> {
private int low, high;
public Holder(int r, int l){low = l; high = r;}
public String toString(){
return String.format("Max: %d, SecMax: %d", high, low);
}
public int compareTo(Holder data){
if (high == data.high)
return 0;
if (high > data.high)
return 1;
else
return -1;
}
public int high(){
return high;
}
public int low(){
return low;
}
}
}
Why not use this simple linear scan for a given array[n]? It runs in c*n, where c is the constant time per check, and does about 2n comparisons:

int first = 0;
int second = 0;
for(int i = 0; i < n; i++) {
if(array[i] > first) {
second = first;
first = array[i];
} else if(array[i] > second) { // needed for values between second and first
second = array[i];
}
}

Or am I just not understanding the question... (Note: this assumes non-negative values and exceeds the n+log₂(n)-2 comparison bound asked for.)
In Python 2.7: The following code works, at O(n log n) due to the per-level sorts. Any optimizations?
def secondLargest(testList):
secondList = []
# Iterate through the list
while(len(testList) > 1):
left = testList[0::2]
right = testList[1::2]
if (len(testList) % 2 == 1):
right.append(0)
myzip = zip(left,right)
mymax = [ max(list(val)) for val in myzip ]
myzip.sort()
secondMax = [x for x in myzip[-1] if x != max(mymax)][0]
if (secondMax != 0 ):
secondList.append(secondMax)
testList = mymax
return max(secondList)
public static int FindSecondLargest(int[] input)
{
Dictionary<int, List<int>> dictWinnerLoser = new Dictionary<int, List<int>>();//Keeps track of losers with winners
List<int> lstWinners = null;
List<int> lstLosers = null;
int winner = 0;
int loser = 0;
while (input.Count() > 1)//Runs till we get max in the array
{
lstWinners = new List<int>();//Keeps track of winners of each run, as we have to run with winners of each run till we get one winner
for (int i = 0; i < input.Count() - 1; i += 2)
{
if (input[i] > input[i + 1])
{
winner = input[i];
loser = input[i + 1];
}
else
{
winner = input[i + 1];
loser = input[i];
}
lstWinners.Add(winner);
if (!dictWinnerLoser.ContainsKey(winner))
{
lstLosers = new List<int>();
lstLosers.Add(loser);
dictWinnerLoser.Add(winner, lstLosers);
}
else
{
lstLosers = dictWinnerLoser[winner];
lstLosers.Add(loser);
dictWinnerLoser[winner] = lstLosers;
}
}
input = lstWinners.ToArray();//run the loop again with winners
}
List<int> losersOfWinner = dictWinnerLoser[input[0]];//Gives all the elements who lost to the max element; input now has only one element, which is the max of the array
winner = 0;
for (int i = 0; i < losersOfWinner.Count(); i++)//Now the max of the winner's losers gives the second largest
{
if (winner < losersOfWinner[i])
{
winner = losersOfWinner[i];
}
}
return winner;
}
Say I have the following array of integers:
int[] numbers = { 1, 6, 4, 10, 9, 12, 15, 17, 8, 3, 20, 21, 2, 23, 25, 27, 5, 67,33, 13, 8, 12, 41, 5 };
How could I write a Linq query that finds 3 consecutive elements that are, say, greater than 10? Also, it would be nice if I could specify I want say the first, second, third etc. group of such elements.
For example, the Linq query should be able to identify:
12,15,17 as the first group of consecutive elements
23,25,27 as the second group
67,33,13 as the third group
The query should return to me the 2nd group if I specify I want the 2nd group of 3 consecutive elements.
Thanks.
UPDATE: While not technically a "linq query" as Patrick points out in the comments, this solution is reusable, flexible, and generic.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace ConsoleApplication32
{
class Program
{
static void Main(string[] args)
{
int[] numbers = { 1, 6, 4, 10, 9, 12, 15, 17, 8, 3, 20, 21, 2, 23, 25, 27, 5, 67,33, 13, 8, 12, 41, 5 };
var consecutiveGroups = numbers.FindConsecutiveGroups((x) => x > 10, 3);
foreach (var group in consecutiveGroups)
{
Console.WriteLine(String.Join(",", group));
}
}
}
public static class Extensions
{
public static IEnumerable<IEnumerable<T>> FindConsecutiveGroups<T>(this IEnumerable<T> sequence, Predicate<T> predicate, int count)
{
IEnumerable<T> current = sequence;
while (current.Count() >= count)
{
IEnumerable<T> window = current.Take(count);
if (window.Where(x => predicate(x)).Count() >= count)
yield return window;
current = current.Skip(1);
}
}
}
}
Output:
12,15,17
23,25,27
67,33,13
To get the 2nd group, change:
var consecutiveGroups = numbers.FindConsecutiveGroups((x) => x > 10, 3);
To:
var consecutiveGroups = numbers.FindConsecutiveGroups((x) => x > 10, 3).Skip(1).Take(1);
UPDATE 2: After tweaking this in our production use, the following implementation is far faster as the number of items in the numbers array grows larger.
public static IEnumerable<IEnumerable<T>> FindConsecutiveGroups<T>(this IEnumerable<T> sequence, Predicate<T> predicate, int sequenceSize)
{
IEnumerable<T> window = Enumerable.Empty<T>();
int count = 0;
foreach (var item in sequence)
{
if (predicate(item))
{
window = window.Concat(Enumerable.Repeat(item, 1));
count++;
if (count == sequenceSize)
{
yield return window;
window = window.Skip(1);
count--;
}
}
else
{
count = 0;
window = Enumerable.Empty<T>();
}
}
}
int[] numbers = { 1, 6, 4, 10, 9, 12, 15, 17, 8, 3, 20, 21, 2, 23, 25, 27, 5, 67, 33, 13, 8, 12, 41, 5 };
var numbersQuery = numbers.Select((x, index) => new { Index = index, Value = x});
var query = from n in numbersQuery
from n2 in numbersQuery.Where(x => n.Index == x.Index - 1).DefaultIfEmpty()
from n3 in numbersQuery.Where(x => n.Index == x.Index - 2).DefaultIfEmpty()
where n.Value > 10
where n2 != null && n2.Value > 10
where n3 != null && n3.Value > 10
select new
{
Value1 = n.Value,
Value2 = n2.Value,
Value3 = n3.Value
};
In order to specify which group, you can call the Skip method
query.Skip(1)
Why don't you try this extension method?
public static IEnumerable<IEnumerable<T>> Consecutives<T>(this IEnumerable<T> numbers, int ranges, Func<T, bool> predicate)
{
IEnumerable<T> ordered = numbers.OrderBy(a => a).Where(predicate);
decimal n = Decimal.Divide(ordered.Count(), ranges);
decimal max = Math.Ceiling(n); // or Math.Floor(n) if you want
return from i in Enumerable.Range(0, (int)max)
select ordered.Skip(i * ranges).Take(ranges);
}
The only thing to improve could be the call to the Count method, because it causes the enumeration of numbers (so the query loses its laziness).
Anyway I'm sure this could fit your linqness requirements.
EDIT: Alternatively this is the less words version (it doesn't make use of Count method):
public static IEnumerable<IEnumerable<T>> Consecutives<T>(this IEnumerable<T> numbers, int ranges, Func<T, bool> predicate)
{
var ordered = numbers.OrderBy(a => a);
return ordered.Where(predicate)
.Select((element, i) => ordered.Skip(i * ranges).Take(ranges))
.TakeWhile(Enumerable.Any);
}
I had to do this for a list of doubles, with an upper as well as a lower limit. This is also not a true Linq solution, just a pragmatic approach; I wrote it in a scripting language that only implements a subset of C#.
var sequence =
[0.25,0.5,0.5,0.5,0.7,0.8,0.7,0.9,0.5,0.5,0.8,0.8,0.5,0.5,0.65,0.65,0.65,0.65,0.65,0.65,0.65];
double lowerLimit = 0.1;
double upperLimit = 0.6;
int minWindowLength = 3;
// return type is a list of lists
var windows = [[0.0]];
windows.Clear();
int consec = 0;
int index = 0;
while (index < sequence.Count){
// store segments here
var window = new System.Collections.Generic.List<double>();
while ((index < sequence.Count) && (sequence[index] > upperLimit || sequence[index] < lowerLimit)) {
window.Add(sequence[index]);
consec = consec + 1;
index = index +1;
}
if (consec >= minWindowLength) {
windows.Add(window);
}
window = new System.Collections.Generic.List<double>();
consec = 0;
index = index+1;
}
return windows;