I'm having trouble determining the most efficient way of doing this in Dart.
I have two lists that are in sorted descending order,
List<int> messages = [10, 5, 4, 1];
List<int> newMessages = [5, 3, 2];
How can I add newMessages to messages so that messages now looks like
messages = [10, 5, 5, 4, 3, 2, 1];
If both lists are long and use the default list implementation, it might be more efficient to create a new list based on the two existing lists. The reason is that inserting an element into an existing list requires all elements after the insertion index to be moved forward. Also, when the list grows, it needs to allocate a bigger backing store and move all elements into it.
If we instead create a new list, we can tell Dart exactly how big the list is going to be, which lets us avoid moving elements around:
void main() {
  List<int> messages = [10, 5, 4, 1];
  List<int> newMessages = [5, 3, 2];

  // The compare argument is given since both lists are sorted in reverse order
  print(newSortedListBasedOnTwoAlreadySortedLists<int>(
      messages, newMessages, (a, b) => b.compareTo(a)));
  // [10, 5, 5, 4, 3, 2, 1]
}

List<E> newSortedListBasedOnTwoAlreadySortedLists<E>(
  List<E> l1,
  List<E> l2, [
  int Function(E a, E b)? compare,
]) {
  Iterator<E> i1 = l1.iterator;
  Iterator<E> i2 = l2.iterator;

  if (!i1.moveNext()) {
    if (!i2.moveNext()) {
      return [];
    } else {
      return l2.toList();
    }
  }
  if (!i2.moveNext()) {
    return l1.toList();
  }

  bool i1alive = true;
  bool i2alive = true;

  return List.generate(l1.length + l2.length, (_) {
    if (i1alive && i2alive) {
      E v1 = i1.current;
      E v2 = i2.current;
      int compareResult = (compare == null)
          ? Comparable.compare(v1 as Comparable, v2 as Comparable)
          : compare(v1, v2);

      if (compareResult > 0) {
        i2alive = i2.moveNext();
        return v2;
      } else {
        i1alive = i1.moveNext();
        return v1;
      }
    } else if (i1alive) {
      E v1 = i1.current;
      i1alive = i1.moveNext();
      return v1;
    } else {
      E v2 = i2.current;
      i2alive = i2.moveNext();
      return v2;
    }
  });
}
Note: The method could in theory take two Iterables as arguments, as long as we are sure that calling .length has no negative consequences, such as forcing a full iteration of the structure (as with lazy mappings). To avoid this issue, I ended up declaring the method to take Lists as arguments, since we know for sure that .length is not problematic here.
This sounds like you need to merge the two lists.
As stated elsewhere, it's more efficient to create a new list than to move elements around inside the existing lists.
The merge can be written fairly simply:
/// Merges two sorted lists.
///
/// The lists must be ordered in increasing order according to [compare].
///
/// Returns a new list containing the elements of both [first] and [second]
/// in increasing order according to [compare].
List<T> merge<T>(List<T> first, List<T> second, int Function(T, T) compare) {
  var result = <T>[];
  var i = 0;
  var j = 0;
  while (i < first.length && j < second.length) {
    var a = first[i];
    var b = second[j];
    if (compare(a, b) <= 0) {
      result.add(a);
      i++;
    } else {
      result.add(b);
      j++;
    }
  }
  while (i < first.length) {
    result.add(first[i++]);
  }
  while (j < second.length) {
    result.add(second[j++]);
  }
  return result;
}
(In this case, the lists are descending, so they'll need a compare function which reverses the order, like (a, b) => b.compareTo(a))
You can use binary search to insert all new messages one by one in a sorted manner while maintaining efficiency.
void main() {
  List<int> messages = [10, 5, 4, 1];
  List<int> newMessages = [5, 3, 2];

  for (final newMessage in newMessages) {
    final index = binarySearchIndex(messages, newMessage);
    messages.insert(index, newMessage);
  }

  print(messages); // [10, 5, 5, 4, 3, 2, 1]
}

int binarySearchIndex(
  List<int> numList,
  int value, [
  int? preferredMinIndex,
  int? preferredMaxIndex,
]) {
  final minIndex = preferredMinIndex ?? 0;
  // Default to numList.length (one past the end) so that a value smaller
  // than every element is inserted at the end of the list.
  final maxIndex = preferredMaxIndex ?? numList.length;
  final middleIndex = (maxIndex - minIndex) ~/ 2 + minIndex;
  final comparator = numList[middleIndex];
  if (middleIndex == minIndex) {
    return comparator > value ? maxIndex : minIndex;
  }
  return comparator > value
      ? binarySearchIndex(numList, value, middleIndex, maxIndex)
      : binarySearchIndex(numList, value, minIndex, middleIndex);
}
Given a store of 3-tuples where:
All elements are numeric, e.g. (1, 3, 4), (1300, 3, 15), (1300, 3, 15), …
Tuples are removed and added frequently
At any time the store typically holds under 100,000 elements
All tuples are available in memory
The application is interactive, requiring hundreds of searches per second.
What are the most efficient algorithms/data structures to perform wildcard (*) searches such as:
(1, *, 6), (3601, *, *), (*, 1935, *)
The aim is to have a Linda-like tuple space, but at the application level.
Well, there are only 8 possible arrangements of wildcards, so you can easily construct 6 multi-maps and a set to serve as indices: one for each arrangement of wildcards in the query. You don't need an 8th index because the query (*,*,*) trivially returns all tuples. The set is for tuples with no wildcards; only a membership test is needed in this case.
A multimap takes a key to a set. In your example, e.g., the query (1,*,6) would consult the multimap for queries of the form (X,*,Y), which takes key <X,Y> to the set of all tuples with X in the first position and Y in third. In this case, X=1 and Y=6.
With any reasonable hash-based multimap implementation, lookups ought to be very fast. Several hundred per second ought to be easy, and several thousand per second doable (with, e.g., a contemporary x86 CPU).
Insertions and deletions require updating the maps and set. Again this ought to be reasonably fast, though not as fast as lookups of course. Again several hundred per second ought to be doable.
With only ~10^5 tuples, this approach ought to be fine for memory as well. You can save a bit of space with tricks, e.g. keeping a single copy of each tuple in an array and storing indices in the map/set to represent both key and value. Manage array slots with a free list.
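For the single-array-plus-indices trick, here is a rough sketch (Python for illustration; the class and field names are hypothetical, not part of the original scheme). It stores each tuple once, lets the maps hold slot indices instead of tuple copies, and recycles freed slots through a free list:
class TupleStore:
    def __init__(self):
        self.slots = []            # slot -> 3-tuple, or None if the slot is free
        self.free = []             # indices of free slots, reused before growing the array
        # One of the seven indices, for queries of the form (X, *, Y);
        # the full scheme keeps one map per wildcard arrangement.
        self.first_third = {}      # (a, c) -> set of slot indices

    def insert(self, t):
        i = self.free.pop() if self.free else len(self.slots)
        if i == len(self.slots):
            self.slots.append(t)
        else:
            self.slots[i] = t
        self.first_third.setdefault((t[0], t[2]), set()).add(i)
        return i

    def delete(self, i):
        a, _, c = self.slots[i]
        self.first_third[(a, c)].discard(i)
        self.slots[i] = None
        self.free.append(i)        # slot can be reused by a later insert

    def query_first_third(self, a, c):
        # (a, *, c) lookup: translate slot indices back into tuples.
        return [self.slots[i] for i in self.first_third.get((a, c), ())]
The same pattern extends to the other multimaps and the exact-match set; only the keys differ.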
To make this concrete, here is pseudocode. I'm going to use angle brackets <a,b,c> for tuples to avoid too many parens:
# Definitions
For a query Q <k2,k1,k0> where each of k_i is either * or an integer,
Let I(Q) be a 3-digit binary number b2|b1|b0 where
b_i=0 if k_i is * and 1 if k_i is an integer.
Let N(i) be the number of 1's in the binary representation of i
Let M(i) be a multimap taking a tuple with N(i) elements to a set
of tuples with 3 elements.
Let t be a 3 element tuple. Then T(t,i) returns a new tuple with
only the elements of t in positions where i has a 1. For example
T(<1,2,3>,0) = <> and T(<1,2,3>,6) = <2,3>
Note that function T works fine on query tuples with wildcards.
# Algorithm to insert tuple t into the database:
fun insert(t)
    for i = 0 to 7
        add the entry T(t,i)->t to M(i)

# Algorithm to delete tuple t from the database:
fun delete(t)
    for i = 0 to 7
        delete the entry T(t,i)->t from M(i)

# Query algorithm
fun query(Q)
    let i = I(Q)
    return M(i).lookup(T(Q, i))  # lookup failure returns empty set
Note that for simplicity, I've not shown the "optimizations" for M(0) and M(7). For M(0), the algorithm above would create a multimap taking the empty tuple to the set of all 3-tuples in the database. You can avoid this merely by treating i=0 as a special case. Similarly M(7) would take each tuple to a set containing only itself.
An "optimized" version:
fun insert(t)
    for i = 1 to 6
        add the entry T(t,i)->t to M(i)
    add t to set S

fun delete(t)
    for i = 1 to 6
        delete the entry T(t,i)->t from M(i)
    remove t from set S

fun query(Q)
    let i = I(Q)
    if i = 0, return S
    elsif i = 7, return if Q ∈ S { Q } else {}
    else return M(i).lookup(T(Q, i))
Addition
For fun, a Java implementation:
package hacking;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Random;
import java.util.Scanner;
import java.util.Set;
public class Hacking {
    public static void main(String[] args) {
        TupleDatabase db = new TupleDatabase();
        int n = 200000;

        long start = System.nanoTime();
        for (int i = 0; i < n; ++i) {
            db.insert(db.randomTriple());
        }
        long stop = System.nanoTime();
        double elapsedSec = (stop - start) * 1e-9;
        System.out.println("Inserted " + n + " tuples in " + elapsedSec
            + " seconds (" + (elapsedSec / n * 1000.0) + "ms per insert).");

        Scanner in = new Scanner(System.in);
        for (;;) {
            System.out.print("Query: ");
            int a = in.nextInt();
            int b = in.nextInt();
            int c = in.nextInt();
            System.out.println(db.query(new Tuple(a, b, c)));
        }
    }
}
class Tuple {
    static final int[] N_ONES = new int[] { 0, 1, 1, 2, 1, 2, 2, 3 };
    static final int STAR = -1;
    final int[] vals;

    Tuple(int a, int b, int c) {
        vals = new int[] { a, b, c };
    }

    Tuple(Tuple t, int code) {
        vals = new int[N_ONES[code]];
        int m = 0;
        for (int k = 0; k < 3; ++k) {
            if (((1 << k) & code) > 0) {
                vals[m++] = t.vals[k];
            }
        }
    }

    @Override
    public boolean equals(Object other) {
        if (other instanceof Tuple) {
            Tuple triple = (Tuple) other;
            return Arrays.equals(this.vals, triple.vals);
        }
        return false;
    }

    @Override
    public int hashCode() {
        return Arrays.hashCode(this.vals);
    }

    @Override
    public String toString() {
        return Arrays.toString(vals);
    }

    int code() {
        int c = 0;
        for (int k = 0; k < 3; k++) {
            if (vals[k] != STAR) {
                c |= (1 << k);
            }
        }
        return c;
    }

    Set<Tuple> setOf() {
        Set<Tuple> s = new HashSet<>();
        s.add(this);
        return s;
    }
}
class Multimap extends HashMap<Tuple, Set<Tuple>> {
    @Override
    public Set<Tuple> get(Object key) {
        Set<Tuple> r = super.get(key);
        return r == null ? Collections.<Tuple>emptySet() : r;
    }

    void put(Tuple key, Tuple value) {
        if (containsKey(key)) {
            super.get(key).add(value);
        } else {
            super.put(key, value.setOf());
        }
    }

    void remove(Tuple key, Tuple value) {
        Set<Tuple> set = super.get(key);
        set.remove(value);
        if (set.isEmpty()) {
            super.remove(key);
        }
    }
}
class TupleDatabase {
    final Set<Tuple> set;
    final Multimap[] maps;

    TupleDatabase() {
        set = new HashSet<>();
        maps = new Multimap[7];
        for (int i = 1; i < 7; i++) {
            maps[i] = new Multimap();
        }
    }

    void insert(Tuple t) {
        set.add(t);
        for (int i = 1; i < 7; i++) {
            maps[i].put(new Tuple(t, i), t);
        }
    }

    void delete(Tuple t) {
        set.remove(t);
        for (int i = 1; i < 7; i++) {
            maps[i].remove(new Tuple(t, i), t);
        }
    }

    Set<Tuple> query(Tuple q) {
        int c = q.code();
        switch (c) {
            case 0: return set;
            case 7: return set.contains(q) ? q.setOf() : Collections.<Tuple>emptySet();
            default: return maps[c].get(new Tuple(q, c));
        }
    }

    Random gen = new Random();

    int randPositive() {
        return gen.nextInt(1000);
    }

    Tuple randomTriple() {
        return new Tuple(randPositive(), randPositive(), randPositive());
    }
}
Some output:
Inserted 200000 tuples in 2.981607358 seconds (0.014908036790000002ms per insert).
Query: -1 -1 -1
[[504, 296, 987], [500, 446, 184], [499, 482, 16], [488, 823, 40], ...
Query: 500 446 -1
[[500, 446, 184], [500, 446, 762]]
Query: -1 -1 500
[[297, 56, 500], [848, 185, 500], [556, 351, 500], [779, 986, 500], [935, 279, 500], ...
If you think of the tuples like an IP address, then a radix tree (trie) type structure might work; radix trees are used for IP routing lookups.
Another way might be to use bit operations to compute a bit hash for each tuple, and then use bitwise OR/AND tests in your search for quick discovery.
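One possible reading of the bit-hash suggestion (my interpretation, not code from the poster; Python for illustration): give each tuple a small bit signature built from its (position, value) pairs, build the same signature from the non-wildcard parts of a query, and use a bitwise AND as a cheap pre-filter before an exact check.
def signature(t, mask=(True, True, True), bits=64):
    # One bit per present component, derived from (position, value);
    # wildcard positions contribute no bits.
    sig = 0
    for pos, (v, present) in enumerate(zip(t, mask)):
        if present:
            sig |= 1 << (hash((pos, v)) % bits)
    return sig

tuples = [(1, 3, 4), (1300, 3, 15), (1, 1935, 6)]
sigs = [(t, signature(t)) for t in tuples]

def query(q):
    mask = tuple(v is not None for v in q)   # None stands for *
    qsig = signature(q, mask)
    # The signature test only pre-filters; survivors still get an exact check,
    # so hashed-bit collisions never produce wrong results.
    return [t for t, s in sigs
            if (s & qsig) == qsig
            and all(qv is None or qv == tv for qv, tv in zip(q, t))]

print(query((1, None, 6)))   # [(1, 1935, 6)]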
I'm trying to use LINQ to transform the following list. LINQ should multiply each element against the next as long as the product is less than 15. Additionally we should save the number of elements used to form the product.
int[] values = { 1, 3, 4, 2, 7, 14 }; //assume Largest value will never be >= 15
1x3x4 = 12
2x7 = 14
14 = 14
{ {12,3}, {14,2}, {14,1} }
My ultimate goal is to take the geometric average of a very large list of numbers. This is normally done by multiplying all elements in the list together (1x3x4x2x7x14) and then taking the nth root (in this case 1/6).
The obvious problem with the "normal" method is that the running product quickly exceeds the maximum representable number. You can work around this with the old divide-and-conquer method, and with a little help from the natural log function.
I don't think there is something like that built into the standard LINQ method library. But you can easily create your own extension method. I called it AggregateUntil:
public static class EnumerableExtensions
{
    public static IEnumerable<TResult> AggregateUntil<TSource, TAccumulate, TResult>(
        this IEnumerable<TSource> source,
        TAccumulate seed,
        Func<TAccumulate, TSource, TAccumulate> func,
        Func<TAccumulate, bool> condition,
        Func<TAccumulate, TResult> resultSelector
    )
    {
        TAccumulate acc = seed;
        TAccumulate newAcc;
        foreach (var item in source)
        {
            newAcc = func(acc, item);
            if (!condition(newAcc))
            {
                yield return resultSelector(acc);
                acc = func(seed, item);
            }
            else
            {
                acc = newAcc;
            }
        }
        yield return resultSelector(acc);
    }
}
And now let's use it. First, take the multiplications only, as long as they meet the < 15 condition:
var grouped
= values.AggregateUntil(1, (a,i) => a * i, a => a < 15, a => a).ToList();
Returns a List<int> with 3 items: 12, 14, 14. That's what you need. But now let's take the number of items which were aggregated into each multiplication. That's easy using an anonymous type:
int[] values = { 1, 3, 4, 2, 7, 14 };
var grouped
    = values.AggregateUntil(
        new { v = 1, c = 0 },
        (a, i) => new { v = a.v * i, c = a.c + 1 },
        a => a.v < 15,
        a => a).ToList();
Returns exactly what you need: { v = 12, c = 3 }, { v = 14, c = 2 }, { v = 14, c = 1 }.
My ultimate goal is to take the geometric average of a very large list of numbers.
Then just take the nth root of each number and multiply afterwards. Then you don't need to worry about splitting the list into groups:
double mean = 1.0;
foreach (int i in values)
{
    mean *= Math.Pow(i, 1.0 / values.Length);
}
Which could also be done in LINQ with Aggregate:
mean = values.Aggregate(1.0, (prev, i) => prev * Math.Pow(i, 1.0 / values.Length ));
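The log-based workaround mentioned in the question works the same way and also avoids overflow: sum the logarithms, divide by the count, then exponentiate. A minimal sketch (Python for illustration rather than C#):
import math

def geometric_mean(values):
    # Sum the natural logs instead of multiplying the raw values,
    # so the intermediate result never overflows. Assumes positive values.
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(geometric_mean([1, 3, 4, 2, 7, 14]))  # ~3.65, i.e. 2352 ** (1/6)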
Well, my solution is not quite as elegant as @MarcinJuraszek's, but it's fast and it works within your constraints.
int[] values = { 1, 3, 4, 2, 7, 14 };
int product = 1;
int elementsMultiplied = 0;
List<Tuple<int, int>> allElements = new List<Tuple<int, int>>();

for (int i = 0; i < values.Length; i++)
{
    product = product * values[i];
    elementsMultiplied++;
    if (i == values.Length - 1 || product * values[i + 1] >= 15)
    {
        allElements.Add(new Tuple<int, int>(product, elementsMultiplied));
        product = 1;
        elementsMultiplied = 0;
    }
}

foreach (Tuple<int, int> pair in allElements)
{
    Console.WriteLine(pair.Item1 + "," + pair.Item2);
}
You are given as input an unsorted array of n distinct numbers, where n is a power of 2. Give an algorithm that identifies the second-largest number in the array, and that uses at most n+log₂(n)−2 comparisons.
Start by comparing the elements of the n-element array in odd and even positions, determining the larger element of each pair. This step requires n/2 comparisons. Now you have only n/2 elements. Continue pairwise comparisons to get n/4, n/8, ... elements. Stop when the largest element is found. This step requires a total of n/2 + n/4 + n/8 + ... + 1 = n-1 comparisons.
During the previous step, the largest element was directly compared with log₂(n) other elements. You can determine the largest of these elements in log₂(n)-1 comparisons. That is the second-largest number in the array.
Example: array of 8 numbers [10,9,5,4,11,100,120,110].
Comparisons on level 1: [10,9] ->10 [5,4]-> 5, [11,100]->100 , [120,110]-->120.
Comparisons on level 2: [10,5] ->10 [100,120]->120.
Comparisons on level 3: [10,120]->120.
Maximum is 120. It was immediately compared with: 10 (on level 3), 100 (on level 2), 110 (on level 1).
Step 2 should find the maximum of 10, 100, and 110. Which is 110. That's the second largest element.
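For reference, the level-by-level description above can be sketched directly (Python for illustration; it assumes distinct values and n a power of two, as in the question):
def second_largest(a):
    # Pair up elements level by level, recording who each winner beat.
    beaten = {x: [] for x in a}          # winner -> elements it was compared against
    level = list(a)
    while len(level) > 1:                # n/2 + n/4 + ... + 1 = n-1 comparisons
        nxt = []
        for i in range(0, len(level), 2):
            x, y = level[i], level[i + 1]
            w, l = (x, y) if x > y else (y, x)
            beaten[w].append(l)
            nxt.append(w)
        level = nxt
    # The champion met log2(n) opponents; their maximum is the second largest.
    return max(beaten[level[0]])

print(second_largest([10, 9, 5, 4, 11, 100, 120, 110]))  # 110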
sly s's answer is derived from this paper, but he didn't explain the algorithm, which means someone stumbling across this question has to read the whole paper, and his code isn't very sleek either. I'll give the crux of the algorithm from the aforementioned paper, complete with complexity analysis, and also provide a Python implementation, just because that's the language I chose while working on these problems.
Basically, we do two passes:
Find the max, and keep track of which elements the max was compared to.
Find the max among the elements the max was compared to; the result is the second largest element.
As an example, suppose 12 is the largest number in the array and was compared to 3, 1, 11, and 10 in the first pass. In the second pass, we find the largest among {3, 1, 11, 10}, which is 11, which is the second largest number in the original array.
Time Complexity:
All elements must be looked at, therefore, n - 1 comparisons for pass 1.
Since we divide the problem into two halves each time, there are at most log₂n recursive calls, for each of which, the comparisons sequence grows by at most one; the size of the comparisons sequence is thus at most log₂n, therefore, log₂n - 1 comparisons for pass 2.
Total number of comparisons <= (n - 1) + (log₂n - 1) = n + log₂n - 2
from typing import MutableSequence, Sequence, Tuple

def second_largest(nums: Sequence[int]) -> int:
    def _max(lo: int, hi: int, seq: Sequence[int]) -> Tuple[int, MutableSequence[int]]:
        if lo >= hi:
            return seq[lo], []
        mid = lo + (hi - lo) // 2
        x, a = _max(lo, mid, seq)
        y, b = _max(mid + 1, hi, seq)
        if x > y:
            a.append(y)
            return x, a
        b.append(x)
        return y, b

    comparisons = _max(0, len(nums) - 1, nums)[1]
    return _max(0, len(comparisons) - 1, comparisons)[0]
The first run for the given example is as follows:
lo=0, hi=1, mid=0, x=10, a=[], y=4, b=[]
lo=0, hi=2, mid=1, x=10, a=[4], y=5, b=[]
lo=3, hi=4, mid=3, x=8, a=[], y=7, b=[]
lo=3, hi=5, mid=4, x=8, a=[7], y=2, b=[]
lo=0, hi=5, mid=2, x=10, a=[4, 5], y=8, b=[7, 2]
lo=6, hi=7, mid=6, x=12, a=[], y=3, b=[]
lo=6, hi=8, mid=7, x=12, a=[3], y=1, b=[]
lo=9, hi=10, mid=9, x=6, a=[], y=9, b=[]
lo=9, hi=11, mid=10, x=9, a=[6], y=11, b=[]
lo=6, hi=11, mid=8, x=12, a=[3, 1], y=11, b=[9]
lo=0, hi=11, mid=5, x=10, a=[4, 5, 8], y=12, b=[3, 1, 11]
Things to note:
There are exactly n - 1=11 comparisons for n=12.
From the last line, y=12 wins over x=10, and the next pass starts with the sequence [3, 1, 11, 10], which has log₂(12)=3.58 ~ 4 elements, and will require 3 comparisons to find the maximum.
I have implemented the algorithm from @Evgeny Kluev's answer in Java. The total number of comparisons is n + log2(n) − 2. There is also a good reference:
Alexander Dekhtyar: CSC 349: Design and Analysis of Algorithms. This is similar to the top voted algorithm.
import java.util.Arrays;

public class op1 {

    private static int findSecondRecursive(int n, int[] A) {
        int[] firstCompared = findMaxTournament(0, n - 1, A); // n-1 comparisons
        int[] secondCompared = findMaxTournament(2, firstCompared[0] - 1, firstCompared); // log2(n)-1 comparisons
        // Total comparisons: n + log2(n) - 2
        return secondCompared[1];
    }

    private static int[] findMaxTournament(int low, int high, int[] A) {
        if (low == high) {
            int[] compared = new int[2];
            compared[0] = 2;
            compared[1] = A[low];
            return compared;
        }
        int[] compared1 = findMaxTournament(low, (low + high) / 2, A);
        int[] compared2 = findMaxTournament((low + high) / 2 + 1, high, A);
        if (compared1[1] > compared2[1]) {
            int k = compared1[0] + 1;
            int[] newcompared1 = new int[k];
            System.arraycopy(compared1, 0, newcompared1, 0, compared1[0]);
            newcompared1[0] = k;
            newcompared1[k - 1] = compared2[1];
            return newcompared1;
        }
        int k = compared2[0] + 1;
        int[] newcompared2 = new int[k];
        System.arraycopy(compared2, 0, newcompared2, 0, compared2[0]);
        newcompared2[0] = k;
        newcompared2[k - 1] = compared1[1];
        return newcompared2;
    }

    private static void printarray(int[] a) {
        for (int i : a) {
            System.out.print(i + " ");
        }
        System.out.println();
    }

    public static void main(String[] args) {
        // Demo
        System.out.println("Original array: ");
        int[] A = {10, 4, 5, 8, 7, 2, 12, 3, 1, 6, 9, 11};
        printarray(A);
        int secondMax = findSecondRecursive(A.length, A);
        Arrays.sort(A);
        System.out.println("Sorted array (for checking): ");
        printarray(A);
        System.out.println("Second largest number in A: " + secondMax);
    }
}
The problem is:
At comparison level 1 the algorithm has to remember the elements the eventual winner was compared against, because the largest is not yet known; likewise at the second level, and finally at the third. Keeping track of these elements requires additional value assignments, and once the largest is known you still have to walk back through the recorded elements. As a result, it will not be significantly faster than the simple 2N-2 comparison algorithm. Moreover, because the code is more complicated, you also have to think about the potential debugging time.
E.g. in PHP, the running time of a comparison vs. a value assignment is roughly 11-19 vs. 16.
I shall give an example for better understanding:
example 1 :
>12 56 98 12 76 34 97 23
>>(12 56) (98 12) (76 34) (97 23)
>>> 56 98 76 97
>>>> (56 98) (76 97)
>>>>> 98 97
>>>>>> 98
The largest element is 98
Now compare the elements that lost directly to the largest element, 98 (here 12, 56 and 97): the largest of these, 97, is the second largest.
nlogn implementation
public class Test {

    public static void main(String... args) {
        int[] arr = new int[]{1, 2, 2, 3, 3, 4, 9, 5, 100, 101, 1, 2, 1000, 102, 2, 2, 2};
        System.out.println(getMax(arr, 0, 16));
    }

    public static Holder getMax(int[] arr, int start, int end) {
        if (start == end)
            return new Holder(arr[start], Integer.MIN_VALUE);
        else {
            int mid = (start + end) / 2;
            Holder l = getMax(arr, start, mid);
            Holder r = getMax(arr, mid + 1, end);
            if (l.compareTo(r) > 0)
                return new Holder(l.high(), r.high() > l.low() ? r.high() : l.low());
            else
                return new Holder(r.high(), l.high() > r.low() ? l.high() : r.low());
        }
    }

    static class Holder implements Comparable<Holder> {
        private int low, high;

        public Holder(int r, int l) { low = l; high = r; }

        public String toString() {
            return String.format("Max: %d, SecMax: %d", high, low);
        }

        public int compareTo(Holder data) {
            if (high == data.high)
                return 0;
            if (high > data.high)
                return 1;
            else
                return -1;
        }

        public int high() {
            return high;
        }

        public int low() {
            return low;
        }
    }
}
Why not use this simple single-pass algorithm on the given array[n]? It runs in c*n, where c is the constant per-element cost of the checks and assignments, and it does at most 2n comparisons.
int first = 0;
int second = 0;
for (int i = 0; i < n; i++) {
    if (array[i] > first) {
        second = first;
        first = array[i];
    } else if (array[i] > second) {
        // without this branch, values between second and first would be missed
        second = array[i];
    }
}
// assumes non-negative values because of the zero initialization
Or do I just not understand the question...
In Python 2.7: the following code works in O(n log log n) because of the extra sort. Any optimizations?
def secondLargest(testList):
    secondList = []
    # Iterate through the list
    while (len(testList) > 1):
        left = testList[0::2]
        right = testList[1::2]
        if (len(testList) % 2 == 1):
            right.append(0)
        myzip = zip(left, right)
        mymax = [max(list(val)) for val in myzip]
        myzip.sort()
        secondMax = [x for x in myzip[-1] if x != max(mymax)][0]
        if (secondMax != 0):
            secondList.append(secondMax)
        testList = mymax
    return max(secondList)
public static int FindSecondLargest(int[] input)
{
    // Keeps track of the losers against each winner
    Dictionary<int, List<int>> dictWinnerLoser = new Dictionary<int, List<int>>();
    List<int> lstWinners = null;
    List<int> lstLoosers = null;
    int winner = 0;
    int looser = 0;

    while (input.Count() > 1) // Runs till we get the max of the array
    {
        // Keeps track of the winners of each run; we re-run with the winners until only one is left
        lstWinners = new List<int>();
        for (int i = 0; i < input.Count() - 1; i += 2)
        {
            if (input[i] > input[i + 1])
            {
                winner = input[i];
                looser = input[i + 1];
            }
            else
            {
                winner = input[i + 1];
                looser = input[i];
            }
            lstWinners.Add(winner);
            if (!dictWinnerLoser.ContainsKey(winner))
            {
                lstLoosers = new List<int>();
                lstLoosers.Add(looser);
                dictWinnerLoser.Add(winner, lstLoosers);
            }
            else
            {
                lstLoosers = dictWinnerLoser[winner];
                lstLoosers.Add(looser);
                dictWinnerLoser[winner] = lstLoosers;
            }
        }
        input = lstWinners.ToArray(); // Run the loop again with the winners
    }

    // All the elements that lost to the max element; input now holds only that max
    List<int> loosersOfWinner = dictWinnerLoser[input[0]];
    winner = 0;
    for (int i = 0; i < loosersOfWinner.Count(); i++) // The max among these losers is the second largest
    {
        if (winner < loosersOfWinner[i])
        {
            winner = loosersOfWinner[i];
        }
    }
    return winner;
}
This is probably quite an exotic question.
My problem is as follows:
The TI-83+ graphing calculator allows you to program it using either assembly and a link cable to a computer, or its built-in TI-BASIC programming language.
According to what I've found, it supports only 16-bit integers and some emulated floats.
I want to work with somewhat larger numbers, however (around 64 bits), so for that I use an array of the single digits:
{1, 2, 3, 4, 5}
would be the decimal 12345.
In binary, that's 11000000111001, or as a binary digit array:
{1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1}
which would be how the calculator displays it.
How would I go about converting this array of decimal digits (which is too large for the calculator to handle as a native type) into an array of binary digits?
Efficiency is not an issue. This is NOT homework.
This would leave me free to implement Addition for such arrays and such.
thanks!
Thought about it, and I think I would do it with the following 'algorithm':
check the last digit (5 in the example case)
if it is odd, store a 1 in the binary array (filling it in reverse order, i.e. from the right); if it is even, store a 0
now divide the number by 2 using the following method:
begin with the first digit and clear the 'carry' variable.
divide the digit by 2 and add the 'carry' variable. If the remainder is 1 (check this with an & 1 before you divide), put 5 in the carry; otherwise clear it
repeat until all digits have been done
repeat both steps until the whole number is reduced to 0's.
the number in your binary array is the binary representation
your example:
1,2,3,4,5
the 5 is odd, so we store 1 in the binary array: 1
we divide the array by 2 using the algorithm:
1,2,3,4,5 => 0,2,3,4,5 (1/2 = 0, carry 5) => 0,1+5,3,4,5 => 0,6,1,4,5 (3/2 = 1, carry 5) => 0,6,1,2+5,5 => 0,6,1,7,2, which is 6172
and repeat:
0,6,1,7,2: the last digit is even, so we store a 0: 0,1 (notice we fill the binary string from right to left)
etc.
you end up with the binary representation
EDIT:
Just to clarify above: All I'm doing is the age old algorithm:
int value = 12345;
while (value > 0)
{
    binaryArray.push(value & 1);
    value >>= 1; // divide by 2
}
except that in your example we don't have an int but an array which represents a base-10 int ;^)
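For the digit-array case, the same repeated-halving idea might look roughly like this (Python for illustration; on the TI-83+ this would be TI-BASIC operating on its list type):
def decimal_digits_to_binary(digits):
    # digits: most-significant first, e.g. [1, 2, 3, 4, 5] for 12345
    digits = list(digits)
    bits = []                              # collected least-significant bit first
    while any(d != 0 for d in digits):
        bits.append(digits[-1] & 1)        # parity of the number = parity of its last digit
        carry = 0
        for i, d in enumerate(digits):     # divide the digit array by 2, left to right
            digits[i] = d // 2 + carry
            carry = 5 if d % 2 else 0      # a leftover 1 is worth 5 in the next (lower) place
        # the carry left after the last digit is the remainder already recorded as the bit
    bits.reverse()                         # most-significant bit first
    return bits

print(decimal_digits_to_binary([1, 2, 3, 4, 5]))
# [1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1]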
One way would be to convert each digit of the decimal representation to its binary representation and then add the binary representations of all the digits:
5 = 101
40 = 101000
300 = 100101100
2000 = 11111010000
10000 = 10011100010000
             101
          101000
       100101100
     11111010000
+ 10011100010000
----------------
  11000000111001
Proof of concept in C#:
Methods for converting to an array of binary digits, adding arrays and multiplying an array by ten:
private static byte[] GetBinary(int value) {
    int bit = 1, len = 1;
    // <= so that exact powers of two (e.g. the digits 2, 4, 8) get enough bits
    while (bit * 2 <= value) {
        bit <<= 1;
        len++;
    }
    byte[] result = new byte[len];
    for (int i = 0; value > 0; i++) {
        if (value >= bit) {
            value -= bit;
            result[i] = 1;
        }
        bit >>= 1;
    }
    return result;
}

private static byte[] Add(byte[] a, byte[] b) {
    byte[] result = new byte[Math.Max(a.Length, b.Length) + 1];
    int carry = 0;
    for (int i = 1; i <= result.Length; i++) {
        if (i <= a.Length) carry += a[a.Length - i];
        if (i <= b.Length) carry += b[b.Length - i];
        result[result.Length - i] = (byte)(carry & 1);
        carry >>= 1;
    }
    if (result[0] == 0) {
        byte[] shorter = new byte[result.Length - 1];
        Array.Copy(result, 1, shorter, 0, shorter.Length);
        result = shorter;
    }
    return result;
}

private static byte[] Mul2(byte[] a, int exp) {
    byte[] result = new byte[a.Length + exp];
    Array.Copy(a, result, a.Length);
    return result;
}

private static byte[] Mul10(byte[] a, int exp) {
    for (int i = 0; i < exp; i++) {
        a = Add(Mul2(a, 3), Mul2(a, 1));
    }
    return a;
}
Converting an array:
byte[] digits = { 1, 2, 3, 4, 5 };
byte[][] bin = new byte[digits.Length][];
int exp = 0;
for (int i = digits.Length - 1; i >= 0; i--) {
    bin[i] = Mul10(GetBinary(digits[i]), exp);
    exp++;
}
byte[] result = null;
foreach (byte[] digit in bin) {
    result = result == null ? digit : Add(result, digit);
}

// output array
Console.WriteLine(
    result.Aggregate(
        new StringBuilder(),
        (s, n) => s.Append(s.Length == 0 ? "" : ",").Append(n)
    ).ToString()
);
Output:
1,1,0,0,0,0,0,0,1,1,1,0,0,1
Edit:
Added methods for multiplying an array by powers of ten. Instead of multiplying the digit before converting it to a binary array, the multiplication has to be done on the array.
The main issue here is that you're going between bases which aren't multiples of one another, and thus there isn't a direct isolated mapping between input digits and output digits. You're probably going to have to start with your least significant digit, output as many least significant digits of the output as you can before you need to consult the next digit, and so on. That way you only need to have at most 2 of your input digits being examined at any given point in time.
You might find it advantageous in terms of processing order to store your numbers in reversed form (such that the least significant digits come first in the array).