Sorting a table by nested value in Lua [duplicate] - algorithm

This question already has answers here:
Associatively sorting a table by value in Lua
(7 answers)
Closed 8 years ago.
I have a program that aggregates, for each user, the total number of downloads performed and the total amount of data downloaded in kB.
local table = {}
table[userID] = {5, 23498502}
My aim is for the output of the printTable function to be the entire list of users, ordered in descending order by the amount of kB downloaded (v[2]).
local aUsers = {}
...
function topUsers(key, nDownloads, totalSize)
  if aUsers[key] then
    aUsers[key][1] = aUsers[key][1] + nDownloads
    aUsers[key][2] = aUsers[key][2] + totalSize
  else
    aUsers[key] = {nDownloads, totalSize}
  end
end

function printTable(t)
  local str = ""
  -- How to sort 't' so that it prints in v[2] descending order?
  for k,v in pairs(t) do
    str = str .. k .. ", " .. v[1] .. ", " .. v[2] .. "\n"
  end
  return str
end
...
Any ideas how I could do that?

You can get the keys into a separate table and then sort that table using the criteria you need:
local t = {
  a = {1,2},
  b = {2,3},
  c = {4,1},
  d = {9,9},
}
local keys = {}
for k in pairs(t) do table.insert(keys, k) end
table.sort(keys, function(a, b) return t[a][2] > t[b][2] end)
for _, k in ipairs(keys) do print(k, t[k][1], t[k][2]) end
will print:
d 9 9
b 2 3
a 1 2
c 4 1
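Applied to the question's aUsers table, the same idea slots straight into printTable. A sketch, assuming each value keeps the {nDownloads, totalSize} layout shown above (so the sort key is v[2]):
function printTable(t)
  -- collect the keys, then sort them by the second field (total size), descending
  local keys = {}
  for k in pairs(t) do keys[#keys + 1] = k end
  table.sort(keys, function(a, b) return t[a][2] > t[b][2] end)
  local str = ""
  for _, k in ipairs(keys) do
    local v = t[k]
    str = str .. k .. ", " .. v[1] .. ", " .. v[2] .. "\n"
  end
  return str
end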

Related

Lua iterate over table sorted by values

I have a table t with many entries like t["name1"] = 42, t["name2"] = 123, ...
I would like to iterate over the table in descending order of the value numbers. How can this be accomplished? I have found methods to create iterator functions that iterate in order over the keys of a table, but no way to iterate over the entries ordered by their values.
function pairs_order_by_values_desc(tab)
  local keys = {}
  for k in pairs(tab) do
    keys[#keys + 1] = k
  end
  table.sort(keys, function(a, b) return tab[a] > tab[b] end)
  local j = 0
  return function()
    j = j + 1
    local k = keys[j]
    if k ~= nil then
      return k, tab[k]
    end
  end
end

local t = {}
t.name1 = 42
t.name2 = 123
t.name3 = 99
for k, v in pairs_order_by_values_desc(t) do
  print(k, v)
end

Lua sorting values of tables within table

I am very new to Lua, so please be gentle.
I want sorted results based on the "error" key. For this example, the output should be:
c 50 70
d 25 50
b 30 40
a 10 20
Here is my script:
records = {}
records["a"] = {["count"] = 10, ["error"] = 20}
records["b"] = {["count"] = 30, ["error"] = 40}
records["c"] = {["count"] = 50, ["error"] = 70}
records["d"] = {["count"] = 25, ["error"] = 50}
function spairs(t, order)
  -- collect the keys
  local keys = {}
  for k in pairs(t) do keys[#keys+1] = k end
  -- if order function given, sort by it by passing the table and keys a, b,
  -- otherwise just sort the keys
  if order then
    table.sort(keys, function(a,b) return order(t, a, b) end)
  else
    table.sort(keys)
  end
  -- return the iterator function
  local i = 0
  return function()
    i = i + 1
    if keys[i] then
      return keys[i], t[keys[i]]
    end
  end
end

for k, v in pairs(records) do
  for m, n in pairs(v) do
    for x, y in spairs(v, function(t,a,b) return t[b] < t[a] end) do
      line = string.format("%s %5s %-10d", k, n, y)
    end
  end
  print(line)
end
I found this about sorting a table and tried to implement it, but it does not work; the results are not sorted.
table.sort only works when the table elements are integrally indexed. In your case, when you call spairs on the inner table v, you are actually calling table.sort on the count and error keys.
First off, remove the ugly, irrelevant nested for..pairs loops. You only need the spairs call for your task:
for x, y in spairs(records, function(t, a, b) return t[b].error < t[a].error end) do
  print(x, y.count, y.error)
end
And that is all.
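If you also want the single formatted line per record that the original loop was building, the same iteration works with the question's string.format call (a sketch; the format string is adapted from the question):
for x, y in spairs(records, function(t, a, b) return t[b].error < t[a].error end) do
  print(string.format("%s %5d %-10d", x, y.count, y.error))
end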

Find all possible combinations of a String representation of a number

Given a mapping:
A: 1
B: 2
C: 3
...
...
...
Z: 26
Find all possible ways a number can be represented. E.g., for the input "121", we can represent it as:
ABA [using: 1 2 1]
LA [using: 12 1]
AU [using: 1 21]
I tried thinking about using some sort of dynamic programming approach, but I am not sure how to proceed. I was asked this question in a technical interview.
Here is a solution I could think of; please let me know if this looks good:
A[i]: Total number of ways to represent the sub-array number[0..i-1] using the integer to alphabet mapping.
Solution [am I missing something?]:
A[0] = 1 // there is only 1 way to represent the subarray consisting of only 1 number
for(i = 1:A.size):
    A[i] = A[i-1]
    if(input[i-1]*10 + input[i] < 26):
        A[i] += 1
    end
end
print A[A.size-1]
To just get the count, the dynamic programming approach is pretty straight-forward:
A[0] = 1
for i = 1:n
    A[i] = 0
    if input[i-1] > 0                                // avoid 0
        A[i] += A[i-1];
    if i > 1 &&                                      // avoid index-out-of-bounds on i = 1
       10 <= (10*input[i-2] + input[i-1]) <= 26      // check that number is 10-26
        A[i] += A[i-2];
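For readers coming from the Lua questions above, here is a minimal sketch of the same count-only DP in Lua (an illustration, not part of the answer; digits is assumed to be a plain 1-based array of single digits):
local function countDecodings(digits)
  local A = {}
  A[0] = 1                          -- empty prefix has exactly one decoding
  for i = 1, #digits do
    A[i] = 0
    if digits[i] > 0 then           -- a lone 0 cannot be decoded
      A[i] = A[i] + A[i - 1]
    end
    if i > 1 then
      local pair = 10 * digits[i - 1] + digits[i]
      if pair >= 10 and pair <= 26 then
        A[i] = A[i] + A[i - 2]
      end
    end
  end
  return A[#digits]
end
print(countDecodings({1, 2, 1}))    --> 3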
If you instead want to list all representations, dynamic programming isn't particularly well-suited for this; you're better off with a simple recursive algorithm.
First off, we need to find an intuitive way to enumerate all the possibilities. My simple construction is given below.
Let us assume a simple way to represent your integer in string format:
a1 a2 a3 a4 .... an; for instance, in 121: a1 -> 1, a2 -> 2, a3 -> 1
Now,
we need to find the number of ways of placing a + sign between two characters; here + means concatenating the two characters.
a1 - a2 - a3 - .... - an, where - shows the places a '+' can be placed. So the number of positions is n - 1, where n is the string length.
Assume each position, which may or may not hold a + symbol, is represented as a bit.
So this boils down to how many different bit strings of length n-1 are possible, which is clearly 2^(n-1). Now, in order to enumerate the possibilities, go through every bit string and place + signs at the positions whose bits are set to get every representation.
For your example, 121, four bit strings are possible: 00, 01, 10, 11, giving
1 2 1
1 2 + 1
1 + 2 1
1 + 2 + 1
And if you see a character followed by a +, just concatenate the next character with the current one, and do this sequentially to get the representation; for instance
x + y z a + b + c d
would be (x+y) z (a+b+c) d
Hope it helps.
And you will have to take care of the edge cases where the value of some digit group is > 26, of course.
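A minimal sketch of that enumeration in Lua (the helper name enumerate is made up for illustration; it also filters out groups with a leading zero, which the answer leaves as an edge case):
-- A set bit at position p means "join" digits p and p+1 (the '+' of the explanation);
-- a clear bit means split between them.
local function enumerate(s)
  local n = #s
  local results = {}
  for mask = 0, 2 ^ (n - 1) - 1 do
    local groups, start = {}, 1
    for pos = 1, n - 1 do
      local joined = math.floor(mask / 2 ^ (pos - 1)) % 2 == 1
      if not joined then
        groups[#groups + 1] = s:sub(start, pos)
        start = pos + 1
      end
    end
    groups[#groups + 1] = s:sub(start, n)
    -- keep this split only if every group maps to a letter (1-26, no leading zero)
    local word, ok = "", true
    for _, g in ipairs(groups) do
      local v = tonumber(g)
      if g:sub(1, 1) == "0" or v < 1 or v > 26 then ok = false; break end
      word = word .. string.char(string.byte("A") + v - 1)
    end
    if ok then results[#results + 1] = word end
  end
  return results
end
for _, w in ipairs(enumerate("121")) do print(w) end  -- ABA, LA, AU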
I think a recursive traversal through all possible combinations would do just fine:
mapping = {"1":"A", "2":"B", "3":"C", "4":"D", "5":"E", "6":"F", "7":"G",
"8":"H", "9":"I", "10":"J",
"11":"K", "12":"L", "13":"M", "14":"N", "15":"O", "16":"P",
"17":"Q", "18":"R", "19":"S", "20":"T", "21":"U", "22":"V", "23":"W",
"24":"A", "25":"Y", "26":"Z"}
def represent(A, B):
if A == B == '':
return [""]
ret = []
if A in mapping:
ret += [mapping[A] + r for r in represent(B, '')]
if len(A) > 1:
ret += represent(A[:-1], A[-1]+B)
return ret
print represent("121", "")
Assuming you only need to count the number of combinations, and assuming that 0 followed by an integer in [1,9] is not a valid concatenation, a brute-force strategy would be:
Count(s,n)
    x=0
    if (s[n-1] is valid)
        x=Count(s,n-1)
    y=0
    if (s[n-2] concat s[n-1] is valid)
        y=Count(s,n-2)
    return x+y
A better strategy would be to use divide-and-conquer:
Count(s,start,n)
    if (n is even)
    {
        // split s into equal left and right parts; with no concatenation across
        // the middle, the total count is the left count multiplied by the right count
        x=Count(s,start,n/2) * Count(s,start+n/2,n/2);
        y=0;
        if (s[start+n/2-1] concat s[start+n/2] is valid)
        {
            // if the middle two characters' concatenation is valid:
            // count everything left of the middle two characters,
            // count everything right of the middle two characters,
            // multiply the two counts and add to the existing count
            y=Count(s,start,n/2-1)*Count(s,start+n/2+1,n/2-1);
        }
        return x+y;
    }
    else
    {
        // there are three cases here:
        // case 1: if the middle character is valid,
        //         count everything to the left of the middle character,
        //         count everything to the right of the middle character,
        //         multiply the two, assign to x
        x=...
        // case 2: if the middle character concatenated with the one to its left is valid,
        //         count everything to the left of these two characters,
        //         count everything to the right of these two characters,
        //         multiply the two, assign to y
        y=...
        // case 3: if the middle character concatenated with the one to its right is valid,
        //         count everything to the left of these two characters,
        //         count everything to the right of these two characters,
        //         multiply the two, assign to z
        z=...
        return x+y+z;
    }
The brute-force solution has time complexity of T(n)=T(n-1)+T(n-2)+O(1) which is exponential.
The divide-and-conquer solution has time complexity of T(n)=3T(n/2)+O(1) which is O(n**lg3).
Hope this is correct.
Something like this?
Haskell code:
import qualified Data.Map as M
import Data.Maybe (fromJust)
combs str = f str [] where
  charMap = M.fromList $ zip (map show [1..]) ['A'..'Z']
  f [] result = [reverse result]
  f (x:xs) result
    | null xs =
        case M.lookup [x] charMap of
          Nothing -> ["The character " ++ [x] ++ " is not in the map."]
          Just a  -> [reverse $ a:result]
    | otherwise =
        case M.lookup [x, head xs] charMap of
          Just a  -> f (tail xs) (a:result)
                       ++ f xs ((fromJust $ M.lookup [x] charMap) : result)
          Nothing -> case M.lookup [x] charMap of
            Nothing -> ["The character " ++ [x] ++ " is not in the map."]
            Just a  -> f xs (a:result)
Output:
*Main> combs "121"
["LA","AU","ABA"]
Here is the solution based on my discussion here:
private static int decoder2(int[] input) {
    int[] A = new int[input.length + 1];
    A[0] = 1;
    for (int i = 1; i < input.length + 1; i++) {
        A[i] = 0;
        if (input[i-1] > 0) {
            A[i] += A[i-1];
        }
        // pair must be in 10-26 (the lower bound avoids treating e.g. "05" as a pair)
        if (i > 1 && 10*input[i-2] + input[i-1] >= 10 && 10*input[i-2] + input[i-1] <= 26) {
            A[i] += A[i-2];
        }
        System.out.println(A[i]);
    }
    return A[input.length];
}
Just use breadth-first search.
For instance, with 121:
Start from the first integer.
Consider 1 digit first: map 1 to A, leaving 21.
Then consider 2 digits: map 12 to L, leaving 1.
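A sketch of that search in Lua (names like bfsDecodings are illustrative, not from the answer): each queue entry remembers how far into the string we are and the letters produced so far, and at every step we try taking one digit or two digits.
local function bfsDecodings(s)
  local queue = { { pos = 1, word = "" } }   -- start before the first digit
  local results = {}
  while #queue > 0 do
    local state = table.remove(queue, 1)     -- dequeue (breadth-first)
    if state.pos > #s then
      results[#results + 1] = state.word
    else
      for len = 1, 2 do                      -- take one digit, then two digits
        local piece = s:sub(state.pos, state.pos + len - 1)
        local v = tonumber(piece)
        if #piece == len and piece:sub(1, 1) ~= "0" and v >= 1 and v <= 26 then
          queue[#queue + 1] = {
            pos = state.pos + len,
            word = state.word .. string.char(string.byte("A") + v - 1),
          }
        end
      end
    end
  end
  return results
end
for _, w in ipairs(bfsDecodings("121")) do print(w) end  -- AU, LA, ABA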
This problem can be done in O(fib(n+2)) time with a standard DP algorithm.
We have exactly n subproblems, and bottom-up we can solve each problem of size i in O(fib(i)) time.
Summing the series gives fib(n+2).
If you consider the question carefully you see that it is a Fibonacci series.
I took standard Fibonacci code and just changed it to fit our conditions.
The space is obviously bounded by the size of all solutions, O(fib(n)).
Consider this pseudo code:
Map<Integer, String> mapping = new HashMap<Integer, String>();
List<String > iterative_fib_sequence(string input) {
int length = input.length;
if (length <= 1)
{
if (length==0)
{
return "";
}
else//input is a-j
{
return mapping.get(input);
}
}
List<String> b = new List<String>();
List<String> a = new List<String>(mapping.get(input.substring(0,0));
List<String> c = new List<String>();
for (int i = 1; i < length; ++i)
{
int dig2Prefix = input.substring(i-1, i); //Get a letter with 2 digit (k-z)
if (mapping.contains(dig2Prefix))
{
String word2Prefix = mapping.get(dig2Prefix);
foreach (String s in b)
{
c.Add(s.append(word2Prefix));
}
}
int dig1Prefix = input.substring(i, i); //Get a letter with 1 digit (a-j)
String word1Prefix = mapping.get(dig1Prefix);
foreach (String s in a)
{
c.Add(s.append(word1Prefix));
}
b = a;
a = c;
c = new List<String>();
}
return a;
}
Old question, but adding an answer so that others can find help.
It took me some time to understand the solution to this problem. I referred to the accepted answer, #Karthikeyan's answer, and the solution from GeeksforGeeks, and wrote my own code as below:
To understand my code, first understand the examples below:
We know decoding_counts([1, 2]) == 2, since [1, 2] decodes as "AB" or "L".
And decoding_counts([1, 2, 1]) == 3, since [1, 2, 1] decodes as "ABA", "AU", or "LA".
Using the above two examples, let's evaluate decoding_counts([1, 2, 1, 4]):
Case: "taking the next digit as a single digit"
Taking 4 as a single digit, decoding to the letter 'D', this case contributes decoding_counts([1, 2, 1]) because [1, 2, 1, 4] will be decoded as "ABAD", "AUD", "LAD".
Case: "combining the next digit with the previous digit"
Combining 4 with the previous 1 into 14, decoding to the letter 'N', this case contributes decoding_counts([1, 2]) because [1, 2, 1, 4] will be decoded as "ABN" or "LN".
Below is my Python code; read the comments:
def decoding_counts(digits):
    # defining counts as: counts[i] -> decoding_counts(digits[: i+1])
    counts = [0] * len(digits)
    counts[0] = 1
    for i in xrange(1, len(digits)):
        # case: "taking next digit as single digit"
        if digits[i] != 0:  # `0` does not map to any letter
            counts[i] = counts[i - 1]
        # case: "combining next digit with the previous digit"
        combine = 10 * digits[i - 1] + digits[i]
        if 10 <= combine <= 26:  # two-digit mappings
            counts[i] += (1 if i < 2 else counts[i - 2])
    return counts[-1]

for digits in "13", "121", "1214", "1234121":
    print digits, "-->", decoding_counts(map(int, digits))
outputs:
13 --> 2
121 --> 3
1214 --> 5
1234121 --> 9
Note: I assumed that the input digits do not start with 0, consist only of 0-9, and have a sufficient length.
For Swift, this is what I came up with. Basically, I convert the string into an array and go through it, adding a space at different positions of this array, then appending the results to another array; the second part should be easy after this is done.
//test case
let input = "1221"

func combination(_ input: String) {
    var arr = Array(input)
    var possible = [String]()
    //... means inclusive range
    for i in 2...arr.count {
        var temp = arr
        //basically goes through it backwards so
        //adding the space doesn't mess up the index
        for j in (1..<i).reversed() {
            temp.insert(" ", at: j)
            possible.append(String(temp))
        }
    }
    print(possible)
}

combination(input)
//prints:
//["1 221", "12 21", "1 2 21", "122 1", "12 2 1", "1 2 2 1"]
def stringCombinations(digits, i=0, s=''):
    if i == len(digits):
        print(s)
        return
    alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
    total = 0
    for j in range(i, min(i + 1, len(digits) - 1) + 1):
        total = (total * 10) + digits[j]
        if 0 < total <= 26:
            stringCombinations(digits, j + 1, s + alphabet[total - 1])

if __name__ == '__main__':
    digits = list()
    n = input()
    d = list(n)
    for i in d:
        i = int(i)
        digits.append(i)
    print(digits)
    stringCombinations(digits)

copying unordered keys from one table to ordered values in another

I have a table mapping strings to numbers like this:
t['a']=10
t['b']=2
t['c']=4
t['d']=11
From this I want to create an array-like table whose values are the keys from the first table, ordered by their (descending) values in the first table, like this:
T[1] = 'd' -- 11
T[2] = 'a' -- 10
T[3] = 'c' -- 4
T[4] = 'b' -- 2
How can this be done in Lua?
-- Your table
local t = { }
t["a"] = 10
t["b"] = 2
t["c"] = 4
t["d"] = 11

local T = { } -- Result goes here

-- Store both key and value as pairs
for k, v in pairs(t) do
  T[#T + 1] = { k = k, v = v }
end

-- Sort by value
table.sort(T, function(lhs, rhs) return lhs.v > rhs.v end)

-- Leave only keys, drop values
for i = 1, #T do
  T[i] = T[i].k
end

-- Print the result
for i = 1, #T do
  print("T["..i.."] = " .. ("%q"):format(T[i]))
end
It prints
T[1] = "d"
T[2] = "a"
T[3] = "c"
T[4] = "b"
Alexander Gladysh's answer can be simplified slightly:
-- Your table
local t = { }
t["a"] = 10
t["b"] = 2
t["c"] = 4
t["d"] = 11
local T = { } -- Result goes here
-- Store keys (in arbitrary order) in the output table
for k, _ in pairs(t) do
  T[#T + 1] = k
end
-- Sort by value
table.sort(T, function(lhs, rhs) return t[lhs] > t[rhs] end)
-- Print the result
for i = 1, #T do
  print("T["..i.."] = " .. ("%q"):format(T[i]))
end
Tables in Lua do not have an order associated with them.
When using a table as an array with sequential integer keys from 1 to N, the table can be iterated in order using a numeric for loop or ipairs().
When using keys that are not sequential integers from 1 to N, the order cannot be controlled. To get around this limitation, a second table can be used as an array to store the order of the keys in the first table.

Algorithm to evenly distribute items into 3 columns

I'm looking for an algorithm that will evenly distribute 1 to many items into three columns. No column can have more than one more item than any other column. I typed up an example of what I'm looking for below. Adding up Col1, Col2, and Col3 should equal ItemCount.
Edit: Also, the items are alphanumeric and must stay ordered across the columns. The last item in a column has to be less than the first item in the next column.
Items Col1,Col2,Col3
A A
AB A,B
ABC A,B,C
ABCD AB,C,D
ABCDE AB,CD,E
ABCDEF AB,CD,EF
ABCDEFG ABC,DE,FG
ABCDEFGH ABC,DEF,GH
ABCDEFGHI ABC,DEF,GHI
ABCDEFGHIJ ABCD,EFG,HIJ
ABCDEFGHIJK ABCD,EFGH,IJK
Here you go, in Python:
NumCols = 3
DATA = "ABCDEFGHIJK"
for ItemCount in range(1, 12):
    subdata = DATA[:ItemCount]
    Col1Count = (ItemCount + NumCols - 1) / NumCols
    Col2Count = (ItemCount + NumCols - 2) / NumCols
    Col3Count = (ItemCount + NumCols - 3) / NumCols
    Col1 = subdata[:Col1Count]
    Col2 = subdata[Col1Count:Col1Count+Col2Count]
    Col3 = subdata[Col1Count+Col2Count:]
    print "%2d %5s %5s %5s" % (ItemCount, Col1, Col2, Col3)
# Prints:
# 1 A
# 2 A B
# 3 A B C
# 4 AB C D
# 5 AB CD E
# 6 AB CD EF
# 7 ABC DE FG
# 8 ABC DEF GH
# 9 ABC DEF GHI
# 10 ABCD EFG HIJ
# 11 ABCD EFGH IJK
This answer is now obsolete because the OP decided to simply change the question after I answered it. I’m just too lazy to delete it.
function getColumnItemCount(int items, int column) {
    return (int) (items / 3) + (((items % 3) >= (column + 1)) ? 1 : 0);
}
This question was the closest thing to my own that I found, so I'll post the solution I came up with. In JavaScript:
var items = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K']
var columns = [[], [], []]
for (var i = 0; i < items.length; i++) {
    columns[Math.floor(i * columns.length / items.length)].push(items[i])
}
console.log(columns)
Just to give you a hint (it's pretty easy, so figure the rest out yourself):
Divide ItemCount by 3, rounding down. This is the minimum number of items in every column.
Now take ItemCount % 3 (modulo), which is either 1 or 2 (otherwise the count would be divisible by 3 and there would be nothing left over), and distribute that remainder over the first columns.
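A sketch of that hint in Lua (the helper name columnCounts is hypothetical): every column gets the floored share, and the first ItemCount % 3 columns get one extra, so no column ends up with more than one item more than any other.
local function columnCounts(itemCount, numCols)
  local base  = math.floor(itemCount / numCols)
  local extra = itemCount % numCols
  local counts = {}
  for col = 1, numCols do
    counts[col] = base + (col <= extra and 1 or 0)
  end
  return counts
end
print(table.concat(columnCounts(11, 3), ", "))  --> 4, 4, 3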
I needed a C# version so here's what I came up with (the algorithm is from Richie's answer):
// Start with 11 values
var data = "ABCDEFGHIJK";
// Split in 3 columns
var columnCount = 3;
// Find out how many values to display in each column
var columnCounts = new int[columnCount];
for (int i = 0; i < columnCount; i++)
    columnCounts[i] = (data.Count() + columnCount - (i + 1)) / columnCount;
// Allocate each value to the appropriate column
int iData = 0;
for (int i = 0; i < columnCount; i++)
    for (int j = 0; j < columnCounts[i]; j++)
        Console.WriteLine("{0} -> Column {1}", data[iData++], i + 1);
// PRINTS:
// A -> Column 1
// B -> Column 1
// C -> Column 1
// D -> Column 1
// E -> Column 2
// F -> Column 2
// G -> Column 2
// H -> Column 2
// I -> Column 3
// J -> Column 3
// K -> Column 3
It's quite simple.
If you have N elements indexed from 0 to N-1 and columns indexed from 0 to 2, the i-th element will go in column i mod 3 (where mod is the modulo operator, % in C, C++ and some other languages).
Do you just want the count of items in each column? If you have n items, then
the counts will be:
round(n/3), round(n/3), n-2*round(n/3)
where "round" rounds to the nearest integer (e.g. round(x)=(int)(x+0.5)).
If you want to actually put the items there, try something like this Python-style pseudocode:
def columnize(items):
    i = 0
    answer = [ [], [], [] ]
    for it in items:
        answer[i % 3] += it
        i += 1
    return answer
Here's a PHP version I hacked together for all the PHP hacks out there like me (yup, guilt by association!)
function column_item_count($items, $column, $maxcolumns) {
    // floor() rather than round(), so the per-column counts always sum to $items
    return floor($items / $maxcolumns) + (($items % $maxcolumns) >= $column ? 1 : 0);
}
And you can call it like this...
$cnt = sizeof($an_array_of_data);
$col1_cnt = column_item_count($cnt,1,3);
$col2_cnt = column_item_count($cnt,2,3);
$col3_cnt = column_item_count($cnt,3,3);
Credit for this should go to #Bombe who provided it in Java (?) above.
NB: This function expects you to pass in an ordinal column number, i.e. first col = 1, second col = 2, etc...
