How can the performance of an Alloy model be improved? - performance

I've written two Alloy solutions to the "water pouring puzzle" (given a 5-quart jug and a 3-quart jug, can you measure exactly 4 quarts?).
My first attempt (specific.als) hardcodes the two jugs as named relations:
sig State {
threeJug: one Int,
fiveJug: one Int
}
It solves the puzzle (finds a counterexample) in about 500ms.
My second attempt (generic.als) is coded to allow for an arbitrary number of jugs and different sizes:
sig State {
jugAmounts: Jug -> one Int
}
It solves the puzzle in about 2000ms.
Is it possible for the generic code to run as quickly as the specific code?
What would need to be changed?
Specific Model
open util/ordering[State]
sig State {
threeJug: one Int,
fiveJug: one Int
}
fact {
all s: State {
s.threeJug >= 0
s.threeJug <= 3
s.fiveJug >= 0
s.fiveJug <= 5
}
first.threeJug = 0
first.fiveJug = 0
}
pred Fill(s, s': State){
(s'.threeJug = 3 and s'.fiveJug = s.fiveJug)
or (s'.fiveJug = 5 and s'.threeJug = s.threeJug)
}
pred Empty(s, s': State){
(s.threeJug > 0 and s'.threeJug = 0 and s'.fiveJug = s.fiveJug)
or (s.fiveJug > 0 and s'.fiveJug = 0 and s'.threeJug = s.threeJug)
}
pred Pour3To5(s, s': State){
some x: Int {
s'.fiveJug = plus[s.fiveJug, x]
and s'.threeJug = minus[s.threeJug, x]
and (s'.fiveJug = 5 or s'.threeJug = 0)
}
}
pred Pour5To3(s, s': State){
some x: Int {
s'.threeJug = plus[s.threeJug, x]
and s'.fiveJug = minus[s.fiveJug, x]
and (s'.threeJug = 3 or s'.fiveJug = 0)
}
}
fact Next {
all s: State, s': s.next {
Fill[s, s']
or Pour3To5[s, s']
or Pour5To3[s, s']
or Empty[s, s']
}
}
assert notOne {
no s: State | s.fiveJug = 4
}
check notOne for 7
Generic Model
open util/ordering[State]
sig Jug {
max: Int
}
sig State {
jugAmounts: Jug -> one Int
}
fact jugCapacity {
all s: State {
all j: s.jugAmounts.univ {
j.(s.jugAmounts) >= 0
and j.(s.jugAmounts) <= j.max
}
}
-- jugs start empty
first.jugAmounts = Jug -> 0
}
pred fill(s, s': State){
one j: Jug {
j.(s'.jugAmounts) = j.max
all r: Jug - j | r.(s'.jugAmounts) = r.(s.jugAmounts)
}
}
pred empty(s, s': State){
one j: Jug {
j.(s'.jugAmounts) = 0
all r: Jug - j | r.(s'.jugAmounts) = r.(s.jugAmounts)
}
}
pred pour(s, s': State){
some x: Int {
some from, to: Jug {
from.(s'.jugAmounts) = 0 or to.(s'.jugAmounts) = to.max
from.(s'.jugAmounts) = minus[from.(s.jugAmounts), x]
to.(s'.jugAmounts) = plus[to.(s.jugAmounts), x]
all r: Jug - from - to | r.(s'.jugAmounts) = r.(s.jugAmounts)
}
}
}
fact next {
all s: State, s': s.next {
fill[s, s']
or empty[s, s']
or pour[s, s']
}
}
fact SpecifyPuzzle{
all s: State | #s.jugAmounts = 2
one j: Jug | j.max = 5
one j: Jug | j.max = 3
}
assert no4 {
no s: State | 4 in univ.(s.jugAmounts)
}
check no4 for 7

From experience, better performance can be obtained by:
Reducing the size of the search space (by decreasing the scope, the number of sigs and relations, the arity of relations, ...)
Simplifying constraints. Try to avoid using set comprehension and quantification as much as possible. As an example, your assertion no s: State | 4 in univ.(s.jugAmounts) is logically equivalent to 4 not in univ.(State.jugAmounts). Making this tiny change alone in your model already made the processing of clauses about 200ms faster on my end.
EDIT
I came back to your problem during my free time.
Here's the model I used to reach this conclusion
module WaterJugs/AbstractSyntax/ASM
open util/ordering[State]
open util/integer
//===== HARDCODE 2 jugs of volume 3 and 5 resp. comment those lines for generic approach ====
one sig JugA extends Jug{}{
volume=3
water[first]=0
}
one sig JugB extends Jug{}{
volume=5
water[first]=0
}
//==================================================================
abstract sig Jug{
volume: Int,
water: State ->one Int
}{
all s:State| water[s]<=volume and water[s]>=0
volume >0
}
pred Fill(s1,s2:State){
one disj j1,j2:Jug{j1.water[s1]!=j1.volume and j1.water[s2]=j1.volume and j2.water[s2]=j2.water[s1] }
}
pred Empty(s1,s2:State){
one disj j1,j2:Jug{ j1.water[s1]!=0 and j1.water[s2]=0 and j2.water[s2]= j2.water[s1] }
}
pred Pour(s1,s2:State){
one disj j1,j2: Jug{
add[j1.water[s1],j2.water[s1]] >j1.volume implies {
( j1.water[s2]=j1.volume and j2.water[s2]=sub[j2.water[s1],sub[j1.volume,j1.water[s1]]])}
else{
( j1.water[s2]=add[j1.water[s1],j2.water[s1]] and j2.water[s2]=0)
}
}
}
fact Next {
all s: State-last{
Fill[s, s.next]
or Pour[s, s.next]
or Empty[s, s.next]
}
}
sig State{
}
assert no4 {
4 not in Jug.water[State]
}
check no4 for 7
As a bonus, here's a visualization of the counterexample, provided by Lightning:

Related

How would one solve the staircase problem recursively with a variable number of steps?

The problem of determining the number of ways to climb a staircase, given that you can take 1 or 2 steps, is well known, with the Fibonacci-sequence solution being very clear. However, how exactly could one solve this recursively if you also assume that you can take a variable M number of steps?
I tried to make a quick mockup of this algorithm in TypeScript with
function counter(n: number, h: number){
console.log(`counter(n=${n},h=${h})`);
let sum = 0
if(h<1) return 0;
sum = 1
if (n>h) {
n = h;
}
if (n==h) {
sum = Math.pow(2, h-1)
console.log(`return sum=${sum}, pow(2,${h-1}) `)
return sum
}
for (let c = 1; c <= h; c++) {
console.log(`c=${c}`)
sum += counter(n, h-c);
console.log(`sum=${sum}`)
}
console.log(`return sum=${sum}`)
return sum;
}
let result = counter (2, 4);
console.log(`result=${result}`)
but unfortunately this doesn't seem to work for most cases where the height is not equal to the number of steps one could take.
I think this could be solved with recursive DP.
#include <iostream>
#include <vector>
using namespace std;

int n, m; // number of stairs, maximum number of steps allowed
vector<vector<int>> dp2; //[stair count][number of jumps]
int stair(int c, int p) {
int& ret = dp2[c][p];
if (ret != -1) return ret; //If you've already done same search, return saved result
if (c == n) { //If you hit the last stair, return 1
return ret = 1;
}
int s1 = 0, s2 = 0;
if (p < m) { //If you can do more jumps, make recursive call
s1 = stair(c + 1, p + 1);
if (c + 2 <= n) { //+2 stairs can jump over the last stair. That shouldn't happen.
s2 = stair(c + 2, p + 1);
}
}
return ret = s1 + s2; //Final result will be addition of +1 stair methods and +2 methods
}
int main()
{
ios::sync_with_stdio(0); cin.tie(0); cout.tie(0);
cin >> n >> m; dp2 = vector<vector<int>>(n + 1, vector<int>(m + 1, -1));
for (int i = 1; i <= m; i++) {
dp2[n][i] = 1; //All last stair method count should be 1, because there is no more after.
}
cout << stair(0, 0) << "\n";
return 0;
}
Example IO 1
5 5
8
// 1 1 1 1 1
// 1 1 1 2
// 1 1 2 1
// 1 2 1 1
// 2 1 1 1
// 1 2 2
// 2 1 2
// 2 2 1
Example IO 2
5 4
7
// 1 1 1 2
// 1 1 2 1
// 1 2 1 1
// 2 1 1 1
// 1 2 2
// 2 1 2
// 2 2 1
Example IO 3
5 3
3
// 1 2 2
// 2 1 2
// 2 2 1
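For comparison, the same memoized recursion can be sketched in Scala. This is only an illustrative translation of the dp2[c][p] idea above (counting the compositions of n into steps of 1 or 2 using at most m steps); the function and parameter names are my own.
import scala.collection.mutable
// Count the ways to climb n stairs with steps of 1 or 2, using at most m steps.
// Memoized on (stairsClimbed, stepsUsed), mirroring dp2[stair count][number of jumps].
def countWays(n: Int, m: Int): Long = {
  val memo = mutable.Map.empty[(Int, Int), Long]
  def go(c: Int, p: Int): Long =
    memo.get((c, p)) match {
      case Some(v) => v
      case None =>
        val v =
          if (c == n) 1L                                   // reached the top stair
          else if (p == m) 0L                              // no steps left
          else go(c + 1, p + 1) +
            (if (c + 2 <= n) go(c + 2, p + 1) else 0L)     // never jump past the top
        memo((c, p)) = v
        v
    }
  go(0, 0)
}
// countWays(5, 5) == 8, countWays(5, 4) == 7, countWays(5, 3) == 3, matching the examples above.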

Tail recursive solution in Scala for Linked-List chaining

I wanted to write a tail-recursive solution for the following problem on Leetcode -
You are given two non-empty linked lists representing two non-negative integers. The digits are stored in reverse order and each of their nodes contains a single digit. Add the two numbers and return it as a linked list.
You may assume the two numbers do not contain any leading zero, except the number 0 itself.
Example:
Input: (2 -> 4 -> 3) + (5 -> 6 -> 4)
Output: 7 -> 0 -> 8
Explanation: 342 + 465 = 807.
Link to the problem on Leetcode
I was not able to figure out a way to call the recursive function in the last line.
What I am trying to achieve here is the recursive calling of the add function that adds the heads of the two lists with a carry and returns a node. The returned node is chained with the node in the calling stack.
I am pretty new to scala, I am guessing I may have missed some useful constructs.
/**
* Definition for singly-linked list.
* class ListNode(_x: Int = 0, _next: ListNode = null) {
* var next: ListNode = _next
* var x: Int = _x
* }
*/
import scala.annotation.tailrec
object Solution {
def addTwoNumbers(l1: ListNode, l2: ListNode): ListNode = {
add(l1, l2, 0)
}
//@tailrec
def add(l1: ListNode, l2: ListNode, carry: Int): ListNode = {
var sum = 0;
sum = (if(l1!=null) l1.x else 0) + (if(l2!=null) l2.x else 0) + carry;
if(l1 != null || l2 != null || sum > 0)
ListNode(sum%10,add(if(l1!=null) l1.next else null, if(l2!=null) l2.next else null,sum/10))
else null;
}
}
You have a couple of problems, most of which come down to the code not being idiomatic.
Things like var and null are not common in Scala; usually, you would use a tail-recursive algorithm to avoid that kind of thing.
Finally, remember that a tail-recursive algorithm requires that the last expression is either a plain value or a recursive call. To achieve that, you usually keep track of the remaining job as well as an accumulator.
Here is a possible solution:
type Digit = Int // Refined [0..9]
type Number = List[Digit] // Refined NonEmpty.
def sum(n1: Number, n2: Number): Number = {
def aux(d1: Digit, d2: Digit, carry: Digit): (Digit, Digit) = {
val tmp = d1 + d2 + carry
val d = tmp % 10
val c = tmp / 10
d -> c
}
@annotation.tailrec
def loop(r1: Number, r2: Number, acc: Number, carry: Digit): Number =
(r1, r2) match {
case (d1 :: tail1, d2 :: tail2) =>
val (d, c) = aux(d1, d2, carry)
loop(r1 = tail1, r2 = tail2, d :: acc, carry = c)
case (Nil, d2 :: tail2) =>
val (d, c) = aux(d1 = 0, d2, carry)
loop(r1 = Nil, r2 = tail2, d :: acc, carry = c)
case (d1 :: tail1, Nil) =>
val (d, c) = aux(d1, d2 = 0, carry)
loop(r1 = tail1, r2 = Nil, d :: acc, carry = c)
case (Nil, Nil) =>
acc
}
loop(r1 = n1, r2 = n2, acc = List.empty, carry = 0).reverse
}
Now, this kind of recursion tends to be very verbose.
Usually, the stdlib provides ways to make the same algorithm more concise:
// This is a solution that does not require the numbers to already be reversed, and the output is also in the correct order.
def sum(n1: Number, n2: Number): Number = {
val (result, carry) = n1.reverseIterator.zipAll(n2.reverseIterator, 0, 0).foldLeft(List.empty[Digit] -> 0) {
case ((acc, carry), (d1, d2)) =>
val tmp = d1 + d2 + carry
val d = tmp % 10
val c = tmp / 10
(d :: acc) -> c
}
if (carry > 0) carry :: result else result
}
Scala is less popular on LeetCode, but this Solution (which is not the best) would get accepted by LeetCode's online judge:
import scala.collection.mutable._
object Solution {
def addTwoNumbers(listA: ListNode, listB: ListNode): ListNode = {
var tempBufferA: ListBuffer[Int] = ListBuffer.empty
var tempBufferB: ListBuffer[Int] = ListBuffer.empty
tempBufferA.clear()
tempBufferB.clear()
def listTraversalA(listA: ListNode): ListBuffer[Int] = {
if (listA == null) {
return tempBufferA
} else {
tempBufferA += listA.x
listTraversalA(listA.next)
}
}
def listTraversalB(listB: ListNode): ListBuffer[Int] = {
if (listB == null) {
return tempBufferB
} else {
tempBufferB += listB.x
listTraversalB(listB.next)
}
}
val resultA: ListBuffer[Int] = listTraversalA(listA)
val resultB: ListBuffer[Int] = listTraversalB(listB)
val resultSum: BigInt = BigInt(resultA.reverse.mkString) + BigInt(resultB.reverse.mkString)
var listNodeResult: ListBuffer[ListNode] = ListBuffer.empty
val resultList = resultSum.toString.toList
var lastListNode: ListNode = null
for (i <-0 until resultList.size) {
if (i == 0) {
lastListNode = new ListNode(resultList(i).toString.toInt)
listNodeResult += lastListNode
} else {
lastListNode = new ListNode(resultList(i).toString.toInt, lastListNode)
listNodeResult += lastListNode
}
}
return listNodeResult.reverse(0)
}
}
References
For additional details, you can see the Discussion Board. There are plenty of accepted solutions, explanations, efficient algorithms with a variety of languages, and time/space complexity analysis in there.

Remove consecutive duplicates in a string to make the smallest string

Given a string and the constraint of matching on >= 3 characters, how can you ensure that the result string will be as small as possible?
Edit, adding an explicit example as Gassa suggested:
E.G.
'AAAABBBAC'
If I remove the B's first,
AAAA[BBB]AC --> AAAAAC, then I can remove all of the A's from the resultant string and be left with:
[AAAAA]C --> C
'C'
If I just remove what is available first (the sequence of A's), I get:
[AAAA]BBBAC --> [BBB]AC --> AC
'AC'
A tree would definitely get you the shortest string(s).
The tree solution:
Define a State (node) for each current string Input and all its removable sub-strings' int[] Indexes.
Create the tree: For each int index create another State and add it to the parent state State[] Children.
A State with no possible removable sub-strings has no children (Children = null).
Get all Descendants State[] of your root State, order them by the length of their string Input, and the shortest is/are your answer(s).
Test cases:
string result = FindShortest("AAAABBBAC"); // AC
string result2 = FindShortest("AABBAAAC"); // AABBC
string result3 = FindShortest("BAABCCCBBA"); // B
The Code:
Note: Of course, everyone is welcome to enhance the following code in terms of performance and/or fix any bugs.
using System;
using System.Collections.Generic;
using System.Linq;
class Program
{
static void Main(string[] args)
{
string result = FindShortest("AAAABBBAC"); // C
string result2 = FindShortest("AABBAAAC"); // AABBC
string result3 = FindShortest("BAABCCCBBA"); // B
}
// finds the FIRST shortest string for a given input
private static string FindShortest(string input)
{
// all possible removable strings' indexes
// for this given input
int[] indexes = RemovableIndexes(input);
// each input string and its possible removables are a state
var state = new State { Input = input, Indexes = indexes };
// create the tree
GetChildren(state);
// get the FIRST shortest
// i.e. there would be more than one answer sometimes
// this could be easily changed to get all possible results
var result =
Descendants(state)
.Where(d => d.Children == null || d.Children.Length == 0)
.OrderBy(d => d.Input.Length)
.FirstOrDefault().Input;
return result;
}
// simple get all descendants of a node/state in a tree
private static IEnumerable<State> Descendants(State root)
{
var states = new Stack<State>(new[] { root });
while (states.Any())
{
State node = states.Pop();
yield return node;
if (node.Children != null)
foreach (var n in node.Children) states.Push(n);
}
}
// creates the tree
private static void GetChildren(State state)
{
// for each an index there is a child
state.Children = state.Indexes.Select(
i =>
{
var input = RemoveAllAt(state.Input, i);
return input.Length < state.Input.Length && input.Length > 0
? new State
{
Input = input,
Indexes = RemovableIndexes(input)
}
: null;
}).ToArray();
foreach (var c in state.Children)
GetChildren(c);
}
// find all possible removable strings' indexes
private static int[] RemovableIndexes(string input)
{
var indexes = new List<int>();
char d = input[0];
int count = 1;
for (int i = 1; i < input.Length; i++)
{
if (d == input[i])
count++;
else
{
if (count >= 3)
indexes.Add(i - count);
// reset
d = input[i];
count = 1;
}
}
if (count >= 3)
indexes.Add(input.Length - count);
return indexes.ToArray();
}
// remove all duplicate chars starting from an index
private static string RemoveAllAt(string input, int startIndex)
{
string part1, part2;
int endIndex = startIndex + 1;
int i = endIndex;
for (; i < input.Length; i++)
if (input[i] != input[startIndex])
{
endIndex = i;
break;
}
if (i == input.Length && input[i - 1] == input[startIndex])
endIndex = input.Length;
part1 = startIndex > 0 ? input.Substring(0, startIndex) : string.Empty;
part2 = endIndex <= (input.Length - 1) ? input.Substring(endIndex) : string.Empty;
return part1 + part2;
}
// our node, which is
// an input string &
// all possible removable strings' indexes
// & its children
public class State
{
public string Input;
public int[] Indexes;
public State[] Children;
}
}
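For reference, the same brute-force tree idea fits in a few lines of Scala. This is just a sketch mirroring the approach above (branch on every removable run of 3 or more equal characters and keep the shortest leaf), with names of my own choosing; it is not a drop-in replacement for the C# code.
// All strings reachable from s by removing one run of >= 3 equal characters.
def removals(s: String): List[String] = {
  // start and end indices of the maximal runs of equal characters
  def runs(from: Int): List[(Int, Int)] =
    if (from >= s.length) Nil
    else {
      var end = from
      while (end < s.length && s(end) == s(from)) end += 1
      (from, end) :: runs(end)
    }
  runs(0).collect { case (b, e) if e - b >= 3 => s.substring(0, b) + s.substring(e) }
}
// Explore every removal order and keep the shortest resulting string.
def findShortest(s: String): String = removals(s) match {
  case Nil   => s
  case succs => succs.map(findShortest).minBy(_.length)
}
// findShortest("AAAABBBAC") == "C"; findShortest("AABBAAAC") == "AABBC"; findShortest("BAABCCCBBA") == "B"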
I propose an O(n^2) solution with dynamic programming.
Let's introduce some notation. The prefix and suffix of length l of string A are denoted by P[l] and S[l], and we call our procedure Rcd.
Rcd(A) = Rcd(Rcd(P[n-1])+S[1])
Rcd(A) = Rcd(P[1]+Rcd(S[n-1]))
Note that the outer Rcd on the RHS is trivial. So that's our optimal substructure. Based on this, I came up with the following implementation:
#include <iostream>
#include <string>
#include <vector>
#include <cassert>
using namespace std;
string remdupright(string s, bool allowEmpty) {
if (s.size() >= 3) {
auto pos = s.find_last_not_of(s.back());
if (pos == string::npos && allowEmpty) s = "";
else if (pos != string::npos && s.size() - pos > 3) s = s.substr(0, pos + 1);
}
return s;
}
string remdupleft(string s, bool allowEmpty) {
if (s.size() >= 3) {
auto pos = s.find_first_not_of(s.front());
if (pos == string::npos && allowEmpty) s = "";
else if (pos != string::npos && pos >= 3) s = s.substr(pos);
}
return s;
}
string remdup(string s, bool allowEmpty) {
return remdupleft(remdupright(s, allowEmpty), allowEmpty);
}
string run(const string in) {
vector<vector<string>> table(in.size());
for (int i = 0; i < (int)table.size(); ++i) {
table[i].resize(in.size() - i);
}
for (int i = 0; i < (int)table[0].size(); ++i) {
table[0][i] = in.substr(i,1);
}
for (int len = 2; len <= (int)table.size(); ++len) {
for (int pos = 0; pos < (int)in.size() - len + 1; ++pos) {
string base(table[len - 2][pos]);
const char suffix = in[pos + len - 1];
if (base.size() && suffix != base.back()) {
base = remdupright(base, false);
}
const string opt1 = base + suffix;
base = table[len - 2][pos+1];
const char prefix = in[pos];
if (base.size() && prefix != base.front()) {
base = remdupleft(base, false);
}
const string opt2 = prefix + base;
const string nodupopt1 = remdup(opt1, true);
const string nodupopt2 = remdup(opt2, true);
table[len - 1][pos] = nodupopt1.size() > nodupopt2.size() ? opt2 : opt1;
assert(nodupopt1.size() != nodupopt2.size() || nodupopt1 == nodupopt2);
}
}
string& res = table[in.size() - 1][0];
return remdup(res, true);
}
void testRcd(string s, string expected) {
cout << s << " : " << run(s) << ", expected: " << expected << endl;
}
int main()
{
testRcd("BAABCCCBBA", "B");
testRcd("AABBAAAC", "AABBC");
testRcd("AAAA", "");
testRcd("AAAABBBAC", "C");
}
You can check default and run your tests here.
Clearly we are not concerned about any block of 3 or more repeated characters, since such a block can always be removed on its own. And there is only one way that two blocks of the same character, where at least one of the blocks is less than 3 in length, can be combined: namely, if the sequence between them can be removed.
So (1) look at pairs of blocks of the same character where at least one is less than 3 in length, and (2) determine if the sequence between them can be removed.
We want to decide which pairs to join so as to minimize the total length of blocks less than 3 characters long. (Note that the number of pairs is bounded by the size (and distribution) of the alphabet.)
Let f(b) represent the minimal total length of same-character blocks remaining up to the block b that are less than 3 characters in length. Then:
f(b):
p1 <- previous block of the same character
if b and p1 can combine:
if b.length + p1.length > 2:
f(b) = min(
// don't combine
(0 if b.length > 2 else b.length) +
f(block before b),
// combine
f(block before p1)
)
// b.length + p1.length < 3
else:
p2 <- block previous to p1 of the same character
if p1 and p2 can combine:
f(b) = min(
// don't combine
b.length + f(block before b),
// combine
f(block before p2)
)
else:
f(b) = b.length + f(block before b)
// b and p1 cannot combine
else:
f(b) = b.length + f(block before b)
for all p1 before b
The question is how can we efficiently determine if a block can be combined with the previous block of the same character (aside from the obvious recursion into the sub-block-list between the two blocks).
Python code:
import random
import time
def parse(length):
return length if length < 3 else 0
def f(string):
chars = {}
blocks = [[string[0], 1, 0]]
chars[string[0]] = {'indexes': [0]}
chars[string[0]][0] = {'prev': -1}
p = 0 # pointer to current block
for i in xrange(1, len(string)):
if blocks[len(blocks) - 1][0] == string[i]:
blocks[len(blocks) - 1][1] += 1
else:
p += 1
# [char, length, index, f(i), temp]
blocks.append([string[i], 1, p])
if string[i] in chars:
chars[string[i]][p] = {'prev': chars[string[i]]['indexes'][ len(chars[string[i]]['indexes']) - 1 ]}
chars[string[i]]['indexes'].append(p)
else:
chars[string[i]] = {'indexes': [p]}
chars[string[i]][p] = {'prev': -1}
#print blocks
#print
#print chars
#print
memo = [[None for j in xrange(len(blocks))] for i in xrange(len(blocks))]
def g(l, r, top_level=False):
####
####
#print "(l, r): (%s, %s)" % (l,r)
if l == r:
return parse(blocks[l][1])
if memo[l][r]:
return memo[l][r]
result = [parse(blocks[l][1])] + [None for k in xrange(r - l)]
if l < r:
for i in xrange(l + 1, r + 1):
result[i - l] = parse(blocks[i][1]) + result[i - l - 1]
for i in xrange(l, r + 1):
####
####
#print "\ni: %s" % i
[char, length, index] = blocks[i]
#p1 <- previous block of the same character
p1_idx = chars[char][index]['prev']
####
####
#print "(p1_idx, l, p1_idx >= l): (%s, %s, %s)" % (p1_idx, l, p1_idx >= l)
if p1_idx < l and index > l:
result[index - l] = parse(length) + result[index - l - 1]
while p1_idx >= l:
p1 = blocks[p1_idx]
####
####
#print "(b, p1, p1_idx, l): (%s, %s, %s, %s)\n" % (blocks[i], p1, p1_idx, l)
between = g(p1[2] + 1, index - 1)
####
####
#print "between: %s" % between
#if b and p1 can combine:
if between == 0:
if length + p1[1] > 2:
result[index - l] = min(
result[index - l],
# don't combine
parse(length) + (result[index - l - 1] if index - l > 0 else 0),
# combine: f(block before p1)
result[p1[2] - l - 1] if p1[2] > l else 0
)
# b.length + p1.length < 3
else:
#p2 <- block previous to p1 of the same character
p2_idx = chars[char][p1[2]]['prev']
if p2_idx < l:
p1_idx = chars[char][p1_idx]['prev']
continue
between2 = g(p2_idx + 1, p1[2] - 1)
#if p1 and p2 can combine:
if between2 == 0:
result[index - l] = min(
result[index - l],
# don't combine
parse(length) + (result[index - l - 1] if index - l > 0 else 0),
# combine the block, p1 and p2
result[p2_idx - l - 1] if p2_idx - l > 0 else 0
)
else:
#f(b) = b.length + f(block before b)
result[index - l] = min(
result[index - l],
parse(length) + (result[index - l - 1] if index - l > 0 else 0)
)
# b and p1 cannot combine
else:
#f(b) = b.length + f(block before b)
result[index - l] = min(
result[index - l],
parse(length) + (result[index - l - 1] if index - l > 0 else 0)
)
p1_idx = chars[char][p1_idx]['prev']
#print l,r,result
memo[l][r] = result[r - l]
"""if top_level:
return (result, blocks)
else:"""
return result[r - l]
if len(blocks) == 1:
return ([parse(blocks[0][1])], blocks)
else:
return g(0, len(blocks) - 1, True)
"""s = ""
for i in xrange(300):
s = s + ['A','B','C'][random.randint(0,2)]"""
print f("abcccbcccbacccab") # b
print
print f("AAAABBBAC"); # C
print
print f("CAAAABBBA"); # C
print
print f("AABBAAAC"); # AABBC
print
print f("BAABCCCBBA"); # B
print
print f("aaaa")
print
The string answers for these longer examples were computed using jdehesa's answer:
t0 = time.time()
print f("BCBCCBCCBCABBACCBABAABBBABBBACCBBBAABBACBCCCACABBCAABACBBBBCCCBBAACBAABACCBBCBBAABCCCCCAABBBBACBBAAACACCBCCBBBCCCCCCCACBABACCABBCBBBBBCBABABBACCAACBCBBAACBBBBBCCBABACBBABABAAABCCBBBAACBCACBAABAAAABABB")
# BCBCCBCCBCABBACCBABCCAABBACBACABBCAABACAACBAABACCBBCBBCACCBACBABACCABBCCBABABBACCAACBCBBAABABACBBABABBCCAACBCACBAABBABB
t1 = time.time()
total = t1-t0
print total
t0 = time.time()
print f("CBBACAAAAABBBBCAABBCBAABBBCBCBCACACBAABCBACBBABCABACCCCBACBCBBCBACBBACCCBAAAACACCABAACCACCBCBCABAACAABACBABACBCBAACACCBCBCCCABACABBCABBAAAAABBBBAABAABBCACACABBCBCBCACCCBABCAACBCAAAABCBCABACBABCABCBBBBABCBACABABABCCCBBCCBBCCBAAABCABBAAABBCAAABCCBAABAABCAACCCABBCAABCBCBCBBAACCBBBACBBBCABAABCABABABABCA")
# CBBACCAABBCBAACBCBCACACBAABCBACBBABCABABACBCBBCBACBBABCACCABAACCACCBCBCABAACAABACBABACBCBAACACCBCBABACABBCBBCACACABBCBCBCABABCAACBCBCBCABACBABCABCABCBACABABACCBBCCBBCACBCCBAABAABCBBCAABCBCBCBBAACCACCABAABCABABABABCA
t1 = time.time()
total = t1-t0
print total
t0 = time.time()
print f("AADBDBEBBBBCABCEBCDBBBBABABDCCBCEBABADDCABEEECCECCCADDACCEEAAACCABBECBAEDCEEBDDDBAAAECCBBCEECBAEBEEEECBEEBDACDDABEEABEEEECBABEDDABCDECDAABDAEADEECECEBCBDDAEEECCEEACCBBEACDDDDBDBCCAAECBEDAAAADBEADBAAECBDEACDEABABEBCABDCEEAABABABECDECADCEDAEEEBBBCEDECBCABDEDEBBBABABEEBDAEADBEDABCAEABCCBCCEDCBBEBCECCCA")
# AADBDBECABCEBCDABABDCCBCEBABADDCABCCEADDACCEECCABBECBAEDCEEBBECCBBCEECBAEBCBEEBDACDDABEEABCBABEDDABCDECDAABDAEADEECECEBCBDDACCEEACCBBEACBDBCCAAECBEDDBEADBAAECBDEACDEABABEBCABDCEEAABABABECDECADCEDACEDECBCABDEDEABABEEBDAEADBEDABCAEABCCBCCEDCBBEBCEA
t1 = time.time()
total = t1-t0
print total
Another Scala answer, using memoization and (partial) tail-call optimization (updated).
import scala.collection.mutable.HashSet
import scala.annotation._
object StringCondense extends App {
@tailrec
def groupConsecutive (s: String, sofar: List[String]): List[String] = s.toList match {
// def groupConsecutive (s: String): List[String] = s.toList match {
case Nil => sofar
// case Nil => Nil
case c :: str => {
val (prefix, rest) = (c :: str).span (_ == c)
// Strings of equal characters, longer than 3, don't make a difference to just 3
groupConsecutive (rest.mkString(""), (prefix.take (3)).mkString ("") :: sofar)
// (prefix.take (3)).mkString ("") :: groupConsecutive (rest.mkString(""))
}
}
// to count the effect of memoization
var count = 0
// recursively try to eliminate every group of 3 or more, brute forcing
// but for "aabbaabbaaabbbaabb", many reductions will lead sooner or
// later to the same result, so we try to detect these and avoid duplicate
// work
def moreThan2consecutive (s: String, seenbefore: HashSet [String]): String = {
if (seenbefore.contains (s)) s
else
{
count += 1
seenbefore += s
val sublists = groupConsecutive (s, Nil)
// val sublists = groupConsecutive (s)
val atLeast3 = sublists.filter (_.size > 2)
atLeast3.length match {
case 0 => s
case 1 => {
val res = sublists.filter (_.size < 3)
moreThan2consecutive (res.mkString (""), seenbefore)
}
case _ => {
val shrinked = (
for {idx <- (0 until sublists.size)
if (sublists (idx).length >= 3)
pre = (sublists.take (idx)).mkString ("")
post= (sublists.drop (idx+1)).mkString ("")
} yield {
moreThan2consecutive (pre + post, seenbefore)
}
)
(shrinked.head /: shrinked.tail) ((a, b) => if (a.length <= b.length) a else b)
}
}
}
}
// don't know what Rcd means, adopted from other solution but modified
// kind of a unit test; update: forgot to reset count
def testRcd (s: String, expected: String) : Boolean = {
count = 0
val seenbefore = HashSet [String] ()
val result = moreThan2consecutive (s, seenbefore)
val hit = result.equals (expected)
println (s"Input: $s\t result: ${result}\t expected ${expected}\t $hit\t count: $count");
hit
}
// some test values from other users with expected result
// **upd:** more testcases
def testgroup () : Unit = {
testRcd ("baabcccbba", "b")
testRcd ("aabbaaac", "aabbc")
testRcd ("aaaa", "")
testRcd ("aaaabbbac", "c")
testRcd ("abcccbcccbacccab", "b")
testRcd ("AAAABBBAC", "C")
testRcd ("CAAAABBBA", "C")
testRcd ("AABBAAAC", "AABBC")
testRcd ("BAABCCCBBA", "B")
testRcd ("AAABBBAAABBBAAABBBC", "C") // 377 subcalls reported by Yola,
testRcd ("AAABBBAAABBBAAABBBAAABBBC", "C") // 4913 when preceeded with AAABBB
}
testgroup
def testBigs () : Unit = {
/*
testRcd ("BCBCCBCCBCABBACCBABAABBBABBBACCBBBAABBACBCCCACABBCAABACBBBBCCCBBAACBAABACCBBCBBAABCCCCCAABBBBACBBAAACACCBCCBBBCCCCCCCACBABACCABBCBBBBBCBABABBACCAACBCBBAACBBBBBCCBABACBBABABAAABCCBBBAACBCACBAABAAAABABB",
"BCBCCBCCBCABBACCBABCCAABBACBACABBCAABACAACBAABACCBBCBBCACCBACBABACCABBCCBABABBACCAACBCBBAABABACBBABABBCCAACBCACBAABBABB")
*/
testRcd ("CBBACAAAAABBBBCAABBCBAABBBCBCBCACACBAABCBACBBABCABACCCCBACBCBBCBACBBACCCBAAAACACCABAACCACCBCBCABAACAABACBABACBCBAACACCBCBCCCABACABBCABBAAAAABBBBAABAABBCACACABBCBCBCACCCBABCAACBCAAAABCBCABACBABCABCBBBBABCBACABABABCCCBBCCBBCCBAAABCABBAAABBCAAABCCBAABAABCAACCCABBCAABCBCBCBBAACCBBBACBBBCABAABCABABABABCA",
"CBBACCAABBCBAACBCBCACACBAABCBACBBABCABABACBCBBCBACBBABCACCABAACCACCBCBCABAACAABACBABACBCBAACACCBCBABACABBCBBCACACABBCBCBCABABCAACBCBCBCABACBABCABCABCBACABABACCBBCCBBCACBCCBAABAABCBBCAABCBCBCBBAACCACCABAABCABABABABCA")
/*testRcd ("AADBDBEBBBBCABCEBCDBBBBABABDCCBCEBABADDCABEEECCECCCADDACCEEAAACCABBECBAEDCEEBDDDBAAAECCBBCEECBAEBEEEECBEEBDACDDABEEABEEEECBABEDDABCDECDAABDAEADEECECEBCBDDAEEECCEEACCBBEACDDDDBDBCCAAECBEDAAAADBEADBAAECBDEACDEABABEBCABDCEEAABABABECDECADCEDAEEEBBBCEDECBCABDEDEBBBABABEEBDAEADBEDABCAEABCCBCCEDCBBEBCECCCA",
"AADBDBECABCEBCDABABDCCBCEBABADDCABCCEADDACCEECCABBECBAEDCEEBBECCBBCEECBAEBCBEEBDACDDABEEABCBABEDDABCDECDAABDAEADEECECEBCBDDACCEEACCBBEACBDBCCAAECBEDDBEADBAAECBDEACDEABABEBCABDCEEAABABABECDECADCEDACEDECBCABDEDEABABEEBDAEADBEDABCAEABCCBCCEDCBBEBCEA")
*/
}
// for generated input, but with fixed seed, to compare the count with
// and without memoization
import util.Random
val r = new Random (31415)
// generate Strings but with high chances to produce some triples and
// longer sequences of char clones
def genRandomString () : String = {
(1 to 20).map (_ => r.nextInt (6) match {
case 0 => "t"
case 1 => "r"
case 2 => "-"
case 3 => "tt"
case 4 => "rr"
case 5 => "--"
}).mkString ("")
}
def testRandom () : Unit = {
(1 to 10).map (i=> testRcd (genRandomString, "random mode - false might be true"))
}
testRandom
testgroup
testRandom
// testBigs
}
Comparing the effect of memoization led to interesting results:
Updated measurements. In the old values, I forgot to reset the counter, which led to much higher results. Now the spread of results is much more impressive and, in total, the values are smaller.
No seenbefore:
Input: baabcccbba result: b expected b true count: 4
Input: aabbaaac result: aabbc expected aabbc true count: 2
Input: aaaa result: expected true count: 2
Input: aaaabbbac result: c expected c true count: 5
Input: abcccbcccbacccab result: b expected b true count: 34
Input: AAAABBBAC result: C expected C true count: 5
Input: CAAAABBBA result: C expected C true count: 5
Input: AABBAAAC result: AABBC expected AABBC true count: 2
Input: BAABCCCBBA result: B expected B true count: 4
Input: AAABBBAAABBBAAABBBC res: C expected C true count: 377
Input: AAABBBAAABBBAAABBBAAABBBC r: C expected C true count: 4913
Input: r--t----ttrrrrrr--tttrtttt--rr----result: rr--rr expected ? unknown ? false count: 1959
Input: ttrtt----tr---rrrtttttttrtr--rr result: r--rr expected ? unknown ? false count: 213
Input: tt----r-----ttrr----ttrr-rr--rr-- result: ttrttrrttrr-rr--rr-- ex ? unknown ? false count: 16
Input: --rr---rrrrrrr-r--rr-r--tt--rrrrr result: rr-r--tt-- expected ? unknown ? false count: 32
Input: tt-rrrrr--r--tt--rrtrrr------- result: ttr--tt--rrt expected ? unknown ? false count: 35
Input: --t-ttt-ttt--rrrrrt-rrtrttrr result: --tt-rrtrttrr expected ? unknown ? false count: 35
Input: rrt--rrrr----trrr-rttttrrtttrr result: rrtt- expected ? unknown ? false count: 1310
Input: ---tttrrrrrttrrttrr---tt-----tt result: rrttrr expected ? unknown ? false count: 1011
Input: -rrtt--rrtt---t-r--r---rttr-- result: -rrtt--rr-r--rrttr-- ex ? unknown ? false count: 9
Input: rtttt--rrrrrrrt-rrttt--tt--t result: r--t-rr--tt--t expectd ? unknown ? false count: 16
real 0m0.607s (without testBigs)
user 0m1.276s
sys 0m0.056s
With seenbefore:
Input: baabcccbba result: b expected b true count: 4
Input: aabbaaac result: aabbc expected aabbc true count: 2
Input: aaaa result: expected true count: 2
Input: aaaabbbac result: c expected c true count: 5
Input: abcccbcccbacccab result: b expected b true count: 11
Input: AAAABBBAC result: C expected C true count: 5
Input: CAAAABBBA result: C expected C true count: 5
Input: AABBAAAC result: AABBC expected AABBC true count: 2
Input: BAABCCCBBA result: B expected B true count: 4
Input: AAABBBAAABBBAAABBBC rest: C expected C true count: 28
Input: AAABBBAAABBBAAABBBAAABBBC C expected C true count: 52
Input: r--t----ttrrrrrr--tttrtttt--rr----result: rr--rr expected ? unknown ? false count: 63
Input: ttrtt----tr---rrrtttttttrtr--rr result: r--rr expected ? unknown ? false count: 48
Input: tt----r-----ttrr----ttrr-rr--rr-- result: ttrttrrttrr-rr--rr-- xpe? unknown ? false count: 8
Input: --rr---rrrrrrr-r--rr-r--tt--rrrrr result: rr-r--tt-- expected ? unknown ? false count: 19
Input: tt-rrrrr--r--tt--rrtrrr------- result: ttr--tt--rrt expected ? unknown ? false count: 12
Input: --t-ttt-ttt--rrrrrt-rrtrttrr result: --tt-rrtrttrr expected ? unknown ? false count: 16
Input: rrt--rrrr----trrr-rttttrrtttrr result: rrtt- expected ? unknown ? false count: 133
Input: ---tttrrrrrttrrttrr---tt-----tt result: rrttrr expected ? unknown ? false count: 89
Input: -rrtt--rrtt---t-r--r---rttr-- result: -rrtt--rr-r--rrttr-- ex ? unknown ? false count: 6
Input: rtttt--rrrrrrrt-rrttt--tt--t result: r--t-rr--tt--t expected ? unknown ? false count: 8
real 0m0.474s (without testBigs)
user 0m0.852s
sys 0m0.060s
With tailcall:
real 0m0.478s (without testBigs)
user 0m0.860s
sys 0m0.060s
For some random strings, the difference is bigger than 10-fold.
For long strings with many groups one could, as an improvement, eliminate all groups which are the only group of that character, for instance:
aa bbb aa ccc xx ddd aa eee aa fff xx
The groups bbb, ccc, ddd, eee and fff are unique in the string, so they can't combine with anything else and could all be eliminated, and the order of removal will not matter. This would lead to the intermediate result
aaaa xx aaaa xx
and a fast solution. Maybe I will try to implement it too. However, I guess it will be possible to produce random strings where this has a big impact and, with a different form of randomly generated strings, distributions where the impact is low.
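To make the idea concrete, here is a small Scala sketch of just that preprocessing step (the names are mine and it is not part of the memoized solver above): drop every run of 3 or more equal characters whose character occurs in no other run, then repeat, because the neighbouring runs may have merged.
def preReduce(s: String): String = {
  // split into runs of equal characters, e.g. "aabbbaa" -> List("aa", "bbb", "aa")
  def runs(str: String): List[String] =
    if (str.isEmpty) Nil
    else {
      val (run, rest) = str.span(_ == str.head)
      run :: runs(rest)
    }
  val rs = runs(s)
  // how many runs each character appears in
  val occurrences = rs.groupBy(_.head).map { case (c, g) => c -> g.size }
  // a run is safe to drop if it is removable (length >= 3) and its character has no other run
  val kept = rs.filterNot(r => r.length >= 3 && occurrences(r.head) == 1)
  val reduced = kept.mkString
  if (reduced == s) s else preReduce(reduced)  // neighbours may have merged into new runs
}
// preReduce("aabbbaacccxxdddaaeeeaafffxx") == "aaaaxxaaaaxx", as in the example above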
Here is a Python solution (function reduce_min), not particularly smart but I think fairly easy to understand (excessive amount of comments added for answer clarity):
def reductions(s, min_len):
"""
Yields every possible reduction of s by eliminating contiguous blocks
of min_len or more repeated characters.
For example, reductions('AAABBCCCCBAAC', 3) yields
'BBCCCCBAAC' and 'AAABBBAAC'.
"""
# Current character
curr = ''
# Length of current block
n = 0
# Start position of current block
idx = 0
# For each character
for i, c in enumerate(s):
if c != curr:
# New block begins
if n >= min_len:
# If previous block was long enough
# yield reduced string without it
yield s[:idx] + s[i:]
# Start new block
curr = c
n = 1
idx = i
else:
# Still in the same block
n += 1
# Yield reduction without last block if it was long enough
if n >= min_len:
yield s[:idx]
def reduce_min(s, min_len):
"""
Finds the smallest possible reduction of s by successive
elimination of contiguous blocks of min_len or more repeated
characters.
"""
# Current set of possible reductions
rs = set([s])
# Current best solution
result = s
# While there are strings to reduce
while rs:
# Get one element
r = rs.pop()
# Find reductions
r_red = list(reductions(r, min_len))
# If no reductions are found it is irreducible
if len(r_red) == 0 and len(r) < len(result):
# Replace if shorter than current best
result = r
else:
# Save reductions for next iterations
rs.update(r_red)
return result
assert reduce_min("BAABCCCBBA", 3) == "B"
assert reduce_min("AABBAAAC", 3) == "AABBC"
assert reduce_min("AAAA", 3) == ""
assert reduce_min("AAAABBBAC", 3) == "C"
EDIT: Since people seem to be posting C++ solutions, here is mine in C++ (again, function reduce_min):
#include <string>
#include <vector>
#include <unordered_set>
#include <iterator>
#include <utility>
#include <cassert>
using namespace std;
void reductions(const string &s, unsigned int min_len, vector<string> &rs)
{
char curr = '\0';
unsigned int n = 0;
unsigned int idx = 0;
for (auto it = s.begin(); it != s.end(); ++it)
{
if (curr != *it)
{
auto i = distance(s.begin(), it);
if (n >= min_len)
{
rs.push_back(s.substr(0, idx) + s.substr(i));
}
curr = *it;
n = 1;
idx = i;
}
else
{
n += 1;
}
}
if (n >= min_len)
{
rs.push_back(s.substr(0, idx));
}
}
string reduce_min(const string &s, unsigned int min_len)
{
unordered_set<string> rs { s };
string result = s;
vector<string> rs_new;
while (!rs.empty())
{
auto it = rs.begin();
auto r = *it;
rs.erase(it);
rs_new.clear();
reductions(r, min_len, rs_new);
if (rs_new.empty() && r.size() < result.size())
{
result = move(r);
}
else
{
rs.insert(rs_new.begin(), rs_new.end());
}
}
return result;
}
int main(int argc, char **argv)
{
assert(reduce_min("BAABCCCBBA", 3) == "B");
assert(reduce_min("AABBAAAC", 3) == "AABBC");
assert(reduce_min("AAAA", 3) == "");
assert(reduce_min("AAAABBBAC", 3) == "C");
return 0;
}
If you can use C++17 you can save memory by using string views.
EDIT 2: About the complexity of the algorithm. It is not straightforward to figure out, and as I said the algorithm is meant to be simple more than anything, but let's see. In the end, it is more or less the same as a breadth-first search. Let's say the string length is n, and, for generality, let's say the minimum block length (value 3 in the question) is m. In the first level, we can generate up to n / m reductions in the worst case. For each of these, we can generate up to (n - m) / m reductions, and so on. So basically, at "level" i (loop iteration i) we create up to (n - i * m) / m reductions per string we had, and each of these will take O(n - i * m) time to process. The maximum number of levels we can have is, again, n / m. So the complexity of the algorithm (if I'm not making mistakes) should have the form:
O( sum {i = 0 .. n / m} ( O(n - i * m) * prod {j = 0 .. i} ((n - i * m) / m) ))
|-Outer iters--| |---Cost---| |-Prev lvl-| |---Branching---|
Whew. So this should be something like:
O( sum {i = 0 .. n / m} (n - i * m) * O(n^i / m^i) )
Which in turn would collapse to:
O((n / m)^(n / m))
So yeah, the algorithm is more or less simple, but it can run into exponential cost cases (the bad cases would be strings made entirely of exactly m-long blocks, like AAABBBCCCAAACCC... for m = 3).

Pouring water using Scala

I am trying to solve the pouring water problem from codechef using scala. The problem statement is as follows:
Given two vessels, one of which can accommodate a liters of water and
the other which can accommodate b liters of water, determine the
number of steps required to obtain exactly c liters of water in one of
the vessels.
At the beginning both vessels are empty. The following operations are
counted as 'steps':
emptying a vessel,
filling a vessel,
pouring water from one vessel to the other, without spilling, until one of the vessels is either full or empty.
Input
An integer t, 1<=t<=100, denoting the number of test cases, followed
by t sets of input data, each consisting of three positive integers a
(the number of liters the first container can hold), b (the number of
liters the second container can hold), and c (the final amount of
liters of water one vessel should contain), not larger than 40000,
given in separate lines.
Output
For each set of input data, output the minimum number of steps
required to obtain c liters, or -1 if this is impossible.
Example Sample input:
2
5
2
3
2
3
4
Sample output:
2
-1
I am approaching this problem as a graph theory problem. Given the initial configuration of containers to be (0, 0), I get the next state of the containers by applying the operations:
FillA, FillB, PourAtoB, PourBtoA, EmptyA, EmptyB recursively until the target is reached.
My code is as follows:
import scala.collection.mutable.Queue
def pour(initA:Int, initB:Int, targetCapacity:Int) {
var pourCombinations = new scala.collection.mutable.HashMap[(Int, Int),Int]
val capacityA = initA
val capacityB = initB
val processingQueue = new Queue[(Int, Int, Int, Int)]
def FillA(a:Int, b:Int) = {
(capacityA, b)
}
def FillB(b:Int, a:Int) = {
(a, capacityB)
}
def PourAtoB(a:Int, b:Int): (Int, Int) = {
if((a == 0) || (b == capacityB)) (a, b)
else PourAtoB(a - 1, b + 1)
}
def PourBtoA(b:Int, a:Int): (Int, Int) = {
if((b == 0) || (a == capacityA)) (a, b)
else PourBtoA(b - 1, a + 1)
}
def EmptyA(a:Int, b:Int) = {
(0, b)
}
def EmptyB(a:Int, b:Int) = {
(a, 0)
}
processingQueue.enqueue((0, 0, targetCapacity, 0))
pourCombinations((0, 0)) = 0
def pourwater(a:Int, b:Int, c:Int, numSteps:Int): Int = {
println(a + ":" + b + ":" + c + ":" + numSteps)
if((a == c) || (b == c)) {return numSteps}
if(processingQueue.isEmpty && (pourCombinations((a,b)) == 1)) {return -1}
//Put all the vals in a List of tuples
val pStateList = scala.List(FillA(a, b), FillB(a, b), PourAtoB(a, b), PourBtoA(b, a), EmptyA(a, b), EmptyB(a, b))
pStateList.foreach{e =>
{
if(!pourCombinations.contains(e)) {
pourCombinations(e) = 0
processingQueue.enqueue((e._1, e._2, c, numSteps + 1))
}
}
}
pourCombinations((a, b)) = 1
val processingTuple = processingQueue.dequeue()
pourwater(processingTuple._1, processingTuple._2, processingTuple._3, processingTuple._4)
}
val intialvalue = processingQueue.dequeue()
pourwater(intialvalue._1, intialvalue._2, intialvalue._3, intialvalue._4)
}
There are a couple of issues with this. First of all, I am not sure if I have the base cases of my recursive step set up properly. Also, it might be that I am not using the proper Scala conventions to solve this problem. Finally, I want the pour function to return numSteps once it is finished executing; it is not doing that at the moment.
It would be great if somebody could go through my code and point out the mistakes in my approach.
Thanks
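For what it's worth, a stripped-down sketch of the breadth-first search described in the question could look like the following. The structure and names are my own, it is not tuned for the codechef limits, and it simply returns the number of steps (or -1):
import scala.collection.immutable.Queue
// Breadth-first search over (amountA, amountB) states; each step applies
// the six moves (fill A, fill B, empty A, empty B, pour A->B, pour B->A).
def minSteps(capA: Int, capB: Int, target: Int): Int = {
  def moves(a: Int, b: Int): List[(Int, Int)] = List(
    (capA, b), (a, capB), (0, b), (a, 0),
    { val t = math.min(a, capB - b); (a - t, b + t) },   // pour A into B
    { val t = math.min(b, capA - a); (a + t, b - t) }    // pour B into A
  )
  @scala.annotation.tailrec
  def bfs(queue: Queue[((Int, Int), Int)], seen: Set[(Int, Int)]): Int =
    if (queue.isEmpty) -1
    else {
      val (((a, b), steps), rest) = queue.dequeue
      if (a == target || b == target) steps
      else {
        val next = moves(a, b).distinct.filterNot(seen)
        bfs(rest ++ next.map(_ -> (steps + 1)), seen ++ next)
      }
    }
  bfs(Queue(((0, 0), 0)), Set((0, 0)))
}
// minSteps(5, 2, 3) == 2 and minSteps(2, 3, 4) == -1, matching the sample input above.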

MergeSort in scala

I came across another codechef problem which I am attempting to solve in Scala. The problem statement is as follows:
Stepford Street was a dead end street. The houses on Stepford Street
were bought by wealthy millionaires. They had them extensively altered
so that as one progressed along the street, the height of the
buildings increased rapidly. However, not all millionaires were
created equal. Some refused to follow this trend and kept their houses
at their original heights. The resulting progression of heights was
thus disturbed. A contest to locate the most ordered street was
announced by the Beverly Hills Municipal Corporation. The criteria for
the most ordered street was set as follows: If there exists a house
with a lower height later in the street than the house under
consideration, then the pair (current house, later house) counts as 1
point towards the disorderliness index of the street. It is not
necessary that the later house be adjacent to the current house. Note:
No two houses on a street will be of the same height For example, for
the input: 1 2 4 5 3 6 The pairs (4,3), (5,3) form disordered pairs.
Thus the disorderliness index of this array is 2. As the criteria for
determining the disorderliness is complex, the BHMC has requested your
help to automate the process. You need to write an efficient program
that calculates the disorderliness index of a street.
A sample input output provided is as follows:
Input: 1 2 4 5 3 6
Output: 2
The output is 2 because of two pairs (4,3) and (5,3)
To solve this problem I thought I should use a variant of MergeSort, incrementing the disorderliness count by 1 when the left element is greater than the right element.
My scala code is as follows:
def dysfunctionCalc(input:List[Int]):Int = {
val leftHalf = input.size/2
println("HalfSize:"+leftHalf)
val isOdd = input.size%2
println("Is odd:"+isOdd)
val leftList = input.take(leftHalf+isOdd)
println("LeftList:"+leftList)
val rightList = input.drop(leftHalf+isOdd)
println("RightList:"+rightList)
if ((leftList.size <= 1) && (rightList.size <= 1)){
println("Entering input where both lists are <= 1")
if(leftList.size == 0 || rightList.size == 0){
println("One of the lists is less than 0")
0
}
else if(leftList.head > rightList.head)1 else 0
}
else{
println("Both lists are greater than 1")
dysfunctionCalc(leftList) + dysfunctionCalc(rightList)
}
}
First off, my logic is wrong: it doesn't have a merge stage, and I am not sure what would be the best way to percolate the result of the base case up the stack and compare it with the other values. Also, using recursion to solve this problem may not be the most optimal way to go, since for large lists I may be blowing up the stack. There might also be stylistic issues with my code.
It would be great if somebody could point out other flaws and the right way to solve this problem.
Thanks
Suppose you split your list into three pieces: the item you are considering, those on the left, and those on the right. Suppose further that those on the left are in a sorted set. Now you just need to walk through the list, moving items from "right" to "considered" and from "considered" to "left"; at each point, you look at the size of the subset of the sorted set that is greater than your item. In general, the size lookup can be done in O(log(N)) as can the add-element (with a Red-Black or AVL tree, for instance). So you have O(N log N) performance.
Now the question is how to implement this in Scala efficiently. It turns out that Scala has a Red-Black tree used for its TreeSet sorted set, and the implementation is actually quite simple (here in tail-recursive form):
import collection.immutable.TreeSet
final def calcDisorder(xs: List[Int], left: TreeSet[Int] = TreeSet.empty, n: Int = 0): Int = xs match {
case Nil => n
case x :: rest => calcDisorder(rest, left + x, n + left.from(x).size)
}
Unfortunately, left.from(x).size takes O(N) time (I believe), which yields a quadratic execution time. That's no good--what you need is an IndexedTreeSet which can do indexOf(x) in O(log(n)) (and then iterate with n + left.size - left.indexOf(x) - 1). You can build your own implementation or find one on the web. For instance, I found one here (API here) for Java that does exactly the right thing.
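If pulling in an external IndexedTreeSet is not an option, a Fenwick tree (binary indexed tree) over the ranks of the heights gives the same O(N log N) bound. The sketch below is that alternative, not the IndexedTreeSet approach itself; it assumes all heights are distinct, as the problem statement guarantees:
// Walk left to right; for each height, count how many previously seen
// heights are greater than it, using a Fenwick tree over the ranks.
def disorderliness(xs: List[Int]): Long = {
  val sorted = xs.sorted.toArray
  def rank(x: Int): Int = java.util.Arrays.binarySearch(sorted, x) + 1 // 1-based rank
  val tree = new Array[Long](sorted.length + 1)
  def add(pos: Int): Unit = { var i = pos; while (i < tree.length) { tree(i) += 1; i += i & -i } }
  def countUpTo(pos: Int): Long = { var i = pos; var s = 0L; while (i > 0) { s += tree(i); i -= i & -i }; s }
  var seen = 0L
  var pairs = 0L
  for (x <- xs) {
    val r = rank(x)
    pairs += seen - countUpTo(r) // previously seen heights greater than x
    add(r)
    seen += 1
  }
  pairs
}
// disorderliness(List(1, 2, 4, 5, 3, 6)) == 2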
Incidentally, the problem with doing a mergesort is that you cannot easily work cumulatively. With merging a pair, you can keep track of how out-of-order it is. But when you merge in a third list, you must see how out of order it is with respect to both other lists, which spoils your divide-and-conquer strategy. (I am not sure whether there is some invariant one could find that would allow you to calculate directly if you kept track of it.)
Here is my try, I don't use MergeSort but it seems to solve the problem:
def calcDisorderness(myList:List[Int]):Int = myList match{
case Nil => 0
case t::q => q.count(_<t) + calcDisorderness(q)
}
scala> val input = List(1,2,4,5,3,6)
input: List[Int] = List(1, 2, 4, 5, 3, 6)
scala> calcDisorderness(input)
res1: Int = 2
The question is, is there a way to have a lower complexity?
Edit: tail recursive version of the same function and cool usage of default values in function arguments.
def calcDisorderness(myList:List[Int], disorder:Int=0):Int = myList match{
case Nil => disorder
case t::q => calcDisorderness(q, disorder + q.count(_<t))
}
A solution based on Merge Sort. Not super fast, potential slowdown could be in "xs.length".
def countSwaps(a: Array[Int]): Long = {
var disorder: Long = 0
def msort(xs: List[Int]): List[Int] = {
import Stream._
def merge(left: List[Int], right: List[Int], inc: Int): Stream[Int] = {
(left, right) match {
case (x :: xs, y :: ys) if x > y =>
cons(y, merge(left, ys, inc + 1))
case (x :: xs, _) => {
disorder += inc
cons(x, merge(xs, right, inc))
}
case _ => right.toStream
}
}
val n = xs.length / 2
if (n == 0)
xs
else {
val (ys, zs) = xs splitAt n
merge(msort(ys), msort(zs), 0).toList
}
}
msort(a.toList)
disorder
}
Another solution based on Merge Sort. Very fast: no FP or for-loop. (Count here is assumed to be an alias for Long.)
type Count = Long
def countSwaps(a: Array[Int]): Count = {
var swaps: Count = 0
def mergeRun(begin: Int, run_len: Int, src: Array[Int], dst: Array[Int]) = {
var li = begin
val lend = math.min(begin + run_len, src.length)
var ri = begin + run_len
val rend = math.min(begin + run_len * 2, src.length)
var ti = begin
while (ti < rend) {
if (ri >= rend) {
dst(ti) = src(li); li += 1
swaps += ri - begin - run_len
} else if (li >= lend) {
dst(ti) = src(ri); ri += 1
} else if (a(li) <= a(ri)) {
dst(ti) = src(li); li += 1
swaps += ri - begin - run_len
} else {
dst(ti) = src(ri); ri += 1
}
ti += 1
}
}
val b = new Array[Int](a.length)
var run = 0
var run_len = 1
while (run_len < a.length) {
var begin = 0
while (begin < a.length) {
val (src, dst) = if (run % 2 == 0) (a, b) else (b, a)
mergeRun(begin, run_len, src, dst)
begin += run_len * 2
}
run += 1
run_len *= 2
}
swaps
}
Converting the above code to functional style: no mutable variables, no loops.
All recursive calls are tail calls, so the performance is good.
def countSwaps(a: Array[Int]): Count = {
def mergeRun(li: Int, lend: Int, rb: Int, ri: Int, rend: Int, di: Int, src: Array[Int], dst: Array[Int], swaps: Count): Count = {
if (ri >= rend && li >= lend) {
swaps
} else if (ri >= rend) {
dst(di) = src(li)
mergeRun(li + 1, lend, rb, ri, rend, di + 1, src, dst, ri - rb + swaps)
} else if (li >= lend) {
dst(di) = src(ri)
mergeRun(li, lend, rb, ri + 1, rend, di + 1, src, dst, swaps)
} else if (src(li) <= src(ri)) {
dst(di) = src(li)
mergeRun(li + 1, lend, rb, ri, rend, di + 1, src, dst, ri - rb + swaps)
} else {
dst(di) = src(ri)
mergeRun(li, lend, rb, ri + 1, rend, di + 1, src, dst, swaps)
}
}
val b = new Array[Int](a.length)
def merge(run: Int, run_len: Int, lb: Int, swaps: Count): Count = {
if (run_len >= a.length) {
swaps
} else if (lb >= a.length) {
merge(run + 1, run_len * 2, 0, swaps)
} else {
val lend = math.min(lb + run_len, a.length)
val rb = lb + run_len
val rend = math.min(rb + run_len, a.length)
val (src, dst) = if (run % 2 == 0) (a, b) else (b, a)
val inc_swaps = mergeRun(lb, lend, rb, rb, rend, lb, src, dst, 0)
merge(run, run_len, lb + run_len * 2, inc_swaps + swaps)
}
}
merge(0, 1, 0, 0)
}
It seems to me that the key is to break the list into a series of ascending sequences. For example, your example would be broken into (1 2 4 5)(3 6). None of the items in the first list can end a pair. Now you do a kind of merge of these two lists, working backwards:
6 > 5, so 6 can't be in any pairs
3 < 5, so it's a pair
3 < 4, so it's a pair
3 > 2, so we're done
I'm not clear from the definition on how to handle more than 2 such sequences.
