Let's imagine two arrays like this:
[8,2,3,4,9,5,7]
[0,1,1,0,0,1,1]
How can I perform a binary search only on the numbers with a 1 below them, ignoring the rest?
I know this can be done in O(log n) comparisons, but my current method is slower because it has to go through all the 0s until it hits a 1.
If you hit a number with a 0 below it, you need to scan in both directions for a number with a 1 below it until you find one -- or the local search space is exhausted. Since the scan for a 1 is linear, the ratio of 0s to 1s determines whether the resulting algorithm can still be faster than a linear scan.
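A sketch of that approach in Python (illustrative, not from the original answer; the worst case is linear when long runs of 0s have to be scanned):

```python
def masked_binary_search(values, mask, target):
    """Binary search over values[i] where mask[i] == 1.

    When the midpoint lands on a 0, scan outward for the nearest 1
    inside the current window; this scan is what can make the worst
    case linear."""
    lo, hi = 0, len(values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        # Scan outward from mid for the nearest valid index.
        probe = -1
        for offset in range(hi - lo + 1):
            for cand in (mid - offset, mid + offset):
                if lo <= cand <= hi and mask[cand] == 1:
                    probe = cand
                    break
            if probe != -1:
                break
        if probe == -1:
            return -1  # no valid entries left in the window
        if values[probe] == target:
            return probe
        elif values[probe] < target:
            lo = probe + 1
        else:
            hi = probe - 1
    return -1
```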
This question is very old, but I've just discovered a wonderful little trick to solve this problem in most cases where it comes up. I'm writing this answer so that I can refer to it elsewhere:
Fast Append, Delete, and Binary Search in a Sorted Array
The need to dynamically insert or delete items from a sorted collection, while preserving the ability to search, typically forces us to switch from a simple array representation using binary search to some kind of search tree -- a far more complicated data structure.
If you only need to insert at the end, however (i.e., you always insert a largest or smallest item), or you don't need to insert at all, then it's possible to use a much simpler data structure. It consists of:
A dynamic (resizable) array of items, the item array; and
A dynamic array of integers, the set array. The set array is used as a disjoint set data structure, using the single-array representation described here: How to properly implement disjoint set data structure for finding spanning forests in Python?
The two arrays are always the same size. As long as there have been no deletions, the item array just contains the items in sorted order, and the set array is full of singleton sets corresponding to those items.
If items have been deleted, though, items in the item array are only valid if there is a root set at the corresponding position in the set array. All sets that have been merged into a single root will be contiguous in the set array.
This data structure supports the required operations as follows:
Append (O(1))
To append a new largest item, just append the item to the item array, and append a new singleton set to the set array.
Delete (amortized effectively O(log N))
To delete a valid item, first call search to find the adjacent larger valid item. If there is no larger valid item, then just truncate both arrays to remove the item and all adjacent deleted items. Since merged sets are contiguous in the set array, this will leave both arrays in a consistent state.
Otherwise, merge the sets for the deleted item and adjacent item in the set array. If the deleted item's set is chosen as the new root, then move the adjacent item into the deleted item's position in the item array. Whichever position isn't chosen will be unused from now on, and can be nulled-out to release a reference if necessary.
If less than half of the item array is valid after a delete, then deleted items should be removed from the item array and the set array should be reset to an all-singleton state.
Search (amortized effectively O(log N))
Binary search proceeds normally, except that we need to find the representative item for every test position:
int find(item_array, set_array, itemToFind) {
    int pos = 0;
    int limit = item_array.length;
    while (pos < limit) {
        int testPos = pos + floor((limit - pos) / 2);
        if (item_array[find_set(set_array, testPos)] < itemToFind) {
            pos = testPos + 1;   // testPos is too low
        } else {
            limit = testPos;     // testPos is not too low
        }
    }
    if (pos >= item_array.length) {
        return -1;  // not found
    }
    pos = find_set(set_array, pos);
    return (item_array[pos] == itemToFind) ? pos : -1;
}
I'm looking for a solution for this problem.
I have an array which defines the rule of the order of elements like below.
let rule = [A,B,C,D,E,F,G,H,I,J,K]
I then have another array whose element can be removed or added back.
So for example, I have a list like this:
var list = [A,D,E,I,J,K]
Now if I want to add element 'B' to 'list', the list should be
var list = [A,B,D,E,I,J,K]
because 'B' comes after 'A' and before 'D' in the rule array. So the insertion index would be 1 in this case.
The items in the array are not comparable to each other (say, a developer can change the order of the rule list at any time, if that makes sense). And there must be no duplicates in the array.
I'm not sure if I explained the problem clearly, but I'd like to know a good approach that finds an insertion index.
The Python code is explained in comments. Basically, we find the right place to insert the new element using binary search, with the order of elements decided by a rank lookup. The code below assumes that, if elements is non-empty, its items already follow the rule ordering.
rule = ['A','B','C','D','E','F','G','H','I','J','K']
rank = dict()
for i in range(len(rule)):
    rank[rule[i]] = i

elements = ['A','D','E','I','J','K']  # list in which we wish to add elements
target = 'B'                          # element to be inserted

# Binary search to find the right place to insert the target in elements
left, right = 0, len(elements)
while left < right:
    mid = left + (right - left) // 2
    if rank[elements[mid]] >= rank[target]:
        right = mid
    else:
        left = mid + 1

elements.insert(left, target)  # left is the insertion index
print(elements)
Time complexity of finding the insertion index: O(log(len(elements))). (The list insert itself costs O(len(elements)) because of shifting.)
Space complexity: O(1) extra per add (the rank dictionary is built once, in O(len(rule)) space).
If the items are unique (each can occur only once), and are not comparable to each other (we don't know that B comes after A), then:

1. Iterate through the rules and find the item's position in the rule array.
2. If it is the first item in rules, insert at the first position and skip the remaining steps.
3. If it is the last item in rules, insert at the end and skip the remaining steps.
4. Store the value of the item 1 before it in a variable A.
5. Store the value of the item 1 after it in a variable B.
6. Iterate through the list; if you encounter the value in variable A, insert after that value; if you encounter the value B, insert before it.
7. If you reach the end without finding either value A or B, repeat with the values 2 before and 2 after the item in the rules (again checking whether you hit the start or end of the rules list).

You will probably want to make steps 6 & 7 a function that calls itself recursively.
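A Python sketch of this neighbor-scan approach (illustrative, not from the original answer; it widens the neighborhood around the item's rule position until a neighbor is found in the list):

```python
def insertion_index(rule, lst, item):
    """Return the index at which item should be inserted into lst,
    where lst is assumed to already follow the ordering in rule."""
    r = rule.index(item)
    for dist in range(1, len(rule)):
        before = rule[r - dist] if r - dist >= 0 else None
        after = rule[r + dist] if r + dist < len(rule) else None
        for i, value in enumerate(lst):
            if value == before:
                return i + 1  # insert just after the earlier neighbor
            if value == after:
                return i      # insert just before the later neighbor
        if before is None and after is None:
            break  # no neighbors left to try
    return 0  # empty list (or no neighbors present)
```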
A simple approach is to use one iteration of Insertion sort.

So, we start from the right side of the array, comparing our input x with the array elements as we go from right to left. When we arrive at an index i such that a[i] <= x (in rule order), then i+1 is the correct location at which x can be inserted.

This approach has time complexity O(n), which follows from the correctness of Insertion sort.

Note that the lower bound for your problem is O(n) anyway, because your data structure is an array, so after each insertion you need to shift the remaining elements.
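A sketch of that single right-to-left pass in Python (illustrative; it assumes a rank dict mapping each item to its index in the rule array):

```python
def insert_one(rank, elements, x):
    """One insertion-sort pass: shift larger-ranked items right,
    then drop x into the gap. Returns the insertion index."""
    elements.append(x)  # make room at the end
    i = len(elements) - 2
    while i >= 0 and rank[elements[i]] > rank[x]:
        elements[i + 1] = elements[i]  # shift right
        i -= 1
    elements[i + 1] = x
    return i + 1
```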
If a sequence is ordered, and you only ask for the first element of the ordered sequence, is OrderBy smart enough not to order the complete sequence?
IEnumerable<MyClass> myItems = ...
MyClass minItem = myItems.OrderBy(item => item.Id).FirstOrDefault();
So if the first element is asked, only the item with the minimum value is ordered as first element of the sequence. When the next element is asked, the item with the minimum value of the remaining sequence is ordered etc.
Or is the complete sequence completely ordered if you only want the first element?
Addition
Apparently the question is unclear. Let's give an example.
The Sort function could do the following:
Create a linked list containing all the elements.
As long as the linked list contains elements:
    Take the first element of the linked list as the smallest.
    Scan the rest of the linked list once to find any smaller element.
    Remove the smallest element from the linked list.
    yield return the smallest element.
Code:
public static IEnumerable<TSource> Sort<TSource, TKey>(
    this IEnumerable<TSource> source, Func<TSource, TKey> keySelector)
{
    if (source == null) throw new ArgumentNullException(nameof(source));
    if (keySelector == null) throw new ArgumentNullException(nameof(keySelector));
    IComparer<TKey> comparer = Comparer<TKey>.Default;

    // create a linked list with keyValuePairs of TKey and TSource
    var keyValuePairs = source
        .Select(item => new KeyValuePair<TKey, TSource>(keySelector(item), item));
    var itemsToSort = new LinkedList<KeyValuePair<TKey, TSource>>(keyValuePairs);

    while (itemsToSort.Any())
    {   // there are still items in the list
        // select the first element as the smallest one
        var smallest = itemsToSort.First();
        // scan the rest of the linked list to find the smallest one
        foreach (var element in itemsToSort.Skip(1))
        {
            if (comparer.Compare(element.Key, smallest.Key) < 0)
            {   // element.Key is smaller than smallest.Key: element becomes the smallest
                smallest = element;
            }
        }
        // remove the smallest element from the linked list and return the value:
        itemsToSort.Remove(smallest);
        yield return smallest.Value;
    }
}
Suppose I have the following sequence of integers:
{4, 8, 3, 1, 7}
At the first iteration the iterator internally creates a linked list of key/value pairs and assigns the first element of the list as smallest
Linked List = 4 - 8 - 3 - 1 - 7
Smallest = 4
The linked list is scanned once to see if there is a smaller one.
Linked List = 4 - 8 - 3 - 1 - 7
Smallest = 1
The smallest is removed from the linked list and yield return:
Linked List = 4 - 8 - 3 - 7
return 1
The second iteration the same is done with the shorter linked list
Linked List = 4 - 8 - 3 - 7
smallest = 4
Again the linked list is scanned once to find the smallest one
Linked List = 4 - 8 - 3 - 7
smallest = 3
Remove the smallest from the linked list and return the smallest
Linked List = 4 - 8 - 7
return 3
It's easy to see that if you only ask for first element in the sorted list, the list is scanned only once. Every iteration the list to scan becomes smaller.
Back to my original question:
I understand that if you only want the first element, you have to scan the list at least once. If you don't ask for a second element, the rest of the list is not ordered.
Is the sort used by Enumerable.OrderBy smart enough that it doesn't sort the remainder of the list if you only ask for the first ordered item?
It depends on the version.
In the framework versions (4.0, 4.5, etc.) then:
The entire source is loaded into a buffer.
Produce a map of keys (so that key production happens only once per element).
A map of integers is produced and then sorted according to those keys (using a map means swap operations have cheaper copies if the source elements are large value types).
The FirstOrDefault attempts to obtain the first item according to this mapping by using MoveNext and Current on the resulting object. Either it finds one, or (if the buffer is empty because the source was empty) returns default(TSource).
In .NET Core, then:
The FirstOrDefault operation on the IOrderedEnumerable scans through the source. If there are no elements it returns default(TSource) otherwise it holds onto the first element found and the key produced by the key generator and compares it with all subsequent, replacing that held-onto value and key with the next found if the next found compares as lower than the current value.
The held-onto value will be the same element as the Framework version would have found by first sorting, so it is returned.
This means that in the Framework version myItems.OrderBy(item => item.Id).FirstOrDefault() is O(n log n) time complexity (worst case O(n²)) and O(n) space complexity, but in the .NET Core version it is O(n) time complexity and O(1) space complexity.
The main difference here is that in .NET Core FirstOrDefault() has knowledge of how the results of OrderBy (and ThenBy etc.) differ from other possible sources and has code to handle it*, while in the framework version it does not.
Both scan the entire sequence (you can't know the last element in myItems isn't the first by the sorting rules until you've examined it) but they differ in the mechanism and efficiency after that point.
When the next element is asked, the item with the minimum value of the remaining sequence is ordered etc.
If the next element is asked for, not only would the sorting be done again, it would have to be done again, as the contents of myItems could have changed in the meantime.
If you were trying to obtain it with myItems.OrderBy(item => item.Id).ElementAtOrDefault(i) then the framework version would find the element by first doing a sort (O(n log n)) and then a scan (O(n) relative to i), while the .NET Core version would find it with a quickselect: O(n), though the constant factors are bigger than for FirstOrDefault() and can be as high as O(n²) in the same cases that sorting is, so it's a slower O(n) than with that (it's smart enough to turn ElementAtOrDefault(0) into FirstOrDefault() for that reason). Both versions also use O(n) space (unless .NET Core can turn it into FirstOrDefault()).
If you were finding the first few values with myItems.OrderBy(item => item.Id).Take(k) then the Framework version would again do a sort (O(n log n)) and then put a limit on the subsequent enumeration of the results so that it stopped returning elements after k were obtained. The .NET Core version would do a partial sort, not bothering to sort elements it realised were always going to come after the portion taken, which is O(n + k log k) time complexity. .NET Core would also do a single partial sort for combinations of Take and Skip, reducing the amount of sorting necessary further.
In theory the sorting of just OrderBy(cmp) could be lazier as per:
Load the elements into the buffer.
Do a sort, probably favouring the "left" partition as partitioning is happening.
yield elements as soon as it is found that they are the next to enumerate.
This would improve time-to-first-result (low time-to-first-result is often a nice feature of other Linq operations), and particularly benefit consumers who may stop working on the result part way through. However it adds extra constant costs to the sorting operation and either prevents picking the next partition to work on in such a way as to reduce the amount of recursion (an important optimisation of partition-based sorting) or else would often not actually yield anything until near the end anyway (making the exercise rather pointless). It would also make the sorting much more complicated. While I experimented with this approach the pay-offs to some cases didn't justify the costs to others, especially as it seemed likely to hurt more people than it benefited.
*Strictly speaking, the results of several linq operations have knowledge of how to find the first element in a way that is optimised for each of them, and FirstOrDefault() knows how to detect any of those cases.
If a sequence is ordered ...
That is fine but not a property of IEnumerable so OrderBy can never 'know' this directly.
There are precedents for this though, Count() will check at runtime if its IEnumerable<> source is actually pointing at a List and then take a shortcut to the Count property.
Likewise, OrderBy could look to see if it's called on a SortedList or something but there is no clear marker interface and those collections are used far too infrequently to make this worth the effort.
There are other ways to optimize this: .OrderBy().First() could conceivably map to a .Min(), but again, nobody has bothered till now as far as I know. See Jon's answer.
No, it's not. How can it know that the list is in order without iterating through the entire list?
Here's a simple test:
void Main()
{
    Console.WriteLine(OrderedEnumerable().OrderBy(x => x).First());
}

public IEnumerable<int> OrderedEnumerable()
{
    Console.WriteLine(1);
    yield return 1;
    Console.WriteLine(2);
    yield return 2;
    Console.WriteLine(3);
    yield return 3;
}
This, as expected, outputs:
1
2
3
1
If you look at the reference source and follow the classes you will see that all keys will be computed and then a quick sort algorithm will sort the index table according to the keys.
So the sequence is read once, all the keys are computed, then an index is sorted according to the keys and then you get your first output.
This question already has answers here:
Stable separation for two classes of elements in an array
(3 answers)
Closed 9 years ago.
Suppose I have a function f and array of elements.
The function returns A or B for any element; you could visualize the elements this way ABBAABABAA.
I need to sort the elements according to the function, so the result is: AAAAAABBBB
The number of A values doesn't have to equal the number of B values. The total number of elements can be arbitrary (not fixed). Note that you don't sort chars, you sort objects that have a single char representation.
Few more things:
the sort should take linear time - O(n),
it should be performed in place,
it should be a stable sort.
Any ideas?
Note: if the above is not possible, do you have ideas for algorithms sacrificing one of the above requirements?
If it has to be linear and in-place, you could do a semi-stable version. By semi-stable I mean that A or B could be stable, but not both. Similar to Dukeling's answer, but you move both iterators from the same side:
a = first A
b = first B
loop while next A exists
    if b < a
        swap a,b elements
        b = next B
        a = next A
    else
        a = next A
With the sample string ABBAABABAA, you get:
ABBAABABAA
AABBABABAA
AAABBBABAA
AAAABBBBAA
AAAAABBBBA
AAAAAABBBB
On each turn, if you make a swap you move both; if not, you just move a. This will keep A stable, but B will lose its ordering. To keep B stable instead, start from the end and work your way left.
It may be possible to do it with full stability, but I don't see how.
A stable sort might not be possible with the other given constraints, so here's an unstable sort that's similar to the partition step of quick-sort.
1. Have 2 iterators, one starting on the left, one starting on the right.
2. While there's a B at the right iterator, decrement the iterator.
3. While there's an A at the left iterator, increment the iterator.
4. If the iterators haven't crossed each other, swap their elements and repeat from step 2.
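The two-iterator partition above can be sketched in Python (illustrative; f classifies each element as 'A' or 'B'):

```python
def partition_ab(items, f):
    """Unstable in-place partition, like quicksort's partition step:
    all As end up before all Bs."""
    left, right = 0, len(items) - 1
    while True:
        while right >= 0 and f(items[right]) == 'B':
            right -= 1   # skip Bs already on the right
        while left < len(items) and f(items[left]) == 'A':
            left += 1    # skip As already on the left
        if left >= right:
            return items
        items[left], items[right] = items[right], items[left]
```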
Let's say:

Object_Array[1...N]
Type_A objs are A1, A2, ..., Ai
Type_B objs are B1, B2, ..., Bj
i + j = N

FOR i = 1 : N
    if Object_Array[i] is of Type_A
        obj_A_count = obj_A_count + 1
    else
        obj_B_count = obj_B_count + 1
LOOP

Fill the resultant array with obj_A and obj_B according to their respective counts, the As first and then the Bs.
The following should work in linear time for a doubly-linked list. Because up to N insertions/deletions are involved, it may take quadratic time for arrays, though.
1. Find the location where the first B should be after "sorting". This can be done in linear time by counting As.
2. Start with 3 iterators: iterA starts from the beginning of the container, iterB starts from the above location where As and Bs should meet, and iterMiddle starts one element prior to iterB.
3. With iterA, skip over As, find the 1st B, and move the object from iterA to the position just before iterB. Now iterA points to the next element after where the moved element used to be, and the moved element is just before iterB.
4. Continue with step 3 until you reach iterMiddle. After that, all elements between first() and iterB-1 are As.
5. Now set iterA to iterB-1.
6. Skip over Bs with iterB. When an A is found, move it to just after iterA and increment iterA.
7. Continue step 6 until iterB reaches end().
This would work as a stable sort for any container. The algorithm includes O(N) insertions/deletions, which is linear time for containers with O(1) insertion/deletion but, alas, O(N²) for arrays. Applicability in your case depends on whether the container is an array rather than a list.
If your data structure is a linked list instead of an array, you should be able to meet all three of your constraints. You just skim through the list and accumulating and moving the "B"s will be trivial pointer changes. Pseudo code below:
sort(list) {
    node = list.head, blast = null, bhead = null
    while (node != null) {
        nextnode = node.next
        if (node.val == "a") {
            if (blast != null) {
                // move the 'a' to the front of the 'b' run
                if (bhead.prev != null) bhead.prev.next = node
                else list.head = node
                node.prev = bhead.prev
                blast.next = node.next
                if (node.next != null) node.next.prev = blast
                node.next = bhead, bhead.prev = node
            }
        }
        else if (node.val == "b") {
            if (blast == null)
                bhead = blast = node
            else // accumulate the "b"s..
                blast = node
        }
        node = nextnode
    }
}
So, you can do this in an array, but the memcopies that emulate the list splicing will make it quite slow for large arrays.
Firstly, assuming the array of A's and B's is either generated or read in, I wonder why not avoid this question entirely by simply applying f as the list is accumulated into memory, building two lists that are subsequently concatenated.
Otherwise, we can posit an alternative solution in O(n) time and O(1) space that may be sufficient depending on Sir Bohumil's ultimate needs:
Traverse the list and sort each segment of 1,000,000 elements in-place using the permutation cycles of the segment (once this step is done, the list could technically be sorted in-place by recursively swapping the inner-blocks, e.g., ABB AAB -> AAABBB, but that may be too time-consuming without extra space). Traverse the list again and use the same constant space to store, in two interval trees, the pointers to each block of A's and B's. For example, segments of 4,
ABBAABABAA => AABB AABB AA + pointers to blocks of A's and B's
Sequential access to A's or B's would be immediately available, and random access would come from using the interval tree to locate a specific A or B. One option could be to have the intervals number the A's and B's; e.g., to find the 4th A, look for the interval containing 4.
For sorting, an array of 1,000,000 four-byte elements (3.8MB) would suffice to store the indexes, using one bit in each element for recording visited indexes during the swaps; and two temporary variables the size of the largest A or B. For a list of one billion elements, the maximum combined interval trees would number 4000 intervals. Using 128 bits per interval, we can easily store numbered intervals for the A's and B's, and we can use the unused bits as pointers to the block index (10 bits) and offset in the case of B (20 bits). 4000*16 bytes = 62.5KB. We can store an additional array with only the B blocks' offsets in 4KB. Total space under 5MB for a list of one billion elements. (Space is in fact dependent on n but because it is extremely small in relation to n, for all practical purposes, we may consider it O(1).)
Time for sorting the million-element segments would be - one pass to count and index (here we can also accumulate the intervals and B offsets) and one pass to sort. Constructing the interval tree is O(nlogn) but n here is only 4000 (0.00005 of the one-billion list count). Total time O(2n) = O(n)
This should be possible with a bit of dynamic programming.
It works a bit like counting sort, but with a key difference. Make arrays of size n for both a and b: count_a[n] and count_b[n]. Fill these arrays with how many As or Bs there have been before index i.
After just one loop, we can use these arrays to look up the correct index for any element in O(1). Like this:
int final_index(char id, int pos) {
    if (id == 'A')
        return count_a[pos];
    else
        return count_a[n-1] + count_b[pos];
}
Finally, to meet the total O(n) requirement, the swapping needs to be done in a smart order. One simple option is to have a recursive swapping procedure that doesn't actually perform any swapping until both elements would be placed in their correct final positions. EDIT: This is actually not true. Even naive swapping will have O(n) swaps. But doing this recursive strategy will give you the absolute minimum required swaps.
Note that in general case this would be very bad sorting algorithm since it has memory requirement of O(n * element value range).
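A small Python sketch of the counting idea above (illustrative; it places elements into an O(n) result array from the prefix counts rather than swapping in place):

```python
def stable_ab_sort(items, f):
    """Stable AB sort via prefix counts: count_a[i]/count_b[i] hold
    the number of As/Bs strictly before index i, which directly give
    each element's final index."""
    n = len(items)
    count_a, count_b = [0] * n, [0] * n
    a_seen = b_seen = 0
    for i, x in enumerate(items):
        count_a[i], count_b[i] = a_seen, b_seen
        if f(x) == 'A':
            a_seen += 1
        else:
            b_seen += 1
    result = [None] * n
    for i, x in enumerate(items):
        if f(x) == 'A':
            result[count_a[i]] = x            # As keep relative order
        else:
            result[a_seen + count_b[i]] = x   # Bs go after all As
    return result
```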
A dynamic list means elements of the list can be added, deleted, or changed.
The question is similar to how duplicate file names are handled in windows.
For example:
file
file (1)
file (2)
file (3)
If file (2) is deleted and then another file with name file is added, file (2) will be the generated file name. (don't think that happens in windows though)
Is there an elegant way to do this without searching through the whole list on every insert?
You can use a queue to store the integers that were freed, and a counter to track the next never-used integer:
queue<int> s;
int lastUsedInt;  // initialize to 0

void releaseNumber(int fileNumber) {  // 'delete' is a C++ keyword
    s.push(fileNumber);
}

int getIntForNewFile() {
    if (s.empty()) {           // if our queue is empty then there are no unused numbers
        return lastUsedInt++;  // return lastUsedInt and increment it by 1
    }
    else {
        int i = s.front();     // get the current front of the queue (std::queue uses front(), not top())
        s.pop();               // remove it from the queue
        return i;              // return the retrieved number
    }
}
Some Cpp pseudo code here :)
This will "fill" the empty spots in the order they were deleted.
So if you have files 1 through 10, then delete 5 and 2: 5 will be filled first, then 2.
If you want them to be filled in order, you can use a sorted container.
In C++ that would be a priority_queue.
Use a min heap to store the items that get deleted. If the heap is empty, then there are no free items. Otherwise, take the first one.
A simple min heap implementation is available at http://www.informit.com/guides/content.aspx?g=dotnet&seqNum=789
To use it:
BinaryHeap<int> MyHeap = new BinaryHeap<int>();
When you remove an item from your list, add the number to the heap:
MyHeap.Insert(number);
To get the next number:
if (MyHeap.Count > 0)
    nextNumber = MyHeap.RemoveRoot();
else
    nextNumber = List.Count;
This guarantees that you'll always get the smallest available number.
I would use a self-balancing binary tree as a set representation (e.g., the C++ standard std::set<int> container).
The set would consist of all "unallocated" numbers up to and including one greater than the largest allocated number.
Initialize the set to contain 0 (or 1, whatever your preference is).
To allocate a new number, grab the smallest element in the set and remove it. Call that number n. If the set is now empty, put n+1 into the set.
To de-allocate a number, add it to the set. Then perform the following cleanup algorithm:
Step 1: If the set has one element, stop.
Step 2: If the largest element in the set minus the second-largest element is greater than 1, stop.
Step 3: Remove the largest element.
Step 4: Goto step 2.
With this data structure and algorithm, any sequence of k allocate/deallocate operations requires O(k log k) time. (Even though the "cleanup" operation is O(k log k) time all by itself, you can only remove an element after you have inserted it, so the total time does not exceed O(k log k).)
Put another way, each allocate/deallocate takes logarithmic amortized time.
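A minimal Python sketch of this allocator (illustrative; Python's built-in set with min/max is O(n) per lookup, whereas a balanced-tree set such as std::set<int> makes those O(log n)):

```python
class NumberAllocator:
    """Set of unallocated numbers up to and including one greater
    than the largest allocated number."""

    def __init__(self):
        self.free = {0}

    def allocate(self):
        n = min(self.free)       # grab the smallest free number
        self.free.remove(n)
        if not self.free:
            self.free.add(n + 1) # keep one fresh number available
        return n

    def deallocate(self, n):
        self.free.add(n)
        # Cleanup: shrink a trailing run of free numbers.
        while len(self.free) > 1:
            largest = max(self.free)
            second = max(x for x in self.free if x != largest)
            if largest - second > 1:
                break
            self.free.remove(largest)
```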
I have a set of items, for example: {1,1,1,2,2,3,3,3}, and a restricting set of sets, for example {{3},{1,2},{1,2,3},{1,2,3},{1,2,3},{1,2,3},{2,3},{2,3}}. I am looking for permutations of the items, where the first element must be 3, the second must be 1 or 2, and so on.
One such permutation that fits is:
{3,1,1,1,2,2,3}
Is there an algorithm to count all permutations for this problem in general? Is there a name for this type of problem?
For illustration, I know how to solve this problem for certain types of "restricting sets".
Set of items: {1,1,2,2,3}, Restrictions {{1,2},{1,2,3},{1,2,3},{1,2},{1,2}}. This is equal to 2!/(2-1)!/1! * 4!/2!/2!. Effectively permuting the 3 first, since it is the most restrictive and then permuting the remaining items where there is room.
Also... polynomial time. Is that possible?
UPDATE: This is discussed further at below links. The problem above is called "counting perfect matchings" and each permutation restriction above is represented by a {0,1} on a matrix of slots to occupants.
https://math.stackexchange.com/questions/519056/does-a-matrix-represent-a-bijection
https://math.stackexchange.com/questions/509563/counting-permutations-with-additional-restrictions
https://math.stackexchange.com/questions/800977/parking-cars-and-vans-into-car-van-and-car-van-parking-spots
All of the other solutions here are exponential time--even for cases that they don't need to be. This problem exhibits similar substructure, and so it should be solved with dynamic programming.
What you want to do is write a class that memoizes solutions to subproblems:
class Counter {
    struct Problem {
        unordered_multiset<int> s;     // items still to place
        vector<unordered_set<int>> v;  // the restricting set for each remaining slot
    };

    int Count(Problem const& p) {
        if (p.v.size() == 0)
            return 1;
        if (m.find(p) != m.end())
            return m[p];
        // Otherwise, attack the problem by choosing either an index 'i' (notes below)
        // or a number 'n'. This code only illustrates choosing an index 'i'.
        Problem smaller_p = p;
        smaller_p.v.erase(smaller_p.v.begin() + i);
        int retval = 0;
        bool have_prev = false;
        int prev_val = 0;
        for (auto it = p.s.begin(); it != p.s.end(); ++it) {
            if (have_prev && *it == prev_val)
                continue;  // skip duplicate values (equal keys iterate adjacently)
            have_prev = true; prev_val = *it;
            if (p.v[i].find(*it) == p.v[i].end())
                continue;  // *it is not allowed in slot 'i'
            smaller_p.s.erase(smaller_p.s.find(*it));  // remove one copy only
            retval += Count(smaller_p);
            smaller_p.s.insert(*it);
        }
        m[p] = retval;
        return retval;
    }

    unordered_map<Problem, int> m;
};
The code illustrates choosing an index i, which should be chosen at a place where v[i].size() is small. The other option is to choose a number n, which should be one for which there are few locations v that it can be placed in. I'd say the minimum of the two deciding factors should win.
Also, you'll have to define a hash function for Problem -- that shouldn't be too hard using boost's hash stuff.
This solution can be improved by replacing the vector with a set<>, and defining a < operator for unordered_set. This will collapse many more identical subproblems into a single map element, and further mitigate exponential blow-up.
This solution can be further improved by making Problem instances that are the same except that the numbers are rearranged hash to the same value and compare to be the same.
You might consider a recursive solution that uses a pool of digits (in the example you provide, it would be initialized to {1,1,1,2,2,3,3,3}), and decides, at the index given as a parameter, which digit to place at this index (using, of course, the restrictions that you supply).
If you like, I can supply pseudo-code.
You could build a tree.
Level 0: Create a root node.
Level 1: Append each item from the first "restricting set" as children of the root.
Level 2: Append each item from the second restricting set as children of each of the Level 1 nodes.
Level 3: Append each item from the third restricting set as children of each of the Level 2 nodes.
...
The permutation count is then the number of leaf nodes of the final tree.
Edit
It's unclear what is meant by the "set of items" {1,1,1,2,2,3,3,3}. If that is meant to constrain how many times each value can be used ("1" can be used 3 times, "2" twice, etc.) then we need one more step:
Before appending a node to the tree, remove the values used on the current path from the set of items. If the value you want to append is still available (e.g. you want to append a "1", and "1" has only been used twice so far) then append it to the tree.
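The tree construction with this availability check can be sketched as a recursive count (illustrative Python, not from the original answer; it counts the leaves without materializing the tree):

```python
from collections import Counter

def count_permutations(items, restrictions):
    """At each level, try every still-available value allowed by
    that level's restricting set. Exponential in general."""
    counts = Counter(items)

    def walk(level):
        if level == len(restrictions):
            return 1  # one complete permutation (a leaf)
        total = 0
        for value in restrictions[level]:
            if counts[value] > 0:       # value still available?
                counts[value] -= 1
                total += walk(level + 1)
                counts[value] += 1      # backtrack
        return total

    return walk(0)
```

On the smaller example from the question ({1,1,2,2,3} with restrictions {{1,2},{1,2,3},{1,2,3},{1,2},{1,2}}), this agrees with the closed-form count of 2!/(2-1)!/1! * 4!/2!/2! = 12.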
To save space, you could build a directed graph instead of a tree.
Create a root node.
Create a node for each item in the first set, and link from the root to the new nodes.
Create a node for each item in the second set, and link from each first-set item to each second-set item.
...
The number of permutations is then the number of paths from the root node to the nodes of the final set.