Which data structure supports given operations efficiently

I need a data structure that supports the following operations efficiently:
1) Add an integer x
2) Delete an integer with the maximum frequency (if more than one element has the same maximum frequency, delete all of them).
I am thinking of implementing a segment tree where each node stores the index of its child with the largest frequency.
Any ideas or suggestions on how to approach this problem, or how it should be implemented, would be appreciated.

We can use a combination of data structures: a hash map to maintain the frequency mappings, where the key is the integer and the value is a pointer to a "frequency" node representing the frequency value and the set of integers having that frequency. The frequency nodes are maintained in a list ordered by the values of the frequencies.
The Frequency node can be defined as
class Freq {
    int frequency;
    Set<Integer> values_with_frequency;
    Freq prev;
    Freq next;
}
The elements HashMap would then contain entries of the form
Entry<Integer, Freq>
So, for a snapshot of the dataset such as
a, b, c, b, d, d, a, e, a, f, b
where the letters denote integers, the data structure would look like the following: c, e, and f share the Freq node (1, [c, e, f]); d has its own node (2, [d]); and a and b share (3, [a, b]).

c --+
e --+--> (1, [c, e, f])
f --+

d -----> (2, [d])

a --+
b --+--> (3, [a, b])
The Freq nodes would be maintained in a linked list, say freq_nodes, sorted by the frequency value. Note that, as explained below, there wouldn't be any log(n) operation needed for keeping the list sorted on the add/delete operations.
The add(x) and delete_max_freq() operations could be implemented as follows:
add(x):
If x is not found in the elements map, check whether the first node of freq_nodes has frequency 1. If so, add x to that node's values_with_frequency set. Otherwise, create a new Freq node with 1 as its frequency value and x added to the (for now single-element) wrapped set values_with_frequency, and prepend it to the list.
Otherwise (i.e. if x is already in the elements map), follow the pointer stored as x's value in elements to its Freq node in freq_nodes, note x's current frequency F (the value of elements.get(x).frequency), and remove x from the node's values_with_frequency set. If this removal leaves values_with_frequency empty, delete the node from the freq_nodes linked list. Then, if the next Freq node in the list has frequency F+1, simply add x to that node's values_with_frequency set; otherwise, create a Freq node with frequency F+1, just as was done for the frequency-1 case above, and insert it after the current node.
Finally, put the entry (x, Freq) into the elements map, pointing at x's new node.
Note that this whole add(x) operation is O(1) in time.
Here's an example of a sequence of add() operations with the subsequent state of the data structure.
add(a)
a -> N1 : freq_nodes : |N1 (1, {a}) | ( N1 is actually a Freq object)
add(b)
a -> N1 : freq_nodes : |N1 (1, {a, b}) |
b -> N1
add(a)
At this point ‘a’ points to N1; however, its new frequency is 2, so we need to insert a node N2 next to N1 in the doubly linked list, after removing ‘a’ from N1’s values_with_frequency set {a, b}:
a -> N2 : freq_nodes : |N1 (1, {b}) | --> |N2 (2, {a}) |
b -> N1
The interesting thing to note here is that any time we increase the frequency of an existing element from F to F+1, we need to do the following:
if the next node has a frequency higher than F+1, or we have reached the end of the list:
    create a new Freq node with frequency F+1 (as is done above)
    and insert it next to the current node
else:
    add ‘a’ (the input to the add() operation) to the values_with_frequency set of the next node
The delete_max_freq() operation would just involve removing the last entry of the linked list freq_nodes, and iterating over the keys in the wrapped set values_with_frequency to remove the corresponding keys from the elements map. This operation would take O(k) time where k is the number of elements with maximum frequency.
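To make this concrete, here is a minimal sketch of the design in Python (the class and method names are mine, not part of the answer): a doubly linked list of Freq buckets kept sorted by frequency, plus a dict playing the role of the elements map.

class Freq:
    def __init__(self, frequency):
        self.frequency = frequency
        self.values = set()            # values_with_frequency
        self.prev = self.next = None

class FreqStructure:
    def __init__(self):
        self.elements = {}             # integer -> Freq node holding it
        self.head = self.tail = None   # buckets in ascending frequency order

    def _insert_after(self, node, new):
        # Insert `new` right after `node`, or at the head if node is None.
        if node is None:
            new.next = self.head
            if self.head: self.head.prev = new
            else: self.tail = new
            self.head = new
        else:
            new.prev, new.next = node, node.next
            if node.next: node.next.prev = new
            else: self.tail = new
            node.next = new

    def _remove(self, node):
        if node.prev: node.prev.next = node.next
        else: self.head = node.next
        if node.next: node.next.prev = node.prev
        else: self.tail = node.prev

    def add(self, x):                  # O(1): only list neighbours are touched
        if x not in self.elements:
            if self.head and self.head.frequency == 1:
                target = self.head
            else:
                target = Freq(1)
                self._insert_after(None, target)
        else:
            cur = self.elements[x]
            cur.values.discard(x)
            if cur.next and cur.next.frequency == cur.frequency + 1:
                target = cur.next
            else:
                target = Freq(cur.frequency + 1)
                self._insert_after(cur, target)
            if not cur.values:
                self._remove(cur)
        target.values.add(x)
        self.elements[x] = target

    def delete_max_freq(self):         # O(k) for k max-frequency values
        if self.tail is None:
            return []
        node = self.tail
        self._remove(node)
        for v in node.values:
            del self.elements[v]
        return list(node.values)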

Assuming "efficient" refers to the way the complexity of those operations scales, big-O style, I'd consider something consisting of:
a hashmap with the integers as keys and their frequencies as values
a tree structure (e.g. a binary search tree) whose nodes each hold a frequency and a hashset of the numbers having that frequency.
When a number is inserted:
1. Look up the number in the hashmap to find its current frequency, if any. (O(1))
2. If the number was already present, look up that frequency in the tree (O(log N)), remove the number from its collection (O(1)), and, if the collection becomes empty, remove the frequency from the tree (O(log N)).
3. Increment the number's frequency. Set that value in the hashmap (O(1)).
4. Look up its new frequency in the tree (O(log N)). If it's there, add the number to the collection there (O(1)). If not, add a new node with the number in its collection (O(log N)).
When deleting items with the maximum frequency:
1. Remove the highest-valued node from the tree (O(log N)).
2. For each number in that node's collection, remove that number's entry from the hashmap (O(1) for each number removed).
If you have N numbers to add and remove, your worst-case scenario should be O(N log N) regardless of the actual distribution of frequencies or the order in which numbers are added and removed.
If you know of any assumptions you can make about the numbers being added, it's possible you could make further enhancements like using an indexed array rather than an ordered tree. But if your inputs are fairly unbounded, this seems like a pretty good structure to handle all the operations you want without getting into O(n²) territory.
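As a sketch of this design (I'm using the third-party sortedcontainers package to stand in for the ordered tree; a Java TreeMap would play the same role):

from sortedcontainers import SortedDict

class FreqTree:
    def __init__(self):
        self.freq = {}            # number -> frequency
        self.tree = SortedDict()  # frequency -> set of numbers with that frequency

    def add(self, x):
        f = self.freq.get(x, 0)
        if f:                     # remove x from its old frequency bucket
            bucket = self.tree[f]
            bucket.discard(x)
            if not bucket:
                del self.tree[f]  # O(log N)
        self.freq[x] = f + 1
        self.tree.setdefault(f + 1, set()).add(x)   # O(log N)

    def delete_max_freq(self):
        if not self.tree:
            return []
        f, bucket = self.tree.peekitem(-1)          # highest frequency
        del self.tree[f]
        for x in bucket:
            del self.freq[x]      # O(1) per number removed
        return list(bucket)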

My thoughts:
You will need 2 maps.
Map 1: Integer as key with frequency as value.
Map 2: Have a map of frequencies as keys and list of integers as values.
Add integer: add the integer to map 1, get the frequency, and add the integer to that frequency's list in map 2.
Delete integer: we can obviously maintain the maximum frequency in a variable across these operations. Now remove the key from map 2 that has this max frequency, and decrement max frequency.
So, adding and deleting performance should be O(1) on average.
In the above scenario, map 1 may still contain integers whose stored frequency is stale after a delete from map 2. In that case, when the same integer gets added again, we do an on-demand update in map 1: if the integer's current frequency in map 1 no longer exists as a key in map 2, it means the integer was deleted, and we can reset its frequency to 1.
Implementation:
import java.util.*;
class Foo {
    Map<Integer, Integer> map1;      // integer -> frequency
    Map<Integer, Set<Integer>> map2; // frequency -> integers with that frequency
    int max_freq;

    Foo() {
        map1 = new HashMap<>();
        map2 = new HashMap<>();
        map2.put(0, new HashSet<>());
        max_freq = 0;
    }

    public void add(int x) {
        map1.putIfAbsent(x, 0);
        int curr_f = map1.get(x);
        if (!map2.containsKey(curr_f)) {
            map1.put(x, 1); // stale frequency: x was deleted earlier, reset to 1
        } else {
            map1.merge(x, 1, Integer::sum);
        }
        map2.putIfAbsent(map1.get(x), new HashSet<>());
        map2.get(map1.get(x) - 1).remove(x); // remove from previous frequency list
        map2.get(map1.get(x)).add(x);        // add to current frequency list
        max_freq = Math.max(max_freq, map1.get(x));
        printState();
    }

    public List<Integer> delete() {
        if (max_freq == 0) return new ArrayList<>(); // guard: structure is empty
        List<Integer> ls = new ArrayList<>(map2.get(max_freq));
        map2.remove(max_freq);
        max_freq--;
        while (max_freq > 0 && map2.get(max_freq).size() == 0) max_freq--;
        printState();
        return ls;
    }

    public void printState() {
        System.out.println(map1.toString());
        System.out.println("Maximum frequency: " + max_freq);
        for (Map.Entry<Integer, Set<Integer>> m : map2.entrySet()) {
            System.out.println(m.getKey() + " " + m.getValue().toString());
        }
        System.out.println("----------------------------------------------------");
    }
}
Demo: https://ideone.com/tETHKV
Note: the cost of delete() is amortized: its while loop only decrements max_freq, which increases by at most one per call to add().


Approximation-tolerant map

I'm working with arrays of integers, all of the same size l.
I have a static set of them, and I need to build a function to look them up efficiently.
The tricky part is that the elements in the array I need to search for might be off by 1.
Given the arrays {A_1, A_2, ..., A_n}, and an array S, I need a function search such that:
search(S)=x iff ∀i: A_x[i] ∈ {S[i]-1, S[i], S[i]+1}.
A possible solution is treating each vector as a point in an l-dimensional space and looking for the closest point, but it'd cost something like O(l*n) in space and O(l*log(n)) in time.
Would there be a solution with a better space complexity (and/or time, of course)?
My arrays are pretty different from each other, and good heuristics might be enough.
Consider a search array S with the values:
S = [s1, s2, s3, ... , sl]
and the average value:
s̅ = (s1 + s2 + s3 + ... + sl) / l
and two matching arrays, one where every value is one greater than the corresponding value in S, and one where every value is one smaller:
A1 = [s1+1, s2+1, s3+1, ... , sl+1]
A2 = [s1−1, s2−1, s3−1, ... , sl−1]
These two arrays would have the average values:
a̅1 = (s1 + 1 + s2 + 1 + s3 + 1 + ... + sl + 1) / l = s̅ + 1
a̅2 = (s1 − 1 + s2 − 1 + s3 − 1 + ... + sl − 1) / l = s̅ − 1
So every matching array, whose values are at most 1 away from the corresponding values in the search array, has an average value that is at most 1 away from the average value of the search array.
If you calculate and store the average value of each array, and then sort the arrays based on their average value (or use an extra data structure that enables you to find all arrays with a certain average value), you can quickly identify which arrays have an average value within 1 of the search array's average value. Depending on the data, this could drastically reduce the number of arrays you have to check for similarity.
After having pre-processed the arrays and stored their average values, performing a search means iterating over the search array to calculate its average value, looking up which arrays have a similar average value, and then iterating over those arrays to check every value.
If you expect many arrays to have a similar average value, you could use several averages to detect arrays that are locally very different but similar on average. You could e.g. calculate these four averages:
the first half of the array
the second half of the array
the odd-numbered elements
the even-numbered elements
Analysis of the actual data should give you more information about how to divide the array and combine different averages to be most effective.
If the total sum of an array cannot exceed the integer size, you could store the total sum of each array, and check whether it is within l of the total sum of the search array, instead of using averages. This would avoid having to use floats and divisions.
(You could expand this idea by also storing other properties which are easily calculated and don't take up much space to store, such as the highest and lowest value, the biggest jump, ... They could help create a fingerprint of each array that is near-unique, depending on the data.)
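Here is a short sketch of the sum-based variant (the function names are mine): precompute each array's total, keep the totals sorted, and verify only those candidates whose total is within l of the search array's total.

from bisect import bisect_left, bisect_right

def preprocess(arrays):
    # Sorted list of (total, index) fingerprints.
    return sorted((sum(a), i) for i, a in enumerate(arrays))

def search(S, arrays, fingerprints):
    l, s = len(S), sum(S)
    lo = bisect_left(fingerprints, (s - l, -1))
    hi = bisect_right(fingerprints, (s + l, len(arrays)))
    for _, i in fingerprints[lo:hi]:       # candidates with a close total
        if all(abs(arrays[i][k] - S[k]) <= 1 for k in range(l)):
            return i
    return None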
If the number of dimensions is not very small, then probably the best solution will be to build a decision tree that recursively partitions the set along different dimensions.
Each node, including the root, would be a hash table from the possible values for some dimension to either:
The list of points that match that value within tolerance, if it's small enough; or
Those same points in a similar tree partitioning on the remaining dimensions.
Since each level completely eliminates one dimension, the depth of the tree is at most l, and search takes O(l) time.
The order in which the dimensions are chosen along each path is important, of course -- the wrong choice could explode the size of the data structure, with each point appearing many times.
Since your points are "pretty different", though, it should be possible to build a tree with minimal duplication. I would try the ID3 algorithm to choose the dimensions: https://en.wikipedia.org/wiki/ID3_algorithm. That basically means you greedily choose the dimension that maximizes the overall reduction in set size, using an entropy metric.
I would personally create something like a Trie for the lookup. I said "something like" because we have up to 3 values per index that might match, so we aren't creating a decision tree but a DAG, where sometimes we have choices.
That is straightforward and will run (with backtracking) in maximum time O(k*l).
But here is the trick. Whenever we see a choice of matching states that we can go into next, we can create a merged state which tries all of them. We can create a few or a lot of these merged states. Each one will defer a choice by 1 step. And if we're careful to keep track of which merged states we've created, we can reuse the same one over and over again.
In theory we can be generating partial matches for somewhat arbitrary subsets of our arrays, which can grow exponentially in the number of arrays. In practice we are likely to wind up with only a few of these merged states. But still we can guarantee a tradeoff: more states up front runs faster later. So we optimize until we are done or have hit the limit of how much data we want to have.
Here is some proof of concept code for this in Python. It will likely build the matcher in time O(n*l) and match in time O(l). However it is only guaranteed to build the matcher in time O(n^2 * l^2) and match in time O(n * l).
import pprint

class Matcher:
    def __init__(self, arrays, optimize_limit=None):
        # These are the partial states we could be in during a match.
        self.states = [{}]
        # By state, this is what we would be trying to match.
        self.state_for = ['start']
        # By combination we could try to match for, which state it is.
        self.comb_state = {'start': 0}
        for i in range(len(arrays)):
            arr = arrays[i]
            # Set up "matched the end".
            state_index = len(self.states)
            this_state = {'matched': [i]}
            self.comb_state[(i, len(arr))] = state_index
            self.states.append(this_state)
            self.state_for.append((i, len(arr)))
            for j in reversed(range(len(arr))):
                this_for = (i, j)
                prev_state = {}
                if 0 == j:
                    prev_state = self.states[0]
                matching_values = set(arr[k] for k in range(max(j-1, 0), min(j+2, len(arr))))
                for v in matching_values:
                    if v in prev_state:
                        prev_state[v].append(state_index)
                    else:
                        prev_state[v] = [state_index]
                if 0 < j:
                    state_index = len(self.states)
                    self.states.append(prev_state)
                    self.state_for.append(this_for)
                    self.comb_state[this_for] = state_index
        # Theoretically optimization can take space
        # O(2**len(arrays) * len(arrays[0]))
        # We will optimize until we are done or hit a more reasonable limit.
        if optimize_limit is None:
            # Normally
            optimize_limit = len(self.states)**2
        # First we find all of the choices at the root.
        # This will be an array of arrays with format:
        # [state, key, values]
        todo = []
        for k, v in self.states[0].items():
            if 1 < len(v):
                todo.append([self.states[0], k, tuple(v)])
        while len(todo) and len(self.states) < optimize_limit:
            this_state, this_key, this_match = todo.pop(0)
            if this_key == 'matched':
                pass # We do not need to optimize this!
            elif this_match in self.comb_state:
                # Reuse the merged state we already built (wrapped in a
                # list so that match() can iterate over it).
                this_state[this_key] = [self.comb_state[this_match]]
            else:
                # Construct a new state that is all of these.
                new_state = {}
                for state_ind in this_match:
                    for k, v in self.states[state_ind].items():
                        if k in new_state:
                            new_state[k] = new_state[k] + v
                        else:
                            new_state[k] = v
                i = len(self.states)
                self.states.append(new_state)
                self.comb_state[this_match] = i
                self.state_for.append(this_match)
                this_state[this_key] = [i]
                for k, v in new_state.items():
                    if 1 < len(v):
                        todo.append([new_state, k, tuple(v)])
        #pp = pprint.PrettyPrinter()
        #pp.pprint(self.states)
        #pp.pprint(self.comb_state)
        #pp.pprint(self.state_for)

    def match(self, list1, ind=0, state=0):
        this_state = self.states[state]
        if 'matched' in this_state:
            return this_state['matched']
        elif ind < len(list1) and list1[ind] in this_state:
            answer = []
            for next_state in this_state[list1[ind]]:
                answer = answer + self.match(list1, ind+1, next_state)
            return answer
        else:
            return []

foo = Matcher([[1, 2, 3], [2, 3, 4]])
print(foo.match([2, 2, 3]))
Please note that I deliberately set up a situation where there are 2 matches. It reports both of them. :-)
I came up with a further approach derived from Matt Timmermans's answer: building a simple decision tree that might have some arrays in multiple branches. It works even if the error in the array I'm searching for is larger than 1.
The idea is the following: given the set of arrays As...
Pick an index and a pivot.
I fixed the pivot to a constant value that works well with my data, and tried all indices to find the best one. Trying multiple pivots might work better, but I didn't need to.
Partition As into two possibly-intersecting subsets, one for the arrays (whose index-th element is) smaller than the pivot, one for the larger arrays. Arrays very close to the pivot are added to both sets:
function partition( As, pivot, index ):
    return {
        As.filter( A => A[index] <= pivot + 1 ),
        As.filter( A => A[index] >= pivot - 1 ),
    }
Apply both previous steps to each subset recursively, stopping when a subset only contains a single element.
Here an example of a possible tree generated with this algorithm (note that A2 appears both on the left and right child of the root node):
              {A1, A2, A3, A4}
                 pivot: 15
                 index: 73
                /         \
         {A1, A2}       {A2, A3, A4}
         pivot: 7         pivot: 33
         index: 54        index: 0
          /    \           /       \
        A1      A2     {A2, A3}     A4
                        pivot: 5
                        index: 48
                         /    \
                       A2      A3
The search function then uses this as a normal decision tree: it starts from the root node and recurses either to the left or the right child depending on whether its value at index currentNode.index is greater or less than currentNode.pivot. It proceeds recursively until it reaches a leaf.
Once the decision tree is built, the time complexity is in the worst case O(n), but in practice it's probably closer to O(log(n)) if we choose good indices and pivots (and if the dataset is diverse enough) and find a fairly balanced tree.
The space complexity can be really bad in the worst case (O(2^n)), but it's closer to O(n) with balanced trees.
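For what it's worth, here is a compact sketch of this tree in Python. The split heuristic choose_split is my own stand-in (it tries every index with a median pivot); the answer instead fixes the pivot to a constant that suits its data.

def choose_split(arrays, ids):
    # Heuristic: for each index, pivot at the median and keep the split
    # whose larger side is smallest.
    best = None
    for index in range(len(arrays[ids[0]])):
        vals = sorted(arrays[i][index] for i in ids)
        pivot = vals[len(vals) // 2]
        left = sum(1 for i in ids if arrays[i][index] <= pivot + 1)
        right = sum(1 for i in ids if arrays[i][index] >= pivot - 1)
        if best is None or max(left, right) < best[0]:
            best = (max(left, right), index, pivot)
    return best[1], best[2]

def build(arrays, ids):
    if len(ids) <= 1:
        return ids                         # leaf: at most one candidate
    index, pivot = choose_split(arrays, ids)
    left = [i for i in ids if arrays[i][index] <= pivot + 1]
    right = [i for i in ids if arrays[i][index] >= pivot - 1]
    if len(left) == len(ids) or len(right) == len(ids):
        return ids                         # no useful split; stop here
    return (index, pivot, build(arrays, left), build(arrays, right))

def search(node, arrays, S):
    if isinstance(node, list):             # leaf: verify the candidates
        return [i for i in node
                if all(abs(arrays[i][k] - S[k]) <= 1 for k in range(len(S)))]
    index, pivot, left, right = node
    return search(left if S[index] <= pivot else right, arrays, S)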

How to assign many subsets to their largest supersets?

My data has a large number of sets (a few million). Each set's size is between a few members and several tens of thousands of integers. Many of those sets are subsets of larger sets (there are many such supersets). I'm trying to assign each subset to its largest superset.
Please can anyone recommend algorithm for this type of task?
There are many algorithms for generating all possible sub-sets of a set, but this type of approach is time-prohibitive given my data size (e.g. this paper or SO question).
Example of my data-set:
A {1, 2, 3}
B {1, 3}
C {2, 4}
D {2, 4, 9}
E {3, 5}
F {1, 2, 3, 7}
Expected answer: B and A are subsets of F (it's not important that B is also a subset of A); C is a subset of D; E remains unassigned.
Here's an idea that might work:
Build a table that maps each number to a sorted list of sets: sorted first by size, largest first, and then, within equal sizes, arbitrarily but in some canonical order (say, alphabetically by set name). So in your example, you'd have a table that maps 1 to [F, A, B], 2 to [F, A, D, C], 3 to [F, A, B, E], and so on. This can be implemented to take O(n log n) time, where n is the total size of the input.
For each set in the input:
fetch the lists associated with each entry in that set. So for A, you'd get the lists associated with 1, 2, and 3. The total number of lookups you'll issue over the runtime of the whole algorithm is O(n), so the runtime so far is O(n log n + n), which is still O(n log n).
Now walk down each list simultaneously. If a set is the first entry in all of the fetched lists, then it's the largest set that contains the input set. Output that association and continue with the next input list. If not, then discard the smallest item among all the items in the input lists and try again. Implementing this last bit is tricky, but you can store the heads of all lists in a heap and get (IIRC) something like O(n log k) overall runtime, where k is the maximum size of any individual set, so you can bound that at O(n log n) in the worst case.
So if I got everything straight, the runtime of the algorithm is overall O(n log n), which seems like probably as good as you're going to get for this problem.
Here is a python implementation of the algorithm:
from collections import defaultdict, deque
import heapq

def LargestSupersets(setlists):
    '''Computes, for each item in the input, the largest superset in the same input.

    setlists: A list of lists, each of which represents a set of items. Items must be hashable.
    '''
    # First, build a table that maps each element in any input setlist to a list of records
    # of the form (-size of setlist, index of setlist), one for each setlist that contains
    # the corresponding element
    element_to_entries = defaultdict(list)
    for idx, setlist in enumerate(setlists):
        entry = (-len(setlist), idx)  # cheesy way to make an entry that sorts properly -- largest first
        for element in setlist:
            element_to_entries[element].append(entry)

    # Within each entry, sort so that larger items come first, with ties broken arbitrarily by
    # the set's index
    for entries in element_to_entries.values():
        entries.sort()

    # Now build up the output by going over each setlist and walking over the entries list for
    # each element in the setlist. Since the entries list for each element is sorted largest to
    # smallest, the first entry we find that is in every entry set we pulled will be the largest
    # element of the input that contains each item in this setlist. We are guaranteed to eventually
    # find such an element because, at the very least, the item we're iterating on itself is in
    # each entries list.
    output = []
    for idx, setlist in enumerate(setlists):
        num_elements = len(setlist)
        buckets = [element_to_entries[element] for element in setlist]

        # We implement the search for an item that appears in every list by maintaining a heap and
        # a queue. We have the invariants that:
        #   1. The queue contains the n smallest items across all the buckets, in order
        #   2. The heap contains the smallest item from each bucket that has not already passed
        #      through the queue.
        smallest_entries_heap = []
        smallest_entries_deque = deque([], num_elements)
        for bucket_idx, bucket in enumerate(buckets):
            smallest_entries_heap.append((bucket[0], bucket_idx, 0))
        heapq.heapify(smallest_entries_heap)

        while (len(smallest_entries_deque) < num_elements or
               smallest_entries_deque[0] != smallest_entries_deque[num_elements - 1]):
            # First extract the next smallest entry in the queue ...
            (smallest_entry, bucket_idx, element_within_bucket_idx) = heapq.heappop(smallest_entries_heap)
            smallest_entries_deque.append(smallest_entry)

            # ... then add the next-smallest item from the bucket that we just removed an element from
            if element_within_bucket_idx + 1 < len(buckets[bucket_idx]):
                new_element = buckets[bucket_idx][element_within_bucket_idx + 1]
                heapq.heappush(smallest_entries_heap, (new_element, bucket_idx, element_within_bucket_idx + 1))

        output.append((idx, smallest_entries_deque[0][1]))

    return output
Note: don't trust my writeup too much here. I just thought of this algorithm right now, I haven't proved it correct or anything.
So you have millions of sets, with thousands of elements each. Just representing that dataset takes billions of integers. In your comparisons you'll quickly get to trillions of operations without even breaking a sweat.
Therefore I'll assume that you need a solution which will distribute across a lot of machines. Which means that I'll think in terms of https://en.wikipedia.org/wiki/MapReduce. A series of them.
Read the sets in, mapping them to k:v pairs of i: s where i is an element of the set s.
Receive a key i (an integer), along with the list of sets that contain it. Map them off to pairs (s1, s2): i, where s1 <= s2 are both sets that include i. Do not omit mapping each set to be paired with itself!
For each pair (s1, s2), count the size k of the intersection, and send off pairs s1: k, s2: k. (Only send the second if s1 and s2 are different.)
For each set s receive the set of supersets. If it is maximal, send off s: s. Otherwise send off t: s for every t that is a strict superset of s.
For each set s, receive the set of subsets, with s in the list only if it is maximal. If s is maximal, send off t: s for every t that is a subset of s.
For each set we receive the set of maximal sets that it is a subset of. (There may be many.)
There are a lot of steps for this, but at its heart it requires repeated comparisons between pairs of sets with a common element for each common element. Potentially that is O(n * n * m) where n is the number of sets and m is the number of distinct elements that are in many sets.
Here is a simple suggestion for an algorithm that might give better results based on your numbers (n = 10^6 to 10^7 sets with m = 2 to 10^5 members, a lot of super/subsets). Of course it depends a lot on your data. Generally speaking, its complexity is much worse than that of the other proposed algorithms. Maybe you could only process the sets with fewer than X (e.g. 1000) members that way, and use the other proposed methods for the rest. A sketch follows the steps below.
1. Sort the sets by their size.
2. Remove the first (smallest) set and compare it against the others from behind (largest set first).
3. Stop as soon as you find a superset and record the relation; just remove the set if no superset was found.
4. Repeat 2. and 3. for all but the last set.
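A direct transcription of this heuristic as a sketch (worst case is quadratic in the number of sets, as noted above):

def assign_supersets(named_sets):
    # named_sets: list of (name, set) pairs
    by_size = sorted(named_sets, key=lambda p: len(p[1]))
    result = {}
    for i, (name, s) in enumerate(by_size[:-1]):
        # compare against the remaining sets, largest first
        for other_name, other in reversed(by_size[i + 1:]):
            if s <= other:                 # subset test
                result[name] = other_name
                break
    return result

sets = {'A': {1, 2, 3}, 'B': {1, 3}, 'C': {2, 4}, 'D': {2, 4, 9},
        'E': {3, 5}, 'F': {1, 2, 3, 7}}
print(assign_supersets(list(sets.items())))
# {'B': 'F', 'C': 'D', 'A': 'F'} -- E stays unassigned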
If you're using Excel, you could structure it as follows:
1) Create a cartesian plot as a two-way table that has all your data sets as titles on both the side and the top
2) In a separate tab, create a row for each data set in the first column, along with a second column that counts the number of entries (e.g. F has 4), and then stack FIND(",") and MID formulas across the sheet to split out all the entries within each data set. Use the counter in the second column to do COUNTIF(">0"). Each variable you find can be your starting point in a subsequent FIND until it runs out of variables and just returns a blank.
3) Go back to your cartesian plot, and bring over the separate entries you just generated for your column titles (e.g. F is 1,2,3,7). Use an AND statement to check that each entry in your left-hand column is in your top-row data set, using an OFFSET to your separate area and your counter as the width for the OFFSET.

Data structure for dynamically changing n-length sequence with longest subsequence length query

I need to design a data structure for holding n-length sequences, with the following methods:
increasing() - returns length of the longest increasing sub-sequence
change(i, x) - adds x to i-th element of the sequence
Intuitively, this sounds like something solvable with some kind of interval tree. But I have no idea how to think of that.
I'm wondering how to use the fact that we don't need to know at all what this subsequence looks like; we only need its length...
Maybe this is something that can be used, but I'm pretty much stuck at this point.
This solves the problem only for contiguous intervals. It doesn't solve arbitrary subsequences. :-(
It is possible to implement this with time O(1) for interval and O(log(n)) for change.
First of all we'll need a heap for all of the current intervals, with the largest on top. Finding the longest interval is just a question of looking on the top of the heap.
Next we need a bunch of information for each of our n slots.
value: Current value in this slot
interval_start: Where the interval containing this point starts
interval_end: Where the interval containing this point ends
heap_index: Where to find this interval in the heap. NOTE: heap operations MUST maintain this!
And now the clever trick! We always store the value for each slot. But we only store the interval information for an interval at the point in the interval whose index is divisible by the highest power of 2. There is always only one such point for any interval, so storing/modifying this is very little work.
Then to figure out what interval a given position in the array currently falls in, we have to look at all of the neighbors that are increasing powers of 2 until we find the last one with our value. So, for instance, position 13's information might be found in any of the positions 0, 8, 12, 13, 14, 16, 32, 64, .... (And we'll take the first interval we find it in in the list 0, ..., 64, 32, 16, 8, 12, 14, 13.) This is a search of a O(log(n)) list so is O(log(n)) work.
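As an illustration, here is one way to compute that canonical point of an interval (a sketch; position 0 is divisible by every power of two, so any interval containing it is stored at 0):

def canonical_index(lo, hi):
    # The unique index in [lo, hi] divisible by the largest power of two.
    if lo == 0:
        return 0
    p = 1
    while True:
        m = (hi // (2 * p)) * (2 * p)  # largest multiple of 2p that is <= hi
        if m < lo:                     # no multiple of 2p in [lo, hi],
            return (hi // p) * p       # so fall back to multiples of p
        p *= 2

print(canonical_index(5, 9))    # 8
print(canonical_index(13, 13))  # 13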
Now how do we implement change?
Update value.
Figure out what interval we were in, and whether we were at an interval boundary.
If intervals got changed, remove the old ones from the heap. (We may remove 0, 1, or 2.)
If intervals got changed, insert the new ones into the heap. (We may insert 0, 1, or 2.)
That update is very complex, but it is a fixed number of O(log(n)) operations and so should be O(log(n)).
Let me try to explain my idea. It can be a bit simpler than implementing an interval tree, and should give the desired complexity: O(1) for increasing(), and O(log S) for change(), where S is the number of sequences (which can grow to N in the worst case, of course).
First, you need the original array: it is needed to check the borders of intervals (I will use the word interval as a synonym for sequence) after change(). Let it be A.
Second, you need a bidirectional list of intervals. Each element of this list stores the interval's left and right borders. Every increasing sequence is represented as a separate element of this list, and the intervals follow one another in the order they appear in A. Let this list be L. We need to operate on pointers to elements, so I don't know whether this is possible with iterators of a standard container.
Third, you need a priority queue that stores the lengths of all intervals in your array, so the increasing() function can be answered in O(1) time. But you also need to store a pointer to the corresponding node of L in order to look up intervals. Let this priority queue be PQ. More formally, the priority queue contains pairs (length of interval, pointer to list node), compared by length only.
Fourth, you need a tree that can retrieve the interval borders (or range) for a particular element. It can be implemented simply with std::map where the key is the left border of the interval, so with the help of map::lower_bound you can find the interval. The value should store a pointer to the interval in L. Let this map be MP.
And the next important thing: list nodes should store the indices of their corresponding elements in the priority queue, and you shouldn't operate on the priority queue without maintaining the link back to L (on every swap operation in PQ you should update the corresponding indices in L).
The change(i, x) operation can then look like this:
Find the interval where i is located, using the map. This gives you a pointer to the corresponding node in L, so you know the borders and length of the interval.
Work out which action is needed: nothing, split the interval, or glue intervals together.
Perform this action on the list and the map, keeping PQ in sync. If you need to split an interval, remove it from PQ (this is not a remove-max operation) and then add 2 new elements to PQ. Similarly, if you need to glue intervals, remove one from PQ and do increase-key on the second.
One difficulty is that PQ should support removing an arbitrary element (by index), so you can't use std::priority_queue, but it is not difficult to implement, I think.
LIS can be solved with a tree, but there is another implementation with dynamic programming, which is faster than a recursive tree.
This is a simple implementation in C++.
#include <vector>
using std::vector;

class LIS {
private:
    vector<int> seq;
public:
    LIS(vector<int> _seq) { seq = _seq; }

    int increasing() {
        int i, j;
        vector<int> lengths;
        lengths.resize(seq.size());
        for (i = 0; i < (int)seq.size(); i++) lengths[i] = 1;
        for (i = 1; i < (int)seq.size(); i++) {
            for (j = 0; j < i; j++) {
                if (seq[i] > seq[j] && lengths[i] < lengths[j] + 1) {
                    lengths[i] = lengths[j] + 1;
                }
            }
        }
        int mxx = 0;
        for (i = 0; i < (int)seq.size(); i++)
            mxx = mxx < lengths[i] ? lengths[i] : mxx;
        return mxx;
    }

    void change(int i, int x) {
        seq[i] += x;
    }
};
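For reference, the standard O(n log n) computation of the LIS length (patience sorting with binary search) may be worth considering instead of the quadratic DP above; this sketch is in Python:

from bisect import bisect_left

def lis_length(seq):
    tails = []                     # tails[k] = smallest possible tail of an
    for v in seq:                  # increasing subsequence of length k+1
        k = bisect_left(tails, v)
        if k == len(tails):
            tails.append(v)
        else:
            tails[k] = v
    return len(tails)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))   # 4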

How to "sort" elements of 2 possible values in place in linear time? [duplicate]

This question already has answers here:
Stable separation for two classes of elements in an array
(3 answers)
Suppose I have a function f and an array of elements.
The function returns A or B for any element; you could visualize the elements this way: ABBAABABAA.
I need to sort the elements according to the function, so the result is: AAAAAABBBB
The number of A values doesn't have to equal the number of B values. The total number of elements can be arbitrary (not fixed). Note that you don't sort chars; you sort objects that have a single-character representation.
A few more things:
the sort should take linear time - O(n),
it should be performed in place,
it should be a stable sort.
Any ideas?
Note: if the above is not possible, do you have ideas for algorithms sacrificing one of the above requirements?
If it has to be linear and in-place, you could do a semi-stable version. By semi-stable I mean that A or B could be stable, but not both. Similar to Dukeling's answer, but you move both iterators from the same side:
a = first A
b = first B
loop while next A exists
    if b < a
        swap a,b elements
        b = next B
        a = next A
    else
        a = next A
With the sample string ABBAABABAA, you get:
ABBAABABAA
AABBABABAA
AAABBBABAA
AAAABBBBAA
AAAAABBBBA
AAAAAABBBB
On each turn: if you make a swap you move both; if not, you just move a. This keeps A stable, but B loses its ordering. To keep B stable instead, start from the end and work your way left.
It may be possible to do it with full stability, but I don't see how.
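Here is the semi-stable algorithm above as runnable Python (a sketch; f labels each element 'A' or 'B'):

def semi_stable_partition(items, f):
    def next_index(start, label):          # first index >= start with this label
        for k in range(start, len(items)):
            if f(items[k]) == label:
                return k
        return None

    a = next_index(0, 'A')
    b = next_index(0, 'B')
    while a is not None and b is not None:
        if b < a:
            items[a], items[b] = items[b], items[a]
            b = next_index(b + 1, 'B')
        a = next_index(a + 1, 'A')
    return items

s = list("ABBAABABAA")
print("".join(semi_stable_partition(s, lambda c: c)))   # AAAAAABBBB

Both scans only ever move forward, so the total work is linear despite the nested helper.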
A stable sort might not be possible with the other given constraints, so here's an unstable sort that's similar to the partition step of quick-sort.
1. Have 2 iterators, one starting on the left, one starting on the right.
2. While there's a B at the right iterator, decrement the iterator.
3. While there's an A at the left iterator, increment the iterator.
4. If the iterators haven't crossed each other, swap their elements and repeat from 2.
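In code, that partition step looks like this (a sketch, again with f returning 'A' or 'B'):

def unstable_partition(items, f):
    i, j = 0, len(items) - 1
    while True:
        while j >= 0 and f(items[j]) == 'B':          # step 2
            j -= 1
        while i < len(items) and f(items[i]) == 'A':  # step 3
            i += 1
        if i >= j:                                    # iterators crossed
            return items
        items[i], items[j] = items[j], items[i]       # step 4

s = list("ABBAABABAA")
print("".join(unstable_partition(s, lambda c: c)))    # AAAAAABBBB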
Let's say:
Object_Array[1...N]
Type_A objs are A1, A2, ..., Ai
Type_B objs are B1, B2, ..., Bj
i + j = N

FOR i = 1 : N
    if Object_Array[i] is of Type_A
        obj_A_count = obj_A_count + 1
    else
        obj_B_count = obj_B_count + 1
LOOP
Fill the resultant array with the A objects and then the B objects, according to their respective counts.
The following should work in linear time for a doubly linked list. Because up to N insertions/deletions are involved, it may take quadratic time for arrays, though.
Find the location where the first B should be after "sorting". This can be done in linear time by counting As.
Start with 3 iterators: iterA starts from the beginning of the container, and iterB starts from the above location where As and Bs should meet, and iterMiddle starts one element prior to iterB.
With iterA skip over As, find the 1st B, and move the object from iterA to iterB->previous position. Now iterA points to the next element after where the moved element used to be, and the moved element is now just before iterB.
Continue with step 3 until you reach iterMiddle. After that all elements between first() and iterB-1 are As.
Now set iterA to iterB-1.
Skip over Bs with iterB. When A is found move it to just after iterA and increment iterA.
Continue step 6 until iterB reaches end().
This would work as a stable sort for any container. The algorithm includes O(N) insertions/deletions, which is linear time for containers with O(1) insertion/deletion, but, alas, O(N^2) for arrays. Applicability in your case depends on whether the container is an array rather than a list.
If your data structure is a linked list instead of an array, you should be able to meet all three of your constraints. You just skim through the list; accumulating and moving the "B"s is then a matter of trivial pointer changes. Pseudocode below:
sort(list) {
    node = list.head, blast = null, bhead = null
    while(node != null) {
        nextnode = node.next
        if(node.val == "a") {
            if(blast != null) {
                // move the 'a' to the front of the 'B' list
                bhead.prev.next = node, node.prev = bhead.prev
                blast.next = node.next, node.next.prev = blast
                node.next = bhead, bhead.prev = node
            }
        }
        else if(node.val == "b") {
            if(blast == null)
                bhead = blast = node
            else // accumulate the "b"s..
                blast = node
        }
        node = nextnode
    }
}
So you can do this in an array too, but the memcopies that emulate the list operations will make it quite slow for large arrays.
Firstly, assuming the array of A's and B's is either generated or read in, I wonder why not avoid this question entirely by simply applying f as the list is being accumulated, into two lists in memory that would subsequently be merged.
Otherwise, we can posit an alternative solution in O(n) time and O(1) space that may be sufficient depending on Sir Bohumil's ultimate needs:
Traverse the list and sort each segment of 1,000,000 elements in place using the permutation cycles of the segment. (Once this step is done, the list could technically be sorted in place by recursively swapping the inner blocks, e.g. ABB AAB -> AAABBB, but that may be too time-consuming without extra space.) Then traverse the list again and use the same constant space to store, in two interval trees, the pointers to each block of A's and B's. For example, with segments of 4:
ABBAABABAA => AABB AABB AA + pointers to blocks of A's and B's
Sequential access to A's or B's would be immediately available, and random access would come from using the interval tree to locate a specific A or B. One option could be to have the intervals number the A's and B's; e.g., to find the 4th A, look for the interval containing 4.
For sorting, an array of 1,000,000 four-byte elements (3.8MB) would suffice to store the indexes, using one bit in each element for recording visited indexes during the swaps; and two temporary variables the size of the largest A or B. For a list of one billion elements, the maximum combined interval trees would number 4000 intervals. Using 128 bits per interval, we can easily store numbered intervals for the A's and B's, and we can use the unused bits as pointers to the block index (10 bits) and offset in the case of B (20 bits). 4000*16 bytes = 62.5KB. We can store an additional array with only the B blocks' offsets in 4KB. Total space under 5MB for a list of one billion elements. (Space is in fact dependent on n but because it is extremely small in relation to n, for all practical purposes, we may consider it O(1).)
Time for sorting the million-element segments would be - one pass to count and index (here we can also accumulate the intervals and B offsets) and one pass to sort. Constructing the interval tree is O(nlogn) but n here is only 4000 (0.00005 of the one-billion list count). Total time O(2n) = O(n)
This should be possible with a bit of dynamic programming.
It works a bit like counting sort, but with a key difference. Make arrays of size n for both labels, count_a[n] and count_b[n], and fill them with how many A's or B's there have been before index i.
After just one loop, we can use these arrays to look up the correct index for any element in O(1). Like this:
int final_index(char id, int pos) {
    if(id == 'A')
        return count_a[pos];
    else
        return count_a[n-1] + count_b[pos];
}
Finally, to meet the total O(n) requirement, the swapping needs to be done in a smart order. One simple option is to have a recursive swapping procedure that doesn't actually perform any swapping until both elements would be placed in their correct final positions. EDIT: This is actually not true; even naive swapping will use O(n) swaps. But this recursive strategy gives you the absolute minimum number of required swaps.
Note that in the general case this would be a very bad sorting algorithm, since it has a memory requirement of O(n * element value range).
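A sketch of the whole scheme (count arrays plus cycle-following swaps; O(n) time and, as noted, O(n) extra memory):

def stable_two_way_sort(items, f):
    n = len(items)
    labels = [f(x) for x in items]
    total_a = labels.count('A')
    target = [0] * n                   # final index of the element at i
    a_seen = b_seen = 0
    for i, lab in enumerate(labels):
        if lab == 'A':
            target[i] = a_seen; a_seen += 1
        else:
            target[i] = total_a + b_seen; b_seen += 1
    # Apply the permutation by following cycles; each element moves at
    # most once into its final position, so this is O(n) swaps in total.
    for i in range(n):
        while target[i] != i:
            j = target[i]
            items[i], items[j] = items[j], items[i]
            target[i], target[j] = target[j], target[i]
    return items

s = list("ABBAABABAA")
print("".join(stable_two_way_sort(s, lambda c: c)))   # AAAAAABBBB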

Finding the best pair of elements that don't exceed a certain weight?

I have a collection of objects, each of which has a weight and a value. I want to pick the pair of objects with the highest total value subject to the restriction that their combined weight does not exceed some threshold. Additionally, I am given two arrays, one containing the objects sorted by weight and one containing the objects sorted by value.
I know how to do it in O(n²), but how can I do it in O(n)?
This is a combinatorial optimization problem, and the fact that the values are sorted means you can easily try a branch and bound approach.
I think that I have a solution that works in O(n log n) time and O(n) extra space. This isn't quite the O(n) solution you wanted, but it's still better than the naive quadratic solution.
The intuition behind the algorithm is that we want to be able to efficiently determine, for any amount of weight, the maximum value we can get with a single item that uses at most that much weight. If we can do this, we have a simple algorithm for solving the problem: iterate across the array of elements sorted by value. For each element, see how much additional value we could get by pairing a single element with it (using the values we precomputed), then find which of these pairs is maximum. If we can do the preprocessing in O(n log n) time and can answer each of the above queries in O(log n) time, then the total time for the second step will be O(n log n) and we have our answer.
An important observation we need to do the preprocessing step is as follows. Our goal is to build up a structure that can answer the question "which element with weight less than x has maximum value?" Let's think about how we might do this by adding one element at a time. If we have an element (value, weight) and the structure is empty, then we want to say that the maximum value we can get using weight at most "weight" is "value". This means that everything in the range [0, max_weight - weight) should be set to value. Otherwise, suppose that the structure isn't empty when we try adding in (value, weight). In that case, we want to say that any portion of the range [0, weight) whose value is less than value should be replaced by value.
The problem here is that when we do these insertions, there might be, on iteration k, O(k) different subranges that need to be updated, leading to an O(n2) algorithm. However, we can use a very clever trick to avoid this. Suppose that we insert all of the elements into this data structure in descending order of value. In that case, when we add in (value, weight), because we add the elements in descending order of value, each existing value in the data structure must be higher than our value. This means that if the range [0, weight) intersects any range at all, those ranges will automatically be higher than value and so we don't need to update them. If we combine this with the fact that each range we add always spans from zero to some value, the only portion of the new range that could ever be added to the data structure is the range [weight, x), where x is the highest weight stored in the data structure so far.
To summarize, assuming that we visit the (value, weight) pairs in descending order of value, we can update our data structure as follows:
If the structure is empty, record that the range [0, value) has value "value."
Otherwise, if the highest weight recorded in the structure is greater than weight, skip this element.
Otherwise, if the highest weight recorded so far is x, record that the range [weight, x) has value "value."
Notice that this means that we are always splitting ranges at the front of the list of ranges we have encountered so far. Because of this, we can think about storing the list of ranges as a simple array, where each array element tracks the upper endpoint of some range and the value assigned to that range. For example, we might track the ranges [0, 3), [3, 9), and [9, 12) as the array
3, 9, 12
If we then needed to split the range [0, 3) into [0, 1) and [1, 3), we could do so by prepending 1 to the list:
1, 3, 9, 12
If we represent this array in reverse (actually storing the ranges from high to low instead of low to high), this step of creating the array runs in O(n) time because at each point we just do O(1) work to decide whether or not to add another element onto the end of the array.
Once we have the ranges stored like this, to determine which of the ranges a particular weight falls into, we can just use a binary search to find the largest element smaller than that weight. For example, to look up 6 in the above array we'd do a binary search to find 3.
Finally, once we have this data structure built up, we can just look at each of the objects one at a time. For each element, we see how much weight is left, use a binary search in the other structure to see what element it should be paired with to maximize the total value, and then find the maximum attainable value.
Let's trace through an example. Given maximum allowable weight 10 and the objects
Weight | Value
------+------
2 | 3
6 | 5
4 | 7
7 | 8
Let's see what the algorithm does. First, we need to build up our auxiliary structure for the ranges. We look at the objects in descending order of value, starting with the object of weight 7 and value 8. This means that if we ever have at least seven units of weight left, we can get 8 value. Our array now looks like this:
Weight: 7
Value: 8
Next, we look at the object of weight 4 and value 7. This means that with four or more units of weight left, we can get value 7:
Weight: 7 4
Value: 8 7
Repeating this for the next item (weight six, value five) does not change the array: if we ever had six or more units of free space left, we would never choose this object, since we could always take the seven-value item of weight four instead. We can tell this because there is already an entry in the table whose range covers a remaining weight of six.
Finally, we look at the last item (value 3, weight 2). This means that if we ever have weight two or more free, we could get 3 units of value. The final array now looks like this:
Weight: 7 4 2
Value: 8 7 3
Finally, we just look at the objects in any order to see what the best option is. When looking at the object of weight 2 and value 3, since the maximum allowed weight is 10, we need to see how much value we can get with at most 10 - 2 = 8 weight. A binary search over the array tells us that this value is 8, so one option would give us 11 total value. If we look at the object of weight 6 and value 5, a binary search tells us that with the four units of weight remaining the best we can do is 7 units of value, for a total of 12 value. Repeating this on the next two entries doesn't turn up anything new, so the optimum found is 12, which is indeed the correct answer.
Hope this helps!
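Here's a sketch of that algorithm (the names and the self-pairing fallback are my additions; the answer glosses over the case where an item's best partner is itself):

from bisect import bisect_right

def best_pair_value(items, max_weight):
    # items: list of (weight, value) pairs
    by_value = sorted(range(len(items)), key=lambda i: items[i][1], reverse=True)
    stairs = []                                # (weight, value, index), weight descending
    for i in by_value:
        w, v = items[i]
        if not stairs or w < stairs[-1][0]:    # keep only strictly lighter entries
            stairs.append((w, v, i))
    stairs.reverse()                           # now ascending in weight and value
    weights = [s[0] for s in stairs]
    best = None
    for i, (w, v) in enumerate(items):
        k = bisect_right(weights, max_weight - w) - 1
        if k >= 0 and stairs[k][2] == i:       # best partner is the item itself,
            k -= 1                             # so fall back to the next entry
        if k >= 0:
            total = v + stairs[k][1]
            best = total if best is None else max(best, total)
    return best

items = [(2, 3), (6, 5), (4, 7), (7, 8)]
print(best_pair_value(items, 10))   # 12, pairing weights 4 and 6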
Here is an O(n) time, O(1) space solution.
Let's call an object x better than an object y if and only if (x is no heavier than y) and (x is no less valuable) and (x is lighter or more valuable). Call an object x first-choice if no object is better than x. There exists an optimal solution consisting either of two first-choice objects, or a first-choice object x and an object y such that only x is better than y.
The main tool is to be able to iterate the first-choice objects from lightest to heaviest (= least valuable to most valuable) and from most valuable to least valuable (= heaviest to lightest). The iterator state is an index into the objects by weight (resp. value) and a max value (resp. min weight) so far.
Each of the following steps is O(n).
During a scan, whenever we encounter an object that is not first-choice, we know an object that's better than it. Scan once and consider these pairs of objects.
For each first-choice object from lightest to heaviest, determine the heaviest first-choice object that it can be paired with, and consider the pair. (All lighter objects are less valuable.) Since the latter object becomes lighter over time, each iteration of the loop is amortized O(1). (See also searching in a matrix whose rows and columns are sorted.)
Code for the unbelievers. Not heavily tested.
from collections import namedtuple
from operator import attrgetter

Item = namedtuple('Item', ('weight', 'value'))
sentinel = Item(float('inf'), float('-inf'))

def firstchoicefrombyweight(byweight):
    bestsofar = sentinel
    for x in byweight:
        if x.value > bestsofar.value:
            bestsofar = x
        yield (x, bestsofar)

def firstchoicefrombyvalue(byvalue):
    bestsofar = sentinel
    for x in byvalue:
        if x.weight < bestsofar.weight:
            bestsofar = x
            yield x

def optimize(items, maxweight):
    byweight = sorted(items, key=attrgetter('weight'))
    byvalue = sorted(items, key=attrgetter('value'), reverse=True)
    maxvalue = float('-inf')
    try:
        i = firstchoicefrombyvalue(byvalue)
        y = next(i)
        for x, z in firstchoicefrombyweight(byweight):
            if z is not x and x.weight + z.weight <= maxweight:
                maxvalue = max(maxvalue, x.value + z.value)
            while x.weight + y.weight > maxweight:
                y = next(i)
            if y is x:
                break
            maxvalue = max(maxvalue, x.value + y.value)
    except StopIteration:
        pass
    return maxvalue

items = [Item(1, 1), Item(2, 2), Item(3, 5), Item(3, 7), Item(5, 8)]
for maxweight in range(3, 10):
    print(maxweight, optimize(items, maxweight))
This is similar to the knapsack problem, and I will use its naming (num for weight, val for value).
The essential part:
1. Start with a = 0 and b = n-1, assuming 0 is the index of the heaviest object and n-1 is the index of the lightest object.
2. Increase a until objects a and b satisfy the limit.
3. Compare the current solution with the best solution.
4. Decrease b by one.
5. Go to 2.
Update:
It's the knapsack problem, except there is a limit of 2 items. You basically need to decide how much space you want for the first object and how much for the other. There are n significant ways to split the available space, so the complexity is O(n). Picking the most valuable objects to fit in those spaces can be done without additional cost.
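A sketch of this in Python, with one addition the steps above leave implicit: to score each split of the budget against the best object that fits in it, we keep a prefix maximum of values over the objects sorted by weight (bisect is used for brevity; a moving pointer on the presorted arrays gives a plain O(n) scan).

from bisect import bisect_right

def best_pair_value(items, max_weight):
    items = sorted(items)                      # ascending weight
    weights = [w for w, _ in items]
    best_upto, m = [], float('-inf')
    for _, v in items:                         # best value among items[0..k]
        m = max(m, v)
        best_upto.append(m)
    best = None
    for j in range(len(items) - 1, 0, -1):     # j = heavier object of the pair
        w, v = items[j]
        p = bisect_right(weights, max_weight - w, 0, j) - 1
        if p >= 0:                             # a lighter partner exists
            total = v + best_upto[p]
            best = total if best is None else max(best, total)
    return best

print(best_pair_value([(1, 1), (2, 2), (3, 5), (3, 7), (5, 8)], 6))   # 12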
