Generating immutable cyclic data structures

Suppose I have this simple class:
public class Pair {
    public readonly object first;
    public readonly object second;

    public Pair(object first, object second) {
        this.first = first;
        this.second = second;
    }
}
It would be impossible to generate a cyclic graph of pairs.
How would you create a similar class, that is still immutable, but can be used somehow to generate cyclic graphs?

There are countless ways to represent graph structures. One such way is with a matrix: each row and column is indexed by a vertex, and each cell in the matrix represents a directed (possibly weighted) edge. A simple cyclic graph, with 0 meaning no connecting edge and 1 meaning a connecting edge, would look like this:
| 0 1 |
| 1 0 |
As with many immutable structures, the way you construct them is by returning new structures derived from the given matrices. For instance, if we wanted to take the above graph and add an edge from the first vertex back onto itself, the matrix representing that edge is just:
| 1 0 |
| 0 0 |
and to combine that with the other matrix, we just add them together.
| 0 1 |   | 1 0 |   | 1 1 |
| 1 0 | + | 0 0 | = | 1 0 |
Of course, there are many ways to represent matrices, with different tradeoffs for speed, space, and certain other operations, but that's a different question.
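For concreteness, here is a small sketch of that idea in Java (class and method names are mine, not from the question): each edge addition copies the matrix and returns a fresh graph, so every instance stays immutable while a cycle is just a set of 1-entries in the matrix.
public final class Graph {
    private final int[][] adjacency;  // adjacency[i][j] == 1 means a directed edge i -> j

    public Graph(int vertices) {
        this.adjacency = new int[vertices][vertices];
    }

    private Graph(int[][] adjacency) {
        this.adjacency = adjacency;
    }

    // Returns a new Graph with the edge added; this instance is never mutated.
    public Graph withEdge(int from, int to) {
        int[][] copy = new int[adjacency.length][];
        for (int i = 0; i < adjacency.length; i++) {
            copy[i] = adjacency[i].clone();
        }
        copy[from][to] = 1;
        return new Graph(copy);
    }

    public boolean hasEdge(int from, int to) {
        return adjacency[from][to] == 1;
    }
}
With it, new Graph(2).withEdge(0, 1).withEdge(1, 0).withEdge(0, 0) builds exactly the summed matrix above, one immutable step at a time.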

I don't think this is possible with a strictly immutable class of the type you proposed. The only thing I can think of is to add a property with a setter that checks whether the field is null and allows it to be set only if it is. That way you could leave the first field of the first object null, and once you've created the last object in the cycle, set that field to close the loop. Once it's set, it is no longer null, so the setter no longer allows it to be changed. The field could still be changed by code internal to the class, of course, but it would be essentially immutable from the outside.
Something like this (C#):
public class Pair {
    private object first;
    private object second;

    public Pair(object first, object second) {
        this.first = first;
        this.second = second;
    }

    public object First {
        get { return first; }
        set
        {
            if (first == null)
            {
                first = value;
            }
        }
    }

    // and a similar property for second
}

I would take a functional approach, passing a continuation into the ctor. Alternatively, it could take a sequence of similar elements instead (think IEnumerable as an argument).
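As a rough sketch of the continuation idea, in Java for concreteness (a C# version would take a Func<Pair, object> the same way); the extra constructor below is purely illustrative:
import java.util.function.Function;

public final class Pair {
    public final Object first;
    public final Object second;

    public Pair(Object first, Object second) {
        this.first = first;
        this.second = second;
    }

    // The continuation receives the Pair currently being constructed and
    // returns the value for its second slot, so a cycle can be closed
    // while both fields remain final ("tying the knot").
    public Pair(Object first, Function<Pair, Object> second) {
        this.first = first;
        this.second = second.apply(this); // 'this' escapes only into the caller-supplied continuation
    }
}
With this, new Pair("a", p -> new Pair("b", p)) yields two pairs that reference each other, with all fields still final.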

Related

Hashmap (O(1)) supporting joker/match-all keys

The title is not so clear, because I cannot put my problem into one sentence (if you have a better title for this question, please suggest one). I'll try to clarify my requirement with an example:
Suppose I have a table like this:
| Origin | Destination | Airline   | Free Baggage |
===================================================
| NYC    | London      | American  | 20KG         |
---------------------------------------------------
| NYC    | *           | Southwest | 30KG         |
---------------------------------------------------
| *      | *           | Southwest | 25KG         |
---------------------------------------------------
| *      | LA          | *         | 20KG         |
---------------------------------------------------
| *      | *           | *         | 15KG         |
---------------------------------------------------
and so on ...
This table describes the free baggage amount that airlines provide on different routes. You can see that some rows have a * value, meaning that they match all possible values (those values are not necessarily known).
So we have a large list of baggage rules (like the table above) and a large list of flights (whose origin, destination and airline are known), and we intend to find the baggage amount for each of the flights in the most efficient way (iterating the list is obviously not efficient, as it costs an O(N) computation per lookup). More than one rule may match a flight, but we will assume that in this case either the first match or the most specific one is preferred (whichever is simpler for you to continue with).
If there were no * signs in the table, the problem would be easy, and we could use a Hashmap or Dictionary with a Tuple of values as a key. But with the presence of those * (let's say match-all) keys, it is not so straightforward to provide a general solution.
Please note that the above example was just an example, and I need a solution that can be used for any number of keys, not just three.
Do you have any idea or implementation for this problem, with a lookup method having time complexity equal or close to O(1) like a regular hashmap (memory will not be an issue)? What would be the best possible solution?
Regarding the comments, the more I think about it, the more it looks like a relational database with indexes rather than a hashmap...
A trivial, quite easy solution could be something like an in-memory SQLite database. But it would probably be something in O(log2(n)), not O(1). The main advantage is that it's easy to set up, and if performance is good enough, it could be the final solution.
Here, the key is to use proper indexes, the LIKE operator, and of course well-defined JOIN clauses.
From scratch, I can't think of any solution that, having N rows and M columns, isn't at least in O(M)... But usually, you'll have far fewer columns than rows. Quickly - I may have skipped a detail, I'm writing this on the fly - I can propose this algorithm / container:
Data must be stored in a vector-like container VECDATA, accessed by a simple index in O(1). Think about this as a primary key in databases, and we'll call it PK. Knowing PK gives you instantly, in O(1), the required data. You'll have N rows grand total.
For each row NOT containing any *, you'll insert into a real hashmap called MAINHASH the pair (<tuple>, PK). This is your primary index, for exact results. It will be O(1), BUT the tuple you're searching for may not be in it... Obviously, you must maintain consistency between MAINHASH and VECDATA, with whatever is needed (mutexes, locks, it doesn't matter, as long as both stay consistent).
This hash contains at most N entries. Without any joker, it will act almost like a standard hashmap, except for the indirection to VECDATA. It's still O(1) in this case.
For each searchable column, you'll build a specific index, dedicated to this column.
The index has N entries. It will be a standard hashmap, but it MUST allow multiple values for a given key. That's quite a common container, so it shouldn't be an issue.
For each row, the index entry will be: ( <VECDATA value>, PK ). The container is stored in a vector of indexes, INDEX[i] (with 0<=i<M).
Same as MAINHASH, consistency must be enforced.
Obviously, all these indexes / subcontainers should be constructed when an entry is inserted into VECDATA, and saved on disk across sessions if needed - you don't want to reconstruct all this each time you start the application...
Searching a row
So, the user searches for a given tuple.
Search it in MAINHASH. If found, return it, search done.
Upgrade (see below): search also in CACHE before going to step #2.
For each tuple element tuple[0<=i<M], search in INDEX[i] for both tuple[i] (returns a vector of PK, EXACT[i]) AND for * (returns another vector of PK, FUZZY[i]).
With these two vectors, build another (temporary) hash TMPHASH, associating ( PK, integer COUNT ). It's quite simple: COUNT is initialized to 1 if the entry comes from EXACT, and to 0 if it comes from FUZZY.
For the next column, build EXACT and FUZZY again (see #2). But instead of making a new TMPHASH, you'll MERGE the results into the existing one rather than creating a new temporary hash.
The method is: if TMPHASH doesn't have this PK entry, trash the entry: it can't match at all. Otherwise, read the COUNT value, add 1 or 0 to it according to where the entry comes from, and reinject it into TMPHASH.
Once all columns are done, you'll have to analyze TMPHASH.
Analyzing TMPHASH
First, if TMPHASH is empty, then you don't have any suitable answer. Return that to the user. If it contains only one entry, same thing: return it to the user directly.
For more than one element in TMPHASH:
Parse the whole TMPHASH container, searching for the maximum COUNT. Keep in memory the PK associated with the current maximum COUNT.
Developer's choice: in case of multiple COUNTs at the same maximum value, you can either return them all, return the first one, or the last one.
COUNT is obviously always strictly lower than M - otherwise, you would have found the tuple in MAINHASH. This value, compared to M, can give a confidence mark to your result (= 100*COUNT/M % confidence).
You can also now store the original tuple searched, and the corresponding PK, in another hashmap called CACHE.
Since it would be way too complicated to properly update CACHE when adding/modifying something in VECDATA, simply purge CACHE when that occurs. It's only a cache, after all...
This is quite complex to implement if the language doesn't help you, in particular by allowing you to redefine operators and providing all the base containers, but it should work.
Exact matches / cached matches are in O(1). Fuzzy search is in O(n.M), n being the number of matching rows (and 0<=n<N, of course).
Without further research, I can't see anything better than that. It will consume an obscene amount of memory, but you said that won't be an issue.
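To make the scheme concrete, here is a condensed Java sketch of the VECDATA / MAINHASH / per-column INDEX / TMPHASH idea described above. Class and member names are mine, and the CACHE, persistence and locking parts are omitted; treat it as an illustration rather than a finished implementation.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultiIndexTable {
    private final List<String[]> vecdata = new ArrayList<>();            // VECDATA: rows by PK
    private final List<String> payload = new ArrayList<>();              // result value per PK
    private final Map<List<String>, Integer> mainHash = new HashMap<>(); // MAINHASH: exact rows only
    private final List<Map<String, List<Integer>>> index;                // INDEX[i]: column value -> PKs

    public MultiIndexTable(int columns) {
        index = new ArrayList<>();
        for (int i = 0; i < columns; i++) index.add(new HashMap<>());
    }

    public void insert(String[] row, String value) {
        int pk = vecdata.size();
        vecdata.add(row);
        payload.add(value);
        if (!List.of(row).contains("*")) mainHash.put(List.of(row), pk);
        for (int i = 0; i < row.length; i++) {
            index.get(i).computeIfAbsent(row[i], k -> new ArrayList<>()).add(pk);
        }
    }

    public String lookup(String... tuple) {
        Integer exact = mainHash.get(List.of(tuple));                    // step 1: exact match
        if (exact != null) return payload.get(exact);

        Map<Integer, Integer> tmpHash = null;                            // TMPHASH: PK -> COUNT
        for (int i = 0; i < tuple.length; i++) {
            Map<Integer, Integer> next = new HashMap<>();
            for (int pk : index.get(i).getOrDefault(tuple[i], List.of())) {   // EXACT[i]
                if (i == 0 || tmpHash.containsKey(pk)) {
                    next.put(pk, (i == 0 ? 0 : tmpHash.get(pk)) + 1);
                }
            }
            for (int pk : index.get(i).getOrDefault("*", List.of())) {        // FUZZY[i]
                if (i == 0) next.putIfAbsent(pk, 0);
                else if (tmpHash.containsKey(pk)) next.putIfAbsent(pk, tmpHash.get(pk));
            }
            tmpHash = next;                                              // merge step
        }

        int bestPk = -1, bestCount = -1;
        for (Map.Entry<Integer, Integer> e : tmpHash.entrySet()) {       // pick the most specific row
            if (e.getValue() > bestCount) { bestCount = e.getValue(); bestPk = e.getKey(); }
        }
        return bestPk < 0 ? null : payload.get(bestPk);
    }
}
With the table from the question loaded row by row, lookup("NYC", "Dallas", "Southwest") ends up with PK 1 at the highest COUNT and returns 30KG.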
I would recommend doing this with tries that are decorated with a little extra data. For routes, you want to know the lowest route ID so we can match the first available route. For flights, you want to track how many flights are left to match.
What this will allow you to do, for instance, is realize only once, partway through the match, that flights from city1 to city2 might match routes that start with (city1, city2), (city1, *), (*, city2) or (*, *), without having to repeat that logic for each route or flight.
Here is a proof of concept in Python:
import heapq
import weakref

class Flight:
    def __init__(self, fields, flight_no):
        self.fields = fields
        self.flight_no = flight_no

class Route:
    def __init__(self, route_id, fields, baggage):
        self.route_id = route_id
        self.fields = fields
        self.baggage = baggage

class SearchTrie:
    def __init__(self, value=0, item=None, parent=None):
        # value = # unmatched flights for flights
        # value = lowest route id for routes.
        self.value = value
        self.item = item
        self.trie = {}
        self.parent = None
        if parent:
            self.parent = weakref.ref(parent)

    def add_flight(self, flight, i=0):
        self.value += 1
        fields = flight.fields
        if i < len(fields):
            if fields[i] not in self.trie:
                self.trie[fields[i]] = SearchTrie(0, None, self)
            self.trie[fields[i]].add_flight(flight, i+1)
        else:
            self.item = flight

    def remove_flight(self):
        self.value -= 1
        if self.parent and self.parent():
            self.parent().remove_flight()

    def add_route(self, route, i=0):
        route_id = route.route_id
        fields = route.fields
        if i < len(fields):
            if fields[i] not in self.trie:
                self.trie[fields[i]] = SearchTrie(route_id)
            self.trie[fields[i]].add_route(route, i+1)
        else:
            self.item = route

def match_flight_baggage(route_search, flight_search):
    # Construct a heap of one search to do.
    tmp_id = 0
    todo = [((0, tmp_id), route_search, flight_search)]
    # This will hold, by flight number, the baggage.
    matched = {}
    while 0 < len(todo):
        priority, route_search, flight_search = heapq.heappop(todo)
        if 0 == flight_search.value:  # There are no flights left to match
            # Already matched all flights.
            pass
        elif flight_search.item is not None:
            # We found a match!
            matched[flight_search.item.flight_no] = route_search.item.baggage
            flight_search.remove_flight()
        else:
            for key, r_search in route_search.trie.items():
                if key == '*':  # Found wildcard.
                    for a_search in flight_search.trie.values():
                        if 0 < a_search.value:
                            heapq.heappush(todo, ((r_search.value, tmp_id), r_search, a_search))
                            tmp_id += 1
                elif key in flight_search.trie and 0 < flight_search.trie[key].value:
                    heapq.heappush(todo, ((r_search.value, tmp_id), r_search, flight_search.trie[key]))
                    tmp_id += 1
    return matched

# Sample data - the id is the position.
route_data = [
    ["NYC", "London", "American", "20KG"],
    ["NYC", "*", "Southwest", "30KG"],
    ["*", "*", "Southwest", "25KG"],
    ["*", "LA", "*", "20KG"],
    ["*", "*", "*", "15KG"],
]
routes = []
for i in range(len(route_data)):
    data = route_data[i]
    routes.append(Route(i, [data[0], data[1], data[2]], data[3]))

flight_data = [
    ["NYC", "London", "American"],
    ["NYC", "Dallas", "Southwest"],
    ["Dallas", "Houston", "Southwest"],
    ["Denver", "LA", "American"],
    ["Denver", "Houston", "American"],
]
flights = []
for i in range(len(flight_data)):
    data = flight_data[i]
    flights.append(Flight([data[0], data[1], data[2]], i))

# Convert to searches.
flight_search = SearchTrie()
for flight in flights:
    flight_search.add_flight(flight)
route_search = SearchTrie()
for route in routes:
    route_search.add_route(route)

print(match_flight_baggage(route_search, flight_search))
As Wisblade notices in his answer, for an array of N rows and M columns the best possible complexity is O(M). You can get O(1) only if you consider M to be a constant.
You can easily solve your problem in O(2^M) which is practical for a small M and is effectively O(1) if you consider M to be a constant.
Create a single hashmap which contains (as keys) strings of concatenated column values, possibly separated by some special character, e.g. a slash:
map.put("NYC/London/American", "20KG");
map.put("NYC/*/Southwest", "30KG");
map.put("*/*/Southwest", "25KG");
map.put("*/LA/*", "20KG");
map.put("*/*/*", "15KG");
Then, when you query, you try different combinations of actual data and wildcard characters. E.g. let's assume you want to query NYC/LA/Southwest; then you try the following combinations:
map.get("NYC/LA/Southwest"); // null
map.get("NYC/LA/*"); // null
map.get("NYC/*/Southwest"); // found: 30KG
If the answer in the third step was null, you would continue as follows:
map.get("NYC/*/*"); // null
map.get("*/LA/Southwest"); // null
map.get("*/LA/*"); // found: 20KG
And there still remain two options:
map.get("*/*/Southwest"); // found: 25KG
map.get("*/*/*"); // found: 15KG
Basically, for three data columns you have 8 possibilities to check in the hashmap -- not bad! And possibly you find an answer much earlier.
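As a sketch, the probing loop could look like this in Java (hypothetical WildcardLookup class; keys joined with a slash as above). Counting the bit mask upward, with the leftmost column as the most significant bit, reproduces exactly the order of the queries shown:
import java.util.HashMap;
import java.util.Map;

public class WildcardLookup {
    private final Map<String, String> rules = new HashMap<>();

    public void put(String key, String baggage) {
        rules.put(key, baggage);
    }

    // Probes all 2^M combinations of "real value or *" for the given columns.
    public String get(String... columns) {
        int m = columns.length;
        for (int mask = 0; mask < (1 << m); mask++) {
            StringBuilder key = new StringBuilder();
            for (int i = 0; i < m; i++) {
                if (i > 0) key.append('/');
                boolean wildcard = (mask & (1 << (m - 1 - i))) != 0;  // bit set -> use * for column i
                key.append(wildcard ? "*" : columns[i]);
            }
            String hit = rules.get(key.toString());
            if (hit != null) return hit;  // first hit in this order wins
        }
        return null;  // no rule matches at all
    }
}
For the example, lookup.get("NYC", "LA", "Southwest") probes the eight keys in the order listed above and returns 30KG at the third probe.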

How to get all IP addresses that are not in a given range of IP addresses

I need to be able to output all the ranges of IP addresses that are not in a given list of IP address ranges.
Is there some sort of algorithm that I can use for this kind of task and then turn into working code?
Basically I will use Salesforce Apex code, so any Java-like language will do if an example is possible.
I think the key to an easy solution is to remember that IP addresses can be treated as numbers of type long, and so they can be sorted.
I assumed the excluded ranges are given in a "nice" way, meaning no overlaps, no partial overlaps with global range and so on. You can of course add such input checks later on.
In this example I'll treat all network ranges (global, included, excluded) as instances of a NetworkRange class.
Following is the implementation of NetworkRange. Pay attention to the methods splitByExcludedRange and includes.
import java.util.Arrays;
import java.util.List;

public class NetworkRange {
    private long startAddress;
    private long endAddress;

    public NetworkRange(String start, String end) {
        startAddress = addressRepresentationToAddress(start);
        endAddress = addressRepresentationToAddress(end);
    }

    public NetworkRange(long start, long end) {
        startAddress = start;
        endAddress = end;
    }

    public String getStartAddress() {
        return addressToAddressRepresentation(startAddress);
    }

    public String getEndAddress() {
        return addressToAddressRepresentation(endAddress);
    }

    static String addressToAddressRepresentation(long address) {
        String result = String.valueOf(address % 256);
        for (int i = 1; i < 4; i++) {
            address = address / 256;
            result = String.valueOf(address % 256) + "." + result;
        }
        return result;
    }

    static long addressRepresentationToAddress(String addressRep) {
        long result = 0L;
        String[] tokens = addressRep.split("\\.");
        for (int i = 0; i < 4; i++) {
            result += Math.pow(256, i) * Long.parseLong(tokens[3-i]);
        }
        return result;
    }

    public List<NetworkRange> splitByExcludedRange(NetworkRange excludedRange) {
        if (this.startAddress == excludedRange.startAddress && this.endAddress == excludedRange.endAddress)
            return Arrays.asList();
        if (this.startAddress == excludedRange.startAddress)
            return Arrays.asList(new NetworkRange(excludedRange.endAddress+1, this.endAddress));
        if (this.endAddress == excludedRange.endAddress)
            return Arrays.asList(new NetworkRange(this.startAddress, excludedRange.startAddress-1));
        return Arrays.asList(new NetworkRange(this.startAddress, excludedRange.startAddress-1),
                new NetworkRange(excludedRange.endAddress+1, this.endAddress));
    }

    public boolean includes(NetworkRange excludedRange) {
        return this.startAddress <= excludedRange.startAddress && this.endAddress >= excludedRange.endAddress;
    }

    public String toString() {
        return "[" + getStartAddress() + "-" + getEndAddress() + "]";
    }
}
Now comes the class that calculates the network ranges that remain included. It accepts the global range in its constructor.
import java.util.ArrayList;
import java.util.List;

public class RangeProducer {
    private NetworkRange global;

    public RangeProducer(NetworkRange global) {
        this.global = global;
    }

    public List<NetworkRange> computeEffectiveRanges(List<NetworkRange> excludedRanges) {
        List<NetworkRange> effectiveRanges = new ArrayList<>();
        effectiveRanges.add(global);
        List<NetworkRange> effectiveRangesSplitted = new ArrayList<>();
        for (NetworkRange excludedRange : excludedRanges) {
            for (NetworkRange effectiveRange : effectiveRanges) {
                if (effectiveRange.includes(excludedRange)) {
                    effectiveRangesSplitted.addAll(effectiveRange.splitByExcludedRange(excludedRange));
                } else {
                    effectiveRangesSplitted.add(effectiveRange);
                }
            }
            effectiveRanges = effectiveRangesSplitted;
            effectiveRangesSplitted = new ArrayList<>();
        }
        return effectiveRanges;
    }
}
You can run the following example:
public static void main(String[] args) {
    NetworkRange global = new NetworkRange("10.0.0.0", "10.255.255.255");
    NetworkRange ex1 = new NetworkRange("10.0.0.0", "10.0.1.255");
    NetworkRange ex2 = new NetworkRange("10.1.0.0", "10.1.1.255");
    NetworkRange ex3 = new NetworkRange("10.6.1.0", "10.6.2.255");
    List<NetworkRange> excluded = Arrays.asList(ex1, ex2, ex3);
    RangeProducer producer = new RangeProducer(global);
    for (NetworkRange effective : producer.computeEffectiveRanges(excluded)) {
        System.out.println(effective);
    }
}
Output should be:
[10.0.2.0-10.0.255.255]
[10.1.2.0-10.6.0.255]
[10.6.3.0-10.255.255.255]
First, I assume you mean that you get one or more disjoint CIDR ranges as input, and need to produce the list of all CIDR ranges not including any of the ones given as input. For convenience, let's further assume that the input does not include the entire IP address space: i.e. 0.0.0.0/0. (That can be accommodated with a single special case but is not of much interest.)
I've written code analogous to this before and, though I'm not at liberty to share the code, I can describe the methodology. It's essentially a binary search algorithm wherein you bisect the full address space repeatedly until you've isolated the one range you're interested in.
Think of the IP address space as a binary tree: At the root is the full IPv4 address space 0.0.0.0/0. Its children each represent half of the address space: 0.0.0.0/1 and 128.0.0.0/1. Those, in turn, can be sub-divided to create children 0.0.0.0/2 / 64.0.0.0/2 and 128.0.0.0/2 / 192.0.0.0/2, respectively. Continue this all the way down and you end up with 2**32 leaves, each of which represents a single /32 (i.e. a single address).
Now, consider this tree to be the parts of the address space that are excluded from your input list. So your task is to traverse this tree, find each range from your input list in the tree, and cut out all parts of the tree that are in your input, leaving the remaining parts of the address space.
Fortunately, you needn't actually create all the 2**32 leaves. Each node at CIDR N can be assumed to include all nodes at CIDR N+1 and above if no children have been created for it (you'll need a flag to remember that it has already been subdivided -- i.e. is no longer a leaf -- see below for why).
So, to start, the entire address space is present in the tree, but can all be represented by a single leaf node. Call the tree excluded, and initialize it with the single node 0.0.0.0/0.
Now, take the first input range to consider -- we'll call this trial (I'll use 14.27.34.0/24 as the initial trial value just to provide a concrete value for demonstration). The task is to remove trial from excluded leaving the rest of the address space.
Start with current node pointer set to the excluded root node.
Start:
Compare the trial CIDR with current. If it is identical, you're done (but this should never happen if your input ranges are disjoint and you've excluded 0.0.0.0/0 from input).
Otherwise, if current is a leaf node (has not been subdivided, meaning it represents the entire address space at this CIDR level and below), set its sub-divided flag, and create two children for it: a left pointer to the first half of its address space, and a right pointer to the latter half. Label each of these appropriately (for the root node's children, that will be 0.0.0.0/1 and 128.0.0.0/1).
Determine whether the trial CIDR falls within the left side or the right side of current. For our initial trial value, it's to the left. Now, if the pointer on that side is already NULL, again you're done (though again that "can't happen" if your input ranges are disjoint).
If the trial CIDR is exactly equivalent to the CIDR in the node on that side, then simply free the node (and any children it might have, which again should be none if you have only disjoint inputs), set the pointer to that side NULL and you're done. You've just excluded that entire range by cutting that leaf out of the tree.
If the trial value is not exactly equivalent to the CIDR in the node on that side, set current to that side and start over (i.e. jump to Start label above).
So, with the initial input range of 14.27.34.0/24, you will first split 0.0.0.0/0 into 0.0.0.0/1 and 128.0.0.0/1. You will then drop down on the left side and split 0.0.0.0/1 into 0.0.0.0/2 and 64.0.0.0/2. You will then drop down to the left again to create 0.0.0.0/3 and 32.0.0.0/3. And so forth, until after 23 splits, you will then split 14.27.34.0/23 into 14.27.34.0/24 and 14.27.35.0/24. You will then delete the left-hand 14.27.34.0/24 child node and set its pointer to NULL, leaving the other.
That will leave you with a sparse tree containing 24 leaf nodes (after you dropped the target one). The remaining leaf nodes are marked with *:
0.0.0.0/0   (root; children are indented under their parent)
  0.0.0.0/1
    0.0.0.0/2
      0.0.0.0/3
        0.0.0.0/4
          0.0.0.0/5*
          8.0.0.0/5
            8.0.0.0/6*
            12.0.0.0/6
              12.0.0.0/7*
              14.0.0.0/7
                14.0.0.0/8
                  ... (splits continue down to the /23 level)
                    14.27.32.0/23*
                    14.27.34.0/23
                      (null)            <- was 14.27.34.0/24, cut out
                      14.27.35.0/24*
                15.0.0.0/8*
        16.0.0.0/4*
      32.0.0.0/3*
    64.0.0.0/2*
  128.0.0.0/1*
For each remaining input range, you will run through the tree again, bisecting leaf nodes when necessary, often resulting in more leaves, but always cutting out some part of the address space.
At the end, you simply traverse the resulting tree in whatever order is convenient, collecting the CIDRs of the remaining leaves. Note that in this phase you must exclude those that have previously been subdivided. Consider for example, in the above tree, if you next processed input range 14.27.35.0/24, you would leave 14.27.34.0/23 with no children, but both its halves have been separately cut out and it should not be included in the output. (With some additional complication, you could of course collapse nodes above it to accommodate that scenario as well, but it's easier to just keep a flag in each node.)
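The answer above describes the method without code; as an independent illustration (not the author's implementation), a minimal Java sketch of the lazily subdivided tree could look like this, assuming disjoint IPv4 input ranges and addresses held as longs (all names are mine):
import java.util.ArrayList;
import java.util.List;

class CidrNode {
    final long base;        // network address of this block (IPv4 as a long)
    final int prefix;       // prefix length, 0..32
    boolean subdivided;     // true once the two halves have been created
    CidrNode left, right;   // halves of this block; null means "carved out"

    CidrNode(long base, int prefix) { this.base = base; this.prefix = prefix; }

    long size() { return 1L << (32 - prefix); }

    // Carves the block rBase/rPrefix out of this subtree (inputs assumed disjoint).
    void remove(long rBase, int rPrefix) {
        if (rPrefix == prefix) return;            // removing the whole block: shouldn't happen here
        if (!subdivided) {                        // lazily create the two halves
            subdivided = true;
            left = new CidrNode(base, prefix + 1);
            right = new CidrNode(base + size() / 2, prefix + 1);
        }
        boolean goLeft = rBase < base + size() / 2;
        CidrNode side = goLeft ? left : right;
        if (side == null) return;                 // that half was already carved out
        if (side.prefix == rPrefix) {             // exact block found: cut the leaf out
            if (goLeft) left = null; else right = null;
        } else {
            side.remove(rBase, rPrefix);
        }
    }

    // Collects the CIDRs of every remaining never-subdivided leaf.
    void collect(List<String> out) {
        if (!subdivided) {
            out.add(longToIp(base) + "/" + prefix);
        } else {
            if (left != null) left.collect(out);
            if (right != null) right.collect(out);
        }
    }

    static String longToIp(long a) {
        return ((a >> 24) & 255) + "." + ((a >> 16) & 255) + "." + ((a >> 8) & 255) + "." + (a & 255);
    }

    static long ipToLong(String ip) {
        String[] p = ip.split("\\.");
        return (Long.parseLong(p[0]) << 24) | (Long.parseLong(p[1]) << 16)
                | (Long.parseLong(p[2]) << 8) | Long.parseLong(p[3]);
    }

    public static void main(String[] args) {
        CidrNode root = new CidrNode(0L, 0);                  // the whole IPv4 space
        root.remove(ipToLong("14.27.34.0"), 24);              // carve out the example range
        List<String> remaining = new ArrayList<>();
        root.collect(remaining);                              // the 24 leaves of the tree above
        remaining.forEach(System.out::println);
    }
}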
First, what you describe can be simplified to:
you have intervals of the form x.x.x.x - y.y.y.y
you want to output the intervals that are not yet "taken" in this range.
you want to be able to add or remove intervals efficiently
I would suggest the use of an interval tree, where each node stores an interval, and you can efficiently insert and remove nodes; and query for overlaps at a given point (= IP address).
If you can guarantee that there will be no overlaps, you can instead use a simple TreeSet<String>, where you must however guarantee (for correct sorting) that all strings use the xxx.xxx.xxx.xxx-yyy.yyy.yyy.yyy zero-padded format.
Once your intervals are in a tree, you can then generate your desired output, assuming that no intervals overlap, by performing a depth-first in-order traversal of your tree (which visits the intervals in sorted order), and storing the starts and ends of each visited node in a list. Given this list,
pre-pend 0.0.0.0 at the start
append 255.255.255.255 at the end
remove all duplicate IPs (which will necessarily be right next to each other in the list)
take them by pairs (the number will always be even), and there you have the intervals of free IPs, perfectly sorted.
Note that 0.0.0.0 and 255.255.255.255 are not actually valid, routable IPs. You should read the relevant RFCs if you really need to output real-world-aware IPs.
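Sticking with the long representation, here is a minimal sketch of that final gap computation (illustrative names; excluded ranges assumed disjoint and given as inclusive [start, end] pairs):
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class FreeRanges {
    // excluded: disjoint inclusive [start, end] pairs; returns the gaps inside [globalStart, globalEnd].
    public static List<long[]> complement(List<long[]> excluded, long globalStart, long globalEnd) {
        List<long[]> sorted = new ArrayList<>(excluded);
        sorted.sort(Comparator.comparingLong(r -> r[0]));     // the "treat IPs as longs and sort" step

        List<long[]> free = new ArrayList<>();
        long cursor = globalStart;                            // first address not yet accounted for
        for (long[] range : sorted) {
            if (range[0] > cursor) {
                free.add(new long[]{cursor, range[0] - 1});   // gap before this excluded range
            }
            cursor = Math.max(cursor, range[1] + 1);
        }
        if (cursor <= globalEnd) {
            free.add(new long[]{cursor, globalEnd});          // tail gap after the last excluded range
        }
        return free;
    }
}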

Comparator.compareBoolean() the same as Comparator.compare()?

How can I write this
Comparator <Item> sort = (i1, i2) -> Boolean.compare(i2.isOpen(), i1.isOpen());
to something like this (code does not work):
Comparator<Item> sort = Comparator.comparing(Item::isOpen).reversed();
The Comparator interface does not have something like Comparator.comparingBool(). Comparator.comparing returns int and not "Item".
Why can't you write it like this?
Comparator<Item> sort = Comparator.comparing(Item::isOpen);
Underneath Boolean.compareTo is called, which in turn is the same as Boolean.compare
public static int compare(boolean x, boolean y) {
    return (x == y) ? 0 : (x ? 1 : -1);
}
And this: Comparator.comparing returns int and not "Item" makes little sense; Comparator.comparing must return a Comparator<T>, and in your case it correctly returns a Comparator<Item>.
The overloads comparingInt, comparingLong, and comparingDouble exist for performance reasons only. They are semantically identical to the unspecialized comparing method, so using comparing instead of comparingXXX has the same outcome, but might have boxing overhead; the actual implications depend on the particular execution environment.
In case of boolean values, we can predict that the overhead will be negligible, as the method Boolean.valueOf will always return either Boolean.TRUE or Boolean.FALSE and never create new instances, so even if a particular JVM fails to inline the entire code, it does not depend on the presence of Escape Analysis in the optimizer.
As you already figured out, reversing a comparator is implemented by swapping the arguments internally, just like you did manually in your lambda expression.
Note that it is still possible to create a comparator fusing the reversal and an unboxed comparison without having to repeat the isOpen() expression:
Comparator<Item> sort = Comparator.comparingInt(i -> i.isOpen()? 0: 1);
but, as said, it’s unlikely to have a significantly higher performance than the Comparator.comparing(Item::isOpen).reversed() approach.
But note that if you have a boolean sort criteria and care for the maximum performance, you may consider replacing the general-purpose sort algorithm with a bucket sort variant. E.g.
If you have a Stream, replace
List<Item> result = /* stream of Item */
        .sorted(Comparator.comparing(Item::isOpen).reversed())
        .collect(Collectors.toList());
with
Map<Boolean,List<Item>> map = /* stream of Item */
        .collect(Collectors.partitioningBy(Item::isOpen,
                Collectors.toCollection(ArrayList::new)));
List<Item> result = map.get(true);
result.addAll(map.get(false));
or, if you have a List, replace
list.sort(Comparator.comparing(Item::isOpen).reversed());
with
ArrayList<Item> temp = new ArrayList<>(list.size());
list.removeIf(item -> !item.isOpen() && temp.add(item));
list.addAll(temp);
etc.
Use the comparing overload that takes both a key extractor and a key comparator:
Comparator<Item> comparator =
        Comparator.comparing(Item::isOpen, Boolean::compare).reversed();

Create 3rd vector while looping through 2 others

I'm a total newbie in C++ and I need to solve a problem with vectors. What I need is to merge two existing vectors and create a third one. While I saw several answers, the difference here is that I need vector #3 (values3) to contain not all values, but only those which are in both vector #1 (values1) and vector #2 (values2). So, if the integer 2 is in vector 1 but not in vector 2, that number does not qualify. I should use the function provided below. The lines with comments are the ones I don't know how to write; the other lines work.
void CommonValues(vector<MainClass> & values1, vector<MainClass> & values2, vector<MainClass> & values3)
{
    MainClass Class;
    string pav;
    int kiek;
    vector<MainClass>::iterator iter3; // ?
    for (vector<MainClass>::iterator iter1 = values1.begin(); iter1 != values1.end(); iter1++)
    {
        for (vector<MainClass>::iterator iter2 = values2.begin(); iter2 != values2.end(); iter2++)
        {
            if (iter1 == iter2)
            {
                pav = iter2->TakePav();
                iter3->TakePav(pav); // ?
                kiek = iter1->TakeKiek() + iter2->TakeKiek();
                iter3->TakeKie(kiek); // ?
                iter3++; // ?
            }
        }
    }
}
You can sort values1 and values2, then use std::set_intersection: http://en.cppreference.com/w/cpp/algorithm/set_intersection
Your code as it stands won't work; among other problems, you are comparing an iterator from vector 1 with an iterator from vector 2, which doesn't make any sense. If you want to do it by looping, you should iterate through one vector and check whether the value, for example *iter1, is in the 2nd vector.

Find an object which has at least 1 parameter (out of three) that matches the corresponding parameter of each object in an array

I'm creating a matching puzzle game, and I'm stuck in creating logic for this function.
Node is a class that has 3 parameters:
{
    int a;
    int b;
    int c;
}
then if I have 2 node objects, say n1 and n2, then:
(n1 == n2) if (n1.a == n2.a || n1.b == n2.b || n1.c == n2.c)
so if:
n1.a=6, n1.b=4, n1.c=3
and:
n2.a=4, n2.b=4, n2.c=5.
here ( n1 == n2 ) or n1 connects with n2 because ( n1.b == n2.b ).
The problem: I need to write logic for the function that accepts an array of node objects, and it should return a node object that can connect with all the nodes in the array. If a connecting node is impossible, it should return a null value. So the node returned should have at least 1 parameter in common with every object of the array.
I'm using ActionScript 3 but just need the logic part in either AS3 or pseudo-code.
You need to maintain a set of possible points that satisfy this condition, and filter it each time a new point from the list is added. First you will have to spawn many "any-any" points, which must be differentiated from normal ones. You start the algorithm with a single "any,any,any" point; then, whenever a point (a,b,c) is added, you check the list of existing points and drop any that are not compatible with (a,b,c). A point with an "any" coordinate gets axis-locked to one of the axes, meaning the first step produces the points "a,any,any", "any,b,any" and "any,any,c" in the list. This continues until either the whole list of nodes is processed, or there are no candidate points left.
function allconnected(nodes:Vector.<Node>):Node {
    var list:Array = [];
    list.push({a:null, b:null, c:null}); // "any,any,any" initial object
    for (var i:int = nodes.length - 1; i >= 0; i--) {
        var node:Node = nodes[i]; // current node
        if (list.length == 0) return null; // no nodes match
        for (var j:int = list.length - 1; j >= 0; j--) {
            var o:Object = list.splice(j, 1)[0]; // get the element out of the array
            var pushed:Boolean = false;
            if (o.a !== null) if (!pushed && (o.a == node.a)) {
                list.push(o);
                pushed = true;
            }
            if (o.b !== null) if (!pushed && (o.b == node.b)) {
                list.push(o);
                pushed = true;
            }
            if (o.c !== null) if (!pushed && (o.c == node.c)) {
                list.push(o);
                pushed = true;
            }
            // if this point connects by either side, and the side is defined, push it back
            if (o.a === null) list.push({a:node.a, b:o.b, c:o.c});
            if (o.b === null) list.push({a:o.a, b:node.b, c:o.c});
            if (o.c === null) list.push({a:o.a, b:o.b, c:node.c});
            // and if any side is "any", push an axis-locked object in the list
        } // this way new objects that are aligned with the current node
        // are put into the array in an already processed segment, so we won't hit
        // an infinite loop
    }
    if (list.length == 0) return null; // the last node may have emptied the list
    // okay, if we are here, then there is something left in the array
    var result:Node = new Node();
    result.a = list[0].a; // 0 if null, this is pretty much OK
    result.b = list[0].b;
    result.c = list[0].c;
    return result;
}
This code has been written on-the-fly, and there might be errors inside, so don't blindly copy and paste, please.
