Does Java optimize new HashSet(someHashSet).contains() to be O(1)? - performance

This class is an example of where the issue arises:
public class ContainsSet {
    private static HashSet<E> myHashSet;
    [...]
    public static Set<E> getMyHashSet() {
        return new HashSet<E>(myHashSet);
    }
    public static boolean doesMyHashSetContain(E e) {
        return myHashSet.contains(e);
    }
}
Now imagine two possible functions:
boolean method1() {
    return ContainsSet.getMyHashSet().contains(someE);
}

boolean method2() {
    return ContainsSet.doesMyHashSetContain(someE);
}
My question is whether, after Java's optimizations, method 1 will have the same time complexity as method 2.
(I used HashSet instead of just Set to emphasize that myHashSet.contains(someE) has complexity O(1).)
Without optimization it would not. Although .contains() has complexity O(1), the new HashSet<E>(myHashSet) has complexity O(n), which would give method 1 a complexity of O(n) + O(1) = O(n), which is horrible compared to the beloved O(1).
The reason this issue is important is that I was taught not to return lists or sets if you do not want an external class to be able to change their contents. Returning a copy is an obvious solution, but it can be really time-consuming.

No, javac does not (and cannot) optimize this away; it is required to emit byte code that matches your source at this level. And the JVM is nowhere near intelligent enough to optimize it away either: it cannot prove that constructing the copy has no observable side effects.
Don't return a copy of the HashSet if you want immutability. Wrap it in an unmodifiable wrapper: Collections.unmodifiableSet(myHashSet)
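For example, a minimal sketch of the getter rewritten this way (using String as a stand-in for the question's element type E):

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class ContainsSet {
    private static HashSet<String> myHashSet = new HashSet<>();

    // Returns a read-only view in O(1); no copy is made, and callers cannot modify it.
    public static Set<String> getMyHashSet() {
        return Collections.unmodifiableSet(myHashSet);
    }
}

The wrapper delegates contains() straight to the underlying HashSet, so method 1 keeps its O(1) lookup.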

What can I say here but that creating a new HashSet and populating it via the constructor is expensive!
Java will not "optimize away" this work: even though you and I know it would give the same result as "passing through" the contains() call, Java cannot know this.

No. That is beyond optimization. You returned a new object and could use it somewhere else; Java is not allowed to elide it. A new HashSet will be created.
Returning a copy is not good practice anyway: it is not only time-consuming but also space-consuming. As Sean said, you can wrap the set with unmodifiableSet, or you can wrap it in your own class.
You can try this:
public static Set<E> getMyHashSet() {
    return Collections.unmodifiableSet(myHashSet);
}
Note: this method returns a view of your set, not a copy.

Related

Jsprit VRP with multiple Time Windows

I am trying to use jsprit to solve a VRP with multiple time windows. To that end I created a new constraint class which contains a Map that relates a "TimeWindowsNotAvailable" object to a Service.
The "TimeWindowsNotAvailable" class contains a List of TimeWindows during which the Service can't be performed (e.g. the customer is not at home).
The main problem is that newAct.getArrTime() is always 0.0, although you can see in the solution of the VRP that the arrival time is not 0.0.
Does anybody have an idea how I can fix this issue, or are multiple time windows much harder to implement?
public class TimeConstraint implements HardActivityStateLevelConstraint {

    private Map<Service, TimeWindowsNotAvailable> notAvailableMap;
    private RouteAndActivityStateGetter states;
    private VehicleRoutingTransportCosts routingCosts;

    public TimeConstraint() {
        super();
    }

    public boolean checkDepTime(Service service, Double depTime) {
        TimeWindowsNotAvailable timeWindowsNotAvailable = notAvailableMap.get(service);
        if (timeWindowsNotAvailable == null) return true;
        System.out.println(depTime);
        return timeWindowsNotAvailable.isAvailable(depTime);
    }

    public void setNotAvailableMap(Map<Service, TimeWindowsNotAvailable> notAvailableMap) {
        this.notAvailableMap = notAvailableMap;
    }

    @Override
    public ConstraintsStatus fulfilled(JobInsertionContext iFacts, TourActivity prevAct, TourActivity newAct, TourActivity nextAct, double prevActDepTime) {
        Service currentService = (Service) iFacts.getJob();
        if (checkDepTime(currentService, newAct.getArrTime())) return ConstraintsStatus.FULFILLED; // newAct.getArrTime() is always 0.0
        return ConstraintsStatus.NOT_FULFILLED;
    }
}
You cannot yet model multiple time windows out of the box, but it is going to be implemented. For the time being you can implement your own. Assume, for example, that you have the following two time windows for a service: (e1,l1), (e2,l2), where e means earliest operation start and l latest. If l1 < e2, it is comparably "easy" to implement. Just look at how I implemented single hard time windows, i.e. at the TimeWindowConstraint and the practical time window state updater. You probably only need minor modifications of these classes: copy them, add multiple time windows, and register the two new classes with your StateManager and ConstraintManager (do not forget to deactivate the default time window constraint/state updater).
The newAct does not have an arrTime yet, because it has not been inserted into the route and the best insertion position is still to be determined (by checking constraints and calculating marginal insertion costs). But you can easily calculate it as follows:
double newActArrTime = prevActDepTime + routingCosts.getTransportTime(prevAct.getLocationId(), newAct.getLocationId(), prevActDepTime, iFacts.getNewDriver(), iFacts.getNewVehicle());
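Putting those two pieces together, the constraint from the question could be written roughly like this (a sketch only; it reuses the question's TimeWindowsNotAvailable helper and routingCosts field, and the exact jsprit method signatures may differ between versions):

@Override
public ConstraintsStatus fulfilled(JobInsertionContext iFacts, TourActivity prevAct, TourActivity newAct, TourActivity nextAct, double prevActDepTime) {
    Service currentService = (Service) iFacts.getJob();
    // newAct has no arrival time yet, so derive it from the previous activity's
    // departure time plus the transport time between the two locations.
    double newActArrTime = prevActDepTime + routingCosts.getTransportTime(
            prevAct.getLocationId(), newAct.getLocationId(),
            prevActDepTime, iFacts.getNewDriver(), iFacts.getNewVehicle());
    if (checkDepTime(currentService, newActArrTime)) return ConstraintsStatus.FULFILLED;
    return ConstraintsStatus.NOT_FULFILLED;
}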

Avoiding duplicate code when performing operation on different object properties

I have recently run into a problem which has had me thinking in circles. Assume that I have an object of type O with properties O.A and O.B. Also assume that I have a collection of instances of type O, where O.A and O.B are defined for each instance.
Now assume that I need to perform some operation (like sorting) on a collection of O instances using either O.A or O.B, but not both at any given time. My original solution is as follows.
Example -- just for demonstration, not production code:
public class O {
    int A;
    int B;
}

public static class Utils {
    public static void SortByA(O[] collection) {
        // Sort the objects in the collection using O.A as the key.
        // Note: this is custom sorting logic, not a one-line call to a built-in sort method.
    }

    public static void SortByB(O[] collection) {
        // Sort the objects in the collection using O.B as the key. Same logic as above.
    }
}
What I would love to do is this...
public static void SortAgnostic(O[] collection, FieldRepresentation x /* some non-bool, non-int value representing whether to choose O.A or O.B as the sorting key */) {
    // Sort by whatever "x" represents...
}
... but creating a new, highly-specific type that I will have to maintain just to avoid duplicating a few lines of code seems unnecessary to me. Perhaps I am incorrect on that (and I am sure someone will correct me if that statement is wrong :D), but that is my current thought nonetheless.
Question: What is the best way to implement this method? The logic that I have to implement is difficult to break down into smaller methods, as it is already fairly optimized. At the root of the issue is the fact that I need to perform the same operation using different properties of an object. I would like to stay away from using codes/flags/etc. in the method signature if possible so that the solution can be as robust as possible.
Note: When answering this question, please approach it from an algorithmic point of view. I am aware that some language-specific features may be suitable alternatives, but I have encountered this problem before and would like to understand it from a relatively language-agnostic viewpoint. Also, please do not constrain responses to sorting solutions only, as I have only chosen it as an example. The real question is how to avoid code duplication when performing an identical operation on two different properties of an object.
"The real question is how to avoid code duplication when performing an identical operation on two different properties of an object."
This is a very good question as this situation arises all the time. I think, one of the best ways to deal with this situation is to use the following pattern.
public class O {
    int A;
    int B;
}

public void doOperationX1() {
    doOperationX(/* something to indicate that property A should be used */);
}

public void doOperationX2() {
    doOperationX(/* something to indicate that property B should be used */);
}

private void doOperationX(input) {
    // actual work is done here
}
In this pattern, the actual implementation is performed in a private method, which is called by public methods, with some extra information. For example, in this case, it can be
doOperationX(A), or doOperationX(B), or something like that.
My Reasoning: In my opinion this pattern is optimal as it achieves two main requirements:
It keeps the public interface descriptive and clear, as it keeps operations separate, and avoids flags etc that you also mentioned in your post. This is good for the client.
From the implementation perspective, it prevents duplication, as it is in one place. This is good for the development.
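In Java, the "something to indicate which property to use" could simply be an enum, keeping the single private method as the only place that knows the algorithm. A sketch (the operation body is left abstract, and putting the methods on O itself is just for brevity):

public class O {
    int A;
    int B;

    enum Property { A, B }

    public void doOperationByA() { doOperation(Property.A); }
    public void doOperationByB() { doOperation(Property.B); }

    // Single implementation; the enum only selects which field is read.
    private void doOperation(Property p) {
        int value = (p == Property.A) ? A : B;
        // ... actual work using 'value' goes here ...
    }
}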
A simple way to approach this, I think, is to internalize the behavior of choosing the sort field in the class O itself. That way the solution can stay language-agnostic.
The implementation in Java could be using an Abstract class for O, where the purpose of the abstract method getSortField() would be to return the field to sort by. All that the invocation logic would need to do is to implement the abstract method to return the desired field.
O o = new O() {
    public int getSortField() {
        return A;
    }
};
The problem might be reduced to obtaining the value of the specified field from the given object so that it can be used for sorting purposes:
TField getValue(TEntity entity, string fieldName)
{
    // Return the value of the named field from entity.
    // The implementation depends on the language of choice, possibly with
    // some sort of reflection support.
}
This method can be used to substitute comparisons within the sorting algorithm,
if (getValue(o[i], "A") > getValue(o[j], "A"))
{
    swap(i, j);
}
The field name can then be parametrized, as,
public static void SortAgnostic(O[] collection, string fieldName)
{
    if (getValue(collection[i], fieldName) > getValue(collection[j], fieldName))
    {
        swap(i, j);
    }
    ...
}
which you can use like SortAgnostic(collection, "A").
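In Java, for instance, such a getValue helper could be sketched with reflection (illustration only: reflective field access is slow, and this assumes the field is an int on class O):

import java.lang.reflect.Field;

static int getValue(O entity, String fieldName) {
    try {
        Field field = O.class.getDeclaredField(fieldName);
        field.setAccessible(true);   // allow access to non-public fields
        return field.getInt(entity); // assumes an int field
    } catch (ReflectiveOperationException e) {
        throw new IllegalArgumentException("No such int field: " + fieldName, e);
    }
}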
Some languages allow you to express the field in a more elegant way,
public static void SortAgnostic(O[] collection, Expression fieldExpression)
{
    if (getValue(collection[i], fieldExpression) >
        getValue(collection[j], fieldExpression))
    {
        swap(i, j);
    }
    ...
}
which you can use like SortAgnostic(collection, entity => entity.A).
And yet another option can be passing a pointer to a function which will return the value of the field needed,
public static void SortAgnostic(O[] collection, Function getValue)
{
    if (getValue(collection[i]) > getValue(collection[j]))
    {
        swap(i, j);
    }
    ...
}
which given a function,
TField getValueOfA(TEntity entity)
{
    return entity.A;
}
and passing it like SortAgnostic(collection, getValueOfA).
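In Java 8 and later the "pointer to a function" can be a lambda or method reference, which also lets the standard library do the sorting. A brief illustration, assuming direct access to the A and B fields of O:

import java.util.Arrays;
import java.util.Comparator;
import java.util.function.ToIntFunction;

static void sortAgnostic(O[] collection, ToIntFunction<O> key) {
    // Comparator.comparingInt builds the comparison from the supplied key function.
    Arrays.sort(collection, Comparator.comparingInt(key));
}

// usage: sortAgnostic(collection, o -> o.A);  or  sortAgnostic(collection, o -> o.B);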
"... but creating a new, highly-specific type that I will have to maintain just to avoid duplicating a few lines of code seems unnecessary to me"
That is why you should use available tools, like frameworks or other kinds of code libraries, that already provide the solution you need.
When some mechanism is common, that means it can be moved to a higher level of abstraction. When you cannot find a proper solution, try to create your own. Think of the operation as something that is not part of the class's core functionality. Sorting is only a feature, which is why it should not be part of your class in the first place. Try to keep the class as simple as possible.
Do not worry prematurely about whether something is too small to be worth extracting; focus on how it will finally be used. If you use one type of sorting very often, just create a definition of it so you can reuse it. You do not necessarily have to create a util class and then call it; sometimes the base functionality enclosed in a util class is fair enough.
I assume that you use Java.
In your case the wheel has already been invented, in the form of Collections.sort(List, Comparator).
To make use of it you could create an enum type that implements the Comparator interface with predefined sorting orders, as sketched below.
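A minimal sketch of that enum idea, assuming the O class from the question with int fields A and B:

import java.util.Collections;
import java.util.Comparator;

enum OSort implements Comparator<O> {
    BY_A { public int compare(O x, O y) { return Integer.compare(x.A, y.A); } },
    BY_B { public int compare(O x, O y) { return Integer.compare(x.B, y.B); } };
}

// usage: Collections.sort(listOfO, OSort.BY_A);

Each constant is its own predefined sort order, and the call site stays descriptive without any flags.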

Sorted hash table (map, dictionary) data structure design

Here's a description of the data structure:
It operates like a regular map with get, put, and remove methods, but has a sort method that can be called to sort the map. However, the map remembers its sorted structure, so subsequent calls to sort can be much quicker (if the structure doesn't change too much between calls to sort).
For example:
I call the put method 1,000,000 times.
I call the sort method.
I call the put method 100 more times.
I call the sort method.
The second time I call the sort method should be a much quicker operation, as the map's structure hasn't changed much. Note that the map doesn't have to maintain sorted order between calls to sort.
I understand that it might not be possible, but I'm hoping for O(1) get, put, and remove operations. Something like TreeMap provides guaranteed O(log(n)) time cost for these operations, but always maintains a sorted order (no sort method).
So what's the design of this data structure?
Edit 1 - returning the top-K entries
Although I'd enjoy hearing the answer to the general case above, my use case has gotten more specific: I don't need the whole thing sorted, just the top K elements.
Data structure for efficiently returning the top-K entries of a hash table (map, dictionary)
Thanks!
For "O(1) get, put, and remove operations" you essentially need O(1) lookup, which implies a hash function (as you know), but the requirements of a good hash function often conflict with the requirement of being easily sorted. (If you had a hash table where adjacent values mapped to the same bucket, it would degenerate to O(N) on lots of common data, which is exactly the worst case you typically want a hash function to avoid.)
I can think of how to get you 90% of the way there. Set up a hashtable alongside a parallel index that is sorted. The index has a clean part (ordered) and a dirty part (unordered). The index would map keys to the values (or references to the values stored in the hashtable - whichever suits you in terms of performance or memory use). When you add to the hashtable, the new entry is pushed onto the back of the dirty list. When you remove from the hashtable, the entry is nulled/removed from the clean and dirty parts of the index. You can sort the index, which sorts the dirty entries only, then merges them into the already sorted 'clean' part of the index. And obviously you can iterate over the index.
As far as I can see, this gives you the O(1) everywhere except on the remove operation and is still fairly simple to implement with standard containers (at least as provided by C++, Java, or Python). It also gives you the "second sort is cheaper" condition by only needing to sort the dirty index entries and then letting you do an O(N) merge. The cost of all this is obviously extra memory for the index and extra indirection when using it.
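A rough Java sketch of that layout (a hypothetical SortableMap, simplified to keep only keys in the index rather than key/value pairs):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

class SortableMap<K extends Comparable<K>, V> {
    private final HashMap<K, V> table = new HashMap<>(); // O(1) get/put
    private final List<K> clean = new ArrayList<>();     // keys already in sorted order
    private final List<K> dirty = new ArrayList<>();     // keys added since the last sort

    public V get(K key) { return table.get(key); }

    public void put(K key, V value) {
        if (table.put(key, value) == null) {
            dirty.add(key); // only brand-new keys join the dirty part
        }
    }

    public V remove(K key) {
        V old = table.remove(key);
        if (old != null) { clean.remove(key); dirty.remove(key); } // O(n), the weak spot
        return old;
    }

    // Sort only the dirty keys, then merge them into the already sorted clean list.
    public void sort() {
        dirty.sort(null); // natural ordering
        List<K> merged = new ArrayList<>(clean.size() + dirty.size());
        int i = 0, j = 0;
        while (i < clean.size() && j < dirty.size()) {
            merged.add(clean.get(i).compareTo(dirty.get(j)) <= 0 ? clean.get(i++) : dirty.get(j++));
        }
        while (i < clean.size()) merged.add(clean.get(i++));
        while (j < dirty.size()) merged.add(dirty.get(j++));
        clean.clear();
        clean.addAll(merged);
        dirty.clear();
    }

    public List<K> sortedKeys() { return clean; } // valid right after sort()
}

The second sort only pays for sorting the newly added keys plus a linear merge, which matches the "structure hasn't changed much" scenario in the question.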
Why exactly do you need a sort() function?
What you perhaps want and need is a red-black tree:
http://en.wikipedia.org/wiki/Red-black_tree
These trees automatically keep your input sorted according to a comparator you supply. They are complex, but have excellent O(log n) characteristics. Couple the tree (for ordering) with a hash map (for dictionary lookups) and you get your data structure.
In Java it is implemented as TreeMap, an implementation of SortedMap.
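For reference, a tiny TreeMap example; note that its operations are O(log n), not the O(1) the question asks for:

import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        TreeMap<String, Integer> map = new TreeMap<>(); // keeps keys sorted at all times
        map.put("b", 2);
        map.put("a", 1);
        System.out.println(map.firstKey()); // "a" -- iteration is in key order
    }
}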
What you're looking at is a hashtable with pointers in the entries to the next entry in sorted order. It's a lot like the LinkedHashMap in java except that the links are tracking a sort order rather than the insertion order. You can actually implement this totally by wrapping a LinkedHashMap and having the implementation of sort transfer the entries from the LinkedHashMap into a TreeMap and then back into a LinkedHashMap.
Here's an implementation that sorts the entries in an array list rather than transferring them to a tree map. I think the sort algorithm used by Collections.sort will do a good job of merging the new entries into the already sorted portion.
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SortaSortedMap<K extends Comparable<K>, V> implements Map<K, V> {

    private LinkedHashMap<K, V> innerMap;

    public SortaSortedMap() {
        this.innerMap = new LinkedHashMap<K, V>();
    }

    public SortaSortedMap(Map<K, V> map) {
        this.innerMap = new LinkedHashMap<K, V>(map);
    }

    public Collection<V> values() {
        return innerMap.values();
    }

    public int size() {
        return innerMap.size();
    }

    public V remove(Object key) {
        return innerMap.remove(key);
    }

    public V put(K key, V value) {
        return innerMap.put(key, value);
    }

    public Set<K> keySet() {
        return innerMap.keySet();
    }

    public boolean isEmpty() {
        return innerMap.isEmpty();
    }

    public Set<Entry<K, V>> entrySet() {
        return innerMap.entrySet();
    }

    public boolean containsKey(Object key) {
        return innerMap.containsKey(key);
    }

    public V get(Object key) {
        return innerMap.get(key);
    }

    public boolean containsValue(Object value) {
        return innerMap.containsValue(value);
    }

    public void clear() {
        innerMap.clear();
    }

    public void putAll(Map<? extends K, ? extends V> m) {
        innerMap.putAll(m);
    }

    // Rebuilds the backing LinkedHashMap in key order; iteration is sorted afterwards.
    public void sort() {
        List<Map.Entry<K, V>> entries = new ArrayList<Map.Entry<K, V>>(innerMap.entrySet());
        Collections.sort(entries, new KeyComparator());
        LinkedHashMap<K, V> newMap = new LinkedHashMap<K, V>();
        for (Map.Entry<K, V> e : entries) {
            newMap.put(e.getKey(), e.getValue());
        }
        innerMap = newMap;
    }

    private class KeyComparator implements Comparator<Map.Entry<K, V>> {
        public int compare(Entry<K, V> o1, Entry<K, V> o2) {
            return o1.getKey().compareTo(o2.getKey());
        }
    }
}
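Used roughly like this (assuming the class above; iteration order is only sorted after sort() has been called):

SortaSortedMap<String, Integer> map = new SortaSortedMap<>();
map.put("banana", 2);
map.put("apple", 1);
map.sort();            // iteration order is now sorted by key
map.put("cherry", 3);  // new entries are appended after the sorted ones
map.sort();            // mostly-sorted input, which the library merge sort handles well
for (String k : map.keySet()) {
    System.out.println(k);
}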
I don't know if there's a name, but you could store the current index of each item on the hash.
That is, you have a HashMap<Object, Pair<Integer, Object>> and a List<Object> objects.
When you put, add to the tail or head of the list and insert into the hashmap with your data and the index of insertion. This is O(1).
When you get, pull from the hashmap and ignore the index. This is O(1).
When you remove, you pull from the map. Take the index and remove from the list as well. This is O(1)
When you sort, just sort the list. Either update the indexes in the map during the sort, or update them after the sort is complete. This does not affect the O(n log n) sort, as the update is a linear step: O(n log n + n) == O(n log n).
Ordered Dictionary
Recent versions of Python (2.7, 3.1) have "ordered dictionaries" which sound like what you're describing.
The official Python "ordered dictionary" implementation is inspired by previous 3rd-party implementations, as described in the PEP 372.
References:
collections.OrderedDict documentation for Python 2.7
collections.OrderedDict documentation for Python 3.1
PEP 372
ActiveState Ordered Dictionary recipe for Python ≥ 2.4
I'm not aware of a data structure classification with that exact behavior, at least not in the Java Collections framework (nor from a nonlinear data structures class). Perhaps you can implement it, and it will henceforth be known as the RudigerMap.

Recursion algorithms: suggested patterns and practices?

I am writing a utility that reflects on two object graphs and returns a value to indicate whether the graphs are identical or not. It got me thinking: is there a generally accepted pattern for writing a recursive algorithm that returns a value from somewhere within the recursion?
My solution would probably use a ref parameter and look something like this pseudo code:
public static bool IsChanged(T current, T previous)
{
    bool isChanged = false;
    CheckChanged(current, previous, ref isChanged);
    return isChanged;
}

private static void CheckChanged(T current, T previous, ref bool isChanged)
{
    // perform recursion
    if (graphIsChanged)
        isChanged = true;
    else
        CheckChanged(current, previous, ref isChanged);
}
Is there a better / cleaner / more efficient way? Is there a general pattern for such a function?
I don't see any benefits of your version when compared to this highly trivial version:
public static bool IsChanged(T current, T previous)
{
    // perform recursion
    if (graphIsChanged)
        return true;
    else
        return IsChanged(current, previous);
}
As an added benefit, some compilers are able to use tail call optimization to turn this version into a simple loop, which is more effective.
Tail recursion isn't just more effective, it keeps you from blowing out the stack on deep recursion:
http://en.wikipedia.org/wiki/Tail_recursion
That is to say, it prevents "Stack Overflow" :)
http://en.wikipedia.org/wiki/Stack_overflow
I've always been a fan of having an actual return value from a recursive function, not just passing in a reference to a variable. I'm not really sure what you're trying to do in your sample, but why not just return a bool from CheckChanged?

Tree traversal without recursive / stack usage (C#)?

I'm working with WPF and I'm developing a complex usercontrol, which is composed of a tree with rich functionality etc.
For this purpose I used the View-Model design pattern, because some operations couldn't be achieved directly in WPF. So I take an IHierarchyItem (which is a node) and pass it to this constructor to create the tree structure:
private IHierarchyItemViewModel(IHierarchyItem hierarchyItem, IHierarchyItemViewModel parent)
{
    this.hierarchyItem = hierarchyItem;
    this.parent = parent;

    List<IHierarchyItemViewModel> l = new List<IHierarchyItemViewModel>();
    foreach (IHierarchyItem item in hierarchyItem.Children)
    {
        l.Add(new IHierarchyItemViewModel(item, this));
    }
    children = new ReadOnlyCollection<IHierarchyItemViewModel>(l);
}
The problem is that this constructor takes about 3 seconds !! for 200 items on my dual-core.
Am I doing anything wrong, or are recursive constructor calls really that slow?
Thank you very much!
OK, I found a non-recursive version myself, although it uses a Stack.
It traverses the whole tree:
Stack<MyItem> stack = new Stack<MyItem>();
stack.Push(root);
while (stack.Count > 0)
{
    MyItem taken = stack.Pop();
    foreach (MyItem child in taken.Children)
        stack.Push(child);
}
There should be nothing wrong with a recursive implementation of a tree, especially for such a small number of items. A recursive implementation is sometimes less space efficient, and slightly less time efficient, but the code clarity often makes up for it.
It would be useful for you to perform some simple profiling on your constructor. Using one of the suggestions from: http://en.csharp-online.net/Measure_execution_time you could indicate for yourself how long each piece is taking.
There is a possibility that one piece in particular is taking a long time. In any case, that might help you narrow down where you are really spending the time.
