Is JavaFX SortedList stable?

Let's say I want to sort a list twice, using different comparators each, with the first comparator taking precedence over the second. Is it OK to simply call the SortedList constructor twice, with one call for each comparator?
To clarify why I want to do this, let's say I have an unordered list of Student objects, which have the fields Age and Name. Now, the end result that I want to achieve is a list of Student where the younger students appear earlier in the list. However, where there is a group of students with the same age, I want them to appear in sorted-name order. The catch is that I'm restricted to using the comparators which are given to me - which in this case compare the students by either age or name, depending on which comparator is specified.
Sorting the list twice, first by name and then by age a la radix sort, solves the problem, but only if the sorting algorithm involved is stable. I know Collections.sort is stable, but I'm not sure if SortedList exhibits the same behaviour.
I like J Atkin's thenComparing answer, and I think this is what I will do, but now I'm just curious whether SortedList is stable.

Yes it's stable. I just checked the implementation of SortedList and discovered a class named SortHelper that performs sorting for a SortedList. In it there is a method named sort:
public <T> int[] sort(T[] a, int fromIndex, int toIndex, Comparator<? super T> c) {
    ...
    if (c == null)
        mergeSort(aux, a, fromIndex, toIndex, -fromIndex);
    ...
So SortedList uses merge sort, which is stable.
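That stability is also easy to demonstrate outside JavaFX: List.sort is documented as stable, so the two-pass, radix-style approach from the question works there too. A small sketch (the Student record and sample data are stand-ins for the classes in the question):

```java
import java.util.*;

public class StableSortDemo {
    record Student(String name, int age) { }

    // Sort by name first, then by age; List.sort is documented as stable,
    // so ties on age keep the name order established by the first pass.
    static List<Student> sortTwice(List<Student> input) {
        List<Student> students = new ArrayList<>(input);
        students.sort(Comparator.comparing(Student::name));   // secondary key first
        students.sort(Comparator.comparingInt(Student::age)); // primary key second
        return students;
    }

    public static void main(String[] args) {
        // Within the same age (20), Alice stays before Bob thanks to stability.
        System.out.println(sortTwice(List.of(
                new Student("Carol", 21),
                new Student("Alice", 20),
                new Student("Bob", 20))));
    }
}
```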
All that is well and good; however, a cleaner way would be to combine the comparators using thenComparing:
sortAge.thenComparing(sortName)
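For completeness, a runnable sketch of that single-pass version (the Student record and the comparator variable names are stand-ins for those described in the question):

```java
import java.util.*;

public class ThenComparingDemo {
    record Student(String name, int age) { }

    // Combine the two given comparators: age takes precedence, name breaks ties.
    static List<Student> sortByAgeThenName(List<Student> input) {
        Comparator<Student> sortAge = Comparator.comparingInt(Student::age);
        Comparator<Student> sortName = Comparator.comparing(Student::name);
        List<Student> out = new ArrayList<>(input);
        out.sort(sortAge.thenComparing(sortName));
        return out;
    }

    public static void main(String[] args) {
        // Bob and Alice are both 20, so they come out in name order.
        System.out.println(sortByAgeThenName(List.of(
                new Student("Carol", 21),
                new Student("Bob", 20),
                new Student("Alice", 20))));
    }
}
```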

Related

An effective way to perform a prefix search on ranked (sorted) list?

I have a large list of some elements sorted by their probabilities:
data class Element(val value: String, val probability: Float)
val sortedElements = listOf(
    Element("dddcccdd", 0.7f),
    Element("aaaabb", 0.2f),
    Element("bbddee", 0.1f)
)
Now I need to perform prefix searches on this list to find items that start with one prefix, then with the next prefix, and so on (the elements still need to be sorted by probability):
val filteredElements1 = sortedElements
    .filter { it.value.startsWith("aa") }
val filteredElements2 = sortedElements
    .filter { it.value.startsWith("bb") }
Each "request" of elements filtered by some prefix takes O(n) time, which is too slow in case of a large list.
If I didn't care about the order of the elements (their probabilities), I could sort the elements lexicographically and perform a binary search: sorting takes O(n*log n) time and each request -- O(log n) time.
Is there any way to speed up the execution of these operations without losing the sorting (probability) of elements at the same time? Maybe there is some kind of special data structure that is suitable for this task?
You can read more about the Trie data structure here: https://en.wikipedia.org/wiki/Trie
It could be really useful for your use case.
LeetCode also has a very detailed explanation of it, which you can find here: https://leetcode.com/articles/implement-trie-prefix-tree/
Hope this helps.
If your list does not change often, you could create a HashMap where each existing prefix is a key referring to a collection (sorted by probability) of all entries it is a prefix of.
Getting all entries for a given prefix then takes ~O(1).
Be careful: the map gets really big, and creating it takes quite some time.
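A minimal Java sketch of that precompute-all-prefixes idea (the Element record mirrors the Kotlin data class above; the class and method names are made up):

```java
import java.util.*;

public class PrefixIndex {
    // Mirrors the Kotlin data class from the question.
    record Element(String value, float probability) { }

    private final Map<String, List<Element>> byPrefix = new HashMap<>();

    // Build the index once: every prefix of every value maps to the elements
    // that start with it. The input is assumed to already be sorted by
    // probability, so each bucket preserves that order.
    PrefixIndex(List<Element> sortedElements) {
        for (Element e : sortedElements) {
            for (int len = 1; len <= e.value().length(); len++) {
                byPrefix.computeIfAbsent(e.value().substring(0, len),
                        k -> new ArrayList<>()).add(e);
            }
        }
    }

    // ~O(1) lookup, at the cost of O(total characters) extra memory.
    List<Element> search(String prefix) {
        return byPrefix.getOrDefault(prefix, List.of());
    }
}
```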

What is an efficient algorithm / way to find the intersection between 2 ArrayLists? (I am using Java 8)

I have 2 array lists which contains a custom object Stock.
public class Stock {
    private String companyName;
    private double stockPrice;
    // getters and setters
}
List1 contains Stock objects. List2 also contains Stock objects.
List 1 and List 2 are the same size. There are some Stock objects in List 1 which are the same as those present in List 2. I need to get those common objects out of List 2; in other words, get the intersection of List 1 and List 2. I am trying to find out if there is any direct way in Java 8 which gives this result efficiently. If not, how do I construct an efficient algorithm in terms of time complexity and space complexity? Help is highly appreciated.
List<Stock> intersect = list1.stream()
        .filter(list2::contains)
        .collect(Collectors.toList());
CREDIT: Fat_FS answer at Intersection and union of ArrayLists in Java.
Make sure you override the equals and hashCode methods (assuming you want to compare the objects' contents for equality).
Convert list2 to a Set for better performance: contains then becomes O(1) on average instead of O(n).
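Putting those two suggestions together, a sketch might look like this (the Stock record and the sample company names are made up for illustration; a record generates equals and hashCode from its fields automatically):

```java
import java.util.*;
import java.util.stream.*;

public class IntersectDemo {
    // Record gives us field-based equals/hashCode, which the question needs
    // for "same contents" to count as equal.
    record Stock(String companyName, double stockPrice) { }

    static List<Stock> intersection(List<Stock> list1, List<Stock> list2) {
        // Hash the second list first: O(n) total lookups instead of O(n^2).
        Set<Stock> inList2 = new HashSet<>(list2);
        return list1.stream()
                .filter(inList2::contains)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Stock> a = List.of(new Stock("ACME", 10.0), new Stock("Globex", 20.0));
        List<Stock> b = List.of(new Stock("Globex", 20.0), new Stock("Initech", 5.0));
        System.out.println(intersection(a, b)); // only Globex appears in both
    }
}
```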

Hash Tables and Separate Chaining: How do you know which value to return from the bucket's list?

We're learning about hash tables in my data structures and algorithms class, and I'm having trouble understanding separate chaining.
I know the basic premise: each bucket has a pointer to a Node that contains a key-value pair, and each Node contains a pointer to the next (potential) Node in the current bucket's mini linked list. This is mainly used to handle collisions.
Now, suppose for simplicity that the hash table has 5 buckets. Suppose I wrote the following lines of code in my main after creating an appropriate hash table instance.
myHashTable["rick"] = "Rick Sanchez";
myHashTable["morty"] = "Morty Smith";
Let's imagine whatever hashing function we're using just so happens to produce the same bucket index for both string keys rick and morty. Let's say that bucket index is index 0, for simplicity.
So at index 0 in our hash table, we have two nodes with values of Rick Sanchez and Morty Smith, in whatever order we decide to put them in (the first pointing to the second).
When I want to display the corresponding value for rick, which is Rick Sanchez per our code here, the hashing function will produce the bucket index of 0.
How do I decide which node needs to be returned? Do I loop through the nodes until I find the one whose key matches rick?
To resolve hash table conflicts, that is, to put or get an item whose hash value collides with another one's, you end up falling back on the data structure that backs each bucket; this is generally a linked list. A collision is the worst case for the hash table, and you end up with an O(n) operation to reach the correct item in the linked list. That is exactly the loop you described: it searches for the item with the matching key. But in implementations that back a bucket with a balanced tree, the search can take O(log n) time, as in the Java 8 implementation.
As JEP 180: Handle Frequent HashMap Collisions with Balanced Trees says:
The principal idea is that once the number of items in a hash bucket
grows beyond a certain threshold, that bucket will switch from using a
linked list of entries to a balanced tree. In the case of high hash
collisions, this will improve worst-case performance from O(n) to
O(log n).
This technique has already been implemented in the latest version of
the java.util.concurrent.ConcurrentHashMap class, which is also slated
for inclusion in JDK 8 as part of JEP 155. Portions of that code will
be re-used to implement the same idea in the HashMap and LinkedHashMap
classes.
I strongly suggest always looking at an existing implementation. To name one, you could look at the Java 7 implementation. It will improve your code-reading skills, and reading code is something you do at least as often as writing it. I know it is more effort, but it pays off.
For example, take a look at the Hashtable.get method from Java 7:
public synchronized V get(Object key) {
    Entry<?,?> tab[] = table;
    int hash = key.hashCode();
    int index = (hash & 0x7FFFFFFF) % tab.length;
    for (Entry<?,?> e = tab[index] ; e != null ; e = e.next) {
        if ((e.hash == hash) && e.key.equals(key)) {
            return (V)e.value;
        }
    }
    return null;
}
Here we see that if ((e.hash == hash) && e.key.equals(key)) is trying to find the correct item with the matching key.
And here is the full source code: Hashtable.java
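To see the "loop through the bucket" behaviour from the outside, here is a small demonstration using a key type whose hashCode deliberately always collides (BadKey is made up for illustration). Every key lands in the same bucket, so each lookup must walk the bucket and compare keys with equals, exactly like the loop above:

```java
import java.util.*;

public class CollisionDemo {
    // A key whose hashCode is deliberately constant: every instance hashes
    // to the same bucket, so only equals() can tell the entries apart.
    record BadKey(String name) {
        @Override
        public int hashCode() { return 0; } // force collisions
    }

    public static void main(String[] args) {
        Map<BadKey, String> map = new HashMap<>();
        map.put(new BadKey("rick"), "Rick Sanchez");
        map.put(new BadKey("morty"), "Morty Smith");

        // Both keys collide, yet each lookup still finds its own value,
        // because the bucket scan compares keys with equals().
        System.out.println(map.get(new BadKey("rick")));  // Rick Sanchez
        System.out.println(map.get(new BadKey("morty"))); // Morty Smith
    }
}
```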

Filtering subsets using Linq

Imagine I have a very long enumeration, too big to reasonably convert to a list. Imagine also that I want to remove duplicates from it. Lastly, imagine that I know that only a small subset of the initial enumeration could possibly contain duplicates. The last point makes the problem practical.
Basically I want to filter out the list based on some predicate and only call Distinct() on that subset, but also recombine with the enumeration where the predicate returned false.
Can anyone think of a good idiomatic Linq way of doing this? I suppose the question boils down to the following:
With Linq how can you perform selective processing on a predicated enumeration and recombine the result stream with the rejected cases from the predicate?
You can do it by traversing the list twice, once to apply the predicate and dedup, and a second time to apply the negation of the predicate. Another solution is to write your own variant of the Where extension method that pushes non-matching entries into a buffer on the side:
// Note: extension methods must be declared in a static class.
static IEnumerable<T> WhereTee<T>(this IEnumerable<T> input, Predicate<T> pred, List<T> buffer)
{
    foreach (T t in input)
    {
        if (pred(t))
        {
            yield return t;
        }
        else
        {
            buffer.Add(t);
        }
    }
}
Can you give a little more detail on how you would like to recombine the elements?
One way I can think of solving this is by using the Zip operator from .NET 4.0, like this:
var initialList = new List<int>();
var rejectedElements = initialList.Where(x => !aPredicate(x));
var acceptedElements = initialList.Where(x => aPredicate(x));
var result = acceptedElements.Zip(rejectedElements, (accepted, rejected) => new { accepted, rejected });
This will create a list of pairs of accepted and rejected elements, but the size of the result will be constrained by the shorter of the two inputs.
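For comparison, the partition-dedup-recombine shape can also be sketched in Java streams. Note this version materializes both partitions rather than streaming lazily, so it only illustrates the recombination, not the memory behaviour the question asks for; all names are made up:

```java
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.*;

public class PartitionDedup {
    // Dedup only the elements matching the predicate, then recombine them
    // with the untouched non-matching elements. Relative order across the
    // two groups is not preserved, matching the trade-off discussed above.
    static <T> List<T> dedupWhere(List<T> input, Predicate<T> pred) {
        Map<Boolean, List<T>> parts = input.stream()
                .collect(Collectors.partitioningBy(pred));
        return Stream.concat(
                parts.get(true).stream().distinct(), // dedup only this subset
                parts.get(false).stream())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Only values < 5 can contain duplicates in this sample.
        System.out.println(dedupWhere(List.of(1, 1, 2, 7, 8), n -> n < 5));
    }
}
```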

Best data structure to retrieve by max values and ID?

I have quite a big amount of fixed size records. Each record has lots of fields, ID and Value are among them. I am wondering what kind of data structure would be best so that I can
locate a record by ID(unique) very fast,
list the 100 records with the biggest values.
Max-heap seems work, but far from perfect; do you have a smarter solution?
Thank you.
A hybrid data structure will most likely be best. For efficient lookup by ID a good structure is obviously a hash-table. To support top-100 iteration a max-heap or a binary tree is a good fit. When inserting and deleting you just do the operation on both structures. If the 100 for the iteration case is fixed, iteration happens often and insertions/deletions aren't heavily skewed to the top-100, just keep the top 100 as a sorted array with an overflow to a max-heap. That won't modify the big-O complexity of the structure, but it will give a really good constant factor speed-up for the iteration case.
I know you want a pseudo-code algorithm, but in Java, for example, I would use a TreeSet and add all the records as (ID, value) pairs with a comparator on value.
The tree keeps them sorted by value, so querying the first 100 gives you the top 100. Retrieving by ID is straightforward.
TreeSet is backed by a red-black tree, which is a self-balancing binary search tree.
Max heap would match the second requirement, but hash maps or balanced search trees would be better for the first one. Make the choice based on frequency of these operations. How often would you need to locate a single item by ID and how often would you need to retrieve top 100 items?
Pseudo code:
add(Item t)
{
    // Add the same object instance to both data structures
    heap.add(t);
    hash.add(t);
}

remove(int id)
{
    heap.removeItemWithId(id); // this is gonna be slow
    hash.remove(id);
}

getTopN(int n)
{
    return heap.topNitems(n);
}

getItemById(int id)
{
    return hash.getItemById(id);
}

updateValue(int id, String value)
{
    Item t = hash.getItemById(id);
    // now t is the same object referred to by the heap and hash
    t.value = value;
    // updated both; note the heap must re-sift t if value affects its ordering
}
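A concrete Java sketch of that hash-plus-tree hybrid (Rec is a made-up record type; a TreeSet with a value-descending comparator stands in for the heap, so top-N reads come out already sorted):

```java
import java.util.*;
import java.util.stream.*;

public class RecordIndex {
    // Hypothetical fixed-size record shape from the question.
    record Rec(int id, int value) { }

    private final Map<Integer, Rec> byId = new HashMap<>();
    // Sorted by value descending, with id as a tie-breaker so distinct
    // records with equal values are not treated as duplicates.
    private final NavigableSet<Rec> byValue = new TreeSet<>(
            Comparator.comparingInt(Rec::value).reversed()
                      .thenComparingInt(Rec::id));

    void add(Rec r) {            // O(log n)
        Rec old = byId.put(r.id(), r);
        if (old != null) byValue.remove(old); // keep both views in sync
        byValue.add(r);
    }

    Rec getById(int id) {        // O(1) expected
        return byId.get(id);
    }

    List<Rec> topN(int n) {      // O(n) to read the first n entries
        return byValue.stream().limit(n).collect(Collectors.toList());
    }
}
```

Both views hold the same Rec instances, so memory overhead is per-reference, not per-record.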
