Flatten a multilevel linked list through Collection framework - data-structures

Consider this example http://www.geeksforgeeks.org/flatten-a-linked-list-with-next-and-child-pointers/
I am able to flatten a multilevel linked list using collections in Java, but I am not able to establish the relationship between a given node, its child node, and its next node. In the linked example, 10 has child nodes with values 4, 20, and 13. I want to use the Collections framework in a way that lets me specify the child nodes and next nodes and then apply the flatten operation on the given input.
List<List<Integer>> obj =new LinkedList<List<Integer>>();
obj.add(Arrays.asList(10,5,12,7,11));
obj.add(Arrays.asList(4,20,13,17,6));
obj.add(Arrays.asList(2,16,9,8));
obj.add(Arrays.asList(3,19,15));
obj.stream().flatMap(e -> e.stream()).forEach(i -> System.out.print(i + " "));

I solved the problem; here is the code:
import java.util.*;
import java.lang.*;
import java.io.*;
class Node {
int data;
List<Node> child;
public Node(int data, List<Node> child) {
this.data = data;
this.child = child;
}
@Override
public String toString() {
return data+" ";
}
}
/* Name of the class has to be "Main" only if the class is public. */
class Ideone
{
public static void main (String[] args) throws java.lang.Exception
{
List<Node> obj1 = new LinkedList<Node>();
obj1.add(new Node(10, Arrays.asList(new Node(4,null),new Node(20,Arrays.asList(new Node(2, null))),
new Node(13,Arrays.asList(new Node(16, Arrays.asList(new Node(3, null))))) )));
obj1.add(new Node(5, null));
obj1.add(new Node(12,null));
obj1.add(new Node(7,Arrays.asList(new Node(17, Arrays.asList(new Node(9, Arrays.asList(new Node(19, null),
new Node(15, null))),new Node(8, null))
),new Node(6, null))
));
obj1.add(new Node(11,null));
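// Flatten breadth-first: walk the list by index, append each node's children at the tail, then clear the child reference.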
int tail=obj1.size();
for(int i=0;i<tail;i++)
{
Node n=obj1.get(i);
if(n.child!=null)
{
for(Node o:n.child)
{
obj1.add(tail, o);
tail++;
}
n.child=null;
}
}
System.out.println(obj1);
}
}
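For comparison, the flattening can also be expressed with streams over the same Node structure. This is only a sketch, not part of the original answer: it visits nodes depth-first (each node followed by its children) rather than level by level as the loop above does, and it assumes the Node class shown earlier is available in the same package.
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class FlattenSketch {
    // Depth-first flatten: each node is followed by its flattened children.
    static Stream<Node> flatten(Node n) {
        Stream<Node> self = Stream.of(n);
        if (n.child == null) {
            return self;
        }
        return Stream.concat(self, n.child.stream().flatMap(FlattenSketch::flatten));
    }

    public static void main(String[] args) {
        List<Node> obj1 = new LinkedList<>();
        obj1.add(new Node(10, Arrays.asList(new Node(4, null), new Node(20, null))));
        obj1.add(new Node(5, null));
        List<Node> flat = obj1.stream()
                .flatMap(FlattenSketch::flatten)
                .collect(Collectors.toList());
        System.out.println(flat); // depth-first order: 10, 4, 20, 5
    }
}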

Related

How to get all keys whose values are null in Java 8 using Map

I was going through How to remove a key from HashMap while iterating over it?, but my requirement is a bit different.
class Main {
public static void main(String[] args) {
Map<String, String> hashMap = new HashMap<>();
hashMap.put("RED", "#FF0000");
hashMap.put("BLACK", null);
hashMap.put("BLUE", "#0000FF");
hashMap.put("GREEN", "#008000");
hashMap.put("WHITE", null);
// I want a result like below - get all keys whose value is null
List<String> collect = hashMap.values()
.stream()
.filter(e -> e == null)
.collect(Collectors.toList());
System.out.println(collect);
// Desired result - BLACK, WHITE in a list (the code above collects the null values, not the keys)
}
}
Try this:
import java.util.*;
import java.util.stream.*;
class Main {
public static void main(String[] args) {
Map<String, String> hashMap = new HashMap<>();
hashMap.put("RED", "#FF0000");
hashMap.put("BLACK", null);
hashMap.put("BLUE", "#0000FF");
hashMap.put("GREEN", "#008000");
hashMap.put("WHITE", null);
// I want a result like below - get all keys whose value is null
List<String> collect = hashMap.keySet()
.stream()
.filter(e -> Objects.isNull(hashMap.get(e)))
.collect(Collectors.toList());
System.out.println(collect);
// Result - BLACK, WHITE in list
}
}
As pointed out in the comments, you can try this as well:
import java.util.*;
import java.util.stream.*;
class Main {
public static void main(String[] args) {
Map<String, String> hashMap = new HashMap<>();
hashMap.put("RED", "#FF0000");
hashMap.put("BLACK", null);
hashMap.put("BLUE", "#0000FF");
hashMap.put("GREEN", "#008000");
hashMap.put("WHITE", null);
// I want a result like below - get all keys whose value is null
List<String> collect = hashMap.entrySet()
.stream()
.filter(e -> Objects.isNull(e.getValue()))
.map(e -> e.getKey())
.collect(Collectors.toList());
System.out.println(collect);
// Result - BLACK, WHITE in list
}
}
This is more efficient than the first solution, since it reads each entry's value directly instead of doing a second hashMap.get() lookup for every key.
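The filter and map steps can also be written with a plain null check and a method reference; this is only a stylistic variant of the pipeline above and drops straight into the same main method:
List<String> collect = hashMap.entrySet()
        .stream()
        .filter(e -> e.getValue() == null)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());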

How to create a Binary Tree using a String array?

I am given an assignment where I need to do the following:
input your binary tree as an array, using the array representation and node labels A, ..., J, as Strings. Label null stands for a non-existent node, not for a node having a value of null.
Check the validity of your binary tree input: each node, excepting the root, should have a father.
Generate the dynamic memory implementation of the tree, using only the nodes with labels different than null.
So far I have:
public class Project1{
public static void main(String[] args){
String[] input = new String[]{"A","B","C","D","E","F","G","H","I","J"};
}
public class BinaryTree<T> implements java.io.Serializable{
private T data;
private BinaryTree<T> left;
private BinaryTree<T> right;
public BinaryTree(T data){
this.data = data;
left = null;
right = null;
}
public T getData(){
return data;
}
public void attachLeft(BinaryTree<T> tree){
if(tree != null){
left = tree;
}
}
public void attachRight(BinaryTree<T> tree){
if(tree != null){
right = tree;
}
}
public BinaryTree<T> detachLeft(){
BinaryTree<T> t = left;
left = null;
return t;
}
public BinaryTree<T> detachRight(){
BinaryTree<T> t = right;
right = null;
return t;
}
public boolean isEmpty(){
return data == null;
}
public void inOrder(BinaryTree<T> tree){
if (tree != null){
inOrder(tree.left);
System.out.println(tree.getData());
inOrder(tree.right);
}
}
public void preOrder(BinaryTree<T> tree){
if(tree != null){
System.out.println(tree.getData());
preOrder(tree.left);
preOrder(tree.right);
}
}
public void postOrder(BinaryTree<T> tree){
if(tree != null){
postOrder(tree.left);
postOrder(tree.right);
System.out.println(tree.getData());
}
}
}
}
What I don't understand is how to create a BinaryTree from the data in my string array.
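One common approach (shown below only as a sketch, not as a complete assignment solution) is to use the standard array layout, in which the children of the node at index i sit at indices 2*i+1 and 2*i+2, and to build nodes recursively while skipping "null" labels. The ArrayToTree and BinaryTreeNode names are illustrative stand-ins for the BinaryTree<T> class above:
class ArrayToTree {
    // Treat an out-of-range index, a Java null, or the label "null" as "no node here".
    static boolean missing(String[] labels, int i) {
        return i >= labels.length || labels[i] == null || labels[i].equals("null");
    }

    // Validity check from the assignment: every node except the root (index 0)
    // must have a father at index (i - 1) / 2.
    static boolean isValid(String[] labels) {
        for (int i = 1; i < labels.length; i++) {
            if (!missing(labels, i) && missing(labels, (i - 1) / 2)) {
                return false;
            }
        }
        return true;
    }

    // Build the subtree rooted at index i; the children of index i sit at 2*i+1 and 2*i+2.
    static BinaryTreeNode build(String[] labels, int i) {
        if (missing(labels, i)) {
            return null;
        }
        BinaryTreeNode node = new BinaryTreeNode(labels[i]);
        node.left = build(labels, 2 * i + 1);
        node.right = build(labels, 2 * i + 2);
        return node;
    }

    public static void main(String[] args) {
        String[] input = {"A", "B", "C", "D", "E", "F", "G", "H", "I", "J"};
        if (isValid(input)) {
            BinaryTreeNode root = build(input, 0); // the root sits at index 0
            System.out.println(root.left.label);   // B
            System.out.println(root.right.label);  // C
        }
    }
}

class BinaryTreeNode {
    String label;
    BinaryTreeNode left, right;
    BinaryTreeNode(String label) { this.label = label; }
}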

How to print a properly sorted Radix Trie

I was trying to print a properly sorted radix trie, but when I print it with DFS, the shorter keys end up in the wrong place. What am I doing wrong? Is my recursion wrong, or am I missing another condition?
Example output:
0692
072755
0
1008
1076
10
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
public class Trie {
static class TrieNode {
TrieNode[] children = new TrieNode[128];
boolean leaf;
}
public static void insertString(TrieNode rootNode, String s) {
TrieNode root = rootNode;
for (char ch : s.toCharArray()) {
TrieNode next = root.children[ch];
if (next == null)
root.children[ch] = next = new TrieNode();
root = next;
}
root.leaf = true;
}
public static void printSorted(TrieNode node, String s) {
for (char ch = 0; ch < node.children.length; ch++) {
TrieNode child = node.children[ch];
if (child != null)
printSorted(child, s + ch);
}
if (node.leaf) {
System.out.println(s);
}
}
public static void main(String[] args) throws IOException {
TrieNode root = new TrieNode();
BufferedReader br = new BufferedReader(
new InputStreamReader(System.in)
);
String line = null;
while ((line = br.readLine()) != null) {
insertString(root, line);
line = null;
}
printSorted(root, "");
}
}
In your printSorted function, try printing the leaf first: a word that ends at a node always sorts before the longer words in that node's subtree.
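Concretely, that change amounts to moving the leaf check above the loop (a sketch of the adjusted method):
public static void printSorted(TrieNode node, String s) {
    // Print the word ending here before descending, so shorter keys such as
    // "0" or "10" come out before the longer keys that extend them.
    if (node.leaf) {
        System.out.println(s);
    }
    for (char ch = 0; ch < node.children.length; ch++) {
        TrieNode child = node.children[ch];
        if (child != null)
            printSorted(child, s + ch);
    }
}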

Cannot insert multiple elements in PriorityQueue

I'm trying to implement a heap sorting algorithm.
My problem is that when I try to insert Elements into my PriorityQueue, it only works for one element. When I add multiple elements, I get these errors:
Exception in thread "main" java.lang.ClassCastException: Element cannot be cast to java.lang.Comparable
at java.util.PriorityQueue.siftUpComparable(PriorityQueue.java:652)
at java.util.PriorityQueue.siftUp(PriorityQueue.java:647)
at java.util.PriorityQueue.offer(PriorityQueue.java:344)
at java.util.PriorityQueue.add(PriorityQueue.java:321)
at PQHeap.insert(PQHeap.java:47)
at PQHeap.main(PQHeap.java:17)
This is my Element class
public class Element {
public int key;
public Object data;
public Element(int i, Object o) {
this.key = i;
this.data = o;
}}
The interface:
public interface PQ {
public Element extractMin();
public void insert(Element e);
}
And this is the class which generates the heap. Note that the main method is located here just for debugging. When I only insert Element e, it works, but when I insert f as well, it gives me the errors above.
import java.util.*;
public class PQHeap implements PQ{
public static void main(String[] args) {
PQHeap hq = new PQHeap(5);
Element e = new Element(5, null);
hq.insert(e);
Element f = new Element(3, null); // declaration of f assumed here; any second element reproduces the error
hq.insert(f);
for(int in = 0; in<hq.pq.size();in++){
System.out.println(hq.pq.remove());
}
}// end of main method
public PriorityQueue<Element> pq;
public PQHeap(int maxElms) {
this.pq = new PriorityQueue<Element>(maxElms);
}
@Override
public Element extractMin() {
Element min = pq.remove();
System.out.println(min.key);
return min;
}
@Override
public void insert(Element e) {
this.pq.add(e);
}
}
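For reference, the direct cause of the exception is that PriorityQueue needs its elements either to implement Comparable or to be given a Comparator, and Element does neither; with a single element no comparison ever happens, which is why one insert works. A minimal sketch of one possible fix, assuming the heap should order Elements by their int key (the class name PQSketch is illustrative):
import java.util.Comparator;
import java.util.PriorityQueue;

class PQSketch {
    public static void main(String[] args) {
        // Supply a Comparator so the queue knows how to order Elements by key;
        // alternatively, Element could implement Comparable<Element>.
        PriorityQueue<Element> pq =
                new PriorityQueue<Element>(5, Comparator.comparingInt((Element el) -> el.key));
        pq.add(new Element(5, null));
        pq.add(new Element(2, null));
        System.out.println(pq.remove().key); // prints 2, the smallest key
    }
}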

Filter index hits by node ids in Neo4j

I have a set of node IDs (Set<Long>) and want to restrict or filter the results of a query to only the nodes in this set. Is there a performant way to do this?
Set<Node> query(final GraphDatabaseService graphDb, final Set<Long> nodeSet) {
final Index<Node> searchIndex = graphDb.index().forNodes("search");
final IndexHits<Node> hits = searchIndex.query(new QueryContext("value*"));
// what now to return only index hits that are in the given Set of Node's?
}
Wouldn't it be faster the other way round, i.e. if you get the nodes from your set and compare the property to the value you are looking for?
for (Iterator<Long> it = nodeSet.iterator(); it.hasNext(); ) {
Node n = graphDb.getNodeById(it.next());
if (!n.getProperty("value", "").equals("foo")) it.remove();
}
Or, for your suggestion:
Set<Node> query(final GraphDatabaseService graphDb, final Set<Long> nodeSet) {
final Index<Node> searchIndex = graphDb.index().forNodes("search");
final IndexHits<Node> hits = searchIndex.query(new QueryContext("value*"));
Set<Node> result=new HashSet<>();
for (Node n : hits) {
if (nodeSet.contains(n.getId())) result.add(n);
}
return result;
}
The fastest solution I found was to use Lucene's IndexSearcher directly on the index created by Neo4j, together with a custom Filter that restricts the search to specific nodes.
Just open the Neo4j index folder "{neo4j-database-folder}/index/lucene/node/{index-name}" with the Lucene IndexReader. Make sure not to add a Lucene dependency to your project in a version other than the one Neo4j uses, which currently is Lucene 3.6.2!
Here's my Lucene Filter implementation that filters all query results by the given Set of document IDs. (Lucene document IDs (Integer) are NOT Neo4j node IDs (Long)!)
import java.io.IOException;
import java.util.PriorityQueue;
import java.util.Set;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
public class DocIdFilter extends Filter {
public class FilteredDocIdSetIterator extends DocIdSetIterator {
private final PriorityQueue<Integer> filterQueue;
private int docId;
public FilteredDocIdSetIterator(final Set<Integer> filterSet) {
this(new PriorityQueue<Integer>(filterSet));
}
public FilteredDocIdSetIterator(final PriorityQueue<Integer> filterQueue) {
this.filterQueue = filterQueue;
}
@Override
public int docID() {
return this.docId;
}
@Override
public int nextDoc() throws IOException {
if (this.filterQueue.isEmpty()) {
this.docId = NO_MORE_DOCS;
} else {
this.docId = this.filterQueue.poll();
}
return this.docId;
}
@Override
public int advance(final int target) throws IOException {
while ((this.docId = this.nextDoc()) < target)
;
return this.docId;
}
}
private final PriorityQueue<Integer> filterQueue;
public DocIdFilter(final Set<Integer> filterSet) {
super();
this.filterQueue = new PriorityQueue<Integer>(filterSet);
}
private static final long serialVersionUID = -865683019349988312L;
@Override
public DocIdSet getDocIdSet(final IndexReader reader) throws IOException {
return new DocIdSet() {
@Override
public DocIdSetIterator iterator() throws IOException {
return new FilteredDocIdSetIterator(DocIdFilter.this.filterQueue);
}
};
}
}
To map the set of Neo4j node IDs (which the query result should be filtered with) to the correct Lucene document IDs, I created an in-memory bidirectional map:
public static HashBiMap<Integer, Long> generateDocIdToNodeIdMap(final IndexReader indexReader)
throws LuceneIndexException {
final HashBiMap<Integer, Long> result = HashBiMap.create(indexReader.numDocs());
for (int i = 0; i < indexReader.maxDoc(); i++) {
if (indexReader.isDeleted(i)) {
continue;
}
final Document doc;
try {
doc = indexReader.document(i, new FieldSelector() {
private static final long serialVersionUID = 5853247619312916012L;
@Override
public FieldSelectorResult accept(final String fieldName) {
if ("_id_".equals(fieldName)) {
return FieldSelectorResult.LOAD_AND_BREAK;
} else {
return FieldSelectorResult.NO_LOAD;
}
}
});
} catch (final IOException e) {
throw new LuceneIndexException(indexReader.directory(), "could not read document with ID: '" + i
+ "' from index.", e);
}
final Long nodeId;
try {
nodeId = Long.valueOf(doc.get("_id_"));
} catch (final NumberFormatException e) {
throw new LuceneIndexException(indexReader.directory(),
"could not parse node ID value from document ID: '" + i + "'", e);
}
result.put(i, nodeId);
}
return result;
}
I'm using the Google Guava library, which provides a bidirectional map and allows initializing collections with a specific size.
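For completeness, here is a rough sketch (in the same fragment style as the method above, with imports omitted) of how the filter and the map might be wired together with a plain Lucene 3.6 IndexSearcher. The index folder, the queried field name "value", and the wildcard query are illustrative assumptions, not taken from the original code:
public static Set<Long> filteredQuery(final File indexFolder, final Set<Long> nodeSet)
        throws IOException, LuceneIndexException {
    // Open the Neo4j-managed Lucene index and search it directly (Lucene 3.6 API).
    final IndexReader reader = IndexReader.open(FSDirectory.open(indexFolder));
    final IndexSearcher searcher = new IndexSearcher(reader);

    // Translate the Neo4j node IDs into Lucene document IDs via the bidirectional map.
    final HashBiMap<Integer, Long> docIdToNodeId = generateDocIdToNodeIdMap(reader);
    final Set<Integer> allowedDocIds = new HashSet<Integer>();
    for (final Long nodeId : nodeSet) {
        final Integer docId = docIdToNodeId.inverse().get(nodeId);
        if (docId != null) {
            allowedDocIds.add(docId);
        }
    }

    // Run the query restricted by the DocIdFilter and map the hits back to node IDs.
    final TopDocs top = searcher.search(new WildcardQuery(new Term("value", "value*")),
            new DocIdFilter(allowedDocIds), Math.max(1, allowedDocIds.size()));
    final Set<Long> result = new HashSet<Long>();
    for (final ScoreDoc hit : top.scoreDocs) {
        result.add(docIdToNodeId.get(hit.doc));
    }
    return result;
}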
