I want to create a tree (using a Node or ADT) in which every node has an annotation pointing back to its parent. Below is an example with a simple linked list data structure:
import util::Math;
import IO;
import Node;
anno LinkedList LinkedList@parent;
anno int LinkedList@something;
data LinkedList = item(int val, LinkedList next)
                | last(int val)
                ;
public LinkedList linkedList = item(5,
    item(4,
        item(3,
            item(2,
                last(1)[@something=99]
            )[@something=99]
        )[@something=99]
    )[@something=99]
)[@something=99];
public LinkedList addParentAnnotations(LinkedList n) {
    return top-down visit (n) {
        case LinkedList x: {
            /* go through all children whose type is LinkedList */
            for (LinkedList cx <- getChildren(x), LinkedList ll := cx) {
                /* setting the annotation like this doesn't seem to work */
                cx@parent = x;
                // DOESN'T WORK EITHER: setAnnotation(cx, getAnnotations(cx) + ("parent": x));
            }
        }
    }
}
Executing addParentAnnotations(linkedList) yields the following result:
rascal>addParentAnnotations(linkedList);
LinkedList: item(
5,
item(
4,
item(
3,
item(
2,
last(1)[
@something=99
])[
@something=99
])[
@something=99
])[
@something=99
])[
@something=99
]
The thing is that Rascal data is immutable, so you cannot update anything using an assignment. The assignment simply gives you a new binding for cx with the annotation set, but it does not change the original tree.
To change the original tree, you can either use the => operator for case statements, or the insert statement as follows:
case LinkedList x => x[@parent=...] // replace x by a new x that is annotated
or:
case LinkedList x : {
    ...
    x@parent = ...;
    insert x; // replace original x by new x in tree
}
One other tip: in the Traversal.rsc library you can find a function called getTraversalContext(), which, when called from the body of a case, produces a list of the parents of the currently visited node:
import Traversal;
visit (...) {
    case somePattern: {
        parents = getTraversalContext();
        parent = parents[1]; // parents[0] is the matched node itself, parents[1] its parent
    }
}
I am working on a compiler, and one current aspect is how to wait for interpolated variable names to be resolved. So I am wondering how to take a nested interpolated-variable string and build a simple data model/schema for unwrapping the evaluated string, so to speak. Let me demonstrate.
Say we have a string like this:
foo{a{x}-{y}}-{baz{one}-{two}}-foo{c}
That has 1, 2, and 3 levels of nested interpolations in it. So essentially it should resolve something like this:
wait for x, y, one, two, and c to resolve.
when both x and y resolve, then resolve a{x}-{y} immediately.
when both one and two resolve, resolve baz{one}-{two}.
when a{x}-{y}, baz{one}-{two}, and c all resolve, then finally resolve the whole expression.
My understanding of the logic flow for handling something like this is shaky, so I am wondering if you could help solidify/clarify the general algorithm (high-level pseudocode or something like that). Mainly I am looking for how to structure the data model and algorithm so as to progressively evaluate the pieces as they become ready.
I'm starting out with the attempt below, but it's not clear what to do next:
{
  dependencies: [
    {
      path: [x]
    },
    {
      path: [y]
    }
  ],
  parent: {
    dependency: a{x}-{y} // interpolated term
    parent: {
      dependencies: [
        {
        }
      ]
    }
  }
}
Some sort of tree is probably necessary, but I am having trouble figuring out what it might look like; I'm wondering if you could shed some light on that with some pseudocode (or even JavaScript). My rough idea so far:
Watch the leaf nodes at first.
Then, when the children of a node are completed, propagate upward to resolve the next parent node. This would mean that once x and y are done, it could resolve a{x}-{y}, but it would then wait until the other nodes are ready before doing the final top-level evaluation.
Theoretically, you could just simulate it by sending "events" to the system, like:
ready('y')
ready('c')
ready('x')
ready('a{x}-{y}')
function ready(variable) {
if ()
}
...actually, that may not work; I'm not sure how to handle the interpolated nodes in a hacky way like that. But even a high-level description of how to solve this would be helpful.
export type SiteDependencyObserverParentType = {
  observer: SiteDependencyObserverType
  remaining: number
}

export type SiteDependencyObserverType = {
  children: Array<SiteDependencyObserverType>
  node: LinkNodeType
  parent?: SiteDependencyObserverParentType
  path: Array<string>
}
(What I'm currently thinking, some TypeScript)
Here is an approach in JavaScript:
Parse the input string to create a Node instance for each {} term, and create parent-child dependencies between the nodes.
Collect the leaf Nodes of this tree as the tree is being constructed: group these leaf nodes by their identifier. Note that the same identifier could occur multiple times in the input string, leading to multiple Nodes. If a variable x is resolved, then all Nodes with that name (the group) will be resolved.
Each node has a resolve method to set its final value.
Each node has a notify method that any of its child nodes can call in order to notify it that the child has been resolved with a value. This may (or may not yet) lead to a cascading call of resolve.
In a demo, a timer is set up that at every tick resolves a randomly picked variable to some number.
I think that in your example, foo and a might be functions that need to be called, but I didn't elaborate on that and just considered them literal text that needs no further treatment. It should not be difficult to extend the algorithm with such function-calling features.
class Node {
  constructor(parent) {
    this.source = ""; // The slice of the input string that maps to this node
    this.texts = []; // Literal text that's not part of interpolation
    this.children = []; // Node instances corresponding to interpolation
    this.parent = parent; // Link to parent that should get notified when this node resolves
    this.value = undefined; // Not yet resolved
  }
  isResolved() {
    return this.value !== undefined;
  }
  resolve(value) {
    if (this.isResolved()) return; // A node is not allowed to resolve twice: ignore
    console.log(`Resolving "${this.source}" to "${value}"`);
    this.value = value;
    if (this.parent) this.parent.notify();
  }
  notify() {
    // Check if all dependencies have been resolved
    let value = "";
    for (let i = 0; i < this.children.length; i++) {
      const child = this.children[i];
      if (!child.isResolved()) { // Not ready yet
        console.log(`"${this.source}" is getting notified, but not all dependencies are ready yet`);
        return;
      }
      value += this.texts[i] + child.value;
    }
    console.log(`"${this.source}" is getting notified, and all dependencies are ready:`);
    this.resolve(value + this.texts.at(-1));
  }
}
function makeTree(s) {
  const leaves = {}; // nodes keyed by atomic names (like "x" "y" in the example)
  const tokens = s.split(/([{}])/);
  let i = 0; // Index in s
  function dfs(parent=null) {
    const node = new Node(parent);
    const start = i;
    while (tokens.length) {
      const token = tokens.shift();
      i += token.length;
      if (token == "}") break;
      if (token == "{") {
        node.children.push(dfs(node));
      } else {
        node.texts.push(token);
      }
    }
    node.source = s.slice(start, i - (tokens.length ? 1 : 0));
    if (node.children.length == 0) { // It's a leaf
      const label = node.texts[0];
      leaves[label] ??= []; // Define as empty array if not yet defined
      leaves[label].push(node);
    }
    return node;
  }
  dfs();
  return leaves;
}
// ------------------- DEMO --------------------
let s = "foo{a{x}-{y}}-{baz{one}-{two}}-foo{c}";
const leaves = makeTree(s);
// Create a random order in which to resolve the atomic variables:
function shuffle(array) {
  for (var i = array.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    [array[j], array[i]] = [array[i], array[j]];
  }
  return array;
}
const names = shuffle(Object.keys(leaves));
// Use a timer to resolve the variables one by one in the given random order
let index = 0;
function resolveRandomVariable() {
  if (index >= names.length) return; // all done
  console.log("\n---------------- timer tick --------------");
  const name = names[index++];
  console.log(`Variable ${name} gets a value: "${index}". Calling resolve() on the connected node instance(s):`);
  for (const node of leaves[name]) node.resolve(index);
  setTimeout(resolveRandomVariable, 1000);
}
setTimeout(resolveRandomVariable, 1000);
Your idea of building a dependency tree is really likeable.
Anyway, I tried to find the simplest possible solution.
Even though it already works, there are many possible optimizations; take this just as a proof of concept.
The background idea is to produce a list of strings that you can read in order, where each element is what you need to resolve next. Each element may be required to resolve something that comes later in the list, and hence the overall expression. Once you have resolved all the chunks, you have all the pieces needed to resolve your original expression.
It's written in Java, I hope it's understandable.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Objects;

public class StackOverflow {

    public static void main(String[] args) {
        String exp = "foo{a{x}-{y}}-{baz{one}-{two}}-foo{c}";
        List<String> chunks = expToChunks(exp);
        // just reverses the order of the list
        Collections.reverse(chunks);
        System.out.println(chunks);
        // output -> [c, two, one, baz{one}-{two}, y, x, a{x}-{y}]
    }

    public static List<String> expToChunks(String exp) {
        List<String> chunks = new ArrayList<>();

        // this first piece just finds the first open brace and its matching close brace
        int begin = exp.indexOf("{") + 1;
        int numberOfParenthesys = 1;
        int end = -1;
        for (int i = begin; i < exp.length(); i++) {
            char c = exp.charAt(i);
            if (c == '{') numberOfParenthesys++;
            if (c == '}') numberOfParenthesys--;
            if (numberOfParenthesys == 0) {
                end = i;
                break;
            }
        }

        // this if puts an end to the recursive calls
        if (begin > 0 && begin < exp.length() && end > 0) {
            // add the chunk to the final list
            String substring = exp.substring(begin, end);
            chunks.add(substring);
            // remove the already considered chunk from the starting expression
            String newExp = exp.replace("{" + substring + "}", "");
            // recursive call for inner elements of the chunk found
            chunks.addAll(Objects.requireNonNull(expToChunks(substring)));
            // calculate other chunks on the remaining expression
            chunks.addAll(Objects.requireNonNull(expToChunks(newExp)));
        }
        return chunks;
    }
}
Some details on the code:
The following piece finds the begin and end indexes of the first outer chunk of the expression. The background idea is that in a valid expression the number of opening braces must equal the number of closing braces, and the running count of opening (+1) and closing (-1) braces can never go negative.
So, using that simple loop, as soon as the brace count reaches 0 I have also found the first chunk of the expression.
int begin = exp.indexOf("{") + 1;
int numberOfParenthesys = 1;
int end = -1;
for (int i = begin; i < exp.length(); i++) {
    char c = exp.charAt(i);
    if (c == '{') numberOfParenthesys++;
    if (c == '}') numberOfParenthesys--;
    if (numberOfParenthesys == 0) {
        end = i;
        break;
    }
}
The if condition validates the begin and end indexes and stops the recursion when no more chunks can be found in the remaining expression.
if(begin > 0 && begin < exp.length() && end > 0) {
...
}
2->7->8->11
|
13->16->17->21
|
22->23->27->29
|
30->32
A sorted linked list is given as above, where each node has two pointers, next and down. Each row's starting node's down pointer points to the start of the next row. Each row has 4 elements, except the last one, which can have <= 4 elements. The first element of the next row is greater than the last element of the previous row. We need to design and code the insertion of a new value at the correct place, as well as a delete operation. I could not solve this problem.
The structure representation and pseudocode for the add operation are as follows; delete can be implemented recursively along the same lines as add.
typedef struct sibling {
    int data;
    struct sibling *nxt;
} t_sibling;

typedef struct children {
    struct sibling *sibling;
    struct children *nxt;
} t_children;
add_element(t_children **head, int newdata)
{
    t_children *walk_down = *head;
    t_children *parent = NULL;

    while (walk_down != NULL) {
        if (parent == NULL && Compare newdata < head of current walk_down->sibling) {
            // Code comes here when we add 1 to the above mentioned list example
            newdata is added at the beginning of the head of walk_down->sibling
            sibling_list_count++;
            if (sibling_list_count > 4) {
                taildata = delete_end from tail of walk_down->sibling
                add_element(&walk_down, taildata)
            }
            break;
        }
        else if (newdata < head of current walk_down->sibling) {
            if (Compare newdata > tail of parent sibling) {
                // Code comes here when we add 12 to the above mentioned list
                newdata is added at the beginning of the head of walk_down->sibling
                if (sibling_list_count > 4) {
                    taildata = delete_end from tail of walk_down->sibling
                    add_element(&walk_down, taildata)
                }
            }
            else {
                // Code comes here when we add 6 to the above mentioned list
                newdata is added at the appropriate location of parent->sibling
                // Since the above step disturbs the <= 4 property:
                taildata = delete_end from tail of parent->sibling
                add_element(&walk_down, taildata)
            }
            break;
        }
        parent = walk_down;
        walk_down = walk_down->nxt;
    }
}
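For comparison, here is a compact C++ sketch of the same overflow-cascading idea, meant as a proof of concept rather than a drop-in implementation of the pseudocode above. The names Elem, Row, insertValue and MAX_PER_ROW are mine, and it assumes the caller has already allocated the first (possibly empty) row; a delete operation can be written symmetrically by borrowing the head of the next row whenever a row underflows.

// Hypothetical node types, playing the roles of t_sibling / t_children above:
// each Row holds a sorted singly linked list of at most MAX_PER_ROW values,
// and rows are chained through their down pointers.
struct Elem { int data; Elem *next; };
struct Row  { Elem *head; Row *down; };

const int MAX_PER_ROW = 4;

static int rowLength(const Row *r) {
    int n = 0;
    for (const Elem *e = r->head; e; e = e->next) ++n;
    return n;
}

// Insert value into row r or a later row, keeping every row sorted and at
// most MAX_PER_ROW long; an overflowing tail is pushed into the next row.
void insertValue(Row *r, int value) {
    // Skip ahead while the value belongs in a later row.
    while (r->down && r->down->head && value >= r->down->head->data)
        r = r->down;

    // Insert into this row's sorted sibling list.
    Elem **p = &r->head;
    while (*p && (*p)->data < value) p = &(*p)->next;
    *p = new Elem{value, *p};

    // On overflow, detach the tail element and cascade it into the next row.
    if (rowLength(r) > MAX_PER_ROW) {
        Elem **q = &r->head;
        while ((*q)->next) q = &(*q)->next; // find the tail
        int taildata = (*q)->data;
        delete *q;
        *q = nullptr;
        if (!r->down) r->down = new Row{nullptr, nullptr};
        insertValue(r->down, taildata);
    }
}

With the example rows above, insertValue(firstRow, 12) leaves the first row unchanged and cascades the overflow downward, ending with rows 2->7->8->11, 12->13->16->17, 21->22->23->27 and 29->30->32, which matches the behaviour described in the pseudocode.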
I'm doing an exercise on LeetCode using Scala. The problem I'm working on is "Maximum Depth of Binary Tree", which asks for the maximum depth of a binary tree.
My code passes in IntelliJ, but I keep getting a compile error (type mismatch) when submitting my solution on LeetCode. Here is my code; is there any problem with it, or is there another solution?
object Solution {
  abstract class BinTree
  case object EmptyTree extends BinTree
  case class TreeNode(mid: Int, left: BinTree, right: BinTree) extends BinTree

  def maxDepth(root: BinTree): Int = {
    root match {
      case EmptyTree => 0
      case TreeNode(_, l, r) => Math.max(maxDepth(l), maxDepth(r)) + 1
    }
  }
}
The error is: Line 17: error: type mismatch; Line 24: error: type mismatch. I know it is quite strange because I have just 13 lines of code, but I didn't make any mistakes, trust me ;)
This looks like an error specific to the LeetCode problem.
I assume you're referring to https://leetcode.com/problems/maximum-depth-of-binary-tree/description/
Perhaps you're not supposed to re-implement the data structure but just to provide the implementation for maxDepth, i.e. TreeNode is already given. Try this:
object Solution {
  def maxDepth(root: TreeNode): Int = {
    if (root == null) {
      0
    } else {
      Math.max(maxDepth(root.left), maxDepth(root.right)) + 1
    }
  }
}
This assumes that the TreeNode data structure is the one given in the comment:
/**
* Definition for a binary tree node.
* class TreeNode(var _value: Int) {
* var value: Int = _value
* var left: TreeNode = null
* var right: TreeNode = null
* }
*/
Problem
The input data has 2 types of records; let's call them R and W.
I need to traverse this data in sequence from top to bottom, in such a way that if the current record is of type W, it is merged into a map (let's call it workMap): if the key of that W-type record is already present in the map, the value of this record is added to it; otherwise a new entry is made in workMap.
If the current record is of type R, the workMap calculated up to this record is attached to the current record.
For example, if this is the order of records -
W1- a -> 2
W2- b -> 3
W3- a -> 4
R1
W4- c -> 1
R2
W5- c -> 4
where W1, W2, W3, W4 and W5 are of type W, and R1 and R2 are of type R.
At the end of this function, I should have the following -
R1 - { a -> 6,
       b -> 3 }      // merged(W1, W2, W3)
R2 - { a -> 6,
       b -> 3,
       c -> 1 }      // merged(W1, W2, W3, W4)
     { a -> 6,
       b -> 3,
       c -> 5 }      // merged(W1, W2, W3, W4, W5)
I want all the R-type records attached to the intermediate workMaps calculated up to that point, and the final workMap after the last record is processed.
Here is the code that I have written -
def calcPerPartition(itr: Iterator[(InputKey, InputVal)]):
    Iterator[(ReportKey, ReportVal)] = {

  val workMap = mutable.HashMap.empty[WorkKey, WorkVal]
  val reportList = mutable.ArrayBuffer.empty[(ReportKey, ReportVal)]

  while (itr.hasNext) {
    val temp = itr.next()
    val (iKey, iVal) = (temp._1, temp._2)

    if (iKey.recordType == reportType) {
      // creates a new (ReportKey, ReportVal)
      reportList += getNewReportRecord(workMap, iKey, iVal)
    }
    else {
      // if iKey is already present, merge the values,
      // otherwise add a new entry
      updateWorkMap(workMap, iKey, iVal)
    }
  }

  val workList: Seq[(ReportKey, ReportVal)] = workMap.toList.map(convertToReport)

  reportList.iterator ++ workList.iterator
}
The ReportKey class looks like this -
case class ReportKey (
  // the type of record - report or work
  rType: Int,
  date: String,
  .....
)
There are two problems with this approach that I am asking for help with -
1. I have to keep track of a reportList - a list of R-type records with intermediate workMaps attached. As the data grows, the reportList grows too, and I am running into OutOfMemoryExceptions.
2. I have to combine the reportList and workMap records into the same data structure and then return them. If there is any other elegant way, I would definitely consider changing this design.
For the sake of completeness: I am using Spark. The function calcPerPartition is passed as an argument to mapPartitions on an RDD. I need the workMaps from each partition to do some additional calculations later.
I know that if I didn't have to return the workMaps from each partition, the problem would become much easier, like this -
...
val workMap = mutable.HashMap.empty[WorkKey, WorkVal]

val scan = itr.scanLeft[Option[(ReportKey, ReportVal)]](None)(
  (acc: Option[(ReportKey, ReportVal)],
   curr: (InputKey, InputVal)) => {
    if (curr._1.recordType == reportType) {
      val rec = getNewReportRecord(workMap, curr._1, curr._2)
      Some(rec)
    }
    else {
      updateWorkMap(workMap, curr._1, curr._2)
      None
    }
  })

val reportList = scan.filter(_.isDefined).map(_.get)
// workMap is still empty after the scanLeft.
...
Sure, I can do a reduce operation on the input data to derive the final workMap but I would need to look at the data twice. Considering that the input data set is huge, I want to avoid that too.
But unfortunately I need the workMaps at a later step.
So, is there a better way to solve the above problem? If problem 2 can't be solved at all (according to this), is there any other way I can avoid storing R records (reportList) in a list or scanning the data more than once?
I don't yet have a better design for the second problem - whether you can avoid combining reportList and workMap into a single data structure - but we can certainly avoid storing R-type records in a list.
Here is how we can re-write the calcPerPartition from the above question -
def calcPerPartition(itr: Iterator[(InputKey, InputVal)]):
    Iterator[Option[(ReportKey, ReportVal)]] = {

  val workMap = mutable.HashMap.empty[WorkKey, WorkVal]
  var finalWorkMap = true

  new Iterator[Option[(ReportKey, ReportVal)]]() {
    override def hasNext: Boolean = itr.hasNext

    override def next(): Option[(ReportKey, ReportVal)] = {
      val curr = itr.next()
      val iKey = curr._1
      val iVal = curr._2

      if (iKey.recordType == reportType) {
        Some(getNewReportRecord(workMap, iKey, iVal))
      }
      else {
        // otherwise update the workMap but don't accumulate anything
        updateWorkMap(workMap, iKey, iVal)
        if (itr.hasNext) {
          next()
        }
        else {
          if (finalWorkMap) {
            finalWorkMap = false // because we want the final workMap only once
            Some(workMap.map(convertToReport))
          }
          else {
            None
          }
        }
      }
    }
  }
}
Instead of storing results in a list, we defined an iterator. That solved most of the memory issues we had around this.
I am working on a binary search tree, and I have been given an insertNode function that looks like this:
void insertNode(Node **t, Node *n)
{
    if (!(*t))
        *t = n;
    else if ((*t)->key < n->key) insertNode(&(*t)->right, n);
    else if ((*t)->key > n->key) insertNode(&(*t)->left, n);
}
I am trying to write a function that removes nodes recursively. So far I have come up with:
void remove(int huntKey, Node **t)
{
    bool keyFound = false;
    if (!(*t))
        cout << "There are no nodes" << endl;
    while (keyFound == false)
    {
        if ((*t)->key == huntKey)
        {
            keyFound = true;
            (*t)->key = 0;
        }
        else if ((*t)->key < huntKey) remove(huntKey, &(*t)->right);
        else if ((*t)->key > huntKey) remove(huntKey, &(*t)->left);
    }
}
Both of these functions are called from a switch in my main function, which looks like this:
int main()
{
    int key = 0, countCatch = 0; char q;
    Node *t, *n;
    t = 0;
    while ((q = menu()) != 0)
    {
        switch (q)
        {
        case '?': menu(); break;
        case 'i': inOrderPrint(t); break;
        case 'a': preOrderPrint(t); break;
        case 'b': postOrderPrint(t); break;
        case 'c': { cout << "enter key: "; cin >> key;
                    n = createNode(key); insertNode(&t, n); break; }
        case 'r': { cout << "enter the key you want removed: ";
                    cin >> key;
                    remove(key, &t);
                    break; }
        case 'n': { countCatch = countNodes(t); cout << countCatch << "\n"; }; break;
        }
    }
    return 0;
}
My remove function is not working properly.... Any advice would help.
When you remove the node, you are only setting its key to '0', not actually removing it.
Example: 4 has child 2, which has children 1 and 3.
In your code, "removing" 2 gives you this tree: 4 has child 0, which still has children 1 and 3.
To remove an internal node (a node with children), you must fix up both its parent pointer and its children: for a node with one child, set the parent's child pointer to the removed node's only child; for a node with two children, the usual trick is to copy the in-order successor's key into the node and then remove the successor instead. Check this article for more:
http://en.wikipedia.org/wiki/Binary_tree#Deletion
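To make that concrete, here is a minimal recursive sketch in the same Node** style as the insertNode above. It assumes Node has an int key plus left and right child pointers and that nodes were allocated with new; the name removeNode and the helper logic are mine, not part of the original assignment:

void removeNode(int huntKey, Node **t)
{
    if (!(*t)) return;                         // key not found: nothing to do
    if (huntKey < (*t)->key)      removeNode(huntKey, &(*t)->left);
    else if (huntKey > (*t)->key) removeNode(huntKey, &(*t)->right);
    else {                                     // *t is the node to remove
        Node *doomed = *t;
        if (!doomed->left)       *t = doomed->right; // zero or one child:
        else if (!doomed->right) *t = doomed->left;  // splice the node out
        else {
            // Two children: copy the smallest key of the right subtree into
            // this node, then remove that successor node from the right subtree.
            Node *succ = doomed->right;
            while (succ->left) succ = succ->left;
            doomed->key = succ->key;
            removeNode(succ->key, &doomed->right);
            return;                            // keep the node; only its key changed
        }
        delete doomed;
    }
}

Because the function receives a Node**, the parent's child pointer (or the root pointer t in main) is re-linked in place, which is exactly the step missing from the key-zeroing version.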
Look at this code as well; note that it is not recursive, though:
http://code.google.com/p/cstl/source/browse/src/c_rb.c