How to find internal nodes in a binary tree? - data-structures

I already have a method to find external (leaf) nodes, but I have no idea how to count internal nodes, so could someone please help?

You could convert the following pseudocode to any language you prefer.
function count_internal_nodes(curr):
    if curr == null: return 0
    else if curr is leaf: return 0
    else: return 1 + count_internal_nodes(curr.left)
                   + count_internal_nodes(curr.right)
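For example, a direct Python translation might look like this. It is a minimal sketch that assumes a simple Node class with left and right fields, which is not part of the original question:

class Node:
    def __init__(self, left=None, right=None):
        self.left = left
        self.right = right

def count_internal_nodes(curr):
    # An empty subtree or a leaf contributes nothing.
    if curr is None or (curr.left is None and curr.right is None):
        return 0
    # Otherwise count this node plus the internal nodes of both subtrees.
    return (1 + count_internal_nodes(curr.left)
              + count_internal_nodes(curr.right))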

You can try this algorithm. Note that the checks must be independent ifs rather than an else-if chain, otherwise only one branch ever runs. As written, it counts a node as interior when it has at least one child and a parent, so the root is excluded:
getInteriorNodes(self):
    count = 0
    hasLeft, hasRight = self.left != null, self.right != null
    if (hasLeft)
        count += self.left.getInteriorNodes()
    if (hasRight)
        count += self.right.getInteriorNodes()
    if ((hasLeft || hasRight) && self.parent)
        count += 1
    return count

Related

QuickSort - Median Three

I am working on the QuickSort - Median Three algorithm.
I have no problem with picking the first or last element as the pivot, but when it comes to the median-of-three, I am slightly confused. I hope someone could help me with this.
I would appreciate it if someone could provide some pseudocode.
My understanding is to get the middle index with (start + end) / 2, then swap the middle pivot value with the first value; after that is done, it should work like the normal quicksort (partitioning and sorting).
Somehow, I couldn't get it to work. Please help!
#Array Swap function
def swap(A,i,k):
    temp=A[i]
    A[i]=A[k]
    A[k]=temp

# Get Middle pivot function
def middle(lista):
    if len(lista) % 2 == 0:
        result = len(lista) // 2 - 1
    else:
        result = len(lista) // 2
    return result

def median(lista):
    if len(lista) % 2 == 0:
        return sorted(lista)[len(lista) // 2 - 1]
    else:
        return sorted(lista)[len(lista) // 2]

# Create partition function
def partition(A,start,end):
    m = middle(A[start:end+1])
    medianThree = [ A[start], A[m], A[end] ]
    if A[start] == median(medianThree):
        pivot_pos = start
    elif A[m] == median(medianThree):
        tempList = A[start:end+1]
        pivot_pos = middle(A[start:end+1])
        swap(A,start,pivot_pos+start)
    elif A[end] == median(medianThree):
        pivot_pos = end
    #pivot = A[pivot_pos]
    pivot = pivot_pos
    # swap(A,start,end)  # This line of code is to switch the first and last element pivot
    swap(A,pivot,end)
    p = A[pivot]
    i = pivot + 1
    for j in range(pivot+1,end+1):
        if A[j] < p:
            swap(A,i,j)
            i += 1
    swap(A,start,i-1)
    return i-1

count = 0

#Quick sort algorithm
def quickSort(A,start,end):
    global tot_comparisons
    if start < end:
        # This to create the partition based on the pivot
        pivot_pos = partition(A,start,end)
        tot_comparisons += len(A[start:pivot_pos-1]) + len(A[pivot_pos+1:end])
        # This to sort the left partition
        quickSort(A,start,pivot_pos-1)
        # This to sort the right partition
        quickSort(A,pivot_pos+1,end)
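For comparison, here is a minimal, self-contained sketch of the median-of-three idea described above: pick the median of the first, middle, and last elements, move it next to the end, and then partition as usual. It is only an illustration of the technique, not a fix of the code above:

def median_of_three_partition(A, start, end):
    mid = (start + end) // 2
    # Order A[start], A[mid], A[end] so that the median ends up at A[mid].
    if A[mid] < A[start]:
        A[start], A[mid] = A[mid], A[start]
    if A[end] < A[start]:
        A[start], A[end] = A[end], A[start]
    if A[end] < A[mid]:
        A[mid], A[end] = A[end], A[mid]
    # Use the median as the pivot by moving it to the end.
    A[mid], A[end] = A[end], A[mid]
    pivot = A[end]
    i = start
    for j in range(start, end):
        if A[j] < pivot:
            A[i], A[j] = A[j], A[i]
            i += 1
    A[i], A[end] = A[end], A[i]
    return i

def quicksort_m3(A, start, end):
    if start < end:
        p = median_of_three_partition(A, start, end)
        quicksort_m3(A, start, p - 1)
        quicksort_m3(A, p + 1, end)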

Algorithms, DFS

I've written a program to find the shortest path in an N*N grid recursively.
def dfs(x,y,Map,p):
    N = len(Map)
    p += [[x,y]]
    if Map[x][y] == 'E':
        return p
    for i in [[x-1,y],[x+1,y],[x,y-1],[x,y+1]]:
        if N > i[0] >= 0 and N > i[1] >= 0:
            if (Map[i[0]][i[1]] == 'P' or Map[i[0]][i[1]] == 'E') and i not in p:
                dfs(i[0], i[1], Map, p)
    return []
When Map[x][y] == 'E', the recursion doesn't stop and return p; it keeps going until the end. How do I correct it so that it returns the path p?
By the looks of it, the code is prone to looping indefinitely, because it moves in all four directions from a given node without an explicit check of whether that node has already been visited.
To solve it simply, add another N*N array of Boolean values answering the question: has this cell been visited? Then update the code to something along these lines:
def dfs(x, y, Map, visited, p):
    visited[x][y] = True
    N = len(Map)
    (...)
    if ((Map[i[0]][i[1]] == 'P' or Map[i[0]][i[1]] == 'E')
            and i not in p
            and not visited[i[0]][i[1]]):
        dfs(i[0], i[1], Map, visited, p)
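Putting it together, a self-contained sketch could look like the following. As in the question, 'P' marks passable cells and 'E' the exit; the key extra step is returning the recursive result so the path actually propagates back to the caller. Keep in mind that plain DFS returns a path, not necessarily the shortest one; for a guaranteed shortest path on an unweighted grid you would use BFS instead:

def dfs(x, y, Map, visited, p):
    N = len(Map)
    visited[x][y] = True
    p = p + [[x, y]]
    if Map[x][y] == 'E':
        return p
    for nx, ny in [(x-1, y), (x+1, y), (x, y-1), (x, y+1)]:
        if N > nx >= 0 and N > ny >= 0 and not visited[nx][ny]:
            if Map[nx][ny] in ('P', 'E'):
                result = dfs(nx, ny, Map, visited, p)
                if result:
                    # A path to 'E' was found below; pass it upwards.
                    return result
    return []

# Usage sketch: visited starts out all False.
# N = len(Map)
# visited = [[False] * N for _ in range(N)]
# path = dfs(start_x, start_y, Map, visited, [])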

Knapsack 0-1 with fixed quantity

I'm writing a variation of 0-1 knapsack with multiple constraints. In addition to a weight constraint, I also have a quantity constraint: I want to solve the knapsack problem given that I'm required to have exactly n items in my knapsack, with a weight less than or equal to W. I'm currently implementing a dynamic programming Ruby solution for the simple 0-1 case, based on the code at Rosetta Code: http://rosettacode.org/wiki/Knapsack_problem/0-1#Ruby.
What's the best way to implement the fixed quantity constraint?
You could add a third dimension to the table: the number of items. Each item included adds both weight in the weight dimension and count in the count dimension.
def dynamic_programming_knapsack(problem)
  num_items = problem.items.size
  items = problem.items
  max_cost = problem.max_cost
  count = problem.count
  cost_matrix = zeros(num_items, max_cost+1, count+1)
  num_items.times do |i|
    (max_cost + 1).times do |j|
      (count + 1).times do |k|
        if (items[i].cost > j) or (1 > k)
          cost_matrix[i][j][k] = cost_matrix[i-1][j][k]
        else
          cost_matrix[i][j][k] = [
            cost_matrix[i-1][j][k],
            items[i].value + cost_matrix[i-1][j-items[i].cost][k-1]
          ].max
        end
      end
    end
  end
  cost_matrix
end
To find the solution (which items to pick), you need to look at the grid cost_matrix[num_items-1][j][k] for all values of j and k, and find the cell with the maximum value.
Once you find the winning cell, you need to trace backwards towards the start (i = j = k = 0). For each cell you examine, you need to determine whether item i was used to get there or not.
def get_used_items(problem, cost_matrix)
  itemIndex = problem.items.size - 1
  currentCost = -1
  currentCount = -1
  marked = Array.new(cost_matrix.size, 0)

  # Locate the cell with the maximum value
  bestValue = -1
  (problem.max_cost + 1).times do |j|
    (problem.count + 1).times do |k|
      value = cost_matrix[itemIndex][j][k]
      if (bestValue == -1) or (value > bestValue)
        currentCost = j
        currentCount = k
        bestValue = value
      end
    end
  end

  # Trace path back to the start
  while (itemIndex >= 0 && currentCost >= 0 && currentCount >= 0)
    if (itemIndex == 0 && cost_matrix[itemIndex][currentCost][currentCount] > 0) or
       (cost_matrix[itemIndex][currentCost][currentCount] != cost_matrix[itemIndex-1][currentCost][currentCount])
      marked[itemIndex] = 1
      currentCost -= problem.items[itemIndex].cost
      currentCount -= 1
    end
    itemIndex -= 1
  end
  marked
end

What is the difference between these two find algorithms?

I have these two Find algorithms, which look the same to me. Can anyone explain why they are actually different?
Find(x):
    if x.parent = x then
        return x
    else
        return Find(x.parent)

vs

Find(x):
    if x.parent = x then
        return x
    else
        x.parent <- Find(x.parent)
        return x.parent
I interpret the first one as

    int i = 0;
    return i++;

while the second one as

    int i = 0;
    int tmp = i++;
    return tmp;

which are exactly the same to me.
This looks like the disjoint-set (union-find) data structure.
Now to the question:
For the sake of clarity, call the first version FindA and the second FindB.
Suppose you have this structure:
0
|
1
|
2
|
...
n
The first call to FindA(n) will return 0 in O(n), the second call will also take O(n), and so on.
If you call FindB(n), it will return 0 in O(n), but it will also modify the structure:
      0
    / | \
   1  2 ... n
Now a second call to FindB(n) will return 0 in O(1). Moreover, FindB(k) will return 0 in O(1) for any k.
The second one changes the value of x.parent to the result of Find as a side effect; this is known as path compression.
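To make the difference concrete, here is a minimal Python sketch of the two variants. It uses a plain parent array instead of node objects, which is just an assumption for illustration (the question's pseudocode uses x.parent):

def find_a(parent, x):
    # Walks up to the root every time; repeated calls stay O(depth).
    if parent[x] == x:
        return x
    return find_a(parent, parent[x])

def find_b(parent, x):
    # Path compression: re-point every node on the path directly at the root,
    # so later calls on those nodes become (amortized) near O(1).
    if parent[x] == x:
        return x
    parent[x] = find_b(parent, parent[x])
    return parent[x]

# Example: a chain 0 <- 1 <- 2 <- 3
parent = [0, 0, 1, 2]
find_b(parent, 3)
# parent is now [0, 0, 0, 0]: every node points straight at the root.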

How do I improve the algorithm for my Traffic Jam recursive solver?

There's a cute little game on Android called Traffic Jam.
I've written a recursive solver:
import copy,sys
sys.setrecursionlimit(10000)

def lookup_car(car_string,ingrid):
    car=[]
    i=0
    for row in ingrid:
        j=0
        for cell in row:
            if cell == car_string:
                car.append([i,j])
            j+=1
        i+=1
    return car

#True if up/down False if side to side
def isDirectionUp(car):
    init_row=car[0][0]
    for node in car:
        if node[0] != init_row:
            return True
    return False

def space_up(car,grid):
    top_node=car[0]
    m_space_up = (top_node[0]-1,top_node[1])
    if top_node[0] == 0:
        return -1
    elif grid[m_space_up[0]][m_space_up[1]] != " ":
        return -1
    else:
        return m_space_up

def space_down(car,grid):
    bottom_node = car[-1]
    m_space_down = (bottom_node[0]+1,bottom_node[1])
    if bottom_node[0] == 5:
        return -1
    elif grid[m_space_down[0]][m_space_down[1]] != " ":
        return -1
    else:
        return m_space_down

def space_left(car,grid):
    left_node = car[0]
    m_space_left = (left_node[0],left_node[1]-1)
    if left_node[1] == 0:
        return -1
    elif grid[m_space_left[0]][m_space_left[1]] != " ":
        return -1
    else:
        return m_space_left

def space_right(car,grid):
    right_node = car[-1]
    m_space_right = (right_node[0],right_node[1]+1)
    if right_node[1] == 5:
        return -1
    elif grid[m_space_right[0]][m_space_right[1]] != " ":
        return -1
    else:
        return m_space_right

def list_moves(car,grid):
    ret = []
    if isDirectionUp(car):
        up = space_up(car,grid)
        if up != -1:
            ret.append(("UP",up))
        down = space_down(car,grid)
        if down != -1:
            ret.append(("DOWN",down))
    else:
        left = space_left(car,grid)
        if left != -1:
            ret.append(("LEFT",left))
        right = space_right(car,grid)
        if right != -1:
            ret.append(("RIGHT",right))
    return ret

def move_car(car_string,move,ingrid):
    grid = copy.deepcopy(ingrid)
    car = lookup_car(car_string,grid)
    move_to = move[1]
    front = car[0]
    back = car[-1]
    if(move[0] == "UP" or move[0] == "LEFT"):
        grid[back[0]][back[1]] = " "
        grid[move_to[0]][move_to[1]] = car_string
    elif(move[0] == "DOWN" or move[0] == "RIGHT"):
        grid[front[0]][front[1]] = " "
        grid[move_to[0]][move_to[1]] = car_string
    return grid

def is_solution(grid):
    car = lookup_car("z",grid)
    if(car[-1] == [2,5]):
        return True
    elif space_right(car,grid) == -1:
        return False
    else:
        solgrid = move_car("z",("RIGHT",space_right(car,grid)),grid)
        return is_solution(solgrid)

def print_grid(grid):
    for row in grid:
        print ''.join(row)

def solve(grid,solution,depth):
    global stop
    global state
    if grid in state:
        return
    else:
        state.append(grid)
    if stop:
        return
    if is_solution(grid):
        print_grid(grid)
        print len(solution)
    else:
        for each in "abcdefzhijklm":
            car = lookup_car(each,grid)
            moves = list_moves(car,grid)
            for move in moves:
                solution.append((each,move))
                moved_grid = move_car(each,move,grid)
                solve(moved_grid,solution,depth)

stop=False
state=[]
recdepth=0
#grid file using a-w and x means free space, and ' ' means yellow car
grid=[list(x) for x in file(sys.argv[1]).read().split('\n')[0:-1]]
solve(grid,[],0)
where grid is in a file:
abccdd
abeef
azzhfi
jjjhfi
kll
kmm
But it takes over 8000 moves to find a solution, and fails to find a simple 30-move solution. What is wrong with the algorithm?
If the branching factor of your search space is r, then the number of vertices in the search tree to depth n is (1 - r^n)/(1 - r). A problem with a minimal 30-step solution, even in the simple case where r = 2, will have a search tree with around 2^30 - 1 ≈ 1,000,000,000 vertices. Now, your branching factor is likely to be bigger than 2, so a 30-step problem is a long way from trivial!
That said, I'd be inclined to (a) find a better representation of your problem (searching arrays of strings is slow) and (b) consider best-first search where you guide your search with a heuristic (e.g., distance of the yellow car from its destination or the number of cars blocking the path of the yellow car).
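For instance, a blocking-cars heuristic along those lines could look like this. It is a sketch that reuses lookup_car from the question's code and assumes, as there, that the yellow car is labelled "z" and exits to the right; a best-first search would then expand states with the smallest heuristic value first:

def blockers_heuristic(grid):
    # Count the distinct cars between the yellow car and the right edge of its
    # row; fewer blockers suggests the state is closer to being solved.
    car = lookup_car("z", grid)
    row, col = car[-1]
    return len(set(c for c in grid[row][col+1:] if c != " "))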
Hope this helps.
This is essentially a (fairly standard) searching problem with a huge search space. As others recommended, read up on Depth-first search, then on Breadth-first search, and once you understand the difference, read up on A* search and on coming up with a pessimistic scoring function.
Also, note that in this case you already know what the end state should be, so another approach would be to search from both ends and meet in the middle. This should reduce your search space considerably.
If that's still not enough, you can combine the techniques!
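To sketch what that looks like in practice, here is a minimal breadth-first variant of the solver that reuses the helper functions from the question (lookup_car, list_moves, move_car, is_solution); the names solve_bfs and grid_key are just for illustration. Because BFS explores states in order of move count, the first solution it finds uses the fewest moves:

from collections import deque

def grid_key(grid):
    # Lists are not hashable, so flatten the grid into a tuple of row strings.
    return tuple(''.join(row) for row in grid)

def solve_bfs(start_grid, cars="abcdefzhijklm"):
    queue = deque([(start_grid, [])])
    seen = {grid_key(start_grid)}
    while queue:
        grid, moves = queue.popleft()
        if is_solution(grid):
            return moves                      # shortest move sequence found
        for each in cars:
            car = lookup_car(each, grid)
            if not car:                       # skip labels not present in the grid
                continue
            for move in list_moves(car, grid):
                nxt = move_car(each, move, grid)
                key = grid_key(nxt)
                if key not in seen:           # never enqueue a state twice
                    seen.add(key)
                    queue.append((nxt, moves + [(each, move)]))
    return None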

Resources