LeetCode: Time complexity for BFS/DFS

As per my understanding, both DFS and BFS take O(V+E). But is it possible for the search algorithms to have different time complexities?
For example, in this problem (https://leetcode.com/problems/kill-process/#/description) using DFS takes longer than BFS.
BFS:
class Solution(object):
    def bfs(self, pid, ppid, tmp, output):
        child = []
        for i in xrange(len(ppid)):
            if ppid[i] in tmp:
                output.append(pid[i])
                child.append(pid[i])
        if child != []:
            self.bfs(pid, ppid, child, output)

    def killProcess(self, pid, ppid, kill):
        """
        :type pid: List[int]
        :type ppid: List[int]
        :type kill: int
        :rtype: List[int]
        """
        output, tmp = [kill], [kill]
        self.bfs(pid, ppid, tmp, output)
        return output
Time complexity: O(NlgN)
DFS:
class Solution(object):
    def dfs(self, pid, ppid, kill, output):
        for i in xrange(len(ppid)):
            if ppid[i] == kill:
                if kill not in output:
                    output.append(kill)
                self.dfs(pid, ppid, pid[i], output)
        if kill not in output:
            output.append(kill)

    def killProcess(self, pid, ppid, kill):
        """
        :type pid: List[int]
        :type ppid: List[int]
        :type kill: int
        :rtype: List[int]
        """
        output = []
        self.dfs(pid, ppid, kill, output)
        return output
Time complexity: O(N^2)

First of all, the complexity of an algorithm depends on the data structures used.
The complexity of BFS and DFS is O(V+E) only when you use an adjacency-list representation of the graph.
Secondly, your code does not maintain a visited set of nodes, which is normally consulted so that the search never re-visits the same nodes.
Hence the complexity of your code is O(n^2), not O(V+E).
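For contrast, here is a minimal sketch (my own code, not the poster's) of the O(V+E) approach the answer alludes to: build a parent-to-children adjacency list once, then traverse it with an iterative BFS, so no linear scan of ppid is repeated per node.

    from collections import defaultdict, deque

    def kill_process(pid, ppid, kill):
        # Build the adjacency list (parent -> children) in one O(n) pass.
        children = defaultdict(list)
        for child, parent in zip(pid, ppid):
            children[parent].append(child)
        # Iterative BFS from the killed process; every process has a single
        # parent, so each node is enqueued at most once: O(V+E) overall.
        killed, queue = [], deque([kill])
        while queue:
            cur = queue.popleft()
            killed.append(cur)
            queue.extend(children[cur])
        return killed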

Related

Does there exist a more efficient algorithm for finding the node(s) in a graph with the maximum degree?

Problem:
Given a graph, find the node(s) with the maximum degree and return it/them in a list.
My solution:
import networkx as nx

def max_degree(graph):
    """
    :param graph: non-null and non-oriented networkx type graph to analyze.
    :return: list of nodes of this graph with maximum degree; 0 if the graph has no nodes; -1 if the graph is disconnected
    """
    if len(graph.nodes) == 0:
        return 0
    if not nx.is_connected(graph):
        return -1
    # node_grade_list -> [(<node>, <grade>), ...]
    node_grade_list = sorted(graph.degree, key=lambda x: x[1], reverse=True)
    max_grade = node_grade_list[0][1]
    max_grade_nodes_list = []
    for node in node_grade_list:
        if node[1] == max_grade:
            max_grade_nodes_list.append(node[0])
    return max_grade_nodes_list
Is there a way to make it much more efficient?
Assuming that graph.degree is a list of tuples
[(<node>, <grade>), ...]
Replace getting max nodes with
max_grade = max(graph.degree, key=lambda x: x[1])[1]
max_grade_nodes_list = [node[0] for node in graph.degree if node[1] == max_grade]
This is faster since max is O(n), while sorted requires O(n*log(n)).
Revised code
def max_degree(graph):
    """
    :param graph: non-null and non-oriented networkx type graph to analyze.
    :return: list of nodes of this graph with maximum degree; 0 if the graph has no nodes; -1 if the graph is disconnected
    """
    if len(graph.nodes) == 0:
        return 0
    if not nx.is_connected(graph):
        return -1
    # graph.degree -> [(<node>, <grade>), ...]
    max_grade = max(graph.degree, key=lambda x: x[1])[1]
    return [node[0] for node in graph.degree if node[1] == max_grade]
Explanation of Why Max is Faster
Time complexity is a concept in computer science that deals with the quantification of the amount of time taken by a set of code or algorithm to process or run as a function of the amount of input.
Both max and sorted are built-in functions, so both are natively coded.
Function max is linear in time O(n), since we only have to traverse the list once to find a max. This means as we double the size n of a list, the time it takes to find the max doubles.
Python sort uses TimSort which has an average time complexity of O(n*log(n)), where n is the length of the input list.
O(n*log(n)) average complexity is typical for many sorting algorithms such as quicksort, mergesort, etc.
Comparing O(n*log(n)) to O(n), we see that max has a speed advantage of O(n*log(n)) / O(n) = O(log(n)).
Meaning for a list of 1024 elements (n = 2^10), it will be roughly log2(2^10) = 10 times faster, ignoring constant factors.
For smaller lists, for instance n = 10, it won't really matter which method you use.
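As a quick sanity check (my own snippet, not part of the answer above), you can time the two approaches on a large random list; the exact ratio depends on constant factors, but max should win clearly:

    import random
    import timeit

    data = [random.random() for _ in range(2 ** 20)]

    # max: a single O(n) pass over the list.
    t_max = timeit.timeit(lambda: max(data), number=10)
    # sorted: O(n log n) Timsort, which also allocates a new list.
    t_sorted = timeit.timeit(lambda: sorted(data), number=10)

    print("max:    %.3fs" % t_max)
    print("sorted: %.3fs" % t_sorted)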

Merge k sorted lists: Space complexity

from queue import PriorityQueue  # on Python 2: from Queue import PriorityQueue

class Solution(object):
    def mergeKLists(self, lists):
        """
        :type lists: List[ListNode]
        :rtype: ListNode
        """
        head = point = ListNode(0)
        q = PriorityQueue()
        for l in lists:
            if l:
                q.put((l.val, l))
        while not q.empty():
            val, node = q.get()
            point.next = ListNode(val)
            point = point.next
            node = node.next
            if node:
                q.put((node.val, node))
        return head.next
Hi, I have a Python solution for the problem "Merge k sorted lists":
https://leetcode.com/problems/merge-k-sorted-lists/solution/
In the official solution section it says:
O(n): Creating a new linked list costs O(n) space.
O(k): The code above applies the in-place method, which costs O(1) space, and the priority queue (often implemented with a heap) costs O(k) space (far less than n in most situations).
I don't quite understand why the space is O(n). To me it's obviously O(k), because we only need a heap of size k, and I don't see where we have created a new linked list.
Thanks.
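For what it's worth, the O(n) comes from the line point.next = ListNode(val), which allocates a brand-new node for every element, n nodes in total. Here is a sketch of the in-place variant (my rewrite, assuming the same ListNode class from the problem) that relinks the existing nodes and therefore needs only the O(k) heap:

    from queue import PriorityQueue

    def merge_k_lists(lists):
        head = point = ListNode(0)  # dummy head; ListNode comes from the problem
        q = PriorityQueue()
        for i, l in enumerate(lists):
            if l:
                q.put((l.val, i, l))  # the index breaks ties, since ListNode
                                      # objects are not comparable in Python 3
        while not q.empty():
            _, i, node = q.get()
            point.next = node         # relink the existing node: no new allocation
            point = node
            if node.next:
                q.put((node.next.val, i, node.next))
        return head.next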

Runtime complexity of finding power set of a list of integers

I have the following algorithm to return the power set of a given list of integers (power set = all possible subsets).
class Solution:
    def subsets_recursive(self, nums):
        """
        O(2^n). Space O(1)
        :type nums: List[int]
        :rtype: List[List[int]]
        Given a set of distinct integers, nums, return all possible subsets (the power set).
        Note: The solution set must not contain duplicate subsets.
        Example:
        Input: nums = [1,2,3]
        Output:
        [
          [3],
          [1],
          [2],
          [1,2,3],
          [1,3],
          [2,3],
          [1,2],
          []
        ]
        """
        # [1,2,3] -> [ [],[1],[2],[3],[1,2],[1,3],[2,3],[1,2,3] ]
        res = []
        self.generateSet(nums, 0, res, [])
        return res

    def generateSet(self, nums, startingPos, res, subRes):
        res.append(subRes[:])
        for i in range(startingPos, len(nums)):
            subRes.append(nums[i])
            self.generateSet(nums, i + 1, res, subRes)
            subRes.pop()
Now, I'm trying to figure out the runtime complexity of this recursive backtracking algorithm. I happen to know that for n numbers there are 2^n members in the power set, but that is not the same as saying this particular algorithm runs in O(2^n).
So, how can I show that this particular algorithm also runs in O(2^n)? Or does it?
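One way to see it (my own instrumentation, not part of the solution above): every call to generateSet appends exactly one subset to res, so the number of calls equals the number of subsets, 2^n. Each call also copies subRes[:], which costs up to O(n), so a tighter bound on the total work is O(n * 2^n). A quick check of the call count:

    def count_calls(n):
        calls = 0
        def generate(start, sub):
            nonlocal calls
            calls += 1            # one call per emitted subset
            for i in range(start, n):
                sub.append(i)
                generate(i + 1, sub)
                sub.pop()
        generate(0, [])
        return calls

    for n in range(6):
        assert count_calls(n) == 2 ** n  # 1, 2, 4, 8, 16, 32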

Minimal value of longest path with free choice of starting node in undirected graph

The problem is to find the minimal value (the weight of every edge is equal to 1) of the longest path to every other node, with free choice of the starting node, in an undirected graph. The data is in the form of adjacency relations. Nodes are numbered from 0 to max.
So in the graph above the correct solution is 3, because the longest path to every other node is 3 when we choose node number 2 or 4.
My solution:
Iterate over every node and find its path cost to every other node; the biggest of those values is our result.
This is achieved using findRoad, which searches recursively, level by level, to find a connection.
acc keeps the value of the current iteration, which equals the longest path this vertex has to travel.
If acc is greater than a previously found value (min), we stop iterating this node, because it won't give us a better result.
The algorithm ends after the iteration on the last node is finished.
The algorithm works correctly, however it is very slow. I tried one thing to improve it, but it only slowed the algorithm down further:
in the recursive call to findRoad, replacing the first parameter, which is always the full list of edges, with a filtered list (without the already-checked edges).
Code:
val a: List[(Int, Int)] // list of all edges
val c: Set[(Int, Int)]  // set of all edges
val max: Int            // maximum index of node

// main loop that manages the iterations and limits their number
def loop(it: Int, it2: Int, acc: Int, min: Int): Int = {
  if (it > max) min
  else if (it2 > max) {
    if (min < acc) loop(it + 1, 0, 0, min)
    else loop(it + 1, 0, 0, acc)
  }
  else if (acc >= min) loop(it + 1, 0, 0, min)
  else if (it == it2) loop(it, it2 + 1, acc, min)
  else {
    val z = findRoad(a, List(it), it2, 1, min) // a: the full list of edges
    if (z > acc) loop(it, it2 + 1, z, min)
    else loop(it, it2 + 1, acc, min)
  }
}

// finding the shortest path from node s to node e
def findRoad(a: List[(Int, Int)], s: List[Int], e: Int, lev: Int, min: Int): Int = {
  if (lev > min) Int.MaxValue
  else if (s.exists(s => a.exists(p => p == (s, e) || p == (e, s)))) lev
  else findRoad(a, connectEdges(s, Set()), e, lev + 1, min)
}

// finding all predecessors and successors of the current frontier
def connectEdges(a: List[Int], acc: Set[Int]): List[Int] = {
  if (a.isEmpty) acc.toList
  else connectEdges(a.tail, acc ++ c.collect {
    case (b, c) if b == a.head => c
    case (b, c) if c == a.head => b
  })
}
Is the whole idea flawed, or should some Scala operations be avoided (like filter, collect, or transforming sets into lists)?
Maybe I should use some all-pairs shortest-paths algorithm, like the Floyd–Warshall algorithm?
Use BFS. Since the edge cost is 1, it will give you the shortest paths from a vertex u to all other vertices in O(V+E) time. Now take the maximum of those [u,v] distances over all vertices v. Let's call this d. Finally, you need the vertex that gives the minimum value of d. The overall running time is O((V+E)*V).
Algorithm:
min = infinity
result = -1  // which vertex yields the minimum result
for all vertices u in V:
    dist = [|V|]  // array of size |V| initialized to 0
    fill dist using BFS starting from u
    max = max(dist)
    if max < min:
        min = max
        result = u
return result
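A compact Python rendering of that pseudocode (my sketch, assuming the graph is connected and given as an adjacency list, i.e. a list of neighbour lists):

    from collections import deque

    def min_longest_path(adj):
        n = len(adj)
        best = float('inf')
        for u in range(n):
            # BFS from u fills dist with shortest path lengths (edge weight 1).
            dist = [-1] * n
            dist[u] = 0
            queue = deque([u])
            while queue:
                v = queue.popleft()
                for w in adj[v]:
                    if dist[w] == -1:
                        dist[w] = dist[v] + 1
                        queue.append(w)
            best = min(best, max(dist))  # max(dist) is the eccentricity of u
        return best

What this computes is exactly the radius of the graph: the minimum eccentricity over all vertices.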
I think you can just run BFS from every vertex, and for each vertex remember the length of the longest path traveled by the BFS. Your result is the minimum value among those. It would be O(n²).
You can also find the shortest paths for each pair of vertices using the Floyd–Warshall algorithm, and then for each vertex v find the vertex u for which dist[v][u] is maximal. Among all those values, the minimal one is your answer. It would be O(n³) because of Floyd–Warshall.
(n = number of vertices)
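A sketch of the Floyd–Warshall alternative under the same assumptions (my code; adjacency list, unit edge weights):

    def min_longest_path_fw(adj):
        n = len(adj)
        INF = float('inf')
        # dist[i][j] = shortest path length from i to j, initialised from the edges.
        dist = [[INF] * n for _ in range(n)]
        for v in range(n):
            dist[v][v] = 0
            for w in adj[v]:
                dist[v][w] = 1
        # Standard O(n^3) relaxation over intermediate vertices k.
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        # Minimise the eccentricity max(dist[v]) over all vertices v.
        return min(max(row) for row in dist)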

Equivalent data structures with same bounds in worst case (vs. amortized)

I could not make my title very descriptive, apologies!
Is it the case that for every data structure supporting some operations with certain amortized running times, there is another data structure supporting the same operations with the same running times in the worst case? I am interested both in iterative, ephemeral data structures and in functional ones.
I am certain that this question must have been asked before; I cannot seem to find the correct search keywords (in Google, SO, TCS). I am looking for a cited answer in {yes, no, open}.
No, at least in models where element distinctness of n elements requires time Ω(n log n).
Consider the following data structure, which I describe using Python.
class SpecialList:
    def __init__(self):
        self.lst = []

    def append(self, x):
        self.lst.append(x)  # amortized O(1)

    def rotationalsimilarity(self, k):
        # Count positions where the list agrees with its rotation by k,
        # then clear the list; the O(len) work is paid for by the appends.
        rotatedlst = self.lst[k:] + self.lst[:k]
        count = sum(1 if x == y else 0 for (x, y) in zip(self.lst, rotatedlst))
        self.lst = []
        return count
Clearly append and rotationalsimilarity (since it clears the list) are amortized O(1). If rotationalsimilarity were worst-case O(1), then we could also provide an O(1) undo operation that restores the data structure to its previous state, since a worst-case O(1) operation modifies only O(1) memory cells, whose old contents can be logged and written back. It follows that we could implement element distinctness in time O(n).
def distinct(lst):
    slst = SpecialList()
    for x in lst:
        slst.append(x)
    for k in range(1, len(lst)):  # 1 <= k < len(lst)
        if slst.rotationalsimilarity(k) > 0:
            return False
        slst.undo()  # the hypothetical O(1) undo restores the cleared list
    else:
        return True
