Is "Circular" list a kind of "linear list"? [closed] - algorithm

I'm not sure what kinds of "list" can be considered "linear lists".
For example, if "linear" means there is one and only one rule that tells us which element is "next", should a "circular list" also count as a "linear list"?
If yes, could "general lists", even though they may have a higher-dimensional structure, also be considered "linear lists" as long as we give a rule for finding the "next" element?

Circular lists are linear data structures. However, it is not sufficient to give a rule for finding the next element: for the structure to be linear, no element may be the "next" element of more than one other element.
For example, consider a structure in which node "C" is the successor of two other nodes, "B" and "F". Although each node has at most one successor, the structure cannot be considered linear.
A list of linear data structures can be found here.

I'm not sure what context you are coming from with this question, but my understanding of "circular list" in general computer data-structure terminology is a list where the last element points back to the first element, so that the list can be traversed indefinitely. This is useful in certain applications.
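To make the structure concrete, here is a minimal Python sketch (illustrative only, not from the original answer) of such a circular singly linked list. The last node points back to the head, and traversal stops once it wraps around; note that every node still has exactly one successor and is the successor of exactly one node, which is why the structure remains linear.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None  # never left as None once the list is linked into a ring

def make_circular(values):
    # Build a circular singly linked list and return its head.
    head = Node(values[0])
    current = head
    for v in values[1:]:
        current.next = Node(v)
        current = current.next
    current.next = head  # the last node points back to the first
    return head

def traverse_once(head):
    # Visit each element exactly once by stopping when we wrap back to the head.
    out = []
    node = head
    while True:
        out.append(node.value)
        node = node.next
        if node is head:
            break
    return out

print(traverse_once(make_circular(["A", "B", "C"])))  # ['A', 'B', 'C']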

Yes, you are right: "linear" means that there is a specific rule for reaching a unique next node.
Circular lists differ slightly in implementation: none of the pointers is NULL, and because of their "infinite" nature there tends to be some confusion. Nevertheless, a circular linked list is generally still called a linear linked list.
Note that a tree is called a non-linear data type because a node's "next" can be more than one node, so there is no unique next node; hence a tree is an example of a non-linear structure.

Related

What is the right way to calculate sub-sequence, subset and sub-strings? [closed]

Often, I come across the following terminology in coding interviews.
Given an array or string, find the
sub-array
sub-sequence
sub-string
What difference is there between them?
For example, I see that an integer array can be split into
n*(n+1)/2
sub-arrays. Do they become subsets as well? Must sub-arrays be contiguous?
For calculating the number of sub-sequences of a string, why use
2^str_length - 1?
After searching online, I ended up with this link
https://www.geeksforgeeks.org/subarraysubstring-vs-subsequence-and-programs-to-generate-them/
But I am still unsure: what is the universal term for a part of an array/string, and how do I count them?
In general, arrays and strings are both sequences. The "sequence" part indicates that the order of elements matters. A "substring" is usually contiguous; "sub-array" and "sub-sequence" are less clear-cut. If you're in a job interview and not certain of the interpretation, your first job is to ask. Sometimes, part of the job interview is making sure you can spot and resolve ambiguities.
UPDATE after question update
I find the referenced page quite clear.
First, note that string and array are both specific types of a sequence.
subsequence is the generic term: elements of the original sequence
appearing in the same order as in the original, but not necessarily contiguous. For instance, given the sequence "abcdefg", we have sub-sequences "a", "ag", "bce", etc.
Strings with repeated elements or elements out of the original order, such as "ga", "bb", "bcfe", etc., are not sub-sequences.
"Subset" is a separate type. In a set, repeated elements do not exist, and ordering does not matter.
Does that clear up your problems?
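If it helps to see the counts fall out of code, here is a small, purely illustrative Python sketch (the function names are my own) that enumerates the contiguous sub-arrays of a string and its sub-sequences, matching the n*(n+1)/2 and 2^n - 1 figures from the question:

from itertools import combinations

def subarrays(s):
    # All non-empty contiguous slices; there are n*(n+1)/2 of them.
    n = len(s)
    return [s[i:j] for i in range(n) for j in range(i + 1, n + 1)]

def subsequences(s):
    # All non-empty subsequences (original order kept, gaps allowed); there are 2**n - 1.
    n = len(s)
    return ["".join(s[i] for i in idx)
            for r in range(1, n + 1)
            for idx in combinations(range(n), r)]

s = "abc"
print(subarrays(s), len(subarrays(s)))        # ['a', 'ab', 'abc', 'b', 'bc', 'c'] 6
print(subsequences(s), len(subsequences(s)))  # ['a', 'b', 'c', 'ab', 'ac', 'bc', 'abc'] 7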

Binary Search Tree Construction [closed]

Currently reviewing how to construct a BST, and it seems like there are two "common" ways of constructing it. One way, like this example, simply puts everything inside a Node class and does all the operation within such Node class. Another way is to break it down to both Node and BST class, and construct the tree from there.
I can see the appeal of both, but what is the standard way of constructing a BST? Or is it really just a matter of personal preference?
There's no good reason to have a separate BST class. It doesn't have any properties that a Node doesn't also have. Any Node is also a BST.
Alternatively there is no good reason to have a Node class, just a BST class.
Whatever you call it, there's only one of it.
A binary tree (a binary search tree is a special case of a binary tree that satisfies a property known as binary search tree property at every node) is a recursive data structure. A binary tree is either
nothing (represented as null), or
a node with three (optionally four) things: a key, a left child, and a right child.
The optional fourth part of a node is a parent reference, which itself refers to a node (and hence a tree). This reference is handy in several tree operations such as node deletion (and even traversal).
So a binary tree can be fully represented by a node and vice versa; a separate BST or BinaryTree representation (for instance, as a 'class') is not needed, since a binary tree can be seen as a reference to its "root" node.
Since balancing of a BST is critical for its performance, it's important to note that in practice (e.g. library code), red-black trees usually replace binary (search) trees.
With respect to this representation, a binary tree is no different from an n-ary tree.
There is no need to define a 'subtree' while defining a binary tree. A separate BST class is not needed because a Node is enough to represent an entire tree.
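As a concrete illustration of the single-class approach described above (a minimal Python sketch with my own naming, not code from the question or either answer), the Node itself can carry the insert and search operations, and the empty tree is simply a None reference held by the caller:

class Node:
    # A BST is just a reference to its root Node; no separate tree class is needed.
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

    def insert(self, key):
        # Preserve the binary search tree property: smaller keys go left, others go right.
        if key < self.key:
            if self.left is None:
                self.left = Node(key)
            else:
                self.left.insert(key)
        else:
            if self.right is None:
                self.right = Node(key)
            else:
                self.right.insert(key)

    def contains(self, key):
        if key == self.key:
            return True
        child = self.left if key < self.key else self.right
        return child is not None and child.contains(key)

root = Node(5)
for k in (3, 8, 1, 4):
    root.insert(k)
print(root.contains(4), root.contains(7))  # True False

One practical argument for a wrapper class is that an empty tree then has an object to hang methods on rather than a bare None; otherwise the two designs are interchangeable.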

Algorithmic complexity of checking if an element exists in an array [closed]

If I have an array of unsorted numbers and a number I'm looking for, I believe there's no way of checking if my number is in it except by going through each member and comparing.
Now, in mathematics and various theoretical branches I've been interested in, there's often the pattern that you usually get what you put in. What I mean is, there's usually an explanation for every unexpected result. Take the Monty Hall problem as an example. It seems counter-intuitive until you realize the host adds more information into the situation because he knows what door the car is behind.
Since you're iterating over the array, instead of just getting a yes or no answer, you also get the exact location of the element (if it's there). Wouldn't it then make sense for there to be a less complex algorithm that gives you ONLY a single bit of information?
Am I completely off base here?
Is there an actual correlation between the amount of information you get and the complexity of an algorithm? What's the theory behind the relationship between the amount of information an algorithm gives you and its complexity?
Yes, you're completely off base, sorry!
Algorithmic complexity is defined in terms of how many operations it takes to solve a problem of size N. If the array has N elements in it, then there is no way of determining whether the value appears in the array other than checking all N elements (in the worst case). That makes it linear, or O(N).
The fact that you can also determine the location of the value in O(N) (as indeed you can) doesn't mean that you can solve the simpler problem in less time.
When you are searching an array, indexing is the price that you pay for having an array. The ability to access an element by index is inherent in the structure of the array: in other words, once you say "I am going to search an array" (not a collection, but specifically an array) you have already paid for the index. At this point, there is no way to get rid of it, or of the O(n) cost associated with searching the array.
However, this is not the only solution: if you agree to drop the ability to index as a requirement, you could build a collection that gives you a yes/no answer much faster. For example, if you use a hash table, your search time becomes O(1). Of course there is no associated index in a hash table: inability to access items in arbitrary order is your payment for an ability to check for presence or absence of items in constant time.
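To illustrate the trade-off in code (a small Python sketch of my own, not part of the answer): the linear scan is all you can do on a plain unsorted array, while a hash-based set gives up positional indexing in exchange for average O(1) membership tests.

def contains_linear(arr, target):
    # O(n): with an unsorted array there is nothing to do but check every element.
    for x in arr:
        if x == target:
            return True
    return False

data = [42, 7, 19, 3]
lookup = set(data)                 # built once in O(n); no positional indexing afterwards

print(contains_linear(data, 19))   # True, found by scanning
print(19 in lookup)                # True, average O(1) hash lookup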

Need algorithm for choosing sets of integer from list of such sets [closed]

I have a list whose members are sets of 5 numbers chosen from the integers 1 to 600 [or 0 to 599 for storage purposes].
I need to choose a sublist of this list such that among the sets in this sublist, each integer in the 1 to 600 range appears exactly once, so a sublist of 120 elements. My list has either 4200 or 840 elements in it--I'll find out by running whether the bigger number is necessary.
I need any one such sublist.
This sounds like a standard problem to me, but I have no idea how to search. Can someone help with providing an algorithm, please?
From Set Cover Problem
The greedy algorithm for set covering chooses sets according to one rule: at each stage, choose the set that contains the largest number of uncovered elements
Wikipedia seems to say that this greedy algorithm achieves essentially the best approximation possible under plausible complexity assumptions.
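As a quick, purely illustrative Python sketch of that greedy rule (ordinary set cover; note that it does not guarantee the exact-once cover the question ultimately needs, see the edit further down):

def greedy_set_cover(universe, subsets):
    # At each stage, choose the subset covering the most still-uncovered elements.
    # This approximates plain set cover; it is not an exact (partition) cover.
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            return None  # the remaining elements cannot be covered
        chosen.append(set(best))
        uncovered -= set(best)
    return chosen

print(greedy_set_cover(range(1, 7), [{1, 2, 3}, {3, 4}, {4, 5, 6}, {2, 6}]))
# e.g. [{1, 2, 3}, {4, 5, 6}]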
I would boil it down to these steps:
Pick an element from the list (the first one, probably)
Pick the next element you come across none of whose 5 numbers is yet represented in the sub-list
If you reach the end, go back to the beginning of the list and lower the criterion of step #2 to 4 numbers
Repeat steps 2 & 3 until you have covered all integers
Depending on the programming language you're using, there are ways of making this pretty quick.
Edit: the poster has explained that each integer must be used exactly once
So, what you really need to do is just keep adding elements, skipping any element that contains an integer already present in your subset. The "exactly once" criterion takes precedence over the "not yet in the subset" criterion. You'll break out of the loop when you hit 120 subsets.
You may also want to keep track of the order in which you add elements to your subset, and when you hit a dead end (e.g., each of the elements remaining in the superset contains an integer that is already present in your subset) you backtrack one element and continue.
In order to backtrack and remember what combinations do not work, you will need to keep a list of "banned collections", and each time you decide whether to add a new element you should first make sure it's not in this list of banned collections. The best way (that I've found) to do this in Ruby is to store the Hash of the collection rather than the collection itself. This provides an inexpensive way to evaluate whether the prospective collection has already been tried and has led to a dead-end.
Good luck!
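Below is a minimal Python sketch of the backtracking idea described above (my own illustration, not the poster's code): search for a sub-list of blocks whose sets partition the universe, pruning any block that reuses an element and backing out of dead ends. For the sizes mentioned in the question a more engineered exact-cover solver may be needed, but the structure is the same.

def exact_cover(universe, blocks):
    # Backtracking search for blocks that together cover every element exactly once.
    universe = frozenset(universe)
    blocks = [frozenset(b) for b in blocks]

    def search(covered, start, chosen):
        if covered == universe:
            return chosen
        for i in range(start, len(blocks)):
            b = blocks[i]
            if covered & b:            # would reuse an element: skip this block
                continue
            result = search(covered | b, i + 1, chosen + [b])
            if result is not None:
                return result
        return None                    # dead end: backtrack

    return search(frozenset(), 0, [])

# Toy instance: partition 1..10 using 5-element blocks.
blocks = [{1, 2, 3, 4, 5}, {2, 4, 6, 8, 10}, {6, 7, 8, 9, 10}, {1, 3, 5, 7, 9}]
print(exact_cover(range(1, 11), blocks))
# [frozenset({1, 2, 3, 4, 5}), frozenset({6, 7, 8, 9, 10})]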

Whether a given Binary Tree is complete or not [closed]

Given a Binary Tree, write a function to check whether the given Binary Tree is Complete Binary Tree or not.
A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible. source: wikipedia
My approach is to do a BFS using a queue and count the number of nodes. Run a loop while the queue is not empty, but break as soon as one of the conditions below holds:
the left child is not present for a node
the left child is present but the right child is not present.
Now we can compare the count that we get from the above approach with the original count of the nodes in the tree. If they are equal, it is a complete binary tree; otherwise it is not.
Please tell me whether the approach is correct or not. Thanks.
This question is the same as this one, but I want to verify my method here.
Edit:
The algorithm was verified by @Boris Strandjev below. I felt this was the easiest to implement of the algorithms available on the net. Sincere apologies if you do not agree with my assertion.
Your algorithm should solve the problem.
What you are doing with the BFS is entirely equivalent to drawing the tree and then tracing the nodes with your finger top-down and left-to-right. The first time you cannot continue, you stop tracing. If you have not counted all the nodes by then, the structure is obviously not as expected.
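For reference, here is a minimal Python sketch of the approach as I read it (my own illustration, not code from the question or the answer): walk the tree in level order, stop at the first missing child, and compare the size of the prefix visited so far with the total node count.

from collections import deque

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def count_nodes(root):
    return 0 if root is None else 1 + count_nodes(root.left) + count_nodes(root.right)

def is_complete(root):
    # Level-order walk; break at the first node with a missing left child, or with a
    # left child but no right child, then compare the prefix size with the total count.
    if root is None:
        return True
    seen = 0
    q = deque([root])
    while q:
        node = q.popleft()
        seen += 1
        if node.left is None:
            break
        q.append(node.left)
        if node.right is None:
            break
        q.append(node.right)
    seen += len(q)  # children enqueued before the break still belong to the prefix
    return seen == count_nodes(root)

complete = Node(1, Node(2, Node(4), Node(5)), Node(3, Node(6)))
not_complete = Node(1, Node(2, None, Node(5)), Node(3))
print(is_complete(complete), is_complete(not_complete))  # True False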
