Challenging Problems Involving Stacks (Data Structures) [closed] - algorithm

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
Does anyone know where I can get lots of challenging programming problems that involve the use of stacks? I'd like to challenge myself with super-hard problems.

My Favorites
Convert a prefix expression to postfix using a stack.
Implement a stack with O(1) access to the maximum element.
Implement Tower of Hanoi without recursion. :)
Take any program that is recursive by intuition and try to implement it iteratively. Every recursive solution has an iterative counterpart.
That should keep you busy for a while. :)
BTW, why don't you buy/borrow "Data Structures Using C and C++" by Langsam/Tenenbaum? You will get enough problems for a week. :)
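As a sketch of the first item (my own code; it assumes single-character operands and the four basic operators): scan the prefix string right to left, push operands, and when an operator appears, combine the top two stack entries into a postfix fragment.

```python
def prefix_to_postfix(prefix):
    """Convert a prefix expression to postfix with a stack.

    Assumes single-character operands and operators in +-*/.
    """
    stack = []
    for tok in reversed(prefix):          # scan right to left
        if tok in "+-*/":
            a, b = stack.pop(), stack.pop()
            stack.append(a + b + tok)     # operands first, operator last
        else:
            stack.append(tok)
    return stack.pop()

print(prefix_to_postfix("*+ABC"))   # AB+C*  (i.e. (A+B)*C)
```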

A Google search for stack data structure problems turns up lots of interesting stuff. Here are a few interesting ones (from easier to harder, in my judgment):
Check for balanced parentheses, braces, and other paired delimiters in text.
Write a postfix calculator. (So 1 3 2 4 + * - should calculate 1 - (3 * (2+4)).)
Use stacks to solve the "N queens" problem. (Place N queens on an N x N chess board so that no two queens are on the same row, column, or diagonal.)
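The postfix calculator above is a classic one-stack exercise. A minimal sketch (my own code, names assumed): push operands, and on each operator pop two values, apply it, and push the result.

```python
import operator

# operator table; truediv is looked up lazily so no division happens
# unless "/" actually appears in the expression
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def eval_postfix(expr):
    """Evaluate a space-separated postfix expression using a stack."""
    stack = []
    for tok in expr.split():
        if tok in OPS:
            b, a = stack.pop(), stack.pop()   # right operand is on top
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_postfix("1 3 2 4 + * -"))   # -17.0, i.e. 1 - (3 * (2 + 4))
```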

Cracking the Coding Interview, Fourth Edition: 150 Programming Interview Questions and Solutions:
http://www.amazon.com/Cracking-Coding-Interview-Fourth-Programming/dp/145157827X
Chapter 3 is dedicated to stacks and queues questions. Here is a sample question:
How would you design a stack which, in addition to push and pop, also has a function min which returns the minimum element? Push, pop and min should all operate in O(1) time.
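A common solution (sketched here in Python of my own; the class and method names are assumptions) keeps a second stack of running minima alongside the main stack, so push, pop, and min are all O(1).

```python
class MinStack:
    """Stack with O(1) push, pop, and min."""

    def __init__(self):
        self._stack = []
        self._mins = []   # parallel stack: _mins[i] = min of _stack[:i+1]

    def push(self, x):
        self._stack.append(x)
        self._mins.append(x if not self._mins else min(x, self._mins[-1]))

    def pop(self):
        self._mins.pop()
        return self._stack.pop()

    def min(self):
        return self._mins[-1]

s = MinStack()
for x in (5, 2, 7, 1):
    s.push(x)
print(s.min())   # 1
s.pop()
print(s.min())   # 2
```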

You could learn a stack-based language, like Forth, or a stack virtual machine, like MS's CIL VM. Either one will force you to reconsider how you implement anything you write in it to make use of stacks.

Go to the library and look at textbooks on data structures. They should all be shelved together, so you can always just find the call number for your current textbook and the others will be right nearby.
I'd expect most would have chapters specifically on stacks, with end-of-chapter problems. There will be a lot of overlap, but you should be able to get a good selection of problems of varying difficulty, along with a range of different explanations for standard problems.

Related

What's the usual practice to avoid stack overflow in recursive algorithms? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I used to think iterative algorithms are always superior to recursive ones due to the potential stack overflow issue, as long as one doesn't mind the extra coding effort. Therefore, if I were to build a utility function to be used over and over again, I should always go with iteration. Is that logic correct? Or is there any standard trick to avoid stack overflow in recursion, even for very large N? Still assume the data structure itself is not too big to fit in RAM.
Basically you're right: for common compilers, iteration is superior in both memory consumption and performance. But do not forget there are languages, usually focused on functional programming and/or AI, that are optimized for recursion, and with them recursion is superior instead...
Anyway, when I can, I always use iteration for the reasons you mentioned. However, the recursive approach is often simpler than the iterative one, and sometimes the amount of coding required to convert to iteration is too much... In such cases you can do these:
limit heap/stack trashing
Simply get rid of as many operands, return values, and local variables as you can, since each recursion makes a copy/instance of them...
You can use static variables for the locals, or even global ones for the operands, but beware not to break the functionality by doing this.
You would not believe how many times I have seen an array passed by value into a recursion...
limit recursion
If you have a multi-split recursion (one recursive layer makes more than one recursive call), you can keep an internal global counter of how many recursions are currently active...
If you hit some threshold value, stop making recursive calls... instead, schedule the pending calls into some global array/list used as a work queue, and once the active recursions finish, process this queue until it is empty. This approach is not always applicable, though. Nice examples are flood fill and grid A* pathfinding...
increase heap/stack size
Executables have a table that tells the OS how much memory to allocate for their parts... so just find the setting in your compiler/linker/IDE and set the value to a reasonable size.
I am sure there are more techniques, but these are the ones I use...
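The "replace the call stack with an explicit one" idea above can be sketched with flood fill (my own code; names are assumptions): pending cells go on a list used as a stack instead of becoming recursive calls, so grid size no longer limits recursion depth.

```python
def flood_fill(grid, r, c, new):
    """Iterative flood fill: an explicit stack replaces recursion."""
    old = grid[r][c]
    if old == new:
        return
    stack = [(r, c)]                  # pending cells instead of call frames
    while stack:
        y, x = stack.pop()
        if 0 <= y < len(grid) and 0 <= x < len(grid[0]) and grid[y][x] == old:
            grid[y][x] = new
            # neighbours are scheduled, not recursed into
            stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])

g = [[0, 0, 1],
     [0, 1, 1],
     [0, 0, 0]]
flood_fill(g, 0, 0, 2)
print(g)   # [[2, 2, 1], [2, 1, 1], [2, 2, 2]]
```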

What makes a task difficult or 'complex' to machine learn? Regarding complexity of pattern, not computationally [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
As many, I am interested in machine learning. I have taken a class on this topic, and have been reading some papers. I am interested in finding out what makes a problem difficult to solve with machine learning. Ideally, I want to learn about how the complexity of a problem regarding machine learning can be quantified or expressed.
Obviously, if a pattern is very noisy, one can look at the update techniques of different algorithms and observe that some particular machine learning algorithm incorrectly updates itself in the wrong direction due to a noisy label; but this is a qualitative argument rather than analytical or quantifiable reasoning.
So, how can the complexity of a problem or pattern be quantified to reflect the difficulty a machine learning algorithm faces? Maybe something from information theory? I really have no idea.
In the theory of machine learning, the VC dimension of the domain is usually used to classify how hard it is to learn.
A domain is said to have VC dimension k if there is a set of k samples such that, regardless of their labels, the suggested model can "shatter" them (split them perfectly using some configuration of the model).
The Wikipedia page offers a 2D example with a linear separator as the model: there is an arrangement of 3 points in 2D such that, whatever the labels are, one can fit a linear separator to split them. However, for every 4 points in 2D, there is some assignment of labels such that a linear separator cannot split them.
Thus, the VC dimension of 2D space with a linear separator is 3.
Also, if the VC dimension of a domain and model is infinity, the problem is said to be not learnable.
If you have a strong enough mathematical background and are interested in the theory of machine learning, you can try following Amnon Shashua's lecture about PAC learning.
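The 2D claims can be checked directly for tiny point sets (a sketch of my own; it uses the fact that two finite point sets are linearly separable exactly when their convex hulls are disjoint, tested here for classes of at most two points each):

```python
import itertools

def cross(o, a, b):
    """2D cross product of vectors o->a and o->b."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def on_segment(p, a, b):
    """True if p lies on the closed segment ab."""
    return (cross(a, b, p) == 0 and
            min(a[0], b[0]) <= p[0] <= max(a[0], b[0]) and
            min(a[1], b[1]) <= p[1] <= max(a[1], b[1]))

def segments_intersect(a, b, c, d):
    d1, d2 = cross(c, d, a), cross(c, d, b)
    d3, d4 = cross(a, b, c), cross(a, b, d)
    if ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0)):
        return True
    return any(on_segment(p, q, r) for p, q, r in
               [(a, c, d), (b, c, d), (c, a, b), (d, a, b)])

def separable(pos, neg):
    """Exact separability test for classes of at most 2 points each."""
    if not pos or not neg:
        return True
    if len(pos) == 1 and len(neg) == 1:
        return pos[0] != neg[0]
    if len(pos) == 1:
        return not on_segment(pos[0], neg[0], neg[1])
    if len(neg) == 1:
        return not on_segment(neg[0], pos[0], pos[1])
    return not segments_intersect(pos[0], pos[1], neg[0], neg[1])

# Three points in general position: every labelling is separable.
tri = [(0, 0), (2, 0), (1, 2)]
all_sep = all(
    separable([p for p, m in zip(tri, bits) if m],
              [p for p, m in zip(tri, bits) if not m])
    for bits in itertools.product([False, True], repeat=3))
print(all_sep)                                    # True: 3 points shattered

# Square corners with the XOR labelling cannot be separated.
print(separable([(0, 0), (1, 1)], [(0, 1), (1, 0)]))   # False
```

Together these two checks illustrate why the VC dimension of linear separators in the plane is exactly 3.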

Which function in pascal is the fastest one, while, for or repeat? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question appears to be off-topic because it lacks sufficient information to diagnose the problem. Describe your problem in more detail or include a minimal example in the question itself.
Closed 9 years ago.
I solved one problem in Pascal using mostly the for loop, and I didn't meet the time limit, even though the problem is solved correctly and in the best way. So, is either of the other loop constructs faster, or did I make a mistake and solve it wrongly?
Short answer: try it out! Just make 3 versions of your solution, test each against a good mass of data, and record how much time each takes. If you still don't meet the time limit, try a faster PC or review your solution.
while and repeat are not functions, but indicate looping control structures intrinsic to the programming language.
Neither is faster, neither is slower. Both are faster, both are slower. A general answer is not possible, and the more work you do inside the loop, the less relevant any difference, if any, becomes.
If you didn't solve your exercise, the problem is not the choice of loop. It could be the algorithm you chose, a mistake you made, or that the testing machine didn't have enough processing time left for your program to finish in time.
Assuming a sensible compiler, there should be no performance difference between them.
Even if there were a difference, it would likely be negligible -- unnoticeable by humans, and easily within experimental error for computed results.
Since this is a school assignment, I'd suggest reviewing the material taught in the last week; probably something there will hint at a better way to solve the problem.
Originally, for was considered faster because it didn't allow changing the loop variable, which enabled certain kinds of optimizations, especially if the value of the loop variable wasn't used.
Nowadays, with fewer memory and time limits on optimization, the various forms are easily transformed into each other, and the whole discussion becomes academic.
Note that most modern compilers (after Delphi) add another check to the for statement, verifying that the upper bound is greater than or equal to the lower bound. If you have to answer such a question for homework, check your compiler very thoroughly.

an algorithm to minimize the total excess [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
Imagine that you have ropes which are 5 meters long, and you want to cut them into certain lengths (30 cm, 73 cm), each a certain number of times. I want to write a program that minimizes the total length of the excess rope and tells you how to cut every rope. But I don't know where to start or which algorithm to use. Can you give me some references? Thank you in advance.
What you are looking for is the so-called cutting stock problem.
Start by looking at this Wikipedia article and follow the suggested readings. I remember we had this as part of some course back at university (although I can't remember which one), so you could also have a look at Coursera.
Seems like homework, but I can still point you in the right direction. What you have on hand is an example of dynamic programming. From what I understand of your question, you have a sub-case of the ever-popular knapsack problem, which is in essence an optimization problem of using the space at hand most efficiently, thus reducing waste. Tweak it a bit to your own needs and you should be able to get a solution for your problem.
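The knapsack connection can be sketched for the single-rope subproblem (my own code, names assumed; it ignores the per-length demand counts and just finds the least-waste cut pattern for one 500 cm rope):

```python
def best_cuts(rope_len, pieces):
    """Unbounded-knapsack DP: cuts for one rope minimizing waste."""
    best = {0: []}                       # best[l] = piece list summing to l
    for l in range(1, rope_len + 1):
        for p in pieces:
            if l - p in best:            # l is reachable via piece p
                best[l] = best[l - p] + [p]
                break
    used = max(best)                     # longest reachable total length
    return rope_len - used, best[used]

waste, cuts = best_cuts(500, [30, 73])
print(waste, sorted(cuts))   # 2 [30, 30, 73, 73, 73, 73, 73, 73]
```

A full cutting-stock solver would repeat this per rope while tracking how many pieces of each length are still needed; that extension is left to the reader.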

Naive approaches to detecting plagiarism? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Let's say you wanted to compare essays by students and see if one of those essays was plagiarized. How would you go about this in a naive manner (i.e., not too complicated an approach)? Of course, there are simple ways like comparing the words used in the essays, and complicated ways like using compression functions, but what are some other ways to check plagiarism without too much complexity/theory?
There are several papers giving several approaches; I recommend reading this one.
The paper presents an algorithm based on an index structure built over the entire file collection, which the authors say can be used to find similar code fragments in a large software system. Before the index is built, all the files in the collection are tokenized. This is a simple parsing problem and can be solved in linear time: for each of the N files in the collection, the output of the tokenizer for a file F_i is a string of n_i tokens.
Here is another paper you could read.
Another good algorithm is SCAM-based: it detects plagiarism by comparing a set of words common to a test document and a registered document. The authors evaluate their plagiarism detection system, like many information retrieval systems, with precision and recall metrics.
You could take a look at Dick Grune's similarity comparator, which claims to work on natural language texts as well (I've only tried it on software). The algorithms are described as well. (By the way, his book on parsing is really good, in my opinion.)
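A truly naive baseline in the spirit of the question (my own sketch, not from the papers above): tokenize both essays, form word n-grams ("shingles"), and score the overlap with the Jaccard coefficient.

```python
import re

def shingles(text, n=3):
    """Set of word n-grams from lowercase text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard similarity of the two texts' n-gram sets, in [0, 1]."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa and not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

print(round(jaccard("the quick brown fox jumps over the lazy dog",
                    "the quick brown fox leaped over the lazy dog"), 2))  # 0.4
```

Scores near 1.0 suggest near-duplicate text; a threshold would have to be tuned on real essays, and this baseline is easily fooled by paraphrasing.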
