Why are Linked Lists Reversed? [closed] - algorithm

Why are Linked Lists Reversed?
I have been listening to a lot of software engineers talk about reversing linked lists. What is the use of reversing a linked list? What are the advantages of doing so rather than traversing backwards?
Why are they used in technical interviews?
What is the use besides technical interviews?

As far as I know there are singly linked lists as well as doubly linked lists. When you are using a doubly linked list, every node has a pointer to both the next and the previous node, so it is easy to traverse backwards. When the list is "only linked" (singly linked), each node only has a "next" reference, so you won't be able to traverse backwards.
I hope that answers the question properly.
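For illustration, here is a minimal Go sketch of the two node shapes (the type and field names are just assumptions for the example):

package linkedlist

// Node of a singly linked list: you can only follow Next, so there is
// no cheap way to walk backwards without reversing or re-walking the list.
type Node struct {
    Value int
    Next  *Node
}

// DNode of a doubly linked list: Prev makes backward traversal trivial,
// at the cost of an extra pointer per node and more bookkeeping on insert/delete.
type DNode struct {
    Value int
    Prev  *DNode
    Next  *DNode
}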

Reversing a linked list comes up every now and then in real life.
Most recently I did that while implementing a lock-free multi-producer single-consumer work queue. Stacks are the simplest lock-free data structure to implement, and the implementation is a singly linked list, so my multiple producers would push new work onto a lock-free stack.
The single consumer can then get the whole stack at once, but then has to reverse it to make a queue of tasks in the correct order.
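As a rough sketch of that setup in Go (not the original code; all names are made up, and it assumes Go 1.19+ for atomic.Pointer): producers push onto a Treiber-style lock-free stack, and the consumer swaps the whole stack out and reverses it in place to get FIFO order.

package mpsc

import "sync/atomic"

// task is a node in an intrusive singly linked list of pending work.
type task struct {
    next *task
    work func()
}

// head is the top of the lock-free stack the producers push onto.
var head atomic.Pointer[task]

// push adds a task with a CAS loop; safe for multiple producers.
func push(t *task) {
    for {
        old := head.Load()
        t.next = old
        if head.CompareAndSwap(old, t) {
            return
        }
    }
}

// drain grabs the whole stack in one atomic step (LIFO order) and
// reverses the list in place so the single consumer sees FIFO order.
func drain() *task {
    lifo := head.Swap(nil)
    var fifo *task
    for lifo != nil {
        next := lifo.next
        lifo.next = fifo // classic in-place linked list reversal
        fifo = lifo
        lifo = next
    }
    return fifo
}

The reversal is the classic three-pointer loop; the queue order simply falls out of undoing the LIFO order the stack imposed.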
That said... you're probably asking the wrong question. The task of reversing a linked list is one of the easiest data structure manipulations you can do. If you're being taught to do it, then it's just an early lesson on the long road to really understanding programming. If you're being asked about it in an interview, then the interviewer wants to know whether you might be a programmer who can design, implement, and maintain data structures, or whether you just copy and modify answers from Stack Overflow and glue together things you don't understand.

It is very common to reverse lists that are the result of a recursive function. Consider a map function
fn map (fun, lst) =
    local fn mp (in, out) =
        if null(in)
        then out
        else mp(tail(in), fun(head(in)) :: out)
    return reverse(mp(lst, nil))
It reverses the result of the inner function to restore the result order to the order of the input list.
There are more efficient ways to do this if your language supports mutation, e.g. by keeping a pointer to the last pair in the list and appending a new pair on every call to the inner function.
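A hedged Go sketch of the same accumulate-then-reverse pattern (using a throwaway cons-cell type, not anything from the standard library):

package maplist

// cell is a minimal cons cell for the example.
type cell struct {
    head int
    tail *cell
}

// mapInto applies fun to every element of in, consing each result onto out,
// which builds the output list in reverse order.
func mapInto(fun func(int) int, in, out *cell) *cell {
    if in == nil {
        return out
    }
    return mapInto(fun, in.tail, &cell{head: fun(in.head), tail: out})
}

// reverse rebuilds the list in the opposite order, just like the reverse()
// call in the pseudocode above.
func reverse(lst *cell) *cell {
    var out *cell
    for lst != nil {
        out = &cell{head: lst.head, tail: out}
        lst = lst.tail
    }
    return out
}

// Map ties the two together: build backwards, then reverse once.
func Map(fun func(int) int, lst *cell) *cell {
    return reverse(mapInto(fun, lst, nil))
}

mapInto builds its result back to front because prepending to a singly linked list is O(1); the final reverse pays that cost back once.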

Related

What's the usual practice to avoid stack overflow in recursive algorithms? [closed]

I used to think iterative algorithms are always superior to recursive ones because of the potential stack overflow issue, as long as one doesn't mind the extra coding effort. Therefore, if I were to build a utility function to be used over and over again, I should always go with the iterative version. Is that logic correct? Or is there any standard trick to avoid stack overflow in a recursive version even for very large N? Still assume the data structure itself is not too big to fit in RAM.
Basically you're right: with common compilers, iteration is superior in both memory consumption and performance. But do not forget there are languages (usually focused on functional programming and/or AI) that are optimized for recursion, and with them recursion can be superior instead...
Anyway, if I can, I always use iteration for the reasons you mentioned. However, the recursive approach is often simpler than the iterative one, and sometimes the amount of coding required to convert it to iteration is too much... In such cases you can do these:
Limit heap/stack thrashing
Simply get rid of as many operands, return values and local variables as you can, as each recursive call makes a copy/instance of them...
You can use static variables for the locals, or even globals for the operands, but beware not to break the functionality by doing this.
You would not believe how many times I have seen an array passed by value into a recursion...
Limit recursion
If you have a multi-split recursion (one recursive layer has more than one recursive call), you can keep an internal global counter of how many recursions are currently active...
If you hit some threshold value, stop making recursive calls; instead, schedule them into some global array/list (IIRC this is called a priority queue), and once all the pending recursions have finished, process this queue until it is empty. This approach is not always applicable, though. Nice examples of this approach are flood fill and grid A* pathfinding (see the sketch after this list).
Increase heap/stack size
Executables have a header that tells the OS how much memory to allocate for their stack and heap, so just find the setting in your compiler/linker/IDE and set the value to a reasonable size.
I am sure there are more techniques, but these are the ones I use...
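As an illustration of the "schedule the pending calls instead of recursing" idea above, here is a hedged Go sketch of flood fill with an explicit work list (a plain slice used as a stack; the names are made up):

package gridfill

// point is one cell coordinate on the grid.
type point struct{ x, y int }

// floodFill recolors the connected region containing (x, y) without any
// recursion: pending cells live in an explicit slice instead of the call stack.
func floodFill(grid [][]int, x, y, newColor int) {
    if y < 0 || y >= len(grid) || x < 0 || x >= len(grid[0]) {
        return
    }
    oldColor := grid[y][x]
    if oldColor == newColor {
        return
    }
    work := []point{{x, y}} // the "scheduled recursions"
    for len(work) > 0 {
        p := work[len(work)-1]
        work = work[:len(work)-1]
        if p.y < 0 || p.y >= len(grid) || p.x < 0 || p.x >= len(grid[0]) {
            continue
        }
        if grid[p.y][p.x] != oldColor {
            continue
        }
        grid[p.y][p.x] = newColor
        work = append(work,
            point{p.x + 1, p.y}, point{p.x - 1, p.y},
            point{p.x, p.y + 1}, point{p.x, p.y - 1})
    }
}

The same shape works for grid A*, where the work list becomes a priority queue ordered by estimated cost.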

How to design/word Dynamic Programming sub problems? [closed]

I am trying out the bottom up dynamic programming method and I have a few problems with it.
I have learnt to arrive at the required solution by storing previously computed values in either a 1D or 2D array and referring to them when necessary. The problem is I am not able to backtrack using the values stored in my array.
For example, if the problem is the classic 'Longest Subsequence' problem, I can arrive at the value of the longest subsequence, but I am not able to backtrack through the stored values and find what letters/digits appear in the subsequence.
I have gone through a lot of university course tutorials and YouTube tutorials, but nobody seems to explain how a person can 'word' the subproblem correctly.
Does anybody have tips on how to craft the subproblems and maintain array values so that backtracking is possible and easy?
A simple solution is to keep a second array with the same dimensions as the first, call it your index array, and use it to track the location of the element that contributed to each choice.
So in a 2D example:
Let A be the standard dynamic programming array
Let I be the index array
If the value A[x,y] is decided by A[x0,y0], then I[x,y]=(x0,y0).
When trying to backtrack from A[i,j], access I[i,j] to find the next element in the backtrack chain.
You can use default values for the array I so you know when you have reached the end of the chain.
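A hedged Go sketch of that idea for the longest common subsequence (one plausible reading of the 'Longest Subsequence' problem above): A holds the usual DP values and I records which neighbor each cell came from, with the zero value marking the end of the chain.

package lcs

// LCS returns a longest common subsequence of a and b. A holds the usual
// DP values; I records which neighbor each cell's value came from, so the
// answer can be read back by walking I from the bottom-right corner.
func LCS(a, b string) string {
    n, m := len(a), len(b)
    A := make([][]int, n+1)
    I := make([][]byte, n+1) // 'D' diagonal, 'U' up, 'L' left; 0 marks the end of the chain
    for i := range A {
        A[i] = make([]int, m+1)
        I[i] = make([]byte, m+1)
    }
    for i := 1; i <= n; i++ {
        for j := 1; j <= m; j++ {
            switch {
            case a[i-1] == b[j-1]:
                A[i][j] = A[i-1][j-1] + 1
                I[i][j] = 'D' // value decided by A[i-1][j-1]
            case A[i-1][j] >= A[i][j-1]:
                A[i][j] = A[i-1][j]
                I[i][j] = 'U' // value decided by A[i-1][j]
            default:
                A[i][j] = A[i][j-1]
                I[i][j] = 'L' // value decided by A[i][j-1]
            }
        }
    }
    // Backtrack by following the recorded directions until a default (zero) entry.
    var out []byte
    for i, j := n, m; I[i][j] != 0; {
        switch I[i][j] {
        case 'D':
            out = append(out, a[i-1])
            i, j = i-1, j-1
        case 'U':
            i--
        default:
            j--
        }
    }
    // The letters were collected back to front, so reverse them.
    for l, r := 0, len(out)-1; l < r; l, r = l+1, r-1 {
        out[l], out[r] = out[r], out[l]
    }
    return string(out)
}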

Go: Do arrays and maps have to be different concepts/features? [closed]

Arrays in PHP work both with numeric keys and string keys. Which is awesome.
Ex:
$array[0] = "My value.";
or
$array['key'] = "My value";
Why doesn't Go implement arrays like this?
What's the benefit for having two different concepts and syntax (maps) in Go?
I believe I'm failing to see the usefulness behind this.
Go is not PHP. While a few higher-level languages share this abstraction, it's not very common. Arrays and maps are different data structures for different purposes.
PHP's arrays are actually hash tables underneath. Go has true arrays, and it has slices, which are a more powerful abstraction over arrays.
Having real arrays gives you predictable memory layouts and true O(1) indexing (the same goes for Go's slices, which use an array internally). Using a hash map as the underlying data store costs you a constant overhead on every operation, and you also give up control over data locality.
One of the primary reasons is that arrays have order and maps do not, which has important implications, as stated here:
When iterating over a map with a range loop, the iteration order is not specified and is not guaranteed to be the same from one iteration to the next. If you require a stable iteration order you must maintain a separate data structure that specifies that order.
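A small runnable Go example of that last point, keeping a separate, sorted slice of keys to get a stable order (the data is made up):

package main

import (
    "fmt"
    "sort"
)

func main() {
    m := map[string]int{"b": 2, "a": 1, "c": 3}

    // Ranging over the map directly visits keys in an unspecified order.
    for k, v := range m {
        fmt.Println("unordered:", k, v)
    }

    // A separate slice of keys supplies the stable order the map lacks.
    keys := make([]string, 0, len(m))
    for k := range m {
        keys = append(keys, k)
    }
    sort.Strings(keys)
    for _, k := range keys {
        fmt.Println("ordered:", k, m[k])
    }
}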

Challenging Problems Involving Stacks (Data Structures) [closed]

Does anyone know where I can get lots of challenging programming problems that involve the use of stacks? I'd like to challenge myself with super hard problems.
My favorites:
Convert a prefix expression to postfix using a stack
Implement a stack with O(1) access to the max element
Implement Tower of Hanoi without recursion. :)
Take any recursive program, one that is recursive by intuition, and try to implement it iteratively. Every recursive solution has an iterative counterpart.
That should keep you busy for a while. :)
BTW, why don't you buy/borrow "Data Structures Using C and C++" by Langsam/Tenenbaum? You will get enough problems for a week. :)
A Google search for stack data structure problems turns up lots of interesting stuff. Here are a few interesting ones (from easier to harder, in my judgment):
Check for balanced parentheses, braces, and other paired delimiters in text.
Write a postfix calculator. (So 1 3 2 4 + * - should calculate 1 - (3 * (2+4)); a sketch follows this list.)
Use stacks to solve the "N queens" problem. (Place N queens on an N x N chess board so that no two queens are on the same row, column, or diagonal.)
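For the postfix calculator above, a hedged Go sketch (integers only, minimal error handling) might look like this:

package main

import (
    "fmt"
    "strconv"
    "strings"
)

// evalPostfix evaluates an integer postfix expression such as "1 3 2 4 + * -"
// with a slice used as the operand stack.
func evalPostfix(expr string) (int, error) {
    var stack []int
    for _, tok := range strings.Fields(expr) {
        switch tok {
        case "+", "-", "*", "/":
            if len(stack) < 2 {
                return 0, fmt.Errorf("not enough operands for %q", tok)
            }
            b, a := stack[len(stack)-1], stack[len(stack)-2]
            stack = stack[:len(stack)-2]
            var r int
            switch tok {
            case "+":
                r = a + b
            case "-":
                r = a - b
            case "*":
                r = a * b
            case "/":
                r = a / b
            }
            stack = append(stack, r)
        default:
            n, err := strconv.Atoi(tok)
            if err != nil {
                return 0, err
            }
            stack = append(stack, n)
        }
    }
    if len(stack) != 1 {
        return 0, fmt.Errorf("malformed expression")
    }
    return stack[0], nil
}

func main() {
    fmt.Println(evalPostfix("1 3 2 4 + * -")) // -17 <nil>, i.e. 1 - (3 * (2+4))
}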
Cracking the Coding Interview, Fourth Edition: 150 Programming Interview Questions and Solutions:
http://www.amazon.com/Cracking-Coding-Interview-Fourth-Programming/dp/145157827X
Chapter 3 is dedicated to stacks and queues questions. Here is a sample question:
How would you design a stack which, in addition to push and pop, also has a function min which returns the minimum element? Push, pop and min should all operate in O(1) time.
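One common way to meet that requirement, sketched here in Go as an assumption rather than the book's own solution, is to keep a parallel stack of running minimums:

package minstack

// MinStack supports Push, Pop and Min in O(1) by keeping a parallel stack
// of the minimum value seen at each depth.
type MinStack struct {
    data []int
    mins []int
}

func (s *MinStack) Push(x int) {
    s.data = append(s.data, x)
    if len(s.mins) == 0 || x < s.mins[len(s.mins)-1] {
        s.mins = append(s.mins, x)
    } else {
        s.mins = append(s.mins, s.mins[len(s.mins)-1])
    }
}

// Pop assumes the stack is non-empty.
func (s *MinStack) Pop() int {
    x := s.data[len(s.data)-1]
    s.data = s.data[:len(s.data)-1]
    s.mins = s.mins[:len(s.mins)-1]
    return x
}

// Min returns the smallest element currently on the stack (stack must be non-empty).
func (s *MinStack) Min() int {
    return s.mins[len(s.mins)-1]
}

Each entry in mins records the minimum of everything at or below that depth, so Min is a single lookup and Pop only drops one element from each slice.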
You could learn a stack-based language, like Forth, or a stack virtual machine, like MS's CIL VM. Either one will force you to reconsider how you implement anything you write in it to make use of stacks.
Go to the library and look at textbooks on data structures. They should all be shelved together, so you can always just find the call number for your current textbook and the others will be right nearby.
I'd expect most would have chapters specifically on stacks, with end-of-chapter problems. There will be a lot of overlap, but you should be able to get a good selection of problems of varying difficulty, along with a range of different explanations for standard problems.

Joining very large lists [closed]

Let's put some numbers first:
The largest of the lists is about 100M records (but is expected to grow up to 500M). The other lists (5-6 of them) are in the millions but would be less than 100M for the foreseeable future.
These are always joined on a single id, and never on any other parameters.
What's the best algorithm to join such lists?
I was thinking along the lines of distributed computing: have a good hash function (the consistent-hashing kind, where you can add a node without a lot of data movement) and split these lists into several smaller files. Since they are always joined on the common id (which I will be hashing), it would boil down to joining small files. And maybe use the *nix join command for that.
A DB (at least MySQL) would join using a merge join (since it would be on the primary key). Is that going to be more efficient than my approach?
I know it's best to test and see, but given the magnitude of these files that is pretty time-consuming, and I would like to do some theoretical calculation first and then see how it fares in practice.
Any insights on this or other ideas would be helpful. I don't mind if it takes slightly longer, but I would prefer the best utilization of the resources I have. I don't have a huge budget :)
Use a database. They are designed for performing joins (with the right indexes, of course!).
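For comparison with the database route, the merge join the question mentions is straightforward to sketch once both inputs are sorted by id; this is a hedged Go illustration with a made-up record type, not a recommendation over a real DB:

package joindemo

// record stands in for one row of either list; Payload is whatever else
// the list carries besides the join key.
type record struct {
    ID      int64
    Payload string
}

// mergeJoin joins two slices that are already sorted by ID, calling emit
// once per matching pair. It makes a single pass over each input.
func mergeJoin(a, b []record, emit func(x, y record)) {
    i, j := 0, 0
    for i < len(a) && j < len(b) {
        switch {
        case a[i].ID < b[j].ID:
            i++
        case a[i].ID > b[j].ID:
            j++
        default:
            id := a[i].ID
            // Find the run of equal IDs on each side.
            iEnd, jEnd := i, j
            for iEnd < len(a) && a[iEnd].ID == id {
                iEnd++
            }
            for jEnd < len(b) && b[jEnd].ID == id {
                jEnd++
            }
            // Emit the cross product of the two runs, then skip past them.
            for x := i; x < iEnd; x++ {
                for y := j; y < jEnd; y++ {
                    emit(a[x], b[y])
                }
            }
            i, j = iEnd, jEnd
        }
    }
}

If id is unique on both sides, each run has length one and this degenerates to a plain two-pointer merge over the two sorted lists.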
