Suppose I want to implement browser history functionality. If I visit a url for the first time it goes into the history; if I visit the same page again it comes up in the history list.
Let's say that I only display the top 20 sites, but I can choose to see history for, say, the last month, last week, and so on.
What is the best approach for this? I would use a hash map for inserting / checking whether a url was visited earlier, but how do I sort efficiently by recently visited? I don't want to use a TreeMap or TreeSet. Also, how can I store history for weeks and months? Is it written to disk when the browser closes? And when I click "clear history", how is the data structure deleted?
This is in Java-ish code.
You'll need two data structures: a hash map and a doubly linked list. The doubly linked list contains History objects (which contain a url string and a timestamp) in order sorted by timestamp; the hash map is a Map<String, History>, with urls as the key.
class History {
    History prev;
    History next;
    String url;
    long timestamp;

    History(String url, long timestamp) {
        this.url = url;
        this.timestamp = timestamp;
    }

    // Unlink this node from the doubly linked list in O(1)
    void remove() {
        if (prev != null) prev.next = next;
        if (next != null) next.prev = prev;
        next = null;
        prev = null;
    }
}
When you add a url to the history, check to see if it's in the hash map; if it is then update its timestamp, remove it from the linked list, and add it to the end of the linked list. If it's not in the hash map then add it to the hash map and also add it to the end of the linked list. Adding a url (whether or not it's already in the hash map) is a constant time operation.
class Main {
    History first = new History("", 0); // sentinel head of the linked list
    History last = first;               // most recently visited entry (tail)
    HashMap<String, History> map = new HashMap<>();

    void add(String url) {
        History hist = map.get(url);
        if (hist != null) {
            if (hist == last) last = hist.prev; // keep 'last' valid before unlinking
            hist.remove();
            hist.timestamp = System.currentTimeMillis();
        } else {
            hist = new History(url, System.currentTimeMillis());
            map.put(url, hist);
        }
        // Append as the most recent entry
        last.next = hist;
        hist.prev = last;
        last = hist;
    }
}
To get the history from e.g. the last week, traverse the linked list backwards until you hit the correct timestamp.
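For instance, a minimal sketch of that backwards traversal (a hypothetical helper on the Main class above, not part of the original answer):
// Walk backwards from 'last', collecting every entry newer than 'cutoff' (most recent first)
java.util.List<String> historySince(long cutoff) {
    java.util.List<String> result = new java.util.ArrayList<>();
    for (History cur = last; cur != null && cur != first && cur.timestamp >= cutoff; cur = cur.prev) {
        result.add(cur.url);
    }
    return result;
}

// Usage: everything visited in the last 7 days
// List<String> lastWeek = history.historySince(System.currentTimeMillis() - 7L * 24 * 60 * 60 * 1000);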
If thread-safety is a concern, then use a thread-safe queue for urls to be added to the history, and use a single thread to process this queue; this way your map and linked list don't need to be thread-safe i.e. you don't need to worry about locks etc.
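A rough sketch of that single-writer setup (the class and method names are illustrative; it assumes the Main class above):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class HistoryWriter {
    private final Main history = new Main();
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    HistoryWriter() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    history.add(pending.take()); // only this thread ever mutates the map and list
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    void visit(String url) { // safe to call from any thread
        pending.offer(url);
    }
}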
For persistence you can serialize / deserialize the linked list; when you deserialize the linked list, reconstruct the hash map by traversing it and adding its elements to the map. Then to clear the history you'd null the list and map in memory and delete the file you serialized the data to.
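For example, a rough sketch of the rebuild and clear steps (assuming History is made Serializable and the list was written to a file; the helper names are made up for illustration):
// Rebuild the hash map after deserializing the linked list: walk it and re-index by url
void rebuildMap(Main history) {
    history.map = new java.util.HashMap<>();
    for (History cur = history.first.next; cur != null; cur = cur.next) {
        history.map.put(cur.url, cur);
    }
}

// Clear history: drop the in-memory structures and delete the file the data was serialized to
void clearHistory(Main history, java.io.File persisted) {
    history.first.next = null;
    history.last = history.first;
    history.map.clear();
    persisted.delete();
}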
A more efficient solution in terms of memory consumption and IO (i.e. (de)serialization cost) is to use a serverless database like SQLite; this way you won't need to keep the history in memory, and if you want to get the history from e.g. the last week you'd just need to query the database rather than traversing the linked list. However, SQLite is essentially a treemap (specifically a B-Tree, which is optimized for data stored on disk).
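As a sketch only (this assumes a SQLite JDBC driver such as sqlite-jdbc on the classpath; the table and column names are made up for illustration):
import java.sql.*;

class HistoryDb {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:history.db");
             Statement st = conn.createStatement()) {
            // One row per url, keyed by url, with an index on the visit time
            st.execute("CREATE TABLE IF NOT EXISTS history (url TEXT PRIMARY KEY, visited_at INTEGER)");
            st.execute("CREATE INDEX IF NOT EXISTS idx_visited ON history (visited_at)");

            // "Last week" becomes a range query instead of a list traversal
            long weekAgo = System.currentTimeMillis() - 7L * 24 * 60 * 60 * 1000;
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT url FROM history WHERE visited_at >= ? ORDER BY visited_at DESC")) {
                ps.setLong(1, weekAgo);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) System.out.println(rs.getString("url"));
                }
            }
        }
    }
}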
Here is a Swift 4.0 implementation based on Zim-Zam O'Pootertoot's answer, including an iterator for traversing the history:
import Foundation
class SearchHistory: Sequence {
var first: SearchHistoryItem
var last: SearchHistoryItem
var map = [String: SearchHistoryItem]()
var count = 0
var limit: Int
init(limit: Int) {
first = SearchHistoryItem(name: "")
last = first
self.limit = Swift.max(limit, 2)
}
func add(name: String) {
var item: SearchHistoryItem! = map[name]
if item != nil {
if item.name == last.name {
last = last.prev!
}
item.remove()
item.timestamp = Date()
} else {
item = SearchHistoryItem(name: name)
count += 1
map[name] = item
if count > limit {
// Evict the oldest entry and also drop it from the map so the map can't grow without bound
let oldest = first.next!
map.removeValue(forKey: oldest.name)
oldest.remove()
count -= 1
}
}
last.next = item
item.prev = last
last = item
}
func makeIterator() -> SearchHistory.SearchHistoryIterator {
return SearchHistoryIterator(item: last)
}
struct SearchHistoryIterator: IteratorProtocol {
var currentItem: SearchHistoryItem
init(item: SearchHistoryItem) {
currentItem = item
}
mutating func next() -> SearchHistoryItem? {
var item: SearchHistoryItem? = nil
if let prev = currentItem.prev {
item = currentItem
currentItem = prev
}
return item
}
}
}
class SearchHistoryItem {
var prev: SearchHistoryItem?
var next: SearchHistoryItem?
var name: String
var timestamp: Date
init(name: String) {
self.name = name
timestamp = Date()
}
func remove() {
prev?.next = next
next?.prev = prev
next = nil
prev = nil
}
}
Related
I am working on a compiler and one aspect currently is how to wait for interpolated variable names to be resolved. So I am wondering how to take a nested interpolated variable string and build some sort of simple data model/schema for unwrapping the evaluated string so to speak. Let me demonstrate.
Say we have a string like this:
foo{a{x}-{y}}-{baz{one}-{two}}-foo{c}
That has 1, 2, and 3 levels of nested interpolations in it. So essentially it should resolve something like this:
wait for x, y, one, two, and c to resolve.
when both x and y resolve, then resolve a{x}-{y} immediately.
when both one and two resolve, resolve baz{one}-{two}.
when a{x}-{y}, baz{one}-{two}, and c all resolve, then finally resolve the whole expression.
I am shaky on my understanding of the logic flow for handling something like this, wondering if you could help solidify/clarify the general algorithm (high level pseudocode or something like that). Mainly just looking for how I would structure the data model and algorithm so as to progressively evaluate when the pieces are ready.
I'm starting out with something like this, but it's not clear what to do next:
{
dependencies: [
{
path: [x]
},
{
path: [y]
}
],
parent: {
dependency: a{x}-{y} // interpolated term
parent: {
dependencies: [
{
}
]
}
}
}
Some sort of tree is probably necessary, but I am having trouble figuring out what it might look like, wondering if you could shed some light on that with some pseudocode (or JavaScript even).
Watch the leaf nodes at first.
Then, when the children of a node are completed, propagate upward, resolving the next parent node. This would mean once x and y are done, it could resolve a{x}-{y}, but then wait until the other nodes are ready before doing the final top-level evaluation.
Theoretically you could just simulate it by sending "events" to the system, like:
ready('y')
ready('c')
ready('x')
ready('a{x}-{y}')
function ready(variable) {
if ()
}
...actually that may not work; I'm not sure how to handle the interpolated nodes in a hacky way like that. But even a high-level description of how to solve this would be helpful.
export type SiteDependencyObserverParentType = {
observer: SiteDependencyObserverType
remaining: number
}
export type SiteDependencyObserverType = {
children: Array<SiteDependencyObserverType>
node: LinkNodeType
parent?: SiteDependencyObserverParentType
path: Array<string>
}
(What I'm currently thinking, some TypeScript)
Here is an approach in JavaScript:
Parse the input string to create a Node instance for each {} term, and create parent-child dependencies between the nodes.
Collect the leaf Nodes of this tree as the tree is being constructed: group these leaf nodes by their identifier. Note that the same identifier could occur multiple times in the input string, leading to multiple Nodes. If a variable x is resolved, then all Nodes with that name (the group) will be resolved.
Each node has a resolve method to set its final value.
Each node has a notify method that any of its child nodes can call in order to notify it that the child has been resolved with a value. This may (or may not yet) lead to a cascading call of resolve.
In a demo, a timer is set up that at every tick will resolve a randomly picked variable to some number.
I think that in your example, foo and a might be functions that need to be called, but I didn't elaborate on that, and just considered them as literal text that does not need further treatment. It should not be difficult to extend the algorithm with such function-calling features.
class Node {
constructor(parent) {
this.source = ""; // The slice of the input string that maps to this node
this.texts = []; // Literal text that's not part of interpolation
this.children = []; // Node instances corresponding to interpolation
this.parent = parent; // Link to parent that should get notified when this node resolves
this.value = undefined; // Not yet resolved
}
isResolved() {
return this.value !== undefined;
}
resolve(value) {
if (this.isResolved()) return; // A node is not allowed to resolve twice: ignore
console.log(`Resolving "${this.source}" to "${value}"`);
this.value = value;
if (this.parent) this.parent.notify();
}
notify() {
// Check if all dependencies have been resolved
let value = "";
for (let i = 0; i < this.children.length; i++) {
const child = this.children[i];
if (!child.isResolved()) { // Not ready yet
console.log(`"${this.source}" is getting notified, but not all dependecies are ready yet`);
return;
}
value += this.texts[i] + child.value;
}
console.log(`"${this.source}" is getting notified, and all dependecies are ready:`);
this.resolve(value + this.texts.at(-1));
}
}
function makeTree(s) {
const leaves = {}; // nodes keyed by atomic names (like "x" "y" in the example)
const tokens = s.split(/([{}])/);
let i = 0; // Index in s
function dfs(parent=null) {
const node = new Node(parent);
const start = i;
while (tokens.length) {
const token = tokens.shift();
i += token.length;
if (token == "}") break;
if (token == "{") {
node.children.push(dfs(node));
} else {
node.texts.push(token);
}
}
node.source = s.slice(start, i - (tokens.length ? 1 : 0));
if (node.children.length == 0) { // It's a leaf
const label = node.texts[0];
leaves[label] ??= []; // Define as empty array if not yet defined
leaves[label].push(node);
}
return node;
}
dfs();
return leaves;
}
// ------------------- DEMO --------------------
let s = "foo{a{x}-{y}}-{baz{one}-{two}}-foo{c}";
const leaves = makeTree(s);
// Create a random order in which to resolve the atomic variables:
function shuffle(array) {
for (var i = array.length - 1; i > 0; i--) {
var j = Math.floor(Math.random() * (i + 1));
[array[j], array[i]] = [array[i], array[j]];
}
return array;
}
const names = shuffle(Object.keys(leaves));
// Use a timer to resolve the variables one by one in the given random order
let index = 0;
function resolveRandomVariable() {
if (index >= names.length) return; // all done
console.log("\n---------------- timer tick --------------");
const name = names[index++];
console.log(`Variable ${name} gets a value: "${index}". Calling resolve() on the connected node instance(s):`);
for (const node of leaves[name]) node.resolve(index);
setTimeout(resolveRandomVariable, 1000);
}
setTimeout(resolveRandomVariable, 1000);
Your idea of building a dependency tree is really likeable.
Anyway, I tried to find a solution that is as simple as possible.
Even if it already works, there are many possible optimizations; take this just as a proof of concept.
The background idea is to produce a List of Strings which you can read in order, where each element is what you need to resolve next. Each element may be required to resolve something that comes later in the List, and hence the overall expression. Once you have resolved all the chunks, you have all the pieces to resolve your original expression.
It's written in Java; I hope it's understandable.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Objects;
public class StackOverflow {
public static void main(String[] args) {
String exp = "foo{a{x}-{y}}-{baz{one}-{two}}-foo{c}";
List<String> chunks = expToChunks(exp);
//it just reverses the order of the list
Collections.reverse(chunks);
System.out.println(chunks);
//output -> [c, two, one, baz{one}-{two}, y, x, a{x}-{y}]
}
public static List<String> expToChunks(String exp) {
List<String> chunks = new ArrayList<>();
//this first piece just finds the first open parenthesis and its matching closing parenthesis
int begin = exp.indexOf("{") + 1;
int numberOfParenthesys = 1;
int end = -1;
for(int i = begin; i < exp.length(); i++) {
char c = exp.charAt(i);
if (c == '{') numberOfParenthesys ++;
if (c == '}') numberOfParenthesys --;
if (numberOfParenthesys == 0) {
end = i;
break;
}
}
//this if puts an end to the recursive calls
if(begin > 0 && begin < exp.length() && end > 0) {
//add the chunk to the final list
String substring = exp.substring(begin, end);
chunks.add(substring);
//remove the already considered chunk from the starting expression
String newExp = exp.replace("{" + substring + "}", "");
//recursive call for the inner elements of the chunk just found
chunks.addAll(Objects.requireNonNull(expToChunks(substring)));
//calculate the other chunks on the remaining expression
chunks.addAll(Objects.requireNonNull(expToChunks(newExp)));
}
return chunks;
}
}
Some details on the code:
The following piece finds the begin and end indexes of the first outer chunk of the expression. The background idea is: in a valid expression the number of open parentheses must equal the number of closing parentheses, and the running count of open (+1) and close (-1) parentheses can never be negative.
So, using that simple loop, once the count of parentheses reaches 0 I have also found the first chunk of the expression.
int begin = exp.indexOf("{") + 1;
int numberOfParenthesys = 1;
int end = -1;
for(int i = begin; i < exp.length(); i++) {
char c = exp.charAt(i);
if (c == '{') numberOfParenthesys ++;
if (c == '}') numberOfParenthesys --;
if (numberOfParenthesys == 0) {
end = i;
break;
}
}
The if condition validates the begin and end indexes and stops the recursion when no more chunks can be found in the remaining expression.
if(begin > 0 && begin < exp.length() && end > 0) {
...
}
2->7->8->11
|
13->16->17->21
|
22->23->27->29
|
30->32
A sorted linked list is given as above, where each node has 2 pointers, next and down. For each row, the starting node's down pointer points to the start of the next row. Each row has 4 elements, except the last one, which can have <= 4 elements. The next row's start element is greater than the previous row's end element. We need to design and code an insert of a new value at the correct place and a delete operation. I could not solve this problem.
The structure representation and pseudocode for the add operation are as follows.
And we can implement the delete recursively, using the add code as an example.
typedef struct sibling{
int data;
struct sibling *nxt;
} t_sibling;
typedef struct children {
struct sibling *sibling;
struct children *nxt;
} t_children;
add_element(t_children **head, int newdata)
{
t_children *walk_down = *head;
t_children *parent = NULL;
while (walk_down != NULL) {
if(parent == NULL && Compare newdata < head of current walk_down->sibling) {
// Code comes here when we add 1 to above mentioned list example
newdata is added at the beginning (head) of walk_down->sibling
sibling_list_count++;
if (sibling_list_count > 4) {
taildata = delete_end from tail of walk_down->sibling
add_element(&walk_down, taildata)
}
break;
}
else if(newdata < head of current walk_down->sibling) {
if (Compare newdata > tail of parent sibling) {
// Code comes here when we add 12 to above mentioned list
newdata is added at the beginning (head) of walk_down->sibling
if (sibling_list_count > 4) {
taildata = delete_end from tail of walk_down->sibling
add_element(&walk_down, taildata)
}
}
else {
// Code comes here when we add 6 to above mentioned list
newdata is added at the appropriate location in the parent's sibling list
Since the above step disturbs the <= 4 property, we
taildata = delete_end from tail of parent->sibling
add_element(&walk_down, taildata)
}
break;
}
parent = walk_down;
walk_down = walk_down->nxt;
}
}
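For what it's worth, here is a simplified Java sketch of the same idea (it uses standard collections instead of hand-rolled next/down pointers, so treat it as an illustration of the cascade rather than a drop-in answer): each row holds at most 4 sorted values, an insert that overflows a row pushes the row's largest value down into the next row, and a delete backfills from the rows below.
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

class RowedList {
    static final int MAX = 4;
    private final List<LinkedList<Integer>> rows = new ArrayList<>();

    void insert(int value) {
        if (rows.isEmpty()) rows.add(new LinkedList<>());
        int r = 0;
        // Find the row whose range should contain the value (or fall through to the last row)
        while (r < rows.size() - 1 && value > rows.get(r).getLast()) r++;
        insertIntoRow(r, value);
    }

    private void insertIntoRow(int r, int value) {
        if (r == rows.size()) rows.add(new LinkedList<>());
        LinkedList<Integer> row = rows.get(r);
        int pos = 0;
        while (pos < row.size() && row.get(pos) < value) pos++;
        row.add(pos, value);
        if (row.size() > MAX) {
            // Overflow: push this row's largest element down into the next row
            insertIntoRow(r + 1, row.removeLast());
        }
    }

    void delete(int value) {
        for (int r = 0; r < rows.size(); r++) {
            if (rows.get(r).remove(Integer.valueOf(value))) {
                // Backfill from the rows below so every row except the last keeps 4 elements
                while (r < rows.size() - 1 && !rows.get(r + 1).isEmpty()) {
                    rows.get(r).addLast(rows.get(r + 1).removeFirst());
                    r++;
                }
                if (rows.get(rows.size() - 1).isEmpty()) rows.remove(rows.size() - 1);
                return;
            }
        }
    }
}
With the example list above, insert(1) puts 1 at the head of the first row and cascades 11, 21 and 29 down one row each, which matches what the pseudocode describes for adding 1.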
Problem
The input data has 2 types of records; let's call them R and W.
I need to traverse this data in sequence from top to bottom in such a way that if the current record is of type W, it has to be merged into a map (let's call it workMap). If the key of that W-type record is already present in the map, the value of this record is added to it; otherwise a new entry is made in workMap.
If the current record is of type R, the workMap calculated up to this record is attached to the current record.
For example, if this is the order of records -
W1- a -> 2
W2- b -> 3
W3- a -> 4
R1
W4- c -> 1
R2
W5- c -> 4
Where W1, W2, W3, W4 and W5 are of type W; And R1 and R2 are of type R
At the end of this function, I should have the following -
R1 - { a -> 6,
b -> 3 } //merged(W1, W2, W3)
R2 - { a -> 6,
b -> 3,
c -> 1 } //merged(W1, W2, W3, W4)
{ a -> 6,
b -> 3,
c -> 5 } //merged(W1, W2, W3, W4, W5)
I want all the R-type records attached to the intermediate workMaps calculated up to that point, and the final workMap after the last record has been processed.
Here is the code that I have written -
def calcPerPartition(itr: Iterator[(InputKey, InputVal)]):
Iterator[(ReportKey, ReportVal)] = {
val workMap = mutable.HashMap.empty[WorkKey, WorkVal]
val reportList = mutable.ArrayBuffer.empty[(ReportKey, ReportVal)]
while (itr.hasNext) {
val temp = itr.next()
val (iKey, iVal) = (temp._1, temp._2)
if (iKey.recordType == reportType) {
//creates a new (ReportKey, ReportVal)
reportList += getNewReportRecord(workMap, iKey, iVal)
}
else {
//if iKey is already present, merge the values
//other wise adds a new entry
updateWorkMap(workMap, iKey, iVal)
}
}
val workList: Seq[(ReportKey, ReportVal)] = workMap.toList.map(convertToReport)
reportList.iterator ++ workList.iterator
}
ReportKey class is like this -
case class ReportKey (
// the type of record - report or work
rType: Int,
date: String,
.....
)
There are two problems with this approach that I am asking for help with:
I have to keep track of a reportList - a list of R type records attached with intermediate workMaps. As the data grows, the reportList also grows and I am running into OutOfMemoryExceptions.
I have to combine reportList and workMap records in the same data structure and then return them. If there is any other elegant way, I would definitely consider changing this design.
For the sake of completeness: I am using Spark. The function calcPerPartition is passed as an argument to mapPartitions on an RDD. I need the workMaps from each partition to do some additional calculations later.
I know that if I don't have to return workMaps from each partition, the problem becomes much easier, like this -
...
val workMap = mutable.HashMap.empty[WorkKey, WorkVal]
val scan = itr.scanLeft[Option[(ReportKey, ReportVal)]](
None)((acc: Option[(ReportKey, ReportVal)],
curr: (InputKey, InputVal)) => {
if (curr._1.recordType == reportType) {
val rec = getNewReportRecord(workMap, curr._1, curr._2)
Some(rec)
}
else {
updateWorkMap(workMap, curr._1, curr._2)
None
}
})
val reportList = scan.filter(_.isDefined).map(_.get)
//workMap is still empty after the scanLeft.
...
Sure, I can do a reduce operation on the input data to derive the final workMap but I would need to look at the data twice. Considering that the input data set is huge, I want to avoid that too.
But unfortunately I need the workMaps at a later step.
So, is there a better way to solve the above problem? If I can't solve problem 2 at all (according to this), is there any other way I can avoid storing R records (reportList) in a list or scanning the data more than once?
I don't yet have a better design for the second question - that is, whether you can avoid combining reportList and workMap into a single data structure - but we can certainly avoid storing R type records in a list.
Here is how we can re-write the calcPerPartition from the above question -
def calcPerPartition(itr: Iterator[(InputKey, InputVal)]):
Iterator[Option[(ReportKey, ReportVal)]] = {
val workMap = mutable.HashMap.empty[WorkKey, WorkVal]
var finalWorkMap = true
new Iterator[Option[(ReportKey, ReportVal)]](){
override def hasNext: Boolean = itr.hasNext
override def next(): Option[(ReportKey, ReportVal)] = {
val curr = itr.next()
val iKey = curr._1
val iVal = curr._2
if (iKey.recordType == reportType) {
Some(getNewReportRecord(workMap, iKey, iVal))
}
else {
//otherwise update the workMap but don't accumulate anything
updateWorkMap(workMap, iKey, iVal)
if (itr.hasNext) {
next()
}
else {
if(finalWorkMap){
finalWorkMap = false //because we want a final only once
Some(workMap.map(convertToReport))
}
else {
None
}
}
}
}
}
}
Instead of storing results in a list, we defined an iterator. That solved most of the memory issues we had with this approach.
I'm trying to loop through every row of a variable-length table on a webpage (http://www.oddschecker.com/golf/the-masters/winner) and extract some data.
The problem is I can't seem to catch the null reference and terminate the loop without it throwing an exception!
int i = 1;
bool test = string.IsNullOrEmpty(doc.DocumentNode.SelectNodes(String.Format("//*[#id='t1']/tr[{0}]/td[3]/a[2]", i))[0].InnerText);
while (test != true)
{
string name = doc.DocumentNode.SelectNodes(String.Format("//*[#id='t1']/tr[{0}]/td[3]/a[2]", i))[0].InnerText;
//extract data
i++;
}
try-catch statements don't catch it either:
bool test = false;
try
{
string golfersName = doc.DocumentNode.SelectNodes(String.Format("//*[#id='t1']/tr[{0}]/td[3]/a[2]", i))[0].InnerText;
}
catch
{
test = true;
}
while (test != true)
{
...
The code logic is a bit off. With the original code, if test initially evaluates to false the loop will never terminate, because test is never updated inside the loop. It seems that you want to do the check on every loop iteration instead of only once at the beginning.
Anyway, there is a better way around this. You can select all the relevant nodes without specifying each <tr> index, and use foreach to loop through the node set:
var nodes = doc.DocumentNode.SelectNodes("//*[#id='t1']/tr/td[3]/a[2]");
foreach(HtmlNode node in nodes)
{
string name = node.InnerText;
//extract data
}
or use a for loop instead of foreach, if the index of each node is necessary for the "extract data" process:
for(i=1; i<=nodes.Count; i++)
{
//array index starts from 0, unlike XPath element index
string name = nodes[i-1].InnerText;
//extract data
}
Side note: to query a single element you can use SelectSingleNode("...") instead of SelectNodes("...")[0]. Both methods return null if no nodes match the XPath criteria, so you can check the returned value itself instead of the InnerText property to avoid the exception:
var node = doc.DocumentNode.SelectSingleNode("...");
if(node != null)
{
//do something
}
I just started a project for my Java 2 class and I've come to a complete stop. I just can't get my head around this method, especially since the assignment does NOT let us use any other DATA STRUCTURE or shuffle methods from Java at all.
So I have a Deck class in which I've already created a linked list containing 52 nodes that hold 52 cards.
public class Deck {
private Node theDeck;
private int numCards;
public Deck ()
{
while(numCards < 52)
{
theDeck = new Node (new Card(numCards), theDeck);
numCards++;
}
}
public void shuffleDeck()
{
int rNum;
int count = 0;
Node current = theDeck;
Card tCard;
int range = 0;
while(count != 51)
{
// Store whatever is inside the current node in a temp variable
tCard = current.getItem();
// Generate a random number between 0 -51
rNum = (int)(Math.random()* 51);
// Send current on a loop a random amount of times
for (int i=0; i < rNum; i ++)
current = current.getNext(); // ****** <-- (Btw this is the line where I'm getting my error; I sort of know why but I don't know how to stop it.)
// So wherever current landed, get the item stored in that node and store it in the first one
theDeck.setItem(current.getItem());
// Now make use of the temp variable at the beginning and store it where current landed
current.setItem(tCard);
// Send current back to the beginning of the deck
current = theDeck;
// I've created a counter for another loop i want to do
count++;
// Send current a "count" amount of times for a loop so that it doesn't shuffle the cards that have been already shuffled.
for(int i=0; i<count; i++)
current = current.getNext(); // **** <-- Not too sure about this last loop, because if I don't shuffle the cards that I've already shuffled it will not count as a legitimate shuffle? I think? **** Also, this is where I sometimes get a NullPointerException ****
}
}
}
Now I get different kinds of errors when I call this method:
It will sometimes shuffle just 2 cards, but at times it will shuffle 3-5 cards and then give me a NullPointerException. I've pointed out where it gives me this error with asterisks in my code above.
At one point I got it to shuffle 13 cards, but every time it did that it didn't quite shuffle them the right way: one card kept repeating.
At another point I got all 52 cards to go through the while loop, but again one card was repeated various times.
So I really need some input on what I'm doing wrong. Towards the end of my code I think my logic is completely wrong, but I can't seem to figure out a way around it.
Seems pretty long-winded.
I'd go with something like the following:
public void shuffleDeck() {
for(int i=0; i<52; i++) {
int card = (int) (Math.random() * (52-i));
deck.addLast(deck.remove(card));
}
}
So each card just gets moved to the back of the deck in a random order.
If you are authorized to use a secondary data structure, one way is simply to compute a random number within the number of remaining cards, select that card, and move it to the end of the secondary structure; repeat until the original list is empty, then replace your list with the secondary list.
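For example, a rough Java sketch of that secondary-structure idea (the Node type here is illustrative, not the Deck/Card classes from the question):
import java.util.Random;

class NodeShuffle {
    static class Node {
        int card;
        Node next;
        Node(int card, Node next) { this.card = card; this.next = next; }
    }

    // Repeatedly pick a random node from what remains, unlink it, and append it to a
    // second list; when the original list is empty, the second list is the shuffled deck.
    static Node shuffle(Node head, int size) {
        Random rnd = new Random();
        Node newHead = null, newTail = null;
        while (size > 0) {
            int k = rnd.nextInt(size--); // position of the card to draw
            Node prev = null, cur = head;
            for (int i = 0; i < k; i++) { prev = cur; cur = cur.next; }
            if (prev == null) head = cur.next; else prev.next = cur.next; // unlink cur
            cur.next = null;
            if (newTail == null) newHead = cur; else newTail.next = cur;  // append to the new list
            newTail = cur;
        }
        return newHead; // this list replaces the original deck
    }
}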
My implementation shuffles a linked list using a divide-and-conquer algorithm
public class LinkedListShuffle
{
public static DataStructures.Linear.LinkedListNode<T> Shuffle<T>(DataStructures.Linear.LinkedListNode<T> firstNode) where T : IComparable<T>
{
if (firstNode == null)
throw new ArgumentNullException();
if (firstNode.Next == null)
return firstNode;
var middle = GetMiddle(firstNode);
var rightNode = middle.Next;
middle.Next = null;
var mergedResult = ShuffledMerge(Shuffle(firstNode), Shuffle(rightNode));
return mergedResult;
}
private static DataStructures.Linear.LinkedListNode<T> ShuffledMerge<T>(DataStructures.Linear.LinkedListNode<T> leftNode, DataStructures.Linear.LinkedListNode<T> rightNode) where T : IComparable<T>
{
var dummyHead = new DataStructures.Linear.LinkedListNode<T>();
DataStructures.Linear.LinkedListNode<T> curNode = dummyHead;
var rnd = new Random((int)DateTime.Now.Ticks);
while (leftNode != null || rightNode != null)
{
var rndRes = rnd.Next(0, 2);
if (rndRes == 0)
{
if (leftNode != null)
{
curNode.Next = leftNode;
leftNode = leftNode.Next;
}
else
{
curNode.Next = rightNode;
rightNode = rightNode.Next;
}
}
else
{
if (rightNode != null)
{
curNode.Next = rightNode;
rightNode = rightNode.Next;
}
else
{
curNode.Next = leftNode;
leftNode = leftNode.Next;
}
}
curNode = curNode.Next;
}
return dummyHead.Next;
}
private static DataStructures.Linear.LinkedListNode<T> GetMiddle<T>(DataStructures.Linear.LinkedListNode<T> firstNode) where T : IComparable<T>
{
if (firstNode.Next == null)
return firstNode;
DataStructures.Linear.LinkedListNode<T> fast, slow;
fast = slow = firstNode;
while (fast.Next != null && fast.Next.Next != null)
{
slow = slow.Next;
fast = fast.Next.Next;
}
return slow;
}
}
Just came across this and decided to post a more concise solution which allows you to specify how much shuffling you want to do.
For the purposes of the answer, you have a linked list containing PlayingCard objects:
LinkedList<PlayingCard> deck = new LinkedList<PlayingCard>();
And to shuffle them, use something like this:
public void shuffle(Integer swaps) {
for (int i=0; i < swaps; i++) {
deck.add(deck.remove((int)(Math.random() * deck.size())));
}
}
The more swaps you do, the more randomised the list will be.
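For example, a quick usage sketch (PlayingCard and its constructor are assumed here purely for illustration):
LinkedList<PlayingCard> deck = new LinkedList<PlayingCard>();
for (int i = 0; i < 52; i++) deck.add(new PlayingCard(i)); // assumed constructor
shuffle(deck.size() * 3); // e.g. roughly three passes' worth of random moves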