What method of shuffling do online blackjack games use? Do they keep a "deck" of cards representation in code, or simply treat each hand as a completely new pick from the set of cards at random?
It depends a lot on who the manufacturer of the game is and what programming language they use. Imagine that each time you create a brand-new deck of card objects: even then you still need to shuffle it. A good shuffling algorithm matters a great deal, and a skilled manufacturer will use something like the Fisher-Yates shuffle:
var rand = new Random();
// Fisher-Yates shuffle: walk backwards through the deck, swapping each card
// with a uniformly random card at or before its position.
for (int i = cards.Length - 1; i > 0; i--) {
    int n = rand.Next(i + 1);
    int temp = cards[i];
    cards[i] = cards[n];
    cards[n] = temp;
}
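As for the other half of the question: dealing each card as a fresh random pick from the cards not yet seen is statistically equivalent to dealing from a pre-shuffled deck, so either representation works. Here is a minimal sketch of that approach (my own illustration, not any particular vendor's code); a real-money game would also use a cryptographically secure RNG rather than Random.

using System;
using System.Collections.Generic;
using System.Linq;

class RandomPickDealer
{
    // Cards 0..51 encode rank and suit; only the undealt ones are kept.
    private readonly List<int> remaining = Enumerable.Range(0, 52).ToList();
    private readonly Random rand = new Random();

    public int Deal()
    {
        // Pick a uniformly random card from whatever is left and remove it.
        int n = rand.Next(remaining.Count);
        int card = remaining[n];
        remaining.RemoveAt(n);
        return card;
    }
}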
I am looking at this related discussion. In its Step 4, the following "Iterative + memo (bottom-up)" solution is given:
public int rob(int[] nums) {
    if (nums.length == 0) return 0;
    int[] memo = new int[nums.length + 1];
    memo[0] = 0;
    memo[1] = nums[0];
    for (int i = 1; i < nums.length; i++) {
        int val = nums[i];
        memo[i + 1] = Math.max(memo[i], memo[i - 1] + val);
    }
    return memo[nums.length];
}
Questions:
Does memo[i] mean robbing the non-current house?
Does memo[i-1] + val mean robbing the current house?
I am reading Step 3 and found this: memo[i] = Math.max(rob(nums, i - 2) + nums[i], rob(nums, i - 1));. I am a bit confused by memo[i + 1] = Math.max(memo[i], memo[i - 1] + val);.
memo isn't storing any particular decision per se. Rather, it stores the best solution up to the ith element, which is the heart of a dynamic program. Since the result is assigned to memo[i + 1], the choice between memo[i] and memo[i - 1] + val can be read as: either take the best total up to the (i-1)th house (i.e. without robbing the ith), or, if we do rob the ith house, we have to skip one, so we add that robbery to the best total up to the (i-2)th house. (Notice that memo[i + 1] is the best up to the ith house, so memo[i - 1] is the best up to the (i-2)th house. The one-index offset is there for the convenience of handling the first house, which is an edge case.)
That should also give you a hint as to how to understand the top-down recursion - again, if we were to choose to rob, we need to add that to the best solution that skips over the previous house.
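If it helps to see that spelled out, here is a rough C# sketch of the same idea top-down (my own illustration, with a made-up helper name, not the code from the linked discussion): best(i) is the most money obtainable from houses 0..i, and memo only caches those answers.

// Sketch only: top-down version of the step-3 recursion with memoization.
public int Rob(int[] nums)
{
    var memo = new int[nums.Length];
    for (int i = 0; i < memo.Length; i++) memo[i] = -1; // -1 means "not computed yet"
    return Best(nums, nums.Length - 1, memo);
}

private int Best(int[] nums, int i, int[] memo)
{
    if (i < 0) return 0;                 // no houses left
    if (memo[i] != -1) return memo[i];   // cached answer
    // Either skip house i, or rob it and add the best total up to house i - 2.
    memo[i] = Math.Max(Best(nums, i - 1, memo),
                       Best(nums, i - 2, memo) + nums[i]);
    return memo[i];
}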
I'm working on an implementation of Ukkonen's linear time suffix tree construction algorithm, and planning to implement improvements suggested by e.g. Kurtz and NJ Larsson (for example edge links instead of suffix links).
While testing, I got a mix of green and red lights depending on the specific strings I tested, and had similar experiences with a few implementations I found online, which made me wonder:
Are there any known, specifically built (preferably simple/short) strings for unit-testing suffix trees to ensure the algorithm works precisely in all branching scenarios?
Furthermore, are there any good methods to separate the testing of the tree building algorithm from the testing of the traversal/lookup algorithm?
I know this question doesn't have a single specific correct answer, but I think it could serve as a good reference point for people working on similar algorithms.
My current unit-testing approach is quite primitive (C# with NUnit):
[TestCase]
public void Contains_Simple_ShouldReturnTrue()
{
    var s = "bananasbanananananananananabananas";
    var st = SuffixTree.Build(s);
    var t1 = s.Substring(0, 10);
    Assert.IsTrue(st.Contains(t1));
}

// ... Other simple test cases

[TestCase]
// This test fails, but it's not particularly helpful for bugfixing
public void Contains_DynamicBarrage_OnLongString_ShouldReturnTrue()
{
    const int CYCLES = 200,
              MAXLEN = 200;
    var s = "olbafuynhfcxzqhnebecxjrfwfttw"; // Shortened for sanity
    var st = SuffixTree.Build(s);
    var r = new Random();
    for (int i = 0; i < CYCLES; i++)
    {
        var pos = r.Next(0, s.Length - 2);
        var len = r.Next(1, Math.Min(s.Length - pos, MAXLEN));
        Assert.IsTrue(st.Contains(s.Substring(pos, len)));
    }
}
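One idea I'm considering (a sketch against my own SuffixTree API, with arbitrarily chosen short, repetitive strings, and needing using System.Linq;): treat string.Contains as a slow-but-trusted oracle. This doesn't fully separate construction from lookup, but a failure at least names the exact offending substring instead of just going red.

[TestCase("abab")]
[TestCase("aabaa")]
[TestCase("mississippi")]
public void Contains_AgreesWithNaiveOracle(string s)
{
    var st = SuffixTree.Build(s);
    var alphabet = s.Distinct().ToArray();
    var r = new Random(42); // fixed seed so failures are reproducible

    // Every actual substring must be found...
    for (int pos = 0; pos < s.Length; pos++)
        for (int len = 1; len <= s.Length - pos; len++)
        {
            var t = s.Substring(pos, len);
            Assert.IsTrue(st.Contains(t), "Missing substring '{0}'", t);
        }

    // ...and random strings over the same alphabet must agree with string.Contains.
    for (int i = 0; i < 100; i++)
    {
        var t = new string(Enumerable.Range(0, r.Next(1, 6))
                                     .Select(_ => alphabet[r.Next(alphabet.Length)])
                                     .ToArray());
        Assert.AreEqual(s.Contains(t), st.Contains(t), "Disagreement on '{0}'", t);
    }
}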
I'm using a GA, so I took the example from this page (http://www.ai-junkie.com/ga/intro/gat3.html) and tried to implement it on my own.
The problem is, it doesn't work. For example, maximum fitness does not always grow in the next generation, but becomes smallest. Also, after some number of generations, it just stops getting better. For example, in first 100 generations, it found the largest circle with radius 104. And in next 900 largest radius is 107. And after drawing it, I see that it can grow much more.
Here is my GA-related code. I have left out generating the random circles, decoding, and drawing.
private Genome ChooseParent(Genome[] population, Random r)
{
    double sumFitness = 0;
    double maxFitness = 0;
    for (int i = 0; i < population.Length; i++)
    {
        sumFitness += population[i].fitness;
        if (i == 0 || maxFitness < population[i].fitness)
        {
            maxFitness = population[i].fitness;
        }
    }
    sumFitness = population.Length * maxFitness - sumFitness;
    double randNum = r.NextDouble() * sumFitness;
    double accumulatedSum = 0;
    for (int i = 0; i < population.Length; i++)
    {
        accumulatedSum += population[i].fitness;
        if (randNum < accumulatedSum)
        {
            return population[i];
        }
    }
    return population[0];
}
private void Crossover(Genome parent1, Genome parent2, Genome child1, Genome child2, Random r)
{
    double d = r.NextDouble();
    if (d > this.crossoverRate || child1.Equals(child2))
    {
        for (int i = 0; i < parent1.bitNum; i++)
        {
            child1.bit[i] = parent1.bit[i];
            child2.bit[i] = parent2.bit[i];
        }
    }
    else
    {
        int cp = r.Next(parent1.bitNum - 1);
        for (int i = 0; i < cp; i++)
        {
            child1.bit[i] = parent1.bit[i];
            child2.bit[i] = parent2.bit[i];
        }
        for (int i = cp; i < parent1.bitNum; i++)
        {
            child1.bit[i] = parent2.bit[i];
            child2.bit[i] = parent1.bit[i];
        }
    }
}
private void Mutation(Genome child, Random r)
{
    for (int i = 0; i < child.bitNum; i++)
    {
        if (r.NextDouble() <= this.mutationRate)
        {
            child.bit[i] = (byte)(1 - child.bit[i]);
        }
    }
}
public void Run()
{
    for (int generation = 0; generation < 1000; generation++)
    {
        CalculateFitness(population);
        System.Diagnostics.Debug.WriteLine(maxFitness);
        population = population.OrderByDescending(x => x).ToArray();

        // ELITISM
        Copy(population[0], newpopulation[0]);
        Copy(population[1], newpopulation[1]);

        for (int i = 1; i < this.populationSize / 2; i++)
        {
            Genome parent1 = ChooseParent(population, r);
            Genome parent2 = ChooseParent(population, r);
            Genome child1 = newpopulation[2 * i];
            Genome child2 = newpopulation[2 * i + 1];
            Crossover(parent1, parent2, child1, child2, r);
            Mutation(child1, r);
            Mutation(child2, r);
        }

        Genome[] tmp = population;
        population = newpopulation;
        newpopulation = tmp;

        DekodePopulation(population); // decoding and fitness calculation for each member of the population
    }
}
If someone can point out the potential problem that causes this behaviour, and ways to fix it, I'll be grateful.
Welcome to the world of genetic algorithms!
I'll go through your issues and suggest a potential problem. Here we go:
maximum fitness does not always grow in the next generation, but becomes smallest - You probably meant smaller. This is weird, since you employ elitism, so each generation's best individual should be at least as good as in the previous one. I suggest you check your code for mistakes, because this really should not happen. However, the fitness does not need to always grow: a GA is a stochastic algorithm working with randomness. Suppose that, by chance, neither mutation nor crossover happens in a generation; then the fitness cannot improve in that generation, since nothing changed.
after some number of generations, it just stops getting better. For example, in first 100 generations, it found the largest circle with radius 104. And in next 900 largest radius is 107. And after drawing it, I see that it can grow much more. - this is (probably) a sign of a phenomenon called premature convergence, and it is, unfortunately, a "normal" thing in genetic algorithms. Premature convergence is a situation where the whole population converges to a single solution, or to a set of solutions near each other, which is/are sub-optimal (i.e. not the best possible solution). When this happens, the GA has a very hard time escaping that local optimum. You can try to tweak the parameters, especially the mutation probability, to force more exploration.
Another very important thing that can cause problems is the encoding, i.e. how the bit string is mapped to the circle. If the encoding is too indirect, it can lead to poor performance of the GA. GAs work well when there are building blocks in the genotype that can be exchanged among the population; if there are no such blocks, the performance of a GA is usually going to be poor.
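As an illustration of what a reasonably direct encoding looks like (the field layout below is an assumption for the sketch, not taken from your code): give x, y and the radius their own fixed-width bit fields, so crossover can exchange whole coordinates between parents and a mutation in the radius field cannot scramble the position.

// Hypothetical layout: 10 bits for x, 10 for y, 10 for the radius (30-bit genome).
private static int DecodeField(byte[] bits, int start, int length)
{
    int value = 0;
    for (int i = 0; i < length; i++)
        value = (value << 1) | bits[start + i]; // bits are 0/1 bytes, as in your Genome
    return value;
}

// Usage, e.g. inside your decoding routine:
// int x      = DecodeField(genome.bit, 0, 10);
// int y      = DecodeField(genome.bit, 10, 10);
// int radius = DecodeField(genome.bit, 20, 10);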
I have implemented this exercise and achieved good results. Here is the link:
https://github.com/ManhTruongDang/ai-junkie
Hope this can be of use to you.
I'm something of a beginner to the art and science of algorithms, and as I was learning about "Quick Sort" (which is allegedly quite fast) I had an idea for a sort that uses a dictionary. I coded it up, and I was quite surprised that, for what I was interested in (sorting, say, Earth temperatures or elevation data), what I had coded was actually faster than C# .NET's List.Sort() once compiled in Release mode. For example, if I create a list of one million integers loaded with values ranging from zero to 8000 (a good range for typical Earth elevations), the .NET List.Sort() method averages about 88 milliseconds to sort the list, while the algorithm below does so in about 58 milliseconds.
So this gets me thinking that I should either be up for the Nobel prize for computer science (unlikely) or that there is something that I am missing and that there is a much more efficient way of sorting a large number of integers in the range say of zero to 10,000. How would you experts sort a large amount of data in that range?
private static long DictionarySort(List<int> myList, out List<int> sortedList)
{
    Stopwatch sw = new Stopwatch();
    sw.Start();
    int max = myList.Max();
    Dictionary<int, int> sorter = new Dictionary<int, int>();
    int myListCount = myList.Count;
    for (int i = 0; i < myListCount; i++)
    {
        int val = myList[i];
        int occurrences = 0;
        sorter[val] = sorter.TryGetValue(val, out occurrences) ? occurrences + 1 : 1;
    }
    sortedList = new List<int>(myList.Count + 1);
    int numOccur = 0;
    for (int i = 0; i <= max; i++)
    {
        if (sorter.TryGetValue(i, out numOccur))
        {
            for (int j = 0; j < numOccur; j++)
            {
                sortedList.Add(i);
            }
        }
    }
    sw.Stop();
    return sw.ElapsedMilliseconds;
}
You've rediscovered what Wikipedia calls counting sort, a very simple distribution sorting algorithm. It is the optimal algorithm for your data set: it runs in O(N + k) time (N is number of records and k is number of distinct keys), uses O(k) additional storage, and has very low coefficients.
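For reference, the textbook version replaces the Dictionary with a plain array of counters indexed by the key itself, which removes the hashing overhead entirely. A rough sketch (same idea as your code, array instead of Dictionary):

private static List<int> CountingSort(List<int> myList, int max)
{
    // One counter per possible key value 0..max.
    var counts = new int[max + 1];
    foreach (int val in myList)
        counts[val]++;

    // Emit each key as many times as it occurred, in ascending order.
    var sorted = new List<int>(myList.Count);
    for (int key = 0; key <= max; key++)
        for (int j = 0; j < counts[key]; j++)
            sorted.Add(key);

    return sorted;
}

With negative or very sparse keys you would offset the indices or fall back to something Dictionary-like, but for dense keys in a small range the plain array wins.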
I hear the term algorithm used often, and I have sometimes been confused by the context I see it in on this site, so I thought I would try to clear up my understanding.
To me, an algorithm is some form of mathematical process, such as this:
uint UPDC16( unsigned char a, uint crc )
{
    uint b, p;
    a ^= crc; crc = (crc >> 8) | (a << 8); p = a ^ (a >> 4); p ^= (p >> 2); b = a; a >>= 1;
    if( (p ^ (p >> 1)) & 1 ) { crc ^= 0x0001; a |= 0x80; }
    if( b & 1 ) crc ^= 0x0040; b = a; a ^= (crc >> 8);
    if( a & 1 ) crc ^= 0x0080; a >>= 1;
    if( b & 0x80 ) a |= 0x80;
    crc = (crc & 0x00ff) | (a << 8);
    return crc;
}
Whereas I thought that, because the following performs an action (rotating an image) through nested loops rather than a mathematical function, it was not an algorithm but a function:
for (int block_x = 0; block_x < 2048; block_x += 8)
{
    for (int block_y = 0; block_y < 2048; block_y += 8)
    {
        // this is the inner loop that processes a block
        // of 8x8 pixels.
        for (int x = 0; x < 8; x++)
            for (int y = 0; y < 8; y++)
                dest[x + block_x][y + block_y] = src[y + block_y][x + block_x];
    }
}
I have googled it, but I am looking for an experienced coder's explanation. Can anyone help explain algorithms to me?
The other thing that is bothering me is that I have seen the term "script it" several times and do not understand it. I have heard there are scripting languages like Lua (I may be wrong).
Do they mean using these languages, or is a "script" a special method of coding?
I mostly use c/c++ if this makes any difference.
For your first question: for me, an algorithm can be an idea such as "to compute the sum of all the elements of the array you need to...", a function (there is an input, an output, and some steps in between), or a series of mathematical operations.
So an algorithm is a series of steps that take you from somewhere to somewhere else (going from your home to your work using the subway is also an algorithm).
For your second question: there are two big types of programming languages (I'm simplifying), the "compiled" ones and the "interpreted" ones, and among the latter you have the interactive ones, or scripting languages. Also, generally speaking, scripting languages are considered high-level: you can do powerful things in a few lines that together form a script.
Of course, some scripting languages can also be compiled...