AS3 - Comparing BitmapDatas, fastest method? - performance

Right, so let's say I want to compare two BitmapDatas. One is an image of a background (not solid - it has varying pixels), and the other is of something (like a sprite) on top of that exact same background. Now what I want to do is remove the background from the second image by comparing the two images and clearing every pixel that is present in both, so that only the sprite remains.
Basically, I want to do this in AS3.
Now I came up with two ways to do this, and they both work perfectly. One compares pixels directly, while the other uses the BitmapData.compare() method first, then copies the appropriate pixels into the result. What I want to know is which way is faster.
Here are my two ways of doing it:
Method 1
for (var j:int = 0; j < layer1.height; j++)
{
    for (var i:int = 0; i < layer1.width; i++)
    {
        if (layer1.getPixel32(i, j) != layer2.getPixel32(i, j))
        {
            result.setPixel32(i, j, layer2.getPixel32(i, j));
        }
    }
}
Method 2
result = layer1.compare(layer2) as BitmapData;
for (var j:int = 0; j < layer1.height; j++)
{
    for (var i:int = 0; i < layer1.width; i++)
    {
        if (result.getPixel32(i, j) != 0x00000000)
        {
            result.setPixel32(i, j, layer2.getPixel32(i, j));
        }
    }
}
layer1 is the background, layer2 is the image the background will be removed from, and result is just a BitmapData the result is written to.
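One detail the snippets assume (the setup code isn't shown): for Method 1 to leave only the sprite, result presumably needs to start out fully transparent, along these lines:
// Hypothetical setup, not from the original post: a transparent BitmapData
// the same size as the layers, so only the differing pixels become visible.
var result:BitmapData = new BitmapData(layer1.width, layer1.height, true, 0x00000000);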
These are very similar, and personally I haven't noticed any difference in speed, but I was just wondering if anybody knows which would be faster. I'll probably use Method 1 either way, since BitmapData.compare() doesn't compare pixel alpha unless the colours are identical, but I still thought it wouldn't hurt to ask.

This might not be 100% applicable to your situation, but FWIW I did some research into this a while back; here's what I wrote then:
I'd been meaning to try out Grant Skinner's performance test framework for a while, so this was my opportunity.
I tested the native compare, the iterative compare, a reverse iterative compare (because I know those are a bit faster) and finally a compare using BlendMode.DIFFERENCE.
The reverse iterative compare looks like this:
for (var bx:int = _base.width - 1; bx >= 0; --bx) {
    for (var by:int = _base.height - 1; by >= 0; --by) {
        if (_base.getPixel32(bx, by) != compareTo.getPixel32(bx, by)) {
            return false;
        }
    }
}
return true;
The blendmode compare looks like this:
var test:BitmapData = _base.clone();
test.draw(compareTo, null, null, BlendMode.DIFFERENCE);
var rect:Rectangle = test.getColorBoundsRect(0xffffff, 0x000000, false);
return (rect.toString() != _base.rect.toString());
I'm not 100% sure this is completely reliable, but it seemed to work.
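If the string comparison feels fragile, a variation (my own, untested) is to test the difference bounds directly - identical inputs leave the DIFFERENCE result entirely black, so the bounding box of non-black pixels is empty:
// Sketch of a direct equality check; note DIFFERENCE ignores alpha-only changes.
var test:BitmapData = _base.clone();
test.draw(compareTo, null, null, BlendMode.DIFFERENCE);
var diffBounds:Rectangle = test.getColorBoundsRect(0xFFFFFF, 0x000000, false);
var identical:Boolean = (diffBounds.width == 0 && diffBounds.height == 0);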
I ran each test for 50 iterations on 500x500 images.
Unfortunately my Pixel Bender toolkit is borked, so I couldn't try that method.
Each test was run in both the debug and release players for comparison - notice the massive differences!
Complete code is here: http://webbfarbror.se/dump/bitmapdata-compare.zip

You might try a shader that checks whether the two images match and, where they don't, writes one image's pixel to the output. Like this:
kernel NewFilter
<
    namespace : "Your Namespace";
    vendor : "Your Vendor";
    version : 1;
    description : "your description";
>
{
    input image4 srcOne;
    input image4 srcTwo;
    output pixel4 dst;

    void evaluatePixel()
    {
        float2 positionHere = outCoord();
        pixel4 fromOne = sampleNearest(srcOne, positionHere);
        pixel4 fromTwo = sampleNearest(srcTwo, positionHere);
        float4 difference = fromOne - fromTwo;
        if (abs(difference.r) < 0.01 && abs(difference.g) < 0.01 && abs(difference.b) < 0.01)
            dst = pixel4(0.0, 0.0, 0.0, 1.0);
        else
            dst = fromOne;
    }
}
This returns a black pixel where the images match, and the first image's pixel where they don't. Use Pixel Bender to make it run in Flash. Some tutorial is here. Shaders are usually a good deal faster than plain AS3 code.
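For reference, here's a minimal sketch (my own, not from the answer) of wiring such a kernel up in AS3 with ShaderJob; the file name NewFilter.pbj and the variable names are placeholders:
// Embed the compiled Pixel Bender bytecode (placeholder file name).
[Embed(source="NewFilter.pbj", mimeType="application/octet-stream")]
private static var NewFilterBytes:Class;

var shader:Shader = new Shader(new NewFilterBytes() as ByteArray);
shader.data.srcOne.input = layerOne; // first BitmapData
shader.data.srcTwo.input = layerTwo; // second BitmapData
var output:BitmapData = new BitmapData(layerOne.width, layerOne.height, true, 0);
// true = run synchronously; pass false and listen for ShaderEvent.COMPLETE instead.
new ShaderJob(shader, output).start(true);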

Related

How to find if a path is close to another path

I have a map which shows a path people are supposed to take. This path translates to ~190 (x, y) points. When someone actually takes the path, it's almost impossible for them to match it point for point. In addition, the actual path can contain many more points (~350) than the defined path.
What I'm trying to determine is whether the person roughly followed the path, meaning they stayed within a given distance of it. For a single point, let's say they were within ±100 pixels.
I tried to create a rect around each actual point and check whether a path point falls within it. Since the numbers of points differ, I figured I would check a range of path points for each actual point.
Overall, the ultimate goal is to determine whether the person stayed within range of the path the entire time or not.
One problem with my current approach is that a point isn't always found even though I know it's close enough. I'm pretty sure it has to do with the range I'm searching in. The other issue, which I haven't been able to work through, is the differing number of points.
In the following image the original path is black and the actual path is red.
const int expandBy = 120;
const int inflateBy = expandBy / 2;
const int range = 5;
for (int i = 0; i < actualPoints.Length; i++)
{
    bool found = false;
    var pt = actualPoints[i];
    var actualRect = new CCRect(pt.X - inflateBy, pt.Y - inflateBy, expandBy, expandBy);
    var upperRange = range * 2;
    var lowerRange = i - range;
    if (lowerRange < 0)
        lowerRange = 0;
    var points = path.Skip(lowerRange).Take(upperRange);
    foreach (var pts1 in points)
    {
        if (!actualRect.ContainsPoint(pts1))
        {
            drawNode.DrawRect(actualRect, CCColor4B.Green);
        }
        else
        {
            found = true;
            break;
        }
    }
    if (!found)
        return false;
}
return true;

Game Maker Studio - Preventing objects from sliding vertically

When my object tries to jump over a block and can't clear it, it sticks to the side of the wall for a few seconds before dropping slightly and sticking again, until it reaches the floor. During this time the user can jump again, allowing them to bypass any wall.
Any ideas on how to fix this?
// Horizontal collision
if (place_meeting(x + hsp, y, Room))
{
    while (!place_meeting(x + sign(hsp), y, Room))
    {
        x += sign(hsp);
    }
    hsp = 0;
}
x += hsp;

// Vertical collision
if (place_meeting(x, y + vsp, Room))
{
    while (!place_meeting(x, y + sign(vsp), Room))
    {
        y += sign(vsp);
    }
    vsp = 0;
}
// (snippet cut off here; presumably y += vsp; follows, mirroring the horizontal case)
The above code handles collisions in the game; with experimenting I've now messed it up even more. The character will stand against a wall and everything freezes...
I'm using a collision mask, but that hasn't helped.
Would be easier if we could see your jumping/movement code.
Generally, freezing is caused by infinite while loops; try adding a limiter:
if (place_meeting(x + hsp, y, Room))
{
    var a = 64;
    // Check the limiter variable itself (the original "64 > 0" was always true,
    // which defeated the limiter).
    while (!place_meeting(x + sign(hsp), y, Room) and a > 0)
    {
        a--;
        x += sign(hsp);
    }
    hsp = 0;
}
Also, why are you colliding with Room? What's Room?

Find largest circle not overlapping with others using genetic algorithm

I'm using a GA, so I took the example from this page (http://www.ai-junkie.com/ga/intro/gat3.html) and tried to implement it on my own.
The problem is, it doesn't work. For example, maximum fitness does not always grow in the next generation, but becomes smallest. Also, after some number of generations, it just stops getting better. For example, in first 100 generations, it found the largest circle with radius 104. And in next 900 largest radius is 107. And after drawing it, I see that it can grow much more.
Here is my code connected with the GA. I've left out generating the random circles, decoding, and drawing.
private Genome ChooseParent(Genome[] population, Random r)
{
    double sumFitness = 0;
    double maxFitness = 0;
    for (int i = 0; i < population.Length; i++)
    {
        sumFitness += population[i].fitness;
        if (i == 0 || maxFitness < population[i].fitness)
        {
            maxFitness = population[i].fitness;
        }
    }
    sumFitness = population.Length * maxFitness - sumFitness;
    double randNum = r.NextDouble() * sumFitness;
    double acumulatedSum = 0;
    for (int i = 0; i < population.Length; i++)
    {
        acumulatedSum += population[i].fitness;
        if (randNum < acumulatedSum)
        {
            return population[i];
        }
    }
    return population[0];
}

private void Crossover(Genome parent1, Genome parent2, Genome child1, Genome child2, Random r)
{
    double d = r.NextDouble();
    if (d > this.crossoverRate || child1.Equals(child2))
    {
        for (int i = 0; i < parent1.bitNum; i++)
        {
            child1.bit[i] = parent1.bit[i];
            child2.bit[i] = parent2.bit[i];
        }
    }
    else
    {
        int cp = r.Next(parent1.bitNum - 1);
        for (int i = 0; i < cp; i++)
        {
            child1.bit[i] = parent1.bit[i];
            child2.bit[i] = parent2.bit[i];
        }
        for (int i = cp; i < parent1.bitNum; i++)
        {
            child1.bit[i] = parent2.bit[i];
            child2.bit[i] = parent1.bit[i];
        }
    }
}

private void Mutation(Genome child, Random r)
{
    for (int i = 0; i < child.bitNum; i++)
    {
        if (r.NextDouble() <= this.mutationRate)
        {
            child.bit[i] = (byte)(1 - child.bit[i]);
        }
    }
}

public void Run()
{
    for (int generation = 0; generation < 1000; generation++)
    {
        CalculateFitness(population);
        System.Diagnostics.Debug.WriteLine(maxFitness);
        population = population.OrderByDescending(x => x).ToArray();
        // ELITISM
        Copy(population[0], newpopulation[0]);
        Copy(population[1], newpopulation[1]);
        for (int i = 1; i < this.populationSize / 2; i++)
        {
            Genome parent1 = ChooseParent(population, r);
            Genome parent2 = ChooseParent(population, r);
            Genome child1 = newpopulation[2 * i];
            Genome child2 = newpopulation[2 * i + 1];
            Crossover(parent1, parent2, child1, child2, r);
            Mutation(child1, r);
            Mutation(child2, r);
        }
        Genome[] tmp = population;
        population = newpopulation;
        newpopulation = tmp;
        DekodePopulation(population); // decoding and fitness calculation for each member of population
    }
}
If someone can point out a potential problem that causes such behaviour, and ways to fix it, I'll be grateful.
Welcome to the world of genetic algorithms!
I'll go through your issues one by one and suggest potential problems. Here we go:
maximum fitness does not always grow in the next generation, but becomes smallest - You probably meant smaller. This is weird, since you employed elitism, so each generation's best individual should be at least as good as the previous one's; I suggest you check your code for mistakes, because this really should not happen. However, the fitness does not need to always grow. It is impossible to guarantee that in a GA - it's a stochastic algorithm working with randomness. Suppose that, by chance, no mutation or crossover happens in a generation; then the fitness cannot improve from one generation to the next, since there is no change.
after some number of generations, it just stops getting better. For example, in first 100 generations, it found the largest circle with radius 104. And in next 900 largest radius is 107. And after drawing it, I see that it can grow much more. - this is (probably) a sign of a phenomenon called premature convergence, and it's, unfortunately, a "normal" thing in genetic algorithms. Premature convergence is a situation where the whole population converges to a single solution, or to a set of solutions which are near each other, that is sub-optimal (i.e. not the best possible solution). When this happens, the GA has a very hard time escaping this local optimum. You can try to tweak the parameters, especially the mutation probability, to force more exploration.
Also, another very important thing that can cause problems is the encoding, i.e. how the bit string is mapped to the circle. If the encoding is too indirect, it can lead to poor performance of the GA. GAs work when there are building blocks in the genotype which can be exchanged among the population. If there are no such blocks, the performance of a GA is usually going to be poor.
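As an aside (my own addition, not part of the original answer): if the roulette-wheel logic in ChooseParent turns out to be part of the problem, tournament selection is a simple alternative that gives direct control over selection pressure. A minimal sketch, written in AS3 like the main question on this page since the idea is language-neutral (Genome and fitness are assumed to mirror the question's types):
// k-way tournament: pick k random individuals, return the fittest.
// Larger k = stronger selection pressure; smaller k = more exploration,
// which can help against premature convergence.
function tournamentSelect(population:Array, k:int):Genome {
    var best:Genome = population[int(Math.random() * population.length)];
    for (var i:int = 1; i < k; i++) {
        var challenger:Genome = population[int(Math.random() * population.length)];
        if (challenger.fitness > best.fitness) {
            best = challenger;
        }
    }
    return best;
}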
I have implemented this exercise and achieved good results. Here is the link:
https://github.com/ManhTruongDang/ai-junkie
Hope this can be of use to you.

How to sort objects for Guendelman shock propagation?

This is a question regarding forming a topological tree in 3D. A bit of context: I have a physics engine with bodies, collision points, and some constraints. This is not homework, but an experiment in multi-threading.
I need to sort the bodies bottom-to-top, with groups of objects belonging to layers, as in this document (see the section on "shock propagation"):
http://www2.imm.dtu.dk/visiondag/VD05/graphical/slides/kenny.pdf
The pseudocode he uses to describe how to iterate over the tree makes perfect sense:
shock-propagation(algorithm A)
    compute contact graph
    for each stack layer in bottom-up order
        fixate bottom-most objects of layer
        apply algorithm A to layer
        un-fixate bottom-most objects of layer
    next layer
I already have algorithm A figured out (my impulse code). What would the pseudocode look like for the tree/layer sorting (a topological sort?) with a list of 3D points?
I.e., I don't know where to stop/begin the next "rung" or "branch". I guess I could just chunk it up by y position, but that seems clunky and error-prone. Should I look into topological sorting? I don't really know how to go about this in 3D. How would I get "edges" for a topological sort, if that's the way to do it?
Am I overthinking this? Do I just "connect the dots" by finding a point p1, then the least distant next point p2 where p2.y > p1.y? I see a problem there: p1's distance from p0 could be greater than p2's using pure distances, which would lead to a bad sort.
I just tackled this myself.
I found an example of how to accomplish this in the downloadable source code linked from this paper:
http://www-cs-students.stanford.edu/~eparker/files/PhysicsEngine/
Specifically, the WorldState.cs file, starting at line 221.
The idea is that you assign all static objects level -1 and every other object a different default level, say -2. Then, for each collision with a body at level -1, you add the body it collided with to a list and set its level to 0.
After that, using a while loop, while (list.Count > 0), check for the bodies that collide with the ones in the list and set their levels to body.level + 1.
Finally, for each body in the simulation that still has the default level (-2 from earlier), set its level to the highest level.
There are a few more fine details, but looking at the code in the example will explain it far better than I ever could.
Hope it helps!
The relevant code from Evan Parker's engine [Stanford]:
// topological sort (bfs)
// TODO check this
int max_level = -1;
while (queue.Count > 0)
{
    RigidBody a = queue.Dequeue() as RigidBody;
    //Console.Out.WriteLine("considering collisions with '{0}'", a.Name);
    if (a.level > max_level) max_level = a.level;
    foreach (CollisionPair cp in a.collisions)
    {
        RigidBody b = (cp.body[0] == a ? cp.body[1] : cp.body[0]);
        //Console.Out.WriteLine("considering collision between '{0}' and '{1}'", a.Name, b.Name);
        if (!b.levelSet)
        {
            b.level = a.level + 1;
            b.levelSet = true;
            queue.Enqueue(b);
            //Console.Out.WriteLine("found body '{0}' in level {1}", b.Name, b.level);
        }
    }
}
int num_levels = max_level + 1;
//Console.WriteLine("num_levels = {0}", num_levels);
ArrayList[] bodiesAtLevel = new ArrayList[num_levels];
ArrayList[] collisionsAtLevel = new ArrayList[num_levels];
for (int i = 0; i < num_levels; i++)
{
    bodiesAtLevel[i] = new ArrayList();
    collisionsAtLevel[i] = new ArrayList();
}
for (int i = 0; i < bodies.GetNumBodies(); i++)
{
    RigidBody a = bodies.GetBody(i);
    if (!a.levelSet || a.level < 0) continue; // either a static body or no contacts
    // add a to a's level
    bodiesAtLevel[a.level].Add(a);
    // add collisions involving a to a's level
    foreach (CollisionPair cp in a.collisions)
    {
        RigidBody b = (cp.body[0] == a ? cp.body[1] : cp.body[0]);
        if (b.level <= a.level) // contact with object at or below the same level as a
        {
            // make sure not to add duplicate collisions
            bool found = false;
            foreach (CollisionPair cp2 in collisionsAtLevel[a.level])
                if (cp == cp2) found = true;
            if (!found) collisionsAtLevel[a.level].Add(cp);
        }
    }
}
for (int step = 0; step < num_contact_steps; step++)
{
    for (int level = 0; level < num_levels; level++)
    {
        // process all contacts
        foreach (CollisionPair cp in collisionsAtLevel[level])
        {
            cp.ResolveContact(dt, (num_contact_steps - step - 1) * -1.0f / num_contact_steps);
        }
    }
}

For VS Foreach on Array performance (in AS3/Flex)

Which one is faster? Why?
var messages:Array = [.....]
// 1 - for
var len:int = messages.length;
for (var i:int = 0; i < len; i++) {
    var o:Object = messages[i];
    // ...
}
// 2 - foreach
for each (var o:Object in messages) {
    // ...
}
From where I'm sitting, regular for loops are moderately faster than for each loops in the minimal case. Also, as in the AS2 days, decrementing your way through a for loop generally provides a very minor improvement.
But really, any slight difference here will be dwarfed by the requirements of what you actually do inside the loop. You can find operations that will work faster or slower in either case. The real answer is that neither kind of loop can be meaningfully said to be faster than the other - you must profile your code as it appears in your application.
Sample code:
var size:Number = 10000000;
var arr:Array = [];
for (var i:int=0; i<size; i++) { arr[i] = i; }
var time:Number, o:Object;
// for()
time = getTimer();
for (i=0; i<size; i++) { arr[i]; }
trace("for test: "+(getTimer()-time)+"ms");
// for() reversed
time = getTimer();
for (i=size-1; i>=0; i--) { arr[i]; }
trace("for reversed test: "+(getTimer()-time)+"ms");
// for each
time = getTimer();
for each(o in arr) { o; }
trace("for each test: "+(getTimer()-time)+"ms");
Results:
for test: 124ms
for reversed test: 110ms
for each test: 261ms
Edit: To improve the comparison, I changed the inner loops so they do nothing but access the collection value.
Edit 2: Answers to oshyshko's comment:
The compiler could skip the accesses in my internal loops, but it doesn't; the loops would finish two or three times faster if it did.
The results change in the sample code you posted because in that version, the for loop now has an implicit type conversion. I left assignments out of my loops to avoid that.
Of course one could argue that it's okay to have an extra cast in the for loop because "real code" would need it anyway, but to me that's just another way of saying "there's no general answer; which loop is faster depends on what you do inside your loop". Which is the answer I'm giving you. ;)
When iterating over an array, for each loops are way faster in my tests.
var len:int = 1000000;
var i:int = 0;
var arr:Array = [];
while (i < len) {
    arr[i] = i;
    i++;
}
function forEachLoop():void {
    var t:Number = getTimer();
    var sum:Number = 0;
    for each (var num:Number in arr) {
        sum += num;
    }
    trace("forEachLoop :", (getTimer() - t));
}

function whileLoop():void {
    var t:Number = getTimer();
    var sum:Number = 0;
    var i:int = 0;
    while (i < len) {
        sum += arr[i] as Number;
        i++;
    }
    trace("whileLoop :", (getTimer() - t));
}
forEachLoop();
whileLoop();
This gives:
forEachLoop : 87
whileLoop : 967
Here, probably most of the while loop's time is spent casting the array item to a Number. However, I consider it a fair comparison, since that's what you get in the for each loop.
My guess is that this difference comes from the fact that, as mentioned, the as operator is relatively expensive and array access is also relatively slow. With a for each loop, both operations are handled natively, I think, as opposed to being performed in ActionScript.
Note, however, that if type conversion actually takes place, the for each version becomes much slower than before and the while version noticeably faster (though, still, for each beats while):
To test, change array initialization to this:
while (i < len) {
    arr[i] = i + "";
    i++;
}
And now the results are:
forEachLoop : 328
whileLoop : 366
forEachLoop : 324
whileLoop : 369
I've had this discussion with a few colleagues before, and we have all found different results for different scenarios. However, there was one test that I found quite eloquent for comparison's sake:
var array:Array = new Array();
for (var k:uint = 0; k < 1000000; k++) {
    array.push(Math.random());
}
stage.addEventListener("mouseDown", foreachloop);
stage.addEventListener("mouseUp", forloop);

/////// Array /////
/* 49ms */
function foreachloop(e) {
    var t1:uint = getTimer();
    var tmp:Number = 0;
    var i:uint = 0;
    for each (var n:Number in array) {
        i++;
        tmp += n;
    }
    trace("foreach", i, tmp, getTimer() - t1);
}

/***** 81ms ****/
function forloop(e) {
    var t1:uint = getTimer();
    var tmp:Number = 0;
    var l:uint = array.length;
    for (var i:uint = 0; i < l; i++)
        tmp += Number(array[i]);
    trace("for", i, tmp, getTimer() - t1);
}
What I like about this test is that you have a reference to both the key and the value in each iteration of both loops (removing the key counter in the "for-each" loop is not that relevant). Also, it operates on Number, which is probably the most common kind of loop you'd want to optimize that much. And, most importantly, the winner is the "for-each", which is my favorite loop :P
Notes:
- Referencing the array in a local variable within the function makes no difference for the "for-each" loop, but in the "for" loop you do get a speed bump (75ms instead of 105ms):
function forloop(e) {
    var t1:uint = getTimer();
    var tmp:Number = 0;
    var a:Array = array;
    var l:uint = a.length;
    for (var i:uint = 0; i < l; i++)
        tmp += Number(a[i]);
    trace("for", i, tmp, getTimer() - t1);
}
-If you run the same tests with the Vector class, the results are a bit confusing :S
A for loop would be faster for arrays... but depending on the situation it can be for each that is best... see this .NET benchmark test.
Personally, I'd use either until I got to the point where it became necessary for me to optimize the code. Premature optimization is wasteful :-)
Maybe in an array where all elements are present, starting at zero (0 to X), it would be faster to use a for loop. In all other cases (sparse arrays) it can be a LOT faster to use for each.
The reason is the use of two data structures inside Array: a hash table and a dense array.
Please read my Array analysis using the Tamarin source:
http://jpauclair.wordpress.com/2009/12/02/tamarin-part-i-as3-array/
The for loop will check even undefined indexes, whereas for each skips them, jumping to the next element in the hash table.
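To illustrate the point, a small sketch of my own:
// Sparse array: only two slots are defined, but length is 1,000,000.
var sparse:Array = [];
sparse[0] = 1;
sparse[999999] = 2;

// for each visits only the two defined entries:
for each (var v:int in sparse) {
    trace(v); // 1, then 2 (enumeration order is not guaranteed)
}

// An index-based loop walks every slot, mostly hitting undefined:
var defined:int = 0;
for (var i:int = 0; i < sparse.length; i++) {
    if (sparse[i] !== undefined) defined++;
}
trace(defined); // 2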
Guys!
Especially Juan Pablo Califano.
I've checked your test. The main difference is in how the array item is obtained.
If you put var len:int = 40000;, you will see that the 'while' cycle is faster.
But it loses to for..each with big array sizes.
Just an add-on:
a for each...in loop doesn't assure you that the elements in the array/vector get enumerated in the ORDER THEY ARE STORED in them (except for XML).
This IS a vital difference, IMO.
"...Therefore, you should not write code that depends on a for-each-in or for-in loop's enumeration order unless you are processing XML data..." - C. Moock
(I hope I'm not breaking any law by quoting that one phrase...)
Happy benchmarking.
Sorry to prove you guys wrong, but for each is faster - even by a lot. Except if you don't want to access the array values, but a) that doesn't make sense and b) that's not the case here.
As a result of this, I made a detailed post on my super new blog ... :D
Greetz
back2dos
