Is there a performance hit when iterating over object attributes vs. iterating an array?
Example, using objects:
var x:Object = {one: 1, two: 2, three: 3};
for (var s:String in x) {
trace(x[s]);
}
Versus using an array:
var a:Array = [1, 2, 3];
var len:Number = a.length;
for (var i:Number = 0; i < len; ++i) {
trace(a[i]);
}
So - which is faster and, most importantly, by what factor?
IIRC, in some JavaScript implementations iterating over an object's attributes can be up to 20x slower, but I haven't been able to find such a measurement for ActionScript 2.
I just tried a very similar test, but iterating just once over 200k elements, with opposite results:
Task build-arr: 2221ms
Task iter-arr: 516ms
Task build-obj: 1410ms
Task iter-obj: 953ms
I suspect Luke's test is dominated by loop overhead, which seems bigger in the array case.
Also, note that the array took significantly longer to populate in the first place, so ymmv if your task is insert-heavy.
Also, in my test, storing arr.length in a local variable gave a measurable performance increase of about 15%.
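For clarity, the length-caching variant I'm referring to looks roughly like this (a sketch, not the exact benchmark code I ran):
// Length property looked up on every iteration:
for (var i:Number = 0; i < arr.length; ++i) {
    arr[i] = arr[i];
}
// Length cached in a local variable (about 15% faster in my test):
var len:Number = arr.length;
for (var j:Number = 0; j < len; ++j) {
    arr[j] = arr[j];
}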
Update:
By popular demand, I am posting the code I used.
var iter:Number = 200000;
var time:Number = 0;
var obj:Object = {};
var arr:Array = [];
time = getTimer();
for (var i:Number = 0; i < iter; ++i) {
arr[i] = i;
}
trace("Task build-arr: " + (getTimer() - time) + "ms");
time = getTimer();
for (var i:Number = 0; i < iter; ++i) {
arr[i] = arr[i];
}
trace("Task iter-arr: " + (getTimer() - time) + "ms");
time = getTimer();
for (var i:Number = 0; i < iter; ++i) {
obj[String(i)] = i;
}
trace("Task build-obj: " + (getTimer() - time) + "ms");
time = getTimer();
for (var i:String in obj) {
obj[i] = obj[i];
}
trace("Task iter-obj: " + (getTimer() - time) + "ms");
OK. Why not do some simple measurements?
var time:Number;
var i:Number;
time = getTimer();
var x:Object = {one: 1, two: 2, three: 3};
for( i = 0; i < 100000; i++ )
{
for (var s:String in x)
{
// lets not trace but do a simple assignment instead.
x[s] = x[s];
}
}
trace( getTimer() - time + "ms");
time = getTimer();
var a:Array = [1, 2, 3];
var len:Number = a.length;
for( i = 0; i < 100000; i++ )
{
for ( var j : Number = 0; j < len; j++)
{
a[j] = a[j];
}
}
trace( getTimer() - time + "ms");
On my machine the array iteration is somewhat slower. This could be because ActionScript 2 doesn't have 'real' arrays, only associative arrays (maps), so the compiler presumably has to generate some extra code to work with an array. I haven't looked into the specifics, but I can imagine that being the case.
BTW, doing this test might also show that putting the array length into a variable doesn't really increase performance either. Just give it a go...
UPDATE: Even though ActionScript and JavaScript are syntactically related, the underlying execution mechanisms are completely different. E.g. Firefox uses SpiderMonkey and IE will probably use a Microsoft implementation, whereas AS2 is executed by Adobe's AVM1.
Related
I was surprised by the performance of my Dart code in the browser vs. the Dart VM. Here is a simple example that reproduces the issue.
test('speed test', () {
var n = 10000;
var rand = Random(0);
var x = List.generate(n, (i) => rand.nextDouble());
var res = <num>[];
var sw = Stopwatch()..start();
for (int i=0; i<1000; i++) {
for (int j=0; j<n; j++) {
x[j] += i;
}
res.add(x.reduce((a, b) => a + b));
}
sw.stop();
print('Milliseconds: ${sw.elapsedMilliseconds}');
});
If I run this code with dart, I get somewhere around 140 milliseconds. If I run the same code as a browser test with pub run test -p "chrome" ... I get times around 8000 milliseconds.
I am willing to wait for a 0.1 s calculation, but waiting 8 s for something in the browser is basically unusable. When I run in release mode, the performance in the browser improves, but it's still 10x slower.
Am I missing something? Do I have to avoid any calculations in the browser?
Thanks,
Tony
It's interesting how slow this is.
The corresponding JavaScript code:
(function() {
"use strict";
var n = 10000;
var x = [];
var res = [];
for (var i = 0; i < n; i++) x.push(Math.random());
var t0 = Date.now();
for (var i = 0; i < 1000; i++) {
for (var j = 0; j < n; j++) {
x[j] += i;
}
res.push(x.reduce((a, b) => a + b));
}
var t1 = Date.now();
console.log("Milliseconds: " + (t1 - t0));
}());
runs in as little as ~20 milliseconds.
So, it looks like Dart is somehow triggering "slow mode" for its generated JavaScript.
If you look at the generated code, it contains:
for (i = 0; i < 1000; ++i) {
for (j = 0; j < 10000; ++j) {
if (j >= x.length)
return H.ioore(x, j);
t1 = x[j];
if (typeof t1 !== "number")
return t1.$add();
C.JSArray_methods.$indexSet(x, j, t1 + i);
}
C.JSArray_methods.add$1(res, C.JSArray_methods.reduce$1(x, new A.main_closure0()));
}
You can try to tweak this code, but the big cost comes from C.JSArray_methods.$indexSet(x, j, t1 + i);. If you change that to x[j] = t1 + i;, the time drops to a few hundred milliseconds. So, this is the problem with the current code.
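To make the tweak concrete, the inner loop after that replacement looks roughly like this (same variables as in the generated code above):
for (j = 0; j < 10000; ++j) {
    if (j >= x.length)
        return H.ioore(x, j);
    t1 = x[j];
    if (typeof t1 !== "number")
        return t1.$add();
    x[j] = t1 + i; // direct store instead of C.JSArray_methods.$indexSet(x, j, t1 + i)
}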
(You can improve performance a little, ~20%, by making x a List<num> instead of a List<double>. I have no idea why that makes a difference; the generated code is almost the same: the add closure uses checkDouble to check the type instead of checkNum, but the two have exactly the same body.)
You don't have to avoid computation in the browser. You may have to optimize a little for slow cases like this (or report them to the compiler developers, because this can probably be recognized and optimized; it just isn't yet). For example, you can change your list x of doubles to a Float64List from dart:typed_data:
var x = Float64List.fromList([for (var i = 0; i < n; i++) rand.nextDouble()]);
Then speed increases quite a lot.
The Dart tracking issue for this is https://github.com/dart-lang/sdk/issues/38705.
The performance of this kind of code has recently improved considerably and is much closer to the Dart VM.
// Greedy approach: for each starting denomination (assumes change is sorted in descending order),
// take as many of that coin as possible, then of each smaller one, and keep the best count found.
public int MinCoins(int[] change, int cents)
{
Stopwatch sw = Stopwatch.StartNew();
int coins = 0;
int cent = 0;
int finalCount = cents;
for (int i = change.Length - 1; i >= 0; i--)
{
cent = cents;
for (int j = i; j <= change.Length - 1; j++)
{
coins += cent / change[j];
cent = cent % change[j];
if (cent == 0) break;
}
if (coins < finalCount)
{
finalCount = coins;
}
coins = 0;
}
sw.Stop();
var elapsedMs = sw.Elapsed.ToString();
Console.WriteLine("time for non dp " + elapsedMs);
return finalCount;
}
// Bottom-up dynamic programming: minCoins[i] holds the fewest coins needed to make i cents.
public int MinCoinsDp(int[] change, int cents)
{
Stopwatch sw = Stopwatch.StartNew();
int[] minCoins = new int[cents + 1];
for (int i = 1; i <= cents; i++)
{
minCoins[i] = 99999;
for (int j = 0; j < change.Length; j++)
{
if(i >= change[j])
{
int n = minCoins[i - change[j]] + 1;
if (n < minCoins[i])
minCoins[i] = n;
}
}
}
sw.Stop();
var elapsedMs = sw.Elapsed.ToString();
Console.WriteLine("time for dp " + elapsedMs);
return minCoins[cents];
}
I have written a minimum-number-of-coins program using both an iterative approach and dynamic programming. I have seen a lot of blogs discussing DP for this problem. The iterative solution has running time O(numberOfCoins * numberOfCoins) and DP has O(numberOfCoins * arraySize), which is roughly the same. Which one is better? Please also suggest a good book on advanced algorithms.
Please run it with denominations in descending order, {v1 > v2 > v3 > v4}, like {25, 10, 5}.
I see that you're trying to measure running times of both algorithms and decide which one is better.
Well, there is a more important thing about your algorithms: the first one is unfortunately incorrect. For example, consider the following input:
Suppose we want to make change for 100 and the available coins have denominations 5, 6, 90, 96. The best we can do is use 3 coins: 5, 5, 90. However, your solution returns 1.
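To check that concretely, here is a small JavaScript sketch of the same bottom-up DP idea as your MinCoinsDp, run on that counterexample; it finds the correct answer of 3 coins:
// minCoins[c] = fewest coins needed to make c cents (Infinity if impossible).
function minCoinsDp(change, cents) {
    var minCoins = new Array(cents + 1).fill(Infinity);
    minCoins[0] = 0;
    for (var c = 1; c <= cents; c++) {
        for (var j = 0; j < change.length; j++) {
            if (c >= change[j] && minCoins[c - change[j]] + 1 < minCoins[c]) {
                minCoins[c] = minCoins[c - change[j]] + 1;
            }
        }
    }
    return minCoins[cents];
}
console.log(minCoinsDp([5, 6, 90, 96], 100)); // 3  (90 + 5 + 5)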
For instance, for the N highest numbers, let's say N = 3:
I have a and want to get b:
a = np.array([12.3, 15.4, 1, 13.3, 16.5])
b = [15.4, 13.3, 16.5]
Thanks in advance.
well, my take on this:
Make a copy of the original array;
Sort the copied array to find the n highest numbers;
Go through the original array and, after comparing its numbers to the n highest numbers from the previous step, move the needed ones into a resulting array.
var a = [12.3,15.4,1,13.3,16.5], n = 3, x = 0, c =[]; // c - the resulting array
var b = a.slice(); // copy the original array to sort it
for(var i = 1; i < b.length; i++) { // insertion sorting of the copy
var temp = b[i];
for(var j = i - 1; j >= 0 && temp > b[j]; j--) b[j + 1] = b[j];
b[j + 1] = temp;
}
for(var i = 0; i < a.length; i++) { // creating the resulting array
for(var j = 0; j < n; j++) {
if(a[i] === b[j]) {
c[x] = a[i]; x++; // or just c.push(a[i]);
}
}
}
console.log(c);
The example is written in JavaScript and is somewhat straightforward but, in fact, it is quite language agnostic and does the job.
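For comparison, a more compact sketch of the same idea using built-in array methods (it assumes the values among the n highest are distinct, as in the example):
var a = [12.3, 15.4, 1, 13.3, 16.5], n = 3;
// Sort a copy in descending order and keep the n highest values.
var top = a.slice().sort(function(x, y) { return y - x; }).slice(0, n);
// Filter the original array so the result keeps the original order.
var c = a.filter(function(v) { return top.indexOf(v) !== -1; });
console.log(c); // [15.4, 13.3, 16.5]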
I can't seem to get my sorting algorithm to work. The partition works great, but I can't get the recursive part to work. I know the problem is the condition that starts the recursion, but I can't figure out what I should put there instead.
var partition = function(arr, i_lo, i_hi) {
var pivot = arr[i_hi];
var i = i_lo
for (var j = i_lo; j < i_hi; j++) {
if (arr[j] <= pivot) {
var swap = arr[i];
arr[i] = arr[j];
arr[j] = swap;
i++
}
}
var swap = arr[i];
arr[i] = arr[i_hi]
arr[i_hi] = swap;
return i;
}
//1 3 9 8 2 7 5
var quickSort = function(arr, i_lo, i_hi) {
console.log(arr[i_lo], arr[i_hi])
if (arr[i_lo] < arr[i_hi]) {
var p = partition(arr, i_lo, i_hi);
arr = quickSort(arr, i_lo, p-1);
arr = quickSort(arr, p + 1, i_hi);
console.log(arr);
}
return arr;
}
console.log(quickSort(arr, 0, arr.length-1))
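For reference, the recursion guard in a standard quicksort compares the indices rather than the element values; a minimal sketch of the likely fix, keeping the partition function above unchanged:
var quickSort = function(arr, i_lo, i_hi) {
    // Recurse only while the sub-array has more than one element.
    if (i_lo < i_hi) {
        var p = partition(arr, i_lo, i_hi);
        quickSort(arr, i_lo, p - 1);
        quickSort(arr, p + 1, i_hi);
    }
    return arr;
};
var arr = [1, 3, 9, 8, 2, 7, 5];
console.log(quickSort(arr, 0, arr.length - 1)); // [1, 2, 3, 5, 7, 8, 9]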
I need to loop through an array using the for clause, but starting at a specific index and running for at most a maximum number of iterations.
The code below does the task, but it looks awful to me: is there a better way?
var offset = 10, max = 5;
for (var i = 0; (i + offset) < data.length && i < max; i++) {
doSomething(data[i + offset]);
}
If I am understanding your question correctly, you would just need to initialize i to the offset.
var offset = 10, max = 5 + offset;
for (var i = offset; i < data.length && i < max; i++) {
doSomething(data[i]);
}
edit: didn't understand the max at first.
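A small variation of the same idea, if you'd rather keep the bound in one place (assuming data is an array and doSomething is defined as in the question), precomputes the end index:
var offset = 10, max = 5;
// Stop at whichever comes first: the end of the array or offset + max iterations.
var end = Math.min(data.length, offset + max);
for (var i = offset; i < end; i++) {
    doSomething(data[i]);
}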