Slow updates to my event handler from DownloadFileAsync during DownloadProgressChanged event

My Problem
I'm writing a PowerShell script that will need to download several large files from a remote web server before carrying on with other tasks. One of my project requirements is to show the progress of each download so the end-user knows what's going on.
A response to another SO question contained a solution which uses registered events and Net.WebClient.DownloadFileAsync. However, when I try the solution in the accepted answer, the download is completing long before my script has handled all of the DownloadProgressChanged events.
What I've Tried
I've created a test script which I've provided below in order to reproduce the problem and attempt to isolate where it's occurring. At first I thought the problem was with the Write-Progress cmdlet being slow, so I replaced it with Write-Host to show the percentage completed. This produced many repeated lines with the same percentage value. Thinking that Write-Host might also be slow, I changed the event handler's action to only output the percentage when it changed. This didn't make a difference.
I noticed in the output that the download was completing before the event handler indicated that it was more than a few percent complete. There was a large delay during the first few percent, then it sped up. But the output of percentage complete was still slow even after the download completed. I made another modification to the script to show the time elapsed at each percentage change.
My environment is Windows 7 and Server 2008 using PowerShell 2.0. I've tried the script on both OS's with the same results. I'm not using a PS profile and the Server 2008 computer is a fresh install.
Here is my test script.
# based on https://stackoverflow.com/a/4927295/588006
$client = New-Object System.Net.WebClient
$url  = [uri]"https://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.18.tar.bz2"
$file = "$env:USERPROFILE\Downloads\linux-2.6.18.tar.bz2"

try {
    Register-ObjectEvent $client DownloadProgressChanged -Action {
        if ( $eventargs.ProgressPercentage -gt $percent ) {
            $percent = $eventargs.ProgressPercentage
            if ( $start_time -eq $null ) {
                $start_time = Get-Date
            }
            # Elapsed time since the first percentage change was displayed
            $elapsed_time = New-TimeSpan $start_time (Get-Date)
            Write-Host "Percent complete:" $eventargs.ProgressPercentage "($elapsed_time)"
        }
    }
    Register-ObjectEvent $client DownloadFileCompleted -SourceIdentifier Finished

    $client.DownloadFileAsync($url, $file)

    # optionally wait, but you can break out and it will still write progress
    Wait-Event -SourceIdentifier Finished
} finally {
    Write-Host "File download completed"
    $client.Dispose()
    Unregister-Event -SourceIdentifier Finished
    Remove-Event -SourceIdentifier Finished
}
The script produces the following output.
PS C:\Users\devtest\Desktop> .\write-progress-speed.ps1
Id Name State HasMoreData Location Command
-- ---- ----- ----------- -------- -------
1 7989b3fe-cce... NotStarted False ...
Percent complete: 0 (00:00:00)
Percent complete: 1 (00:00:09.2435931)
ComputerName :
RunspaceId : 6c207bde-bb4a-442b-a7bd-05a9c12fae95
EventIdentifier : 9978
Sender : System.Net.WebClient
SourceEventArgs : System.ComponentModel.AsyncCompletedEventArgs
SourceArgs : {System.Net.WebClient, System.ComponentModel.AsyncCompletedEventArgs}
SourceIdentifier : Finished
TimeGenerated : 8/9/2013 8:02:59 AM
MessageData :
File download completed
Percent complete: 2 (00:00:12.2896000)
PS C:\Users\devtest\Desktop> Percent complete: 3 (00:00:12.6756120)
Percent complete: 4 (00:00:13.0646281)
Percent complete: 5 (00:00:13.2796284)
Percent complete: 6 (00:00:13.4656313)
Percent complete: 7 (00:00:13.6106315)
Percent complete: 8 (00:00:13.7756318)
Percent complete: 9 (00:00:13.9656320)
Percent complete: 10 (00:00:14.1306323)
Percent complete: 11 (00:00:14.2406324)
Percent complete: 12 (00:00:14.3706326)
Percent complete: 13 (00:00:14.5006328)
Percent complete: 14 (00:00:14.6556330)
Percent complete: 15 (00:00:14.7806332)
Percent complete: 16 (00:00:14.9006333)
Percent complete: 17 (00:00:15.0156335)
Percent complete: 18 (00:00:15.1406337)
Percent complete: 19 (00:00:15.2556338)
Percent complete: 20 (00:00:15.3656340)
Percent complete: 21 (00:00:15.4756342)
Percent complete: 22 (00:00:15.5856343)
Percent complete: 23 (00:00:15.6706344)
Percent complete: 24 (00:00:15.7906346)
Percent complete: 25 (00:00:15.9056348)
Percent complete: 26 (00:00:16.0156349)
Percent complete: 27 (00:00:16.1206351)
Percent complete: 28 (00:00:16.2056352)
Percent complete: 29 (00:00:16.3006353)
Percent complete: 30 (00:00:16.4006354)
Percent complete: 31 (00:00:16.5106356)
Percent complete: 32 (00:00:16.6206358)
Percent complete: 33 (00:00:16.7356359)
Percent complete: 34 (00:00:16.8256360)
Percent complete: 35 (00:00:16.9156362)
Percent complete: 36 (00:00:17.0306363)
Percent complete: 37 (00:00:17.1506365)
Percent complete: 38 (00:00:17.2606367)
Percent complete: 39 (00:00:17.3756368)
Percent complete: 40 (00:00:17.5856371)
Percent complete: 41 (00:00:17.7356373)
Percent complete: 42 (00:00:17.9056376)
Percent complete: 43 (00:00:18.0256377)
Percent complete: 44 (00:00:18.1366405)
Percent complete: 45 (00:00:18.2216406)
Percent complete: 46 (00:00:18.3216408)
Percent complete: 47 (00:00:18.4166409)
Percent complete: 48 (00:00:18.5066410)
Percent complete: 49 (00:00:18.6116412)
Percent complete: 50 (00:00:18.7166413)
Percent complete: 51 (00:00:18.8266415)
Percent complete: 52 (00:00:18.9316416)
Percent complete: 53 (00:00:19.0716418)
Percent complete: 54 (00:00:19.1966420)
Percent complete: 55 (00:00:19.2966421)
Percent complete: 56 (00:00:19.3766423)
Percent complete: 57 (00:00:19.4616424)
Percent complete: 58 (00:00:19.5441441)
Percent complete: 59 (00:00:19.6426453)
Percent complete: 60 (00:00:19.7526454)
Percent complete: 61 (00:00:19.8476455)
Percent complete: 62 (00:00:19.9226457)
Percent complete: 63 (00:00:20.0026458)
Percent complete: 64 (00:00:20.0676459)
Percent complete: 65 (00:00:20.1626460)
Percent complete: 66 (00:00:20.2626461)
Percent complete: 67 (00:00:20.3626463)
Percent complete: 68 (00:00:20.4576464)
Percent complete: 69 (00:00:20.5676466)
Percent complete: 70 (00:00:20.6826467)
Percent complete: 71 (00:00:20.7776468)
Percent complete: 72 (00:00:20.8626470)
Percent complete: 73 (00:00:20.9526471)
Percent complete: 74 (00:00:21.0326472)
Percent complete: 75 (00:00:21.1076473)
Percent complete: 76 (00:00:21.1976474)
Percent complete: 77 (00:00:21.2776475)
Percent complete: 78 (00:00:21.3626477)
Percent complete: 79 (00:00:21.4476478)
Percent complete: 80 (00:00:21.5276479)
Percent complete: 81 (00:00:21.6076480)
Percent complete: 82 (00:00:21.6876481)
Percent complete: 83 (00:00:21.7726482)
Percent complete: 84 (00:00:21.8226483)
Percent complete: 85 (00:00:21.8876484)
Percent complete: 86 (00:00:21.9876485)
Percent complete: 87 (00:00:22.0626486)
Percent complete: 88 (00:00:22.1226487)
Percent complete: 89 (00:00:22.1876488)
Percent complete: 90 (00:00:22.2626489)
Percent complete: 91 (00:00:22.3026490)
Percent complete: 92 (00:00:22.3726491)
Percent complete: 93 (00:00:22.4376492)
Percent complete: 94 (00:00:22.4926493)
Percent complete: 95 (00:00:22.5676494)
Percent complete: 96 (00:00:22.6126494)
Percent complete: 97 (00:00:22.6926495)
Percent complete: 98 (00:00:22.7776496)
Percent complete: 99 (00:00:22.8426497)
Percent complete: 100 (00:00:22.9176498)
Does anyone have any recommendations on how I can fix this problem or further investigate what is causing it?

Async eventing is rather poorly supported in PowerShell. When events fire, they go into a PowerShell event queue, which is then fed to handlers as the engine becomes available. I've seen various performance and functionality issues with this before, where the handlers seem to have to wait for the console to become idle before they are executed. See this question for an example.
From what I can tell, your code is set up correctly and "should" work. I think it's slow simply because PowerShell doesn't support this kind of pattern very well.
Here's a workaround that drops down to plain .NET/C# code to handle all of the event plumbing, which avoids the PowerShell event queue entirely.
# helper for handling events
Add-Type -TypeDef @"
    using System;
    using System.Text;
    using System.Net;
    using System.IO;

    public class Downloader
    {
        private Uri source;
        private string destination;
        private string log;
        private object syncRoot = new object();
        private int percent = 0;

        public Downloader(string source, string destination, string log)
        {
            this.source = new Uri(source);
            this.destination = destination;
            this.log = log;
        }

        public void Download()
        {
            WebClient wc = new WebClient();
            wc.DownloadProgressChanged += new DownloadProgressChangedEventHandler(OnProgressChanged);
            wc.DownloadFileAsync(source, destination);
        }

        private void OnProgressChanged(object sender, DownloadProgressChangedEventArgs e)
        {
            lock (this.syncRoot)
            {
                if (e.ProgressPercentage > this.percent)
                {
                    this.percent = e.ProgressPercentage;
                    string message = String.Format("{0}: {1} percent", DateTime.Now, this.percent);
                    File.AppendAllLines(this.log, new string[1] { message }, Encoding.ASCII);
                }
            }
        }
    }
"@
$source = 'https://www.kernel.org/pub/linux/kernel/v2.6/linux-2.6.18.tar.bz2'
$dest   = "$env:USERPROFILE\Downloads\linux-2.6.18.tar.bz2"
$log    = [io.path]::GetTempFileName()

$downloader = New-Object Downloader $source, $dest, $log
$downloader.Download()

gc $log -Tail 1 -Wait `
    | ?{ $_ -match ': (\d+) percent' } `
    | %{
        $percent = [int]$matches[1]
        if ($percent -lt 100) {
            Write-Progress -Activity "Downloading $source" -Status "${percent}% complete" -PercentComplete $percent
        }
        else { break }
    }

Related

What solution should I use to generate a list of all possible alphabetic combinations?

I want to generate a list of all the possible combinations of the following characters with a minimum length of 3 characters and a maximum length of 12 characters.
abcdefghijklmnopqrstuvwxyz1234567890_
I thought of using PHP to do this, but the operation requires too much memory. What would be the best tool to achieve this?
It would be better if you set a limit on each run; for example, generate all possibilities with 5 characters in one run and all with 7 in another, and write code that appends the output of each run to a text file. That way you still get all the possibilities while using far less memory (a sketch of this per-run approach follows the example output below).
Here is an example with numbers in Python:
# 1 2 3 4 5 6 7 8 9 0
listx = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]

# one letter
for i in listx:
    print(i)

# two letters
for i in listx:
    for j in listx:
        print(f"{i}{j}")
and it goes on and on...
output=>
1
2
3
4
5
6
7
8
9
0
11
12
13
14
15
16
17
18
19
10
21
22
23
24
25
26
27
28
29
20
31
32
33
34
35
36
37
38
39
30
41
42
43
44
45
46
47
48
49
40
51
52
53
54
55
56
57
58
59
50
61
62
63
64
65
66
67
68
69
60
71
72
73
74
75
76
77
78
79
70
81
82
83
84
85
86
87
88
89
80
91
92
93
94
95
96
97
98
99
90
01
02
03
04
05
06
07
08
09
00
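Here is a minimal sketch of that per-run approach (the file names are illustrative and itertools is used for brevity), streaming each length's combinations straight to its own text file so nothing large is held in memory:

import itertools

charset = 'abcdefghijklmnopqrstuvwxyz1234567890_'

def write_combinations(length, filename):
    # Stream all combinations of one fixed length directly to disk.
    with open(filename, 'w') as out:
        for combo in itertools.product(charset, repeat=length):
            out.write(''.join(combo) + '\n')

# One run per length; lengths much beyond 5 quickly become impractical
# (see the counts discussed in the answer below).
for length in (3, 4, 5):
    write_combinations(length, 'combinations_%d.txt' % length)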
In Python, the function itertools.product returns the combinations you want for a fixed number of characters. You can call it repeatedly to get each number of characters between 3 and 12.
import itertools

def get_combinations(charset, begin, end):
    result = []
    for i in range(begin, end + 1):
        result.extend(''.join(p) for p in itertools.product(charset, repeat=i))
    return result

print(get_combinations('abcdefghijklmnopqrstuvwxyz0123456789_', 3, 5))
# ['aaa', 'aab', 'aac', 'aad', 'aae', 'aaf', 'aag', 'aah', 'aai', 'aaj', 'aak', 'aal', 'aam', 'aan', 'aao', 'aap', 'aaq', 'aar', 'aas', 'aat', 'aau', 'aav', 'aaw', 'aax', 'aay', 'aaz', 'aa0', 'aa1', 'aa2', 'aa3', 'aa4', 'aa5', 'aa6', 'aa7', 'aa8', 'aa9', 'aa_', 'aba', 'abb', 'abc', 'abd', 'abe', 'abf', 'abg', 'abh', 'abi', 'abj', 'abk', 'abl', 'abm', 'abn', 'abo', 'abp', 'abq', 'abr', 'abs', 'abt', 'abu', 'abv', 'abw', 'abx', 'aby', 'abz', 'ab0', 'ab1', 'ab2', 'ab3', 'ab4', 'ab5', 'ab6', 'ab7', 'ab8', 'ab9', 'ab_', 'aca', 'acb', 'acc', 'acd', 'ace', 'acf', 'acg', 'ach', 'aci', 'acj', 'ack', 'acl', 'acm', 'acn', 'aco', 'acp', 'acq', 'acr', 'acs', 'act', 'acu', 'acv', 'acw', 'acx', 'acy', 'acz', 'ac0', 'ac1', 'ac2', 'ac3', 'ac4', 'ac5', 'ac6', 'ac7', 'ac8', 'ac9', 'ac_', 'ada', 'adb', 'adc', 'add', 'ade', 'adf', 'adg', 'adh', 'adi', 'adj', 'adk', 'adl', 'adm', 'adn', 'ado', 'adp', 'adq', 'adr', 'ads', 'adt', 'adu', 'adv', 'adw', 'adx', 'ady', 'adz', 'ad0', 'ad1', 'ad2', 'ad3', 'ad4', 'ad5', 'ad6', 'ad7', 'ad8', 'ad9', 'ad_', 'aea', 'aeb', 'aec', 'aed', 'aee', 'aef', 'aeg', ..., '__o0', '__o1', '__o2', '__o3', '__o4', '__o5', '__o6', '__o7', '__o8', '__o9', '__o_', '__pa', '__pb', '__pc', '__pd', '__pe', '__pf', '__pg', '__ph', '__pi', '__pj', '__pk', '__pl', '__pm', '__pn', '__po', '__pp', '__pq', '__pr', '__ps', '__pt', '__pu', '__pv', '__pw', '__px', '__py', '__pz', '__p0', '__p1', '__p2', '__p3', '__p4', '__p5', '__p6', '__p7', '__p8', '__p9', '__p_', '__qa', '__qb', '__qc', '__qd', '__qe', '__qf', '__qg', '__qh', '__qi', '__qj', '__qk', '__ql', '__qm', '__qn', '__qo', '__qp', '__qq', '__qr', '__qs', '__qt', '__qu', '__qv', '__qw', '__qx', '__qy', '__qz', '__q0', '__q1', '__q2', '__q3', '__q4', '__q5', '__q6', '__q7', '__q8', '__q9', '__q_', '__ra', '__rb', '__rc', '__rd', '__re', '__rf', '__rg', '__rh', '__ri', '__rj', '__rk', '__rl', '__rm', '__rn', '__ro', '__rp', '__rq', '__rr', '__rs', '__rt', '__ru', '__rv', '__rw', '__rx', '__ry', '__rz', '__r0', '__r1', '__r2', '__r3', '__r4', '__r5', '__r6', '__r7', '__r8', '__r9', '__r_', '__sa', '__sb', '__sc', '__sd', '__se', '__sf', '__sg', '__sh', '__si', '__sj', '__sk', '__sl', '__sm', '__sn', '__so', '__sp', '__sq', '__sr', '__ss', '__st', '__su', '__sv', '__sw', '__sx', '__sy', '__sz', '__s0', '__s1', '__s2', '__s3', '__s4', '__s5', '__s6', '__s7', '__s8', '__s9', '__s_', '__ta', '__tb', '__tc', '__td', '__te', '__tf', '__tg', '__th', '__ti', '__tj', '__tk', '__tl', '__tm', '__tn', '__to', '__tp', '__tq', '__tr', '__ts', '__tt', '__tu', '__tv', '__tw', '__tx', '__ty', '__tz', '__t0', '__t1', '__t2', '__t3', '__t4', '__t5', '__t6', '__t7', '__t8', '__t9', '__t_', '__ua', '__ub', '__uc', '__ud', '__ue', '__uf', '__ug', '__uh', '__ui', '__uj', '__uk', '__ul', '__um', '__un', '__uo', '__up', '__uq', '__ur', '__us', '__ut', '__uu', '__uv', '__uw', '__ux', '__uy', '__uz', '__u0', '__u1', '__u2', '__u3', '__u4', '__u5', '__u6', '__u7', '__u8', '__u9', '__u_', '__va', '__vb', '__vc', '__vd', '__ve', '__vf', '__vg', '__vh', '__vi', '__vj', '__vk', '__vl', '__vm', '__vn', '__vo', '__vp', '__vq', '__vr', '__vs', '__vt', '__vu', '__vv', '__vw', '__vx', '__vy', '__vz', '__v0', '__v1', '__v2', '__v3', '__v4', '__v5', '__v6', '__v7', '__v8', '__v9', '__v_', '__wa', '__wb', '__wc', '__wd', '__we', '__wf', '__wg', '__wh', '__wi', '__wj', '__wk', '__wl', '__wm', '__wn', '__wo', '__wp', '__wq', '__wr', '__ws', '__wt', '__wu', '__wv', '__ww', '__wx', '__wy', '__wz', '__w0', '__w1', '__w2', '__w3', '__w4', '__w5', '__w6', '__w7', '__w8', '__w9', '__w_', 
'__xa', '__xb', '__xc', '__xd', '__xe', '__xf', '__xg', '__xh', '__xi', '__xj', '__xk', '__xl', '__xm', '__xn', '__xo', '__xp', '__xq', '__xr', '__xs', '__xt', '__xu', '__xv', '__xw', '__xx', '__xy', '__xz', '__x0', '__x1', '__x2', '__x3', '__x4', '__x5', '__x6', '__x7', '__x8', '__x9', '__x_', '__ya', '__yb', '__yc', '__yd', '__ye', '__yf', '__yg', '__yh', '__yi', '__yj', '__yk', '__yl', '__ym', '__yn', '__yo', '__yp', '__yq', '__yr', '__ys', '__yt', '__yu', '__yv', '__yw', '__yx', '__yy', '__yz', '__y0', '__y1', '__y2', '__y3', '__y4', '__y5', '__y6', '__y7', '__y8', '__y9', '__y_', '__za', '__zb', '__zc', '__zd', '__ze', '__zf', '__zg', '__zh', '__zi', '__zj', '__zk', '__zl', '__zm', '__zn', '__zo', '__zp', '__zq', '__zr', '__zs', '__zt', '__zu', '__zv', '__zw', '__zx', '__zy', '__zz', '__z0', '__z1', '__z2', '__z3', '__z4', '__z5', '__z6', '__z7', '__z8', '__z9', '__z_', '__0a', '__0b', '__0c', '__0d', '__0e', '__0f', '__0g', '__0h', '__0i', '__0j', '__0k', '__0l', '__0m', '__0n', '__0o', '__0p', '__0q', '__0r', '__0s', '__0t', '__0u', '__0v', '__0w', '__0x', '__0y', '__0z', '__00', '__01', '__02', '__03', '__04', '__05', '__06', '__07', '__08', '__09', '__0_', '__1a', '__1b', '__1c', '__1d', '__1e', '__1f', '__1g', '__1h', '__1i', '__1j', '__1k', '__1l', '__1m', '__1n', '__1o', '__1p', '__1q', '__1r', '__1s', '__1t', '__1u', '__1v', '__1w', '__1x', '__1y', '__1z', '__10', '__11', '__12', '__13', '__14', '__15', '__16', '__17', '__18', '__19', '__1_', '__2a', '__2b', '__2c', '__2d', '__2e', '__2f', '__2g', '__2h', '__2i', '__2j', '__2k', '__2l', '__2m', '__2n', '__2o', '__2p', '__2q', '__2r', '__2s', '__2t', '__2u', '__2v', '__2w', '__2x', '__2y', '__2z', '__20', '__21', '__22', '__23', '__24', '__25', '__26', '__27', '__28', '__29', '__2_', '__3a', '__3b', '__3c', '__3d', '__3e', '__3f', '__3g', '__3h', '__3i', '__3j', '__3k', '__3l', '__3m', '__3n', '__3o', '__3p', '__3q', '__3r', '__3s', '__3t', '__3u', '__3v', '__3w', '__3x', '__3y', '__3z', '__30', '__31', '__32', '__33', '__34', '__35', '__36', '__37', '__38', '__39', '__3_', '__4a', '__4b', '__4c', '__4d', '__4e', '__4f', '__4g', '__4h', '__4i', '__4j', '__4k', '__4l', '__4m', '__4n', '__4o', '__4p', '__4q', '__4r', '__4s', '__4t', '__4u', '__4v', '__4w', '__4x', '__4y', '__4z', '__40', '__41', '__42', '__43', '__44', '__45', '__46', '__47', '__48', '__49', '__4_', '__5a', '__5b', '__5c', '__5d', '__5e', '__5f', '__5g', '__5h', '__5i', '__5j', '__5k', '__5l', '__5m', '__5n', '__5o', '__5p', '__5q', '__5r', '__5s', '__5t', '__5u', '__5v', '__5w', '__5x', '__5y', '__5z', '__50', '__51', '__52', '__53', '__54', '__55', '__56', '__57', '__58', '__59', '__5_', '__6a', '__6b', '__6c', '__6d', '__6e', '__6f', '__6g', '__6h', '__6i', '__6j', '__6k', '__6l', '__6m', '__6n', '__6o', '__6p', '__6q', '__6r', '__6s', '__6t', '__6u', '__6v', '__6w', '__6x', '__6y', '__6z', '__60', '__61', '__62', '__63', '__64', '__65', '__66', '__67', '__68', '__69', '__6_', '__7a', '__7b', '__7c', '__7d', '__7e', '__7f', '__7g', '__7h', '__7i', '__7j', '__7k', '__7l', '__7m', '__7n', '__7o', '__7p', '__7q', '__7r', '__7s', '__7t', '__7u', '__7v', '__7w', '__7x', '__7y', '__7z', '__70', '__71', '__72', '__73', '__74', '__75', '__76', '__77', '__78', '__79', '__7_', '__8a', '__8b', '__8c', '__8d', '__8e', '__8f', '__8g', '__8h', '__8i', '__8j', '__8k', '__8l', '__8m', '__8n', '__8o', '__8p', '__8q', '__8r', '__8s', '__8t', '__8u', '__8v', '__8w', '__8x', '__8y', '__8z', '__80', '__81', '__82', '__83', '__84', '__85', '__86', '__87', '__88', '__89', '__8_', 
'__9a', '__9b', '__9c', '__9d', '__9e', '__9f', '__9g', '__9h', '__9i', '__9j', '__9k', '__9l', '__9m', '__9n', '__9o', '__9p', '__9q', '__9r', '__9s', '__9t', '__9u', '__9v', '__9w', '__9x', '__9y', '__9z', '__90', '__91', '__92', '__93', '__94', '__95', '__96', '__97', '__98', '__99', '__9_', '___a', '___b', '___c', '___d', '___e', '___f', '___g', '___h', '___i', '___j', '___k', '___l', '___m', '___n', '___o', '___p', '___q', '___r', '___s', '___t', '___u', '___v', '___w', '___x', '___y', '___z', '___0', '___1', '___2', '___3', '___4', '___5', '___6', '___7', '___8', '___9', '____']
Note how I called the function with parameters 3 and 5 instead of 3 and 12. With parameters 3 and 5, the number of combinations is already 71268771, i.e. over 71 million. With parameters 3 and 12, the number of combinations would be 6765811783780034854. That's about 6.8 * 10**18, nearly a billion times the number of humans on Earth.
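Those totals are just sums of powers of the alphabet size (37 characters), so they are easy to verify with a quick, purely illustrative check:

# Each string of length i over a 37-character alphabet gives 37**i possibilities,
# so the total for lengths 3..n is a plain sum.
charset_size = 37  # a-z, 0-9 and underscore

print(sum(charset_size**i for i in range(3, 6)))   # 71268771 (lengths 3 to 5)
print(sum(charset_size**i for i in range(3, 13)))  # 6765811783780034854 (lengths 3 to 12)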

Calculation of application speedup using gnuplot and awk

Here's the problem:
Speedup formula: S(p) = T(1)/T(p) = (avg time for one process / avg time for p processes)
There are 5 logs, from which one wants to extract the information.
cg.B.1.log contains the execution times for one process, so we calculate the average of those times to obtain T(1). The other log files contain the execution times for 2, 4, 8 and 16 processes; their averages must also be calculated, since they are T(p).
Here's the code that calculates the averages:
tavg(n) = "awk 'BEGIN { FS = \"[ \\t]*=[ \\t]*\" } /Time in seconds/ { s += $2; c++ } /Total processes/ { if (! CP) CP = $2 } END { print s/c }' cg.B.".n.".log ".(n == 1 ? ">" : ">>")." tavg.dat;"
And the code that calculates the speedup:
system "awk 'NR==1{n=$0} {print n/$0}' tavg.dat > speedup.dat;"
How do I combine those two commands so that the output 'speedup.dat' is produced directly without using file tavg.dat?
Here are the contents of the files; the structure of all log files is identical. For brevity, I attached only the first two executions.
cg.B.1.log
-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-
Start in 16:45:15--25/12/2014
NAS Parallel Benchmarks 3.3 -- CG Benchmark
Size: 75000
Iterations: 75
Number of active processes: 1
Number of nonzeroes per row: 13
Eigenvalue shift: .600E+02
iteration ||r|| zeta
1 0.30354859861452E-12 59.9994751578754
2 0.11186435488267E-14 21.7627846142536
3 0.11312258511928E-14 22.2876617043224
4 0.11222160585284E-14 22.5230738188346
5 0.11244234177219E-14 22.6275390653892
6 0.11330434819384E-14 22.6740259189533
7 0.11334259623050E-14 22.6949056826251
8 0.11374839313647E-14 22.7044023166872
9 0.11424877443039E-14 22.7087834345620
10 0.11329475190566E-14 22.7108351397177
11 0.11337364093482E-14 22.7118107121341
12 0.11379928308864E-14 22.7122816240971
13 0.11369453681794E-14 22.7125122663243
14 0.11430390337015E-14 22.7126268007594
15 0.11400318886400E-14 22.7126844161819
16 0.11352091331197E-14 22.7127137461755
17 0.11350923439124E-14 22.7127288402000
18 0.11475378864565E-14 22.7127366848296
19 0.11366777929028E-14 22.7127407981217
20 0.11274243312504E-14 22.7127429721364
21 0.11353930792856E-14 22.7127441294025
22 0.11299685800278E-14 22.7127447493900
23 0.11296405041170E-14 22.7127450834533
24 0.11381975597887E-14 22.7127452643881
25 0.11328127301663E-14 22.7127453628451
26 0.11367332658939E-14 22.7127454166517
27 0.11283372178605E-14 22.7127454461696
28 0.11384734158863E-14 22.7127454624211
29 0.11394011989719E-14 22.7127454713974
30 0.11354294067640E-14 22.7127454763703
31 0.11412988029103E-14 22.7127454791343
32 0.11358088407717E-14 22.7127454806740
33 0.11263266152515E-14 22.7127454815316
34 0.11275183080286E-14 22.7127454820131
35 0.11328306951409E-14 22.7127454822840
36 0.11357880314891E-14 22.7127454824349
37 0.11332687790488E-14 22.7127454825202
38 0.11324108818137E-14 22.7127454825684
39 0.11365065523777E-14 22.7127454825967
40 0.11361185361321E-14 22.7127454826116
41 0.11276519820716E-14 22.7127454826202
42 0.11317183424878E-14 22.7127454826253
43 0.11236007481770E-14 22.7127454826276
44 0.11304065564684E-14 22.7127454826296
45 0.11287791356431E-14 22.7127454826310
46 0.11297028000133E-14 22.7127454826310
47 0.11281236869666E-14 22.7127454826314
48 0.11277254075548E-14 22.7127454826317
49 0.11320327289847E-14 22.7127454826309
50 0.11287655285563E-14 22.7127454826321
51 0.11230503422400E-14 22.7127454826324
52 0.11292089094944E-14 22.7127454826313
53 0.11366728396408E-14 22.7127454826315
54 0.11222618466968E-14 22.7127454826310
55 0.11278193276516E-14 22.7127454826315
56 0.11244624896030E-14 22.7127454826316
57 0.11264508872685E-14 22.7127454826318
58 0.11255583774760E-14 22.7127454826314
59 0.11227129146723E-14 22.7127454826314
60 0.11189480800173E-14 22.7127454826318
61 0.11163241472678E-14 22.7127454826315
62 0.11278839424218E-14 22.7127454826318
63 0.11226804133008E-14 22.7127454826313
64 0.11222456601361E-14 22.7127454826317
65 0.11270879524310E-14 22.7127454826308
66 0.11303771390006E-14 22.7127454826319
67 0.11240101357287E-14 22.7127454826319
68 0.11240278884391E-14 22.7127454826321
69 0.11207748067718E-14 22.7127454826317
70 0.11178755187571E-14 22.7127454826327
71 0.11195935245649E-14 22.7127454826313
72 0.11260715126337E-14 22.7127454826322
73 0.11281677964997E-14 22.7127454826316
74 0.11162340034815E-14 22.7127454826318
75 0.11208709203921E-14 22.7127454826310
Benchmark completed
VERIFICATION SUCCESSFUL
Zeta is 0.2271274548263E+02
Error is 0.3128387698896E-15
CG Benchmark Completed.
Class = B
Size = 75000
Iterations = 75
Time in seconds = 88.72
Total processes = 1
Compiled procs = 1
Mop/s total = 616.64
Mop/s/process = 616.64
Operation type = floating point
Verification = SUCCESSFUL
Version = 3.3
Compile date = 25 Dec 2014
Compile options:
MPIF77 = mpif77
FLINK = $(MPIF77)
FMPI_LIB = -L/usr/lib/openmpi/lib -lmpi -lopen-rte -lo...
FMPI_INC = -I/usr/lib/openmpi/include -I/usr/lib/openm...
FFLAGS = -O
FLINKFLAGS = -O
RAND = randi8
Please send the results of this run to:
NPB Development Team
Internet: npb@nas.nasa.gov
If email is not available, send this to:
MS T27A-1
NASA Ames Research Center
Moffett Field, CA 94035-1000
Fax: 650-604-3957
Finish in 16:46:46--25/12/2014
-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-
-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-
Start in 17:03:13--25/12/2014
NAS Parallel Benchmarks 3.3 -- CG Benchmark
Size: 75000
Iterations: 75
Number of active processes: 1
Number of nonzeroes per row: 13
Eigenvalue shift: .600E+02
iteration ||r|| zeta
1 0.30354859861452E-12 59.9994751578754
2 0.11186435488267E-14 21.7627846142536
3 0.11312258511928E-14 22.2876617043224
4 0.11222160585284E-14 22.5230738188346
5 0.11244234177219E-14 22.6275390653892
6 0.11330434819384E-14 22.6740259189533
7 0.11334259623050E-14 22.6949056826251
8 0.11374839313647E-14 22.7044023166872
9 0.11424877443039E-14 22.7087834345620
10 0.11329475190566E-14 22.7108351397177
11 0.11337364093482E-14 22.7118107121341
12 0.11379928308864E-14 22.7122816240971
13 0.11369453681794E-14 22.7125122663243
14 0.11430390337015E-14 22.7126268007594
15 0.11400318886400E-14 22.7126844161819
16 0.11352091331197E-14 22.7127137461755
17 0.11350923439124E-14 22.7127288402000
18 0.11475378864565E-14 22.7127366848296
19 0.11366777929028E-14 22.7127407981217
20 0.11274243312504E-14 22.7127429721364
21 0.11353930792856E-14 22.7127441294025
22 0.11299685800278E-14 22.7127447493900
23 0.11296405041170E-14 22.7127450834533
24 0.11381975597887E-14 22.7127452643881
25 0.11328127301663E-14 22.7127453628451
26 0.11367332658939E-14 22.7127454166517
27 0.11283372178605E-14 22.7127454461696
28 0.11384734158863E-14 22.7127454624211
29 0.11394011989719E-14 22.7127454713974
30 0.11354294067640E-14 22.7127454763703
31 0.11412988029103E-14 22.7127454791343
32 0.11358088407717E-14 22.7127454806740
33 0.11263266152515E-14 22.7127454815316
34 0.11275183080286E-14 22.7127454820131
35 0.11328306951409E-14 22.7127454822840
36 0.11357880314891E-14 22.7127454824349
37 0.11332687790488E-14 22.7127454825202
38 0.11324108818137E-14 22.7127454825684
39 0.11365065523777E-14 22.7127454825967
40 0.11361185361321E-14 22.7127454826116
41 0.11276519820716E-14 22.7127454826202
42 0.11317183424878E-14 22.7127454826253
43 0.11236007481770E-14 22.7127454826276
44 0.11304065564684E-14 22.7127454826296
45 0.11287791356431E-14 22.7127454826310
46 0.11297028000133E-14 22.7127454826310
47 0.11281236869666E-14 22.7127454826314
48 0.11277254075548E-14 22.7127454826317
49 0.11320327289847E-14 22.7127454826309
50 0.11287655285563E-14 22.7127454826321
51 0.11230503422400E-14 22.7127454826324
52 0.11292089094944E-14 22.7127454826313
53 0.11366728396408E-14 22.7127454826315
54 0.11222618466968E-14 22.7127454826310
55 0.11278193276516E-14 22.7127454826315
56 0.11244624896030E-14 22.7127454826316
57 0.11264508872685E-14 22.7127454826318
58 0.11255583774760E-14 22.7127454826314
59 0.11227129146723E-14 22.7127454826314
60 0.11189480800173E-14 22.7127454826318
61 0.11163241472678E-14 22.7127454826315
62 0.11278839424218E-14 22.7127454826318
63 0.11226804133008E-14 22.7127454826313
64 0.11222456601361E-14 22.7127454826317
65 0.11270879524310E-14 22.7127454826308
66 0.11303771390006E-14 22.7127454826319
67 0.11240101357287E-14 22.7127454826319
68 0.11240278884391E-14 22.7127454826321
69 0.11207748067718E-14 22.7127454826317
70 0.11178755187571E-14 22.7127454826327
71 0.11195935245649E-14 22.7127454826313
72 0.11260715126337E-14 22.7127454826322
73 0.11281677964997E-14 22.7127454826316
74 0.11162340034815E-14 22.7127454826318
75 0.11208709203921E-14 22.7127454826310
Benchmark completed
VERIFICATION SUCCESSFUL
Zeta is 0.2271274548263E+02
Error is 0.3128387698896E-15
CG Benchmark Completed.
Class = B
Size = 75000
Iterations = 75
Time in seconds = 87.47
Total processes = 1
Compiled procs = 1
Mop/s total = 625.43
Mop/s/process = 625.43
Operation type = floating point
Verification = SUCCESSFUL
Version = 3.3
Compile date = 25 Dec 2014
Compile options:
MPIF77 = mpif77
FLINK = $(MPIF77)
FMPI_LIB = -L/usr/lib/openmpi/lib -lmpi -lopen-rte -lo...
FMPI_INC = -I/usr/lib/openmpi/include -I/usr/lib/openm...
FFLAGS = -O
FLINKFLAGS = -O
RAND = randi8
Please send the results of this run to:
NPB Development Team
Internet: npb@nas.nasa.gov
If email is not available, send this to:
MS T27A-1
NASA Ames Research Center
Moffett Field, CA 94035-1000
Fax: 650-604-3957
Finish in 17:04:43--25/12/2014
-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-/-
tavg.dat
88.3055
45.1482
37.7202
37.4035
53.777
speedup.dat
1
1.9559
2.34107
2.36089
1.64207
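Each value in speedup.dat is simply T(1), the first average in tavg.dat, divided by the corresponding average, per the formula above. As a quick sanity check of those numbers (a throwaway Python snippet, not part of the gnuplot/awk pipeline):

# Reproduce speedup.dat from tavg.dat using S(p) = T(1)/T(p)
tavg = [88.3055, 45.1482, 37.7202, 37.4035, 53.777]
for t in tavg:
    print(round(tavg[0] / t, 5))   # prints 1.0, 1.9559, 2.34107, 2.36089, 1.64207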
You can do it all in one awk script that processes all the log files:
#!/usr/bin/awk -f
BEGIN { FS="=" }

lfname != FILENAME { lfname = FILENAME; split(FILENAME, a, "."); fnum = a[3] }

/Time in seconds/ { tsecs[fnum] += $2; tcnt[fnum]++ }
/Total processes/ { cp[fnum] = int($2) }

END {
    tavg1 = tsecs[1]/tcnt[1]
    for( k in tsecs ) {
        tavgk = tsecs[k]/tcnt[k]
        if( tavgk > 0 ) {
            print k OFS cp[k] OFS tavgk OFS tavg1/tavgk
        }
    }
}
If you put that in a file called awk.script and make it executable with chmod +x awk.script you can run it in bash like:
./awk.script cg.B.*.log
If you're using GNU awk, the output will be ordered (extra steps may be needed to ensure ordered output with other awk flavors).
With a 2nd and 3rd file that I generated, the output looks like:
1 1 88.095 1
2 2 68.095 1.29371
3 4 49.595 1.77629
where the unnamed columns are: file number, number of processes, average time per file, and speedup. You could get just the speedups by changing the print in the END block to print tavg1/tavgk.
Here's a breakdown of the script:
Use a simpler field separator in BEGIN
lfname != FILENAME - parse out file number from the filename as fnum but only when the FILENAME changes.
/Time in seconds/ - accumulate the values in the tsecs and tcnt arrays with an fnum key.
/Total processes/ - store the process count in the cp array with an fnum key, using the int() function to strip whitespace from the value.
END - Calculate the average for fnum 1 as tavg1, loop through the keys in tsecs and calculate the average by fnum key as tavgk. When tavgk > 0 print the output as described above.
You have figured out all the difficult parts already. You don't need the tavg.dat file at all. Create your tavg(n) function directly as a system call:
tavg(n) = system("awk 'BEGIN { FS = \"[ \\t]*=[ \\t]*\" } \
/Time in seconds/ { s += $2; c++ } /Total processes/ { \
if (! CP) CP = $2 } END { print s/c }' cg.B.".n.".log")
And a speedup(n) function as
speedup(n) = tavg(1)/tavg(n)
(note the order: per your formula, S(p) = T(1)/T(p), so the single-process average goes in the numerator).
Now you can set print to write to a file:
set print "speedup.dat"
do for [i=1:5] {
    print speedup(i)
}
unset print

Using JMeter in non-GUI mode for a test plan

I have run into a weird problem.
I run 300 users simultaneously to log in to a website and read a file, and I used non-GUI mode to run this test plan.
My problem is that the test plan passed just once; when I ran it again it got errors. I then reduced the number of users to 200 and it passed, but again after a while it did not.
Here is what I get:
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=64m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
Creating summariser <summary>
Created the tree successfully using C:\Users\samo\Dropbox\Jmeter\jmetet\Reading_script.jmx
Starting the test @ Mon Jul 07 13:24:11 GMT+03:00 2014 (1404728651964)
Waiting for possible shutdown message on port 4445
summary + 1980 in 48s = 41.6/s Avg: 5536 Min: 6 Max: 21171 Err: 772 (38.99%) Active: 300 Started: 300 Finished: 0
summary + 1272 in 40.1s = 31.7/s Avg: 3257 Min: 3 Max: 39796 Err: 31 (2.44%) Active: 192 Started: 300 Finished: 108
summary = 3252 in 77.4s = 42.0/s Avg: 4644 Min: 3 Max: 39796 Err: 803 (24.69%)
summary + 1203 in 70s = 17.2/s Avg: 6020 Min: 3 Max: 69837 Err: 58 (4.82%) Active: 84 Started: 300 Finished: 216
summary = 4455 in 107s = 41.5/s Avg: 5016 Min: 3 Max: 69837 Err: 861 (19.33%)
summary + 608 in 100s = 6.1/s Avg: 6753 Min: 3 Max: 78722 Err: 42 (6.91%) Active: 7 Started: 300 Finished: 293
summary = 5063 in 137s = 36.9/s Avg: 5224 Min: 3 Max: 78722 Err: 903 (17.84%)
summary + 37 in 41s = 0.9/s Avg: 4880 Min: 4 Max: 37736 Err: 17 (45.95%) Active: 0 Started: 300 Finished: 300
summary = 5100 in 142s = 35.9/s Avg: 5222 Min: 3 Max: 78722 Err: 920 (18.04%)
Tidying up ... @ Mon Jul 07 13:26:34 GMT+03:00 2014 (1404728794704)
... end of run
What did I miss that is causing this problem?
And how can I tell whether the problem is out of memory or something else?
Hi guys, I have figured out the problem:
1- First of all, I changed the heap size in the JMeter startup script (jmeter.bat) in the bin folder:
BEFORE: set HEAP=-Xms512m -Xmx512m
AFTER: set HEAP=-Xms2048m -Xmx2048m
2- Removed all the listeners I had used before.
3- Set the ramp-up time in the Thread Group to 180. Ramp-up before making these changes was set to 1, which is not realistic because JMeter cannot start all 300 users in 1 second.
4- Set the loop count in the Thread Group to 2.
The error I got before making these changes was
java.net.SocketException, Non HTTP response message: Connection reset
which means that the server closed the connection.
Hope this can help someone out there.

Node.js slower than Apache

I am comparing performance of Node.js (0.5.1-pre) vs Apache (2.2.17) for a very simple scenario - serving a text file.
Here's the code I use for node server:
var http = require('http'),
    fs = require('fs');

fs.readFile('/var/www/README.txt', function (err, data) {
    http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end(data);
    }).listen(8080, '127.0.0.1');
});
For Apache I am just using the default configuration that ships with Ubuntu 11.04.
When running Apache Bench with the following parameters against Apache
ab -n10000 -c100 http://127.0.0.1/README.txt
I get the following runtimes:
Time taken for tests: 1.083 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 27630000 bytes
HTML transferred: 24830000 bytes
Requests per second: 9229.38 [#/sec] (mean)
Time per request: 10.835 [ms] (mean)
Time per request: 0.108 [ms] (mean, across all concurrent requests)
Transfer rate: 24903.11 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.8 0 9
Processing: 5 10 2.0 10 23
Waiting: 4 10 1.9 10 21
Total: 6 11 2.1 10 23
Percentage of the requests served within a certain time (ms)
50% 10
66% 11
75% 11
80% 11
90% 14
95% 15
98% 18
99% 19
100% 23 (longest request)
When running Apache bench against node instance, these are the runtimes:
Time taken for tests: 1.712 seconds
Complete requests: 10000
Failed requests: 0
Write errors: 0
Total transferred: 25470000 bytes
HTML transferred: 24830000 bytes
Requests per second: 5840.83 [#/sec] (mean)
Time per request: 17.121 [ms] (mean)
Time per request: 0.171 [ms] (mean, across all concurrent requests)
Transfer rate: 14527.94 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.9 0 8
Processing: 0 17 8.8 16 53
Waiting: 0 17 8.6 16 48
Total: 1 17 8.7 17 53
Percentage of the requests served within a certain time (ms)
50% 17
66% 21
75% 23
80% 25
90% 28
95% 31
98% 35
99% 38
100% 53 (longest request)
That is clearly slower than Apache, which is especially surprising considering that Apache is doing a lot of other work as well, like logging.
Am I doing it wrong? Or is Node.js really slower in this scenario?
Edit 1: I do notice that node's concurrency handling is better: when increasing the number of simultaneous requests to 1000, Apache starts dropping a few of them, while node works fine with no connections dropped.
Dynamic requests
node.js is very good at handling lots of small dynamic requests (which can be hanging/long-polling), but it is not good at handling large buffers. Ryan Dahl (the author of node.js) explained this in one of his presentations. I recommend studying these slides; I also watched this talk online somewhere.
Garbage Collector
As you can see from slide 13 of 45, it is bad at big buffers.
Slide 15 of 45:
V8 has a generational garbage
collector. Moves objects around
randomly. Node can’t get a pointer to
raw string data to write to socket.
Use Buffer
Slide 16 of 45:
Using Node’s new Buffer object, the
results change.
Still not as good as, for example, nginx, but a lot better. Also, these slides are pretty old, so Ryan has probably improved on this since.
CDN
Still, I don't think you should be using node.js to host static files. You are probably better off hosting them on a CDN that is optimized for serving static files. Some popular CDNs (some even free) are listed on Wikipedia.
Nginx (+ Memcached)
If you don't want to use a CDN to host your static files, I recommend using Nginx with memcached instead, which is very fast.
In this scenario Apache is probably using sendfile, which results in the kernel sending chunks of memory data (cached by the fs driver) directly to the socket. In the case of node there is some overhead in copying data in userspace between v8, libeio and the kernel (see this great article on using sendfile in node).
There are plenty of possible scenarios where node will outperform Apache, like 'send a stream of data at a constant slow speed to as many tcp connections as possible'.
The result of your benchmark can change in favor of node.js if you increase the concurrency and use caching in node.js.
A code sample from the book "Node Cookbook":
var http = require('http');
var path = require('path');
var fs = require('fs');

var mimeTypes = {
    '.js'  : 'text/javascript',
    '.html': 'text/html',
    '.css' : 'text/css'
};

var cache = {};

// helper from the book; the handler below could call this instead of fs.readFile directly
function cacheAndDeliver(f, cb) {
    if (!cache[f]) {
        fs.readFile(f, function (err, data) {
            if (!err) {
                cache[f] = {content: data};
            }
            cb(err, data);
        });
        return;
    }
    console.log('loading ' + f + ' from cache');
    cb(null, cache[f].content);
}

http.createServer(function (request, response) {
    var lookup = path.basename(decodeURI(request.url)) || 'index.html';
    var f = 'content/' + lookup;
    fs.exists(f, function (exists) {
        if (exists) {
            fs.readFile(f, function (err, data) {
                if (err) {
                    response.writeHead(500);
                    response.end('Server Error!');
                    return;
                }
                var headers = {'Content-type': mimeTypes[path.extname(lookup)]};
                response.writeHead(200, headers);
                response.end(data);
            });
            return;
        }
        response.writeHead(404); // no such file found!
        response.end('Page Not Found!');
    });
}).listen(8080); // the closing listen() call was missing from the pasted snippet; the port is illustrative
Really all you're doing here is getting the system to copy data between buffers in memory, in different processes' address spaces; the disk cache means you aren't really touching the disk, and you're using local sockets.
So the fewer copies that have to be done per request, the faster it goes.
Edit: I suggested adding caching, but in fact I see now you're already doing that - you read the file once, then start the server and send back the same buffer each time.
Have you tried appending the header part to the file data once upfront, so you only have to do a single write operation for each request?
$ cat /var/www/test.php
<?php
for ($i=0; $i<10; $i++) {
echo "hello, world\n";
}
$ ab -r -n 100000 -k -c 50 http://localhost/test.php
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software: Apache/2.2.17
Server Hostname: localhost
Server Port: 80
Document Path: /test.php
Document Length: 130 bytes
Concurrency Level: 50
Time taken for tests: 3.656 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 100000
Total transferred: 37100000 bytes
HTML transferred: 13000000 bytes
Requests per second: 27350.70 [#/sec] (mean)
Time per request: 1.828 [ms] (mean)
Time per request: 0.037 [ms] (mean, across all concurrent requests)
Transfer rate: 9909.29 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 0 2 2.7 0 29
Waiting: 0 2 2.7 0 29
Total: 0 2 2.7 0 29
Percentage of the requests served within a certain time (ms)
50% 0
66% 2
75% 3
80% 3
90% 5
95% 7
98% 10
99% 12
100% 29 (longest request)
$ cat node-test.js
var http = require('http');
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World\n');
}).listen(1337, "127.0.0.1");
console.log('Server running at http://127.0.0.1:1337/');
$ ab -r -n 100000 -k -c 50 http://localhost:1337/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 10000 requests
Completed 20000 requests
Completed 30000 requests
Completed 40000 requests
Completed 50000 requests
Completed 60000 requests
Completed 70000 requests
Completed 80000 requests
Completed 90000 requests
Completed 100000 requests
Finished 100000 requests
Server Software:
Server Hostname: localhost
Server Port: 1337
Document Path: /
Document Length: 12 bytes
Concurrency Level: 50
Time taken for tests: 14.708 seconds
Complete requests: 100000
Failed requests: 0
Write errors: 0
Keep-Alive requests: 0
Total transferred: 7600000 bytes
HTML transferred: 1200000 bytes
Requests per second: 6799.08 [#/sec] (mean)
Time per request: 7.354 [ms] (mean)
Time per request: 0.147 [ms] (mean, across all concurrent requests)
Transfer rate: 504.62 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.1 0 3
Processing: 0 7 3.8 7 28
Waiting: 0 7 3.8 7 28
Total: 1 7 3.8 7 28
Percentage of the requests served within a certain time (ms)
50% 7
66% 9
75% 10
80% 11
90% 12
95% 14
98% 16
99% 17
100% 28 (longest request)
$ node --version
v0.4.8
In the above benchmarks:
Apache:
$ apache2 -version
Server version: Apache/2.2.17 (Ubuntu)
Server built: Feb 22 2011 18:35:08
PHP APC cache/accelerator is installed.
Test run on my laptop, a Sager NP9280 with Core I7 920, 12G of RAM.
$ uname -a
Linux presto 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:24 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
KUbuntu natty

SCSS Interpolation Fails for Assigning a Percentage to Width

I'm generating several column classes with widths defined in a Sass list like so:
$column-widths: 5 10 20 25 30 33 40 50 60 66 70 80 90 100;

@each $width in $column-widths {
    .column-#{$width} {
        width: #{$width}%;
    }
}
However, I get this error on compilation:
Error in plugin 'sass'
Message:
grid.scss
Error: Invalid CSS after "...dth: #{$width}%": expected expression (e.g. 1px, bold), was ";"
on line 10 of grid.scss
>> width: #{$width}%;
----------------------^
It looks like it's not interpreting this the way I expected. I wanted to interpolate the number values before the percent sign, but I think it's reading them as strings and then getting confused when it tries to evaluate the percentage.
I figured out the correct way to do this: instead of interpolation, I should use Sass's percentage() function.
$column-widths: 5 10 20 25 30 33 40 50 60 66 70 80 90 100;

@each $width in $column-widths {
    .column-#{$width} {
        width: percentage($width / 100);
    }
}
