AWK loops test on local vm - bash

I'm trying to understand this difference of 5 seconds between the 1,000-loop and 10,000-loop runs of this script, which is extracted from a gawk article. I would appreciate your help.
#!/usr/bin/gawk -f
# how long does it take to do a few loops?
BEGIN {
    LOOPS = 1000;
    # do the test twice
    start = systime();
    for (i = 0; i < LOOPS; i++) {
    }
    end = systime();
    # calculate how long it takes to do a dummy test
    do_nothing = end - start;
    # now do the test again with the *IMPORTANT* code inside
    start = systime();
    for (i = 0; i < LOOPS; i++) {
        # How long does this take?
        while ("date" | getline) {
            date = $0;
        }
        close("date");
    }
    end = systime();
    newtime = (end - start) - do_nothing;
    if (newtime <= 0) {
        printf("%d loops were not enough to test, increase it\n", LOOPS);
        exit;
    } else {
        printf("%d loops took %6.4f seconds to execute\n", LOOPS, newtime);
        printf("That's %10.8f seconds per loop\n", newtime / LOOPS);
        # since the clock has an accuracy of +/- one second, what is the error?
        printf("accuracy of this measurement = %6.2f%%\n", (1 / newtime) * 100);
    }
    exit;
}
[root@krislasnetwork ~]# ./testSpeed.sh
1000 loops took 2.0000 seconds to execute
That's 0.00200000 seconds per loop
accuracy of this measurement = 50.00%
[root@krislasnetwork ~]# vi testSpeed.sh # changing LOOPS from 1000 to 10000
[root@krislasnetwork ~]# ./testSpeed.sh
10000 loops took 15.0000 seconds to execute
That's 0.00150000 seconds per loop
accuracy of this measurement = 6.67%
It appears that each iteration during the 10,000-loop run executes faster than each iteration during the 1,000-loop run. Can you help me understand this behavior?

Related

fast loading of large hash table in Perl

I have about 30 text files with the structure
wordleft1|wordright1
wordleft2|wordright2
wordleft3|wordright3
...
The total size of the files is about 1 GB with about 32 million lines of word combinations.
I tried a few approaches to load them as fast as possible and store the combinations within a hash
$hash{$wordleft} = $wordright
Opening file by file and reading line by line takes about 42 seconds. I then store the hash with the Storable module
store \%hash, $filename
Loading the data again
$hashref = retrieve $filename
reduces the time to about 28 seconds. I use a fast SSD drive and a fast CPU and have enough RAM to hold all the data (it takes about 7 GB).
I'm searching for a faster way to load this data into the RAM (I can't keep it there for a few reasons).
You could try using Dan Bernstein's CDB file format using a tied hash, which will require minimal code change. You may need to install CDB_File. On my laptop, the cdb file is opened very quickly and I can do about 200-250k lookups per second. Here is an example script to create/use/benchmark a cdb:
test_cdb.pl
#!/usr/bin/env perl
use warnings;
use strict;

use Benchmark qw(:all);
use CDB_File 'create';
use Time::HiRes qw( gettimeofday tv_interval );

scalar @ARGV or die "usage: $0 number_of_keys seconds_to_benchmark\n";
my ($size)    = $ARGV[0] || 1000;
my ($seconds) = $ARGV[1] || 10;

my $t0;
tic();

# Create CDB
my ($file, %data);
%data = map { $_ => 'something' } (1 .. $size);
print "Created $size element hash in memory\n";
toc();

$file = 'data.cdb';
create %data, $file, "$file.$$";
my $bytes = -s $file;
print "Created data.cdb [ $size keys and values, $bytes bytes]\n";
toc();

# Read from CDB
my $c = tie my %h, 'CDB_File', 'data.cdb' or die "tie failed: $!\n";
print "Opened data.cdb as a tied hash.\n";
toc();

timethese( -1 * $seconds, {
    'Pick Random Key'    => sub { int rand $size },
    'Fetch Random Value' => sub { $h{ int rand $size }; },
});

tic();
print "Fetching Every Value\n";
for (0 .. $size) {
    no warnings;    # Useless use of hash element
    $h{ $_ };
}
toc();

sub tic {
    $t0 = [gettimeofday];
}

sub toc {
    my $t1      = [gettimeofday];
    my $elapsed = tv_interval( $t0, $t1 );
    $t0 = $t1;
    print "==> took $elapsed seconds\n";
}
Output ( 1 million keys, tested over 10 seconds )
./test_cdb.pl 1000000 10
Created 1000000 element hash in memory
==> took 2.882813 seconds
Created data.cdb [ 1000000 keys and values, 38890944 bytes]
==> took 2.333624 seconds
Opened data.cdb as a tied hash.
==> took 0.00015 seconds
Benchmark: running Fetch Random Value, Pick Random Key for at least 10 CPU seconds...
Fetch Random Value: 10 wallclock secs (10.46 usr + 0.01 sys = 10.47 CPU) @ 236984.72/s (n=2481230)
Pick Random Key: 9 wallclock secs (10.11 usr + 0.02 sys = 10.13 CPU) @ 3117208.98/s (n=31577327)
Fetching Every Value
==> took 3.514183 seconds
Output ( 10 million keys, tested over 10 seconds )
./test_cdb.pl 10000000 10
Created 10000000 element hash in memory
==> took 44.72331 seconds
Created data.cdb [ 10000000 keys and values, 398890945 bytes]
==> took 25.729652 seconds
Opened data.cdb as a tied hash.
==> took 0.000222 seconds
Benchmark: running Fetch Random Value, Pick Random Key for at least 10 CPU seconds...
Fetch Random Value: 14 wallclock secs ( 9.65 usr + 0.35 sys = 10.00 CPU) @ 209811.20/s (n=2098112)
Pick Random Key: 12 wallclock secs (10.40 usr + 0.02 sys = 10.42 CPU) @ 2865335.22/s (n=29856793)
Fetching Every Value
==> took 38.274356 seconds
It sounds like you do have a good use case for wanting an in-memory Perl hash.
For faster storing/retrieving, I would recommend Sereal (Sereal::Encoder/Sereal::Decoder). If your disk storage is slow, you may even want to enable Snappy compression.
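For illustration, here is a minimal sketch of that round trip with Sereal's object-oriented interface, assuming the hash has already been built from the word files as in the question; the file name data.srl and the placeholder key/value pair are made up, and the exact compression option is left to the Sereal::Encoder documentation:
#!/usr/bin/env perl
# Hedged sketch: store/retrieve the hash with Sereal instead of Storable.
use strict;
use warnings;
use Sereal::Encoder;
use Sereal::Decoder;

my %hash = ( wordleft1 => 'wordright1' );   # placeholder; in practice filled from the 30 files

# Encode the whole hash to a compact binary blob and write it to disk.
my $blob = Sereal::Encoder->new->encode(\%hash);   # pass encoder options here to enable compression (e.g. Snappy)
open my $out, '>:raw', 'data.srl' or die "open: $!";
print {$out} $blob;
close $out;

# Later: slurp the file back and decode it into a hash reference.
open my $in, '<:raw', 'data.srl' or die "open: $!";
my $bytes = do { local $/; <$in> };
close $in;
my $hashref = Sereal::Decoder->new->decode($bytes);
print "loaded ", scalar(keys %$hashref), " pairs\n";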

Ruby time subtraction

I have the following task: I need to get the number of minutes between two times, for example between "8:15" and "7:45". I have the following code:
(Time.parse("8:15") - Time.parse("7:45")).minute
But the result I get is "108000.0 seconds".
How can I fix it?
The result you get back is a Float holding the number of seconds, not a Time object. So, to get the number of minutes and seconds between the two times:
require 'time'
t1 = Time.parse("8:15")
t2 = Time.parse("7:45")
total_seconds = (t1 - t2) # => 1800.0
minutes = (total_seconds / 60).floor # => 30
seconds = total_seconds.to_i % 60 # => 0
puts "difference is #{minutes} minute(s) and #{seconds} second(s)"
Using floor and modulus (%) allows you to split up the minutes and seconds so the result is more human-readable, rather than having '6.57 minutes'.
You can avoid weird time parsing gotchas (Daylight Saving, running the code around midnight) by simply doing some math on the hours and minutes instead of parsing them into Time objects. Something along these lines (I'd verify the math with tests):
one = "8:15"
two = "7:45"
h1, m1 = one.split(":").map(&:to_i)
h2, m2 = two.split(":").map(&:to_i)
puts (h1 - h2) * 60 + m1 - m2
If you do want to take Daylight Saving into account (e.g. you sometimes want an extra hour added or subtracted depending on today's date) then you will need to involve Time, of course.
Time subtraction returns the value in seconds. So divide by 60 to get the answer in minutes:
(Time.parse("8:15") - Time.parse("7:45")) / 60
# => 30.0

Perform a loop for a certain time interval or while condition is met

I am trying to have a check fire off every second for 30 seconds. I haven't found a clear way to do this with Ruby yet. I'm currently trying something like this:
counter = 0
until counter == 30
  sleep 1
  if condition
    do_something
    break
  else
    counter += 1
  end
end
The problem with something like that is that it has to use sleep, which stops the thread in its tracks for a full second. Is there another way to achieve something similar without the use of sleep? Is there a way to have something cycle through on a time-based interval?
You can approximate what you're looking for with something along these lines:
now = Time.now
counter = 1
loop do
  if Time.now < now + counter
    next
  else
    puts "counting another second ..."
  end
  counter += 1
  break if counter > 30
end
You could do something simple like:
max_runtime = 10.seconds.from_now
puts 'whatever' until Time.now > max_runtime
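Note that 10.seconds.from_now comes from ActiveSupport (Rails), not core Ruby; a rough plain-Ruby sketch of the same idea would be:
max_runtime = Time.now + 10                    # plain-Ruby stand-in for 10.seconds.from_now
puts 'whatever' until Time.now > max_runtime   # still a busy loop, like the original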
You can try this; it allows for interval control:
counter = 30        # total seconds to run
interval = 5        # check every 5 seconds
interval_timer = 1  # must start at 1
now = Time.now
while Time.now - now < counter
  if interval_timer % interval == 0  # every 5th attempt the activity will process
    if condition
      stuff
    end
  end
  interval_timer = interval_timer + 1
end
This is guaranteed to finish within 30 seconds, and the interval can be set to any value of 1 or greater. Some things process in milliseconds, so this gives you an option that can save you cycles on processing. It works well in graphics processing.

Rolling list over unequal times in XTS

I have stock data at the tick level and would like to create a rolling list of all ticks for the previous 10 seconds. The code below works, but takes a very long time for large amounts of data. I'd like to vectorize this process or otherwise make it faster, but I'm not coming up with anything. Any suggestions or nudges in the right direction would be appreciated.
library(quantmod)
set.seed(150)
# Create five minutes of xts example data at .1 second intervals
mins <- 5
ticks <- mins * 60 * 10 + 1
times <- xts(runif(seq_len(ticks),1,100), order.by=seq(as.POSIXct("1973-03-17 09:00:00"),
as.POSIXct("1973-03-17 09:05:00"), length = ticks))
# Randomly remove some ticks to create unequal intervals
times <- times[runif(seq_along(times))>.3]
# Number of seconds to look back
lookback <- 10
dist.list <- list(rep(NA, nrow(times)))
system.time(
  for (i in 1:length(times)) {
    dist.list[[i]] <- times[paste(strptime(index(times[i]) - (lookback - 1), format = "%Y-%m-%d %H:%M:%S"), "/",
                                  strptime(index(times[i]) - 1, format = "%Y-%m-%d %H:%M:%S"), sep = "")]
  }
)
#   user  system elapsed
#   6.12    0.00    5.85
You should check out the window function; it will make your subselection of dates a lot easier. The following code uses lapply to do the work of the for loop.
# Your code
system.time(
  for (i in 1:length(times)) {
    dist.list[[i]] <- times[paste(strptime(index(times[i]) - (lookback - 1), format = "%Y-%m-%d %H:%M:%S"), "/",
                                  strptime(index(times[i]) - 1, format = "%Y-%m-%d %H:%M:%S"), sep = "")]
  }
)
#   user  system elapsed
#  10.09    0.00   10.11
# My code
system.time(
  dist.list <- lapply(index(times), function(x) window(times, start = x - lookback - 1, end = x))
)
#   user  system elapsed
#   3.02    0.00    3.03
So, that takes about a third of the time.
But, if you really want to speed things up, and you are willing to forgo millisecond accuracy (which I think your original method implicitly does), you could just run the loop on unique date-hour-second combinations, because they will all return the same time window. This should speed things up roughly twenty or thirty times:
dat.time <- unique(as.POSIXct(as.character(index(times))))  # cheesy method to drop the ms
system.time(
  dist.list.2 <- lapply(dat.time, function(x) window(times, start = x - lookback - 1, end = x))
)
# user system elapsed
# 0.37 0.00 0.39

How to judge the trade-off between a Lua closure and a Lua coroutine (when both can perform the same task)?

P.S.: Set aside the code complexity of the closure implementation of the same task.
The memory overhead for a closure will be less than for a coroutine (unless you've got a lot of "upvalues" in the closure, and none in the coroutine). Also the time overhead for invoking the closure is negligible, whereas there is some small overhead for invoking the coroutine. From what I've seen, Lua does a pretty good job with coroutine switches, but if performance matters and you have the option not to use a coroutine, you should explore that option.
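To make "both can perform the same task" concrete, here is a minimal sketch (the names are illustrative, not from the question) of the same counter written once as a closure and once as a coroutine:
-- Sketch: one counter as a closure, the same counter as a coroutine.
local function closure_counter()
    local n = 0
    return function()        -- only the upvalue n is kept alive
        n = n + 1
        return n
    end
end

local function coroutine_counter()
    return coroutine.wrap(function()   -- allocates a coroutine with its own stack
        local n = 0
        while true do
            n = n + 1
            coroutine.yield(n)
        end
    end)
end

local c1, c2 = closure_counter(), coroutine_counter()
print(c1(), c1(), c1())   --> 1  2  3
print(c2(), c2(), c2())   --> 1  2  3
The closure only keeps its upvalue alive, while the coroutine also owns a suspended stack, which is where the extra memory and the resume/yield switch cost come from.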
If you want to do benchmarks yourself, for this or anything else in Lua:
Use collectgarbage("collect"); collectgarbage("count") to report the amount of memory Lua currently has in use, in kilobytes. (You may want to run "collect" a few times, not just once.) Do that before and after creating something (a closure, a coroutine) to know how much memory it consumes.
Use os.clock() to time things.
See also Programming in Lua on profiling.
see also:
https://gist.github.com/LiXizhi/911069b7e7f98db76d295dc7d1c5e34a
-- Testing coroutine overhead in LuaJIT 2.1 with NPL runtime
--[[
Starting function test...
memory(KB): 0.35546875
Functions: 500000
Elapsed time: 0 s
Starting coroutine test...
memory(KB): 13781.81640625
Coroutines: 500000
Elapsed time: 0.191 s
Starting single coroutine test...
memory(KB): 0.4453125
Coroutines: 500000
Elapsed time: 0.02800000000002 s
conclusions:
1. memory overhead: 0.26 KB per coroutine
2. yield/resume pair overhead: 0.0004 ms
If you have 1000 objects, each calling yield/resume at 60 FPS, the time overhead is 0.2*1000/500000*60*1000 = 24 ms,
and if you do not reuse coroutines, the memory overhead is 1000*60*0.26 KB = 15.6 MB/sec.
]]
local total = 500000
local start, stop

function loopy(n)
    n = n + 1
    return n
end

print "Starting function test..."
collectgarbage("collect"); collectgarbage("collect"); collectgarbage("collect");
local beforeCount = collectgarbage("count")
start = os.clock()
for i = 1, total do
    loopy(i)
end
stop = os.clock()
print("memory(KB):", collectgarbage("count") - beforeCount)
print("Functions:", total)
print("Elapsed time:", stop - start, " s")

print "Starting coroutine test..."
collectgarbage("collect"); collectgarbage("collect"); collectgarbage("collect");
local beforeCount = collectgarbage("count")
start = os.clock()
for i = 1, total do
    co = coroutine.create(loopy)
    coroutine.resume(co, i)
end
stop = os.clock()
print("memory(KB):", collectgarbage("count") - beforeCount)
print("Coroutines:", total)
print("Elapsed time:", stop - start, " s")

print "Starting single coroutine test..."
collectgarbage("collect"); collectgarbage("collect"); collectgarbage("collect");
local beforeCount = collectgarbage("count")
start = os.clock()
co = coroutine.create(function()
    for i = 1, total do
        loopy(i)
        coroutine.yield();
    end
end)
for i = 1, total do
    coroutine.resume(co, i)
end
stop = os.clock()
print("memory(KB):", collectgarbage("count") - beforeCount)
print("Coroutines:", total)
print("Elapsed time:", stop - start, " s")
