Source of Ruby benchmark irregularities - ruby

Running this code:
require 'benchmark'
Benchmark.bm do |x|
  x.report("1+1") {15_000_000.times {1+1}}
  x.report("1+1") {15_000_000.times {1+1}}
  x.report("1+1") {15_000_000.times {1+1}}
  x.report("1+1") {15_000_000.times {1+1}}
  x.report("1+1") {15_000_000.times {1+1}}
end
Outputs these results:
       user     system      total        real
1+1  2.188000   0.000000   2.188000 (  2.250000)
1+1  2.250000   0.000000   2.250000 (  2.265625)
1+1  2.234000   0.000000   2.234000 (  2.250000)
1+1  2.203000   0.000000   2.203000 (  2.250000)
1+1  2.266000   0.000000   2.266000 (  2.281250)
I'm guessing the variation is a result of the system environment, but I wanted to confirm that this is the case.

"Guessing the variation is a result of the system environment", you are right.
Benchmarks can't be precise all time. You don't have a perfect regular machine to run something always in the same time. Take two numbers from benchmark as the same if they were too near, as in this case.
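If you want to tighten (though never eliminate) that spread, a common approach is Benchmark.bmbm instead of Benchmark.bm: it runs every registered block twice, a rehearsal pass and then the timed pass, and triggers a GC run before the timed pass so warm-up costs fall outside the measurement. A minimal sketch along those lines, not from the original post:
require 'benchmark'

# bmbm runs a rehearsal pass first, then the timed pass,
# which usually tightens the spread of the reported times.
Benchmark.bmbm do |x|
  5.times { x.report("1+1") { 15_000_000.times { 1 + 1 } } }
end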

I tried using eval to partially unroll the loop, and although it made it faster, it made the execution time less consistent!
$VERBOSE &&= false # You do not want 15 thousand "warning: useless use of + in void context" warnings
# large_number = 15_000_000 # Too large! Caused eval to take too long, so I gave up
somewhat_large_number = 15_000
unrolled = "def do_addition\n" + ("1+1\n" * somewhat_large_number) + "end\n" ; nil
eval(unrolled)
require 'benchmark'
Benchmark.bm do |x|
  x.report("1+1 partially unrolled") { i = 0; while i < 1000; do_addition; i += 1; end }
  x.report("1+1 partially unrolled") { i = 0; while i < 1000; do_addition; i += 1; end }
  x.report("1+1 partially unrolled") { i = 0; while i < 1000; do_addition; i += 1; end }
  x.report("1+1 partially unrolled") { i = 0; while i < 1000; do_addition; i += 1; end }
  x.report("1+1 partially unrolled") { i = 0; while i < 1000; do_addition; i += 1; end }
  x.report("1+1 partially unrolled") { i = 0; while i < 1000; do_addition; i += 1; end }
  x.report("1+1 partially unrolled") { i = 0; while i < 1000; do_addition; i += 1; end }
  x.report("1+1 partially unrolled") { i = 0; while i < 1000; do_addition; i += 1; end }
  x.report("1+1 partially unrolled") { i = 0; while i < 1000; do_addition; i += 1; end }
  x.report("1+1 partially unrolled") { i = 0; while i < 1000; do_addition; i += 1; end }
end
gave me
user system total real
1+1 partially unrolled 0.750000 0.000000 0.750000 ( 0.765586)
1+1 partially unrolled 0.765000 0.000000 0.765000 ( 0.765586)
1+1 partially unrolled 0.688000 0.000000 0.688000 ( 0.703089)
1+1 partially unrolled 0.797000 0.000000 0.797000 ( 0.796834)
1+1 partially unrolled 0.750000 0.000000 0.750000 ( 0.749962)
1+1 partially unrolled 0.781000 0.000000 0.781000 ( 0.781210)
1+1 partially unrolled 0.719000 0.000000 0.719000 ( 0.718713)
1+1 partially unrolled 0.750000 0.000000 0.750000 ( 0.749962)
1+1 partially unrolled 0.765000 0.000000 0.765000 ( 0.765585)
1+1 partially unrolled 0.781000 0.000000 0.781000 ( 0.781210)
For the purpose of comparison, your benchmark on my computer gave
user system total real
1+1 2.406000 0.000000 2.406000 ( 2.406497)
1+1 2.407000 0.000000 2.407000 ( 2.484629)
1+1 2.500000 0.000000 2.500000 ( 2.734655)
1+1 2.515000 0.000000 2.515000 ( 2.765908)
1+1 2.703000 0.000000 2.703000 ( 4.391075)
(real time varied in the last line, but not user or total)

Related

Divide a number without the divide operator in Ruby

I want to divide a number without using the divide operator:
def divede_me(val, ded)
  i = 1; new_num = 0
  rem = val % ded
  val = val - rem
  while (val != new_num)
    i += 1
    new_num = ded * i
  end
  return i
end
p divede_me(14,4)
The script above returns 3, but I also want the floating-point part (for example 3.5). What is the best way to write this?
def divide_me(val, ded)
  i = 1; new_num = 0
  rem = val.to_f % ded
  val = val - rem
  while (val != new_num)
    i += 1
    new_num = ded * i
  end
  temp = 0.01
  temp += 0.01 until ded * temp >= rem
  return i + temp.round(2)
end

p divide_me(14,4)  # => 3.5
p divide_me(15,4)  # => 3.75
p divide_me(16,7)  # => 2.29
Expanding on your existing code, this will get you reasonably accurate results to 2 decimal places. Remove the .round(2) to see how inaccurate the floats are.
This logic may help you:
val = 14
ded = 4
r = val % ded
value = val - r
v_ck = 0
i = 0
while (value != v_ck)
  i += 1
  v_ck = ded * i
end
ded_ck = 0
j = 0
while (ded_ck != ded)
  j += 1
  ded_ck = r * j
end
puts i.to_s + "." + j.to_s
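For comparison, here is a minimal sketch, not from either answer, of long division by repeated subtraction; the helper name subtract_divide and the places parameter are made up for illustration, and non-negative integer inputs are assumed:
def subtract_divide(val, ded, places = 2)
  quotient = 0
  while val >= ded            # integer part by repeated subtraction
    val -= ded
    quotient += 1
  end
  digits = []
  places.times do             # one fractional digit per iteration
    val *= 10
    digit = 0
    while val >= ded
      val -= ded
      digit += 1
    end
    digits << digit
  end
  "#{quotient}.#{digits.join}".to_f
end

p subtract_divide(14, 4)  # => 3.5
p subtract_divide(16, 7)  # => 2.28 (truncated, not rounded)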

Why is array.min so slow?

I noticed that array.min seems slow, so I did this test against my own naive implementation:
require 'benchmark'
array = (1..100000).to_a.shuffle
Benchmark.bmbm(5) do |x|
  x.report("lib:") { 99.times { min = array.min } }
  x.report("own:") { 99.times { min = array[0]; array.each { |n| min = n if n < min } } }
end
The results:
Rehearsal -----------------------------------------
lib: 1.531000 0.000000 1.531000 ( 1.538159)
own: 1.094000 0.016000 1.110000 ( 1.102130)
-------------------------------- total: 2.641000sec
user system total real
lib: 1.500000 0.000000 1.500000 ( 1.515249)
own: 1.125000 0.000000 1.125000 ( 1.145894)
I'm shocked. How can my own implementation running a block via each beat the built-in? And beat it by so much?
Am I somehow mistaken? Or is this somehow normal? I'm confused.
My Ruby version, running on Windows 8.1 Pro:
C:\>ruby --version
ruby 2.2.3p173 (2015-08-18 revision 51636) [i386-mingw32]
Have a look at the implementation of Enumerable#min. It may ultimately use each to loop through the elements and find the minimum, but before that it does some extra checking to see whether it needs to return more than one element, or whether it needs to compare the elements via a passed block. In your case the elements end up being compared through the min_i function, and I suspect that's where the speed difference comes from - that function will be slower than simply comparing two numbers.
There's no extra optimization for arrays (at least in this Ruby version); all enumerables are traversed the same way.
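One way to see how much the comparison path matters (a sketch of my own, not from the original answer) is to benchmark array.min against the block form array.min { |a, b| a <=> b }, which forces every comparison to go through a Ruby block:
require 'benchmark'

array = (1..100_000).to_a.shuffle
Benchmark.bm(12) do |x|
  x.report("min:")       { 99.times { array.min } }
  x.report("min block:") { 99.times { array.min { |a, b| a <=> b } } }
end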
It's even faster if you use:
def my_min(ary)
  the_min = ary[0]
  i = 1
  len = ary.length
  while i < len
    the_min = ary[i] if ary[i] < the_min
    i += 1
  end
  the_min
end
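For a quick sanity check (not part of the original answer), the while-loop version can be confirmed to agree with the built-in:
array = (1..100_000).to_a.shuffle
my_min(array) == array.min  # => true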
NOTE
I know this is not an answer, but I thought it was worth sharing and putting this code into a comment would have been exceedingly ugly.
For those who like to upgrade to newer versions of the software:
require 'benchmark'
array = (1..100000).to_a.shuffle
Benchmark.bmbm(5) do |x|
  x.report("lib:") { 99.times { min = array.min } }
  x.report("own:") { 99.times { min = array[0]; array.each { |n| min = n if n < min } } }
end
Rehearsal -----------------------------------------
lib: 0.021326 0.000017 0.021343 ( 0.021343)
own: 0.498233 0.001024 0.499257 ( 0.499746)
-------------------------------- total: 0.520600sec
user system total real
lib: 0.018126 0.000000 0.018126 ( 0.018139)
own: 0.492046 0.000000 0.492046 ( 0.492367)
RUBY_VERSION # => "2.7.1"
If you are looking to solve this in a really performant manner, O(log n) or O(n), look at https://en.wikipedia.org/wiki/Selection_algorithm#Incremental_sorting_by_selection and https://en.wikipedia.org/wiki/Heap_(data_structure)

Fast bounding of data in R

Suppose I have a vector, vec, which is long (starting at 1E8 entries) and would like to bound it to the range [a,b]. I can certainly code vec[vec < a] = a and vec[vec > b] = b, but this requires two passes over the data and a large RAM allocation for the temporary indicator vector (~800MB, twice). The two passes burn time because we can do better if we copy data from main memory to the local cache just once (calls to main memory are bad, as are cache misses). And who knows how much this could be improved with multiple threads, but let's not get greedy. :)
Is there a nice implementation in base R or some package that I'm overlooking, or is this a job for Rcpp (or my old friend data.table)?
A naive C solution is
library(inline)
body4 <- "
R_len_t len = Rf_length(x);
SEXP result = Rf_allocVector(REALSXP, len);
const double aa = REAL(a)[0], bb = REAL(b)[0], *xp = REAL(x);
double *rp = REAL(result);
for (int i = 0; i < len; ++i)
if (xp[i] < aa)
rp[i] = aa;
else if (xp[i] > bb)
rp[i] = bb;
else
rp[i] = xp[i];
return result;
"
fun4 <-
cfunction(c(x="numeric", a="numeric", b="numeric"), body4,
language="C")
With a simple parallel version (as Dirk points out, this requires CFLAGS = -fopenmp in ~/.R/Makevars and a platform / compiler supporting OpenMP):
body5 <- "
R_len_t len = Rf_length(x);
const double aa = REAL(a)[0], bb = REAL(b)[0], *xp = REAL(x);
SEXP result = Rf_allocVector(REALSXP, len);
double *rp = REAL(result);
#pragma omp parallel for
for (int i = 0; i < len; ++i)
if (xp[i] < aa)
rp[i] = aa;
else if (xp[i] > bb)
rp[i] = bb;
else
rp[i] = xp[i];
return result;
"
fun5 <-
cfunction(c(x="numeric", a="numeric", b="numeric"), body5,
language="C")
And benchmarks
> z <- runif(1e7)
> benchmark(fun1(z,0.25,0.75), fun4(z, .25, .75), fun5(z, .25, .75),
+ replications=10)
test replications elapsed relative user.self sys.self
1 fun1(z, 0.25, 0.75) 10 9.087 14.609325 8.335 0.739
2 fun4(z, 0.25, 0.75) 10 1.505 2.419614 1.305 0.198
3 fun5(z, 0.25, 0.75) 10 0.622 1.000000 2.156 0.320
user.child sys.child
1 0 0
2 0 0
3 0 0
> identical(res1 <- fun1(z,0.25,0.75), fun4(z,0.25,0.75))
[1] TRUE
> identical(res1, fun5(z, 0.25, 0.75))
[1] TRUE
on my quad-core laptop. Assumes numeric input, no error checking, NA handling, etc.
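Since the question mentions Rcpp, a roughly equivalent sketch using Rcpp::cppFunction (my own illustration, not part of the original answer; the name bound_vec is made up) would be:
library(Rcpp)
cppFunction('
NumericVector bound_vec(NumericVector x, double a, double b) {
    int n = x.size();
    NumericVector out(n);
    for (int i = 0; i < n; ++i) {
        double v = x[i];
        out[i] = (v < a) ? a : ((v > b) ? b : v);  // clamp to [a, b]
    }
    return out;
}')
identical(bound_vec(z, 0.25, 0.75), fun4(z, 0.25, 0.75))  # expected TRUE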
Just to start things off: not much difference between your solution and the pmin/pmax solution (trying things out with n=1e7 rather than n=1e8 because I'm impatient) -- pmin/pmax is actually marginally slower.
fun1 <- function(x,a,b) {x[x<a] <- a; x[x>b] <- b; x}
fun2 <- function(x,a,b) pmin(pmax(x,a),b)
library(rbenchmark)
z <- runif(1e7)
benchmark(fun1(z,0.25,0.75),fun2(z,0.25,0.75),rep=50)
test replications elapsed relative user.self sys.self
1 fun1(z, 0.25, 0.75) 10 21.607 1.00000 6.556 15.001
2 fun2(z, 0.25, 0.75) 10 23.336 1.08002 5.656 17.605

How much slower are strings containing numbers compared to numbers?

Say I want to take a number and return its digits as an array in Ruby.
For this specific purpose or for string functions and number functions in general, which is faster?
These are the algorithms I assume would be most commonly used:
Using Strings: n.to_s.split(//).map {|x| x.to_i}
Using Numbers:
array = []
until n == 0
  m = n % 10
  array.unshift(m)
  n /= 10
end
The difference seems to be less than one order of magnitude, with the integer-based approach faster for Fixnums. For Bignums, the relative performance starts out more or less even, with the string approach winning out significantly as the number of digits grows.
As strings
Program
#!/usr/bin/env ruby
require 'profile'
$n = 1234567890
10000.times do
  $n.to_s.split(//).map { |x| x.to_i }
end
Output
% cumulative self self total
time seconds seconds calls ms/call ms/call name
55.64 0.74 0.74 10000 0.07 0.10 Array#map
21.05 1.02 0.28 100000 0.00 0.00 String#to_i
10.53 1.16 0.14 1 140.00 1330.00 Integer#times
7.52 1.26 0.10 10000 0.01 0.01 String#split
5.26 1.33 0.07 10000 0.01 0.01 Fixnum#to_s
0.00 1.33 0.00 1 0.00 1330.00 #toplevel
As integers
Program
#!/usr/bin/env ruby
require 'profile'
$n = 1234567890
10000.times do
  array = []
  n = $n
  until n == 0
    m = n % 10
    array.unshift(m)
    n /= 10
  end
  array
end
Output
% cumulative self self total
time seconds seconds calls ms/call ms/call name
70.64 0.77 0.77 1 770.00 1090.00 Integer#times
29.36 1.09 0.32 100000 0.00 0.00 Array#unshift
0.00 1.09 0.00 1 0.00 1090.00 #toplevel
Addendum
The pattern seems to hold for smaller numbers also. With $n = 12345, it was around 800ms for the string-based approach and 550ms for the integer-based approach.
When I crossed the boundary into Bignums, say, with $n = 12345678901234567890, I got 2375ms for both approaches. It would appear that the difference evens out nicely, which I would have taken to mean that the internal logic powering Bignum is string-like. However, the documentation seems to suggest otherwise.
For academic purposes, I once again doubled the number of digits to $n = 1234567890123456789012345678901234567890. I got around 4450ms for the string approach and 9850ms for the integer approach, a stark reversal that rules out my previous postulate.
Summary
Number of digits | String program | Integer program | Difference
---------------------------------------------------------------------------
5 | 800ms | 550ms | Integer wins by 250ms
10 | 1330ms | 1090ms | Integer wins by 240ms
20 | 2375ms | 2375ms | Tie
40 | 4450ms | 9850ms | String wins by 4400ms
Steven's response is impressive, but I looked at it for a couple of minutes and couldn't distill it into a simple answer, so here is mine.
For Fixnums
It is fastest to use the digits method I provide below. It's also pretty quick (and much easier) to use num.to_s.each_char.map(&:to_i).
For Bignums
It is fastest to use num.to_s.each_char.map(&:to_i).
The Solution
If speed is honestly the determining factor for what code you use (meaning don't be evil), then this code is the best choice for the job.
class Integer
  def digits
    working_int, digits = self, Array.new
    until working_int.zero?
      digits.unshift working_int % 10
      working_int /= 10
    end
    digits
  end
end

class Bignum
  def digits
    to_s.each_char.map(&:to_i)
  end
end
Here are the approaches I considered to arrive at this conclusion.
I put together a benchmark with 'benchmark' using the code examples from Steven Xu and a String#each_byte version.
require 'benchmark'

MAX = 10_000

# Solution based on http://stackoverflow.com/questions/6445496/how-much-slower-are-strings-containing-numbers-compared-to-numbers/6447254#6447254
class Integer
  def digits
    working_int, digits = self, Array.new
    until working_int.zero?
      digits.unshift working_int % 10
      working_int /= 10
    end
    digits
  end
end

class Bignum
  def digits
    to_s.each_char.map(&:to_i)
  end
end

[
  12345,
  1234567890,
  12345678901234567890,
  1234567890123456789012345678901234567890,
].each do |num|
  puts "========="
  puts "Benchmark #{num}"
  Benchmark.bm do |b|
    b.report("Integer%        ") do
      MAX.times {
        array = []
        n = num
        until n == 0
          m = n % 10
          array.unshift(m)
          n /= 10
        end
        array
      }
    end
    b.report("Integer% <<     ") do
      MAX.times {
        array = []
        n = num
        until n == 0
          m = n % 10
          array << m
          n /= 10
        end
        array.reverse
      }
    end
    b.report("Integer#divmod  ") do
      MAX.times {
        array = []
        n = num
        until n == 0
          n, x = *n.divmod(10)
          array.unshift(x)
        end
        array
      }
    end
    b.report("Integer#divmod<<") do
      MAX.times {
        array = []
        n = num
        until n == 0
          n, x = *n.divmod(10)
          array << x
        end
        array.reverse
      }
    end
    b.report("String+split//  ") do
      MAX.times { num.to_s.split(//).map { |x| x.to_i } }
    end
    b.report("String#each_byte") do
      MAX.times { num.to_s.each_byte.map { |x| x.chr } }
    end
    b.report("String#each_char") do
      MAX.times { num.to_s.each_char.map { |x| x.to_i } }
    end
    # http://stackoverflow.com/questions/6445496/how-much-slower-are-strings-containing-numbers-compared-to-numbers/6447254#6447254
    b.report("Num#digit       ") do
      MAX.times { num.to_s.each_char.map { |x| x.to_i } }
    end
  end
end
My results:
Benchmark 12345
user system total real
Integer% 0.015000 0.000000 0.015000 ( 0.015625)
Integer% << 0.016000 0.000000 0.016000 ( 0.015625)
Integer#divmod 0.047000 0.000000 0.047000 ( 0.046875)
Integer#divmod<< 0.031000 0.000000 0.031000 ( 0.031250)
String+split// 0.109000 0.000000 0.109000 ( 0.109375)
String#each_byte 0.047000 0.000000 0.047000 ( 0.046875)
String#each_char 0.047000 0.000000 0.047000 ( 0.046875)
Num#digit 0.047000 0.000000 0.047000 ( 0.046875)
=========
Benchmark 1234567890
user system total real
Integer% 0.047000 0.000000 0.047000 ( 0.046875)
Integer% << 0.046000 0.000000 0.046000 ( 0.046875)
Integer#divmod 0.063000 0.000000 0.063000 ( 0.062500)
Integer#divmod<< 0.062000 0.000000 0.062000 ( 0.062500)
String+split// 0.188000 0.000000 0.188000 ( 0.187500)
String#each_byte 0.063000 0.000000 0.063000 ( 0.062500)
String#each_char 0.093000 0.000000 0.093000 ( 0.093750)
Num#digit 0.079000 0.000000 0.079000 ( 0.078125)
=========
Benchmark 12345678901234567890
user system total real
Integer% 0.234000 0.000000 0.234000 ( 0.234375)
Integer% << 0.234000 0.000000 0.234000 ( 0.234375)
Integer#divmod 0.203000 0.000000 0.203000 ( 0.203125)
Integer#divmod<< 0.172000 0.000000 0.172000 ( 0.171875)
String+split// 0.266000 0.000000 0.266000 ( 0.265625)
String#each_byte 0.125000 0.000000 0.125000 ( 0.125000)
String#each_char 0.141000 0.000000 0.141000 ( 0.140625)
Num#digit 0.141000 0.000000 0.141000 ( 0.140625)
=========
Benchmark 1234567890123456789012345678901234567890
user system total real
Integer% 0.718000 0.000000 0.718000 ( 0.718750)
Integer% << 0.657000 0.000000 0.657000 ( 0.656250)
Integer#divmod 0.562000 0.000000 0.562000 ( 0.562500)
Integer#divmod<< 0.485000 0.000000 0.485000 ( 0.484375)
String+split// 0.500000 0.000000 0.500000 ( 0.500000)
String#each_byte 0.218000 0.000000 0.218000 ( 0.218750)
String#each_char 0.282000 0.000000 0.282000 ( 0.281250)
Num#digit 0.265000 0.000000 0.265000 ( 0.265625)
String#each_byte / String#each_char is faster than split; for lower numbers, the integer version is faster.
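Worth noting (not part of the original answers): since Ruby 2.4, Integer#digits is built in. It returns the digits least-significant first, so reverse the result if you want the usual order:
1234567890.digits          # => [0, 9, 8, 7, 6, 5, 4, 3, 2, 1]
1234567890.digits.reverse  # => [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]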

Is there a "queue" in MATLAB?

I want to convert a recursive function to an iterative one. What I normally do is initialize a queue and put the first job into it. Then, in a while loop, I consume jobs from the queue and add new ones to it. If my recursive function calls itself multiple times (e.g. walking a tree with many branches), multiple jobs are added. Pseudo code:
queue = new Queue();
queue.put(param);
result = 0;
while (!queue.isEmpty()) {
    param = queue.remove();
    // process param and obtain new param(s)
    // change result
    queue.add(param1);
    queue.add(param2);
}
return result;
I cannot find any queue-like structure in MATLAB, though. I can use a vector to simulate a queue, where adding 3 to the queue is like:
a = [a 3]
and removing an element is:
val = a(1);
a(1) = [];
If I got the MATLAB way right, this method will be a performance killer.
Is there a sane way to use a queue in MATLAB?
What about other data structures?
If you insist on using proper data structures, you can use Java from inside MATLAB:
import java.util.LinkedList
q = LinkedList();
q.add('item1');
q.add(2);
q.add([3 3 3]);
item = q.remove();
q.add('item4');
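Along the same lines (a sketch of my own, not from the original answer), java.util.ArrayDeque exposes the same add/remove interface and is usually faster than LinkedList for plain queue workloads:
import java.util.ArrayDeque
q = ArrayDeque();
q.add('item1');       % add() appends to the tail
q.add(2);
item = q.remove();    % remove() takes from the head, i.e. FIFO order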
Ok, here's a quick-and-dirty, barely tested implementation using a MATLAB handle class. If you're only storing scalar numeric values, you could use a double array for "elements" rather than a cell array. No idea about performance.
classdef Queue < handle
    properties ( Access = private )
        elements
        nextInsert
        nextRemove
    end
    properties ( Dependent = true )
        NumElements
    end
    methods
        function obj = Queue
            obj.elements = cell(1, 10);
            obj.nextInsert = 1;
            obj.nextRemove = 1;
        end
        function add( obj, el )
            if obj.nextInsert == length( obj.elements )
                obj.elements = [ obj.elements, cell( 1, length( obj.elements ) ) ];
            end
            obj.elements{obj.nextInsert} = el;
            obj.nextInsert = obj.nextInsert + 1;
        end
        function el = remove( obj )
            if obj.isEmpty()
                error( 'Queue is empty' );
            end
            el = obj.elements{ obj.nextRemove };
            obj.elements{ obj.nextRemove } = [];
            obj.nextRemove = obj.nextRemove + 1;
            % Trim "elements"
            if obj.nextRemove > ( length( obj.elements ) / 2 )
                ntrim = fix( length( obj.elements ) / 2 );
                obj.elements = obj.elements( (ntrim+1):end );
                obj.nextInsert = obj.nextInsert - ntrim;
                obj.nextRemove = obj.nextRemove - ntrim;
            end
        end
        function tf = isEmpty( obj )
            tf = ( obj.nextRemove >= obj.nextInsert );
        end
        function n = get.NumElements( obj )
            n = obj.nextInsert - obj.nextRemove;
        end
    end
end
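A brief usage sketch (my own, assuming the class above is saved as Queue.m):
q = Queue();
q.add(1);                % seed the queue with the first job
result = 0;
while ~q.isEmpty()
    job = q.remove();
    % process job, update result, and q.add(...) any follow-up jobs here
end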
Is a recursive solution really so bad? (always examine your design first).
File Exchange is your friend. (steal with pride!)
Why bother with the trouble of a proper Queue or a class? Fake it a bit and keep it simple:
q = {};
head = 1;
q{head} = param;
result = 0;
while (head <= numel(q))
    % process q{head} and obtain new param(s)
    head = head + 1;
    % change result
    q{end+1} = param1;
    q{end+1} = param2;
end  % loop over q
% result now holds the answer (return it from the enclosing function)
If the performance suffers too much from adding at the end, add capacity in chunks:
chunkSize = 100;
chunk = cell(1, chunkSize);
q = chunk;
head = 1;
nextLoc = 2;
q{head} = param;
result = 0;
while (head < nextLoc)
    % process q{head} and obtain new param(s)
    head = head + 1;
    % change result
    if nextLoc + 1 > numel(q)
        q = [q chunk];   % grow the storage by a whole chunk at once
    end
    q{nextLoc} = param1;
    nextLoc = nextLoc + 1;
    q{nextLoc} = param2;
    nextLoc = nextLoc + 1;
end  % loop over q
% result now holds the answer (return it from the enclosing function)
A class is certainly more elegant and reusable - but fit the tool to the task.
If you can do with a FIFO queue of predefined size without the need for simple direct access, you can simply use the modulo operator and some counter variable:
myQueueSize = 25;                % Define queue size
myQueue = zeros(1, myQueueSize); % Initialize queue
k = 1;                           % Counter variable
while 1
    % Do something, and then
    % store some number into the queue in a FIFO manner:
    myQueue(mod(k, myQueueSize) + 1) = someNumberToQueue;
    k = k + 1;                   % Increment counter
end
This approach is super simple, but has the downside of not being as easily accessed as your typical queue. In other words, the newest element will always be element k, not element 1 etc.. For some applications, such as FIFO data storage for statistical operations, this is not necessarily a problem.
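If you do need to read the buffer out in order, one option (my own sketch; the variable names follow the snippet above) is to index it relative to the counter, since the slot that is about to be overwritten holds the oldest value:
% Oldest-to-newest view of the circular buffer, given the current counter k
% (valid once the buffer has been filled at least once):
ordered = myQueue(mod(k + (0:myQueueSize-1), myQueueSize) + 1);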
Use this code: save it as an m-file (CQueue.m) and use its methods, such as q.pop(), etc.
This is the original code with some modifications:
classdef CQueue < handle
    properties (Access = private)
        buffer      % a cell array holding the data
        beg         % the start position of the queue
        rear        % the end position of the queue
                    % the actual data is buffer(beg:rear-1)
    end
    properties (Access = public)
        capacity    % capacity of the buffer; doubled whenever it runs out
    end
    methods
        function obj = CQueue(c)                % constructor / initialization
            if nargin >= 1 && iscell(c)
                obj.buffer = [c(:); cell(numel(c), 1)];
                obj.beg = 1;
                obj.rear = numel(c) + 1;
                obj.capacity = 2*numel(c);
            elseif nargin >= 1
                obj.buffer = cell(100, 1);
                obj.buffer{1} = c;
                obj.beg = 1;
                obj.rear = 2;
                obj.capacity = 100;
            else
                obj.buffer = cell(100, 1);
                obj.capacity = 100;
                obj.beg = 1;
                obj.rear = 1;
            end
        end
        function s = size(obj)                  % number of elements in the queue
            if obj.rear >= obj.beg
                s = obj.rear - obj.beg;
            else
                s = obj.rear - obj.beg + obj.capacity;
            end
        end
        function b = isempty(obj)               % return true when the queue is empty
            b = ~logical(obj.size());
        end
        function s = empty(obj)                 % clear all the data in the queue
            s = obj.size();
            obj.beg = 1;
            obj.rear = 1;
        end
        function push(obj, el)                  % push a new element onto the rear of the queue
            if obj.size >= obj.capacity - 1
                sz = obj.size();
                if obj.rear >= obj.beg
                    obj.buffer(1:sz) = obj.buffer(obj.beg:obj.rear-1);
                else
                    obj.buffer(1:sz) = obj.buffer([obj.beg:obj.capacity 1:obj.rear-1]);
                end
                obj.buffer(sz+1:obj.capacity*2) = cell(obj.capacity*2-sz, 1);
                obj.capacity = numel(obj.buffer);
                obj.beg = 1;
                obj.rear = sz+1;
            end
            obj.buffer{obj.rear} = el;
            obj.rear = mod(obj.rear, obj.capacity) + 1;
        end
        function el = front(obj)                % return the element at the front of the queue
            if obj.rear ~= obj.beg
                el = obj.buffer{obj.beg};
            else
                el = [];
                warning('CQueue:NO_DATA', 'try to get data from an empty queue');
            end
        end
        function el = back(obj)                 % return the element at the rear of the queue
            if obj.rear == obj.beg
                el = [];
                warning('CQueue:NO_DATA', 'try to get data from an empty queue');
            else
                if obj.rear == 1
                    el = obj.buffer{obj.capacity};
                else
                    el = obj.buffer{obj.rear - 1};
                end
            end
        end
        function el = pop(obj)                  % pop the element at the front of the queue
            if obj.rear == obj.beg
                error('CQueue:NO_Data', 'Trying to pop an empty queue');
            else
                el = obj.buffer{obj.beg};
                obj.beg = obj.beg + 1;
                if obj.beg > obj.capacity, obj.beg = 1; end
            end
        end
        function remove(obj)                    % remove all elements from the queue
            obj.beg = 1;
            obj.rear = 1;
        end
        function display(obj)                   % display the queue contents
            if obj.size()
                if obj.beg <= obj.rear
                    for i = obj.beg : obj.rear-1
                        disp([num2str(i - obj.beg + 1) '-th element of the stack:']);
                        disp(obj.buffer{i});
                    end
                else
                    for i = obj.beg : obj.capacity
                        disp([num2str(i - obj.beg + 1) '-th element of the stack:']);
                        disp(obj.buffer{i});
                    end
                    for i = 1 : obj.rear-1
                        disp([num2str(i + obj.capacity - obj.beg + 1) '-th element of the stack:']);
                        disp(obj.buffer{i});
                    end
                end
            else
                disp('The queue is empty');
            end
        end
        function c = content(obj)               % return the queue contents as a cell array
            if obj.rear >= obj.beg
                c = obj.buffer(obj.beg:obj.rear-1);
            else
                c = obj.buffer([obj.beg:obj.capacity 1:obj.rear-1]);
            end
        end
    end
end
Reference:
list, queue, stack Structures in Matlab
I had a need for a queue-like data structure as well.
Fortunately, I had a limited number of elements (n).
They all enter the queue at some point, but only once.
If your situation is similar, you can adapt this simple algorithm using a fixed-size array and two indices.
queue = zeros(n, 1);
firstq = 1;
lastq = 1;
while (lastq >= firstq && firstq <= n)
    i = queue(firstq);      % pull the first element from the queue;
                            % you do not physically remove it from the array,
                            % thus saving time on memory access
    firstq = firstq + 1;
    % % % % % % % % % % % % % WORKER PART HERE
    % do stuff
    %
    % % % % % % % % % % % % % % % % % % % % %
    queue(lastq) = j;       % push element to the end of the queue
    lastq = lastq + 1;      % increment index
end
In the case where you need a queue only to store vectors (or scalars), it is not difficult to use a matrix along with the circshift() function to implement a basic queue with a fixed maximum length.
% Set the parameters of our queue
n = 4; % length of each vector in queue
max_length = 5;
% Initialize a queue of length of nx1 vectors
queue = NaN*zeros(n, max_length);
queue_length = 0;
To push:
queue = circshift(queue, 1, 2); % Move each column to the right
queue(:,1) = rand(n, 1); % Add new vector to queue
queue_length = min(max_length, queue_length + 1);
To pop (the oldest vector sits in column queue_length):
result = queue(:, queue_length);
queue(:, queue_length) = NaN;
queue_length = max(0, queue_length - 1);
