Can someone advise what unit of time t_duration is in, below:
local timer = os.clock()
*insert script here*
t_duration = string.format("%.9f", os.clock() - timer)
I am getting conflicting answers elsewhere.
Have been unable to test properly myself.
The os.clock() value stored in timer is subtracted from the later os.clock() call in t_duration, and the result is formatted with a fixed 9 digits after the decimal point.
In short: it measures the runtime of *insert script here* in seconds (os.clock() reports CPU time used by the program), returned as a floating-point number and converted to a string with string.format().
PS: "seconds" here is not an integer; it is a floating-point number.
Example:
> print(os.clock())
1.892664
> print(string.format("%.9f", os.clock()))
1.911050000
With Lua I also learned a new number format: the hex float.
> print(string.format("%a", os.clock()))
0x1.f96638433d6c7p+0
> print(0x1.f96638433d6c7p+0)
1.974216
In my experience, MATLAB performs publish/subscribe operations with ROS slowly for some reason. I work with components defined in object classes, as in the test class shown below. Normally, objects of comparable structure are used to control mobile robots.
To quantify performance, I timed some individual operations and got the following results:
1x publishing a message + 1x simple subscriber callback : 3.7ms
Simply counting in a callback (per count): 2.1318e-03 ms
Creating a new message with msg1 = rosmessage(obj.publisher) adds 3.6-4.3ms per iteration
Pinging myself indicated a communication latency of 0.05 ms
The time required for a simple publish plus the start of a subscriber callback seems oddly slow.
I want to have multiple system components as objects in my workspace so that they respond to ROS topic updates or to timer events. The PC used for testing is not a monster, but it should not be garbage either.
Do you also think the measured times are unnecessarily large? They barely allow publishing a single topic at 200 Hz without doing anything else. Normally I have multiple lower-frequency topics (e.g. 20 Hz), but the total consumed time becomes significant.
Do you know any practices to make the system operate quicker?
What do you think of the OOP style of making control system components in general?
classdef subpubspeedMonitor < handle
% Use: call in matlab console, after initializing ros:
%
% SPM1 = subpubspeedMonitor()
%
% This will create an object which starts a set repetitive task upon creation
% and finally destructs itself after posting results in console.
properties
node
subscriber
publisher
timestart
messagetotal
end
methods
function obj = subpubspeedMonitor()
obj.node = ros.Node('subspeedmonitor1');
obj.subscriber = ros.Subscriber(obj.node,'topic1','sensor_msgs/NavSatFix',{@obj.rosSubCallback});
obj.publisher = ros.Publisher(obj.node,'topic1','sensor_msgs/NavSatFix');
obj.timestart = tic;
obj.messagetotal = 0;
msg1 = rosmessage(obj.publisher);
% Choose to evaluate subscriber + publisher loop or just counting
if 1
send(obj.publisher,msg1);
else
countAndDisplay(obj)
end
end
%% Test method one: repetitive publishing and subscribing
function rosSubCallback(obj,~,msg_) % ~3.7 ms per loop for a simple publish+subscribe action
% Latency to self is 0.05ms on average, according to "pinging" in terminal
obj.messagetotal = obj.messagetotal+1;
if obj.messagetotal <10000
%msg1 = rosmessage(obj.publisher); % this line adds 4.3000ms per loop
msg_.Longitude = 51; % this line adds 0.25000 ms per loop
send(obj.publisher,msg_)
else
% Display some results
timepassed = toc(obj.timestart);
time_per_pubsub = timepassed/obj.messagetotal
delete(obj);
end
end
%% Test method two: simply counting
function countAndDisplay(obj) % this costs 2.1318e-03 ms(!) per loop
obj.messagetotal = obj.messagetotal+1;
if obj.messagetotal <10000
%msg1 = rosmessage(obj.publisher); %adds 3.6ms per loop
%i = 1% adds 5.7532e-03 ms per loop
%msg1 = rosmessage("std_msgs/Bool"); %adds 1.5ms per loop
countAndDisplay(obj);
else
% Display some results
timepassed = toc(obj.timestart);
time_per_count_FCN = timepassed/obj.messagetotal
delete(obj);
end
end
%% Destructor
function delete(obj)
delete(obj.subscriber)
delete(obj.publisher)
delete(obj.node)
end
end
end
I am writing this question related to this one. In his reply, Marco gave me an excellent answer but, unfortunately, I am new to OpenModelica, so I need some further help.
I am actually using OpenModelica and not Dymola, so unfortunately I have to build the function that does this myself, and I am very new to the OpenModelica language.
So far, I have a model that simulates the physical behavior based on a system of DAEs. Now, I am trying to build what you suggested here:
With get time() you can build a function that: reads the system time as t_start, translates the model and simulates it for 0 seconds, reads the system time again as t_stop, and computes the difference between t_start and t_stop.
Could you please give me more details? Which command can I use to read the system time at t_start and to simulate for 0 seconds? To do this for both t_start and t_stop, do I need two different functions?
Once I have done this, do I have to call the function (or functions) inside the OpenModelica model whose execution time I want to measure?
Thank you so much again for your precious help!
Very best regards, Gabriele
From the other question:
I noticed that in Modelica there are different flags for the simulation time, but the time I get is very small compared to the time that elapses from when I press the simulation button to the end of the simulation (approximately measured with the clock of my phone).
The time that is reported is correct. Most of the time taken is not initialisation or simulation, but compilation. If you use the re-simulate option in OMEdit (right-click a result-file in the plot view for variables), you will notice the simulation is very fast.
$ cat e.mos
loadString("model M
Real r(fixed=true, start=2.0);
equation
der(r) = time;
end M;");getErrorString();
simulate(M);getErrorString();
$ omc e.mos
true
""
record SimulationResult
resultFile = "/mnt/data/#Mech/martin/tmp/M_res.mat",
simulationOptions = "startTime = 0.0, stopTime = 1.0, numberOfIntervals = 500, tolerance = 1e-06, method = 'dassl', fileNamePrefix = 'M', options = '', outputFormat = 'mat', variableFilter = '.*', cflags = '', simflags = ''",
messages = "LOG_SUCCESS | info | The initialization finished successfully without homotopy method.
LOG_SUCCESS | info | The simulation finished successfully.
",
timeFrontend = 0.004114061,
timeBackend = 0.00237546,
timeSimCode = 0.0008126780000000001,
timeTemplates = 0.062749837,
timeCompile = 0.633754155,
timeSimulation = 0.006627571000000001,
timeTotal = 0.7106012479999999
end SimulationResult;
""
OMEdit does not report these other numbers (time to translate and compile the model) as far as I know. On Windows, these times are quite big because linking takes longer.
I have to generate a random number (1 or 2) in Lua, and change this value every 3 seconds.
I have a variable, randomMode, that has to change every 3 seconds (to 1 or 2).
You could try making a kind of timer that changes the value. For example, the main program loop could change the variable every 3 seconds by using timestamps.
If you can't implement a proper timer, maybe just checking the timestamp since the last change is good enough. For example, this function re-randomizes the number on a call to GetRandomMode if more than 3 seconds have passed:
local lastChange = os.time()
local mode = math.random(1, 2)
function GetRandomMode()
local now = os.time()
if os.difftime(now, lastChange) > 3 then
lastChange = now
mode = math.random(1, 2)
end
return mode
end
I've been running a lot of scripts lately that iterate over 10k - 300k objects, and I'm thinking of writing some code that estimates the completion time of the script (they take 20-180 minutes). I've got to imagine though that there's something out there that does this already. Is there?
To Clarify (edit):
Were I to write code to do this, it would work by measuring how long it takes to perform "the operation" on a single object, multiplying that amount of time by the number of objects left, and adding it to the current time.
Granted, this would only work in situations where you have a script involving a single loop that takes up 99% of the script's total run time, and in which you could reasonably expect to calculate a semi-accurate average for each iteration of that loop. This is true of the scripts for which I'd like to estimate completion time.
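In code, a minimal sketch of that idea might look like this (objects and process_object are placeholders for the real collection and the per-object work):
# Time one representative object, then extrapolate over the remaining ones.
def run_with_estimate(objects)
  start = Time.now
  process_object(objects.first)          # measure a single iteration
  per_object = Time.now - start
  eta = Time.now + per_object * (objects.size - 1)
  puts "Estimated completion time: #{eta}"
  objects.drop(1).each { |obj| process_object(obj) }   # process the rest
end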
Have a look at the ruby-progressbar gem: https://github.com/jfelchner/ruby-progressbar
It generates a nice progressbar and estimates completion time (ETA):
example task: 67% |oooooooooooooooooooooo | ETA: 00:01:15
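For reference, a rough usage sketch; the option names follow the gem's README, and objects / process_object are placeholders for your own collection and per-object work:
require 'ruby-progressbar'

progressbar = ProgressBar.create(title: 'example task',
                                 total: objects.size,
                                 format: '%t: %p%% |%B| ETA: %e')

objects.each do |obj|
  process_object(obj)       # your per-object work
  progressbar.increment     # advances the bar and refreshes the ETA
end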
You can measure the time of each method within your script at whatever granularity you need and then sum the components, as described here.
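A minimal sketch of that idea with the standard library's Benchmark module (step_one and step_two are hypothetical methods standing in for parts of your script):
require 'benchmark'

timings = {}
timings[:step_one] = Benchmark.realtime { step_one }   # elapsed seconds as a Float
timings[:step_two] = Benchmark.realtime { step_two }

timings.each { |name, secs| puts format('%-10s %.3fs', name, secs) }
puts format('total      %.3fs', timings.values.sum)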
You let your process run and, after a set number of iterations, measure the elapsed time. You then use that value as an estimate of the time left. This ensures that the estimate is always derived dynamically from the current task.
This example is extra verbose, like a code double whopper with triple cheese:
# Some variables for this test
iterations = 1000
probe_at = (iterations * 0.1).to_i
time_total = 0
#======================================
iterations.times do |i|
time_start = Time.now
#you could yield here if this were a function
5000.times do # <tedius task simulation>
Math.sqrt(rand(200000))
end # <end of tedious task simulation>
time_total += time_taken = Time.now - time_start
if i == probe_at
iteration_cost = (time_total / probe_at)
time_left = iteration_cost * (iterations - probe_at)
puts "Time taken (ACTUAL): #{time_total} | iteration: #{i}"
puts "Time left (ESTIMATE): #{time_left} | iteration: #{i}"
puts "Estimated total: #{time_total + time_left} | iteration: #{i}"
end
if i == iterations - 1
puts "Time taken (ACTUAL): #{time_total} | iteration: #{i}"
end
end
You could easily rewrite this into a class or a method.
I've written a feature for my library Rubikon that displays a throbber (a spinning character, as you may have seen in other console apps) as long as some other code is running.
To test this feature I capture the output of the throbber in a StringIO and compare it with the expected value. As the throbber is only displayed while the other code is running, the contents of the IO get longer the longer the code runs. In my tests I do a simple sleep 1 and should have a constant 1-second delay. This works most of the time, but sometimes (apparently due to external factors like heavy load on the CPU) it fails, because the code doesn't run for 1 second but for a bit more, so the throbber prints a few additional characters.
My question is: Is there any possibility to test such time critical features in Ruby?
From your github repository, I found this test for the Throbber class:
should 'work correctly' do
ostream = StringIO.new
thread = Thread.new { sleep 1 }
throbber = Throbber.new(ostream, thread)
thread.join
throbber.join
assert_equal " \b-\b\\\b|\b/\b", ostream.string
end
I'll assume that a throbber iterates over ['-', '\', '|', '/'], backspacing before each write, once per second. Consider the following test:
should 'work correctly' do
ostream = StringIO.new
started_at = Time.now
ended_at = nil
thread = Thread.new { sleep 1; ended_at = Time.now }
throbber = Throbber.new(ostream, thread)
thread.join
throbber.join
duration = ended_at - started_at
iterated_chars = " -\\|/"
expected = ""
if duration >= 1
# After n seconds we should have n copies of " -\\|/", excluding \b for now
expected << iterated_chars * duration.to_i
end
# Next append the characters we'd get from working for fractions of a second:
remainder = duration - duration.to_i
expected << iterated_chars[0..((iterated_chars.length*remainder).to_i)] if remainder > 0.0
expected = expected.split('').join("\b") + "\b"
assert_equal expected, ostream.string
end
The last assignment of expected is a bit unpleasant, but I made the assumption that the throbber would write character/backspace pairs atomically. If this is not true, you should be able to insert the \b escape sequence into the iterated_chars string and remove the last assignment entirely.
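For illustration, a rough sketch of that alternative (still assuming the frame characters shown above):
# Bake the backspaces directly into the frame string so the final
# split/join interleaving step can be dropped.
iterated_chars = " \b-\b\\\b|\b/\b"
expected = ""
expected << iterated_chars * duration.to_i if duration >= 1
# The fractional-second slice above would likewise need to step over
# character/backspace pairs rather than single characters.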
This question is similar (I think, although I'm not completely sure) to this one:
Only a real-time operating system can give you such precision. You can assume Thread.Sleep has a precision of about 20 ms, so you could, in theory, sleep until the desired time minus the actual time is about 20 ms and THEN spin for 20 ms, but you'll have to waste those 20 ms. And even that doesn't guarantee that you'll get real-time results; the scheduler might just take your thread out just when it was about to execute the RELEVANT part (just after spinning).
The problem is not Ruby (probably; I'm no expert in Ruby), the problem is the real-time capabilities of your operating system.