Luaj os.time() returns milliseconds

os.time() in Luaj returns time in milliseconds, but according to the Lua documentation it should return time in seconds.
Is this a bug in Luaj?
Can you suggest a workaround that works with both Luaj (for Java) and real Lua (C/C++)? I have to use the same Lua source for both applications, so I can't simply divide the result by 1000, since the two return different time scales.
Example in my Lua file:
local start = os.time()
while (true) do
    print(os.time() - start)
end
In C++, I received this output:
1
1
1
...(1 second passed)
2
2
2
In Java (using Luaj), I got:
1
...(terminated in Eclipse as fast as my finger could)
659
659
659
659
FYI, I tried this on Windows.

Yes, there's a bug in luaj.
The implementation just returns System.currentTimeMillis() when you call os.time(). It should really return something like (long)(System.currentTimeMillis()/1000.)
It's also worth pointing out that the os.date and os.time handling in luaj is almost completely missing. I would recommend that you assume that they've not been implemented yet.
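If the shared Lua source also touches os.date, a defensive probe like the following sketch (plain Lua, nothing Luaj-specific assumed) can at least avoid hard failures where date handling is incomplete:
-- Probe os.date once; if the implementation cannot format dates,
-- fall back to the raw os.time() value.
local ok, stamp = pcall(os.date, "%Y-%m-%d %H:%M:%S")
if not ok or type(stamp) ~= "string" then
    stamp = tostring(os.time())
end
print(stamp)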

Lua manual about os.time():
The returned value is a number, whose meaning depends on your system. In POSIX, Windows, and some other systems, this number counts the number of seconds since some given start time (the "epoch"). In other systems, the meaning is not specified, and the number returned by time can be used only as an argument to os.date and os.difftime.
So any Lua implementation is free to change the meaning of the os.time() value.

It appears you've already confirmed that it's a bug in Luaj; as for the workaround, you can replace os.time() with your own version:
if (runningunderluaj) then
    local ostime = os.time
    os.time = function(...) return ostime(...)/1000 end
end
where runningunderluaj can check for some global variable that is only set under luaj. If that's not available, you can probably come up with your own check by comparing the results from calls to os.clock and os.time that measure time difference:
local s = os.clock()
local t = os.time()
while true do
    if os.clock() - s > 0.1 then break end
end
-- (at least) 100 ms has passed
local runningunderluaj = os.time() - t > 1
Note: It's possible that os.clock() is "broken" as well. I don't have access to luaj to test this...
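Putting the detection and the patch together, a minimal sketch (assuming os.clock() behaves normally and the Luaj scale is exactly 1000x):
-- Detect a millisecond-based os.time() and patch it so the rest of the
-- script can assume seconds.
local s = os.clock()
local t = os.time()
while os.clock() - s < 0.1 do end      -- busy-wait roughly 100 ms
if os.time() - t > 1 then              -- far more than one "second" passed: milliseconds
    local ostime = os.time
    os.time = function(...) return ostime(...) / 1000 end
end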

In luaj-3.0-beta2, this has been fixed to return time in seconds.
This was a bug in all versions of luaj up to and including luaj-3.0-beta1.

Related

lua math.random first randomized number doesn't reroll

So I'm new to Lua and am writing a simple guess-the-number script, but I've found a weird quirk with math.random and I would like to understand what's happening here.
So I create a random seed with math.randomseed(os.time()), but when I go to get a random number, like this:
correctNum = math.random(10)
print(correctNum),
it always gets the same random number every time I run it, unless I do it twice (irrespective of the arguments given):
random1 = math.random(10)
print(random1)
random2 = math.random(10)
print(random2),
in which case the first random number will never reroll on every execution, but the second one will.
Just confused about how randomization works in Lua and would appreciate some help.
Thanks,
-Electroshockist
Here is the full working code:
math.randomseed(os.time())
random1 = math.random(10)
print(random1)
random2 = math.random(10)
print(random2)
repeat
    io.write "\nEnter your guess between 1 and 10: "
    guess = io.read()
    if tonumber(guess) ~= random2 then
        print("Try again!")
    end
    print()
until tonumber(guess) == random2
print("Correct!")
I guess you are calling the script twice within the same second. The resolution of os.time() is one second, i.e. if you are calling the script twice in the same second, you start with the same seed.
os.time ([table])
Returns the current time when called without arguments, or a time representing the date and time specified by the given table. This table must have fields year, month, and day, and may have fields hour, min, sec, and isdst (for a description of these fields, see the os.date function).
The returned value is a number, whose meaning depends on your system. In POSIX, Windows, and some other systems, this number counts the number of seconds since some given start time (the "epoch"). In other systems, the meaning is not specified, and the number returned by time can be used only as an argument to date and difftime.
Furthermore you are rolling a number between 1 and 10, so there is a 0.1 chance that you are hitting 4 (which is not that small).
For better methods to seed random numbers, take a look here: https://stackoverflow.com/a/31083615
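One common pattern along those lines (just a sketch, the linked answer covers more robust options) is to mix a finer-grained value into the seed and discard the first few draws:
-- Mix os.clock() into the seed so two runs within the same second differ,
-- then throw away a few draws, which are poorly mixed on some platforms.
math.randomseed(os.time() + math.floor(os.clock() * 1000000))
for _ = 1, 3 do math.random() end

correctNum = math.random(10)
print(correctNum)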

How to slow down Framer animations

I'm looking for a solution to slow down FramerJS animations by a certain amplitude.
In the Velocity animation framework it's possible to do Velocity.mock = 10 to slow down everything by a factor of 10.
Either the docs are lacking in this respect, or this feature doesn't currently exist and should really be implemented.
You can use
Framer.Loop.delta = 1 / 120
to slow down all the animations by a factor of 2. The default value is 1 / 60.
While Javier's answer works for most animations, it doesn't apply to delays. While not ideal, the method I've adopted is to set up a debugging variable and function, and pass every time-related value through it:
slowdown = 5
s = (ms) ->
    return ms * slowdown
Then use it like so:
Framer.Defaults.Animation =
    time: s 0.3
…and:
Utils.delay s(0.3), ->
    myLayer.sendToBack()
Setting the slowdown variable to 1 will use your standard timing (anything times 1 is itself).

local vs global in Lua

Every source agrees on this point:
the access to local variables is faster than to global ones
In practical use, the main difference is how the variable is handled, since it is limited to its scope and not accessible from any point of the code.
In theory, a local variable is safe from illegal alteration because it is not accessible from the wrong place and, even better, looking up the variable is much more performant.
Now I wonder about the details of that concept:
How does it technically work that some parts of the code can access a variable and others cannot?
How much is the performance improved?
But the main question is:
Let's say I have a variable bazinga = "So cool." and want to change it from somewhere.
Since the variable is global, I can do this easily.
But now, if it is declared local and I am out of scope, what performance effort is made to gain access if I hand the variable over through X functions like this:
function func_3(bazinga)
    func_N(bazinga)
end
function func_2(bazinga)
    func_3(bazinga)
end
function func_1()
    local bazinga = "So cool."
    func_2(bazinga)
end
Up to which point does the local variable keep being more performant, and why?
I'm asking because maintaining code in which objects are handed over through many functions gets messy, and I want to know if it's really worth it.
In theory, a local variable is safe from illegal alteration because it is not accessible from the wrong place and, even better, looking up the variable is much more performant.
A local variable is not safe from anything in a practical sense. This concept is part of lexical scoping – the method of name resolution that has some advantages (as well as disadvantages, if you like) over dynamic and/or purely global scoping.
The root of the performance difference is that in Lua locals are just stack slots, indexed by an integer offset computed once at compile time (i.e. at load()). Globals, however, are really keys into the globals table, which is a pretty regular table, so any access is a non-precomputed lookup. All this depends on implementation details and may vary across different languages or even implementations (as someone already noted, LuaJIT is capable of optimizing many things, so YMMV).
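A concrete consequence of this (a small sketch): caching a frequently used global in a local before a hot loop replaces repeated table lookups with cheap local accesses:
local floor = math.floor       -- one table lookup here...

local sum = 0
for i = 1, 1000000 do
    sum = sum + floor(i / 2)   -- ...and a plain stack-slot access here
end
print(sum)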
Now I wonder about the details of that concept: How does it technically work that some parts of the code can access a variable and others cannot? How much is the performance improved?
Technically, in 5.1 the globals table is a special table with special opcodes that access it; 5.2 removed the global opcodes and introduced an _ENV upvalue per function. (What we call globals are actually environment variables, because lookups go into the function's environment, which may be set to something other than the globals table, but let's not change the terminology on the fly.) So, speaking in 5.2 terms, any global is just a key-value pair in the globals table that is accessible in every function through a lexically scoped variable.
Now on to locals and lexical scoping. As you already know, local variables are stack slots. But what if our function uses a variable from an outer scope? In that case a special block is created that holds the variable, and it becomes an upvalue. An upvalue is a sort of seamless pointer to the original variable that prevents it from being destroyed when its scope is over (local variables generally cease to exist when you leave the scope, right?).
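For illustration, a small sketch of an upvalue in action (make_counter and next_id are made-up names):
local function make_counter()
    local counter = 0              -- a plain local (stack slot) of make_counter
    return function()
        counter = counter + 1      -- accessed as an upvalue by the closure
        return counter
    end
end

local next_id = make_counter()     -- 'counter' now outlives make_counter's scope
print(next_id())  --> 1
print(next_id())  --> 2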
But the main question is: Let's say I have a variable bazinga = "So cool." and want to change it from somewhere. Since the variable is global, I can do this easily. But now, if it is declared local and I am out of scope, what performance effort is made to gain access if I hand the variable over through X functions like this: .....
Up to which point does the local variable keep being more performant, and why?
In your snippet, it is not a variable that gets passed down the call stack, but the value "So cool." (which is a pointer into the heap, as are all other garbage-collectible values). The local variable bazinga was never passed to any function, because Lua has no concept of var-parameters (Pascal) or pointers/references (C/C++). Each time you call a function, all arguments become its local variables, and in our case bazinga is not a single variable, but a bunch of stack slots in different stack frames that all hold the same value – the same pointer into the heap, with the "So cool." string at that address. So there is no penalty for each level of the call stack.
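To illustrate (a sketch with a made-up helper name): reassigning the parameter inside a callee only rebinds that function's own stack slot and never touches the caller's variable:
local function try_to_change(s)
    s = "Changed."                 -- rebinds the callee's own stack slot only
    return s
end

local bazinga = "So cool."
print(try_to_change(bazinga))      --> Changed.
print(bazinga)                     --> So cool. (caller's slot untouched)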
Before going into any comparison I'd want to mention that your worries are probably premature: write your code first, then profile it, and then optimize based on that. It may be difficult to optimize things after the fact in some cases, but this is not likely to be one of those cases.
Access to local variables is faster because access to global variables includes table lookup (whether in _G or in _ENV). LuaJIT may optimize some of that table access, so the difference may be less noticeable there.
You don't need to trade away ease of access in this case, as you can always use functions that access upvalues to keep local variables available:
local myvar
function getvar() return myvar end
function setvar(val) myvar = val end
-- elsewhere
setvar('foo')
print(getvar()) -- prints 'foo'
Using getvar is not going to be faster than accessing myvar as a global variable, but this gives you an option to use myvar as local and still have access to it from other files (which is probably why you'd want it to be a global variable).
You can test the performance of locals vs globals yourself with os.clock(). The following code was tested on a 2.8 GHz quad core running inside a virtual machine.
-- Dedicate memory:
local start = os.clock()
local timeend = os.clock()
local diff = timeend - start
local difflocal = {}
local diffglobal = {}
local x, y = 1, 1 -- Locals
a, b = 1, 1 -- Globals

-- 10 tests:
for i = 0, 10, 1 do
    -- Start
    start = os.clock()
    for ii = 0, 100000, 1 do
        y = y + ii
        x = x + ii
    end
    timeend = os.clock()
    -- Stop
    diff = (timeend - start) * 1000
    table.insert(difflocal, diff)

    -- Start
    start = os.clock()
    for ii = 0, 100000, 1 do
        b = b + ii
        a = a + ii
    end
    timeend = os.clock()
    -- Stop
    diff = (timeend - start) * 1000
    table.insert(diffglobal, diff)
end

print(a)
print(b)
print(table.concat(difflocal, " ms, "))
print(table.concat(diffglobal, " ms, "))
Prints:
55000550001
55000550001
2.033 ms, 1.979 ms, 1.97 ms, 1.952 ms, 1.914 ms, 2.522 ms, 1.944 ms, 2.121 ms, 2.099 ms, 1.923 ms, 2.12
9.649 ms, 9.402 ms, 9.572 ms, 9.286 ms, 8.767 ms, 10.254 ms, 9.351 ms, 9.316 ms, 9.936 ms, 9.587 ms, 9.58

Is there a way to speed up the library loading in R?

I have an Rscript that loads ggplot2 in its first line.
Though loading a library doesn't take much time, this script may be executed from the command line millions of times, so speed really matters to me.
Is there a way to speed up this loading process?
Don't restart -- keep a persistent R session and just issue requests to it. Something like Rserve can provide this, and for example FastRWeb uses it very well -- with millisecond round-trips for chart generation.
As an addition to #MikeDunlavey's answer:
Actually, both library and require check whether the package is already loaded.
Here are some timings with microbenchmark I get:
> microbenchmark (`!` (exists ("qplot")),
`!` (existsFunction ('qplot')),
require ('ggplot2'),
library ('ggplot2'),
"package:ggplot2" %in% search ())
## results reordered with descending median:
Unit: microseconds
expr min lq median uq max
3 library("ggplot2") 259.720 262.8700 266.3405 271.7285 448.749
1 !existsFunction("qplot") 79.501 81.8770 83.7870 89.2965 114.182
5 require("ggplot2") 12.556 14.3755 15.5125 16.1325 33.526
4 "package:ggplot2" %in% search() 4.315 5.3225 6.0010 6.5475 9.201
2 !exists("qplot") 3.370 4.4250 5.0300 6.2375 12.165
For comparison, loading for the first time:
> system.time (library (ggplot2))
user  system elapsed
0.284 0.016 0.300
(these are seconds!)
In the end, as long as you don't need the factor of ~3 (≈10 μs) that "package:ggplot2" %in% search() saves over require, I'd go with require; otherwise I'd use the %in% search().
What Dirk said, plus you can use the exists function to conditionally load a library, as in
if ( ! exists( "some.function.defined.in.the.library" )){
    library( the.library )
}
So if you put that in the script you can run the script more than once in the same R session.

What does C "Sleep" function (capital "S") do on a Mac?

Note the capital "S" in Sleep. Sleep with a capital "S" is a standard function that sleeps milliseconds on the PC. On Mac OS X, there is no such symbol. However, the Xcode linking environment seems to find something to link it to. What is it?
Well, it's an old, old Carbon function (in the CoreServices / OSServices framework) that puts the computer to sleep. I can't find any documentation for it.
Sleep and Xcode
sleep(int) is a function from the Unix system underlying the Mac, known as Darwin.
Here is the ManPage for sleep
Essentially it is a C call that lets you tell the computer to sleep for 'int' number of seconds.
Alternatively you can use usleep(unsigned int), which will sleep for the given number of microseconds; one second is 1000 * 1000 microseconds, and one millisecond is 1000 microseconds.
Here is the ManPage for usleep
Both of these functions are wrapped to allow you access to the underlying "C/C++" methods that a normal C/C++ developer would use.
Here is an equivalent code example:
NSTimeInterval sleepTime = 2.0; //Time interval is a double containing fractions of seconds
[NSThread sleepForTimeInterval:sleepTime]; // This will sleep for 2 seconds
sleep((int)sleepTime); // This will also sleep for 2 seconds
If you wish to have more granularity, you will need usleep(unsigned int), which gives you much more precision.
NSTimeInterval sleepTime = 0.2; // This is essentially 2 tenths of a second
[NSThread sleepForTimeInterval:sleepTime]; // This will sleep for 2 tenths of a second;
usleep((unsigned int)(sleepTime * 1000 * 1000)); // This will also sleep for 2 tenths of a second
I hope that helps
The equivalent to sleep should be
[NSThread sleepForTimeInterval:5.0];
However, this is in seconds. To use milliseconds, I think you have to use usleep(num * 1000), where num is the number of milliseconds.
But I don't know what Sleep(...) does
On the Mac, under OS X, there is no such symbol.
I don't think there is such a symbol in Classic Mac OS either - I even looked in my ancient copy of THINK Reference.
I would also be surprised to find a Sleep (with a capital S) function in C, since many people name C functions using all lower case.
Were you prompted to ask the question because you're getting a link error?
There is usleep().
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

static void *threadFunc(void *arg) { return NULL; } /* stub so the example compiles */

int main(void)
{
    pthread_t pth;
    int i = 0;
    pthread_create(&pth, NULL, threadFunc, "foo");
    while (i < 100)
    {
        usleep(1);
        printf("main is running...\n");
        ++i;
    }
    printf("main waiting for thread to terminate...\n");
    pthread_join(pth, NULL);
    return 0;
}
Are you using any other libraries in your project?
I'm getting compile errors with both Cocoa and Carbon using Apple's template projects; however, I notice that Sleep functions (with that definition) are a feature of both the SDL and SFML cross-platform libraries, and perhaps many others.
Have you tried your code example on a template project using only Apple's libraries?
It could be that Sleep() is a function in something else you are linking to.
