I'm looking for a way to slow down FramerJS animations by a certain factor.
In the Velocity animation framework it's possible to set Velocity.mock = 10 to slow everything down by a factor of 10.
Either the docs are lacking in this respect, or this feature doesn't currently exist and should really be implemented.
You can use
Framer.Loop.delta = 1 / 120
to slow down all the animations by a factor of 2. The default value is 1 / 60.
While Javier's answer works for most animations, it doesn't apply to delays. While not ideal, the method I've adopted is to set up a debugging variable and function, and pass every time-related value through it:
slowdown = 5
s = (ms) ->
  return ms * slowdown
Then use it like so:
Framer.Defaults.Animation =
  time: s 0.3
…and:
Utils.delay s(0.3), ->
  myLayer.sendToBack()
Setting the slowdown variable to 1 will use your standard timing (anything times 1 is itself).
I use the combination of DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT, GetFrameLatencyWaitableObject() and SetMaximumFrameLatency(UINT MaxLatency) to control the input-lag vs. smoothness trade-off of my application, as explained at https://learn.microsoft.com/en-us/windows/uwp/gaming/reduce-latency-with-dxgi-1-3-swap-chains. A value of 1 gives the lowest input lag, but sometimes I need a higher value to reduce the jitter/stutter/slowdown caused by the CPU and GPU not being able to work in parallel when the value is 1.
I want to be able to dynamically change this value based on the required input lag vs smoothness trade-off.
The problem I've noticed is that while it's possible to increase this value between frames by calling SetMaximumFrameLatency with a higher value than before, I see no effect when decreasing it by calling the function again with a value lower than the maximum ever set for this swap chain by a previous call. So if I ever set it to 2, it is not possible to set it back to 1 later. Is this a bug or an undocumented "feature"? Or did I do something wrong?
The API itself does not return any error or similar; from the API point of view it appears to apply the new lower value correctly.
To test this, I have BufferCount = 16 and then adjust the max latency value from 1 to 16, which makes the current latency obvious to the eye. It's therefore apparent that DXGI does not apply new lower values.
I've tried to call functions in different orders, close the handle for the waitable object and recreate a new one when modifying the latency, but nothing works. The only workaround so far I'm aware of is to fully recreate the swap chain, which is annoying due to the requirement to unbind all context objects etc.
When initializing the game, I create the swap chain and set an initial latency using SetMaximumFrameLatency.
The game loop is then basically this (a rough code sketch follows the list):
Call WaitForSingleObject on the waitable object handle.
Process inputs.
Render and present a frame.
If it's decided that the latency should change at this point, call SetMaximumFrameLatency with the new value.
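Roughly, in code, that looks like the following. This is only a simplified sketch, not my actual code: ProcessInput, RenderFrame and the latency variables are placeholders, and swapChain2 is assumed to be an IDXGISwapChain2* obtained from a swap chain created with DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT.

// swapChain2: IDXGISwapChain2*, from a swap chain created with
// DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT
HANDLE waitableObject = swapChain2->GetFrameLatencyWaitableObject();
swapChain2->SetMaximumFrameLatency(1);   // initial latency set at startup

while (running)
{
    // 1. Block until DXGI is ready to accept another frame.
    WaitForSingleObject(waitableObject, 1000);

    ProcessInput();              // 2. Process inputs.
    RenderFrame();               // 3. Render...
    swapChain2->Present(1, 0);   //    ...and present a frame.

    // 4. If requested, change the latency for subsequent frames.
    if (latencyChangeRequested)
        swapChain2->SetMaximumFrameLatency(newLatency);
}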
Other info:
Renderer: Direct3D 11
OS: Windows 11 21H2 version 22000.675
Graphics card: Intel UHD Graphics 620 / Nvidia GeForce MX150 (tried with both cards) with latest drivers, supporting WDDM 3.0
App type: Win32 desktop application
I've looked at this answer, which states that this problem might happen when the description files for the negative images are created with tools other than opencv_createsamples, but that is not the case here.
The training breaks off somewhere between the fourth and the seventh stage. In another post, someone suggested that this message means the classifier cannot be improved any further, but with only about 5 stages trained, that seems odd at best.
For training, I'm using numPos=800 while the vec file (60x60 px) contains 1200 positive samples. Moreover, I'm using 1491 negative samples (30x30 px). I've made all kinds of changes to the parameters, and none of them worked.
For the last attempt I used the parameters as follows:
cascadeDirName: 15stages
vecFileName: pos.vec
bgFileName: neg_dir.txt
numPos: 800
numNeg: 1491
numStages: 15
precalcValBufSize[Mb] : 1024
precalcIdxBufSize[Mb] : 1024
acceptanceRatioBreakValue : -1
stageType: BOOST
featureType: HAAR
sampleWidth: 60
sampleHeight: 60
boostType: GAB
minHitRate: 0.9999
maxFalseAlarmRate: 0.3
weightTrimRate: 0.9
maxDepth: 1
maxWeakCount: 100
mode: ALL
I had the same problem. After doing a lot of research, I found the parameters that should be supplied to opencv_traincascade.
If you are using a rectangular image, specify -w 24 -h 24. In addition, make sure you have more positives than negatives, and set -maxFalseAlarmRate 0.5.
That worked very well for me; hope it is useful for you too.
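For example, with your file names and sample counts, the call would look something like this (note that -w and -h have to match the size the .vec file was created with, so you'd need to regenerate the vec at 24x24 first):

opencv_traincascade -data 15stages -vec pos.vec -bg neg_dir.txt -numPos 800 -numNeg 1491 -numStages 15 -w 24 -h 24 -featureType HAAR -maxFalseAlarmRate 0.5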
I also had this problem before, but after I reduced the maxFalseAlarmRate parameter, e.g. to something smaller than 0.1, it worked. Hope this helps.
I'm plotting an animation of circles. It looks and works great as long as speed is set to a positive number. However, I want to set speed to 0.0. When I do that, something changes and it no longer animates. Instead, I have to click the 'x' on the window after each frame. I tried using combinations of plt.draw() and plt.show() to get the same effect as plt.pause(), but the frames don't show up. How do I replicate the functionality of plt.pause() precisely either without the timer involved or with it set to 0.0?
speed = 0.0001
plt.ion()
for i in range(timesteps):
    fig, ax = plt.subplots()
    for j in range(num):
        circle = plt.Circle((a[j], b[j]), r[j], color='b')
        fig.gca().add_artist(circle)
    plt.pause(speed)
    #plt.draw()
    #plt.show()
    plt.clf()
    plt.close()
I've copied the code of pyplot.pause() here:
def pause(interval):
    """
    Pause for *interval* seconds.

    If there is an active figure it will be updated and displayed,
    and the GUI event loop will run during the pause.

    If there is no active figure, or if a non-interactive backend
    is in use, this executes time.sleep(interval).

    This can be used for crude animation. For more complex
    animation, see :mod:`matplotlib.animation`.

    This function is experimental; its behavior may be changed
    or extended in a future release.
    """
    backend = rcParams['backend']
    if backend in _interactive_bk:
        figManager = _pylab_helpers.Gcf.get_active()
        if figManager is not None:
            canvas = figManager.canvas
            canvas.draw()
            show(block=False)
            canvas.start_event_loop(interval)
            return

    # No on-screen figure is active, so sleep() is all we need.
    import time
    time.sleep(interval)
As you can see, it calls start_event_loop, which starts a separate crude event loop for interval seconds. What happens when interval == 0 seems to be backend-dependent. For instance, for the WX backend a value of 0 means that this loop is blocking and never ends (I had to look in the code here, it doesn't show up in the documentation; see line 773).
In short, 0 is a special case. Can't you set it to a small value, e.g. 0.1 seconds?
The pause docstring above says that it can only be used for crude animation; you may have to resort to the animation module if you want something more sophisticated.
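If you do end up using it, a minimal FuncAnimation sketch could look like this (with made-up circle data standing in for your a, b and r arrays):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

num, timesteps = 5, 50
rng = np.random.default_rng(0)

fig, ax = plt.subplots()

def update(frame):
    # Redraw everything each frame, as in the original loop.
    ax.clear()
    ax.set_xlim(0, 10)
    ax.set_ylim(0, 10)
    # Placeholder data; substitute your per-frame centres and radii here.
    a = rng.uniform(1, 9, num)
    b = rng.uniform(1, 9, num)
    r = rng.uniform(0.2, 1, num)
    for j in range(num):
        ax.add_artist(plt.Circle((a[j], b[j]), r[j], color='b'))

ani = FuncAnimation(fig, update, frames=timesteps, interval=1, repeat=False)
plt.show()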
Quick query:
I've noticed that on the default progress bar, the file size seems to be calculated as bytes / 1000 / 1000 rather than bytes / 1024 / 1024.
Is this intentional, or a bug? Or possibly a setting I've missed?
For example, a file that Windows reports as 347 MB shows up in the progress bar as 364 MB while it's uploading.
The IEEE is pretty clear that 1 MB = 1000000 bytes. While some OSes don't follow this definition, such as Windows, others do, such as OS X. Here is one such reference: http://physics.nist.gov/cuu/Units/binary.html. It is clear that there are those in both camps (powers of 2 and powers of 10) that are willing to argue their side. That said, I'm for not changing the code as it follows a standard/codified definition.
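To make the arithmetic in the question concrete (a quick illustration, not code from the uploader):

size_bytes = 347 * 1024 * 1024       # what Windows displays as "347 MB" (binary megabytes)
print(size_bytes / 1000 / 1000)      # ~363.9, shown by the progress bar as 364 MB
print(size_bytes / 1024 / 1024)      # 347.0, the Windows-style figure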
os.time() in Luaj returns time in milliseconds, but according to the Lua documentation, it should return time in seconds.
Is this a bug in Luaj?
Also, can you suggest a workaround that will work with both Luaj (for Java) and real Lua (C/C++)? I have to use the same Lua source for both applications. (I can't simply divide by 1000, as the two return different time scales.)
Example from my Lua file:
local start = os.time()
while(true) do
  print(os.time() - start)
end
In C++, I get this output:
1
1
1
...(1 second passed)
2
2
2
In Java (using Luaj), I got:
1
...(terminated in Eclipse as fast as my finger could)
659
659
659
659
FYI, I tried this on Windows.
Yes, there's a bug in luaj.
The implementation just returns System.currentTimeMillis() when you call os.time(). It should really return something like (long)(System.currentTimeMillis()/1000.)
It's also worth pointing out that the os.date and os.time handling in luaj is almost completely missing. I would recommend that you assume that they've not been implemented yet.
Lua manual about os.time():
The returned value is a number, whose meaning depends on your system. In POSIX, Windows, and some other systems, this number counts the number of seconds since some given start time (the "epoch"). In other systems, the meaning is not specified, and the number returned by time can be used only as an argument to os.date and os.difftime.
So, any Lua implementation could freely change the meaning of os.time() value.
It appears that you've already confirmed it's a bug in LuaJ; as for the workaround, you can replace os.time() with your own version:
if (runningunderluaj) then
  local ostime = os.time
  os.time = function(...) return ostime(...)/1000 end
end
where runningunderluaj can check for some global variable that is only set under luaj. If that's not available, you can probably come up with your own check by comparing the results from calls to os.clock and os.time that measure time difference:
local s = os.clock()
local t = os.time()
while true do
  if os.clock()-s > 0.1 then break end
end
-- (at least) 100ms has passed
local runningunderluaj = os.time() - t > 1
Note: It's possible that os.clock() is "broken" as well. I don't have access to luaj to test this...
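Putting the two snippets together, the whole workaround might look like this (just the pieces above combined; untested):

-- Detect a millisecond-based os.time() and patch it if needed.
local s, t = os.clock(), os.time()
while os.clock() - s < 0.1 do end      -- busy-wait for (at least) 100ms
if os.time() - t > 1 then              -- far more than one "second" elapsed: running under luaj
  local ostime = os.time
  os.time = function(...) return ostime(...)/1000 end
end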
In luaj-3.0-beta2, this has been fixed to return time in seconds.
This was a bug in all versions of luaj up to and including luaj-3.0-beta1.