I originally noticed this issue in my completed app, but have installed the default react-native app to test, and I'm seeing the "dropped so far" number in the perf monitor constantly creep up even though nothing is happening.
Is this number supposed to increase constantly?
Yes, though it's not always constant.
(I'm assuming you mean constant as in a fixed value; if you just mean that it never stops, you can ignore the extra explanation below of how it works.)
To understand the logic of "dropped so far", you can look at the React Native codebase. You will find the code for the perf monitor in the FpsView.java file. In it, you can see which variable (droppedUIFrames) is used for the "dropped so far" figure (line 67). If you follow this all the way back, you get to the FPSMonitorRunnable class, which uses the mTotalFramesDropped variable to keep track of frames dropped so far (line 79). This class is just a loop updating the variables being reported. The line you'll be interested in is this one, on line 90:
mTotalFramesDropped += mFrameCallback.getExpectedNumFrames() - mFrameCallback.getNumFrames();
From this, you can see that yes, this value is a counter that simply increases and never gets reset while the perf monitor is running. You can also see that it isn't constant (a fixed value); in your case it probably just appears to climb steadily because you are on the "hello world" screen, where nothing interesting is happening.
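To make that concrete, here is a tiny standalone simulation of the same bookkeeping (a sketch in C with invented names and made-up per-interval frame counts, not the actual Java from FPSMonitorRunnable):

    #include <stdio.h>

    int main(void) {
        int total_frames_dropped = 0;                 /* the "dropped so far" value */
        int expected_per_interval = 59;               /* frames expected in one reporting interval at ~60 fps */
        int rendered_per_interval[] = {59, 57, 59, 52, 59};

        for (int i = 0; i < 5; i++) {
            /* mirrors mTotalFramesDropped += expected - actual; it is never reset */
            total_frames_dropped += expected_per_interval - rendered_per_interval[i];
            printf("interval %d: dropped so far = %d\n", i + 1, total_frames_dropped);
        }
        return 0;
    }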
So, based on Michael Cheng's answer, I delved into the RN code a bit more and dug out EXPECTED_FRAME_TIME, which is set to 16.9 (ms), à la the classic 60 fps magic number.
The reason the dropped-frames counter was constantly (i.e. continually) increasing is that RN expects to run at 60 fps and treats any framerate below that as dropped frames.
However, having tested this particular tablet with various framerate-testing apps, the tablet's native framerate appears to be 51.9 fps. I don't know why that is; it seems a particularly arbitrary number, but in all my testing the framerate never went above 52 and mostly hovered around 51.
So, to answer my question: "dropped so far" means how many frames have come in below the 60 fps target, and as for "should it increase continually?", yes, if the device is only capable of drawing at less than 60 fps anyway.
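To put rough numbers on it: the monitor expects about 1000 / 16.9 ≈ 59 frames per second, so a device that tops out around 52 fps clocks up roughly 7 "dropped" frames every second, even while the app sits idle on the hello-world screen.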
I'm implementing RayTracingInOneWeekend and I optimized it from 33m to 23s for a 384x216 image with the scene and parameters as given in the article. However, when I profile it, the entries column (the 5th column from the left in the screenshot below) changes on almost every run. How is that possible? In my program everything stays the same, even the random number generators, as the generators are created like this (you can see it on GitHub):
g = mkStdGen (i * width + j)
If width and height stay the same, then every g (one for each pixel) should stay the same as well. However, as you can see, the two screenshots have different values in the entries column.
What could be the reason behind this apparent impurity? Or is the profiler simply not able to gather all the information, so the numbers are not exact (meaning that in reality the frequency of function calls differs from the numbers shown above; the docs, however, don't say anything like that)?
My program builds with cabal v2-build -O2 --enable-profiling --enable-executable-profiling and I don't pass -prof -fprof-auto to ghc-options (I guess cabal takes care of that). I've also used -threaded and the parallel library.
I'm on GHC 8.6.5 and Cabal 3.2.
It looks like the profiler in multi-core mode does not run consistently; I'm not sure if that counts as a bug. I ran the program a couple of times without passing -N to the RTS, and now every time I see the same entries count.
Not sure if that proves that my program does not have any impurity. I'm still looking for a better and more plausible explanation (if there is one at all).
I'm running some lattice proofs through Prover9/Mace4. Prover9's saying Exit: Time limit. plus the message in the Title.
I've doubled the time limit from 60 to 120 seconds. Same message (in twice the time). The weird thing is:
there's only one statement to prove. That is, there's only one label(goal) in the report (what's with the "but not all"?)
it does seem to have completed the proof, in that the last line it shows is $F.
Mace4 can't find any counter-examples (I upped its time to 120 seconds).
I've found some GHits for that message, but they seem to be all in Chinese(?)
It's possible the axioms I've given are (mutually) recursive -- I'm trying to introduce a function and a nominated 'absorbing element' [**]; and that solving will need infinitary unification. Does Prover9 do that?
I'm happy to add the axioms and goal to this message. (I'm using a non-standard way to define the meet and join.) But first, are there any sanity checks I should go through?
[**] the absorbing element is neither lattice top nor lattice bottom; more like lattice left-corner. (The element will be lattice bottom just in case the lattice degenerates to two elements.) The function is a partial ordering 'at right angles' to top/bottom. The lattice I expect to be neither complemented nor distributive (again except when 2 elements).
I've reproduced this after much trying, but only by setting some strange option that I'm sure I wouldn't have touched. (The only option I usually change is the Time limit, and I Reset to defaults quite often, so that would have blatted any evidence.)
Here's my guess for what happened.
what's with the but not all?
You can enter multiple goals (provided they're all positive). [**]
With strange option settings, if Prover9 can prove the first but not the second, it'll keep trying until exhausted, but then report only the successful one, with a $F. result OK.
If you double the Time limit, it'll still prove the first and still keep on trying for the second -- taking twice the time for the same outcome.
Mace4 will come across the first goal, and use up its time trying for a counter-example. There isn't one because it's provable. Again, doubling its Time limit will get the same outcome after twice as long.
[Note **] I never intend to set multiple goals; but when I'm hacking/experimenting with axioms, I keep all the goals in the Goals: box so I can easily toggle commenting them in and out. I guess I didn't comment one out when I was uncommenting another.
The usual behaviour, as described in the manual, is that Prover9 reports success at the first goal it proves and doesn't go on to the other goals. If there are multiple provable goals, it seems to choose the easiest/quickest(?), irrespective of position in the file.
But with max_proofs set to more than the default of 1, Prover9 will keep trying. (There's also an auto_denials flag that has something to do with it that I don't understand.)
I've no idea how I set max_proofs -- I didn't recognise the Options/Limits sub-screen when I eventually found it. Weird.
I have a large, rather complicated procedural content generation Lua project. One thing I want to be able to do, for debugging purposes, is use a known random seed so that I can re-run the system and get the same results.
To that end, I print out the seed at the start of a run. The problem is, I still get completely different results each time I run it. Assuming the seed doesn't change anywhere else, this shouldn't be possible, right?
My question is, what other ways are there to influence the output of Lua's math.random()? I've searched through all the code in the project, and there's only one place where I call math.randomseed(), and I do that before I do anything else. I don't use the time or date for any calculations, so that wouldn't be influencing the results... What else could I be missing?
Updated on 2/22/16: monkey patching math.random & math.randomseed has oftentimes (but not always) output the same sequence of random numbers, but still not the same results. So I guess the real question now is: what behavior in Lua is indeterminate and could result in different output when the same code is run in sequence? Noting where it diverges, when it does, is helping me narrow it down, but I still haven't found it. (This code does NOT use coroutines, so I don't think it's a threading / race condition issue.)
randomseed uses the srandom/srand function, which "sets its argument as the seed for a new sequence of pseudo-random integers to be returned by random()".
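As a quick illustration of the underlying mechanism it wraps, the same seed always reproduces the same sequence; a minimal sketch (in C, not Lua):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        for (int run = 0; run < 2; run++) {
            srand(12345);                      /* same seed on both "runs" */
            printf("run %d:", run);
            for (int i = 0; i < 5; i++)
                printf(" %d", rand());         /* identical sequence each time */
            printf("\n");
        }
        return 0;
    }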
I can offer several possible explanations:
you think you call randomseed, but you do not (random will initialize the sequence for you in this case).
you think you call randomseed once, but you call it multiple times (or some other part of the code calls randomseed as well, possibly at different times in your sequence).
some other part of the code calls random (some number of times), which generates different results for your part of the code.
there is nothing wrong with the generated sequence, but you are misinterpreting the results.
your version of Lua has a bug in srandom/random processing.
there is something wrong with the srandom or random function on your system.
Having some information about your version of Lua and your system (in addition to the small example demonstrating the issue) would help in figuring out what's causing this.
Updated on 2016/2/22: It should be fairly easy to check; monkeypatch both math.randomseed and math.random and log all the calls and the values returned by the functions for two subsequent runs. Compare the results. If the results differ, you should be able to isolate why they differ and reproduce on a smaller example. You can also look at where the functions are called from using debug.traceback.
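The same idea in a language without monkeypatching is to route every RNG call through a logging wrapper and then diff the logs of two runs. A rough C analogue (the wrapper names are invented; the project in question is Lua, so this only illustrates the technique):

    #include <stdio.h>
    #include <stdlib.h>

    static FILE *rng_log;

    static void logged_srand(unsigned int seed) {
        fprintf(rng_log, "srand(%u)\n", seed);   /* record every (re)seed */
        srand(seed);
    }

    static int logged_rand(void) {
        int v = rand();
        fprintf(rng_log, "rand -> %d\n", v);     /* record every value handed out */
        return v;
    }

    int main(void) {
        rng_log = fopen("rng.log", "w");
        if (!rng_log) return 1;
        logged_srand(12345);
        for (int i = 0; i < 5; i++)
            logged_rand();
        fclose(rng_log);
        return 0;
    }

Running the program twice and diffing rng.log shows immediately whether the RNG itself diverges or whether the divergence comes from somewhere else.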
Correct, as stated in the documentation, 'equal seeds produce equal sequences of numbers.'
Immediately after setting the seed to a known constant value, output the result of a call to math.random; if this varies across runs, you know something is seriously wrong (corrupt library download, whack install, gamma ray hit your drive, etc.).
Assuming that the first value matches across runs, add another output midway through the code. From there, you can use a binary search to zero in on where things go wrong (i.e. first half or second half of the code block in question).
While you can & should use some intuition to find the error as you go, keep in mind that if intuition alone were enough, you would have already found it, so a bit of systematic elimination is warranted.
Revision to cover comment regarding array order:
If possible, use debugging tools. This SO post on detecting when the value of a Lua variable changes might help.
In the absence of tools, here's one way to roll your own for this problem:
A full debugging dump of any sizable array quickly becomes a mess that makes it tough to spot changes. Instead, I'd use a few extra variables & a test function to keep things concise.
Make two deep copies of the array. Let's call them debug01 & debug02, and call the original array original. Next, deliberately swap the order of two elements in debug02.
Next, build a function that compares two arrays, tests whether their elements match up, and returns/prints the index of the first mismatch if they do not (a small sketch follows the checklist below). Immediately after initializing the arrays, test them to ensure:
original & debug01 match
original & debug02 do not match
original & debug02 mismatch where you changed them
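Here is a sketch of such a compare function and the three checks above (in C rather than Lua, with the same illustrative array names):

    #include <stdio.h>

    /* Returns the index of the first mismatch, or -1 if the arrays match. */
    static int first_mismatch(const int *a, const int *b, int n) {
        for (int i = 0; i < n; i++)
            if (a[i] != b[i])
                return i;
        return -1;
    }

    int main(void) {
        int original[] = {10, 20, 30, 40};
        int debug01[]  = {10, 20, 30, 40};       /* faithful deep copy */
        int debug02[]  = {10, 30, 20, 40};       /* copy with two elements swapped */

        printf("original vs debug01: %d\n", first_mismatch(original, debug01, 4)); /* -1 */
        printf("original vs debug02: %d\n", first_mismatch(original, debug02, 4)); /*  1 */
        return 0;
    }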
I cannot stress enough the insanity of using an unverified (and thus, potentially bugged) test function to track down bugs.
Once you've verified the function works, you can again use a binary search to zero in on where things go off the rails. As before, balance the use of a systematic search with your intuition.
I am implementing an image encryption algorithm, and in one phase I would like to change the least significant bit of the pixel. As per steganography, there is a stego-key which can be used to overwrite the LSB of pixels. But how is the stego-key determined at the receiver end? Also, I would like to know if changing the least significant bit from 1 to 0 or 0 to 1 is also considered steganography?
But how is the stego-key determined at the receiver end?
Key management, or even encryption, is not specifically part of steganography. You may perform key agreement by hiding that as well, but again, steganography is only about the hiding of the information. Encryption may be used to make the message appear random, as well as adding an additional layer of security, though. Data that appears to be random may be easier to hide.
See the following definition from Wikipedia:
the practice of concealing messages or information within other non-secret text or data.
Also, I would like to know if changing the least significant bit from 1 to 0 or 0 to 1 is also considered steganography?
That is likely the case, yes. But note that on a completely blue background your message would still be visible; if encrypted, it shows up as random changes. In general though, if the chance of the least significant bit being set is more or less random, then it makes a prime candidate for steganography.
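For concreteness, flipping the least significant bit of one 8-bit colour channel looks like this (a C sketch; the function names are mine, and the question itself is language-agnostic):

    #include <stdio.h>
    #include <stdint.h>

    /* Write one message bit into the LSB of a colour channel, and read it back. */
    static uint8_t embed_bit(uint8_t channel, int bit) {
        return (uint8_t)((channel & 0xFEu) | (bit & 1));  /* clear the LSB, then set it */
    }

    static int extract_bit(uint8_t channel) {
        return channel & 1;
    }

    int main(void) {
        uint8_t blue  = 0xA7;                   /* 1010 0111 */
        uint8_t stego = embed_bit(blue, 0);     /* 1010 0110: channel changes by at most 1 */
        printf("original 0x%02X, stego 0x%02X, recovered bit %d\n",
               blue, stego, extract_bit(stego));
        return 0;
    }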
You might, however, question how often raw RGB (or any other lossless format) is exchanged where the pixels are more or less random. That in itself could be considered a hint that something strange is going on. As long as you try to hide the message, it would probably still be called steganography, though.
I have a question about the EXCEPTION_INT_OVERFLOW and EXCEPTION_INT_DIVIDE_BY_ZERO exceptions.
Windows will trap the #DE errors generated by the IDIV instruction and will end up generating an SEH exception with one of those 2 codes.
The question I have is how does it differentiate between the two conditions? The information about IDIV in the Intel manual indicates that it will generate #DE in both the "divide by zero" and "overflow" cases.
I took a quick look at the section on the #DE error in Volume 3 of the Intel manual, and the best I could gather is that the OS must be decoding the DIV instruction, loading the divisor argument, and then comparing it to zero.
That seems a little crazy to me though. Why would the chip designers not use a flag of some sort to differentiate between the 2 causes of the error? I feel like I must be missing something.
Does anyone know for sure how the OS differentiates between the 2 different causes of failure?
Your assumptions appear to be correct. The only information available on #DE is CS and EIP, which give the faulting instruction. Since the two status codes are different, the OS must be decoding the instruction to determine which one to raise.
I'd also suggest that the chip makers don't really need two separate interrupts for this case, since anything divided by zero is infinity, which is too big to fit into your destination register.
As for "knowing for sure" how it differentiates, all of those who do know are probably not allowed to reveal it, either to prevent people exploiting it (not entirely sure how, but jumping into kernel mode is a good place to start looking to exploit) or making assumptions based on an implementation detail that may change without notice.
Edit: Having played with kd I can at least say that on the particular version of Windows XP (32-bit) I had access to (and the processor it was running on) the nt!Ki386CheckDivideByZeroTrap interrupt handler appears to decode the ModRM value of the instruction to determine whether to return STATUS_INTEGER_DIVIDE_BY_ZERO or STATUS_INTEGER_OVERFLOW.
(Obviously this is original research, is not guaranteed by anyone anywhere, and also happens to match the deductions that can be made based on Intel's manuals.)
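If you want to watch the two status codes come out yourself, here is a small MSVC-specific C sketch (my own addition, not from either answer) that performs the division under a structured exception handler:

    #include <windows.h>
    #include <stdio.h>
    #include <limits.h>

    /* Perform a 32-bit signed division and return the SEH code it raises (0 if none). */
    static DWORD div_exception_code(int dividend, int divisor) {
        DWORD code = 0;
        __try {
            volatile int a = dividend, b = divisor;   /* volatile keeps the IDIV in place */
            volatile int q = a / b;
            (void)q;
        } __except (code = GetExceptionCode(), EXCEPTION_EXECUTE_HANDLER) {
            /* code captured in the filter expression above */
        }
        return code;
    }

    int main(void) {
        printf("1 / 0        -> 0x%08lX\n", div_exception_code(1, 0));        /* EXCEPTION_INT_DIVIDE_BY_ZERO */
        printf("INT_MIN / -1 -> 0x%08lX\n", div_exception_code(INT_MIN, -1)); /* EXCEPTION_INT_OVERFLOW */
        return 0;
    }

As the next answer observes, the code you actually get can also depend on how the operand is encoded, so don't lean on the distinction too heavily.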
Zooba's answer summarizes how Windows parses the instruction to find out what to raise.
But you cannot rely on the routine correctly choosing the code.
I observed the following on 64-bit Windows 7 with 64-bit DIV instructions:
If the operand (divisor) is a memory operand, it always raises EXCEPTION_INT_DIVIDE_BY_ZERO, regardless of the argument value.
If the operand is a register and the lower dword is zero, it raises EXCEPTION_INT_DIVIDE_BY_ZERO even if the upper half isn't zero.
Took me a day to find this out... Hope this helps.