I'm trying to write AppleScript code that will click my mouse at random intervals of 1-2 seconds... I don't want the video game I'm playing to know or be able to tell that a robot is clicking for me, so the timing needs to be RANDOM... not every second or every 2 seconds, but every x seconds, where x is a constantly changing value between 1 and 2 seconds... Here is the code so far, but it clicks every 1 second:
on idle
    tell application "System Events"
        key code 87
    end tell
    return 1
end idle
I thought changing the "return 1" to "return random number from 1 to 2" would work. Something like this:
on idle
    tell application "System Events"
        key code 87
    end tell
    set randomDelay to random number from 1 to 2
    return randomDelay
end idle
but it didn't work /:
Change it to
random number from 1.0 to 2.0
If you give integers as the bounds for the random numbers, it will just select random integers. By giving floating point literals, AppleScript switches to giving a random floating point number in the range. From the documentation of random number in the StandardAdditions, it seems that the limits are both inclusive, which is strange for floats but isn't a problem in your case.
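The same integer-versus-float distinction shows up in other languages' random libraries too. As a loose Python analogy (these are Python's function names, not AppleScript's):

```python
import random

# integer bounds: an integer result, both endpoints inclusive
i = random.randint(1, 2)       # always exactly 1 or 2

# float bounds: a continuous value anywhere in the range
f = random.uniform(1.0, 2.0)   # e.g. 1.437...
print(i, f)
```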
So they say if you flip a coin 50 times and get heads all 50 times, the next flip is still 50/50, and the next two are still 1/4. Do you think, or know, whether this same principle applies to computer pseudo-random number generators? I theorize they're less likely to repeat the same number for long stretches.
I ran this a few times and the results are believable, but I'm wondering how many times I'd have to run it to get an anomaly output.
import random

def genString(iterations):
    mystring = ''
    for _ in range(iterations):
        mystring += str(random.randint(0, 9))
    return mystring

def repeatMax(mystring):
    tempchar = ''
    count = 0
    longest = 0  # avoid shadowing the built-in max()
    for char in mystring:
        if char == tempchar:
            count += 1
            if count > longest:
                longest = count
        else:
            count = 0
        tempchar = char
    return longest

for _ in range(10):
    stringer = genString(1000000)  # the original call passed no length
    print(repeatMax(stringer))
I got all 7's and a couple 6's. If I run this 1000 times, will it approximate a normal distribution or should I expect it to stay relatively predictable? I'm trying to understand the predictability of pseudo random number generation.
Failure to produce specific patterns is a typical weakness of PRNGs, but the probability of hitting a substantial run of repeated digits at random is so small it's hard to demonstrate that weakness.
It's perfectly reasonable for a PRNG to use only a 32-bit state, which (traditionally) means producing a sequence of four billion numbers and then repeating from the start again. In that case your sequence of 50 coin-flips coming out the same is probably never going to happen (four billion tries at something that has a one in a quadrillion chance is unlikely to succeed); but if it does, then it's going to appear way too often.
Superficially you're looking for k-dimensional equidistribution as a test for whether or not you can expect to find a prescribed pattern in the output without deeper analysis of the specific generator. If your generator claims at least 50-dimensional equidistribution then you're guaranteed to see the 50-heads state at least once.
However, if your generator emits 32-bit results but you only test whether each result maps to heads or tails, you have some chance at success even if the generator fails the k-dimension test, and that chance depends on the specifics of the generator and the mapping function.
If you adjust the implementation of your generator to return just one bit at a time, then you have an opportunity to try to squeeze 50 heads out of just 50 bits of state (or potentially as few as 18, but that generator would probably be faulty). Provided the generator visits all 2**50 possible states, one of those states will produce 50 heads in a row. You may get a few more heads when adjacent states start or end with more zeroes.
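To make the one-bit-at-a-time idea concrete, here is a small Python sketch (a generic maximal-length 16-bit Fibonacci LFSR, not any particular real generator). Because it visits every nonzero state, its output contains one run of 16 consecutive ones, the scaled-down analogue of the 50-heads run; sampling two full periods guarantees that run isn't split by where we happen to start:

```python
def lfsr16_bits(seed=0xACE1):
    # Fibonacci LFSR with taps x^16 + x^14 + x^13 + x^11 + 1 (maximal length)
    state = seed
    while True:
        yield state & 1                                   # bit shifted out
        fb = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (fb << 15)

gen = lfsr16_bits()
bits = [next(gen) for _ in range(2 * 65535)]              # two full periods

longest = run = 0
for b in bits:
    run = run + 1 if b else 0
    longest = max(longest, run)
print(longest)  # 16: one state is all ones, giving a run as long as the register
```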
I am interpreting some output from a serial port. The output uses the VT100 protocol, which uses control sequences to set the cursor location on screen. The sequence looks like this:
ESC[row;columnH
For example,
ESC[01;01H means set cursor to row 1, column 1.
But I see the following sequence when the column number exceeds two digits:
ESC[10;:0H
Note the extra ":" after the semicolon. This control sequence comes after ESC[10;99H, which means row 10, column 99.
My understanding is :0 = 100. But what if the column number is 200?
I don't think that's actually valid or, if it is, it's entirely by accident. The arguments passed to the CUP (cursor position) command (and many others involved in screen coordinates) are limited to one or two digits.
In the ASCII table, the digit 9 is followed by ':', so where 99 would represent 9 * 10 + 9, :0 may represent 10 * 10 + 0, or 100.
Assuming the bug holds up for higher numbers (something I'm not confident of), you're looking for 200, which would be 20 * 10 + 0 or probably D0 (D being the character ten higher than : in the ASCII table).
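If you do have to parse this buggy output, the decoding is mechanical: the tens "digit" has simply overflowed past '9' in the ASCII table. A small hypothetical Python helper, valid only for this particular bug:

```python
def decode_cell(cell):
    # hypothetical decoder: the tens "digit" may run past '9' into ':', ';', ... 'D', ...
    tens, ones = cell
    return (ord(tens) - ord('0')) * 10 + (ord(ones) - ord('0'))

print(decode_cell("99"))  # 99
print(decode_cell(":0"))  # 100
print(decode_cell("D0"))  # 200
```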
No, the relevant standards do not limit these parameters to two digits; for instance, a VT100 can address 24 rows by 132 columns, and column 132 already takes three digits.
Leading zeroes in the parameters are ignored. Likely, OP is reporting a problem (from some unmentioned program) which uses only two digits. That is not related to the terminal itself (except perhaps in the context of a bug report directed to a terminal emulator's developers).
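In other words, a conforming emitter has no reason to stay within two characters: CSI parameters are ordinary decimal numbers, so column 100 is simply ESC[10;100H. A quick Python sketch of correct CUP encoding:

```python
ESC = "\x1b"

def cup(row, col):
    # CUP: CSI row ; col H -- parameters are plain decimal numbers of any length
    return f"{ESC}[{row};{col}H"

print(repr(cup(10, 100)))  # '\x1b[10;100H'
```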
The resize program assumes that one's terminal is no larger than 999 by 999 when positioning the cursor "past" the lower-right corner of the screen. For those individuals who do not rely upon multiple pixels to discern characters, xterm does provide a font called "Unreadable", which could make such large screens attainable.
By the way, the source given in the question is not very good, although not the worst -- refer to vt100.net and ECMA-48.
I am trying to write VHDL code that will randomly blink four LEDs. After pushing the button that corresponds to the blinking LED, a score will be displayed on a 7-segment display after 60 seconds.
Can anyone help me with generating a random blink for the 4 LEDs?
Have a look at a Linear Feedback Shift Register. That'll give you a pseudo-random sequence of whatever length you want, and it's both effective and easy to implement in VHDL.
Depending on "how random" you need your sequence to be, you could for instance create a 16 bit long LFSR, and then use four arbitrarily selected bits from this to display (instead of using four consecutive bits, which might make the next value easier to guess, depending on the implementation).
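The register itself belongs in VHDL, but the bit-selection idea is easy to prototype first. Here is a Python sketch under assumed sizes (a 16-bit register with taps x^16 + x^14 + x^13 + x^11 + 1, and LED bits 1, 6, 11, 14; all of these are my choices, not from the answer):

```python
def step(state):
    # one clock tick of a 16-bit Fibonacci LFSR (maximal-length taps)
    fb = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (fb << 15)

def leds(state, positions=(1, 6, 11, 14)):
    # four arbitrarily chosen, non-adjacent state bits drive the four LEDs
    return tuple((state >> p) & 1 for p in positions)

state = 0xACE1  # any nonzero seed; an all-zero state would lock up
for tick in range(5):
    state = step(state)
    print(tick, leds(state))
```

In VHDL the step function becomes a clocked process shifting a std_logic_vector, with the feedback XOR on the same tap positions.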
The code examples are going to be in Lua, but the question is rather general; Lua is just the example.
for k = 0, 100 do
    ::again::
    local X = math.random(100)
    if X <= 30 then
        -- do something
    else
        goto again
    end
end
This code draws pseudorandom numbers between 1 and 100 (math.random(100) never returns 0), but it doesn't let the loop move on until a value of 30 or less has been drawn, so it effectively produces numbers between 1 and 30.
I'm trying to do this task without the goto statement:
for k = 0, 100 do
    local X = 100 -- could be declared before the "for"; the point is we need an X that starts above the threshold
    while X > 30 do -- IMPORTANT: the opposite of the "if X <= 30" condition above
        X = math.random(100)
    end
    -- do the same "something" as in the first example
end
Instead of the goto, this version keeps re-running the random number generation until I get a desired value. In general, I moved everything that sat between the main loop and the condition of the first example into the inner loop.
In theory it does the same as the first example, only without gotos, but I'm not sure about it.
Main question: are these programs equivalent? Do they do the same thing? If yes, which one is faster (more optimized)? If no, what's the difference?
It is bad practice to use Goto. Please see http://xkcd.com/292/
Anyway, I'm not much into Lua, but this looks simple enough;
For your first code: What you are doing is starting a loop to repeat 100 times. In the loop you make a random number between 0 and 100. If this number is less than or equal to 30, you do something with it. If this number is greater than 30, you actually throw it away and get another random number. This continues until you have 100 random numbers which will ALL be less than or equal to thirty.
The second code says: start a loop from 0 to 100, then set X to 100 and start an inner loop with this condition: as long as X is greater than 30, keep re-randomizing X. Only when X is 30 or less does the inner loop exit and your code perform some action. When it has performed that action 100 times, the program ends.
Sure, both codes do the same thing, but the first one uses a goto - which is bad practice regardless of efficiency.
The second code uses loops, but is still not efficient: there are two levels of loops, and the inner one runs until the pseudo-random generator cooperates, which can be arbitrarily slow (what if it happens to produce only numbers between 31 and 100 for a trillion iterations?). But this is also true of your first piece of code; its goto forms a loop that depends on pseudo-random number generation in exactly the same way.
TL;DR: strictly speaking about efficiency, I do not see one of these being more efficient than the other. I could be wrong, but it seems the same thing is going on.
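The equivalence, and the unbounded (though exponentially unlikely) worst case of rejection, is easy to check empirically. A Python analogue of both versions:

```python
import random

def rejection():
    # mirrors the Lua loops: draw 1..100 and retry until the value is <= 30
    while True:
        x = random.randint(1, 100)
        if x <= 30:
            return x

def direct():
    # one call, same distribution, no retry
    return random.randint(1, 30)

samples = [rejection() for _ in range(1000)]
print(min(samples), max(samples))  # stays within 1..30
```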
You can directly use math.random(lower, upper):
for k = 0, 100 do
    local X = math.random(1, 30) -- the same 1-30 range the rejection versions end up accepting
end
It's even faster.
As I see it, these pieces of code do the same thing, but using goto is rarely the best choice (in any programming language). For Lua, see the details here.
I am in need of a data storage type and algorithm for keeping track of the status of the last N items I have seen. Each item has a status of Pass or Fail, and the system I am monitoring will be deemed to have failed if M items in a row have failed. Once the system is deemed to have failed, I then need to scan back through the history and find the last window of width W in which all items had a "good" status.
For example, with M = 4 and W = 3:
1 Good
2 Good
3 Good
4 Good
5 Good |
6 Good |- Window of size 3 where all are good.
7 Good |
8 Bad
9 Bad
10 Good
11 Good
12 Bad
13 Good
14 Bad
15 Bad
16 Bad
17 Bad <== System is deemed bad at this point, so scan backwards to find the "Good" window.
I know that this is going to end up as something like a regular expression search, and I have vague recollections of Knuth floating up out of the dark recesses of my memory, so could anyone point me towards a simple introduction on how to do this? For what it's worth, I will be implementing this in C# on .NET 3.5, on a Windows XP system with 3 GB of RAM (and an i7 processor; sniff, the machine used to have Windows 7 and 8 GB of memory, but that is a story for TDWTF).
Finally, I will be scanning numbers of items in the hundreds of thousands to millions in any given run of this system. I won't need to keep track of the entire run, just everything up to the point where a system failure occurs; when that happens I can dump all my collected data and start the process over again. For each item I am tracking, I will have to keep at least the pass/fail status and a 10-character string, so I am also looking for suggestions on how to collect and maintain this data. Although I am tempted to say: "meh, it will all fit in memory even if the entire run passes at 100%, so it's off to an array for you!"
I know that this is going to end up in something like a regular expression search
The problem is, actually, much simpler. We can take advantage of the fact that we're searching for subsequences consisting only of bad results (or only good results).
Something like this should work
// how many consecutive bad results we have at this point
int consecutiveFailures = 0;
// same for good results
int consecutivePasses = 0;

for each result
    if result == 'pass' then
        consecutiveFailures = 0;
        ++consecutivePasses;
    else if result == 'fail' then
        consecutivePasses = 0;
        ++consecutiveFailures;
    end

    if consecutiveFailures == M
        // M consecutive failures, stop processing
        ...
    end

    if consecutivePasses >= W
        // record last set of W consecutive passes for later use
        ...
    end
end
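The question is C#, but as a language-neutral sketch here is the same counter idea in Python, extended to actually record the most recent all-pass window of width W (the function and variable names are mine, not from the question):

```python
def monitor(results, m, w):
    """Return (fail_index, last_good_window): the index completing M
    consecutive failures (or None), plus the (start, end) indices of the
    most recent run of W consecutive passes seen up to that point."""
    consecutive_failures = 0
    consecutive_passes = 0
    last_good_window = None
    for i, passed in enumerate(results):
        if passed:
            consecutive_failures = 0
            consecutive_passes += 1
            if consecutive_passes >= w:
                last_good_window = (i - w + 1, i)
        else:
            consecutive_passes = 0
            consecutive_failures += 1
            if consecutive_failures == m:
                return i, last_good_window
    return None, last_good_window

# the 17-item example from the question (True = Good), with M=4 and W=3
history = [True] * 7 + [False] * 2 + [True] * 2 + [False] + [True] + [False] * 4
print(monitor(history, 4, 3))  # (16, (4, 6)): item 17 fails it; items 5-7 are the window
```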