Swift: The smallest executable for-loop - xcode

If lowerBound is greater than upperBound, your code will crash.
Why does this run (nothing is output, but it still runs; and since the bounds are integers rather than floating point, ..<1 would seemingly be the same as ...0)...
for i in 1..<1 {}
But this doesn't...
for i in 1...0 {}
?

The former expresses "starting at 1 and incrementing by one, if the value is less than one, include it." That is a set with no values in it, but it's a reasonable thing to say.
The latter expresses "starting at 1 and incrementing by one, until the value equals 0, include it." That is really a set of all positive integers, but in practice you definitely didn't mean that, so it is instead explicitly defined to be an error.
Another way to say the same thing is to consider the former to be all integers greater than or equal to 1 and also less than 1. Again, that's an empty set.
The latter is a set of all values greater than or equal to 1 and less than or equal to 0. That's also an empty set, but almost certainly not what you meant, so it's defined to be an error.

The error message tells you the answer:
Can't form Range with upperBound < lowerBound
That is the only rule that matters.
How the range gets formed is irrelevant to that rule. It doesn't matter whether the operator that forms the range is ... or ..<. All that matters is that when we try to obey the operator and instantiate the range, it must turn out that the upper bound is not smaller than the lower bound.
Well, in 1..<1, the upper bound is not smaller than the lower bound. So it's a legal range.
It is also an "empty" range; it contains no integer (neither 0, nor 1, nor 2, nor any other integer). But it's still a range.
Now, if you think about it, that's a very valuable thing. It doesn't look valuable in your example, because you're using literals. But when the lower bound and upper bound come from variables, it's a very good thing that lo..<hi doesn't crash in the corner case where lo and hi happen to be equal! That case arises a lot, and for good reasons.
For example, consider cycling thru the elements of an array. If the array is empty, its indices are (you guessed it) 0..<0. You want it to be legal to cycle thru this array. Nothing happens, but it's not illegal. And that's just what this rule says.
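The same corner case can be seen in Python, whose half-open range(lo, hi) mirrors Swift's lo..<hi when the bounds are equal (though note that, unlike Swift's closed range, Python's range(1, 0) is simply empty rather than an error):

```python
arr = []

# An empty list's index range is the empty half-open range 0..<0:
for i in range(0, len(arr)):        # range(0, 0): legal, iterates zero times
    raise AssertionError("never reached")

# With variable bounds, lo == hi is a common, harmless corner case:
lo, hi = 3, 3
assert list(range(lo, hi)) == []
```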


Is there a better way to generate all equal arithmetic sequences using numbers 1 to 10?

Problem:
The numbers from 1 to 10 are given. Put the equal sign (somewhere between them) and any arithmetic operators {+ - * /} so that a perfect integer equality is obtained (both the final result and the partial results must be integers).
Example:
1*2*3*4*5/6+7=8+9+10
1*2*3*4*5/6+7-8=9+10
My first idea to resolve this was using backtracking:
Generate all possibilities of putting operators between the numbers
For one such possibility replace all the operators, one by one, with the equal sign and check if we have two equal results
But this solution takes a lot of time.
So, my question is: Is there a faster solution, maybe something that uses the operator properties or some other cool math trick ?
I'd start with the equals sign. Pick a possible location for that, and split your sequence there. For left and right side independently, find all possible results you could get for each, and store them in a dict. Then match them up later on.
Finding all 226 solutions took my Python program, based on this approach, less than 0.15 seconds. So there certainly is no need to optimize further, is there? Along the way, I computed a total of 20683 subexpressions for a single side of one equation. They are fairly well balanced: 10327 expressions for left-hand sides and 10356 expressions for right-hand sides.
If you want to be a bit more clever, you can try to reduce the places where you even attempt division. In order to allow for division without remainder, the prime factors of the divisor must be contained in those of the dividend. So the dividend must be some product, and that product must contain the factors of the number by which you divide. 2, 3, 5 and 7 are prime numbers, so they can never be such divisors. 4 will never have two even numbers before it. So the only possible divisions are 2*3*4*5/6, 4*5*6*7/8 and 3*4*5*6*7*8/9. But I'd say it's far easier to check whether a given division is possible as you go, without any need for cleverness.
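A sketch of this meet-in-the-middle idea in Python, assuming strict left-to-right evaluation (which the "partial results must be integer" condition suggests); the function names are my own:

```python
from itertools import product

OPS = "+-*/"

def side_results(nums):
    """All (value, expression) pairs reachable from nums with
    left-to-right evaluation and integer-only partial results."""
    out = []
    for ops in product(OPS, repeat=len(nums) - 1):
        val, expr, ok = nums[0], str(nums[0]), True
        for op, n in zip(ops, nums[1:]):
            if op == "+":
                val += n
            elif op == "-":
                val -= n
            elif op == "*":
                val *= n
            elif val % n == 0:          # division, allowed only when exact
                val //= n
            else:
                ok = False
                break
            expr += op + str(n)
        if ok:
            out.append((val, expr))
    return out

def solutions(nums):
    """Place '=' at every split point, bucket each side's results by
    value in a dict, then match up equal values."""
    sols = []
    for split in range(1, len(nums)):
        left, right = {}, {}
        for val, expr in side_results(nums[:split]):
            left.setdefault(val, []).append(expr)
        for val, expr in side_results(nums[split:]):
            right.setdefault(val, []).append(expr)
        for val in left.keys() & right.keys():
            sols += [l + "=" + r for l in left[val] for r in right[val]]
    return sols

sols = solutions(list(range(1, 11)))
```

With this evaluation convention both example equalities from the question are found; the exact solution count depends on the convention assumed.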

How to represent -infinity in programming

How can I represent -infinity in C++, Java, etc.?
In my exercise, I need to initialize a variable with -infinity to show that it's a very small number.
When computing -infinity - 3 or -infinity + 5, it should also result in -infinity.
I tried initializing it with INT_MIN, but when I compute INT_MIN - 1 I get the upper limit, so I can't make a test like: if(value < INT_MIN) var = INT_MIN;
So how can I do that?
You cannot represent infinity with integers[1]. However, you can do so with floating point numbers, i.e., float and double.
You list several languages in the tags, and they all have different ways of obtaining the infinity value (e.g., C99 defines INFINITY in math.h, if infinity is available with that implementation, while Java has POSITIVE_INFINITY and NEGATIVE_INFINITY in Float and Double classes). It is also often (but not always) possible to obtain infinity values by dividing floating point numbers by zero.
[1] Excepting the possibility that you could wrap every arithmetic operation on your integers with code that checks for a special value that you treat as infinity. I wouldn't recommend this.
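For comparison, the same IEEE 754 infinities are available in Python's float, and they satisfy exactly the arithmetic the question asks for:

```python
import math

neg_inf = float("-inf")            # equivalently: -math.inf

# Arithmetic with finite numbers leaves infinity unchanged:
assert neg_inf - 3 == neg_inf
assert neg_inf + 5 == neg_inf

# It compares below any finite number, however small:
assert neg_inf < -10**100

# Note: unlike languages where 1.0/0.0 yields inf, Python raises
# ZeroDivisionError for float division by zero.
assert math.isinf(neg_inf)
```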
You can have -Infinity as a floating point literal (at least in Java):
double negInf = Double.NEGATIVE_INFINITY;
It is implemented according to the IEEE754 floating point spec.
If there was the possibility that a number was not there, instead of picking a number from its domain to represent 'not there', I would pick a type with both every integer I care about, and a 'not there' state.
A (deferred) C++1y proposal for optional is an example of that: an optional<int> is either absent, or an integer. To access the integer, you first ask if it is there, and if it is you 'dereference' the optional to get it.
Making infectious optionals: ones that, on almost any binary operation, infect the result if either value is absent, should be an easy extension of this idea.
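A minimal Python sketch of such an 'infectious' absent value, using None for the absent state (the helper name is my own):

```python
def opt_add(a, b):
    """Add two optional ints: if either operand is absent (None),
    absence infects the result."""
    if a is None or b is None:
        return None
    return a + b

assert opt_add(2, 3) == 5
assert opt_add(None, 3) is None     # absence propagates
```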
You could define a number as pseudo-negative-infinity and, when adding or subtracting something from a number, first check whether the variable equals that pseudo-number. If so, just leave it as it was.
But you might find library functions giving you that functionality already implemented, e.g., Double in Java or std::numeric_limits in C++.
If you want to represent the minimum value you can use, for example
for integers
int a = Integer.MIN_VALUE;
(Note that for Java's floating-point wrappers, MIN_VALUE is the smallest positive value, so for a double you'd want -Double.MAX_VALUE or Double.NEGATIVE_INFINITY instead.)
You can't truly represent an infinite value because you've got to store it in a finite number of bits. There are symbolic versions of infinity in certain types (e.g. in the typical floating point specification), but it won't behave exactly like infinity in the strict sense. You'll need to include some additional logic in your program to emulate the behaviour you need.

Understanding the concept behind finding the duplicates in an array

I found a method to find the duplicates in an array of n elements ranging from 0 to n-1.
Traverse the array. Do the following for every index i of A[].
{
check for sign of A[abs(A[i])] ;
if positive then
make it negative by A[abs(A[i])] = -A[abs(A[i])];
else // i.e., A[abs(A[i])] is negative
this element (ith element of list) is a repetition
}
This method works fine. But I fail to understand how. Can someone explain it?
I am basically looking for a proof or a simpler understanding of this algorithm!
You're basically cheating by using the sign bits of each array element as an array of one-bit flags indicating the presence or absence of an element. This might or might not be faster than simply using a separate bit-set array, but it certainly makes use of the special case that you are using a signed representation (int) of unsigned values, therefore you have an extra unused bit to play with on each. This would not work if your values were signed, for example.
The algorithm stores additional information in the sign of each number in the array.
The sign in A[i] stores whether i occurred previously during the processing: if it's negative, i has occurred before.
Note: "elements ranging from 0 to n-1." - Oh well, you cannot store a sign in 0, so this isn't a fully correct algorithm for the task: a repeated 0 goes undetected.
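The marking scheme can be written out as a Python sketch (it mutates its input, and, per the note above, a repeated 0 goes undetected):

```python
def find_duplicates(a):
    """Report repeated values in a list whose n elements lie in 0..n-1,
    using the sign of a[v] as a 'seen v before' flag."""
    dups = []
    for i in range(len(a)):
        v = abs(a[i])            # undo any marking on the current element
        if a[v] < 0:             # sign already flipped: v was seen before
            dups.append(v)
        else:
            a[v] = -a[v]         # first sighting: mark by negating
    return dups

# e.g. find_duplicates([1, 2, 3, 1, 3, 6, 6]) returns [1, 3, 6]
```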
See http://www.geeksforgeeks.org/find-the-two-repeating-elements-in-a-given-array/
Method 5.
The examples there might help you.

Algorithm to find an arbitrarily large number

Here's something I've been thinking about: suppose you have a number, x, that can be infinitely large, and you have to find out what it is. All you know is if another number, y, is larger or smaller than x. What would be the fastest/best way to find x?
An evil adversary chooses a really large number somehow ... say:
int x = 9^9^9^9^9^9^9^9^9^9^9^9^9^9^9
and provides isX, isBiggerThanX, and isSmallerThanX functions. Example code might look something like this:
int c = 2
int y = 2
while(true)
if isX(y) return true
if(isBiggerThanX(y)) fn()
else y = y^c
where fn() is a function that, once a number y has been found (that's bigger than x) does something to determine x (like divide the number in half and compare that, then repeat). The thing is, since x is arbitrarily large, it seems like a bad idea to me to use a constant to increase y.
This is just something that I've been wondering about for a while now, I'd like to hear what other people think
Use a binary search as in the usual "try to guess my number" game. But since there is no finite upper end point, we do a first phase to find a suitable one:
Initially set the upper end point arbitrarily (e.g. 1000000, though 1 or 10^100 would also work -- given the infinite space to work in, all finite values are equally disproportionate).
Compare the mystery number X with the upper end point.
If it's not big enough, double it, and try again.
Once the upper end point is bigger than the mystery number, proceed with a normal binary search.
The first phase is itself similar to a binary search. The difference is that instead of halving the search space with each step, it's doubling it! The cost for each phase is O(log X). A small improvement would be to set the lower end point at each doubling step: we know X is at least as high as the previous upper end point, so we can reuse it as the lower end point. The size of the search space still doubles at each step, but in the end it will be half as large as it would have been. The cost of the binary search will be reduced by only 1 step, so its overall complexity remains the same.
Some notes
A couple of notes in response to other comments:
It's an interesting question, and computer science is not just about what can be done on physical machines. As long as the question can be defined properly, it's worth asking and thinking about.
The range of numbers is infinite, but any possible mystery number is finite. So the above method will eventually find it. "Eventually" here means that, for any possible finite input, the algorithm will terminate within a finite number of steps. However, since the input is unbounded, the number of steps is also unbounded (it's just that, in every particular case, it will "eventually" terminate).
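A Python sketch of the two phases, with a hypothetical oracle is_smaller_than_x standing in for the adversary's comparison functions (the hidden x is assumed to be a positive integer):

```python
def find_x(is_smaller_than_x):
    """Find a hidden positive integer x, given only an oracle that
    says whether a guess y is smaller than x."""
    # Phase 1: double the upper end point until it reaches x.
    hi = 1
    while is_smaller_than_x(hi):
        hi *= 2
    # Reuse the previous upper end point as the lower end point:
    # x now lies in the half-open interval (hi // 2, hi].
    lo = hi // 2
    # Phase 2: ordinary binary search on that interval.
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if is_smaller_than_x(mid):
            lo = mid
        else:
            hi = mid
    return hi

# e.g. find_x(lambda y: y < 12345) returns 12345
```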
If I understand your question correctly (advise if I do not), you're asking about how to solve "pick a number from 1 to 10", except that instead of 10, the upper bound is infinity.
If your number space is truly infinite, the following are true:
The value will never be held in an int (or any other data type) on any physical hardware
You will NEVER find your number
If the space is immensely large but bound, I think the best you can do is a binary search. Start at the middle of the number range. If the desired number turns out to be higher or lower, divide that half of the number space, and repeat until the desired number is found.
In your suggested implementation you raise y ^ c. However, no matter how large c is chosen to be, it will not even move the needle in infinite space.
Infinity isn't a number. Thus you can't find it, even with a computer.
That's funny. I've wondered the same thing for years, though I've never heard anyone else ask the question.
As simple as your scenario is, it still seems to provide insufficient information to allow the choice of an optimal strategy. All one can choose is a suitable heuristic. My heuristic had been to double y, but I think that I like yours better. Yours doubles log(y).
The beauty of your heuristic is that, so long as the integer fits in the computer's memory, it finds a suitable y in logarithmic time.
Counter-question. Once you find y, how do you proceed?
I agree with using binary search, though I believe that a ONE-SIDED binary search would be more suitable, since here the complexity would NOT be O(log n) [where n is the range of allowable numbers], but O(log k) - where k is the number selected by your adversary.
This would work as follows : ( Pseudocode )
k = 1;
while( isSmallerThanX( k ) )
{
k = k*2;
}
// At this point, once the loop is exited, k is not smaller than x (k >= x)
// Now do normal binary search for the range [ k/2, k ] to find your number :)
So even if the allowable range is infinity, as long as your number is finite, you should be able to find it :)
Your method of tetration is guaranteed to take longer than the age of the universe to find an answer, if the opponent merely uses a paradigm which is better (for example, pentation). This is how you should do it:
You can only do this with symbolic representations of numbers, because it is trivial to name a number your computer cannot store in floating-point representation, even if it used arbitrary-precision arithmetic and all its memory.
Required reading: http://www.scottaaronson.com/writings/bignumbers.html - that pretty much sums it up
How do you represent a number then? You represent it by a program which will, if run to completion, print out that number. Even then, your computer is incapable of computing BusyBeaver(10^100) (if you dictated a program 1 terabyte in size, this is well over the maximum number of finite clock cycles it could run without looping forever). You can see that we could easily have the computer print out 1 0 0... each clock cycle, so the maximum number it could say (if we waited nearly an eternity) would be 10^BusyBeaver(10^100). If you allowed it to say more complicated expressions like eval(someprogram), power-towers, Ackermann's function, whatever -- then I believe that would be no better than increasing the original 10^100 by some constant proportional to the complexity of what you described (plus some logarithmic interpreter factor, see Kolmogorov complexity).
So let's frame this another way:
Your opponent picks a finite computable number, and gives you a function that tells you if your number is smaller/larger/equal by computing it. He also gives you a representation for the output (in a sane world this would be "you can only print numbers like 99999", but he can make it more complicated; it actually doesn't matter). Proceed to measure the size of this function in bits.
Now, answer with your own function, which is twice the size of his function (in bits), and prints out the largest number it can while keeping the code to less than 2N bits in length. (You use the same representation he chose: In a world where you can only print out numbers like "99999", that's what you do. If you can define functions, it gets slightly more complicated.)
I do not understand the purpose here, but this is what I thought of:
Reading your comments, I suppose you aren't looking for an infinitely large number, but a "super large number" instead. And whatever the number is, it will have a large number of digits. How you got them isn't the concern. Keeping this in mind:
No complex computation is required. Just type random keys on your numeric keypad to get a super large number, and then have a program randomly add/remove/modify digits of that number. You get a list of very large numbers - select any one out of them.
e.g: 3672036025039629036790672927305060260103610831569252706723680972067397267209
and keep modifying/adding digits to get more numbers
PS: If you state the purpose in your question clearly, we might be able to give better answers.

How to design an efficient algorithm for least upper bound search

Let's say you have some set of numbers with a known lower bound and unknown upper bound, i.e. 0, 1, 2, 3, ... 78 where 78 is the unknown. Assume for the moment there are no gaps in between numbers. There is a time-expensive function test() that tests if a number is in the set.
What is an efficient way (requiring a low amount of test() calls) to find the highest number in the set?
What if you have the added knowledge that the upper bound is 75 +/- 25?
What if there are random gaps between numbers in the set, i.e. 0, 1, 3, 4, 7, ... 78?
For the "no gaps case":
I assume that this is a fixed size of number, e.g. a 32 bit int
We wish to find x such that test(x) == true, test(x+1) == false, right?
You basically do a binary chop between the lowest known "not in set" (e.g. the biggest 32 bit int) and the highest known "in set" (starting with the known lower bound) by testing the middle value in the range each time and adjusting the boundaries accordingly. This would give an O(log N) solution (in terms of the number of calls to test()), where N is the size of the potential set, not the actual set. This will be slower than just trying 1, 2, 3... for small sets, but much faster for large ones.
All of this falls down if there can be gaps, at which point I don't think there's any feasible solution beyond "start with the absolute highest possible number and work down until test(x) == true at which point that's the highest number". Any other strategy will fail or be more expensive as far as I can see.
Your best bet is to simply run through the set with O(n) complexity, which is not bad.
Take into consideration that the set is not sorted (it is a set, after all, and this is the given), each isInSet(n) operation takes O(n) as well, bringing you to O(n^2) for the entire operation, if you choose any algorithm for prodding the set at certain places...
A much better solution, if the set is in your control, would be to simply keep a max value of the set and update it on each insertion to the set. This will be O(1) for all cases.
1. Set Step to 1.
2. Set Upper to Lower + Step.
3. If test(Upper) is true, set Lower to Upper, multiply Step by 2, and go to step 2.
4. At this point you know that Lower is in your set while Upper is not. You can now do a binary search between Lower and Upper to find the limit.
This takes O(log n) calls to test().
If you know that Upper is between 50 and 100, Do a binary search between these two values.
If you have random gaps and you know that the upper bound is 100 maximum I suspect you can not do better than starting from there and testing every number one by one until test() finds a value in your set.
If you have random gaps and you do not know an upper limit then you can never be sure you found the upper bound.
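The doubling-then-bisecting procedure above can be sketched in Python for the gap-free case (test and the known lower bound are the problem's givens; the function name is my own):

```python
def find_highest(test, lower):
    """Largest n with test(n) true, for a gap-free set known to
    contain `lower`. Uses O(log(answer - lower)) calls to test()."""
    step = 1
    # Gallop: double the step until test() fails past the set's end.
    while test(lower + step):
        lower += step
        step *= 2
    lo, hi = lower, lower + step        # test(lo) true, test(hi) false
    # Binary search for the boundary between lo and hi.
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if test(mid):
            lo = mid
        else:
            hi = mid
    return lo

# e.g. find_highest(lambda n: 0 <= n <= 78, 0) returns 78
```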
Maybe you should just traverse through it? It would be O(n) complex. I think there is no other way to do this.
Do you know the set size, before hand?
Actually, I guess you probably don't - otherwise the first problem would be trivial.
It would help if you had some idea how big the set was though.
Take a guess at the top value
Test - if in then increment value by some amount
If not in then decrease value by some amount
Once you have upper and lower bounds for largest value, binary search till you find it (to required precision).
For the gaps you've no such ability - you can't even tell when you've found the largest element. (Unless you known the maximum gap size)
If there are no gaps, then you are probably best off with a binary search.
If we use the second assumption, that the top is 75 +/- 25, then our low end is 50 and our high end is 100, and our first test case is 75. If it is present, then the low end is 75 and the high end is 100, and our next test case is 87. That should yield results in O(log N) (where here N would be 50).
If we can't assume a possible upper range, we just have to make an educated guess at what it might be. If a value is not found, it becomes the high end. If it is found, it becomes the low end, and we double it to find the high end.
If there are gaps, the only way I can see of doing it is a linear search -- but even then you'll need a way of knowing when you've reached the end, rather than just a big gap.
If your set happens to be the set of prime numbers, let me know when you find the biggest one. I'm sure we can work something out. ;)
But seriously, I'm guessing you know for a fact that the set does indeed have a largest value. Or, you're chopping it to a 32-bit integer.
A couple of suggestions:
1) Think of every case you can that would quickly establish test(x) == false, so you can move on to the next candidate. If the time you spend going through all of these rejection cases is far less than going through the full test, then you'll come out ahead.
2) Can you gain any information from each test? For example, does test(x) == false imply that test(x+5679) == false as well?
