Swift 2.0 - is the programming logic different from other languages?

I am new to Swift 2.0 programming. I made an app that integrates with Parse. The following is a snippet of my code:
private func isIdNotFound() -> Bool {
    var notFound = true
    let query = PFQuery(className: "Customer")
    query.whereKey("customerId", equalTo: self.id)
    query.findObjectsInBackgroundWithBlock {
        (objects: [PFObject]?, error: NSError?) -> Void in
        if error == nil && objects != nil {
            print(objects)
            notFound = false
            print(notFound)
        }
    }
    print(notFound)
    return notFound
}
The console:
true
<Customer: 0x7feccbf07ef0, objectId: AiPH5pNgum, localId: (null)> {
customerId = wilson93;
email = 123;
password = 123;
}])
false
Why does it print true first and only then run the logic and print false? In other languages such as Java, I would expect it to print false twice.

As mentioned in the comments, your code calls query.findObjectsInBackgroundWithBlock:. Here, Background is the key word. Because the search runs asynchronously, the code does not necessarily execute in the order it is written, so the second print(notFound) runs before the one inside the block.
You can tell because, reading top to bottom, the two print statements sit right next to each other (excluding braces), yet notFound is not printed twice in succession: objects is printed after the first print statement in the output (the second one in the code) but before the other.
var notFound = true
query.findObjectsInBackgroundWithBlock {
    (objects: [PFObject]?, error: NSError?) -> Void in
    if error == nil && objects != nil {
        print(objects)   // this comes second
        notFound = false
        print(notFound)  // then this comes last
    }
}
print(notFound)          // this runs first
See the documentation for PFQuery here. It says, in italics, that the search is performed asynchronously.
From the doc:
Finds objects asynchronously and calls the given block with the results.
- (void)findObjectsInBackgroundWithBlock:(nullable PFQueryArrayResultBlock)block
IIRC, this means that the search is run on another thread.
Community, correct me if I'm wrong.
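If you need the result of the query, the usual approach is to hand it back through a closure instead of returning it from the function. Below is a minimal sketch of that idea, reusing the question's query but with a hypothetical completion-based signature (checkIdNotFound and its completion parameter are illustrative names, not part of the original code):
private func checkIdNotFound(completion: (Bool) -> Void) {
    let query = PFQuery(className: "Customer")
    query.whereKey("customerId", equalTo: self.id)
    query.findObjectsInBackgroundWithBlock {
        (objects: [PFObject]?, error: NSError?) -> Void in
        // This block runs later, once Parse has fetched the results.
        var notFound = true
        if error == nil && objects != nil {
            notFound = false
        }
        completion(notFound)
    }
}

// Usage: act on the result inside the closure, not on the line after the call.
checkIdNotFound { notFound in
    print(notFound)
}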

Related

Why does the Newtonsoft Replace function only replace the value if it is changed?

Take the following code:
JProperty toke = new JProperty("value", new JValue(50)); //toke.Value is 50
toke.Value.Replace(new JValue(20)); //toke.Value is 20
This works as expected. Now examine the following code:
JValue val0 = new JValue(50);
JProperty toke = new JProperty("value", val0); //toke.Value is 50
JValue val1 = new JValue(20);
toke.Value.Replace(val1); //toke.Value is 20
This also works as expected, but there is an important detail. val0 is no longer part of toke's JSON tree, and val1 is part of the JSON tree; this means that val0 has no valid parent, while val1 does.
Now take this code.
JValue val0 = new JValue(50);
JProperty toke = new JProperty("value", val0); //toke.Value is 50
JValue val1 = new JValue(50);
toke.Value.Replace(val1); //toke.Value is 50
The behavior is different; val0 is still part of toke's JSON tree, and val1 is not. Now val0 has a valid parent, while val1 does not.
This is a critical distinction: if you are using Newtonsoft JSON trees to represent a structure and storing JTokens as references into the tree, the way the references are structured can change based on the value being replaced, which seems incorrect.
Is there any flaw in my reasoning? Or is the behavior incorrect, as I believe it is?
I think you have a valid point: Replace should replace the token instance and set the parent properly even if the tokens have the same values.
This works as you would expect if the property value is a JObject and you replace it with an identical JObject:
JObject obj1 = JObject.Parse(@"{ ""foo"" : 1 }");
JProperty prop = new JProperty("bar", obj1);

JObject obj2 = JObject.Parse(@"{ ""foo"" : 1 }");
prop.Value.Replace(obj2);

Console.WriteLine("obj1 parent is " +
    (ReferenceEquals(obj1.Parent, prop) ? "prop" : "not prop")); // "not prop"
Console.WriteLine("obj2 parent is " +
    (ReferenceEquals(obj2.Parent, prop) ? "prop" : "not prop")); // "prop"
However, the code seems to have been deliberately written to work differently for JValues. In the source code we see that JToken.Replace() calls JContainer.ReplaceItem(), which in turn calls SetItem(). In the JProperty class, SetItem() is implemented like this:
internal override void SetItem(int index, JToken item)
{
    if (index != 0)
    {
        throw new ArgumentOutOfRangeException();
    }

    if (IsTokenUnchanged(Value, item))
    {
        return;
    }

    if (Parent != null)
    {
        ((JObject)Parent).InternalPropertyChanging(this);
    }

    base.SetItem(0, item);

    if (Parent != null)
    {
        ((JObject)Parent).InternalPropertyChanged(this);
    }
}
You can see that it checks whether the value is "unchanged", and if so, it returns without doing anything. If we look at the implementation of IsTokenUnchanged() we see this:
internal static bool IsTokenUnchanged(JToken currentValue, JToken newValue)
{
    JValue v1 = currentValue as JValue;
    if (v1 != null)
    {
        // null will get turned into a JValue of type null
        if (v1.Type == JTokenType.Null && newValue == null)
        {
            return true;
        }

        return v1.Equals(newValue);
    }

    return false;
}
So, if the current token is a JValue, it checks whether it Equals the other token, otherwise the token is automatically considered to have changed. And Equals for a JValue is of course based on whether the underlying primitives themselves are equal.
I cannot speak to the reasoning behind this implementation decision, but it seems to be worth reporting an issue to the author. The "correct" fix, I think, would be to make SetItem use ReferenceEquals(Value, item) instead of IsTokenUnchanged(Value, item).

How to check for a Not a Number (NaN) in Swift 2

The following method calculates the percentage using two variables.
func casePercentage() {
    let percentage = Int(Double(cases) / Double(calls) * 100)
    percentageLabel.stringValue = String(percentage) + "%"
}
The above method works well except when cases = 1 and calls = 0.
This gives a fatal error: floating point value can not be converted to Int because it is either infinite or NaN
So I created this workaround:
func casePercentage() {
    if calls != 0 {
        let percentage = Int(Double(cases) / Double(calls) * 100)
        percentageLabel.stringValue = String(percentage) + "%"
    } else {
        percentageLabel.stringValue = "0%"
    }
}
This gives no errors, but in other languages you can check a variable with an .isNaN() method. How does this work in Swift 2?
You can "force unwrap" the optional type using the ! operator:
calls! //asserts that calls is NOT nil and gives a non-optional type
However, this will result in a runtime error if it is nil.
One option to prevent using nil or 0 is to do what you have done and check if it's 0.
The second option is to nil-check:
if calls != nil
The third (and most Swift-y) option is to use the if let structure:
if let nonNilCalls = calls {
    // ...
}
The inside of the if block won't run if calls is nil.
Note that nil-checking and if let will NOT protect you from dividing by 0. You will have to check for that separately.
Combining the second option with your method:
//calls can neither be nil nor <= 0
if calls != nil && calls > 0
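As for the title question: Swift's floating-point types expose isNaN and isInfinite properties, so you can test the division result directly. A minimal sketch, assuming cases and calls are non-optional Ints as in the original method:
func casePercentage() {
    let ratio = Double(cases) / Double(calls) * 100
    if ratio.isNaN || ratio.isInfinite {
        // 0/0 gives NaN, 1/0 gives +infinity; neither can be converted to Int.
        percentageLabel.stringValue = "0%"
    } else {
        percentageLabel.stringValue = String(Int(ratio)) + "%"
    }
}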

dispatch_async: Why do I need to return in the block? [duplicate]

I am creating a doubly-linked list of scripts (MSScripts) that are supposed to have their own run() implementation, and they call the next script (rscript) when they're ready. One of the scripts I'd like to create is just a delay. It looks like this:
class DelayScript : MSScript
{
    var delay = 0.0

    override func run() {
        let delay = self.delay * Double(NSEC_PER_SEC)
        let time = dispatch_time(DISPATCH_TIME_NOW, Int64(delay))
        let weakSelf = self
        dispatch_after(time, dispatch_get_main_queue()) {
            weakSelf.rscript?.run()
            Void.self
        }
    }

    init(delay: Double) {
        super.init()
        self.delay = delay
    }
}
Where rscript is the next script to run. The problem is that if I remove the last line of the dispatch_after block, it doesn't compile, because optional chaining changes the return type of run(). I randomly decided to insert Void.self and it fixed the problem, but I have no idea why.
What is this Void.self, and is it the right solution?
Optional chaining wraps whatever the result of the right side is inside an optional. So if run() returned T, then x?.run() returns T?. Since run() returns Void (a.k.a. ()), that means the whole optional chaining expression has type Void? (or ()?).
When a closure has only one line, the contents of that line is implicitly returned. So if you only have that one line, it is as if you wrote return weakSelf.rscript?.run(). So you are returning type Void?, but dispatch_async needs a function that returns Void. So they don't match.
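A tiny self-contained illustration of that wrapping, using a hypothetical Script class rather than the MSScript from the question:
class Script {
    func run() {}                     // returns Void, i.e. ()
}

let maybeScript: Script? = Script()
let result = maybeScript?.run()       // result has type Void? (a.k.a. ()?), not Void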
One solution is to add another line that explicitly returns nothing:
dispatch_after(time, dispatch_get_main_queue()) {
    weakSelf.rscript?.run()
    return
}
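Another workaround you sometimes see (a sketch, not part of the original answer) is to discard the Void? result explicitly, so the closure body no longer ends in an expression of type Void?:
dispatch_after(time, dispatch_get_main_queue()) {
    _ = weakSelf.rscript?.run()
}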

Why does the debugger have to jump "back and forth" before it sets my tuple value?

I've actually fixed this problem already (while documenting it for this post), but I still want to know why it is happening, so that I can understand what I did and hopefully avoid wasting time on it in the future.
In a Swift project, I have a function that parses out a string that I know will be presented in a specific format and uses it to fill in some instance variables.
There is a helper function that is passed the string, a starting index, and a divider character and spits out a tuple made up of the next string and the index from which to continue. Just in case a poorly formatted string gets passed in, I define a return type of (String, Int)? and return nil if the divider character isn't found.
The helper function looks, in relevant part, like this:
func nextChunk(stringArray: Array<Character>, startIndex: Int, divider: Character) -> (String, Int)?
{
    [...]

    var returnValue: (String, Int)? = (returnString, i)
    return returnValue
}
So far, so good. I set a breakpoint, and just before the function returns, I see that all is as it should be:
(lldb) po returnValue
(0 = "21三體綜合症", 1 = 7)
{
0 = "21三體綜合症"
1 = 7
}
That's what I expected to see: the correct string value, and the correct index.
However, when I go back to the init() function that called the helper in the first place, and put a breakpoint immediately after the call:
var returnedValue = self.nextChunk(stringArray, startIndex: stringArrayIndex, divider: " ")
I get a completely different value for returnedValue than I had for returnValue:
(lldb) po returnedValue
(0 = "I", 1 = 48)
{
0 = "I"
1 = 48
}
Now here's the really weird part. After I get the return value, I want to test to see if it's nil, and if it's not, I want to use the values I fetched to set a couple of instance variables:
if(returnedValue == nil)
{
    return
}
else
{
    self.traditionalCharacter = returnedValue!.0
    stringArrayIndex = returnedValue!.1
}
If I comment out both of the lines in the "else" brackets:
else
{
    // self.traditionalCharacter = returnedValue!.0
    // stringArrayIndex = returnedValue!.1
}
then my original breakpoint gives the expected value for the returned tuple:
(lldb) po returnedValue
(0 = "21三體綜合症", 1 = 7)
{
0 = "21三體綜合症"
1 = 7
}
Again: the breakpoint is set before this if/else statement, so I'm taking the value before any of this code has had the chance to execute.
After banging my head against this for a few hours, I realize that...there isn't actually a problem. If I press the "step over" button in the debugger, the execution pointer jumps back from the if() line to the call to nextChunk. Pressing it again sends it forward to "if" again, and sets the values properly.
This extra double-jump happens only if the assignment code is active, consistently and reproducibly. I know, because I reproduced it for hours trying to figure out what was wrong before even trying to step forward and noticing that it "fixed itself."
So my question is: why? Is this a bug in the debugger, or am I using breakpoints wrong? It happens just the same whether I put the breakpoint between the function call and the if() or on the if() line. Can someone explain why the debugger is jumping back and forth and when the value I need is actually getting set?

Slow Scala assert

We've been profiling our code recently and we've come across a few annoying hotspots. They're in the form
assert(a == b, a + " is not equal to " + b)
Because some of these asserts can be in code that is called a huge number of times, the string concatenation starts to add up. assert is defined as:
def assert(assumption : Boolean, message : Any) = ....
Why isn't it defined as:
def assert(assumption : Boolean, message : => Any) = ....
That way it would be evaluated lazily. Given that it's not defined that way, is there an inline way of calling assert with a message param that is evaluated lazily?
Thanks
Lazy evaluation also has some overhead for the function object that is created. If your message object is already fully constructed (a static message), this overhead is unnecessary.
The appropriate method for your use case would be sprintf-style:
assert(a == b, "%s is not equal to %s", a, b)
As long as there is a specialized overload
assert(Boolean, String, Any, Any)
this implementation has no overhead; with the general varargs version
assert(Boolean, String, Any*)
it only costs the varargs array.
Overriding toString would be evaluated lazily, but it is not readable:
assert(a == b, new { override def toString = a + " is not equal to " + b })
It is by-name; I changed it over a year ago.
http://www.scala-lang.org/node/825
Current Predef:
@elidable(ASSERTION)
def assert(assertion: Boolean, message: => Any) {
  if (!assertion)
    throw new java.lang.AssertionError("assertion failed: " + message)
}
Thomas' answer is great, but just in case you like the idea of the last answer but dislike the unreadability, you can get around it:
object LazyS {
  def apply(f: => String): AnyRef = new {
    override def toString = f
  }
}
Example:
object KnightSpeak {
  override def toString = { println("Turned into a string"); "Ni" }
}
scala> assert(true != false , LazyS("I say " + KnightSpeak))
scala> println( LazyS("I say " + KnightSpeak) )
Turned into a string
I say Ni
Try: assert(a == b, "%s is not equal to %s".format(a, b))
The format call should only be made when the assert needs the string; format is added to RichString via an implicit conversion.
