Why does the F# debugger lie? - visual-studio

Can anyone explain why the VS2012 debugger shows different values for the same member of an object? (See the figure.)
http://s2.uploads.ru/jlkw0.png (Sorry for the non-English interface of VS, but I think the situation is clear.)
Here is the code:
http://pastie.org/7186239

The debugging experience seems to do a poor job of identifying the correct binding for identifiers. In your example, this means that any identifier called Source is really showing the value of this.Source, rather than the corresponding property of the correct object. Note that you can get the right value by hovering over y and expanding the members (although this is obviously not a great experience).
There are even more confusing ways that this issue manifests itself:
type T() =
    member val P = 1
    member this.DoSomething() =
        let P = "test"        // set breakpoint here, hover over P
        printfn "%i" this.P   // set breakpoint here, hover over P

T().DoSomething()
Now, whichever instance of P you hover over, you get the wrong thing!

Related

ReSharper - Possible 'System.NullReferenceException' after Any

I'm using ReSharper Ultimate 2016.2, but I have seen this in previous versions as well. ReSharper will give me a warning saying Possible 'System.NullReferenceException' when I use FirstOrDefault after Any. Example below:
Foo[] items = GetItems();
var myName = "MyName";
if (items.Any(x => x.Name == myName))
{
    var item = items.FirstOrDefault(x => x.Name == myName);
    var name = item.Name; // Possible 'System.NullReferenceException'
}
Is the warning correct, or is the code above safe?
I know that I can disable the warning, but that's not the point. I want to make sure that there's no chance of that NullReferenceException occurring. Since I first check with Any, FirstOrDefault should return an item. Or am I missing something?
The code above is simply an MCVE.
UPDATE:
As mentioned in the comments, the code can be optimized (and simplified). The question is not how to fix the problem in the code, but whether a NullReferenceException can actually occur, as ReSharper states.
It is because of FirstOrDefault: it returns null for reference types if no element matches the condition. ReSharper does not take the preceding Any into account at this point.
You should replace it with a call to First.
This is like the compiler error "use of unassigned variable" that you get when you are sure your variable is assigned inside an if block that is logically always true. Simply put, analyzing code to this level is not feasible, because a static analyzer would have to know (or rather understand) at compile time what the state of your variables will be at runtime:
void Main()
{
    int a;
    Environment.CurrentDirectory = "C:\\temp";
    if (Environment.CurrentDirectory == "C:\\temp")
        a = 1;

    // Error - Use of unassigned variable
    Console.WriteLine(a);
}
Here, the human brain can see that there is no way for the variable not to be assigned (there are no other threads messing with the same property, and you have checked the reference source for the property's behavior), but the compiler would have to analyze the meaning of this code with the same depth, and at this point in time we don't have that. The same rule applies to ReSharper: it lacks the intelligence needed to do everything required (for now).

Qt/Ruby Application Crash in Qt::AbstractItemModel::mimeData - How to find and fix the cause?

Recently I wrote a little application using Ruby and Qt. It was my first non-trivial project with qtruby. After some time I managed to translate C++ references and tutorials into working Ruby code, and got the hang of it.
Only when implementing my own drag and drop functionality in QTreeView widgets did I get really stuck. I could isolate the problem to the Qt::AbstractItemModel::mimeData method. After eliminating some other mistakes, I ended up with the following code, which somehow caused the application to crash as soon as a drag action occurred.
class TreeModel < Qt::AbstractItemModel
  def mimeData(indexlist)
    encodedData = Qt::ByteArray.new("")
    stream = Qt::DataStream.new(encodedData, Qt::IODevice::WriteOnly)
    indexlist.each do |index|
      if index.column == 0
        if index.isValid
          stream << index.item.mime_data.to_json
        end
      end
    end
    mimeData = Qt::MimeData.new
    mimeData.setData("application/x-tegi.json", encodedData)
    mimeData
  end
end
The crash appeared to be non-deterministic; a few times the code just worked. The error dump and backtrace did not give me a hint, only that the crash happened inside QApplication::exec.
What is the issue and how can it be fixed?
The C++ prototype for mimeData is:
QMimeData *QAbstractItemModel::mimeData(const QModelIndexList &indexes) const [virtual]
The method returns a pointer to a QMimeData object. My implementation of mimeData was returning a pointer to a local variable. Thus the non-deterministic crashes.
Once found, fixing the issue was easy. An instance variable would stay alive and keep its value until the next call of mimeData:
  @mimeData = Qt::MimeData.new
  @mimeData.setData("application/x-tegi.json", encodedData)
  @mimeData
end
Another possibility is to call super and leave the construction and deconstruction to the superclass. This also inherits the default mime data ("application/x-qabstractitemmodeldatalist"):
  mimeData = super
  mimeData.setData("application/x-tegi.json", encodedData)
  mimeData
end
After having found the issue, it appears trivial to me. Nevertheless I would like to share it, because I wasted too much time with it. The qtruby documentation warns about pointer arguments and mentions some unsupported functions returning uchar pointers, but no word about functions returning pointers to Qt objects. Being lulled by qtruby seamlessly wrapping data types to and fro, I looked so many times over that little asterisk without spotting it.

Swift debugging generic type variables

I'm trying out generics in Swift and came across something unusual while debugging:
instead of showing the value assigned to the variable, the debugger shows something different. Here's the example code:
class GenericExample<T: Comparable> {
    var someVar: T
    init(myVar: T) {
        someVar = myVar
    }
}
let generics = GenericExample<Int>(myVar: 57)
print(generics.someVar)
The result is fine (57), but when running the program under the debugger, i.e. with breakpoints, the value is presented as something like 4301684792.
Any tips would be appreciated.
Don't examine in the debugger the value of a variable that is not assigned until the line you're stopped on, or later. That line has not been executed yet, so you will see a random value (namely, whatever happens to be sitting in memory at that address)!
Only examine earlier variable values! And don't even look at the little tooltips that pop up. Look only at the variables pane and the lldb console.

What does "auto always deduces the basic type" mean here?

I was reading this article, which states that
widget w = get_gadget(); -->a
auto w = get_gadget(); -->b
in statement 'a' a temporary is created, then widget w is move-constructed from the temporary. I totally understand that statement. What I don't understand is how statement 'b', using auto, is better than statement 'a'. It says:
.... we could write the following which guarantees there is no
implicit conversion because auto always deduces the basic type
exactly:
// better, if you don't need an explicit type
auto w = get_gadget();
Could anyone please explain why statement 'b' is better than 'a' ?
It depends on what your criteria are for "better".
If the return type of get_gadget() is actually widget, and that will forever be true, then there is no difference whatsoever. However, real-world program designs change, and there may be a need to change the return type, or change properties of the type returned.
If the return type of get_gadget() is something (say gadget) that can be implicitly converted to widget, then the working of "a" is effectively
gadget temp = get_gadget();
widget w = widget(temp); // assuming widget has a constructor that accepts a gadget
or
gadget temp = get_gadget();
widget w = (widget)temp; // assuming gadget has a conversion operator to widget
In either case, a temporary object is created and then some conversion is performed.
In comparison, the case "b" is deduced by the compiler as being
gadget w = get_gadget();
There is also potentially a maintenance advantage in using auto. Let's say the return type of get_gadget() (gadget) is changed so that it can no longer be converted to a widget. In that case, "a" will simply not compile, whereas "b" will compile and the code which uses w will still work (assuming all operations on it are supported by the new return type).
While that last case could easily be fixed by changing to
gadget w = get_gadget();
even this can be broken again by changing the return type of get_gadget() to better_gadget.
The bottom line is that auto hands the work to the compiler of worrying about what type w needs to be.
Sutter explains it in the following paragraph:
This works, assuming that gadget is implicitly convertible to widget, but creates a temporary object. That’s a potential performance pitfall, as the creation of the temporary object is not at all obvious from reading the call site alone in a code review. If we can use a gadget just as well as a widget in this calling code and so don’t explicitly need to commit to the widget type, we could write the following which guarantees there is no implicit conversion because auto always deduces the basic type exactly
Statement b creates a gadget directly. That bypasses the creation of the temporary and the conversion, which could be expensive operations depending on the class implementation. auto picks the exact type needed and directly creates w as that type. His caveat is important though: the code can't care that w is a gadget and not a widget.
For example, suppose that widget inherits from gadget. In statement b w would be a plain gadget, thus lacking the extra stuff that you'd get with a widget. When using statement b your code can't care that you got a gadget. If you're using statement a then you get the widget at the added expense of creating a temporary gadget and converting it to a widget.

How to trigger a break when an Array is empty?

In my app, I have a problem when I try to reach an index in an Array: the Array is actually empty. I cannot find what is emptying it, so I was wondering whether it's possible to create a dynamic breakpoint in the debugger that will fire when my array is empty. As soon as something either resets the array or takes away its last object, I'd like to know.
I tried to create a symbolic breakpoint with "myArray.isEmpty == true" as the condition, but it doesn't seem to work the way I want.
Is it possible, or am I just dreaming?
Thanks
As @kendall mentions, you could use didSet to detect when the array is being emptied, and put a breakpoint in it:
// a acts like a normal variable
var a: [Int] = [] {
    // but whenever it’s updated, the following runs:
    didSet {
        if a.isEmpty {
            // put a breakpoint on the next line:
            println("Array is empty")
        }
    }
}
a.append(1)
a.append(2)
println(a.removeLast())
// will print “Array is empty” before the value is printed:
println(a.removeLast())
What you want is called a Watchpoint, which lets you monitor changes in memory. I'm not sure yet how to set one on a Swift Array, but that could be a good starting point for research.
One idea would be to add a didSet{} block to the property that holds the array, adding a log statement within - break on that based on your condition that the array is empty.
To the best of my knowledge this isn't possible with Swift and Xcode (or with any other language or IDE I have used). To make this work, the IDE would have to continually evaluate the given expression at every step of the program's execution.
Now, if arrays were classes, you could subclass and add a breakpoint in an overridden isEmpty method, but as they are structs, you cannot. :-(
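For the watchpoint route mentioned above, a rough lldb sketch (the variable name myArray is an assumption, and the condition expression may or may not evaluate cleanly for Swift depending on the toolchain):

```
(lldb) watchpoint set variable myArray
(lldb) watchpoint modify -c "myArray.isEmpty"
```

One caveat: a Swift Array variable holds a reference to its storage, and the element count lives in that storage, so a watchpoint on the variable fires when the variable itself is overwritten (e.g. reassigned to []), not necessarily on every in-place mutation. Treat this as a starting point for experimentation rather than a guaranteed recipe.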
