When cola.js fails, how do you find out what went wrong?
I don't have a minimal reproducible example, because that will likely take hours of debugging to make, and if I had it, then it wouldn't be an example of what I'm asking about. I'm hoping to hear of a way to quickly find out what's gone wrong. Presumably it's some bad data in my graph. But how can I tell what the bad data is?
The screenshot below illustrates a typical situation where something has gone wrong. Various NaNs are reported, there are a bunch of Uncaught TypeErrors, and an Assertion fails. There are a whole lot of local variables, all named with a single letter. Clicking and looking at them shows that a NaN got into a coordinate somewhere, as suggested by the previous error messages. It looks like sometime earlier, something got a null when it needed some sort of list-like data structure. How can I quickly find out what was the bad data that got into the graph?
I'm using cola.v3.min.js.
Details
I'm looking for a general approach to finding this kind of error, but here's the specific data that produced the errors in this example. Each line is the argument passed to cy.add(), as suggested by Stepan T. (printing out everything passed to cy.add() might turn out to be the first step of the answer! A minimal logging sketch follows the list below.)
{"group":"nodes","data":{"id":1,"label":"Workspace","parent":[]},"position":{"x":45,"y":0}}
{"group":"nodes","data":{"id":2,"label":"Target(121)","parent":[1]},"position":{"x":5,"y":0}}
{"group":"nodes","data":{"id":3,"label":"WantFullySourced","parent":[]},"position":{"x":50,"y":0}}
{"group":"nodes","data":{"id":4,"label":"Brick(120)","parent":[1]},"position":{"x":10,"y":0}}
{"group":"nodes","data":{"id":5,"label":"Avail","parent":[]},"position":{"x":55,"y":0}}
{"group":"nodes","data":{"id":6,"label":"Brick(1)","parent":[1]},"position":{"x":15,"y":0}}
{"group":"nodes","data":{"id":7,"label":"Avail","parent":[]},"position":{"x":60,"y":0}}
{"group":"nodes","data":{"id":8,"label":"Brick(2)","parent":[1]},"position":{"x":20,"y":0}}
{"group":"nodes","data":{"id":9,"label":"Avail","parent":[]},"position":{"x":65,"y":0}}
{"group":"nodes","data":{"id":10,"label":"Brick(3)","parent":[1]},"position":{"x":25,"y":0}}
{"group":"nodes","data":{"id":11,"label":"Avail","parent":[]},"position":{"x":70,"y":0}}
{"group":"nodes","data":{"id":12,"label":"Brick(4)","parent":[1]},"position":{"x":30,"y":0}}
{"group":"nodes","data":{"id":13,"label":"Avail","parent":[]},"position":{"x":75,"y":0}}
{"group":"nodes","data":{"id":14,"label":"Brick(5)","parent":[1]},"position":{"x":35,"y":0}}
{"group":"nodes","data":{"id":15,"label":"Avail","parent":[]},"position":{"x":80,"y":0}}
{"group":"nodes","data":{"id":16,"label":"NumericalRelationScout","parent":[1]},"position":{"x":40,"y":0}}
{"group":"nodes","data":{"id":17,"label":"OperandView","parent":[1]},"position":{"x":0,"y":0}}
{"group":"edges","data":{"id":"2.member_of.1.members","source":2,"source_port_label":"member_of","target":1,"target_port_label":"members","weight":0.5}}
{"group":"edges","data":{"id":"4.member_of.1.members","source":4,"source_port_label":"member_of","target":1,"target_port_label":"members","weight":0.5}}
{"group":"edges","data":{"id":"6.member_of.1.members","source":6,"source_port_label":"member_of","target":1,"target_port_label":"members","weight":0.5}}
{"group":"edges","data":{"id":"8.member_of.1.members","source":8,"source_port_label":"member_of","target":1,"target_port_label":"members","weight":0.5}}
{"group":"edges","data":{"id":"10.member_of.1.members","source":10,"source_port_label":"member_of","target":1,"target_port_label":"members","weight":0.5}}
{"group":"edges","data":{"id":"12.member_of.1.members","source":12,"source_port_label":"member_of","target":1,"target_port_label":"members","weight":0.5}}
{"group":"edges","data":{"id":"14.member_of.1.members","source":14,"source_port_label":"member_of","target":1,"target_port_label":"members","weight":0.5}}
{"group":"edges","data":{"id":"16.member_of.1.members","source":16,"source_port_label":"member_of","target":1,"target_port_label":"members","weight":0.5}}
{"group":"edges","data":{"id":"17.member_of.1.members","source":17,"source_port_label":"member_of","target":1,"target_port_label":"members","weight":0.5}}
{"group":"edges","data":{"id":"3.taggees.2.tags","source":3,"source_port_label":"taggees","target":2,"target_port_label":"tags","weight":0.5}}
{"group":"edges","data":{"id":"2.support_from.3.support_to","source":2,"source_port_label":"support_from","target":3,"target_port_label":"support_to","weight":2}}
{"group":"edges","data":{"id":"3.support_from.2.support_to","source":3,"source_port_label":"support_from","target":2,"target_port_label":"support_to","weight":2}}
{"group":"edges","data":{"id":"5.taggees.4.tags","source":5,"source_port_label":"taggees","target":4,"target_port_label":"tags","weight":0.5}}
{"group":"edges","data":{"id":"4.support_from.5.support_to","source":4,"source_port_label":"support_from","target":5,"target_port_label":"support_to","weight":2}}
{"group":"edges","data":{"id":"7.taggees.6.tags","source":7,"source_port_label":"taggees","target":6,"target_port_label":"tags","weight":0.5}}
{"group":"edges","data":{"id":"6.support_from.7.support_to","source":6,"source_port_label":"support_from","target":7,"target_port_label":"support_to","weight":2}}
{"group":"edges","data":{"id":"9.taggees.8.tags","source":9,"source_port_label":"taggees","target":8,"target_port_label":"tags","weight":0.5}}
{"group":"edges","data":{"id":"8.support_from.9.support_to","source":8,"source_port_label":"support_from","target":9,"target_port_label":"support_to","weight":2}}
{"group":"edges","data":{"id":"11.taggees.10.tags","source":11,"source_port_label":"taggees","target":10,"target_port_label":"tags","weight":0.5}}
{"group":"edges","data":{"id":"10.support_from.11.support_to","source":10,"source_port_label":"support_from","target":11,"target_port_label":"support_to","weight":2}}
{"group":"edges","data":{"id":"13.taggees.12.tags","source":13,"source_port_label":"taggees","target":12,"target_port_label":"tags","weight":0.5}}
{"group":"edges","data":{"id":"12.support_from.13.support_to","source":12,"source_port_label":"support_from","target":13,"target_port_label":"support_to","weight":2}}
{"group":"edges","data":{"id":"15.taggees.14.tags","source":15,"source_port_label":"taggees","target":14,"target_port_label":"tags","weight":0.5}}
{"group":"edges","data":{"id":"14.support_from.15.support_to","source":14,"source_port_label":"support_from","target":15,"target_port_label":"support_to","weight":2}}
The object passed to cy.layout() had a circular structure, so JSON.stringify() wouldn't print it. Adding the getCircularReplacer() function recommended here seemed to crash the browser. But manually removing elements from the layout object exposed the error: in a relative alignment constraint (documented under API here), I had an offset of '0' where I needed a 0, i.e. a string where a number was needed.
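For the record, the replacer in question is roughly the usual WeakSet-based version below; layoutOptions is just a stand-in name for whatever object is actually passed to cy.layout():

// Skip objects we've already visited so JSON.stringify() can cope with cycles.
const getCircularReplacer = () => {
  const seen = new WeakSet();
  return (key, value) => {
    if (typeof value === "object" && value !== null) {
      if (seen.has(value)) {
        return; // drop repeated/circular references
      }
      seen.add(value);
    }
    return value;
  };
};

console.log(JSON.stringify(layoutOptions, getCircularReplacer(), 2));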
OK, happily that problem is now fixed. That still leaves the original question: is there a faster way to find errors like that? (The static-typing advocates are surely cackling by now.)
I understand that there is definitely something wrong with my report (e.g. a column mismatch) and that I need to correct it, but what I see is a WCF error message that hides the actual problem, and it is exactly this hiding that irritates me much more than the original problem: the column mismatch.
I guess we need to adjust the WCF 'buffer size' and then we will get the original error message. But where is the config file?
A text search for "system.serviceModel" under C:\Program Files (x86)\Microsoft Visual Studio 10.0 doesn't turn up anything useful...
P.S. Since this is just a preview of the report, I do not think it is an SSRS configuration problem. The problem is localized somewhere in the DevStudio process or in DevStudio's internal web server process...
P.P.S. Please help me improve the question. I see that responders don't understand what kind of help I need.
I have encountered multiple "flavors" of this bug in SSRS Preview. It seems the renderer for Preview mode is quite fragile.
There is a simple way to solve this. Ignore the error and attempt to upload the RDL file to your reporting server. The uploader will happily tell you exactly what is wrong with your file: which field has a problem and what that problem is. If there are multiple errors, it will report every offending field and the error associated with each one.
I can create this bogus XML buffer error with any of the following:
Add a new Tablix, start to connect it to a dataset, then cancel out.
Copy/paste some text into a textbox from a MS Word document where one or more lines have a negative right indent (right column end is outside page margin).
Connect a dataset with a varchar(8000) returned value.
Please check whether any of your report items reference fields that are not in the existing dataset scope.
This indeed worked for me.
See the link below for more information:
http://connect.microsoft.com/SQLServer/feedback/details/742913/ssdt-reporting-services-designer-error
I have seen this error when adding a new field to an existing dataset by clicking "Refresh Fields".
The dataset source was a stored procedure. The result was that only a few of the original fields showed up in the dataset field list, and the new field did not. If I tried to preview the report, I got the XML buffer error.
The workaround was not to refresh the fields, but to hit "Add new field" and type the new field name into the dataset properties field list by hand.
Worked fine after that.
I got this error again today.
I had created a table to hold data to replace two slow queries. I changed some names to clean up the process.
I think the error actually means that there are so many problems with my report that the buffer holding the various error messages isn't large enough, which produces the message:
The size necessary to buffer the XML content exceeded the buffer quota
Of course this should be an easy fix but Microsoft has said that they will not fix it.
https://connect.microsoft.com/SQLServer/feedback/details/742913/ssdt-reporting-services-designer-error
EDIT: I've updated my answer based on having fixed the issue.
I'm currently experiencing this problem after having changed multiple stored procedures and updating the dataset names in the SSRS report.
And when I try to run the preview I get the exact same error.
As it turns out, after investigating the issue, the problem was that I had changed the name property of my datasets.
There were several places in my report where formulas or expressions used the old name properties of the datasets I had renamed. After I set the dataset name properties back to what they were, the real errors, like missing fields etc., came back.
I only changed the name property back to what it was; the references were still correctly pointing to my renamed stored procedures.
I had this problem when, after copying and pasting a tablix, CDbl in a formula got changed to Microsoft.ReportingServices.RdlObjectModel.ExpressionParser.VBFunctions.CDbl. I opened up the XML and removed all instances of "Microsoft.ReportingServices.RdlObjectModel.ExpressionParser.VBFunctions." and the report then worked.
I had a working report that gave me this error when I tried to add a column. I edited the .rdl file using Notepad++, and after SSRS prompted me to reload the changes from disk, it worked without issues.
I got this error after copying my Custom Code into Visual Studio to get syntax highlighting for better readability. Visual Studio added class definitions to the beginning and end of the file. After editing the code, I pasted it back into the report's Custom Code and then got this error. The fix was simply to remove the class definitions (Public Class Class1 and End Class) from the Custom Code. So check your Custom Code as well (if you have any).
I got this error after adding some new parameters to an existing report.
For some reason, when I created the parameters first and then modified the dataset to use the new parameters, I got the error; but when I modified the dataset first and then added the parameters, I did not get the error.
This seemed like very strange behavior to me, so I tested it by restoring the report from the repository and repeating the process three times with each method, and got identical behavior every time.
I was also facing this problem. I solved it with find and replace (a small script that does the same replacements is sketched below):
Microsoft.VisualBasic.Interaction.iif ==> iif
Microsoft.ReportingServices.RdlObjectModel.ExpressionParser.VBFunctions.cdbl ==> cdbl
I hope this helps someone. Thanks.
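For anyone who has to make these replacements repeatedly, here is a rough Node.js sketch that applies the same two fixes to an .rdl file; the script name and file handling are mine, purely for illustration:

// fix-rdl.js - strip the fully qualified prefixes back to the plain function names.
// Usage: node fix-rdl.js MyReport.rdl
const fs = require("fs");

const path = process.argv[2];
let rdl = fs.readFileSync(path, "utf8");

rdl = rdl
  .split("Microsoft.VisualBasic.Interaction.").join("")
  .split("Microsoft.ReportingServices.RdlObjectModel.ExpressionParser.VBFunctions.").join("");

fs.writeFileSync(path, rdl, "utf8");
console.log("Rewrote", path);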
Possible root causes
Parameter name is incorrect (case/order)
Accessing a non-existent property
and many more...
Solutions to get the exact error message:
Deploy the SSRS report and find the error there (already suggested by "Kim Crosser").
Temporarily remove the sections (SSRS/report content) you believe are error-free, to free space in the buffer so that you can get the actual error message. Later, add the removed sections back to the page.
I had the same error message, and it was entirely my own doing. It's a bit embarrassing, but if it helps someone out then great! I had accidentally copied my dataset query with a small sub-select statement still in it, which I had been using to check parameter/variable values.
Another solution is to open the .rdl file in Report Builder 3.0 (as opposed to Visual Studio) and try to preview it. I found this gave me the details of the error, although if more than one error is present it only shows the first.
I previously bound a TextBox to
Fields!FieldName
and fixed it with
Fields!FieldName.Value
With that said, and as the other answers show, this error comes in different flavors. My issue was fixed after I included the field's "Value" property.
I've been reading up on LINQ lately to start implementing it, and there's one particular aspect of how it generates UPDATE queries that bothers me.
When the entity code is generated automatically using SQLMetal or the Object Relational Designer, apparently all fields of all tables get the attribute UpdateCheck.Always, which means that for every UPDATE and DELETE query I'll get a SQL statement like this:
UPDATE table SET a = 'a' WHERE a='x' AND b='x' ... AND z='x', ad infinitum
Now, call me a purist, but this seems EXTREMELY inefficient to me, and it feels like a bad idea anyway, even if it weren't inefficient. I know the fetch will be done by the clustered primary key, so that's not slow, but SQL still needs to check every field after that to make sure it matches.
Granted, in some very sensitive applications something like this can be useful, but for the typical web app (think Stack Overflow), it seems like UpdateCheck.WhenChanged would be a more appropriate default, and I'd personally prefer UpdateCheck.Never, since LINQ will only update the actual fields that changed, not all fields, and in most real cases, the second person editing something wins anyway.
It does mean that if two people manage to edit the same field of the same row in the small time between reading that row and firing the UPDATE, then the conflict that would be found won't be fired. But in reality that's a very rare case. The one thing we may want to guard against when two people change the same thing won't be caught by this, because they won't click Submit at the exact same time anyway, so there will be no conflict at the time the second DataContext reads and updates the record (unless the DataContext is left open and stored in Session when the page is shown, or some other seriously bad idea like that).
However, as rare as the case is, I'd really rather not get exceptions in my code every now and then when this happens.
So my first question is, am I wrong in believing this? (again, for "typical" web apps, not for banking applications)
Am I missing some reason why having UpdateCheck.Always as default is a sane idea?
My second question is: can I change this in a civilized way? Is there a way to tell SQLMetal or the ORD which UpdateCheck attribute to set?
I'm trying to avoid the situation where I have to remember to run a tool I'll have to make that'll take some regexes and edit all the attributes in the file directly, because it's evident that at some point we'll run SQLMetal after an update to the DB, we won't run this tool, and all our code will break in very subtle ways that we probably won't find while testing in dev.
Any suggestions?
War stories are more than welcome, i'd love to learn from other people's experiences on this.
Thank you very much!
Well, to answer the first question - I agree with you. I'm not a big fan of this "built in" optimistic concurrency, especially if you have timestamp columns or any fields which are not guaranteed to be the same after an update occurs.
To address the second question - I don't know of any way to override SqlMetal's default approach (UpdateCheck = Always), so we ended up writing a tool which sets UpdateCheck = Never for the appropriate columns. We're using a batch file to call SqlMetal and afterwards running the tool; a rough sketch of that kind of post-processing step is below.
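As a rough sketch (not the tool we actually wrote; it assumes the SqlMetal /dbml -> edit -> /code workflow and blindly marks every column, both simplifications for illustration):

// set-updatecheck.js - add UpdateCheck="Never" to every <Column .../> in a
// SqlMetal-generated .dbml that doesn't already specify an UpdateCheck,
// before running SqlMetal /code against the edited .dbml.
// Usage: node set-updatecheck.js Database.dbml
const fs = require("fs");

const path = process.argv[2] || "Database.dbml";
let dbml = fs.readFileSync(path, "utf8");

dbml = dbml.replace(/<Column (?![^>]*UpdateCheck=)/g, '<Column UpdateCheck="Never" ');

fs.writeFileSync(path, dbml, "utf8");
console.log('Set UpdateCheck="Never" on all columns in', path);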
Oh, while I think of it - it was also a treat to find that SqlMetal models relationships so that a foreign key gets set to null instead of "Delete On Null" (for join tables in particular). We had to use the same post-generation tool to set these appropriately too.