Don't render a table unless a certain condition is true - BIRT

I am running into grouping errors on my dataset and am looking to run a script in beforeFactory so that if a certain value is null, the table and its grouping are not rendered at all.
I've found a related post here, but I don't think it fully addresses my use case. Will preventing the table from rendering still produce the errors?

Dropping a table in a beforeFactory script prevents its bound datasets from being executed, and therefore prevents errors caused by those datasets, except of course if those datasets are also bound to other report elements.
The link you mentioned should address your use case. However, you don't say how this "certain value" that you compare with null is computed: if it comes from a report parameter or a context variable, this approach is fine; otherwise it is probably wrong, because beforeFactory is triggered before any dataset runs, so the script cannot use a value returned by a dataset.
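For reference, a minimal sketch of such a beforeFactory script, assuming a table element named "InvoiceTable" and a report parameter "clientId" (both names are hypothetical placeholders for your own):

// beforeFactory - drop the table before the factory runs, so its
// dataset (and the grouping on it) is never executed
if (params["clientId"].value == null) {
    reportContext.getDesignHandle().findElement("InvoiceTable").drop();
}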
I hope it helps.

Related

Hiding parameter prevents other parameters from refreshing

I have a first parameter (a client drop-down) that passes an ID to the second parameter, which is then used by the third and fourth parameters. During testing, all the parameters are set to visible so I can verify what's being passed. As soon as I hide that second parameter, the rest don't refresh. Again: visible, the correct info flows through; hidden, DOA. I have this set up exactly the same way on at least a dozen other reports with no issue. I'm not sure where this is going wrong, since I can see the correct value being passed while the parameter is visible.
I was able to duplicate your issue. While I can't find a reference to support my supposition, I believe that hidden parameters (even those with a default value) are not evaluated until rendering, which would explain this behavior. Data sets that drive parameter values are run prior to the report being rendered.
As you're talking about trouble with dependencies between the parameter values, I assume you're using queries to drive the available values for parameters three and four. If that's the case, simply embed the logic that relates the value(s) of parameter 1 to the value(s) of parameter 3 into the query that drives parameter 3's available values. By cutting parameter 2 out and incorporating its logic into parameter 3's query, you work around this issue.
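For example, a sketch of what parameter 3's available-values query could look like (all table and column names here are hypothetical):

-- available values for Parameter 3, joining through the lookup
-- table that previously drove Parameter 2
SELECT a.AccountCode, a.AccountName
FROM AccountLookup a
INNER JOIN ClientLookup c ON c.ClientID = a.ClientID
WHERE c.ClientID = @Client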
If you're having trouble accomplishing this, reply in a comment and I'll see if I can help. As it stands, I don't know enough about what you're trying to do to help any more.

Need help rewriting XQuery to avoid expanded tree cache full error in MarkLogic

I am new to XQuery and MarkLogic.
I am trying to update documents in MarkLogic and am getting the expanded tree cache full error.
Just to get the work done I have increased the expanded tree cache, but that is not recommended.
I would like to tune this query so that it does not need to simultaneously cache as much XML.
Here is my query:
I have uploaded my query as an image because it was not so pretty when I pasted it into the editor. If anyone knows a better way, please suggest it.
Thanks in advance.
Expanded tree cache errors can be caused by executing queries that select too many XML nodes at once. In your example, this is likely the culprit: /tx:AttVal[tx:AttributeName/text()=$attributeName].
It's possible that calling text() is the source of your problem (and text() is probably not what you mean anyway - see this blog), since it causes MarkLogic to evaluate that function on all of those nodes; simply using /tx:AttVal[tx:AttributeName = $attributeName] may solve your problem.
Next, I would consider adding a path range index on /tx:AttVal/tx:AttributeName and querying those nodes using cts:search and cts:path-range-query. This will be substantially faster than plain XPath without a range index. It's also possible to use XPath with a range index: MarkLogic will automatically optimize the XPath expression to use it; however, there can be reasons it doesn't optimize the expression correctly, and you would want to check that with xdmp:plan.
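For instance, assuming a string path range index has been configured on /tx:AttVal/tx:AttributeName (and that the tx namespace prefix is declared in your query), the lookup could be written as:

(: resolved from the range index instead of walking nodes; adjust the
   searchable expression to where tx:AttVal sits in your documents :)
cts:search(//tx:AttVal,
  cts:path-range-query("/tx:AttVal/tx:AttributeName", "=", $attributeName))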
Also note that the general best-practice recommendation for XML in MarkLogic is to use "semantic XML". E.g., when you mean an attribute, use an attribute: <some-node AttributeName="AttVal"/>. MarkLogic's indexes are optimized out of the box for semantic XML design. However, if you have no option but to work with XML that isn't, then that's exactly what path range indexes were designed for.
I've just solved exactly this scenario. There are two things I did.
I put the node-replace and node-insert type calls (that is, any calls that modify the XML structure) into a separate module and then called that module using xdmp:invoke, passing in any parameters required, like this:
let $update := xdmp:invoke("/app/lib/update-attribute-node.xqy",
  (: external variables, passed as alternating QName/value pairs :)
  (xs:QName("newValue"), $new),
  (: resolve the invoked module from the modules database :)
  <options xmlns="xdmp:eval"><modules>{xdmp:modules-database()}</modules></options>)
The reason this works is that the call to xdmp:invoke happens in its own transaction, and once it completes, the memory is cleared up. If you don't do this, then each time you call the update or insert function it will not actually do the write until the end, in one single transaction, meaning your memory will fill up pretty quickly.
Any time I needed to loop over paths in MarkLogic (or documents, or whatever they are called - I've only been using MarkLogic for a few days) and there were a large number of them, I processed them only a few at a time, like below. I came up with an elaborate way of skipping and taking only a batch of documents at a time, but you can do it in any number of ways.
let $whatever := xdmp:directory("/whatever/")[$start to $end]
I also put this into a separate module so that it is processed immediately and not as part of one single transaction.
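Putting the two ideas together, the driver query might look something like this (a rough sketch; the module path, directory, and batch size are hypothetical, and the invoked module would declare $start and $end as external variables and select its slice with the [$start to $end] predicate shown above):

let $batch-size := 100
(: xdmp:estimate counts matching fragments from the indexes
   without loading the documents into the expanded tree cache :)
let $total := xdmp:estimate(xdmp:directory("/whatever/"))
for $start in (1 to $total)[. mod $batch-size = 1]
return
  (: each invocation is its own transaction, so memory is released per batch :)
  xdmp:invoke("/app/lib/process-batch.xqy",
    (xs:QName("start"), $start,
     xs:QName("end"), fn:min(($start + $batch-size - 1, $total))))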
Putting all expensive calls into separate modules and taking only a subset of large data sets at a time helped me solve my expanded tree cache full errors.

RDLC Report With Empty DataSet

I'm currently trying to get an RDLC report to work, and it works fine except when I pass it an empty dataset (the dataset is determined dynamically at runtime -- ugly, but this is the code I was handed, and I'm not quite experienced enough to feel confident about wiping it and rewriting it). When I pass it an empty DataSet, it fails with "Object reference not set to an instance of an object." The strange thing is, I have another report that gets passed a similar DataSet, and it doesn't fail. I'm not sure why one report with an empty DataSet fails but the other doesn't.
Any help is much appreciated!
Figured it out. As it turns out, there was a stray parameter that wasn't getting set... rebuilding the report from the ground up did the trick.

Modifying parameters with code in Microsoft Reporting Services

I made a report with about 30 different rectangles and textboxes that have different visibility expressions depending on the parameters. (It's a student invoice, and many different messages have to appear depending on the semester.) When I wrote all the expressions, I coded the parameters in all upper case. Now I have a problem when users enter lowercase letters: the SQL all works fine, since it is not case sensitive, but the various rectangles and textboxes don't show. Is there a way in the report code to capitalize all the parameters before the SQL runs? Or do I actually have to go back to every visibility expression and add separate IIFs for upper and lower case? (That seems incredibly silly.) I can't change my parameters to numbers because I have been given strict requirements for input. Thanks.
I do not know if this is the most elegant solution, but you could accomplish this by following this procedure for every parameter on the Report Parameters page:
1) Rename the parameter, keeping its prompt the same as the old parameter's.
2) Add a new parameter with the same name as the old parameter.
3) Mark this new parameter as Hidden.
4) Make sure the new parameter's available values are marked as Non-queried (the available values will never actually be used).
5) Mark the Default Values as Non-queried, using the following syntax:
=UCase(Parameters!OldParameterName.Value)
Can't you just UCase the params? (Do it in the XML view; it will be quicker, and you might even be able to do a regex find/replace.)
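Either way, the goal is that each visibility expression compares against a normalized value. A sketch of a single Hidden expression using this approach (the Semester parameter and the "FALL" value are hypothetical):

=IIF(UCase(Parameters!Semester.Value) = "FALL", False, True)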

LINQ Conflict Detection: Setting UpdateCheck attribute

I've been reading up on LINQ lately in order to start implementing it, and there's a particular thing about how it generates UPDATE queries that bothers me.
When the entity code is generated automatically using SQLMetal or the Object Relational Designer, all fields of all tables apparently get the attribute UpdateCheck.Always, which means that for every UPDATE and DELETE query, I'll get a SQL statement like this:
UPDATE table SET a = 'a' WHERE a='x' AND b='x' ... AND z='x', ad infinitum
Now, call me a purist, but this seems EXTREMELY inefficient to me, and it feels like a bad idea anyway, even if it weren't inefficient. I know the fetch will be done by the clustered primary key, so that's not slow, but SQL still needs to check every field after that to make sure it matches.
Granted, in some very sensitive applications something like this can be useful, but for the typical web app (think Stack Overflow), it seems like UpdateCheck.WhenChanged would be a more appropriate default, and I'd personally prefer UpdateCheck.Never, since LINQ will only update the actual fields that changed, not all fields, and in most real cases, the second person editing something wins anyway.
It does mean that if two people manage to edit the same field of the same row in the short window between reading that row and firing the UPDATE, the conflict won't be detected. But in reality that's a very rare case. And the one thing we might actually want to guard against, two people changing the same thing, won't be caught by this anyway, because they won't click Submit at exactly the same time, so there will be no conflict at the moment the second DataContext reads and updates the record (unless the DataContext is left open and stored in Session while the page is shown, or some other seriously bad idea like that).
However, as rare as the case is, I'd really like not to be getting exceptions in my code every now and then when this happens.
So my first question is, am I wrong in believing this? (again, for "typical" web apps, not for banking applications)
Am I missing some reason why having UpdateCheck.Always as default is a sane idea?
My second question is, can I change this in a civilized way? Is there a way to tell SQLMetal or the ORD which UpdateCheck attribute to set?
I'm trying to avoid the situation where I have to remember to run some regex-based tool that edits all the attributes in the generated file, because it's evident that at some point we'll run SQLMetal after a DB update, forget to run the tool, and all our code will break in very subtle ways that we probably won't find while testing in dev.
Any suggestions?
War stories are more than welcome; I'd love to learn from other people's experiences with this.
Thank you very much!
Well, to answer the first question: I agree with you. I'm not a big fan of this "built-in" optimistic concurrency, especially if you have timestamp columns or any fields which are not guaranteed to be the same after an update occurs.
To address the second question: I don't know of any way to override SqlMetal's default approach (UpdateCheck = Always), so we ended up writing a tool which sets UpdateCheck = Never for the appropriate columns. We use a batch file to call SqlMetal and afterwards run the tool.
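For anyone building a similar post-generation step, this is the shape of the change in the generated mapping (a sketch; the table, column, and class names are hypothetical, and only the UpdateCheck value is what the tool alters):

using System.Data.Linq.Mapping;

[Table(Name = "Invoice")]
public class Invoice
{
    // primary key: always part of the UPDATE's WHERE clause
    [Column(IsPrimaryKey = true)]
    public int Id;

    // SqlMetal emits no UpdateCheck, so it defaults to Always and the old
    // value lands in the WHERE clause; the tool rewrites the attribute to:
    [Column(UpdateCheck = UpdateCheck.Never)]
    public string Comment;
}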
Oh, while I think of it: it was also a treat to find that SqlMetal models relationships so that deletes set the foreign key to null instead of using "Delete On Null" (for join tables in particular). We had to use the same post-generation tool to set these appropriately, too.
