Conditional formatting in Cognos and performance

I have added conditional formatting to a Cognos report, and it seems to have slowed down.
The report was running okay before I added the formatting, and I have not changed anything else on the report, other than the conditional formatting.
Does conditional formatting, as a general rule, cause Cognos to run slower?

As a general concept, conditional formatting will not slow down a report.
That said, I can envision one scenario where the addition of conditional formatting could have an impact: You base your conditional formatting on a query item that wasn't previously included in the main data container (list, crosstab etc.).
Cognos' SQL generation is opportunistic. If your report only references one query, all other queries will be left out of the SQL statement sent to the data source. If you include a data item that comes from another query (assuming there is an established join between the two), Cognos will now include the second query in the SQL statement, constructing a join with the original query in accordance with how you defined the relationship. Joining tables inevitably results in some slowdown.
If your original report took 10 seconds to generate and you then added conditional formatting that forced a join, the result will inevitably take longer. It could be an imperceptible amount of time or a considerable slowdown, depending on the query joined and the nature of the join.
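To make that scenario concrete, here is a rough sketch of the kind of SQL difference involved; the table and column names are invented for illustration and are not what Cognos would literally generate:
-- Before: only the main query's table is referenced
SELECT o.order_id, o.order_total
FROM orders o

-- After: the conditional style references a data item from a second query,
-- so the generated SQL now has to join the second table as well
SELECT o.order_id, o.order_total, c.credit_rating
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id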
Barring the scenario I described, I would generate the tabular data for the query and see how long it takes to come back. When you generate the tabular data, conditional formatting is ignored. If the tabular data is slow, then you know it's not the conditional formatting causing the problem.
If you want to really track Cognos performance, check out the article on my blog regarding automatic report timing: Automated Cognos Report Performance Measurement

Related

Checking for queries not used in an Oracle report

I'm using Oracle Report Builder 9.0.4.1.0 and I have a heavy report that defines a large number of queries. I suspect that not all of those queries are used in the report and that some are not linked to any layout object.
Is there an easy way to detect which queries (or other objects) aren't used at all in a specific report, rather than deleting a query, compiling, running, and verifying one by one whether each is used or not?
Thanks
If there is an easy way to do that, I don't know it. A long time ago, when Reports 1.x was in use, reports were saved in the database, so you could write a query to fetch the metadata you're interested in. I never did that myself, but it would have been an option. Now, all you have is an RDF (or a JSP) file.
However, a few suggestions, if I may.
Open the Paper Layout editor. Click a repeating frame and look at its property palette; it tells you which group the frame belongs to. The group itself can be viewed in the Data Model layout.
As there usually aren't that many repeating frames, you should be able to eliminate queries that don't have any frames, i.e. that don't contribute to the final result.
Another option is to put a condition
WHERE 1 = 2
into every query so that it won't return any rows. Run the report and check what's missing, then remove the condition so that you get values again. Move on to the second query, and so on. That's a little tedious and time-consuming, but it should still be faster than deleting queries.
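For example, if one of the report's queries looked like the first statement below (names invented), you would temporarily change it to the second, run the report, and note which part of the output disappears:
-- Original query in the Data Model
SELECT emp_id, emp_name, salary
FROM employees
WHERE dept_id = :p_dept_id

-- Temporarily modified so it returns no rows
SELECT emp_id, emp_name, salary
FROM employees
WHERE dept_id = :p_dept_id
AND 1 = 2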
You can also output the report results to an XML file. Each query that returns data will contribute something inside its XML tags.

Adding multiple SSRS reports into one report is very slow

I inherited a report from a developer who combined 5 reports into one SSRS report. It looks like he just copied and pasted each tablix from the original reports one below the other. This was done so that when the user exports to Excel they can have each report on a separate tab. I've never built a combined SSRS report like this before, so I'm only now analyzing how the whole thing works. A major problem I'm finding is that it runs extremely slowly, about 10 minutes, seemingly because it has to run all 5 queries. Each stored procedure is listed separately as a dataset. Does anyone know a better way to combine multiple SSRS reports onto one page, or at least how to make this thing faster?
The first step to improving performance for an SSRS report is to determine what the bottleneck is. Run a query against the view named ExecutionLog4 in the ReportServer database. For each recent execution of a report, the view will give you a record that includes 3 critical fields: TimeDataRetrieval, TimeProcessing, and TimeRendering.
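A minimal query along those lines would look something like this; note that the exact view name and columns vary by SSRS version (older servers expose ExecutionLog2 or ExecutionLog3 instead), so adjust accordingly:
-- Recent report executions with the three timing components
SELECT TOP 50
       ItemPath,
       TimeStart,
       TimeDataRetrieval,
       TimeProcessing,
       TimeRendering,
       [RowCount]
FROM ReportServer.dbo.ExecutionLog4
ORDER BY TimeStart DESC;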
TimeDataRetrieval indicates how long (in milliseconds) it takes for all of the queries to run and return your datasets. If this number is high, then you will need to tune your queries or eliminate some of them to improve performance. You can run a profiler trace to identify which of the procedures is running slowly.
Keep in mind also that subreports fire their dataset queries each time they are rendered in the report, so even a minor performance hiccup in a subreport's dataset gets magnified by the number of executions.
TimeProcessing indicates how much time the report server spends manipulating the retrieved data. If this number is high, consider moving aggregate calculations that run many times within the report to the SQL side.
TimeRendering indicates how long the server takes to actually render the report. If this number is high, consider avoiding or simplifying expressions used on visual properties that repeat over and over again. This scenario is less common than the other two, in my experience.
Furthermore, here are some tips I've picked up that help to avoid performance issues:
-Avoid using row visibility expressions if you expect a large number of rows to be returned.
-Hiding an object does not prevent dataset execution. If your datasets have a similar structure, consider combining them and using object filters to limit what is displayed in different sections. Or use an IF statement in your stored procedure if you only intend to display one of several choices depending on data or parameters (see the sketch after this list).
-Try to limit the number of column groupings in a large tablix. Each additional grouping multiplies the number of rows of data that has to be returned and then pivoted into those groupings.
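As a rough illustration of the stored-procedure branching idea from the second tip above, here is a sketch; the procedure, table, and parameter names are all hypothetical:
-- Return only the section the report actually needs,
-- instead of always executing every dataset's full query
CREATE PROCEDURE dbo.GetReportData
    @Section VARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;

    IF @Section = 'Summary'
        SELECT Region, SUM(SalesAmount) AS TotalSales
        FROM dbo.Sales
        GROUP BY Region;
    ELSE
        SELECT Region, OrderDate, SalesAmount
        FROM dbo.Sales;
END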
More info on SSRS performance can be found at
https://technet.microsoft.com/en-us/library/bb522806(v=sql.105).aspx
This was written for 2008R2, but seems mostly applicable to 2012 as well.
Give all that a shot, then post back here with a more specific question if you get stuck.

Materialized View vs SSAS Cube

Here is the current scenario: we have 3 tables in an Oracle DB (with millions of records) which are being used to generate SSRS reports.
These reports display complex data calculations such as deviations, medians, etc.
SSRS fetches data using stored procedures in Oracle (joining all three tables) based on date parameters.
Calculations are performed in SSRS and data is displayed in tables and charts
Now, for a small date duration the report is generated quite fast, so no issues there.
When the date range is big, like a week or 2-3 months, the report takes a lot of time to process, and most of the time it times out as well.
To resolve this issue, I am thinking of removing the calculations from SSRS and moving them to the database level, where we can have pre-calculated data
that is served to the SSRS reports for faster report generation.
In order to do this, I can see 2 options -
Oracle Materialized Views
SSAS Cube
I have never used materialized views before, so I am a bit skeptical about their performance, especially regarding FAST REFRESH issues.
Which way would you prefer? MV, SSAS, or a mix of both?
Data models (SSAS) are great for organizing data, consolidating business logic, and defining how calculations behave in different scopes. They are generally faster to query than the raw data which is what you currently have. There is some caching involved, but you still have to query the data and wait for it to be processed. Models are also most appropriate when you have multiple reports that will be using a common set of data.
With a materialized view, you can shift the heavy lifting of calculation time to the scheduled refresh. Think of it as essentially the same as creating a new table that is refreshed by a procedure. This will greatly improve query times for the report especially if the date column you're filtering on is indexed. Also, the development and maintenance requirements are much lower for this than a model.
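As a rough sketch of that approach (object and column names are assumptions, not your actual schema), the materialized view and the index on the date column might look like this:
-- Pre-aggregate the expensive calculations on a schedule,
-- instead of at report run time
CREATE MATERIALIZED VIEW report_daily_stats
BUILD IMMEDIATE
REFRESH COMPLETE
START WITH SYSDATE NEXT SYSDATE + 1   -- refresh once a day
AS
SELECT t1.report_date,
       MEDIAN(t2.measure_value) AS median_value,
       STDDEV(t2.measure_value) AS deviation,
       AVG(t3.other_measure)    AS avg_other
FROM   table_one t1
JOIN   table_two t2   ON t2.t1_id = t1.id
JOIN   table_three t3 ON t3.t1_id = t1.id
GROUP BY t1.report_date;

-- Index the column the report filters on
CREATE INDEX ix_report_daily_stats_date ON report_daily_stats (report_date);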
So, based on your specifications I would suggest the materialized view.
I would concur with the materialized view (MV) approach. The amount and type of changes (insert vs. update vs. delete) will determine whether a fast refresh is possible or practical.
Counterintuitively, a FULL refresh is often the better approach, since you can take better advantage of set-based SQL processing, together with parallelism, to build the MV.
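A complete refresh kicked off manually (or from a scheduler job) would look something like this, using the MV name from the earlier sketch; parallelism only helps if the underlying objects are parallel-enabled:
BEGIN
  DBMS_MVIEW.REFRESH(
    list           => 'REPORT_DAILY_STATS',
    method         => 'C',       -- C = complete refresh
    atomic_refresh => FALSE,     -- allows truncate + direct-path reload
    parallelism    => 4);
END;
/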

Oracle PL/SQL: choosing the update/merge column dynamically

I have a table with data relating to several moments in time that I have to keep updated. To save space and time, however, each row in my table refers to a given day, and hourly and quarter-hourly data for that day are scattered across several columns in that same row. When updating the data for a particular moment in time I therefore must choose the column to be updated through some programming logic in my PL/SQL procedures and functions.
Is there a way to dynamically choose the column or columns involved in an update/merge operation without having to assemble the query string anew every time? Performance is a concern and the throughput must be high, so I can't do anything that would perform poorly.
Edit: I am aware of the normalization issues. However, I would still like to know a good way of choosing the columns to be updated/merged dynamically and programmatically.
The only way to dynamically choose what column or columns to use for a DML statement is to use dynamic SQL. And the only way to use dynamic SQL is to generate a SQL statement that can then be prepared and executed. Of course, you can assemble the string in a more or less efficient manner, you can potentially parse the statement once and execute it multiple times, etc. in order to minimize the expense of using dynamic SQL. But using dynamic SQL that performs close to what you'd get with static SQL requires quite a bit more work.
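As a minimal sketch of the dynamic SQL route, assuming (hypothetically) that the quarter-hour columns follow a predictable naming pattern such as QH_01 .. QH_96 and that the table is called READINGS, you could validate the column index and then bind the values:
CREATE OR REPLACE PROCEDURE update_reading (p_day     IN DATE,
                                            p_quarter IN PLS_INTEGER,  -- 1..96
                                            p_value   IN NUMBER)
IS
  l_col VARCHAR2(30);
  l_sql VARCHAR2(200);
BEGIN
  -- Never concatenate unchecked input: derive the column name from a
  -- validated index so only known column names can end up in the statement
  IF p_quarter NOT BETWEEN 1 AND 96 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Invalid quarter-hour index');
  END IF;

  l_col := 'QH_' || TO_CHAR(p_quarter, 'FM00');   -- e.g. QH_01 .. QH_96
  l_sql := 'UPDATE readings SET ' || l_col || ' = :val WHERE day = :dy';

  EXECUTE IMMEDIATE l_sql USING p_value, p_day;
END update_reading;
/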
I'd echo Ben's point: it doesn't appear that you are saving time by structuring your table this way. You'll likely get much better performance by normalizing the table properly. I'm not sure what space you believe you are saving, but I would tend to doubt that denormalizing your table structure saves you much, if anything, in terms of space.
One way to do what is required is to create a package with all possible updates (there aren't that many, as I'll only update one field at a given time) and then choose which statement to use depending on my internal logic. This would, however, lead to a big if/else or switch/case-like block. Is there a way to achieve similar results with better performance?
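If you prefer to stay with static SQL, the package idea described above could look roughly like this (names invented, most branches omitted); each branch is a plain static UPDATE, so the statements are parsed once and cached, at the cost of the long CASE:
CREATE OR REPLACE PACKAGE readings_pkg IS
  PROCEDURE set_quarter (p_day IN DATE, p_quarter IN PLS_INTEGER, p_value IN NUMBER);
END readings_pkg;
/

CREATE OR REPLACE PACKAGE BODY readings_pkg IS
  PROCEDURE set_quarter (p_day IN DATE, p_quarter IN PLS_INTEGER, p_value IN NUMBER) IS
  BEGIN
    -- One static UPDATE per column; only the branch for the requested
    -- quarter-hour runs. Branches for QH_03 .. QH_95 omitted for brevity.
    CASE p_quarter
      WHEN 1  THEN UPDATE readings SET qh_01 = p_value WHERE day = p_day;
      WHEN 2  THEN UPDATE readings SET qh_02 = p_value WHERE day = p_day;
      WHEN 96 THEN UPDATE readings SET qh_96 = p_value WHERE day = p_day;
      ELSE RAISE_APPLICATION_ERROR(-20001, 'Invalid quarter-hour index');
    END CASE;
  END set_quarter;
END readings_pkg;
/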

Identify source of linq to sql query

We are starting to have numerous LINQ to SQL queries in our code. We have started to pay more attention to performance and are seeing queries that we think are coming from LINQ. They have the t1, t2...tN aliases, so we are sure they are LINQ-generated. However, we are having difficulty determining the location in the code that each query comes from. Obviously we have a general idea based on the tables and columns requested.
Is there a way to "tag" or "name" a query so that it shows up in a trace, to more easily identify the query?
You might find my LINQ to SQL query profiler useful; it allows you to log queries together with the stack trace and db-side I/O, timings, execution plans, and other details that can be used to pinpoint both what effect the query had and where it came from (in code, what user action(s) and/or calls triggered it, etc.).
It has a number of filter options that you can control from within your own code, so you can set it up to catch only queries that fulfill specific criteria: e.g. queries that are expensive I/O-wise, have long execution times, do table scans, hit specific tables, match your own custom filters, etc. It is designed for runtime profiling, so you can distribute the logging component with your apps and switch it on as necessary in production environments.
I have posted a short intro to it here:
http://huagati.blogspot.com/2009/06/profiling-linq-to-sql-applications.html
And you can download the profiler and get a free 45-day trial license from:
http://www.huagati.com/L2SProfiler/
I have, to date, found no way to do this.
