Compound filters in Cognos 10.2

I've encountered an interesting situation with Cognos Report Studio 10.2. I've written a simple report of employees with one report page and one query.
Version 1 of the report uses eight individual filters (a=1, b=2, c=3, ...) and the output is 12,869 records.
Version 2 is exactly the same as Version 1, except the eight filters are combined into one with a compound statement (a=1 AND b=2 AND c=3 AND ...), and the output is 12,010 records.
Logically, shouldn't they produce identical output?

Go to Tools > Show Generated SQL in both reports. Compare the generated SQL using a tool like WinMerge (http://winmerge.org). That should make the difference obvious.
If the problem is still not apparent after comparing the queries, run the vendor-specific SQL in your database and verify whether you get the same counts from the queries as you do from the reports.
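If the SQL diff still doesn't explain the 859-row gap, a hedged way to narrow it down in the database (the employees table and the a/b/c columns below simply mirror the question's notation and are not real object names) is to add the predicates one at a time and watch where the count changes:

-- Add the predicates cumulatively and compare the counts at each step.
SELECT COUNT(*) FROM employees WHERE a = 1;
SELECT COUNT(*) FROM employees WHERE a = 1 AND b = 2;
SELECT COUNT(*) FROM employees WHERE a = 1 AND b = 2 AND c = 3;
-- ... continue through all eight predicates and compare the results
-- against the two report totals (12,869 and 12,010).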


Error: 'ORA-24374: define not done before fetch or execute and fetch' when refreshing a report in Crystal Reports

Using
Crystal Reports 2013
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
Failed to retrieve data from the database.
Details: HY000: [Oracle][Ora]ORA-24374: define not done before fetch or execute and fetch
I have no comments (multi-line or single line), so https://apps.support.sap.com/sap/support/knowledge/public/en/2322011 doesn't apply.
The query works fine in DbVisualizer.
I'm trying to combine two reports. Records from the summary report should be in the group footer of the detail report. (Yes, my first thought was to use a subreport and call it a day. When I determined the report would take all week to run, I decided to have the database server do all of the work.)
My query is 384 lines long, uses 16 parameters, and has no comments. It is a join of two main queries. I have tried using subqueries and common table expressions. As expected, same results both ways. All of the parameters used in the query are in the Parameter List in the Modify Command window in Crystal Reports. I have looked through every parameter definition in the Parameter List and they all look good.
When I have seen this error message before, I was able to delete comments and move forward. This is different.
Any idea what Crystal Reports is doing here?

Conditional formatting in Cognos and performance

I have added conditional formatting to a Cognos report, and the report seems to have slowed down.
The report was running okay before I added the formatting, and I have not changed anything else on the report, other than the conditional formatting.
Does conditional formatting, as a general rule, cause Cognos to run slower?
As a general concept, conditional formatting will not slow down a report.
That said, I can envision one scenario where the addition of conditional formatting could have an impact: you base your conditional formatting on a query item that wasn't previously included in the main data container (list, crosstab, etc.).
Cognos' SQL generation is opportunistic. If your report only references one query, all other queries will be left out of the SQL statement sent to the data source. If you include a data item that comes from another query (assuming there is an established join between the two), Cognos will now include the second query in the SQL statement, constructing a join with the original query in accordance with how you defined the relationship. Joining tables inevitably results in some slowdown.
If your original report took 10 seconds to generate and you then added conditional formatting that forced a join, the result will inevitably take longer. It could be an imperceptible amount of time or a considerable slowdown, depending on the query joined and the nature of the join.
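As a purely illustrative sketch (the table, column, and join names here are hypothetical, not what Cognos will actually emit), the generated SQL might grow from a single-table statement into a join once the conditional-formatting item pulls in a second query subject:

-- Before: only the main query subject is referenced
SELECT e.employee_id, e.salary
FROM employees e;

-- After: the conditional-formatting item comes from a second query subject,
-- so the join defined in the model is now included
SELECT e.employee_id, e.salary, d.department_name
FROM employees e
JOIN departments d ON d.department_id = e.department_id;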
Barring the scenario I described, I would generate the tabular data for the query and see how long it takes to come back. When you generate the tabular data, conditional formatting is ignored, so if the tabular data is slow you know it's not the conditional formatting causing the problem.
If you want to really track Cognos performance, check out the article on my blog regarding automatic report timing: Automated Cognos Report Performance Measurement

Update functionality to data base from reporting tools

Is there any way we can update values in a BIRT report that will, in turn, update the database? We need to present a report generated from Microsoft SQL Server to the client. We tried providing the report in Excel, but our client changes the format and it is then difficult to consume it again in our proprietary tool
(which is Microsoft SQL based). Is there any way we can achieve this? The client should update values in the report and the changes should be reflected in the DB.
While it's possible to write back to the DB from BIRT using a servlet (see the Eclipse Community Forum), I don't know of a way for BIRT to track the changed values.
While it's difficult to compare Excel files, it should be simpler to create CSV files from those Excel files and compare the CSV files, independent of Excel formatting changes.
I see the gathering of value changes and writing them back to the DB as a separate workflow, independent of the reporting.
Reporting tools are made for generating output only.
A general automatic write-back mechanism is impossible, if you think about it from a more abstract point of view:
There's data D in the database (usually spread across several tables T1, ..., Tn, and records R1, ..., Rm).
The report output data O = (o1, o2, ...) is the result of a more or less complex (i.e., non-trivial) function f(R1, ..., Rm).
An automatic back-propagation mechanism like the one you dream of would have to know what changing the value of o1 from "spam" to "eggs" means for R1, ..., Rm.
... Or even for records which were not selected by f, for example if the user changed the value of a primary key column.
This is only possible if the function f is bijective, but usually f isn't bijective. Even if it is, the task of inverting a non-trivial function is very hard.
Thus, if you want to let the user change values and persist the changes inside the DB, you need some kind of database UI or some kind of import interface.
Depending on your database, it might be as trivial as letting the user work with Oracle SQL Developer or similar tools that support importing data from Excel sheets.
However, these tools are intended for SQL developers, as the name implies.
OTOH, if all you want is to perform DML statements in BIRT, this is possible indirectly: You can write stored procedures in the database doing the DML work, and call these procedures from BIRT (use a JDBC Stored Procedure Query instead of JDBC SQL Select Query).
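As a minimal sketch of that indirect route (the procedure, table, and column names below are made up; T-SQL is used here since the question mentions a Microsoft SQL Server based tool):

-- Hypothetical stored procedure that applies one value change submitted via report parameters.
CREATE PROCEDURE dbo.usp_apply_report_change
    @record_id INT,
    @new_value NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.report_source_table
       SET reported_value = @new_value
     WHERE record_id = @record_id;
END;

In the BIRT JDBC data set, the query text would then be something along the lines of { call dbo.usp_apply_report_change(?, ?) }, with the two parameters bound to report parameters.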

How can we do data analysis for DB replication project

We are facing one issue in our project, namely a data verification issue.
The project is about replication of data from Sybase to Oracle DBs.
The table structure for Table A is the same across Sybase and Oracle:
the same columns and the same primary key combination across all the databases.
E.g., if Sybase has Table A with columns a, b, and c,
a table with the same name and the same columns will be available in the other databases.
We are done with the replication part, but we have faced some silent failures, i.e. data discrepancies, and are wondering if there is any tool already available for this.
Any information on this would be helpful. Thanks.
Sybase (now SAP) has a couple products that can be used for data comparisons and reconciliation:
rs_subcmp - an older, 32-bit tool that comes with the Sybase Replication Server product and can be used to compare data between source and target; SQL reconciliation scripts can be generated from the differences and then applied to the target to bring it in sync with the source; if your tables are more than 1GB in size you can still use rs_subcmp, but you'll need to create multiple comparison jobs (via where clauses) to work on different subsets of your tables [I don't recall if rs_subcmp can be used for heterogeneous replication setups, e.g., ASE-Oracle.]
Data Assurance (DA) - the newer, 64-bit product ... also from Sybase ... which can also compare data and (re)sync the target(s) from the source (either via SQL reconciliation scripts or directly); DA is capable of handling comparisons between a handful of different RDBMS products (e.g., ASE-Oracle); I'm currently working on a project where one of the requirements is to validate (and reconcile where needed) 200+ TB of data being migrated from Oracle to HANA, and I'm using DA for the validation/reconciliation portion of the project.
As @TenG has hinted at in his answer, there's a good bit of effort involved in comparing data and generating code to reconcile the differences. Rolling your own code is doable but will entail a lot of work. If you've got the money, you'll likely find 3rd party tools can get most/all of the work done for you.
If you used a 3rd party product to replicate your data from Sybase to Oracle, you may want to see if the same vendor has a comparison/validation/reconciliation tool you could use.
I've worked on a few migration projects and a key part has always been data reconciliation.
I can only talk about the approaches we took, based on constraints around tools available and minimising downtime, and constraints of available space.
In all cases I took to writing scripts that worked on two levels - summary view and "deep dive". We couldn't find any tools readily available that did what we wanted in a timely enough manner. In fact, even the migration tools we found had limitations (Data Pump, SQL*Loader, GoldenGate, etc.), and we hand-coded scripts to handle the bits that we found to be lacking or too slow in the standard tools.
The summary view varied from project to project. It was partly functional (do the accounting figures for transactions match?) for the users to verify, and partly technical. For smaller tables we could just write simple reports and the diff was straightforward.
For larger tables we wrote technical reports that looked at bands of data (e.g. grouping the PKs into bands), collected all the column data, and produced a checksum, generating a report for each table like:
PK ID Range Start    Checksum
-----------------    -----------
100000               22773377829
200000               38938938282
...
Corresponding table pairs from each database were then "diff"ed against each other to highlight discrepancies. Any differences that were found could then be looked at in more detail.
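A hedged sketch of what one of those banded checksum queries could look like on the Oracle side (the table, columns, and band size are hypothetical; ORA_HASH is used here just to get a numeric hash that can be summed per band):

-- One row per PK band: the band start plus a checksum over the non-key columns.
SELECT TRUNC(pk_id / 100000) * 100000 AS pk_range_start,
       SUM(ORA_HASH(col_a || '|' || col_b || '|' || TO_CHAR(col_c))) AS checksum
FROM table_a
GROUP BY TRUNC(pk_id / 100000) * 100000
ORDER BY pk_range_start;

The query on the source side would need to produce the same bands and an equivalent hash so the two outputs can be diffed line by line.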
The scripts were written in such a way as to allow them to run in parallel, looking at discrete bands. The band ranges were tunable as well to get the best throughput. This obviously sped things up.
The scripts were shell scripts firing off sqlplus reports, and similar for the source database.
On one project there wasn't enough disk space to do these reports, so I wrote a Java program that queried the two databases side by side, using blocking queues to fetch and compare rowsets. Being in memory meant this was super fast.
For the "deep dive" we looked at the details for key tables, or for tables that reported a checksum difference.
For the user reports, the users would specify what they wanted to see, and we wrote the reports accordingly.
On the last project, the only discrepancies found were caused by character set conversion issues (people's names with accents weren't handled correctly).
On projects where the overall dataset was smaller, we extracted the data to XML files and wrote a Java tool to process pairs of files and report differences.
The SAP/Sybase rs_subcmp tool is pretty powerful and also pretty hard to use. For details see:
https://help.sap.com/viewer/075940003f1549159206fcc89d020515/16.0.3.3/en-US/feb58db1bd1c1014b134ef4efef25563.html?q=rs_subcmp
You have to pass it key field information, but once you do that, it can retry/restart the compare streams after transient differences. Pretty fancy.
rs_subcmp expects to work against Sybase data sources. So to compare against Oracle, you'd probably have to set up one of those Sybase-to-Oracle gateway products ($$$$$).
Could you install the Oracle ODBC drivers and configure them to allow Sybase clients to access Oracle? I'm guessing not (but that's outside the range of my experience).
Note the "-h" option for rs_subcmp. The docs just say it runs a "fast comparison", but what it's actually doing is running queries using the hashbytes() function. Something like:
select keyfield1,keyfield2, hashbytes("Md5",datacol1,datacol2,datacol3)
from mytable
So this sort of query might be good for the "summary view" type of comparison discussed above, if the Oracle STANDARD_HASH() function output matches up with the Sybase hashbytes() function (again, outside my experience).
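For what it's worth, here is a hedged sketch of an Oracle-side counterpart (the table and column names just mirror the hashbytes() example above; STANDARD_HASH is available from Oracle 12c onward, and whether the MD5 values actually line up byte-for-byte depends on how each product encodes and concatenates the inputs, so treat this as an assumption to verify):

-- Row-level MD5 over the data columns, keyed the same way as the Sybase query.
SELECT keyfield1,
       keyfield2,
       STANDARD_HASH(datacol1 || '|' || datacol2 || '|' || datacol3, 'MD5') AS row_hash
FROM mytable;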
Note: as of ASE 16, there was a bug with the hash() and hashbytes() functions when running the Md5 hash option against large varbinary columns, where they could use up all procedure cache, potentially crashing the server (CR 811073).

Adding multiple SSRS reports into one report is very slow

I inherited a report from a developer where he combined 5 reports into one SSRS report. It looks like he just copied and pasted each tablix from the original reports one below the other. This was done so that when the user exports to Excel they can have each report on a separate tab. I've never done a multiple SSRS report like this before so I'm just now analyzing how this whole thing works. A major problem I'm finding is that it runs extremely slow, about 10 minutes, seemingly because it has to run all 5 queries. Each stored procedure is listed separately as a data set. Does anyone know a better way to create multiple SSRS reports onto one page, or at least how to make this thing faster?
The first step to improving performance for an SSRS report is to determine what the bottleneck is. Run a query against the view named ExecutionLog3 in the ReportServer database. For each recent execution of a report, the view will give you a record that includes 3 critical fields: TimeDataRetrieval, TimeProcessing, and TimeRendering.
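A minimal sketch of that query (assuming the default ReportServer database name; adjust the TOP clause or add filters as you see fit):

-- Most recent executions with the three timing buckets (all in milliseconds).
SELECT TOP (50)
       ItemPath,
       TimeStart,
       TimeDataRetrieval,
       TimeProcessing,
       TimeRendering,
       [Status]
FROM ReportServer.dbo.ExecutionLog3
ORDER BY TimeStart DESC;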
TimeDataRetrieval indicates how long (in milliseconds) it takes for all of the queries to run and return your datasets. If this number is high, then you will need to tune your queries or eliminate some of them to improve performance. You can run a profiler trace to identify which of the procedures is running slowly.
Keep in mind also that subreports fire their dataset queries each time they are rendered in the report. So even a minor performance hiccup in a subreport's dataset gets magnified by the number of executions.
TimeProcessing indicates how much time the report server spends manipulating the retrieved data. If this number is high, you may want to consider moving aggregate calculations that are evaluated many times within the report to the SQL side.
TimeRendering indicates how long the server takes to actually render the report. If this number is high, consider avoiding or simplifying expressions used on visual properties that repeat over and over again. This scenario is less common than the other two, in my experience.
Furthermore, here are some tips I've picked up that help to avoid performance issues:
- Avoid using row visibility expressions if you expect a large number of rows to be returned.
- Hiding an object does not prevent dataset execution. If your datasets have similar structure, consider combining them and using object filters to limit what is displayed in different sections. Or use an IF statement in your stored procedure if you only intend to display one of several choices depending on data or parameters (see the sketch after this list).
- Try to limit the number of column groupings in a large tablix. For each grouping in a tablix, you multiply the number of rows of data that may be returned to pivot into those groupings.
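As a hedged illustration of the stored-procedure branching mentioned above (the procedure and table names are made up), the procedure can return only the result set the report actually needs instead of returning everything and hiding sections:

-- Return one of several result sets depending on a parameter.
CREATE PROCEDURE dbo.usp_report_section
    @section_name VARCHAR(20)
AS
BEGIN
    SET NOCOUNT ON;
    IF @section_name = 'summary'
        SELECT region, SUM(amount) AS total_amount
        FROM dbo.sales
        GROUP BY region;
    ELSE
        SELECT region, order_id, amount
        FROM dbo.sales;
END;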
More info on SSRS performance can be found at
https://technet.microsoft.com/en-us/library/bb522806(v=sql.105).aspx
This was written for 2008R2, but seems mostly applicable to 2012 as well.
Give all that a shot, then post back here with a more specific question if you get stuck.
