Is there any way we can update the values in a BIRT report which will, in turn, update the database? We need to present a report generated from Microsoft SQL Server to the client. We tried providing the report in Excel, but our client changes the format and it is then difficult to consume it again in our proprietary tool
(which is Microsoft SQL Server based). Is there any way we can achieve this? The client should update the values in the report and the changes should be reflected in the DB.
While it's possible to write back to the DB from BIRT using a servlet (see the Eclipse Community Forum), I don't know of a way for BIRT to track the changed values.
While it's difficult to compare Excel files, it should be simpler to create CSV files from these Excel files and compare the CSVs independently of Excel formatting changes.
I see the gathering of value changes and writing them back to the DB as a separate workflow, independent of the reporting.
Reporting tools are made for generating output only.
A general automatic mechanism is impossible, if you think about it from a more abstract point of view:
There's data D in the database (usually spread across several tables T1, ..., Tn, and records R1, ..., Rm).
The report output data O = (o1, o2, ...) is the result of a more or less complex (i.e. non-trivial) function f(R1, ..., Rm).
An automatic back-propagation mechanism of the kind you have in mind would have to know what changing the value of o1 from "spam" to "eggs" means for R1, ..., Rm.
... Or even for records which were not selected by f, for example if the user changed the value of a primary key column.
This is only possible if the function f is bijective (i.e. invertible), but usually f isn't. And even if it is, the task of inverting a non-trivial function is very hard.
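To make that concrete with a made-up example (hypothetical table and values, purely for illustration):

-- Hypothetical report query: one output row per region,
-- aggregated from many underlying sales records.
SELECT region, SUM(amount) AS total_amount
FROM   sales
GROUP  BY region;

-- If the user edits total_amount for region 'EMEA' from 100 to 120 in the
-- report output, there is no unique way to push that change back: any set
-- of updates to the underlying sales rows that adds 20 would "explain" it.
-- f is not invertible here, so no tool could decide which rows to change.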
Thus, if you want to let the user change values and persist the changes inside the DB, you need some kind of database UI or some kind of import interface.
Depending on your database, it might be as trivial as letting the user work with Oracle SQL Developer or similar tools that support importing data from Excel sheets.
However, these tools are intended for SQL developers, as the name implies.
OTOH, if all you want is to perform DML statements from BIRT, this is possible indirectly: you can write stored procedures in the database that do the DML work, and call these procedures from BIRT (use a JDBC Stored Procedure Query instead of a JDBC SQL Select Query).
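A rough sketch of that approach, with made-up procedure, table and column names (SQL Server syntax, since that's your backend):

-- Hypothetical procedure; adjust the table and columns to your schema.
CREATE PROCEDURE dbo.update_report_value
    @record_id INT,
    @new_value NVARCHAR(100)
AS
BEGIN
    UPDATE dbo.report_source
    SET    value_col = @new_value
    WHERE  id = @record_id;
END

The BIRT data set would then call it with something like {call dbo.update_report_value(?, ?)}, binding the key and the new value as parameters. Note that this only executes the DML you hand it; it does not solve the problem of knowing which values the client changed.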
May I have your opinion on the queries below, please?
Option 1:
I have a SELECT script handy which fetches data by joining many source tables and performs some transformations like aggregations (GROUP BY), data conversion, sub-strings etc.
Can I invoke this script through an ODI mapping so that the returned results (the transformed data output) are inserted into the target of the ODI mapping?
Option 2:
Convert the SELECT script into an equivalent ODI mapping by using the equivalent ODI transformations, functions, look-ups etc., and use the various tables (the tables in the join clause) as the sources of the mapping.
Basically, develop an ODI mapping which is equivalent to the provided SELECT script, plus a target table to insert the records into.
I need to know the pros and cons of both options above (if option 1 is possible at all).
With option 1, is it still possible to track transformation errors, errors related to joining the source tables, WHERE clause conditions, etc. through ODI?
Will the log file for a mapping failure have details at the same level of granularity as option 2 offers?
Can I still enable Flow Control in the Knowledge Module and redirect SELECT script errors into the E$_ error tables provided by ODI?
Thanks,
Rajneesh
Option 1: ODI 12c includes that concept out of the box. On the physical tab of a mapping, click on the source node (datastore). Then in the properties pane, there is the CUSTOM_TEMPLATE option under the "Extract Options" menu. This allows you to enter a custom SQL statement that will be used instead of the code generated by ODI.
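Just to illustrate the idea (table and column names are made up), the custom statement you paste there could look much like the script you already have, e.g.:

-- Hypothetical custom extract statement used in place of the ODI-generated code
SELECT o.customer_id,
       SUBSTR(o.order_ref, 1, 10) AS order_ref_short,
       SUM(o.amount)              AS total_amount
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
GROUP  BY o.customer_id, SUBSTR(o.order_ref, 1, 10)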
However, it is probably less maintainable over time than option 2. SQL is less visual than mapping components, and bulk changes will be trickier: changing a component in several mappings can be done with the SDK, whereas changing SQL code would require parsing it. You might indeed have less information in your Operator logs, as the SQL would be seen as just one block of code. It also wouldn't provide any lineage.
I believe using Flow Control would work, but I haven't tested it.
Option 2 would take more time to complete, but with it you would benefit from all the functionality of ODI.
My own preference would be to occasionally use option 1 for really complex SQL queries but to use option 2 for most of the normal use cases.
I am working on an ODI 10 project which has 153 interfaces divided into a few packages. What I want to do is create a PL/SQL procedure with INSERT statements instead of having 153 interfaces. These interfaces are more or less similar, i.e. they have the same source table and the same target (in my case the target is an Essbase/Hyperion cube); only the transformations and filters are different. So any time I have to update something like a column value, I have to open all 153 interfaces and update each and every one of them. In a procedure I could do this easily; I could just replace all the values.
So I feel that it's best that I create a PL/SQL procedure, as I can maintain the code better that way.
Is there a way to convert an interface into a SQL query? I want a direct data dump; I don't want to do a complex incremental load. I am just looking to truncate the table and load the data.
Thanks in advance.
It is possible to get the SQL code generated by ODI from the Operator in the log tables. It can also be retrieved in the repository.
Here is an example of a query for ODI 12c (10g being out of support for a long time now):
SELECT s.sess_no, s.nno, step_name, scen_task_no, def_txt
FROM SNP_SESS_STEP s, SNP_SESS_TASK_LOG t
WHERE s.sess_no = t.sess_no
AND s.nno = t.nno;
Starting with ODI 11g, it is also possible to simulate the execution instead of doing an actual execution. This functionality will just display the generated code in ODI Studio instead of running it.
Finally, upgrading to a more recent version of ODI would allow you to use the ODI SDK. With it you could programmatically apply changes to all the mappings in one go. Reusable mappings could also help, as it sounds like some logic is implemented multiple times. That would make these kinds of changes easier while keeping the benefits of an ELT tool (scheduling, monitoring, visual representation of flows, cross-technology, ...).
Disclaimer: I'm an Oracle employee.
We are facing an issue in our project, i.e. a data verification issue.
The project is about replication of data from Sybase to Oracle DBs.
The table structure for Table A is the same across Sybase and Oracle:
the same columns and the same primary key combination across all the databases.
e.g. if Sybase has Table A with columns a, b and c,
the same table with the same name and the same columns will be available in the other databases.
We are done with the replication part, but we faced some silent failures such as data discrepancies, and are wondering if there is any tool already available for this.
Any information on this would be helpful. Thanks.
Sybase (now SAP) has a couple products that can be used for data comparisons and reconciliation:
rs_subcmp - an older, 32-bit tool that comes with the Sybase Replication Server product and can be used to compare data between source and target; SQL reconciliation scripts can be generated from the differences and then applied to the target to bring it in sync with the source; if your tables are more than 1GB in size you can still use rs_subcmp, but you'll need to create multiple comparison jobs (via WHERE clauses) to work on different subsets of your tables [I don't recall if rs_subcmp can be used for heterogeneous replication setups, e.g. ASE-Oracle.]
Data Assurance (DA) - the newer, 64-bit product ... also from Sybase ... which can also compare data and (re)sync the target(s) from the source (either via SQL reconciliation scripts or directly); DA is capable of handling comparisons between a handful of different RDBMS products (e.g. ASE-Oracle); I'm currently working on a project where one of the requirements is to validate (and reconcile where needed) 200+ TB of data being migrated from Oracle to HANA, and I'm using DA for the validation/reconciliation portion of the project
As @TenG has hinted at in his answer, there's a good bit of effort involved in comparing data and generating code to reconcile the differences. Rolling your own code is doable but will entail a lot of work. If you've got the money, you'll likely find 3rd party tools can get most/all of the work done for you.
If you used a 3rd party product to replicate your data from Sybase to Oracle, you may want to see if the same vendor has a comparison/validation/reconciliation tool you could use.
I've worked on a few migration projects and a key part has always been data reconciliation.
I can only talk about the approaches we took, given the constraints around the tools available, minimising downtime, and available disk space.
In all cases I took to writing scripts that worked on two levels - a summary view and a "deep dive". We couldn't find any readily available tools that did what we wanted in a timely enough manner. In fact, even the migration tools we found (Data Pump, SQL*Loader, GoldenGate, etc.) had limitations, and we hand-coded scripts to handle the bits that we found to be lacking or too slow in the standard tools.
The summary view varied from project to project. It was part functional (do the accounting figures for transactions match?) for the users to verify, and part technical. For smaller tables we could just write simple reports, and the diff was straightforward.
For larger tables we wrote technical reports that looked at bands of data (e.g. grouping the PKs into bands), collected all the column data and produced a checksum, generating a report for each table like the one below (a rough sketch of this kind of query follows the example):
PK ID Range Start   Checksum
-----------------   -----------
100000              22773377829
200000              38938938282
...                 ...
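A rough sketch of the kind of query that can produce such a report (Oracle flavour; the table, columns and band size of 100000 are made up to match the sample above):

-- Band the rows by primary key and checksum the concatenated column values;
-- the same query is run on both databases and the two outputs are diffed.
SELECT TRUNC(pk_id / 100000) * 100000                       AS pk_id_range_start,
       SUM(ORA_HASH(col_a || '|' || col_b || '|' || col_c)) AS band_checksum
FROM   table_a
GROUP  BY TRUNC(pk_id / 100000) * 100000
ORDER  BY pk_id_range_start;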
Corresponding table pairs from each database were then "diff"ed against each other to highlight discrepancies. Any differences that were found could then be looked at in more detail.
The scripts were written in such a way as to allow them to run in parallel, each looking at discrete bands. The band ranges were tunable as well, to get the best throughput. This obviously sped things up.
The scripts were shell scripts firing off sqlplus reports, and similar for the source database.
On one project there wasn't enough disk space to do these reports, so I wrote a Java program that queried the two databases side by side, using blocking queues to fetch and compare row sets. Being in memory meant this was super fast.
For the "deep dive" we looked at the details for key tables, or for tables that reported a checksum difference.
For the user reports, the users would specify what they wanted to see, and we wrote the reports accordingly.
On the last project, the only discrepancies found were caused by character set conversion issues (people names with accents weren't handled correctly).
On projects where the overall dataset was smaller, we extracted the data to XML files and wrote a Java tool to process the pairs and report differences.
The SAP/Sybase rs_subcmp tool is pretty powerful and also pretty hard to use. For details see:
https://help.sap.com/viewer/075940003f1549159206fcc89d020515/16.0.3.3/en-US/feb58db1bd1c1014b134ef4efef25563.html?q=rs_subcmp
You have to pass it key field information, but once you do that, it can retry/restart the compare streams after transient differences. Pretty fancy.
rs_subcmp expects to work against Sybase data sources. So to compare against Oracle, you'd probably have to set up one of those Sybase-to-Oracle gateway products ($$$$$).
Could you install the Oracle ODBC drivers and configure them to allow Sybase clients to access Oracle? I'm guessing not (but that's outside the range of my experience).
Note the "-h" option for rs_subcmp. The docs just say it runs a "fast comparison", but what it's actually doing is running queries using the hashbytes() function. Something like:
select keyfield1,keyfield2, hashbytes("Md5",datacol1,datacol2,datacol3)
from mytable
So this sort of query might be good for the "summary view" type of comparison discussed above, if the Oracle STANDARD_HASH() function output matches up with the Sybase hashbytes() function (again, outside my experience).
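For what it's worth, the Oracle side of such a comparison might look roughly like this (untested, names made up; note that hashbytes() hashes the raw column values while this concatenates them as strings, so the two outputs may well not line up without some massaging):

-- Hypothetical Oracle counterpart to the rs_subcmp-style hash query above
select keyfield1, keyfield2,
       standard_hash(datacol1 || '|' || datacol2 || '|' || datacol3, 'MD5') as row_hash
from mytable;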
Note: as of ASE 16 there was a bug with the hash() and hashbytes() functions when running the Md5 hash option against large varbinary columns, where they could use up all of the procedure cache, potentially crashing the server (CR 811073).
My application will be querying a database using Entity Framework. The problem is that the database table structure changes fairly often (a few times a year).
Back in the SQL days, we would store SQL queries in Resource files (.resx) and when any database changes occurred, we could just edit the one resource file and not have to edit any code in the app, recompile, etc.
Are there any good ways to do this with Linq-to-SQL?
Linq2SQL is inherently code-based. If your schema is going to change, then the code will need to change.
The only way I can see around this, while still getting some of the benefits of LINQ, is to write everything as stored procedures, which you can then add as methods on the LINQ DataContext.
Then, as long as the name, input parameters and output columns remain the same, you can change what the SP is doing in the database and the code can stay the same.
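For example (hypothetical procedure, SQL Server syntax): as long as the parameter list and the output columns below stay the same, the body can be rewritten whenever the schema changes and the generated DataContext method keeps working.

CREATE PROCEDURE dbo.GetActiveCustomers
    @regionId INT
AS
BEGIN
    -- The internal query can change along with the schema; only the parameter
    -- list and the output columns (CustomerId, CustomerName) must stay stable.
    SELECT c.CustomerId, c.CustomerName
    FROM   dbo.Customers c
    WHERE  c.RegionId = @regionId
      AND  c.IsActive = 1;
END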
I am new to Oracle. Since we have rewritten an earlier application, we have to migrate the data from the earlier database in Oracle 9i to a new database, also in 9i, with totally different structures. The column names and types would be totally different. We need to map the tables and columns, try to export as much data as possible, eliminate duplicates, and fill empty values with defaults.
Are there any tools which can help in mapping the elements of the two databases, with rules to handle duplicates and default values, and migrate the data?
Thanks,
Chak.
If your goal is to migrate data between two very different schemas you will probably need an ETL solution (ETL = Extract, Transform, Load).
An ETL will allow you to:
Select data from your source database(s) [Extract]
Apply business logic to the selected data [Transform] (deal with duplicates, default values, map source tables/columns to destination tables/columns...)
Insert the data into the new database [Load]
Most ETLs also allow some kind of automation and reporting of the loads (bad/discarded rows...).
Oracle's ETL is called Oracle Warehouse Builder (OWB). It is included in the database licence, and you can download it from the Oracle website. Like most Oracle products it is powerful, but the learning curve is a bit steep.
You may want to look into the [ETL] section here in SO, among others:
What ETL tool do you use?
ETL tools… what do they do exactly? In laymans terms please.
In many cases, creating a database link and some scripts à la
insert into newtable select distinct foo, bar, 'defaultvalue' from oldtable@olddatabase where xxx
should do the trick
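Spelled out a little more (all names are made up):

-- Create a link from the new 9i database to the old one
CREATE DATABASE LINK olddatabase
  CONNECT TO old_user IDENTIFIED BY old_password
  USING 'OLDDB_TNS_ALIAS';

-- Copy the data, dropping duplicates and filling empty values with defaults;
-- add a WHERE clause if only a subset of rows should be copied.
INSERT INTO newtable (new_foo, new_bar, new_baz)
SELECT DISTINCT foo,
       bar,
       NVL(baz, 'defaultvalue')
FROM   oldtable@olddatabase;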