Use of ODBC and Relational connections in Informatica - informatica-powercenter

I noticed that in the mapping level we are creating the ODBC connection and in the Workflow level we are creating the Relational connection. What are those 2 connections needed for?

Question is crystal clear.
When you create a mapping you are describing what you want done with the data, and you shouldn't be restricted by whether the data model exists yet or not.
To do this you need to know the structure of your source and target, but you don't need to actually connect to them. A dummy CSV is enough to get you going in Mapping Designer while the DBA builds the tables in the database.

In the Designer you may connect to an existing structure to create Source or Target definitions. But all you take from that is the structure, the definition - the connection itself is not tied to the mapping.
In the Workflow Designer you choose the connection that should process data structured in the way described by the Source or Target definition. That is the connection that will be used to access the data.

Related

Informatica Workflow Cannot create proper Relational connection object to connect to SQL Server

On my Infa server PC, in Informatica Administration I created a repository service, supplying Oracle database information. In Informatica I connect to this repository, and there I want to create a mapping importing a table from a remote SQL Server PC (on my home network domain) and then create a workflow to put the data into an Oracle target table.
Using the ODBC admin console I created and tested the connection, and I am also able to telnet the linked SQL Server and port.
Within Informatica I created a relational connection for SQL Server, and when I run the workflow I get the error: reason (14007) failed to create and initiate OLE DB instance, database driver error, failed to connect to the database using the user I use in SSMS for Windows authentication and connection string ().
I would like to know, first of all, whether I am doing something wrong by connecting to a repository backed by an Oracle database and then using a SQL Server table on a remote PC. Do I have to create another repository for SQL Server and use SQL Server tables there, or can I mix them? Secondly, I would like to know how to create a relational connection object in Informatica for my linked SQL Server so that it matches the relational connection created with the ODBC admin console. Last but not least, I would like to understand why it reports that I left the connection string empty, when I cannot see a place to enter it while creating the relational connection object.
I might not be able to solve the problem completely, but here are a few remarks that might be helpful:
1) The PowerCenter Repository database is where PowerCenter stores all the metadata about the processes you create. It may be Oracle - that's perfectly fine. As it is not related to your data sources or targets, you do not need to create another repository for different sources/targets. One is enough for all of them.
2) Using PowerCenter Workflow Manager, create the appropriate connections to all the systems you need. These are the connections (referring to ODBC data sources or other connectivity) that the Integration Service will use to actually connect to your data sources and targets, hence:
3) Make sure the ODBC / other data sources are defined on the Integration Service machine. It is the IS that will run the process and connect to the systems specified in the session, using the connections defined above.
4) When you build mappings, you create them in a client application (Mapping Designer) and you can connect to DB engines to create source and target definitions. Note that in this case you use a connection (e.g. an ODBC data source) defined on the client. Once you try to actually run a workflow with the given mapping, it is executed on the IS (mentioned above), where the appropriate connections need to be defined - and that is completely separate.
5) When editing a session in a workflow, for each source and target you need to pick a connection defined in the Informatica Repository, created as described in point 2 above (or use a variable to indicate one - but that's another story).
So the error you mention seems to be related to the connection created in Workflow Manager - it probably does not specify the connection string that should refer to the data source defined on the IS.
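A quick way to rule out network or credential problems is to test the same server from the Integration Service host, outside of PowerCenter. The sketch below is only such a sanity check (PowerCenter itself connects through the relational connection object and the ODBC/native drivers configured on the IS, not through JDBC); it assumes the Microsoft JDBC driver is on the classpath, and the host, port, database name and login are placeholders to replace with your own values.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SqlServerConnectivityCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder host, port, database and credentials - use the same
            // values the Integration Service is supposed to reach.
            String url = "jdbc:sqlserver://sqlhost.example.local:1433;databaseName=StagingDB";
            // For Windows authentication append ";integratedSecurity=true"
            // (requires the driver's native authentication library).
            try (Connection con = DriverManager.getConnection(url, "etl_user", "secret");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT @@VERSION")) {
                if (rs.next()) {
                    System.out.println("Connected: " + rs.getString(1));
                }
            }
        }
    }

If this fails from the IS machine with the same symptoms, the problem lies in the data source or credentials on that machine rather than in the Informatica connection object itself.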

Using Saiku-ui without saiku-server/mondrian?

Is it possible to use the saiku-ui component with a different jolap provider than mondrian, or with a different server backend than the saiku-server component?
I have been looking but I have not found an architecture description of how these pieces fit together and what interfaces they use to communicate. Can anyone point me towards an understanding of what the saiku-ui wants to speak with and what the saiku-server is providing?
The reason for my interest is that I have a set of data spread across hundreds of csv files that I would like to query with a pivot and charting tool. It looks like the standard way to use this with saiku would be to have an ETL process load it into an RDBMS. However, this would not be a simple process, because the files, their content, and the way the files relate to each other vary, so the ETL would have to do a lot of inspection of the data sources to figure that out.
Given this it seems to me that I would have three options in how to use saiku:
1) write a complex ETL to load into an rdbms, and then use a standard jdbc driver to provide the data to mondrian. A side function of the ETL would be to analyze the inputs and write the mondrian schema file describing the cubes.
2) write a jdbc driver to access the data natively. This driver would parse sql and provide access to the underlying tables. Essentially this would be a custom r/o dbms written on top of the csv files. The jdbc connection would be used by mondrian to access the data. A side function of this custom dbms would be to produce the mondrian schema file.
3) write a tool that provides a jolap interface to the native data (accepts discovery and mdx queries). This would bypass mondrian entirely and interface with the ui.
I may be a bit naive here, but I consider each of the three options to be feasible. Option #1 is my least preferred because of the likelihood of the data in the rdbms becoming out of sync with the csv files. Option #3 is most preferred because the data are simple, so not much aggregation is required, and I suspect that mdx will be easier to parse than sql.
So, if I could produce my own jolap data source, would it be possible to hook the saiku-ui tools up to it? Where would I look to find out the interface configuration details?
Many years ago, #ronaldbouman created xmondrian - a set of tools bundling an OLAP server and web UI tools for XMLA browsing and visualisation. But that project is no longer updated and has no source code available.
I just updated the OLAP server and libraries to the latest versions.
You may get it here and build it yourself:
https://github.com/Muritiku/xmondrian-build
You may use the web package as an example. The Mondrian server works with the saiku-ui.
IMHO, I would not be as confident as you are, because it took Julian Hyde more than a decade to build Mondrian (MDX->SQL) and Calcite (SQL), which together cover your last two proposals.
You might simply consider using Calcite, or even better Dremio. Dremio has a JDBC interface and can query directories of CSV files in SQL. I tested Saiku over Dremio successfully (with a schema based on two separate RDBMSs). Just be careful to set up the tables' schemas accordingly in the Mondrian v4 schema.
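For illustration, here is a minimal, hypothetical JDBC sketch against Dremio; localhost:31010 is Dremio's default JDBC endpoint, and the sales."2019"."orders.csv" path, the column names and the credentials are made-up stand-ins for a CSV folder you have exposed as a dataset. Mondrian would then simply be configured with the same JDBC URL as its data source.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DremioCsvQuery {
        public static void main(String[] args) throws Exception {
            // Default Dremio JDBC endpoint; user/password are placeholders.
            String url = "jdbc:dremio:direct=localhost:31010";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 Statement st = con.createStatement();
                 // Hypothetical dataset path and columns: a CSV file
                 // promoted as a dataset in Dremio.
                 ResultSet rs = st.executeQuery(
                         "SELECT region, SUM(amount) AS total "
                         + "FROM sales.\"2019\".\"orders.csv\" GROUP BY region")) {
                while (rs.next()) {
                    System.out.println(rs.getString("region") + " -> " + rs.getDouble("total"));
                }
            }
        }
    }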
Best regards,
Fabrice Etanchaud
Dremio

How to use ODI 11g ETL error table as source?

I'm currently using ODI 11g to import into Oracle, via CSV files, records from Mainframe Adabas table views. This is being done successfully.
The point is that I'm now trying to send back to a mainframe application, via CSV, the records that, for one reason or another, could not be imported into Oracle and are stored in the ETL's error tables.
I'm trying to use the same process, in this case backwards, to export the data from the error tables to a CSV file, which is to be imported by the mainframe application into Adabas.
I successfully imported, via reverse engineering, the structure of the error table to use as my source. I've set up new physical and logical models to be used by this process. I've also created the interface.
My problem is that when I try to save the interface, it gives me a fatal error saying that I don't have an "LKM selected for this origin set".
When I try to set the LKM in Flow tab, it doesn't give me any option at LKM Selector.
I'm quite green on ODI and have no idea how to solve this problem, so any insights would be most appreciated.
Thanks all!
You need to change the location where the transformations will occur. Currently the interface is trying to move all the data to the file technology and process it there, but it's easier to work the other way around and let the database do the job. To do so, go to the Overview pane of your interface and select the "Staging Area Different From Target" checkbox, then select the logical schema of your Oracle source below.
On the Flow tab, click on your target and select the following IKM: "IKM SQL to File Append". This is a multi-technology IKM, which means you no longer need an LKM to move data from source to target.

How to Replace ODBC Data Source with OLE DB Data Source in SSIS Package?

I use Integration Services 2012 in project deployment mode. I want to replace an existing ODBC data source with an OLE DB data source in an existing package without breaking all the links that cascade down the package into the data destination.
I have tried deleting the ODBC source and adding an OLE DB data source, but then I lost all my output aliases after the first MERGE JOIN data flow. What can I do about it?
First fix all of the metadata in your source components by opening them for edit. Then edit each component in the data flow in order. This will often fix downstream components as you go, but if data types changed (e.g. Unicode to non-Unicode) then you may have conversions to do.

AS400: Dealing with Multiple schema's while connecting to AS400 System

I am using the below driver to connect to AS400 system.
"jdbc:as400://system-name/default-schema;properties"
I have a requirement wherein I have to deal with multiple schemas.
As the schema name needs to be mentioned in the JDBC URL, do I need to open a separate connection pool for each schema I am trying to connect to?
Currently I am using two connection pools for two different schemas, pointing to the same DB properties.
Is there any other way to deal with multiple schemas over a single connection?
A schema is more commonly referred to as a library on the IBM i (AS/400).
You can use a single database connection and qualify the table names with schema.table for the default SQL naming convention or schema/table with the system naming convention.
See the "naming" property in the IBM Toolbox for Java JDBC properties section of the Toolbox programmer's guide, and the SQL and system naming conventions topic in the SQL Programming guide, for more information.
By using the "system naming" setting, your session can take advantage of the "library list" attribute that each job has. It is a list of schemas that is searched when the system resolves the location of an unqualified object. The concept is similar to the notion of a path in Windows or Linux.
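As a minimal sketch (assuming the JTOpen/Toolbox driver, jt400.jar, is on the classpath), the naming and libraries connection properties let a single connection resolve unqualified names through the library list; MYSYSTEM, LIBA, LIBB, CUSTOMERS and the credentials below are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class As400LibraryListExample {
        public static void main(String[] args) throws Exception {
            // Older Toolbox versions may need explicit driver registration.
            Class.forName("com.ibm.as400.access.AS400JDBCDriver");
            // naming=system switches to the slash (schema/table) convention and
            // resolves unqualified names through the library list set here.
            String url = "jdbc:as400://MYSYSTEM;naming=system;libraries=LIBA,LIBB";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 Statement st = con.createStatement()) {
                // Unqualified name: searched in LIBA, then LIBB.
                try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM CUSTOMERS")) {
                    rs.next();
                    System.out.println("Unqualified lookup: " + rs.getInt(1));
                }
                // Explicitly qualified with the system naming (slash) convention.
                try (ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM LIBB/CUSTOMERS")) {
                    rs.next();
                    System.out.println("Qualified lookup: " + rs.getInt(1));
                }
            }
        }
    }

With the default naming=sql, qualification uses a dot (LIBB.CUSTOMERS) and unqualified names resolve against the default schema rather than the library list.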
In addition to the links that #JamesA provided, also read the two-part article by Birgitta Hauser, and the SQL Reference on unqualified names.
It is commonly considered best practice to use the session (i.e. job) library list rather than statically hardcoding schema names, and I suggest you follow this practice. While the terms schema and library are essentially synonymous, I use the IBM i command CHGCURLIB rather than SET CURRENT SCHEMA, because the command does not restrict the behavior of SQL regarding the library list; my understanding from Birgitta's article is that SET CURRENT SCHEMA blocks the use of the library list entirely. With CHGCURLIB, the current library simply becomes the first library on (the user portion of) your library list.
