Power Center view result - etl

I'm starting to work with Informatica PowerCenter and I'm new to this technology. In the past I worked with DataStage. I made a task that reads data from an Oracle table and writes it to a flat file. The job ran and finished correctly (I saw it in Workflow Manager).
Is there a way to view the records written to my flat file in PowerCenter?
Thanks
Luca

You need to access the output file itself. Informatica does not provide a data browser to inspect flat files or databases; you need to use a separate tool.
Try an FTP or SSH connection to wherever the output file was generated.
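If you just want to eyeball the file from your workstation, SFTP works well. Below is a minimal Python sketch using paramiko; the host, credentials and target-file path are hypothetical, so substitute the values for your Integration Service machine and the session's target file directory:

    # Minimal sketch: copy a flat file produced by a PowerCenter session via SFTP
    # and print the first few records. All connection details are placeholders.
    import paramiko

    HOST = "etl-server.example.com"                 # hypothetical host
    REMOTE_PATH = "/infa/tgtfiles/customers.out"    # hypothetical target flat file

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(HOST, username="etl_user", password="secret")

    sftp = ssh.open_sftp()
    sftp.get(REMOTE_PATH, "customers.out")   # download a local copy
    sftp.close()
    ssh.close()

    # Print the first few records just to verify the session output.
    with open("customers.out") as f:
        for i, line in enumerate(f):
            if i >= 10:
                break
            print(line.rstrip())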

Related

Is there any way to deal with a MinIO file using SQL-based statements?

I am new to MinIO and object-based storage.
I know there is the S3 Select API, but I want to add a new row or update a specific row in a CSV file in MinIO without needing to download it and upload it again.
Is there any way to do it?
In other words, I want to use SQL-based statements (INSERT/UPDATE) on a file stored in MinIO.
SQL can only change databases; with a CSV it can only import and export the data so that it is usable by the database. So the answer, for now, is no. The easiest way to edit this CSV would be to write a script which either:
connects to the database and changes the file in the database's directory, or
downloads the file to edit it locally and then uploads it again (a minimal sketch of this approach follows).
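For the second approach, here is a minimal sketch using the MinIO Python SDK; the endpoint, credentials, bucket and object names are hypothetical, and the "edit" is simply appending one row:

    # Minimal sketch: download a CSV from MinIO, append a row locally,
    # then upload it back, overwriting the original object.
    import csv
    from minio import Minio

    client = Minio(
        "minio.example.com:9000",     # hypothetical endpoint
        access_key="YOUR_ACCESS_KEY",
        secret_key="YOUR_SECRET_KEY",
        secure=False,
    )

    BUCKET, OBJECT, LOCAL = "data", "people.csv", "/tmp/people.csv"

    client.fget_object(BUCKET, OBJECT, LOCAL)        # 1. download

    with open(LOCAL, "a", newline="") as f:          # 2. edit locally
        csv.writer(f).writerow(["42", "new person", "new@example.com"])

    client.fput_object(BUCKET, OBJECT, LOCAL)        # 3. upload back

Note that this is not atomic: anyone reading the object between the download and the upload sees the old version, and concurrent writers can overwrite each other.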

Monitor a folder and ETL xml data to oracle

Is there any tool I can use to:
monitor a folder; when an XML file is added, check its name (whether it exists in a list of allowed names), validate it against an XSD schema, then extract the data contained in the XML and load it into an Oracle database; if any error occurs in that process, reject the file (write its name to a file of rejected files). I don't expect a single tool to fulfill all of these features, but I'd at least like help with monitoring and automating the process.
thanks in advance,
Yes, any ETL or data integration tool should be able to do that. I’ve implemented a project that had most of those features in the past using Pentaho Data Integration.
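If you end up scripting part of it yourself, the monitoring, name check and XSD validation are only a few lines of Python. This is a minimal polling sketch using lxml; the folder, schema and allowed-name list are hypothetical, and the Oracle load step is left as a placeholder:

    # Minimal sketch: poll a folder, check the name and XSD validity of new XML
    # files, and record rejects. Paths and the name list are placeholders.
    import time
    from pathlib import Path
    from lxml import etree

    WATCH_DIR = Path("/data/incoming")        # hypothetical input folder
    REJECT_LOG = Path("/data/rejected.txt")   # hypothetical reject list
    ALLOWED_NAMES = {"orders.xml", "customers.xml"}
    schema = etree.XMLSchema(etree.parse("/data/schema.xsd"))

    def reject(path, reason):
        with REJECT_LOG.open("a") as log:
            log.write(f"{path.name}\t{reason}\n")

    seen = set()
    while True:
        for xml_file in WATCH_DIR.glob("*.xml"):
            if xml_file in seen:
                continue
            seen.add(xml_file)
            if xml_file.name not in ALLOWED_NAMES:
                reject(xml_file, "name not in allowed list")
                continue
            try:
                doc = etree.parse(str(xml_file))
            except etree.XMLSyntaxError as exc:
                reject(xml_file, f"not well-formed: {exc}")
                continue
            if not schema.validate(doc):
                reject(xml_file, "failed XSD validation")
                continue
            # TODO: extract the data from `doc` and load it into Oracle
            # (e.g. with python-oracledb); reject the file on load errors too.
            print(f"{xml_file.name} validated OK")
        time.sleep(10)   # poll every 10 seconds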

Take data from Oracle to Cassandra every day

We want to move tables from Oracle to Cassandra every day, because the tables are updated in Oracle daily. When I searched for this, I found these options:
Extract the Oracle tables to a file, then write to Cassandra
Use Sqoop to get the tables from Oracle, write a MapReduce job and insert into Cassandra
I am not sure which way is appropriate. Are there other options?
Thank you.
Option 1
Extracting Oracle tables to a file and then writing to Cassandra manually every day can be a tiresome process unless you schedule it with a cron job. I have tried this before, but if the process fails, logging it can be an issue. If you use this approach, exporting to CSV and writing to Cassandra, then I would suggest using the Cassandra bulk loader (https://github.com/brianmhess/cassandra-loader).
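If you go the CSV route, the export side is a short script. Here is a minimal sketch using python-oracledb; the connection details and table name are hypothetical, and the resulting file is what you would hand to cassandra-loader (check its README for the exact options):

    # Minimal sketch: dump an Oracle table to CSV so it can be bulk-loaded into
    # Cassandra with cassandra-loader. Connection details and the table name are
    # placeholders; schedule the script (e.g. with cron) to run daily.
    import csv
    import oracledb

    conn = oracledb.connect(user="scott", password="tiger",
                            dsn="dbhost.example.com/orclpdb1")   # hypothetical DSN

    with conn.cursor() as cur, open("/tmp/my_table.csv", "w", newline="") as out:
        cur.execute("SELECT * FROM my_table")                    # hypothetical table
        writer = csv.writer(out)
        writer.writerow(col[0] for col in cur.description)       # header row
        for row in cur:
            writer.writerow(row)

    conn.close()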
Option 2
I haven't worked with this, so can't speak about this.
Option 3 (I use this)
I use an open source tool, Pentaho Data Integration (Spoon) (https://community.hitachivantara.com/docs/DOC-1009855-data-integration-kettle), to solve this problem. It's a fairly simple process in Spoon. You can automate it by using a Carte server (Spoon server), which has logging capabilities as well as automatic restarting if the process fails partway through.
Let me know if you found any other solution that worked for you.

How to use ODI 11g ETL error table as source?

I'm currently using ODI 11g to import records from mainframe Adabas table views into Oracle via CSV files. This is being done successfully.
Now I'm trying to send back to a mainframe application, via CSV, the records that, for one reason or another, could not be imported into Oracle and are stored in the ETL's error tables.
I'm trying to use the same process, in this case backwards, to export the data from the error tables to a CSV file, which is to be imported by the mainframe application into Adabas.
I successfully imported via reverse engineering the structure of the error table to be my source. I've set up new physical and logical models to be used by this process. I've also created the interface.
My problem is that when I try to save the interface, it gives me a fatal error saying that I don't have an "LKM selected for this origin set".
When I try to set the LKM in Flow tab, it doesn't give me any option at LKM Selector.
I'm quite green on ODI and have no idea how to solve this problem, so any insights would be most appreciated.
Thanks all!
You need to change the location where the transformations will occur. Currently the interface is trying to move all the data to the file technology and process it there, but it's easier to work the other way around and let the database do the job. To do so, go to the overview pane of your interface, select the "Staging Area Different From Target" checkbox, then select the logical schema of your Oracle source below.
On the Flow tab, click on your target and select the following IKM: "IKM SQL to File Append". This is a multi-technology IKM, which means you won't need an LKM anymore to move data from source to target.

Save data from database

I have created an application with an internal database in LightSwitch.
Now I want to publish my application, and I also want to publish the data in my internal database. How can I do that?
For example: I have an application, Fantacalcio, and I created some players in LightSwitch's internal database. Now, when I publish the application and install it on my PC, there is no data in it. I want the players I created before to be there when I install the application.
You can do it programmatically in something like Application_Initialize, or in a SQL script.
LS has no "built-in" way to pre-populate data, so it's a matter of choosing a workaround.
One possible way is to do the following:
Attach the LightSwitch internal database to SQL Server.
Export all the data into a SQL script, here are the instructions.
After you have the SQL script (mostly INSERT statements), run the script on your designated database.
The exact same data should now be populated there.
