Do we need to create the target output file in Informatica before starting the workflow? - informatica-powercenter

I am running some mappings and want to store the results in a new flat file (.csv), but it seems this requires creating a target file first.

No, you do not need to create the target file before running the workflow; Informatica creates the file for you, using the file name you specify in the session properties.

Related

Writing data into a single file for Azure instead of part files

I am trying to write a file to my Azure Storage using Mosaic Decisions. I want to write the file to Azure as a single file and not part files. How can this be achieved?
Simply switch on the Single File Output toggle in the Writer Node configuration menu (double-click the writer node).
With this toggle enabled, the output is written as a single file.

How to move elasticsearch index using file system?

Use case:
I have created Elasticsearch indexes (mywebsiteindex-yyyymmdd, mysharepointindex-yyyymmdd) on my laptop/dev machine. I want to export/zip an index as a file. The file may then be migrated by someone who has credentials for the target machine, and the zip/file may be imported into the target Elasticsearch folder.
You can abstract away the words 'machine', 'folder', and 'zip' above. The focus is: transfer the index as a file and reimport it at a target that I may not be able to reach over http/tcp/ftp/ssh.
Is there any Python or other script out there that can export from the source and import to the target? A script that hides internal complexities such as node/cluster count differences between dev and prod, and just moves the index.
Note: I already referred to the below page, so no need to reiterate the same
https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html
There are some options:
You can use the snapshot and restore API to create a snapshot of your index and restore it in your new instance. (This is the recommended way.)
You can use the reindex API in your new instance to reindex your index from remote.
You can use Logstash with your old instance as the input and your new instance as the output.
And you can write a script/application using one of the supported clients to query your index, export it to a file, read that file back, and import it into your new instance (Logstash can also do that); see the sketch after this list.
But you cannot simply move the raw data files around; that is neither supported nor recommended by Elastic.
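A minimal sketch of that last option, using the elasticsearch-py client. The concrete index name and the localhost URLs are assumptions, and index settings/mappings would still need to be recreated on the target separately:

# Export an index to a .jsonl file on the source, then bulk-import it on the
# target; zip and carry the file by whatever channel is available in between.
import json
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan, bulk

INDEX = "mywebsiteindex-20240101"          # hypothetical concrete index name

# --- on the source machine: dump every document to a file ---
src = Elasticsearch("http://localhost:9200")
with open(INDEX + ".jsonl", "w") as f:
    for hit in scan(src, index=INDEX, query={"query": {"match_all": {}}}):
        f.write(json.dumps(hit["_source"]) + "\n")

# --- on the target machine: read the file back and bulk-index it ---
dst = Elasticsearch("http://localhost:9200")
with open(INDEX + ".jsonl") as f:
    bulk(dst, ({"_index": INDEX, "_source": json.loads(line)} for line in f))

This hides node/cluster topology differences because the documents are simply re-indexed on the target, but it does not carry over settings, mappings, or internal document IDs.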

Moving a file to another location in Apache NiFi

I am trying to load data into a MySQL database using LOAD DATA LOCAL INFILE; however, I am having difficulty moving the files to a new location once they have been successfully imported into MySQL.
Below is a screenshot of the process flow.
My problem is:
I managed to import/load the data using MySQL's LOAD DATA LOCAL INFILE, but when I try to move the successfully imported files to the correct directory, I fail to do so. The PutFile_sucess and PutFile_fail processors do not work as expected, so I decided to use FetchFile instead, but then I get an empty file: FetchFile just creates the file rather than moving the whole file.
I hope I have made myself clear, I would appreciate any inputs.
If your issue is removing the file once it has been imported, you could just add a FetchFile processor somewhere after your success path and set its Completion Strategy to Delete File.
However, a better approach would be to load the content of the file into NiFi, parse/split/process it, and then (possibly regrouping it in batches) ingest the content into MySQL.
Could you perhaps improve your question with information such as the format/structure/content of the file you are trying to load?
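For reference, since the original goal was to move (not delete) the imported file, FetchFile's standard properties support that directly. A rough configuration, where the destination directory is an assumption:

FetchFile
    File to Fetch: ${absolute.path}/${filename}
    Completion Strategy: Move File    (or Delete File to remove it instead)
    Move Destination Directory: /data/imported    (hypothetical path)
    Move Conflict Strategy: Rename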

Program solution reading through share folders

I have a quick project I am working on for one of our VPs.
We have a few thousand CAD jobs stored on a network file share. The file structure is such that there is a parent folder for each CAD job. Part of the folder name contains the job number. Inside the folder, there are one to many .ini text files that contain the connection information I need.
What I need is a programmatic way to search through all the folders and extract the job number from the folder name, along with all the connection values from the .ini files.
For example, for a folder named CM8252390-3, the job number is 8252390-3. Inside this folder are 3 .ini files, with entries that look like this:
[Connection]
Name=IMP_Acme_3.5
[Origin]
X=-15.044784
Y=19.620095
Z=44.621395
So my program needs to give me the following result:
Job Connection
8252390-3 IMP_Acme1_3.5
8252390-3 IMP_Acme2_3.5
8252390-3 IMP_Acme3_3.5
8254260-1 IMP_Acme3_2.4
8254260-1 IMP_Acme3_4.1
...continued for all folders in the network share
Any suggestions on the best way to do this? I am primarily an Oracle PL/SQL developer, but I have some basic Windows batch and Unix shell experience. If I can get the data loaded into Oracle tables, I can search it using PL/SQL tools, but is there a better way using shell, batch, or other tools?
Thank you.
I think this is a job for PowerShell or VBScript. It would be easy to use these tools to write the information you need to one file (see the sketch below).
This file should be written to an Oracle directory.
Grant read permission to a database user on this directory.
Use UTL_FILE to read the file, or treat the file as an external table and expose it as a view.
Schedule a regular OS job to refresh or rebuild the list.
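To keep the sketches in this thread in one language, here is that scan step in Python rather than PowerShell (the logic translates directly). The share path, output path, and folder-name pattern are assumptions based on the CM8252390-3 example above:

# Walk the share, pull the job number out of each folder name, and collect
# every [Connection] Name from the .ini files into one tab-separated file.
import os
import re
import configparser

SHARE = r"\\fileserver\cadjobs"                 # hypothetical network share
OUT = r"C:\oracle_dir\job_connections.txt"      # file in the Oracle directory

with open(OUT, "w") as out:
    for folder in os.listdir(SHARE):
        # e.g. CM8252390-3 -> job number 8252390-3 (letter prefix assumed)
        m = re.match(r"^[A-Za-z]+(\d+-\d+)$", folder)
        if not m:
            continue
        job_dir = os.path.join(SHARE, folder)
        for name in os.listdir(job_dir):
            if not name.lower().endswith(".ini"):
                continue
            ini = configparser.ConfigParser()
            ini.read(os.path.join(job_dir, name))
            conn = ini.get("Connection", "Name", fallback=None)
            if conn:
                out.write(m.group(1) + "\t" + conn + "\n")

The resulting file can then be read with UTL_FILE or exposed as an external table, as described above.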

Hudson - how to trigger a build via file using the filename and file contents

Currently I'm working on a continuous integration server solution using Hudson.
Now I'm looking for a build job that will be triggered every time a file appears in a specific directory.
I've found some plugins that allow Hudson to watch and poll files in a directory (File Found Trigger, FSTrigger, and SCM File Trigger), but none of them lets me get the filename and file contents of the found file and use those values during the build execution (my idea is to pass these values to a shell script).
Do you know whether this is possible via any other Hudson plugin? Or maybe I'm missing something.
Thanks,
Davi
Two valid solutions:
As suggested by Christopher, read the values from the file via shell/batch commands at the beginning of your build script; see the sketch below. (The downside is that Hudson will not be aware of those values in any way.)
Use the Envfile Plugin to read the content of the file and interpret it as a set of key-value pairs.
Note that if the File Found Trigger "eats" the flag file, you may need to create two files:
one to hold the key-value pairs and another to serve as a flag for the File Found Trigger.
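A minimal sketch of the first option, in Python for illustration (a few lines of shell would do the same job). The watched directory is an assumption; the build would invoke this script before anything else:

# Find the newest file in the watched directory and emit its name plus its
# key=value contents, so the calling shell script can pick them up.
import glob
import os

TRIGGER_DIR = "/var/hudson/triggers"    # hypothetical watched directory

files = glob.glob(os.path.join(TRIGGER_DIR, "*"))
if not files:
    raise SystemExit("no trigger file found")
newest = max(files, key=os.path.getmtime)

print("TRIGGER_FILE=" + os.path.basename(newest))
with open(newest) as f:
    print(f.read())                     # the file's own key=value lines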
