Truncate target Table based on rows in a Target Flatfile - informatica-powercenter

I have a workflow which loads data from a flat file to a Stage Table after a few basic checks on a few columns. In the mapping, each time a check fails (meaning the column has an invalid value), I make an entry in an ErrorFlatFile with an error text.
Now I have two targets in my mapping: one is the Stage Table and the other is the Error Flat File.
What I want to achieve is this: if there is even one entry in the ErrorFlatFile (indicating there is an error in the source file), I want to truncate the Target Stage Table.
Can someone please help me with how I can do this at the session level?
Thanks,

You would need one more session. Make a dummy session (one that reads no data) and add a Pre- or Post-SQL statement:
TRUNCATE TABLE YourTargetStageTableName
Create a link from your existing session to the dummy one and add a condition like:
$PM<TargetName>@numAffectedRows > 0
replacing <TargetName> with the target instance name of your ErrorFlatFile. The second session will then be executed only when an entry was made to the error file; if there are no errors, it will not run.
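For example, if the error file's target instance were named ERR_FILE (a hypothetical name), the link condition would read:
$PMERR_FILE@numAffectedRows > 0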

Related

Create workflow first time insert then update

I'm using Informatica PowerCenter 9.1.0 and, to put it simply, I have two identical tables as source (table A) and target (table B). The columns are ID and EMAIL.
I need to make a workflow where the very first time it runs all the records are copied from table A to B.
Then every day I need to update in target table B the rows modified in A (the email can change). If a record is deleted in the source table, I still want to see it in the target table.
I used these values:
Treat source rows as: "Insert"
Then in the Mapping tab I checked the attributes "Insert" and "Update as Update".
The first time, I get all the records in the target table, but if after a few days some emails change, I see no updates. I still see the first email that was inserted the first time.
I changed the value of Treat source rows as to "Update", but then the first run (when table B is empty) copies no rows.
Is it possible to have a workflow that inserts all the rows on the first run and then updates the records on subsequent runs, without changing the Treat source rows as value?
Select the option "Update else Insert" in the Mapping tab and keep Treat source rows as set to Update. With this combination, rows that already exist in the target are updated and rows that do not exist yet are inserted, which covers both the first (empty table) run and all later runs.
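Conceptually, this setting behaves like a SQL MERGE keyed on the primary key. A minimal sketch of the equivalent statement, assuming Oracle syntax and the two-column tables from the question:
MERGE INTO table_b b
USING table_a a
ON (b.id = a.id)
WHEN MATCHED THEN
  UPDATE SET b.email = a.email
WHEN NOT MATCHED THEN
  INSERT (b.id, b.email) VALUES (a.id, a.email);
Rows deleted from table A are simply never matched, so they remain untouched in table B, which matches the requirement above.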

How to find out which insert script failed among multiple insert scripts

My file.sql has 50,000 INSERT statements. The execution of one or more of them failed because a value was too large for its column. How can I find out which INSERT failed (i.e., at which line number in the file)?
I take it you want the missing data to be inserted after all?
1. Can you delete all the data, change the table to hold larger values, and run the script again?
2. Is there a unique key on the table? Then modify the table so it can hold larger values and run the script again; only the data you do not already have will be inserted.
3. Create the same table in another schema or database with the modified definition and insert the data there. Query the records where the length of the column value exceeds the previous maximum, generate INSERT statements only for those records, and run them against the original (but now modified to hold larger values) table, as sketched below.
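A minimal sketch of option 3, assuming Oracle, a widened copy t_widened of the table, a column col whose original limit was VARCHAR2(50), and hypothetical table/column names:
-- Rows that would have failed against the original VARCHAR2(50) column
SELECT id, col FROM t_widened WHERE LENGTH(col) > 50;
-- Generate INSERT statements for just those rows (doubling embedded quotes)
SELECT 'INSERT INTO t_original (id, col) VALUES ('
       || id || ', ''' || REPLACE(col, '''', '''''') || ''');'
FROM t_widened
WHERE LENGTH(col) > 50;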

Why does Phoenix always add an extra column (named _0) to HBase when I execute an UPSERT command?

When I execute an UPSERT command in Apache Phoenix, I always see that Phoenix adds an extra column (named _0) with an empty value in HBase. This column (_0) is auto-generated by Phoenix, but I don't need it:
ROW  COLUMN+CELL
abc  column=F:A, timestamp=1451305685300, value=123
abc  column=F:_0, timestamp=1451305685300, value=    # I want to avoid generating this row
Could you tell me how to avoid that? Thank you very much!
"At create time, to improve query performance, an empty key value is
added to the first column family of any existing rows or the default
column family if no column families are explicitly defined. Upserts will also add this empty key value. This improves query performance by having a key value column we can guarantee always being there and thus minimizing the amount of data that must be projected and subsequently returned back to the client."
Apache Phoenix Documentation
Regarding your question whether that is avoidable: you could work around the problem by adding the following statements at the end of your SQL:
ALTER TABLE "<your-table>" ADD "<your-cf>"."_0" VARCHAR(1);
ALTER TABLE "<your-table>" DROP COLUMN "<your-cf>"."_0";
You should only do this if you query the table with Phoenix but then access it with another system that is not aware of this Phoenix-specific dummy value.
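For instance, with a table named EVENTS and a column family F (both names hypothetical, matching the scan output above), the workaround would be:
ALTER TABLE "EVENTS" ADD "F"."_0" VARCHAR(1);
ALTER TABLE "EVENTS" DROP COLUMN "F"."_0";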

updating data in external table

Let's assume the following scenario:
I have several users who will prepare .csv files (not being aware of each other, so concurrency is possible).
The .csv file will always be in same format.
The data in the .csv file will contain a list of ids together with some other columns like update_date.
Based on that data I will create a procedure that will update data in a real DB table.
The idea is to use external tables, to make it maximally simple for the .csv creators: they just put files in a folder and stuff is done for them; the rest is my job.
The questions are:
1. Can I have several files as the source for one external table, or do I need one external table per file? (What I mean is: whenever there is a new function call to load data from a csv, it should be added to the existing external table, so not all files are loaded at once.)
2. Can I update records/fields in an external table?
An external table basically allows you to query the data stored in the external file(s), so you can't issue an UPDATE on it.
You can:
1) add new files in the directory and ALTER the table:
ALTER TABLE my_ex LOCATION ('file1.csv','file2.csv');
2) you can of course modify the existing files as well. There is no database state for an external table; each SELECT loads the data into the database, so you will always see the "updated" status.
** UPDATE **
An attempt to modify the data (e.g. with UPDATE) leads to ORA-30657: operation not supported on external organized table.
To be able to maintain state in the database, the data must first be copied into a regular table (CTAS, i.e. create table as select from the external table), as sketched below.
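A minimal sketch of that approach, reusing the external table my_ex from above and a hypothetical regular table my_stage (the update_date and id columns come from the question):
-- Copy the external data into a regular table that supports DML
CREATE TABLE my_stage AS SELECT * FROM my_ex;
-- Updates are now possible on the regular copy
UPDATE my_stage SET update_date = SYSDATE WHERE id = 42;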

Prevent the last record in an Access table from being deleted

I made a login form for my Access database, but how do I prevent any user from deleting the last record?
For example: if there are two records or more in login_table, the user can delete all the records but not the last one.
There are many ways to do that:
1. Create a constraint in your server-side code to check whether there is only one record left at the time of deleting records.
2. Create a trigger on the table which prevents the user from deleting the last record.
Probably the easiest way to accomplish this in Access 2013 would be to create a "Before Delete" data macro that looks like this:
If DCount("*","Table1") < 2 Then
    RaiseError
        Error Number: 1
        Error Description: You cannot delete the last remaining record in this table.
End If
To create this data macro, open the table in Design View, then on the "Design" tab of the ribbon click "Create Data Macros" and choose "Before Delete". (Remember to replace "Table1" with the actual name of your table.)
The previous record is saved in the table, so it can be deleted. The record being entered is not actually saved in the table until the form is closed or an action is taken to move to another record.
On an entry form, I create a duplicate table with the same fields. The entry form places the data temporarily into the first table. Then I create two queries: one to append the data from the temporary table to the secondary table, and a second to clear the first table, making it ready for new data entry. The queries require a command to save the record prior to running them, so I perform this with a macro that runs the actions in sequence: 1. save the record, 2. copy the data to the second table, 3. clear the first table (see the sketch below).
You will have better control over the data.
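A minimal sketch of the two queries described above, in Access SQL, with hypothetical table names TempEntry (the temporary entry table) and MainTable (the permanent one):
Query 1, the append query:
INSERT INTO MainTable SELECT * FROM TempEntry;
Query 2, clearing the temporary table:
DELETE * FROM TempEntry;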
