I want to read from a table, change a couple of column values for a few rows via a query, then update those rows in the same table.
I'm using SAP BODS, and here's what I tried:
I was about to insert images but just found out I can't insert images until 10 rep.
Anyway, I created a DataFlow where I have the same table as source and target.
Then a Query transform to filter (using a WHERE clause) and change the values (using the mapping), followed by a Table Comparison, where I expected those rows to be flagged as updates in this particular case. In the Table Comparison I set the table name in the first entry, the PK in 'input primary key', and the two columns I want to change in 'Compare columns'. No other changes from the defaults that I can recall.
I got no warnings on 'validate all', but on execution I receive an ORA-00001 (unique constraint violation) on the PK.
So... I thought the Table Comparison would issue updates, but it seems to be trying to insert instead. I'd like to know what I'm doing wrong and how I can get the job to perform those updates. Thanks in advance.
P.S. I did search SO before asking and didn't find anything relevant.
OK, so it turns out I found what was going on a few minutes after posting the question.
I wasn't sure whether I should answer my own question, so I took a look at Etiquette for answering your own question and decided to come back here and do exactly that.
For some reason I got stuck thinking the problem was the Table Comparison trying to insert a row with a PK that already exists, instead of doing the update I wanted.
But after going back to the job to take another look at the issue, it occurred to me that the problem might be a duplicate in the incoming data set. I made a few adjustments to filter those out, and voilà.
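For anyone else hitting this: the fix was just making sure only one incoming row exists per primary key before the Table Comparison sees the data. Roughly the SQL equivalent of what the extra filtering step does (the table and column names below are placeholders, not my real ones):

-- Keep exactly one incoming row per primary key so the Table Comparison
-- never receives two rows with the same key.
SELECT pk_col,
       MAX(col_a) AS col_a,   -- the two columns being changed
       MAX(col_b) AS col_b
  FROM src_table
 WHERE status_col = 'X'       -- placeholder for the original WHERE filter
 GROUP BY pk_col;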
I don't know if I even worded the question correctly, but I'm trying to create a measure that depends on what is showing in the pivot table (using PowerPivot). In the screenshot I posted, "DealMonth" is an expression in the Power Query table itself that simply subtracts the employee's start date from the month a deal was closed in; that shows how long it took that salesperson to close the deal. "TenureMonths" is also an expression in the Power Query table, and it calculates the person's tenure. The values populating the screenshot come from a total headcount measure I created.
What I'm trying to do is create a separate measure that shows when "TenureMonths" is less than "DealMonth." So if TenureMonths is 5, then after a DealMonth of 5 the value would be 0. Is this possible?
Screenshot
I should add the following information.
"DealMonth" - Comes from the FactData table
"TenureMonths" - Comes from the DimSalesStart table
These two tables are joined by name. I feel like I'm so close because I can see what I want. The second image below is a copy/paste of the pivot table result, but with my edits to show what I'd want to see. Basically, IF(TenureMonths >= DealMonth, 1, 0). The trouble seems to be that, since they're in two different tables, I can't make it work. The rows in the fact table are transactions, while the rows in the dim table are just the people with their start and end dates.
Desired Result
This is possible with something like IF([measure1] < [measure2], BLANK(), [measure1]); however, without seeing more of the data it is hard to guide you specifically.
You will need to create two separate measures, one for TenureMonths and one for DealMonth. Depending on the data, this can be done with an aggregation formula such as SUM, MIN, or MAX (it depends on whether there will be more than one value).
Then reference those two measures in the formula pattern I mentioned above, and that should give you what you want.
I figured out a solution. I added a dimension table for DealMonth itself and joined it to my fact table. That allowed me to write the formulas I needed.
I've made the mistake of using the 'Calculate and Replace Column' feature to replace the wrong column, and realized after the fact. The column I replaced corresponds to last names and is important. I would like to retrieve this column but maintain my other 15 or so data transformations. Ideally, I would like to remove this transformation, but I've come up empty so far. Here's what I've tried:
I tried adding the 'last name' column again from the same external source, using Insert > Columns... I also tried renaming this column to avoid the existing transformation. Unfortunately, this resulted in an entirely empty column, so either it did not successfully match to the table or it was affected by the transformation.
I checked the source information, and found exactly the 3-4 lines that I wish were not there. I thought it might be possible to edit this but haven't found a way. This seems like it would be the easiest.
Another idea I had was I could replace the data table with the same source, and repeat all of the transformations from the replace data table dialogue (excluding the bad one). This is my next plan of attack, but I figured I would come on here to see if there's an easier way first.
Thanks in advance!
Good News for YOU!!! #jeremyVollen.
It is possible to 'edit' your transformation per Tibco article 44098.
Resolution: if there is more than one transformation on a data table and you need to edit any of those transformations, follow the steps below:
Go to Edit >> Data Table Properties.
Select the data table to which the transformation has been added and click Refresh Data >> With Prompt.
A new window will pop up, allowing you to make the desired changes in each of the transformations.
Unfortunately, it is NOT possible to reverse data table transformations.
It IS possible to undo the transformations with Edit >> Undo or CTRL+Z, but that's as far as it goes.
My strategy for dealing with this (in accordance with your #3) is to visit Edit >> Data Table Properties, select the table I'm interested in, select Source Information, then copy the contents of the text area and paste it into Notepad. Then I use File >> Replace Data Table and start over from the beginning, keeping Notepad open so I don't miss any steps.
I realize it's not ideal, but unfortunately there is no other way.
I have a query that builds a cursor to create invoices. Part of it is the expression "IIF(cuPR.curren="EUR",NULL,rate) AS Taux".
My problem is that my query works fine for January through April and for June, but not for May. I checked the query to identify the problem, and I checked and rechecked my data; everything looks fine. Since the data is the only thing that changes, what else should I check, please?
Usually, if there is a possibility that I will have data that may throw things off, as you have here, I use a cast to ensure the field is what I expect.
Something like...
SELECT CAST(IIF(cuPR.curren = "EUR", NULL, rate) AS Numeric(10,5)) AS Taux ...
Upon further research I noticed that in May the first record in the cursor cuPR had "EUR" as "curren". I tried sorting my cursor by "curren DESC", making sure EUR would not be in the first record (USD and GBP are the other possible values), and my query went through.
DRapp had given the explanation in response to a previous question of mine:
"Bernard (and others new to VFP). VFP queries actually run the query twice, once for the first record just to confirm the final column types and sizes, then for the actual query of ALL records." In my case one of the columns was NULL...
Say I have something like this:
somerecord SOMETABLE%ROWTYPE;
Is it possible to access the fields of somerecord without knowing the field names?
Something like somerecord[i], such that the order of the fields would be the same as the column order in the table?
I have seen a few examples using dynamic SQL, but I was wondering if there is a cleaner way of doing this.
What I am trying to do is generate/get the DML (insert query) for a specific row in my table, but I haven't been able to find anything on this.
If there is another way of doing this I'd be happy to use it, but I would also be very curious to know how to do the former part of this question; it's more versatile.
Thanks
This doesn't exactly answer the question you asked, but might get you the result you want...
You can query the USER_TAB_COLUMNS view (or the other similar *_TAB_COLUMNS views) to get information such as the column name (COLUMN_NAME), position (COLUMN_ID), and data type (DATA_TYPE) for the columns in a table (or a view), which you can use to generate DML.
You would still need to use dynamic SQL to execute the generated DML (or at least generate static SQL separately).
However, this approach won't work for identifying the columns in an arbitrary query (unless you create a view of it). If you need that, you might need to resort to DBMS_SQL (or other tools).
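For illustration, a rough sketch of that idea (not tested against your schema): build the column list from USER_TAB_COLUMNS and use it to generate an INSERT for one specific row. Here my_table, my_table_archive, and the id = 42 predicate are made-up names, and LISTAGG assumes 11gR2 or later.

-- Build "col1, col2, ..." in column order for the table of interest,
-- then generate and run an INSERT for a single row identified by its id.
DECLARE
  l_cols VARCHAR2(4000);
  l_sql  VARCHAR2(4000);
BEGIN
  SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
    INTO l_cols
    FROM user_tab_columns
   WHERE table_name = 'MY_TABLE';

  l_sql := 'INSERT INTO my_table_archive (' || l_cols || ') '
        || 'SELECT ' || l_cols || ' FROM my_table WHERE id = :row_id';

  EXECUTE IMMEDIATE l_sql USING 42;  -- dynamic SQL is still needed to run it
END;
/

If you want the literal INSERT ... VALUES (...) text instead, you would have to quote each value according to its DATA_TYPE, which gets fiddly; the insert-select form above sidesteps that.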
Hope this helps.
As far as I know there is no clean way of referencing record fields by their index.
However, if you have a lot of different kinds of updates of the same table, each with its own column set to update, you might want to avoid dynamic SQL and look in the direction of statically populating your record with values and then issuing update someTable set row = someTableRecord where someTable.id = someTableRecord.id;.
This approach has its own drawbacks, such as updating every column even if it is unchanged, and thus creating additional redo log data, but I believe it should be considered.
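A minimal sketch of that pattern, assuming the table has an id primary key and using a made-up status column for the change:

DECLARE
  someTableRecord someTable%ROWTYPE;
BEGIN
  -- statically populate the record from the existing row
  SELECT * INTO someTableRecord
    FROM someTable
   WHERE id = 42;

  -- change whatever fields need changing (status is a placeholder column)
  someTableRecord.status := 'PROCESSED';

  -- write the whole record back; note that every column gets updated
  UPDATE someTable
     SET ROW = someTableRecord
   WHERE id = someTableRecord.id;
END;
/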
I am writing a simple program to insert rows into a table, but when I started writing it I had a doubt: sometimes I will get duplicate input, and in that case I have to notify the user that the row already exists.
Which of the following approaches is better for achieving this?
1. Directly perform the INSERT statement; if it is a duplicate I will get a primary key violation error and notify the user, otherwise the row is inserted. One query to perform.
2. First search for the primary key value. If a value is found, prompt the user; otherwise perform the INSERT. For a non-duplicate row this approach takes two queries.
Please let me know the trade-offs between these approaches. Which one is best to follow?
Regards,
Sunny.
I would choose the 2nd approach.
The first one would cause an exception to be thrown, which is known to be very expensive...
The 2nd approach would use a SELECT count(*) FROM mytable WHERE key = userinput, which will be very fast, plus the INSERT statement, and you can use the same DB connection object for both (assuming OO ;) ).
Using prepared statements will pre-optimize the queries, and I think that makes the 2nd approach much better and more flexible than the first one.
EDIT: depending on your DBMS you can also use an IF NOT EXISTS clause.
EDIT2: I think Java would throw a SQLException no matter what went wrong, i.e. with the 1st approach you wouldn't be able to tell a duplicate entry from an unavailable database without parsing the error message, which is again a point in favour of SELECT+INSERT (or IF NOT EXISTS).
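To make the IF NOT EXISTS idea concrete, here is one hedged sketch of a single-statement check-and-insert. mytable and key come from the answer above, some_value is a placeholder, and the FROM dual part is Oracle-specific (other DBMSs use VALUES or their own dummy table); MERGE is another common option.

-- Insert only when the key is not already present; a single round trip.
INSERT INTO mytable (key, some_value)
SELECT :key, :some_value
  FROM dual
 WHERE NOT EXISTS (SELECT 1 FROM mytable WHERE key = :key);
-- If the statement reports 0 rows inserted, the key already existed,
-- so the application can prompt the user without parsing an error message.

Under concurrent inserts this can still race, so it is worth keeping the unique constraint on the primary key as a backstop either way.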