Update query in LINQ contains all columns in WHERE clause instead of just the primary key column - linq

I am updating a single column in a table using LINQ; take the fictitious table below:
MyTable (PKID, ColumnToUpdate, SomeRandomColumn)
var row = (from x in DataContext.MyTable
           where x.PKID == 5
           select x).FirstOrDefault();
row.ColumnToUpdate = 20;
DataContext.SubmitChanges();
This updates the column as expected, no surprises here. However, when I inspect the SQL commands which are generated, it does this:
UPDATE [dbo].[MyTable]
SET [ColumnToUpdate] = @p2
WHERE ([PKID] = @p0) AND ([SomeRandomColumn] = @p1)
This performs the update, but only if every column still matches the value the framework expects it to have, rather than referencing the primary key column on its own.
A database column being changed by another process is very feasible in this particular project: there is a window between getting the row you want to manipulate, calculating the new values, and issuing the update commands as a batch of rows. In that situation the query throws an exception and leaves a partial update behind, unless I trap it, reload the data and resend individual queries. There is a further downside that the row data can be quite large (containing HTML markup, for instance), and the whole thing gets passed to SQL and slows the system down when larger batches are processed.
Is there a way of making LINQ / Entity issue update commands based only on the PK column in the WHERE clause?

I have never used LINQ to SQL in production projects and I was never aware that it applies optimistic concurrency¹ by default.
This is the default behavior:
If a table doesn't have a Timestamp/Rowversion column², all updateable columns (i.e. everything except primary key columns and computed columns) have "Update Check" set to "Always" in the DBML.
If a table does have a Timestamp/Rowversion column, this column has "Time Stamp" set to "True" in the DBML and all columns have "Update Check" = "Never".
Either "Update Check" or "Time Stamp" mark a column as concurrency token. That's why in update statements you see these additional predicates on (not so) "random" columns. Apparently, the tables in your model didn't have Timestamp/Rowversion columns, hence an update checks the values of all updateable columns in the table.
¹ Optimistic concurrency: no exclusive locks are taken when updating records, but the existing values of all or selected columns are checked during the update. If one of those column values was changed by another user between reading the data and saving it, an update exception occurs.
² A column of data type Timestamp or Rowversion is automatically incremented when a record is updated and therefore detects all concurrent changes to that record.
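As a sketch of the fix: in the DBML designer you set "Update Check" to "Never" on each column, which corresponds to the UpdateCheck property of the Column mapping attribute. A minimal hand-written mapping for the fictitious table from the question might look like this (the class layout is an assumption; designer-generated DBML code uses storage fields and change notification instead of auto-properties):

using System.Data.Linq.Mapping;

[Table(Name = "dbo.MyTable")]
public class MyTable
{
    // Only the primary key ends up in the generated WHERE clause.
    [Column(IsPrimaryKey = true)]
    public int PKID { get; set; }

    // UpdateCheck.Never removes the column from concurrency checking.
    [Column(UpdateCheck = UpdateCheck.Never)]
    public int ColumnToUpdate { get; set; }

    [Column(UpdateCheck = UpdateCheck.Never)]
    public string SomeRandomColumn { get; set; }
}

With this mapping the generated statement becomes UPDATE ... WHERE [PKID] = @p0. Alternatively, add a Rowversion column to the table: per the default behavior described above, it then becomes the single concurrency token and no other columns are checked.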

Related

Create workflow first time insert then update

I'm using Informatica PowerCenter 9.1.0 and, to put it simply, I have two identical tables as source (table A) and target (table B). The columns are ID and EMAIL.
I need to make a workflow where the very first time it runs all the records are copied from table A to B.
Then every day I need to update in the target table B the rows modified in A (the mail can change). If in the source table the record is deleted I still want to see it in the target table.
I used these values
Treat source rows as : "Insert"
Then in the Mapping tab I checked the attributes "Insert" and "Update as Update".
The first time I ran it I had all the records in the target table, but if after a few days some emails change, I see no update. I still see the email that was inserted the first time.
I changed the value of Treat source rows as to "Update", but in the first run (table B is empty) it copies no rows.
Is it possible to have a workflow that inserts all the rows in the first run and then updates the records in subsequent runs, without changing the Treat source rows as value?
Select the option "Update else insert" in the Mapping tab, and keep "Treat source rows as" set to Update.

session item does not change when using lov in primary key

I am implementing an Interactive Grid to perform DML operations on a table.
It has a composite primary key of two columns.
One primary key column is display only and refers to the master table; for the other primary key column I want a LOV to select the value. It is a dynamic LOV with a display and return value picked from another table.
Inserts are fine, but the session state item value is set from one row and all operations are performed on that same row, irrespective of which row is selected.
You can see a sample here:
https://apex.oracle.com/pls/apex/f?p=128616:2:1964277347439::NO:::
master table name: sample
detail table name: sample_child
primary key in sample child : ID and Name
pop lov is implemented in NAME
LOV values are picked from table: Sample_uncle
LOV display : ID || '-' || NAME
LOV return : ID
You can try to update the blabla column of the sample_child table to see the issue.
I am not sure how I can give you access to look at the implementation.
I have already tried all the options I can think of.
This is to do with your primary keys: the detail table does not appear to have proper ones, which is why it always tries to update the first entry, and I think this is also why every row is marked when you load the table.
Primary keys also do the annoying thing of refusing to be empty; as you can see, if you insert a new row, the middle column (which is a PK) is filled with 't1001'.
Since you are dealing with simple tables (and not a whole bunch of joined tables) I always consider it best to use ROWID as the PK. So set ROWID as the PK for the master table, and ROWID for the detail table. Then set the detail table's Master Table to your master table, click on the first column in the detail table and set the master column for it. I also personally always hide the column that is linked.
I would advise you to use ROWID whenever possible as it's just so much easier to work with. It does mean you might need to set up a validation to prevent someone adding duplicate values for your actual PK columns (see the sketch below); since the PK is in the underlying table they can't enter duplicates anyway, but with a validation the error will be much prettier, whereas if the column is a PK, APEX prevents duplicates by default.
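As a sketch, such a validation could be an Interactive Grid validation of type "No Rows Returned" (hedged: this assumes ID, NAME and ROWID are the grid's column names, taken from the question; the :ROWID IS NULL branch covers newly inserted rows):

select 1
  from sample_child
 where id = :ID
   and name = :NAME
   and (:ROWID is null or rowid != :ROWID)

If this query returns a row, another record already uses the same ID/NAME combination and the validation fails with your (prettier) error message.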
I hope this helps

Updating Resultset that points to an Oracle view

Is it possible to do updates via ResultSet to an Oracle view? I ask because my code gives me an insufficient privilege error when it calls rs.updateRow(). I have checked and I definitely have access to the table/view.
The code looks like:
while (rs.next()) {
    int updateStatus = getPSCforAction(status);
    rs.updateInt("SPSC", updateStatus);
    rs.updateRow();
}
The SELECT statement changes depending on the operation, but it will always query an Oracle view (and in some cases multiple views). My main question is whether updating via ResultSet can be done against an Oracle view (or views)?
To answer your question one would need to see the definition of your view and the SELECT statement used to produce the result set in your Java code. Without looking at these it is hard to give an answer.
Anyway, the general rules and limitations are described in the Oracle Database JDBC Developer's Guide:
Result Set Limitations
The following limitations are placed on queries for enhanced result sets. Failure to follow these guidelines results in the JDBC driver choosing an alternative result set type or concurrency type.
To produce an updatable result set:
A query can select from only a single table and cannot contain any join operations. In addition, for inserts to be feasible, the query must select all non-nullable columns and all columns that do not have a default value.
A query cannot use SELECT *. However, there is a workaround for this.
A query must select table columns only. It cannot select derived columns or aggregates, such as the SUM or MAX of a set of columns.
To produce a scroll-sensitive result set:
A query cannot use SELECT *. However, there is a workaround for this.
A query can select from only a single table.
Scrollable and updatable result sets cannot have any column as Stream. When the server has to fetch a Stream column, it reduces the fetch size to one and blocks all columns following the Stream column until the Stream column is read. As a result, columns cannot be fetched in bulk and scrolled through.
They vaguely write that:
A query can select from only a single table and cannot contain any join operations.
It could be that they mean "exclusively from tables, but not views", but they could also mean "from tables and views"; nobody knows, one must test this.
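For reference, a minimal sketch of how an updatable result set is requested in JDBC (the connection URL, table and column names here are placeholders, not taken from the question):

import java.sql.*;

public class UpdatableResultSetDemo {
    public static void main(String[] args) throws SQLException {
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//host:1521/service", "user", "password");
        // The statement must be created with CONCUR_UPDATABLE, and the
        // query must name its columns (no SELECT *), select from a single
        // table and include every column you intend to update.
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_UPDATABLE);
        ResultSet rs = stmt.executeQuery(
                "SELECT ID, SPSC FROM MY_TABLE WHERE SPSC IS NULL");
        while (rs.next()) {
            rs.updateInt("SPSC", 1);
            rs.updateRow();  // writes the change back to the database
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}

If the driver cannot honor CONCUR_UPDATABLE for your query (for example because it selects from a view it cannot update through), it chooses an alternative concurrency type as the quoted guide says; rs.getConcurrency() tells you what you actually got.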
Another possible problem: your view may not be updatable, that is, it doesn't conform to the following rules:
Notes on Updatable Views
The following notes apply to updatable views:
An updatable view is one you can use to insert, update, or delete base table rows. You can create a view to be inherently updatable, or you can create an INSTEAD OF trigger on any view to make it updatable.
To learn whether and in what ways the columns of an inherently updatable view can be modified, query the USER_UPDATABLE_COLUMNS data dictionary view. The information displayed by this view is meaningful only for inherently updatable views. For a view to be inherently updatable, the following conditions must be met:
Each column in the view must map to a column of a single table. For example, if a view column maps to the output of a TABLE clause (an unnested collection), then the view is not inherently updatable.
The view must not contain any of the following constructs:
A set operator
A DISTINCT operator
An aggregate or analytic function
A GROUP BY, ORDER BY, MODEL, CONNECT BY, or START WITH clause
A collection expression in a SELECT list
A subquery in a SELECT list
A subquery designated WITH READ ONLY
Joins, with some exceptions, as documented in Oracle Database Administrator's Guide
In addition, if an inherently updatable view contains pseudocolumns or expressions, then you cannot update base table rows with an UPDATE statement that refers to any of these pseudocolumns or expressions.
If you want a join view to be updatable, then all of the following conditions must be true:
The DML statement must affect only one table underlying the join.
For an INSERT statement, the view must not be created WITH CHECK OPTION, and all columns into which values are inserted must come from a key-preserved table. A key-preserved table is one for which every primary key or unique key value in the base table is also unique in the join view.
For an UPDATE statement, the view must not be created WITH CHECK OPTION, and all columns updated must be extracted from a key-preserved table.
For a DELETE statement, if the join results in more than one key-preserved table, then Oracle Database deletes from the first table named in the FROM clause, whether or not the view was created WITH CHECK OPTION.
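As the quote suggests, USER_UPDATABLE_COLUMNS gives a quick answer for a concrete view (MY_VIEW is a placeholder for your view's name):

SELECT column_name, updatable, insertable, deletable
  FROM user_updatable_columns
 WHERE table_name = 'MY_VIEW';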

Oracle 12c - refreshing the data in my tables based on the data from warehouse tables

I need to update some tables in my application from warehouse tables which are updated weekly or biweekly, and I should update my tables based on those. My tables are referenced by foreign keys from other tables, so I cannot just truncate them and reinsert the whole data set every time. Instead I have to take the delta and update accordingly, based on a few primary key columns which don't change. I need some input on how to implement this approach.
My approach:
Check the last updated time of those tables/views.
If it is more recent, compare each row based on the primary key in my table and the warehouse table.
Update each column if it is different.
Do nothing if there is no change in the columns.
Insert if there is a new record.
My Question:
How do I implement this? Is writing PL/SQL code a good and efficient way? The expected number of records is around 800K.
Please provide any sample code or links.
I would go for PL/SQL with the BULK COLLECT / FORALL method. You can use MINUS in your cursor in order to reduce the data size and calculate the difference.
You can check this site for more information about bulk collect, forall and engines: http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52plsql-1709862.html
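A minimal sketch of that pattern (the table and column names warehouse_t, my_t, pk_col and email are placeholders, not from the question; the MERGE inside the FORALL handles both changed and brand-new rows):

DECLARE
  -- MINUS keeps only warehouse rows that differ from, or are missing
  -- in, the application table.
  CURSOR c_delta IS
    SELECT pk_col, email FROM warehouse_t
    MINUS
    SELECT pk_col, email FROM my_t;

  TYPE t_ids    IS TABLE OF my_t.pk_col%TYPE;
  TYPE t_emails IS TABLE OF my_t.email%TYPE;
  l_ids    t_ids;
  l_emails t_emails;
BEGIN
  OPEN c_delta;
  LOOP
    FETCH c_delta BULK COLLECT INTO l_ids, l_emails LIMIT 1000;
    EXIT WHEN l_ids.COUNT = 0;

    FORALL i IN 1 .. l_ids.COUNT
      MERGE INTO my_t t
      USING (SELECT l_ids(i) AS pk_col, l_emails(i) AS email FROM dual) s
         ON (t.pk_col = s.pk_col)
       WHEN MATCHED THEN UPDATE SET t.email = s.email
       WHEN NOT MATCHED THEN INSERT (pk_col, email)
                             VALUES (s.pk_col, s.email);
  END LOOP;
  CLOSE c_delta;
  COMMIT;
END;
/

For 800K rows a single set-based MERGE may be simpler still; the PL/SQL loop mainly buys you control over batch size.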
There are many parts to your question and I will answer each as best I can:
While it is possible to disable the referencing foreign keys, truncate the table, repopulate it with the updated data, then re-enable the foreign keys, given your requirements described above I don't believe truncating the table each time is optimal.
Yes, in principle PL/SQL is a good way to achieve what you want, as this is too complex to deal with in plain SQL and PL/SQL is an efficient alternative.
Conceptually, the approach I would take is something like the following (a SQL sketch of the initial set-up follows the steps):
Initial set up:
Create a sequence called activity_seq.
Add an activity_id column of type NUMBER, with a unique constraint, to your source table/s.
Add a trigger to the source table/s setting activity_id = activity_seq.nextval for each insert/update of a table row.
Create some kind of master table to hold the "last processed activity id" value.
Then bi/weekly:
Retrieve the value of "last processed activity id" from the master table.
Select all rows in the source table/s having an activity_id value greater than the "last processed activity id" value.
Iterate through the selected source rows and update the target if a match is found based on whatever your match criterion is, or, if no match is found, insert a new row into the target (I assume there is no delete, as you do not mention it).
On completion, update the master table's "last processed activity id" to the greatest activity_id value of the source rows processed in step 3 above.
(Please note that, depending on your environment and the number of rows processed, the above process may need to be split and repeated over a number of transactions.)
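A sketch of that initial set-up in SQL (source_t and sync_control are placeholder names; activity_seq and activity_id are the objects named in the steps above):

CREATE SEQUENCE activity_seq;

ALTER TABLE source_t ADD (activity_id NUMBER);
ALTER TABLE source_t ADD CONSTRAINT source_t_activity_uq UNIQUE (activity_id);

-- Stamp every inserted or updated row with the next activity id.
CREATE OR REPLACE TRIGGER source_t_activity_trg
BEFORE INSERT OR UPDATE ON source_t
FOR EACH ROW
BEGIN
  :NEW.activity_id := activity_seq.NEXTVAL;
END;
/

-- One-row master table holding the "last processed activity id".
CREATE TABLE sync_control (last_processed_activity_id NUMBER NOT NULL);
INSERT INTO sync_control VALUES (0);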
I hope this proves helpful

Why does Oracle SQL Developer use such strange delete criteria?

When deleting rows in the grid view, the message panel indicates that SQL Developer issues this delete command:
DELETE FROM "MH"."T" WHERE ROWID = 'AABUG+AAEAAEZtrAAA'
AND ORA_ROWSCN = '1220510600909'
and ( "A" is null or "A" is not null )
It seems specifying the ROWID should be sufficient to identify the row, so
Why does it specify ORA_ROWSCN?
And more befuddling, why the is null / not null clause?
The ROWID is just the physical address of the row. If one row is deleted and another row is inserted, the new row could have the same ROWID as the old row. If the data in the row had been modified, it is also possible that its ROWID could have changed. The ORA_ROWSCN criterion ensures that neither of these has actually happened. It also allows SQL Developer to alert you if another session had modified the data since you read it, so that you can confirm that you still want to delete the row.
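You can inspect both pseudocolumns yourself; MH.T is the table from the question (note that ORA_ROWSCN is tracked per block unless the table was created with ROWDEPENDENCIES, so neighbouring rows may share an SCN):

SELECT ROWID, ORA_ROWSCN, t.*
  FROM "MH"."T" t;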
I'm at a loss as to what the "A" is null or "A" is not null predicate would be adding. If it were the first predicate, I would guess it was the standard 1 = 1 predicate that folks sometimes add to dynamically built queries to simplify the process of building the SQL statement. But that doesn't fit with it being the last predicate in the query. Is A the primary key of the table?
