In a VS2010 database project, I am trying to generate test data for a table that already contains data (by clicking 'No' when prompted). The identity column (which is the primary key) is a SQL-computed value, so I cannot change the data generator for that column.
So why doesn't the data generation plan recognize the existing primary key values in the database, instead of always trying to insert duplicates? It seems the plan always starts from the seed value rather than from the next available identity value. Can I force the data generation plan to start from some other seed value for this particular table?
To answer my second question: it seems that from the schema view I can set some other seed to start from. Still, I don't understand why the data generation plan doesn't automatically recognize the next available IDENTITY value.
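If you want to check what the next available value actually is on the target database before picking a new seed, a quick query along these lines will tell you (the table name dbo.MyTable is just a placeholder):

-- last identity value issued for this table; the next insert gets this value plus the increment
SELECT IDENT_CURRENT('dbo.MyTable') AS last_identity_value;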
I've created a simple table in Oracle 12c like so:
CREATE TABLE TEST
(
ID NUMBER GENERATED ALWAYS AS IDENTITY,
TEXT VARCHAR2(2000 CHAR),
CONSTRAINT ID_PK PRIMARY KEY (ID)
);
Then I linked it in MS Access using the ODBC driver. The problem is that when I input a value into TEXT and click away, both ID and TEXT show #Deleted. My value gets recorded in the database, but I have to requery in MS Access in order to see it.
I also noticed that if I change the datatype of the TEXT field to NUMBER, it works fine. After saving the record in MS Access, both the auto-generated ID and the value in the TEXT field are there. I don't have to requery anything.
This happens only when inserting. Updating works just fine.
Please advise.
So, it would appear you have already found the solution, but this is more of an explanation as to why it works that way. Simply speaking, if the base table uses non-integer numeric values as primary keys, Access rounds them to the nearest whole number and then (since the rounded value no longer matches what is stored) can no longer find the applicable records. So, changing the data type from TEXT to INTEGER in the table structure would give you your desired result.
Alternatively, if you're accessing the data through a query and you cannot change the keys in the Oracle table, then changing the Access query type to a snapshot (in the query properties) will also bypass this problem. But from the sounds of it, this is not how you are using the data.
In my case, the Oracle ODBC driver (the rather old version 11.02.00.01, which otherwise works fine with Microsoft Access 2016 32-bit) seems to use the unique indexes, not the primary key constraint, to determine the primary key.
I had a NUMBER(11) field as the PK with a unique index, then added a VARCHAR2 field with another unique index. The name of the index on the VARCHAR2 field was alphabetically before that of the NUMBER field.
As a result, the linked table in Microsoft Access showed the VARCHAR2 field as the primary key, and I had the problem with '#Deleted' appearing after entering and saving a record, as you describe.
After renaming the unique index on the NUMBER field in Oracle so that it sorted alphabetically before that of the VARCHAR2 field, and re-linking the table in Microsoft Access, the NUMBER field was the primary key again in Microsoft Access and the '#Deleted' problem was solved.
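For reference, the rename itself is a single DDL statement (the index names below are made up for illustration):

-- make the index on the NUMBER column sort alphabetically before the VARCHAR2 one
ALTER INDEX id_number_uq RENAME TO a_id_number_uq;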
A Sequence Generator transformation is used to generate the primary key column; it generates the key in sequential order for the incoming records and, at the end of each session, stores the maximum key value used in the Informatica repository. The issue is likely to occur when two or more workflows start running with the same key value stored in the repository and the workflow with the larger number of records completes first. In that case, the maximum value written by the first workflow gets overwritten with a smaller value (the maximum value of the second workflow) when the workflow with fewer records completes. Because of this, in the next ODS run the first workflow gets a sequence value that was already used in the previous run, which causes the unique constraint violation.
I need to update some tables in my application from warehouse tables that are updated weekly or biweekly, so I should update my tables based on those. My tables are referenced by foreign keys in other tables, so I cannot just truncate them and reinsert the whole data set every time. I have to take the delta and update accordingly, based on a few primary key columns that don't change. I need some input on how to implement this approach.
My approach:
Check the last updated time of those tables/views.
If it is more recent, compare each row based on the primary key in my table and the warehouse table.
Update each column if it is different.
Do nothing if there is no change in the columns.
Insert if there is a new record.
My Question:
How do I implement this? Is writing PL/SQL code a good and efficient way, given that the expected number of records is around 800K?
Please provide any sample code or links.
I would go for PL/SQL with the BULK COLLECT and FORALL method. You can use MINUS in your cursor in order to reduce the data size and calculate the difference.
You can check this site for more information about bulk collect, forall and engines: http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52plsql-1709862.html
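As a rough sketch of that idea (all object names here, such as warehouse_tab, target_tab, pk_col and val_col, are placeholders rather than your real schema):

DECLARE
  TYPE t_pk_tab  IS TABLE OF warehouse_tab.pk_col%TYPE;
  TYPE t_val_tab IS TABLE OF warehouse_tab.val_col%TYPE;
  l_pks  t_pk_tab;
  l_vals t_val_tab;

  -- MINUS keeps only rows that are new or changed compared to the target
  CURSOR c_delta IS
    SELECT pk_col, val_col FROM warehouse_tab
    MINUS
    SELECT pk_col, val_col FROM target_tab;
BEGIN
  OPEN c_delta;
  LOOP
    FETCH c_delta BULK COLLECT INTO l_pks, l_vals LIMIT 10000;  -- work in batches
    EXIT WHEN l_pks.COUNT = 0;

    FORALL i IN 1 .. l_pks.COUNT
      MERGE INTO target_tab t
      USING (SELECT l_pks(i) AS pk_col, l_vals(i) AS val_col FROM dual) s
      ON (t.pk_col = s.pk_col)
      WHEN MATCHED THEN UPDATE SET t.val_col = s.val_col
      WHEN NOT MATCHED THEN INSERT (pk_col, val_col) VALUES (s.pk_col, s.val_col);

    COMMIT;  -- commit per batch; adjust to your transaction requirements
  END LOOP;
  CLOSE c_delta;
END;
/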
There are many parts to your question above and I will answer as best I can:
While it is possible to disable the referencing foreign keys, truncate the table, repopulate it with the updated data and then re-enable the foreign keys, given the requirements you describe above I don't believe truncating the table each time would be optimal.
Yes, in principle PL/SQL is a good way to achieve what you are wanting to achieve, as this is too complex to deal with in native SQL and PL/SQL is an efficient alternative.
Conceptually, the approach I would take is something like the following:
Initial set up (a SQL sketch follows this list):
Create a sequence called activity_seq.
Add an "activity_id" column of type NUMBER, with a unique constraint, to your source table/s.
Add a trigger to the source table/s setting activity_id = activity_seq.nextval for each insert/update of a table row.
Create some kind of master table to hold the "last processed activity id" value.
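A minimal sketch of that set-up, assuming a source table called src_tab (all names below are illustrative only):

CREATE SEQUENCE activity_seq;

ALTER TABLE src_tab ADD (activity_id NUMBER);
ALTER TABLE src_tab ADD CONSTRAINT src_tab_activity_uq UNIQUE (activity_id);

CREATE OR REPLACE TRIGGER src_tab_activity_trg
BEFORE INSERT OR UPDATE ON src_tab
FOR EACH ROW
BEGIN
  -- stamp every inserted/updated row (on pre-11g, use SELECT activity_seq.NEXTVAL INTO :NEW.activity_id FROM dual)
  :NEW.activity_id := activity_seq.NEXTVAL;
END;
/

-- single-row master table holding the "last processed activity id"
CREATE TABLE etl_control (last_processed_activity_id NUMBER NOT NULL);
INSERT INTO etl_control VALUES (0);
COMMIT;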
Then bi/weekly (a rough sketch in PL/SQL follows below):
1. Retrieve the value of "last processed activity id" from the master table.
2. Select all rows in the source table/s having an activity_id value greater than the "last processed activity id" value.
3. Iterate through the selected source rows and update the target if a match is found based on whatever your match criterion is, or insert a new row into the target if no match is found (I assume there is no delete, as you do not mention it).
4. On completion, update the master table's "last processed activity id" to the greatest activity_id value of the source rows processed in step 3 above.
(please note that, depending on your environment and the number of rows processed, the above process may need to be split and repeated over a number of transactions)
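And a corresponding sketch of the bi/weekly run, reusing the hypothetical names src_tab, target_tab and etl_control from above (pk_col and val_col again stand in for your real match and payload columns):

DECLARE
  l_last_id NUMBER;
  l_max_id  NUMBER;
BEGIN
  -- step 1: read the watermark
  SELECT last_processed_activity_id INTO l_last_id FROM etl_control;

  -- remember the highest activity_id we are about to process (step 4 needs it)
  SELECT NVL(MAX(activity_id), l_last_id) INTO l_max_id
  FROM src_tab
  WHERE activity_id > l_last_id;

  -- steps 2 and 3: pick up only rows changed since the last run and merge them
  MERGE INTO target_tab t
  USING (SELECT pk_col, val_col
         FROM src_tab
         WHERE activity_id > l_last_id
           AND activity_id <= l_max_id) s
  ON (t.pk_col = s.pk_col)
  WHEN MATCHED THEN UPDATE SET t.val_col = s.val_col
  WHEN NOT MATCHED THEN INSERT (pk_col, val_col) VALUES (s.pk_col, s.val_col);

  -- step 4: advance the watermark
  UPDATE etl_control SET last_processed_activity_id = l_max_id;
  COMMIT;
END;
/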
I hope this proves helpful
I am new to HBase.
Is it possible to, or how can I, auto-increment the row key in HBase (i.e., for each insert the row key has to auto-increment itself)?
Or is it possible to auto-increment any other column (i.e., for each insert this column has to be incremented by 1)?
Monotonically increasing row keys are not recommended in HBase; see this for reference: http://hbase.apache.org/book/rowkey.design.html, section 6.3.2. In fact, using globally ordered row keys would cause all instances of your distributed application to write to the same region, which will become a bottleneck.
If you can avoid using auto-increment IDs and just need unique IDs in a distributed system, you can use something like "hostname" + "PID" + "TIMESTAMP" as a key. This way it would be unique for each row.
If you are sure you need a global auto-increment in a table (it can be the key or some value in a column), you can use the incrementColumnValue call: keep a separate row in your table (or create a dedicated table for this) that stores the actual counter value, and have the process call incrementColumnValue to get the next value before inserting a new row. But this way you cannot guarantee that there will be no gaps: if the client fails after calling incrementColumnValue, the counter gets incremented but the row won't be inserted.
In short, all of the proposed solutions are client-side; there is no server-side implementation of this feature in HBase.
I'm trying to create a new row in a table. There are two constraints on the table: one is on the key field (DB_ID), the other constrains the field ENV to be one of several allowed values. When I do an insert, I do not include the key field as one of the fields I'm trying to insert, yet I'm getting this error:
unique constraint (N390.PK_DB_ID) violated
Here's the SQL that causes the error:
insert into cmdb_db
(narrative_name, db_name, db_type, schema, node, env, server_id, state, path)
values
('Test Database', 'DB', 'TYPE', 'SCH', '', 'SB01', 381, 'TEST', '')
The only thing I've been able to turn up is the possibility that Oracle might be trying to assign an already in-use DB_ID if rows were inserted manually. The data in this database was somehow restored/moved from a production database, but I don't have the details as to how that was done.
Any thoughts?
Presumably, since you're not providing a value for the DB_ID column, that value is being populated by a row-level before insert trigger defined on the table. That trigger, presumably, is selecting the value from a sequence.
Since the data was moved (presumably recently) from the production database, my wager would be that when the data was copied, the sequence was not modified as well. I would guess that the sequence is generating values that are much lower than the largest DB_ID that is currently in the table leading to the error.
You could confirm this suspicion by looking at the trigger to determine which sequence is being used and doing a
SELECT <<sequence name>>.nextval
FROM dual
and comparing that to
SELECT MAX(db_id)
FROM cmdb_db
If, as I suspect, the sequence is generating values that already exist in the database, you could increment the sequence until it was generating unused values or you could alter it to set the INCREMENT to something very large, get the nextval once, and set the INCREMENT back to 1.
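For illustration, the "large increment" trick looks roughly like this (the sequence name db_id_seq is assumed here, since the real name is whatever the trigger references):

-- jump the sequence well past the existing keys, then restore the normal step
ALTER SEQUENCE db_id_seq INCREMENT BY 1000;
SELECT db_id_seq.NEXTVAL FROM dual;
ALTER SEQUENCE db_id_seq INCREMENT BY 1;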
Your error looks like you are duplicating an already existing primary key in your DB. You should modify your SQL code so that the table generates its own primary key, by using something like the IDENTITY keyword.
CREATE TABLE [DB] (
[DBId] bigint NOT NULL IDENTITY,
...
CONSTRAINT [DB_PK] PRIMARY KEY ([DBId] ASC)
);
It looks like you are not providing a value for the primary key field DB_ID. If that is a primary key, you must provide a unique value for that column. The only way not to provide it would be to create a database trigger that, on insert, would provide a value, most likely derived from a sequence.
If this is a restoration from another database and there is a sequence on this new instance, it might be trying to reuse a value. If the old data had unique keys from 1 - 1000 and your current sequence is at 500, it would be generating values that already exist. If a sequence does exist for this table and it is trying to use it, you would need to reconcile the values in your table with the current value of the sequence.
You can use SEQUENCE_NAME.CURRVAL to see the current value of the sequence (if it exists, of course). Note that CURRVAL is only defined in a session after NEXTVAL has been referenced at least once.
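If you'd rather not consume a sequence value just to inspect it, you can also compare the data dictionary view against the table (again assuming the sequence is called DB_ID_SEQ):

-- where the sequence currently stands vs. the highest key already in the table
SELECT last_number FROM user_sequences WHERE sequence_name = 'DB_ID_SEQ';
SELECT MAX(db_id) FROM cmdb_db;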