Generate seed data in Visual Studio 2010 Database project

I have a database and a database project in Visual Studio 2010. I imported the schema into the database project successfully, but I also need some way to import the data in a few tables (Country, State, UserType, etc.) that are reference tables rather than data tables.
Is there a way?
The only way I found so far is generating data scripts in SQL Server Management Studio and placing this script in the post-deployment script file in the database project.
Any simpler way?

Try the Static Data Script Generator for SQL Server. It automates those post-deployment scripts for you in the correct format. It is a free Google Code hosted project, and we have found it useful for scripting our static data (it also scripts over the top of existing data for updates).

This is pretty much how I've done it before.
I would just make sure that each statement in your script looks something like:
IF EXISTS (SELECT * FROM Country WHERE CountryId = 1)
    UPDATE Country SET Name = 'UK' WHERE CountryId = 1 AND Name != 'UK'
ELSE
    INSERT INTO Country (CountryId, Name) VALUES (1, 'UK')
This means that each time you deploy the database your core reference data will be inserted or updated, whichever is most appropriate, and you can modify these scripts to create new reference data for newer versions of the database.
You could use a T4 template to generate these scripts - I've done something similar in the past.
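For illustration, here is a minimal sketch of what a post-deployment script built on this pattern might look like. The Country table and its CountryId/Name columns come from the question; the rows themselves are made up:

PRINT 'Seeding reference data for Country';

IF EXISTS (SELECT * FROM Country WHERE CountryId = 1)
    UPDATE Country SET Name = 'UK' WHERE CountryId = 1 AND Name != 'UK'
ELSE
    INSERT INTO Country (CountryId, Name) VALUES (1, 'UK')

IF EXISTS (SELECT * FROM Country WHERE CountryId = 2)
    UPDATE Country SET Name = 'France' WHERE CountryId = 2 AND Name != 'France'
ELSE
    INSERT INTO Country (CountryId, Name) VALUES (2, 'France')

Because every row is guarded this way, the script stays idempotent and can run on every deployment.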

You can also use a MERGE statement to update/delete/insert seed data into the table in your post-deployment script. I have tried it and it works for me; here is a simple example:
PRINT 'Inserting seed data for seedingTable';
MERGE INTO seedingTable AS Target
USING (VALUES
        (1, N'Pakistan',    N'Babar Azam',      N'Asia',      N'1'),
        (2, N'England',     N'Nasser Hussain',  N'Wales',     N'2'),
        (3, N'New Zealand', N'Stephen Fleming', N'Australia', N'4'),
        (4, N'India',       N'Virat Kohli',     N'Asia',      N'3'),
        (5, N'Bangladesh',  N'Saeed',           N'Asia',      N'8'),
        (6, N'Sri Lanka',   N'Sangakkara',      N'Asia',      N'7')
      ) AS Source (Id, Cric_name, Captain, Region, [T20-Rank])
ON Target.Id = Source.Id
-- update matched rows
WHEN MATCHED THEN
    UPDATE SET Cric_name = Source.Cric_name,
               Captain = Source.Captain,
               Region = Source.Region,
               [T20-Rank] = Source.[T20-Rank]
-- insert new rows
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Cric_name, Captain, Region, [T20-Rank])
    VALUES (Id, Cric_name, Captain, Region, [T20-Rank])
-- delete rows that are in the target but not the source
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

I am using a post-deployment script and it executes perfectly, creating the new table and seeding data into it; the script is below. After that, I want to add a new column and insert data into it. How can I do that?
INSERT INTO seedingTable (Id, Cric_name, Captain, Region, [T20-Rank])
SELECT 1, N'Pakistan', N'Babar Azam', N'Asia', N'1'
WHERE NOT EXISTS
    (SELECT 1 FROM dbo.seedingTable WHERE Id = 1)
GO
INSERT INTO seedingTable (Id, Cric_name, Captain, Region, [T20-Rank])
SELECT 2, N'England', N'Nasser Hussain', N'Wales', N'3'
WHERE NOT EXISTS
    (SELECT 1 FROM dbo.seedingTable WHERE Id = 2)
Also, will the above script run every time the database is deployed through an Azure pipeline? And how do I update existing data?
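A hedged note on that last question: the post-deployment script does run on every deployment, and the WHERE NOT EXISTS guards make the inserts safe to repeat, but they never change rows that already exist. One way to also update existing rows, and to seed a newly added column, is the MERGE pattern from the earlier answer; the NewColumn name below is hypothetical:

MERGE INTO seedingTable AS Target
USING (VALUES (1, N'Pakistan', N'Babar Azam', N'Asia', N'1', N'new value'))
      AS Source (Id, Cric_name, Captain, Region, [T20-Rank], NewColumn)
ON Target.Id = Source.Id
WHEN MATCHED THEN
    UPDATE SET NewColumn = Source.NewColumn
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Cric_name, Captain, Region, [T20-Rank], NewColumn)
    VALUES (Id, Cric_name, Captain, Region, [T20-Rank], NewColumn);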

Related

Combine MERGE and INSERT ALL statements

I have an issue that could be solved by implementing full auditing, either with triggers or Flashback Data Archive, but that's much more than is required.
So right now we are performing a MERGE which updates a row when it's present or inserts one when it's not. It works well and was easy to write. We now have a new requirement: users must know which rows have been updated or inserted. Yes, this could be accomplished by introducing another field into the table, but that's not allowed because it would change the table. So we are forced to create one or two tables which will identify which rows are updated or inserted via the PK.
What I am hoping to do is take the existing MERGE statement and add the ability to insert into the secondary table, but I have not been able to find any merge statements which work that way, and INSERT ALL lacks the more complicated conditionals of the merge.
Here is the structure of the MERGE statement that's currently being used.
MERGE INTO EXISTING_TABLE ET USING TMP_TABLE TMP ON (ET.ID = TMP.ID)
WHEN MATCHED THEN UPDATE SET
ET.TITLE_EN = TMP.TITLE_EN,
ET.TITLE_FR = TMP.TITLE_FR
WHEN NOT MATCHED THEN INSERT (ID, TITLE_EN, TITLE_FR)
VALUES (TMP.ID, TMP.TITLE_EN, TMP.TITLE_FR);
Below is the way I'm hoping to accomplish the MERGE INSERT ALL.
MERGE INTO EXISTING_TABLE ET USING TMP_TABLE TMP ON (ET.ID = TMP.ID)
WHEN MATCHED THEN UPDATE
SET
ET.TITLE_EN = TMP.TITLE_EN,
ET.TITLE_FR = TMP.TITLE_FR,
INSERT INTO NEW_TABLE (ID, TYPE) VALUES (TMP.ID, 'U')
WHEN NOT MATCHED THEN INSERT ALL
INTO EXISTING_TABLE (ID, TITLE_EN, TITLE_FR) VALUES (TMP.ID, TMP.TITLE_EN, TMP.TITLE_FR),
INTO NEW_TABLE (ID, TYPE) VALUES (TMP.ID, 'I');
The only other way I can see to reasonably accomplish this would be with a PL/SQL block that works row by row, and that would be slower.
At our site, we use AFTER triggers to maintain audit data. They have the advantage of tracking changes whether they come from the program or from someone fat-fingering an update statement.
That might do the job for you.
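For illustration, a minimal sketch of such a trigger, assuming the NEW_TABLE (ID, TYPE) structure from the question:

CREATE OR REPLACE TRIGGER existing_table_audit
AFTER INSERT OR UPDATE ON existing_table
FOR EACH ROW
BEGIN
  -- record the PK and whether the row was inserted or updated
  IF INSERTING THEN
    INSERT INTO new_table (id, type) VALUES (:NEW.id, 'I');
  ELSE
    INSERT INTO new_table (id, type) VALUES (:NEW.id, 'U');
  END IF;
END;
/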

Oracle Merge statement with conditional insert

I use following statement to insert or update current document versions our customers have signed:
MERGE INTO schema.table ccv
USING (select :customer_id as customer_id, :doc_type as doc_type, :version as Version FROM DUAL) n
ON (ccv.customer_id = n.customer_id and ccv.doc_type = n.doc_type)
WHEN MATCHED
THEN UPDATE
set ccv.version = n.version
DELETE WHERE ccv.version is null
WHEN NOT MATCHED
THEN INSERT
( customer_id, doc_type, version)
VALUES
(:customer_id,:doc_type,:version)
Basically I want to avoid inserting on the same condition when I am deleting with DELETE WHERE statement.
The thing is, there are three different document types (doc_type), but only one or two might be signed simultaneously.
If a client has signed a document, I want to store its version; if not, I do not want a record for that document in the database.
So, when the new :version is null I delete the existing row. That works great.
The problem is, when there were no documents of that customer stored, then oracle actually inserts a record with version = NULL.
How can I avoid inserting records when :version is null?
Is splitting the merge to separate delete, update and insert statement the only way to do that?
If you're using 10g, you may use conditions for the insert as well:
http://www.oracle-developer.net/display.php?id=310
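For illustration, a sketch of how the statement from the question might look with that conditional insert (Oracle 10g and later accept a WHERE clause on the insert branch):

MERGE INTO schema.table ccv
USING (select :customer_id as customer_id, :doc_type as doc_type, :version as version FROM DUAL) n
ON (ccv.customer_id = n.customer_id and ccv.doc_type = n.doc_type)
WHEN MATCHED
THEN UPDATE
set ccv.version = n.version
DELETE WHERE ccv.version is null
WHEN NOT MATCHED
THEN INSERT
( customer_id, doc_type, version)
VALUES
(:customer_id, :doc_type, :version)
WHERE n.version is not null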

Oracle script client that exports query result as UPDATE statement

I use Quest TOAD for Oracle and Quest SQL Navigator for Oracle as my database
query tools of choice.
These tools allow me to export the query results grid as INSERT statements.
For example
SELECT dummy
FROM dual;
exports to
INSERT INTO dual
(DUMMY)
VALUES
('X')
/
Is there an Oracle database query tool that exports query results as UPDATE statements?
For example
SELECT dummy
FROM dual;
would export to
UPDATE dual
SET dummy = 'X'
/
Try this one: http://www.sql-workbench.net
I don't know if there are other tools (wasn't able to find one, to be precise).
What I used to do for this was export the data as CSV, then hack together a quick awk script to generate the desired UPDATEs.
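Alternatively (a different route than awk, sketched here for illustration), the UPDATE statements can be generated in plain SQL by concatenating strings. For the dual example above:

SELECT 'UPDATE dual SET dummy = ''' || dummy || ''';' AS update_stmt
FROM dual;
-- produces: UPDATE dual SET dummy = 'X';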
You can export the INSERT statements first and load them into a backup table, then update the target table from the backup table by joining on the PK.
UPDATE (SELECT tr.id,
               tr.name old_name,
               tr.descr old_descr,
               bk.name new_name,
               bk.descr new_descr
        FROM target tr,
             backup bk
        WHERE tr.id = bk.id)
SET old_name = new_name,
    old_descr = new_descr

select * through dblink

I ran into trouble when trying to populate a table by looping over a cursor that selects from a source table through a database link.
I have two databases, DB1 and DB2.
They are two different database instances.
And I am using the following statement in DB1:
DECLARE
  CURSOR TestCursor IS
    SELECT a.*, 'A' TEST_COL_A, 'B' TEST_COL_B
    FROM rpt.SOURCE@DB2 a;
BEGIN
  FOR C1 IN TestCursor LOOP
    INSERT INTO RPT.TARGET
    (
      /* COMPANY_NAME and CUST_ID come from the SOURCE table in DB2 */
      COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B
    )
    VALUES
    (
      C1.COMPANY_NAME, C1.CUST_ID, C1.TEST_COL_A, C1.TEST_COL_B
    );
  END LOOP;
  /* Some code... */
END;
Everything worked fine until I added a column NEW_COL to the SOURCE table in DB2.
The inserted data now gets the wrong values.
The value of TEST_COL_A should, as I expect, be 'A'.
However, it contains the value of NEW_COL, the column I added to the SOURCE table.
And the value of TEST_COL_B contains 'A'.
Has anyone encountered the same issue?
It seems like Oracle caches the table's columns when it compiles.
Is there any way to add a column to the source table without recompiling?
According to this:
Oracle Database does not manage dependencies among remote schema objects other than local-procedure-to-remote-procedure dependencies.
For example, assume that a local view is created and defined by a query that references a remote table. Also assume that a local procedure includes a SQL statement that references the same remote table. Later, the definition of the table is altered.
Therefore, the local view and procedure are never invalidated, even if the view or procedure is used after the table is altered, and even if the view or procedure now returns errors when used. In this case, the view or procedure must be altered manually so that errors are not returned. In such cases, lack of dependency management is preferable to unnecessary recompilations of dependent objects.
In this case you aren't quite seeing errors, but the cause is the same. You also wouldn't have a problem if you used explicit column names instead of *, which is usually safer anyway. If you're using * you can't avoid recompiling (unless, I suppose, the * is the last item in the select list, in which case any extra columns on the end wouldn't cause a problem - as long as their names didn't clash).
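For example, a sketch of the cursor from the question with explicit columns (assuming COMPANY_NAME and CUST_ID are the only columns the insert needs):

CURSOR TestCursor IS
  SELECT a.COMPANY_NAME, a.CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
  FROM rpt.SOURCE@DB2 a;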
I recommend that you use a single set-processing INSERT statement in DB1 rather than a row-at-a-time cursor FOR loop, for example:
INSERT INTO RPT.TARGET (COMPANY_NAME, CUST_ID, TEST_COL_A, TEST_COL_B)
SELECT COMPANY_NAME, CUST_ID, 'A' TEST_COL_A, 'B' TEST_COL_B
FROM rpt.SOURCE@DB2;
Rationale:
Set processing will almost always outperform row-at-a-time processing [which is really slow-at-a-time processing].
Set processing the insert is a scalable solution. If the application will need to scale to tens of thousands of rows or millions of rows, the row-at-a-time solution will not likely scale.
Also, using the select * construct is dangerous for the reason you
encountered [and other similar reasons].

Inserting into Oracle the wrong way - how to deal with it?

I've just found the following code:
select max(id) from TABLE_NAME ...
... do some stuff ...
insert into TABLE_NAME (id, ... )
VALUES (max(id) + 1, ...)
I can create a sequence for the PK, but there's a bunch of existing code (classic asp, existing asp.net apps that aren't part of this project) that's not going to use it.
Should I just ignore it, or is there a way to fix it without going into the existing code?
I'm thinking that the best option is just to do:
insert into TABLE_NAME (id, ... )
select max(id) + 1, ... from TABLE_NAME
Options?
You can create a trigger on the table that overwrites the value for ID with a value that you fetch from a sequence.
That way you can still use the other existing code and have no problems with concurrent inserts.
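A minimal sketch of that approach (the sequence and trigger names are illustrative):

CREATE SEQUENCE table_name_seq;

CREATE OR REPLACE TRIGGER table_name_id_trg
BEFORE INSERT ON table_name
FOR EACH ROW
BEGIN
  -- ignore whatever ID the application supplied and take the next sequence value
  SELECT table_name_seq.NEXTVAL INTO :NEW.id FROM dual;
END;
/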
If you cannot change the other software and it still does the select max(id)+1 insert, that is most unfortunate. What you can do then is:
For your own inserts, use a sequence and populate the ID field with -1 * (sequence value).
This way your inserts will not interfere or conflict with the IDs generated by the existing programs.
(Or do the insert without a value for id and use a trigger to populate the ID with the negative of a sequence value.)
As others have said, you can override the max value in a database trigger using a sequence. However, that could cause problems if any of the application code uses that value like this:
select max(id) from TABLE_NAME ...
... do some stuff ...
insert into TABLE_NAME (id, ... )
VALUES (max(id) + 1, ...)
insert into CHILD_TABLE (parent_id, ...)
VALUES (max(id) + 1, ...)
Use a sequence in a before-insert row trigger. select max(id) + 1 doesn't work in a concurrent environment.
This quickly turns in to a discussion of application architecture, especially when the question boils down to "what should I do?"
Primary keys in Oracle really need to come from sequences and since you're dealing with complex insert logic (parent/child inserts, at least) in your application code, you should go into the existing code, as you say (since triggers probably won't help you).
On one extreme you could take away direct SQL access from applications and make them call services so the insert/update/delete code can be centralized. Or you could rewrite your code using some sort of MVC architecture. I'm assuming both are overkill for your situation.
Is the id column at least set to be a true primary key so there's a constraint that will keep duplicates from occurring? If not, start there.
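For illustration (table and constraint names assumed), that constraint is a one-liner:

ALTER TABLE table_name ADD CONSTRAINT table_name_pk PRIMARY KEY (id);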
Once the primary key is in place, or if it already is, it's only a matter of time until inserts start to fail; you'll know when they start to fail, right? If not, get on the error-logging.
Now fix the application code. While you're in there, you should at least write and call helper code so your database interactions are in as few places as possible. Then provide some leadership to the other developers and make sure they use the helper code too.
Big question: does anybody rely on the value of the PK? If not, I would recommend using a trigger that fetches the id from a sequence and sets it. The inserts wouldn't specify an id at all.
I am not sure, but the
insert into TABLE_NAME (id, ... )
select max(id) + 1, ... from TABLE_NAME
might cause problems when two sessions reach that code at the same time. It might be that Oracle reads the table (calculating max(id)) and then tries to get the lock on the PK for insertion. If that is the case, two concurrent sessions might try to use the same id, causing an exception in the second session.
You could add some logging to the trigger to check whether inserts arrive that already have an ID set. That way you know whether you still have to hunt down some place where the old code is used.
It can be done by fetching the max value into a variable and then using it in the insert:
DECLARE
  v_max INT;
BEGIN
  SELECT MAX(id) INTO v_max FROM table_name;
  INSERT INTO table_name
    SELECT v_max + ROWNUM, val1, val2, ...., valn
    FROM dual;
  COMMIT;
END;
Because ROWNUM increments for every selected row, the same v_max + ROWNUM expression numbers the rows of a bulk INSERT ... SELECT just as it does a single-row insert.
