I have a Visual Studio Database Project; there seems to be little, and rather sketchy, documentation on this type of project.
The issue: I want to rename a column.
Problem: The table I want to rename the column on has data in it, so every time I generate a script I end up with this piece of code, which causes the script to bomb out because there is data in the table:
IF EXISTS (select top 1 1 from [dbo].[res_file_submission])
RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
I have no idea how to get around this, and I really don't believe deleting this line is the answer. I have deselected the 'Block incremental deployment if data loss might occur' option, but again this seems to make no difference.
UPDATE: The column has a constraint, which seems to be the cause.
You can simply rename the column in a post-deployment script using sp_rename. This lets you change the column name without harming the data in the table.
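A minimal sketch of that post-deployment rename (the old and new column names below are placeholders, not from the question):

-- Placeholder column names; adjust to your actual old/new names
EXEC sp_rename '[dbo].[res_file_submission].[OldColumnName]', 'NewColumnName', 'COLUMN';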
You can handle this via pre- and post-deployment scripts.
Create a pre-deployment script to back up the table and delete its data:
if (OBJECT_ID('TempDB..#MyTableBackup') is null)
begin
-- backup data to a temp table
SELECT *
INTO #MyTableBackup
FROM MyTable
-- TODO: If you have foreign key constraints that reference MyTable, you'll need to disable them here.
-- delete the data in your table
DELETE MyTable
end
Create a post-deployment script that restores the data:
-- TODO: Only include the SET IDENTITY_INSERT lines if your table has an identity column
--SET IDENTITY_INSERT MyTable ON
INSERT MyTable
SELECT *
FROM #MyTableBackup
--SET IDENTITY_INSERT MyTable OFF
-- TODO: If you disabled foreign key constraints in the pre-deployment script, enable them here.
DROP TABLE #MyTableBackup
Since the pre-deployment script empties your table, the column rename will occur during the regular part of the deployment without getting the "Block incremental deployment..." warning.
Be sure to remove these scripts from the project after the deployment succeeds so that they are not rerun during your next deployment.
The issue was not my inability to rename the column, but that I needed to rename the column and then deploy the rename, and the compare script kept blocking it.
THE SOLUTION: Rename the column in the database project, including all references to it, then add the same rename to the pre-deployment script for my target. Run that pre-deployment script against the database to update it, then run the compare, which will no longer include the column rename because the target and the source now have the same name.
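A minimal sketch of what that pre-deployment rename might look like, guarded so it is safe to re-run (OldColumnName/NewColumnName are placeholders):

-- Only rename while the old column still exists, so the script can run repeatedly
IF COL_LENGTH('dbo.res_file_submission', 'OldColumnName') IS NOT NULL
BEGIN
    EXEC sp_rename '[dbo].[res_file_submission].[OldColumnName]', 'NewColumnName', 'COLUMN';
END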
I have created a table that has a foreign key constraint on spring-session-jdbc's spring_session table. The main motivation is that when spring-session deletes the rows, the delete cascades and removes the entries associated with the actual session. It became an "only works on my machine" problem because only I already had the table in place when I started the development server. For everyone else it only works if they comment out the table first, initialize the server, then revert and do it again. Otherwise: nested exception is java.sql.SQLException: Failed to open the referenced table 'spring_session'.
I think the solution is to specify the run order of (or dependencies between) the initialization SQL files, but I cannot find such a setting after some searching, so I am here.
schema.sql:
drop table if exists foo;
create table if not exists foo (
sid char(36) not null,
foreign key (sid) references spring_session (session_id) on delete cascade,
-- other columns and constraints
);
Possible workarounds:
Workaround #1: put an alter table add constraint statement in data.sql (see the sketch after this list).
Workaround #2: grab spring-session-jdbc's schema.sql and put it into my schema.sql, then set spring.session.jdbc.initialize-schema=never in application.properties.
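A minimal sketch of the workaround #1 statement for data.sql, assuming the inline foreign key line is removed from schema.sql and using a hypothetical constraint name fk_foo_session:

alter table foo
    add constraint fk_foo_session
    foreign key (sid) references spring_session (session_id) on delete cascade;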
You can try Flyway; it manages your init SQL files by giving each one a version number. It also records which scripts have already been executed, so if you add another SQL file it will execute only the new one and skip those that have already run.
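For example, a minimal sketch of a versioned migration under Flyway's default src/main/resources/db/migration location (the version number and file name are illustrative; it assumes the spring_session tables are created in an earlier migration, e.g. by copying spring-session-jdbc's schema there and setting spring.session.jdbc.initialize-schema=never):

-- src/main/resources/db/migration/V2__create_foo.sql
create table if not exists foo (
    sid char(36) not null,
    -- other columns and constraints
    foreign key (sid) references spring_session (session_id) on delete cascade
);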
I want to create a script for a table that should include:
Create Table statement
Data in the table
Sequence in the table (only the sequence code)
And Trigger associated to it
I have added a sequence and a trigger for an auto-increment ID. I searched but I couldn't find enough answers about using the sequence in the trigger.
I understand you, partially.
In order to get the CREATE TABLE statement, choose that table and on the right-hand side of the screen navigate to the "Script" tab; there it is. Apart from CREATE TABLE, it contains some more statements (such as ALTER TABLE to add constraints, CREATE INDEX, and your number 4, CREATE TRIGGER).
As for the sequence: it is a separate object, not related to any particular table. One sequence can be used to provide unique numbers for many tables, so I'm not sure what it is you are looking for.
In order to get data from that table, right-click the table name and in the menu choose "Export data" >> "Insert statements". That'll create a bunch of INSERT INTO commands. That's OK if the table is small; for large ones, you'll grow old before it finishes.
The last sentence leads to another suggestion: why would you want to do it that way? A proper option is to export that table, using either Data Pump or the Original EXP utility.
[EDIT]
After you insert the data "as is" (i.e. no changes to the ID column values), disable the trigger and run an additional update. Supposing the sequence name is MY_SEQ (create it the way you want it, specifying its start value etc.), it would be as simple as
update your_table set id = my_seq.nextval;
Once it is done, enable the trigger so that it fires for newly added rows.
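Put together, a minimal sketch of that sequence of steps (your_table, your_table_trg and MY_SEQ are placeholder names):

-- create the sequence with whatever start value you need
create sequence my_seq start with 1 increment by 1;

-- disable the auto-increment trigger so it doesn't interfere with the update
alter trigger your_table_trg disable;

update your_table set id = my_seq.nextval;
commit;

-- re-enable the trigger so it fires for newly added rows
alter trigger your_table_trg enable;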
Let's say I created a table test_table in development just to test a trigger, this trigger would then be reused in many other tables (future and existing).
So I code the trigger, test it, all good! But at the moment, if I want to replicate it, I will have to copy it from test_table's triggers and edit it.
So if someone deletes the table accidentally, the trigger is gone, and I don't have it saved anywhere else. And if I just want to delete random test tables in our database, I can't.
What's a recommended way to save a trigger as a "template" in Oracle, so I can reuse it on other tables and have it not be dependent on a random test table, or any table?
There are a lot of ways you can keep a copy of your TRIGGER SQLText.
Here's a few examples.
In Version Control:
You can use any of the many version control tools to maintain a versioned history for any code you like, including SQL, PL/SQL, etc. You can rewind time, view differences over time, track changes to the template, even allow concurrent development.
As a Function:
If you want the template to live in the database, you can create a FUNCTION (or PACKAGE) that takes the target USER and TABLE as parameters and replaces the USER and TABLE values in its template to generate the SQL text required to create or replace the template TRIGGER on the target TABLE. You can make it EDITIONABLE as needed.
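A minimal sketch of that approach, using a hypothetical template that stamps a LAST_MODIFIED column (all names here are illustrative):

create or replace function build_audit_trigger_sql (
    p_owner in varchar2,
    p_table in varchar2
) return clob
is
    -- trigger "template" with placeholders for the owner and table
    l_template clob := q'[
create or replace trigger #OWNER#.trg_#TABLE#_audit
before insert or update on #OWNER#.#TABLE#
for each row
begin
    :new.last_modified := systimestamp;
end;]';
begin
    return replace(replace(l_template, '#OWNER#', p_owner), '#TABLE#', p_table);
end build_audit_trigger_sql;
/

You could then pass the generated text to EXECUTE IMMEDIATE, or just copy it, to create the trigger on the target table.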
In a Table:
You can always just create a TABLE that holds the template TRIGGER SQL text as a CLOB or VARCHAR2. It would need to live somewhere it isn't likely to be "randomly" deleted, though. You can AUDIT changes to the TABLE's data to see the template change over time; Oracle has tons of auditing options.
In the logs:
You can just log (all) DDL. If you set ENABLE_DDL_LOGGING, the DDL log (XML) will have a copy of every DDL statement, categorized, along with when and where it came from.
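For example (ENABLE_DDL_LOGGING is an instance parameter, so this needs ALTER SYSTEM privileges):

alter system set enable_ddl_logging = true;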
I have a workflow which loads data from a flat file to a stage table after a few basic checks on a few columns. In the mapping, each time a check fails (meaning the column has an invalid value), I make an entry in an ErrorFlatFile with an error text.
Now , I have two targets in my mapping. One being the Stage table and the other is the Error Flat File.
What I want to achieve is this: even if there is a single entry in the ErrorFlatFile (indicating there is an error in the source file), I want to truncate the target Stage table.
Can someone please help me with how I can do this at the session level?
Thanks,
You would need one more session. Make a dummy session (one that reads no data) and add a Pre or Post-SQL statement:
TRUNCATE TABLE YourTargetStageTableName
Create a link from your existing session to the dummy one and add the condition like:
$PMTargetName#numAffectedRow > 0
replacing TargetName with the name of your error flat file target. The second session will then only be executed when an entry was made to the error file; if there are no errors, it will not run.
My question is about data-load command.
When I run the data-load command multiple times without rebuilding the tables, the table's auto-increment counter is kept and the first row of the table doesn't begin with ID 1. Is it possible, with the data-load command, to insert the data from fixtures so that the ID of the first row is 1?
No, you can't. In fact, data-load deletes all entries from your database and then loads them again, so the auto-increment counter isn't reset (since it performs a DELETE and not a TRUNCATE).
What you can do is run a TRUNCATE on your table before launching data-load, so the AUTO_INCREMENT value is reset to its start value.
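A minimal sketch, assuming a MySQL table named your_table:

SET FOREIGN_KEY_CHECKS = 0;   -- only needed if other tables reference your_table
TRUNCATE TABLE your_table;    -- resets AUTO_INCREMENT back to its start value
SET FOREIGN_KEY_CHECKS = 1;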
Otherwise, you can use the --and-load option of doctrine:build:
./symfony doctrine:build --all --and-load
It will regenerate all the classes and SQL files, rebuild the database, and then load the data from the project and plugin data/fixtures/ directories.
This way, everything will be fresh and the IDs reset.